
Dual-frequency pattern scheme for high-speed 3-D shape measurement

Open Access

Abstract

A novel dual-frequency pattern is developed that combines a high-frequency sinusoid component with a unit-frequency sinusoid component, where the high-frequency component is used to generate robust phase information and the unit-frequency component is used to reduce phase unwrapping ambiguities. With our proposed pattern scheme, phase unwrapping can overcome the major shortcomings of conventional spatial phase unwrapping: phase jumps and discontinuities. Compared with conventional temporal phase unwrapping approaches, the proposed pattern scheme achieves higher quality phase data using fewer patterns. To process data in real time, we also propose and develop fast and accurate look-up table based algorithms for phase generation and 3-D reconstruction. These fast algorithms can be applied to our pattern scheme as well as to traditional phase measuring profilometry. For a 640×480 video stream, we can generate phase data at 1063.8 frames per second and full 3-D coordinate point clouds at 228.3 frames per second. These rates are 25 and 10 times faster, respectively, than previously reported studies.

©2010 Optical Society of America

1. Introduction

Structured Light Illumination (SLI) [1,2,3] is an active three-dimensional (3-D) shape measurement technique widely used for scientific and industrial purposes [4]. Various algorithms have been proposed to achieve high-accuracy and/or high-speed measurement [5, 6].

Higher frequency patterns have been studied and employed to improve measurement accuracy at the extra computational cost of phase unwrapping. Most phase unwrapping strategies fall into two categories [9]: spatial [10, 5, 11, 12] and temporal [7, 13, 14].

Spatial phase unwrapping approaches assume continuity of the scanned surface and are likely to fail in regions with depth discontinuities (step edges) [9]. Temporal methods, alternatively, use intermediate phase values typically generated by projecting several additional reference patterns and are, therefore, not efficient for real-time operation. Kim et al. [15] proposed an algorithm for high-frequency SLI that avoided direct phase unwrapping using 4N ≥ 8 patterns. Li et al. [16] proposed a two-frequency grating that integrated both a high frequency and a unit frequency into each pattern using 2N ≥ 6 patterns, where the high frequency had to be equal to N. Su and Liu [17] treated the algorithm of Li et al. as one of two bases in their one-shot pattern strategy.

In this paper, we propose a novel dual-frequency pattern scheme that combines, into a single pattern, a high-frequency SLI pattern with a unit-frequency pattern, where the high-frequency component generates wrapped phase that is resilient to sensor noise while the unit-frequency component enables phase unwrapping. Differing from Li et al.'s two-frequency grating [16], the minimum number of patterns in our proposed scheme is 5 for any high frequency. In Li et al.'s method, the minimum number of patterns is 6, and the high-frequency term is fixed and tied to the number of patterns. Our proposed pattern is also defined by a single equation, whereas Li et al. employed two equations.

With regard to high-speed or real-time 3-D measurement, researchers have proposed one-shot methods [2,18,19,20] that reconstruct depth from a single, continuously projected pattern, while others have proposed employing a high-speed camera and projector pair. One-shot SLI techniques, whose pattern strategies are based on spatial neighborhood coding, are attractive for their insensitivity to object motion but suffer from poor resolution and high computational cost [2]. Therefore, researchers have attempted to achieve high resolution with resilience to motion by driving the camera/projector pair at frame rates high enough to minimize the impact of object motion. A study of particular importance has been led by Zhang et al. [8, 21], who employed a new algorithm for phase generation instead of the time-consuming arctangent function [22] and also took advantage of the computational capability of the GPU to generate the absolute 3-D coordinates of the scanned object [6]. For images of size 532×500, their new algorithm was 3.4 times faster than using the arctangent function, while the frame rate of 3-D reconstruction was sped up from 6 fps to 25.56 fps by means of the GPU.

To achieve real-time 3-D video acquisition and reconstruction, this paper also introduces a lossless look-up table (LUT) method for phase, intensity/texture, and depth data that works in combination with the high-speed projector/camera pair widely used in phase measuring profilometry (PMP) [23]. LUT-based methods are known for their low computational cost and have been employed in many applications [24]. We develop a novel method that extracts phase and intensity/texture information from the incoming PMP video by reducing the computational complexity of the arctangent and square-root functions. Furthermore, since no LUT-based method has been introduced for inverting the camera's calibration matrix to produce 3-D point clouds in real time, we introduce such a method, which can also be applied to other triangulation-based 3-D reconstruction techniques.

As will be demonstrated in this paper, our experimental system achieves a processing rate of 714.29 fps to generate phase and texture video and 228.83 fps to produce 3-D point clouds using just one core of an Intel Core 2 Quad Q9650 processor running at 3.0 GHz. No additional cores or GPUs [6] are needed, although using such resources would, of course, further increase our frame rate if necessary. Our fast LUT-based algorithms can also be applied to the proposed dual-frequency pattern scheme when the number of patterns is set to 6, as shown in Sec. 4.

2. Phase measuring profilometry

PMP is a popular method of SLI due to its high reliability and high accuracy [25]. In practice, either vertical or horizontal sinusoid patterns are projected onto a target object so that the vertical or horizontal correspondence information, between the camera and the projector, can be directly derived from the computed phase data. The PMP patterns are described as:

I_n^p(x^p, y^p) = A^p(x^p, y^p) + B^p(x^p, y^p)\cos\left(2\pi f y^p - \frac{2\pi n}{N}\right).

Fig. 1. PMP system diagram.


where (x^p, y^p) are the coordinates of a pixel in the projector, I_n^p is the intensity of that pixel, A^p and B^p are constants, f is the frequency of the sine wave, n is the phase-shift index, and N is the total number of phase shifts. Figure 1 shows the system diagram where a group of sine wave patterns with N = 3, f = 1, A^p = 127.5, and B^p = 127.5, for an 8-bit color depth projector, are projected.
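As a minimal illustration of Eq. (1) (our own sketch, not the authors' code), the following C++ routine generates the N phase-shifted patterns for an 8-bit projector, assuming the sinusoid varies along the normalized projector row y^p:

```cpp
// Sketch: generate the N phase-shifted PMP patterns of Eq. (1) for an 8-bit projector.
// The default offset A and amplitude B follow the example values in the text.
#include <cmath>
#include <vector>

std::vector<std::vector<unsigned char>> makePmpPatterns(int width, int height, int N,
                                                        double f, double A = 127.5,
                                                        double B = 127.5)
{
    const double PI = 3.14159265358979323846;
    std::vector<std::vector<unsigned char>> patterns(N,
        std::vector<unsigned char>(width * height));
    for (int n = 0; n < N; ++n)
        for (int y = 0; y < height; ++y) {
            double yp = static_cast<double>(y) / height;   // normalized row coordinate y^p
            double v  = A + B * std::cos(2.0 * PI * f * yp - 2.0 * PI * n / N);
            for (int x = 0; x < width; ++x)
                patterns[n][y * width + x] = static_cast<unsigned char>(v + 0.5);
        }
    return patterns;
}
```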

For reconstruction, a camera captures each image where the sine wave pattern is distorted by the scanned surface topology, resulting in the patterned images expressed as:

I_n^c(x^c, y^c) = A^c(x^c, y^c) + B^c(x^c, y^c)\cos\left(\phi(x^c, y^c) - \frac{2\pi n}{N}\right).

where (x^c, y^c) are the coordinates of a pixel in the camera and I_n^c(x^c, y^c) is the intensity of that pixel. To simplify notation, the coordinate indices for both the camera and the projector will be omitted from our equations henceforth. The term A^c is the averaged pixel intensity across the pattern set, which can be derived according to:

A^c = \frac{1}{N}\sum_{n=0}^{N-1} I_n^c

such that the image A^c is equal to an intensity or texture photograph of the scene. Correspondingly, the term B^c is the intensity modulation of a given pixel and is derived from I_n^c as:

B^c = \frac{2}{N}\left\{\left[\sum_{n=0}^{N-1} I_n^c \sin\left(\frac{2\pi n}{N}\right)\right]^2 + \left[\sum_{n=0}^{N-1} I_n^c \cos\left(\frac{2\pi n}{N}\right)\right]^2\right\}^{0.5},

such that B^c can be thought of as the amplitude of the sinusoid reflecting off of a point on the target surface. If I_n^c is constant or only weakly affected by the projected sinusoid patterns, B^c will be close to zero. Thus B^c is employed as a shadow noise detector/filter [25] such that the shadow-noised regions, with small B^c values, are discarded from further processing. Figure 1 shows an example scene with a background that includes a fluorescent ceiling light, which oversaturates the CMOS camera's pixels and thereby erases any signal from the SLI projector. In Fig. 1, A^c looks like a standard video frame absent any indication of the projected pattern sequence I_n^p. In contrast, B^c, also shown in Fig. 1, looks very similar to A^c except that it only shows texture in those areas of the scene that significantly reflect the projected sequence I_n^p. Given the significance of B^c as an indicator of the projected signal strength, the binarized image in Fig. 1 shows only those pixels greater in magnitude than a user-defined threshold. It is these pixels that will ultimately be used to reconstruct our 3-D surface, with the ignored pixels considered too noisy to convey reliable depth information.


Fig. 2. Cross-section of the proposed pattern strategy for N = 5, n = 0, f_h = 16, A^p = 127.5, B_1^p = 102, and B_2^p = 25.5.


Of the reliable pixels with sufficiently large Bc, ϕ represents the phase value of the captured sinusoid pattern derived as:

\phi = \tan^{-1}\left[\frac{\sum_{n=0}^{N-1} I_n^c \sin\left(\frac{2\pi n}{N}\right)}{\sum_{n=0}^{N-1} I_n^c \cos\left(\frac{2\pi n}{N}\right)}\right].

In Fig. 1, we show the resulting phase image corresponding to the scene illuminated by the sine wave patterns. For reference, Fig. 1 also shows the phase values for all pixels, including those considered unreliable according to B^c.
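For clarity, the per-pixel computations of Eqs. (3)-(5) can be sketched as follows (a minimal helper of our own, not the authors' implementation; atan2 is used in place of a plain arctangent so the quadrant of ϕ is recovered directly):

```cpp
// Sketch: texture A^c (Eq. (3)), modulation B^c (Eq. (4)), and wrapped phase phi (Eq. (5))
// for one camera pixel, given its N captured intensities I[n] = I_n^c.
#include <cmath>
#include <vector>

struct PmpPixel { double A, B, phi; };

PmpPixel decodePixel(const std::vector<double>& I)
{
    const double PI = 3.14159265358979323846;
    const int N = static_cast<int>(I.size());
    double sum = 0.0, s = 0.0, c = 0.0;
    for (int n = 0; n < N; ++n) {
        sum += I[n];
        s   += I[n] * std::sin(2.0 * PI * n / N);
        c   += I[n] * std::cos(2.0 * PI * n / N);
    }
    PmpPixel p;
    p.A   = sum / N;                             // average intensity (texture)
    p.B   = 2.0 / N * std::sqrt(s * s + c * c);  // modulation (shadow filter)
    p.phi = std::atan2(s, c);                    // wrapped phase
    return p;
}
```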

Having derived the phase image ϕ, we have likewise derived a unique correspondence value y^p for every camera pixel (x^c, y^c) through the linear equation ϕ(x^c, y^c) = 2πy^p. The 3-D world coordinates of the scanned object can, therefore, be derived through triangulation with the projector [7] under the assumption that the camera and projector are accurately modeled by the pinhole lens model, such that their perspective matrices, M^c and M^p, are given by:

M^c = \begin{bmatrix} m_{11}^c & m_{12}^c & m_{13}^c & m_{14}^c \\ m_{21}^c & m_{22}^c & m_{23}^c & m_{24}^c \\ m_{31}^c & m_{32}^c & m_{33}^c & m_{34}^c \end{bmatrix} \quad\text{and}\quad M^p = \begin{bmatrix} m_{11}^p & m_{12}^p & m_{13}^p & m_{14}^p \\ m_{21}^p & m_{22}^p & m_{23}^p & m_{24}^p \\ m_{31}^p & m_{32}^p & m_{33}^p & m_{34}^p \end{bmatrix}.

The 3-D world coordinates Xw, Yw, and Zw are defined according to [7] as

\begin{bmatrix} X^w \\ Y^w \\ Z^w \end{bmatrix} = \begin{bmatrix} m_{11}^c - m_{31}^c x^c & m_{12}^c - m_{32}^c x^c & m_{13}^c - m_{33}^c x^c \\ m_{21}^c - m_{31}^c y^c & m_{22}^c - m_{32}^c y^c & m_{23}^c - m_{33}^c y^c \\ m_{21}^p - m_{31}^p y^p & m_{22}^p - m_{32}^p y^p & m_{23}^p - m_{33}^p y^p \end{bmatrix}^{-1} \begin{bmatrix} m_{14}^c - m_{34}^c x^c \\ m_{24}^c - m_{34}^c y^c \\ m_{24}^p - m_{34}^p y^p \end{bmatrix}.

It is this matrix inversion, as well as the arctangent computation in Eq. (5), that will later prove to be the bottleneck preventing real-time surface reconstruction, as will be discussed in Sec. 4.3.

3. Dual-frequency pattern strategy

For the purpose of minimizing the effects of sensor noise by means of high-frequency PMP while minimizing the computational expense of performing phase unwrapping, we propose a dual-frequency pattern set defined as:

I_n^p = A^p + B_1^p\cos\left(2\pi f_h y^p - \frac{2\pi n}{N}\right) + B_2^p\cos\left(2\pi f_u y^p - \frac{4\pi n}{N}\right).

where I_n^p is the intensity of a pixel in the projector; A^p, B_1^p, and B_2^p are constants chosen to keep I_n^p between 0 and 255 for an 8-bit color depth projector; f_h is the high frequency of the sine wave; f_u is the unit frequency of the sine wave and equals 1; n is the phase-shift index; and N is the total number of phase shifts, which must be greater than or equal to 5. Figure 2 illustrates one proposed pattern along with its profile for N = 5, n = 0, f_h = 16, A^p = 127.5, B_1^p = 102, and B_2^p = 25.5 for 8-bit grayscale intensity.
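As a minimal sketch of Eq. (8) (our own illustration, using the amplitudes of Fig. 2 as defaults), one dual-frequency pattern can be generated as follows:

```cpp
// Sketch: one dual-frequency pattern of Eq. (8), combining a high-frequency term (f_h)
// and a unit-frequency term (f_u = 1) in a single 8-bit image.
#include <cmath>
#include <vector>

std::vector<unsigned char> makeDualFreqPattern(int width, int height, int n, int N,
                                               double fh, double A = 127.5,
                                               double B1 = 102.0, double B2 = 25.5)
{
    const double PI = 3.14159265358979323846;
    const double fu = 1.0;                                    // unit frequency
    std::vector<unsigned char> img(width * height);
    for (int y = 0; y < height; ++y) {
        double yp = static_cast<double>(y) / height;          // normalized row y^p
        double v  = A + B1 * std::cos(2.0 * PI * fh * yp - 2.0 * PI * n / N)
                      + B2 * std::cos(2.0 * PI * fu * yp - 4.0 * PI * n / N);
        for (int x = 0; x < width; ++x)
            img[y * width + x] = static_cast<unsigned char>(v + 0.5);
    }
    return img;
}
```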

Using this novel pattern set, Eq. (2) becomes:

I_n^c = A^c + B_1^c\cos\left(\phi_h - \frac{2\pi n}{N}\right) + B_2^c\cos\left(\phi_u - \frac{4\pi n}{N}\right),

where I_n^c is the intensity of a pixel in the camera. The term A^c is still the averaged pixel intensity across the pattern set, which can be derived by Eq. (3) such that the image A^c is equal to an intensity or texture photograph of the scene. Correspondingly, the term B_1^c is the intensity modulation of a given pixel corresponding to ϕ_h and is derived from I_n^c as:

B_m^c = \frac{2}{N}\left\{\left[\sum_{n=0}^{N-1} I_n^c \sin\left(\frac{2\pi mn}{N}\right)\right]^2 + \left[\sum_{n=0}^{N-1} I_n^c \cos\left(\frac{2\pi mn}{N}\right)\right]^2\right\}^{0.5},

with m = 1, such that B_1^c can be thought of as the amplitude of the sinusoid reflecting off of a point on the target surface.

Similar to traditional PMP, if I_n^c is constant or only weakly affected by the projected sinusoid patterns, then B_1^c will be close to zero, indicating that B_1^c is very sensitive to noise in the camera, the projector, and/or ambient light. Thus B_1^c is employed as an indicator of the signal-to-noise ratio such that excessively noisy regions, with small B_1^c values, are discarded from further processing [25]. B_2^c is derived from I_n^c by Eq. (10) with m = 2 and is the intensity modulation corresponding to ϕ_u. For the reliable pixels with sufficiently large B_1^c, the phase pair (ϕ_h, ϕ_u) is derived as:

(\phi_h, \phi_u) = \left(\tan^{-1}\left[\frac{\sum_{n=0}^{N-1} I_n^c \sin\left(\frac{2\pi n}{N}\right)}{\sum_{n=0}^{N-1} I_n^c \cos\left(\frac{2\pi n}{N}\right)}\right],\; \tan^{-1}\left[\frac{\sum_{n=0}^{N-1} I_n^c \sin\left(\frac{4\pi n}{N}\right)}{\sum_{n=0}^{N-1} I_n^c \cos\left(\frac{4\pi n}{N}\right)}\right]\right),

where ϕ_h represents the wrapped phase value of the captured pattern and ϕ_u represents the base phase used to unwrap ϕ_h.

4. LUT-based processing

LUT-based programming is a well-known and widely used technique [24] for minimizing processing latency. In this section, we implement LUT-based processing for deriving the modulation, B^c, and phase, ϕ, terms of traditional PMP, as well as a LUT-based 3-D reconstruction suitable for all triangulation-based measurement, including our proposed dual-frequency pattern set. The proposed LUT-based processing takes advantage of the fact that real-time systems use as few patterns as possible by using the 8-bit pixel values of the captured pattern set as indices into the LUT. By having LUTs that account for every possible combination of captured pixel values over the pattern set while storing double-precision results, the proposed scheme is completely lossless compared to traditional processing.
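As a rough worked example of the table sizes involved (our own illustration, using the N = 3 indices defined in Sec. 4.1 below): V = I_1^c − I_2^c spans [−255, 255] (511 values) and U = 2I_0^c − I_1^c − I_2^c spans [−510, 510] (1021 values) for 8-bit images, so a single double-precision table holds 511 × 1021 ≈ 5.2 × 10^5 entries, roughly 4.2 MB, and an MLUT/PLUT pair occupies about 8.3 MB.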

4.1. Modulation

As a first step in reducing the computational complexity associated with Eqs. (4), (5), and (7), we simplify the determination of Bc by rewriting Eq. (4) as:

B^c = \frac{1}{3}\left[3(I_1^c - I_2^c)^2 + (2I_0^c - I_1^c - I_2^c)^2\right]^{0.5}

for N = 3,

B^c = \frac{1}{2}\left[(I_1^c - I_3^c)^2 + (I_0^c - I_2^c)^2\right]^{0.5}

for N = 4, or

B^c = \frac{1}{6}\left[3(I_1^c + I_2^c - I_4^c - I_5^c)^2 + (2I_0^c - 2I_3^c + I_1^c - I_2^c - I_4^c + I_5^c)^2\right]^{0.5}

for N = 6. Noting that we need only solve these equations for 8-bits per pixel I_n^c images, we can implement a modulation look-up table, MLUT, that, for N = 3, is defined according to

\mathrm{MLUT}[U, V] = \frac{1}{3}\left[3V^2 + U^2\right]^{0.5},

where integer indices V and U are derived from

V = I_1^c - I_2^c \quad\text{and}\quad U = 2I_0^c - I_1^c - I_2^c;

for N = 4, is defined according to

\mathrm{MLUT}[U, V] = \frac{1}{2}\left[V^2 + U^2\right]^{0.5},

where integer indices V and U are derived from

V = I_1^c - I_3^c \quad\text{and}\quad U = I_0^c - I_2^c;

or for N = 6, is defined according to

\mathrm{MLUT}[U, V] = \frac{1}{6}\left[3V^2 + U^2\right]^{0.5},

where integer indices V and U are derived from

V = I_1^c + I_2^c - I_4^c - I_5^c \quad\text{and}\quad U = 2I_0^c - 2I_3^c + I_1^c - I_2^c - I_4^c + I_5^c.

The double-precision results of Eq. (15), (17), or (19) are stored in the MLUT. In contrast to Eq. (4), the MLUT reduces the run-time computational cost of the modulation without losing accuracy, where the size of the LUT is determined by the number of bits per pixel of the camera sensor and the number of patterns being projected.
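As a concrete sketch of the N = 3 case (our own illustration, not the authors' code), the MLUT of Eqs. (15) and (16) can be built once at start-up and then indexed with the integer offsets implied by the 8-bit pixel range:

```cpp
// Sketch: modulation LUT for N = 3. For 8-bit images, V = I1 - I2 lies in [-255, 255]
// and U = 2*I0 - I1 - I2 lies in [-510, 510]; the offsets below map them to array indices.
#include <cmath>
#include <vector>

class Mlut3 {
public:
    Mlut3() : table_(511 * 1021) {
        for (int v = -255; v <= 255; ++v)
            for (int u = -510; u <= 510; ++u)
                table_[index(u, v)] = std::sqrt(3.0 * v * v + u * u) / 3.0;  // Eq. (15)
    }
    double modulation(int I0, int I1, int I2) const {   // B^c of Eq. (12) via look-up
        int V = I1 - I2;                                 // Eq. (16)
        int U = 2 * I0 - I1 - I2;
        return table_[index(U, V)];
    }
private:
    static int index(int u, int v) { return (v + 255) * 1021 + (u + 510); }
    std::vector<double> table_;
};
```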

For our proposed dual-frequency pattern scheme, the minimum number of patterns is 5, but to take advantage of our LUT-based processing, we set the number of patterns to 6, where the modulation terms B_1^c and B_2^c in Eq. (9) are described as:

B_1^c = \frac{1}{6}\left[3(I_1^c + I_2^c - I_4^c - I_5^c)^2 + (2I_0^c - 2I_3^c + I_1^c - I_2^c - I_4^c + I_5^c)^2\right]^{0.5}

and

B_2^c = \frac{1}{6}\left[3(I_1^c - I_2^c + I_4^c - I_5^c)^2 + (2I_0^c + 2I_3^c - I_1^c - I_2^c - I_4^c - I_5^c)^2\right]^{0.5}.

In practice, we need only calculate B_1^c as a shadow filter and for representing object texture. The MLUT for B_1^c is the same as Eq. (19).

4.2. Phase

As our second step toward producing real-time 3-D video with texture, we aim to minimize the computational complexity associated with generating phase video, where the arctangent function has long been considered a significant obstacle for fringe pattern analysis [26]. Previous approaches to this problem include Huang et al.'s cross-ratio algorithm [22] and Guo et al.'s approximation algorithm [26]. However, these methods reduce computational cost at the expense of accuracy and are not faster than our LUT-based algorithms.

As we did with Eq. (4) for B^c, we simplify Eq. (5) according to the number of patterns N such that

\phi = \tan^{-1}\left[\frac{3^{0.5}(I_1^c - I_2^c)}{2I_0^c - I_1^c - I_2^c}\right]

for N = 3,

\phi = \tan^{-1}\left[\frac{I_1^c - I_3^c}{I_0^c - I_2^c}\right]

for N = 4, or

\phi = \tan^{-1}\left[\frac{3^{0.5}(I_1^c + I_2^c - I_4^c - I_5^c)}{2I_0^c - 2I_3^c + I_1^c - I_2^c - I_4^c + I_5^c}\right]

for N = 6. Again based on the fact that the intensity values of grabbed images are range-limited integers, we can implement these calculations through a phase LUT (PLUT), for N = 3, defined according to

\mathrm{PLUT}[U, V] = \tan^{-1}\left[\frac{3^{0.5}V}{U}\right],

where U and V are defined in Eq. (16); for N = 4, defined according to

\mathrm{PLUT}[U, V] = \tan^{-1}\left[\frac{V}{U}\right],

where U and V are defined in Eq. (18); or for N = 6, defined according to

\mathrm{PLUT}[U, V] = \tan^{-1}\left[\frac{3^{0.5}V}{U}\right],

where U and V are defined in Eq. (20). The double-precision results are stored in the PLUT. Thus, the time-consuming arctangent operation is pre-performed, and phase values are obtained by accessing the pre-computed PLUT, whose size is, again, determined by the number of bits per pixel of the sensor as well as the number of patterns projected, with no loss in accuracy. Compared to Eq. (5), the PLUT avoids computing the arctangent function at run time such that the computational cost of the phase is greatly reduced without introducing distortion.
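A matching sketch of the N = 3 phase LUT of Eq. (26) (again our own illustration) reuses the same (U, V) indices, which is why U and V need only be computed once per pixel and shared between the MLUT and the PLUT:

```cpp
// Sketch: phase LUT for N = 3, indexed exactly like the MLUT above; atan2 stores the
// full-quadrant value of Eq. (26) so no run-time arctangent is needed.
#include <cmath>
#include <vector>

class Plut3 {
public:
    Plut3() : table_(511 * 1021) {
        for (int v = -255; v <= 255; ++v)
            for (int u = -510; u <= 510; ++u)
                table_[index(u, v)] = std::atan2(std::sqrt(3.0) * v,
                                                 static_cast<double>(u));
    }
    double phase(int I0, int I1, int I2) const {   // wrapped phase via look-up
        int V = I1 - I2;                           // same indices as Eq. (16)
        int U = 2 * I0 - I1 - I2;
        return table_[index(U, V)];
    }
private:
    static int index(int u, int v) { return (v + 255) * 1021 + (u + 510); }
    std::vector<double> table_;
};
```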

For our proposed dual-frequency pattern scheme, the more accurate wrapped phase ϕ_h and the coarse unit-frequency phase ϕ_u pair of Eq. (11) is rewritten as

(\phi_h, \phi_u) = \left(\tan^{-1}\left[\frac{3^{0.5}(I_1^c + I_2^c - I_4^c - I_5^c)}{2I_0^c - 2I_3^c + I_1^c - I_2^c - I_4^c + I_5^c}\right],\; \tan^{-1}\left[\frac{3^{0.5}(I_1^c - I_2^c + I_4^c - I_5^c)}{2I_0^c + 2I_3^c - I_1^c - I_2^c - I_4^c - I_5^c}\right]\right).

The PLUT for ϕ_h is the same as Eq. (28), and the PLUT for ϕ_u is defined as

\mathrm{PLUT}[U, V] = \tan^{-1}\left[\frac{3^{0.5}V}{U}\right],

where V and U are derived from

V = I_1^c - I_2^c + I_4^c - I_5^c \quad\text{and}\quad U = 2I_0^c + 2I_3^c - I_1^c - I_2^c - I_4^c - I_5^c.

Once ϕ_h and ϕ_u are obtained, phase unwrapping can also be achieved in real time.
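The unwrapping rule itself is not spelled out above, but a minimal sketch under the standard temporal-unwrapping assumption (our own reading: scale ϕ_u by f_h to predict the absolute phase, then pick the 2π multiple that brings ϕ_h into agreement) would be:

```cpp
// Sketch (assumed unwrapping rule, not quoted from the paper): use the unit-frequency
// phase phi_u, scaled by the high frequency f_h, to select the fringe order of phi_h.
#include <cmath>

double unwrapDualFrequency(double phiH, double phiU, double fh)
{
    const double TWO_PI = 6.28318530717958647692;
    double predicted = fh * phiU;                              // coarse absolute phase
    double k = std::floor((predicted - phiH) / TWO_PI + 0.5);  // nearest fringe order
    return phiH + TWO_PI * k;                                  // unwrapped phase
}
```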

4.3. 3-D point cloud

Having obtained both the phase and modulation images by means of LUTs, we begin our derivation of a LUT for the purpose of implementing Eq. (7). After the matrix inversion and matrix multiplication operations, Eq. (7) can be expanded into the direct algebraic forms

X^w = M_x(x^c, y^c) + N_x(x^c, y^c)\,T
Y^w = M_y(x^c, y^c) + N_y(x^c, y^c)\,T
Z^w = M_z(x^c, y^c) + N_z(x^c, y^c)\,T

where T = (C(x^c, y^c)\,y^p + 1)^{-1} and M_x, M_y, M_z, N_x, N_y, and N_z are defined in the Appendix.

We then note that, based on the mapping of world coordinates to the camera plane [7] by M^c, if Z^w is calculated according to

Z^w = M_z(x^c, y^c) + N_z(x^c, y^c)\,T,

then X^w and Y^w can be computed as

X^w = E_x(x^c, y^c)\,Z^w + F_x(x^c, y^c) \quad\text{and}\quad Y^w = E_y(x^c, y^c)\,Z^w + F_y(x^c, y^c),

where

E_x(x^c, y^c) = \frac{(m_{22}^c m_{33}^c - m_{23}^c m_{32}^c)x^c + (m_{13}^c m_{32}^c - m_{12}^c m_{33}^c)y^c + (m_{12}^c m_{23}^c - m_{13}^c m_{22}^c)}{(m_{21}^c m_{32}^c - m_{22}^c m_{31}^c)x^c + (m_{12}^c m_{31}^c - m_{11}^c m_{32}^c)y^c + (m_{11}^c m_{22}^c - m_{12}^c m_{21}^c)},
F_x(x^c, y^c) = \frac{(m_{22}^c m_{34}^c - m_{24}^c m_{32}^c)x^c + (m_{14}^c m_{32}^c - m_{12}^c m_{34}^c)y^c + (m_{12}^c m_{24}^c - m_{14}^c m_{22}^c)}{(m_{21}^c m_{32}^c - m_{22}^c m_{31}^c)x^c + (m_{12}^c m_{31}^c - m_{11}^c m_{32}^c)y^c + (m_{11}^c m_{22}^c - m_{12}^c m_{21}^c)},
E_y(x^c, y^c) = \frac{(m_{23}^c m_{31}^c - m_{21}^c m_{33}^c)x^c + (m_{11}^c m_{33}^c - m_{13}^c m_{31}^c)y^c + (m_{13}^c m_{21}^c - m_{11}^c m_{23}^c)}{(m_{21}^c m_{32}^c - m_{22}^c m_{31}^c)x^c + (m_{12}^c m_{31}^c - m_{11}^c m_{32}^c)y^c + (m_{11}^c m_{22}^c - m_{12}^c m_{21}^c)},\;\text{and}
F_y(x^c, y^c) = \frac{(m_{24}^c m_{31}^c - m_{21}^c m_{34}^c)x^c + (m_{11}^c m_{34}^c - m_{14}^c m_{31}^c)y^c + (m_{14}^c m_{21}^c - m_{11}^c m_{24}^c)}{(m_{21}^c m_{32}^c - m_{22}^c m_{31}^c)x^c + (m_{12}^c m_{31}^c - m_{11}^c m_{32}^c)y^c + (m_{11}^c m_{22}^c - m_{12}^c m_{21}^c)}.

Implementing the seven parameters M_z, N_z, C, E_x, E_y, F_x, and F_y by means of table look-up over the indices (x^c, y^c) (camera column and row indices) reduces the total computational complexity of deriving the 3-D point cloud from the phase term to 7 look-ups, 4 additions, 3 multiplications, and 1 division, which is significantly less than what is involved in performing the matrix inversion and matrix multiplication required by Eq. (7). It should be noted that the method presented in Eqs. (32) and (33) can be applied to all triangulation-based 3-D coordinate reconstruction techniques including stereo vision, 3-D from video, time-of-flight, and so on.
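A minimal per-pixel sketch of this reconstruction (our own illustration; the seven per-pixel parameters are assumed to have been filled offline from M^c and M^p) makes the small run-time cost explicit:

```cpp
// Sketch: per-pixel 3-D reconstruction from the seven precomputed LUT parameters.
// At run time only a handful of multiply/add operations and one division remain.
struct PixelLut { double Mz, Nz, C, Ex, Ey, Fx, Fy; };  // precomputed per (x^c, y^c)

struct Point3 { double X, Y, Z; };

inline Point3 reconstruct(const PixelLut& p, double yp)  // yp from the unwrapped phase
{
    double T = 1.0 / (p.C * yp + 1.0);   // T = (C*y^p + 1)^-1
    Point3 w;
    w.Z = p.Mz + p.Nz * T;               // Z^w from M_z, N_z
    w.X = p.Ex * w.Z + p.Fx;             // X^w from E_x, F_x
    w.Y = p.Ey * w.Z + p.Fy;             // Y^w from E_y, F_y
    return w;
}
```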

5. Experiments

In order to test the performance of the proposed pattern scheme and LUT-based processing, we programmed the experimental system of Fig. 3 using Microsoft Visual Studio 2005 with managed C++. The imaging sensor is an 8-bits per pixel, monochrome, Prosilica GC640M gigabit ethernet camera with a frame rate of 120 fps and 640×480 pixel resolution. The projector is composed of a Texas Instruments Discovery 1100 board with an ALP-1 controller and an LED-OM with 225 ANSI lumens. The projector has a maximum frame rate of 150 fps at a resolution of 1024×768 and 8-bits per pixel grayscale. The camera and projector are synchronized by an external triggering circuit. As our processing unit, we used a Dell Optiplex 960 with an Intel Core 2 Quad Q9650 processor running at 3.0 GHz.


Fig. 3. Experimental setup.



Fig. 4. Visualization of the scanned object (top left), one of the captured images with the projected pattern, I_0^c (top center), A^c (top right), B_1^c (bottom left), binarized B_1^c with a threshold of 10 (bottom center), and B_2^c (bottom right).


Our experiments include: (1) scanning a static scene with the proposed dual-frequency pattern scheme employing 5 patterns without using LUT-based processing; (2) scanning a static scene with traditional unit-frequency PMP for N = 3, 4, and 6 using LUT-based processing; (3) scanning a moving hand with dual-frequency patterns for N = 6 by means of LUT-based processing; and (4) scanning a moving white table tennis ball and reconstructing it with a known sphere model.

In the first experiment, the image of Fig. 4 (top left) shows the scanned static objects. The projected patterns' parameters in Eq. (8) are N = 5, f_h = 6, f_u = 1, A^p = 155, B_1^p = 80, and B_2^p = 20. The minimum value in each pattern is 55 in order to prevent underflow of the captured images using our camera. The visualizations of the extracted parameters I_0^c, A^c, B_1^c, and B_2^c are shown in Fig. 4 (top center), (top right), (bottom left), and (bottom right). The image in Fig. 4 (bottom center) shows the binarized B_1^c, to be used as a shadow noise filter for phase generation, using a threshold of 10.


Fig. 5. The (top) visualization and (bottom) cross-section at the 250th column of the (left) wrapped phase ϕ_h filtered by the binarized B_1^c, the (center) base phase ϕ_u filtered by B_1^c, and the (right) final unwrapped phase filtered by B_1^c.


The visualizations of Eq. (11) are illustrated in Fig. 5 (top left) and (top center), where shadow noise was filtered by the binarized B_1^c, while Fig. 5 (bottom left) and (bottom center) show the cross-sections at the 250th column, which crosses the angel. In comparison, the unit-frequency phase is noisy while the wrapped phase has high quality but needs to be unwrapped. Because we obtain the wrapped phase and the base phase simultaneously, the unwrapping algorithm for multi-frequency PMP can be performed to generate the final, high-quality, non-ambiguous phase. No neighborhood technique need be employed. The final unwrapped phase image without shadow noise is shown in Fig. 5 (top right), and its cross-section at the 250th column is shown in Fig. 5 (bottom right). Once the unwrapped phase is obtained, the 3-D world coordinates of the scanned object can be computed by triangulation between the camera and projector. Figure 6 shows the reconstructed 3-D point clouds with texture (top) and depth rendering (bottom) in front (left), side (center), and top (right) views. A^c acts as the texture for the 3-D point clouds in this experiment.

For our second experiment, a static object was scanned using traditional unit-frequency PMP with N = 3, 4, and 6 for the purpose of measuring the speed at which our LUT-based algorithms perform without phase unwrapping. Although the processor has four cores, our reported processing rates for each LUT algorithm are based on using just one core. And although B^c could be used as a shadow noise filter, thus reducing the number of pixels that would need to be processed while deriving ϕ, we computed a phase value for the full set of 640×480 pixels such that our observed processing times represent an upper limit. Under these conditions, the average computation times over 1,000 runs for the modulation B^c and phase ϕ are reported in Table 1. B^c acts as the texture in this experiment.

In an overview of Table 1 for the total processing time, including texture generation, phase generation, and 3-D reconstruction, our LUT-based computations run at 173.31 fps for N = 3, 164.20 fps for N = 4, and 139.28 fps for N = 6 versus, using traditional PMP, 15.16 fps for N = 3, 15.38 fps for N = 4, and 14.55 fps for N = 6, which means our LUT-based algorithms are roughly 10 times faster than the traditional PMP algorithms. Using the general modulation equation, Eq. (4), and the general phase equation, Eq. (5), the processing time increases with larger N because larger N means more sin(·), cos(·), and summation computations. The processing times and rates are listed in Table 1.


Fig. 6. The reconstructed 3-D point clouds with texture (top) and depth rendering (bottom) in front view (left), side view (center), and top view (right).



Table 1. Processing time and rate, in milliseconds and frames per second (in parentheses), respectively, for various stages of PMP processing by means of the equations and LUT described in this paper.

With the simplified modulation equations of Eqs. (12), (13), and (14), the processing time decreases significantly for N = 4 (2.34 ms) compared with N = 3 (6.72 ms) because, before performing the square-root computation, there are only 2 subtractions, 1 addition, and 2 square computations for N = 4, whereas there are 3 subtractions, 1 addition, 2 multiplications, and 2 square computations for N = 3. Using the simplified phase equations, Eqs. (23), (24), and (25), the processing time is almost the same for different N because, although the processing time for the basic computations such as addition and subtraction still varies with N, it is negligible compared to the processing time of the processing-hungry arctangent function.

Furthermore, it is found from Table 1 that the performance of the MLUT and the PLUT is the same for a given N because the computations of the integer indices, U and V, are the same for both the MLUT and the PLUT, as is the time for accessing the two tables. However, the processing time for the MLUT/PLUT increases with increasing N because the time for accessing the image buffer, I_n^c, increases; accessing this buffer is therefore the predominant factor in this case. In practice, when we access the MLUT and the PLUT, the computations of U and V need only be performed once. Thus Table 1 also shows that the processing time in the row marked "MLUT and PLUT, combined" is less than the sum of the rows marked "MLUT: Eq. (15), (17) and then (19)" and "PLUT: Eq. (26), (27) and then (28)", in which U and V were computed separately.

For testing 3-D reconstruction using a single processing core, we implemented reconstruction by means of matrix inversion (Eq. (7)) versus our LUT-based implementation of Sec. 4.3. With our 3-D LUT algorithm, the frame rate of 3-D reconstruction is 228.83 fps for 640×480 image resolution, which is 10.3 times faster than matrix inversion by means of Eq. (7). For added comparison, Zhang et al. reported a reconstruction frame rate of 25.56 fps at a resolution of 532×500 when performing matrix inversion by means of GPU processing on an nVidia Quadro FX 3450 [6]. Our LUT analysis can also be applied to Zhang et al.'s 2 + 1 pattern strategy [21], and its performance is shown in Table 1.

In our third experiment, we scanned a human subject's hand gestures with our dual-frequency pattern set (N = 6) using LUT-based processing. Unlike our second experiment, the most recent N video frames are captured live and stored in a shared image buffer by a camera thread. At the same time, a second thread performs modulation and phase generation from the shared buffer and stores the computed data into a modulation and phase buffer. A third thread, simultaneously, performs 3-D reconstruction using data from this buffer, storing the 3-D point cloud results into a 3-D data buffer. Finally, a fourth thread displays the data from the 3-D data buffer using OpenGL. Because the speed of the camera/projector pair is 120 fps, the speed of the final 3-D reconstruction is also 120 fps, while the display speed is limited by the LCD display to 40–50 fps.
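As an illustration of this pipeline (our own sketch, not the authors' code), each stage can hand its most recent result to the next through a small mutex-protected mailbox; the camera, phase/modulation, reconstruction, and display threads then simply loop on put/take calls with their respective buffer types:

```cpp
// Sketch: "most recent item" mailbox connecting the camera, phase generation,
// 3-D reconstruction, and display threads; each stage blocks until fresh data arrive.
#include <condition_variable>
#include <mutex>
#include <utility>

template <typename T>
class Mailbox {
public:
    void put(T item) {
        std::lock_guard<std::mutex> lock(m_);
        item_  = std::move(item);
        fresh_ = true;
        cv_.notify_one();
    }
    T take() {                         // blocks until put() has stored a fresh item
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return fresh_; });
        fresh_ = false;
        return item_;
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    T item_{};
    bool fresh_ = false;
};
// Usage idea: Mailbox<Frame>, Mailbox<PhaseMap>, and Mailbox<PointCloud> (placeholder
// types) link the four threads described above.
```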

Figure 7 shows a human subject's hand gestures. In Fig. 7 (left), we show the entire system along with the subject's hand as a way of demonstrating that video is being acquired and displayed on screen in real time, with no noticeable delay between the gestures performed by the subject and those displayed on screen. On the LCD panel, the XYZ axes are labeled as blue, green, and red lines, respectively, while the 3-D point cloud is also reflected off the XZ plane as a way of better illustrating the hand moving toward and away from the camera. In the video clips of Fig. 7 (center) and (right), we show the subject's hand held in front of a white cardboard box. The purpose of these clips is to show that we can track multiple, disjoint objects, since there are no phase ambiguities when using the dual-frequency pattern scheme. For added inspection, the images in Fig. 8 show live frames of hand poses in front and top views where, from visual inspection, one can see the texture/intensity encoding of the points derived from B^c. Points with low modulation strength are not displayed.
Fig. 7. Sample video clips. (Media 1), (Media 2), (Media 3)


Fig. 8. Sample point clouds of various hand poses, shown live using the dual-frequency pattern scheme for N = 6: (top) front view and (middle) top view.


As a final and quantitative evaluation of our proposed methodology, we scanned a texture-less, white table tennis ball, with a radius of 19 mm, swinging like a pendulum on a 20 cm long string. We used our proposed dual-frequency pattern scheme with N = 6, f_h = 4, f_u = 1, A^p = 155, and B_1^p = B_2^p = 50. Our PLUT and 3-D LUT were employed for phase generation and 3-D reconstruction. From the reconstructed point clouds of the moving ball, we found the best-fit sphere with variable position and radius and then compared the estimated sphere radius with the true radius. This estimated sphere radius is plotted in Fig. 9 (top). The distance of the table tennis ball from the camera, plotted in Fig. 9 (bottom), was estimated from the centroid of all points in the cloud.

The exact position of the point cloud centroid is of no quantitative significance, since we do not have a ground-truth measure to compare against, but it does indicate the points in time when the ball was at the extreme points of the pendulum swing. What Fig. 9 shows is that we get an accurate and stable measurement of the ball radius at the extreme points of the swing, where the ball comes to a complete stop before changing direction. The mid-way point has the least stable radius estimate because this is the point in the swing where the ball is moving at its highest velocity. At the ball's farthest point from the sensor, noise in the reconstruction results in the smallest estimated sphere radius because the points of the cloud are not, in any way, constrained to lie side by side in the reconstruction, so two neighboring pixels may end up on the front and back of the best-fit sphere. This behavior is evident in the distance estimate, which has a high degree of variability. At its closest point to the sensor, the point cloud is sufficiently flat that the entire point cloud resides on the front side of the best-fit sphere, resulting in a smooth and consistent distance estimate.
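As an illustration of the fitting step (one common algebraic least-squares formulation, not necessarily the method used for Fig. 9), a sphere with free center and radius can be fit to a point cloud as follows:

```cpp
// Sketch: algebraic least-squares sphere fit. The model x^2+y^2+z^2 = 2ax+2by+2cz+d is
// linear in (a, b, c, d), so the normal equations form a 4x4 system solved directly here;
// the radius follows from r^2 = d + a^2 + b^2 + c^2.
#include <cmath>
#include <utility>
#include <vector>

struct Pt { double x, y, z; };
struct Sphere { double cx, cy, cz, r; };

bool fitSphere(const std::vector<Pt>& pts, Sphere& s)
{
    double AtA[4][4] = {{0}}, Atb[4] = {0};
    for (const Pt& p : pts) {
        double row[4] = { 2.0 * p.x, 2.0 * p.y, 2.0 * p.z, 1.0 };
        double rhs = p.x * p.x + p.y * p.y + p.z * p.z;
        for (int i = 0; i < 4; ++i) {
            Atb[i] += row[i] * rhs;
            for (int j = 0; j < 4; ++j) AtA[i][j] += row[i] * row[j];
        }
    }
    double M[4][5];                                   // augmented system [AtA | Atb]
    for (int i = 0; i < 4; ++i) {
        for (int j = 0; j < 4; ++j) M[i][j] = AtA[i][j];
        M[i][4] = Atb[i];
    }
    for (int col = 0; col < 4; ++col) {               // Gauss-Jordan with partial pivoting
        int piv = col;
        for (int r = col + 1; r < 4; ++r)
            if (std::fabs(M[r][col]) > std::fabs(M[piv][col])) piv = r;
        for (int j = 0; j < 5; ++j) std::swap(M[col][j], M[piv][j]);
        if (std::fabs(M[col][col]) < 1e-12) return false;
        for (int r = 0; r < 4; ++r) {
            if (r == col) continue;
            double f = M[r][col] / M[col][col];
            for (int j = col; j < 5; ++j) M[r][j] -= f * M[col][j];
        }
    }
    double a = M[0][4] / M[0][0], b = M[1][4] / M[1][1];
    double c = M[2][4] / M[2][2], d = M[3][4] / M[3][3];
    s.cx = a; s.cy = b; s.cz = c;
    s.r  = std::sqrt(d + a * a + b * b + c * c);
    return true;
}
```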

6. Conclusion and future work

In this paper, we presented a novel dual-frequency pattern strategy along with three LUT-based algorithms for calculating the modulation and phase terms and for reconstructing 3-D point clouds based upon a popular method of SLI. The proposed pattern coding technique embeds low- and high-frequency components into a single pattern where the high-quality, high-frequency phase is unwrapped according to the low-frequency phase. No intermediate patterns are needed, nor are surface discontinuities an issue. With regard to reconstruction quality, we note that scan accuracy is strongly influenced by features of the pattern design, which were also considered in this paper. Because our proposed method requires at least five patterns, future work will look at reducing this pattern number further.

With regard to LUT-based processing, the phase generation and 3-D reconstruction contributions of this paper are of particular importance to real-time 3-D reconstruction. Compared with both the traditional PMP algorithm (Table 1) and previously published literature [6,8,22], the improvement in frame rate for phase generation and 3-D reconstruction is significant. Furthermore, the methods presented here are not strictly tied to traditional PMP and the proposed dual-frequency approach; the 2+1 method, introduced by Wizinowich [27] and modified by Zhang in his SLI system [21], can be performed at the equivalent of 719 fps (Table 1) when implemented by means of the MLUT and PLUT.


Fig. 9. Radius of best-fit sphere (top) and distance between the camera and the center of the ball (bottom) for 295 measurements.


As presented, using an 8-bits per pixel sensor, the LUT-based method is lossless but requires significant memory, depending on the number of patterns used. This memory consumption can, of course, be reduced by using only the most significant bits of the incoming video stream, which lowers the overall memory requirements for the MLUT and PLUT. The LUTs used for 3-D reconstruction are limited in size by the number of pixels forming the sensor, so reducing the memory requirements there corresponds to reducing the overall field of view of the camera. While losing accuracy in the MLUT may be insignificant for most applications, quantizing the incoming video to reduce the memory consumed by the PLUT incurs a penalty in phase accuracy. For our experimental system, the variance of the phase error (in radians) ranged from 0.031 for 3-pattern, 4-bits per pixel LUTs to 0.001811 for 3-pattern, 10-bits per pixel LUTs.

Appendix: Definitions of Mx, My, Mz, Nx, Ny and Nz

M_x(x^c, y^c) = \left(\frac{a_{x1}x^c + a_{x2}y^c + a_{x3}}{d_1 x^c + d_2 y^c + d_3}\right)\Big/\left(\frac{c_1 x^c + c_2 y^c + c_3}{d_1 x^c + d_2 y^c + d_3}\right), \quad N_x(x^c, y^c) = \frac{b_{x1}x^c + b_{x2}y^c + b_{x3}}{d_1 x^c + d_2 y^c + d_3} - M_x(x^c, y^c),
M_y(x^c, y^c) = \left(\frac{a_{y1}x^c + a_{y2}y^c + a_{y3}}{d_1 x^c + d_2 y^c + d_3}\right)\Big/\left(\frac{c_1 x^c + c_2 y^c + c_3}{d_1 x^c + d_2 y^c + d_3}\right), \quad N_y(x^c, y^c) = \frac{b_{y1}x^c + b_{y2}y^c + b_{y3}}{d_1 x^c + d_2 y^c + d_3} - M_y(x^c, y^c),
M_z(x^c, y^c) = \left(\frac{a_{z1}x^c + a_{z2}y^c + a_{z3}}{d_1 x^c + d_2 y^c + d_3}\right)\Big/\left(\frac{c_1 x^c + c_2 y^c + c_3}{d_1 x^c + d_2 y^c + d_3}\right), \quad N_z(x^c, y^c) = \frac{b_{z1}x^c + b_{z2}y^c + b_{z3}}{d_1 x^c + d_2 y^c + d_3} - M_z(x^c, y^c),

where

a_{x1} = m_{22}^c m_{34}^c m_{33}^p + m_{23}^c m_{32}^c m_{34}^p + m_{24}^c m_{33}^c m_{32}^p - m_{22}^c m_{33}^c m_{34}^p - m_{23}^c m_{34}^c m_{32}^p - m_{24}^c m_{32}^c m_{33}^p,
a_{x2} = m_{12}^c m_{33}^c m_{34}^p + m_{13}^c m_{34}^c m_{32}^p + m_{14}^c m_{32}^c m_{33}^p - m_{12}^c m_{34}^c m_{33}^p - m_{13}^c m_{32}^c m_{34}^p - m_{14}^c m_{33}^c m_{32}^p,
a_{x3} = m_{12}^c m_{24}^c m_{33}^p + m_{13}^c m_{22}^c m_{34}^p + m_{14}^c m_{23}^c m_{32}^p - m_{12}^c m_{23}^c m_{34}^p - m_{13}^c m_{24}^c m_{32}^p - m_{14}^c m_{22}^c m_{33}^p,
a_{y1} = m_{21}^c m_{33}^c m_{34}^p + m_{23}^c m_{34}^c m_{31}^p + m_{24}^c m_{31}^c m_{33}^p - m_{21}^c m_{34}^c m_{33}^p - m_{23}^c m_{31}^c m_{34}^p - m_{24}^c m_{33}^c m_{31}^p,
a_{y2} = m_{11}^c m_{34}^c m_{33}^p + m_{13}^c m_{31}^c m_{34}^p + m_{14}^c m_{33}^c m_{31}^p - m_{11}^c m_{33}^c m_{34}^p - m_{13}^c m_{34}^c m_{31}^p - m_{14}^c m_{31}^c m_{33}^p,
a_{y3} = m_{11}^c m_{23}^c m_{34}^p + m_{13}^c m_{24}^c m_{31}^p + m_{14}^c m_{21}^c m_{33}^p - m_{11}^c m_{24}^c m_{33}^p - m_{13}^c m_{21}^c m_{34}^p - m_{14}^c m_{23}^c m_{31}^p,
a_{z1} = m_{21}^c m_{34}^c m_{32}^p + m_{22}^c m_{31}^c m_{34}^p + m_{24}^c m_{32}^c m_{31}^p - m_{21}^c m_{32}^c m_{34}^p - m_{22}^c m_{34}^c m_{31}^p - m_{24}^c m_{31}^c m_{32}^p,
a_{z2} = m_{11}^c m_{32}^c m_{34}^p + m_{12}^c m_{34}^c m_{31}^p + m_{14}^c m_{31}^c m_{32}^p - m_{11}^c m_{34}^c m_{32}^p - m_{12}^c m_{31}^c m_{34}^p - m_{14}^c m_{32}^c m_{31}^p,
a_{z3} = m_{11}^c m_{24}^c m_{32}^p + m_{12}^c m_{21}^c m_{34}^p + m_{14}^c m_{22}^c m_{31}^p - m_{11}^c m_{22}^c m_{34}^p - m_{12}^c m_{24}^c m_{31}^p - m_{14}^c m_{21}^c m_{32}^p,
b_{x1} = m_{22}^c m_{33}^c m_{24}^p + m_{23}^c m_{34}^c m_{22}^p + m_{24}^c m_{32}^c m_{23}^p - m_{22}^c m_{34}^c m_{23}^p - m_{23}^c m_{32}^c m_{24}^p - m_{24}^c m_{33}^c m_{22}^p,
b_{x2} = m_{12}^c m_{34}^c m_{23}^p + m_{13}^c m_{32}^c m_{24}^p + m_{14}^c m_{33}^c m_{22}^p - m_{12}^c m_{33}^c m_{24}^p - m_{13}^c m_{34}^c m_{22}^p - m_{14}^c m_{32}^c m_{23}^p,
b_{x3} = m_{12}^c m_{23}^c m_{24}^p + m_{13}^c m_{24}^c m_{22}^p + m_{14}^c m_{22}^c m_{23}^p - m_{12}^c m_{24}^c m_{23}^p - m_{13}^c m_{22}^c m_{24}^p - m_{14}^c m_{23}^c m_{22}^p,
b_{y1} = m_{21}^c m_{34}^c m_{23}^p + m_{23}^c m_{31}^c m_{24}^p + m_{24}^c m_{33}^c m_{21}^p - m_{21}^c m_{33}^c m_{24}^p - m_{23}^c m_{34}^c m_{21}^p - m_{24}^c m_{31}^c m_{23}^p,
b_{y2} = m_{11}^c m_{33}^c m_{24}^p + m_{13}^c m_{34}^c m_{21}^p + m_{14}^c m_{31}^c m_{23}^p - m_{11}^c m_{34}^c m_{23}^p - m_{13}^c m_{31}^c m_{24}^p - m_{14}^c m_{33}^c m_{21}^p,
b_{y3} = m_{11}^c m_{24}^c m_{23}^p + m_{13}^c m_{21}^c m_{24}^p + m_{14}^c m_{23}^c m_{21}^p - m_{11}^c m_{23}^c m_{24}^p - m_{13}^c m_{24}^c m_{21}^p - m_{14}^c m_{21}^c m_{23}^p,
b_{z1} = m_{21}^c m_{32}^c m_{24}^p + m_{22}^c m_{34}^c m_{21}^p + m_{24}^c m_{31}^c m_{22}^p - m_{21}^c m_{34}^c m_{22}^p - m_{22}^c m_{31}^c m_{24}^p - m_{24}^c m_{32}^c m_{21}^p,
b_{z2} = m_{11}^c m_{34}^c m_{22}^p + m_{12}^c m_{31}^c m_{24}^p + m_{14}^c m_{32}^c m_{21}^p - m_{11}^c m_{32}^c m_{24}^p - m_{12}^c m_{34}^c m_{21}^p - m_{14}^c m_{31}^c m_{22}^p,
b_{z3} = m_{11}^c m_{22}^c m_{24}^p + m_{12}^c m_{24}^c m_{21}^p + m_{14}^c m_{21}^c m_{22}^p - m_{11}^c m_{24}^c m_{22}^p - m_{12}^c m_{21}^c m_{24}^p - m_{14}^c m_{22}^c m_{21}^p,
c_1 = m_{21}^c m_{32}^c m_{33}^p + m_{22}^c m_{33}^c m_{31}^p + m_{23}^c m_{31}^c m_{32}^p - m_{21}^c m_{33}^c m_{32}^p - m_{22}^c m_{31}^c m_{33}^p - m_{23}^c m_{32}^c m_{31}^p,
c_2 = m_{11}^c m_{33}^c m_{32}^p + m_{12}^c m_{31}^c m_{33}^p + m_{13}^c m_{32}^c m_{31}^p - m_{11}^c m_{32}^c m_{33}^p - m_{12}^c m_{33}^c m_{31}^p - m_{13}^c m_{31}^c m_{32}^p,
c_3 = m_{11}^c m_{22}^c m_{33}^p + m_{12}^c m_{23}^c m_{31}^p + m_{13}^c m_{21}^c m_{32}^p - m_{11}^c m_{23}^c m_{32}^p - m_{12}^c m_{21}^c m_{33}^p - m_{13}^c m_{22}^c m_{31}^p,
d_1 = m_{21}^c m_{33}^c m_{22}^p + m_{22}^c m_{31}^c m_{23}^p + m_{23}^c m_{32}^c m_{21}^p - m_{21}^c m_{32}^c m_{23}^p - m_{22}^c m_{33}^c m_{21}^p - m_{23}^c m_{31}^c m_{22}^p,
d_2 = m_{11}^c m_{32}^c m_{23}^p + m_{12}^c m_{33}^c m_{21}^p + m_{13}^c m_{31}^c m_{22}^p - m_{11}^c m_{33}^c m_{22}^p - m_{12}^c m_{31}^c m_{23}^p - m_{13}^c m_{32}^c m_{21}^p,
d_3 = m_{11}^c m_{23}^c m_{22}^p + m_{12}^c m_{21}^c m_{23}^p + m_{13}^c m_{22}^c m_{21}^p - m_{11}^c m_{22}^c m_{23}^p - m_{12}^c m_{23}^c m_{21}^p - m_{13}^c m_{21}^c m_{22}^p.

Acknowledgements

This work was sponsored, in part, by the National Institute of Hometown Security, Somerset, KY, under contract 17-07-UK, and by the Kentucky Science and Technology Company Inc., under contract KSEF-148-502-08-226.

References and links

1. S. Zhang, D. Royer, and S. T. Yau, “High-resolution, real-time-geometry video acquisition system,” ACM SIGGRAPH (2006).

2. C. Guan, L. G. Hassebrook, and D. L. Lau, “Composite structured light pattern for three-dimensional video,” Opt. Express 11, 406–417 (2003). [CrossRef]   [PubMed]  

3. Y. Wang, Q. Hao, A. Fatehpuria, D. L. Lau, and L. G. Hassebrook, “Data acquisition and quality analysis of 3-dimensional fingerprints,” in IEEE conference on Biometrics, Identity and Security, Tampa, Florida, (2009).

4. J. Salvi, J. Pages, and J. Batlle, “Pattern codification strategies in structured light systems,” Pattern Recognition 37, 827–849 (2004). [CrossRef]  

5. S. Zhang, X. Li, and S.-T. Yau, “Multilevel quality-guided phase unwrapping algorithm for real-time three-dimensional shape reconstruction,” Appl. Opt. 46, 50–57 (2007). [CrossRef]  

6. S. Zhang, D. Royer, and S.-T. Yau, “GPU-assisted high-resolution, real-time 3-d shape measurement,” Opt. Express 14, 9120–9129 (2006). [CrossRef]   [PubMed]  

7. J. Li, L. G. Hassebrook, and C. Guan, “Optimized two-frequency phase-measuring-profilometry light-sensor temporal-noise sensitivity,” J. Opt. Soc. Am. A 20, 106–115 (2003). [CrossRef]  

8. S. Zhang and P. S. Huang, “High-resolution, real-time three-dimensional shape measurement,” Opt. Eng. 45, 123601 (2006). [CrossRef]  

9. J. M. Huntley and H. O. Saldner, “Error-reduction methods for shape measurement by temporal phase unwrapping,” J. Opt. Soc. Am. A 14, 3188–3196 (1997). [CrossRef]  

10. S. Li, W. Chen, and X. Su, “Reliability-guided phase unwrapping in wavelet-transform profilometry,” Appl. Opt. 47, 3369–3377 (2008). [CrossRef]   [PubMed]  

11. Y. Shi, “Robust phase unwrapping by spinning iteration,” Opt. Express 15, 8059–8064 (2007). [CrossRef]  

12. M. Costantini, “A novel phase unwrapping method based on network programming,” IEEE Trans. Geoscience and Remote Sensing 36, 813–821 (1998). [CrossRef]  

13. D. S. Mehta, S. K. Dubey, M. M. Hossain, and C. Shakher, “Simple multifrequency and phase-shifting fringe-projection system based on two-wavelength lateral shearing interferometry for three-dimensional profilometry,” Appl. Opt. 44, 7515–7521 (2005). [CrossRef]   [PubMed]  

14. G. Pedrini, I. Alexeenko, W. Osten, and H. J. Tiziani, “Temporal phase unwrapping of digital hologram sequences,” Appl. Opt. 42, 5846–5854 (2003). [CrossRef]   [PubMed]  

15. E.-H. Kim, J. Hahn, H. Kim, and B. Lee, “Profilometry without phase unwrapping using multi-frequency and four-step phase-shift sinusoidal fringe projection,” Opt. Express 17, 7818–7830 (2009). [CrossRef]   [PubMed]  

16. J.-L. Li, H.-J. Su, and X.-Y. Su, “Two-frequency grating used in phase-measuring profilometry,” Appl. Opt. 36, 277–280 (1997). [CrossRef]   [PubMed]  

17. W.-H. Su and H. Liu, “Calibration-based two-frequency projected fringe profilometry: a robust, accurate, and single-shot measurement for objects with large depth discontinuities,” Opt. Express 14, 9178–9187 (2006). [CrossRef]   [PubMed]  

18. S. Y. Chen, Y. F. Li, and J. Zhang, “Vision processing for realtime 3-d data acquisition based on coded structured light,” IEEE Trans. Image Processing 17, 167–176 (2008). [CrossRef]  

19. M. Takeda and K. Mutoh, “Fourier transform profilometry for the automatic measurement of 3-d object shapes,” Appl. Opt. 22, 3977–3982 (1983). [CrossRef]   [PubMed]  

20. T. P. Koninckx and L. V. Gool, “Real-time range acquisition by adaptive structured light,” IEEE Trans. Pattern Anal. Mach. Intell. 28, 432–445 (2006). [CrossRef]   [PubMed]  

21. S. Zhang and S.-T. Yau, “High-speed three-dimensional shape measurement system using a modified two-plus-one phase-shifting algorithm,” Opt. Eng. 46, 113603 (2007). [CrossRef]  

22. P. S. Huang and S. Zhang, “Fast three-step phase-shifting algorithm,” Appl. Opt. 45, 5086–5091 (2006). [CrossRef]   [PubMed]  

23. M. Halioua and H. Liu, “Optical three-dimensional sensing by phase measuring profilometry,” Opt. Lasers in Eng. 11, 185–215 (1989). [CrossRef]  

24. S. F. Frisken, R. N. Perry, A. P. Rockwood, and T. R. Jones, “Adaptively sampled distance fields: A general representation of shape for computer graphics,” in Proceedings of the 27th annual conference on Computer graphics and interactive techniques, 249–254 (2000).

25. X. Su, G. von Bally, and D. Vukicevic, “Phase-stepping grating profilometry: utilization of intensity modulation analysis in complex objects evaluation,” Opt. Commun. 98, 141–150 (1993). [CrossRef]  

26. H. Guo and G. Liu, “Approximations for the arctangent function in efficient fringe pattern analysis,” Opt. Express 15, 3053–3066 (2007). [CrossRef]   [PubMed]  

27. P. L. Wizinowich, “Phase shifting interferometry in the presence of vibration: A new algorithm and system,” Appl. Opt. 29, 3271–3279 (1990). [CrossRef]   [PubMed]  

Supplementary Material (3)

Media 1: MOV (1376 KB)     
Media 2: MOV (4037 KB)     
Media 3: MOV (3925 KB)     
