
Composite structured light pattern for three-dimensional video

Open Access

Abstract

Based on recent discoveries, we introduce a method that projects a single structured-light pattern onto an object and then reconstructs the three-dimensional range from the distortions in the reflected, captured image. Traditional structured-light methods require several different patterns to recover depth without ambiguity or albedo sensitivity, and they are therefore corrupted by object movement during the projection/capture process. Our method efficiently combines multiple patterns into a single composite pattern projection, allowing for real-time implementations. Because structured-light techniques require only standard image capture and projection technology, unlike time-of-arrival techniques, they are relatively low cost.

©2003 Optical Society of America

1. Introduction

Structured-light illumination [1] is a commonly used technique for automated inspection and measurement of surface topologies. Classical 3D acquisition devices use a single laser stripe scanned progressively over the surface of the target object, placing a burden on the object to remain static and a burden on data acquisition to capture all the stripe images. To reduce the technological burdens of scanning and processing each scan position of the laser stripe, many methods have been devised to project and process structured-light patterns, such as multi-stripe [2] and sinusoidal fringe patterns, that illuminate the entire target surface at the same time. But these multi-stripe patterns introduce ambiguities in the surface reconstruction around surface discontinuities, can be sensitive to surface reflectance variations (i.e., albedo), and/or suffer from lower lateral resolution caused by the required spacing between stripes [3].

The solution to the ambiguity and albedo problems is to encode the surface repeatedly with multiple light-stripe patterns [4] with variable spatial frequencies [5, 6, 7]; but by doing so, if a real-time system is desired, either temporally multiplexed projection/capture image sequences or color multiplexing using multiple narrow-band color filters is required. The temporally multiplexed system is sensitive to object motion. The multi-color techniques [8, 9] suffer from lower SNR due to the spectral division and are sensitive to the surface color spectrum. So what has been the missing piece, and in some circles the “holy grail” of structured-light research, is a structured-light pattern that allows the measurement of surface topology from a single image, without ambiguity, with high accuracy, and with insensitivity to albedo variations.

Several one-shot projection patterns have been proposed to recover range data from a single image [10, 11, 12]. For example, a gradient pattern [10, 11] can be used to retrieve phase non-ambiguously; however, this approach is typically noisy and highly sensitive to albedo variation. A single-pattern technique that is both insensitive to albedo and non-ambiguous was introduced by Maruyama and Abe, who use binary coding to identify each line in a single frame [12]. While this line-index approach is sensitive to highly textured surfaces, we believe the strategy is correct. What is needed, however, is a general approach to the single-pattern problem.

We have discovered a systematic way of generating such patterns by combining multiple patterns into a single Composite Pattern (CP) that can be continuously projected. The area of structured light is a crowded art, with thousands of custom systems developed for thousands of different applications over the last 70 years. Several structured light scanners are commercially available, but they are expensive and have limited markets and specialized capabilities. Most structured light research has been funded by industry and limited to specific applications. We have pursued a general mathematical model [3, 13] of the different structured light techniques, along with a general depth reconstruction methodology. Our strategy has been to treat structured light systems as wide-bandwidth parallel communications channels, so that well-known concepts of communications theory can be applied to structured light technology for optimization, comparative analysis, and standardization of performance metrics. We also realized something else from our modeling efforts: the spatial dimension orthogonal to the depth distortion (the orthogonal dimension, as opposed to the phase dimension) is underutilized and can be used to modulate and combine multiple patterns into a single composite pattern [14]. Furthermore, this methodology can be applied to a variety of existing multi-pattern techniques.

In contrast to the ad hoc single-pattern techniques mentioned above, we introduce a systematic methodology, based on well-known communications theory, to combine multiple patterns into one single composite pattern. The individual patterns are spatially modulated along the orthogonal dimension, perpendicular to the phase dimension. In this way we can take advantage of the existing procedures for traditional multiple-pattern methods such as Phase Measuring Profilometry (PMP) [5], Linearly Coded Profilometry (LCP) [7], and other multi-frame techniques [15, 16], while projecting only a single frame onto the target object. This composite modulation approach is suitable for most successive-projection patterns; however, for simplicity of demonstration, this paper focuses on the coding and decoding procedures of composite patterns for the PMP technique. In our system, a single frame of the composite PMP pattern is formed and projected onto the target object. The reflected image is decoded to retrieve the multiple PMP frames, and the phase distribution distorted by the object depth is calculated. The depth of the object can then be reconstructed from the phase following the traditional PMP method.

2. Traditional PMP method

The PMP range finding method has several advantages, including pixel-wise calculation, resistance to ambient light, resistance to reflection variation, and the need for as few as three frames for whole-field depth reconstruction. Sinusoidal patterns are projected and phase-shifted by 2π/N a total of N times:

$$I_n^p(x^p, y^p) = A^p + B^p \cos\!\left(2\pi f_\phi y^p - \tfrac{2\pi n}{N}\right), \tag{1}$$

where $A^p$ and $B^p$ are the projection constants and $(x^p, y^p)$ are the projector coordinates. The $y^p$ dimension is in the direction of the depth distortion and is called the phase dimension; the $x^p$ dimension is perpendicular to the phase dimension, so we call it the orthogonal dimension. The frequency $f_\phi$ of the sinusoid is in the phase direction. The subscript $n$ represents the phase-shift index, $n = 1, 2, \ldots, N$, where $N$ is the total number of phase shifts.
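For illustration, the following Python sketch generates the $N$ phase-shifted patterns of Eq. (1); the image size and the projection constants $A^p$ and $B^p$ are example values chosen to fill an 8-bit intensity range, not the settings used in our experiments.

```python
import numpy as np

def pmp_patterns(height, width, N=4, f_phi=1.0, A=127.5, B=127.5):
    """Generate the N phase-shifted sinusoidal PMP patterns of Eq. (1).

    f_phi is the number of sinusoid cycles across the phase (y) dimension;
    A and B are example projection constants chosen to fill an 8-bit range.
    """
    yp = np.arange(height) / height          # normalized phase coordinate
    patterns = []
    for n in range(1, N + 1):
        col = A + B * np.cos(2 * np.pi * f_phi * yp - 2 * np.pi * n / N)
        patterns.append(np.tile(col[:, None], (1, width)))  # constant along x
    return np.stack(patterns)                # shape (N, height, width)
```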

The reflected intensity images from the object surface after successive projections are

$$I_n(x, y) = \alpha(x, y)\left[A + B\cos\!\left(2\pi f_\phi y^p + \phi(x, y) - \tfrac{2\pi n}{N}\right)\right] \tag{2}$$

where (x, y) are the image coordinates and α(x, y) is the reflectance variation or the albedo. The pixel-wise phase distortion ϕ(x, y) of the sinusoid wave corresponds to the object surface depth. The value of ϕ(x, y) is determined from the captured patterns by

$$\phi(x, y) = \arctan\!\left[\frac{\sum_{n=1}^{N} I_n(x, y)\sin(2\pi n/N)}{\sum_{n=1}^{N} I_n(x, y)\cos(2\pi n/N)}\right]. \tag{3}$$

The albedo, $\alpha(x, y)$, is cancelled in this calculation; therefore, the depth obtained through this approach is independent of the albedo.
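A minimal sketch of the phase computation of Eq. (3) is given below; it assumes the $N$ captured images are stacked in a NumPy array and uses the two-argument arctangent, so the result is wrapped to $(-\pi, \pi]$.

```python
import numpy as np

def pmp_phase(images):
    """Pixel-wise wrapped phase of Eq. (3) from N captured images.

    `images` has shape (N, H, W); the albedo cancels in the ratio, and
    np.arctan2 returns values wrapped to the interval (-pi, pi].
    """
    N = images.shape[0]
    n = np.arange(1, N + 1)
    s = np.tensordot(np.sin(2 * np.pi * n / N), images, axes=(0, 0))
    c = np.tensordot(np.cos(2 * np.pi * n / N), images, axes=(0, 0))
    return np.arctan2(s, c)
```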

Fig. 1. Geometrical representation of the experimental setup.

When calibrating the range finding system, the phase map of the reference plane, $\phi_r(x, y)$, is pre-calculated from the projections on the reference plane. The depth of the object surface with respect to the reference plane is then easily obtained through simple geometric relations [17]. As shown in Fig. 1, the distance from the projector lens center, $O_p$, to the camera lens center, $O_c$, is $d$, and both lens centers lie a distance $L$ from the reference plane. The height of the object at point $A$, $h$, is calculated by

$$h = \frac{\overline{BC}\,(L/d)}{1 + \overline{BC}/d}, \tag{4}$$

and $\overline{BC}$ is proportional to the difference between the phase at point $B$, $\phi_B$, and the phase at point $C$, $\phi_C$, as

$$\overline{BC} = \beta(\phi_C - \phi_B). \tag{5}$$

The constant $\beta$, as well as the other geometric parameters $L$ and $d$, is determined during the calibration procedure.
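The depth calculation of Eqs. (4) and (5) can be written compactly as in the sketch below; the calibration constants $\beta$, $L$, and $d$, as well as the sign convention of the phase difference, depend on the particular setup.

```python
def depth_from_phase(phi_obj, phi_ref, beta, L, d):
    """Depth relative to the reference plane via Eqs. (4) and (5).

    phi_obj and phi_ref are the object and reference phase maps;
    beta, L, d are the calibration constants of Fig. 1.  The sign of
    the phase difference depends on the geometry of the setup.
    """
    BC = beta * (phi_ref - phi_obj)       # Eq. (5): phase difference scaled by beta
    return BC * (L / d) / (1.0 + BC / d)  # Eq. (4)
```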

The phase value calculated from Eq. (3) is wrapped into the range $(-\pi, \pi]$, independent of the frequency in the phase direction. A phase unwrapping procedure retrieves the non-ambiguous phase value from the wrapped phase [18, 19]. With relatively higher frequencies in the phase direction, the range data have a higher signal-to-noise ratio (SNR) after non-ambiguous phase unwrapping [20].

3. Composite PMP Pattern

In order to combine multiple patterns into one single image, each individual pattern is modulated along the orthogonal direction with a distinct carrier frequency and then summed, as shown in Fig. 2. Each channel in the composite image along the orthogonal direction therefore represents one individual PMP pattern used for the phase calculation. Similar to the patterns projected in the multi-frame approach of Eq. (1), the image patterns to be modulated are

$$I_n^p = c + \cos\!\left(2\pi f_\phi y^p - \tfrac{2\pi n}{N}\right). \tag{6}$$

A constant $c$ is used here to offset $I_n^p$ to non-negative values; negative signal values would cause incorrect demodulation with our AM-based demodulation method, as discussed later. The signal patterns are then multiplied by cosine waves with distinct carrier frequencies along the orthogonal direction. The composite pattern accumulates each channel such that

Fig. 2. A composite pattern (CP) is formed by modulating traditional PMP patterns along the orthogonal direction.

$$I^p = A^p + B^p \sum_{n=1}^{N} I_n^p \cos(2\pi f_n^p x^p) \tag{7}$$

where $f_n^p$ are the carrier frequencies along the orthogonal direction and $n$ is the shift index from 1 to $N$. The projection constants $A^p$ and $B^p$ are carefully calculated as

$$A^p = I_{\min} - B^p \min\!\left\{\sum_{n=1}^{N} I_n^p \cos(2\pi f_n^p x^p)\right\} \tag{8}$$

$$B^p = \frac{I_{\max} - I_{\min}}{\max\!\left\{\sum_{n=1}^{N} I_n^p \cos(2\pi f_n^p x^p)\right\} - \min\!\left\{\sum_{n=1}^{N} I_n^p \cos(2\pi f_n^p x^p)\right\}} \tag{9}$$

so that the intensity range of the projected composite pattern falls within $[I_{\min}, I_{\max}]$. To increase the SNR, $B^p$ should reach its maximum allowed value [20]; therefore, $[I_{\min}, I_{\max}]$ should match the intensity capacity of the projector to retrieve optimal depth information.
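The following sketch assembles a composite pattern according to Eqs. (6)–(9); the carrier frequencies shown are the experimental values reported later in Sec. 4, while the remaining parameters are example choices.

```python
import numpy as np

def composite_pattern(height, width, N=4, f_phi=1.0,
                      carriers=(50, 85, 120, 155),
                      c=1.0, I_min=0.0, I_max=255.0):
    """Build the composite pattern of Eqs. (6)-(9).

    `carriers` are the orthogonal carrier frequencies f_n^p in cycles per
    field of view (here the experimental values of Sec. 4).
    """
    yp = np.arange(height)[:, None] / height      # phase coordinate
    xp = np.arange(width)[None, :] / width        # orthogonal coordinate
    total = np.zeros((height, width))
    for n, fn in enumerate(carriers, start=1):
        In = c + np.cos(2 * np.pi * f_phi * yp - 2 * np.pi * n / N)   # Eq. (6)
        total += In * np.cos(2 * np.pi * fn * xp)                     # modulate channel n
    # Eqs. (8)-(9): scale so the projected intensities span [I_min, I_max]
    B = (I_max - I_min) / (total.max() - total.min())
    A = I_min - B * total.min()
    return A + B * total                                              # Eq. (7)
```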

The orthogonal modulation frequencies $f_n^p$ are designed to be evenly distributed and away from zero frequency. This modulation is analogous to AM modulation; no pattern is modulated in the “DC” or baseband channel. Although the usable bandwidth of the composite pattern is reduced by giving up the baseband channel, the modulated pattern is less sensitive to ambient light. Ideally, the reflected composite pattern image on the target object surface captured by the camera is

$$I_{CP}(x, y) = \alpha(x, y)\left\{A + B \sum_{n=1}^{N} I_n(x, y)\cos(2\pi f_n x)\right\} \tag{10}$$

where

$$I_n(x, y) = c + \cos\!\left(2\pi f_\phi y^p + \phi(x, y) - \tfrac{2\pi n}{N}\right), \tag{11}$$

and $\alpha(x, y)$ is the albedo and $\phi(x, y)$ is the distorted phase as in Eq. (2). The actual carrier frequencies $f_n$ in the camera view may differ from $f_n^p$ due to perspective distortion between the projector and the camera. To make the modulation frequency $f_n$ as independent as possible of the topology of the object surface along each orthogonal line, the camera and projector are carefully aligned to share approximately the same world coordinates in both the orthogonal and depth directions. If the orthogonal and phase axes of the camera and projector fields have a relative rotation between them, the orthogonal carrier modulation of the projector will leak into the phase component captured by the camera.

Fig. 3. Illustration of the spectrum of the captured image for the four-channel composite pattern projection.

Since the projector and camera digitally sample the projection pattern and captured image, detection of the high-frequency carrier waves and the recovery procedure rely heavily on the intensity and spatial resolution of the projector-camera system. Appropriate carrier frequencies $f_n^p$ have to be carefully assigned. Their selection is highly dependent on the projector and camera quality, as well as the experimental setup. To minimize channel leakage, adjacent carrier frequencies $f_n^p$ should be spread out as much as possible; however, limited by the spatial and intensity resolution, they must be confined to a certain range for reliable depth recovery.

We process the reflected images as 1-D raster signals, where each line along the orthogonal dimension is an independent signal vector. The received orthogonal spectrum of a typical signal vector for four composite pattern channels is illustrated in Fig. 3. The four carrier frequencies are evenly distributed and are separated from the ambient light reflection at baseband. The captured image is processed, as a set of 1-D signal vectors, by band-pass filters to separate out each channel. To achieve uniform filtering across the channels, the band-pass filters are centered at $f_n$ and are all derived from the same low-pass Butterworth filter design; in other words, they all have the same passband span and are symmetric about $f_n$. The Butterworth filter is used at this stage for its smooth transition and minimal side-lobe ripple. The order of the Butterworth filter is carefully selected to reduce the crosstalk between channels; a compromise between side-lobe effects and crosstalk is required to obtain acceptable reconstruction performance. The cutoff frequencies for each band are designed such that

$$f_n^c = \tfrac{1}{2}(f_{n-1} + f_n) \tag{12}$$

where $n = 1, 2, \ldots, N$ and $f_0 = 0$, which is the baseband channel. The orthogonal signal vectors after 1-D band-pass filtering are

Fig. 4. Block diagram of the decoding process.

$$I_n^{BP}(x, y) = I_{CP}(x, y) * h_{BP_n}(x) \approx I_n(x, y)\cos(2\pi f_n x) \tag{13}$$

where $*$ is the convolution operator and $h_{BP_n}(x)$ is the band-pass filter along the orthogonal direction centered at frequency $f_n$. The baseband image $I_n(x, y)$ is assumed to be band-limited along the orthogonal dimension with a bandwidth less than or equal to that of the filter $h_{BP_n}(x)$.
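One possible realization of this channel separation is sketched below; it assumes zero-phase Butterworth filtering with cutoffs chosen per Eq. (12), and the handling of the upper edge of the last channel is an implementation choice rather than something specified by our method.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandpass_channels(image, carriers, order=7, width=640):
    """Separate each modulated channel with Butterworth band-pass filters.

    `image` is the captured composite image (H x W); `carriers` are the
    received carrier frequencies f_n in cycles per field of view.  Each
    row is treated as an independent 1-D raster signal.  Band edges follow
    Eq. (12); zero-phase filtering (sosfiltfilt) is one possible choice.
    """
    fs = float(width)               # samples per field of view
    channels = []
    f_prev = 0.0                    # f_0 = 0, the baseband channel
    for i, fn in enumerate(carriers):
        low = 0.5 * (f_prev + fn)                                    # Eq. (12)
        f_next = carriers[i + 1] if i + 1 < len(carriers) else fn + (fn - f_prev)
        high = 0.5 * (fn + f_next)
        sos = butter(order, [low / (fs / 2), high / (fs / 2)],
                     btype='band', output='sos')
        channels.append(sosfiltfilt(sos, image, axis=1))
        f_prev = fn
    return channels                 # list of N band-passed images
```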

The filtered images have to be demodulated to retrieve each individual pattern $I_n(x, y)$. Two critical practical factors have to be considered in the demodulation process. First, the perspective distortion causes a depth-dependent variation of the orthogonal carrier frequencies. Second, in the practical experimental setup, the cosine carrier wave on each orthogonal line has an unknown phase shift. That is, considering the perspective distortion, the practical image after band-pass filtering follows from Eq. (13) as

$$I_n^{BP}(x, y) = I_n(x, y)\cos\!\left(2\pi(f_n + \delta f)x + \delta\theta\right) \tag{14}$$

where $f_n$ has a small variation $\delta f$ and $\delta\theta$ is the unknown phase shift. By squaring both sides of Eq. (14) we have

$$\left(I_n^{BP}(x, y)\right)^2 = \left(I_n(x, y)\right)^2 \cdot \frac{1 + \cos\!\left(4\pi(f_n + \delta f)x + 2\delta\theta\right)}{2}. \tag{15}$$

This is low-pass filtered by $h_{LP}(x)$ with a cutoff frequency of $f_n$ such that

$$g(x, y) = \left(I_n^{BP}(x, y)\right)^2 * h_{LP}(x) = \frac{\left(I_n(x, y)\right)^2}{2}. \tag{16}$$

The modulated image is recovered by taking the square root of Eq. (16) such that

$$I_n^R(x, y) = \sqrt{2\,g(x, y)} = \sqrt{2\left[\left(I_n^{BP}(x, y)\right)^2 * h_{LP}(x)\right]}. \tag{17}$$

Because the demodulation process involves a squaring operation, $I_n^R(x, y)$ has to be non-negative. This is effectively an AM demodulation technique that recovers the PMP pattern as the positive envelope. The demodulation procedure is summarized in the block diagram of Fig. 4. The recovered images $I_n^R(x, y)$ represent the individual patterns of traditional PMP and are used to retrieve the depth of the measured object with the traditional PMP method.
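A corresponding sketch of the square-law demodulation of Eqs. (15)–(17) is given below; the low-pass filter order is an example value, and the clipping of small negative values before the square root is a numerical safeguard rather than part of the derivation.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def demodulate_channel(channel_bp, fn, width=640, order=5):
    """Recover one PMP pattern from its band-passed channel, Eqs. (15)-(17).

    Squaring moves the signal energy to baseband and 2*f_n; a low-pass
    filter with cutoff near f_n keeps the baseband term, and the square
    root recovers the positive envelope.
    """
    fs = float(width)
    squared = channel_bp ** 2                              # Eq. (15)
    sos = butter(order, fn / (fs / 2), btype='low', output='sos')
    g = sosfiltfilt(sos, squared, axis=1)                  # Eq. (16)
    return np.sqrt(2.0 * np.clip(g, 0.0, None))            # Eq. (17)
```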

The range data, with respect to the reference plane, can then be calculated in the same way as described in Sec. 2. As in Eq. (13), leakage between orthogonal channels occurs when the measured object surface has significant albedo or depth variation in the orthogonal direction. However, as inherited from the PMP method, the reconstructed depth in the phase direction is resistant to depth discontinuities and albedo variation.

4. Experiments

We built the range finding system shown in Fig. 1 based on the CP technique. The projector is a Texas Instruments (TI) Digital Light Processor (DLP) projector with an 800×600 micro-mechanical mirror array. A DT3120 frame grabber captures images from a monochrome CCD camera at a spatial resolution of 640×480 with 8-bit intensity resolution.

To simplify the decoding procedure, the frequency in the phase direction, $f_\phi$, is selected to be unit frequency, so no unwrapping algorithm needs to be implemented. The number of patterns is N=4. This choice came from trial and error: the minimum of N=3 has too much inherent reconstruction noise, and N>4 reduces the lateral resolution for the given camera resolution. In this experiment, the projector carrier frequencies $f_n^p$ are 50, 85, 120, and 155 cycles per field of view for an orthogonal field-of-view width of 800 pixels. The corresponding received carrier frequencies are 33, 56, 79, and 103 cycles per field of view for a field-of-view width of 640 pixels. The lowest modulation frequency is selected to be higher than the difference between adjacent modulation frequencies to minimize the effect of ambient light reflection. The projector has a field of view of 475 mm in height and 638 mm in width, while the field of view of the camera is 358 mm high and 463 mm wide. The order of the Butterworth band-pass filter is selected to be 7 and the width of the passband is 10 to reduce crosstalk between adjacent channels. Figure 5(a) shows the projection pattern on the reference plane, and the recovered reference phase map is shown in Fig. 5(b). To test sensitivity to depth variation, a half-circular step with a diameter of 300 mm and a thickness of 85 mm is placed on top of the reference plane. The reflected image and the corresponding phase map are shown in Figs. 5(c) and (d), respectively. The depths of the object scene are calculated pixel-wise following Eq. (4) and are shown in Fig. 5(e). The demodulation procedure generates edge-response effects in the reconstructed depths: due to the band-limited filtering, the originally sharp edges of the circular step are reconstructed as gradual transitions between the two depth levels in the depth map. The abrupt depth edges act as step edges in the orthogonal direction for all pattern channels, so the impulse response of the filters smooths the edges.
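For reference, applying Eq. (12) to the received carrier frequencies gives channel boundaries of $f_1^c = \tfrac{1}{2}(0 + 33) = 16.5$, $f_2^c = \tfrac{1}{2}(33 + 56) = 44.5$, $f_3^c = \tfrac{1}{2}(56 + 79) = 67.5$, and $f_4^c = \tfrac{1}{2}(79 + 103) = 91$ cycles per field of view, so each received carrier lies well inside its pass band.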

To test the performance of this technique in the presence of abrupt albedo variation, the object is set to be a flat plane at zero depth with gray level 255, with a dark circular area of gray level 40 at the center. The captured image is shown in Fig. 6(a). The 2D range representation of the reconstructed depths is shown in Fig. 6(b), and the 3D depths are shown in Fig. 6(c). The internal area of the dark circle is properly reconstructed, independent of albedo. However, the abrupt albedo variations at the edge of the circle generate space-variant blurring, which results in depth errors around the edges. The albedo pattern acts as a window operation against the orthogonal sinusoidal patterns: when the window edge intersects a high-intensity area, signal leakage occurs which corrupts the individual patterns differently, so the reconstructed phase oscillates in value with a spatial dependency. We characterize this behavior with an eye diagram, as shown in Fig. 6(d). To construct an eye diagram of these edge responses, the composite pattern is shifted nine times along the orthogonal direction with a 4-pixel shift at each step, and the depths are reconstructed.

Fig. 5. Depth reconstruction of a single depth step with a circular shape. (a) Captured image of the reference plane. (b) Phase map of the reference plane. (c) Captured image of the object plane. (d) Phase map of the object plane. (e) Reconstructed depth of the object scene.

Figure 6(d) plots the depths at horizontal line 200 for each shift, centered at the left circle edge on that line, so that an eye diagram is formed. The width and height of the edge response in the eye diagram indicate the performance of this technique in the presence of albedo variation: the eye width is 111 pixels for a 640-pixel scene width. The eye width corresponds to the width of the Butterworth band-pass impulse response, which has a pulse width of 118 pixels as defined by the first zero crossing. Normalizing the eye width by the field of view gives a performance measure of 17% ≈ 100% × 111/640 for future comparison.
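A minimal sketch of how such an eye diagram can be assembled from the shifted reconstructions is shown below; the row index, edge column, and window half-width are example values, not the exact parameters of our measurement.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_eye_diagram(depth_maps, row=200, edge_col=320, half_width=80):
    """Overlay depth profiles from the shifted composite-pattern captures.

    depth_maps: list of reconstructed depth images, one per 4-pixel shift
    of the projected pattern.  Profiles along `row`, centered on the circle
    edge at column `edge_col` (an example value), are overlaid so the open
    "eye" width can be read off directly.
    """
    x = np.arange(-half_width, half_width)
    for depth in depth_maps:
        profile = depth[row, edge_col - half_width:edge_col + half_width]
        plt.plot(x, profile, color='k', alpha=0.5)
    plt.xlabel('pixels from circle edge')
    plt.ylabel('reconstructed depth')
    plt.show()
```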

Fig. 6. 3D reconstruction in the presence of albedo variation. (a) Captured object image with a dark circle at the center. (b) Range image representation of the reconstructed 3D surface. (c) Reconstructed 3D object surface. (d) The “eye diagram” generated at line y=200 from the depth maps for nine shifts of the composite pattern along the orthogonal direction.

The key benefit of our composite pattern technique is that video sequences can be captured at the frame rate of the camera, so the technique works well for real-time 3D reconstruction. Four movie clips were made to illustrate its feasibility for real-time 3D reconstruction, as shown in Figs. 7–10. In Figs. 7 and 8, the captured frame is shown together with the depth maps; shadows are represented as black areas in the depth images. Figure 7 shows the subject tossing an icosahedron and Fig. 8 shows the subject stretching out her hands. Two human-computer interface examples are given in Figs. 9 and 10. In Fig. 9, the depth value at the location of a specified “button” is monitored; when the subject’s hand crosses the depth threshold at these locations, the button sequence is activated, allowing the user to operate dynamic button menu options. Although there is some noise in these movies of the subject in real environments, the temporal depth changes are clearly recorded in the range frames.

In Fig. 10, a protocol based on the 3D positions of the subject’s hands is created to control a virtual environment. The upper image is the captured composite pattern reflection and the lower image shows the 3D virtual environment that the hands are controlling. Hand detection is facilitated by thresholding the depth imagery to segment the hands from the background and the subject’s body. The 3D centroid of each hand is used to estimate the hand position. The control protocol is defined such that the depth difference between the two hands controls the rotation of the virtual environment, horizontal movement controls the translation, and the average depth of the hands controls zooming in and out.
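A toy sketch of such a control protocol is given below; the depth threshold, the left/right split of the frame, and the returned control quantities are hypothetical choices for illustration, not the exact logic used in our demonstration.

```python
import numpy as np

def hand_controls(depth, depth_threshold=50.0, split_col=None):
    """Toy sketch of the hand-based control protocol described above.

    Pixels with depth above `depth_threshold` (an assumed value in the
    units of the depth map) are treated as hands; the frame is split at
    `split_col` into left/right halves and a 3-D centroid is computed for
    each.  The returned values map to rotation, translation, and zoom.
    """
    h, w = depth.shape
    if split_col is None:
        split_col = w // 2
    mask = depth > depth_threshold          # hands are closer, so depth is larger

    def centroid(sub_mask, sub_depth, col_offset):
        ys, xs = np.nonzero(sub_mask)
        if len(xs) == 0:
            return None
        return (xs.mean() + col_offset, ys.mean(), sub_depth[ys, xs].mean())

    left = centroid(mask[:, :split_col], depth[:, :split_col], 0)
    right = centroid(mask[:, split_col:], depth[:, split_col:], split_col)
    if left is None or right is None:
        return None
    rotation = right[2] - left[2]                         # depth difference between hands
    translation = 0.5 * (left[0] + right[0]) - split_col  # mean horizontal position
    zoom = 0.5 * (left[2] + right[2])                     # average depth of the hands
    return rotation, translation, zoom
```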

5. Conclusion

We present a general methodology to combine traditional multi-frame structured-light patterns into a single composite pattern. Although the band-pass and low-pass filters used in the decoding procedure increase the complexity and blur the depth reconstruction, the composite pattern can be used to recover depth data of moving or non-rigid objects in real time. The resolution is sufficient for human-computer interfacing applications, which we are pursuing. The CP methodology should be applicable to many multi-pattern range finding techniques, with the depth recovery procedure following the traditional method associated with the particular multi-pattern technique. However, bandwidth along the orthogonal dimension is sacrificed for additional patterns. The albedo insensitivity and non-ambiguous depth reconstruction of the PMP method are preserved, but there are problems near edges of abrupt albedo and depth variation. We applied an eye diagram performance measure to these edge regions as a baseline for future optimization. We believe future research will be directed at improving the spatial resolution and decreasing the edge effects from abrupt albedo and depth variation. Another aspect of future research is implementation using near-infrared light to project and capture the composite pattern. We are actively pursuing this; if successful, it will allow an invisible computer interface and/or a combination of RGB imaging with depth information.

Fig. 7. (1.0 Mb) Movie clip showing real-time depth reconstruction for the subject tossing an octahedron.

Fig. 8. (1.0 Mb) Movie clip showing real-time depth reconstruction for the subject stretching out her hands.

Fig. 9. (0.6 Mb) Movie clip showing real-time depth reconstruction for human-computer interfacing.

Fig. 10. (1.8 Mb) Movie clip showing hand control of a virtual reality viewpoint.

Acknowledgements

We would like to acknowledge the contributions of Paige Baldassaro of the Institute for Scientific Research, Fairmont WV, for developing the hand choreography used in Fig. 10.

References and links

1. G. Schmaltz, “A method for presenting the profile curves of rough surfaces,” Naturwiss. 18, 315–316 (1932).

2. P. M. Will and K. S. Pennington, “Grid coding: A preprocessing technique for robot and machine vision,” Artif. Intell. 2, 319–329 (1971).

3. R. C. Daley and L. G. Hassebrook, “Channel capacity model of binary encoded structured light-stripe illumination,” Appl. Opt. 37, 3689–3696 (1998).

4. J. L. Posdamer and M. D. Altschuler, “Surface measurement by space-encoded projected beam systems,” Comput. Vision Graph. Image Process. 18, 1–17 (1982).

5. V. Srinivasan, H. C. Liu, and M. Halioua, “Automated phase measuring profilometry: a phase mapping approach,” Appl. Opt. 24, 185–188 (1985).

6. J. Li and L. G. Hassebrook, “A robust SVD based calibration of active range sensors,” in SPIE Proceedings on Visual Information Processing IX, S. K. Park and Z. Rahman, eds. (2000).

7. Q. Fang and S. Zheng, “Linearly coded profilometry,” Appl. Opt. 36, 2401–2407 (1997).

8. K. Boyer and A. Kak, “Color-encoded structured light for rapid active ranging,” IEEE Trans. Pattern Anal. Mach. Intell. 9, 2724–2729 (1991).

9. O. A. Skydan, M. J. Lalor, and D. R. Burton, “Technique for phase measurement and surface reconstruction by use of colored structured light,” Appl. Opt. 41, 6104–6117 (2002).

10. D. S. Goodman and L. G. Hassebrook, “Face recognition under varying pose,” IBM Technical Disclosure Bulletin 27, 2671–2673 (1984).

11. B. Carrihill and R. Hummel, “Experiments with intensity ratio depth sensor,” Comput. Vision Graph. Image Process. 32, 337–358 (1985).

12. M. Maruyama and S. Abe, “Range sensing by projecting multiple slits with random cuts,” IEEE Trans. Pattern Anal. Mach. Intell. 15, 647–651 (1993).

13. L. G. Hassebrook, R. C. Daley, and W. Chimitt, “Application of communication theory to high speed structured light illumination,” in SPIE Proceedings, Harding and Svetkoff, eds., Proc. SPIE 3204, 102–113 (1997).

14. G. Goli, C. Guan, L. G. Hassebrook, and D. L. Lau, “Video rate three dimensional data acquisition using composite light structure pattern,” Tech. Rep. CSP 02-002, University of Kentucky, Department of Electrical and Computer Engineering, Lexington, KY USA (2002).

15. J. Batlle, E. Mouaddib, and J. Salvi, “Recent progress in coded structured light as a technique to solve the correspondence problem: A survey,” Pattern Recogn. 31, 963–982 (1998).

16. F. Chen, G. M. Brown, and M. Song, “Overview of three-dimensional shape measurement using optical methods,” Opt. Eng. 39, 10–22 (2000).

17. J. L. Li, H. J. Su, and X. Y. Su, “Two-frequency grating used in phase-measuring profilometry,” Appl. Opt. 36, 277–280 (1997).

18. T. R. Judge and P. J. Bryanston-Cross, “A review of phase unwrapping techniques in fringe analysis,” Opt. Lasers Eng. 21, 199–239 (1994).

19. H. Zhao, W. Chen, and Y. Tan, “Phase-unwrapping algorithm for the measurement of three-dimensional object shapes,” Appl. Opt. 33, 4497–4500 (1994).

20. J. Li, L. G. Hassebrook, and C. Guan, “Optimized two-frequency phase measuring profilometry light sensor temporal noise sensitivity,” J. Opt. Soc. Am. A 20, 106–115 (2003).

Supplementary Material (4)

Media 1: MPG (1036 KB)     
Media 2: MPG (1016 KB)     
Media 3: MPG (584 KB)     
Media 4: MPG (1886 KB)     
