Optica Publishing Group

Influence of projector pixel shape on ultrahigh-resolution 3D shape measurement

Open Access

Abstract

The state-of-the-art three-dimensional (3D) shape measurement with digital fringe projection (DFP) techniques assumes that the influence of the projector pixel shape is negligible. However, our research reveals that when the camera pixel size is much smaller than the projector pixel size in object space (e.g., 1/5), the shape of the projector pixel can play a critical role in the ultimate measurement quality. This paper evaluates the performance of two projector pixel shapes: rectangular and diamond. Both simulation and experimental results demonstrate that when the camera pixel size is significantly smaller than the projector pixel size, it is advantageous for an ultrahigh-resolution 3D shape measurement system to use a projector with rectangular-shaped pixels rather than one with diamond-shaped pixels.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

With recent advancements in the three-dimensional (3D) shape measurement field and the increasing availability of affordable commercial 3D sensors (e.g., Microsoft Kinect, Intel RealSense, and Orbbec Astra), 3D shape measurement techniques have been rapidly impacting fields ranging from biomedical engineering and manufacturing to robotics and entertainment [1].

Among existing 3D shape measurement techniques, digital fringe projection (DFP) has been especially popular because of its achievable high measurement accuracy and high spatial resolution [2]. A typical DFP system consists of one projector that projects fringe patterns onto the object and one camera that captures the fringe patterns scattered by the object surface. Instead of using intensity, a DFP system uses the carrier phase of the fringe image(s) for 3D shape recovery. To achieve high-quality measurement, high-quality phase has to be recovered. As such, DFP techniques can achieve camera-pixel-level spatial resolution under the fundamental assumption that high-quality sinusoidal fringe patterns can be captured.

There are basically two ways to create sinusoidal fringe patterns [3]: defocusing 1-bit binary images and directly using 8-bit sinusoidal images. The latter is straightforward in concept and implementation, but has three major limitations: 1) nonlinear gamma influence of the projector; 2) the requirement of a large number of pixels to represent an accurate sinusoidal profile; and 3) the requirement of precise synchronization between the projector and the camera. Typically, narrower fringe patterns produce phase with a higher signal-to-noise ratio (SNR) and thus better measurement quality. Therefore, it is desirable to use narrow fringe patterns for 3D shape measurement. However, using multi-level grayscale images to produce sinusoidal patterns may not be preferable due to Limitation #2. The binary defocusing technique, in contrast, can use a smaller number of pixels to produce sinusoidal patterns through defocusing, and thus has been extensively studied for high-quality 3D shape measurement.

Achieving higher speed and higher measurement resolution is increasingly needed whilst simultaneously maintaining a large field of view (FOV). Because realizing higher projector resolution is more costly, it is more practical to use a smaller camera pixel size in object space. In a typical DFP system, the projector pixel size and the camera pixel size are comparable in object space, so the shape of the projector pixel is typically not considered. However, to achieve ultrahigh resolution, the camera pixel size should be much smaller than the projector pixel size (e.g., the camera pixel is 1/5 of the projector pixel in length), and in this regime the shape of the projector pixel may no longer be negligible.

There are two types of digital micromirror devices (DMDs) developed for digital light processing (DLP) projectors: those with rectangular-shaped pixels and those with diamond-shaped pixels. Our research found that, when the camera pixel size is much smaller than the projector pixel size in order to achieve ultrahigh-resolution 3D shape measurement, diamond-shaped DMD pixels cannot be used to achieve high-quality 3D shape measurement. We believe that this is caused by the sampling effect of the mismatched pixel shape between the computer-generated pixel and the projected pixel. This paper evaluates the performance of the DFP system for ultrahigh-resolution 3D shape measurement using these two different types of projectors. Both simulations and experiments demonstrated that a projector with rectangular-shaped pixels is more suitable for ultrahigh-resolution 3D shape measurement.

Section 2 explains the basic principle of the phase-shifting algorithm. Section 3 shows simulation results. Section 4 presents experimental results, and Sec. 5 summarizes this work.

2. Phase-shifting algorithm

Because of their high speed, high accuracy, and robustness to noise, phase-shifting algorithms have been extensively adopted for 3D optical metrology [4]. For an $N$-step phase-shifting algorithm with equal phase shifts, the $k$-th fringe pattern can be mathematically described as,

$$I_k(x,y) = I'(x,y) + I''(x,y) \cos[\phi(x,y)+2k\pi/N],$$
where $I'(x,y)$ is the average intensity, $I''(x,y)$ is the intensity modulation, $\phi (x,y)$ is the phase to be solved for, and $k = 1, 2, \dots , N$. The phase can be calculated as,
$$\phi(x,y) ={-}\arctan\left( \frac{\sum_{k=1}^N I_k(x,y) \sin(2k\pi/N)}{\sum_{k=1}^N I_k(x,y) \cos(2k\pi/N)}\right) .$$
Due to the use of an arctangent function, the phase value obtained by Eq. (2) ranges from $-\pi$ to $\pi$ with $2\pi$ discontinuities, and is often regarded as the wrapped phase. To recover a continuous phase map, a phase unwrapping algorithm is required. The phase unwrapping algorithm adds or subtracts integer multiples of $2\pi$ at each pixel to remove the $2\pi$ discontinuities. There are two different categories of phase unwrapping algorithms: spatial phase unwrapping [5,6] and temporal phase unwrapping [7]. The former can only provide relative phase information because the algorithm is typically based on a reference point in the phase map. In contrast, a conventional temporal phase unwrapping method requires more patterns to determine the proper integer multiple of $2\pi$ for each pixel to unwrap the wrapped phase, whilst some recent geometric-constraint-based algorithms can also unwrap the phase without temporally acquiring more images [8–15]. In this research, we used the gray-coding method, one of the temporal phase unwrapping algorithms, to obtain the unwrapped phase map [16]. Once the unwrapped phase is obtained, 3D information can be recovered if the system is calibrated [17].
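As a concrete illustration, Eqs. (1) and (2) can be sketched in a few lines. The synthetic fringe values (average intensity 0.5, modulation 0.4), the ramp length, and the tolerance below are my own assumptions for a self-check, not values from this work:

```python
import numpy as np

def wrapped_phase(patterns):
    """Wrapped phase from N equally phase-shifted fringe images, per Eq. (2).

    patterns[k-1] is I_k with phase shift 2*k*pi/N, k = 1..N.
    arctan2 is used so the full (-pi, pi] range is recovered.
    """
    N = len(patterns)
    num = sum(I * np.sin(2 * k * np.pi / N) for k, I in enumerate(patterns, start=1))
    den = sum(I * np.cos(2 * k * np.pi / N) for k, I in enumerate(patterns, start=1))
    return -np.arctan2(num, den)

# Synthetic check with N = 12: recover a known phase ramp
N = 12
x = np.linspace(0, 4 * np.pi, 256)  # true phase over a few fringe periods
Ik = [0.5 + 0.4 * np.cos(x + 2 * k * np.pi / N) for k in range(1, N + 1)]
phi = wrapped_phase(Ik)
# phi equals x up to 2*pi wrapping; compare on the unit circle
err = np.angle(np.exp(1j * (phi - x)))
assert np.max(np.abs(err)) < 1e-8
```

The comparison via `np.angle(np.exp(1j * (...)))` sidesteps phase unwrapping by measuring the difference on the unit circle, which is convenient for such self-tests.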

3. Simulation

For a 3D shape measurement system employing the binary defocusing method, a DLP projector is preferable due to its achievable high contrast and high speed [1]. DLP projectors have two types of pixels: rectangular-shaped pixels and diamond-shaped pixels, as shown in Figs. 1(a) and 1(b). The rotation axis of a rectangular pixel is along the middle line of the pixel, whereas the rotation axis of a diamond-shaped pixel is along its diagonal. The definition of a projected image is clear for a projector with rectangular-shaped pixels because the one-to-one mapping is precisely along each row and each column, as shown in Fig. 1(c). However, the definition of a projected image for a projector with diamond-shaped pixels is not precisely a one-to-one mapping, as shown in Fig. 1(d). The different mapping could introduce differences during the resampling process when the projected image is captured by the camera, especially when the camera pixel size is much smaller than the projector pixel size in object space. Therefore, this research endeavors to study such an influence on ultrahigh-resolution 3D shape measurement.


Fig. 1. Illustration of two different types of DLP projectors. (a) Rectangular-shaped pixels and the rotation axis; (b) diamond-shaped pixels and the rotation axis; (c) row and column definition of rectangular-shaped pixels; (d) row and column definition of diamond-shaped pixels.


We first analyzed the phase quality by comparing the phase error between the two pixel shapes with respect to projector-to-camera pixel size ratios. We employed the binary defocusing technique with a fringe pitch of 12 pixels and a phase-shift number of 12 (i.e., $N = 12$ in Eq. (1)). Figure 2(a) shows one of the computer-generated phase-shifted binary patterns for a projector with rectangular-shaped pixels, and Fig. 2(b) shows one for a projector with diamond-shaped pixels. It is obvious that the diamond-shaped pixels produce a pattern with saw-tooth edge artifacts, whilst the pattern produced by rectangular-shaped pixels does not have such artifacts.
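A simple way to reproduce this saw-tooth effect in simulation is to model the diamond grid as a rectangular grid in which every other row is offset by half a pixel. The rendering model below (sub-pixel factor, half-pixel row offset) is my own assumption, used only to illustrate why vertical fringe edges jitter between rows on a diamond grid:

```python
import numpy as np

PITCH = 12  # fringe pitch in projector pixels, as in the simulation

def rect_pattern(h, w):
    """Binary vertical fringes on a rectangular pixel grid: every row identical."""
    x = np.arange(w)
    return np.tile((np.mod(x, PITCH) < PITCH // 2).astype(float), (h, 1))

def diamond_pattern(h, w, sub=2):
    """Model of the same fringes on a diamond grid (assumption): render each
    projector pixel as a sub x sub block and shift odd rows by half a pixel."""
    img = np.zeros((h * sub, w * sub))
    for r in range(h):
        shift = sub // 2 if r % 2 else 0          # half-pixel offset on odd rows
        x = (np.arange(w * sub) - shift) // sub   # projector pixel index per column
        img[r * sub:(r + 1) * sub, :] = (np.mod(x, PITCH) < PITCH // 2)
    return img

rp = rect_pattern(8, 24)
dp = diamond_pattern(8, 24)
assert np.all(rp == rp[0])     # rectangular grid: clean vertical edges
assert np.any(dp[0] != dp[2])  # diamond grid: edges jitter between rows (saw-tooth)
```

Because adjacent rows disagree near each binary transition, the fringe edges of the diamond-grid pattern zig-zag by half a pixel, matching the artifact visible in Fig. 2(b).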


Fig. 2. Ideal binary patterns with different projector pixel shapes. (a) One of the binary patterns for a projector with rectangular-shaped pixels; (b) one of the binary patterns for a projector with diamond-shaped pixels.


These phase-shifted patterns were blurred by applying a Gaussian filter with a size of $5 \times 5$ pixels and then resampled with a much smaller camera pixel size (i.e., one projector pixel corresponds to many resampled camera pixels). Figure 3(a) shows the resampled image of the blurred pattern shown in Fig. 2(a) when the projector pixel size is 16 times the camera pixel size. The resampled phase-shifted patterns are then used to compute the wrapped phase, as shown in Fig. 3(c). The wrapped phase is further unwrapped and compared against the ideal phase to determine the error. Figure 4(a) shows the error map when the projector-to-camera pixel size ratio is 16:1. We then calculated the root-mean-square (rms) value of the error map, 0.12 rad for this case, to quantify the phase quality.
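The blur–resample–phase pipeline for the rectangular-pixel case can be sketched in one dimension (the fringes are constant along $y$). The Gaussian sigma, the square-wave fringe definition, the nearest-neighbour resampling model, and the sign convention of the ideal ramp are my assumptions; the resulting staircase rms is only indicative of the ~0.1 rad level reported for the 16:1 case, not an exact reproduction:

```python
import numpy as np

PITCH, N, RATIO = 12, 12, 16   # fringe pitch, phase steps, projector:camera pixel ratio

def binary_row(width, shift):
    """One row of a square-wave fringe shifted by `shift` projector pixels."""
    x = np.arange(width)
    return (np.mod(x + shift, PITCH) < PITCH // 2).astype(float)

def blur_periodic(row, size=5, sigma=5 / 3):
    """Circular 1D Gaussian blur (sigma is my assumption for a 5-tap kernel)."""
    ax = np.arange(size) - size // 2
    g = np.exp(-ax**2 / (2 * sigma**2))
    g /= g.sum()
    return sum(w * np.roll(row, int(d)) for w, d in zip(g, ax))

width = 10 * PITCH
# defocused binary patterns, then nearest-neighbour resampling to camera pixels
cam = [np.repeat(blur_periodic(binary_row(width, k)), RATIO) for k in range(1, N + 1)]

num = sum(I * np.sin(2 * k * np.pi / N) for k, I in enumerate(cam, start=1))
den = sum(I * np.cos(2 * k * np.pi / N) for k, I in enumerate(cam, start=1))
phi = -np.arctan2(num, den)

xc = np.arange(width * RATIO) / RATIO   # camera pixel position in projector pixels
ideal = 2 * np.pi * xc / PITCH          # ideal ramp (sign per this pattern definition)
diff = np.angle(np.exp(1j * (phi - ideal)))
rms = np.std(diff)                      # staircase error from the 16:1 resampling
assert 0.03 < rms < 0.3
```

With integer-pixel phase shifts of a period-12 pattern, the recovered phase is exactly linear at projector pixel centers; the residual error in this sketch comes almost entirely from the piecewise-constant resampling onto the much finer camera grid.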


Fig. 3. Simulation results for an ultrahigh-resolution capture system. (a) One of the captured fringe images for the projector with rectangular-shaped pixels; (b) one of the captured fringe images for the projector with diamond-shaped pixels; (c) wrapped phase map for the captured fringe images for the projector with rectangular-shaped pixels; (d) wrapped phase map for the captured fringe images for the projector with diamond-shaped pixels.


Fig. 4. Representative phase error maps with different projector-to-camera pixel size ratios. For all images, purely black represents a phase error of -0.28 rad, and purely white represents a phase error of 0.28 rad. (a) Phase error map for the projector with rectangular-shaped pixels (rms 0.12 rad) when the projector-to-camera pixel size ratio is 16:1; (b) phase error map for the projector with diamond-shaped pixels (rms 0.17 rad) when the projector-to-camera pixel size ratio is 16:1; (c)-(d) phase error maps when the projector-to-camera pixel size ratio is 2:1; (e)-(f) phase error maps when the projector-to-camera pixel size ratio is 1:1.


We employed a similar process to analyze the phase error for the projected image with diamond-shaped pixels. Figure 3(b) shows the resampled version of the blurred pattern shown in Fig. 2(b) when the projector pixel size is 16 times the camera pixel size. Figure 3(d) shows the wrapped phase map, and Fig. 4(b) shows the error map with an rms error of 0.17 rad when the projector-to-camera pixel size ratio is 16:1. Clearly, the phase rms error is much larger than that obtained using the projector with rectangular-shaped pixels.

We then analyzed the phase error for different camera pixel sizes. Starting from the resampled patterns with the projector pixel size being 16 times the camera pixel size, we combined $m \times m$ pixels and recomputed the phase rms error. Figure 4 shows some representative error maps with different projector-to-camera pixel size ratios, and Fig. 5 compares the phase rms errors for both types of projectors. When the projector pixel size is similar to the camera pixel size, the two types of projectors do not show a significant difference, as expected. However, when the projector pixel size is much larger than the camera pixel size (e.g., 5 times), the phase rms error for the projector with rectangular-shaped pixels is much smaller than that for the projector with diamond-shaped pixels. Furthermore, one may notice that the difference quickly increases when the projector-to-camera pixel size ratio is between 2 and 5, and the separation remains quite large when the ratio is larger than 5 or so.
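The $m \times m$ combination used to emulate smaller ratios is plain block averaging. A minimal sketch (the function name is mine):

```python
import numpy as np

def bin_pixels(img, m):
    """Average m x m blocks of camera pixels to emulate an m-times larger
    camera pixel, i.e., an m-times smaller projector-to-camera pixel ratio."""
    h, w = img.shape
    h, w = h - h % m, w - w % m  # crop so the image is a multiple of m
    return img[:h, :w].reshape(h // m, m, w // m, m).mean(axis=(1, 3))

a = np.arange(16.0).reshape(4, 4)
b = bin_pixels(a, 2)
assert b.shape == (2, 2)
assert b[0, 0] == (0 + 1 + 4 + 5) / 4  # mean of the top-left 2x2 block
```

Binning the already-resampled 16:1 patterns by $m = 2, 4, \dots$ yields the 8:1, 4:1, etc. cases without regenerating the patterns.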


Fig. 5. Simulation results of two projector types with different projector-to-camera pixel size ratios when the projected patterns are defocused binary patterns.


In addition, we ran simulations on ideal sinusoidal fringe patterns instead of blurred binary patterns. Figure 6(a) shows one of the sinusoidal fringe patterns for the projector with rectangular-shaped pixels, and Fig. 6(b) shows one for the projector with diamond-shaped pixels. We followed the same process to analyze the phase error, and the corresponding phase error maps are shown in Figs. 6(c) and 6(d), respectively.


Fig. 6. Comparing simulation results when the projected fringe patterns are sinusoidal. (a) One of the captured fringe images for the projector with rectangular-shaped pixels; (b) one of the captured fringe images for the projector with diamond-shaped pixels; (c) phase error map for the captured fringe images for the projector with rectangular-shaped pixels; (d) phase error map for the captured fringe images for the projector with diamond-shaped pixels.


Figure 7 compares the results when the projector pixel size varies from 1 to 16 times the camera pixel size. Once again, the projector with diamond-shaped pixels produces significantly larger errors than the projector with rectangular-shaped pixels when the projector pixel size is much larger than the camera pixel size.


Fig. 7. Simulation results of two projector types with different projector-to-camera pixel size ratios when projected patterns are sinusoidal patterns.


4. Experiment

We developed two DFP systems to experimentally evaluate these two types of projectors. Figure 8 shows photographs of the systems: the first system (System I) used a DLP projector with rectangular-shaped pixels (model: Texas Instruments LightCrafter 6500), and the second system (System II) used a DLP projector with diamond-shaped pixels (model: Texas Instruments LightCrafter 4500). The projector in System I has a resolution of 1920 $\times$ 1080 with a fixed focal length of 900 mm, and the projector in System II has a resolution of 912 $\times$ 1140 with an adjustable focus. Both systems use the same model of camera (model: FLIR Grasshopper USB 3.0), which has a pixel size of 3.45 $\mu m \times$ 3.45 $\mu m$; the camera resolution was set to 2048 $\times$ 1536. The LightCrafter 4500 projector pixel size is $7.64\,\mu m \times 7.64\,\mu m$, and the LightCrafter 6500 projector pixel size is $5.5\,\mu m \times 5.5\,\mu m$. Both systems used a microcontroller (model: Arduino Uno) to synchronize the camera with the projector. For all experiments, the projector used only the green channel, and the projection and capture speed was set to 20 Hz.


Fig. 8. Photographs of experimental system setups. (a) System I: system using a projector with rectangular-shaped pixels; (b) System II: system using a projector with diamond-shaped pixels.


We carried out experiments to evaluate the performance of these two systems with varying projector-to-camera pixel size ratios. We first measured a flat white plane using 12 phase-shifted fringe patterns with a fringe period of 12 pixels. The projector was slightly defocused to reduce the impact of high-order harmonics. The projector was positioned far enough away that each projector pixel spans approximately 16 camera pixels on the plane. For these experiments, the camera was fitted with a 25 mm lens (model: Kowa LM25HC-V). We captured the phase-shifted fringe patterns and gray-coded patterns for absolute phase retrieval. Figure 9(a) shows one of the phase-shifted fringe patterns projected by the projector with rectangular-shaped pixels, and Fig. 9(c) shows a zoom-in view that depicts the rectangular pixels. Figure 9(b) shows one of the phase-shifted fringe patterns projected by the projector with diamond-shaped pixels, and Fig. 9(d) shows a zoom-in view that clearly depicts the diamond pixels.


Fig. 9. Representative captured images. (a) One of the captured fringe images from System I; (b) one of the captured fringe images from System II; (c) zoom-in view of the pattern shown in (a); (d) zoom-in view of the pattern shown in (b).


We obtained the absolute phase map from these captured fringe patterns and then calculated the phase error map by subtracting the reference plane. The reference plane was created by applying a Gaussian filter with a size of 21 $\times$ 21 pixels to the unwrapped phase map. Figure 10 shows some representative phase error maps. Figure 10(a) shows the error map of System I and Fig. 10(b) that of System II when the projector pixel size is 16 times the camera pixel size. Once again, we combined $m \times m$ pixels of the camera-captured images to artificially create smaller projector-to-camera pixel size ratios.
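The error-map computation described above (unwrapped phase minus a 21 $\times$ 21 Gaussian-smoothed copy of itself serving as the reference plane) can be sketched as follows; the Gaussian sigma and the edge handling are my assumptions:

```python
import numpy as np

def phase_error_map(phase, size=21, sigma=21 / 6):
    """Phase error map: unwrapped phase minus its Gaussian-smoothed version,
    which serves as the reference plane for a near-flat target."""
    ax = np.arange(size) - size // 2
    g = np.exp(-ax**2 / (2 * sigma**2))
    g /= g.sum()
    pad = size // 2
    p = np.pad(phase, pad, mode='edge')  # replicate edges (assumption)
    # separable 2D Gaussian: filter rows, then columns
    tmp = np.apply_along_axis(lambda r: np.convolve(r, g, mode='valid'), 1, p)
    ref = np.apply_along_axis(lambda c: np.convolve(c, g, mode='valid'), 0, tmp)
    return phase - ref

# A perfectly planar phase gives (almost) zero error away from the borders
plane = np.add.outer(np.linspace(0, 1, 60), np.linspace(0, 2, 80))
err = phase_error_map(plane)
assert err.shape == plane.shape
assert np.max(np.abs(err[10:-10, 10:-10])) < 1e-9
```

Using the smoothed phase itself as the reference removes any residual tilt or low-frequency shape of the plane, so what remains in the error map is the pixel-scale artifact structure under study.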


Fig. 10. Phase error analysis for the flat plane experiments. (a) Phase error map of System I when the projector-to-camera pixel ratio is 16:1; (b) phase error map of System II when the projector-to-camera pixel ratio is 16:1; (c)-(d) phase error maps when the projector-to-camera pixel size ratio is 2:1; (e)-(f) phase error maps when the projector-to-camera pixel size ratio is 1:1.


Figure 11 shows the phase rms error for both systems with respect to the projector-to-camera pixel size ratio. It demonstrates that the DFP system using a projector with diamond-shaped pixels has lower phase quality than the system using a projector with rectangular-shaped pixels when the projector pixel size is much larger than the camera pixel size. Furthermore, when the projector-to-camera pixel size ratio ranges from 1 to 3, the error difference rapidly increases, and it remains relatively large when the ratio is larger than 5.


Fig. 11. Experimental result of two projector types with respect to different projector-to-camera pixel size ratios.


We then measured a sphere with a diameter of 40 mm using both systems. For these experiments, the camera was fitted with a 25 mm lens (model: Kowa LM25HC-V). We configured both systems to ensure that the projector pixel size is approximately 8 times the camera pixel size. Figure 12(a) shows the photograph of the sphere captured by System I, and Fig. 12(b) shows one of the captured fringe patterns. We then employed the phase-shifting algorithm to obtain the absolute phase map, which was further converted to 3D geometry using the reference-plane-based method [18]. Figure 12(c) shows the final 3D reconstruction. To better visually compare the measurement results, we generated a zoom-in view of the 3D reconstruction, as shown in Fig. 12(d). Figures 12(e)–12(h) respectively show the photograph of the object captured by System II, one of the fringe patterns, the corresponding 3D reconstruction, and the zoom-in view of the 3D reconstruction. These results demonstrate that the data quality captured by System I is higher than that captured by System II.


Fig. 12. Experimental result of a sphere. (a) Photograph captured by System I; (b) one of the fringe images captured by System I; (c) 3D result by System I; (d) zoom-in view of (c); (e) photograph captured by System II; (f) one of the fringe images captured by System II; (g) 3D result by System II; (h) zoom-in view of (g).


We also measured a statue with more complex 3D surface geometry when the projector pixel size is approximately 16 times the camera pixel size. Figure 13 shows the measurement results. Figure 13(a) shows the photograph of the statue. Figure 13(b) shows the 3D reconstruction captured by System I, and Fig. 13(c) shows the corresponding zoom-in view. Similarly, we measured the same statue with System II, and the corresponding results are shown in Figs. 13(d) and 13(e). Obviously, the results obtained from System I are visually better.


Fig. 13. Experimental results of a statue. (a) Photograph of the statue measured with projector-to-camera pixel size ratio of 16:1; (b) 3D result with System I; (c) zoom-in view of (b); (d) 3D result with System II; (e) zoom-in view of (d).


Lastly, we reconfigured the systems with different projector-to-camera pixel size ratios (approximately 2:1 and 1:1) and measured two complex 3D objects. For the 1:1 ratio systems, the camera was fitted with an 8 mm lens (model: Computar M0814-MP2), and for the 2:1 ratio systems, the camera was fitted with a 16 mm lens (model: Computar M1614-MP2). Figure 14 shows the corresponding results. The comparison shows that when the projector-to-camera pixel size ratio is approximately 2:1, the result obtained from System I is better than that obtained from System II, whilst the data quality is similar when the ratio is approximately 1:1. All the simulation and experimental results confirmed that 1) when the camera pixel is much smaller than the projector pixel in object space, the 3D shape measurement quality obtained from a system using a projector with rectangular-shaped pixels is much higher than that obtained from a system using a projector with diamond-shaped pixels; 2) the projector-to-camera pixel size ratio has less influence on measurement quality for a DFP system using a projector with rectangular-shaped pixels than for one using a projector with diamond-shaped pixels; and 3) overall, the DFP system using a projector with rectangular pixels produces much smoother 3D geometry with higher measurement accuracy.


Fig. 14. Experimental results of complex 3D objects with different projector-to-camera pixel size ratios. (a) Photograph of the object measured with projector-to-camera pixel size ratio of 2:1; (b) 3D result with System I; (c) zoom-in view of (b); (d) 3D result with System II; (e) zoom-in view of (d); (f) Photo of the object measured with projector-to-camera pixel size ratio of 1:1; (g) 3D result with System I; (h) zoom-in view of (g); (i) 3D result with System II; (j) zoom-in view of (i).


5. Summary

This paper evaluated the performance of DLP projectors with two different pixel shapes for ultrahigh-resolution 3D shape measurement. Our simulation and experimental results demonstrated that if the camera pixel size is similar to the projector pixel size, the pixel shape has a negligible influence on measurement quality. However, if a single projector pixel corresponds to many camera pixels, the diamond-shaped projector pixels introduce measurement artifacts that reduce measurement quality.

Funding

Directorate for Computer and Information Science and Engineering (IIS-1637961, IIS-1763689).

Acknowledgments

This study was sponsored by the National Science Foundation (NSF) under grant numbers IIS-1637961 and IIS-1763689. The views expressed in this paper are those of the authors and not necessarily those of the NSF.

Disclosures

JSH: The author declares no conflicts of interest; SZ: The author declares no conflicts of interest.

References

1. S. Zhang, “High-speed 3d shape measurement with structured light methods: a review,” Opt. Lasers Eng. 106, 119–131 (2018). [CrossRef]  

2. S. Zhang, High-speed 3D imaging with digital fringe projection technique, (CRC Press, New York, NY, 2016), 1st ed.

3. S. Lei and S. Zhang, “Digital sinusoidal fringe generation: defocusing binary patterns vs focusing sinusoidal patterns,” Opt. Lasers Eng. 48(5), 561–569 (2010). [CrossRef]  

4. D. Malacara, Optical Shop Testing, (John Wiley and Sons, New York, NY, 2007), 3rd ed.

5. K. Chen, J. Xi, and Y. Yu, “Quality-guided spatial phase unwrapping algorithm for fast three-dimensional measurement,” Opt. Commun. 294, 139–147 (2013). [CrossRef]  

6. M. Arevalillo-Herráez, M. Gdeisat, F. Lilley, and D. R. Burton, “A spatial algorithm to reduce phase wraps from two dimensional signals in fringe projection profilometry,” Opt. Lasers Eng. 82, 70–78 (2016). [CrossRef]  

7. S. Zhang, “Absolute phase retrieval methods for digital fringe projection profilometry: a review,” Opt. Lasers Eng. 107, 28–37 (2018). [CrossRef]  

8. K. Zhong, Z. Li, Y. Shi, and C. Wang, “Analysis of solving the point correspondence problem by trifocal tensor for real-time phase measurement profilometry,” Proc. SPIE 8493, 849311 (2012). [CrossRef]  

9. Z. Li, K. Zhong, Y. Li, X. Zhou, and Y. Shi, “Multiview phase shifting: a full-resolution and high-speed 3d measurement framework for arbitrary shape dynamic objects,” Opt. Lett. 38(9), 1389–1391 (2013). [CrossRef]  

10. K. Zhong, Z. Li, Y. Shi, C. Wang, and Y. Lei, “Fast phase measurement profilometry for arbitrary shape objects without phase unwrapping,” Opt. Lasers Eng. 51(11), 1213–1222 (2013). [CrossRef]  

11. R. Ishiyama, S. Sakamoto, J. Tajima, T. Okatani, and K. Deguchi, “Absolute phase measurements using geometric constraints between multiple cameras and projectors,” Appl. Opt. 46(17), 3528–3538 (2007). [CrossRef]  

12. C. Brauer-Burchardt, P. Kuhmstedt, M. Heinze, P. Kuhmstedt, and G. Notni, “Using geometric constraints to solve the point correspondence problem in fringe projection based 3d measuring systems,” in Proc. of the 16th international conference on Image analysis and processing, vol. Part II (2011), pp. 265–274.

13. Y. R. Huddart, J. D. R. Valera, N. J. Weston, and A. J. Moore, “Absolute phase measurement in fringe projection using multiple perspectives,” Opt. Express 21(18), 21119–21130 (2013). [CrossRef]  

14. W. Lohry, V. Chen, and S. Zhang, “Absolute three-dimensional shape measurement using coded fringe patterns without phase unwrapping or projector calibration,” Opt. Express 22(2), 1287–1301 (2014). [CrossRef]  

15. Y. An, J.-S. Hyun, and S. Zhang, “Pixel-wise absolute phase unwrapping using geometric constraints of structured light system,” Opt. Express 24(16), 18445–18459 (2016). [CrossRef]  

16. Y. Wang, S. Zhang, and J. H. Oliver, “3d shape measurement technique for multiple rapidly moving objects,” Opt. Express 19(9), 8539–8545 (2011). [CrossRef]  

17. S. Zhang and P. S. Huang, “Novel method for structured light system calibration,” Opt. Eng. 45(8), 083601 (2006). [CrossRef]  

18. Y. Xu, L. Ekstrand, J. Dai, and S. Zhang, “Phase error compensation for three-dimensional shape measurement with projector defocusing,” Appl. Opt. 50(17), 2572–2581 (2011). [CrossRef]  
