Abstract
A three-dimensional (3D) shape measurement system using defocusing binary fringe projection can perform high-speed and flexible measurements. In this technology, determining the fringe pitch that matches the current projection defocus amount is essential for accurate measurement. In this paper, we propose an online binary fringe pitch selection framework. First, by analyzing the fringe images captured by the camera, the defocus amount of the projection is obtained. Next, based on an analysis of the harmonic error and camera noise, we establish a mathematical model of the normalized phase error. The fringe pitch that minimizes this normalized phase error is then selected as the optimal fringe pitch for subsequent measurements, leading to more accurate and robust results. Unlike current methods, ours does not require offline defocus-distance calibration, yet it achieves the same effect as offline calibration while being more flexible and efficient. Our experiments validate the effectiveness and practicability of the proposed method.
© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
1. Introduction
The phase-shifting method based on defocusing binary fringe projection is widely used in real-time 3D measurement because of its fast projection speed [1–3]. In general, the projector's lens is fixed, and the defocus amount of the projection varies along the depth direction. The best measurement accuracy is achieved when the fringe pitch matches the defocus amount. However, in actual industrial measurement processes, the fringe pitch does not always match the defocus amount at the position of the moving object. Current methods use offline calibration to determine the relationship between the defocus amount and the working distance and then choose the optimal fringe pitch to match the defocus amount [4,5]. The calibration process is complex and relies on additional high-precision equipment, and whenever the projector's lens is disturbed by vibration or human intervention, the system must be recalibrated. Therefore, online adaptive fringe pitch adjustment is essential.
To reduce harmonic errors and improve the signal-to-noise ratio (SNR) of defocused binary patterns, current approaches can be divided into two categories: a) providing input patterns that better approximate sinusoidal fringe patterns, or b) adapting the fringe pitch to the amount of defocus. For the first idea, methods such as changing the pattern mode [6–8], using the dithering technique [9,10], using error compensation [11,12], and using motion blur [13] have been proposed to improve the measurement accuracy. Although these methods show good results for large-pitch fringes, they are not effective for fringes with a small pitch. This limits the achievable accuracy, because fringes with a smaller pitch tend to yield higher measurement accuracy [14].
The other idea is to adapt the fringe pitch to the defocus amount of projection. As the defocus amount increases, the SNR of the small pitch fringe will decrease. As the defocus amount decreases, the harmonic error of the non-sinusoidal fringes will increase. The effect of defocus on binary square patterns (BSPs) has been studied [15], but determining the projector's defocus amount is still a challenging task. Offline calibration methods use a point spread function (PSF) to estimate the projector's defocus amount [4,5]. These methods need to project a specific pattern on a flat plate at various distances to calibrate the relationship between the defocus amount and the working distance. The best fringe pitch is obtained according to the actual working distance and a mathematical model of phase error. These methods can achieve adaptive adjustment of the fringe pitch, but they are not efficient. When the projector lens is adjusted, the system needs to be recalibrated.
In this study, we propose an online method that adaptively adjusts the fringe pitch to match the current projector defocus amount. The quantitative estimation of the projection defocusing at the object position is obtained by an analysis based on the fringe images captured by the camera. The optimal fringe pitch is selected using the mathematical model of phase error that we established and the defocus estimation result.
The remainder of this paper is organized as follows: Section 2 explains the principles of our approach and presents some simulation results. Section 3 shows several experimental results to demonstrate the correctness and practicability of our method. Section 4 summarizes this paper.
2. Principle
Figure 1 shows our entire workflow. As shown in Figs. 1(a) and 1(b), the projector’s input BSPs are defocused by the projector lens, forming defocused BSPs (DBSPs). DBSPs are similar to sinusoidal fringe images because the high-frequency components of the BSP are suppressed. First, as shown in Fig. 1(c), the absolute phase can be obtained using a traditional multi-step phase-shifting method. Subsequently, as shown in Figs. 1(d), 1(e), and 1(f), a new BSP with a fixed pitch is projected, and based on the absolute phase, a projector image corresponding to this BSP can be reconstructed. As shown in Fig. 1(g), a sparse defocus map and a defocus estimation can be obtained based on this projector image. This estimation is used in the mathematical model of the normalized phase error to obtain the optimal fringe pitch. Finally, as shown in Fig. 1(h), BSPs with the optimal pitch are generated and used in the next measurement process, achieving online adaptive adjustment of the fringe pitch. For each measurement process, reconstruction results can be obtained from the absolute phase.
To introduce our method more clearly, we first define the following mathematical descriptions of the DBSP projection process. Assuming that the defocusing effect of the projector can be simulated by Gaussian convolution and considering only a one-dimensional situation, a Gaussian convolution kernel with parameter ${\sigma _\textrm{g}}$ can be expressed as
$$G(x) = \frac{1}{{\sqrt {2\pi } {\sigma _\textrm{g}}}}\exp \left( { - \frac{{{x^2}}}{{2\sigma _\textrm{g}^2}}} \right).$$
The defocus amount varies with the projection distance, and ${\sigma _\textrm{g}}$ varies with the defocus amount, so we use ${\sigma _\textrm{g}}$ to represent the defocus amount. We use $i$ to represent the index of the phase-shifting patterns. ${S_i}$ and ${S_{\textrm{g}i}}$ refer to one row of data of the BSP and DBSP, respectively. The relationship between ${S_i}$ and ${S_{\textrm{g}i}}$ can be expressed as follows:
$${S_{\textrm{g}i}}({{x_\textrm{p}}} )= {S_i}({{x_\textrm{p}}} )\otimes G({{x_\textrm{p}}} ),$$
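As a concrete illustration of this convolution model, the following NumPy sketch builds one row of a BSP and blurs it with a normalized Gaussian kernel to obtain the corresponding DBSP row (the pattern width, pitch, and ${\sigma _\textrm{g}}$ below are arbitrary illustration values):

```python
import numpy as np

def gaussian_kernel(sigma_g):
    """Discrete, normalized 1-D Gaussian kernel with parameter sigma_g."""
    radius = int(np.ceil(4 * sigma_g))          # truncate at 4 sigma
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2 * sigma_g**2))
    return g / g.sum()

def bsp_row(width, pitch):
    """One row of a binary square pattern (BSP) with amplitude 1."""
    x = np.arange(width)
    return ((x % pitch) < pitch // 2).astype(float)

def defocus(row, sigma_g):
    """Simulate projector defocus: S_gi = S_i convolved with G."""
    return np.convolve(row, gaussian_kernel(sigma_g), mode="same")

s = bsp_row(720, 72)      # S_i with T_s = 72 pixels
s_g = defocus(s, 5.0)     # DBSP row S_gi with sigma_g = 5
```

With ${\sigma _\textrm{g}}$ much smaller than the half pitch, the DBSP still reaches values near 0 and 1 at the band centers while the square edges become smooth, sinusoid-like transitions.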
where $\textrm{p}$ represents the projector plane and ${x_\textrm{p}}$ represents the pixel coordinates on the projector. The DBSP’s corresponding ideal sinusoidal fringe pattern ${I_i}$ can be defined accordingly. Taking the three-step phase-shifting method as an example, after the DBSP’s projection is captured by the camera, the wrapped phase of each camera pixel, $\phi$, can be calculated as
$$\phi = \arctan \left[ {\frac{{\sqrt 3 ({{I_1} - {I_3}} )}}{{2{I_2} - {I_1} - {I_3}}}} \right].$$
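Assuming the common three-step convention with phase shifts of $-2\pi/3$, $0$, and $2\pi/3$ (the specific shift convention is an assumption here), the wrapped-phase computation can be sketched as follows; on noise-free synthetic fringes it recovers the input phase exactly:

```python
import numpy as np

def wrapped_phase_3step(i1, i2, i3):
    """Wrapped phase from three fringe images with phase shifts
    -2*pi/3, 0, +2*pi/3 (one common three-step convention)."""
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

# synthetic check: I_i = A + B*cos(phi + delta_i)
phi = np.linspace(-np.pi + 1e-3, np.pi - 1e-3, 500)
A, B = 0.5, 0.4
i1, i2, i3 = (A + B * np.cos(phi + d) for d in (-2*np.pi/3, 0.0, 2*np.pi/3))
phi_hat = wrapped_phase_3step(i1, i2, i3)   # equals phi up to rounding
```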
2.1 Projection’s defocusing parameter determination
In this section, we introduce the method used to obtain the defocus amount at the current target position from the captured fringe images. First, we need to calibrate the camera and the projector. We suppose that both the camera and the projector satisfy the pinhole camera model, with intrinsic parameter matrices ${{\boldsymbol A}_\textrm{c}}$ (camera) and ${{\boldsymbol A}_\textrm{p}}$ (projector).
To get the defocus amount of the current projection, we need to reproject the image captured by the camera onto the DMD plane of the projector. As shown in Fig. 1(f), this reprojected image, which we call the projector image, corresponds to the projector's input BSP. In [18], Zhang et al. reconstructed the projector image from fringe images in two directions for calibration. However, when the calibration parameters of the camera and the projector are known, we only need to project fringes in one direction to reconstruct the projector image, which corresponds exactly to one measurement process. Suppose that the coordinate of a point Q in the camera coordinate system is ${{\boldsymbol x}_\textrm{c}} = ({{x_\textrm{c}},{y_\textrm{c}},{z_\textrm{c}}} )$, its coordinate in the projector coordinate system is ${{\boldsymbol x}_\textrm{p}} = ({{x_\textrm{p}},{y_\textrm{p}},{z_\textrm{p}}} )$, the projection of Q on the camera imaging plane is $({{u_\textrm{c}},{v_\textrm{c}}} )$, and the projection of Q on the projector’s DMD is $({{u_\textrm{p}},{v_\textrm{p}}} )$. The relationship between these coordinates can be expressed as
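This coordinate chain can be sketched under the pinhole model; the rotation ${\boldsymbol R}$, translation ${\boldsymbol t}$, and intrinsic values below are synthetic placeholders, not calibration results:

```python
import numpy as np

def camera_to_projector(x_c, R, t, A_p):
    """Map a 3-D point from camera coordinates to projector pixel
    coordinates (u_p, v_p) under the pinhole model: R, t are the
    camera-to-projector extrinsics and A_p the projector intrinsics."""
    x_p = R @ x_c + t        # rigid transform into the projector frame
    u = A_p @ x_p            # perspective projection onto the DMD plane
    return u[:2] / u[2]

# toy check: identity extrinsics and a synthetic intrinsic matrix,
# so the optical-axis point (0, 0, 1) maps to the principal point
A_p = np.array([[800.0,   0.0, 456.0],
                [  0.0, 800.0, 570.0],
                [  0.0,   0.0,   1.0]])
uv = camera_to_projector(np.array([0.0, 0.0, 1.0]),
                         np.eye(3), np.zeros(3), A_p)
```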
Using the projector image, the defocus amount of the projection at the BSP’s edges can be calculated. Estimating the Gaussian kernel parameter at image edges is a widely used approach in defocus estimation [19,20]; we improve upon it by combining it with the characteristics of the BSP. Because the DMD's pixels are discrete, we use discrete convolution to analyze the projector image, so Eq. (2) can also be expressed as:
Suppose that a rising edge of BSP is between ${x_{\textrm{p}0}}$ and ${x_{\textrm{p}0}} + 1$. Figure 2 shows an example in which the rising edge of a BSP is between ${x_{\textrm{p}0}} = 7$ and ${x_{\textrm{p}0}} + 1 = 8$, the pitch of BSP is ${T_\textrm{s}} = 6$ pixels, and the BSP’s amplitude is 1.
It can be inferred from Fig. 2 that nearly half of the terms in Eq. (14) are 0. Omitting terms with ${S_i}({x + {x_\textrm{p}}} )= 0$ in Eq. (14), ${S_{\textrm{g}i}}({{x_{\textrm{p}0}}} )$ can be expressed as:
Positioning edges in the projector image is easy because the projector image corresponds to the projector’s input BSP, and $\nabla {S_{\textrm{g}i}}({{x_{\textrm{p}0}}} )$ can be calculated from the projector image using Eq. (17). With known ${T_s}$ and $\nabla {S_{\textrm{g}i}}({{x_{\textrm{p}0}}} )$, ${\sigma _\textrm{g}}$ at ${x_{\textrm{p}0}}$ can be obtained numerically from Eq. (18). Taking Newton’s iteration method as an example and assuming that ${\sigma _\textrm{g}} \ne 0$, a new equation can be constructed based on Eq. (18):
To verify the validity of our method, we performed two simulations. First, we generated a fixed-pitch BSP and applied Gaussian convolution with known ${\sigma _\textrm{g}}$ to change it to a DBSP. After that, we obtained $\nabla {S_{\textrm{g}i}}({{x_{\textrm{p}0}}} )$ from this DBSP. Finally, we evaluated ${\sigma _\textrm{g}}$ using Eq. (20). We set ${\sigma _\textrm{g}} = 5.0$ and ${T_s} = 72$ pixels. After four iterations with the initial value $\sigma _\textrm{g}^{(0 )} = 1.0$ (this initial value is also used in the rest of our experiments), we obtained $\sigma _\textrm{g}^{(4 )} = 5.0006$.
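This simulation can be reproduced with a simplified single-edge model: ignoring the pitch-dependent terms of Eq. (18), the discrete gradient across one isolated blurred edge of unit amplitude is approximately $\textrm{erf}({1/({2\sqrt 2 {\sigma _\textrm{g}}})})$, and Newton's iteration on this relation recovers ${\sigma _\textrm{g}}$ (a sketch; the paper's exact model includes additional terms):

```python
import numpy as np
from math import erf, exp, sqrt, pi

def model(sigma):
    """Single-edge approximation of the discrete edge gradient."""
    return erf(1.0 / (2.0 * sqrt(2.0) * sigma))

def model_deriv(sigma):
    """Derivative of model() with respect to sigma."""
    return -(2.0 / sqrt(pi)) * exp(-1.0 / (8.0 * sigma**2)) \
        / (2.0 * sqrt(2.0) * sigma**2)

def estimate_sigma(grad, sigma0=1.0, iters=8):
    """Newton's iteration for sigma_g from a measured edge gradient."""
    s = sigma0
    for _ in range(iters):
        s -= (model(s) - grad) / model_deriv(s)
    return s

# generate a BSP (T_s = 72), blur it with sigma_g = 5, recover sigma_g
Ts, sigma_true = 72, 5.0
x = np.arange(720)
bsp = ((x % Ts) < Ts // 2).astype(float)
k = np.arange(-30, 31)
kern = np.exp(-k**2 / (2 * sigma_true**2))
kern /= kern.sum()
dbsp = np.convolve(bsp, kern, mode="same")
grad = dbsp[360] - dbsp[359]      # rising edge lies between pixels 359 and 360
sigma_est = estimate_sigma(grad)  # close to 5.0
```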
In addition, we selected different values of ${\sigma _\textrm{g}}$ in [1.0, 21.0] with different numbers of iterations to test the performance of our method; the results are shown in Fig. 3(a). When ${\sigma _\textrm{g}}/{T_s}$ exceeds 0.3, our method becomes unstable because Eq. (18) is an approximate formula. When ${\sigma _\textrm{g}} = 20$ and ${T_s} = 72$, that is, ${\sigma _\textrm{g}}/{T_s} = 0.278$, as shown in Fig. 3(b), the amplitude of the binary fringe is small. Such a large defocus is not common and occurs only when the measurement plane is sufficiently far from the focus plane of the projector. We use five iterations in the rest of our experiments, which is sufficient for most cases (${\sigma _\textrm{g}}/{T_s} < 0.28$).
2.2 Optimal fringe pitch selection
The optimal fringe pitch that minimizes the measurement error is selected based on the mathematical model of normalized phase error and the defocus amount we obtain in Section 2.1. In phase-shifting technology, the theoretical phase error is primarily derived from the harmonic error and the camera noise.
Harmonic error analysis methods include frequency-domain methods [21] and time-domain methods [22,23]; the latter is used in our phase error analysis. Suppose that ${a_k}$ is the $k$-th coefficient of the Fourier series of a DBSP, which can be expressed as
The camera noise in the phase-shifting method is generally considered to be additive Gaussian noise [21], but in our experiment we observed that the variance of the CMOS camera noise was proportional to the grayscale value. As shown in Fig. 4(a), we acquired 75 images of a fixed scene under the same exposure and lighting conditions, without moving the camera. We calculated the variance and average gray value for each pixel and performed a linear regression using the average as the independent variable and the variance as the dependent variable. The regression result is shown in Fig. 4(b).
The fitted relationship between the variance and the image gray value can be expressed as
$$\sigma _\textrm{n}^2 = {f_\textrm{n}}I + b,$$
where $\sigma _\textrm{n}^2$ and $I$ are the variance and average of the gray value, respectively, ${f_\textrm{n}}$ is the scale factor, and $b$ is the fitting bias. As shown in Fig. 4(b), the value of $b$ is small ($b = 0.061$), so we ignore it for convenience of analysis. The fitted scale factor is ${f_\textrm{n}} = 0.045$, the fit RMS is 0.34, and the R-squared value is 0.956. We still treat the camera noise as additive Gaussian noise, but with a variance that changes with the gray value. Denoting a Gaussian distribution with mean $\mu$ and variance ${\sigma ^2}$ as ${\cal{N}}({\mu ,{\sigma^2}} )$, the camera noise in fringe projections with different phases can be expressed as a zero-mean Gaussian ${\cal{N}}({0,{f_\textrm{n}}I_i^\textrm{c}(\phi )} )$, where $I_i^\textrm{c}(\phi )$ is one of the ideal sinusoidal fringe patterns captured by the camera. We do not consider the harmonic error here because, in the DBSP projection process, when camera noise becomes the main source of phase error, the contrast of the fringe is low and the harmonic error is much smaller than the noise error. Considering the camera noise, the fringe pattern captured by the camera, $I_{\textrm{n}i}^\textrm{c}(\phi )$, is the ideal pattern plus this noise, where $\textrm{c}$ represents the camera plane. According to Eq. (4), we directly use $I_{\textrm{n}i}^\textrm{c}(\phi )$ to evaluate the wrapped phase $\phi ^{\prime}$, and we use ${A^\textrm{c}}$ and ${B^\textrm{c}}$ to represent the average intensity and intensity modulation of $I_{\textrm{n}i}^\textrm{c}(\phi )$, respectively. $\phi ^{\prime}$ can be expressed as
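The variance–mean regression of Fig. 4 can be reproduced on synthetic data; the frame count matches the 75 images above, while the intensity range and the true scale factor below are assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
f_n_true, n_frames, n_pix = 0.045, 75, 5000
mean_gray = rng.uniform(10.0, 200.0, n_pix)    # per-pixel average intensity
# signal-dependent Gaussian noise: variance proportional to the gray value
frames = rng.normal(mean_gray, np.sqrt(f_n_true * mean_gray),
                    (n_frames, n_pix))
avg = frames.mean(axis=0)
var = frames.var(axis=0, ddof=1)
f_n_est, b_est = np.polyfit(avg, var, 1)       # fit var ~ f_n * avg + b
```

Averaging the per-pixel variance estimates through the regression recovers the scale factor even though each 75-frame variance estimate is individually noisy.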
Because we greatly simplified Eq. (29), we conducted a simulation experiment to prove the effectiveness of Eq. (32). In this simulation, we added Gaussian additive noise, as in Eq. (25), to the ideal sinusoidal fringe pattern, set various ${f_\textrm{n}}$ values, and calculated the variance of the wrapped phase. As shown in Fig. 5, comparing this variance with Eq. (32), we found that when ${f_\textrm{n}} \ll {B^\textrm{c}}$ (${f_\textrm{n}} < 0.1{B^\textrm{c}}$), Eq. (32) expresses the phase error caused by camera noise correctly. In fact, as shown in the camera noise experiment above, ${f_\textrm{n}} = 0.045$, which is much smaller than ${B^\textrm{c}}$ in most cases after grayscale normalization.
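For the constant-variance case, the standard small-noise prediction for $N$-step phase shifting, ${\sigma _\phi } = \sqrt {2/N} \,{\sigma _\textrm{n}}/{B^\textrm{c}}$, can be checked by a quick Monte Carlo run (a sketch with arbitrary $A$, $B$, and noise level; this is not the paper's exact Eq. (32)):

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, sigma_n, trials = 0.5, 0.4, 0.004, 20000
phi = rng.uniform(-np.pi, np.pi, trials)
deltas = np.array([-2*np.pi/3, 0.0, 2*np.pi/3])
I = A + B * np.cos(phi[:, None] + deltas)      # ideal three-step fringes
I += rng.normal(0.0, sigma_n, I.shape)         # additive camera noise
i1, i2, i3 = I.T
phi_hat = np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0*i2 - i1 - i3)
err = np.angle(np.exp(1j * (phi_hat - phi)))   # wrapped phase difference
sigma_meas = err.std()
sigma_pred = np.sqrt(2.0 / 3.0) * sigma_n / B  # small-noise prediction
```

The measured phase standard deviation agrees with the prediction to within a few percent when the noise is small relative to the modulation.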
Finally, a mathematical model that considers both phase error and noise error is constructed to select the optimal fringe pitch. We assume that the wrapped phase $\phi $ is a uniform distribution over $({ - \mathrm{\pi },\mathrm{\pi }} )$. According to Eq. (23), $\nabla {\phi _\textrm{h}}(\phi )$ obeys an arcsine distribution, which can be expressed as
The result of Eq. (36) is the standard deviation of $\phi $; however, for different values of ${T_s}$, the same standard deviation of $\phi $ will cause different measurement standard deviations. To convert the phase error into actual measurement error, we introduce normalization in Eq. (36). We use ${T_s} = 100$ pixels as our standard pitch [24], where the normalized phase error can be expressed as
According to Eq. (21), ${a_1},{a_5},{a_7}$ in Eq. (37) are related only to ${\sigma _\textrm{g}}$ and ${T_s}$. Additionally, the camera noise parameter ${f_\textrm{n}}$ is related only to the camera. Substituting Eq. (21) into Eq. (37) yields the following expression for ${\sigma _{\textrm{norm}}}$:
After the defocus amount at the current target position, ${\sigma _\textrm{g}}$, has been determined by the method described in Section 2.1, we can use Eq. (38) to obtain the optimal fringe pitch. In practice, considering that the fringe pitch of a BSP must be an integer and that Eq. (38) is complex, we use a simple bisection method to find the fringe pitch that minimizes ${\sigma _{\textrm{norm}}}$. Specifically, the candidate fringe pitch ranges from 6 to 120 pixels, the initial value of the iteration is 60 pixels, and the iteration stops when the iteration error is less than 1. The two candidate integer pitches around the converged value are then compared, and the one with the smaller ${\sigma _{\textrm{norm}}}$ is chosen as the optimal fringe pitch, i.e., the fringe pitch that minimizes the current measurement error.
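This selection step can be sketched with an illustrative stand-in for Eq. (38): Fourier amplitudes of a Gaussian-blurred unit square wave, a three-step camera-noise term, a residual 5th/7th-harmonic term, and normalization to the 100-pixel standard pitch. The noise level and the exact way the two error terms combine are assumptions, and exhaustive search over the 6–120 pixel range stands in for the paper's bisection:

```python
import numpy as np

def sigma_norm(T, sigma_g, sigma_n=0.01):
    """Illustrative normalized phase-error surrogate (not the paper's
    exact Eq. (38)): Gaussian blur attenuates harmonic k of a unit
    square wave by exp(-2*(pi*k*sigma_g/T)^2); camera noise scales as
    1/a1; the 5th and 7th harmonics dominate the three-step ripple."""
    att = lambda k: np.exp(-2.0 * (np.pi * k * sigma_g / T) ** 2)
    a1 = (2.0 / np.pi) * att(1)
    a5 = (2.0 / (5.0 * np.pi)) * att(5)
    a7 = (2.0 / (7.0 * np.pi)) * att(7)
    noise = np.sqrt(2.0 / 3.0) * sigma_n / a1   # camera-noise phase error
    harm = (a5 + a7) / (np.sqrt(2.0) * a1)      # residual-harmonic error
    return np.hypot(noise, harm) * T / 100.0    # normalize to T_s = 100

def optimal_pitch(sigma_g, lo=6, hi=120):
    """Integer pitch minimizing the surrogate error (exhaustive search
    stands in for the bisection used in the paper)."""
    return min(range(lo, hi + 1), key=lambda T: sigma_norm(T, sigma_g))

best = optimal_pitch(3.83)   # optimal pitch for the estimated defocus
```

For a defocus estimate around ${\sigma _\textrm{g}} \approx 3.8$, this toy model places the optimum in roughly the 20–26 pixel range, mirroring the trade-off described above: small pitches are noise-limited, large pitches are harmonic-limited.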
3. Experiment
We conducted three experiments with a defocused projector system composed of a CMOS camera (AVT Prosilica GT 2000, resolution 2048 × 1088) with a 12 mm lens and a DLP projector (DLP LightCrafter 4500, resolution 912 × 1140). In our experiments, all projector defocus amounts were calculated from the captured DBSP with ${T_s} = 72$ pixels. In each experiment, we also captured an image of full-black projection and an image of full-power projection for grayscale normalization.
In Section 2.1, we supposed that the projector's pixels are square, whereas the pixels of the projector used in our experiment are diamond-shaped. To rigorously illustrate the effectiveness of our method, here we use simulation to show that the projections of square pixels and diamond pixels are the same after defocusing. We first generated a binary fringe edge with diamond pixels and with square pixels. To match the actual situation as closely as possible, we upsampled the two images by a factor of nine. We then applied Gaussian convolution to the two images and calculated the difference of the results, where $\sigma$ denotes the parameter of the Gaussian convolution kernel. Part of the simulation results are shown in Fig. 6(a), and the average absolute gray-value difference is shown in Fig. 6(b). Only when the defocus amount is minimal do the diamond-pixel results differ from the square-pixel results; as the defocus amount increases, the difference between them becomes smaller. Moreover, because we upsampled the images by a factor of nine, all $\sigma$ values in this simulation should be divided by nine to correspond to the real situation. In our experiments, ${\sigma _\textrm{g}} > 2$ (i.e., $\sigma$ in this simulation is over 18), so we regard the defocused results of diamond pixels and square pixels as identical.
3.1 Verification experiment
In our first experiment, we measured a standard plane with BSPs of 21 different fringe pitches (6, 12, 18, …, 126 pixels) at four different defocus amounts. The plane was placed approximately 40 cm from our system. The sparse defocus map on the plane was calculated, and its mean value was used as the defocus estimation. We analyzed the relationship between the measurement error, taken as the plane-fitting RMS error, and the normalized phase error calculated by Eq. (38), as shown in Fig. 7. Our normalized phase error results have the same trend as the actual measurement error, and the optimal fringe pitch is consistent with the actual results. To demonstrate that the proposed approach has the same performance as offline calibration methods, the optimal pitch was compared with the results obtained by the empirical formula described in [5]. Figure 8 shows several details of these experimental results. As shown in Figs. 8(a) and 8(c), if the BSPs have too small a pitch, camera noise is the primary contributor to the measurement error, and if the BSPs have too large a pitch, the harmonics significantly affect the results and periodic errors occur.
3.2 Practical experiments
To verify our method’s effectiveness in practical applications, we first measured a standard ceramic ball with a diameter of 50 mm, placed approximately 40 cm from our measurement system. The defocus estimation was obtained using the method described in Section 2.1. A column of data from the sparse defocus map is shown in Fig. 9(c), and we take the mean value of 3.83 as the defocus estimation. According to the mathematical model of the phase error established above, the best measurement accuracy is achieved at ${T_\textrm{s}} = 24$ pixels, as shown in Fig. 9(d). In Fig. 10, we show some captured fringe images and the distribution of fitting errors after 3D reconstruction using the optimal fringe pitch and other similar fringe pitches. Pitches near the optimal fringe pitch can also produce high-precision reconstruction results, but the reconstruction using the optimal fringe pitch is the most accurate, which is consistent with the trend of the normalized phase error curve shown in Fig. 9(d).
To test the performance of our method when measuring general objects, we measured a statue. The defocus estimation was obtained using the method mentioned in Section 2.1. Because of the complex shape of the statue, the gray distribution of the captured image is uneven, and the gray variation of the image will considerably affect the result of the defocus parameter estimation. In this experiment, we used the area where the grayscale value changes smoothly to estimate the defocus parameters. The average value of the defocus parameter data marked by the red line in Fig. 1(g) is used as the estimation of the defocus parameter. As shown in Fig. 11(a), the estimated defocus parameter at the statue position is 3.15.
After substituting ${\sigma _\textrm{g}} = 3.15$ into Eq. (38), we obtained the normalized phase error for various fringe pitches. As shown in Fig. 11(b), the best measurement accuracy should be obtained when ${T_\textrm{s}} = 20$ pixels. To determine the measurement error of the 3D reconstruction results of the statue, we first used the traditional multi-step phase-shifting method (10-step phase-shifting method with ${T_\textrm{s}} = 30$ pixels) to obtain the reconstruction result. Then, we extracted two areas with smooth depth changes from this result and applied surface fitting. The difference between the measurement results and the fitting result is used as the measurement error. Reconstruction results and measurement errors are presented in Fig. 12. The best measurement accuracy is obtained at ${T_\textrm{s}} = 18$ pixels. We find that when ${T_\textrm{s}} = 18$ pixels and ${T_\textrm{s}} = 24$ pixels, the 3D reconstruction errors are nearly identical. From Fig. 11(b), it is observed that when ${T_\textrm{s}} = 18$ pixels, ${\sigma _{\textrm{norm}}} = 0.0045$ rad, and when ${T_\textrm{s}} = 24$ pixels, ${\sigma _{\textrm{norm}}} = 0.0046$ rad. The experiments above prove that the reconstruction height error can be evaluated using the normalized phase error we propose.
4. Conclusion
To summarize, we proposed a novel method for online adaptive adjustment of the fringe pitch to match the current defocus amount at the object position. The proposed method does not require any calibration tools for defocus-distance calibration: defocus estimation and online fringe pitch adjustment are achieved using only the captured fringe images. In addition, our optimal fringe pitch estimation is equivalent to that obtained by offline calibration methods, so the method can be used flexibly in various systems. Moreover, because changes in the projector lens do not affect the effectiveness of the proposed method, it also provides a technical foundation for measurement systems that require dynamic adjustment of the projector lens to achieve high-precision, wide-range measurements.
Funding
National Natural Science Foundation of China (51575033).
Disclosures
The authors declare no conflicts of interest.
References
1. S. Zhang, “Flexible 3D shape measurement using projector defocusing: extended measurement range,” Opt. Lett. 35(7), 934–936 (2010). [CrossRef]
2. Y. Gong and S. Zhang, “Ultrafast 3-D shape measurement with an off-the-shelf DLP projector,” Opt. Express 18(19), 19743–19754 (2010). [CrossRef]
3. B. Li, Y. Wang, J. Dai, W. Lohry, and S. Zhang, “Some recent advances on superfast 3D shape measurement with digital binary defocusing techniques,” Opt. Lasers Eng. 54, 236–246 (2014). [CrossRef]
4. G. Rao, L. Song, S. Zhang, X. Yang, K. Chen, and J. Xu, “Depth-driven variable-frequency sinusoidal fringe pattern for accuracy improvement in fringe projection profilometry,” Opt. Express 26(16), 19986–20008 (2018). [CrossRef]
5. Y. Wang, H. Zhao, H. Jiang, and X. Li, “Defocusing parameter selection strategies based on PSF measurement for square-binary defocusing fringe projection profilometry,” Opt. Express 26(16), 20351–20367 (2018). [CrossRef]
6. G. A. Ayubi, J. A. Ayubi, J. M. Di Martino, and J. A. Ferrari, “Pulse-width modulation in defocused three-dimensional fringe projection,” Opt. Lett. 35(21), 3682–3684 (2010). [CrossRef]
7. A. Silva, J. L. Flores, A. Muñoz, G. A. Ayubi, and J. A. Ferrari, “Three-dimensional shape profiling by out-of-focus projection of colored pulse width modulation fringe patterns,” Appl. Opt. 56(18), 5198–5203 (2017). [CrossRef]
8. Y. Wang, S. Basu, and B. Li, “Binarized dual phase-shifting method for high-quality 3D shape measurement,” Appl. Opt. 57(23), 6632–6639 (2018). [CrossRef]
9. W. Lohry and S. Zhang, “Genetic method to optimize binary dithering technique for high-quality fringe generation,” Opt. Lett. 38(4), 540–542 (2013). [CrossRef]
10. C. Zuo, T. Tao, S. Feng, L. Huang, A. Asundi, and Q. Chen, “Micro Fourier transform profilometry (µftp): 3D shape measurement at 10,000 frames per second,” Opt. Lasers Eng. 102, 70–91 (2018). [CrossRef]
11. Z. Cai, X. Liu, H. Jiang, D. He, X. Peng, S. Huang, and Z. Zhang, “Flexible phase error compensation based on Hilbert transform in phase shifting profilometry,” Opt. Express 23(19), 25171–25181 (2015). [CrossRef]
12. D. Zheng, F. Da, Q. Kemao, and H. S. Seah, “Phase error analysis and compensation for phase shifting profilometry with projector defocusing,” Appl. Opt. 55(21), 5721–5728 (2016). [CrossRef]
13. H. Zhao, X. Diao, H. Jiang, and X. Li, “High-speed triangular pattern phase-shifting 3D measurement based on the motion blur method,” Opt. Express 25(8), 9171–9185 (2017). [CrossRef]
14. S. Lei and S. Zhang, “Digital sinusoidal fringe pattern generation: defocusing binary patterns VS focusing sinusoidal patterns,” Opt. Lasers Eng. 48(5), 561–569 (2010). [CrossRef]
15. A. Kamagara, X. Wang, and S. Li, “Optimal defocus selection based on normed Fourier transform for digital fringe pattern profilometry,” Appl. Opt. 56(28), 8014–8022 (2017). [CrossRef]
16. J. Lai, J. Li, C. He, and F. Liu, “A robust and effective phaseshift fringe projection profilometry method for the extreme intensity,” Optik 179, 810–818 (2019). [CrossRef]
17. Z. Li, Y. Shi, C. Wang, and Y. Wang, “Accurate calibration method for a structured light system,” Opt. Eng. 47(5), 053604 (2008). [CrossRef]
18. S. Zhang and P. S. Huang, “Novel method for structured light system calibration,” Opt. Eng. 45(8), 083601 (2006). [CrossRef]
19. S. Zhuo and T. Sim, “Defocus map estimation from a single image,” Pattern Recogn. 44(9), 1852–1858 (2011). [CrossRef]
20. C. Tang, C. Hou, and Z. Song, “Defocus map estimation from a single image via spectrum contrast,” Opt. Lett. 38(10), 1706–1708 (2013). [CrossRef]
21. M. Servín, J. A. Quiroga, and J. M. Padilla, Fringe Pattern Analysis for Optical Metrology: Theory, Algorithms, and Applications (Wiley-VCH, 2014).
22. B. Pan, Q. Kemao, L. Huang, and A. Asundi, “Phase error analysis and compensation for nonsinusoidal waveforms in phase-shifting digital fringe projection profilometry,” Opt. Lett. 34(4), 416–418 (2009). [CrossRef]
23. M. Servin, J. C. Estrada, and J. A. Quiroga, “The general theory of phase shifting algorithms,” Opt. Express 17(24), 21867–21881 (2009). [CrossRef]
24. C. Zuo, L. Huang, M. Zhang, L. Huang, Q. Chen, and A. Asundi, “Temporal phase unwrapping algorithms for fringe projection profilometry: a comparative review,” Opt. Lasers Eng. 85, 84–103 (2016). [CrossRef]