
Online fringe pitch selection for defocusing a binary square pattern projection phase-shifting method

Open Access

Abstract

A three-dimensional (3D) shape measurement system using defocused binary fringe projection can perform high-speed and flexible measurements. In this technology, determining the fringe pitch that matches the current projection defocus amount is of great significance for accurate measurement. In this paper, we propose an online binary fringe pitch selection framework. First, by analyzing the fringe images captured by the camera, the defocus amount of the projection is obtained. Next, based on an analysis of the harmonic error and the camera noise, we establish a mathematical model of the normalized phase error. The fringe pitch that minimizes this normalized phase error is then selected as the optimal fringe pitch for subsequent measurements, leading to more accurate and robust measurement results. Compared with current methods, our method does not require offline defocus-distance calibration, yet it achieves the same effect as offline calibration while being more flexible and efficient. Our experiments validate the effectiveness and practicability of the proposed method.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

The phase-shifting method based on defocused binary fringe projection is widely used in real-time 3D measurement because of its fast projection speed [1–3]. In general, the projector's lens is fixed, and the defocus amount of the projection changes along the depth direction. The best measurement accuracy is achieved when the fringe pitch matches the defocus amount. However, in actual industrial measurement processes, the fringe pitch does not always match the defocus amount at the position of a moving object. Current methods use offline calibration to determine the relationship between the defocus amount and the working distance and subsequently choose the optimal fringe pitch to match the defocus amount [4,5]. The calibration process is complex and relies on other high-precision equipment, and whenever the projector's lens is changed by vibration or human factors, the system must be recalibrated. Therefore, online adaptive fringe pitch adjustment is essential.

To reduce harmonic errors and improve the signal-to-noise ratio (SNR) of defocused binary patterns, current approaches can be divided into two categories: a) providing input patterns similar to sinusoidal fringe patterns, or b) adapting the fringe pitch to the amount of defocus. For the first idea, methods such as changing the pattern mode [6–8], using the dithering technique [9,10], using error compensation [11,12], and using motion blur [13] have been proposed to improve the measurement accuracy. Although these methods show good results for large-pitch fringes, they are not effective for fringes with a small pitch. This limits the achievable accuracy, because fringes with a smaller pitch tend to yield higher measurement accuracy [14].

The other idea is to adapt the fringe pitch to the defocus amount of projection. As the defocus amount increases, the SNR of the small pitch fringe will decrease. As the defocus amount decreases, the harmonic error of the non-sinusoidal fringes will increase. The effect of defocus on binary square patterns (BSPs) has been studied [15], but determining the projector's defocus amount is still a challenging task. Offline calibration methods use a point spread function (PSF) to estimate the projector's defocus amount [4,5]. These methods need to project a specific pattern on a flat plate at various distances to calibrate the relationship between the defocus amount and the working distance. The best fringe pitch is obtained according to the actual working distance and a mathematical model of phase error. These methods can achieve adaptive adjustment of the fringe pitch, but they are not efficient. When the projector lens is adjusted, the system needs to be recalibrated.

In this study, we propose an online method that adaptively adjusts the fringe pitch to match the current projector defocus amount. The quantitative estimation of the projection defocusing at the object position is obtained by an analysis based on the fringe images captured by the camera. The optimal fringe pitch is selected using the mathematical model of phase error that we established and the defocus estimation result.

The remainder of this paper is organized as follows: Section 2 explains the principles of our approach and presents some simulation results. Section 3 shows several experimental results to demonstrate the correctness and practicability of our method. Section 4 summarizes this paper.

2. Principle

Figure 1 shows our entire workflow. As shown in Figs. 1(a) and 1(b), the projector’s input BSPs are defocused by the projector lens, forming defocused BSPs (DBSPs). DBSPs are similar to sinusoidal fringe images because the high-frequency components of the BSP are suppressed. First, as shown in Fig. 1(c), the absolute phase can be obtained using a traditional multi-step phase-shifting method. Subsequently, as shown in Figs. 1(d), 1(e), and 1(f), a new BSP with a fixed pitch is projected, and based on the absolute phase, a projector image corresponding to this BSP can be reconstructed. As shown in Fig. 1(g), a sparse defocus map and a defocus estimate can be obtained from this projector image. This estimate is used in the mathematical model of the normalized phase error to obtain the optimal fringe pitch. Finally, as shown in Fig. 1(h), BSPs with the optimal pitch are generated and used in the next measurement process, so online adaptive adjustment of the fringe pitch is achieved. For each measurement process, reconstruction results can be obtained from the absolute phase.

Fig. 1. Principles of the proposed method. (a) Input BSPs used for phase-shifting method. (b) Captured DBSPs of (a). (c) Absolute phase map. (d) One input BSP with a fixed fringe pitch. (e) Captured DBSP of (d). (f) Projector image. (g) Sparse ${\sigma _\textrm{g}}$ map. (h) BSPs with optimal fringe pitch.

To introduce our method more clearly, we first define the following mathematical descriptions in the process of DBSP projection. Assuming that the defocusing effect of the projector can be simulated by Gaussian convolution and only considering a one-dimensional situation, a Gaussian convolution kernel with parameter ${\sigma _\textrm{g}}$ can be expressed as

$$g(x) = \frac{1}{{{\sigma _\textrm{g}}\sqrt {2\pi } }}{e^{ - \frac{{{x^2}}}{{2\sigma _\textrm{g}^\textrm{2}}}}}. $$

The defocus amount varies with the projection distance, and ${\sigma _\textrm{g}}$ varies with the defocus amount, so we use ${\sigma _\textrm{g}}$ to represent the defocus amount. We use i to represent the index of the phase-shifting patterns. ${S_i}$ and ${S_{\textrm{g}i}}$ refer to one-row data of BSP and DBSP, respectively. The relationship between ${S_i}$ and ${S_{\textrm{g}i}}$ can be expressed as follows:

$${S_{\textrm{g}i}}({x_\textrm{p}}) = {S_i}({x_\textrm{p}}) \otimes g({x_\textrm{p}}), $$
where $\textrm{p}$ represents the projector plane and ${x_\textrm{p}}$ represents the pixel coordinates on the projector. The DBSP’s corresponding ideal sinusoidal fringe pattern ${I_i}$ can be expressed as follows:
$${I_i}({x_\textrm{p}}) = A + B\cos \left( {\frac{{2\pi }}{{{T_\textrm{s}}}}{x_\textrm{p}} + \frac{{2i\pi }}{3}} \right), $$
where ${T_\textrm{s}}$ represents the pitch of ${S_i}$ and ${S_{\textrm{g}i}}$ and also the period of ${I_i}$; A and B are the average intensity and intensity modulation of ${I_i}({{x_\textrm{p}}} )$, respectively.
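To make Eqs. (1)–(3) concrete, the following minimal Python sketch (NumPy and SciPy assumed; the function and variable names are ours, not from the paper) generates one row of a BSP and defocuses it into a DBSP by Gaussian convolution:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def bsp_row(width, pitch, i, steps=3):
    """One row S_i of a binary square pattern; the i-th pattern is
    shifted by pitch*i/steps pixels to realize the +2*i*pi/3 phase
    shifts of Eq. (3)."""
    x = np.arange(width)
    return (((x + pitch * i / steps) % pitch) < pitch / 2).astype(float)

def dbsp_row(s, sigma_g):
    """Defocused BSP of Eq. (2): convolution with the Gaussian kernel
    of Eq. (1)."""
    return gaussian_filter1d(s, sigma_g, mode='wrap')

# Example with the values used in the paper's later simulation:
# T_s = 72 pixels, sigma_g = 5.0.
S0 = bsp_row(912, 72, 0)
S_g0 = dbsp_row(S0, 5.0)
```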

Using the three-step phase-shifting method as an example, after the DBSP’s projection is captured by the camera, the wrapped phase of each camera pixel, $\phi $, can be calculated as

$$\phi ={-} \arctan \left( {\frac{{\sqrt 3 ({S_{\textrm{g1}}^\textrm{c} - S_{\textrm{g2}}^\textrm{c}} )}}{{2S_{\textrm{g0}}^\textrm{c} - S_{\textrm{g1}}^\textrm{c} - S_{\textrm{g2}}^\textrm{c}}}} \right), $$
where $\textrm{c}$ represents the camera plane and $S_{\textrm{g}i}^\textrm{c}$ is the camera image of the DBSP projection ${S_{\textrm{g}i}}$. Generally, 3D reconstruction requires unwrapping the wrapped phase to obtain the absolute phase. There are many methods available for phase unwrapping; for convenience, we used a multi-frequency method [16] to obtain the absolute phase $\mathrm{\Phi }$.
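Equation (4) translates directly into code; a minimal sketch in the same style (the multi-frequency unwrapping of [16] is not reproduced here):

```python
import numpy as np

def wrapped_phase_3step(S0, S1, S2):
    """Wrapped phase of Eq. (4) from three captured fringe images with
    phase shifts 0, 2*pi/3, 4*pi/3; arctan2 keeps the result in (-pi, pi]."""
    return -np.arctan2(np.sqrt(3.0) * (S1 - S2), 2.0 * S0 - S1 - S2)
```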

2.1 Projection’s defocusing parameter determination

In this section, we introduce the method used to obtain the defocus amount at the current target position from the captured fringe images. First, we need to calibrate the camera and projector. We suppose that both the camera and the projector satisfy the pinhole camera model; the intrinsic parameter matrices of the camera (${{\boldsymbol A}_\textrm{c}}$) and the projector (${{\boldsymbol A}_\textrm{p}}$) can be expressed as

$${{\boldsymbol A}_\textrm{c}} = \left[ {\begin{array}{ccc} {{f_{\textrm{c}x}}}&0&{{u_{\textrm{c}0}}}\\ 0&{{f_{\textrm{c}y}}}&{{v_{\textrm{c}0}}}\\ 0&0&1 \end{array}} \right], $$
$${{\boldsymbol A}_\textrm{p}} = \left[ {\begin{array}{ccc} {{f_{\textrm{p}x}}}&0&{{u_{\textrm{p}0}}}\\ 0&{{f_{\textrm{p}y}}}&{{v_{\textrm{p}0}}}\\ 0&0&1 \end{array}} \right], $$
where $({{u_{\textrm{c}0}},{v_{\textrm{c}0}}} )$ and $({{u_{\textrm{p}0}},{v_{\textrm{p}0}}} )$ are the coordinates of the principal points of the camera and projector, respectively. ${f_{\textrm{c}x}}$ and ${f_{\textrm{c}y}}$ are the focal lengths along the x and y axes of the camera image plane; ${f_{\textrm{p}x}}$ and ${f_{\textrm{p}y}}$ are the focal lengths along the x and y axes of the projector’s DMD plane. In addition, we also need the extrinsic parameters between the camera and the projector, i.e., the rotation matrix ${\boldsymbol R}$ and the translation vector ${\boldsymbol t}$. They can be expressed as
$${\boldsymbol R} = \left[ {\begin{array}{ccc} {{r_{11}}}&{{r_1}_2}&{{r_1}_3}\\ {{r_2}_1}&{{r_2}_2}&{{r_2}_3}\\ {{r_3}_1}&{{r_3}_2}&{{r_3}_3} \end{array}} \right], $$
$${\boldsymbol t} = {\left[ {\begin{array}{ccc} {{t_1}}&{{t_2}}&{{t_3}} \end{array}} \right]^\textrm{T}}. $$
In our experiments, the method from [17] was used to calibrate ${{\boldsymbol A}_\textrm{c}}$, ${{\boldsymbol A}_\textrm{p}}$, ${\boldsymbol R}$ and ${\boldsymbol t}$.

To get the defocus amount of the current projection, we need to reproject the image captured by the camera onto the DMD plane of the projector. As shown in Fig. 1(f), this reprojected image, which we call the projector image, corresponds to the projector's input BSP. In [18], Zhang et al. reconstructed the projector image from fringe images in two directions for calibration. However, when the calibration parameters of the camera and the projector are known, we only need to project fringes in one direction to reconstruct the projector image, which corresponds exactly to one measurement process. Suppose that the coordinate of point Q in the camera coordinate system is ${{\boldsymbol x}_\textrm{c}} = ({{x_\textrm{c}},{y_\textrm{c}},{z_\textrm{c}}} )$, the coordinate of Q in the projector coordinate system is ${{\boldsymbol x}_\textrm{p}} = ({{x_\textrm{p}},{y_\textrm{p}},{z_\textrm{p}}} )$, the projection of Q on the camera imaging plane is $({{u_\textrm{c}},{v_\textrm{c}}} )$, and the projection of Q on the projector’s DMD is $({{u_\textrm{p}},{v_\textrm{p}}} )$. The relationship between these coordinates can be expressed as

$${z_\textrm{c}}{\left[ {\begin{array}{ccc} {{u_\textrm{c}}}&{{v_\textrm{c}}}&1 \end{array}} \right]^\textrm{T}} = {{\boldsymbol A}_\textrm{c}}{{\boldsymbol x}_\textrm{c}}, $$
$${z_\textrm{p}}{\left[ {\begin{array}{ccc} {{u_\textrm{p}}}&{{v_\textrm{p}}}&1 \end{array}} \right]^\textrm{T}} = {{\boldsymbol A}_\textrm{p}}{{\boldsymbol x}_\textrm{p}}, $$
$${{\boldsymbol x}_\textrm{p}} = {\boldsymbol R}{{\boldsymbol x}_\textrm{c}} + {\boldsymbol t}. $$
In addition, the relationship between the absolute phase and the projector’s DMD coordinates is determined according to the projector’s input. We suppose that $\mathrm{\Phi }$ changes linearly with ${u_\textrm{p}}$:
$${u_\textrm{p}} = m\Phi + n. $$
Combining Eqs. (9)–(12), ${v_\textrm{p}}$ can be calculated as
$${v_\textrm{p}} = \frac{{({k_3}{t_2} - {k_2}{t_3}){f_{\textrm{p}y}}({u_\textrm{p}} - {u_\textrm{p}}_0) + {f_{\textrm{p}x}}{f_{\textrm{p}y}}({k_2}{t_1} - {k_1}{t_2})}}{{({k_3}{t_1} - {k_1}{t_3}){f_{\textrm{p}x}}}} + {v_{\textrm{p}0}}, $$
where ${k_i} = {r_{i1}}({{u_\textrm{c}} - {u_{\textrm{c}0}}} )/{f_{\textrm{c}x}} + {r_{i2}}({{v_\textrm{c}} - {v_{\textrm{c}0}}} )/{f_{\textrm{c}y}} + {r_{i3}}$. According to Eq. (12) and Eq. (13), the correspondence between $({{u_\textrm{c}},{v_\textrm{c}}} )$ and $({{u_\textrm{p}},{v_\textrm{p}}} )$ can be established from the absolute phase $\mathrm{\Phi }$ and the calibration parameters. In our experiments, we projected a new BSP with a fixed ${T_\textrm{s}}$, from which the corresponding projector image can be reconstructed. Since the calculated $({{u_\textrm{p}},{v_\textrm{p}}} )$ are not integers, they cannot be used directly to reconstruct the projector image. We therefore first round the calculated results to integers and then apply Gaussian filtering to suppress the quantization errors. After the defocus amount has been calculated, the contribution of this Gaussian filter is removed to obtain an accurate defocus parameter estimate.
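A sketch of the per-pixel mapping of Eqs. (12) and (13) (names ours; the rounding and quantization-smoothing steps described above are omitted):

```python
import numpy as np

def camera_to_projector(uc, vc, Phi, Ac, Ap, R, t, m, n):
    """Map a camera pixel (uc, vc) with absolute phase Phi to projector
    coordinates (up, vp) via Eqs. (12) and (13). Ac, Ap, R, t are the
    calibration results ([17]); m and n encode the linear phase-to-column
    relation of Eq. (12)."""
    fcx, fcy, uc0, vc0 = Ac[0, 0], Ac[1, 1], Ac[0, 2], Ac[1, 2]
    fpx, fpy, up0, vp0 = Ap[0, 0], Ap[1, 1], Ap[0, 2], Ap[1, 2]
    t1, t2, t3 = t
    # k_i of Eq. (13): the camera ray direction expressed in the
    # projector frame, k = R @ [(uc-uc0)/fcx, (vc-vc0)/fcy, 1]^T.
    k1, k2, k3 = R @ np.array([(uc - uc0) / fcx, (vc - vc0) / fcy, 1.0])
    up = m * Phi + n  # Eq. (12)
    vp = ((k3 * t2 - k2 * t3) * fpy * (up - up0)              # Eq. (13)
          + fpx * fpy * (k2 * t1 - k1 * t2)) / ((k3 * t1 - k1 * t3) * fpx) + vp0
    return up, vp
```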

Using the projector image, the defocus amount of the projection at the BSP’s edges can be calculated. Estimating the Gaussian kernel parameter at image edges is a widely used approach in defocus estimation [19,20]; we improve upon it by exploiting the characteristics of the BSP. Because the DMD's pixels are discrete, we use discrete convolution to analyze the projector image, so Eq. (2) can also be expressed as:

$${S_{\textrm{g}i}}({x_\textrm{p}}) = \sum\limits_{x ={-} \infty }^\infty {{S_i}({x + {x_\textrm{p}}} )} g(x). $$

Suppose that a rising edge of BSP is between ${x_{\textrm{p}0}}$ and ${x_{\textrm{p}0}} + 1$. Figure 2 shows an example in which the rising edge of a BSP is between ${x_{\textrm{p}0}} = 7$ and ${x_{\textrm{p}0}} + 1 = 8$, the pitch of BSP is ${T_\textrm{s}} = 6$ pixels, and the BSP’s amplitude is 1.

Fig. 2. Schematic diagram of the relationship between BSP and DBSP.

It can be inferred from Fig. 2 that nearly half of the terms in Eq. (14) are 0. Omitting terms with ${S_i}({x + {x_\textrm{p}}} )= 0$ in Eq. (14), ${S_{\textrm{g}i}}({{x_{\textrm{p}0}}} )$ can be expressed as:

$$\begin{aligned} {S_{\textrm{g}i}}({x_{\textrm{p}0}}) &= \sum\limits_{x ={-} \infty }^\infty {{S_i}(x + {x_{\textrm{p}0}})g(x)} \\ &= \ldots + \sum\limits_{x ={-} {T_\textrm{s}} + 1}^{ - \frac{{{T_\textrm{s}}}}{2}} {U \cdot g(x) + } \sum\limits_{x = 1}^{\frac{{{T_s}}}{2}} U \cdot g(x) + \cdots \end{aligned}, $$
where U is the amplitude of ${S_i}(x )$. Similarly, ${S_{\textrm{g}i}}({{x_{\textrm{p}0}} + 1} )$ can be expressed as:
$$\begin{aligned} {S_{\textrm{g}i}}({x_{\textrm{p}0}} + 1) &= \sum\limits_{x ={-} \infty }^\infty {{S_i}(x + {x_{\textrm{p}0}} + 1)g(x)} \\ &= \ldots + \sum\limits_{x ={-} {T_\textrm{s}}}^{ - \frac{{{T_\textrm{s}}}}{2} - 1} {U \cdot g(x) + } \sum\limits_{x = 0}^{\frac{{{T_s}}}{2} - 1} U \cdot g(x) + \cdots \end{aligned}. $$
We use the gradient of the projector image to calculate the defocus parameter of the projection. Our gradient is defined as follows, so that most of the terms common to Eq. (15) and Eq. (16) cancel:
$$\begin{aligned} \nabla {S_{\textrm{g}i}}({x_{\textrm{p0}}}) &= |{{S_{\textrm{g}i}}({x_{\textrm{p0}}}) - {S_{\textrm{g}i}}({x_{\textrm{p0}}} + 1)} |\\ &= U \cdot g(0) - 2U \cdot g\left( {\frac{{{T_s}}}{2}} \right) + 2U \cdot g( - {T_s}) + \cdots \\ &= U \cdot \frac{1}{{{\sigma _\textrm{g}}\sqrt {2\pi } }} - 2U \cdot \frac{1}{{{\sigma _\textrm{g}}\sqrt {2\pi } }}{e^{ - \frac{{T_\textrm{s}^2}}{{8\sigma _\textrm{g}^2}}}} + 2U \cdot \frac{1}{{{\sigma _\textrm{g}}\sqrt {2\pi } }}{e^{ - \frac{{T_\textrm{s}^2}}{{2\sigma _\textrm{g}^2}}}}\textrm{ + } \cdots \end{aligned}. $$
For simplicity, we ignore the higher-order terms, and Eq. (17) reduces to:
$$\nabla {S_{\textrm{g}i}}({x_{\textrm{p0}}}) \approx U \cdot \frac{1}{{{\sigma _\textrm{g}}\sqrt {2\pi } }} - 2U \cdot \frac{1}{{{\sigma _\textrm{g}}\sqrt {2\pi } }}{e^{ - \frac{{T_\textrm{s}^2}}{{8\sigma _\textrm{g}^2}}}}. $$

Locating edges in the projector image is easy because the projector image corresponds to the projector’s input BSP, and $\nabla {S_{\textrm{g}i}}({{x_{\textrm{p}0}}} )$ can be calculated from the projector image using Eq. (17). With ${T_\textrm{s}}$ and $\nabla {S_{\textrm{g}i}}({{x_{\textrm{p}0}}} )$ known, ${\sigma _\textrm{g}}$ at ${x_{\textrm{p}0}}$ can be obtained numerically from Eq. (18). Taking Newton’s method as an example and assuming that ${\sigma _\textrm{g}} \ne 0$, a new equation can be constructed based on Eq. (18):

$$f({\sigma _\textrm{g}}) = \nabla {S_{\textrm{g}i}}({x_{\textrm{p0}}}) \cdot \sqrt {2\pi } \cdot {\sigma _\textrm{g}} + 2U \cdot {e^{ - \frac{{T_\textrm{s}^2}}{{8\sigma _\textrm{g}^2}}}} - U. $$
The solution to $f({{\sigma_\textrm{g}}} )= 0$ yields the required ${\sigma _\textrm{g}}$. The iterative formula for Newton’s method can be expressed as follows:
$$\sigma _g^{(i + 1)} = \sigma _g^{(i)} - \frac{{f(\sigma _g^{(i)})}}{{\frac{{df(\sigma _g^{(i)})}}{{d\sigma _g^{(i)}}}}}. $$
Using Eqs. (19) and (20), we can estimate ${\mathrm{\sigma }_\textrm{g}}$ (i.e., the defocusing parameter) at the image edge of the defocused binary fringe projection.
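The iteration of Eqs. (19) and (20) admits a compact implementation; a minimal sketch (NumPy assumed, names ours):

```python
import numpy as np

def estimate_sigma_g(grad, Ts, U=1.0, sigma0=1.0, iters=5):
    """Newton's iteration (Eq. (20)) on f(sigma_g) = 0 of Eq. (19).
    grad is the edge gradient of Eq. (18) measured on the projector
    image; U is the fringe amplitude (1 after grayscale normalization)."""
    s = sigma0
    for _ in range(iters):
        e = np.exp(-Ts**2 / (8.0 * s**2))
        f = grad * np.sqrt(2.0 * np.pi) * s + 2.0 * U * e - U
        # df/ds: d/ds [2U exp(-Ts^2/(8 s^2))] = 2U e Ts^2/(4 s^3)
        df = grad * np.sqrt(2.0 * np.pi) + 2.0 * U * e * Ts**2 / (4.0 * s**3)
        s -= f / df
    return s
```

With $\nabla {S_{\textrm{g}i}}$ evaluated from Eq. (18) for the simulation values below (${\sigma _\textrm{g}} = 5$, ${T_\textrm{s}} = 72$), this routine converges to ${\sigma _\textrm{g}} \approx 5.0$ within a few iterations.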

To verify the validity of our method, we performed two simulations. First, we generated a fixed-pitch BSP and applied Gaussian convolution with a known ${\sigma _\textrm{g}}$ to turn it into a DBSP. We then obtained $\nabla {S_{\textrm{g}i}}({{x_{\textrm{p}0}}} )$ from this DBSP and estimated ${\sigma _\textrm{g}}$ using Eq. (20). We set ${\sigma _\textrm{g}} = 5.0$ and ${T_\textrm{s}} = 72$ pixels. After four iterations with the initial value $\sigma _\textrm{g}^{(0 )} = 1.0$ (this initial value is also used in the rest of our experiments), we obtained $\sigma _\textrm{g}^{(4 )} = 5.0006$.

In addition, we tested different values of ${\sigma _\textrm{g}}$ in [1.0, 21.0] with different iteration counts to evaluate the performance of our method; the results are shown in Fig. 3(a). When ${\sigma _\textrm{g}}/{T_\textrm{s}}$ exceeds 0.3, our method becomes unstable because Eq. (18) is an approximate formula. When ${\sigma _\textrm{g}} = 20$ and ${T_\textrm{s}} = 72$, i.e., ${\sigma _\textrm{g}}/{T_\textrm{s}} = 0.278$, the amplitude of the binary fringe is small, as shown in Fig. 3(b). Such a large defocus is uncommon and occurs only when the measurement plane is sufficiently far from the focus plane of the projector. We use five iterations in the rest of our experiments, which is sufficient for most cases (${\sigma _\textrm{g}}/{T_\textrm{s}} < 0.28$).

Fig. 3. Simulation results of projector defocus amount evaluation. (a) Performance simulation. (b) Simulation schematic when ${\sigma _\textrm{g}}/{T_\textrm{s}} = 0.278$.

2.2 Optimal fringe pitch selection

The optimal fringe pitch that minimizes the measurement error is selected based on the mathematical model of the normalized phase error and the defocus amount obtained in Section 2.1. In phase-shifting technology, the theoretical phase error derives primarily from the harmonic error and the camera noise.

Harmonic error analysis methods include frequency-domain methods [21] and time-domain methods [22,23]; the latter is used in our phase error analysis. Suppose that ${a_k}$ is the $k$-th coefficient of the Fourier series of a DBSP, which can be expressed as

$${a_k} = \left\{ {\begin{array}{ll} {\frac{1}{2}}&{k = 0}\\ {\frac{2}{{k\pi }}{e^{ - 2{{\left( {\frac{{k\pi {\sigma_g}}}{{{T_\textrm{s}}}}} \right)}^2}}}}&{k = 1,2,3, \cdots } \end{array}} \right.. $$
For convenience, the wrapped phase $\phi $ is used as the function argument. $S_{\textrm{g}i}^\textrm{c}$ can then be expressed in the form of a Fourier series, as follows:
$$S_{\textrm{g}i}^\textrm{c}(\phi ) = {a_0} + \sum\limits_{k = 1}^\infty {{a_k}\cos (k(\phi + \frac{{2i\pi }}{3}))} \;. $$
The phase error $\nabla {\phi _\textrm{h}}$ caused by the harmonics, following [12], can be expressed as:
$$\begin{array}{c} \nabla {\phi _\textrm{h}}(\phi ) ={-} \arctan \left( {\frac{{\sqrt 3 ({S_{g1}^c - S_{g2}^c} )}}{{2S_{g0}^c - S_{g1}^c - S_{g2}^c}}} \right) - \phi \\ \approx \frac{{{a_5} + {a_7}}}{{{a_1}}}\sin 6\phi \end{array}. $$
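Because Eqs. (21) and (23) are closed-form, the harmonic ripple can be evaluated directly; a minimal sketch (names ours):

```python
import numpy as np

def a_k(k, sigma_g, Ts):
    """Fourier coefficient of a DBSP, Eq. (21), for k >= 1."""
    return 2.0 / (k * np.pi) * np.exp(-2.0 * (k * np.pi * sigma_g / Ts)**2)

def harmonic_ripple_amplitude(sigma_g, Ts):
    """Amplitude (a5 + a7)/a1 of the sin(6*phi) phase error of Eq. (23)."""
    return (a_k(5, sigma_g, Ts) + a_k(7, sigma_g, Ts)) / a_k(1, sigma_g, Ts)
```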

The camera noise in the phase-shifting method is generally considered to be additive Gaussian noise [21], but in our experiments we observed that the variance of the CMOS camera noise was proportional to the gray value. As shown in Fig. 4(a), we acquired 75 images of a fixed scene under the same exposure and lighting conditions, without moving the camera. We calculated the variance and average gray value for each pixel and performed a linear regression using the average as the independent variable and the variance as the dependent variable. The regression result is shown in Fig. 4(b).
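A minimal sketch of this per-pixel regression (NumPy assumed; names ours; it presumes a stack of images of a static scene has already been captured), fitting the linear model given below as Eq. (24):

```python
import numpy as np

def fit_noise_model(stack):
    """Fit the per-pixel gray-value variance against the mean over a
    stack of images (shape (n_images, H, W)) of a static scene; returns
    the scale factor f_n and bias b of the linear model."""
    mean = stack.mean(axis=0).ravel()
    var = stack.var(axis=0, ddof=1).ravel()
    f_n, b = np.polyfit(mean, var, 1)  # linear regression: var = f_n*mean + b
    return f_n, b
```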

Fig. 4. Experimental schematic and linear regression result. (a) Experimental schematic and one pixel’s gray value distribution. (b) Linear regression result of gray value and gray value variance.

The fitted relation between the variance and the image gray value can be expressed as

$$\sigma _\textrm{n}^\textrm{2} = {f_\textrm{n}} \cdot I + b, $$
where $\sigma _\textrm{n}^2$ and I are the variance and average of the gray value, respectively, ${f_\textrm{n}}$ is the scale factor, and b is the fitting bias. As shown in Fig. 4(b), the value of b is small ($b = 0.061$), so we ignore it for convenience of analysis. The fitted scale factor is ${f_\textrm{n}} = 0.045$, the fitting RMS is 0.34, and the R-squared value is 0.956. We still treat the camera noise as additive Gaussian noise, but with a variance that depends on the gray value. Denoting a Gaussian distribution with mean $\mu $ and variance ${\sigma ^2}$ by ${\cal{N}}({\mu ,{\sigma^2}} )$, the camera noise in fringe projections with different phases can be expressed as
$${{\cal{N}}_i} = {\cal{N}}({0,{f_\textrm{n}}I_i^\textrm{c}(\phi )} ), $$
where $I_i^\textrm{c}(\phi )$ is one of the ideal sinusoidal fringe patterns captured by the camera. We do not consider the harmonic error here because, in the DBSP projection process, when the camera noise becomes the main source of phase error, the contrast of the fringe is low and the harmonic error is much smaller than the noise error. Considering the camera noise, the fringe pattern captured by the camera, $I_{\textrm{n}i}^\textrm{c}(\phi )$, can be expressed as
$$I_{\textrm{n}i}^\textrm{c}(\phi ) = I_i^\textrm{c}(\phi ) + {{\cal{N}}_i}, $$
where $\textrm{c}$ also represents the camera plane.

According to Eq. (4), we directly use $I_{\textrm{n}i}^\textrm{c}(\phi )$ to evaluate the wrapped phase $\phi ^{\prime}$, and we use ${A^\textrm{c}}$ and ${B^\textrm{c}}$ to represent the average intensity and intensity modulation of $I_{\textrm{n}i}^\textrm{c}(\phi )$, respectively. $\phi ^{\prime}$ can be expressed as

$$\begin{aligned} \phi ^{\prime}(\phi ) &={-} \arctan \left( {\frac{{\sqrt 3 ({I_{\textrm{n1}}^\textrm{c}(\phi ) - I_{\textrm{n2}}^\textrm{c}(\phi )} )}}{{2I_{\textrm{n0}}^\textrm{c}(\phi ) - I_{\textrm{n1}}^\textrm{c}(\phi ) - I_{\textrm{n2}}^\textrm{c}(\phi )}}} \right)\\ &={-} \arctan \left( {\frac{{ - 3{B^\textrm{c}}\sin (\phi ) + \sqrt 3 ({{\cal{N}}_1} - {{\cal{N}}_2})}}{{3{B^\textrm{c}}\cos (\phi ) + 2{{\cal{N}}_0} - {{\cal{N}}_1} - {{\cal{N}}_2}}}} \right) \end{aligned}. $$
Therefore, the phase error caused by camera noise can be expressed as
$$\begin{aligned} \nabla {\phi _n}(\phi ) &= \phi ^{\prime}(\phi ) - \phi \\ &= \arctan \left( {\frac{{2\sin (\phi ){{\cal{N}}_0} + \sqrt 3 \cos (\phi )({{\cal{N}}_1} - {{\cal{N}}_2}) - \sin (\phi )({{\cal{N}}_1} + {{\cal{N}}_2})}}{{3{B^\textrm{c}} + 2\cos (\phi ){{\cal{N}}_0} - \sqrt 3 \sin (\phi )({{\cal{N}}_1} - {{\cal{N}}_2}) - \cos (\phi )({{\cal{N}}_1} + {{\cal{N}}_2})}}} \right) \end{aligned}. $$
Using the properties of the Gaussian distribution to simplify Eq. (28), we obtain
$$\nabla {\phi _\textrm{n}}(\phi ) = \arctan \left( {\frac{{{\cal{N}}(0,6{A^c}{f_\textrm{n}} - 3{B^\textrm{c}}{f_\textrm{n}}\cos (3\phi ))}}{{{\cal{N}}(3{B^\textrm{c}},6{A^c}{f_\textrm{n}} + 3{B^\textrm{c}}{f_\textrm{n}}\cos (3\phi ))}}} \right). $$
For the convenience of analysis, we further simplify Eq. (29). In general, ${f_\textrm{n}} \ll {B^\textrm{c}}$, so we can ignore the Gaussian distribution contained in the denominator:
$${\cal{N}}(3{B^\textrm{c}},6{A^\textrm{c}}{f_\textrm{n}} + 3{B^\textrm{c}}{f_\textrm{n}}\cos (3\phi )) \to 3{B^\textrm{c}}$$
Additionally, we ignore high-frequency terms in the numerator:
$${\cal{N}}(0,6{A^\textrm{c}}{f_\textrm{n}} - 3{B^\textrm{c}}{f_\textrm{n}}\cos (3\phi )) \to {\cal{N}}(0,6{A^\textrm{c}}{f_\textrm{n}})$$
Finally, because ${f_\textrm{n}} \ll {B^\textrm{c}}$, the argument of the arctangent is small and the arctangent can be linearized. Substituting Eq. (30) and Eq. (31) into Eq. (29), $\nabla {\phi _\textrm{n}}$ can be expressed as
$$\begin{array}{c} \nabla {\phi _\textrm{n}}(\phi ) \approx \arctan \left( {\frac{{{\cal{N}}(0,6{A^\textrm{c}}{f_\textrm{n}})}}{{3{B^c}}}} \right)\\ \approx {\cal{N}}\left( {0,\frac{{2{A^\textrm{c}}{f_\textrm{n}}}}{{3{{({B^\textrm{c}})}^2}}}} \right) \end{array}$$

Because we greatly simplified Eq. (29), we conducted a simulation experiment to verify the validity of Eq. (32). In this simulation, we added Gaussian additive noise, as in Eq. (25), to the ideal sinusoidal fringe patterns, set various ${f_\textrm{n}}$ values, and calculated the variance of the wrapped phase. As shown in Fig. 5, comparing this variance with Eq. (32), we found that when ${f_\textrm{n}} \ll {B^\textrm{c}}$ (${f_\textrm{n}} < 0.1{B^\textrm{c}}$), Eq. (32) correctly expresses the phase error caused by camera noise. Indeed, as shown in the camera noise experiment above, ${f_\textrm{n}} = 0.045$, which is much smaller than ${B^\textrm{c}}$ in most cases after grayscale normalization.
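This check can be reproduced with a short Monte Carlo sketch (NumPy assumed; names ours): it adds the intensity-dependent noise of Eq. (25) to ideal three-step fringes, recovers the wrapped phase with Eq. (4), and compares the resulting phase variance with the prediction of Eq. (32).

```python
import numpy as np

def phase_noise_check(f_n, A=0.5, B=0.5, n=200000, seed=0):
    """Return (measured wrapped-phase variance, prediction of Eq. (32)
    = 2*A*f_n/(3*B**2)) for intensity-dependent Gaussian noise."""
    rng = np.random.default_rng(seed)
    phi = rng.uniform(-np.pi, np.pi, n)
    I = [A + B * np.cos(phi + 2.0 * i * np.pi / 3.0) for i in range(3)]
    # Eq. (25): additive noise with variance f_n * I_i per pixel.
    In = [Ii + rng.normal(0.0, np.sqrt(f_n * Ii)) for Ii in I]
    phi_est = -np.arctan2(np.sqrt(3.0) * (In[1] - In[2]),
                          2.0 * In[0] - In[1] - In[2])
    err = np.angle(np.exp(1j * (phi_est - phi)))  # wrap error to (-pi, pi]
    return err.var(), 2.0 * A * f_n / (3.0 * B**2)
```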

Fig. 5. Simulation results for camera noise. (a) When ${f_\textrm{n}} \in [{0,0.5} ]$ and ${A^\textrm{c}} = {B^\textrm{c}} = 0.5$. (b) When ${f_\textrm{n}} \in [{0,0.05} ]$ and ${A^\textrm{c}} = {B^\textrm{c}} = 0.5$.

Finally, a mathematical model that considers both the harmonic error and the noise error is constructed to select the optimal fringe pitch. We assume that the wrapped phase $\phi $ is uniformly distributed over $({ - \mathrm{\pi },\mathrm{\pi }} )$. According to Eq. (23), $\nabla {\phi _\textrm{h}}(\phi )$ obeys an arcsine distribution, which can be expressed as

$$\nabla {\phi _\textrm{h}}(\phi )\sim \arcsin \left( { - \frac{{{a_5} + {a_7}}}{{{a_1}}},\frac{{{a_5} + {a_7}}}{{{a_1}}}} \right), $$
and its standard deviation can be expressed as
$${\sigma _{\phi \textrm{h}}} = \frac{{({a_5} + {a_7})}}{{\sqrt 2 {a_1}}}. $$
In addition, according to Eq. (32), the noise error $\nabla {\phi _\textrm{n}}(\phi )$ can be reduced to a normal distribution that is independent of $\phi$. In fact, because of the harmonic error of the DBSP, the projection is not an ideal sinusoidal fringe pattern, so we use ${a_0}$ and ${a_1}$ in Eq. (21) to approximate ${A^\textrm{c}}$ and ${B^\textrm{c}}$ in Eq. (32), respectively. The standard deviation of the phase error caused by camera noise can be expressed as
$${\sigma _{\phi \textrm{n}}} = \sqrt {\frac{{2{a_0}{f_\textrm{n}}}}{{3a_1^2}}}. $$
The composite standard deviation representing the total phase error is then
$${\sigma _\phi } = \sqrt {\sigma _{\phi \textrm{h}}^2 + \sigma _{\phi \textrm{n}}^2}. $$

The result of Eq. (36) is the standard deviation of $\phi $; however, for different values of ${T_\textrm{s}}$, the same standard deviation of $\phi $ produces different measurement standard deviations. To convert the phase error into actual measurement error, we normalize Eq. (36). We use ${T_\textrm{s}} = 100$ pixels as our standard pitch [24]; the normalized phase error can then be expressed as

$$\begin{array}{c} {\sigma _{\textrm{norm}}} = \frac{{{T_s}}}{{100}}{\sigma _\phi }\\ = \frac{{{T_s}}}{{100}}\sqrt {{{\left( {\frac{{{a_5} + {a_7}}}{{\sqrt 2 {a_1}}}} \right)}^2} + \frac{{2{a_0}{f_\textrm{n}}}}{{3a_1^2}}} \end{array}. $$

According to Eq. (21), ${a_1},{a_5},{a_7}$ in Eq. (37) are related only to ${\sigma _\textrm{g}}$ and ${T_\textrm{s}}$. Additionally, the camera noise parameter ${f_\textrm{n}}$ is related only to the camera. Substituting Eq. (21) into Eq. (37) yields

$${\sigma _{\textrm{norm}}}({T_\textrm{s}}) = \frac{{{T_\textrm{s}}}}{{7000}}\sqrt {50{c^{48}} + 140{c^{36}} + 98{c^{24}} + \frac{{1225}}{3}{f_\textrm{n}}{\pi ^2}{c^{ - 1}}}, $$
where $c = \textrm{exp}({ - 4{\pi^2}\sigma_\textrm{g}^2/T_\textrm{s}^2} )$ and ${\sigma _{\textrm{norm}}}$ is the objective function that needs to be minimized.

After the defocus amount at the current target position, ${\mathrm{\sigma }_\textrm{g}}$, has been determined by the method of Section 2.1, we can use Eq. (38) to obtain the optimal fringe pitch. In practice, considering that the fringe pitch of a BSP must be an integer and that Eq. (38) is complex, we use a simple bisection method to find the fringe pitch that minimizes ${\sigma _{\textrm{norm}}}$. Specifically, the optimal fringe pitch ranges from 6 to 120 pixels, and the initial value of the iteration is 60 pixels. The iteration stops when the iteration error is less than 1; the two neighboring integer pitches are then compared, and the one with the smaller ${\mathrm{\sigma }_{\textrm{norm}}}$ is chosen as the optimal fringe pitch, i.e., the fringe pitch that minimizes the current measurement error.
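A minimal sketch of this selection step (names ours); a plain scan over the 115 integer candidates stands in for the bisection described above and is equally fast at this scale. Note that ${f_\textrm{n}}$ must be expressed in the same (normalized) gray-level units as the captured fringes:

```python
import numpy as np

def sigma_norm(Ts, sigma_g, f_n):
    """Normalized phase error of Eq. (38)."""
    c = np.exp(-4.0 * np.pi**2 * sigma_g**2 / Ts**2)
    return Ts / 7000.0 * np.sqrt(50.0 * c**48 + 140.0 * c**36
                                 + 98.0 * c**24
                                 + 1225.0 / 3.0 * f_n * np.pi**2 / c)

def optimal_pitch(sigma_g, f_n, lo=6, hi=120):
    """Integer fringe pitch in [lo, hi] minimizing Eq. (38)."""
    Ts = np.arange(lo, hi + 1, dtype=float)
    return int(Ts[np.argmin(sigma_norm(Ts, sigma_g, f_n))])
```

Evaluating `sigma_norm` over the scanned pitches reproduces, by construction, U-shaped error curves of the kind shown later in Figs. 9(d) and 11(b).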

3. Experiment

We conducted three experiments with a defocused projector system. The system comprised a CMOS camera (AVT Prosilica GT 2000, resolution 2048 × 1088) with a 12 mm lens and a DLP projector (DLP LightCrafter 4500, resolution 912 × 1140). In our experiments, all projector defocus amounts were calculated from the captured DBSP with ${T_\textrm{s}} = 72$ pixels. In each experiment, we also captured an image of a full-black projection and an image of a full-bright projection for grayscale normalization.

In Section 2.1, we assumed that the projector's pixels are square, whereas the pixels of the projector used in our experiment are diamond-shaped. To rigorously demonstrate the effectiveness of our method, we use a simulation to show that the projections of square pixels and diamond pixels are the same after defocusing. We first generated a binary fringe edge rendered with diamond pixels and with square pixels. To match the actual situation as closely as possible, we upsampled both images by a factor of nine. We then applied Gaussian convolution with kernel parameter $\mathrm{\sigma }$ to both images and calculated the difference of the results. Representative simulation results are shown in Fig. 6(a), and the average absolute gray-value difference is shown in Fig. 6(b). Only when the defocus amount is minimal do the diamond-pixel results differ from the square-pixel results; as the defocus amount increases, the difference becomes smaller. Moreover, since the images were upsampled nine times, all $\mathrm{\sigma }$ values in this simulation should be divided by nine to correspond to the real scale. In our experiments, ${\mathrm{\sigma }_\textrm{g}} > 2$ (i.e., $\mathrm{\sigma }$ in this simulation is over 18), so we treat the defocused results of diamond pixels and square pixels as identical.

Fig. 6. Simulation of defocused square pixels and diamond pixels. (a) Original fringe edge image and defocused results. (b) Average gray value difference between defocused square pixels and diamond pixels. (c), (d) and (e) are the gray value difference between defocused square pixels and defocused diamond pixels in different $\mathrm{\sigma }$.

3.1 Verification experiment

In our first experiment, we measured a standard plane with BSPs of 21 different fringe pitches (6, 12, 18, ..., 126 pixels) at four different defocus amounts. The plane was placed approximately 40 cm from our system. The sparse defocus map on the plane was calculated, and its mean value was used as the defocus estimate. We analyzed the relationship between the measurement error and the normalized phase error calculated by Eq. (38), as shown in Fig. 7, taking the plane-fit RMS error as the measurement error. Our normalized phase error results follow the same trend as the actual measurement error, and the optimal fringe pitch is consistent with the actual results. To demonstrate that the proposed approach performs as well as offline calibration methods, the optimal pitch was compared with the results obtained by the empirical formula described in [5]. Figure 8 shows several details of these experimental results. As shown in Figs. 8(a) and 8(c), if the BSP pitch is too small, camera noise is the primary contributor to the measurement error, and if the pitch is too large, the harmonics significantly affect the results and periodic errors occur.
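The plane-fit RMS used here as the measurement error can be computed with a short least-squares sketch (NumPy assumed; names ours):

```python
import numpy as np

def plane_fit_rms(points):
    """RMS distance of reconstructed 3D points (an (N, 3) array) to
    their least-squares plane; the plane normal is the right singular
    vector for the smallest singular value of the centered cloud."""
    centered = points - points.mean(axis=0)
    normal = np.linalg.svd(centered, full_matrices=False)[2][-1]
    return np.sqrt(np.mean((centered @ normal)**2))
```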

Fig. 7. Comparison between minimum normalized phase error, the minimum plane fit RMS, and empirical formula result [5] for various defocus amounts. (a) ${\sigma _\textrm{g}} = 17.15$. (b) ${\sigma _\textrm{g}} = 10.97$. (c) ${\sigma _\textrm{g}} = 5.17$. (d) ${\sigma _\textrm{g}} = 2.30$.

Fig. 8. Height error of reconstruction results when ${\sigma _\textrm{g}} = 5.17$ and (a) ${T_\textrm{s}} = 18$ pixels. (b) ${T_\textrm{s}} = 36$ pixels. (c) ${T_\textrm{s}} = 54$ pixels.

3.2 Practical experiments

To verify our method’s effectiveness in practical applications, we first measured a standard ceramic sphere with a diameter of 50 mm, placed approximately 40 cm from our measurement system. The defocus estimate was obtained using the method of Section 2.1. One column of the sparse defocus map is shown in Fig. 9(c); its mean value, 3.83, is taken as the defocus estimate. According to the mathematical model of the phase error established above, the best measurement accuracy is achieved at ${T_\textrm{s}} = 24$ pixels, as shown in Fig. 9(d). In Fig. 10, we show some captured fringe images and the distribution of fitting errors after 3D reconstruction using the optimal fringe pitch and other nearby fringe pitches. Pitches near the optimal fringe pitch also yield high-precision reconstruction results, but the reconstruction using the optimal fringe pitch is the most accurate. This is consistent with the trend of the normalized phase error curve shown in Fig. 9(d).

Fig. 9. Defocus estimation on the ceramic sphere and corresponding normalized phase error. (a) Projector map. (b) Sparse ${\sigma _\textrm{g}}$ map. (c) One column data in (b), marked with a red line. (d) Normalized phase error under various fringe pitch conditions when ${\mathrm{\sigma }_\textrm{g}} = 3.83$.

Fig. 10. Captured pictures and fit error of reconstruction results under various fringe pitches. (a) Captured image when ${T_\textrm{s}} = 18$ pixels. (b) Captured image when ${T_\textrm{s}} = 24$ pixels. (c) Captured image when ${T_\textrm{s}} = 30$ pixels. (d) Captured image when ${T_\textrm{s}} = 36$ pixels. (e), (f), (g), and (h) are fit error results using the fringe pitches of (a), (b), (c), and (d), respectively.

To test the performance of our method on general objects, we measured a statue. The defocus estimate was obtained using the method of Section 2.1. Because of the statue's complex shape, the gray distribution of the captured image is uneven, and gray-level variations considerably affect the defocus parameter estimation. In this experiment, we therefore used an area where the grayscale value changes smoothly to estimate the defocus parameter. The average of the defocus parameter data marked by the red line in Fig. 1(g) is used as the estimate. As shown in Fig. 11(a), the estimated defocus parameter at the statue position is 3.15.

Fig. 11. Defocus estimation on the statue and corresponding normalized phase error. (a) One column of data in Fig. 1(g), marked with a red line. (b) Normalized phase error under various fringe pitch conditions when ${\mathrm{\sigma }_\textrm{g}} = 3.15$.

After substituting ${\sigma _\textrm{g}} = 3.15$ into Eq. (38), we obtained the normalized phase error for various fringe pitches. As shown in Fig. 11(b), the best measurement accuracy should be obtained at ${T_\textrm{s}} = 20$ pixels. To determine the measurement error of the 3D reconstruction of the statue, we first used the traditional multi-step phase-shifting method (a 10-step phase-shifting method with ${T_\textrm{s}} = 30$ pixels) to obtain a reference reconstruction. We then extracted two areas with smooth depth changes from this result and applied surface fitting; the difference between the measurement results and the fitted surface is used as the measurement error. Reconstruction results and measurement errors are presented in Fig. 12. The best measurement accuracy is obtained at ${T_\textrm{s}} = 18$ pixels. We find that at ${T_\textrm{s}} = 18$ pixels and ${T_\textrm{s}} = 24$ pixels, the 3D reconstruction errors are nearly identical. From Fig. 11(b), when ${T_\textrm{s}} = 18$ pixels, ${\mathrm{\sigma }_{\textrm{norm}}} = 0.0045$ rad, and when ${T_\textrm{s}} = 24$ pixels, ${\mathrm{\sigma }_{\textrm{norm}}} = 0.0046$ rad. These experiments demonstrate that the reconstruction height error can be evaluated using the proposed normalized phase error.

Fig. 12. Reconstruction result and measurement error using various fringe pitches. (a) Reconstruction result. (b), (c), (d), and (e) are measurement errors in the red box in (a) with ${T_\textrm{s}} = 12$ pixels, ${T_\textrm{s}} = 18$ pixels, ${T_\textrm{s}} = 24$ pixels, and ${T_\textrm{s}} = 30$ pixels, respectively. (f), (g), (h), and (i) are measurement errors in the yellow box in (a) with ${T_\textrm{s}} = 12$ pixels, ${T_\textrm{s}} = 18$ pixels, ${T_\textrm{s}} = 24$ pixels, and ${T_\textrm{s}} = 30$ pixels, respectively.

4. Conclusion

To summarize, we proposed a novel method for online adaptive adjustment of the fringe pitch to match the current defocus amount at the object position. The proposed method does not require any calibration tools for defocus-distance calibration: defocus estimation and online fringe pitch adjustment are achieved using only the captured fringe images. Our optimal fringe pitch estimate is equivalent to that obtained by offline calibration methods, so the method can be used flexibly in various systems. Moreover, because changes in the projector lens do not affect the effectiveness of the proposed method, it also provides a technical foundation for measurement systems that dynamically adjust the projector lens to achieve high-precision, wide-range measurements.

Funding

National Natural Science Foundation of China (51575033).

Disclosures

The authors declare no conflicts of interest.

References

1. S. Zhang, “Flexible 3D shape measurement using projector defocusing: extended measurement range,” Opt. Lett. 35(7), 934–936 (2010). [CrossRef]  

2. Y. Gong and S. Zhang, “Ultrafast 3-D shape measurement with an off-the-shelf DLP projector,” Opt. Express 18(19), 19743–19754 (2010). [CrossRef]  

3. B. Li, Y. Wang, J. Dai, W. Lohry, and S. Zhang, “Some recent advances on superfast 3D shape measurement with digital binary defocusing techniques,” Opt. Lasers Eng. 54, 236–246 (2014). [CrossRef]  

4. G. Rao, L. Song, S. Zhang, X. Yang, K. Chen, and J. Xu, “Depth-driven variable-frequency sinusoidal fringe pattern for accuracy improvement in fringe projection profilometry,” Opt. Express 26(16), 19986–20008 (2018). [CrossRef]  

5. Y. Wang, H. Zhao, H. Jiang, and X. Li, “Defocusing parameter selection strategies based on PSF measurement for square-binary defocusing fringe projection profilometry,” Opt. Express 26(16), 20351–20367 (2018). [CrossRef]  

6. G. A. Ayubi, J. A. Ayubi, J. M. Di Martino, and J. A. Ferrari, “Pulse-width modulation in defocused three-dimensional fringe projection,” Opt. Lett. 35(21), 3682–3684 (2010). [CrossRef]  

7. A. Silva, J. L. Flores, A. Muñoz, G. A. Ayubi, and J. A. Ferrari, “Three-dimensional shape profiling by out-of-focus projection of colored pulse width modulation fringe patterns,” Appl. Opt. 56(18), 5198–5203 (2017). [CrossRef]  

8. Y. Wang, S. Basu, and B. Li, “Binarized dual phase-shifting method for high-quality 3D shape measurement,” Appl. Opt. 57(23), 6632–6639 (2018). [CrossRef]  

9. W. Lohry and S. Zhang, “Genetic method to optimize binary dithering technique for high-quality fringe generation,” Opt. Lett. 38(4), 540–542 (2013). [CrossRef]  

10. C. Zuo, T. Tao, S. Feng, L. Huang, A. Asundi, and Q. Chen, “Micro Fourier transform profilometry (µftp): 3D shape measurement at 10,000 frames per second,” Opt. Lasers Eng. 102, 70–91 (2018). [CrossRef]  

11. Z. Cai, X. Liu, H. Jiang, D. He, X. Peng, S. Huang, and Z. Zhang, “Flexible phase error compensation based on Hilbert transform in phase shifting profilometry,” Opt. Express 23(19), 25171–25181 (2015). [CrossRef]  

12. D. Zheng, F. Da, Q. Kemao, and H. S. Seah, “Phase error analysis and compensation for phase shifting profilometry with projector defocusing,” Appl. Opt. 55(21), 5721–5728 (2016). [CrossRef]  

13. H. Zhao, X. Diao, H. Jiang, and X. Li, “High-speed triangular pattern phase-shifting 3D measurement based on the motion blur method,” Opt. Express 25(8), 9171–9185 (2017). [CrossRef]  

14. S. Lei and S. Zhang, “Digital sinusoidal fringe pattern generation: defocusing binary patterns VS focusing sinusoidal patterns,” Opt. Lasers Eng. 48(5), 561–569 (2010). [CrossRef]  

15. A. Kamagara, X. Wang, and S. Li, “Optimal defocus selection based on normed Fourier transform for digital fringe pattern profilometry,” Appl. Opt. 56(28), 8014–8022 (2017). [CrossRef]  

16. J. Lai, J. Li, C. He, and F. Liu, “A robust and effective phaseshift fringe projection profilometry method for the extreme intensity,” Optik 179, 810–818 (2019). [CrossRef]  

17. Z. Li, Y. Shi, C. Wang, and Y. Wang, “Accurate calibration method for a structured light system,” Opt. Eng. 47(5), 053604 (2008). [CrossRef]  

18. S. Zhang and P. S. Huang, “Novel method for structured light system calibration,” Opt. Eng. 45(8), 083601 (2006). [CrossRef]  

19. S. Zhuo and T. Sim, “Defocus map estimation from a single image,” Pattern Recogn. 44(9), 1852–1858 (2011). [CrossRef]  

20. C. Tang, C. Hou, and Z. Song, “Defocus map estimation from a single image via spectrum contrast,” Opt. Lett. 38(10), 1706–1708 (2013). [CrossRef]  

21. M. Servín, J. A. Quiroga, and J. M. Padilla, Fringe Pattern Analysis for Optical Metrology: Theory, Algorithms, and Applications (Wiley-VCH, 2014).

22. B. Pan, Q. Kemao, L. Huang, and A. Asundi, “Phase error analysis and compensation for nonsinusoidal waveforms in phase-shifting digital fringe projection profilometry,” Opt. Lett. 34(4), 416–418 (2009). [CrossRef]  

23. M. Servin, J. C. Estrada, and J. A. Quiroga, “The general theory of phase shifting algorithms,” Opt. Express 17(24), 21867–21881 (2009). [CrossRef]  

24. C. Zuo, L. Huang, M. Zhang, L. Huang, Q. Chen, and A. Asundi, “Temporal phase unwrapping algorithms for fringe projection profilometry: a comparative review,” Opt. Lasers Eng. 85, 84–103 (2016). [CrossRef]  


