Optica Publishing Group

Defocused projection model for phase-shifting profilometry with a large depth range

Open Access

Abstract

Phase-shifting 3D profilometry is widely combined with defocused projection, but the accuracy of defocused projection can fall far below expectations, especially in large-depth-range measurement. In this paper, a new defocus-induced error related to the shape of the measured object is pinpointed, and a novel defocused projection model is established to cope with such an error and improve the accuracy of defocusing phase-shifting profilometry. Supplemented with a specialized calibration and reconstruction procedure, the phase is well corrected to obtain accurate measurement results. Furthermore, the impact of the defocus-induced error is analyzed through simulations, and the feasibility of our method is verified by experiments. Faced with issues involving a large measurement range, the proposed method is expected to give a competitive performance.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

At present, 3D profilometry of dynamic objects usually adopts spatial coding [1], laser speckle [2], or time of flight (ToF) [3], whose measurement accuracy is relatively poor. When high accuracy is demanded, phase-shifting profilometry methods [4,5] can be alternatives: they offer higher accuracy and resolution but consume much more exposure time due to their time-division-multiplexing nature.

In order to apply these high-accuracy methods to the measurement of dynamic objects, many studies have focused on shortening the exposure time of phase-shifting profilometry, among which defocused projection is widely studied. For example, Lei and Zhang [6] adopted binary fringe patterns to improve the projection speed of DLP projectors and briefly analyzed the phase error at different levels of defocusing. Li et al. [7] analyzed the point spread function (PSF) of defocused projection and proposed a calibration strategy for defocused projectors. Merner et al. [8] used low-order polynomial functions to calibrate the defocusing measurement system, which does not require estimating the parameters of the projector.

Building on defocused projection, many papers further reduced the number of projection patterns or improved the phase quality. For example, An et al. [9] decreased the number of required Gray-code patterns by using the minimum phase map to restrict the range of the phase. Hyun and Zhang [10] used only five binary fringe patterns to solve the absolute phase. Garnica et al. [11] proposed a binarized dual phase-shifting algorithm to generate high-quality phase maps, and Wang et al. [12] proposed an enhanced version using just three phase-shifting binary patterns.

To reduce the phase error of defocused projection, some studies addressed the design of projection patterns. Zuo et al. [13] proposed optimized pulse width modulation (OPWM) patterns. Wang et al. [14] compared three existing pattern design algorithms and concluded that the OPWM proposed in [15] performs best. Silva et al. [16] proposed color-based binary fringe patterns with lower harmonic amplitudes. Lv et al. [17] and Silva et al. [18] proposed algorithms to design dithered binary fringe patterns, which can further improve the phase quality.

Even so, the accuracy of defocused projection is not satisfactory, especially over a large depth range, and therefore some studies have tried to deal with the error of defocused projection over large measurement ranges. For example, Zhang [19] presented a technique to extend the measurement range and Wu et al. [20] proposed shifting Gray-code patterns, but both methods mainly handle the unwrapping error caused by blurred Gray codes under severe defocus. Xu et al. [21] found that the error of defocused projection varies with the defocusing level, but they did not analyze the mechanism of this error and directly used a lookup table (LUT) to model the error under different measurement ranges.

On the other hand, aiming at the adjustment of defocusing levels, Wang et al. [22] presented defocusing parameter selection strategies that manipulate the defocusing degree by measuring the point spread function (PSF) of the projector. Similarly, Han et al. [23] and Kamagara et al. [24] proposed methods to adjust the projector so that the projected pattern is at the optimal defocusing degree.

Different from these previous studies, we propose a new idea to improve the accuracy of defocusing phase-shifting profilometry. Through further analysis of the defocused projection optical system, we pinpoint a new error in the defocusing system that is related to the shape of the measured object. Thereupon, a novel defocused projection model coping with this kind of error is established. Experiments show that this model can further improve the accuracy of both calibration and reconstruction in a defocused projection 3D measurement system and is suitable for phase-shifting profilometry with a large depth range.

The main contributions of this paper are as follows:

1) The traditional defocused projection model is analyzed and an error induced by the local shape of the measured object under defocusing is pinpointed.

2) A new model coping with this defocus-induced error is established, along with specialized calibration and reconstruction procedures.

3) The impact of the defocus-induced error is analyzed through simulations, and the feasibility of our method is verified by experiments.

2. Principles

2.1 Analysis of the traditional projecting model

Defocused projection models have been widely used in high-speed 3D measurement. However, most existing methods are based on the defocused projection model introduced in [6], in which the defocusing process is regarded as applying a low-pass filter to the focused projection; the filter, also known as the point spread function (PSF), renders binary fringes into sinusoidal fringes. Generally, the effect of a filter can be divided into two aspects, amplitude and phase, and the former is widely studied in defocusing phase-shifting measurement. Taking the conclusions in [6,7] as an example, severe defocus may lead to low fringe modulation. However, existing studies pay little attention to the phase; most assume that the PSF of defocused projection has no effect on the phase at any position, at any distance, and for any measured object.

This assumption is appropriate when the surface of the measured object is approximately parallel to the focal plane of the projector, as depicted in Fig. 1(a). However, in practical applications this condition is unlikely to be satisfied, and thus the accuracy of defocused projection can fall far below expectations.


Fig. 1. The comparison between (a) measurement of a parallel plane and (b) measurement of an angled plane. The generation principle of the asymmetric PSF and the resulting defocus-induced phase error are also displayed.


Simply put, in phase-shifting measurement we pay more attention to the phase of the fringes, which usually remains unchanged under a symmetric PSF. However, an asymmetric PSF, which commonly arises in defocused projection, can result in a small deviation of the phase-shifting fringe pattern.

To make this easier to explain, we assume that the object to be measured is a plane and consider two cases: 1) the measured plane is parallel to the focal plane; 2) the measured plane is angled. The corresponding light paths are depicted in Fig. 1.

It can be seen from Fig. 1(a) that, for a lens imaging system, light emitted from the whole pupil (Circle $\mathit {OGH}$) converges to Point $M$. When the measured plane is parallel to the focal plane, the PSF is an isotropic filter, which is the case usually assumed in traditional defocusing models. However, when the measured plane is slanted, as shown in Fig. 1(b), one side of the measured plane gets closer to the focal plane while the other side moves farther from it, resulting in an asymmetric PSF. When this kind of filter is applied to the projected pattern, the phase of the fringes shifts slightly. Without taking this phenomenon into account, traditional methods introduce additional errors in both the calibration and measurement procedures.

2.2 Modified defocused projection model

We have briefly explained the generation principle of the asymmetric PSF. In this subsection, we make a further quantitative analysis of each factor that might affect the PSF.

In Fig. 2, $M({x_M},{y_M},{z_M})$ and $N({x_N},{y_N},{z_N})$ are points on the focal plane and the measured plane, respectively, with the origin at $O$, the optical center of the projector, and the defocusing degree is determined by $k$ as $N = kM$. With slopes $m_x$, $m_y$, the equation of the measured plane can be expressed as:

$$z = {m_x}(x - {x_N}) + {m_y}(y - {y_N}) + {z_N}$$


Fig. 2. The influence of the measured plane angle on PSF under defocused projection and the illustration of geometric relations. The illumination distribution on surface $\mathit {CD}$ is calculated to obtain the generally defined PSF.


The parametric equation of Cone $\mathit {MGH}$ is:

$$\left\{ \begin{array}{c} x = (1 - p){x_M} + pr\cos (\theta ) \hfill \\ y = (1 - p){y_M} + pr\sin (\theta ) \hfill \\ z = (1 - p){z_M} \hfill \\ \end{array} \right.$$
where $p$, $r$, $\theta$ are parameters and each $(r,\theta )$ corresponds to a point on the lens of the projector. The position ${\mathit {PSF}_x}(r,\theta )$, ${\mathit {PSF}_y}(r,\theta )$ and the illumination ${\mathit {PSF}_i}(r,\theta )$ of the light passing through each point $(r,\theta )$ should be tracked so as to plot the image of the PSF as $({\mathit {PSF}_x}, {\mathit {PSF}_y}, {\mathit {PSF}_i})$.

Substituting (2) into (1) and solving for $p$ as $p = {p_{\mathit {AB}}}(r,\theta )$, the parametric equation of Ellipse $\mathit {AB}$ can be expressed as:

$$\left\{ \begin{array}{c} x = (1 - {p_{\mathit{AB}}}(r,\theta )){x_M} + {p_{\mathit{AB}}}(r,\theta )r\cos (\theta ) \hfill \\ y = (1 - {p_{\mathit{AB}}}(r,\theta )){y_M} + {p_{\mathit{AB}}}(r,\theta )r\sin (\theta ) \hfill \\ z = (1 - {p_{\mathit{AB}}}(r,\theta )){z_M} \hfill \\ \end{array} \right.$$
The above formula is simply transformed to derive the position, in units of pixels, of the light passing through point $(r,\theta )$ as:
$$\begin{gathered} {\mathit{PSF}_x}(r,\theta ) = (\frac{x}{z} - \frac{{{x_M}}}{{{z_M}}}){f_x} \hfill \\ {\mathit{PSF}_y}(r,\theta ) = (\frac{y}{z} - \frac{{{y_M}}}{{{z_M}}}){f_y} \hfill \\ \end{gathered}$$
where ${f_x}$, ${f_y}$ are the parameters on the diagonal of the intrinsic matrix of the projector.
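As a numerical illustration of Eqs. (1)–(4) (our own sketch, not the paper's implementation), the ray through each lens point $(r,\theta)$ can be intersected with the measured plane and projected to pixel coordinates; substituting (2) into (1) gives a linear equation in $p$:

```python
import numpy as np

def psf_position(r, theta, M, N, mx, my, fx, fy):
    """Trace the ray through lens point (r cos(theta), r sin(theta), 0)
    and focal-plane point M, intersect it with the measured plane
    z = mx*(x - xN) + my*(y - yN) + zN (Eq. 1), and return the offset
    (PSF_x, PSF_y) from the chief ray in pixels (Eq. 4)."""
    xM, yM, zM = M
    xN, yN, zN = N
    c, s = r * np.cos(theta), r * np.sin(theta)
    # Substituting Eq. (2) into Eq. (1) yields  zM - p*zM = C0 + p*C1:
    C0 = mx * (xM - xN) + my * (yM - yN) + zN
    C1 = mx * (c - xM) + my * (s - yM)
    p = (zM - C0) / (zM + C1)          # p_AB(r, theta), used in Eq. (3)
    x = (1 - p) * xM + p * c
    y = (1 - p) * yM + p * s
    z = (1 - p) * zM
    return (x / z - xM / zM) * fx, (y / z - yM / zM) * fy
```

For a parallel plane ($m_x = m_y = 0$) this footprint is a symmetric disk; a nonzero slope skews it, which is exactly the asymmetry discussed above.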

The calculation of the illumination ${\mathit {PSF}_i}(r,\theta )$ is based on the conservation of luminous flux. In Fig. 2, the luminous flux $\Phi$ through surface $\mathit {CD}$, surface $\mathit {AB}$, and surface $\mathit {GH}$ is equal:

$$\begin{aligned} {\Phi _{\mathit{CD}}} & = \int_{{\sigma _{\mathit{CD}}} \in {D_{\mathit{CD}}}} {{I_{\mathit{CD}}}d{\sigma _{\mathit{CD}}}} \\ & = \int_{{\sigma _{\mathit{AB}}} \in {D_{\mathit{AB}}}} {{I_{\mathit{AB}}}d{\sigma _{\mathit{AB}}}} = {\Phi _{\mathit{AB}}} \\ & = \int_{{\sigma _{\mathit{GH}}} \in {D_{\mathit{GH}}}} {{I_{\mathit{GH}}}d{\sigma _{\mathit{GH}}}} = {\Phi _{\mathit{GH}}} \end{aligned}$$
where $I_{\mathit {AB}}$, $I_{\mathit {CD}}$, and $I_{\mathit {GH}}$ are the illumination distributions on surface $\mathit {AB}$, surface $\mathit {CD}$, and surface $\mathit {GH}$, and $D_{\mathit {AB}}$, $D_{\mathit {CD}}$, and $D_{\mathit {GH}}$ are the corresponding integration domains (the effective areas through which light passes).

The parametric equation of Ellipse $\mathit {CD}$ can be derived from geometric relations as:

$$\left\{ \begin{array}{c} x = {x_N} + {p_{\mathit{AB}}}(r,\theta ){p_{\mathit{CD}}}(r,\theta )r\cos (\theta ) \hfill \\ y = {y_N} + {p_{\mathit{AB}}}(r,\theta ){p_{\mathit{CD}}}(r,\theta )r\sin (\theta ) \hfill \\ z = {z_N} \hfill \\ \end{array} \right.$$
where
$${p_{\mathit{CD}}}(r,\theta ) = \frac{k}{{1 - {p_{\mathit{AB}}}(r,\theta )}}$$
Then the Jacobian determinant of (6) is calculated as:
$$J = \left| \begin{array}{cc} \frac{\partial x}{\partial r} & \frac{\partial x}{\partial \theta } \\ \frac{\partial y}{\partial r} & \frac{\partial y}{\partial \theta } \end{array} \right|$$
According to (5), the ratio of illumination of each area element on surface $\mathit {CD}$ to that on surface $\mathit {GH}$ is equal to the ratio of $d{\sigma _{\mathit {GH}}}$ to $d{\sigma _{\mathit {CD}}}$. Considering $d{\sigma _{\mathit {GH}}} = rdrd\theta$ and $d{\sigma _{\mathit {CD}}} = Jdrd\theta$, the illumination can be expressed as:
$${\mathit{PSF}_i}(r,\theta ) = \frac{d{\sigma _{\mathit{GH}}}}{d{\sigma _{\mathit{CD}}}}\mathit{PSF}(r) = \frac{r}{J}\mathit{PSF}(r)$$
where $\mathit {PSF}(r)$ is the point spread function of the traditional defocused projection model.

By now, the image of the PSF can be plotted as $({\mathit {PSF}_x}, {\mathit {PSF}_y}, {\mathit {PSF}_i})$, where ${\mathit {PSF}_i}$ is the illumination of pixel $({\mathit {PSF}_x}, {\mathit {PSF}_y})$ for each $(r,\theta )$.
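Equations (6)–(9) can likewise be evaluated numerically. The sketch below is our own illustrative code: the Jacobian of Eq. (8) is approximated by central finite differences, and the in-focus $\mathit{PSF}(r)$ is passed in as an arbitrary callable (e.g., a Gaussian), which is an assumption of this sketch rather than part of the paper:

```python
import numpy as np

def ellipse_cd(r, theta, M, N, mx, my, k):
    """Point (x, y) on Ellipse CD for lens coordinates (r, theta), Eq. (6)."""
    xM, yM, zM = M
    xN, yN, zN = N
    c, s = r * np.cos(theta), r * np.sin(theta)
    C0 = mx * (xM - xN) + my * (yM - yN) + zN
    C1 = mx * (c - xM) + my * (s - yM)
    p_ab = (zM - C0) / (zM + C1)       # p_AB(r, theta), from Eqs. (1)-(2)
    p_cd = k / (1.0 - p_ab)            # Eq. (7)
    return xN + p_ab * p_cd * c, yN + p_ab * p_cd * s

def psf_illumination(r, theta, M, N, mx, my, k, psf_r, h=1e-5):
    """PSF_i(r, theta) = (r / J) * PSF(r), Eq. (9), with the Jacobian
    determinant of Eq. (8) estimated by central finite differences."""
    xr1, yr1 = ellipse_cd(r + h, theta, M, N, mx, my, k)
    xr0, yr0 = ellipse_cd(r - h, theta, M, N, mx, my, k)
    xt1, yt1 = ellipse_cd(r, theta + h, M, N, mx, my, k)
    xt0, yt0 = ellipse_cd(r, theta - h, M, N, mx, my, k)
    dx_dr, dy_dr = (xr1 - xr0) / (2 * h), (yr1 - yr0) / (2 * h)
    dx_dt, dy_dt = (xt1 - xt0) / (2 * h), (yt1 - yt0) / (2 * h)
    J = abs(dx_dr * dy_dt - dx_dt * dy_dr)
    return (r / J) * psf_r(r)
```

For a parallel plane, $p_{AB} = 1-k$ and $p_{CD} = 1$, so $J = (1-k)^2 r$ and the illumination reduces to a uniform rescaling of $\mathit{PSF}(r)$, consistent with the isotropic case.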

The above formulas constitute a theoretical model and are used in the Simulations section. However, in an actual projection system the aperture is not always circular and the lens structure is much more complex, which does not conform to our idealized model. Nevertheless, the form of the error is similar, and therefore, for practical applications, we propose a strategy that replaces the complex calculation of the PSF with a simple neural network whose parameters are learned during calibration.

Specifically, in practical applications we care about the phase of the fringes rather than the image of the PSF, so according to the factors analyzed above that might influence the phase, we can express the phase deviation as:

$$du,dv = f(x,y,z,m_x,m_y)$$
where $m_x$ and $m_y$, the local slopes at point $(x, y, z)$, can be obtained by a simple transformation of the normal vector. $du$ and $dv$ are the compensatory phases that should be added to the projector phases $u$ and $v$ to correct the defocus-induced error. $f$ is a nonlinear function implemented by a multi-layer perceptron (MLP) with five inputs, two outputs, two hidden layers of 16 and 8 nodes, and the tanh activation function.

The MLP can be expressed as:

$$\mathit{mlp}(x) = {W_3}\tanh ({W_2}\tanh ({W_1}x + {b_1}) + {b_2})$$
where $W_1$, $W_2$, $W_3$, $b_1$, $b_2$ are network parameters, among which $W_3$ is initialized from the normal distribution $\mathcal {N}(1,10^{-6})$ and the other parameters are initialized from $\mathcal {N}(1,10^{-1})$. The Adam optimizer [25] is then used to train the network with an initial learning rate of $10^{-5}$, multiplied by a factor of 0.1 every 40 epochs. The weight decay is set to $10^{-5}$, the batch size is 32, and the total number of epochs is 100.
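A minimal NumPy sketch of the forward pass of Eq. (11) follows (our own illustrative code; we read the second argument of $\mathcal{N}(\cdot,\cdot)$ as the variance, and the actual training loop with Adam is not shown):

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes: 5 inputs (x, y, z, mx, my) -> 16 -> 8 -> 2 outputs (du, dv)
W1 = rng.normal(1.0, np.sqrt(1e-1), (16, 5))
b1 = rng.normal(1.0, np.sqrt(1e-1), 16)
W2 = rng.normal(1.0, np.sqrt(1e-1), (8, 16))
b2 = rng.normal(1.0, np.sqrt(1e-1), 8)
W3 = rng.normal(1.0, np.sqrt(1e-6), (2, 8))  # nearly constant at init

def mlp(x):
    # du, dv = W3 tanh(W2 tanh(W1 x + b1) + b2), Eq. (11); no output bias
    return W3 @ np.tanh(W2 @ np.tanh(W1 @ x + b1) + b2)
```

Because the hidden activations are bounded by tanh and $W_3$ starts near a constant, the initial correction $(du, dv)$ is small and smooth, which matches its role as a residual phase compensation.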

We chose an MLP to fit the function simply because deep learning frameworks are currently very convenient. In fact, the network is very simple, so we believe it could also be replaced by a lookup table or a parametric model.

This strategy avoids the complex PSF calculation and loosens the applicability conditions of our model, so that it can be applied to more projection systems with lower computational complexity.

2.3 Reconstruction procedure

In the reconstruction procedure, the traditional method is first used to estimate the approximate coordinates of each pixel; then $m_x$, $m_y$ are determined by normal estimation so that the corresponding phase error can be obtained through neural-network inference to compensate the phase. The normal vectors are estimated through principal component analysis (PCA), computing the eigenvectors over the $5 \times 5$ pixel neighborhood of each point. Finally, the point cloud is reconstructed again from the corrected phase; the procedure is shown in Fig. 3(c).
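The PCA normal-estimation step can be sketched as follows (illustrative code; `local_slope` is a hypothetical helper name, and the neighborhood is passed as an $N \times 3$ point array):

```python
import numpy as np

def local_slope(neighborhood):
    """Estimate (mx, my) from an (N, 3) array of 3D points (e.g. the 5x5
    neighborhood of a pixel) via PCA: the eigenvector associated with the
    smallest eigenvalue of the scatter matrix is the surface normal."""
    pts = np.asarray(neighborhood, dtype=float)
    centered = pts - pts.mean(axis=0)
    w, v = np.linalg.eigh(centered.T @ centered)  # eigenvalues ascending
    n = v[:, 0]                                   # normal direction
    # For the plane z = mx*x + my*y + c, the normal is prop. to (-mx, -my, 1)
    return -n[0] / n[2], -n[1] / n[2]
```

The slope ratios are invariant to the sign ambiguity of the eigenvector, so no orientation step is needed here.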


Fig. 3. The overall flowchart of (a) calibration procedure, (b) training step of calibration, and (c) reconstruction procedure.


2.4 Calibration procedure

A new defocused projection model is established in this paper to cope with the defocus-induced error, and the existing calibration methods are obviously inapplicable to it. Therefore, together with our model, we propose a calibration strategy; the procedure is shown in Fig. 3(a).

As analyzed above, when the calibration board is approximately parallel to the focal plane of the projector, the influence of the defocus-induced error can be ignored. By placing the calibration board at different distances, all parallel to the focal plane of the projector, we can collect images of more than four poses and calibrate the system with the traditional method in [7]. Then we collect images of the calibration board at random angles and distances and optimize the network parameters with the Adam optimizer.

In our experiments, eight parallel poses and eight angled poses are captured for calibration. In order to carry out calibration over a larger depth range, we use two calibration boards of different sizes, the small one for closer distances and the large one for farther distances, both with an $11 \times 9$ circle grid.

A total of $16 \times 11 \times 9 = 1584$ points (16 calibration-board images, each with $11 \times 9$ points) are used as the dataset to optimize the model. This process is similar to the last step of Zhang's calibration method [26], where the parameters are optimized over all calibration points. Essentially, our training process fits the function in (11) to these 1584 points.

In addition, we use the basic calibration parameters to estimate the rotation and translation of these slanted calibration boards as initial values for training. It should be noted that, due to the defocus-induced error, this estimation is not accurate; thus $R$ and $t$ are treated as variables and are refined together with the MLP parameters during training.

3. Simulations

In this paper, an error induced by the local shape of the measured object under defocusing is pinpointed. However, the extent to which this error weakens the measurement accuracy is still unknown. In this section, simulations are conducted to observe the images of the PSF under different situations and the values of the resulting defocus-induced phase error.

We use typical projector parameters for the simulations: ${f_x}={f_y}=1117$, $z_M=500$, a resolution of $912 \times 570$, and a lens radius of $R=10$. The images of the PSF under different combinations of parameters are obtained through our defocused projection model, and thereupon the values of the phase error under different situations are estimated.

3.1 Point spread function (PSF)

Examples of the PSF under different combinations of defocusing degree $k$ and plane slope $m_x$, $m_y$ are displayed in Fig. 4. It can be seen from the figure that the PSF can be quite different under different situations.


Fig. 4. Examples of the PSF under different combinations of defocusing degree $k$ and plane slope $m_x$, $m_y$. It can be seen from the first row that the image of PSF flips horizontally when the sign of $m_x$ is opposite.


3.2 Defocus-induced phase error

Based on the method in [17], a 16-period binary fringe pattern is generated, to which the PSF is applied to obtain a sinusoidal pattern. The fast Fourier transform (FFT) is then used to analyze the phase deviation under different defocusing degrees and different slopes of the measured plane, and the results are listed in Table 1.
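This analysis can be reproduced in outline as follows (our own sketch: a plain square wave stands in for the dithered pattern of [17], and a skewed two-sided exponential stands in for an asymmetric PSF; the resulting `dphi_asym` is the phase deviation of the fundamental):

```python
import numpy as np

W, PERIODS = 912, 16  # pattern width in pixels, fringe periods

def fundamental_phase(signal):
    """Phase of the fringe fundamental (FFT bin = PERIODS)."""
    return np.angle(np.fft.fft(signal)[PERIODS])

def circular_blur(signal, kernel):
    """Circular convolution with a kernel centered at index 0."""
    return np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(kernel)))

i = np.arange(W)
binary = (np.sin(2 * np.pi * PERIODS * i / W) >= 0).astype(float)

d = i - W // 2                                    # centered kernel support
sym = np.fft.ifftshift(np.exp(-(d / 6.0) ** 2))   # symmetric (Gaussian) PSF
asym = np.fft.ifftshift(np.where(d < 0, np.exp(d / 4.0), np.exp(-d / 8.0)))
sym /= sym.sum()
asym /= asym.sum()

dphi_sym = fundamental_phase(circular_blur(binary, sym)) - fundamental_phase(binary)
dphi_asym = fundamental_phase(circular_blur(binary, asym)) - fundamental_phase(binary)
# Shift in pixels: dphi / (2*pi) * (W / PERIODS)
```

The symmetric kernel leaves the fundamental phase untouched, while the asymmetric one shifts it, illustrating why only asymmetric defocus produces the error discussed here.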


Table 1. Phase error in units of rad/pixel under different combinations of defocusing degree $k$ and plane slope $m_x$. Only results for $m_x \le 0$ are listed, since the sign of $m_x$ does not change the absolute value of the error.

According to our simulation results, the phase error can reach 0.1 pixels over a small depth range (40 cm to 50 cm) and can exceed 0.25 pixels over a large depth range. By contrast, the calibration error of many existing methods is about 0.1 pixels; taking [7] as an example, the root mean square reprojection error (RMSE) is reported as 0.15 pixels for the camera and 0.13 pixels for the projector. Under these circumstances, the phase error identified in this paper is considerable by comparison.

4. Experiments

In the experiments, the defocusing three-step phase-shifting method is adopted, the fringe patterns are generated by the method proposed in [17], and two hardware configurations are used to verify the proposed method.

Configuration 1: the camera is a Basler acA800-510um and the projector is a TI LightCrafter 4500. The frame rate is set to 500 fps, the focal plane is 500 mm from the projector, and the designed depth range is 250 mm to 500 mm.

Configuration 2: the camera is a Micron MT9T031 and the projector is a self-made OLED projection system, whose structure is shown in Fig. 5. The frame rate is set to 10 fps, the focal plane is 100 mm from the projector, and the designed depth range is 90 mm to 100 mm.


Fig. 5. The structure of the OLED projection system, consisting of an OLED panel and a lens. This projection system has the advantages of low cost, low power consumption, and small size.


The measurement range of Configuration 2 is approximately $48 \times 36 \times 10$ mm ($W \times H \times D$), which seems to be a small depth range. However, this hardware uses a large aperture to achieve low power consumption, resulting in severe defocus. Under such severe defocus, 10 mm is already a large depth range, over which our method shows advantages over traditional methods.

4.1 Configuration 1: reprojection error

First, we calibrate the system with the method in [7]; the root mean square reprojection error (RMSE) for the projector is 0.1087 pixels. Afterwards, on the same hardware, we perform the first step of our calibration procedure, namely using the parallel calibration poses with the traditional method [7] to obtain the basic calibration parameters; the resulting reprojection error is 0.0946 pixels, showing that even adopting only the strategy of keeping the calibration board parallel to the focal plane considerably reduces the error. Finally, with the MLP added and trained, the reprojection error is further reduced to 0.0881 pixels.

According to the above results, our method reduces the reprojection error, but not sharply. We believe the reasons are as follows: 1) a large proportion of the remaining error is caused by the uncertainty in locating circle centers; 2) in Zhang's calibration method [26] and its derivatives, it is the relative positions of the points on the calibration board that determine the parameters. Therefore, when all the points on a calibration board share a similar deviation, the reprojection error is unaffected (the system mistakenly concludes that the whole calibration board has moved slightly), and the defocus-induced error presented here is exactly this kind of deviation. Nevertheless, this seemingly low reprojection error does not hide the error in the actual measurement.

4.2 Configuration 1: block

A block with five steps is used to evaluate the accuracy in actual measurement. The design value of the distance between two adjacent steps is 25 mm, which is directly used as the ground truth. The true distances are not easy to obtain, but through some indirect measurements, the error of the design value is considered to be within 0.02 mm, which is sufficient to support our conclusions.

Before the measurement, we adjust the focal plane of the projector to about 500 mm and calibrate the system with [7] and with our method, respectively. The distance from the projector to the block surfaces is between 250 mm and 450 mm, and the block is placed in three positions marked as far, near, and slanted. The block is then measured; afterwards, by fitting one surface to a plane, the distance from the center point of the other surface to the fitted plane is taken as the distance between the two surfaces.
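The surface-distance evaluation described above can be sketched as follows (our own illustrative helpers, not the paper's code): one step face is fitted to a plane by least squares, and the orthogonal distance of a point on the adjacent face is computed.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through an (N, 3) point set.
    Returns a unit normal n and offset d with n @ p + d = 0 on the plane."""
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c)
    n = vt[-1]                    # direction of least variance = normal
    return n, -float(n @ c)

def point_plane_distance(p, n, d):
    """Orthogonal distance from point p to the plane n @ x + d = 0."""
    return abs(float(n @ p) + d) / np.linalg.norm(n)
```

With the block's design step height of 25 mm as ground truth, the deviation of this distance from 25 mm gives the measurement error at each pose.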

As displayed in Fig. 6, visible deformation can be spotted at the nearest point of the measurement range in the point cloud output by the traditional method. Moreover, the distance between surfaces is quantitatively measured, and the results are shown in Table 2.


Fig. 6. Reconstruction results of the block. The left panel shows the reconstruction results obtained with the calibration method in [7], and the right panel shows the results of our method on the same hardware.



Table 2. The measured distance between surfaces in units of mm and the corresponding root mean square error (RMSE). The base method refers to [7].

It can be seen from the table that the accuracy of the traditional method is consistent with the reported results only over a small depth range. Through more experiments, we observe that over a large depth range, the accuracy of the traditional method decreases seriously on both the near and the far side. In comparison, our method maintains relatively consistent accuracy over a large depth range and in different poses.

4.3 Configuration 2: sphere and cylinder

According to our model, the phase error is related to the local slope. As the slopes of a sphere and a cylinder vary with position, their measurement should be affected by the defocus-induced error, so a sphere and a cylinder are used to evaluate the effectiveness of our method. The designed diameters of the sphere and the cylinder are both 8 mm, and the diameters measured by caliper, 7.995 mm and 7.993 mm respectively, are used as the ground truth.

With fringe images acquired on the same hardware, as shown in Fig. 7(a), the point clouds are reconstructed through the traditional model [7] and through our model. The fitting results of our method are shown in Fig. 7(b), and the diameter and the standard deviation of the fitted surface are listed in Table 3. As can be seen from the results, our method improves both the accuracy of the diameter and the standard deviation.


Fig. 7. The sphere and the cylinder, including (a) one of the acquired fringe images and (b) the fitting results of our method.



Table 3. The diameter and the standard deviation (SD) of the fitted surface in units of mm. The base method refers to [7].

4.4 Configuration 2: keys

The hardware of Configuration 2 is applied in practice in a portable key-reading device. In our experiments, two keys are used to evaluate the precision of our method, as illustrated in Fig. 8. We measure the two keys at several different depths and angles so as to cover the entire depth range. By measuring the depth of the key bits relative to the reference plane, the base method [7] and our method are compared intuitively in precision. The results are shown in Fig. 9 in units of cmm, the common unit in the key industry (1 mm = 100 cmm). The maximum error of the traditional method is 9 cmm; with our proposed model, the error is well compensated to within a maximum of 4 cmm, which proves the effectiveness of our method in practical applications.


Fig. 8. Illustration of the keys. (a) and (b) show pictures of the two keys. (c) is one of the measurement results, upon which the definition of key bits is illustrated in (d).



Fig. 9. Measurement results of the two keys. The dotted lines show the ground truth of key bits and the error bars display the maximum error of each key bit in several measurements.


5. Conclusion

In this paper, a new defocused projection model is proposed to improve the accuracy and extend the depth range of defocused projection 3D measurement systems.

Through further analysis of the defocused projection optical system, we pinpoint a new error in the defocusing system that is related to the shape of the measured object. An analytical model is then established and used in simulations, showing that the proposed error does exist and is positively correlated with both the degree of defocus and the local slope of the measured object. Our theory also provides guidance for the traditional defocus-based calibration procedure: the calibration board should be kept parallel to the focal plane of the projector to reduce the defocus-induced error.

By replacing the complex analytical calculation with a simple neural network, our model is applied in practice. The experiments show that our method reduces the reprojection error in calibration and significantly extends the measurement range in actual measurement scenarios while maintaining high measurement accuracy. Practical systems often require a trade-off between power consumption, depth range, and accuracy. Taking the key measurement system mentioned in the experiments as an example, its accuracy requirements are not easy to meet with such low-power hardware. With this theoretical innovation, our method successfully solves such problems, which is of great significance in practical applications.

Funding

Science, Technology and Innovation Commission of Shenzhen Municipality (JCYJ20180306174455080); Natural Science Foundation of Jiangsu Province (BK20181269); Special Project on Basic Research of Frontier Leading Technology of Jiangsu Province (BK20192004C).

Disclosures

The authors declare that there are no conflicts of interest related to this article.

Data availability

No data were generated or analyzed in the presented research.

References

1. Z. Zhang, “Review of single-shot 3d shape measurement by phase calculation-based fringe projection techniques,” Opt. Lasers Eng. 50(8), 1097–1106 (2012). [CrossRef]  

2. D. Khan, M. A. Shirazi, and M. Y. Kim, “Single shot laser speckle based 3d acquisition system for medical applications,” Opt. Lasers Eng. 105, 43–53 (2018). [CrossRef]  

3. V. Ganapathi, C. Plagemann, D. Koller, and S. Thrun, “Real time motion capture using a single time-of-flight camera,” in 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, (IEEE, 2010), pp. 755–762.

4. S. Zhang, “Recent progresses on real-time 3d shape measurement using digital fringe projection techniques,” Opt. Lasers Eng. 48(2), 149–158 (2010). [CrossRef]  

5. C. Zuo, S. Feng, L. Huang, T. Tao, W. Yin, and Q. Chen, “Phase shifting algorithms for fringe projection profilometry: A review,” Opt. Lasers Eng. 109, 23–59 (2018). [CrossRef]  

6. S. Lei and S. Zhang, “Flexible 3-d shape measurement using projector defocusing,” Opt. Lett. 34(20), 3080–3082 (2009). [CrossRef]  

7. B. Li, N. Karpinsky, and S. Zhang, “Novel calibration method for structured-light system with an out-of-focus projector,” Appl. Opt. 53(16), 3415–3426 (2014). [CrossRef]  

8. L. Merner, Y. Wang, and S. Zhang, “Accurate calibration for 3d shape measurement system using a binary defocusing technique,” Opt. Lasers Eng. 51(5), 514–519 (2013). [CrossRef]  

9. Y. An, J. S. Hyun, and S. Zhang, “Pixel-wise absolute phase unwrapping using geometric constraints of structured light system,” Opt. Express 24(16), 18445–18459 (2016). [CrossRef]  

10. J. S. Hyun and S. Zhang, “Superfast 3d absolute shape measurement using five binary patterns,” Opt. Lasers Eng. 90, 217–224 (2017). [CrossRef]  

11. G. Garnica, M. Padilla, and M. Servin, “Dual-sensitivity profilometry with defocused projection of binary fringes,” Appl. Opt. 56(28), 7985–7989 (2017). [CrossRef]  

12. Y. Wang, S. Basu, and B. Li, “Binarized dual phase-shifting method for high-quality 3d shape measurement,” Appl. Opt. 57(23), 6632–6639 (2018). [CrossRef]  

13. C. Zuo, Q. Chen, S. Feng, F. Feng, G. Gu, and X. Sui, “Optimized pulse width modulation pattern strategy for three-dimensional profilometry with projector defocusing,” Appl. Opt. 51(19), 4477–4490 (2012). [CrossRef]  

14. Y. Wang and S. Zhang, “Comparison of the squared binary, sinusoidal pulse width modulation, and optimal pulse width modulation methods for three-dimensional shape measurement with projector defocusing,” Appl. Opt. 51(7), 861–872 (2012). [CrossRef]  

15. Y. Wang and S. Zhang, “Optimal pulse width modulation for sinusoidal fringe generation with projector defocusing,” Opt. Lett. 35(24), 4121–4123 (2010). [CrossRef]  

16. A. Silva, J. L. Flores, A. Muñoz, G. A. Ayubi, and J. A. Ferrari, “Three-dimensional shape profiling by out-of-focus projection of colored pulse width modulation fringe patterns,” Appl. Opt. 56(18), 5198–5203 (2017). [CrossRef]  

17. J. Lv, F. Da, and D. Zheng, “Projector defocusing profilometry based on sierra lite dithering algorithm,” Acta Opt. Sin. 34(3), 0312004 (2014). [CrossRef]  

18. A. Silva, A. Muñoz, J. L. Flores, and J. Villa, “Exhaustive dithering algorithm for 3d shape reconstruction by fringe projection profilometry,” Appl. Opt. 59(13), D31–D38 (2020). [CrossRef]  

19. S. Zhang, “Flexible 3d shape measurement using projector defocusing: extended measurement range,” Opt. Lett. 35(7), 934–936 (2010). [CrossRef]  

20. Z. Wu, W. Guo, and Q. Zhang, “High-speed three-dimensional shape measurement based on shifting gray-code light,” Opt. Express 27(16), 22631–22644 (2019). [CrossRef]  

21. Y. Xu, L. Ekstrand, J. Dai, and S. Zhang, “Phase error compensation for three-dimensional shape measurement with projector defocusing,” Appl. Opt. 50(17), 2572–2581 (2011). [CrossRef]  

22. Y. Wang, H. Zhao, H. Jiang, and X. Li, “Defocusing parameter selection strategies based on psf measurement for square-binary defocusing fringe projection profilometry,” Opt. Express 26(16), 20351–20367 (2018). [CrossRef]  

23. B. Han, S. Yang, and S. Chen, “Determination and adjustment of optimal defocus level for fringe projection systems,” Appl. Opt. 58(23), 6300–6307 (2019). [CrossRef]  

24. A. Kamagara, X. Wang, and S. Li, “Optimal defocus selection based on normed fourier transform for digital fringe pattern profilometry,” Appl. Opt. 56(28), 8014–8022 (2017). [CrossRef]  

25. D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” in 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, (2015).

26. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Machine Intell. 22(11), 1330–1334 (2000). [CrossRef]  




Figures (9)

Fig. 1. Comparison between (a) measurement of a parallel plane and (b) measurement of an angled plane. The generation principle of the asymmetric PSF and the resulting defocus-induced phase error are also displayed.
Fig. 2. The influence of the measured plane angle on the PSF under defocused projection, with an illustration of the geometric relations. The illumination distribution on surface $\mathit{CD}$ is calculated to obtain the generally defined PSF.
Fig. 3. The overall flowchart of (a) the calibration procedure, (b) the training step of calibration, and (c) the reconstruction procedure.
Fig. 4. Examples of the PSF under different combinations of defocusing degree $k$ and plane slopes $m_x$, $m_y$. As the first row shows, the PSF image flips horizontally when the sign of $m_x$ is reversed.
Fig. 5. The structure of the OLED projection system, consisting of an OLED panel and a lens. This projection system has the advantages of low cost, low power consumption, and small size.
Fig. 6. Reconstruction results of the block. The left panel shows the reconstruction obtained with the calibration method in [7]; the right panel shows the results of our method on the same hardware.
Fig. 7. The sphere and the cylinder, including (a) one of the acquired fringe images and (b) the fitting results of our method.
Fig. 8. Instructions for the keys. (a) and (b) show pictures of the two keys. (c) is one of the measurement results, on which the definition of the key bits is illustrated in (d).
Fig. 9. Measurement results of the two keys. The dotted lines show the ground truth of the key bits, and the error bars display the maximum error of each key bit over several measurements.

Tables (3)

Table 1. Phase error (rad/pixel) under different combinations of defocusing degree $k$ and plane slope $m_x$. Only results for $m_x \ge 0$ are listed, since the sign of $m_x$ does not change the absolute value of the error.

Table 2. The measured distance between surfaces (mm) and the corresponding root mean square error (RMSE). The base method refers to [7].

Table 3. The diameter and standard deviation (SD) of the fitted surface (mm). The base method refers to [7].

Equations (11)


$$z = m_x\,(x - x_N) + m_y\,(y - y_N) + z_N$$

$$\begin{cases} x = (1-p)\,x_M + p\,r\cos(\theta) \\ y = (1-p)\,y_M + p\,r\sin(\theta) \\ z = (1-p)\,z_M \end{cases}$$

$$\begin{cases} x = \bigl(1-p_{AB}(r,\theta)\bigr)x_M + p_{AB}(r,\theta)\,r\cos(\theta) \\ y = \bigl(1-p_{AB}(r,\theta)\bigr)y_M + p_{AB}(r,\theta)\,r\sin(\theta) \\ z = \bigl(1-p_{AB}(r,\theta)\bigr)z_M \end{cases}$$

$$\mathit{PSF}_x(r,\theta) = \left(\frac{x}{z} - \frac{x_M}{z_M}\right) f_x, \qquad \mathit{PSF}_y(r,\theta) = \left(\frac{y}{z} - \frac{y_M}{z_M}\right) f_y$$

$$\Phi_{CD} = \iint_{\sigma_{CD}} D_{CD}\, I_{CD}\, d\sigma_{CD} = \iint_{\sigma_{AB}} D_{AB}\, I_{AB}\, d\sigma_{AB} = \Phi_{AB} = \iint_{\sigma_{GH}} D_{GH}\, I_{GH}\, d\sigma_{GH} = \Phi_{GH}$$

$$\begin{cases} x = x_N + \dfrac{p_{AB}(r,\theta)}{p_{CD}(r,\theta)}\, r\cos(\theta) \\ y = y_N + \dfrac{p_{AB}(r,\theta)}{p_{CD}(r,\theta)}\, r\sin(\theta) \\ z = z_N \end{cases}$$

$$p_{CD}(r,\theta) = k_1\, p_{AB}(r,\theta)$$

$$J = \begin{vmatrix} \dfrac{\partial x}{\partial r} & \dfrac{\partial x}{\partial \theta} \\[4pt] \dfrac{\partial y}{\partial r} & \dfrac{\partial y}{\partial \theta} \end{vmatrix}$$

$$\mathit{PSF}_i(r,\theta) = \frac{d\sigma_{GH}}{d\sigma_{CD}}\, \mathit{PSF}(r) = \frac{r}{J}\, \mathit{PSF}(r)$$

$$(d_u, d_v) = f(x, y, z, m_x, m_y)$$

$$\mathit{mlp}(\mathbf{x}) = W_3 \tanh\bigl(W_2 \tanh(W_1 \mathbf{x} + b_1) + b_2\bigr)$$
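Equation (11) is a two-hidden-layer tanh network used to regress the pixel offsets $(d_u, d_v)$ from $(x, y, z, m_x, m_y)$ in Eq. (10). As a reading aid, here is a minimal NumPy sketch of that forward pass; the layer width (32), the random weight initialization, and the input ordering are illustrative assumptions, not the paper's actual trained configuration:

```python
import numpy as np

def mlp_forward(x, W1, b1, W2, b2, W3):
    """Forward pass of Eq. (11): mlp(x) = W3 tanh(W2 tanh(W1 x + b1) + b2).
    Note the output layer is linear with no bias, matching the equation."""
    h1 = np.tanh(W1 @ x + b1)
    h2 = np.tanh(W2 @ h1 + b2)
    return W3 @ h2

# Hypothetical shapes: 5 inputs (x, y, z, m_x, m_y) -> 2 outputs (d_u, d_v).
rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 5, 32, 2
W1 = rng.normal(scale=0.1, size=(n_hidden, n_in))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
b2 = np.zeros(n_hidden)
W3 = rng.normal(scale=0.1, size=(n_out, n_hidden))

# One sample of world coordinates and local plane slopes (made-up values).
features = np.array([10.0, -5.0, 500.0, 0.1, -0.2])
d_u, d_v = mlp_forward(features, W1, b1, W2, b2, W3)
```

In practice the weights would be fitted with the Adam optimizer [25] during the calibration procedure of Fig. 3(b); the sketch only shows how the trained network maps one point to its correction offsets.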