Optica Publishing Group

Monotonicity analysis of absolute phase unwrapping by geometric constraint in a structured light system

Open Access

Abstract

The monotonicity of depth in geometric-constraint-based absolute phase unwrapping is analyzed, and a monotonic discriminant Δ(uc,vc) is presented in this paper. The sign of the discriminant determines the distance selected for the virtual plane that creates the artificial absolute phase map in a given structured light system. When Δ(uc,vc) ≥ 0 at every point on the CCD pixel coordinates, the minimum depth distance is selected for the virtual plane, whereas the maximum depth distance is selected when Δ(uc,vc) ≤ 0. Two structured light systems with different signs of the monotonic discriminant are developed, and the validity of the theoretical analysis is experimentally demonstrated.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Recently, three-dimensional (3D) shape measurement has become one of the most active research areas in optical metrology [1]. Among 3D shape measurement techniques, the structured light method possesses the advantages of high speed, high accuracy, and non-destructiveness, and has been applied in diverse fields [2–4], including reverse engineering, hyperspectral imaging, and microscopic measurement.

In structured light technology, the phase must be recovered from the fringe patterns, and the most popular methods include the phase-shifting algorithm [5], the Fourier transform method [6], the windowed Fourier transform [7], and deep learning [8]. The above fringe analysis methods yield only a wrapped phase ranging from $-\pi$ to $\pi$, so phase unwrapping is required to obtain a continuous phase distribution. Conventional phase unwrapping methods fall into two categories: spatial phase unwrapping (SPU) and temporal phase unwrapping (TPU) [9]. SPU provides a relative phase by integrating the phase gradient over a path that covers the domain of interest. Although SPU has the advantages of simplicity and convenience, it fails in the simultaneous measurement of multiple isolated objects. In contrast, TPU unwraps the wrapped phase by capturing additional images, achieving retrieval of the absolute phase. This method is suitable for measuring multiple isolated objects and suppresses noise more strongly than SPU. In a review paper, Zuo et al. [10] thoroughly studied three types of TPU methods: multi-frequency, multi-wavelength, and number-theoretical approaches.

Although TPU can retrieve the absolute unwrapped phase, it is not suitable for high-speed measurement applications because additional images must be acquired. To solve this problem, An et al. [11] proposed an absolute phase unwrapping method based on the geometric constraints of the structured light system. In the method, an artificial absolute phase map, ${\Phi _{\min }}$, at a given virtual depth plane $z = {z_{\min }}$, is first generated from the internal and external parameters of the calibrated structured light system. Then, the fringe order of each pixel in the wrapped phase map is determined by computing the difference between the wrapped phase and the artificial absolute phase. Finally, with the determined fringe orders, the absolute phase is obtained from the wrapped phase. The geometric constraint method has the advantages of high speed, a simple system setup, simultaneous measurement of multiple objects, and robustness to noise. It has also been applied to other phase recovery methods to improve the accuracy and speed of 3D measurement. For example, Hyun and Zhang [12] used it to enhance two-frequency phase-shifting. Li et al. [13] proposed a single-shot method based on the geometric constraint to overcome the limitation of Fourier transform profilometry and to reduce the motion-induced error [14]. In recent years, researchers have also proposed many methods to extend its measurement depth [9,15,16].

However, we found in our experiments that selecting the closest depth distance for the virtual plane, as in Ref. [11], to generate an artificial absolute phase map is not always correct; for some systems the maximum depth distance should be selected instead. Therefore, determining the correct depth distance for the virtual plane in a given structured light system is the first important issue in applying the geometric constraint.

To solve this issue, in this paper, based on the theory of An et al. [11], the monotonicity of the depth $z({u^c},{v^c})$ with respect to the projector pixel coordinate ${u^p}$ under the geometric constraint is theoretically investigated, and a discriminant $\Delta ({u^c},{v^c})$ determined by the internal and external parameters of the calibrated structured light system is presented. It is shown that when $\Delta ({u^c},{v^c}) \ge 0$ at every point on the CCD pixel coordinates, the minimum depth distance is selected for the virtual plane to create the artificial absolute phase map, and the maximum depth distance must be selected when $\Delta ({u^c},{v^c}) \le 0$. Several experiments with different structured light systems are conducted to verify the correctness of the proposed monotonic discriminant $\Delta ({u^c},{v^c})$.

Section 2 explains the principles of the proposed method. Section 3 presents several experimental results to validate the method. Section 4 summarizes this paper.

2. Principle

2.1. N-step phase-shifting algorithm

For the N-step phase-shifting algorithm with equal phase shifts, the nth fringe pattern can be described as

$${I_n} = {I^{\prime}}(x,y) + {I^{\prime\prime}}(x,y)\cos (\phi (x,y) + \frac{{2\pi n}}{N}),$$
where ${I^{\prime}}(x,y)$ is the average intensity, ${I^{\prime\prime}}(x,y)$ is the intensity modulation, and $\phi (x,y)$ is the phase to be solved. By solving the N equations simultaneously with a least-squares method, $\phi (x,y)$ can be written as
$$\phi (x,y) ={-} {\tan ^{ - 1}}(\frac{{\sum\limits_{n = 1}^N {{I_n}\sin (\frac{{2n\pi }}{N})} }}{{\sum\limits_{n = 1}^N {{I_n}\cos (\frac{{2n\pi }}{N})} }}).$$
Since the arctangent function only ranges from $-\pi$ to $\pi$, a phase unwrapping algorithm is needed to obtain the absolute phase value
$$\Phi (x,y) = \phi (x,y) + 2\pi k(x,y),$$
where $\Phi (x,y)$ denotes the absolute unwrapped phase of $\phi (x,y)$ and $k(x,y)$ is an integer called the fringe order [17]. Essentially, absolute phase unwrapping in the structured light system determines the fringe order $k(x,y)$ for each stripe captured by the CCD.
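As a minimal sketch of Eqs. (1)–(2), the wrapped phase can be computed directly with NumPy; the pattern stack and the least-squares arctangent below are an illustration, not the authors' implementation.

```python
import numpy as np

def wrapped_phase(patterns):
    """Least-squares wrapped phase of Eq. (2) from a stack of N equally
    phase-shifted fringe patterns I_n (n = 1..N) along the first axis."""
    N = patterns.shape[0]
    n = np.arange(1, N + 1).reshape(-1, 1)
    num = np.sum(patterns * np.sin(2 * np.pi * n / N), axis=0)
    den = np.sum(patterns * np.cos(2 * np.pi * n / N), axis=0)
    return -np.arctan2(num, den)        # wrapped into (-pi, pi]

# Synthetic check: build N = 4 patterns per Eq. (1) from a known phase.
phi_true = np.linspace(-1.0, 1.0, 9)    # ground-truth phase, within (-pi, pi)
N = 4
patterns = np.stack([0.5 + 0.3 * np.cos(phi_true + 2 * np.pi * n / N)
                     for n in range(1, N + 1)])
phi_rec = wrapped_phase(patterns)
```

The recovered phase matches the ground truth as long as the true phase lies within $(-\pi, \pi)$; outside that range it is wrapped, which is exactly why the unwrapping step of Eq. (3) is needed.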

2.2. Structured light system model and calibration

In a structured light system, the well-known linear pinhole model is usually used to describe the imaging process of a camera. Its mathematical expression is as follows,

$${s^c}{[{u^c},{v^c},1]^T} = {P^c}{[{x^w},{y^w},{z^w},1]^T},$$
$${P^c} = {A^c}[{R^c},{t^c}],$$
where ${s^c}$ is a scale factor, superscript c denotes the camera, superscript w denotes the world coordinates, ${A^c}$ is the intrinsic matrix of the camera, ${R^c}$ and ${t^c}$ are the camera’s extrinsic matrices, and ${P^c}$ is a $3 \times 4$ projection matrix that maps a point $({x^w},{y^w},{z^w})$ in world coordinates to 2D image coordinates $({u^c},{v^c})$. The projector can be regarded as an inverse camera. Similarly, the pinhole model of the projector is as follows,
$${s^p}{[{u^p},{v^p},1]^T} = {P^p}{[{x^w},{y^w},{z^w},1]^T},$$
$${P^p} = {A^p}[{R^p},{t^p}],$$
where ${s^p}$ is a scale factor, superscript p denotes the projector, ${A^p}$ is the intrinsic matrix of the projector, ${R^p}$ and ${t^p}$ are the projector’s extrinsic matrices, and ${P^p}$ is a $3 \times 4$ projection matrix that maps a point $({x^w},{y^w},{z^w})$ in world coordinates to DMD coordinates $({u^p},{v^p})$.
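As a minimal numerical sketch of Eqs. (4)–(7), the following projects a world point through a pinhole model; all parameter values here are hypothetical and serve only to illustrate the mapping.

```python
import numpy as np

# Hypothetical intrinsics and extrinsics, for illustration only.
A = np.array([[2650.0,    0.0, 632.0],
              [   0.0, 2650.0, 507.0],
              [   0.0,    0.0,   1.0]])
R, t = np.eye(3), np.zeros((3, 1))
P = A @ np.hstack([R, t])                 # 3x4 projection matrix, Eq. (5)

Xw = np.array([10.0, -5.0, 500.0, 1.0])   # homogeneous world point (mm)
s_uv = P @ Xw                             # Eq. (4): s * [u, v, 1]^T
u, v = s_uv[:2] / s_uv[2]                 # divide out the scale factor s
```

With identity extrinsics the image point reduces to $(f x/z + u_0,\, f y/z + v_0)$, here (685.0, 480.5), which makes the role of the scale factor $s = z$ explicit.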

In the actual calibration process, to enable the projector to “capture” images, Zhang and Huang [18] proposed a phase-assisted method that establishes a one-to-one correspondence between camera and projector pixels using two sets of orthogonal sinusoidal fringe patterns. The calibration target images captured by the CCD are then mapped onto the DMD plane through this correspondence.

2.3. Absolute phase unwrapping using geometric constraint

Once the structured light system is calibrated by the method described in Section 2.2, the projection matrices ${P^c}$ and ${P^p}$ can be estimated. Eqs. (4) and (6) then provide six equations with seven unknowns $({s^c},{s^p},{x^w},{y^w},{z^w},{u^p},{v^p})$. Generally, to acquire the 3D shape of an object, another constraint equation is provided by retrieving the absolute phase from the captured fringe patterns. The projector pixel coordinates ${u^p}$ and ${v^p}$ can be determined by the following equations,

$${u^p} = \frac{{{\Phi ^V}({u^c},{v^c})}}{{2\pi }}T,$$
$${v^p} = \frac{{{\Phi ^H}({u^c},{v^c})}}{{2\pi }}T,$$
where ${\Phi ^V}$ and ${\Phi ^H}$ are the absolute phase maps along the vertical and horizontal directions, respectively, and $T$ is the fringe period.

An et al. [11] proposed an alternative approach. They suggested that if a virtual plane is placed at the closest distance ${z^w} = z_{\min }^w$ from the camera coordinate system, the remaining six parameters can be solved uniquely. With Eq. (4), the corresponding $x_{\min }^w$ and $y_{\min }^w$ for each camera pixel $({u^c},{v^c})$ can be expressed as,

$$\left[ {\begin{array}{c} {x_{\min }^w}\\ {y_{\min }^w} \end{array}} \right] = {\left[ {\begin{array}{cc} {p_{31}^c{u^c} - p_{11}^c} &{p_{32}^c{u^c} - p_{12}^c}\\ {p_{31}^c{v^c} - p_{21}^c} &{p_{32}^c{v^c} - p_{22}^c} \end{array}} \right]^{ - 1}}\left[ {\begin{array}{c} {p_{14}^c - p_{34}^c{u^c} - (p_{33}^c{u^c} - p_{13}^c)z_{\min }^w}\\ {p_{24}^c - p_{34}^c{v^c} - (p_{33}^c{v^c} - p_{23}^c)z_{\min }^w} \end{array}} \right].$$
Substituting $(x_{\min }^w,y_{\min }^w,z_{\min }^w)$ into Eq. (6), the corresponding projector pixel coordinates $({u^p},{v^p})$ can be described as,
$$\begin{array}{l} {{u^p} = \frac{{p_{11}^px_{\min }^w + p_{12}^py_{\min }^w + p_{13}^pz_{\min }^w + p_{14}^p}}{{p_{31}^px_{\min }^w + p_{32}^py_{\min }^w + p_{33}^pz_{\min }^w + p_{34}^p}}}\\ {{v^p} = \frac{{p_{21}^px_{\min }^w + p_{22}^py_{\min }^w + p_{23}^pz_{\min }^w + p_{24}^p}}{{p_{31}^px_{\min }^w + p_{32}^py_{\min }^w + p_{33}^pz_{\min }^w + p_{34}^p}}} \end{array}.$$
where $p_{ij}^c$ and $p_{ij}^p$ are the matrix elements of ${P^c}$ and ${P^p}$, respectively, with subscripts i and j denoting the row and column numbers. Without loss of generality, we assume that the projected fringe patterns are along the vertical direction. The absolute phase $\Phi _{\min }^V({u^c},{v^c})$ corresponding to $z_{\min }^w$ can be determined as,
$$\Phi _{\min }^V({u^c},{v^c}) = \frac{{2\pi }}{T}{u^p}.$$
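The per-pixel chain of Eqs. (10)–(12) can be sketched as follows; the simple calibration matrices at the bottom are hypothetical (world frame coincident with the camera frame) and serve only to exercise the formulas.

```python
import numpy as np

def artificial_phase(uc, vc, z_min, Pc, Pp, T):
    """Artificial absolute phase Phi_min at camera pixel (uc, vc) for a
    virtual plane z = z_min, following Eqs. (10)-(12)."""
    # Eq. (10): solve the 2x2 system for (x_min, y_min) at depth z_min.
    M = np.array([[Pc[2, 0]*uc - Pc[0, 0], Pc[2, 1]*uc - Pc[0, 1]],
                  [Pc[2, 0]*vc - Pc[1, 0], Pc[2, 1]*vc - Pc[1, 1]]])
    b = np.array([Pc[0, 3] - Pc[2, 3]*uc - (Pc[2, 2]*uc - Pc[0, 2])*z_min,
                  Pc[1, 3] - Pc[2, 3]*vc - (Pc[2, 2]*vc - Pc[1, 2])*z_min])
    x, y = np.linalg.solve(M, b)
    # Eq. (11): project (x, y, z_min) through the projector matrix.
    up = ((Pp[0, 0]*x + Pp[0, 1]*y + Pp[0, 2]*z_min + Pp[0, 3])
          / (Pp[2, 0]*x + Pp[2, 1]*y + Pp[2, 2]*z_min + Pp[2, 3]))
    return 2 * np.pi * up / T                 # Eq. (12)

# Hypothetical calibration (world = camera frame) to exercise the formulas.
Ac = np.array([[1000., 0., 500.], [0., 1000., 400.], [0., 0., 1.]])
Pc = Ac @ np.hstack([np.eye(3), np.zeros((3, 1))])
Ap = np.array([[2000., 0., 800.], [0., 2000., 600.], [0., 0., 1.]])
Pp = Ap @ np.hstack([np.eye(3), np.array([[-100.], [0.], [0.]])])
phi_min = artificial_phase(500., 400., 500., Pc, Pp, T=40.)
```

At the camera principal point the ray passes through $(0, 0, z_{\min})$, so the resulting $u^p$ and hence $\Phi_{\min}$ can be checked by hand against Eqs. (11)–(12).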
Then, the fringe order $k({u^c},{v^c})$ can be determined as,
$$k({u^c},{v^c}) = ceil\left[ {\frac{{\Phi _{\min }^V({u^c},{v^c}) - \phi ({u^c},{v^c})}}{{2\pi }}} \right],$$
where ceil is a function that gives the nearest upper integer. Since the generated phase $\Phi _{\min }^V({u^c},{v^c})$ is an absolute phase, this method achieves absolute phase unwrapping. Essentially, under the geometric constraint the wrapped phase is unwrapped by referring to the generated phase $\Phi _{\min }^V({u^c},{v^c})$, which is an ideal, noiseless absolute phase distribution. Therefore, this method strongly suppresses noise, in contrast to TPU methods, where the phase unwrapping refers to an absolute phase distribution contaminated by noise.
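The fringe-order step of Eq. (13) reduces to a ceiling and a shift. The sketch below checks it on synthetic phases (the variable names are illustrative); it works whenever the reference map lies no more than $2\pi$ below the true absolute phase.

```python
import numpy as np

def unwrap_with_reference(phi_wrapped, phi_min):
    """Absolute phase unwrapping by geometric constraint: Eq. (13) picks the
    fringe order k that lifts the wrapped phase to just above Phi_min."""
    k = np.ceil((phi_min - phi_wrapped) / (2 * np.pi))
    return phi_wrapped + 2 * np.pi * k

# Synthetic check: wrap a known absolute phase, then recover it with a
# reference map lying less than 2*pi below the true phase.
phi_abs = np.array([0.5, 7.0, 13.2, 20.0])      # ground-truth absolute phase
phi_w = np.angle(np.exp(1j * phi_abs))          # wrapped into (-pi, pi]
phi_ref = phi_abs - 1.0                         # ideal noiseless reference
phi_unwrapped = unwrap_with_reference(phi_w, phi_ref)
```

Because the reference is noiseless, the ceiling always lands on the correct integer; this is the noise-robustness property discussed above.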

2.4. Monotonicity analysis

In this part, we theoretically investigate the monotonicity of absolute phase unwrapping by the geometric constraint in detail and derive a monotonic discriminant that determines the correct depth distance of the virtual plane for a given structured light system.

Since ceil gives the nearest upper integer, the unwrapped phase $\Phi ({u^c},{v^c})$ is never smaller than the generated phase $\Phi _{\min }^V({u^c},{v^c})$ at any point on the CCD pixel coordinates, i.e.,

$$\Phi ({u^c},{v^c}) \ge \Phi _{\min }^V({u^c},{v^c}).$$
Moreover, the depth of any point on the tested object is never smaller than this known closest distance, i.e.,
$$z({u^c},{v^c}) \ge z_{\min }^w({u^c},{v^c}).$$
Obviously, if the two conditions of Eqs. (14) and (15) hold simultaneously at every point on the CCD pixel coordinates, the measurement depth $z({u^c},{v^c})$ must be a monotonically increasing function of the unwrapped phase $\Phi ({u^c},{v^c})$. As shown in Eq. (8), the absolute phase ${\Phi ^V}({u^c},{v^c})$ monotonically increases with the projector pixel coordinate ${u^p}$. Therefore, to determine the monotonicity of the depth $z({u^c},{v^c})$ with respect to $\Phi ({u^c},{v^c})$, we only need to investigate the variation of the depth $z({u^c},{v^c})$ with respect to ${u^p}$, i.e.
$$z({u^c},{v^c}) \to \Phi ({u^c},{v^c}) = \frac{{2\pi }}{T}{u^p} \to {u^p}.$$
Solving Eqs. (4) and (6) simultaneously leads to
$$\left[ {\begin{array}{c} {{u^c}P_{3,4}^c - P_{1,4}^c}\\ {{v^c}P_{3,4}^c - P_{2,4}^c}\\ {{u^p}P_{3,4}^p - P_{1,4}^p} \end{array}} \right] = \left[ {\begin{array}{ccc} {P_{1,1}^c - {u^c}P_{3,1}^c} &{P_{1,2}^c - {u^c}P_{3,2}^c} &{P_{1,3}^c - {u^c}P_{3,3}^c}\\ {P_{2,1}^c - {v^c}P_{3,1}^c} &{P_{2,2}^c - {v^c}P_{3,2}^c} &{P_{2,3}^c - {v^c}P_{3,3}^c}\\ {P_{1,1}^p - {u^p}P_{3,1}^p} &{P_{1,2}^p - {u^p}P_{3,2}^p} &{P_{1,3}^p - {u^p}P_{3,3}^p} \end{array}} \right]\left[ {\begin{array}{c} x\\ y\\ z \end{array}} \right].$$
To facilitate the derivation, we abbreviate the above formula as follows
$$\left[ {\begin{array}{c} {{b_1}}\\ {{b_2}}\\ {{b_3}} \end{array}} \right] = \left[ {\begin{array}{ccc} {{m_1}} &{{m_2}} &{{m_3}}\\ {{m_4}} &{{m_5}} &{{m_6}}\\ {{m_7}} &{{m_8}} &{{m_9}} \end{array}} \right]\left[ {\begin{array}{c} x\\ y\\ z \end{array}} \right],$$
$$\begin{array}{l} {{b_1} = {u^c}P_{3,4}^c - P_{1,4}^c,{b_2} = {v^c}P_{3,4}^c - P_{2,4}^c,{b_3} = {u^p}P_{3,4}^p - P_{1,4}^p,{m_1} = P_{1,1}^c - {u^c}P_{3,1}^c,}\\ {{m_2} = P_{1,2}^c - {u^c}P_{3,2}^c,{m_3} = P_{1,3}^c - {u^c}P_{3,3}^c,\,{m_4} = P_{2,1}^c - {v^c}P_{3,1}^c,{m_5} = P_{2,2}^c - {v^c}P_{3,2}^c,}\\ {{m_6} = P_{2,3}^c - {v^c}P_{3,3}^c,{m_7} = P_{1,1}^p - {u^p}P_{3,1}^p,\,{m_8} = P_{1,2}^p - {u^p}P_{3,2}^p,\,{m_9} = P_{1,3}^p - {u^p}P_{3,3}^p.} \end{array}$$
Then, the depth $z({u^c},{v^c})$ can be written as follows,
$$z = \frac{{({m_4}{m_8} - {m_5}{m_7}){b_1} + ({m_2}{m_7} - {m_1}{m_8}){b_2} + ({m_1}{m_5} - {m_2}{m_4}){b_3}}}{{({m_1}{m_9} - {m_3}{m_7}){m_5} + ({m_2}{m_7} - {m_1}{m_8}){m_6} + ({m_3}{m_8} - {m_2}{m_9}){m_4}}}.$$
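Equation (20) is Cramer's rule for the third unknown of the $3 \times 3$ system in Eq. (18): the numerator is the determinant of the coefficient matrix with its third column replaced by the right-hand side, and the denominator is the determinant itself. A quick numerical sketch with random coefficients confirms the closed form against a direct linear solve.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(3, 3))        # coefficient matrix [m1..m9] of Eq. (18)
b = rng.normal(size=3)             # right-hand side [b1, b2, b3]
(m1, m2, m3), (m4, m5, m6), (m7, m8, m9) = M
b1, b2, b3 = b

# Eq. (20): closed-form z = det(M with 3rd column -> b) / det(M).
z_closed = (((m4*m8 - m5*m7)*b1 + (m2*m7 - m1*m8)*b2 + (m1*m5 - m2*m4)*b3)
            / ((m1*m9 - m3*m7)*m5 + (m2*m7 - m1*m8)*m6 + (m3*m8 - m2*m9)*m4))

z_solve = np.linalg.solve(M, b)[2]  # third unknown of [x, y, z]
```

Both routes give the same depth, which is why the subsequent monotonicity analysis can work directly on the closed-form quotient.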
In the system calibration, the camera’s coordinate system is selected as the world coordinate system, so the projection matrix of the camera can be described as,
$${P^c} = \left[ {\begin{array}{llll} {f_x^c} &0 &{u_0^c} &0\\ 0 &{f_y^c} &{v_0^c} &0\\ 0 &0 &1 &0 \end{array}} \right],\,\,\begin{array}{l} {{m_1} = f_x^c;{m_2} = 0;{m_3} = u_0^c - {u^c};{b_1} = 0;}\\ {{m_4} = 0;{m_5} = f_y^c;{m_6} = v_0^c - {v^c};{b_2} = 0.} \end{array}$$
$f_x^c$ and $f_y^c$ describe the focal lengths of the camera lens along the x and y axes, respectively, and $(u_0^c,v_0^c)$ is the principal point coordinate. Thus, Eq. (18) can be simplified as,
$$z = \frac{{{m_1}{m_5}{b_3}}}{{{m_1}{m_5}{m_9} - {m_3}{m_5}{m_7} - {m_1}{m_6}{m_8}}}.$$
Generally, the focal lengths of the camera lens along x and y are approximately equal, so we assume ${m_1} = {m_5}$ without loss of generality. The depth expression can then be further simplified as,
$$z = \frac{{{m_1}{b_3}}}{{{m_1}{m_9} - {m_3}{m_7} - {m_6}{m_8}}}.$$
It should be noted that ${b_3},{m_7},{m_8},{m_9}$ are functions of ${u^p}$. We define two functions $g({u^p}) = {m_1}{b_3}$ and $h({u^p}) = {m_1}{m_9} - {m_3}{m_7} - {m_6}{m_8}$. Then, the derivative of the depth $z({u^c},{v^c})$ with respect to the projector pixel coordinate ${u^p}$ can be expressed as
$$\frac{{dz}}{{d{u^p}}} = \frac{{\frac{{dg}}{{d{u^p}}}h - \frac{{dh}}{{d{u^p}}}g}}{{{h^2}}}.$$
The monotonicity is determined by the sign of the numerator, since the denominator is a squared term. After calculation, the numerator (up to the positive factor $f_x^c$) is derived as
$$\Delta ({u^c},{v^c}) = f_x^c(p_{13}^pp_{34}^p - p_{33}^pp_{14}^p) + (u_0^c - {u^c})(p_{31}^pp_{14}^p - p_{11}^pp_{34}^p) + (v_0^c - {v^c})(p_{32}^pp_{14}^p - p_{12}^pp_{34}^p).$$
In this paper, we call this expression the monotonic discriminant. It is a function of the internal and external parameters of the system and of the CCD pixel coordinates. To judge its sign, the calibrated parameters are substituted into Eq. (24).

The discriminant reveals that the depth $z({u^c},{v^c})$ may also decrease monotonically with the projector pixel coordinate ${u^p}$. In that case, the maximum depth distance, rather than the minimum, should be selected to generate the absolute phase ${\Phi _{\max }}({u^c},{v^c})$ for the phase unwrapping. In summary, when $\Delta ({u^c},{v^c}) \ge 0$ at every point on the CCD pixel coordinates, the minimum depth distance is selected for the virtual plane to create the artificial absolute phase map, and the maximum depth distance is selected when $\Delta ({u^c},{v^c}) \le 0$.

3. Experiment

To verify the performance of the proposed discriminant, we develop a structured light system, shown in Fig. 1. The hardware includes a single CCD camera (DAHENG MER-131-210U3M-L) and a projector (DLP6500). The camera’s resolution is $1024 \times 1280$ pixels and the projector’s resolution is $1080 \times 1920$ pixels. The camera has a focal length of 12.5 mm and a sensor unit size of $4.8\,\mu m \times 4.8\,\mu m$. The projector has a focal length of 23 mm and a sensor unit size of $7.6\,\mu m \times 7.6\,\mu m$.


Fig. 1. Structured light system setup which meets the condition of monotonic increase of depth distance $z({u^c},{v^c})$ as projector pixel coordinate ${u^p}$ ($\Delta ({u^c},{v^c}) \ge 0$).


In this paper, the structured light system is calibrated by the method proposed in Ref. [19]. The calibration results of the system are as follows. The intrinsic parameter matrices for the camera and the projector are, respectively,

$${A^c} = \left[ {\begin{array}{ccc} {2650.16} &0 &{631.99}\\ 0 &{2650.16} &{506.73}\\ 0 &0 &1 \end{array}} \right],\,\textrm{and}\,\,\,{A^p} = \left[ {\begin{array}{ccc} {2896.53} &0 &{1002.45}\\ 0 &{2896.67} &{544.85}\\ 0 &0 &1 \end{array}} \right].$$
In determining the extrinsic parameter matrices, the camera’s coordinate system is selected as the world coordinate system. The extrinsic parameter matrices of the established measurement system are
$$[{R^c},{t^c}] = \left[ {\begin{array}{cccc} 1 &0 &0 &0 \\ 0 &1 &0 &0 \\ 0 &0 &1 & 0 \end{array}} \right]\,\textrm{and}\,\,\,[{R^p},{t^p}] = \left[ {\begin{array}{cccc} {0.9962} &{ - 0.0199} & {0.0846} &{ - 148.9366} \\ {0.0158} &{0.9987} &{0.0489} &{- 28.3887} \\ {- 0.0854} &{ - 0.0473} &{0.9952} &{122.0496} \end{array}} \right].$$
The values of the monotonic discriminant on the CCD pixel coordinates, $\Delta ({u^c},{v^c})$, are calculated by substituting these calibrated system parameters into Eq. (24); the result is shown in Fig. 2. Obviously, this is the case of $\Delta ({u^c},{v^c}) \ge 0$, indicating that the depth $z({u^c},{v^c})$ monotonically increases with the projector pixel coordinate ${u^p}$.
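This substitution takes only a few lines. The sketch below reads the coefficients of Eq. (24) as the projector-matrix entries $p_{ij}^p$ of ${P^p} = {A^p}[{R^p},{t^p}]$ and uses the calibration of this first system; the grid convention (u along columns, v along rows, zero-based) is an assumption.

```python
import numpy as np

def monotonic_discriminant(Ac, Pp, shape):
    """Evaluate Delta(uc, vc) of Eq. (24) over the whole CCD grid.
    Ac: 3x3 camera intrinsic matrix; Pp: 3x4 projector matrix;
    shape: (rows, cols) of the sensor."""
    fx, u0, v0 = Ac[0, 0], Ac[0, 2], Ac[1, 2]
    vc, uc = np.mgrid[0:shape[0], 0:shape[1]]   # pixel coordinate grids
    p = Pp
    return (fx * (p[0, 2]*p[2, 3] - p[2, 2]*p[0, 3])
            + (u0 - uc) * (p[2, 0]*p[0, 3] - p[0, 0]*p[2, 3])
            + (v0 - vc) * (p[2, 1]*p[0, 3] - p[0, 1]*p[2, 3]))

# Calibration of the first system reported above.
Ac = np.array([[2650.16, 0., 631.99], [0., 2650.16, 506.73], [0., 0., 1.]])
Ap = np.array([[2896.53, 0., 1002.45], [0., 2896.67, 544.85], [0., 0., 1.]])
Rt = np.array([[0.9962, -0.0199, 0.0846, -148.9366],
               [0.0158,  0.9987, 0.0489,  -28.3887],
               [-0.0854, -0.0473, 0.9952, 122.0496]])
Delta = monotonic_discriminant(Ac, Ap @ Rt, (1024, 1280))
all_positive = (Delta > 0).all()   # True here: z_min is the correct choice
```

For this calibration the discriminant is positive over the entire grid, consistent with Fig. 2.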


Fig. 2. Calculation result of the monotonic discriminant of the structured light system shown in Fig. 1.


In the 3D measurement experiments, the sculpture shown in Fig. 3(a) is first measured by the three-frequency phase-shifting approach. We adopt nine-, five-, and five-step phase-shifting for ${T_1} = 80\; pixels$, ${T_2} = 120\; pixels$, and ${T_3} = 270\; pixels$, respectively. Figure 3(b) shows the wrapped phase acquired from the fringe patterns of ${T_1} = 80\; pixels$. The 3D measurement result of the sculpture is shown in Fig. 3(c), where the minimum and maximum depths of the sculpture are ${z_{\min }} = 546.7465\; mm$ and ${z_{\max }} = 595.6813\; mm$, respectively.


Fig. 3. 3D shape measurement by the three-frequency phase-shifting approach. (a) Photograph of the measured sculpture; (b) Wrapped phase acquired from the fringe patterns of ${T_1} = 80\; pixels$; (c) 3D measurement result by this approach.


Then, the minimum and maximum distances of the object are separately adopted to generate the artificial phase map for absolute phase unwrapping by the geometric constraint. Concretely, the artificial phase map ${\Phi _{\min }}$ is created at the virtual plane $z = 546\; mm$, as described in Section 2.3. The wrapped phase shown in Fig. 3(b) is then unwrapped by substituting the minimum phase map into Eq. (13), and the reconstructed 3D shape of the sculpture is shown in Fig. 4(a), where the minimum and maximum depths of the sculpture are ${z_{\min }} = 546.7465\; mm$ and ${z_{\max }} = 595.6813\; mm$, respectively. This result is the same as that obtained by the three-frequency phase-shifting approach. In comparison, when the artificial phase map is created at the virtual plane $z = 596\; mm$ for the absolute phase unwrapping, the reconstructed 3D shape is as shown in Fig. 4(b), where the minimum and maximum depths are ${z_{\min }} = 629.7983\; mm$ and ${z_{\max }} = 697.2186\; mm$, respectively, differing from those obtained by the three-frequency phase-shifting approach. These experimental results prove that when $\Delta ({u^c},{v^c}) \ge 0$ at every point on the CCD pixel coordinates, the minimum depth distance should be selected for the virtual plane to create the artificial absolute phase map by the geometric constraint approach.


Fig. 4. Measurement result by geometric constraint approach. (a) 3D shape reconstruction by geometric constraint approach with $z = 546\; mm$ to be the depth of virtual plane; (b) 3D shape reconstruction by geometric constraint approach with $z = 596\; mm$ to be the depth of virtual plane.


To verify the above conclusion more intuitively, we randomly select a row on the CCD pixel coordinates and compare the depth distributions of this row in Figs. 3(c), 4(a), and 4(b). The results are shown in Fig. 5, where the red dashed line, the blue solid line, and the black solid line represent the depth distributions of this row in Figs. 3(c), 4(a), and 4(b), respectively. Obviously, the red dashed line and the blue solid line overlap well, while the black solid line deviates upwards from the red dashed line, further verifying that when $\Delta ({u^c},{v^c}) \ge 0$ at every point on the CCD pixel coordinates, the minimum depth distance should be selected for the virtual plane to create the artificial absolute phase map.


Fig. 5. The depth distributions at a randomly selected row acquired by different absolute phase unwrapping approaches, with red dashed line, blue solid line and black solid line representing the depth distribution in Figs. 3(c), 4(a) and 4(b), respectively.


For the case of monotonically decreasing depth, we develop another structured light system using the same hardware, as shown in Fig. 6.


Fig. 6. Structured light system setup which meets the condition of monotonic decrease of depth distance $z({u^c},{v^c})$ as projector pixel coordinate ${u^p}$ ($\Delta ({u^c},{v^c}) \le 0$).


The calibration of this system yields the following results. The intrinsic parameter matrices for the camera and the projector are, respectively,

$${A^c} = \left[ {\begin{array}{ccc} {2649.44} &0 &{630.05}\\ 0 &{2649.30} &{509.65}\\ 0 &0 &1 \end{array}} \right]\,\textrm{and}\,\,\,{A^p} = \left[ {\begin{array}{ccc} {2896.25} &0 &{1008.81}\\ 0 &{2896.36} &{544.34}\\ 0 &0 &1 \end{array}} \right].$$
The extrinsic parameters matrices for the camera and the projector are, respectively,
$$[{R^c},{t^c}] = \left[ {\begin{array}{cccc} 1 &0 &0 &0 \\ 0 &1 &0 &0 \\ 0 &0 &1 &0 \end{array}} \right]\,\textrm{and}\,\,\,[{R^p},{t^p}] = \left[ {\begin{array}{cccc} {0.9826} &{- 0.0043} &{ - 0.1858} &{158.1531} \\ {0.0138} &{0.9987} &{0.0497} &{ - 28.8705} \\ {0.1853} &{- 0.0514} &{0.9813} &{105.9751}\end{array}} \right].$$
Figure 7 shows the calculated values of the discriminant on the CCD pixel coordinates of this system. Obviously, this is the case of $\Delta ({u^c},{v^c}) \le 0$, indicating that the depth $z({u^c},{v^c})$ monotonically decreases with the projector pixel coordinate ${u^p}$.


Fig. 7. Calculation result of the monotonic discriminant of the structured light system shown in Fig. 6.


We then apply this structured light system to 3D shape measurements of the sculpture by different phase unwrapping approaches. Figure 8(a) shows the reconstructed 3D shape by the three-frequency phase-shifting approach, in which the minimum and maximum depths are ${z_{\min }} = 517.9246\; mm$ and ${z_{\max }} = 579.4308\; mm$, respectively. Figure 8(b) shows the reconstructed 3D shape by the geometric constraint approach with the artificial phase map ${\Phi _{\min }}$ created at the virtual plane $z = 517\; mm$. The minimum and maximum depths are ${z_{\min }} = 463.6201\; mm$ and ${z_{\max }} = 516.9674\; mm$, respectively. Obviously, these values differ from those obtained by the three-frequency phase-shifting approach. On the contrary, when the artificial phase map ${\Phi _{\max }}$ is created at the virtual plane $z = 580\; mm$ for the absolute phase unwrapping, the reconstructed 3D shape is as shown in Fig. 8(c), where the minimum and maximum depths are ${z_{\min }} = 517.9246\; mm$ and ${z_{\max }} = 579.4308\; mm$, respectively. These values are the same as those obtained by the three-frequency phase-shifting approach. These experimental results prove that when $\Delta ({u^c},{v^c}) \le 0$ at every point on the CCD pixel coordinates, the maximum depth distance should be selected to create the artificial absolute phase map by the geometric constraint approach.


Fig. 8. The 3D shape measurement results by different methods. (a) 3D shape reconstruction by three-frequency phase-shifting approach; (b) 3D shape reconstruction by geometric constraint approach with $z = 517\; mm$ to be the depth of virtual plane; (c) 3D shape reconstruction by geometric constraint approach with $z = 580\; mm$ to be the depth of virtual plane.


We also randomly select a row on the CCD pixel coordinates and compare the depth distributions of this row in Figs. 8(a)–8(c). The results are shown in Fig. 9, where the red dashed line, the black solid line, and the blue solid line represent the depth distributions of this row in Figs. 8(a)–8(c), respectively. It can be seen that the red dashed line and the blue solid line overlap well, while the black solid line deviates downwards from the red dashed line, further verifying that when $\Delta ({u^c},{v^c}) \le 0$ at every point on the CCD pixel coordinates, the maximum depth distance should be selected for the virtual plane to create the artificial absolute phase map.


Fig. 9. The depth comparison in a randomly selected row acquired by different phase unwrapping approaches, with red dashed line, black solid line and blue solid line representing the depth distributions in Fig. 8(a), Fig. 8(b) and Fig. 8(c), respectively.


4. Summary

We have theoretically investigated the monotonicity of the depth $z({u^c},{v^c})$ with respect to the projector pixel coordinate ${u^p}$ in absolute phase unwrapping by the geometric constraint, and presented a monotonic discriminant for choosing the correct depth distance of the virtual plane to create the artificial absolute phase map. The monotonic discriminant indicates that when $\Delta ({u^c},{v^c}) \ge 0$ at every point on the CCD pixel coordinates, the minimum depth distance is selected for the virtual plane; on the contrary, when $\Delta ({u^c},{v^c}) \le 0$ the maximum depth distance must be selected. Experimental results with different structured light systems demonstrate the validity of the theoretical analysis.

Funding

Scientific Instrument Developing Project of the Chinese Academy of Sciences (YJKYYQ20180067).

Disclosures

The authors declare no conflicts of interest.

References

1. S. Gorthi and P. Rastogi, “Fringe projection techniques: Whither we are?” Opt. Laser Eng. 48(2), 133–140 (2010). [CrossRef]  

2. S. Heist, C. Zhang, K. Reichwald, P. Kuhmstedt, G. Notni, and A. Tunnermann, “5D hyperspectral imaging: fast and accurate measurement of surface shape and spectral characteristics using structured light,” Opt. Express 26(18), 23366–23379 (2018). [CrossRef]  

3. Y. Yin, M. Wang, B. Z. Gao, X. Liu, and X. Peng, “Fringe projection 3D microscopy with the general imaging model,” Opt. Express 23(5), 6846–6857 (2015). [CrossRef]  

4. S. Zhang, “Recent progresses on real-time 3D shape measurement using digital fringe projection techniques,” Opt. Laser Eng. 48(2), 149–158 (2010). [CrossRef]  

5. D. Malacara, Optical Shop Testing, 3rd ed. (John Wiley & Sons, 2007).

6. M. Takeda and K. Mutoh, “Fourier transform profilometry for the automatic measurement of 3-D object shapes,” Appl. Opt. 22(24), 3977–3982 (1983). [CrossRef]  

7. Q. Kemao, “Windowed Fourier transform for fringe pattern analysis,” Appl. Opt. 43(13), 2695–2702 (2004). [CrossRef]  

8. Y. Rivenson, Y. Zhang, H. Günaydın, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light: Sci. Appl. 7(2), 17141 (2018). [CrossRef]  

9. J. Dai, Y. An, and S. Zhang, “Absolute three-dimensional shape measurement with a known object,” Opt. Express 25(9), 10384–10396 (2017). [CrossRef]  

10. C. Zuo, L. Huang, M. Zhang, Q. Chen, and A. Asundi, “Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review,” Opt. Laser Eng. 85, 84–103 (2016). [CrossRef]  

11. Y. An, J. S. Hyun, and S. Zhang, “Pixel-wise absolute phase unwrapping using geometric constraints of structured light system,” Opt. Express 24(16), 18445–18459 (2016). [CrossRef]  

12. J. S. Hyun and S. Zhang, “Enhanced two-frequency phase-shifting method,” Appl. Opt. 55(16), 4395–4401 (2016). [CrossRef]  

13. B. Li, Y. An, and S. Zhang, “Single-shot absolute 3D shape measurement with Fourier transform profilometry,” Appl. Opt. 55(19), 5219–5225 (2016). [CrossRef]  

14. B. Li, Z. Liu, and S. Zhang, “Motion-induced error reduction by combining Fourier transform profilometry with phase-shifting profilometry,” Opt. Express 24(20), 23289–23303 (2016). [CrossRef]  

15. C. Jiang, B. Li, and S. Zhang, “Pixel-by-pixel absolute phase retrieval using three phase-shifted fringe patterns without markers,” Opt. Laser Eng. 91, 232–241 (2017). [CrossRef]  

16. Y. An and S. Zhang, “Pixel-by-pixel absolute phase retrieval assisted by an additional three-dimensional scanner,” Appl. Opt. 58(8), 2033–2041 (2019). [CrossRef]  

17. S. Lv, Q. Sun, J. Yang, Y. Jiang, F. Qu, and J. Wang, “An improved phase-coding method for absolute phase retrieval based on the path-following algorithm,” Opt. Laser Eng. 122, 65–73 (2019). [CrossRef]  

18. S. Zhang and P. S. Huang, “Novel method for structured light system calibration,” Opt. Eng. 45(8), 083601 (2006). [CrossRef]  

19. B. Li, N. Karpinsky, and S. Zhang, “Novel calibration method for structured-light system with an out-of-focus projector,” Appl. Opt. 53(16), 3415–3426 (2014). [CrossRef]  

Figures (9)

Fig. 1. Structured light system setup that meets the condition of monotonic increase of the depth distance $z({u^c},{v^c})$ with the projector pixel coordinate ${u^p}$ ($\Delta ({u^c},{v^c}) \ge 0$).

Fig. 2. Calculated monotonic discriminant of the structured light system shown in Fig. 1.

Fig. 3. 3D shape measurement by the three-frequency phase-shifting approach. (a) Photograph of the measured sculpture; (b) wrapped phase acquired from the fringe patterns of ${T_1} = 80\; pixels$; (c) 3D measurement result by this approach.

Fig. 4. Measurement results by the geometric constraint approach. (a) 3D shape reconstruction with $z = 546\; mm$ as the depth of the virtual plane; (b) 3D shape reconstruction with $z = 596\; mm$ as the depth of the virtual plane.

Fig. 5. Depth distributions along a randomly selected row acquired by different absolute phase unwrapping approaches; the red dashed, blue solid and black solid lines represent the depth distributions in Figs. 3(c), 4(a) and 4(b), respectively.

Fig. 6. Structured light system setup that meets the condition of monotonic decrease of the depth distance $z({u^c},{v^c})$ with the projector pixel coordinate ${u^p}$ ($\Delta ({u^c},{v^c}) \le 0$).

Fig. 7. Calculated monotonic discriminant of the structured light system shown in Fig. 6.

Fig. 8. 3D shape measurement results by different methods. (a) 3D shape reconstruction by the three-frequency phase-shifting approach; (b) 3D shape reconstruction by the geometric constraint approach with $z = 517\; mm$ as the depth of the virtual plane; (c) 3D shape reconstruction by the geometric constraint approach with $z = 580\; mm$ as the depth of the virtual plane.

Fig. 9. Depth comparison along a randomly selected row acquired by different phase unwrapping approaches; the red dashed, black solid and blue solid lines represent the depth distributions in Figs. 8(a), 8(b) and 8(c), respectively.

Equations (29)


$$I_n = I'(x,y) + I''(x,y)\cos\left( \phi(x,y) + \frac{2\pi n}{N} \right),$$
$$\phi(x,y) = \tan^{-1}\left( \frac{\sum_{n=1}^{N} I_n \sin(2n\pi/N)}{\sum_{n=1}^{N} I_n \cos(2n\pi/N)} \right).$$
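The two equations above are the standard N-step phase-shifting model and its arctangent inversion. A minimal single-pixel sketch in pure Python (the function name and test values are illustrative, not from the paper); note that with the $+2\pi n/N$ shift convention of the fringe model, the estimator needs a leading minus sign to return $\phi$ itself:

```python
import math

def wrapped_phase(intensities):
    """Wrapped phase from N phase-shifted fringe intensities
    I_n = I' + I'' * cos(phi + 2*pi*n/N), n = 1..N.
    The leading minus compensates the '+' sign convention of the shift."""
    N = len(intensities)
    num = sum(I * math.sin(2 * math.pi * n / N) for n, I in enumerate(intensities, 1))
    den = sum(I * math.cos(2 * math.pi * n / N) for n, I in enumerate(intensities, 1))
    return -math.atan2(num, den)  # wrapped to (-pi, pi]

# simulate one pixel with a known phase and recover it
N, I_avg, I_mod, phi_true = 4, 0.5, 0.4, 1.2
fringes = [I_avg + I_mod * math.cos(phi_true + 2 * math.pi * n / N)
           for n in range(1, N + 1)]
phi = wrapped_phase(fringes)  # ~= phi_true
```

Any $N \ge 3$ works; four steps are used here only to keep the simulated fringe list short.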
$$\Phi(x,y) = \phi(x,y) + 2\pi k(x,y),$$
$$s^c [u^c, v^c, 1]^T = P^c [x^w, y^w, z^w, 1]^T,$$
$$P^c = A^c [R^c, t^c],$$
$$s^p [u^p, v^p, 1]^T = P^p [x^w, y^w, z^w, 1]^T,$$
$$P^p = A^p [R^p, t^p],$$
$$u^p = \frac{\Phi^V(u^c,v^c)}{2\pi} T,$$
$$v^p = \frac{\Phi^H(u^c,v^c)}{2\pi} T,$$
$$\begin{bmatrix} x^w_{\min} \\ y^w_{\min} \end{bmatrix} = \begin{bmatrix} p^c_{31}u^c - p^c_{11} & p^c_{32}u^c - p^c_{12} \\ p^c_{31}v^c - p^c_{21} & p^c_{32}v^c - p^c_{22} \end{bmatrix}^{-1} \begin{bmatrix} p^c_{14} - p^c_{34}u^c - (p^c_{33}u^c - p^c_{13})z_{\min} \\ p^c_{24} - p^c_{34}v^c - (p^c_{33}v^c - p^c_{23})z_{\min} \end{bmatrix}.$$
$$u^p = \frac{p^p_{11}x^w_{\min} + p^p_{12}y^w_{\min} + p^p_{13}z^w_{\min} + p^p_{14}}{p^p_{31}x^w_{\min} + p^p_{32}y^w_{\min} + p^p_{33}z^w_{\min} + p^p_{34}}, \quad v^p = \frac{p^p_{21}x^w_{\min} + p^p_{22}y^w_{\min} + p^p_{23}z^w_{\min} + p^p_{24}}{p^p_{31}x^w_{\min} + p^p_{32}y^w_{\min} + p^p_{33}z^w_{\min} + p^p_{34}}.$$
$$\Phi^V_{\min}(u^c,v^c) = \frac{2\pi}{T} u^p.$$
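The first equation above back-projects a camera pixel onto the virtual plane $z = z_{\min}$. When the world frame coincides with the camera frame ($R^c = I$, $t^c = 0$, as in the calibration results reported below), the 2×2 system collapses to the pinhole back-projection. A small sketch using the first system's intrinsics; the pixel $(800, 600)$ is an arbitrary illustrative choice:

```python
def backproject_at_depth(uc, vc, z_min, fx, fy, u0, v0):
    """World point on the virtual plane z = z_min imaged at camera pixel
    (uc, vc), assuming identity camera extrinsics (R^c = I, t^c = 0),
    so the general 2x2 inversion reduces to the pinhole model."""
    x_min = (uc - u0) * z_min / fx
    y_min = (vc - v0) * z_min / fy
    return x_min, y_min, z_min

# camera intrinsics of the first system; z = 546 mm is its minimum virtual-plane depth
x, y, z = backproject_at_depth(800.0, 600.0, 546.0,
                               2650.16, 2650.16, 631.99, 506.73)
# reprojection check: fx*x/z + u0 recovers uc, fy*y/z + v0 recovers vc
```

Feeding $(x^w_{\min}, y^w_{\min}, z_{\min})$ through the projector's projection matrix then yields $u^p$ and, via the last equation above, the artificial minimum phase $\Phi^V_{\min}$.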
$$k(u^c,v^c) = \mathrm{ceil}\left[ \frac{\Phi^V_{\min}(u^c,v^c) - \phi(u^c,v^c)}{2\pi} \right],$$
$$\Phi(u^c,v^c) \ge \Phi^V_{\min}(u^c,v^c).$$
$$z(u^c,v^c) \ge z^w_{\min}(u^c,v^c).$$
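The ceiling-based fringe order and the two inequalities above can be exercised directly; a pure-Python sketch with hypothetical wrapped-phase and minimum-phase values (not the paper's measured data):

```python
import math

def unwrap_with_min_phase(phi, phi_min):
    """Pixel-wise absolute phase from the geometric constraint:
    k = ceil((Phi_min - phi) / (2*pi)), so Phi = phi + 2*pi*k is the
    smallest candidate phi + 2*pi*integer that is >= Phi_min."""
    k = math.ceil((phi_min - phi) / (2 * math.pi))
    return phi + 2 * math.pi * k

phi = -2.0      # wrapped phase, in (-pi, pi]  (hypothetical)
phi_min = 20.5  # artificial minimum phase at this camera pixel (hypothetical)
Phi = unwrap_with_min_phase(phi, phi_min)
# Phi >= phi_min and Phi - phi_min < 2*pi hold by construction
```

No neighboring pixels are consulted, which is what makes this unwrapping pixel-wise and robust for isolated objects.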
$$z(u^c,v^c) \leftrightarrow \Phi(u^c,v^c) = \frac{2\pi}{T} u^p \leftrightarrow u^p.$$
$$\begin{bmatrix} u^c P^c_{3,4} - P^c_{1,4} \\ v^c P^c_{3,4} - P^c_{2,4} \\ u^p P^p_{3,4} - P^p_{1,4} \end{bmatrix} = \begin{bmatrix} P^c_{1,1} - u^c P^c_{3,1} & P^c_{1,2} - u^c P^c_{3,2} & P^c_{1,3} - u^c P^c_{3,3} \\ P^c_{2,1} - v^c P^c_{3,1} & P^c_{2,2} - v^c P^c_{3,2} & P^c_{2,3} - v^c P^c_{3,3} \\ P^p_{1,1} - u^p P^p_{3,1} & P^p_{1,2} - u^p P^p_{3,2} & P^p_{1,3} - u^p P^p_{3,3} \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix}.$$
$$\begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix} = \begin{bmatrix} m_1 & m_2 & m_3 \\ m_4 & m_5 & m_6 \\ m_7 & m_8 & m_9 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix},$$
$$\begin{aligned} b_1 &= u^c P^c_{3,4} - P^c_{1,4}, & b_2 &= v^c P^c_{3,4} - P^c_{2,4}, & b_3 &= u^p P^p_{3,4} - P^p_{1,4}, \\ m_1 &= P^c_{1,1} - u^c P^c_{3,1}, & m_2 &= P^c_{1,2} - u^c P^c_{3,2}, & m_3 &= P^c_{1,3} - u^c P^c_{3,3}, \\ m_4 &= P^c_{2,1} - v^c P^c_{3,1}, & m_5 &= P^c_{2,2} - v^c P^c_{3,2}, & m_6 &= P^c_{2,3} - v^c P^c_{3,3}, \\ m_7 &= P^p_{1,1} - u^p P^p_{3,1}, & m_8 &= P^p_{1,2} - u^p P^p_{3,2}, & m_9 &= P^p_{1,3} - u^p P^p_{3,3}. \end{aligned}$$
$$z = \frac{(m_4 m_8 - m_5 m_7) b_1 + (m_2 m_7 - m_1 m_8) b_2 + (m_1 m_5 - m_2 m_4) b_3}{(m_1 m_9 - m_3 m_7) m_5 + (m_2 m_7 - m_1 m_8) m_6 + (m_3 m_8 - m_2 m_9) m_4}.$$
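The closed form above is Cramer's rule for the third unknown of the 3×3 system. A self-contained check against a direct solution, using an arbitrary well-conditioned test matrix (not calibration data from the paper):

```python
def depth_from_cramer(m, b):
    """z from b = M [x, y, z]^T by Cramer's rule, with M flattened
    row-major as (m1..m9); matches the closed-form expression
    z = [(m4 m8 - m5 m7) b1 + (m2 m7 - m1 m8) b2 + (m1 m5 - m2 m4) b3]
      / [(m1 m9 - m3 m7) m5 + (m2 m7 - m1 m8) m6 + (m3 m8 - m2 m9) m4]."""
    m1, m2, m3, m4, m5, m6, m7, m8, m9 = m
    b1, b2, b3 = b
    num = (m4*m8 - m5*m7)*b1 + (m2*m7 - m1*m8)*b2 + (m1*m5 - m2*m4)*b3
    den = (m1*m9 - m3*m7)*m5 + (m2*m7 - m1*m8)*m6 + (m3*m8 - m2*m9)*m4
    return num / den

# M = [[2,1,0],[0,3,1],[1,0,2]] maps (x,y,z) = (1,2,3) to b = (4,9,7)
z = depth_from_cramer((2, 1, 0, 0, 3, 1, 1, 0, 2), (4, 9, 7))  # -> 3.0
```

Only $z$ is needed for the monotonicity analysis, so the $x$ and $y$ cofactor expansions are omitted.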
$$P^c = \begin{bmatrix} f^c_x & 0 & u^c_0 & 0 \\ 0 & f^c_y & v^c_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}, \qquad m_1 = f^c_x,\; m_2 = 0,\; m_3 = u^c_0 - u^c,\; b_1 = 0,\; m_4 = 0,\; m_5 = f^c_y,\; m_6 = v^c_0 - v^c,\; b_2 = 0.$$
$$z = \frac{m_1 m_5 b_3}{m_1 m_5 m_9 - m_3 m_5 m_7 - m_1 m_6 m_8}.$$
$$z = \frac{m_1 b_3}{m_1 m_9 - m_3 m_7 - m_6 m_8}.$$
$$\frac{dz}{du^p} = \frac{\dfrac{dg}{du^p} h - \dfrac{dh}{du^p} g}{h^2}.$$
$$\Delta(u^c,v^c) = f^c_x \left( h^p_{13} h^p_{34} - h^p_{33} h^p_{14} \right) + (u^c_0 - u^c)\left( h^p_{31} h^p_{14} - h^p_{11} h^p_{34} \right) + (v^c_0 - v^c)\left( h^p_{32} h^p_{14} - h^p_{12} h^p_{34} \right).$$
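Because both the numerator $g$ and denominator $h$ of the simplified depth expression are affine in $u^p$, the quotient-rule numerator $g'h - h'g$ is constant in $u^p$; this is why the discriminant $\Delta(u^c,v^c)$ depends only on the calibration parameters and the camera pixel, never on $u^p$. A toy numeric check of that fact, with hypothetical affine coefficients:

```python
def sign_of_dz_dup(a, c, d, e, samples):
    """z(u^p) = g/h with g = a*u^p + c and h = d*u^p + e both affine in u^p.
    Quotient rule: dz/du^p = (g'*h - h'*g)/h^2 = (a*e - d*c)/h^2, so the
    numerator (a*e - d*c) is constant and the sign of dz/du^p never flips
    (provided h keeps one sign over the sampled range)."""
    delta = a * e - d * c  # plays the role of the monotonic discriminant
    # numeric cross-check with finite differences of z over the samples
    z = [(a * u + c) / (d * u + e) for u in samples]
    diffs = [z2 - z1 for z1, z2 in zip(z, z[1:])]
    assert all((df > 0) == (delta > 0) for df in diffs)
    return delta

# hypothetical coefficients: delta = 2*5 - 1*1 = 9 > 0, so z increases with u^p
delta = sign_of_dz_dup(2.0, 1.0, 1.0, 5.0, [0.0, 0.5, 1.0, 1.5, 2.0])
```

With $\Delta \ge 0$ the depth grows with $u^p$, so the minimum depth distance defines the virtual plane; with $\Delta \le 0$ the maximum depth distance is used, matching the two systems built in the paper.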
$$A^c = \begin{bmatrix} 2650.16 & 0 & 631.99 \\ 0 & 2650.16 & 506.73 \\ 0 & 0 & 1 \end{bmatrix}, \quad A^p = \begin{bmatrix} 2896.53 & 0 & 1002.45 \\ 0 & 2896.67 & 544.85 \\ 0 & 0 & 1 \end{bmatrix}.$$
$$[R^c, t^c] = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}, \quad [R^p, t^p] = \begin{bmatrix} 0.9962 & 0.0199 & 0.0846 & 148.9366 \\ 0.0158 & 0.9987 & 0.0489 & 28.3887 \\ 0.0854 & 0.0473 & 0.9952 & 122.0496 \end{bmatrix}.$$
$$A^c = \begin{bmatrix} 2649.44 & 0 & 630.05 \\ 0 & 2649.30 & 509.65 \\ 0 & 0 & 1 \end{bmatrix}, \quad A^p = \begin{bmatrix} 2896.25 & 0 & 1008.81 \\ 0 & 2896.36 & 544.34 \\ 0 & 0 & 1 \end{bmatrix}.$$
$$[R^c, t^c] = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}, \quad [R^p, t^p] = \begin{bmatrix} 0.9826 & 0.0043 & 0.1858 & 158.1531 \\ 0.0138 & 0.9987 & 0.0497 & 28.8705 \\ 0.1853 & 0.0514 & 0.9813 & 105.9751 \end{bmatrix}.$$