Real-time 3D shape measurement with dual-frequency composite grating and motion-induced error reduction

Open Access

Abstract

Phase-shifting profilometry has been increasingly sought and applied in dynamic three-dimensional (3D) shape measurement. However, object motion introduces an extra phase-shift error and hence measurement error. In this paper, a real-time 3D shape measurement method based on a dual-frequency composite phase-shifting grating and motion-induced error reduction is proposed for complex scenes containing both dynamic and static objects. The proposed method detects the motion region of a complex scene through the phase relations of the dual-frequency composite grating and reduces the motion-induced error by combining the phase calculated by a phase-shifting algorithm with the phase extracted by Fourier fringe analysis. It can correctly reconstruct the 3D shape of a complex dynamic scene while maintaining high measurement accuracy for its static objects. With the aid of a phase-shifting image ordering approach, the dynamic 3D shape of complex scenes can be reconstructed and the motion-induced error suppressed in real time. Experimental results demonstrate that the proposed method is effective and practical.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Optical 3D shape measurement, with the advantages of non-contact operation, high measurement speed, high precision, and easy automation under computer control, has been extensively studied and widely used in various fields [1–4]. Fringe projection profilometry (FPP) [5–12], based on phase calculation, is widely used because of its high measurement accuracy and high spatial resolution. FPP can be divided into two categories according to how the phase value is analyzed: phase-shift methods and transform-domain methods. The most common instance of the former is phase-shifting profilometry (PSP) [9–12]. The latter usually transforms the original image into the frequency domain [7,8,13] or the wavelet domain [14] to retrieve the phase value; Fourier transform profilometry (FTP) [7,8] is a promising example.

FTP needs only a single high-frequency fringe pattern, extracting the phase with a properly designed bandpass filter in the frequency domain. PSP, in contrast, usually requires multiple (normally at least three) phase-shifting fringe patterns to reconstruct the 3D shape of a tested object. Compared with FTP, PSP has higher measurement accuracy and is more robust to variable surface reflectivity.

For both PSP and FTP, the calculated phase is wrapped between −π and π, which poses a phase ambiguity problem. To realize an accurate mapping from phase to height, phase unwrapping must be carried out.

Traditional phase unwrapping algorithms fall into two principal groups: spatial phase unwrapping [15,16] and temporal phase unwrapping [11,12]. Spatial phase unwrapping usually operates on a single wrapped phase map. However, limited by the phase-continuity assumption, it cannot handle surface discontinuities and isolated scenes unless Gray-code patterns are used to recover the fringe order [17,18]. Temporal phase unwrapping solves the phase ambiguity problem by using multiple wrapped phase maps [12,19]. Recently, stereo phase unwrapping (SPU) methods [20–22] based on geometric constraints have also been used to solve this problem, but multiple cameras are required to build the geometric constraint.

With rapid advances in high-frame-rate image sensors, high-speed digital projection technology, and high-performance processors, PSP techniques have been increasingly applied to 3D shape measurement of dynamic scenes. However, PSP requires the object to remain stationary during the projection and collection of the multiple fringe patterns behind each 3D reconstruction. In actual dynamic scenes, the object's motion deviates the phase shift, which introduces errors into the measurement result [23]. Moreover, if multi-frequency phase unwrapping is used, the phase distribution of each frequency carries a different motion-induced error, which also corrupts the phase unwrapping result [23,24].

Admittedly, the phase error caused by motion can be reduced by increasing the imaging speed or by reducing the number of fringe patterns required per measurement. Many scholars have worked on reducing the number of required fringe patterns. Liu et al. [25] proposed a dual-frequency pattern scheme to achieve real-time 3D measurement. Zuo et al. [26] developed a temporal phase unwrapping algorithm based on a 3 + 2 phase shift to realize high-speed dynamic 3D measurement. Wu et al. [27] presented a grating recombination method that retrieves the wrapped phases and their corresponding fringe orders simultaneously from four patterns.

With the application of PSP to dynamic measurement, the compensation of motion-induced phase error has attracted the attention of many scholars. Lu et al. proposed marker-based [28,29] and Scale Invariant Feature Transform (SIFT)-based [30] methods to compensate for the measurement error caused by rigid object motion. Feng et al. [23] put forward a method based on the assumption that the phase-shift error within a single segmented object is homogeneous. Wang et al. [31] used the Hilbert transform to calculate an additional phase map and then employed the average of the original and additional phase maps for 3D reconstruction; their method reduces the motion-induced error but requires additional processing to suppress errors at fringe edges. Liu et al. [32] proposed a method that estimates the unknown phase shifts by calculating the differences between three adjacent phase maps. Wang et al. [33] proposed a motion-induced error reduction method that takes advantage of additional temporal sampling.

Considering the efficiency of phase-error compensation algorithms and the single-frame measurement characteristic of FTP [34], Cong et al. [35] proposed a phase-shift error estimation method using the phase-map differences computed from continuously captured fringe images. Li et al. [36,37] proposed a hybrid method that reduces the motion-induced error by combining FTP with PSP. Qian et al. [38] proposed a fused FTP and PSP surface reconstruction method using phase-based pixel-wise motion detection. These works provide the foundation and inspiration for the research in this paper.

Recently, Feng et al. [39] realized high-accuracy phase acquisition from a single fringe pattern by using deep learning. Single-frame reconstruction methods based on deep learning can avoid the phase error caused by motion and realize the 3D measurement of dynamic scenes without motion artifacts [40,41].

In this paper, a real-time 3D shape measurement method based on a dual-frequency composite phase-shifting grating and motion-induced error reduction is proposed for complex scenes containing both dynamic and static objects. N (N ≥ 5) frames of dual-frequency composite gratings are projected onto the measured scene, and the high- and low-frequency phases of each group of phase-shifting images are calculated by the PSP method; we call this the PSP-based phase. If the tested object is in motion, different motion-induced errors occur in the high- and low-frequency phases calculated by the PSP algorithm. To eliminate this kind of error, the wrapped phases of the high- and low-frequency fringes of each single frame are simultaneously obtained by Fourier fringe analysis; we call this the FFA-based phase. Meanwhile, a virtual high-frequency phase is used to locate the motion region of the measured scene, where the PSP-based phase with large motion-induced error is replaced by the FFA-based phase, whose error is minor thanks to its instantaneous sampling. Then, N 3D results per group of N-step phase-shifting images can be reconstructed by an image ordering approach and a phase unwrapping algorithm. Finally, 3D reconstruction of a dynamic complex scene is realized with a time resolution as high as the recording rate of the camera. Experiments are performed to verify the performance of the proposed method.

The rest of the paper is organized as follows: Section 2 presents the principle of the proposed motion-induced error reduction method; Section 3 shows experimental results that validate the method; Section 4 summarizes and discusses its features.

2. Principle

2.1 Calculation of the wrapped phase of dual-frequency composite grating

In this paper, the 3D shape measurement is based on FPP. To realize dynamic 3D shape measurement, high phase measurement accuracy must be ensured while the total number of projected patterns is reduced. A dual-frequency composite grating is used to calculate the high- and low-frequency wrapped phases, and the minimum number of composite phase-shifting patterns is five [25]. The schematic diagram of the measurement system is shown in Fig. 1.

Fig. 1. Schematic diagram of the 3D shape measurement system.

In the measuring process, the projection system projects the composite grating onto the measured surface, and the deformed fringes are collected by an imaging device. The dual-frequency composite grating patterns can be expressed as:

$$I_n^p = {A^p} + B_1^p\cos ({2\pi {f_h}{x^p} + {{m2\pi n} / N}} )+ B_2^p\cos ({2\pi {f_l}{x^p} + {{2\pi n} / N}} )$$
where $I_{n}^{p}$ is the intensity of a pixel in the projector, $A^p$ is the background intensity, and $B_1^p$ and $B_2^p$ are the fringe modulations. $f_h$ and $f_l$ are the frequencies of the high- and low-frequency fringes, respectively, and $s$ is their ratio, $f_h = s \cdot f_l$. $n$ represents the phase-shift step, $N$ is the total number of phase shifts ($N \ge 5$), and $m$ indicates that the high-frequency sinusoidal fringe is shifted by a total of $m \cdot 2\pi$ over one full cycle of N steps. The value of $m$ is determined by the following equation:
$$m \ne \left\{\begin{array}{l} kN - 1 \\ kN + 1 \quad k = 0,1,2 \cdots \\ kN/2 \end{array}\right.$$
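To make Eqs. (1) and (2) concrete, the following Python/NumPy sketch (our illustration, not code from the paper; the function names valid_m and composite_patterns are hypothetical, and the parameter values mirror the simulation used later in Section 2.2) generates the N composite patterns and rejects invalid values of m:

```python
import numpy as np

def valid_m(m, N, k_max=20):
    """Check the constraint of Eq. (2): m must avoid kN-1, kN+1 and kN/2."""
    forbidden = set()
    for k in range(k_max):
        forbidden.update({k * N - 1, k * N + 1})
        if (k * N) % 2 == 0:                  # kN/2 only matters when integer
            forbidden.add(k * N // 2)
    return m not in forbidden

def composite_patterns(width=512, N=5, f_l=2, s=7, m=7, A=0.5, B1=0.25, B2=0.25):
    """Generate N dual-frequency composite phase-shifting patterns, Eq. (1)."""
    assert valid_m(m, N), "m violates the constraint of Eq. (2)"
    x = np.arange(width) / width              # normalized projector coordinate
    f_h = s * f_l                             # high frequency is s times the low one
    n = np.arange(N).reshape(-1, 1)           # phase-shift index, one row per pattern
    return (A
            + B1 * np.cos(2 * np.pi * f_h * x + m * 2 * np.pi * n / N)
            + B2 * np.cos(2 * np.pi * f_l * x + 2 * np.pi * n / N))

patterns = composite_patterns()               # shape (5, 512): one 1D pattern per step
```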
For 3D reconstruction, a camera captures each image in which the corresponding projected pattern is distorted by the measured surface topology, yielding the deformed patterns:
$$I_n^c = {A^c} + B_1^c\cos ({{\phi_h} + {{m2\pi n} / N}} )+ B_2^c\cos ({{\phi_l} + {{2\pi n} / N}} )$$
where $I_{n}^{c}$ is the intensity of a pixel in the camera, and $\phi_{h}$ and $\phi_{l}$ represent the phase information of the high- and low-frequency fringes, respectively. Both reflect the height distribution of the measured object and can be obtained as:
$${{\phi _h} = {{\tan }^{ - 1}}\frac{{\mathop \sum \nolimits_{n = 0}^{N - 1} I_n^c\sin ({{{m2\pi n} / N}} )}}{{\mathop \sum \nolimits_{n = 0}^{N - 1} I_n^c\cos ({{{m2\pi n} / N}} )}}}\quad,\quad{{\phi _l} = {{\tan }^{ - 1}}\frac{{\mathop \sum \nolimits_{n = 0}^{N - 1} I_n^c\sin ({{{2\pi n} / N}} )}}{{\mathop \sum \nolimits_{n = 0}^{N - 1} I_n^c\cos ({{{2\pi n} / N}} )}}}$$
Ac is the averaged pixel intensity across the pattern set, which can be obtained as:
$${A^c} = \frac{1}{N}\mathop \sum \limits_{n = 0}^{N - 1} I_n^c$$
Correspondingly, the intensity modulations $B_1^c$ and $B_2^c$ of a given pixel can be derived from $I_{n}^{c}$ as:
$$\begin{aligned}&{B_1^c = \frac{2}{N}\sqrt {{{\left[ {\mathop \sum \limits_{n = 0}^{N - 1} I_n^c\sin ({{{m2\pi n} / N}} )} \right]}^2} + {{\left[ {\mathop \sum \limits_{n = 0}^{N - 1} I_n^c\cos ({{{m2\pi n} / N}} )} \right]}^2}} }\\ & {B_2^c = \frac{2}{N}\sqrt {{{\left[ {\mathop \sum \limits_{n = 0}^{N - 1} I_n^c\sin ({{{2\pi n} / N}} )} \right]}^2} + {{\left[ {\mathop \sum \limits_{n = 0}^{N - 1} I_n^c\cos ({{{2\pi n} / N}} )} \right]}^2}} }\end{aligned}$$
Two modulation measures, γ1 and γ2, can then be calculated from Eqs. (5) and (6):
$${\gamma _1} = {{B_1^c} / {{A^c}}}\;\;,\;\;{\gamma _2} = {{B_2^c} / {{A^c}}}$$

According to the two phase maps obtained from Eq. (4) and the two modulation measures in Eq. (7), the motion region can be detected, as explained in Section 2.2.
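As a minimal sketch of Eqs. (4)–(7) (our own Python/NumPy illustration, not the authors' implementation; it assumes the N captured images are stacked as an (N, H, W) array, and the function name is hypothetical):

```python
import numpy as np

def psp_phase_and_modulation(I, m, N):
    """PSP-based phases and modulations from N composite images, Eqs. (4)-(7).

    I : array of shape (N, H, W) holding the captured fringe images I_n^c.
    Returns (phi_h, phi_l, gamma1, gamma2).
    """
    n = np.arange(N).reshape(-1, 1, 1)
    # Numerators/denominators of Eq. (4) for the high (m*2*pi*n/N)
    # and low (2*pi*n/N) frequency components
    Sh = np.sum(I * np.sin(m * 2 * np.pi * n / N), axis=0)
    Ch = np.sum(I * np.cos(m * 2 * np.pi * n / N), axis=0)
    Sl = np.sum(I * np.sin(2 * np.pi * n / N), axis=0)
    Cl = np.sum(I * np.cos(2 * np.pi * n / N), axis=0)
    phi_h = np.arctan2(Sh, Ch)                 # wrapped high-frequency phase
    phi_l = np.arctan2(Sl, Cl)                 # wrapped low-frequency phase
    A = I.mean(axis=0)                         # background intensity, Eq. (5)
    B1 = 2.0 / N * np.sqrt(Sh**2 + Ch**2)      # high-frequency modulation, Eq. (6)
    B2 = 2.0 / N * np.sqrt(Sl**2 + Cl**2)      # low-frequency modulation, Eq. (6)
    return phi_h, phi_l, B1 / A, B2 / A        # gamma1, gamma2 from Eq. (7)
```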

2.2 Recognition of the motion region

Motion leads to different phase errors in the high- and low-frequency fringes within the motion region, and also to differences between their modulations. Thus, changes in phase and modulation can be used to identify and locate the motion region. When the measured object moves or changes during the measurement, Eq. (3) can be rewritten as:

$$I_n^{\prime} = {A^c} + B_1^c\cos ({{\phi_h} + {{m2\pi n} / N} + {\varepsilon_{nh}}} )+ B_2^c\cos ({{\phi_l} + {{2\pi n} / N} + {\varepsilon_{nl}}} )$$
where $\varepsilon_{nl}$ and $\varepsilon_{nh}$ are the extra phase shifts caused by motion for the nth pattern. According to the projected-pattern relation $f_h = s \cdot f_l$, the extra phase shifts satisfy $\varepsilon_{nh} = s \cdot \varepsilon_{nl}$. Substituting Eq. (8) into Eq. (4), the measured phases are calculated as:
$${\phi _h^{\prime} = {{\tan }^{ - 1}}\frac{{\mathop \sum \nolimits_{n = 0}^{N - 1} I_n^{\prime}\sin ({{{m2\pi n} / N}} )}}{{\mathop \sum \nolimits_{n = 0}^{N - 1} I_n^{\prime}\cos ({{{m2\pi n} / N}} )}}} , {\phi _l^{\prime} = {{\tan }^{ - 1}}\frac{{\mathop \sum \nolimits_{n = 0}^{N - 1} I_n^{\prime}\sin ({{{2\pi n} / N}} )}}{{\mathop \sum \nolimits_{n = 0}^{N - 1} I_n^{\prime}\cos ({{{2\pi n} / N}} )}}}$$
Because the object moves during the phase shifting, the calculated phase differs from the actual phase:
$$\varDelta {\phi _h} = \phi _h^{\prime} - {\phi _h}\quad,\quad\varDelta {\phi _l} = \phi _l^{\prime} - {\phi _l}$$
Unlike traditional PSP, where the motion-induced phase error oscillates at twice the frequency of the projected fringes [22], the motion phase errors of the high- and low-frequency phases in the composite grating are:
$$ \begin{aligned}\varDelta {\phi _h} &= {{\tan }^{ - 1}}\left[ {\frac{{{a_0} + {a_1}\cos 2{\phi_h} + {a_2}\sin 2{\phi_h} + {a_3}\cos \frac{{s + 1}}{s}{\phi_h} + {a_4}\sin \frac{{s + 1}}{s}{\phi_h} + {a_5}\cos \frac{{s - 1}}{s}{\phi_h} + {a_6}\sin \frac{{s - 1}}{s}{\phi_h}}}{{{a_7} - {a_2}\cos 2{\phi_h} + {a_1}\sin 2{\phi_h} - {a_4}\cos \frac{{s + 1}}{s}{\phi_h} + {a_3}\sin \frac{{s + 1}}{s}{\phi_h} - {a_6}\cos \frac{{s - 1}}{s}{\phi_h} + {a_5}\sin \frac{{s - 1}}{s}{\phi_h}}}} \right] \\ \varDelta {\phi _l} &= {{\tan }^{ - 1}}\left[ {\frac{{{b_0} + {b_1}\cos 2{\phi_l} + {b_2}\sin 2{\phi_l} + {b_3}\cos ({s + 1} ){\phi_l} + {b_4}\sin ({s + 1} ){\phi_l} + {b_5}\cos ({s - 1} ){\phi_l} + {b_6}\sin ({s - 1} ){\phi_l}}}{{{b_7} - {b_2}\cos 2{\phi_l} + {b_1}\sin 2{\phi_l} - {b_4}\cos ({s + 1} ){\phi_l} + {b_3}\sin ({s + 1} ){\phi_l} - {b_6}\cos ({s - 1} ){\phi_l} + {b_5}\sin ({s - 1} ){\phi_l}}}} \right]\end{aligned}$$
The two phase errors caused by the object motion are obviously different. Following the definition of Eq. (8) and setting the parameters to $f_l = 2$, $f_h = s \cdot f_l$, $s = 7$, $m = 7$, $A = 0.5$, $B_1 = B_2 = 0.25$, we simulated the effect of uniform motion on the dual-frequency composite grating by introducing extra phase shifts $\varepsilon_{nl} = n \cdot \varepsilon_{1l}$ rad into the fringe images.

The simulation result is shown in Fig. 2. Figure 2(a) is a simulated object with a moving part. To illustrate the characteristics of the motion-induced error, two uniform motions with different speeds are simulated, with resulting extra phase shifts $\varepsilon_{1l}$ of 0.01 rad and 0.02 rad, respectively, together with a static case of $\varepsilon_{1l} = 0$ rad. Figures 2(e) and 2(g) show the wrapped phases of the low- and high-frequency fringes calculated for the different $\varepsilon_{1l}$, and Figs. 2(h) and 2(j) show the corresponding motion phase error distributions of the low- and high-frequency fringes. The motion phase error is the difference between the calculated wrapped phase and the actual phase (i.e., the phase calculated for $\varepsilon_{1l} = 0$ rad). The two errors are obviously different.

Fig. 2. Effect of motion error on the high- and low-frequency phase. (a) Simulated object. (b) Motion phase difference between virtual high frequency and high frequency. (c) Motion region detected by phase difference. (d) Threshold binarization result. (e) Low-frequency wrapped phase. (f) Virtual high-frequency wrapped phase. (g) High-frequency wrapped phase. (h) Low-frequency phase motion error. (i) Virtual high-frequency phase motion error. (j) High-frequency phase motion error.

The motion region could be detected from the different motion-error distributions. In an actual experiment, however, only the high- and low-frequency phases can be calculated; the error distributions themselves are not directly available. A virtual high-frequency approach is therefore proposed to detect the motion region from the high- and low-frequency phases. According to the relationship between the two phases, the virtual high-frequency phase, shown in Fig. 2(f), can be obtained from the low-frequency phase by Eq. (12). Notably, when s is even, the high-frequency component of Eq. (4) has to be π phase-shifted to align the virtual high-frequency phase with the actual one. The difference between the virtual high-frequency phase and the actual one is shown in Fig. 2(i): the virtual high-frequency phase has an error distribution similar to that of the low-frequency phase, only magnified s times. The virtual high-frequency phase and the calculated high-frequency phase carry the same phase information about the tested object's shape; when the object is stationary, their difference is near zero and affected only by ambient noise and the signal-to-noise ratio (SNR) [12]. The difference between the virtual and the calculated high-frequency phase therefore fluctuates in the motion region, as shown in Fig. 2(b).

$$\phi _{vh}^{\prime} = s\ast mod[{\phi_l^{\prime} + \pi , 2\pi /s} ]- \pi $$
The phase difference between the virtual and the calculated high-frequency phase is processed by threshold binarization to distinguish the motion region from the stationary region, as shown in Fig. 2(d). Because the phase difference in the motion region fluctuates, thresholding alone leaves the detected motion region discontinuous. A connected motion region can be obtained by applying a morphological closing operation to the binarized data, as shown in Fig. 2(c). The kernel size for the closing operation is selected based on the frequency of the motion phase error in Eq. (11). The closing operation delineates the moving region well and eliminates the influence of noise. The selection of the threshold for the phase difference is described in Section 2.4.2.

At the same time, Eqs. (5)–(7) show that the modulation obtained by the least-squares algorithm fluctuates because of the extra phase shift caused by motion. Varying surface reflectivity and ambient light also affect the modulation; we therefore simulated the reflectance of the object surface as a Hanning distribution. Figures 3(a) and 3(b) show the modulations of the low- and high-frequency fringes. As shown there, the two modulations differ because of the motion, so the ratio between them fluctuates in the motion region of the object, as shown in Fig. 3(c). It is worth mentioning that using the ratio of the two modulations avoids the influence of varying surface reflectivity, which is beneficial to threshold selection.

Fig. 3. Effect of motion on the modulations of high- and low-frequency fringe. (a) Modulation of low-frequency fringe. (b) Modulation of high-frequency fringe. (c) Modulation ratio of (a) and (b).

Threshold processing is carried out on the ratio of the two modulations, in the same way as the phase-difference thresholding above; the detailed determination process is also described in Section 2.4.2.

To detect the motion region more accurately, the region detected from the phase difference is combined with the region detected from the modulation ratio:

$${S_c} = {S_p} \cap {S_m}$$
where Sp and Sm are the motion regions detected from the phase difference and the modulation ratio, respectively, and Sc is their intersection, the final motion region used in the following data analysis. A sketch of this detection step follows.
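A possible implementation of this detection pipeline, combining Eqs. (12) and (13) with the thresholding and morphological closing described above, is sketched below (our Python illustration, not the authors' code; the thresholds Tp, Tm1, Tm2 come from Section 2.4.2, the closing-kernel size is an assumed default, and scipy.ndimage supplies the closing):

```python
import numpy as np
from scipy import ndimage

def detect_motion_region(phi_h, phi_l, gamma1, gamma2, s,
                         Tp, Tm1, Tm2, close_size=9):
    """Sketch of the motion-region detection of Section 2.2.

    phi_h, phi_l : PSP wrapped phases; gamma1, gamma2 : modulations, Eq. (7);
    Tp, Tm1, Tm2 : thresholds from Section 2.4.2; close_size : size of the
    structuring element used for the morphological closing.
    """
    kernel = np.ones((close_size, close_size), dtype=bool)
    # Virtual high-frequency phase from the low-frequency phase, Eq. (12)
    phi_vh = s * np.mod(phi_l + np.pi, 2 * np.pi / s) - np.pi
    # Wrapped difference between virtual and measured high-frequency phases
    d = np.angle(np.exp(1j * (phi_vh - phi_h)))
    # Binarize, then close to connect the fluctuating motion pixels (Fig. 2(c,d))
    Sp = ndimage.binary_closing(np.abs(d) > Tp, structure=kernel)
    # Modulation-ratio criterion, thresholded and closed in the same way
    ratio = gamma1 / gamma2
    Sm = ndimage.binary_closing((ratio < Tm1) | (ratio > Tm2), structure=kernel)
    return Sp & Sm                             # intersection, Eq. (13)
```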

Figure 4 shows the process of motion-region detection; the measured scene contains a static petal model and a shaking-head doll. Figures 4(a) and 4(b) show the motion regions detected by the phase difference and by the modulation ratio, and Fig. 4(c) shows the final optimal motion region given by Eq. (13).

Fig. 4. Process of motion region recognition. (a) Motion region detected by phase difference. (b) Motion region detected by modulation ratio. (c) Optimal motion region detected by mixed criteria.

2.3 Phase alternation of the motion region

For the phase error caused by motion, if the motion state of the object is random, or the motion is non-rigid, then the phase shift between adjacent frames is not fixed (εn ≠ εn+1) and the extra phase shift εn(x,y) differs from pixel to pixel within a frame [23]. Methods that assume uniform motion, or iterative methods, do not apply in these cases. On the other hand, FTP can calculate the wrapped phase from a single frame [37,42], which effectively avoids the motion-induced phase error of PSP. Combining the respective advantages of PSP and FTP, the PSP-based phase within the motion region located in Section 2.2 is replaced by the FFA-based phase. This eliminates the phase error caused by motion while retaining high measurement accuracy for the static objects in the scene.

When Fourier fringe analysis is performed on the composite grating, assuming fh=s*fl and m = s, Eq. (3) can be rewritten as:

$$I_n^c = {A^c} + B_1^c\cos {\phi _{hn}} + B_2^c\cos {\phi _{ln}}$$
where ϕhn and ϕln are the phases of the high- and low-frequency fringes, respectively. For the nth projected pattern, they can be expressed more specifically as:
$$\left\{ \begin{array}{l} {\phi_{ln}} = 2\pi {f_l}x + \mathrm{\Delta }\phi + {{2\pi n} / N}\\ {\phi_{hn}} = 2\pi s{f_l}x + s\mathrm{\Delta }\phi + {{m2\pi n} / N} = s{\phi_{ln}} \end{array} \right.$$
where Δϕ is the low-frequency phase change introduced by the object's surface, and s is the frequency ratio between the high- and low-frequency fringes.

Figure 5 shows the process of calculating the FFA-based phases of the high- and low-frequency fringes from a single frame of the dual-frequency composite grating. The high- and low-frequency terms in the spectrum can be picked out by appropriately designed filter windows. Two complex signals are recovered by inverse Fourier transform, and the two corresponding wrapped phases ϕhn and ϕln are then obtained.
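A minimal sketch of this Fourier fringe analysis is given below (our own Python/NumPy illustration; it assumes the fringes vary along the image rows and uses a simple rectangular band-pass window, whereas in practice the window must be designed as discussed at the end of Section 2.3):

```python
import numpy as np

def ffa_phases(I, f_l, s, half_width=1):
    """FFA-based wrapped phases from one composite fringe image (Fig. 5 flow).

    I : 2D image whose fringes vary along axis 1; f_l : low-frequency carrier
    in cycles per image width; s : frequency ratio (f_h = s * f_l);
    half_width : half-size of the rectangular band-pass window (assumed shape).
    """
    F = np.fft.fft(I, axis=1)                  # row-wise spectrum
    W = I.shape[1]
    freqs = np.fft.fftfreq(W) * W              # signed integer frequency per bin

    def bandpass_phase(center):
        # Keep only the positive-frequency lobe around the carrier `center`;
        # the inverse FFT then yields a complex analytic signal whose angle
        # is the wrapped phase.
        mask = np.abs(freqs - center) <= half_width
        return np.angle(np.fft.ifft(F * mask, axis=1))

    return bandpass_phase(s * f_l), bandpass_phase(f_l)   # (phi_hn, phi_ln)
```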

Fig. 5. Calculation of the FFA-based phase of the dual-frequency composite grating.

After obtaining the PSP-based phase and FFA-based phase, the hybrid unwrapped phase of the low-frequency fringe can be expressed as follows:

$$\varDelta {\phi _{cl}} = \left\{\begin{array}{c} {\left\langle {{\phi_l}} \right|{{\phi_{r\_l1}}} \rangle }\quad,\quad{if\quad {S_c} = 0} \\ {\left\langle {{\phi_{ln}}} \right|{{\phi_{r\_ln}}} \rangle }\quad,\quad{if\quad {S_c} = 1} \end{array} \right. \quad,\quad \left\langle A \right|B \rangle = \left\{ \begin{array}{c} {A - B}\quad,\quad{if\quad A \ge B} \\ {A - B + 2\pi }\quad,\quad{if\quad A < B} \end{array} \right.$$
where <·|·> is the operation that unwraps the low-frequency phase, and ϕr_ln is the nth low-frequency wrapped phase of the original reference plane.

The hybrid high-frequency unwrapped phase, merging the PSP-based and FFA-based phases, can be expressed as follows:

$$\varDelta {\phi _{ch}} = \left\{ \begin{array}{l} {{\phi_h} + 2\pi \cdot round\left(\frac{{s\varDelta {\phi_{cl}} + {\phi_{r\_h1}} - {\phi_h}}}{{2\pi }}\right) - {\phi_{r\_h1}}}\quad,\quad{if\quad {S_c} = 0} \\ {{\phi_{hn}} + 2\pi \cdot round\left(\frac{{s\varDelta {\phi_{cl}} + {\phi_{r\_hn}} - {\phi_{hn}}}}{{2\pi }}\right) - {\phi_{r\_hn}}}\quad,\quad{if\quad {S_c} = 1} \end{array} \right.$$
where round(·) rounds to the nearest integer, and ϕr_hn is the nth high-frequency wrapped phase of the original reference plane.
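Equations (16) and (17) can be sketched as follows (a hypothetical Python illustration, not the authors' code; note that the moving branch uses the FFA-based phase ϕhn inside the rounding, consistent with the phase being unwrapped in that branch):

```python
import numpy as np

def bracket(A, B):
    """The <A|B> operation of Eq. (16): A - B, lifted by 2*pi when A < B."""
    d = A - B
    return np.where(d >= 0.0, d, d + 2.0 * np.pi)

def hybrid_unwrap(Sc, s, phi_l, phi_h, phi_ln, phi_hn,
                  phi_r_l1, phi_r_h1, phi_r_ln, phi_r_hn):
    """Merge PSP-based (static pixels) and FFA-based (moving pixels) phases.

    Sc : boolean motion mask; phi_l, phi_h : PSP wrapped phases;
    phi_ln, phi_hn : FFA wrapped phases of the newest frame;
    phi_r_* : reference-plane wrapped phases of Eq. (18).
    """
    # Low-frequency phase change relative to the matching reference, Eq. (16)
    d_phi_cl = np.where(Sc, bracket(phi_ln, phi_r_ln), bracket(phi_l, phi_r_l1))
    # High-frequency fringe orders guided by the scaled low-frequency change
    k_static = np.round((s * d_phi_cl + phi_r_h1 - phi_h) / (2.0 * np.pi))
    k_moving = np.round((s * d_phi_cl + phi_r_hn - phi_hn) / (2.0 * np.pi))
    # Merged high-frequency phase change, Eq. (17)
    return np.where(Sc,
                    phi_hn + 2.0 * np.pi * k_moving - phi_r_hn,
                    phi_h + 2.0 * np.pi * k_static - phi_r_h1)
```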

To restore the surface of a moving object correctly, the low- and high-frequency terms must be separable in the frequency domain, which requires a properly chosen low frequency [42,43]. However, the low-frequency wrapped phase not only guides the phase unwrapping but also determines the measurement range of the system. To enlarge the measurement depth range, a lower-frequency grating can therefore be chosen, and in the motion region the low-frequency phase obtained by the PSP algorithm is filtered to carry out the phase unwrapping.

2.4 System calibration and parameters preparation

Before the system is used to reconstruct the 3D shape of a complex dynamic scene, the measurement system should be well calibrated and the two aforementioned threshold values determined.

2.4.1 Calibration of the measuring system

The structured-light 3D shape measurement system in this paper was calibrated using the unwrapped phase-to-height lookup table (UPLUT) method [43], shown in Fig. 6, together with a camera calibration technique [44]. The system structure is shown in Fig. 6(a). The main steps are as follows: using the PSP technique, multi-frame dual-frequency composite fringes are projected in turn onto the reference plane to establish the UPLUT over the measuring depth range [0, Hm]. Q+1 wrapped phase maps are collected while the reference plane moves along the depth direction with equal spacing h = Hm/Q, where Q is chosen according to the required measurement accuracy. As shown in Fig. 6(b), the wrapped phases of the high- and low-frequency fringes at the different heights are calculated by Eq. (4). These wrapped phases and their corresponding known height positions are used to establish the high-frequency UPLUT, as shown in Fig. 6(d).

$$\begin{aligned} {\phi _{r\_hn}} &= {{\tan }^{ - 1}}\frac{{\mathop \sum \nolimits_{w = 0}^{N - 1} I_{mod[{({n + w} )/N} ]}^c\sin ({{{m2\pi w} / N}} )}}{{\mathop \sum \nolimits_{w = 0}^{N - 1} I_{mod[{({n + w} )/N} ]}^c\cos ({{{m2\pi w} / N}} )}} \\ {\phi _{r\_ln}} &= {{\tan }^{ - 1}}\frac{{\mathop \sum \nolimits_{w = 0}^{N - 1} I_{mod[{({n + w} )/N} ]}^c\sin ({{{2\pi w} / N}} )}}{{\mathop \sum \nolimits_{w = 0}^{N - 1} I_{mod[{({n + w} )/N} ]}^c\cos ({{{2\pi w} / N}} )}} \end{aligned}$$
where mod(·) is the remainder (modulo) operation.
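The cyclic reordering behind Eq. (18) can be sketched as follows (our Python illustration, under the same (N, H, W) stacking assumption as the earlier sketches; the function name is hypothetical), yielding the 2N reference-plane phase maps mentioned below:

```python
import numpy as np

def reference_phases(I_ref, m, N):
    """Reference-plane wrapped phases for all N cyclic image orders, Eq. (18).

    I_ref : (N, H, W) reference-plane images. Returns two (N, H, W) stacks
    (phi_r_hn, phi_r_ln): one high- and one low-frequency phase map per
    possible order index n of the newest frame, 2N maps in total.
    """
    w = np.arange(N).reshape(-1, 1, 1)
    phi_h = np.empty((N,) + I_ref.shape[1:])
    phi_l = np.empty_like(phi_h)
    for n in range(N):
        I_n = I_ref[(n + np.arange(N)) % N]    # cyclically reordered sequence
        phi_h[n] = np.arctan2(
            np.sum(I_n * np.sin(m * 2 * np.pi * w / N), axis=0),
            np.sum(I_n * np.cos(m * 2 * np.pi * w / N), axis=0))
        phi_l[n] = np.arctan2(
            np.sum(I_n * np.sin(2 * np.pi * w / N), axis=0),
            np.sum(I_n * np.cos(2 * np.pi * w / N), axis=0))
    return phi_h, phi_l
```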

Fig. 6. Principle of phase-to-height mapping.

The initial reference-plane phases ϕr_ln and ϕr_hn used in Eqs. (16) and (17) are shown in Fig. 6(e). According to Eq. (18), 2N phase distributions are obtained by adjusting the image order of the initial reference plane.

2.4.2 Determination of the thresholds for phase difference and modulation ratio

In Section 2.2, two appropriate thresholds need to be set for the phase difference and the modulation ratio to distinguish motion from stationary regions in the measured scene. As shown in Fig. 6(c), the phase difference between the virtual and the calculated high-frequency phase of each reference plane is computed, yielding Q groups of phase-difference distributions. Since the environmental noise follows a normal distribution, the 3σ principle applies: the statistics of the Q-group phase-difference distribution give the phase-difference threshold Tp = 3σ. That is, the probability that a noise-induced phase difference falls within (−Tp, Tp) is 0.997.

The selection of the modulation-ratio threshold is similar. The ratio of the high- and low-frequency modulations over the Q-group reference planes follows a normal distribution centered at B1c/B2c; its statistics give the modulation-ratio thresholds Tm1 and Tm2, such that the probability that a noise-induced modulation ratio falls within (Tm1, Tm2) is 0.997.
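Both thresholds can be estimated from the Q static reference-plane acquisitions, for example as follows (our own sketch with hypothetical names; the inputs are the stacked phase differences and modulation ratios of the static reference planes):

```python
import numpy as np

def calibrate_thresholds(phase_diffs, mod_ratios):
    """Thresholds from Q static reference-plane acquisitions (Section 2.4.2).

    phase_diffs : stack of virtual-minus-measured high-frequency phase
    differences; mod_ratios : stack of gamma1/gamma2 ratios. Both contain
    noise only, because the reference plane is static.
    """
    Tp = 3.0 * np.std(phase_diffs)                  # 3-sigma phase-difference bound
    mu, sigma = np.mean(mod_ratios), np.std(mod_ratios)
    Tm1, Tm2 = mu - 3.0 * sigma, mu + 3.0 * sigma   # 0.997 coverage interval
    return Tp, Tm1, Tm2
```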

2.5 Framework of the proposed method

The method proposed in this paper focuses on the 3D reconstruction of dynamic scenes. To explain the whole process clearly, the overall framework is illustrated in Fig. 7(a).

Fig. 7. Overview of the proposed method. (a) Processing flow. (b) 3D reconstruction sequence.

Step 1: N frames of phase-shifted composite-grating patterns are cyclically projected onto the measured scene, and the camera synchronously captures the image sequence modulated by the objects.

Step 2: the newly captured N frames are reordered according to their phase-shift steps, and the corresponding high- and low-frequency wrapped phases and the two modulations are calculated. Taking Fig. 7(a) as an example, the sequence of the latest N (N = 5) captured images is 3-4-5-1-2; the image-ordering approach rearranges them to 1-2-3-4-5. The calculated high- and low-frequency wrapped phases then correspond to the reference-plane phases ϕr_h1 and ϕr_l1.

Step 3: the motion region in the measured scene is identified by the proposed virtual high-frequency phase and modulation-ratio approach.

Step 4: the high- and low-frequency phases of the latest frame are calculated by Fourier analysis, and the PSP-based phase is replaced by the FFA-based phase inside the motion region. According to Eqs. (16) and (17), the merged high-frequency phase change ΔΦch is obtained. In Fig. 7(a), the order index of the latest captured pattern is 2 (green), so the merged phase of the motion region corresponds to the reference-plane phases ϕr_h2 and ϕr_l2. Subtracting the corresponding reference-plane phase guarantees that the phase distribution of the motion region is correct, accurate, and continuous along the time axis.

Step 5: with the UPLUT, the high-frequency phase change ΔΦch is mapped to the spatial height distribution, and the 3D shape of the measured scene is then reconstructed according to the camera calibration model.

In the proposed method, every newly captured deformed image yields a new 3D result, as shown in Fig. 7(b); a 3D frame is thus reconstructed for each captured fringe pattern. The method is therefore more efficient than traditional approaches to dynamic 3D shape measurement. A sketch of the per-frame loop follows.
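A per-frame processing loop corresponding to Steps 1–5 might look as follows (a sketch that reuses the hypothetical helper functions from the earlier sketches; the frame source, the reference stacks refs from Eq. (18), and the phase-to-height mapping uplut are assumptions of this illustration):

```python
import numpy as np
from collections import deque

def realtime_loop(frames, m, N, s, f_l, uplut, refs, thresholds):
    """Per-frame reconstruction loop (Steps 1-5 of Fig. 7(a)).

    frames : iterable of captured images, one per projected pattern;
    uplut : callable mapping the merged phase change to height;
    refs = (phi_r_h, phi_r_l) : (N, H, W) reference stacks from Eq. (18);
    thresholds = (Tp, Tm1, Tm2) from Section 2.4.2.
    """
    phi_r_h, phi_r_l = refs
    buf = deque(maxlen=N)                  # rolling window of the latest N frames
    for k, frame in enumerate(frames):
        buf.append(frame)
        if len(buf) < N:
            continue                       # need one full phase-shift set first
        # Step 2: reorder the window by projected phase-shift step 0..N-1
        steps = np.arange(k + 1 - N, k + 1) % N
        I = np.stack(buf)[np.argsort(steps)]
        phi_h, phi_l, g1, g2 = psp_phase_and_modulation(I, m, N)
        # Step 3: locate the motion region
        Sc = detect_motion_region(phi_h, phi_l, g1, g2, s, *thresholds)
        # Step 4: FFA phases of the newest frame, merged with the PSP phases
        n = k % N                          # order index of the newest frame (0-based)
        phi_hn, phi_ln = ffa_phases(frame, f_l, s)
        d_phi_ch = hybrid_unwrap(Sc, s, phi_l, phi_h, phi_ln, phi_hn,
                                 phi_r_l[0], phi_r_h[0], phi_r_l[n], phi_r_h[n])
        yield uplut(d_phi_ch)              # Step 5: phase-to-height mapping
```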

3. Experiments and results

Experiments were conducted to test the performance of the proposed method. A measuring system was developed, including a digital projector (LightCrafter 4500), an imaging device (a Baumer HXC40 camera with an imaging resolution of 1280×800 pixels and a 16-mm imaging lens), and a linear translation stage with a repeated-positioning precision of 5 µm. The camera was synchronized by the trigger signal of the projector.

The two periods, Pl and Ph, of the dual-frequency composite grating were 45 pixels and 9 pixels, respectively. A reference plane was moved at intervals of 1 mm within a depth range of 100 mm. Five phase-shifted dual-frequency composite gratings were projected in turn onto the moving reference plane. The wrapped low-frequency phase ϕl and high-frequency phase ϕh were obtained using Eq. (4), and the UPLUTs of the high- and low-frequency phases were established respectively [43,45].

3.1 Accuracy analysis of dynamic scene

To quantitatively evaluate the accuracy of the proposed method, two standard spheres with diameters of 50.7991 mm and 50.7970 mm, fixed at a center-to-center distance of 100.2537 mm, were moved through the measuring volume, as shown in Fig. 8(a). Figure 8(b) shows a deformed composite image. Figures 8(c) and 8(d) show the reconstructed result and the error relative to the fitted sphere for the pure PSP algorithm, where the motion error on the two standard spheres is obvious. Figures 8(e) and 8(f) show the result and error for the Hilbert-transform compensation method [31], in which the motion errors on the two spheres are partially compensated. Figures 8(g) and 8(h) show the result and error for our method, which clearly demonstrate that the motion error is removed. The larger RMS error of the two spheres is 0.730 mm for the pure PSP algorithm, 0.406 mm for the Hilbert-transform compensation method, and 0.126 mm for our method.

Fig. 8. Accuracy analysis for the pure PSP method, Hilbert transform compensation method and our method.

If the motion state is random, the Hilbert-transform compensation method cannot effectively eliminate the motion error. Our method, in contrast, replaces the PSP-based phase with the FFA-based phase and can therefore thoroughly eliminate the motion error within the motion region.

3.2 Measurement on a complex dynamic scene

The second experiment further demonstrates the performance of the proposed method in a complex scene with isolated objects. Four objects were measured: two stationary objects (a statue and a petal model) and two shaking-head dolls. Three of the deformed composite fringe images are shown in Figs. 9(a)–9(c), and the detected motion region in Fig. 9(d). The result reconstructed by the PSP algorithm alone is shown in Fig. 9(e), with a top view of its motion region in Fig. 9(g); the result of our method and the top view of its motion region are shown in Figs. 9(f) and 9(h). The height profiles of Figs. 9(e) and 9(f) along the marked line are shown in Fig. 9(i). The experimental results demonstrate that the proposed method effectively eliminates the motion-induced error and achieves robust 3D measurement of complex, isolated objects.

Fig. 9. Measurement result of a complex scene. (a)-(c) Captured deformed fringe images. (d) Detected motion region. (e) Reconstructed result only by PSP. (g) Reconstructed result in the motion region only by PSP. (f) Reconstructed result by our method. (h) Reconstructed result in the motion region by our method. (i) Height profile of (e) and (f) along the marked line.

At the same time, the proposed method performs frequency-domain phase analysis on each deformed composite fringe captured during the phase-shifting process, which gives the 3D reconstruction of the dynamic process a higher time resolution. The phase-shifted fringe patterns of 5 adjacent frames and the corresponding reconstruction results at different times are shown in Fig. 10 and Visualization 1. The acquisition rate of the camera is 100 fps; for better display, the playback rate is set to 10 fps.

Fig. 10. Measuring results of 5 adjacent frames on the dynamic scene. (a) Captured pattern sequences of 5 adjacent frames. (b) Corresponding 3D frames (Visualization 1).

3.3 Real-time measurement on a complex dynamic scene

To verify the performance of the proposed method in real-time applications, we implemented the measuring system on a computer with a GPU (NVIDIA GeForce GTX 1080) and a CPU (Intel i5-7400). The system calibration parameters were pre-computed and stored on the GPU before measurement. All computations were performed on the GPU, and the results were displayed directly using OpenGL.

The measured scene included a statue and a swinging ball. 3D shapes with an imaging resolution of 1280×800 pixels were reconstructed and displayed at 60 fps by the proposed method. The real-time measuring results are shown in Fig. 11 and Visualization 2. A comparison of the real-time measurement results of the PSP algorithm and our method is shown in Fig. 12 and Visualization 3. Note that Visualization 3 is only a qualitative comparison of the two methods' results, not measured at the same time.

Fig. 11. Real-time measurement processes and results (Visualization 2).

Fig. 12. Comparison of real-time measurement results by PSP and our method (Visualization 3).

4. Conclusion and discussion

A real-time motion-induced error elimination method for dynamic 3D shape measurement has been proposed. Based on the high- and low-frequency wrapped phases obtained from the dual-frequency composite grating, the virtual high-frequency approach identifies the motion region, and the PSP-based and FFA-based phases are merged according to the motion state of the object. Finally, with the image ordering approach and the corresponding phase unwrapping algorithm, 3D reconstruction at the full frame rate of the phase-shifting sequence is realized. The proposed method eliminates the motion error and correctly reconstructs the 3D shape of dynamic objects while preserving the reconstruction accuracy of static objects. The experimental results show that the method enables real-time 3D shape measurement of complex scenes with both dynamic and static parts.

Compared to existing methods of motion-induced error reduction, the proposed method has the following features:

  • Based on the different error distributions of the high- and low-frequency phases affected by motion in the composite grating, a virtual high-frequency phase and modulation-ratio approach is proposed to locate the motion region in the measured scene.
  • By taking full advantage of the phase-shift and transform-domain characteristics of the composite grating, the PSP-based and FFA-based phases are fused according to the motion state of the measured object, and phase unwrapping and height reconstruction are realized from the dual-frequency characteristics.
  • The PSP algorithm is used to achieve high precision, while a new 3D result is obtained with each newly captured deformed fringe pattern: according to the image ordering and the phase unwrapping of Eqs. (16) and (17), the corresponding 3D shape is restored for each new frame during the phase shifting, realizing 3D reconstruction of a moving scene at a higher time resolution than the conventional phase-shift algorithm allows.
  • It is applicable to real-time 3D shape measurement of complex scenes with both dynamic and static parts.
The proposed method aims to improve the efficiency of the phase-shift method by extracting more useful information from the phase-shift images: the composite phase-shift images yield the high- and low-frequency phases used for phase unwrapping; the relationship between the two frequencies' phases and the ratio of their modulations determine the motion region; and frequency-domain analysis of each single frame during the phase shifting provides the two frequencies' phases in the motion state.

However, this method requires morphological operations on the motion regions directly identified by the virtual high-frequency phase and modulation-ratio approach. A fixed morphological structuring element is not robust to changing motion states, so the recognized motion region and the actual motion region cannot match exactly, especially at the region boundary; necessary future work is to realize adaptive structuring-element selection or pixel-wise motion detection. At the same time, the proposed method uses the FFA-based phase to replace the PSP-based phase in the motion region, which still suffers from the inherent limitations of Fourier fringe analysis: high-frequency details of the measured object are smoothed, the phase near object boundaries is prone to errors, and the low-frequency phase is difficult to calculate accurately because of the bandpass filtering in the frequency domain. Furthermore, based on the phase difference obtained by the virtual high-frequency approach and the high- and low-frequency motion-error model of Eq. (11), a more accurate error-compensation model needs to be established.

Funding

National Natural Science Foundation of China (61675141).

Disclosures

The authors declare no conflicts of interest.

References

1. K. R. Ford, G. D. Myer, and T. E. Hewett, “Reliability of landing 3D motion analysis: implications for longitudinal analyses,” Med. Sci. Sports Exercise 39(11), 2021–2028 (2007). [CrossRef]  

2. E. Malamas, E. Petrakis, M. Zervakis, L. Petit, and J. Legat, “A survey on industrial vision systems, applications and tools,” Image Vision Comput. 21(2), 171–188 (2003). [CrossRef]  

3. F. Chen, G. M. Brown, and M. Song, “Overview of three-dimensional shape measurement using optical methods,” Opt. Eng. 39(1), 8–22 (2000). [CrossRef]  

4. J. Geng, “Structured-light 3d surface imaging: a tutorial,” Adv. Opt. Photonics 3(2), 128–160 (2011). [CrossRef]  

5. S. S. Gorthi and P. Rastogi, “Fringe projection techniques: whither we are?” Opt. Laser Eng. 48(2), 133–140 (2010). [CrossRef]  

6. J. Salvi, S. Fernandez, T. Pribanic, and X. Llado, “A state of the art in structured light patterns for surface profilometry,” Pattern Recognit. 43(8), 2666–2680 (2010). [CrossRef]  

7. X. Su and Q. Zhang, “Dynamic 3-d shape measurement method: a review,” Opt. Laser Eng. 48(2), 191–204 (2010). [CrossRef]  

8. Z. Zhang, “Review of single-shot 3D shape measurement by phase calculation-based fringe projection techniques,” Opt. Laser Eng. 50(8), 1097–1106 (2012). [CrossRef]  

9. S. Zhang, “Recent progresses on real-time 3D shape measurement using digital fringe projection techniques,” Opt. Laser Eng. 48(2), 149–158 (2010). [CrossRef]  

10. S. Van der Jeught and J. J. Dirckx, “Real-time structured light profilometry: a review,” Opt. Laser Eng. 87, 18–31 (2016). [CrossRef]  

11. C. Zuo, S. Feng, L. Huang, T. Tao, W. Yin, and Q. Chen, “Phase shifting algorithms for fringe projection profilometry: A review,” Opt. Laser Eng. 109, 23–59 (2018). [CrossRef]  

12. C. Zuo, L. Huang, M. Zhang, Q. Chen, and A. Asundi, “Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review,” Opt. Laser Eng. 85, 84–103 (2016). [CrossRef]  

13. Q. Kemao, “Two-dimensional windowed fourier transform for fringe pattern analysis: principles, applications and implementations,” Opt. Laser Eng. 45(2), 304–317 (2007). [CrossRef]  

14. J. Zhong and J. Weng, “Spatial carrier-fringe pattern analysis by means of wavelet transform: wavelet transform profilometry,” Appl. Opt. 43(26), 4993–4998 (2004). [CrossRef]  

15. M. Zhao, L. Huang, Q. Zhang, X. Su, A. Asundi, and Q. Kemao, “Quality-guided phase unwrapping technique: comparison of quality maps and guiding strategies,” Appl. Opt. 50(33), 6214–6224 (2011). [CrossRef]  

16. Q. Zhang, Y. Han, and Y. Wu, “Comparison and combination of three spatial phase unwrapping algorithms,” Opt. Rev. 26(4), 380–390 (2019). [CrossRef]  

17. G. Sansoni, M. Carocci, and R. Rodella, “Three-dimensional vision based on a combination of gray-code and phase-shift light projection: analysis and compensation of the systematic errors,” Appl. Opt. 38(31), 6565–6573 (1999). [CrossRef]  

18. Z. Wu, W. Guo, Y. Li, Y. Liu, and Q. Zhang, “High-speed and high-efficiency three-dimensional shape measurement based on Gray-coded light,” Photonics Res. 8(6), 819–829 (2020). [CrossRef]  

19. J. M. Huntley and H. Saldner, “Temporal phase-unwrapping algorithm for automated interferogram analysis,” Appl. Opt. 32(17), 3047–3052 (1993). [CrossRef]  

20. T. Weise, B. Leibe, and L. Van Gool, “Fast 3d scanning with automatic motion compensation,” 2007 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–8 (2007).

21. Y. An, J. S. Hyun, and S. Zhang, “Pixel-wise absolute phase unwrapping using geometric constraints of structured light system,” Opt. Express 24(16), 18445–18459 (2016). [CrossRef]  

22. T. Tao, Q. Chen, S. Feng, J. Qian, Y. Hu, L. Huang, and C. Zuo, “High-speed real-time 3d shape measurement based on adaptive depth constraint,” Opt. Express 26(17), 22440–22456 (2018). [CrossRef]  

23. S. Feng, C. Zuo, T. Tao, Y. Hu, M. Zhang, Q. Chen, and G. Gu, “Robust dynamic 3-d measurements with motion-compensated phase-shifting profilometry,” Opt. Laser Eng. 103, 127–138 (2018). [CrossRef]  

24. L. Lu, Z. Jia, Y. Luan, and J. Xi, “Reconstruction of isolated moving objects with high 3d frame rate based on phase shifting profilometry,” Opt. Commun. 438, 61–66 (2019). [CrossRef]  

25. K. Liu, Y. Wang, D. Lau, Q. Hao, and L. Hassebrook, “Dual-frequency pattern scheme for high-speed 3-D shape measurement,” Opt. Express 18(5), 5229–5244 (2010). [CrossRef]  

26. C. Zuo, Q. Chen, G. Gu, S. Feng, F. Feng, R. Li, and G. Shen, “High-speed three-dimensional shape measurement for dynamic scenes using bi-frequency tripolar pulse-width-modulation fringe projection,” Opt. Laser Eng. 51(8), 953–960 (2013). [CrossRef]  

27. G. Wu, Y. Wu, L. Li, and F. Liu, “High-resolution few-pattern method for 3D optical measurement,” Opt. Lett. 44(14), 3602–3605 (2019). [CrossRef]  

28. L. Lu, J. Xi, Y. Yu, and Q. Guo, “New approach to improve the accuracy of 3-D shape measurement of moving object using phase shifting profilometry,” Opt. Express 21(25), 30610–30622 (2013). [CrossRef]  

29. L. Lu, J. Xi, Y. Yu, and Q. Guo, “New approach to improve the performance of fringe pattern profilometry using multiple triangular patterns for the measurement of objects in motion,” Opt. Eng. 53(11), 112211 (2014). [CrossRef]  

30. L. Lu, Y. Ding, Y. Luan, Y. Yin, Q. Liu, and J. Xi, “Automated approach for the surface profile measurement of moving objects based on PSP,” Opt. Express 25(25), 32120–32131 (2017). [CrossRef]  

31. Y. Wang, Z. Liu, C. Jiang, and S. Zhang, “Motion induced phase error reduction using a Hilbert transform,” Opt. Express 26(26), 34224–34235 (2018). [CrossRef]  

32. X. Liu, T. Tao, Y. Wan, and J. Kofman, “Real-time motion-induced-error compensation in 3D surface-shape measurement,” Opt. Express 27(18), 25265–25279 (2019). [CrossRef]  

33. Y. Wang, V. Suresh, and B. Li, “Motion-induced error reduction for binary defocusing profilometry via additional temporal sampling,” Opt. Express 27(17), 23948–23958 (2019). [CrossRef]  

34. Q. Zhang and X. Su, “High-speed optical measurement for the drumhead vibration,” Opt. Express 13(8), 3110–3116 (2005). [CrossRef]  

35. P. Cong, Z. Xiong, Y. Zhang, S. Zhao, and F. Wu, “Accurate dynamic 3D sensing with Fourier-assisted phase shifting,” IEEE J. Sel. Top. Signal Process. 9(3), 396–408 (2015). [CrossRef]  

36. B. Li and S. Zhang, “Superfast high-resolution absolute 3D recovery of a stabilized flapping flight process,” Opt. Express 25(22), 27270–27282 (2017). [CrossRef]  

37. B. Li, Z. Liu, and S. Zhang, “Motion-induced error reduction by combining fourier transform profilometry with phase-shifting profilometry,” Opt. Express 24(20), 23289–23303 (2016). [CrossRef]  

38. J. Qian, T. Tao, S. Feng, Q. Chen, and C. Zuo, “Motion-artifact-free dynamic 3D shape measurement with hybrid Fourier-transform phase-shifting profilometry,” Opt. Express 27(3), 2713–2731 (2019). [CrossRef]  

39. S. Feng, Q. Chen, G. Gu, T. Tao, L. Zhang, Y. Hu, W. Yin, and C. Zuo, “Fringe pattern analysis using deep learning,” Adv. Photonics 1(02), 1–7 (2019). [CrossRef]  

40. H. Yu, X. Chen, Z. Zhang, C. Zuo, Y. Zhang, D. Zheng, and J. Han, “Dynamic 3-D measurement based on fringe-to-fringe transformation using deep learning,” Opt. Express 28(7), 9405–9418 (2020). [CrossRef]  

41. J. Qian, S. Feng, T. Tao, Y. Hu, Y. Li, Q. Chen, and C. Zuo, “Deep-learning-enabled geometric constraints and phase unwrapping for single-shot absolute 3D shape measurement,” APL Photonics 5(4), 046105 (2020). [CrossRef]  

42. W. Su and H. Liu, “Calibration-based two-frequency projected fringe profilometry: a robust, accurate, and single-shot measurement for objects with large depth discontinuities,” Opt. Express 14(20), 9178–9187 (2006). [CrossRef]  

43. W. Guo, Z. Wu, R. Xu, Q. Zhang, and M. Fujigaki, “A fast reconstruction method for three-dimensional shape measurement using dual-frequency grating projection and phase-to-height lookup table,” Opt. Laser Technol. 112, 269–277 (2019). [CrossRef]  

44. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Machine Intell. 22(11), 1330–1334 (2000). [CrossRef]  

45. W. Guo, Z. Wu, Q. Zhang, and M. Fujigaki, “Real time three-dimensional shape measurement with phase-to-height lookup table,” Proc. SPIE 11205, 112052F (2019). [CrossRef]  

Supplementary Material (3)

Visualization 1: Measurement result of a complex scene
Visualization 2: Real-time measurement processes and results
Visualization 3: Comparison of real-time measurement results by PSP and our method
