Efficient intensity-based fringe projection profilometry method resistant to global illumination

Open Access

Abstract

Intensity-based fringe projection profilometry (IBFPP) is used widely because of its simple structure, high robustness, and noise resilience. Most IBFPP methods assume that any scene point is illuminated by direct illumination only, but global illumination effects introduce strong biases in the reconstruction result for many real-world scenes. To solve this problem, this paper describes an efficient IBFPP method for reconstructing three-dimensional geometry in the presence of global illumination. First, the average intensity of two sinusoidal patterns is used as a pixel-wise threshold to binarize the codeword patterns. The binarized template pattern is then used to convert other binarized fringe patterns into traditional Gray-code patterns. A dedicated compensation algorithm is then applied to eliminate fringe errors caused by environmental noise and lens defocusing. Finally, simple, efficient, and robust phase unwrapping can be achieved despite the effects of subsurface scattering and interreflection. Experimental results obtained in different environments show that the proposed method can obtain three-dimensional information reliably when influenced by global illumination.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Optical three-dimensional (3D) shape measurement has numerous applications in the fields of vision navigation, quality control, depth segmentation, and decamouflaging [1–4]. Of all the various 3D shape-measurement techniques, fringe projection profilometry (FPP) outperforms the rest in terms of its low cost and high robustness, speed, and resolution [5–7]. Unlike other such systems, FPP uses phase instead of intensity to establish correspondence between projector and camera. However, the phase information obtained directly from FPP is wrapped across the interval $(-\pi , \pi )$ rad and so cannot provide unique pixel correspondence between camera and projector. The phase information must therefore be unwrapped to form an absolute phase map [8].

Numerous different phase-unwrapping methods have been proposed in the past few years and can be classified broadly as either spatial phase unwrapping (SPU) or temporal phase unwrapping (TPU). During phase unwrapping, SPU detects phase discontinuities in the wrapped phase map itself and removes them by adding $k(x, y)$ multiples of 2$\pi$ rad, where the integer $k(x, y)$ is referred to as the fringe order. Consequently, SPU requires no additional fringe patterns to provide extra information for phase unwrapping. However, whatever its measurement speed and robustness, SPU inherently struggles to measure multiple isolated objects or surfaces containing sharp edges accurately [9].

In contrast, TPU is a multi-frame-based method that eliminates phase ambiguity by including additional captured fringe patterns [10]. Depending on their encoding, most TPU methods are either phase-based or intensity-based approaches. Phase-based approaches embed fringe orders into the phase component, and these can be categorized further into the multi-frequency (hierarchical) approach, multi-wavelength (heterodyne) approach, number-theoretical approach, and phase-coding approach [11–14]. All of these methods provide highly accurate reconstruction results and have been used to measure moving objects in different situations, including dynamic fan blades and facial expressions [15,16]. However, traditional phase-based approaches tend to suffer from unwrapping errors when the projected patterns have excessively high spatial frequencies. Recently, some new techniques have used the geometry-constraint relationships of FPP systems to achieve faster and more-precise measurements, but that approach is often limited by the effective measuring-depth range [17,18].

Intensity-based approaches eliminate the phase ambiguity by casting a set of binary or ternary fringe patterns [19–22]. One of the best-known intensity-based methods is the Gray code plus phase shifting (GCPS) method, which can label $W$ periods of a wrapped phase map with only ${\log }_2 W$ frames of the fringe patterns. Benefitting from its simple structure and high reliability, GCPS has been used in numerous scenarios that have special requirements in terms of measurement precision, including ground-truth establishment and industrial monitoring [23,24]. However, the phase-jump regions of the wrapped phase map have been shown to be unstable, and the fringe orders around these regions are easily altered [25]. To solve this problem, Zhang and colleagues proposed a series of different complementary Gray code (CGC)-based methods [26–29] that eliminate these types of errors by projecting an additional codeword pattern. However, although such methods are effective, they sacrifice coding efficiency.

Despite these advances, most current FPP methods assume that the captured scene points are illuminated directly by a perfect point light source with infinite depth of field. However, as well as this direct illumination, scenes in real-world scenarios are also illuminated indirectly by different types of global illumination (e.g., subsurface scattering and interreflections). Therefore, failing to account for global illumination may lead to serious system errors in the recovered 3D reconstruction result [30,31].

For this reason, numerous methods have been proposed to address the shape-recovery problem in the presence of global illumination. The earliest research work regarding global illumination considered interreflections. Using an iterative approach, Nayar et al. recovered the actual shape of Lambertian objects from their pseudo-shape estimates with a photometric stereo technique [32]. Subsequently, the errors caused by interreflections or other global-illumination effects have also been eliminated in other fields of machine vision, including obtaining shapes from defocused illumination and using photometric stereo [33–35].

Various different techniques have also been used in FPP to tackle the problem of global illumination. Chen et al. used polarizers to construct polarization-difference images to filter out subsurface scattering [36]. This method is effective but may reduce the signal-to-noise ratio of the captured images, and it cannot address the problem of interreflections. Hermans et al. moved the projector with a constant velocity and performed per-pixel analysis to reconstruct surfaces [37]. Park et al. moved either the camera or the scene to eliminate errors caused by global illumination in an FPP setup [38]. Holroyd et al. set up a coaxial optical scanner to acquire 3D geometry and surface reflectance simultaneously [39]. Xu et al. compensated for geometric errors by establishing the relationship between the fringe period and the depth offset, and they measured translucent objects using FPP [40]. Chen et al. proposed an effective compensation framework comprising a process of error detection and correction, and they succeeded in cancelling errors caused by global illumination [41]. However, although all those methods are resistant to global illumination, they lead to higher hardware costs and/or algorithmic complexity.

In 2006, Nayar et al. demonstrated that high-frequency fringe patterns can be used to separate global illumination from the direct-illumination component [42]. Inspired by the core concept of that research, several subsequent techniques were proposed. Zhang et al. embedded a speckle-like signal into three sinusoidal fringe patterns to eliminate phase ambiguity [43]. Because all the fringe patterns cast by that method have high spatial frequency, they are resistant to global illumination. However, the embedded signal may not be recognized correctly in every modulated region, thereby degrading the integrity of phase unwrapping. Chen et al. developed a modulated phase-shifting method and verified the effectiveness of that approach in scenarios influenced by subsurface scattering and interreflections [44]. Gupta et al. improved the modulated phase-shifting method by casting four sets of patterns to eliminate global illumination at the capturing stage [45]; unlike modulated phase shifting, their method does not require explicit separation of the global component. Furthermore, they proposed a simple and effective technique known as micro phase shifting, which requires only that the projected fringe patterns occupy an adequately small bandwidth at high spatial frequency [46]; that method is highly flexible and resistant to global illumination. Tang et al. conducted in-depth investigations into this micro phase-shifting method and presented a rule for frequency selection [47]. However, as noted earlier in this section, the unwrapping processes of phase-based approaches are less robust than those of intensity-based approaches in the presence of high spatial frequencies.

To address these problems, the present paper proposes an intensity-based fringe-projection technique known as efficient intensity-based FPP (EIBFPP), the aims being higher measurement efficiency and reliable 3D reconstruction in real-life settings in which global illumination is ubiquitous. Unlike conventional intensity-based methods, the spatial frequencies of the codeword patterns do not vary much. Moreover, EIBFPP uses sinusoidal fringe patterns instead of intensity fringe patterns to reach pixel-wise resolution, the former being more robust to the effects of lens defocusing. Based on the reciprocal structure of sinusoidal fringe patterns, binarization can be realized using only two fringe patterns. This method gives much higher binarization efficiency when compared with traditional intensity-based approaches [45].

The remainder of this paper is organized as follows. Section 2 introduces the related techniques and the coding strategy for the EIBFPP method. Section 3 presents the details of phase unwrapping using EIBFPP. Section 4 shows some experimental results to validate EIBFPP, including measurements of isolated objects and of scenarios influenced by different amounts of interreflections and subsurface scattering. A summary is presented in Section 5.

2. Efficient intensity-based fringe projection profilometry

2.1 Related techniques

2.1.1 Hilbert transform

The Hilbert transform is a linear operator that takes a function $\mu (t)$ of a real variable and produces another function $\mathscr {H}(\mu )(t)$ of a real variable by convolution with the function $\frac {1}{\pi t}$, such that

$$\mathscr{H}(\mu)(t)=\frac{1}{\pi} \int_{-\infty}^{\infty} \frac{\mu(\tau)}{t-\tau} \mathrm{d} \tau .$$
In the frequency domain, the process of the Hilbert transform is expressed mathematically as
$$\mathscr{F}[\mathscr{H}(\mu)](\omega)=\delta_{\mathscr{H}}(\omega)\times \mathscr{F}(\mu)(\omega),$$
where $\mathscr {F}[\cdot ]$ is the Fourier-transform operator, and
$$\delta_{\mathscr{H}}(\omega)=\left\{\begin{array}{ll} i=e^{i \pi / 2} & \omega<0, \\ 0 & \omega=0, \\ -i=e^{-i \pi / 2} & \omega>0. \end{array}\right.$$
Therefore, the Hilbert transform has the effect of shifting the phases of the negative and positive frequency components by $\frac {\pi }{2}$ and $-\frac {\pi }{2}$ rad, respectively. For example, if a real cosine signal $\cos t$ is transformed by $\mathscr {H}(\cdot )$, it is converted to a sine signal $\sin t$.
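To make this property concrete, the following minimal check (our illustration, not part of the paper) uses SciPy's analytic-signal routine, scipy.signal.hilbert, which returns $\mu + i\,\mathscr{H}(\mu)$; the imaginary part of the analytic signal of $\cos t$ should therefore reproduce $\sin t$.

```python
# Numerical check of H{cos t} = sin t using the FFT-based analytic signal.
import numpy as np
from scipy.signal import hilbert

t = np.linspace(0, 8 * np.pi, 4096, endpoint=False)  # an integer number of periods
x = np.cos(t)
hx = np.imag(hilbert(x))                              # numerical Hilbert transform of cos t

err = np.max(np.abs(hx[200:-200] - np.sin(t)[200:-200]))  # ignore window edges
print(f"max deviation from sin(t): {err:.2e}")
```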

2.1.2 Gray code and complementary Gray code

Phase-shifting profilometry has been used widely in optical metrology because of its robustness, flexibility, and high resolution. The intensity distribution of each fringe pattern can be expressed as

$$P_m(x, y)=a(x, y)+b(x, y)\cos [\phi(x, y)+\delta_m], \qquad m=1, 2, \dots, M\ (M\geq3),$$
where the subscript $m$ indexes the phase-shifted patterns, $M$ is the number of phase steps, and $a(x, y)$ and $b(x, y)$ represent the average intensity (DC component) and the fringe modulation, respectively. The value of $\delta_m = 2\pi (m-1) / M$ is the amount of phase shift, and $\phi (x, y)$ is the wrapped phase, which can be calculated by applying the least-squares algorithm
$$\phi(x, y)=\arctan \frac{-\sum_{m=1}^MP_m(x, y)\sin (\delta_m)} {\sum_{m=1}^MP_m(x, y)\cos (\delta_m)}.$$
The wrapped phase calculated by Eq. (5) is periodic with $2\pi$ phase ambiguity due to the value range of the arctangent function.
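As an aside, Eq. (5) can be evaluated directly with NumPy; the sketch below is our own illustration (the function and variable names are ours), using np.arctan2 so that the signs of the numerator and denominator place the result across the full wrapped range.

```python
# Least-squares wrapped-phase estimator of Eq. (5) for M equally shifted patterns.
import numpy as np

def wrapped_phase(patterns):
    """patterns: array of shape (M, H, W) holding M >= 3 phase-shifted images."""
    M = patterns.shape[0]
    delta = 2 * np.pi * np.arange(M) / M                       # delta_m = 2*pi*(m-1)/M
    num = -np.tensordot(np.sin(delta), patterns, axes=(0, 0))  # -sum_m P_m sin(delta_m)
    den = np.tensordot(np.cos(delta), patterns, axes=(0, 0))   #  sum_m P_m cos(delta_m)
    return np.arctan2(num, den)

# Synthetic three-step check against a known phase ramp.
H, W = 4, 256
phi_true = np.tile(np.linspace(-3.0, 3.0, W), (H, 1))
delta = 2 * np.pi * np.arange(3) / 3
P = 0.5 + 0.4 * np.cos(phi_true[None] + delta[:, None, None])
print(np.allclose(wrapped_phase(P), phi_true, atol=1e-8))      # True
```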

To unwrap the wrapped phase map, the Gray-code method is used to remove the phase ambiguity by casting extra intensity-based fringe patterns. As shown in Fig. 1(a), the GCPS method has two remarkable features: (i) $W$ wrapped phase periods can be labeled by ${\log }_2 W$ frames of the codeword pattern; (ii) the edges of codewords (enclosed by red rectangles) are aligned perfectly with wrapped-phase jump points. However, the phase-jump regions of the wrapped phase map have been shown to be unstable, and the fringe orders around these regions are easily altered.

Fig. 1. Sketch maps of patterns for (a) Gray code plus phase shifting (GCPS), (b) complementary Gray code (CGC), and (c) efficient intensity-based fringe projection profilometry (EIBFPP) methods.

To solve these problems, the CGC method was proposed, in which an additional codeword pattern $C$ is projected. During phase unwrapping, the traditional codeword patterns [$G_1$–$G_3$ in Fig. 1(a)] generate a traditional decoding fringe order $k_1$, and all codeword patterns [$G_1$–$G_3$ and $C$ in Fig. 1(b)] produce another fringe order $k_2$. Because the codeword boundaries associated with $k_1$ and $k_2$ are complementary, the unwrapped phase can be recovered correctly by combining $k_1$ and $k_2$.
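For readers unfamiliar with the encoding, the sketch below illustrates the standard binary-reflected Gray code (a textbook construction, not code from the paper): $Q={\log}_2 W$ bit-planes label $W$ fringe periods, adjacent codewords differ in a single bit, and a captured codeword can be decoded back to its fringe order.

```python
# Binary-reflected Gray code: encoding fringe orders and decoding codewords.
import numpy as np

def gray_encode(k):            # fringe order -> Gray codeword (works element-wise)
    return k ^ (k >> 1)

def gray_decode(g):            # Gray codeword -> fringe order
    k = 0
    while g:
        k ^= g
        g >>= 1
    return k

W = 8                                          # number of wrapped-phase periods
Q = int(np.log2(W))                            # number of codeword patterns
orders = np.arange(W)
codes = gray_encode(orders)
# Bit-plane q of the stripe pattern, one column per period for brevity.
planes = [(codes >> (Q - 1 - q)) & 1 for q in range(Q)]
print(np.array(planes))                        # 3 stripe patterns labelling 8 periods
print([gray_decode(int(c)) for c in codes])    # recovers 0..7
```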

2.2 Coding strategy resistant to global illumination

2.2.1 Pattern structure of EIBFPP

Most classical measurement techniques (e.g., GCPS and CGC) assume that the measured scene points are illuminated directly by the light source. Therefore, for many real-world scenarios, global-illumination effects severely degrade the measurement performance of structured-light techniques.

To solve this problem, one general solution is to set the frequency of the cast patterns to be higher than the light-transport bandwidth ($f_{\mathrm {LT}}$) of the measured scenarios. Details of how to select a suitable fringe frequency can be found in [47]. A novel EIBFPP method derived from CGC is thereby proposed to prevent the errors caused by global illumination while preserving the coding efficiency of codeword patterns. Similar to other intensity-based FPP (IBFPP) techniques, EIBFPP is also designed with a structure of codeword patterns plus sinusoidal patterns.

As illustrated in Fig. 1(c), there are two types of codeword patterns in EIBFPP. One is the template pattern $T$, and the other is the converted patterns $X_i\ (i=1, 2, 3, \ldots )$. The structure of $T$ is the same as that of the additional pattern ($C$) in CGC. As well as preventing the generation of fringe-order errors, this pattern performs the roles of fringe generation and fringe decoding. During fringe generation, the template pattern is used to convert low-frequency coding patterns into high-frequency ones. In the decoding stage, this pattern is regarded as the key to converting the other binarized patterns into traditional Gray-code patterns.

The converted patterns are produced from a template pattern and traditional codeword patterns, which can be described as

$$X_i(x, y)=G_i(x, y)\oplus T(x, y)\quad i=1, 2, 3, \ldots, N,$$
where $\oplus$ is the exclusive OR (XOR) operator and $N$ is the number of traditional codeword patterns in EIBFPP.

To guarantee measurement efficiency, EIBFPP uses only two frames of sinusoidal patterns to retrieve the wrapped phase, and these patterns can be expressed as

$$P_j(x, y)=a(x, y)+(-1)^jb(x, y)\cos [\phi(x, y)] \quad j=1, 2.$$
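The following sketch generates a projector-side pattern set according to Eqs. (6) and (7). The projector width and the 16-pixel sinusoid period match the values quoted in Section 4, and the template is drawn as a binary square wave obeying the frequency relation $2f_{T}=f_{P_j}$ given in Section 2.2.2; the exact stripe alignment of the template and the values of $a$ and $b$ are our assumptions rather than the paper's specification.

```python
# Sketch of an EIBFPP pattern set: Gray planes G_i, template T, converted X_i, sinusoids P_1, P_2.
import numpy as np

width = 912                    # projector columns (DLP LightCrafter 4500)
T_sin = 16                     # sinusoid period in pixels (as used in Sec. 4)
W = width // T_sin             # number of fringe periods (= 57)
x = np.arange(width)

# Gray-code bit-planes labelling the fringe periods (standard reflected code).
order = x // T_sin
gray = order ^ (order >> 1)
Q = int(np.ceil(np.log2(W)))   # 6 planes here, plus the template = 7 codeword patterns
G = [((gray >> (Q - 1 - q)) & 1) for q in range(Q)]

# Template pattern: binary square wave with period 2*T_sin, so that 2*f_T = f_P.
T_pat = (x // T_sin) % 2

# Converted (high-frequency) patterns, Eq. (6).
X = [g ^ T_pat for g in G]

# Two complementary sinusoids, Eq. (7); a = 0.5 and b = 0.4 are assumed values.
phi = 2 * np.pi * x / T_sin
P1 = 0.5 - 0.4 * np.cos(phi)
P2 = 0.5 + 0.4 * np.cos(phi)
```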

2.2.2 Limit condition of EIBFPP

Because the converted patterns are produced from traditional codeword patterns and a template pattern using an XOR operator, the frequency of each converted pattern ($f_{X_i}$) is always higher than $f_{G_i}$ and $f_{T}$. Additionally, a frequency relationship exists between the template pattern and the sinusoidal patterns such that $2f_{T}=f_{P_j}$. In summary, the frequencies of different patterns in EIBFPP are related by

$$R(f_{X_i}, f_{P_j}, f_T)= \left\{ \begin{aligned} f_{X_i}>f_T \\ f_{P_j}>f_T \end{aligned} \qquad i=1, 2, \ldots, N \quad j=1, 2 \right.$$
(i.e., the template pattern has the lowest pattern frequency among all the patterns in EIBFPP). Therefore, to avoid the errors induced by global illumination, the frequency of the cast patterns must obey the limit condition $f_T>f_{\textrm {LT}}$.

3. Phase unwrapping based on EIBFPP

To retrieve the 3D information from measured scenarios, the phase-unwrapping process of EIBFPP includes the following steps (a condensed code sketch of the whole pipeline follows the list).

  • 1. Wrapped-phase retrieval

    As presented in Eq. (7), the DC component $a(x, y)$ of the sinusoidal patterns can be canceled through

    $$P_C(x, y)=P_2(x, y)-P_1(x, y)=2b(x, y)\cos [\phi(x, y)].$$
    Applying the Hilbert transform $\mathscr {H}[\cdot ]$ to Eq. (9) leads to
    $$P_S(x, y)=\mathscr{H}[P_C(x, y)]=2b(x, y)\sin [\phi(x, y)].$$
    The wrapped phase map can then be computed by using the arctangent function:
    $$\phi(x, y)=\arctan \frac{P_S(x, y)}{P_C(x, y)}.$$

  • 2. Binarization of codeword patterns and background removal

    The second step for phase unwrapping is to binarize the codeword patterns. Because the DC component of the sinusoidal pattern also represents the texture of the illuminated objects, the average intensity of the two sinusoidal patterns can be regarded as a suitable threshold $T_h(x, y)$ for binarizing the codeword patterns:

    $$T_h(x, y)=a(x, y)=\frac{1}{2}\sum_{j=1}^2P_j(x, y).$$
    The fringe modulation $b(x, y)$ is acquired through
    $$b(x, y)=\frac{1}{2}\sqrt{P_C(x, y)^2+P_S(x, y)^2},$$
    which is effective in revealing the region of interest for measurement through simple thresholding.

  • 3. Transcoding and compensation for codeword patterns

    After binarization, the captured converted fringe patterns in EIBFPP can be transcoded into traditional Gray-code patterns:

    $$G_i(x, y)=X_i(x, y)\oplus T(x, y)\quad i=1, 2, 3, \ldots, N.$$
    However, under the influences of lens defocusing and environmental noise, some tiny speckles, as shown in Fig. 2(a), tend to break the structure of the recovered codeword patterns. Consequently, an optimization step is required before phase unwrapping.

    The rationale for the optimization is based on the fact that environmental noise is distributed randomly, and the process for $G_1(x, y)$ is selected as an example and presented in Fig. 2. The first step of optimization is to delete the speckles in black regions using the area-thresholding function

    $$\mathscr{P}(x, y)=\left\{ \begin{array}{lcr} 0 & & \mathrm{area}\leq A_\mathrm{th}, \\ 1 & & \mathrm{area}> A_\mathrm{th}, \end{array} \right.$$
    where $\mathscr{P}(x, y)$ is the value assigned to every pixel of a connected block with the given area, and $A_{\mathrm{th}}$ is an area threshold. After this step, $G_1(x, y)$ becomes $G_{b1}(x, y)$, in which the speckles in the black areas are removed completely [Fig. 2(b)]. Then, as presented in Fig. 2(c), a logical NOT is applied to $G_{b1}(x, y)$ so that the black speckles in the white regions become white speckles in black regions, as in Fig. 2(a), and the area-thresholding function is applied again to cancel them. Finally, the noise-suppressed pattern $G_{O1}(x, y)$ is obtained by applying a logical NOT to the cleaned $\sim G_{b1}(x, y)$ once more.

  • 4. Decoding

    The first step for decoding is to find the decimal number $D(x, y)$ of each pixel, namely

    $$D(x, y)=\sum_{q=1}^QG_{Oq}(x, y)\cdot2^{(Q-q)},$$
    where $q$ indexes the codeword patterns and $Q$ is their total number. After this, the fringe order $k(x, y)$ for phase unwrapping can be acquired from a given look-up table $L_Q$, which can be established through a one-to-one mapping between $D(x, y)$ and its serial number when the field of view contains the entirety of each codeword pattern. For example, the structure of $L_4$ is presented in Table 1.

    By substituting traditional Gray-code patterns into Eq. (16), the decimal number $D_1(x, y)$ and the corresponding fringe order $k_1(x, y)$ can be determined. Substituting all the codeword patterns into Eq. (16) leads to another decimal number $D_2(x, y)$. Unlike the calculation of $k_1(x, y)$, the fringe order $k_2(x, y)$ is determined using

    $$k_2(x, y)=\mathrm{fix}\left\{\frac{L_Q[D_2(x, y)]}{2}\right\},$$
    where $\mathrm {fix}\{\cdot \}$ returns the argument rounded down to the nearest integer.

  • 5. Phase unwrapping

    Finally, phase unwrapping can be executed using the two fringe orders $k_1(x, y)$ and $k_2(x, y)$, with

    $$\Phi(x, y)=\left\{ \begin{array}{lcr} \phi(x, y)+2[k_2(x, y)+1]\pi & & \phi(x, y)\leq-\frac{\pi}{2}, \\ \phi(x, y)+2k_1(x, y)\pi & & -\frac{\pi}{2}<\phi(x, y)<\frac{\pi}{2}, \\ \phi(x, y)+2k_2(x, y)\pi & & \phi(x, y)\geq\frac{\pi}{2}. \end{array} \right.$$

Note that the errors in Fig. 2(a) look like phase-jump errors, which reduce the measurement accuracy of IBFPP. However, these errors are located not around the phase-jump regions but around the boundaries of our codeword patterns; they merely resemble phase-jump errors rather than being such errors. Moreover, this type of error cannot be removed by projecting one or more additional Gray-code patterns, but it is corrected by our area-based compensation method.
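The steps above can be strung together as in the condensed sketch below (our illustration; all function names are ours, and appending the template as the extra bit when computing $k_2$ is an assumption analogous to CGC). Vertical fringes whose phase increases along the image columns are assumed, and the look-up tables are built from the ideal projected planes as described in step 4.

```python
# Condensed EIBFPP decoding/unwrapping pipeline (steps 1-5), NumPy/SciPy sketch.
import numpy as np
from scipy import ndimage
from scipy.signal import hilbert

def wrapped_phase_two_step(P1, P2):
    """Step 1 [Eqs. (9)-(11)] plus the step-2 quantities [Eqs. (12)-(13)]."""
    Pc = P2 - P1                              # 2*b*cos(phi), Eq. (9)
    Ps = np.imag(hilbert(Pc, axis=1))         # 2*b*sin(phi), Eq. (10), row-wise Hilbert
    phi = np.arctan2(Ps, Pc)                  # wrapped phase, Eq. (11)
    Th = 0.5 * (P1 + P2)                      # pixel-wise threshold, Eq. (12)
    b = 0.5 * np.hypot(Pc, Ps)                # fringe modulation, Eq. (13)
    return phi, Th, b

def binarize(captured, Th):
    """Step 2: pixel-wise binarization of a captured codeword image."""
    return captured > Th

def clean_binary(G, a_th):
    """Step 3 compensation [Eq. (15)]: remove speckles smaller than a_th,
    first in the black regions and then (via logical NOT) in the white ones."""
    def drop_small(mask):
        lab, n = ndimage.label(mask)
        sizes = ndimage.sum(mask, lab, np.arange(1, n + 1))
        return np.isin(lab, np.flatnonzero(sizes > a_th) + 1)
    Gb = drop_small(G)
    return ~drop_small(~Gb)

def build_lut(reference_rows):
    """L_Q as a dense array: codeword value -> serial number, scanned across one
    ideal projector row per plane (the whole pattern must be in the field of view)."""
    Q = len(reference_rows)
    D = sum(r.astype(int) << (Q - 1 - q) for q, r in enumerate(reference_rows))
    lut, seen = np.zeros(2 ** Q, dtype=int), set()
    for d in map(int, D):
        if d not in seen:
            seen.add(d)
            lut[d] = len(seen) - 1
    return lut

def decode(planes, lut):
    """Step 4, Eq. (16): weight the cleaned bit-planes and apply the look-up table."""
    Q = len(planes)
    D = sum(p.astype(int) << (Q - 1 - q) for q, p in enumerate(planes))
    return lut[D]

def fringe_orders(X_bin, T_bin, lut_n, lut_all, a_th=30):
    """Steps 3-4: transcode [Eq. (14)], compensate, and compute both fringe orders.
    a_th is an assumed speckle-area threshold; the template is appended as the
    least-significant bit for k2 (assumed ordering, analogous to CGC)."""
    G = [clean_binary(x ^ T_bin, a_th) for x in X_bin]
    k1 = decode(G, lut_n)                                        # traditional order
    k2 = decode(G + [clean_binary(T_bin, a_th)], lut_all) // 2   # Eq. (17)
    return k1, k2

def unwrap(phi, k1, k2):
    """Step 5, Eq. (18): select the fringe order according to the wrapped phase."""
    return np.where(phi <= -np.pi / 2, phi + 2 * np.pi * (k2 + 1),
           np.where(phi >=  np.pi / 2, phi + 2 * np.pi * k2,
                                       phi + 2 * np.pi * k1))
```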

Fig. 2. Optimization process for $G_1(x, y)$: (a) transformed pattern $G_1(x, y)$; (b) noise-suppressed pattern in black regions $G_{b1}(x, y)$; (c) $\sim G_{b1}(x, y)$; (d) fully noise-suppressed pattern $G_{O1}(x, y)$.

Table 1. Structure of $L_4$.

4. Experiments

To demonstrate the performance of the proposed method, we constructed an FPP system and conducted some experiments. The system comprised a computer, a projector (DLP LightCrafter 4500; Texas Instruments, Inc., USA), and a camera (BFS-U3-13Y3M-C; Point Grey Research, Inc., Canada). The camera had a resolution of $1280 \times 1024$ pixels and a maximum frame rate of 180 fps. The resolution of the projector was $912 \times 1140$ pixels, and its maximum projection rate was 120 Hz in eight-bit mode. The system was placed 0.3–0.4 m in front of the objects to be measured, and the angle between the optical axis of the camera and that of the projector was approximately 30$^\circ$.

The system was calibrated using the method proposed by Li et al. [48]. The key concept regarding this calibration framework is that the projector “captures” images like a camera, thereby making the projector and camera calibration the same. For our measurement system, we used nine different poses for system calibration, and for each pose the mapping relationship between camera and projector was created by projecting two sets of mutually orthogonal CGC fringe patterns. Figure 3 shows an example of the captured images with horizontal, vertical, and no fringe pattern. The intrinsic parameter matrices for the camera ($\mathbf {A}^{c}$) and the projector ($\mathbf {A}^{p}$) are

$$\mathbf{A}^{c}=\left[\begin{array}{ccc} 2534.12009 & 0 & 628.25409 \\ 0 & 2533.98304 & 512.58072 \\ 0 & 0 & 1 \end{array}\right],$$
$$\mathbf{A}^{p}=\left[\begin{array}{ccc} 2277.16941 & 0 & 833.55384 \\ 0 & 2262.90636 & 1048.36868 \\ 0 & 0 & 1 \end{array}\right].$$

Fig. 3. Examples of captured images with (a) horizontal pattern projection, (b) vertical pattern projection, and (c) no pattern projection.

As shown in Fig. 4, the first experiment was designed to test the accuracy of the proposed method by measuring a sculpture. We also measured the same object using a standard three-step phase-shifting approach with the CGC method as the ground truth for comparison. In this and the following experiments, sinusoidal patterns with a period of 16 pixels and an EIBFPP set with seven codeword patterns were used to illuminate the objects. The reconstruction result of the CGC method is presented in Fig. 4(b), which overall agrees well with the 3D result calculated from our proposed framework shown in Fig. 4(c). To better examine their differences, we evaluated the accuracy of the proposed method in Geomagic Wrap 2017, and the corresponding difference result is presented in Fig. 4(d). The standard deviation of the 3D reconstruction errors is 0.1989 mm and the root-mean-square error is 0.1992 mm. This shows that the proposed method performs well in practical measurement while projecting fewer patterns.

Fig. 4. Three-dimensional (3D) measurement results for a sculpture: (a) object for measurement; (b) 3D measurement result using CGC method; (c) 3D measurement result using proposed method; (d) difference in geometry between CGC and proposed method.

The EIBFPP method was then tested by measuring two isolated statues with diffuse surfaces (i.e., almost no global illumination). Figure 5 shows images of the captured fringe patterns modulated by the statues. As presented in Figs. 6(a)–6(f), the converted patterns were converted successfully into traditional Gray-code patterns using the template pattern. The variables $a(x, y)$, $b(x, y)$, and $\phi (x, y)$, as shown in Figs. 6(g)–6(i), were then obtained by substituting the related patterns into Eqs. (12), (13), and (11). After decoding, fringe compensation, and shape reconstruction, the 3D geometries of the measured scenarios were recovered, and these are presented in Fig. 6(j). This experimental result shows clearly that both statues were reconstructed properly and that these reconstructions were of high quality. This confirms that the proposed method works well for measuring isolated objects with diffuse surfaces.

Fig. 5. Captured fringe patterns including converted patterns $X_1$–$X_6$, template pattern $T$, and sinusoidal patterns $P_1$ and $P_2$.

The third experiment was designed to measure a scene that included a cup lid placed in a V-shaped groove to test the robustness of EIBFPP to interference from interreflections [enclosed by red dashed lines in Fig. 7(a)]. To assess the performance of the proposed method, EIBFPP was compared with CGC. Figure 7(b) presents one of the codeword patterns of CGC. It can be seen that artifacts caused by interreflections considerably degrade the fringe quality, and these artifacts induce errors during binarization, as shown in Fig. 7(d). In contrast, Fig. 7(c) shows the corresponding pattern converted from Fig. 7(b). Its fringe quality is noticeably better than that of the captured CGC codeword pattern, and no obvious errors appear either in Fig. 7(c) or in its binary result [Fig. 7(e)]. Figures 7(f) and 7(g) further present the reconstruction results calculated from CGC and EIBFPP, respectively. Remarkably, when compared with CGC, EIBFPP produces nearly error-free results using the same period of sinusoidal fringes but with fewer patterns.

Figure 8(a) shows a scene comprising two translucent nylon pillars, which was used to assess the performance of EIBFPP when influenced by subsurface scattering. Figures 8(b) and 8(d) present one of the captured codeword patterns and its corresponding binary result. Influenced by subsurface scattering, some binary errors, enclosed by the red rectangle in Fig. 8(d), occur around the edge of the codeword pattern. Figure 8(c) shows the corresponding converted pattern captured with EIBFPP. After binarization, as shown in Fig. 8(e), although the fringe pattern is influenced by subsurface scattering, no obvious errors occur in the codeword pattern. Figures 8(f) and 8(g) present the reconstruction results from CGC and EIBFPP, respectively. Clearly, EIBFPP outperforms CGC in this subsurface-scattering scenario.

Fig. 6. (a)–(f) Traditional Gray-code patterns $G_1$–$G_6$ converted from $X_1$–$X_6$; (g) $a(x, y)$; (h) $b(x, y)$; (i) $\phi (x, y)$; (j) 3D reconstruction result.

Fig. 7. Errors due to interreflections: (a) measured scenario; (b) a captured codeword pattern in CGC; (c) corresponding converted pattern in EIBFPP; (d) binary result of (b); (e) binary result of (c); (f) 3D reconstruction result from CGC; (g) 3D reconstruction result from EIBFPP.

Fig. 8. Errors due to subsurface scattering: (a) measured scenario; (b) a captured CGC pattern; (c) a captured EIBFPP pattern; (d) binary result of (b); (e) binary result of (c); (f) 3D reconstruction result from CGC; (g) 3D reconstruction result from EIBFPP.

The scene in Fig. 9(a) has interreflections between the cup lid, the V-shaped groove, and the statues, and subsurface scattering on the nylon pillar. As presented in Fig. 9(b), interreflections induce errors around the margins of the statues, and subsurface scattering causes errors on the surface of the nylon pillar. The reconstruction result from EIBFPP is shown in Fig. 9(c). It can be seen that our method significantly reduces the errors caused by global illumination.

Fig. 9. Errors due to global illumination: (a) measured scenario; (b) 3D reconstruction result from CGC; (c) 3D reconstruction result from EIBFPP.

5. Discussion and conclusion

This paper has presented a novel IBFPP technique for 3D reconstruction, known as the EIBFPP method. Unlike in conventional IBFPP methods, the projected codeword patterns are all designed with high spatial frequencies to resist global illumination. During phase unwrapping, a template pattern is used to convert the captured patterns into traditional Gray-code patterns. After this, a dedicated compensation algorithm is used to tackle the fringe errors caused by environmental noise and lens defocusing. The sinusoidal patterns have a reciprocal structure, and the average intensity of the captured sinusoidal patterns is used as a pixel-wise threshold to binarize the codeword patterns. Facilitated by the properties of the Hilbert transform, only two sinusoidal patterns are required to retrieve the wrapped phase map, which enables high measurement efficiency. The ability of the EIBFPP method to measure isolated objects was confirmed by initial experiments. Further experiments on scenarios influenced by different types of global illumination verified that the proposed EIBFPP method outperforms conventional IBFPP methods because of its greater robustness and use of fewer projected patterns. In actual measured scenarios, the accuracy of the proposed method could be improved further by implementing an active/passive gamma-correction algorithm or by dithering the projected fringe patterns [49–51].

Funding

National Natural Science Foundation of China (61873183); Major Scientific Instrument and Equipment Development Project in the National Key Research and Development Program of China (2016YFF0101802).

Disclosures

The authors declare no conflicts of interest.

References

1. E. N. Malamas, E. G. Petrakis, M. Zervakis, L. Petit, and J.-D. Legat, “A survey on industrial vision systems, applications and tools,” Image Vis. Comput. 21(2), 171–188 (2003). [CrossRef]  

2. B. Li, Y. An, D. Cappelleri, J. Xu, and S. Zhang, “High-accuracy, high-speed 3d structured light imaging techniques and potential applications to intelligent robotics,” International Journal of Intelligent Robotics and Applications pp. 86–103 (2017).

3. J. Deng, J. Li, H. Feng, and Z. Zeng, “Flexible depth segmentation method using phase-shifted wrapped phase sequences,” Opt. Lasers Eng. 122, 284–293 (2019). [CrossRef]  

4. J. Deng, J. Li, S. Ding, H. Feng, Y. Xiao, W. Han, and Z. Zeng, “Fringe projection decamouflaging,” Opt. Lasers Eng. 134, 106201 (2020). [CrossRef]  

5. X. Su and Q. Zhang, “Dynamic 3-d shape measurement method: A review,” Opt. Lasers Eng. 48(2), 191–204 (2010). [CrossRef]  

6. S. Zhang, “Recent progresses on real-time 3d shape measurement using digital fringe projection techniques,” Opt. Lasers Eng. 48(2), 149–158 (2010). [CrossRef]  

7. S. Zhang, “High-speed 3d shape measurement with structured light methods: A review,” Opt. Lasers Eng. 106, 119–131 (2018). [CrossRef]  

8. S. Zhang, “Absolute phase retrieval methods for digital fringe projection profilometry: A review,” Opt. Lasers Eng. 107, 28–37 (2018). [CrossRef]  

9. X. Su and W. Chen, “Reliability-guided phase unwrapping algorithm: a review,” Opt. Lasers Eng. 42(3), 245–261 (2004). [CrossRef]  

10. C. Zuo, L. Huang, M. Zhang, Q. Chen, and A. Asundi, “Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review,” Opt. Lasers Eng. 85, 84–103 (2016). [CrossRef]  

11. Y. Wang and S. Zhang, “Superfast multifrequency phase-shifting technique with optimal pulse width modulation,” Opt. Express 19(6), 5149–5155 (2011). [CrossRef]  

12. Y.-Y. Cheng and J. C. Wyant, “Two-wavelength phase shifting interferometry,” Appl. Opt. 23(24), 4539–4543 (1984). [CrossRef]  

13. J. Zhong and Y. Zhang, “Absolute phase-measurement technique based on number theory in multifrequency grating projection profilometry,” Appl. Opt. 40(4), 492–500 (2001). [CrossRef]  

14. Y. Wang and S. Zhang, “Novel phase-coding method for absolute phase retrieval,” Opt. Lett. 37(11), 2067–2069 (2012). [CrossRef]  

15. C. Zuo, Q. Chen, G. Gu, S. Feng, F. Feng, R. Li, and G. Shen, “High-speed three-dimensional shape measurement for dynamic scenes using bi-frequency tripolar pulse-width-modulation fringe projection,” Opt. Lasers Eng. 51(8), 953–960 (2013). [CrossRef]  

16. C. Jiang, B. Li, and S. Zhang, “Pixel-by-pixel absolute phase retrieval using three phase-shifted fringe patterns without markers,” Opt. Lasers Eng. 91, 232–241 (2017). [CrossRef]  

17. Y. An, J.-S. Hyun, and S. Zhang, “Pixel-wise absolute phase unwrapping using geometric constraints of structured light system,” Opt. Express 24(16), 18445–18459 (2016). [CrossRef]  

18. W. Yin, C. Zuo, S. Feng, T. Tao, Y. Hu, L. Huang, J. Ma, and Q. Chen, “High-speed three-dimensional shape measurement using geometry-constraint-based number-theoretical phase unwrapping,” Opt. Lasers Eng. 115, 21–31 (2019). [CrossRef]  

19. Y. Wang, L. Liu, J. Wu, X. Chen, and Y. Wang, “Spatial binary coding method for stripe-wise phase unwrapping,” Appl. Opt. 59(14), 4279–4285 (2020). [CrossRef]  

20. D. Zheng, Q. Kemao, F. Da, and H. S. Seah, “Ternary gray code-based phase unwrapping for 3d measurement using binary patterns with projector defocusing,” Appl. Opt. 56(13), 3660–3665 (2017). [CrossRef]  

21. G. P. Butel, G. A. Smith, and J. H. Burge, “Binary pattern deflectometry,” Appl. Opt. 53(5), 923–930 (2014). [CrossRef]  

22. S. Zhang, “Flexible 3d shape measurement using projector defocusing: extended measurement range,” Opt. Lett. 35(7), 934–936 (2010). [CrossRef]  

23. D. Zheng, F. Da, Q. Kemao, and H. S. Seah, “Phase-shifting profilometry combined with gray-code patterns projection: unwrapping error removal by an adaptive median filter,” Opt. Express 25(5), 4700–4713 (2017). [CrossRef]  

24. G. Sansoni, M. Carocci, and R. Rodella, “Three-dimensional vision based on a combination of gray-code and phase-shift light projection: analysis and compensation of the systematic errors,” Appl. Opt. 38(31), 6565–6573 (1999). [CrossRef]  

25. Y. Wang, L. Liu, J. Wu, X. Song, X. Chen, and Y. Wang, “Dynamic three-dimensional shape measurement with a complementary phase-coding method,” Opt. Lasers Eng. 127, 105982 (2020). [CrossRef]  

26. Q. Zhang, X. Su, L. Xiang, and X. Sun, “3-d shape measurement based on complementary gray-code light,” Opt. Lasers Eng. 50(4), 574–579 (2012). [CrossRef]  

27. Z. Wu, W. Guo, and Q. Zhang, “High-speed three-dimensional shape measurement based on shifting gray-code light,” Opt. Express 27(16), 22631–22644 (2019). [CrossRef]  

28. Z. Wu, C. Zuo, W. Guo, T. Tao, and Q. Zhang, “High-speed three-dimensional shape measurement based on cyclic complementary gray-code light,” Opt. Express 27(2), 1283–1297 (2019). [CrossRef]  

29. Z. Wu, W. Guo, Y. Li, Y. Liu, and Q. Zhang, “High-speed and high-efficiency three-dimensional shape measurement based on gray-coded light,” Photonics Res. 8(6), 819–829 (2020). [CrossRef]  

30. J. Gu, T. Kobayashi, M. Gupta, and S. K. Nayar, “Multiplexed illumination for scene recovery in the presence of global illumination,” in 2011 International Conference on Computer Vision, (2011), pp. 691–698.

31. M. Gupta, Y. Tian, S. G. Narasimhan, and L. Zhang, “(de) focusing on global light transport for active scene recovery,” in 2009 IEEE Conference on Computer Vision and Pattern Recognition, (2009), pp. 2969–2976.

32. S. Nayar, K. Ikeuchi, and T. Kanade, “Shape from interreflections,” Int. J. Comput. Vis. 6(3), 173–195 (1991). [CrossRef]  

33. M. Gupta, Y. Tian, S. Narasimhan, and l. Zhang, “A combined theory of defocused illumination and global light transport,” Int. J. Comput. Vis. 98(2), 146–167 (2012). [CrossRef]  

34. M. K. Chandraker, F. Kahl, and D. J. Kriegman, “Reflections on the generalized bas-relief ambiguity,” in 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), vol. 1 (2005), pp. 788–795 vol. 1.

35. M. Liao, X. Huang, and R. Yang, “Interreflection removal for photometric stereo by using spectrum-dependent albedo,” in CVPR 2011, (2011), pp. 689–696.

36. T. Chen, H. P. A. Lensch, C. Fuchs, and H. Seidel, “Polarization and phase-shifting for 3d scanning of translucent objects,” in 2007 IEEE Conference on Computer Vision and Pattern Recognition, (2007), pp. 1–8.

37. C. Hermans, Y. Francken, T. Cuypers, and P. Bekaert, “Depth from sliding projections,” in 2009 IEEE Conference on Computer Vision and Pattern Recognition, (2009), pp. 1865–1872.

38. J. Park and A. Kak, “3d modeling of optically challenging objects,” IEEE Trans. Visual. Comput. Graphics 14(2), 246–262 (2008). [CrossRef]  

39. M. Holroyd, J. Lawrence, and T. Zickler, “A coaxial optical scanner for synchronous acquisition of 3d geometry and surface reflectance,” ACM Trans. Graph. 29(4), 1–12 (2010). [CrossRef]  

40. Y. Xu, H. Zhao, H. Jiang, and X. Li, “High-accuracy 3d shape measurement of translucent objects by fringe projection profilometry,” Opt. Express 27(13), 18421–18434 (2019). [CrossRef]  

41. X. Chen and Y.-H. Yang, “Scene adaptive structured light using error detection and correction,” Pattern Recognition 48(1), 220–230 (2015). [CrossRef]  

42. S. K. Nayar, G. Krishnan, M. D. Grossberg, and R. Raskar, “Fast separation of direct and global components of a scene using high frequency illumination,” ACM Trans. Graph. 25(3), 935–944 (2006). [CrossRef]  

43. Y. Zhang, Z. Xiong, and F. Wu, “Unambiguous 3d measurement from speckle-embedded fringe,” Appl. Opt. 52(32), 7797–7805 (2013). [CrossRef]  

44. T. Chen, H. Seidel, and H. P. A. Lensch, “Modulated phase-shifting for 3d scanning,” in 2008 IEEE Conference on Computer Vision and Pattern Recognition, (2008), pp. 1–8.

45. M. Gupta, A. Agrawal, A. Veeraraghavan, and S. G. Narasimhan, “Structured light 3d scanning in the presence of global illumination,” in CVPR 2011, (2011), pp. 713–720.

46. M. Gupta and S. K. Nayar, “Micro phase shifting,” in 2012 IEEE Conference on Computer Vision and Pattern Recognition, (2012), pp. 813–820.

47. S. Tang, X. Zhang, and D. Tu, “Micro-phase measuring profilometry: Its sensitivity analysis and phase unwrapping,” Opt. Lasers Eng. 72, 47–57 (2015). [CrossRef]  

48. B. Li, N. Karpinsky, and S. Zhang, “Novel calibration method for structured-light system with an out-of-focus projector,” Appl. Opt. 53(16), 3415–3426 (2014). [CrossRef]  

49. S. Zhang, “Comparative study on passive and active projector nonlinear gamma calibration,” Appl. Opt. 54(13), 3834–3841 (2015). [CrossRef]  

50. S. Lei and S. Zhang, “Flexible 3-d shape measurement using projector defocusing,” Opt. Lett. 34(20), 3080–3082 (2009). [CrossRef]  

51. Y. Wang and S. Zhang, “Three-dimensional shape measurement with binary dithered patterns,” Appl. Opt. 51(27), 6631–6636 (2012). [CrossRef]  
