
High-frequency color-encoded fringe-projection profilometry based on geometry constraint for large depth range

Open Access

Abstract

In multi-view fringe projection profilometry (FPP), a limitation of geometry-constraint based approaches is that the measurement depth range is often reduced to limit the number of candidate points and thus increase the reliability of corresponding-point selection when high-frequency fringe patterns are used. To extend the depth range, a new method of high-frequency fringe projection profilometry was developed by color encoding the projected fringe patterns to allow reliable candidate-point selection even when six candidate points are in the measurement volume. The wrapped phase is directly retrieved using the intensity component of the hue-saturation-intensity (HSI) color space, and complementary hue is introduced to identify color codes for correct corresponding-point selection. Mathematical analyses of the effect of color crosstalk on phase calculation and color code identification show that the phase calculation is independent of color crosstalk and that color crosstalk has little effect on color code identification. Experiments demonstrated that the new method can achieve high accuracy in 3D measurement over a large depth range and for isolated objects, using only two high-frequency color-encoded fringe patterns.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Fringe projection profilometry (FPP) is commonly used for non-contact three-dimensional (3D) shape measurement due to its advantages of high resolution and accuracy [1]. Typical FPP techniques, such as Fourier transform profilometry (FTP) [2,3] and phase-shifting profilometry (PSP) [4,5], provide a wrapped phase ranging from $-\pi$ to $\pi$ with $2\pi$ discontinuities. The wrapped phase needs to be unwrapped to a continuous phase map without ambiguity for 3D shape reconstruction.

Spatial phase unwrapping algorithms can be robust for a continuous surface. However, since the phase is unwrapped based on phase values of neighboring pixels [6,7], errors can occur at abrupt depth changes, either on a single surface or when measuring multiple isolated objects. In contrast, temporal phase unwrapping (TPU) methods overcome this problem by acquiring additional sequences of images, most commonly using patterns at multiple frequencies or wavelengths [8,9]. Fringe orders can be determined using the additional information [10–13], and the phase at each pixel is unwrapped independently. However, the large number of images required in TPU methods slows the measurement process.

To avoid the additional images of TPU methods, researchers have investigated alternative approaches that determine the absolute fringe order using geometry constraints. One geometry-constraint based phase unwrapping method used the geometric relationship between the camera and projector, where the phase was unwrapped pixel by pixel using a predefined artificial absolute phase [14]. However, this method is limited to measuring objects within a small depth range from a predefined virtual plane. To extend the depth range, some researchers developed alternative methods; however, in [15,16], prior knowledge of the approximate object geometry was needed. Color fringe patterns were used in [17] to extend the depth range; however, determining which of the red, green, and blue (RGB) channels should be used in the wrapped-phase calculation depended on color code identification, which may have errors at the period edges of the sinusoidal fringes. Background and amplitude encoding has been used with geometric constraints [18]; however, the reduced amplitude of the fringe pattern may degrade measurement accuracy.

The aforementioned methods used a single-camera single-projector FPP system. Multi-view geometry-constraint based methods, which use a second camera in the FPP system, provide additional geometric constraints to eliminate the phase ambiguity without phase unwrapping [19–21]. In the multi-view system, the phase ambiguity problem can be solved using the constraints of the wrapped phase, epipolar geometry, and depth of the measurement volume. The methods work for discontinuous surfaces and isolated objects without requiring additional images or prior knowledge of object geometry. These methods are normally designed with a relatively low fringe frequency to reduce the number of candidate points in the measurement volume, in order to provide higher reliability when determining multi-view correspondences. However, there is a challenging tradeoff. High-frequency fringe patterns are commonly preferred to achieve high-accuracy measurement [4], but when high-frequency fringe patterns are used, the number of candidate corresponding points for determining multi-view correspondences is large, reducing the reliability of corresponding-point selection. To solve this problem, statistical patterns [22], triangular patterns [23] and background coding [24] have been embedded into fringe patterns to assist in selecting the correct corresponding point. While these methods worked well, embedding the parameters reduces the signal amplitude, which may reduce measurement accuracy. Recently, Liu and Kofman [25] and Liu et al. [26] improved their implementation of high-frequency fringe patterns with geometry-constraint based methods by using a shortened camera-projector baseline to allow only two or one candidate point, respectively, and thus achieve more reliable corresponding-point selection. However, a remaining limitation of these geometry-constraint based approaches is the reduced depth of the measurement volume, which may be an important limitation for some applications, such as measurement in uncontrolled environments. To enlarge the measurement volume, Tao et al. [27] developed an adaptive depth constraint strategy that can dynamically update the compact depth range pixel-wise according to the object shape. However, a correct initial depth map was required, and the pixel-wise depth range adaptation depended on the depth in the neighboring region. Thus, there is a need for a method that uses high-frequency fringe patterns while still allowing a large depth range of the measurement volume, without sacrificing the fringe pattern amplitude.

This paper presents a method of high-frequency color-encoded fringe projection profilometry that extends the depth range in multi-view geometry-constraint based methods. A grayscale sinusoidal fringe pattern is encoded into the red, green, and blue channels of a color image using six binary codes, to enable reliable corresponding-point selection even when six candidate points are in the measurement volume. Since six candidate points are allowed in the measurement depth range, the method permits a large measurement volume even with high-frequency fringe patterns. Furthermore, with the use of color fringe patterns, the amplitude of the pattern is not reduced, and measurement accuracy is thus not degraded. The wrapped phase is directly retrieved using the intensity component of the hue-saturation-intensity (HSI) color space, rather than relying on color code identification, and complementary hue is introduced for color code identification. In addition, mathematical analyses of the effect of color crosstalk on phase calculation and color code identification are performed, showing that the phase calculation is independent of color crosstalk and that color crosstalk has little effect on color code identification.

2. Principle and method

2.1 Background-normalized π-shift FTP

In this paper, background-normalized π-shift FTP is employed for phase extraction. In the projector space, two sinusoidal fringe patterns with π phase shift are given as:

$$I_0^p({x^p},{y^p}) = {a^p} + {b^p}\cos (2\pi f_0^p{x^p})$$
$$I_1^p({x^p},{y^p}) = {a^p} + {b^p}\cos (2\pi f_0^p{x^p} - \pi ), $$
where ${a^p}$ is the background intensity, ${b^p}$ the amplitude, $({x^p},{y^p})$ the projector pixel coordinates, and $f_0^p$ the projected fringe frequency. The intensities of the captured fringe patterns can be expressed as:
$${I_0}(x,y) = R(x,y){a^p} + R(x,y){b^p}\cos \varphi (x,y) = A(x,y) + B(x,y)\cos \varphi (x,y)$$
$${I_1}(x,y) = R(x,y){a^p} - R(x,y){b^p}\cos \varphi (x,y) = A(x,y) - B(x,y)\cos \varphi (x,y), $$
where $(x,y)$ are the captured image coordinates, $R(x,y)$, $A(x,y)$ and $B(x,y)$ represent the reflectivity of the object, background intensity, and intensity modulation, respectively; and $\varphi (x,y)$ is the phase, which contains depth information. By taking the difference between I0 and I1, the zero-frequency term can be effectively removed, and the fundamental frequency term after background normalization [28] can be expressed as:
$${I_d}(x,y) = \frac{{{I_0}(x,y) - {I_1}(x,y)}}{{{I_0}(x,y) + {I_1}(x,y)}} = \frac{{B(x,y)}}{{A(x,y)}}\cos \varphi (x,y). $$
The Fourier transform is applied to retrieve the wrapped phase:
$$\varphi (x,y) = {\tan ^{ - 1}}\left[ {\frac{{{\mathop{\rm Im}\nolimits} ({{F^{ - 1}}\{{\Gamma [{F({{I_d}(x,y)} )} ]} \}} )}}{{{\mathop{\rm Re}\nolimits} ({{F^{ - 1}}\{{\Gamma [{F({{I_d}(x,y)} )} ]} \}} )}}} \right], $$
where Im() and Re() represent the imaginary and real parts, respectively; $F()$ and ${F^{ - 1}}()$ are the Fourier transform and inverse Fourier transform operators, respectively; and $\Gamma [{} ]$ denotes the bandpass filtering process.
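To make the processing pipeline concrete, the following minimal sketch (Python/NumPy) implements Eqs. (5) and (6) row-wise for fringes varying along x. The function name, the `half_bw` bandwidth parameter, and the simple rectangular bandpass window are illustrative assumptions, not the authors' implementation; it assumes the normalized carrier frequency `f0` (cycles/pixel) is approximately known.

```python
import numpy as np

def wrapped_phase_pishift_ftp(I0, I1, f0, half_bw=0.5):
    """Retrieve the wrapped phase from two pi-shifted fringe images (floats)."""
    eps = 1e-9
    # Eq. (5): difference over sum removes the background and normalizes.
    Id = (I0 - I1) / (I0 + I1 + eps)
    # Row-wise 1D Fourier transform (fringes vary along x).
    spec = np.fft.fft(Id, axis=1)
    freqs = np.fft.fftfreq(Id.shape[1])
    # Bandpass filter Gamma[.]: keep only a window around the +f0 carrier lobe.
    mask = np.abs(freqs - f0) < half_bw * f0
    analytic = np.fft.ifft(spec * mask, axis=1)
    # Eq. (6): wrapped phase is the angle of the filtered analytic signal.
    return np.angle(analytic)
```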

2.2 Geometry-constraints for a single-projector two-camera system

A typical geometry-constraint based multi-view system includes one projector and two cameras, as illustrated in Fig. 1. Since the projector can be regarded as an inverse camera, each pair of the three devices forms a standard stereovision system. In this paper, the basic method of combining stereovision with fringe projection [29] is used: phase values determined by the fringe projection system establish correspondences between the projector and the two cameras, and 3D reconstruction of a surface is carried out point by point using standard stereovision techniques, with the two cameras calibrated for intrinsic and extrinsic parameters. The calibration parameters of the system can be obtained prior to measurement [25,30,31].

Fig. 1. Diagram of geometry-constraint based multi-view system.

When a projector projects a fringe pattern onto the object surface, the deformed fringe patterns can be captured by the two cameras in different views. For any pixel PL in the left-camera image, there is a corresponding line in 3D space that projects onto the projector and right-camera image planes as the epipolar lines eP and eRC, respectively (Fig. 1). The wrapped phase $\varphi (x,y)$ can be calculated using Eq. (6). If the number of fringe periods is N, then there are N candidate corresponding points in 3D space having the same phase value as point PL. Although there are multiple points on the line through PL in the measurement volume that project onto epipolar lines eP and eRC with the same phase value, only one point lies on the object surface (shown red), and this is the correct corresponding surface point to the projected point PP and the observed points PL in the left-camera image and PR in the right-camera image. A basic geometry-constraint approach to selecting the correct candidate point among the candidates in the measurement volume is to use a predefined depth range between Zmin and Zmax to limit the number of candidate points; candidate points outside the predefined depth range are considered false candidates. In this paper, high-frequency patterns are used, since they tend to reduce phase error and thus improve measurement accuracy in FPP; however, this increases the number of candidate points in the measurement volume compared to low-frequency patterns. Highly constraining the depth range to limit the number of candidate points is itself a limitation in FPP measurement, yet enlarging the measurement depth range generates more candidate points, especially when high-frequency fringe patterns are used. In this paper, reliable selection of the correct corresponding point, even when more candidate points are allowed in the depth range, is achieved by color encoding the projected fringe patterns with binary codes, as explained in the following section. Once the correspondences between the left camera and projector are determined based on matching phase values, correspondences between the right and left cameras can be determined as detailed in [25]. This includes correspondence refinement, in which a search is performed for the true corresponding point on the right-camera epipolar line eRC that has the same phase value as the left-camera pixel PL. Finally, the 3D coordinates of the object surface points can be reconstructed by standard stereovision techniques, using the final correspondences between the left and right cameras and the calibration parameters of the two cameras.
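The candidate-point enumeration under the depth constraint can be sketched as follows. This is a simplified illustration: it assumes 3×4 projection matrices `P_cam` and `P_proj` from prior calibration, a fringe wavelength `lam` in projector pixels, and treats each projector column with the given phase as a plane of light for linear triangulation; all names are assumptions.

```python
import numpy as np

def candidate_points(u, v, phi, P_cam, P_proj, lam, n_periods, zmin, zmax):
    """Enumerate 3D candidates for left-camera pixel (u, v) with wrapped phase phi."""
    candidates = []
    for k in range(n_periods):                  # fringe-order hypothesis
        xp = (phi / (2 * np.pi) + k) * lam      # projector column with same phase
        # Linear triangulation (DLT): two rows from the camera pixel, one row
        # from the projector column, solved as a homogeneous system by SVD.
        A = np.vstack([
            u * P_cam[2] - P_cam[0],
            v * P_cam[2] - P_cam[1],
            xp * P_proj[2] - P_proj[0],
        ])
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]
        X = X[:3] / X[3]
        if zmin <= X[2] <= zmax:                # geometry (depth) constraint
            candidates.append((k, X))
    return candidates
```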

2.3 Color encoding and decoding

The method of color encoding with binary codes aims to reliably select the correct corresponding point among the multiple candidate points allowed in the measurement volume. The method uses color cameras, and only two color images are required.

Six binary codes for RGB channels are illustrated in Fig. 2 for six different periods. Two fringe patterns with π phase shift, shown in Fig. 3(a), are respectively encoded into the RGB channels of two color images with six binary codes repeated every six periods, as illustrated in Fig. 3(b). The intensities of the two color-encoded fringe patterns can be expressed as:

$$I_{c0}^p({x^p},{y^p}) = C({x^p},{y^p})I_0^p({x^p},{y^p})$$
$$I_{c1}^p({x^p},{y^p}) = C({x^p},{y^p})I_1^p({x^p},{y^p}), $$
where $C({x^p},{y^p})$ is the binary code for RGB channels. If the codeword can be extracted from the captured patterns, the corresponding point can be correctly selected from six candidate points based on the codeword.
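The pattern generation of Eqs. (7) and (8) can be sketched as below. The 15-pixel wavelength and projector resolution follow Sec. 3, and the code table follows Fig. 2, but the exact implementation, function name, and default `a`, `b` values are assumptions for illustration.

```python
import numpy as np

# Six RGB binary codes (Fig. 2), repeated every six fringe periods.
CODES = np.array([[1, 0, 0], [1, 1, 0], [0, 1, 0],
                  [0, 1, 1], [0, 0, 1], [1, 0, 1]], dtype=float)

def color_encoded_patterns(width=912, height=1140, lam=15, a=0.5, b=0.5):
    x = np.arange(width)
    C = CODES[(x // lam) % 6]                          # code C(x^p), shape (width, 3)
    I0 = a + b * np.cos(2 * np.pi * x / lam)           # Eq. (1)
    I1 = a + b * np.cos(2 * np.pi * x / lam - np.pi)   # Eq. (2)
    Ic0 = C * I0[:, None]                              # Eq. (7): code times fringe
    Ic1 = C * I1[:, None]                              # Eq. (8)
    # Repeat the 1D profiles down all rows -> (height, width, 3) images.
    return np.tile(Ic0, (height, 1, 1)), np.tile(Ic1, (height, 1, 1))
```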

Fig. 2. Binary codes for RGB channels.

Fig. 3. Pattern coding: (a) two fringe patterns with π phase shift, (b) two color fringe patterns, and (c) color codes.

As illustrated in Fig. 3(b), different binary codes in RGB channels result in different colors. The color information can be represented in HSI color space, which allows decoupling of the color and intensity information. The two color-encoded fringe patterns in Eqs. (7) and (8) can be rewritten as:

$$I_{ci}^p({x^p},{y^p}) = I_{ri}^p({x^p},{y^p}) + I_{gi}^p({x^p},{y^p}) + I_{bi}^p({x^p},{y^p}), $$
where $I_{ri}^p, I_{gi}^p, I_{bi}^p\;(i = 0,1)$ are the intensities of the RGB channels of the two color-encoded fringe patterns, respectively. After projecting the color-encoded fringe patterns onto an object during measurement, the captured color fringe patterns can be expressed as:
$${I_{ci}}(x,y) = {I_{ri}}(x,y) + {I_{gi}}(x,y) + {I_{bi}}(x,y),$$
where ${I_{ri}},{I_{gi}},{I_{bi}}\;(i = 0,1)$ are the intensities of the RGB channels of the two captured color fringe patterns, respectively.

For the color decoding, the captured color images can be transformed from RGB to HSI space using the following equations [32]:

$${H_i} = \left\{ \begin{array}{l} \theta ,\quad \textrm{if}\;{I_{bi}} < {I_{gi}};\\ 2\pi - \theta ,\quad \textrm{if}\;{I_{bi}} \ge {I_{gi}}; \end{array} \right.\quad \textrm{where}\quad \theta = {\cos ^{ - 1}}\left\{ {\frac{{0.5[{({I_{ri}} - {I_{gi}}) + ({I_{ri}} - {I_{bi}})} ]}}{{{{[{{{({I_{ri}} - {I_{gi}})}^2} + ({I_{ri}} - {I_{bi}})({I_{gi}} - {I_{bi}})} ]}^{1/2}}}}} \right\}$$
$${S_i} = 1 - \min ({I_{ri}},{I_{gi}},{I_{bi}})\left( {\frac{3}{{{I_{ri}} + {I_{gi}} + {I_{bi}}}}} \right)$$
$${I_{Mi}} = \frac{{{I_{ri}} + {I_{gi}} + {I_{bi}}}}{3} $$
The intensity ${I_{Mi}}$ (the mean of the three channel intensities), hue ${H_i}$, and saturation ${S_i}$ are the image components in HSI space. Note that in Eqs. (11)–(13) and the remainder of this paper, for brevity, (x, y) may be omitted for any image H, S, I in HSI space and I in RGB. The intensity components of the color fringe patterns, ${I_{M0}}(x,y)$ and ${I_{M1}}(x,y)$, are used to compute the background-normalized amplitude ${I_{Md}}(x,y)$ (Eq. (5)), which in turn is used to calculate the wrapped phase by background-normalized π-shift FTP (Eq. (6)). Hue, with range [0, 2π], is used to identify color codes as follows. The normalized hue values ($H_i^\textrm{N} = {H_i}/2\pi$) are 0 or 1, 1/6, 2/6, 3/6, 4/6, 5/6, which correspond to the binary codes 100, 110, 010, 011, 001, 101, respectively. Thus, color codes 1 to 6 can be identified, as shown in Fig. 3(c), and employed to determine the correct corresponding point among six candidate points, instead of using the binary codes. Since the intensity component is decoupled from hue in HSI space, the phase-map calculation does not depend on color code identification.
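The decoding step of Eqs. (11)–(13) and the hue-to-code mapping can be sketched as follows. The rounding-based quantization stands in for the thresholding described above and is an assumption, as are the function names.

```python
import numpy as np

def rgb_to_hue_intensity(Ir, Ig, Ib):
    """Eqs. (11) and (13): normalized hue and intensity from RGB channel images."""
    eps = 1e-9
    num = 0.5 * ((Ir - Ig) + (Ir - Ib))
    den = np.sqrt((Ir - Ig) ** 2 + (Ir - Ib) * (Ig - Ib)) + eps
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))
    H = np.where(Ib < Ig, theta, 2 * np.pi - theta)    # Eq. (11)
    IM = (Ir + Ig + Ib) / 3.0                          # Eq. (13)
    return H / (2 * np.pi), IM                         # normalized hue, intensity

def hue_to_code(Hn):
    """Quantize normalized hue levels 0 (or 1), 1/6, ..., 5/6 to color codes 1..6."""
    return np.rint(Hn * 6).astype(int) % 6 + 1
```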

The color codes in the two captured images should theoretically be the same at the same pixel. Therefore, just one of the two images would be needed to determine the color code from hue and select the correct corresponding point during surface measurement. However, hue values in low-intensity regions are error-prone, as shown in Fig. 4(a), which makes choosing suitable thresholds for normalized-hue quantization difficult. To mitigate this error, the hues of the two color images are combined before quantization. The complementary hue $H_\textrm{c}^\textrm{N}(x,y)$ is obtained by:

$$H_\textrm{c}^\textrm{N}(x,y) = \left\{ \begin{array}{l} H_0^\textrm{N}(x,y),\quad \textrm{if}\;{I_{Md}}(x,y) \le 0\\ H_1^\textrm{N}(x,y),\quad \textrm{if}\;{I_{Md}}(x,y) > 0 \end{array} \right.,$$
where $H_0^\textrm{N}(x,y)$ and $H_1^\textrm{N}(x,y)$ are the normalized hues of the first and second color images, and ${I_{Md}}(x,y)$ is the background normalized amplitude (Fig. 4(b)). The complementary hue, shown in Fig. 4(c), has less noise than the normalized hues of the two color images (Fig. 4(a)), and is therefore used for color code identification. Thresholds can be applied to the complementary hue to quantize it to hue values 0 or 1, 1/6, 2/6, 3/6, 4/6, 5/6, which correspond to color codes 1 to 6 (Fig. 4(d)). However, the complementary hue at the fringe-period boundaries may not be sharp due to residual noise (Fig. 4(c)). Color code errors may thus still occur at the boundaries of adjacent color codes after quantization, resulting in incorrect color code identification. Since the phase map still has high accuracy (further explained in Sec. 2.4), it can be used to correct the color code at the boundaries using a correction method as in [24] (Fig. 4(e)). When a mismatch of the phase and color code boundaries occurs, the color code is corrected to make the color code boundary match the phase boundary. For example, as seen in Fig. 4(f), the boundaries of the phase (black) and color code before correction (blue) are mismatched between pixels 209 and 210. The color code at pixel 209 is corrected from 4 (blue) to 3 (red), resulting in matched phase and color code boundaries. The final color code after correction (red) is shown in Fig. 4(e).
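A simplified sketch of the complementary-hue selection (Eq. (14)) and a boundary correction along one image row follows. The snap-to-nearest-phase-wrap rule is a simplified stand-in for the correction method of [24], under the assumption that code boundaries should coincide with the 2π phase wraps.

```python
import numpy as np

def complementary_hue(H0n, H1n, IMd):
    """Eq. (14): take the hue of whichever image is away from its fringe trough."""
    return np.where(IMd <= 0, H0n, H1n)

def correct_codes_row(code, phi):
    """Shift each code transition in a row to the nearest phase-wrap position."""
    code = code.copy()
    jumps = np.where(np.abs(np.diff(phi)) > np.pi)[0]   # phase-wrap positions
    if jumps.size == 0:
        return code
    for t in np.where(np.diff(code) != 0)[0]:           # code boundary positions
        j = jumps[np.argmin(np.abs(jumps - t))]         # nearest phase boundary
        if t < j:                                       # code changed too early
            code[t + 1 : j + 1] = code[t]
        elif t > j:                                     # code changed too late
            code[j + 1 : t + 1] = code[t + 1]
    return code
```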

Fig. 4. Color code identification: (a) normalized hue of the two captured images, (b) background normalized amplitude, (c) complementary hue, (d) color code before correction, (e) color code before and after correction with wrapped phase, and (f) correction of color code at boundary using phase.

2.4 Effect of color crosstalk on phase calculation and color code identification

Color cameras are designed with overlap between the RGB color channels to cover the entire color spectrum. Color crosstalk between RGB channels is one of the main sources of phase noise and generally cannot be neglected when using color fringe patterns in FPP. If there is no color crosstalk, the captured color fringe patterns can be expressed as:

$$I_{ci}^0(x,y) = I_{ri}^0(x,y) + I_{gi}^0(x,y) + I_{bi}^0(x,y).$$
When taking the color crosstalk into account, the captured color fringe patterns in Eq. (10) can be expressed using the Caspi model [33]:
$$\left[ {\begin{array}{c} {{I_{ri}}}\\ {{I_{gi}}}\\ {{I_{bi}}} \end{array}} \right] = \left[ {\begin{array}{ccc} {{c_{rr}}}&{{c_{rg}}}&{{c_{rb}}}\\ {{c_{gr}}}&{{c_{gg}}}&{{c_{gb}}}\\ {{c_{br}}}&{{c_{bg}}}&{{c_{bb}}} \end{array}} \right]\left[ {\begin{array}{c} {I_{ri}^0}\\ {I_{gi}^0}\\ {I_{bi}^0} \end{array}} \right],$$
where the $3 \times 3$ matrix is the color-crosstalk matrix. For different binary codes, the distribution of the intensities $I_{ci}^p({x^p},{y^p})$ across the R, G, and B channels will differ in Eq. (9), as well as in Eq. (16). Taking binary code 110 as an example, the intensities in Eqs. (1) and (2) with π phase shift are encoded in the R and G channels, and the color-encoded fringe patterns are expressed as:
$$I_{ci}^p({x^p},{y^p}) = I_{ri}^p({x^p},{y^p}) + I_{gi}^p({x^p},{y^p}),$$
where $I_{ri}^p({x^p},{y^p}) = I_{gi}^p({x^p},{y^p}) = I_i^p({x^p},{y^p})$. Equation (16) can be rewritten as:
$$\begin{array}{l} {I_{ri}} = {c_{rr}}I_{ri}^0 + {c_{rg}}I_{gi}^0\\ {I_{gi}} = {c_{gr}}I_{ri}^0 + {c_{gg}}I_{gi}^0\\ {I_{bi}} = {c_{br}}I_{ri}^0 + {c_{bg}}I_{gi}^0 \end{array}.$$
After transforming the captured color fringe patterns from RGB color space into HSI color space, the intensity component can be expressed as:
$${I_{Mi}} = \frac{1}{3}({{I_{ri}} + {I_{gi}} + {I_{bi}}} )= \frac{1}{3}[{({{c_{rr}}I_{ri}^0 + {c_{rg}}I_{gi}^0} )+ ({{c_{gr}}I_{ri}^0 + {c_{gg}}I_{gi}^0} )+ ({{c_{br}}I_{ri}^0 + {c_{bg}}I_{gi}^0} )} ]. $$
In the present method, the color of the object surface is assumed to be neutral, and thus the reflectivity for RGB colors is approximately the same ($I_{ri}^0 = I_{gi}^0 = {I_i}$ for binary code 110), and the captured π-shift fringe patterns can be further expressed as:
$${I_{M0}} = \frac{1}{3}({{c_{rr}} + {c_{rg}} + {c_{gr}} + {c_{gg}} + {c_{br}} + {c_{bg}}} ){I_0}$$
$${I_{M1}} = \frac{1}{3}({{c_{rr}} + {c_{rg}} + {c_{gr}} + {c_{gg}} + {c_{br}} + {c_{bg}}} ){I_1}.$$
Substituting Eqs. (3) and (4) into the above equations:
$${I_{M0}}(x,y) = A^{\prime}(x,y) + B^{\prime}(x,y)\cos \varphi (x,y)$$
$${I_{M1}}(x,y) = A^{\prime}(x,y) - B^{\prime}(x,y)\cos \varphi (x,y),$$
where
$$A^{\prime}(x,y) = \frac{1}{3}A(x,y)({c_{rr}} + {c_{rg}} + {c_{gr}} + {c_{gg}} + {c_{br}} + {c_{bg}})$$
$$B^{\prime}(x,y) = \frac{1}{3}B(x,y)({c_{rr}} + {c_{rg}} + {c_{gr}} + {c_{gg}} + {c_{br}} + {c_{bg}}).$$

When background-normalized π-shift FTP is implemented using ${I_{M0}}(x,y)$ and ${I_{M1}}(x,y)$, the parameters of the color crosstalk will cancel in computation of background normalized amplitude ${I_{Md}}(x,y)$ (using Eq. (5)) prior to wrapped phase calculation (Eq. (6)). Similarly, the intensities of the captured fringe patterns encoded with the other binary codes can be expressed in the same format as Eqs. (22) and (23). Therefore, the full-field phase calculation is independent of the color crosstalk.
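This cancellation is easy to verify numerically. The sketch below, with an arbitrary assumed crosstalk matrix (not a calibrated one), shows that ${I_{Md}}$ is identical with and without crosstalk for binary code 110; all values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
Cx = np.eye(3) + 0.2 * rng.random((3, 3))        # assumed crosstalk matrix
x = np.linspace(0, 4 * np.pi, 500)
I0, I1 = 0.5 + 0.4 * np.cos(x), 0.5 - 0.4 * np.cos(x)
code = np.array([1.0, 1.0, 0.0])                 # binary code 110 -> R and G

for label, M in (("ideal", np.eye(3)), ("crosstalk", Cx)):
    rgb0 = M @ (code[:, None] * I0)              # Eq. (16) applied per pixel
    rgb1 = M @ (code[:, None] * I1)
    IM0, IM1 = rgb0.mean(axis=0), rgb1.mean(axis=0)   # Eq. (13)
    IMd = (IM0 - IM1) / (IM0 + IM1)              # Eq. (5): crosstalk sum cancels
    print(label, np.round(IMd[:3], 6))           # identical in both cases
```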

To analyze the effect of color crosstalk on color code identification, again using binary code 110 as an example, Eq. (18) can be substituted into Eq. (11), and the normalized hue under color crosstalk can be expressed as:

$$H_i^\textrm{N} = \frac{1}{{2\pi }}{\cos ^{ - 1}}\frac{{0.5({2{c_{rr}} + 2{c_{rg}} - {c_{gr}} - {c_{gg}} - {c_{br}} - {c_{bg}}} )}}{{\sqrt {{{({{c_{rr}} + {c_{rg}} - {c_{gr}} - {c_{gg}}} )}^2} + ({{c_{rr}} + {c_{rg}} - {c_{br}} - {c_{bg}}} )({{c_{gr}} + {c_{gg}} - {c_{br}} - {c_{bg}}} )} }}. $$
Similarly, the normalized hue $H_i^\textrm{N}$ for the other binary codes can be expressed in terms of the parameters of the color-crosstalk matrix. The influence of color crosstalk on hue can be analyzed using the color-crosstalk matrix, which is determined by calibration in advance of measurement [33]. The normalized hue over the central 400${\times}$400 pixel region of an image that would be captured during a measurement is estimated using Eq. (26) for binary code 110 and similar equations for the five other binary codes, giving six normalized hue values for every pixel. The combined histogram of the estimated normalized hue for all six binary codes (Fig. 5) has six distinct levels of hue with no overlap. Although the color crosstalk does have some effect on the hue values (a wider distribution for each code), the color codes can still be easily identified using thresholds. Thus, color crosstalk does not prevent correct determination of the color codes.
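The per-code hue estimation can be sketched as follows. The crosstalk matrix values here are assumed for illustration; in practice the calibrated matrix of [33] would be used.

```python
import numpy as np

CODES = np.array([[1, 0, 0], [1, 1, 0], [0, 1, 0],
                  [0, 1, 1], [0, 0, 1], [1, 0, 1]], dtype=float)
Cx = np.array([[0.90, 0.08, 0.02],
               [0.10, 0.85, 0.05],
               [0.03, 0.12, 0.85]])              # assumed crosstalk matrix

def normalized_hue(rgb):
    """Eq. (11) normalized to [0, 1] for a single RGB triple."""
    Ir, Ig, Ib = rgb
    num = 0.5 * ((Ir - Ig) + (Ir - Ib))
    den = np.sqrt((Ir - Ig) ** 2 + (Ir - Ib) * (Ig - Ib)) + 1e-12
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))
    H = theta if Ib < Ig else 2 * np.pi - theta
    return H / (2 * np.pi)

for k, c in enumerate(CODES, start=1):
    print(f"code {k}: ideal {normalized_hue(c):.3f}, "
          f"with crosstalk {normalized_hue(Cx @ c):.3f}")
```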

Fig. 5. Combined histogram of normalized hue computed using color crosstalk parameters for all binary codes.

3. Experiments and results

To verify the new high-frequency color-encoded FPP geometry-constraint method, a multi-view FPP system was set up, as in Fig. 1. The system included a digital-light-processing (DLP) projector (Wintech PRO4500) with 912×1140 resolution and two 3-CMOS color cameras (JAI AP-1600 T) with 1456×1088 resolution and 12-mm focal-length lenses. The baselines between the left camera and projector and between the two cameras were 79.1 mm and 221.5 mm, respectively. The system was calibrated by the method in [25], and 3D surface reconstruction was performed using stereovision. The measurement depth range was set to 400 mm.

3.1 Measurement accuracy evaluation

The use of six color codes in the new method permits as many as six candidate points in the measurement depth range. To compare the measurement accuracy when different numbers of candidate points are allowed in the same measurement depth range, the smallest wavelength (highest frequency) was first calculated for 1 to 6 permitted candidate points, based on the geometry-constraint model in [24]. Measurements of a white flat plate were then conducted by projecting and capturing two color-encoded π-shift fringe patterns, with the smallest wavelengths 84, 42, 28, 21, 17, and 15 pixels corresponding to 1, 2, 3, 4, 5, and 6 candidate points, respectively. The root-mean-square (RMS) error decreased with increasing number of permitted candidate points (Fig. 6), to 0.116 mm at six candidate points. The new method thus allows increasing the number of candidate points beyond two, permitting use of higher-frequency fringe patterns and achieving higher measurement accuracy. Even with a measurement depth range as large as 400 mm, a high frequency (15-pixel wavelength) was reached with high measurement accuracy. In the remainder of this paper, all measurements with the new method used the highest permitted frequency (15-pixel wavelength).
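For reference, an RMS flatness error for a plate reconstruction is typically computed by least-squares plane fitting, as in this sketch; the authors' exact evaluation procedure is not specified, so this is an assumption.

```python
import numpy as np

def plane_rms(pts):
    """RMS of point-to-plane distances for a (N, 3) point cloud."""
    centroid = pts.mean(axis=0)
    # Best-fit plane normal = singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(pts - centroid, full_matrices=False)
    normal = Vt[-1]
    dist = (pts - centroid) @ normal             # signed point-plane distances
    return np.sqrt((dist ** 2).mean())
```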

Fig. 6. Measurement accuracy for a flat plate with wavelengths corresponding to number of candidate points.

The results of the measurement of the plate using the highest frequency (15-pixel wavelength) are shown in Fig. 7. One of the captured color images is shown in Fig. 7(a). The wrapped phase map calculated by background-normalized FTP using the intensity component of the HSI color images is shown in Fig. 7(b), and the color code extracted from the complementary hue after correction is shown in Fig. 7(c). The wrapped phase for the pixels indicated by the white line in Fig. 7(a), and the corresponding color code both before and after correction, are plotted in Fig. 7(d). The color codes are correctly extracted, even at the boundaries of adjacent periods, by using the correction method in Sec. 2.3. Finally, the surface of the plate is successfully reconstructed, as shown in Fig. 7(e).

Fig. 7. Measurement results for a flat plate: (a) one of the captured color images (white line overlaid), (b) wrapped phase map, (c) extracted color code, (d) wrapped phase and corresponding color code (before and after correction) for white line in (a), and (e) 3D reconstruction.

To further evaluate the performance of the new color-encoded FPP method, experiments were carried out using the background-modulated modified FTP method [25] with its highest permitted number of candidate points (two), the 200 mm depth range used in [25], and the corresponding smallest wavelength of 21 pixels, based on the geometry-constraint model in [24]; and the new method with six candidate points, a 400 mm depth range, and the corresponding smallest wavelength of 15 pixels. Measurements were performed on a double-hemisphere object (Fig. 8(a)) located approximately at the middle of the measurement depth range, using the same experimental setup for both methods. For measurement using the new method, one of the captured color images, the wrapped phase map, the color code, and the 3D shape reconstruction are shown in Figs. 8(b)–(e), respectively. The 3D reconstruction has some errors evident at high-gradient regions close to the edges of the hemispheres, as commonly occurs with FTP methods [25]. The accuracy of the measurement was computed by least-squares fitting of a sphere to each hemisphere point cloud. The measurement accuracies of the two methods, computed based on the radii of the two hemispheres, the distance between hemisphere centers, mean absolute error (MAE), RMS error, and sphere-fitting standard deviation (SD), are compared in Table 1. The hemispheres had true radii of 50.800 ± 0.015 mm and a distance between centers of 120.000 ± 0.005 mm. The performance of the new color-encoded FPP method was similar to that of the background-modulated modified FTP (for both odd and even sets), with the added advantage of double the measurement depth range. The ability to achieve accurate measurement for discontinuous surfaces and over the full enlarged measurement depth is demonstrated in further experiments, explained below.
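The sphere-fitting evaluation can be sketched with a standard linear least-squares fit, an illustration of the least-squares sphere fitting mentioned above (names and the exact statistics returned are assumptions).

```python
import numpy as np

def fit_sphere(pts):
    """Fit a sphere to (N, 3) points via x.x = 2 c.x + (r^2 - |c|^2), solved linearly."""
    A = np.hstack([2 * pts, np.ones((len(pts), 1))])
    b = (pts ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center, d = sol[:3], sol[3]
    radius = np.sqrt(d + center @ center)
    residuals = np.linalg.norm(pts - center, axis=1) - radius  # radial errors
    return center, radius, np.abs(residuals).mean(), residuals.std()  # MAE, SD
```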

Fig. 8. Measurement result of a double-hemisphere: (a) texture image, (b) one of the captured color images, (c) wrapped phase map, (d) extracted color code, and (e) 3D reconstruction.

Table 1. Comparison of experimental results for the new color-encoded FPP method and background-modulated modified FTP method

3.2 Isolated-object measurement

To demonstrate the ability of the new method to perform measurement of discontinuous surfaces, as possible with temporal phase unwrapping methods, two spatially isolated objects (cylinder and manikin head, Fig. 9(a)) were measured using the new color-encoded FPP geometry-constraint method. One of the captured color images of the isolated objects is shown in Fig. 9(b) and the 3D reconstruction in Fig. 9(c). The new method permitted measurement of isolated objects with discontinuous surfaces.

Fig. 9. Measurement of two isolated objects: (a) texture image, (b) one captured image of a cylinder and manikin head, and (c) 3D reconstruction.

3.3 Measurement over large depth range

As discussed in Sec. 3.1, the new method and the background-modulated modified FTP method [25] had similar measurement accuracy, but the number of candidate points in the latter method was limited to only two, and the depth range was only 200 mm. If the measurement depth range were enlarged to 400 mm for the background-modulated modified FTP, the wavelength would have to be increased, which would lead to higher errors. In this paper, since six candidate points are allowed in the measurement volume owing to the designed color coding of the new method, the measurement depth range can be enlarged to 400 mm without sacrificing measurement accuracy. To verify the performance of the new method over the entire measurement depth range, a cylinder was measured at three different depths. The 3D reconstruction of the cylinder at the three positions (Fig. 10) shows that objects located over the large depth range can be reconstructed successfully using the new method.

Fig. 10. Reconstruction results of a cylinder at three depths.

To further evaluate the measurement accuracy over the large depth range, a measurement was performed simultaneously on two double-hemispheres positioned near the extremities of the measurement depth range. One of the captured color images is shown in Fig. 11(a), the 3D object-surface reconstruction of the two double-hemispheres in Fig. 11(b), and the depth of the near and far 3D reconstructed surfaces in Figs. 11(c) and (d), respectively. The blank regions in the 3D reconstruction were invalid points, mainly due to occlusion, that were discarded based on the color code. A comparison of the measurement accuracy of the near and far double-hemispheres is shown in Table 2. The new method generally performed well over the entire depth range, with the far double-hemisphere having slightly higher errors than the near object. The measurement accuracies of both double-hemispheres were largely consistent with the results for measurement of the double-hemisphere in the middle of the depth range (Sec. 3.1).

Fig. 11. Measurement of two double-hemispheres: (a) one of the captured color images, (b) 3D object-surface reconstruction; depth of 3D reconstructed surface of: (c) near double-hemisphere, and (d) far double-hemisphere.

Table 2. Comparison of measurement results for the near and far double-hemispheres

4. Conclusion

A new method of high-frequency color-encoded fringe projection profilometry was developed to enlarge the depth range in multi-view geometry-constraint based 3D shape measurement. The designed color coding allows reliable candidate-point selection even when six candidate points are in the measurement volume. This permits a large measurement depth range even when high-frequency fringe patterns are employed. Experimental results demonstrated that high measurement accuracy could be achieved over a large depth range. Mathematical analyses of the effect of color crosstalk on phase calculation and color code identification demonstrated that the phase calculation does not depend on color crosstalk, and that color crosstalk has little effect on color code identification. The color codes could be extracted from the complementary hue, and the phase map, computed independently of color crosstalk and color code, could then be used to correct the color code at the boundaries of adjacent periods. The new method was demonstrated for 3D shape measurement of isolated objects and requires only two high-frequency color-encoded fringe patterns. The method may have limited application in measuring colored objects, since the colored-surface reflectivity would likely affect color code identification; developing the method for colored surfaces needs further investigation.

Funding

Natural Sciences and Engineering Research Council of Canada; Special Grand National Project of China (2009ZX02204-008); China Scholarship Council.

Disclosures

The authors declare no conflicts of interest.

References

1. S. S. Gorthi and P. Rastogi, “Fringe projection techniques: whither we are?” Opt. Lasers Eng. 48(2), 133–140 (2010). [CrossRef]  

2. M. Takeda and K. Mutoh, “Fourier transform profilometry for the automatic measurement of 3D object shapes,” Appl. Opt. 22(24), 3977–3982 (1983). [CrossRef]  

3. X. Su and W. Chen, “Fourier transform profilometry: a review,” Opt. Lasers Eng. 35(5), 263–284 (2001). [CrossRef]  

4. C. Zuo, S. Feng, L. Huang, T. Tao, W. Yin, and Q. Chen, “Phase shifting algorithms for fringe projection profilometry: A review,” Opt. Lasers Eng. 109, 23–59 (2018). [CrossRef]  

5. C. Chen, Y. Wan, and Y. Cao, “Instability of projection light source and real-time phase error correction method for phase-shifting profilometry,” Opt. Express 26(4), 4258–4270 (2018). [CrossRef]  

6. X. Su and W. Chen, “Reliability-guided phase unwrapping algorithm: a review,” Opt. Lasers Eng. 42(3), 245–261 (2004). [CrossRef]  

7. M. Zhao, L. Huang, Q. Zhang, X. Su, A. Asundi, and K. Qian, “Quality-guided phase unwrapping technique: comparison of quality maps and guiding strategies,” Appl. Opt. 50(33), 6214–6224 (2011). [CrossRef]  

8. S. Zhang, “Absolute phase retrieval methods for digital fringe projection profilometry: A review,” Opt. Lasers Eng. 107, 28–37 (2018). [CrossRef]  

9. C. Zuo, L. Huang, M. Zhang, Q. Chen, and A. Asundi, “Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review,” Opt. Lasers Eng. 85, 84–103 (2016). [CrossRef]  

10. Z. Wu, W. Guo, and Q. Zhang, “High-speed three-dimensional shape measurement based on shifting Gray-code light,” Opt. Express 27(16), 22631–22644 (2019). [CrossRef]  

11. Y. Wang and S. Zhang, “Novel phase-coding method for absolute phase retrieval,” Opt. Lett. 37(11), 2067–2069 (2012). [CrossRef]  

12. X. He, D. Zheng, K. Qian, and G. Christopoulos, “Quaternary gray-code phase unwrapping for binary fringe projection profilometry,” Opt. Lasers Eng. 121, 358–368 (2019). [CrossRef]  

13. Y. Wang, S. Yang, and X. Gou, “Modified Fourier transform method for 3D profile measurement without phase unwrapping,” Opt. Lett. 35(5), 790–792 (2010). [CrossRef]  

14. Y. An, J. S. Hyun, and S. Zhang, “Pixel-wise absolute phase unwrapping using geometric constraints of structured light system,” Opt. Express 24(16), 18445–18459 (2016). [CrossRef]  

15. J. Dai, Y. An, and S. Zhang, “Absolute three-dimensional shape measurement with a known object,” Opt. Express 25(9), 10384–10396 (2017). [CrossRef]  

16. Y. An and S. Zhang, “Pixel-by-pixel absolute phase retrieval assisted by an additional three-dimensional scanner,” Appl. Opt. 58(8), 2033–2041 (2019). [CrossRef]  

17. T. Cheng, Q. Du, and Y. Jiang, “Color fringe projection profilometry using geometric constraints,” Opt. Commun. 398, 39–43 (2017). [CrossRef]  

18. X. Liu and J. Kofman, “Background and amplitude encoded fringe patterns for 3D surface-shape measurement,” Opt. Lasers Eng. 94, 63–69 (2017). [CrossRef]  

19. C. Bräuer-Burchardt, C. Munkelt, M. Heinze, P. Kühmstedt, and G. Notni, “Using geometric constraints to solve the point correspondence problem in fringe projection based 3D measuring systems,” in International Conference on Image Analysis and Processing, G. Maino and G. L. Foresti, eds. (2011), pp. 265–274.

20. Z. Li, K. Zhong, Y. F. Li, X. Zhou, and Y. Shi, “Multiview phase shifting: a full-resolution and high-speed 3D measurement framework for arbitrary shape dynamic objects,” Opt. Lett. 38(9), 1389–1391 (2013). [CrossRef]  

21. K. Zhong, Z. Li, Y. Shi, C. Wang, and Y. Lei, “Fast phase measurement profilometry for arbitrary shape objects without phase unwrapping,” Opt. Lasers Eng. 51(11), 1213–1222 (2013). [CrossRef]  

22. W. Lohry, V. Chen, and S. Zhang, “Absolute three-dimensional shape measurement using coded fringe patterns without phase unwrapping or projector calibration,” Opt. Express 22(2), 1287–1301 (2014). [CrossRef]  

23. T. Tao, Q. Chen, J. Da, S. Feng, Y. Hu, and C. Zuo, “Real-time 3-D shape measurement with composite phase-shifting fringes and multi-view system,” Opt. Express 24(18), 20253–20269 (2016). [CrossRef]  

24. X. Liu and J. Kofman, “High-frequency background modulation fringe patterns based on a fringe-wavelength geometry-constraint model for 3D surface-shape measurement,” Opt. Express 25(14), 16618–16628 (2017). [CrossRef]  

25. X. Liu and J. Kofman, “Real-time 3D surface-shape measurement using background-modulated modified Fourier transform profilometry with geometry-constraint,” Opt. Lasers Eng. 115, 217–224 (2019). [CrossRef]  

26. X. Liu, T. Tao, Y. Wan, and J. Kofman, “Real-time motion-induced-error compensation in 3D surface-shape measurement,” Opt. Express 27(18), 25265–25279 (2019). [CrossRef]  

27. T. Tao, Q. Chen, S. Feng, J. Qian, Y. Hu, L. Huang, and C. Zuo, “High-speed real-time 3D shape measurement based on adaptive depth constraint,” Opt. Express 26(17), 22440–22456 (2018). [CrossRef]  

28. C. Zuo, T. Tao, S. Feng, L. Huang, A. Asundi, and Q. Chen, “Micro Fourier Transform Profilometry (µFTP): 3D shape measurement at 10,000 frames per second,” Opt. Lasers Eng. 102, 70–91 (2018). [CrossRef]  

29. X. Han and P. Huang, “Combined stereovision and phase shifting method: a new approach for 3-D shape measurement,” Proc. SPIE 7389, 73893C (2009). [CrossRef]  

30. Z. Li, Y. Shi, C. Wang, and Y. Wang, “Accurate calibration method for a structured light system,” Opt. Eng. 47(5), 053604 (2008). [CrossRef]  

31. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000). [CrossRef]  

32. Y. Wan, Y. Cao, C. Chen, Y. Wang, G. Fu, L. Wang, and C. Li, “Active phase error suppression for color phase-shifting fringe projection based on hue pre-correction,” Opt. Laser Technol. 118, 102–108 (2019). [CrossRef]  

33. D. Caspi, N. Kiryati, and J. Shamir, “Range imaging with adaptive color structured light,” IEEE Trans. Pattern Anal. Mach. Intell. 20(5), 470–480 (1998). [CrossRef]  
