Accurate calibration for crosstalk coefficient based on orthogonal color phase-shifting pattern


Abstract

Calibrating the crosstalk coefficients for de-crosstalk in color fringe projection profilometry is an essential step toward high-accuracy measurement. In this paper, a novel approach for calibrating the crosstalk matrix of a color camera is proposed. A wrapped phase error model introduced by color crosstalk in orthogonal patterns is established. Unlike existing calibration methods that rely on calculating the modulation of the crosstalk channel, our method obtains the crosstalk coefficients from the phase error. By projecting the designed color orthogonal phase-shifting fringe patterns onto a white plate, phase-shifting fringe patterns in both the horizontal and vertical directions can be separated from the captured images. The coefficients between different channels are calibrated by fitting the error relationship between the wrapped phase containing crosstalk and the standard one. Coefficient fitting simulations and experimental validations, including shape measurement of a white plate and distance measurement of a step block, are carried out to verify the effectiveness of the proposed method.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

The proliferation of color camera imaging technology has significantly facilitated its use in digital three-dimensional (3D) reconstruction. In particular, the simultaneous capture of the RGB channels by a color camera has improved the efficiency of acquiring object information. To achieve fast and precise measurements, numerous color-encoded 3D reconstruction techniques have been proposed [1–4]. Among the available methodologies, color fringe projection profilometry (CFPP) has emerged as a prominent choice due to its straightforward measurement process and impressive robustness in complex environments [5].

However, color crosstalk still poses challenges that must be solved before color cameras can be exploited further. The presence of color crosstalk can significantly impact measurement accuracy, particularly in cameras that rely on a Bayer array with a single CMOS or CCD sensor to acquire RGB information. The obtained RGB information is inevitably contaminated by interference from the other channels, which reduces the accuracy of the pixel intensities and further introduces errors into the final measurement results [6,7]. Thus, precise calibration of the color patterns is particularly crucial in CFPP.

Numerous solutions have been proposed to resolve the issue of color crosstalk. According to their underlying principles, these approaches can be broadly classified into two categories: phase compensation methods and pixel intensity correction methods. Several investigations have addressed crosstalk through phase compensation. For mild to moderate crosstalk, additional phase-shifting (PS) fringe patterns were projected to compensate for the phase error stemming from color crosstalk [8,9]. Based on fringe pattern normalization and an effective iterative phase retrieval algorithm, color crosstalk can be alleviated [10], but it is difficult to obtain a convergent result in some situations. In the case of severe crosstalk, the squeezing interferometry phase-demodulation method has been found to be effective [11]. Recently, by analyzing the unwrapped phase error resulting from crosstalk, the Hilbert transform has been employed for color pattern decoupling [12–14]. However, the transformed pattern exhibits significant spectral leakage at the image edges.

Based on pixel intensity correction, several effective solutions have been developed as well. To investigate the degree of color coupling, pure red, green, and blue PS fringe patterns were captured to study the coupling effect from the pixel modulation, and defined equations were then employed to calculate the compensation [15]. Although this method can achieve de-crosstalk using the captured patterns, its application in precision measurement remains challenging. Also based on the pixel modulation, the crosstalk matrix model was quantitatively analyzed by calculating the modulation ratio, and specific crosstalk coefficients were obtained in Zhang's report [16]. Afterward, a frame accumulation technique was reported to improve the signal-to-noise ratio (SNR) of the acquired pure R, G, and B images, enabling accurate determination of the crosstalk matrix [17]; however, the method requires capturing 1000 frames, which is inconvenient for calibration. Some researchers have also realized color de-crosstalk by calibrating the crosstalk equations [18,19], which is extremely time-consuming because non-linear equations must be solved. Besides, artificial neural networks have been applied to alleviate crosstalk [20,21], but they require massive training samples and are easily affected by changes in the environment.

In this paper, a novel and accurate method for calibrating the crosstalk coefficients using orthogonal color fringe patterns and an analysis of the wrapped phase error is proposed. The novelty of the proposed method lies in the skillful utilization of orthogonal color PS fringe patterns to calibrate the crosstalk coefficients by fitting the phase error between the wrapped phase maps of the different channel patterns and the standard ones. In addition, a related phase compensation is presented to correct the wrapped phase error at some special points, so that accurate fitting results can be obtained. The simulations and experiments indicate that our method is robust against random noise and effectively enhances the accuracy of coefficient calibration, leading to improved accuracy in 3D measurements compared with conventional methods. The rest of this paper is organized as follows. Section 2 introduces the calibration principle of the proposed method. Section 3 and Section 4 present simulation and experimental results, respectively, to validate the effectiveness of the proposed method. Finally, Section 5 summarizes the paper.

2. Principle

2.1 Wrapped phase calculation

In a typical CFPP system, a series of sinusoidal fringe patterns are projected onto the measured object and then captured by a camera. Generally, the intensity of a separated single-channel fringe pattern can be described as

$$I_i(x,y) = a(x,y) + b(x,y)\cos[\phi(x,y) + \delta_i],$$
where $(x,y)$ is the pixel coordinate in the camera, $a(x,y)$ is the background intensity, $b(x,y)$ is the modulation intensity, $\phi(x,y)$ is the wrapped phase, and $i$ is the index of the N-step PS fringe patterns, ranging over $[0, N-1]$. $\delta_i = 2\pi i / N$ denotes the phase shift. The wrapped phase of the patterns can be solved as shown in Eq. (2):
$$\phi(x,y) = -\arctan\left[\frac{\sum_{i=0}^{N-1} I_i(x,y)\sin(\delta_i)}{\sum_{i=0}^{N-1} I_i(x,y)\cos(\delta_i)}\right].$$
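As a concrete illustration of Eqs. (1) and (2), the following sketch (not the authors' code; the image size, fringe frequency, and intensity values are assumed toy parameters) generates an N-step phase-shifting sequence and recovers the wrapped phase with a four-quadrant arctangent.

```python
# A minimal sketch of Eqs. (1)-(2), assuming toy values for a, b, the fringe
# frequency, and the image size; it is not the paper's implementation.
import numpy as np

def wrapped_phase(patterns):
    """Wrapped phase of N phase-shifted images with shifts delta_i = 2*pi*i/N (Eq. 2)."""
    patterns = np.asarray(patterns, dtype=float)
    n = patterns.shape[0]
    deltas = 2 * np.pi * np.arange(n) / n
    num = np.tensordot(np.sin(deltas), patterns, axes=(0, 0))
    den = np.tensordot(np.cos(deltas), patterns, axes=(0, 0))
    return -np.arctan2(num, den)                      # wrapped to (-pi, pi]

if __name__ == "__main__":
    h, w, freq, n_steps = 128, 256, 8, 4
    x = np.arange(w)
    phi_true = 2 * np.pi * freq * x / w               # ideal phase of a vertical fringe
    patterns = [np.tile(128 + 100 * np.cos(phi_true + 2 * np.pi * i / n_steps), (h, 1))
                for i in range(n_steps)]
    phi = wrapped_phase(patterns)
    residual = np.angle(np.exp(1j * (phi - phi_true)))  # compare modulo 2*pi
    print("max |phase error| =", np.abs(residual).max())
```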

2.2 Color crosstalk matrix model

The principle of the crosstalk matrix model is illustrated in Fig. 1. Because of the color filtering and demosaicing processes in the color camera, the obtained RGB pixel intensities are always coupled with interference from the other channels. The mathematical model can be described as [22]

$$\begin{bmatrix} I_{R'} \\ I_{G'} \\ I_{B'} \end{bmatrix} = K_c \begin{bmatrix} I_R \\ I_G \\ I_B \end{bmatrix} = \begin{bmatrix} c_{rr} & c_{gr} & c_{br} \\ c_{rg} & c_{gg} & c_{bg} \\ c_{rb} & c_{gb} & c_{bb} \end{bmatrix} \begin{bmatrix} I_R \\ I_G \\ I_B \end{bmatrix},$$
where $I_{R'}$, $I_{G'}$, and $I_{B'}$ are the actually captured R, G, and B intensities, respectively; $I_R$, $I_G$, and $I_B$ denote the ideal RGB intensities without color crosstalk; and $c_{mn}$ ($m, n = r, g, b$) are the crosstalk coefficients between different channels, where $c_{mn}$ specifically represents the crosstalk ratio of the $m$ component in the output $n$ component. Since the R, G, and B channels do not cause crosstalk to themselves, $c_{rr}$, $c_{gg}$, and $c_{bb}$ are normally set to 1, which means only the remaining six parameters require calibration.

Fig. 1. The color crosstalk coefficient model.

After obtaining all the coefficients, the pure intensity of each component can be calculated by the inverse operation, as shown in Eq. (4):

$$\begin{bmatrix} I_R \\ I_G \\ I_B \end{bmatrix} = K_c^{-1} \begin{bmatrix} I_{R'} \\ I_{G'} \\ I_{B'} \end{bmatrix}.$$
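As a sketch of how Eq. (4) is applied in practice (not the authors' code; the matrix values below simply reuse the illustrative coefficients of the simulation in Section 3), every captured RGB pixel is multiplied by the inverse of the calibrated crosstalk matrix:

```python
# A minimal sketch of Eq. (4): correct a captured RGB image with inv(K_c).
# K_c here reuses the illustrative coefficients from the simulation in Section 3.
import numpy as np

K_c = np.array([[1.000, 0.100, 0.015],
                [0.070, 1.000, 0.030],
                [0.200, 0.050, 1.000]])

def decrosstalk(img_rgb, K_c):
    """img_rgb: H x W x 3 captured image; returns the crosstalk-corrected channels."""
    K_inv = np.linalg.inv(K_c)
    return np.einsum('ij,hwj->hwi', K_inv, np.asarray(img_rgb, dtype=float))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ideal = rng.uniform(0, 255, size=(4, 4, 3))        # toy "pure" channels
    captured = np.einsum('ij,hwj->hwi', K_c, ideal)    # simulate the coupling of Eq. (3)
    recovered = decrosstalk(captured, K_c)
    print("max recovery error:", np.abs(recovered - ideal).max())
```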

2.3 Obtaining crosstalk coefficient

In our method, the crosstalk coefficients are obtained from orthogonal PS fringe patterns. The procedure of the proposed method is shown in Fig. 2. Twelve pre-generated orthogonal PS patterns are projected onto a white plate. In this paper, the frequency and phase shift of the horizontal patterns are the same as those of the vertical patterns; this is not a requirement, but it makes it convenient to generate the corresponding PS patterns and to analyze the phase error. The patterns in the two directions are assigned to the two color channels whose crosstalk coefficient is to be calibrated. Once the images are captured by the color camera, the vertical and horizontal patterns can be separated from the orthogonal fringe pattern. Because only two colors are used in the pattern, the crosstalk coefficient model can be simplified to

$$\begin{bmatrix} I_{m'} \\ I_{n'} \end{bmatrix} = k_{mn} \begin{bmatrix} I_m \\ I_n \end{bmatrix} = \begin{bmatrix} c_{mm} & c_{nm} \\ c_{mn} & c_{nn} \end{bmatrix} \begin{bmatrix} I_m \\ I_n \end{bmatrix} = \begin{bmatrix} 1 & c_{nm} \\ c_{mn} & 1 \end{bmatrix} \begin{bmatrix} I_m \\ I_n \end{bmatrix},$$
where $k_{mn}$ denotes the crosstalk matrix of any two chosen channels; it is obviously a sub-block of $K_c$. $c_{mm}$, $c_{nn}$, $c_{mn}$, and $c_{nm}$ are the crosstalk coefficients in $k_{mn}$, with $c_{mm}$ and $c_{nn}$ set to 1. $I_m$ and $I_n$ represent the ideal intensities, and $I_{m'}$ and $I_{n'}$ represent the crosstalk-contaminated intensities in the $m$ and $n$ channels. Ideally, each channel exhibits a pattern only in the corresponding direction, as in Eq. (1). Nevertheless, crosstalk adds the response of the other channel, as shown in Fig. 3. Taking the coefficient $c_{nm}$ as an example, Eq. (6) expresses the extracted $m$-channel patterns (the vertical pattern is loaded in the $m$ channel, and the horizontal pattern is loaded in the $n$ channel):
$$I^i_{m'}(x,y) = a(x,y) + b(x,y)\cos[\varphi_m(x,y) + \delta_i] + \left\{ c_{nm} a(x,y) + c_{nm} b(x,y)\cos[\varphi_n(x,y) + \delta_i] \right\},$$
where $I^i_{m'}(x,y)$ represents the actual intensity extracted from the orthogonal color patterns, and $\varphi_m(x,y)$ and $\varphi_n(x,y)$ denote the wrapped phases in the vertical and horizontal directions, respectively.
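For illustration, a short sketch of Eq. (6) (with assumed background, modulation, and frequency values, not taken from the paper) builds the 4-step m-channel images contaminated by a horizontal fringe leaking in with coefficient c_nm:

```python
# A minimal sketch of Eq. (6): the m channel carries its own vertical fringe plus a
# copy of the n-channel horizontal fringe scaled by c_nm. All parameters are assumed.
import numpy as np

def crosstalk_m_channel(h, w, freq, c_nm, n_steps=4, a=120.0, b=100.0):
    """Return the n_steps crosstalk-contaminated m-channel images and both ideal phases."""
    y, x = np.mgrid[0:h, 0:w]
    phi_m = 2 * np.pi * freq * x / w          # vertical fringe phase (m channel)
    phi_n = 2 * np.pi * freq * y / h          # horizontal fringe phase (n channel)
    imgs = []
    for i in range(n_steps):
        delta = 2 * np.pi * i / n_steps
        imgs.append(a + b * np.cos(phi_m + delta)
                    + c_nm * (a + b * np.cos(phi_n + delta)))
    return np.stack(imgs), phi_m, phi_n

if __name__ == "__main__":
    imgs, phi_m, phi_n = crosstalk_m_channel(256, 256, freq=8, c_nm=0.1)
    print(imgs.shape)                          # (4, 256, 256)
```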

Fig. 2. Procedure of the proposed calibration method.

Fig. 3. Crosstalk effect in the real obtained orthogonal crosstalk pattern.

Since we adopt the 4-step PS algorithm to calculate the wrapped phase, the wrapped phase of the crosstalk pattern in the $m$ channel is calculated as follows:

$$\phi'_m(x,y) = -\arctan\left[\frac{\sum_{i=0}^{3} I^i_{m'}(x,y)\sin(i\pi/2)}{\sum_{i=0}^{3} I^i_{m'}(x,y)\cos(i\pi/2)}\right].$$

Equation (7) can be simplified to

$$\phi'_m(x,y) = \arctan\left\{\frac{\sin[\varphi_m(x,y)] + c_{nm}\sin[\varphi_n(x,y)]}{\cos[\varphi_m(x,y)] + c_{nm}\cos[\varphi_n(x,y)]}\right\}.$$

It should be noted that the gamma distortion has been corrected in advance, before the patterns are captured [23]. Compared with the ideal phase $\phi_m(x,y)$ calculated by Eq. (2), the wrapped phase error introduced by color crosstalk, $\Delta\phi'_m(x,y)$, is

$$\Delta {\phi ^{\prime}_m}(x,y) = {\phi ^{\prime}_m}(x,y) - {\phi _m}(x,y) = \arctan \left\{ {\frac{{{c_{nm}}\sin [{\varphi_n}(x,y) - {\varphi_m}(x,y)]}}{{1 + {c_{nm}}\cos [{\varphi_m}(x,y) - {\varphi_n}(x,y)]}}} \right\}.$$

When the crosstalk is not severe, the coefficient $c_{nm}$ is small, so Eq. (9) can be further simplified to

$$\Delta {\phi ^{\prime}_m}(x,y) = {\phi ^{\prime}_m}(x,y) - {\phi _m}(x,y) \approx {c_{nm}}\sin [{\varphi _n}(x,y) - {\varphi _m}(x,y)].$$

According to Eq. (10), the crosstalk coefficient $c_{nm}$ between the $m$ channel and the $n$ channel can be determined by fitting the error of the wrapped phase. Similarly, by comparing the wrapped phase of the extracted horizontal pattern ($n$ channel) with the ideal wrapped phase calculated from 18-step horizontal patterns, we can calibrate the coefficient $c_{mn}$. The remaining crosstalk coefficients can be obtained using the same procedure. Finally, the complete color crosstalk matrix $K_c$ is obtained by assembling all of the fitted coefficients. Once $K_c$ is obtained, the corrected RGB components can be acquired by solving Eq. (4).
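A possible implementation of this fitting step is sketched below: a one-parameter least-squares fit of c_nm to Eq. (10). The synthetic phases and noise level are assumptions used only for the demonstration, not values from the paper.

```python
# A minimal sketch of the coefficient fit implied by Eq. (10): the phase-error map is
# projected onto the basis function sin(phi_n - phi_m) in a least-squares sense.
import numpy as np

def fit_cnm(phase_err, phi_m, phi_n):
    """Least-squares estimate of c_nm from Delta_phi ~= c_nm * sin(phi_n - phi_m)."""
    basis = np.sin(phi_n - phi_m).ravel()
    return float(basis @ phase_err.ravel() / (basis @ basis))

if __name__ == "__main__":
    y, x = np.mgrid[0:256, 0:256]
    phi_m = 2 * np.pi * 8 * x / 256                    # vertical direction phase
    phi_n = 2 * np.pi * 8 * y / 256                    # horizontal direction phase
    c_true = 0.07
    rng = np.random.default_rng(1)
    phase_err = c_true * np.sin(phi_n - phi_m) + 1e-3 * rng.standard_normal(phi_m.shape)
    print("fitted c_nm:", fit_cnm(phase_err, phi_m, phi_n))   # close to 0.07
```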

It should be emphasized that the derivation of Eq. (9) requires a specific condition to hold, as shown in Eq. (11). However, some points cannot meet this requirement, which makes the obtained phase error deviate from a sinusoidal distribution. A portion of the wrapped phase error when $c_{nm}$ is 0.1 is shown in Fig. 4. At points that do not satisfy the condition, the atan2 function produces a phase error offset of $-2\pi$ or $2\pi$, which may affect the accuracy of the coefficient fitting, as shown in Figs. 4(a)-(c). By compensating these special points with $2\pi$ or $-2\pi$, a higher-quality phase map for fitting can be obtained. In Figs. 4(d)-(f), the compensated phase distribution satisfies the form deduced in Eq. (10).

$$\frac{{\sin [{\varphi _m}(x,y)] + {c_{nm}}\sin [{\varphi _n}(x,y)]}}{{\cos [{\varphi _m}(x,y)] + {c_{nm}}\cos [{\varphi _n}(x,y)]}}\cdot \frac{{\sin [{\varphi _m}(x,y)]}}{{\cos [{\varphi _m}(x,y)]}} > - 1.$$
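One simple way to apply the compensation described above (an assumption on our part, not necessarily the authors' exact procedure) is to re-wrap the raw error map into (-π, π], which adds the required ±2π at the offending points:

```python
# A minimal sketch of the +/-2*pi compensation: re-wrapping the error map into
# (-pi, pi] shifts the outlier points back onto the sinusoidal curve of Eq. (10).
import numpy as np

def compensate_phase_error(raw_err):
    """Wrap a phase-error map back into (-pi, pi], removing +/-2*pi offsets."""
    return np.angle(np.exp(1j * raw_err))

if __name__ == "__main__":
    true_err = 0.1 * np.sin(np.linspace(0, 4 * np.pi, 500))   # ideal sinusoidal error
    raw_err = true_err.copy()
    raw_err[100:120] += 2 * np.pi                             # emulate the jump points
    raw_err[300:320] -= 2 * np.pi
    fixed = compensate_phase_error(raw_err)
    print("max deviation after compensation:", np.abs(fixed - true_err).max())
```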

Fig. 4. Illustration of phase compensation when the crosstalk coefficient is 0.1. (a) Wrapped phase error map before compensation. (b) Row 35 phase error distribution before compensation. (c) Row 116 phase error distribution before compensation. (d) Wrapped phase error map after compensation. (e) Row 35 phase error distribution after compensation. (f) Row 116 phase error distribution after compensation.

In summary, the focal point of crosstalk correction in our method lies in fitting the phase error map. Figure 2 illustrates the calibration procedure, which comprises two key stages: pre-processing and coefficient acquisition. The specific steps are described in detail below:

Step 1: Generate horizontal and vertical PS patterns, and compose the color orthogonal patterns from the two selected color channels. In addition, generate 18-step PS fringe patterns in both directions.

Step 2: Perform gamma correction for the different channels, capture images of the orthogonal patterns and the 18-step PS fringe patterns, and then separate the R, G, and B channels of the captured images.

Step 3: Calculate the wrapped phase error caused by crosstalk using the separated R, G, or B channel patterns and the 18-step PS fringe patterns.

Step 4: Calibrate the crosstalk coefficient by fitting the wrapped phase error map based on Eq. (9) or Eq. (10), and return to Step 2 until the complete crosstalk matrix $K_c$ is obtained.
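For bookkeeping, the sketch below shows one way the six fitted pairwise coefficients can be assembled into the full matrix of Eq. (3); the numerical values are placeholders taken from the simulation matrix of Section 3, not measured results.

```python
# A minimal sketch of Step 4's final assembly: place the six fitted coefficients into
# K_c following the convention of Eq. (3), where c_mn is the leakage of channel m into
# the n output. The values below are placeholders matching the simulation matrix.
import numpy as np

def assemble_Kc(c):
    """c: dict with keys 'gr', 'br', 'rg', 'bg', 'rb', 'gb'; diagonal fixed to 1."""
    return np.array([[1.0,     c['gr'], c['br']],
                     [c['rg'], 1.0,     c['bg']],
                     [c['rb'], c['gb'], 1.0    ]])

if __name__ == "__main__":
    fitted = dict(gr=0.100, br=0.015, rg=0.070, bg=0.030, rb=0.200, gb=0.050)
    print(assemble_Kc(fitted))
```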

3. Simulations

In this section, the modulation-based coefficient calibration (MCC) of Ref. [16] and the proposed phase error-based coefficient calibration (PECC) were tested under different levels of noise interference and different pattern frequencies. In the simulations, the crosstalk matrix is set as

$$\begin{bmatrix} c_{rr} & c_{gr} & c_{br} \\ c_{rg} & c_{gg} & c_{bg} \\ c_{rb} & c_{gb} & c_{bb} \end{bmatrix} = \begin{bmatrix} 1.000 & 0.100 & 0.015 \\ 0.070 & 1.000 & 0.030 \\ 0.200 & 0.050 & 1.000 \end{bmatrix}.$$

Considering camera noise and external environmental factors, random noise inevitably appears in the collected patterns, so it is crucial to investigate its impact. We generated 4-step PS simulated patterns with a resolution of 2048 × 2048 and a pattern frequency of 64, along with 18-step PS fringe patterns of the same frequency and resolution. The corresponding simulated patterns required by MCC were also generated. Random noise with different standard deviations (0, 5, 10, 15, and 20) was added to the simulated patterns. To simplify the process, the simulations assume that the gamma distortion has already been removed. The fitting results of MCC and PECC are shown in Fig. 5.
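To make the noise test concrete, the following end-to-end sketch (toy resolution and parameters rather than the paper's exact configuration; the noiseless ground-truth phase stands in for the 18-step reference) simulates one channel of the orthogonal pattern, adds Gaussian noise, and recovers c_nm with the PECC fit:

```python
# A minimal end-to-end sketch of the PECC noise test: simulate crosstalk-contaminated
# 4-step patterns, add Gaussian noise, and recover c_nm from the re-wrapped phase error.
# The noiseless ideal phase replaces the 18-step reference; all parameters are assumed.
import numpy as np

def wrapped_phase_4step(imgs):
    i0, i1, i2, i3 = imgs                       # shifts of 0, pi/2, pi, 3*pi/2 (Eq. 7)
    return -np.arctan2(i1 - i3, i0 - i2)

def simulate_and_fit(c_true=0.1, noise_std=5.0, size=512, freq=16, seed=0):
    rng = np.random.default_rng(seed)
    y, x = np.mgrid[0:size, 0:size]
    phi_m = 2 * np.pi * freq * x / size         # vertical fringe (m channel)
    phi_n = 2 * np.pi * freq * y / size         # horizontal fringe (n channel)
    imgs = [120 + 100 * np.cos(phi_m + i * np.pi / 2)
            + c_true * (120 + 100 * np.cos(phi_n + i * np.pi / 2))
            + rng.normal(0.0, noise_std, (size, size)) for i in range(4)]
    err = np.angle(np.exp(1j * (wrapped_phase_4step(imgs) - phi_m)))  # compensated error
    basis = np.sin(phi_n - phi_m).ravel()
    return float(basis @ err.ravel() / (basis @ basis))

if __name__ == "__main__":
    for sigma in (0, 5, 10, 15, 20):
        print("noise std", sigma, "-> fitted c_nm", round(simulate_and_fit(noise_std=sigma), 5))
```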

Fig. 5. Absolute error under different random noise. (a) Simulation result of MCC. (b) Simulation result of PECC.

The absolute error of calibration using MCC is shown in Fig. 5(a). For small crosstalk coefficients such as $c_{br}$, when the random noise reaches 20, the absolute error of $c_{br}$ reaches 0.052, nearly 3.5 times its own value. Larger crosstalk coefficients such as $c_{rb}$ are less affected by noise: with random noise of 20, the calibration error of $c_{rb}$ is only 0.006638. We believe that noise interference affects the calculation of the pixel modulation, especially under mild crosstalk. When the crosstalk is mild, the pixel intensity contributed by crosstalk is weak, and camera noise becomes the main factor affecting the pixel modulation $M(x,y)$, as shown in Eq. (13):

$$M(x,y) = \frac{1}{2}\sqrt{\left[\sum_{i=0}^{3} I_i(x,y)\sin(\pi i/2)\right]^2 + \left[\sum_{i=0}^{3} I_i(x,y)\cos(\pi i/2)\right]^2}.$$
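For reference, the sketch below computes the 4-step pixel modulation of Eq. (13), the quantity on which MCC relies; the demonstration values are assumed.

```python
# A minimal sketch of Eq. (13): the pixel modulation M(x, y) of a 4-step sequence.
# For an undistorted fringe a + b*cos(phi + delta_i), M equals the modulation b.
import numpy as np

def modulation_4step(imgs):
    i0, i1, i2, i3 = [np.asarray(im, dtype=float) for im in imgs]
    return 0.5 * np.sqrt((i1 - i3) ** 2 + (i0 - i2) ** 2)

if __name__ == "__main__":
    x = np.linspace(0, 2 * np.pi, 100)
    imgs = [120 + 30 * np.cos(x + i * np.pi / 2) for i in range(4)]
    print(modulation_4step(imgs).mean())        # ~30, i.e. the fringe modulation b
```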

When the crosstalk is moderate or severe, the crosstalk interference becomes the key factor in the pixel modulation, and the impact of noise on the crosstalk coefficient calibration weakens. It can therefore be seen that the MCC method is very sensitive to noise. Figure 5(b) shows the absolute calibration error of the proposed method. Compared with MCC, PECC obtains the crosstalk coefficient by fitting the phase error map, resulting in improved robustness and stability, with calibration errors typically below $2.5 \times 10^{-4}$. In addition, the calibration error of PECC increases less under the same noise influence.

The frequency of the PS fringe patterns affects the accuracy of the phase calculation [5]. Thus, the effect of the pattern frequency also needs to be investigated. To illustrate its effect on the two methods, simulations using MCC and PECC at different pattern frequencies (25, 36, 49, 64, and 81) were carried out. The frequencies $N^2$ ($N = 5, 6, 7, 8, 9$) were selected because of the effective noise suppression provided by the frequency sequence $N^2$, $N^2 - 1$, and $N^2 - N$ [24]; the same frequency composition is used for the reconstruction experiments in the following sections for this reason. The pattern frequency was adjusted by changing the number of stripes in the pattern. Both methods were tested under random noise with a standard deviation of 5. Figures 6(a) and 6(b) illustrate the simulation results using MCC and PECC, respectively. The absolute errors of both calibration methods remained nearly unchanged under different pattern frequencies, except for $c_{br}$ in Fig. 6(b). We believe this fluctuation in the calibration error is so small that it has a negligible impact on the de-crosstalk result; it is therefore acceptable and does not affect the stability of the proposed method. In summary, the frequency of the PS pattern has a negligible effect on either method.

Fig. 6. Absolute error under different pattern frequencies. (a) Simulation result of MCC. (b) Simulation result of PECC.

4. Experiments

4.1 Crosstalk coefficient calibration

The experimental setup is shown in Fig. 7. The system mainly consists of a CMOS color camera (Daheng MER2-503-36U3C, 2448 × 2048) equipped with a 25 mm lens, a digital projector (Gimi-Z6X, 1920 × 1080) with 100% optical offset, and a white plate. Orthogonal patterns with a frequency of 64 in both the horizontal and vertical directions, as well as the corresponding 18-step PS fringe patterns, were generated to calibrate the coefficients. To avoid crosstalk and chromatic aberration while capturing the 18-step PS fringe patterns, blue non-composite fringe patterns were projected, and only the blue channel of the captured images was used to calculate the wrapped phase [25]. Following the principle described in Section 2, we calibrated the crosstalk coefficients using both MCC and PECC; the results are shown in Table 1.

Fig. 7. Experimental setup.

From Table 1, we can see that the two calibration methods provide similar results for large crosstalk coefficients, such as $c_{rg}$ and $c_{bg}$. However, for mild crosstalk, such as $c_{rb}$ and $c_{gr}$, the calibration outcomes differ significantly. To validate the calibration accuracy of both approaches, we used the calibrated crosstalk matrices to correct the captured color orthogonal patterns. The de-crosstalk effect of the two calibration methods on the captured crosstalk images is illustrated in Fig. 8. It is evident that the crosstalk matrix calibrated using the proposed method significantly improves de-crosstalk and effectively suppresses interference from the other channels, whereas significant interference remains in the patterns separated using MCC.

Table 1. Crosstalk coefficient calibration results of MCC and PECC

Fig. 8. The results of correcting patterns containing crosstalk using MCC and PECC.

4.2 3D reconstruction with corrected color patterns

To further evaluate the effectiveness of our method, we conducted two comparative experiments: 3D shape measurement of a plate and 3D shape measurement of a standard step block. The frequencies of the three color channels are selected as $f_R = 64$, $f_G = 63$, and $f_B = 56$, which provide unambiguous 3D reconstruction over the whole projection range. The intrinsic and extrinsic parameters of the CFPP system were calibrated in advance [26,27], and the absolute phase was retrieved using the method in Ref. [16].

In the first experiment, we reconstructed a white plate using 3-step color patterns corrected by each calibration method. The flatness of the plate is 0.005 mm. Figures 9(a) and 10(a) display the reconstructed plate using MCC and PECC, respectively. First, we compared the calibration effectiveness of the two methods by calculating the standard deviation of the fitted plate. The standard deviation is 0.295 mm using MCC, as shown in Fig. 9(b), and is reduced to 0.114 mm using PECC, as shown in Fig. 10(b). The reconstruction quality of the plate is thus improved by our calibration method.

Fig. 9. Measurement result with MCC. (a) Plate reconstruction result. (b) Difference between the fitted ideal plate and the reconstruction plate. (c) Z-axis coordinates error in the X direction of (b). (d) Z-axis coordinates error in the Y direction of (b).

Fig. 10. Measurement result with PECC. (a) Plate reconstruction result. (b) Difference between the fitted ideal plate and the reconstruction plate. (c) Z-axis coordinates error in the X direction of (b). (d) Z-axis coordinates error in the Y direction of (b).

For a more comprehensive comparison, we also used 12-step PS fringe patterns to reconstruct the plate as the ground truth for evaluating MCC and PECC. We extracted 449 points of Z-axis values in the X and Y directions of the central area and compared them with the ground-truth values. The Z-axis coordinate errors with MCC and with no correction in the X and Y directions are shown in Figs. 9(c) and 9(d), respectively. Figures 10(c) and 10(d) depict the Z-axis coordinate errors with PECC and with no correction in the X and Y directions, respectively. Both correction methods improve the reconstruction quality of the plate. With MCC, the Z-axis coordinate error in the X direction is limited to [-0.37, 0.59] mm, while with PECC it is limited to [-0.33, 0.32] mm. In the Y direction, the error is in the range of [-0.18, 0.34] mm using MCC and [-0.22, 0.25] mm using the proposed method. The results demonstrate that the proposed method reduces crosstalk more accurately.

In the second experiment, we measured a step block consisting of three steps. A total of 11 measurements were conducted to reduce the influence of chance on the results. As shown in Fig. 11(a), the distances A and B had been determined in advance using a coordinate measuring machine (CMM) as 9.9974 mm and 20.0098 mm, respectively. Figure 11(b) shows a box plot of the absolute error with no correction, PECC, and MCC. The distance error caused by crosstalk is significant without correction: the mean absolute error (MAE) of distance A is 0.027 mm, and the MAE of distance B is 0.046 mm. These are reduced to 0.018 mm and 0.031 mm with MCC, and to 0.014 mm and 0.021 mm with PECC, reductions of 0.004 mm and 0.010 mm relative to MCC, respectively. Furthermore, compared with MCC and no correction, the distribution range of the absolute error is smaller, implying that measurements obtained with the proposed method are more stable. The results provide compelling evidence that the proposed PECC calibration method can effectively and accurately calibrate the crosstalk coefficients.

Fig. 11. Measurement result of a step block. (a) The step block. (b) Box plot of measurement absolute error with No correction, PECC, and MCC.

5. Conclusion

In this paper, we propose a method, PECC, for accurately calibrating the crosstalk coefficients of a color camera. The basic principle and implementation procedure of PECC are described in detail. The proposed method establishes a wrapped phase error model introduced by color crosstalk, based on orthogonal color patterns, to calibrate the crosstalk coefficients. Meanwhile, because of the specific condition required by the derived formula, the phase error at some points is compensated to restore the sinusoidal distribution, which enables more accurate fitting. According to the simulations, the proposed method is insensitive to camera noise, and the pattern frequency hardly affects the calibration results, demonstrating the robustness of the method. Moreover, in the experiments, the standard deviation of the reconstructed plate is reduced by 0.181 mm, and the MAEs of distance A and distance B are reduced by 0.004 mm and 0.010 mm, respectively, which proves that the proposed PECC calibrates the color crosstalk more accurately than MCC. The method is not without limitations: it needs a standard phase map to obtain the phase error, which requires capturing 18-step PS fringe patterns. In future research, we will explore a faster method to reduce the calibration time.

Funding

Seed Foundation of Tianjin University (2023XHX-0019); National Natural Science Foundation of China (62071325); Open Foundation of Beijing Laboratory of Optical Fiber Sensing and System (GXKF2022001).

Disclosures

The authors declare that there are no conflicts of interest related to this article.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. Y. Wan, Y. Cao, X. Liu, T. Tao, and J. Kofman, “High-frequency color-encoded fringe-projection profilometry based on geometry constraint for large depth range,” Opt. Express 28(9), 13043–13058 (2020). [CrossRef]  

2. Z. Zhang, D. P. Towers, and C. E. Towers, “Snapshot color fringe projection for absolute three-dimensional metrology of video sequences,” Appl. Opt. 49(31), 5947–5953 (2010). [CrossRef]  

3. H. Lin, L. Nie, and Z. Song, “A single-shot structured light means by encoding both color and geometrical features,” Pattern Recognit. 54, 178–189 (2016). [CrossRef]  

4. Z. H. Zhang, “Review of single-shot 3D shape measurement by phase calculation-based fringe projection techniques,” Opt. Lasers Eng. 50(8), 1097–1106 (2012). [CrossRef]  

5. C. Zuo, S. Feng, L. Huang, T. Tao, W. Yin, and Q. Chen, “Phase shifting algorithms for fringe projection profilometry: A review,” Opt. Lasers Eng. 109, 23–59 (2018). [CrossRef]  

6. X. Wu and X. Zhang, “Joint color decrosstalk and demosaicking for CFA cameras,” IEEE Trans. on Image Process. 19(12), 3181–3189 (2010). [CrossRef]  

7. Z. Zhang, W. Kuang, B. Shi, and Z. L. Huang, “Pushing the colorimetry camera-based fluorescence microscopy to low light imaging by denoising and dye combination,” Opt. Express 30(19), 33680–33696 (2022). [CrossRef]  

8. M. Wu, N. Fan, G. Wu, S. Zhang, and F. Liu, “An inverse error compensation method for color-fringe pattern profilometry,” J. Opt. 22(3), 035705 (2020). [CrossRef]  

9. L. Cui, G. Feng, H. Li, H. Yuan, and Z. Bao, “Absolute phase correction for phase-measuring profilometry based on color-coded fringes,” Opt. Eng. 60(02), 024101 (2021). [CrossRef]  

10. J. L. Flores, A. Muñoz, S. Ordoñes, G. Garcia-Torales, and J. A. Ferrari, “Color-fringe pattern profilometry using an efficient iterative algorithm,” Opt. Commun. 391, 88–93 (2017). [CrossRef]  

11. M. Padilla, M. Servin, and G. Garnica, “Fourier analysis of RGB fringe-projection profilometry and robust phase-demodulation methods against crosstalk distortion,” Opt. Express 24(14), 15417–15428 (2016). [CrossRef]  

12. S. Ma, R. Zhu, C. Quan, B. Li, C. J. Tay, and L. Chen, “Blind phase error suppression for color-encoded digital fringe projection profilometry,” Opt. Commun. 285(7), 1662–1668 (2012). [CrossRef]  

13. Z. Cai, X. Liu, H. Jiang, D. He, X. Peng, S. Huang, and Z. Zhang, “Flexible phase error compensation based on Hilbert transform in phase shifting profilometry,” Opt. Express 23(19), 25171–25181 (2015). [CrossRef]  

14. Y. Wang, L. Liu, J. Wu, X. Chen, and Y. Wang, “Hilbert transform-based crosstalk compensation for color fringe projection profilometry,” Opt. Lett. 45(8), 2199–2202 (2020). [CrossRef]  

15. P. S. Huang, Q. Hu, J. Feng, and F. Chiang, “Color-encoded digital fringe projection technique for high-speed 3-D surface contouring,” Opt. Eng. 38(6), 1065–1071 (1999). [CrossRef]  

16. Z. Zhang, C. E. Towers, and D. P. Towers, “Time efficient color fringe projection system for 3D shape and color using optimum 3-frequency Selection,” Opt. Express 14(14), 6444–6455 (2006). [CrossRef]  

17. Y. Ye, H. Li, G. Li, and L. Lin, “A crosstalk correction method to improve multi-wavelength LEDs imaging quality based on color camera and frame accumulation,” Signal Process.-Image Commun. 102, 116624 (2022). [CrossRef]  

18. Z. Wang, Q. Gao, and J. Wang, “A triple-exposure color PIV technique for pressure reconstruction,” Sci. China Technol. Sci. 60(1), 1–15 (2017). [CrossRef]  

19. M. Yue, J. Wang, J. Zhang, Y. Zhang, Y. Tang, and X. Feng, “Color crosstalk correction for synchronous measurement of full-field temperature and deformation,” Opt. Lasers Eng. 150, 106878 (2022). [CrossRef]  

20. L. Rao and F. Da, “Neural network based color decoupling technique for color fringe profilometry,” Opt. Laser Technol. 70, 17–25 (2015). [CrossRef]  

21. B. Zhang, S. Lin, J. Lin, and K. Jiang, “Single-shot high-precision 3D reconstruction with color fringe projection profilometry based BP neural network,” Opt. Commun. 517, 128323 (2022). [CrossRef]  

22. D. Caspi, N. Kiryati, and J. Shamir, “Range imaging with adaptive color structured light,” IEEE Trans. Pattern Anal. Machine Intell. 20(5), 470–480 (1998). [CrossRef]  

23. S. Zhang, “Comparative study on passive and active projector nonlinear gamma calibration,” Appl. Opt. 54(13), 3834–3841 (2015). [CrossRef]  

24. C. E. Towers, D. P. Towers, and J. D. C. Jones, “Absolute fringe order calculation using optimised multi-frequency selection in full-field profilometry,” Opt. Lasers Eng. 43(7), 788–800 (2005). [CrossRef]  

25. J. Qian, S. Feng, Y. Li, T. Tao, J. Han, Q. Chen, and C. Zuo, “Single-shot absolute 3D shape measurement with deep-learning-based color fringe projection profilometry,” Opt. Lett. 45(7), 1842–1845 (2020). [CrossRef]  

26. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Machine Intell. 22(11), 1330–1334 (2000). [CrossRef]  

27. L. Feng, J. Kang, H. Li, Z. Sun, Z. Zhang, L. Yuan, Z. Su, and B. Wu, “Rapid and flexible calibration of DFPP using a dual-sight fusion target,” Opt. Lett. 48(8), 2086–2089 (2023). [CrossRef]  
