
Simulation framework for fringe projection profilometry using ray tracing and light transport coefficient measurement

Open Access

Abstract

Fringe projection profilometry is widely used in optical metrology, and fringe analysis is important for improving measurement accuracy. However, the fringe images captured by cameras are influenced by many factors, and an analytical study of the imaging process that characterizes all of them is difficult to perform. We propose a method to accurately simulate a real imaging system in a virtual environment using a ray tracing algorithm. The light transport coefficients of the cameras are measured to simulate defocus instead of using a Gaussian function. Experimental results show that the proposed method simulates the physical system more accurately than the Gaussian function under large defocus conditions.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Fringe projection profilometry (FPP) is a popular optical 3D shape measurement method [1] that is widely used in various fields, such as quality control, medicine, and industrial inspection. A general FPP system consists of one projector that projects sinusoidal fringes onto the object and two cameras that capture the fringe images. Fringe analysis methods, such as phase-shifting algorithms [2], are then used to obtain the phase values and find corresponding points between the two cameras. Fringe analysis is therefore key to measurement accuracy.

The fringe images are influenced by many factors, such as the projector's gamma effect, defocus of the camera and projector, and the properties of the measured object. Zhang [3] showed that the active projector nonlinear gamma calibration method is robust and can effectively reduce the phase error caused by the gamma effect. Rao et al. [4] concluded that projector defocus only attenuates the amplitude of the projected fringe patterns and decreases the signal-to-noise ratio (SNR), which can be compensated by increasing the number of phase-shifting steps [5] or selecting the optimal fringe frequency [6]. They also analyzed the influence of camera defocus in textured areas on phase quality and found that their method alleviates system errors and improves the final accuracy. Kühmstedt et al. [7] analyzed the accuracy with respect to the angle of incidence, angle of acquisition, triangulation angle, and field angle, which depend on the camera constant and chip size. They demonstrated that 3D reconstruction results remain acceptable up to an acquisition angle of about 70°. With all these factors, the FPP system is a nonlinear system. However, these factors have been studied separately; thus, the actual system cannot be characterized accurately. With the development of artificial intelligence and enhanced computational power, deep learning has been adopted in fringe analysis [8]. Feng et al. [9] presented a fringe-pattern analysis framework using a Bayesian convolutional neural network (BNN). The network demodulates the phase information from a single frame and outputs pixel-wise uncertainty maps. Zuo et al. [10] proposed a deep learning technique to analyze nonsinusoidal fringe images resulting from different factors, including the gamma effect of the projector, residual harmonics, and image saturation, and showed that these factors can be represented by a generalized model. Deep learning techniques, however, require a large amount of training data, and ground truth for FPP is unavailable, which limits network generalization and accuracy.

Simulation is a practical solution for analyzing the effect of different error sources. Lutzke et al. [11] proposed a method to simulate 3D measurement of translucent objects to better understand the error caused by light scattered within the object's volume. Middendorf et al. [12] used a ray tracing approach to model the FPP system; they identified locations affected by multiple reflections through virtual scanning and eliminated these areas. Camera defocus is not considered in either of these simulation methods. Zheng et al. [13] proposed a framework that establishes a digital twin of a real-world system to automatically generate training data for deep learning algorithms. In that framework, camera defocus is modeled as a Gaussian function in the computer graphics (CG) software, which deviates from the real imaging system under large defocus conditions.

In summary, the imaging process of the camera has not been fully explored in previous studies, which limits the accuracy of the analysis and application of FPP. The Gaussian function is an idealized, position-invariant model of camera defocus. Because of the optical aberrations of the camera lens, the distribution of light over a camera pixel cannot be characterized by a Gaussian function when the object is at different positions. The light transport coefficients (LTC) characterize the radiance captured by a camera pixel as the weighted sum of the intensities of every possible position on the light source [14]. Thus, given the LTC of a camera pixel, the light captured by that pixel can be traced and the camera defocus can be simulated. The present study proposes a simulation framework that characterizes a real-world FPP system in a virtual environment and accurately simulates camera defocus. In this work, we use a ray tracing algorithm [15] to simulate the light propagation between the projector and cameras. For the camera defocus simulation, we measure the LTC of the cameras based on single-pixel imaging [14,16,17] instead of using a Gaussian function.

The remainder of this paper is organized as follows. Section 2 explains the principles of the proposed method, including the FPP principle, light propagation simulation using ray tracing and LTC, and the LTC measurement principle. Section 3 presents the experimental setup and results. Section 4 concludes the paper.

2. Principles

In this study, a ray tracing algorithm is used to characterize the light propagation of a real-world FPP system. According to geometrical optics, the cameras' LTC at the optical center is interpolated from the LTC measured at other depths and input to the ray tracing algorithm to simulate camera defocus. The light propagation of an FPP system is shown in Fig. 1. The LTC measurement requires a liquid crystal display (LCD) screen to display specific patterns [17], as shown in Fig. 2.

Fig. 1. Light propagation of an FPP system. The light is projected on the object by the projector and captured by the two cameras.

Fig. 2. LTC measurement setup. An LCD screen is used to display the patterns. The LCD depth is changed by moving the camera using a precision translation stage.

The framework of the proposed simulation method is organized as follows:

Step 1: Modeling the light propagation between the cameras and the projector. The FPP system comprises two cameras and one projector. We calibrate the cameras and the projector [18,19] to obtain the intrinsic and extrinsic parameters, and then model the light propagation.

Step 2: Tracing the path of light through pixels on the camera sensor plane and the projector sensor plane. Once the intrinsic and extrinsic parameters are calibrated, the origin and direction of each ray can be calculated. In this study, the ray tracing algorithm is used to obtain the intensity of each pixel on the image sensor plane.

Step 3: Measuring the cameras' LTC to simulate the real imaging system. Single-pixel imaging is used to measure the LTC at different depths. The LTC at the camera's optical center is interpolated from the LTC at other depths and input to the ray tracing algorithm to accurately simulate camera defocus instead of using a Gaussian function.

2.1 Modeling the light propagation based on pinhole model

The light propagation between the projector and cameras can be modeled using a pinhole model, as follows:

$$s_\mathrm{c}\begin{bmatrix} u_\mathrm{c} \\ v_\mathrm{c} \\ 1 \end{bmatrix} = \mathbf{A}_\mathrm{c}\,[\,\mathbf{R}_\mathrm{c} \mid \mathbf{T}_\mathrm{c}\,]\begin{bmatrix} X_\mathrm{W} \\ Y_\mathrm{W} \\ Z_\mathrm{W} \\ 1 \end{bmatrix}\tag{1}$$
$$s_\mathrm{p}\begin{bmatrix} u_\mathrm{p} \\ v_\mathrm{p} \\ 1 \end{bmatrix} = \mathbf{A}_\mathrm{p}\,[\,\mathbf{R}_\mathrm{p} \mid \mathbf{T}_\mathrm{p}\,]\begin{bmatrix} X_\mathrm{W} \\ Y_\mathrm{W} \\ Z_\mathrm{W} \\ 1 \end{bmatrix}\tag{2}$$
where (XW, YW, ZW) represents a 3D point of the object in the world coordinate system; sc and sp are scaling factors; (uc, vc) is the pixel on the camera imaging plane (the CCD), and (up, vp) is the pixel on the projector imaging plane (the DMD). Ac and Ap are the intrinsic parameter matrices of the camera and projector, respectively; Rc, Tc, Rp, and Tp are their extrinsic parameter matrices. The intrinsic and extrinsic parameter matrices are obtained by calibration.
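As a concrete illustration of Eqs. (1)–(2), the following Python sketch projects a 3D world point into pixel coordinates with the pinhole model. It is a minimal example, not the authors' implementation; the numerical intrinsic and extrinsic parameters are made up for demonstration.

```python
import numpy as np

def project_pinhole(X_w, A, R, T):
    """Project a 3D world point to pixel coordinates with the pinhole
    model of Eqs. (1)-(2): s [u v 1]^T = A [R | T] [X_w 1]^T.

    X_w : (3,) world point, A : (3,3) intrinsics,
    R : (3,3) rotation, T : (3,) translation.
    """
    X_cam = R @ X_w + T            # world -> camera/projector frame
    uvw = A @ X_cam                # apply the intrinsic matrix
    s = uvw[2]                     # scaling factor s_c or s_p
    return uvw[:2] / s             # (u, v) on the CCD / DMD plane

# Example with illustrative (made-up) parameters:
A_c = np.array([[2500.0, 0.0, 1024.0],
                [0.0, 2500.0, 768.0],
                [0.0, 0.0, 1.0]])
R_c, T_c = np.eye(3), np.array([0.0, 0.0, 500.0])
u_c, v_c = project_pinhole(np.array([10.0, -5.0, 200.0]), A_c, R_c, T_c)
```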

In FPP, the projector projects sinusoidal fringes onto the object, and the two cameras capture images simultaneously, as shown in Fig. 1. In this study, phase shifting [20] is utilized, and the reconstruction results are obtained by triangulation [21]. The intensity of the captured images can be formulated as follows:

$$I_i(u_\mathrm{c},v_\mathrm{c}) = w\left\{ A(u_\mathrm{p},v_\mathrm{p}) + B(u_\mathrm{p},v_\mathrm{p})\cos\left[\phi(u_\mathrm{p},v_\mathrm{p}) - \frac{2\pi i}{N}\right] \right\},\quad i = 0,1,2,\ldots,N-1\tag{3}$$
where Ii(uc, vc) (i = 0, 1, 2, …, N−1) is the intensity at each camera pixel, A(up, vp) is the average intensity, B(up, vp) is the modulation, and w is the weight coefficient, which characterizes the influence of reflectance. The wrapped phase ϕ(up, vp) is solved by the following:
$$\phi(u_\mathrm{p},v_\mathrm{p}) = \arctan\frac{\sum_{i=0}^{N-1} I_i(u_\mathrm{c},v_\mathrm{c})\sin(2\pi i/N)}{\sum_{i=0}^{N-1} I_i(u_\mathrm{c},v_\mathrm{c})\cos(2\pi i/N)}\tag{4}$$

According to Eqs. (1), (2), and (3), the intensity of a camera pixel can be obtained by tracing the light backward, from the camera toward the projector.
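A minimal sketch of the N-step phase-shifting demodulation of Eqs. (3)–(4) is given below, assuming the N fringe images are stacked in a NumPy array; the synthetic test phase at the end is purely illustrative.

```python
import numpy as np

def wrapped_phase(images):
    """Recover the wrapped phase from N phase-shifted fringe images
    using Eq. (4); images has shape (N, H, W)."""
    N = images.shape[0]
    i = np.arange(N).reshape(-1, 1, 1)
    num = np.sum(images * np.sin(2 * np.pi * i / N), axis=0)
    den = np.sum(images * np.cos(2 * np.pi * i / N), axis=0)
    return np.arctan2(num, den)        # wrapped into (-pi, pi]

# Synthetic check with an arbitrary test phase (illustrative only):
H, W, N = 64, 64, 4
phi = np.linspace(0, 6 * np.pi, W)[None, :].repeat(H, axis=0)
imgs = np.stack([0.5 + 0.4 * np.cos(phi - 2 * np.pi * k / N) for k in range(N)])
phi_w = wrapped_phase(imgs)            # equals phi wrapped into (-pi, pi]
```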

2.2 Tracing the path of light to obtain the pixel intensity

The process of tracing the light path is shown in Fig. 3. According to Eq. (1), the origin and direction of the ray emitted from a pixel on the image sensor plane can be formulated as follows:

$$\begin{aligned} \mathbf{r}_\mathrm{o} &= \mathbf{R}_\mathrm{c}^{\mathrm{T}}(\mathbf{p}_{\mathrm{Len}} - \mathbf{T}_\mathrm{c}),\\ \mathbf{r}_\mathrm{d} &= \mathbf{R}_\mathrm{c}^{\mathrm{T}}\frac{\mathbf{T}_\mathrm{d}}{\|\mathbf{T}_\mathrm{d}\|},\\ \mathbf{T}_\mathrm{d} &= \begin{bmatrix} \dfrac{f_\mathrm{d}}{f}u_\mathrm{n} - x_\mathrm{l} & \dfrac{f_\mathrm{d}}{f}v_\mathrm{n} - y_\mathrm{l} & f_\mathrm{d} \end{bmatrix}^{\mathrm{T}}. \end{aligned}\tag{5}$$
where ro is the origin, rd is the direction, fd is the focal distance, f is the focal length, pLen(xl, yl, 0) is a point on the lens aperture, and (un, vn) are the normalized image plane coordinates. Given a pixel (uc, vc) on the image sensor plane, (un, vn) can be calculated from the intrinsic parameters of the camera as follows:
$$u_\mathrm{n} = \frac{u_\mathrm{c} - u_{0\mathrm{c}}}{\alpha_x},\qquad v_\mathrm{n} = \frac{v_\mathrm{c} - v_{0\mathrm{c}}}{\alpha_y}\tag{6}$$
where (u0c, v0c) are the coordinates of the principal point, αx and αy are the scale factors, and (uc, vc) is undistorted in advance [18].
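The following sketch assembles Eqs. (5)–(6) into a ray generator. The function name camera_ray and the parameter layout are our own conventions for this illustration; the symbols mirror those in the text, and lens distortion is assumed to have been removed beforehand.

```python
import numpy as np

def camera_ray(u_c, v_c, A_c, R_c, T_c, p_len, f, f_d):
    """Origin and direction of the ray leaving pixel (u_c, v_c) through the
    aperture point p_len = (x_l, y_l, 0), following Eqs. (5)-(6).

    A_c : (3,3) intrinsics, R_c/T_c : extrinsics, f : focal length,
    f_d : focal (focus) distance.
    """
    u0, v0 = A_c[0, 2], A_c[1, 2]                   # principal point
    ax, ay = A_c[0, 0], A_c[1, 1]                   # scale factors alpha_x, alpha_y
    u_n, v_n = (u_c - u0) / ax, (v_c - v0) / ay     # Eq. (6), after undistortion

    r_o = R_c.T @ (p_len - T_c)                     # ray origin in world coordinates
    T_d = np.array([f_d / f * u_n - p_len[0],
                    f_d / f * v_n - p_len[1],
                    f_d])
    r_d = R_c.T @ (T_d / np.linalg.norm(T_d))       # unit direction in the world frame
    return r_o, r_d
```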

Fig. 3. Framework of the ray tracing algorithm used in this study. The graphics processing unit (GPU) is used to accelerate the process.

The intersection P of the ray and the object can be calculated using ro, rd, and the CAD model. As shown in Fig. 4, the intensity I(uc, vc) of the camera pixel can be formulated as follows:

$$I(u_\mathrm{c},v_\mathrm{c}) = w \cdot \{A(u_\mathrm{p},v_\mathrm{p}) + B(u_\mathrm{p},v_\mathrm{p})\cos\phi(u_\mathrm{p},v_\mathrm{p})\}\cdot(\mathbf{N}\cdot\mathbf{L})\cdot V(\mathbf{P})\tag{7}$$
where w is the weight coefficient, (up, vp) is obtained according to Eq. (2), N is the normal vector at P, and L is the direction from P to the optical center of the projector. V(P) is the visibility of P, as follows:
$$V(\mathbf{P}) = \begin{cases} 1, & \text{without occlusion}\\ 0, & \text{occlusion} \end{cases}\tag{8}$$
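A possible implementation of the shading step of Eqs. (7)–(8) is sketched below: the intersection P returned by the ray tracer is projected onto the DMD plane with Eq. (2), the predesigned fringe value is looked up, and the N·L and visibility terms are applied. The helper name shade_pixel and the fringe array are illustrative, not part of the paper.

```python
import numpy as np

def shade_pixel(P, normal, w, A_p, R_p, T_p, fringe, visible=True):
    """Intensity of a camera pixel whose ray hits the surface at P (Eq. 7).

    fringe : predesigned pattern A + B*cos(phi) sampled on the DMD grid;
    P, normal in world coordinates; w is the reflectance weight.
    """
    if not visible:                               # V(P) = 0 under occlusion, Eq. (8)
        return 0.0
    # Project P onto the DMD plane with the projector pinhole model (Eq. 2).
    p_cam = A_p @ (R_p @ P + T_p)
    u_p, v_p = (p_cam[:2] / p_cam[2]).astype(int)
    # Direction L from P toward the projector optical center.
    C_p = -R_p.T @ T_p
    L = (C_p - P) / np.linalg.norm(C_p - P)
    n_dot_l = max(np.dot(normal, L), 0.0)
    return w * fringe[v_p, u_p] * n_dot_l          # Eq. (7)
```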

Fig. 4. Light path from a camera pixel to a projector pixel. In the ray tracing algorithm, the light is emitted by the camera pixel, and the intersection is projected onto the DMD plane to determine the projector pixel, whose intensity is given by the predesigned fringe images.

2.3 Real imaging system simulation by measuring the cameras’ LTC at the optical center

According to the pinhole model, only a single ray can be received by a camera pixel. In that case, pLen(xl, yl, 0) is the optical center and w equals one, meaning that camera defocus does not exist. However, the aperture of a real imaging system cannot be regarded as a pinhole, and the camera has only one focus plane. According to geometrical optics, the light actually received by a camera pixel is shown in Fig. 5, and the intensity can be formulated as follows:

$$I(u_\mathrm{c},v_\mathrm{c}) = \sum_{k=1}^{K} w_k I_k(x_k, y_k)\tag{9}$$
where wk is the weight coefficient and Ik(xk, yk) is the ray passing through (xk, yk), one of the aperture points pLen. Together, wk and (xk, yk) characterize the light energy distribution over the aperture. They are obtained by LTC measurement, and with Eqs. (5) and (9) the intensity of the camera pixel is determined.
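Equation (9) thus replaces the single pinhole ray by a weighted sum over aperture samples. The sketch below assumes a trace_ray callback that returns the radiance of one traced ray (for example, combining the camera_ray and shade_pixel sketches above); the LTC supplies the weights and aperture positions.

```python
def defocused_intensity(u_c, v_c, aperture_samples, trace_ray):
    """Camera-pixel intensity under defocus, Eq. (9): a weighted sum of the
    radiance carried by K rays through aperture points (x_k, y_k).

    aperture_samples : iterable of (w_k, x_k, y_k) derived from the measured LTC;
    trace_ray(u_c, v_c, x_k, y_k) : returns the traced radiance I_k.
    """
    return sum(w_k * trace_ray(u_c, v_c, x_k, y_k)
               for (w_k, x_k, y_k) in aperture_samples)
```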

Fig. 5. Light received by the pixel with a lens. fd is the focal distance, f is the focal length.

The LTC, comprising light source positions and weight coefficients, characterizes the radiance captured by a camera pixel [14]. In this study, an LCD is used as the light source, and single-pixel imaging is used to measure the LTC at different depths [14,17], which represents the LCD area imaged by a camera pixel. Sinusoidal structured patterns of different frequencies are displayed on the LCD and used to compute the Fourier coefficients [17].
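The sketch below illustrates the Fourier single-pixel reconstruction idea of [16,17] for one camera pixel: its responses to four phase-shifted sinusoidal LCD patterns per frequency give one complex Fourier coefficient of the LTC, and an inverse FFT recovers the spatial distribution. The dictionary layout, the use of integer frequency indices, and the phase-shift ordering are assumptions made for this illustration and may differ from the actual acquisition scheme.

```python
import numpy as np

def ltc_from_fourier(responses, H, W):
    """Reconstruct one pixel's LTC from Fourier single-pixel measurements.

    responses[(fx, fy)] = (I0, I1, I2, I3): the pixel's gray values when the
    LCD displays sinusoids of (integer) frequency (fx, fy) with phase shifts
    0, pi/2, pi, 3pi/2 (the four-step scheme of [16]). H, W: LCD grid size.
    """
    spectrum = np.zeros((H, W), dtype=complex)
    for (fx, fy), (I0, I1, I2, I3) in responses.items():
        # Complex Fourier coefficient of the LTC at (fx, fy).
        c = (I0 - I2) + 1j * (I1 - I3)
        spectrum[fy, fx] = c
        spectrum[-fy % H, -fx % W] = np.conj(c)   # enforce conjugate symmetry
    ltc = np.real(np.fft.ifft2(spectrum))         # spatial LTC h(x, y)
    return np.clip(ltc, 0, None)                  # clip small negative noise
```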

The LTC at the optical center is required to trace the light. The optical center is a virtual point whose LTC cannot be measured directly. According to geometrical optics, the LTC at the optical center can be obtained by interpolation from the LTC at other depths [22]. As shown in Fig. 6, δ1, δ2, and δO satisfy similar triangles, and the ratio relationship can be formulated as follows:

$$\begin{aligned} \frac{\delta_1}{\delta_2} &= \frac{z_\mathrm{f} - z_1}{z_\mathrm{f} - z_2} = \frac{x_\mathrm{f} - x_1}{x_\mathrm{f} - x_2} = \frac{y_\mathrm{f} - y_1}{y_\mathrm{f} - y_2}\\ \frac{\delta_1}{\delta_\mathrm{O}} &= \frac{z_\mathrm{f} - z_1}{z_\mathrm{f} - z_\mathrm{O}} = \frac{x_\mathrm{f} - x_1}{x_\mathrm{f} - x_\mathrm{O}} = \frac{y_\mathrm{f} - y_1}{y_\mathrm{f} - y_\mathrm{O}} \end{aligned}\tag{10}$$
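A minimal sketch of this similar-triangle interpolation is given below: LTC sample positions measured at depth z1 are mapped to the optical-center depth zO by scaling their offsets from the in-focus point (xf, yf). The function and argument names are illustrative, not taken from the paper.

```python
import numpy as np

def interpolate_to_optical_center(pts_z1, focus_pt, z1, z_f, z_o):
    """Map LTC sample positions measured at depth z1 to the optical-center
    depth z_o using the similar-triangle ratios of Eq. (10).

    pts_z1 : (N, 2) LTC positions (x1, y1) at depth z1;
    focus_pt : (x_f, y_f), the in-focus point of the pixel.
    The weights w_k are kept unchanged.
    """
    ratio = (z_f - z_o) / (z_f - z1)          # delta_O / delta_1
    focus_pt = np.asarray(focus_pt)
    return focus_pt - ratio * (focus_pt - np.asarray(pts_z1))
```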

The optical center is the intersection of the principal rays of all pixels, as shown in Fig. 7. Thus, zO can be calculated by fitting the principal ray of each pixel and estimating their intersection. According to geometrical optics, the energy of light is concentrated around the principal ray. Thus, the gray-scale centroid of the LTC, as shown in Fig. 8, can be regarded as the intersection of the principal ray and the LTC. The gray-scale centroid is formulated as follows:

$$x_g = \frac{\sum_{(x,y)\in D} x \cdot h(x,y,z;u_\mathrm{c},v_\mathrm{c})}{\sum_{(x,y)\in D} h(x,y,z;u_\mathrm{c},v_\mathrm{c})},\qquad y_g = \frac{\sum_{(x,y)\in D} y \cdot h(x,y,z;u_\mathrm{c},v_\mathrm{c})}{\sum_{(x,y)\in D} h(x,y,z;u_\mathrm{c},v_\mathrm{c})}\tag{11}$$
where (xg, yg) is the gray-scale centroid of the LTC, D represents the LCD area captured by the camera pixel, and h(x, y, z; uc, vc) is the LTC. Given the depth z and the pixel size s of the LCD, the point (xg·s, yg·s, z) is a sample point of the principal ray. Several samples of the principal ray can be calculated by measuring the LTC at different depths, as shown in Fig. 8, and the equation of the principal ray can then be fitted. The optical center depth zO is obtained by estimating the intersection of the principal rays of all camera pixels.
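The sketch below shows one way to implement this chain: the centroid of Eq. (11), a least-squares line fit through the centroids measured at several depths, and a least-squares intersection of many principal rays. It is a simplified sketch of the procedure described here, not the authors' code.

```python
import numpy as np

def gray_centroid(h, xs, ys):
    """Gray-scale centroid of one LTC slice h(x, y), Eq. (11).
    xs, ys : coordinate grids of the same shape as h (e.g. from np.meshgrid)."""
    total = h.sum()
    return (xs * h).sum() / total, (ys * h).sum() / total

def fit_ray(points):
    """Least-squares 3D line through centroid samples (N, 3): returns a point
    on the principal ray and its unit direction (first right singular vector)."""
    points = np.asarray(points)
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c)
    return c, vt[0]

def rays_intersection(origins, dirs):
    """Least-squares point closest to all principal rays (the optical center)."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, dirs):
        P = np.eye(3) - np.outer(d, d)     # projector orthogonal to the ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)
```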

Fig. 6. LTC varies with depth. δ1, δ2, and δO represent the length of the LTC at z1, z2, and zO projected onto the Y axis, respectively.

Fig. 7. The optical center is the intersection of the principal rays according to geometrical optics.

Fig. 8. LTC measured at different depths and the gray-scale centroid calculated for a certain pixel.

In summary, the measurement of LTC at the optical center includes the following steps:

  • 1. Measuring the LTC at different depths.
  • 2. Calculating gray-scale centroids and fitting principal rays.
  • 3. Estimating the intersection of the principal rays to obtain the depth of the optical center.
  • 4. Calculating the LTC at the optical center by interpolation from the LTC at other depths.

3. Experiment

The experimental setup is shown in Fig. 9. Figure 9(a) shows the real-world FPP system used in this study, which includes two cameras and one projector. The resolution of the cameras is 4096×2168 pixels, and that of the projector is 1920×1080 pixels. The focal length of the camera lenses is 16 mm. The designed measurement accuracy of the real-world FPP system is 0.02 mm. Figure 9(b) shows the LTC measurement setup. The resolution of the LCD is 3840×2160 pixels, and its pixel size is 0.2715 mm × 0.2715 mm. The camera is taken from the FPP system and fixed on a precision translation stage, which is used to change the depth between the camera and the LCD. The LCD, camera, and precision translation stage are controlled by a workstation.

Fig. 9. Experimental setup. (a) FPP system consisting of two cameras and one projector; (b) LTC measurement setup including LCD, camera, and precision translation stage.

3.1 Calculating the LTC at the optical center

Keeping the coordinate systems of the LCD and camera parallel simplifies the calculation of the gray-scale centroid and the principal-ray fitting. Thus, the camera pose is adjusted so that the optical axis of the camera is perpendicular to the LCD [22]. To achieve this alignment, a crosshair is displayed on the LCD screen and the camera pose is adjusted until the crosshair is at the center of the captured image. The precision translation stage is then moved and the adjustment is repeated until the crosshair remains at the center of the captured image at different depths. The gamma effect of the LCD is corrected prior to displaying the patterns [23].

In this study, the LTC is measured at six different depths, and the depth is changed by controlling the precision translation stage. The focus distance zf of the camera is 760 mm. The frequencies fx, fy of the Fourier sinusoidal structured patterns [14] are in the range of [0, 23/24]. Following the principles in Section 2.3, the LTC and gray-scale centroid of a certain pixel are shown in Fig. 10 and Table 1, respectively.

Fig. 10. LTC at six different depths of a certain pixel of the FPP system’s left camera shown in Fig. 9(a).

Table 1. Gray scale centroids at different depths of the pixel

According to Table 1, the principal ray of the pixel can be fitted. Similarly, the principal rays of all camera pixels are obtained, as shown in Fig. 11. The average fitting residual of the principal rays is 1.491 mm, and the standard deviation of the fitting residual is 0.809 mm.

Fig. 11. Principal rays of each pixel and the optical center of the FPP system’s left camera.

The optical center is obtained by calculating the intersection of the principal rays shown in Fig. 11 using the least squares method, and its coordinates are given in Table 2.

Table 2. Coordinate of the optical center of the FPP system’s left camera

Then, according to Eq. (10), the LTC at the optical center of each pixel can be obtained by interpolation. This process is repeated for the FPP system’s right camera, and the LTCs at the optical center of certain pixels of the two cameras are shown in Fig. 12.

Fig. 12. LTC at the optical center of a certain pixel. (a) FPP system’s left camera; (b) FPP system’s right camera.

The motion of the precision translation stage has an error, which may influence the accuracy of the depth values. Denoting this error by Δ, the interpolation equation can be written as follows:

$$\begin{aligned} \frac{\delta_1}{\delta_2} &= \frac{(z_\mathrm{f} + \Delta) - (z_1 + \Delta)}{(z_\mathrm{f} + \Delta) - (z_2 + \Delta)} = \frac{z_\mathrm{f} - z_1}{z_\mathrm{f} - z_2}\\ \frac{\delta_1}{\delta_\mathrm{O}} &= \frac{(z_\mathrm{f} + \Delta) - (z_1 + \Delta)}{(z_\mathrm{f} + \Delta) - (z_\mathrm{O} + \Delta)} = \frac{z_\mathrm{f} - z_1}{z_\mathrm{f} - z_\mathrm{O}} \end{aligned}\tag{12}$$
where zf+Δ, z1+Δ, and z2+Δ are the true values, and zf, z1, and z2 are the values read from the precision translation stage. According to Eq. (12), the interpolation accuracy is not influenced by the motion error of the precision translation stage, because the common offset Δ cancels in the ratios.

3.2 Validating the simulation method

The LTCs at the optical center of each pixel measured in Section 3.1 are input to the ray tracing algorithm, and the calibration results of the real-world FPP system are shown in Table 3. The calibration reprojection error is 0.021 pixels for the camera and 0.043 pixels for the projector.

Table 3. Extrinsic and intrinsic parameters of the FPP system shown in Fig. 9(a).

As shown in Fig. 13, to validate the simulation method, a standard sphere with a diameter of 50.8007 mm is measured by the real-world FPP system and by the virtual scanner using the proposed simulation method. The standard sphere is also measured by the virtual scanner using the Gaussian function for comparison.

Fig. 13. Experimental setup. (a) Standard sphere measured by the FPP system; (b) a–h represent the positions of the sphere.

The standard sphere is placed at eight different positions successively, as shown in Fig. 13(b), and the point clouds of the real-world FPP system and the virtual scanner are aligned using the iterative closest point (ICP) algorithm [24]. Then, the average and root mean square error (RMSE) of the distances between corresponding points are computed to evaluate the consistency of the real-world FPP system and the virtual scanner, which represents the accuracy of the simulation. The fringe images captured by the FPP system and the virtual scanner at a certain position are shown in Fig. 14, and the results are shown in Fig. 15 and Table 4. The sphere fitting errors are listed in Table 5.
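The consistency metrics of Table 4 can be computed, after alignment, roughly as sketched below. The sketch assumes the clouds are already registered (e.g., by ICP [24]) and uses signed point-to-plane distances to the nearest simulated point; the exact correspondence and sign convention used in the paper may differ, and the normals argument is an assumption of this sketch.

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_consistency(real_pts, sim_pts, sim_normals):
    """Signed point-to-plane distances from each real point to its nearest
    simulated point (clouds assumed already aligned). Returns the average
    (systematic offset) and RMSE, in the spirit of Table 4.

    real_pts : (N, 3), sim_pts : (M, 3), sim_normals : (M, 3) unit normals.
    """
    d, idx = cKDTree(sim_pts).query(real_pts)        # nearest simulated point
    signed = np.einsum('ij,ij->i', real_pts - sim_pts[idx], sim_normals[idx])
    return signed.mean(), np.sqrt(np.mean(signed ** 2))
```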

Fig. 14. Fringe images at a certain position. (a) Captured by the FPP system; (b) Captured by the virtual scanner.

Fig. 15. Point clouds and distances between the corresponding points. The left side of (a)–(h) shows the results using LTC, and the right side shows the results using Gaussian function. The defocus of (a)–(d) is slight, and the defocus of (e)–(h) is large.

Table 4. Average and RMSE of distances between the corresponding points

Table 5. Sphere fitting error of the FPP system and virtual scanner

According to Table 4, when the LTC is used, the average distance is nearly 0 and the RMSE is less than 0.022 mm. The results for positions (a)–(d) show that the simulation accuracy using the LTC and the Gaussian function is similar, demonstrating that camera defocus can be approximated by a Gaussian function when the defocus is slight. The results for positions (e)–(h) show that the average error of the simulation using the LTC is smaller than that using the Gaussian function, demonstrating that the systematic error of the proposed method is reduced compared with the Gaussian function under large defocus conditions. The increase in RMSE indicates that random error remains in the LTC measurement; noise suppression for the LTC measurement therefore needs to be studied in future work. Moreover, Table 5 shows that the variation trend of the sphere fitting error of the simulation using the LTC is more consistent with the real-world FPP system.

A standard ball bar with a length of 209.8219 mm is also measured by the real-world FPP system and by the virtual scanner using the proposed simulation method. The standard ball bar is placed at four different positions successively, as shown in Fig. 16(b). The 3D data of the standard ball bar are used to calculate the two-ball center distance, and the results are shown in Table 6.
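The two-ball center distance can be evaluated with a linear least-squares sphere fit of each ball's points followed by the distance between the fitted centers, as sketched below; this is a generic implementation, not necessarily the procedure used by the authors.

```python
import numpy as np

def fit_sphere(pts):
    """Linear least-squares sphere fit: solve |p|^2 = 2 c.p + (r^2 - |c|^2)
    for the center c and radius r."""
    pts = np.asarray(pts)
    A = np.hstack([2 * pts, np.ones((len(pts), 1))])
    b = (pts ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center, k = sol[:3], sol[3]
    radius = np.sqrt(k + center @ center)
    return center, radius

def ball_bar_length(pts_a, pts_b):
    """Two-ball center distance from the point clouds of the two spheres."""
    ca, _ = fit_sphere(pts_a)
    cb, _ = fit_sphere(pts_b)
    return np.linalg.norm(ca - cb)
```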

Fig. 16. (a) Standard ball bar; (b) a–d represent the positions of the standard ball bar. The defocus of (a) and (d) is large, and the defocus of (b) and (c) is slight.

Table 6. Two-ball center distance measured by the FPP system and the virtual scanner

According to Table 6, the results for positions (a) and (d) show that the simulation using the LTC is closer to the real-world FPP system than that using the Gaussian function under large defocus conditions. The results for positions (b) and (c) show that the simulation using the LTC can be approximated by that using the Gaussian function under slight defocus conditions. Moreover, the results for positions (a)–(d) show that the length measured by the real-world FPP system changes with the defocus condition, which can be reproduced by the proposed method.

As shown in Fig. 17, an object is measured using the virtual scanner, and the error of the 3D points is calculated with respect to the CAD model, which can be regarded as the ground truth. Thus, the effect of different factors on fringe analysis can be quantified, and the training data generated by the virtual scanner can improve the performance of deep learning techniques used in fringe analysis. These studies will be conducted in future work.

Fig. 17. (a) Object measured by the virtual scanner; (b) The difference between the result and the ground-truth.

4. Conclusion

This research presents a novel simulation framework to characterize a real-world FPP system in a virtual environment. The ray tracing algorithm is used to model the light propagation between the cameras and the projector, and single-pixel imaging is used to measure the LTCs at the optical center of the cameras, which allows the imaging process of the cameras to be simulated accurately. The experiments indicate that the difference between the proposed method and the real-world FPP system is very small, and that the change in length measurement of the real-world FPP system caused by defocus can be reproduced by the proposed method. Moreover, the accuracy of the simulation using the LTC is higher than that using the Gaussian function under large defocus conditions. The experiments demonstrate that the proposed simulation method achieves high accuracy and can be used for fringe analysis and deep learning techniques in future work.

In the proposed method, the principal rays are calculated based on the assumption that the light energy is concentrated around the principal ray. However, the energy becomes more dispersed when larger defocus is considered. Therefore, the calculation of the principal rays can be improved by analyzing this dispersion in future work.

Funding

National Key Research and Development Program of China (2020YFB2010701); National Natural Science Foundation of China (61735003).

Disclosures

The authors declare no conflicts of interest.

Data availability

No data were generated or analyzed in the presented research.

References

1. J. Xu and S. Zhang, “Status, challenges, and future perspectives of fringe projection profilometry,” Opt. Lasers Eng. 135, 106193 (2020).

2. S. Zhang, “Absolute phase retrieval methods for digital fringe projection profilometry: A review,” Opt. Lasers Eng. 107, 28–37 (2018).

3. S. Zhang, “Comparative study on passive and active projector nonlinear gamma calibration,” Appl. Opt. 54(13), 3834–3841 (2015).

4. L. Rao and F. Da, “Local blur analysis and phase error correction method for fringe projection profilometry systems,” Appl. Opt. 57(15), 4267–4276 (2018).

5. C. Zuo, S. Feng, L. Huang, T. Tao, W. Yin, and Q. Chen, “Phase shifting algorithms for fringe projection profilometry: a review,” Opt. Lasers Eng. 109, 23–59 (2018).

6. G. Rao, L. Song, S. Zhang, X. Yang, K. Chen, and J. Xu, “Depth-driven variable-frequency sinusoidal fringe pattern for accuracy improvement in fringe projection profilometry,” Opt. Express 26(16), 19986–20008 (2018).

7. P. Kühmstedt, C. Bräuer-Burchardt, and G. Notni, “Measurement accuracy of fringe projection depending on surface normal direction,” Proc. SPIE 7432, 743203 (2009).

8. C. Zuo, J. Qian, S. Feng, W. Yin, Y. Li, P. Fan, J. Han, K. Qian, and Q. Chen, “Deep learning in optical metrology: a review,” Light: Sci. Appl. 11(1), 39 (2022).

9. S. Feng, C. Zuo, Y. Hu, Y. Li, and Q. Chen, “Deep-learning-based fringe-pattern analysis with uncertainty estimation,” Optica 8(12), 1507–1510 (2021).

10. S. Feng, C. Zuo, L. Zhang, W. Yin, and Q. Chen, “Generalized framework for non-sinusoidal fringe analysis using deep learning,” Photonics Res. 9(6), 1084–1098 (2021).

11. P. Lutzke, P. Kühmstedt, and G. Notni, “Fast error simulation of optical 3D measurements at translucent objects,” Proc. SPIE 8493, 84930U (2012).

12. P. Middendorf, P. Kern, N. Melchert, M. Kästner, and E. Reithmeier, “A GPU-based ray tracing approach for the prediction of multireflections on measurement objects and the a priori estimation of low-reflection measurement poses,” Proc. SPIE 11787, 9 (2021).

13. Y. Zheng and B. Li, “Digital twin-trained deep convolutional neural networks for fringe analysis,” Proc. SPIE 11698, 16 (2021).

14. H. Jiang, Y. Li, H. Zhao, X. Li, and Y. Xu, “Parallel single-pixel imaging: A general method for direct–global separation and 3D shape reconstruction under strong global illumination,” Int. J. Comput. Vis. 129(4), 1060–1086 (2021).

15. P. Shirley and R. Keith Morley, Realistic Ray Tracing, 2nd ed. (A K Peters, 2008).

16. Z. Zhang, X. Ma, and J. Zhong, “Single-pixel imaging by means of Fourier spectrum acquisition,” Nat. Commun. 6(1), 6225 (2015).

17. H. Jiang, Y. Liu, X. Li, H. Zhao, and F. Liu, “Point spread function measurement based on single-pixel imaging,” IEEE Photonics J. 10(6), 1–15 (2018).

18. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000).

19. S. Zhang and P. S. Huang, “Novel method for structured light system calibration,” Opt. Eng. 45(8), 083601 (2006).

20. C. Zuo, L. Huang, M. Zhang, Q. Chen, and A. Asundi, “Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review,” Opt. Lasers Eng. 85, 84–103 (2016).

21. R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision (Cambridge University, 2003), Chap. 9.

22. H. Jiang, Y. Wang, X. Li, H. Zhao, and Y. Li, “Space-variant point spread function measurement and interpolation at any depth based on single-pixel imaging,” Opt. Express 28(7), 9244–9258 (2020).

23. Y. Matsushita, “Radiometric response function,” in Computer Vision: A Reference Guide, K. Ikeuchi, ed. (Springer, 2014).

24. P. J. Besl and N. D. McKay, “Method for registration of 3-D shapes,” in Sensor Fusion IV: Control Paradigms and Data Structures (International Society for Optics and Photonics, 1992), pp. 586–606.
