
Large-scale calibration method for MEMS-based projector 3D reconstruction

Open Access

Abstract

Projectors based on Micro-Electro-Mechanical Systems (MEMS) have the advantages of small size and low cost. Moreover, uniaxial MEMS projectors offer high projection accuracy and have been widely used in structured light 3D reconstruction. However, existing calibration methods for uniaxial MEMS projectors are not effective in large-scale scenes. To solve this problem, this paper proposes a novel and easily implemented large-scale calibration method. The proposed method first calibrates part of the light planes at a fixed sampling period, then obtains the remaining light planes with a non-fixed rotating shaft linear interpolation method. Experimental results verify that the proposed method attains high accuracy over a large depth of field with only 11 sets of calibration data. Specifically, at a distance of 3000mm, the standard deviation of the plane fitting error reaches 0.2584mm on a standard plane, and the measurement accuracy attains 0.9124mm on a standard step object with a 200mm interval.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Currently, 3D modeling technology based on structured light has been widely used in many scenarios, such as industrial inspection, heritage conservation, computer-aided medical diagnosis, and intelligent terminals [1–8]. Generally, a structured light based 3D reconstruction system consists of a camera and a projector, and its working principle comprises three steps: first, the projector projects a coded structured light pattern; second, the camera captures the structured light; finally, the calibrated system calculates the 3D coordinates of each pixel from the encoded information [9–14]. As a kind of structured light projector, Digital Light Processing (DLP) has been widely used due to its high precision. The core unit of DLP is the digital micromirror device, which uses MEMS technology to build millions of micron-scale movable mirrors to project images. This complex structure, however, makes DLP projectors large and expensive. In contrast, the projector with a single MEMS mirror, called a MEMS-based projector, is much smaller and much cheaper. MEMS-based projectors can be divided into two technical categories: (1) biaxial MEMS projectors and (2) uniaxial MEMS projectors. Under the same conditions, the accuracy of the biaxial type is generally worse than that of the uniaxial type, because high-precision orthogonal projection is difficult to achieve. However, the use of uniaxial projectors is often limited by the low accuracy of existing calibration methods. To solve this problem, this paper proposes a high-precision monocular calibration method for the uniaxial MEMS-based projector system.

Recently, the monocular reconstruction system based on the phase-height model for a uniaxial projector has become a classical approach to phase-measuring profilometry [15–18]. In this model, however, the positions of the optical centers and optical axes of the projector and camera are limited by strict conditions. To address this problem, Zhang [19] proposed a polynomial function between height and phase, obtained by moving the reference plane a standard distance each time during calibration. This method only requires that the camera optical axis be perpendicular to the reference plane, which is simpler than the traditional method. To further relax the spatial restrictions of classical methods, Da [20] proposed a new model based on the relationship between the fringe patterns on the projected plane and the fringe stripes projected into 3D space, which no longer restricts the camera and projector positions. Meanwhile, other scholars used different models to achieve similar results. For example, Vignesh and Holton [21] followed Zhang and Huang’s concept [22] and treated the projector as an inverse camera. They proposed a method that removes one degree of freedom from projector calibration using least-squares estimation and achieves accuracy comparable to Zhang and Huang’s method.

The above methods easily lead to low reconstruction accuracy because of the lensless structure of MEMS projectors. For this reason, some scholars have recently studied new system models based on the structural characteristics of MEMS mirrors. For instance, Yang [23] proposed a curved surface equation with seven parameters to replace the ideal plane equation as the mathematical model of the light stripes, called the curved light surface model (CLSM). CLSM can effectively reduce the distortion of the reconstructed point cloud. Nevertheless, its projection optical axis needs to be adjusted in advance, which greatly increases the calibration requirements. Furthermore, the errors introduced by this projector adjustment shift the curved light surface, so the reconstruction error grows with distance. In addition, the complex curved surface interpolation makes it difficult to determine the system equations of the uncalibrated curved surfaces accurately. Together, these problems result in low reconstruction accuracy of CLSM over long distances.

To improve the performance of the classical phase-coordinate polynomial fitting model, Miao [24] used the Light Plane Model (LPM) to directly establish a reciprocal polynomial between phase and spatial coordinates for each pixel. However, this method only guarantees valid values for pixels where the structured light is projected onto the calibration board. Under large-scale calibration conditions, the calibration data needs to cover the entire calibration space; yet, because of the limited size of the calibration board, the captured image data is often valid for only part of the pixels, and the effective area shrinks as the distance increases. Therefore, good fitting results require a large amount of calibration data, which forces the calibration board to be placed at impractical positions. In particular, when only 11 sets of calibration data are used, the method cannot meet the accuracy requirement of large-scale reconstruction.

To better address these issues, a novel calibration method is proposed for 3D reconstruction with a uniaxial MEMS-based projector in a monocular system. First, the proposed method uses the principle of line structured light to establish the isophase light plane model. Second, the method accurately calibrates part of the light planes by calculating the 3D spatial coordinates of the sub-pixel isophase points at a fixed phase-period interval. Finally, the proposed method uses linear interpolation to obtain the uncalibrated light planes. Because the proposed method is based on the LPM, there is no need to pre-adjust the optical axis of the projector. Moreover, the proposed interpolation method can accurately obtain the system equations of the uncalibrated light planes. For this reason, the calibration board only needs to cover part of each light plane to complete the plane equation fitting. Therefore, the proposed method can calibrate these light planes well with a small amount of data.

2. Principle

2.1 Pinhole imaging model

The pinhole model [25] is used to describe and simplify the camera imaging process, as shown in Fig. 1. $O_w-\left (X_w,Y_w,Z_w\right )$ is the world coordinate system (WCS), and $O_c-(X_c,\ Y_c,\ Z_c)$ is the camera coordinate system (CCS), where $O_c$ is the projection center of the camera. For a point $P\ \left (X_w,\ Y_w,\ Z_w\right )$ in the WCS, its image plane coordinates are $\left (u,\ v\right )$. The relationship between the WCS and the CCS can be represented as:

$$\left[ \begin{array}{c} X_c\\ Y_c\\ Z_c\\ \end{array} \right] =\left[ \begin{matrix} \boldsymbol{R} & \boldsymbol{t}\\ \end{matrix} \right] \left[ \begin{array}{c} X_w\\ Y_w\\ Z_w\\ 1\\ \end{array} \right] .$$

The point WCS coordinate and its projection on the image sensor can be represented as:

$$s_c\left[ \begin{array}{c} u\\ v\\ 1\\ \end{array} \right] =\boldsymbol{A_c}\left[ \begin{matrix} \boldsymbol{R} & \boldsymbol{t}\\ \end{matrix} \right] \left[ \begin{array}{c} X_w\\ Y_w\\ Z_w\\ 1\\ \end{array} \right] ,$$
with
$$\boldsymbol{A_c}=\left[ \begin{matrix} f_x & \gamma & u_0\\ 0 & f_y & v_0\\ 0 & 0 & 1\\ \end{matrix} \right] ,$$
where $\boldsymbol{R}$ is the rotation matrix, $\boldsymbol{t}$ is the translation vector, and $s_c$ denotes a scale factor. $\boldsymbol{A_c}$ describes the intrinsic parameters of the camera: $f_x$ and $f_y$ are the effective focal lengths along the $u$ and $v$ directions, $\left( u_0,v_0 \right)$ are the coordinates of the principal point, and $\gamma$ denotes the skew factor of the $u$ and $v$ axes.
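To make Eqs. (1)–(3) concrete, the following Python sketch projects a world point into pixel coordinates. The numerical values of $\boldsymbol{A_c}$, $\boldsymbol{R}$, and $\boldsymbol{t}$ below are placeholders for illustration, not the calibrated parameters reported later in this paper.

```python
import numpy as np

# Placeholder intrinsics A_c (not the calibrated values reported in Section 4)
A_c = np.array([[1700.0,    0.0, 640.0],
                [   0.0, 1700.0, 512.0],
                [   0.0,    0.0,   1.0]])

# Placeholder extrinsics [R | t]: identity rotation, 1000 mm offset along the optical axis
R = np.eye(3)
t = np.array([0.0, 0.0, 1000.0])

def project_point(P_w, A_c, R, t):
    """Project a world point (X_w, Y_w, Z_w) to pixel coordinates (u, v) via Eqs. (1)-(3)."""
    P_c = R @ np.asarray(P_w, float) + t      # Eq. (1): world -> camera coordinates
    uvs = A_c @ P_c                           # Eq. (2): homogeneous image coordinates (s_c*u, s_c*v, s_c)
    return uvs[0] / uvs[2], uvs[1] / uvs[2]   # divide by the scale factor s_c

print(project_point([100.0, 50.0, 0.0], A_c, R, t))
```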

Fig. 1. Pinhole imaging model.

Meanwhile, the distortion produced by the camera can be represented by the following model [26]:

$$\left[ \begin{array}{c} u_d\\ v_d\\ \end{array} \right] =\left( 1+k_1r^2+k_2r^4+k_3r^6 \right) \left[ \begin{array}{c} u\\ v\\ \end{array} \right] +\left[ \begin{array}{c} 2p_1uv+p_2\left( r^2+2u^2 \right)\\ 2p_2uv+p_1\left( r^2+2v^2 \right)\\ \end{array} \right] ,$$
with
$$r^2=u^2+v^2,$$
where $k_1, k_2, k_3$ and $p_1, p_2$ are the radial and tangential distortion coefficients, respectively; $\left( u_d, v_d \right)$ are the distorted image plane coordinates, and $r$ is the distance from the undistorted image point to the principal point.
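A minimal sketch of Eqs. (4)–(5) follows; the coordinates $(u, v)$ are those of the undistorted image point as in Eq. (4), and the coefficient values in the example call are illustrative assumptions rather than the calibrated values.

```python
def apply_distortion(u, v, k1, k2, k3, p1, p2):
    """Map an undistorted image point (u, v) to its distorted position (u_d, v_d), Eqs. (4)-(5)."""
    r2 = u * u + v * v                                           # Eq. (5)
    radial = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3         # radial distortion factor
    u_d = radial * u + 2 * p1 * u * v + p2 * (r2 + 2 * u * u)    # add tangential terms
    v_d = radial * v + 2 * p2 * u * v + p1 * (r2 + 2 * v * v)
    return u_d, v_d

# Illustrative coefficients only (of the same order of magnitude as those reported in Section 4)
print(apply_distortion(0.10, -0.05, k1=-0.11, k2=0.15, k3=0.0, p1=5e-4, p2=6e-4))
```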

2.2 Light plane model

The system structure of a uniaxial MEMS projector is shown in Fig. 2. The laser diode is used as a light source of the projector system, and the original light is transformed into a line laser through the Powell lens. The output line laser hits the center of the MEMS mirror. Simultaneously, the MEMS mirror scans at a certain frequency. Then, the camera can capture the structured light pattern reflected by the MEMS mirror. Different structured light patterns can be obtained by adjusting the laser light source through the external control circuit.

Fig. 2. A uniaxial MEMS-based projector structure diagram.

Assuming the line laser is unchanged after reflection by the MEMS mirror, the projected series of sinusoidal stripes can be used to obtain the absolute phase with a phase unwrapping method. Each unique line laser reflection produces a light plane corresponding to a unique absolute phase, i.e., every point within a given light plane shares the same absolute phase. According to this isophase plane theory, the 3D coordinates of points within the light field can be derived by calibrating the system equation of the light plane corresponding to each phase.

3. Proposed method: large-scale calibration and 3D reconstruction

In order to achieve large-scale calibration, the phase information must characterize the 3D spatial information well even in long-distance scenes. Since phase accuracy degrades with distance, the common approach of fitting a phase-to-coordinate polynomial performs worse at long range. The following calibration method and the non-fixed rotating shaft light plane interpolation method better locate the spatial position of each light plane and suppress the loss of accuracy caused by the various kinds of phase error.

3.1 System calibration

The system calibration method is shown in Fig. 3. In this method, $\lambda \pi$ is the sampling period of the light planes. The calibration board is arranged at different positions, and the MEMS mirror projects a sequence of structured light patterns at each position. The intrinsic parameters $\boldsymbol{A_c}$ of the camera and the rotation matrix and translation vector $\left[\boldsymbol{R} \quad \boldsymbol{t} \right]$ of the calibration board at each position relative to the camera are obtained by Zhang’s calibration method [25]. The plane equation of the calibration board in the WCS is $z=0$ with normal $\boldsymbol{n}_{\boldsymbol{w}}=\left( 0 \quad 0 \quad 1 \right) ^T$; using Eq. (1), $\boldsymbol{n}_{\boldsymbol{w}}$ is rotated into the CCS:

$$\boldsymbol{n}_{\boldsymbol{c}}=R\cdot \boldsymbol{n}_{\boldsymbol{w}}=\left[ \begin{matrix} r_{13} & r_{23} & r_{33}\\ \end{matrix} \right] ^T,$$
where $r_{13},r_{23},r_{33}$ are the components in the first, second and third rows of the third column of the rotation matrix $\boldsymbol{R}$, respectively. The calibration board plane also passes through the point $T\left( t_1,t_2,t_3 \right)$, where $t_1,t_2,t_3$ are the three components of the translation vector $\boldsymbol{t}$. This plane equation can then be expressed as:
$$r_{13}x+r_{23}y+r_{33}z-\left( r_{13}t_1+r_{23}t_2+r_{33}t_3 \right) =0.$$

Combining Eq. (7) with the ray equation through the camera optical center (i.e., Eqs. (1) and (2)), we can obtain the 3D coordinates of a pixel $(u, v)$ lying on the intersection of a light plane with the calibration board plane:

$$\begin{cases} X_c=\frac{u-u_0}{f_x}Z_c\\ Y_c=\frac{v-v_0}{f_y}Z_c\\ Z_c=\frac{r_{13}t_1+r_{23}t_2+r_{33}t_3}{r_{13}\frac{u-u_0}{f_x}+r_{23}\frac{v-v_0}{f_y}+r_{33}}\\ \end{cases}.$$
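As a sketch of Eqs. (6)–(8) (the function and variable names are ours, not the paper's): the board normal and offset are taken from $\boldsymbol{R}$ and $\boldsymbol{t}$, and a pixel $(u, v)$ is back-projected onto the board plane.

```python
import numpy as np

def pixel_to_board_point(u, v, A_c, R, t):
    """Back-project pixel (u, v) onto the calibration board plane in the CCS (Eqs. (6)-(8))."""
    fx, fy = A_c[0, 0], A_c[1, 1]
    u0, v0 = A_c[0, 2], A_c[1, 2]
    r13, r23, r33 = R[:, 2]                   # Eq. (6): board normal rotated into the CCS
    t1, t2, t3 = t
    xn, yn = (u - u0) / fx, (v - v0) / fy     # normalized ray direction (xn, yn, 1)
    Zc = (r13 * t1 + r23 * t2 + r33 * t3) / (r13 * xn + r23 * yn + r33)   # Eq. (8)
    return np.array([xn * Zc, yn * Zc, Zc])
```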

As shown in Fig. 3, the camera first captures the structured light on the calibration board at different positions. The absolute phase of each pixel on the calibration board is then calculated for each position, and the sub-pixel coordinates of a specific phase are obtained by linear interpolation. For each sampled phase, this process yields its sub-pixel coordinates at the different board positions. The sub-pixel coordinates corresponding to all isophases are divided into groups, and, using Eq. (8), these isophase sub-pixel coordinates are converted into 3D coordinates in the CCS. The plane equation of the isophase 3D points is then fitted by the weighted least squares method. All the light planes of the sampled phases are calibrated in this way.
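The sub-pixel location of a sampled phase along one image row can be found by linear interpolation of the unwrapped phase, as in this sketch; it assumes the unwrapped phase increases monotonically along the row, which is the usual case for a uniaxial projector.

```python
import numpy as np

def isophase_subpixel_column(phase_row, target_phase):
    """Sub-pixel column where a monotonically increasing unwrapped-phase row crosses target_phase."""
    idx = np.searchsorted(phase_row, target_phase)
    if idx == 0 or idx == len(phase_row):
        return None                                        # target phase not covered by this row
    p_left, p_right = phase_row[idx - 1], phase_row[idx]
    frac = (target_phase - p_left) / (p_right - p_left)    # linear interpolation between neighbours
    return (idx - 1) + frac

row = np.linspace(0.0, 2 * np.pi * 128, 1280)              # synthetic unwrapped-phase row for testing
print(isophase_subpixel_column(row, 64 * np.pi))
```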

Fig. 3. System calibration model of a uniaxial MEMS-based projector.

3.2 3D reconstruction

Since the reconstruction needs the system equation of the light plane corresponding to an arbitrary phase, we propose a novel interpolation method between light planes to complete the 3D reconstruction. In the light plane model, an isophase light plane can be expressed in the CCS as $Ax+By+Cz+D=0$. As shown in Fig. 4, the phase values of the light planes $A$ and $B$ are $\varPhi _A$ and $\varPhi _B$, and the phase value of any light plane $X$ between them is $\varPhi _x$. The intersection line of two adjacent light planes $A: A_1x+B_1y+C_1z+D_1=0$ and $B: A_2x+B_2y+C_2z+D_2=0$ is the rotating shaft of the MEMS mirror. This rotating shaft is generally nearly parallel to the y-axis of the CCS, so there is a point $O_p$ on the rotating shaft:

$$\left( \frac{C_1D_2-D_1C_2}{A_1C_2-C_1A_2},\ 0,\ \frac{A_1D_2-D_1A_2}{C_1A_2-A_1C_2} \right).$$

Fig. 4. Light plane linear interpolation model. (a) Interpolation model 3D diagram; (b) Interpolation model vertical view.

Assume all light planes are trapezoids of equal height, where the upper edge is the rotating shaft and the lower edge corresponds to the farthest projection distance. The two perpendiculars from $O_p$ intersect the lower edges at $P_A$ and $P_B$ respectively, forming two vectors $\overrightarrow {O_pP_A}$ and $\overrightarrow {O_pP_B}$ on the light planes $A$ and $B$. The plane $C$ is formed by the lower edges of the light planes $A$ and $B$. The proposed method uses the unwrapped phase to encode the plane $C$, so the pixel coordinates on the line $P_AP_B$ have a linear relationship with the unwrapped phase, and there is a one-to-one correspondence between phase values and 3D coordinates on $P_AP_B$. Therefore, any point $P_x$ on $P_AP_B$ satisfies:

$$\frac{\overrightarrow{P_AP_x}}{\overrightarrow{P_AP_B}}=\frac{\varPhi _x-\varPhi _A}{\varPhi _B-\varPhi _A}.$$

From Eq. (10), we obtain:

$$\overrightarrow{O_PP_x}=\overrightarrow{O_pP_A}+\overrightarrow{P_AP_x}=\frac{\varPhi _B-\varPhi _x}{\varPhi _B-\varPhi _A}\overrightarrow{O_pP_A}+\frac{\varPhi _x-\varPhi _A}{\varPhi _B-\varPhi _A}\overrightarrow{O_pP_B}.$$

The normal vectors of the light plane $A$ and $B$ are $\boldsymbol {n}_{\boldsymbol {A}}=\left ( x_{nA} \;\; y_{nA} \;\; z_{nA}\right ) ^T$ and $\boldsymbol {n}_{\boldsymbol {B}}=\left ( x_{nB}\;\; y_{nB} \;\; z_{nB} \right ) ^T$, and the light plane $X$ where $P_x$ is located has the normal vector $\boldsymbol {n}_{\boldsymbol {x}}=\left ( x_{nx} \quad y_{nx} \quad z_{nx} \right ) ^T$. The vector $\boldsymbol {n}_{\boldsymbol {x}}$ can be expressed as:

$$\boldsymbol{n}_{\boldsymbol{x}}=\frac{\varPhi _B-\varPhi _x}{\varPhi _B-\varPhi _A}\boldsymbol{n}_{\boldsymbol{A}}+\frac{\varPhi _x-\varPhi _A}{\varPhi _B-\varPhi _A}\boldsymbol{n}_{\boldsymbol{B}}.$$

According to the coordinate of the point $O_p$, the equation of the light plane $X: A_xx+B_xy+C_xz+D_x=0$ in the CCS can be expressed as:

$$\begin{cases} A_x=x_{nx}\\ B_x=y_{nx}\\ C_x=z_{nx}\\ D_x={-}\boldsymbol{n}_{\boldsymbol{x}}^{T}O_p\\ \end{cases}.$$
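Combining Eqs. (9), (12), and (13), the coefficients of the interpolated light plane $X$ follow directly from the two neighbouring calibrated planes and their phases, as in this sketch (plane coefficients are passed as tuples $(A, B, C, D)$; the point $O_p$ is obtained by setting $y=0$ in the two plane equations):

```python
import numpy as np

def interpolate_light_plane(plane_a, phi_a, plane_b, phi_b, phi_x):
    """Light plane of phase phi_x between two calibrated planes, using Eqs. (9), (12), and (13)."""
    A1, _, C1, D1 = plane_a
    A2, _, C2, D2 = plane_b
    # Point O_p on the current (non-fixed) rotating shaft, with y = 0 (Eq. (9))
    O_p = np.array([(C1 * D2 - D1 * C2) / (A1 * C2 - C1 * A2),
                    0.0,
                    (A1 * D2 - D1 * A2) / (C1 * A2 - A1 * C2)])
    w = (phi_x - phi_a) / (phi_b - phi_a)                    # phase-ratio weight, Eq. (10)
    n_x = (1.0 - w) * np.asarray(plane_a[:3], float) \
        + w * np.asarray(plane_b[:3], float)                 # Eq. (12): interpolated normal
    D_x = -float(n_x @ O_p)                                  # Eq. (13): plane X passes through O_p
    return n_x[0], n_x[1], n_x[2], D_x
```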

With the above partial light plane calibration method and the interpolation method between light planes, reconstruction can be completed by projecting the same structured light as in the calibration process. Moreover, because the phase error in the middle region of the MEMS projection field is smaller than at the two sides, a single fixed rotating shaft obtained by fitting all light planes would let the large phase errors on both sides affect the interpolation results in the middle part, reducing the overall accuracy. In contrast, the non-fixed rotating shaft interpolation method preserves higher accuracy for the interpolated light planes in the regions where the phase is relatively accurate. Therefore, we use the intersection line of the two adjacent calibrated light planes as the current rotating shaft in the light plane interpolation process. The 3D reconstruction comprises the following three steps:

Step 1: Project the same structured light patterns as in the calibration process onto the object, and unwrap the phase to retrieve the absolute phase.

Step 2: For the absolute phase value of each pixel, find the light plane equations corresponding to the nearest calibrated phases above and below it. Calculate a point on the rotating shaft of these two planes by Eq. (9), and calculate the normal vector of the light plane corresponding to this phase by Eq. (12). The plane equation of this light plane in the CCS is then obtained by Eq. (13).

Step 3: Combine the light plane equation obtained in Step 2 with the ray equation through the camera optical center (Eqs. (1) and (2)) to calculate the 3D coordinates in the CCS corresponding to each pixel, as sketched below.
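A sketch of Step 3: the camera ray of a pixel is intersected with its interpolated light plane $A_xx+B_xy+C_xz+D_x=0$ to recover the CCS coordinates (the plane coefficients are assumed to come from the interpolation sketch above).

```python
import numpy as np

def reconstruct_pixel(u, v, plane, A_c):
    """Intersect the camera ray of pixel (u, v) with a light plane (A, B, C, D) -- Step 3."""
    fx, fy = A_c[0, 0], A_c[1, 1]
    u0, v0 = A_c[0, 2], A_c[1, 2]
    A, B, C, D = plane
    xn, yn = (u - u0) / fx, (v - v0) / fy     # the ray is (xn*Zc, yn*Zc, Zc), cf. Eqs. (1)-(2)
    Zc = -D / (A * xn + B * yn + C)           # substitute the ray into the plane equation
    return np.array([xn * Zc, yn * Zc, Zc])
```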

The structure map of our method is shown in Fig. 5.

Fig. 5. Structure of the proposed method.

4. Experiments and analysis

In this section, two sets of comparison experiments are set up to test the accuracy of three different MEMS-based projector calibration methods at different distances. We then analyze the main factors that cause errors in the proposed method.

4.1 Calibration results

The structured light 3D modeling system used in this experiment is shown in Fig. 6(a), and an enlarged view of the MEMS-based projector is shown in Fig. 6(b). The system includes a camera (MV-CA013-20GN, 1280$\times$1024) with an 8mm lens and a MEMS-based projector (Ainstec-BN1LF545X) with a scanning angle of 45 degrees. The wavelength of the projector’s laser diode is 450nm. The system baseline, i.e., the vertical distance from the camera’s optical center to the rotating shaft of the MEMS mirror, is 617mm.

The calibration board used in the calibration process is shown in Fig. 6(c). The center distance between adjacent benchmarks is 90 mm. The board, with black circle benchmarks on a white background, allows more accurate phase information to be extracted. We placed the board in multiple poses at 1000mm, 1500mm, 2000mm, 2500mm, and 3000mm, respectively, and tried to ensure that the calibration board covered each column of the image plane more than twice. In the calibration and reconstruction process, the following structured light patterns are used: 8-step sinusoidal fringes and 8-bit positive and negative Gray code patterns. The absolute phase is obtained with the Gray-code and phase-shift method [27].

Fig. 6. (a) The physical picture of the proposed structured light 3D modeling system; (b) The physical picture of the MEMS-based projector; (c) Schematic diagram of calibration board used in the experiment.

The structured light patterns in Fig. 7 are projected in sequence by the MEMS-based projector. The 1st all-white pattern illuminates the calibration board so that its benchmark information can be obtained clearly to complete the camera calibration. The absolute phase is obtained with an 8-step phase-shift strategy combined with Gray coding. The 2nd pattern is a sinusoidal fringe pattern with 128 periods in total, and the wrapped phase is obtained from eight phase-shifted images, each shifted by one-eighth of a period; the 2nd to 9th patterns realize this phase-shift strategy. The 10th to 17th and the 18th to 25th patterns are the positive and negative 8-bit Gray code patterns, respectively. The eight-group Gray code encodes the 128 periods well, and the code value of a pixel is determined more reliably by comparing the gray values of the positive and negative Gray code images.
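As an illustration of the phase recovery described above, the following sketch computes the wrapped phase from N equally shifted sinusoidal images with the standard N-step formula and unwraps it with a per-pixel period index; decoding that index from the positive and negative Gray-code images is assumed to be done separately, and the sign/offset conventions must match that decoding.

```python
import numpy as np

def wrapped_phase(images):
    """Wrapped phase from N sinusoidal images, each shifted by 2*pi/N (standard N-step formula)."""
    N = len(images)
    shifts = 2.0 * np.pi * np.arange(N) / N
    num = sum(I * np.sin(d) for I, d in zip(images, shifts))
    den = sum(I * np.cos(d) for I, d in zip(images, shifts))
    return -np.arctan2(num, den)               # wrapped phase; convention must match the Gray-code index

def absolute_phase(wrapped, period_index):
    """Absolute phase given the per-pixel fringe period index k decoded from the Gray codes."""
    return wrapped + 2.0 * np.pi * np.asarray(period_index)
```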

Fig. 7. Structured light coding strategy.

The camera calibration is completed by projecting the all-white image at the beginning. After obtaining the calibration parameters of the camera and the calibration board, we get the rotation and translation matrices of the calibration board at the different positions, and then calculate the CCS plane equation of each calibration board pose. The calibration results obtained by Zhang’s method [25] are as follows:

$$\left[ \begin{matrix} f_x & \gamma & u_0\\ 0 & f_y & v_0\\ 0 & 0 & 1\\ \end{matrix} \right] =\left[ \begin{matrix} 1714.2970 & 0 & 660.5575\\ 0 & 1715.3861 & 533.5350\\ 0 & 0 & 1\\ \end{matrix} \right] ,$$
$$\left[ \begin{array}{c} k_1\\ k_2\\ p_1\\ p_2\\ \end{array} \right] =\left[ \begin{array}{c} -0.111538\\ 0.150493\\ 0.000532\\ 0.000593\\ \end{array} \right] .$$
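These parameters were obtained with Zhang's method [25]; a common way to implement this step is OpenCV, as in the sketch below. The circle-grid layout, file names, and loop bounds here are assumptions for illustration, not the actual configuration used in the experiment.

```python
import cv2
import numpy as np

rows, cols, spacing = 7, 9, 90.0                 # assumed circle-grid layout; 90 mm center distance
objp = np.zeros((rows * cols, 3), np.float32)
objp[:, :2] = np.mgrid[0:cols, 0:rows].T.reshape(-1, 2) * spacing

obj_points, img_points = [], []
for fname in ["board_pose_%02d.png" % i for i in range(11)]:   # hypothetical image file names
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, centers = cv2.findCirclesGrid(gray, (cols, rows),
                                         flags=cv2.CALIB_CB_SYMMETRIC_GRID)
    if found:
        obj_points.append(objp)
        img_points.append(centers)

# Zhang's method: intrinsics A_c, distortion coefficients, and per-pose extrinsics
rms, A_c, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points,
                                                   gray.shape[::-1], None, None)
R0, _ = cv2.Rodrigues(rvecs[0])                  # rotation matrix of the first board pose
```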

The phase quality of the structured light over the black circle benchmarks was poor, so these regions were removed from the unwrapped phase image to reduce the error when fitting the light plane equations. The calibration board region in the image can also be extracted automatically from the board dimensions and the projection relationships.

By extracting the isophase pixel coordinates with a period of $\pi /4$ ($\lambda =1/4$), we calculated the CCS coordinates corresponding to the pixel points of each sampled phase (i.e., Eq. (8)). The light plane equations of the different phases were then fitted with the weighted least squares method, which restrains the influence of intersection lines of the light plane and the calibration board plane that carry significant errors. The fitting result is shown in Fig. 8. As with line structured light, the more perpendicular the camera’s optical axis is to a light plane, the more accurate the calibration data and the lower the fitting error. Therefore, the fitting error is lowest for the light plane around index 560, and the error on both sides increases with the degree of deviation.
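A sketch of the plane fitting step follows; the paper does not spell out its exact weighting scheme, so the weights here are an optional per-point vector, and the fit itself is done with a weighted SVD (total least squares).

```python
import numpy as np

def fit_plane_wls(points, weights=None):
    """Fit A*x + B*y + C*z + D = 0 to 3D points with optional per-point weights."""
    P = np.asarray(points, float)                        # shape (N, 3), CCS coordinates
    w = np.ones(len(P)) if weights is None else np.asarray(weights, float)
    centroid = (w[:, None] * P).sum(axis=0) / w.sum()    # weighted centroid lies on the fitted plane
    Q = (P - centroid) * np.sqrt(w)[:, None]
    _, _, Vt = np.linalg.svd(Q, full_matrices=False)
    n = Vt[-1]                                           # normal: direction of least weighted variance
    return n[0], n[1], n[2], -float(n @ centroid)
```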

Fig. 8. The standard deviation of each sampled light plane.

4.2 Reconstruction results

After completing the calibration with the same data using the CLSM method, Miao’s method, and the proposed method, we reconstructed a standard plane and a standard step object at 5 different distances.

The point cloud reconstructed from the standard plane is used to fit an ideal plane, and the distance of each point to the fitted plane is calculated. The accuracy of the three methods is compared by the standard deviation (STD) of these distances. The results are shown in Fig. 9. At the same distance, the reconstruction result of the proposed method is the smoothest, with a maximum error between -0.8456mm and 0.9593mm. The point cloud obtained by Miao’s method shows some undulation, with a maximum error between -2.4598mm and 3.6325mm. The point cloud obtained by the CLSM method shows the most apparent undulation, with a maximum error between -9.2368mm and 8.7389mm. Thus, the proposed method performs better in reconstructing long-range planes.

Fig. 9. Using different methods to reconstruct the standard plane at 3000mm. (a) The measured object; (b) The measured object with stripe; (c) The reconstructed model by the proposed method; (d) The reconstructed model by Miao's method; (e) The reconstructed model by the CLSM method.

The distance between the two planes of the standard step object is 200mm, and we used this standard distance to test the accuracy of the proposed method. The results, shown in Fig. 10, are similar to those of the standard plane reconstruction. From the overall reconstruction result and the enlarged views of some regions, the reconstructed point clouds of Miao’s and the CLSM methods show higher roughness than that of the proposed method.

Fig. 10. Using different methods to reconstruct the standard step object at 3000mm. (a) The measured object; (b) The measured object with stripe; (c) The reconstructed model by the proposed method; (d) The reconstructed model by Miao's method; (e) The reconstructed model by the CLSM method.

The experiments illustrate that the proposed method maintains high accuracy when the distance exceeds 1000mm. Table 1 and Table 2 show the detailed reconstruction precision. For the reconstruction of the standard plane, the STDs of the plane fitting error at 3000mm are 2.0203mm, 1.1583mm, and 0.1784mm, which shows that the proposed method is far better than the other two methods in terms of the flatness of the reconstructed plane; moreover, the advantage of the proposed method becomes more apparent at greater distances. The reconstruction errors of the standard step object at 3000mm are 2.2558mm, 1.6738mm, and 0.9124mm. Thus, the accuracy of our method is similar to that of the other two methods at close range but higher at long distances. The accuracy gap between the methods is less obvious for the standard step object than for the standard plane. Although all three methods can reconstruct the object’s general shape reasonably well, a method with poor large-scale reconstruction performance yields bad surface smoothness and a severe loss of detail. We also reconstructed a more complex object with our method, as shown in Fig. 11. The result shows that our method can still recover some details of the object even at 3000mm, with the least noise and the best-preserved details. In the lower part of the object, the reconstruction errors introduced by Miao’s and the CLSM methods make the point cloud in the notch region almost invisible. However, limited by the camera resolution, some minor details cannot be effectively reconstructed.

Fig. 11. Reconstruction of the standard object for testing. (a) The measured object; (b) The measured object with stripe; (c) The size parameters of measured object; (d) The reconstructed model by the proposed method; (e) The reconstructed model by Miao's method; (f) The reconstructed model by the CLSM method.

Table 1. Measurement results of the standard plane.

Table 2. Measurement results of the standard step object.

4.3 Influencing factors of accuracy

The accuracy of a MEMS-based projector affects the reconstruction mainly in two respects. The first is the repeatability error, caused by the projector’s inability to project exactly the same structured light pattern repeatedly: although the same set of structured light patterns is used throughout the experiment, the obtained phase information differs slightly each time. The second is the sinusoidal error caused by the limited accuracy of the fringe phase when the projector projects the sinusoidal fringe patterns. A non-sinusoidal fringe makes the absolute phase non-linear, which introduces errors in the light plane interpolation.

By projecting structured light onto the standard plane, the unwrapped phase is obtained to test the repeatability error and the sinusoidal error of the MEMS-based projector. First is the repeatability test. When the projector system is stable, the repeatability error can be expressed by taking two sets of results and calculating the phase difference. The results are shown in Fig. 12. The phase error fluctuates between -0.04 rad and +0.04 rad. The repeatability error is difficult to address at the calibration algorithm level; it causes the light planes to shift during reconstruction, which reduces the reconstruction accuracy.
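A minimal sketch of how such a repeatability figure can be computed from two unwrapped-phase maps of the same static scene:

```python
import numpy as np

def repeatability_std(phase_map_a, phase_map_b):
    """Standard deviation of the per-pixel unwrapped-phase difference between two acquisitions."""
    diff = np.asarray(phase_map_a, float) - np.asarray(phase_map_b, float)
    return float(np.std(diff))
```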

Fig. 12. The repeatability error of the projector over 21 hours. The standard deviation of the unwrapped phase error relative to the first acquisition is calculated every 6 minutes.

The sinusoidal error can be reflected by comparing the angles between adjacent sampled light planes. The intersection angles under a fringe frequency of 128 with 1024 sampled light planes are shown in Fig. 13. Because the calibration of each light plane is based on a specific phase value, the light plane equation at a calibrated phase is not changed by the sinusoidal error in the reconstruction stage.

Fig. 13. The intersection angle of each pair of adjacent sampled light planes; the standard deviation of the angles is 0.001741.

Since the light planes of other phases are obtained by interpolating two adjacent calibrated light planes, the sinusoidal error can be restrained by sampling the light planes more densely during calibration. At the same time, the phase error of low-frequency fringes is larger, which causes a more significant error in the linear interpolation. Therefore, at the same sampling density, the higher the fringe frequency, the smaller the reconstruction error. However, due to the physical precision limits of the projector and camera, an excessively high fringe frequency or sampling frequency leads to a decline in the reconstruction results.

5. Conclusion

In this paper, we proposed a large-scale 3D calibration method for monocular systems with a uniaxial MEMS-based projector. By analyzing the intersection angle model of the light planes, we demonstrated that the non-fixed rotating shaft linear interpolation method of light planes works. Based on the isophase light plane model, a portion of the light planes was calibrated at a set phase interval; the best interval period is related to the physical resolution of the projector and camera. Owing to the characteristics of the isophase light plane model, large-scale calibration is easily realized. The experimental results verified that the proposed method yields a very obvious accuracy improvement beyond 1500mm, and the details of distant objects are also reconstructed better. With the proposed method, better accuracy can be obtained at long distances than with other MEMS-based projector calibration methods.

Funding

Innovation workstation of Suzhou Institute of Nano-Tech and Nano-Bionics (E010210101); Suzhou Institute of Nanotechnology, Chinese Academy of Sciences (E290010201); Chinese Academy of Sciences Project (E21Z010101).

Acknowledgments

This work was partially supported by Suzhou Foreign Experts Project under grant No. E290010201, and Chinese Academy of Sciences Project under grant No. E21Z010101. This work was also supported by the innovation workstation of Suzhou Institute of Nano-Tech and Nano-Bionics (SINANO) under grant No. E010210101.

Disclosures

The authors declare no conflicts of interest.

Data availability

No data were generated or analyzed in the presented research.

References

1. D. Bak, “Rapid prototyping or rapid production? 3d printing processes move industry towards the latter,” Assembly Autom. 23(4), 340–345 (2003). [CrossRef]  

2. C. P. Honrado and W. F. Larrabee Jr, “Update in three-dimensional imaging in facial plastic surgery,” Curr. Opin. Otolaryngol. Head Neck Surg. 12(4), 327–331 (2004). [CrossRef]  

3. G. Sansoni, M. Trebeschi, and F. Docchio, “State-of-the-art and applications of 3d imaging sensors in industry, cultural heritage, medicine, and criminal investigation,” Sensors 9(1), 568–601 (2009). [CrossRef]  

4. S. S. Gorthi and P. Rastogi, “Fringe projection techniques: whither we are?” Opt. Lasers Eng. 48(2), 133–140 (2010). [CrossRef]  

5. A. Haleem and M. Javaid, “3d scanning applications in medical field: a literature-based review,” Clinical Epidemiology and Global Health 7(2), 199–210 (2019). [CrossRef]  

6. C. H. Dagli, Artificial neural networks for intelligent manufacturing (Springer Science & Business Media, 2012).

7. F. Stanco, S. Battiato, and G. Gallo, “Digital imaging for cultural heritage preservation,” Analysis, Restoration, and Reconstruction of Ancient Artworks (2011).

8. B. Li, Y. An, D. Cappelleri, J. Xu, and S. Zhang, “High-accuracy, high-speed 3d structured light imaging techniques and potential applications to intelligent robotics,” Int. J. Intell. Robot. Appl. 1(1), 86–103 (2017). [CrossRef]  

9. S. Zhang, “Recent progresses on real-time 3d shape measurement using digital fringe projection techniques,” Opt. Lasers Eng. 48(2), 149–158 (2010). [CrossRef]  

10. Z. Cai, X. Liu, X. Peng, Y. Yin, A. Li, J. Wu, and B. Z. Gao, “Structured light field 3d imaging,” Opt. Express 24(18), 20324–20334 (2016). [CrossRef]  

11. S. Van der Jeught and J. J. Dirckx, “Real-time structured light profilometry: a review,” Opt. Lasers Eng. 87, 18–31 (2016). [CrossRef]  

12. S. Feng, C. Zuo, T. Tao, Y. Hu, M. Zhang, Q. Chen, and G. Gu, “Robust dynamic 3-d measurements with motion-compensated phase-shifting profilometry,” Opt. Lasers Eng. 103, 127–138 (2018). [CrossRef]  

13. C. Zuo, S. Feng, L. Huang, T. Tao, W. Yin, and Q. Chen, “Phase shifting algorithms for fringe projection profilometry: A review,” Opt. Lasers Eng. 109, 23–59 (2018). [CrossRef]  

14. J. Geng, “Structured-light 3d surface imaging: a tutorial,” Adv. Opt. Photonics 3(2), 128–160 (2011). [CrossRef]  

15. L. Biancardi, G. Sansoni, and F. Docchio, “Adaptive whole-field optical profilometry: A study of the systematic errors,” IEEE Trans. Instrum. Meas. 44(1), 36–41 (1995). [CrossRef]  

16. A. Tian, Z. Jiang, and Y. Huang, “A flexible new three-dimensional measurement technique by projected fringe pattern,” Opt. Laser Technol. 38(8), 585–589 (2006). [CrossRef]  

17. Q. Hu, P. S. Huang, Q. Fu, and F.-P. Chiang, “Calibration of a three-dimensional shape measurement system,” Opt. Eng. 42(2), 482–493 (2003). [CrossRef]  

18. X. Su, W. Song, Y. Cao, and L. Xiang, “Both phase-height mapping and coordinates calibration in pmp,” Proc. SPIE 4829, 874–875 (2003). [CrossRef]  

19. Z. Zhang, H. Ma, S. Zhang, T. Guo, C. E. Towers, and D. P. Towers, “Simple calibration of a phase-based 3d imaging system based on uneven fringe projection,” Opt. Lett. 36(5), 627–629 (2011). [CrossRef]  

20. F. Da and S. Gai, “Flexible three-dimensional measurement technique based on a digital light processing projector,” Appl. Opt. 47(3), 377–385 (2008). [CrossRef]  

21. V. Suresh, J. Holton, and B. Li, “Structured light system calibration with unidirectional fringe patterns,” Opt. Lasers Eng. 106, 86–93 (2018). [CrossRef]  

22. S. Zhang and P. S. Huang, “Novel method for structured light system calibration,” Opt. Eng. 45(8), 083601 (2006). [CrossRef]  

23. D. Yang, D. Qiao, and C. Xia, “Curved light surface model for calibration of a structured light 3d modeling system based on striped patterns,” Opt. Express 28(22), 33240–33253 (2020). [CrossRef]  

24. Y. Miao, Y. Yang, Q. Hou, Z. Wang, X. Liu, Q. Tang, X. Peng, and B. Z. Gao, “High-efficiency 3d reconstruction with a uniaxial mems-based fringe projection profilometry,” Opt. Express 29(21), 34243–34257 (2021). [CrossRef]  

25. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Machine Intell. 22(11), 1330–1334 (2000). [CrossRef]  

26. J. Heikkila and O. Silvén, “A four-step camera calibration procedure with implicit image correction,” in Proceedings of IEEE computer society conference on computer vision and pattern recognition, (IEEE, 1997), pp. 1106–1112.

27. G. Sansoni, M. Carocci, and R. Rodella, “Three-dimensional vision based on a combination of gray-code and phase-shift light projection: analysis and compensation of the systematic errors,” Appl. Opt. 38(31), 6565–6573 (1999). [CrossRef]  
