
Curved light surface model for calibration of a structured light 3D modeling system based on striped patterns


Abstract

Structured light is an optical 3D surface measurement technique with the merits of high speed and high robustness. However, the large size of traditional digital light processing (DLP) projectors limits their convenience in numerous applications. In this paper, a one-axis MEMS mirror is used as the structured light projector in a 3D modeling system, offering the advantages of small volume and low cost. Because it cannot project orthogonal patterns and suffers from projection distortion, it is difficult for the one-axis MEMS mirror based 3D modeling system to obtain high accuracy through existing calibration methods. This paper proposes a calibration method for structured light 3D modeling systems that can only project stripes in one direction and exhibit projection distortion. A curved surface equation, called the curved light surface model, is proposed to replace the ideal plane equation as the mathematical model of the projected structured light stripes. Experimental results verify that this method can significantly reduce the effect of projection distortion; an accuracy of 0.11 mm was achieved when measuring a standard dumbbell-shaped object with a 201.10 mm center-to-center distance.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Structured light 3D modeling systems have been widely used in quality inspection, medicine, robotics, and other fields [1–7]. In general, a structured light 3D modeling system consists of one camera and one projector. When structured light patterns are emitted by the projector onto an object, these patterns are distorted by the surface of the measured object and then captured by the camera. Once the camera and projector are calibrated, the three-dimensional coordinates of the measured object can be obtained from the distorted structured light patterns via triangulation.

The conventional projectors used in structured light 3D modeling systems are DLP projectors, which are large and heavy. As alternatives to the DLP projector, galvanometric scanners [8,9] and MEMS mirrors [10,11] have been used as new projection technologies with the advantages of small size, light weight, and no need for focusing optics [12,13]. In this paper, a one-axis MEMS mirror system containing a laser diode and a one-axis MEMS mirror is adopted to replace the DLP projector and miniaturize the structured light system. It can project laser stripes in one direction, but the stripes are slightly bent by assembly error. By incorporating the one-axis MEMS mirror, this compact structured light system easily achieves a large measuring range, because the system's depth of field is limited only by the camera's focus range.

The measurement accuracy of a structured light 3D modeling system mainly depends on the accuracy of the calibration of the camera and projector. Camera calibration has been studied for several decades, and many methods have been proposed [14–17]. One of the most widely used is the method of Zhang [17], which obtains an accurate calibration result simply by using the camera to capture a planar calibration target at different locations. Projector calibration is more complicated than camera calibration because the projector cannot capture images. Numerous calibration methods have been proposed to obtain the system parameters used in 3D reconstruction [18–20], but they are generally time consuming and complicated. Reference-plane-based methods, with a simple process, have been widely applied to systems that do not demand high accuracy [21–24]. Zhang and Huang proposed a method that enables the projector to capture images by treating the projector as a reverse camera [25], so that the camera calibration method can be applied to the projector. This method achieves highly accurate calibration through a simple process, but it applies only to projectors able to project orthogonal patterns. By treating each structured light stripe as an ideal light plane, some Light Plane Model (LPM) methods were proposed to calibrate each structured light stripe directly [26,27]. These methods neglect the distortion of the light stripes, so it is difficult for them to obtain high-precision calibration results in severely distorted systems. To correct the distortion of the light stripes, Yang [28] proposed fitting the light stripes with the general conicoid equation. This method corrects the distortion well, but the calibration procedure is complicated and demands two cameras, one of which is unnecessary in monocular systems. Another problem with this method is that it is prone to overfitting, so the reconstructed surface may not be smooth enough. Lu [29] proposed correcting the distortion by directly building the relationship between the sub-pixel coordinates of each stripe and the 3D coordinates. To achieve high accuracy, a fifth-order polynomial is adopted, so the calculation is time-consuming.

In this paper, we propose a calibration method for structured light 3D modeling systems incorporating a one-axis MEMS mirror that can only project light stripes in one direction. By treating each light stripe as a curved light surface, a curved surface equation with seven parameters, called the curved light surface model (CLSM), is proposed to replace the ideal plane equation as the mathematical model of the light stripes; it is used in the projector calibration and 3D reconstruction to reduce the effect of the light stripes' distortion. The main idea of the calibration is to fit the light stripes one by one using the CLSM. Experiments demonstrate that the proposed method significantly reduces the effect of projection distortion, and the accuracy of the system is 0.11 mm when measuring a standard dumbbell-shaped object (two spheres) with a center-to-center distance of 201.10 mm.

2. Curved light surface model

The schematic diagram of the one-axis MEMS mirror system is illustrated in Fig. 1. A mirror with a diameter of 3 mm is supported by a coaxial cantilever, and it vibrates sinusoidally at a frequency of 1.15 kHz when actuated by an AC voltage. By emitting a line laser onto the rotation axis of the mirror and switching the laser on and off in the time domain while the mirror vibrates at a frequency much higher than the camera's capture frequency, a structured light pattern can be generated and captured by the camera. The line laser is produced by directing a collimated spot laser perpendicularly onto a cylindrical lens. The entire system described above is smaller than 30 mm, much smaller than a DLP projector. In practical applications, slight assembly errors cannot be avoided: it is almost impossible to ensure that the line laser and the rotation axis of the MEMS mirror completely coincide, or that the spot laser is perpendicularly incident on the cylindrical lens. Therefore, the line laser emitted from the cylindrical lens is slightly bent, forming a curved light surface in space.

Fig. 1. Schematic diagram of the one-axis MEMS mirror system.

In this paper, the structured light pattern produced by the one-axis MEMS mirror system consists of 1024 vertical stripes, each of which is formed independently. They are numbered with stripe numbers $I = 1,2,3,\ldots ,1024$. Because of the distortion of these stripes, the ideal plane equation cannot describe them accurately. The general conicoid equation can describe the distorted light [28] but is prone to overfitting. Therefore, we propose a curved surface equation as the mathematical model of the light stripes for the projector calibration and 3D reconstruction; the derivation is as follows. Since the formation processes of all 1024 curved light surfaces are similar, any one of them can be chosen to demonstrate the concept of the curved light surface model; here, the curved light surface with $I = 512$ is selected. In Fig. 2, ${O_p}{x_p}{y_p}{z_p}$ is the projector coordinate system (PCS), whose origin is fixed at the center of the mirror, and ${O_c}{x_c}{y_c}{z_c}$ is the camera coordinate system (CCS). Apart from the cylindrical lens used to generate the line laser, there are no other optical lenses in the one-axis MEMS mirror system; therefore, after reflection by the MEMS mirror, the projected light carries no radial or tangential distortion. In such a system, the PCS can be set freely, and we set it parallel to the CCS to facilitate model building. When the laser stripe is projected onto the mirror at a specific angle, it is reflected by the mirror to form $Surfac{e_{512}}$. Let $Plan{e_1}$ and $Plan{e_2}$ be arbitrary planes in the PCS; ${L_1}$ is the intersection curve of $Surfac{e_{512}}$ with $Plan{e_1}$, ${L_2}$ is the intersection curve of $Surfac{e_{512}}$ with $Plan{e_2}$, and ${L_3}$ is a light ray from ${O_p}$ belonging to $Surfac{e_{512}}$ that intersects $Plan{e_1}$ and $Plan{e_2}$ at points ${P_1}({x_1},{y_1},{z_1})$ and ${P_2}({x_2},{y_2},{z_2})$ in the PCS, respectively.

Fig. 2. Geometric diagram of the curved light surface with I = 512.

The design working distance of the proposed system is more than 300 mm, much larger than the diameter of the MEMS mirror. Therefore, the MEMS mirror is treated as a point, which means that all paths of the projected light converge at ${O_p}$, so Eq. (1) follows directly:

$$\left\{ {\begin{array}{c} {{{{z_2}} / {{y_2}}} = {{{z_1}} / {{y_1}}}}\\ {{{{x_2}} / {{y_2}}} = {{{x_1}} / {{y_1}}}} \end{array}} \right..$$
Observation of the shapes of ${L_1}$ and ${L_2}$ shows that a general parabolic equation fits them well. Let $Plan{e_1}$ be perpendicular to the axis ${z_p}$, and let D represent the distance between $Plan{e_1}$ and the mirror; then ${P_1}({x_1},{y_1},{z_1})$ satisfies the following equation:
$$\left\{ {\begin{array}{c} {{x_1} = Ay_1^2 + B{y_1} + C}\\ {{z_1} = D} \end{array}} \right.,$$
where ${[{A,B,C,D} ]^T}$ is the coefficient vector. Substituting Eq. (1) into Eq. (2) gives:
$${x_2}{z_2} = ADy_2^2 + B{y_2}{z_2} + {{Cz_2^2} / D}.$$
Since the only variables in Eq. (3) are $({x_2},{y_2},{z_2})$ and $Plan{e_2}$ is an arbitrary plane, Eq. (3) can be seen as the equation of $Surfac{e_{512}}$. Since ${O_p}{x_p}{y_p}{z_p}$ is parallel to ${O_c}{x_c}{y_c}{z_c}$, the transformation between the two coordinate systems can be expressed as follows:
$$\left\{ {\begin{array}{c} {{x_c} = {x_p} + E}\\ {{y_c} = {y_p} + F}\\ {{z_c} = {z_p} + G} \end{array}} \right.,$$
where ${[{E,F,G} ]^T}$ is the translation vector. Substituting Eq. (4) into Eq. (3) gives:
$$({x_c} - E)({z_c} - G) = AD{({y_c} - F)^2} + B({y_c} - F)({z_c} - G) + {{C{{({z_c} - G)}^2}} / D}.$$
Equation (5) defines the mathematical model, in the CCS, of the curved light surface whose stripe number is $I = 512$. Applying this derivation to the other 1023 curved light surfaces, the entire model of the structured light projected by the MEMS mirror is defined by a $7 \times 1024$ matrix, called the projector parameter COE:
$$COE = \left[ {\begin{array}{ccccc} {{A_1}}&{{A_2}}&{{A_3}}&\cdots&{{A_{1024}}}\\ {{B_1}}&{{B_2}}&{{B_3}}&\cdots&{{B_{1024}}}\\ {{C_1}}&{{C_2}}&{{C_3}}&\cdots&{{C_{1024}}}\\ {{D_1}}&{{D_2}}&{{D_3}}&\cdots&{{D_{1024}}}\\ {{E_1}}&{{E_2}}&{{E_3}}&\cdots&{{E_{1024}}}\\ {{F_1}}&{{F_2}}&{{F_3}}&\cdots&{{F_{1024}}}\\ {{G_1}}&{{G_2}}&{{G_3}}&\cdots&{{G_{1024}}} \end{array}} \right],$$
where ${({{A_I},{B_I},{C_I},{D_I},{E_I},{F_I},{G_I}} )^T}$ describes the model of the curved light surface with $I = 1,2,3,\ldots ,1024$.
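To make the model concrete, the following Python/NumPy sketch (our illustration, not part of the paper; the function name and array layout are assumptions) evaluates the implicit residual of Eq. (5) for one column of COE. Points lying exactly on the curved light surface give a residual of zero.

```python
import numpy as np

def clsm_residual(params, pts):
    """Residual of Eq. (5) for one curved light surface.

    params: (A, B, C, D, E, F, G) for a single stripe.
    pts:    (N, 3) array of points (x_c, y_c, z_c) in the CCS.
    Returns the signed deviation of each point from the surface.
    """
    A, B, C, D, E, F, G = params
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    lhs = (x - E) * (z - G)
    rhs = A * D * (y - F) ** 2 + B * (y - F) * (z - G) + C * (z - G) ** 2 / D
    return lhs - rhs
```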

3. System calibration and 3D reconstruction

3.1. Camera calibration

The pinhole imaging model is widely used to describe the camera. As shown in Fig. 3, $P$ is a spatial point, ${P_i}(u,v)$ is the ideal projection of $P$ onto the image plane $Ouv$, and ${O_w}{x_w}{y_w}{z_w}$ is the world coordinate system (WCS). The transformation between $P({x_w},{y_w},{z_w})$ and ${P_i}(u,v)$ is expressed as:

$$s\left[ {\begin{array}{c} u\\ v\\ 1 \end{array}} \right] = \left[ {\begin{array}{ccc} {{f_u}}&\gamma &{{u_0}}\\ 0&{{f_v}}&{{v_0}}\\ 0&0&1 \end{array}} \right]\left[ {\begin{array}{cc} {{R_C}}&{{T_C}} \end{array}} \right]\left[ {\begin{array}{c} {{x_w}}\\ {{y_w}}\\ {{z_w}}\\ 1 \end{array}} \right],$$
where s represents a scale factor, ${f_u}$ and ${f_v}$ represent the equivalent focal lengths along the u and v axes respectively, $({u_0},{v_0})$ represents the principal point of the image plane, $\gamma$ represents the skew between the u and v axes, and ${R_C}$ and ${T_C}$, called the rotation matrix and translation vector respectively, describe the transformation between the CCS and the WCS.

Fig. 3. Pinhole imaging model.

Influenced by lens distortion, the measured projection of P on the image plane is ${P_d}({u_d},{v_d})$. In general, the lens distortion is composed of radial distortion and tangential distortion, which can be expressed by the following equation [14]:

$$\left\{ {\begin{array}{c} {\left[ {\begin{array}{c} {{{\tilde{u}}_d}}\\ {{{\tilde{v}}_d}} \end{array}} \right] = (1 + {k_1}{r^2} + {k_2}{r^4} + {k_5}{r^6})\left[ {\begin{array}{c} {\tilde{u}}\\ {\tilde{v}} \end{array}} \right] + \left[ {\begin{array}{c} {2{k_3}\tilde{u}\tilde{v} + {k_4}({r^2} + 2{{\tilde{u}}^2})}\\ {{k_3}({r^2} + 2{{\tilde{v}}^2}) + 2{k_4}\tilde{u}\tilde{v}} \end{array}} \right]}\\ {{r^2} = {{\tilde{u}}^2} + {{\tilde{v}}^2}} \end{array}} \right.,$$
where $({k_1},{k_2},{k_5})$ are the radial distortion coefficients, $({k_3},{k_4})$ are the tangential distortion coefficients, and $({\tilde{u}_d},{\tilde{v}_d})$ and $(\tilde{u},\tilde{v})$ are the normalized terms of $({u_d},{v_d})$ and $(u,v)$.
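For illustration, Eq. (8) maps undistorted normalized coordinates to distorted ones. A minimal NumPy sketch (our own, with an illustrative function name) could read as follows; note that this coefficient ordering corresponds to OpenCV's (k1, k2, p1, p2, k3) with p1 = k3, p2 = k4, and OpenCV's k3 = k5 here.

```python
import numpy as np

def distort_normalized(u_n, v_n, k1, k2, k3, k4, k5):
    """Apply the lens distortion of Eq. (8) to normalized coordinates.

    (k1, k2, k5) are radial coefficients; (k3, k4) are tangential.
    """
    r2 = u_n ** 2 + v_n ** 2
    radial = 1 + k1 * r2 + k2 * r2 ** 2 + k5 * r2 ** 3
    u_d = u_n * radial + 2 * k3 * u_n * v_n + k4 * (r2 + 2 * u_n ** 2)
    v_d = v_n * radial + k3 * (r2 + 2 * v_n ** 2) + 2 * k4 * u_n * v_n
    return u_d, v_d
```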

Camera calibration estimates the intrinsic parameters $({f_u},{f_v},{u_0},{v_0},\gamma )$, the extrinsic parameters $({R_C},{T_C})$, and the distortion coefficients $({k_1},{k_2},{k_3},{k_4},{k_5})$. The method proposed by Zhang [17] is adopted for its convenience and high precision; a camera calibration toolbox is available in OpenCV.
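As an illustration of this step, the sketch below uses OpenCV's calibration toolbox. It is a minimal sketch, not the authors' code: `images` (a list of grayscale chessboard captures) is an assumed variable, and the 9 × 9 inner-corner count is our inference from the 10 × 10 squares of the board described in Section 4.

```python
import cv2
import numpy as np

# A board of 10 x 10 squares has 9 x 9 inner corners, spaced 30 mm apart.
pattern, square = (9, 9), 30.0
obj = np.zeros((9 * 9, 3), np.float32)
obj[:, :2] = np.mgrid[0:9, 0:9].T.reshape(-1, 2) * square

obj_pts, img_pts = [], []
for img in images:  # assumed list of grayscale calibration images
    ok, corners = cv2.findChessboardCorners(img, pattern)
    if ok:
        # Refine corner locations to sub-pixel accuracy.
        criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-6)
        corners = cv2.cornerSubPix(img, corners, (11, 11), (-1, -1), criteria)
        obj_pts.append(obj)
        img_pts.append(corners)

# Zhang's method: intrinsics K, distortion coefficients, and per-view extrinsics.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, images[0].shape[::-1], None, None)
```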

3.2. Projector calibration based on curved light surface model

Projector calibration estimates the projector parameter COE. Based on the CLSM proposed in Section 2, this paper introduces a calibration method for projectors that project stripes in only one direction. The schematic diagram of this method is shown in Fig. 4. A planar chessboard fixed on a movable shelf is required, and the board must be large enough for all 1024 curved light surfaces to be projected onto it. The main idea is to obtain the 3D coordinates of sufficient spatial points belonging to these curved light surfaces by placing the chessboard at random positions and adjusting the one-axis MEMS mirror system to project the curved light surfaces onto it. These data are then used to fit the 1024 curved light surfaces one by one using Eq. (5). It would be time-consuming to project one stripe onto the chessboard at a time and repeat this process 1024 times to obtain the data of all 1024 curved light surfaces. Therefore, a structured light encoding technique combining 8-bit gray coding and a four-step line-shift strategy is adopted to assist the calibration [30,31], by which the data of all 1024 curved light surfaces can be obtained from only 12 images.

Fig. 4. Schematic diagram of the projector calibration.

The structured light patterns used in the calibration are composed of 1024 vertical stripes with stripe numbers $I = 1,2,3,\ldots ,1024$. The conventional coding methods are random patterns, gray coding, and sinusoidal phase coding [32]. Because the one-axis MEMS mirror can only project light stripes in one direction and it is difficult to accurately control the light intensity, 8-bit gray coding, which contains only white and black striped patterns, is adopted. As shown in Fig. 5(a), the entire projection image is divided into ${2^8}$ sub-regions by the 8-bit gray coding patterns, corresponding to the decoding numbers ${I_G} = 0,1,2,\ldots ,255$ converted from the 8-bit gray code words. The finest resolution of 8-bit gray coding is 4 pixels, which is not fine enough for high-precision 3D reconstruction. In theory, 10-bit gray coding has a resolution of 1 pixel. However, limited by the width of the line laser, it is difficult for the one-axis MEMS mirror to ensure good contrast in patterns whose finest resolution is less than 4 pixels. Therefore, the 8-bit gray coding pattern is recommended.

Fig. 5. Structured light coding strategy. (a) 8-bit gray coding divides the image into 256 sub-regions; (b) four-step line-shift patterns use stripe edges to encode each gray coding sub-region.

The conventional methods to further subdivide the sub-regions are the line-shift strategy and the phase-shift strategy. Stripe edges are robust and not easily affected by laser speckle and surface texture, so we adopt the four-step line-shift strategy to encode these sub-regions. As shown in Fig. 5(b), the first pattern of the line-shift strategy is a binary stripe pattern with a stripe width of 4 pixels, and it is shifted three times in steps of 1 pixel to produce the other three patterns. In the line-shift patterns, edges of stripes can be detected after binarization, each with a line-shift number ${I_S} = 1,2,3,4$. By matching these edges with the gray coding sub-regions and then combining ${I_G}$ and ${I_S}$, the stripe number I can be ascertained by the following equation:

$$I = 4{I_G} + {I_S}.$$
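As a small illustration of the decoding step (a sketch, not the authors' implementation), converting an 8-bit gray code word to its sub-region index ${I_G}$ and applying Eq. (9) might look as follows in Python:

```python
def gray_to_binary(g):
    """Convert an 8-bit Gray code word to its binary index I_G (0..255)."""
    b = g
    while g:
        g >>= 1
        b ^= g
    return b

def stripe_number(gray_word, line_shift):
    """Eq. (9): combine the gray-coding sub-region index I_G with the
    line-shift edge index I_S in {1, 2, 3, 4}."""
    return 4 * gray_to_binary(gray_word) + line_shift

# Example: gray word 0b00000110 decodes to I_G = 4; with I_S = 3, I = 19.
assert stripe_number(0b00000110, 3) == 19
```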

During the calibration, the one-axis MEMS mirror system is controlled to project the structured light sequence onto the chessboard while the camera captures the image sequence. The projected curved light surfaces intersect the chessboard, generating abundant intersection points. By performing sub-pixel stripe edge detection [33] and structured light decoding, the stripe number I and sub-pixel coordinates $(u,v)$ of these intersection points are ascertained. The 3D coordinates of these intersection points in the CCS are required for the fitting, and they can be calculated by combining the equations of the camera rays and the chessboard plane. To simplify the calculation, the origin ${O_w}$ of the WCS is fixed at a corner of the chessboard, the axis ${z_w}$ is perpendicular to the chessboard, and the ${x_w}{y_w}$ plane coincides with the chessboard. The equation of the chessboard plane can then be expressed by its extrinsic parameters:

$$\left\{ {\begin{array}{c} {{r_{13}}{x_c} + {r_{23}}{y_c} + {r_{33}}{z_c} + d = 0}\\ {d = {r_{13}}{t_1} + {r_{23}}{t_2} + {r_{33}}{t_3}} \end{array}} \right..$$

A convenient approach to ascertain the extrinsic parameters of the chessboard plane is to generate floodlighting images containing the chessboard from the image sequence and add them to the camera calibration procedure. The camera ray is a ray passing through the spatial point ${P_c}({x_c},{y_c},{z_c})$ and the projection point ${P_i}(u,v)$ on the normalized image plane, which can be expressed as:

$$s\left[ {\begin{array}{c} {\tilde{u}}\\ {\tilde{v}}\\ 1 \end{array}} \right] = \left[ {\begin{array}{c} {{x_c}}\\ {{y_c}}\\ {{z_c}} \end{array}} \right],$$
where $(\tilde{u},\tilde{v})$ represents the normalized term of $({u,\; v} )$. Substituting Eq. (11) into Eq. (10) gives:
$${r_{13}}s\tilde{u} + {r_{23}}s\tilde{v} + {r_{33}}s + {r_{13}}{t_1} + {r_{23}}{t_2} + {r_{33}}{t_3} = 0.$$
Solving Eq. (12) for s, ${P_c}({x_c},{y_c},{z_c})$ can then be calculated by Eq. (11).
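A compact sketch of this ray-plane intersection (our illustration, with assumed names; it follows the sign convention of Eqs. (10)-(12) as written) is:

```python
import numpy as np

def intersect_ray_plane(u_n, v_n, R, T):
    """Intersect the camera ray of Eq. (11) with the chessboard plane of
    Eq. (10). R, T are the board's extrinsics; (u_n, v_n) are the
    undistorted normalized image coordinates of a stripe edge point."""
    r13, r23, r33 = R[0, 2], R[1, 2], R[2, 2]   # third column of R
    t1, t2, t3 = T.ravel()
    d = r13 * t1 + r23 * t2 + r33 * t3          # per Eq. (10)
    s = -d / (r13 * u_n + r23 * v_n + r33)      # solve Eq. (12) for s
    return np.array([s * u_n, s * v_n, s])      # point in the CCS, Eq. (11)
```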

By applying the above calculations, the coordinates in the CCS of all intersection points can be obtained from the image sequence captured at one position. Since the chessboard is placed approximately perpendicular to the axis ${z_c}$, the ${z_c}$-coordinate range of the intersection points from the chessboard at one position is much smaller than the depth of the measuring range. It is difficult to obtain good fits from such data, so we move the chessboard to another position along the axis ${z_c}$ to expand the ${z_c}$-coordinate range of the intersection points. The greater the distance between the two positions, the higher the accuracy of the calibration. The data obtained from two positions are sufficient for the fitting; therefore, to keep the calibration process as simple as possible, no further positions are used. We place the chessboard at the forefront and at the far end of the measuring range to obtain high accuracy over the entire measuring range. The intersection points are divided into 1024 groups according to the stripe number I to fit each column of COE. The fitting is a non-linear least squares problem, solved by minimizing the following function:

$$F = \sum\nolimits_{j = 1}^N {{{\left\| {{x_{Ij}} - \tilde{x}({A_I},{B_I},{C_I},{D_I},{E_I},{F_I},{G_I},{y_{Ij}},{z_{Ij}})} \right\|}^2}},$$
where $({x_{Ij}},{y_{Ij}},{z_{Ij}})$ represents the 3D coordinates of the j-th intersection point from the curved light surface whose stripe number is I. The Levenberg-Marquardt algorithm is adopted to solve this problem.
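For illustration, the minimization of Eq. (13) can be carried out with SciPy's Levenberg-Marquardt implementation. The sketch below (function names are our assumptions) solves Eq. (5) for ${x_c}$ to form the model prediction $\tilde{x}$, and uses the all-ones initial guess reported in Section 4.1:

```python
import numpy as np
from scipy.optimize import least_squares

def x_of_surface(p, y, z):
    """Solve Eq. (5) for x_c: the model prediction x~ used in Eq. (13)."""
    A, B, C, D, E, F, G = p
    return E + (A * D * (y - F) ** 2
                + B * (y - F) * (z - G)
                + C * (z - G) ** 2 / D) / (z - G)

def fit_stripe(points, p0=np.ones(7)):
    """Fit the seven CLSM parameters of one stripe to its (x, y, z)
    intersection points by Levenberg-Marquardt, minimizing Eq. (13)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    res = least_squares(lambda p: x - x_of_surface(p, y, z), p0, method='lm')
    return res.x  # (A, B, C, D, E, F, G): one column of COE
```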

3.3. 3D reconstruction

To obtain the 3D coordinates of an object, the one-axis MEMS mirror system projects the same structured light sequence used in the projector calibration onto the measured object while the camera captures the stripe images. For a point to be measured, performing sub-pixel stripe edge detection and structured light decoding on these stripe images yields the stripe number I and sub-pixel coordinates $(u,v)$ of the point; the camera ray equation and the curved light surface equation of the measured point are then given by Eq. (11) and Eq. (5), respectively. Combining the two equations gives:

$$\begin{array}{l} (\tilde{u} - {A_I}{D_I}{{\tilde{v}}^2} - {B_I}\tilde{v} - {{{C_I}} / {{D_I}}}){s^2} + (2{A_I}{D_I}{F_I}\tilde{v} + {B_I}{G_I}\tilde{v} + {B_I}{F_I} - \tilde{u}{G_I} - {E_I} + \\ {{2{C_I}{G_I}} / {{D_I}}})s + {E_I}{G_I} - {A_I}{D_I}F_I^2 - {B_I}{F_I}{G_I} - {{{C_I}G_I^2} / {{D_I}}} = 0. \end{array}$$

After solving Eq. (14) for s, the 3D coordinates of the measured point in the CCS are given by Eq. (11). Applying this calculation to all points detected in the stripe images implements the 3D reconstruction.
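A minimal sketch of this reconstruction step follows (our illustration; the rule of keeping the root in front of the camera is our assumption, not stated in the paper):

```python
import numpy as np

def reconstruct_point(u_n, v_n, p):
    """Solve the quadratic Eq. (14) for s, then return the 3D point of
    Eq. (11). p holds (A, B, C, D, E, F, G) of the decoded stripe."""
    A, B, C, D, E, F, G = p
    a = u_n - A * D * v_n ** 2 - B * v_n - C / D
    b = 2 * A * D * F * v_n + B * G * v_n + B * F - u_n * G - E + 2 * C * G / D
    c = E * G - A * D * F ** 2 - B * F * G - C * G ** 2 / D
    s1, s2 = np.roots([a, b, c])
    s = (s1 if s1.real > 0 else s2).real  # root in front of the camera
    return np.array([s * u_n, s * v_n, s])
```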

4. Experiments and results

The structured light 3D modeling system proposed in this paper is shown in Fig. 6. A camera (IDS UI-3240CP-NIR-GL, $1280 \times 1024$) fitted with a 6 mm lens and a projector are assembled in a structure with a horizontal baseline of 80 mm. The projector consists of a line laser with a wavelength of 830 nm and a one-axis MEMS mirror with a diameter of 3 mm (commercially available from Zhisensor Tech.). The measuring range along the axes ${x_c}$ and ${y_c}$ increases with working distance and is 490 mm (${x_c}$) ${\times}$ 425 mm (${y_c}$) at the focus distance (about 500 mm). A planar chessboard (800 mm ${\times}$ 600 mm) with 10 ${\times}$ 10 square patterns (30 mm ${\times}$ 30 mm), fixed on a movable shelf, was used in the system calibration. The flatness error of the chessboard is 0.002 mm.

Fig. 6. The composition of the proposed structured light 3D modeling system.

4.1. System calibration

In the camera calibration, the chessboard needs to be captured at many different positions so that the detected points are distributed uniformly. Generally speaking, when fewer than 20 images are used, the more images captured, the higher the accuracy of the camera calibration; as the number of images increases further, the accuracy gradually stabilizes. We recommend 20 images, because beyond that point additional images no longer produce a significant accuracy benefit. We first placed the chessboard at 20 different positions to capture 20 images, named ${I_1} \sim {I_{20}}$. Then, the chessboard was moved to the forefront and to the far end of the measuring range; meanwhile, we adjusted the one-axis MEMS mirror system to project the structured light sequence onto the chessboard and used the camera to capture two image sequences, named ${S_{21}}$ and ${S_{22}}$. The floodlighting images ${I_{21}}$ and ${I_{22}}$ containing the chessboard were created from ${S_{21}}$ and ${S_{22}}$. The camera calibration was conducted by applying Zhang's method [17] to ${I_1} \sim {I_{22}}$, with the following results:

$$\left[ {\begin{array}{ccc} {{f_u}}&\gamma &{{u_0}}\\ 0&{{f_v}}&{{v_0}}\\ 0&0&1 \end{array}} \right] = \left[ {\begin{array}{ccc} {1182.695}&0&{647.604}\\ 0&{1182.872}&{504.812}\\ 0&0&1 \end{array}} \right],$$
$$\left[ {\begin{array}{c} {{k_1}}\\ {{k_2}}\\ {{k_3}}\\ {{k_4}}\\ {{k_5}} \end{array}} \right] = \left[ {\begin{array}{c} { - 0.0624}\\ {0.0855}\\ {0.00002645}\\ { - 0.00007024}\\ { - 0.0067} \end{array}} \right].$$

The extrinsic matrices [${R_{21}},{T_{21}}$] and [${R_{22}},{T_{22}}$] obtained from the camera calibration were required in the projector calibration. The procedures described in Section 3.2 were applied to ${S_{21}}$ and ${S_{22}}$ to ascertain the 3D coordinates and stripe numbers I of 1,599,584 intersection points. The Levenberg-Marquardt algorithm was used to calculate the projector parameter COE, with an initial guess of ${[1,1,1,1,1,1,1]^T}$. The fitting results of all 1024 curved light surfaces are similar; therefore, the result of any one of them can be discussed to evaluate the performance of the CLSM. Here, the result for the curved light surface with stripe number $I = 512$ is selected:

$$\begin{array}{l} {[{{A_{512}},{B_{512}},{C_{512}},{D_{512}},{E_{512}},{F_{512}},{G_{512}}} ]^T} = \\ {[{ - 0.0107, - 0.0108, - 0.1499,1.0153,80.9382,3.6237,7.1419} ]^T}. \end{array}$$

Moreover, we used the ideal plane equation to fit the same curved light surface to evaluate the performance of the Light Plane Model (LPM) method; the obtained equation is:

$${x_c} + 0.0107{y_c} + 0.1484{z_c} - 82.09 = 0.$$

Figure 7 shows the residuals, under the two equations, of the intersection points between the curved light surface with stripe number I = 512 and the chessboard placed at the farther position. The maximum absolute residual and the root mean square error are 0.1597 mm and 0.0558 mm for the CLSM method, versus 0.8996 mm and 0.414 mm for the LPM method. The results demonstrate that the fitting in the CLSM method does not require a high-precision initial guess and that the proposed curved surface equation fits the curved light surface much better than the ideal plane equation.

Fig. 7. Residuals of the intersection points of the curved light surface with I = 512 and the chessboard placed at the farther position, under the two equations.

4.2. Experiment of 3D reconstruction

By performing the 3D reconstruction procedures described in Section 3.3, the chessboard used in the calibration was measured to compare the CLSM method and the LPM method. The measurement was conducted at eight different distances to evaluate the accuracy over the entire measuring range. The obtained point clouds were used to fit ideal planes and to calculate the errors between the measured points and the best-fit planes; the results are shown in Table 1. Over the entire measuring range, the root mean square errors of the CLSM method are significantly smaller than those of the LPM method. Another observable phenomenon is that the errors of both methods increase with working distance. The main reason is that CLSM and LPM are approximate models, so both deviate from the true model (although the deviation of the CLSM method is much smaller than that of the LPM method), and the influence of this deviation on triangulation increases with working distance. To compare the two methods in more detail, Fig. 8 shows the errors of all measured points from the chessboard placed at positions No. 4, No. 6, and No. 8. The errors of the LPM method are clearly distributed unevenly: those at the upper and lower parts of the chessboard exceed 3 mm, while those in the middle are below -1.5 mm. This uneven error distribution indicates that the measurements using the LPM method were strongly impacted by the distortion of the light stripes. The CLSM method performs better: its errors are evenly distributed and significantly smaller than those of the LPM method. The results verify that the CLSM method can greatly reduce the effect caused by the distortion of the light stripes.
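For reference, the plane-fitting evaluation behind Table 1 can be reproduced with a standard SVD-based best-fit plane; the following sketch (our illustration, not the authors' code) returns the RMS point-to-plane error of a point cloud:

```python
import numpy as np

def plane_rms_error(points):
    """Fit a best-fit plane to an (N, 3) point cloud by SVD and return
    the RMS of the point-to-plane distances."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                       # direction of least variance
    dist = (points - centroid) @ normal   # signed distances to the plane
    return np.sqrt(np.mean(dist ** 2))
```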

Fig. 8. Analysis of measurement results at different positions. (a) Photograph of the chessboard captured at position No. 4, and the measurement errors using the two methods; (b) the corresponding results obtained at position No. 6; (c) the corresponding results obtained at position No. 8.

Table 1. Measurement results of the chessboard using the CLSM method and the LPM method.

To further analyze the accuracy of the CLSM method and the LPM method, a standard dumbbell-shaped object was measured at six different positions over the measuring range. The measured object is shown in Fig. 9(a); each sphere has a diameter of 38.10 ${\pm}$ 0.01 mm, and the center-to-center distance of the two spheres is 201.10 ${\pm}$ 0.01 mm. Figure 9(b) shows one of the obtained point clouds of the standard dumbbell-shaped object. We used the point clouds to fit spheres by a least-squares approach and calculated the center-to-center distance. Table 2 shows the results: the root mean square error and the standard deviation of the center-to-center distance are 0.11 mm and 0.11 mm for the CLSM method, versus 0.55 mm and 0.41 mm for the LPM method. The standard deviation of the error equals the standard deviation of the center-to-center distance, so either can be computed to compare the stability of the two methods; the standard deviation of the error is therefore not computed here. The results show that the accuracy of the CLSM method is better than that of the LPM method.
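The least-squares sphere fit used here can be written in closed form by linearizing the sphere equation; the sketch below (our illustration, not the authors' code) recovers each sphere's center, from which the center-to-center distance follows:

```python
import numpy as np

def fit_sphere(points):
    """Linear least-squares sphere fit: expand |p - c|^2 = r^2 into
    2 p.c + (r^2 - |c|^2) = |p|^2 and solve the linear system for c, r."""
    A = np.hstack([2 * points, np.ones((len(points), 1))])
    b = (points ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    r = np.sqrt(sol[3] + center @ center)
    return center, r

# Center-to-center distance of the two dumbbell spheres (assumed inputs):
# c1, _ = fit_sphere(cloud_sphere1); c2, _ = fit_sphere(cloud_sphere2)
# distance = np.linalg.norm(c1 - c2)
```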

Fig. 9. Measurement of the standard dumbbell-shaped object. (a) Photograph of the standard dumbbell-shaped object; (b) an obtained point cloud.

Table 2. Measurement results of the standard dumbbell-shaped object.

Moreover, we conducted a 3D reconstruction of a plaster statue containing complex details to evaluate the system's performance. After removing spurious points from the obtained point cloud, the results are shown in Fig. 10. The reconstructed surface is smooth, and details such as the eyes, the nose defect, and the hair are recovered well.

Fig. 10. Reconstruction of the plaster statue using the CLSM method. (a) Photograph of the plaster statue; (b) the sixth gray coding pattern; (c) the first line-shift pattern; (d) reconstruction result.

In summary, through the three experiments above, the root mean square errors for the chessboard plane and the standard dumbbell-shaped object were obtained to evaluate the accuracy of our method and of the LPM method. The results demonstrate that the CLSM method performs better than the LPM method. The plaster statue was reconstructed to evaluate the performance of the proposed system on an object containing complex details.

5. Conclusion

In this paper, a calibration method was proposed for structured light 3D modeling systems incorporating a one-axis MEMS mirror that supports only unidirectional stripe projection. By analyzing the geometry of the projected light stripes, a curved surface equation called the curved light surface model was proposed as the mathematical model of the light stripes for the projector calibration and 3D reconstruction. Because the projection distortion is already accounted for in the CLSM, no additional processing is required to rectify it in the 3D reconstruction. The experimental results verify that the CLSM method can significantly reduce the effect of the light stripes' distortion; an accuracy of 0.11 mm was achieved when measuring a dumbbell-shaped object with a center-to-center distance of 201.10 mm. This method is also applicable to DLP projectors and other systems that project stripes in one direction with distortion caused by lenses or assembly error.

Funding

National Key Research and Development Program of China (2018YFF01010900).

Disclosures

The authors declare no conflicts of interest.

References

1. S. S. Gorthi and P. Rastogi, “Fringe projection techniques: Whither we are?” Opt. Lasers Eng. 48(2), 133–140 (2010). [CrossRef]  

2. M. Korosec, J. Duhovnik, and N. Vukasinovic, “Identification and optimization of key process parameters in noncontact laser scanning for reverse engineering,” Comput. Aided Des. 42(8), 744–748 (2010). [CrossRef]  

3. I. Léandry, C. Brèque, and V. Valle, “Calibration of a structured-light projection system: development to large dimension objects,” Opt. Lasers Eng. 50(3), 373–379 (2012). [CrossRef]  

4. E. Bagci, “Reverse engineering applications for recovery of broken or worn parts and re-manufacturing: three case studies,” Adv. Eng. Softw. 40(6), 407–418 (2009). [CrossRef]  

5. A. Haleem and M. Javaid, “3D scanning applications in medical field: a literature-based review,” Clin. Epidemiol. Glob. Health 7(2), 199–210 (2019). [CrossRef]

6. O. Hall-Holt and S. Rusinkiewicz, “Stripe boundary codes for real-time structured light range scanning of moving objects,” Proceedings of IEEE International Conference on Computer Vision (IEEE, 2001), Vol. 2, pp. 359–366.

7. S. Zhang, “Recent progresses on real-time 3d shape measurement using digital fringe projection techniques,” Opt. Lasers Eng. 48(2), 149–158 (2010). [CrossRef]  

8. C. Yu, X. Chen, and J. Xi, “Modeling and calibration of a novel one-mirror galvanometric laser scanner,” Sensors 17(12), 164 (2017). [CrossRef]  

9. T. Wissel, B. Wagner, and P. Stüber, “Data-driven learning for calibrating galvanometric laser scanners,” IEEE Sens J. 15(10), 5709–5717 (2015). [CrossRef]  

10. T. Wakayama and T. Yoshizawa, “Compact camera for three-dimensional profilometry incorporating a single MEMS mirror,” Opt. Eng. 51(1), 013601 (2012). [CrossRef]  

11. T. Yoshizawa, T. Wakayama, and H. Takano, “Applications of a MEMS scanner to profile measurement,” Proc. SPIE 6762, 67620B (2007). [CrossRef]  

12. J. Tauscher, W. O Davis, D. Brown, M. Ellis, Y. Ma, M. E. Sherwood, D. Bowman, M. P. Helsel, S. Lee, and J. W. Coy, “Evolution of MEMS scanning mirrors for laser projection in compact consumer electronics,” Proc. SPIE 7594, 75940A (2010). [CrossRef]  

13. A. D. Yalcinkaya, H. Urey, D. Brown, T. Montague, and R. Sprague, “Two-axis electromagnetic microscanner for high resolution displays,” IEEE J. Microelectromechan Syst. 15(4), 786–794 (2006). [CrossRef]  

14. J. Heikkila and O. Silven, “A four-step camera calibration procedure with implicit image correction,” Proc. of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 1106–1112 (1997).

15. I. Sobel, “On calibrating computer controlled cameras for perceiving 3-d scenes,” Artif. Intell. 5(2), 185–198 (1974). [CrossRef]  

16. R. Tsai, “A versatile camera calibration technique for high-accuracy 3d machine vision metrology using off-the-shelf tv cameras and lenses,” IEEE J. Robot Autom. 3(4), 323–344 (1987). [CrossRef]  

17. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000). [CrossRef]  

18. E. Zappa and G. Busca, “Fourier-transform profilometry calibration based on an exhaustive geometric model of the system,” Opt. Lasers Eng. 47(7-8), 754–767 (2009). [CrossRef]

19. Q. Hu, P. S. Huang, Q. Fu, and F. P. Chiang, “Calibration of a three-dimensional shape measurement system,” Opt. Eng. 42(2), 487–493 (2003). [CrossRef]

20. X. Mao, W. Chen, and X. Su, “Improved fourier-transform profilometry,” Appl. Opt. 46(5), 664–668 (2007). [CrossRef]  

21. H. Liu, W. H. Su, K. Reichard, and S. Yin, “Calibration-based phase-shifting projected fringe profilometry for accurate absolute 3D surface profile measurement,” Opt. Commun. 216(1-3), 65–80 (2003). [CrossRef]  

22. Y. Xiao, Y. Cao, and Y. Wu, “Improved algorithm for phase-to-height mapping in phase measuring profilometry,” Appl. Opt. 51(8), 1149–1155 (2012). [CrossRef]

23. P. J. Tavares and M. A. Vaz, “Linear calibration procedure for the phase-to-height relationship in phase measurement profilometry,” Opt. Commun. 274(2), 307–314 (2007). [CrossRef]

24. Y. Wen, S. Li, H. Cheng, X. Su, and Q. Zhang, “Universal calculation formula and calibration method in fourier transform profilometry,” Appl. Opt. 49(34), 6563–6569 (2010). [CrossRef]  

25. S. Zhang and P. S. Huang, “Novel method for structured light system calibration,” Opt. Eng. 45(8), 083601 (2006). [CrossRef]  

26. Z. Wei, L. Cao, and G. Zhang, “A novel 1D target-based calibration method with unknown orientation for structured light vision sensor,” Opt. Laser Technol. 42(4), 570–574 (2010). [CrossRef]

27. F. Zhou and G. Zhang, “Complete calibration of a structured light stripe vision sensor through planar target of unknown orientations,” Image Vision Comput. 23(1), 59–67 (2005). [CrossRef]  

28. R. Yang, S. Cheng, and Y. Chen, “Flexible and accurate implementation of a binocular structured light system,” Opt. Lasers Eng. 46(5), 373–379 (2008). [CrossRef]

29. X. Lu, Q. Wu, and H. Huang, “Calibration based on ray-tracing for multi-line structured light projection system,” Opt. Express 27(24), 35884–35894 (2019). [CrossRef]  

30. J. Gühring, “Dense 3-D surface acquisition by structured light using off-the-shelf components,” Proc. SPIE 4309, 220–231 (2000). [CrossRef]  

31. G. Sansoni, M. Carocci, and R. Rodella, “Three-dimensional vision based on a combination of gray-code and phase-shift light projection: analysis and compensation of the systematic errors,” Appl. Opt. 38(31), 6565–6573 (1999). [CrossRef]  

32. S. Zhang, “High-speed 3D shape measurement with structured light methods: A review,” Opt. Lasers Eng. 106, 33 (2018). [CrossRef]  

33. M. Trobina, “Error model of a coded-light range sensor,” Commun. Technol. Lab. (1995).
