Optica Publishing Group

Coding line structured light based on a line-scan camera and its calibration

Open Access

Abstract

In the conventional three-dimensional (3D) measurement techniques of the line-scan camera, projection systems based on surface structured light are a compromise adapted from traditional projection technology and suffer from complex calibration, complex structure and low accuracy. To this end, coding line structured light based on a coded line laser projection system is proposed to address the 3D measurement of the line-scan camera. The single-line projection and codeable characteristics of the coded line laser projection system (constructed from a point laser and a micro-electro-mechanical system (MEMS) scanning galvanometer, and modeled as a line projection model) are fully matched with the imaging mode of the line-scan camera. The 3D measurement model based on the height information, lateral information and absolute phase of the coding line structured light is derived. A multi-position flat display calibration method is proposed to calibrate the system parameters. In addition, in order to obtain an accurate absolute phase from the phase shift combined with binary code, a periodic error correction method based on expansion-corrosion is proposed to correct the phase error. In contrast to conventional structured light methods based on a line-scan camera, the proposed method has the advantages of high measurement accuracy, high efficiency, compactness and low cost. The experiments affirm that the coding line structured light is valid and the proposed calibration method is feasible. Experimental results also indicate that the proposed method performs well both for diffuse reflective surfaces and for reflective surfaces that are difficult to measure with conventional structured light methods based on a line-scan camera.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Optical imaging systems are mainly divided into two categories: area-scan imaging (matrix camera) and line-scan imaging (line-scan camera). Based on the matrix camera, a variety of optical 3D measurement methods have been developed to suit diverse measurement applications, such as surface structured light [1], line structured light [2] and phase measuring deflectometry [3]. The 3D measurement techniques based on the matrix camera are relatively mature and have been studied thoroughly, but measurement techniques based on the line-scan camera still urgently need to be studied.

As the name implies, the imaging mode of the line-scan camera is one-dimensional. Through scanning with a motion mechanism, a two-dimensional image of the object can be obtained by the line-scan camera [4]. The line-scan camera has become a research focus due to its wide dynamic range, high performance-price ratio, high resolution, high accuracy, high scanning frequency and especially its potential in 3D measurement, which are very suitable for computational optical imaging, aerospace, intelligent measurement and industrial inspection. Due to the difference in imaging modes and the lack of corresponding structured light projection equipment, it is difficult to directly apply the calibration and 3D measurement methods developed for the matrix camera to the line-scan camera. Currently, many scholars have conducted studies and achieved many research results on calibration [5,6] and 3D measurement of the line-scan camera. 3D measurement systems based on the line-scan camera are mainly divided into two categories: passive light source measurement systems [7–12] and active light source measurement systems [13–15].

Generally, passive light source measurement systems consist of two or more line-scan cameras, and the 3D information of the measured object is calculated by stereo matching from images captured under ambient light. The stochastic binary local descriptor and the energy minimization optimization approach are proposed by Valentín Kristián [7,8] to improve reconstruction accuracy and significantly enhance details in dense line-scan stereo matching. An affine scale invariant feature transform (SIFT) feature detector for establishing dense line-scan stereo correspondence is proposed by Zhang Pengchang [9]; its effectiveness is verified by measuring highly textured masks. A stereo matching method based on the digital image correlation (DIC) technique is proposed by Chen Pocheng [10]. When speckle patterns are sprayed on the measured object in advance, their system accuracy can reach the micrometer level in industrial defect inspection. A height-adaptive shading correction method is proposed by Timo Eckhard [11] to account for the dependence of scene illumination on the height of the scanned object with respect to the camera, which otherwise deteriorates the spectral and color measurement performance of a multi-spectral line-scan camera system. Erik Lilienblum's research [12] shows that a passive stereo line-scan camera system can only measure surfaces with significant texture features, and that the lateral resolution of the system is related to the size of the window used to compute texture correlation: the lateral resolution decreases as the window size increases.

Measurement accuracy and resolution depend highly on the richness of surface texture, which is the main factor limiting the applications of passive-light stereo line-scan camera systems. A similar conflict also appeared in measurement systems based on the matrix camera, and the surface structured light method of the matrix camera was developed to address it. The projection technology of surface structured light has also been investigated to form active light source measurement systems composed of line-scan cameras. For example, a stereo line-scan sensor composed of dual line-scan cameras along with a projector is proposed by Sun Bo [13]. A two-step calibration process is applied to calibrate the sensor, and the experimental results show that the sensor can achieve 0.2 mm measurement accuracy when measuring diffuse reflective surfaces. Although the introduction of the projector increased measurement accuracy and robustness, it also weakened the advantages of the line-scan camera: the size of the sensor is increased, the measurement speed is reduced and measurement deficiencies [14] (for example, measurement errors caused by reflected or scattered light on reflective surfaces) are also introduced. A hand-tailored surface light projection system composed of an LED and cylindrical lenses is applied to the line-scan structured light system proposed by Erik Lilienblum [15]. The method achieves high measurement accuracy on flat or slightly curved surfaces, but limited by the hand-tailored surface light projection system, the effective depth range is only 6 mm.

In summary, the projection technology of surface structured light based on the matrix camera provides a reference for the optical 3D measurement of the line-scan camera. However, the conventional projection technologies developed for surface structured light do not match the imaging properties (such as wide dynamic range, high resolution, high accuracy and especially the single-line imaging property) of the line-scan camera. Developing line structured light, instead of surface structured light, for the imaging properties of the line-scan camera is a more feasible research direction for the 3D measurement of the line-scan camera. Consequently, this paper is dedicated to the long-standing goal of constructing a line-scan structured light system based on a one-dimensional projection system. Therefore, coding line structured light based on a coded line laser projection system is proposed. The 3D measurement model with height information, lateral information and absolute phase of the coding line structured light is derived, and the corresponding calibration method is proposed to calibrate the system parameters. In addition, in order to obtain an accurate absolute phase, a periodic error correction method based on expansion-corrosion is proposed to correct the phase error.

The details of the proposed coding line structured light based on the line-scan camera are described in the next four sections. Section 2 presents the fundamentals of 3D measurement with the coding line structured light. The corresponding calibration methods are described in Section 3. The implementation and experimental results are given in Section 4. The last section is the conclusion.

2. Principle of coding line structured light

Structuring the projected light to provide dominant visual characteristics is a very important technique in the field of optical 3D measurement based on the matrix camera. For instance, the surface structured light technique is developed based on the structured light coding characteristics of the projector [1], and the line structured light technique is developed based on the line of light projected by a laser [16]. However, a projected-light structuring technique specific to the line-scan camera, needed to further develop a coding line structured light technique, remains to be studied. A micro-electro-mechanical system (MEMS) scanning system projecting a single-point laser is generally used in radar ranging (the time-of-flight method). In this paper, the intensity of the point laser scanned by the MEMS galvanometer is coded and proved to be suitable for projecting line structured light. Therefore, the coded line laser projection system is constructed from a point laser and a MEMS scanning galvanometer [17]. Then, the projection model of the coded line laser projection system is analyzed and the 3D reconstruction model is derived.

2.1 Coded line laser projection system

As shown in Fig. 1, the coded line laser projection system is constructed from a point laser and a MEMS scanning galvanometer. The scanning angle range and the total number of scanning points of the MEMS galvanometer are $\theta$ and ${n_p}$, respectively. The point laser can be coded to achieve intensity control at a specific scanning angle. Equidistant projection of light spots is ensured by controlling the scanning angle $\alpha$ of the galvanometer. The galvanometer is driven by a given sinusoidal current signal, and its rotational angular speed also follows the sinusoidal law. In order to ensure repeatability and speed, the sinusoidal current signal is fixed; that is, regardless of the projected pattern, the movement of the galvanometer is consistent. Hence, the time ${t_\alpha }$ at which the galvanometer reaches the scanning angle $\alpha$ can be accurately calculated. At time ${t_\alpha }$, the intensity of the laser is adjusted to the projected intensity. Therefore, it is only necessary to adjust the intensity signal of the laser according to the current driving signal of the galvanometer to realize high-speed projection of the coded stripe pattern. The scanning angle $\alpha$ corresponding to the $n$-th scanning point is expressed as Eq. (1):

$$\alpha = \frac{\theta }{2} - \arctan \left( {\frac{{{x_L}/2 - {x_n}}}{l}} \right),\quad {x_n} = \frac{n}{{{n_p}}}{x_L},$$
where $l$ is the scanning distance and ${x_L}$ is the scanning range in the $x$ direction. The coordinate of the $n$-th point in the scanning direction is ${x_n}$, $0 \le {x_n} \le {x_L}$.
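As a numerical illustration of Eq. (1), the scanning angle can be evaluated per point; the values of $\theta$, $l$ and ${x_L}$ below are hypothetical and chosen only so that the scan range matches the angular range, not taken from the paper's system.

```python
import math

def scan_angle(n, n_p, theta, l, x_L):
    """Scanning angle (rad) for the n-th of n_p equidistant points, Eq. (1).

    theta: total scanning angle range; l: scanning distance;
    x_L: scanning range in the x direction.
    """
    x_n = n / n_p * x_L                   # equidistant point on the scan line
    return theta / 2 - math.atan((x_L / 2 - x_n) / l)

# Hypothetical values: with x_L chosen to span the full angular range,
# the mid-scan point lies on the optical axis (alpha = theta / 2).
theta = math.radians(60)                  # 60-degree scan range
l = 350.0                                 # scanning distance (mm)
x_L = 2 * l * math.tan(theta / 2)         # matching scan range in x
```

Note that for equidistant spots in $x$, $\alpha$ is non-linear in $n$, which is why the laser intensity must be timed against the galvanometer's sinusoidal motion rather than switched at uniform time steps.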

Fig. 1. Phase shift coded by MEMS scanning galvanometer

The phase shift pattern (Fig. 1(c)) is generated by adjusting the intensity of the projected light point at each angle according to the pre-coded intensity coordinates. It should be noted that any encoding method can be implemented in this manner, not just phase shift encoding. In essence, the coded line laser projection system is modeled as a line projection model with focal length ${f_m}$ and resolution ${n_p}$ (the resolution is the total number of projection points), as shown in Fig. 1(b). The line structured light is emitted by the point light source, and the direction and position of each ray are described by the imaging points on the MEMS-virtual imaging line. This is the theoretical premise on which the coded line laser projection system can be combined with the line-scan camera to construct a coding line structured light measurement system on a strictly coplanar structure.

2.2 Measurement principle of the coding line structured light system

As shown in Fig. 2, the coding line structured light system is constructed by the coded line laser projection system described by the line projection model, and the line-scan camera. The phase shift pattern reflected by the measured surface, which is projected from the coded line laser projection system, is captured by the line-scan camera. The height information can be determined from the absolute phase which is restored from the captured phase shift patterns.

Fig. 2. Measurement principle. ${\phi _B}$ and ${\phi _H}$ represent the phase of B and H on the MEMS-virtual image line, respectively. C and M represent the optical center of line-scan camera and the coded line laser projection system, respectively.

In detail, ${o_m}$ is the intersection between the optical axis of the line projection model system (Fig. 1(b)) and the $x$-axis in the reference coordinate system $\{ f\}$. ${o_c}$ is the intersection between the optical axis of the line-scan camera and the $x$-axis in $\{ f\}$. The angle between the line connecting the optical center of the line-scan camera to the optical center of the line projection model system and the $x$-axis in $\{ f\}$ is $\theta$. The angle between the optical axis of the line projection model system and the $x$-axis in $\{ f\}$ is $\gamma$. By the similar triangle principle, triangle $\Delta CEF$ is similar to triangle $\Delta CB{O_c}$, and triangle $\Delta MEH$ is similar to triangle $\Delta ABH$. The coordinates of the two points $C$ and $B$ on the $x$-axis in $\{ f\}$ are ${x_C}$ and ${x_B}$, respectively. According to the principle of equal side length ratios in similar triangles:

$$\left\{ {\begin{array}{{c}} {\frac{{\overline {EF} }}{{|{x_B} - {x_C}|}} = \frac{{D\sin (\theta )}}{L},}\\ {\frac{{\overline {AB} }}{{D\cos (\theta ) - \overline {EF} }} = \frac{h}{{L - D\sin (\theta ) - h}},} \end{array}} \right.$$
where, $\overline {AB} \sin (\gamma ) = \frac{{\Delta \phi }}{{2\pi {f_0}}}$, $\Delta \phi = {\phi _B} - {\phi _H}$, ${f_0}$ is the space fundamental frequency.

Hence, according to Eq. (2):

$$\frac{{L - D\sin (\theta ) - h}}{h} = \frac{{2\pi {f_0}\sin (\gamma )}}{{\Delta \phi }}(D\cos (\theta ) - \frac{{D\sin (\theta )}}{L}|{x_B} - {x_c}|).$$
Therefore, Eq. (4) is determined from Eq. (3).
$$\frac{1}{h} = \frac{1}{{L - D\sin (\theta )}} + \frac{{2\pi {f_0}\sin (\gamma )}}{{\Delta \phi (L - D\sin (\theta ))}}(D\cos (\theta ) - \frac{{D\sin (\theta )}}{L}|{x_B} - {x_c}|),$$
where $L$, $D$, $\gamma$, $\theta$, ${x_B}$ and ${x_c}$ are system structure parameters, so Eq. (4) can be simplified to:
$$\frac{1}{h} = a + b\frac{1}{{\Delta \phi }},$$
where $a$ and $b$ are coefficient parameters. In the reference coordinate system $\{ f\}$, the relationship between the height information and the phase difference can be established after calibrating the coefficient parameters $a$ and $b$. The 3D measurement model of the coding line structured light is established from this relationship between phase and height, and the key factor affecting its reconstruction accuracy is the absolute phase information. The 3D measurement model is inspired by phase shifting profilometry [1,18], which solves for 3D information by establishing the relationship between height and phase and is widely used in structured light techniques based on matrix cameras. Compared with triangulation, phase shifting profilometry has the advantages of simple calibration and high accuracy. However, the projection mode of the coded line laser projection system differs considerably from that of a projector, so the coding line structured light cannot directly apply the measurement model of phase shifting profilometry. The above analysis demonstrates that when the coded line laser projection system is modeled as the line projection model, the 3D measurement model based on phase and height can be established. Compared with a projector-based projection system, the coded line laser projection system has no lens and therefore no depth-of-field limitation on measurement (the depth-of-field limitation is a key deficiency of structured light systems constructed with a projector, such as phase shifting profilometry [1] and active structured light systems based on a projector and line-scan cameras [13]). Compared with the projector, the proposed coded line laser projection system is also more suitable for line-scan cameras due to its single-line projection characteristics.
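Once $a$ and $b$ are calibrated, Eq. (5) is inverted directly per pixel. A minimal sketch, with hypothetical coefficient values chosen purely for illustration:

```python
def height_from_phase(dphi, a, b):
    """Invert Eq. (5): 1/h = a + b/dphi, hence h = 1/(a + b/dphi).

    dphi: phase difference; a, b: calibrated coefficient parameters.
    """
    return 1.0 / (a + b / dphi)

# Hypothetical coefficients for illustration only (not calibrated values)
a, b = 0.002, 1.5
```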

2.3 Phase solution

In the field of structured light, many phase encoding and decoding methods have been developed. Among them, the phase shift fringe method [19] is widely used because of its good information fidelity, simple calculation and high restoration accuracy. For the coded line laser projection system, the number of phase steps is set to $N$, and the period of the phase shift is defined as ${T_\alpha }$. Hence, from the perspective of the scanning angle $\alpha$ (Fig. 1), the phase shift fringe intensity projected by the coded line laser projection system is expressed as follows.

$${I_n}(\alpha ) = A + B\cos \left( {\frac{\alpha }{{{T_\alpha }}} + \frac{{n - 1}}{N}2\pi } \right),\quad n = 1,2, \cdots ,N.$$
The wrapped phase is calculated from Eq. (7).
$$\varphi (\alpha ) = {\tan ^{ - 1}}\left( {\frac{{\sum\limits_{n = 1}^N {{I_n}(\alpha )\sin \left( {\frac{{n - 1}}{N}2\pi } \right)} }}{{\sum\limits_{n = 1}^N {{I_n}(\alpha )\cos \left( {\frac{{n - 1}}{N}2\pi } \right)} }}} \right).$$
The range of the wrapped phase is $[ - \pi ,\pi )$. In order to determine the absolute phase $\phi (\alpha )$, various phase unwrapping methods have been developed to calculate the period ${\mathbf k}(\alpha )$ of the wrapped phase, such as the binary coding method [20], the multi-frequency heterodyne method [21] and the robust Chinese Remainder Theorem (rCRT) [22].
$$\phi (\alpha ) = {\mathbf k}(\alpha ) \cdot 2\pi + {\boldsymbol{\mathrm{\varphi}} }(\alpha ).$$
Among them, the binary coding method is widely used because of its small number of projections and high robustness. For example, suppose the number of phase shift steps and the number of periods are set to 4 and 30, respectively. In order to decode properly, the minimum numbers of encoded images for the binary coding method, the multi-frequency heterodyne method and the rCRT are 9, 12 and 12, respectively. For high speed measurement, the number of encoded images is the key factor in choosing the unwrapping method. However, due to noise in the captured images, periodic errors occur at the periodic boundaries of binary decoding and phase shift decoding. Therefore, in terms of the phase solution, periodic error correction methods are a research focus. Conventional periodic error correction methods are mainly periodic boundary error judgement methods [23–26]. These methods generally require information from pixels in the neighborhood of the target pixel as a judgement reference and are applicable to structured light techniques based on the matrix camera. Hence, they are not well suited to the line-scan camera, whose single-line imaging property provides little neighborhood information around the target pixel. In this paper, periodic error correction with expansion-corrosion is proposed to accurately determine the absolute phase. The binary decoding period errors are corrected by morphological image processing with the wrapped phase as the benchmark. The proposed periodic error correction method is fully compatible with the imaging characteristics of line-scan cameras. A correction example of the proposed method in the experiments is shown in Fig. 3 and Fig. 4.
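The fringe generation and wrapped-phase recovery of Eqs. (6) and (7) can be sketched numerically as follows; the values of $A$, $B$ and the number of fringe periods are illustrative assumptions, not the system's parameters.

```python
import numpy as np

# Synthetic N-step phase-shift fringes over the scan, Eq. (6); the true
# (unwrapped) phase is chosen here as a known ramp so it can be checked.
N, n_points = 4, 2400
theta_true = np.linspace(0, 2 * np.pi * 30, n_points)   # 30 fringe periods
I = [128 + 100 * np.cos(theta_true + 2 * np.pi * n / N) for n in range(N)]

# Wrapped phase by the N-step formula, Eq. (7); arctan2 keeps the correct
# quadrant, and the leading sign recovers theta_true modulo 2*pi.
num = sum(I_n * np.sin(2 * np.pi * n / N) for n, I_n in enumerate(I))
den = sum(I_n * np.cos(2 * np.pi * n / N) for n, I_n in enumerate(I))
phi_wrapped = -np.arctan2(num, den)
```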

Fig. 3. The periodic error corrected with expansion-corrosion. (a): the wrapped phase and the binary decoding; (b): the absolute phase with no periodic error correction; (c): the absolute phase with the periodic error correction by expansion-corrosion.

Fig. 4. Periodic error corrected with expansion-corrosion.

Figure 4 describes the expansion-corrosion based periodic error correction method for the period m. Based on the period of the wrapped phase, the period obtained by binary decoding is corrected by the morphological method. The algorithm steps of the periodic error correction with expansion-corrosion are expressed as follows.

Step 1: Determining the wrapped phase ${{\boldsymbol{\mathrm{\varphi}} }_{phi}}$ and initializing binary decoding period ${{\mathbf k}_{phi}}$

Step 2: Periodic error correction by expansion-corrosion (Fig. 4)

For the period m, the initial matrix parameters are defined as ${{\mathbf k}_m} = ({{\mathbf k}_{phi}} ={=} m)$, ${{\boldsymbol{\mathrm{\varphi}} }_{up}} = ({{\boldsymbol{\mathrm{\varphi}} }_{phi}} > 0)$ and ${{\boldsymbol{\mathrm{\varphi}} }_{down}} = ({{\boldsymbol{\mathrm{\varphi}} }_{phi}} \le 0)$. The expansion operator and corrosion operator are defined as ${{\mathbf D}_{ilation}}$ and ${{\mathbf E}_{rosion}}$, respectively. ${\otimes}$ represents the convolution operation and ${\cdot}$ represents the point multiplication operation. It is worth noting that the vectors ${{\boldsymbol{\mathrm{\varphi}} }_{up}}$ and ${{\boldsymbol{\mathrm{\varphi}} }_{down}}$ represent the vector intervals where the wrapped phase is greater than 0 and less than or equal to 0, respectively. For convenience of illustration, Fig. 4 only shows the interval of period m.

Step 2.1 Initial expansion operation: ${{\mathbf k}_{m\_ini}} = {{\mathbf k}_m} \otimes {{\mathbf D}_{ilation}}$

Step 2.2 Point multiplication:${{\mathbf k}_{m\_up\_1}} = {{\mathbf k}_{m\_ini}} \cdot {{\boldsymbol{\mathrm{\varphi}} }_{up}}$, ${{\mathbf k}_{m\_down\_1}} = {{\mathbf k}_{m\_ini}} \cdot {{\boldsymbol{\mathrm{\varphi}} }_{down}}$

Step 2.3 Corrosion operation:${{\mathbf k}_{m\_up\_2}} = {{\mathbf k}_{m\_up\_1}} \otimes {{\mathbf E}_{rosion}}$, ${{\mathbf k}_{m\_down\_2}} = {{\mathbf k}_{m\_down\_1}} \otimes {{\mathbf E}_{rosion}}$

Step 2.4 Expansion operation: ${{\mathbf k}_{m\_up\_\textrm{3}}} = {{\mathbf k}_{m\_up\_\textrm{2}}} \otimes {{\mathbf D}_{ilation}}$, ${{\mathbf k}_{m\_down\_\textrm{3}}} = {{\mathbf k}_{m\_down\_\textrm{2}}} \otimes {{\mathbf D}_{ilation}}$

Step 2.5 Point multiplication: ${{\mathbf k}_{m\_up\_correct}} = {{\mathbf k}_{m\_up\_3}} \cdot {{\boldsymbol{\mathrm{\varphi}} }_{up}}$, ${{\mathbf k}_{m\_down\_correct}} = {{\mathbf k}_{m\_down\_3}} \cdot {{\boldsymbol{\mathrm{\varphi}} }_{down}}$

Step 2.6 Period ${k_m}$ correction: ${\mathbf k}_m^{correct} = {{\mathbf k}_{m\_up\_correct}} + {{\mathbf k}_{m\_down\_correct}}$

The correction interference of other periodic errors on the target period is effectively eliminated by steps 2.3 and 2.4. ${\mathbf k}_m^{correct}$ is the periodic error correction result for ${{\mathbf k}_m}$. The binary period correction ${\mathbf k}_{phi}^{correct}$ is determined by traversing all binary decoding period values following the above steps, that is, ${\mathbf k}_{phi}^{correct} = \sum\limits_{m = 1}^n {m \cdot {\mathbf k}_m^{correct}}$, where n is the total number of periods of the wrapped phase. Therefore, the absolute phase is obtained from Eq. (9).

$$\phi _{phi}^{correct} = {\mathbf k}_{phi}^{correct} \cdot 2\pi + {\varphi _{phi}}.$$
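The steps above can be sketched in one dimension, matching the line-scan geometry, using a simple binary dilation/erosion; the window size and the test signal in the usage below are illustrative assumptions, not the paper's structuring elements.

```python
import numpy as np

def dilate(b, w=3):
    """1D binary dilation with a length-w window (wraps at the array ends)."""
    out = b.copy()
    for s in range(1, w // 2 + 1):
        out |= np.roll(b, s) | np.roll(b, -s)
    return out

def erode(b, w=3):
    """1D binary erosion with a length-w window (wraps at the array ends)."""
    out = b.copy()
    for s in range(1, w // 2 + 1):
        out &= np.roll(b, s) & np.roll(b, -s)
    return out

def correct_periods(k_phi, phi, periods, w=3):
    """Expansion-corrosion correction of the binary decoding periods (Step 2).

    k_phi: binary decoding period per pixel; phi: wrapped phase in [-pi, pi);
    periods: period values to traverse. Returns the corrected period map.
    """
    up = phi > 0                                  # phi_up interval
    down = ~up                                    # phi_down interval
    k_corr = np.zeros_like(k_phi)
    for m in periods:
        km = dilate(k_phi == m, w)                # Step 2.1: initial expansion
        ku = dilate(erode(km & up, w), w) & up    # Steps 2.2-2.5, upper half
        kd = dilate(erode(km & down, w), w) & down  # Steps 2.2-2.5, lower half
        k_corr += m * (ku | kd)                   # Step 2.6 and summation over m
    return k_corr
```

The window size `w` controls how wide a boundary error can be repaired; it should exceed the expected width of the decoding error but stay well below the fringe period in pixels.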

3. Calibration of coding line structured light system

As shown in Fig. 5, for the coding line structured light in the two dimensional coordinate system, the multi-position plane calibration method is applied to calibrate the coefficient parameters in the z direction. The calibration plane is placed on a linear guide and moved to n positions along the z direction from the zero datum. The absolute phase ${\phi _n}(x)$ corresponding to ${z_n}(x)$ is calculated by the phase solution method (section 2.3). The phase difference $\Delta {\phi _n}(x)$ is the difference between the absolute phases ${\phi _n}(x)$ and ${\phi _\textrm{0}}(x)$. In order to inhibit systematic errors, a second-order polynomial is introduced to calibrate the relationship between the height information and the phase difference. Therefore, the coefficient parameters are determined by solving the linear equations (Eq. (10)).

$$\left\{ {\begin{array}{{c}} {\frac{1}{{{z_1}(x)}} = a(x) + \frac{{b(x)}}{{\Delta {\phi_\textrm{1}}(x)}} + \frac{{c(x)}}{{\Delta {\phi_\textrm{1}}{{(x)}^2}}},}\\ {\frac{1}{{{z_2}(x)}} = a(x) + \frac{{b(x)}}{{\Delta {\phi_2}(x)}} + \frac{{c(x)}}{{\Delta {\phi_2}{{(x)}^2}}},}\\ \vdots \\ {\frac{1}{{{z_n}(x)}} = a(x) + \frac{{b(x)}}{{\Delta {\phi_n}(x)}} + \frac{{c(x)}}{{\Delta {\phi_n}{{(x)}^2}}}.} \end{array}} \right.$$
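For each pixel x, the system in Eq. (10) is linear in $a(x)$, $b(x)$, $c(x)$ and can be solved by least squares. A sketch with synthetic calibration data (the coefficient values and phase differences below are invented for illustration):

```python
import numpy as np

def fit_z_coeffs(z, dphi):
    """Least-squares fit of a, b, c in 1/z = a + b/dphi + c/dphi**2, Eq. (10),
    for one camera pixel over the n calibration positions."""
    A = np.column_stack([np.ones_like(dphi), 1.0 / dphi, 1.0 / dphi**2])
    coeffs, *_ = np.linalg.lstsq(A, 1.0 / z, rcond=None)
    return coeffs                                 # a(x), b(x), c(x)

# Synthetic example: heights generated from known (hypothetical) coefficients
a, b, c = 0.001, 0.5, 2.0
dphi = np.linspace(50.0, 200.0, 30)               # 30 calibration positions
z = 1.0 / (a + b / dphi + c / dphi**2)
```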
However, conventional calibration methods generally focus on the z direction information and ignore the x direction. In this paper, the multi-position flat display calibration method is proposed to calibrate the z direction and x direction information simultaneously. The calibration plane is replaced by a flat display, and the physical pixel coordinates on the flat display are used to calibrate the coordinate information in the x direction. The correspondence between a pixel of the line-scan camera and the physical pixel coordinates on the flat display is determined by the phase solution method. For a specific pixel u, the relationship between the x direction coordinate and the z direction coordinate is expressed as follows.
$${x_n} = {z_n}\tan (\beta ) + {x_0},$$
where, $\beta$ is the angle between the light direction determined by the pixel u and the plane determined by the flat display. Therefore, Eq. (11) can be simplified to Eq. (12).
$${x_n} = {d_n} \cdot {z_n} + {x_0},$$
where ${x_0}$ is the x direction coordinate on the zero datum. It can be seen that the x direction coordinate is a linear function of the z direction coordinate. It is worth noting that the flat display is placed perpendicular to the optical axis of the line-scan camera.
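Since Eq. (12) is linear in ${z_n}$, the per-pixel coefficients $d$ and ${x_0}$ follow from a first-order polynomial fit; the ground-truth values and noise level below are hypothetical, for illustration only.

```python
import numpy as np

# Hypothetical ground truth for one pixel: slope d = tan(beta) and
# zero-datum coordinate x_0 (units: mm).
d_true, x0_true = 0.12, 53.0

z = np.linspace(0.0, 145.0, 30)                  # 30 display positions along z
rng = np.random.default_rng(0)
x_obs = d_true * z + x0_true + rng.normal(0.0, 0.01, z.size)  # noisy x readings

d_fit, x0_fit = np.polyfit(z, x_obs, 1)          # Eq. (12): x = d*z + x_0
```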

Fig. 5. The calibration of the coding line structured light system

The detailed algorithm steps of the multi-position flat display calibration method are expressed as follows.

Step 1: The line structured light patterns, which are projected by the coded line laser projection system, are captured by the line-scan camera. The absolute phase is calculated from these captured images.

Step 2: The structured light patterns, which are projected by the flat display, are captured by the line-scan camera. The physical pixel coordinates are determined from the absolute phase, which is calculated from these captured patterns.

Step 3: Move the flat display along the optical axis of the line-scan camera and repeat steps 1 and 2. The moving distance of the flat display is taken as the z direction coordinate.

Step 4: The coefficient parameters of the z direction are determined by solving the linear equations established from the phase difference and the z direction coordinates. The coefficient parameters of the x direction are determined by solving the linear equations established from the physical pixel coordinates on the flat display and the z direction coordinates.

4. Implementation and experimental verification

4.1 System configuration of the coding line structured light system

The coding line structured light system is implemented based on the above measurement model, as shown in Fig. 6. The imaging unit consists of a Basler line-scan camera with a resolution of 2048×1 pixels and a lens with a focal length of 12 mm. The laser wavelength, scanning frequency, scanning angle range and output power of the coded line laser projection system are 658 nm, 1760 Hz, 60 degrees and 100 mW, respectively. Through a mechanical joint element, the line-scan camera and the coded line laser projection system are combined into the coding line structured light system, as shown in Fig. 6(b). The pixel resolution of the laser coding line is 2400×1 pixels and the period of the phase shift fringe is 80. Hence, the number of bits in the binary code is 5. The measurement speed can reach 195 frames per second. The coding line structured light system is fixed to the three-axis mobile platform through the mechanical joint element. The optical axis of the line-scan camera is parallel to the $z$-axis of the three-axis mobile platform. During measurement, the coding line structured light can only obtain coordinate information in the x and z directions; the coordinate information in the y direction is the y-axis coordinate of the three-axis mobile platform.

Fig. 6. The coding line structured light system. (a): the experimental scene; (b) is an enlarged view of the red box in (a); (c) is the coded line laser projection system, which is shown in the blue box in (b); (d) is the coded phase shift fringe pattern, which is projected by the coded line laser projection system and captured by the line-scan camera (e).

4.2 Calibration experiments

In order to calibrate the coefficient parameters (including those in the z direction and the x direction), the flat display is placed 350 mm below the coding line structured light system. The coding line structured light system is moved to 30 positions at regular intervals along the $z$-axis of the three-axis mobile platform, and the $z$-axis coordinate of each position is recorded. First, at each position, the phase shift fringe and binary coded patterns projected by the coded line laser projection system are captured by the line-scan camera, and the precise absolute phase is determined by the proposed expansion-corrosion based periodic error correction method. The absolute phases of 15 positions are shown in Fig. 7. Then, the physical pixel coordinates are obtained from fringe images which are projected by the flat display and captured by the line-scan camera. The coefficient parameters are determined by solving the linear equations established from the phase difference, the physical pixel coordinates and the $z$-axis coordinates. The coefficient parameters of the z direction are shown in Fig. 8(a), and those of the x direction are shown in Fig. 8(b).

Fig. 7. The absolute phase of 15 positions. (a): The absolute phase; (b) is the enlarged view of red box in (a).

Fig. 8. The coefficient parameters. (a): The coefficient parameters of z direction; (b): The coefficient parameters of x direction.

4.3 Measurement experiments

In order to verify the accuracy of the coding line structured light system, a stepped plane consisting of standard gauge blocks and a standard cylindrical surface are measured in the measurement experiments. As shown in Fig. 9, the point cloud data of the three planes of the stepped plane are subjected to plane fitting. The root mean square (RMS) deviations of the measurement error for plane 1, plane 2 and plane 3 are 0.047 mm, 0.031 mm and 0.043 mm, respectively. The maximum plane error is less than 0.2 mm.

Fig. 9. The measurement of stepped plane. (a): the point cloud of the stepped plane; (b), (c) and (d) are the error between the point cloud and fitted plane of plane 1, plane 2 and plane 3, respectively.

As shown in Fig. 10, the point cloud data of the cylindrical surface are fitted to a cylinder. The diameters of the fitted cylinder and the actual cylinder are 85.023 mm and 85 mm, respectively; hence, the measured diameter deviation is 0.023 mm. The maximum error of the cylindrical measurement is less than 0.2 mm, and the RMS deviation of the measured data is 0.047 mm. The measurement results of a diffuse reflective surface (a mask) are shown in Fig. 11.
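The diameter check can be sketched in a similar way. Assuming for simplicity that the cylinder axis is aligned with the scan direction, each cross-section is a circle in the x-z plane, and an algebraic (Kasa) circle fit recovers the centre and radius. The radius, centre and noise values below are assumptions for illustration:

```python
import numpy as np

# Synthetic points on the visible arc of a cylinder cross-section
rng = np.random.default_rng(1)
theta = rng.uniform(0.3, np.pi - 0.3, 3000)       # visible arc of the surface
r_true, cx, cz = 42.5, 10.0, -60.0                # radius 42.5 mm -> diameter 85 mm
x = cx + r_true*np.cos(theta) + rng.normal(0.0, 0.04, 3000)
z = cz + r_true*np.sin(theta) + rng.normal(0.0, 0.04, 3000)

# Kasa fit: the circle (x-a)^2 + (z-b)^2 = r^2 rewritten as the linear
# system x^2 + z^2 = 2a x + 2b z + (r^2 - a^2 - b^2)
A = np.column_stack([2*x, 2*z, np.ones_like(x)])
sol, *_ = np.linalg.lstsq(A, x**2 + z**2, rcond=None)
a_c, b_c, c_c = sol
radius = np.sqrt(c_c + a_c**2 + b_c**2)
print(f"fitted diameter: {2*radius:.3f} mm")      # close to 85 mm
```

A full 3D cylinder fit (axis direction included among the unknowns) would be solved with nonlinear least squares, but the cross-section view is enough to illustrate how the diameter deviation is obtained.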

Fig. 10. The measurement of cylindrical surface. (a): the point cloud of the cylindrical surface; (b): the error between the point cloud and the fitted cylindrical surface.

Fig. 11. The measurement of the diffuse reflective surface. (a): the mask; (b): the point cloud of the mask; (c) is the surface description of the point cloud in (b).

In addition, the coding line structured light system can also measure highly reflective surfaces, such as ceramic objects, which are difficult to measure with conventional structured light techniques based on a line-scan camera. As shown in Fig. 12(a), when a laser coding line is projected onto a reflecting or scattering surface, the reflected or scattered light interferes with the imaging of a matrix (area-scan) camera, so measurement errors caused by reflected or scattered light are introduced into conventional surface structured light and line structured light. With the method proposed in this paper, however, the laser coding line is imaged directly on the line-scan camera, and the interfering reflected or scattered light is not imaged on the line-scan camera. That is, the reflected or scattered light does not cause measurement error for the proposed coding line structured light. The measurement results for the reflective surface of ceramic dishes are shown in Fig. 12.

Fig. 12. The measurement of the reflective or scattering surface. (a): the ceramic; (b): the point cloud of the ceramic; (c) is the surface description of the point cloud in (b).

5. Conclusions

In this paper, coding line structured light based on a line-scan camera is proposed. The coded line laser projection system, composed of a point laser and a MEMS scanning galvanometer, is used to project the line structured light patterns. The 3D measurement coordinates are determined by establishing the relationships between the absolute phase and the height and lateral information, and the periodic error correction based on expansion-corrosion is proposed to obtain the accurate absolute phase. Compared with conventional structured light systems based on a line-scan camera, the proposed coding line structured light is more compact, achieves high measurement accuracy, and is more robust in measuring reflective surfaces. In the absolute phase calculation mode of the proposed method, the measurement speed can reach 195 frames per second.

The proposed coding line structured light based on the coded line laser projection system is a practical approach to 3D measurement with a line-scan camera. The measurement results for diffuse reflective and reflective surfaces show that the coding line structured light has high potential for industrial applications. The proposed measurement method can be extended to other measurement models, such as stereo coding line structured light composed of two line-scan cameras and one coded line laser projection system, and can be further improved and extended to high-speed profilometry.

In future research, we will pursue a higher measurement rate by studying novel coding methods (such as high-speed 3D reconstruction from a single fringe image [27]) or by constructing a faster MEMS scanning galvanometer, and we will investigate other measurement modes for higher measurement accuracy (such as stereo coding line structured light).

Funding

National Natural Science Foundation of China (51535004, 51975344); China Postdoctoral Science Foundation (2019M662591).

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

1. X. Zhang, C. Li, Q. Zhang, and D. Tu, “Rethinking phase height model of fringe projection profilometry: A geometry transformation viewpoint[J],” Opt. Laser Technol. 108, 69–80 (2018). [CrossRef]  

2. X. Lu, Q. Wu, and H. Huang, “Calibration based on ray-tracing for multi-line structured light projection system[J],” Opt. Express 27(24), 35884–35894 (2019). [CrossRef]  

3. C. Li, Y. Li, Y. Xiao, X. Zhang, and D. Tu, “Phase measurement deflectometry with refraction model and its calibration[J],” Opt. Express 26(26), 33510–33522 (2018). [CrossRef]  

4. G. Lopes, A. F. Ribeiro, N. Sillero, L. Gonçalves-Seco, C. Silva, and M. Franch, “High resolution trichromatic road surface scanning with a line-scan camera and light emitting diode lighting for road-kill detection[J],” Sensors 16(4), 558 (2016). [CrossRef]  

5. R. Liao, J. Zhu, L. Yang, J. Lin, B. Sun, and J. Yang, “Flexible calibration method for line-scan cameras using a stereo target with hollow stripes[J],” Opt. Lasers Eng. 113, 6–13 (2019). [CrossRef]  

6. S. Donné, H. Luong, S. Dhondt, N. Wuyts, I. Dirk, and B. Goossens, “Robust plane-based calibration for linear cameras[C]//2017 IEEE International Conference on Image Processing (ICIP),” IEEE, 36–40 (2017).

7. D. Antensteiner, B. Blaschitz, C. Eisserer, R. Huber-Mörk, and K. Valentín, “Line-scan stereo for 3D ground reconstruction[J],” Bildverarbeitung 2016, 209 (2016).

8. R. Huber-Mörk, K. Valentín, B. Blaschitz, and S. Stolc, “Line-Scan Stereo Using Binary Descriptor Matching and Regularization[J],” Electron. Imaging 2017(9), 61–66 (2017). [CrossRef]  

9. P. Zhang, T. Takeda, J. A. Toque, Y. Murayama, and A. Ide-Ektessabi, “A line-scan camera based stereo method for high resolution 3D image reconstruction[C]//Measuring,” Proc. SPIE 9018, 901807 (2014). [CrossRef]  

10. P. C. Chen, M. Chang, J. L. Gabayno, X. Wang, and X. Liu, “High resolution topography utilizing a line-scan stereo vision technique[J],” Microsyst. Technol. 23(6), 1653–1659 (2017). [CrossRef]  

11. T. Eckhard and M. Schnitzlein, “Height adaptive shading correction for line-scan stereo imaging based multi-spectral reflectance measurements[C]//Forum Bildverarbeitung 2016,” KIT Sci. Publ. 84(7-8), 197 (2016).

12. T. Ilchev, E. Lilienblum, and B. Joedicke, “A Stereo Line Sensor System to High Speed Capturing of Surfaces in Color and 3D Shape[C]//GRAPP/IVAPP,” 809–812 (2012).

13. B. Sun, J. Zhu, L. Yang, and G. Yin, “Stereo line-scan sensor calibration for 3D shape measurement[J],” Appl. Opt. 56(28), 7905–7914 (2017). [CrossRef]  

14. B. Sun, L. Yang, Y. Guo, J. Zhu, and R. Liao, “A stereo line-scan system for 3D shape measurement of fast-moving objects[C]//2017 International Conference on Optical Instruments and Technology: Optoelectronic Measurement Technology and Systems,” Proc. SPIE 10621, 106210A (2018). [CrossRef]  

15. E. Lilienblum and A. Al-Hamadi, “A structured light approach for 3-D surface reconstruction with a stereo line-scan system[J],” IEEE Trans. Instrum. Meas. 64(5), 1258–1266 (2015). [CrossRef]  

16. T. Qingguo, Z. Xiangyu, M. Qian, and G. Bao, “Utilizing polygon segmentation technique to extract and optimize light stripe centerline in line-structured laser 3D scanner[J],” Pattern Recogn. 55, 100–113 (2016). [CrossRef]  

17. W. Shi, C. Chen, J. Jivraj, Y. Dobashi, and V. X. Yang, “2D MEMS-based high-speed beam-shifting technique for speckle noise reduction and flow rate measurement in optical coherence tomography[J],” Opt. Express 27(9), 12551–12564 (2019). [CrossRef]  

18. C. Zuo, S. Feng, and L. Huang, “Phase shifting algorithms for fringe projection profilometry: A review[J],” Opt. Lasers Eng. 109, 23–59 (2018). [CrossRef]  

19. S. Feng, C. Zuo, T. Tao, Y. Hu, M. Zhang, Q. Chen, and G. Gu, “Robust dynamic 3-D measurements with motion-compensated phase-shifting profilometry[J],” Opt. Lasers Eng. 103, 127–138 (2018). [CrossRef]  

20. D. Zheng, Q. Kemao, F. Da, D. Fei, and H. S. Seah, “Ternary Gray code-based phase unwrapping for 3D measurement using binary patterns with projector defocusing[J],” Appl. Opt. 56(13), 3660–3665 (2017). [CrossRef]  

21. L. Han, Z. Li, K. Zhong, and W. Li Z, “Vibration Detection and Motion Compensation for Multi-Frequency Phase-Shifting-Based 3D Sensors[J],” Sensors 19(6), 1368 (2019). [CrossRef]  

22. M. Zhang, Q. Chen, T. Tao, F. Shi, H. Yan, and L. Hui, “Robust and efficient multi-frequency temporal phase unwrapping: optimal fringe frequency and pattern sequence selection[J],” Opt. Express 25(17), 20381–20400 (2017). [CrossRef]  

23. H. Lin, Z. Ma, C. Yao, and H. Wang, “3D measurement technology based on binocular vision using a combination of gray code and phase-shift structured light [J],” Acta Electron. Sinica 41(1), 24–28 (2013). [CrossRef]  

24. D. Zheng, F. Da, Q. Kemao, and H. S. Seah, “Phase-shifting profilometry combined with Gray-code patterns projection: unwrapping error removal by an adaptive median filter[J],” Opt. Express 25(5), 4700–4713 (2017). [CrossRef]  

25. D. Zheng and F. Da, “Self-correction phase unwrapping method based on Gray-code light[J],” Opt. Lasers Eng. 50(8), 1130–1139 (2012). [CrossRef]  

26. L. Q. Bui and S. Lee, “Boundary Inheritance Codec for high-accuracy structured light three-dimensional reconstruction with comparative performance evaluation[J],” Appl. Opt. 52(22), 5355–5370 (2013). [CrossRef]  

27. C. Zuo, T. Tao, S. Feng, L. Huang, A. Asundi, and Q. Chen, “Micro Fourier Transform Profilometry (μFTP): 3D shape measurement at 10,000 frames per second[J],” Opt. Lasers Eng. 102, 70–91 (2018). [CrossRef]  



Figures (12)

Fig. 1.
Fig. 1. Phase shift coded by MEMS scanning galvanometer
Fig. 2.
Fig. 2. Measurement principle. ${\phi _B}$ and ${\phi _H}$ represent the phase of B and H on the MEMS-virtual image line, respectively. C and M represent the optical center of line-scan camera and the coded line laser projection system, respectively.
Fig. 3.
Fig. 3. The periodic error corrected with expansion-corrosion. (a): the wrapped phase and the binary decoding; (b): the absolute phase with no periodic error correction; (c): the absolute phase with the periodic error correction by expansion-corrosion.
Fig. 4.
Fig. 4. Periodic error corrected with expansion-corrosion.
Fig. 5.
Fig. 5. The calibration of the coding line structured light system
Fig. 6.
Fig. 6. The coding line structured light system. (a): the experimental scene; (b) is an enlarged view of the red box in (a); (c) is the coded line laser projection system, which is shown of the blue box in (b); (d) is the coded phase shift fringe pattern, which is projected by the coded line laser projection system and captured by line-scan camera (e).
Fig. 7.
Fig. 7. The absolute phase of 15 positions. (a): The absolute phase; (b) is the enlarged view of red box in (a).
Fig. 8.
Fig. 8. The coefficient parameters. (a): The coefficient parameters of z direction; (b): The coefficient parameters of x direction.
Fig. 9.
Fig. 9. The measurement of stepped plane. (a): the point cloud of the stepped plane; (b), (c) and (d) are the error between the point cloud and fitted plane of plane 1, plane 2 and plane 3, respectively.
Fig. 10.
Fig. 10. The measurement of cylindrical surface. (a): the point cloud of the cylindrical surface; (b): the error between the point cloud and the fitted cylindrical surface.
Fig. 11.
Fig. 11. The measurement of the diffuse reflective surface. (a): the mask; (b): the point cloud of the mask; (c) is the surface description of the point cloud in (b).
Fig. 12.
Fig. 12. The measurement of the reflective or scattering surface. (a): the ceramic; (b): the point cloud of the ceramic; (c) is the surface description of the point cloud in (b).

Equations (12)


$$\alpha = \frac{\theta}{2} - \operatorname{atan}\!\left( \frac{x_L/2 - x_n}{l} \right), \qquad x_n = \frac{n}{n_p}\,x_L,$$
$$\begin{cases} \dfrac{\overline{EF}}{|x_B - x_C|} = \dfrac{D\sin(\theta)}{L}, \\[1mm] \dfrac{\overline{AB} - D\cos(\theta)}{\overline{EF}} = \dfrac{h}{L - D\sin(\theta) - h}, \end{cases}$$
$$\frac{L - D\sin(\theta) - h}{h} = \frac{2\pi f_0 \sin(\gamma)}{\Delta\phi}\left( D\cos(\theta) - \frac{D\sin(\theta)}{L}\,|x_B - x_C| \right).$$
$$\frac{1}{h} = \frac{1}{L - D\sin(\theta)} + \frac{2\pi f_0 \sin(\gamma)}{\Delta\phi\,\left( L - D\sin(\theta) \right)}\left( D\cos(\theta) - \frac{D\sin(\theta)}{L}\,|x_B - x_C| \right),$$
$$\frac{1}{h} = a + b\,\frac{1}{\Delta\phi},$$
$$I_n(\alpha) = A + B\cos\!\left( \frac{\alpha}{T_\alpha} + \frac{n-1}{N}\,2\pi \right), \quad n = 1, 2, \ldots, N.$$
$$\varphi(\alpha) = \tan^{-1}\!\left( \frac{\sum_{n=1}^{N} I_n(\alpha)\sin\!\left( \frac{n-1}{N}2\pi \right)}{\sum_{n=1}^{N} I_n(\alpha)\cos\!\left( \frac{n-1}{N}2\pi \right)} \right).$$
$$\phi(\alpha) = k(\alpha)\,2\pi + \varphi(\alpha).$$
$$\phi_{\mathrm{correct}} = k_{\mathrm{correct}}\,2\pi + \varphi.$$
$$\begin{cases} \dfrac{1}{z_1(x)} = a(x) + \dfrac{b(x)}{\Delta\phi_1(x)} + \dfrac{c(x)}{\Delta\phi_1(x)^2}, \\[1mm] \dfrac{1}{z_2(x)} = a(x) + \dfrac{b(x)}{\Delta\phi_2(x)} + \dfrac{c(x)}{\Delta\phi_2(x)^2}, \\[1mm] \quad\vdots \\[1mm] \dfrac{1}{z_n(x)} = a(x) + \dfrac{b(x)}{\Delta\phi_n(x)} + \dfrac{c(x)}{\Delta\phi_n(x)^2}. \end{cases}$$
$$x_n = z_n \tan(\beta) + x_0,$$
$$x_n = d\,z_n + x_0.$$
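The phase-shifting relations above (the N shifted fringe intensities, the arctangent expression for the wrapped phase, and the absolute phase formed from the fringe order) can be sketched numerically. In the sketch below the fringe order $k$ is taken from the known ground-truth phase, standing in for the binary-code decoding (and its expansion-corrosion correction) used in the actual system:

```python
import numpy as np

# N-step phase shifting: N fringe images with phase steps of 2*pi*n/N.
N = 4
alpha = np.linspace(0.0, 1.0, 500)
phi_true = 6.0 * np.pi * alpha                 # three fringe periods
A0, B0 = 0.5, 0.4                              # background and modulation (assumed)
I = [A0 + B0 * np.cos(phi_true + 2*np.pi*n/N) for n in range(N)]

# Wrapped phase from the sine/cosine-weighted sums of the N intensities
num = sum(I[n] * np.sin(2*np.pi*n/N) for n in range(N))
den = sum(I[n] * np.cos(2*np.pi*n/N) for n in range(N))
wrapped = np.arctan2(-num, den)                # wrapped phase in (-pi, pi]

# Absolute phase: add 2*pi times the fringe order k (here derived from the
# ground truth, in place of the binary-code decoding)
k = np.round((phi_true - wrapped) / (2*np.pi))
absolute = wrapped + 2*np.pi*k
print(np.max(np.abs(absolute - phi_true)))     # numerically zero
```

The sign convention inside `arctan2` depends on how the phase steps enter the cosine; the version above matches the intensity model used in the simulation.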