
Accuracy improvement of multi-view 3D laser scanning measurements based on point cloud error correction and global calibration optimization

Open Access

Abstract

Multi-camera laser scanning measurement is emerging as a pivotal element in three-dimensional (3D) optical measurement. It reduces occlusion and enables the acquisition of more 3D data, but it also makes it more difficult for the system algorithms to achieve high measurement accuracy. To improve measurement accuracy, the global calibration and error correction issues introduced by multi-view systems urgently need to be addressed. An accuracy improvement method for multi-view 3D laser scanning measurements based on point cloud error correction and global calibration optimization is therefore proposed. First, a planar asymmetric circular grid target is designed to calibrate the cameras, laser planes, and initial global transformation matrices of the multi-view 3D laser scanning probe simultaneously. The influence that the position of the measured point on the laser plane has on the measurement error is analyzed, and what we believe to be novel mathematical error influencing factors are then modelled for each point. Furthermore, what we believe to be a novel error model based on a backpropagation (BP) neural network is established for the regression analysis of the mathematical error influencing factors and the measurement deviation of each point, based on measurements of a standard sphere plate. The final measurement is improved by correcting the point cloud of each camera of the multi-view system and by global calibration optimization based on the error model. The proposed method is reliable and easy to implement, since it requires only a standard sphere plate and a planar target. Several experiments show that the method effectively improves the measurement accuracy of a multi-view 3D laser scanning probe through point cloud error correction and calibration optimization.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Optical 3D measurement systems are appearing increasingly frequently in scientific research and industrial applications, owing to their non-contact nature, significant reduction in manufacturing costs, and notable decrease in inspection time [1-3]. Laser scanning measurement has long been a popular subfield of optical 3D measurement and is frequently applied in weld seam detection, large equipment assembly, reverse engineering, and other fields [4,5]. Compared with the popular monocular laser scanning measurement, multi-view laser scanning measurement can alleviate occlusion and acquire more three-dimensional information, and it is emerging as the new preferred choice [6]. However, this also increases the complexity of achieving high-accuracy measurements. The final measurement result of multi-view laser scanning is generally a fusion of several point clouds, so the main sources of error are calibration errors, particularly global calibration errors, and errors in the point clouds themselves. Therefore, accurate calibration and point cloud error correction are prerequisites for high-precision measurement with multi-camera laser scanning measurement systems.

For monocular line laser scanning measurements, there are already a number of well-established methods for calibrating the parameters of the camera and laser plane [5,7,8]. In multi-camera vision systems, including multi-camera laser scanning measurement systems, however, the relation between the cameras is fixed, so high-precision global calibration that integrates the multiple cameras into a common coordinate system is crucial. Depending on the calibration target, typical global calibration methods can be classified into those based on one-dimensional (1D), two-dimensional (2D), and three-dimensional (3D) targets. A global calibration method was presented by utilizing the co-linearity property of feature points on a 1D target [9]. Wang et al. [10] applied a 1D calibration target to improve the accuracy of global calibration. Although a 1D target has a simple structure, it offers only a limited number of feature points and the calibration accuracy is typically insufficient. Huang et al. [11] designed a 3D cubic target with chessboard patterns on four sides for multi-camera global calibration. Xu et al. [12] put forward a global calibration method based on a 3D calibration board and a height gauge for simultaneous global calibration and laser plane calibration. Sun et al. [13] proposed a global calibration method for multiple vision sensors based on 3D sphere targets. However, fabricating a 3D object with an adequate number of feature points is costly. Zhang [14] proposed a flexible camera calibration method using a planar pattern target, which is the most commonly employed approach for camera calibration. Furthermore, some researchers have extended calibration methods based on planar targets to the global calibration of multi-camera systems, owing to their ease of implementation and relatively high accuracy. Yang et al. [15] employed a planar checkerboard target for the global calibration of a binocular structured light system. Liu et al. [16] utilized a planar pattern to optimize the global calibration of a binocular vision system based on multiple constraints. Gai et al. [17] applied a planar circular point array for the global calibration optimization of multi-view systems. Planar targets have the advantage of enabling simultaneous calibration of laser plane equations, individual camera parameters, and global calibration parameters.

Currently, the accuracy of laser scanning measurement is generally lower than that of coordinate measuring machines (CMMs) and other contact measurement systems. Consequently, numerous researchers have concentrated on error correction methods to improve laser scanning measurement accuracy. Factors causing measurement errors can be categorized into external and internal factors, with internal factors being the focal point of laser scanning measurement research. Improvements in laser stripe feature extraction accuracy [18-20] and system calibration accuracy [21-23] are research highlights in internal error correction. Nevertheless, only a limited number of researchers have carried out systematic studies of external error correction. Van Gestel et al. [24] identified the influence of several factors on the systematic and random errors of a line laser scanner on the basis of point clouds measured on a flat reference plane. Isheil et al. [25] built an experimental global model relating measurement errors to external factors with a setup consisting of a gauge block and a reference sphere. However, the sample size was relatively small, the error model did not reach point-level accuracy, and the influence of the position of the measured point on the laser plane was not considered. Liu et al. [26] proposed a method to reduce the systematic error of line-structured light sensors using light plane correction. However, only the position of the light plane was corrected; although it was noted that the light plane is actually a curved surface, no specific measures were implemented to correct for this.

In this paper, an accuracy improvement method for multi-view 3D laser scanning measurements based on error correction and global calibration optimization is proposed. Compared with existing works, the main contributions and improvements of the proposed method can be summarized as follows.

  • (1) To the best of our knowledge, this is the first study to build novel mathematical error influencing factors by considering the effect of the position of the measured point on the laser plane. Moreover, the external error factors, combined with the modified laser plane model, can be analyzed and modelled at each measured point.
  • (2) A novel error model based on a BP neural network is established for the regression analysis of the mathematical error influencing factors and measurement deviations at point level. The logical relation between the error influencing factors and the measurement deviations is thus explored point by point, so the model can be utilized for point cloud error correction and global calibration optimization.
  • (3) The measurement accuracy of the probe is effectively improved by the proposed error correction and calibration optimization. The proposed accuracy improvement method is applicable to general multi-view structured light probes.

2. Materials and methodology

Although the accuracy improvement method based on error correction and global calibration optimization is developed here for a 3D scanning probe based on asymmetric trinocular vision and a multi-line laser (see Fig. 1) from our previous work [27], it is versatile and can be applied to almost any multi-view 3D laser scanning measurement system. As shown in Fig. 2, the architecture of the proposed method can be divided into three parts: coarse system calibration, error modelling of the point cloud, and point cloud error correction and calibration optimization. The camera parameters, laser plane equations, and initial global transformation matrices are calibrated in the coarse system calibration with a planar asymmetric circular grid target. In addition, a modified laser plane model is presented to characterize the different positions of the measured point on the laser plane, and the corresponding calibration method is illustrated. A four-parameter mathematical error influencing factors model is then developed with the modified laser plane model. Furthermore, what is believed to be a novel error model based on the BP neural network is established for the regression analysis of the mathematical error influencing factors and the measurement deviation of each point, based on measurements of a standard sphere plate. The final measurement result can be improved on the basis of point cloud error correction and calibration optimization.

Fig. 1. 3D scanning probe based on asymmetric trinocular vision and multi-line laser.

Fig. 2. Overall architecture of our configuration and system.

2.1 Coarse system calibration

In the coarse system calibration, the intrinsic and extrinsic parameters of each camera, the laser plane equations, and the initial global transformation matrices are calibrated with an asymmetric circular grid target. Furthermore, a modified laser plane calibration method is put forward to characterize the position of the measured point on the laser plane. Following the conventional structured light 3D measurement model, the system parameters are obtained initially in this step, facilitating the subsequent error correction and accuracy improvement.

2.1.1 System parameters coarse calibration

Camera calibration generally involves the transformation relations between four coordinate systems: the world coordinate system, camera coordinate system, image coordinate system, and pixel coordinate system [28-30].

The world coordinate system and camera coordinate system are related by a rigid body transformation, expressed as

$$\left[ {\begin{array}{{c}} {{x_c}}\\ {{y_c}}\\ {{z_c}}\\ 1 \end{array}} \right] = \left[ {\begin{array}{{cc}} {\boldsymbol R}&{\boldsymbol T}\\ {{0^{\boldsymbol T}}}&1 \end{array}} \right]\left[ {\begin{array}{{c}} {{x_w}}\\ {{y_w}}\\ {{z_w}}\\ 1 \end{array}} \right],$$
where ${\left( {\begin{array}{{cccc}} {{x_w}}&{{y_w}}&{{z_w}}&1 \end{array}} \right)^T}$ and ${\left( {\begin{array}{{cccc}} {{x_c}}&{{y_c}}&{{z_c}}&1 \end{array}} \right)^T}$ represent the homogeneous coordinates of the measured point in the world and camera coordinate systems, and R and T are the $3 \times 3$ orthogonal rotation matrix and $3 \times 1$ translation vector, respectively. The transformation matrix consisting of R and T is known as the extrinsic parameter matrix.

The relation between the image coordinate system and the pixel coordinate system can be characterized by

$$\left[ {\begin{array}{{c}} u\\ v\\ 1 \end{array}} \right] = \left[ {\begin{array}{{ccc}} {\frac{1}{{dx}}}&0&{{u_0}}\\ 0&{\frac{1}{{dy}}}&{{v_0}}\\ 0&0&1 \end{array}} \right]\left[ {\begin{array}{{c}} x\\ y\\ 1 \end{array}} \right],$$
where ${\left[ {\begin{array}{{ccc}} u&v&1 \end{array}} \right]^T}$ and ${\left[ {\begin{array}{{ccc}} x&y&1 \end{array}} \right]^T}$ are the homogeneous coordinates in the pixel and image coordinate systems, dx and dy represent the physical size of each pixel in the x and y directions, and $({{u_0},{v_0}} )$ is the pixel coordinate of the principal point of the camera. Given the pinhole camera model [14], the following equation can be obtained by combining Eqs. (1) and (2):
$$s\left[ {\begin{array}{{c}} u\\ v\\ 1 \end{array}} \right] = \left[ {\begin{array}{{ccc}} {{f_x}}&0&{{u_0}}\\ 0&{{f_y}}&{{v_0}}\\ 0&0&1 \end{array}} \right]\left[ {\begin{array}{{c}} {{x_c}}\\ {{y_c}}\\ {{z_c}} \end{array}} \right] = {\boldsymbol A}\left[ {\begin{array}{{cc}} {\boldsymbol R}&{\boldsymbol T}\\ {{0^{\boldsymbol T}}}&1 \end{array}} \right]\left[ {\begin{array}{{c}} {{x_w}}\\ {{y_w}}\\ {{z_w}}\\ 1 \end{array}} \right],$$
where
$$\left\{ {\begin{array}{{c}} {{f_x} = \frac{f}{{dx}}}\\ {{f_y} = \frac{f}{{dy}}}\\ {A = \left[ {\begin{array}{{ccc}} {{f_x}}&0&{{u_0}}\\ 0&{{f_y}}&{{v_0}}\\ 0&0&1 \end{array}} \right]} \end{array}} \right.,$$
s is a scale factor, f stands for the focal length of the camera, and A is the intrinsic parameter matrix of the camera.

As shown in Fig. 3, an asymmetric circular grid target is designed, containing $4 \times 11$ circle markers (the 11 markers in the green box form one line of the grid, and there are four such lines in total). The diameter of each circle marker is 1.5 mm, and the distance between circle centers is 4 mm, with an accuracy of 1 µm. The target is placed at several positions and captured by the three cameras; however, it is not necessary for all three cameras to capture every position simultaneously. Subsequently, the intrinsic and extrinsic parameters of each camera are calibrated by extracting the feature points of the asymmetric circular grid target, drawing on Zhang's camera calibration method [14].
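To make this per-camera step concrete, a minimal sketch using OpenCV's built-in asymmetric circle grid detector and Zhang-style calibration is given below. The file names, the spacing constant, and the object-point layout are ours and only illustrative; they are not the authors' implementation and must be matched to the physical target.

```python
import glob
import cv2
import numpy as np

# Asymmetric circular grid, 4 x 11 markers (see Fig. 3). The object-point layout
# follows OpenCV's asymmetric-grid convention; SQUARE must be chosen so that the
# generated coordinates reproduce the real 4 mm center-to-center spacing.
PATTERN = (4, 11)
SQUARE = 2.0  # mm, illustrative

objp = np.array([[(2 * j + i % 2) * SQUARE, i * SQUARE, 0.0]
                 for i in range(PATTERN[1]) for j in range(PATTERN[0])],
                dtype=np.float32)

obj_pts, img_pts, img_size = [], [], None
for fname in sorted(glob.glob("cam1_pose_*.png")):   # hypothetical file names
    img = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, centers = cv2.findCirclesGrid(img, PATTERN,
                                         flags=cv2.CALIB_CB_ASYMMETRIC_GRID)
    if found:
        obj_pts.append(objp)
        img_pts.append(centers)
        img_size = img.shape[::-1]

# Intrinsic matrix A, distortion coefficients (k1, k2, p1, p2, k3) and the
# per-pose extrinsics (rvecs, tvecs); rms is the average reprojection error.
rms, A, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, img_size,
                                                 None, None)
print("reprojection RMS [pixel]:", rms)
```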

Fig. 3. Asymmetric circular grid target. (a) Asymmetric circular grid target design. (b) Target picture.

The captured image generally deviates from the actual shape because of manufacturing and assembly errors of the camera and lens, and the optical system amplifies this error. Therefore, the well-known radial and tangential distortions are also taken into account [31]. Two radial distortion coefficients (${k_1},{k_2}$) and two tangential distortion coefficients (${p_1},{p_2}$) are included, and all of the above parameters are optimized with the non-linear optimization method of [14].

At least one target position should be effectively captured by all three cameras for coarse global calibration. The extrinsic matrices obtained from camera calibration for a target position in the common field of view serve as the initial global transformation matrices.
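A minimal sketch of how such an initial camera-to-reference transformation can be composed from the two extrinsic matrices of the shared target pose is given below; the function and variable names are ours, not the paper's.

```python
import numpy as np

def extrinsic_to_homogeneous(R, T):
    """Stack a 3x3 rotation R and 3x1 translation T into a 4x4 matrix [R T; 0 1]."""
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = np.ravel(T)
    return M

def coarse_global_transform(R_ref, T_ref, R_i, T_i):
    """Initial transformation mapping camera-i coordinates to the reference camera,
    from the extrinsics of the target pose seen by both cameras:
    x_ref = M_ref * x_world and x_i = M_i * x_world  =>  x_ref = M_ref * inv(M_i) * x_i.
    """
    M_ref = extrinsic_to_homogeneous(R_ref, T_ref)
    M_i = extrinsic_to_homogeneous(R_i, T_i)
    return M_ref @ np.linalg.inv(M_i)
```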

2.1.2 Modified laser plane calibration

The laser plane is calibrated based on the asymmetric circular grid target [28] and the plane equation in the camera coordinate system can be obtained. Denote the plane equation as

$$A{x_w} + B{y_w} + C{z_w} + D = 0,$$
where A, B, C and D are coefficients of the plane equation. Equations (3) and (5) can be utilized by standard laser scanning measurement systems to achieve 3D reconstruction. This is referred to as the coarse calibrated 3D scanning measurement system in our study.

In practice, the laser plane is a curved surface and deviates from an ideal plane. Despite this, plane equations are still employed in standard laser scanning measurement systems, resulting in systematic measurement errors [26].

Indeed, the laser emitted by common line-structured light sources has a fan-shaped (sectoral) structure [32], as shown in Fig. 4. To characterize this sector model and correct for measurement errors, two new variables ${O_L}$ and $\overrightarrow {{C_L}} $ are added to the modified laser plane model, where ${O_L}$ is the vertex of the sector and $\overrightarrow {{C_L}} $ is the unit direction vector of the sector midline. Thus, each point in the sector can be localized by the two parameters d and $\gamma $, where d is the height from the vertex ${O_L}$ and $\gamma $ is the angle between the sector midline and the line connecting the measured point to the vertex.

Fig. 4. Modified laser plane model.

As shown in Fig. 5, a few simple modifications of the conventional light plane calibration method [28] allow the above two parameters to be calibrated. First, the image coordinates of the left and right endpoints of the laser stripe are extracted. The endpoints are then projected onto the calibrated plane of Eq. (5) and their midpoint is calculated. With the aid of the up-and-down movement of a vertical displacement stage, several groups of endpoints (at least three) are obtained. The left and right endpoint groups are fitted to straight lines ${l_L}$ and ${l_R}$ by the least-squares method, and ${O_L}$ is taken as the intersection of ${l_L}$ and ${l_R}$. Finally, the midline is also fitted by the least-squares method, and the unit direction vector $\overrightarrow {{C_L}} $ is calculated, as sketched below.
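A numerical sketch of this fitting is given below, assuming the endpoint groups have already been projected onto the calibrated plane and expressed as 3D points in the camera coordinate system. The line fit and the pseudo-intersection are ours (least squares); the sample coordinates are purely illustrative.

```python
import numpy as np

def fit_line_3d(points):
    """Least-squares 3D line through points: returns (centroid, unit direction)."""
    P = np.asarray(points, dtype=float)
    c = P.mean(axis=0)
    _, _, vt = np.linalg.svd(P - c)
    return c, vt[0]

def closest_point_of_two_lines(c1, d1, c2, d2):
    """Point minimizing the summed squared distance to two 3D lines,
    used here as the sector vertex O_L (pseudo-intersection of l_L and l_R)."""
    perp = lambda d: np.eye(3) - np.outer(d, d)
    A = perp(d1) + perp(d2)
    b = perp(d1) @ c1 + perp(d2) @ c2
    return np.linalg.solve(A, b)

# Left/right stripe endpoints from >= 3 stage heights (illustrative values, mm).
left_pts = np.array([[0., 0., 0.], [-2., 0., 10.], [-4., 0., 20.]])
right_pts = np.array([[10., 0., 0.], [12., 0., 10.], [14., 0., 20.]])
mid_pts = 0.5 * (left_pts + right_pts)

cL, dL = fit_line_3d(left_pts)
cR, dR = fit_line_3d(right_pts)
O_L = closest_point_of_two_lines(cL, dL, cR, dR)   # sector vertex
_, C_L = fit_line_3d(mid_pts)                       # unit midline direction
C_L *= np.sign(np.dot(C_L, mid_pts[-1] - O_L))      # orient away from the vertex
```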

Fig. 5. Schematic diagram of modified laser plane calibration.

2.2 Point-specific error model of point cloud

2.2.1 Measurement deviation estimation

To evaluate the error model and correct the measurement errors, a standard sphere plate consisting of nine standard ceramic matte spheres is designed, and the sizes and centers of the spheres are checked using a CMM at the Shenzhen Academy of Metrology & Quality Inspection (see Fig. 6). A 3D model of the standard sphere plate (only the spheres are included) is established based on the CMM calibration results. The standard sphere plate is then measured by the coarse calibrated 3D scanning measurement system. The iterative closest point (ICP) method [33] is applied to register the measured point cloud of the standard sphere plate to the 3D model, and the deviation between each point of the registered point cloud and the 3D model is taken as the measurement deviation. The total measurement deviation of the point cloud and the components of the measurement deviation in the x, y, and z directions of the world coordinate system (denoted as ex, ey, and ez, respectively) are shown in Fig. 7. The color scale indicates the deviation between the measured point cloud and the 3D model, and the nine white spheres represent the 3D model of the standard sphere plate.
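Once the cloud has been registered (the ICP step itself can be left to a library or to CloudCompare, as used in the paper), the per-point deviation to the sphere model can be computed directly. The sketch below measures the signed radial distance to the nearest standard sphere and decomposes it into x/y/z components; this decomposition is our assumption, since the paper does not spell out how ex, ey, and ez are extracted.

```python
import numpy as np

def deviation_to_sphere_plate(points, centers, radii):
    """Signed deviation of each registered point from the nearest standard sphere:
    distance to that sphere's center minus its radius, plus x/y/z components."""
    pts = np.asarray(points, float)                 # (N, 3) registered point cloud
    ctr = np.asarray(centers, float)                # (9, 3) CMM-checked sphere centers
    rad = np.asarray(radii, float)                  # (9,)  CMM-checked sphere radii
    d = np.linalg.norm(pts[:, None, :] - ctr[None, :, :], axis=2)   # (N, 9)
    k = d.argmin(axis=1)                            # nearest sphere per point
    radial = d[np.arange(len(pts)), k] - rad[k]     # total signed deviation
    u = pts - ctr[k]                                # outward radial direction
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    e = radial[:, None] * u                         # (ex, ey, ez) components
    return radial, e
```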

Fig. 6. Measurement of a standard sphere plate.

Fig. 7. An example of measured deviation. (a) x-direction component: ex. (b) y-direction component: ey. (c) z-direction component: ez. (d) Total absolute deviation.

2.2.2 Mathematical error influencing factors of each point

There are numerous factors associated with measurement deviations, which can be categorized as external and internal factors. Internal factors are inherent properties of the instrument and typically remain constant throughout the measurement process. Many researchers have focused on reducing the error caused by internal factors through laser stripe feature extraction, structure optimization, and other methods in the design and fabrication of instruments, with the ultimate goal of limiting this error to 2-5 µm, subject to physical limitations [25,34,35]. External factors are influences on the measurement error that arise outside the instrument, such as the probe's position and orientation relative to the measured object. The scanning depth d, the out-of-plane angle $\mathrm{\alpha }$, and the in-plane angle $\mathrm{\beta }$ have been considered key external factors for reducing the systematic error [24,25]. However, they were treated as overall parameters, and no consideration was given to the differences in the influencing factors at each point. Moreover, varying the position of the measured point on the laser plane can also lead to measurement errors, which is a crucial external factor because, as mentioned above, the laser plane is not a perfect plane. A four-factor model is therefore established at each point to estimate the error influencing factors in conjunction with the aforementioned modified laser plane model:

  • (1) d: the height from the vertex, equal to the distance from the measured point to ${O_L}$ along the direction of the midline of the laser plane;
  • (2) $\mathrm{\alpha }$: the out-of-plane angle, i.e., the angle between the normal of the measured object point and the laser plane;
  • (3) $\mathrm{\beta }$: the in-plane angle, i.e., the angle between the projection of the normal of the measured object point onto the laser plane and the midline of the laser plane;
  • (4) $\mathrm{\gamma }$: the laser plane position angle, i.e., the angle between the midline and the line connecting the measured point to the vertex.

As shown in Fig. 8, the four-parameter error influencing model at the measured point can be established as follows:

$$\left\{ {\begin{array}{{c}} {d = |{\overrightarrow {P{O_L}} \cdot \overrightarrow {{c_L}} } |}\\ {\alpha = \frac{\pi }{2} - {{\cos }^{ - 1}}({\overrightarrow {{n_L}} \cdot \overrightarrow {{n_P}} } )}\\ {\beta = \frac{\pi }{2} - {{\cos }^{ - 1}}({\overrightarrow {n{c_L}} \cdot \overrightarrow {{n_P}} } )}\\ {\gamma = \frac{\pi }{2} - {{\cos }^{ - 1}}({\overrightarrow {n{c_L}} \cdot \overrightarrow {{n_{PO}}} } )} \end{array}} \right.,$$
where $\overrightarrow {{n_L}} $ is the normal of the calibrated plane, which can be derived from Eq. (5); $\overrightarrow {{n_P}} $ is the normal at the measured object point, acquired by the minimum spanning tree method [36]; $\overrightarrow {n{c_L}} = \overrightarrow {{n_L}} \times \overrightarrow {{c_L}} $; and $\overrightarrow {{n_{PO}}} $ is the unit vector along $\overrightarrow {P{O_L}} $. It is therefore straightforward to determine the mathematical error influencing factors for each point once the point cloud has been obtained, as sketched below.
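A minimal sketch of Eq. (6) for a single point is given below, assuming the point, its unit normal, and the calibrated quantities O_L, C_L, and n_L are all expressed in the same camera coordinate system; the clipping of dot products and the normalization of the cross product are added for numerical safety and are our choices.

```python
import numpy as np

def error_factors(P, n_P, O_L, c_L, n_L):
    """Four error influencing factors (d, alpha, beta, gamma) of Eq. (6) for one point.
    P, n_P: measured point and its unit normal; O_L, c_L: sector vertex and unit
    midline direction; n_L: unit normal of the calibrated laser plane of Eq. (5)."""
    PO = O_L - P                                   # vector from the point to the vertex
    nc_L = np.cross(n_L, c_L)                      # in-plane direction normal to the midline
    nc_L /= np.linalg.norm(nc_L)
    n_PO = PO / np.linalg.norm(PO)

    def angle(a, b):
        return np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))

    d = abs(np.dot(PO, c_L))                       # height above the vertex
    alpha = np.pi / 2 - angle(n_L, n_P)            # out-of-plane angle
    beta = np.pi / 2 - angle(nc_L, n_P)            # in-plane angle
    gamma = np.pi / 2 - angle(nc_L, n_PO)          # position angle on the laser plane
    return d, alpha, beta, gamma
```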

Fig. 8. Four-parameter mathematical error influencing model.

The specific values of the mathematical error influencing factors corresponding to the example in Fig. 7 are derived from Eq. (6) and shown in Fig. 9. The color scale in Fig. 9 indicates the value of each error influencing factor for every point in the point cloud.

Fig. 9. Error influencing factors (a) d, (b) $\mathrm{\alpha }$, (c) $\mathrm{\beta }$ and (d) $\mathrm{\gamma }$ corresponding to the example in Fig. 7.

2.2.3 Error model of point cloud estimation

After acquiring the error influencing model and the measurement deviation for each point of the point cloud measured on the standard sphere plate, it is essential to establish an error model of the point cloud so that the measurement error can be predicted and corrected from this model and the mathematical error influencing factors. Moreover, the corrected point cloud facilitates global calibration optimization.

Identifying an error model characterizing the relation between the mathematical error influencing factors and the measurement deviation is a typical regression problem. In our study, the error model is clearly complex and non-linear. A matrix-form error model was previously presented to characterize the measurement deviation as a function of three external influencing factors [25], but the insufficient number of samples, data that were not resolved to individual points, and the lack of consideration of the position on the laser plane limited the modeling results. Under such complex circumstances, neural network models typically achieve good fits [37,38]. To model the relation between the mathematical error influencing factors and the measurement deviation, a point-specific error model based on a BP neural network is therefore built, as shown in Fig. 10. The four mathematical error influencing factors serve as input parameters, and the components of the measurement deviation in the x, y, and z directions of the world coordinate system are taken as output parameters. More abundant data are obtained by repeating the measurements with the standard sphere plate in different postures, and the data are divided into training, test, and validation datasets in a ratio of 8:1:1. The parameters of the BP network are tuned during training to achieve better regression results. In the embodiment above, investigations showed that 15 hidden layers, 8 hidden layer nodes, and a tolerance of ${10^{ - 6}}$ are preferable choices, with which the coefficient of determination R2 reaches 0.91503.
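As a hedged stand-in for the authors' BP network, the sketch below uses scikit-learn's MLPRegressor (a feedforward network trained by backpropagation) with an 8:1:1 split. The layer sizes mirror the values quoted above, but the placeholder data and all names are illustrative only.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: (N, 4) factors [d, alpha, beta, gamma]; Y: (N, 3) deviations [ex, ey, ez]
# measured on the standard sphere plate in several postures (random placeholders here).
rng = np.random.default_rng(0)
X, Y = rng.normal(size=(10000, 4)), rng.normal(size=(10000, 3))

X_train, X_tmp, Y_train, Y_tmp = train_test_split(X, Y, test_size=0.2, random_state=0)
X_test, X_val, Y_test, Y_val = train_test_split(X_tmp, Y_tmp, test_size=0.5,
                                                random_state=0)   # 8:1:1 split

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(8,) * 15,   # 15 hidden layers of 8 nodes, as quoted
                 tol=1e-6, max_iter=5000, random_state=0))
model.fit(X_train, Y_train)
print("R^2 on the validation set:", model.score(X_val, Y_val))
```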

Fig. 10. Error model based on BP neural network.

Subsequently, the four mathematical error influencing factors can be computed for any measurement taken by this probe, and the model trained above can be invoked to predict and correct the error values in the x, y, and z directions of the world coordinate system.

2.3 Point cloud error correction and global calibration optimization

2.3.1 Point cloud error correction

Assume that the point cloud measured by the coarse calibrated 3D scanning measurement system is $\textrm{P}{\textrm{C}_{ui}}$ (i stands for the camera number in the multi-view system), and that the world coordinate of any point in $\textrm{P}{\textrm{C}_{ui}}$ is (${x_{uij}},{y_{uij}},{z_{uij}}$), where j stands for the point number in $\textrm{P}{\textrm{C}_{ui}}$. Substituting into the error influencing mathematical model, the four error influencing factors (${d_{ij}},{\alpha _{ij}},{\beta _{ij}},{\gamma _{ij}}$) of each point of the point cloud are acquired. Feeding these factors into the trained error model based on the BP neural network, the error components in the three directions $({e{x_{ij}},e{y_{ij}},e{z_{ij}}} )$ are predicted for each point. The error correction equation for each point in the point cloud is then

$$\left\{ {\begin{array}{{c}} {{x_{rij}} = {x_{uij}} - e{x_{ij}}}\\ {{y_{rij}} = {y_{uij}} - e{y_{ij}}}\\ {{z_{rij}} = {z_{uij}} - e{z_{ij}}} \end{array}} \right.,$$
where (${x_{rij}},{y_{rij}},{z_{rij}}$) is the world coordinate of the corrected point in the point cloud $\textrm{P}{\textrm{C}_{ui}}$. Point cloud error correction is performed by applying this correction to the measured point cloud point by point, as sketched below.
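The sketch below ties Eq. (7) to the earlier pieces, reusing the hypothetical `error_factors` helper sketched after Eq. (6) and the trained `model` from Section 2.2.3; it is an illustration of the correction step, not the authors' code.

```python
import numpy as np

def correct_point_cloud(points, normals, O_L, c_L, n_L, model):
    """Point-wise error correction of Eq. (7): predict (ex, ey, ez) from the four
    factors of each point and subtract them from the measured world coordinates."""
    factors = np.array([error_factors(p, n, O_L, c_L, n_L)
                        for p, n in zip(points, normals)])     # (N, 4) factor matrix
    e = model.predict(factors)                                  # (N, 3) predicted errors
    return np.asarray(points) - e                               # corrected cloud
```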

2.3.2 Global calibration optimization

Owing to limitations in the accuracy of the calibration target and the extraction of feature points, as well as variability in the calibration process such as illumination differences during image acquisition, calibration errors are inevitable and may be magnified by the camera imaging system. As a result, an imprecise coarse global calibration can cause the common area of point clouds acquired from different viewpoints to coincide improperly, making it insufficient for high-precision measurement.

Therefore, a fine global calibration method is proposed to ameliorate this problem. First, the point cloud correction method described above is applied to each camera's original measured point cloud. The point cloud with the best completeness is then selected as the reference. With the initial value given by the transformation matrix ${T_i}$ obtained from the coarse global calibration, the point clouds from the other cameras are precisely registered to the reference by iterative closest point (ICP) technology [11]. Denoting the transformation matrix obtained during this registration as ${A_i}$ (the transformation from the pre-registration position to the optimal overlap position), the final transformation matrix from the ith camera to the reference after fine global calibration is

$$F{T_i} = {T_i} \cdot {A_i},$$
where $F{T_i}$ serves as the final global matrix for point cloud fusion.
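A minimal sketch of this refinement is given below, using Open3D's point-to-point ICP as a stand-in for the registration actually used in the paper; the function names, the correspondence distance, and the use of Open3D are our assumptions. With the coarse matrix supplied as the initial guess, the registration result directly plays the role of the composed matrix $F{T_i}$ of Eq. (8).

```python
import numpy as np
import open3d as o3d

def fine_global_transform(pc_i, pc_ref, T_i, max_dist=0.5):
    """Refine the coarse camera-i -> reference transform T_i by ICP and return the
    final global matrix used for fusion. pc_i, pc_ref: corrected (N, 3) point clouds."""
    src = o3d.geometry.PointCloud()
    src.points = o3d.utility.Vector3dVector(np.asarray(pc_i, dtype=float))
    ref = o3d.geometry.PointCloud()
    ref.points = o3d.utility.Vector3dVector(np.asarray(pc_ref, dtype=float))
    reg = o3d.pipelines.registration.registration_icp(
        src, ref, max_dist, np.asarray(T_i, dtype=float),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    # With T_i as the initial guess, reg.transformation already maps camera-i
    # coordinates onto the reference cloud, i.e. it corresponds to FT_i in Eq. (8).
    return reg.transformation
```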

3. Experiments and discussion

In our study, a Lenovo PC running Windows 10 and Visual Studio 2017 is employed, and the open-source OpenCV library and CloudCompare are utilized for image processing and point cloud analysis. The distance between the probe and the measured objects is about 300 mm. The experimental results of the proposed methods are analyzed and discussed as follows.

3.1 Coarse system calibration

The coarse system calibration procedure is implemented with the 3D scanning probe based on asymmetric trinocular vision and a multi-line laser (see Fig. 1) and the asymmetric circular grid target (see Fig. 3). The resulting camera parameters of the probe are shown in Table 1; the average reprojection error is approximately 0.03 pixel, compared with Zhang's method [14], which generally achieves only approximately 0.1 pixel. The adopted high-precision asymmetric circular grid target certainly also contributes to the improvement of the calibration accuracy. The transformation matrices from the three cameras to the reference obtained in the coarse global calibration are as follows.

$$\left\{ \begin{array}{l} {T_1} = \left[ {\begin{array}{cccc} { - 0.6825745}&{ - 0.7246335}&{0.0948596}&{9.2899}\\ {0.7066498}&{ - 0.6213171}&{0.3385425}&{ - 33.5784}\\ { - 0.1863813}&{0.2981130}&{0.9361574}&{362.1606} \end{array}} \right]\\ {T_2} = \left[ {\begin{array}{cccc} {0.9751055}&{ - 0.2192670}&{0.0330327}&{ - 7.5062}\\ {0.1855180}&{0.8883089}&{0.4201076}&{11.2983}\\ { - 0.1214590}&{ - 0.4035211}&{0.9068729}&{370.6715} \end{array}} \right]\\ {T_3} = \left[ {\begin{array}{cccc} { - 0.3041361}&{0.9485930}&{ - 0.0875934}&{ - 4.6028}\\ { - 0.7771870}&{ - 0.1939006}&{0.5986510}&{ - 12.6478}\\ {0.5508917}&{0.2501478}&{0.7962063}&{402.7829} \end{array}} \right] \end{array} \right.$$

Since there are many laser plane parameters, they are not listed individually here.

Table 1. The results of camera calibration

3.2 Experiments of point cloud error correction

To analyze the point cloud error correction method based on the BP network, the following validation experiments are designed: (1) the point cloud of the standard sphere plate is measured by the coarse calibrated 3D scanning probe; (2) fine registration of the point clouds [33] (with the threshold set to ${10^{ - 6}}$) is performed, with the measured 3D point cloud as the floating cloud and the 3D model of the standard sphere plate as the reference; (3) the distance between the registered point cloud and the 3D model is calculated as the measurement deviation (see Fig. 11(a)); (4) the measured point cloud is corrected by utilizing the error model based on the BP neural network, and the same fine registration and distance analysis are applied to the corrected point cloud (see Fig. 11(b)). In Fig. 11, the nine white spheres represent the 3D model of the standard sphere plate, the colored part is the measured point cloud, and the color scale indicates the measurement deviation between the measured point cloud and the 3D model. Comparing the original and corrected point clouds in Fig. 11, the measurement deviation decreases dramatically after point cloud correction.

Fig. 11. Comparison of the two sets of point clouds: (a) origin point cloud; (b) corrected point cloud by proposed method.

Figure 12 provides the histogram distribution of the measurement deviations of the points in the initial and corrected point clouds, i.e., the number of points within each deviation interval is counted. It can be concluded from Fig. 12 that more points in the corrected point cloud have deviations close to zero and that the overall measurement error is significantly reduced by the proposed point cloud error correction method.

Fig. 12. Comparison of measurement deviation distribution of points in the point cloud: (a) origin point cloud; (b) corrected point cloud by proposed method.

To quantitatively analyze the point cloud error correction results, the matrix global model method [25] and the proposed method are each applied to the point cloud error correction of the standard sphere plate. As before, the distance analysis between the corrected point cloud and the 3D model serves as the measurement deviation. Table 2 shows that both methods significantly reduce the mean measurement deviation (MMD) and the root mean square (RMS) of the measurement, with the proposed method slightly outperforming the matrix global model method.

Table 2. Measurement deviation analysis of point cloud error correction methods

However, the root mean square (RMS) of the corrected data remains relatively high. This is due to the large errors at a few points in the red and dark blue sections of Fig. 11(b), which typically exceed 0.2 mm. These points also correspond to high values of the influencing factor $\mathrm{\alpha }$. Therefore, they can be eliminated from the point cloud as gross errors by setting a proper threshold on $\mathrm{\alpha }$, which is readily available from the point-wise mathematical error influencing factors model. With this procedure (using a threshold of 0.65), the MMD further decreases to 0.0025 mm and the RMS to 0.0017 mm.
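A one-function sketch of this gross-error rejection is given below; the threshold 0.65 is the value quoted above (assumed here to be in radians, as returned by Eq. (6)), and the variable names are ours.

```python
import numpy as np

def remove_gross_errors(points, factors, alpha_max=0.65):
    """Drop points whose out-of-plane angle alpha exceeds the threshold.
    points: (N, 3) corrected cloud; factors: (N, 4) array of [d, alpha, beta, gamma]."""
    keep = np.abs(np.asarray(factors)[:, 1]) < alpha_max
    return np.asarray(points)[keep]
```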

3.3 Experiments of global calibration optimization

As shown in Fig. 13(a), the yellow and white point clouds are acquired by measuring the standard sphere plate from different viewpoints, that is, two sets of independent measurement results from different cameras of the probe in Fig. 1. Figure 13(b) depicts a local enlargement of the overlapping portion of the two sets of point clouds in Fig. 13(a). There is evidently a substantial deviation between the two sets of point clouds, although they should theoretically overlap completely. There are two primary reasons for this deviation: measurement errors within each point cloud, and a global calibration error between the two cameras of the multi-view system, which results in insufficient overlap between the point clouds. Therefore, the proposed point cloud correction method is first applied to each point cloud to achieve higher accuracy; then the global calibration optimization method is applied to correct the global transformation matrices and enhance the overlap of the two sets of point clouds. Figure 13(c) shows the two sets of point clouds of Fig. 13(b) after the proposed global calibration optimization. It can be observed that the overlap between the two sets of point clouds is considerably improved.

Fig. 13. Measured point clouds before and after fine global calibration. (a) Two sets of point clouds before fine global calibration. (b) Detailed diagram. (c) Detail after fine global calibration corresponding to (b).

To further investigate the effect of the global calibration optimization method, the distances between the above two groups of point clouds before and after global calibration are statistically analyzed, and Huang's global calibration method [11] is included for comparison. As shown in Table 3, the mean distance (MD) between the point clouds shrinks considerably after global calibration optimization, and the root mean square (RMS) also decreases substantially for both methods. However, the proposed method achieves better MD and RMS values.

Table 3. Distance analysis between point clouds before and after global calibration

3.4 Verification measurement experiment

A step gauge (see Fig. 14) is measured using the 3D scanning probe based on asymmetric trinocular vision and a multi-line laser (see Fig. 1). To verify the validity of the accuracy improvement method based on point cloud error correction and calibration optimization, the step gauge measurement deviations are compared before and after the implementation of the proposed method. Another 3D sensor (C2-2040(HS)-GigE, Automation Technology GmbH, with a resolution of 7.5 µm in the z-direction and 20 µm in the x-direction at a working distance of 300 mm), based on the FIR-peak detection 3D reconstruction method, is also added to the comparison. The point clouds of the step gauge are obtained by the three aforementioned methods. Plane 1 of the step gauge is extracted with the plane fitting tool of CloudCompare. With Plane 1 as the reference plane, 15 points on Plane 2 are randomly chosen and their distances to Plane 1 are calculated as Height 1 (H1). Similarly, Height 2 (H2: Plane 3 to Plane 1) and Height 3 (H3: Plane 4 to Plane 1) are derived. Their reference values are obtained with the CMM (maximum permissible error: $\pm ({0.3 + L/1000} )\,\mathrm{\mu}\mathrm{m}$). The measured value (MV), measurement deviation (MD), and root mean square (RMS) of the three point clouds are listed in Table 4. It can be seen that the proposed method reduces the mean measurement deviation by one order of magnitude, the standard deviation also decreases significantly, and the measurement results are superior to those of the 3D sensor.

Fig. 14. Measurement of the step gauge. (a) Picture of step gauge; (b) Point cloud measured by proposed method.

Table 4. Measurement results of step gauge

The measurement results of H1, H2, and H3 can be utilized to analyze the measurement accuracy of the proposed method in the depth direction. To further test the accuracy in the horizontal direction, a 20 mm gauge block (Mitutoyo Grade 0 steel block) is measured by the proposed method and the 3D sensor C2. As shown in Fig. 15, one point A and two points B and C are selected at random on the boundaries of the point cloud corresponding to the two measurement faces of the gauge block, and the area of triangle ABC and the side length BC are used to calculate the height ${L_h}$ of the triangle:

$${L_h} = \frac{{2{S_{ABC}}}}{{|{BC} |}},$$
where ${S_{ABC}}$ denotes the area of triangle ABC and |BC| denotes the length of side BC. From the measured point cloud, three random positions of the triangle are selected and the average value of ${L_h}$ is taken as the measured length of the gauge block. The measurement experiment is repeated ten times and the results are shown in Fig. 16 (the measurement uncertainty of the 20 mm gauge block is 70 nm, and 20.000 mm is used as the reference value). Because the design of a structured light 3D scanning probe generally focuses on resolution in the depth direction, the depth-direction resolution is theoretically the highest. From the results shown in Fig. 16, it can be seen that the results in the horizontal direction of the gauge block are worse than those for the heights of the step gauge, which is consistent with the theoretical analysis. Nevertheless, the measurement deviation of the proposed method remains within 0.015 mm, which is significantly better than that of the 3D sensor C2.
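A minimal sketch of Eq. (10) is given below, computing the triangle area from a cross product; the example coordinates are purely illustrative, not measured data.

```python
import numpy as np

def triangle_height(A, B, C):
    """Height of triangle ABC from vertex A onto side BC, Eq. (10):
    L_h = 2 * S_ABC / |BC|, with the area obtained from the cross product."""
    A, B, C = (np.asarray(v, dtype=float) for v in (A, B, C))
    area = 0.5 * np.linalg.norm(np.cross(B - A, C - A))
    return 2.0 * area / np.linalg.norm(C - B)

# The height equals the gauge length when B and C lie on one measurement face
# and A on the other. Illustrative values (mm): expected result is 20.0.
print(triangle_height(A=[0.0, 5.0, 20.0], B=[0.0, 0.0, 0.0], C=[0.0, 12.0, 0.0]))
```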

Fig. 15. Gauge point cloud analysis.

Fig. 16. Gauge length measurement results.

4. Conclusion

In this paper, an accuracy improvement method based on point cloud error correction and calibration optimization for multi-view 3D laser scanning measurement is proposed, comprising coarse system calibration, error modelling of the point cloud, and point cloud error correction and calibration optimization. In the coarse system calibration, the parameters of each camera, each laser plane, and the coarse global system are acquired by utilizing an asymmetric circular grid target, which is straightforward to implement and offers relatively high accuracy. Moreover, the surface deformation of the laser plane is innovatively modelled as an external error influencing factor. An error model based on a BP neural network is then presented to link the mathematical error influencing factors and the measurement deviation of each point of the point cloud. The accuracy of the system is improved by point cloud error correction and global calibration optimization based on the error model. The experimental results show that the measurement error in the depth direction is reduced to less than 5 µm by the proposed method. Furthermore, the RMS is substantially lower, and the measurement results are superior to those of the reference methods and instruments.

Funding

National Natural Science Foundation of China (51927811); 111 Project (B12019).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. K. He, C. Sui, T. Huang, et al., “3D surface reconstruction of transparent objects using laser scanning with a four-layers refinement process,” Opt. Express 30(6), 8571–8591 (2022). [CrossRef]  

2. Y. Shimizu, L.-C. Chen, D. W. Kim, et al., “An insight into optical metrology in manufacturing,” Meas. Sci. Technol. 32, 042003 (2020). [CrossRef]  

3. H. Zhou, C. Li, G. Sun, et al., “Calibration and location analysis of a heterogeneous binocular stereo vision system,” Appl. Opt. 60(24), 7214–7222 (2021). [CrossRef]  

4. H.-F. Wang, Y.-F. Wang, J.-J. Zhang, et al., “Laser stripe center detection under the condition of uneven scattering metal surface for geometric measurement,” IEEE Trans. Instrum. Meas. 69(5), 2182–2192 (2020). [CrossRef]  

5. R. Chen, Y. Li, G. Xue, et al., “Laser triangulation measurement system with Scheimpflug calibration based on the Monte Carlo optimization strategy,” Opt. Express 30(14), 25290–25307 (2022). [CrossRef]  

6. M. Wang, Y. Yin, D. Deng, et al., “Improved performance of multi-view fringe projection 3D microscopy,” Opt. Express 25(16), 19408–19421 (2017). [CrossRef]  

7. G. Genta, P. Minetola, and G. Barbato, “Calibration procedure for a laser triangulation scanner with uncertainty evaluation,” Opt. Lasers Eng. 86, 11–19 (2016). [CrossRef]  

8. M. A. Isa and I. Lazoglu, “Design and analysis of a 3D laser scanner,” Measurement 111, 122–133 (2017). [CrossRef]  

9. Z. Liu, G. Zhang, Z. Wei, et al., “Novel calibration method for non-overlapping multiple vision sensors based on 1D target,” Opt. Lasers Eng. 49(4), 570–577 (2011). [CrossRef]  

10. L. Wang, W. Wang, C. Shen, et al., “A convex relaxation optimization algorithm for multi-camera calibration with 1D objects,” Neurocomputing 215, 82–89 (2016). [CrossRef]  

11. L. Huang, F. Da, and S. Gai, “Research on multi-camera calibration and point cloud correction method based on three-dimensional calibration object,” Opt. Lasers Eng. 115, 32–41 (2019). [CrossRef]  

12. G. Xu, L. Sun, X. Li, et al., “Global calibration and equation reconstruction methods of a three dimensional curve generated from a laser plane in vision measurement,” Opt. Express 22(18), 22043–22055 (2014). [CrossRef]  

13. J. Sun, H. He, and D. Zeng, “Global calibration of multiple cameras based on sphere targets,” Sensors 16(1), 77 (2016). [CrossRef]  

14. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Machine Intell. 22(11), 1330–1334 (2000). [CrossRef]  

15. R. Yang, S. Cheng, and Y. Chen, “Flexible and accurate implementation of a binocular structured light system,” Opt. Lasers Eng. 46(5), 373–379 (2008). [CrossRef]  

16. X. Liu, Z. Liu, G. Duan, et al., “Precise and robust binocular camera calibration based on multiple constraints,” Appl. Opt. 57(18), 5130–5140 (2018). [CrossRef]  

17. S. Gai, F. Da, and M. Tang, “A flexible multi-view calibration and 3D measurement method based on digital fringe projection,” Meas. Sci. Technol. 30(2), 025203 (2019). [CrossRef]  

18. X. Chen, G. Zhang, and J. Sun, “An efficient and accurate method for real-time processing of light stripe images,” Adv. Mech. Eng. 5, 456927 (2013). [CrossRef]  

19. Y. Zhang, W. Liu, X. Li, et al., “Accuracy improvement in laser stripe extraction for large-scale triangulation scanning measurement system,” Opt. Eng. 54(10), 105108 (2015). [CrossRef]  

20. G. Yang and Y. Wang, “Three-dimensional measurement of precise shaft parts based on line structured light and deep learning,” Measurement 191, 110837 (2022). [CrossRef]  

21. X. Hui-yuan, X. You, and Z. Zhi-jian, “Accurate extrinsic calibration method of a line structured-light sensor based on a standard ball,” IET Image Process. 5(5), 369–374 (2011). [CrossRef]  

22. B. Chao, L. Yong, F. Jian-guo, et al., “Calibration of laser beam direction for optical coordinate measuring system,” Measurement 73, 191–199 (2015). [CrossRef]  

23. W. Zou, Z. Wei, and F. Liu, “High-accuracy calibration of line-structured light vision sensors using a plane mirror,” Opt. Express 27(24), 34681–34704 (2019). [CrossRef]  

24. N. Van Gestel, S. Cuypers, P. Bleys, et al., “A performance evaluation test for laser line scanners on CMMs,” Opt. Lasers Eng. 47(3-4), 336–342 (2009). [CrossRef]  

25. A. Isheil, J.-P. Gonnet, D. Joannic, et al., “Systematic error correction of a 3D laser scanning measurement device,” Opt. Lasers Eng. 49(1), 16–24 (2011). [CrossRef]  

26. C. Liu, F. Duan, X. Fu, et al., “A method to reduce the systematic error of line-structured light sensors based on light plane correction,” Opt. Lasers Eng. 159, 107217 (2022). [CrossRef]  

27. M. Wan, R. Zheng, S. Wang, et al., “Efficient 3D scanning measurement system based on asymmetric trinocular vision and a multi-line laser,” Appl. Opt. 62(8), 2145–2153 (2023). [CrossRef]  

28. Z. Ruifeng, S. Ziyun, and N. Ganglei, “Calibration method for line-structured light,” Laser & Optoelectronics Progress 56, 8 (2019).

29. W. G. Li, H. Li, and H. Zhang, “Light plane calibration and accuracy analysis for multi-line structured light vision measurement system,” Optik 207, 163882 (2020). [CrossRef]  

30. X. Xu, Z. Fei, J. Yang, et al., “Line structured light calibration method and centerline extraction: A review,” Results Phys. 19, 103637 (2020). [CrossRef]  

31. H. He, H. Li, Y. Huang, et al., “A novel efficient camera calibration approach based on K-SVD sparse dictionary learning,” Measurement 159, 107798 (2020). [CrossRef]  

32. I. Powell, “Design of a laser beam line expander,” Appl. Opt. 26(17), 3705–3709 (1987). [CrossRef]  

33. G. C. Sharp, S. W. Lee, and D. K. Wehe, “ICP registration using invariant features,” IEEE Trans. Pattern Anal. Machine Intell. 24(1), 90–102 (2002). [CrossRef]  

34. C. Lartigue, A. Contri, and P. Bourdet, “Digitised point quality in relation with point exploitation,” Measurement 32(3), 193–203 (2002). [CrossRef]  

35. L. Qi, Y. Zhang, X. Zhang, et al., “Statistical behavior analysis and precision optimization for the laser stripe center detector based on Steger's algorithm,” Opt. Express 21(11), 13442–13449 (2013). [CrossRef]  

36. K. Demarsin, D. Vanderstraeten, T. Volodine, et al., “Detection of closed sharp edges in point clouds using normal estimation and graph theory,” Computer-Aided Design 39(4), 276–283 (2007). [CrossRef]  

37. X. Huang, H. Wang, W. Luo, et al., “Prediction of loquat soluble solids and titratable acid content using fruit mineral elements by artificial neural network and multiple linear regression,” Sci. Hortic. 278, 109873 (2021). [CrossRef]  

38. A. Bansal, R. J. Kauffman, and R. R. Weitz, “Comparing the modeling performance of regression and neural networks as data quality varies: A business value approach,” Journal of Management Information Systems 10(1), 11–32 (1993). [CrossRef]  

