
In-motion continuous point cloud measurement based on bundle adjustment fused with motion information of triple line-scan images

Open Access

Abstract

Continuous point cloud measurement in in-motion circumstances, such as quality inspection of products on assembly lines or in rail traffic, requires high measurement speed, accuracy, and point cloud density. With their high acquisition rates and ultrahigh resolution, line-scan cameras have gradually been adopted for dynamic measurements. However, because of non-coplanar installation and one-dimensional images, measurement based on line-scan cameras is affected by movement. In this article, a dynamic scanning point cloud measurement method based on triple line-scan images is presented. The point cloud optimization is based on bundle adjustment fused with motion information. The epipolar constraint of line-scan images under dynamic conditions is derived for matching, and the effect of motion on the matching error is analyzed. A triple line-scan camera experimental setup validates the proposed method.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Three-dimensional (3D) point cloud measurement [1–3] is crucial for product inspection in aviation, aerospace, automotive, and other industries. Hitherto, visual technology based on matrix cameras has been widely used for non-contact point cloud measurement. For example, the linear structured light technique [4–6] uses matrix cameras to observe a light stripe intersecting the object; the distortion of the stripe encodes the height variation. The fringe projection technique based on matrix cameras and projectors, generally including phase grating projection and Fourier profilometry, is another way to obtain the point cloud [7]. The phase grating projection technique rebuilds the 3D point cloud from several fringe images modulated by the object surface [8–12]. Fourier profilometry [13–15] reduces the number of projected images and increases dynamic performance, despite losing some accuracy. In addition, the speckle projection technique [16,17] measures object deformation with speckle patterns based on stereo vision. Generally, matrix camera-based technologies work in static and deformation measurement cases.

However, measurement on assembly lines or in rail traffic requires higher dynamic performance. The relative motion between sensors and measured objects forms an overlength measurement space. Matrix camera-based technology is not suitable for these measurements because the images from matrix cameras contain large amounts of redundant data and measurement effectiveness is limited by low acquisition rates. Besides, omissions are inescapable when stitching the point clouds. In general situations, it is easy to supplement the missing point data by shifting the objects or sensors, but in continuous measurement cases, reversing the movement of conveyor belts or trains is very costly. Comparatively speaking, with ultrahigh resolution and acquisition rates (up to 16384 pixels at 143 kHz), line-scan cameras enable continuous measurement.

Cutting-edge line-scan-camera-based research can be classified according to the number of cameras used. Single line-scan-camera methods use one line-scan camera with auxiliary movement devices to scan the object repeatedly. In a previous study [18], a two-axis scanner was integrated with a single line-scan camera, and Affine-SIFT-based correspondence was examined for 3D reconstruction. In another study [19], a single line-scan-camera structure from motion (SFM) method was described with a rotation table for high-resolution 3D reconstruction. However, these methods lose the advantage of single-scan imaging. Other single line-scan-camera methods are assisted by auxiliary measurement devices. Elsewhere [20–22], a projector was utilized to implement Fourier profilometry or its derivative algorithms. In other research [23], a line-scan camera was assisted by a matrix camera and a linear structured light. These methods were limited by the auxiliary devices, and reductions of accuracy, acquisition rate, or point cloud density occurred. Dual line-scan-camera methods maintain the advantages of line-scan cameras. Their measurement principle is close to stereo vision, but the matching methods differ. In [24,25], the correlation matching method was employed. In other research [26], a coded structured light was designed and two matching methods, time correlation and phase shifting, were discussed. Elsewhere [27], a one-dimensional background-normalized Fourier transform was utilized for matching. These methods assume that the dual cameras are installed ideally coplanar. However, the camera adjustment process is tedious, ideal coplanar installation is impossible in reality, and the effect of movement on the measurement cannot be ignored. Measurement systems based on triple line-scan cameras, such as the ADS series from Leica and STARIMAGER from STARLABO, are usually used in aerial surveying and mapping remote sensing and are equipped with external positioning devices. The positioning devices are indispensable in mapping and remote sensing, but such expensive positioning devices are not always workable in a close-range enclosed environment.

Therefore, a measurement method based on triple line-scan cameras without positioning devices is proposed to resolve the effect of movement on the measurement while retaining the advantages of line-scan cameras. The remainder of this article is organized as follows: Section 2 explains the measurement model for triple line-scan cameras; the measurement principle and the point cloud optimization algorithm based on bundle adjustment fused with motion information are presented. Section 3 implements the dynamic epipolar constraint of line-scan cameras for matching. The measurement error is analyzed in Section 4. In Section 5, experiments are conducted to validate the proposed methods and verify the measurement accuracy; the point clouds of circuit boards, an array of pins, and a fender measured by the proposed method are displayed. The conclusion is given in Section 6. The Appendix details the more involved calculations.

2. Measurement model

2.1. Concepts of different models

The concept of the proposed model is discussed alongside the classical dual line-scan measurement model and the triple line-scan measurement model used in the remote sensing field. The dual line-scan measurement model in Fig. 1(a) assumes that the cameras are installed coplanar. The motion of a space point P captured by the dual cameras synchronously is considered uniform linear motion. Even though this model is simple and widely used, the coplanar installation and the uniform linear motion state deviate from reality, and the asynchronous acquisition error caused by non-coplanar installation is not considered. The triple line-scan measurement model of the remote sensing field in Fig. 1(b) accords with reality: the installation of the triple line-scan cameras and the movement of the sensor are less constrained, and point P is captured by the different cameras asynchronously. Hence, positioning devices are added to estimate the motion trajectory of the sensor. However, expensive and low-accuracy positioning devices, such as GPS (Global Positioning System) receivers or star sensors, are inapplicable in close-range photogrammetry. The proposed measurement model in Fig. 1(c) assumes an ideal uniform linear motion and a physically non-coplanar installation of triple line-scan cameras. First, the capture time differences between cameras are kept as small as possible in order to keep the uniform linear motion assumption valid over a short time, so no troublesome or special camera-alignment adjustments are required. Second, because the movement between different cameras is assumed to be uniform linear motion, the positioning devices in Fig. 1(b) are removed. Third, cameras can be added for more observation information, but in this article only the case of triple line-scan cameras is discussed. The proposed model fits the real situation as closely as possible without auxiliary positioning devices.

Fig. 1. Different measurement models of line-scan cameras (a) Classical dual line-scan measurement model. (b) Triple line-scan measurement model in the remote sensing field. (c) The proposed measurement model.

2.2. Measurement principle

Triple line-scan cameras are used to illustrate the measurement model in Fig. 2. It is unnecessary for the three viewing planes of the cameras to intersect in a single line. Parameter unum (num = 1, 2, 3) represents the imaging pixel along the sensor direction, and vnum represents the imaging pixel along the motion direction. Fnum is the ratio of the focal length to the pixel size (the focal length in pixels) and ucnum is the principal point. The lens distortion Δunum is defined as in [28,29]. The normalized imaging coordinate $u_{num}^{\prime}$ is defined as

$$\begin{array}{*{20}{c}} {u_{num}^{\prime} = \frac{{{u_{num}} - u{c_{num}} - \Delta {u_{num}}}}{{{F_{num}}}}}&{num = 1,2,3}. \end{array}$$

Fig. 2. Measurement model.

The measurement principle consists of the imaging model in Eq. (2) and the uniform linear motion hypothesis in Eq. (3). P1, P2, and P3 are the positions of P when it crosses the different viewing planes. R is a 3×3 rotation matrix and T is a 3×1 translation vector. $R_j^i$ is a 1×3 matrix representing the jth row of R of Camera i, and $T_j^i$ is a constant representing the jth row of T of Camera i. Fre is the acquisition rate of the cameras and vel is the motion velocity of the point P, which is assumed to undergo uniform linear motion. D1 is the triaxial distance between P2 and P1, and D3 is the triaxial distance between P2 and P3. Parameter d(m, n, o)T is the motion direction.

$$\begin{array}{*{20}{c}} {\left\{ \begin{array}{l} u_1^{\prime} = \frac{{R_1^1{P_1} + T_1^1}}{{R_3^1{P_1} + T_3^1}}\\ 0 = R_2^1{P_1} + T_2^1 \end{array} \right.}&{\left\{ \begin{array}{l} u_2^{\prime} = \frac{{R_1^2{P_2} + T_1^2}}{{R_3^2{P_2} + T_3^2}}\\ 0 = R_2^2{P_2} + T_2^2 \end{array} \right.}&{\left\{ \begin{array}{l} u_3^{\prime} = \frac{{R_1^3{P_3} + T_1^3}}{{R_3^3{P_3} + T_3^3}}\\ 0 = R_2^3{P_3} + T_2^3 \end{array}. \right.} \end{array}$$
$$\begin{array}{*{20}{c}} {\left\{ \begin{array}{l} {P_1} = {P_2} + {D_1}\\ {P_3} = {P_2} + {D_3} \end{array} \right.}&{\left\{ \begin{array}{l} {D_1} = \Delta {v_1} \times vel/Fre \times d\\ {D_3} = \Delta {v_3} \times vel/Fre \times d \end{array} \right.}&{\left\{ \begin{array}{l} \Delta {v_1} = {v_1} - {v_2}\\ \Delta {v_3} = {v_3} - {v_2} \end{array}. \right.} \end{array}$$

Because P1(X1, Y1, Z1)T, P2(X2, Y2, Z2)T, and P3(X3, Y3, Z3)T represent the same point in different cameras' viewing planes, knowing the coordinates of one of them is sufficient. The coordinate system of Camera 2 is regarded as the reference coordinate system. The least squares method can be utilized to compute the 3D coordinate of P2 as

$$\begin{array}{*{20}{c}} {\left[ \begin{array}{l} u_1^{\prime}R_3^1 - R_1^1\\ - R_2^1\\ u_2^{\prime}R_3^2 - R_1^2\\ - R_2^2\\ u_3^{\prime}R_3^3 - R_1^3\\ - R_2^3 \end{array} \right]{P_2} = \left[ \begin{array}{l} ({R_1^1 - u_1^{\prime}R_3^1} ){D_1} + T_1^1 - u_1^{\prime}T_3^1\\ R_2^1{D_1} + T_2^1\\ T_1^2 - u_2^{\prime}T_3^2\\ T_2^2\\ ({R_1^3 - u_3^{\prime}R_3^3} ){D_3} + T_1^3 - u_3^{\prime}T_3^3\\ R_2^3{D_3} + T_2^3 \end{array} \right] \Rightarrow A{P_2} = b}&{{P_2} = {{({{A^T}A} )}^{ - 1}}{A^T}b}. \end{array}$$
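
As a minimal illustration of Eqs. (1)–(4), the Python sketch below stacks the six linear equations from the three cameras and solves for P2 by least squares. The camera parameters, offsets, and observations are hypothetical placeholders, not values from the paper.

```python
import numpy as np

def normalize_u(u, uc, du, F):
    """Eq. (1): normalized image coordinate of a line-scan camera."""
    return (u - uc - du) / F

def triangulate_P2(cams, u_norm, D):
    """Eq. (4): least-squares solution of A*P2 = b.

    cams   : list of three (R, T) pairs, R 3x3, T 3-vector (reference -> camera)
    u_norm : normalized u' observations of Cameras 1..3
    D      : per-camera offsets [D1, D2, D3] of Eq. (3), with D2 the zero vector
    """
    A, b = [], []
    for (R, T), u, Dk in zip(cams, u_norm, D):
        # Imaging equation, Eq. (2): u' = (R1*Pk + T1)/(R3*Pk + T3), 0 = R2*Pk + T2,
        # with Pk = P2 + Dk moved to the right-hand side.
        A.append(u * R[2] - R[0]);  b.append((R[0] - u * R[2]) @ Dk + T[0] - u * T[2])
        A.append(-R[1]);            b.append(R[1] @ Dk + T[1])
    A, b = np.asarray(A), np.asarray(b)
    return np.linalg.lstsq(A, b, rcond=None)[0]
```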

Assuming that P starts moving at time t0 and is captured at time tp, and that the velocities of P along the X, Y, and Z axes are velx(t), vely(t), and velz(t) respectively, the position of point P(X, Y, Z)T can be computed from Eq. (5). Parameter t is the integration variable.

$$\left\{ \begin{aligned} &X = {X_2} + \int_{{t_p}}^{{t_0}} {ve{l_x}(t )dt} \\ &Y = {Y_2} + \int_{{t_p}}^{{t_0}} {ve{l_y}(t )dt} \\& Z = {Z_2} + \int_{{t_p}}^{{t_0}} {ve{l_z}(t )dt} \end{aligned}. \right.$$

If uniform linear motion with velocity vel and direction d is assumed throughout the whole process, the result for P can be simplified as

$$\left\{ \begin{aligned} &X = {X_2} - {v_2} \times vel \times m/Fre\\& Y = {Y_2} - {v_2} \times vel \times n/Fre\\ &Z = {Z_2} - {v_2} \times vel \times o/Fre \end{aligned}. \right.$$

2.3. Point optimization based on bundle adjustment fused with motion information

Since the uniform linear motion hypothesis in the measurement model does not hold exactly in practice, the coordinate of P2 computed in Eq. (4) contains an error. In Fig. 3, the actual state is shown as a black dashed line passing through three red circles. The ideal state is uniform linear motion, shown as a black solid straight line passing through three red triangles. The difference between the circles and the triangles is the coordinate computation error. Recovering the whole motion state of the black dashed line is unnecessary, and impossible without external equipment. Estimating the direction and velocity of the uniform linear motion that best fits the actual state is sufficient to optimize the points. In Fig. 3, this estimated state is shown as a yellow straight line passing through three red rectangular points. The updated motion direction d and velocity vel of the estimated state are used to compute the coordinate of P2.

Fig. 3. The error in the semi-ideal model.

A motion vector Dv, defined as d×vel/Fre, is estimated and has two effects. The first is to correct the 3D coordinate of P2: by optimizing Dv, the calculation accuracy of P2 is improved because the measurement principle is fused with the motion vector. The second is to correct the local position of P in the whole point cloud: as point P is moving, the motion of P determines its position in the reference coordinate system. The result for P in Eq. (6) is corrected as

$$P = {P_2} - ({{v_2} - 1} )\times vel \times d/Fre - Dv. $$

The optimization process based on bundle adjustment fused with motion information is shown in Fig. 4. The observations of Cameras 1 and 3 are employed to correct the vector Dv. An initial value Dv0 of Dv is first given. In the kth iteration, the Jacobian matrix J(Dvk), the error e(Dvk), and the disturbance ΔDvk are computed successively. The Gauss-Newton iteration stops when the 2-norm of e(Dvk) is smaller than threshold 1 or the number of iterations exceeds threshold 2; the thresholds are empirical values. By minimizing the reprojection-error cost function, the estimated vector Dv represents the motion state closest to the real trajectory, and the 3D coordinate of P2 and the position of P are optimal with respect to it. The detailed calculation procedure for the algorithm is given in Appendix A.
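
The sketch below outlines this Gauss-Newton loop in Python. It follows the structure of Fig. 4 but, for brevity, approximates the Jacobian of Eq. (16) by finite differences instead of the analytic expressions of Appendix A; the residual function, thresholds, and initial value are placeholders to be supplied by the user.

```python
import numpy as np

def gauss_newton_Dv(residual, Dv0, tol=1e-6, max_iter=50, eps=1e-7):
    """Estimate the motion vector Dv by minimizing the reprojection error.

    residual : callable Dv -> 4-vector e(Dv) of Eq. (15)
               ([u1, 0, u3, 0] minus the estimated [u1~, dv1~, u3~, dv3~])
    Dv0      : initial 3-vector guess of Dv = d * vel / Fre
    """
    Dv = np.asarray(Dv0, dtype=float)
    for _ in range(max_iter):                      # threshold 2: iteration limit
        e = residual(Dv)
        if np.linalg.norm(e) < tol:                # threshold 1: error 2-norm
            break
        # Finite-difference Jacobian (stand-in for Eq. (16))
        J = np.empty((e.size, 3))
        for j in range(3):
            d = np.zeros(3); d[j] = eps
            J[:, j] = (residual(Dv + d) - e) / eps
        # Gauss-Newton step, Eqs. (23)-(24): Dv <- Dv - (J^T J)^-1 J^T e
        Dv = Dv - np.linalg.solve(J.T @ J, J.T @ e)
    return Dv
```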

Fig. 4. The optimization process based on bundle adjustment fused with motion information.

3. Matching with dynamic epipolar constraint

3.1. Matching

Many matching methods exist for line-scan camera systems. We use normalized cross-correlation (NCC) [30] for pixel-level matching and two-dimensional quadratic polynomial fitting for sub-pixel accuracy. For a point (u2, v2) in Camera 2's image, there are many candidate points in Camera 1's image that may match it. The correlation values between the point (u2, v2) and the candidate points in Camera 1's image are computed, and the point with the maximum correlation value is regarded as the corresponding point.
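
A minimal sketch of such NCC-based pixel-level matching is given below, assuming grayscale images stored as NumPy arrays indexed as [v, u] (row, column) and a square template window; the window size and the candidate list are illustrative choices, not parameters specified in the paper.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized patches."""
    a = a - a.mean(); b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def match_pixel(img2, img1, u2, v2, candidates, half=10):
    """Pixel-level match of (u2, v2) in Image 2 among candidate (u1, v1) in Image 1."""
    tpl = img2[v2 - half:v2 + half + 1, u2 - half:u2 + half + 1]
    scores = [ncc(tpl, img1[v - half:v + half + 1, u - half:u + half + 1])
              for u, v in candidates]
    return candidates[int(np.argmax(scores))], max(scores)
```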

To obtain sub-pixel matching accuracy, a new coordinate system o-xy is established in Fig. 5, with the pixel-level matching point as the origin. The coordinates of the nine points in o-xy are shown in the cells. The correlation values of the corresponding imaging point and its surrounding points are used to fit the two-dimensional quadratic polynomial in Eq. (8), where ai (i = 1,…,6) are the polynomial coefficients. The position of the maximum of Eq. (8) gives the sub-pixel part and is computed in Eq. (9).

Fig. 5. A new coordinate system with the pixel-level matching point as the origin.

$$\begin{array}{*{20}{c}} {C({x,y} )= {a_1} + {a_2}x + {a_3}y + {a_4}{x^2} + {a_5}xy + {a_6}{y^2}}&{x = - 1,0,1}&{y = - 1,0,1}. \end{array}$$
$$\begin{array}{*{20}{c}} {{x_{\max }} = \frac{{2{a_2}{a_6} - {a_3}{a_5}}}{{a_5^2 - 4{a_4}{a_6}}}}&{{y_{\max }} = \frac{{2{a_3}{a_4} - {a_2}{a_5}}}{{a_5^2 - 4{a_4}{a_6}}}} .\end{array}$$
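
The following sketch fits Eq. (8) to the 3×3 grid of correlation values by least squares and evaluates Eq. (9); the 3×3 input array is assumed to be ordered with y increasing down the rows and x increasing across the columns, as in Fig. 5.

```python
import numpy as np

def subpixel_peak(corr3x3):
    """Fit C(x,y)=a1+a2*x+a3*y+a4*x^2+a5*x*y+a6*y^2 to a 3x3 grid (Eq. (8))
    and return the sub-pixel peak offset (x_max, y_max) of Eq. (9)."""
    x, y = np.meshgrid([-1, 0, 1], [-1, 0, 1])          # x along columns, y along rows
    x, y, C = x.ravel(), y.ravel(), np.ravel(corr3x3)
    A = np.column_stack([np.ones(9), x, y, x**2, x*y, y**2])
    a1, a2, a3, a4, a5, a6 = np.linalg.lstsq(A, C, rcond=None)[0]
    den = a5**2 - 4*a4*a6
    return (2*a2*a6 - a3*a5) / den, (2*a3*a4 - a2*a5) / den
```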

3.2. Dynamic epipolar constraint

In dual line-scan camera systems, the epipolar constraint is usually regarded as a straight line parallel to the sensor direction. However, this is valid only when the installation is ideally coplanar. Besides, an empirical searching range results in many irrelevant matching areas and increased computation time. Thus, matching for line-scan cameras with epipolar lines is presented to constrain the matching direction and determine a suitable searching area.

The dynamic epipolar line of a line-scan camera relative to the motion is complicated in general, but it appears as a branch of a hyperbola in the case of uniform linear motion. In Fig. 6, the red solid lines with arrows express the camera coordinate systems defined in Fig. 2. The red dotted lines with arrows are the virtual coordinate systems of Cameras 1 and 3 after movement. The red point in Image 2 is the point to be matched. The red lines in Images 1 and 3 are the epipolar lines, and their lengths are determined by the range of Z. Although they look like straight lines in the figure, they are not, because the re-projections of P onto the virtual coordinate systems are considered. The detailed computation procedure is given in Appendix B. The matching process therefore operates as follows. First, the initial pixel-level matching point is searched along the epipolar line. Second, the matching point is searched again within a small surrounding area to account for the motion disturbance. After the pixel-level matching point is determined, the sub-pixel position is finally calculated by the two-dimensional quadratic polynomial.
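
As a sketch of this constraint, the code below sweeps the depth Z2 of the point to be matched, reconstructs P1 from the motion direction and the viewing-plane equation (Eqs. (25)–(28)), and projects it into Camera 1 (Eq. (29)) to obtain (u1, Δv1) samples of the epipolar curve. The camera parameters and motion direction are placeholders, and lens distortion is ignored here for simplicity.

```python
import numpy as np

def epipolar_curve(u2n, R1, T1, F1, uc1, d, Dv_y, z_range):
    """Sample the dynamic epipolar curve (u1, dv1) in Image 1 for a normalized
    coordinate u2' in Image 2 (reference camera), assuming uniform linear motion
    along direction d = (m, n, o). Distortion is omitted."""
    d = np.asarray(d, dtype=float)
    samples = []
    for Z2 in z_range:
        P2 = np.array([u2n * Z2, 0.0, Z2])          # Eq. (27): X2 = u2'*Z2, Y2 = 0
        # Intersect the motion line P2 + t*d with Camera 1's viewing plane,
        # 0 = R1[1] @ P1 + T1[1]  (Eq. (26))
        t = -(R1[1] @ P2 + T1[1]) / (R1[1] @ d)
        P1 = P2 + t * d                              # Eqs. (25)-(28)
        cam = R1 @ P1 + T1                           # coordinates in the Camera 1 frame
        u1 = uc1 + F1 * cam[0] / cam[2]              # Eq. (29), distortion ignored
        dv1 = P1[1] / Dv_y                           # Eq. (32): disparity along Y
        samples.append((u1, dv1))
    return samples
```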

Fig. 6. The epipolar line of line-scan cameras.

4. Error analysis

4.1. Measurement uncertainty

Because two cameras are the minimum unit required for measurement, the measurement uncertainty of Cameras 1 and 3 in Fig. 7 is estimated. 2α is the intersection angle between Cameras 1 and 3, and L is the length of the baseline. The matching error in the v direction, denoted σ, is an important factor affecting the measurement uncertainty. The detailed calculation process is given in Appendix C. From Eq. (39), if dy, α, and σ are set to 0.007 mm, 25°, and 0.1 pixels respectively, the effect of u1 on the measurement uncertainty is shown in Fig. 8(a): when the imaging point coincides with the principal point, ΔP reaches its minimum. If u1, dy, and σ are set to 0, 0.007 mm, and 0.1 pixels respectively, the effect of α on ΔP is shown in Fig. 8(b): ΔP decreases with increasing α. It seems that the larger the intersection angle, the smaller the measurement uncertainty. However, at extreme tilt angles, matching becomes difficult and the prolonged optical path decreases the resolution. Combining theory with practice, keeping the intersection angle between 16° and 60° balances measurement error and matching difficulty effectively. In our triple line-scan camera experimental system, dy is about 0.07 mm and the chosen intersection angle 2α is 16°. If σ is taken as 0.1 pixels and u1 is set to zero, the measurement uncertainty is 0.025 mm theoretically.
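
A quick numeric check of Eq. (39) with the system values quoted above (dy = 0.07 mm, 2α = 16°, σ = 0.1 pixels, u1' = 0) reproduces the stated 0.025 mm:

```python
import numpy as np

def delta_P(sigma, dy, alpha, u1n):
    """Measurement uncertainty of Eq. (39)."""
    return sigma * dy / 2 * np.sqrt(
        u1n**2 * (np.sin(alpha) + np.cos(alpha) / np.tan(alpha))**2
        + 1 + 1 / np.tan(alpha)**2)

print(delta_P(sigma=0.1, dy=0.07, alpha=np.radians(8), u1n=0.0))  # ~0.025 mm
```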

Fig. 7. The model for measurement uncertainty of two cameras.

Fig. 8. (a) Relationship between u1 and uncertainty. (b) Relationship between α and uncertainty. The blue, yellow, green, and red lines express the uncertainty of ΔX, ΔY, ΔZ, and ΔP respectively.

4.2. Matching error

The 2D image of a line-scan camera cannot reflect the real object exactly, because it is stitched from several 1D images without considering the motion. Furthermore, motion affects the correlation values and hence the matching error. The relationship between matching and movement is complex: the movement includes not only velocity but also triaxial direction, and irregular movement results in different image distortion in different line-scan cameras. The relationship between movement and distortion depends on the intrinsic and extrinsic parameters of the line-scan cameras, the position of the object, and the velocity and direction of the movement, and its mathematical expression is unknown. The differing distortion of the line-scan cameras results in inaccurate correlation values. In pixel-level matching, inaccurate correlation values have only a slight influence because only the position of the maximum correlation value is needed. In sub-pixel matching, however, the inaccurate correlation values enter the two-dimensional quadratic polynomial fitting and affect the sub-pixel matching accuracy. The mathematical expression for the influence of the correlation values on the matching error is given in Appendix D. Consequently, irregular movement first affects the correlation value calculation, and the correlation values then affect the sub-pixel matching accuracy. To represent the relationship between matching accuracy and correlation values, as well as the influence of the motion state on them, the uniform and non-uniform linear motion cases are analyzed as examples.

The coordinates of the nine points utilized in the sub-pixel calculation step are shown in Fig. 5. Assume that the coordinate of the matching point is (0, 0) in o-xy. The correlation values of these nine points are shown in Fig. 9(a), and the fitted two-dimensional quadratic polynomial surface is shown in Fig. 9(b). The sub-pixel correlation matching position calculated from the fitted surface is (0, 0), equal to the set position of the matching point. Hence, in the uniform linear motion case, the matching error has a weak correlation with motion and is mainly determined by the accuracy of the algorithm.

Fig. 9. In the uniform linear motion case, (a) the assumed correlation values. (b) The fitted curve.

However, in the non-uniform linear motion case shown in Fig. 10(a), where (0, 0) is also set as the matching point, the matching error has a strong correlation with motion. When y is equal to -1, the movement speed increases; the points in this row are closer to the matching point, so the correlation values change to 0.5, 0.6, and 0.5. The fitted surface is shown in Fig. 10(b), and the sub-pixel correlation matching position is computed as (0, -0.06). Therefore, the motion state affects the correlation values, which in turn affect the sub-pixel matching accuracy, and the matching error influences the measurement uncertainty.
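
This effect can be reproduced with the subpixel_peak sketch from Section 3.1. The correlation values below are purely illustrative (they are not the values of Figs. 9 and 10); they only demonstrate that raising one row of correlation values pulls the fitted peak toward that row.

```python
import numpy as np
# subpixel_peak: the least-squares fitting sketch given in Section 3.1

uniform = np.array([[0.4, 0.5, 0.4],    # y = -1
                    [0.5, 0.6, 0.5],    # y =  0 (pixel-level match)
                    [0.4, 0.5, 0.4]])   # y =  1
perturbed = uniform.copy()
perturbed[0] = [0.5, 0.6, 0.5]          # row y = -1 raised by faster motion

print(subpixel_peak(uniform))           # (0.0, 0.0): peak stays at the match point
print(subpixel_peak(perturbed))         # y_max < 0: peak is pulled toward y = -1
```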

Fig. 10. In the non-uniform linear motion case, (a) the assumed correlation values. (b) The fitted curve.

In measurement based on line-scan cameras, movement affects the point cloud accuracy in three ways. First, movement affects the splicing accuracy. Measurements based on line-scan cameras need prior trajectory information, such as from a displacement platform, a rail, or some other six-degree-of-freedom positioning system, to splice the one-shot measurement results. If the actual relative movement between the sensors and the object does not follow the prior trajectory, the accuracy of the whole point cloud declines. Second, movement affects the measurement itself. Because the capture time differences between cameras are kept as small as possible in the proposed measurement model, the movement between different cameras is assumed to be uniform linear motion; non-uniform linear motion reduces the measurement accuracy. Third, movement affects the matching accuracy. For a point P, the capture times of the different cameras differ. Irregular movement causes different distortion in the different camera images around the measured point, which affects the computation of the correlation values in sub-pixel matching, and inaccurate correlation values yield inaccurate sub-pixel matching results. The closer the movement is to the assumed motion, the more accurate the point cloud. However, motion vibration is unavoidable, so measurement error exists.

5. Experiments

To validate the measurement methods, an experimental setup consisting of triple line-scan cameras (Basler racer raL8192-12gm), an illumination source, and a motion platform was established. The schematic diagram is shown in Fig. 11. The motion platform generates a relative motion between the cameras and the measured objects. The resolution of the line-scan cameras is 8192 pixels, the focal length of the lens is 35 mm, and the working distance Wd is about 700 mm. Theoretically, the lateral resolution of the system is about 0.07 mm. To obtain a longitudinal resolution consistent with the lateral resolution, the motion speed of the platform and the acquisition rate of the cameras have to satisfy the relation in [27].

Fig. 11. Schematic diagram of the experimental setup.

$$vel = \frac{{Wd \times Fre}}{{F \times n}}.$$

Hence, the motion speed is set to 35 mm/s and the acquisition rate to 500 Hz. The three line-scan cameras are calibrated accurately by the method in [31].
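
As a quick check of Eq. (10), the snippet below evaluates the required speed under the assumption (not stated explicitly in the text) that n is the number of sensor pixels per millimeter, i.e. the reciprocal of a 3.5 µm pixel pitch, which is consistent with the stated 0.07 mm lateral resolution (Wd × pixel pitch / F):

```python
Wd, F, Fre = 700.0, 35.0, 500.0   # working distance [mm], focal length [mm], rate [Hz]
pixel_pitch = 0.0035              # mm; assumed, consistent with 0.07 mm lateral resolution
n = 1.0 / pixel_pitch             # assumed: sensor pixels per mm
vel = Wd * Fre / (F * n)          # Eq. (10)
print(vel)                        # 35.0 mm/s, matching the chosen motion speed
```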

5.1. Accuracy experiments

To verify the accuracy of the proposed optimization algorithm based on bundle adjustment fused with motion information, a 00-level (flatness 0.005 mm) marble surface, shown in Fig. 12(a), was measured. Although the marble was painted to enrich the surface texture, which inevitably degrades the flatness, it can still be regarded as a reference plane. The point clouds with and without optimization were both computed, and the best-fit plane of each was calculated. The color maps of the deviations are shown in Fig. 12(b) and Fig. 12(c) respectively, where the color of a point represents its distance to the best-fit plane. The RMS (root mean square) deviation without optimization is 0.063 mm, which drops to 0.053 mm after optimization, and the RMS re-projection error drops from 3.18 pixels to 2.76 pixels. Therefore, the proposed optimization method has a certain compensation effect.
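
The flatness evaluation used here (best-fit plane plus RMS of point-to-plane distances) can be sketched as follows; the least-squares plane fit via SVD is a standard approach and is assumed rather than taken from the paper.

```python
import numpy as np

def rms_to_best_fit_plane(points):
    """points: (N, 3) array. Returns the RMS of distances to the least-squares plane."""
    centroid = points.mean(axis=0)
    # The plane normal is the right singular vector of the centered cloud
    # associated with the smallest singular value.
    _, _, Vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = Vt[-1]
    dist = (points - centroid) @ normal          # signed point-to-plane distances
    return float(np.sqrt(np.mean(dist**2)))
```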

Fig. 12. (a) 00-level marble surface. (b) The point cloud without optimization. (c) The point cloud with optimization.

However, it should be noted that the fringes in the point cloud are not accidental. They also occur in the point clouds of other flat objects, such as a painted acrylic plate and an unpainted marble ruler in Fig. 13. They are caused by the inaccurate correlation values involved in the sub-pixel matching calculation, and non-uniform motion is a significant factor producing inaccurate correlation values. The matching error affects the measurement accuracy directly; the detailed reason has been illustrated in Section 4.2.

Fig. 13. (a) A painted acrylic plate captured by a line-scan camera. (b) The point cloud of the painted acrylic plate. (c) An unpainted marble ruler captured by a line-scan camera. (d) The point cloud of the unpainted marble ruler.

To reach higher measurement accuracy, the motion platform was replaced with a more accurate one (PI-M-521.DD1) with a design resolution of 0.05 mm, and the marble surface in Fig. 12(a) was measured again. In this experiment, the more accurate motion platform reduces the effect of movement on the matching, while the proposed optimization algorithm reduces the effect of non-uniform linear motion on the measurement and the splicing. Limited by the maximum speed of the PI motion platform, the movement speed was set to 18 mm/s and the acquisition rate to 250 Hz. The resulting point cloud is shown in Fig. 14. The RMS deviation without optimization is 0.017 mm, which drops to 0.015 mm after optimization, and the RMS re-projection error drops from 5.91 pixels to 3.32 pixels. With the more accurate motion platform and the optimization algorithm, the RMS deviation drops from 0.063 mm to 0.015 mm and the fringe error in the point cloud disappears.

Fig. 14. The measurement result of the marble surface. (a) The point cloud without optimization. (b) The point cloud with optimization.

5.2. Effect of epipolar constraint

The epipolar constraint of line-scan cameras helps obtain the point cloud of an object with surface repetition. To verify its effect, an array of pins, shown in Fig. 15, which is common on PCBs (printed circuit boards), was measured. The array of pins is a typical object with surface repetition, and broken pins directly affect the performance of the PCB. In Fig. 15, the pins, whose height is about 3 mm, are spaced 2 mm apart, and there are two obviously broken pins. Without the epipolar constraint, the search range of matching covers multiple pins; the repeated texture interferes with the matching, resulting in a mass of wrong points in Fig. 16 and a failed measurement. However, as shown in Fig. 17, the blue line in Image 1 and the green line in Image 3 are the epipolar constraints of the red point in Image 2 to be matched. With suitable epipolar constraints, the search range of matching can be limited to a small area covering only one pin. The number of wrong points is reduced, and the resulting point cloud is shown in Fig. 18.

Fig. 15. An array of pins.

Fig. 16. The point cloud of pins without epipolar constraint. (a) The point cloud viewed from a general perspective. (b) The point cloud viewed from the Z axis.

Fig. 17. The epipolar constraint.

Fig. 18. The point cloud of pins with epipolar constraint. (a) The point cloud viewed from a general perspective. (b) The point cloud viewed from the Z axis.

In Fig. 16 and Fig. 18, the point color represents the Z coordinate: red points have large Z values and blue points have small values. In the enlargement in Fig. 18, the Z coordinates of the points in the black boxes are larger than those of the surrounding points, which indicates a high likelihood of broken pins in these areas. In Fig. 19, the point cloud is colored with the gray values of the line-scan image from Camera 2. In the enlargement in Fig. 19, the point cloud in the red boxes also accords with the actual broken pins. Therefore, the epipolar constraint of line-scan cameras has great advantages in the measurement of objects with surface repetition. Besides, since this constraint narrows the search range of matching, the calculation is also sped up to some extent.

Fig. 19. The point cloud corresponding to the gray values.

5.3. Comparative experiments

The proposed measurement setup is compared with T-Scan (AT 901 LR & T-Cam & T-Scan, Leica Geosystems). According to the T-Scan datasheet, its line-scan rate is 140 Hz and its measurement rate reaches 20000 points per second; the resolution of the sensor is between 0.07 mm and 0.98 mm, the measurement accuracy is 20 µm, and the average scan width is about 90 mm. In the comparative experiments, the line-scan rate of the triple line-scan camera setup is set to 500 Hz and each line contains 8k points. The resolution of the sensor is up to 0.07 mm and the average scan width is 500 mm. According to the theoretical analysis of measurement uncertainty, the measurement accuracy is 50 µm. The maximum line-scan rate of the triple line-scan camera setup can reach 12000 Hz, and with the high-accuracy PI motion platform the measurement accuracy can reach 20 µm.

It is difficult to compare the point clouds of the two systems quantitatively because conventional registration methods fail for point clouds with greatly different densities. Even so, we compare the proposed measurement setup with T-Scan as best we can. The circuit board in Fig. 20 was measured by both systems, and the resulting point clouds are shown in Fig. 21, where the color of a point represents its distance to a best-fit plane. The two point clouds are very similar. The total number of points is 2055404 for the proposed method and 12283 for T-Scan; the average number of points per row is about 1752 for our method and about 133 for T-Scan. With similar point clouds, the density achieved by our method in a single scan is much higher than that of T-Scan. As shown in Fig. 21, the points in the black boxes represent chips and those in the white boxes represent components. Because the surface texture of the chips is poor, the point cloud from the triple line-scan cameras, which relies on digital image correlation, is missing there, while that from T-Scan, which is based on laser triangulation, is complete. In the white boxes, conversely, the points of the components are missing in the T-Scan result but complete in the triple line-scan result. Besides, the proposed setup also obtains image information: the point cloud can be colored by the Z coordinate or by the image grayscale, as shown in Fig. 22. To make the coloring of the point cloud clearer, the gray value is mapped to a color value; the mapping between them is shown in Fig. 23. Therefore, the proposed method is more advantageous for complex surface measurement, and with the correspondence between the point cloud and the image, distinguishing features in the point cloud is also easier.

Fig. 20. The circuit board (a) captured by a phone camera. (b) Captured by a line-scan camera.

Fig. 21. The point cloud measured by (a) T-Scan. (b) Triple line-scan cameras setup.

Fig. 22. The color of points represented by (a) the Z axis coordinates, (b) the gray values of images.

Fig. 23. The relationship between gray values and color values.

5.4. Measurement experiments

Three other circuit boards, shown in Fig. 24, were measured by our method; the results are shown in Figs. 25–27. The color of the points in Fig. 25(a), Fig. 26(a), and Fig. 27(a) expresses the Z coordinate, while the color of the points in Fig. 25(b), Fig. 26(b), and Fig. 27(b) corresponds to the image grayscale; the relationship between gray values and color values is shown in Fig. 23. In addition, a car fender, shown in Fig. 28(a), was measured, and its point cloud is shown in Fig. 28(b). The triple line-scan camera setup with the proposed methods can therefore measure objects with complex surfaces and obtain in-motion continuous point clouds. The circuit board and fender measurement experiments show that our method has wide application prospects in industrial measurement.

Fig. 24. Three other circuit boards. (a) Circuit board 1. (b) Circuit board 2. (c) Circuit board 3.

Fig. 25. The point cloud of Circuit 1. (a) The color of points represented by the Z axis coordinates. (b) The color of points represented by the gray values of images.

Fig. 26. The point cloud of Circuit 2. (a) The color of points represented by the Z axis coordinates. (b) The color of points represented by the gray values of images.

Fig. 27. The point cloud of Circuit 3. (a) The color of points represented by the Z axis coordinates. (b) The color of points represented by the gray values of images.

Fig. 28. (a) A car fender. (b) The point cloud of the car fender.

6. Conclusion

A measurement method based on triple line-scan cameras for continuous in-motion measurement was proposed herein. Because measurement and movement are treated in a fused process, the relative positions of the cameras in the experimental setup do not require complicated adjustment, which greatly simplifies installation and measurement. The measurement accuracy was increased by the optimization based on bundle adjustment fused with motion information. The epipolar constraint was computed to limit the matching search range and to assist the measurement of objects with surface repetition. In the experiments, a reference plane was first measured to verify the measurement accuracy, and an array of pins was measured to validate the effect of the epipolar constraint. The comparative experiment with T-Scan demonstrated the high density and rich detail of the point cloud produced by the proposed setup, and further measurement experiments showed the application prospects of the method. However, the measurement speed of the proposed method has not been discussed. The algorithm was implemented in Matlab on a PC (Intel i5 CPU); the iterative data processing can be accelerated by a GPU (graphics processing unit) or FPGA (field-programmable gate array), making real-time computing possible. Thus, measurement methods based on line-scan cameras remain significant for in-motion continuous measurement.

Appendix

Appendix A: The computation of bundle adjustment fused with motion information

In this part, the detailed computation procedure for bundle adjustment fused with motion information based on Gauss-Newton iteration is introduced. It can be divided into four steps: estimation, error, Jacobian matrix, and correction.

A1. Estimation

From Eq. (2) and Eq. (3), the projection equations of Cameras 1 and 3 are written in Eq. (11). The parameter num represents the camera number; here num = 1 or 3, and ${\tilde u_{num}}$ has been corrected for distortion.

$$\left\{ \begin{array}{l} {{\tilde u}_{num}} = u{c_{num}} + {F_{num}}\frac{{r_{11}^{num}{X_{num}} + r_{12}^{num}{Y_{num}} + r_{13}^{num}{Z_{num}} + T_1^{num}}}{{r_{31}^{num}{X_{num}} + r_{32}^{num}{Y_{num}} + r_{33}^{num}{Z_{num}} + T_3^{num}}}\\ {P_{num}} = {\left[ {\begin{array}{*{20}{c}} {{X_{num}}}&{{Y_{num}}}&{{Z_{num}}} \end{array}} \right]^T} = {P_2} + {D_{num}}\\ {D_{num}} = \Delta {{\tilde v}_{num}} \times Dv \end{array}. \right.$$

The rotation matrix is

$${R^{num}} = \left[ {\begin{array}{*{20}{c}} {r_{11}^{num}}&{r_{12}^{num}}&{r_{13}^{num}}\\ {r_{21}^{num}}&{r_{22}^{num}}&{r_{23}^{num}}\\ {r_{31}^{num}}&{r_{32}^{num}}&{r_{33}^{num}} \end{array}} \right].$$

In standard bundle adjustment, the projection equation in Eq. (11) constitutes the whole estimation. For line-scan cameras, however, the estimation consists of both the projection and the motion equations. The motion equation is written as

$$\begin{aligned} &\delta {{\tilde v}_{num}} = \frac{{r_{21}^{num}{X_{num}} + r_{22}^{num}{Y_{num}} + r_{23}^{num}{Z_{num}} + T_2^{num}}}{{r_{21}^{num}D{v_x} + r_{22}^{num}D{v_y} + r_{23}^{num}D{v_z}}}\\ &= \frac{{r_{21}^{num}({{X_2} + \Delta {v_{num}}D{v_x}} )+ r_{22}^{num}({{Y_2} + \Delta {v_{num}}D{v_y}} )+ r_{23}^{num}({{Z_2} + \Delta {v_{num}}D{v_z}} )+ T_2^{num}}}{{r_{21}^{num}D{v_x} + r_{22}^{num}D{v_y} + r_{23}^{num}D{v_z}}} \end{aligned}.$$

Vector Dv = [Dvx Dvy Dvz]T is related to the motion direction d, the motion velocity vel, and the acquisition rate Fre of the cameras. Parameters with a superscript ∼ denote estimated values. $\delta {\tilde v_{num}}$ is expressed in Eq. (14), and ${\tilde v_{num}}$ is the estimated value of the v coordinate in the images.

$$\delta {\tilde v_{num}} = ({{v_{num}} - {v_2}} )- ({{{\tilde v}_{num}} - {{\tilde v}_2}} )= {v_{num}} - {\tilde v_{num}}.$$

A2. Error

The error equation in Eq. (15) is derived from the imaging equation Eq. (11) and the motion equation Eq. (13). The observed values of the re-projection are u1 and u3, and the observed motion part is zero.

$$e({Dv} )= {\left[ {\begin{array}{*{20}{c}} {{u_1}}&0&{{u_3}}&0 \end{array}} \right]^T} - {\left[ {\begin{array}{*{20}{c}} {{{\tilde u}_1}}&{\delta {{\tilde v}_1}}&{{{\tilde u}_3}}&{\delta {{\tilde v}_3}} \end{array}} \right]^T}.$$

A3. Jacobian matrix

The Jacobian matrix of e(Dv) is

$$J({Dv} )= - \left[ {\begin{array}{lll} {\frac{{\partial {{\tilde u}_1}}}{{\partial D{v_x}}}}&{\frac{{\partial {{\tilde u}_1}}}{{\partial D{v_y}}}}&{\frac{{\partial {{\tilde u}_1}}}{{\partial D{v_z}}}}\\ {\frac{{\partial \delta {{\tilde v}_1}}}{{\partial D{v_x}}}}&{\frac{{\partial \delta {{\tilde v}_1}}}{{\partial D{v_y}}}}&{\frac{{\partial \delta {{\tilde v}_1}}}{{\partial D{v_z}}}}\\ {\frac{{\partial {{\tilde u}_3}}}{{\partial D{v_x}}}}&{\frac{{\partial {{\tilde u}_3}}}{{\partial D{v_y}}}}&{\frac{{\partial {{\tilde u}_3}}}{{\partial D{v_z}}}}\\ {\frac{{\partial \delta {{\tilde v}_3}}}{{\partial D{v_x}}}}&{\frac{{\partial \delta {{\tilde v}_3}}}{{\partial D{v_y}}}}&{\frac{{\partial \delta {{\tilde v}_3}}}{{\partial D{v_z}}}} \end{array}} \right].$$

The derivative of ũnum with respect to Dv can be further expressed as

$$\frac{{\partial {{\tilde u}_{num}}}}{{\partial Dv}} = \frac{{\partial {{\tilde u}_{num}}}}{{\partial {P_{num}}}}\frac{{\partial {P_{num}}}}{{\partial Dv}} = \left[ {\begin{array}{ccc} {\frac{{\partial {{\tilde u}_{num}}}}{{\partial {X_{num}}}}}&{\frac{{\partial {{\tilde u}_{num}}}}{{\partial {Y_{num}}}}}&{\frac{{\partial {{\tilde u}_{num}}}}{{\partial {Z_{num}}}}} \end{array}} \right]\frac{{\partial {P_{num}}}}{{\partial Dv}}.$$

$\frac{{\partial {{\tilde u}_{num}}}}{{\partial {X_{num}}}}$, $\frac{{\partial {{\tilde u}_{num}}}}{{\partial {Y_{num}}}}$, and $\frac{{\partial {{\tilde u}_{num}}}}{{\partial {Z_{num}}}}$ in $\frac{{\partial {{\tilde u}_{num}}}}{{\partial {P_{num}}}}$ can be calculated from the projection equation Eq. (11) as

$$\left\{ \begin{aligned} &{\frac{{\partial {{\tilde u}_{num}}}}{{\partial {X_{num}}}} = {F_{num}}\frac{{({r_{11}^{num}r_{32}^{num} - r_{31}^{num}r_{12}^{num}} ){Y_{num}} + ({r_{11}^{num}r_{33}^{num} - r_{31}^{num}r_{13}^{num}} ){Z_{num}} + r_{11}^{num}T_3^{num} - r_{31}^{num}T_1^{num}}}{{{{({r_{31}^{num}{X_{num}} + r_{32}^{num}{Y_{num}} + r_{33}^{num}{Z_{num}} + T_3^{num}} )}^2}}}}\\ &{\frac{{\partial {{\tilde u}_{num}}}}{{\partial {Y_{num}}}} = {F_{num}}\frac{{({r_{12}^{num}r_{31}^{num} - r_{32}^{num}r_{11}^{num}} ){X_{num}} + ({r_{12}^{num}r_{33}^{num} - r_{32}^{num}r_{13}^{num}} ){Z_{num}} + r_{12}^{num}T_3^{num} - r_{32}^{num}T_1^{num}}}{{{{({r_{31}^{num}{X_{num}} + r_{32}^{num}{Y_{num}} + r_{33}^{num}{Z_{num}} + T_3^{num}} )}^2}}}}\\ &{\frac{{\partial {{\tilde u}_{num}}}}{{\partial {Z_{num}}}} = {F_{num}}\frac{{({r_{13}^{num}r_{31}^{num} - r_{33}^{num}r_{11}^{num}} ){X_{num}} + ({r_{13}^{num}r_{32}^{num} - r_{33}^{num}r_{12}^{num}} ){Y_{num}} + r_{13}^{num}T_3^{num} - r_{33}^{num}T_1^{num}}}{{{{({r_{31}^{num}{X_{num}} + r_{32}^{num}{Y_{num}} + r_{33}^{num}{Z_{num}} + T_3^{num}} )}^2}}}} \end{aligned}. \right.$$

$\frac{{\partial {P_{num}}}}{{\partial Dv}}$ can also be calculated from the projection equation in Eq. (11) as

$$\frac{{\partial {P_{num}}}}{{\partial Dv}} = \left[ {\begin{array}{*{20}{c}} {\Delta {{\tilde v}_{num}}}&0&0\\ 0&{\Delta {{\tilde v}_{num}}}&0\\ 0&0&{\Delta {{\tilde v}_{num}}} \end{array}} \right].$$

Substituting Eq. (18) and Eq. (19) into Eq. (17) gives the value of $\frac{{\partial {{\tilde u}_{num}}}}{{\partial Dv}}$.

To compute the value of $\frac{{\partial \delta {{\tilde v}_{num}}}}{{\partial Dv}}$, the implicit function in Eq. (20) is constructed from Eq. (13), considering Eq. (14):

$$F({Dv,\delta {{\tilde v}_{num}}} )= \frac{{r_{21}^{num}{X_2} + r_{22}^{num}{Y_2} + r_{23}^{num}{Z_2} + T_2^{num}}}{{r_{21}^{num}D{v_x} + r_{22}^{num}D{v_y} + r_{23}^{num}D{v_z}}} + \Delta {v_{num}} - 2\delta {\tilde v_{num}} = 0.$$

According to the differentiation rule for implicit functions, $\frac{{\partial \delta {{\tilde v}_{num}}}}{{\partial Dv}}$ can be written as

$$\frac{{\partial \delta {{\tilde v}_{num}}}}{{\partial Dv}} = - \left[ {\begin{array}{*{20}{c}} {\frac{{{F_{D{v_x}}}({Dv,\delta {{\tilde v}_{num}}} )}}{{{F_{\delta {{\tilde v}_{num}}}}({Dv,\delta {{\tilde v}_{num}}} )}}}&{\frac{{{F_{D{v_y}}}({Dv,\delta {{\tilde v}_{num}}} )}}{{{F_{\delta {{\tilde v}_{num}}}}({Dv,\delta {{\tilde v}_{num}}} )}}}&{\frac{{{F_{D{v_z}}}({Dv,\delta {{\tilde v}_{num}}} )}}{{{F_{\delta {{\tilde v}_{num}}}}({Dv,\delta {{\tilde v}_{num}}} )}}} \end{array}} \right].$$

Here

$$\left\{ \begin{aligned} &{F_{D{v_x}}}({Dv,\delta {{\tilde v}_{num}}} )= - r_{21}^{num}\frac{{r_{21}^{num}{X_2} + r_{22}^{num}{Y_2} + r_{23}^{num}{Z_2} + T_2^{num}}}{{{{({r_{21}^{num}D{v_x} + r_{22}^{num}D{v_y} + r_{23}^{num}D{v_z}} )}^2}}}\\ &{F_{D{v_y}}}({Dv,\delta {{\tilde v}_{num}}} )= - r_{22}^{num}\frac{{r_{21}^{num}{X_2} + r_{22}^{num}{Y_2} + r_{23}^{num}{Z_2} + T_2^{num}}}{{{{({r_{21}^{num}D{v_x} + r_{22}^{num}D{v_y} + r_{23}^{num}D{v_z}} )}^2}}}\\ &{F_{D{v_z}}}({Dv,\delta {{\tilde v}_{num}}} )= - r_{23}^{num}\frac{{r_{21}^{num}{X_2} + r_{22}^{num}{Y_2} + r_{23}^{num}{Z_2} + T_2^{num}}}{{{{({r_{21}^{num}D{v_x} + r_{22}^{num}D{v_y} + r_{23}^{num}D{v_z}} )}^2}}}\\ &{F_{\delta {{\tilde v}_{_{num}}}}}({Dv,\delta {{\tilde v}_{num}}} )= - 2 \end{aligned}. \right.$$

Substituting Eq. (22) into Eq. (21) gives the value of $\frac{{\partial \delta {{\tilde v}_{num}}}}{{\partial Dv}}$, and the Jacobian matrix in Eq. (16) is then obtained.

A4. Correction

ΔDvk expresses the current disturbance of Dv and can be computed as

$$\Delta D{v_k} ={-} {({J{{({Dv} )}^T}J({Dv} )} )^{ - 1}}J{({Dv} )^T}e({D{v_k}} ).$$

The corrected value of Dv is

$$D{v_{k + 1}} = D{v_k} + \Delta D{v_k}.$$

Substituting the new value Dvk+1 into the measurement function in Eq. (3) gives the corrected coordinate of the point, and substituting Dvk+1 into Eq. (7) gives the corrected (optimized) position of the point.

Appendix B: The dynamic epipolar constraint of line-scan cameras

The coordinate system of Camera 2 is regarded as the reference coordinate system. The images of Cameras 1 and 2 in Fig. 6 are taken as an example for the computation. The uniform linear motion direction is defined as d(m, n, o)T. The equation of the motion line is

$$\frac{{{X_1} - {X_2}}}{m} = \frac{{{Y_1} - {Y_2}}}{n} = \frac{{{Z_1} - {Z_2}}}{o}.$$

Since Camera 2 is the reference camera and P2 lies on its viewing plane, Y2 is equal to zero. The viewing-plane equation of Camera 1 is

$$0 = r_{21}^1{X_1} + r_{22}^1{Y_1} + r_{23}^1{Z_1} + T_2^1.$$

The imaging equation of Camera 2 is

$$u_2^{\prime} = \frac{{{u_2} - u{c_2} - \Delta {u_2}}}{{{F_2}}} = \frac{{{X_2}}}{{{Z_2}}}.$$

From Eqs. (25)–(27), the coordinate of P1(X1, Y1, Z1)T can be computed as

$$\left\{ \begin{aligned} &{X_1} = \frac{{({ - mr_{23}^1 + r_{22}^1u_2^{\prime}n + r_{23}^1u_2^{\prime}o} ){Z_2} - mT_2^1}}{{({r_{21}^1m + r_{22}^1n + r_{23}^1o} )}} = \frac{{{X_a}{Z_2} + {X_b}}}{A}\\ &{Y_1} = \frac{{ - n({r_{21}^1u_2^{\prime} + r_{23}^1} ){Z_2} - nT_2^1}}{{({r_{21}^1m + r_{22}^1n + r_{23}^1o} )}} = \frac{{{Y_a}{Z_2} + {Y_b}}}{A}\\ &{Z_1} = \frac{{({r_{21}^1m + r_{22}^1n - or_{21}^1u_2^{\prime}} ){Z_2} - oT_2^1}}{{({r_{21}^1m + r_{22}^1n + r_{23}^1o} )}} = \frac{{{Z_a}{Z_2} + {Z_b}}}{A} \end{aligned}. \right.$$

The imaging equation of Camera 1 is

$${u_1} = u{c_1} - \Delta {u_1} + {F_1}\frac{{r_{11}^1{X_1} + r_{12}^1{Y_1} + r_{13}^1{Z_1} + T_1^1}}{{r_{31}^1{X_1} + r_{32}^1{Y_1} + r_{33}^1{Z_1} + T_3^1}}.$$

From Eq. (28) and Eq. (29), the relationship between u1 and Z2 is shown as

$${u_1} = \frac{{{U_a}{Z_2} + {U_b}}}{{{U_c}{Z_2} + {U_d}}}.$$

Here

$$\left\{ \begin{aligned} &{U_a} = {F_1}({r_{11}^1{X_a} + r_{12}^1{Y_a} + r_{13}^1{Z_a}} )+ ({u{c_1} - \Delta {u_1}} )({r_{31}^1{X_a} + r_{32}^1{Y_a} + r_{33}^1{Z_a}} )\\ &{U_b} = {F_1}({r_{11}^1{X_b} + r_{12}^1{Y_b} + r_{13}^1{Z_b} + AT_1^1} )+ ({u{c_1} - \Delta {u_1}} )({r_{31}^1{X_b} + r_{32}^1{Y_b} + r_{33}^1{Z_b} + AT_3^1} )\\ &{U_c} = r_{31}^1{X_a} + r_{32}^1{Y_a} + r_{33}^1{Z_a}\\ &{U_d} = r_{31}^1{X_b} + r_{32}^1{Y_b} + r_{33}^1{Z_b} + AT_3^1 \end{aligned}. \right.$$

The relationship between Δv1 and Z2 is given by the disparity along the Y axis.

$$\Delta {v_1} = {v_1} - {v_2} = \frac{{{Y_a}{Z_2} + {Y_b}}}{{A \times D{v_y}}} = {V_a}{Z_2} + {V_b}.$$

From Eq. (30) and Eq. (32), the epipolar line is expressed as

$${u_1} = \frac{{{U_a}\Delta {v_1} - {U_a}{V_b} + {U_b}{V_a}}}{{{U_c}\Delta {v_1} - {U_c}{V_b} + {U_d}{V_a}}}.$$
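
A direct implementation of Eqs. (28)–(33) might look like the sketch below; the rotation matrix, translation, intrinsics, and motion parameters are placeholder inputs, and the distortion term Δu1 is passed in explicitly as in Eq. (29).

```python
import numpy as np

def epipolar_coefficients(u2n, R1, T1, F1, uc1, du1, d, Dv_y):
    """Coefficients of the closed-form epipolar curve u1(dv1) of Eq. (33)."""
    m, n, o = d
    r21, r22, r23 = R1[1]
    A = r21 * m + r22 * n + r23 * o                                   # denominator in Eq. (28)
    Xa, Xb = -m * r23 + r22 * u2n * n + r23 * u2n * o, -m * T1[1]
    Ya, Yb = -n * (r21 * u2n + r23), -n * T1[1]
    Za, Zb = r21 * m + r22 * n - o * r21 * u2n, -o * T1[1]
    # Eq. (31)
    k = uc1 - du1
    Ua = F1 * (R1[0] @ [Xa, Ya, Za]) + k * (R1[2] @ [Xa, Ya, Za])
    Ub = F1 * (R1[0] @ [Xb, Yb, Zb] + A * T1[0]) + k * (R1[2] @ [Xb, Yb, Zb] + A * T1[2])
    Uc = R1[2] @ [Xa, Ya, Za]
    Ud = R1[2] @ [Xb, Yb, Zb] + A * T1[2]
    # Eq. (32)
    Va, Vb = Ya / (A * Dv_y), Yb / (A * Dv_y)
    return Ua, Ub, Uc, Ud, Va, Vb

def u1_of_dv1(dv1, coeffs):
    """Evaluate the epipolar curve u1(dv1) of Eq. (33)."""
    Ua, Ub, Uc, Ud, Va, Vb = coeffs
    return (Ua * dv1 - Ua * Vb + Ub * Va) / (Uc * dv1 - Uc * Vb + Ud * Va)
```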

Appendix C: Measurement uncertainty

For simplicity of calculation, it is assumed that the transformation relationship between the Camera 1 and Camera 3 coordinate systems is completely symmetric about Camera 2.

$$\left\{ {\begin{array}{*{20}{c}} {{R^1} = \left[ {\begin{array}{*{20}{c}} 1&0&0\\ 0&{\cos \alpha }&{ - \sin \alpha }\\ 0&{\sin \alpha }&{\cos \alpha } \end{array}} \right]}&{{T^1} = {{\left[ {\begin{array}{*{20}{c}} 0&{\frac{L}{2}\cos \alpha }&{\frac{L}{2}\sin \alpha } \end{array}} \right]}^T}}\\ {{R^3} = \left[ {\begin{array}{*{20}{c}} 1&0&0\\ 0&{\cos \alpha }&{\sin \alpha }\\ 0&{ - \sin \alpha }&{\cos \alpha } \end{array}} \right]}&{{T^3} = {{\left[ {\begin{array}{*{20}{c}} 0&{ - \frac{L}{2}\cos \alpha }&{\frac{L}{2}\sin \alpha } \end{array}} \right]}^T}} \end{array}}. \right.$$

The imaging model in Eq. (2) can be reformulated as

$$\begin{array}{cc} \left\{ \begin{array}{l} u_1^{\prime} = \frac{{{X_1}}}{{{Y_1}\sin \alpha + {Z_1}\cos \alpha + \frac{L}{2}\sin \alpha }}\\ 0 = {Y_1}\cos \alpha - {Z_1}\sin \alpha + \frac{L}{2}\cos \alpha \end{array} \right. & \left\{ \begin{array}{l} u_3^{\prime} = \frac{{{X_3}}}{{ - {Y_3}\sin \alpha + {Z_3}\cos \alpha + \frac{L}{2}\sin \alpha }}\\ 0 = {Y_3}\cos \alpha + {Z_3}\sin \alpha + \frac{L}{2}\cos \alpha \end{array}. \right. \end{array}$$

In this situation, the u coordinates of corresponding imaging pixels are the same for the two cameras, and the X and Z coordinates of points P1 and P3 are consistent. The motion direction d is [0, 1, 0]T. The constraint conditions are

$$\left\{ \begin{array}{l} \begin{array}{*{20}{c}} {{X_1} = {X_3}}&{{Z_1} = {Z_3}} \end{array}\\ \begin{array}{*{20}{c}} {u_1^{\prime} = u_3^{\prime}}&{Dv = {{\left[ {\begin{array}{*{20}{c}} 0&{dy}&0 \end{array}} \right]}^T}} \end{array}\\ {Y_3} = {Y_1} + \Delta v \times dy \end{array}. \right.$$

Parameter dy is the resolution along the Y axis, computed as the motion velocity divided by the acquisition rate, and Δv is the disparity between Cameras 1 and 3. Here P1(X1, Y1, Z1)T is taken as the coordinate of point P(X, Y, Z)T. Therefore, the coordinate of P can be computed as follows.

$$\left\{ \begin{aligned} &X = \frac{{u_1^{\prime}}}{2}({L - \Delta v \times dy} )\left( {\frac{{\cos \alpha }}{{\tan \alpha }} + \sin \alpha } \right)\\ &Y = \frac{{\Delta v \times dy}}{2}\\ &Z = \frac{{\Delta v \times dy}}{{2\tan \alpha }} + \frac{L}{{2\tan \alpha }} \end{aligned}. \right.$$

From Eq. (37), the matching error in the v direction primarily affects the measurement accuracy, and the effect of u1' can be ignored. The derivative of P with respect to Δv is

$$\left\{ \begin{aligned} &\frac{{\partial X}}{{\partial \Delta v}} = - \frac{{dy}}{2} \times u_1^{\prime}\left( {\sin \alpha + \frac{{\cos \alpha }}{{\tan \alpha }}} \right)\\ &\frac{{\partial Y}}{{\partial \Delta v}} = - \frac{{dy}}{2}\\ &\frac{{\partial Z}}{{\partial \Delta v}} = - \frac{{dy}}{{2\tan \alpha }} \end{aligned}. \right.$$

And the measurement uncertainty can be expressed as

$$\begin{aligned} &\Delta P = \sqrt {{{\left( {\Delta X} \right)}^{2}} + {{\left( {\Delta Y} \right)}^{2}} + {{\left( {\Delta Z} \right)}^{2}}} = \sqrt {{{\left( {\frac{{\partial X}}{{\partial \Delta v}}\sigma } \right)}^{2}} + {{\left( {\frac{{\partial Y}}{{\partial \Delta v}}\sigma } \right)}^{2}} + {{\left( {\frac{{\partial Z}}{{\partial \Delta v}}\sigma } \right)}^{2}}} \\ &= \sigma \frac{{dy}}{2}\sqrt {{{\left( {u_1^{\prime}} \right)}^{2}}{{\left( {\sin \alpha + \frac{{\cos \alpha }}{{\tan \alpha }}} \right)}^{2}} + 1 + {{\left( {\frac{1}{{\tan \alpha }}} \right)}^{2}}} \end{aligned}.$$

Appendix D: The effect of correlation values on sub-pixel matching

The two-dimensional quadratic polynomial fitting for sub-pixel accuracy can be written as

$$C = Aa = \left[ {\begin{array}{*{20}{l}} {one}&x&y&{x \circ x}&{x \circ y}&{y \circ y} \end{array}} \right]{\left[ {\begin{array}{*{20}{l}} {{a_1}}&{{a_2}}&{{a_3}}&{{a_4}}&{{a_5}}&{{a_6}} \end{array}} \right]^T}.$$

Here $^{\circ}$ denotes the Hadamard product. The vectors one, x, and y are defined as

$$\left\{ \begin{array}{l} one = {\left[ {\begin{array}{*{20}{l}} 1&1&1&1&1&1&1&1&1 \end{array}} \right]^T}\\ x = {\left[ {\begin{array}{*{20}{l}} { - 1}&0&1&{ - 1}&0&1&{ - 1}&0&1 \end{array}} \right]^T}\\ y = {\left[ {\begin{array}{*{20}{l}} { - 1}&{ - 1}&{ - 1}&0&0&0&1&1&1 \end{array}} \right]^T} \end{array}. \right.$$

The coefficient vector a can be computed by least squares.

$$a = {({{A^T}A} )^{ - 1}}{A^T}C = MC.$$

M is a 6×9 matrix and C is composed of the correlation values. The uncertainty of each element of a can be expressed as

$$\begin{array}{*{20}{c}} {\Delta {a_i} = \sqrt {\sum\limits_{j = 1}^9 {{{({{M_{ij}}\Delta {C_j}} )}^2}} } }&{i = 1,2, \cdots ,6}. \end{array}$$

The parameter i indexes the elements of a, Mij is the element in the ith row and jth column of M, and ΔCj is the uncertainty of each correlation value. Since the sub-pixel matching position is computed in Eq. (9), the uncertainties of xmax and ymax can be computed as

$$\left\{ \begin{aligned} &\Delta {x_{\max }} = \sqrt {{{\left( {\frac{{\partial {x_{\max }}}}{{\partial {a_2}}}\Delta {a_2}} \right)}^2} + {{\left( {\frac{{\partial {x_{\max }}}}{{\partial {a_3}}}\Delta {a_3}} \right)}^2} + {{\left( {\frac{{\partial {x_{\max }}}}{{\partial {a_4}}}\Delta {a_4}} \right)}^2} + {{\left( {\frac{{\partial {x_{\max }}}}{{\partial {a_5}}}\Delta {a_5}} \right)}^2} + {{\left( {\frac{{\partial {x_{\max }}}}{{\partial {a_6}}}\Delta {a_6}} \right)}^2}} \\ &\Delta {y_{\max }} = \sqrt {{{\left( {\frac{{\partial {y_{\max }}}}{{\partial {a_2}}}\Delta {a_2}} \right)}^2} + {{\left( {\frac{{\partial {y_{\max }}}}{{\partial {a_3}}}\Delta {a_3}} \right)}^2} + {{\left( {\frac{{\partial {y_{\max }}}}{{\partial {a_4}}}\Delta {a_4}} \right)}^2} + {{\left( {\frac{{\partial {y_{\max }}}}{{\partial {a_5}}}\Delta {a_5}} \right)}^2} + {{\left( {\frac{{\partial {y_{\max }}}}{{\partial {a_6}}}\Delta {a_6}} \right)}^2}} \end{aligned}. \right. $$

Here

$$\begin{array}{*{20}{c}} {\left\{ \begin{array}{l} \frac{{\partial {x_{\max }}}}{{\partial {a_2}}} = \frac{{2{a_6}}}{{a_5^2 - 4{a_4}{a_6}}}\\ \frac{{\partial {x_{\max }}}}{{\partial {a_3}}} = \frac{{ - {a_5}}}{{a_5^2 - 4{a_4}{a_6}}}\\ \frac{{\partial {x_{\max }}}}{{\partial {a_4}}} = \frac{{4{a_6}\left( {2{a_2}{a_6} - {a_3}{a_5}} \right)}}{{{{\left( {a_5^2 - 4{a_4}{a_6}} \right)}^2}}}\\ \frac{{\partial {x_{\max }}}}{{\partial {a_5}}} = \frac{{{a_3}a_5^2 + 4{a_3}{a_4}{a_6} - 4{a_2}{a_5}{a_6}}}{{{{\left( {a_5^2 - 4{a_4}{a_6}} \right)}^2}}}\\ \frac{{\partial {x_{\max }}}}{{\partial {a_6}}} = \frac{{2{a_2}a_5^2 - 4{a_4}{a_3}{a_5}}}{{{{\left( {a_5^2 - 4{a_4}{a_6}} \right)}^2}}} \end{array} \right.}&{\left\{ \begin{array}{l} \frac{{\partial {y_{\max }}}}{{\partial {a_2}}} = \frac{{ - {a_5}}}{{a_5^2 - 4{a_4}{a_6}}}\\ \frac{{\partial {y_{\max }}}}{{\partial {a_3}}} = \frac{{2{a_4}}}{{a_5^2 - 4{a_4}{a_6}}}\\ \frac{{\partial {y_{\max }}}}{{\partial {a_4}}} = \frac{{2{a_3}a_5^2 - 4{a_2}{a_5}{a_6}}}{{{{\left( {a_5^2 - 4{a_4}{a_6}} \right)}^2}}}\\ \frac{{\partial {y_{\max }}}}{{\partial {a_5}}} = \frac{{{a_2}a_5^2 + 4{a_2}{a_4}{a_6} - 4{a_3}{a_4}{a_5}}}{{{{\left( {a_5^2 - 4{a_4}{a_6}} \right)}^2}}}\\ \frac{{\partial {y_{\max }}}}{{\partial {a_6}}} = \frac{{4{a_4}\left( {2{a_3}{a_4} - {a_2}{a_5}} \right)}}{{{{\left( {a_5^2 - 4{a_4}{a_6}} \right)}^2}}} \end{array}. \right.} \end{array}$$
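
The propagation of Eqs. (40)–(45) can be sketched numerically as below; M is formed exactly as in Eq. (42), while the partial derivatives of Eq. (45) are approximated here by finite differences rather than the analytic expressions, purely for brevity.

```python
import numpy as np

def subpixel_uncertainty(C, dC, eps=1e-6):
    """Propagate correlation-value uncertainties dC (9-vector) to (dx_max, dy_max).

    C : 9-vector of correlation values ordered as in Eq. (41)
        (x = [-1,0,1,-1,0,1,-1,0,1], y = [-1,-1,-1,0,0,0,1,1,1]).
    """
    x = np.array([-1, 0, 1, -1, 0, 1, -1, 0, 1], float)
    y = np.array([-1, -1, -1, 0, 0, 0, 1, 1, 1], float)
    A = np.column_stack([np.ones(9), x, y, x * x, x * y, y * y])   # Eq. (40)
    M = np.linalg.pinv(A)                                          # (A^T A)^-1 A^T, Eq. (42)

    def peak(a):                                                    # Eq. (9)
        den = a[4] ** 2 - 4 * a[3] * a[5]
        return np.array([(2 * a[1] * a[5] - a[2] * a[4]) / den,
                         (2 * a[2] * a[3] - a[1] * a[4]) / den])

    a = M @ np.asarray(C, float)
    da = np.sqrt((M ** 2) @ (np.asarray(dC, float) ** 2))          # Eq. (43)
    # Finite-difference stand-in for the partial derivatives of Eq. (45)
    Jpk = np.column_stack([(peak(a + eps * e) - peak(a)) / eps for e in np.eye(6)])
    return np.sqrt((Jpk ** 2) @ (da ** 2))                         # Eq. (44)
```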

Funding

National Natural Science Foundation of China (51721003, 51975408, 52127810).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.





Figures (28)

Fig. 1. Different measurement models of line-scan cameras. (a) Classical dual line-scan measurement model. (b) Triple line-scan measurement model in the remote sensing field. (c) The proposed measurement model.
Fig. 2. Measurement model.
Fig. 3. The error in the semi-ideal model.
Fig. 4. The optimization process based on bundle adjustment fused with motion information.
Fig. 5. A new coordinate system with the pixel-level matching point as the origin.
Fig. 6. The epipolar line of line-scan cameras.
Fig. 7. The model for measurement uncertainty of two cameras.
Fig. 8. (a) Relationship between u1 and uncertainty. (b) Relationship between α and uncertainty. The blue, yellow, green, and red lines express the uncertainty of ΔX, ΔY, ΔZ, and ΔP, respectively.
Fig. 9. In the uniform linear motion case, (a) the assumed correlation values and (b) the fitted curve.
Fig. 10. In the non-uniform linear motion case, (a) the assumed correlation values and (b) the fitted curve.
Fig. 11. Schematic diagram of the experimental setup.
Fig. 12. (a) 00-level marble surface. (b) The point cloud without optimization. (c) The point cloud with optimization.
Fig. 13. (a) A painted acrylic plate captured by a line-scan camera. (b) The point cloud of the painted acrylic plate. (c) An unpainted marble ruler captured by a line-scan camera. (d) The point cloud of the unpainted marble ruler.
Fig. 14. The measurement result of the marble surface. (a) The point cloud without optimization. (b) The point cloud with optimization.
Fig. 15. An array of pins.
Fig. 16. The point cloud of the pins without the epipolar constraint. (a) The point cloud viewed from a general perspective. (b) The point cloud viewed along the Z axis.
Fig. 17. The epipolar constraint.
Fig. 18. The point cloud of the pins with the epipolar constraint. (a) The point cloud viewed from a general perspective. (b) The point cloud viewed along the Z axis.
Fig. 19. The point cloud corresponding to the gray values.
Fig. 20. The circuit board captured by (a) a phone camera and (b) a line-scan camera.
Fig. 21. The point cloud measured by (a) the T-Scan and (b) the triple line-scan camera setup.
Fig. 22. The color of the points represented by (a) the Z axis coordinates and (b) the gray values of the images.
Fig. 23. The relationship between the gray values and the color values.
Fig. 24. Three other circuit boards. (a) Circuit board 1. (b) Circuit board 2. (c) Circuit board 3.
Fig. 25. The point cloud of circuit board 1. (a) The color of the points represented by the Z axis coordinates. (b) The color of the points represented by the gray values of the images.
Fig. 26. The point cloud of circuit board 2. (a) The color of the points represented by the Z axis coordinates. (b) The color of the points represented by the gray values of the images.
Fig. 27. The point cloud of circuit board 3. (a) The color of the points represented by the Z axis coordinates. (b) The color of the points represented by the gray values of the images.
Fig. 28. (a) A car fender. (b) The point cloud of the car fender.
