Extrinsic parameters calibration of multi-camera with non-overlapping fields of view using laser scanning


Abstract

An extrinsic parameter calibration method for multi-camera systems with non-overlapping fields of view (FOV) using laser scanning is presented. Firstly, two lasers are mounted on a multi-degree-of-freedom manipulator so that their projected line-structured light can scan objects freely. By controlling the movement of the manipulator, the line-structured light is projected into the field of view of one of the cameras, and the light plane equation in that camera coordinate frame is calibrated with a target. The manipulator is moved several times in small increments to change the position of the structured light within the camera's field of view, and the light plane is calibrated at each position. The light plane equations of the line-structured light in the manipulator coordinate frame are then solved by the hand-eye calibration method. Secondly, the light planes are projected into the field of view of each remaining camera to be calibrated; the light plane equations in that camera coordinate frame are calibrated, and the extrinsic parameters between the camera coordinate frame and the manipulator coordinate frame are calculated, so that the extrinsic calibration of multiple cameras is realized. The proposed method connects the non-overlapping cameras through laser scanning. It can effectively solve the problem of multi-camera extrinsic parameter calibration under long working distances and complex ambient light.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

The line-structured light vision sensor is one of the most widely applied sensors in industrial measurement because of its non-contact operation, rapidity, high precision, and large measurement range. At present, research on line-structured light vision sensors mainly focuses on calibration methods in different field environments [1–4], on the measurement and three-dimensional reconstruction of objects with different shapes and reflective surfaces [5–7], and on the materials of the sensor itself [8]. However, few studies have combined the light plane calibration of the line-structured light vision sensor, the global calibration of multiple cameras (with or without overlapping fields of view), and hand-eye calibration to solve practical engineering problems simultaneously.

In multi-camera vision measurement systems, in order to transform all the measurement data acquired by different cameras into a global coordinate frame, it is necessary to establish transformation relationships between all camera coordinate frames. Most traditional methods rely on matching features in each camera's overlapping fields of view (FOV) to establish common constraints. Usually, a stereo calibration object or, in most cases, a planar pattern target is used to establish a common coordinate frame, and both the intrinsic and extrinsic parameters of the cameras can be well estimated by classical calibration methods [9]. However, on some special occasions, such as large-scale or panoramic measurement, there is usually not enough overlapping FOV between cameras, or even no overlapping FOV at all. Scene feature correspondences between cameras can then hardly be obtained, especially in multi-camera systems with long working distances or large working angles.

Much research on calibrating the extrinsic parameters of multi-camera systems with non-overlapping FOV has been reported in the last two decades; it can be divided into two main categories.

One category relies on auxiliary equipment such as targets, laser rangefinders, point or line lasers, auxiliary cameras, and plane mirrors, and is widely used because of its low cost and relatively simple operation. Kumar et al. [10], Lébraly et al. [11], and Xu et al. [12] each used a mirror to overcome the difficulty of calibrating non-overlapping cameras. By adjusting the position and orientation of the mirror, each camera can observe a common planar target in the mirror. However, when the measurement system is complex, it is hard to ensure that every camera can observe the fixed target clearly by adjusting the mirror, and the target can easily be occluded by objects in the measurement field. Scene depth is also a severe problem, as variability in depth makes the matching hard.

Liu et al. [13] proposed an extrinsic parameter calibration method for multi-camera systems with non-overlapping FOV using two planar targets fixed together by a rigid rod. The mutual coordinate transformation between the two targets need not be known; the targets are moved several times and the hand-eye calibration method is used to solve the external parameters. The method has high accuracy, but it is not easy to operate in narrow spaces or at long working distances because of the limited size of the planar targets and the length of the connecting rod.

Liu et al. [14,15] proposed a calibration method based on a 1D target. The rotation matrix and the translation vector are computed from the collinearity property and the known distances of the feature points on the 1D target. Owing to the small volume and mobility of the 1D target, this method can be applied to cameras distributed over a large area or in a narrow space, but further improvement of its calibration accuracy is limited by the small number of known points.

Liu et al. [16] used a laser projector and a 2D planar target as auxiliary rigs to connect the non-overlapping cameras. Their method computes the external parameters using the collinearity of the laser spots captured by all vision sensors at each spot-laser position. Liu et al. [17,18] used several pairs of skew laser lines and a planar target to construct an intermediary coordinate frame. These methods are well suited for on-site extrinsic parameter calibration in both large and narrow spatial environments, but because the laser projector itself is rigidly fixed, it is difficult to operate and to obtain clear calibration images, which makes it difficult to further improve the accuracy.

Dong et al. [19] unified multiple cameras using arbitrarily distributed artificial encoded labels on a wall, which can be regarded as a planar target. Liu et al. [20] used an SLR camera to calibrate the extrinsic parameters of two cameras with non-overlapping fields of view. These two methods are simple and practical, but when two cameras face toward each other or away from each other, it is difficult to calibrate the extrinsic parameters.

The other category relies on high-precision 3D coordinate acquisition equipment. Kitahara et al. [21] combined a calibration target and a 3D laser-surveying instrument to calibrate a large-scale multi-camera visual measurement system. Lu [22] employed two theodolites and a planar target to calibrate non-overlapping multi-camera systems. Kuo et al. [23] utilized a mobile device, such as a smartphone equipped with GPS position and 3D orientation information, to calibrate a wide-area camera network with non-overlapping FOV.

Manipulators have been widely used in industrial measurement in place of manual operation to achieve accurate positioning. In this paper, we fix two lasers on the manipulator arm instead of a camera. Unlike previous hand-eye systems, this arrangement not only enables calibration in complex light environments but also avoids the image blurring caused by the camera's perspective. Although the light plane parameter calibration of the two lasers seems cumbersome, the calibration steps can be simplified by using a stereo target to provide feature constraints (such as the spherical target and double cylindrical target in [1]).

The rest of the paper is organized as follows: Section 2 is a detailed introduction of the basic principle of the proposed algorithm; Sections 3 and 4 present the simulation and physical experiments, respectively; and Section 5 concludes the study.

2. Principle of the algorithm

The algorithm can be divided into two steps. The first is to calibrate the light plane equations of the two lasers in the manipulator coordinate frame. The second is to solve the extrinsic parameters between two cameras by using the light planes as the intermediary. The flowchart of the system and the algorithm is shown in Fig. 1.

Fig. 1 The flowchart of the calibration system and the algorithm.

2.1. Measurement model of the line-structured light

The measurement model of the line-structured light is shown in Fig. 2. Suppose OCXCYCZC represents the real camera coordinate frame, ocxcyc represents the image coordinate frame in millimeters, and ouv represents the image coordinate frame in pixels. Let PC = (XC, YC, ZC)T be an object point lying on the light plane Π, and let p be its ideal projection point on the image plane.

Fig. 2 Measurement model of line-structured light vision sensor.

According to the perspective projection model, the point PC in the camera coordinate frame and its image point p = (u, v)T satisfy

$$s\begin{bmatrix}u\\ v\\ 1\end{bmatrix} = K_C\left[\,I_{3\times 3},\ 0_{3\times 1}\,\right]\begin{bmatrix}X_C\\ Y_C\\ Z_C\\ 1\end{bmatrix},\qquad K_C=\begin{bmatrix}\alpha_x & \gamma & u_0\\ 0 & \alpha_y & v_0\\ 0 & 0 & 1\end{bmatrix}. \tag{1}$$

The equation of the light plane Π is written as

$$aX_C + bY_C + cZ_C + d = 0, \tag{2}$$

where KC is the intrinsic parameter matrix of the real camera obtained by calibration, αx and αy denote the effective focal lengths along the image x and y axes, (u0, v0) is the principal point, γ is the skew of the two image axes, and s is a nonzero scale factor.

If the intrinsic parameter matrix KC and the equation of the light plane Π are known, the 3D coordinates of the measured point PC can be calculated from Eqs. (1) and (2).
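As a concrete illustration of this two-equation intersection, the following Python sketch back-projects a pixel through Eq. (1) and intersects the resulting viewing ray with the light plane of Eq. (2). It assumes an ideal, distortion-free camera; the function name and example values are ours.

```python
import numpy as np

def triangulate_on_light_plane(uv, K, plane):
    """Intersect the viewing ray of pixel (u, v) with the light plane.

    K     : 3x3 intrinsic matrix K_C of Eq. (1), distortion removed.
    plane : coefficients (a, b, c, d) of Eq. (2).
    Returns the 3D point P_C in the camera coordinate frame.
    """
    n, d = np.asarray(plane[:3], dtype=float), float(plane[3])
    # Back-project the pixel: every candidate point is s * ray, s > 0.
    ray = np.linalg.solve(K, np.array([uv[0], uv[1], 1.0]))
    # Substitute s * ray into a*X + b*Y + c*Z + d = 0 and solve for s.
    s = -d / (n @ ray)
    return s * ray

# Example with the simulated intrinsics of Section 3:
K = np.array([[5000.0, 0.0, 800.0],
              [0.0, 5000.0, 600.0],
              [0.0, 0.0, 1.0]])
P_C = triangulate_on_light_plane((850.0, 640.0), K, (0.0, 0.0, 1.0, -500.0))
```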

2.2. Calibration of the laser scanning system

The laser scanning system consists of two line lasers fixed on a multi-degree-of-freedom manipulator. The equations of the two light planes in the manipulator coordinate frame OMXMYMZM can then be calibrated. The calibration sketch is shown in Fig. 3.

Fig. 3 Calibration of multi-camera with non-overlapping FOV using laser scanning.

The manipulator is driven so that the two beams of structured light are projected onto the planar target within the clear field of view of the camera used for calibration. The planar target is then moved several times to calculate the two light plane equations in the camera coordinate frame at this manipulator position. The manipulator is driven repeatedly (at least twice), with the planar target moved appropriately within the clear field of view of the camera, and the two light plane equations at each manipulator position are obtained in the camera coordinate frame. Because the motion parameters of the manipulator before and after each movement are known accurately, the hand-eye calibration method yields not only the extrinsic parameters between the camera coordinate frame and the manipulator coordinate frame but also the two light plane equations in the manipulator coordinate frame.
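At each target pose, the stripe points observed on the target can be lifted to 3D on the target plane, and a plane fitted through the points accumulated over the target poses gives the light plane in the camera frame. A minimal sketch of the least-squares plane fit, with names of our choosing:

```python
import numpy as np

def fit_plane(points):
    """Fit a plane n . X + d = 0 (with ||n|| = 1) to an Nx3 array of
    3D stripe points; the normal is the direction of least spread."""
    centroid = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - centroid)
    n = Vt[-1]                      # singular vector of smallest value
    return n, -float(n @ centroid)  # plane passes through the centroid
```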

As shown in Fig. 4, the light planes before and after a manipulator movement are Πi and Π′i, where the subscript i (i = 1, 2) indicates the laser number. The unit normal vector of a plane is denoted by n, and the homogeneous coordinates of a plane are Π = (n, d), where d is the fourth component of the plane vector Π. The subscripts c and m denote variables expressed in the camera coordinate frame and the manipulator coordinate frame, respectively.

Fig. 4 Calibration of the laser scanning system (that is, of the light plane equations in the manipulator coordinate frame).

It should be noted that both a planar target and a stereo target can be used to calibrate the light plane parameters of the laser. The planar target is relatively easy to machine, but it needs to be placed at many different positions. The stereo target requires only a single placement, but its machining requirements are relatively high. In practice, the calibration accuracy of the two kinds of targets is equivalent. In this paper, we use the planar target.

In the camera coordinate frame OCXCYCZC, the light planes before and after the movement satisfy

$$\Pi'_{ci} = H_c^{-T}\,\Pi_{ci}, \tag{3}$$

where $H_c=\begin{bmatrix}R_c & T_c\\ 0^{T} & 1\end{bmatrix}$, and Rc and Tc are the rotation matrix and translation vector describing the change of the plane in the camera coordinate frame when the manipulator moves.

In the manipulator coordinate frame OMXMYMZM, the light planes before and after the movement satisfy

$$\Pi'_{mi} = H_m^{-T}\,\Pi_{mi}, \tag{4}$$

where $H_m=\begin{bmatrix}R_m & T_m\\ 0^{T} & 1\end{bmatrix}$, and Rm and Tm are the rotation matrix and translation vector of the manipulator motion in the manipulator coordinate frame, respectively.

Therefore, before and after the manipulator movement, the two light planes satisfy the following relationship between the camera coordinate frame and the manipulator coordinate frame:

$$\begin{cases} \Pi_{mi} = H_{mc}^{-T}\,\Pi_{ci}\\[2pt] \Pi'_{mi} = H_{mc}^{-T}\,\Pi'_{ci}, \end{cases} \tag{5}$$

where $H_{mc}=\begin{bmatrix}R_{mc} & T_{mc}\\ 0^{T} & 1\end{bmatrix}$, and Rmc and Tmc are the rotation matrix and translation vector from the camera coordinate frame to the manipulator coordinate frame, respectively.
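In code, this plane transport is a one-liner. A minimal numpy sketch under our naming, with planes stored as homogeneous 4-vectors (n, d):

```python
import numpy as np

def make_H(R, t):
    """Assemble the 4x4 point transform H = [R t; 0 1]."""
    H = np.eye(4)
    H[:3, :3], H[:3, 3] = R, t
    return H

def transform_plane(H, plane):
    """Map a plane through the point transform X' = H X using
    Pi' = inv(H)^T Pi, as in Eqs. (3)-(5)."""
    return np.linalg.inv(H).T @ plane

# e.g. Pi_m = transform_plane(make_H(R_mc, T_mc), Pi_c)   # Eq. (5)
```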

2.2.1. Calculation of rotation matrix Rmc

From Eq. (3), the relationship between the plane normal vectors in the camera coordinate frame is

$$n'_{ci} = R_c\,n_{ci}. \tag{6}$$

There are two sets of equations of the form of Eq. (6), one per laser (i = 1, 2). The two light planes are neither coincident nor parallel, so in any state their normal vectors are not parallel. The cross product of the two normal vectors gives a third vector perpendicular to both, and Rc is solved as

$$R_c = \left[\,n'_{c1}\;\; n'_{c2}\;\; n'_{c3}\,\right]\left[\,n_{c1}\;\; n_{c2}\;\; n_{c3}\,\right]^{-1}, \tag{7}$$

where $n'_{c3} = n'_{c1}\times n'_{c2}/\lVert n'_{c1}\times n'_{c2}\rVert_{2}$ and $n_{c3} = n_{c1}\times n_{c2}/\lVert n_{c1}\times n_{c2}\rVert_{2}$.

From Eqs. (3), (4), and (5), the rotation matrices before and after the movement satisfy

$$R_m\,R_{mc} = R_{mc}\,R_c. \tag{8}$$

The rotation matrix Rc can be solved from Eq. (7) and the rotation matrix Rm can be obtained from the manipulator's feedback. Eq. (8) is a typical hand-eye equation AX = XB [24,25]. By rotating the manipulator many times, many equations of the form of Eq. (8) are obtained; combining them yields Rmc.
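One common way to solve the rotation part of AX = XB, in the spirit of [25], maps each rotation pair to its rotation vector (the matrix logarithm): since Rm = Rmc Rc Rmc^T, the two rotation axes differ exactly by Rmc, which can then be recovered by an orthogonal Procrustes fit. A minimal sketch, assuming scipy is available:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def handeye_rotation(Rm_list, Rc_list):
    """Solve R_m R_mc = R_mc R_c (Eq. 8) for R_mc from >= 2 motions."""
    # Rotation vectors (axis * angle); the angles of R_m and R_c agree,
    # and their axes satisfy alpha_i = R_mc @ beta_i.
    A = np.stack([Rotation.from_matrix(R).as_rotvec() for R in Rm_list])
    B = np.stack([Rotation.from_matrix(R).as_rotvec() for R in Rc_list])
    # Orthogonal Procrustes: R_mc = argmin sum_i ||alpha_i - R beta_i||^2.
    U, _, Vt = np.linalg.svd(A.T @ B)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])  # enforce det = +1
    return U @ D @ Vt
```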

2.2.2. Calculation of translation vector Tmc

The expansion of Eq. (5) gives

$$\begin{cases} d_{mi} = -\left(R_{mc}^{T}T_{mc}\right)^{T}n_{ci} + d_{ci}\\[2pt] d'_{mi} = -\left(R_{mc}^{T}T_{mc}\right)^{T}n'_{ci} + d'_{ci}. \end{cases} \tag{9}$$

From Eq. (4), we obtain

$$d'_{mi} = -\left(R_{m}^{T}T_{m}\right)^{T}n_{mi} + d_{mi} = -\left(R_{m}^{T}T_{m}\right)^{T}R_{mc}\,n_{ci} + d_{mi}. \tag{10}$$

Combining Eqs. (9) and (10) gives Eq. (11) for Tmc:

$$\left[\left(R_{mc}\,n_{ci}\right)^{T} - \left(R_{mc}\,n'_{ci}\right)^{T}\right]T_{mc} + \left(R_{m}^{T}T_{m}\right)^{T}R_{mc}\,n_{ci} = d_{ci} - d'_{ci}, \tag{11}$$

where i = 1, 2; each manipulator movement therefore provides two constraint equations. Since Tmc has three degrees of freedom, the manipulator must move at least twice, and the translation vector Tmc between the two coordinate frames can then be solved from Eq. (11).
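Stacking the two rows of Eq. (11) for every manipulator motion yields an overdetermined linear system in Tmc that is solved by least squares. A minimal sketch; the data layout is our assumption:

```python
import numpy as np

def solve_Tmc(motions, R_mc):
    """Solve Eq. (11) for T_mc by stacking both lasers over all motions.

    Each entry of `motions` holds the manipulator feedback (Rm, Tm) and,
    per laser i, the plane (n[i], d[i]) before and (n_p[i], d_p[i])
    after the motion, expressed in the camera coordinate frame.
    """
    rows, rhs = [], []
    for mo in motions:
        for i in range(2):
            n, n_p = mo["n"][i], mo["n_p"][i]
            # [(R_mc n)^T - (R_mc n')^T] T_mc
            #     = d - d' - (R_m^T T_m)^T R_mc n
            rows.append(R_mc @ (n - n_p))
            rhs.append(mo["d"][i] - mo["d_p"][i]
                       - (mo["Rm"].T @ mo["Tm"]) @ (R_mc @ n))
    T_mc, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(rhs), rcond=None)
    return T_mc
```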

In this way, the two light plane equations in the manipulator coordinate frame are obtained.

2.2.3. Nonlinear optimization

The values of Rmc and Tmc solved above serve as initial values, and the following optimization objective function is established:

$$\min f(x)=\sum_{\substack{j,k=1,\ j\neq k}}^{n}\ \sum_{i=1,2}\left\lVert H_{m,jk}^{-T}\,H_{mc}^{-T}\,\Pi_{ci}^{j} - H_{mc}^{-T}\,\Pi_{ci}^{k}\right\rVert^{2}, \tag{12}$$

where x = {Rmc, Tmc}, i denotes the laser number, j and k denote the serial numbers of the manipulator poses, n denotes the total number of manipulator movements, Πci^j denotes the light plane in the camera coordinate frame at pose j, $H_{m,jk}=\begin{bmatrix}R_m & T_m\\ 0^{T} & 1\end{bmatrix}$ denotes the transformation of the manipulator from pose j to pose k, and $H_{mc}=\begin{bmatrix}R_{mc} & T_{mc}\\ 0^{T} & 1\end{bmatrix}$ denotes the transformation matrix between the camera coordinate frame and the manipulator coordinate frame.
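A sketch of this refinement (and of the analogous one in Section 2.3.3) with scipy's Levenberg–Marquardt solver, parameterizing Rmc as a Rodrigues (rotation) vector; all names are ours, and the plane 4-vectors are assumed normalized so that ‖n‖ = 1:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def refine_Hmc(r0, t0, planes_c, Hm):
    """Minimize Eq. (12): the plane of laser i seen at pose j, carried
    by the known manipulator motion Hm[j][k], must match the plane seen
    at pose k, both expressed in the manipulator coordinate frame.

    planes_c[j][i] : 4-vector of laser i at pose j (camera frame).
    Hm[j][k]       : 4x4 manipulator motion from pose j to pose k.
    """
    n = len(planes_c)

    def residuals(x):
        H = np.eye(4)
        H[:3, :3] = Rotation.from_rotvec(x[:3]).as_matrix()
        H[:3, 3] = x[3:]
        Hmc_iT = np.linalg.inv(H).T
        res = []
        for j in range(n):
            for k in range(n):
                if j == k:
                    continue
                for i in range(2):
                    moved = np.linalg.inv(Hm[j][k]).T @ Hmc_iT @ planes_c[j][i]
                    res.append(moved - Hmc_iT @ planes_c[k][i])
        return np.concatenate(res)

    sol = least_squares(residuals, np.r_[r0, t0], method="lm")
    return Rotation.from_rotvec(sol.x[:3]).as_matrix(), sol.x[3:]
```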

2.3. Extrinsic parameters calibration (Global calibration)

For convenience of exposition, the global calibration is illustrated with only two cameras. As shown in Fig. 1, the light plane is first projected into the field of view of the left camera A, and its equation is solved with the planar target. The manipulator is then driven so that the light plane is projected into the field of view of the right camera B, where the light plane equation is calculated again. The extrinsic parameters between the two cameras are calibrated by using the light planes as the intermediary. We use subscripts cl and cr to denote variables associated with the camera A and camera B coordinate frames, respectively.

In the manipulator coordinate frame, the light plane Πmli in the field of view of camera A and the light plane Πmri in the field of view of camera B after the rotation are related by

$$\Pi_{mri} = H_{mrl}\,\Pi_{mli}, \tag{13}$$

where $H_{mrl}=\begin{bmatrix}R_{mrl} & T_{mrl}\\ 0^{T} & 1\end{bmatrix}$ is the transformation matrix from the plane Πmli to the plane Πmri in the manipulator coordinate frame.

The planes Πcli and Πcri are transformed from the camera A and camera B coordinate frames, respectively, to the manipulator coordinate frame by

$$\begin{cases} \Pi_{mli} = H_{mcl}^{-T}\,\Pi_{cli}\\[2pt] \Pi_{mri} = H_{mcr}^{-T}\,\Pi_{cri}, \end{cases} \tag{14}$$

where $H_{mcl}=\begin{bmatrix}R_{mcl} & T_{mcl}\\ 0^{T} & 1\end{bmatrix}$ and $H_{mcr}=\begin{bmatrix}R_{mcr} & T_{mcr}\\ 0^{T} & 1\end{bmatrix}$ are the transformation matrices of the planes from the camera A and camera B coordinate frames to the manipulator coordinate frame.

2.3.1. Calculation of rotation matrix Rlr

Eq. (14) yields the relationship between the plane normal vectors in the two camera coordinate frames and the manipulator coordinate frame:

$$\begin{cases} n_{mli} = R_{mcl}\,n_{cli}\\[2pt] n_{mri} = R_{mcr}\,n_{cri}, \end{cases} \tag{15}$$

where i = 1, 2. From Eq. (15), Rmcl and Rmcr can be solved easily (as in Section 2.2.1), and the rotation matrix between the two cameras follows:

$$R_{lr} = R_{mcl}^{-1}\,R_{mcr}. \tag{16}$$

2.3.2. Calculation of translation vector Tlr

Expanding Eq. (14) gives Eq. (17) for the translation vectors:

$$\begin{cases} n_{cli}^{T}\,R_{mcl}^{T}\,T_{mcl} = d_{cli} - d_{mli}\\[2pt] n_{cri}^{T}\,R_{mcr}^{T}\,T_{mcr} = d_{cri} - d_{mri}. \end{cases} \tag{17}$$

When the light planes are scanned at least twice in the fields of view of cameras A and B, the translation vectors Tmcl and Tmcr from the camera coordinate frames to the manipulator coordinate frame can be solved, respectively.

The corresponding transformation from the camera B coordinate frame to the camera A coordinate frame is

$$H_{lr} = H_{mcl}^{-1}\,H_{mcr}. \tag{18}$$

Since $H_{lr}=\begin{bmatrix}R_{lr} & T_{lr}\\ 0^{T} & 1\end{bmatrix}$, the translation vector follows as

$$T_{lr} = R_{mcl}^{-1}\left(T_{mcr} - T_{mcl}\right). \tag{19}$$
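With the two hand-eye results in hand, Eqs. (16), (18), and (19) reduce to a few matrix products. A minimal sketch (the function name is ours):

```python
import numpy as np

def extrinsics_between_cameras(R_mcl, T_mcl, R_mcr, T_mcr):
    """Chain the camera-to-manipulator transforms of cameras A and B:
    H_lr = inv(H_mcl) @ H_mcr maps camera B coordinates into camera A."""
    R_lr = R_mcl.T @ R_mcr               # Eq. (16); R^-1 = R^T
    T_lr = R_mcl.T @ (T_mcr - T_mcl)     # Eq. (19)
    return R_lr, T_lr
```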

2.3.3. Nonlinear optimization

The values of Rlr and Tlr solved above serve as initial values, and the optimization objective function is established as follows:

$$\min f(x)=\sum_{j=1}^{n}\ \sum_{i=1,2}\left\lVert \Pi_{li}^{j} - H_{lr}^{-T}\,\Pi_{ri}^{j}\right\rVert^{2}, \tag{20}$$

where x = {Rlr, Tlr}, i denotes the laser number, j denotes the serial number of the manipulator rotation, n denotes the total number of rotations, Πli^j and Πri^j denote the light planes in the camera A and camera B coordinate frames, and $H_{lr}=\begin{bmatrix}R_{lr} & T_{lr}\\ 0^{T} & 1\end{bmatrix}$ is the transformation matrix from the camera B coordinate frame to the camera A coordinate frame.

As in Section 2.2.3, the rotation matrix Rlr in the independent variable of Eq. (20) is expressed as a Rodrigues vector and optimized by the Levenberg–Marquardt method. Finally, the optimal estimates of Rlr and Tlr are obtained.

3. Simulation

For the extrinsic parameter calibration method proposed in this paper, the calibration accuracy of the laser scanning system is the most critical factor, so simulation experiments are carried out. The impact of the following two factors on the calibration accuracy is analyzed: (1) the rotational accuracy of the manipulator; (2) the calibration accuracy of the light plane.

As shown in Fig. 5, the forward kinematics give the overall transformation matrix

$$T = T_{1}^{0}\,T_{2}^{1}\,T_{3}^{2}\,T_{4}^{3}\,T_{5}^{4}\,T_{6}^{5}, \tag{21}$$

where the specific expressions of $T_{1}^{0},\dots,T_{6}^{5}$ are detailed in the Appendix.
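A minimal sketch of composing Eq. (21) from per-joint link transforms; we use the standard Denavit–Hartenberg form here, which matches the appendix matrices in structure (the exact signs depend on the chosen DH convention):

```python
import numpy as np
from functools import reduce

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg link transform T_i^{i-1}."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def forward_kinematics(joint_angles, dh_table):
    """Eq. (21): T = T_1^0 T_2^1 ... T_6^5; dh_table rows are (d, a, alpha)."""
    links = [dh_transform(q, *row) for q, row in zip(joint_angles, dh_table)]
    return reduce(np.matmul, links, np.eye(4))
```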

Fig. 5 Structural drawing of the manipulator.

Based on the transformation matrix model of the manipulator, we can easily simulate and analyze the influence of the rotational accuracy of the manipulator on the calibration accuracy of the laser scanning system. We use the Robotics Toolbox to analyze the calibration accuracy of the extrinsic parameters; the D–H parameters of the robot are shown in Table 1.

Table 1. The D–H parameters of the manipulator

The simulation schematic diagram is shown in Fig. 6. The intrinsic parameters of the simulated cameras A and B are as follows, without considering lens distortion: the effective focal lengths fAx = fAy = 5000 and fBx = fBy = 5000 (pixels), the principal points (uA0, vA0) = (800, 600) and (uB0, vB0) = (800, 600), and the skew of the two image axes γA = γB = 0.

Fig. 6 Simulation schematic diagram for calibration of the manipulator and the camera.

The rotation and translation vectors from the manipulator coordinate frame to the camera A coordinate frame are rmc = [0.0873, −0.0873, 0.1745]T and tmc = [1000, 300, 10]T. The rotation and translation vectors from the camera B coordinate frame to the camera A coordinate frame are rAB = [0.0873, 0.0873, 0.1745]T and tAB = [2000, −20, 50]T. The light planes of the two lasers are Π1 = [0, 0, 1, 10]T and Π2 = [2, −1, 0, −20]T in the manipulator coordinate frame.

The rotation matrix Rmc is expressed as the Rodrigues vector rmc; both rmc and the translation vector tmc are 3 × 1. The Euclidean distance between the simulated and true values of rmc and tmc is taken as the absolute error, and the ratio of the absolute error to the norm of the corresponding true vector is taken as the relative error.

3.1. Impact of the manipulator rotation accuracy

In this simulation, Gaussian noise with zero mean and standard deviation σ (0.001–0.01 degrees) is added to each joint angle (θ1, θ2, θ3, θ4, θ5, θ6). The manipulator rotates 10 times. For each noise level, 1000 independent simulation experiments are carried out. The relative errors of rmc and tmc under the different simulation conditions are shown in Fig. 7.
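The perturbation and the relative-error metric defined above can be sketched as follows (the names and the RNG seed are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_joints(joints_deg, sigma_deg):
    """Add zero-mean Gaussian noise of standard deviation sigma (deg)
    to each of the six joint angles."""
    return joints_deg + rng.normal(0.0, sigma_deg, size=len(joints_deg))

def relative_error(estimate, truth):
    """Relative Euclidean error used for both r_mc and t_mc."""
    return np.linalg.norm(estimate - truth) / np.linalg.norm(truth)
```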

Fig. 7 (a) The effect of rotation error of the manipulator on the extrinsic parameters rmc. (b) The effect of rotation error of the manipulator on the extrinsic parameters tmc.

Figures 7(a) and 7(b) show that the calibration error increases as the rotational precision of the manipulator decreases. When the rotation accuracy of the manipulator is 0.01 degrees, the calibration error of the extrinsic parameter rmc between the manipulator coordinate frame and the camera A coordinate frame is less than 0.3%.

3.2. Impact of the light plane calibration accuracy

In this simulation, Gaussian noise with zero mean and standard deviation σ (0.1%–1%) is added to the light plane equations (that is, to the coefficients of the plane equations) in the camera coordinate frame, and the manipulator rotates 10 times. For each noise level, 1000 independent experiments are conducted. The calculated light plane equations in the manipulator coordinate frame are compared with the true values; the results are shown in Fig. 8.

Fig. 8 (a) The effect of calibration error of the light plane on the extrinsic parameters rmc. (b) The effect of calibration error of the light plane on the extrinsic parameters tmc.

The figure shows that the calibration error of the extrinsic parameters increases as the calibration error of the light plane increases. When the calibration error of the light plane is 0.5%, the calibration error of the extrinsic parameter rmc between the manipulator coordinate frame and the camera A coordinate frame is less than 0.4%.

4. Physical experiment

In the experiment, we use a two-axis electric servo turntable to verify the proposed method. The two-axis turntable can be regarded as a simplified manipulator, and its operation is simpler, as shown in Fig. 9.

Fig. 9 Multi-camera vision measurement system. The rotation accuracy of the two-axis turntable is better than 0.01°.

The scanning system consists of a two-axis turntable and two laser projectors, so the light planes projected by the lasers can be scanned in all directions in space. Four Allied Vision Technologies cameras, each equipped with a 17 mm Schneider lens, are placed on the left and right sides of the turntable. Camera A with camera C and camera B with camera D constitute binocular vision sensors, respectively, and measure the marker points at both ends of a standard length ruler. Cameras A and B have non-overlapping fields of view; once the extrinsic parameters between them are obtained, the length of the standard ruler can be measured.

4.1. Calibration of intrinsic parameters of multi-camera vision system

The intrinsic parameters of the four cameras are calibrated by Zhang's planar calibration method [9] and are shown in Table 2.

Table 2. Intrinsic parameters of the four cameras

The re-projection errors of the four cameras are shown in Fig. 10.

Fig. 10 (a) Reprojection error of the camera A. (b) Reprojection error of the camera B. (c) Reprojection error of the camera C. (d) Reprojection error of the camera D.

The extrinsic parameters between camera A and camera C are

$$R_{AC}=\begin{bmatrix}0.8200 & 0.0330 & 0.5714\\ 0.0221 & 0.9994 & 0.0261\\ 0.5719 & 0.0088 & 0.8202\end{bmatrix},\quad T_{AC}=\begin{bmatrix}448.4052\\ 17.9286\\ 223.5824\end{bmatrix}.$$

The extrinsic parameters between camera B and camera D are

$$R_{BD}=\begin{bmatrix}0.8599 & 0.1119 & 0.4980\\ 0.0926 & 0.9937 & 0.0633\\ 0.5019 & 0.0083 & 0.8649\end{bmatrix},\quad T_{BD}=\begin{bmatrix}437.0094\\ 54.919\\ 161.6975\end{bmatrix}.$$

Next, we use an auxiliary camera to complete the calibration of the scanning system. The calibrated scanning system can then be used directly to calibrate the extrinsic parameters between multiple cameras.

4.2. Calibration of the laser scanning system

The calibration of the laser scanning system is shown in Fig. 11. Camera A is used as the auxiliary equipment to calibrate the laser scanning system.

Fig. 11 Calibration of the laser scanning system in the physical experiment.

After each rotation, the camera captures images of the light stripes on the target, which are used to calibrate the two light planes; the turntable is rotated 10 times in total. The rotation parameters of the turntable are recorded in Table 3.

Table 3. Rotation parameters of the turntable

The light plane equations after each rotation of the turntable are calibrated, and the rotation matrix Rmc and translation vector Tmc from the camera coordinate frame to the turntable coordinate frame are then solved by the method proposed in this paper:

$$R_{mc}=\begin{bmatrix}0.9962 & 0.8192 & 0.015\\ 0.0868 & 0.5714 & 0.0872\\ 0.014 & 0.0500 & 0.9962\end{bmatrix},\quad T_{mc}=\begin{bmatrix}311.1928\\ 10.2271\\ 21.2129\end{bmatrix}.$$

Some of the calibration images of the light plane are shown in Fig. 12.

Fig. 12 Calibration images of the light plane equation. The light plane equation is calibrated using a 7 × 7 dot-array planar target with LEDs. The distance between dots is 10 mm and the machining accuracy is 0.02 mm.

Finally, the light plane equations in the camera coordinate frame are transformed to the turntable coordinate frame. Combining this with the recorded turntable rotation states, the light plane equations in the turntable coordinate frame when the turntable is at its zero position are obtained:

$$\begin{cases} 0.0868X + 0.5714Y - 0.0087Z - 9.2211 = 0\\[2pt] 0.1041X + 0.9907Y - 0.0065Z + 29.7789 = 0. \end{cases}$$

4.3. Calibration of the extrinsic parameters

4.3.1. Extrinsic parameters calibration based on laser scanning

The turntable is rotated so that the light planes are projected to five different positions in the fields of view of cameras A and B, respectively, and the rotation parameters of the turntable are recorded. For each position, the equations of each light plane are calculated in the camera A and camera B coordinate frames, and the rotation matrix and translation vector between camera A and camera B are then solved:

$$R_{AB}=\begin{bmatrix}0.7710 & 0.0292 & 0.6361\\ 0.0438 & 0.9991 & 0.0073\\ 0.6353 & 0.0355 & 0.7716\end{bmatrix},\quad T_{AB}=\begin{bmatrix}1077.5112\\ 5.5833\\ 18.2765\end{bmatrix}.$$

4.3.2. Extrinsic parameters calibration based on hand-eye

For comparison, the extrinsic parameter calibration based on the hand-eye method is also carried out, as shown in Fig. 13. Firstly, an auxiliary camera is mounted on the turntable, and the hand-eye relationship between the auxiliary camera and the turntable coordinate frame is calibrated. Secondly, the turntable is rotated so that the auxiliary camera forms binocular stereo vision sensors with cameras A and B in turn, and the extrinsic parameters between the two cameras are obtained through the binocular principle.

The rotation matrix and translation vector between camera A and camera B obtained in this way are

$$R_{AB}=\begin{bmatrix}0.7712 & 0.0291 & 0.6351\\ 0.0437 & 0.9971 & 0.0073\\ 0.6349 & 0.0355 & 0.7714\end{bmatrix},\quad T_{AB}=\begin{bmatrix}1077.2616\\ 5.5342\\ 18.3168\end{bmatrix}.$$

Fig. 13 Extrinsic parameters calibration based on hand-eye.

4.3.3. Comparison of measurement accuracy

Ten measurements of the standard length ruler at different positions are shown in Table 4.

Table 4. Measurement of the standard length ruler at ten different positions

A total of ten different positions of the standard length ruler are captured, as shown in Fig. 14.

Fig. 14 Measurement images of the standard length ruler (the length is 1021.413 mm).

The RMS error of the standard length ruler measurement is 0.26 mm using the proposed laser scanning calibration method, and 0.22 mm using the calibration method that moves a camera mounted on the manipulator. This shows that the two methods reach an equivalent level of measurement accuracy.

4.4. Applications

The multi-camera vision system is applied to the measurement of four-wheel alignment. When four-wheel alignment parameters such as the toe-in, thrust angle, steering axis inclination, and caster are out of specification, the car may wander dangerously while driving. The calibration method of the four-wheel alignment system based on the visual measurement method is given, as shown in Fig. 15.

Fig. 15 Multi-camera vision system is applied to the measurement of four-wheel alignment.

Four cameras A, B, C, and D without overlapping fields of view are fixed on both sides of the vehicle. A checkerboard planar target is fixed on the side of each wheel and can be observed by the corresponding camera. After the intrinsic parameters of each camera are calibrated, the spatial normal vector of the target plane fixed on each wheel can be calculated. If the camera A coordinate frame is taken as the world coordinate frame after the extrinsic parameters between the four cameras are solved, the spatial normal vectors of the four checkerboard planes are known, and the four-wheel alignment parameters can be obtained indirectly from these normal vectors.
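As one illustration, with all checkerboard normals expressed in the camera A frame, a toe-like angle between the left and right wheel planes reduces to the angle between the normals projected onto the ground plane. The sketch below is a simplified stand-in; the ground-normal direction and sign conventions are our assumptions, not the paper's:

```python
import numpy as np

def toe_angle_deg(n_left, n_right, ground_n=np.array([0.0, 1.0, 0.0])):
    """Angle between two wheel-plane normals projected onto the ground
    plane (a simplified toe-angle measure)."""
    def project(n):
        n = n - (n @ ground_n) * ground_n   # remove the vertical part
        return n / np.linalg.norm(n)
    c = np.clip(project(n_left) @ project(n_right), -1.0, 1.0)
    return np.degrees(np.arccos(c))
```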

As shown in Fig. 16, the extrinsic parameters of the four cameras with non-overlapping FOV are calibrated by the proposed laser scanning method. The two light plane equations in the turntable coordinate frame were given in Section 4.2. The LED planar target is placed directly in each camera's field of view, the two light plane equations in that camera coordinate frame are calibrated, and the transformation between the camera coordinate frame and the turntable coordinate frame is solved. The extrinsic parameters of each camera can therefore be calibrated by using the turntable coordinate frame as the intermediary.

Fig. 16 Calibration of the multi-camera vision measurement system based on laser scanning.

It should be noted that, because of the symmetrical arrangement of the four cameras, when the turntable is located at the center of the four cameras the distance between the turntable and each camera is equal, so lenses of the same focal length can be used and the extrinsic parameters of all cameras can be calibrated. However, in a multi-camera vision measurement system in which the camera distances are not uniform or symmetrically distributed, it is often necessary to change to lenses with different focal lengths to adapt to the different working distances; in a camera-based hand-eye system, this forces a re-calibration of the hand-eye relationship between the camera and the turntable. Because of the large field angle and long working distance of the laser projector, as long as the light plane is scanned into the field of view of a camera, the extrinsic parameters can be obtained by calibrating the equations of the two light planes in the turntable coordinate frame only once.

The images of the planar targets mounted on the wheels obtained by each camera are shown in Fig. 17.

Fig. 17 Plane target images mounted on wheels captured by each camera.

Using the camera A coordinate frame as the world coordinate frame, the spatial distribution of each checkerboard plane target after three-dimensional reconstruction is shown in Fig. 18.

Fig. 18 Spatial distribution of camera and target.

5. Conclusion

In this paper, we propose an extrinsic parameter calibration method for multi-camera systems with non-overlapping fields of view based on laser scanning, in which two lasers are carried by a manipulator. We use the lasers instead of the camera of traditional hand-eye systems. Because the structured light projected by the lasers forms easily identifiable features in complex or dark light environments, calibration can be achieved quickly by driving the manipulator several times and using stereo targets (step blocks, spheres, double cylinders, etc.). Compared with the traditional hand-eye system, the method is not affected by the camera imaging angle or by the field of view of the hand-eye camera, and does not require adjusting the exposure time and aperture of the camera. The operation is therefore relatively simple and especially suitable for on-line calibration in complex light environments. Experiments show that the global calibration accuracy matches that of the traditional hand-eye approach.

Appendix

$$T_1^0=\begin{bmatrix}\cos\theta_1 & 0 & \sin\theta_1 & 0\\ \sin\theta_1 & 0 & \cos\theta_1 & 0\\ 0 & 1 & 0 & d_1\\ 0 & 0 & 0 & 1\end{bmatrix},\quad T_2^1=\begin{bmatrix}\cos\theta_2 & \sin\theta_2 & 0 & a_2\cos\theta_2\\ \sin\theta_2 & \cos\theta_2 & 0 & a_2\sin\theta_2\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\end{bmatrix},$$
$$T_3^2=\begin{bmatrix}\cos\theta_3 & \sin\theta_3 & 0 & a_3\cos\theta_3\\ \sin\theta_3 & \cos\theta_3 & 0 & a_3\sin\theta_3\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\end{bmatrix},\quad T_4^3=\begin{bmatrix}\cos\theta_4 & 0 & \sin\theta_4 & 0\\ \sin\theta_4 & 0 & \cos\theta_4 & 0\\ 0 & 1 & 0 & d_4\\ 0 & 0 & 0 & 1\end{bmatrix},$$
$$T_5^4=\begin{bmatrix}\cos\theta_5 & 0 & \sin\theta_5 & 0\\ \sin\theta_5 & 0 & \cos\theta_5 & 0\\ 0 & 1 & 0 & d_1\\ 0 & 0 & 0 & 1\end{bmatrix},\quad T_6^5=\begin{bmatrix}\cos\theta_6 & \sin\theta_6 & 0 & 0\\ \sin\theta_6 & \cos\theta_6 & 0 & 0\\ 0 & 0 & 1 & d_6\\ 0 & 0 & 0 & 1\end{bmatrix}.$$

Funding

National Science Fund for Distinguished Young Scholars of China (51625501); National Natural Science Foundation of China (61673039).

References

1. Z. Liu, X. J. Li, and Y. Yin, “On-site calibration of line-structured light vision sensor in complex light environments,” Opt. Express 23(23), 29896 (2015). [CrossRef]

2. P. Wang, J. M. Wang, J. Xu, Y. Guan, G. L. Zhang, and K. Chen, “Calibration method for a large-scale structured light measurement system,” Appl. Opt. 56(14), 3995–4002 (2017). [CrossRef]   [PubMed]  

3. Y. Zhu, Y. G. Gu, Y. Jin, and C. Zhai, “Flexible calibration method for an inner surface detector based on circle structured light,” Appl. Opt. 55(5), 1034–1039 (2016). [CrossRef]   [PubMed]  

4. M. Deetjen, E. Marc, and D. Lentink, “Automated calibration of multi-camera-projector structured light systems for volumetric high-speed 3D surface reconstructions,” Opt. Express 26(25), 33278–33304 (2018). [CrossRef]  

5. Z. Liu, F. Li, B. Huang, and G. Zhang, “Real-time and accurate rail wear measurement method and experimental analysis,” J. Opt. Soc. Am. A. 31(8), 1721–1729 (2014). [CrossRef]  

6. Z. W. Cai, X. L. Liu, X. Peng, and B. Z. Gao, “Universal phase-depth mapping in a structured light field,” Appl. Opt. 57(1), A26–A32 (2018). [CrossRef]   [PubMed]  

7. C. F. Jiang, B. Lim, and S. Zhang, “Three-dimensional shape measurement using a structured light system with dual projectors,” Appl. Opt. 57(14), 3983–3990 (2018). [CrossRef]   [PubMed]  

8. T. Omatsu, N. M. Litchinitser, E. Brasselet, R. Morita, and J. Wang, “Focus issue introduction: synergy of structured light and structured materials,” Opt. Express 25(14), 16681–16685 (2017). [CrossRef]   [PubMed]  

9. Z. Y. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000). [CrossRef]  

10. R. K. Kumar, A. llie, J. M. Frahm, and M. Pollefeys, “Simple calibration of non-overlapping cameras with a mirror,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2008).

11. P. Lébraly, C. Deymier, and O. Ait-Aider, “Flexible extrinsic calibration of non-overlapping cameras using a planar mirror: application to visionbased robotics,” in Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems (IEEE, 2010), pp. 5640–5647.

12. Z. Y. Xu, Y. Wang, and C. Yang, “Multi-camera global calibration for large-scale measurement based on plane mirror,” Optik 126(23), 4149–4154 (2015). [CrossRef]  

13. Z. Liu, G. Zhang, Z. Z. Wei, and J. Sun, “A global calibration method for multiple vision sensors based on multiple targets,” Meas. Sci. Technol. 22(12), 125102 (2011). [CrossRef]  

14. Z. Liu, G. Zhang, Z. Z. Wei, and J. Sun, “Novel calibration method for non-overlapping multiple vision sensors based on 1D target,” Opt. Lasers Eng. 49(4), 570–577 (2011).

15. Z. Liu, G. Zhang, and Z. Z. Wei, “Global calibration of multi-vision sensor based on one dimensional target,” Opt. Precis. Eng. 16(1), 2274–2280 (2008).

16. Z. Liu, F. J. Li, and G. J. Zhang, “An external parameter calibration method for multiple cameras based on laser rangefinder,” Measurement 47, 954–962 (2014). [CrossRef]  

17. Q. Z. Liu, J. H. Sun, Y. T. Zhao, and Z. Liu, “Calibration method for geometry relationships of non-overlapping cameras using light planes,” Opt. Eng. 52(7), 074108 (2013). [CrossRef]  

18. Q. Z. Liu, J. H. Sun, Z. Liu, and G. J. Zhang, “Global calibration method of multi-sensor vision system using skew laser lines,” Chin. J. Mech. Eng. 2(25), 405–411 (2012). [CrossRef]  

19. S. Dong, X. Shao, X. Kang, F. J. Yang, and X. Y. He, “Extrinsic calibration of a non-overlapping camera network based on close-range photogrammetry,” Appl. Opt. 55(23), 6363–6371 (2016). [CrossRef]   [PubMed]  

20. C. Liu, S. Dong, M. Mokhtar, X. Y. He, J. Y. Lu, and X. L. Wu, “Multicamera system extrinsic stability analysis and large-span truss string structure displacement measurement,” Appl. Opt. 55(29), 8153–8162 (2016). [CrossRef]   [PubMed]  

21. I. Kitahara, H. Saito, S. Akimichi, T. Onno, Y. Ohta, and T. Kanade, “Large-scale virtualized reality,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2001).

22. R. S. Lu and Y. F. Li, “A global calibration method for large-scale multisensor visual measurement systems,” Sens. Actuators A Phys. 116(3), 384–393 (2004). [CrossRef]  

23. T. Kuo, Z. Ni, S. Sunderrajan, and B. S. Manjunath, “Calibrating a widearea camera network with non-overlapping views using mobile devices,” ACM Trans. Sens. Netw. 2(10), 1–24 (2014). [CrossRef]  

24. R. Y. Tsai and R. K. Lenz, “A new technique for fully autonomous and efficient 3D robotics hand/eye calibration,” IEEE Trans. Rob. Autom. 5(3), 345–358 (1989). [CrossRef]  

25. F. C. Park and B. J. Martin, “Robot Sensor Calibration: Solving AX=XB on the Euclidean Group,” IEEE Trans. Rob. Autom. 10(5), 717–722 (1994). [CrossRef]  
