
On-site calibration of line-structured light vision sensor in complex light environments


Abstract

A novel calibration method for the line-structured light vision sensor is proposed in this paper; it requires only the image of the light stripe on a movable parallel cylinder target. The equations relating the two ellipses formed by the intersection of the light plane with the target to their projected images are established according to the perspective projection transformation, and the light plane equation is solved under the constraint that the minor axis of each ellipse is equal to the diameter of the cylinder. In the physical experiment, the field of view of the line-structured light vision sensor is about 500 mm × 400 mm, and the measurement distance is about 700 mm. A calibration accuracy of 0.07 mm is achieved using the proposed method, which is comparable to that obtained with planar targets.

© 2015 Optical Society of America

1. Introduction

Among the many vision measurement methods [1–5], structured-light vision measurement is one of the most widely applied in industrial environments because of its large measurement range, non-contact nature, rapidity, and high precision [6–8]. This technology is especially suitable for dynamic measurements. Depending on the form of the projected light, structured-light vision measurement methods can be divided into four categories, namely, the point-structured light method, the line-structured light method, the grating-structured light method, and the coded-structured light method. The point-structured light method enables the acquisition of 1D data but is incapable of 3D shape measurement. The coded-structured light method [9–13] usually combines a camera and a projector into a 3D optical sensor suited to static and dynamic measurements. However, the limited power of the projector makes it unsuitable for dynamic measurements in complex industrial environments, especially for the 3D profiles of fast-moving or high-temperature objects. By combining a high-power laser and a camera to construct a 3D vision sensor, the line-structured light method and the grating-structured light method can be applied to measure the 3D profile of objects in complex industrial environments. These methods have been applied to on-line measurements in the rail transport industry, such as the measurement of train wheels, steel rails, and pantographs, and in the steel industry, such as the measurement of the geometric dimensions of hot steel.

The line-structured light vision sensor and the grating-structured light vision sensor share similar calibration procedures, which consist of the calibration of the intrinsic parameters of the camera and of the light plane parameters. Many studies concerning the calibration of the intrinsic parameters of the camera have been published; for example, calibration methods that use 3D targets [14], 2D targets [15], 1D targets [16], and spherical targets [17,18] have been reported. We assume that the intrinsic parameters of the camera are known; hence, the present study focuses on the calibration of the light plane parameters, which has also been widely reported [19–23]. For the calibration of the light plane parameters, movable 3D targets [19], 2D targets [20,21], 1D targets [22], or a single spherical target [23] are usually used. Huynh et al. [19] utilize the principle of cross-ratio invariability to determine the calibration points on the light plane by using a 3D target. The main goal is to acquire at least three collinear points with accurate coordinates by using the target; the principle of cross-ratio invariability is then used to obtain the calibration points on the light plane with high precision. Zhou et al. [20] propose a method for the calibration of the light plane parameters based on a planar target. In this method, the calibration points on the light plane are acquired using cross-ratio invariability; the planar target is repeatedly moved to obtain calibration points on the light plane, and the light plane equation is then fitted to these points. Liu et al. [21] also report the use of planar targets but adopt Plücker equations to describe the light stripe line. Compared with the method in [20], where only a few characteristic points of the light stripe are used, the calibration precision is improved. Wei et al. [22] propose a method for calibrating the line-structured light vision system based on a 1D target. The 3D coordinates of the intersection between the light plane and the 1D target are solved using the distances between the characteristic points of the 1D target, and the light plane equation is solved by fitting the 3D coordinates of several such intersections. Liu et al. [23] propose a method based on a single ball target. The method extracts the image of the profile of the ball target to solve the ball target position in the camera coordinate frame, and combines it with the cone determined by the laser stripe to solve the light plane equation. This method has the advantage that the profile feature of the ball target is unaffected by the placement angle of the target, but it still requires extracting the image of the profile of the ball target.

According to the above analyses, the traditional on-site calibration methods for the line-structured light vision sensor require information about both the characteristic points and the light stripe on the target to calculate the light plane equation. In some complex light environments, such as strong sunlight or at night, clear images of the light stripe and of the characteristic points on the target are difficult to obtain simultaneously. Thus, auxiliary measures such as awnings or auxiliary lighting are needed to assist the vision sensor. All of these factors make calibrating the line-structured light vision sensor difficult in complex light environments. In addition, the line-structured light vision sensor is usually equipped with an optical filter to reduce the impact of complex light environments; however, the filter makes obtaining clear images of the characteristic points on the target impossible.

To solve the problem presented above, a novel calibration method for the line-structured light vision sensor that requires only the image of the light stripe on the target is proposed in this paper. The light plane of the laser projector intersects the parallel cylinder target to form two ellipses in space. Based on the perspective projection transformation, the equations relating the two ellipses in space with their projected images are established. The light plane equation is then solved under the constraint that the semi-minor axis of each ellipse is equal to the radius of the cylinder. The remainder of this paper is organized as follows: Section 2 gives a detailed introduction to the basic principle of the proposed algorithm; Sections 3 and 4 present the simulation and physical experiments, respectively; and Section 5 concludes the study.

2. Principle of the algorithm

The procedure for calibrating the light plane parameters of the line-structured light vision sensor is shown in Fig. 1. Suppose Ocxcyczc represents the coordinate system of the camera and Ouxuyu is the coordinate system of the image. π is the light plane, whose equation is written as ax + by + cz + d = 0 with a² + b² + c² = 1. The coordinate system of the line-structured light vision sensor is established upon Ocxcyczc. Q1 = diag(1/β1², 1/α1², −1) and Q2 = diag(1/β2², 1/α2², −1) are the matrices of the two ellipses obtained from the intersections of the light plane and the target in space. α1 and α2 are the semi-major axes of Q1 and Q2, respectively; β1 and β2 are the semi-minor axes of Q1 and Q2, respectively. C1 and C2 are the images of Q1 and Q2, respectively.

Fig. 1 Calibration process of structured light vision sensor.

As shown in Fig. 1, the two ellipses Q1 and Q2 are obtained from the intersection of the light plane and the target. Through ellipse fitting of the two light stripes in the image, the images of Q1 and Q2 are obtained as C1 and C2, respectively. The coordinate system Ow1xw1yw1zw1 is established with its y axis along the major axis of Q1, its x axis along the minor axis of Q1, and its origin Ow1 at the center of the ellipse. The coordinate system Ow2xw2yw2zw2 is then established for Q2 in the same way. T1 = [R1, t1; 0, 1] is the transformation matrix from Ow1xw1yw1zw1 to Ocxcyczc, where R1 and t1 are the corresponding rotation matrix and translation vector, respectively.

The two cylinders of the target have the same radius and are parallel to each other. Q1 and Q2 are obtained from the intersection of the light plane with the two cylinders of the target, so the coordinate frames of Q1 and Q2 are obviously parallel to each other. According to [24], the semi-major axes and the focal distances of Q1 and Q2 are calculated as follows:

$$\begin{cases}\alpha_1=\alpha_2=r\sqrt{1+\dfrac{\tilde{a}^2+\tilde{b}^2}{\tilde{c}^2}}\\[2mm] f_1=f_2=r\sqrt{\dfrac{\tilde{a}^2+\tilde{b}^2}{\tilde{c}^2}}\end{cases}\tag{1}$$
where f1 and f2 are half the distances between the two foci of Q1 and Q2, respectively; $\tilde{a}$, $\tilde{b}$, and $\tilde{c}$ are the parameters of the light plane; and r is the radius of the cylinder.

We have:

$$\begin{cases}\beta_1^2=\alpha_1^2-f_1^2\\ \beta_2^2=\alpha_2^2-f_2^2\end{cases}\tag{2}$$

Based on Eq. (1) and Eq. (2), we have:

$$\beta_1=\beta_2=r\tag{3}$$
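Substituting Eq. (1) into Eq. (2) makes this explicit:

$$\beta_j^2=\alpha_j^2-f_j^2=r^2\left(1+\frac{\tilde{a}^2+\tilde{b}^2}{\tilde{c}^2}\right)-r^2\,\frac{\tilde{a}^2+\tilde{b}^2}{\tilde{c}^2}=r^2,\qquad j=1,2,$$

so the semi-minor axis of each ellipse equals the cylinder radius regardless of the orientation of the light plane.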

According to the above analyses, the following conclusions are obtained:

Conclusion 1: The two ellipses Q1 and Q2 have exactly the same size. The minor axes of the two ellipses Q1 and Q2 are both equal to the diameter of the cylinder.

Conclusion 2: The major axes of the two ellipses Q1 and Q2 are parallel to each other, as are their minor axes.

The above conclusions are based on the assumptions that the two cylinders are parallel to each other and have the same diameter. Hence, the diameter error and the parallelism error between the two cylinders will influence the calibration accuracy. Since a cylinder is easy to machine, the machining accuracy of the proposed target is higher than that of more complex targets under the same conditions, so its machining error affects the calibration accuracy less than that of other, more complex targets.

The specific procedures of the proposed method are as follows:

Step 1: The target is placed at a proper position at least once. The light stripe on the target is captured by the line-structured light vision sensor, and the central points of the two light stripes in the image are extracted. After distortion correction, C1 and C2 are obtained by ellipse fitting.

Step 2: The equations relating C1, C2 to Q1, Q2 are established using the perspective projection model. T1 is then obtained using the orthogonality of the rotation matrix.

Step 3: The linear solution of the light plane equation is obtained from T1, and it is then refined via a non-linear optimization method.

2.1 Solving C1 and C2

According to Steger's method [25], the centers of the two light stripes in the image are extracted. Combined with ellipse fitting, C1 and C2 are solved, as shown in Fig. 2.
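As an illustration, a minimal algebraic least-squares conic fit that produces the 3 × 3 matrices used below could look as follows (a Python/NumPy sketch; the function name and the plain unconstrained fit are our own assumptions, since the authors do not specify the fitting algorithm):

import numpy as np

def fit_conic(points):
    # Fit A*u^2 + B*u*v + C*v^2 + D*u + E*v + F = 0 to undistorted stripe
    # centres (an N x 2 array of (u, v)) and return the symmetric 3 x 3
    # conic matrix satisfying [u v 1] C_j [u v 1]^T = 0.
    u, v = points[:, 0], points[:, 1]
    design = np.column_stack([u**2, u*v, v**2, u, v, np.ones_like(u)])
    _, _, vt = np.linalg.svd(design)           # smallest right singular vector
    A, B, C, D, E, F = vt[-1]
    conic = np.array([[A,     B / 2, D / 2],
                      [B / 2, C,     E / 2],
                      [D / 2, E / 2, F    ]])
    return conic / np.linalg.norm(conic)       # remove the arbitrary scale

# e.g. C1, C2 = fit_conic(stripe1_points), fit_conic(stripe2_points)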

Fig. 2 Result of processing the light stripe in the image. (a) Image of target; (b) Extraction of the center of the light stripe in the image; (c) C1 and C2 obtained by ellipse fitting.

2.2 Solving T1

C1 and C2 are 3 × 3 matrices whose expressions are written as Eq. (4).

$$\begin{bmatrix}u_j & v_j & 1\end{bmatrix}C_j\begin{bmatrix}u_j\\ v_j\\ 1\end{bmatrix}=0,\qquad j=1,2\tag{4}$$
where [uj, vj, 1]^T is the undistorted homogeneous image coordinate of a point on the j-th ellipse under Ouxuyu.

Q1 and Q2 are 3 × 3 matrices whose expressions are written as Eq. (5).

$$\begin{bmatrix}x_j & y_j & 1\end{bmatrix}Q_j\begin{bmatrix}x_j\\ y_j\\ 1\end{bmatrix}=0\tag{5}$$
where [xj, yj, 1]^T is the coordinate of a point on the j-th ellipse under Owjxwjywj. According to the camera model,
$$\rho_j\begin{bmatrix}u_j\\ v_j\\ 1\end{bmatrix}=\begin{bmatrix}a_x & \gamma & u_0\\ 0 & a_y & v_0\\ 0 & 0 & 1\end{bmatrix}\begin{bmatrix}r_1 & r_2 & t_j\end{bmatrix}\begin{bmatrix}x_j\\ y_j\\ 1\end{bmatrix}=KM_j\begin{bmatrix}x_j\\ y_j\\ 1\end{bmatrix}\tag{6}$$
where ρj represents a non-zero scale factor; K denotes the intrinsic parameter matrix of the camera; u0 and v0 are the coordinates of the principal point; ax and ay are the scale factors along the image axes u and v; and γ is the skew of the two image axes. As known from Conclusion 2, the coordinate axes of Ow1xw1yw1zw1 are parallel to those of Ow2xw2yw2zw2, so R1 = R2 = [r1 r2 r3]. R2 and t2 are the rotation matrix and the translation vector from Ow2xw2yw2zw2 to Ocxcyczc, respectively.
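For reference, the intrinsic matrix K of Eq. (6) and the matrix Wj = K^T Cj K that appears in Eq. (9) below can be assembled directly (a small sketch with our own function names):

import numpy as np

def intrinsic_matrix(ax, ay, gamma, u0, v0):
    # Intrinsic parameter matrix K of Eq. (6).
    return np.array([[ax,  gamma, u0],
                     [0.0, ay,    v0],
                     [0.0, 0.0,  1.0]])

def compute_W(K, C):
    # W_j = K^T C_j K, the quantity used throughout Eqs. (9)-(12).
    return K.T @ C @ K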

Substituting Eq. (6) into Eq. (4),

$$\begin{bmatrix}x_j & y_j & 1\end{bmatrix}M_j^T K^T C_j K M_j\begin{bmatrix}x_j\\ y_j\\ 1\end{bmatrix}=0\tag{7}$$

Combining Eqs. (5) and (7),

$$\rho_j Q_j=M_j^T K^T C_j K M_j\tag{8}$$
where ρj is the non-zero scale factor.

By expanding Eq. (8), the equation relating C1, C2 to Q1, Q2 is obtained as Eq. (9):

$$\rho_j\begin{bmatrix}1/\beta^2 & 0 & 0\\ 0 & 1/\alpha^2 & 0\\ 0 & 0 & -1\end{bmatrix}=\begin{bmatrix}r_1^T W_j r_1 & r_1^T W_j r_2 & r_1^T W_j t_j\\ r_2^T W_j r_1 & r_2^T W_j r_2 & r_2^T W_j t_j\\ t_j^T W_j r_1 & t_j^T W_j r_2 & t_j^T W_j t_j\end{bmatrix}\tag{9}$$
where Wj = K^T Cj K, α = α1 = α2, and β = β1 = β2.

If a target with a single cylinder is used, the three rotation angles of R1 contained in Eq. (9) are unknown quantities, t1 contains three unknown quantities, α and the non-zero scale factor ρ1 are unknown quantities, and β is a known quantity. Thus, there are a total of eight unknown quantities, whereas Eq. (9) provides only six constraint equations, so calibrating the light plane parameters with a single-cylinder target is impossible. When a target with two parallel cylinders is used, the three rotation angles of R1 = R2 are unknown, t1 and t2 contain six unknown quantities, α, ρ1, and ρ2 are also unknown, and β is known. Thus, there are twelve unknown quantities, and since Eq. (9) then provides twelve equations, the system becomes solvable.

We can decompose Eq. (9) into twelve equations, as follows:

$$\begin{aligned}&r_1^T W_1 r_1=\rho_1/\beta^2;\quad r_1^T W_2 r_1=\rho_2/\beta^2;\quad r_2^T W_1 r_2=\rho_1/\alpha^2;\quad r_2^T W_2 r_2=\rho_2/\alpha^2;\\ &r_1^T W_1 r_2=0;\quad r_1^T W_2 r_2=0;\quad r_1^T W_1 t_1=0;\quad r_1^T W_2 t_2=0;\\ &r_2^T W_1 t_1=0;\quad r_2^T W_2 t_2=0;\quad t_1^T W_1 t_1=-\rho_1;\quad t_2^T W_2 t_2=-\rho_2\end{aligned}\tag{10}$$

Combining the first six equations of Eq. (10) with the orthogonality of r1 and r2 yields Eq. (11):

$$\begin{aligned}&\beta^2\, r_1^T W_1 r_1=\alpha^2\, r_2^T W_1 r_2;\quad \beta^2\, r_1^T W_2 r_1=\alpha^2\, r_2^T W_2 r_2;\quad r_1^T W_1 r_2=0;\quad r_1^T W_2 r_2=0;\\ &r_1^T r_1=1;\quad r_2^T r_2=1;\quad r_1^T r_2=0\end{aligned}\tag{11}$$
where r1 and r2 contain six variables, giving a total of seven variables when α is included. Since Eq. (11) contains seven equations, r1, r2, and α can be solved. By substituting r1, r2, and α into the first and second equations of Eq. (10), respectively, ρ1 and ρ2 can be solved.

Grouping the last six equations of Eq. (10) as Eq. (12), t1 and t2 can be solved.

$$r_1^T W_1 t_1=0;\quad r_1^T W_2 t_2=0;\quad r_2^T W_1 t_1=0;\quad r_2^T W_2 t_2=0;\quad t_1^T W_1 t_1=-\rho_1;\quad t_2^T W_2 t_2=-\rho_2\tag{12}$$
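For prototyping, Eqs. (11) and (12) can be handed to a generic least-squares solver. The sketch below (Python/SciPy) takes W1 = K^T C1 K, W2 = K^T C2 K and the known semi-minor axis β = r; the initial guesses, the SciPy routine, and the function names are our own assumptions rather than the authors' implementation:

import numpy as np
from scipy.optimize import least_squares

def solve_rotation_and_translation(W1, W2, beta):
    # Eq. (11): seven equations in the seven unknowns (r1, r2, alpha).
    def eq11(x):
        r1, r2, alpha = x[0:3], x[3:6], x[6]
        return [beta**2 * (r1 @ W1 @ r1) - alpha**2 * (r2 @ W1 @ r2),
                beta**2 * (r1 @ W2 @ r1) - alpha**2 * (r2 @ W2 @ r2),
                r1 @ W1 @ r2,
                r1 @ W2 @ r2,
                r1 @ r1 - 1.0,
                r2 @ r2 - 1.0,
                r1 @ r2]

    x0 = np.array([1.0, 0.0, 0.0, 0.0, 1.0, 0.0, beta])  # crude initial guess
    sol = least_squares(eq11, x0).x
    r1, r2, alpha = sol[0:3], sol[3:6], sol[6]

    # Scale factors from the first two equations of Eq. (10).
    rho1 = beta**2 * (r1 @ W1 @ r1)
    rho2 = beta**2 * (r1 @ W2 @ r1)

    # Eq. (12): three equations per translation vector.
    def eq12(t, Wj, rhoj):
        return [r1 @ Wj @ t, r2 @ Wj @ t, t @ Wj @ t + rhoj]

    t1 = least_squares(lambda t: eq12(t, W1, rho1), np.ones(3)).x
    t2 = least_squares(lambda t: eq12(t, W2, rho2), np.ones(3)).x
    return r1, r2, alpha, t1, t2

Because the fitted conics C1 and C2 are defined only up to scale and sign, such a solver can return mirrored solutions; in practice the root that places the target in front of the camera (positive depth) would be kept.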

2.3 Solving the light plane equation

Given that Ow1xw1yw1zw1 is established on the light plane, the coefficients [a, b, c, d] of the light plane equation under Ocxcyczc are obtained as follows:

$$\begin{bmatrix}a\\ b\\ c\\ d\end{bmatrix}=\begin{bmatrix}R_1 & t_1\\ 0 & 1\end{bmatrix}^{-T}\begin{bmatrix}0\\ 0\\ 1\\ 0\end{bmatrix}\tag{13}$$
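In code, Eq. (13) is a single homogeneous transform of the plane zw1 = 0 (a minimal sketch; the normalization to a² + b² + c² = 1 follows the convention stated at the beginning of Section 2):

import numpy as np

def light_plane_from_pose(R1, t1):
    # Eq. (13): map the plane z_w1 = 0 of O_w1 x_w1 y_w1 z_w1 into the camera
    # frame; returns [a, b, c, d] with a*x + b*y + c*z + d = 0.
    T1 = np.eye(4)
    T1[:3, :3] = R1
    T1[:3, 3] = t1
    plane = np.linalg.inv(T1).T @ np.array([0.0, 0.0, 1.0, 0.0])
    return plane / np.linalg.norm(plane[:3])  # enforce a^2 + b^2 + c^2 = 1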

Let p̃ be the undistorted homogeneous image coordinate of a point P under Ouxuyu. If [a, b, c, d] is known, the homogeneous coordinate qc = [xc, yc, zc, 1]^T of P under Ocxcyczc can be solved using Eq. (14):

$$\begin{cases}\rho\,\tilde{p}=K\,[\,I_{3\times 3}\;\;0_{3\times 1}\,]\,q_c\\ a x_c+b y_c+c z_c+d=0\end{cases}\tag{14}$$
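Equation (14) amounts to intersecting the viewing ray of an undistorted pixel with the calibrated plane; a minimal sketch:

import numpy as np

def back_project(K, plane, u, v):
    # Eq. (14): 3D point under O_c x_c y_c z_c whose image is the undistorted
    # pixel (u, v) and which lies on the light plane [a, b, c, d].
    n, d = plane[:3], plane[3]
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray direction K^{-1} p~
    rho = -d / (n @ ray)                            # scale placing the point on the plane
    return rho * ray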

2.4 Non-linear optimization

To improve the calibration accuracy, the parallel cylinder target is placed at different positions, and the light plane parameters are optimized using the maximum likelihood criterion. The centers of the light stripes in the image of the target at the i-th position are extracted and undistorted. Suppose that the m-th undistorted homogeneous image coordinates on ellipses 1 and 2 are p̃1i(m) and p̃2i(m), respectively. The corresponding coordinates q1i(m) = [x1i(m), y1i(m), z1i(m), 1]^T and q2i(m) = [x2i(m), y2i(m), z2i(m), 1]^T of p̃1i(m) and p̃2i(m) under Ow1xw1yw1zw1 are solved by Eq. (15):

$$\begin{cases}\rho\,\tilde{p}_{ji}(m)=K\,[\,R_1\;\;t_1\,]\,q_{ji}(m)\\ z_{ji}(m)=0\end{cases}\tag{15}$$
where ρ is the non-zero scale factor.

The 3D coordinates under Ow1xw1yw1zw1 of the light stripe centers in the image at each position of the target are solved using Eq. (15). The z coordinates of these points are zero, indicating that the points lie on the light plane. Q1i and Q2i can then be obtained via ellipse fitting. From Q1i and Q2i, the semi-major axes α1i and α2i, the semi-minor axes β1i and β2i, and the angles φ1i and φ2i between each major axis and the x axis can be calculated. The objective function is established as follows:

$$f(\varepsilon)=\min\sum_{i=1}^{n}\Big(\big|\beta_{1i}+\beta_{2i}-2\beta\big|+\big|\beta_{1i}-\beta_{2i}\big|+\big|\alpha_{1i}-\alpha_{2i}\big|+\big|\varphi_{1i}-\varphi_{2i}\big|\Big)\tag{16}$$
where ε = (R1, t1), β is the radius of the cylinder, and n is the number of target placements. The optimal solution of ε under the maximum likelihood criterion can be obtained via a non-linear optimization method (e.g., the Levenberg-Marquardt algorithm [26]). [a, b, c, d] can then be solved using Eq. (13).
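A sketch of this refinement is given below. The stripe points are mapped onto the plane z = 0 through Eq. (15) (which, for points on the light plane, reduces to the homography K[r1 r2 t1]), re-fitted as ellipses, and the cost of Eq. (16) is minimized over (R1, t1). The rotation-vector parameterization, OpenCV's ellipse fit, and a derivative-free optimizer (used here instead of the Levenberg-Marquardt algorithm named in the paper, because the absolute values make the cost non-smooth) are our own choices:

import numpy as np
import cv2
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def ellipse_params(pts_2d):
    # Fit an ellipse and return (semi-major axis, semi-minor axis, major-axis angle in rad).
    (_, _), (d1, d2), ang = cv2.fitEllipse(pts_2d.astype(np.float32))
    if d1 < d2:                       # make the first axis the major one
        d1, d2, ang = d2, d1, ang + 90.0
    return d1 / 2.0, d2 / 2.0, np.deg2rad(ang)

def refine_pose(K, stripes, beta, rvec0, t0):
    # stripes: list of (pts_ellipse1, pts_ellipse2) undistorted pixel arrays,
    # one pair per target position; rvec0, t0: closed-form result of Sec. 2.2.
    def cost(x):
        R1 = Rotation.from_rotvec(x[:3]).as_matrix()
        Hinv = np.linalg.inv(K @ np.column_stack([R1[:, 0], R1[:, 1], x[3:]]))
        total = 0.0
        for pts1, pts2 in stripes:
            planar = []
            for pts in (pts1, pts2):  # Eq. (15): back-project onto z = 0
                q = Hinv @ np.column_stack([pts, np.ones(len(pts))]).T
                planar.append((q[:2] / q[2]).T)
            (a1, b1, p1), (a2, b2, p2) = (ellipse_params(p) for p in planar)
            total += (abs(b1 + b2 - 2 * beta) + abs(b1 - b2)
                      + abs(a1 - a2) + abs(p1 - p2))       # Eq. (16)
        return total

    x = minimize(cost, np.concatenate([rvec0, t0]), method="Nelder-Mead").x
    return Rotation.from_rotvec(x[:3]).as_matrix(), x[3:]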

3. Simulation experiment

The proposed method is verified by a simulation experiment. In general, image noise and the dimensions of the target have a strong impact on the calibration, so the simulation experiment examines the effect of these two factors on the calibration accuracy. The conditions of the simulation experiment are as follows: camera resolution 1380 pixels × 1080 pixels, focal length 17 mm, field of view 400 mm × 300 mm. The light plane equation is 0.774x − 0.126y + 0.621z − 276.398 = 0. The calibration accuracy is evaluated by the relative error of [a, b, c, d].

3.1 Impact of the image noise on calibration accuracy

In the experiment, the diameter of the target is 60 mm. The target is placed at five different positions. Gaussian noise with zero mean and a standard deviation of 0.1 to 1 pixel with an interval of 0.1 pixels is added to the characteristic points. For each noise level, 100 experiments are carried out, and the relative errors of light plane parameters are computed. The relative errors of the calibration results at different noise levels are shown in Fig. 3(a).
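Such a noise study can be reproduced with a simple Monte-Carlo loop of the following form; here project_target() stands for a synthetic stripe generator and calibrate() for the calibration routine sketched in Section 2, both hypothetical names used only for illustration:

import numpy as np

def relative_error(est, true):
    # Relative error of the light-plane vector [a, b, c, d].
    est, true = np.asarray(est, float), np.asarray(true, float)
    return np.linalg.norm(est - true) / np.linalg.norm(true)

def noise_study(project_target, calibrate, plane_true, sigmas, trials=100):
    # For each noise level, perturb the ideal stripe points with zero-mean
    # Gaussian noise and average the relative error over `trials` runs.
    results = {}
    for sigma in sigmas:
        errs = [relative_error(
                    calibrate([pts + np.random.normal(0.0, sigma, pts.shape)
                               for pts in project_target()]),
                    plane_true)
                for _ in range(trials)]
        results[sigma] = np.mean(errs)
    return results

# e.g. noise_study(project_target, calibrate, plane_true, sigmas=np.arange(0.1, 1.05, 0.1))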

Fig. 3 Result of the simulation experiment. (a) Relative errors of the calibration results at different noise levels; (b) Relative errors of the calibration results at different diameter levels.

As shown in Fig. 3(a), the calibration accuracy improves as the noise decreases; thus, the calibration accuracy can be improved by increasing the image processing accuracy. The accuracy of light stripe center extraction in the image usually reaches 0.1 pixel. Based on the simulation results, the relative error of the light plane parameters calibrated via the proposed method can reach 0.05% at this noise level.

3.2 Impact of the diameter of the cylinder on calibration accuracy

In this experiment, Gaussian noise with σ = 0.1 pixel is added to the characteristic points. The diameter of the cylinder is varied from 30 mm to 84 mm with an interval of 6 mm. For each diameter level, 100 experiments are carried out and the RMS error is computed. The relative errors of the calibration results at different diameter levels are shown in Fig. 3(b). As shown in Fig. 3(b), the calibration accuracy improves as the diameter of the cylinder increases. When the diameter of the cylinder is larger than 50 mm, the improvement in calibration precision diminishes with increasing diameter. Therefore, the diameter of the cylinder need not be increased indefinitely to improve the calibration accuracy. Good calibration accuracy is obtained when the ratio of the field range to the diameter of the cylinder is about 8 (400 mm/50 mm).

4. Physical experiment

The line-structured light vision sensor used in the physical experiment consists of one camera and one line laser projector, as shown in Fig. 4. An Allied Vision Technologies camera equipped with a 17 mm Schneider lens is used, with an image resolution of 1360 pixels × 1024 pixels, a field range of about 500 mm × 400 mm, and a measuring distance of about 700 mm. A single-line red laser projector with a power of 10 mW is used. The diameter of the cylinders in the target is 60 mm, and the machining accuracy of the target is 0.02 mm.

Fig. 4 Structured light vision sensor and target in the physical experiment.

The physical experiment consists of the following steps. First, the performance of different targets is evaluated in complex light environments. Second, the intrinsic parameters of the camera are calibrated via the method in [15]. The light plane parameters are then calibrated using both the calibration method in [21] and the proposed method, and their calibration accuracies are compared using a planar target. Finally, the validity is tested by applying the proposed method to measure a standard steel rail, a wheel, and a plaster cast.

4.1 Performance of different targets in complex light environments

In this section, the advantages and disadvantages of the normal planar target, the LED planar target, the spherical target, and the parallel cylinder target are evaluated under complex light conditions, namely dim light, strong sunlight, a high-powered laser projector, and an optical filter. As shown in Figs. 5 to 10, the green line, the red line, and the yellow points denote the extracted light stripe, the extracted outline of the spherical target, and the extracted characteristic points, respectively.

Fig. 5 Images of three targets captured by the vision sensor in the good light environment.

Fig. 6 Images of three targets captured by the vision sensor in the dim light environment.

Fig. 7 Images of three targets captured by the vision sensor in the strong sunlight environment.

Fig. 8 Images of three targets captured by the vision sensor with a high-powered laser projector. (a) Images of three targets when the image of the target characteristic points is clear; (b) Images of three targets when the image of the light stripe is clear.

Fig. 9 Images of three targets captured by the vision sensor with an optical filter.

Fig. 10 Images of the parallel cylinder target captured by the vision sensor in complex light environments. (a) Image captured by the vision sensor in the dim light environment; (b) Image captured by the vision sensor in the strong sunlight environment; (c) Image captured by the vision sensor with a high-powered laser projector; (d) Image captured by the vision sensor with an optical filter.

Target images obtained in the good light environment when the normal planar target, the LED planar target, and the spherical target are used are shown in Fig. 5. As shown in Fig. 5, all of the characteristic points, the light stripes, and the outline of the spherical target can be extracted clearly.

Target images obtained in the dim light environment when the normal planar target, the LED planar target, and the spherical target are used are shown in Fig. 6. Despite increasing the exposure time of the camera, clear images of the characteristic points of the normal planar target and of the outline of the spherical target cannot be obtained. Clear characteristic points of the LED planar target can be obtained because its characteristic points are LEDs. Consequently, the LED planar target has certain advantages in the dim light environment.

Target images obtained in the strong sunlight environment are shown in Fig. 7. The characteristic points of the normal planar target and the LED planar target have image intensities similar to that of the target background, so the extraction accuracy of the image features is reduced. In the strong sunlight environment, it is also difficult to obtain a clear outline image of the spherical target and a clear light stripe image. If the laser power of the vision sensor is low, a clear light stripe image cannot be obtained with any of the above three targets because the exposure time of the camera must be kept short under strong sunlight.

Images of the three targets obtained by the vision sensor with a high-powered laser projector are shown in Fig. 8. To obtain a clear image of the target characteristic points, the exposure time of the camera is increased, resulting in poor image quality of the light stripes, as shown in Fig. 8(a). To obtain a clear image of the light stripes, the exposure time of the camera is reduced, resulting in poor image quality of the target characteristic points and of the outline of the spherical target, as shown in Fig. 8(b). Obtaining clear images of the target characteristic points and of the light stripe simultaneously is therefore very difficult for a vision sensor with a high-powered laser projector. To work around this problem and obtain the characteristic points of the target, we fixed the target and turned off the laser projector, and then turned the laser on to obtain the light stripe images. However, this procedure is not applicable to the on-site calibration of the vision sensor in complicated field environments.

In addition, to reduce the impact of complex light, the line-structured light vision sensor is usually equipped with an optical filter. As shown in Fig. 9, a camera with an optical filter cannot capture the characteristic points of the target or the outline image of the spherical target for any of the above three targets.

As shown in Figs. 5 to 9, the normal planar target and the spherical target are only suitable for calibrating the line-structured light vision sensor in the good light environment; they perform poorly in complex light environments. The LED planar target has an advantage over the normal planar target and the spherical target in that it remains usable in the dim light environment. However, all of these targets perform poorly in the strong sunlight environment or when the vision sensor uses a high-powered laser projector or an optical filter. Moreover, the LED planar target is costly and difficult to machine.

Furthermore, the light stripe on both the normal planar target and the LED planar target easily intersects the target characteristic points, causing the characteristic point extraction to fail, as shown in Fig. 6 and Fig. 8. Consequently, special attention must be paid to avoiding the intersection of the light stripe and the characteristic points during calibration. The spherical target has no such intersection problem; however, extracting its outline in complex light environments is difficult. All of these factors make the on-site calibration of the vision sensor difficult.

As shown in Fig. 10, clear images can be obtained with the parallel cylinder target under complex light conditions such as dim light, strong sunlight, a high-powered laser projector, and an optical filter. The proposed method needs only the image of the light stripe on the target to calibrate the line-structured light vision sensor, and the above experiments show that it has better adaptability than the current methods in complex light environments.

According to [21–23], the calibration method using the planar target achieves better calibration accuracy than those using the 1D target and the spherical target. Therefore, the calibration accuracy of the proposed method is evaluated by comparing it with the calibration method in [21].

4.2 Calibration of intrinsic parameters of camera

The intrinsic parameters of the camera are calibrated via the software in [27]. During calibration, the planar target is placed in front of the sensor 10 times. The machining accuracy of the target is 5 μm. All images used for the calibration are shown in Fig. 11.

Fig. 11 Images used for the calibration of intrinsic parameters of camera.

The calibration results of the intrinsic parameters of the camera are as follows:

Intrinsic parameters of camera: ax = 2733.80; ay = 2733.63; γ = 0; u0 = 684.23; v0 = 524.69; k1 = −0.23; k2 = 0.31.

The uncertainties of the intrinsic parameters: uax = 1.05; uay = 1.11; uu0 = 1.08; uv0 = 1.09.

4.3 Results of calibration of the light plane parameters

The light plane parameters are first calibrated via the method in [21]. The LED planar target, whose machining accuracy is 0.02 mm, is placed at five different positions in front of the sensor. The images used in this calibration are shown in Fig. 12(a), and the calibration result is 0.7811x − 0.2185y + 0.5812z − 424.2294 = 0. The light plane parameters are then calibrated via the proposed method with the parallel cylinder target placed at five different positions. Figure 12(b) shows the images used in this calibration, and the calibration result is 0.7822x − 0.2195y + 0.5816z − 424.5595 = 0.

Fig. 12 Images used in the calibration via the two methods. (a) Five images used for calibration with the LED planar target; (b) Five images used for calibration with the proposed algorithm.

4.4 Analysis of experimental results

The LED planar target is placed twice. At each position, the 3D coordinates of the intersection points between the light stripe and the horizontal grid lines of the LED planar target are calculated (each intersection point is called a testing point). The distance between any two testing points is taken as the measured distance dm. Following the principle of cross-ratio invariability, the local coordinates of the testing points in the coordinate frame of the LED planar target are calculated, and the distance between any two testing points in this frame is taken as the ideal distance dt. A total of 12 testing points are obtained, as shown in Table 1.

Table 1. 3D coordinates of the testing points obtained using different calibration results

The distances between the first testing point and the remaining five testing points are calculated at each position via the two methods. The RMS errors of the distances between the testing points are obtained from the deviations Δd between dm and dt, and the results for the two methods are shown in Table 2. As shown in Table 2, the RMS error is about 0.05 mm when the LED planar target is used and 0.07 mm when the proposed method is used. Thus, the calibration accuracy of the proposed algorithm is comparable to that obtained with an LED planar target.

Table 2. Assessment result of calibration precision via the two methods

4.5 Applications

The line-structured light vision sensor is applied to the measurement of a standard rail, a wheel, and a plaster cast. The light stripe images are extracted via Steger's method [25], and the 3D profiles are then reconstructed using the two calibration results obtained above.

As shown in Fig. 13(a), the rail is a 60 kg/m rail. The rail wear measured by a high-precision three-coordinate measuring machine (vertical wear 2.04 mm, horizontal wear 2.14 mm) is taken as the standard rail wear value. The measurement sites of the wheel and the plaster cast are shown in Figs. 13(b) and 13(c), respectively.

Fig. 13 Measurement sites of three applications. (a) Standard rail; (b) Wheel; (c) Plaster cast.

The corresponding reconstructed 3D profiles are shown in Fig. 14. The red color denotes the 3D profile obtained using the calibration result with the LED planar target, and the blue color denotes the 3D profile obtained via the proposed method. As shown in Fig. 14, the red line in each sub-image closely overlaps the blue line, which means that the 3D profiles reconstructed from the two calibration results are similar in all three applications. These results further verify that the proposed method achieves a calibration accuracy comparable to that obtained with an LED planar target.

Fig. 14 Reconstructed 3D profiles using the two calibration results. (a) The reconstructed 3D profile of the standard rail; (b) The reconstructed 3D profile of the wheel; (c) The reconstructed 3D profile of the plaster cast.

According to the standard value of rail wear and the corresponding reconstructed 3D profile of the standard rail, the RMS errors of the rail wear obtained with the two calibration methods are calculated. The RMS errors of the vertical wear and horizontal wear using the proposed method are 0.17 mm and 0.14 mm, respectively, while those obtained when an LED planar target is used are 0.14 mm and 0.13 mm, respectively. The results show that the proposed method can meet the high calibration accuracy requirements of a line-structured light vision sensor in many industrial applications.

5. Conclusion

A novel calibration method for the line-structured light vision sensor using a parallel cylinder target is proposed in this paper. The advantages of the proposed method are as follows:

The essential difference between the proposed method and the existing on-site methods, including the method based on a single ball target, is that the proposed method does not need any auxiliary information except the image of the light stripe on the target, which can be moved freely. Thus, the proposed method is suitable for on-site calibration in complex light environments, even when an optical filter is mounted on the line-structured light vision sensor. Moreover, the parallel cylinder target is easily machined with high mechanical accuracy. A physical experiment is carried out with a field range of about 500 mm × 400 mm and a measuring distance of about 700 mm. Under these conditions, the proposed method achieves a calibration accuracy of 0.07 mm, which is comparable to that of algorithms using planar targets.

Acknowledgments

The authors acknowledge the support from the National Natural Science Foundation of China (NSFC) under Grant Nos. 51175027 and 51575033 and from the Beijing Natural Science Foundation under Grant No. 3132029.

References and links

1. S. Shirmohammadi and A. Ferrero, "Camera as the instrument: the rising trend of vision based measurement," IEEE Trans. Instrum. Meas. 17(3), 41–47 (2014).

2. Z. Ren, J. Liao, and L. Cai, "Three-dimensional measurement of small mechanical parts under a complicated background based on stereo vision," Appl. Opt. 49(10), 1789–1801 (2010).

3. W. Li and Y. F. Li, "Single-camera panoramic stereo imaging system with a fisheye lens and a convex mirror," Opt. Express 19(7), 5855–5867 (2011).

4. L. Lu, J. Xi, Y. Yu, and Q. Guo, "New approach to improve the accuracy of 3-D shape measurement of moving object using phase shifting profilometry," Opt. Express 21(25), 30610–30622 (2013).

5. E. N. Malamas, E. G. M. Petrakis, M. Zervakis, L. Petit, and J. D. Legat, "A survey on industrial vision systems, applications and tools," Image Vis. Comput. 21(2), 171–188 (2003).

6. R. S. Lu, Y. F. Li, and Q. Yu, "On-line measurement of straightness of seamless steel pipe using machine vision technique," Sens. Actuators A Phys. 94(1-2), 95–101 (2001).

7. A. Okamoto, Y. Wasa, and Y. Kagawa, "Development of shape measurement system for hot large forgings," Kobe Steel Eng. Rep. 57(3), 29–33 (2007).

8. Z. Liu, F. Li, B. Huang, and G. Zhang, "Real-time and accurate rail wear measurement method and experimental analysis," J. Opt. Soc. Am. A 31(8), 1721–1729 (2014).

9. X. Zhang, Y. Li, and L. Zhu, "Color code identification in coded structured light," Appl. Opt. 51(22), 5340–5356 (2012).

10. Y. Chen and Y. F. Li, "Self-recalibration of a colour-encoded light system for automated three-dimensional measurements," Meas. Sci. Technol. 14(1), 33–40 (2003).

11. P. Griffin, L. Narasimhan, and S. Yee, "Generation of uniquely encoded light patterns for range data acquisition," Pattern Recognit. 25(6), 609–616 (1992).

12. A. K. C. Wong, P. Niu, and X. He, "Fast acquisition of dense depth data by a new structured light scheme," Comput. Vis. Image Underst. 98(3), 398–422 (2005).

13. T. P. Koninckx and L. Van Gool, "Real-time range acquisition by adaptive structured light," IEEE Trans. Pattern Anal. Mach. Intell. 28(3), 432–445 (2006).

14. R. Y. Tsai, "A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses," IEEE J. Robot. Autom. 3(4), 323–344 (1987).

15. Z. Y. Zhang, "A flexible new technique for camera calibration," IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000).

16. Z. Zhang, "Camera calibration with one-dimensional objects," IEEE Trans. Pattern Anal. Mach. Intell. 26(7), 892–899 (2004).

17. H. Zhang, K. Y. Wong, and G. Zhang, "Camera calibration from images of spheres," IEEE Trans. Pattern Anal. Mach. Intell. 29(3), 499–502 (2007).

18. K. Y. Wong, G. Zhang, and Z. Chen, "A stratified approach for camera calibration using spheres," IEEE Trans. Image Process. 20(2), 305–316 (2011).

19. D. Q. Huynh, R. A. Owens, and P. E. Hartmann, "Calibrating a structured light stripe system: a novel approach," Int. J. Comput. Vis. 33(1), 73–86 (1999).

20. F. Q. Zhou and G. J. Zhang, "Complete calibration of a structured light stripe vision sensor through planar target of unknown orientations," Image Vis. Comput. 23(1), 59–67 (2005).

21. G. J. Zhang, Z. Liu, J. H. Sun, and Z. Z. Wei, "Novel calibration method for multi-sensor visual measurement system based on structured light," Opt. Eng. 49(4), 043602 (2010).

22. Z. Z. Wei, L. J. Cao, and G. J. Zhang, "A novel 1D target-based calibration method with unknown orientation for structured light vision sensor," Opt. Laser Technol. 42(4), 570–574 (2010).

23. Z. Liu, X. J. Li, F. J. Li, and G. J. Zhang, "Calibration method for line-structured light vision sensor based on a single ball target," Opt. Lasers Eng. 69(6), 20–28 (2015).

24. A. R. Partridge, "Ellipses from a circular and spherical point of view," Two-Year Coll. Math. J. 14(5), 436–438 (1983).

25. C. Steger, "An unbiased detector of curvilinear structures," IEEE Trans. Pattern Anal. Mach. Intell. 20(2), 113–125 (1998).

26. J. Moré, "The Levenberg-Marquardt algorithm: implementation and theory," in Numerical Analysis (1977).

27. J. Y. Bouguet, "The MATLAB open source calibration toolbox," http://www.vision.caltech.edu/bouguetj/calib_doc/.
