Optica Publishing Group

Profile reconstruction method adopting parameterized re-projection errors of laser lines generated from bi-cuboid references

Open Access

Abstract

A flexible reconstruction method based on the line re-projection errors of the laser plane is presented for profile recovery. Bi-cuboid references are designed to cover the large field of view of the camera. The local intrinsic and extrinsic parameter matrices of the camera are first obtained by RQ decomposition. A balance model is then demonstrated to obtain the global parameter matrices from the refined projection in the camera coordinate system. The flexible laser plane is solved from the Plücker matrices of the projected laser lines, which are generated from the homographies of the cuboid references and the global parameter matrices. Furthermore, the laser plane and the global parameter matrices are refined by a cost function constructed from the re-projection errors of the parameterized laser lines on the references. Reconstruction experiments are performed to verify the validity and accuracy of both the initial method and the optimization method. The impact factors of the measurement distance, the reference distance and the test distance are investigated in the experiments. The average reconstruction errors are 1.14 mm, 1.13 mm, 1.15 mm and 1.17 mm in the four groups of experiments, which indicates a good application prospect for profile reconstruction.

© 2017 Optical Society of America

1. Introduction

Profile recovery is one of the most important topics in the optical detection field in view of its wide application prospects [1, 2]. Typical cases include face recognition [3], vehicle or industrial part inspection [4–6], reverse engineering [7], building crack inspection [8], cardiac optical mapping [9], etc. Two main methods are appropriate for the problem of profile recovery. Binocular vision has been studied by many researchers [10–12]. The method adopts two cameras to capture the feature points on the measured object. Based on the triangulation between the two cameras and the measured object, the object is reconstructed from the feature points extracted in the images [13, 14]. The difficulty of binocular vision is matching the corresponding feature points between the two images. It is often a challenge to obtain accurate feature point pairs, even when the epipolar constraint is considered in the matching process, and point matching is even more difficult for an object with smooth surfaces. The other reconstruction method is active vision with line, plane or coded structured light [15, 16]. Line structured light takes advantage of the high illumination of the laser projection, whereas only point information is obtained from one captured image. Coded structured light utilizes a DLP projector to project a coded image on the measured object so that the pixels can be distinguished in the profile reconstruction, and it reconstructs the object profile with high efficiency. However, as the illumination of coded structured light generated by a DLP projector is weaker than that of a laser projector, coded-structured-light reconstruction tends to be affected by environmental light. Therefore, profile reconstruction with plane structured light from a laser projector is a moderate choice considering both illumination and reconstruction efficiency.

In profile reconstruction methods with a camera and structured light, 1D [17], 2D [18–20] and 3D [21, 22] calibration references have been studied in previous works. J. Santolaria et al. [22] explained an approach to calibrate laser sensors based on a 3D crenellated reference with feature points; the method is performed on the laser triangulation sensor of an articulated-arm coordinate measuring machine. J. E. Ha [23] interpreted a calibration method for the extrinsic parameters of a laser sensor and a camera, in which a 2D calibration reference with a triangular hole is designed for the calibration. S. Zhang et al. [24] explored a 2D-reference method to calibrate a structured light system for 3D shape measurement. The projector is considered as a camera in the calibration, so the structured light system consisting of a camera and a projector is similar to a binocular vision system including two cameras; furthermore, the calibration technology of binocular vision is well established. J. Heikkila [25] proposed a three-step calibration method employing a 3D reference with circular control points, in which the mapping of the circles on the reference is reviewed to realize the camera calibration. Although the simple structure of the 1D reference is beneficial for manufacture and operation, the 1D reference is rarely used because of its lack of calibration information. The 2D reference is widely reported due to its flexible calibration process; however, the 2D reference method requires more than 3 images to perform the calibration. Hence, the 3D reference, which takes the advantage of needing only one image, is employed as the basis of the system calibration and the profile reconstruction. In order to cover the large field of view of the camera, two cuboid references are designed to achieve the profile recovery. Correspondingly, the reconstruction method with two 2D references can be considered as using the same 2D reference at two different positions.
In the 2D reference case, three or more reference images provide a global intrinsic parameter matrix, whereas in the 3D reference case a single reference image is enough to provide a local intrinsic parameter matrix. As there should be only one intrinsic parameter matrix per camera to represent its essential characteristics, two or more 3D references in one image raise the problem of data redundancy. Therefore, it is meaningful to solve for the global intrinsic parameter matrix by a reasonable balance method that considers the data of both 3D references.

The rest of the paper is organized as follows. In Section 2, a two-step method is presented to balance the intrinsic parameter matrices generated from the two cuboid references. The first step is to calibrate the world-image transform matrices of the two cuboid references and decompose the two extrinsic parameter matrices from them. In the second step, the world coordinates of the two references are transformed into the same camera coordinate system by the decomposed extrinsic parameter matrices, and the global intrinsic parameter matrix is solved from these camera-frame coordinates and their image projections. In Section 3, the flexible laser plane is determined by the homographies from the intersection laser lines on the two references to the projection laser lines in the image. The optimized laser plane and intrinsic parameter matrix are generated from the cost function contributed by the re-projection errors of the parameterized laser lines, and the points on the measured object are reconstructed by the laser plane and the projection transform. Section 4 provides comprehensive experimental results, which investigate the impact factors of the measurement distance from the camera to the measured object, the reference distance between the two cuboid references, and the test distance on the benchmark ruler. Section 5 concludes the paper.

The reconstruction method offers two main benefits at the same time. The first is to realize the profile reconstruction, camera calibration and laser plane calibration with the information from only one image. The second is a flexible laser plane without strict movement constraints. The method is profitable for enhancing profile measurement in both fundamental research and potential applications.

2. Balance model of intrinsic parameter matrix

The profile reconstruction system, which consists of two cuboid references, a camera and a laser plane, is shown in Fig. 1. Two world coordinate frames OW1-XW1YW1ZW1 and OW2-XW2YW2ZW2 are defined on the two cuboid references. The camera coordinate frame OC-XCYCZC and the image coordinate frame o-xy are defined by the camera and the image, respectively. The flexible laser plane scans the measured object and the cuboid references; therefore, the image of the four intersection laser lines on the references and the intersection laser curve on the measured object is captured by the camera.


Fig. 1 Profile reconstruction method with two cuboid references and a flexible laser plane.


According to the projection of the 3D point set $X_i^{(W,m)} = \{X_i^{(W,m,n)}\}$ on the two cuboid references, the projection point on the image plane is given by [26]

$$\delta_m x_i^{(I,m)} = P_m X_i^{(W,m)} \tag{1}$$

where $P_m$ is the projection matrix from the reference to the image, $x_i^{(I,m)} = \{x_i^{(I,m,n)}\}$ is the projection point on the image plane, and $\delta_m$ is the scale factor. $P_m$ is solved by the direct linear transform (DLT) method [27]. $m = 1, 2$ indexes the two cuboid references, and $n = 1, 2$ indexes the left and right planes of each reference, respectively.
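As a concrete illustration of the DLT step, the sketch below estimates a projection matrix from 3D-2D correspondences. This is a minimal numpy sketch under noise-free assumptions; the function name and interface are illustrative, not from the paper.

```python
import numpy as np

def dlt_projection_matrix(X_w, x_i):
    """Estimate the 3x4 projection matrix P (up to scale) from n >= 6
    non-coplanar 3D points X_w (n x 3) and their images x_i (n x 2).
    Each correspondence yields two homogeneous rows of A with
    A @ vec(P) = 0; vec(P) is the right singular vector belonging
    to the smallest singular value of A."""
    n = X_w.shape[0]
    A = np.zeros((2 * n, 12))
    for k in range(n):
        X = np.append(X_w[k], 1.0)      # homogeneous 3D point
        x, y = x_i[k]
        A[2 * k, 0:4] = X               # p1 . X - x (p3 . X) = 0
        A[2 * k, 8:12] = -x * X
        A[2 * k + 1, 4:8] = X           # p2 . X - y (p3 . X) = 0
        A[2 * k + 1, 8:12] = -y * X
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 4)
```

With noisy points the same null-space solution gives the algebraic least-squares estimate of $P_m$.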

The projection matrix Pm is decomposed by the method in [26] to

$$P_m = K_m^{(L)} [R_m^{(L)}, t_m^{(L)}] \tag{2}$$
where $K_m^{(L)} = \begin{bmatrix} \alpha_m & \gamma_m & u_m \\ 0 & \beta_m & v_m \\ 0 & 0 & 1 \end{bmatrix}$ is the local intrinsic parameter matrix generated from the 3D coordinates on the $m$-th cuboid reference, and $R_m^{(L)}$ and $t_m^{(L)}$ are the local camera rotation matrix and the local camera translation vector generated from the 3D coordinates on the $m$-th cuboid reference.
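The RQ decomposition named in the abstract can be sketched as follows, assuming scipy is available; the function name is illustrative. A sign fix on the triangular factor's diagonal is needed because RQ is only unique up to a diagonal sign matrix.

```python
import numpy as np
from scipy.linalg import rq

def decompose_projection(P):
    """Split P ~ K [R, t]: RQ-factor the left 3x3 block of P into an
    upper-triangular K and an orthogonal R, force K's diagonal positive
    (which also guarantees det(R) = +1 once P's overall sign is fixed),
    then recover t = K^{-1} p4 and scale K so that K[2, 2] = 1."""
    if np.linalg.det(P[:, :3]) < 0:     # fix the overall sign of P
        P = -P
    K, R = rq(P[:, :3])
    D = np.diag(np.sign(np.diag(K)))    # make K's diagonal positive
    K, R = K @ D, D @ R
    t = np.linalg.solve(K, P[:, 3])
    return K / K[2, 2], R, t
```

Applied to $P_1$ and $P_2$, this yields the two local triples $(K_m^{(L)}, R_m^{(L)}, t_m^{(L)})$.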

The intrinsic parameter matrix $K_1^{(L)}$ of the first cuboid reference should theoretically be identical to the intrinsic parameter matrix $K_2^{(L)}$ of the second cuboid reference. In practice, however, $K_1^{(L)}$ is unequal to $K_2^{(L)}$ because of noise in the experimental image. In order to balance the noise impact on the images of the two cuboid references, we solve the global intrinsic parameter matrix from the feature point coordinates of the two references in the camera coordinate system. Based on the local extrinsic parameters $R_m^{(L)}$, $t_m^{(L)}$ solved from Eq. (2), the 3D point $X_i^{(W,m)}$ on the $m$-th cuboid reference is transformed to

$$X_i^{(C,m)} = [R_m^{(L)}, t_m^{(L)}] X_i^{(W,m)} \tag{3}$$
where $X_i^{(C,m)}$ is the 3D coordinate of the $m$-th reference in the camera coordinate system.

The image coordinates $x_i^{(I,1)}$ and $x_i^{(I,2)}$ are combined into $x_i^{(I)}$, while the 3D coordinates $X_i^{(C,1)}$, $X_i^{(C,2)}$ are combined into $X_i^{(C)}$. Considering the pinhole model in the camera coordinate system [26], the transform from the 3D coordinate $X_i^{(C)}$ to the 2D image coordinate $x_i^{(I)}$ is comprehensively expressed by

$$\delta x_i^{(I)} = K^{(G)} X_i^{(C)} \tag{4}$$
where $K^{(G)} = \begin{bmatrix} \alpha & \gamma & u \\ 0 & \beta & v \\ 0 & 0 & 1 \end{bmatrix}$ is the global intrinsic parameter matrix accounting for both the projection from $X_i^{(C,1)}$ to $x_i^{(I,1)}$ and the projection from $X_i^{(C,2)}$ to $x_i^{(I,2)}$, and $\delta$ is the scale factor.

Let $x_i^{(I)} = (x_i^{(I)}, y_i^{(I)}, 1)^T$ and $X_i^{(C)} = (X_i^{(C)}, Y_i^{(C)}, Z_i^{(C)}, 1)^T$. Equation (4) is transformed to the non-homogeneous equation

$$\begin{bmatrix} X_i^{(C)} & 0 & Y_i^{(C)} & Z_i^{(C)} & 0 \\ 0 & Y_i^{(C)} & 0 & 0 & Z_i^{(C)} \end{bmatrix} \begin{bmatrix} \alpha \\ \beta \\ \gamma \\ u \\ v \end{bmatrix} = \begin{bmatrix} x_i^{(I)} Z_i^{(C)} \\ y_i^{(I)} Z_i^{(C)} \end{bmatrix} \tag{5}$$

The global intrinsic parameter matrix $K^{(G)}$ can then be determined by

$$[\alpha, \beta, \gamma, u, v]^T = (B^T B)^{-1} B^T b \tag{6}$$
where $B = [B_1^T, B_2^T, \ldots, B_i^T, \ldots, B_N^T]^T$ with $B_i = \begin{bmatrix} X_i^{(C)} & 0 & Y_i^{(C)} & Z_i^{(C)} & 0 \\ 0 & Y_i^{(C)} & 0 & 0 & Z_i^{(C)} \end{bmatrix}$, and $b = [b_1^T, b_2^T, \ldots, b_i^T, \ldots, b_N^T]^T$ with $b_i = [x_i^{(I)} Z_i^{(C)}, y_i^{(I)} Z_i^{(C)}]^T$.
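The balance model above amounts to a single linear least-squares problem over the camera-frame points of both references. A minimal numpy sketch with illustrative names (the test below uses noise-free correspondences; with noise the same call returns the least-squares balance):

```python
import numpy as np

def global_intrinsics(X_c, x_i):
    """Balance model: one least-squares system over the camera-frame
    points of BOTH references.  X_c is (N, 3), x_i is (N, 2); returns
    the global K built from [alpha, beta, gamma, u, v]."""
    X, Y, Z = X_c[:, 0], X_c[:, 1], X_c[:, 2]
    x, y = x_i[:, 0], x_i[:, 1]
    zeros = np.zeros_like(X)
    # two rows per point:  x Z = alpha X + gamma Y + u Z,  y Z = beta Y + v Z
    # (all x-rows stacked first; reordering rows does not change the solution)
    B = np.vstack([np.column_stack([X, zeros, Y, Z, zeros]),
                   np.column_stack([zeros, Y, zeros, zeros, Z])])
    b = np.concatenate([x * Z, y * Z])
    alpha, beta, gamma, u, v = np.linalg.lstsq(B, b, rcond=None)[0]
    return np.array([[alpha, gamma, u], [0.0, beta, v], [0.0, 0.0, 1.0]])
```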

From Eq. (2), the global extrinsic parameter matrix is estimated by

$$[R_m^{(G)}, t_m^{(G)}] = (K^{(G)})^{-1} P_m \tag{7}$$

The solution process of the global intrinsic parameter matrix and the global extrinsic parameter matrix is shown in Fig. 2.


Fig. 2 The solution diagram of the global intrinsic parameter matrix and the global extrinsic parameter matrix.


3. Profile reconstruction model

On the left plane of reference $m$, the $X$ coordinate is zero; on the right plane of reference $m$, the $Y$ coordinate is zero. Therefore, a point on the left plane of the cuboid reference is $X_i^{(W,m,1)} = [0, Y_i^{(W,m,1)}, Z_i^{(W,m,1)}, 1]^T$, and a point on the right plane is $X_i^{(W,m,2)} = [X_i^{(W,m,2)}, 0, Z_i^{(W,m,2)}, 1]^T$. According to Eqs. (1), (2), the point-point homography matrix from the left plane of reference $m$ to the image is $H^{(W,m,1)} = K^{(G)}[r_2^{(G,m)}, r_3^{(G,m)}, t_m^{(G)}]$, and the point-point homography matrix from the right plane of reference $m$ to the image is $H^{(W,m,2)} = K^{(G)}[r_1^{(G,m)}, r_3^{(G,m)}, t_m^{(G)}]$, where $r_1^{(G,m)}, r_2^{(G,m)}, r_3^{(G,m)}$ are the three column vectors of $R_m^{(G)}$.

The line-line homography from the left or right plane of reference $m$ to the image is $(H^{(W,m,n)})^{-T}$ [26, 28]. Then, the 2D line on the left or right plane of reference $m$ is

$$l_j^{(m,n)} = (H^{(W,m,n)})^T (l_j^{(m,n)})' \tag{8}$$
where $(l_j^{(m,n)})'$ is the 2D projection laser line in the image. The 2D line is extracted from the image by the Hessian matrix method [29].
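This line transfer is a one-line operation once the homography is known: points map as $x' = Hx$, so lines map as $l = H^T l'$. A small numpy sketch with illustrative names:

```python
import numpy as np

def line_to_plane(H, l_img):
    """Transfer a 2D image line l' (homogeneous 3-vector) to the
    reference plane: since points map by x' = H x, lines map by
    l = H^T l'.  The result is normalized so that (a, b) is a unit
    normal of the line a x + b y + c = 0 (optional convenience)."""
    l = H.T @ l_img
    return l / np.linalg.norm(l[:2])
```

Consistency check: a plane point $x$ lies on the transferred line exactly when its image $Hx$ lies on the image line.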

Let $J_m = \begin{bmatrix} R_m^{(G)} & t_m^{(G)} \\ 0^T & 1 \end{bmatrix}$. In the camera coordinate system, the left and right planes of reference $m$ transformed by $J_m$ are

$$f_j^{(m,1)} = J_m^{-T} [1, 0, 0, 0]^T \tag{9}$$
$$f_j^{(m,2)} = J_m^{-T} [0, 1, 0, 0]^T \tag{10}$$

In the camera coordinate system, the two planes that pass through the lines $l_j^{(m,1)}$ and $l_j^{(m,2)}$ of Eq. (8) and are perpendicular to the left and right planes of reference $m$, respectively, are

$$e_j^{(m,1)} = J_m^{-T} [0, (l_j^{(m,1)})_x, (l_j^{(m,1)})_y, (l_j^{(m,1)})_z]^T \tag{11}$$
$$e_j^{(m,2)} = J_m^{-T} [(l_j^{(m,2)})_x, 0, (l_j^{(m,2)})_y, (l_j^{(m,2)})_z]^T \tag{12}$$
where $l_j^{(m,n)} = [(l_j^{(m,n)})_x, (l_j^{(m,n)})_y, (l_j^{(m,n)})_z]^T$.

Thus, the 3D laser line on reference $m$ is represented by the plane-form (dual) Plücker matrix

$$L_j^{(m,n)*} = e_j^{(m,n)} (f_j^{(m,n)})^T - f_j^{(m,n)} (e_j^{(m,n)})^T = [l_j^{(m,n)*}]_{4\times4} \tag{13}$$

The 3D laser line $L_j^{(m,n)*}$ represented by two planes is transformed to the point-form Plücker matrix $L_j^{(m,n)} = [l_j^{(m,n)}]_{4\times4}$ with the relationship $(l_j^{(m,n)})_{12} : (l_j^{(m,n)})_{13} : (l_j^{(m,n)})_{14} : (l_j^{(m,n)})_{23} : (l_j^{(m,n)})_{42} : (l_j^{(m,n)})_{34} = (l_j^{(m,n)*})_{34} : (l_j^{(m,n)*})_{42} : (l_j^{(m,n)*})_{23} : (l_j^{(m,n)*})_{14} : (l_j^{(m,n)*})_{13} : (l_j^{(m,n)*})_{12}$ [26]. The coordinate vector of the laser plane $\Pi_j$ passing through $L_j^{(m,n)}$ satisfies

$$[L_j^{(1,1)}, L_j^{(1,2)}, L_j^{(2,1)}, L_j^{(2,2)}]^T \Pi_j = 0 \tag{14}$$

The laser plane is then solved by the singular value decomposition (SVD) method [30]. The solution process of the initial solution of the laser plane is shown in Fig. 3.
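The two-plane Plücker construction, the dual-to-point-form index exchange, and the SVD null-space solution for the laser plane can be sketched as follows (numpy; illustrative names; the index exchange follows Hartley and Zisserman [26]):

```python
import numpy as np

def plucker_from_planes(e, f):
    """Dual (plane-form) Pluecker matrix of the line where planes
    e and f meet: L* = e f^T - f e^T."""
    return np.outer(e, f) - np.outer(f, e)

def dual_to_point_form(Lstar):
    """Rewrite the dual Pluecker matrix in point form (up to scale)
    by the index exchange 12<->34, 13<->42, 14<->23."""
    L = np.zeros((4, 4))
    pairs = [((0, 1), (2, 3)), ((0, 2), (3, 1)), ((0, 3), (1, 2))]
    for (a, b), (c, d) in pairs:
        L[a, b], L[c, d] = Lstar[c, d], Lstar[a, b]
    return L - L.T          # make it antisymmetric

def plane_through_lines(point_form_lines):
    """Least-squares plane containing all given point-form lines:
    L Pi = 0 when the line lies in plane Pi, so stack the L's and
    take the right singular vector of the smallest singular value."""
    _, _, Vt = np.linalg.svd(np.vstack(point_form_lines))
    return Vt[-1]
```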


Fig. 3 The solution diagram of the initial solution of the flexible laser plane.


In the general reconstruction case, the laser plane $\Pi_j$ can be used to estimate the 3D points of the intersection laser curve on the measured object. In order to improve the reconstruction accuracy, the intersection laser line in the camera coordinate system is further parameterized by the laser plane $\Pi_j$, $R_m^{(G)}$ and $t_m^{(G)}$ as

$$L_j^{(C,m,n)*}(\Pi_j, R_m^{(G)}, t_m^{(G)}) = \Pi_j [f_j^{(m,n)}(R_m^{(G)}, t_m^{(G)})]^T - f_j^{(m,n)}(R_m^{(G)}, t_m^{(G)}) \Pi_j^T = [l_j^{(C,m,n)*}]_{4\times4} \tag{15}$$

The plane-form Plücker matrix $L_j^{(C,m,n)*}(\Pi_j, R_m^{(G)}, t_m^{(G)})$ is likewise transformed to the point-form Plücker matrix $L_j^{(C,m,n)}(\Pi_j, R_m^{(G)}, t_m^{(G)})$. Then, from Eq. (2), the point-form Plücker matrix $L_j^{(C,m,n)}(\Pi_j, R_m^{(G)}, t_m^{(G)})$ is projected to the image by the parameterized projection matrix as

$$[\tilde{l}_j^{(m,n)}(\Pi_j, K^{(G)}, R_m^{(G)}, t_m^{(G)})]_\times = \{K^{(G)}[R_m^{(G)}, t_m^{(G)}]\}\, L_j^{(C,m,n)}(\Pi_j, R_m^{(G)}, t_m^{(G)})\, \{K^{(G)}[R_m^{(G)}, t_m^{(G)}]\}^T \tag{16}$$
where $[\tilde{l}_j^{(m,n)}(\Pi_j, K^{(G)}, R_m^{(G)}, t_m^{(G)})]_\times = \begin{bmatrix} 0 & -(\tilde{l}_j^{(m,n)})_z & (\tilde{l}_j^{(m,n)})_y \\ (\tilde{l}_j^{(m,n)})_z & 0 & -(\tilde{l}_j^{(m,n)})_x \\ -(\tilde{l}_j^{(m,n)})_y & (\tilde{l}_j^{(m,n)})_x & 0 \end{bmatrix}$ defines the parameterized skew-symmetric matrix of the parameterized projection laser line $\tilde{l}_j^{(m,n)}(\Pi_j, K^{(G)}, R_m^{(G)}, t_m^{(G)}) = [(\tilde{l}_j^{(m,n)})_x, (\tilde{l}_j^{(m,n)})_y, (\tilde{l}_j^{(m,n)})_z]^T$.

The re-projected intersection laser lines are parameterized by Eq. (16). The cost function is constructed from the re-projection errors of the intersection laser lines on the two cuboid references and described by

$$f(\Pi_j, K^{(G)}, R_1^{(G)}, t_1^{(G)}, R_2^{(G)}, t_2^{(G)}) = \sum_{m=1}^{2} \sum_{n=1}^{2} \left\| \tilde{l}_j^{(m,n)}(\Pi_j, K^{(G)}, R_m^{(G)}, t_m^{(G)}) - (l_j^{(m,n)})' \right\|^2 \tag{17}$$

The solution process of the optimization solution of the laser plane is shown in Fig. 4.


Fig. 4 The solution diagram of the optimization solution of the flexible laser plane.


The optimized solutions $\hat{\Pi}_j, \hat{K}^{(G)}, \hat{R}_1^{(G)}, \hat{t}_1^{(G)}, \hat{R}_2^{(G)}, \hat{t}_2^{(G)}$ are obtained at the minimum of the cost function. As the intersection point $X_i^{(O)}$ between the laser plane and the object lies on the optimized laser plane $\hat{\Pi}_j$, it satisfies [24]

$$\hat{\Pi}_j^T X_i^{(O)} = 0 \tag{18}$$

Therefore, the intersection laser point $X_i^{(O)}$ on the measured object is reconstructed from Eq. (18), Eq. (4), the optimized intrinsic matrix $\hat{K}^{(G)}$ and the image coordinate $x_i^{(O)}$.
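This final step is a ray-plane intersection: the image point back-projects to a ray through the camera center, and Eq. (18) fixes the depth along that ray. A minimal numpy sketch with illustrative names:

```python
import numpy as np

def reconstruct_point(x_img, K, Pi):
    """Intersect the back-projected ray of image point x_img with the
    laser plane Pi = (n, d) in the camera frame: X = s * K^{-1} (x, y, 1)^T
    with the scale s chosen so that Pi^T (X, 1) = 0."""
    ray = np.linalg.solve(K, np.append(x_img, 1.0))  # ray direction
    s = -Pi[3] / (Pi[:3] @ ray)                      # depth along the ray
    return s * ray
```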

4. Experiments and discussion

A camera with 2048 × 1536 image resolution and two 50 mm × 50 mm × 400 mm references with a 20 mm × 20 mm checkerboard pattern are employed to test the performance of the profile reconstruction method. Harris corner detection [31] provides the image coordinates of the feature points on the bi-cuboid references, and 40 feature points are selected on each cuboid reference. The performance of the optimization method is verified by changing the measurement distance, the reference distance and the test distance in the experiments. The reconstruction errors of the optimization method are compared with those of the initial method to test the accuracy of the reconstruction method.

The verification method with the benchmark ruler is shown in Fig. 5. In order to evaluate the accuracy of the reconstruction method, a target with 5 feature points is used as the benchmark ruler, and the distance between two feature points is regarded as the test distance. The test distances are 20 mm, 30 mm, 40 mm and 50 mm. The reference distances between the two cuboid references are 150 mm, 200 mm, 250 mm and 300 mm. In addition, the measurement distances from the measured object to the camera are 700 mm, 800 mm, 900 mm and 1000 mm. The reconstruction error between the reconstructed test distance and the real test distance on the ruler is adopted to verify the accuracy of the profile reconstruction method [32].


Fig. 5 Verification method with the benchmark ruler.


The reconstruction results adopting the bi-cuboid references are shown in Fig. 6. The experiment setup of the reconstruction method is shown in Fig. 6(a), and the verification experiment is shown in Fig. 6(b). Figures 6(c), 6(e), 6(g) and 6(i) are the images captured in the experiments; four different objects are located between the two cuboid references. Figures 6(d), 6(f), 6(h) and 6(j) are the reconstruction results of the four objects. The red spheres represent the points on the intersection lines between the laser plane and the objects in the camera coordinate system, and the blue stars indicate the feature points of the bi-cuboid references. The reconstruction results agree well with the shapes of the four objects.


Fig. 6 Profile reconstruction results adopting the parameterized re-projection errors of laser lines from the bi-cuboid references. (a) The experiment setup of the reconstruction method. (b) The verification experiment. (c) The captured image of the car model. (d) The reconstruction results of the car model. (e) The captured image of the square box. (f) The reconstruction results of the square box. (g) The captured image of the cylindrical tube. (h) The reconstruction results of the cylindrical tube. (i) The captured image of the flat surface. (j) The reconstruction results of the flat surface.


In order to quantitatively analyze the accuracy of the reconstruction, the reconstruction distance errors λj of the optimization method and the initial method are shown in Fig. 7. The statistical data are given in Table 1.


Fig. 7 Reconstruction errors of the initial method and the optimization method. λj is the reconstruction error. IN indicates the initial method. OP indicates the optimization method. MD indicates measurement distance, mm. RD indicates reference distance, mm. (a) MD = 700, RD = 150. (b) MD = 700, RD = 200. (c) MD = 700, RD = 250. (d) MD = 700, RD = 300. (e) MD = 800, RD = 150. (f) MD = 800, RD = 200. (g) MD = 800, RD = 250. (h) MD = 800, RD = 300. (i) MD = 900, RD = 150. (j) MD = 900, RD = 200. (k) MD = 900, RD = 250. (l) MD = 900, RD = 300. (m) MD = 1000, RD = 150. (n) MD = 1000, RD = 200. (o) MD = 1000, RD = 250. (p) MD = 1000, RD = 300.



Table 1. Statistical data of the reconstruction experiments.

When the measurement distance is 700 mm, the mean reconstruction errors of the initial method are 1.13 mm, 1.16 mm, 1.18 mm and 1.22 mm for the reference distances of 150 mm, 200 mm, 250 mm and 300 mm, while the means of the optimization method are 1.07 mm, 1.09 mm, 1.12 mm and 1.15 mm. When the measurement distance increases to 800 mm, the means of the initial method are 1.13 mm, 1.15 mm, 1.18 mm and 1.20 mm, and the means of the optimization method are 1.06 mm, 1.09 mm, 1.11 mm and 1.13 mm. When the measurement distance increases to 900 mm, the means of the initial method are 1.14 mm, 1.16 mm, 1.19 mm and 1.22 mm, and the means of the optimization method are 1.08 mm, 1.10 mm, 1.13 mm and 1.16 mm. When the measurement distance is 1000 mm, the means of the initial method are 1.15 mm, 1.18 mm, 1.21 mm and 1.25 mm, and the means of the optimization method are 1.09 mm, 1.11 mm, 1.14 mm and 1.18 mm.

The relative errors of the reconstruction distances are illustrated in Fig. 8. When the measurement distance is 700 mm, the mean relative errors of the optimization method are 2.96%, 3.04%, 3.10% and 3.21% for the reference distances of 150 mm, 200 mm, 250 mm and 300 mm. When the measurement distance is 800 mm, they are 2.93%, 3.02%, 3.08% and 3.16%. When the measurement distance is 900 mm, they are 3.00%, 3.06%, 3.14% and 3.23%. For the measurement distance of 1000 mm, they are 3.03%, 3.10%, 3.19% and 3.29%.


Fig. 8 Relative errors of the initial method and the optimization method. δj is the relative error. IN indicates the initial method. OP indicates the optimization method. MD indicates measurement distance, mm. RD indicates reference distance, mm. (a) MD = 700, RD = 150. (b) MD = 700, RD = 200. (c) MD = 700, RD = 250. (d) MD = 700, RD = 300. (e) MD = 800, RD = 150. (f) MD = 800, RD = 200. (g) MD = 800, RD = 250. (h) MD = 800, RD = 300. (i) MD = 900, RD = 150. (j) MD = 900, RD = 200. (k) MD = 900, RD = 250. (l) MD = 900, RD = 300. (m) MD = 1000, RD = 150. (n) MD = 1000, RD = 200. (o) MD = 1000, RD = 250. (p) MD = 1000, RD = 300.


Finally, we conclude that both the absolute and relative reconstruction errors obtained by the optimization method are smaller than those of the initial method. Therefore, the optimization method adopting parameterized re-projection errors of laser lines reduces the experimental errors effectively. The relative error increases slightly with the growth of the test distance. In addition, as the test distance increases from 20 mm to 50 mm, the reconstruction errors present a steadily growing trend, and they likewise grow steadily as the reference distance increases from 150 mm to 300 mm. Therefore, when the distance between the bi-cuboid references is 150 mm, the reconstruction values are closest to the true values. This is because the references are near the center of the image, where the image distortion is small. Besides, the measurement distance between the camera and the measured object is also an impact factor. The reconstruction errors are smallest when the measurement distance is 800 mm. When the measurement distance is 700 mm, the reconstruction errors are slightly larger than those at 800 mm and slightly smaller than those at 900 mm, while at 1000 mm the reconstruction errors are the largest of the four groups of experiments. In summary, the experimental results indicate that the reconstruction values are closest to the true values for a measurement distance of 800 mm, a test distance of 20 mm and a reference distance of 150 mm.

5. Conclusion

A profile reconstruction method based on the line re-projection errors of the laser plane and bi-cuboid references is proposed in this paper. Two world coordinate systems are built on the bi-cuboid references. The intrinsic parameter matrix of the camera is solved by the balance model of the two 3D references in one image. The flexible laser plane is modeled by the homographies from the references to the image and refined by the cost function of the line re-projection errors. The accuracy of the optimization method and the initial method is verified by comparing the reconstruction distance errors. The impact factors of the measurement distance, the reference distance and the test distance are investigated in this paper. The mean reconstruction error of the initial method is 1.18 mm, while that of the optimization method is 1.11 mm; the mean relative error of the initial method is 3.30%, while that of the optimization method is 3.10%. The comparison results show that the optimization method is valuable for vision-based measurement systems, due to its advantages of high accuracy and good stability. Potential applications include the inspection fields of vehicle manufacture, medical operation, robot navigation, etc. In the future, the object scale in the inspection will be studied to extend the application scope.

Funding

National Natural Science Foundation of China (NSFC) (51478204, 51205164); Natural Science Foundation of Jilin Province (20170101214JC, 20150101027JC).

References and links

1. B. Özgürün, D. Ö. Tayyar, K. Ö. Agiş, and M. Özcan, “Three-dimensional image reconstruction of macroscopic objects from a single digital hologram using stereo disparity,” Appl. Opt. 56(13), F84–F90 (2017). [PubMed]  

2. A. Glowacz and Z. Glowacz, “Diagnostics of stator faults of the single-phase induction motor using thermal images, moasos and selected classifiers,” Measurement 93, 86–93 (2016).

3. D. Kim and S. Lee, “Structured-light-based highly dense and robust 3D reconstruction,” J. Opt. Soc. Am. A 30(3), 403–417 (2013). [PubMed]  

4. A. Głowacz and Z. Głowacz, “Diagnosis of the three-phase induction motor using thermal imaging,” Infrared Phys. Technol. 81, 7–16 (2017).

5. Y. An and S. Zhang, “Three-dimensional absolute shape measurement by combining binary statistical pattern matching with phase-shifting methods,” Appl. Opt. 56(19), 5418–5426 (2017). [PubMed]  

6. F. J. Brosed, J. Santolaria, J. J. Aguilar, and D. Guillomía, “Laser triangulation sensor and six axes anthropomorphic robot manipulator modelling for the measurement of complex geometry products,” Robot. Comput.-Integr. Manuf. 28(6), 660–671 (2012).

7. J. Santolaria, J. J. Pastor, F. J. Brosed, and J. J. Aguilar, “A one-step intrinsic and extrinsic calibration method for laser line scanner operation in coordinate measuring machines,” Meas. Sci. Technol. 20(4), 045107 (2009).

8. R. S. Lim, H. M. La, and W. Sheng, “A robotic crack inspection and mapping system for bridge deck maintenance,” IEEE Trans. Autom. Sci. Eng. 11(2), 367–378 (2014).

9. H. Zhang, K. Iijima, J. Huang, G. P. Walcott, and J. M. Rogers, “Optical mapping of membrane potential and epicardial deformation in beating hearts,” Biophys. J. 111(2), 438–451 (2016). [PubMed]  

10. H. Hirschmüller, P. R. Innocent, and J. Garibaldi, “Real-time correlation-based stereo vision with reduced border errors,” Int. J. Comput. Vis. 47(1–3), 229–246 (2002).

11. C. L. Zitnick and T. Kanade, “A cooperative algorithm for stereo matching and occlusion detection,” IEEE Trans. Pat. Anal. 22(7), 675–684 (2000).

12. C. H. Lee, Y. C. Lim, S. Kwon, and J. H. Lee, “Stereo vision-based vehicle detection using a road feature and disparity histogram,” Opt. Eng. 50(2), 027004 (2011).

13. X. Cui, K. B. Lim, Q. Guo, and D. Wang, “Accurate geometrical optics model for single-lens stereovision system using a prism,” J. Opt. Soc. Am. A 29(9), 1828–1837 (2012). [PubMed]  

14. M. Bleyer and M. Gelautz, “A layered stereo matching algorithm using image segmentation and global visibility constraints,” ISPRS J. Photogramm. 59(3), 128–150 (2005).

15. J. Geng, “Structured-light 3D surface imaging: a tutorial,” Adv. Opt. Photonics 3(2), 128–160 (2011).

16. C. Wei, C. Sihai, L. Dong, and J. Guohua, “A compact two-dimensional laser scanner based on piezoelectric actuators,” Rev. Sci. Instrum. 86(1), 013102 (2015). [PubMed]  

17. Z. Zhang, “Camera calibration with one-dimensional objects,” IEEE Trans. Pat. Anal. Mach. Intell. 26(7), 892–899 (2004). [PubMed]  

18. W. Li, S. Shan, and H. Liu, “High-precision method of binocular camera calibration with a distortion model,” Appl. Opt. 56(8), 2368–2377 (2017). [PubMed]  

19. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pat. Anal. 22(11), 1330–1334 (2000).

20. R. Tsai, “A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses,” IEEE J. Robot. Autom. 3(4), 323–344 (1987).

21. Y. I. Abdel-Aziz and H. M. Karara, “Direct linear transformation into object space coordinates in close-range photogrammetry,” in Proceedings of the Symposium on Close-Range Photogrammetry (Falls Church, VA, USA, 1971), pp. 1–18.

22. J. Santolaria, J. J. Aguilar, D. Guillomía, and C. Cajal, “A crenellated target-based calibration method for laser triangulation sensors integration in articulated measurement arms,” Robot. Comput.-Integr. Manuf. 27(2), 282–291 (2011).

23. J. E. Ha, “Extrinsic calibration of a camera and laser range finder using a new calibration structure of a plane with a triangular hole,” Int. J. Control Autom. 10(6), 1240–1244 (2012).

24. S. Zhang and P. S. Huang, “Novel method for structured light system calibration,” Opt. Eng. 45(8), 083601 (2006).

25. J. Heikkila, “Geometric camera calibration using circular control points,” IEEE Trans. Pat. Anal. 22(10), 1066–1077 (2000).

26. R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision (Cambridge University, 2003).

27. O. Faugeras, Q. T. Luong, and T. Papadopoulo, The Geometry of Multiple Images: the Laws that Govern the Formation of Multiple Images of a Scene and Some of Their Applications (MIT, 2004).

28. G. Xu, X. Zhang, J. Su, X. Li, and A. Zheng, “Solution approach of a laser plane based on Plücker matrices of the projective lines on a flexible 2D target,” Appl. Opt. 55(10), 2653–2656 (2016). [PubMed]  

29. C. Steger, “An unbiased detector of curvilinear structures,” IEEE Trans. Pat. Anal. 20(2), 113–125 (1998).

30. J. Nocedal and S. Wright, Numerical Optimization (Springer, 2006).

31. C. Harris and M. Stephens, “A combined corner and edge detector,” in Proceedings of the 4th Alvey Vision Conference (1988), pp. 147–151.

32. G. Xu, J. Yuan, X. Li, and J. Su, “3D reconstruction of laser projective point with projection invariant generated from five points on 2D target,” Sci. Rep. 7(1), 7049 (2017). [PubMed]  
