Removal of noise and radial lens distortion during calibration of computer vision systems

Open Access

Abstract

The calibration of computer vision systems that contain cameras and projectors usually relies on markers in well-designed patterns to compute the system parameters. Unfortunately, noise and radial distortion are ubiquitous, which decreases the calibration accuracy and consequently the measurement accuracy of the related techniques. In this paper, a method is proposed to remove the noise and radial distortion by registering the captured pattern with an ideal pattern. After the optimal modeled pattern is obtained by registration, the degree of freedom of the total set of calibration markers is reduced to one and both the noise and the radial distortion are removed. In a structured light scanning system, the accuracy improvement exceeds 24 orders of magnitude in terms of mean squared error. Most importantly, the proposed method can be readily adopted by computer vision techniques that use projectors or cameras.

© 2015 Optical Society of America

1. Introduction

Noise removal or reduction [1–10] is an active topic in optics, and it often determines the application prospects of the related technology, because noise-induced errors in the calibration stage can greatly degrade the measurement accuracy and limit applications. This paper deals mainly with the noise in projector and camera calibration for structured light scanning systems [11–19]. Structured light scanning methods [11–19] project designed patterns onto the surface of the object to be measured. From the distorted pattern, the surface profile is calculated based on the system parameters estimated from a set of calibration markers. The calibration accuracy is therefore critical to the measurement accuracy of these methods. Traditionally, these methods use as many calibration markers as possible, until adding more markers yields negligible improvement in calibration accuracy. Similarly, camera calibration [20] uses multiple views of a well-designed pattern to compute the intrinsic and extrinsic parameters, adding views until the calibration accuracy can no longer be improved significantly. Unfortunately, none of these approaches can remove the noise completely in either the calibration stage or the measurement stage, even though averaging more markers or views may reduce the noise to a low level. In addition, multiple views cannot reduce or remove the radial distortion, which must be rectified for robust measurement.

In this paper, we propose a method to eliminate both the noise and the radial distortion during calibration of a structured light system [12] that uses a Pico laser projector to generate the structured light pattern; the pattern suffers from both radial distortion and noise, and the measurement accuracy is consequently poor. In [11], an SNF laser is used instead of the projector. The radial distortion is avoided, and the image contrast is also increased because the SNF laser is more powerful than the Pico laser projector. Hence, the measurement accuracy is increased by over 10 orders of magnitude compared with that of [12]. To increase the measurement accuracy of the structured light system with a projector [12], we model the markers of the perfectly designed pattern, together with an arbitrarily assigned projection center, as rays. We then search for an optimal plane that intercepts the rays and produces an optimal pattern closest to the actually captured pattern. When the captured pattern is replaced by this optimal pattern, the noise and radial distortion are removed, and the degree of freedom of all the calibration markers is reduced to one.

2. The structured light scanning system

The structured light scanning system [12] is illustrated in Figs. 1(a) and 1(b). Figure 1 shows the principle of the structured light method and the established system, where a Pico laser projector produces the structured light pattern. The optical center of the projector is denoted C, and its mirror point with respect to the plane p1 is C'. C' and the horizontal plane form a virtual camera that is used to compute the equations of the planes p2 and p3 together with cameras c2 and c3, respectively. Plane p1 is defined as the reference plane z = 0 with origin O. The laser ray is projected onto and reflected by plane p1 onto a beam splitter, which splits the ray into two parts that intercept planes p2 and p3, respectively. During calibration, the poses of the three cameras are estimated. The equation of the diffusive plane p2 is then computed from camera c2 and the virtual camera; in the same way, the equation of the diffusive plane p3 is computed from camera c3 and the virtual camera. With the equation of a diffusive plane, the homography between the camera and that plane, and the camera coordinates of the interception points, the 3D world coordinates of the interception points can be computed. The points computed on p3 are mapped to p4. Two points intercepting one ray are thus obtained, and the ray can be determined uniquely in closed form. With the incident rays determined by camera c1, the 3D coordinates of the points on the specular surface are computed as the closed-form intersections of the incident rays and the reflected rays.
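As an illustration only (the paper's implementation is in MATLAB and is not shown), the following Python sketch computes the closed-form least-squares "intersection" of two 3D rays, which is the triangulation step described above; the function name and array conventions are ours.

```python
import numpy as np

def intersect_rays(o1, d1, o2, d2):
    """Closest point to two 3D rays given as origin + direction.

    For noise-free rays that truly intersect, this reduces to the
    exact closed-form intersection used for triangulation.
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Solve [d1, -d2] @ [t1, t2]^T = o2 - o1 in the least-squares sense.
    A = np.stack([d1, -d2], axis=1)                  # shape (3, 2)
    t, *_ = np.linalg.lstsq(A, o2 - o1, rcond=None)
    p1 = o1 + t[0] * d1                              # closest point on ray 1
    p2 = o2 + t[1] * d2                              # closest point on ray 2
    return 0.5 * (p1 + p2)                           # midpoint of the gap
```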

Fig. 1 The developed structured light system.

We use the pattern designed for this structured light system as an instance to describe the proposed method and illustrate its effectiveness. During calibration, a set of bright dots (markers) is used to calculate the system parameters. The two-dimensional coordinates of these points in the camera view must be computed, and noise is introduced when they are computed as the mean of the corner or bright pixels. Figure 2(a) shows the designed pattern, which is projected by a Pico laser projector onto a horizontal diffusive plane. The brightest point in the center denotes the center marker. A Dragonfly camera captures the projected pattern, as shown in Fig. 2(b). To demonstrate the noise, we select 44 points around the center marker and compute their x and y coordinates as the means of the bright pixels after segmentation by the method proposed in [21]; a minimal sketch of this computation follows this paragraph. We then calculate the differences between adjacent dots for the x and y coordinates, respectively. Figures 2(c) and 2(d) show the calculated x coordinate differences and y coordinate differences, which reflect the randomly changing noise. Without noise, the differences should vary regularly according to the designed pattern; noise adds random variations. The purpose of the proposed method is to remove these random variations (noise) completely.
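For illustration, here is a minimal Python sketch of this noise check, assuming the markers have already been segmented into a binary mask and spatially ordered; scipy.ndimage is used for the centroid computation, and all names are ours rather than the paper's.

```python
import numpy as np
from scipy import ndimage

def marker_centroids(binary_mask):
    """Centroids of segmented bright markers (mean of the bright pixels)."""
    labels, n = ndimage.label(binary_mask)
    coms = ndimage.center_of_mass(binary_mask, labels, range(1, n + 1))
    return np.array(coms)            # (N, 2) array of (row, col) centroids

# After sorting the centroids along the row of markers being analyzed,
# adjacent-marker differences expose the noise: without noise they would
# vary regularly according to the designed pattern.
# centroids = marker_centroids(mask)
# dy, dx = np.diff(centroids[:, 0]), np.diff(centroids[:, 1])
```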

Fig. 2 Illustration of pattern and noise; (a) Designed pattern; (b) Captured pattern; (c) plot of the x coordinate differences in pixels; (d) plot of the y coordinate differences in pixels. (The y axis label is mm and the x axis label is index number for (c) and (d)).

For the structured light system, we define noise as the random variations introduced during image capture, which is affected by the various illuminating light sources, and during automatic image processing, which is affected by the unevenly distributed gray levels of the markers. The most commonly encountered lens distortions are radial distortions [22], which are addressed in [13], [20], [23,24] with different methods. In [13], the radial distortion of the projector is corrected independently by adjusting the coefficients of the projected pattern with inverse distortion. In [20], the radial distortion is modeled and incorporated into the camera calibration process. In [23], inverse distortion patterns different from those in [13] are used to correct the radial distortion. In [24], the Hough transform is used to correct the radial distortion. Unfortunately, none of these methods removes the radial distortion completely. In addition, these correction methods are easily affected by noise, which was evaluated with Gaussian noise in [20]. In this paper, a new method is proposed to remove the noise and the radial distortion as a whole by registering the captured pattern with an ideal pattern.

3. The proposed method

In most cases [11–20], the designed patterns keep the distances between adjacent markers equal for convenience. In this section, we propose a 3D pattern modeling method that eliminates the noise based on the distances between the center marker and the other markers. The proposed method is more general because it does not require the distances between adjacent markers to be equal, which may benefit more computer vision applications in the future. The proposed method consists of the following steps (a code sketch of the search is given after the list):

  • Step 1: Model the rays through the designed markers and an arbitrarily assigned projection center $C(x_c, y_c, z_c)$. The unit of the markers can be chosen as pixels or as millimeters, depending on the application. The modeled rays are formulated as:
    $$\frac{x - x_c}{x_i - x_c} = \frac{y - y_c}{y_i - y_c} = \frac{z - z_c}{z_i - z_c} = t_i \tag{1}$$
    where $(x_i, y_i, z_i)$ is the $i$th marker in the designed pattern.
  • Step 2: Use the plane $ax + by + cz = 1$ to intercept the modeled rays, and compute the distances between the center marker and a set of markers around it:
    $$d_i^m = \sqrt{(x_i^m - x_0^m)^2 + (y_i^m - y_0^m)^2 + (z_i^m - z_0^m)^2} \tag{2}$$
  • Step 3: For the captured pattern, compute the distances between the center marker and the same set of markers as in Step 2:
    $$d_i^p = \sqrt{(x_i^p - x_0^p)^2 + (y_i^p - y_0^p)^2 + (z_i^p - z_0^p)^2} \tag{3}$$
  • Step 4: Compute the total difference of all the distances:
    $$d = \sum_i d_i = \sum_i \left| d_i^m - d_i^p \right| \tag{4}$$
  • Step 5: Find the optimal interception plane $P(a, b, c)$ that minimizes $d$:
    $$\bar{P} = \arg\min_P d \tag{5}$$
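The following Python sketch implements Steps 1–5 as a brute-force search (the paper reports a MATLAB implementation; the grid resolution and helper names here are our own assumptions, and `captured_dists` holds the distances $d_i^p$ from Step 3).

```python
import numpy as np
from itertools import product

def intercept(markers3d, C, n):
    """Steps 1-2: points where the rays C -> marker_i meet the plane n.x = 1."""
    d = markers3d - C                          # ray directions, shape (N, 3)
    t = (1.0 - C @ n) / (d @ n)                # per-ray parameter t_i
    return C + t[:, None] * d

def plane_search(markers3d, captured_dists, C, grid):
    """Steps 2-5: brute-force search for the plane (a, b, c) minimizing d."""
    best_d, best_n = np.inf, None
    for abc in product(grid, grid, grid):
        n = np.asarray(abc, dtype=float)
        denom = (markers3d - C) @ n
        if np.any(np.abs(denom) < 1e-9):       # plane parallel to some ray
            continue
        pts = intercept(markers3d, C, n)
        dm = np.linalg.norm(pts - pts[0], axis=1)   # pts[0]: center marker
        d = np.abs(dm - captured_dists).sum()       # total difference (Step 4)
        if d < best_d:
            best_d, best_n = d, n
    return best_n, best_d

# Usage sketch: lift the designed 2D markers to z = 0, pick an arbitrary C.
# markers3d = np.column_stack([xy_design, np.zeros(len(xy_design))])
# C = np.array([0.0, 0.0, -10.0])              # arbitrary center off the plane
# grid = np.linspace(-50, 50, 100)             # the search range of Section 3
# n_opt, d_min = plane_search(markers3d, captured_dists, C, grid)
```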

The intercepted points (markers) are computed in a virtual coordinate system rather than the world coordinate system. A registration between the original points and the intercepted points is therefore needed to convert the coordinates correctly. We register the two sets of points in the least-squares sense by finding the transformation matrix $A$ that minimizes the sum of squared errors $d_r$:

$$\begin{bmatrix} \bar{x}_i^p \\ \bar{y}_i^p \\ \bar{z}_i^p \\ 1 \end{bmatrix} = \omega \begin{bmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \\ a_{41} & a_{42} & a_{43} & a_{44} \end{bmatrix} \begin{bmatrix} x_i^m \\ y_i^m \\ z_i^m \\ 1 \end{bmatrix} \tag{6}$$
$$d_r = \sum_{i=1}^{44} (\bar{x}_i^p - x_i^p)^2 + (\bar{y}_i^p - y_i^p)^2 + (\bar{z}_i^p - z_i^p)^2 \tag{7}$$
$$\bar{A} = \arg\min_A d_r \tag{8}$$

where $\omega$ is a constant and the transformation matrix $A$ is defined as:

$$A = \begin{bmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \\ a_{41} & a_{42} & a_{43} & a_{44} \end{bmatrix} \tag{9}$$
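A minimal least-squares sketch of this registration in Python: since $\omega$ is a constant, it can be absorbed into $A$, so a direct linear solve over homogeneous coordinates suffices. The function name is illustrative, not the paper's.

```python
import numpy as np

def register(modeled, captured):
    """Fit the 4x4 matrix (omega absorbed into A) mapping modeled points
    to captured points in the least-squares sense, and return d_r."""
    Hm = np.hstack([modeled, np.ones((len(modeled), 1))])    # (N, 4)
    Hp = np.hstack([captured, np.ones((len(captured), 1))])  # (N, 4)
    W, *_ = np.linalg.lstsq(Hm, Hp, rcond=None)              # Hm @ W = Hp (LS)
    registered = (Hm @ W)[:, :3]                             # x-bar, y-bar, z-bar
    dr = ((registered - captured) ** 2).sum()                # Eq. (7)
    return W.T, registered, dr                               # A = W.T
```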

The proposed method can be summarized as follows. The rays are modeled with an arbitrarily assigned center outside the plane in which the markers lie. The plane $P(a, b, c)$ intercepts the modeled rays to generate the modeled pattern. A transformation is then used to match the modeled pattern with the originally computed pattern. Without noise and radial distortion, the modeled pattern and the originally computed pattern should match exactly. Because the projection center is assigned arbitrarily for ray modeling, we must ensure that the transformation matrix exists before estimating it by least squares. Hence, the proposed method relies on the following two lemmas.

Lemma 1: Two different planes $P_1(a_1, b_1, c_1)$ and $P_2(a_2, b_2, c_2)$ intercept the modeled rays $R$ with central projection and produce two different patterns $M_1(x_i^1, y_i^1, z_i^1;\ i = 1, 2, \dots, N)$ and $M_2(x_i^2, y_i^2, z_i^2;\ i = 1, 2, \dots, N)$, where $N$ denotes the number of markers in the pattern. $M_1$ and $M_2$ can be transformed into each other by a transformation matrix $A$.

Proof:

From the properties of central projection and the definition of homography, there is a homography $H$ between the two sets of planar markers. The following two formulations hold:

$$\begin{bmatrix} \omega x_i^2 \\ \omega y_i^2 \\ \omega \end{bmatrix} = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix} \begin{bmatrix} x_i^1 \\ y_i^1 \\ 1 \end{bmatrix} \tag{10}$$
$$z_i^2 = (1 - a_2 x_i^2 - b_2 y_i^2)/c_2 \tag{11}$$

Equation (11) can be rewritten in terms of $x_i^1$ and $y_i^1$ as:

$$z_i^2 = (H_{31} x_i^1 + H_{32} y_i^1 + H_{33})/\omega \tag{12}$$

where

$$H_{31} = \frac{-a_2 h_{11} - b_2 h_{21}}{c_2} \tag{13}$$
$$H_{32} = \frac{-a_2 h_{12} - b_2 h_{22}}{c_2} \tag{14}$$
$$H_{33} = \frac{\omega - a_2 h_{13} - b_2 h_{23}}{c_2} \tag{15}$$

Combining Eqs. (10)-(15), we obtain the following equation.

$$\begin{bmatrix} \omega x_i^2 \\ \omega y_i^2 \\ \omega z_i^2 \\ \omega \end{bmatrix} = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ H_{31} & H_{32} & H_{33} \\ h_{31} & h_{32} & h_{33} \end{bmatrix} \begin{bmatrix} x_i^1 \\ y_i^1 \\ 1 \end{bmatrix} \tag{16}$$

Let $B = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ H_{31} & H_{32} & H_{33} \\ h_{31} & h_{32} & h_{33} \end{bmatrix}$ and $D = B^T B$. Then Eq. (16) can be rewritten as:

$$\begin{bmatrix} x_i^1 \\ y_i^1 \\ 1 \end{bmatrix} = D^{-1} B^T \begin{bmatrix} \omega x_i^2 \\ \omega y_i^2 \\ \omega z_i^2 \\ \omega \end{bmatrix} = \begin{bmatrix} h_{11}' & h_{12}' & h_{13}' & h_{14}' \\ h_{21}' & h_{22}' & h_{23}' & h_{24}' \\ h_{31}' & h_{32}' & h_{33}' & h_{34}' \end{bmatrix} \begin{bmatrix} \omega x_i^2 \\ \omega y_i^2 \\ \omega z_i^2 \\ \omega \end{bmatrix} \tag{17}$$

From the above equation, we obtain the values of $x_i^1$ and $y_i^1$, respectively:

$$x_i^1 = h_{11}'\omega x_i^2 + h_{12}'\omega y_i^2 + h_{13}'\omega z_i^2 + h_{14}'\omega \tag{18}$$
$$y_i^1 = h_{21}'\omega x_i^2 + h_{22}'\omega y_i^2 + h_{23}'\omega z_i^2 + h_{24}'\omega \tag{19}$$

By definition, the following formulation holds.

$$z_i^1 = (1 - a_1 x_i^1 - b_1 y_i^1)/c_1 \tag{20}$$

Substituting Eq. (18) and Eq. (19) into Eq. (20), we obtain $z_i^1$ in the following form:

$$z_i^1 = H_{31}'\omega x_i^2 + H_{32}'\omega y_i^2 + H_{33}'\omega z_i^2 + H_{34}'\omega \tag{21}$$

where

$$H_{31}' = \frac{-a_1 h_{11}' - b_1 h_{21}'}{c_1} \tag{22}$$
$$H_{32}' = \frac{-a_1 h_{12}' - b_1 h_{22}'}{c_1} \tag{23}$$
$$H_{33}' = \frac{-a_1 h_{13}' - b_1 h_{23}'}{c_1} \tag{24}$$
$$H_{34}' = \frac{1}{\omega c_1} - \frac{a_1 h_{14}' + b_1 h_{24}'}{c_1} \tag{25}$$

Combining Eq. (17) and Eq. (21), the following equation is obtained:

$$\begin{bmatrix} x_i^1 \\ y_i^1 \\ z_i^1 \\ 1 \end{bmatrix} = \begin{bmatrix} h_{11}' & h_{12}' & h_{13}' & h_{14}' \\ h_{21}' & h_{22}' & h_{23}' & h_{24}' \\ H_{31}' & H_{32}' & H_{33}' & H_{34}' \\ h_{31}' & h_{32}' & h_{33}' & h_{34}' \end{bmatrix} \begin{bmatrix} \omega x_i^2 \\ \omega y_i^2 \\ \omega z_i^2 \\ \omega \end{bmatrix} \tag{26}$$

The transformation matrix $A$ thus exists:

$$A = \begin{bmatrix} h_{11}' & h_{12}' & h_{13}' & h_{14}' \\ h_{21}' & h_{22}' & h_{23}' & h_{24}' \\ H_{31}' & H_{32}' & H_{33}' & H_{34}' \\ h_{31}' & h_{32}' & h_{33}' & h_{34}' \end{bmatrix} \tag{27}$$

The lemma is proved.
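As a numerical sanity check of Lemma 1 (ours, not part of the paper), the projective map from $P_1$-points to $P_2$-points through the center $C$ can be written down directly and verified in Python:

```python
import numpy as np

rng = np.random.default_rng(0)

def intercept(markers, C, n):
    """Central projection of 3D points from center C onto the plane n.x = 1."""
    d = markers - C
    t = (1.0 - C @ n) / (d @ n)
    return C + t[:, None] * d

# Designed markers on the z = 0 plane and an arbitrary projection center C.
M = np.column_stack([rng.uniform(-1, 1, (25, 2)), np.zeros(25)])
C = np.array([0.2, -0.3, 5.0])
n1 = np.array([0.1, 0.2, 0.5])       # plane P1
n2 = np.array([-0.3, 0.1, 0.4])      # plane P2
M1, M2 = intercept(M, C, n1), intercept(M, C, n2)

# The 4x4 projective map from P1-points to P2-points through C:
# A = (pi . C_h) I - C_h pi^T, with pi = (a2, b2, c2, -1) and C_h = (C, 1).
pi = np.append(n2, -1.0)
Ch = np.append(C, 1.0)
A = (pi @ Ch) * np.eye(4) - np.outer(Ch, pi)

H1 = np.hstack([M1, np.ones((25, 1))])
img = H1 @ A.T
img = img[:, :3] / img[:, 3:]        # dehomogenize
print(np.abs(img - M2).max())        # ~1e-15: A exists, as Lemma 1 states
```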

Lemma 2: For a given pattern $M(x_i, y_i, z_i;\ i = 1, 2, \dots, N)$, two sets of modeled rays $R_1$ and $R_2$ are obtained with two different projection centers $C_1$ and $C_2$. Suppose a plane $P_1(a_1, b_1, c_1)$ intercepts the modeled rays $R_1$ with a pattern $M_1(x_i^1, y_i^1, z_i^1;\ i = 1, 2, \dots, N)$ and a plane $P_2(a_2, b_2, c_2)$ intercepts the modeled rays $R_2$ with a pattern $M_2(x_i^2, y_i^2, z_i^2;\ i = 1, 2, \dots, N)$. Then $M_1$ and $M_2$ can be transformed into each other by a transformation matrix $A$.

Proof:

According to Lemma 1, there is a transformation matrix $A_1$ between $M(x_i, y_i, z_i;\ i = 1, 2, \dots, N)$ and $M_1(x_i^1, y_i^1, z_i^1;\ i = 1, 2, \dots, N)$, which is produced by the plane $P_1(a_1, b_1, c_1)$ intercepting the modeled rays $R_1$:

$$\begin{bmatrix} x_i \\ y_i \\ z_i \\ 1 \end{bmatrix} = A_1 \begin{bmatrix} \omega_1 x_i^1 \\ \omega_1 y_i^1 \\ \omega_1 z_i^1 \\ \omega_1 \end{bmatrix} \tag{28}$$

There is also a transformation matrix $A_2$ between $M(x_i, y_i, z_i;\ i = 1, 2, \dots, N)$ and $M_2(x_i^2, y_i^2, z_i^2;\ i = 1, 2, \dots, N)$, which is produced by the plane $P_2(a_2, b_2, c_2)$ intercepting the modeled rays $R_2$:

$$\begin{bmatrix} x_i \\ y_i \\ z_i \\ 1 \end{bmatrix} = A_2 \begin{bmatrix} \omega_2 x_i^2 \\ \omega_2 y_i^2 \\ \omega_2 z_i^2 \\ \omega_2 \end{bmatrix} \tag{29}$$

Combining Eq. (28) and Eq. (29), we get:

$$\begin{bmatrix} x_i^1 \\ y_i^1 \\ z_i^1 \\ 1 \end{bmatrix} = A_1^{-1} A_2 \begin{bmatrix} \omega_2 x_i^2/\omega_1 \\ \omega_2 y_i^2/\omega_1 \\ \omega_2 z_i^2/\omega_1 \\ \omega_2/\omega_1 \end{bmatrix} \tag{30}$$

As can be seen, the transformation matrix $A$ exists:

$$A = A_1^{-1} A_2 \tag{31}$$

The lemma is proved.

For the practical implementation, the search range for the optimal parameters $(a, b, c)$ must be limited, since an exhaustive search over all possible values is intractable. The search ranges were chosen as $a \in [-50, 50]$, $b \in [-50, 50]$ and $c \in [-50, 50]$ in the experiments. The search evaluates on the order of $10^6$ candidate planes and takes less than one minute in MATLAB.

Since the search range is fixed, the center used to model the rays affects the equations of the modeled rays, which in turn may affect the final pattern modeling accuracy significantly. An additional search around the arbitrarily assigned center is therefore performed to find a center that yields more accurate registration results, judged by the following two error criteria (a code sketch follows):

$$x_e = \frac{1}{N}\sum_{i=1}^N (X_m^i - X_o^i)^2 \tag{32}$$
$$y_e = \frac{1}{N}\sum_{i=1}^N (Y_m^i - Y_o^i)^2 \tag{33}$$

where $(X_m^i, Y_m^i)$ denotes the $i$th modeled point and $(X_o^i, Y_o^i)$ denotes the $i$th original point.
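A minimal Python sketch of this center search, assuming a hypothetical helper `model_fn` that reruns the plane search of this section for a given center; the step size is our assumption (the paper does not state one).

```python
import numpy as np
from itertools import product

def mse_errors(modeled, original):
    """The two error criteria x_e and y_e (per-axis mean squared errors)."""
    xe = np.mean((modeled[:, 0] - original[:, 0]) ** 2)
    ye = np.mean((modeled[:, 1] - original[:, 1]) ** 2)
    return xe, ye

def center_search(C0, model_fn, original, radius=5, step=1.0):
    """Search the [-5, 5]^3 neighborhood of the assigned center C0.

    model_fn(C) is a hypothetical helper returning the registered
    modeled points for center C (i.e. Steps 1-5 plus registration).
    """
    offsets = np.arange(-radius, radius + step, step)
    best_err, best_C = np.inf, C0
    for dx, dy, dz in product(offsets, offsets, offsets):
        C = C0 + np.array([dx, dy, dz])
        xe, ye = mse_errors(model_fn(C), original)
        if xe + ye < best_err:
            best_err, best_C = xe + ye, C
    return best_C
```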

As described, the proposed 3D pattern modeling method operates in three dimensions for better accuracy. Based on Eq. (10), it can be simplified into a 2D pattern modeling method as follows (a code sketch follows the method description):

  • Step 1: For the designed pattern, compute the distances between the center marker and a set of markers around it:
    $$d_i^m = \sqrt{(x_i^m - x_0^m)^2 + (y_i^m - y_0^m)^2} \tag{34}$$
  • Step 2: For the captured pattern, compute the distances between the center marker and the same set of markers as in Step 1:
    $$d_i^p = \sqrt{(x_i^p - x_0^p)^2 + (y_i^p - y_0^p)^2} \tag{35}$$
  • Step 3: Register the designed pattern and the captured pattern in the least-squares sense by finding the transformation matrix $A$ that minimizes the sum of squared errors $d_r$:
    $$\begin{bmatrix} \bar{x}_i^p \\ \bar{y}_i^p \\ 1 \end{bmatrix} = \omega \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \begin{bmatrix} x_i^m \\ y_i^m \\ 1 \end{bmatrix} \tag{36}$$
    $$d_r = \sum_{i=1}^{44} (\bar{x}_i^p - x_i^p)^2 + (\bar{y}_i^p - y_i^p)^2 \tag{37}$$
    $$\bar{A} = \arg\min_A d_r \tag{38}$$

where $\omega$ is a constant and the transformation matrix $A$ is defined as:

$$A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \tag{39}$$
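A Python sketch of the 2D registration; as in the 3D case, the constant $\omega$ is absorbed into the matrix, so a direct least-squares solve over homogeneous 2D coordinates suffices. All names are ours.

```python
import numpy as np

def register_2d(design, captured):
    """2D pattern modeling: least-squares 3x3 transform and error d_r."""
    Hm = np.hstack([design, np.ones((len(design), 1))])      # (N, 3)
    Hp = np.hstack([captured, np.ones((len(captured), 1))])  # (N, 3)
    W, *_ = np.linalg.lstsq(Hm, Hp, rcond=None)              # Hm @ W = Hp (LS)
    registered = (Hm @ W)[:, :2]
    dr = ((registered - captured) ** 2).sum()                # Eq. (37)
    return W.T, registered, dr                               # A = W.T
```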

In [25], the authors use homography estimation to remove perspective distortion rather than radial distortion, and we have found no literature that uses a homography transformation to remove radial distortion. We therefore believe that both the proposed 3D pattern modeling method and the proposed 2D pattern modeling method are original; the 2D method is a simplified version of the 3D method.

4. Experimental results

First, the exemplary pattern shown in Fig. 2 is modeled by the 3D pattern modeling method; Figs. 3(a) and 3(b) show the modeled coordinates (in red) versus the original coordinates (in blue). To show the noise removal effect of the proposed method, the differences of the x and y coordinates of these 45 points after modeling are plotted in Figs. 3(c) and 3(d), respectively, for comparison with Figs. 2(c) and 2(d). The proposed method works well: the noise (random variation) is eliminated. Note that the original points are the points computed directly from the captured image, while the modeled points are the points on the registered ideal pattern, which is computed from the designed ideal pattern by Eqs. (1)–(9) or Eqs. (34)–(39).

Fig. 3 Results of modeling the pattern (Fig. 2(b)) by the proposed 3D pattern modeling method without center searching: (a) modeled and original x coordinates; (b) modeled and original y coordinates; (c) x coordinate differences after modeling; (d) y coordinate differences after modeling. (The y axis label is pixel and the x axis label is index number.)

For the modeled results shown in Fig. 3, the computed mean squared errors (MSEs) $x_e$ and $y_e$ are 2.8061 and 2.0916, respectively. We then search for a new projection center within a small range $[-5, 5]$ in all three dimensions and keep the center that yields the minimum MSE. The MSEs $x_e$ and $y_e$ are reduced to 2.2204 and 1.6357, respectively, and the results are shown in Fig. 4. Since the MSE improvement is modest, the visual difference is not obvious; nevertheless, the modeled coordinates with the new center match better than those with the original center, which indicates that finding the optimal parameters is a challenging engineering problem requiring great effort.

Fig. 4 Results of modeling the pattern (Fig. 2(b)) by the proposed 3D pattern modeling method with center searching: (a) modeled x coordinates versus original x coordinates; (b) modeled y coordinates versus original y coordinates. (The y axis label is in mm and the x axis label is index number.)

Second, two sets of detected corners of a camera calibration pattern [20] are modeled by the 3D pattern modeling method to compare the accuracy without and with center searching. The results of modeling the first set are shown in Fig. 5 and Fig. 6. There are obvious mismatches between the modeled y coordinates and the original y coordinates in Fig. 5; the computed $x_e$ and $y_e$ are 1.8155 and 2.8142, respectively. After searching around the projection center within the range $[-5, 5]$ in three dimensions, the MSEs are reduced to $x_e$ = 1.074 and $y_e$ = 1.6262. The modeling results with the new center are shown in Fig. 6; the modeled points and the original ones match significantly better than in Fig. 5. The modeling results for the second set of corner points are shown in Fig. 7 and Fig. 8. The computed MSEs without center searching (Fig. 7) are $x_e$ = 2.0002 and $y_e$ = 3.2223; with the newly searched center, they are reduced to $x_e$ = 1.8155 and $y_e$ = 2.8142.

Fig. 5 Results of modeling a set of corner points by the proposed 3D pattern modeling method without center searching: (a) captured calibration pattern with a set of detected corners; (b) modeled points against the original points; (c) modeled x coordinates against original x coordinates; (d) modeled y coordinates against original y coordinates. (The y axis label is pixel and the x axis label is index number for (c) and (d).)

Fig. 6 Results of modeling a set of corner points by the proposed 3D pattern modeling method with center searching: (a) modeled x coordinates against original x coordinates; (b) modeled y coordinates against original y coordinates; (c)-(d) modeled points against the original points. (The y axis label is pixel and the x axis label is index number for (a) and (b).)

Fig. 7 Results of modeling another set of corner points by the proposed 3D pattern modeling method without center searching: (a) captured calibration pattern with a set of detected corners; (b) modeled points against the original points; (c) modeled x coordinates against original x coordinates; (d) modeled y coordinates against original y coordinates. (The y axis label is pixel and the x axis label is index number for (c) and (d).)

Fig. 8 Results of modeling another set of corner points by the proposed 3D pattern modeling method with center searching: (a) modeled x coordinates against original x coordinates; (b) modeled y coordinates against original y coordinates; (c)-(d) modeled points against the original points. (The y axis label is pixel and the x axis label is index number for (a) and (b).)

Third, we use a different camera calibration pattern to compare the performance of the 3D and 2D pattern modeling methods; the results are shown in Fig. 9 and Fig. 10. In Fig. 9(b), there are obvious mismatches between the modeled y coordinates and the original y coordinates, and in Fig. 9(c) the mismatches between the modeled points and the original points are also significant. In contrast, both the coordinates and the points modeled by the 3D pattern modeling method match the original ones very well, as shown in Fig. 10. Figure 11 shows further results of the proposed 3D pattern modeling method on captured camera calibration patterns with different orientations and resolutions. As can be seen, the modeled points match the corners well.

Fig. 9 Results of modeling the corner points by the proposed 2D pattern modeling method: (a) modeled x coordinates against original x coordinates; (b) modeled y coordinates against original y coordinates; (c) modeled points against the original points; (d) modeled points overlaid on the original pattern. (The y axis label is pixel and the x axis label is index number for (a) and (b).)

Fig. 10 Results of modeling the corner points by the proposed 3D pattern modeling method with center searching: (a) modeled x coordinates against original x coordinates; (b) modeled y coordinates against original y coordinates; (c) modeled points against the original points; (d) modeled points overlaid on the original pattern. (The y axis label is pixel and the x axis label is index number for (a) and (b).)

Fig. 11 Results of modeling the corner points on different captured camera calibration patterns by the proposed 3D pattern modeling method with center searching: (a)-(d) modeled points overlaid on the original patterns 1-4.

Fourth, we show the results of modeling the pattern of an SNF laser captured while measuring the 3D weld pool shape. The pattern contains 11 rows and 11 columns of laser points, with equal distances between adjacent points in each row or column. Unlike the camera and the projector, the SNF laser does not suffer from radial distortion, so in this application the proposed pattern modeling method only removes the noise. Figure 12 shows the results of modeling the laser pattern; both the modeled coordinates and the modeled points match the original ones well.

Fig. 12 Results of modeling the SNF laser points: (a) modeled x coordinates against original x coordinates; (b) modeled y coordinates against original y coordinates; (c) modeled points against the original points; (d) modeled points overlaid on the original laser pattern. (The y axis label is pixel and the x axis label is index number for (a) and (b).)

Fifth, we compare the accuracy and computation time of the proposed methods in Table 1, where 3D method 1 denotes the 3D method without center searching and 3D method 2 the 3D method with center searching. The 3D pattern modeling method is superior to the 2D pattern modeling method in accuracy, while the 2D method is significantly more efficient. Hence, the 2D method may be preferred in cases that do not require extremely high accuracy.

Table 1. Comparison of the proposed methods

Sixth, the reconstruction accuracy of the structured light system is used to demonstrate the effectiveness of the proposed 3D pattern modeling method. In [12], the following equation is used to compute the reconstruction accuracy:

$$\begin{bmatrix} E_x \\ E_y \\ E_z \end{bmatrix} = \begin{bmatrix} \frac{1}{N}\sum_{i=1}^N (X_r^i - X_o^i)^2 \\ \frac{1}{N}\sum_{i=1}^N (Y_r^i - Y_o^i)^2 \\ \frac{1}{N}\sum_{i=1}^N (Z_r^i - Z_o^i)^2 \end{bmatrix} \tag{40}$$

where $E_x$, $E_y$ and $E_z$ denote the errors in the x, y and z coordinates, respectively, $(X_r^i, Y_r^i, Z_r^i)$ denotes the $i$th reconstructed point, and $(X_o^i, Y_o^i, Z_o^i)$ denotes the $i$th original point. Without 3D pattern modeling, the measurement accuracy of the developed system is 50.8 μm² in the x coordinate, 39.1 μm² in the y coordinate and 6.8 μm² in the z coordinate. We then compute the errors of reconstructing the flat mirror with 3D pattern modeling of the camera coordinates in c2 and c3 in addition to the world coordinates in p1, p2 and p3. The reconstruction accuracy becomes 1.2 × 10⁻²⁴ μm² in the x coordinate, 4.46 × 10⁻²⁴ μm² in the y coordinate and 6.08 × 10⁻²⁵ μm² in the z coordinate, an improvement of more than 24 orders of magnitude.
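For completeness, a one-function Python sketch of this error metric (names ours):

```python
import numpy as np

def reconstruction_errors(reconstructed, original):
    """Per-axis mean squared errors (E_x, E_y, E_z) between (N, 3) arrays."""
    return np.mean((reconstructed - original) ** 2, axis=0)

# Ex, Ey, Ez = reconstruction_errors(points_reconstructed, points_original)
```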

5. Conclusion

In this paper, a pattern modeling method is proposed to remove the noise and radial lens distortion during the calibration of computer vision systems that use projectors or cameras, by registering the captured pattern with an ideal pattern. When the pattern is modeled as a whole, the degree of freedom of all the calibration markers (points) is reduced to one. After the optimal modeled pattern is obtained by registration, the noise and radial distortion are removed effectively.

The major contributions of this paper include:

  • (1) To the best of our knowledge, the proposed pattern modeling method is the only method so far that can remove the noise completely, whereas state-of-the-art computer vision systems [14–19] attempt to reduce the noise through filtering and averaging of multiple patterns.
  • (2) Compared with the literature on radial lens distortion [13], [20], [23,24], the proposed pattern modeling method removes the radial distortion and the noise as a whole, whereas existing methods must account for the effect of noise while correcting the radial distortion.
  • (3) The proposed method can conduct the pattern modeling in 3D for better accuracy or in 2D for better efficiency, offering the flexibility to meet the requirements of different computer vision applications.
  • (4) Two lemmas are proposed and proved to validate the correctness of modeling the pattern in 3D with an arbitrarily assigned projection center.

References and links

1. A. Wong, A. Mishra, K. Bizheva, and D. A. Clausi, "General Bayesian estimation for speckle noise reduction in optical coherence tomography retinal imagery," Opt. Express 18(8), 8338–8352 (2010).

2. S. Moon, S. W. Lee, and Z. Chen, "Reference spectrum extraction and fixed-pattern noise removal in optical coherence tomography," Opt. Express 18(24), 24395–24404 (2010).

3. F. Pan, W. Xiao, S. Liu, F. Wang, L. Rong, and R. Li, "Coherent noise reduction in digital holographic phase contrast microscopy by slightly shifting object," Opt. Express 19(5), 3862–3869 (2011).

4. C. T. Lin, C. C. Wei, and M. I. Chao, "Phase noise suppression of optical OFDM signals in 60-GHz RoF transmission system," Opt. Express 19(11), 10423–10428 (2011).

5. N. Brauckmann, M. Kues, P. Gross, and C. Fallnich, "Noise reduction of supercontinua via optical feedback," Opt. Express 19(16), 14763–14778 (2011).

6. M. Szkulmowski, I. Gorczynska, D. Szlag, M. Sylwestrzak, A. Kowalczyk, and M. Wojtkowski, "Efficient reduction of speckle noise in optical coherence tomography," Opt. Express 20(2), 1337–1359 (2012).

7. Y. Wang, P. Meng, D. Wang, L. Rong, and S. Panezai, "Speckle noise suppression in digital holography by angular diversity with phase-only spatial light modulator," Opt. Express 21(17), 19568–19578 (2013).

8. S. M. Jung, S. M. Yang, K. H. Mun, and S. K. Han, "Optical beat interference noise reduction by using out-of-band RF clipping tone signal in remotely fed OFDMA-PON link," Opt. Express 22(15), 18246–18253 (2014).

9. J. F. Barrera, A. Vélez, and R. Torroba, "Experimental scrambling and noise reduction applied to the optical encryption of QR codes," Opt. Express 22(17), 20268–20277 (2014).

10. P. Memmolo, V. Bianco, M. Paturzo, B. Javidi, P. A. Netti, and P. Ferraro, "Encoding multiple holograms for speckle-noise reduction in optical display," Opt. Express 22(21), 25768–25775 (2014).

11. Z. Wang, "A one-shot-projection method for measurement of specular surfaces," Opt. Express 23(3), 1912–1929 (2015).

12. Z. Z. Wang, X. Y. Huang, R. G. Yang, and Y. M. Zhang, "Measurement of mirror surfaces using specular reflection and analytical computation," Mach. Vis. Appl. 24(2), 289–304 (2013).

13. Z. Z. Wang, "Robust measurement of the diffuse surface by phase shift profilometry," J. Opt. 16(10), 105407 (2014).

14. K. Liu, Y. Wang, D. L. Lau, Q. Hao, and L. G. Hassebrook, "Dual-frequency pattern scheme for high-speed 3-D shape measurement," Opt. Express 18(5), 5229–5244 (2010).

15. C. Guan, L. G. Hassebrook, D. L. Lau, and V. G. Yalla, "Improved composite-pattern structured light profilometry by means of postprocessing," Opt. Eng. 47(9), 097203 (2008).

16. C. Je, S. W. Lee, and R. H. Park, "Colour-stripe permutation pattern for rapid structured-light range imaging," Opt. Commun. 285(9), 2320–2331 (2012).

17. C. Je, K. H. Lee, and S. W. Lee, "Multi-projector color structured-light vision," Signal Process. Image Commun. 28(9), 1046–1058 (2013).

18. L. Huang, C. S. Ng, and A. K. Asundi, "Dynamic three-dimensional sensing for specular surface with monoscopic fringe reflectometry," Opt. Express 19(13), 12809–12814 (2011).

19. W. Jang, C. Je, Y. Seo, and S. W. Lee, "Structured-light stereo: Comparative analysis and integration of structured-light and active stereo for measuring dynamic shape," Opt. Lasers Eng. 51(11), 1255–1264 (2013).

20. Z. Y. Zhang, "A flexible new technique for camera calibration," IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000).

21. Z. Z. Wang, "Monitoring of GMAW weld pool from the reflected laser lines for real time control," IEEE Trans. Ind. Inform. 10(4), 2073–2083 (2014).

22. D. C. Brown, "Decentering distortion of lenses," Photogramm. Eng. 32(3), 444–462 (1966).

23. R. Cucchiara, C. Grana, A. Prati, and R. Vezzani, "A Hough transform-based method for radial lens distortion correction," ICIAP 1, 182–187 (2003).

24. J. P. de Villiers, F. W. Leuschner, and R. Geldenhuys, "Centi-pixel accurate real-time inverse distortion correction," Proc. SPIE 7266, 726611 (2008).

25. A. K. Geetha and S. Murali, "Automatic rectification of perspective distortion from a single image using plane homography," IJCSA 3(5), 47–58 (2013).
