
Method for large-scale structured-light system calibration


Abstract

We propose a multi-stage calibration method for increasing the overall accuracy of a large-scale structured light system by leveraging the conventional stereo calibration approach using a pinhole model. We first calibrate the intrinsic parameters at a near distance and then the extrinsic parameters with a low-cost large calibration target at the designed measurement distance. Finally, we estimate pixel-wise errors from standard stereo 3D reconstructions and determine the pixel-wise phase-to-coordinate relationships using low-order polynomials. The calibrated pixel-wise polynomial functions can then reconstruct 3D coordinates directly from a pixel's phase value. We experimentally demonstrated that our proposed method achieves sub-millimeter accuracy over a large volume of 1200 (H) × 800 (V) × 1000 (D) mm³.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

The difficulty and cost of manufacturing large calibration targets and certifying their metrological dimensions prevent most calibration methods from being implemented for large-scale 3D metrology. However, recent works have shown that a camera or a camera-projector structured light system can be accurately calibrated at close range with a small calibration target while being focused at a far distance [1–3]. Nevertheless, even in this case, the lens distortions are not perfectly modeled for both devices, especially for the projector [4,5], which results in residual systematic errors that degrade the overall accuracy of the 3D imaging system.

To overcome the limitations of conventional calibration approaches, several authors [6,7] proposed the phase-to-coordinate mapping (PCM) method, which relates the 3D coordinates of a calibration target to the recovered absolute phase. Although effective, this approach requires the calibration target to be precisely positioned in space, making it cumbersome to implement in general and even more difficult to translate to large-scale structured light system calibration.

High-accuracy, large-scale 3D measurement has long been a major concern in industry [8]. However, the typical measurement setup often requires multiple sensors arranged in a multi-view configuration [9], robotic arms to perform accurate partial scans that are later stitched together in software [10], or sophisticated calibration devices [11]. Not only are these approaches cumbersome and slow, but they also increase the overall cost.

Recent developments have shown that there are alternatives to conventional and costly approaches. For example, An et al. [3] proposed a large-range structured light system calibration method with decoupled intrinsic and extrinsic parameter estimation. However, the extrinsic parameter estimation required an auxiliary low-resolution 3D sensor, and the calibration accuracy was limited by the use of the standard pinhole lens model. Wang et al. [12] used a virtual camera to enlarge the field of view (FOV) without requiring a large calibration target. Although technically sound, the method requires precise positioning of a flat mirror between the object and the camera. Other alternatives include the method by Liu et al. [13], which uses bundle adjustment to refine calibration parameters, and those by Yang et al. [4] and Lv et al. [5], which use a planar target to compensate for residual projector distortion. Along the same line, Yu et al. [14] and Xing and Guo [15] proposed refinement schemes that aim to estimate the conventional calibration parameters more precisely. However, all these methods have been evaluated only at small scales or require costly calibration targets. More recently, Vargas et al. [16] proposed a hybrid approach to improve the performance of the standard calibration method developed by Zhang and Huang [17], but it was demonstrated only for small-scale structured light system calibration.

We propose a method for calibrating large-scale structured light systems. Our method comprises three stages. First, we obtain the projector's and camera's intrinsic parameters using a small calibration target while both devices are focused at a far distance. Second, we obtain the extrinsic parameters using a low-cost, large-format calibration target (approximately $1100(H) \times 800(V) \; \textrm{mm}^2$). Third, we reconstruct the 3D surfaces of all poses of the large-format calibration target from the acquired calibration data. We fit a plane to each pose and calculate pixel-wise 3D coordinate errors to obtain corrected coordinates along the line of sight of each camera pixel. We then iteratively fit a pixel-wise third-order polynomial to each corrected coordinate as a function of the absolute phase value. The experimental results show that this approach is highly flexible and achieves highly accurate 3D reconstructions over a large volume.

2. Principles

2.1 Pinhole model

The typical structured light system consists of a digital projector and a camera in a stereo vision arrangement [18]. As shown in Fig. 1, the camera and projector follow the pinhole model. The relation between a point $P(x^w, y^w, z^w)$ on the object’s surface and its projection $(u^c , v^c )$ on the image sensor can be written as

$$s^c [u^c v^c 1]^T = \mathbf{A}^c \, [\mathbf{R}^c \; \mathbf{t}^c] \, [x^w y^w z^w 1]^T \enspace,$$
with
$$\mathbf{A}^c = \begin{bmatrix} f_u & 0 & u^c_0 \\0 & f_v & v^c_0\\0 & 0 & 1 \end{bmatrix} ,$$
where $s^c$ is an arbitrary scale factor; $\mathbf{A}^c$ is the camera's intrinsic matrix, with $f_u$ and $f_v$ the effective focal lengths along the $u$ and $v$ directions and $(u^c_0, v^c_0)$ the principal point; $\mathbf{R}^c$ is a $3 \times 3$ rotation matrix, often expressed in terms of a $3 \times 1$ vector $\mathbf{\theta_s}$ of Euler angles; and $\mathbf{t}^c$ is a $3 \times 1$ translation vector. Together, $\mathbf{R}^c$ and $\mathbf{t}^c$ form the camera's extrinsic parameters. The projector is regarded as an inverse camera for which the same relation holds:
$$s^p [u^p v^p 1]^T = \mathbf{A}^p \, [\mathbf{R}^p \; \mathbf{t}^p] \, [x^w y^w z^w 1]^T \enspace,$$
where the superscript $p$ denotes the projector’s parameters.

Fig. 1. Pinhole model of a structured-light system with the world coordinate system coinciding with the camera lens coordinate system.
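For concreteness, the following Python sketch evaluates Eq. (1) for a single world point. The intrinsic values and the point are placeholders chosen only for illustration, with the world frame coinciding with the camera frame as in Fig. 1.

```python
import numpy as np

A_c = np.array([[1600.0, 0.0, 960.0],
                [0.0, 1600.0, 600.0],
                [0.0, 0.0, 1.0]])      # intrinsic matrix A^c (placeholder values)
R_c = np.eye(3)                        # R^c: world frame coincides with camera frame
t_c = np.zeros((3, 1))                 # t^c

P_w = np.array([[100.0], [50.0], [1800.0], [1.0]])  # homogeneous world point
s_uv = A_c @ np.hstack([R_c, t_c]) @ P_w            # s^c [u^c, v^c, 1]^T of Eq. (1)
u_c, v_c = s_uv[:2, 0] / s_uv[2, 0]                 # divide out the scale factor s^c
print(u_c, v_c)                                     # projection of P onto the sensor
```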

The above model does not account for lens distortions. In reality, however, camera and projector lenses exhibit distortions, mostly radial and tangential. These distortions shift the imaged points away from their ideal locations and introduce systematic errors in the 3D reconstruction [19]. For highly accurate 3D reconstruction, these distortions must be corrected [7]. The camera and projector lens distortions can be modeled as

$$\begin{bmatrix} u_d\\v_d \end{bmatrix} = (1+k_1r^2+k_2r^4+k_3r^6)\begin{bmatrix} \bar{u} \\ \bar{v} \end{bmatrix} +\begin{bmatrix} 2p_1\bar{u}\bar{v} + p_2(r^2+2\bar{u}^2) \\ 2p_2\bar{u}\bar{v} + p_1(r^2+2\bar{v}^2) \\ \end{bmatrix} \enspace,$$
with
$$r^2 = \bar{u}^2 + \bar{v}^2 \enspace,$$
where $[k_1, k_2, k_3]$ are the radial distortion coefficients, $[p_1, p_2]$ are the tangential distortion coefficients, $[u_d, v_d]^T$ is the distorted point, and $[\bar{u}, \bar{v}]^T$ are the normalized coordinates. Although this lens distortion model has been used successfully for camera calibration, it is known to be not entirely reliable for the projector lens due to its high optical efficiency and optical offset [4,5].
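As an illustration, a minimal Python implementation of the distortion model of Eqs. (4) and (5) could look as follows; the coefficients in the example call are made up.

```python
import numpy as np

def distort(u_bar, v_bar, k1, k2, k3, p1, p2):
    """Apply the radial + tangential model of Eqs. (4)-(5) to the
    normalized coordinates (u_bar, v_bar)."""
    r2 = u_bar ** 2 + v_bar ** 2                       # Eq. (5)
    radial = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    u_d = radial * u_bar + 2 * p1 * u_bar * v_bar + p2 * (r2 + 2 * u_bar ** 2)
    v_d = radial * v_bar + 2 * p2 * u_bar * v_bar + p1 * (r2 + 2 * v_bar ** 2)
    return u_d, v_d

# Example with small, made-up coefficients: the point shifts slightly.
print(distort(0.2, -0.1, k1=-0.1, k2=0.01, k3=0.0, p1=1e-4, p2=-2e-4))
```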

2.2 Conventional calibration method

The conventional calibration method follows an approach similar to that used in computer vision for calibrating a stereo vision system. It was proposed by Zhang and Huang [17], and it models both the camera and the projector with the pinhole model. A calibration target is placed at the system's working distance, as shown in Fig. 2. Because the projector cannot capture images like a camera, a phase correspondence between camera and projector pixels is established using two sets of orthogonal fringe patterns. During the calibration procedure, the camera captures images of the calibration target at different poses. For each image, the feature points are detected in camera sensor pixels and mapped to projector digital micromirror device (DMD) pixels. Finally, the camera and the projector are calibrated following the standard camera calibration method [20], which can be implemented with the OpenCV camera calibration routines. Once the intrinsic and extrinsic parameters have been calibrated, the 3D coordinates of a point are obtained using Eqs. (1) and (3).

Fig. 2. Conventional calibration method. The calibration target is of appropriate size relative to the field of view (FOV) and the working distance. The intrinsic and extrinsic parameters are estimated via a phase correspondence between the camera and projector from all the captured target poses.
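For readers who wish to reproduce the pipeline, the following is a minimal, self-contained Python/OpenCV sketch of the conventional procedure. The circle-center detections and the phase-aided camera-to-projector mapping are replaced here by synthetic points projected from a made-up ground-truth geometry, purely so that the script runs; the cv2.CALIB_USE_INTRINSIC_GUESS flag requests the joint refinement of intrinsic and extrinsic parameters that characterizes the conventional method.

```python
import cv2
import numpy as np

# Board model: 7 x 21 circle centers, 50 mm spacing, z = 0 in the board frame.
grid = np.mgrid[0:7, 0:21].T.reshape(-1, 2).astype(np.float32)
obj = np.hstack([grid * 50.0, np.zeros((len(grid), 1), np.float32)])

# Made-up ground-truth geometry, used only to synthesize "detections".
Ac_gt = np.array([[2400, 0, 960], [0, 2400, 600], [0, 0, 1]], np.float64)
Ap_gt = np.array([[1100, 0, 456], [0, 1100, 855], [0, 0, 1]], np.float64)
R_cp = cv2.Rodrigues(np.array([0.0, 0.2, 0.0]))[0]  # camera-to-projector rotation
t_cp = np.array([[200.0], [0.0], [50.0]])           # camera-to-projector translation
dist0 = np.zeros(5)

obj_pts, cam_pts, proj_pts = [], [], []
for i in range(8):                                  # eight synthetic poses
    rvec = np.array([0.08 * i, -0.04 * i, 0.02 * i])
    tvec = np.array([[-500.0 + 40 * i], [-200.0 + 10 * i], [1700.0 + 30 * i]])
    cam, _ = cv2.projectPoints(obj, rvec, tvec, Ac_gt, dist0)
    Rw = cv2.Rodrigues(rvec)[0]
    # The projector "sees" the board through the camera-to-projector transform.
    prj, _ = cv2.projectPoints(obj, cv2.Rodrigues(R_cp @ Rw)[0],
                               R_cp @ tvec + t_cp, Ap_gt, dist0)
    obj_pts.append(obj)
    cam_pts.append(cam.astype(np.float32))
    proj_pts.append(prj.astype(np.float32))

# Conventional method: per-device intrinsics, then joint refinement of
# intrinsics and extrinsics from the very same poses.
_, Ac, dc, _, _ = cv2.calibrateCamera(obj_pts, cam_pts, (1920, 1200), None, None)
_, Ap, dp, _, _ = cv2.calibrateCamera(obj_pts, proj_pts, (912, 1140), None, None)
err, Ac, dc, Ap, dp, R, t, _, _ = cv2.stereoCalibrate(
    obj_pts, cam_pts, proj_pts, Ac, dc, Ap, dp, (1920, 1200),
    flags=cv2.CALIB_USE_INTRINSIC_GUESS)
print("stereo reprojection error:", err)
```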

2.3 Two-stage calibration method

Although the previous method works well in most circumstances, it does not translate well to large-scale settings. To address this limitation, An et al. [3] proposed calibrating a structured light system in two stages: first the intrinsic parameters and then the extrinsic parameters. Bell et al. [1] and Li et al. [2] showed that a camera and a projector can be adequately calibrated while out of focus. To avoid preparing a large calibration target, An et al. [3] used a low-resolution 3D scanner (Kinect V2) to assist the extrinsic parameter calibration. Here, we avoid ancillary equipment and assume that a large calibration target is available. The resulting modified two-stage calibration method is depicted in Fig. 3. In the first stage, the projector is focused at the far distance, and an accurate small calibration target is placed at the near distance. The camera is equipped with a short focal length lens to cover a FOV larger than the projector's. The calibration target is captured at different poses, which are used to estimate the projector's intrinsic parameters. The projector is then held fixed, and a similar procedure is carried out for the camera: although the camera is focused at the far distance, the defocus of the near target is not strong enough to make the conventional camera calibration procedure fail. Finally, in the second stage, a large calibration target is captured at different poses to estimate the system's extrinsic parameters.

Fig. 3. Two-stage calibration method. First, the intrinsic parameters of the projector and the camera are estimated at near distance with a small and accurate calibration target. Second, the extrinsic parameters are estimated with a large target.

Note that the feature extraction methods, the camera and projector models, and the parameter estimation routines are the same as those of the conventional method. In practice, however, the two-stage procedure should produce more reliable 3D reconstructions at large scale for two reasons. First, the close-range estimation of the intrinsic parameters is carried out with a small, accurate calibration target that can easily be placed in arbitrary positions, which increases the accuracy of the intrinsic calibration. Second, fixing the intrinsic parameters reduces the degrees of freedom of the calibration, so a few points from the large calibration target suffice to obtain the extrinsic parameters reliably.
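In code, the only change with respect to the conventional sketch of Section 2.2 is the flag passed to the stereo calibration: the intrinsics are held fixed and only $\mathbf{R}$ and $\mathbf{t}$ are estimated. The snippet below continues that synthetic example (in practice, the point lists would come from the large target).

```python
import cv2

# Second stage: Ac, dc, Ap, dp are the fixed intrinsics from the near-distance
# stage; only the extrinsic parameters R, t are refined from the target poses.
err, _, _, _, _, R, t, _, _ = cv2.stereoCalibrate(
    obj_pts, cam_pts, proj_pts, Ac, dc, Ap, dp, (1920, 1200),
    flags=cv2.CALIB_FIX_INTRINSIC)
print("extrinsic-only stereo error:", err)
```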

3. Large-scale calibration method

The previous calibration methods rely on the pinhole lens model. However, the distortions of most commercial lenses cannot be fully represented by this model, which leaves residual errors in the 3D reconstructions [5,7]. To address this limitation, our proposed calibration method calibrates each pixel separately and thus does not require the lens to follow the pinhole model. Our method leverages the well-established standard stereo calibration results as an initial calibration and then establishes a phase-to-3D-coordinate mapping for each pixel. This section elucidates the calibration method and its procedures.

3.1 Residual error compensation

The proposed calibration method relies on estimating the residual distortion that persists after 3D reconstruction with the pinhole-model stereo calibration. Because the conventional calibration method requires capturing many poses of the calibration target, we can reuse those same images to estimate the residual error; however, instead of using only the target's circle features, we use the entire calibration board. All board poses 1 through $n$ are reconstructed to obtain a point cloud for each pose, as depicted in Fig. 4. These 3D reconstructions have a residual error relative to an ideal flat surface. Following the line of sight of a camera pixel $(u_c, v_c)$ through the different poses, we obtain new corrected coordinates $\tilde{P}_i(\tilde{x}, \tilde{y}, \tilde{z})$ for each point $P_i$ by replacing it with the closest point on the fitted flat surface, as shown in Fig. 4.

Fig. 4. Pixel-wise calibration with residual error compensation. The 3D reconstruction of a calibration board has a residual error relative to an ideal flat surface which can be estimated to compute new corrected $\tilde{P}_i$ 3D coordinates along the line-of-sight of a given camera pixel.
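A compact sketch of this correction step is given below. We read Fig. 4 as moving each point onto the fitted plane along its camera-pixel line of sight (a ray through the origin, since the world frame coincides with the camera frame); the orthogonal projection onto the plane would be a close alternative. The point cloud at the end is synthetic toy data.

```python
import numpy as np

def fit_plane(pts):
    """Least-squares plane through an (N, 3) cloud: returns (n, d) with n a
    unit normal such that n.x + d = 0 on the plane."""
    centroid = pts.mean(axis=0)
    n = np.linalg.svd(pts - centroid, full_matrices=False)[2][2]
    return n, -n @ centroid          # normal = direction of least variance

def correct_points(pts, n, d):
    """Intersect each point's line of sight (ray x = s*p through the camera
    at the origin) with the plane n.x + d = 0."""
    s = -d / (pts @ n)               # scale where each ray meets the plane
    return pts * s[:, None]

# Toy usage: a noisy tilted plane is flattened back onto its ideal surface.
rng = np.random.default_rng(1)
xy = rng.uniform(-400, 400, (1000, 2))
z = 1800 + 0.1 * xy[:, 0] + rng.normal(0, 0.5, 1000)   # residual "wobble"
pts = np.column_stack([xy, z])
n, d = fit_plane(pts)
corrected = correct_points(pts, n, d)
print(np.abs(corrected @ n + d).max())  # ~0: corrected points lie on the plane
```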

It is also worth noting that for each corrected coordinate $\tilde{P}_i$ at a given pose, the corresponding phase value $\phi_i$ is known. Thus, it is straightforward to relate each corrected coordinate $\tilde{P}_i(\tilde{x}, \tilde{y}, \tilde{z})$ to its phase value and thereby achieve a highly accurate 3D reconstruction.

3.2 Pixel-wise polynomial fit

Having related the corrected metric coordinates to the corresponding absolute phase values, we use a pixel-wise regression model to obtain the corrected 3D coordinates directly, without solving the triangulation equations. The rationale is that the phase carries not only depth but also transversal information [21]. Marrugo et al. [22] studied the ability of different polynomials to model the phase-to-coordinate mapping and found a third-order polynomial sufficient. Therefore, we model the mapping from the absolute phase $\phi$ to the metric coordinates $(\tilde{x}, \tilde{y}, \tilde{z})$ for each camera pixel $(u_c, v_c)$ as

$$ \tilde{x} = a_3 \phi^3 + a_2 \phi^2 + a_1 \phi + a_0 \enspace, $$
$$ \tilde{y} = b_3 \phi^3 + b_2 \phi^2 + b_1 \phi + b_0 \enspace, $$
$$ \tilde{z} = c_3 \phi^3 + c_2 \phi^2 + c_1 \phi + c_0 \enspace, $$
where $a_{0-3}$, $b_{0-3}$, and $c_{0-3}$ are the coefficients of the fitted polynomials for $\tilde{x}$, $\tilde{y}$, and $\tilde{z}$, respectively. For simplicity, we have omitted the camera pixel coordinate $(u_c, v_c)$ from each term.
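At a single pixel, the fit of Eqs. (6)–(8) reduces to three one-dimensional polynomial regressions. The sketch below uses NumPy with toy phase-coordinate pairs standing in for the corrected data gathered across the poses.

```python
import numpy as np

# Per-pixel fit of Eqs. (6)-(8): corrected coordinates observed across the
# poses regressed against absolute phase with third-order polynomials.
# np.polyfit returns coefficients highest order first, i.e. [a3, a2, a1, a0].
phi = np.array([2.1, 3.0, 3.8, 4.6, 5.5, 6.3, 7.2, 8.0, 8.9, 9.7])  # per pose
xyz = np.stack([0.5 * phi + 0.01 * phi ** 2,           # toy x(phi)
                -0.3 * phi,                            # toy y(phi)
                1800 - 40 * phi + 0.2 * phi ** 3], 1)  # toy z(phi)

coeffs = [np.polyfit(phi, xyz[:, k], deg=3) for k in range(3)]  # a, b, c

def reconstruct(phi_val):
    """Map a phase value at this pixel straight to (x~, y~, z~)."""
    return np.array([np.polyval(c, phi_val) for c in coeffs])

print(reconstruct(5.0))
```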

3.3 Calibration procedures

The proposed method consists of three stages: 1) intrinsic parameter calibration, 2) extrinsic parameter calibration, and 3) error-map estimation and pixel-wise phase-to-coordinate mapping. The first two stages follow the modified two-stage calibration method described in the previous section to obtain a reliable stereo calibration. Because the intrinsic parameters are estimated first, it is reasonable to assume that the extrinsic parameters are well estimated up to a scale factor; since the calibration target is rigid, any error in its metric dimensions leads only to a scaling error. Moreover, it is known that the calibration accuracy depends not on the accuracy of the calibration-point measurement but on the accuracy of calibration-point detection in the image plane [20,23,24]. Nevertheless, the results obtained with the first two stages inevitably bear residual errors. The proposed third stage reduces these residual errors while enabling fast, accurate 3D reconstruction without solving triangulation equations or removing lens distortions. In summary, the proposed calibration procedure is as follows.

Stage 1: Intrinsic parameter calibration

  • 1. Print a small calibration target and attach it to a planar surface. Place it at the near distance.
  • 2. Focus the projector at the far distance for large-scale measurement.
  • 3. Focus the camera at the near distance with a short focal length lens.
  • 4. Acquire several images of the small calibration target at different poses using the phase-aided correspondence method [17,18].
  • 5. Run the camera calibration software to calibrate the projector.
  • 6. Focus the camera at the far distance with a long focal length lens.
  • 7. Acquire several images of the small calibration target at different poses.
  • 8. Run the camera calibration software to calibrate the camera.

Stage 2: Extrinsic parameter calibration

  • 9. Print a large-format calibration target and attach it to a planar surface (e.g., a blackboard). Place it at the far distance.

  • 10. Acquire several images of the large calibration target at different poses using the phase-aided correspondence method [17,18].
  • 11. Run the camera calibration software with the camera and projector intrinsic parameters from the previous stage to calibrate the system’s extrinsic parameters.

Stage 3: Error map and pixel-wise phase to coordinate mapping

  • 12. Reconstruct all calibration poses to obtain a point cloud for each pose.
  • 13. Estimate an ideal plane from each point cloud by least squares.
  • 14. Compute the pixel-wise error map for each pose.
  • 15. Compute corrected metric coordinates from the error map.
  • 16. Compute pixel-wise polynomials from the corrected coordinates and the recovered absolute phase using Eqs. (6), (7), and (8).
  • 17. Repeat steps 12–16 until convergence.

Please note that the initial 3D reconstruction (iteration 0) in step 12 is obtained by solving the triangulation Eqs. (1) and (3) with lens distortion correction. Subsequent iterations ($1, 2, \ldots, n$) use Eqs. (6)–(8) to obtain the 3D reconstruction with the fitted polynomials, which are refined iteratively. Additionally, if the calibration target's flatness cannot be guaranteed, a single iteration of steps 12 through 16 should be sufficient; further iterations may induce over-fitting and worsen the overall accuracy, as we will show in the next section. A minimal sketch of this loop follows.
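The following skeleton illustrates the control flow of steps 12 through 17. The functions reconstruct_poses and refit_polynomials are hypothetical stand-ins, stubbed here with synthetic data so the loop runs; in the real procedure they would apply Eqs. (1)/(3) or (6)–(8) and redo the per-pixel fits, respectively.

```python
import numpy as np

def fit_plane(pts):
    """Least-squares plane (unit normal n, offset d) through an (N, 3) cloud."""
    c = pts.mean(axis=0)
    n = np.linalg.svd(pts - c, full_matrices=False)[2][2]
    return n, -n @ c

# Hypothetical stand-ins, stubbed so that the control flow itself executes.
def reconstruct_poses(model, rng=np.random.default_rng(0)):
    return [np.column_stack([rng.uniform(-400, 400, (500, 2)),
                             1800 + rng.normal(0, 0.4, 500)])
            for _ in range(12)]

def refit_polynomials(clouds):
    return clouds

model, prev_rms, tol = None, np.inf, 0.01     # 0.01 mm tolerance, as in Fig. 9(b)
for it in range(10):
    clouds = reconstruct_poses(model)                     # step 12
    errs = []
    for pts in clouds:
        n, d = fit_plane(pts)                             # step 13: ideal plane
        errs.append(pts @ n + d)                          # step 14: signed errors
    model = refit_polynomials(clouds)                     # steps 15-16
    rms = np.sqrt(np.mean(np.concatenate(errs) ** 2))
    print(f"iteration {it}: RMS = {rms:.3f} mm")
    if abs(prev_rms - rms) < tol:                         # step 17: converged
        break
    prev_rms = rms
```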

4. Experiments and results

4.1 Experimental setup

To verify the performance of the proposed method, we developed the structured light system shown in Fig. 5(a). It consists of a complementary metal-oxide-semiconductor (CMOS) camera with a resolution of $1920 \times 1200$ pixels (Model: Point Grey Research Grasshopper3 GS3-U3-23S6M), a digital light processing (DLP) projector with a resolution of $912 \times 1140$ pixels (Model: Texas Instruments LightCrafter 4500), and a microprocessor (Model: Arduino Uno). We used two camera lenses: an 8 mm focal length lens with an f-number of 1.4 (Model: Computar M0814-MP2) for calibrating the projector intrinsic parameters, and a 16 mm focal length lens with an f-number of 1.4 (Model: Computar M1614-MP) for the final setup. The microprocessor synchronizes the camera with the projector.

Fig. 5. (a) The structured light system components and arrangement. (b) The system and the large calibration target.

We used two calibration targets with $7 \times 21$ white circles on a black background: a small target with $10.0$ mm circle spacing and a low-cost large-format calibration target with $50.0$ mm circle spacing. The small calibration target was prepared following the guidelines from Zhang [18]. The large calibration target was fixed to a large flat surface, as shown in Fig. 5(b). Both calibration patterns were designed with CAD software and printed using high-accuracy laser printers.

4.2 Calibration procedure

We followed the three calibration methods described earlier to calibrate the system. In all cases, to obtain a reliable phase-aided correspondence, we used binary defocused fringes with an 18-pixel pitch and an 18-step phase-shifting algorithm for phase recovery, along with 7-bit gray-coding patterns for absolute phase unwrapping [18]. To ensure high accuracy, we acquired images from 24 poses of the calibration target for each calibration stage.

We began with the two-stage calibration method. First, we attached the short focal length lens to the camera to calibrate the projector's intrinsic parameters while the projector was focused at a far distance (approximately 1800 mm); this procedure was carried out with the small calibration target placed at the near distance. The projector was then left fixed, we attached the long focal length lens to the camera, and we set the camera orientation to match the projector's FOV at the far distance. We then calibrated the camera's intrinsic parameters while it was focused at the far distance, again with the small calibration target placed at the near distance. In the second stage, we used the large calibration target to calibrate the structured light system's extrinsic parameters using 24 poses that covered an approximate volume of $1000(H) \times 700(V) \times 400(D) \; \textrm{mm}^3$ ($x, y, z$). The intrinsic and extrinsic parameters for the two-stage method are shown in Table 1. The reprojection errors were 0.24, 0.34, and 1.48 pixels for the projector, the camera, and the stereo system, respectively; these errors are relatively small for the considered scale.

Table 1. System parameters obtained with the two-stage calibration method. Reprojection errors in pixels: projector (0.24), camera (0.34), and stereo (1.48).
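For reference, the phase recovery underlying the phase-aided correspondence can be sketched as an N-step least-squares phase retrieval, verified below with N = 18 on a synthetic ramp. Gray-code unwrapping and the binary-defocusing details are omitted.

```python
import numpy as np

def wrapped_phase(images):
    """N-step phase retrieval: images is an (N, H, W) stack with
    I_k = I' + I'' cos(phi - 2*pi*k/N); returns phi wrapped to (-pi, pi]."""
    N = images.shape[0]
    k = np.arange(N).reshape(-1, 1, 1)
    s = np.sum(images * np.sin(2 * np.pi * k / N), axis=0)
    c = np.sum(images * np.cos(2 * np.pi * k / N), axis=0)
    return np.arctan2(s, c)

# Toy check with N = 18 steps and a known phase ramp.
H, W, N = 4, 64, 18
phi_true = np.tile(np.linspace(0.1, 3.0, W), (H, 1))
k = np.arange(N).reshape(-1, 1, 1)
imgs = 120 + 100 * np.cos(phi_true[None] - 2 * np.pi * k / N)
print(np.allclose(wrapped_phase(imgs), phi_true))   # True
```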

For the conventional calibration method, we used the same 24 poses as in the two-stage method, but here we used them to estimate the structured light system's intrinsic and extrinsic parameters simultaneously. The resulting parameters, shown in Table 2, differ slightly from those obtained with the two-stage method. The reprojection errors were 0.13, 0.36, and 0.50 pixels for the projector, the camera, and the stereo system, respectively. These reprojection errors are somewhat smaller than those of the two-stage method and can also be considered suitable for the considered scale. However, as the following results show, the reprojection error alone is not sufficient for assessing calibration performance.

Table 2. System parameters obtained with the conventional calibration method. Reprojection errors: projector (0.13), camera (0.36), and stereo (0.50).

Finally, to calibrate the system with the proposed method, we obtained the 3D reconstructions of all 24 calibration target poses from the previous stage. To avoid phase errors at the black-to-white transitions of the calibration target, we fitted a fifth-order 2D polynomial to the phase values and interpolated it over all pixels, following a procedure similar to that described in Ref. [25]. We estimated an ideal flat surface from each point cloud and computed the residual error. We then obtained the corrected 3D coordinates and performed the pixel-wise polynomial fitting of the corrected $(\tilde{x}, \tilde{y}, \tilde{z})$ coordinates as functions of the phase $\phi$ along the line of sight of each camera pixel. Figure 6 shows the RMS fitting error maps for each metric coordinate; the fitting errors are small throughout the FOV, and their distribution is smooth. Although only four data points are ideally required for a third-order polynomial fit, we fit only pixels with at least ten measurement points to obtain a reliable result, as shown in the bottom row of Fig. 6 for the fit at pixel (500, 500). The third-order polynomial describes the phase-to-coordinate relation sufficiently well.

Fig. 6. (From left to right) Top: the RMS residual fitting error maps of $\tilde{x}$, $\tilde{y}$, and $\tilde{z}$ versus phase $\phi$ calibration. The maps show a small error throughout most of the field of view. Bottom: scatter plot and fitting result at pixel (500, 500) for $\tilde{x}$, $\tilde{y}$, and $\tilde{z}$ versus phase $\phi$.
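The transition-error smoothing step mentioned above can be sketched as a least-squares fit of a fifth-order bivariate polynomial to the valid phase samples, evaluated afterwards at every pixel. The mask and phase map below are synthetic; in practice, the mask would exclude the black-to-white transition pixels.

```python
import numpy as np

def poly_design(u, v, order=5):
    """Design matrix of monomials u^i * v^j with i + j <= order."""
    cols = [u ** i * v ** j for i in range(order + 1)
            for j in range(order + 1 - i)]
    return np.stack(cols, axis=-1)

H, W = 120, 160
v, u = np.mgrid[0:H, 0:W].astype(float)
phase = 0.02 * u + 0.001 * v + 0.5                       # toy smooth phase ramp
valid = np.random.default_rng(0).random((H, W)) > 0.3    # pixels kept for the fit

A = poly_design(u[valid] / W, v[valid] / H)              # normalized coordinates
coef, *_ = np.linalg.lstsq(A, phase[valid], rcond=None)  # least-squares fit
phase_full = poly_design(u / W, v / H) @ coef            # interpolate to all pixels
print(np.abs(phase_full - phase).max())                  # ~0 for this smooth ramp
```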

4.3 Validation experiments

To assess the proposed method's validity and accuracy, we measured the back surface of a large flat mirror at different distances and orientations, as shown in Fig. 7. The measurements are labeled plane 1 to plane 10 to indicate the different positions. The calibration volume containing the 24 calibration poses is highlighted in pink. Planes 1 and 2 were placed in front of the calibration volume, and planes 3 through 5 were within it. Planes 6 through 8 were behind the calibration volume at distances exceeding 2000 mm, where the FOV is $1200(H) \times 800(V) \; \textrm{mm}^2$. Finally, planes 9 and 10 were slanted planes mostly within the calibration volume.

Fig. 7. Validation measurements of different planes inside and outside the calibration volume.

Table 3 gives the root-mean-square (RMS) error with respect to an ideal plane for each position and calibration method. In all cases, the proposed method outperforms the other two calibration methods, with an RMS error well below 1 mm for such a large object. While the error is noticeably lower within the calibration volume, the error outside it remains acceptable. Note also that the two-stage calibration method generally yields lower errors than the conventional method, though not as low as the proposed method.

Table 3. Root mean square error (RMS) with respect to ideal plane for different positions.

To show how the errors are distributed across the FOV for each calibration method, Fig. 8 shows the error maps and histograms for selected measurements. The error maps show that the proposed method achieves a more uniform error distribution throughout the FOV, whereas the conventional and two-stage calibration methods leave substantial residual errors. Moreover, the histograms show an almost symmetric, centered error distribution for the proposed method, which the other methods never achieve. As expected, the best performance occurs within and near the calibration volume. Although the slanted plane is the most challenging measurement, since different parts of the object lie at different depths from the system, the proposed method still performs well, with a mostly uniform error distribution and an RMS error of 0.87 mm (Table 3). These experiments show that the proposed calibration method performs well at such a large scale.

Fig. 8. Error maps and histograms from selected validation planes. From left to right: plane position relative to calibration volume, error maps from the conventional method, the two-stage method, the proposed method, and the error histogram. The proposed method achieves a more uniform error distribution throughout the FOV and an almost symmetric and centered error distribution.

To assess the benefit of the iterative refinement step in stage 3 of the proposed method, we compared the performance of the refinement when driven by the large printed calibration target versus the flat mirror surface. We used 18 poses of the flat surface: 14 poses for the calibration refinement and 4 poses for validation. Figure 9 shows that a single iteration (from 0 to 1) significantly reduces the RMS error of the two-stage calibration, and the iterative refinement (iterations 1 to 3) reduces the error further.

Fig. 9. (a) RMS residual error per pose with the proposed method optimized with a flat surface. Iteration 0 is the initial error from the two-stage calibration method. (b) The method converges after three iterations with a tolerance of 0.01 mm.

Table 4 shows the comparison results. The four validation poses were positions 2, 4, 7, and 10 in Fig. 7, which place the flat reference surface at different distances from the structured light system. As expected, regardless of the number of refinement iterations, the RMS errors obtained with a truly flat surface are lower than the target-optimized errors. It is nevertheless remarkable that the iterative refinement further reduces the errors when optimized with a flat surface, in some cases by up to 10%: for position 7, placed at a distance exceeding 2200 mm, the RMS error drops from 0.424 mm to 0.381 mm. Conversely, further iterations with the printed target produce marginally worse results, probably due to over-fitting to a surface that is not perfectly flat.

Table 4. Comparison of the iterative refinement (stage 3 of proposed method) carried out with the printed calibration target versus a flat surface.

Finally, we measured the large, complex scene shown in Fig. 10. The scene consists of different statues arranged at different distances and heights, covering a volume of about $1200~\mathrm{mm} \times 800~\mathrm{mm} \times 600~\mathrm{mm}$. The structured light system successfully measured the whole scene with remarkable detail; the 3D reconstruction has not been filtered.

Fig. 10. Measurement of a large scene. The scene is about $1200~\mathrm{mm} \times 800~\mathrm{mm} \times 600~\mathrm{mm}$. (a) Layout of the measured scene. (b) 3D reconstruction with the proposed method.

5. Summary

We have presented a calibration method that improves calibration accuracy without sophisticated procedures or expensive calibration artifacts, which are often prohibitive for large-scale measurements. The proposed calibration method leverages existing calibration methods to produce highly accurate large-scale measurements, and the concept of pixel-wise residual error compensation with iterative refinement yields remarkable results when measuring large objects.

Funding

Fulbright Colombia; Departamento Administrativo de Ciencia, Tecnología e Innovación (COLCIENCIAS) (935-2019); National Science Foundation (IIS-1637961, IIS-1763689).

Acknowledgments

R. Vargas thanks Universidad Tecnológica de Bolívar (UTB) for a PhD scholarship, and MinCiencias and MinSalud for a “Joven Talento” scholarship. L.A. Romero and A.G. Marrugo thank UTB for a Research Leave Fellowship. A.G. Marrugo acknowledges support from the Fulbright Commission in Colombia and the Colombian Ministry of Education within the framework of the Fulbright Visiting Scholar Program, Cohort 2019-2020.

This research was primarily conducted at Purdue University.

L.A. Romero and A.G. Marrugo dedicate this paper to the memory of Cruz Marina Perez Duran.

Disclosures

SZ: ORI LLC (C), Orbbec 3D (C), Vision Express Optics Inc (I). AGM, RV, and LAR declare no conflicts of interest.

References

1. T. Bell, J. Xu, and S. Zhang, “Method for out-of-focus camera calibration,” Appl. Opt. 55(9), 2346–2352 (2016).

2. B. Li, N. Karpinsky, and S. Zhang, “Novel calibration method for structured-light system with an out-of-focus projector,” Appl. Opt. 53(16), 3415–3426 (2014).

3. Y. An, T. Bell, B. Li, J. Xu, and S. Zhang, “Method for large-range structured light system calibration,” Appl. Opt. 55(33), 9563–9572 (2016).

4. S. Yang, M. Liu, J. Song, S. Yin, Y. Ren, J. Zhu, and S. Chen, “Projector distortion residual compensation in fringe projection system,” Opt. Lasers Eng. 114, 104–110 (2019).

5. S. Lv, Q. Sun, Y. Zhang, Y. Jiang, J. Yang, J. Liu, and J. Wang, “Projector distortion correction in 3D shape measurement using a structured-light system by deep neural networks,” Opt. Lett. 45(1), 204–207 (2020).

6. J. Villa, M. Araiza, D. Alaniz, R. Ivanov, and M. Ortiz, “Transformation of phase to (x, y, z)-coordinates for the calibration of a fringe projection profilometer,” Opt. Lasers Eng. 50(2), 256–261 (2012).

7. R. Vargas, A. G. Marrugo, J. Pineda, J. Meneses, and L. A. Romero, “Camera-projector calibration methods with compensation of geometric distortions in fringe projection profilometry: a comparative study,” Opt. Pura Apl. 51(3), 1–10 (2018).

8. A. G. Marrugo, F. Gao, and S. Zhang, “State-of-the-art active optical techniques for three-dimensional surface metrology: a review,” J. Opt. Soc. Am. A 37(9), B60–B77 (2020).

9. S. Gai, F. Da, and M. Tang, “A flexible multi-view calibration and 3D measurement method based on digital fringe projection,” Meas. Sci. Technol. 30(2), 025203 (2019).

10. S. Yin, Y. Ren, Y. Guo, J. Zhu, S. Yang, and S. Ye, “Development and calibration of an integrated 3D scanning system for high-accuracy large-scale metrology,” Measurement 54, 65–76 (2014).

11. I. Léandry, C. Bréque, and V. Valle, “Calibration of a structured-light projection system: development to large dimension objects,” Opt. Lasers Eng. 50(3), 373–379 (2012).

12. P. Wang, J. Wang, J. Xu, Y. Guan, G. Zhang, and K. Chen, “Calibration method for a large-scale structured light measurement system,” Appl. Opt. 56(14), 3995–4002 (2017).

13. X. Liu, Z. Cai, Y. Yin, H. Jiang, D. He, W. He, Z. Zhang, and X. Peng, “Calibration of fringe projection profilometry using an inaccurate 2D reference target,” Opt. Lasers Eng. 89, 131–137 (2017).

14. J. Yu and F. Da, “Calibration refinement for a fringe projection profilometry system based on plane homography,” Opt. Lasers Eng. 140, 106525 (2021).

15. S. Xing and H. Guo, “Iterative calibration method for measurement system having lens distortions in fringe projection profilometry,” Opt. Express 28(2), 1177–1196 (2020).

16. R. Vargas, A. G. Marrugo, S. Zhang, and L. A. Romero, “Hybrid calibration procedure for fringe projection profilometry based on stereo vision and polynomial fitting,” Appl. Opt. 59(13), D163–D169 (2020).

17. S. Zhang and P. S. Huang, “Novel method for structured light system calibration,” Opt. Eng. 45(8), 083601 (2006).

18. S. Zhang, High-Speed 3D Imaging with Digital Fringe Projection Techniques (CRC Press, 2016).

19. K. Li, J. Bu, and D. Zhang, “Lens distortion elimination for improving measurement accuracy of fringe projection profilometry,” Opt. Lasers Eng. 85, 53–64 (2016).

20. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000).

21. R. Juarez-Salazar, A. Giron, J. Zheng, and V. H. Diaz-Ramirez, “Key concepts for phase-to-coordinate conversion in fringe projection systems,” Appl. Opt. 58(18), 4828–4834 (2019).

22. A. G. Marrugo, R. Vargas, S. Zhang, and L. A. Romero, “Hybrid calibration method for improving 3D measurement accuracy of structured light systems,” Proc. SPIE 11490, 1149008 (2020).

23. J. M. Lavest, M. Viala, and M. Dhome, “Do we really need an accurate calibration pattern to achieve a reliable camera calibration?” in Computer Vision — ECCV’98 (Springer, Berlin, Heidelberg, 1998), pp. 158–174.

24. L. Huang, Q. Zhang, and A. Asundi, “Camera calibration with active phase target: improvement on feature detection and optimization,” Opt. Lett. 38(9), 1446–1448 (2013).

25. X.-L. Zhang, B.-F. Zhang, and Y.-C. Lin, “Accurate phase expansion on reference planes in grating projection profilometry,” Meas. Sci. Technol. 22(7), 075301 (2011).
