Optica Publishing Group

High-precision and rapid binocular camera calibration method using a single image per camera

Open Access

Abstract

This study proposes a precise and rapid binocular camera calibration (BCC) method based on a stereo target composed of 12 coded planar targets on which each calibration corner has a unique coded number. Unlike Zhang’s method, which requires numerous pairs of images in a binocular calibration process and fails to match homonymous corners in the case of incomplete target projection, the proposed method can implement an accurate BCC using a single calibration image per camera even when the target is incompletely projected. The proposed method greatly decreases the complexity of the calibration process. An optimization method based on multiple constraints is also presented to improve the accuracy of the BCC. The reprojection error and the 3D measurement errors are combined to evaluate the precision of the BCC more comprehensively. A binocular camera is calibrated by utilizing the proposed method and Zhang’s method for comparison. The reprojection error and 3D measurement errors are remarkably reduced by applying the proposed method. A supplementary experiment further verifies the advantage of the proposed optimization method.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

High-precision BCC is an important basis for accurate stereo vision 3D measurement, 3D visual positioning, and robot navigation. BCC methods can be roughly divided into two categories: self-calibration methods [1] and calibration methods based on a calibration target. Self-calibration methods can hardly be implemented in actual applications without camera motion constraints [2], and ensuring the accuracy of the extraction of the feature points in images is difficult. The calibration objects used in target-based calibration methods usually include 1D targets [3–5], 2D targets [6–9], and 3D targets [10–13].

The 1D target-based calibration method requires strict motion constraints and can hardly realize high-precision calibration with the few feature points involved [14]. The 2D target-based method is widely used for camera calibration due to its flexibility. In particular, Zhang [6] proposed a calibration method based on a chessboard pattern, which is one of the most influential camera calibration methods in practical applications. However, when the pattern is partially exposed, Zhang’s method cannot realize the precise matching of homonymous corners in a BCC process. To cope with this issue, a new planar target [15] was proposed to implement the BCC in the case of incomplete target projection. A coding target [16] was designed to achieve the accurate matching of homonymous calibration corners despite a partially occluded planar target. The Charuco board [17], which adds Aruco markers [18] to a chessboard, can implement camera calibration with images of a partially occluded target. Although the aforementioned methods solve the issue of incomplete target projection, their calibration processes require numerous calibration images and are thus complicated, which is unsuitable for certain application scenarios, such as underwater scenarios with motion limits.

Diverse 3D target-based calibration methods have been developed to simplify the calibration process. However, the methods based on conventional 3D targets [10,12] are not widespread considering the fabrication difficulty of the 3D targets. To deal with this problem, the multi-plane stereo target has gradually become a major research topic. Zhang et al. [19] calibrated a camera with an anamorphic lens by using a 3D target composed of two chessboard patterns. The multi-plane stereo target [20] can calibrate multiple cameras with only one target image per camera. Although the aforementioned methods simplify the camera calibration process by using a low-cost 3D target, they cannot deal with the issue of incomplete target projection. J. Zhang et al. [21] proposed a method based on a stereo coding target to implement camera calibration with a single shot. However, this method can handle the occlusion of feature points only when the special frames on the target are entirely exposed, and it thus has less flexibility.

Moreover, the abovementioned method of J. Zhang et al. [21] uses the reprojection error to evaluate the accuracy of the BCC. However, Yang [22] and Poulin-Girard [23] emphasize that the reprojection error cannot comprehensively reflect the precision of camera calibration. Cui et al. [24] propose a parameter optimization method based on the 3D, epipolar, and distance constraints. Nevertheless, they choose the point converted from the object point through the extrinsic parameters of the left camera as the true value in their 3D constraint, which means that their method may introduce extra errors in its optimization process. In addition, the reprojection constraint, which ensures the homography mapping relation, is not included in their method. Thus, a new parameter optimization method and new criteria are needed to improve and comprehensively assess the precision of the BCC, respectively.

Based on the foregoing discussion, this study presents a precise and rapid BCC method using one calibration image per camera. The proposed target is composed of 12 coded planar targets whose calibration corners have their own coded numbers, which allows the matching of homonymous calibration corners in the case of partial target projection. Moreover, an optimization method based on multiple constraints is presented to improve the accuracy of the BCC by fully utilizing the 3D features of the proposed target.

2. Proposed stereo target

The proposed stereo target aims to implement the BCC with a single shot and simultaneously remove the requirement that the target be fully present in the calibration images.

2.1 Structure of the proposed stereo target

As shown in Fig. 1, the proposed stereo target consists of a reference plate and 12 target plates. The number of target plates can be changed according to the practical application scenario, and the angle between the outer and inner target plates can be adjusted properly, provided that a significant difference in spatial attitude between the inner and outer target plates is maintained. The target plates are laid evenly on the reference plate, and a coded planar target is attached to each target plate. The coded planar target is composed of several coded markers, as shown in Fig. 1. Each coded marker has four calibration corners, and its internal coded unit provides a coded number for each calibration corner, which allows the matching of homonymous calibration corners when the target is only partially present in the left and right calibration images. When fabricating the proposed target, each target plate should have high-precision flatness. It is also necessary to ensure the accuracy of the world coordinates of the calibration corners in the related world coordinate system. Besides these two requirements, the target material should have a low expansion coefficient and high strength as much as possible, such as a ceramic material.

Fig. 1. Schematic of the structure of the proposed stereo target.

2.2 Design of coded unit

A directional circle, a positioning ring, and 12 coded circles generally form a coded unit, as shown in Fig. 2. The coded unit is divided into six regions by three lines ${l_1}$, ${l_2}$, and ${l_3}$. Two coded circles are located in each region. The colors of the directional circle and the positioning ring are white. The color of a coded circle can be white or black, which depends on the specific number of the related coded unit. Based on the colors of the twelve coded circles, the coded number of the coded unit (CNCU) $e$ can be calculated by

$$e = ({\xi _0},{\xi _1},\ldots ,{\xi _{11}}) \cdot {({2^0},{2^1},\ldots ,{2^{11}})^T}, $$
where ${\xi _i}$ is the coefficient of the coded circle ${P_i}$ ($i = \textrm{0},1,\ldots ,11$). When the color of ${P_i}$ is white, ${\xi _i} = 1$; otherwise, ${\xi _i} = 0$. All the CNCUs on the stereo target are different. After the number of the coded unit is calculated, as shown in Fig. 2, we can obtain the coded number of a calibration corner (CNCC) $e\_\sigma$ on the related coded marker, where $\sigma$ represents the coded region of the related calibration corner and can be 1, 3, 4, or 6.
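The bit-weighted sum in Eq. (1) and the region-suffix rule above can be sketched as follows; the function names `cncu` and `cncc`, and the string form of the corner number, are illustrative, not from the paper:

```python
# Sketch of Eq. (1): the 12 coded circles P_0..P_11 form a 12-bit
# number, with coefficient xi_i = 1 for a white circle and 0 for black.

def cncu(circle_is_white):
    """Coded number of a coded unit from the colors of P_0..P_11."""
    assert len(circle_is_white) == 12
    return sum(int(w) << i for i, w in enumerate(circle_is_white))

def cncc(e, sigma):
    """Coded number of the calibration corner in region sigma (1, 3, 4, or 6)."""
    assert sigma in (1, 3, 4, 6)
    return f"{e}_{sigma}"

# Example: only P_0 and P_2 white -> e = 2^0 + 2^2 = 5
colors = [True, False, True] + [False] * 9
```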

Fig. 2. Structure of a coded unit and the coding principle of calibration corners.

Figure 3 shows the principle of establishing the world coordinate system (WCS) on a coded planar target of the stereo target. An initial coded marker (its position is shown in Fig. 3) exists on each planar target, and its related CNCU ${e_{0,i}}$ ($i = \textrm{1},\textrm{2},\ldots ,12$) is called the initial coded number, which is the minimum CNCU on the corresponding planar target. For convenience of description, we denote the 12 coded planar targets as planar targets 1, 2, …, 12 and their related WCSs as WCSs 1, 2, …, 12 in ascending order of their initial coded numbers. ${\boldsymbol c}_{0,1}^i$, ${\boldsymbol c}_{0,2}^i$, ${\boldsymbol c}_{0,3}^i$, and ${\boldsymbol c}_{0,4}^i$ denote the calibration corners located in regions 1, 3, 4, and 6, respectively, on the initial coded marker of planar target i. Thereafter, we take ${\boldsymbol c}_{0,1}^i$ as the origin ${O_{i,W}}$ of the WCS i ${O_{i,W}} - {X_{i,W}}{Y_{i,W}}{Z_{i,W}}$. The directions from ${\boldsymbol c}_{0,1}^i$ to ${\boldsymbol c}_{0,4}^i$ and from ${\boldsymbol c}_{0,1}^i$ to ${\boldsymbol c}_{0,2}^i$ are taken as the positive directions of the ${X_{i,W}}$ and ${Y_{i,W}}$ axes, respectively. The ${X_{i,W}}$, ${Y_{i,W}}$, and ${Z_{i,W}}$ axes follow the right-hand rule, which establishes the WCS i ${O_{i,W}} - {X_{i,W}}{Y_{i,W}}{Z_{i,W}}$ on coded planar target i. The CNCUs on planar target i increase monotonically along the positive ${X_{i,W}}$ axis, starting from ${e_{0,i}}$. As shown in Fig. 3, the world coordinates of all the calibration corners on each coded planar target can be computed with the known interval distance and side length of the coded unit. The side length and interval distance on each planar target of our 3D target can be adjusted according to the actual application scenario.
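The corner-coordinate computation described above can be sketched as a minimal example. The layout below assumes a single row of markers repeating along the positive X axis with the corner ordering of Fig. 3 (regions 1, 3, 4, 6); the function name, the row-only layout, and the default 13 mm values (taken from Section 4) are illustrative assumptions:

```python
import numpy as np

# Hedged sketch: world coordinates (Z = 0 on the target plane) of the
# four calibration corners of the k-th coded marker, given the coded-unit
# side length L and the interval distance g between adjacent units.

def marker_corners(k, L=13.0, g=13.0):
    """Corners of marker k (k = 0 is the initial marker at the origin)."""
    x0 = k * (L + g)                 # markers repeat every L + g along +X
    return np.array([
        [x0,     0.0, 0.0],          # region 1: origin corner c_{k,1}
        [x0,     L,   0.0],          # region 3: +Y direction  c_{k,2}
        [x0 + L, L,   0.0],          # region 4: diagonal      c_{k,3}
        [x0 + L, 0.0, 0.0],          # region 6: +X direction  c_{k,4}
    ])
```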

Fig. 3. Principle of establishing the world coordinate system on a coded planar target; (a) and (b) are the planar targets on the outer and inner stereo target, respectively.

2.3 Decoding of the proposed stereo target

The decoding process of our target aims to obtain all the CNCCs in a target image. With one target image, the decoding process can be described as follows:

The flow of the decoding method of the stereo target is described in Fig. 4. As shown in Step 1 in Fig. 4, we first convert the original image to grayscale. Then, the corner detection method proposed by A. Geiger [25] is used to find the subpixel coordinates of the candidate corners in the grayscale image. Second, after the grayscale image is converted to a binary image and the binary image is copied, we label the white connected domains in the copied binary image and record their areas in Step 2 in Fig. 4. Next, the white domains whose areas are below a certain threshold are eliminated by setting the values of the pixels inside them to 0, which removes all the coded designs in the copied binary image. Thereafter, we detect quadrilaterals in the image and find their vertices after the copied binary image is eroded. Third, as shown in Step 3 in Fig. 4, for each quadrilateral, we choose the four candidate corners that are outside the quadrilateral and respectively closest to its four vertices as the quasi-calibration corners of the related quadrilateral. Because interference corners may remain after the previous processing, we also need to confirm whether the patterns inside the quadrilateral are coded designs before the next step. After the values of the pixels outside the quadrilateral are set to 255 in the binary image, we label the white and black domains and record the number of white domains ${n_w}$, the number of black domains ${n_b}$, the area of the largest black domain ${m_{\max }}$, and the area of the smallest black domain ${m_{\min }}$. If they meet the following requirements: $3 \le {n_w} \le 15$, ${n_b} = 2$, and ${m_{\max }}/{m_{\min }}$ is greater than a certain threshold, then the four quasi-calibration corners of the quadrilateral are considered the calibration corners of the related coded marker.
With this approach, the subpixel coordinates of all the calibration corners can be obtained. Finally, we employ the marker decoding method (MDM) shown in Fig. 5 to get all the CNCCs.
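The validity test in Step 3 can be sketched as a small decision function. Domain labeling itself (e.g. with `cv2.connectedComponentsWithStats`) is assumed to have been done already; the function name and the threshold value below are illustrative:

```python
# Sketch of the coded-design validity check from Fig. 4, Step 3: a
# quadrilateral interior is accepted as a coded marker only if its
# white/black connected domains satisfy the three conditions in the text.

def is_coded_design(white_areas, black_areas, ratio_threshold=5.0):
    """Accept iff 3 <= n_w <= 15, n_b == 2, and m_max/m_min exceeds the
    threshold (ratio_threshold is an illustrative value, not the paper's)."""
    n_w, n_b = len(white_areas), len(black_areas)
    if not (3 <= n_w <= 15 and n_b == 2):
        return False
    m_max, m_min = max(black_areas), min(black_areas)
    return m_max / m_min > ratio_threshold
```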

Fig. 4. Schematic flow of decoding a stereo target.

Fig. 5. Schematic flow of the marker decoding method (MDM).

As shown in Fig. 5, given the four calibration corners on each coded marker, we can calculate all the CNCCs via the MDM. Without loss of generality, we take one coded marker as an example to detail the MDM. First, we set the pixels outside the quadrilateral formed by the four calibration corners of the ith coded marker to 255 in the copied binary image to remove the complex background, as shown in Step 2 in Fig. 5. In Step 3 in Fig. 5, we detect the minimum black domain in the image and calculate the coordinates of its center ${{\boldsymbol o}_{l,i}}$. Then, the white domain nearest to the minimum black domain, namely, the positioning ring, can be located and removed. Thereafter, we detect all the contours in the image and compute their centers, as shown in Step 4 in Fig. 5. The center of the second largest contour is the center of the directional circle ${{\boldsymbol o}_{d,i}}$, and the centers of the remaining contours, except the largest and second largest ones, are the centers of the coded circles in the unit. Before the next step, we determine ${{\boldsymbol c}_{i,1}}$, ${{\boldsymbol c}_{i,2}}$, ${{\boldsymbol c}_{i,3}}$, and ${{\boldsymbol c}_{i,4}}$, the calibration corners located in regions 1, 3, 4, and 6, respectively, according to the positional relation between the four calibration corners, ${{\boldsymbol o}_{d,i}}$, and ${{\boldsymbol o}_{l,i}}$. Next, we obtain the CNCU ${e_i}$ according to the location relationships between the mentioned centers, and thus the four CNCCs, as shown from Step 5 to Step 6. Finally, we apply the same method to each coded marker in the target image until all the CNCCs are obtained, as shown in Step 7 in Fig. 5.

3. Proposed calibration method

3.1 Calibration model

Figure 6 shows the calibration model of the proposed method. ${\boldsymbol M}\textrm{ = [}X,Y,Z{\textrm{]}^T}$ is a calibration corner in the WCS 1 ${O_{1,W}} - {X_{1,W}}{Y_{1,W}}{Z_{1,W}}$ whose ${Z_{1,W}}$ component equals zero, and its corresponding ideal image point ${{\boldsymbol m}_l} = {[{{u_l},{v_l}} ]^T}$ lies in the left camera pixel coordinate system (LCPCS) ${o_{l,p}} - {u_{l,p}}{v_{l,p}}$. ${\widehat {\boldsymbol m}_l} = {[{{u_l},{v_l},1} ]^T}$ and $\widehat {\boldsymbol M} = {[{X\textrm{,}Y\textrm{,}Z,1} ]^T}$ are the homogeneous coordinates of ${{\boldsymbol m}_l}$ and ${\boldsymbol M}$, respectively. For the left camera, the imaging process is:

$$s{\widehat {\boldsymbol m}_l} = {{\boldsymbol A}_l}[{{{\boldsymbol R}_{l,1}}\;\;{{\boldsymbol T}_{l,1}}} ]\widehat {\boldsymbol M}\textrm{ = }{{\boldsymbol A}_l}\left[ {\begin{array}{cccc} {r_{l,1}^1}&{r_{l,1}^2}&{r_{l,1}^3}&{{T_{l,1}}} \end{array}} \right]\left[ {\begin{array}{c} X\\ Y\\ \textrm{0}\\ 1 \end{array}} \right]\textrm{ = }{{\boldsymbol A}_l}[\begin{array}{ccc} {r_{l,1}^1}&{r_{l,1}^2}&{{T_{l,1}}} \end{array}]\left[ \begin{array}{l} X\\ Y\\ \;1 \end{array} \right] = {H_{l,1}}\left[ \begin{array}{l} X\\ Y\\ \;1 \end{array} \right], $$
where ${{\boldsymbol A}_l}\textrm{ = }\left[ {\begin{array}{ccc} {{f_{x,l}}}&0&{{u_{0,l}}}\\ 0&{{f_{y,l}}}&{{v_{0,l}}}\\ 0&0&1 \end{array}} \right]\;$ is the intrinsic matrix of the left camera. ${{\boldsymbol R}_{l,1}}$ and ${{\boldsymbol T}_{l,1}}$ are the rotation and translation matrices from the WCS 1 to the left camera coordinate system (LCCS), respectively. ${{\boldsymbol H}_{l,1}}$ is a 3×3 homography matrix that can be solved with four or more known calibration corners on the planar target where ${\boldsymbol M}$ is located. Given at least three homography matrices ${{\boldsymbol H}_{l,i}}$ ($i = \textrm{1,2,3,}\ldots $), ${{\boldsymbol A}_l}$ and the $\textrm{(}{{\boldsymbol R}_{l,i}},{{\boldsymbol T}_{l,i}})$ corresponding to each homography matrix can be computed. The calculation process is detailed in Ref. [6]. A target image of the proposed stereo target includes 12 coded planar targets and can thus provide 12 homography matrices $\textrm{(}{{\boldsymbol H}_{l,1}},{{\boldsymbol H}_{l,2}},\ldots ,{{\boldsymbol H}_{l,12}}\textrm{)}$ at once, which allows a camera calibration with only one target image and greatly simplifies the calibration process. Similarly, with ${{\boldsymbol H}_{r,1}}$, …, ${{\boldsymbol H}_{r,11}}$, and ${{\boldsymbol H}_{r,12}}$, the intrinsic matrix of the right camera ${{\boldsymbol A}_r}$ and the $\textrm{(}{{\boldsymbol R}_{r,i}},{{\boldsymbol T}_{r,i}})$ ($i = \textrm{1,2,}\ldots \textrm{,12}$) converting the related WCS i ${O_{i,W}} - {X_{i,W}}{Y_{i,W}}{Z_{i,W}}$ to the right camera coordinate system (RCCS) ${O_{r,c}} - {X_{r,c}}{Y_{r,c}}{Z_{r,c}}$ can be computed.
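The statement that a homography can be solved from four or more known corners can be illustrated with a standard direct linear transformation (DLT); this is a generic sketch of that textbook step, not the paper's own implementation:

```python
import numpy as np

# Minimal DLT sketch for the homography H in Eq. (2): each correspondence
# (X, Y) <-> (u, v) contributes two rows to a linear system A h = 0,
# solved in least squares via SVD. Four non-collinear corners suffice.

def estimate_homography(world_xy, image_uv):
    rows = []
    for (X, Y), (u, v) in zip(world_xy, image_uv):
        rows.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y, -u])
        rows.append([0, 0, 0, X, Y, 1, -v * X, -v * Y, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, float))
    H = vt[-1].reshape(3, 3)        # null-space vector = flattened H
    return H / H[2, 2]              # fix the scale ambiguity
```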

Fig. 6. Binocular calibration model of the proposed method.

The distortion model for the cameras used in the proposed method is

$$\left\{ \begin{array}{l} {x_d} = x + x({k_1}{r^2} + {k_2}{r^4}) + 2{p_1}xy + {p_2}({r^2} + 2{x^2})\\ {y_d} = y + y({k_1}{r^2} + {k_2}{r^4}) + {p_1}({r^2} + 2{y^2}) + 2{p_2}xy \end{array} \right., $$
where $({x_d},{y_d})$ and $(x,y)$ are the distorted and undistorted normalized image coordinates, respectively, and ${r^2} = {x^2} + {y^2}$. In the actual calibration process, the initial values of ${{\boldsymbol D}_l}\textrm{ = }{\left[ {\begin{array}{cccc} {{k_{1,l}}}&{{k_{2,l}}}&{{p_{1,l}}}&{{p_{2,l}}} \end{array}} \right]^T}$ and ${{\boldsymbol D}_r}\textrm{ = }{\left[ {\begin{array}{cccc} {{k_{1,r}}}&{{k_{2,r}}}&{{p_{1,r}}}&{{p_{2,r}}} \end{array}} \right]^T}$ are set to 0.
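As a sanity check on the distortion model, a direct transcription of the radial and tangential terms (in the standard Brown–Conrady arrangement, which the equation above follows) can be written as:

```python
# Transcription of the distortion model: maps undistorted normalized
# coordinates (x, y) to distorted ones given D = [k1, k2, p1, p2].

def distort(x, y, k1, k2, p1, p2):
    r2 = x * x + y * y
    radial = k1 * r2 + k2 * r2 * r2              # radial term k1*r^2 + k2*r^4
    x_d = x + x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y + y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x_d, y_d
```

With all coefficients zero the mapping is the identity, matching the initialization of ${{\boldsymbol D}_l}$ and ${{\boldsymbol D}_r}$ to 0.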

${{\boldsymbol R}_{l,r}}$ and ${{\boldsymbol T}_{l,r}}$ are named the extrinsic parameters of binocular camera (EPBC), and their initial values can be computed by

$$\left\{ \begin{array}{l} {{\boldsymbol R}_{l,r}} = \frac{1}{{12}}\sum\limits_{i = 1}^{12} {\textrm{(}{{\boldsymbol R}_{r,i}}{{({{\boldsymbol R}_{l,i}})}^{ - 1}})} \\ {{\boldsymbol T}_{l,r}} = \frac{1}{{12}}\sum\limits_{i = 1}^{12} {({{\boldsymbol T}_{r,i}} - {{\boldsymbol R}_{r,i}}{{({{\boldsymbol R}_{l,i}})}^{ - 1}}{{\boldsymbol T}_{l,i}})} \end{array} \right.. $$
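Eq. (4) averages the 12 per-plane relative transforms; a minimal numpy sketch is below. Note that an element-wise mean of rotation matrices is generally not itself a valid rotation, which is consistent with the paper treating this only as an initial value to be refined in Section 3.2:

```python
import numpy as np

# Sketch of Eq. (4): initial stereo extrinsics as the mean of the
# per-plane relative rotations and translations.

def initial_epbc(R_l, T_l, R_r, T_r):
    """R_l, R_r: lists of 3x3 rotations; T_l, T_r: lists of length-3 vectors,
    one entry per planar target (12 in the paper)."""
    Rs = [Rr @ np.linalg.inv(Rl) for Rl, Rr in zip(R_l, R_r)]
    Ts = [Tr - Rlr @ Tl for Rlr, Tl, Tr in zip(Rs, T_l, T_r)]
    return sum(Rs) / len(Rs), sum(Ts) / len(Ts)
```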

3.2 Parameter optimization

All the parameters computed in Section 3.1 should be optimized before being applied to accurate 3D measurement. Although the reprojection constraint is widely used in conventional calibration methods to refine camera parameters, the same 2D pixel error in the image may produce different 3D metric errors [24]. Moreover, the 3D measurement is conducted in a 3D coordinate system, whereas the optimization is implemented in a 2D coordinate system [16]. Therefore, 3D constraints need to be introduced into the objective function of the binocular parameter optimization in addition to the reprojection constraint.

In this work, the right-angle, length, and coplanar constraints are combined as three-dimensional constraints to improve the precision of the BCC by fully utilizing the 3D features of the calibration corners on the stereo target.

  • A. Reprojection constraint
Given ${n_{i,l}}$ extracted coded squares ($i = 1,2,\ldots ,12$) on the ith planar target in the left image of the stereo target and ${n_{i,r}}$ extracted coded squares on the ith planar target in the right image, the reprojection constraints for the left and right cameras can be expressed as follows:
$$\left\{ {\begin{array}{c} {{J_{rep,l}} = \sum\limits_{i = 1}^{12} {\sum\limits_{j = 1}^{4 \cdot {n_{i,l}}} {||{{\boldsymbol c}_{i,j,l}} - {{\widetilde {\boldsymbol c}}_{i,j,l}}({{\boldsymbol A}_l},{{\boldsymbol R}_{l,i}},{{\boldsymbol T}_{l,i}},{{\boldsymbol D}_l},{{\boldsymbol C}_{i,j,l}})|{|^2}} } }\\ {{J_{rep,r}} = \sum\limits_{i = 1}^{12} {\sum\limits_{j = 1}^{4 \cdot {n_{i,r}}} {||{{\boldsymbol c}_{i,j,r}} - {{\widetilde {\boldsymbol c}}_{i,j,r}}({{\boldsymbol A}_r},{{\boldsymbol R}_{r,i}},{{\boldsymbol T}_{r,i}},{{\boldsymbol D}_r},{{\boldsymbol C}_{i,j,r}})|{|^2}} } } \end{array}} \right., $$
where ${{\boldsymbol c}_{i,j,l}}$ (or ${{\boldsymbol c}_{i,j,r}}$) is an actual calibration corner on ith planar target in the left (or right) image. ${\widetilde {\boldsymbol c}_{i,j,l}}$ (or ${\widetilde {\boldsymbol c}_{i,j,r}}$) is the 2D reprojected point related to ${{\boldsymbol c}_{i,j,l}}$ (or ${{\boldsymbol c}_{i,j,r}}$) computed by ${{\boldsymbol A}_l}$ (or ${{\boldsymbol A}_r}$), ${{\boldsymbol R}_{l,i}}$ (or ${{\boldsymbol R}_{r,i}}$), ${{\boldsymbol T}_{l,i}}$ (or ${{\boldsymbol T}_{r,i}}$), ${{\boldsymbol D}_l}$ (or ${{\boldsymbol D}_r}$), and spatial point ${{\boldsymbol C}_{i,j,l}}$ (or ${{\boldsymbol C}_{i,j,r}}$) corresponding to ${{\boldsymbol c}_{i,j,l}}$ (or ${{\boldsymbol c}_{i,j,r}}$).
  • B. Length constraint
Length constraint can reduce the difference between the actual side length of a coded unit ${L_{act}}$ on our target and the side length of a 3D reconstructed coded unit. With ${n_i}$ coded squares ($i = 1,2,\ldots ,12$) on the ith reconstructed planar target, the length constraint is:
$${J_{len}} = \sum\limits_{i = 1}^{12} {\sum\limits_{j = 1}^{{n_i}} {\sum\limits_{k = 1}^3 {|{{L_{act}} - d({{\boldsymbol c}_{i,j,k}},{{\boldsymbol c}_{i,j,k + 1}})} |+ } } } \sum\limits_{i = 1}^{12} {\sum\limits_{j = 1}^{{n_i}} {|{{L_{act}} - d({{\boldsymbol c}_{i,j,1}},{{\boldsymbol c}_{i,j,4}})} |} }, $$
where ${{\boldsymbol c}_{i,j,1}}$, ${{\boldsymbol c}_{i,j,2}}$, ${{\boldsymbol c}_{i,j,3}}$, and ${{\boldsymbol c}_{i,j,4}}$ are 3D reconstructed calibration corners (3D-RCCs) which are respectively located in the region 1, 3, 4, and 6 of the jth coded square on the ith planar target of the reconstructed stereo target. $d({\boldsymbol a},{\boldsymbol b})$ represents the spatial distance between two 3D points ${\boldsymbol a}$ and ${\boldsymbol b}$:
$$d({\boldsymbol a},{\boldsymbol b}) = \sqrt {{{({\boldsymbol a}.x - {\boldsymbol b}.x)}^2} + {{({\boldsymbol a}.y - {\boldsymbol b}.y)}^2} + {{({\boldsymbol a}.z - {\boldsymbol b}.z)}^2}}. $$
  • C. Right-angle constraint
Right-angle constraint can minimize the difference between 90° and each of the four internal angles in the reconstructed coded squares. Given ${n_i}$ coded squares ($i = 1,2,\ldots ,12$) on the ith planar target of the 3D reconstructed stereo target, the right-angle constraint can be expressed as follows:
$$\begin{aligned} {J_{rig}} = {} & \sum\limits_{i = 1}^{12} {\sum\limits_{j = 1}^{{n_i}} {\sum\limits_{k = 1}^2 {|{90 - \varphi ({{\boldsymbol c}_{i,j,k + 1}} - {{\boldsymbol c}_{i,j,k}},{{\boldsymbol c}_{i,j,k + 2}} - {{\boldsymbol c}_{i,j,k + 1}})} |} } } + \sum\limits_{i = 1}^{12} {\sum\limits_{j = 1}^{{n_i}} {|{90 - \varphi ({{\boldsymbol c}_{i,j,4}} - {{\boldsymbol c}_{i,j,1}},{{\boldsymbol c}_{i,j,2}} - {{\boldsymbol c}_{i,j,1}})} |} } \\ & + \sum\limits_{i = 1}^{12} {\sum\limits_{j = 1}^{{n_i}} {|{90 - \varphi ({{\boldsymbol c}_{i,j,3}} - {{\boldsymbol c}_{i,j,4}},{{\boldsymbol c}_{i,j,1}} - {{\boldsymbol c}_{i,j,4}})} |} } \end{aligned}, $$
where ${{\boldsymbol c}_{i,j,1}}$, ${{\boldsymbol c}_{i,j,2}}$, ${{\boldsymbol c}_{i,j,3}}$, and ${{\boldsymbol c}_{i,j,4}}$ are 3D-RCCs which are respectively located in regions 1, 3, 4, and 6 of the jth coded square on the ith planar target of the reconstructed stereo target. $\varphi ({{\boldsymbol v}_1},{{\boldsymbol v}_2})$ can compute the angle between two spatial vectors ${{\boldsymbol v}_1}$ and ${{\boldsymbol v}_2}$:
$$\varphi ({{\boldsymbol v}_1},{{\boldsymbol v}_2}) = \frac{{180}}{\pi } \cdot \arccos (\frac{{{{\boldsymbol v}_1} \cdot {{\boldsymbol v}_2}}}{{|{{\boldsymbol v}_1}|\cdot |{{\boldsymbol v}_2}|}}). $$
  • D. Coplanar constraint
The coplanar constraint aims to keep the 3D-RCCs on the same planar target of the reconstructed stereo target coplanar. Given ${n_i}$ coded squares ($i = 1,2,\ldots ,12$) on the ith reconstructed planar target, the coplanar constraint is:
$${J_{cop}} = \sum\limits_{i = 1}^{12} {\sum\limits_{j = 1}^{{n_i}} {\sum\limits_{k = 1}^4 {|{D({{\boldsymbol F}_i}\textrm{,}{{\boldsymbol c}_{i,j,k}})} |} } }, $$
where ${{\boldsymbol c}_{i,j,1}}$, ${{\boldsymbol c}_{i,j,2}}$, ${{\boldsymbol c}_{i,j,3}}$, and ${{\boldsymbol c}_{i,j,4}}$ are the four 3D-RCCs on the jth coded square of the ith planar target on the reconstructed stereo target. ${{\boldsymbol F}_i}{\boldsymbol = }\left[ {\begin{array}{cccc} {{a_{i,1}}}&{{a_{i,2}}}&{{a_{i,3}}}&{{a_{i,4}}} \end{array}} \right]$, where ${a_{i,1}}$, ${a_{i,2}}$, ${a_{i,3}}$, and ${a_{i,4}}$ are the coefficients in the equation of the plane fitted by the 3D-RCCs on the ith planar target. Given a spatial point ${\boldsymbol c}\textrm{ = }{\left[ {\begin{array}{ccc} {{c_x}}&{{c_y}}&{{c_z}} \end{array}} \right]^T}$, $D({\ast} , \ast )$ computes the distance from the spatial point to the spatial plane:
$$D({\boldsymbol F},{\boldsymbol c}) = \frac{{{\boldsymbol F} \cdot \widehat {\boldsymbol c}}}{{\sqrt {a_1^2 + a_2^2 + a_3^2} }}, $$
where $\widehat {\boldsymbol c} = {\left[ {\begin{array}{cccc} {{c_x}}&{{c_y}}&{{c_z}}&1 \end{array}} \right]^T}$.
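The three geometric helpers used by the length, right-angle, and coplanar constraints, namely $d({\boldsymbol a},{\boldsymbol b})$ in Eq. (6), $\varphi ({{\boldsymbol v}_1},{{\boldsymbol v}_2})$ in Eq. (8), and $D({\boldsymbol F},{\boldsymbol c})$ in Eq. (10), translate directly into code:

```python
import numpy as np

def d(a, b):
    """Euclidean distance between two 3D points, Eq. (6)."""
    return float(np.linalg.norm(np.asarray(a, float) - np.asarray(b, float)))

def phi(v1, v2):
    """Angle between two spatial vectors in degrees, Eq. (8)."""
    v1, v2 = np.asarray(v1, float), np.asarray(v2, float)
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

def D(F, c):
    """Signed point-to-plane distance, Eq. (10), F = [a1, a2, a3, a4]."""
    a1, a2, a3, a4 = F
    cx, cy, cz = c
    return (a1 * cx + a2 * cy + a3 * cz + a4) / np.sqrt(a1**2 + a2**2 + a3**2)
```

The `np.clip` guards `arccos` against round-off slightly outside [-1, 1]; the constraints in Eqs. (5), (7), and (9) take absolute values of these quantities.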

In the proposed optimization process, the parameters to be refined are the intrinsic parameters of the binocular camera (IPBC) and the EPBC. Their initial values are computed in Section 3.1.

By combining the reprojection constraint and the 3D constraint (right-angle, length, and coplanar constraints), the optimization objective function is:

$${f_{op}} = \min ({s_{rep,l}}{J_{rep,l}} + {s_{rep,r}}{J_{rep,r}} + {s_{len}}{J_{len}} + {s_{rig}}{J_{rig}} + {s_{cop}}{J_{cop}}), $$
where ${s_{rep,l}}$, ${s_{rep,r}}$, ${s_{len}}$, ${s_{rig}}$, and ${s_{cop}}$ are the weight coefficients of the related error components. This function can be solved by the Levenberg–Marquardt algorithm implemented in Minpack [26]. Thereafter, the final results of the BCC, namely, ${{\boldsymbol A}_l}$, ${{\boldsymbol A}_r}$, ${{\boldsymbol D}_l}$, ${{\boldsymbol D}_r}$, ${{\boldsymbol R}_{l,r}}$, and ${{\boldsymbol T}_{l,r}}$, can be obtained.
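One common way to hand a weighted multi-term objective like Eq. (11) to a least-squares Levenberg–Marquardt solver (e.g. `scipy.optimize.least_squares` with `method="lm"`, which wraps the same Minpack routines) is to stack the weighted residual vectors; since such solvers minimize the sum of squared residuals, scaling a residual by $\sqrt{s}$ weights its squared contribution by $s$. The sketch below shows only this assembly step with placeholder residual functions, not the paper's actual residuals:

```python
import numpy as np

# Hedged sketch: stack weighted residuals from the five constraint terms
# (s_rep_l, s_rep_r, s_len, s_rig, s_cop) into one vector for an LM solver.

def stacked_residuals(params, residual_fns, weights):
    """residual_fns: callables mapping the parameter vector to 1-D residual
    arrays; weights: matching scalar weight coefficients."""
    parts = [np.sqrt(w) * np.atleast_1d(f(params))
             for f, w in zip(residual_fns, weights)]
    return np.concatenate(parts)
```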

The flow of the proposed calibration method is described in Fig. 7.

Fig. 7. Flowchart of the proposed method.

4. Experiment

The setup of the BCC experiment is shown in Fig. 8(a). The binocular vision system established in this work is composed of two cameras (MER-231-41U3C) with two lenses (12 mm focal length). The resolution of each camera is 1920 pixels ${\times}$ 1200 pixels. The stereo target used in the decoding and calibration experiments is shown in Fig. 8(b). The ranges of the CNCUs on planar targets 1 to 12 are (0–8), (9–17), (18–26), (27–35), (40–48), (49–57), (68–76), (77–85), (86–94), (95–103), (120–128), and (129–137), respectively. The interval distance and the side length of the coded units are both 13 mm. No two coded planar targets on the stereo target have the same posture.

Fig. 8. (a) Setup of BCC experiment and (b) stereo target used in the decoding and calibration experiments.

4.1 Decoding experiments

We conduct 12 decoding experiments to verify the performance of our self-developed decoding program in computing the CNCCs in entirely and partially exposed target images. Four of the decoding results are shown in Fig. 9.

Fig. 9. Four of the decoding results in the decoding experiments; (a), (b), and (c) are implemented with images of the partially occluded target, and (d) is conducted with an image of the entirely exposed target.

The decoding experiments show that our program can detect the calibration corners and accurately compute their coded numbers even in the case of the partial projection of the stereo target, which allows a precise matching of homonymous calibration corners in a BCC process using partially exposed target images.

4.2 Calibration experiments

In this section, we conduct two groups of binocular calibration experiments. In the 1st group, 15 calibration trials are implemented using the proposed method with the stereo target (Fig. 8(b)). Certain calibration images may contain a partially exposed stereo target, as shown in Fig. 10. In the 2nd group, 15 calibration trials are conducted using Zhang’s method (Calibration Toolbox [27]) with a chessboard pattern that contains 6${\times}$6 corners. To ensure the effectiveness of the comparison, the distance between every two adjacent corners on the chessboard is 13 mm. In each trial in the 2nd group, we capture 12 pairs of images of the chessboard pattern. One of the comparative results of the IPBC and EPBC is shown in Table 1.

Fig. 10. (a) Left and (b) right target images used in one of the trials in the 1st group.

Table 1. One of the comparisons of the IPBC and EPBC

The comparison between the absolute reprojection errors of the left camera (${E_{rep,l}}$) and right camera (${E_{rep,r}}$) computed by the proposed method and those calculated by Zhang’s method is described in Fig. 11. As shown in Figs. 11(a) and 11(c), the mean absolute reprojection errors of the left camera (${\delta _{rep,l}}$) and right camera (${\delta _{rep,r}}$) calculated by our method are greatly reduced, with less error fluctuation, compared with those by Zhang’s method. ${\delta _{rep,l}}$ and ${\delta _{rep,r}}$ of the proposed method are below 0.0303 and 0.0277 pixels, respectively, whereas the maxima of ${\delta _{rep,l}}$ and ${\delta _{rep,r}}$ computed by Zhang’s method are 0.0964 and 0.0957 pixels, respectively. As shown in Fig. 11(b), in the 7th trial, where ${\delta _{rep,l}}$ of Zhang’s method and that of the proposed method are closest among all 15 trials, ${\delta _{rep,l}}$ of our method is 0.0303 pixels, 49.83% lower than that of Zhang’s method, and the SD of ${E_{rep,l}}$ is reduced by 50.89%. Similarly, in the 14th trial, ${\delta _{rep,r}}$ of our method is reduced by 59.17% and the SD of ${E_{rep,r}}$ by 68.66% compared with those of Zhang’s method, as shown in Fig. 11(d). The specific comparative results of ${E_{rep,l}}$ and ${E_{rep,r}}$ in each trial are detailed in Tables 2 and 3, respectively.

Fig. 11. (a) and (c) are the comparative results of Erep,l and Erep,r, respectively; (b) shows the comparison of the Erep,l in the 7th trial, and (d) shows the comparison of the Erep,r in the 14th trial.

Table 2. Comparative results of Erep,l in each calibration trial

Table 3. Comparative results of Erep,r in each calibration trial

The above analysis shows that the proposed method produces a more precise homography relation between the 2D image calibration corners and their corresponding 3D spatial corners than Zhang’s method.

4.3 3D measurement experiment

We implement 15 3D measurement trials to assess the precision of the BCC performed by Zhang’s method and by the proposed method more comprehensively. In the ith trial (i = 1, 2, …, 15), the binocular camera captures one pair of images of the stereo target, on which the interval distance and the side lengths of the coded units are 10 mm, as shown in Fig. 12. Thereafter, the two sets of IPBC and EPBC computed in the ith trial of Section 4.2 by Zhang’s method and by the proposed method, respectively, are used to reconstruct the stereo target in these images and to calculate the corresponding 3D errors (absolute length error ${E_{len}}$, absolute coplanar error ${E_{cop}}$, and absolute right-angle error ${E_{rig}}$).
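The three 3D error metrics can be made concrete with a short sketch. The helper names and the ideal 10 mm square below are illustrative assumptions, and `pts` stands for corners reconstructed by stereo triangulation:

```python
import numpy as np

def length_error(p, q, nominal):
    """Absolute length error: |reconstructed corner distance - known length|."""
    return abs(np.linalg.norm(p - q) - nominal)

def coplanar_errors(pts):
    """Distances of reconstructed corners to their least-squares plane."""
    centered = pts - pts.mean(axis=0)
    normal = np.linalg.svd(centered)[2][-1]   # direction of least variance
    return np.abs(centered @ normal)

def right_angle_error(corner, p, q):
    """|angle between edges (p - corner) and (q - corner) - 90 deg|."""
    u, v = p - corner, q - corner
    c = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return abs(np.degrees(np.arccos(np.clip(c, -1.0, 1.0))) - 90.0)

# An ideal 10 mm square: all three errors are (numerically) zero.
pts = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0],
                [10.0, 10.0, 0.0], [0.0, 10.0, 0.0]])
print(length_error(pts[0], pts[1], 10.0))          # 0.0
print(coplanar_errors(pts).max())                  # ~0
print(right_angle_error(pts[0], pts[1], pts[3]))   # ~0
```

Applied to an imperfectly reconstructed target, these quantities are nonzero, and their per-trial means and SDs are what Tables 4 to 6 report.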

Fig. 12. Stereo target used in the 3D measurement experiment.

Figure 13(a) details the comparative results of ${E_{len}}$. In each trial, the mean absolute length error ${\delta _{len}}$ and the SD of ${E_{len}}$ of our method considerably decrease compared with those of Zhang’s method. ${\delta _{len}}$ of our method is below 0.0580 mm, whereas the maximum ${\delta _{len}}$ computed by Zhang’s method is 0.1627 mm. As shown in Fig. 13(b), in the 13th trial, where ${\delta _{len}}$ of Zhang’s method and that of the proposed method are closest among all 15 trials, ${\delta _{len}}$ and the SD of ${E_{len}}$ calculated by our method are 0.0494 and 0.0059 mm, respectively, reductions of 57.49% and 71.50% compared with Zhang’s method. The specific comparative results of ${E_{len}}$ in each trial are detailed in Table 4: ${\delta _{len}}$ and the SD of ${E_{len}}$ calculated by our method are reduced by over 54.12% and 65.48%, respectively, compared with those of Zhang’s method.

Fig. 13. (a) describes the comparative results of the Elen; (b) details the comparative results of the Elen in the 13th trial.

Table 4. Comparative results of Elen in each 3D measurement trial

The comparison between the ${E_{cop}}$ of Zhang’s method and that of ours is described in Fig. 14(a). The mean absolute coplanar error ${\delta _{cop}}$ in each trial computed by the proposed method is much lower than that of Zhang’s method, and our method produces less coplanar error fluctuation. ${\delta _{cop}}$ of the two methods are closest in the 8th trial. As shown in Fig. 14(b), ${\delta _{cop}}$ of our method in the 8th trial is 0.0419 mm, a reduction of 45.09% compared with Zhang’s method, and the SD of ${E_{cop}}$ in this trial is reduced by 76.19%. The detailed comparative results of ${E_{cop}}$ in each trial are shown in Table 5: ${\delta _{cop}}$ of our method is decreased by at least 45.06% compared with that of Zhang’s method, and the SD of ${E_{cop}}$ is reduced by over 61.33%.

Fig. 14. (a) shows the comparative results of the absolute coplanar error; (b) details the comparative results of the absolute coplanar error in the 8th trial.

Table 5. Comparative results of Ecop in each 3D measurement trial

Figure 15(a) describes the comparative results of ${E_{rig}}$. The proposed method effectively reduces ${E_{rig}}$ and its fluctuation. The mean absolute right-angle error ${\delta _{rig}}$ of our method is below 0.2177°, whereas the highest ${\delta _{rig}}$ produced by Zhang’s method reaches 0.4631°. ${\delta _{rig}}$ of the two methods are closest in the 1st trial. As shown in Fig. 15(b), ${\delta _{rig}}$ of our method in the 1st trial is 0.2039°, a reduction of 41.58% compared with Zhang’s method, and the SD of ${E_{rig}}$ in this trial is reduced by 72.44%. The comparative results of ${E_{rig}}$ in each trial are described in Table 6: ${\delta _{rig}}$ of the proposed method is reduced by over 41.58%, and the SD of ${E_{rig}}$ by over 69.36%, compared with those of Zhang’s method.

Fig. 15. (a) shows comparative results of the Erig; (b) details the comparative results of the Erig in the 1st trial.

Table 6. Comparative results of Erig in each 3D measurement trial

The above analysis shows that the proposed method yields smaller 3D errors with much less error fluctuation. In other words, the stereo target reconstructed with the IPBC and EPBC of the proposed method is closer to the 3D ground truth than that obtained with Zhang’s method. From the analyses in Sections 4.2 and 4.3, we conclude that the proposed calibration method based on the stereo target can substantially improve the precision of the BCC.

4.4 Supplementary experiment

The comparative experiments in Sections 4.2 and 4.3 demonstrate the advantages of the proposed calibration method over Zhang’s well-known method. To make this work more convincing, a supplementary experiment is carried out to compare our parameter optimization method with those proposed by J. Zhang [21] and Cui [24] and to verify the effectiveness of our optimization method. Notably, the three parameter optimization methods apply different kinds of error components in their objective functions. Therefore, we utilize the three sets of IPBC and EPBC optimized by these three methods to measure standard blocks and compare the resulting measurement errors.
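Schematically, a multi-constraint objective of the kind compared here can be written as follows; the weights $\lambda_i$ and the exact grouping of terms are illustrative, not the precise formulation of any of the three methods:

$$\min_{IPBC,\,EPBC}\; E_{rep} + \lambda_1 E_{len} + \lambda_2 E_{cop} + \lambda_3 E_{rig},$$

where $E_{rep}$ collects the squared reprojection residuals of both cameras and the remaining terms penalize deviations of the reconstructed target from its known lengths, planarity, and right angles; the three compared methods retain different subsets of such error components.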

In this section, we use our target and theory (detailed in Section 3.1) to calculate the initial parameters of the binocular camera. The initial parameters are then optimized by the proposed optimization method, J. Zhang’s optimization method [21], and Cui’s optimization method [24], respectively. We use these three sets of IPBC and EPBC to measure six standard blocks with standard lengths of 90, 80, 70, 60, 50, and 40 mm; each set of IPBC and EPBC is used to measure these blocks 15 times.
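Measuring a block reduces to triangulating its two end points in both views and taking their Euclidean distance. The linear (DLT) triangulation below is a standard sketch with illustrative camera matrices, not the calibrated parameters of this experiment:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two pixel observations."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    X = np.linalg.svd(A)[2][-1]          # null vector of the DLT system
    return X[:3] / X[3]

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Illustrative rectified pair: identical intrinsics, 100 mm baseline.
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 480.0],
              [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-100.0], [0.0], [0.0]])])

# Two end points of a 90 mm standard block, 500 mm in front of the rig.
A_end, B_end = np.array([0.0, 0.0, 500.0]), np.array([90.0, 0.0, 500.0])
A_rec = triangulate(P1, P2, project(P1, A_end), project(P2, A_end))
B_rec = triangulate(P1, P2, project(P1, B_end), project(P2, B_end))
print(np.linalg.norm(A_rec - B_rec))  # ~90.0 mm
```

With noise-free pixels the 90 mm length is recovered almost exactly; in practice the absolute measurement error is |measured length − standard length|, and averaging it over the repetitions gives ${\delta _{mea}}$.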

As shown in Fig. 16, the mean absolute measurement error ${\delta _{mea}}$ of the proposed optimization method is below 0.0793 mm, while the maxima of ${\delta _{mea}}$ computed by J. Zhang’s and Cui’s optimization methods are 0.2044 mm and 0.1656 mm, respectively. The average of ${\delta _{mea}}$ computed by our optimization method is 0.0668 mm, representing reductions of 62.50% and 55.74% relative to J. Zhang’s and Cui’s optimization methods, respectively. Accordingly, the supplementary experiment confirms the advantage of our optimization method in improving the precision of the BCC.

Fig. 16. Comparative results of the mean absolute measurement error ${\delta _{mea}}$.

5. Conclusion

This study proposed a high-precision and rapid BCC method using a single image per camera. The IPBC and EPBC can be calibrated with only one pair of target images by using our stereo target, which simplifies the calibration process. The decoding method of the proposed target has advantages in terms of stability, speed, and precision: it obtains the sub-pixel corner coordinates and CNCCs against a complex background even in the case of incomplete target projection, which allows high-precision and rapid binocular calibration using images of a partially exposed target. Moreover, after the calibration model based on the proposed stereo target is established and the initial values of the IPBC and EPBC are calculated, the 3D constraints (length, coplanar, and right-angle constraints), in addition to the reprojection constraint, are engaged in the optimization of the IPBC and EPBC to enhance the accuracy of the calibration. The decoding experiments validate the performance of our method in obtaining the sub-pixel coordinates and the CNCCs under both complete and incomplete projection of our target. The calibration, measurement, and supplementary experiments show that the proposed method remarkably improves the precision of the BCC. Although the proposed method simplifies the calibration process and improves the precision of binocular camera calibration, the local position of the target projected on the imaging sensor may affect the optimization of the distortion coefficients, especially for lenses with high distortion. We will investigate this problem in future work.

Funding

Fundamental Research Funds for the Central Universities (JZ2020HGQB0226); School-enterprise Cooperation Funds of Anhui Province (W2020JSKF0519).

Acknowledgments

This work was supported by the Fundamental Research Funds for the Central Universities (JZ2020HGQB0226) and the School-enterprise Cooperation Funds of Anhui Province (W2020JSKF0519).

Disclosures

The authors declare no conflict of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. B. Guan, Y. Yu, A. Su, Y. Shang, and Q. Yu, “Self-calibration approach to stereo cameras with radial distortion based on epipolar constraint,” Appl. Opt. 58(31), 8511–8521 (2019). [CrossRef]  

2. J. Sun, X. Cheng, and Q. Fan, “Camera calibration based on two-cylinder target,” Opt. Express 27(20), 29319–29331 (2019). [CrossRef]  

3. Z. Y. Zhang, “Camera calibration with one-dimensional objects,” IEEE Trans. Pattern Anal. Machine Intell. 26(7), 892–899 (2004). [CrossRef]  

4. J. A. D. Franca, M. R. Stemmer, M. B. D. M. Franca, and J. C. Piai, “A new robust algorithmic for multi-camera calibration with a 1D object under general motions without prior knowledge of any camera intrinsic parameter,” Pattern Recognition 45(10), 3636–3647 (2012). [CrossRef]  

5. Y. W. Lv, W. Liu, and X. P. Xu, “Methods based on 1D homography for camera calibration with 1D objects,” Appl. Opt. 57(9), 2155–2164 (2018). [CrossRef]  

6. Z. Y. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Machine Intell. 22(11), 1330–1334 (2000). [CrossRef]  

7. H. Zhu, Y. Li, X. Liu, X. Yin, Y. Shao, Y. Qian, and J. Tan, “Camera calibration from very few images based on soft constraint optimization,” J. Franklin Inst. 357(4), 2561–2584 (2020). [CrossRef]  

8. X. Yang, X. Chen, and J. Xi, “Perspective transformation based-initial value estimation for the speckle control points matching in an out-of-focus camera calibration using a synthetic speckle pattern,” Opt. Express 30(2), 2310–2325 (2022). [CrossRef]  

9. B. Chen and B. Pan, “Camera calibration using synthetic random speckle pattern and digital image correlation,” Optics and Lasers in Engineering 126, 105919 (2020). [CrossRef]  

10. R. Y. Tsai, “A versatile camera calibration technique for high-accuracy 3d machine vision metrology using off-the-shelf tv camera and lenses,” IEEE J. Robot. Automat. 3(4), 323–344 (1987). [CrossRef]  

11. J. Zhang, H. Yu, H. Deng, Z. Chai, M. Ma, and X. Zhong, “A Robust and Rapid Camera Calibration Method by One Captured Image,” IEEE Trans. Instrum. Meas. 68(10), 4112–4121 (2019). [CrossRef]  

12. Z. Xiaowen, R. Yongfeng, Z. Guoyong, S. Yanhu, C. Chengqun, and L. Fan, “Camera calibration method for solid spheres based on triangular primitives,” Precis. Eng. 65, 91–102 (2020). [CrossRef]  

13. F. Abedi, Y. Yang, and Q. Liu, “Group geometric calibration and rectification for circular multi-camera imaging system,” Opt. Express 26(23), 30596–30613 (2018). [CrossRef]  

14. Y. W. Wang, Y. J. Wang, L. Liu, and X. C. Chen, “Defocused camera calibration with a conventional periodic target based on fourier transform,” Opt. Lett. 44(13), 3254–3257 (2019). [CrossRef]  

15. Z. Gao, M. Zhu, and J. Yu, “A Novel Camera Calibration Pattern Robust to Incomplete Pattern Projection,” IEEE Sens. J. 21(8), 10051–10060 (2021). [CrossRef]  

16. Y. Yin, H. Zhu, P. Yang, Z. Yang, K. Liu, and H. Fu, “Robust and accuracy calibration method for a binocular camera using a coding planar target,” Opt. Express 30(4), 6107–6128 (2022). [CrossRef]  

17. F. J. Romero-Ramirez, R. Muñoz-Salinas, and R. Medina-Carnicer, “Speeded up detection of squared fiducial markers,” Image and Vision Computing 76, 38–47 (2018). [CrossRef]  

18. S. Garrido-Jurado, R. Muñoz-Salinas, F. J. Madrid-Cuevas, and M. J. Marín-Jiménez, “Automatic generation and detection of highly reliable fiducial markers under occlusion,” Pattern Recognition 47(6), 2280–2292 (2014). [CrossRef]  

19. J. Zhang, X. Wang, M. Ma, F. Li, H. Liu, and H. Cui, “Camera calibration for anamorphic lenses with three-dimensional targets,” Appl. Opt. 59(2), 324–332 (2020). [CrossRef]  

20. Z. Zhang, R. Zhao, E. Liu, K. Yan, and Y. Ma, “A single-image linear calibration method for camera,” Measurement 130, 298–305 (2018). [CrossRef]  

21. J. Zhang, J. Zhu, H. Deng, Z. Chai, M. Ma, and X. Zhong, “Multi-camera calibration method based on a multi-plane stereo target,” Appl. Opt. 58(34), 9353–9359 (2019). [CrossRef]  

22. M. Yang, X. B. Chen, and C. Y. Yu, “Camera calibration using a planar target with pure translation,” Appl. Opt. 58(31), 8362–8370 (2019). [CrossRef]  

23. A. S. Poulin-Girard, S. Thibault, and D. Laurendeau, “Influence of camera calibration conditions on the accuracy of 3D reconstruction,” Opt. Express 24(3), 2678–2686 (2016). [CrossRef]  

24. Y. Cui, F. Zhou, Y. Wang, L. Liu, and H. Gao, “Precise calibration of binocular vision system used for vision measurement,” Opt. Express 22(8), 9134–9149 (2014). [CrossRef]  

25. A. Geiger, F. Moosmann, O. Car, and B. Schuster, “Automatic camera and range sensor calibration using a single shot,” in 2012 IEEE International Conference on Robotics and Automation (2012).

26. J. More, “The levenberg-marquardt algorithm, implementation and theory,” In G. A. Watson, Numerical Analysis, Lecture Notes in Mathematics 630. Springer-Verlag, 1977.

27. “Camera Calibration Toolbox for Matlab,” http://www.vision.caltech.edu/bouguetj/calib_doc/.

[Crossref]

Image and Vision Computing (1)

F. J. Romero-Ramirez, R. Muñoz-Salinas, and R. Medina-Carnicer, “Speeded up detection of squared fiducial markers,” Image and Vision Computing 76, 38–47 (2018).
[Crossref]

J. Franklin Inst. (1)

H. Zhu, Y. Li, X. Liu, X. Yin, Y. Shao, Y. Qian, and J. Tan, “Camera calibration from very few images based on soft constraint optimization,” J. Franklin Inst. 357(4), 2561–2584 (2020).
[Crossref]

Measurement (1)

Z. Zhang, R. Zhao, E. Liu, K. Yan, and Y. Ma, “A single-image linear calibration method for camera,” Measurement 130, 298–305 (2018).
[Crossref]

Opt. Express (6)

Opt. Lett. (1)

Optics and Lasers in Engineering (1)

B. Chen and B. Pan, “Camera calibration using synthetic random speckle pattern and digital image correlation,” Optics and Lasers in Engineering 126, 105919 (2020).
[Crossref]

Pattern Recognition (2)

J. A. D. Franca, M. R. Stemmer, M. B. D. M. Franca, and J. C. Piai, “A new robust algorithmic for multi-camera calibration with a 1D object under general motions without prior knowledge of any camera intrinsic parameter,” Pattern Recognition 45(10), 3636–3647 (2012).
[Crossref]

S. Garrido-Jurado, R. Muñoz-Salinas, F. J. Madrid-Cuevas, and M. J. Marín-Jiménez, “Automatic generation and detection of highly reliable fiducial markers under occlusion,” Pattern Recognition 47(6), 2280–2292 (2014).
[Crossref]

Precis. Eng. (1)

Z. Xiaowen, R. Yongfeng, Z. Guoyong, S. Yanhu, C. Chengqun, and L. Fan, “Camera calibration method for solid spheres based on triangular primitives,” Precis. Eng. 65, 91–102 (2020).
[Crossref]

Other (3)

A. Geiger, F. Moosmann, O. Car, and B. Schuster, “Automatic camera and range sensor calibration using a single shot,” in 2012 IEEE International Conference on Robotics and Automation (2012).

J. More, “The levenberg-marquardt algorithm, implementation and theory,” In G. A. Watson, Numerical Analysis, Lecture Notes in Mathematics 630. Springer-Verlag, 1977.

“Camera Calibration Toolbox for Matlab,” http://www.vision.caltech.edu/bouguetj/calib_doc/ .

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.



Figures (16)

Fig. 1. Schematic of the structure of the proposed stereo target.
Fig. 2. Structure of a coded unit and the coding principle of calibration corners.
Fig. 3. Principle of establishing the world coordinate system on a coded planar target; (a) and (b) are the planar targets on the outer and inner stereo target, respectively.
Fig. 4. Schematic flow of decoding a stereo target.
Fig. 5. Schematic flow of the marker decoding method (MDM).
Fig. 6. Binocular calibration model of the proposed method.
Fig. 7. Flowchart of the proposed method.
Fig. 8. (a) Setup of the BCC experiment and (b) the stereo target used in the decoding and calibration experiments.
Fig. 9. Four of the decoding results in the decoding experiments; (a), (b), and (c) are implemented with images of the partially occluded target; (d) is conducted with an image of the entirely exposed target.
Fig. 10. (a) Left and (b) right target images used in one of the trials in the 1st group.
Fig. 11. (a) and (c) are the comparative results of $E_{rep,l}$ and $E_{rep,r}$, respectively; (b) shows the comparison of $E_{rep,l}$ in the 7th trial, and (d) shows the comparison of $E_{rep,r}$ in the 14th trial.
Fig. 12. Stereo target used in the 3D measurement experiment.
Fig. 13. (a) describes the comparative results of $E_{len}$; (b) details the comparative results of $E_{len}$ in the 13th trial.
Fig. 14. (a) shows the comparative results of the absolute coplanar error; (b) details the comparative results of the absolute coplanar error in the 8th trial.
Fig. 15. (a) shows the comparative results of $E_{rig}$; (b) details the comparative results of $E_{rig}$ in the 1st trial.
Fig. 16. Comparative results of the mean absolute measurement error $\delta_{mea}$.

Tables (6)

Table 1. One of the comparisons of the IPBC and EPBC
Table 2. Comparative results of $E_{rep,l}$ in each calibration trial
Table 3. Comparative results of $E_{rep,r}$ in each calibration trial
Table 4. Comparative results of $E_{len}$ in each 3D measurement trial
Table 5. Comparative results of $E_{cop}$ in each 3D measurement trial
Table 6. Comparative results of $E_{rig}$ in each 3D measurement trial

Equations (12)


$$e = (\xi_0, \xi_1, \cdots, \xi_{11})(2^0, 2^1, \cdots, 2^{11})^T,$$
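As a minimal illustration of the decoding equation above (a sketch with a hypothetical function name, not the authors' implementation), the 12 binary cells $(\xi_0, \ldots, \xi_{11})$ of a coded unit map to the unique corner number $e = \sum_i \xi_i 2^i$:

```python
def decode_corner_number(bits):
    """Decode the 12 binary cells (xi_0, ..., xi_11) of a coded unit
    into the unique corner number e = sum_i xi_i * 2^i."""
    if len(bits) != 12:
        raise ValueError("a coded unit carries exactly 12 bits")
    # xi_i is the coefficient of 2^i, so shift each bit by its index
    return sum(bit << i for i, bit in enumerate(bits))
```

For example, the bit pattern $(1, 1, 0, \ldots, 0)$ decodes to $1 \cdot 2^0 + 1 \cdot 2^1 = 3$.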
$$s\hat{m}_l = A_l[R_{l,1} \;\; T_{l,1}]\hat{M} = A_l[r_{l,1}^1 \;\; r_{l,1}^2 \;\; r_{l,1}^3 \;\; T_{l,1}]\begin{bmatrix} X \\ Y \\ 0 \\ 1 \end{bmatrix} = A_l[r_{l,1}^1 \;\; r_{l,1}^2 \;\; T_{l,1}]\begin{bmatrix} X \\ Y \\ 1 \end{bmatrix} = H_{l,1}\begin{bmatrix} X \\ Y \\ 1 \end{bmatrix},$$

$$\begin{cases} x_d = x + x(k_1 r^2 + k_2 r^4) + 2p_1 xy + p_2(r^2 + 2x^2) \\ y_d = y + y(k_1 r^2 + k_2 r^4) + 2p_1 xy + p_2(r^2 + 2y^2) \end{cases},$$

$$\begin{cases} R_{l,r} = \dfrac{1}{12}\displaystyle\sum_{i=1}^{12} \left( R_{r,i}(R_{l,i})^{-1} \right) \\ T_{l,r} = \dfrac{1}{12}\displaystyle\sum_{i=1}^{12} \left( T_{r,i} - R_{r,i}(R_{l,i})^{-1} T_{l,i} \right) \end{cases}.$$

$$\begin{cases} J_{rep,l} = \displaystyle\sum_{i=1}^{12} \sum_{j=1}^{4n_{i,l}} \left\| c_{i,j,l} - \tilde{c}_{i,j,l}(A_l, R_{l,i}, T_{l,i}, D_l, C_{i,j,l}) \right\|^2 \\ J_{rep,r} = \displaystyle\sum_{i=1}^{12} \sum_{j=1}^{4n_{i,r}} \left\| c_{i,j,r} - \tilde{c}_{i,j,r}(A_r, R_{r,i}, T_{r,i}, D_r, C_{i,j,r}) \right\|^2 \end{cases},$$

$$J_{len} = \sum_{i=1}^{12} \sum_{j=1}^{n_i} \sum_{k=1}^{3} \left| L_{act} - d(c_{i,j,k}, c_{i,j,k+1}) \right| + \sum_{i=1}^{12} \sum_{j=1}^{n_i} \left| L_{act} - d(c_{i,j,1}, c_{i,j,4}) \right|,$$

$$d(a, b) = \sqrt{(a.x - b.x)^2 + (a.y - b.y)^2 + (a.z - b.z)^2}.$$

$$\begin{aligned} J_{rig} = {} & \sum_{i=1}^{12} \sum_{j=1}^{n_i} \sum_{k=1}^{2} \left| 90 - \varphi(c_{i,j,k+1} - c_{i,j,k},\; c_{i,j,k+2} - c_{i,j,k+1}) \right| \\ & + \sum_{i=1}^{12} \sum_{j=1}^{n_i} \left| 90 - \varphi(c_{i,j,4} - c_{i,j,1},\; c_{i,j,2} - c_{i,j,1}) \right| \\ & + \sum_{i=1}^{12} \sum_{j=1}^{n_i} \left| 90 - \varphi(c_{i,j,3} - c_{i,j,4},\; c_{i,j,1} - c_{i,j,4}) \right|, \end{aligned}$$

$$\varphi(v_1, v_2) = \frac{180}{\pi} \arccos\left( \frac{v_1 \cdot v_2}{|v_1||v_2|} \right).$$

$$J_{cop} = \sum_{i=1}^{12} \sum_{j=1}^{n_i} \sum_{k=1}^{4} \left| D(F_i, c_{i,j,k}) \right|,$$

$$D(F, c) = \frac{F\hat{c}}{\sqrt{a_1^2 + a_2^2 + a_3^2}},$$

$$f_{op} = \min\left( s_{rep,l} J_{rep,l} + s_{rep,r} J_{rep,r} + s_{len} J_{len} + s_{rig} J_{rig} + s_{cop} J_{cop} \right),$$
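The geometric residuals that enter the multi-constraint objective — the corner spacing $d(a,b)$, the edge angle $\varphi(v_1,v_2)$, and the point-to-plane distance $D(F,c)$ — can be sketched as follows. This is a minimal illustration with hypothetical function names, not the authors' implementation:

```python
import math

def dist3d(a, b):
    # d(a, b): Euclidean distance between two reconstructed 3D corners
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def angle_deg(v1, v2):
    # phi(v1, v2): angle in degrees between two edge vectors
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    # clamp against floating-point drift before arccos
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))

def point_plane_dist(F, c):
    # D(F, c): signed distance from corner c to the fitted plane
    # F = (a1, a2, a3, a4), i.e. a1*x + a2*y + a3*z + a4 = 0
    a1, a2, a3, a4 = F
    x, y, z = c
    return (a1 * x + a2 * y + a3 * z + a4) / math.sqrt(a1**2 + a2**2 + a3**2)
```

The length, rigidity, and coplanarity costs then accumulate $|L_{act} - d(\cdot)|$, $|90 - \varphi(\cdot)|$, and $|D(\cdot)|$ over the four corners of every detected coded unit, and the weighted sum is minimized jointly with the reprojection terms.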