
Stratified camera calibration algorithm based on the calibrating conic

Open Access

Abstract

In computer vision, camera calibration is essential for photogrammetric measurement. We propose a new stratified camera calibration method based on geometric constraints. This paper establishes several new theorems for the 2D projective transformation: (1) there exists a family of lines whose parallelism is preserved under a 2D projective transformation, and these lines are parallel to the image of the line at infinity; (2) there is exactly one line whose perpendicularity to this family of parallel lines is preserved under a 2D projective transformation, and the principal point lies on this line. From the image of the line at infinity and the dual conic of the circular points, a closed-form solution for the line passing through the principal point is derived. The angle between the target board and the image plane, which influences camera calibration, is then computed. We propose a new geometric interpretation of the target board’s pose and a corresponding solution method. To obtain appropriate poses of the target board for calibration, we propose a visual pose guide (VPG) of the target board system that guides the user in moving the target board to acquire suitable calibration images. The expected homography is defined, and its solution method is derived. Experimental results with synthetic and real data verify the correctness and validity of the proposed method.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Camera calibration is a crucial procedure in photogrammetric measurement and computer vision, with applications such as 3D reconstruction [1–14], structure from motion (SFM) [15–32], and simultaneous localization and mapping (SLAM) [33–51]. Calibration methods divide into target-based methods and methods without a target, that is, self-calibration [52], such as calibration based on the absolute conic [53], calibration based on the absolute dual quadric [54], and calibration based on pure rotation [55]. The drawbacks of self-calibration are its low accuracy and poor robustness. Among target-based approaches, there are several methods, such as the DLT method [56] and the radial-constraint-based method [57]; however, fabricating a 3D target is difficult, and such a target is inconvenient to carry. Furthermore, self-occlusion frequently occurs during the calibration procedure. Zhang proposed a calibration algorithm that combines self-calibration and target-based methods [58]. Instead of a 3D target, a 2D planar target is employed, and calibration is completed based on the relationship between the image of the absolute conic and the camera intrinsic parameters.

The main drawback of Zhang’s method is that the camera intrinsic parameters are treated as a single algebraic entity. In reality, intrinsic parameters such as the focal length and the principal point play independent roles in the imaging geometry. A method that decouples the principal point from the focal length therefore better reflects the geometric essence of the problem and can prevent errors in one parameter from contaminating the other. One paper [59] proposed estimating the principal point from vanishing points. Another paper [60] proposed a calibration method in which the principal point and the focal length are calibrated separately, but its derivation and computation are complicated because they involve many coordinate system transformations. In addition, that method estimates the focal length from only one image, leading to larger estimation fluctuations.

We propose a stratified camera calibration method based on the calibrating Conic that computes the principal point and the focal length separately and introduces new theorems for the 2D projective transformation. The principal point is estimated on the basis of these theorems; after that, we derive a linear solution for the focal length based on the calibrating Conic.

The pose of the target board has a major influence on camera calibration. Triggs [61] investigated how the angle between the target board plane and the image plane governs the propagation of error into the focal length. Sturm [62] described all singularities of target-based calibration. To calibrate the camera’s distortion, the images of the target board should cover the camera’s full field of view, especially its borders. This requirement is easy to satisfy with intuition and user experience. For estimating the principal point, the lines passing through the principal point should be distributed in all directions in the image plane. For the focal length estimation, the angle between the target board plane and the image plane should be kept near 45°, as suggested by Zhang [58]. Both of these demands are difficult to meet with only the intuition and experience of the user. Thus, a visual pose guide (VPG) of the target board system is developed; the requirements for estimating the principal point and the focal length can then be met through a graphical user interface (GUI).

A new geometric interpretation of the target board’s pose is obtained, from which the pose of the target board can be estimated. Furthermore, the expected homography is derived, which is the projective transformation between the image plane and the expected target board plane; “expected” means that the angle between the image plane and the target board is in the vicinity of 45°. The actual and expected feature points of the target board are then displayed in real time by the VPG. From the expected homography, the expected line on which the principal point lies can be computed; these expected lines can be distributed evenly in the image plane and displayed in real time. Using the VPG, the user moves and adjusts the target board so that the actual line on which the principal point lies is as close as possible to the expected line, and so that the actual feature points are as close as possible to the expected feature points. Images are acquired once the expected pose of the target board is reached, which yields ideal target board images for calibration.

The paper is organized as follows. New mathematical theorems for the 2D projective transformation and the principle of calibration are elaborated in Section 2. The VPG and the related mathematical derivation are presented in Section 3. Section 4 presents simulations and experiments that verify the method’s correctness and robustness.

2. Calibration algorithm

2.1 Family of lines whose parallelism is invariant in 2D projective transformation

Figure 1 illustrates the relationship between the vanishing line of the target plane and the intersection line of the target plane and the image plane. The plane through the optical center $C$ parallel to ${\pi _T}$ is denoted by ${\pi _{vl}}$. The vanishing line ${l_\infty }$ is the intersection of ${\pi _{vl}}$ and ${\pi _I}$, and the intersection of ${\pi _T}$ and ${\pi _I}$ is ${l_{\pi \textrm{ - }I}}$. Because two parallel planes cut by a third plane produce parallel intersection lines, ${l_\infty }$ is parallel to ${l_{\pi \textrm{ - }I}}$.

Fig. 1. Relation of vanishing line ${l_\infty }$ and intersection line ${l_{\pi \textrm{ - }I}}$ between the target plane ${\pi _T}$ and image plane ${\pi _I}$

The target plane ${\pi _T}$ can be obtained by rotating the plane ${\pi _I}$ around the axis ${l_{\pi \textrm{ - }I}}$. On the target plane ${\pi _T}$ there is a family of parallel lines, denoted ${L_i}$, that are parallel to the image of the line at infinity ${l_\infty }$, which is obtained from the 2D projective transform between the target coordinate system and the image coordinate system:

$${l_\infty } = {H^{ - T}}{L_\infty } = {H^{ - T}}\left[ {\begin{array}{c} 0\\ 0\\ 1 \end{array}} \right]. $$
where ${L_\infty }$ is the line at infinity in the canonical Euclidean coordinate system.

The projection of line ${L_i}$ is ${l_i}$. These lines remain parallel under the 2D projective transform; that is, ${L_i}$ is parallel to ${l_i}$, so ${L_i}$, ${l_i}$, ${l_\infty }$, and ${l_{\pi \textrm{ - }I}}$ are all parallel, as shown in Fig. 2.

Fig. 2. Family of lines whose parallelism is invariant in 2D projective transformation

2.1.1 Algebraic proof

The image of the line at infinity is denoted by ${l_\infty }\textrm{ = }{\left[ {\begin{array}{ccc} a&b&1 \end{array}} \right]^T}$, and the homography is denoted by

$$H = \left[ {\begin{array}{ccc} {{h_1}}&{{h_2}}&{{h_3}}\\ {{h_4}}&{{h_5}}&{{h_6}}\\ {{h_7}}&{{h_8}}&{{h_9}} \end{array}} \right].$$

From Eq. (1), we obtain

$$\left[ {\begin{array}{c} 0\\ 0\\ 1 \end{array}} \right] = {H^T}{l_\infty } = {\left[ {\begin{array}{ccc} {{h_1}}&{{h_2}}&{{h_3}}\\ {{h_4}}&{{h_5}}&{{h_6}}\\ {{h_7}}&{{h_8}}&{{h_9}} \end{array}} \right]^T}\left[ {\begin{array}{c} a\\ b\\ 1 \end{array}} \right] = \left[ {\begin{array}{ccc} {{h_1}}&{{h_4}}&{{h_7}}\\ {{h_2}}&{{h_5}}&{{h_8}}\\ {{h_3}}&{{h_6}}&{{h_9}} \end{array}} \right]\left[ {\begin{array}{c} a\\ b\\ 1 \end{array}} \right]. $$

From Eq. (2), we obtain

$$\left[ {\begin{array}{cc} {{h_1}}&{{h_4}}\\ {{h_2}}&{{h_5}} \end{array}} \right]\left[ {\begin{array}{c} a\\ b \end{array}} \right] = \left[ {\begin{array}{c} { - {h_7}}\\ { - {h_8}} \end{array}} \right]. $$

An arbitrary line parallel to ${l_\infty }$ is denoted by ${l_i}\textrm{ = }{\left[ {\begin{array}{ccc} {sa}&{sb}&1 \end{array}} \right]^T}$, where $s$ is an arbitrary factor that is neither 0 nor 1; then, the corresponding line ${L_i}$ in the target plane ${\pi _T}$ is given by

$$\begin{array}{l} {L_i} = {H^T}{l_i}\\ {L_i} = \left[ {\begin{array}{ccc} {{h_1}}&{{h_4}}&{{h_7}}\\ {{h_2}}&{{h_5}}&{{h_8}}\\ {{h_3}}&{{h_6}}&{{h_9}} \end{array}} \right]\left[ {\begin{array}{c} {sa}\\ {sb}\\ 1 \end{array}} \right]\\ {L_i} = \left[ {\begin{array}{*{20}{c}} {s\left[ {\begin{array}{*{20}{c}} {{h_1}}&{{h_4}}\\ {{h_2}}&{{h_5}} \end{array}} \right]\left[ {\begin{array}{*{20}{c}} a\\ b \end{array}} \right]\textrm{ + }\left[ {\begin{array}{*{20}{c}} {{h_7}}\\ {{h_8}} \end{array}} \right]}\\ {s({{h_3}a + {h_6}b} )+ {h_9}} \end{array}} \right] \end{array}. $$

Substituting Eq. (3) into (4), we obtain

$${L_i} = \left[ {\begin{array}{*{20}{c}} {({1 - s} )\left[ {\begin{array}{*{20}{c}} {{h_7}}\\ {{h_8}} \end{array}} \right]}\\ {s({{h_3}a + {h_6}b} )+ {h_9}} \end{array}} \right]. $$

Because $s \ne 1$, ${L_i}(1)/{L_i}(2) \equiv {h_7}/{h_8}$: the slope of ${L_i}$ is invariant under variation of s. Hence, under a homography transform, there is a family of parallel lines whose parallelism is preserved; that is,

$$\{{{L_i},{L_j}|{L_i}\parallel {L_j}} \}\buildrel H \over \longrightarrow \{{{l_i},{l_j}|{l_i}\parallel {l_j}} \}$$
where ${L_i},{L_j} \in {\pi _T}$, ${l_i},{l_j} \in {\pi _I}$.
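This invariance can be checked numerically. The following sketch (Python with NumPy; the random homography and all variable names are illustrative assumptions, not part of the paper) transfers several lines $l_i = [sa, sb, 1]^T$ to the target plane and confirms that the ratio of the first two components of $L_i$ equals $h_7/h_8$ for every $s$, as Eq. (5) predicts.

```python
# A numeric check of Sec. 2.1 (sketch only): the random homography and all
# variable names are illustrative assumptions, not part of the paper.
import numpy as np

rng = np.random.default_rng(0)
H = rng.normal(size=(3, 3))                   # a generic (full-rank) homography
h7, h8 = H[2, 0], H[2, 1]

# Image of the line at infinity, Eq. (1), normalized to [a, b, 1]^T.
l_inf = np.linalg.inv(H).T @ np.array([0.0, 0.0, 1.0])
l_inf /= l_inf[2]
a, b = l_inf[0], l_inf[1]

for s in (0.5, 2.0, -3.0):                    # arbitrary factors s != 0, 1
    l_i = np.array([s * a, s * b, 1.0])       # a line parallel to l_inf
    L_i = H.T @ l_i                           # back-transferred line, Eq. (4)
    # Eq. (5) predicts L_i[0] / L_i[1] == h7 / h8, independent of s.
    assert np.isclose(L_i[0] / L_i[1], h7 / h8)
```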

2.2 Line through the principal point

There is a line on plane ${\pi _T}$ perpendicular to ${l_{\pi \textrm{ - }I}}$, denoted ${L_{{p_i}}}$. ${L_{{p_i}}}$ is projected onto the image plane ${\pi _I}$ as ${l_{{p_i}}}$. The plane through the optical center C and ${L_{{p_i}}}$ is denoted by ${\pi _{C - {l_{pi}}}}$; thus, ${l_{{p_i}}}$ is the intersection line of ${\pi _{C - {l_{pi}}}}$ and ${\pi _I}$, as shown in Fig. 3.

Fig. 3. Line that passes through the principal point and that is perpendicular to the image of the line at infinity ${l_\infty }$

There is a unique plane ${\pi _{C - {l_{p0}}}}$ in the plane family $\{{{\pi_{C\textrm{ - }{l_{pi}}}}} \}$ that is perpendicular to the line ${l_{\pi \textrm{ - }I}}$. This plane satisfies two conditions: it passes through the optical center C, and it is perpendicular to ${l_{\pi \textrm{ - }I}}$. The intersection line of ${\pi _{C - {l_{p0}}}}$ with the target board plane ${\pi _T}$ is ${L_{p0}}$, and its intersection line with ${\pi _I}$ is ${l_{{p_0}}}$. Thus, both ${L_{p0}}$ and ${l_{{p_0}}}$ lie on the plane ${\pi _{C - {l_{p0}}}}$, and both are perpendicular to ${l_{\pi \textrm{ - }I}}$.

The principal point p lies on the image plane ${\pi _I}$, and the line through C and p is ${l_{C - p}}$. There is a pencil of planes $\{{{\pi_{{l_{C - pi}}}}} \}$ that pass through ${l_{C - p}}$ and are perpendicular to the plane ${\pi _I}$. Intersecting this pencil of planes with ${\pi _I}$ yields a pencil of lines $\{{{l_{{\pi_{{l_{C - pi}} - {\pi_I}}}}}} \}$ that pass through the principal point p.

There is a unique plane ${\pi _{C - {l_{p0}}}}$ in the pencil $\{{{\pi_{{l_{C - pi}}}}} \}$ that passes through ${l_{C - p}}$ and is perpendicular to ${l_{\pi \textrm{ - }I}}$. Consequently, there is only one line ${l_{{p_0}}}$ in the pencil $\{{{l_{{\pi_{{l_{C - pi}} - {\pi_I}}}}}} \}$ that passes through the principal point p and is perpendicular to ${l_{\pi \textrm{ - }I}}$.

The intact projective schematic diagram of the lines ${L_{{p_i}}}$ and ${L_i}$ on the target plane ${\pi _T}$ is shown in Fig. 4.

Fig. 4. Intact projective schematic diagram of lines ${L_{{p_i}}}$ and ${L_i}$ on target plane ${\pi _T}$

In Fig. 4, C is the camera optical center and $p$ is the principal point in the image plane ${\pi _I}$. The projective transformation of the family of lines $\{{L|{L_i}\parallel {L_j}\parallel {L_\infty }} \}\buildrel H \over \longrightarrow \{{l|{l_i}\parallel {l_j}\parallel {l_\infty }} \}$ preserves parallelism. The family of lines perpendicular to the line family $\{{{L_i}\textrm{|}i = 0, 1,2\ldots } \}$ on the target plane $\pi_T$ is denoted by $\{{{L_{{p_i}}}\textrm{|}i = 0, 1,2\ldots } \}$; within it, the line whose perpendicularity to the family $\{{{L_i}\textrm{|}i = 0, 1,2\ldots } \}$ is preserved under the 2D projective transformation is ${L_{{p_0}}}$; that is,

$$\{{{L_{{p_0}}}|{L_{{p_0}}}\mathbf{\bot }{L_i},i = 0,1,2\ldots } \}\buildrel H \over \longrightarrow \{{{l_{{p_0}}}|{l_{{p_0}}}\mathbf{\bot }{l_i},i = 0,1,2\ldots } \}.$$
${l_{{p_0}}}$ lies on the image plane ${\pi _I}$ and passes through the principal point $p$; that is, ${l_{{p_0}}} \in {\pi _I}$, $p \in {l_{{p_0}}}$.

2.3 Solution of the line on which the principal point lies

In the canonical projective coordinate system, there are two fixed points (the circular points) on the line at infinity, whose coordinates are [63]

$$I = \left( {\begin{array}{c} 1\\ i\\ 0 \end{array}} \right),J = \left( {\begin{array}{c} 1\\ { - i}\\ 0 \end{array}} \right).$$

The conic

$$C_\infty ^\ast{=} I{J^T} + J{I^T}$$
is dual to the circular points. The conic is a degenerate (rank 2) line conic consisting of the two circular points. In a Euclidean coordinate system, it is given by
$$C_\infty ^\mathrm{\ast }\textrm{ = }\left( {\begin{array}{c} 1\\ i\\ 0 \end{array}} \right)\left( {\begin{array}{ccc} 1&{ - i}&0 \end{array}} \right) + \left( {\begin{array}{c} 1\\ { - i}\\ 0 \end{array}} \right)\left( {\begin{array}{ccc} 1&i&0 \end{array}} \right) = \left[ {\begin{array}{ccc} 1&0&0\\ 0&1&0\\ 0&0&0 \end{array}} \right].$$

From Section 2.2, we have ${l_{p0}}\,\bot\,{l_{\pi - I}}$ and ${L_{p0}}\,\bot\,{l_{\pi - I}}$. Because ${l_{\pi - I}}\parallel{l_\infty }$, we obtain ${l_{p0}} \bot {l_\infty }$. Setting a canonical Euclidean coordinate system on the image plane $\pi_I$, we obtain

$$l_{p0}^TC_\infty ^\ast {l_\infty } = 0. $$

On the image plane, arbitrarily select a line $l^{\prime}_{\infty}$ parallel to $l_\infty$. Similarly, we obtain

$$l_{p0}^TC_\infty ^\ast {l^{\prime}_\infty } = 0. $$

From Section 2.2, we have ${L_{p0}}\,\bot\,{l_{\pi - I}}$. Because ${L_i}\parallel{l_i}\parallel{l_{\pi - I}}\parallel{l_\infty }$, on the target plane $\pi_T$, where a canonical Euclidean coordinate system is set, we obtain

$$L_{p0}^TC_\infty ^\ast {L^{\prime}_\infty }\textrm{ = }0$$
where ${l^{\prime}_\infty }$ and ${L^{\prime}_\infty }$ are associated by ${l^{\prime}_\infty } = {H^{ - T}}{L^{\prime}_\infty }$, and ${l_{p0}}$ and ${L_{p0}}$ are associated by ${l_{p0}} = {H^{ - T}}{L_{p0}}$.

The dual conic of the circular points $C_\infty ^\ast $ on the target plane ${\pi _T}$, expressed in the canonical Euclidean coordinate system, is transformed to $C_\infty ^{\ast\prime}$ on the image plane ${\pi _I}$ in the projective coordinate system:

$$C_\infty ^{\ast\prime} = HC_\infty ^\ast {H^T}. $$

From Eqs. (8) and (9), we obtain

$$L_{p0}^TC_\infty ^\ast {L^{\prime}_\infty } = {({H^T}{l_{p0}})^T}C_\infty ^\ast {H^T}{l^{\prime}_\infty } = l_{p0}^T({HC_\infty ^\ast {H^T}} ){l^{\prime}_\infty } = l_{p0}^TC_\infty ^{\ast\prime}{l^{\prime}_\infty } = 0. $$

The reason for adopting ${l_\infty }^\prime$ instead of ${l_\infty }$ is that ${l_\infty }$ is the null space of $C_\infty ^{\ast\prime}$: Eq. (11) degenerates for ${l_{p0}}$ because $C_\infty ^{\ast\prime}{l_\infty } \equiv 0$. In practice, ${l_\infty } = {[a,b,1]^T}$, and ${l_\infty }^\prime$ can be obtained by setting $l_\infty ^{\prime}\textrm{ = }{[{a,b,\lambda } ]^T}$, where $\lambda$ is an arbitrary value not equal to 1.

From Eqs. (7) and (11), the line ${l_{p0}}$ passing through the principal point p can be obtained from

$${l_{p0}} = ({C_\infty ^{\ast\prime}{{l^{\prime}}_\infty }} )\times ({C_\infty ^\ast {l_\infty }} ). $$
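As an illustration, the closed-form solution of Eq. (12) takes only a few lines. The sketch below (NumPy; the function name and the choice $\lambda = 2$ are our own, illustrative assumptions) computes ${l_{p0}}$ from a single homography H.

```python
# A sketch of Eq. (12) (NumPy; the function name and lam = 2 are
# illustrative assumptions). H maps target-plane points to image points.
import numpy as np

def line_through_principal_point(H, lam=2.0):
    """Return l_p0 = (C*'_inf l'_inf) x (C*_inf l_inf); Eqs. (1), (7), (9), (12)."""
    C_star = np.diag([1.0, 1.0, 0.0])              # dual conic of the circular points
    l_inf = np.linalg.inv(H).T @ np.array([0.0, 0.0, 1.0])
    l_inf = l_inf / l_inf[2]                       # [a, b, 1]^T
    l_inf_p = np.array([l_inf[0], l_inf[1], lam])  # l'_inf = [a, b, lambda]^T, lambda != 1
    C_star_img = H @ C_star @ H.T                  # transported dual conic, Eq. (9)
    return np.cross(C_star_img @ l_inf_p, C_star @ l_inf)
```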

2.4 Solution of principal point

The line on which the principal point lies, computed from the image of the ith target board pose ${T_i}$, is denoted ${l_{p0,i}}\textrm{ = }{[{{c_i},{d_i},1} ]^T}$. Stacking the row vectors ${l_{p0,i}}^T$ yields a matrix M. The principal point $p = {[{u_0},{v_0},1]^T}$ is the right null space of M, which can be obtained by SVD:

$$Mp = \left[ {\begin{array}{ccc} {{c_1}}&{{d_1}}&1\\ {{c_2}}&{{d_2}}&1\\ \vdots & \vdots & \vdots \\ {{c_n}}&{{d_n}}&1 \end{array}} \right]\left[ {\begin{array}{c} {{u_0}}\\ {{v_0}}\\ 1 \end{array}} \right] = 0. $$
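A minimal sketch of Eq. (13) follows (NumPy; names illustrative): the principal point is taken as the right singular vector of M associated with the smallest singular value, which is the least-squares intersection of the lines when they do not meet exactly.

```python
# A sketch of Eq. (13): the right null vector of the stacked line matrix M,
# via SVD. Function and variable names are illustrative.
import numpy as np

def principal_point(lines):
    """lines: iterable of [c_i, d_i, 1] row vectors from the n target poses."""
    M = np.asarray(lines, dtype=float)
    _, _, Vt = np.linalg.svd(M)
    p = Vt[-1]                 # right null vector (least-squares sense)
    return p / p[2]            # homogeneous -> [u0, v0, 1]^T
```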

2.5 Solution of the focal length

2.5.1 Recovery of the vanishing point ${V_3}$ in the normal direction of plane ${\pi _T}$

A finite point ${V_1}$ is selected arbitrarily on the line ${l_\infty }$, and an arbitrary line ${l_{{V_1}}}$ passing through ${V_1}$ and different from ${l_\infty }$ is chosen. The vanishing point ${V_2}$ corresponds to the direction orthogonal to that of ${V_1}$; likewise, ${l_{{V_2}}}$ is a line passing through ${V_2}$ and different from ${l_\infty }$. Then, we obtain

$${l_{{V_1}}}^TC_\infty ^{\ast\prime}{l_{{V_2}}} = 0. $$
${l_{{V_2}}}$ can be obtained from Eq. (14), and ${V_2}$ can be solved from ${l_\infty }$ and ${l_{{V_2}}}$, as follows:
$${V_2} = {l_\infty } \times {l_{{V_2}}}. $$

The vanishing point in the normal direction of the target plane ${\pi _T}$ is denoted by ${V_3}$. The line through ${V_2}$ and ${V_3}$ is ${l_{{V_2} - {V_3}}}$, and the line through the principal point p and the vanishing point ${V_1}$ is ${l_{{V_1} - p}}$. From the relation between the normal vanishing point and the principal point, we obtain

$$\left\{ {\begin{array}{c} {{V_3}^T{l_{p0}} = 0}\\ {{l_{{V_2} - {V_3}}} = {V_2} \times {V_3}}\\ {{l_{{V_1} - p}}^T{l_{{V_2} - {V_3}}} = 0} \end{array}} \right.. $$

The vanishing point ${V_3}$ in the normal direction of plane ${\pi _T}$ can be recovered using Eq. (16), as illustrated in Fig. 5.
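Taking the constraints of Eq. (16) literally, both are linear in ${V_3}$, so ${V_3}$ can be written as a double cross product. The sketch below (NumPy; function name illustrative) assumes p, ${l_{p0}}$, ${V_1}$, and ${V_2}$ are already known in homogeneous coordinates.

```python
# A sketch of recovering V3 from Eq. (16) (function name illustrative).
# Both constraints are linear in V3:
#   l_p0^T V3 = 0   and   l_{V1-p}^T (V2 x V3) = det(l_{V1-p}, V2, V3) = 0,
# so V3 is the cross product of l_p0 and (l_{V1-p} x V2).
import numpy as np

def recover_V3(p, l_p0, V1, V2):
    """All inputs are homogeneous 3-vectors; V3 is returned unnormalized,
    since it can be (near) infinite when theta_X is small."""
    l_V1_p = np.cross(V1, p)          # line through V1 and the principal point
    return np.cross(l_p0, np.cross(l_V1_p, V2))
```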

Fig. 5. Arbitrary set of mutually orthogonal vanishing points (${V_1},{V_2}$) and the vanishing point ${V_3}$ in the normal direction of target plane ${\pi _T}$

2.5.2 Solving points on the calibrating Conic

The calibrating Conic is defined on the image plane, and calibrating the camera intrinsic parameters is equivalent to determining the calibrating Conic. With the principal point p, the image of the line at infinity ${l_\infty }$, and the vanishing point ${V_3}$ in the normal direction of plane ${\pi _T}$, we can obtain points on the calibrating Conic.

The distance between the optical center C and the principal point p equals the focal length f. A line ${l_{de}}$ passing through the principal point p is perpendicular to ${l_{p0}}$ and hence parallel to ${l_\infty }$; it intersects the calibrating Conic at two points d and e. Under the assumption of square, non-skew pixels, the calibrating Conic is a circle, so the points $d,e$ satisfy $|{pd} |= |{pe} |= f$. Because point e is solved in the same way as point d, we present the method for point d as an example.

The intersection point of the line ${l_\infty }$ and the line ${l_{p0}}$ is the point x; that is, $x = {l_{p0}} \times {l_\infty }$. Since $\triangle Cx{V_3} \cong \triangle dx{V_3}$, we have $\angle xC{V_3} = \angle xd{V_3}\textrm{ = }90^\circ $, as illustrated in Fig. 6. Then,

$$\left\{ {\begin{array}{c} {{l_{pd}} = p \times d}\\ {{l_{xd}} = x \times d}\\ {{l_{d{V_3}}} = d \times {V_3}}\\ {l_{xd}^TC_\infty ^\ast {l_{d{V_3}}} = 0} \end{array}} \right.. $$

The points $d,e$ on the calibrating Conic are thus obtained, and further points can be solved from other images. The calibrating Conic can then be fitted to these points; its radius is the focal length. The calibrating Conic and the points on it are shown in Fig. 7.
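Because the centre of the calibrating Conic is the already-estimated principal point under the square-pixel assumption, the fit reduces to a one-parameter radius fit. A minimal sketch (NumPy; names illustrative):

```python
# A minimal radius fit for the calibrating Conic. Under the square-pixel
# assumption the conic is a circle centred at the estimated principal point,
# so only the radius (the focal length) is fitted.
import numpy as np

def fit_focal_length(points, p):
    """points: (n, 2) conic points d_i, e_i from all images; p: (2,) centre."""
    r = np.linalg.norm(np.asarray(points, float) - np.asarray(p, float), axis=1)
    return r.mean()            # least-squares radius for a fixed centre
```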

Fig. 6. Two points $d,e$ on the calibrating Conic

Fig. 7. Fitting the calibrating Conic

2.6 New geometric interpretation and solution method of the target board’s pose

The angle ${\theta _X}$ between the image plane ${\pi _I}$ and the calibration board plane ${\pi _T}$ is related to the precision of camera calibration. Specifically, a larger ${\theta _X}$ provides more geometric information for calibration, but an excessively large ${\theta _X}$ lowers the corner extraction accuracy. As a compromise, ${\theta _X} \approx 45^\circ $ is regarded as the ideal target board angle.

To represent the angle ${\theta _X}$ independently of the original target board coordinate frame, we introduce a canonical target board coordinate frame ${S_{TC}}$ in place of the original frame ${S_T}$. It is defined as follows: the origin of ${S_{TC}}$ coincides with the origin of ${S_T}$; the x axis of ${S_{TC}}$ is parallel to ${l_\infty }$, the y axis is parallel to ${l_{p0}}$, and the z axis is the normal of the target plane ${\pi _T}$; the three axes satisfy the right-hand rule.

In the canonical target board frame ${S_{TC}}$, the attitude of the target board is given a new geometric interpretation, based on which a new solution method for the attitude is proposed: the attitude of the target board in the camera coordinate frame is regarded as being attained by successive active rotations about the $z$ axis, the $x$ axis, and again the z axis of ${S_{TC}}$. The three attitude angles are denoted by ${\theta _{Z1}}$, ${\theta _X}$, and ${\theta _{Z2}}$.

The attitude transformation of the target board is illustrated in Fig. 8,

Fig. 8. Coordinate frame transform illustration of the target board pose’s decomposition

where ${S_{cam}}$ is the camera coordinate frame and ${S_{TC1}}$ denotes an intermediate coordinate frame.

The rotation transformation from the camera coordinate system to the target board coordinate system is obtained as follows.

  • 1. First, we solve for ${\theta _{Z1}}$, the first active rotation angle about the z axis.

Geometrically, the coordinate transformation from ${S_{cam}}$ to ${S_{TC1}}$ corresponds to rotating ${l_{p0}}$ by ${\theta _{Z1}}$; thus, ${\theta _{Z1}}$ can be obtained from ${l_{p0}}$ as follows:

$${\theta _{Z1}}\textrm{ = }\arctan ({l_{p0}}(2)/{l_{p0}}(1)). $$
  • 2. Then, we solve for ${\theta _X}$, the angle between the image plane ${\pi _I}$ and the calibration board plane ${\pi _T}$.

As shown in Figs. 5, 6, and 9, we obtain ${\theta _X}\textrm{ = }\angle Cx{V_3} = \angle dxp$; then, ${S_{TC1}}$ is transformed to ${S_{TC2}}$

$${\theta _X}\textrm{ = }\arctan (|{pd} |/|{xp} |). $$
  • 3. Finally, we solve for ${\theta _{Z2}}$, the second active rotation angle about the z axis.

Fig. 9. Angle ${\theta _X}$ between the image plane ${\pi _I}$ and the calibration board plane ${\pi _T}$

${\theta _{Z2}}$ is the rotation angle from ${S_{TC}}$ to ${S_T}$. Because the $x$ axis is parallel to ${l_\infty }$ in ${S_{TC}}$, ${\theta _{Z2}}$ is computed from a line ${l_\infty }^\prime $ parallel to ${l_\infty }$ (${l_\infty }$ itself is not adopted because, under the homography transform, it transfers to ${L_\infty }$, which causes degeneration in the follow-up computation).

Let $l_\infty ^{\prime}\textrm{ = }{[{a,b,\lambda } ]^T}$, where $\lambda$ is an arbitrary factor that is neither 0 nor 1. Under the homography transform, ${l_\infty }^\prime $ in the image plane ${\pi _I}$ is transferred to ${L_\infty }^\prime$ in the target plane ${\pi _T}$:

$${L_\infty }^\prime \textrm{ = }{H^T}{l_\infty }^\prime. $$

Then, ${\theta _{Z2}}$ is obtained by

$${\theta _{Z2}} = \arctan ({L_\infty }^\prime (1)/{L_\infty }^\prime (2)). $$
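For reference, Eqs. (18)–(21) can be collected into one routine. The sketch below (NumPy) assumes ${l_{p0}}$, p, $x = {l_{p0}} \times {l_\infty }$, and a conic point d have been computed as in Sections 2.3 and 2.5.2; the function name and the choice of $\lambda$ are illustrative.

```python
# A sketch collecting Eqs. (18)-(21); names and lam are illustrative.
import numpy as np

def attitude_angles(H, l_p0, p, x, d, lam=2.0):
    theta_z1 = np.arctan2(l_p0[1], l_p0[0])                    # Eq. (18)
    p2, x2, d2 = p[:2] / p[2], x[:2] / x[2], d[:2] / d[2]      # to Euclidean
    theta_x = np.arctan2(np.linalg.norm(p2 - d2),
                         np.linalg.norm(x2 - p2))              # Eq. (19)
    l_inf = np.linalg.inv(H).T @ np.array([0.0, 0.0, 1.0])
    l_inf = l_inf / l_inf[2]
    L_inf_p = H.T @ np.array([l_inf[0], l_inf[1], lam])        # Eq. (20)
    theta_z2 = np.arctan2(L_inf_p[0], L_inf_p[1])              # Eq. (21)
    return theta_z1, theta_x, theta_z2
```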

3. Visual pose guidance for target board

The target board’s pose influences calibration in two respects. First, the distribution of the pencil of lines $\{ {l_{p0}}\} $ passing through the principal point affects the principal point estimation: a uniform distribution of directions over the range $[0,2\pi )$ yields a good estimate. Geometrically, the direction of ${l_{p0}}$ is identical to ${\theta _{Z1}}$. Second, the angle ${\theta _X}$ between the image plane ${\pi _I}$ and the calibration board plane ${\pi _T}$ influences calibration, as mentioned above: a tilted view of the target board disambiguates whether the size of the imaged calibration board is due to the camera-to-target distance or to the focal length.

Thus, we propose a VPG of the target board system that guides the user, through a GUI, to move the camera or the target board so as to obtain appropriate images for calibration. An appropriate image possesses two characteristics: the direction of the line through the principal point is nearly the same as the expected orientation, and the angle ${\theta _X}$ between the image plane ${\pi _I}$ and the target board ${\pi _T}$ is close to $45^\circ $.

3.1 Expected homography ${H_{\exp }}$

To realize the above functions in the VPG, we need to compute the expected homography ${H_{\exp }}$, which maps the target board plane coordinate frame ${S_{T\_xy}}$, placed at an appropriate distance from the camera, to the expected image of the target board in the plane coordinate frame ${S_{\textrm{cam}\_xy}}$. We therefore take one image at an appropriate distance and obtain the homography H between the target plane ${\pi _T}$ and the image plane ${\pi _I}$; from it we can calculate the expected homography ${H_{\exp }}$.

The homography H is a transform between the plane coordinate frames ${S_{cam\_xy}}$ and ${S_{T\_xy}}$. From it we can obtain the homography ${H_{TC}}$, the transform between the plane coordinate frames ${S_{cam\_xy}}$ and ${S_{TC\_xy}}$, which eliminates the degree of freedom of ${\theta _{Z2}}$ in the rotation:

$${H_{TC}} = H{R_z}( - {\theta _{Z2}}). $$

The homography ${H_{TC}}$ is derived as follows:

$$\left\{ \begin{array}{l} H = K\left[ {\begin{array}{ccc} {{r_1}}&{{r_2}}&t \end{array}} \right]\\ {H_{TC}} = H{R_z}( - {\theta_{Z2}}) = K\left[ {\begin{array}{ccc} {{r_1}}&{{r_2}}&t \end{array}} \right]{R_z}( - {\theta_{Z2}})\\ {H_{TC}} = K\left[ {\begin{array}{ccc} {{r_1}}&{{r_2}}&t \end{array}} \right]\left[ {\begin{array}{ccc} {c( - {\theta_{Z2}})}&{ - s( - {\theta_{Z2}})}&0\\ {s( - {\theta_{Z2}})}&{c( - {\theta_{Z2}})}&0\\ 0&0&1 \end{array}} \right]\\ {H_{TC}} = K\left[ {\begin{array}{ccc} {c( - {\theta_{Z2}}){r_1} + s( - {\theta_{Z2}}){r_2}}&{ - s( - {\theta_{Z2}}){r_1} + c( - {\theta_{Z2}}){r_2}}&t \end{array}} \right]\\ {H_{TC}} = K\left[ {\begin{array}{ccc} {{r_{1\_TC}}}&{{r_{2\_TC}}}&t \end{array}} \right] \end{array} \right.$$
where c denotes cosine, s denotes sine, and ${r_1},{r_2}$ are the first two columns of the 3D rotation matrix ${R_{cam\_T}}$ from ${S_{cam}}$ to ${S_T}$; the columns of a 3D rotation matrix form an orthonormal basis of 3D space.

The result above gives the first two columns of the 3D rotation matrix ${R_{cam\_TC}}$, obtained from ${R_{cam\_T}}$ by a change of basis; geometrically, this change of basis is a rotation by $- {\theta _{Z2}}$ around the z axis.

Since

$${R_{cam\_T}} = {R_{Z1}}({\theta _{Z1}}){R_X}({\theta _X}){R_{Z2}}({\theta _{Z2}}) = [{r_1},{r_2},{r_3}],$$
${r_{1\_TC}},{r_{2\_TC}}$, the first two columns of ${R_{cam\_TC}}\textrm{ = }\left[ {\begin{array}{ccc} {{r_{1\_TC}}}&{{r_{2\_TC}}}&{{r_{3\_TC}}} \end{array}} \right]$, are obtained by rotating ${r_1}$ and ${r_2}$ by $- {\theta _{Z2}}$ around the z axis, so that ${R_{cam\_TC}}\textrm{ = }{R_{Z1}}({\theta _{Z1}}){R_X}({\theta _X})$. Hence ${H_{TC}}$ is a homography, namely the product of the intrinsic parameter matrix K and the matrix $[\begin{array}{ccc} {{r_{1\_TC}}}&{{r_{2\_TC}}}&t \end{array}]$; its rotation part depends only on ${\theta _{Z1}}$ and ${\theta _X}$, and its translation remains t.

Furthermore, we can construct a standard homography ${H_{stand}}$ that also eliminates the degree of freedom of ${\theta _{Z1}}$ in the rotation. The rotation of the target board deduced from ${H_{stand}}$ is equivalent to rotating the target board by $- {\theta _{Z1}}$ around the ${z_c}$ axis in ${S_{cam}}$, which can be viewed as rotating the image of the target board by $- {\theta _{Z1}}$ about the principal point in the image plane. The 2D homogeneous rigid body transformation matrix ${M_{2d}}$ can be expressed as follows:

$$\left\{ \begin{array}{l} {M_{t1}} = \left[ {\begin{array}{ccc} 1&0&{ - {p_{comp\_x}}}\\ 0&1&{ - {p_{comp\_y}}}\\ 0&0&1 \end{array}} \right]\\ t1\_ori = {M_{t1}}{\left[ {\begin{array}{ccc} 0&0&1 \end{array}} \right]^T}\\ {M_{t2}}\textrm{ = }\left[ {\begin{array}{ccc} 1&0&{ - t1\_ori(1)}\\ 0&1&{ - t1\_ori(2)}\\ 0&0&1 \end{array}} \right]\\ {M_{2d}} = {M_{t2}}R( - {\theta_{z1}}){M_{t1}} \end{array} \right.$$
where ${p_{comp}} = ({p_{comp\_x}},{p_{comp\_y}})$ is the principal point obtained from Eq. (13) by taking several images of the target board in advance. Then,
$${H_{stand}}\textrm{ = }{M_{2d}}{H_{TC}}. $$

${H_{stand}}$ eliminates the degrees of freedom of ${\theta _{Z2}}$ and ${\theta _{Z1}}$ in the rotation; only the degree of freedom of ${\theta _X}$ is retained. Using ${p_{comp}}$, we can set ${\theta _X}\textrm{ = }45^\circ$ to obtain the ideal ${H_{stand}}$.

Finally, we obtain

$$\left\{ {\begin{array}{l} {{M_{t1}} = \left[ {\begin{array}{ccc} 1&0&{ - {p_{comp\_x}}}\\ 0&1&{ - {p_{comp\_y}}}\\ 0&0&1 \end{array}} \right]}\\ {t1\_ori = {M_{t1}}{{\left[ {\begin{array}{ccc} 0&0&1 \end{array}} \right]}^T}}\\ {{M_{t2}}\textrm{ = }\left[ {\begin{array}{ccc} 1&0&{ - t1\_ori(1)}\\ 0&1&{ - t1\_ori(2)}\\ 0&0&1 \end{array}} \right]}\\ {{M_{2d\_\exp }} = {M_{t2}}R({\theta_{exp}}){M_{t1}}}\\ {{H_{exp}}\textrm{ = }{M_{2d\_exp }}{H_{stand}}} \end{array}} \right.$$
where ${\theta _{\exp }}$ is the expected direction of the line on which the principal point lies, which varies in the range $[0,360)^\circ $, and ${H_{exp}}$ is the expected homography from the target board to its image. The expected projected feature point ${X_{\exp }}$ of the target board is given by
$${X_{\exp }} = {H_{\exp }}{x_T}. $$

According to the expected homography ${H_{exp}}$ and the method above, the expected line ${l_{p0 - exp}}$ passing through the principal point can be obtained.
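A minimal sketch of Eqs. (22)–(26) follows (NumPy; function names are illustrative): H_tc removes ${\theta _{Z2}}$, a 2D rotation about ${p_{comp}}$ removes ${\theta _{Z1}}$, and a final rotation by ${\theta _{\exp }}$ sets the expected principal-line direction.

```python
# A sketch of Eqs. (22)-(26); function and variable names are illustrative.
import numpy as np

def rot_z(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def expected_homography(H, theta_z1, theta_z2, p_comp, theta_exp):
    H_tc = H @ rot_z(-theta_z2)                        # Eq. (22)
    M_t1 = np.array([[1.0, 0, -p_comp[0]],
                     [0, 1.0, -p_comp[1]],
                     [0, 0, 1.0]])                     # move p_comp to origin
    M_t2 = np.linalg.inv(M_t1)                         # and back, Eq. (24)
    H_stand = M_t2 @ rot_z(-theta_z1) @ M_t1 @ H_tc    # Eqs. (24), (25)
    return M_t2 @ rot_z(theta_exp) @ M_t1 @ H_stand    # H_exp, Eq. (26)
```

The expected feature points of Eq. (27) then follow as, e.g., `X_exp = expected_homography(...) @ x_T` for each homogeneous target point `x_T`.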

Both ${X_{\exp }}$ and ${l_{p0 - exp}}$ are displayed in real time by the VPG. The actual line passing through the principal point is obtained from the real-time frame in the image stream according to Eq. (12) and is also displayed in real time in the VPG. The user then adjusts the pose of the target board to bring the actual feature points close to ${X_{\exp }}$ and the actual line on which the principal point lies close to the expected line ${l_{p0 - exp}}$. In practice, the closeness of the line through the principal point is the primary consideration, and the closeness of the feature points is secondary. We capture the image for camera calibration when the target board's ideal pose is reached.

4. Synthetic and real data evaluations

4.1 Synthetic experiment

The proposed algorithm was tested with synthetic data. The intrinsic parameters are defined as follows: ${f_M} = 100\;\textrm{mm}$; the pixels are square with a size of $10\;\mathrm{\mu m}$, so the normalized focal length f is 1000; the image resolution is $1200 \times 1000$; and the principal point is ${[600,500]^T}$. The target plane is a grid pattern with $12 \times 16 = 192$ feature points.

4.1.1 Performance w.r.t. the number of target planes

The rotation parameter ${\theta _X}$ of the target board is set to $45^\circ $, ${\theta _{Z1}}$ is distributed uniformly over the range $[0,360)^\circ $, and ${\theta _{Z2}}\textrm{ = }0$. The number of targets is varied from 8 to 28 in steps of 4; Gaussian noise with a mean of 0 and a standard deviation of 0.5 pixels is added, and 100 trials are performed for each setting. The average result is shown in Fig. 10.
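For reproducibility, the synthetic setup can be sketched as follows (NumPy). The paper specifies only the intrinsics, the $12 \times 16$ grid, the pose angles, and the noise level; the grid spacing and translation vector below are our own illustrative assumptions.

```python
# A sketch of the synthetic setup of Sec. 4.1; grid spacing and the
# translation t are assumptions, not values stated in the paper.
import numpy as np

rng = np.random.default_rng(1)
K = np.array([[1000.0, 0, 600.0], [0, 1000.0, 500.0], [0, 0, 1.0]])

def synth_image_points(theta_z1, theta_x, t, sigma=0.5):
    Rz = lambda a: np.array([[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1.0]])
    Rx = lambda a: np.array([[1.0, 0, 0], [0, np.cos(a), -np.sin(a)], [0, np.sin(a), np.cos(a)]])
    R = Rz(theta_z1) @ Rx(theta_x)                    # theta_Z2 = 0, as in Sec. 4.1.1
    H = K @ np.column_stack([R[:, 0], R[:, 1], t])    # plane-to-image homography
    gx, gy = np.meshgrid(np.arange(16), np.arange(12))
    X = np.column_stack([gx.ravel(), gy.ravel(), np.ones(192)])  # unit grid spacing (assumed)
    x = (H @ X.T).T
    return x[:, :2] / x[:, 2:3] + rng.normal(0.0, sigma, (192, 2))  # add pixel noise
```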

From Fig. 10, we can see that when the number of lines passing through the principal point is $\ge 8$, both the principal point error and the focal length error are lower than those of Zhang’s method in most cases.

Fig. 10. Error of the camera intrinsic parameters vs. the number of lines through the principal point

4.1.2 Performance w.r.t. noise level

This trial investigated the performance of our camera calibration method w.r.t. pixel noise. The pose setting of the target is the same as in Section 4.1.1. Gaussian noise with mean 0 and standard deviation $\sigma $ is added to the target board image, with $\sigma $ in the range $[0.5,3]$ in steps of 0.5. The estimated focal length and principal point are compared with the ground truth. For each noise level, 100 trials were performed and the outcomes were averaged, as shown in Fig. 11. Both the principal point error and the focal length error are lower than those of Zhang’s method in most cases.

Fig. 11. Error of the camera intrinsic parameters vs. pixel noise

4.2 Real experiment

To evaluate the method in practice, we conducted real experiments. The camera is a Daheng MER-1070-14U3 with an image resolution of $960 \times 687$. Because the VPG must robustly locate the target's feature points in real time, we use a ChArUco planar target, composed of ArUco markers and checkerboard corner points. The ArUco markers are uniquely encoded, which enables rapid and robust detection; based on the detected markers, the checkerboard corners are located with high accuracy. The calibration board consists of black and white squares, each with a side length of $30\;\textrm{mm}$, with the ArUco markers placed in the white squares. Twenty-seven images of the target board were collected for calibration. The experimental setup is illustrated in Fig. 12.
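For reference, ChArUco detection of this kind is available in OpenCV's contrib module. The sketch below assumes the pre-4.7 cv2.aruco API (CharucoBoard_create etc.); the board geometry and dictionary are illustrative, not the exact board used here.

```python
# A hedged sketch of ChArUco corner detection with OpenCV's contrib module;
# the pre-4.7 cv2.aruco API is assumed, and the board geometry and
# dictionary below are illustrative.
import cv2

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
board = cv2.aruco.CharucoBoard_create(7, 5, 0.030, 0.022, dictionary)  # 30 mm squares

def detect_charuco_corners(gray):
    corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)
    if ids is None:
        return None, None
    n, ch_corners, ch_ids = cv2.aruco.interpolateCornersCharuco(corners, ids, gray, board)
    return (ch_corners, ch_ids) if n > 0 else (None, None)
```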

Fig. 12. Camera and target board

The VPG is shown in Fig. 13, where the red line is the real-time line on which the principal point lies, the green line is the expected line through the principal point, and the yellow hollow markers are the expected feature points. The angle between the image plane and the calibration board plane is shown in real time in the upper-left corner of the figure. The target pose is adjusted until the actual line on which the principal point lies is close to the expected line, and until the angle between the image plane and the target board plane is around 45°. Once this expected target pose is reached, the image is captured.

Fig. 13. Visual pose guidance of the target board system

Several of the calibration images, together with the real-time and expected lines on which the principal point lies drawn by the VPG and the angle between the image plane and the calibration board plane, are shown in Fig. 14.

Fig. 14. Partial target images and the visual pose guide of the target board system’s image interface

The estimated lines through the principal point and the estimated principal point are shown in Fig. 15. The principal point estimated by our method is closer to the geometric center of the intersections of the principal lines, in the least-squares sense, than that estimated by Zhang’s method. The lines on which the principal point lies are rectified to pass through the estimated principal point; the rectified lines are shown in Fig. 16.

Fig. 15. Lines through the principal point and estimated principal point

Fig. 16. Rectified lines passing through principal point and estimated principal point

The intrinsic parameters were calculated with the above method, and the back-projection errors were computed. The outcomes were compared with those of Zhang’s method, as shown in Table 1. The proposed method achieves higher calibration accuracy.

Table 1. Results of calibration

4.2.1 Degeneration of calibration without visual pose guidance of the target board

When images are taken without the VPG, the acquired images may cause degeneration of the principal point estimation and of the focal length estimation. Both degeneration cases can easily be avoided by means of the VPG.

1. Degeneration of principal point estimation

When shooting the target without the VPG, the directions of the lines passing through the principal point tend to be close to one another, and the estimation is prone to degeneration. The reason is that, although the positions and attitudes of the target vary, rotating the target board about its normal does not change the direction of the line passing through the principal point; this follows from our new geometric interpretation of the target’s external parameters when the normal vector of the target plane remains unchanged. In addition, changing the angle ${\theta _X}$ between the target plane ${\pi _T}$ and the image plane ${\pi _I}$ does not change the direction of the line passing through the principal point either. Degenerate images of the target board are shown in Fig. 18.

Fig. 17. Points on the calibrating Conic and the fitted calibrating Conic

Fig. 18. Target images where lines through the principal point are close

The principal point calculated from the lines through the principal point and that calculated using Zhang’s method are illustrated in Fig. 19. When the directions of the lines through the principal point are close, estimating the principal point as the least-squares intersection of these lines leads to a large error. The deviations between the estimated principal point and the image center for both our method and Zhang’s method are larger than in the non-degenerate case; however, because of the coupling of its intrinsic parameters, the deviation of Zhang’s method is smaller than that of our method.

Fig. 19. Lines passing through the principal point, whose orientations are close, and the estimated principal point

2. Degeneration of focal length estimation

When imaging the target without the VPG, if the angle between the image plane and the calibration board plane is too small, the distances between the estimated points on the calibrating Conic and the fitted calibrating Conic fluctuate strongly, decreasing the reliability of the focal length estimation. The angle between the target plane and the image plane is examined in three cases: 5–10 degrees (case 1), 10–15 degrees (case 2), and 15–20 degrees (case 3). The estimated points on the calibrating Conic and the fitted calibrating Conic are shown in Fig. 20. Compared with the ideal situation shown in Fig. 17, the variation in the distance between the estimated points and the fitted conic is greater. Figure 21 shows the distance, and its mean, from the estimated principal point to the line passing through the principal point in the three cases. The smaller the angle between the target plane and the image plane, the greater the average distance between the estimated principal point and the line passing through the principal point, and thus the lower the reliability of the focal length estimate. These degenerate situations can easily be avoided by adopting the VPG.

Fig. 20. Estimated points on the calibrating Conic and the fitted calibrating Conic

Fig. 21. The distance between the estimated principal point and the lines passing through the principal point

5. Conclusion

In this paper, the invariant parallelism of a family of lines and the invariant perpendicularity of the line that passes through the principal point under a 2D projective transformation are presented. Based on these new theorems, a stratified calibration method that decouples the focal length from the principal point is proposed. A new geometric interpretation of the target pose is presented, together with the corresponding solution. Based on this interpretation, a VPG system of the target is proposed that guides the user through a GUI to calibrate the camera. Simulation and experimental results demonstrate the correctness and effectiveness of the proposed method.

Funding

National Natural Science Foundation of China (51625501, 52127809).

Acknowledgments

This article is supported by the Key Laboratory of Precision Opto-mechatronics Technology, Ministry of Education, Beihang University, China.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. A.-S. Poulin-Girard, S. Thibault, and D. Laurendeau, “Influence of camera calibration conditions on the accuracy of 3D reconstruction,” Opt. Express 24(3), 2678–2686 (2016). [CrossRef]  

2. H. Fathi and I. Brilakis, “Multistep explicit stereo camera calibration approach to improve Euclidean accuracy of large-scale 3D reconstruction,” (2016).

3. M. Wilczkowiak, E. Boyer, and P. Sturm, “Camera calibration and 3D reconstruction from single images using parallelepipeds,” in Proceedings Eighth IEEE International Conference on Computer Vision. ICCV 2001 (IEEE,2001), pp. 142–148.

4. G. Wang, H.-T. Tsui, Z. Hu, and F. Wu, “Camera calibration and 3D reconstruction from a single view based on scene constraints,” Image Vis. Comput. 23(3), 311–323 (2005). [CrossRef]  

5. P. Sturm and S. Maybank, “A method for interactive 3d reconstruction of piecewise planar objects from single images,” in The 10th British machine vision conference (BMVC'99) (The British Machine Vision Association (BMVA), 1999), pp. 265–274.

6. M. Pollefeys, D. Nistér, J.-M. Frahm, A. Akbarzadeh, P. Mordohai, B. Clipp, C. Engels, D. Gallup, S.-J. Kim, and P. Merrell, “Detailed real-time urban 3d reconstruction from video,” Int J. Comput. Vis. 78(2-3), 143–167 (2008). [CrossRef]  

7. M. Pollefeys, L. V. Gool, and M. Proesmans, “Euclidean 3D reconstruction from image sequences with variable focal lengths,” in European Conference on Computer Vision (Springer, 1996), pp. 31–42.

8. S. Negahdaripour, H. Sekkati, and H. Pirsiavash, “Opti-acoustic stereo imaging: On system calibration and 3-D target reconstruction,” IEEE Trans. on Image Process. 18(6), 1203–1214 (2009). [CrossRef]  

9. N. Navab, A. Bani-Hashemi, M. S. Nadar, K. Wiesent, P. Durlak, T. Brunner, K. Barth, and R. Graumann, “3D reconstruction from projection matrices in a C-arm based 3D-angiography system,” in International Conference on Medical Image Computing and Computer-Assisted Intervention (Springer, 1998), pp. 119–129.

10. F. Mourgues, F. Devemay, and E. Coste-Maniere, “3D reconstruction of the operating field for image overlay in 3D-endoscopic surgery,” in Proceedings IEEE and ACM International Symposium on Augmented Reality (IEEE, 2001), pp. 191–192.

11. D. Mitton, C. Landry, S. Veron, W. Skalli, F. Lavaste, and J. A. De Guise, “3D reconstruction method from biplanar radiography using non-stereocorresponding points and elastic deformable meshes,” Med. Biol. Eng. Comput. 38(2), 133–139 (2000). [CrossRef]  

12. A. Henrichsen, 3D reconstruction and camera calibration from 2D images, (University of Cape Town, 2000).

13. E. Guillou, D. Meneveaux, E. Maisel, and K. Bouatouch, “Using vanishing points for camera calibration and coarse 3D reconstruction from a single image,” Visual Comp. 16(7), 396–410 (2000). [CrossRef]  

14. C. Beder and R. Steffen, “Determining an initial image pair for fixing the scale of a 3d reconstruction from an image sequence,” in Joint Pattern Recognition Symposium (Springer, 2006), pp. 657–666.

15. T. Luhmann, C. Fraser, and H.-G. Maas, “Sensor modelling and camera calibration for close-range photogrammetry,” ISPRS J. Photogramm 115, 37–46 (2016). [CrossRef]  

16. M. V. Peppa, J. P. Mills, P. Moore, P. E. Miller, and J. E. Chambers, “Automated co-registration and calibration in SfM photogrammetry for landslide change detection,” Earth Surf. Process. Landforms 44(1), 287–303 (2019). [CrossRef]  

17. Y. Furukawa and J. Ponce, “Accurate camera calibration from multi-view stereo and bundle adjustment,” Int J. Comput. Vis. 84(3), 257–268 (2009). [CrossRef]  

18. L. Parente, J. H. Chandler, and N. Dixon, “Optimising the quality of an SfM-MVS slope monitoring system using fixed cameras,” The Photogrammetric Record 34(168), 408–427 (2019). [CrossRef]  

19. A. R. Mosbrucker, J. J. Major, K. R. Spicer, and J. Pitlick, “Camera system considerations for geomorphic applications of SfM photogrammetry,” Earth Surf. Process. Landforms 42(6), 969–986 (2017). [CrossRef]  

20. A. Jalandoni, I. Domingo, and P. S. Taçon, “Testing the value of low-cost Structure-from-Motion (SfM) photogrammetry for metric and visual analysis of rock art,” J. Archaeol. Sci. 17, 605–616 (2018). [CrossRef]  

21. C. Sweeney, V. Fragoso, T. Höllerer, and M. Turk, “Large scale sfm with the distributed camera model,” in 2016 Fourth International Conference on 3D Vision (3DV) (IEEE, 2016), pp. 230–238.

22. Y. Salaün, R. Marlet, and P. Monasse, “Line-based robust SfM with little image overlap,” in 2017 International Conference on 3D Vision (3DV) (IEEE, 2017), pp. 195–204.

23. A. Shalaby, M. Elmogy, and A. A. El-Fetouh, “Algorithms and applications of structure from motion (SFM): A survey,” Algorithms 6, 1 (2017).

24. Y. Salaün, R. Marlet, and P. Monasse, “Multiscale line segment detector for robust and accurate SfM,” in 2016 23rd International Conference on Pattern Recognition (ICPR) (IEEE, 2016), pp. 2000–2005.

25. H. Hastedt, T. Luhmann, H. Przybilla, and R. Rofallski, “Evaluation of interior orientation modelling for cameras with aspheric lenses and image pre-processing with special emphasis to sfm reconstruction,” Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. 43, 17–24 (2021). [CrossRef]  

26. M. Saponaro, A. Capolupo, G. Caporusso, E. Borgogno Mondino, and E. Tarantino, “Predicting the Accuracy of Photogrammetric 3d Reconstruction from Camera Calibration Parameters Through a Multivariate Statistical Approach,” in XXIV ISPRS Congress (ISPRS, 2020), pp. 479–486.

27. P. H. Nyimbili, H. Demirel, D. Seker, and T. Erden, “Structure from motion (sfm)-approaches and applications,” in Proceedings of the international scientific conference on applied sciences, Antalya, Turkey (2016), pp. 27–30.

28. V. K. Mali and S. N. Kuiry, “Assessing the accuracy of high-resolution topographic data generated using freely available packages based on SfM-MVS approach,” Measurement 124, 338–350 (2018). [CrossRef]  

29. J. Liu and R. Hubbold, “Automatic camera calibration and scene reconstruction with scale-invariant features,” in International Symposium on Visual Computing (Springer, 2006), pp. 558–568.

30. M. R. James and S. Robson, “Straightforward reconstruction of 3D surfaces and topography with a camera: Accuracy and geoscience application,” J. Geophys. Res. 117(F3), n/a–n/a (2012).

31. L. Girod, C. Nuth, A. Kääb, B. Etzelmüller, and J. Kohler, “Terrain changes from images acquired on opportunistic flights by SfM photogrammetry,” The Cryosphere 11(2), 827–840 (2017). [CrossRef]  

32. M. A. Fonstad, J. T. Dietrich, B. C. Courville, J. L. Jensen, and P. E. Carbonneau, “Topographic structure from motion: a new development in photogrammetric measurement,” Earth Surf. Process. Landforms 38(4), 421–430 (2013). [CrossRef]  

33. A. Teichman, S. Miller, and S. Thrun, “Unsupervised Intrinsic Calibration of Depth Sensors via SLAM,” in Robotics: Science and Systems (Citeseer, 2013), p. 3.

34. S. Song, M. Chandraker, and C. C. Guest, “High accuracy monocular SFM and scale correction for autonomous driving,” IEEE Trans. Pattern Anal. Mach. Intell. 38(4), 730–743 (2016). [CrossRef]  

35. D. Weikersdorfer, D. B. Adrian, D. Cremers, and J. Conradt, “Event-based 3D SLAM with a depth-augmented dynamic vision sensor,” in 2014 IEEE international conference on robotics and automation (ICRA) (IEEE, 2014), pp. 359–364.

36. O. Wasenmüller, M. Meyer, and D. Stricker, “CoRBS: Comprehensive RGB-D benchmark for SLAM using Kinect v2,” in 2016 IEEE Winter Conference on Applications of Computer Vision (WACV) (IEEE, 2016), pp. 1–7.

37. R. Wagner, O. Birbach, and U. Frese, “Rapid development of manifold-based graph optimization systems for multi-sensor calibration and SLAM,” in 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems (IEEE, 2011), pp. 3305–3312.

38. A. Pumarola, A. Vakhitov, A. Agudo, A. Sanfeliu, and F. Moreno-Noguer, “PL-SLAM: Real-time monocular visual SLAM with points and lines,” in 2017 IEEE international conference on robotics and automation (ICRA) (IEEE, 2017), pp. 4503–4508.

39. G. Nützi, S. Weiss, D. Scaramuzza, and R. Siegwart, “Fusion of IMU and vision for absolute scale estimation in monocular SLAM,” J. Intell. Robot Syst. 61(1-4), 287–299 (2011). [CrossRef]  

40. E. Mueggler, H. Rebecq, G. Gallego, T. Delbruck, and D. Scaramuzza, “The event-camera dataset and simulator: Event-based data for pose estimation, visual odometry, and SLAM,” Int. J. Rob. Res. 36(2), 142–149 (2017). [CrossRef]  

41. J. M. Montiel and A. J. Davison, “A visual compass based on SLAM,” in Proceedings 2006 IEEE International Conference on Robotics and Automation, 2006. ICRA 2006.(IEEE, 2006), pp. 1917–1922.

42. R. Melo, J. P. Barreto, and G. Falcao, “A new solution for camera calibration and real-time image distortion correction in medical endoscopy–initial technical evaluation,” IEEE Trans. Biomed. Eng. 59(3), 634–644 (2012). [CrossRef]  

43. S. Leutenegger, P. Furgale, V. Rabaud, M. Chli, K. Konolige, and R. Siegwart, “Keyframe-based visual-inertial slam using nonlinear optimization,” Proceedings of Robotis Science and Systems (RSS) 2013 (2013). [CrossRef]  

44. S. Kim and S.-Y. Oh, “SLAM in indoor environments using omni-directional vertical and horizontal line features,” J. Intell. Robot Syst. 51(1), 31–43 (2008). [CrossRef]  

45. J. Engel, T. Schöps, and D. Cremers, “LSD-SLAM: Large-scale direct monocular SLAM,” in European conference on computer vision (Springer, 2014), pp. 834–849.

46. A. J. Davison, I. D. Reid, N. D. Molton, and O. Stasse, “MonoSLAM: Real-time single camera SLAM,” IEEE Trans. Pattern Anal. Mach. Intell. 29(6), 1052–1067 (2007). [CrossRef]  

47. J. Civera, A. J. Davison, and J. M. Montiel, “Inverse depth parametrization for monocular SLAM,” IEEE Trans. Robot. 24(5), 932–945 (2008). [CrossRef]  

48. D. Caruso, J. Engel, and D. Cremers, “Large-scale direct slam for omnidirectional cameras,” in 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE, 2015), pp. 141–148.

49. M. Bryson and S. Sukkarieh, “Building a Robust Implementation of Bearing-only Inertial SLAM for a UAV,” J. Field Robotics 24(1-2), 113–143 (2007). [CrossRef]  

50. S. Bazeille and D. Filliat, “Incremental topo-metric slam using vision and robot odometry,” in 2011 IEEE International Conference on Robotics and Automation (IEEE, 2011), pp. 4067–4073.

51. E. Ataer-Cansizoglu, Y. Taguchi, S. Ramalingam, and Y. Miki, “Calibration of non-overlapping cameras using an external SLAM system,” in 2014 2nd International Conference on 3D Vision (IEEE, 2014), pp. 509–516.

52. O. D. Faugeras, Q.-T. Luong, and S. J. Maybank, “Camera self-calibration: Theory and experiments,” in European conference on computer vision (Springer, 1992), pp. 321–334.

53. T. Viéville and D. Lingrand, “Using singular displacements for uncalibrated monocular visual systems,” in European Conference on Computer Vision (Springer, 1996), pp. 207–216.

54. B. Triggs, “Autocalibration and the absolute quadric,” in Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition (IEEE, 1997), pp. 609–614.

55. R. I. Hartley, “Self-calibration from multiple views with a rotating camera,” in European Conference on Computer Vision (Springer, 1994), pp. 471–478.

56. O. D. Faugeras, “The calibration problem for stereo,” CVPR, 1986 (1986).

57. R. Tsai, “A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses,” IEEE J. Robot. Automat. 3(4), 323–344 (1987). [CrossRef]  

58. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Machine Intell. 22(11), 1330–1334 (2000). [CrossRef]  

59. A. S. Alturki and J. S. Loomis, “Camera principal point estimation from vanishing points,” in 2016 IEEE National Aerospace and Electronics Conference (NAECON) and Ohio Innovation Summit (OIS) (IEEE, 2016), pp. 307–313.

60. J.-H. Chuang, C.-H. Ho, A. Umam, H.-Y. Chen, J.-N. Hwang, and T.-A. Chen, “Geometry-based camera calibration using closed-form solution of principal line,” IEEE Trans. on Image Process. 30, 2599–2610 (2021). [CrossRef]  

61. B. Triggs, “Autocalibration from planar scenes,” in European conference on computer vision (Springer, 1998), pp. 89–105.

62. P. F. Sturm and S. J. Maybank, “On plane-based camera calibration: A general algorithm, singularities, applications,” in Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149) (IEEE, 1999), pp. 432–437.

63. R. Hartley and A. Zisserman, Multiple view geometry in computer vision (Cambridge university press, 2003).
