Calibration method for monocular laser speckle projection system

Abstract

This paper proposes a novel calibration method for the monocular laser speckle projection system. By capturing images of a calibration board with speckles under different poses, the projector’s optical axis is fitted and used to calibrate the rotation between the camera and projector. The translation is subsequently solved in closed form, and the projector’s virtual image is recovered via a homography. After calibration, the system can be regarded and operated as a binocular stereo vision system with a speckle pattern. The proposed method is efficient and convenient, requiring neither a reference image nor high-precision auxiliary equipment. Validated by experiments on the Astra-s and Astra-pro, it shows a significant improvement in depth estimation over the traditional method.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

With the rapid development of image processing technology and sensor accuracy, depth measurement technology has been widely used in various fields, such as 3D reconstruction [1–4], industrial detection [5,6], medical imaging [7–9] and so on. It is a non-contact method to measure the distance between the target and the depth sensor. Depth measurement technologies are mainly divided into three categories: Time of Flight (ToF), binocular stereo vision (without structured illumination), and structured-light methods.

ToF obtains depth information by calculating the flight time or the phase value of modulated light [10]. It has a great advantage in long-distance measurement, but its accuracy is relatively low [11]. Different from ToF, binocular stereo vision without structured illumination acquires the disparity map of the target by matching an image pair captured from two perspectives. Block matching or semi-global block matching is typically utilized to calculate the similarity of the search windows, which is relatively easy to implement [12,13]. However, the binocular stereo vision method still has great difficulty extracting features in textureless areas, which may lead to failures in feature matching. In addition, the computational complexity caused by feature extraction and matching limits its application in real-time measurement [14,15].

To overcome the limitations of binocular stereo vision, researchers proposed structured-light methods [16–18]. Structured-light systems can be categorized into two kinds: systems with controllable or designed projections [19] and systems with random speckle projections [20–22]. Examples of the former kind include systems with software-controlled projectors, designed projection patterns, etc. These systems usually measure via pattern coding, e.g., temporal coding or spatial coding [23], and provide high accuracy. Furthermore, benefiting from the controllable or specific patterns, they can be easily calibrated, e.g., by the phase-shift method [24]. On the other hand, the latter kind of system, also called the laser speckle projection system, is based on a laser passing through a diffractive optical element (DOE), such as ground glass, and generates only a highly random speckle pattern. Due to its compact structure, it provides advantages in system modularization and miniaturization [25,26] compared to the former kind.

The laser speckle projection system is generally composed of a rigidly fixed camera and a laser speckle projector. The abundant feature information provided by the speckle pattern significantly improves the accuracy of image matching in textureless areas [25,27]. According to the number of cameras, the laser speckle projection system is generally divided into two categories: the binocular system and the monocular system. The binocular one [28,29], which contains a laser speckle projector and two cameras, is more commonly used and performs well in accuracy and stability. However, its cost is comparatively high and its calibration process is complex. The monocular system consists of only a camera and a laser speckle projector, and hence is more compact and cost-effective. So far, there are many commercial depth-measurement products based on the monocular laser speckle projection system, such as the Microsoft Kinect V1 [30,31], Intel RealSense R200 [32,33], Orbbec Astra-Pro [34,35] and iPhone X [36].

Unfortunately, few calibration methods exist for the monocular laser speckle projection system, and they are still far from perfect. The traditional method is still applied in most systems. In this method, reference speckle images at several standard distances are captured and then employed for image matching during measurement. It requires high-precision auxiliary equipment to obtain the reference images, making the calibration expensive and complex [11]. Moreover, the optical axes of the camera and projector are assumed to be parallel in the traditional method. Therefore, it is sensitive to angle errors of the optical axes (caused by manufacturing imperfections, collisions, etc.), which cannot be detected or corrected by the method itself.

Recently, a calibration method has been proposed by Zhu and Zhang [37]. It calibrates the system with the help of a reference speckle image (i.e., the datum image in [37]) and a calibration board. After taking images of the calibration board with and without the speckle pattern, it extracts the features’ coordinates in the camera images and finds their correspondences in the reference image by image matching. Since the features’ world coordinates are known, by changing the calibration board’s pose and repeating the above step, it calibrates the system as a binocular vision system. This method avoids the use of high-precision auxiliary equipment and needs far fewer reference images, leading to more convenient calibration. However, it still requires taking a reference image in advance. Furthermore, since the reference image serves as the projector’s pseudo image plane, it should ideally be taken with a whiteboard perpendicular to the camera’s optical axis. Angle errors easily occur here as no high-precision equipment is involved, and they then propagate to subsequent steps and grow with target distance.

For more flexible and accurate calibration, our previous work [38] further exploits the hidden constraints on the speckles observed on calibration boards with different poses, i.e., their matches among the different-pose boards should lie on the same ray emitted from the projector, and all these rays intersect at the projector’s optical center. Given these hidden constraints, the projector’s optical center is fitted and then utilized to compute the rigid transformation between the camera and projector. However, our previous work is time-consuming, since all sets of matched speckles on the different-pose calibration boards must be fitted to rays of the projector. Moreover, the fitted optical center of the projector is noise-sensitive.

To address these problems and further improve the accuracy, a novel calibration method is proposed in this work. Instead of fitting all the matched speckle points to lines passing through the projector’s optical center, the proposed method focuses only on the matched speckles on the projector’s optical axis, which significantly reduces the computational cost. More importantly, it shows that the fitted optical axis of the projector can be used to efficiently compute the rotation between the camera and projector, after which the translation is estimated in closed form. Finally, the virtual speckle image of the projector is generated through the homography composed of the rotation and translation, and the system can be viewed as a binocular vision system with a speckle pattern. All this gives rise to an efficient, accurate and convenient calibration method without the need for any high-precision auxiliary equipment or reference speckle image.

The rest of this paper is organized as follows: Section 2 introduces some preliminaries. Section 3 gives a detailed description of the architecture of the proposed method. Results of synthetic and real-data experiments are analyzed in Section 4, and Section 5 summarizes this paper.

2. Preliminaries

2.1 Depth-estimation model for idealized monocular laser-speckle projection system

The idealized monocular laser-speckle projection system estimates depth via parallax. As drawn in Fig. 1, an idealized system is comprised of a camera and a laser speckle projector, with their optical axes parallel to each other and a known baseline length $s$. The camera is calibrated to obtain its focal length $f$. The other parameters in Fig. 1 are defined as follows:

  • $P_t$, $P_r$: corresponding speckles on the target and reference frames;
  • $Z_t$, $Z_r$: depths of $P_t$, $P_r$, respectively;
  • $u_t$, $u_r$: projections of $P_t$, $P_r$ on camera image plane, respectively;
  • $u_p$: intersection point between camera image plane and the ray which is emitted from the projector and passes $P_t$ and $P_r$;
  • $F_c$: intersection point between camera’s optical axis and camera image plane (i.e., principal point);
  • $F_p$: intersection point between projector’s optical axis and camera image plane;
  • $\delta _c$, $\delta _p$, $\delta$: distance between $F_c$ and $u_t$; distance between $F_p$ and $u_p$; distance between $u_t$ and $u_r$.

By comparing the target speckle image with the reference speckle image, the deviation $\delta$ between the two images is derived. According to the similarity of triangles, it follows that:

$$\begin{aligned}\frac{Z_t-f}{Z_t}&=\frac{s- \delta_c- \delta_p}{s}, \\ \frac{Z_r-f}{Z_r}&=\frac{s - \delta_c- \delta_p - \delta}{s}. \end{aligned}$$
Then the reciprocal of the target distance $Z_t$ is computed as:
$$\frac{1}{Z_t}=\frac{1}{Z_r} - \frac{\delta}{sf}.$$
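
As a minimal illustration of Eq. (2), the following sketch (C++, the language used for our simulations) converts a measured disparity into a target depth; the function and variable names are illustrative and simply mirror the symbols defined above.

```cpp
#include <cmath>

// Minimal sketch of Eq. (2): recover the target depth Z_t from the disparity
// delta between the target and reference speckle images.
// Zr    : known reference depth (mm)
// s     : baseline between camera and projector (mm)
// f     : camera focal length (pixels)
// delta : disparity between corresponding speckles (pixels)
double depthFromDisparity(double Zr, double s, double f, double delta)
{
    double invZt = 1.0 / Zr - delta / (s * f);   // 1/Z_t = 1/Z_r - delta/(s*f)
    return 1.0 / invZt;
}
```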

Fig. 1. Schematic diagram of the depth-estimation model for idealized monocular laser-speckle projection system.

Obviously, $Z_r$ in Eq. (2) should be known in advance, and this is why the traditional calibration method requires reference speckle images captured at known distances with high precision.

2.2 Camera calibration

The camera imaging model we adopt is the pinhole model [39] extended with radial and tangential distortion [40]. Suppose that $[x_w,y_w,z_w]^T$ are the coordinates of a 3D point in the world coordinate system and that its distortion-free and distorted (observed) image coordinates are $[u,v]^T$ and $[u_d,v_d]^T$, respectively. The pinhole model is described as follows:

$$s \begin{bmatrix} {u} \\ {v} \\ 1 \end{bmatrix} = \underbrace{ \begin{bmatrix} f_x & \gamma & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \\ \end{bmatrix} }_{\boldsymbol{K}_{cam}} \begin{bmatrix} \boldsymbol{R} & \boldsymbol{T} \end{bmatrix} \begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix},$$
where $\boldsymbol {K}_{cam}$ is the camera intrinsic matrix; $f_x$ and $f_y$ are the focal lengths along the $x$ and $y$ axes of the image coordinate system; $[c_x,c_y]^T$, $s$ and $\gamma$ are the principal point, depth scale factor and skew factor, respectively. Nevertheless, due to defects in the manufacturing process, it is unavoidable for the camera to exhibit radial and tangential distortion. Given the normalized distortion-free coordinates $[\bar {u},\bar {v},1]^T = (\boldsymbol {K}_{cam})^{-1}[u,v,1]^T$, the distortion can be expressed as:
$$\begin{bmatrix} u_d \\ v_d \\ 1 \end{bmatrix} = \boldsymbol{K}_{cam} \begin{bmatrix} \bar{u} \left(1+k_1r+k_2r^2+k_3r^3\right) + 2p_1 \bar{u}\bar{v} + p_2\left(r^2+2\bar{u}^2\right) \\ \bar{v} \left(1+k_1r+k_2r^2+k_3r^3\right) + p_1\left(r^2+2\bar{v}^2\right) + 2p_2 \bar{u} \bar{v} \\ 1 \end{bmatrix},$$
where $r = \bar {u}^2 + \bar {v}^2$; $k_1$, $k_2$ and $k_3$ are radial distortion parameters, while $p_1$ and $p_2$ are tangential distortion parameters. The rotation matrix $\boldsymbol {R}$ and translation vector $\boldsymbol {T}$ are the camera’s extrinsic parameters, which describe the transformation between the world and camera coordinate systems. Having performed the widely used Zhang’s method [41], the intrinsic matrix and distortion parameters of the camera are assumed to be known. The camera images are then undistorted according to the distortion parameters.
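
For concreteness, the sketch below transcribes Eqs. (3)–(4) literally in the paper’s notation ($r = \bar{u}^2 + \bar{v}^2$); the function and parameter names are illustrative, and the calibrated values are assumed to come from Zhang’s method.

```cpp
#include <opencv2/core.hpp>

// Literal transcription of Eqs. (3)-(4): project a world point Pw into the
// distorted pixel coordinates [u_d, v_d]^T. K, R, T and the distortion
// parameters are assumed to be obtained by Zhang's calibration.
cv::Point2d projectWithDistortion(const cv::Matx33d& K, const cv::Matx33d& R,
                                  const cv::Vec3d& T, const cv::Vec3d& Pw,
                                  double k1, double k2, double k3,
                                  double p1, double p2)
{
    // Pinhole model, Eq. (3): camera-frame point and normalized coordinates.
    cv::Vec3d Pc = R * Pw + T;
    double ub = Pc[0] / Pc[2], vb = Pc[1] / Pc[2];

    // Distortion, Eq. (4), with r = ub^2 + vb^2 as defined in the text.
    double r = ub * ub + vb * vb;
    double radial = 1.0 + k1 * r + k2 * r * r + k3 * r * r * r;
    double ud = ub * radial + 2.0 * p1 * ub * vb + p2 * (r * r + 2.0 * ub * ub);
    double vd = vb * radial + p1 * (r * r + 2.0 * vb * vb) + 2.0 * p2 * ub * vb;

    // Back to pixel coordinates through K_cam.
    return cv::Point2d(K(0, 0) * ud + K(0, 1) * vd + K(0, 2),
                       K(1, 1) * vd + K(1, 2));
}
```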

3. Proposed method

3.1 Projector optical axis calibration

In this paper, the laser speckle projector is regarded as an inverse camera, which is also modeled by Eq. (3) but with inverse optical properties. The intrinsic matrix of the projector is set to be the same as that of the camera. In the proposed method, the virtual image of the projector is used for subsequent reconstruction tasks. This virtual image satisfies the ideal perspective projection model and is directly derived using the extrinsic parameters between the camera and projector. Therefore, the actual distortion of the projector need not be considered in the proposed method. This subsection introduces the method to derive the projector’s optical axis, which is essential for recovering the rotation between the camera and projector.

We have designed a simple calibration board with features covering only small edge areas of the board, as shown in Fig. 2(a). Note that the requirement on the optical target’s flatness is not high, and any target with a normal diffusely reflecting surface can be used for our method. In fact, in our real-data experiments, the optical target was printed by a commercial printer on an A4 paper sheet, and depth estimation with errors around 0.1 mm was still achieved. The position of the calibration board in a specific frame is determined by its unit normal vector $\boldsymbol {n}$ and the distance between the plane and the frame’s origin, e.g., $d_{cam}$ in Fig. 2(a). Here, features of the same type as in our previous work [38,42] have been adopted due to their sub-pixel detection accuracy and, more importantly, their ability to be detected at a small size under poor-contrast conditions. However, the feature type is not restricted, and other types that provide accurate and stable detection can also be adopted. As the actual distance between features is known, their 3D coordinates in the target frame can be obtained.

 figure: Fig. 2.

Fig. 2. (a) Schematic diagram of the calibration board. (b) Speckle image of the calibration board.

Download Full Size | PDF

First, we place the calibration board in front of the system and take an image with the camera. Based on the sub-pixel coordinates of the features in the image [43], we calculate the extrinsic relationship between the target and camera frames, as well as the features’ 3D coordinates $\boldsymbol {P}_F^i \in \mathbb {R}^3$ in the camera frame [44]. Then, the position of the calibration board in the camera frame is fitted. Specifically, the board (plane) can be expressed as $\boldsymbol {n}_{cam}^T \boldsymbol {P}_F^i=d_{cam}$, where $\boldsymbol {n}_{cam} \in \mathbb {R}^3$ is the plane’s unit normal vector in the camera frame. The plane is found by least squares followed by several Gauss-Newton refinement steps on the penalty function:

$$C_1 \left(\bar{\boldsymbol{n}}_{cam}\right) = \sum_{i=1}^{4} \Vert \bar{\boldsymbol{n}}_{cam}^T \boldsymbol{P}_F^i-1 \Vert^2,$$
where $\bar {\boldsymbol {n}}_{cam} = \boldsymbol {n}_{cam} / d_{cam}$.
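
A minimal sketch of the linear least-squares step behind Eq. (5) is given below; it solves $\bar{\boldsymbol{n}}_{cam}$ from the features’ 3D coordinates in the camera frame, while the Gauss-Newton refinement mentioned above is omitted for brevity. The function name is illustrative.

```cpp
#include <opencv2/core.hpp>
#include <vector>

// Fit the scaled plane normal n_bar = n / d_cam such that n_bar^T * P_F^i = 1
// (linear least-squares part of Eq. (5); Gauss-Newton refinement omitted).
cv::Vec3d fitScaledPlaneNormal(const std::vector<cv::Point3d>& features)
{
    const int n = static_cast<int>(features.size());
    cv::Mat A(n, 3, CV_64F), b(n, 1, CV_64F, cv::Scalar(1.0));
    for (int i = 0; i < n; ++i) {
        A.at<double>(i, 0) = features[i].x;
        A.at<double>(i, 1) = features[i].y;
        A.at<double>(i, 2) = features[i].z;
    }
    cv::Mat nbar;
    cv::solve(A, b, nbar, cv::DECOMP_SVD);   // minimizes || A * n_bar - 1 ||^2
    return cv::Vec3d(nbar.at<double>(0), nbar.at<double>(1), nbar.at<double>(2));
}
```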

Next, we project the speckle pattern onto the calibration board and capture the image using the camera. Exploiting the fitted plane of the calibration board, it is easy to obtain the 3D coordinates of these speckles in the camera frame. During the calibration, the pose of the calibration board needs to be adjusted several times and the above steps are repeated. After that, digital image correlation (DIC) [25,45] is used to match the corresponding speckles among the speckle images.
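
The DIC matching itself is standard [25,45]; as a rough, pixel-level stand-in (not the implementation used in this work), the sketch below locates the subset around a speckle of one image inside a search window of another image using zero-normalized cross-correlation. Real DIC additionally models subset deformation and refines the match to sub-pixel accuracy; the window sizes are illustrative.

```cpp
#include <opencv2/imgproc.hpp>

// Simplified stand-in for the DIC matching step: find the best match of the
// (2*half+1)^2 subset around pixel p of imgA within a (2*search+1)^2 window of
// imgB, using the ZNCC score of cv::matchTemplate. Assumes p lies far enough
// from the image border; no sub-pixel refinement is performed.
cv::Point2i matchSpeckleSubset(const cv::Mat& imgA, const cv::Mat& imgB,
                               cv::Point2i p, int half = 15, int search = 40)
{
    cv::Rect tplRect(p.x - half, p.y - half, 2 * half + 1, 2 * half + 1);
    cv::Rect roiRect(p.x - search, p.y - search, 2 * search + 1, 2 * search + 1);
    cv::Mat tpl = imgA(tplRect), roi = imgB(roiRect);

    cv::Mat score;
    cv::matchTemplate(roi, tpl, score, cv::TM_CCOEFF_NORMED);   // ZNCC map
    cv::Point maxLoc;
    cv::minMaxLoc(score, nullptr, nullptr, nullptr, &maxLoc);

    // Convert the top-left corner of the best match back to center coordinates.
    return cv::Point2i(roiRect.x + maxLoc.x + half, roiRect.y + maxLoc.y + half);
}
```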

In a standard laser speckle projector, the optical axis is designed to be strictly perpendicular to the DOE. Moreover, the optical axis passes through the center or another fixed position of the DOE. In general, the closer a beam is to the optical axis, the stronger its intensity, and thus the brighter the corresponding speckle in the image. As shown in Fig. 2(b), there is a larger and brighter speckle in the image, which is the projection point of the optical axis.

To calibrate the projector’s optical axis, we select an image and extract the coordinates $[u_0,v_0]^T$ of this brighter speckle, whose correspondences in the other images have already been obtained by DIC. Subsequently, their 3D coordinates $\boldsymbol {P}_o^i$ in the camera frame are derived using the fitted planes of the calibration board. Given that they all lie on the projector’s optical axis $\boldsymbol {L}_o$, a 3D line-fitting method is utilized to fit it in the camera frame. Specifically, accurate image coordinates of the brighter speckle are derived by: 1) labeling the area by image thresholding [46] and connected component analysis [46,47]; 2) taking the centroid of the labeled area. The line fitting is performed with the OpenCV [46] fitLine() function, in which iteratively reweighted least squares [48] is applied to minimize the following cost function:

$$C_2 \left(\boldsymbol{L}_o\right) = \sum_{i=1}^{n} \frac{\rho\left(\boldsymbol{L}_o, \boldsymbol{P}_o^i \right)^2}{2},$$
where $\rho \left ( \cdot \right )$ denotes the distance between the 3D line $\boldsymbol {L}_o$ and the point $\boldsymbol {P}_o^i$.
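
The two steps of this subsection can be sketched as follows: the centroid of the brighter axis speckle is extracted with OpenCV’s thresholding and connected-component analysis, and the 3D optical axis is fitted with fitLine(), whose robust distance types are minimized by iteratively reweighted least squares (Eq. (6)). The threshold value and blob-selection rule below are illustrative choices rather than values used in this work.

```cpp
#include <opencv2/imgproc.hpp>
#include <vector>

// Step 1: centroid of the brighter (axis) speckle. A high threshold is assumed
// to isolate only the strongest speckle; the largest remaining blob is taken.
cv::Point2d axisSpeckleCentroid(const cv::Mat& gray, double thresh = 250.0)
{
    cv::Mat bw, labels, stats, centroids;
    cv::threshold(gray, bw, thresh, 255, cv::THRESH_BINARY);
    int n = cv::connectedComponentsWithStats(bw, labels, stats, centroids);
    if (n < 2) return cv::Point2d(-1, -1);          // no blob found
    int best = 1;                                   // label 0 is the background
    for (int i = 2; i < n; ++i)
        if (stats.at<int>(i, cv::CC_STAT_AREA) > stats.at<int>(best, cv::CC_STAT_AREA))
            best = i;
    return cv::Point2d(centroids.at<double>(best, 0), centroids.at<double>(best, 1));
}

// Step 2: robust 3D line fit through the reconstructed axis points P_o^i.
// cv::fitLine returns (vx, vy, vz, x0, y0, z0): direction and a point on L_o.
cv::Vec6f fitOpticalAxis(const std::vector<cv::Point3f>& axisPoints)
{
    cv::Vec6f line;
    cv::fitLine(axisPoints, line, cv::DIST_HUBER, 0, 0.01, 0.01);
    return line;
}
```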

3.2 System calibration

3.2.1 Homography transformation

For readability, variables are defined before derivations:

  • $\boldsymbol {P}_{cam}$, $\boldsymbol {P}_{proj}$: coordinates of the speckles in the camera and projector frames, respectively;
  • $\boldsymbol {R}_{cp}$, $\boldsymbol {T}_{cp}$: rotation and translation from the camera frame to the projector frame;
  • $\boldsymbol {R}_{pc}$, $\boldsymbol {T}_{pc}$: rotation and translation from the projector frame to the camera frame;
  • $A_x$, $A_y$, $A_z$: three Euler angles of $\boldsymbol {R}_{cp}$;
  • $\bar {\boldsymbol {n}}_{proj}$: parameter of the same form as $\bar {\boldsymbol {n}}_{cam}$ but describing the calibration board (plane) in the projector frame;
  • $\boldsymbol {p}_{cam}$, $\boldsymbol {p}_{proj}$: coordinates of the speckles in camera image and projector’s virtual image, respectively;
  • $\boldsymbol {K}_{cam}$, $\boldsymbol {K}_{proj}$: intrinsic matrices of the camera and laser speckle projector, respectively.

Fig. 3 demonstrates the principle of the calibration. The relationship between $\boldsymbol {P}_{cam}$ and $\boldsymbol {P}_{proj}$ can be expressed as:

$$\boldsymbol{P}_{proj} = \boldsymbol{R}_{cp} \boldsymbol{P}_{cam} + \boldsymbol{T}_{cp}.$$

Fig. 3. Principle of the calibration.

Since the laser speckle projector projects the speckles onto the calibration board and the spatial plane equation of the board has been solved in Section 3.1, the coordinates of the speckles in the camera frame satisfy the following equation:

$$\bar{\boldsymbol{n}}_{cam}^T \boldsymbol{P}_{cam}= 1,$$
where $\bar {\boldsymbol {n}}_{cam}$ has already been obtained by Eq. (5).

By substituting Eq. (8) into Eq. (7), we can get:

$$\boldsymbol{P}_{proj} = \left( \boldsymbol{R}_{cp} + \boldsymbol{T}_{cp} {\bar{\boldsymbol{n}}_{cam}^T} \right) \boldsymbol{P}_{cam}.$$

According to Eq. (9), the homography transformation between the camera image and the virtual image of the projector is constructed:

$$\begin{aligned} & \boldsymbol{p}_{proj} {\sim} \boldsymbol{H}_{cp} \boldsymbol{p}_{cam},\\ & \boldsymbol{H}_{cp} = \boldsymbol{K}_{proj} \left( \boldsymbol{R}_{cp} + \boldsymbol{T}_{cp} \bar{\boldsymbol{n}}_{cam}^T \right) \boldsymbol{K}_{cam}^{{-}1}, \end{aligned}$$
where the symbol $\sim$ represents equality up to scale. Hence, it is necessary to calibrate the transformation between the camera and projector frames, i.e., $\boldsymbol {R}_{cp}$ and $\boldsymbol {T}_{cp}$.

3.2.2 Rotation matrix

Keeping the monocular laser speckle system stationary and changing the pose of the calibration board, a homography exists between any two images of the calibration board under different poses. According to Eq. (10), the homography relationships between the camera images and the virtual image of the projector are constructed:

$$ \boldsymbol{p}_{proj} {\sim} \boldsymbol{H}^1_{cp} \boldsymbol{p}^1_{cam}, $$
$$ \boldsymbol{H}^1_{cp} = \boldsymbol{K}_{proj} \left( \boldsymbol{R}_{cp} + \boldsymbol{T}_{cp} \left( \bar{\boldsymbol{n}}^1_{cam}\right)^T \right) \boldsymbol{K}_{cam}^{{-}1}, $$
$$ \boldsymbol{p}^2_{cam} {\sim} \boldsymbol{H}^2_{pc} \boldsymbol{p}_{proj}, $$
$$ \boldsymbol{H}^2_{pc} = \boldsymbol{K}_{cam} \left( \boldsymbol{R}_{pc} + \boldsymbol{T}_{pc} \bar{\boldsymbol{n}}^T_{proj} \right) \boldsymbol{K}^{{-}1}_{proj}. $$

By substituting Eq. (11) into Eq. (13), we can get:

$$\boldsymbol{p}^2_{cam} {\sim} \boldsymbol{H}^2_{pc} \boldsymbol{H}^1_{cp} \boldsymbol{p}^1_{cam},$$
where
$$\boldsymbol{H}^2_{pc} \boldsymbol{H}^1_{cp} = \boldsymbol{K}_{cam} \left( \boldsymbol{I} - \boldsymbol{T}_c \left( \bar{\boldsymbol{n}}^1_{cam}\right)^T + \boldsymbol{T}_c \left( \bar{\boldsymbol{n}}^2_{cam}\right)^T - \boldsymbol{T}_c \left( \bar{\boldsymbol{n}}^2_{cam}\right)^T \boldsymbol{T}_c \left( \bar{\boldsymbol{n}}^1_{cam}\right)^T \right) \boldsymbol{K}^{{-}1}_{cam},$$
and $\boldsymbol {T}_c = -\boldsymbol {R}^T_{cp} \boldsymbol {T}_{cp}$ denotes the coordinates of the projector’s optical center in the camera frame.

It can be seen from Eq. (16) that $\boldsymbol {H}^2_{pc}\boldsymbol {H}^1_{cp}$ is related only to $\boldsymbol {T}_c$ and is independent of the rotation matrix $\boldsymbol {R}_{cp}$. However, this does not mean that $\boldsymbol {R}_{cp}$ is completely unsolvable. In fact, two of its degrees of freedom, i.e., $A_x$ and $A_y$, can be determined by aligning the directions of the projector’s optical axis in the camera and projector frames. The detailed derivation is as follows.

In a standard laser speckle projector, the optical axis is designed to be strictly perpendicular to the DOE and passes through its center. To make the virtual image of the projector contain the speckle pattern, we set the obtained optical axis as the $z$ axis of the projector coordinate system. Moreover, we take the direction pointing toward the target as its positive direction. Suppose that the unit direction vector of the optical axis is $\boldsymbol {V}_c =\left [v_x,v_y,v_z\right ]^T$ in the camera coordinate system and $\boldsymbol {V}_p =\left [0,0,1\right ]^T$ in the projector coordinate system. The relationship between $\boldsymbol {V}_c$ and $\boldsymbol {V}_p$ can be expressed as:

$$\begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} = \boldsymbol{R}_z \left( A_z \right) \boldsymbol{R}_y \left( A_y \right) \boldsymbol{R}_x \left( A_x \right) \begin{bmatrix} v_x \\ v_y \\ v_z \end{bmatrix},$$

Then, $A_x$ and $A_y$ can be solved as:

$$\begin{aligned} A_x &= \tan^{{-}1} \left( \frac{v_y}{v_z} \right),\\ A_y &= \tan^{{-}1} \left( -\frac{v_x \sin A_x}{2v_y} -\frac{v_x \cos A_x}{2v_z} \right). \end{aligned}$$
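
In code this step is a one-liner per angle; the sketch below uses the algebraically equivalent form of Eq. (18) obtained after substituting $\sin A_x$ and $\cos A_x$, and it assumes a unit direction vector with $v_z > 0$ (the axis pointing toward the target).

```cpp
#include <cmath>

// Euler angles A_x, A_y rotating the fitted optical-axis direction
// V_c = [vx, vy, vz]^T (camera frame) onto the projector z axis [0, 0, 1]^T.
// Equivalent to Eq. (18) after substituting sin(A_x) and cos(A_x).
void eulerAnglesFromAxis(double vx, double vy, double vz, double& Ax, double& Ay)
{
    Ax = std::atan2(vy, vz);                              // A_x = atan(v_y / v_z)
    Ay = std::atan2(-vx, std::sqrt(vy * vy + vz * vz));   // A_y = atan(-v_x / sqrt(v_y^2 + v_z^2))
}
```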

It should be emphasized that the spatial positions of the rays emitted by the projector are uniquely determined once the relative position between the camera and the laser speckle projector is fixed. If the projector coordinate system takes the laser optical center as its origin, the directions of the axes of the projector coordinate system determine the position of the projector’s virtual image plane. However, virtual image planes at different positions are equivalent and do not affect the depth estimation.

The Euler angle $A_z$ determines the $x$ and $y$ axes of the projector coordinate system, which affects the coordinates of the speckle pattern in the virtual image. Therefore, we need to select an appropriate value of $A_z$ to ensure that the speckle pattern is distributed around the center of the virtual image. Then, the rotation matrix $\boldsymbol {R}_{cp}$ is obtained from the Euler angles $A_x$, $A_y$ and $A_z$.

3.2.3 Translation vector

According to Eq. (10), the relationship between the speckles in the camera and projector frames can be formulated as:

$$\bar{\boldsymbol{p}}_{proj} = {\frac{1}{\left({\boldsymbol{H}_{cp}} \boldsymbol{p}_{cam} \right)_z}} \left( {\boldsymbol{R}_{cp}} + {\boldsymbol{T}_{cp} \bar{\boldsymbol{n}}_{cam}^T} \right) \bar{\boldsymbol{p}}_{cam},$$
with $\bar {\boldsymbol {p}}_{proj} = \boldsymbol {K}^{-1}_{proj} \boldsymbol {p}_{proj}$, $\bar {\boldsymbol {p}}_{cam} = \boldsymbol {K}^{-1}_{cam} \boldsymbol {p}_{cam}$. The operation $\left ( \cdot \right )_z$ means taking the $z$-coordinate.

Assuming that the pose of the calibration board has been changed twice, we can obtain:

$$ \bar{\boldsymbol{p}}_{proj} = {\frac{1}{\left({\boldsymbol{H}_{cp}^1} \boldsymbol{p}_{cam}^1 \right)_z}} \left( {\boldsymbol{R}_{cp}} + {\boldsymbol{T}_{cp}} \bar{\boldsymbol{n}}^T_1 \right) \bar{\boldsymbol{p}}^1_{cam}, $$
$$ \bar{\boldsymbol{p}}_{proj} = {\frac{1}{\left({\boldsymbol{H}_{cp}^2} \boldsymbol{p}_{cam}^2 \right)_z}} \left( {\boldsymbol{R}_{cp}} + {\boldsymbol{T}_{cp}} \bar{\boldsymbol{n}}^T_2 \right) \bar{\boldsymbol{p}}^2_{cam}. $$

Supposing that $\boldsymbol {R}_{cp} = \left [ \boldsymbol {r}_0,\boldsymbol {r}_1,\boldsymbol {r}_2 \right ]^T$ and $\boldsymbol {T}_{cp} = \left [t_0,t_1,t_2 \right ]^T$, we combine Eq. (20) and Eq. (21):

$${\left({\boldsymbol{H}_{cp}^2} \boldsymbol{p}_{cam}^2 \right)_z} \left( {\boldsymbol{R}_{cp}} + {\boldsymbol{T}_{cp}} \bar{\boldsymbol{n}}^T_1 \right) \bar{\boldsymbol{p}}^1_{cam} - {\left({\boldsymbol{H}_{cp}^1} \boldsymbol{p}_{cam}^1 \right)_z} \left( {\boldsymbol{R}_{cp}} + {\boldsymbol{T}_{cp}} \bar{\boldsymbol{n}}^T_2 \right) \bar{\boldsymbol{p}}^2_{cam} = 0.$$

Then, Eq. (22) can be equivalently written as:

$$\begin{aligned} &\left( \boldsymbol{r}^T_2 + t_2 \bar{\boldsymbol{n}}^T_2 \right) \bar{\boldsymbol{p}}^2_{cam} \left( {\boldsymbol{R}_{cp}} + {\boldsymbol{T}_{cp}} \bar{\boldsymbol{n}}^T_1 \right) \bar{\boldsymbol{p}}^1_{cam} \\ & - \left( \boldsymbol{r}^T_2 + t_2 \bar{\boldsymbol{n}}^T_1 \right) {\bar{\boldsymbol{p}}^1_{cam}} \left( {\boldsymbol{R}_{cp}} + {\boldsymbol{T}_{cp}} \bar{\boldsymbol{n}}^T_2 \right) \bar{\boldsymbol{p}}^2_{cam} = 0. \end{aligned}$$

To simplify the calculation, Eq. (23) can be rearranged as:

$$b {\boldsymbol{T}_{cp}} + t_2 \boldsymbol{m} = \boldsymbol{n},$$
where
$$\begin{aligned} \boldsymbol{n} & = \boldsymbol{r}^T_2 \left( \bar{\boldsymbol{p}}^2_{cam} \boldsymbol{R}_{cp} \bar{\boldsymbol{p}}^1_{cam} - \bar{\boldsymbol{p}}^1_{cam} \boldsymbol{R}_{cp} \bar{\boldsymbol{p}}^2_{cam} \right), \\ \boldsymbol{m} &= \bar{\boldsymbol{n}}^T_2 \bar{\boldsymbol{p}}^2_{cam} {\boldsymbol{R}_{cp}} \bar{\boldsymbol{p}}^1_{cam} - \bar{\boldsymbol{n}}^T_1 \bar{\boldsymbol{p}}^1_{cam} {\boldsymbol{R}_{cp}} \bar{\boldsymbol{p}}^2_{cam}, \\ b &= \boldsymbol{r}^T_2 \left( \bar{\boldsymbol{p}}^2_{cam} \bar{\boldsymbol{n}}^T_1 \bar{\boldsymbol{p}}^1_{cam} - \bar{\boldsymbol{p}}^1_{cam} \bar{\boldsymbol{n}}^T_2 \bar{\boldsymbol{p}}^2_{cam} \right). \end{aligned}$$

Let $\boldsymbol {M}=\begin {bmatrix} b & 0 & m_0 \\ 0 & b & m_1 \end {bmatrix}$ and $\boldsymbol {d}=\left [n_0,n_1\right ]^T$; taking the first two rows of Eq. (24) gives:

$$\boldsymbol{M}{\boldsymbol{T}_{cp}} = \boldsymbol{d}.$$

Equation (25) contains two equations, but there are three translation components to be solved. Therefore, we need to change the pose of the calibration board at least three times to determine $\boldsymbol {T}_{cp}$. Assuming that the calibration board poses are grouped into pairs, Eq. (25) can be stacked as:

$$\begin{bmatrix} \boldsymbol{M}_0 \\ \vdots \\ \boldsymbol{M}_n \end{bmatrix} {\boldsymbol{T}_{cp}} = \begin{bmatrix} \boldsymbol{d}_0 \\ \vdots \\ \boldsymbol{d}_n \end{bmatrix}.$$

Hence, the translation vector ${\boldsymbol {T}_{cp}}$ can be calculated as:

$${\boldsymbol{T}_{cp}} = \left( \sum_{i=0}^{n} \boldsymbol{M}^T_i \boldsymbol{M}_i \right) ^{{-}1} \sum_{i=0}^{n} \boldsymbol{M}^T_i \boldsymbol{d}_i.$$
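
Equivalently, Eqs. (26)–(27) amount to a linear least-squares solve over the stacked blocks; a minimal sketch is given below, where the construction of the individual $\boldsymbol{M}_i$ and $\boldsymbol{d}_i$ blocks from Eq. (25) is assumed to be done elsewhere and the function name is illustrative.

```cpp
#include <opencv2/core.hpp>
#include <vector>

// Solve the stacked system of Eq. (26) for T_cp in the least-squares sense,
// which is equivalent to the normal-equation form of Eq. (27).
// Ms: 2x3 CV_64F blocks M_i, ds: 2x1 CV_64F blocks d_i, one per pose pair.
cv::Vec3d solveTranslation(const std::vector<cv::Mat>& Ms,
                           const std::vector<cv::Mat>& ds)
{
    cv::Mat M, d, Tcp;
    cv::vconcat(Ms, M);                      // stack all M_i vertically
    cv::vconcat(ds, d);                      // stack all d_i vertically
    cv::solve(M, d, Tcp, cv::DECOMP_SVD);    // least-squares solution
    return cv::Vec3d(Tcp.at<double>(0), Tcp.at<double>(1), Tcp.at<double>(2));
}
```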

3.2.4 Virtual image

With the estimated rotation matrix and translation vector, the homography from the image plane of the camera to the virtual image plane of the projector can be computed according to Eq. (12). Then the virtual image of the projector can be recovered by Eq. (11).
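
A minimal sketch of this last step is given below: it builds $\boldsymbol{H}_{cp}$ from the calibrated parameters and a board-plane parameter $\bar{\boldsymbol{n}}_{cam}$, then warps the undistorted camera speckle image into the projector’s virtual image with OpenCV. The chosen virtual-image resolution follows the conventions of Section 3.1; names are illustrative.

```cpp
#include <opencv2/imgproc.hpp>

// Sketch of Eqs. (10)-(12): construct H_cp = K_proj (R_cp + T_cp n_bar^T) K_cam^-1
// and warp the (undistorted) camera speckle image into the projector's virtual
// image. projSize is the resolution chosen for the virtual image.
cv::Mat buildVirtualImage(const cv::Mat& cameraImg,
                          const cv::Matx33d& Kcam, const cv::Matx33d& Kproj,
                          const cv::Matx33d& Rcp, const cv::Vec3d& Tcp,
                          const cv::Vec3d& nbarCam, cv::Size projSize)
{
    // Rank-1 term T_cp * n_bar^T, then the full homography of Eq. (10)/(12).
    cv::Matx33d outer = cv::Matx31d(Tcp(0), Tcp(1), Tcp(2)) *
                        cv::Matx13d(nbarCam(0), nbarCam(1), nbarCam(2));
    cv::Matx33d Hcp = Kproj * (Rcp + outer) * Kcam.inv();

    cv::Mat virtualImg;
    cv::warpPerspective(cameraImg, virtualImg, cv::Mat(Hcp), projSize);
    return virtualImg;
}
```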

4. Experiments

4.1 Numerical simulations

To verify the feasibility of the proposed method, several numerical simulations comparing the method proposed in [38] (referred to as the Fitting Method) and the method of this work (the Proposed Method) have been conducted. All numerical simulations were written and run in C++.

In the numerical simulations, the speckle projector and camera were modeled as pinhole cameras, whose resolutions were set to $1920\times 1080$ (pixel) and $1280\times 1024$ (pixel), respectively. Their focal length and field of view were 2100 (pixel) and $30^{\circ }$. In each trial, $A_x$, $A_y$, $A_z$ and the three elements of $\boldsymbol {T}_{cp}$ were randomly generated in the ranges of $[-5^{\circ },5^{\circ }]$, $[0^{\circ },15^{\circ }]$, $[-5^{\circ },5^{\circ }]$, $[-200,-150]$ (mm), $[-10,10]$ (mm) and $[-10,10]$ (mm), respectively. Speckle points were first randomly generated in the image plane of the projector. They were then projected onto the calibration board (a spatial plane) under different appropriate poses and, subsequently, onto the camera image. Based on the generated scenes, the performances of the two methods were studied. Many factors affect the calibration accuracy of the monocular laser speckle projection system; we investigated the influence of the image matching accuracy, the number of speckle points and the number of calibration board poses.

First, the accuracy of the two methods was evaluated with the image matching error varying from 0.1 to 1 (pixel) at intervals of 0.1 (pixel). For each level of matching error, 500 trials were executed and the mean values were recorded. As shown in Fig. 4(a)-(c), the calibration errors of the Euler angles, translation vector and RMS of the speckle points become significantly larger as the image matching error increases, which illustrates the decisive effect of the image matching error on the calibration results. In this paper, DIC is utilized to match the speckle images and the matching error is normally below 0.3 pixels.

Fig. 4. Results with respect to the simulated data. The influence of image matching error: (a) Euler angle; (b) Translation vector; (c) RMS of the speckle points. The influence of the number of the speckle points: (d) Euler angle; (e) Translation vector; (f) RMS of the speckle points. The influence of the number of calibration board poses: (g) Euler angle; (h) Translation vector; (i) RMS of the speckle points.

Analogously, the two methods were tested under a varying number of speckle points (100 to 1000 at intervals of 100) and of calibration board poses (3 to 20 at intervals of 1), respectively. Fig. 4(d)-(f) shows that the accuracy of the two methods improves as the number of speckles increases. It also demonstrates that adding more speckles is more helpful for improving the accuracy of the translation, while having less effect on the rotation and RMS. Further, in Fig. 4(g)-(i), the calibration error gradually decreases as the number of calibration board poses increases. This downward trend nearly stabilizes when there are more than 12 poses. In these figures, the curves of the two methods are basically coincident, with the Proposed Method slightly better than the Fitting Method. The numerical simulations preliminarily verify the effectiveness and accuracy of the proposed method.

4.2 Real-data experiments

To further demonstrate the calibration accuracy of the proposed method, three experiments with real data were conducted. Throughout these experiments, the optical targets on the calibration board were printed by a commercial printer on common paper sheets, and a single shot was used for each reconstruction.

4.2.1 Comparison between fitting method and proposed method

In the first two experiments, the accuracy of the Fitting Method and the Proposed Method was compared on a monocular laser speckle projection system, as shown in Fig. 5(a), where an ANHUA M10EGF(LC) [49] speckle projector (resolution: $1920\times 1080$) and a MindVision MVSUA134GM camera [50] (resolution: $1280\times 1024$) are rigidly fixed together. The focal length of the camera was 2089.1 (pixel). In the first experiment, calibrated by the two methods respectively, the system was used to measure the displacement of a whiteboard mounted on a two-dimensional displacement table (precision: 0.01 mm), which provided the ground-truth displacement. Thirteen trials in the range of 2-9 mm were conducted. The second experiment followed a similar process but instead measured two customized spheres [51] with a known radius of 28.56 mm (precision: 0.003 mm), as shown in Fig. 5(b).

Fig. 5. Experimental set up for the first two real-data experiments. (a) Monocular laser speckle projector system; (b) Images of the spheres.

In the calibration process, the projector projected the random speckle pattern onto the calibration board in different poses and the camera captured the corresponding speckle images. With the help of DIC, we matched the speckle images and determined the corresponding speckle sets. Then, we solved the relationship between the camera frame and the projector frame, i.e., the rotation matrix and the translation vector. With the extrinsic parameters of the monocular speckle projection system and the plane equations of the calibration board in different poses, we built the homography transformation between the camera image and the projector’s virtual image. The projector was treated as an inverse camera by constructing the virtual image. As a result, $[A_x, A_y, A_z]$ and $\boldsymbol {T}_{cp}$ were calibrated as $[-1.87^{\circ }, 13.30^{\circ }, -0.84^{\circ }]$ and $[-178.93,-1.98,-5.91]$ (mm), respectively. The resolutions, camera focal length, $[A_x, A_y, A_z]$ and $\boldsymbol {T}_{cp}$ of the two real-data experiments were close to the values set in the simulations. Therefore, the results are also expected to illustrate how well the simulations track the true performance of the two methods.

The results of the first experiment are shown in Fig. 6(a): the measurements of the Fitting Method and the Proposed Method are basically coincident with the true values, which illustrates that both can realize high-precision displacement measurement. In addition, Fig. 6(b) shows that the results of the Proposed Method are clearly better than those of the Fitting Method. The average errors of the two methods are 0.069 mm and 0.037 mm, respectively. Compared to the Fitting Method, the accuracy of the Proposed Method has been significantly improved. In the second experiment, by matching the virtual image of the projector and the speckle images of the spheres, we obtained the coordinates of the spheres in the camera frame and constructed their models. The three-dimensional models of the two spheres are shown in Fig. 7. We compare the measured radii of the spheres with their actual values in Table 1. The absolute deviations for the two spheres are 0.076 mm and 0.098 mm for the Proposed Method, and 0.092 mm and 0.127 mm for the Fitting Method. The results of the Proposed Method are significantly better than those of the Fitting Method.

Fig. 6. Displacement measurement results for the first real-data experiment.

Fig. 7. Sphere models (dark blue) fitted by the obtained point clouds (gray) for the spheres in the second real-data experiment. (a) Left sphere; (b) Right sphere.

Table 1. Radius measurement of the spheres in the second real-data experiment. (unit: mm).

4.2.2 Comparison between the proposed and traditional methods

In the third experiment, the Proposed Method was compared with the traditional method on the Orbbec Astra-s and Astra-pro [34]. The resolutions of their (infrared) cameras are $1280\times 1024$ and $640\times 480$, respectively. The depth-estimation accuracy of the two methods was evaluated via the reconstruction of a plane. In particular, with the help of the proposed and traditional methods respectively, the depths of the speckles on the plane are estimated and employed to fit the plane. Then, the average off-plane error and the depth distribution range were recorded. The numbers of trials for Astra-s and Astra-pro are five and ten, respectively.

Astra-s and Astra-pro are both pre-calibrated with the traditional method. In addition, we followed the proposed calibration process to calibrate the two Astra systems and constructed the virtual image of the projector for measurement. Fig. 8 shows a sample speckle image captured by the camera and its corresponding virtual image of the projector. As shown in Table 2 and Table 3, the proposed method obtains much better results for Astra-s and Astra-pro than the traditional method built into the devices, with the average off-plane error and depth distribution range improved by about 10 times.

Fig. 8. Sample images of the third real-data experiment. (a) Speckle image of camera; (b) virtual speckle image of projector.

Table 2. Results w.r.t the third real-data experiment: comparison between the traditional and proposed methods on Astra-s (unit: mm).

Table 3. Results w.r.t the third real-data experiment: comparison between the traditional and proposed methods on Astra-pro (unit: mm).

5. Conclusion

An efficient and accurate calibration method for the monocular laser speckle projection system has been proposed in this paper; it does not require any expensive high-precision auxiliary equipment or reference speckle image. The key to this improvement is the hidden collinearity of the correspondences among the speckles on calibration boards with different poses. To avoid redundant computation, only the projector’s optical axis is fitted using these correspondences. The rotation is calibrated by aligning the optical axis in the camera and projector frames. After that, the translation and the virtual image of the projector are recovered easily. The superior performance of the proposed method has been validated by both simulated and real-data experiments. Experimental results on Astra-s and Astra-pro show that the proposed method clearly outperforms the traditional method in depth estimation, with the accuracy improved by about 10 times. There is still future work to further improve the proposed method, such as utilizing precise stereo calibration devices to improve the calibration accuracy. Although the proposed method focuses on the laser speckle projector, it can also be applied to software-controlled projectors. To achieve this, designed patterns satisfying the requirements of the proposed method, or more conveniently, speckle patterns, should be used.

Funding

National Natural Science Foundation of China (12372184, 12002215, 52208399); Key Technologies Research and Development Program (2019YFC1511102).

Disclosures

The authors declare no conflicts of interest.

Data Availability

Data underlying the results presented in this paper are available in GitHub: [52].

References

1. S. Izadi, D. Kim, O. Hilliges, et al., “Kinectfusion: real-time 3D reconstruction and interaction using a moving depth camera,” in Proceedings of the 24th annual ACM symposium on User interface software and technology, (2011), pp. 559–568.

2. Z. Cai, X. Liu, X. Peng, and B. Z. Gao, “Ray calibration and phase mapping for structured-light-field 3D reconstruction,” Opt. Express 26(6), 7598–7613 (2018). [CrossRef]  

3. X. Huang, J. Bai, K. Wang, Q. Liu, Y. Luo, K. Yang, and X. Zhang, “Target enhanced 3D reconstruction based on polarization-coded structured light,” Opt. Express 25(2), 1173–1184 (2017). [CrossRef]  

4. P. Zhou, Y. Wang, Y. Xu, Z. Cai, and C. Zuo, “Phase-unwrapping-free 3D reconstruction in structured light field system based on varied auxiliary point,” Opt. Express 30(17), 29957–29968 (2022). [CrossRef]  

5. T. Luhmann, “Close range photogrammetry for industrial applications,” ISPRS J. Photogramm. Remote Sens. 65(6), 558–569 (2010). [CrossRef]  

6. D. Craciun, N. Paparoditis, and F. Schmitt, “Multi-view scans alignment for 3D spherical mosaicing in large-scale unstructured environments,” Comput. Vis. Image Understanding 114(11), 1248–1263 (2010). [CrossRef]  

7. B. Lu, B. Li, W. Chen, Y. Jin, Z. Zhao, Q. Dou, P.-A. Heng, and Y. Liu, “Toward image-guided automated suture grasping under complex environments: A learning-enabled and optimization-based holistic framework,” IEEE Trans. Autom. Sci. Eng. 19(4), 3794–3808 (2022). [CrossRef]  

8. C. D’Ettorre, G. Dwyer, X. Du, F. Chadebecq, F. Vasconcelos, E. De Momi, and D. Stoyanov, “Automated pick-up of suturing needles for robotic surgical assistance,” in 2018 IEEE International Conference on Robotics and Automation (ICRA), (IEEE, 2018), pp. 1370–1377.

9. A. Wilcox, J. Kerr, B. Thananjeyan, J. Ichnowski, M. Hwang, S. Paradis, D. Fer, and K. Goldberg, “Learning to localize, grasp, and hand over unmodified surgical needles,” in 2022 International Conference on Robotics and Automation (ICRA), (2022), pp. 9637–9643.

10. S. Foix, G. Alenya, and C. Torras, “Lock-in time-of-flight (ToF) cameras: A survey,” IEEE Sens. J. 11(9), 1917–1926 (2011). [CrossRef]  

11. H. Yao, C. Ge, G. Hua, and N. Zheng, “The VLSI implementation of a high-resolution depth-sensing SoC based on active structured light,” Mach. Vis. Appl. 26(4), 533–548 (2015). [CrossRef]  

12. W. Wang, J. Yan, N. Xu, Y. Wang, and F.-H. Hsu, “Real-time high-quality stereo vision system in FPGA,” IEEE Trans. Circuits Syst. Video Technol. 25(10), 1696–1708 (2015). [CrossRef]  

13. K.-R. Kim and C.-S. Kim, “Adaptive smoothness constraints for efficient stereo matching using texture and edge information,” in 2016 IEEE International Conference on Image Processing (ICIP), (IEEE, 2016), pp. 3429–3433.

14. C. Strecha, R. Fransens, and L. Van Gool, “Combined depth and outlier estimation in multi-view stereo,” in 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’06), vol. 2 (IEEE, 2006), pp. 2394–2401.

15. D. Bradley, T. Boubekeur, and W. Heidrich, “Accurate multi-view reconstruction using robust binocular stereo and surface meshing,” in 2008 IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2008), pp. 1–8.

16. M. Landmann, S. Heist, P. Dietrich, P. Lutzke, I. Gebhart, J. Templin, P. Kühmstedt, A. Tünnermann, and G. Notni, “High-speed 3D thermography,” Opt. Lasers Eng. 121, 448–455 (2019). [CrossRef]  

17. S. Zhang, High-Speed 3D Imaging with Digital Fringe Projection Techniques (CRC Press, Inc., USA, 2016).

18. S. Heist, P. Dietrich, M. Landmann, P. Kühmstedt, G. Notni, and A. Tünnermann, “GOBO projection for 3D measurements at highest frame rates: a performance analysis,” Light: Sci. Appl. 7(1), 71 (2018). [CrossRef]  

19. S. Zhang and P. S. Huang, “Novel method for structured light system calibration,” Opt. Eng. 45(8), 083601 (2006). [CrossRef]  

20. S. Zhang, “High-speed 3D shape measurement with structured light methods: A review,” Opt. Lasers Eng. 106, 119–131 (2018). [CrossRef]  

21. C. Zuo, Q. Chen, G. Gu, S. Feng, F. Feng, R. Li, and G. Shen, “High-speed three-dimensional shape measurement for dynamic scenes using bi-frequency tripolar pulse-width-modulation fringe projection,” Opt. Lasers Eng. 51(8), 953–960 (2013). [CrossRef]  

22. A. Stark, E. Wong, D. Weigel, H. Babovsky, and R. Kowarschik, “Repeatable speckle projector for single-camera three-dimensional measurement,” Opt. Eng. 57(12), 120501 (2018). [CrossRef]  

23. J. Salvi, J. Pagés, and J. Batlle, “Pattern codification strategies in structured light systems,” Pattern Recognition 37(4), 827–849 (2004). [CrossRef]  

24. B. Li, N. Karpinsky, and S. Zhang, “Novel calibration method for structured-light system with an out-of-focus projector,” Appl. Opt. 53(16), 3415–3426 (2014). [CrossRef]  

25. M. Dekiff, P. Berssenbrügge, B. Kemper, C. Denz, and D. Dirksen, “Three-dimensional data acquisition by digital correlation of projected speckle patterns,” Appl. Phys. B 99(3), 449–456 (2010). [CrossRef]  

26. A. W. Stark, E. Wong, H. Babovsky, C. Franke, and R. Kowarschik, “Miniaturization of a coherent monocular structured illumination system for future combination with digital holography,” Light: Adv. Manuf. 3, 437 (2022).

27. P. Etchepareborda, M.-H. Moulet, and M. Melon, “Random laser speckle pattern projection for non-contact vibration measurements using a single high-speed camera,” Mech. Syst. Sig. Process. 158, 107719 (2021). [CrossRef]  

28. H. Yao, C. Ge, J. Xue, and N. Zheng, “A high spatial resolution depth sensing method based on binocular structured light,” Sensors 17(4), 805 (2017). [CrossRef]  

29. M. Schaffer, M. Grosse, and R. Kowarschik, “High-speed pattern projection for three-dimensional shape measurement using laser speckles,” Appl. Opt. 49(18), 3622–3629 (2010). [CrossRef]  

30. R. A. El-laithy, J. Huang, and M. Yeh, “Study on the use of Microsoft Kinect for robotics applications,” in Proceedings of the 2012 IEEE/ION Position, Location and Navigation Symposium, (IEEE, 2012), pp. 1280–1288.

31. J. Han, L. Shao, D. Xu, and J. Shotton, “Enhanced computer vision with Microsoft Kinect sensor: A review,” IEEE Trans. Cybern. 43(4), 1290–1303 (2013). [CrossRef]  

32. https://www.intel.com/content/dam/support/us/en/documents/emerging-technologies/intel-realsense-technology/realsense-camera-r200-datasheet.pdf.

33. L. Keselman, J. Iselin Woodfill, A. Grunnet-Jepsen, and A. Bhowmik, “Intel realsense stereoscopic depth cameras,” in Proceedings of the IEEE conference on computer vision and pattern recognition workshops, (2017), pp. 1–10.

34. https://www.orbbec.com/products/structured-light-camera/astra-series/.

35. J. G. da Silva Neto, P. J. da Lima Silva, F. Figueredo, J. M. X. N. Teixeira, and V. Teichrieb, “Comparison of RGB-D sensors for 3D reconstruction,” in 2020 22nd Symposium on Virtual and Augmented Reality (SVR), (2020), pp. 252–261.

36. A. Breitbarth, T. Schardt, C. Kind, J. Brinkmann, P.-G. Dittrich, and G. Notni, “Measurement accuracy and dependence on external influences of the iPhone X TrueDepth sensor,” in Photonics and Education in Measurement Science 2019, vol. 11144 (SPIE, 2019), pp. 27–33.

37. C. Zhu and Q. Zhang, “Calibration method for a structured light system based on DIC,” Appl. Opt. 61(27), 8050–8056 (2022). [CrossRef]  

38. Z. Jiang, Y. Zhang, B. Hu, X. Liu, and Q. Yu, “A calibration method for extrinsic parameters of monocular laser speckle projection system (in chinese),” Acta Opt. Sin. 43, 0315001 (2023). [CrossRef]  

39. R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision (Cambridge University Press, 2004).

40. B. Huang, Y. Tang, S. Ozdemir, and H. Ling, “A fast and flexible projector-camera calibration system,” IEEE Trans. Autom. Sci. Eng. 18(3), 1049–1063 (2021). [CrossRef]  

41. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000). [CrossRef]  

42. Q. Yu, Y. Yin, Y. Zhang, W. Chen, B. Hu, and X. Liu, “Displacement measurement of large structures using nonoverlapping field of view multi-camera systems under six degrees of freedom ego-motion,” Computer aided Civil Eng. 38, 1483–1503 (2023). [CrossRef]  

43. C. Harris and M. Stephens, “A combined corner and edge detector,” in Alvey vision conference, vol. 15 (Citeseer, 1988), pp. 10–5244.

44. O. Faugeras and Q.-T. Luong, The geometry of multiple images: the laws that govern the formation of multiple images of a scene and some of their applications (MIT press, 2001).

45. L. Robert, F. Nazaret, T. Cutard, and J.-J. Orteu, “Use of 3-D digital image correlation to characterize the mechanical behavior of a fiber reinforced refractory castable,” Exp. Mech. 47(6), 761–773 (2007). [CrossRef]  

46. G. Bradski, “The OpenCV Library,” Dr. Dobb’s Journal of Software Tools (2000).

47. A. Rosenfeld and J. L. Pfaltz, “Sequential operations in digital picture processing,” J. ACM 13(4), 471–494 (1966). [CrossRef]  

48. https://en.wikipedia.org/wiki/Iteratively_reweighted_least_squares.

49. https://www.anhuaoe.com/en/demokit/info.aspx?itemid=2596.

50. https://www.mindvision.com.cn/uploadfiles/2022/06/24/55396453501572255.pdf.

51. http://www.sz-htb.com/index.php?_m=frontpage&_a=index.

52. B. Wang, “Code for Calibration Method for Monocular Laser Speckle Projection System,” GitHub (2023) [accessed 25 Oct 2023], https://github.com/BaoqiongWang/Novel_Calibration_Method_for_Monocular_LaserSpeckle_Projection_System.git.

