
High-accuracy calibration of cameras without depth of field and target size limitations

Open Access

Abstract

During camera calibration, targets must be placed within the depth of field of the lens to ensure sharp imaging, and they should occupy a proper proportion of the image. These requirements cause difficulty in many calibration scenarios, such as those involving large-field-of-view, shallow-depth-of-field, or online-operation cameras. To address these problems, this study proposes a high-accuracy camera calibration method that overcomes the influence of image blur and noise and is not limited by depth of field or target size. First, a high-accuracy small light-spot target is placed close to the camera, so that the target occupies a large proportion of the whole image. Under the resulting defocus blur, an adaptive multi-scale method extracts the feature point coordinates to ensure accuracy, and the location variance of each feature point is estimated concurrently. Finally, high-accuracy intrinsic and extrinsic parameters of the camera under test are obtained by nonlinear optimization in which the re-projection errors are normalized by the location variances according to the Gauss-Markov theorem. Simulation and physical experiments validate the effectiveness of the proposed method.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

Corrections

24 March 2021: A typographical correction was made to the funding section.

1. Introduction

Camera calibration is an important part of vision measurement, because its accuracy determines that of the measurement system. Existing camera calibration methods can be classified according to target pattern, mainly including 3D [1,2], 2D [3–5], and 1D [6,7] targets, as well as self-calibration methods [8,9]. Compared with traditional fixed target patterns with limited feature points, some methods use TFT panels [10,11] to complete calibration; these can provide up to millions of feature points and can achieve higher accuracy. Among them, Zhang’s method [3], which calibrates cameras with a planar chessboard target, is state-of-the-art and the most popular.

However, most existing calibration methods require the target to be placed within the depth of field (DoF) for a sharp image and to occupy a large proportion of the entire image, and these requirements are difficult to satisfy in some scenarios, such as the calibration of large-field-of-view or shallow-DoF cameras. For large-field-of-view cameras, it is hard to balance target size and image clarity: large targets that match the field of view within the DoF have relatively low accuracy, are difficult to manufacture, and are inconvenient to move and place; small- and medium-sized targets cover only a small portion of the image when placed within the DoF, whereas defocus blur occurs when they are placed outside the DoF for larger coverage. For a shallow-DoF camera, the DoF is limited and defocus blur occurs easily during calibration. As shown in Fig. 1(a), defocus severely blurs the feature points of a chessboard target, which reduces extraction and calibration accuracy in traditional methods. Therefore, it is important to develop a camera calibration method that is not limited by DoF or target size.


Fig. 1. Calibration targets in focus or out of focus. (a) Chessboard target out of focus whose feature point is severely blurred and hard to be extracted accurately. (b) Light-spot target in focus. (c) Light-spot target out of focus.


Based on the above analysis, if high-accuracy calibration can be achieved from blurred images captured outside the DoF, the limitations of DoF and target size can be broken. Scholars have conducted research on this problem, and their methods can be classified into the following three types:

The first type establishes phase-shift structures to overcome defocus blur [12–14]. Huang et al. [12] projected sinusoidal phase-shifting images in horizontal and vertical directions on a liquid crystal display (LCD) and determined the coordinates of the feature points. Wang [14] used the Fourier transform to deal with defocus blur. These methods can realize camera calibration under defocus blur; however, the phase decoding is complex, and the calibration is affected by the resolution of the LCD screen and by image noise.

The second type designs new target patterns [15–17]. For example, Douxchamps [16] designed a multi-ring disc target pattern, improving location accuracy under defocus blur by increasing the area of the markers. Tehrani [17] placed a plate with extremely small pores in front of a lens to form a light-spot target and thereby reduce the impact of defocus blur. These methods mitigate defocus blur to some extent, but the location accuracy of feature points under severe blur remains low.

The third type establishes a blur model to overcome the effect of defocus blur [18,19]. Baba et al. [18] showed that the blur width can be calculated from the lens diameter and the front DoF, and that the blur can be treated as a parameter in the calibration and optimization processes. Liu et al. [19] established a regional description of each point’s uncertainty area to optimize the image coordinates of target feature points together with the camera parameters. However, in practice, errors are caused not only by defocus blur but also by image noise; these methods consider blur only and ignore noise. In fact, most existing camera calibration methods suppress the influence of noise only implicitly, via the least-squares method, whose effect is limited.

A light-spot target is frequently used for defocused camera calibration because, ideally, the form of its image intensity distribution is invariant under defocus, as presented in Figs. 1(b) and 1(c). Based on the above analysis, a high-accuracy calibration method for cameras without DoF and target size limitations is proposed, which explicitly deals with both defocus blur and image noise. First, a mathematical model of the location variance of a light-spot feature point is established; then, the best scale of the adaptive multi-scale extraction method for light spots is defined and proved; finally, the location variance of each feature point is estimated at the best scale and used to normalize the re-projection errors during non-linear optimization, yielding optimal, high-accuracy intrinsic and extrinsic camera parameters.

The remainder of this paper is organized as follows. Section 2 provides an overview of the camera model. Section 3 describes the detailed procedure of the proposed method. Sections 4 and 5 present the simulation and physical experiments conducted to validate the effectiveness of the proposed method. Section 6 concludes the paper.

2. Mathematical model of camera

In this study, the pinhole camera model and the Brown distortion model [20] are adopted.

For a feature point $Q$, $\boldsymbol {q}=\left [x, y, z, 1\right ]^\textrm {T}$ is the 3D homogeneous coordinate in the world reference frame. ${\boldsymbol {p}_{u}}={{\left [ {{u}_{u}},{{v}_{u}},1 \right ]}^\textrm {T}}$ and ${\boldsymbol {p}_{d}}={{\left [ {{u}_{d}},{{v}_{d}},1 \right ]}^\textrm {T}}$ are the undistorted and distorted homogeneous coordinates in the image reference frame in pixels, and the pinhole camera model can be described as

$$\rho {\boldsymbol{p}_{u}}=\boldsymbol{K}\left[ \begin{matrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ \end{matrix} \right]\left[ \begin{matrix} \boldsymbol{R} & \boldsymbol{t} \\ \boldsymbol{0} & 1 \\ \end{matrix} \right]\boldsymbol{q},$$
where $\rho$ is an arbitrary non-zero scale factor; $\boldsymbol {K}=\left [ \begin {matrix} {{f}_{x}} & \gamma & {{u}_{0}} \\ 0 & {{f}_{y}} & {{v}_{0}} \\ 0 & 0 & 1 \\ \end {matrix} \right ]$, where $f_x$ and $f_y$ are the effective focal lengths along the $u$- and $v$-axes, respectively; $u_0$ and $v_0$ are the coordinates of the principal point; $\gamma$ is the skew coefficient accounting for non-perpendicularity of the $u$- and $v$-axes; and $\boldsymbol {R}=\left [ {\boldsymbol {r}_{1}}, {\boldsymbol {r}_{2}}, {\boldsymbol {r}_{3}}\right ]$ and $\boldsymbol {t}$ are the rotation matrix and translation vector relating the world reference frame to the camera reference frame.
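As a minimal illustration of Eq. (1), the following Python sketch projects a world point to undistorted pixel coordinates, assuming a known pose $(\boldsymbol{R}, \boldsymbol{t})$; the intrinsic values are borrowed from the simulation settings of Section 4 with $\gamma = 0$, and the function name and pose are illustrative only.

```python
import numpy as np

def project_pinhole(q_xyz, K, R, t):
    """Pinhole projection of Eq. (1): 3D world point -> undistorted pixel."""
    q_cam = R @ q_xyz + t          # world frame -> camera frame
    p_u = K @ q_cam                # apply intrinsics
    return p_u[:2] / p_u[2]        # divide out the scale factor rho

# Intrinsics taken from the simulation settings of Section 4, gamma = 0.
K = np.array([[5000.0,    0.0, 968.5],
              [   0.0, 5000.0, 728.5],
              [   0.0,    0.0,   1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 500.0])   # an arbitrary example pose
print(project_pinhole(np.array([10.0, -5.0, 0.0]), K, R, t))
```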

According to [1,3,21], the first two radial coefficients of the Brown distortion model are sufficient to describe the distortion; more elaborate models may cause numerical instability. The center of radial distortion is assumed here to coincide with the principal point. Thus, the distortion model can be described as [3,22]

$${{\left[ \begin{matrix} {{x}_{un}} & {{y}_{un}} & 1 \\ \end{matrix} \right]}^\textrm{T}}={\boldsymbol{K}^{-1}}{\boldsymbol{p}_{u}},$$
$$\left[ \begin{matrix} {{x}_{dn}} \\ {{y}_{dn}} \\ \end{matrix} \right]=\left( 1+{{k}_{1}}{{r}^{2}}+{{k}_{2}}{{r}^{4}} \right)\left[ \begin{matrix} {{x}_{un}} \\ {{y}_{un}} \\ \end{matrix} \right],$$
$${\boldsymbol{p}_{d}}=\boldsymbol{K}{{\left[ \begin{matrix} {{x}_{dn}} & {{y}_{dn}} & 1 \\ \end{matrix} \right]}^\textrm{T}},$$
where $r=\sqrt {x_{un}^{2}+y_{un}^{2}}$, $\left ( {{x}_{un}},{{y}_{un}} \right )$ and $\left ( {{x}_{dn}},{{y}_{dn}} \right )$ are the undistorted and distorted normalized image coordinates, respectively. $k_1$ and $k_2$ are the radial distortion coefficients.
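The distortion chain of Eqs. (2)–(4) maps an undistorted pixel to normalized coordinates, applies the radial scaling, and maps back to pixels. Below is a sketch of this round trip, reusing the `K` defined above; the helper name `distort` is ours.

```python
import numpy as np

def distort(p_u, K, k1, k2):
    """Apply Eqs. (2)-(4): undistorted pixel -> distorted pixel."""
    # Eq. (2): back-project to normalized image coordinates.
    x_un, y_un, _ = np.linalg.inv(K) @ np.array([p_u[0], p_u[1], 1.0])
    r2 = x_un**2 + y_un**2                      # r^2 in normalized coordinates
    scale = 1.0 + k1 * r2 + k2 * r2**2          # Eq. (3): two radial coefficients
    # Eq. (4): map the distorted normalized coordinates back to pixels.
    p_d = K @ np.array([x_un * scale, y_un * scale, 1.0])
    return p_d[:2]
```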

3. Algorithm principle

This study proposes a camera calibration method that is not limited by DoF or target size. The algorithm procedure is shown in Fig. 2. With the proposed method, using a small target outside the DoF achieves the same effect as using a large target within the DoF. A small light-spot target is placed close to the camera to obtain a large image, although defocus blur then occurs, as shown in Fig. 2(a). In this situation, the image feature point coordinates are obtained by the adaptive multi-scale method to overcome the influence of defocus blur on location, and the location variance of each feature point is calculated in the same step, as shown in Fig. 2(b). Finally, high-accuracy intrinsic and extrinsic parameters of the camera under test are obtained via normalized non-linear optimization, as depicted in Figs. 2(c) and 2(d).


Fig. 2. Algorithm procedure. (a) The intention of the proposed algorithm. (b) Feature point image coordinate extraction method and location variance estimation method. (c) Normalization of location variances of the extracted coordinates. (d) Calculation of camera intrinsic and extrinsic parameters.


3.1 Adaptive multi-scale extraction method of light spot center

Ideally, the intensity distribution of a light spot obeys a symmetric two-dimensional Gaussian distribution, and it can be expressed as follows:

$$f(u,v)=\frac{M}{2\pi\sigma^2_w}\exp\left(-\frac{u^2+v^2}{2\sigma^2_w}\right),$$
where $M$ is the total intensity of the spot (the integral of $f$ over the image plane) and $\sigma _w$ is the standard deviation of the light spot.

Considering the influence of noise, the actual intensity distribution of the light spot, $I(u,v)$, can be expressed as $I(u,v)=f(u,v)+n(u,v)$, where $n\left (u,v\right )$ is the image noise with a mean of 0 and variance of $\sigma _{n}^{2}$.
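For experimentation, the noisy spot model $I = f + n$ can be synthesized directly. In the sketch below, the image size and the values of $M$, $\sigma_w$, and $\sigma_n$ are illustrative assumptions rather than values from the paper.

```python
import numpy as np

def spot_image(size=65, M=2.0e4, sigma_w=6.0, sigma_n=2.0, seed=0):
    """Synthesize I(u, v) = f(u, v) + n(u, v) with the spot at the center."""
    rng = np.random.default_rng(seed)
    u, v = np.meshgrid(np.arange(size) - size // 2,
                       np.arange(size) - size // 2)
    f = M / (2 * np.pi * sigma_w**2) * np.exp(-(u**2 + v**2) / (2 * sigma_w**2))
    return f + rng.normal(0.0, sigma_n, f.shape)   # zero-mean Gaussian noise
```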

The Hessian matrix at any image feature point is

$$\boldsymbol{H}\left( u,v \right)=\left[ \begin{matrix} {{{\tilde{I}}}_{uu}}\left( u,v \right) & {{{\tilde{I}}}_{uv}}\left( u,v \right) \\ {{{\tilde{I}}}_{uv}}\left( u,v \right) & {{{\tilde{I}}}_{vv}}\left( u,v \right) \\ \end{matrix} \right],$$
where ${{\tilde {I}}_{uu}}\left ( u,v \right )$, ${{\tilde {I}}_{uv}}\left ( u,v \right )$, and ${{\tilde {I}}_{vv}}\left ( u,v \right )$ refer to the images convolved with second-order Gaussian differential operators. These operators play two roles: 1) they give the second derivatives of the image, and 2) they act as Gaussian smoothing kernels, which can recover the Gaussian shape of a spot image that has been saturated or otherwise reshaped.

The normalized determinant of the Hessian matrix [23] is defined as $C$, which can be described as

$$C\left( u,v,{{\sigma }_{g}} \right)=\sigma _{g}^{4}\left( {{{\tilde{I}}}_{uu}}\left( u,v \right){{{\tilde{I}}}_{vv}}\left( u,v \right)-\tilde{I}_{uv}^{2}\left( u,v \right) \right),$$
where $\sigma _g$ is the standard deviation of the Gaussian convolutional kernel.

The standard deviation of the Gaussian distribution is defined as the scale. At a given scale, the center of the light spot is determined by finding the extremum of $C$. Moreover, for a given feature point, the values of $C$ over scales form the function $Lc$. When $Lc$ reaches its maximum, the convolution scale $\sigma _g$ equals the light-spot scale $\sigma _w$, and the location accuracy is highest, as presented in Fig. 3. Thus, the pixel-wise coordinate at this scale is taken as the pixel-wise coordinate ${\boldsymbol {p}_{p}}={{\left [ {{u}_{p}},{{v}_{p}} \right ]}^\textrm {T}}$ of the feature point. Owing to these characteristics, this scale is defined as the best scale, $\sigma _{\tilde {g}}$.
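A sketch of this adaptive multi-scale search: evaluate the scale-normalized Hessian determinant $C$ of Eq. (7) over a set of candidate scales and keep the scale and pixel at which it peaks. It reuses the `spot_image()` sketch above; the candidate scale range is an assumption chosen for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def det_hessian_norm(img, sigma_g):
    """Scale-normalized Hessian determinant C(u, v, sigma_g) of Eq. (7)."""
    I_uu = gaussian_filter(img, sigma_g, order=(0, 2))  # 2nd derivative along u (axis 1)
    I_vv = gaussian_filter(img, sigma_g, order=(2, 0))  # 2nd derivative along v (axis 0)
    I_uv = gaussian_filter(img, sigma_g, order=(1, 1))
    return sigma_g**4 * (I_uu * I_vv - I_uv**2)

def best_scale(img, scales=np.arange(2.0, 12.0, 0.25)):
    """Pick the scale maximizing Lc and the pixel-wise center at that scale."""
    Lc = [det_hessian_norm(img, s).max() for s in scales]
    s_best = scales[int(np.argmax(Lc))]
    C = det_hessian_norm(img, s_best)
    v_p, u_p = np.unravel_index(np.argmax(C), C.shape)
    return s_best, (u_p, v_p)

sigma_g, (u_p, v_p) = best_scale(spot_image())
```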


Fig. 3. (Left) $Lc$ and (Right) location root mean square error (RMSE) with different $\sigma _g$.


In this study, a local reference frame, $o-st$, is established with its origin (0, 0) at the ideal location of the image feature point in the noiseless image and with its axes parallel to the $u$- and $v$-axes of the image reference frame. In $o-st$, the sub-pixel coordinate of the feature point is $\left (s_0, t_0\right )$. Thus, the gray value of this point after convolution can be expressed by a second-order Taylor expansion at (0, 0):

$$\tilde{I}\left( 0+{{s}_{0}},0+{{t}_{0}} \right)={{\tilde{I}}_{0}}+\left( {{s}_{0}},{{t}_{0}} \right)\left( \begin{matrix} {{{\tilde{I}}}_{s}} \\ {{{\tilde{I}}}_{t}} \\ \end{matrix} \right)+\frac{1}{2}\left( {{s}_{0}},{{t}_{0}} \right)\left( \begin{matrix} {{{\tilde{I}}}_{ss}} & {{{\tilde{I}}}_{st}} \\ {{{\tilde{I}}}_{st}} & {{{\tilde{I}}}_{tt}} \\ \end{matrix} \right)\left( \begin{matrix} {{s}_{0}} \\ {{t}_{0}} \\ \end{matrix} \right),$$
where $\tilde {I}_0$ refers to the convoluted gray value of $I(s, t)$ at (0, 0) in $o-st$. ${{\tilde {I}}_{s}}$ and ${{\tilde {I}}_{t}}$ are used to represent the values at (0, 0) in the Gaussian convolutional first-order differential images, and ${{\tilde {I}}_{ss}}$, ${{\tilde {I}}_{st}}$, and ${{\tilde {I}}_{tt}}$ represent those in the Gaussian convolutional second-order differential images.

At the spot center, the first-order derivatives of Eq. (8) with respect to the $s$- and $t$-axes are zero, that is,

$$\left\{ \begin{matrix} {{{\tilde{I}}}_{ss}}{{s}_{0}}+{{{\tilde{I}}}_{st}}{{t}_{0}}+{{{\tilde{I}}}_{s}}=0 \\ {{{\tilde{I}}}_{st}}{{s}_{0}}+{{{\tilde{I}}}_{tt}}{{t}_{0}}+{{{\tilde{I}}}_{t}}=0 \\ \end{matrix} \right..$$
The accurate sub-pixel location of a feature point can be solved as follows:
$${{s}_{0}}=\frac{{{{\tilde{I}}}_{t}}{{{\tilde{I}}}_{st}}-{{{\tilde{I}}}_{s}}{{{\tilde{I}}}_{tt}}}{{{{\tilde{I}}}_{ss}}{{{\tilde{I}}}_{tt}}-\tilde{I}_{st}^{2}},{{t}_{0}}=\frac{{{{\tilde{I}}}_{s}}{{{\tilde{I}}}_{st}}-{{{\tilde{I}}}_{t}}{{{\tilde{I}}}_{ss}}}{{{{\tilde{I}}}_{ss}}{{{\tilde{I}}}_{tt}}-\tilde{I}_{st}^{2}}.$$
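Sub-pixel refinement per Eqs. (9)–(10) then amounts to evaluating the Gaussian-derivative images at the pixel-wise center found at the best scale. A sketch, assuming the $s$-axis runs along $u$ (image columns) and the $t$-axis along $v$ (image rows):

```python
from scipy.ndimage import gaussian_filter

def refine_subpixel(img, u_p, v_p, sigma_g):
    """Solve Eq. (10) for (s0, t0) around the pixel-wise center (u_p, v_p)."""
    Is  = gaussian_filter(img, sigma_g, order=(0, 1))[v_p, u_p]  # I~_s
    It  = gaussian_filter(img, sigma_g, order=(1, 0))[v_p, u_p]  # I~_t
    Iss = gaussian_filter(img, sigma_g, order=(0, 2))[v_p, u_p]
    Itt = gaussian_filter(img, sigma_g, order=(2, 0))[v_p, u_p]
    Ist = gaussian_filter(img, sigma_g, order=(1, 1))[v_p, u_p]
    det = Iss * Itt - Ist**2
    s0 = (It * Ist - Is * Itt) / det
    t0 = (Is * Ist - It * Iss) / det
    return u_p + s0, v_p + t0      # sub-pixel center in image coordinates
```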

3.2 Location variance model of a light-spot center

The derivatives can be decomposed as $\tilde {I}_{\bullet} = f_{\bullet} + n_{\bullet}$ and $\tilde {I}_{\bullet\bullet} = f_{\bullet\bullet} + n_{\bullet\bullet}$, where the subscript placeholders stand for $s$ and $t$ and denote the directions of the partial derivatives. Here, $f_{\bullet}$ and $f_{\bullet\bullet}$ represent the values at (0, 0) in $o-st$ of the ideal image convolved with the Gaussian first-order and second-order differential kernels, respectively, and $n_{\bullet}$ and $n_{\bullet\bullet}$ represent those of the image noise convolved with the same two kernels.

According to the characteristics of the Gaussian distribution, $f_s=f_t=0$, $f_{ss}=f_{tt}$, ${{f}_{st}}=0$, and $n_{ss}\ll f_{ss}$. Therefore, Eq. (10) simplifies to

$$\begin{aligned} & {{s}_{0}}=\frac{{{n}_{s}}\left( {{f}_{st}}-{{f}_{tt}} \right)}{{{f}_{ss}}{{f}_{tt}}-f_{st}^{2}}=-\frac{{{n}_{s}}}{{{f}_{ss}}}, \\ & {{t}_{0}}=\frac{{{n}_{t}}\left( {{f}_{st}}-{{f}_{ss}} \right)}{{{f}_{ss}}{{f}_{tt}}-f_{st}^{2}}=-\frac{{{n}_{t}}}{{{f}_{tt}}}. \\ \end{aligned}$$

The ideal image, the noise, and the convolution kernel all satisfy Gaussian distributions, so $n_s$ and $n_t$ have the same distribution. According to [24], their variance is $\hat {\sigma }_{n}^{2}={\sigma _{n}^{2}}/\left ({8\pi \sigma _{g}^{4}}\right )$. Combining this with the properties of variance, the location variance of the spot center is $\sigma _{{{s}_{0}}}^{2}=\sigma _{{{t}_{0}}}^{2}={\hat {\sigma }_{n}^{2}}/{{{f}_{ss}}^{2}}$.

Therefore, the location variance of the light-spot center is

$$\sigma^2_{s_0}=\sigma^2_{t_0}=\frac{(\sigma^2_g+\sigma^2_w)^4}{8\pi\sigma^4_w\sigma^4_g}\left(\frac{\sigma^2_n}{K^2}\right).$$
When $\sigma _g$ = $\sigma _w$, Eq. (12) can be simplified as
$$\sigma _{{{s}_{0}}}^{2}=\sigma _{{{t}_{0}}}^{2}=\frac{2}{\pi }\left( \frac{\sigma _{n}^{2}}{{{K}^{2}}} \right),$$
where $\sigma _{n}^{2}$ is the variance of image noise $n\left (s, t\right )$, which can be estimated by the method in [25], and $K$ is the maximum gray value in the vicinity of the light-spot center.
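At the best scale, Eq. (13) turns into a one-line estimate per feature point. A sketch, in which the noise standard deviation `sigma_n` is assumed to be supplied externally (e.g., by the estimator of [25]) and `K` is read from a small window around the detected center:

```python
import numpy as np

def location_variance(img, u_p, v_p, sigma_n, win=3):
    """Eq. (13): per-point location variance at the best scale sigma_g = sigma_w."""
    # K is the maximum gray value in the vicinity of the light-spot center.
    K = img[v_p - win:v_p + win + 1, u_p - win:u_p + win + 1].max()
    return (2.0 / np.pi) * sigma_n**2 / K**2
```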

Based on this model, it is now proved that the location accuracy is highest when $\sigma _g=\sigma _w$. Differentiating Eq. (12) with respect to $\sigma _g^{2}$ and setting the result to zero yields

$$\frac{\partial \sigma _{{{s}_{0}}}^{2}}{\partial \sigma _{g}^{2}}=A\frac{2\left( \sigma _{g}^{2}-\sigma _{w}^{2} \right){{\left( \sigma _{g}^{2}+\sigma _{w}^{2} \right)}^{3}}}{\sigma _{g}^{6}}=0,$$
where $A={\sigma _{n}^{2}}/{\left ( 8\pi {{K}^{2}}\sigma _{w}^{4} \right )}$. Thus, $\sigma _g=\sigma _w$ is the point at which the location variance reaches its minimum. Because the mean of the feature points’ location errors is generally assumed to be zero, the standard deviation of the feature point locations is essentially equal to their root mean square error (RMSE). Therefore, the minimum of $\sigma _{s_0}$ indicates the highest location accuracy. Figure 4 presents simulation results showing that, with an increasing noise level, the standard deviation of location, $\sigma _e$, remains essentially equal to the RMSE of location. Moreover, the estimated standard deviation of location, $\sigma _{s_0}$, is consistent with $\sigma _e$, which confirms the above derivation. For an ideal light-spot center, differentiating Eq. (7) with respect to $\sigma _g$ and setting it to zero gives
$$\frac{\partial Lc}{\partial {{\sigma }_{g}}}=\frac{\partial \left( {{\sigma }_{g}}^{4}\left( {{f}_{ss}}{{f}_{tt}}-{{f}_{st}}^{2} \right) \right)}{\partial {{\sigma }_{g}}} = \frac{{{M}^{2}}}{4{{\pi }^{2}}}\frac{4\left( \sigma _{g}^{5}-\sigma _{g}^{3}\sigma _{w}^{2} \right)}{{{\left( \sigma _{g}^{2}+\sigma _{w}^{2} \right)}^{5}}}=0.$$


Fig. 4. Comparison of standard deviation of actual location, $\sigma _e$, RMSE of actual location, $Erms$, and estimated standard deviation of location, $\sigma _{s_0}$, with increasing image noise. (Left) Comparison of $\sigma _e$ and $Erms$. (Right) Comparison of $\sigma _e$ and $\sigma _{s_0}$.


This shows that ${\partial Lc}/{\partial \sigma _g}=0$ when $\sigma _g=\sigma _w$, meaning that $Lc$ reaches its maximum at the best scale. In summary, the best scale of the adaptive multi-scale extraction method can be determined from the maximum of $Lc$, and the highest location accuracy is achieved there.
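Both extremum claims can be checked symbolically. The sketch below verifies with SymPy that the derivative of Eq. (12) vanishes at $\sigma_g = \sigma_w$, that the same holds for $Lc$ (whose closed form for an ideal spot, $Lc = \sigma_g^4 M^2 / (4\pi^2(\sigma_g^2+\sigma_w^2)^4)$, follows from convolving the two Gaussians), and that Eq. (12) then reduces to Eq. (13).

```python
import sympy as sp

sg, sw, sn, K, M = sp.symbols("sigma_g sigma_w sigma_n K M", positive=True)

var = (sg**2 + sw**2)**4 / (8 * sp.pi * sw**4 * sg**4) * sn**2 / K**2  # Eq. (12)
Lc  = sg**4 * M**2 / (4 * sp.pi**2 * (sg**2 + sw**2)**4)               # ideal-spot Lc

assert sp.simplify(sp.diff(var, sg).subs(sg, sw)) == 0  # variance extremum at sg = sw
assert sp.simplify(sp.diff(Lc,  sg).subs(sg, sw)) == 0  # Lc extremum at sg = sw
print(sp.simplify(var.subs(sg, sw)))                    # 2*sigma_n**2/(pi*K**2), Eq. (13)
```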

3.3 Calibration based on location variance normalization

According to the Gauss-Markov theorem [26], the ordinary least-squares estimator is the best linear unbiased estimator if the errors in the linear regression model are uncorrelated, have equal variance, and have an expectation of zero. In traditional camera calibration [3], the location variance of each feature point is assumed to be equal, and the re-projection errors are minimized directly in the nonlinear optimization after obtaining the linear solution, that is,

$$\boldsymbol{P}=\arg \min \sum_{j=1}^{M}{\sum_{i=1}^{N}{{{\left( {\boldsymbol{p}_{d,ij}}-{{{\tilde{\boldsymbol{p}}}}_{d,ij}} \right)}^{2}}}},$$
where $\boldsymbol {P}$ denotes the camera parameters, $M$ is the number of images involved in the calibration, and $N$ is the number of feature points per image. ${\boldsymbol {p}_{d,ij}}$ is the distorted coordinate in the image reference frame of the $i$-th point in the $j$-th image, and ${{\tilde {\boldsymbol {p}}}_{d,ij}}$ is the distorted coordinate of the corresponding re-projection point.

However, as this study shows, location variances vary from point to point in the presence of defocus blur and large, uneven image noise. In this scenario, minimizing the re-projection error of each point without normalization leads to a sub-optimal solution.

Therefore, the location variances estimated by the proposed model are used to normalize the re-projection error of each feature point to obtain the optimal solution. The Gauss-Newton method is used for the nonlinear optimization, which can be expressed as

$$\boldsymbol{P}=\arg \min \sum_{j=1}^{M}{\sum_{i=1}^{N}{\frac{{{\left( {\boldsymbol{p}_{d,ij}}-{{{\tilde{\boldsymbol{p}}}}_{d,ij}} \right)}^{2}}}{\sigma _{{{s}_{0}}ij}^{2}}}},$$
where $\sigma _{s_0ij}^{2}$ is the location variance of the feature point. This normalization not only reduces the impact of low-accuracy points but also equalizes the variances of the re-projection errors, making the non-linear optimization satisfy the Gauss-Markov theorem and yield the optimal solution.
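A sketch of this variance-normalized optimization using SciPy. The paper uses Gauss-Newton; here `scipy.optimize.least_squares` with the Levenberg-Marquardt method stands in for it, and `project` is a hypothetical helper that evaluates the full pinhole-plus-distortion model of Section 2 for a given parameter vector.

```python
import numpy as np
from scipy.optimize import least_squares

def calibrate_normalized(params0, world_pts, image_pts, sigmas, project):
    """Minimize Eq. (17): re-projection errors divided by per-point sigma."""
    def residuals(params):
        # Dividing by the estimated standard deviation before squaring
        # implements the normalization of Eq. (17).
        r = (project(params, world_pts) - image_pts) / sigmas[:, None]
        return r.ravel()
    return least_squares(residuals, params0, method="lm").x
```

With all sigmas equal, this reduces to the plain objective of Eq. (16).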

In traditional camera calibration, target feature points are usually required to be imaged sharply and the image noise is even, so the location variance of each feature point is essentially equal and the optimal solution can be obtained without variance normalization. From this viewpoint, traditional methods can be regarded as a special case of the proposed method.

Binocular stereo vision sensors can be calibrated by the proposed method combined with the method introduced in [27].

4. Simulated experiment

Simulated experiments are conducted to analyze the effectiveness of the proposed method by examining the influence of location errors and target forms on calibration. The experimental settings are as follows: $f_x$ = $f_y$ = 5000 pixels, $u_0$ = 968.5 pixels, $v_0$ = 728.5 pixels, and $k_1$ = $k_2$ = 0.05. The simulations are implemented in MATLAB.

4.1 Analysis of the influence of the location errors on camera calibration

Because various kinds of disturbance ultimately result in location errors, the first part of the simulation analyzes the effect of location error on different calibration methods. In the simulation, the coordinates of the feature points are generated directly, and Gaussian white noise at levels from 0 to 1 pixel in steps of 0.1 pixel is added to each image to simulate location errors. Twenty images are generated for calibration, with 36 feature points per image and an 8-mm distance between adjacent points. Three groups are compared: (1) Zhang’s method with the same noise standard deviation for all points at each level (LZ); (2) Zhang’s method with different noise standard deviations at each level (SZ); and (3) the proposed method with different noise standard deviations at each level (OURS). When a camera is calibrated outside the DoF, image blur and noise vary from point to point, so the standard deviations of different points differ; groups (2) and (3) simulate this situation, as sketched below. At each noise level, noise at two standard deviations, one twice the other, is added to equal numbers of feature points in each image. For example, at a noise level of 0.1 pixel, noise with a standard deviation of 0.1 pixel is added to half of the feature points in an image and noise with a standard deviation of 0.2 pixel to the other half, so that different points have different location variances. Group (1) simulates calibration within the DoF; all of its feature points receive noise whose standard deviation is 1.3 times the noise level, to balance against groups (2) and (3). Experiments are repeated 100 times at each noise level, with the noise re-drawn each time to enhance randomness. The evaluation criterion is the RMSE between the estimated parameter values and the ground truth. The results are shown in Fig. 5.
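The split-noise scheme of groups (2) and (3) can be reproduced in a few lines; a sketch, where the function name and the 50/50 split follow the description above:

```python
import numpy as np

def add_split_noise(pts, level, rng):
    """Add noise with std `level` to half of the points and 2*level to the rest."""
    noisy = pts.astype(float).copy()
    idx = rng.permutation(len(pts))
    half = len(pts) // 2
    noisy[idx[:half]] += rng.normal(0.0, level, (half, 2))
    noisy[idx[half:]] += rng.normal(0.0, 2 * level, (len(pts) - half, 2))
    return noisy
```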


Fig. 5. RMSEs of the camera parameters with increasing location noise. (a) Relative RMSE of $f_x$. (b) Relative RMSE of $f_y$. (c) Relative RMSE of $u_0$. (d) Relative RMSE of $v_0$. (e) RMSE of $k_1$. (f) RMSE of $k_2$.


As displayed in Fig. 5, the RMSEs of the calibration results of all methods grow linearly, with no sharp rise, and the accuracy of the proposed method approaches that of LZ, the benchmark. In fact, the synthetic standard deviation of OURS and SZ is about 1.58 times the noise level, which is higher than that of LZ; even under this larger and more uneven noise, the proposed method still performs essentially on par with LZ, proving its effectiveness. Moreover, the proposed method achieves higher accuracy than directly applying Zhang’s method when the location variances of the feature points differ: the accuracy of OURS is 12% higher than that of SZ for the focal lengths, up to 22% higher for the principal point, and up to 20% higher for the distortion coefficients. This is because feature points with larger location variance have lower location accuracy; if they are given the same weight as high-accuracy points in the optimization, the calibration results degrade. After normalization, the negative effects of feature points with large location variance are weakened; furthermore, the re-projection errors then have equal variance, so by the Gauss-Markov theorem the calibration achieves better performance. This analysis indicates that the proposed method can achieve high accuracy in complex environments involving defocus blur and uneven noise, i.e., that it can break the limitations of DoF and target size.

4.2 Analysis of the influence of target forms on camera calibration

3D targets are usually supposed to provide higher accuracy than 2D targets. This section examines whether the proposed method achieves higher accuracy with 3D targets.

Again, the coordinates of the feature points are generated directly. The 3D target consists of two orthogonal planes with 6$\times$6$\times$6 points at 8-mm intervals, and it is generated at one position without occlusion. The calibration method for the 3D target is Tsai’s method [1] with normalization (denoted 3D), to which the proposed method is easily applied: the re-projection errors in the non-linear optimization are normalized. The defocusing situation is again simulated: the half of the feature points nearer the camera receive high-level noise, and the other half receive low-level noise. The settings for the planar target are the same as for OURS in Section 4.1, and the noise levels are also the same. The experiment at each noise level is repeated 100 times, with the noise re-drawn at each iteration. Because Tsai’s method requires the principal point to be calibrated in advance and defines the distortion coefficients differently from the proposed method, the analysis focuses on the focal lengths. The results are shown in Fig. 6.


Fig. 6. Simulated calibration results of the proposed method using 3D and 2D targets with increasing noise level. (a) relative RMSEs of $f_x$ (b) relative RMSEs of $f_y$.


As seen from Fig. 6, the results of 3D are slightly better than those of OURS at low noise levels, whereas OURS is better at high noise levels, which may be caused by the limited total number of feature points of the 3D target. Overall, the 3D target offers slightly higher accuracy than the 2D target for the proposed method, but it easily suffers from self-occlusion and a limited number of feature points, whereas the 2D target is more flexible. Thus, calibrating a camera with 2D targets may be the better choice.

5. Physical experiment

The effectiveness of the proposed method is also verified via physical experiments, in which different calibration methods are evaluated by the calibration results of the intrinsic parameters and by binocular measurements. In the experiments, cameras (AVT GT1920) with a resolution of 1936 pixel $\times$ 1456 pixel and 23-mm Schneider lenses are used. Three types of targets are employed: (1) a large chessboard target with a 20-mm interval between adjacent points and a total of $6\times 6$ feature points, placed within the DoF and extracted by the Harris method [28] (LZ); (2) a small chessboard target with a 5-mm interval and $6\times 6$ feature points, placed outside the DoF and extracted by the Harris method (SZ); and (3) a light-spot target with an 8-mm interval and $6\times 6$ feature points, placed outside the DoF and extracted by the proposed method (OURS), the Hessian method [29] (GS), the Gaussian fitting method [30] (GF), and the centroid method [30] (CT), respectively. The targets are commercially available (ordered from http://www.caliboptics.com/), and the machining precision of all three targets is 1 $\mu$m. Figure 7 shows the calibration scenarios of the small and large targets and compares their areas; the area of the large target is more than 6 times that of the small target. Because only the proposed method can estimate the location variance, the proposed normalized optimization is used for OURS in the nonlinear optimization, whereas Zhang’s method is employed for the others. Because LZ applies Zhang’s method, which is popular and state-of-the-art, within the DoF, it is set as the benchmark.


Fig. 7. Comparison of calibration scenarios of small target and large target. (Left) Small target calibration scenario. (Middle) Large target calibration scenario. (Right) Comparison of large target and small target.


5.1 Evaluation of intrinsic parameters calibration

One camera is analyzed in the evaluation of the intrinsic parameters. Each target is placed 20 times, and part of the calibration datasets is shown in Fig. 8. According to our experiments and experience, the targets should be placed at angles of more than 20$^\circ$ to guarantee calibration accuracy. The camera calibration results are listed in Tables 1 and 2, where the values after the plus-minus sign represent the uncertainties of the corresponding parameters and the last column (repE) is the RMSE of the re-projection errors. Figure 9 shows the distributions of the re-projection errors of the different calibration methods.


Fig. 8. Selection of calibration datasets. (Top) Large chessboard target. (Middle) Small chessboard target. Only the left top $6\times 6$ points are used for both chessboard targets. (Bottom) Small light-spot target.



Fig. 9. Re-projection errors of different calibration methods. (a)-(f) are re-projection errors of LZ, SZ, GF, CT, GS, and OURS, respectively.



Table 1. Calibration results of focal lengths and principal points (pixel)


Table 2. Calibration results of radial distortion coefficients

Tables 1 and 2 show that the results of OURS are essentially equal to those of LZ in terms of parameter values and uncertainties, even though OURS is performed outside the DoF; this means the proposed method can break the limitations of DoF and target size. Moreover, among the methods performed outside the DoF, the calibration results of the light-spot target are better than those of the chessboard target, which demonstrates the advantage of the light spot in defocused camera calibration. In addition, OURS performs best among the four methods using the light-spot target, followed by GS, GF, and CT. This is because OURS normalizes the re-projection error of each feature point, satisfying the Gauss-Markov theorem and thus improving the accuracy of the estimated parameters; concurrently, unlike GS, OURS determines the convolution scale adaptively. In the actual extraction of the feature points, the intensity distribution of the light spot is not strictly Gaussian under various conditions, so CT and GF perform worse; they are also affected by the window size used during image processing. Figure 9 shows that the re-projection errors of OURS are the most concentrated among the defocused camera calibration methods, with a standard deviation of 0.0406 pixel, approaching that of LZ. The above analysis verifies the effectiveness of the proposed method.

5.2 Evaluation of binocular measurement results

The goal of calibration is measurement, so in this section the different calibration methods are evaluated objectively through binocular vision measurement. The binocular vision system consists of two of the aforementioned cameras, and the targets are placed 20 times in their common field of view. The calibration datasets are similar to those in Section 5.1 and are therefore not displayed. The intrinsic parameters of the cameras under test are calibrated by the methods listed above, and the structure parameters are calibrated by taking the calibration target as the intermediary [22]. The structure parameters calculated by the different methods are listed in Table 3, where $\boldsymbol {om}$ is the rotation vector from which the rotation matrix $\boldsymbol {R}$ is derived by the Rodrigues transform. During evaluation, a planar light-spot target is placed in the field of view of the binocular cameras in different poses, providing pairs of points to be measured. Accuracy is evaluated by comparing the measured distance between the longest pair of points in each row and column with the ground truth. Figure 10 presents the pairs of points provided by one evaluation image. For completeness, the evaluation points are extracted by four methods: the centroid method (IMG_CT), the Gaussian fitting method (IMG_GF), the Hessian method (IMG_GS), and the adaptive multi-scale method (IMG_MS, adopted in the proposed calibration procedure). Because some calibration methods are performed within the DoF and others outside it, and measurement is generally assumed to be more accurate within the calibration space, the evaluation is performed both within and outside the DoF. In addition, when adjusting the brightness of the light-spot target, it should be turned down appropriately when the target is in focus, to maintain the Gaussian distribution of the spot, and turned up when the target is out of focus, to compensate for the decrease in brightness caused by defocus.


Fig. 10. Evaluation images. Yellow lines indicate pair of points in each row, and blue lines represent pair of points in each column. (Left) Evaluation image in the DoF range. (Right) Evaluation image in the non-DoF range.



Table 3. Structure parameters calibrated by different methods

5.2.1 Analysis of the evaluation results using target images in the DoF range

The evaluation target in this section is a light-spot target with a 20-mm interval between adjacent points and a total of $6\times 6$ feature points, placed within the DoF 9 times. Each image provides 12 pairs of points; the longest distance in each row or column is 100 mm, and there are 108 pairs of points in total. Table 4 presents the relative RMSEs of the distances between pairs of points, and Fig. 11 shows the distributions of the distance errors. Each rectangle represents the measurement error of one calibration method; its upper and lower sides mark the 75th and 25th percentiles of the error, respectively, and the small rectangle inside marks the mean.


Fig. 11. Error distributions of different calibration methods evaluated by different evaluation point coordinates in the DoF range. (a)-(d) are methods evaluated by IMG_GF, IMG_CT, IMG_GS, and IMG_MS, respectively.



Table 4. Binocular measurement results in the DoF range (%)

Table 4 and Fig. 11 show that the measurement results of OURS approach those of LZ when evaluated within the DoF: their values are close, and their distributions are concentrated and close to zero for every kind of evaluation point. Moreover, among the calibration methods performed outside the DoF, OURS achieves the highest accuracy, up to 0.1528% for a given evaluation point, followed by GS, CT, GF, and SZ; the accuracy of OURS is about 3 times that of SZ and 12.5% higher than that of GS. Figure 11 shows that the error distribution of OURS is the most concentrated and closest to zero among the defocused camera calibration methods. This analysis shows that although it is calibrated outside the DoF, the proposed method still achieves high-accuracy measurement within the DoF, approaching the accuracy of the focused calibration method (LZ), and that it performs best among the defocused calibration methods. All of this proves the effectiveness of the proposed method.

5.2.2 Analysis of the evaluation results using target images in the non-DoF range

Outside the DoF, the small light-spot target used for calibration is used for evaluation in 14 poses different from those in the calibration dataset. The longest distance between a pair of points in each row or column is 40 mm, and there are 168 pairs of points in total. Table 5 presents the relative RMSEs of the distances between pairs of points, and Fig. 12 shows the distributions of the distance errors.


Fig. 12. Error distributions of different calibration methods evaluated by different evaluation point coordinates in the non-DoF range. (a)-(d) are methods evaluated by IMG_GF, IMG_CT, IMG_GS, and IMG_MS, respectively.



Table 5. Binocular measurement results in the non-DoF range (%)

As shown in Table 5, OURS achieves the highest accuracy, up to 0.0403% for a certain kind of evaluation point. The accuracy of OURS is higher than that of LZ and about 3 times that of SZ; moreover, it is up to 41.7% higher than that of GS, which proves the effectiveness of the adaptive multi-scale extraction method and the normalized optimization. For a given calibration result, IMG_MS achieves the highest evaluation accuracy or is close to the most accurate result, further proving the effectiveness of the adaptive multi-scale method. Figure 12 shows that, for each kind of evaluation point, the rectangle of OURS is the smallest and its mean is close to zero, indicating that the proposed method has the highest accuracy. Concurrently, the error distributions of most calibration methods are the most concentrated when evaluated by IMG_MS, confirming the high accuracy of the proposed adaptive multi-scale method. All of the above analysis proves the effectiveness of the proposed calibration method.

6. Conclusion

A high-accuracy camera calibration method that is not limited by depth of field or target size is proposed in this study. Using a small planar light-spot target, even with superimposed image blur and noise, high-accuracy calibration of different types of cameras can be achieved. The method offers the following contributions and advantages:

1. The proposed method breaks the limitations of depth of field and target size in current camera calibration methods. High-accuracy calibration is achieved from blurred images of small targets placed outside the depth of field. Experiments prove that the accuracy of the proposed method is comparable to that of Zhang’s method using large targets within the depth of field. Outside the depth of field, the proposed method is considerably superior to other methods, with a measurement accuracy of up to 0.0403%.

2. A mathematical model describing the relationship between image noise and location variance is established. Based on it, the location variance of each feature point is estimated individually and used to normalize the re-projection errors during non-linear optimization, yielding the optimal solution according to the Gauss-Markov theorem. Experiments verify the correctness of the model and the effectiveness of the normalized optimization method.

3. The best scale ${{\sigma }_{{\tilde {g}}}}={{\sigma }_{w}}$ of the adaptive multi-scale extraction method is defined and proved. Based on the location variance model, it is proved that extraction accuracy is highest at the best scale, and that this scale can be found by searching for the maximum of $Lc$.

Funding

National Natural Science Foundation of China (51575033); Aeronautical Science Foundation of China (201946051001).

Disclosures

The authors declare no conflicts of interest.

References

1. R. Y. Tsai, “A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses,” IEEE J. Robot. Automat. 3(4), 323–344 (1987).

2. J. Heikkila, “Geometric camera calibration using circular control points,” IEEE Trans. Pattern Anal. Mach. Intell. 22(10), 1066–1077 (2000).

3. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000).

4. H. C. Daniel, K. Juho, and H. Janne, “Joint depth and color camera calibration with distortion correction,” IEEE Trans. Pattern Anal. Mach. Intell. 34(10), 2058–2064 (2012).

5. Z. Liu, Q. Wu, X. Chen, and Y. Yin, “High-accuracy calibration of low-cost camera using image disturbance factor,” Opt. Express 24(21), 24321–24336 (2016).

6. Z. Zhang, “Camera calibration with one-dimensional objects,” IEEE Trans. Pattern Anal. Mach. Intell. 26(7), 892–899 (2004).

7. F. Wu, Z. Hu, and H. Zhu, “Camera calibration with moving one-dimensional objects,” Pattern Recognit. 38(5), 755–765 (2005).

8. O. D. Faugeras, Q. T. Luong, and S. J. Maybank, “Camera self-calibration: theory and experiments,” in Proc. European Conference on Computer Vision (1992).

9. M. Pollefeys and L. V. Gool, “Stratified self-calibration with the modulus constraint,” IEEE Trans. Pattern Anal. Mach. Intell. 21(8), 707–724 (1999).

10. B. Chen and B. Pan, “Camera calibration using synthetic random speckle pattern and digital image correlation,” Opt. Lasers Eng. 126, 105919 (2020).

11. M. Grosse, M. Schaffer, B. Harendt, and R. M. Kowarschik, “Camera calibration using time-coded planar patterns,” Opt. Eng. 51(8), 083604 (2012).

12. L. Huang, Q. Zhang, and A. Asundi, “Camera calibration with active phase target: improvement on feature detection and optimization,” Opt. Lett. 38(9), 1446–1448 (2013).

13. Y. Wang, B. Cai, K. Wang, and X. Chen, “Out-of-focus color camera calibration with one normal-sized color-coded pattern,” Opt. Lasers Eng. 98, 17–22 (2017).

14. Y. Wang, Y. Wang, L. Liu, and X. Chen, “Defocused camera calibration with a conventional periodic target based on Fourier transform,” Opt. Lett. 44(13), 3254–3257 (2019).

15. H. Ha, Y. Bok, K. Joo, J. Jung, and I. S. Kweon, “Accurate camera calibration robust to defocus using a smartphone,” in Proc. IEEE International Conference on Computer Vision (2015).

16. D. Douxchamps and K. Chihara, “High-accuracy and robust localization of large control markers for geometric camera calibration,” IEEE Trans. Pattern Anal. Mach. Intell. 31(2), 376–383 (2009).

17. M. A. Tehrani, T. Beeler, and A. Grundhofer, “A practical method for fully automatic intrinsic camera calibration using directionally encoded light,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition (2017).

18. M. Baba, M. Mukunoki, and N. Asada, “A unified camera calibration using geometry and blur of feature points,” in Proc. International Conference on Pattern Recognition (2006).

19. J. Liu, Y. Li, and S. Chen, “Robust camera calibration by optimal localization of spatial control points,” IEEE Trans. Instrum. Meas. 63(12), 3076–3087 (2014).

20. D. C. Brown, “Close-range camera calibration,” Photogramm. Eng. 37, 855–866 (1971).

21. G. Wei and D. Song, “Implicit and explicit camera calibration: theory and experiments,” IEEE Trans. Pattern Anal. Mach. Intell. 16(5), 469–480 (1994).

22. J. Y. Bouguet, “The Matlab open source calibration toolbox,” http://www.vision.caltech.edu/bouguetj/calib_doc/.

23. T. Lindeberg, “Feature detection with automatic scale selection,” Int. J. Comput. Vis. 30(2), 79–116 (1998).

24. C. Steger, “Analytical and empirical performance evaluation of subpixel line and edge detection,” in Empirical Evaluation Methods in Computer Vision (Los Alamitos, California, 1998), pp. 188–210.

25. G. Chen, F. Zhu, and A. Pheng, “An efficient statistical method for image noise level estimation,” in Proc. IEEE International Conference on Computer Vision (2015).

26. H. Theil, Principles of Econometrics (Wiley, 1971).

27. Z. Liu, G. Zhang, and Z. Wei, “A global calibration method for multiple vision sensors based on multiple targets,” Meas. Sci. Technol. 22(12), 125102 (2011).

28. C. Harris and M. Stephens, “A combined corner and edge detector,” in Proc. Alvey Vision Conference, vol. 15 (1988), pp. 147–151.

29. C. Steger, “An unbiased detector of curvilinear structures,” IEEE Trans. Pattern Anal. Mach. Intell. 20(2), 113–125 (1998).

30. M. R. Shortis, T. A. Clarke, and T. Short, “Comparison of some techniques for the subpixel location of discrete target images,” in Videometrics III, vol. 2350 (International Society for Optics and Photonics, 1994), pp. 239–250.
