
Measurement error model of the bio-inspired polarization imaging orientation sensor

Open Access

Abstract

This article studies the measurement error model and calibration method of the bio-inspired polarization imaging orientation sensor (BPIOS), which is of practical engineering significance for promoting bio-inspired polarization navigation. First, we systematically analyze the measurement errors in the imaging process of polarized skylight and establish an error model of the BPIOS based on the Stokes vector. Second, using simulated Rayleigh skylight as the incident surface light source, the influence of multi-source factors on the measurement accuracy of the BPIOS is quantified for the first time; these simulation results can guide the subsequent calibration of the BPIOS. We then propose a calibration method for the BPIOS based on geometric internal parameters and the Mueller matrix of the optical system and conduct an indoor calibration experiment. Experimental results show that the measurement accuracy of the calibrated BPIOS reaches 0.136°. Finally, the outdoor performance of the BPIOS is studied through a dynamic performance test and field compensation; the resulting heading accuracy of the BPIOS is 0.667°.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Polarization is another dimension of light, just like spectrum and intensity, and it can provide distinct and useful information about a visual scene [1,2]. Many animals, particularly insects, are sensitive to the polarization of light and use this information for navigation, detection, and communication [3]. A substantial amount of research has been done on the behavioral neurobiology of insect polarization navigation [4–9]. The desert ant Cataglyphis relies heavily on celestial compass information (namely polarized skylight) and path integration in its featureless desert habitat during foraging [10]. Behavioral experiments using flight simulators have shown that monarch butterflies use a time-compensated skylight compass to adjust their flight to the southwest direction [11], and the navigation mechanism of migrating monarch butterflies has been revealed [12]. Biologists have also recently demonstrated that greater mouse-eared bats use skylight polarization cues to calibrate a magnetic compass at sunset [13] and that mantis shrimp use celestial polarization and path integration to navigate home [14]. This celestial polarization orientation method is used as a bio-inspired polarization navigation (BPN) method, which has attracted much attention due to its advantages of autonomy and no error accumulation [15]. It has been reported that this method can assist an inertial navigation system when satellite signals are denied, and it therefore has potential application value [16]. However, the factors that affect the measurement performance of the bio-inspired polarization orientation sensor (BPOS) are still unclear. The study of the measurement error model of BPOS is of practical importance for facilitating polarization navigation.

Due to differences in the measurement principle of the polarized skylight, the current measurement error models and calibration methods of BPOS are mainly divided into photodiode-based and polarization imaging-based measurement. In terms of photodiode-based measurement, Lambrinos et al. [17] designed a six-channel photodiode-based polarization compass applied to ground robot navigation and proved the feasibility of using polarized skylight information for navigation. Meanwhile, Chu et al. [18,19] constructed an improved version of a photodiode-based BPOS, evaluated the static performance of the sensor, and calibrated and tested its static sensitivity and accuracy. Ma et al. [20] used the NSGA-II algorithm to calibrate a similar photodiode-based BPOS. Wang et al. [21] presented an improved photodiode-based BPOS with a plano-convex lens and used a central-symmetry, non-continuous calibration method. Chahl et al. [22] imitated the optical stabilization organ of dragonflies, known as the ocelli, and also developed a six-channel photodiode-based BPOS. Dupeyroux et al. [23–25] designed a photodiode-based BPOS that measures ultraviolet (UV) light, conducted outdoor performance tests under various weather conditions, and achieved an accuracy of $0.3^{\circ }$ under clear sky. However, because these photodiode-based BPOS can only measure the polarization information of a certain point in the whole sky, they are susceptible to external factors such as weather interference, the surrounding environment, and occlusion, resulting in poor robustness. To improve the performance of BPN methods, researchers have developed several polarization imaging navigation methods. Sturzl et al. [26,27] presented an efficient method for reconstruction of the full-sky polarization pattern and performed geometrical calibration of the four fisheye cameras. Chu et al. [28] integrated a multi-directional nanowire grid polarizer on a complementary metal oxide semiconductor (CMOS) sensor by nanoimprint lithography and performed laboratory calibration and outdoor tests. Fan et al. [29] presented a calibration method for a four-camera polarization imaging orientation device and analyzed the inconsistent response error of the cameras and the installation angle error of the four polarization filters. A calibration method of a bio-inspired polarized light compass based on iterative least squares estimation was proposed in [30]. Yang et al. [31] reported a calibration method based on extinction ratio error for a camera-based polarization orientation sensor. Although imaging prototypes are used for perception, the modeling and calibration of the photodiode-based measurement principle are still applied; the modeling is not systematic, and the Mueller matrix error of the optical system is not considered [28–32]. For the modeling and calibration of the polarization imaging principle, the geometric errors of the internal parameters, such as lens distortion and the principal point, are not considered [33–36], especially for the measurement of skylight.

Motivated by this situation, we present a measurement error model for bio-inspired polarization imaging orientation sensor (BPIOS) considering multi-source factors. The proposed model can effectively reflect the theoretical measurement accuracy of BPIOS under practical working conditions. Specifically, the main contributions of this article are as follows:

(1) The measurement errors in the skylight imaging process are systematically analyzed from the perspective of optical polarization imaging.

(2) Using the simulated Rayleigh polarized skylight as the incident surface light source, the influence of multi-source factors on the azimuth measurement of BPIOS is quantitatively given for the first time. This work can guide the calibration of BPIOS.

(3) A calibration method of BPIOS based on geometric internal parameters and the Mueller matrix of the optical system is proposed.

This article is organized as follows. Section 2 presents the formation of the skylight polarization field. Section 3 describes the measurement principle of BPIOS. The measurement error model of BPIOS is presented in Section 4. Section 5 introduces the numerical simulation and analysis. Section 6 details the calibration method. The experimental results are given in Section 7. We finally draw conclusions and perspectives of this work in Section 8.

2. Formation of the skylight polarization field

Light radiated by the Sun is unpolarized; however, on its way through the earth’s atmosphere, sunlight is scattered by atmospheric molecules. Atmospheric molecules, such as oxygen and nitrogen, are much smaller than the wavelength of sunlight. When sunlight encounters these molecules, it scatters and forms a skylight polarization pattern [37]. A general description of the scattering process at any point P in the sky is given in the world coordinate frame, namely the East-North-Up (ENU) coordinate frame, as shown in Fig. 1. This scattering results in the polarization of skylight: the E-vector is oriented perpendicular to the scattering plane (the plane OPS formed by the Sun, the scattering point in the sky, and the observer). Here, the E-vector is the electric field vector of the polarized skylight. Since the E-vector is perpendicular to the scattering plane, the following cross-product relation is obtained:

$$\qquad \mathit{E}_{n} = \mathit{S}_{n}\times\mathit{P}_{n},$$
where $E_{n}$ is the E-vector in the ENU coordinate frame, $\mathit {P}_{n}$ represents the scattering vector, and $\mathit {S}_{n}$ is the solar vector.

Fig. 1. A general description of the scattering process at any point in the sky in the ENU coordinate frame. S is the solar position on the celestial sphere and $p$ is a scattering point. $X_{p}Y_{p}Z_{p}$ is the local coordinate frame of a scattering point (p frame). $\alpha _{p},h_{p}$ are the azimuth and elevation angle of the scattering vector $P_{n}$, and $\alpha _{s},h_{s}$ are the azimuth and elevation angle of the solar vector $S_{n}$, respectively. $\varphi$ represents the angle of the E-vector from the local meridian ($X_{p}$ is the reference axis).

The E-vector $E_{p}$ in the local coordinate frame of a scattering point can be expressed as:

$$\qquad \mathit{E}_{p} = \mathit{R}^{p}_{n}\cdot\mathit{E}_{n},$$
where the transformation matrix $\mathit {R}^{p}_{n}$ from the ENU coordinate frame to the local coordinate frame of a scattering point is defined as:
$$\mathit{R}^{p}_{n} = \begin{bmatrix} \sin h_{p} & 0 & -\cos h_{p}\\ 0 & 1 & 0\\ \cos h_{p} & 0 & \sin h_{p} \end{bmatrix}\begin{bmatrix} \cos \alpha_{p} & \sin \alpha_{p} & 0\\ -\sin \alpha_{p} & \cos \alpha_{p} & 0\\ 0 & 0 & 1 \end{bmatrix}.$$

Substituting Eq. (1) and (3) into Eq. (2) yields:

$$\mathit{E}_{p} = \begin{bmatrix} -\cos h_{s}\sin(\alpha_{s}-\alpha_{p}) \\ -\cos h_{p}\sin h_{s}+\sin h_{p}\cos h_{s}\cos(\alpha_{s}-\alpha_{p}) \\ 0 \end{bmatrix}.$$

As can be seen from Eq. (4), the angle of E-vector (AoE) in the local coordinate frame of a scattering point is written as:

$$\tan \varphi = \frac{\cos h_{p}\sin h_{s}-\sin h_{p}\cos h_{s}\cos(\alpha_{s}-\alpha_{p})}{\cos h_{s}\sin(\alpha_{s}-\alpha_{p})}.$$

According to the above Eq. (5), a three-dimensional simulation model of the skylight polarization field based on Rayleigh scattering in the ENU coordinate frame can be established. When the solar altitude is $10^{\circ }$ and the azimuth angle is $300^{\circ }$, the 3-D simulation result of skylight polarization field is shown in Fig. 2.
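
To make the construction of this field concrete, the following minimal Python sketch evaluates Eq. (5) and the standard single-scattering Rayleigh relation for the DoLP on a grid of sky directions; the grid resolution, the function name, and the DoLP scaling factor are illustrative assumptions rather than part of the original implementation.

```python
# Minimal sketch (not the authors' code) of the Rayleigh skylight polarization
# field of Eq. (5).  The sky is sampled on an azimuth/elevation grid in the ENU
# frame; dolp_max rescales the single-scattering DoLP to mimic a non-ideal sky.
import numpy as np

def rayleigh_sky(h_s_deg, alpha_s_deg, n_az=360, n_el=90, dolp_max=1.0):
    h_s, alpha_s = np.radians(h_s_deg), np.radians(alpha_s_deg)
    alpha_p, h_p = np.meshgrid(np.radians(np.arange(n_az)),
                               np.radians(np.arange(1, n_el)))
    d_az = alpha_s - alpha_p
    # Angle of the E-vector from the local meridian, Eq. (5)
    aoe = np.arctan2(np.cos(h_p) * np.sin(h_s)
                     - np.sin(h_p) * np.cos(h_s) * np.cos(d_az),
                     np.cos(h_s) * np.sin(d_az))
    # Scattering angle between the solar vector and the observed sky direction
    cos_gamma = (np.sin(h_s) * np.sin(h_p)
                 + np.cos(h_s) * np.cos(h_p) * np.cos(d_az))
    # Single-scattering Rayleigh DoLP, scaled by dolp_max
    dolp = dolp_max * (1.0 - cos_gamma**2) / (1.0 + cos_gamma**2)
    return aoe, dolp

aoe, dolp = rayleigh_sky(10.0, 300.0)  # Sun at 10 deg altitude, 300 deg azimuth
```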

Fig. 2. 3-D simulation of the skylight polarization field when the Sun is at an altitude angle of $10^{\circ }$ and an azimuth angle of $300^{\circ }$ in the East-North-Up coordinate frame; (a) represents the AoE; (b) represents the corresponding degree of linear polarization (DoLP).

3. Measurement principle of BPIOS

Skylight is partially polarized light and its polarization state can be described by the Stokes vector [1,2]. The Stokes vector contains four parameters $I, Q, U$ and $V$. Among these parameters, $I$ is the total intensity of the light; $Q$ describes the amount of linear horizontal or vertical polarization; $U$ is the amount of linear +$45^{\circ }$ or -$45^{\circ }$ polarization; and $V$ is the amount of right or left circular polarization contained within the beam. The content of the $V$ component in skylight is extremely small [2], so it can be ignored in skylight imaging. Based on the polarimetric imaging principle, the polarimetric imaging of skylight is the process of projecting each point of polarized light in the sky onto the imaging plane through the Mueller matrix of the optical system, as shown in Fig. 3 and Fig. 4. Since only the first dimension of the Stokes vector can be received, the light intensity received on the imaging plane is expressed as follows [38]:

$$I_{\theta} = \frac{1}{2}(I+Q \cos 2(\theta-\beta)+U \sin 2(\theta-\beta)),$$
where $\theta$ is the angular distance between the transmission axis of linear polarizer and the reference axis $X_{c}$. $\beta$ is the angular distance between the local meridian and the reference axis $X_{c}$.

Fig. 3. Schematic diagram of measurement principle of the bio-inspired polarization imaging orientation sensor.

Fig. 4. Projection of the local and solar meridians onto an imaging plane. (a) represents the projection of any incident light beam; (b) represents the projection of the incident light beam on the solar meridian.

Substituting $\theta = 0^{\circ },45^{\circ },90^{\circ }$ into Eq. (6) yields:

$$\begin{bmatrix} I_{0} \\ I_{45} \\ I_{90} \end{bmatrix}=\frac{1}{2}\begin{bmatrix} 1 & \cos 2\beta & -\sin 2\beta\\ 1 & \sin 2\beta & \cos 2\beta\\ 1 & -\cos 2\beta & \sin 2\beta \end{bmatrix}\begin{bmatrix} I \\ Q \\ U \end{bmatrix}.$$

On the imaging plane, the AoE of skylight is calculated as follows [2]:

$$\tan \varphi = \frac{U}{Q} = \frac{(2I_{45}-I_{0}-I_{90})\cos 2\beta-(I_{0}-I_{90})\sin 2\beta}{(2I_{45}-I_{0}-I_{90})\sin 2\beta+(I_{0}-I_{90})\cos 2\beta}.$$

Dividing the numerator and denominator of Eq. (8) by $(I_{0}-I_{90})\cos 2\beta$, it can be solved as:

$$\begin{cases} \varphi=\alpha-\beta, & \\ \alpha=\frac{1}{2}\arctan \frac{(2I_{45}-I_{0}-I_{90})}{(I_{0}-I_{90})}, & \\ \beta=\arctan (\frac{j-v_{y}}{i-u_{x}}), & \end{cases}$$
where $(i,j)$ are the pixel coordinates of the incident skylight and $(u_{x},v_{y})$ is the principal point of the AoE image.

The $\varphi$ map corresponds to the AoE image, which contains the azimuth information of the solar meridian. To obtain this azimuth information, we use the projection feature of the E-vector to derive the solar meridian in the AoE image. As shown in Fig. 4(b), when the solar meridian coincides with the local meridian, the AoE on the solar meridian is equal to $90^{\circ }$. Based on the above polarimetric imaging of skylight, we can obtain the AoE image containing the solar azimuth information. The solar azimuth angle is then extracted from the AoE image using an azimuth measurement method [39].
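
As an illustration of the retrieval described by Eqs. (6)–(9), the sketch below recovers an AoE image from the three polarized intensity channels; the array shapes, the pixel indexing convention, and the function name are assumptions, not the authors' code.

```python
# Minimal sketch of Eq. (9): recovering the AoE image from the 0/45/90-degree
# intensity channels.  The principal point (u_x, v_y) and the convention that i
# indexes rows and j indexes columns are assumptions of this sketch.
import numpy as np

def aoe_image(I0, I45, I90, u_x, v_y):
    rows, cols = I0.shape
    # alpha: E-vector angle with respect to the sensor reference axis, Eq. (9)
    alpha = 0.5 * np.arctan2(2.0 * I45 - I0 - I90, I0 - I90)
    # beta: angle of the local meridian at each pixel from the reference axis
    j, i = np.meshgrid(np.arange(cols), np.arange(rows))
    beta = np.arctan2(j - v_y, i - u_x)
    return alpha - beta  # AoE relative to the local meridian
```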

4. Measurement error model of BPIOS

We can see from Eq. (9) that the error sources of BPIOS mainly include the coordinate deviation of the principal point, the installation angle error of micro-polarization array, the attenuation and distortion of the lens, and the grayscale response inconsistency of CMOS. Therefore, it is necessary to classify and discuss these error sources. The continuous form of the measurement error model of BPIOS is written as:

$$\varphi(\delta\beta,\delta I_{\theta},\delta\theta,\delta\eta)=\frac{\partial f}{\partial\beta}\delta\beta+\frac{\partial f}{\partial I_{\theta}}\delta I_{\theta}+\frac{\partial f}{\partial \theta}\delta\theta+\frac{\partial f}{\partial \eta}\delta\eta,$$
where $\delta I_{\theta },\delta \beta,\delta \theta,\delta \eta$ respectively represent the inconsistent grayscale response of CMOS, the coordinate deviation of principal point, the installation angle error of micro-polarization array, and the attenuation of the lens.

4.1 Coordinate deviation of principal point

Considering that the incident skylight is coplanar with the direction of the coordinate deviation of the principal point, the mechanism of the coordinate deviation of the principal point is shown in Fig. 5. Suppose that $e_{1}$ is along the direction of the maximum coordinate deviation of principal point, $e_{3}$ is the ideal principal optical axis direction, and $e^{'}_{3}$ is the actual principal optical axis direction. Since the principal point of BPIOS is not calibrated, there is an offset between the coordinates of the actual principal point and the ideal principal point. When other error sources are zero, the coordinates of the principal point $(u_{x},v_{y})$ are affected. From Eqs. (9) and (10), the influence of the principal point on the AoE image is defined as follows:

$$\quad\varphi=\alpha-\tilde{\beta},$$
where $\tilde {\beta }=\arctan (\frac {j-v_{y}}{i-u_{x}}+\delta \beta )$ , $\delta \beta =\frac {\delta u_{x}(j-v_{y})-\delta v_{y}(i-u_{x})}{(i-u_{x}-\delta u_{x})(i-u_{x})}$ , and $(\delta u_{x},\delta v_{y})$ is the coordinate deviation of the principal point.
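
As an illustrative numerical example (not taken from the paper): for a pixel located at $(i-u_{x}, j-v_{y}) = (200, 0)$ pixels, a principal-point offset of $\delta v_{y}=3$ pixels changes the local-meridian angle from $\beta=\arctan(0/200)=0^{\circ }$ to $\tilde{\beta}\approx\arctan(-3/200)\approx -0.86^{\circ }$, so the recovered AoE at that pixel shifts by nearly $1^{\circ }$, and pixels closer to the principal point are affected even more strongly.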

Fig. 5. Coordinate deviation of the principal point.

4.2 Installation angle error of the micro-polarization array

Since the micro-polarization array is integrated into the CMOS through micro-nano processing, angle errors are inevitable during alignment. The installation angle error of the micro-polarization array is shown in Fig. 6. The incident skylight beam interacts with the micro-polarization array after passing through the lens. The Mueller matrix of the micro-polarization array including the installation angle error is expressed as:

$$M_{(\theta-\beta)}=\frac{1}{2}\begin{bmatrix} 1 & \cos 2\phi_{\theta} & \sin 2\phi_{\theta}\\ \cos 2\phi_{\theta} & \cos^{2} 2\phi_{\theta} & \cos 2\phi_{\theta} \sin 2\phi_{\theta}\\ \sin 2\phi_{\theta} & \cos 2\phi_{\theta} \sin 2\phi_{\theta} & \sin^{2} 2\phi_{\theta} \end{bmatrix},$$
where $\phi _{\theta }=(\theta +\delta \theta -\beta )$, and the installation angle error of the micro-polarization array $\delta \theta$ is assumed to be the same in the three directions.
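
A minimal sketch of Eq. (12) in Python is given below (restricted to the $I, Q, U$ block); the function name and the degree-based interface are assumptions for illustration.

```python
# Minimal sketch of Eq. (12): Mueller matrix (I, Q, U block) of one
# micro-polarizer channel with nominal axis theta, local-meridian angle beta,
# and installation angle error delta_theta (all in degrees; an assumption).
import numpy as np

def micro_polarizer_mueller(theta_deg, beta_deg, delta_theta_deg=0.0):
    phi = np.radians(theta_deg + delta_theta_deg - beta_deg)
    c, s = np.cos(2.0 * phi), np.sin(2.0 * phi)
    return 0.5 * np.array([[1.0, c,     s    ],
                           [c,   c * c, c * s],
                           [s,   c * s, s * s]])
```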

Fig. 6. Installation angle error of micro-polarization array.

When other error sources are zero, the influence of the installation angle error of the micro-polarization array on the AoE image follows from Eqs. (9) and (10) as:

$$\quad\varphi=\alpha-(\beta+\delta\theta).$$

4.3 Grayscale response inconsistency of CMOS

In visual measurement, there is an approximate linear relationship between input light intensity and output grayscale. The grayscale response inconsistency of CMOS mainly consists of the nonuniformity of dark current response and photoelectric response. The photoelectric sensitive array outputs the light intensity $I_{\theta }$ as image grayscale. Based on the linear model in EMVA 1288 [40], the image grayscale of the CMOS output $I^{'}_{\theta }$ is expressed as:

$$\qquad I^{'}_{\theta}=aI_{\theta}+b_{\theta}+n_{\theta},(\theta=0^{{\circ}},45^{{\circ}},90^{{\circ}}),$$
where $a=\mu g$, $\mu$ is the quantum efficiency, $g$ is the gain converting the number of electrons into image grayscale, $b_{\theta }$ is the dark signal, and $n_{\theta }$ is the measurement noise.

When the other error sources are held fixed, the impact of the grayscale response inconsistency of CMOS on the AoE image follows from Eqs. (9) and (10) as:

$$\qquad\varphi=\dfrac{1}{2}\arctan ( \frac{2I^{'}_{45}-I^{'}_{0}-I^{'}_{90}}{I^{'}_{0}-I^{'}_{90}}) -\beta.$$
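
The following sketch illustrates Eqs. (14) and (15): the linear CMOS model is applied to the three ideal intensity channels and $\alpha$ is then recomputed from the perturbed grayscales. The gain, dark-signal, and noise values are placeholder assumptions.

```python
# Minimal sketch of Eqs. (14)-(15): the linear CMOS response is applied to the
# ideal intensity channels, then alpha is recomputed from the grayscale outputs.
# Gain a, dark signal b and noise level are placeholder assumptions.
import numpy as np

def apply_cmos_response(I_theta, a=1.0, b=0.0, noise_std=0.0, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    return a * I_theta + b + noise_std * rng.standard_normal(I_theta.shape)

def alpha_from_grayscale(G0, G45, G90):
    # Eq. (15): alpha recovered from the (possibly non-uniform) grayscale outputs
    return 0.5 * np.arctan2(2.0 * G45 - G0 - G90, G0 - G90)
```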

4.4 Attenuation of the lens

When polarized skylight passes through the lens, the polarized skylight beam corresponding to a certain pixel can be decomposed into orthogonal p-light and s-light. Let $\tau _{p}$ and $\tau _{s}$ represent the lens amplitude transmittances of the p-light and s-light, respectively, which vary only with the incidence angle. When a single-pixel beam passes through different apertures of the lens, the incidence angle of each light ray on each optical surface and the azimuth of its plane of incidence are different (with a certain symmetry), which leads to depolarization of the beam during convergence and imaging. Assuming this depolarization effect is negligibly small, the Mueller matrix of the lens micro-unit corresponding to a single pixel in the local coordinate frame can be expressed as [38]:

$$M_{lens}=\frac{\tau^{2}_{p}+\tau^{2}_{s}}{2}\begin{bmatrix} 1 & \eta & 0\\ \eta & 1 & 0\\ 0 & 0 & \sqrt{1-\eta^{2}} \end{bmatrix},$$
where $\eta =\frac {\tau ^{2}_{s}-\tau ^{2}_{p}}{\tau ^{2}_{s}+\tau ^{2}_{p}}$ represents the linear attenuation of the lens. When other error sources are zero, the influence of the attenuation of the lens on the AoE image follows from Eqs. (9) and (10) as:
$$\varphi=\dfrac{1}{2}\arctan\frac{\sqrt{1-\eta^{2}}(mI_{0}+nI_{90}-2I_{45}\cos 2\beta)}{I_{0}(\eta-n)+mI_{90}-I_{45}\sin 2\beta},$$
where $m=\cos 2\beta +\sin 2\beta,n=\cos 2\beta -\sin 2\beta$.
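
A minimal sketch of the lens model of Eq. (16) is shown below; the transmittance values passed to the function are assumptions used only for illustration.

```python
# Minimal sketch of Eq. (16): Mueller matrix (I, Q, U block) of the lens
# micro-unit for one pixel, built from the amplitude transmittances tau_p and
# tau_s of the p- and s-components (values are illustrative assumptions).
import numpy as np

def lens_mueller(tau_p, tau_s):
    eta = (tau_s**2 - tau_p**2) / (tau_s**2 + tau_p**2)  # linear attenuation
    k = 0.5 * (tau_p**2 + tau_s**2)
    return k * np.array([[1.0, eta, 0.0],
                         [eta, 1.0, 0.0],
                         [0.0, 0.0, np.sqrt(1.0 - eta**2)]])
```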

5. Numerical simulation and analysis

Given the solar position in the East-North-Up coordinate frame, the skylight polarization pattern can be established. To realistically simulate the skylight polarization pattern of an outdoor scene at a given time on a given day, we set the solar altitude angle to $5^{\circ }$ and the azimuth angle to $20^{\circ }$, $50^{\circ }$, and $80^{\circ }$ for the simulation. According to the above-mentioned measurement error model, each error source is varied in turn, and the AoE image containing the actual measurement error is obtained through simulation. Due to the randomness and uncertainty of each error source, it is necessary to simulate the distribution of each error source; the parameter settings of these error sources are shown in Table 1. The Monte Carlo method [41] is used to analyze the influence of a single error source and of the combined error sources on the azimuth measurement. To reduce the calculation time, we set the image resolution to 1024 × 1224 pixels and the maximum pixel grayscale to 153 DN. Due to the interference (clouds) and attenuation of the outdoor skylight, the maximum simulated DoLP is set to 0.6. Based on the position of the image center in the pixel coordinate frame, the ideal principal point of the image is (512.5, 612.5) pixels. The specific parameter settings of BPIOS are shown in Table 2.
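
The sketch below outlines one Monte Carlo trial of this analysis: the four error sources are sampled (the ranges shown are assumptions standing in for Table 1), the Rayleigh Stokes field is propagated through the perturbed forward model of Section 4, and the perturbed AoE image is returned; the azimuth-extraction step of Ref. [39] is omitted, and how the principal-point deviation enters the forward model versus the retrieval is a modeling assumption of this sketch.

```python
# Minimal sketch of one Monte Carlo trial (not the authors' code).  I, Q, U are
# the simulated Rayleigh Stokes maps projected onto the image plane; i_grid and
# j_grid are pixel index grids; (u_x, v_y) is the ideal principal point.
import numpy as np

def run_trial(I, Q, U, i_grid, j_grid, u_x, v_y, rng):
    # Sample the four error sources (ranges are assumptions standing in for Table 1)
    du, dv = rng.uniform(-3.0, 3.0, size=2)      # principal-point deviation [px]
    d_th = rng.uniform(-0.2, 0.2)                # installation angle error [deg]
    eta = rng.uniform(0.0, 0.1)                  # lens linear attenuation
    a, noise = 1.0 + rng.normal(0.0, 0.01), 2.0  # CMOS gain deviation / noise [DN]

    # Physical local-meridian angle, computed with the deviated principal point
    beta = np.arctan2(j_grid - (v_y + dv), i_grid - (u_x + du))
    # Lens Mueller matrix of Eq. (16) applied first (overall scale dropped)
    I_l, Q_l, U_l = I + eta * Q, eta * I + Q, np.sqrt(1.0 - eta**2) * U
    G = {}
    for th in (0.0, 45.0, 90.0):
        phi = np.radians(th + d_th) - beta
        I_th = 0.5 * (I_l + np.cos(2 * phi) * Q_l + np.sin(2 * phi) * U_l)
        G[th] = a * I_th + rng.normal(0.0, noise, size=I.shape)  # Eq. (14)
    # Retrieval uses the nominal (uncalibrated) principal point
    alpha = 0.5 * np.arctan2(2 * G[45.0] - G[0.0] - G[90.0], G[0.0] - G[90.0])
    beta_nominal = np.arctan2(j_grid - v_y, i_grid - u_x)
    return alpha - beta_nominal  # perturbed AoE image for azimuth extraction
```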

Table 1. Simulation parameter settings of each error source

Table 2. Parameter settings of BPIOS

Based on the error-source parameter settings in Table 1 and the measurement model, 45 simulation experiments were randomly performed, with the solar azimuth angle changed every 15 runs. Azimuth extraction was performed on the generated AoE images containing the actual measurement error, and the measured azimuth angles were then compared with the ideal azimuth angles. Figure 7 shows the azimuth measurement results affected by a single error source. Figure 8 shows the azimuth measurement error corresponding to each single error source. As shown by the brown solid line in Fig. 8, the coordinate deviation of the principal point has the greatest impact on the overall measurement accuracy, followed by the installation error of the micro-polarization array and the grayscale response inconsistency of CMOS; the attenuation of the lens has the least impact. Table 3 quantitatively shows the statistical results of the azimuth measurement error caused by a single error source.

Fig. 7. Azimuth measurement results affected by a single error source.

Fig. 8. Azimuth measurement error corresponding to a single error source.

Table 3. Simulation results of azimuth measurements

The simulation results for a single error source quantitatively reflect the influence of each individual error source on the azimuth measurement, but it is also necessary to analyse the combined influence of the above four error sources; after all, the azimuth measurement accuracy of the uncalibrated BPIOS is mainly determined by these four error sources. Similarly, the same number of comprehensive simulation experiments were performed, with the solar azimuth angle changed every 15 runs. Figure 9 shows an example result of a comprehensive simulation experiment. It can be seen from Fig. 9(d) that the measured AoE image is not very smooth, especially at the edges; the measured DoLP (see Fig. 9(e)) shows a similar phenomenon. Figure 10 shows the azimuth measurement results affected simultaneously by these four error sources. Through the above Monte Carlo simulation, for the analysed BPIOS, when the coordinate deviation of the principal point is controlled within (3, 3) pixels, the azimuth measurement error is $0.289^{\circ }$ ($\sigma$). When the installation error of the micro-polarization array and the grayscale response inconsistency of CMOS are controlled within $0.2^{\circ }$ and 2.0 pixels, respectively, the azimuth measurement errors are $0.160^{\circ }$ and $0.170^{\circ }$; these errors are of the same order of magnitude and relatively small. When the attenuation of the lens is controlled within (0.1, 0.1), the azimuth measurement error is zero, which indicates that the attenuation of the light intensity has no effect on the AoE image but does affect the DoLP image, because the DoLP is determined by the light intensity. The following conclusions can be drawn from the above results: the coordinate deviation of the principal point, the installation error of the micro-polarization array, and the grayscale response inconsistency of CMOS are the important error sources affecting the azimuth measurement. These simulation results can guide the later calibration of BPIOS.

Fig. 9. A comprehensive simulation experiment result. (a) AoE truth (i.e., calibrated); (b) DoLP truth; (c) Original image; (d) Measured AoE (i.e., uncalibrated); (e) Measured DoLP.

Fig. 10. Azimuth measurement result simultaneously affected by the four error sources.

6. Calibration method

From the simulation and analysis in the above section, it is known that the BPIOS must be calibrated before navigation, especially the principal point and the installation angle error of the micro-polarization array. Based on the error model in Section 4, the parameters to be calibrated include the deviation of the principal point, grayscale response inconsistency of CMOS, the installation angle of the micro-polarization array, and the attenuation of the lens. This section describes the calibration method of BPIOS, which is mainly divided into the following three steps: (1) the calibration of the geometric internal parameters of BPIOS, especially the principal point; (2) the grayscale response calibration; (3) the Mueller matrix calibration of the optical system, which includes the installation angle error of the micro-polarization array and the attenuation of the lens.

6.1 Principal point calibration

Based on the camera perspective projection imaging, the projection of any point k in the field of view can be expressed as follows:

$$z\begin{bmatrix} x^{k}_{\theta} \\ y^{k}_{\theta} \\ 1 \end{bmatrix}=\begin{bmatrix} f_{x} & 0 & u_{x}\\ 0 & f_{y} & v_{y}\\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} R & T \end{bmatrix}\begin{bmatrix} X^{k}_{w} \\ Y^{k}_{w} \\ Z^{k}_{w} \\ 1 \end{bmatrix},(\theta=0^{{\circ}},45^{{\circ}},90^{{\circ}}),$$
where $(x^{k}_{\theta },y^{k}_{\theta })$ represents the pixel coordinate on the imaging plane, namely $(i,j)$, $(u_{x},v_{y})$ is the principal point, $(f_{x},f_{y})$ is the focal length, $R$ and $T$ represent rotation and translation of external parameters, respectively. $(X^{k}_{w},Y^{k}_{w},Z^{k}_{w})$ represents the three-dimensional coordinates of any point $k$ in the world coordinate frame, $z$ is the scale factor.

Since the internal parameters are not calibrated and the principal point is offset, for a feature point in the world coordinate frame, the corresponding point in the image can be estimated by the projection function:

$$z\begin{bmatrix} \tilde{x}^{k}_{\theta} \\ \tilde{y}^{k}_{\theta} \\ 1 \end{bmatrix}=\begin{bmatrix} f_{x} & 0 & \tilde{u}_{x}\\ 0 & f_{y} & \tilde{v}_{y}\\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} R & T \end{bmatrix}\begin{bmatrix} X^{k}_{w} \\ Y^{k}_{w} \\ Z^{k}_{w} \\ 1 \end{bmatrix},(\theta=0^{{\circ}},45^{{\circ}},90^{{\circ}}).$$

Combining Eqs. (18) and (19), the reprojection error to be minimized can be linearized as:

$$\qquad e_{k}=J(x)\delta x+O(\left\| \delta x \right\|),$$
where $J(x)$ is the Jacobian matrix containing the first partial derivatives of the function components, and $O(\cdot )$ denotes the linearization remainder.

The gradient descent iteration is employed to reduce the linearization error and to improve the estimation accuracy, as shown in Fig. 11. First, BPIOS is calibrated using the camera calibration toolbox (http://www.vision.caltech.edu/bouguetj/calib_doc/) and $x_{0}$ is initialized by Eq. (18). The coordinates of the feature points in the world coordinate frame, the convergence threshold $\delta$, and the maximum number of iterations $n_{max}$ are also set. Then, the gradient descent iterations are started. During each iteration, the Jacobian matrix $J(x)$ is computed and the parameter error $\Delta x$ is estimated. If the parameter error $\Delta x$ is less than $\delta$ or the current iteration number exceeds $n_{max}$, the loop is stopped and the calibration results are output; otherwise, the iteration continues.
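
A minimal sketch of this refinement step is given below; it uses scipy.optimize.least_squares as a stand-in for the hand-rolled gradient-descent loop of Fig. 11 and refines only the principal point, with the focal lengths, extrinsics, and point correspondences assumed to come from the initial checkerboard calibration.

```python
# Minimal sketch of the principal-point refinement of Fig. 11, using
# scipy.optimize.least_squares as a stand-in solver for the gradient-descent
# loop.  R, T, fx, fy and the point correspondences are assumed known from the
# initial checkerboard calibration; lens distortion is omitted here.
import numpy as np
from scipy.optimize import least_squares

def project(world_pts, R, T, fx, fy, ux, vy):
    """Pinhole projection of Eq. (18)."""
    cam = world_pts @ R.T + T              # world frame -> camera frame
    x = fx * cam[:, 0] / cam[:, 2] + ux
    y = fy * cam[:, 1] / cam[:, 2] + vy
    return np.column_stack([x, y])

def refine_principal_point(world_pts, image_pts, R, T, fx, fy, uv0):
    def residual(uv):
        return (project(world_pts, R, T, fx, fy, uv[0], uv[1]) - image_pts).ravel()
    return least_squares(residual, uv0).x  # refined (u_x, v_y)
```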

Fig. 11. Flow chart of principal point calibration algorithm.

6.2 Grayscale response calibration

To calibrate the linear parameters $a$ and $b_{\theta }$, we use linear least squares fitting. First, an integrating sphere is used to create a uniform unpolarized light source. Then, grayscale images of BPIOS are collected under this uniform light source. Different image response intensities can be recorded by changing the incident intensity of the integrating sphere light source. Finally, the linear parameters $a$ and $b_{\theta }$ can be obtained. The parameter estimation principle is described as follows:

$$\min\Sigma(I^{'}_{\theta}(a,b_{\theta})-I_{\theta})^{2},(\theta=0^{{\circ}},45^{{\circ}},90^{{\circ}}).$$
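
A minimal sketch of this linear least-squares fit is shown below; the integrating-sphere intensities and mean grayscales are hypothetical values for one channel.

```python
# Minimal sketch of the grayscale-response calibration (Eq. (21)): fit the gain
# a and dark offset b of one channel by linear least squares from pairs of
# integrating-sphere intensities and recorded mean grayscales (values below are
# hypothetical).
import numpy as np

def fit_linear_response(intensity, grayscale):
    A = np.column_stack([intensity, np.ones_like(intensity)])
    (a, b), *_ = np.linalg.lstsq(A, grayscale, rcond=None)
    return a, b

I_in = np.array([10.0, 20.0, 40.0, 80.0, 120.0])     # sphere intensity settings
G_out = np.array([13.1, 23.9, 45.2, 86.8, 129.5])    # mean grayscale per setting
a0, b0 = fit_linear_response(I_in, G_out)
```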

6.3 Mueller matrix calibration of the optical system

The polarized light is focused and incident on the CMOS sensor through the optical system of BPIOS, but the Mueller matrix error of the optical system changes the polarization state of the measured skylight. From the above-mentioned error model of BPIOS and the simulation results, it can also be seen that we need to calibrate the Mueller matrix of the optical system. The specific calibration method is as follows. First, we create four known linearly polarized beams with different polarization states, and then irradiate each of them onto the BPIOS whose principal point has already been calibrated. According to the principle of optical polarimetric imaging, the Stokes vector of the emergent light can be obtained as:

$$S_{out}=M_{(\theta-\beta)}M_{lens}S^{l}_{in},(l=1,2,3,4).$$

Then, since the BPIOS can only perceive the first dimension of the Stokes vector of the emergent light, the received light intensity can be obtained. Finally, $S^{l}_{in}$ is substituted into Eq. (22), the least squares method is used to calculate the installation angle error of the micro-polarization array and the attenuation of the lens, and the actual Mueller matrix of the optical system of BPIOS is obtained as:

$$M_{calib}=q \begin{bmatrix} 2(1+\eta t_{0}) & -4\eta\sin 2\phi_{0} & 2(1+\eta w_{0}) \\ -2(\eta+t_{45}) & 4\sin 2\phi_{45} & -2(\eta+w_{45}) \\ -2bw_{90} & 4b\cos 2\phi_{90} & -2bt_{90} \end{bmatrix},$$
where $q =\frac {1}{(\tau ^{2}_{p}+\tau ^{2}_{s})b^{2}},t_{i}=\sin 2\phi _{i} -\cos 2\phi _{i},w_{i}=\sin 2\phi _{i} +\cos 2\phi _{i},(i=0,45,90), b=\sqrt {1-\eta ^{2}}$.
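
The sketch below illustrates the least-squares estimation behind Eq. (23): the installation angle error $\delta\theta$, the attenuation $\eta$, and an overall gain are fitted from the intensities recorded for four known linearly polarized inputs. The input Stokes vectors, the gain parameter, and the solver choice are assumptions of this sketch rather than the authors' procedure.

```python
# Minimal sketch of the least-squares estimation behind Eqs. (22)-(23): fit the
# installation angle error d_th, the lens attenuation eta and an overall gain k
# from the intensities recorded for four known linearly polarized inputs.
import numpy as np
from scipy.optimize import least_squares

def predicted_intensity(params, theta_deg, beta, S_in):
    d_th, eta, k = params
    phi = np.radians(theta_deg + d_th) - beta
    # First row of M_(theta-beta) * M_lens applied to the input Stokes vector
    row = 0.5 * k * np.array([1.0 + eta * np.cos(2 * phi),
                              eta + np.cos(2 * phi),
                              np.sqrt(1.0 - eta**2) * np.sin(2 * phi)])
    return row @ S_in

def calibrate(S_in_list, I_meas, beta=0.0):
    """I_meas[l, c]: intensity for input l and channel c in (0, 45, 90) deg."""
    thetas = (0.0, 45.0, 90.0)
    def residual(params):
        pred = [[predicted_intensity(params, th, beta, S) for th in thetas]
                for S in S_in_list]
        return (np.asarray(pred) - I_meas).ravel()
    fit = least_squares(residual, x0=[0.0, 0.0, 1.0],
                        bounds=([-5.0, -0.99, 0.1], [5.0, 0.99, 10.0]))
    return fit.x  # (d_th [deg], eta, k)
```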

7. Experiments and results

7.1 Indoor results

The calibration experiment for the principal point was performed, and 20 checkerboard images were acquired in the field of view of BPIOS (based on a Sony IMX250MZR CMOS). Feature points were extracted from these images, residuals were constructed using Eq. (20), and gradient descent was used to minimize the reprojection residuals. The calibration results of the geometric internal parameters of BPIOS are shown in Table 4. Table 4 indicates that there is a coordinate deviation between the actual principal point and the ideal principal point. Meanwhile, the lens exhibits slight radial distortion.

Table 4. Calibration results of the internal parameters of BPIOS

The BPIOS was placed near the light port at the front of the integrating sphere (LCA-00389-300, Labsphere), images were collected at different light intensities, and the linear parameters were solved using Eq. (21). Four linearly polarized beams with known polarization states were then irradiated into the BPIOS, and Eq. (23) was used to calibrate the actual Mueller matrix of the optical system, which is obtained as:

$$\mathbf{M}_{calib} = \begin{bmatrix} 1.027 & 0 & 1.039\\ 1.027 & 0 & -1.039\\ -1.027 & 2 & -1.039 \end{bmatrix}.$$

We used a precision turntable (Zolix RAK200, China) in front of the integrating sphere to rotate a high-contrast linear polarizer (TECHSPEC 86-201, Edmund Optics) in steps of $10^{\circ }$ and performed the azimuth measurement experiments. The specific azimuth test experiments are listed in Table 5, and the experimental layout is shown in Fig. 12. AoE images were collected and the azimuth angle was calculated. The azimuth measurement results before and after the principal point calibration are plotted in Fig. 13. It can be seen from Fig. 13(b) that the measured azimuth angle error is smaller after calibration, which indicates that the azimuth angle after principal point calibration is closer to the reference value. Figure 14 shows an example of the AoE images before and after the principal point calibration (the 33rd AoE image).

Fig. 12. Experimental layout for calibration and testing of BPIOS.

Fig. 13. Azimuth measurement result before and after principal point calibration. (a) measured azimuth angle; (b) measured azimuth angle error.

Fig. 14. AoE images before and after the principal point calibration. (a) Case 1; (b) Case 4.

Table 5. Azimuth test experiments

Figure 15(a) shows the measured azimuth angle before and after the Mueller matrix calibration and grayscale response calibration. As can be seen from Fig. 15(b), after this calibration step, the azimuth measurement error is smaller than before calibration, which indicates that the Mueller matrix of the optical system and the grayscale response of CMOS indeed affect the azimuth measurement. This result is also consistent with the above simulation results. Figure 16 shows an example of the AoE images before and after the Mueller matrix calibration (the 25th AoE image). Table 6 shows the root mean square error (RMSE) and maximum error of the azimuth measurements of the above test experiments. Table 6 indicates that the azimuth accuracy of the calibrated BPIOS in the laboratory is $0.136^{\circ }$, with a maximum error within $0.274^{\circ }$.

Fig. 15. Azimuth measurement result before and after the Mueller matrix and grayscale response calibration. (a) measured azimuth angle; (b) measured azimuth angle error.

Fig. 16. AoE images before and after the Mueller matrix calibration. (a) Case 3; (b) Case 4.

Table 6. Azimuth measurement accuracy of test experiments

7.2 Outdoor results

Since the working environment of BPIOS is an outdoor scene, it is necessary to conduct outdoor performance tests to evaluate the comprehensive measurement accuracy. At 6:42:06 am on June 9, 2019, a vehicle-mounted outdoor performance test was conducted near the Art Museum of Tsinghua University. During the experiment, there were few clouds in the sky and trees on both sides of the road. We used a high-precision inertial navigation system (Sagem Epsilon 20-200, hemispherical resonator gyroscope, France) as the reference benchmark (Heading: $0.250^{\circ }$, Roll and Pitch: $0.125^{\circ }$ (RMS)). The vehicle ran two laps (total length of about 1.9 km), and three paths (about 679 m) were compared. The experimental trajectory is shown in Fig. 17. The BPIOS was used to collect polarized skylight. We used the calibrated parameters to calculate the AoE image, extracted the solar azimuth angle, and then used the astronomical almanac to calculate the heading angle. The heading measurement result is shown in Fig. 18, and the heading measurement error is shown by the solid blue line in Fig. 18(b). We performed outdoor measurement compensation and used polynomial fitting to compensate the heading measurement error. The RMSE of the heading before compensation is $1.313^{\circ }$, and the RMSE after compensation is $0.667^{\circ }$. The outdoor dynamic results indicate that BPIOS can achieve relatively good heading performance without diverging over time. In addition, partial occlusion of the sky or cloudy skies (when the skylight polarization pattern is not ideal) can sometimes interfere with the heading measurement, which calls for better azimuth extraction algorithms to eliminate such interference [39,42]. Therefore, the overall measurement accuracy of BPIOS is determined not only by the system error of BPIOS, but also by whether the skylight polarization pattern conforms to the Rayleigh sky model and whether there is environmental occlusion and interference in the sky.
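
As an illustration of the compensation step, the sketch below fits a low-order polynomial to the heading error against the INS reference and subtracts it; the polynomial order and time base are assumptions, and angle wrapping is ignored.

```python
# Minimal sketch of the outdoor heading compensation: fit a low-order polynomial
# to the heading error against the INS reference and subtract it (polynomial
# order and time base are assumptions; angle wrapping is ignored).
import numpy as np

def compensate_heading(t, heading_bpios, heading_ref, order=3):
    err = heading_bpios - heading_ref
    coeff = np.polyfit(t, err, order)              # fitted error trend
    heading_comp = heading_bpios - np.polyval(coeff, t)
    rmse = np.sqrt(np.mean((heading_comp - heading_ref) ** 2))
    return heading_comp, rmse
```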

Fig. 17. Outdoor dynamic experiment test trajectory. The red solid dot indicates the starting position, and the red arrow indicates the direction of vehicle movement.

Fig. 18. Heading measurement results of outdoor vehicle-mounted experiment. (a) heading angle; (b) heading error.

8. Conclusion

This article presented a measurement error model of BPIOS based on the Stokes vector. Taking comprehensive account of multi-source factors, such as the principal point, the grayscale response of CMOS, the micro-polarization array, and the lens, and using simulated Rayleigh polarized skylight as the incident light, a simulation of the theoretical measurement accuracy was conducted for the first time. The simulation results show that the deviation of the principal point has the greatest influence, followed by the grayscale response inconsistency of CMOS and the installation angle error of the micro-polarization array, and finally the attenuation of the lens. This finding can guide the calibration of BPIOS. A calibration method of BPIOS based on geometric internal parameters and the Mueller matrix of the optical system was proposed. The indoor experimental results show that the azimuth measurement accuracy of the calibrated BPIOS reaches $0.136^{\circ }$. The outdoor dynamic vehicle test shows that the heading measurement accuracy is $0.667^{\circ }$ after error compensation is applied to the heading measurements. In summary, for bio-inspired polarization imaging navigation, the above-mentioned factors are very important, and modeling the measurement accuracy of BPIOS under multi-source factors is of practical engineering significance. As far as we know, this is the first time that the influence of multi-source factors in a complex environment, such as the system error of BPIOS, occlusion and interference in the sky, and the skylight polarization pattern, on the comprehensive measurement performance of BPIOS has been studied. However, it is worth noting that there are still some shortcomings; for example, we did not consider the instantaneous field-of-view error of the focal plane [43–45]. In future work, we will focus on this aspect to further improve the comprehensive measurement performance of BPIOS. Additionally, our approach to modeling the measurement error of a skylight polarimetric imaging sensor is also applicable to underwater polarimetric navigation, underwater imaging detection, and underwater target enhancement [46,47].

Funding

Fundamental Research Funds for the Central Universities (DUT21GF308, DUT20LAB303); Foundation for Innovative Research Groups of the National Natural Science Foundation of China (51621064); National Youth Foundation of China (11904044); National Natural Science Foundation of China (51675076).

Disclosures

The authors declare no conflicts of interest.

Data Availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. C. Brosseau, Fundamentals of polarized light: A statistical optics approach, (John Wiley, 1998).

2. D. Goldstein, Polarized Light, 3rd ed. (CRC Press, 2010).

3. G. Horvath, Polarized light and polarization vision in animal sciences, (Springer, Berlin, Heidelberg, 2014).

4. K. J. Jeffery, ed., The Neurobiology of Spatial Behaviour, chap. “Path integration in insects” (Oxford University Press, 2003), pp. 9–30.

5. K. Cheng, “K. J. Jeffery (ed.): The neurobiology of spatial behaviour,” Animal Cogn. 7(3), 1 (2004). [CrossRef]

6. H. Uwe, “Sky compass orientation in desert locusts–evidence from field and laboratory studies,” Front. Behav. Neurosci. 9, 346 (2015). [CrossRef]  

7. G. Robin, P. N. Fleischmann, G. Kornelia, W. Rüdiger, and R. Wolfgang, “The role of celestial compass information in cataglyphis ants during learning walks and for neuroplasticity in the central complex and mushroom bodies,” Front. Behav. Neurosci. 11, 1–14 (2017). [CrossRef]  

8. T. L. Warren, Y. M. Giraldo, M. H. Dickinson, B. el Jundi, A. Kelber, and B. Webb, “Celestial navigation in Drosophila,” J. Exp. Biol. 222(Suppl_1), Jeb186148 (2019). [CrossRef]  

9. B. Ronacher, “Path integration in a three-dimensional world: the case of desert ants,” J. Comp. Physiol., A 206(3), 379–387 (2020). [CrossRef]  

10. G. Horváth and D. Varjú, Polarized Light in Animal Vision (Springer, Berlin, Heidelberg, 2004).

11. O. Froy, A. L. Gotter, A. L. Casselman, and S. M. Reppert, “Illuminating the circadian clock in monarch butterfly migration,” Science 300(5623), 1303–1305 (2003). [CrossRef]

12. S. M. Reppert, R. J. Gegear, and C. Merlin, “Navigational mechanisms of migrating monarch butterflies,” Trends Neurosci. 33(9), 399–406 (2010). [CrossRef]  

13. S. Greif, I. Borissov, Y. Yovel, and R. A. Holland, “A functional role of the sky’s polarization pattern for orientation in the greater mouse-eared bat,” Nat. Commun. 5(1), 4488 (2014). [CrossRef]  

14. R. N. Patel and T. W. Cronin, “Mantis shrimp navigate home using celestial and idiothetic path integration,” Curr. Biol. 30(11), 1981–1987.e3 (2020). [CrossRef]  

15. S. B. Powell, R. Garnett, J. Marshall, C. Rizk, and V. Gruev, “Bioinspired polarization vision enables underwater geolocalization,” Sci. Adv. 4(4), eaao6841 (2018). [CrossRef]  

16. K. D. Pham, G. Chen, T. Aycock, A. Lompado, T. Wolz, and D. Chenault, “Passive optical sensing of atmospheric polarization for gps denied operations,” Proc. SPIE 9838, 98380Y (2016). [CrossRef]  

17. D. Lambrinos, R. Moller, T. Labhart, R. Pfeifer, and R. Wehner, “A mobile robot employing insect strategies for navigation,” Robotics Auton. Syst. 30(1-2), 39–64 (2000). [CrossRef]  

18. J. Chu, K. Zhao, Z. Qiang, and T. Wang, “Design of a novel polarization sensor for navigation,” in International Conference on Mechatronics and Automation, (2007).

19. J. Chu, K. Zhao, Z. Qiang, and T. Wang, “Construction and performance test of a novel polarization sensor for navigation,” Sens. Actuators, A 148(1), 75–82 (2008). [CrossRef]  

20. T. Ma, X. Hu, L. Zhang, and X. He, “Calibration of a polarization navigation sensor using the nsga-ii algorithm,” Opt. Commun. 376, 107–114 (2016). [CrossRef]  

21. Y. Wang, J. Chu, R. Zhang, J. Li, X. Guo, and M. Lin, “A bio-inspired polarization sensor with high outdoor accuracy and central-symmetry calibration method with integrating sphere,” Sensors 19(16), 3448 (2019). [CrossRef]  

22. J. Chahl and A. Mizutani, “Biomimetic attitude and orientation sensors,” IEEE Sens. J. 12(2), 289–297 (2012). [CrossRef]  

23. J. Dupeyroux, J. Diperi, M. Boyron, S. Viollet, and J. Serres, “A bio-inspired celestial compass applied to an ant-inspired robot for autonomous navigation,” in 2017 European Conference on Mobile Robots (ECMR), (2017).

24. J. Dupeyroux, J. Diperi, M. Boyron, S. Viollet, and J. Serres, “A novel insect-inspired optical compass sensor for a hexapod walking robot,” in 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), (2017).

25. J. Dupeyroux, S. Viollet, and J. R. Serres, “An ant-inspired celestial compass applied to autonomous outdoor robot navigation,” Robotics and Autonomous Systems (2019).

26. W. Sturzl, “A lightweight single-camera polarization compass with covariance estimation,” in IEEE International Conference on Computer Vision, (2017).

27. W. Stürzl and N. Carey, “A fisheye camera system for polarisation detection on uavs,” in Computer Vision – ECCV 2012. Workshops and Demonstrations, A. Fusiello, V. Murino, and R. Cucchiara, eds. (Springer Berlin Heidelberg, Berlin, Heidelberg, 2012), pp. 431–440.

28. Z. Liu, R. Zhang, Z. Wang, G. Le, B. Li, and J. Chu, “Integrated polarization-dependent sensor for autonomous navigation,” J. Micro/Nanolithogr., MEMS, MOEMS 14(1), 015001 (2015). [CrossRef]  

29. C. Fan, X. Hu, J. Lian, L. Zhang, and X. He, “Design and calibration of a novel camera-based bio-inspired polarization navigation sensor,” IEEE Sens. J. 16(10), 3640–3648 (2016). [CrossRef]  

30. G. Han, X. Hu, J. Lian, X. He, L. Zhang, Y. Wang, and F. Dong, “Design and calibration of a novel bio-inspired pixelated polarized light compass,” Sensors 17(11), 2623 (2017). [CrossRef]  

31. H. Ren, J. Yang, X. Liu, P. Huang, and L. Guo, “Sensor modeling and calibration method based on extinction ratio error for camera-based polarization navigation sensor,” Sensors 20(13), 3779 (2020). [CrossRef]  

32. X. Wang, J. Gao, and N. W. Roberts, “Bio-inspired orientation using the polarization pattern in the sky based on artificial neural networks,” Opt. Express 27(10), 13681–13693 (2019). [CrossRef]  

33. H. Lu, K. Zhao, Z. You, and K. Huang, “Angle algorithm based on hough transform for imaging polarization navigation sensor,” Opt. Express 23(6), 7248–7262 (2015). [CrossRef]  

34. J. Tang, Z. Nan, D. Li, F. Wang, and J. Liu, “Novel robust skylight compass method based on full-sky polarization imaging under harsh conditions,” Opt. Express 24(14), 15834 (2016). [CrossRef]  

35. H. Zhao, W. Xu, Z. Ying, X. Li, and J. Bo, “Polarization patterns under different sky conditions and a navigation method based on the symmetry of the aop map of skylight,” Opt. Express 26(22), 28589 (2018). [CrossRef]  

36. W. Zhang, X. Zhang, Y. Cao, H. Liu, and Z. Liu, “Robust sky light polarization detection with an s-wave plate in a light field camera,” Appl. Opt. 55(13), 3518–3525 (2016). [CrossRef]  

37. B. Suhai and G. Horváth, “How well does the rayleigh model describe the e-vector distribution of skylight in clear and cloudy conditions? a full-sky polarimetric study,” J. Opt. Soc. Am. A 21(9), 1669 (2004). [CrossRef]  

38. H. Qian, Q. Ye, B. Meng, J. Hong, Y. Yuan, and S. Li, “The polarized radiometric theoretical error of spaceborne directional polarimetric camera,” Spectrosc. Spectr. Analysis 37, 1558–1565 (2017). [CrossRef]  

39. Z. Wan, K. Zhao, and J. Chu, “Robust azimuth measurement method based on polarimetric imaging for bionic polarization navigation,” IEEE Trans. Instrum. Meas. 69(8), 5684–5692 (2020). [CrossRef]  

40. B. Jahne, “Emva 1288 standard for machine vision – objective specification of vital camera data,” Opt. Photonik 5(1), 53–54 (2010). [CrossRef]  

41. Y. Jiang and Z. Li, “Monte carlo simulation of mueller matrix of randomly rough surfaces,” Opt. Commun. 474, 126113 (2020). [CrossRef]  

42. H. Liang, H. Bai, N. Liu, and X. Sui, “Polarized skylight compass based on a soft-margin support vector machine working in cloudy conditions,” Appl. Opt. 59(5), 1271–1279 (2020). [CrossRef]  

43. E. Gilboa, J. P. Cunningham, A. Nehorai, and V. Gruev, “Image interpolation and denoising for division of focal plane sensors using gaussian processes,” Opt. Express 22(12), 15277–15291 (2014). [CrossRef]  

44. J. Zhang, H. Luo, B. Hui, and Z. Chang, “Image interpolation for division of focal plane polarimeters with intensity correlation,” Opt. Express 24(18), 20799–20807 (2016). [CrossRef]  

45. A. Ahmed, X. Zhao, V. Gruev, J. Zhang, and A. Bermak, “Residual interpolation for division of focal plane polarization image sensors,” Opt. Express 25(9), 10651–10662 (2017). [CrossRef]  

46. M. Dubreuil, P. Delrot, I. Leonard, A. Alfalou, C. Brosseau, and A. Dogariu, “Exploring underwater target detection by imaging polarimetry and correlation techniques,” Appl. Opt. 52(5), 997–1005 (2013). [CrossRef]  

47. K. O. Amer, M. Elbouz, A. Alfalou, C. Brosseau, and J. Hajjami, “Enhancing underwater optical imaging by using a low-pass polarization filter,” Opt. Express 27(2), 621 (2019). [CrossRef]  
