
Self-calibration algorithm for installation angle deviation of bionic polarization compound eyes

Open Access

Abstract

A self-calibration algorithm based on unsupervised optimization for polarizer installation angle deviation is proposed and used in a multi-aperture bionic polarization compound eye system. To simplify calibration operation, under the condition that the calibration-polarized light information is unknown, this algorithm fully exploits redundancy and random polarization information in the scene, and uses a non-convex multi-objective discrete parameter sorting optimization method to achieve angle self-calibration. Compared with ordinary calibration procedures, the algorithm requires less stringent conditions, achieves online calibration and is more accurate. It also can be applied to camera polarization arrays, division-of-focal-plane polarization cameras, and other polarization devices.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

The advantages of insect compound eyes include their small size, wide field of view (FOV), and multiple apertures. Their unique imaging pattern and performance make them an important research topic in bionic compound eye systems [1]. Polarization imaging is an important characteristic of some insects’ compound eyes for navigation. The bionic polarization compound eye system (BPCES) is a type of bionic compound eye with a polarizer in its light path, which imitates the fine physiological polarization structure of the small rhabdoms in the dorsal rim ommatidia of the insect compound eyes [2], making it a multi-aperture (partially) overlapping FOV polarization imaging system with broad prospects in applications such as sky-polarized light navigation [3] and target detection. The FOV-overlapping BPCES uses multiple polarizers installed at various angles to simultaneously image the incident polarized light field and calculate vital information such as the degree of linear polarization (DoLP) and angle of polarization (AoP).

We designed a BPCES that imitates the light-transmission crystal cones and the microvillar polarizer ommatidia (sub-eyes) structure of insect compound eyes. Our BPCES consists of a multi-aperture bionic compound eye camera [4] based on a micro-surface fiber faceplate and a curved polarizing film array covering the front of the ommatidia. The central region of the wide-stitched FOV of the BPCES has a 4-FOV overlap; hence, polarization imaging can be achieved by setting the polarizing films of different apertures at different angles θ.

However, the actual angles θ of the polarizing films are difficult to install accurately in accordance with the ideal direction set requirements, resulting in a calculation error of the polarization information of the incident light. Other polarization imaging systems, such as division-of-focal-plane (DoFP) polarization cameras, camera polarization arrays, and division-of-amplitude polarization cameras, exhibit similar installation deviations. In addition, system drift or looseness during usage can result in angular deviations. Therefore, with the development of polarization imaging technology, system polarizer angle detection and unconventional calibration algorithms have become increasingly important.

Polarization imaging system calibration typically uses two conventional supervised algorithms: the coefficient optimization [5,6] and cosine fitting [7] algorithms. The former requires multiple Stokes datasets to be measured, including the true values of the light intensity IS, DoLP p, and AoP A of the calibration polarized light source, and solves the Mueller matrix M of the system by pseudo-inversion using the matrix linear least-squares (LLS) method. The latter requires the IS and p of the light source to be constant, and the A data must be measured. By fitting the cosine function or locating the extreme value of the light intensity, the actual polarizer angle θ of each aperture is determined.

These two conventional calibration algorithms have stringent calibration conditions and require high sampling precision. In recent years, optimization-based polarization calibration algorithms [8–12] have advanced significantly. By constructing the objective function from the calculated value α and true value A of the incident light’s AoP, the polarization parameters are iterated by gradient descent or search algorithms until the objective function gradually reaches its minimum. Theoretically, the optimization calibration algorithm does not require measuring or fixing the IS and p of the light source and only requires the true value A to be measured, which simplifies the calibration process.

From 2014 to 2017, Fan et al. researched the parameter calibration algorithm of the 3-POL-OP BPCES [8–10], a 4-camera array [11], and a mounted DoFP polarization camera [12], focusing on the installation angle deviation optimization calibration. The incident light’s AoP A was used as the supervision value because of its ease of measurement. Owing to the ill-conditioned characteristics of the loss function [9], the optimization results were typically not well posed. Therefore, in further studies, linear weighted light intensity constraints [8] or DoLP constant constraints [9] were added, and the multi-objective optimization (MOO) method of LLS was used [9] to perform two-objective optimization with the constant DoLP and the difference α − A.

In [12], the angle deviation between the polarizer array and the charge-coupled device (CCD) was calibrated (Fig. 1(a)), and a calibration algorithm based on the Gauss–Newton nonlinear optimization method was developed. The polarization characteristics of a pixel that receives input from multiple polarizer units (M1−4) are regarded as a generalized polarizer angle and extinction ratio, and the standard deviation of the AoP measurement result of the calibrated system is reduced from 0.99° to 0.06°.


Fig. 1. Angle deviations in the DoFP camera and insect compound eyes. (a) Installation deviation between the polarizer array and CCD pixels [12]; (b) Polarizer angle distribution of a cricket [18].


The disadvantages of the aforementioned algorithms [8–12] are as follows: (1) The supervised optimization algorithm requires a precise true AoP A as the supervised value, and the AoP rotation error of the incident light affects the calibration accuracy, necessitating strict calibration conditions. (2) For the MOO algorithms, the simple weighting method [8] is prone to converging on only one of the objective functions. Additionally, Pareto optimization methods (such as NSGA-II [13]) have difficulty comprehensively screening the results according to the importance priority of each objective and can only obtain a Pareto solution set that contains the actual parameters. The polarization optimization calibration problem is essentially a non-convex, non-Pareto MOO problem, and there have been few studies on it.

Considering the prevalence of polarization compound eye structures in insect receptors, we referred to the learning model and optimization algorithm of an organism’s response to its own receptors. Many insects have been identified as having polarization-based navigational abilities [14]. Anatomical experiments have indicated that ommatidia in the dorsal rim of insect compound eyes have rhabdomeric microvilli with orthogonal polarization directions [2].

Therefore, the polarization compound eyes of insects detect polarization information via simultaneous ommatidia [15,16] with various growth angles and encounter the self-calibration problem [17,18] that the growth angles between polarization ommatidia must be calibrated (Fig. 1(b)). Wehner et al. [19] found that interference in the sky polarization pattern can cause a desert ant to navigate incorrectly; however, once it observes the full-sky polarization pattern, it corrects the error. It is generally believed that the desert ant’s rotation before foraging is a process of calibrating its polarized-light compass. Sky polarized light was used as the calibration light by Wang et al. [20] for bio-inspired polarization sensor calibration; multiple calibration algorithms with theoretical angles as the supervised value were applied, achieving high-accuracy rotation measurement of ±0.003° after calibration.

A polarization system not only has deviations at installation; even after the polarization imaging system has been calibrated and its parameters accurately corrected, polarization measurement errors will occur because of small parameter changes during use, such as structural looseness and pose drift. Therefore, online calibration for real-time applications has become a key technology. In outdoor environments or environments without calibration conditions (such as a rotary stage to provide accurate AoP values or to rotate evenly for sampling), real-time calibration is challenging. Considering the desert ant's self-rotation calibration, in which the ant likewise cannot obtain accurate AoP values, this study imitates the self-rotation process and aims to achieve self-calibration of parameter deviations without obtaining the true value.

Similar self-calibration studies have been conducted on the RRDOFP [21] (four linear polarizers + one linear retarder) and Mueller polarimeters [22,23]. Targeting deviations in the Mueller matrix parameters, such as the linear retarder's retardance δ and starting angle θ1, the parameters can be calibrated from the redundancy of the intensity measurements, and this calibration algorithm does not need any extra polarizing reference element.

In this study, inspired by the random rotation and calibration behavior of insects, a self-calibration algorithm based on unsupervised optimization was used to calibrate our multi-aperture BPCES based on a micro-surface fiber faceplate. Using random multi-angle imaging for unknown polarized light or scenes, our algorithm achieves self-calibration of the relative installation angle of the ommatidia’s polarizing film without DoLP or AoP information. Finally, we used the calibration value as the actual polarizer angle to replace the ideal installation angle for scene polarization information calculation, achieved high-accuracy polarization calibration in the central region of the FOV, and significantly reduced the detection error.

The remainder of this paper is organized as follows. Section 2 introduces the structure, image pre-processing method, and polarization imaging process of our BPCES and explains the conventional supervised calibration algorithm for the installation angle. Section 3 discusses the two objective functions of our self-calibration algorithm and the new MOO method. Section 4 presents the verification of the effectiveness of the algorithm using a simulation and a prototype system of the BPCES. Finally, Section 5 concludes the paper.

2. Bionic polarization compound eye system

The insect polarization compound eye is a unique type of compound eye. Because the polarization ommatidia are devoid of screening pigments, their FOV is wider than that of ordinary ommatidia [24], and the median value of the full FOV reaches 40°. Considering that the average included angle of the visual axis of the adjacent ommatidia is only approximately 1.15°, according to the relationship between the FOV of the polarization ommatidia and the included angle of the visual axis, a high FOV overlap ratio is achieved for imaging the sky. Therefore, in this study, we imitated insect polarization compound eyes and produced a BPCES with a high overlapping ratio in the central region of the FOV.

2.1 BPCES with polarizing film

Our BPCES consists of a bionic compound eye camera [4] based on micro-surface fiber faceplate and a curved polarizing film array, which constitute the following structure: 9-aperture polarizing film + 9-miniaturized ommatidia lenses + micro-surface fiber faceplate + large-format complementary metal–oxide semiconductor (CMOS) camera (Fig. 2(a)).


Fig. 2. Bionic compound eye camera. (a) System structure; (b) Inclined micro-surface of the fiber faceplate; (c) System prototype; (d) Overlapping FOV.


The focal distance of the ommatidia was 9 mm, the FOV was 40°, and the image from every visual axis was projected onto the inclined micro-surface of the fiber faceplate. The top surface was a square area with a side length of 6.14 mm. The angle between the top and side surfaces was 20°, and that between the top and corner surfaces was 27°. A large-format CMOS of 5120 × 5120 pixels (side length of 23.04 mm) was directly coupled with the bottom surface of the fiber faceplate by a pressing structure (Fig. 2(b)).

The integrated structure of the bionic compound eye experimental system had a partially overlapping FOV with nine apertures (Fig. 2(c)). Figure 2(d) shows the overlapping status of the ommatidia. Visual axis divergence constitutes wide-stitched FOV imaging. The FOV overlapping ratio between the side and top ommatidia was approximately 50%, and that between the corner and top ommatidia was approximately 25%; thus, the FOVs of every four ommatidia partially overlapped in the central region of the stitched full FOV.

Figure 3(a) shows the output of the bionic compound eye camera prototype. KAZE feature point matching is used, the matches are screened via the RANSAC algorithm, and the angle convergence method [25] (Fig. 3(b)) is applied to obtain accurate sub-pixel-level registration. Figure 3(c) shows a wide-stitched FOV image reconstructed by nine ommatidia, and Fig. 3(d) shows the aperture overlapping numbers in the FOV.


Fig. 3. Bionic compound eye system output. (a) Output image; (b) Feature point matching; (c) Stitched image; (d) Aperture overlapping numbers.


As shown in Fig. 4(a), Edmund Optics’ linear polarizing film XP42 [26] (transmission range of 400–700 nm, extinction ratio of 9000:1, thickness of 0.18 mm) was used. The polarizing film array was cut in the directions of 0° and 45° (Fig. 4(b)) and placed in front of the compound eye camera (Fig. 4(d)). The CMOS vertical axis was set as the reference direction, and the ideal included angle θ between the apertures’ polarizer direction and the CMOS vertical axis was 0°, 45°, 90°, and 135° (Fig. 4(c)). Thus, a 4-aperture linear polarization imaging pattern was formed in the central region of the FOV, and a wide FOV ordinary imaging pattern was formed in the peripheral region. The remainder of this paper mainly discusses the characteristics of the central polarization imaging.


Fig. 4. Polarizing film array. (a) Linear polarizing film; (b) Cutting shape; (c) Direction arrangement; (d) Array mounting.


2.2 Polarization calculation and image pre-processing

We assume that the Stokes vector of the incident polarized light is S, its light intensity is IS, the angle between the polarization direction and reference direction (AoP) is α, and the DoLP is p. Then, the Stokes vector [27] of polarized light can be expressed as

$${\mathbf{S}} = \left[ {\begin{array}{c} {{I_{\mathbf{S}}}}\\ Q\\ U \end{array}} \right] = \left[ {\begin{array}{c} 1\\ {p\cos 2\alpha }\\ {p\sin 2\alpha } \end{array}} \right]{I_{\mathbf{S}}}$$
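As a quick numerical illustration of Eq. (1), the linear Stokes vector can be assembled from IS, p, and α in a few lines (a minimal Python sketch; the function name and sample values are ours, not from the paper):

```python
import numpy as np

def stokes_vector(I_s, p, alpha):
    """Linear Stokes vector of Eq. (1): S = I_s * [1, p cos 2a, p sin 2a]^T.
    alpha (AoP) is in radians."""
    return I_s * np.array([1.0, p * np.cos(2 * alpha), p * np.sin(2 * alpha)])

# Example: unit intensity, DoLP 0.5, AoP 30 degrees
S = stokes_vector(1.0, 0.5, np.deg2rad(30.0))
```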

Suppose that the Mueller matrix of the nth aperture of the BPCES is Mn, where the maximum transmission of the polarizing film is τ, the extinction ratio is er, and the angle between the polarizer axis and the reference direction is θ. Without considering the circular polarization information, the 3 × 3 linear Mueller matrix of the polarizer [28] is

$${{\mathbf{M}}_\textrm{n}} = \frac{1}{2}\tau \left[ {\begin{array}{ccc} {(er + 1)}&{(er - 1)\,\cos\,2\theta }&{(er - 1)\sin\,2\theta }\\ {(er - 1)\,\cos\,2\theta }&{(er + 1)\cos{^2}2\theta + 2\sqrt {er} \sin{^2}2\theta }&{{{(\sqrt {er} - 1)}^2}\,\cos\,2\theta \sin\,2\theta }\\ {(er - 1)\sin\,2\theta }&{{{(\sqrt {er} - 1)}^2}\,\cos\,2\theta \sin\,2\theta }&{(er + 1)\sin{^2}2\theta + 2\sqrt {er} \cos{^2}2\theta } \end{array}} \right]$$
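Equation (2) can be transcribed directly into code (an illustrative Python sketch; `polarizer_mueller` is our name, not from the paper):

```python
import numpy as np

def polarizer_mueller(tau, er, theta):
    """3x3 linear Mueller matrix of a partial polarizer, transcribed from Eq. (2).
    tau: maximum transmission, er: extinction ratio, theta: polarizer angle (rad)."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    r = np.sqrt(er)
    return 0.5 * tau * np.array([
        [er + 1,       (er - 1) * c,                   (er - 1) * s],
        [(er - 1) * c, (er + 1) * c**2 + 2 * r * s**2, (r - 1)**2 * c * s],
        [(er - 1) * s, (r - 1)**2 * c * s,             (er + 1) * s**2 + 2 * r * c**2],
    ])
```

A quick sanity check: for er = 1 (no polarizing effect) the matrix reduces to τ times the identity.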

Considering the non-polarization parameters of a single pixel, the nonuniformity effect caused by the lens parameter difference of each aperture and the transmitting efficiency of the micro-surface fiber faceplate is defined as the coefficient gain K, and the dark current is defined as the bias b. Then, the incident polarized light S passes through the nth-aperture polarizer Mn of the BPCES, and the result Sn is given as

$${{\mathbf{S}}_\textrm{n} } = {[{I_\textrm{n}},{Q_\textrm{n}},{U_\textrm{n} }]^T} = K{{\mathbf {M}}_\textrm{n}}{\mathbf{S}} + b$$

The light-intensity term In is given as

$${I_\textrm{n}} = 0.5\tau {I_{\mathbf{S}}}[{er + 1 + (er - 1)p\cos 2(\alpha - \theta )} ]$$

Substituting the coefficients k and E yields:

$${I_\textrm{n}} = k[{1 + Ep\cos 2(\alpha - \theta )} ]{I_{\mathbf{S}}} + b,\textrm{ where }E = (er - 1)/(er + 1),\textrm{ }k = 0.5K\tau (er + 1)$$

Among them, the non-polarization terms, i.e., coefficient k, and bias b, were calibrated in advance with nonpolarized light. The light intensity In was corrected using the calibrated parameters k and b, and the result was defined as the uniform light intensity in:

$${i_\textrm{n} } = ({I_\textrm{n} } - b)/k$$
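The single-aperture response and its correction, Eqs. (4)–(6), can be sketched as follows (illustrative Python; the function names and sample parameters are assumptions of ours, and the p term carries through from Eq. (4)):

```python
import numpy as np

def raw_intensity(I_s, p, alpha, theta, k, E, b):
    """Eq. (5): single-aperture response I_n = k[1 + E p cos 2(alpha - theta)] I_s + b."""
    return k * (1.0 + E * p * np.cos(2.0 * (alpha - theta))) * I_s + b

def uniform_intensity(I_n, k, b):
    """Eq. (6): remove the pre-calibrated gain k and bias b."""
    return (I_n - b) / k
```

After correction, the uniform intensity depends only on the polarization state and the polarizer angle θ.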

The incident polarized light S passes through the polarizers with various installation angles θ and then projects its images onto each of the four apertures’ pixels. If each aperture uses the same polarizing film with polarization coefficient E, the imaging process can be expressed in matrix form as

$${\mathbf{i}} = \left[ {\begin{array}{c} {{i_1}}\\ {{i_2}}\\ {{i_3}}\\ {{i_4}} \end{array}} \right] = \left[ {\begin{array}{ccc} 1&{E\,\cos\,2{\theta_1}}&{E\sin 2{\theta_1}}\\ 1&{E\,\cos\,2{\theta_2}}&{E\sin 2{\theta_2}}\\ 1&{E\,\cos\,2{\theta_3}}&{E\sin 2{\theta_3}}\\ 1&{E\,\cos\,2{\theta_4}}&{E\sin 2{\theta_4}} \end{array}} \right]\left[ {\begin{array}{c} {{I_{\mathbf{S}}}}\\ {p{I_{\mathbf{S}}}\cos 2\alpha }\\ {p{I_{\mathbf{S}}}\sin 2\alpha } \end{array}} \right]\textrm{ = }{{\mathbf{M}}_{1\sim 4}}{\mathbf{S}}$$

The 3-aperture polarization system can invert M1∼3 to solve the Stokes vector S of the incident light, and a 4-aperture (or larger) polarization system can use the least-squares method to reduce the error caused by the sampling noise of the light intensity i:

$$\hat {\mathbf{S}} = {\mathbf{M}}_{\mathrm{1\sim n}}^\dagger {\mathbf{i}}$$
We can obtain the AoP α and DoLP p from $\hat{\mathbf{S}}$ using Eq. (1). Accurate calibration of the polarizer installation angle θ is the key to polarization measurement, as indicated by Eq. (7).
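The forward model of Eq. (7) and the least-squares inversion of Eq. (8) might be simulated as follows (a hedged sketch; the ideal angles, E value, and test polarization state are illustrative assumptions):

```python
import numpy as np

def system_matrix(thetas, E):
    """Measurement matrix M_{1~n} of Eq. (7); one row per aperture."""
    t = np.asarray(thetas)
    return np.column_stack([np.ones_like(t), E * np.cos(2 * t), E * np.sin(2 * t)])

def recover_stokes(i, thetas, E):
    """Eq. (8): least-squares Stokes estimate via the pseudo-inverse."""
    return np.linalg.pinv(system_matrix(thetas, E)) @ i

# Illustrative round trip: I_s = 1, p = 0.3, AoP = 20 deg, ideal angles, E = 0.98
thetas = np.deg2rad([0.0, 45.0, 90.0, 135.0])
S = np.array([1.0, 0.3 * np.cos(np.deg2rad(40.0)), 0.3 * np.sin(np.deg2rad(40.0))])
i = system_matrix(thetas, 0.98) @ S
S_hat = recover_stokes(i, thetas, 0.98)
```

The AoP and DoLP then follow from Eq. (1): α = ½ arctan(U/Q) and p = √(Q² + U²)/I.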

2.3 Supervised polarization calibration algorithm

Conventional supervised iterative optimization algorithms [8–12] need to know a series of true values of the AoP A of the incident light and calculate the corresponding α using the i of each aperture. Then, the algorithms conduct supervised optimization on the difference α − A, iterate the installation angle θ to make the calculated value α of each sample gradually approach A, and finally accurately calibrate the installation angle.

First, the resolved S of the system output is used to obtain the calculated value α:

$$\alpha = \frac{1}{2}\arctan (\frac{U}{Q})$$

Taking the 4-aperture system as an example, α can be written as a function of the installation angle parameters θ1−4 and the light intensity samples i1−4:

$$\alpha = f({i_1},{i_2},{i_3},{i_4}|{\theta _1},{\theta _2},{\theta _3},{\theta _4})$$

The θ values are continuously adjusted to make α approach the true value A as the supervision value (benchmark). We change the AoP A w times, collect the sample sequence of light intensity response i and A, calculate the α of each sample, and construct the loss function:

$$loss = \sum\limits_{m = 1}^w {{{({\alpha _m} - {A_m})}^2}}$$

The gradient descent iteration method is typically used for optimization. In one of these iterative cycles, the installation angle θ of the nth-aperture is updated synchronously until the loss function converges and the optimal value θ is calibrated with high precision.

$${\theta _{n,\textrm{new}}} = {\theta _{n,\textrm{old}}} - \eta \frac{{\partial loss}}{{\partial {\theta _n}}}$$
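Putting Eqs. (9)–(12) together, the supervised calibration loop might look like the sketch below; a forward-difference numerical gradient stands in for the analytic one, and the learning rate and iteration count are illustrative assumptions of ours:

```python
import numpy as np

def calc_aop(i, thetas, E=1.0):
    """Eqs. (8)-(10): AoP computed from aperture intensities and polarizer angles."""
    M = np.column_stack([np.ones(len(thetas)), E * np.cos(2 * thetas), E * np.sin(2 * thetas)])
    S = np.linalg.pinv(M) @ i
    return 0.5 * np.arctan2(S[2], S[1])

def aop_loss(thetas, intensities, A_true):
    """Eq. (11): sum of squared AoP errors over the w samples."""
    return sum((calc_aop(i, thetas) - A) ** 2 for i, A in zip(intensities, A_true))

def supervised_calibrate(intensities, A_true, thetas0, lr=0.01, n_iter=200, eps=1e-6):
    """Eq. (12): synchronous gradient-descent update of every theta_n
    (numerical gradient; all samples must stay within one continuous AoP branch)."""
    thetas = np.array(thetas0, dtype=float)
    for _ in range(n_iter):
        base = aop_loss(thetas, intensities, A_true)
        grad = np.zeros_like(thetas)
        for n in range(len(thetas)):
            perturbed = thetas.copy()
            perturbed[n] += eps
            grad[n] = (aop_loss(perturbed, intensities, A_true) - base) / eps
        thetas -= lr * grad
    return thetas
```

Note the caveat the text raises below Eq. (12): if any true value A falls outside the continuous domain of α, the loss becomes discontinuous and this loop can fail.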

There are defects in the calibration algorithm that uses the true value for supervision: 1) The AoP of the incident light, as the true value A for supervision, is generated by a polarizer, which may have significant axis-mark angle deviations. Our experiments revealed that the polarization axis mark of a specific wire grid polarizer can deviate by 5°. 2) The true value A must be known, and the installation angle deviation must be as small as possible. 3) The correspondence between the true value A and the calculated value α is not one-to-one but periodic with a bias angle. The supervised algorithm requires manual adjustment of A to the continuous domain of α according to Eq. (9); otherwise, the loss function will be discontinuous, which negatively affects the gradient optimization result. 4) The supervised optimization function for polarization calibration has a severely ill-conditioned characteristic [9], and a small-angle rotation deviation corresponds to a large deviation in the optimization results. Therefore, developing an unsupervised bionic algorithm to calibrate the installation angle of each aperture under the condition of unknown calibration light information is necessary.

3. Polarization self-calibration algorithm with unsupervised optimization

The self-calibration concept imitates the natural calibration behavior of insect biological polarization detection systems. Insects use naturally polarized light (such as sky light) as a calibration source and randomly rotate themselves many times [19] while polarization-imaging the sky, thereby self-calibrating their polarization compound eyes. The information redundancy of a polarization system with more than three apertures and the random yet relatively steady distribution of the natural polarized light’s Stokes vector correspond to the two optimization goals for obtaining the optimal installation angles θ: single-aperture redundancy among the different-θ apertures and the steadiness of the DoLP across sample sequences.

This section describes the self-calibration algorithm for the 4-aperture polarization system; systems with more than four apertures follow the same principle.

3.1 Single-aperture redundancy optimization objective

The single-aperture redundancy optimization objective utilizes the redundancy characteristics of the 4-aperture polarization measurement; that is, if the installation angles θ = [θ1 θ2 θ3 θ4] are accurate, any three of the system’s four apertures should output the same Stokes result for the polarized light. If a difference exists, it is used as the objective-function loss to iteratively optimize the installation angle deviation.

Taking three apertures of the 4-aperture polarization imaging system (without loss of generality, apertures 1, 2, and 3 are selected), the polarization process is

$${\mathbf{i}} = \left[ {\begin{array}{c} {{i_1}}\\ {{i_2}}\\ {{i_3}} \end{array}} \right] = \left[ {\begin{array}{ccc} 1&{E\,\cos\,2{\theta_1}}&{E\sin 2{\theta_1}}\\ 1&{E\,\cos\,2{\theta_2}}&{E\sin 2{\theta_2}}\\ 1&{E\,\cos\,2{\theta_3}}&{E\sin 2{\theta_3}} \end{array}} \right]\left[ {\begin{array}{c} {{I_{\mathbf{S}}}}\\ {p{I_{\mathbf{S}}}\cos 2\alpha }\\ {p{I_{\mathbf{S}}}\sin 2\alpha } \end{array}} \right] = {{\mathbf{M}}_{1\sim 3}}{\mathbf{S}}$$

The Stokes vector S of the polarized light can be obtained as

$$\widehat {\mathbf{S}} = {\mathbf{M}}_{1\sim 3}^{ - 1}{\mathbf{i}} = \left[ {\begin{array}{c} {\widehat {{I_{\mathbf{S}}}}}\\ {\widehat Q}\\ {\widehat U} \end{array}} \right] = \left[ {\begin{array}{c} {\frac{{\sin \textrm{2}({\theta_3} - {\theta_2}){i_1} + \sin \textrm{2}({\theta_1} - {\theta_3}){i_2} + \sin \textrm{2}({\theta_2} - {\theta_1}){i_3}}}{{4\sin ({\theta_3} - {\theta_1})\sin ({\theta_2} - {\theta_3})\sin ({\theta_1} - {\theta_2})}}}\\ {\frac{{(\sin \textrm{2}{\theta_2} - \sin \textrm{2}{\theta_3}){i_1} + (\sin \textrm{2}{\theta_3} - \sin \textrm{2}{\theta_1}){i_2} + (\sin \textrm{2}{\theta_1} - \sin \textrm{2}{\theta_2}){i_3}}}{{4E\sin ({\theta_3} - {\theta_1})\sin ({\theta_2} - {\theta_3})\sin ({\theta_1} - {\theta_2})}}}\\ {\frac{{(\cos\textrm{2}{\theta_3} - \cos\textrm{2}{\theta_2}){i_1} + (\cos\textrm{2}{\theta_1} - \cos\textrm{2}{\theta_3}){i_2} + (\cos\textrm{2}{\theta_2} - \cos\textrm{2}{\theta_1}){i_3}}}{{4E\sin ({\theta_3} - {\theta_1})\sin ({\theta_2} - {\theta_3})\sin ({\theta_1} - {\theta_2})}}} \end{array}} \right]$$

Because $\hat{\mathbf{S}}$ carries the propagated installation angle deviations, we simulate the process of the 4th-aperture polarizer acting on the deviated estimate $\hat{\mathbf{S}}$ and obtain the calculated value ie of the 4th-aperture response, which propagates all four installation angle deviations:

$${i_\textrm{e} }(\mathbf{\theta }) = {{\mathbf{M}}_4}\widehat {\mathbf{S}} = {{\mathbf{M}}_4}{\mathbf{M}}_{1\sim 3}^{ - 1}{\mathbf{i}} = \frac{{{i_1}\textrm{S}{\textrm{N}_{324}} + {i_2}\textrm{S}{\textrm{N}_{134}} + {i_3}\textrm{S}{\textrm{N}_{214}}}}{{\textrm{S}{\textrm{N}_{321}}}}$$

Among them, $\textrm{SN}_{abc} = {s_{ab}} + {s_{bc}} + {s_{ca}}$ and ${s_{ab}} = \sin 2({\theta _a} - {\theta _b})$.

If the θ used in Eq. (15) is the system’s actual value, ie should be equal to the measurement value i4. However, there is a difference between ie and i4 due to the installation deviation of θ. The error function is defined as the sum of the squares of the ie − i4 deviations over all w samples.

$$loss = \sum\limits_{m = 1}^w {{{({i_{\textrm{e}m}} - {i_{4m}})}^2}}$$

The installation angle of the 4th aperture was set as the reference angle 0°, and the relative installation angle deviation o = [o1, o2, o3] was set as the parameter to be optimized.

$${\theta _4} = 0,\textrm{ }{\theta _1} = - 135^\circ + {o_1},\textrm{ }{\theta _2} = - 90^\circ + {o_2},\textrm{ }{\theta _3} = - 45^\circ + {o_3}$$

Then, the installation angle calibration problem was converted into an optimization problem of locating the loss function’s minimum point of the three-dimensional (3D) space (o1, o2, o3), with the loss function of the single-aperture redundancy being the first optimization objective:

$$\textrm{min }\{{loss({\mathbf{o}})} \},\textrm{ }{\mathbf{o}} \in {{\mathbf{R}}^3}$$
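Equations (15)–(18) translate into the following sketch of the first objective (illustrative Python with zero-based aperture indices; `redundancy_loss` is our name, and the samples are assumed to be gain- and bias-corrected uniform intensities):

```python
import numpy as np

def _sn(a, b, c, th):
    """SN_abc = s_ab + s_bc + s_ca, with s_ab = sin 2(theta_a - theta_b)."""
    s = lambda x, y: np.sin(2.0 * (th[x] - th[y]))
    return s(a, b) + s(b, c) + s(c, a)

def redundancy_loss(o, samples):
    """Eqs. (16)-(18): sum over samples of (i_e - i_4)^2, angles from Eq. (17)."""
    th = np.array([np.deg2rad(-135.0) + o[0],
                   np.deg2rad(-90.0) + o[1],
                   np.deg2rad(-45.0) + o[2],
                   0.0])
    total = 0.0
    for i1, i2, i3, i4 in samples:
        # Eq. (15): predicted 4th-aperture response from apertures 1-3;
        # (2,1,3) etc. are the zero-based versions of SN_324, SN_134, SN_214, SN_321
        i_e = (i1 * _sn(2, 1, 3, th) + i2 * _sn(0, 2, 3, th)
               + i3 * _sn(1, 0, 3, th)) / _sn(2, 1, 0, th)
        total += (i_e - i4) ** 2
    return total
```

At the actual deviation o the predicted ie reproduces i4 exactly (up to noise), so the loss vanishes there.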

3.2 DoLP steadiness optimization objective

According to Appendix A, the loss is not a convex function and has multiple minimum points; thus, an additional objective function is needed for further optimization to find the unique actual-value point ot. By solving Eq. (31), we obtain

$${p_\textrm{e}}(\alpha ) = \frac{{p(\alpha )}}{{\cos 2({o_{n\textrm{t}}} - {o_{n\textrm{e}}}) + p(\alpha )E\sin 2({o_{n\textrm{t}}} - {o_{n\textrm{e}}})\sin 2\phi }}$$

If and only if ont = one does pe(α) = p(α), in which case the calculated DoLP does not fluctuate as the ϕ sample changes. Conversely, if the DoLP fluctuates among the sample sequences, the minimum point oe is not the actual value. As shown in Fig. 5(a), whether the DoLP of the calibration light is constant (upper) or random (lower), we calculate the pe(α) distribution; every red curve satisfies the minimized loss, but only the curve superposed with the minimum fluctuation (green) indicates the actual value ot according to Eq. (19). By evaluating the variance of the calculated DoLP, a new objective function loss2 is constructed:

$$los{s_2} = \sum\limits_{m = 1}^w {{{({p_\textrm{e}}({\alpha _m}) - \overline p )}^2}} ,\textrm{ }\overline p = \frac{1}{w}\sum\limits_{m = 1}^w {{p_\textrm{e}}({\alpha _m})}$$
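The second objective, Eq. (20), can be sketched in the same style (illustrative Python; the Stokes estimate is recovered from apertures 1–3 via Eq. (14), and the fluctuation of the computed DoLP across the w samples is penalized):

```python
import numpy as np

def dolp_steadiness_loss(o, samples, E=1.0):
    """Eq. (20): sum of squared deviations of the computed DoLP from its mean."""
    th = np.deg2rad(np.array([-135.0, -90.0, -45.0])) + np.asarray(o)
    M = np.column_stack([np.ones(3), E * np.cos(2 * th), E * np.sin(2 * th)])
    Minv = np.linalg.inv(M)                    # Eq. (14) in matrix form
    p_e = []
    for i1, i2, i3, _i4 in samples:
        S = Minv @ np.array([i1, i2, i3])
        p_e.append(np.hypot(S[1], S[2]) / S[0])  # DoLP of the recovered Stokes vector
    p_e = np.asarray(p_e)
    return np.sum((p_e - p_e.mean()) ** 2)
```

With the true offsets, the recovered DoLP is constant across samples of constant-p light, so loss2 vanishes; wrong offsets make it fluctuate with the sample AoP.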


Fig. 5. (a) Calculated pe(α) (Red) and p(α) (Green) distribution; (b) Unsupervised optimization algorithm flow.


The parameter o to be iterated should drive the two objective functions, i.e., single-aperture redundancy and DoLP steadiness, to their minima simultaneously, which constitutes a MOO problem.

$$\textrm{min }\{{loss({\mathbf{o}})} \},\textrm{min }\{{los{s_\textrm{2}}({\mathbf{o}})} \},\textrm{ }{\mathbf{o}} \in {{\mathbf{R}}^3}$$

The self-optimization algorithm applied to the 4-aperture BPCES involves a 3D optimization space. The 4th aperture is defined as the reference direction and the redundant verification aperture. The angle self-calibration algorithm flow is shown in Fig. 5(b).

In addition, although the polarizing film of each BPCES aperture has the same nominal extinction ratio, the ommatidia lens and micro-surface fiber faceplate may not be polarization-maintaining, and the performance of the polarizing film may vary during use, decreasing the effective extinction ratio er of each aperture and making it nonuniform in the broad sense. Assuming the extinction ratios of the four apertures are er1–er4, the polarization coefficients E1–E4 can be obtained from Eq. (5). Following the process of Eqs. (13)–(15), the calculated value ie considering the nonuniform extinction ratio is

$${i_\textrm{e}} = {{\mathbf{M}}_4}\widehat {\mathbf{S}} = {{\mathbf{M}}_4}{{\mathbf{M}}^{ - 1}}{\mathbf{i}} = \frac{{{i_1}\textrm{Z}{\textrm{N}_{324}} + {i_2}\textrm{Z}{\textrm{N}_{134}} + {i_3}\textrm{Z}{\textrm{N}_{214}}}}{{\textrm{Z}{\textrm{N}_{321}}}}$$

Among them, $\textrm{ZN}_{abc} = {E_a}{E_b}{s_{ab}} + {E_b}{E_c}{s_{bc}} + {E_a}{E_c}{s_{ca}}$ and ${s_{ab}} = \sin 2({\theta _a} - {\theta _b})$.

It can be seen that if every E is multiplied by the same factor, ie remains unchanged; that is, the E values are not independent. To represent the influence of E, the redundant aperture is used as the reference polarization coefficient, i.e., E4 = 1, and the ratios En/E4 of the other polarization coefficients are obtained. Then, the optimal values of o1–o3 under E1–E3 can be found using the same optimization algorithm, and the full Mueller matrix is calibrated.

3.3 Discrete parameter sorting method for non-convex MOO problem

For the aforementioned nonlinear MOO problem, a linear weighting method is typically used to convert it into a single-objective optimization problem. In the ideal case of no sampling error (noise and analog-to-digital converter error) and a constant calibration-light DoLP, the minima of the two non-convex objective functions can both reach 0. A weighted sum can be used to construct a single-objective function (where η is a positive constant).

$$Loss = \textrm{ }loss + \eta \cdot los{s_2}$$

The gradient method iteratively reduces the Loss toward 0 so that the two objective functions are optimized simultaneously and the optimal parameter o is obtained. The minimum points are distributed on two curved lines in the o space of R3, which can be sketched as the two-dimensional (2D) surfaces fA(o1,o2) and fB(o1,o2) in Fig. 6, where each objective function has infinitely many minimum points, all located along the ridges of the valley bottoms. In the solution space R2, although the two objective functions are non-convex, the common solution can drive each function’s minimum to 0.

$${\mathbf{o}} = ({o_1},{o_2}):los{s_{\min }} = \min {f_A}({{o_1},{o_2}} ),los{s_{2\min }} = \min {f_B}({{o_1},{o_2}} ),({{o_1},{o_2}} )\in {{\mathbf{R}}^2}$$


Fig. 6. Distributions of the two objective functions without sampling error and DoLP inconsistency.


However, in practical application scenarios, owing to the DoLP p inconsistency of the natural calibration polarized light and the sampling error of i, the minima of the two objective functions exceed zero, and multiple discontinuous local minimum points appear beneath the ridges (Fig. 7). As a result, the optimal value o depends on the subjective a priori weight η, and a single optimal value o for the two non-convex objectives cannot be obtained; only a Pareto solution set containing a series of parameters o under different η values can be obtained.


Fig. 7. Distributions of the two objective functions with sampling error and DoLP inconsistency.


In addition, owing to sampling errors (noise), gradient-descent iteration cannot be used to obtain the common minimum point of the two objective functions without falling into the local minima beneath the ridges. According to the characteristics of the non-convex MOO calibration function, the optimization method should satisfy the following requirements:

  • 1) It should not fall into local minima or into solutions where only one objective function is minimized; that is, an optimization point located near the ridges of both objective functions is better than one located at the minimum of a single objective function but far from the ridge of the other.
  • 2) The final result should not depend on the a priori weight η.
  • 3) The polarized-light DoLP changes in real scenarios, making loss2 less reliable than loss; thus, the optimization of loss should be prioritized.

According to the aforementioned requirements, we propose a discrete parameter sorting method for non-convex MOO. The parameter o is discretely subdivided in Rn, and the function values at each subdivision point are evaluated. According to the order of importance of the objective functions (loss > loss2), the subdivision points are sorted by strong Pareto domination, and the best point in the sorted set is selected. Then, with this best point as the center, a new round of re-subdivision and optimization at a smaller scale is started, and the optimum is finally obtained. This method does not depend on the gradient of the objective functions and can account for the minimum distributions of multiple non-convex objectives, avoiding local minima and solutions where only one objective function is optimized.

Taking the R2 space as an example, we set the initial parameter coordinates o0 to (0, 0). We define a square with side length L centered on o0 and subdivide each dimension into P grids. The distance between adjacent points is L/P, generating a point set P containing P2 discrete points in the 2D space. The fA and fB function values (loss and loss2) of all points in P are calculated and sorted in ascending order of loss; the 10% of points with the smallest loss values (in the example of Fig. 8(a), 25 white points) are taken as the set P1, which is then sorted in ascending order of loss2. Its minimum point obest (the black point in Fig. 8(b)) is taken as the point that best meets the minimum requirements of both objective functions in the current P.


Fig. 8. 2D schematic of our discrete parameter sorting method for non-convex MOO (redder color indicates a smaller value). Panels (a) and (b) show one cycle of the process in (c).


We then take the current minimum point obest as the new subdivision center and, with re-subdivision ratio d, use the square region with side length dL as the definition domain. We re-subdivide to form the point set P and screen the points that strongly Pareto-dominate obest as P0. We sort the points of P0 in ascending order of loss and loss2 and update obest iteratively. During this process, the side length of the square decreases continuously. When obest converges, or the side length reaches the precision requirement of the effective digits, the angle-deviation calibration is complete (as shown in Fig. 8(c), the iterative process is re-subdivided five times at a ratio of 0.5).

The method extends to a discrete parameter sorting method for non-convex MOO in Rn; its flow is presented below. The ≺ symbol indicates strong Pareto domination, i.e., o is smaller than obest in every function value, and the function card gives the number of elements of a point set. ft(o1) ≤ … ≤ ft(om) indicates the elements sorted in ascending order of the function value ft, from which the m elements with the smallest values are chosen.

oe-31-16-25446-i001
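As an illustrative sketch (not the authors' released code), the two-round sorting flow can be written in Python. The function `discrete_sort_moo` and the toy objectives `fA`/`fB` are hypothetical stand-ins for the paper's loss and loss2:

```python
import numpy as np

def discrete_sort_moo(loss_fn, loss2_fn, dim=2, center=None, L=20.0, P=10,
                      keep=0.10, d=0.5, iters=10):
    """Discrete parameter sorting for a two-objective non-convex problem.

    Each iteration subdivides the cube of side L around the current center
    into P points per dimension, ranks the points by the primary objective,
    re-ranks the best `keep` fraction by the secondary objective, and takes
    the winner as the next center with the side length shrunk by ratio d.
    """
    o_best = np.zeros(dim) if center is None else np.asarray(center, float)
    for _ in range(iters):
        axes = [np.linspace(c - L / 2.0, c + L / 2.0, P) for c in o_best]
        grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, dim)
        f1 = np.array([loss_fn(o) for o in grid])
        # first sort: keep the fraction with the smallest primary loss
        top = grid[np.argsort(f1)[: max(1, int(keep * len(grid)))]]
        # second sort: among those, take the smallest secondary loss
        cand = top[np.argmin([loss2_fn(o) for o in top])]
        # accept only a strong Pareto improvement over the current best
        if loss_fn(cand) <= loss_fn(o_best) and loss2_fn(cand) <= loss2_fn(o_best):
            o_best = cand
        L *= d  # shrink the subdivision cube
    return o_best

# Toy objectives sharing a minimum at (1.79, -1.90), standing in for loss/loss2
fA = lambda o: (o[0] - 1.79) ** 2 + (o[1] + 1.90) ** 2
fB = lambda o: 2.0 * fA(o)
o = discrete_sort_moo(fA, fB)
print(o)  # converges near (1.79, -1.90)
```

With d = 0.5 and 10 iterations, the final grid spacing is about 20·0.5⁹/9 ≈ 0.004, which matches the precision-driven stopping rule described above.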

4. Simulation results

The effectiveness of the proposed self-calibration algorithm was verified using BPCES simulations. Simulation data allow the accuracies and advantages of different calibration methods to be compared, because the actual value θ is known in simulation.

4.1 Simulation model

In the simulation experiment, 51 groups of 4-aperture intensity samples were obtained by simulating four apertures detecting polarized light with different random unknown AoPs. For the integrating-sphere output light passed through a polarizer, the light intensity IS was assumed to be 250, and the random DoLP followed a normal distribution with a mean of 0.5 and a variance of 10% (Fig. 9(a)). The maximum transmittance of each aperture polarizer was 80%, and the extinction ratios were identical. The first aperture was set as the reference, and the ideal installation angles θ of the other three apertures relative to the reference were 45°, 90°, and 135°. The installation deviation angles o were set to 1.79°, –1.90°, and 2.33° (Fig. 9(b)).


Fig. 9. Calibration simulation of BPCES: (a) True value of simulation data and one of its samples S(IS, p and α); (b) Angles of installation and deviation of polarizing films; (c) Intensity samples with sampling noise and DoLP Gaussian noise (i1−4 are represented by red, green, blue, and black points, respectively).


The CMOS ADC error and other noises were simplified to uniform random noise of at most 1 gray level and superimposed on the intensity samples. The 4-aperture simulation sample outputs are presented in Fig. 9(c).
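This sample generation can be sketched as follows, assuming the single-aperture response model $i = I_S[1 + E\cdot p\cdot\cos 2(\alpha - \theta - o)]$ used in Appendix A; the coefficient E = 0.8 and the symmetric ±0.5 gray-level uniform noise are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

IS = 250.0                                    # integrating-sphere intensity
E = 0.8                                       # polarization coefficient (illustrative)
theta = np.deg2rad([0.0, 45.0, 90.0, 135.0])  # ideal installation angles
o_dev = np.deg2rad([0.0, 1.79, -1.90, 2.33])  # installation deviations

n = 51
alpha = rng.uniform(0.0, np.pi, n)            # random unknown AoP per sample
p = rng.normal(0.5, 0.05, n)                  # DoLP around 0.5 (spread is illustrative)

# 4 x 51 intensity samples plus uniform ADC-like noise within one gray level
i = IS * (1.0 + E * p * np.cos(2.0 * (alpha[None, :] - theta[:, None] - o_dev[:, None])))
i += rng.uniform(-0.5, 0.5, i.shape)
print(i.shape)  # (4, 51)
```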

Then, we formulated the single-aperture redundancy optimization objective and the DoLP steadiness optimization objective, input the dataset of 4 × 51 light-intensity samples, and set L = 20° as the side length of the subdivision space and P = 10 as the subdivision density in each dimension. The 10% of points with the smallest loss were screened for loss2 sorting, with d = 0.5 as the re-subdivision ratio and 10 iterations.

Figure 10(a) shows the difference between the optimal value (solid lines) and actual value (dashed lines) of the 3-aperture installation deviation angles o (red, green, and blue) during iterative optimization. The optimal values gradually converged to the actual values while loss/loss2 (black and purple arrows) decreased, and the maximum calibration error Δo with respect to the actual values was 0.07°. Using the relative error (the ratio to the initial iterative error) as the evaluation criterion for the reduction of the objective functions, both loss (black) and loss2 (purple) were optimized.


Fig. 10. Iterative optimization results: (a) er is uniform; (b) er is non-uniform, value er1∼4 is considered; (c) er is non-uniform, value er1∼4 is not considered.


Considering conditions where the extinction ratio is non-uniform, we used the redundant aperture as the reference and assumed that the er1∼3 of the other apertures were 90%, 105%, and 115% of the reference, respectively. If the polarization parameters E1∼3 are calibrated in advance and substituted into Eq. (22), calibration results of the same accuracy can be obtained (Fig. 10(b)). However, if er1∼4 is non-uniform and calibration is still carried out assuming consistent er as in Eq. (15), then although loss/loss2 (black and purple arrows) decreased, the calibration results eventually converged to wrong values, resulting in calibration failure (Fig. 10(c)). Therefore, calibrating the er value of each aperture is a precondition of this algorithm. In future research, through other prior constraints that likewise require no true value as supervision, this algorithm is expected to be improved to cover more calibration parameters, such as θ, er, and k.

4.2 Advantages of unsupervised algorithm and discrete sorting method

First, the advantages of the unsupervised algorithm over the supervised algorithm were verified. In addition to requiring simpler experimental conditions, the unsupervised algorithm avoids the error propagation of the angle-rotation deviation eA, although it introduces the error propagation of the redundant-aperture intensity noise ei4:

$$E_\mathrm{S} = \mathrm{F}_{\mathrm{Supervised}}({e_{i1}},{e_{i2}},{e_{i3}},{e_A}),\quad E_\mathrm{U} = \mathrm{F}_{\mathrm{Unsupervised}}({e_{i1}},{e_{i2}},{e_{i3}},{e_{i4}})$$

Taking the maximum calibration angle error Δo as the evaluation criterion, the effects of error propagation from eA and ei were compared through simulation. In the simulation, ei was assumed to be uniform noise with a 0–2 gray-level standard deviation, with distribution sequence ei, and eA was Gaussian noise with a 0°–2° standard deviation, with distribution sequence eA. By comparing the calibration error-propagation value matrices ES(ei, eA) and EU(ei) of the supervised and unsupervised algorithms under different ei and eA values, the advantages and application scope of each algorithm were confirmed. When the calibration errors satisfy ES = EU, the error-propagation equivalence relationship G between eA and ei can be established as follows:

$${e_A} = G({e_i}) = \overline {{{\mathbf{E}}_\mathrm{U}}({e_i},{{\mathbf{e}}_A})} /{k_f},\quad \text{where } {k_f} = {{\mathbf{e}}_i}\cdot {{\mathbf{E}}_\mathrm{S}}({e_i},{{\mathbf{e}}_A})/{|{{{\mathbf{e}}_i}} |^2}$$

The equivalence-relationship G-curve obtained from the simulation is shown in Fig. 11 as an approximately proportional function with a fitted ratio of eA:ei = 0.151:1. This implies that if the standard deviation of the sampling noise ei is one gray level, the angle-rotation precision eA of the supervised algorithm must be better than 0.151° for its calibration accuracy to exceed that of the unsupervised algorithm. Otherwise, under typical calibration conditions, the unsupervised algorithm is more accurate than the supervised algorithm.
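The proportional fit behind the G-curve is a through-origin least-squares slope of the same form as the kf expression above; the (ei, eA) data points below are illustrative, chosen near the reported 0.151 ratio:

```python
import numpy as np

# Through-origin least-squares slope k = (x . y) / (x . x), the form of k_f above
x = np.array([0.5, 1.0, 1.5, 2.0])          # e_i standard deviations (gray levels)
y = np.array([0.076, 0.151, 0.226, 0.302])  # equivalent e_A (degrees), illustrative
k = np.dot(x, y) / np.dot(x, x)
print(round(k, 3))  # 0.151
```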


Fig. 11. G-curve simulation results; the upper-left and lower-right regions correspond to the unsupervised and supervised algorithms having better performance, respectively.


In addition, our MOO method was verified to be more suitable than the single-objective-converted methods [8,9] for non-convex MOO polarization calibration problems. We divided the sample sets into three simulation scenarios, examining cases with and without sampling noise ei and with the DoLP of the polarized light either constant or perturbed by Gaussian noise with a 10% standard deviation. We calibrated the simulation data from these scenarios using the simple linear weighting method [8], the matrix LLS method [9], and our discrete sorting optimization method, as shown in Table 1. The results were compared in terms of the final optimized values (LOSS/LOSS2) of the loss/loss2 functions and the maximum calibration angle error (Δo) among all apertures.


Table 1. Final LOSSs and Δo for different MOO algorithms

Theoretically, because the linear weighting and LLS methods lack prior knowledge of the multi-objective weight distribution, only a carefully chosen weight can reach MOO convergence in a real-world scenario with sampling noise ei and DoLP fluctuations, which eventually leads to large calibration errors. Our optimization method solves this problem via two rounds of sorting per iteration, and its angle calibration remains accurate even when the ei and DoLP noise are appreciable and LOSS/LOSS2 do not tend to 0.

5. Experiment results

The effectiveness of the proposed self-calibration algorithm was further verified by BPCES experiments on polarized light from an integrating sphere and from the sky.

5.1 Experimental verification of indoor integrating sphere data

The nonuniformity of the BPCES was first corrected through pre-processing. The nonuniformity and bias of each pixel were calibrated from the intensity changes of an unpolarized calibration light source. Subsequently, the gain K was smoothed, and the dark-current bias term b was removed. Thus, the BPCES could be regarded as an ideal polarization imaging system with uniform aperture parameters and only the installation angles unknown.
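A minimal sketch of this per-pixel correction, with hypothetical 2×2 gain and bias maps K and b chosen so that the corrected output is uniform:

```python
import numpy as np

def uniformity_correct(raw, K, b):
    """Per-pixel pre-processing: remove dark-current bias b, divide by gain K."""
    return (np.asarray(raw, float) - b) / K

# Hypothetical gain/bias maps; raw is a flat 50-gray-level scene seen through K, b
K = np.array([[1.20, 1.10], [1.04, 1.00]])
b = np.array([[10.0, 5.0], [2.0, 0.0]])
raw = b + 50.0 * K
print(uniformity_correct(raw, K, b))  # uniform 50.0
```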

Subsequently, the images of the nine apertures were registered, and the overlapping region in the full FOV was identified. Polarized light from the integrating sphere and sky was used as the dataset for self-calibration. We aimed the BPCES at the exit of the integrating sphere and randomly rotated the polarizer between them to capture the images (Fig. 12).


Fig. 12. Image data collected by the BPCES. (a) Aiming at the polarizer; (b) Selection of image parts of the upper-left four apertures as samples.


Taking 50 × 50 pixel image blocks in the registered 4-aperture overlapping FOV (Fig. 12(b)), the gray-level averages of the corresponding areas were used as the sample set (Fig. 13(a)).


Fig. 13. 4-aperture uniformity-corrected data. (a) Random-rotation calibration dataset with A unknown; (b) Dataset with 2° evenly spaced angles.


In contrast to the simulation, the actual installation-angle deviations of the BPCES experimental system were unknown, making it difficult to evaluate the accuracy of our algorithm from the optimal and actual values. Hence, another dataset with an A sampling interval of 2° was measured as the evaluation criterion (Fig. 13(b)), and the phase angle of the light-intensity variation, taken as the actual value θ, was obtained by fitting the cosine formula [7]. The difference between the phase angles and the ideal installation angles gives the actual deviation o, which was compared with the optimal value from our algorithm to evaluate the calibration error.
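The cosine fitting can be linearized into an ordinary least-squares problem; this sketch assumes the model i = a + b·cos 2(A − θ), with the 46.8° phase and the amplitudes chosen purely for illustration:

```python
import numpy as np

def fit_phase(A, i):
    """Phase angle theta of i(A) = a + b*cos 2(A - theta) by linear least squares.

    Expanding gives i = a + c1*cos 2A + c2*sin 2A with c1 = b*cos 2theta and
    c2 = b*sin 2theta, so theta = atan2(c2, c1) / 2 (defined modulo 180 deg).
    """
    X = np.column_stack([np.ones_like(A), np.cos(2 * A), np.sin(2 * A)])
    a, c1, c2 = np.linalg.lstsq(X, i, rcond=None)[0]
    return 0.5 * np.arctan2(c2, c1)

# 2-degree-spaced noiseless samples with a known 46.8-degree phase
A = np.deg2rad(np.arange(0.0, 180.0, 2.0))
i = 100.0 + 40.0 * np.cos(2 * (A - np.deg2rad(46.8)))
print(np.rad2deg(fit_phase(A, i)))  # 46.8
```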

We input 4 × 100 light-intensity samples and used P = 20 as the subdivision density for each dimension; the other subdivision parameters were identical to those in Section 4.1. The error between the optimal value (solid lines) and the actual value (dashed lines) is shown in Fig. 14(a). Our MOO algorithm reduced loss/loss2, and each o (red, green, and blue) gradually converged to the actual value with iteration. Finally, the maximum calibration angle error Δo was 0.61°, and loss and loss2 were both optimized. The results are presented in Table 2.


Fig. 14. Self-calibration and evaluation of integrating sphere experiment data. (a) Iterative optimization results of the unsupervised algorithm, the optimal value gradually converged to the actual value and loss/loss2 (black and purple arrows) reduced; (b) Objective functions values decline by the supervised algorithm; (c) Results comparison of 0∼350° AoP measurement error.



Table 2. Calibration result of each aperture’s deviation for the integrating sphere polarized light dataset

Compared with our unsupervised algorithm, the optimal result of the supervised gradient-descent algorithm was far from the actual value despite its objective function (Eq. (11)) converging to 0 (Fig. 14(b)), which can only be explained by the problem being ill-conditioned.

In addition, considering er non-uniformity caused by stray light or the fiber faceplate not being polarization-maintaining, the polarization-parameter ratios E1/E4, E2/E4, and E3/E4 were first calibrated as 100.51%, 102.02%, and 98.65%, from which the er values were obtained as 100.76%, 103.56%, and 97.20% of the redundant aperture. Substituting E1∼E4 into Eq. (22) for self-calibration, the maximum calibration angle error was 0.34°. Compared with the uniform-er model of Eq. (15), taking er non-uniformity into consideration significantly reduces calibration errors and improves orientation accuracy.

The calibrated angles replace the ideal installation angles θ in the system matrix M, and this actual M is used to measure the A-known dataset iA and obtain the calculated AoP α (Fig. 14(c)), evaluating the accuracy change due to BPCES calibration. Using the 1-norm to construct the average AoP measurement error, we obtain

$${E_{loss}} = \sum\limits_{m = 1}^w {|{\alpha _m}({{\mathbf{i}}_A}|{\boldsymbol{\theta}}) - {A_m}|/w}$$
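A sketch of this average-error computation; wrapping the differences into [−90°, 90°) reflects the 180° periodicity of the AoP and is our assumption about how |αm − Am| is evaluated:

```python
import numpy as np

def mean_aop_error(alpha_calc, A_true):
    """Average 1-norm AoP error in degrees; differences are wrapped into
    [-90, 90) because the angle of polarization is 180-degree periodic."""
    diff = np.asarray(alpha_calc, float) - np.asarray(A_true, float)
    diff = (diff + 90.0) % 180.0 - 90.0
    return float(np.mean(np.abs(diff)))

# 179 deg vs 0 deg is only a 1-degree AoP error after wrapping
print(mean_aop_error([1.0, 179.0, 92.0], [0.0, 0.0, 90.0]))  # 1.333...
```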

After unsupervised calibration, the average measurement error was reduced from 2.88° to 2.13°. Residual measurement error remained because the evaluation data iA contained sampling noise, which cannot be removed through calibration. With the supervised algorithm, which was not well-posed, the measurement error increased to 4.05°. Our unsupervised calibration algorithm (with or without considering er non-uniformity) achieved accurate results close to those of the fitting method and was more accurate than the supervised method or the uncalibrated results (Fig. 14(c)).

5.2 Experimental verification of outdoor sky-polarized light data

For the sky-polarized-light self-calibration experiment using the BPCES, the apertures were roughly aimed opposite the solar azimuth to capture a high-DoLP sky area. The camera was rotated by hand to collect multiple i samples, and an NVIDIA TX2 was used for image acquisition and pre-processing. Because the captured sky area could not be stabilized, the light intensity and DoLP of the incident polarized light changed significantly during sample collection, as shown in Fig. 15(a). Our algorithm took 4 × 37 light-intensity samples as input and performed subdivision optimization with the same parameters as in Section 4.1. The loss/loss2 decreased and each o gradually converged to the actual value with iteration, as shown in Fig. 15(b); the maximum calibration angle error Δo was 1.12°. After unsupervised calibration, the average measurement error was reduced from 3.15° to 0.824°, which is 26.1% of the original value.


Fig. 15. Sky polarized light collected by the BPCES. (a) Uniformity-corrected 4-aperture light intensity data; (b) Iterative optimization results. The loss/loss2 reduced and the optimal value (solid line) converged to the actual value (dashed line).


After self-calibration, the accuracy of the BPCES was evaluated using sky-polarized-light orientation. The experiment was conducted at 15:00 on June 25, at longitude 116°18′16″ E and latitude 39°57′47″ N, in clear weather. The BPCES was mounted on a rotary stage and photographed vertically upwards. The stage was rotated in 5° steps, and the change in the AoP output was compared with the stage reading to evaluate the heading-angle accuracy before and after self-calibration.

Two methods were used for heading-angle measurement. The first is based on the direction of the zenith polarization region: similar to Section 5.1, it takes only about a 2° FOV from the center of the BPCES, obtains the averages of 50 × 50 pixels as i1∼4, and calculates the AoP to determine the relative angle between the system heading and the zenith polarization direction. The second is based on the self-calibration angle result obtained from the 50 × 50 pixel blocks, extended to the entire 4-aperture overlapping region to obtain sky images, i.e., 1000 × 1000 pixels over the 40° FOV. We used spatiotemporal information and the astronomical calendar to obtain the sun direction vector [29] and determine the absolute heading angle. After self-calibration, the calculated AoP distribution of the sky image in the full overlapping FOV is shown in Fig. 16(a).

The heading-angle measurement error is calculated by Eq. (27). With the initial angle as the reference, evaluation data at 5° intervals over 0∼180° were collected, and the change of the heading angle relative to the initial value was calculated. For the zenith measurement method, our self-calibration algorithm reduced the average error from 1.86° to 1.52°, as shown in Fig. 16(b); for the full-overlapping-FOV method of calculating the sun direction vector, it reduced the average error from 3.18° to 2.69°, as shown in Fig. 16(c).


Fig. 16. Sky polarized light direction measurement evaluation: (a) The AoP of the sky at 50° heading angle, and the heading angle measurement error based on the (b) zenith polarization region and (c) full overlapping FOV


6. Conclusions

In this study, a multi-aperture BPCES based on a micro-surface fiber faceplate and a curved polarizing film array was designed, which realizes polarization imaging in the central 4-aperture overlapping FOV region. Additionally, a self-calibration algorithm based on unsupervised optimization was developed for the installation angle deviation of this system.

This calibration algorithm, which imitates the insect polarization compound eye self-calibration process, fully exploits the multi-aperture redundant information and random polarization information in the scene, adopts a single-aperture redundancy objective and DoLP steadiness objective, and uses a non-convex MOO discrete parameter sorting method, where unknown polarized light information is employed for self-calibration of the polarizer installation angle of the BPCES. In simulations and experiments using artificial/natural polarized light sources, this algorithm achieved maximum calibration angle errors of 0.07°, 0.61°, and 1.12° under large amounts of sampling noise. In simulation experiments, under general experimental conditions, the unsupervised algorithm always outperformed the conventional supervised algorithm, and the MOO discrete optimization method of iterative sorting was generally better than the conventional linear weighted MOO method.

Compared with ordinary calibration procedures, our self-calibration algorithm requires no true value as supervision and thus achieves real-time calibration during use. Under conditions where the calibration information is not strictly controlled, such as manually setting the AoP of the polarized light, the accuracy of supervised methods may be poor, whereas our algorithm is unaffected by rotation errors and therefore achieves higher calibration accuracy. It can be applied not only to BPCESs but also to systems such as camera polarization arrays and DoFP cameras.

The limitations of the proposed algorithm are as follows: (1) If the DoLP fluctuation of the calibration light exactly offsets the p(α) fluctuation in Eq. (19), the second objective function is invalidated. (2) The extinction ratio er is an important polarization parameter that should also be self-calibrated.

In future work, we will improve the BPCES structure and the robustness of the second optimization objective, ultimately achieving self-calibration of all polarization parameters (θ, er, and the light-intensity response k) from complex scenes. The non-convex MOO method will be further studied to increase its precision and reduce its tendency to fall into local or single-objective optima, thereby increasing the self-calibration accuracy and broadening its application range.

Appendix A. Convexity and unique judgment

It is necessary to determine whether loss is a strictly convex function, which is equivalent to its Hessian matrix of second partial derivatives being positive-definite throughout the definition domain:

$${\mathbf{H}}(loss) = \left[ {\begin{array}{ccc} {\frac{{{\partial ^2}loss}}{{\partial o_1^2}}}&{\frac{{{\partial ^2}loss}}{{\partial {o_1}\partial {o_2}}}}&{\frac{{{\partial ^2}loss}}{{\partial {o_1}\partial {o_3}}}}\\ {\frac{{{\partial ^2}loss}}{{\partial {o_2}\partial {o_1}}}}&{\frac{{{\partial ^2}loss}}{{\partial o_2^2}}}&{\frac{{{\partial ^2}loss}}{{\partial {o_2}\partial {o_3}}}}\\ {\frac{{{\partial ^2}loss}}{{\partial {o_3}\partial {o_1}}}}&{\frac{{{\partial ^2}loss}}{{\partial {o_3}\partial {o_2}}}}&{\frac{{{\partial ^2}loss}}{{\partial o_3^2}}} \end{array}} \right]$$

The Hessian matrix was found to have negative eigenvalues for arbitrary data o, proving that loss is not strictly convex and that multiple local minimum points exist in the 3D space.
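Such an eigenvalue check can be reproduced numerically; this sketch applies a central-difference Hessian to a toy non-convex function (not the paper's loss), whose negative eigenvalue at the origin rules out convexity there:

```python
import numpy as np

def hessian_eigs(f, o, h=1e-4):
    """Eigenvalues of a central-difference numerical Hessian of f at o.
    A negative eigenvalue shows f is not convex at o."""
    n = len(o)
    H = np.zeros((n, n))
    for a in range(n):
        for b in range(n):
            da, db = np.eye(n)[a] * h, np.eye(n)[b] * h
            H[a, b] = (f(o + da + db) - f(o + da - db)
                       - f(o - da + db) + f(o - da - db)) / (4.0 * h * h)
    return np.linalg.eigvalsh(H)  # ascending order

# Toy non-convex function: its Hessian at the origin is diag(-1, 2)
eigs = hessian_eigs(lambda o: np.cos(o[0]) + o[1] ** 2, np.array([0.0, 0.0]))
print(eigs)  # approximately [-1, 2]
```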

It should then be determined whether loss has other minimum points oe besides the actual value ot. Assume a scenario in which α takes multiple unknown values while the light intensity IS(α) and DoLP p(α) remain relatively steady. Considering the installation deviation angle of the θn aperture, if a component of oe differs from the actual parameter ont, the calculated IS and p values under the parameter oe according to Eq. (14) are defined as ISe(α) and pe(α), respectively. Without considering sampling error, the calculated value ine and true value int of the light-intensity response are given as follows:

$$\begin{array}{l} {i_{nt}} = {I_{\mathbf{S}}}(\alpha )[1 + E\cdot p(\alpha )\cdot \cos 2(\alpha - {\theta _n} - {o_{nt}})]\\ {i_{ne}} = {I_{{\mathbf{S}}e}}(\alpha )[1 + E\cdot {p_e}(\alpha )\cdot \cos 2(\alpha - {\theta _n} - {o_{ne}})] \end{array}$$

If different minimum points oe yield the same response int in Eq. (15) under each α, that is, int = ine, we set ϕ = α − θn − ont and expand Eq. (29):

$$\begin{array}{l} {I_{{\mathbf{S}}e}}(\alpha ) - E{I_{{\mathbf{S}}e}}(\alpha )\cdot {p_e}(\alpha )\cdot \sin 2({o_{nt}} - {o_{ne}})\cdot \sin 2\phi + E{I_{{\mathbf{S}}e}}(\alpha )\cdot {p_e}(\alpha )\cdot \cos 2({o_{nt}} - {o_{ne}})\cdot \cos 2\phi \\ = {I_{\mathbf{S}}}(\alpha ) + E{I_{\mathbf{S}}}(\alpha )\cdot p(\alpha )\cdot \cos 2\phi \end{array}$$
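For clarity, this expansion writes $\alpha - \theta_n - o_{ne} = \phi + (o_{nt} - o_{ne})$ and applies the angle-sum identity:

```latex
\cos 2(\alpha - \theta_n - o_{ne})
  = \cos 2\bigl(\phi + (o_{nt} - o_{ne})\bigr)
  = \cos 2(o_{nt} - o_{ne})\cos 2\phi - \sin 2(o_{nt} - o_{ne})\sin 2\phi
```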

The constant term and the coefficient of the cos2ϕ term on the two sides of the equal sign in Eq. (30) should be equal:

$$\begin{array}{l} {{\mathbf{o}}_e } \in {I_{{\mathbf{S}}\textrm{e}}}(\alpha ) - {I_{{\mathbf{S}}\textrm{e}}}(\alpha )\cdot E\cdot p(\alpha )\cdot \sin 2({o_{\textrm{nt}}} - {o_\textrm{{ne} }})\cdot \sin 2\phi = {I_{\mathbf{S}}}(\alpha )\\ {{\mathbf{o}}_e } \in {I_{{\mathbf{S}}\textrm{e}}}(\alpha )\cdot E\cdot {p_\textrm{e}}(\alpha )\cdot \cos 2({o_{\textrm{nt}}} - {o_\textrm{{ne} }}) = {I_{\mathbf{S}}}(\alpha )\cdot E\cdot p(\alpha ) \end{array}$$

Thus, the three components of oe satisfy the two independent ternary nonlinear equations in Eq. (31). Each equation provides a surface constraint on the local-minimum distribution, and the two surface constraints intersect in a curved line-set in the 3D space. Accordingly, if ISe(α) and pe(α) satisfy Eq. (31) under the parameter oe, all such oe values lie on this line-set and reach the same loss minimum under the single-aperture redundancy optimization objective. This proves that loss is neither convex nor unique in its minimum.

Funding

National Natural Science Foundation of China (61871034).

Disclosures

The authors declare no conflicts of interest.

Data availability

No data were generated or analyzed in the presented research.

References

1. Y. Cheng, J. Cao, Y. Zhang, and Q. Hao, “Review of state-of-the-art artificial compound eye imaging systems,” Bioinspir. Biomim. 14(3), 031002 (2019). [CrossRef]  

2. E. P. Meyer and T. Labhart, “Morphological specializations of dorsal rim ommatidia in the compound eye of dragonflies and damselfies (Odonata),” Cell Tissue Res. 272(1), 17–22 (1993). [CrossRef]  

3. J. Liu, R. Zhang, Y. Li, C. Guan, R. Liu, J. Fu, and J. Chu, “A bio-inspired polarization navigation sensor based on artificial compound eyes,” Bioinspir. Biomim. 17(4), 046017 (2022). [CrossRef]  

4. J. Xue, S. Qiu, X. Wang, and W. Jin, “A compact visible bionic compound eyes system based on micro-surface fiber faceplate,” in 2019 International Conference on Optical Instruments and Technology: Optoelectronic Imaging/Spectroscopy and Signal Processing Technology (2020), pp. 68–75.

5. S.B. Powell and V. Gruev, “Calibration methods for division-of-focal-plane polarimeters,” Opt. Express 21(18), 21039–21055 (2013). [CrossRef]  

6. S. B. Powell and V. Gruev, “Evaluation of calibration methods for visible-spectrum division-of-focal-plane polarimeters,” Polarization Science and Remote Sensing VI 8873, 887306 (2013). [CrossRef]  

7. D. Wang, H. Liang, H. Zhu, and S. Zhang, “A bionic camera-based polarization navigation sensor,” Sensors 14(7), 13006–13023 (2014). [CrossRef]  

8. Z. Xian, X. Hu, J. Lian, L. Zhang, J. Cao, Y. Wang, and T. Ma, “A novel angle computation and calibration algorithm of bio-inspired sky-light polarization navigation sensor,” Sensors 14(9), 17068–17088 (2014). [CrossRef]  

9. T. Ma, X. Hu, J. Lian, and L. Zhang, “A novel calibration model of polarization navigation sensor,” IEEE Sens. J. 15(8), 4241–4248 (2015). [CrossRef]  

10. T. Ma, X. Hu, L. Zhang, and X. He, “Calibration of a polarization navigation sensor using the NSGA-II algorithm,” Opt. Commun. 376, 107–114 (2016). [CrossRef]  

11. C. Fan, X. Hu, J. Lian, L. Zhang, and X. He, “Design and calibration of a novel camera-based bio-inspired polarization navigation sensor,” IEEE Sens. J. 16(10), 3640–3648 (2016). [CrossRef]  

12. G. Han, X. Hu, J. Lian, X. He, L. Zhang, Y. Wang, and F. Dong, “Design and calibration of a novel bio-inspired pixelated polarized light compass,” Sensors 17(11), 2623 (2017).

13. K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, “A fast and elitist multiobjective genetic algorithm: NSGA-II,” IEEE Trans. Evol. Comput. 6(2), 182–197 (2002).

14. K. V. Frisch, “Die Polarisation des Himmelslichtes als orientierender Faktor bei den Tänzen der Bienen,” Experientia 5(4), 142–148 (1949).

15. K. Kirschfeld, “Die notwendige Anzahl von Rezeptoren zur Bestimmung der Richtung des elektrischen Vektors linear polarisierten Lichtes/The number of receptors necessary for determining the position of the e-vector of linearly polarized light,” Z. Naturforsch. B 27(5), 578–579 (1972).

16. T. Labhart, “How polarization-sensitive interneurones of crickets perform at low degrees of polarization,” J. Exp. Biol. 199(7), 1467–1475 (1996).

17. H. Vitzthum, M. Müller, and U. Homberg, “Neurons of the central complex of the locust Schistocerca gregaria are sensitive to polarized light,” J. Neurosci. 22(3), 1114–1125 (2002).

18. T. Labhart and E. P. Meyer, “Neural mechanisms in insect navigation: polarization compass and odometer,” Curr. Opin. Neurobiol. 12(6), 707–714 (2002).

19. R. Wehner, “Desert ant navigation: how miniature brains solve complex tasks,” J. Comp. Physiol. A 189(8), 579–588 (2003).

20. Y. Wang, J. Chu, R. Zhang, J. Li, X. Guo, and M. Lin, “A bio-inspired polarization sensor with high outdoor accuracy and central-symmetry calibration method with integrating sphere,” Sensors 19(16), 3448 (2019).

21. F. Goudail, X. Li, M. Boffety, S. Roussel, T. Liu, and H. Hu, “Precision of retardance autocalibration in full-Stokes division-of-focal-plane imaging polarimeters,” Opt. Lett. 44(22), 5410–5413 (2019).

22. X. Li, F. Goudail, P. Qi, T. Liu, and H. Hu, “Integration time optimization and starting angle autocalibration of full Stokes imagers based on a rotating retarder,” Opt. Express 29(6), 9494–9512 (2021).

23. X. Li, F. Goudail, and S. C. Chen, “Self-calibration for Mueller polarimeters based on DoFP polarization imagers,” Opt. Lett. 47(6), 1415–1418 (2022).

24. M. Blum and T. Labhart, “Photoreceptor visual fields, ommatidial array, and receptor axon projections in the polarisation-sensitive dorsal rim area of the cricket compound eye,” J. Comp. Physiol. A 186(2), 119–128 (2000).

25. Z. Zhang, S. Qiu, W. Jin, C. Yang, and J. Xue, “Image mosaic of bionic compound eye imaging system based on image overlap rate prior,” Optical Sensing and Imaging Technologies and Applications 10846, 108462C (2018).

26. “High Contrast Linear Polarizing Film (XP42),” https://www.edmundoptics.com/f/high-contrast-linear-polarizing-film/14385/ Accessed: 2023-01-03.

27. J. A. North and M. J. Duggin, “Stokes vector imaging of the polarized sky-dome,” Appl. Opt. 36(3), 723–730 (1997).

28. K. J. Voss and Y. Liu, “Polarized radiance distribution measurements of skylight. I. System description and characterization,” Appl. Opt. 36(24), 6083–6094 (1997).

29. Y. Li, X. Wang, Y. Pan, L. Li, and J. Chen, “Ultraviolet-visible light compass method based on local atmospheric polarization characteristics in adverse weather conditions,” Appl. Opt. 61(23), 6853–6860 (2022).

Data availability

No data were generated or analyzed in the presented research.


Figures (16)

Fig. 1. Angle deviations in the DoFP camera and insect compound eyes. (a) Installation deviation between the polarizer array and CCD pixels [12]; (b) Polarizer angle distribution of a cricket [18].
Fig. 2. Bionic compound eye camera. (a) System structure; (b) Inclined micro-surface of the fiber faceplate; (c) System prototype; (d) Overlapping FOV.
Fig. 3. Bionic compound eye system output. (a) Output image; (b) Feature point matching; (c) Stitched image; (d) Aperture overlapping numbers.
Fig. 4. Polarizing film array. (a) Linear polarizing film; (b) Cutting shape; (c) Direction arrangement; (d) Array mounting.
Fig. 5. (a) Calculated pe(α) (red) and p(α) (green) distributions; (b) Unsupervised optimization algorithm flow.
Fig. 6. Distributions of the two objective functions without sampling error and DoLP inconsistency.
Fig. 7. Distributions of the two objective functions with sampling error and DoLP inconsistency.
Fig. 8. 2D schematic of our discrete parameter sorting method for non-convex MOO (red indicates a smaller value). The process from (a) to (b) is one cycle in (c).
Fig. 9. Calibration simulation of the BPCES: (a) True value of the simulation data and one of its samples S(IS, p, and α); (b) Installation angles and deviations of the polarizing films; (c) Intensity samples with sampling noise and DoLP Gaussian noise (i1−4 are represented by red, green, blue, and black points, respectively).
Fig. 10. Iterative optimization results: (a) er is uniform; (b) er is non-uniform and the values er1∼4 are considered; (c) er is non-uniform and the values er1∼4 are not considered.
Fig. 11. G-curve simulation results; the upper-left and lower-right regions correspond to the unsupervised and supervised algorithms performing better, respectively.
Fig. 12. Image data collected by the BPCES. (a) Aiming at the polarizer; (b) Selection of image parts of the upper-left four apertures as samples.
Fig. 13. 4-aperture uniformity-corrected data. (a) Random-rotation calibration dataset without A known; (b) 2° evenly spaced angle dataset.
Fig. 14. Self-calibration and evaluation of the integrating sphere experiment data. (a) Iterative optimization results of the unsupervised algorithm; the optimal value gradually converged to the actual value while loss/loss2 (black and purple arrows) decreased; (b) Decline of the objective function values under the supervised algorithm; (c) Comparison of 0∼350° AoP measurement errors.
Fig. 15. Sky polarized light collected by the BPCES. (a) Uniformity-corrected 4-aperture light intensity data; (b) Iterative optimization results. The loss/loss2 decreased and the optimal value (solid line) converged to the actual value (dashed line).
Fig. 16. Sky polarized light direction measurement evaluation: (a) The AoP of the sky at a 50° heading angle, and the heading angle measurement error based on (b) the zenith polarization region and (c) the full overlapping FOV.

Tables (2)

Table 1. Final LOSSs and Δo for different MOO algorithms
Table 2. Calibration result of each aperture’s deviation for the integrating sphere polarized light dataset

Equations (31)

Equations on this page are rendered with MathJax.

$$S = \begin{bmatrix} I_S \\ Q \\ U \end{bmatrix} = \begin{bmatrix} 1 \\ p\cos 2\alpha \\ p\sin 2\alpha \end{bmatrix} I_S$$
$$M_n = \frac{\tau}{2}\begin{bmatrix} e_r+1 & (e_r-1)\cos 2\theta & (e_r-1)\sin 2\theta \\ (e_r-1)\cos 2\theta & (e_r+1)\cos^2 2\theta + 2\sqrt{e_r}\sin^2 2\theta & (\sqrt{e_r}-1)^2\cos 2\theta\sin 2\theta \\ (e_r-1)\sin 2\theta & (\sqrt{e_r}-1)^2\cos 2\theta\sin 2\theta & (e_r+1)\sin^2 2\theta + 2\sqrt{e_r}\cos^2 2\theta \end{bmatrix}$$
$$S_n = [I_n, Q_n, U_n]^T = K M_n S + b$$
$$I_n = 0.5\,\tau I_S\left[e_r+1+(e_r-1)\,p\cos 2(\alpha-\theta)\right]$$
$$I_n = k\left[1+Ep\cos 2(\alpha-\theta)\right]I_S + b,\quad \text{where } E=\frac{e_r-1}{e_r+1},\; k=0.5K\tau(e_r+1)$$
$$i_n = (I_n - b)/k$$
$$i=\begin{bmatrix} i_1\\ i_2\\ i_3\\ i_4\end{bmatrix}=\begin{bmatrix}1 & E\cos 2\theta_1 & E\sin 2\theta_1\\ 1 & E\cos 2\theta_2 & E\sin 2\theta_2\\ 1 & E\cos 2\theta_3 & E\sin 2\theta_3\\ 1 & E\cos 2\theta_4 & E\sin 2\theta_4\end{bmatrix}\begin{bmatrix} I_S\\ pI_S\cos 2\alpha\\ pI_S\sin 2\alpha\end{bmatrix}=M_{1\sim 4}\,S$$
$$\hat{S} = M_{1\sim n}^{-1}\, i$$
$$\alpha = \frac{1}{2}\arctan\left(\frac{U}{Q}\right)$$
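As a concrete illustration of the four-aperture inversion and AoP formula above, the following Python sketch recovers the Stokes vector with a least-squares pseudoinverse and computes the AoP using the quadrant-aware arctan2. The installation angles, diattenuation E, and sample values here are illustrative assumptions, not the paper's calibration data:

```python
import numpy as np

# Build the measurement matrix M with rows [1, E*cos(2*theta), E*sin(2*theta)].
# theta_deg: polarizer installation angles; E = 1 for an ideal polarizer.
def measurement_matrix(theta_deg, E=1.0):
    th = np.deg2rad(np.asarray(theta_deg, dtype=float))
    return np.column_stack([np.ones_like(th),
                            E * np.cos(2 * th),
                            E * np.sin(2 * th)])

def estimate_stokes(i, theta_deg, E=1.0):
    # Least-squares solution S_hat = M^+ i (pseudoinverse, since M is 4x3)
    M = measurement_matrix(theta_deg, E)
    return np.linalg.pinv(M) @ np.asarray(i, dtype=float)

def aop_deg(S_hat):
    # AoP = 0.5 * arctan(U / Q); arctan2 keeps the correct quadrant
    return 0.5 * np.degrees(np.arctan2(S_hat[2], S_hat[1]))

# Forward-simulate one noiseless sample: I_S = 1, DoLP p = 0.6, AoP = 30 deg
theta = [135.0, 90.0, 45.0, 0.0]
S_true = np.array([1.0,
                   0.6 * np.cos(np.deg2rad(60.0)),
                   0.6 * np.sin(np.deg2rad(60.0))])
i = measurement_matrix(theta) @ S_true
print(aop_deg(estimate_stokes(i, theta)))   # close to 30.0
```

With four apertures and three Stokes unknowns the system is overdetermined, which is exactly the redundancy the self-calibration algorithm later exploits.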
$$\alpha = f(i_1,i_2,i_3,i_4\,|\,\theta_1,\theta_2,\theta_3,\theta_4)$$
$$loss = \sum_{m=1}^{w}(\alpha_m - A_m)^2$$
$$\theta_{n\_new} = \theta_{n\_old} - \eta\frac{\partial\, loss}{\partial \theta_n}$$
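A minimal numerical sketch of this supervised update, assuming the AoP is computed by pseudoinverse inversion and the gradient is approximated by central differences. The angles, learning rate, and synthetic data below are invented for illustration:

```python
import numpy as np

def sup_loss(theta_deg, I, A_deg, E=1.0):
    # loss = sum_m (alpha_m - A_m)^2, with alpha_m estimated from the four
    # intensities via the pseudoinverse of M(theta).
    th = np.deg2rad(np.asarray(theta_deg, dtype=float))
    M = np.column_stack([np.ones_like(th), E*np.cos(2*th), E*np.sin(2*th)])
    S = np.linalg.pinv(M) @ I.T                      # 3 x w Stokes estimates
    alpha = 0.5 * np.degrees(np.arctan2(S[2], S[1]))
    d = (alpha - A_deg + 90.0) % 180.0 - 90.0        # wrap AoP error to (-90, 90]
    return float(np.sum(d**2))

def gd_step(theta_deg, I, A_deg, lr=1e-3, eps=1e-4):
    # theta_new = theta_old - lr * d(loss)/d(theta), central differences
    g = np.zeros_like(theta_deg)
    for n in range(len(theta_deg)):
        tp, tm = theta_deg.copy(), theta_deg.copy()
        tp[n] += eps
        tm[n] -= eps
        g[n] = (sup_loss(tp, I, A_deg) - sup_loss(tm, I, A_deg)) / (2 * eps)
    return theta_deg - lr * g

# Synthetic calibration set: true angles deviate slightly from nominal
rng = np.random.default_rng(0)
A = rng.uniform(0.0, 180.0, 50)                      # known reference AoPs
p = rng.uniform(0.3, 0.9, 50)
th_true = np.deg2rad(np.array([136.0, 89.3, 45.4, 0.0]))
M_true = np.column_stack([np.ones(4), np.cos(2*th_true), np.sin(2*th_true)])
S = np.stack([np.ones(50), p*np.cos(2*np.deg2rad(A)), p*np.sin(2*np.deg2rad(A))])
I = (M_true @ S).T                                   # w x 4 intensity samples

theta = np.array([135.0, 90.0, 45.0, 0.0])
for _ in range(200):
    theta = gd_step(theta, I, A)                     # loss shrinks each pass
```

This is the "ordinary" supervised calibration the paper compares against: it needs the reference AoPs A, which is precisely the requirement the unsupervised algorithm removes.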
$$i=\begin{bmatrix} i_1\\ i_2\\ i_3\end{bmatrix}=\begin{bmatrix}1 & E\cos 2\theta_1 & E\sin 2\theta_1\\ 1 & E\cos 2\theta_2 & E\sin 2\theta_2\\ 1 & E\cos 2\theta_3 & E\sin 2\theta_3\end{bmatrix}\begin{bmatrix} I_S\\ pI_S\cos 2\alpha\\ pI_S\sin 2\alpha\end{bmatrix}=M_{1\sim 3}\,S$$
$$\hat{S}=M_{1\sim 3}^{-1}i=\begin{bmatrix}\hat{I_S}\\ \hat{Q}\\ \hat{U}\end{bmatrix}=\begin{bmatrix}\dfrac{\sin 2(\theta_3-\theta_2)\,i_1+\sin 2(\theta_1-\theta_3)\,i_2+\sin 2(\theta_2-\theta_1)\,i_3}{4\sin(\theta_3-\theta_1)\sin(\theta_2-\theta_3)\sin(\theta_1-\theta_2)}\\[2ex] \dfrac{(\sin 2\theta_2-\sin 2\theta_3)\,i_1+(\sin 2\theta_3-\sin 2\theta_1)\,i_2+(\sin 2\theta_1-\sin 2\theta_2)\,i_3}{4E\sin(\theta_3-\theta_1)\sin(\theta_2-\theta_3)\sin(\theta_1-\theta_2)}\\[2ex] \dfrac{(\cos 2\theta_3-\cos 2\theta_2)\,i_1+(\cos 2\theta_1-\cos 2\theta_3)\,i_2+(\cos 2\theta_2-\cos 2\theta_1)\,i_3}{4E\sin(\theta_3-\theta_1)\sin(\theta_2-\theta_3)\sin(\theta_1-\theta_2)}\end{bmatrix}$$
$$i_e(\theta)=M_4\hat{S}=M_4 M_{1\sim 3}^{-1}i=\frac{i_1\,SN_{324}+i_2\,SN_{134}+i_3\,SN_{214}}{SN_{321}}$$
$$loss=\sum_{m=1}^{w}(i_{em}-i_{4m})^2$$
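The redundancy-based loss above can be sketched directly: for each sample, the Stokes vector estimated from apertures 1–3 predicts the intensity of aperture 4, and the squared prediction error accumulates. The nominal angles, trial offsets, and scene data below are illustrative assumptions:

```python
import numpy as np

NOMINAL = np.array([135.0, 90.0, 45.0, 0.0])    # theta_4 fixed at 0 deg

def unsup_loss(o, I, E=1.0):
    # o: trial deviations (o1, o2, o3) of apertures 1-3, in degrees.
    # I: w x 4 array of dark/gain-corrected intensities i_1..i_4.
    th123 = np.deg2rad(NOMINAL[:3] + np.asarray(o, dtype=float))
    th4 = np.deg2rad(NOMINAL[3])
    M3 = np.column_stack([np.ones(3), E*np.cos(2*th123), E*np.sin(2*th123)])
    m4 = np.array([1.0, E*np.cos(2*th4), E*np.sin(2*th4)])
    S_hat = np.linalg.solve(M3, I[:, :3].T)     # Stokes from apertures 1-3
    i_e = m4 @ S_hat                            # predicted 4th-aperture intensity
    return float(np.sum((i_e - I[:, 3])**2))

# Synthetic scene: random AoP/DoLP samples imaged through the true angles
rng = np.random.default_rng(1)
alpha = np.deg2rad(rng.uniform(0.0, 180.0, 100))
p = rng.uniform(0.2, 0.9, 100)
o_true = np.array([1.2, -0.8, 0.5])             # true installation deviations
th_true = np.deg2rad(NOMINAL + np.append(o_true, 0.0))
M_true = np.column_stack([np.ones(4), np.cos(2*th_true), np.sin(2*th_true)])
S = np.stack([np.ones(100), p*np.cos(2*alpha), p*np.sin(2*alpha)])
I = (M_true @ S).T

print(unsup_loss(o_true, I) < unsup_loss([0.0, 0.0, 0.0], I))   # True
```

Note that no reference AoP enters the loss: only the mutual consistency of the four redundant measurements is penalized, which is what makes the calibration unsupervised.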
$$\theta_4=0,\ \theta_1=135^\circ+o_1,\ \theta_2=90^\circ+o_2,\ \theta_3=45^\circ+o_3$$
$$\min\{loss(o)\},\ o\in\mathbb{R}^3$$
$$p_e(\alpha)=p(\alpha)\cos 2(o_{nt}-o_{ne})+p(\alpha)E\sin 2(o_{nt}-o_{ne})\sin 2\phi$$
$$loss2=\sum_{m=1}^{w}\left(p_e(\alpha_m)-\bar{p}\right)^2,\quad \bar{p}=\frac{1}{w}\sum_{m=1}^{w}p_e(\alpha_m)$$
$$\min\{loss(o)\},\ \min\{loss2(o)\},\ o\in\mathbb{R}^3$$
$$i_e=M_4\hat{S}=M_4M^{-1}i=\frac{i_1\,ZN_{324}+i_2\,ZN_{134}+i_3\,ZN_{214}}{ZN_{321}}$$
$$Loss = loss + \eta\, loss2$$
$$o=(o_1,o_2):\ loss_{\min}=\min f_A(o_1,o_2),\ loss2_{\min}=\min f_B(o_1,o_2),\ (o_1,o_2)\in\mathbb{R}^2$$
$$E_S=F_{\mathrm{Supervised}}(e_{i1},e_{i2},e_{i3},e_A),\quad E_U=F_{\mathrm{Unsupervised}}(e_{i1},e_{i2},e_{i3},e_{i4})$$
$$e_A=G(e_i)=\overline{E_U(e_i,e_A)}/k_f,\quad \text{where } k_f=e_i\cdot E_S(e_i,e_A)/|e_i|^2$$
$$E_{loss}=\sum_{m=1}^{w}\left|\alpha_m(i_A\,|\,\theta)-A_m\right|/w$$
$$H(loss)=\begin{bmatrix}\dfrac{\partial^2 loss}{\partial o_1^2} & \dfrac{\partial^2 loss}{\partial o_1\partial o_2} & \dfrac{\partial^2 loss}{\partial o_1\partial o_3}\\[1.5ex] \dfrac{\partial^2 loss}{\partial o_2\partial o_1} & \dfrac{\partial^2 loss}{\partial o_2^2} & \dfrac{\partial^2 loss}{\partial o_2\partial o_3}\\[1.5ex] \dfrac{\partial^2 loss}{\partial o_3\partial o_1} & \dfrac{\partial^2 loss}{\partial o_3\partial o_2} & \dfrac{\partial^2 loss}{\partial o_3^2}\end{bmatrix}$$
$$i_{nt}=I_S(\alpha)\left[1+Ep(\alpha)\cos 2(\alpha-\theta_n-o_{nt})\right],\quad i_{ne}=I_{Se}(\alpha)\left[1+Ep_e(\alpha)\cos 2(\alpha-\theta_n-o_{ne})\right]$$
$$I_{Se}(\alpha)-EI_{Se}(\alpha)p_e(\alpha)\sin 2(o_{nt}-o_{ne})\sin 2\phi+EI_{Se}(\alpha)p_e(\alpha)\cos 2(o_{nt}-o_{ne})\cos 2\phi=I_S(\alpha)+EI_S(\alpha)p(\alpha)\cos 2\phi$$
$$\begin{cases} I_{Se}(\alpha)-I_{Se}(\alpha)Ep_e(\alpha)\sin 2(o_{nt}-o_{ne})\sin 2\phi=I_S(\alpha)\\ I_{Se}(\alpha)Ep_e(\alpha)\cos 2(o_{nt}-o_{ne})=I_S(\alpha)Ep(\alpha)\end{cases}$$