
Bio-inspired orientation using the polarization pattern in the sky based on artificial neural networks


Abstract

Many insects use the pattern of polarized light in the sky as a navigational cue. In this study, we use this sensory ability as a source of inspiration to create a computational orientation model based on an artificial neural network (POL-ANN). After a training phase using numerically generated sky polarization patterns, stable and convergent networks are obtained. We undertook a series of verification tests using four typical but different sky conditions and showed that the post-trained networks were able to make an accurate prediction of the direction of the sun. Comparisons between the proposed models and models based on the convolutional neural network (CNN) structure revealed the merits of the bio-inspired architecture. We further investigated the accuracy of the models based on two different (locust-like, broader; Drosophila-like, narrower) visual fields of the sky. We find that the accuracy of the computations depends on the overhead visual scene, specifically that wider fields of view perform better when information about the overhead polarization pattern is missing.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Navigation is an essential ability for both humans and animals. Due to the limitations of current global navigation satellite systems (GNSS) and inertial navigation systems (INS), many efforts have been made to find new approaches to orientation [1,2]. Insects can draw on multiple sources of visual information when navigating: the position of the sun, the stars, the Milky Way at night and landmarks all provide cues for an internal compass [3–6]. Recently, a growing number of orientation methods inspired by animals have been proposed [7–9].

As a result of scattering, skylight forms a characteristic polarization pattern, and this polarization distribution is believed to have a specific relationship with the position of the sun. Many insects are able to use the polarization of light as a source of visual information [10–12]. Using the dorsal rim area (DRA) of their compound eyes, insects exploit the overhead variation in the polarization of light to determine the direction towards either the sun or the anti-sun position. A number of studies have now demonstrated that flying, walking and underwater animals alike can navigate with information derived from polarization patterns [13–15].

The DRA is the upwards-facing, polarization-sensitive part of the eye and is comprised of several specialized rows of ommatidia [16]. Typically containing an ultraviolet-sensitive visual pigment, the photoreceptors are short, untwisted and exhibit a high degree of polarization sensitivity. In some species, a lack of screening pigment and a large diameter also give each ommatidium a large acceptance angle and receptive field (around 20 degrees in crickets, for example) [17]. The photoreceptor cells, which are intrinsically dichroic [18], are arranged such that an opponent comparison between different channels provides intensity-independent information about the polarization of the light. Importantly, the photoreceptor cell orientation in the DRA is arranged to provide a spatial representation of the skylight polarization [16,17]. This information is relayed via interneurons through the lamina, medulla and lobula, and then via transmedulla neurons to the anterior optic tubercle in the central brain [19]. In locusts, this spatial information about the sky is then mapped to the protocerebral bridge, a compartment of the central complex [20]. The ordered array of columnar neurons in the protocerebral bridge encodes the body angle relative to a celestial reference frame [21].

Many attempts have been made to develop novel orientation models that utilize polarized skylight patterns as insects do. The first type of approach is to build a point-source device consisting of two or three pairs of polarization direction analyzers made from photodetectors and linear film polarizers. Inspired by the crossed-analyzer configuration of ommatidia, the polarizing axes within each analyzer pair are set perpendicular to each other [22]. As such a device is designed to measure only the zenith area, its visual field is much smaller than the DRA of compound eyes, and when beams from the zenith are shrouded by clouds or leaves, orientation precision declines severely [23]. The second type of approach is to build an imaging polarimeter using a digital camera, a fisheye lens and a rotatable polarizer (time-sharing), or multiple cameras, fisheye lenses and fixed polarizers (space-sharing) [24,25]. The wide field of view makes it possible to acquire and analyze information from almost the entire polarization pattern of the sky. By retrieving Stokes parameters from images taken with different polarization transmission orientations, the symmetry axes of the degree-of-polarization and E-vector angle maps are used to predict the orientation of the solar meridian [26–28]. Because these methods depend heavily on weather conditions (clouds, pollution and surrounding coverage), orientation precision drops greatly when polarization information is missing for parts of the sky. While efforts have been made to address this problem [29–31], the proposed image-processing algorithms deviate from the working principles of insects' compound eyes [32].

In this work, our aim is to build a synthetic polarization-based orientation model, inspired by the polarized-light navigation systems of insects, using artificial neural networks (ANNs). Our goal is a simple, low-complexity method for accurately obtaining a heading from skylight polarization information that is directly bio-inspired by the neurophysiology. By abstracting the structure of the photoreceptor cells in the DRA and the polarization-sensitive neurons of insects' nervous systems into a simpler system that can be constructed, the orientation information provided by skylight polarization patterns can be investigated and used in an application control system. In the set of verification tests reported here, we addressed four questions: (1) Does our Polarization-analyzing Artificial Neural Network (POL-ANN) successfully identify the direction of the sun? (2) Do different DRA fields of view function equally well under an un-obscured whole-sky polarization pattern? (3) Does the bio-inspired model offer advantages over models based on naive CNNs applied in conventional computer vision? (4) Do different DRAs differ when some of the polarization information is missing for parts of the sky? Ultimately, we hope that this study will lead to further testable hypotheses that may be studied in both a bio-engineered and an ecological context.

2. Methods

2.1. Polarization sensitivity of invertebrate photoreceptors

A variety of computational models have been proposed to simulate the response of polarization-sensitive DRA photoreceptors. In this work, we follow the formulations of Bernard and Wehner and of How and Marshall [33,34]. The output $S_n$ of a single photoreceptor cell $n$ is modeled as

$$S_n = K I \left( 1 + d \cos 2(\varphi - \varphi_n) \right), \tag{1}$$
where $I$ indicates the skylight intensity, $d$ and $\varphi$ represent the degree of polarization and the angle of polarization respectively, and $K$ is a scaling factor [33,34]. The opponent comparison between two orthogonal channels (tuned to $\varphi_n$ and $\varphi_n + 90°$) then gives the intensity-independent output $P_n$,
$$P_n = \log \left( \frac{1 + d \cos 2(\varphi - \varphi_n)}{1 - d \cos 2(\varphi - \varphi_n)} \right). \tag{2}$$
In these calculations, the degree of polarization and angle of polarization information is provided by our previously published analytical model of the skylight polarization field [35].
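
For illustration, a minimal NumPy sketch of Eqs. (1) and (2) follows; the function names are ours, angles are taken in radians, and this is an interpretation of the model rather than the authors' code.

import numpy as np

def receptor_response(I, d, phi, phi_n, K=1.0):
    # Eq. (1): response S_n of one photoreceptor with tuning angle phi_n.
    return K * I * (1.0 + d * np.cos(2.0 * (phi - phi_n)))

def opponent_signal(d, phi, phi_n):
    # Eq. (2): intensity-independent opponent output P_n of two orthogonal
    # channels (tuned to phi_n and phi_n + pi/2); note the output diverges
    # as d -> 1 with the E-vector exactly aligned to phi_n.
    c = d * np.cos(2.0 * (phi - phi_n))
    return np.log((1.0 + c) / (1.0 - c))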

2.2. Visual fields of the dorsal rim area

Animals employ multiple strategies with their DRAs to cover a view of the sky, and Heinze recently provided an in-depth review detailing two different categories of DRA visual fields in insects [19]. The first type is exemplified by locusts and crickets, where each single ommatidium has a large angular sensitivity that overlaps with the visual fields of many other ommatidia [17]. Each area of the sky is therefore covered by multiple photoreceptive units [16]. Furthermore, the DRA covers a low-aspect-ratio visual field that is slightly extended along the anterior/posterior axis; typically, this covers a large portion of the sky, from the zenith to 30 degrees from the horizon. The second type of DRA is exemplified by the genus Drosophila, where the field of view of each ommatidium is narrower, creating less overlap [36]. The whole visual field also has a much greater aspect ratio, giving an elongated, narrow, strip-like coverage of a much smaller area of the sky.

In this study, we model both of these types of DRA. The diagrams in Fig. 1 depict the model setup and the coverage of the skylight polarization pattern. We term these two models DRA-W and DRA-L (Fig. 1), for the wide and long visual fields respectively. A further variable we include in the models is the number of photoreceptor rows that make up the model detector. In all cases, the photoreceptors' angles of maximum sensitivity are oriented radially.


Fig. 1 The layouts of two different types of dorsal rim area (DRA) and their corresponding visual fields in space. (A) Type DRA-W with short but wide elliptical visual field. (B) Type DRA-L with elongated and narrow strip-like visual field.


The photoreceptor acceptance angles are set to 20° for the DRA-W model and 4° for the DRA-L model, and all points of the sky that fall within a receptor's visual field contribute to its input. To account for the overlapping fields of view, we sum the inputs from every part of the sky viewed by each photoreceptor,

$$P_v = \log \left( \frac{\sum_{n=1}^{r} I_n \left( 1 + d_n \cos 2(\varphi_n - \varphi_v) \right)}{\sum_{n=1}^{r} I_n \left( 1 - d_n \cos 2(\varphi_n - \varphi_v) \right)} \right), \tag{3}$$
where $r$ is the number of sky points within the target receptor's visual field. We model $m$ receptors for each DRA ($m$ is set to 36 in this work), and every receptor has its angle of maximum sensitivity $\varphi_v$ perpendicular to the tangent of the DRA at its position. The output of a DRA is therefore a one-dimensional vector with $m$ elements,

$$M_{\mathrm{DRA}} = [P_1, P_2, P_3 \dots P_v \dots P_m]. \tag{4}$$
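
As an illustration of Eqs. (3) and (4), a minimal sketch of this DRA sampling stage follows; the boolean-mask representation of the receptors' visual fields and the function name are our assumptions.

import numpy as np

def dra_output(I, d, phi, fields, phi_v):
    # Eqs. (3)-(4): pooled opponent output P_v for each of the m receptors.
    #   I, d, phi : intensity, degree and angle (rad) of polarization at the
    #               sampled sky points (1-D arrays of equal length)
    #   fields    : boolean (m, n_points) array, True where a sky point lies
    #               inside receptor v's acceptance cone
    #   phi_v     : (m,) angles of maximum sensitivity
    M = np.empty(len(phi_v))
    for v, mask in enumerate(fields):
        c = d[mask] * np.cos(2.0 * (phi[mask] - phi_v[v]))
        M[v] = np.log(np.sum(I[mask] * (1.0 + c)) /
                      np.sum(I[mask] * (1.0 - c)))
    return M  # the vector M_DRA of Eq. (4)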

2.3. Polarization analyzing artificial neural network (POL-ANN)

Inspired by biological neuropils, artificial neural networks (ANNs) are mathematical or computational models that can be used to model real neural networks and to estimate or approximate unknown neural functions. Owing to their similarities with biological neural networks in information processing, ANNs have been applied in many biological research areas, including molecular biology, ecological modeling, bionic sensors and bio-inspired visual signal processing [37]. Mirroring the three stages of neural processing described above, we construct the network in three parts: (1) an input layer, (2) two hidden layers and (3) an output layer (Fig. 2). Eq. (4) in Section 2.2 provides the input of one DRA to the first hidden layer; in our model, we use a vector with $2m$ elements based on the information from both the left and right DRAs. The hidden layers of the network represent the two neural stages of processing: the three neurons in the first hidden layer mimic the three subtypes of POL neurons tuned to angles of polarization at 10°, 60° and 130°, while the sixteen neurons in the second hidden layer mimic the sixteen columnar neurons encoding topographic information in the protocerebral bridge [20,38]. The information encoded in these neurons is then used to determine the direction of the axis between the sun and anti-sun positions (SM-ASM).


Fig. 2 The flow of polarization information. In the forward-propagation process, the original polarized skylight signals are first sampled by the DRA model. The outputs of the DRA model form the input layer and are transferred into the two hidden layers, which have 3 and 16 neurons respectively. In the back-propagation process, the error signals are transferred back from the output layer to the hidden layers and the input layer to adjust the weights of the neurons in all of the layers. Here, $w1_{ij}$ represents the vector of weights between the input layer and the first hidden layer, $w2_{jk}$ the vector of weights between the two hidden layers, and $w3_k$ the vector of weights between the second hidden layer and the output layer. The inputs and outputs of the first hidden layer, second hidden layer and output layer are $S1_j$, $S2_k$, $S3$ and $O1_j$, $O2_k$, $O3$ respectively.


The first step is to set initial weights for every connection between adjacent network layers; the forward propagation is then

$$\begin{cases} O1_j(n) = f\left( S1_j(n) \right) = f\left( \sum_{i=1}^{2m} w1_{ij}(n) P_i(n) \right), & j \in [1,3], \\[4pt] O2_k(n) = f\left( S2_k(n) \right) = f\left( \sum_{j=1}^{3} w2_{jk}(n) O1_j(n) \right), & k \in [1,16], \\[4pt] O3(n) = g\left( S3(n) \right) = g\left( \sum_{k=1}^{16} w3_k(n) O2_k(n) \right), \end{cases} \tag{5}$$
where $n$ represents the iteration number, the activation function $f$ of the two hidden layers is the 'tansig' function, and the activation function $g$ of the output layer is the 'purelin' (linear) function [37]. If the expected output of the network is $d(n)$, then the prediction error of the network, $e(n)$, is
$$e(n) = d(n) - O3(n). \tag{6}$$
During the training phase, we use a gradient descent method in a back-propagation algorithm to calculate the derivative of the squared error function [37], E(n), with respect to the weights of the network such that,
$$E(n) = \tfrac{1}{2} e^2(n). \tag{7}$$
The weights of the neurons are then updated in reverse order, with the local gradients for the three layers defined as
$$\begin{cases} \delta 3(n) = -\dfrac{\partial E(n)}{\partial S3(n)} = e(n)\, g'\!\left( S3(n) \right), \\[6pt] \delta 2_k(n) = -\dfrac{\partial E(n)}{\partial S2_k(n)} = f'\!\left( S2_k(n) \right) \delta 3(n)\, w3_k(n), & k \in [1,16], \\[6pt] \delta 1_j(n) = -\dfrac{\partial E(n)}{\partial S1_j(n)} = f'\!\left( S1_j(n) \right) \sum_{k=1}^{16} \delta 2_k(n)\, w2_{jk}(n), & j \in [1,3]. \end{cases} \tag{8}$$
Thus, the weight changes for the neurons in every iteration can be described as
$$\begin{cases} \Delta w3_k(n) = \eta\, \delta 3(n)\, O2_k(n), & k \in [1,16], \\ \Delta w2_{jk}(n) = \eta\, \delta 2_k(n)\, O1_j(n), & j \in [1,3],\ k \in [1,16], \\ \Delta w1_{ij}(n) = \eta\, \delta 1_j(n)\, P_i(n), & i \in [1,2m],\ j \in [1,3], \end{cases} \tag{9}$$
where $\eta$ is a constant representing the learning rate (set to 0.01 in this work).
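
A minimal NumPy sketch of one training iteration, implementing Eqs. (5)–(9), is given below; the weight-initialization scale and the encoding of the target $d(n)$ are our assumptions, as the paper does not specify them.

import numpy as np

class PolAnn:
    # POL-ANN with 2m inputs, hidden layers of 3 and 16 neurons, one output.
    def __init__(self, m=36, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(scale=0.1, size=(2 * m, 3))  # w1_ij
        self.w2 = rng.normal(scale=0.1, size=(3, 16))     # w2_jk
        self.w3 = rng.normal(scale=0.1, size=16)          # w3_k

    def forward(self, p):                     # Eq. (5)
        self.p = p
        self.o1 = np.tanh(p @ self.w1)        # 'tansig' layer, 3 neurons
        self.o2 = np.tanh(self.o1 @ self.w2)  # 'tansig' layer, 16 neurons
        self.o3 = float(self.o2 @ self.w3)    # 'purelin' (linear) output
        return self.o3

    def train_step(self, p, d, eta=0.01):
        e = d - self.forward(p)               # Eq. (6)
        delta3 = e                            # Eq. (8); g'(S3) = 1 for purelin
        delta2 = (1.0 - self.o2 ** 2) * delta3 * self.w3    # f' = 1 - tanh^2
        delta1 = (1.0 - self.o1 ** 2) * (delta2 @ self.w2.T)
        self.w3 += eta * delta3 * self.o2     # Eq. (9)
        self.w2 += eta * np.outer(self.o1, delta2)
        self.w1 += eta * np.outer(p, delta1)
        return e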

3. Results and discussion

3.1. Network training

To conduct the training phase, our previously published analytical skylight polarization model was used to generate 291600 different skylight polarization fields [35]. The parameters spanned the sun azimuth $\varphi_s \in [1°, 360°]$, the sun elevation $e_s \in [1°, 90°]$ and the sky turbidity $T \in [1, 9]$, giving $360 \times 90 \times 9 = 291600$ combinations. The patterns were input into four different network analyses using four different model DRA architectures. We created two DRA-W models, one with a single row of radially aligned photoreceptors (DRA-W-1) and one with three concentric rows of photoreceptors (DRA-W-3). Similarly, we used two DRA-L architectures (DRA-L-1 and DRA-L-3) with the same one- and three-row structures. The varying number of rows mimics the differences between the DRA-W and DRA-L types in animals, where the DRA typically contains fewer rows, sometimes just one, for the wider fields of view. By training with the sampling matrices in all cases, we obtained a stable, convergent post-trained network for each of the four DRA models.
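
The parameter grid can be enumerated directly; in the sketch below (an illustration only), each combination would be passed to the analytical model of Ref. [35] and the resulting field sampled by the DRA model of Section 2.2.

import itertools

grid = list(itertools.product(range(1, 361),   # sun azimuth, 1..360 deg
                              range(1, 91),    # sun elevation, 1..90 deg
                              range(1, 10)))   # sky turbidity, 1..9
assert len(grid) == 291600  # one skylight polarization field per combination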

3.2. Network verification

In order to test the performance of all four post-trained networks, we used 64 calculated skylight polarization patterns, again generated by the analytical sky polarization model [35]. These patterns were set at four different turbidities (3, 4, 6, 8), four different sun elevations (30°, 40°, 50°, 60°) and four different sun azimuths (30.5°, 118.5°, 233.5°, 324.5°). Representative images of simulated skylight polarization patterns are shown in Fig. 3.


Fig. 3 Representative images of simulated skylight polarization patterns. The first and second rows are Dop (degree of polarization) and Aop (angle of polarization) images respectively.


From each of the models, we compared the calculated directions of the SM-ASM axis with the known value and determined the error in the calculation.
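
The paper's exact error metric is not stated; a natural choice, which we assume here, accounts for the 180° periodicity of the SM-ASM axis.

def axis_error_deg(predicted_deg, true_deg):
    # Smallest angle between two axis directions (axes are 180-deg periodic).
    diff = abs(predicted_deg - true_deg) % 180.0
    return min(diff, 180.0 - diff)

# e.g. axis_error_deg(179.5, 0.5) == 1.0, not 179.0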

The data in Fig. 4(a) demonstrate that the errors of all four networks are very close, with maximum values under 0.5° and medians between approximately 0.1° and 0.2°. Figure 4(b) illustrates the distribution of the heading errors: the two networks based on DRA models with three rows of receptors (DRA-W-3 and DRA-L-3) are more accurate than those based on DRA models with one circle of receptors (DRA-W-1 and DRA-L-1). The prediction errors of the networks based on DRA-W-1 and DRA-L-1 are evenly distributed between 0° and 0.5°, while for the networks based on DRA-W-3 and DRA-L-3 most errors are under 0.1° and 0.2° respectively.


Fig. 4 Calculated errors from each of the four DRA models. (A) Boxplot showing the minimum, first quartile, median, third quartile and maximum of the errors in every DRA group. (B) Histograms showing the occurrence rates of the errors; the total number of errors in every DRA group is 64. 'W-1' and 'W-3' denote the networks based on the DRA-W model with receptors distributed in one circle and three circles respectively; 'L-1' and 'L-3' denote the corresponding networks based on the DRA-L model.


3.3. Comparisons between different models

We hypothesized that the accuracies of the DRA-W and DRA-L models would differ when some of the polarization information was missing for parts of the sky. This could happen, for example, under cloudy skies, or when terrestrial insects have parts of the sky obscured by flora. Images in the first row of Fig. 5 illustrate snapshots of the four calculated skylight patterns we used. The first is an un-obstructed view of the whole skylight polarization pattern. The second has one large area of the sky obscured, simulating a sky partly covered by a single leaf or cloud. The third has several small missing patches, simulating navigation within grassland or under a sky partly covered by multiple clouds. The final situation has small, randomly distributed missing points all over the sky, simulating hazy skies.
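
A sketch of how such test conditions can be produced from a complete sky map follows; the region sizes, patch count and point density are illustrative assumptions, with NaN marking missing information (the white areas in Fig. 5).

import numpy as np

def occlude(sky, mode, rng=None):
    # Return a copy of a 2-D sky map with NaN marking missing information.
    if rng is None:
        rng = np.random.default_rng(1)
    out = sky.astype(float)
    h, w = out.shape
    if mode == "large":        # (B) one large obscured area (leaf / cloud)
        out[h // 3 : 2 * h // 3, w // 3 : 2 * w // 3] = np.nan
    elif mode == "patches":    # (C) several small missing patches
        for _ in range(8):
            r = rng.integers(0, h - 10)
            c = rng.integers(0, w - 10)
            out[r : r + 10, c : c + 10] = np.nan
    elif mode == "points":     # (D) randomly distributed missing points
        out[rng.random((h, w)) < 0.3] = np.nan
    return out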


Fig. 5 Errors under four sky situations for different models. Images in the first row illustrate the degree of polarization of the whole sky under four cases of different coverage: (A) un-obscured skylight polarization pattern, (B) skylight polarization pattern with one large obscured area, (C) skylight polarization pattern with several small missing patches, (D) skylight polarization pattern with randomly distributed missing points. The white areas in the calculated images represent areas with missing information. For every sky situation, we used 64 calculated skylight polarization patterns for testing. Again, these test patterns were set at four different turbidities (3, 4, 6, 8), four different sun elevations (30°, 40°, 50°, 60°) and four different sun azimuths (30.5°, 118.5°, 233.5°, 324.5°). Boxplots show the minimum, first quartile, median, third quartile and maximum of the errors, and histograms show the occurrence rates of the errors. The third, fourth, fifth and sixth rows present results of the DRA-W-3 model, DRA-L-3 model, AlexNet model and ResNet-50 model respectively.


To compare our bio-inspired models with the networks applied in conventional computer vision, we also constructed two models based on widely used CNN architectures. Treating this as a standard computer vision task, we built these two models without incorporating any biological knowledge. The inputs of the two networks are the original Dop (degree of polarization) and Aop (angle of polarization) images without pre-processing. The structures of the two models are based on two typical networks, AlexNet and ResNet-50 respectively [39,40]. As for the proposed bio-inspired models, the two CNN models were trained using different skylight polarization fields and tested under the four sky conditions described above.
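
The paper does not give implementation details for the CNN baselines; one straightforward adaptation, sketched below with PyTorch/torchvision, replaces each network's first convolution to accept the two-channel (Dop, Aop) input and its final layer with a single regression output.

import torch.nn as nn
from torchvision.models import alexnet, resnet50

def make_cnn_baselines():
    # AlexNet: 2-channel input, single-value regression head.
    a = alexnet(weights=None)
    a.features[0] = nn.Conv2d(2, 64, kernel_size=11, stride=4, padding=2)
    a.classifier[6] = nn.Linear(4096, 1)   # predict the SM-ASM axis direction

    # ResNet-50: same adaptations.
    r = resnet50(weights=None)
    r.conv1 = nn.Conv2d(2, 64, kernel_size=7, stride=2, padding=3, bias=False)
    r.fc = nn.Linear(r.fc.in_features, 1)
    return a, r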

Comparisons were made using the three-row models, as these showed the lowest variance in the error values. The plotted error data in columns (B) and (C) of Fig. 5 show that the performance of the DRA-W-3 model (most errors under 0.5°) is better than that of the DRA-L-3 model (errors up to 3°, and most errors greater than 1°) for the two situations where the missing information is a single large area or multiple smaller patches. When provided with un-obscured skylight polarization patterns, both models based on the CNN architectures performed as well as the DRA-W-3 model; however, their performance decreased significantly when part of the polarization information was missing (most errors greater than 1°). In the fourth situation, with a randomly distributed set of missing points, all networks performed almost equally well, with most errors under 0.5°.

3.4. Experiments under real polarized skylight patterns

In order to test the performance of the post-trained networks in the real world, we used a ground-based all-sky imaging polarimeter to collect skylight polarization data. Measurements were taken at the main teaching building of Hefei University of Technology (31°50′49″N, 117°17′43″E), and the instrument (Fig. 6) is mainly composed of a rotatable polarizer, a fisheye lens (Sigma/8mm/F3.5) and a digital camera (Nikon D800).


Fig. 6 Photo of the instrument for skylight polarization pattern measurements.


Then, 64 groups of data with different sun elevations, sun azimuths and atmospheric turbidities were selected from the collected polarized skylight patterns for further tests. Representative images of the measured skylight polarization patterns are shown in Fig. 7.


Fig. 7 Representative images of measured skylight polarization patterns. The first and second rows are Dop (degree of polarization) and Aop (angle of polarization) images respectively.


Again, the three-row models were used for performance testing under real polarized skylight patterns. Owing to the gap between measured and simulated data, the prediction errors of all networks under real polarized skylight patterns are greater than in the tests of Section 3.3. For the un-obscured situation, the maximum errors of the DRA-W-3 model and the DRA-L-3 model are 0.7° and 1.2° respectively. Moreover, the plotted error data in columns (B) and (C) of Fig. 8 show that the performance of the DRA-W-3 model (most errors under 1°) is still better than that of the DRA-L-3 model (errors up to 4.8°, and most errors greater than 3°) for the two situations where the missing information is a single large area or multiple smaller patches. For the two models based on the CNN architectures, most errors were greater than 3° when provided with partly obscured skylight polarization patterns. In the fourth situation, with a randomly distributed set of missing points, the results of the four types of networks are very close, with most errors under 1°.


Fig. 8 Errors under real skylight polarization patterns. Similar to the tests with simulation data, some pixels of the images were removed to mimic different sky conditions, (A) un-obscured skylight polarization pattern, (B) skylight polarization pattern with one large obscured area, (C) skylight polarization pattern with several small missing patches, (D) skylight polarization pattern with randomly distributed missing points. Boxplots show the minimum, first quartile, median, third quartile, maximum of errors and histograms show the occurrence rates of the errors. The third, fourth, fifth and sixth rows present results of the DRA-W-3 model, DRA-L-3 model, AlexNet model, and ResNet-50 model respectively.


The three-stage artificial neural network demonstrates good accuracy in predicting the direction of the sun under both un-obscured sky conditions and sky conditions with missing information. Inspired by the polarization vision of insects, the proposed DRA-W-3 model incorporates the response of polarization-sensitive dorsal rim area (DRA) receptors into the neural network pipeline and transforms the input skylight polarization pattern into a one-dimensional vector by pooling information within the visual field of each ommatidium. This pooling makes the model more robust under partly obscured sky conditions than the naive CNN models applied in conventional computer vision. Moreover, in contrast to the complex structures of the CNN models, training the bio-inspired networks is far less time-consuming: the training times of the AlexNet and ResNet-50 models are nearly 50 and 90 times longer, respectively, than that of the proposed POL-ANN model.

As we set out in the introduction, one aim of this study was that these results would suggest further questions for future work. It would be interesting to examine further the differences in navigational accuracy that are due to different sampling of the visual field. For example, is there a general correlation between the visual field of the DRA and an animal's primary navigation mode and use of information? Are DRAs with wider visual fields found in animals that predominantly use polarization information for navigation? Monarch butterflies, for instance, which have narrower angular sensitivity, rely primarily on the sun within a hierarchy of available visual cues and perhaps use polarization information only as a secondary cue [41]. Moreover, the polarization pattern changes as a function of wavelength. Whilst we have integrated over 'CCD-sensitive' wavelengths, many insects have evolved specific spectral sensitivities to blue or ultraviolet wavelengths. An interesting question to address is the level of accuracy achievable under different skylight conditions with such varying spectral sensitivities.

4. Conclusions

In summary, we created a new computational navigation model based on an artificial neural network inspired by how insects use the skylight polarization pattern. Our Polarization-analyzing Artificial Neural Network (POL-ANN) functions accurately under a variety of skylight conditions. Our investigations into the effect of the differently structured DRAs' visual fields demonstrate that a greater visual field is more robust in situations where areas of the skylight polarization pattern are occluded, whether by an obstruction, by clouds or by haze. The bio-inspired POL-ANN model also offers marked advantages over models based on conventional CNNs for the task of predicting the direction of the sun from skylight polarization patterns. Our future efforts will concentrate on incorporating the POL-ANN within a robotic navigation system, thus providing a bio-informed framework for enhanced navigation in GPS-denied environments.

Funding

National Natural Science Foundation of China (NSFC) (61801161 and 61571175); National Defense Science and Technology Project (1816321TS00106401); Air Force Office of Scientific Research (FA8655-12-2112).

Acknowledgments

We are grateful to Martin How and Eric Warrant for helpful discussions and critical comments.

References

1. M. O. Franz and H. A. Mallot, “Biomimetic robot navigation,” Robot. Auton. Syst. 30(1–2), 133–153 (2000).

2. R. Wiltschko, “Navigation,” J. Comp. Physiol. A Neuroethol. Sens. Neural Behav. Physiol. 203(6–7), 455–463 (2017).

3. S. M. Reppert, R. J. Gegear, and C. Merlin, “Navigational mechanisms of migrating monarch butterflies,” Trends Neurosci. 33(9), 399–406 (2010).

4. J. F. Diego-Rasilla and R. M. Luengo, “Celestial orientation in the marbled newt (Triturus marmoratus),” J. Ethol. 20(2), 137–141 (2002).

5. M. Dacke, E. Baird, M. Byrne, C. H. Scholtz, and E. J. Warrant, “Dung beetles use the Milky Way for orientation,” Curr. Biol. 23(4), 298–300 (2013).

6. T. S. Collett and P. Graham, “Animal navigation: path integration, visual landmarks and cognitive maps,” Curr. Biol. 14(12), R475–R477 (2004).

7. J. R. Serres and F. Ruffier, “Optic flow-based collision-free strategies: From insects to robots,” Arthropod Struct. Dev. 46(5), 703–717 (2017).

8. J. Keshavan, G. Gremillion, H. Alvarez-Escobar, and J. S. Humbert, “Autonomous vision-based navigation of a quadrotor in corridor-like environments,” Int. J. Micro Air Veh. 7(2), 111–123 (2015).

9. C. Lee, S. E. Yu, and D. Kim, “Landmark-based homing navigation using omnidirectional depth information,” Sensors (Basel) 17(8), 1928 (2017).

10. P. Duelli and R. Wehner, “The spectral sensitivity of polarized light orientation in Cataglyphis bicolor (Formicidae, Hymenoptera),” J. Comp. Physiol. A Neuroethol. Sens. Neural Behav. Physiol. 86(1), 37–53 (1973).

11. M. Dacke, D. E. Nilsson, C. H. Scholtz, M. Byrne, and E. J. Warrant, “Insect orientation to polarized moonlight,” Nature 424(6944), 33 (2003).

12. U. Homberg, “Sky compass orientation in desert locusts – evidence from field and laboratory studies,” Front. Behav. Neurosci. 9, 346 (2015).

13. G. Horváth, Polarized Light and Polarization Vision in Animal Sciences (Springer, 2014).

14. S. B. Powell, R. Garnett, J. Marshall, C. Rizk, and V. Gruev, “Bioinspired polarization vision enables underwater geolocalization,” Sci. Adv. 4(4), eaao6841 (2018).

15. A. Lerner, S. Sabbah, C. Erlick, and N. Shashar, “Navigation by light polarization in clear and turbid waters,” Philos. Trans. R. Soc. Lond. B Biol. Sci. 366(1565), 671–679 (2011).

16. T. Labhart and E. P. Meyer, “Detectors for polarized skylight in insects: a survey of ommatidial specializations in the dorsal rim area of the compound eye,” Microsc. Res. Tech. 47(6), 368–379 (1999).

17. F. Schmeling, J. Tegtmeier, M. Kinoshita, and U. Homberg, “Photoreceptor projections and receptive fields in the dorsal rim area and main retina of the locust eye,” J. Comp. Physiol. A Neuroethol. Sens. Neural Behav. Physiol. 201(5), 427–440 (2015).

18. N. W. Roberts, M. L. Porter, and T. W. Cronin, “The molecular basis of mechanisms underlying polarization vision,” Philos. Trans. R. Soc. Lond. B Biol. Sci. 366(1565), 627–637 (2011).

19. S. Heinze, Polarized Light and Polarization Vision in Animal Sciences (Springer, 2014), Chap. 4.

20. S. Heinze and U. Homberg, “Maplike representation of celestial E-vector orientations in the brain of an insect,” Science 315(5814), 995–997 (2007).

21. U. Homberg, S. Heinze, K. Pfeiffer, M. Kinoshita, and B. el Jundi, “Central neural coding of sky polarization in insects,” Philos. Trans. R. Soc. Lond. B Biol. Sci. 366(1565), 680–687 (2011).

22. D. Lambrinos, R. Möller, T. Labhart, R. Pfeifer, and R. Wehner, “A mobile robot employing insect strategies for navigation,” Robot. Auton. Syst. 30(1–2), 39–64 (2000).

23. S. B. Karman, S. Z. Diah, and I. C. Gebeshuber, “Bio-inspired polarized skylight-based navigation sensors: A review,” Sensors (Basel) 12(11), 14232–14261 (2012).

24. Y. Wang, X. Hu, J. Lian, L. Zhang, Z. Xian, and T. Ma, “Design of a device for sky light polarization measurements,” Sensors (Basel) 14(8), 14916–14931 (2014).

25. C. Fan, X. Hu, X. He, L. Zhang, and Y. Wang, “Multicamera polarized vision for the orientation with the skylight polarization patterns,” Opt. Eng. 57, 043101 (2018).

26. H. Lu, K. Zhao, Z. You, and K. Huang, “Angle algorithm based on Hough transform for imaging polarization navigation sensor,” Opt. Express 23(6), 7248–7262 (2015).

27. W. Stürzl, “A lightweight single-camera polarization compass with covariance estimation,” in Proceedings of the IEEE International Conference on Computer Vision (IEEE, 2017), pp. 5363–5371.

28. W. Zhang, Y. Cao, X. Zhang, and Z. Liu, “Sky light polarization detection with linear polarizer triplet in light field camera inspired by insect vision,” Appl. Opt. 54(30), 8962–8970 (2015).

29. H. Zhao, W. Xu, Y. Zhang, X. Li, H. Zhang, J. Xuan, and B. Jia, “Polarization patterns under different sky conditions and a navigation method based on the symmetry of the AOP map of skylight,” Opt. Express 26(22), 28589–28603 (2018).

30. W. Zhang, Y. Cao, X. Zhang, Y. Yang, and Y. Ning, “Angle of sky light polarization derived from digital images of the sky under various conditions,” Appl. Opt. 56(3), 587–595 (2017).

31. J. Tang, N. Zhang, D. Li, F. Wang, B. Zhang, C. Wang, C. Shen, J. Ren, C. Xue, and J. Liu, “Novel robust skylight compass method based on full-sky polarization imaging under harsh conditions,” Opt. Express 24(14), 15834–15844 (2016).

32. N. W. Roberts, M. J. How, M. L. Porter, S. E. Temple, R. L. Caldwell, S. B. Powell, V. Gruev, N. J. Marshall, and T. W. Cronin, “Animal polarization imaging and implications for optical processing,” Proc. IEEE 102(10), 1427–1434 (2014).

33. G. D. Bernard and R. Wehner, “Functional similarities between polarization vision and color vision,” Vision Res. 17(9), 1019–1028 (1977).

34. M. J. How and N. J. Marshall, “Polarization distance: a framework for modelling object detection by polarization vision systems,” Proc. Biol. Sci. 281(1776), 20131632 (2013).

35. X. Wang, J. Gao, Z. G. Fan, and N. W. Roberts, “An analytical model for the celestial distribution of polarized light, accounting for polarization singularities, wavelength and atmospheric turbidity,” J. Opt. 18(6), 065601 (2016).

36. P. T. Weir, M. J. Henze, C. Bleul, F. Baumann-Klausener, T. Labhart, and M. H. Dickinson, “Anatomical reconstruction and functional imaging reveal an ordered array of skylight polarization detectors in Drosophila,” J. Neurosci. 36(19), 5397–5404 (2016).

37. D. Graupe, Principles of Artificial Neural Networks (World Scientific, 2013).

38. A. Honkanen, A. Adden, J. da Silva Freitas, and S. Heinze, “The insect central complex and the neural basis of navigational strategies,” J. Exp. Biol. 222(Pt Suppl 1), jeb188854 (2019).

39. A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems (NIPS, 2012), pp. 1097–1105.

40. K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2016), pp. 770–778.

41. B. el Jundi, E. J. Warrant, M. J. Byrne, L. Khaldy, E. Baird, J. Smolka, and M. Dacke, “Neural coding underlying the cue preference for celestial orientation,” Proc. Natl. Acad. Sci. U. S. A. 112(36), 11395–11400 (2015).
