Optica Publishing Group

Multispectral curved compound eye camera

Open Access

Abstract

In this work, we propose a new type of multispectral imaging system, named the multispectral curved compound eye camera (MCCEC). The MCCEC consists of three subsystems: a curved micro-lens array integrated with selected narrow-band optical filters, an optical transformation subsystem, and a data processing unit with an image sensor. The MCCEC system can achieve multispectral imaging over an ultra-large field of view (FOV) and obtain information from multiple spectral segments in real time. Moreover, the system has the advantages of small size, light weight, and high sensitivity in comparison with conventional multispectral cameras. In the current work, we mainly focus on the optical design of the MCCEC, which exploits the overlap of FOV between neighboring clusters of ommatidia to achieve multispectral imaging at an ultra-large FOV. The optical layout of the curved micro-lens array, the narrow-band filter array, and the optical relay system for image-plane transformation are carefully designed and optimized. The overall size of the optical system is 93 mm × 42 mm × 42 mm. The simulation results show that a maximum FOV of about 120° can be achieved for seven-waveband multispectral imaging with center wavelengths of 480 nm, 550 nm, 591 nm, 676 nm, 704 nm, 740 nm, and 767 nm. The newly designed MCCEC has great potential as an airborne or satellite-borne payload for real-time remote sensing and thus paves a new way for the design of compact, light-weight spectral-imaging cameras with an ultra-large FOV.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Multispectral imaging technology has found wide applications in remote sensing because it offers a relatively high spectral resolution and can obtain both the target image and spectral information simultaneously. To improve the performance of multispectral cameras, various new multispectral imaging technologies have been proposed in the past [1–9]. Among them, a filter wheel or a liquid crystal tunable filter is normally used to acquire narrowband filtered images sequentially, so these systems are not suitable for real-time spectral imaging applications [7–10]. In addition, the integration level of these systems is not high due to the large volume of the filter wheel or liquid crystal tunable filter. Furthermore, the FOV of a conventional multispectral camera is limited by the use of a traditional single-aperture optical imaging system. To realize real-time multispectral imaging, an array camera configuration can be used [11,12]. In such a system, every camera is integrated with a band filter to capture the image at that waveband, so multispectral images of the same object are acquired by all cameras simultaneously. The problem with this kind of system is the high complexity of both the configuration and the post image processing. Moreover, the FOV of such a configuration is still not large. However, multispectral cameras with light weight, compact size, and real-time imaging ability at a large FOV are highly desirable for airborne or satellite-borne remote sensing applications.

Inspired by the biological compound eyes found in nature, the bionic compound eye camera is an optical imaging system consisting of multiple sub-eyes [13–22]. Bionic compound eyes can be divided into different types, i.e., the apposition compound eye, the optical superposition compound eye, and the neural superposition compound eye. Figure 1 shows the schematic of an apposition compound eye. Bio-inspired cameras are particularly notable for their large FOV, small size, light weight, and high sensitivity to moving objects [23–27]. As a result, they have received increasing attention in the past few decades.

Fig. 1. The schematic of an apposition compound eye.

Owing to the great merits of the bionic compound eye, researchers have proposed applying it to multispectral imaging. In 2004, Rui Shogenji et al. proposed a compact multispectral imaging system based on the so-called planar compound eye imaging system, i.e., TOMBO [28]. The multispectral TOMBO system consists of an interference filter array, a planar micro-lens array, and an image sensor. Using this system, they successfully achieved multispectral imaging with seven wavebands from 400–700 nm. In 2013, Jin Jian et al. demonstrated the fabrication of a multispectral compound eye camera with four band filters [29]. They also used a planar microlens array, which was fabricated directly on top of the filter array by employing MEMS fabrication methods. More recently, A. Cao et al. demonstrated a three-waveband multispectral imaging technique based on a diffractive microlens array. In their experiments, the diffractive microlens array was also fabricated on a planar substrate to form the so-called artificial compound eye [30]. In 2017, Jianwei Chen et al. developed a hybrid imprinting method to design and fabricate a compact multi-layer curved artificial compound eye system for multispectral imaging [31]. Although a curved compound configuration was used in their system, only a 3×4 microlens array was used and thus the FOV was quite small. In addition, only four filters with quite broad bandwidths (∼100 nm) were used. In summary, previous studies on multispectral compound eye cameras were based either on a planar microlens array with narrow-band filters or on a curved microlens array with broadband filters. As a result, the FOV or spectral resolution of the aforementioned systems is limited, and so are their applications.

Here we propose a new multispectral imaging system, named the multispectral curved compound eye camera (MCCEC), which can simultaneously obtain information from seven narrow wavebands in real time over an ultra-large FOV of up to 120°.

2. The MCCEC system

As shown in Fig. 2, the newly designed MCCEC consists of three subsystems, namely a curved micro-lens array integrated with selected narrow-band optical filters, an optical transformation subsystem, and a data processing unit with an image sensor. The FOV of the MCCEC is determined by the front bionic curved compound eye; a FOV of $120^\circ$ is achieved in this work. Narrow-band optical filters are fixed right behind the microlens array, and there are seven wavebands for multispectral imaging in total.

Fig. 2. The cross-sectional schematic of the MCCEC.

2.1 Imaging theory of the curved compound eye

In this part, the relationship between the object and image planes is established. Figure 3 shows the object-image relationship for the curved compound eye and for a single lens. For the curved compound eye, the optical axis of the central lens is denoted by oo1, and the optical axes of the other lenses on the curved surface are denoted by oijo1, where the points o, o1 and oij refer to the central point of the curved surface, the central lens, and the other lenses respectively. The image height in the o-xyz coordinate system can be calculated from the transformation matrix of the spatial coordinate system. According to optical matrix theory, the transformation matrix for a single lens is [32]

$$T = \left[ {\begin{array}{cc} {1 + d\frac{{({n_1} - {n_2})}}{{{r_1}{n_2}}}}&{d\frac{{{n_1}}}{{{n_2}}}}\\ {\frac{{{n_1} - {n_2}}}{{{n_1}{r_1}}} + \frac{{{n_2} - {n_1}}}{{{n_1}{r_2}}} - d\frac{{{{({n_2} - {n_1})}^2}}}{{{n_1}{n_2}{r_1}{r_2}}}}&{1 + \frac{{d({n_2} - {n_1})}}{{{r_2}{n_2}}}} \end{array}} \right]$$

Fig. 3. The image-object relationship for the curved compound eye (a) and the single lens (b).

where the refractive index of the lens array is denoted by n2, the center thickness of the lens is d, and r1 and r2 are the curvature radii of the first and second surfaces of the lens respectively. The surrounding medium is air, with refractive index n1 = 1. For simplification, by defining $\frac{{{n_1} - {n_2}}}{{{n_1}}} = m$, $\frac{{{n_2} - {n_1}}}{{{n_2}}} = n$, $\frac{{{n_1}}}{{{n_2}}} = a$, Eq. (1) can be expressed as

$$T = \left[ {\begin{array}{cc} {1 - \frac{{nd}}{{{r_1}}}}&{da}\\ {\frac{m}{{{r_1}}} - \frac{m}{{{r_2}}} + d\frac{{mn}}{{{r_1}{r_2}}}}&{1 + \frac{{dn}}{{{r_2}}}} \end{array}} \right]$$

After imaging from the curved lens array in its respective coordinate system, the image distance ($L{^{\prime}_{ij}}$) can be calculated from the object distance (${L_{ij}}$). One can obtain the imaging matrix

$${T_{ij}} = \left[ {\begin{array}{cc} 1&{L{^{\prime}_{ij}}}\\ 0&1 \end{array}} \right]T\left[ {\begin{array}{cc} 1&{{L_{ij}}}\\ 0&1 \end{array}} \right]$$
$$\textrm{ = }\left[ {\begin{array}{cc} {1 - \frac{{nd}}{{{r_1}}} + (\frac{{m{r_2} - m{r_1} + dmn}}{{{r_1}{r_2}}})L{^{\prime}_{ij}}}&{(1 - \frac{{nd}}{{{r_1}}}){L_{ij}} + da + L{^{\prime}_{ij}}({L_{ij}}(\frac{{m{r_2} - m{r_1} + dmn}}{{{r_1}{r_2}}}) + 1 + \frac{{dn}}{{{r_2}}})}\\ {\frac{{m{r_2} - m{r_1} + dmn}}{{{r_1}{r_2}}}}&{1 + \frac{{dn}}{{{r_2}}} + {L_{ij}}(\frac{{m{r_2} - m{r_1} + dmn}}{{{r_1}{r_2}}})} \end{array}} \right]$$
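As a numerical sanity check, the composition in Eqs. (3)–(4) can be reproduced with a few lines of matrix algebra. The sketch below uses illustrative lens parameters (a BK7-like glass in air; none of these values are taken from the paper's design):

```python
import numpy as np

def translation(L):
    """Free-space propagation matrix over a distance L, as used in Eq. (3)."""
    return np.array([[1.0, L], [0.0, 1.0]])

def lens_matrix(n1, n2, d, r1, r2):
    """Thick-lens transfer matrix of Eq. (2), via the shorthand m, n, a."""
    m, n, a = (n1 - n2) / n1, (n2 - n1) / n2, n1 / n2
    return np.array([[1 - n * d / r1, d * a],
                     [m / r1 - m / r2 + d * m * n / (r1 * r2),
                      1 + d * n / r2]])

# Illustrative parameters: biconvex lens in air (not the paper's design values)
n1, n2, d, r1, r2 = 1.0, 1.5168, 2.0, 10.0, -10.0
L_obj, L_img = 100.0, 12.0

T = lens_matrix(n1, n2, d, r1, r2)
Tij = translation(L_img) @ T @ translation(L_obj)   # Eq. (3)

# Closed-form (2,1) entry from Eq. (4); it is unchanged by the translations
m, n = (n1 - n2) / n1, (n2 - n1) / n2
T21 = (m * r2 - m * r1 + d * m * n) / (r1 * r2)
```

The bottom-left entry of Tij equals T21, and the diagonal entries pick up the expected $L'_{ij}T_{21}$ and $L_{ij}T_{21}$ terms, matching Eq. (4) term by term.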

Equation (4) can be simplified into Eq. (5)

$${T_{ij}} = \left[ {\begin{array}{cc} {{T_{11}}}&{{T_{12}}}\\ {{T_{21}}}&{{T_{22}}} \end{array}} \right]$$

Then, according to optical imaging theory

$$\left[ {\begin{array}{c} {{Y_{i{j^{\prime}}}}}\\ {{p_{2ij}}} \end{array}} \right] = {T_{ij}}\left[ {\begin{array}{c} {{Y_{ij}}}\\ {{p_{1ij}}} \end{array}} \right]$$
$${Y_{ij}}^{\prime} = {T_{11}}{Y_{ij}} + {T_{12}}{p_{1ij}}$$

where $Y_{ij}$ and ${Y_{ij}}^{\prime}$ are the object and image heights respectively, and $p_{1ij}$ and $p_{2ij}$ are the slopes of the light ray at the incident and exit surfaces respectively. Therefore the magnification of the optical system can be expressed by

$${W_{ij}} = \frac{{{Y_{ij}}^{\prime}}}{{{Y_{ij}}}}$$

By substituting Eq. (7) into Eq. (8), one can get

$${W_{ij}} = \frac{{{T_{11}}{Y_{ij}} + {T_{12}}{p_{1ij}}}}{{{Y_{ij}}}}$$

By defining (xij, yij, zij) as the coordinates of the object point for the lens in row i and column j on the curved surface, and the primed coordinates as those of its image, one can have

$${x_{ij}}^{\prime} = {W_{ij}}{x_{ij}} = (\frac{{{T_{11}}{Y_{ij}} + {T_{12}}{p_{1ij}}}}{{{Y_{ij}}}}){x_{ij}}$$
$${y_{ij}}^{\prime} = {W_{ij}}{y_{ij}} = (\frac{{{T_{11}}{Y_{ij}} + {T_{12}}{p_{1ij}}}}{{{Y_{ij}}}}){y_{ij}}$$
$${z_{ij}}^{\prime} = {W_{ij}}{z_{ij}} = (\frac{{{T_{11}}{Y_{ij}} + {T_{12}}{p_{1ij}}}}{{{Y_{ij}}}}){z_{ij}}$$

Furthermore, according to the coordinate system transformation [33], the relationship between the coordinate systems oij-xijyijzij and o-xyz is

$$\left[ {\begin{array}{c} x\\ y\\ z \end{array}} \right] = R\left[ {\begin{array}{c} {{x_{ij}}}\\ {{y_{ij}}}\\ {{z_{ij}}} \end{array}} \right] + \left[ {\begin{array}{c} {\Delta x}\\ {\Delta y}\\ {\Delta z} \end{array}} \right]$$

By substituting Eqs. (10)–(12) into Eq. (13), the final image coordinates can be obtained

$${x_{ij}}^{\prime} = {W_{ij}}\omega (x)$$
$${y_{ij}}^{\prime} = {W_{ij}}\omega (y)$$
$${z_{ij}}^{\prime} = {W_{ij}}\omega (z)$$

where R = [R(ɛγ) R(ɛβ) R(ɛα)]3×3 is the rotation matrix, and ɛα, ɛβ and ɛγ represent the angles between the x, y and z axes of the oij-xijyijzij and o-xyz coordinate systems respectively. [Δx Δy Δz]T is the translation vector and can be represented by

$$\left[ {\begin{array}{c} {\Delta x}\\ {\Delta y}\\ {\Delta z} \end{array}} \right] = \left[ {\begin{array}{c} {r\sin \theta \cos \varphi }\\ {r\sin \theta \sin \varphi }\\ {r\cos \theta - r} \end{array}} \right]$$
where r is the distance from the point (x, y, z) to the origin of the o-xyz coordinate system. The relationship between the image and the object has now been established, so the image coordinates of the curved compound eye imaging system can be determined directly from the above equations.
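The frame-to-frame mapping of Eqs. (13) and (17) can be sketched numerically. The paper writes R = [R(ɛγ) R(ɛβ) R(ɛα)] without fixing the axis convention, so the rotation order below is an assumption for illustration:

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(b):
    c, s = np.cos(b), np.sin(b)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(g):
    c, s = np.cos(g), np.sin(g)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def to_global(p_ij, eps_a, eps_b, eps_g, r, theta, phi):
    """Map a point from the lenslet frame oij-xijyijzij to o-xyz, Eq. (13).

    The rotation order R = Rz(eps_g) Ry(eps_b) Rx(eps_a) is an assumed
    convention; the translation follows Eq. (17)."""
    R = rot_z(eps_g) @ rot_y(eps_b) @ rot_x(eps_a)
    t = np.array([r * np.sin(theta) * np.cos(phi),
                  r * np.sin(theta) * np.sin(phi),
                  r * np.cos(theta) - r])
    return R @ p_ij + t
```

For the central lenslet (all angles zero, θ = 0) the translation of Eq. (17) vanishes and the mapping reduces to the identity, as expected.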

2.2 Imaging theory of the MCCEC

In order to obtain multispectral information, the object must be imaged simultaneously by a cluster of ommatidia, i.e., a number of microlenses in the curved compound eye. In this case, there must be an overlap in FOV between neighboring microlenses of the MCCEC. In our design, seven microlenses image the same object from different angles of view. As shown in Fig. 4(a), a cluster of seven microlenses, one microlens surrounded by six others, is taken as a multispectral imaging unit, and these microlenses are attached to filters at different wavebands. Similarly, each microlens can be surrounded by six other microlenses so that a new cluster of seven microlenses acts as a new multispectral imaging unit. In this way, the whole working FOV of the MCCEC is covered by many multispectral imaging units. As a result, images obtained by the microlenses sharing the same spectral filter can be stitched together to form an image of that spectrum channel over the whole FOV, as shown in Fig. 4(b). By following the above design rules, the curved compound eye can be designed to achieve a multispectral imaging function over the whole FOV. In this way, an MCCEC that realizes real-time multispectral imaging at an ultra-large FOV can be obtained.
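The cluster layout above can be realized with a simple channel-assignment rule. The paper does not give an explicit formula, but on an axial hexagonal grid the classic 7-coloring (q + 3r) mod 7 has exactly the required property that every microlens together with its six neighbors carries all seven channels; a minimal sketch:

```python
def channel(q, r):
    """Spectral channel (0-6) of the lenslet at axial hex coordinate (q, r).

    (q + 3*r) mod 7 is a standard 7-coloring of the hexagonal grid; this is
    an assumed assignment rule for illustration, not taken from the paper."""
    return (q + 3 * r) % 7

def cluster_channels(q, r):
    """Set of channels covered by the lenslet at (q, r) and its 6 neighbors."""
    offsets = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]
    return {channel(q + dq, r + dr) for dq, dr in offsets}
```

Under this rule every 7-lens cluster covers all seven wavebands, so the same-channel sub-images tile the whole FOV as in Fig. 4(b).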

Fig. 4. Multispectral channels layout for MCCEC. (a) Layout for a cluster of microlenses with seven spectrum channels. (b) Layout for a whole curved compound eye.

2.3 FOV overlap for multispectral imaging

Figure 5 shows the schematic of the FOV overlap for the same spectrum channel. As can be seen clearly from Fig. 5, the FOV overlapping angle for the same spectrum channel is

$$\gamma = 2\alpha - {\beta _1}$$
where 2α is the FOV of a single lens, and β1 is the angle between two neighboring lenses with the same spectrum channel. Furthermore, from geometrical optics, one can get the following equation
$$\tan \alpha = \frac{d}{{2f^{\prime}}}$$
where f’ is the focal length of a single lens, and d is the clear aperture of the lens. As can be seen clearly from Fig. 5, the FOV overlap for the same spectrum channel exists only when the object distance L > L0, where L0 is the object distance at which the overlap of FOV is zero. L0 can be calculated from the coordinates of point A, the intersection point of the edge light rays of two neighboring lenses. The coordinates of point A in the o-xyz coordinate system can be derived as follows
$${x_a} = \frac{{2R\tan ({\alpha - {\beta_1}} ){{\cos }^2}\frac{{{\beta _1}}}{2} - R\sin {\beta _1}}}{{\tan ({\alpha - {\beta_1}} )+ \tan \alpha }}$$
$${y_a} ={-} \tan \alpha \frac{{2R\tan ({\alpha - {\beta_1}} ){{\cos }^2}\frac{{{\beta _1}}}{2} - R\sin {\beta _1}}}{{\tan ({\alpha - {\beta_1}} )+ \tan \alpha }} - R$$

Fig. 5. The imaging theory of the MCCEC, where 2ω=2α is the FOV of the lens, L is the object distance, β1 is the angle between two neighboring lenses with the same spectrum channel, area B is the FOV overlap area for the same spectrum channel imaging.

Thus one can get

$${L_0} = \tan \alpha \frac{{2R\tan ({\alpha - {\beta_1}} ){{\cos }^2}\frac{{{\beta _1}}}{2} - R\sin {\beta _1}}}{{\tan ({\alpha - {\beta_1}} )+ \tan \alpha }}$$
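Equations (18), (19) and (22) translate directly into code. The sketch below evaluates them for illustrative values of α, β1 and R (the paper's exact lenslet parameters are not reproduced here):

```python
import math

def overlap_angle(alpha, beta1):
    """Same-channel FOV overlap angle, Eq. (18): gamma = 2*alpha - beta1."""
    return 2 * alpha - beta1

def half_fov(d, f):
    """Half FOV alpha of a single lenslet, Eq. (19): tan(alpha) = d / (2 f')."""
    return math.atan(d / (2 * f))

def zero_overlap_distance(R, alpha, beta1):
    """Object distance L0 at which the same-channel overlap vanishes, Eq. (22)."""
    num = 2 * R * math.tan(alpha - beta1) * math.cos(beta1 / 2) ** 2 \
          - R * math.sin(beta1)
    return math.tan(alpha) * num / (math.tan(alpha - beta1) + math.tan(alpha))

# Illustrative values: alpha = 30 deg, beta1 = 16 deg, R = 20 mm (assumed)
alpha, beta1, R = math.radians(30), math.radians(16), 20.0
gamma = overlap_angle(alpha, beta1)          # shared FOV angle, here 44 deg
L0 = zero_overlap_distance(R, alpha, beta1)  # overlap exists for L > L0
```

For these values the same-channel overlap angle is 44° and L0 is small compared with any realistic object distance, so the overlap condition L > L0 is easily satisfied in remote-sensing scenarios.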

3. The optical system design of the MCCEC

To demonstrate the above concept, an optical system for the MCCEC has been designed by following the above design rules. Figure 6 shows the optical layout of the designed MCCEC system. As can be seen, there are three main subsystems, i.e., a curved compound eye integrated with selected narrow-band optical filters, an optical transformation subsystem, and the optical imaging sensor. In order to eliminate optical aberrations, each lenslet in the compound eye was designed with an aspheric surface. The mathematical expression for the aspheric surface is as follows [34]

$$z = \frac{{c{r^2}}}{{1 + \sqrt {1 - ({1 + k} ){c^2}{r^2}} }} + {a_4}{r^4} + {a_6}{r^6} + {a_8}{r^8} + \cdots$$
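The sag of the even aspheric surface above can be evaluated directly; a minimal sketch (the coefficients below are placeholders, not the values of Table 1):

```python
import math

def asphere_sag(r, c, k, coeffs=()):
    """Even-asphere sag z(r) of Eq. (23).

    c: surface curvature (1/radius), k: conic constant,
    coeffs: (a4, a6, a8, ...) multiplying r**4, r**6, r**8, ..."""
    z = c * r ** 2 / (1 + math.sqrt(1 - (1 + k) * c ** 2 * r ** 2))
    for i, a in enumerate(coeffs):
        z += a * r ** (4 + 2 * i)
    return z
```

For k = 0 and no polynomial terms this reduces to the sag of a sphere of radius 1/c, which is a convenient check on the implementation.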

Fig. 6. Optical design result for the MCCEC system.

Table 1 shows the parameters of the designed aspheric surface.


Table 1. Parameters of the even aspheric surface

According to the principle of the MCCEC, the angle between adjacent lenses in the compound eye is designed to be 8°. On the back of each lenslet in the compound eye, a selected narrow-band optical filter is integrated. Each optical filter was simulated by a flat glass plate (n = 2.05 and d = 1 mm) weighted at its transmission wavelength. The central wavelengths of the seven optical filters are 480 nm, 550 nm, 591 nm, 676 nm, 704 nm, 740 nm and 767 nm respectively. The bandwidth of each spectral filter is 10 nm. The FOV of the MCCEC system is set to 120°. Table 2 shows the parameters of the optical transformation subsystem.


Table 2. Specifications of the optical transformation subsystem

3.1 Optical imaging quality analysis of the MCCEC

The MTF curve is an important criterion for evaluating the quality of an optical system. According to the Nyquist theorem, the theoretical resolution of the system can be calculated using the following equation [35]

$$N = \frac{{1000}}{{2 \times d}}$$
where $N$ is the Nyquist cutoff frequency of the optical system (in lp/mm) and $d$ is the pixel size of the detector (in µm). In this work, a monochromatic imaging sensor is used. The detector size, pixel size and frame rate are 2/3 inch, 4 µm and 38 fps respectively. Therefore, the cutoff frequency of the MTF was calculated to be 125 lp/mm using Eq. (24). Figures 7(a)–7(d) show the MTF for different wavelengths in different FOVs. Lines in different colors represent MTFs in different FOVs. As can be seen, the MTF values are higher than 0.42 over the whole FOV at the cutoff spatial frequency. The high flatness of the optimized MTF indicates a high uniformity of imaging quality across the MCCEC.
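Eq. (24) is a one-line computation; for the 4 µm pixels used here it reproduces the 125 lp/mm cutoff quoted above:

```python
def nyquist_cutoff_lp_per_mm(pixel_um):
    """Nyquist cutoff frequency of a detector, Eq. (24): N = 1000 / (2 d),
    with the pixel pitch d in micrometers and N in line pairs per mm."""
    return 1000.0 / (2.0 * pixel_um)
```
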

Fig. 7. The MTF for different wavelengths and different FOVs. (a) FOV = 0° at 550 nm, (b) FOV = 32° at 704 nm, (c) FOV = 48° at 591 nm, (d) FOV = 56° at 740 nm.

The spot diagram is another important criterion for an optical system: the smaller the spot size, the better the imaging quality. Figures 8(a)–8(d) show the spot diagrams of the designed MCCEC system. As can be seen, the resulting RMS spot radii for different FOVs and wavelengths, ranging from 0.627 to 3.347 µm, are smaller than the pixel size. Therefore, the spot diagrams indicate that the optical aberrations of the designed MCCEC are well corrected.

Fig. 8. Spot diagrams for different wavelengths and different FOVs. (a) FOV = 0° at 550 nm, (b) FOV = 32° at 704 nm, (c) FOV = 48° at 591 nm, (d) FOV = 56° at 740 nm.

Field curvature and distortion are also important parameters for the final image quality evaluation of the optical system. Figures 9(a)–9(h) show the field curvature and distortion in different FOVs and at different wavelengths. As can be seen, the field curvature is less than 20 µm. For comparison, the focal depth of the whole system was calculated to be about 25 µm. That is, the field curvature is less than the focal depth of the whole system, which means the field curvature has been well corrected. In addition, the maximum field distortion is less than 2%, which also meets the requirements for most applications.

Fig. 9. The curvature of field and distortion for different wavelengths and different FOVs. Curvature of field (a) and distortion (b) for FOV = 0° at 550 nm, Curvature of field (c) and distortion (d) for FOV = 32° at 704 nm, Curvature of field (e) and distortion (f) for FOV = 48° at 591 nm, Curvature of field (g) and distortion (h) for FOV = 56° at 740 nm.

Furthermore, the wavefront map at the FOV of 56° was examined to evaluate the optical image quality of the MCCEC. For an optical system with high imaging quality, the peak-to-valley (P-V) value of the wavefront should be less than λ/4. Figure 10 shows the wavefront map of the designed MCCEC system. As can be seen, the P-V value of the wavefront at the FOV of 56° is 0.2034λ, below the λ/4 criterion, which means that the image quality of the designed MCCEC can be guaranteed.

Fig. 10. The wavefront map at FOV of 56° of MCCEC.

3.2 Tolerance analysis for MCCEC

The sensitivity analysis method was used to analyze the tolerances of the designed MCCEC. The diffraction MTF value was selected as the evaluation criterion for the tolerance sensitivity analysis. Monte Carlo analysis was employed to perform a random simulation test and sensitivity analysis at a spatial frequency of 100 lp/mm. Table 3 shows the parameters of the tolerance allocation scheme.


Table 3. Specifications of the tolerance allocation scheme

Figure 11 shows the tolerance sensitivity analysis results for different surfaces of the optical transformation subsystem. TRAD denotes the radius tolerance, while TETX and TETY represent the tilt of a lens in the X and Y directions respectively. As can be seen, the TRAD values of surfaces 20, 23, 24 and 26 have a large effect on the MTF; in particular, for surfaces 24 and 26 the resulting MTF drop can exceed 0.1. In addition, the TETX values of surfaces 19–25 and the TETY values of surfaces 21–22 also affect the MTF to some extent, but the corresponding MTF drop is less than 0.07. This means that control of the surface curvature is the most important factor for guaranteeing high optical imaging quality of the MCCEC.

Fig. 11. (a) Surface of the optical transformation subsystem. (b) Tolerance analysis results for FOV of 48°.

Moreover, Monte Carlo simulation analysis was performed to evaluate the diffraction MTF degradation due to tolerances for the MCCEC at FOVs of 0°, 48° and 56° respectively. As shown in Fig. 12, the ray tracing results indicate that the MTF is greater than 0.60 with 90% probability in the central FOV, while for FOVs of 48° and 56° the MTF is maintained only above 0.3 and 0.2 respectively with 90% probability. Even at the 50% probability level, the MTF only reaches 0.4 and 0.38 at FOVs of 48° and 56° respectively. In general, a reasonable diffraction MTF is achieved under the manufacturing and assembly tolerances of the designed MCCEC.

Fig. 12. Result of the Monte Carlo simulation analysis for different FOVs.

4. Discussions

In this work, the FOVs of the lenslets in the compound eye are designed to overlap for the same spectrum channel, and the image obtained by the MCCEC on the detector is a hexagonally arranged sub-image array. Each lenslet in the compound eye contributes a sub-image by receiving the light from a sub-FOV. To obtain the seven spectral images over the whole FOV, an algorithm was developed to reconstruct the complete image [36]. Figure 13(a) shows the image acquisition and reconstruction process. Each lenslet in the curved compound eye forms a small inverted partial image of the object, i.e., a '+' symbol. All of the tiny inverted images are then combined to form a spherical compound eye image. This spherical compound eye image is transformed into a flat image by the optical relay system and then detected by a detector. Figure 13(b) shows the principle of the multispectral compound eye camera experiment. A simple image reconstruction method was developed to restore the image of the '+' symbol. The method contains two steps:

  • (a) The planar image at the same spectrum channel is extracted and projected onto a curved surface.
  • (b) Since each point of the target can be captured at the same spectrum, the relationship between the central position of the sub-image of the same spectrum channel on the sphere and the target can be established.

Fig. 13. The image reconstruction principle of the MCCEC. (a) The image acquisition and reconstruction process. (b) The principle of image reconstruction experiment.

5. Conclusions

In summary, we have proposed and designed a multispectral curved compound eye camera, i.e., the MCCEC, which can realize multispectral imaging with seven wavebands in real time over an ultra-large FOV of 120°. The design principle and imaging quality of the MCCEC were carefully analyzed and discussed. The design results show that the MCCEC has good imaging quality with a cutoff spatial frequency larger than 100 lp/mm. Furthermore, the tolerance analysis indicates that the designed MCCEC retains a reasonable diffraction MTF under the manufacturing and assembly tolerances. The designed MCCEC consists of about 117 microlenses and 7 spectral channels. The bandwidth of each spectral filter is 10 nm. For different applications, the MCCEC can be optimized to increase the number of spectral channels beyond 7.

However, it should be noted that although we claim a 120° FOV for multispectral imaging, the edge of the FOV may in fact fall slightly short due to the lack of some spectrum channels there. Thus, future work is needed to further increase the FOV overlap between lenslets of the same spectrum channel, as well as on stitching algorithms for synthesizing the overlapping sub-FOVs. Other important directions for future research include improving the resolution and further optimizing the MCCEC system.

The designed MCCEC can be further adapted and optimized as an airborne or satellite-borne payload for real-time multispectral remote sensing with an ultra-large FOV.

Funding

National Natural Science Foundation of China (61975231).

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

1. J. Liu, J. Chen, J. Liu, S. Feng, X. Li, and J. Cui, “Optical design of a prism-grating-based lenslet array integral field spectrometer,” Opt. Express 26(15), 19456–19469 (2018). [CrossRef]  

2. J. Reimers, A. Bauer, K. P. Thompson, and J. P. Rolland, “Freeform spectrometer enabling increased compactness,” Light: Sci. Appl. 6(7), e17026 (2017). [CrossRef]  

3. H. E. Torkildsen, S. Nicolas, T. Opsahl, T. Haavardsholm, I. Kåsen, and A. Rognmo, “Compact camera for multispectral and conventional imaging based on patterned filters,” Appl. Opt. 53(13), C64–C71 (2014). [CrossRef]  

4. R. M. Sullenberger, A. B. Milstein, and Y. Rachlin, “Computational reconfigured imaging spectrometer,” Opt. Express 25(25), 31960–31969 (2017). [CrossRef]  

5. P. Cu-Nguyen, A. Grewe, P. Feßer, A. Seifert, S. Sinzinger, and H. Zappe, “An imaging spectrometer employing tunable hyperchromatic micro lenses,” Light: Sci. Appl. 5(4), e16058 (2016). [CrossRef]  

6. J. S. Yoo and V. Ntziachristos, “Multispectral imaging using multiple-bandpass filters,” Opt. Lett. 33(9), 1023–1025 (2008). [CrossRef]  

7. P. J. Miller and C. C. Hoyt, “Multispectral imaging with a liquid crystal tunable filter,” Proc. SPIE 2345, 354–365 (1995). [CrossRef]  

8. J. Y. Hardeberg, F. Schmitt, and H. Brettel, “Multispectral color image capture using a liquid crystal tunable filter,” Opt. Eng. 41(10), 2532–2548 (2002). [CrossRef]  

9. J. Brauers and T. Aach, “Geometric Calibration of Lens and Filter Distortions for Multispectral Filter-Wheel Cameras,” IEEE Trans. on Image Process. 20(2), 496–504 (2011). [CrossRef]  

10. K. Nishino, K. Nakamura, M. Tsuta, M. Yoshimura, J. Sugiyama, and S. Nakauchi, “Optimization of excitationemission band-pass filter for visualization of viable bacteria distribution on the surface of pork meat,” Opt. Express 21(10), 12579–12591 (2013). [CrossRef]  

11. D. E. Escobar, J. H. Everitt, J. R. Norega, I. Cavazos, and M. R. Davis, “A twelve-band airborne digital video imaging system (ADVIS),” Remote Sens Environ. 66(2), 122–128 (1998). [CrossRef]  

12. Y. Zhao, T. Yue, L. Chen, H. Y. Wang, Z. Ma, David J. Brady, and X. Cao, “Heterogeneous camera array for multispectral light field imaging,” Opt. Express 25(13), 14008–14022 (2017). [CrossRef]  

13. J. W. Duparré and F. C. Wippermann, “Micro-optical artificial compound eyes,” Bioinspiration Biomimetics 1(1), R1–R16 (2006). [CrossRef]  

14. J. Duparre, P. Schreiber, A. Matthes, E. Pshenay-Severin, A. Brauer, A. Tunnermann, R. Volkel, M. Eisner, and T. Scharf, “Microoptical telescope compound eye,” Opt. Express 13(3), 889–903 (2005). [CrossRef]  

15. K. H. Jeong, J. Kim, and L. P. Lee, “Biologically inspired artificial compound eyes,” Science 312(5773), 557–561 (2006). [CrossRef]  

16. S. Zhang, L. Zhou, C. Xue, and L. Wang, “Design and simulation of a superposition compound eye system based on hybrid diffractive-refractive lenses,” Appl. Opt. 56(26), 7442–7449 (2017). [CrossRef]  

17. H. R. Fallah and A. Karimzadeh, “Design and simulation of a high-resolution superposition compound eye,” Opt. Express 54(1), 67–76 (2007). [CrossRef]  

18. R. Horisaki and J. Tanida, “Compact compound-eye projector using superresolved projection,” Opt. Lett. 36(2), 121–123 (2011). [CrossRef]  

19. F. Fan, Q. Hao, and X. Cheng, “Retina-like sensor based on a lens array with a large field of view,” Appl. Opt. 54(36), 10692–10697 (2015). [CrossRef]  

20. Luke P. Lee and R. Szema, “Inspirations from biological optics for advanced photonic systems,” Science 310(5751), 1148–1150 (2005). [CrossRef]  

21. D. Keum, H. Jung, and K. Jeong, “Planar emulation of natural compound eyes,” Small 8(14), 2169–2173 (2012). [CrossRef]  

22. Y. Cheng, J. Cao, Y. Zhang, and Q. Hao, “Review of state-of-the-art artificial compound eye imaging systems,” Bioinspir. Biomim. 14(3), 031002 (2019).

23. H. Deng, X. Gao, M. Ma, Y. Li, H. Li, J. Zhang, and X. Zhong, “Catadioptric planar compound eye with large field of view,” Opt. Express 26(10), 12455–12468 (2018).

24. T. Wang, W. Yu, C. Li, H. Zhang, Z. Xu, Z. Lu, and Q. Sun, “Biomimetic compound eye with a high numerical aperture and anti-reflective nanostructures on curved surfaces,” Opt. Lett. 37(12), 2397–2399 (2012).

25. Y. Kitamura, R. Shogenji, K. Yamada, S. Miyatake, M. Miyamoto, T. Morimoto, Y. Masaki, N. Kondou, D. Miyazaki, J. Tanida, and Y. Ichioka, “Reconstruction of a high-resolution image on a compound-eye image-capturing system,” Appl. Opt. 43(8), 1719–1727 (2004).

26. H. Zhang, K. Wang, Z. Cao, and Q. Wu, “Position detection with high precision using novel compound-eye sensor,” Proc. SPIE 7508, 75081L (2009).

27. Y. M. Song, Y. Xie, V. Malyarchuk, J. Xiao, I. Jung, K.-J. Choi, Z. Liu, H. Park, C. Lu, R.-H. Kim, R. Li, K. B. Crozier, Y. Huang, and J. A. Rogers, “Digital cameras with designs inspired by the arthropod eye,” Nature 497(7447), 95–99 (2013).

28. R. Shogenji, Y. Kitamura, K. Yamada, S. Miyatake, and J. Tanida, “Multispectral imaging using compact compound optics,” Opt. Express 12(8), 1643–1655 (2004).

29. J. Jin, S. Di, Y. Yao, and R. X. Du, “Design and fabrication of filtering artificial-compound-eye and its application in multi-spectral imaging,” Proc. SPIE 8911, 891106 (2013).

30. A. Cao, H. Pang, M. Zhang, L. Shi, Q. Deng, and S. Hu, “Design and fabrication of an artificial compound eye for multi-spectral imaging,” Micromachines 10(3), 208 (2019).

31. J. Chen, H. H. Lee, D. Wang, S. Di, and S. C. Chen, “Hybrid imprinting process to fabricate a multi-layer compound eye for multispectral imaging,” Opt. Express 25(4), 4180–4189 (2017).

32. A. Thetford, “Introduction to Matrix Methods in Optics,” Opt. Acta 23(3), 255–256 (1976).

33. K. Zhang, D. J. Zhang, Y. H. Sheng, P. F. Wang, and Y. T. Pang, “Research on two methods of three-dimensional coordinate transformation and their comparison,” Math. Practice Theory 38(23), 122–128 (2008).

34. K. Pang, F. Fang, L. Song, Y. Zhang, and H. Zhang, “Bionic compound eye for 3D motion detection using an optical freeform surface,” J. Opt. Soc. Am. B 34(5), B28 (2017).

35. D. X. Zheng and X. P. Du, “Influence of detector pixel size on performance of optical detection system,” Chin. Space Sci. Technol. 31(3), 51–55 (2011).

36. C. Y. Shi, Y. Y. Wang, W. X. Yu, and Z. J. Xu, “A spherical compound eye camera for fast location and recognition of objects at a large field of view,” Opt. Express 25(26), 32333–32345 (2017).



Figures (13)

Fig. 1. The schematic of an apposition compound eye.

Fig. 2. The cross-sectional schematic of the MCCEC.

Fig. 3. The image–object relationship for the curved compound eye (a) and the single lens (b).

Fig. 4. Multispectral channel layout for the MCCEC. (a) Layout of a cluster of microlenses with seven spectral channels. (b) Layout of the whole curved compound eye.

Fig. 5. The imaging principle of the MCCEC, where 2ω = 2α is the FOV of a single lens, L is the object distance, β1 is the angle between two neighboring lenses of the same spectral channel, and area B is the FOV overlap region for same-channel imaging.

Fig. 6. Optical design result for the MCCEC system.

Fig. 7. The MTF for different wavelengths and FOVs. (a) FOV = 0° at 550 nm, (b) FOV = 32° at 704 nm, (c) FOV = 48° at 591 nm, (d) FOV = 56° at 740 nm.

Fig. 8. Spot diagrams for different wavelengths and FOVs. (a) FOV = 0° at 550 nm, (b) FOV = 32° at 704 nm, (c) FOV = 48° at 591 nm, (d) FOV = 56° at 740 nm.

Fig. 9. Field curvature and distortion for different wavelengths and FOVs. Field curvature (a) and distortion (b) for FOV = 0° at 550 nm; field curvature (c) and distortion (d) for FOV = 32° at 704 nm; field curvature (e) and distortion (f) for FOV = 48° at 591 nm; field curvature (g) and distortion (h) for FOV = 56° at 740 nm.

Fig. 10. The wavefront map of the MCCEC at a FOV of 56°.

Fig. 11. (a) Surface of the optical transformation subsystem. (b) Tolerance analysis results for a FOV of 48°.

Fig. 12. Results of the Monte Carlo simulation analysis for different FOVs.

Fig. 13. The image reconstruction principle of the MCCEC. (a) The image acquisition and reconstruction process. (b) The principle of the image reconstruction experiment.

Tables (3)

Table 1. Parameters of the even aspheric surface

Table 2. Specifications of the optical transformation subsystem

Table 3. Specifications of the tolerance allocation scheme

Equations (24)


T = \begin{bmatrix} 1+\dfrac{d(n_1-n_2)}{r_1 n_2} & \dfrac{d n_1}{n_2} \\ \dfrac{n_1-n_2}{n_1 r_1}+\dfrac{n_2-n_1}{n_1 r_2}-\dfrac{d(n_2-n_1)^2}{n_1 n_2 r_1 r_2} & 1+\dfrac{d(n_2-n_1)}{r_2 n_2} \end{bmatrix}
T = \begin{bmatrix} 1-\dfrac{nd}{r_1} & da \\ \dfrac{m}{r_1}-\dfrac{m}{r_2}+\dfrac{dmn}{r_1 r_2} & 1+\dfrac{dn}{r_2} \end{bmatrix}
T_{ij} = \begin{bmatrix} 1 & L_{ij} \\ 0 & 1 \end{bmatrix} T \begin{bmatrix} 1 & L_{ij} \\ 0 & 1 \end{bmatrix}
\phantom{T_{ij}} = \begin{bmatrix} 1-\dfrac{nd}{r_1}+\left(\dfrac{m}{r_1}-\dfrac{m}{r_2}+\dfrac{dmn}{r_1 r_2}\right) L_{ij} & \left(1-\dfrac{nd}{r_1}\right) L_{ij}+da+L_{ij}\left[L_{ij}\left(\dfrac{m}{r_1}-\dfrac{m}{r_2}+\dfrac{dmn}{r_1 r_2}\right)+1+\dfrac{dn}{r_2}\right] \\ \dfrac{m}{r_1}-\dfrac{m}{r_2}+\dfrac{dmn}{r_1 r_2} & 1+\dfrac{dn}{r_2}+L_{ij}\left(\dfrac{m}{r_1}-\dfrac{m}{r_2}+\dfrac{dmn}{r_1 r_2}\right) \end{bmatrix}
T_{ij} = \begin{bmatrix} T_{11} & T_{12} \\ T_{21} & T_{22} \end{bmatrix}
\begin{bmatrix} Y'_{ij} \\ p_{2ij} \end{bmatrix} = T_{ij} \begin{bmatrix} Y_{ij} \\ p_{1ij} \end{bmatrix}
Y'_{ij} = T_{11} Y_{ij} + T_{12} p_{1ij}
W_{ij} = \dfrac{Y'_{ij}}{Y_{ij}}
W_{ij} = \dfrac{T_{11} Y_{ij} + T_{12} p_{1ij}}{Y_{ij}}
x'_{ij} = W_{ij} x_{ij} = \dfrac{T_{11} Y_{ij} + T_{12} p_{1ij}}{Y_{ij}}\, x_{ij}
y'_{ij} = W_{ij} y_{ij} = \dfrac{T_{11} Y_{ij} + T_{12} p_{1ij}}{Y_{ij}}\, y_{ij}
z'_{ij} = W_{ij} z_{ij} = \dfrac{T_{11} Y_{ij} + T_{12} p_{1ij}}{Y_{ij}}\, z_{ij}
\begin{bmatrix} x \\ y \\ z \end{bmatrix} = R \begin{bmatrix} x'_{ij} \\ y'_{ij} \\ z'_{ij} \end{bmatrix} + \begin{bmatrix} \Delta x \\ \Delta y \\ \Delta z \end{bmatrix}
x_{ij} = W_{ij}\,\omega(x)
y_{ij} = W_{ij}\,\omega(y)
z_{ij} = W_{ij}\,\omega(z)
\begin{bmatrix} \Delta x \\ \Delta y \\ \Delta z \end{bmatrix} = \begin{bmatrix} r\sin\theta\cos\varphi \\ r\sin\theta\sin\varphi \\ r\cos\theta - r \end{bmatrix}
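The matrix equations above can be sketched numerically. The following minimal Python sketch (not the paper's code; all numeric values are assumed for illustration) builds the ray-transfer matrix T of a single lens, composes it with the object/image translations L_ij, and forms the scaling factor W_ij = Y'_ij / Y_ij:

```python
import math

def matmul2(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def translation(L):
    """Free-space translation over a distance L."""
    return [[1.0, L], [0.0, 1.0]]

def thick_lens(n1, n2, r1, r2, d):
    """Ray-transfer matrix of a single lens: ambient index n1, glass
    index n2, surface radii r1 and r2, center thickness d."""
    A = 1.0 + d * (n1 - n2) / (r1 * n2)
    B = d * n1 / n2
    C = ((n1 - n2) / (n1 * r1) + (n2 - n1) / (n1 * r2)
         - d * (n2 - n1) ** 2 / (n1 * n2 * r1 * r2))
    D = 1.0 + d * (n2 - n1) / (r2 * n2)
    return [[A, B], [C, D]]

# T_ij = translation(L_ij) * T * translation(L_ij), with assumed values
T = thick_lens(n1=1.0, n2=1.5168, r1=5.0, r2=-5.0, d=1.0)
Tij = matmul2(translation(10.0), matmul2(T, translation(10.0)))

# Propagate a ray of height Y_ij and angle p_1ij, then form W_ij.
Y, p1 = 0.5, 0.0
Y_out = Tij[0][0] * Y + Tij[0][1] * p1   # Y'_ij = T11*Y_ij + T12*p_1ij
W = Y_out / Y                            # W_ij = Y'_ij / Y_ij
```

A quick sanity check on such a matrix is its determinant: since T is the product of two refraction matrices (with determinants n1/n2 and n2/n1) and one translation, det(T) = 1 for any parameter choice.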
\gamma = 2\alpha - \beta_1
\tan\alpha = \dfrac{d}{2f}
x_a = \dfrac{2R\tan(\alpha-\beta_1)\cos^2\beta_1 - 2R\sin\beta_1}{\tan(\alpha-\beta_1)+\tan\alpha}
y_a = \tan\alpha \cdot \dfrac{2R\tan(\alpha-\beta_1)\cos^2\beta_1 - 2R\sin\beta_1}{\tan(\alpha-\beta_1)+\tan\alpha} - R
L_0 = \tan\alpha \cdot \dfrac{2R\tan(\alpha-\beta_1)\cos^2\beta_1 - 2R\sin\beta_1}{\tan(\alpha-\beta_1)+\tan\alpha}
z = \dfrac{c r^2}{1+\sqrt{1-(1+k) c^2 r^2}} + a_4 r^4 + a_6 r^6 + a_8 r^8 + \cdots
N = \dfrac{1000}{2d}
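Several of the closing formulas are directly computable. The short Python sketch below (illustrative values only, not the paper's actual design data) evaluates the microlens half FOV from tan α = d/(2f), the same-channel overlap angle γ = 2α − β1, the even-asphere sag, and the detector Nyquist frequency N = 1000/(2d):

```python
import math

def half_fov(d, f):
    """Half FOV alpha of one microlens, from tan(alpha) = d / (2 f)."""
    return math.atan(d / (2.0 * f))

def overlap_angle(alpha, beta1):
    """Overlap angle gamma = 2*alpha - beta1 between two neighboring
    lenses of the same spectral channel (radians)."""
    return 2.0 * alpha - beta1

def even_asphere_sag(r, c, k, a4=0.0, a6=0.0, a8=0.0):
    """Sag z(r) of an even aspheric surface with curvature c, conic
    constant k, and 4th/6th/8th-order coefficients."""
    z = c * r**2 / (1.0 + math.sqrt(1.0 - (1.0 + k) * c**2 * r**2))
    return z + a4 * r**4 + a6 * r**6 + a8 * r**8

def nyquist_lp_per_mm(pixel_um):
    """Detector Nyquist frequency N = 1000 / (2 d) in lp/mm, for a
    pixel size d given in micrometers."""
    return 1000.0 / (2.0 * pixel_um)

# Assumed example: a 1.2 mm aperture, 2.0 mm focal-length microlens,
# with same-channel neighbors separated by 8 degrees on the sphere.
alpha = half_fov(d=1.2, f=2.0)
gamma = overlap_angle(alpha, math.radians(8.0))
print(round(math.degrees(gamma), 2))   # overlap angle in degrees
print(nyquist_lp_per_mm(4.8))          # Nyquist frequency for 4.8 um pixels
```

A positive γ is the condition for neighboring same-channel ommatidia to share a common object region, which is what enables seamless same-waveband image stitching over the full FOV.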