
Computational superposition compound eye imaging for extended depth-of-field and field-of-view


Abstract

This paper describes a superposition compound eye imaging system for extending the depth-of-field (DOF) and the field-of-view (FOV) using a spherical array of erect imaging optics and deconvolution processing. This imaging system had a three-dimensionally space-invariant point spread function generated by the superposition optics. A sharp image with a deep DOF and a wide FOV could be reconstructed by deconvolution processing with a single filter from a single captured image. The properties of the proposed system were confirmed by ray-trace simulations.

© 2012 Optical Society of America

1. Introduction

Aberrations degrade the optical transfer function (OTF) in imaging optics, resulting in decreased depth-of-field (DOF) and field-of-view (FOV) in conventional imaging systems [1]. The framework of computational imaging, which is based on cooperative optical design and image processing, has been applied to solve the problem.

To increase the DOF using this framework, the optical system is designed to equalize the point spread functions (PSFs) along the optical axis with an optical modulation element, and then image processing retrieves a sharp image from the captured image by deconvolution filtering. A cubic phase plate, a spherically aberrated optical system, or a diffuser can be used for axial optical modulation [2–5]. Axial movement of the object or detector, or a change of the lens focus during exposure, can also achieve PSF equalization along the optical axis [6, 7].

On the other hand, computational imaging has been applied to increase the FOV, which is limited by off-axis aberrations. Typically, a monocentric optical system and/or a lens array equalizes the PSFs laterally, and then image processing rearranges a number of captured images to reconstruct a large image [8–12]. A multiscale gigapixel camera with an FOV of 120°-by-50° has been demonstrated recently using the same concept [13].

We have proposed a method to achieve both a deep DOF and a wide FOV by using computational superposition imaging [14]. In this method, multiple images obtained under different optical conditions are superposed to equalize the PSFs both axially and laterally. Deconvolution filtering is applied to produce an aberration-reduced image. We have verified the principle by an experiment involving mechanical scanning of an aberrated imaging system [15].

In this paper, we present a computational superposition imaging technique based on spherical superposition compound eye optics, which eliminates the mechanical scanning employed in Ref. [15] and thereby achieves single-shot computational superposition imaging. In the following two sections, computational superposition imaging and compound eye imaging are surveyed, respectively, and their application to single-shot imaging with an extended DOF and FOV is described. We then present the principle of the proposed system and show the results of ray-trace simulations performed to verify the effectiveness of the method.

2. Computational superposition imaging

2.1. Previous work

To realize deep-DOF and wide-FOV imaging, computational superposition imaging has been proposed [14]. A schematic diagram of the method is shown in Fig. 1. While changing the focusing distance and the optical axis direction, multiple images of an object are acquired by imaging optics whose PSFs are three-dimensionally space-variant owing to defocus and aberrations. The multiple images are superposed to equalize the PSFs. This superposed image has a single blur kernel at every point in the image. The superposed image can be approximated as an image that is captured by an imaging system with a three-dimensionally space-invariant PSF. A sharp aberration-reduced image is produced from a single superposition image by deconvolution processing with a single filter. With this method, we can acquire an aberration-reduced image within a large three-dimensional space. In other words, a deep-DOF, wide-FOV imaging system may be constructed using optics that are allowed to have defocus and aberrations, with low computational cost.
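To make the pipeline concrete, the following is a minimal numerical sketch of this idea, not the authors' implementation: copies of a scene are blurred by different toy Gaussian kernels standing in for the PSFs under different optical conditions, the captures are superposed, and the sum is restored with a single Wiener filter built from the averaged kernel. All kernel widths, array sizes, and function names are assumptions for illustration.

```python
# Minimal sketch of computational superposition imaging with toy Gaussian
# PSFs standing in for the space-variant optical PSFs; superposing the
# captures yields one known average kernel, inverted by a single Wiener filter.
import numpy as np

def gaussian_psf(shape, sigma):
    # Normalized Gaussian kernel centered in the array.
    y, x = np.indices(shape)
    cy, cx = shape[0] // 2, shape[1] // 2
    psf = np.exp(-((x - cx)**2 + (y - cy)**2) / (2 * sigma**2))
    return psf / psf.sum()

def convolve(image, psf):
    # Circular convolution via the FFT.
    return np.real(np.fft.ifft2(np.fft.fft2(image) *
                                np.fft.fft2(np.fft.ifftshift(psf))))

def wiener_deconvolve(image, psf, nsr=1e-3):
    # Single-filter Wiener deconvolution (nsr = noise-to-signal ratio).
    H = np.fft.fft2(np.fft.ifftshift(psf))
    G = np.fft.fft2(image)
    return np.real(np.fft.ifft2(np.conj(H) / (np.abs(H)**2 + nsr) * G))

rng = np.random.default_rng(0)
scene = rng.random((128, 128))                       # stand-in object
psfs = [gaussian_psf(scene.shape, s) for s in (0.8, 1.6, 2.4, 3.2)]
captures = [convolve(scene, p) for p in psfs]        # different optical conditions
superposed = np.mean(captures, axis=0)               # equalized (averaged) blur
restored = wiener_deconvolve(superposed, np.mean(psfs, axis=0))
```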

Fig. 1 Schematic diagram of computational superposition imaging.

The concept of computational superposition imaging can be directly implemented with mechanical scanning [15]. The imaging conditions (focal distance and optical axis direction in this work) are scanned mechanically. The object is captured multiple times during scanning, and then multiple captured images are superposed computationally to increase the DOF and FOV. This superposition imaging system requires multiple exposures with mechanical scanning and computational superposition of the captured images. To overcome these limitations, a scheme with optical scanning and superposition for achieving single-shot imaging is presented in the next subsection.

2.2. Implementation for single-shot imaging

The mechanical scanning and computational superposition mentioned above can both be emulated optically in a single shot using a superposition compound eye. A schematic diagram of the optical implementation is shown in Fig. 2(a). The imaging optics is composed of an array of erect imaging optics on a spherical surface and a spherical image sensor. Each pair of elemental erect imaging optics has a different focusing distance and optical axis direction, as indicated in the figure. The images are superposed on the sensor surface by the optical superposition effect of the array of erect imaging optics. The image captured by the whole imaging optics can thus be approximated as an image with a three-dimensionally space-invariant PSF. Therefore, computational superposition imaging in a single shot may be implemented by this type of imaging optics.

Fig. 2 Schematic diagram of single-shot computational superposition imaging based on spherical superposition compound eyes with (a) positive and (b) negative curvatures.

The superposition imaging system in Fig. 2(a) can also be designed with a negative curvature, as shown in Fig. 2(b). The design with a positive curvature in Fig. 2(a) has a magnification smaller than unity, whereas the design with a negative curvature in Fig. 2(b) has a magnification larger than unity [16]. In this paper, we demonstrate the implementation in Fig. 2(a) by simulations.

3. Compound eye imaging

3.1. Previous work

Compound eyes of insects are composed of a number of elemental micro-optics, and they have been classified into two types: apposition compound eyes and superposition compound eyes [17]. The diameter of each elemental optic is about 10 μm to 80 μm [18–20]. The main difference between the two types is the relationship between the elemental optics and the photoreceptors. In the apposition type, the elemental optics are separated by partitions; therefore, rays through a single elemental optic reach a single photoreceptor. In the superposition type, the elemental optics, each of which produces an erect image, are not separated; therefore, rays through multiple elemental optics reach a single photoreceptor. Advantages of the superposition type over the apposition type are higher light efficiency and a higher cutoff frequency, because the overall imaging system behaves as a lens with a larger aperture than that of the elemental optics [21–23]. In comparison with the OTFs of the apposition type, the OTFs of the superposition type are degraded by spherical aberration [24]; however, the degraded OTFs can be restored by postprocessing. The resolution of superposition compound eyes is limited not by diffraction of the elemental optics but by the spherical aberration of the whole imaging optics, which has a high numerical aperture (NA) [18]. Therefore, the superposition type in conjunction with postprocessing may give greater resolution than the apposition type. Our optical superposition method illustrated in Fig. 2 is inspired by the superposition compound eye.

In previous studies, compound eyes have been applied to high-performance imaging systems. The apposition type has been applied to high-resolution imaging with thin optics [25–27]. Single-shot multi-dimensional imaging with compact apposition compound eye optics has also been proposed [28, 29].

The superposition type has been applied to wide-FOV and high-resolution imaging with a cluster of simple erect optical elements, e.g., erect lenses and mirrors [16, 22]. These previous superposition compound eye imaging systems assume a single object distance. In Ref. [16], rays causing defocus and spherical aberration are blocked by parallax barriers between elemental optics.

3.2. Application to computational superposition imaging

In contrast to the previous work described above, our imaging system exploits the rays that those systems suppressed: such spherically aberrated rays can be used to extend the DOF, and the defocused and aberrated images can be restored by deconvolution filtering [4].

In this paper, a gradient index (GRIN) lens is assumed as the elemental erect imaging optics in the superposition compound eye. To construct monocentric imaging optics, a spherical image sensor is required. Some state-of-the-art technologies have realized spherical image sensors [30, 31]. Alternatively, a spherical array of planar sensors can approximate a spherical sensor [9,10].

4. Extending the DOF and FOV

In this section, we analyze the spherical superposition compound eye imaging optics for extending the DOF and FOV. DOF extension with the imaging optics is illustrated in Fig. 3, together with definitions of the parameters. The imaging optics is composed of a spherical erect lens array and a spherical image sensor with a common center. Here, for system analysis, the array optics is assumed to consist of ideal aberration-free erect lenses of infinitesimal diameter and length; that is, an infinite number of aberration-free, infinitesimally small erect lenses are arranged on the spherical surface. One lens is chosen arbitrarily to define the optical axis, as shown in the figure. A pair of lenses with the same angle with respect to the optical axis focuses at a certain distance, and pairs with different angles focus at different distances. Therefore, the focusing distance can be scanned optically with the spherical superposition compound eye, and the captured images have axially space-invariant PSFs. A sharp, deep-DOF image can be produced by deconvolution processing of a single captured image.

Fig. 3 Schematic diagram of DOF extension using a spherical array of ideal erect lenses.

Because the proposed spherical superposition compound eye is monocentric, every lens defines an optical axis, and the PSF is averaged laterally as well. Therefore, the spherical superposition compound eye extends the FOV and DOF simultaneously. Ultimately, the proposed method enables an omni-directional imaging system with a deep DOF.

The DOF of the proposed system is limited by the scanning range s of the focusing distances. The scanning range s is determined by the paraxial and marginal rays, as shown in Fig. 3. The marginal rays are limited by the virtual pupil in the figure. The virtual pupil is caused by vignetting of each GRIN lens and parallax barriers between the lenses in practice. The vignetting is determined by partitions between the lenses. The partitions prevent stray light from the neighboring lenses. The vignetting and parallax barriers restrict the FOV of each erect lens. They also govern the diameter D of the virtual pupil and the scanning range s. In this paper, the effect caused by the restricted FOV of each lens is emulated by a virtual pupil for simplicity.

If a point source is located at a distance $z_o \in (0, \infty)$ from the erect lens on the optical axis, the point source is imaged at a distance $z_i^{\mathrm{mar}}$ from the erect lens by the marginal rays. $z_i^{\mathrm{mar}}$ can be expressed as

$$z_i^{\mathrm{mar}} = \frac{(z_o + t)(R^2 - 2r^2) - 2r^2\sqrt{R^2 - r^2}}{(R^2 - 2r^2) + 2(z_o + t)\sqrt{R^2 - r^2}} + t, \tag{1}$$

where $R$ is the radius of curvature of the lens array surface, $r$ is the radius of the virtual pupil, and $t$ is the axial distance between the marginal and paraxial lens positions. Here, $r = D/2$ and $t = R - \sqrt{R^2 - r^2}$. Note that $r \le R/\sqrt{2}$. The focal length $f$ of the imaging optics can be calculated, with a paraxial approximation, as

$$f = \frac{R}{2}, \tag{2}$$

which is obtained from Eq. (1) with $r \to 0$ and $z_o \to \infty$. The distance $z_i^{\mathrm{par}}$ of the image of the point source under the paraxial approximation can be expressed as

$$z_i^{\mathrm{par}} = \frac{z_o f}{z_o + f} = \frac{z_o R}{2 z_o + R}. \tag{3}$$

In this paper, $z_i^{\mathrm{par}}$ is chosen as the position $z_i$ of the spherical image sensor for the ray-tracing simulations with a fixed $z_i$ described in the following section, because $z_i^{\mathrm{par}}$ is independent of $D$. Therefore,

$$z_i = z_i^{\mathrm{par}}. \tag{4}$$

Note that the center of curvature of the lens array surface coincides with that of the sensor surface to achieve monocentric optics. Using this configuration, the marginal focusing distance $z_o^{\mathrm{mar}}$ and the paraxial focusing distance $z_o^{\mathrm{par}}$ in Fig. 3 are given by

$$z_o^{\mathrm{mar}} = \frac{(z_i - t)(R^2 - 2r^2) + 2r^2\sqrt{R^2 - r^2}}{(R^2 - 2r^2) - 2(z_i - t)\sqrt{R^2 - r^2}} - t, \tag{5}$$

$$z_o^{\mathrm{par}} = \frac{z_i f}{f - z_i} = \frac{z_i R}{R - 2 z_i}, \tag{6}$$

respectively. The scanning range $s$ in Fig. 3 can be described as

$$s = z_o^{\mathrm{mar}} - z_o^{\mathrm{par}}. \tag{7}$$

Here, the F-number $F_{i/\#}$ in the image space is defined as

$$F_{i/\#} = \frac{z_i}{D}. \tag{8}$$

As shown in Eqs. (5)–(7), a smaller radius of curvature $R$ of the lens array and a larger diameter $D$ of the virtual pupil increase the scanning range $s$; that is, a smaller F-number $F_{i/\#}$ increases the scanning range $s$.
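The focusing geometry of Eqs. (1)–(8) can be evaluated numerically. The sketch below assumes example values (R = 40 mm, D = 20 mm, and an in-focus object at z_o = 200 mm); these are illustrative placeholders, not the parameters of Table 1.

```python
# Minimal sketch of Eqs. (1)-(8) for the spherical erect-lens array.
# Assumed example values (not the paper's Table 1); all lengths in mm.
import math

def focusing_geometry(R, D, z_o):
    r = D / 2.0                           # virtual pupil radius (r <= R/sqrt(2))
    t = R - math.sqrt(R**2 - r**2)        # marginal-to-paraxial lens sag
    f = R / 2.0                           # paraxial focal length, Eq. (2)
    A = R**2 - 2 * r**2
    B = math.sqrt(R**2 - r**2)

    # Marginal image distance of a point source at z_o, Eq. (1).
    z_i_mar = ((z_o + t) * A - 2 * r**2 * B) / (A + 2 * (z_o + t) * B) + t
    # Paraxial image distance, Eq. (3); the sensor is placed there, Eq. (4).
    z_i = z_o * f / (z_o + f)
    # Focusing distances conjugate to the fixed sensor, Eqs. (5) and (6).
    z_o_mar = ((z_i - t) * A + 2 * r**2 * B) / (A - 2 * (z_i - t) * B) - t
    z_o_par = z_i * f / (f - z_i)
    s = z_o_mar - z_o_par                 # scanning range, Eq. (7)
    F_number = z_i / D                    # image-space F-number, Eq. (8)
    return {"t": t, "f": f, "z_i": z_i, "z_i_mar": z_i_mar,
            "z_o_mar": z_o_mar, "z_o_par": z_o_par, "s": s, "F_i/#": F_number}

print(focusing_geometry(R=40.0, D=20.0, z_o=200.0))
# -> z_i ~ 18.18, z_i_mar ~ 17.40, s ~ 148 mm, F_i/# ~ 0.91
```

For these assumed values, the scanning range s is roughly 148 mm around an in-focus distance of 200 mm, illustrating how strongly the marginal rays stretch the focusing distance.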

The effective DOF in the system is part of the scanning range s. In Ref. [6], the DOF is about half of the scanning range of the object. The magnitude of the modulation transfer function (MTF) is inversely proportional to the scanning range of the object. A lower magnitude causes lower noise robustness in deconvolution filtering. In the next section, we describe simulations of the extent of the effective DOF of the proposed system.

The pitch of the erect lenses determines the pitch of the discrete scans of the focusing distance and the optical axis direction. For the GRIN lens array to approximate an ideal lens array, more than one chief ray through the GRIN lenses must reach a single detector on the image sensor. Figure 4 shows this geometrical relationship. A lack of rays degrades the MTF of the GRIN lens array. The pitches of the erect lenses and detectors are $p_l$ and $p_d$, respectively. With a planar approximation of the spherical surface, the condition can be roughly estimated as

$$p_l \le \frac{(z_i^{\mathrm{mar}} - t)\, p_d}{z_i - z_i^{\mathrm{mar}}}. \tag{9}$$
The impact of the lens pitch, pl, is also simulated in the next section.
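A companion check of Eq. (9), reusing the hypothetical geometry from the previous sketch; the resulting bound (about 165 μm here) is illustrative only and differs from the 88.6 μm pitch obtained later with the Table 1 parameters.

```python
# Minimal sketch of the lens-pitch bound of Eq. (9); the input values are
# the hypothetical geometry from the previous sketch, not the paper's design.
def max_lens_pitch(z_i, z_i_mar, t, p_d):
    """Largest lens pitch p_l for which more than one chief ray through
    the GRIN lenses reaches a single detector of pitch p_d, using the
    planar approximation of the spherical surface."""
    return (z_i_mar - t) * p_d / (z_i - z_i_mar)

print(max_lens_pitch(z_i=18.18, z_i_mar=17.40, t=1.27, p_d=8e-3))
# -> ~0.165 mm, i.e., p_l must stay below about 165 um in this example
```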

Fig. 4 Geometrical relationship between the pitches of the erect lenses and detectors.

5. Simulations

The imaging properties of the spherical superposition compound eye were verified with simulations using the Zemax optical design software [32]. These simulations addressed DOF extension because the FOV is obviously extended by the monocentric optics, as mentioned in the previous section. Imaging systems composed of arrays of ideal erect lenses and of GRIN lenses were numerically analyzed and compared to show the effect of the discretization by the GRIN lenses.

The system parameters used in the simulations are summarized in Table 1, where λ is the wavelength. The misfocus MTFs of the two models are compared for different diameters D of the virtual pupil to show the impact of the F-number Fi/#. In this case, the radius of the spherical sensor is around 20 mm. Such a spherical sensor may be realized by extending some of the latest available technologies.

Table 1. System parameters in simulations.

In the simulations, the refractive index of the GRIN lens was modeled as

$$n = n_0 + n_{d^2} d^2 + n_{d^4} d^4 + n_{d^6} d^6 + n_z z + n_{z^2} z^2 + n_{z^3} z^3, \tag{10}$$

where $d$ is the radial distance, $z$ is the axial distance, and each coefficient $n_{(\cdot)}$ weights the corresponding variable and order. The refractive index profile of the GRIN lens was optimized with Zemax. The cost function in the optimization was composed of the Seidel aberrations and the root-mean-square (RMS) spot radius. The function was minimized to suppress the aberrations of the elemental GRIN lens under the constraint of minimum and maximum refractive indexes, set to 1.6 and 1.8, respectively. The obtained coefficients are shown in Table 2. The diameter of the GRIN lenses was chosen based on Eq. (9) with the parameters in Table 1, D = 20 mm, and pd = 8 μm. In this case, the maximum lens pitch, pl, was 88.6 μm.
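For reference, Eq. (10) translates directly into a small function; the default coefficients below are placeholders, not the optimized values of Table 2.

```python
# Minimal sketch of the GRIN index profile of Eq. (10). The coefficient
# defaults are placeholders; the optimized values are those of Table 2.
def grin_index(d, z, n0=1.7, n_d2=0.0, n_d4=0.0, n_d6=0.0,
               n_z=0.0, n_z2=0.0, n_z3=0.0):
    """Refractive index at radial distance d and axial distance z."""
    return (n0 + n_d2 * d**2 + n_d4 * d**4 + n_d6 * d**6
            + n_z * z + n_z2 * z**2 + n_z3 * z**3)
```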

Table 2. Parameters of GRIN lenses in simulations.

The effect of the lens pitch on the MTFs was simulated. Figure 5 compares the MTFs of GRIN lens arrays with two different elemental lens diameters (i.e., pitches). The sidelobe of the MTF with pl = 80 μm is smoother than that with pl = 200 μm, because the former satisfies the condition of Eq. (9), whereas the latter does not.

Fig. 5 MTFs of GRIN lens arrays with different diameters.

The results of ray-tracing with the spherical GRIN lens array with D = 20 mm are shown in Fig. 6. Figures 6(a) and (b) show an overview of the system and the rays passing through the GRIN lenses, respectively. The GRIN lenses are optically separated with partitions to prevent stray light. The imaging optics had spherical aberration, as shown in Fig. 6(c). The spherical aberration was used for DOF extension, as mentioned in Section 3.2.

Fig. 6 Ray-trace of a spherical GRIN lens array. (a) Overview, (b) rays passing through GRIN lenses, and (c) rays near sensor surface.

The misfocus MTFs of the two models are shown in Fig. 7. They were evaluated with the normalized misfocus parameter Ψ [1]:

$$\Psi = \frac{\pi D^2}{4\lambda}\left(\frac{1}{f} - \frac{1}{z_\psi} - \frac{1}{z_i}\right). \tag{11}$$

In the simulations, the normalized misfocus parameter Ψ was varied from −90 to +90 by changing the object distance $z_\psi$ while keeping the position $z_i$ of the sensor fixed. The scale of the frequency axis in the plots of Fig. 7 is fixed to twice the maximum sampling frequency of the sensor ($2/(2p_d)$ = 125 cycles/mm). The range of Ψ in this simulation was determined so that the misfocus MTFs for D = 20 mm have no zero values below the maximum sampling frequency ($1/(2p_d)$ = 62.5 cycles/mm); this range is defined as the effective DOF in this paper. The misfocus MTF of an ideal single lens with a diameter of 20 mm is shown in Fig. 7(a) as a baseline. These MTFs showed drastic variations with Ψ and had multiple zero values, so defocused images captured by such imaging optics could not be deconvolved with a single filter.
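For readers who want to map the plotted Ψ values back to object distances, Eq. (11) can be inverted directly. The sketch below is a minimal transcription; the sign convention follows the standard form of Ref. [1], and all parameter values must be supplied by the user.

```python
# Minimal sketch of Eq. (11) and its inverse. All parameters (f, z_i, D,
# wavelength) are user-supplied; lengths must share one unit, e.g. mm.
import math

def misfocus(z_psi, f, z_i, D, wavelength):
    """Normalized misfocus parameter Psi for an object at z_psi, Eq. (11)."""
    return math.pi * D**2 / (4 * wavelength) * (1/f - 1/z_psi - 1/z_i)

def object_distance(psi, f, z_i, D, wavelength):
    """Invert Eq. (11): the object distance z_psi producing a given Psi."""
    return 1.0 / (1/f - 1/z_i - 4 * wavelength * psi / (math.pi * D**2))
```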

Fig. 7 Misfocus MTFs. (a) MTFs of ideal single lens with a diameter of 20 mm. MTFs of ideal erect lens arrays with (b) D = 5 mm, (c) D = 10 mm, and (d) D = 20 mm. MTFs of GRIN lens arrays with (e) D = 5 mm, (f) D = 10 mm, and (g) D = 20 mm.

Figures 7(b)–(d) show the misfocus MTFs of ideal erect lens arrays with D = 5 mm, D = 10 mm, and D = 20 mm, respectively. Figures 7(e)–(g) show those of GRIN lens arrays with D = 5 mm, D = 10 mm, and D = 20 mm, respectively. In both models, a larger D increased the depth-invariance of the misfocus MTFs and decreased their magnitude. A larger D corresponds to a smaller Fi/#, as shown in Eq. (8). Therefore, decreasing Fi/# increases the depth-invariance of the misfocus MTFs of the proposed system while decreasing their magnitude, and this tradeoff between depth-invariance and magnitude can be controlled through Fi/#.

In the simulations, the effective DOF of the proposed system was roughly one-eighth of the scanning range s. In contrast, the object scanning method, which changes the object distance during the exposure [6], achieves the same effect over half of the scanning range. The effective DOF of the proposed method is therefore about one-fourth that of the object scanning method. This is because the former scans the focusing distance with only partial (i.e., marginal) rays to avoid mechanical scanning, whereas the latter scans the object and uses all rays.

DOF extension was demonstrated with computationally generated images using the MTFs of the ideal single lens and the GRIN lens array with D = 20 mm, shown in Figs. 7(a) and 7(g), respectively. In this demonstration, the object distance was scanned from Ψ = −90 to Ψ = +90. The object was the Lenna image, and the pixel count of each image was 128 × 128. The pixel pitch, pd, was 8 μm, as in the above ray-trace simulation. Figures 8(a) and (b) show images captured by the ideal single lens and the GRIN lens array, respectively. The images captured by the GRIN lens array were similarly defocused throughout the range of misfocus, whereas the in-focus and defocused images captured by the ideal lens were obviously different. Figures 8(c) and (d) show deconvolution results of the captured images without and with additional Gaussian noise at a signal-to-noise ratio (SNR) of 40 dB, respectively. A Wiener filter, calculated from the MTF at Ψ = 0, was applied in the deconvolution processing. Sharp images with a deeper DOF than that of the single-lens imaging were reconstructed well by the proposed scheme, even from noisy captured images.
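A minimal sketch of the noise model used for the noisy captures in Fig. 8: zero-mean Gaussian noise scaled so that the ratio of mean signal power to noise power equals 40 dB. The function name and seed are assumptions for illustration.

```python
# Minimal sketch of the additive noise model assumed for Fig. 8:
# zero-mean Gaussian noise at a 40 dB signal-to-noise ratio.
import numpy as np

def add_gaussian_noise(image, snr_db=40.0, seed=0):
    """Add zero-mean Gaussian noise such that mean signal power over
    noise power equals 10**(snr_db/10)."""
    rng = np.random.default_rng(seed)
    noise_power = np.mean(image.astype(float)**2) / 10**(snr_db / 10)
    return image + rng.normal(0.0, np.sqrt(noise_power), image.shape)
```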

Fig. 8 Performance verification with Lenna image. Images captured by (a) the ideal single lens and (b) the GRIN lens array without noise. Deconvolution results of images captured by the GRIN lens array (c) with and (d) without additional Gaussian noise.

The images captured by the ideal single lens and the deconvolved images from the GRIN lens array were evaluated using the peak signal-to-noise ratio (PSNR), as shown in Fig. 9. At misfocused distances, the deconvolved images from the GRIN lens array had better PSNRs than the images captured with the ideal single lens, even when noise at 40 dB SNR was added to the captured images. A further advantage of the superposition compound eye is its high light efficiency, i.e., high measurement SNR, as mentioned in Section 3.1.
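The PSNR metric of Fig. 9 can be written compactly as follows; an 8-bit peak value of 255 is assumed here, since the paper does not state the convention.

```python
# Minimal sketch of the PSNR metric of Fig. 9 (8-bit peak of 255 assumed).
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images,
    matching the note on the noiseless Psi = 0 case in Fig. 9."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(test, float))**2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak**2 / mse)
```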

Fig. 9 PSNRs of the final images shown in Fig. 8. Note that the PSNR of the GRIN lens array at Ψ = 0 without noise is ∞ dB.

6. Conclusions

In this study, we showed the principle and performance of spherical superposition compound eye optics for computational DOF and FOV extension. This system captures an optically superposed image of an object with different focusing distances and optical axes by using a spherical array of erect imaging optics. The PSFs of the captured image are three-dimensionally space-invariant. The original sharp image with a deep DOF and a wide FOV can be reproduced by deconvolution processing with a single filter from the single captured image. The system model and simulations were presented.

The misfocus MTFs of arrays of ideal erect lenses and GRIN lenses were analyzed with ray-tracing to verify the DOF extension. The depth-invariance of the proposed system was larger than that of a conventional, non-computational imaging system. The range of DOF extension was almost one-fourth of that of the object scanning method in Ref. [6]; however, our method also extends the FOV simultaneously in a single shot.

The proposed method realizes a single-shot, deep-DOF, wide-FOV, high-NA imaging system composed of scalable optics. It may be useful for applications such as surveillance and machine vision. One of the issues to be addressed next is the physical implementation of the proposed system. In this paper, we assumed GRIN lenses with diameters of a few tens of micrometers, almost the same as those of the elemental optics of insects' compound eyes. State-of-the-art technologies and a reflective optical design may enable us to implement a practical system that satisfies these requirements [33–35].

Acknowledgments

The authors thank Prof. Yasuhiro Awatsuji at Kyoto Institute of Technology for his technical support in this project.

References and links

1. J. W. Goodman, Introduction to Fourier Optics (McGraw-Hill, 1996).

2. E. R. Dowski, Jr. and W. T. Cathey, "Extended depth of field through wave-front coding," Appl. Opt. 34, 1859–1866 (1995).

3. Y. Takahashi and S. Komatsu, "Optimized free-form phase mask for extension of depth of field in wavefront-coded imaging," Opt. Lett. 33, 1515–1517 (2008).

4. P. Mouroulis, "Depth of field extension with spherical optics," Opt. Express 16, 12995–13004 (2008).

5. O. Cossairt, C. Zhou, and S. K. Nayar, "Diffusion coding photography for extended depth of field," ACM Trans. Graph. (also Proc. of ACM SIGGRAPH) (2010).

6. G. Häusler, "A method to increase the depth of focus by two step image processing," Opt. Commun. 6, 38–42 (1972).

7. S. Kuthirummal, H. Nagahara, C. Zhou, and S. K. Nayar, "Flexible depth of field photography," IEEE Trans. Pattern Anal. Mach. Intell. 33, 58–71 (2011).

8. D. J. Brady and N. Hagen, "Multiscale lens design," Opt. Express 17, 10659–10674 (2009).

9. D. L. Marks and D. J. Brady, "Gigagon: a monocentric lens design imaging 40 gigapixels," in Imaging Systems (Optical Society of America, 2010), p. ITuC2.

10. O. Cossairt, D. Miau, and S. K. Nayar, "Gigapixel computational imaging," in IEEE International Conference on Computational Photography (ICCP) (2011).

11. G. Druart, N. Guérineau, R. Haïdar, S. Thétas, J. Taboury, S. Rommeluère, J. Primot, and M. Fendler, "Demonstration of an infrared microcamera inspired by Xenos peckii vision," Appl. Opt. 48, 3368–3374 (2009).

12. L. Li and A. Y. Yi, "Design and fabrication of a freeform microlens array for a compact large-field-of-view compound-eye camera," Appl. Opt. 51, 1843–1852 (2012).

13. D. J. Brady, M. E. Gehm, R. A. Stack, D. L. Marks, D. S. Kittle, D. R. Golish, E. M. Vera, and S. D. Feller, "Multiscale gigapixel photography," Nature 486, 386–389 (2012).

14. R. Horisaki, T. Nakamura, and J. Tanida, "Superposition imaging for three-dimensionally space-invariant point spread functions," Appl. Phys. Express 4, 112501 (2011).

15. T. Nakamura, R. Horisaki, and J. Tanida, "Experimental verification of computational superposition imaging for compensating defocus and off-axis aberrated images," in Computational Optical Sensing and Imaging (Optical Society of America, 2012), p. CM2B.4.

16. S. Hiura, A. Mohan, and R. Raskar, "Krill-eye: superposition compound eye for wide-angle imaging via GRIN lenses," IPSJ Trans. Comput. Vis. Appl. 2, 186–199 (2010).

17. D.-E. Nilsson, "A new type of imaging optics in compound eyes," Nature 332, 76–78 (1988).

18. E. J. Warrant and P. D. McIntyre, "Limitations to resolution in superposition eyes," J. Comp. Physiol. A 167, 785–803 (1990).

19. M. F. Land, F. A. Burton, and V. B. Meyer-Rochow, "The optical geometry of euphausiid eyes," J. Comp. Physiol. A 130, 49–62 (1979).

20. S. Laughlin and S. McGinness, "The structures of dorsal and ventral regions of a dragonfly retina," Cell Tissue Res. 188, 427–447 (1978).

21. J. W. Duparré and F. C. Wippermann, "Micro-optical artificial compound eyes," Bioinspir. Biomim. 1, R1 (2006).

22. K. Stollberg, A. Brückner, J. Duparré, P. Dannberg, A. Bräuer, and A. Tünnermann, "The Gabor superlens as an alternative wafer-level camera approach inspired by superposition compound eyes of nocturnal insects," Opt. Express 17, 15747–15759 (2009).

23. H. R. Fallah and A. Karimzadeh, "MTF of compound eye," Opt. Express 18, 12304–12310 (2010).

24. M. F. Land and D.-E. Nilsson, Animal Eyes (Oxford University Press, 2002).

25. J. Tanida, T. Kumagai, K. Yamada, S. Miyatake, K. Ishida, T. Morimoto, N. Kondou, D. Miyazaki, and Y. Ichioka, "Thin observation module by bound optics (TOMBO): concept and experimental verification," Appl. Opt. 40, 1806–1813 (2001).

26. J. Duparré, P. Dannberg, P. Schreiber, A. Bräuer, and A. Tünnermann, "Thin compound-eye camera," Appl. Opt. 44, 2949–2956 (2005).

27. A. Brückner, J. Duparré, R. Leitel, P. Dannberg, A. Bräuer, and A. Tünnermann, "Thin wafer-level camera lenses inspired by insect compound eyes," Opt. Express 18, 24379–24394 (2010).

28. R. Horisaki, S. Irie, Y. Ogura, and J. Tanida, "Three-dimensional information acquisition using a compound imaging system," Opt. Rev. 14, 347–350 (2007).

29. R. Horisaki, K. Choi, J. Hahn, J. Tanida, and D. J. Brady, "Generalized sampling using a compound-eye imaging system for multi-dimensional object acquisition," Opt. Express 18, 19367–19378 (2010).

30. R. Dinyari, S.-B. Rim, K. Huang, P. B. Catrysse, and P. Peumans, "Curving monolithic silicon for nonplanar focal plane array applications," Appl. Phys. Lett. 92, 091114 (2008).

31. D. Dumas, M. Fendler, F. Berger, B. Cloix, C. Pornin, N. Baier, G. Druart, J. Primot, and E. le Coarer, "Infrared camera based on a curved retina," Opt. Lett. 37, 653–655 (2012).

32. "Zemax," http://www.zemax.com/.

33. K.-H. Jeong, J. Kim, and L. P. Lee, "Biologically inspired artificial compound eyes," Science 312, 557–561 (2006).

34. D. Keum, H. Jung, and K.-H. Jeong, "Planar emulation of natural compound eyes," Small 8, 2169–2173 (2012).

35. S. Maekawa, K. Nitta, and O. Matoba, "Transmissive optical imaging device with micromirror array," Proc. SPIE 6392, 63920E (2006).
