
Thin-film camera using luminescent concentrators and an optical Söller collimator

Open Access

Abstract

This article reports our investigation of the potential of optical Söller collimators in combination with luminescent concentrators for lens-less, short-distance, and shape-independent thin-film imaging. We discuss optical imaging capabilities and limitations, and present first prototypes and results. Modern 3D laser lithography and deep X-ray lithography support the manufacturing of extremely fine collimator structures that pave the way for flexible and scalable thin-film cameras that are far thinner than 1 mm (including optical imaging and color sensor layers).

© 2017 Optical Society of America

Corrections

Alexander Koppelhuber and Oliver Bimber, "Thin-film camera using luminescent concentrators and an optical Söller collimator: publisher’s note," Opt. Express 25, 19084-19084 (2017)
https://opg.optica.org/oe/abstract.cfm?uri=oe-25-16-19084

26 July 2017: A typographical correction was made to the author listing.

1. Introduction

The development of thin-film image sensors in recent decades has attracted increasing interest in thin-film imaging and thin-film cameras. New optical approaches based, for instance, on organic photodiodes [1, 2] enable large and flexible image sensors. However, as with regular CCD sensors, lenses are required for capturing focussed images. This leads to large form factors and to cameras that are inflexible and relatively thick compared to the size of the sensor. Large-scale image sensors require alternative (lens-less) imaging options if the camera size is to remain acceptable.

In previous work, we have presented solutions and results towards a scalable, transparent, and flexible image sensor [3–8]. It consists of multiple luminescent concentrator (LC) layers that collect light of various wavelengths from images being optically focussed on their surfaces and transport it towards their edges. Multiplexing the resulting light signal at the edges of each LC layer into a two-dimensional light field allows direct measurement of the Radon transform of the image’s individual color channels. Solving the inverse Radon transform finally enables image reconstruction. We have shown that estimating the inverse Radon transform by machine learning yields significantly better results than conventional approaches, such as tomographic reconstruction. Although we demonstrated 300–600 µm thick and flexible sensor prototypes of arbitrary sizes, a practical solution for optical imaging that makes an integrated flexible and scalable thin-film camera system possible remains an open problem. In our previous work, images have either been focussed on the LC surface with external optics [3,5–7] or a constrained aperture of the external illumination source was required [4, 8].

A naive approach to obtaining a relatively flat camera is to use a simple pinhole instead of a lens. A sufficiently large image sensor allows the pinhole to be placed close to the sensor. One drawback is the extremely low light-gathering ability of such a camera, which results in long exposure times.

Camera arrays [9] in combination with computational methods that calculate a full-size image from the recorded sub-images are another option that achieves a low thickness-to-width ratio. However, large-scale camera arrays built from regular individual cameras are complex and expensive, while a small-scale camera array based on microlens arrays requires proportional scaling of the lens and the sensor pixels [10]. The imaging quality of microlens arrays can be improved (e.g. reducing blur) by using compound lenses that are printed with nanoscale accuracy directly on a CMOS image sensor [11]. The same technique enables foveated imaging that mimics the vision of predators such as eagles [12]. Microlenses with different focal lengths are arranged such that the highest angular resolution is achieved in the center of the image.

In [13], a lens array made from a flexible material was reported. The gathered light of a single lens is measured by a single photodetector through a small aperture that limits the field of view to achieve good image resolution. The limited and restricted field of view of each lens is compensated for by bending the whole image sensor. However, the geometry of the sensor in a deformed state must be known to be able to calculate a high-resolution image by means of the proposed imaging technique.

Another lens-less possibility for thin-film imaging is to extend the idea of a pinhole to an array of pinholes that forms an aperture mask. Such coded apertures were originally used in astronomy [14] for X-ray imaging where lenses are not applicable. Using coded aperture masks results in overlapping projections on the image sensor, which requires appropriate demultiplexing methods to recover an image. In [15], a calibration and reconstruction method was demonstrated that allows the mask to be placed very close to the image sensor. Coded aperture masks can also be combined with flexible image sensors to further increase the field of view. However, the multiplexed image that is formed on the sensor through the mask is different for different curvatures, and thus separate calibration for each possible deformation is required.

Classical single lenses or pinhole apertures do not efficiently support thin-film form factors for optical imaging because a decreasing distance to the sensor surface leads to an increasing field of view (a FOV approaching 180°) that is sampled with a limited sensor resolution (Figs. 1(a) and 1(c)). Furthermore, lenses require a focal distance to the sensor. Micro aperture arrays (MAAs) or microlens arrays (MLAs) are options for thin-film imaging that support practical FOVs (theoretically enabling parallel projection of the imaged scenery onto the sensor surface). Flexible sheets of MLA and MAA layers, as described in [13], are, compared to aperture structures, relatively thick, and still require an additional focal distance to the sensor (Fig. 1(b)). Coded aperture layers (Fig. 1(d)), as presented in [15], require careful aperture mask design. Because of the underlying image reconstruction principle, the pattern must be separable while supporting a broad frequency spectrum. The minimum size of the aperture elements depends on the distance of the mask to the image sensor to achieve optimal balance between optical blur and diffraction blur. The closer the mask to the sensor, the smaller its elements can be. However, a smaller element size also requires a higher image-sensor resolution. Two adjacent image pixels can only be distinguished when the emitted light produces a different projection of the mask on the image sensor.


Fig. 1 Optical imaging options: Single lens (A), microlens array with micro aperture array (B), single aperture (C), (coded) micro aperture array (D), Söller collimator (E). Height (H) and field of view (FOV) are indicated.


In this article, we investigate the potential of optical Söller collimators for short-distance thin-film imaging (Fig. 1(e)). Traditionally, such collimators are used to parallelize rays from neutron or X-ray sources [16, 17], but recent approaches apply them also in CCD-based fluorometry [18]. We discuss the optical imaging capabilities and limitations of Söller collimators, and present first prototypes and results for imaging in combination with our thin-film sensor. Further, we explain how modern 3D laser lithography and deep X-ray lithography support the manufacturing of extremely fine collimator structures that pave the way for flexible and scalable thin-film cameras that are far thinner than 1 mm (including optical imaging and color sensor layers). Figure 2 illustrates the underlying principle of our camera. Sections 2 and 3 describe the optical functions of the three layers and the image reconstruction process.


Fig. 2 Thin-film camera design: Two 300 µm luminescent concentrator layers for color sensing (bottom) [7], and a 300 µm Söller collimator layer for optical imaging (top). Optical principle and image reconstruction are described in Sections 2 and 3.


2. Optical Söller Collimator

A Söller collimator is a thick MAA (often implemented as a stack of thin overlapping MAAs). It transmits rays only within a limited range of directions, while all other rays are absorbed by the collimator walls. When applied to filter light rays, it is referred to as an optical Söller collimator. In lens-free imaging devices, it provides several advantages over lenses and pinholes or coded apertures (Fig. 1): First, it can be very thin and directly attached to the sensor, eliminating the focal distance required for lenses, or the dependence between pinhole size a and distance to sensor d (a = √(2.44λd), where λ is the wavelength of light) of a pinhole or pinhole array, while still providing a practical FOV. Thus, it enables thin-film imaging systems. Second, it is efficient in short-distance imaging and does not suffer from vignetting. The shorter the distance between object and imaging device, the more focussed the image. This is in contrast to pinhole and coded apertures; for lenses, blur increases with increasing distance from the focal plane. Vignetting is an issue with both lenses and regular apertures. Third, due to its simplicity, it supports controlled imaging even for non-planar collimator shapes and can easily be manufactured at low cost.

For Söller collimators with non-subwavelength aperture structures, the following imaging properties apply (for subwavelength structures, diffraction limits must be considered [19]):

The collimation angle α of a single aperture can be approximated for sufficiently small collimator structures by

\alpha \approx 2 \arccos\!\left( \frac{H \sqrt{1 - \left( (w-g)\kappa/2 \right)^{2}}}{\sqrt{(w-g)^{2} + (w-g)^{2} H \kappa + H^{2}}} \right), \qquad \kappa = 1/r, \qquad (1)
where κ is the collimator’s curvature (r is the radius of curvature), g is the wall thickness, w is the collimator’s aperture pitch (i.e., aperture hole plus wall thickness) and H is the height of the collimator layer (Fig. 3).
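
As an illustrative sketch (not part of the original text), Eq. (1) as reconstructed above can be evaluated in a few lines of Python; the function name is ours, and the example values are the prototype dimensions from Section 4. For a flat collimator (κ = 0), Eq. (1) reduces to α = 2 arctan((w − g)/H):

import math

def collimation_angle(w, g, H, kappa=0.0):
    # Eq. (1): collimation angle (degrees) of a single aperture.
    # w: aperture pitch, g: wall thickness, H: collimator height,
    # kappa: curvature 1/r (all lengths in the same unit).
    num = H * math.sqrt(1.0 - ((w - g) * kappa / 2.0) ** 2)
    den = math.sqrt((w - g) ** 2 + (w - g) ** 2 * H * kappa + H ** 2)
    return 2.0 * math.degrees(math.acos(num / den))

# Prototype of Section 4: w = 1600 um, g = 800 um, H = 6000 um, flat (kappa = 0).
print(collimation_angle(1600, 800, 6000))        # ~15.19 degrees
print(2 * math.degrees(math.atan(800 / 6000)))   # flat-case check: same value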


Fig. 3 Collimator geometry with positive (A), zero (B), and negative (C) curvature.


The overall field of view FOV of the collimator is

\mathrm{FOV} = \alpha + \beta = \alpha + W \kappa \, \frac{180}{\pi}, \qquad (2)
where W is the collimator’s width. Note that we need to consider the non-parallel rays collected beyond β. For non-zero collimation angles α, this extends the FOV beyond β by 2 · α/2 = α.

The blur diameter b of the PSF (Fig. 4) of a single-point emitter’s image created by the collimator on the underlying LC layer is

b = 2 \tan\!\left(\frac{\alpha}{2}\right) d, \qquad (3)
where d is the distance between point emitter and LC layer.
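
For illustration, Eqs. (2) and (3) can be evaluated for the prototype collimation angle α = 15.19° and the object distances used in Section 4; the blur values below are derived from the formulas and are not reported in the article:

import math

alpha = 15.19                     # prototype collimation angle in degrees
W, kappa = 200.0, 0.0             # 20 cm wide, flat collimator (lengths in mm)
fov = alpha + W * kappa * 180.0 / math.pi            # Eq. (2); beta = 0 when kappa = 0
for d in (30.0, 130.0):                              # 3 cm and 13 cm
    b = 2.0 * math.tan(math.radians(alpha / 2.0)) * d   # Eq. (3)
    print(f"d = {d:.0f} mm: blur diameter b = {b:.1f} mm")
# roughly 8 mm of blur at 3 cm and 35 mm at 13 cm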


Fig. 4 Image formation at the same collimation angle: Short (A) and long (B,C) distance to point emitter. Thin (A,B) and thick (C) aperture walls. Corresponding PSFs are shown beneath.


By comparing Figs. 4(b) and 4(c) it can be seen that the amount of light emitted from a single point that is transmitted through the collimator is not only limited by α but also depends on the width g of the collimator walls. For the one-dimensional case, as illustrated in Fig. 4, the fraction f of transmitted (i.e., non-blocked) light within α can be formulated as the ratio between the integral of all non-blocked aperture subareas and the full area of the resulting PSF with blur diameter b that can possibly receive light (for infinitely thin walls):

f = \frac{2 \sum_{i=0} \max\!\left( 0,\; (w-g) - \frac{(i w + g) H}{2 (d - H)} \right)}{b}. \qquad (4)

However, since f is independent of d and all non-blocked subareas of opposing aperture pairs (i.e., the first and the last, the second and the second last, etc.) always sum up to wg, the same ratio as in Eq. (4) can be determined by relating the non-blocked subarea of a single opposing aperture pair (i.e., wg) with the full area of this single pair (i.e., 2w):

f = \frac{w-g}{2w}. \qquad (5)
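
For the prototype described in Section 4 (w = 1600 µm, g = 800 µm), for example, Eq. (5) yields f = 800/3200 = 1/4.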

For the two-dimensional case, the light-gathering ability L of the collimator can therefore be expressed using the numerical aperture NA = η sin(α/2), with η = 1 (for air), as

L \approx \mathrm{NA} \, f^{2}. \qquad (6)

From Eq. (6) it can be seen that 1/4NA cannot be exceeded (for aperture walls approaching g = 0, f approaches 1/2).

3. LC Layers and Image Reconstruction

The sensor part of our thin-film camera consists of multiple (300 µm thick) luminescent concentrator (LC) layers each of which is sensitive to a specific band of wavelengths [7] (Fig. 2). An LC is an efficient 2D waveguide made of a transparent host material (polycarbonate in our case) that is doped with fluorescent particles which absorb light of a specific sub-band of the light spectrum and emit it at a longer wavelength. The emitted light is transported to the edges of the LC by total internal reflection.

A special optical structure (a 1D array of triangular aperture slits cut into the four edges of each LC layer, as illustrated in Fig. 2 and described in detail in [3]) multiplexes the transported light signal into a variant of the Radon transform of the image that is focussed on the surface of the LC. This Radon-transformed signal is forwarded by optical fibers to external line-scan cameras, where it is measured. A solution to the inverse Radon transform determined by machine learning (linear regression in our case), as described in [5, 6], is used to reconstruct the image from measurements.

When combining a Söller collimator with a single LC layer sensor, the forward imaging model can be described by a system of linear equations:

l = T H p + e = X p + e, \qquad (7)
where l is a vector of the coefficients of the measured Radon transform (i.e., the signal measured by the line-scan cameras), T is the light transport from every reconstructed image pixel on the LC surface to every measurement position on the edge of the LC, p is a vectorized optical image, and e is an unknown error term (including ambient light). The matrix H is a Toeplitz matrix and contains the PSF of the collimator. Thus, optical images are transformed to the measured signals by a combination of convolution (with the collimator’s PSF in H) and Radon transform (of the LC’s light transport in T). Since convolution and light transport are represented by matrices, their combination is the multiplication of both (X = TH).
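
As a minimal sketch of the forward model in Eq. (7), the following Python fragment builds a one-dimensional analogue of H as a Toeplitz (convolution) matrix and combines it with a placeholder light-transport matrix T; the PSF values, matrix sizes, and variable names are hypothetical, and in practice T results from calibrating the LC layer:

import numpy as np
from scipy.linalg import toeplitz

m, k = 64, 54                       # toy sizes: image pixels and edge measurements
psf = np.array([0.2, 0.6, 0.2])     # hypothetical 1D collimator PSF

# H: Toeplitz matrix that convolves the vectorized image p with the PSF
col = np.zeros(m); col[:psf.size] = psf
row = np.zeros(m); row[0] = psf[0]
H = toeplitz(col, row)

T = np.random.rand(k, m)            # placeholder light transport (calibrated in practice)
X = T @ H                           # combined forward operator of Eq. (7)

p = np.random.rand(m)               # vectorized optical image
l = X @ p                           # simulated measurement (error term e omitted)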

The inverse model of Eq. (7) enables image reconstruction by simple matrix-vector multiplication:

p = (T H)^{-1} (l - e) = X^{-1} (l - e). \qquad (8)

Note that X−1 now contains the inverse Radon transform of the LC’s light transport and the deconvolution of the collimator’s PSF, and is determined with machine learning as follows:

If P is the n × m matrix of n vectorized training images (n=60,000 in our case) with m pixels each (m = 64 × 64 in our case), and L the n × l matrix of n measurements of P with l (l = 54 × 32 × 4 in our case) vectorized measurement values, then we need to determine the m × l matrix X−1 for image reconstruction:

P^{\top} = X^{-1} L^{\top}. \qquad (9)

Transposing all components in Eq. (9) leads to

P = L X^{-\top}, \qquad (10)
which allows solving for X^{-1} using linear regression (ℓ2-norm):
X^{-1} = \left[ (L^{\top} L)^{-1} L^{\top} P \right]^{\top}. \qquad (11)
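
The regression step of Eqs. (9)–(11) can be sketched with synthetic data as follows; the toy dimensions and variable names are ours, and numpy's least-squares solver is used in place of explicitly forming the normal equations:

import numpy as np

n, m, k = 2000, 16 * 16, 128        # toy sizes (the article uses 60,000, 64 x 64, and 54 x 32 x 4)
P = np.random.rand(n, m)            # rows: vectorized training images
L = np.random.rand(n, k)            # rows: corresponding Radon measurements

# Solve P = L X^{-T} in the least-squares sense, cf. Eqs. (10) and (11)
X_inv_T, *_ = np.linalg.lstsq(L, P, rcond=None)
X_inv = X_inv_T.T                   # m x k reconstruction matrix X^{-1}

l_new = np.random.rand(k)           # a new measurement vector (ambient term e neglected)
p_rec = X_inv @ l_new               # reconstructed image as in Eq. (8)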

This concept can also be applied to multiple LC layers (enabling color imaging) by extending Eq. (8) such that vector p is a stacked vector of the output channels and vector l is a stacked vector of measurements of the individual layers [7]. The size of X−1 changes accordingly. For example, in a 2-layer color-imaging prototype (as in [7]) vector p contains the values of the blue, green and red color channels of the reconstructed image, and vector l contains the Radon coefficients of the green and red LC layers. The matrix X−1 contains the estimated inverse light transport with twice the number of columns and three times the number of rows compared to the single-layer grayscale approach. More details on the optical design of the sensor and on image reconstruction approaches can be found in previous publications [3–8].

Note that the optical image formed on the sensor surface is not directly reconstructed. Instead, we learn the correlation between original images and their convolved Radon transforms. Therefore, occlusions caused by the collimator grid on the LC (i.e., light blocking by aperture walls) do not influence the image reconstruction result. This is in contrast to attaching a collimator directly to the surface of a regular image sensor (i.e., a grid of photosensors), which would result in a reduced image resolution because many sensor elements are completely covered by aperture walls. The collimator affects the reconstructed image quality directly by its collimation angle α, as explained in Section 2: The smaller α, the smaller is the blur diameter of the collimator’s PSF. A narrow PSF passes a broad spectrum of the optically convolved image and consequently leads to an increased depth of field.

4. Prototype and Results

Figure 5(a) illustrates our experimental setup. We projected focussed images on a diffuser located at distances between 0 cm and 13 cm from the sensor. The blurred image that was formed optically on the sensor at a distance of 13 cm was made visible with the help of a second diffuser placed on the LC surface (Fig. 5(a)).


Fig. 5 Experimental setup and prototype: (A) The image focussed on a diffuser (top) at distances between 0 cm and 13 cm from the sensor (bottom) is to be reconstructed from the blurred image formed optically on the sensor surface (bottom). The example shows a distance of 13 cm. The sensor is covered by a second diffuser to make the blurred image formed on its surface visible. (B–D) 3D-printed Söller collimator prototype put on top of a single LC layer for monochrome imaging.


For proof of concept, we used a single LC layer (300 µm thick Bayer Makrofol® LISA Green, 128 triangular apertures, 3456 optical fibers) for sensing and a 3D-printed (H = 6 mm thick) Söller collimator with aperture dimensions of w = 1600 µm and g = 800 µm (collimation angle α = 15.19°, light gathering ability L = 0.001089) for optical imaging (Figs. 5(b)–5(d)). The material used for the collimator was ABS (Acrylonitrile butadiene styrene). Note that the structure size of 800 µm was the practical resolution limit of the 3D-printer utilised (a Stratasys J750).

Figure 6 presents reconstruction results for our prototype. We reconstructed images at distances of 3 cm and 13 cm, and compared these reconstructions to the ground truth using the structural similarity index metric (SSIM) [20]. Images were captured with exposure times of 300 ms and 100 ms, respectively. Our results show that the collimator significantly enhances the depth of field (i.e., the improved focus of the image due to a reduced collimation angle α).


Fig. 6 Experimental results: Reconstructed images captured at distances of 3 cm and 13 cm. Note that the distance of 0 cm applies to images captured without collimator. The SSIM [20] values compare the reconstruction results to the ground truth images (blue frames). Top rows illustrate the optical image formed on the sensor plane without collimator. Bottom rows present the reconstructed images captured with collimator.


5. Discussion and Conclusion

We have shown how luminescent concentrators combined with Söller collimators enable lensless thin-film imaging. Although our 3D-printed collimator prototype is still relatively thick (6 mm), more advanced manufacturing techniques, such as deep X-ray lithography and 3D laser lithography, enable structure sizes of 1 µm and make much thinner layers possible. A collimator with the same optical properties (α = 15.19° and L = 0.001089) as our prototype would be 7.5 µm thick in this case. Alternatively, a collimator of the same thickness as the LC layer (300 µm) and with the same collimation angle (α = 15.19°) would benefit from thinner aperture walls (g = 1 µm) and smaller aperture pitch (w = 41 µm), and would therefore almost quadruple its light-gathering ability (L = 0.004096). Figure 7 compares the PSFs of our 6 mm prototype with a 300 µm version with the same collimation angle (α = 15.19°). We will explore such manufacturing techniques in the future.
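
The numbers in this scaling argument can be sanity-checked with the flat-collimator relations of Section 2 (a rough check based on the equations as reconstructed above; helper names are ours):

import math

def alpha_flat(w, g, H):            # Eq. (1) with kappa = 0
    return 2 * math.degrees(math.atan((w - g) / H))

def f_of(w, g):                     # Eq. (5)
    return (w - g) / (2 * w)

print(alpha_flat(1600, 800, 6000))  # prototype: ~15.19 degrees
print(alpha_flat(41, 1, 300))       # 300 um design: ~15.19 degrees (same angle)
print((f_of(41, 1) / f_of(1600, 800)) ** 2)   # ~3.8, i.e., "almost quadruple"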


Fig. 7 Simulated PSFs of 6 mm (A) and 300 µm (B) collimators with same collimation angle (α = 15.19°): While the blur diameters are identical, the light gathering of the 300 µm collimator approximately quadruples when compared to the 6 mm collimator.


Figure 8 plots light-gathering ability over various collimation angles (between 0° and 90°) for different collimator configurations: (i) a configuration with a hypothetical aperture wall thickness of g = 0 (f = 1/2), (ii) a configuration in which the aperture wall thickness matches the aperture hole size, as is the case in our prototype (g = 1/2w, f = 1/4), and (iii) an intermediate case with f = 3/8. These plots indicate that a minimal wall thickness is always optimal, as it maximizes light gathering; however, 1/4NA cannot be exceeded.


Fig. 8 Light-gathering ability over various collimation angles (between 0° and 90°) and for different collimator configurations: zero aperture wall thickness (g = 0, f = 1/2), aperture wall thickness equals aperture hole size (g = 1/2w, f = 1/4), and an intermediate case of f = 3/8.


Figure 9 illustrates the effect of curvature on the optical properties of the collimator. Note that for a collimator of width W = 20 cm that is bent to β = 180°, the curvature is κ = 0.016 mm^-1. We show plots for three example collimation angles: approaching 0°, 15.19° (our prototype), and 90°. These plots indicate that for sufficiently small collimator heights H, even strong curvature has only a marginal effect on imaging: α_κ at curvature κ compared to α_0 at no curvature changes by no more than 5% for large curvatures (180°), thick collimators (6000 µm, as in the case of our prototype), and extremely small collimation angles (approaching 0°). For thinner collimators (e.g., 300 µm and less), the variation in α is far below 0.3%. This suggests that a minimal thickness of the collimator is optimal for supporting shape-independent imaging, and that (in contrast to microlens arrays and coded apertures) the change in collimator geometry can be ignored for image reconstruction. An increasing curvature, however, leads to an increasing field of view, as described by Eq. (2).


Fig. 9 Effect of curvature on collimator’s optical properties for example collimation angles of close to 0° (A), 15.19° (B, our prototype), and 90° (C), achieved with different collimator heights H, and plotted over an increasing curvature κ.


In summary, the optimal optical Söller collimator for enabling flexible thin-film cameras has minimal height and minimal aperture wall thickness to maximize light efficiency and support shape-independent imaging. Diffraction, however, limits how far the structure size can be reduced: it must remain above the subwavelength regime.

Considering the holes as (cylindrical) waveguides, light propagation below the cut-off frequency suffers from exponential decay with increasing height and decreasing aperture width. Therefore, considering the lowest waveguide mode, the aperture width must be kept at (w − g) > (3.68λ)/(2π), where λ is the wavelength of light. A high absorption coefficient and the roughness of the material must also be considered when choosing the aperture width.
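
For visible light with λ ≈ 550 nm, for example, this bound evaluates to (w − g) > 3.68 · 550 nm / (2π) ≈ 0.32 µm, which is well below the micrometer-scale structure sizes discussed above.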

Note that a static Söller collimator results in a fixed focus (on the sensor plane) and a fixed depth of field (set by the collimation angle). The collimation angle therefore depends on the desired maximum imaging distance and must be traded off against light gathering. Recent advances in lithography make thin (< 1 mm) but large (up to several square meters), flexible, and full-color thin-film cameras possible. Their short-distance imaging capabilities open up new application possibilities, such as contactless sensing for novel user interfaces, smart skin sensors that support autonomous robots, industrial machines, devices, and vehicles in environment sensing, or optical see-through monitoring for reading widely established analog elements, such as electric/water meters or set-top boxes.

Manufacturing thinner optical Söller collimators to demonstrate flexible thin-film cameras will be part of our future work. Since the relatively low light-gathering ability is the main drawback of optical Söller collimators, we will also investigate how shape-independent imaging and thin-film form factors can be achieved with refractive [21] or diffractive [22, 23] optics.

Funding

This research was funded by the Johannes Kepler University Linz, Linz Institute of Technology (LIT) under contract number LIT213640001 – LumiConCam.

Acknowledgments

We thank the Institute of Science and Technology Austria (IST Austria) for manufacturing the 3D-printed Söller collimator, the Karlsruhe Nano Micro Facility (KNMF) of the Karlsruhe Institute of Technology (KIT) for exchanges relating to deep X-ray lithography, and Nikita Arnold and Siegfried Bauer of Johannes Kepler University (JKU) Linz for discussions of subwavelength aperture structures.

References and links

1. G. Yu, J. Wang, J. McElvain, and A. J. Heeger, “Large-area, full-color image sensors made with semiconducting polymers,” Adv. Mater. 10, 1431–1434 (1998). [CrossRef]  

2. T. Someya, Y. Kato, S. Iba, Y. Noguchi, T. Sekitani, H. Kawaguchi, and T. Sakurai, “Integration of organic FETs with organic photodiodes for a large area, flexible, and lightweight sheet image scanners,” IEEE Trans. Electron Devices 52, 2502–2511 (2005). [CrossRef]

3. A. Koppelhuber and O. Bimber, “Towards a transparent, flexible, scalable and disposable image sensor using thin-film luminescent concentrators,” Opt. Express 21, 4796–4810 (2013). [CrossRef]   [PubMed]  

4. A. Koppelhuber, C. Birklbauer, S. Izadi, and O. Bimber, “A transparent thin-film sensor for multi-focal image reconstruction and depth estimation,” Opt. Express 22, 8928–8942 (2014). [CrossRef]   [PubMed]  

5. A. Koppelhuber, S. Fanello, C. Birklbauer, D. Schedl, S. Izadi, and O. Bimber, “Enhanced learning-based imaging with thin-film luminescent concentrators,” Opt. Express 22, 29531–29543 (2014). [CrossRef]  

6. A. Koppelhuber and O. Bimber, “A classification sensor based on compressed optical radon transform,” Opt. Express 23, 9397–9406 (2015). [CrossRef]   [PubMed]  

7. A. Koppelhuber and O. Bimber, “Multi-exposure color imaging with stacked thin-film luminescent concentrators,” Opt. Express 23, 33713–33720 (2015). [CrossRef]  

8. A. Koppelhuber and O. Bimber, “Computational imaging, relighting and depth sensing using flexible thin-film sensors,” Opt. Express 25, 2694–2702 (2017). [CrossRef]  

9. B. Wilburn, N. Joshi, V. Vaish, E.-V. Talvala, E. Antunez, A. Barth, A. Adams, M. Horowitz, and M. Levoy, “High performance imaging using large camera arrays,” in ACM Transactions on Graphics (TOG) (ACM, 2005), pp. 765–776. [CrossRef]  

10. J. Tanida, T. Kumagai, K. Yamada, S. Miyatake, K. Ishida, T. Morimoto, N. Kondou, D. Miyazaki, and Y. Ichioka, “Thin observation module by bound optics (TOMBO): concept and experimental verification,” Appl. Opt. 40, 1806–1813 (2001). [CrossRef]

11. T. Gissibl, S. Thiele, A. Herkommer, and H. Giessen, “Two-photon direct laser writing of ultracompact multi-lens objectives,” Nat. Photonics 10, 554–560 (2016). [CrossRef]  

12. S. Thiele, K. Arzenbacher, T. Gissibl, H. Giessen, and A. M. Herkommer, “3D-printed eagle eye: Compound microlens system for foveated imaging,” Sci. Adv. 3, e1602655 (2017). [CrossRef]   [PubMed]

13. D. C. Sims, Y. Yue, and S. K. Nayar, “Towards flexible sheet cameras: Deformable lens arrays with intrinsic optical adaptation,” in 2016 IEEE International Conference on Computational Photography (ICCP) (2016), pp. 1–11.

14. E. E. Fenimore and T. Cannon, “Coded aperture imaging with uniformly redundant arrays,” Appl. Opt. 17, 337–347 (1978). [CrossRef]   [PubMed]  

15. M. S. Asif, A. Ayremlou, A. Veeraraghavan, R. Baraniuk, and A. Sankaranarayanan, “FlatCam: Replacing lenses with masks and computation,” in 2015 IEEE International Conference on Computer Vision Workshop (ICCVW) (2015), pp. 663–666.

16. W. Soller, “A new precision x-ray spectrometer,” Phys. Rev. 24, 158 (1924). [CrossRef]  

17. F. Piegsa, “Highly collimating neutron optical devices,” Nuclear Instrum. Methods Phys. Res. 603, 401–405 (2009). [CrossRef]  

18. J. Balsam, M. Ossandon, H. A. Bruck, and A. Rasooly, “Modeling and design of micromachined optical Söller collimators for lensless CCD-based fluorometry,” Analyst 137, 5011–5017 (2012). [CrossRef]   [PubMed]

19. J. Weiner, “The physics of light transmission through subwavelength apertures and aperture arrays,” Rep. Prog. Phys. 72, 064401 (2009). [CrossRef]  

20. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13, 600–612 (2004). [CrossRef]   [PubMed]  

21. N. Antipa, S. Necula, R. Ng, and L. Waller, “Single-shot diffuser-encoded light field imaging,” in 2016 IEEE International Conference on Computational Photography (ICCP) (IEEE, 2016), pp. 1–11.

22. D. G. Stork and P. R. Gill, “Optical, mathematical, and computational foundations of lensless ultra-miniature diffractive imagers and sensors,” Int. J. Adv. Sys. Measurements 7, 4 (2014).

23. F. Heide, Q. Fu, Y. Peng, and W. Heidrich, “Encoded diffractive optics for full-spectrum computational imaging,” Sci. Rep. 6, 33543 (2016). [CrossRef]   [PubMed]  
