
Light field camera based on hexagonal array of flat-surface nanostructured GRIN lenses


Abstract

In this paper we present a light field camera system in which a flat-surface hexagonal array of nanostructured gradient index lenses is used as the lens matrix. Our approach uses an array of 469 gradient index microlenses, each 20 µm in diameter, with a 100% fill factor. To fabricate both the individual lenses and the lenslet array we used a modified stack-and-draw technique, in which the variation of the refractive index is achieved with quantized gradient index profiles built from rods of different types of glass. We show experimental imaging results for this type of lens in two kinds of light field cameras. In the first, the microlens array is located in the focal plane of the main lens and the image is reconstructed using a Fourier slice photography algorithm. This allowed a partial reconstruction of a 3D scene with spatial and depth resolution of 20 µm and a field of view of 500×500×500 µm. In the second configuration, the microlens array is located between the sample and a microscope objective, allowing superresolution 3D reconstruction of a microscopic image. The scale-invariant feature transform method was used for image reconstruction, yielding a partial 3D reconstruction with a field of view of 150×115×80 µm, a spatial resolution of 2 µm, and a depth resolution of 10 µm.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

A classic camera captures a 2D image, a projection of the 3D input scene through an optical system; information about the direction of the rays is lost. In contrast to 2D imaging, holography records the full information about the light field, i.e. amplitude and phase. In practice this technique is more complicated, since it requires coherent illumination, mechanical stabilization, and a complex setup with a reference beam [1]. An intermediate approach is the light field camera. It records information about the light field in a relatively simple setup, since it can reuse most of the standard solutions known from classical photography [2].

The first concepts of recording an image through several lenses in order to obtain more complete information about the studied object were presented at the beginning of the 20th century [3,4]. However, it was not until the 1990s that integral photography, or light field photography as it came to be called, began to develop rapidly [5–7]. The theoretical basis for a comprehensive analysis of the light field was developed during work on computer graphics and computer vision [8–10]. Further work led to compact light field cameras [11] such as Lytro [12] and Raytrix [13], as well as microscopic systems allowing the examined specimen to be reconstructed in 3D [14–16]. The light field camera (LFC), also known as a plenoptic camera, allows one to extract the information necessary for digital refocusing and correcting focusing errors [17], calculating a depth map [18], and reconstructing the scene for small changes of viewpoint [19]. This approach requires numerical processing of the recorded image data and computation of the final image [20–23]; such cameras are therefore also classified as computational imaging systems [2]. The first reconstruction algorithms produced images of low spatial resolution [24], limited by the size of a single microlens: each lens contributed a single pixel of the resulting image, which gave a resolution of ∼150 µm. Later developments introduced light field superresolution algorithms that increased the resolution to that provided by the sensor and lens of the camera [25,26], reaching a spatial resolution of 1 µm [27].

An LFC usually uses a microlens array to capture the intensity and direction of all the light rays in a scene [28]. Several types of LFCs have been reported [29]. In the two most frequently used systems, the microlens array is positioned either in the focal plane of the main lens, as shown in Fig. 1(a) [28,30], or in front of or behind the focal plane of the main lens, as shown in Fig. 1(b) [28,31]. In the first case the reconstruction of the recorded scene is based on multidimensional Fourier analysis, and in the second on the analysis of ordinary 2D images.

Fig. 1. Schemes of the light field camera setup: (a) type 1.0, (b) type 2.0.

In standard commercially available LFCs, square arrays of circular refractive lenses are used [12,13]. One of the disadvantages of this solution is the low fill factor of the lens arrays (at most about 78.5%). This value can be increased to nearly 91% if the lenses are arranged in a hexagonal lattice. The reported microlens arrays are also bulky: the pitch between adjacent lenses is on the order of 150 µm and the focal length of an individual lens is in the range of a few mm [12,13]. The F-number of such lenses is at the level of f/20, which makes this type of system dark. Another issue is the presence of a refractive surface: the imaging quality and optical power then depend on the properties of the medium surrounding the lens. A change in the refractive index of the medium, e.g. by immersing the lens in a liquid, precludes its correct operation. This is a serious limitation for experiments in which very small biological objects are observed in microscopic setups requiring immersion liquid.
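The two fill factors quoted above follow from simple packing geometry; a minimal sketch in Python (independent of any particular product):

import math

# One circle of diameter p per p x p square cell vs per hexagonal cell
# of area (sqrt(3)/2) * p^2.
ff_square = math.pi / 4                   # ~78.5%
ff_hex = math.pi / (2 * math.sqrt(3))     # ~90.7%
print(f"square: {ff_square:.1%}, hexagonal: {ff_hex:.1%}")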

These disadvantages can be avoided with gradient index (GRIN) microlenses: hexagonal lenses in a hexagonal arrangement give a 100% fill factor, and their optical power does not depend on the medium in which they are used. GRIN microlens arrays have not been considered so far for LFC designs for two reasons. First, an appropriate fabrication technology has been lacking. Second, the majority of commercially available GRIN lenses are optimized not for imaging applications but for fiber coupling and for light collimation and focusing [32].

Recently we reported the development of an array of nanostructured gradient index (nGRIN) microlenses [33–35]. These arrays are fabricated using the stack-and-draw technique well known from photonic crystal fiber technology [36]. This method allows fabrication of arrays of very small individual lenses, 20 µm in diameter, with a 100% fill factor. Moreover, it makes it possible to shape the refractive index distribution arbitrarily, e.g. to optimize the imaging properties of the lenses. In addition, the microlens array is flat and thin, which simplifies its integration with other camera components. Importantly, since the focal length of a GRIN lens depends on its thickness, it can be adjusted to the needs of a particular application. Recently an array of nGRIN microlenses was used to build a record high-resolution Shack-Hartmann sensor [37], but in that work the lenses were used for focusing light, not for imaging.

In this work we demonstrate, for the first time, the application of an nGRIN microlenslet array for imaging, using two types of light field cameras as examples. The nGRIN parabolic microlens array is 270 µm thick, and the focal length of an individual lens equals approximately 75 µm over the whole wavelength range from 0.5 to 1.5 µm. The F-number is f/2.5, which makes the system relatively bright.

2. Nanostructured GRIN lenses

We consider a hexagonal array of nGRIN microlenses, which guarantees the maximum fill factor. Moreover, owing to their small size and very short focal length, the nGRIN microlenses offer a high brightness of f/2.5. An additional advantage of a GRIN lens array is the flatness of both surfaces, which is crucial for integration at the microscale [38]. Typical GRIN components are micro-optical elements made from an inhomogeneous medium in which the refractive index varies from point to point [39]. The standard methods of GRIN microlens fabrication, such as the ion exchange process [40], neutron irradiation, chemical vapor deposition (CVD), and polymerization [41], are not optimal. Their most important limits are the very small attainable refractive index gradients (4×10−4–4×10−2 RIU/µm) [42], the inaccuracy of the refractive index distribution, and the restriction to monotonic refractive index distributions. Moreover, the refractive index profile of most commercial GRIN lenses follows a second-order expansion of a hyperbolic secant curve (a parabola). Such a distribution is ideal for focusing light propagating close to the axis, but is not optimal for light closer to the lens edge [32].
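The relation between the two profiles follows from the Taylor expansion of the hyperbolic secant (a standard identity; here n0 denotes the axial index and g the gradient constant):
$$n(r) = n_0\,\mathrm{sech}(gr) = n_0\left(1 - \frac{(gr)^2}{2} + \frac{5(gr)^4}{24} - \cdots\right) \approx n_0\left(1 - \frac{g^2 r^2}{2}\right)$$
Truncation at second order gives the parabola; the neglected quartic term grows toward the lens edge, which is why marginal rays are focused less accurately by a parabolic profile.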

Therefore, with these limitations in mind, we have developed a method that allows fabrication of 2D GRIN microelements with a very high refractive index gradient, above 0.3 RIU/µm [43], and with an arbitrary refractive index distribution. Using this approach, we have fabricated optical elements such as elliptical GRIN microlenses [33], microlenses for Gaussian beam focusing [34], large diameter nGRIN microlenses [35], axicons [44], vortex masks [45], and diffractive optical elements [46].

To design and develop the array of nGRIN microlenslets we use the Effective Medium Theory (EMT) [47] and the stack-and-draw process, originally applied to the fabrication of photonic crystal fibers and imaging bundles [36]. Each microlens is composed of a few thousand subwavelength rods of identical diameter, made of two types of glass with different refractive indices. In the current approach, we assumed a parabolic refractive index distribution for the lenses. However, as noted above, the stack-and-draw technique allows the refractive index distribution to be shaped freely, in particular to optimize the imaging quality. The optimal arrangement of the two types of glass rods is calculated so as to mimic the continuous parabolic refractive index distribution according to EMT (Figs. 2(a) and 2(b)) [48].
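As a crude illustration of this quantization step, the sketch below assigns F2 or NC21 glass to each rod position so that the locally averaged permittivity follows the target parabolic profile. The simple volume-fraction mixing rule and the stochastic dithering are simplifying assumptions; the authors' actual optimization (refs [43,48]) is more sophisticated.

import numpy as np

n_F2, n_NC21 = 1.62, 1.53      # approximate indices at 632.8 nm
A = 0.1115                     # gradient constant (r normalized to 1)

def target_index(r):
    return n_F2 * (1 - 0.5 * A * r**2)

def f2_fraction(r):
    # volume fraction of F2 from simple permittivity averaging:
    # n_eff^2 = f*n_F2^2 + (1 - f)*n_NC21^2
    return (target_index(r)**2 - n_NC21**2) / (n_F2**2 - n_NC21**2)

rings = 50                     # stack radius in rod pitches (illustrative)
rods = []
for q in range(-rings, rings + 1):
    for s in range(-rings, rings + 1):
        x, y = q + s / 2.0, s * np.sqrt(3) / 2.0   # hexagonal lattice
        if np.hypot(x, y) <= rings:
            rods.append((x, y))

rng = np.random.default_rng(0)  # stochastic dither (assumption)
pattern = ["F2" if rng.random() < f2_fraction(np.hypot(x, y) / rings)
           else "NC21" for x, y in rods]
print(len(pattern), "rods, F2 share =",
      round(pattern.count("F2") / len(pattern), 2))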

Fig. 2. Array of nGRIN microlenslets fabricated using the stack-and-draw technique: (a) design of an individual hexagonal nGRIN lens composed of 7651 rods made of two borosilicate glasses, (b) refractive index distribution for an ideal lens at a wavelength of λ=632.8 nm, (c) microscopic view of the whole array of 469 nGRIN microlenses ordered in a hexagonal lattice with 100% fill factor, and (d) enlarged fragment of the array. The diameter of an individual nGRIN microlens is 20 µm.

To fabricate the parabolic lens array shown in Fig. 2(c) we used a pair of thermally matched silicate glasses. The first glass, with the higher refractive index, is the commercially available high-index lead-silicate glass F2. The second is an in-house synthesized low-index silicate glass labelled NC21 (55% SiO2, 1% Al2O3, 26% B2O3, 3% Li2O, 9.5% Na2O, 5.5% K2O, 0.8% As2O3). The refractive index difference between the two glasses depends on the wavelength and equals approximately 0.09 RIU within the visible range.

Next, rods of both glasses, approximately 12 cm long and 500 µm in diameter, are made using standard glass processing on a fiber drawing tower. The rods are stacked into a hexagonal preform according to the calculated pattern (Fig. 2(a)). In our case, the preform consists of 7651 rods, ordered in a pattern whose local effective refractive index corresponds to the parabolic refractive index distribution of a GRIN lens:

$$n = n_{F2}\left(1 - \frac{A}{2}r^2\right)$$
where nF2 is the refractive index of F2 glass, r is the normalized radius of the microlens, varying from 0 at the center of the lens to 1 at its outer edge, and A is the gradient constant, equal to A = 0.1115 µm−2. The value of A is selected so that the refractive index n equals the refractive index of NC21 glass at r = 1 for the design wavelength λ = 632.8 nm.
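A quick numerical check of this choice of A (a sketch treating A as dimensionless for the normalized radius, with nF2 ≈ 1.62 at 632.8 nm as an assumed value):

n_F2, delta_n, A = 1.62, 0.09, 0.1115
n_edge = n_F2 * (1 - A / 2)          # index at r = 1
print(n_edge, n_F2 - delta_n)        # ~1.5297 vs 1.53 -> consistent with NC21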

In the following step, the hexagonal preform is drawn, which scales the structure down approximately 25 times. The result of this process is an integrated all-glass hexagonal rod with a diameter of 1 mm, which maintains the internal binary structure of the nGRIN lens. Next, the hexagonal rod is cut into 10 cm long segments, which are arranged in a hexagonal array of 469 elements. This structure is inserted into an NC21 tube to create an intermediate preform. After the next drawing process, a final rod with a diameter of 318.5 µm is obtained. In the last step, the rod is cut into slices, which are ground and polished to a thickness of 270 µm. Finally, we obtain a series of hexagonal arrays of nGRIN parabolic microlenses, where the diameter of an individual lens is 20 µm and the fill factor is 100%. All microlenses have identical geometrical parameters and optical properties, corresponding to the designed continuous distribution shown in Fig. 2(b).

3. Light field camera type 1.0

The light field camera type 1.0, based on an array of nGRIN lenslets, was built for imaging submillimeter objects in accordance with the scheme shown in Fig. 3(a). In this setup the nGRIN microlens array is positioned in the focal plane of the main lens, and each microlens generates its own focal spot directly on the camera image sensor. The principle of image reconstruction is based on Fourier analysis of all registered focal-spot images: a 4D light field is constructed numerically and processed with the Fourier Slice Photography (FSP) algorithm [49].

Fig. 3. Plenoptic camera type 1.0: (a) scheme of the camera system with the nGRIN microlens array, (b) final configuration with test object and additional magnifying system.

As the test object, two sections of 125 µm diameter optical fiber are used. The fibers are glued perpendicularly to each other on opposite sides of a 100 µm thick glass plate (Fig. 3(b)). The object was illuminated with a collimated white light beam. The nGRIN lens array was positioned in the image plane of a main lens with a focal length of 25 mm. The image is registered with a CCD image sensor with a pixel size of 3 µm. In order to increase the spatial resolution of the registered image, an additional x20 magnification microscope lens was used to project the image onto the camera chip (Fig. 3(b)).

An example of an image registered in the camera system is shown in Fig. 4. Use of the FSP algorithm requires reference information about the positions of the centers and the sizes of all spots generated by the individual lenses. For this purpose, at the camera calibration stage, a single reference image was registered (Fig. 5(a)) in the LFC system without an input scene. Then, during normal operation of the camera, 469 sub-images of the photographed object were obtained, one per microlens. Additionally, because the FSP algorithm requires a rectangular array of sub-images for 3D reconstruction, the data set was extended with additional sub-images (Fig. 5(b)) that do not contain information about the input scene. In the next step of the FSP algorithm, a four-dimensional array (the 4D light field) is constructed, in which each row and column holds an individual sub-image (Fig. 5(c)). The final reconstruction uses the full information from all 625 sub-images. First, according to the equation [49]:

$$P_\alpha = \mathcal{F}^{-2} \circ \beta_\alpha \circ \mathcal{F}^{4}\,(L_F)$$
the 4D light field LF (Fig. 5(c) and Fig. 6(a)) is transformed by a 4D Fourier transform. Next, a 2D cross-section βα is selected from the obtained 4D space (Fig. 6(b)); it contains information about, e.g., the appearance of the examined object at a given angle α (Fig. 6(c)). In the last step, the final reconstruction of the scene Pα at the given angle α is obtained using a 2D inverse Fourier transform (Fig. 6(d)). An example of the reconstruction of an input scene, based on the calculated depth map, for three different observer positions is shown in Fig. 7.
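A minimal sketch of this reconstruction step, assuming a light field of 25×25 sub-images and linear interpolation of the Fourier-domain slice; the array layout and the frequency-axis scaling are illustrative assumptions, not the authors' implementation:

import numpy as np
from scipy.ndimage import map_coordinates

def refocus_fsp(L, alpha):
    # L: 4D light field, shape (U, V, S, T) = (lens row, lens col, y, x)
    U, V, S, T = L.shape
    Lhat = np.fft.fftshift(np.fft.fftn(L))          # 4D Fourier transform

    # sample the slice Lhat(alpha*fs, alpha*ft, (1-alpha)*fs, (1-alpha)*ft)
    fs, ft = np.meshgrid(np.arange(S), np.arange(T), indexing="ij")
    cs, ct = (S - 1) / 2, (T - 1) / 2               # zero-frequency centres
    cu, cv = (U - 1) / 2, (V - 1) / 2
    coords = np.stack([cu + alpha * (fs - cs) * U / S,
                       cv + alpha * (ft - ct) * V / T,
                       cs + (1 - alpha) * (fs - cs),
                       ct + (1 - alpha) * (ft - ct)])
    sl = (map_coordinates(Lhat.real, coords, order=1)
          + 1j * map_coordinates(Lhat.imag, coords, order=1))

    # the 2D inverse transform yields the photograph refocused at alpha
    return np.abs(np.fft.ifft2(np.fft.ifftshift(sl)))

# example: 25 x 25 sub-images of 32 x 32 pixels each
photo = refocus_fsp(np.random.rand(25, 25, 32, 32), alpha=0.9)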

Fig. 4. Sample of images registered in the light field camera type 1.0 system.

Fig. 5. Reconstruction of the input scene: (a) reference image, (b) image with marked positions of individual lenses with additional lenses, (c) 4D light field.

Fig. 6. Reconstruction of the input scene: (a) the most important steps of the Fourier Slice Photography algorithm, (b) image with marked positions of individual lenses, (c) image with additional lenses, (d) 4D light field.

Fig. 7. Example of the reconstruction of an input scene for 3 different observation positions. An animation of the reconstructed object at different angles is available in the additional materials (Visualization 1).

The obtained results show the ability to compute the appearance of the input scene from different angles using a single image from a light field camera with an nGRIN microlens array. However, due to the relatively small number of microlenses, and thus of sub-images, the field of view (500×500×500 µm) and depth resolution (20 µm) of the reconstructed image are not high; the spatial resolution (20 µm) corresponds to the distance between neighboring microlenses.

4. Light field camera type 2.0 for microscopes

The light field camera type 2.0, also called the superresolution plenoptic camera 2.0 [50], is usually built according to the scheme shown in Fig. 1(b) [51]. In order to achieve extended functionality, we modified this configuration by changing the position of the lens array. In our modified system, the array of microlenses is located on a translation stage between the sample and the microscope objective (Fig. 8(a)). Depending on the relative distances between the tested sample, the microlens array, and the microscope objective, one can observe either a sharp sample with a blurred microlens array (Fig. 8(b)) or sharp sub-images generated by the individual microlenses (Fig. 8(c)).

Fig. 8. Proposed plenoptic camera type 2.0: (a) general overview, (b) sharp test scene (moss protonemata) visible in a microscopic system with blurred array of microlenses, (c) sharp sub-images generated by individual microlenses.

Such positioning of the microlens array simplifies the system and makes it possible to enhance any standard camera or standard microscope for 3D imaging without modifying it. What is more, the number of camera pixels that 'analyse' the image delivered by a single microlens can be adjusted by changing the magnification of the microscope objective. Examples of images captured in the proposed setup at different magnifications are shown in Fig. 9. It should also be noted that if the measurement were performed in an immersion microscope system, the flat nGRIN lens array would still fulfil its function.
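For example, the number of pixels per sub-image scales linearly with the objective magnification; in the sketch below the 3 µm sensor pixel pitch is an assumed value (the paper does not state the pixel size for this setup):

lens_um, pixel_um = 20.0, 3.0
for mag in (20, 40, 100):
    print(f"x{mag}: ~{lens_um * mag / pixel_um:.0f} pixels per sub-image")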

Fig. 9. Examples of images recorded using the proposed setup. Microscope objective magnification: (a) x20, (b) x40, (c) x100. The image shows a basswood stem.

In our setup, the relation between the thickness of the GRIN lens (270 µm) and its focal length (20 µm from the end face of the lens to the focal plane) means that each microlens creates an inverted image (Fig. 10(a)). Moreover, the images created by neighbouring lenses partially overlap (Fig. 10(b) and Fig. 11(a)). Finally, due to the parallax effect, for objects at different depths in the sample, the relative distance between their images varies with the angle of observation (Fig. 10(c) and Fig. 11(b)).
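In software, this image-formation model implies a simple preprocessing step before any matching: cutting out the sub-image behind each microlens and rotating it by 180 degrees to undo the inversion of Fig. 10(a). A sketch, in which the centre coordinates and the sub-image half-size in pixels are illustrative assumptions:

def extract_subimages(frame, centers, half=32):
    # frame: 2D image array; centers: integer (row, col) microlens centres
    subs = []
    for r, c in centers:
        sub = frame[r - half:r + half, c - half:c + half]
        subs.append(sub[::-1, ::-1])       # 180-degree rotation
    return subs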

Fig. 10. Image formation in a light field camera with GRIN lenses: (a) ray path illustrating image inversion by the microlens, (b) comparison of images from neighboring microlenses, (c) change of the distance between images of two elements placed at different depths of the sample.

Fig. 11. Image analysis: (a) partial overlapping of neighbouring images, (b) neighbouring sub-images with the same elements marked. The image shows moss protonemata.

A microscopic sample of moss protonemata (Fig. 8(b)) was selected as the test scene. The image was registered in the system shown in Fig. 8(a) using a microscope objective with a magnification of x100. The 3D image reconstruction is based on the analysis of differences in the relative distances between individual parts of the same image fragment registered by neighboring lenses (Fig. 11(b) and Fig. 9(c)). The SIFT (Scale-Invariant Feature Transform) method [52] (from the Matlab Computer Vision Toolbox [53]) was used for this purpose. However, this method only found characteristic points in images such as the one shown in Fig. 11(b). Therefore, the fast normalized cross-correlation method [54,55] was used for pairs of images where the first method did not give correct results.
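As an illustration, the matching step can be sketched with OpenCV's SIFT implementation standing in for the Matlab toolbox used in the paper; the ratio-test threshold of 0.75 is an assumed value:

import cv2

def match_subimages(img_a, img_b):
    # find corresponding points between two neighbouring sub-images
    # (8-bit grayscale arrays)
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return []                # caller falls back to cross-correlation
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_a, des_b, k=2)
    # Lowe's ratio test rejects ambiguous matches
    good = [p[0] for p in matches
            if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    return [(kp_a[m.queryIdx].pt, kp_b[m.trainIdx].pt) for m in good]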

Both algorithms work in a similar way. For each neighboring pair of images (such as in Fig. 11(b)), the algorithm finds corresponding points. The differences in the relative distances between the same points found in neighboring images allow the position of a given point in 3D space to be determined (according to Fig. 10(c)), creating a so-called disparity map. Because each point of the input scene is registered by several microlenses, the final position of the point in 3D space is calculated as the average over the individual disparity maps. Finally, after taking into account all points from all images, a point cloud representing the examined object is created (Fig. 12(a)). However, due to the imperfections of the nGRIN microlenses and the lack of optimization of the refractive index distribution of each microlens for imaging quality, and thus the distortion of the sub-images, the depth estimation is imprecise (Fig. 12(a)). Therefore, in order to better visualize the results, we extended the algorithm with preprocessing steps: first, the background was removed based on the colors of individual pixels; second, the local average elevation of the points was calculated; finally, a discretization procedure reduced the available elevations to several levels. The obtained results (Fig. 12(b)) show the feasibility of reconstructing the 3D input scene. The field of view and spatial resolution depend in this case on the magnification of the microscope objective used. For the reconstructed images in Fig. 12, the field of view is 150×115×80 µm, the spatial resolution is about 2 µm, and the depth resolution is around 10 µm. These resolutions do not match the best reported values [27,56], because they were obtained using flat nanostructured GRIN lenses that were not optimized for imaging quality. However, we expect that an nGRIN lens array with an imaging-optimized refractive index distribution will provide resolution at the level of the best available light field cameras.
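A sketch of the disparity-to-depth and discretization steps under an assumed linear depth model; the constant k_depth, the pitch value, and the number of levels are illustrative, and the authors' calibration is not reproduced here:

import numpy as np

def depth_from_matches(pairs, pitch_px, k_depth=1.0):
    # pairs: ((xa, ya), (xb, yb)) matches between two horizontally
    # neighbouring sub-images whose centres are pitch_px apart
    points = []
    for (xa, ya), (xb, yb) in pairs:
        disparity = pitch_px - (xb - xa)   # shift relative to the lens pitch
        points.append((xa, ya, k_depth * disparity))
    return np.array(points)

def discretize_depths(points, levels=5):
    # reduce the estimated elevations to a few discrete levels,
    # as in the visualization step described above
    z = points[:, 2]
    edges = np.linspace(z.min(), z.max(), levels + 1)
    idx = np.clip(np.digitize(z, edges) - 1, 0, levels - 1)
    out = points.copy()
    out[:, 2] = ((edges[:-1] + edges[1:]) / 2)[idx]
    return out

Because each scene point is seen by several lenses, the depths obtained from all lens pairs would be averaged before discretization, as described above.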

Fig. 12. Reconstruction of the image: (a) point cloud after reconstruction without background removal and discretization, (b) reconstructed image after limiting the available depths to several levels and removing the background. The image shows moss protonemata. An animation of the reconstructed scene at different angles, with background reduction and discretization levels, is available in the additional materials (Visualization 2).

5. Conclusions

We have demonstrated the imaging performance of a flat-surface nanostructured GRIN microlens array. Even though its parabolic refractive index distribution is not ideal in terms of imaging properties, we have shown that the nanostructured GRIN lens array is sufficient for building imaging setups. This proves its applicability, which is not obvious given the nanostructured design. We used an array of 469 microlenses arranged in a hexagonal lattice with a 100% fill factor (20 µm diameter of a single lens). Each lens was fabricated by the stack-and-draw technique and consists of 7651 rods of two thermally matched glasses with different refractive indices. The size of the individual inclusions and the appropriate spatial distribution of the two glasses ensure that such an element works properly as a GRIN lens in the visible range, despite the internal binary nanostructure of every microlens.

At this stage of the work, each lens had a parabolic refractive index distribution, which is not the optimal shape in terms of imaging quality. Nevertheless, as we have shown, the imaging quality was sufficient to build imaging systems based on the nanostructured lenses. As an example, we demonstrated that the fabricated array of parabolic nGRIN lenses can be used as a key component of two types of plenoptic cameras. In the case of camera type 1.0, a good reconstruction of the tested element was obtained with a field of view of 500×500×500 µm and both spatial and depth resolution of 20 µm. In the case of the light field camera type 2.0, we showed that it is possible to reconstruct the image with a resolution similar to that of a typical macroscopic system; the field of view and spatial resolution can be modified by changing the magnification of the microscope objective. We demonstrated the reconstruction of a 3D image of moss protonemata with a field of view of 150×115 µm, a depth range of 80 µm, a spatial resolution of 2 µm, and a depth resolution of 10 µm.

The proposed nGRIN lens array is especially attractive for 3D imaging of micro-objects in plenoptic camera type 2.0 systems with the microlens array placed just behind the examined object. It allows 3D images to be registered using conventional microscope systems and traditional cameras without modification. However, such use of a lens array is only possible when the array itself is thin (no more than several hundred µm) and each lens has a short focal length, on the order of 100 µm; such elements are difficult or impossible to obtain with other techniques. Our fabrication method, based on composing a lens from rods of two different glasses, not only allows us to fabricate elements that meet these requirements, but also, potentially, to optimize the refractive index distribution for better imaging. What is more, the nGRIN lens array we used has a very small diameter, a 100% fill factor, and the unique feature of flat surfaces. This is an advantage when 3D imaging is performed in a microscope system with immersion liquids: the imaging properties of the lenses do not change, as is the case for refractive lenses; only the focal length decreases. This opens up the possibility of using such an element in biological research for 3D imaging in in-vitro and in-vivo systems using low-cost microscope systems.

Funding

Fundacja na rzecz Nauki Polskiej (POIR.04.04.00-1C74/16).

References

1. V. Toal, Introduction to Holography (CRC, Boca Raton, 2011).

2. E. Y. Lam, “Computational photography with plenoptic camera and light field capture: tutorial,” J. Opt. Soc. Am. A 32(11), 2021–2032 (2015). [CrossRef]  

3. G. Lippmann, “Epreuves reversibles donnant la sensation du relief,” J. Phys. 7, 821–825 (1908). [CrossRef]  

4. D. E. Roberts and T. Smith, “The History of Integral Print Methods”, pp. 1–21, http://lenticulartechnology.com/files/2014/02/Integral-History.pdf

5. F. Okano, H. Hoshino, J. Arai, and I. Yuyama, “Real-time pickup method for a three-dimensional image based on integral photography,” Appl. Opt. 36(7), 1598–1603 (1997). [CrossRef]  

6. H. Arimoto and B. Javidi, “Integral three-dimensional imaging with computed reconstruction,” Opt. Lett. 26(3), 157–159 (2001). [CrossRef]  

7. A. Stern and B. Javidi, “3D Image Sensing and Reconstruction with Time-Division Multiplexed Computational Integral Imaging (CII),” Appl. Opt. 42(35), 7036–7042 (2003). [CrossRef]  

8. E. H. Adelson and J. R. Bergen, "The plenoptic function and the elements of early vision," in Computational Models of Visual Processing (MIT, Cambridge, 1991).

9. M. Levoy and P. Hanrahan, “Light field rendering,” in Proceeding of Siggraph (ACM, 1996), pp. 31–42.

10. N. X. Xiao, B. Javidi, M. Martinez-Corral, and A. Stern, “Advances in Three-Dimensional Integral Imaging: Sensing, Display, and Applications,” Appl. Opt. 52(4), 546–560 (2013). [CrossRef]  

11. E. H. Adelson and J. Y. A. Wang, "Single lens stereo with a plenoptic camera," IEEE Trans. Pattern Anal. Mach. Intell. 14(2), 99–106 (1992).

12. Lytro, The Lytro Camera. http://lytro.com

13. Raytrix, 3D light field camera technology. http://www.raytrix.de

14. J. S. Jang and B. Javidi, “Three-dimensional Integral Imaging of Micro-objects,” Opt. Lett. 29(11), 1230–1232 (2004). [CrossRef]  

15. A. Llavador, J. Sola-Pikabea, G. Saavedra, B. Javidi, and M. Martinez-Corral, “Resolution improvements in integral microscopy with Fourier plane recording,” Opt. Express 24(18), 20792–20798 (2016). [CrossRef]  

16. B. Javidi, I. Moon, and S. Yeom, “Three-dimensional identification of biological microorganism using integral imaging,” Opt. Express 14(25), 12096–12108 (2006). [CrossRef]  

17. J. Fiss, B. Curless, and R. Szeliski, “Refocusing plenoptic images using depth-adaptive splatting,” in Proceeding of IEEE International Conference on Computational Photography (ICCP) (IEEE, 2014), 14383062.

18. H. Jeon, J. Park, G. Choe, J. Park, Y. Bok, Y. Tai, and I. Kweon, “Accurate depth map estimation from a lenslet light field camera,” in Proceeding of IEEE International Conference on Computer Vision and Pattern Recognition (IEEE, 2015), 15538728.

19. L. McMillan and G. Bishop, "Plenoptic modeling: An image-based rendering system," in Proceeding of Siggraph (ACM, 1995), pp. 39–46.

20. E. Y. Lam, G. Bennett, C. Fernandez-Cull, D. Gerwe, M. Kriss, and Z. Zalevsky, “Imaging systems and signal recovery: introduction to feature issue,” Appl. Opt. 54(13), IS1–IS2 (2015). [CrossRef]  

21. M. Martinez-Corral, A. Dorado, J. C. Barreiro, G. Saavedra, and B. Javidi, "Recent advances in the capture and display of macroscopic and microscopic 3D scenes by integral imaging," Proc. IEEE 105, 825–836 (2017).

22. M. Martinez-Corral and B. Javidi, “Fundamentals of 3D imaging and displays: A tutorial on integral imaging, Lightfield, and plenoptic systems,” Adv. Opt. Photonics 10(3), 512–566 (2018). [CrossRef]  

23. A. Stern and B. Javidi, "3D Image Sensing, Visualization, and Processing using Integral Imaging," Proc. IEEE 94(3), 591–608 (2006).

24. T. Georgeiv, K. C. Zheng, B. Curless, D. Salesin, S. Nayar, and C. Intwala, “Spatio-angular resolution tradeoff in integral photography,” in Proceeding of Eurographics Symposium on Rendering, (EGSR, 2006), pp. 263–272.

25. S. Prasad, “Digital superresolution and the generalized sampling theorem,” J. Opt. Soc. Am. A 24(2), 311–325 (2007). [CrossRef]  

26. F. Pérez Nava and J. P. Lüke, “Simultaneous estimation of super-resolved depth and all-in-focus Images from a plenoptic camera,” in Proceeding of 3DTV Conference, (3DTV, 2009), 10701793.

27. R. Prevedel, Y. G. Yoon, M. Hoffmann, N. Pak, G. Wetzstein, S. Kato, T. Schrödel, R. Raskar, and M. Zimmer, “Simultaneous whole-animal 3D imaging of neuronal activity using light-field microscopy,” Nat. Methods 11(7), 727–730 (2014). [CrossRef]  

28. R. Ng, M. Levoy, M. Bredif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” in Stanford Tech Report CTSR (2005).

29. F. Jin, J. S. Jang, and B. Javidi, “Effects of device resolution on three-dimensional integral imaging,” Opt. Lett. 29(12), 1345–1347 (2004). [CrossRef]  

30. R. Ng, “Digital light field photography,” PhD thesis, Stanford University, Stanford, CA, USA, Adviser: Patrick Hanrahan (2006).

31. A. Lumsdaine and T. Georgiev, “Full resolution lightfield rendering,” Tech. rep., Adobe Systems (2008).

32. L. Xuejin, Y. Jianquan, and Z. Baigang, "Analyses on propagation and imaging properties of GRIN lenses," Proc. SPIE 4919, 155–160 (2002).

33. F. Hudelist, J. M. Nowosielski, R. Buczynski, A. J. Waddie, and M. R. Taghizadeh, “Nanostructured elliptical gradient-index microlenses,” Opt. Lett. 35(2), 130–132 (2010). [CrossRef]  

34. J. M. Nowosielski, R. Buczynski, F. Hudelist, A. J. Waddie, and M. R. Taghizadeh, “Nanostructured GRIN microlenses for Gaussian beam focusing,” Opt. Commun. 283(9), 1938–1944 (2010). [CrossRef]  

35. J. M. Nowosielski, R. Buczynski, A. J. Waddie, A. Filipkowski, D. Pysz, A. McCarthy, R. Stepien, and M. R. Taghizadeh, “Large diameter nanostructured gradient index lens,” Opt. Express 20(11), 11767–11777 (2012). [CrossRef]  

36. J. Cimek, R. Stepien, G. Stepniewski, B. Siwicki, P. Stafiej, M. Klimczak, D. Pysz, and R. Buczyński, “High contrast glasses for all-solid fibers fabrication,” Opt. Mater. 62, 159–163 (2016). [CrossRef]  

37. R. Kasztelanic, A. Filipkowski, D. Pysz, R. Stepien, A. J. Waddie, M. R. Taghizadeh, and R. Buczynski, “High resolution Shack-Hartmann sensor based on array of nanostructured GRIN lenses,” Opt. Express 25(3), 1680–1691 (2017). [CrossRef]  

38. C. Gómez-Reino, M. V. Perez, C. Bao, and M. T. Flores-Arias, “Design of GRIN optical components for coupling and interconnects,” Laser Photonics Rev. 2(3), 203–215 (2008). [CrossRef]  

39. C. Gómez-Reino, M. V. Perez, and C. Bao, Gradient-index Optics: Fundamentals and Applications (Springer, 2002).

40. J. R. Hensler, “Method of Producing a Refractive Index Gradient in Glass,” U.S. Patent 3,873,408 (25 Mar. 1975).

41. J. Teichman, J. Holzer, B. Balko, B. Fisher, and L. Buckley, “Gradient Index Optics at DARPA,” Institute For Defense Analyses (2013).

42. Y. Huang and S. T. Ho, “Superhigh numerical aperture (NA > 1.5) micro gradient-index lens based on a dualmaterial approach,” Opt. Lett. 30(11), 1291–1293 (2005). [CrossRef]  

43. F. Hudelist, R. Buczynski, A. J. Waddie, and M. R. Taghizadeh, “Design and fabrication of nano-structured gradient index microlenses,” Opt. Express 17(5), 3255–3263 (2009). [CrossRef]  

44. A. Filipkowski, B. Piechal, D. Pysz, R. Stepien, A. J. Waddie, M. R. Taghizadeh, and R. Buczynski, "Nanostructured gradient index microaxicons made by a modified stack and draw method," Opt. Lett. 40(22), 5200–5203 (2015). [CrossRef]

45. K. Switkowski, A. Anuszkiewicz, A. Filipkowski, D. Pysz, R. Stepien, W. Krolikowski, and R. Buczynski, “Formation of optical vortices with all-glass nanostructured gradient index masks,” Opt. Express 25(25), 31443–31450 (2017). [CrossRef]  

46. J. Pniewski, R. Kasztelanic, B. Piechal, J. M. Nowosielski, A. J. Waddie, I. Kujawa, R. Stepien, M. R. Taghizadeh, and R. Buczynski, “Diffractive optics development using a modified stack-and-draw technique,” Appl. Opt. 55(18), 4939–4945 (2016). [CrossRef]  

47. A. Sihvola, Electromagnetic Mixing Formulas and Applications (The Institution of Electrical Engineers, 1999).

48. X. Zhang and Y. Wu, “Effective medium theory for anisotropic metamaterials,” Sci. Rep. 5(1), 7892 (2015). [CrossRef]  

49. R. Ng, “Fourier slice photography,” in Proceeding of Siggraph (ACM, 2005), pp. 735–744.

50. T. Georgiev and A. Lumsdaine, “Superresolution with Plenoptic Camera 2.0,” Tech. rep., Adobe Systems (2009).

51. M. Levoy, R. Ng, A. Adams, M. Footer, and M. Horowitz, "Light Field Microscopy," in Proceeding of Siggraph (ACM, 2006), pp. 924–934.

52. D. G. Lowe, “Distinctive Image Features from Scale-Invariant Keypoints,” Int. J. Comp. Vision 60(2), 91–110 (2004). [CrossRef]  

53. Mathworks Matlab, Computer Vision Toolbox. www.mathworks.com/products/computer-vision.html

54. S. D. Wei and S. H. Lai, "Fast template matching based on normalized cross correlation with adaptive multilevel winner update," IEEE Trans. Image Process. 17(11), 2227–2235 (2008).

55. Z. Feng, H. Qingming, and G. Wen, "Image matching by normalized cross-correlation," in Proceeding of IEEE International Conference on Acoustics, Speech and Signal Processing (IEEE, 2006), pp. 729–732.

56. M. Broxton, L. Grosenick, S. Yang, N. Cohen, A. Andalman, K. Deisseroth, and M. Levoy, “Wave optics theory and 3-D deconvolution for the light field microscope,” Opt. Express 21(21), 25418–25439 (2013). [CrossRef]  

Supplementary Material (2)

Visualization 1       Animation of the input scene (two sections of 125 µm diameter optical fiber glued perpendicularly to each other on both sides of a 100 µm thick glass plate) reconstructed by the light field camera (type 1.0) at different angles.
Visualization 2       Animation of the reconstructed scene (moss protonemata) by the light field camera (type 2.0) at different angles, with the background reduction and discretization levels.
