
Down-sampling slim camera using a micro-lens array

Open Access

Abstract

The thickness of a camera is proportional to the image distance, even if the lens is replaced by a flat optic such as a metalens. There is currently no suitable method to reduce this thickness for low-cost applications. Here we propose a novel down-sampling slim camera based on a micro-lens array (MLA) and an array sensor. By down-sampling the multiple micro images with a suitable array sensor, an enlarged image appears directly. Since the imaging module consists of only a low-resolution array sensor and an MLA, the thickness of the camera can be reduced to the sub-millimeter scale. The proposed low-cost slim camera is particularly suitable for imaging and sensing in the internet of things (IoT). It also has great application potential in the imaging of non-visible light.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

For a conventional imaging system, both the image distance and the image size are roughly proportional to the effective focal length (EFL). To produce a large image, an imaging system with a large EFL is usually applied. The consequence is an increase in system size, weight, and cost. Although the use of a metalens [1–7] or a planar lens [8–10] can reduce the thickness of the lens itself, the image distance is unchanged and thus the problem remains. By using a special high-effective-refractive-index spaceplate, the image distance can be slightly reduced [11], but both the weight and the cost are increased. This problem may occur in low-cost imaging systems or in non-visible-light imaging systems, in which a high-resolution camera is unavailable, i.e., the resolution of the camera is much less than that of the imaging system. A fiber-optic magnifier (or fiber-optic taper), which is a fiber bundle whose two facets differ in size, is another way to generate a magnified image in a thin volume [12,13]. It produces an enlarged image on the output facet while the object is on the input facet. However, its cost is still high, and another imaging system must be placed in front of the fiber-optic magnifier. Therefore, a low-cost and simple method is still in demand. To this end, in this paper we propose a slim camera that uses a micro-lens array (MLA) together with an array sensor. Although MLAs have been widely applied in the plenoptic camera [14,15] and the compound-eye camera [16–18], the intermediate image they produce must be further processed computationally to retrieve the final image. By contrast, in the proposed method the imaging is purely optical, and thus digital post-processing is unnecessary. This feature also benefits a low-cost imaging system. The remainder of this paper is organized as follows. In section 2, we introduce the concept of our method together with simulation results. Experimental results are demonstrated and discussed in section 3. Finally, concluding remarks are provided in section 4.

2. Method

2.1 Principle

Our proposed system consists of an MLA and an array sensor, as shown in Fig. 1. Because the EFL of the MLA is usually small, the separation between the MLA and the array sensor can be very small. Meanwhile, every lenslet of the MLA produces a small image at the sensor plane. If the pixel pitch of the array sensor is comparable to the separation between the small images, an enlarged image can be obtained directly. The sampling concept is similar to that of the moiré magnifier in displays [19,20]. To begin with, the MLA generates an array of small images on the image plane, as shown in Fig. 2(a). For simplicity, it is supposed that all the small images are identical, and thus the intensity on the image plane can be expressed as

$${I_0}(x,y) = f(x,y) \otimes \textrm{comb}(\frac{x}{{\tilde{X}}},\frac{y}{{\tilde{Y}}}) = \sum\limits_m {\sum\limits_n {f(x - m\tilde{X},y - n\tilde{Y})} } = \sum\limits_m {\sum\limits_n {{f_{mn}}(x,y)} } ,$$
where $f(x,y)$ is the intensity of a single small image centered at the origin of the global image coordinates $(x,y)$, ${\otimes}$ denotes the two-dimensional (2D) convolution, and $\textrm{comb}(x/\tilde{X},y/\tilde{Y})$ represents the 2D comb function with separations $\tilde{X}$ along the x-axis and $\tilde{Y}$ along the y-axis. ${f_{mn}}(x,y)$ represents the duplicate of the image centered at $(m\tilde{X},n\tilde{Y})$. It should be noted that $\tilde{X}$ and $\tilde{Y}$ represent the separations of the duplicates, while the separations between lenslets are X and Y, as shown in Fig. 3. By comparing the similar triangles A and B, $\tilde{X}$ and $\tilde{Y}$ are found to be
$$\begin{aligned} \tilde{X} &= \rho X = (1 + {M_L})X\\ \tilde{Y} &= \rho Y = (1 + {M_L})Y, \end{aligned}$$
where ${M_L}$ is the magnification of a single small image. Subsequently, an array sensor on the image plane is applied to record the image. For simplicity and without loss of generality, the sampling pitch of the array sensor is assumed to be the same as that of the MLA, i.e., X and Y. It is also assumed that the active area of each sensor cell is small enough, as shown in Fig. 2(b). Therefore, we can model the array sensor with a comb function, $\textrm{comb}({{x^{\prime}} / X},{{y^{\prime}} / Y})$, where $(x^{\prime},y^{\prime})$ are the coordinates of the array sensor. In setting up the array sensor, its origin coincides with that of $(x,y)$, but it is rotated counterclockwise by a small angle $\theta$ relative to the MLA, as shown in Fig. 4(a). The resulting intensity sampled by the array sensor is thus
$${I_S}(x,y) = {I_0}(x,y) \cdot \textrm{comb}(\frac{{x^{\prime}}}{X},\frac{{y^{\prime}}}{Y})$$
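To make the sampling model concrete, here is a minimal Python sketch (our illustration, not the authors' code; all variable names are ours) that builds the rotated comb of Eq. (3) as a boolean mask and applies it to an image array:

```python
import numpy as np

# Discrete stand-in for Eq. (3): sample I0 with a point-detector comb of
# pitch X that is rotated by theta relative to the image coordinates.
N, X = 500, 50                                   # image size and pitch (pixels)
theta = np.deg2rad(2.0)
yy, xx = np.mgrid[-N // 2:N // 2, -N // 2:N // 2].astype(float)
xp = xx * np.cos(theta) + yy * np.sin(theta)     # rotated sensor coordinate x'
yp = -xx * np.sin(theta) + yy * np.cos(theta)    # rotated sensor coordinate y'

def on_grid(c, pitch):
    """True where c lies within half a pixel of an integer multiple of pitch."""
    return np.abs(((c + pitch / 2) % pitch) - pitch / 2) < 0.5

comb = on_grid(xp, X) & on_grid(yp, X)           # comb(x'/X, y'/Y) with X = Y
I0 = np.random.rand(N, N)                        # placeholder for the MLA image
IS = I0 * comb                                   # Eq. (3): only comb sites survive
```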

Fig. 1. Schematic of our proposed system, which consists of an MLA and an array sensor.

Fig. 2. Geometrical setup. (a) The array of small images. (b) The array sensor.

Fig. 3. The geometry between the object and the images.

Fig. 4. The geometrical relationship between the devices and the resulting image. (a) The small images and the array sensor with a small rotation angle $\theta$. (b) The corresponding samples on a virtual image.

The comb function in Eq. (3) is re-written in the $(x,y)$ coordinates as

$$\begin{aligned} &\textrm{comb}(\frac{{x^{\prime}}}{X},\frac{{y^{\prime}}}{Y}) = \sum\limits_{m^{\prime}} {\sum\limits_{n^{\prime}} {\delta (x^{\prime} - m^{\prime}X,y^{\prime} - n^{\prime}Y)} } \\ &= \sum\limits_{m^{\prime}} {\sum\limits_{n^{\prime}} {\delta (x\cos \theta + y\sin \theta - m^{\prime}X, - x\sin \theta + y\cos \theta - n^{\prime}Y)} } , \end{aligned}$$
by applying the rotation transformation, $x^{\prime} = x\cos \theta + y\sin \theta$ and $y^{\prime} = -x\sin \theta + y\cos \theta$. Hence the locations of the Dirac delta functions in Eq. (4) satisfy
$$\left\{ \begin{array}{l} x\cos \theta + y\sin \theta - m^{\prime}X = 0\\ - x\sin \theta + y\cos \theta - n^{\prime}Y = 0 \end{array} \right..$$

By solving Eq. (5), the sampling location is found to be at

$$\begin{aligned} x &= m\cos \theta X - n\sin \theta Y\\ y &= m\sin \theta X + n\cos \theta Y. \end{aligned}$$

Here, we have assumed that the image ${f_{mn}}(x,y)$ is always sampled by the point detector indexed $(m^{\prime},n^{\prime})$, i.e., $m = m^{\prime}$, $n = n^{\prime}$. The sampling location described by Eq. (6) is in the image coordinates. Subsequently, we consider the sampling location relative to the center of the corresponding small image, i.e., in the local coordinates of that small image. Since the image ${f_{mn}}(x,y)$ is centered at $(x,y) = (m\tilde{X},n\tilde{Y}) = (m\rho X,n\rho Y)$, the sampling point can be represented in the local coordinates at

$$(x^{\prime\prime},y^{\prime\prime}) = [ - mX(\rho - \cos \theta ) - nY\sin \theta ,\; - nY(\rho - \cos \theta ) + mX\sin \theta ].$$

To get a clear picture of this group of points, the sampling points along the $x^{\prime}$-axis are considered as an example, as shown in Fig. 4(a). The sampling point ${s_0}$ is located at $(m,n) = (0,0)$ in Fig. 4(a). On a virtual image, the point corresponding to ${s_0}$ is at $(x^{\prime\prime},y^{\prime\prime}) = (0,0)$ by Eq. (7), and is marked as ${s^{\prime\prime}_0}$ in Fig. 4(b). For the sampling point ${s_1}$ in Fig. 4(a), $(m,n) = (1,0)$, and thus the sampled image point is at $(x^{\prime\prime},y^{\prime\prime}) = [ - X(\rho - \cos \theta ),\;X\sin \theta ]$, which is marked as ${s^{\prime\prime}_1}$ in Fig. 4(b). Again, for the sampling point ${s_2}$, $(m,n) = (2,0)$, the sampled image point is at $(x^{\prime\prime},y^{\prime\prime}) = [ - 2X(\rho - \cos \theta ),\;2X\sin \theta ]$. By repeating this procedure for points ${s_{ - 1}}$ and ${s_{ - 2}}$, we can conclude that each successive sampling location is shifted by $- X(\rho - \cos \theta )$ along the $x^{\prime\prime}$-axis and by $X\sin \theta$ along the $y^{\prime\prime}$-axis. Therefore, the sampling points align along a straight line (L) with separation $X\sqrt {2\rho - 2\cos \theta }$ in the coordinates of the virtual image. On the other hand, in the image coordinates [Fig. 4(a)], the sampling pitch between points is always X along the $x^{\prime}$-axis. According to the above analysis, a single magnified image appears when a slightly rotated array sensor is placed on the image plane of the MLA. The magnification ${M_a}$ of the appearing image in comparison with a single small image is thus

$${M_a} = \frac{X}{{X\sqrt {2\rho - 2\cos \theta } }}. $$
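As a quick numeric illustration (our sketch, not from the paper), the following Python snippet evaluates the local-coordinate sampling points of Eq. (7) along the $x^{\prime}$-axis and confirms that consecutive points are separated by $X\sqrt{2\rho - 2\cos\theta}$ for the case $\rho = 1$:

```python
import numpy as np

def local_sample_points(m, n, X, Y, rho, theta):
    """Sampling positions in the local (double-primed) coordinates of the
    corresponding small image, per Eq. (7)."""
    xpp = -m * X * (rho - np.cos(theta)) - n * Y * np.sin(theta)
    ypp = -n * Y * (rho - np.cos(theta)) + m * X * np.sin(theta)
    return xpp, ypp

theta, X, Y, rho = np.deg2rad(1.0), 1.0, 1.0, 1.0   # rho = 1: the simulation case
m = np.arange(-2, 3)                                # the points s_-2 ... s_2
n = np.zeros_like(m)
xpp, ypp = local_sample_points(m, n, X, Y, rho, theta)
steps = np.hypot(np.diff(xpp), np.diff(ypp))        # separations along line L
print(steps, X * np.sqrt(2 * rho - 2 * np.cos(theta)))   # the values agree
```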

In Fig. 4(b), the angle $\varphi ^{\prime\prime}$ of L relative to the $x^{\prime\prime}$-axis is found to be

$$\tan \varphi ^{\prime\prime} = \frac{{\sin \theta }}{{\cos \theta - \rho }}.$$

Approximate values of ${M_a}$ and $\varphi ^{\prime\prime}$ can be found by assuming $\rho \approx 1$ (i.e., the magnification of a single small image is much smaller than one). With the help of trigonometric manipulations, it can be derived that ${M_a} \approx {|{2\sin ({\theta /2} )} |^{ - 1}}$ and $\varphi ^{\prime\prime} = ({{\theta^{( + )}} + \pi } )/2$ or $\varphi ^{\prime\prime} = ({{\theta^{( - )}} - \pi } )/2$, where ${\theta ^{( + )}}$ and ${\theta ^{( - )}}$ denote a small positive and a small negative rotation angle, respectively. Finally, the orientation of the appearing magnified image can be found by aligning the line L in Fig. 4(b) with the $x^{\prime}$-axis in Fig. 4(a). The resulting image is shown in Fig. 5. Apparently, the appearing magnified image is rotated relative to the duplicates of the small images, and the rotation angle is $\varphi = {\theta ^{( + )}} - \varphi ^{\prime\prime}$ or $\varphi = {\theta ^{( - )}} - \varphi ^{\prime\prime}$. By substituting $\varphi ^{\prime\prime}$ into this expression, we have

$$\varphi = \frac{{{\theta ^{( + )}} - \pi }}{2}\quad \textrm{or}\quad \varphi = \frac{{{\theta ^{( - )}} + \pi }}{2}.$$

Fig. 5. The magnified image appearing on the image plane. Counterclockwise is the positive rotation direction for all angles.

Therefore, at a small rotation angle ${\theta ^{({\pm} )}}$, the appearing magnified image is rotated by approximately ${\mp} \pi /2$.
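For reference, the trigonometric step behind ${M_a} \approx |2\sin(\theta/2)|^{-1}$ used above is as follows (our reconstruction; the paper states only the result). With $\rho \approx 1$ and the half-angle identity $1 - \cos\theta = 2\sin^2(\theta/2)$,

$$M_a = \frac{1}{\sqrt{2\rho - 2\cos\theta}} \approx \frac{1}{\sqrt{2 - 2\cos\theta}} = \frac{1}{\sqrt{4\sin^2(\theta/2)}} = {|{2\sin({\theta/2})}|^{-1}}.$$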

The system magnification ${M_s}$, that is, the ratio of the size of the appearing image to the object size, is

$${M_s} = {M_L}{M_a} = \frac{{{M_L}}}{{\sqrt {2(1 + {M_L} - \cos \theta )} }}.$$

By using the small-angle approximation, the maximum achievable system magnification is found to be $\sqrt {{M_L}/2}$.
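Explicitly (our reconstruction of the stated arithmetic), letting $\cos\theta \to 1$ in Eq. (11) gives

$$M_s \to \frac{M_L}{\sqrt{2(1 + M_L - 1)}} = \frac{M_L}{\sqrt{2M_L}} = \sqrt{\frac{M_L}{2}}.$$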

Finally, the resolution of the proposed thin camera is the same as the resolution of a lenslet, provided the diameter of the active area of a sensor cell is much smaller than the imaging spot of the lenslet. Otherwise, the imaging spot size of the system equals the cell's active area. Similarly, the working distance of the thin camera is nearly the same as that of a single lenslet. The EFL of an MLA is usually on the order of millimeters; hence the working distance typically ranges from several centimeters to infinity.

2.2 Simulation

We have performed simulations to confirm the feasibility of the proposed system. The size of the image generated by the MLA is $5000 \times 5000$ pixels. To simplify the simulation, the imaging by the MLA is omitted by setting $\rho = 1$, and the size of every small image is $50 \times 50$ pixels. Subsequently, the complete image is re-sampled by an array sensor, i.e., a single pixel of each small image is selected according to Eq. (6). These selected points are assembled into the final magnified image of $100 \times 100$ pixels. The simulation results are shown in Fig. 6. In most of the resulting images, except for Fig. 6(d) (the smallest rotation angle), multiple patterns appear in the region of interest. The reason is that in section 2.1 we only considered the sampling case $m = m^{\prime}$, $n = n^{\prime}$. In the region far from the origin of the coordinates, the point detector indexed $(m^{\prime},n^{\prime})$ may sample an adjacent image, e.g., $m = m^{\prime} - 1$, $n = n^{\prime} - 1$, and so on. In that case, the sampling properties are the same as near the origin, except that the magnified image is shifted. For a larger rotation angle, the magnified image is too small to be identified; the system should therefore always be operated at a small rotation angle.
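A minimal Python sketch of this simulation procedure follows (our reconstruction under the stated assumptions of $\rho = 1$, $50 \times 50$-pixel tiles, and nearest-pixel sampling; it is not the authors' code, and the toy bar target stands in for the letter "a"):

```python
import numpy as np

def downsample_camera(tiled, X, theta_deg, out_n):
    """Re-sample the tiled MLA image with a point-detector grid of pitch X
    rotated by theta, per Eq. (6); nearest-pixel rounding stands in for the
    ideal delta-function sampling."""
    theta = np.deg2rad(theta_deg)
    H, W = tiled.shape
    cx, cy = W // 2, H // 2                  # grid origin at the image center
    out = np.zeros((out_n, out_n))
    idx = np.arange(out_n) - out_n // 2      # detector indices m, n
    for i, n in enumerate(idx):              # n indexes y, m indexes x
        for j, m in enumerate(idx):
            x = (m * np.cos(theta) - n * np.sin(theta)) * X
            y = (m * np.sin(theta) + n * np.cos(theta)) * X
            u, v = int(round(cx + x)), int(round(cy + y))
            if 0 <= u < W and 0 <= v < H:
                out[i, j] = tiled[v, u]      # assemble the magnified image
    return out

# rho = 1: the MLA image is just a periodic tiling of one 50x50 small image.
tile = np.zeros((50, 50))
tile[10:40, 22:28] = 1.0                     # toy bar target
tiled = np.tile(tile, (100, 100))            # 5000 x 5000 image plane
magnified = downsample_camera(tiled, X=50, theta_deg=-2.0, out_n=100)
```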

Fig. 6. The obtained images at different rotation angles of the array sensor. (a) $\theta = 3^\circ$; (b) $\theta = 2^\circ$; (c) $\theta = 1^\circ$; (d) $\theta = -0.5^\circ$; (e) $\theta = -1^\circ$; (f) $\theta = -2^\circ$; (g) $\theta = -3^\circ$; (h) $\theta = -4^\circ$; (i) the ground-truth image.

We take the cases $\theta = -2^\circ$ and $\theta = -4^\circ$ as examples to compare the simulation results with the analysis in section 2. The measured lengths of the central “a” in Figs. 6(f) and 6(h) are 845 pixels and 445 pixels, respectively. The length of the original small “a” is 30 pixels. Therefore, the magnifications ${M_a}$ of the two images are 28.2 and 14.5, respectively. According to the theoretical model [Eq. (8)], the magnifications are ${M_a}(\theta = -2^\circ ) = 28.7$ and ${M_a}(\theta = -4^\circ ) = 14.3$. The simulation results agree well with the theory. The rotation angles of the images in Figs. 6(f) and 6(h) are measured to be $88^\circ$ and $86.5^\circ$, while the theoretical values [Eq. (10)] are $\varphi (\theta = -2^\circ ) = 89^\circ$ and $\varphi (\theta = -4^\circ ) = 88^\circ$. The small discrepancy is likely due to discretization error in the simulation.
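These theoretical values follow directly from Eqs. (8) and (10) with $\rho = 1$, the simulation case; a quick numeric check (our arithmetic sketch, not the authors' code):

```python
import numpy as np

for t in (2.0, 4.0):                      # magnitudes of the rotation angles
    Ma = 1.0 / (2.0 * np.sin(np.deg2rad(t) / 2.0))   # Ma = |2 sin(theta/2)|^-1
    phi = (-t + 180.0) / 2.0              # Eq. (10) for negative theta, degrees
    print(f"theta = -{t:.0f} deg: Ma = {Ma:.1f}, phi = {phi:.1f} deg")
# theta = -2 deg: Ma = 28.7, phi = 89.0 deg
# theta = -4 deg: Ma = 14.3, phi = 88.0 deg
```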

3. Experiment and discussion

The experimental setup is shown in Fig. 7. We applied a 1 W green LED (central wavelength: $0.53\,\mathrm{\mu}\textrm{m}$) as the light source. The light is collimated by two lenses to illuminate an object target. A liquid-crystal (LC) spatial light modulator (SLM) (Holoeye HEO-0017) is applied to generate the object target “a”. In the imaging module, we applied an MLA (Thorlabs MLA150-5C) with an EFL of $4.1\,\textrm{mm}$ and a lenslet pitch of $150\,\mathrm{\mu}\textrm{m}$. The object distance is roughly 460 mm; hence the magnification of the MLA is ${M_L} \approx 0.01$. The sensor is a low-cost fingerprint sensor (Optigate GN1410) with $320 \times 320$ pixels ($8\,\textrm{mm} \times 8\,\textrm{mm}$). The pixel pitch is $25\,\mathrm{\mu}\textrm{m} \times 25\,\mathrm{\mu}\textrm{m}$ with a 25% fill factor; therefore, the active area of a single pixel is $12.5\,\mathrm{\mu}\textrm{m} \times 12.5\,\mathrm{\mu}\textrm{m}$. Because the pixel pitch of the sensor differs considerably from that of the MLA, we attached a pinhole-array film with a pitch of $150\,\mathrm{\mu}\textrm{m}$ (pinhole diameter ∼$15\,\mathrm{\mu}\textrm{m}$) against the sensor to emulate a pitch-matched array sensor. Consequently, the effective resolution of the appearing image is only $53 \times 53$ pixels.
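The quoted ${M_L} \approx 0.01$ and the $53 \times 53$ effective pixels follow from the listed parameters; a back-of-envelope thin-lens check (our arithmetic, not from the paper):

```python
f, d_o = 4.1, 460.0                  # lenslet EFL and object distance (mm)
d_i = 1.0 / (1.0 / f - 1.0 / d_o)    # thin-lens image distance, about 4.14 mm
ML = d_i / d_o                       # about 0.009, consistent with ML ~ 0.01
n_eff = 8.0 / 0.150                  # sensor width / pinhole pitch, about 53
print(f"d_i = {d_i:.2f} mm, ML = {ML:.4f}, effective pixels = {n_eff:.0f}")
```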

Fig. 7. The experimental setup.

The experimental results are shown in Fig. 8. The four corners are dim because part of the object light is blocked by the rotational stage. It should be noted that the horizontal axis of Fig. 8 is not the x-axis of Fig. 2, because it is hard to align the MLA and the array sensor exactly. For this reason, we adjusted the angle of the MLA with the manual rotational stage to obtain the largest image, and this stage position was taken as the zero rotation angle. We also took the images obtained at $\theta = -2^\circ$ and $\theta = -4^\circ$ to compare the image sizes with the theoretical values. The lengths of the appearing “a” are $2.8\,\textrm{mm}$ and $1.38\,\textrm{mm}$, respectively, while the length of the “a” displayed on the SLM is $7.52\,\textrm{mm}$. Therefore, the total magnification of the demonstrated system is 0.37 ($\theta = -2^\circ$) and 0.18 ($\theta = -4^\circ$). According to the theory [Eq. (11)], the total magnification of the system is ${M_L}{M_a}(\theta = -2^\circ ) = 0.29$ and ${M_L}{M_a}(\theta = -4^\circ ) = 0.14$. The difference between the experimental results and the theoretical values is acceptable because neither the image distance nor the rotation angle can be determined accurately. To generate an image of the same size with a conventional camera under the present circumstances, the EFL of the lens would have to be 133 mm and 64 mm, respectively. Such an imaging system would be too bulky for commercial devices. By contrast, the thickness of our proposed system (from the MLA to the sensor) is significantly reduced, to below 10 mm.
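The quoted theoretical magnifications can be reproduced with ${M_L} = 0.01$ and the $\rho \approx 1$ form ${M_a} \approx |2\sin(\theta/2)|^{-1}$ from section 2.1 (our arithmetic sketch, not the authors' code):

```python
import numpy as np

ML = 0.01                                  # MLA magnification from the setup
for t, measured in ((2.0, 0.37), (4.0, 0.18)):
    Ma = 1.0 / (2.0 * np.sin(np.deg2rad(t) / 2.0))   # rho ~ 1 approximation
    print(f"theta = -{t:.0f} deg: Ms = {ML * Ma:.2f} (measured {measured})")
# theta = -2 deg: Ms = 0.29 (measured 0.37)
# theta = -4 deg: Ms = 0.14 (measured 0.18)
```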

Fig. 8. Experimental results. The rotation angles are (a) $\theta = -1^\circ$; (b) $\theta = -2^\circ$; (c) $\theta = -3^\circ$; (d) $\theta = -4^\circ$; (e) $\theta = -5^\circ$; (f) $\theta = -6^\circ$. Also see Visualization 1 for imaging at various rotation angles and Visualization 2 for imaging of a dynamic object.

The image rotation and magnification properties are basically the same as those predicted by the principle and the simulation. Some disagreement is nevertheless found in the experiment. First, there is some light leakage around the magnified image, as shown in Fig. 8(c). The reason may be misalignment between the sensor and the pinhole-array film. Besides, there is a thin gap between the film and the sensor, which may also degrade the quality of the magnified image. Unevenness of the pinhole-array film also results in localized distortion of the magnified image. In this demonstration, the EFL is large so that the MLA can be mounted on a rotational stage, and thus the form factor is still large. In practice, the form factor can be significantly reduced by using an MLA with a sub-millimeter EFL [21,22]. It can be further reduced by integrating the MLA on the array sensor [23]. Meanwhile, the array sensor must be optimized for this kind of imaging, e.g., the pixel pitch increased while the pixel size remains the same; the pinhole-array film can then be omitted.

4. Conclusions

In this paper we have proposed a down-sampling slim camera using an MLA and an array sensor. By directly down-sampling the multiple micro images produced by the MLA, an enlarged image appears. We have analyzed the properties of the imaging system and verified its feasibility by both simulation and experiment. Ideally, the form factor of the proposed imaging module can be reduced to the sub-millimeter scale. The shortcoming of the proposed slim camera is its low light efficiency, because the fill factor of the array sensor must be small to down-sample the raw image. Besides, the imaging resolution is similar to that of a single lenslet of the MLA; hence the proposed slim camera is not suitable for high-resolution imaging. On the other hand, the cost of the proposed slim camera is extremely low, and thus it can serve as a low-cost camera or low-cost lidar for commercial applications such as the internet of things (IoT). The slim-camera scheme is also suitable for non-visible-light applications, because it is hard to make a high-resolution array sensor for non-visible light.

Acknowledgments

The authors thank VP James Hsu and Dr. Robert Yang in FilmLens for their helpful discussion and the help in setting up the fingerprint sensor.

Disclosures

CHL: InFilm Optoelectronic Inc. (P), JPL: Ministry of Science and Technology of Taiwan (F), KHC: Ministry of Science and Technology of Taiwan (F).

Data availability

No data were generated or analyzed in the presented research.

References

1. X. Ni, S. Ishii, A. V. Kildishev, and V. M. Shalaev, “Ultra-thin, planar, Babinet-inverted plasmonic metalenses,” Light: Sci. Appl. 2(4), e72 (2013). [CrossRef]  

2. P. R. West, J. L. Stewart, A. V. Kildishev, V. M. Shalaev, V. V. Shkunov, F. Strohkendl, Y. A. Zakharenkov, R. K. Dodds, and R. Byren, “All-dielectric subwavelength metasurface focusing lens,” Opt. Express 22(21), 26212–26221 (2014). [CrossRef]  

3. W. Wang, Z. Guo, R. Li, J. Zhang, Y. Liu, X. Wang, and S. Qu, “Ultra-thin, planar, broadband, dual-polarity plasmonic metalens,” Photon. Res. 3(3), 68–71 (2015). [CrossRef]  

4. A. Arbabi, E. Arbabi, S. M. Kamali, Y. Horie, S. Han, and A. Faraon, “Miniature optical planar camera based on a wide-angle metasurface doublet corrected for monochromatic aberrations,” Nat. Commun. 7(1), 13682 (2016). [CrossRef]  

5. B. Groever, W. T. Chen, and F. Capasso, “Meta-Lens Doublet in the Visible Region,” Nano Lett. 17(8), 4902–4907 (2017). [CrossRef]  

6. V. V. Kotlyar, A. G. Nalimov, S. S. Stafeev, C. Hu, L. O’Faolain, M. V. Kotlyar, D. Gibson, and S. Song, “Thin high numerical aperture metalens,” Opt. Express 25(7), 8158–8167 (2017). [CrossRef]  

7. S. Wang, P. C. Wu, V.-C. Su, Y.-C. Lai, M.-K. Chen, H. Y. Kuo, B. H. Chen, Y. H. Chen, T.-T. Huang, J.-H. Wang, R.-M. Lin, C.-H. Kuan, T. Li, Z. Wang, S. Zhu, and D. P. Tsai, “A broadband achromatic metalens in the visible,” Nat. Nanotechnol. 13(3), 227–232 (2018). [CrossRef]  

8. P. Wang, N. Mohammad, and R. Menon, “Chromatic-aberration-corrected diffractive lenses for ultra-broadband focusing,” Sci. Rep. 6(1), 21545 (2016). [CrossRef]  

9. S. Banerji, M. Meem, A. Majumder, F. G. Vasquez, B. Sensale-Rodriguez, and R. Menon, “Imaging with flat optics: metalenses or diffractive lenses?” Optica 6(6), 805–810 (2019). [CrossRef]  

10. M. Meem, A. Majumder, and R. Menon, “Full-color video and still imaging using two flat lenses,” Opt. Express 26(21), 26866–26871 (2018). [CrossRef]  

11. O. Reshef, M. P. DelMastro, K. K. M. Bearne, A. H. Alhulaymi, L. Giner, R. W. Boyd, and J. S. Lundeen, “An optic to replace space and its application towards ultra-thin imaging systems,” Nat. Commun. 12(1), 3512 (2021). [CrossRef]  

12. O. Alkhazragi, A. Trichili, I. Ashry, T. K. Ng, M.-S. Alouini, and B. S. Ooi, “Wide-field-of-view optical detectors using fused fiber-optic tapers,” Opt. Lett. 46(8), 1916–1919 (2021). [CrossRef]  

13. E. Peli and W. P. Siegmund, “Fiber-optic reading magnifiers for the visually impaired,” J. Opt. Soc. Am. A 12(10), 2274–2285 (1995). [CrossRef]  

14. E. Y. Lam, “Computational photography with plenoptic camera and light field capture: tutorial,” J. Opt. Soc. Am. A 32(11), 2021–2032 (2015). [CrossRef]  

15. M. Martínez-Corral and B. Javidi, “Fundamentals of 3D imaging and displays: a tutorial on integral imaging, light-field, and plenoptic systems,” Adv. Opt. Photon. 10(3), 512–566 (2018). [CrossRef]  

16. J. Tanida, T. Kumagai, K. Yamada, S. Miyatake, K. Ishida, T. Morimoto, N. Kondou, D. Miyazaki, and Y. Ichioka, “Thin observation module by bound optics (TOMBO): concept and experimental verification,” Appl. Opt. 40(11), 1806–1813 (2001). [CrossRef]  

17. J. Tanida, R. Shogenji, Y. Kitamura, K. Yamada, M. Miyamoto, and S. Miyatake, “Color imaging with an integrated compound imaging system,” Opt. Express 11(18), 2109–2117 (2003). [CrossRef]  

18. Y. Kitamura, R. Shogenji, K. Yamada, S. Miyatake, M. Miyamoto, T. Morimoto, Y. Masaki, N. Kondou, D. Miyazaki, J. Tanida, and Y. Ichioka, “Reconstruction of a high-resolution image on a compound-eye image-capturing system,” Appl. Opt. 43(8), 1719–1727 (2004). [CrossRef]  

19. M. C. Hutley, R. Hunt, R. F. Stevens, and P. Savander, “The moire magnifier,” Pure Appl. Opt. 3(2), 133–142 (1994). [CrossRef]  

20. H. Kamal, R. Völkel, and J. Alda, “Properties of moiré magnifiers,” Opt. Eng. 37(11), 3007–3014 (1998). [CrossRef]  

21. Holographix, “Microlens arrays,” https://holographix.com/microlens-arrays/.

22. A. Tripathi, T. V. Chokshi, and N. Chronis, “A high numerical aperture, polymer-based, planar microlens array,” Opt. Express 17(22), 19908–19918 (2009). [CrossRef]  

23. G. Intermite, A. McCarthy, R. E. Warburton, X. Ren, F. Villa, R. Lussana, A. J. Waddie, M. R. Taghizadeh, A. Tosi, F. Zappa, and G. S. Buller, “Fill-factor improvement of Si CMOS single-photon avalanche diode detector arrays by integration of diffractive microlens arrays,” Opt. Express 23(26), 33777–33791 (2015). [CrossRef]  

Supplementary Material (2)

Visualization 1. The video is taken with a pinhole-array-attached fingerprint sensor together with a micro-lens array (MLA). As the MLA is rotated relative to the sensor, the appearing image rotates and its magnification changes.
Visualization 2. The video is taken with a pinhole-array-attached fingerprint sensor together with a micro-lens array (MLA). The object is a liquid-crystal spatial light modulator playing some slides.
