
Point spread function for the wide-field-of-view plenoptic cameras

Open Access

Abstract

Recently, single- or multi-layer spherical lenses (monocentric lenses) coupled with a microlens array (MLA) and an imaging sensor have been under investigation to expand the field of view (FOV) of handheld plenoptic cameras. However, point spread function (PSF) models for these cameras, which are needed to improve imaging quality and to reconstruct the light field in object space, are still lacking. In this paper, a generic image formation model is proposed for wide-FOV plenoptic cameras that use a monocentric lens and an MLA. By analyzing the optical characteristics of the monocentric lens, we propose to approximate it by a superposition of a series of concentric lenses with variable apertures. Based on geometric simplification and wave propagation, the equivalent imaging process of each portion of a wide-FOV plenoptic camera is modeled, from which the PSF is derived. The validity of the model is verified by comparing PSFs captured by a real wide-FOV plenoptic camera with those generated by the proposed model. Further, a reconstruction process is applied by deconvolving captured images with the PSFs generated by the proposed model. Experimental results show that the quality of the reconstructed images is better than that of subaperture images, which demonstrates that the proposed PSF model is beneficial for imaging quality improvement and light field reconstruction.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

Corrections

9 September 2021: A typographical correction was made to the funding section.

1. Introduction

Different from camera arrays, the plenoptic camera can capture 4D spatial and angular information of light rays in a single exposure by inserting a microlens array (MLA) between the main lens and the image sensor [1–4]. Its portability makes it very useful for capturing light fields with six degrees of freedom (6-DOF) [5]. However, the constraint between the entrance pupil diameter and the depth sensitivity limits its field of view (FOV) [6,7], which further constrains its applications.

To improve its FOV, novel compact optical structures that couple a spherical or monocentric lens with a curved MLA are under investigation to make full use of their wide-FOV and wide-aperture characteristics. The monocentric lens has a symmetric optical structure and a large field of view [8–12]. This lens generally has a fixed aperture embedded at its center, and its image plane is curved [13,14]. Light field cameras that use a monocentric lens as the main lens therefore offer a large field of view, but they also suffer from relatively poor imaging quality. Hence, their imaging quality needs to be further improved by modeling the point spread function (PSF) and deconvolving the captured images [15].

To the best of our knowledge, although there are existing approaches for deriving the PSF of normal plenoptic cameras, none of them can be directly ported to wide-FOV plenoptic cameras, whose imaging responses are generated by rays passing through a thick lens with directionally variable apertures. The existing PSF derivation methods designed for normal plenoptic cameras can be classified into three groups: real measuring methods [14], geometric optical methods [16,17] and diffractive optical methods [18–20]. The real measuring methods [10] directly retrieve the PSF by recording the image of a point light source. However, since the PSF of the wide-FOV plenoptic camera is spatially variant and the FOV is large, spatially discrete measurements cannot provide a precise PSF for an arbitrary spatial point. Also, the accuracy of the retrieved PSFs is directly affected by how ideal the light sources used in the measurement are. Geometric optical modeling methods [16,17] trace each ray of the point source and calculate the difference between images of the real lens and images of an ideal lens to obtain the imaging response on the image plane. However, their accuracy depends on the number of rays being traced, and the phase information of the imaging response is lost. The diffractive optical methods model the imaging response by simulating the light field propagation process using Fourier optics [21–23]. A typical work is the PSF of the plenoptic 2.0 camera modeled by wave propagation [24,25] from the object plane to the image plane. It can model the image formation process at extremely high resolution with high accuracy. However, since it treats the main lens as a thin lens with a spatially invariant aperture, it cannot be directly applied to wide-FOV plenoptic cameras with a spherical or monocentric lens. So, theoretically modeling the PSF of wide-FOV plenoptic cameras is fundamentally needed.

In this paper, the image formation process of the wide-FOV plenoptic camera is modeled. First, based on an analysis of the optical characteristics of the monocentric lens, we propose to approximate the main lens of the wide-FOV plenoptic camera by a superposition of a series of concentric thick lenses with directionally variant apertures. Then, an equivalent thin lens is built to derive the geometric parameters of each approximated concentric thick lens portion, based on which the wave propagation model is derived. Coupled with the curved MLA and curved sensor plane modeling, the PSF of the entire wide-FOV plenoptic camera is finally derived. The validity of this model in terms of geometric parameters and imaging responses is verified by comparing the PSFs generated by the model with those captured from a real wide-FOV plenoptic camera. The effectiveness of this model is demonstrated by comparing the results generated by deconvolution and by subaperture extraction.

The rest of the paper is organized as follows. Section 2 describes the proposed image formation model in detail. Experimental results are provided in Section 3 followed by conclusions in Section 4.

2. Modeling the wide-FOV plenoptic camera

In this section, the proposed model for the wide-FOV plenoptic camera is introduced in detail. Different from traditional plenoptic cameras, the wide-FOV plenoptic camera uses a monocentric lens as the main lens. A novel PSF model is proposed by analyzing the unique optical properties of the monocentric main lens and of the subsequent micro lenses that capture the light field, and by combining geometric optics with Fourier optics.

2.1 Proposed thin lens approximation and aperture equivalent

The optical structure of the wide-FOV plenoptic camera is shown in Fig. 1. Without loss of generality and for simplicity, the two-shell monocentric lens [26] is used as an example of the main lens of the wide-FOV plenoptic camera. As a typical monocentric lens, it has a series of good optical properties, such as a single shared center of curvature, little coma, little astigmatism, and a wide FOV [26]. To make full use of the large FOV while alleviating the aberrations caused by off-axis light rays, an aperture is introduced at the center of the main lens [13,26,27].

Fig. 1. The general optical structure of the wide-FOV plenoptic camera.

Different from a conventional plenoptic camera, whose first image plane (the image plane of the main lens) is flat, the first image plane of the wide-FOV plenoptic camera is a spherical surface, as shown in Fig. 2(a). Thus, the MLA plane and the second image plane, where the sensor is placed and the light field is captured, are also curved surfaces sharing the same center of curvature with the main lens.

Fig. 2. Imaging process of the main lens for: (a) the light rays from different object points; (b) the light rays from an object point and the approximated thick lens; (c) main lens approximation by a superposition of a series of concentric thick lenses.

Due to this property, it is difficult to model the wide-FOV plenoptic camera as an integral whole. However, noting that the paraxial rays of each object point contribute most to its image, we propose to simplify the monocentric main lens to an equivalent thick lens by considering only the rays passing through an arbitrary portion of the main lens and aperture, highlighted in blue in Fig. 2(b). Instead of modeling the wide-FOV plenoptic camera integrally, each directional portion is modeled as an independent thick lens, as shown in Fig. 2(c).

The spatial sampling rate of the image plane directly corresponds to the width of each portion, measured perpendicular to its local optical axis. So, the main lens is first approximated by calculating the equivalent geometric parameters of a series of concentric thick lenses using geometric optics.

The thick lens is centrally symmetric, the same as the monocentric main lens. Figure 3(a) shows a more detailed sketch of Fig. 2(b). For the two-shell monocentric lens, the refractive indices (RIs) of the outer and inner lens layers are denoted n1 and n2, respectively, and n0 is the refractive index of air. The radius of the outer layer is denoted R1 and the radius of the inner layer is denoted R2; thus, the thickness of the outer layer is R1 − R2. An aperture with radius r is placed at the center of the thick lens.

Fig. 3. (a) Sketch of an arbitrary thick lens and its aperture; (b) the equivalent lens of (a); (c) the equivalent lens of (b).

To facilitate wave propagation modeling, a further simplification is proposed that replaces the thick lens with thin lenses. During the imaging process through the two-shell monocentric lens, light rays refract four times, which can be regarded as the rays passing through two identical thin lenses, as shown in Fig. 3(b). The focal lengths of the two thin lenses are equal since the thick lens is symmetric, and they can be calculated using classic geometric optics [16–18]. The distance between these two thin lenses, denoted by d in Fig. 3(b), is determined by the distance between the front and rear principal planes of the thick lens. For the monocentric lens, whether one-shell or multi-shell, the front and rear principal planes coincide and are located at the central aperture plane. Hence, the distance d equals zero and the two thin lenses can be merged into a single lens, as shown in Fig. 3(c). The focal length of the equivalent thin lens equals the focal length of the main lens, which is given by:

$$\frac{1}{f} = \frac{2}{{{R_1}}}(\frac{1}{{{n_0}}} - \frac{1}{{{n_1}}}) + \frac{2}{{{R_2}}}(\frac{1}{{{n_1}}} - \frac{1}{{{n_2}}})$$

In addition to the thick-to-thin lens simplification, an aperture equivalence procedure is also proposed, which consists of two steps: aperture size equivalence and aperture shape equivalence.

For aperture size equivalence, since the thick lens is simplified to a thin lens, the radius of the thin-lens aperture req equals the exit pupil radius of the original thick lens, highlighted in red in Fig. 3(a). For the two-shell monocentric lens, req is given by [27]:

$${r_{eq}} = r\left( {1 + \frac{{{n_1} - {n_2}}}{{{n_1}}} + \frac{{2{n_1} - {n_2}}}{{{n_1}}}\frac{{{n_0} - {n_1}}}{{{n_0}}} + \frac{{{R_1}}}{{{R_2}}}\frac{{{n_1} - {n_2}}}{{{n_1}}}\frac{{{n_1} - {n_0}}}{{{n_0}}}} \right)$$

The major criterion for determining these RIs is to maximize the modulation transfer function (MTF) value of the lens at 200 lp/mm, which corresponds to maximizing the imaging quality at a given resolution. Given the focal length and aperture size of the lens and a list of candidate materials with different RIs, optical optimization can be performed using design software, such as Zemax, to calculate the MTF value for each material pair (the materials of the inner and outer layers). The pair with the highest MTF value is chosen as the final solution. For a two-shell monocentric lens, there are several designs for the RIs of the inner and outer lens layers. One available design in [27] uses n1 = 2.003 (S-LAH79) for the outer lens layer and n2 = 1.816 (K-LASFN9) for the inner lens layer to form an f/1.7, f = 12 mm two-shell monocentric lens. For a one-shell monocentric lens, the calculation can be simplified by setting n1 = n2.
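For illustration, the following minimal Python sketch evaluates Eq. (1) and Eq. (2) numerically. The BK7 refractive index (≈1.517) used in the one-shell example is an assumed value, not one given above; with R = 5 mm and r = 1 mm it reproduces the focal length of roughly 7.3 mm quoted in Section 3.

```python
# Minimal sketch: equivalent thin-lens focal length (Eq. 1) and exit-pupil
# radius (Eq. 2) of a monocentric lens.  The BK7 index (~1.517) below is an
# illustrative assumption, not a value given in the text.

def equivalent_focal_length(R1, R2, n0, n1, n2):
    """1/f = 2/R1*(1/n0 - 1/n1) + 2/R2*(1/n1 - 1/n2), Eq. (1)."""
    inv_f = 2.0 / R1 * (1.0 / n0 - 1.0 / n1) + 2.0 / R2 * (1.0 / n1 - 1.0 / n2)
    return 1.0 / inv_f

def equivalent_aperture_radius(r, R1, R2, n0, n1, n2):
    """Exit-pupil (equivalent thin-lens aperture) radius, Eq. (2)."""
    return r * (1.0
                + (n1 - n2) / n1
                + (2.0 * n1 - n2) / n1 * (n0 - n1) / n0
                + R1 / R2 * (n1 - n2) / n1 * (n1 - n0) / n0)

# One-shell BK7 ball lens from Section 3: R = 5 mm, aperture r = 1 mm,
# n1 = n2 ~ 1.517 (assumed index).  Expected f ~ 7.3 mm.
f_ball = equivalent_focal_length(R1=5.0, R2=5.0, n0=1.0, n1=1.517, n2=1.517)
r_eq_ball = equivalent_aperture_radius(r=1.0, R1=5.0, R2=5.0, n0=1.0, n1=1.517, n2=1.517)
print(f"one-shell: f = {f_ball:.2f} mm, r_eq = {r_eq_ball:.2f} mm")  # f ≈ 7.3 mm
```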

Approximating the monocentric lens by a series of thick lenses causes the aperture shape to change with direction. So, an aperture shape model is proposed to describe the directional aperture variation mathematically. As shown in Fig. 4(a), an arbitrary portion of the main lens, highlighted in red, whose local optical axis makes an angle β with the optical axis Z, is simplified to a thin lens. Its local optical axis, referred to as Z’, is rotated by β about the X axis, which points out of the page. The definitions of axes Y and Y’ follow the right-hand rule. As shown in Fig. 4(b), the effective aperture Ea of this red thin lens (at β°) is the projection of the basic aperture of the blue thin lens (at 0°) onto the X–Y’ plane via a cosine factor. The same happens when the portion is rotated about the Y axis. If the basic aperture is circular, Ea is elliptical and its shape is given by:

$${E_a}({x,y} )= \frac{{{x^2}}}{{{r_{eq}}^2{{\cos }^2}\alpha }} + \frac{{{y^2}}}{{{r_{eq}}^2{{\cos }^2}\beta }}, $$
where (x,y) are the horizontal and vertical coordinates of the arbitrary portion lens, measured perpendicular to its local optical axis Z’; req is the equivalent thin-lens aperture radius given in Eq. (2); and α and β are the rotation angles between the local optical axis of the lens portion and that of the main lens about the Y axis and X axis, respectively.

Fig. 4. The aperture shape projection relationship: (a) Local optical axis Z’ of an arbitrary portion rotates β degree along the X axis; (b) 3D rotational projection with β degree rotation along the X axis and α degree rotation along the Y axis; (c) apertures shape while (α, β) = (0°,0°), (45°, 0°), (0°, 45°), (45°, 45°).

Figure 4(c) shows some typical shapes of Ea, where (α, β) equals (0°,0°), (45°,0°), (0°,45°) and (45°,45°), respectively. Notice that the aperture area at (45°,45°) is smaller than that at (0°,0°). The farther (α, β) is from (0°,0°), the smaller the aperture area becomes, which reduces the amount of light passing through and decreases the effective image area on the sensor.
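The following short sketch illustrates how the directional aperture of Eq. (3) can be rasterized into a binary pupil mask; the grid size, sampling pitch and the equivalent aperture radius used here are arbitrary illustrative values.

```python
# Minimal sketch of the directional aperture of Eq. (3): a binary pupil mask
# whose support is the ellipse E_a(x, y) <= 1, with semi-axes shrunk by
# cos(alpha) and cos(beta).  Grid size and pitch are arbitrary here.
import numpy as np

def effective_aperture_mask(r_eq, alpha_deg, beta_deg, n=256, pitch=0.01):
    """Return a binary mask (1 inside the tilted-portion aperture, 0 outside)."""
    alpha, beta = np.deg2rad(alpha_deg), np.deg2rad(beta_deg)
    coords = (np.arange(n) - n / 2) * pitch          # local (x, y) in mm
    x, y = np.meshgrid(coords, coords)
    e_a = x**2 / (r_eq * np.cos(alpha))**2 + y**2 / (r_eq * np.cos(beta))**2
    return (e_a <= 1.0).astype(float)

mask_0 = effective_aperture_mask(0.48, 0, 0)      # circular aperture at (0°, 0°)
mask_45 = effective_aperture_mask(0.48, 45, 45)   # elliptical aperture at (45°, 45°)
# both semi-axes shrink by cos(45°), so the area ratio is about 0.5
print("area ratio (45°,45°)/(0°,0°):", mask_45.sum() / mask_0.sum())
```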

For a plenoptic camera whose main lens is a one-shell monocentric lens or another type of monocentric lens, the approximation process is the same. As long as its aperture is located at the geometric and optical center of the lens, the main lens can still be approximated by the steps proposed in this section. This accurate aperture equivalence greatly facilitates modeling the image formation process in the next section.

2.2 Proposed wave propagation modeling for the wide-FOV plenoptic camera

After approximating the monocentric main lens by a series of thin lenses with equivalent apertures, the wave propagation model for the wide-FOV plenoptic camera is proposed in this section.

Since the main lens of the wide-FOV plenoptic camera is approximated by a series of thin lenses with equivalent apertures, the image plane of each approximation can be treated as a locally flat plane fitting the sphere. Considering the large depth of field of the micro lenses, the MLA plane and the sensor plane can also be replaced with a finite number of flat planes fitting the curvature, which share the optical axis of the corresponding thick lens, as shown in Fig. 5(a).

Fig. 5. Wave propagation model. (a) The MLA plane and the sensor plane are modeled as a series of flat planes. (b) an equivalent plenoptic camera portion in (a). The flat MLA and image plane in the figures are the regional flat approximations of the curved MLA and image plane.

The wave propagation process through a lens portion at angle (α, β) is shown in Fig. 5(b). Light waves from the object point go through three propagation processes and two transformations before reaching the sensor. They first propagate from the object plane to the front of the main lens, then pass through the lens and keep propagating to the front of the MLA. After passing through the MLA, the light waves finally converge onto the second image plane, where the sensor is placed. The image formation process can be modeled mathematically using Fourier optics. The derivations of each step are as follows.

The first propagation process is from the object plane to the main lens. Since the medium is air, the propagation process is given by

$${u_{BML}} = {u_{obj}}(x,y) \otimes {h_1}(x,y), $$
$${h_1}({x,y} )= \frac{{{e^{jk{z_1}}}}}{{j\lambda {z_1}}}\exp \left[ {\frac{{jk}}{{2{z_1}}}({{x^2} + {y^2}} )} \right],$$
where uobj and uBML represent the light field on the object plane and that on the front surface of the main lens, respectively; z1 is the distance between the object point and the lens, as shown in Fig. 5(b); λ is the wavelength of the light; k = 2π/λ is the wavenumber; and ⊗ is the convolution operator.
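As an illustration of Eqs. (4) and (5), the sketch below implements the Fresnel propagation step as an FFT-based convolution with the impulse response h1. The sampling parameters are illustrative only, and the circular convolution computed by the FFT is an approximation of the linear convolution in Eq. (4).

```python
# Minimal sketch of the free-space step in Eqs. (4)-(5): Fresnel propagation
# implemented as an FFT-based convolution with the impulse response h_1.
# Grid size, pitch, wavelength and distance are illustrative values only.
import numpy as np

def fresnel_propagate(u_in, pitch, wavelength, z):
    """Propagate a sampled complex field u_in by distance z (same units as pitch)."""
    n = u_in.shape[0]
    k = 2 * np.pi / wavelength
    coords = (np.arange(n) - n / 2) * pitch
    x, y = np.meshgrid(coords, coords)
    h = np.exp(1j * k * z) / (1j * wavelength * z) \
        * np.exp(1j * k / (2 * z) * (x**2 + y**2))            # Eq. (5)
    # circular convolution via FFT approximates u_in ⊗ h of Eq. (4)
    return np.fft.ifft2(np.fft.fft2(u_in) * np.fft.fft2(np.fft.ifftshift(h))) * pitch**2

# example: a point source propagated z1 = 23 mm at 532 nm (units: mm)
n, pitch, wavelength = 512, 3.45e-3, 532e-6
u_obj = np.zeros((n, n), dtype=complex); u_obj[n // 2, n // 2] = 1.0
u_bml = fresnel_propagate(u_obj, pitch, wavelength, z=23.0)   # field before the main lens
```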

The light rays pass through the main lens and the phase of the light is transformed as

$${u_{AML}} = {u_{BML}}({x,y} )\times {t_{ML}}({x,y} ), $$
$${t_{ML}}({x,y} )= {P_{ML}}({x,y} )\exp \left[ { - j\frac{k}{{2f}}({{x^2} + {y^2}} )} \right]seidel(x,y), $$
$${P_{ML}}(x,y) = \left\{ \begin{array}{ll} 1,&{E_a}(x,y) \le 1,\\ 0,&others. \end{array} \right., $$
where uAML is the light field on the rear surface of the main lens; tML represents the transformation function of the main lens; f is the focal length of the main lens; seidel(x,y) is the Seidel term, which accounts for the aberrations of the main lens; PML is the pupil function of the main lens; and Ea is the effective aperture function defined in Eq. (3).
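A minimal sketch of the lens transformation of Eqs. (6)–(8) is given below; the Seidel term is set to one (an aberration-free lens), and all parameter values are assumed for illustration.

```python
# Minimal sketch of the main-lens transformation in Eqs. (6)-(8): the field is
# multiplied by the elliptical pupil mask and the thin-lens quadratic phase.
# The Seidel (aberration) term is taken as 1 here, i.e. an aberration-free lens.
import numpy as np

def main_lens_transform(u_in, pitch, wavelength, f, r_eq, alpha_deg, beta_deg):
    n = u_in.shape[0]
    k = 2 * np.pi / wavelength
    coords = (np.arange(n) - n / 2) * pitch
    x, y = np.meshgrid(coords, coords)
    alpha, beta = np.deg2rad(alpha_deg), np.deg2rad(beta_deg)
    e_a = x**2 / (r_eq * np.cos(alpha))**2 + y**2 / (r_eq * np.cos(beta))**2
    pupil = (e_a <= 1.0).astype(float)                         # P_ML, Eq. (8)
    t_ml = pupil * np.exp(-1j * k / (2 * f) * (x**2 + y**2))   # Eq. (7), seidel = 1
    return u_in * t_ml                                         # Eq. (6)
```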

After the first transformation, the light waves continue propagating from the rear surface of the main lens to the front of the MLA. The light field on the front surface of the MLA, uBMLA, can be directly derived using Eq. (4) by replacing uobj with uAML and h1 with h2, where h2 is defined by replacing z1 in Eq. (5) with z2, the distance between the main lens and the MLA.

Then, the light waves pass through the MLA. The phase transformation of the MLA acts similarly to that of the main lens. Thus, uAMLA can be directly derived using Eq. (6) by replacing uBML with uBMLA and tML with tMLA, which is given by:

$${t_{MLA}}({x,y} )= \sum\limits_{i,j} {{P_{micro}}({x,y;{\xi_i},{\eta_j}} )\exp \left[ { - j\frac{k}{{2{f_{micro}}}}({{{(x - {\xi_i})}^2} + {{(y - {\eta_j})}^2}} )} \right]}, $$
$${P_{micro}}(x,y;{\xi _i},{\eta _j}) = \left\{ \begin{array}{ll} 1, &(x - {\xi_i}{)^2} + {(y - {\eta_j})^2} \le {r_{micro}}^2,\\ 0,& others. \end{array} \right., $$
where fmicro is the focal length of each micro lens; (ξi,ηj) is the coordinate of each micro lens center on the MLA plane; Pmicro is the pupil function of each micro lens; and rmicro is the radius of each micro lens. We assume that the MLA is an ideal, aberration-free lens array; thus, the Seidel term of the MLA is ignored.
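The MLA transformation of Eqs. (9) and (10) can be sketched in the same way; for simplicity, the micro lens centers below are placed on a square grid, whereas the prototype MLA described in Section 3 is hexagonal.

```python
# Minimal sketch of the MLA transformation in Eqs. (9)-(10), with micro lens
# centers placed on a square grid for simplicity (the prototype MLA is hexagonal).
import numpy as np

def mla_transform(u_in, pitch, wavelength, f_micro, r_micro, mla_pitch):
    n = u_in.shape[0]
    k = 2 * np.pi / wavelength
    coords = (np.arange(n) - n / 2) * pitch
    x, y = np.meshgrid(coords, coords)
    t_mla = np.zeros((n, n), dtype=complex)
    centers = np.arange(-coords[-1], coords[-1] + mla_pitch, mla_pitch)
    for xi in centers:                                         # sum over micro lenses
        for eta in centers:
            p_micro = ((x - xi)**2 + (y - eta)**2 <= r_micro**2).astype(float)
            t_mla = t_mla + p_micro * np.exp(
                -1j * k / (2 * f_micro) * ((x - xi)**2 + (y - eta)**2))
    return u_in * t_mla
```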

After passing through the MLA, the light waves continue propagating from the rear surface of the MLA to the sensor plane. The light field on the sensor plane u can also be directly derived using Eq. (4) by replacing uobj with uAMLA and h1 with h3, where h3 is defined by replacing z1 in Eq. (5) with z3, the distance between the MLA and the sensor.

Combining Eqs. (4)–(10), the entire image formation process of one equivalent plenoptic camera portion at an arbitrary angle (α, β) can be expressed using incoherent imaging theory as

$$U{({x_3},{y_3})_{({\alpha ,\beta } )}} = {U_{obj}}({x_o},{y_o}) \otimes {|{h{{({x_3},{y_3},{x_o},{y_o})}_{({\alpha ,\beta } )}}} |^2}, $$
where (xo, yo) and (x3, y3) are the spatial coordinates on the object plane and image plane, respectively; Uobj(xo, yo) and U(x3, y3) are the ideal geometric irradiance image on the object plane and the corresponding image on the image plane, respectively; and |h(x3, y3, xo, yo)(α, β)|2 is commonly known as the PSF of the equivalent plenoptic camera portion at angle (α, β). h, the coherent impulse response of the imaging system, is given by
$$\begin{aligned} h{\textrm{(}{x_3},{y_3}\textrm{, }{x_o},{y_o}\textrm{)}_{(\alpha ,\beta )}} &= ({({({({{u_{obj}}({x_0},{y_0}) \otimes {h_1}({x_1},{y_1})} )\times {t_{ML}}({x_1},{y_1})} )\otimes {h_2}({x_2},{y_2})} )\times {t_{MLA}}({x_2},{y_2})} )\otimes {h_3}({x_3},{y_3})\\ &= \int\limits_{ - \infty }^{ + \infty } {\int {\int\limits_{ - \infty }^{ + \infty } {\int {\frac{{{e^{jk{z_1}}}}}{{j\lambda {z_1}}}\exp \left\{ {\frac{{jk}}{{2{z_1}}}[{{{({{x_1} - {x_o}} )}^2} + {{({{y_1} - {y_o}} )}^2}} ]} \right\}} } } } \\ &\quad \times {P_{ML}}{({x_1},{y_1})_{(\alpha ,\beta )}} \times \exp \left[ { - j\frac{k}{{2f}}({{x_1}^2 + {y_1}^2} )} \right] \times seidel({x_1},{y_1})\\ &\quad \times \frac{{{e^{jk{z_2}}}}}{{j\lambda {z_2}}}\exp \left[ {\frac{{jk}}{{2{z_2}}}({{{({{x_2} - {x_1}} )}^2} + {{({{y_2} - {y_1}} )}^2}} )} \right]d{x_1}d{y_1}\\ &\quad \times \sum\limits_{i,j} {{P_{micro}}({x_2},{y_2};{\xi _i},{\eta _j})} \exp \left[ { - j\frac{k}{{2{f_{micro}}}}({{{({x_2} - {\xi_i})}^2} + {{({y_2} - {\eta_j})}^2}} )} \right]\\ &\quad \times \frac{{{e^{jk{z_3}}}}}{{j\lambda {z_3}}}\exp \left[ {\frac{{jk}}{{2{z_3}}}({{{({{x_3} - {x_2}} )}^2} + {{({{y_3} - {y_2}} )}^2}} )} \right]d{x_2}d{y_2} \end{aligned}, $$
where (x1, y1), (x2, y2) and (x3, y3) are the spatial coordinates on the principal plane of the main lens, the principal plane of the MLA and the image plane, respectively.

Using the PSF definition of each plenoptic camera portion, the PSF of the entire wide-FOV plenoptic camera is defined as the summation of the portion PSFs over all angles:

$$H({x_3},{y_3},{x_o},{y_o}) = {\sum\limits_{\alpha ,\beta } {|{h{{({x_3},{y_3},{x_o},{y_o})}_{({\alpha ,\beta } )}}} |} ^2}$$
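Putting the steps together, the following self-contained sketch traces a point source through Eq. (12) for one portion and sums the portion PSFs as in Eq. (13). The sampling grid, the square MLA layout and the equivalent aperture radius are illustrative assumptions (only the distances, focal lengths and micro lens size are taken from Section 3), and the Seidel term is omitted.

```python
# End-to-end sketch of Eqs. (12)-(13): Fresnel-propagate a point source to the
# main lens, apply the equivalent-lens phase, propagate to the MLA, apply the
# MLA phase, propagate to the sensor, and take |h|^2 as the PSF of one portion.
# Sampling parameters, the square MLA grid and r_eq are illustrative assumptions.
import numpy as np

def _grid(n, pitch):
    c = (np.arange(n) - n / 2) * pitch
    return np.meshgrid(c, c)

def _propagate(u, pitch, wl, z):                     # Eqs. (4)-(5)
    k = 2 * np.pi / wl
    x, y = _grid(u.shape[0], pitch)
    h = np.exp(1j * k * z) / (1j * wl * z) * np.exp(1j * k / (2 * z) * (x**2 + y**2))
    return np.fft.ifft2(np.fft.fft2(u) * np.fft.fft2(np.fft.ifftshift(h))) * pitch**2

def portion_psf(alpha_deg, beta_deg, n=512, pitch=3.45e-3, wl=532e-6,
                z1=23.0, z2=10.8, z3=1.7, f=7.3, r_eq=0.48,
                f_micro=4.0 / 3.0, r_micro=0.1908, mla_pitch=0.3816):
    k = 2 * np.pi / wl
    x, y = _grid(n, pitch)
    a, b = np.deg2rad(alpha_deg), np.deg2rad(beta_deg)

    u = np.zeros((n, n), dtype=complex)
    u[n // 2, n // 2] = 1.0                          # on-axis point source
    u = _propagate(u, pitch, wl, z1)                 # object plane -> main lens

    pupil = (x**2 / (r_eq * np.cos(a))**2 + y**2 / (r_eq * np.cos(b))**2) <= 1.0
    u = u * pupil * np.exp(-1j * k / (2 * f) * (x**2 + y**2))   # Eqs. (6)-(8)
    u = _propagate(u, pitch, wl, z2)                 # main lens -> MLA

    t_mla = np.zeros((n, n), dtype=complex)          # Eqs. (9)-(10), square grid
    centers = np.arange(-x.max(), x.max() + mla_pitch, mla_pitch)
    for xi in centers:
        for eta in centers:
            p = ((x - xi)**2 + (y - eta)**2) <= r_micro**2
            t_mla += p * np.exp(-1j * k / (2 * f_micro) * ((x - xi)**2 + (y - eta)**2))
    u = _propagate(u * t_mla, pitch, wl, z3)         # MLA -> sensor

    return np.abs(u)**2                              # |h|^2, PSF of this portion

# PSF of the whole camera (Eq. 13): sum the portion PSFs over sampled angles
psf = sum(portion_psf(a, b) for a, b in [(0, 0), (30, 0)])
```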

3. Experimental results

To verify the correctness of the proposed model and to demonstrate its effectiveness in reconstructing the light field, we first compare the PSFs generated by the proposed model with those captured by a real plenoptic camera system. Then, the reconstruction results of captured plenoptic images using PSFs generated by the proposed model are evaluated to show the model's effectiveness.

Figure 6 shows the prototype of the wide-FOV plenoptic camera. The main lens used in the experiment is a one-shell spherical lens made of BK7 glass with radius R equal to 5 mm, as shown in Fig. 6(c). The radius of its central aperture r is 1 mm. Using Eq. (1), the focal length of this single-layer spherical lens is approximately 7.3 mm. The main lens is placed on a rotatable device to capture images from all directions. A multi-shell lens could be used as well, but its manufacturing and assembly complexity is high, which hinders practical applications. Although a one-shell monocentric lens introduces larger aberrations, it is easy to fabricate, so we wish to use this simple optical configuration to achieve high imaging quality for wider applications. Moreover, using a one-shell monocentric lens makes the quality improvement clearly visible in the experiments, demonstrating the effectiveness of the proposed model.

Fig. 6. The imaging system of a real wide-field-of-view plenoptic camera. (a) and (b) show the front view and top view of the system. The component in the blue rectangle is an iPad used as a display. The component in the red rectangle is a ball lens working as the main lens of the plenoptic camera for a wide field of view. The component in the yellow rectangle is the sensor with the MLA. (c) shows a close view of the ball lens, with its pupil function in (d). (e) is the sensor with the MLA cover in front. (f) is the arrangement of the MLA.

The camera is a FLIR GS3-U3-123S6C-C with a resolution of 4000 × 3000 pixels and a 3.45 µm pixel size. The camera is placed on a 3D translation stage. In front of the sensor, an MLA of size 30 mm × 25 mm covers the whole sensor, as shown in Fig. 6(e). The diameter of each micro lens is 381.6 µm, its focal length is 4/3 mm, and its f-number is 3.49. The micro lenses are arranged hexagonally in order to use as much of the sensor area as possible, as shown in Fig. 6(f). Considering the limitations of time and computational resources, only the central 450 × 450 pixel area of the sensor is used in this experiment.
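As a quick sanity check (a sketch, not part of the model), the quoted micro lens f-number and the physical size of the cropped sensor region follow directly from these parameters:

```python
# Sanity check of the prototype parameters quoted above.
mla_focal_mm = 4.0 / 3.0        # micro lens focal length
mla_diameter_mm = 0.3816        # micro lens diameter (381.6 um)
pixel_um = 3.45
crop_px = 450

print("micro lens f-number:", round(mla_focal_mm / mla_diameter_mm, 2))   # ~3.49
print("cropped sensor size:", crop_px * pixel_um / 1000, "mm per side")   # ~1.55 mm
```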

3.1 PSF verification of the proposed model

In this subsection, the PSFs of the wide-FOV plenoptic camera captured by our prototype camera and generated by the proposed model are compared.

For real PSF capturing, a 532 nm laser is used and converged at 23 mm in front of the main lens to serve as a point light source. Since the wavelength λ is a parameter of the wave propagation process, a green (532 nm) source is chosen; green light is generally used in PSF measurement because it is visible and the human eye is sensitive to it, which benefits focal adjustment of the real optical system. If the wavelength changes, the image plane of the plenoptic camera system moves forward or backward; however, all the derivations remain the same as defined in Eq. (12), and only the parameter λ in Eq. (12) needs to be changed to the corresponding value. The laser used to obtain the 532 nm wavelength is an MGL-III-532-10mW solid-state laser fabricated by Changchun New Industries Optoelectronics Technology, which uses Nd:YVO4 as the gain medium.

The object distance in the experiments is determined by the magnification of the camera system and the resolution of the sensor. Indeed, light can be converged at any point in front of the main lens to work as a point light source. The magnification of the whole system is about 1:8. For an object within a 1.035 mm × 1.035 mm area, if the object distance is too large, the image falls below the sensor's resolution and is hard to distinguish; if the object distance is too small, the image distance becomes longer and the image quality degrades dramatically due to aberrations. To balance these limitations and to verify the model's effectiveness, an object distance of 23 mm is chosen in the experiments. The sensor is moved by the translation stage to bring the image into focus. For the proposed model, the object distance z1 is also set to 23 mm and the corresponding image distance of the main lens is set to 10.8 mm so that the simulation is likewise in focus. The distance z3 is set to 1.7 mm after calibration of our plenoptic camera [28].

The first row of Fig. 7 shows the captured and simulated PSFs for the on-axis point source. The second row of Fig. 7 shows the captured and simulated PSFs for a point source 1 mm off-axis along the X axis and 1 mm off-axis along the Y axis. These PSFs are generated while (α, β) equals ().

Fig. 7. PSF comparison. (a) Captured PSF images; (b) simulated PSF images; (c) illuminance intensity distribution of three selected rows. PSFs at the first row are captured and simulated while the light source is on the optical axis. PSFs at the second row are captured and simulated while the light source is 1 mm off along the axis X and 1mm off along the axis Y.

To compare the captured and simulated PSFs, the mean square error (MSE) between them is calculated. For the on-axis point, the MSE is only 0.0066, and for the off-axis point, the MSE is only 0.0027. The PSFs are also visually identical, which shows that the proposed model describes the wave propagation correctly.
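A possible form of this comparison metric is sketched below, assuming both PSF images are normalized before the MSE is computed; the normalization choice is ours, not stated above.

```python
# Sketch of the comparison metric: mean square error between a captured and a
# simulated PSF, both normalized to [0, 1].  The array names are placeholders.
import numpy as np

def psf_mse(psf_captured, psf_simulated):
    a = psf_captured / psf_captured.max()
    b = psf_simulated / psf_simulated.max()
    return np.mean((a - b) ** 2)

# usage (arrays of identical shape): mse = psf_mse(captured, simulated)
```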

For each PSF, we selected three rows from the image and compared their illuminance distributions, as shown in Fig. 7(c). The position of each intensity peak in the simulated PSF images matches that in the captured PSF images. Although the illuminance intensity of the captured and simulated PSFs in each row is not exactly the same, due to inhomogeneous illuminance in real-world capturing, as in the third row of Fig. 7(c), the simulated and captured PSFs show exactly the same peak positions and consistent intensity ratios. This further demonstrates that the proposed model describes the real imaging system correctly and accurately.

3.2 Light field reconstruction performance verification of the proposed model

In this subsection, the real light field imaging and reconstruction results are compared to further demonstrate the correctness of the proposed model in real imaging scenarios and to show its effectiveness in light field reconstruction.

We use the central plenoptic portion at (α, β) = (0°, 0°), with a circular effective aperture, and the plenoptic portion at (α, β) = (30°, 0°), whose aperture shape is significantly different from that at (α, β) = (0°, 0°), to compare the real imaging results with the simulated results generated by the proposed model. For the real wide-FOV plenoptic camera, the rotation angle ranges from -60° to 60°, taking the central portion of the monocentric lens as 0°. This total rotation range of 120° is determined by the illuminance at the largest angle. As described in Eq. (3) and Fig. 4, when the rotation angle increases, the aperture changes from a circle to an ellipse and its area decreases. At a rotation angle of 60°, the aperture area is half of that at 0°, which makes the image much darker with higher imaging noise. So, to keep the images at different rotation angles at a comparable quality, the rotation angle is limited to ±60° in the real wide-FOV plenoptic camera.

Considering the high computational complexity, PSFs of point light sources within a 300 × 300 pixel area (1.035 mm × 1.035 mm physical size) are calculated and the reconstructed image is cropped to 450 × 450 pixels. Each PSF image is reshaped into a 202500 × 1 vector and packed to form a 202500 × 90000 PSF matrix.

Four imaging targets are chosen to demonstrate the effectiveness of the model in reconstructing light fields and improving image quality. Each of them is designed as 300 × 300 pixels (1.035 mm × 1.035 mm physical size). They are placed at 23 mm from the aperture of the ball lens for both real imaging and simulation. For simulation, each object image is reshaped into a 90000 × 1 vector. The convolution process can then be rewritten as the product of the PSF matrix and the object vector, and the reconstruction process is the corresponding inverse problem [24]. A 532 nm laser is also used as the light source.
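A toy-scale sketch of this matrix formulation is given below. The dimensions are deliberately tiny (the paper uses 202500 × 90000), and the damped LSQR solver is only a stand-in for the hyper-Laplacian deconvolution [29] actually used; it merely illustrates treating reconstruction as a linear inverse problem.

```python
# Toy-scale sketch of the forward model described above: each PSF is reshaped
# into a column of a matrix A, imaging is y = A x, and reconstruction is the
# corresponding inverse problem.  LSQR stands in for the hyper-Laplacian
# deconvolution [29] used in the paper.
import numpy as np
from scipy.sparse.linalg import lsqr

n_sensor, n_object = 64 * 64, 32 * 32                # toy sizes
rng = np.random.default_rng(0)

A = rng.random((n_sensor, n_object))                 # columns = reshaped per-point PSFs
x_true = rng.random(n_object)                        # reshaped object image
y = A @ x_true                                       # simulated plenoptic image

x_rec = lsqr(A, y, damp=1e-3)[0]                     # regularized reconstruction
print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```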

Figure 8 shows the experimental results at (α, β) = (0°, 0°). It can be observed that the location and the intensity distribution of the simulated and captured light fields, shown in Fig. 8(b) and (c), respectively, are the same for all four imaging targets, which further verifies the correctness of the proposed model in real imaging scenarios. Both the captured and simulated light fields show slightly lower pixel intensity at the corners because of the vignetting effect. Although it is hard to provide illuminance as homogeneous as that in simulation, the real captured light fields still present high similarity with the simulated results.

Fig. 8. Simulation and reconstruction results at (0°, 0°). (a) Four imaging targets; (b) simulated light field images of (a); (c) captured light field images of (a); (d) sub-aperture images rendered from (c); (e) reconstruction results by deconvoluting (c) using our modeled PSFs.

Then, sub-aperture images are extracted from the captured light field images and shown in Fig. 8(d). Blur and artifacts exist in the subaperture images, caused by lens aberrations and the low spatial resolution. In contrast, when the object image is reconstructed by deconvolving the real captured light fields using the derived PSFs and the hyper-Laplacian deconvolution algorithm [29], the reconstructed images shown in Fig. 8(e) provide much higher quality in terms of clearer edges, fewer artifacts and more consistent brightness, which indicates that the proposed model is beneficial to light field reconstruction. Table 1 lists the peak signal-to-noise ratio (PSNR) calculated for the rendered sub-aperture images and the reconstructed images, using the original imaging target as the reference. The PSNRs of the reconstructed images are much higher than those of the subaperture images, which further demonstrates the higher quality objectively.


Table 1. PSNR (dB) Comparison between subaperture images and reconstruction images
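For reference, a possible PSNR computation consistent with Table 1 is sketched below, assuming images scaled to [0, 1] and the original imaging target as the reference; the exact scaling used in the paper is not stated.

```python
# Sketch of the PSNR computation used for Table 1, assuming images scaled to
# [0, 1] with the original imaging target as the reference.
import numpy as np

def psnr(reference, test, peak=1.0):
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# usage: psnr(target, subaperture_image) vs. psnr(target, reconstructed_image)
```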

Figure 9 shows the experimental results at (α, β) = (30°, 0°). Again, the correctness of the proposed model in real imaging scenarios can be verified by the consistency of the location and intensity distribution of the simulated and captured light fields for all four imaging targets, shown in Fig. 9(b) and (c), respectively. Slightly lower pixel intensity can also be observed at the corners of both the captured and simulated light fields, and the real captured light fields still present high similarity with the simulated results. Figure 9(d) shows the sub-aperture images extracted from the captured light field images, and Fig. 9(e) shows the reconstructed images with higher quality in terms of clearer edges, fewer artifacts and more consistent brightness, which indicates that the proposed model is also beneficial to light field reconstruction at (α, β) = (30°, 0°).

Fig. 9. Simulation and reconstruction results at (30°, 0°). (a) Four imaging targets; (b) simulated light field images of (a); (c) captured light field images of (a); (d) sub-aperture images rendered from (c); (e) reconstruction results by deconvoluting (c) using our modeled PSFs.

4. Conclusions

In this paper, a novel theoretical PSF model is proposed for the wide-FOV plenoptic camera. By analyzing the optical characteristics of the monocentric lens, the main lens is approximated by a superposition of a series of concentric lenses with variable apertures, and the equivalent imaging process of each portion of the wide-FOV plenoptic camera is modeled. The PSF is derived from the proposed model using Fourier optics. Captured and simulated PSFs are compared to verify the model, and a reconstruction process is applied. Experimental results demonstrate that the proposed PSF model is helpful for imaging quality improvement and light field reconstruction. Our future work will focus on adding aberrations into the model and using the PSFs obtained by the proposed model to correct aberrations such as field curvature and distortion, to further improve the imaging quality.

Funding

Shenzhen Project, China (JCYJ20200109142810146); National Natural Science Foundation of China (61827804); Natural Science Foundation of Guangdong Province (2020A1515010345).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. R. Kingslake, A History of the Photographic Lens (Academic Press, 1989).

2. G. Krishnan and S. K. Nayar, “Towards a true spherical camera,” Proc. SPIE 7240, 724002 (2009). [CrossRef]  

3. J. Hannavy, ed., Encyclopedia of Nineteenth-Century Photography (Routledge, 2013).

4. A. Offner and W. B. Decker, “An f:1.0 Camera for Astronomical Spectroscopy,” J. Opt. Soc. Am. 41(3), 169–172 (1951). [CrossRef]  

5. R. S. Overbeck, D. Erickson, D. Evangelakos, M. Pharr, and P. Debevec, “A system for acquiring, processing, and rendering panoramic light field stills for virtual reality,” ACM Trans. Graph. 37(6), 1–15 (2019). [CrossRef]  

6. M. Laikin, Lens Design (CRC Press, 2018).

7. R. Jacobson, S. Ray, and G. G. Attridge, Manual of Photography (Routledge, 2013).

8. Y. Huang, Y. Fu, G. Zhang, and Z. Liu, “Modeling and analysis of a monocentric multi-scale optical system,” Opt. Express 28(22), 32657–32675 (2020). [CrossRef]  

9. G. M. Schuster, D. G. Dansereau, G. Wetzstein, and J. E. Ford, “Panoramic single-aperture multi-sensor light field camera,” Opt. Express 27(26), 37257–37273 (2019). [CrossRef]  

10. I. Stamenov, A. Arianpour, S. J. Olivas, I. P. Agurok, A. R. Johnson, R. A. Stack, R. L. Morrison, and J. E. Ford, “Panoramic monocentric imaging using fiber-coupled focal planes,” Opt. Express 22(26), 31708 (2014). [CrossRef]

11. J. Li, D. Zhao, Y. Zhou, and Y. Ji, “Study on a monocentric lens with wide-field and high-resolution,” Proc. SPIE 11434, 114340Y (2020). [CrossRef]  

12. W. Lu, S. Chen, Y. Xiong, and J. Liu, “A single ball lens-based hybrid biomimetic fish eye/compound eye imaging system,” Opt. Commun. 480, 126458 (2021). [CrossRef]  

13. G. M. Schuster, I. P. Agurok, J. E. Ford, D. G. Dansereau, and G. Wetzstein, “Panoramic monocentric light field camera,” in International Optical Design Conference (Optical Society of America, 2017), paper ITh4A.5.

14. B. Guenter, N. Joshi, R. Stoakley, A. Keefe, K. Geary, R. Freeman, J. Hundley, P. Patterson, D. Hammon, G. Herrera, and E. Sherman, “Highly curved image sensors: a practical approach for improved optical performance,” Opt. Express 25(12), 13010–13023 (2017). [CrossRef]

15. V. Boominathan, K. Mitra, and A. Veeraraghavan, “Improving resolution and depth-of-field of light field cameras using a hybrid imaging system,” in Proceedings of IEEE International Conference on Computational Photography (ICCP) (2014), pp. 1–10.

16. J. E. Greivenkamp, Field Guide to Geometrical Optics (SPIE Press, 2004).

17. B. D. Guenther, Modern Optics (Oxford University Press, 2015).

18. D. G. Voelz, Computational Fourier Optics: A MATLAB Tutorial (SPIE Press, 2011).

19. J. W. Goodman, Introduction to Fourier Optics (McGraw-Hill, 1968).

20. S. Perrin and P. Montgomery, “Fourier optics: basic concepts,” arXiv:1802.07161 (2018).

21. E. Sahin, V. Katkovnik, and A. Gotchev, “Super-resolution in a defocused plenoptic camera: a wave-optics-based approach,” Opt. Lett. 41(5), 998–1001 (2016). [CrossRef]  

22. M. Broxton, L. Grosenick, S. Yang, N. Cohen, A. Andalman, K. Deisseroth, and M. Levoy, “Wave optics theory and 3-D deconvolution for the light field microscope,” Opt. Express 21(21), 25418–25439 (2013). [CrossRef]

23. J. Pribošek, J. Steinbrener, M. Baumgart, T. Bereczki, K. Harms, A. Tortschanoff, and A. Kenda, “Wave-optical calibration of a light-field fluorescence microscopy,” Proc. SPIE 10883, 1088314 (2019). [CrossRef]  

24. X. Jin, L. Liu, Y. Chen, and Q. Dai, “Point spread function and depth-invariant focal sweep point spread function for plenoptic camera 2.0,” Opt. Express 25(9), 9947 (2017). [CrossRef]  

25. Y. Chen, X. Jin, and B. Xiong, “Optical-aberrations-corrected light field re-projection for high-quality plenoptic imaging,” Opt. Express 28(3), 3057–3072 (2020). [CrossRef]  

26. D. G. Dansereau, G. Schuster, J. Ford, and G. Wetzstein, “A wide-field-of-view monocentric light field camera,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (2017), pp. 3757–3766.

27. I. Stamenov, I. P. Agurok, and J. E. Ford, “Optimization of two-glass monocentric lenses for compact panoramic imagers: general aberration analysis and specific designs,” Appl. Opt. 51(31), 7648–7661 (2012). [CrossRef]

28. X. Jin, X. Sun, and C. Li, “Geometry parameter calibration for focused plenoptic cameras,” Opt. Express 28(3), 3428–3442 (2020). [CrossRef]  

29. D. Krishnan and R. Fergus, “Fast Image Deconvolution using Hyper-Laplacian Priors,” Adv. Neur. Info. Process. Syst. 22, 1033–1041 (2009).
