
Optical-aberrations-corrected light field re-projection for high-quality plenoptic imaging


Abstract

The singlet plenoptic camera, which consists of a single lens, a microlens array (MLA) and an image sensor, is compact and lightweight, which makes it well suited to miniaturization. However, such plenoptic cameras suffer from severe optical aberrations, and their imaging quality is too poor for post-capture processing. This paper therefore proposes an optical-aberrations-corrected light field re-projection method to obtain high-quality singlet plenoptic imaging. First, optical aberrations are modeled by Seidel polynomials and incorporated into the point spread function (PSF) model. The modeled PSF is then used to reconstruct the imaging object information. Finally, the reconstructed imaging object information is re-projected back to the plenoptic imaging plane to obtain high-quality plenoptic images free of optical aberrations. The PSF modeling is validated on a self-built singlet plenoptic camera, and the utility of the proposed optical-aberrations-corrected light field re-projection method is verified by numerical simulations and real imaging experiments.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Plenoptic cameras, which insert a microlens array (MLA) between the main lens and the image sensor, have attracted considerable attention in recent years. Owing to their ability to extract depth and multi-view information from a single exposure, plenoptic cameras have been applied in various fields, notably virtual reality (VR) [1,2], wide-field-of-view navigation [3], 360-degree acquisition and display [4,5], and medical diagnosis [6,7]. To realize high portability, wearability and six degrees of freedom (6DoF), there is increasing interest in, and demand for, high-quality plenoptic imaging obtained with compact, lightweight systems.

An alternative approach to building a compact and lightweight plenoptic camera is to replace the complicated assembly of a dozen lens elements with a single lens, yielding what we refer to as singlet plenoptic cameras. However, a single lens introduces severe optical aberrations that degrade the quality of the captured plenoptic images, and this degradation brings errors into depth estimation, multi-view information extraction and other tasks. Hence, generating an optical-aberrations-corrected light field for singlet plenoptic cameras is a fundamental requirement.

Dallaire et al. [8] proposed a computationally efficient projection technique that corrects optical aberrations when reconstructing images based on ray tracing. However, this technique applies only to imaging systems whose aberrations are large enough to be described by geometric ray tracing. Moreover, it only accomplishes the reconstruction of the imaging object information without further producing aberrations-corrected plenoptic images, so high-quality post-capture capabilities such as multi-view information extraction, virtual perspective synthesis and perspective shifting cannot be achieved. Consequently, this paper proposes an optical-aberrations-corrected light field re-projection method to obtain high-quality singlet plenoptic imaging. Specifically, the method corrects the optical aberrations in the captured light field by first reconstructing the imaging object information using the aberrations-included PSF and then re-projecting the imaging object information back to the plenoptic imaging plane using the aberrations-free PSF to obtain a high-quality light field. Experiments on a self-built singlet plenoptic camera validate the PSF modeling: the modeled PSFs are consistent with those captured by the camera, which implies that the PSF model is correct. Furthermore, both numerical simulations and real imaging experiments verify the utility of the proposed optical-aberrations-corrected light field re-projection method. Comparing sub-aperture images extracted from the captured plenoptic images with those extracted from the re-projected ones shows that the incorrect pixel distributions caused by the optical aberrations are corrected in the re-projected plenoptic images, so that high-quality singlet plenoptic imaging is obtained.

The rest of the paper is organized as follows. Section 2 describes the proposed optical-aberrations-corrected light field re-projection method in detail. Section 3 provides experimental results that verify the performance of the proposed method. Conclusions and future work are presented in Section 4.

2. Proposed re-projection method

The architecture of the proposed optical-aberrations-corrected light field re-projection method is shown in Fig. 1. The proposed method uses aberrations-included PSFs to reconstruct the imaging object information from plenoptic images captured by singlet plenoptic cameras, and then re-projects the reconstructed imaging object information back to the plenoptic imaging plane to obtain optical-aberrations-corrected plenoptic images.

Fig. 1. The architecture of the proposed method.

2.1 PSF modeling

As shown in Fig. 1, PSF modeling aims to derive the aberrations-included PSF of singlet plenoptic cameras, whose optical structure is shown in Fig. 2. Different from the PSF modeling for singlet plenoptic cameras in [9–11], which treats the single lens as aberrations-free, we model the optical aberrations introduced by the single lens using Seidel polynomials and include them in the PSF. The modeled PSF in this paper is therefore more realistic and can be applied to real imaging. The detailed PSF modeling is described in the following.

Fig. 2. Schematic layout of singlet plenoptic cameras where main lens consists of only a single lens. T is the thickness of the single lens and purple lines indicate the principal planes.

As shown in Fig. 2, (ξ,η), (u,v), (x,y) and (s,t) denote the spatial coordinates of the object plane, main lens plane, MLA plane and sensor plane, respectively. Inheriting the subsystem propagation method from our previous work [12] and modeling the optical aberrations with Seidel polynomials, the impulse response of the singlet plenoptic camera is given by

$$\begin{aligned} h(s,t;\xi,\eta) &= \frac{\exp[ik(z_1+z_2+z_3)]}{-i\lambda^3 z_1 z_2 z_3}\int_{-\infty}^{+\infty}\!\!\int_{-\infty}^{+\infty}\sum_{m\in M}\sum_{n\in N} P_{MLA}(x-md_1,\,y-nd_1)\\ &\quad\times\exp\left\{-\frac{ik}{2f_{MLA}}\left[(x-md_1)^2+(y-nd_1)^2\right]\right\}\\ &\quad\times\exp\left\{\frac{ik}{2z_3}\left[(x-s)^2+(y-t)^2\right]\right\}\\ &\quad\times\left\{\int_{-\infty}^{+\infty}\!\!\int_{-\infty}^{+\infty} P_{main}(u,v)\,\exp\!\left(-ikW(\hat{x};\hat{u},\hat{v})\right)\exp\left[-\frac{ik}{2F_{main}}(u^2+v^2)\right]\right.\\ &\quad\times\exp\left\{\frac{ik}{2z_1}\left[(\xi-u)^2+(\eta-v)^2\right]\right\}\\ &\quad\times\left.\exp\left\{\frac{ik}{2z_2}\left[(x-u)^2+(y-v)^2\right]\right\}\mathrm{d}u\,\mathrm{d}v\right\}\mathrm{d}x\,\mathrm{d}y, \end{aligned}$$
where $\lambda$ is the wavelength and $k = 2\pi/\lambda$ is the wave number; $P_{MLA}(x,y)$ is the generalized pupil function of the MLA, which consists of $M \times N$ microlenses with focal length $f_{MLA}$ and diameter $d_1$; $F_{main}$ is the focal length of the single lens and $P_{main}(u,v)$ is its generalized pupil function. $W(\hat{x};\hat{u},\hat{v})$ denotes the optical aberrations introduced by the single lens, expressed in Seidel polynomials, and is formulated as
$$\begin{aligned} W(\hat{x};\hat{u},\hat{v}) &= \sum_{g,l,r} W_{glr}\,\hat{x}^{g}\rho^{l}\cos^{r}\theta\\ &\overset{\substack{\rho=\sqrt{\hat{u}^2+\hat{v}^2}\\ \rho\cos\theta=\hat{u}}}{=}\; W_{040}(\hat{u}^2+\hat{v}^2)^2 + W_{131}\,\hat{x}(\hat{u}^2+\hat{v}^2)\hat{u}\\ &\quad + W_{222}\,\hat{x}^2\hat{u}^2 + W_{220}\,\hat{x}^2(\hat{u}^2+\hat{v}^2) + W_{311}\,\hat{x}^3\hat{u}, \end{aligned}$$
where $\hat{u}$ and $\hat{v}$ are the normalized coordinates on the main lens plane; $\hat{x}$ is the normalized image height on the MLA plane; $W_{glr}$ are the wavefront coefficients, and the five primary Seidel aberrations satisfy $g + l = 4$. The terms in Eq. (2) correspond, in order, to spherical aberration, coma, astigmatism, field curvature and distortion [13,14]. Detailed calculations of the Seidel polynomials, summarized from [15], can be found in Appendix A.
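For concreteness, the following Python sketch evaluates Eq. (2) on a normalized pupil grid and forms the $\exp(-ikW)$ pupil factor of Eq. (1); the numerical values of the wavefront coefficients and the wavelength are placeholders, not the values used in this paper.

import numpy as np

def seidel_W(x_hat, u_hat, v_hat, W040, W131, W222, W220, W311):
    # Five primary Seidel aberrations of Eq. (2); x_hat is the normalized
    # image height and (u_hat, v_hat) the normalized pupil coordinates.
    r2 = u_hat**2 + v_hat**2                      # rho^2
    return (W040 * r2**2                          # spherical aberration
            + W131 * x_hat * r2 * u_hat           # coma
            + W222 * x_hat**2 * u_hat**2          # astigmatism
            + W220 * x_hat**2 * r2                # field curvature
            + W311 * x_hat**3 * u_hat)            # distortion

# Example: the exp(-ikW) pupil factor at image height x_hat = 0.5.
u, v = np.meshgrid(np.linspace(-1, 1, 256), np.linspace(-1, 1, 256))
W = seidel_W(0.5, u, v, W040=1.2e-6, W131=0.8e-6,
             W222=0.5e-6, W220=0.4e-6, W311=0.3e-6)   # placeholder values (m)
pupil_phase = np.exp(-1j * (2 * np.pi / 530e-9) * W)  # assumed 530 nm source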

The squared modulus of the PSF, $|h(s,t;\xi,\eta)|^2$, is the overall system response on the image sensor. As shown in the following section, the intensity distribution of this 4D quantity does not resemble an Airy disk, so the PSF is not spatially invariant as it is in traditional imaging systems. Moreover, as Eq. (1) shows, the optical aberrations break the periodicity of the PSF across microlenses, which makes its spatial variance even more severe.

2.2 Reconstruction and re-projection models

Because of the optical aberrations introduced by the single lens, the captured plenoptic image exhibits pixel aliasing in the spatial and angular domains, which disorders pixel intensities in the rendered sub-aperture images. For example, as shown in Fig. 3(c), using the widely adopted rendering method in [16], the sub-aperture image extracted from the aberrations-included plenoptic image (Fig. 3(b)) reveals essentially nothing of the imaging object (Fig. 3(a)), which indicates that the pixel distribution of the aberrations-included plenoptic image is corrupted. Therefore, we propose to correct the optical aberrations in the captured plenoptic image by reconstruction using the modeled PSF and to generate the aberrations-corrected plenoptic image by re-projection. The reconstruction and re-projection models are described as follows.
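As background for the comparisons in Fig. 3, a minimal Python sketch of sub-aperture extraction in the spirit of [16] is given below. It assumes a rectified plenoptic image whose microlens pitch is an integer number of sensor pixels; the image and the pitch value are placeholders.

import numpy as np

def subaperture(plenoptic, pitch, u, v):
    # Gather pixel (u, v) under every microlens into one view; the result
    # has one pixel per microlens (0 <= u, v < pitch).
    return plenoptic[u::pitch, v::pitch]

plenoptic = np.random.rand(672, 672)                    # placeholder sensor image
center_view = subaperture(plenoptic, pitch=24, u=12, v=12)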

Fig. 3. Sub-aperture image extraction comparison. Notice that sub-aperture images are up-sampled for display.

According to Eq. (1), when the integral is performed over all spatial coordinates on the object plane, the result is a function of the spatial coordinates on the image sensor plane only. Therefore, after arranging the imaging object information $O_{(\xi,\eta)}$ and the sensor data $I^{(s,t)}$ as column vectors, the image formation model is

$$ {I^{({s,t} )}} = H_{({\xi ,\eta } )}^{({s,t} )}{O_{({\xi ,\eta } )}} + {N^{({s,t} )}},$$
where $H_{(\xi,\eta)}^{(s,t)}$ is a matrix-style rearrangement of the 4D quantity $|h(s,t;\xi,\eta)|^2$ and $N^{(s,t)}$ is the noise term. Due to the spatial variance of the PSF, the image formation model of singlet plenoptic cameras cannot be simplified to a convolution as in traditional imaging systems, so direct deconvolution techniques such as Wiener filtering cannot be used to reconstruct the imaging object information. The Richardson-Lucy iteration algorithm in [17], which assumes that the PSF is periodic across microlenses, also cannot be used, since this periodicity is broken by the optical aberrations. Hence, we formulate the reconstruction as an optimization problem:
$${\hat{O}_{({\xi ,\eta } )}} = \mathop {\textrm{argmin}}\limits_{{O_{({\xi ,\eta } )}}} ||{{I^{({s,t} )}} - H_{({\xi ,\eta } )}^{({s,t} )}{O_{({\xi ,\eta } )}}} ||_2^2 + \tau ||{{O_{({\xi ,\eta } )}}} ||_2^2,$$
where $\tau$ is the regularization coefficient. The Least Squares Minimal Residual (LSMR) algorithm proposed in [18] can be used to solve this optimization problem. Note that the regularizer in Eq. (4) could also be an $\ell_1$ norm, in which case the classic Two-step Iterative Shrinkage/Thresholding (TwIST) algorithm proposed in [19] applies. However, this paper chooses $\ell_2$ regularization because the computation time of the $\ell_1$ formulation is found to be much larger for the same reconstruction quality.
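A minimal sketch of solving Eq. (4) with SciPy's LSMR implementation is shown below. LSMR minimizes $||I - HO||_2^2 + \mathrm{damp}^2||O||_2^2$, so the damping parameter is set to $\sqrt{\tau}$; the system matrix and measurement vector here are random placeholders with toy sizes.

import numpy as np
from scipy.sparse.linalg import lsmr

rng = np.random.default_rng(0)
n_sensor, n_obj = 4096, 1024          # toy sizes; real systems are far larger
H = rng.random((n_sensor, n_obj))     # stand-in for the aberrations-included PSF matrix
I_vec = rng.random(n_sensor)          # stand-in for the vectorized plenoptic image
tau = 0.01

# lsmr minimizes ||I - H O||^2 + damp^2 ||O||^2, i.e., Eq. (4) with damp = sqrt(tau).
O_hat = lsmr(H, I_vec, damp=np.sqrt(tau))[0]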

Since our goal is to obtain re-projected sensor data of the same high quality as that produced by an aberrations-free plenoptic imaging system with the same optical parameters, artifacts must not appear in the reconstructed results. Therefore, we apply the depth-dependent anti-aliasing filters of [20,21] to the reconstructed results. The aliasing-aware update scheme is then given by

$${\hat{O}^{\prime}_{({\xi ,\eta } )}} = {h_{r,{z_1}}} \ast {\hat{O}_{({\xi ,\eta } )}},$$
where $\ast$ denotes the convolution operator, $h_{r,z_1}$ the depth-dependent anti-aliasing filter, and $\hat{O}'_{(\xi,\eta)}$ the artifact-free reconstructed imaging object information. According to [20,21], the radius of the depth-dependent anti-aliasing filter $h_{r,z_1}$, in pixels on the object plane, is given by
$${r_{obj,{z_1}}} = \frac{{\textrm{min}({|{{\gamma_{{z_1}}}{d_1} - {b_{{z_1}}}} |,{d_1}/2} )S}}{{{d_1}}},$$
where $\gamma_{z_1}$ is the magnification factor that scales the image on the MLA plane to the image under each microlens; $b_{z_1}$ is the depth-dependent blur radius under each microlens due to the finite microlens aperture; and $S$ is the super-resolution factor defined in [17], meaning that the imaging object information is sampled at $S$ times the microlens-diameter sampling rate and is therefore spaced by $d_1/(MS)$, where $M$ is the magnification at the MLA plane of the main-lens imaging part. Notice that the values in Eq. (6) are calculated taking into account the optical aberrations of the singlet plenoptic camera, so they differ from those calculated for ideal plenoptic imaging systems without optical aberrations, as in [20,21]. Detailed calculations of $r_{obj,z_1}$, the magnification factor $\gamma_{z_1}$ and the blur radius $b_{z_1}$ are given in Appendix B.
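The following sketch computes the object-plane filter radius of Eq. (6) and applies a low-pass filter of that radius; a Gaussian is used purely for illustration (see [20,21] for the actual filter design), and all parameter values are placeholders.

import numpy as np
from scipy.ndimage import gaussian_filter

def r_obj(gamma, b, d1, S):
    # Eq. (6): anti-aliasing filter radius in pixels on the object plane.
    return min(abs(gamma * d1 - b), d1 / 2) * S / d1

radius = r_obj(gamma=0.3, b=20e-6, d1=100e-6, S=3)    # placeholder optics
O_hat_2d = np.random.rand(168, 168)                   # placeholder reconstruction
O_filtered = gaussian_filter(O_hat_2d, sigma=radius)  # Eq. (5), one possible h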

After reconstruction and anti-aliasing filtering, re-projection onto the image sensor plane is carried out to obtain aberrations-free plenoptic images. The PSF used in re-projection is modeled according to Eq. (1) but without the optical aberrations, and the re-projection process is modeled as

$${\hat{I}^{({s,t} )}} = \tilde{H}_{({{\xi_0},{\eta_0}} )}^{({s,t} )}{\hat{O}^{\prime}_{({\xi ,\eta } )}},$$
where $\tilde{H}_{(\xi_0,\eta_0)}^{(s,t)}$ is a matrix-style rearrangement of the ideally modeled PSF and $\hat{I}^{(s,t)}$ is the re-projected sensor data in column-vector form.

Using the aberrations-included plenoptic image shown in Fig. 3(b) and the modeled PSFs as inputs to the above models, the aberrations-corrected plenoptic image is obtained as shown in Fig. 3(d), from which the sub-aperture image in Fig. 3(e) is extracted. This sub-aperture image now visually reflects much of the imaging object information.
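Putting the three steps together, a compact end-to-end sketch of the proposed pipeline reads as follows; all arrays are random placeholders with toy sizes, and the Gaussian filter again merely stands in for $h_{r,z_1}$.

import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.sparse.linalg import lsmr

rng = np.random.default_rng(1)
H_aber  = rng.random((4096, 1024))   # Eq. (1) PSF matrix with Seidel terms
H_ideal = rng.random((4096, 1024))   # Eq. (1) PSF matrix with W = 0
I_cap   = rng.random(4096)           # captured plenoptic image, vectorized

O_hat = lsmr(H_aber, I_cap, damp=np.sqrt(0.01))[0]          # Eq. (4): reconstruct
O_f   = gaussian_filter(O_hat.reshape(32, 32), sigma=0.3)   # Eq. (5): anti-alias
I_rep = H_ideal @ O_f.ravel()                               # Eq. (7): re-project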

3. Experimental results and analysis

3.1 Verification of PSF modeling

First, the correctness of the PSF modeling that takes optical aberrations into account is verified on a self-built singlet plenoptic camera, shown in Fig. 4. The camera is mounted on a three-dimensional (3D) translation stage to control the offsets of the point sources, which are generated from a mounted LED (Thorlabs M530L3) with some modifications. Threads on the tubes adjust the distance between the main lens and the MLA, and the f-number of the main lens can be adjusted with the stop to match that of the MLA. A purchased MLA (RPC Photonics MLA-s100-f8; no gaps; square pattern; 2 mm thickness) and a self-designed MLA (no gaps; hexagonal pattern; 1.34 mm thickness) are used and integrated with the image sensor (Point Grey GS3-U3-41C6C-C). Parameters of the self-built singlet plenoptic camera are summarized in Table 1.

Fig. 4. The self-built singlet plenoptic camera.

Table 1. Parameters of the self-built singlet plenoptic camera.

PSFs of point sources with different offsets, simulated according to Eq. (1), are compared with those captured by the self-built singlet plenoptic camera. The offsets are changed by fixing the point source and moving the camera on the 3D translation stage. Figures 5 and 6 show the PSF results for the purchased MLA and the self-designed MLA, respectively; they are obtained by cropping 250×250 pixels from the sensor images, both captured and simulated. The red rectangles and circles in the figures mark the diameter of each microlens.

Fig. 5. Captured PSFs and simulated PSFs using the purchased MLA.

Fig. 6. Captured PSFs and simulated PSFs using the self-designed MLA.

As can be seen from the second and bottom rows of Figs. 5 and 6, when optical aberrations are not included the point sources are well focused on the MLA plane, so their responses lie entirely or almost entirely within one microlens. After the optical aberrations are introduced, the responses spread over several microlenses. Note that the point sources whose spatial coordinates $(\xi,\eta)$ are (-0.2475 mm, -0.2475 mm) in Fig. 5 and (0 mm, -3 mm) in Fig. 6 are focused on the border between microlenses in the ideal imaging situation, so their responses already spread over several microlenses; after the optical aberrations are introduced, they spread over even more. These results illustrate that the PSF of singlet plenoptic cameras is spatially variant and that the optical aberrations make this spatial variance more severe. They also show that the PSFs of singlet plenoptic cameras cannot be obtained from sparse measurements, so accurately modeled PSFs at arbitrary spatial locations, and even arbitrary wavelengths, are essential. Furthermore, comparing the on-axis PSFs with and without optical aberrations shows that spherical aberration is prominent, leading not only to response spreading but also to a non-uniform intensity distribution within a microlens. For the off-axis PSFs, the intensity distributions are distorted by the optical aberrations. These results echo the conclusion drawn from Fig. 3 that optical aberrations cause pixel aliasing in the spatial and angular domains, disordering pixel intensities in the sub-aperture images.

In addition, as shown in the top and bottom rows, the structures of the modeled PSFs that include optical aberrations closely match the captured ones, which indicates the correctness of the PSF modeling. These more realistic modeled PSFs can be applied to real imaging, as demonstrated in the following subsections.

3.2 Reconstruction and re-projection

3.2.1 Simulation results

To verify the utility of the proposed optical-aberrations-corrected light field re-projection method, numerical simulations are conducted in MATLAB. The optical configuration of the simulated singlet plenoptic imaging system is the same as that summarized in Table 1, using the parameters of the purchased MLA. The imaging objects used in the simulations are shown in Fig. 7, with their centers on the optical axis of the simulated system.

Fig. 7. Imaging objects used in the simulations.

First, the noise-free cases at different imaging distances are tested using the imaging object shown in Fig. 7(a). The imaging results are shown in the left-most column of Fig. 8; they are cropped from the simulated plenoptic images with a size of 300×300 pixels. Since the simulated plenoptic images are noise-free, $\tau$ is set to zero in these reconstruction experiments. As can be seen, the proposed method reconstructs the imaging object information with high quality, and re-projected plenoptic images can then be obtained. Comparing the sub-aperture images extracted from the simulated plenoptic images with those extracted from the re-projected ones shows that, after the optical aberrations are corrected, the pixel intensity disorder is rectified and artifacts are avoided.

Fig. 8. Results of reconstruction and re-projection without noise. Notice that the contrast of (g) and (j) is enhanced for illustration.

Second, imaging cases with Gaussian noise are tested; the results are shown in Fig. 9. The PSNRs between the cropped noisy results and the noise-free results above are all about 21 dB, and $\tau$ is set to 0.01 in these reconstruction experiments. The reconstruction results become blurry and noisy, which shows that noise affects the reconstruction quality. Nonetheless, as can be seen from the fourth and right-most columns, the re-projected plenoptic images still support more accurate sub-aperture extraction. Moreover, comparing the second and right-most columns shows that the severely disordered pixel intensities of the sub-aperture images extracted from the noisy aberrations-included plenoptic images are rectified by the proposed optical-aberrations-corrected light field re-projection method, which indicates both its utility and its robustness.

Fig. 9. Results of reconstruction and re-projection with noise.

Besides, in order to investigate the effect of different imaging noise levels, imaging cases with Gaussian noise at different levels are tested using the imaging object shown in Fig. 7(b). The aberrations-included imaging results are shown in the top row of Fig. 10; they are cropped from the simulated plenoptic images with a size of 672×672 pixels. From left to right, the PSNRs between the cropped noisy results and the noise-free result are 21 dB, 18 dB and 15 dB, respectively, and $\tau$ is set to 0.01 in all reconstruction experiments. Comparing the sub-aperture images extracted from the aberrations-included plenoptic images with those from the re-projected ones, the conclusion still holds that the re-projected plenoptic images support more accurate sub-aperture extraction. Moreover, as shown in the bottom row, the quality of the sub-aperture images extracted from the re-projected plenoptic images changes little as the imaging noise becomes more severe, in contrast to the second row, where the sub-aperture image quality is susceptible to noise. These results further indicate the utility and robustness of the proposed optical-aberrations-corrected light field re-projection method.
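For reference, Gaussian noise can be calibrated to a target PSNR (the 21/18/15 dB levels above) as in the following sketch; the relation PSNR = 20 log10(peak/sigma) holds in expectation, and the clean image is a placeholder.

import numpy as np

def add_noise_at_psnr(img, psnr_db, rng=np.random.default_rng(0)):
    # Choose sigma so that the expected PSNR of the noisy image is psnr_db.
    sigma = img.max() / 10 ** (psnr_db / 20)
    return img + rng.normal(0.0, sigma, img.shape)

clean = np.random.rand(672, 672)        # placeholder plenoptic image
noisy = add_noise_at_psnr(clean, 21.0)  # ~21 dB case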

Fig. 10. Results of reconstruction and re-projection with different imaging noise levels.

The above simulation experiments assume that the depth information of the imaging objects is known a priori. To remove this assumption, the absolute distance measurement method proposed in our previous work [22,23] is incorporated into the proposed method. More specifically, the absolute distance measurement method is first used to estimate the depth of the imaging objects; the estimated depths are then used in Eq. (1) for PSF modeling, and the modeled PSFs are used for both reconstruction and re-projection. Experiments are conducted using the imaging object shown in Fig. 7(c), where the letter "O" is located at $z_1$ = 103.6 mm and the letter "E" at $z_1$ = 101.6 mm. The simulated plenoptic image with Gaussian noise is shown in Fig. 11(a); its PSNR with respect to the noise-free result is about 21 dB and its size is again 672×672 pixels. After applying the absolute distance measurement process to the simulated plenoptic image, the depth estimation results are summarized in Table 2. The estimation error grows as $z_1$ increases, as pointed out in [22,23]. PSFs are then modeled according to Eq. (1) for $z_1$ equal to 99.37 mm and 99.14 mm, with the other parameters as summarized in Table 1. Since the depths estimated by the absolute distance measurement method [22,23] are inherently smaller than the actual values, PSFs are also modeled with $z_1$ increased to 101.37 mm and 103.37 mm for the letter "E", and to 101.14 mm and 103.14 mm for the letter "O". These experiments investigate the effect of depth accuracy on the quality of plenoptic imaging with the proposed method.

Fig. 11. Results of reconstruction and re-projection without depth information as a priori. The yellow lines and orange lines are used to manifest the vertical shift, i.e., parallax, among the sub-aperture images.

Table 2. Depth estimation results using the absolute distance measurement method proposed in [22,23].

The reconstruction and re-projection results are displayed in Fig. 11: the top row shows the ground truth, and the second to bottom rows correspond to $z_1$ equal to 99.37 mm, 101.37 mm and 103.37 mm for the letter "E", and 99.14 mm, 101.14 mm and 103.14 mm for the letter "O". $\tau$ is set to 0.01 in all reconstruction experiments. Notice that the reconstruction results with and without anti-aliasing filtering are displayed in a 2D x-y view. As can be seen from Fig. 11(b), the sub-aperture images extracted from the aberrations-included plenoptic image in Fig. 11(a) show obvious artifacts, which are corrected by the proposed method, as Fig. 11(e) shows. Since the letter "O" is located at $z_1$ = 103.6 mm and the letter "E" at $z_1$ = 101.6 mm, a tiny parallax should exist in the sub-aperture images for the letter "O" but not for the letter "E", which is exactly what Fig. 11(e) manifests. As shown in the first column from the second to the bottom row, i.e., Figs. 11(f), 11(k) and 11(p), the reconstruction quality is acceptable only with PSFs modeled at depths quite close to the actual values (101.37 mm is close to 101.6 mm for the letter "E", and 103.14 mm is close to 103.6 mm for the letter "O"). Moreover, after anti-aliasing filtering, the filter is effective only for reconstructed results of acceptable quality, as shown in Figs. 11(g), 11(l) and 11(q). As demonstrated in the right-most column, poor reconstruction leads to sub-aperture images of low quality with wrong parallax information.

These results indicate that depth accuracy is important for reconstruction and re-projection. Therefore, to obtain high-quality plenoptic imaging with the proposed method, the absolute distance measurement method can be incorporated to derive initial depth information, and PSF stacks can be modeled accordingly. A no-reference quality assessment algorithm is then needed to evaluate the quality of the reconstructed results; the depth corresponding to the best reconstruction is used to model the PSFs for re-projection. To some extent, this scheme can additionally improve the accuracy of the absolute distance measurement, which we leave as future work. A sketch of this depth-sweep scheme is given below.
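In this sketch, model_psf_matrix is a hypothetical helper standing in for Eq. (1), and the gradient-energy score is only one simple stand-in for a no-reference quality assessment algorithm; both are assumptions, not the paper's method.

import numpy as np
from scipy.sparse.linalg import lsmr

def sharpness(img2d):
    # Gradient energy as a crude no-reference quality score.
    gy, gx = np.gradient(img2d)
    return np.mean(gx**2 + gy**2)

def pick_depth(I_vec, depths, model_psf_matrix, shape, tau=0.01):
    # Reconstruct with PSFs modeled at each candidate depth and keep the
    # depth whose reconstruction scores best.
    def score(z1):
        O = lsmr(model_psf_matrix(z1), I_vec, damp=np.sqrt(tau))[0]
        return sharpness(O.reshape(shape))
    return max(depths, key=score)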

3.2.2 Real experiment results

Furthermore, real reconstruction and re-projection experiments are conducted with the self-built singlet plenoptic camera, again using the purchased MLA integrated with the image sensor. An LED screen (MDP02BPFC, Creator Inventor Connector Co., Ltd, Shenzhen) is placed in front of the camera, and the images shown in the left-most column of Fig. 12 are projected onto it as imaging objects. The physical size of these imaging objects is 1 mm × 1 mm and their centers are on the optical axis of the camera. $\tau$ is again set to 0.01 in these reconstruction experiments.

Fig. 12. Results of imaging and re-projection in real experiments. The yellow lines in (c) and (e) are used to manifest the vertical shift, i.e., parallax, among the sub-aperture images.

The results of imaging and re-projection are shown in Fig. 12. As can be seen, the artifacts in the extracted sub-aperture images shown in Figs. 12(h) and 12(m) are removed by the proposed method when $z_1$ equals 101.6 mm and 103.6 mm. Although the structural similarity with the imaging object is slightly higher for the sub-aperture images in Fig. 12(c) than for those in Fig. 12(e), the sub-aperture images in Fig. 12(c) show no parallax. This is incorrect: when $z_1$ equals 96.6 mm, the aberrations-free imaging situation should be defocused and parallax should exist among the sub-aperture images. As Fig. 12(e) shows, the sub-aperture images extracted from the re-projected plenoptic images exhibit the correct parallax, which means the optical aberrations have been corrected. These real experimental results indicate the utility of the proposed optical-aberrations-corrected light field re-projection method. The higher quality of the re-projected plenoptic images will benefit post-capture capabilities such as refocusing, virtual perspective synthesis and perspective shifting.

4. Conclusions

An optical-aberrations-corrected light field re-projection method is proposed in this paper to obtain high-quality singlet plenoptic imaging. The proposed method uses a wave-optics-based PSF model that includes optical aberrations to reconstruct the imaging object information from aberrations-included plenoptic images, and then re-projects that information back to the plenoptic imaging plane to obtain aberrations-corrected plenoptic images. Experimental results demonstrate the correctness of the PSF modeling and the utility of the proposed method.

To further improve the versatility of the proposed method, our future work includes combining it with the absolute distance measurement method [22,23] to handle more complicated imaging cases and investigating an efficient no-reference quality assessment algorithm to decide the appropriate depth for reconstruction and re-projection. We also plan to make the proposed method applicable to industrial inspection, microscopy and broader areas.

Appendix A

According to [15], the wavefront coefficients of a single lens can be calculated from their relationships with the Seidel coefficients, which are listed in Table 3. In the table, $p$ represents the heights at which rays intersect the surfaces of the lens, and $\bar{p}$ the corresponding heights of the chief rays; $\varphi$ and $\varphi'$ denote the incident and refracted angles of rays impinging on the lens, respectively, and $\bar{\varphi}$ and $\bar{\varphi}'$ the corresponding angles of the chief rays; $n$ and $n'$ represent the refractive indices of the environment (the atmosphere in this paper) and the lens, respectively; $C$ is the curvature of a surface and $\phi = (n' - n)C$ is its optical power. The definitions of some parameters are also illustrated in Fig. 13.

Fig. 13. Definitions of some parameters in Table 3.

Table 3. Relationships between wavefront coefficients and Seidel coefficients.

Appendix B

The magnification factor represents the scaling between the regular image (orange vector) that would form on the MLA plane and the actual image (green vector) that forms under each microlens. As shown in Figs. 14(a) and 14(b), the optical aberrations change the imaging distance behind the main lens, which leads to a different magnification factor. The magnification factor in Fig. 14(b) is defined as

$${\gamma _{{z_1}}} = \frac{{{{z^{\prime}}_1} - \Delta z}}{{{z_2}}} \cdot \frac{{{z_3}}}{{{z_2} - ({{{z^{\prime}}_1} - \Delta z} )}},$$
where $z'_1$ denotes the imaging distance when no optical aberrations exist and $\Delta z$ is the deviation of the actual imaging distance from $z'_1$. According to [20,21], under the pinhole approximation of the MLA, the light field captured on the image sensor is the conjugate light field at $(z'_1 - \Delta z)$ convolved with a low-pass filter of cutoff frequency $1/(2d_1)$. Therefore, using $\gamma_{z_1}$, the scaled filter at the image sensor has a radius of $\gamma_{z_1} d_1$.

Fig. 14. Schematics for magnification factor and blur radius analysis.

The finite aperture of each microlens introduces additional blur, as shown in Fig. 14(c), whose radius is given by

$${b_{{z_1}}} = \frac{{{d_1}{z_3}}}{2}\left|{\frac{1}{{{f_{MLA}}}} - \frac{1}{{{z_2} - ({{{z^{\prime}}_1} - \Delta z} )}} - \frac{1}{{{z_3}}}} \right|.$$
Taking the finite aperture of the microlens into account, the aliasing is reduced by the microlens blur $b_{z_1}$. The radius of the filter at the image sensor is then given by
$${r_{sen,{z_1}}}\textrm{ = min}({|{{\gamma_{{z_1}}}{d_1} - {b_{{z_1}}}} |,{d_1}/2} ),$$
where the radius is capped at half the microlens diameter for the case in which the actual imaging distance equals $z_2$ ($z_2 - (z'_1 - \Delta z) \to 0$ and $b_{z_1} \to d_1/2$). Back-projecting this sensor-plane filter to the object plane yields Eq. (6).
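A numeric sketch of Eqs. (8)-(10) and Eq. (6) is given below; the distances and the aberration-induced shift Δz used in the example call are placeholder values, not the camera's actual geometry.

def filter_radii(z1_prime, dz, z2, z3, f_mla, d1, S):
    a = z2 - (z1_prime - dz)                             # aberrated image to MLA distance
    gamma = (z1_prime - dz) / z2 * z3 / a                # Eq. (8): magnification factor
    b = 0.5 * d1 * z3 * abs(1 / f_mla - 1 / a - 1 / z3)  # Eq. (9): microlens blur radius
    r_sen = min(abs(gamma * d1 - b), d1 / 2)             # Eq. (10): sensor-plane radius
    return gamma, b, r_sen, r_sen * S / d1               # last value is Eq. (6)

# Placeholder geometry in meters: prints gamma, b, r_sen and r_obj.
print(filter_radii(z1_prime=8e-3, dz=0.2e-3, z2=10e-3, z3=0.8e-3,
                   f_mla=0.8e-3, d1=100e-6, S=3))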

Funding

National Natural Science Foundation of China (61771275); Shenzhen Technical Project (JCYJ20170817162658573); Tip-top Scientific and Technical Innovative Youth Talents of Guangdong Special Support Program (2016TQ03X998).

Disclosures

The authors declare no conflicts of interest.

References

1. F. Huang, K. Chen, and G. Wetzstein, "The light field stereoscope: immersive computer graphics via factored near-eye light field displays with focus cues," ACM Trans. Graph. 34(4), 60:1–60:12 (2015).

2. R. S. Overbeck, D. Erickson, D. Evangelakos, M. Pharr, and P. Debevec, "A system for acquiring, processing, and rendering panoramic light field stills for virtual reality," ACM Trans. Graph. 37(6), 1–15 (2018).

3. D. G. Dansereau, G. Schuster, J. Ford, and G. Wetzstein, "A wide-field-of-view monocentric light field camera," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017), pp. 3757–3766.

4. Y. M. Jeong, S. K. Moon, J. S. Jeong, G. Li, J. B. Cho, and B. H. Lee, "One shot 360-degree light field capture and reconstruction with depth extraction based on optical flow for light field camera," Appl. Sci. 8(6), 890 (2018).

5. A. Ö. Yöntem, K. Li, and D. P. Chu, "Reciprocal 360-deg 3D light-field image acquisition and display system [Invited]," J. Opt. Soc. Am. A 36(2), A77–A87 (2019).

6. N. Bedard, T. Shope, A. Hoberman, M. A. Haralam, N. Shaikh, J. Kovačević, N. Balram, and I. Tošić, "Light field otoscope design for 3D in vivo imaging of the middle ear," Biomed. Opt. Express 8(1), 260–272 (2017).

7. D. W. Palmer, T. Coppin, K. Rana, D. G. Dansereau, M. Suheimat, M. Maynard, D. A. Atchison, J. Roberts, R. Crawford, and A. Jaiprakash, "Glare-free retinal imaging using a portable light field fundus camera," Biomed. Opt. Express 9(7), 3178–3192 (2018).

8. X. Dallaire and S. Thibault, "Simplified projection technique to correct geometric and chromatic lens aberrations using plenoptic imaging," Appl. Opt. 56(10), 2946–2951 (2017).

9. S. A. Shroff and K. Berkner, "Image formation analysis and high resolution image reconstruction for plenoptic imaging systems," Appl. Opt. 52(10), D22–D31 (2013).

10. P. Helin, V. Katkovnik, A. Gotchev, and J. Astola, "Super resolution inverse imaging for plenoptic cameras using wavefield modeling," in Proceedings of IEEE 3DTV Conference: The True Vision—Capture, Transmission and Display of 3D Video (3DTV-CON) (2014), pp. 1–4.

11. E. Sahin, V. Katkovnik, and A. Gotchev, "Super-resolution in a defocused plenoptic camera: a wave-optics-based approach," Opt. Lett. 41(5), 998–1001 (2016).

12. X. Jin, L. Liu, Y. Q. Chen, and Q. H. Dai, "Point spread function and depth-invariant focal sweep point spread function for plenoptic camera 2.0," Opt. Express 25(9), 9947–9962 (2017).

13. J. W. Goodman, Introduction to Fourier Optics (Roberts and Company Publishers, 2005).

14. D. G. Voelz, Computational Fourier Optics: A MATLAB Tutorial, SPIE Tutorial Texts Vol. TT89 (SPIE Press, 2011).

15. J. M. Geary, Introduction to Lens Design with Practical ZEMAX Examples (Willmann-Bell, 2007).

16. R. Ng, M. Levoy, M. Bredif, G. Duval, M. Horowitz, and P. Hanrahan, "Light field photography with a hand-held plenoptic camera," Stanford University Computer Science Technical Report (2005).

17. M. Broxton, L. Grosenick, S. Yang, N. Cohen, A. Andalman, K. Deisseroth, and M. Levoy, "Wave optics theory and 3-D deconvolution for the light field microscope," Opt. Express 21(21), 25418–25439 (2013).

18. D. Fong and M. Saunders, "LSMR: an iterative algorithm for sparse least-squares problems," SIAM J. Sci. Comput. 33(5), 2950–2971 (2011).

19. J. M. Bioucas-Dias and M. A. T. Figueiredo, "A new TwIST: two-step iterative shrinkage/thresholding algorithms for image restoration," IEEE Trans. Image Process. 16(12), 2992–3004 (2007).

20. T. E. Bishop and P. Favaro, "The light field camera: extended depth of field, aliasing, and superresolution," IEEE Trans. Pattern Anal. Mach. Intell. 34(5), 972–986 (2012).

21. A. Stefanoiu, J. Page, P. Symvoulidis, G. G. Westmeyer, and T. Lasser, "Artifact-free deconvolution in light field microscopy," Opt. Express 27(22), 31644–31666 (2019).

22. Y. Q. Chen, X. Jin, and Q. H. Dai, "Distance measurement based on light field geometry and ray tracing," Opt. Express 25(1), 59–76 (2017).

23. Y. Q. Chen, X. Jin, and Q. H. Dai, "Distance estimation based on light field geometric modeling," in Proceedings of IEEE International Conference on Multimedia & Expo Workshops (ICMEW) (2017), pp. 43–48.
