
Artifact-free deconvolution in light field microscopy


Abstract

The sampling patterns of the light field microscope (LFM) are highly depth-dependent, which implies non-uniform recoverable lateral resolution across depth. Moreover, reconstructions using state-of-the-art approaches suffer from strong artifacts at axial ranges, where the LFM samples the light field at a coarse rate. In this work, we analyze the sampling patterns of the LFM, and introduce a flexible light field point spread function model (LFPSF) to cope with arbitrary LFM designs. We then propose a novel aliasing-aware deconvolution scheme to address the sampling artifacts. We demonstrate the high potential of the proposed method on real experimental data.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Since it was first proposed by Levoy et al. in 2006 [1], light field microscopy has proven very useful in biological applications involving fast dynamics, due to its high-speed 3D imaging capability. The potential of the modality has been showcased in various applications, including live cell imaging [2], fish larvae neuro-dynamics recording [3,4], and heart imaging and blood flow monitoring [5].

The light field microscope (LFM) enables scan-less 3D imaging of fluorescent specimens by incorporating an array of micro-lenses into the optical path of a conventional wide-field microscope. Thus, both spatial and directional light field information is captured in a single shot, allowing for subsequent volumetric reconstruction of the imaged sample.

Based on the integral imaging principles [6], lenslet-based (plenoptic) devices [7] have established plenoptic imaging as an active topic in computational imaging, enabling post-acquisition refocusing [8–10] or 3D imaging of transparent microscopic samples [1,11]. In the early stages, the methods for rendering extended depth of field images from plenoptic devices were limited to lenslet resolution [8], i.e. the number of available micro-lenses. In recent years, various approaches to address this limitation have been proposed. Examples include variations of the hardware to allow for attenuation masks [12] or wavefront coding techniques [13], which demonstrated higher recoverable resolutions and extended depth of field. On the other hand, inspired by the large amount of work on computational super-resolution in the computer vision field [14,15], algorithms for super-resolving the light field were developed, involving multi-view reconstruction [16,17], or explicit image formation models for plenoptic devices employing either ray-based [18–20] or wave-based optics [13,21,22].

In [22], Broxton et al. introduced a wave-based model to describe the propagation of light through an original plenoptic LFM setup [1], together with a 3D deconvolution method. They demonstrated reconstructions superior in terms of lateral resolution (compared to lenslet resolution) for most of the axial range. The improvement rate, however, is non-uniform across depth, and the recoverable resolution remains low near the native object plane; additionally, this region exhibits strong artifacts. This effect is due to the depth-dependent sampling patterns and the induced aliasing in light field imaging [19]. As the sample is naturally placed at the native object plane during the acquisition, i.e. in focus, the aliasing artifacts constitute a rather prominent problem in light field microscopy, as noted in [3,5,23], undermining the potential of the modality.

The sampling patterns and angular aliasing have previously been studied for light field imaging systems such as camera arrays [24–27]. However, there are fundamental differences between the aliasing in camera arrays and in plenoptic devices [19], which must be acknowledged in order to address the cause of the artifacts in LFM deconvolution. Plenoptic devices avoid angular aliasing while introducing considerable spatial aliasing, since neighboring emitters in the scene are projected to pixels far apart on the sensor. Ng et al. [8] analyzed aliasing in refocused light fields, and Georgiev et al. [28] discussed the impact of the MLA-to-sensor distance on the sampling rate in plenoptic cameras. In [19,29], the authors studied the depth-dependent sampling requirements in light field cameras.

In this work we study the depth-dependent sampling patterns of a light field microscope and analyze how they introduce aliasing, in order to understand the cause of the artifacts; to the extent of our knowledge, such an analysis has not been performed for the LFM before. We then derive depth-dependent anti-aliasing filters and propose a novel and efficient aliasing-aware deconvolution method for artifact-free 3D reconstruction.

Our work and [19] share the idea of anti-aliasing priors, yet apply it differently. In [19], the authors use light field projections of the filtering kernels directly on the micro-images to ensure correct, non-aliased disparity maps. They then incorporate the estimated disparity in the light propagation model and proceed to recover the 2D radiance (all-in-focus image) from a light field camera image employing a variational Bayesian framework. This can be interpreted as implicitly accounting for the aliasing through the disparity prior. In our work, on the contrary, we derive the anti-aliasing filter kernels in the object space and explicitly apply them to the light field as a correction step of our iterative aliasing-aware deconvolution, which employs a smoothing expectation maximization scheme.

The analysis and the deconvolution scheme we propose apply to arbitrary plenoptic configurations. Hence, we also derive a generalized wave-based forward light propagation model able to characterize both original [1,22] and “focused” LFM setups. In previous works, the “focused plenoptic” camera [30,31] design was proposed to enhance the spatial resolution of the captured light field, compared to the original plenoptic camera [9], by manipulating the placement of the sensor with respect to the micro-lens array (MLA) such that the micro-lenses are focused on the objective lens. In the light field microscope, due to the presence of the tube lens, manipulating the distance between the MLA and the camera sensor immediately affects the distance between the tube lens and the MLA, such that the conjugate image of the native object plane may lie in front of or behind the MLA. This creates a defocused field incident on the MLA, as opposed to the original LFM case, where the native image plane coincides with the MLA plane. Thus, although conflicting with the established “focused plenoptic” term, we find the term “defocused LFM” to better reflect generic LFM designs, as the recoverable resolution at a certain depth in the object space strongly depends on the extent of the defocus generated at the MLA plane.

We discuss the depth-dependent trade-offs in terms of recoverable lateral resolution when comparing various LFM configurations, and evaluate the proposed reconstruction algorithm on real experimental data to demonstrate high quality reconstructions, superior to previous work. As a teaser, Fig. 1(c) shows the reconstruction of the USAF 1951 resolution target to compare our proposed method (right) with the baseline method in [22] (center).

Fig. 1. (a) Ray diagram: light propagation through the light field microscope. $f_{obj}$ denotes the objective focal length, and $\Delta z$ represents the offset from the native object plane (NOP). A source point $\textbf {o} (o_{x},o_{y},o_{z} = z)$ in front of the microscope objective has a conjugate image by the tube lens at $z''$. Finally, the micro-lenses create micro-images at $z^{\prime\prime\prime}$, and the light reaches the camera sensor, producing a raw light field image. (b) Depth-dependent aliasing in LFM: The source points in the red group at depth $z_0$ in front of the microscope have completely overlapping images at the sensor plane. The points in the blue group, while being sampled at the same rate as the points in the red group, show partially non-overlapping images on the sensor, as they are placed at depth $z_1$. The points in the green group, on the other hand, are also placed at $z_1$; however, they are sampled at a higher rate and their images are fully non-overlapping. (c) Reconstruction of the USAF 1951 target: Left: a light field image of the USAF 1951 resolution target, acquired with our experimental LFM. Center: reconstructed target using the method in [22]. Specific aliasing artifacts are present. Right: artifact-free reconstruction using our aliasing-aware deconvolution method.


In summary, the contributions of the present work include:

  • an analysis of the depth-dependent sampling patterns and aliasing in light field microscopy (Section 2);
  • a generalized wave-based light field point spread function to characterize defocused LFM designs (Section 3);
  • a novel and efficient aliasing-aware deconvolution method for artifact-free 3D reconstruction employing depth-dependent anti-aliasing filters (Section 4);
  • a discussion and comparison of various LFM designs with respect to the recoverable resolution in the light field data, and an evaluation of the proposed deconvolution scheme on in-vivo biological samples and phantoms (Section 5).

Table 1 contains a summary of the symbols used in this paper together with their definitions.

2. Aliasing in light field microscopy

2.1 Light field microscope

A light field microscope (LFM) is built by placing a micro-lens array (MLA) into the optical path of a conventional wide-field microscope [1,3]. Figure 1(a) shows a ray diagram as an intuitive overview of the light propagation through a LFM. A source point at a depth $z$ in front of the microscope objective has a conjugate image by the tube lens at $z''$. The objective lens creates a virtual image of the object at $z'$, which is not drawn here for the sake of clarity. We choose to represent $z$ as $f_{obj} + \Delta z$, since an object at depth $f_{obj}$ (the focal length of the objective) is usually in focus in the wide-field microscope. In order to be consistent with the literature [3,22], we will call this depth, $z = f_{obj}$, the native object plane (NOP) or the zero plane of the LFM. Then $\Delta z$ represents an offset from the native object plane, and we will refer to this quantity when talking about depth in the subsequent sections. Finally, the micro-lenses create micro-images at $z^{\prime\prime\prime}$, and the light reaches the camera sensor, producing a raw light field image.

Without loss of generality, Fig. 1(a) depicts a configuration where the conjugate image is formed in front of the MLA. However, throughout the paper, our derivations are valid for arbitrary configurations, i.e. they do not discriminate between defocused [30] and original plenoptic [1,8] light field imaging designs.

2.2 Depth-dependent sampling patterns of the LFM

We now investigate the sampling requirements of the LFM and derive the depth-dependent quantities relevant to our algorithm, similar to the analysis of sampling patterns in plenoptic cameras [19].

Figure 1(b) is meant to build an intuition on the depth-dependence of the sampling patterns of the plenoptic microscope. The source points at $z_0$ in front of the microscope (circled in red) have completely overlapping images at the sensor plane. The source points in the blue group, by contrast, have partially non-overlapping images on the sensor: they are sampled at the same rate as the points in the red group, but originate from a different depth $z_1$. The points in the green group, also at $z_1$, are sampled at a higher rate, such that their images on the sensor are fully non-overlapping.

In order to characterize the depth-dependent nature of the sampling in light field microscopy, let us assume for now that the micro-lenses have very small apertures and behave like pinholes. Then we can approximate the MLA by an array of pinholes with spacing $p_{ml}$. The in-camera light field at the MLA (pinholes in this context) should be band-limited with a bandwidth of $f_0 = \frac {1}{2p_{ml}}$ in order to satisfy the Nyquist criterion [32]. Higher frequencies, outside this bandwidth, would be under-sampled by the pinhole array and appear aliased.

Since the sensor elements have a finite extent, we must consider the area over which the pixels effectively integrate. Figure 2(a) illustrates how the image at the MLA scales to the actual image that forms under a micro-lens. For a clear visualization, we omit here the first part of the image formation and assume we have an image of an object at $z$ in front of the objective formed at $z'$ by the objective lens. The tube lens further creates a scaled image at the conjugate image plane (dark blue), $z''$. The image at the MLA (light blue) follows from tracking the chief rays. Finally, we pick a micro-lens and derive the micro-image behind it. By means of similar triangles, the size of the image under a micro-lens is the size of the image at the MLA, scaled by a factor:

$$\gamma_z = \frac{d_{mla}^{sens}}{d_{tl}^{mla}}\left|\frac{z''}{d_{tl}^{mla}-z''}\right|.$$

Fig. 2. (a) Micro-lens magnification: The image of an object under a micro-lens scales according to the object depth. (b) Micro-lens blur: The geometric blur radius behind a micro-lens is again depth-dependent. (c) Anti-aliasing filter radius over depth: In the original LFM design [1,22] (see Fig. 3(a)), the NOP is sampled at the coarsest rate by the LFM which implies our resampling anti-aliasing filter has the largest radius at this object depth. As we move away from the zero plane, the LFM sampling rate increases and the anti-aliasing requirements become milder.


Here, $d_{mla}^{sens}$ is the distance between the MLA plane and the sensor plane, and $d_{tl}^{mla}$ is the distance between the tube lens and the MLA plane. The scaling amount $\gamma _z$ is depth-dependent, which means the actual area of the light field the sensor pixels integrate over varies with the object depth.

An interesting observation follows for telecentric microscopes (4f systems), as in [3,22]: for these systems, although the magnification stays constant with object depth, the blur radius at the MLA (depicted in blue in Fig. 2(b)) varies in extent with depth,

$$B_z = r_{tl}\left|\frac{d_{tl}^{mla} - z''}{z''}\right|.$$
Here, $r_{tl}$ is the effective tube lens radius. Consequently, the depth-dependent scaling factor $\gamma _z$ still applies, as we can write
$$\gamma_z = \frac{d_{mla}^{sens}}{d_{tl}^{mla}}\frac{r_{tl}}{B_z}.$$
Please note the relation to the magnification factor in [19], or the amount of refocusing in [9].

If we now drop the pinhole array approximation and consider micro-lenses with finite aperture $p_{ml}$, we have to take into account the additional blur they introduce. The depth-dependent blur under each micro-lens, depicted in red in Fig. 2(b), has a radius

$$b_z =r_{ml}\left|\frac{1}{z'''} - \frac{1}{d_{mla}^{sens}}\right|,$$
where $r_{ml} = \frac {p_{ml}}{2}$ is the radius of the micro-lens.

We have now derived all the ingredients we need to characterize the non-aliasing requirements of the LFM.

2.3 Anti-aliasing filters

The band-limited assumption we made in the previous section for the pinhole approximation of the MLA means the acquired light field is the conjugate light field at $z''$, convolved by a low-pass ideal (sinc) filter with cutoff frequency $\frac {1}{2p_{ml}}$. We define the sinc kernel radius as the first zero crossing of the filter, $p_{ml}$. Then, as every micro-image is the projection of the conjugate light field onto the sensor, we can project the filter in the same way. Employing Eq. (1), the scaled filter kernel has a radius of $\gamma _z p_{ml}$.

When we take into account the finite micro-lens apertures $p_{ml}$, the pixels effectively integrate over a larger area and the aliasing is reduced with the micro-lens blur $b_z$. Then the filter size at the sensor, accounting for the micro-lens blur, is:

$$w_{sens_z} = |\gamma_z p_{ml}-b_z|.$$
In the case where the conjugate image forms at the MLA ($z''\to d_{tl}^{mla}$), we have $z''' \to 0$ and $b_z \to \infty$. However, the micro-lens blur actually converges to the size of the micro-image and thus we restrict the maximum filter radius to $r_{ml}$:
$$w_{sens_z} = \textrm{min}\ (|\gamma_z p_{ml}-b_z| ,\ r_{ml}).$$
We now backproject the filter into the object space. For this we introduce the super-resolution factor, $s \in \mathbb {Z}$, as defined in [22]. If we sample the volume at a rate of $s$ times the lenslet resolution $p_{ml}$, then the voxels are spaced by $\frac {p_{ml}}{Ms}$, where $M$ is the objective magnification factor. Then the radius of our ideal filter kernel in pixels in object space is:
$$w_{obj_z} = \frac{w_{sens_z}s}{p_{ml}}.$$
Figure 2(c) illustrates the scaled pinhole filter radius together with the micro-lens blur radius and the final compensated anti-aliasing filter radius in pixels over an axial range $[-100, 100] \mu m$ for an original plenoptic LFM configuration as in Fig. 3(a). An important observation here is that, as we move away from the zero plane, the LFM samples at a higher rate, imposing milder anti-aliasing requirements.
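For concreteness, the per-depth kernel radius of Eqs. (6) and (7) reduces to a few lines of code. The following minimal Python sketch assumes $\gamma_z$ (Eq. (1)) and $b_z$ (Eq. (4)) have already been evaluated per depth; the function name is illustrative, not part of our released implementation:

```python
import numpy as np

def aa_filter_radius(gamma_z, b_z, p_ml, s):
    """Object-space anti-aliasing kernel radius in pixels (Eqs. (6)-(7)).

    gamma_z, b_z : per-depth arrays with the scaling factor (Eq. (1)) and
                   the micro-lens blur radius (Eq. (4));
    p_ml         : micro-lens pitch; s : super-resolution factor.
    """
    r_ml = p_ml / 2.0
    # Filter radius at the sensor, clipped to the micro-image radius (Eq. (6)):
    w_sens = np.minimum(np.abs(gamma_z * p_ml - b_z), r_ml)
    # Backprojection into object space at the chosen voxel spacing (Eq. (7)):
    return w_sens * s / p_ml
```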

Fig. 3. LFM configurations and their light propagation paths. (a) The original design as described in [1,22]. The objective and tube lens are arranged as a 4-f (tele-centric) system. The native object plane (NOP) is then located at $f_{obj}$ in front of the microscope objective, and the native image plane (NIP) follows at $f_{tl}$ behind the tube lens; $f_{tl}$ represents the focal length of the tube lens. The MLA is then placed at the NIP. The camera is behind the MLA at an offset $d_{mla}^{sens} = f_{ml}$, where $f_{ml}$ represents the focal length of the micro-lens. (b) and (c) Defocused LFM (similar to the focused plenoptic camera [30] design). The MLA is now placed behind the NIP (b) or in front of it (c), such that the NOP is not focused on the MLA. In the latter scenario, the virtual image that would form at the NIP is depicted in dashed orange. Top: experimentally acquired LFPSF of a point source at the NOP, $\textbf {o}(o_x, o_y, o_z) = (0,0,f_{obj})$ for each setup.


Finally, we define $h_{f_{w,z}}$ as the normalized anti-aliasing resampling filter. $h_{f_{w,z}}$ is a depth-dependent ideal low-pass filter, and its kernel size at each depth is given by $w_{obj_z}$.

Since the ideal filter has unit value for all frequencies inside the band-limit and zero outside, it has infinite spatial extent. In practice, we need to use an approximate, non-ideal filter kernel, aiming at unity gain in the pass-band and zero gain in the stop-band. While there are extensive filter design choices [32], for all the experiments in this paper we obtained very good results using a Lanczos2 windowed version of the sinc kernel.
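As an illustration, a Lanczos2-windowed sinc kernel with its first zero crossing at $w_{obj_z}$ pixels can be generated along the following lines (a 1D sketch; the 2D kernel follows separably, and the truncation to the window support as well as the unit-sum normalization are our implementation choices, with an illustrative radius in the usage example):

```python
import numpy as np

def lanczos2_kernel(radius):
    """1D Lanczos2-windowed sinc with its first zero crossing at `radius` pixels."""
    if radius <= 1.0:
        return np.array([1.0])            # kernel narrower than a pixel
    support = int(np.ceil(2.0 * radius))  # the Lanczos2 window spans two lobes
    x = np.arange(-support, support + 1, dtype=float)
    k = np.sinc(x / radius) * np.sinc(x / (2.0 * radius))  # sinc times window
    k[np.abs(x) > 2.0 * radius] = 0.0     # truncate outside the window
    return k / k.sum()                    # normalize to unit gain at DC

# 2D separable kernel for one depth plane (radius 3.2 px is illustrative):
k1 = lanczos2_kernel(3.2)
k2 = np.outer(k1, k1)
```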

3. Generalized light field point spread function

In this section we propose a generalized forward light propagation model describing the optical system’s impulse response for arbitrary LFM configurations (i.e. the light field point spread function, LFPSF). Figure 3 depicts such plenoptic configurations, where the micro-lens array is placed at the native image plane (left), behind it (center), or in front of it (right). Experimentally acquired LFPSFs are shown on top for each setup.

In order to derive the diffraction pattern of a source point $\textbf {o}(o_x, o_y, o_z)$, when the light propagates through the LFM from the source to the camera sensor, we discuss the wavefront at intermediate key planes in the following subsections.

3.1 Wavefront at the MLA plane

Figure 4(a) depicts a defocused LFM configuration, where the NOP (native object plane) is imaged by the tube lens behind the MLA. However, all the derivations in this paper apply to arbitrary placements of the MLA relative to the tube lens, unless specified otherwise.

Fig. 4. (a) Example defocused LFM configuration with $d_{tl}^{mla} < f_{tl}$. (b) The “focused” object depth for our LFM in (a). FOP represents the focused object plane, which is imaged exactly at the MLA by the tube lens. FOP is then located at offset $\Delta _{NOP}$ from the NOP. Then an object focused on the MLA by our microscope is placed at ${dof}_{mla}=f_{obj}+\Delta _{NOP}$ in front of the objective. (c) $\Delta z_{FOP}$ represents the depth offset from the FOP for a source point, $\textbf {o}(o_x, o_y, o_z = \Delta z_{FOP} + {dof}_{mla})$. (d) Optimal sensor plane coverage condition: The micro-lens blur radius for a source point $\textbf {o}(o_x, o_y, o_z = {dof}_{mla})$ needs to match the micro-lens radius $r_{ml}$ in order to ensure optimal sensor plane coverage without overlapping micro-images. (e) Overlapping micro-images due to violation of the criterion in Eq. (18).


In order to evaluate the wavefront incident on the MLA produced by a source point in front of the microscope, we employ Abbe imaging theory for 4-f optical systems [33]. We proceed to find the “focused” configuration for our scenario. In Fig. 4(b), FOP represents the focused object plane, which is the depth in the object space that is imaged exactly at the MLA by the tube lens. This plane is located at offset $\Delta _{NOP}$ from the native object plane. If we introduce $\Delta _{MLA} = d_{tl}^{mla} - f_{tl}$ as the signed offset of the MLA plane from the native image plane, we can write:

$$\Delta_{NOP} = \frac{1}{M^2}\Delta_{MLA},$$
where $\frac {1}{M^2}$ is the axial magnification factor [33].

We now introduce the axial coordinate ${dof}_{mla}=f_{obj}+\Delta _{NOP}$ as the object space depth that is focused on the MLA by our 4-f microscope. Then a source point $\textbf {o}(o_x, o_y, o_z = {dof}_{mla})$ produces a convergent wavefront exactly at the MLA plane; see Fig. 4(b).
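In code, these two quantities reduce to a couple of lines; the sketch below (with an illustrative function name) assumes a 4-f telecentric setup, so that the axial magnification is $\frac{1}{M^2}$:

```python
def focused_object_plane(d_tl_mla, f_tl, f_obj, M):
    """Offset of the FOP from the NOP (Eq. (8)) and the depth dof_mla."""
    delta_mla = d_tl_mla - f_tl       # signed MLA offset from the NIP
    delta_nop = delta_mla / M**2      # Eq. (8), axial magnification 1/M^2
    return delta_nop, f_obj + delta_nop
```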

Having defined these quantities, we can express any source point $\textbf {o}(o_x, o_y, o_z)$ relative to the FOP of our LFM setup as $\textbf {o}(o_x, o_y, o_z = {dof}_{mla} + \Delta z_{FOP})$, see Fig. 4(c). $\Delta z_{FOP}$ is the depth offset from the FOP. Then we observe a defocused wavefront at the MLA plane:

$$\begin{aligned} U_{mla-}(\textbf{o}, x_{mla}, y_{mla}) &= \frac{deM}{{o_z}^{2} \lambda^{2}} \exp\left(-\frac{iu}{4 \sin^2(\alpha/2)}\right) \cdot \nonumber\\ & \int_0^\alpha P(\theta) \exp\left(- \frac{i u \sin^{2}(\theta /2)}{2 \sin^{2}( \alpha/2)}\right) J_0 \left(\frac{\sin(\theta)}{\sin(\alpha)} v\right) \sin(\theta) \ d\theta, \end{aligned}$$
which is the Debye integral for circular lens apertures. $deM = \frac {{dof}_{mla}}{d_{tl}^{mla}}$ is the demagnification factor, $\alpha \approx \arcsin (NA / n)$ is the maximum entrance angle of the objective aperture, $n$ is the refractive index, $\lambda$ is the wavelength of the monochromatic light we assume, $P(\theta )$ is the apodization function of the microscope, $J_0$ the zeroth order Bessel function of the first kind, and $v, u$ are the normalized radial and axial optical coordinates respectively, given by:
$$\begin{aligned}v &= \frac{2 \pi}{\lambda} \sqrt{(x_{mla}-o_x)^2 + (y_{mla}-o_y)^2} \cdot \sin(\alpha), \nonumber\\ u &= \frac{8 \pi}{\lambda} \Delta z_{FOP} \cdot \sin^2(\alpha/2), \end{aligned}$$
where $\Delta z_{FOP}$ represents the depth offset from the FOP for a source point, $\textbf {o}(o_x, o_y, o_z = \Delta z_{FOP} + {dof}_{mla})$. In order to stay consistent with the convention and for the clarity of the subsequent discussion, we will still refer to $\Delta z$ ($o_z = \Delta z + f_{obj}$) as the axial range of an object, via the following convenient substitution:
$$\Delta z_{FOP} = \Delta z + \Delta_{NOP}$$
An immediate observation follows: when $\Delta _{NOP} = 0$, then ${dof}_{mla} = f_{obj}$ and $d_{tl}^{mla} = f_{tl}$. This is the original LFM configuration, and Eq. (9) is equivalent to the defocused PSF at the native image plane proposed by Broxton et al. [22].

Similarly to us, in [2] the authors compute the wavefront at the MLA in a defocused LFM setup. They first model the PSF at the NIP as in [3,22] and then propagate the wavefront over a distance of $\Delta _{MLA}$ (the signed offset of the MLA plane from the native image plane) via the Fresnel diffraction integral. In theory this is equivalent to our approach. In practice, however, FFT-based Fresnel propagation is implemented via its transfer function, which is a chirp function, and special sampling regimes have to be considered. This is not only computationally wasteful; the sampling requirements depend on the propagation distance, and under- or over-sampling of the angular spectrum introduces artifacts in the observation plane [34].

3.2 MLA transmittance

Having computed the light field at the plane immediately before the MLA, we account now for the effect of the MLA. The field $U_{mla+}$ immediately after the MLA is given by:

$$U_{mla+}(\textbf{o}, x_{mla}, y_{mla}) = U_{mla-}(\textbf{o}, x_{mla}, y_{mla}) \cdot T(x_{mla}, y_{mla}),$$
where the MLA transmittance function $T$ is modeled by replicating the single lenslet transmittance in a tiled fashion as in [22]:
$$T = rep_{p_{ml},p_{ml}}(t(x_{l}, y_{l})),$$
with $rep_{d,d}$ the 2D replication operator and $p_{ml}$ the spacing between micro-lenses. $t(x_l, y_l)$ is the complex transmittance function of a lenslet with local lenslet coordinates, $(x_l,y_l)$:
$$t(x_{l}, y_{l}) = P(x_l, y_l) e^{\frac{ik(x_l^2 + y_l^2)}{2f_{ml}}}.$$
The exponential term is responsible for the phase change in the incident light, while $P(x,y)$ represents the pupil function, with $P(x,y) = circ_{p_{ml}}(x,y)$ for circular-aperture lenslets or $P(x,y) = rect_{p_{ml}}(x,y)$ for square-shaped lenslets. $k=\frac {2\pi }{\lambda }$ is the wavenumber.
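A minimal sketch of the lenslet transmittance on a sampled grid follows; the phase sign matches Eq. (14) as written (it depends on the chosen convention), and the example values loosely match the setup in Section 5, with the wavelength purely illustrative:

```python
import numpy as np

def lenslet_transmittance(p_ml, f_ml, wavelength, dx, circular=False):
    """Complex transmittance of a single lenslet (Eq. (14), sketch)."""
    n = int(round(p_ml / dx))                        # samples across one lenslet
    c = (np.arange(n) - (n - 1) / 2.0) * dx          # local coordinates (x_l, y_l)
    xl, yl = np.meshgrid(c, c, indexing="ij")
    k = 2.0 * np.pi / wavelength                     # wavenumber
    t = np.exp(1j * k * (xl**2 + yl**2) / (2.0 * f_ml))  # phase term of Eq. (14)
    if circular:                                     # circ pupil; rect otherwise
        t[xl**2 + yl**2 > (p_ml / 2.0) ** 2] = 0.0
    return t

# MLA transmittance T by 2D replication (Eq. (13)), e.g. a 5x5 array patch:
T = np.tile(lenslet_transmittance(150e-6, 3e-3, 520e-9, 6.5e-6), (5, 5))
```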

3.3 MLA to sensor light field propagation

We now address the propagation of the field from the MLA plane to the camera plane. Since we aim to model arbitrary distances between the MLA and the sensor, without restricting the $d_{mla}^{sens}$ distance to satisfy the Fresnel number (paraxial assumption) [22], we use the more accurate Rayleigh-Sommerfeld diffraction solution [35] to predict the light field at the sensor plane:

$$U_{sens}(\textbf{o},x_s,y_s) = \mathcal{F}^{{-}1}\Big\{ \mathcal{F}\{ U_{mla+}(\textbf{o},x_s,y_s)\} \cdot H_{rs}(f_X, f_Y) \Big\},$$
where $(x_s, y_s)$ are the image plane coordinates, $\mathcal {F}$ denotes the Fourier transform, and $(f_X,f_Y)$ are the spatial frequencies at the sensor plane. $H_{rs}$ is the Rayleigh-Sommerfeld transfer function:
$$H_{rs}(f_X, f_Y) = e^{\Big(i k \cdot d_{mla}^{sens} \sqrt{1-(\lambda f_X)^2- (\lambda f_Y)^2}\Big)}$$
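A compact FFT-based implementation of Eqs. (15) and (16) might look as follows; the suppression of evanescent components, where the square root turns imaginary, is our implementation choice:

```python
import numpy as np

def rs_propagate(u, dx, wavelength, d):
    """Rayleigh-Sommerfeld transfer-function propagation over distance d (sketch).

    u: sampled field U_{mla+}; dx: sampling interval; d: distance d_mla_sens.
    """
    ny, nx = u.shape
    fx = np.fft.fftfreq(nx, d=dx)                    # spatial frequencies f_X
    fy = np.fft.fftfreq(ny, d=dx)                    # spatial frequencies f_Y
    FX, FY = np.meshgrid(fx, fy)                     # grids of shape (ny, nx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    k = 2.0 * np.pi / wavelength
    H = np.exp(1j * k * d * np.sqrt(np.maximum(arg, 0.0)))  # Eq. (16)
    H[arg < 0.0] = 0.0                               # drop evanescent waves
    return np.fft.ifft2(np.fft.fft2(u) * H)          # Eq. (15)
```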

3.3.1 Sampling requirements of the Rayleigh-Sommerfeld transfer function

The transfer function in Eq. (16) is a “chirp” function; it contains a phase function whose absolute value increases with the square of the frequency. Sampling such a function in practice is problematic, since it is not bandlimited. In [34], the authors analyze the sampling requirements for FFT-based Rayleigh-Sommerfeld propagation simulation and propose the following ideal sampling criterion:

$$\Delta x = \frac{\lambda}{L} \sqrt{\left(d_{mla}^{sens}\right)^2 + \left(\frac{L}{2}\right)^2},$$
where $L$ is the side length of the source plane and $\Delta x$ is the sampling interval such that $f_X \in [-1/2\Delta x, 1/2\Delta x]$ and $\Delta f_X = 1/L.$

When this criterion is not met, and the transfer function in Eq. (16) is over- or under-sampled, the image might exhibit repeating patterns, aliasing and spike-like artifacts [34].

Since the source field $U_{mla+}$ needs to match the sampling rate of the kernel $H_{rs}$ when implementing FFT-based propagation, and since the sampling requirements in Eq. (17) vary with the $d_{mla}^{sens}$ distance, in practice we implement a resampling strategy to satisfy this criterion.
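The criterion of Eq. (17), and hence the grid to which $U_{mla+}$ must be resampled, can be evaluated as follows (a sketch; how the resampling itself is performed depends on the chosen interpolation scheme):

```python
import numpy as np

def ideal_sampling_interval(wavelength, L, d_mla_sens):
    """Ideal sampling interval for the RS transfer function (Eq. (17))."""
    return (wavelength / L) * np.sqrt(d_mla_sens**2 + (L / 2.0) ** 2)
```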

3.4 F-number matching condition for defocused LFM setups

In order to ensure the micro-images optimally fill the sensor plane without overlapping when acquiring light field images, the effective image-side NA of the tube lens needs to match the effective NA of the micro-lenses.

As depicted in Fig. 4(d), it is important to notice that a point source $\textbf {o}(o_x, o_y, o_z = {dof}_{mla})$ generates the largest response (blur) behind a micro-lens. Conversely, as we move away from ${dof}_{mla}$, the micro-lens blur radius $b_z$ decreases. Thus, we only need to constrain the maximum blur radius $b_{{dof}_{mla}}$ to match the micro-lens radius $r_{ml}$ in order to ensure optimal, non-overlapping sensor plane coverage. From Fig. 4(d) it quickly follows:

$$\frac{r_{tl}}{d_{tl}^{mla}} = \frac{r_{ml}}{d_{mla}^{sens}},$$
where $r_{tl}$ represents the effective tube lens radius, i.e. the radius of the field distribution incident on the tube lens from a source point at ${dof}_{mla}$ in front of the microscope. In practice, we compute $r_{tl}$ following the marginal rays:
$$r_{tl} = r_{obj} \left| 1 - \frac{d_{obj}^{tl}}{z'} \right|,$$
where $z'$ is obtained using the thin lens equation and $d_{obj}^{tl} = f_{obj} + f_{tl}$ for 4f microscopes.

An immediate observation follows: when $d_{tl}^{mla} = f_{tl}$ and $d_{mla}^{sens} = f_{ml}$, we have $r_{tl} = r_{obj}$ (the radius of the objective lens), and Eq. (18) is equivalent to $\frac {M}{2NA_{obj}} = \frac {f_{ml}}{p_{ml}}$, where $M$ is the objective magnification and $NA_{obj}$ the objective numerical aperture. This is the f-number matching condition for the original LFM [1,22].

While violations of Eq. (18) result in either suboptimal sensor plane coverage or overlapping micro-images (see Fig. 4(e)), the LFPSF we derived in the current section allows for arbitrary $d_{tl}^{mla}, d_{mla}^{sens}$ combinations and is consequently not limited to f-number matching configurations.
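In practice, Eqs. (18) and (19) fix the MLA-to-sensor distance once $d_{tl}^{mla}$ is chosen (we use this in Section 5.2). A minimal sketch, with illustrative function names:

```python
def effective_tube_radius(r_obj, f_obj, f_tl, z_prime):
    """Effective tube lens radius from the marginal rays (Eq. (19)).

    z_prime is the image distance z' obtained from the thin lens equation;
    d_obj_tl = f_obj + f_tl holds for 4f microscopes.
    """
    d_obj_tl = f_obj + f_tl
    return r_obj * abs(1.0 - d_obj_tl / z_prime)

def matched_sensor_distance(r_tl, d_tl_mla, r_ml):
    """d_mla_sens satisfying the f-number matching condition (Eq. (18))."""
    return r_ml * d_tl_mla / r_tl
```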

4. Aliasing-aware 3D deconvolution

Having discussed the non-aliasing sampling requirements of the LFM and derived the generalized LFPSF, we now turn our attention to incorporating this prior knowledge into the reconstruction process of computing a 3D volume from a light field image.

4.1 Discretized imaging model

Given the raw, noisy light field sensor measurements $\textbf {m} = {({m_j})_{j \in J}}$ acquired by pixels $j \in J$ ($\left | J \right | = m$), we seek to recover the fluorescence intensity at each discrete point in the volume which produced these measurements.

We represent the discretized volume $\textbf{v}$ by a coefficient vector ${(v_{i})_{i \in I}}$ with $|I| = n$. Note that the sampling rate in $\textbf{v}$ is dictated by the super-resolution factor $s$ defined in the previous section. We now denote the detection probabilities

$$a_{ji} = P(\textrm{photon counted at sensor element } j \mid \textrm{emission occurred in voxel } i).$$
Due to the low photon counts in fluorescence microscopy, we define the number of photons emitted at voxel $i$ and detected by sensor element $j$ as random variables $z_{ji}$ with $z_{ji} \sim \textrm {Poisson}(v_i a_{ji})$, which we combine into the iid random vector $\textbf {z} = (z_{ji})_{j \in J, i \in I}$.

Our measurements $\textbf {m} = (m_j)_{j\in J}$ arise from $z_{ji}$ as $m_{j}= \sum _{i\in I} z_{ji}$, yielding the stochastic imaging model

$$\textbf{m} \sim \textrm{Poisson}(A\textbf{v}),$$
where $\textbf {m}$ denotes the light field measurement, $\textbf {v}$ denotes the discretized volume we seek to reconstruct, and the operator $A = (a_{ji})_{j\in J, i\in I}$ describes the light field forward model, which is effectively determined by the discretized version of the LFPSF in Eq. (15). For each point in a fluorescent object the image intensity is given by the modulus squared of its amplitude [33]:
$$a_{ji} = |U_{sens}(\textbf{o}(i), \textbf{x}_s(j))|^2,$$
where $\textbf {o}(i)$ is the object space coordinate of voxel $i$ and $\textbf {x}_s(j)$ is the coordinate of sensor pixel $j$.
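As a minimal sketch of the stochastic model in Eq. (21), with a random dense stand-in for the LFPSF matrix $A$ and illustrative sizes (a real system matrix is large and sparse, and is applied via convolution in practice):

```python
import numpy as np

rng = np.random.default_rng(0)
n_vox, n_pix = 512, 400                   # |I| and |J|, illustrative sizes
A = rng.random((n_pix, n_vox)) * 1e-2     # stand-in detection probabilities a_ji
v = rng.random(n_vox) * 100.0             # stand-in fluorescence intensities
m_obs = rng.poisson(A @ v)                # measurement draw, Eq. (21)
```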

4.2 Estimate-maximize-smooth algorithm

We now consider the estimation of $\textbf {v}$ by maximizing the Poisson log-likelihood

$$L\left(\textbf{z} \ | \ \textbf{v} \right) = \sum_{j \in J} \sum_{i \in I} -v_i a_{ji} + z_{ji}\ln{(v_ia_{ji})} - \ln{(z_{ji}!)}.$$
If we look at $\textbf {z}$ as the complete version of the incomplete data $\textbf {m}$, the expectation maximization approach provides an iterative two-step scheme for increasing the likelihood of the current estimate $\textbf {v}$. In the first step, $\textbf {z}$ is estimated by computing the conditional expectation $E(z_{ji}|\textbf {m}, \textbf {v})$, and in the second step, the maximum likelihood estimate of $\textbf {v}$ is found, starting from an initial guess $\textbf {v}^0$:
$$\hat{z_{ji}} = m_j\frac{v^q_i a_{ji}}{\sum_{l\in I} v^q_l a_{jl}}$$
$$v^{q+1}_i = \frac{v^q_i}{\sum_{j\in J}a_{ji}} \sum_{j\in J}\frac{m_ja_{ji}}{\sum_{l\in I} v^q_l a_{jl}},$$
where $q$ is the iteration count.

This is the well-known MLEM algorithm, and Eq. (25) also corresponds to the popular Richardson-Lucy [36] iterative update, which in matrix-vector notation reads:

$$\textbf{v}^{q+1} = \frac{\textbf{v}^{q}}{A^T \mathbf{1}} \left[ A^T\frac{\textbf{m}}{A\textbf{v}^q}\right].$$
We now propose an additional straightforward step in which we filter the result of Eq. (24) and Eq. (25) using the depth-dependent anti-aliasing filters $h_{f_{w,z}}$ that we derived in the previous section. Then the aliasing aware update scheme reads:
$$\textbf{EMS:}\qquad\qquad \textbf{v}^{q+1} = h_{f_{w,z}}*\frac{\textbf{v}^{q}}{A^T \mathbf{1}} \left[ A^T\frac{\textbf{m}}{A\textbf{v}^q}\right],$$
where $*$ represents the convolution operator. This smoothing step is computationally insignificant compared to the estimation and maximization steps.
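One EMS iteration of Eq. (27) can be sketched as below; `forward`, `backward`, and `aa_smooth` are placeholders for applying $A$, $A^T$, and the per-depth convolution with $h_{f_{w,z}}$, respectively, and are not part of our released implementation:

```python
import numpy as np

def ems_iteration(v, m, forward, backward, aa_smooth, eps=1e-12):
    """Estimate-Maximize-Smooth update (Eq. (27), sketch).

    forward(v)  : applies A   (volume -> light field),
    backward(m) : applies A^T (light field -> volume),
    aa_smooth(v): filters each depth plane of v with its kernel h_{f_{w,z}}.
    """
    ratio = m / (forward(v) + eps)                   # m / (A v^q)
    v_new = v * backward(ratio) / (backward(np.ones_like(m)) + eps)
    return aa_smooth(v_new)                          # depth-dependent smoothing
```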

The reconstructed $\textbf {v}$ has a uniform lateral resolution across depths, as imposed by the depth-invariant discretization we chose; see the super-resolution factor $s$ in the previous section. However, the non-aliasing sampling requirements of the LFM vary across depth, and the actual details that can be recovered depend on these patterns, among other factors.

Moreover, as our model does not incorporate explicit depth priors, information from one depth appears aliased when wrongly projected to another depth. This behavior is present from the first iteration of the Richardson-Lucy scheme, resulting in strong artifacts at the highly under-sampled depths where the process fails to converge. Thus, the resampling correction (by depth-dependent filtering) we propose is absolutely necessary.

The filtering in Eq. (27) can be interpreted as projecting $\textbf {v}^{q+1}$ to the set of true solutions, which consists of frequencies below the bandwidth dictated by the LFM sampling requirements at each reconstructed depth.

4.3 Convergence of the proposed scheme

In order to show convergence of our proposed algorithm, we use the results of [37], in which a similar EMS algorithm with a smoothing kernel $S$ is investigated. The authors of [37] demonstrate a modified (weighted) EMS algorithm with desirable convergence properties using a weighted smoothing kernel $T = W^{-1}SW$, where $W = \textrm {diag}(w_i)$ and $w^{q+1}_i = s_i^{1/2}e_i^{1/2}\theta ^{q}_i$. According to the Lemma in Section 5.3 of [37], $S$ and $T$ will have approximately the same effect if $S$ and $W$ satisfy the three requirements:

  • 1. $S_{ji}\geq 0, \forall i,j$,
  • 2. $\sum S_{ji} = 1$,
  • 3. $|\frac {w_i}{w_j} - 1|\leq \delta$ when $S_{ji}\neq 0$ for some $\delta > 0$.
In our context, $s_i$ is the size of voxel $i$, $e_i = \sum _{j\in J}a_{ij}$ and $\theta ^{q}_i = {(\frac {v_i}{s_i})}^{1/2}$. For our smoothing kernel $h_{f_{w,z}}$, the first two requirements are trivially fulfilled. Regarding the third requirement, as we use a uniform sampling of our volume, we have $s_i = s_j \ \forall i,j$. Also, $\forall i, j$ where $h_{f_{w,z}}$ is non-zero, we have $\theta ^{q}_i \simeq \theta ^{q}_j$, as the effective sampling rate is higher than the kernel radius. The same argument holds for $e_i$ and $e_j$, since the columns $a_i$ and $a_j$ act on regions of $\textbf {v}$ at most as large as the effective sampling rate of the LFM. Thus we can conclude $w_i \simeq w_j$ when $S_{ji}\neq 0$, for small $\delta$.

5. Experimental results

All experiments in this work were performed with a custom-built LFM setup, configured as a 4-f system, combining an objective lens with $0.5$ NA and $20\times$ magnification, and a tube lens with focal length $f_{tl} = 165mm$. We used an f-number matching MLA with square-shaped apertures, $150 \mu m$ micro-lens pitch and $3000 \mu m$ focal length. The pixel pitch of the sCMOS camera is $6.5 \mu m$, yielding a total of $23\times 23$ pixels behind each micro-lens.

All the results we discuss in this section were reconstructed at sensor resolution, i.e. at a super-resolution factor $s = 23$, which translates to a uniform lateral sampling of $\frac{p_{ml}}{Ms} = \frac{150}{20 \cdot 23} \approx 0.33 \mu m$. Note that this refers to the sampling rate we chose for rendering the volumes and has nothing to do with the actual details that can be recovered, which is the effective resolution of the LFM. We refer the interested reader to existing discussions on the subject [1,31,38].

5.1 Artifact-free deconvolution

In order to show the full potential of our method and to be able to use the method [22] as a baseline for comparison, the experiments in this section were done with the LFM in the original plenoptic configuration, i.e. $d_{tl}^{mla} = f_{tl}$ and $d_{mla}^{sens} = f_{ml}$. Then the zero plane ($\Delta z = 0 \mu m$) has a conjugate image exactly at the MLA and is the most under-sampled, exhibiting the most artifacts.

As the first experiment, a USAF 1951 resolution target was imaged at $\Delta z = 0 \mu m$ using the LFM; see Fig. 1(c) (left) for the raw light field image. Figure 1(c) (center) shows the reconstruction using the baseline method from [22], which is riddled with the typical zero plane artifacts. Conversely, Fig. 1(c) (right) shows the reconstruction of the same light field image with our proposed aliasing-aware algorithm, exhibiting a natural appearance with no artifacts.

To further characterize the recoverable lateral resolution of our light field microscope configuration, we performed an analysis similar to [22]. We imaged and subsequently reconstructed the USAF target over a depth range of $[-80, 80]\mu m$, with $1 \mu m$ steps. We then computed the contrast measure:

$$C = \frac{I_{max} - I_{min}}{I_{max} + I_{min}}$$
for each region of interest on the USAF target, from element 6.1, representing a spatial frequency of $64\ [lp/mm]$, to element 7.6, with a spatial frequency of $228\ [lp/mm]$.
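For reference, the contrast of Eq. (28) over a region of interest is a one-liner:

```python
def contrast(roi):
    """Contrast measure of Eq. (28) for a USAF region of interest (array)."""
    return (roi.max() - roi.min()) / (roi.max() + roi.min())
```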

In Fig. 5, we show, for both the baseline method [22] in (a) and our proposed method in (b), the lateral MTF by plotting the contrast of different line pair regions in the USAF reconstruction, over the axial position, together with example single plane reconstructed images. In (a), due to the presence of artifacts, the drop in resolution near the zero plane is not discernible, as artifacts are perceived as high contrast features. Conversely, in (b), the contrast plots match the expected resolution profile.

Fig. 5. Lateral resolution limits: Lateral MTF for the USAF 1951 resolution target for the $[-80, 80]\mu m$ depth range, together with example single plane reconstructed images. (a) Baseline deconvolution [22]: Aliasing artifacts are present in the $\Delta z = [-25, 25]\mu m$ range, producing high contrast scores in this range, even though the resolution is low. Artifacts are visible in the shown reconstructed images at $\Delta z = -15, -5, 0 \mu m$. (b) Aliasing-aware deconvolution: The aliasing artifacts are no longer visible in the reconstructed images at $\Delta z = -15, -5, 0 \mu m$, and the MTF plot now matches the expected resolution profile. The resolution can be reliably measured also around the zero plane due to the smoothing step. Outside of the critical range, we observe very similar profiles for both methods.


While in the state-of-the-art deconvolution [22] the aliasing artifacts make it difficult to quantify the lateral recoverable resolution in the highly affected axial ranges, with our proposed EMS scheme the resolution can be reliably measured, due to the smoothing step. However, although visually improved in quality, in terms of actual details the effective resolution is ultimately limited by the sampling patterns and the PSF of the system. We encourage the reader to zoom into the example reconstructed USAF images in Fig. 5(a) and 5(b) for comparison.

As discussed in Section 3, when moving away from $\Delta z = 0 \mu m$, we need milder anti-aliasing filters to remove the artifacts while keeping the details in the reconstruction. This effect is illustrated in Fig. 6, which shows the reconstruction of an eye of a zebrafish larva (5 days post fertilization, expressing green fluorescent proteins) over a depth range of $[-50, 50]\mu m$; due to space constraints, we only show several lateral slices through the volume.

Fig. 6. 3D reconstruction of a zebrafish eye over an axial range $\Delta z = [-50, 50]\mu m$. (a), (b): maximum intensity projections. (c), (d): lateral slices through the volume. The reconstruction with the baseline method in [22] shows strong specific aliasing artifacts (red arrows) at depth planes close to the zero plane, while they fade out as we move further away from this plane. In comparison, our aliasing-aware deconvolution scheme completely removes all the artifacts.


Figures 6(a) and 6(c) show the reconstruction with the baseline method in [22]; the artifacts are strongly present at depths close to the zero plane, while they fade out as we move further away from this plane, see for example the slice at $\Delta z = 25 \mu m$. In Figs. 6(b) and 6(d) we show the reconstruction of the same light field data using our proposed method. The depth-dependent filter radius is shown in Fig. 2(c); note here how the kernel radius drops as the artifacts fade away. Our deconvolution produces superior artifact-free results compared to the reference method without over-smoothing, as the depth-dependent filter is dictated by the sampling requirements of the LFM. Both reconstructions (Fig. 6) were obtained after 8 iterations of the corresponding update schemes.

Figure 7 shows a reconstructed cardiomyocyte organoid labelled with the calcium dye Fluo4-AM. Such organoids are an emerging platform for clinical trials, enabling high-throughput studies (both logistically and ethically, since they are not organisms), along with possible direct clinical applicability, since they are derived from human stem cells. Contrary to traditional cell cultures, they extend across all three dimensions and exhibit interesting temporal dynamics such as cell signaling and, specifically for heart organoids, movement.

Fig. 7. 3D reconstruction of a cardiomyocyte organoid over an axial range $\Delta z = [0, 50]\mu m$. (a), (b): maximum intensity projections. (c), (d): lateral slices through the volume. The reconstruction with the baseline method in [22] shows strong specific aliasing artifacts (red arrows) at depth planes close to the zero plane, while as we move away from this plane, the artifacts are less visible. Our aliasing-aware deconvolution method shows superior artifact-free results.


For the reference method [22], reconstruction artifacts are again strongly present (Figs. 7(a) and 7(c)) around $\Delta z = 0 \mu m$, while gradually fading out further away; see the slice at $\Delta z = 50 \mu m$. In the corrupted regions, subsequent data analysis is not only troublesome, but rather unreliable. In comparison, our proposed method specifically treats these artifacts via our depth-dependent resampling strategy (Figs. 7(b) and 7(d)). Again, 8 iterations were performed for both methods.

5.2 Defocused LFM design

In this section we evaluate the defocused LFM setup. For this purpose, we place the micro-lens array at a distance $d_{tl}^{mla} \neq f_{tl}$ from the tube lens, while keeping the tele-centricity as before (see Figs. 3(b) and 3(c)). The $d_{mla}^{sens}$ distance will then follow from Eq. (18) to ensure the micro-images optimally cover the sensor plane.

Figure 8 shows the reconstruction of a zebrafish eye (a different sample from the one in Fig. 6), over an axial range $\Delta z = [-40, 40]\mu m$, from LF images acquired when $d_{tl}^{mla} > f_{tl}$ and $d_{tl}^{mla} < f_{tl}$. In order to perform deconvolution on these images, we used the LFPSF derived in Section 3 to describe the forward imaging model. In Fig. 8(b) we show the reconstruction using the classic Richardson-Lucy scheme, as in Eq. (26). While in the original LFM configuration the reconstructed samples show prominent aliasing artifacts around $\Delta z = 0 \mu m$ due to the coarse sampling in this region, here, in the defocused LFM design, we observe the artifacts pushed to the edges of our axial range. This effect is due to the displacement of the MLA from the native image plane, $\Delta _{MLA}$, which effectively translates into a proportional axial shift of the depth-dependent lateral sampling rates in the object space by $\Delta _{NOP}$; see Eq. (8). This means an axial range $\Delta z$ in front of a defocused LFM setup is sampled in the same way the $\Delta z_{FOP}$ range would be sampled by the original LFM setup with the same settings.

Fig. 8. Defocused LFM: 3D reconstruction of a zebrafish eye over an axial range $\Delta z = [-40, 40]\mu m$. (a) LF images acquired when $d_{tl}^{mla} > f_{tl}$ and $d_{tl}^{mla} < f_{tl}$. (b) The reconstruction using the Richardson-Lucy scheme shows artifacts around the FOP plane of each setup. The defocused LFM is effectively an axially shifted (by $\Delta _{NOP}$; see Table 2) version of the original LFM; the zero plane behavior now appears at the FOP plane. (c) The artifact-free reconstruction using our aliasing-aware deconvolution method. (d) Lateral slices through the reconstructed volume. The two defocused LFM configurations demonstrate better resolved features at complementary axial ranges (marked with smileys).


Table 2. Data set acquisition parameters of our experimental LFM setup together with the corresponding reconstructed axial ranges. Light blue: original LFM design. Light red: defocused LFM.

Table 2 contains the system parameters for all the data sets we discuss in this paper, together with the relevant reconstruction settings. The Fish eye(>) and Fish eye(<) entries correspond to the $d_{tl}^{mla}>f_{tl}$ and $d_{tl}^{mla}<f_{tl}$ configurations in Fig. 8, respectively. Reconstructing the $[-40, 40]\mu m$ axial range in both situations is equivalent, in terms of recoverable resolution, to reconstructing the $[-80, 0]\mu m$ and $[5, 85]\mu m$ ranges in the original LFM setup (see the $\Delta z_{FOP}$ column); the zero plane is effectively shifted by $\Delta _{NOP}$. This explains the strongest artifacts in Fig. 8(b) being at the rightmost end of the axial range ($\Delta z_{FOP}=[-80, 0]\mu m$) when $d_{tl}^{mla}>f_{tl}$, and at the leftmost end of the axial range ($\Delta z_{FOP}=[5, 85]\mu m$) for the $d_{tl}^{mla}<f_{tl}$ case. In Fig. 8(c) we show the artifact-free deconvolution obtained using our Estimate-Maximize-Smooth scheme, and in Fig. 8(d) we illustrate z-slices of the reconstructed volumes every $5 \mu m$. We perceive better spatially resolved features in the range $[0, 40]\mu m$ for the $d_{tl}^{mla}>f_{tl}$ setup, while in the $d_{tl}^{mla}<f_{tl}$ case this range appears blurred and the $[-40, 0]\mu m$ range is resolved better.

In Fig. 9, we imaged $1 \mu m$ fluorescent beads in agarose with the LFM in the three configurations depicted in Fig. 3. Figure 9(a) (top) shows the acquired LF image for the original plenoptic design ($d_{tl}^{mla} = f_{tl}$), together with the 3D reconstruction using our proposed method in Fig. 9(b) (top). Figure 9(a) (middle) and (bottom) illustrate the acquired LF images for the defocused design with $d_{tl}^{mla} > f_{tl}$ and $d_{tl}^{mla} < f_{tl}$, respectively, alongside the 3D reconstructions in Fig. 9(b) (middle) and (bottom). The red rectangle highlights two micro-spheres at the zero plane of the LFM ($\Delta z = 0$). While in the original LFM setup (top) they are only reconstructed at the lenslet resolution, in Fig. 9 (middle) and (bottom) they appear better resolved. On the other hand, the dashed arrow, for example, points to a sphere placed at the FOP plane in the defocused $d_{tl}^{mla} > f_{tl}$ (middle) case. While here it can only be recovered at lenslet resolution, in Fig. 9 (top) and (bottom) we observe it at higher resolution. An analogous discussion applies to the other beads. Just as we have seen for the fish eye in Fig. 8, while one LFM configuration performs well at spatially resolving certain depths in the axial range, it does so at the cost of other depths, which is also the case when imaging away from the zero plane with the original LFM. In order to extend the resolvable range in the 3D reconstructions, such LFM configurations can be complementary, which supports and motivates the work towards multi-focus [39,40] or dual-camera [5,41] plenoptic setups.

Fig. 9. Defocused LFM: 3D reconstruction of fluorescent beads in agarose over an axial range of $\Delta z = [-45, 45]\mu m$. (a) Acquired LF images. (b) The reconstructed volumes using our method for the original plenoptic design with $d_{tl}^{mla} = f_{tl}$ (top) and the defocused LFM with $d_{tl}^{mla} > f_{tl}$ (middle) and $d_{tl}^{mla} < f_{tl}$ (bottom). The red and blue highlights suggest how different features at different depths are better resolved in one configuration than in the others. No plenoptic design is generally better or worse.


6. Discussion

While we showed good results using a Lanczos2 windowed ideal filter (with depth-dependent cut-off frequencies) as the anti-aliasing resampling strategy, more advanced sampling-specific interpolation schemes should be considered to cope with the highly irregular sampling patterns of the LFM across depth, i.e. depth-dependent filter shapes in addition to the proposed depth-dependent filter sizes.

Although we improve the visual appearance of the 3D reconstructed objects by addressing the sampling artifacts, the effective lateral resolution is still limited by the depth-dependent sampling rate. We showed that defocused LFM designs better resolve the range near the native object plane, while sacrificing resolution at other depth ranges by axially shifting the sampling patterns; similar to imaging away from the zero plane with the original LFM design. In this regard, the LFM in any of its configurations introduces depth-dependent trade-offs in terms of resolution by design.

In order to improve on these limitations, hybrid systems have been proposed, combining a light field microscope with a wide-field acquisition [42], or splitting the optical path and image with complementary focused LF systems [5,41]. Such dual-/multi-camera setups aim at achieving highly uniform sampling and subsequently isotropic resolution.

Alternatively, employing an array of lenslets with mixed focal lengths introduces more irregularity in the sampling patterns and allows an increased depth of field [39,40]. The authors of [13] also demonstrate higher lateral resolutions over an extended depth of field by introducing phase masks into the optical path of the LFM, thus creating highly variant PSFs for different depths. Interestingly, in [29], the authors discuss denser and more uniform sampling patterns in light field photography by incorporating lens aberrations and misalignment into the imaging model.

In order to improve the effective resolution, all of these approaches target, in one way or another, the depth-dependent nature of light field sampling. We believe, however, that a setup-specific anti-aliasing resampling strategy is still necessary in deconvolution schemes for such approaches.

Finally, now that we can eliminate the troublesome artifacts with our proposed method, it becomes feasible to improve the deconvolution method by including other image priors, such as edge-enhancing regularization; previously, such regularization only enhanced the artifacts.

7. Conclusion

In this work we address one of the current challenges in 3D reconstruction of light field microscopy data, the aliasing artifacts. We perform an analysis of the aliasing-free sampling requirements of the LFM to derive depth-dependent anti-aliasing filters. We also derive a generalized wave-based LFPSF to propose a novel aliasing-aware deconvolution scheme that applies to arbitrary LFM designs. We compare the capabilities of the original and defocused LFM designs in terms of recoverable lateral resolution at various axial ranges and demonstrate the superior quality reconstruction performance of our method using experimental data from phantoms and in-vivo biological samples.

8. Implementation and datasets

The example datasets shown in Section 5, together with the implementation of the methods described in this paper, are available as part of oLaF, our 3D reconstruction framework for light field microscopy: https://gitlab.lrz.de/IP/olaf.

Funding

Deutsche Forschungsgemeinschaft (MAP, LA 3264/2-1).

Acknowledgments

We would like to gratefully acknowledge Tanja Orschmann and Sabrina Desbordes for providing us with the biological samples.

References

1. M. Levoy, R. Ng, A. Adams, M. Footer, and M. Horowitz, “Light field microscopy,” ACM Trans. Graph. 25(3), 924 (2006). [CrossRef]  

2. H. Li, C. Guo, D. Kim-Holzapfel, W. Li, Y. Altshuller, B. Schroeder, W. Liu, Y. Meng, J. B. French, K.-I. Takamaru, M. A. Frohman, and S. Jia, “Fast, volumetric live-cell imaging using high-resolution light-field microscopy,” Biomed. Opt. Express 10(1), 29–49 (2019). [CrossRef]  

3. R. Prevedel, Y. G. Yoon, M. Hoffmann, N. Pak, G. Wetzstein, S. Kato, T. Schrödel, R. Raskar, M. Zimmer, E. S. Boyden, and A. Vaziri, “Simultaneous whole-animal 3D imaging of neuronal activity using light-field microscopy,” Nat. Methods 11(7), 727–730 (2014). [CrossRef]  

4. L. Cong, Z. Wang, Y. Chai, W. Hang, C. Shang, W. Yang, L. Bai, J. Du, K. Wang, and Q. Wen, “Rapid whole brain imaging of neural activity in freely behaving larval zebrafish (Danio rerio),” eLife 6, e28158 (2017). [CrossRef]  

5. N. Wagner, N. Norlin, J. Gierten, G. de Medeiros, B. Balázs, J. Wittbrodt, L. Hufnagel, and R. Prevedel, “Instantaneous isotropic volumetric imaging of fast biological processes,” Nat. Methods 16(6), 497–500 (2019). [CrossRef]  

6. G. Lippmann, “Épreuves réversibles donnant la sensation du relief,” J. Phys. Theor. Appl. 7(1), 821–825 (1908). [CrossRef]  

7. E. H. Adelson and J. Y. A. Wang, “Single Lens Stereo with a Plenoptic Camera,” IEEE Trans. Pattern Anal. Mach. Intell. 14(2), 99–106 (1992). [CrossRef]  

8. R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, and P. Hanrahan, “Light Field Photography with a Hand-Held Plenoptic Camera,” Stanford Tech. Rep. CTSR 2005-02 (2005).

9. R. Ng, “Fourier slice photography,” in ACM Transactions on Graphics (Proc. SIGGRAPH), vol. 24 (2005), pp. 735–744.

10. D. G. Dansereau, O. Pizarro, and S. B. Williams, “Decoding, Calibration and Rectification for Lenselet-Based Plenoptic Cameras,” in 2013 IEEE Conference on Computer Vision and Pattern Recognition, (2013).

11. M. Levoy, Z. Zhang, and I. McDowall, “Recording and controlling the 4D light field in a microscope using microlens arrays,” J. Microsc. 235(2), 144–162 (2009). [CrossRef]  

12. A. Veeraraghavan, R. Raskar, A. Agrawal, A. Mohan, and J. Tumblin, “Dappled Photography: Mask Enhanced Cameras for Heterodyned Light Fields and Coded Aperture Refocusing,” ACM Trans. Graph. 26(3), 69 (2007). [CrossRef]  

13. N. Cohen, S. Yang, A. Andalman, M. Broxton, L. Grosenick, K. Deisseroth, M. Horowitz, and M. Levoy, “Enhancing the performance of the light field microscope using wavefront coding,” Opt. Express 22(20), 24817–24839 (2014). [CrossRef]  

14. S. Farsiu, D. Robinson, M. Elad, and P. Milanfar, “Advances and challenges in super-resolution,” Int. J. Imaging Syst. Technol. 14(2), 47–57 (2004). [CrossRef]  

15. S. C. Park, M. K. Park, and M. G. Kang, “Super-resolution image reconstruction: a technical overview,” IEEE Signal Process. Mag. 20(3), 21–36 (2003). [CrossRef]  

16. M. Rossi and P. Frossard, “Geometry-Consistent Light Field Super-Resolution via Graph-Based Regularization,” IEEE Trans. Image Process. 27(9), 4207–4218 (2018). [CrossRef]  

17. S. Wanner and B. Goldluecke, “Variational light field analysis for disparity estimation and super-resolution,” IEEE Trans. Pattern Anal. Mach. Intell. 36(3), 606–619 (2014). [CrossRef]  

18. T. E. Bishop, S. Zanetti, and P. Favaro, “Light field superresolution,” in 2009 IEEE International Conference on Computational Photography, ICCP 09, (2009).

19. T. E. Bishop and P. Favaro, “The light field camera: Extended depth of field, aliasing, and superresolution,” IEEE Trans. Pattern Anal. Mach. Intell. 34(5), 972–986 (2012). [CrossRef]  

20. C.-K. Liang and R. Ramamoorthi, “A Light Transport Framework for Lenslet Light Field Cameras,” ACM Trans. Graph. 34(2), 1–19 (2015). [CrossRef]  

21. S. A. Shroff and K. Berkner, “Image formation analysis and high resolution image reconstruction for plenoptic imaging systems,” Appl. Opt. 52(10), D22–D31 (2013). [CrossRef]  

22. M. Broxton, L. Grosenick, S. Yang, N. Cohen, A. Andalman, K. Deisseroth, and M. Levoy, “Wave optics theory and 3-D deconvolution for the light field microscope,” Opt. Express 21(21), 25418–25439 (2013). [CrossRef]  

23. P. Fei, Z. Wang, H. Zhang, Y. Yang, Y. Li, and S. Gao, “Deep learning light field microscopy for rapid four-dimensional imaging of behaving animals,” bioRxiv, 432807 (2018). [CrossRef]  

24. M. Levoy and P. Hanrahan, “Light Field Rendering,” in Proc. ACM Siggraph, (1996) pp. 31–42.

25. J.-X. Chai, S.-C. Chan, H.-Y. Shum, and X. Tong, “Plenoptic sampling,” in Proc. ACM Siggraph, (2000), 307–318.

26. J. Stewart, J. Yu, S. Gortler, and L. McMillan, “A new reconstruction filter for undersampled light fields,” in Proc. 14th Eurographics Workshop on Rendering, (2003), pp. 150–156.

27. Z. Xiao, Q. Wang, G. Zhou, and J. Yu, “Aliasing detection and reduction in plenoptic imaging,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, (2014), pp. 3326–3333.

28. T. Georgiev and A. Lumsdaine, “Depth of Field in Plenoptic Cameras,” Eurographics 2009, 5–8 (2009).

29. L.-Y. Wei, C.-K. Liang, G. Myhre, C. Pitts, and K. Akeley, “Improving light field camera sample design with irregularity and aberration,” ACM Trans. Graph. 34(4), 1–11 (2015). [CrossRef]  

30. A. Lumsdaine and T. Georgiev, “The focused plenoptic camera,” in 2009 IEEE International Conference on Computational Photography, ICCP 09, (IEEE, 2009), pp. 1–8.

31. C. Perwass and L. Wietzke, “Single lens 3D-camera with extended depth-of-field,” Proc. SPIE 8291, 829108 (2012). [CrossRef]  

32. G. Wolberg, “Sampling, Reconstruction, and Antialiasing,” in Digital Image Warping, (CRC, 2004), pp. 1–32.

33. M. Gu, Advanced Optical Imaging Theory, vol. 75 (Springer, 1999).

34. D. G. Voelz and M. C. Roggemann, “Digital simulation of scalar optical diffraction: revisiting chirp function sampling criteria and consequences,” Appl. Opt. 48(32), 6132 (2009). [CrossRef]  

35. D. G. Voelz, Computational Fourier Optics: A MATLAB® Tutorial (SPIE, 2011).

36. W. H. Richardson, “Bayesian-Based Iterative Method of Image Restoration,” J. Opt. Soc. Am. 62(1), 55 (1972). [CrossRef]  

37. B. W. Silverman, M. C. Jones, J. D. Wilson, and D. W. Nychka, “A Smoothed Em Approach to Indirect Estimation Problems, with Particular Reference to Stereology and Emission Tomography,” J. Royal Stat. Soc. Ser. B (Methodological) 52, 271–303 (1990). [CrossRef]  

38. T. Georgiev, K. C. Zheng, B. Curless, D. Salesin, S. Nayar, and C. Intwala, “Spatio-angular resolution tradeoffs in integral photography,” in Eurographics Symposium on Rendering (EGSR), (2006), pp. 263–272.

39. T. Georgiev and A. Lumsdaine, “The multifocus plenoptic camera,” Proc. SPIE 8299, 829908 (2012). [CrossRef]  

40. A. Levin, S. W. Hasinoff, P. Green, F. Durand, and W. T. Freeman, “4D frequency analysis of computational cameras for depth of field extension,” ACM Trans. Graph. 28(3), 1–97 (2009). [CrossRef]  

41. P. Favaro, “A split-sensor light field camera for extended depth of field and superresolution,” Proc. SPIE 8436, 843602 (2012). [CrossRef]  

42. C.-H. Lu, S. Muenzel, and J. Fleischer, “High-Resolution Light-Field Microscopy,” in Imaging and Applied Optics (Optical Society of America, 2013), p. CTh3B.2.



Figures (9)

Fig. 1. (a) Ray diagram: light propagation through the light field microscope. $f_{obj}$ denotes the objective focal length, $\Delta z$ represents the offset from the native object plane (NOP). A source point $\textbf {o} (o_{x},o_{y},o_{z} = z)$ in front of the microscope objective has a conjugate image by the tube lens at $z''$. Finally, the micro-lenses create micro-images at $z^{\prime\prime\prime}$, and the light reaches the camera sensor, producing a raw light field image. (b) Depth-dependent aliasing in LFM: The source points in the red group at depth $z_0$ in front of the microscope have completely overlapping images at the sensor plane. The points in the blue group, while sampled at the same rate as the points in the red group, show partially non-overlapping images on the sensor, as they are placed at depth $z_1$. The points in the green group, on the other hand, are also placed at $z_1$; however, they are sampled at a higher rate and their images are fully non-overlapping. (c) Reconstruction of the USAF 1951 target: Left: a light field image of the USAF 1951 resolution target, acquired with our experimental LFM. Center: reconstructed target using the method in [22]; characteristic aliasing artifacts are present. Right: artifact-free reconstruction using our aliasing-aware deconvolution method.
Fig. 2. (a) Micro-lens magnification: The image of an object under a micro-lens scales according to the object depth. (b) Micro-lens blur: The geometric blur radius behind a micro-lens is again depth-dependent. (c) Anti-aliasing filter radius over depth: In the original LFM design [1,22] (see Fig. 3(a)), the NOP is sampled at the coarsest rate by the LFM which implies our resampling anti-aliasing filter has the largest radius at this object depth. As we move away from the zero plane, the LFM sampling rate increases and the anti-aliasing requirements become milder.
Fig. 3. LFM configurations and their light propagation paths. (a) The original design as described in [1,22]. The objective and tube lens are arranged as a 4-f (tele-centric) system. The native object plane (NOP) is then located at $f_{obj}$ in front of the microscope objective, and the native image plane (NIP) follows at $f_{tl}$ behind the tube lens; $f_{tl}$ represents the focal length of the tube lens. The MLA is then placed at the NIP. The camera is behind the MLA at an offset $d_{mla}^{sens} = f_{ml}$, where $f_{ml}$ represents the focal length of the micro-lens. (b) and (c) Defocused LFM (similar to the focused plenoptic camera [30] design). The MLA is now placed behind the NIP (b) or in front of it (c), such that the NOP is not focused on the MLA. In the latter scenario, the virtual image that would form at the NIP is depicted in dashed orange. Top: experimentally acquired LFPSF of a point source at the NOP, $\textbf {o}(o_x, o_y, o_z) = (0,0,f_{obj})$ for each setup.
Fig. 4. (a) Example defocused LFM configuration with $d_{tl}^{mla} < f_{tl}$. (b) The “focused” object depth for our LFM in (a). FOP denotes the focused object plane, which is imaged exactly onto the MLA by the tube lens. The FOP is located at an offset $\Delta _{NOP}$ from the NOP; an object focused onto the MLA by our microscope is thus placed at ${dof}_{mla}=f_{obj}+\Delta _{NOP}$ in front of the objective. (c) $\Delta z_{FOP}$ represents the depth offset from the FOP for a source point, $\textbf {o}(o_x, o_y, o_z = \Delta z_{FOP} + {dof}_{mla})$. (d) Optimal sensor plane coverage condition: The micro-lens blur radius for a source point $\textbf {o}(o_x, o_y, o_z = {dof}_{mla})$ needs to match the micro-lens radius $r_{ml}$ in order to ensure optimal sensor plane coverage without overlapping micro-images. (e) Overlapping micro-images due to violation of the criterion in Eq. (18).
Fig. 5. Lateral resolution limits: Lateral MTF for the USAF 1951 resolution target over the $[-80, 80]\mu m$ depth range, together with example single-plane reconstructed images. (a) Baseline deconvolution [22]: Aliasing artifacts are present in the $\Delta z = [-25, 25]\mu m$ range, producing high contrast scores even though the resolution is low. Artifacts are visible in the shown reconstructed images at $\Delta z = -15, -5, 0 \mu m$. (b) Aliasing-aware deconvolution: The aliasing artifacts are no longer visible in the reconstructed images at $\Delta z = -15, -5, 0 \mu m$, and the MTF plot now matches the expected resolution profile. Due to the smoothing step, the resolution can also be reliably measured around the zero plane. Outside of the critical range, we observe very similar profiles for both methods.
Fig. 6. 3D reconstruction of a zebrafish eye over an axial range $\Delta z = [-50, 50]\mu m$. (a), (b): maximum intensity projections. (c), (d): lateral slices through the volume. The reconstruction with the baseline method in [22] shows strong, characteristic aliasing artifacts (red arrows) at depth planes close to the zero plane, which fade out as we move further away from this plane. In comparison, our aliasing-aware deconvolution scheme removes the artifacts completely.
Fig. 7. 3D reconstruction of a cardiomyocyte organoid over an axial range $\Delta z = [0, 50]\mu m$. (a), (b): maximum intensity projections. (c), (d): lateral slices through the volume. The reconstruction with the baseline method in [22] shows strong, characteristic aliasing artifacts (red arrows) at depth planes close to the zero plane, which become less visible as we move away from this plane. Our aliasing-aware deconvolution method yields superior, artifact-free results.
Fig. 8. Defocused LFM: 3D reconstruction of a zebrafish eye over an axial range $\Delta z = [-40, 40]\mu m$. (a) LF images acquired when $d_{tl}^{mla} > f_{tl}$ and $d_{tl}^{mla} < f_{tl}$. (b) The reconstruction using the Richardson-Lucy scheme shows artifacts around the FOP of each setup. The defocused LFM is effectively an axially shifted (by $\Delta _{NOP}$; see Table 2) version of the original LFM; the zero-plane behavior now appears at the FOP. (c) The artifact-free reconstruction using our aliasing-aware deconvolution method. (d) Lateral slices through the reconstructed volume. The two defocused LFM configurations resolve features better at complementary axial ranges (marked with smileys).
Fig. 9. Defocused LFM: 3D reconstruction of fluorescent beads in agarose over an axial range of $\Delta z = [-45, 45]\mu m$. (a) Acquired LF images. (b) The reconstructed volumes using our method for the original plenoptic design with $d_{tl}^{mla} = f_{tl}$ (top) and the defocused LFM with $d_{tl}^{mla} > f_{tl}$ (middle) and $d_{tl}^{mla} < f_{tl}$ (bottom). The red and blue highlights show how different features at different depths are better resolved in one configuration than in the others; no plenoptic design is better or worse overall.

Tables (2)


Table 1. List of symbols


Table 2. Data set acquisition parameters of our experimental LFM setup together with the corresponding reconstructed axial ranges. Light blue: original LFM design. Light red: defocused LFM.

Equations (28)


$$\gamma_z = \frac{d_{mla}^{sens}}{d_{tl}^{mla}} \left| \frac{z - d_{tl}^{mla}}{z} \right|. \tag{1}$$
$$B_z = r_{tl} \left| \frac{d_{tl}^{mla} - z}{z} \right|. \tag{2}$$
$$\gamma_z = \frac{d_{mla}^{sens}}{d_{tl}^{mla}\, r_{tl}}\, B_z. \tag{3}$$
$$b_z = r_{ml} \left| \frac{1}{z} - \frac{1}{d_{mla}^{sens}} \right|, \tag{4}$$
$$w_{sens}^{z} = \left| \gamma_z\, p_{ml} - b_z \right|. \tag{5}$$
$$w_{sens}^{z} = \min\left( \left| \gamma_z\, p_{ml} - b_z \right|,\ r_{ml} \right). \tag{6}$$
$$w_{obj}^{z} = \frac{w_{sens}^{z}\, s}{p_{ml}}. \tag{7}$$
$$\Delta_{NOP} = \frac{1}{M^2}\, \Delta_{MLA}, \tag{8}$$
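
Eqs. (1)–(3) are simple geometric relations and straightforward to evaluate. Below is a minimal NumPy sketch that transcribes them and numerically verifies that Eq. (3) is Eq. (1) rewritten through the tube-lens blur radius of Eq. (2); all distances and radii are hypothetical placeholders, not the parameters of our experimental setup.

import numpy as np

# Hypothetical image-side geometry (meters); placeholder values only.
d_tl_mla = 200e-3     # tube lens -> MLA distance
d_mla_sens = 2.5e-3   # MLA -> sensor distance
r_tl = 25e-3          # tube lens aperture radius

def gamma(z):
    """Micro-image scaling for an intermediate image at depth z, Eq. (1)."""
    return (d_mla_sens / d_tl_mla) * np.abs((z - d_tl_mla) / z)

def B(z):
    """Tube-lens blur radius at the MLA plane, Eq. (2)."""
    return r_tl * np.abs((d_tl_mla - z) / z)

# Consistency check of Eq. (3): gamma_z expressed through B_z.
z = np.linspace(180e-3, 220e-3, 401)   # depths bracketing the MLA plane
assert np.allclose(gamma(z), d_mla_sens / (d_tl_mla * r_tl) * B(z))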
$$U_{mla}(\textbf{o}, x_{mla}, y_{mla}) = \frac{M}{o_z^2 \lambda^2} \exp\left( -\frac{iu}{4\sin^2(\alpha/2)} \right) \int_0^{\alpha} P(\theta) \exp\left( -\frac{iu \sin^2(\theta/2)}{2\sin^2(\alpha/2)} \right) J_0\left( \frac{\sin(\theta)}{\sin(\alpha)}\, v \right) \sin(\theta)\; d\theta, \tag{9}$$
$$v = \frac{2\pi}{\lambda} \sqrt{(x_{mla} - o_x)^2 + (y_{mla} - o_y)^2}\, \sin(\alpha), \qquad u = \frac{8\pi}{\lambda}\, \Delta z_{FOP}\, \sin^2(\alpha/2), \tag{10}$$
$$\Delta z_{FOP} = \Delta z + \Delta_{NOP}. \tag{11}$$
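
Eq. (9) is a Debye-type diffraction integral and can be evaluated by direct quadrature. The following Python sketch assumes a uniform apodization $P(\theta) = 1$; the wavelength, NA, magnification, and $o_z$ values are illustrative placeholders.

import numpy as np
from scipy.special import j0

lam = 520e-9                   # emission wavelength (placeholder)
NA, n_med = 0.8, 1.33          # numerical aperture, immersion index (placeholders)
alpha = np.arcsin(NA / n_med)  # half-aperture angle
M = 20.0                       # objective magnification (placeholder)
o_z = 9e-3                     # axial source coordinate in the prefactor (placeholder)

def U_mla(r_mla, dz_fop, n_quad=2000):
    """Quadrature of Eq. (9) on the MLA plane; v, u as in Eq. (10)."""
    r = np.atleast_1d(r_mla)   # radial distance from (o_x, o_y)
    theta = np.linspace(0.0, alpha, n_quad)
    v = (2 * np.pi / lam) * r * np.sin(alpha)
    u = (8 * np.pi / lam) * dz_fop * np.sin(alpha / 2)**2
    integrand = (np.exp(-1j * u * np.sin(theta / 2)**2 / (2 * np.sin(alpha / 2)**2))
                 * j0(np.sin(theta) / np.sin(alpha) * v[:, None])
                 * np.sin(theta))
    prefactor = (M / (o_z**2 * lam**2)) * np.exp(-1j * u / (4 * np.sin(alpha / 2)**2))
    return prefactor * np.trapz(integrand, theta, axis=-1)

# Radial LFPSF intensity profile 5 um away from the FOP (dz_fop via Eq. (11)):
r = np.linspace(0.0, 50e-6, 256)
psf = np.abs(U_mla(r, dz_fop=5e-6))**2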
$$U_{mla}^{+}(\textbf{o}, x_{mla}, y_{mla}) = U_{mla}(\textbf{o}, x_{mla}, y_{mla})\, T(x_{mla}, y_{mla}), \tag{12}$$
$$T = \operatorname{rep}_{p_{ml},\, p_{ml}}\!\left( t(x_l, y_l) \right), \tag{13}$$
$$t(x_l, y_l) = P(x_l, y_l)\, e^{-\frac{ik (x_l^2 + y_l^2)}{2 f_{ml}}}. \tag{14}$$
$$U_{sens}(\textbf{o}, x_s, y_s) = \mathcal{F}^{-1}\left\{ \mathcal{F}\left\{ U_{mla}^{+}(\textbf{o}, x_s, y_s) \right\} H_{rs}(f_X, f_Y) \right\}, \tag{15}$$
$$H_{rs}(f_X, f_Y) = e^{\, i k\, d_{mla}^{sens} \sqrt{1 - (\lambda f_X)^2 - (\lambda f_Y)^2}}, \tag{16}$$
$$\Delta x = \frac{\lambda}{L} \sqrt{(d_{mla}^{sens})^2 + \frac{L^2}{2}}, \tag{17}$$
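
Eqs. (12)–(16) amount to multiplying the field by a tiled micro-lens phase mask and applying one angular-spectrum propagation step to the sensor. A minimal sketch under hypothetical grid and lens parameters (placeholders throughout); the evanescent-wave cutoff below is an implementation detail, not part of Eq. (16).

import numpy as np

lam = 520e-9                        # wavelength (placeholder)
k = 2 * np.pi / lam
f_ml = 2.5e-3                       # micro-lens focal length (placeholder)
d_mla_sens = 2.5e-3                 # MLA -> sensor distance (placeholder)
p_ml = 100e-6                       # micro-lens pitch (placeholder)
N, dx = 1024, 2e-6                  # grid size and step (placeholders; cf. Eq. (17))

# Tiled transmittance T of Eqs. (13)-(14): local lens coordinates via modulo (the rep operator).
x = (np.arange(N) - N // 2) * dx
X, Y = np.meshgrid(x, x)
Xl = (X + p_ml / 2) % p_ml - p_ml / 2
Yl = (Y + p_ml / 2) % p_ml - p_ml / 2
P = (Xl**2 + Yl**2) <= (p_ml / 2)**2           # circular micro-lens pupil
T = P * np.exp(-1j * k * (Xl**2 + Yl**2) / (2 * f_ml))

def propagate(U, z):
    """Angular-spectrum (Rayleigh-Sommerfeld) step of Eqs. (15)-(16)."""
    fx = np.fft.fftfreq(N, dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (lam * FX)**2 - (lam * FY)**2
    H = np.exp(1j * k * z * np.sqrt(np.maximum(arg, 0.0))) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(U) * H)

# U_mla would come from Eq. (9); a unit plane wave stands in here.
U_sens = propagate(np.ones((N, N), dtype=complex) * T, d_mla_sens)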
$$\frac{r_{tl}}{d_{tl}^{mla}} = \frac{r_{ml}}{d_{mla}^{sens}}, \tag{18}$$
$$r_{tl} = r_{obj} \left| 1 - \frac{d_{obj}^{tl}}{z} \right|, \tag{19}$$
$$a_{ji} = P(\text{photon counted at sensor element } j \mid \text{emission occurred in voxel } i), \tag{20}$$
$$\textbf{m} \sim \text{Poisson}(A\textbf{v}), \tag{21}$$
$$a_{ji} = \left| U_{sens}\!\left( \textbf{o}(i), \textbf{x}_s(j) \right) \right|^2, \tag{22}$$
$$\mathcal{L}(z \mid v) = \sum_{j \in J} \sum_{i \in I} -v_i\, a_{ji} + z_{ji} \ln(v_i\, a_{ji}) - \ln(z_{ji}!). \tag{23}$$
$$\hat{z}_{ji} = m_j \frac{v_i^q\, a_{ji}}{\sum_{l \in I} v_l^q\, a_{jl}}, \tag{24}$$
$$v_i^{q+1} = \frac{v_i^q}{\sum_{j \in J} a_{ji}} \sum_{j \in J} \frac{m_j\, a_{ji}}{\sum_{l \in I} v_l^q\, a_{jl}}, \tag{25}$$
$$\textbf{v}^{q+1} = \frac{\textbf{v}^{q}}{A^T \textbf{1}} \left[ A^T \frac{\textbf{m}}{A \textbf{v}^{q}} \right]. \tag{26}$$
$$\text{EMS:} \quad \textbf{v}^{q+1} = h_{f_w, z} \ast \frac{\textbf{v}^{q}}{A^T \textbf{1}} \left[ A^T \frac{\textbf{m}}{A \textbf{v}^{q}} \right], \tag{27}$$
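
In practice, Eq. (27) is one Richardson-Lucy update (Eq. (26)) followed by a convolution of the current volume estimate with the depth-dependent anti-aliasing kernel $h_{f_w,z}$. Below is a minimal Python sketch with the LFPSF operators $A$ and $A^T$ abstracted as callables and an isotropic Gaussian standing in for $h_{f_w,z}$; the function names are illustrative and do not correspond to the oLaF API.

import numpy as np
from scipy.ndimage import gaussian_filter

def ems_deconvolve(m, forward, backward, sigma_z, n_iter=30, eps=1e-9):
    """Smoothed Richardson-Lucy (EMS) iterations, Eq. (27).

    m        : measured light field image (2D array)
    forward  : callable, volume -> simulated light field image (A v)
    backward : callable, image -> volume (A^T m)
    sigma_z  : per-depth kernel widths, largest where the LFM samples coarsest
    """
    norm = backward(np.ones_like(m))             # A^T 1
    v = np.ones_like(norm)                       # flat non-negative initialization
    for _ in range(n_iter):
        ratio = m / (forward(v) + eps)           # m / (A v^q)
        v = v / (norm + eps) * backward(ratio)   # multiplicative RL update, Eq. (26)
        for iz, s in enumerate(sigma_z):         # EMS smoothing, slice by slice
            v[iz] = gaussian_filter(v[iz], sigma=s)
    return v

The per-depth widths sigma_z would follow the object-side filter profile of Eq. (7) and Fig. 2(c): widest at the zero plane, where the LFM sampling is coarsest, and shrinking away from it.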
$$C = \frac{I_{max} - I_{min}}{I_{max} + I_{min}}. \tag{28}$$
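
Eq. (28) defines the Michelson contrast underlying the contrast scores in Fig. 5; as a one-liner:

import numpy as np

def michelson_contrast(I):
    """Michelson contrast of an intensity profile, Eq. (28)."""
    I = np.asarray(I, dtype=float)
    return (I.max() - I.min()) / (I.max() + I.min())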