What about computational super-resolution in fluorescence Fourier light field microscopy?

Open Access

Abstract

Recently, Fourier light field microscopy was proposed to overcome the limitations in conventional light field microscopy by placing a micro-lens array at the aperture stop of the microscope objective instead of the image plane. In this way, a collection of orthographic views from different perspectives are directly captured. When inspecting fluorescent samples, the sensitivity and noise of the sensors are a major concern and large sensor pixels are required to cope with low-light conditions, which implies under-sampling issues. In this context, we analyze the sampling patterns in Fourier light field microscopy to understand to what extent computational super-resolution can be triggered during deconvolution in order to improve the resolution of the 3D reconstruction of the imaged data.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Based on the principles of integral photography introduced in 1908 [1], lenslet-based (plenoptic) imaging systems [2] have since attracted a lot of interest and have become an important category in computational imaging research. Plenoptic cameras, commonly called light field cameras (although light field imaging is a more generic concept, not limited to lenslet-based devices), make use of a micro-lens array (MLA) inserted in the optical path at various relative distances to the camera sensor [3–6] to capture the spatio-angular information of the 3D scene in a single shot, enabling various post-acquisition possibilities like volumetric refocusing [3,7,8] or depth estimation [9–11].

Naturally, the technology was also adopted in optical microscopy, where Levoy et al. [12] introduced it as light field microscopy (LFM); it is also referred to as integral microscopy (IMic) in the literature [13,14]. It has since emerged as a very capable scan-less optical imaging modality well-suited for highly-dynamic fluorescent biological specimens, allowing for subsequent 3D reconstruction [15–17]. LFM has been demonstrated in various biomedical applications, including recording neuro-dynamics in vertebrate model organisms [18,19] or live cell imaging [20].

While the scan-less 3D imaging capability is very attractive, the modality trades off lateral for axial resolution. In the early stages, the methods for rendering the 3D scene out of the captured light field were limited to lateral lenslet resolution [12,15], which is the number of available micro-lenses. More recently, Broxton et al. [16] demonstrated that superior lateral resolution (higher than the lenslet count) can be recovered by means of computational super-resolution. The same principles were explored in light field photography as well [21,22]. The key observation in these works is that the light sampling pattern of plenoptic imaging systems is highly depth-dependent, and that the lenslet sampling rate is considerably below the band limit of the incoming light field for most of the axial range. Spatial aliasing can then be exploited through sub-lenslet shifts to recover higher lateral resolution at certain depths in the 3D scene. However, the improvement in resolution is implicitly non-uniform across depth. Moreover, when performing 3D reconstruction at uniform object space resolution, the depths under-sampled by the microscope exhibit specific artifacts. This effect is particularly strong around the native object plane (NOP), which is commonly referred to as the zero plane [16,18]. While the zero plane artifacts in the reconstruction have been very recently addressed in [17] via a resampling strategy during the deconvolution process, the recoverable resolution is ultimately limited by the design of the microscope and the resolution at the NOP remains low.

To address this limitation, a considerable body of research proposes various hardware and algorithmic extensions. Examples include defocused plenoptic configurations [17,20,24] which manipulate the sampling pattern by altering the relative distances in the optical path (e.g. MLA to sensor), hardware variations like wavefront coding techniques [25], hybrid systems combining a light field with a wide field acquisition [26], or simultaneous dual LFM setups [19] with complementary acquisition parameters. While generally successful to some extent in improving the lateral resolution or extending the depth of field, these extensions usually come with a hardware overhead or high computational costs, or both.

Recently, Fourier integral microscopy (FiMic) [27], also called Fourier light field microscopy (FLFM), was proposed to address the limitations in conventional light field microscopy (or integral microscopy) by placing a micro-lens array at the aperture stop of the microscope objective. With such a modification, a Fourier (far-field) image of the aperture stop is recorded by each micro-lens, and thus a collection of orthographic views (elemental images) is directly captured on the sensor. While compact in design, the proposed microscope demonstrated extended depth of field and enhanced lateral resolution of the elemental images (EIs) in comparison with the extracted views in regular light field microscopy [28]. A custom setup based on the same principles (with a specialized MLA consisting of interleaved micro-lenses with two different focal lengths) was recently used to demonstrate impressive resolution in 3D neuro-dynamics recording of fish larvae [29]. Another recent work [30] proposes deconvolution by a wave-based point spread function (PSF) in FLFM. The particular PSF introduced, however, carries an unnecessary computational overhead and is not very flexible with respect to setup variations, as we will discuss in section 3.1.

A feature that inherently characterizes the orthographic images captured in FLFM, when working with fluorescent samples, is the under-sampling of the PSF. This happens due to the sensitivity and noise requirements of the camera sensors, which are often met by the use of large pixels. Note that this problem does not occur in conventional fluorescence microscopy due to the large magnification factor between the object and the sensor planes.

In this work, we analyze the image formation process in fluorescence FLFM to understand how the modified microscope samples the light field, and, based on this analysis, we discuss the conditions and extent of the computational super-resolution that is possible in FLFM. We then propose a diffraction-aware forward light propagation model to describe the system’s impulse response and use this to volumetrically reconstruct the images. We evaluate our method on experimental images of the USAF 1951 resolution target and cotton fibers. As a teaser, Fig. 1 shows clearly superior results compared to the well-known shift and sum refocusing algorithm [23] used in the baseline work by Scrofani et al. [28].


Fig. 1. Reconstruction of the USAF 1951 resolution target. Top: (a) Raw elemental image of the resolution target acquired with our experimental FiMic (shown is a close up on groups 6 and 7 of the central elemental image). (b) The post-acquisition refocused image using the popular algorithm of shifting views and summing up [23]. (c) The deconvolved image at sensor resolution. (d) The reconstructed image at a 3x super-sampling of the object space, exploiting complementary multi-view aliasing in the elemental images. Bottom: Line profiles through the elements 6.4 to 7.3 of the images above. While the elemental image (a) and the refocused image (b) resolve up to element 6.4 (11 $\mu m$), the deconvolution resolves up to element 6.6 (8.8 $\mu m$) in (c) and element 7.1 (7.8 $\mu m$) in the computationally super-resolved image (d).


The methods we develop in this work are related to computational super-resolution techniques used in computer vision and computational photography, where sub-pixel shifts (or sub-lenslet shifts in light field imaging [16]) between multiple aliased views of the same scene are combined to recover an image at sub-pixel resolution. In this light, computational super-resolution should not be confused with optical super-resolution, which aims at breaking the diffraction limit of imaging systems, as in Stimulated Emission Depletion (STED) or Photoactivated Localization Microscopy (PALM).

2. Background

2.1 Fourier integral microscope

A Fourier integral microscope (FiMic) is built by inserting a micro-lens array (MLA) at the back aperture stop (AS) of a conventional microscope objective (MO) and recording far-field (Fourier) perspective views of the object under each micro-lens. In order to be consistent with the nomenclature in the state-of-the-art work [28], we will also refer to the perspective views as elemental images (EIs).

Figure 2(a) illustrates a ray diagram of the light propagation through a FiMic. Since the AS is usually not accessible for commercial microscope objectives, the configuration depicted here employs a telecentric optical relay system (RL1 and RL2, with focal lengths $f_1$ and $f_2$, respectively) to conjugate the AS plane and the MLA plane. Note that, when $f_1 \neq f_2$, there is a relay magnification factor $M_{relay} = \frac {f_2}{f_1}$ that contributes to the total system magnification. For an arbitrary source point in front of the objective, $\textbf {o}(o_x, o_y, z)$, we will represent the axial coordinate as $z = f_{obj} + \Delta z$, since an object at the front focal plane (the native object plane, NOP) is in focus in a conventional wide-field microscope. $f_{obj}$ is the focal length of the objective lens. Then $\Delta z$ is an offset from the NOP, and we will refer to it when talking about depth in the subsequent sections.


Fig. 2. Image formation in FLFM. (a) Ray diagram: light field propagation through the Fourier integral microscope. The FiMic depicted here makes use of an optical relay system (RL1 and RL2 with focal lengths $f_1$ and $f_2$, respectively) which conjugate the back aperture of the microscope objective (MO) with the MLA plane. The reason for the relay is that the back aperture is usually not accessible in conventional commercial MOs. A source point $\textbf {o}(o_x, o_y, z = f_{obj} + \Delta z)$ in front of the MO has a conjugate image by the first relay lens (RL1) at $z'$. RL2 picks up this image and magnified images are recorded behind each micro-lens as the light reaches the camera sensor. $f_{obj}$ denotes the MO focal length and $\Delta z$ represents the axial offset from the native object plane. (b) The field stop (FS) controls the size of the elemental images (EIs) as well as the size of the microscope’s field of view. See Eq. (1) and Eq. (2). (c) Overlapping images of the USAF resolution target when the FS is too large.


A source point at a depth $z$ in front of the MO has a conjugate image at $z'$ by the first relay lens, RL1. This intermediate image is then picked up by the second relay lens, RL2, and finally, magnified images of the field stop (FS) are recorded behind each micro-lens as the light reaches the camera sensor. The FS, as depicted in Fig. 2(b), controls the lateral extent of the micro-images as $\mu _{image} = r_{FS} \frac {f_{ml}}{f_{2}}$. Here $r_{FS}$ is the radius of the FS, $f_{ml}$ is the focal length of the micro-lens and $\mu _{image}$ is the radius of the EI formed on the sensor. In order for the EIs to optimally cover the sensor plane (without leaving space between them or overlapping), the micro-image radius must match the micro-lens radius, $\mu _{image} = r_{ml}$. Then the radius of the FS satisfies:

$$r_{FS} = r_{ml}\cdot f_2 / f_{ml}.$$
It quickly follows, as depicted in Fig. 2(b), that the FS determines the field of view (FOV) of the FiMic, as its radius satisfies:
$$r_{FOV} = r_{FS}\cdot f_{obj} / f_{1}.$$
Figure 2(c) shows a simulated light field image of the USAF 1951 resolution target with overlapping EIs when the FS does not satisfy Eq. (1).
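As a numerical illustration of Eqs. (1) and (2), the short sketch below evaluates both radii for the parameters of the experimental setup described in section 4 (the variable names are ours):

```python
# Field stop and field-of-view sizing, Eqs. (1) and (2).
# Parameters follow the experimental setup of section 4; all lengths in mm.
f_ml = 6.5                 # micro-lens focal length
r_ml = 0.5                 # micro-lens radius (1 mm pitch)
f_1, f_2 = 125.0, 200.0    # relay lens focal lengths
f_obj = 9.0                # objective focal length

r_FS = r_ml * f_2 / f_ml    # Eq. (1): FS radius for optimally tiled EIs
r_FOV = r_FS * f_obj / f_1  # Eq. (2): resulting FOV radius

print(f"r_FS  = {r_FS:.2f} mm")   # ~15.4 mm
print(f"r_FOV = {r_FOV:.2f} mm")  # ~1.1 mm
```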

2.2 Aliasing and computational super-resolution

By design, behind the micro-lenses, the FiMic records EIs with dense spatial sampling and each with different angular content. The number of micro-lenses that can be fitted within the AS is $N = M_{relay}\frac {r_{obj}}{r_{ml}}$, with $r_{obj}$ being the radius of the AS. Such a setup then captures $N$ angular views of the imaged scene. With an equivalent (in terms of MO) wide-field microscope, these EIs could be captured if pinholes were placed at certain positions over the AS. Hence, the recorded light field consists of pinhole views at $N$ locations over the numerical aperture of the objective, $NA_{obj}$. The number of micro-lenses controls the spatio-angular resolution trade-off. By increasing $N$ we may capture more views, however at a lower spatial resolution, as the effective numerical aperture is reduced proportionally to $\frac {NA_{obj}}{N}$ [28].

When aiming at 3D reconstruction of the imaged object, the resolution of the volumetric reconstruction is directly determined by the band limit of the recorded signal in each perspective view. Thus, high-frequency details in the volume can only be recovered if they can be resolved in the views. Under the Rayleigh resolution criterion for diffraction limited systems [31], two source points are resolved in each of the EIs when they are separated by at least a distance:

$$\delta_{{dif\!\!f}} = N \frac{\lambda}{2NA_{obj}},$$
where $\lambda$ is the wavelength of the light we employ.

On the other hand, the sampling rate in an EI is crucial in determining up to which frequency the light field signal is recorded on the sensor. Within each EI, the sampling period in the object space is given by the camera pixel pitch, $\rho _{px}$, divided by the total system magnification factor, $M_{FiMic} = \frac {f_{ml} f_1}{f_2 f_{obj}}$. Then, using Nyquist’s criterion, we define the sensor resolution as:

$$\delta_{sensor} = 2\frac{\rho_{px}}{M_{FiMic}}.$$
When the sensor samples the signal below its band limit, high frequency details of the light field appear aliased as low frequency features in the individual EIs. One could potentially try to alleviate the under-sampling issues by a proper selection of the relay magnification. However, this is not a good strategy, since such an increase of the magnification involves a reduction of the FOV and the need for high-NA micro-lenses, which have poor optical quality [28].

We introduce the super-sampling factor $s \in \mathbb {Z}$ to characterize the object space sampling rate of our reconstructed volumes. If we sample the volume at a rate $s$ times the sensor resolution, the lateral voxel spacing is:

$$\delta_{super} = \frac{\delta_{sensor}}{s}.$$
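Putting Eqs. (3)–(5) together, the following sketch (a simple calculator of ours, again using the section 4 setup parameters and $\lambda = 480\,nm$) reproduces the resolution figures quoted in section 4.1:

```python
# Resolution and sampling limits in FLFM, Eqs. (3)-(5); lengths in mm.
f_obj, NA_obj = 9.0, 0.4
f_1, f_2, f_ml = 125.0, 200.0, 6.5
r_ml, rho_px = 0.5, 2.2e-3          # micro-lens radius, camera pixel pitch
lam = 480e-6                        # wavelength (480 nm)

M_relay = f_2 / f_1
r_obj = NA_obj * f_obj              # radius of the aperture stop
N = M_relay * r_obj / r_ml          # micro-lenses across the AS -> 11.5

delta_diff = N * lam / (2 * NA_obj)        # Eq. (3) -> ~6.9 um
M_FiMic = (f_ml * f_1) / (f_2 * f_obj)     # total system magnification
delta_sensor = 2 * rho_px / M_FiMic        # Eq. (4) -> ~9.7 um

s = 3                                      # super-sampling factor
delta_super = delta_sensor / s             # Eq. (5): reconstruction voxel spacing

print(N)                                                        # 11.52
print(delta_diff * 1e3, delta_sensor * 1e3, delta_super * 1e3)  # in um
```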
There are various works in computer vision demonstrating computational super-resolution through combining multiple aliased low-resolution images acquired at sub-pixel camera movements [32–37]. In light field photography and conventional light field microscopy, computational super-resolution was addressed by exploiting sub-lenslet sampling [16,17,21,22,38,39].

In Fourier light field microscopy, the EIs form behind the micro-lenses at specific translational offsets with respect to their corresponding micro-lens centers. In Fig. 3(a) a ray diagram of the image formation of a point source away from the NOP is depicted. The first part of the light propagation is omitted for the sake of clarity of the figure. The center of the image ($\mu _{image}$) formed behind an off-axis micro-lens, with respect to the optical axis (OA) of the system, is translationally offset from the corresponding micro-lens ($\mu _{lens}$) center. We will refer to the EI behind the micro-lens centered on the OA as the reference EI, since this image is aligned with the micro-lens. Then the translational offsets specific to the off-axis EIs are stated with respect to this image. Figure 3(b) shows an image, acquired with our experimental FiMic, of the USAF-1951 resolution target placed at $\Delta z = -100 \mu m$ in front of the MO. Figure 3(c) shows zoomed-in regions of three arbitrarily picked EIs of the image in Fig. 3(b). The centers of the micro-lenses are marked in red on the image, while the centers of the EIs are marked in blue to highlight the misalignment between them. These images contain distinct complementary aliasing patterns (especially noticeable for elements 6.4 and 6.5 of the USAF target) which motivate computational super-resolution. Figure 3(d) illustrates how the shift patterns change with object depth and Fig. 3(e) visualizes these shifts in pixels (between each micro-lens and its corresponding EI) for an axial range of [-120, 120]$\mu m$ to give an intuitive understanding of how the image formation in FLFM varies with object depth. A $\mu _{lens}$ index of zero refers to the reference (central) micro-lens, which is the closest to the OA.


Fig. 3. Aliasing and EI sampling rates. a) The EIs formed behind off-axis micro-lenses are shifted with respect to the centers of the micro-lenses. b) FiMic image of the USAF 1951 resolution target placed at $\Delta z = -100 \mu m$. c) Zoomed-in regions of the EIs in b) showing distinct aliasing patterns in areas with high frequency features as highlighted by the arrows. The micro-lens centers (red dots) and the EI centers (dark blue dots) are mismatched for the off-axis EIs. d) The EIs exhibit different shift patterns with object depth. e) EI offsets in pixels from the micro-lens centers with respect to a reference EI (closest to the optical axis) for objects placed at $\Delta z$ = [-120, 120] $\mu m$. The $\mu _{lens}$ index $= 0$ refers to the central micro-lens (closest to the OA). f) Sub-pixel shifts of the EIs with respect to the reference EI over depth. It is these sub-pixel shifts between the captured views that record complementary aliased information and motivate computational super-resolution.


More interesting for our discussion are the sub-pixel shifts, which are the fractional part of the pixel shifts in Fig. 3(e). Figure 3(f) displays the absolute value of the sub-pixel shifts as a function of the axial position of the source point and the $\mu _{lens}$ index; they appear highly irregular, although consistent in density across depth. The lack of symmetry with respect to the '$\mu _{lens}$ index' axis is due to the fact that the reference EI is not perfectly aligned with the optical axis, but is simply the most central one, as it is not trivial to perfectly align the MLA with the optical axis in practice. However, this misalignment does not impact our reasoning, as long as the location of the sub-imaging systems (micro-lenses) can be determined. When computing the system’s response for a specific arrangement, we first detect the relative positions (with respect to the reference EI) of the centers of the micro-lenses. Also, at the zero plane ($\Delta z = 0 \mu m$) the off-axis EIs show no shift in Fig. 3(f). However, the concept of the NOP in experimental FLFM is more mathematical than physical, as any small displacement from that plane gives rise to a collection of EIs with sub-pixel shifts. When we present the results, we show that super-resolution is also achievable around the NOP.
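The depth dependence of these shifts can be previewed with a purely paraxial thin-lens argument (an approximation of ours, not the wave-based model of section 3): a point at offset $\Delta z$ produces at the MLA a wavefront with curvature radius $R \approx M_{relay}^2 f_{obj}^2/\Delta z$, so a micro-lens at lateral offset $h$ sees a local tilt $h/R$ and forms its EI displaced by roughly $f_{ml}\,h/R$:

```python
import numpy as np

# Paraxial preview of EI shifts vs. depth (thin-lens approximation of ours,
# not the wave-optics model of section 3). Lengths in mm.
f_obj, f_1, f_2, f_ml = 9.0, 125.0, 200.0, 6.5
p_ml, rho_px = 1.0, 2.2e-3                # micro-lens pitch, pixel pitch
M_relay = f_2 / f_1

dz = np.linspace(-120e-3, 120e-3, 241)    # axial offsets: -120..120 um
idx = np.arange(-5, 6)                    # micro-lens index (0 = reference)
h = idx * p_ml                            # lateral micro-lens offsets

# Local wavefront tilt at an off-axis lenslet is ~ h / R with
# R ~ M_relay^2 * f_obj^2 / dz, so its EI is displaced by ~ f_ml * h / R.
shift_px = f_ml * np.outer(dz, h) / (M_relay**2 * f_obj**2) / rho_px
subpixel = np.abs(shift_px - np.round(shift_px))  # cf. Fig. 3(e) and 3(f)
```

Consistent with Fig. 3(f), the predicted shifts vanish at $\Delta z = 0$ and grow linearly with both depth offset and micro-lens index.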

In order to recover high-frequency features, enough images with distinct aliasing patterns should be combined, such that the sub-pixel shifts constitute a sufficiently dense sampling pattern [40–43]. Together with the diffraction band limit, the camera pixel size, sensitivity, fill factor and prior scene information, these requirements set the fundamental resolution limits of the deconvolved image [44]. When the sensor pixels are close to or smaller than the diffraction limit (generally the case in microscopy), the aliasing is likely negligible and computational super-resolution is of relatively low impact. We will discuss these aspects when we present our results in section 4.

3. 3D reconstruction

In order to obtain a 3D reconstruction of the imaged sample, we aim at characterizing the point spread function (PSF) of the system and use it to perform deconvolution.

3.1 Light field point spread function model

In this section we introduce a wave-based forward light propagation model to describe the optical system’s PSF. For that we derive the diffraction pattern of a source point when the light propagates through the FiMic from the source to the camera sensor and we discuss the wavefront at intermediate key planes in the following subsections.

A source point $\textbf {o}(0, 0, o_z = f_{obj} + \Delta z_o)$ in front of the microscope generates, according to Rayleigh-Sommerfeld theory [45], a wavefront distribution at the front focal plane of the objective:

$$U_0(x, y; \textbf{o}) = (A/r)\,e^{ikr(\textrm{sign}(\Delta z_o))},$$
where $A$ is the amplitude of the source electric field, $r = \sqrt {x^{2} + y^{2} + \Delta z_o^{2}}$ is the distance between the source point and the observation point $(x,y)$ at the front focal plane, $k = \frac {2\pi }{\lambda }$ is the wave number and $\lambda$ is the wavelength of the assumed monochromatic light.

According to Debye scalar integral representation, the wavefront distribution at the back focal plane of the objective is given by [46]:

$$U_{AS}(r_{as}; \textbf{o}) = \int_0^{\alpha} U_0(\theta; \textbf{o}) \, J_0(kr_{as}\sin(\theta))\, \sin(\theta)\, d\theta,$$
where $r_{as} = \sqrt{x_{as}^{2}+y_{as}^{2}}$ stands for the radial coordinate at the AS, $\alpha$ is the aperture angle so that $NA_{obj} = \sin (\alpha )$, and $\theta = \sin ^{-1}(r_{as}/f_{obj})$. $J_{0}$ represents the zeroth order Bessel function of the first kind. From this equation we recognize a Fourier-Bessel transformation between the amplitudes at the front and the back focal planes.
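One way to evaluate Eq. (7) numerically is direct quadrature. The sketch below is our own illustration, for an on-axis point with unit amplitude, and assumes the sine condition $\rho = f_{obj}\sin\theta$ when sampling $U_0$ of Eq. (6):

```python
import numpy as np
from scipy.special import j0

# Direct quadrature of the Debye integral, Eq. (7), for an on-axis point.
# We assume the sine condition rho = f_obj * sin(theta) when sampling U_0
# of Eq. (6); the integrand is highly oscillatory, hence the fine grid.
f_obj, NA_obj = 9.0, 0.4            # mm
lam = 480e-6                        # mm (480 nm)
dz = -50e-3                         # source offset from the NOP: -50 um
k = 2 * np.pi / lam
alpha = np.arcsin(NA_obj)

theta = np.linspace(1e-6, alpha, 100001)      # quadrature nodes
rho = f_obj * np.sin(theta)                   # front focal plane radius
r = np.sqrt(rho**2 + dz**2)
U0 = np.exp(1j * k * r * np.sign(dz)) / r     # Eq. (6), unit amplitude

def U_AS(r_as):
    """Field at the AS at radial coordinate r_as, Eq. (7)."""
    f = U0 * j0(k * r_as * np.sin(theta)) * np.sin(theta)
    return np.sum((f[:-1] + f[1:]) * np.diff(theta)) / 2  # trapezoid rule

field = np.array([U_AS(r) for r in np.linspace(0, NA_obj * f_obj, 256)])
```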

The wave propagation between the AS and the MLA can be accurately described under the Fresnel approximation. Thus, the wavefront incident on the MLA is a magnified version of the wavefront at the MO aperture stop:

$$U_{MLA-}(x_{mla}, y_{mla}; \textbf{o}) = U_{AS}\left(\frac{x_{mla}}{M_{relay}}, \frac{y_{mla}}{M_{relay}}; \textbf{o}\right),$$
where $M_{relay} = \frac {f_2}{f_1}$. As pointed out in Sec. 2.1, for practical design reasons, the FiMic makes use of a relay system, depicted by the RL1 and RL2 lenses in Fig. 2(a), in order to mimic the MLA being placed at the AS plane (Fourier plane) of the objective. There is no need to explicitly model the relay system; however, we have to account for the induced magnification factor, $M_{relay}$. When $f_1 = f_2$, the relay is 1:1 and the wavefront distributions $U_{MLA-}$ and $U_{AS}$ are the same.

In [30], the authors have very recently presented a similar wave-based model for describing the incident field on the MLA. They directly compute the wavefront at the intermediate image plane (in our nomenclature, this is at the back focal plane of RL1) using the Debye integral derivation for 4f systems [45] and then Fourier transform this field to obtain the distribution at the MLA. This brings an unnecessary computational overhead and a degree of inflexibility, as the model confines the FiMic design to configurations containing the relay lenses, which, as we have discussed above, do not need explicit modeling. The relay system in our experimental setup is an auxiliary construction due to the AS not being physically accessible in commercial MOs, and not an essential specification of the FiMic.

In the next step, the wavefront is further transmitted by the MLA. The field $U_{MLA+}$ immediately after the MLA is given by:

$$U_{MLA+}( x_{mla}, y_{mla}; \textbf{o}) = U_{MLA-}( x_{mla}, y_{mla}; \textbf{o}) \cdot T(x_{mla}, y_{mla}).$$
Here $T$ is the MLA transmittance function modeled by replicating the single lenslet transmittance in a tiled fashion, $T = rep_{p_{ml},p_{ml}}{\big (}t(x_{l}, y_{l}){\big )}$; with $rep_{p_{ml},p_{ml}}$ being the 2D replication operator and $p_{ml}$ the spacing between micro-lenses. $t(x_{l}, y_{l}) = P(x_l, y_l) e^{\frac {ik(x_l^{2} + y_l^{2})}{2f_{ml}}}$ is the complex transmittance function of a lenslet and $(x_l,y_l)$ are the local lenslet coordinates, while $P(x_l,y_l)$ is the lenslet pupil function [16,17].
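A minimal discretization of the tiled transmittance $T$ might look as follows (grid size, pitch and sampling are illustrative choices of ours; the phase convention follows the expression above):

```python
import numpy as np

# Discretized MLA transmittance T = rep(t); grid choices are illustrative.
lam, f_ml, p_ml = 480e-6, 6.5, 1.0     # mm
k = 2 * np.pi / lam
n_lens, n_px = 11, 64                  # lenslets per side, samples per lenslet

# Local lenslet coordinates and single-lenslet transmittance t(x_l, y_l).
c = np.linspace(-p_ml / 2, p_ml / 2, n_px, endpoint=False)
xl, yl = np.meshgrid(c, c)
P = (xl**2 + yl**2) <= (p_ml / 2)**2                     # circular pupil
t = P * np.exp(1j * k * (xl**2 + yl**2) / (2 * f_ml))    # lenslet phase

T = np.tile(t, (n_lens, n_lens))       # the 2D replication operator rep
```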

Finally, similarly to [17], we employ the Rayleigh-Sommerfeld diffraction solution [31] to further propagate the light field (over a distance $f_{ml}$) to the sensor plane:

$$U_{sens}(x_s,y_s; \textbf{o}) = \mathcal{F}^{{-}1}\Big\{ \mathcal{F}{\big\{} U_{MLA+}(x_s,y_s; \textbf{o}){\big\}} \cdot H_{rs}(f_X, f_Y) \Big\},$$
where $(x_s, y_s)$ are the coordinates at the sensor plane, $\mathcal {F}$ represents the Fourier transform operator, and $(f_X,f_Y)$ are the spatial frequencies at the image plane. $H_{rs}$ is the Rayleigh-Sommerfeld transfer function, given by:
$$H_{rs}(f_X, f_Y) = e^{\Big(i k \cdot f_{ml} \sqrt{1-(\lambda f_X)^{2}- (\lambda f_Y)^{2}}\Big)}.$$
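Equations (10) and (11) amount to a standard transfer-function propagation, which might be implemented along these lines (a sketch of ours; sampling choices are illustrative and evanescent components are suppressed):

```python
import numpy as np

def propagate_rs(U, dx, lam, dist):
    """Propagate a 2D field U (sample spacing dx) over dist using the
    Rayleigh-Sommerfeld transfer function, Eqs. (10)-(11)."""
    fy = np.fft.fftfreq(U.shape[0], d=dx)
    fx = np.fft.fftfreq(U.shape[1], d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1 - (lam * FX)**2 - (lam * FY)**2
    H = np.exp(1j * (2 * np.pi / lam) * dist * np.sqrt(np.maximum(arg, 0.0)))
    H[arg < 0] = 0.0                       # suppress evanescent components
    return np.fft.ifft2(np.fft.fft2(U) * H)

# e.g. propagate the field right after the MLA by f_ml to the sensor:
# U_sens = propagate_rs(U_mla_plus, dx=2.2e-3, lam=480e-6, dist=6.5)
```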
Since we deal with an under-sampled process, in which the PSF is smaller than the pixel size, small changes in the PSF shape due to imperfections in the optical system have little relevance; this justifies our decision of using the analytic LFPSF. Additionally, since we are using well-corrected optical equipment (the MO and the achromatic lenses), our experiments are subject to the same aberration problems as conventional microscopes, and therefore we do not consider aberrations a topic of special concern.

3.2 Effect of scattering

When it comes to scattering effects, FLFM is less sensitive than classical plenoptic imaging devices. Note that in conventional LFM, all the rays emitted by a point at the native object plane should be collected by the same micro-lens. Scattering therefore affects, to an extent that depends on its magnitude, the micro-lens that is capturing the photons and, subsequently, the spatio-angular information captured by the system. In contrast, in FLFM each micro-lens (combined with the MO) behaves as an independent microscope with a different perspective, and although each of them is affected by scattering, they exhibit the same problems as the native microscope. Finally, the advantage is that the milky background that appears in the images due to scattering is compensated in FLFM during the reconstruction procedure, since only the ballistic photons are recorded with the adequate disparity.

3.3 3D deconvolution

Given the raw noisy light field sensor measurements $\textbf {m}=(m_j)_{j \in J}$ acquired by pixels $j \in J$ ($|J| = m$), we seek to recover the fluorescence intensity at each discrete point in the volume which produced these measurements. We represent the discretized volume $\textbf {v}$ by a coefficient vector $(v_i)_{i \in I}$ with $|I| = n$. Note that the sampling rate in $\textbf {v}$ is dictated by the super-sampling factor $s$ defined in the previous section. Due to the low photon counts in fluorescence microscopy, the sensor pixels follow Poisson statistics, yielding the stochastic imaging model $\textbf {m} \sim \textrm {Poisson}(A\textbf {v})$, where $\textbf {m}$ denotes the light field measurement, $\textbf {v}$ denotes the discretized volume we seek to reconstruct, and the operator $A = (a_{ji})_{j\in J, i\in I}$ describes the light field forward model, which is effectively determined by the FiMic point spread function in Eq. (10). For each point in a fluorescent object, the image intensity is given by the modulus squared of its amplitude [45]: $a_{ji} = {\big |}U_{sens}(\textbf {x}_s(j); \textbf {o}(i)){\big |}^{2}$, where $\textbf {o}(i)$ is the object space coordinate of voxel $i$, and $\textbf {x}_s(j)$ is the coordinate of sensor pixel $j$. We now employ the well known Richardson-Lucy algorithm [47,48] to estimate $\textbf {v}$. The iterative update in matrix-vector notation reads:

$$\textbf{v}^{q+1} = \frac{\textbf{v}^{q}}{A^{T} \mathbf{1}} \left[ A^{T}\frac{\textbf{m}}{A\textbf{v}^{q}}\right],$$
where $q$ is the iteration count. For a more detailed derivation of the reconstruction algorithm we refer the reader to [17].

When we assume an aberration-free context, thanks to the strategic placement of the MLA at the back aperture stop of the MO, the light field PSF of the FiMic is translationally invariant for a fixed axial coordinate. Each micro-lens represents a sub-imaging system with a spatially invariant PSF and, since all the micro-lenses are identical, the whole FiMic imaging system can be characterized by a shift invariant LFPSF, as a superposition of these individual PSFs [29]. Conveniently, this allows applying the columns of the matrix $A$ for each depth $\Delta z$ via 2D convolution operations when implementing the iterative scheme in Eq. (12).
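Exploiting this shift invariance, the forward operator $A$ reduces to a sum over depths of 2D convolutions with per-depth PSF slices, and the update in Eq. (12) can be sketched as follows (a simplified illustration with naming of our own; the actual oLaF implementation is in MATLAB, see section 6):

```python
import numpy as np
from scipy.signal import fftconvolve

def forward(v, psfs):
    """A v: sum over depths of 2D convolutions with per-depth PSF slices."""
    return sum(fftconvolve(v[z], psfs[z], mode="same") for z in range(len(psfs)))

def adjoint(m, psfs):
    """A^T m: correlate the image with each depth PSF (flipped convolution)."""
    return np.stack([fftconvolve(m, psfs[z][::-1, ::-1], mode="same")
                     for z in range(len(psfs))])

def richardson_lucy(m, psfs, n_iter=50, eps=1e-12):
    """Richardson-Lucy iterations, Eq. (12); v has shape (depths, H, W)."""
    v = np.ones((len(psfs),) + m.shape)       # uniform initialization (ones)
    norm = adjoint(np.ones_like(m), psfs)     # A^T 1
    for _ in range(n_iter):
        ratio = m / (forward(v, psfs) + eps)  # m / (A v^q)
        v *= adjoint(ratio, psfs) / (norm + eps)
    return v
```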

For practical reasons, when we discretize the object using the lateral spacing $\delta _{super}$ introduced in Eq. (5), we upsample the raw light field image by the super-sampling factor $s$. To make sure this step does not alter the measurements, we employ a nearest neighbor upsampling method.
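This upsampling step can be as simple as replicating each measured pixel $s \times s$ times (a minimal sketch of ours; np.kron performs exactly this replication without inventing new values):

```python
import numpy as np

def upsample_nn(lf_image, s):
    """Nearest-neighbor upsampling by the super-sampling factor s: each
    measured pixel is replicated s x s times, so the measurements are
    preserved and no interpolated values are introduced."""
    return np.kron(lf_image, np.ones((s, s)))
```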

4. Experimental results

In order to demonstrate the potential of our method, the deconvolution results in this section were obtained at various super-sampling factors and compared with the refocusing algorithm of pixel shifting and summing [23] and with the central EI of the raw image. All the results were obtained after $50$ iterations of the scheme in Eq. (12), which coincides with a drop in the improvement rate (the absolute square difference from the previous iteration) below $10^{-2}$; the solutions were initialized with a uniform white texture (ones).

It is important to note that the factor $s$ relates to the sampling rate we choose for reconstructing the volumes and is distinct from the actual level of detail that can be recovered, which is the effective resolution of the FiMic as addressed in section 2.2. We refer the interested reader to existing discussions on the subject [27,28,30].

The experiments in this work were performed with a custom-built FiMic containing a MLA with $f_{ml} = 6.5\,mm$ and micro-lens pitch $p_{ml} = 1.0\,mm$ (AMUS APH-Q-P1000-R2.95), and an infinity corrected MO ($f_{obj} = 9.0\,mm$ and $NA_{obj} = 0.4$). For recording the images we used a CMOS camera (EO-5012c 1/2") with pixel pitch $\rho _{px} = 2.2\,\mu m$.

4.1 Analysis of the reconstruction resolution

We imaged the USAF-1951 resolution target at various axial positions in the [-120,120]$\mu m$ range using our experimental setup. As mentioned in section 2, since the AS was not mechanically accessible, we used an optical relay system ($f_1 = 125 mm$, $f_2 = 200 mm$) to conjugate the AS plane and the MLA plane. This configuration fits $N = 11.5$ micro-lenses in the AS. Under the resolution criteria in Eq. (3) and Eq. (4), the expected lateral resolution limit (when $\lambda = 480 nm$) of this setup is at best $\delta _{dif\!\!f} = 6.9 \mu m$, while the sensor sampling resolution is $\delta _{sensor} = 9.7 \mu m$. On the USAF resolution target, these values correspond approximately to elements 7.2 and 6.5, respectively.

Figure 4(a) shows, for groups 6 and 7 of the USAF 1951 resolution target placed at $\Delta z = \{0,-20,-50,-100\}\mu m$, the central EI of the raw FiMic image (green), the shift and sum refocused image (yellow), and our deconvolution at object space sampling $s=1$ (red) and $s=3$ (blue). To characterize the recoverable resolution of our FiMic configuration, in Fig. 1 we display line profiles for elements 6.4 to 7.3 of the USAF target arbitrarily placed at $\Delta z = -80 \mu m$. To determine if one element is resolved, we check for the existence of an intensity dip of $25 \%$ [13]. The central EI, and similarly the refocused image, resolves up to element 6.4, which corresponds to a lateral resolution of $11 \mu m$. By removing the out-of-focus blur, the deconvolution ($s = 1$) resolves up to element 6.6 ($8.8 \mu m$). And by fusing the aliased information in the EIs, in the super-resolved reconstruction ($s = 3$), we can recover element 7.1, corresponding to $7.8 \mu m$.


Fig. 4. Reconstruction of the USAF 1951 target imaged at $\Delta z$ = [-120, 120] $\mu m$. a) Example central EI of the FiMic image (green), the refocused image (yellow), the deconvolved image at sensor resolution (red), the deconvolved image at 3x sensor resolution (blue) for arbitrarily picked axial positions $\Delta z = \{0,-20,-50, -100\}\mu m$. When compared to the raw and refocused images, the deconvolved images appear to better resolve details through deblurring. Element 7.1 appears resolved in the super-resolved image (blue oval). b) Contrast of the USAF element 7.1 over $\Delta z$ = [-120, 120] $\mu m$ is generally constant for all the methods in a). As expected, the super-resolved deconvolution shows the best contrast.


It is worth remarking here that the difference between $\delta _{dif\!\!f}$ and $\delta _{sensor}$ changes with the number of sub-aperture images, $N$. It quickly follows from the definitions in Eq. (3) and Eq. (4) that, when $\lambda < 4NA_{ml}\rho _{px}$ (with $NA_{ml}$ the numerical aperture of a micro-lens), the more angular samples we record, the more under-sampled they are by the system. For our configuration, this inequality is well satisfied, hence the potential for computational super-resolution. On the other hand, although we record $N = 11.5$ views, this does not dictate the actual improvement factor we can obtain via super-sampling, which is rather determined by the density and distinctness of the aliasing patterns the EIs exhibit, and is ultimately band limited. For the presented USAF target images, reconstructing at $s > 3$ did not improve the resolution any further.
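With $NA_{ml} \approx r_{ml}/f_{ml}$ (our paraxial approximation), this condition is easy to check numerically for the setup at hand:

```python
# Check the under-sampling condition lambda < 4 * NA_ml * rho_px.
r_ml, f_ml, rho_px = 0.5, 6.5, 2.2e-3   # mm
lam = 480e-6                            # mm (480 nm)
NA_ml = r_ml / f_ml                     # ~0.077
print(lam < 4 * NA_ml * rho_px)         # True: 480 nm < ~677 nm
```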

Finally, in order to analyze the behaviour of the reconstruction with respect to the object's axial position, we compute the contrast measure $c = (I_{max} - I_{min})/(I_{max} + I_{min})$ [16,49] for element 7.1, for each method in Fig. 4(a), over the [-120,120]$\mu m$ axial range. $I_{max}$ and $I_{min}$ are the maximal and minimal intensities along a line perpendicular to the stripes of element 7.1. The final contrast (the average of the vertical and horizontal stripe contrasts) as a function of depth is aggregated for all the discussed methods in Fig. 4(b). In agreement with the analysis in Fig. 1, the central EI and the refocused image show low contrast when compared to the deconvolved and super-resolved images. And very importantly, the plots suggest that the variation in contrast over the axial position is rather low, which means the lateral resolution in FLFM is uniform across depth, unlike in conventional LFM, where the resolution is highly non-uniform.
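For reference, the contrast measure can be computed directly from a line profile, e.g.:

```python
import numpy as np

def contrast(profile):
    """Contrast c = (I_max - I_min) / (I_max + I_min) of an intensity line
    profile taken perpendicular to the stripes of a USAF element."""
    i_max, i_min = np.max(profile), np.min(profile)
    return (i_max - i_min) / (i_max + i_min)
```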

The runtime of one iteration of our unoptimized CPU MATLAB implementation (see section 6) is $0.55$ seconds at $s = 1$, and about $16$ seconds at $s = 3$, on an Intel Core i7-6800K at 3.40 GHz for the USAF target ($1920\times 2560$). The PSF kernels are very sparse and, with dedicated accelerators for convolution operations, the performance of the algorithm can be greatly improved.

4.2 Reconstruction of a real 3D sample

We further evaluate the proposed methods on real volumetric data of cotton fibers.

Figure 5(a) (left) shows a raw FiMic image of cotton fibers captured with our experimental setup configured in a similar way as in the previous section. This time, the relay lenses RL1 and RL2 have focal lengths $f_1 = 50 mm$ and $f_2 = 40 mm$, which introduce a relay magnification $M_{relay} = 0.8x$. Under red light with $\lambda = 680nm$, the expected lateral resolution limit for this configuration is $\delta _{dif\!\!f} = 4.9 \mu m$ and the sensor sampling resolution is about the same. In order to evaluate our proposed computational super-resolution algorithm, we binned the pixels ($2x2$) in the LF image to artificially double the sensor pixel size. This results in $\delta _{sensor} = 9.8 \mu m$. The LF image is shown in Fig. 5(a) together with zoomed-in regions of an EI for details. We reconstructed the sample over an axial range of $\Delta z$ = [-150,150]$\mu m$ in steps of 10 $\mu m$ at super-sampling rates $s = 1$ and $s = 4$, as displayed in Fig. 5(b) and Fig. 5(c). While in both cases we see details that are not visible in the raw image, the improvement in the super-resolved deconvolution is evident. The close-up in Fig. 5(c), as well as the xz and xy projections, clearly shows structures that are not resolved in the normal deconvolution in Fig. 5(b).


Fig. 5. 3D reconstruction of cotton fibers. a) Raw image acquired with our experimental FiMic setup and zoomed-in regions of an EI for details. b) Maximum intensity projections (MIPs) and zoomed-in regions of the 3D reconstructed sample ($\Delta z$ = [-150,150]$\mu m$) using our proposed method at sensor resolution ($s = 1$). c) MIPs of the super-resolved 3D reconstruction at 4x sensor resolution ($s = 4$). The deconvolved images resolve structures that do not show in the EI. The close-ups in b) and c) clearly show that the super-resolved reconstruction recovers fine details in the sample that are not resolved in the normal deconvolution.


Finally, for the cotton fibers dataset ($740\times 1020\times 31$), the runtime for one iteration of the algorithm is $0.85$ seconds at $s = 1$ and about $80$ seconds at $s = 4$, on the same workstation (Intel i7-6800K at 3.40 GHz).

5. Conclusion

Fourier light field microscopy addresses the limitations of conventional LFM, where computational super-resolution has a major impact. It is then natural to ask whether there is something we can do computationally to improve the resolution in this different FLFM arrangement. When inspecting fluorescent specimens, the camera pixels need to be relatively large to cope with low light conditions. Then, for certain FiMic setup configurations, the light field signal is under-sampled.

In this work we analyze the sampling requirements in FLFM to understand how the sampling rate of the camera pixels impacts the spatial resolution recoverable through volumetric deconvolution. We then derive a flexible wave-based light field point spread function and use it to perform 3D reconstruction. We demonstrate, using experimental images of the USAF 1951 resolution target, that when the system samples the light field below its band limit, computational super-resolution is possible to some extent. In this case, successful deconvolution fuses the complementary information in aliased perspective views to recover high frequency details in the imaged scene. We further evaluate the proposed methods on volumetric samples (cotton fibers) and show superior 3D reconstruction quality over state-of-the-art methods.

6. Implementation and datasets

The example datasets shown in section 4, together with the implementation of the methods described in this paper, are available as part of our 3D reconstruction framework for light field microscopy, oLaF, at: https://gitlab.lrz.de/IP/olaf.

Funding

Deutsche Forschungsgemeinschaft (LA 3264/2-1); Ministerio de Ciencia, Innovación y Universidades (RTI2018-099041-B-I00); Generalitat Valenciana (PROMETEO/2019/048).

Acknowledgments

We would like to gratefully acknowledge Felix Wechsler for helping us to develop the forward propagation model.

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

1. G. Lippmann, “Épreuves Réversibles Donnant La Sensation Du Relief,” J. Phys. Theor. Appl. 7(1), 821–825 (1908). [CrossRef]  

2. E. H. Adelson and J. Y. A. Wang, “Single lens stereo with a plenoptic camera,” IEEE Trans. Pattern Anal. Mach. Intell. 14(2), 99–106 (1992).

3. R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Stanford Tech Report CTSR 2005-02 (2005).

4. A. Lumsdaine and T. Georgiev, “The focused plenoptic camera,” in 2009 IEEE International Conference on Computational Photography, ICCP 09, (IEEE, 2009), pp. 1–8.

5. T. Georgiev and A. Lumsdaine, “The multifocus plenoptic camera,” Proc. SPIE 8299, 829908 (2012). [CrossRef]  

6. C. Perwass and L. Wietzke, “Single lens 3D-camera with extended depth-of-field,” Proc. SPIE 8291, 829108 (2012).

7. R. Ng, “Fourier slice photography,” in ACM SIGGRAPH 2005 Papers on - SIGGRAPH ’05, (2005), pp. 735–744.

8. D. G. Dansereau, O. Pizarro, and S. B. Williams, “Linear Volumetric Focus for Light Field Cameras,” ACM Trans. Graph. 34(2), 1–20 (2015). [CrossRef]  

9. T. E. Bishop, S. Zanetti, and P. Favaro, “Light field superresolution,” in 2009 IEEE International Conference on Computational Photography (ICCP), (2009), pp. 1–9.

10. M. W. Tao, S. Hadap, J. Malik, and R. Ramamoorthi, “Depth from combining defocus and correspondence using light-field cameras,” in Proceedings of the IEEE International Conference on Computer Vision, (2013), pp. 673–680.

11. M. W. Tao, J. C. Su, T. C. Wang, J. Malik, and R. Ramamoorthi, “Depth Estimation and Specular Removal for Glossy Surfaces Using Point and Line Consistency with Light-Field Cameras,” IEEE Trans. Pattern Anal. Mach. Intell. 38(6), 1155–1169 (2016). [CrossRef]  

12. M. Levoy, R. Ng, A. Adams, M. Footer, and M. Horowitz, “Light field microscopy,” ACM Trans. Graph. 25(3), 924–934 (2006). [CrossRef]  

13. A. Llavador, J. Garcia-Sucerquia, E. Sanchez-Ortiga, G. Saavedra, and M. Martinez-Corral, “View images with unprecedented resolution in integral microscopy,” OSA Continuum 1(1), 40–47 (2018). [CrossRef]  

14. M. Martinez-Corral and B. Javidi, “Fundamentals of 3D imaging and displays: a tutorial on integral imaging, light-field, and plenoptic systems,” Adv. Opt. Photonics 10(3), 512–566 (2018). [CrossRef]  

15. M. Levoy, Z. Zhang, and I. McDowall, “Recording and controlling the 4D light field in a microscope using microlens arrays,” J. Microsc. 235(2), 144–162 (2009). [CrossRef]  

16. M. Broxton, L. Grosenick, S. Yang, N. Cohen, A. Andalman, K. Deisseroth, and M. Levoy, “Wave optics theory and 3-D deconvolution for the light field microscope,” Opt. Express 21(21), 25418–25439 (2013). [CrossRef]  

17. A. Stefanoiu, J. Page, P. Symvoulidis, G. G. Westmeyer, and T. Lasser, “Artifact-free deconvolution in light field microscopy,” Opt. Express 27(22), 31644–31666 (2019). [CrossRef]  

18. R. Prevedel, Y. G. Yoon, M. Hoffmann, N. Pak, G. Wetzstein, S. Kato, T. Schrödel, R. Raskar, M. Zimmer, E. S. Boyden, and A. Vaziri, “Simultaneous whole-animal 3D imaging of neuronal activity using light-field microscopy,” Nat. Methods 11(7), 727–730 (2014). [CrossRef]  

19. N. Wagner, N. Norlin, J. Gierten, G. de Medeiros, B. Balázs, J. Wittbrodt, L. Hufnagel, and R. Prevedel, “Instantaneous isotropic volumetric imaging of fast biological processes,” Nat. Methods 16(6), 497–500 (2019). [CrossRef]  

20. H. Li, C. Guo, D. Kim-Holzapfel, W. Li, Y. Altshuller, B. Schroeder, W. Liu, Y. Meng, J. B. French, K.-I. Takamaru, M. A. Frohman, and S. Jia, “Fast, volumetric live-cell imaging using high-resolution light-field microscopy,” Biomed. Opt. Express 10(1), 29–49 (2019). [CrossRef]  

21. C.-K. Liang and R. Ramamoorthi, “A Light Transport Framework for Lenslet Light Field Cameras,” ACM Trans. Graph. 34(2), 1–19 (2015). [CrossRef]  

22. T. E. Bishop and P. Favaro, “The light field camera: Extended depth of field, aliasing, and superresolution,” IEEE Trans. Pattern Anal. Mach. Intell. 34(5), 972–986 (2012). [CrossRef]  

23. S.-H. Hong, J.-S. Jang, and B. Javidi, “Three-dimensional volumetric object reconstruction using computational integral imaging,” Opt. Express 12(3), 483–491 (2004). [CrossRef]  

24. Y. Chen, B. Xiong, Y. Xue, X. Jin, J. Greene, and L. Tian, “Design of a high-resolution light field miniscope for volumetric imaging in scattering tissue,” Biomed. Opt. Express 11(3), 1662–1678 (2020). [CrossRef]  

25. N. Cohen, S. Yang, A. Andalman, M. Broxton, L. Grosenick, K. Deisseroth, M. Horowitz, and M. Levoy, “Enhancing the performance of the light field microscope using wavefront coding,” Opt. Express 22(20), 24817–24839 (2014). [CrossRef]  

26. C.-H. Lu, S. Muenzel, and J. Fleischer, “High-Resolution Light-Field Microscopy,” in Imaging and Applied Optics, (2013), p. CTh3B.2.

27. A. Llavador, J. Sola-Pikabea, G. Saavedra, B. Javidi, and M. Martínez-Corral, “Resolution improvements in integral microscopy with Fourier plane recording,” Opt. Express 24(18), 20792–20798 (2016). [CrossRef]  

28. G. Scrofani, J. Sola-Pikabea, A. Llavador, E. Sanchez-Ortiga, J. C. Barreiro, G. Saavedra, J. Garcia-Sucerquia, and M. Martinez-Corral, “FIMic: design for ultimate 3D-integral microscopy of in-vivo biological samples,” Biomed. Opt. Express 9(1), 335–346 (2018). [CrossRef]  

29. L. Cong, Z. Wang, Y. Chai, W. Hang, C. Shang, W. Yang, L. Bai, J. Du, K. Wang, and Q. Wen, “Rapid whole brain imaging of neural activity in freely behaving larval zebrafish (Danio rerio),” eLife 6, e28158 (2017). [CrossRef]  

30. C. Guo, W. Liu, X. Hua, H. Li, and S. Jia, “Fourier light-field microscopy,” Opt. Express 27(18), 25573–25594 (2019). [CrossRef]  

31. D. G. Voelz, Computational Fourier Optics: A MATLAB® Tutorial (SPIE, 2011).

32. M. G. Kang and S. Chaudhuri, “Super-resolution image reconstruction,” IEEE Signal Process. Mag. 20(3), 19–20 (2003). [CrossRef]  

33. S. Farsiu, D. Robinson, M. Elad, and P. Milanfar, “Advances and challenges in super-resolution,” Int. J. Imaging Syst. Technol. 14(2), 47–57 (2004). [CrossRef]  

34. S. S. Young and R. G. Driggers, “Superresolution image reconstruction from a sequence of aliased imagery,” Appl. Opt. 45(21), 5073–5085 (2006). [CrossRef]  

35. J. Tian and K. K. Ma, “A survey on super-resolution imaging,” Signal, Image Video Process. 5(3), 329–342 (2011). [CrossRef]  

36. J. Simpkins and R. Stevenson, “An Introduction to Super-Resolution Imaging,” in Mathematical Optics (CRC Press, 2012), pp. 555–580.

37. W. S. Chan, E. Y. Lam, M. K. Ng, and G. Y. Mak, “Super-resolution reconstruction in a computational compound-eye imaging system,” in Multidimensional Systems and Signal Processing, vol. 18 (2007), pp. 83–101.

38. S. Wanner and B. Goldluecke, “Variational light field analysis for disparity estimation and super-resolution,” IEEE Trans. Pattern Anal. Mach. Intell. 36(3), 606–619 (2014). [CrossRef]  

39. S. A. Shroff and K. Berkner, “Image formation analysis and high resolution image reconstruction for plenoptic imaging systems,” Appl. Opt. 52(10), D22–D31 (2013). [CrossRef]  

40. K. D. Sauer and J. P. Allebach, “Iterative Reconstruction of Band Limited Images from Nonuniformly Spaced Samples,” IEEE Trans. Circuits Syst. 34(12), 1497–1506 (1987). [CrossRef]  

41. K. Grochenig, “Reconstruction Algorithms in Irregular Sampling,” Math. Comput. 59(199), 181–194 (1992). [CrossRef]  

42. K. Gröchenig and T. Strohmer, “Numerical and theoretical aspects of nonuniform sampling of band-limited images,” in Nonuniform Sampling: Theory and Practice, F. Marvasti, ed. (Springer, Boston, MA, 2001), pp. 283–324.

43. G. Wolberg, “Sampling, Reconstruction, and Antialiasing,” in Digital Image Warping (CRC Press, 2004), pp. 1–32.

44. S. Baker and T. Kanade, “Limits on super-resolution and how to break them,” IEEE Trans. Pattern Anal. Mach. Intell. 24(9), 1167–1183 (2002). [CrossRef]  

45. M. Gu, Advanced Optical Imaging Theory, vol. 75 (Springer, Berlin, Heidelberg, 1999).

46. M. Martínez-Corral and G. Saavedra, “The Resolution Challenge in 3D Optical Microscopy,” Prog. Opt. 53, 1–67 (2009). [CrossRef]  

47. W. H. Richardson, “Bayesian-Based Iterative Method of Image Restoration,” J. Opt. Soc. Am. 62(1), 55–59 (1972). [CrossRef]  

48. L. B. Lucy, “An iterative technique for the rectification of observed distributions,” The Astron. J. 79, 745 (1974). [CrossRef]  

49. J. Rosen, N. Siegel, and G. Brooker, “Theoretical and experimental demonstration of resolution beyond the Rayleigh limit by FINCH fluorescence microscopic imaging,” Opt. Express 19(27), 26249 (2011). [CrossRef]  
