
Fourier light-field microscopy

Open Access

Abstract

Observing the various anatomical and functional information that spans many spatiotemporal scales with high resolution provides a deep understanding of the fundamentals of biological systems. Light-field microscopy (LFM) has recently emerged as a scanning-free, scalable method that allows for high-speed, volumetric imaging ranging from single-cell specimens to the mammalian brain. However, the prohibitive reconstruction artifacts and severe computational cost have thus far limited broader applications of LFM. To address these challenges, in this work, we report Fourier LFM (FLFM), a system that processes the light-field information through the Fourier domain. We established a complete theoretical and algorithmic framework that describes light propagation, image formation and system characterization of FLFM. Compared with conventional LFM, FLFM fundamentally mitigates the artifacts, allowing high-resolution imaging across a two- to three-fold extended depth. In addition, the system substantially reduces the reconstruction time by roughly two orders of magnitude. FLFM was validated by high-resolution, artifact-free imaging of various caliber and biological samples. Furthermore, we proposed a generic design principle for FLFM, as a highly scalable method to meet broader imaging needs across various spatial levels. We anticipate FLFM to be a particularly powerful tool for imaging diverse phenotypic and functional information, spanning broad molecular, cellular and tissue systems.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Light-field microscopy (LFM) simultaneously captures both the 2D spatial and 2D angular information of the incident light, allowing computational reconstruction of the full 3D volume of a specimen from a single camera frame [1–5]. Unlike other fluorescence imaging techniques that accumulate spatial information in a sequential or scanning fashion, this 4D imaging scheme effectively decouples the volume acquisition time from the spatial scales (e.g. field of view (FOV) and spatial resolution), thus making LFM a highly scalable tool for high-speed, volumetric imaging of biological systems with low photodamage across several spatial levels [5–9]. At the tissue level, the latest LFM techniques have demonstrated promising results for functional brain imaging with a cellular-level spatial resolution of several micrometers across a depth of tens to hundreds of micrometers, and at a fast volume acquisition time on the order of 10 milliseconds [5–8]. At the other extreme, the method has recently been demonstrated for observing structures and dynamics in single-cell specimens with a near-diffraction-limited 3D spatial resolution, an imaging depth of several micrometers to cover a significant volume of single cells, and a volume acquisition time of milliseconds [9].

For conventional LFM, a microlens array (MLA) is placed at the native image plane (NIP) of a wide-field microscope, and the optical signal is recorded in an aliased manner across the microlenses at the back focal plane of the MLA [2–4]. Mitigating the tradeoff between the spatial and angular information, the recent development of wave-optics models allows for the reconstruction of densely aliased high spatial frequencies through point-spread function (PSF) deconvolution [4,5]. However, to date, there are two major limitations that restrict LFM from broader applications. First, the sampling pattern of the spatial information is uneven in LFM; near the NIP in particular, it becomes redundant, resulting in prohibitive reconstruction artifacts [4]. Existing LFM techniques mainly circumvent the problem by imaging on one side of the focal plane [4,5,7] or by engineering the image formation or the wavefront [10–12], which, however, compromises either the imaging depth or the resolution. Meanwhile, the focused plenoptic scheme has also been reported, which removes the artifacts near the NIP but may impair the refocusing (or volumetric imaging) capability due to the restricted sampling geometry of the angular information [13]. Alternatively, a defocused optical design has been proposed to reject the artifact region away from the FOV, recovering the image quality across the NIP [9]. However, the problem remains fundamentally unresolved. Second, the volumetric reconstruction employs PSF deconvolution using the wave-optics model [4,5]. The PSF of conventional LFM is spatially varying in both the lateral and axial dimensions and is thus described by a 5D matrix that projects the 3D object domain onto the 2D camera plane [4]. This aggravates the computational cost, making the reconstruction considerably slow and impractical for rapid observation of dynamic or functional data. While several algorithms have been reported to accelerate the process [6,7], their applications have so far been largely limited to improving post-processing of calcium imaging data.

Recently, Fourier imaging schemes have been proposed as a promising path to improve the current LFM techniques [14–16]. However, the initial works rely on the ray-optics integral model, thus compromising the image quality for high-resolution microscopy [14,15]. Another Fourier approach has lately been demonstrated for recording sparse neural activities in larval zebrafish with an extended FOV [16]. That work employed experimentally measured PSFs for reconstruction. However, the system lacks a complete algorithmic framework for modeling light propagation and has thus not been fully explored to gain optimum performance for diverse applications beyond calcium imaging. Hence, the advancement and generalization of the optical design and computational framework of LFM for artifact-free and fast light-field imaging and reconstruction of broad volumetric data are still highly desirable.

Here, we introduce Fourier light-field microscopy (FLFM) to achieve high-quality imaging and rapid light-field reconstruction. By recording the 4D light field in the Fourier domain (FD), the imaging scheme transforms LFM in two main ways. First, the FD system allows allocating the spatial and angular information of the incident light in a consistently aliased manner, effectively avoiding any artifacts due to redundancy. Second, because the FD processes the signals in a parallel fashion, image formation can thus be depicted by a unified 3D PSF, leading to substantially reduced computational cost by more than two orders of magnitude. The system was validated by high-resolution imaging and artifact-free reconstruction of various caliber and biological samples. In addition, we constructed a complete model for light propagation, image formation and reconstruction. Using this model, we established a generic design principle to facilitate any further development of FLFM, as a highly scalable method to meet broader imaging needs across various spatial levels.

2. Methods

2.1 Experimental setup

We constructed FLFM on an epifluorescence microscope (Nikon Eclipse Ti-U) using a 40×, 0.95NA objective lens (Nikon CFI Plan Apochromat Lambda 40×, 0.95 NA) (Fig. 1(a)). The sample stage was controlled by a nano-positioning system (Prior). The samples were excited with either a 647-nm or a 561-nm laser (MPB). The corresponding emitted fluorescence was collected using a dichroic mirror and an emission filter (T660lpxr (Chroma) and ET700/75 (Chroma), and T560lpxr (Chroma) and FF02-617/73 (Semrock), respectively). The imaging area on the NIP was adjusted by an iris (SM1D12, Thorlabs) to avoid overlapping light-field signals on the camera plane. A Fourier lens (FL, ${f_{\textrm{FL}}}$ = 75 or 100 mm, Thorlabs) performed optical Fourier transform of the image at the NIP, forming a 4f system with the tube lens (TL). A microlens array (MLA, S600-f28, RPC photonics; ${f_{\textrm{MLA}}}$ = 16.8 mm; pitch = 600 µm) was placed at the back focal plane of the FL, thus conjugated to the back pupil of the objective. The light field was imaged using a 1:1 relay lens (Nikon AF-S VR Micro-Nikkor 105 mm f/2.8G IF-ED) and recorded on a scientific complementary metal-oxide-semiconductor (sCMOS) camera (Andor Zyla 4.2 PLUS sCMOS) at the back focal plane of the MLA.


Fig. 1. Fourier light-field microscopy (FLFM). (a) A schematic of the experimental setup for FLFM. The objective lens (OL) and the tube lens (TL) form an image at the native image plane (NIP), whose area is adjusted by an iris. The Fourier lens (FL) transforms the image at the NIP to the back focal plane of the FL, where the microlens array (MLA) is situated. The light-field information is recorded by the sCMOS camera at the back focal plane of the MLA. DM, dichroic mirror; M, mirror. The inset diagram illustrates image formation through the MLA for emitters at different axial positions, implying recording of both the spatial and angular information in an uncompromised manner. (b, c) 3D perspective view (b) and stack projection (c) of the simulated PSF through a 5×5 MLA (effective pitch = 89 µm in the object space) within an axial range from −40 µm to 40 µm. (d) The corresponding experimental PSF of the system across the same range, showing good agreement with the numerical model. The depth information in (c) and (d) is coded as shown in the color-scale bars.


2.2 Image formation and model for system characterization

Image formation in the FLFM system is illustrated in the inset of Fig. 1(a) and described in detail in Appendices A-E. As seen, the light field from the image at the NIP is transformed to the FD by the FL, conjugated to the wavefront at the back pupil of the objective lens. The MLA segments the wavefront, thus transmitting the corresponding spatial frequencies (i.e. the angular information) after individual microlenses to form images in different regions on the camera. Using this scheme, the spatial and angular components can be recorded in an uncompromised and well-aliased manner. This advantage results in image formation free of redundancy near the NIP and uniform across the depth of field (DOF), permitting high-quality volumetric reconstruction without artifacts.

We developed the theoretical framework that integrates the complete set of FLFM parameters and thus allows for model-based system characterization (Appendices B-E). The model links the input parameters of FLFM such as the wavelength, NA, magnification, camera pixel and sensor size, and the focal length of the FL, with the system performance parameters, including the lateral (${R_{xy}}$) and axial (${R_z}$) resolution, the FOV, and the DOF. For example, for the setup described above, using the FL of ${f_{\textrm{FL}}}$ = 75 mm, these system performance values can be obtained as ${R_{xy}} = 2.12$ µm, ${R_z} = 4.70$ µm, $FOV = 67$ µm and $DOF = 64$ µm. The performance indicates high resolution over a two- to three-fold extended depth, compared to conventional LFM using a similar objective [4,5]. The model and its results were validated using various numerical and experimental methods as demonstrated in Section 3 and extended to generate a generic FLFM design principle in Section 4.
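To make these numbers concrete, the short script below evaluates the closed-form expressions derived in Appendices B-E for the ${f_{\textrm{FL}}}$ = 75 mm configuration. It is a minimal sketch in Python/NumPy under our own variable names, not the authors' released code; small rounding differences from the quoted values (e.g. a DOF of ∼62 µm here versus the rounded 64 µm) are expected.

```python
# Sketch: FLFM performance parameters from the closed-form expressions in
# Appendices B-E, for the Section 2.1 setup with f_FL = 75 mm. Variable names
# and this script are illustrative, not the authors' code.
import numpy as np

lam   = 680e-9      # emission wavelength (m)
NA    = 0.95        # objective numerical aperture
M     = 40          # objective magnification
P     = 6.5e-6      # camera pixel size (m)
f_FL  = 75e-3       # Fourier-lens focal length (m)
f_MLA = 16.8e-3     # microlens focal length (m)
d_MLA = 600e-6      # microlens pitch (m)

NA_ML = d_MLA / (2 * f_MLA)             # paraxial NA of a single microlens
N     = 2 * NA * f_FL / (M * d_MLA)     # occupancy ratio (Appendix H, item 3)
S_r   = lam / (2 * NA_ML) / P           # pixel sampling rate (Appendix H, item 4)

R_xy = lam * N / (2 * NA)                                   # lateral resolution (Appendix B)
R_z  = lam * N**2 / (4 * np.sqrt(2) * NA**2)                # axial resolution (Appendix C)
FOV  = d_MLA * f_FL / f_MLA / M                             # field of view (Appendix D)
DOF  = 2 * (N**2 * lam / NA**2 + N**2 * lam / (2 * S_r * NA**2))  # depth of field (Appendix E)

print(f"N = {N:.3f}, S_r = {S_r:.2f}")
print(f"R_xy = {R_xy*1e6:.2f} um, R_z = {R_z*1e6:.2f} um")
print(f"FOV = {FOV*1e6:.1f} um, DOF = {DOF*1e6:.1f} um")
```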

2.3 Model of light-field propagation

Projecting the 3D volume in the object domain to the 2D image space, the wavefunction at the NIP produced by the high-NA objective is predicted by the Debye theory as [17]:

$$U_i(\mathbf{x},\mathbf{p}) = \frac{M}{f_{obj}^2\lambda^2}\exp\left[-\frac{iu}{4\sin^2(\alpha/2)}\right]\int_0^{\alpha} P(\theta)\exp\left[\frac{iu\sin^2(\theta/2)}{2\sin^2(\alpha/2)}\right]J_0\left[\frac{\sin(\theta)}{\sin(\alpha)}v\right]\sin(\theta)\,d\theta$$
where ${f_{obj}}$ is the focal length of the objective, and ${J_0}$ is the zeroth-order Bessel function of the first kind. The variables v and u represent the normalized radial and axial coordinates, defined by $v = k{[{{{({{x_1}/M - {p_1}} )}^2} + {{({{x_2}/M - {p_2}} )}^2}} ]^{1/2}}\sin (\alpha )$ and $u = 4k{p_3}{\sin ^2}({\alpha /2} )$; ${{p}} = ({{p_1},{p_2},{p_3}} )\in {{\mathbb R}^3}$ is the position of a point source in a volume in the object domain; $\textrm{x} = ({{x_1},{x_2}} )\in {{\mathbb R}^2}$ represents the coordinates on the NIP; M is the magnification of the objective; the half-angle of the NA is $\alpha = {\sin ^{ - 1}}({\textrm{NA}/n} )$; and the wavenumber $k = 2\pi n/\lambda $ is calculated using the emission wavelength $\lambda $ and the refractive index n of the immersion medium. For Abbe-sine-corrected objectives, the apodization function of the microscope $P(\theta )= {\cos ^{1/2}}(\theta )$ was used.
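As a numerical illustration of Eq. (1), the sketch below evaluates the Debye integral on a small NIP grid using a simple trapezoidal quadrature. It is an assumption-laden example rather than the authors' implementation: the objective focal length (taken as f_obj = 5 mm for a 40× objective with a 200-mm tube lens), the grid extent and the number of quadrature samples are illustrative choices.

```python
# Sketch: numerical evaluation of the Debye integral of Eq. (1) for the field
# U_i(x, p) at the NIP. Quadrature, grid sizes and f_obj are illustrative.
import numpy as np
from scipy.special import j0

def debye_field_nip(x1, x2, p, lam=680e-9, NA=0.95, M=40, n=1.0, f_obj=5e-3, n_theta=2000):
    """Complex field U_i at NIP coordinates (x1, x2) for a point source at p = (p1, p2, p3)."""
    alpha = np.arcsin(NA / n)
    k = 2 * np.pi * n / lam
    p1, p2, p3 = p
    v = k * np.sqrt((x1 / M - p1)**2 + (x2 / M - p2)**2) * np.sin(alpha)   # radial coordinate
    u = 4 * k * p3 * np.sin(alpha / 2)**2                                  # axial coordinate
    theta = np.linspace(0.0, alpha, n_theta)
    apod = np.sqrt(np.cos(theta))                         # Abbe-sine apodization P(theta)
    phase = np.exp(1j * u * np.sin(theta / 2)**2 / (2 * np.sin(alpha / 2)**2))
    bessel = j0(np.outer(np.ravel(v), np.sin(theta) / np.sin(alpha)))
    integrand = apod * phase * np.sin(theta) * bessel     # broadcast over the lateral grid
    integral = np.trapz(integrand, theta, axis=-1).reshape(np.shape(v))
    prefactor = (M / (f_obj**2 * lam**2)) * np.exp(-1j * u / (4 * np.sin(alpha / 2)**2))
    return prefactor * integral

# Example: NIP field on a 101 x 101 grid for an emitter 5 um below the focal plane.
xx = np.linspace(-50e-6, 50e-6, 101)
X1, X2 = np.meshgrid(xx, xx)
U_nip = debye_field_nip(X1, X2, p=(0.0, 0.0, 5e-6))
print(U_nip.shape, np.abs(U_nip).max())
```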

Next, the image at the NIP, ${U_i}({\textrm{x},\textrm{p}} )$, is optically Fourier transformed onto the back focal plane of the FL, described as $O{\cal F}T[{{U_i}({\textrm{x},\textrm{p}} )} ]$, which is then modulated by the MLA, described using the transmission function $\phi ({\textrm{x}^{\prime}} )$, where $\textrm{x}^{\prime} = ({x_1^{\prime},x_2^{\prime}} )\in {{\mathbb R}^2}$ represents the coordinates on the MLA. Specifically, the aperture of a microlens can be described as an amplitude mask $\textrm{rect}({\textrm{x}^{\prime}/{d_{\textrm{MLA}}}} )$, combined with a phase mask $\exp \left( \frac{ - ik}{2{f}_{\textrm{MLA}}}\left\| \textrm{x}^{\prime} \right\|_2^2 \right)$. The modulation induced by a microlens is then described as:

$$\phi(\mathbf{x}') = \textrm{rect}\left(\mathbf{x}'/d_{\textrm{MLA}}\right)\exp\left(\frac{-ik}{2f_{\textrm{MLA}}}\left\|\mathbf{x}'\right\|_2^2\right)$$
where ${f_{\textrm{MLA}}}$ is the focal length of the MLA, and ${d_{\textrm{MLA}}}$ is the diameter of a single microlens (or the pitch of the MLA if the microlenses are tiled in a seamless manner). Thus, the modulation of the entire MLA, composed of periodic microlenses, can be described by convolving $\phi ({\textrm{x}^{\prime}} )$ with a 2D comb function $\textrm{comb}({\textrm{x}^{\prime}/{d_{\textrm{MLA}}}} )$, i.e. ${\Phi }({\textrm{x}^{\prime}} )= \phi ({\textrm{x}^{\prime}} )\otimes \textrm{comb}({\textrm{x}^{\prime}/{d_{\textrm{MLA}}}} )$, where ⊗ is the convolution operator.
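A discrete version of Eq. (2) and its periodic replication can be sampled directly on the Fourier-plane grid, as sketched below. Here the comb convolution is realized by wrapping each sample's coordinates into its local lenslet cell, the usual discrete equivalent for seamlessly tiled square lenslets; the grid size and sampling are illustrative choices, not taken from the authors' code.

```python
# Sketch: sampled MLA transmission Phi(x') of Eq. (2) on a square grid at the
# Fourier plane. Periodic tiling replaces the explicit comb convolution.
import numpy as np

def mla_transmission(nx=2048, dx=2e-6, d_mla=600e-6, f_mla=16.8e-3, lam=680e-9):
    """Complex transmission of a square-packed MLA on an nx-by-nx grid with pixel size dx (m)."""
    k = 2 * np.pi / lam
    coords = (np.arange(nx) - nx / 2) * dx
    X, Y = np.meshgrid(coords, coords)
    # Coordinates relative to the center of the lenslet cell containing each sample.
    xr = (X + d_mla / 2) % d_mla - d_mla / 2
    yr = (Y + d_mla / 2) % d_mla - d_mla / 2
    # rect aperture of one lenslet (identically 1 for seamless square tiling, kept for clarity)
    aperture = (np.abs(xr) <= d_mla / 2) & (np.abs(yr) <= d_mla / 2)
    phase = np.exp(-1j * k / (2 * f_mla) * (xr**2 + yr**2))   # thin-lens phase of each lenslet
    return aperture * phase

Phi = mla_transmission()
print(Phi.shape, Phi.dtype)
```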

The light field propagating from the MLA to the camera can be modelled using the Fresnel propagation over a distance of ${f_{\textrm{MLA}}}$ [18]:

$$h(\mathbf{x}'',\mathbf{p}) = \mathcal{F}^{-1}\left\{\mathcal{F}\left[O\mathcal{F}T[U_i(\mathbf{x},\mathbf{p})]\,\Phi(\mathbf{x}')\right]\times\exp\left[i2\pi f_{\textrm{MLA}}\sqrt{\left(\frac{1}{\lambda}\right)^2 - \left(f_x^2 + f_y^2\right)}\right]\right\}$$
where $\textrm{x}^{\prime\prime} = ({x_1^{\prime\prime},x_2^{\prime\prime}} )\in {{\mathbb R}^2}$ represents the coordinates on the camera plane, the exponential term is the Fresnel transfer function, ${f_x}$ and ${f_y}$ are the spatial frequencies in the camera plane, and ${\cal F}\{ \} $ and ${{\cal F}^{ - 1}}\{ \} $ represent the Fourier transform and inverse Fourier transform operators, respectively. $O{\cal F}T\{ \} $ is the optical Fourier transform performed by the FL. In practice, the Fresnel propagation over the distance of ${f_{\textrm{MLA}}}$ is divided into multiple steps for computational accuracy. The final intensity image $O({\textrm{x}^{\prime\prime}} )$ at the camera plane is described by:
$$O(\mathbf{x}'') = \int |h(\mathbf{x}'',\mathbf{p})|^2\, g(\mathbf{p})\, d\mathbf{p}$$
where $\textrm{p} \in {{\mathbb R}^3}$, as defined in Eq. (1), is the position in a volume containing isotropic emitters, whose combined intensities are distributed according to $g(\textrm{p} )$.
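The propagation of Eq. (3) and the incoherent summation of Eq. (4) can then be sketched as below. The field at the MLA plane is approximated by a scaled 2D FFT of the NIP field (the coordinate scaling by λf_FL is omitted for brevity), the transfer-function propagation is applied in a single step of length f_MLA rather than the multi-step splitting used in practice, and the function names are ours; the snippet is a simplified illustration, not the authors' implementation.

```python
# Sketch of Eqs. (3)-(4): MLA modulation, free-space propagation over f_MLA with
# the transfer-function (angular-spectrum) method, and incoherent summation.
import numpy as np

def optical_ft(U_nip):
    """Optical Fourier transform to the MLA plane (coordinate scaling by lambda*f_FL omitted)."""
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(U_nip)))

def propagate_to_camera(U_fd, Phi, dx, z=16.8e-3, lam=680e-9):
    """Field h at the camera plane after MLA modulation and propagation over z (m)."""
    field = U_fd * Phi                              # modulation by the MLA, Eq. (2)
    nx = field.shape[0]
    fx = np.fft.fftfreq(nx, d=dx)                   # spatial frequencies (1/m)
    FX, FY = np.meshgrid(fx, fx)
    arg = (1.0 / lam)**2 - (FX**2 + FY**2)
    H = np.exp(1j * 2 * np.pi * z * np.sqrt(np.maximum(arg, 0.0))) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)     # Eq. (3)

def camera_intensity(h_list, g_list):
    """Eq. (4): incoherent sum of |h|^2 over emitters weighted by their intensities g."""
    return sum(g * np.abs(h)**2 for h, g in zip(h_list, g_list))
```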

Using this model, the light-field propagation in FLFM at varying depths was demonstrated (Figs. 1(b) and 1(c)) and compared with the experimental data acquired by recording 200-nm fluorescent beads (T7280, ThermoFisher) at the same depths (Fig. 1(d)). As seen, the numerical and experimental data agreed well across the entire DOF and exhibited several unique features of the system. First, as in conventional telecentric microscopes, the lateral translation of an emitter produces no parallax among the images formed by the individual microlenses. Second, unlike the orthographic views produced by conventional microscopes, different axial depths lead to variations of the wavefront (i.e. composite spatial frequencies) at the back pupil. This axial information is coupled to the lateral displacement and recorded in a radially asymmetric manner (except for the microlens centered on the optical axis). Therefore, such parallel recording of signals in the FD allows the imaging system to be described by a unified 3D PSF (e.g. Figs. 1(b)–1(d)), rather than by a five-dimensional matrix as in the conventional, spatially varying LFM system. This provides a substantial computational benefit in the reconstruction process.

Furthermore, the model is fully compatible with various modules for phase variations (e.g. PSF engineering [10], aberrations, customized MLA, etc.) to accurately describe the actual performance of the FLFM system. In addition, such a complete framework of light propagation enables the reconstruction of those volumetric data, where experimentally measured PSFs may not be readily available [16], such as in intact, dynamic, or time-evolving biological samples.

2.4 Reconstruction algorithm

Mathematically, the reconstruction is an inverse problem to recover the radiant intensity at each point in the 3D volume, denoted by g, using the camera image O. They satisfy the relationship $O = Hg$, where the measurement matrix H is determined by the PSF, and its elements ${h_{j,k}}$ represent the projection of the light from the kth voxel in the object space onto pixel $O(j )$ on the camera. To obtain g, the inverse problem is solved iteratively as:

$$g^{(k+1)} = \textrm{diag}\left[\textrm{diag}\left(H^{T}Hg^{(k)}\right)^{-1}\left(H^{T}O\right)\right]g^{(k)}$$
where the operator $\textrm{diag}\{ \} $ forms a diagonal matrix from a vector. This expression is a modified deconvolution algorithm based on the Richardson-Lucy iteration scheme [19,20]. In our case, the sampling intervals for the reconstruction of the object space are given as ${\Delta _{xy}} = 0.943$ µm and ${\Delta _z} = 0.5$ µm. For visualization, we additionally interpolated 4×4 pixels into each reconstructed pixel in x and y.

The 3D deconvolution conducts forward projection ($H{g^{(k )}}$) and backward projection (${H^T}O$ and ${H^T}H{g^{(k )}}$) iteratively between the 3D object space and the 2D camera plane. Furthermore, because of the spatially invariant nature of the system, the 3D PSF for reconstruction can be simplified to $\textrm{PSF}({\textrm{x}^{\prime\prime},z} )= {|{h({\textrm{x}^{\prime\prime},\textrm{p}} )} |^2}$, where the PSF can be described by an emitter located on the optical axis, i.e. $\textrm{p} = ({0,0,z} )$. As a result, the forward projection can be described as a sum of 2D convolutions layer by layer over an axial range of [${z_0}, {z_1}$], i.e. $H{g^{(k )}} = \mathop \sum \nolimits_{z = {z_0}}^{z = {z_1}} \textrm{PSF}({\textrm{x}^{\prime\prime},z} )\otimes {g^{(k )}}(z )$, where ${g^{(k )}}(z )$ is the single layer of the 3D object located at the depth z. The back projection can thus be given as $[{{H^T}O} ](z )= \textrm{PSF}({\textrm{x}^{\prime\prime},z} )\otimes O$ and $[{{H^T}H{g^{(k )}}} ](z )= \textrm{PS}{\textrm{F}^{\prime}}({\textrm{x}^{\prime\prime},z} )\otimes H{g^{(k )}}$, where $\textrm{PS}{\textrm{F}^{\prime}}({\textrm{x}^{\prime\prime},z} )$ is obtained by rotating $\textrm{PSF}({\textrm{x}^{\prime\prime},z} )$ by 180 degrees. As seen, this simplified computational scheme using the unified 3D PSF effectively reduces the dimensionality of the deconvolution, allowing a substantial reduction of the reconstruction time by orders of magnitude. Furthermore, owing to the reliable alignment of the system and the consistency between the numerical and experimental PSFs, no image registration or matching was performed in the reconstruction process, although such steps could be employed to further improve the image quality. An algorithm flow chart is provided in Appendix F (Fig. 14) for readers' information, and the code is available from the corresponding author upon request.
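A compact sketch of this layer-wise scheme is given below: the forward projection sums 2D convolutions of each depth layer with the corresponding PSF slice, both adjoint operations are implemented as convolutions with the 180°-rotated PSF, and the multiplicative update follows Eq. (5). It is an illustrative re-implementation under those assumptions, not the code available from the corresponding author.

```python
# Sketch of the Eq. (5) update using the layer-wise 2D-convolution projections.
# psf_stack has shape (nz, ny, nx) and holds the unified 3D PSF; img is the
# 2D camera frame; FFT-based convolution is used for speed.
import numpy as np
from scipy.signal import fftconvolve

def flfm_reconstruct(img, psf_stack, n_iter=30, eps=1e-12):
    """Reconstruct a (nz, ny, nx) volume from a single FLFM camera frame."""
    nz = psf_stack.shape[0]
    psf_flip = psf_stack[:, ::-1, ::-1]                     # PSF rotated by 180 degrees
    vol = np.ones((nz,) + img.shape, dtype=np.float64)      # non-negative initial guess

    for _ in range(n_iter):
        # Forward projection H g: sum over depth of PSF(z) convolved with g(z)
        fwd = np.zeros_like(img, dtype=np.float64)
        for z in range(nz):
            fwd += fftconvolve(vol[z], psf_stack[z], mode='same')
        # Back projections H^T O and H^T H g, then the multiplicative update of Eq. (5)
        for z in range(nz):
            num = fftconvolve(img, psf_flip[z], mode='same')
            den = fftconvolve(fwd, psf_flip[z], mode='same')
            vol[z] *= num / (den + eps)
    return vol
```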

3. Results

3.1 Imaging caliber structures

Using the 647-nm laser, we first imaged sub-diffraction-limit 200-nm dark-red fluorescent beads (T7280, ThermoFisher) and measured the 3D reconstructed images at varying depths (Fig. 2(a)). The full-width at half-maximum (FWHM) values of these reconstructed images at each depth were ∼2 µm and ∼4–5 µm in the lateral and axial dimensions, respectively, consistent with the theoretical values of 2.12 µm and 4.70 µm (Section 2.2, Appendices B and C), respectively. Furthermore, two beads separated by 2.90 µm can be resolved using FLFM, in good agreement with the FWHM values (Fig. 2(b)). It should be noted that the wave-optics model of FLFM allows for high-resolution reconstruction through PSF deconvolution. In contrast, the conventional ray-optics based integral model cannot provide sufficient resolution [14,15] (Fig. 2(b)).


Fig. 2. Characterization of FLFM and imaging caliber samples. (a) Left panel, 3D view of a reconstructed 200-nm fluorescent bead using the MATLAB function ‘isosurface’. Right panel, the reconstructed cross-sectional images and the corresponding profiles in x-y, y-z and x-z across the center of the bead, exhibiting FWHM values of 1.98 µm, 2.07 µm, and 4.39 µm in x, y, and z, respectively. (b) Wide-field (i) and raw FLFM (ii) images of 200-nm fluorescent beads located on the focal plane. The inset in (i) shows the zoomed-in image of the boxed region. A reconstructed axial stack of (ii) using wave-optics deconvolution (iii) and ray-optics integral (iv) models. (v) and (vi) show the zoomed-in images (left) and interpolated cross-sectional profiles (right) of the corresponding boxed regions in (iii) and (iv), respectively. FLFM with the wave-optics model has been shown to resolve two beads separated by 2.90 µm, confirmed by the inset in (i). (c) The reconstructed cross-sectional images (left panel) and profiles along the dashed lines (right panel) of a surface-stained 6-µm fluorescent bead using FLFM with the wave-optics (top row) and ray-optics (middle row) models. The hollow structure was clearly observed using the wave-optics model, while the ray-optics model failed to provide sufficient resolution. The same sample was also imaged using conventional LFM (bottom row) near the NIP, where strong artifacts prohibited proper visualization of lateral and axial structures. (d) Raw light-field (top) and reconstructed 3D FLFM (bottom) images of 200-nm fluorescent beads distributed in a volume. The reconstructed axial positions of four beads were identified at −28 µm, −22 µm, −9 µm and −5 µm. The dashed lines in the raw image represent the edges of the square-shaped microlenses. (e) Top left, 3D reconstructed trajectories of 200-nm fluorescent beads suspended in water and axially separated by > 30 µm, tracked at a volume acquisition time of 100 ms (see Visualization 1). Top right, zoomed-in 3D trajectory of the corresponding boxed region in the left image. Bottom, the trajectories of the beads in x-y, x-z, and y-z. Different time-points are linearly color-coded from 0 to 4 s. The FLs of ${f_{\textrm{FL}}} = 75\textrm{ mm}$ (a, c, d) and ${f_{\textrm{FL}}} = 100\textrm{ mm}$ (b, e) were employed. Scale bars: 2 µm (a, b (i, v, vi)), 5 µm (c), 20 µm (b (ii), d), 10 µm (b (iii, iv), e).


Next, we imaged surface-stained, 6-µm fluorescent microspheres (F14807, ThermoFisher), whose hollow structure was clearly resolved using FLFM (top row, Fig. 2(c)). The corresponding lateral and axial cross-sectional profiles exhibited FWHM values of the stained surface of 1–2 µm and 2–4 µm in the lateral and axial dimensions, respectively, consistent with the theoretical values and experimental measurements in Fig. 2(a). Notably, the structure of the microsphere revealed by FLFM agreed well with our numerical results (Appendix G). We further analyzed in Appendix G the corresponding resolving capability of FLFM and the influence of any intrinsic cross-talk among the overlapping 3D information, which can be readily mitigated by adjusting ${f_{\textrm{FL}}}$, the focal length of the FL.

For comparison, we reconstructed the same structure using the integral imaging model, in which case it was poorly resolved due to the limited 3D resolution (middle row, Fig. 2(c)). Furthermore, we imaged the microsphere using conventional LFM, i.e. by placing the MLA (MLA-S125-f30, RPC photonics; ${f_{\textrm{MLA}}}$ = 3.75 mm; pitch = 125 µm) on the NIP (bottom row, Fig. 2(c)). In this case, prohibitive artifacts were observed near the NIP, extending over a significant range of the DOF. The result showed substantially degraded volumetric imaging capability, failing to display the structure in the lateral dimension and leading to erroneously reconstructed patterns in the axial dimension.

Using FLFM, we also imaged and reconstructed 200-nm fluorescent beads distributed in 3D space (Fig. 2(d)). Four emitters located at depths of −28 µm, −22 µm, −9 µm and −5 µm were reconstructed from one camera frame without the axial scanning required by wide-field microscopy. Lastly, we tracked the movement of 200-nm fluorescent beads suspended in water at volume acquisition times of 10 ms (Fig. 2(e) and Visualization 1), 40 ms (Visualization 2) and 100 ms (Visualization 3). The 3D positions and trajectories of the particles were determined by localizing the reconstructed particles using Gaussian fitting with nanometer-level precision in all three dimensions. The measurements of the reconstructed particles are consistent with the values in Fig. 2(a), showing no compromise in spatial precision as the volume acquisition rate is varied.
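The localization step can be sketched as follows: each reconstructed bead is localized by fitting 1D Gaussians to the intensity profiles through its brightest voxel, with the voxel pitches of Section 2.4. The per-axis fitting strategy and the function names are our assumptions for illustration rather than the authors' exact procedure.

```python
# Sketch: localize the brightest bead in a reconstructed (nz, ny, nx) volume by
# 1D Gaussian fits along x, y and z through the peak voxel (illustrative only).
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, a, mu, sigma, b):
    return a * np.exp(-(x - mu)**2 / (2 * sigma**2)) + b

def localize_bead(vol, dz=0.5, dxy=0.943):
    """Return the (x, y, z) bead position in micrometers."""
    zi, yi, xi = np.unravel_index(np.argmax(vol), vol.shape)
    profiles = {
        'x': (vol[zi, yi, :], dxy),
        'y': (vol[zi, :, xi], dxy),
        'z': (vol[:, yi, xi], dz),
    }
    center = {}
    for axis, (prof, pitch) in profiles.items():
        coords = np.arange(prof.size) * pitch
        p0 = [prof.max() - prof.min(), coords[np.argmax(prof)], 2.0, prof.min()]
        popt, _ = curve_fit(gauss, coords, prof, p0=p0)
        center[axis] = popt[1]                      # fitted Gaussian center
    return center['x'], center['y'], center['z']
```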

3.2 Imaging biological samples

We then demonstrated FLFM by imaging mixed pollen grains stained with hematoxylin and phloxine B (304264, Carolina Biological Supply) using the 561-nm laser (Fig. 3). As seen, the raw data behind the individual 3 × 3 microlenses reveal different perspective views of the sample (Fig. 3(a)). The system captured a FOV of >67 × 67 µm and allows reconstruction across a DOF of >50 µm, enclosing the entire pollen grain, at a volume acquisition time of 0.1 s. The full 4D light-field information recorded by a single camera frame allows focal stacks of the full volume of the specimen to be synthesized (Figs. 3(b) and 3(c)). Remarkably, FLFM recovered structures that were out of focus and poorly sectioned in wide-field microscopy (e.g. insets in Fig. 3(c)). The high spatial resolution allows us to visualize the fine spine structures of the pollen, which exhibited FWHM values of ∼1–2 µm in width and ∼5 µm in length and were separated by as little as a few micrometers in all three dimensions (Figs. 3(c) and 3(d)). It should be mentioned that these measured system parameters are consistent with our theoretical predictions in Section 2.2, and the use of denoising or thresholding strategies should further reduce the crosstalk between the axial stacks caused by the limited axial resolution and the overlap of 3D data during projection onto the 2D camera sensor.


Fig. 3. Imaging pollen grains using FLFM. (a,b) Raw light-field (a) and reconstructed 3D FLFM (b) images of a pollen grain stained with hematoxylin and phloxine B. The spines of the pollen oriented into three dimensions can be observed in (b). The depth information across a 50-µm range in (b) is color-coded according to the color scale bar. (c) Selected z-stack images. The insets show the zoomed-in FLFM (left) and wide-field (right) images at z = 2 µm of two spines (both are 12.03 × 12.03 µm), respectively. The results show sensitive axial discrimination and 3D resolution of the pollen structure using FLFM. (d) Corresponding cross-sectional profiles along the dashed line in (c) at z = 2 µm. The profiles exhibited two spines separated by ∼4 µm and their FWHM values of 1–2 µm resolved by FLFM. Scale bars: 20 µm (a), 10 µm (b, c).


Next, we imaged a mouse kidney tissue slice (F24630, ThermoFisher) using the 561-nm laser, where the filamentous actins, prevalent in the glomeruli and the brush border, were stained with Alexa Fluor 568 phalloidin (Fig. 4 and Visualization 4). FLFM recorded the 4D light-field information of the proximal tubule brush border (Fig. 4(a)) and allows reconstruction of the entire thickness of the tissue slice (∼20 µm) from one camera frame (Fig. 4(b)). In the reconstructed image, the pattern of proximal tubules and brush borders can be well visualized in all three dimensions. It is also noticeable that the apical domain of the brush border of the tubules exhibited brighter staining, consistent with the denser distribution of actin bundles. Compared to wide-field microscopy, FLFM is capable of capturing the volumetric signals from a single camera frame and recovering synthesized axial stacks. The high resolution and sectioning capability of FLFM allow finer 3D structural variations to be visualized (Figs. 4(c) and 4(d)). Tubular structures of a few micrometers across the DOF in all three dimensions can be well resolved (Figs. 4(c)–4(f)).


Fig. 4. Imaging mouse kidney tissue using FLFM (see Visualization 4). (a,b) Raw light-field (a) and reconstructed 3D FLFM (b) images of a cryostat section of mouse kidney stained with Alexa Fluor 568 phalloidin using a 75-mm Fourier lens. The inset in (b) shows the cross-sectional view in x-z of the corresponding layer marked in (b). The color represents intensity levels in (b). (c, d) Axial-stack images of the reconstructed volume by FLFM (c) and by scanning wide-field microscopy (d). The arrows indicate the enhanced signals and sectioning capability of FLFM that reveal fine 3D structural changes. The grids in the images are the edges of the microlenses when acquiring the corresponding wide-field images through the MLA. (e) The cross-sectional profile along the axial dimension of the region as marked in (b), exhibiting a FWHM value of 4.75 µm. (f) The cross-sectional profile along the dashed line in (c), exhibiting resolved filaments at z=−1.5 µm separated by 3.3 µm. Scale bars: 20 µm.


Lastly, it should be noted that for all the above imaging tasks, we used standard GPU computation on a desktop computer (Intel(R) CPU i7-6850K (3.60 GHz), 128GB RAM, NVIDIA GeForce GTX1080Ti (Default/Turbo Clock 1480.5/1582MHz)). For example, conventional LFM typically takes an average of 31 s per deconvolution iteration to reconstruct a volume of 67 × 67 × 30 µm with a voxel size of 0.943 × 0.943 × 0.5 µm from a single camera frame of 210 × 210 pixels. In comparison, FLFM requires an average of 0.34 s per iteration for an image of the same size, representing roughly a two-order-of-magnitude improvement in computational cost (both methods use a similar total of 10–100 iterations for reconstruction). The reconstruction speed can be further accelerated using cluster computing and advanced deconvolution algorithms. The scheme thereby paves the way toward overcoming the rate-limiting computational step for live imaging and rapid, video-rate reconstruction.

4. General design principle of FLFM

In this work, we have established the complete theoretical and algorithmic framework underlying light propagation, image formation and system characterization of FLFM. Here, we further develop this framework into a general design principle for FLFM, essential for advancing this highly scalable approach to meet broader imaging needs across various spatial levels.

As seen in Fig. 1(a), as well as the top panel in Table 1, the FLFM system is composed of two main modules: the original wide-field imaging module including the camera, and the light-field module including the FL and the MLA. Given a standard wide-field system, designing an FLFM system is essentially a matter of identifying the parameters of the light-field module.


Table 1. Design principle of FLFM.

First, we summarized the complete set of FLFM parameters, which can be organized into three categories: 1) the input parameters of the wide-field imaging module, including the emission wavelength λ, the numerical aperture NA and the magnification M of the objective, the camera pixel size P and the physical size of the sensor ${D_{\textrm{cam}}}$; 2) the performance parameters of the intended FLFM imaging capability, including the lateral and axial resolution ${R_{xy}}$ and ${R_z}$, the pixel sampling rate ${S_r}$ (i.e. the ratio between ${R_{xy}}$ and the effective pixel size $P/{M_T}$, where ${M_T}$ is the magnification of the FLFM system), the FOV, and the DOF; 3) the design parameters describing the light-field imaging module, including the focal length of the Fourier lens ${f_{\textrm{FL}}}$, the diameter of the microlens ${d_{\textrm{MLA}}}$, the occupancy ratio N (i.e. the ratio between the effective pupil size at the MLA ${D_{\textrm{pupil}}}$ and ${d_{\textrm{MLA}}}$), the focal length of the microlens ${f_{\textrm{MLA}}}$, the pitch of the MLA ${d_1}$, and the distance ${d_{\textrm{max}}}$ from the outermost microlens covered by the illumination beam to the center of the MLA. It should be mentioned that we considered hexagonal MLAs here (e.g. the right of the top panel of Table 1) for the best radial symmetry and dense packing to obtain optimum light-field reconstruction, although the strategy is applicable to various MLA patterns. The goal of the design is thus to identify the design parameters in the third category.

Next, we derived theoretical relationships between these parameters as an inverse problem based on the model of system characterization developed in Section 2.2. As listed in Table 1 and derived in Appendix H, the design parameters can be determined using these relationships together with the input and performance parameters. Specifically, the input parameters are usually fixed by the wide-field imaging system, while the developer decides on the desired performance parameters of FLFM. It should be noted that the identification of the proper performance and design parameters can be iterative due to their interdependent relationships (e.g. the lateral and axial resolution ${R_{xy}}$ and ${R_z}$ are intrinsically connected with ${d_{\textrm{MLA}}}$ and ${d_{\textrm{max}}}$). In practice, only three independent performance parameters, such as ${R_{xy}}$, ${S_r}$ and FOV, are initially required to determine all the design parameters and the remaining performance parameters. The developer can then adjust the initial values and iterate the process to obtain an optimum combination of elements satisfying the imaging need.

As an illustration, we can customize the performance of the FLFM system reported in this work using this design protocol. First, the input parameters are given as λ = 680 nm, NA = 0.95, M = 40, P = 6.5 µm, and ${D_{\textrm{cam}}}$ = 13.3 mm. Next, our desired performance, for instance, is to achieve a 3D resolution of ∼2–3 µm, a large FOV > 500 µm, and a DOF > 50 µm for a specific imaging need. We first set $ {D_{\textrm{pupil}}}$ equal to ${D_{\textrm{cam}}}$ = 13.3 mm for the maximum use of the camera sensor. Then, using the model, we can generate various combinations of the performance parameters that meet the requirement. One set of such parameters could be ${R_{xy}}$ = 2.14 µm, ${R_z}$ = 2.79 µm, ${S_r}$ = 1.56, MT = 4.74, FOV = 561 µm, and DOF = 71.2 µm. Hence, the design parameters for the light-field module can be obtained as ${f_{\textrm{FL}}}$ = 280 mm, ${d_{\textrm{MLA}}}$ = 2.2 mm, N = 6, ${f_{\textrm{MLA}}}$ = 33.2 mm, ${d_1}$ = 2.66 mm, and ${d_{\textrm{max}}}$ = 5.32 mm.
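The worked example above can be reproduced by chaining the Table 1 / Appendix H relations, as in the short script below. The wrapper is an illustrative sketch rather than released design software; it returns the quoted design values to within rounding.

```python
# Sketch of the Table 1 / Appendix H design workflow: input parameters plus three
# chosen performance parameters (R_xy, S_r, FOV) determine the light-field module.
import numpy as np

def flfm_design(lam, NA, M, P, D_cam, R_xy, S_r, FOV):
    f_FL  = D_cam * M / (2 * NA)               # item 1: D_pupil set equal to D_cam
    N     = R_xy / (lam / (2 * NA))            # item 2: occupancy ratio
    d_MLA = D_cam / N                          # item 3
    f_MLA = S_r * P * f_FL / (R_xy * M)        # item 4
    d_1   = FOV * M * f_MLA / f_FL             # item 5
    d_max = d_1 * np.floor((D_cam - d_MLA) / (2 * d_1))   # item 6
    R_z   = d_MLA * R_xy**2 / (lam * d_max)    # item 7
    DOF   = 2 * (4 + 2 / S_r) * R_xy**2 / lam  # item 8
    M_T   = S_r * P / R_xy                     # item 9
    return dict(f_FL=f_FL, d_MLA=d_MLA, N=N, f_MLA=f_MLA, d_1=d_1,
                d_max=d_max, R_z=R_z, DOF=DOF, M_T=M_T)

design = flfm_design(lam=680e-9, NA=0.95, M=40, P=6.5e-6, D_cam=13.3e-3,
                     R_xy=2.14e-6, S_r=1.56, FOV=561e-6)
for key, val in design.items():
    print(f"{key:6s} = {val:.4g}")
```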

5. Conclusion

In summary, we have developed FLFM to achieve high-quality light-field imaging and rapid volumetric reconstruction. Imaging the light field in the FD fundamentally mitigates the prohibitive artifacts, providing two- to three-fold larger DOF than the previous LFM design. The design also substantially reduces the computational cost by more than two orders of magnitude. We have established the theoretical and algorithmic models to describe the system and demonstrated high-resolution, artifact-free imaging of various caliber and biological samples. Furthermore, the work provided a generic design principle for constructing and customizing FLFM to address broad imaging needs at various spatial scales.

The study has overcome the two major limitations in the current LFM methods and may advance new imaging physics and applications for LFM and MLA-facilitated microscopy. The advancements in both imaging capability and computation speed promise further development toward high-resolution, volumetric data acquisition, analysis and observation at the video rate and ultimately in real time. Combining molecular specificity, great scalability, engineered MLAs and advanced computing, we anticipate FLFM to be a particularly powerful tool for imaging diverse phenotypic and functional information, spanning broad molecular, cellular, and tissue systems.

Appendix A: Image formation in traditional LFM and FLFM

Here, we demonstrated both the numerical and experimental data for light propagation in traditional LFM and FLFM (Figs. 5 and 6). We also provided the corresponding image formation and reconstruction for traditional LFM (Fig. 7), for comparison with the result in Fig. 2(a).


Fig. 5. Experimental setup and light propagation for traditional LFM (top) and FLFM (bottom) using a 40×, 0.95NA objective lens. The insets on the right show the corresponding light-field images on the camera. NOP, native object plane; OL, objective lens; TL, tube lens; NIP, native image plane; FL, Fourier lens; CP, camera plane.



Fig. 6. x-z view of the PSFs across the central row of microlenses for FLFM (left panel) and traditional LFM (right panel). Numerical (top panel) and experimental (bottom panel) results show good agreement. The dashed lines represent the NIP, and the redundantly aliased region can be observed in the PSF of traditional LFM. The experimental results show background noise due to insufficient SNR. Scale bars: 10 µm.



Fig. 7. Left to right, raw light-field data, reconstructed 3D image, cross-sectional images at the NIP and the corresponding profiles of the 200-nm fluorescent bead located on the native object plane using traditional LFM. Compared to the result in Fig. 2(a), severe reconstruction artifacts can be observed for traditional LFM near the NIP.


Appendix B: Lateral resolution.

The lateral resolution ${R_{xy}}$ is analyzed as shown in Fig. 8. We take into account both the numerical aperture (NA) of the microlens and the diffraction-limited spot size on the sCMOS camera, to determine the final resolution of the FLFM system. Here, we first considered a point emitter in the object space, which is then imaged to the native image plane (NIP) through the wide-field microscopy system ($M = 40 \times $, NA = 0.95), followed by Fourier transform to the Fourier domain (FD) at the back focal plane of the Fourier lens (FL). The beam in the FD can be considered as collimated when the image of the point emitter is located on the NIP. According to the Abbe diffraction limit of the microlens, the beam spot on the sCMOS camera after each microlens is given as $\frac{\lambda }{{2N{A_{\textrm{ML}}}}}$, where $N{A_{\textrm{ML}}}$ is the NA of each microlens, $\lambda $ is the wavelength of the emission. This defines the lateral resolution of the images from two emitters on the NIP. Converting to the object space, the lateral resolution of the whole system can be determined by this diffraction limit of the microlens, given as ${R_{xy}} = \frac{\lambda }{{2N{A_{\textrm{ML}}}}} \times \frac{{{f_{\textrm{FL}}}}}{{{f_{\textrm{MLA}}}}} \times \frac{1}{M} = \frac{{\lambda N}}{{2NA}}$. Here, the paraxial approximation permits $2N{A_{\textrm{ML}}}{f_{\textrm{MLA}}} = {d_{\textrm{MLA}}}$. For example, given the parameter values of ${f_{\textrm{FL}}} = 75$mm and ${d_{\textrm{MLA}}} = 600$ µm, $N = 5.9375$ in our system. The lateral resolution is thus given as ${R_{xy}} = 2.12$ µm. As seen from the expression, the lateral resolution can be improved by enlarging ${d_{\textrm{MLA}}}$ or reducing ${f_{\textrm{FL}}}$. In addition, we validated the calculated lateral resolution by simulating PSF distributions using the wave-optics model for light propagation as described in Section 2.3 of the main text. We simulated the light field of point emitters at varying depths and reconstructed their images. As seen, the FWHM values of the reconstructed images increase (i.e. the lateral resolution becomes worse) when the emitter is moved further away from the focal plane (Fig. 9).


Fig. 8. Analysis of the lateral resolution of FLFM. The figure shows the model of light propagation, image formation and symbols to determine the lateral resolution.



Fig. 9. Numerical study of the FWHM values of the reconstructed images of a point emitter in all three dimensions, as a function of axial positions from −20 µm to 20 µm. 5 iterations were used for deconvolution, consistent with Fig. 2(a).


Appendix C: Axial resolution.

We utilized theoretical and numerical models to analyze the axial resolution ${R_z}$ in this section. As shown in Fig. 10(a), we considered two patterns on the camera plane, generated respectively from two emitters located at different axial positions on the optical axis. We reasoned that if the elemental images of these two patterns after the outermost microlenses, described as (−1, −1), (−1, 1), (1, −1), and (1, 1) (top right panel, Fig. 10(a)), can be resolved (i.e. by the diffraction limit of $\frac{\lambda }{{2N{A_{\textrm{ML}}}}}$ of each microlens), the axial positions of the two emitters can be resolved through the reconstruction process. We assume that the PSF experiences negligible change within the axial range of interest, i.e. maintaining the width of $\frac{\lambda }{{2N{A_{\textrm{ML}}}}}$. In this sense, the axial resolution can be calculated from ray optics directly. In our case, it is given as ${R_z} = \frac{\lambda }{{2N{A_{\textrm{ML}}}}} \times \frac{{{f_{\textrm{FL}}}}}{{{f_{\textrm{MLA}}}}} \times \frac{1}{{\tan ({\theta}^{\prime} )}} \times \frac{1}{{{M^2}}} = \frac{{\lambda {N^2}}}{{4\sqrt 2 \,\textrm{NA}^2}}$, where $\tan ({\theta}^{\prime} )= \frac{{\sqrt 2 {d_{\textrm{MLA}}}}}{{{f_{\textrm{FL}}}}}$ and $2N{A_{\textrm{ML}}}{f_{\textrm{MLA}}} = {d_{\textrm{MLA}}}$ based on the paraxial approximation. Plugging in the system parameters, we obtain the axial resolution ${R_z} = 4.7$ µm.


Fig. 10. Analysis of the axial resolution of FLFM. (a) Left, the model and symbols to determine the axial resolution. Right, symbols and indices of the corresponding microlenses. (b) Simulated light propagation and identification of axial distributions of the PSF after microlenses. Scale bar: 20 µm.


We assessed the theoretical axial resolution by simulating the PSF using our wave-optics model for light propagation as described in Section 2.3 of the main text (Figs. 10(b) and 10(c)). We first simulated light propagation in the 3D space (Fig. 10(b)), where the lateral displacement of the PSF generated by the (−1, 1) microlens as a function of the axial position can be identified by Gaussian fitting of the beam profile (Fig. 10(c)). The fitted curve exhibited a linear relationship and the slope was calculated as $p = 0.2763$, indicating that one-pixel change in the z direction leads to a shift of 0.2763 pixel in the x-y plane. In practice, as mentioned in Section 2.4, the pixel size in the x-y plane in object space is ${\Delta _{xy}} = 0.943$ µm, and the pixel size in z is ${\Delta _z} = 0.5$ µm. The axial resolution can thus be determined as ${R_z} = \frac{{{R_{xy}}}}{{p \times {\Delta _{xy}}}} \times {\Delta _z} = 4.1$ µm, consistent with the above theoretical value of 4.7 µm.

Next, we performed another numerical study to verify the axial resolution (Fig. 11). We studied two axially separated point emitters at (4 µm and 0 µm), (2 µm and −2 µm), and (0 µm and −4 µm), respectively. In Fig. 11, the left panel shows the raw light-field images generated by the two point emitters at different depths; the middle panel shows the 3D reconstructed emitters and their perspective and axial views; the right panel shows the profiles of the emitters along the dashed lines in the axial images. As seen, these emitters separated by 4 µm at varying depths can be resolved. In practice, due to noise and other system imperfections, the axial resolution in our experiments was found to be close to 5 µm, still consistent with the theoretical value of 4.7 µm. Like the lateral resolution, the axial resolution also varies along the axial dimension, as shown in Fig. 9. As seen, the axial FWHM values increase from ∼5 µm to 10 µm when the emitter is moved from the focal plane to 20 µm away from the plane.


Fig. 11. Analysis of the axial resolution of FLFM by reconstructing two axially separated point emitters. Scale bars: 20 µm (left panel), 2 µm (middle panel).


Appendix D: Field of view (FOV).

The FOV around the focal position in the object space can be identified from the model shown in Fig. 12. As seen, the FOV is determined by the image area after each individual microlens. Therefore, the FOV depends on the size and focal length of the microlens and the focal length of the FL, given as $FOV = {d_{\textrm{MLA}}} \times \frac{{{f_{\textrm{FL}}}}}{{{f_{\textrm{MLA}}}}} \times \frac{1}{M}$. Using our system parameters, the value is given as FOV = 66.96 µm. In addition, the FOV can be customized by adjusting the FL and MLA, while maintaining the same spatial resolution. Furthermore, the FOV can be expressed as $FOV = {d_1} \times \frac{{{f_{\textrm{FL}}}}}{{{f_{\textrm{MLA}}}}} \times \frac{1}{M}$, when ${d_1} = {d_{\textrm{MLA}}}$. Substituting ${d_{\textrm{MLA}}} = \frac{{{D_{\textrm{cam}}}}}{N}$, $N = {R_{xy}}\frac{{2NA}}{\lambda }$, ${f_{\textrm{MLA}}} = \frac{{{S_r}P{f_{\textrm{FL}}}}}{{{R_{xy}}M}}$, and maintaining ${S_r} = \frac{\lambda }{{2NA}}\frac{M}{P}$, we have $FOV = {d_1} \times \frac{{{f_{\textrm{FL}}}}}{{{f_{\textrm{MLA}}}}} \times \frac{1}{M} = \frac{{{D_{\textrm{cam}}}}}{{{R_{xy}}\frac{{2NA}}{\lambda }}}\frac{{{f_{\textrm{FL}}}}}{{\frac{\lambda }{{2NA}}\frac{M}{P}\frac{{{f_{\textrm{FL}}}P}}{{{R_{xy}}M}}}}\frac{1}{M} = \frac{{{D_{\textrm{cam}}}}}{M}$. This suggests a type of FLFM design that, by engineering ${S_r}$, does not compromise the FOV. This condition can be fulfilled in most high-resolution wide-field fluorescence microscopes. Lastly, in our FLFM system, we have noticed that the FOV moderately decreases in a linear fashion as the sample is moved away from the focal plane.


Fig. 12. Analysis of the field of view (FOV) of FLFM. The figure shows the model and symbols to determine the FOV.


Appendix E: Depth of field (DOF).

In this section, we analyzed the DOF of FLFM (Fig. 13). Two factors should be considered when calculating the DOF:


Fig. 13. Analysis of the depth of field (DOF) of FLFM. (a) Ray optics sketch of the model and symbols to determine the DOF based on axial diffraction of light. (b) Simulated light propagation and identification of axial distributions of the PSF after microlenses. (c) Ray optics sketch of the model and symbols to determine the DOF based on the translation of the light-field pattern at varying depths. Scale bars: 20 µm.


First, the light-field information on the camera will become out of focus and the intensity will decrease to a threshold value as the emitter moves away from the focal plane along the axial direction. Therefore, the DOF can be calculated from the range of detectable intensity, considering the diffraction effect in the axial dimension. As shown in Figs. 13(a) and 13(b), the depth can be obtained from the FWHM value of the PSF behind the central microlens on the optical axis along the axial direction. Theoretically, the axial FWHM value of the PSF of the central microlens is $\frac{{{N^2}\lambda }}{{N{A^2}}} + \frac{{{N^2}\lambda }}{{2{S_r}N{A^2}}} = 32$ µm. Numerically, the measurement using our wave-optics model yielded an FWHM value of 33 µm (Fig. 13(b)), in good agreement with the theoretical result. In this work, because the reconstruction relies on deconvolution, which is known to recover the diffracted information outside of the Rayleigh range of the axial PSF, we used the full width of the axial PSF (i.e. twice the axial FWHM value) to describe the DOF. Therefore, both our theoretical and numerical models resulted in a DOF of 64 µm.

Second, when the object is away from the focal plane, the light-field pattern will spread out or shrink in the lateral dimension. The change of the pattern size is thus limited by the pitch of the MLA (Fig. 13(c)). This value is denoted as $DOF = {d_{\textrm{MLA}}} \times \frac{{{f_{\textrm{FL}}}}}{{{f_{\textrm{MLA}}}}} \times \frac{1}{{\textrm{tan}(\theta )}} \times \frac{1}{{{M^2}}}$, where $\textrm{tan}(\theta )= \frac{{{d_{\textrm{MLA}}}}}{{{f_{\textrm{FL}}}}}$, obtaining a value of 209 µm in our experimental system. Beyond this value, the light-field pattern of an emitter is not fully captured by one microlens and thus generates cross talk into the nearby microlenses. Here this value is much greater than the DOF of 64 µm limited by the diffraction.

These two factors set the restriction of the DOF of FLFM, and in this work, we used the minimum value of the two to determine the final DOF (i.e. 64 µm).

Appendix F: Algorithm flow chart.


Fig. 14. Algorithm flow chart of FLFM.


Appendix G: Analysis for reconstruction of the surface-stained structure.

In this section, we analyzed the structure of the reconstructed surface-stained microsphere observed in Fig. 2(c) of the main text. We simulated such a structure (diameter = 20 µm) and assessed the reconstruction results using different axial sampling step sizes (Fig. 15). As tighter sampling slices were taken from Figs. 15(a)–15(c), the reconstructed microsphere exhibited a stretched axial pattern, and the rugby-like profile was clearly observed in Fig. 15(c), similar to our experimental observation in the top row of Fig. 2(c). The stretch is attributed to cross talk among the axial slices as they become denser: more volumetric information is projected onto the limited 2D imaging space of the sensor, and the resulting interference of the light field distorts the original structure of the object during the deconvolution process.


Fig. 15. Analysis for the reconstruction of the surface-stained structure (diameter = 20 µm). (a) top row from left to right, the structure sampled with 5 slices, raw light-field image, reconstructed image of the surface-stained structure, cross-sectional images in x-z of the sampled structure, reconstructed image and the overlay, showing difference between the original and reconstructed structures due to the limited sampling step size and overlapping spatial information. (b) Same images with respect to (a) but with 11 slices. (c) Top rows are the same images with respect to (a) but with infinite slices (or sufficiently small step size to be exact), showing increasingly continuous structure and the enhanced stretching effect in the axial dimension (e.g. the 3rd image from the left in the top row of (c)) compared to (a) and (b). The bottom row of (c) demonstrates the agreement of the lateral patterns in x-y on the focal plane of the raw (bottom left), reconstructed (bottom middle) and overlay (bottom right) structures. The numerical results are consistent with our experimental observation in Fig. 2(c). Scale bars: 20 µm (c, left on the top row), 5 µm (c, right on the top row and the bottom row).


This problem can be readily resolved by adjusting the FL and the MLA with different parameters to improve the resolution of the system. Here, we numerically reconstructed the continuous microsphere (diameter = 20 µm) with a different FL (i.e. ${f_{\textrm{FL}}}$ changed from 75 to 50 mm), in which case the resolution is improved based on the analysis given above. This improvement mitigated the cross talk of the light field, recovering the original structure after reconstruction (Fig. 16).


Fig. 16. Mitigation of the cross talk using a different FL (${f_{\textrm{FL}}} = 50$ mm). Top row from left to right, the densely sampled structure, raw light-field image, and reconstructed image of the surface-stained structure (diameter = 20 µm). Bottom row from left to right, overlay of the original (red) and reconstructed (green) x-z (left) and x-y (middle) sections. Bottom right, the reconstructed hollow structure of the microsphere. Scale bars: 20 µm (top middle), 5 µm (bottom middle).


Appendix H: Notes on the derivation of the equations in Table 1.

  • 1. ${f_{FL}}$ can be directly calculated from the figure in the left panel of Table 1. As shown in the figure, the diameter of the back pupil can be given as ${D_{\textrm{pupil}}} = 2 \times {f_{\textrm{FL}}} \times \frac{{\textrm{NA}}}{M}$. As we set ${D_{\textrm{pupil}}} = {D_{\textrm{cam}}}$ for the maximum use of the camera sensor, the focal length of the FL is finally described as ${f_{\textrm{FL}}} = \frac{{{D_{\textrm{cam}}} \times M}}{{2\textrm{NA}}}$.
  • 2. As discussed earlier, the lateral resolution ${R_{xy}} = \frac{\lambda }{{2N{A_{\textrm{ML}}}}} \times \frac{{{f_{\textrm{FL}}}}}{{{f_{\textrm{MLA}}}}} \times \frac{1}{M}$, and given $N{A_{\textrm{ML}}} = \frac{{{d_{\textrm{MLA}}}}}{{2{f_{\textrm{MLA}}}}}$, the relationship between ${R_{xy}}$ and ${d_{\textrm{MLA}}}$ is presented as ${R_{xy}} = \frac{\lambda }{{2N{A_{\textrm{ML}}}}} \times \frac{{{f_{\textrm{FL}}}}}{{{f_{\textrm{MLA}}}}} \times \frac{1}{M} = \frac{{\lambda {f_{\textrm{MLA}}}}}{{{d_{\textrm{MLA}}}}} \times \frac{{{f_{\textrm{FL}}}}}{{{f_{\textrm{MLA}}}}} \times \frac{1}{M}$. We define N as the ratio between the FLFM resolution and the resolution of the corresponding wide-field system, namely $N = {R_{xy}}/\left( {\frac{\lambda }{{2NA}}} \right)$.
  • 3. Hence, $N = \frac{{\lambda {f_{\textrm{MLA}}}}}{{{d_{\textrm{MLA}}}}} \times \frac{{{f_{\textrm{FL}}}}}{{{f_{\textrm{MLA}}}}} \times \frac{1}{M} \times \frac{{2NA}}{\lambda } = \frac{{2NA}}{M} \times \frac{{{f_{\textrm{FL}}}}}{{{d_{\textrm{MLA}}}}}$. Given ${D_{\textrm{cam}}} = \frac{{2\textrm{NA}}}{M} \times {f_{\textrm{FL}}}$, we obtain ${d_{\textrm{MLA}}} = \frac{{{D_{\textrm{cam}}}}}{N}$. Therefore, the occupancy ratio N can also be described as the ratio between the effective pupil size at the MLA ${D_{\textrm{pupil}}}$ and ${d_{\textrm{MLA}}}$.
  • 4. As we defined the sampling rate as the ratio of the diffraction limit of a microlens and the physical pixel size, namely ${S_r} = \frac{\lambda }{{2N{A_{\textrm{ML}}}}}/P$. We then have ${R_{xy}} = \frac{\lambda }{{2N{A_{\textrm{ML}}}}} \times \frac{{{f_{\textrm{FL}}}}}{{{f_{\textrm{MLA}}}}} \times \frac{1}{M} = {S_r}P \times \frac{{{f_{\textrm{FL}}}}}{{{f_{\textrm{MLA}}}}} \times \frac{1}{M}$, thus ${f_{\textrm{MLA}}} = \frac{{{S_r}P{f_{\textrm{FL}}}}}{{{R_{xy}}M}}$.
  • 5. The relationship between ${d_1}$ and FOV is given as FOV $= {d_1} \times \frac{{{f_{\textrm{FL}}}}}{{{f_{\textrm{MLA}}}}} \times \frac{1}{M}$, so ${d_1} = FOV\frac{{M{f_{\textrm{MLA}}}}}{{{f_{\textrm{FL}}}}}$.
  • 6. ${d_{\textrm{max}}}$ can be directly given from the figure as ${d_{\textrm{max}}} = {d_1} \times \left\lfloor {\frac{{{D_{\textrm{cam}}} - {d_{\textrm{MLA}}}}}{{2{d_1}}}} \right\rfloor$, where $\lfloor \rfloor$ calculates the greatest integer less than or equal to the enclosed value.
  • 7. ${R_z}$ can be given from Fig. 7 in which $\textrm{tan}({\theta}^{\prime} )$ is replaced by $\frac{{{d_{\textrm{max}}}}}{{{f_{\textrm{FL}}}}}$. Then ${R_z} = \frac{\lambda }{{2N{A_{\textrm{ML}}}}} \times \frac{{{f_{\textrm{FL}}}}}{{{f_{\textrm{MLA}}}}} \times \frac{1}{{\tan ({\theta}^{\prime} )}} \times \frac{1}{{{M^2}}}$$= \frac{\lambda }{{2N{A_{\textrm{ML}}}}} \times \frac{{{f_{\textrm{FL}}}}}{{{f_{\textrm{MLA}}}}} \times \frac{{{f_{\textrm{FL}}}}}{{{d_{\textrm{max}}}}} \times \frac{1}{{{M^2}}} = {R_{xy}} \times \frac{{{f_{\textrm{FL}}}}}{{{d_{\textrm{max}}}}} \times \frac{1}{M}.$ As we have obtained ${f_{\textrm{FL}}} = \frac{{{D_{\textrm{cam}}} \times M}}{{2\textrm{NA}}}$ and ${d_{\textrm{MLA}}} = \frac{{{D_{\textrm{cam}}}}}{N}$, ${R_z} = {R_{xy}} \times \frac{1}{{{d_{\textrm{max}}}}} \times \frac{{{d_{\textrm{MLA}}}N}}{{2\textrm{NA}}} = {R_{xy}} \times \frac{{{d_{\textrm{MLA}}}}}{{{d_{\textrm{max}}}}} \times \frac{{{R_{xy}}}}{\lambda } = \frac{{{d_{\textrm{MLA}}}{R_{xy}}^2}}{{\lambda {d_{\textrm{max}}}}}$.
  • 8. The DOF is given as twice the axial FWHM value of the PSF, i.e. $\textrm{DOF} = 2\left( {\frac{{\lambda {N^2}}}{{{\textrm{NA}}^2}} + P\frac{N}{{\textrm{NA}}} \times \frac{{{f_{\textrm{FL}}}}}{{{f_{\textrm{MLA}}}}} \times \frac{1}{M}} \right)$. As $N = {R_{xy}}\frac{{2\textrm{NA}}}{\lambda }$ and ${S_r} = \frac{\lambda }{{2N{A_{\textrm{ML}}}}}/P$, $\textrm{DOF} = 2\left( {\frac{{4{R_{xy}}^2}}{\lambda } + {R_{xy}}\frac{{{f_{\textrm{MLA}}}}}{{{f_{\textrm{FL}}}}}\frac{M}{{{S_r}}} \times \frac{N}{{\textrm{NA}}} \times \frac{{{f_{\textrm{FL}}}}}{{{f_{\textrm{MLA}}}}} \times \frac{1}{M}} \right) = 2\left( {\frac{{4{R_{xy}}^2}}{\lambda } + \frac{{{R_{xy}}}}{{{S_r}}} \times \frac{N}{{\textrm{NA}}}} \right) = 2\left( {\frac{{4{R_{xy}}^2}}{\lambda } + \frac{{{R_{xy}}}}{{{S_r}}} \times {R_{xy}}\frac{2}{\lambda }} \right) = \frac{{2\left( {4 + \frac{2}{{{S_r}}}} \right){R_{xy}}^2}}{\lambda }$.
  • 9. The magnification of the FLFM system is given as ${M_T} = M \times \frac{{{f_{\textrm{MLA}}}}}{{{f_{\textrm{FL}}}}}$. Since ${f_{\textrm{MLA}}} = \frac{{{S_r}P{f_{\textrm{FL}}}}}{{{R_{xy}}M}}$, ${M_T} = M \times \frac{{{S_r}P{f_{FL}}}}{{{R_{xy}}M}} \times \frac{1}{{{f_{\textrm{FL}}}}} = \frac{{{S_r}P}}{{{R_{xy}}}}$.
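As a concrete illustration of items 1 to 9, the short script below chains the expressions above into a numerical design calculator. It is a minimal sketch: the input values are illustrative assumptions rather than the exact parameters of the reported system, and all lengths are kept in micrometers so the derived quantities are directly comparable.

```python
# Minimal numerical sketch of the FLFM design chain in Table 1 (items 1-9).
# All input values below are illustrative assumptions; lengths are in micrometers.
import math

M = 40.0             # objective magnification (assumed)
NA = 0.95            # objective numerical aperture (assumed)
wavelength = 0.515   # emission wavelength, um (assumed)
D_cam = 4750.0       # usable camera-side aperture matched to the pupil image, um (assumed)
P = 6.5              # physical pixel size, um (assumed)
R_xy = 2.0           # target lateral resolution in object space, um (assumed)
S_r = 2.0            # sampling rate: microlens diffraction limit / pixel size (assumed)
FOV = 100.0          # target field of view in object space, um (assumed)

# 1. Fourier-lens focal length matching the back pupil to the camera aperture.
f_FL = D_cam * M / (2.0 * NA)

# 2-3. Occupancy ratio N and microlens pitch.
N = R_xy * 2.0 * NA / wavelength
d_MLA = D_cam / N

# 4. Microlens focal length from the sampling rate S_r.
f_MLA = S_r * P * f_FL / (R_xy * M)

# 5. Elemental-image size d_1 behind each microlens for the target FOV.
d_1 = FOV * M * f_MLA / f_FL

# 6. Maximum usable off-axis distance (floor keeps whole elemental images on the sensor).
d_max = d_1 * math.floor((D_cam - d_MLA) / (2.0 * d_1))

# 7-9. Axial resolution, depth of field and total magnification.
R_z = d_MLA * R_xy**2 / (wavelength * d_max)
DOF = 2.0 * (4.0 + 2.0 / S_r) * R_xy**2 / wavelength
M_T = S_r * P / R_xy

print(f"f_FL = {f_FL/1e3:.1f} mm, f_MLA = {f_MLA/1e3:.2f} mm, d_MLA = {d_MLA/1e3:.3f} mm")
print(f"d_1 = {d_1:.1f} um, d_max = {d_max:.1f} um")
print(f"R_z = {R_z:.2f} um, DOF = {DOF:.1f} um, M_T = {M_T:.2f}x")
```

Any consistent unit system works; the point of the sketch is only to show how a single set of inputs (objective, camera, target resolution and FOV) propagates through the design relations to fix the FL, MLA and the resulting imaging performance.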

Funding

National Science Foundation (EFMA1830941); National Institute of General Medical Sciences (R35GM124846).

Acknowledgment

We acknowledge the support of the NSF-EFMA program and the NIH-NIGMS MIRA program.

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

1. G. Lippmann, “Epreuves reversibles, photographies integrales,” Comptes Rendus l’Académie des Sci. 146, 446–451 (1908).

2. M. Levoy, R. Ng, A. Adams, M. Footer, and M. Horowitz, “Light field microscopy,” ACM Trans. Graph. 25(3), 924–934 (2006). [CrossRef]  

3. M. Levoy, Z. Zhang, and I. McDowall, “Recording and controlling the 4D light field in a microscope using microlens arrays,” J. Microsc. 235(2), 144–162 (2009). [CrossRef]  

4. M. Broxton, L. Grosenick, S. Yang, N. Cohen, A. Andalman, K. Deisseroth, and M. Levoy, “Wave optics theory and 3-D deconvolution for the light field microscope,” Opt. Express 21(21), 25418–25439 (2013). [CrossRef]  

5. R. Prevedel, Y.-G. Yoon, M. Hoffmann, N. Pak, G. Wetzstein, S. Kato, T. Schrödel, R. Raskar, M. Zimmer, E. S. Boyden, and A. Vaziri, “Simultaneous whole-animal 3D imaging of neuronal activity using light-field microscopy,” Nat. Methods 11(7), 727–730 (2014). [CrossRef]  

6. N. C. Pégard, H.-Y. Liu, N. Antipa, M. Gerlock, H. Adesnik, and L. Waller, “Compressive light-field microscopy for 3D neural activity recording,” Optica 3(5), 517–524 (2016). [CrossRef]  

7. T. Nöbauer, O. Skocek, A. J. Pernía-Andrade, L. Weilguny, F. M. Traub, M. I. Molodtsov, and A. Vaziri, “Video rate volumetric Ca2+ imaging across cortex using seeded iterative demixing (SID) microscopy,” Nat. Methods 14(8), 811–818 (2017). [CrossRef]  

8. M. A. Taylor, T. Nöbauer, A. Pernia-Andrade, F. Schlumm, and A. Vaziri, “Brain-wide 3D light-field imaging of neuronal activity with speckle-enhanced resolution,” Optica 5(4), 345–353 (2018). [CrossRef]  

9. H. Li, C. Guo, D. Kim-Holzapfel, W. Li, Y. Altshuller, B. Schroeder, W. Liu, Y. Meng, J. B. French, K.-I. Takamaru, M. A. Frohman, and S. Jia, “Fast, volumetric live-cell imaging using high-resolution light-field microscopy,” Biomed. Opt. Express 10(1), 29–49 (2019). [CrossRef]  

10. N. Cohen, S. Yang, A. Andalman, M. Broxton, K. Deisseroth, M. Horowitz, and M. Levoy, “Enhancing the performance of the light field microscope using wavefront coding,” Opt. Express 22(20), 24817–24839 (2014). [CrossRef]  

11. I. Kauvar, J. Chang, and G. Wetzstein, “Aperture interference and the volumetric resolution of light field fluorescence microscopy,” in IEEE International Conference on Computational Photography (ICCP) (2017).

12. X. Lin, J. Wu, G. Zheng, and Q. Dai, “Camera array based light field microscopy,” Biomed. Opt. Express 6(9), 3179–3189 (2015). [CrossRef]  

13. A. Lumsdaine and T. Georgiev, “The focused plenoptic camera,” in IEEE International Conference on Computational Photography (ICCP) (2009).

14. A. Llavador, J. Sola-Pikabea, G. Saavedra, B. Javidi, and M. Martínez-Corral, “Resolution improvements in integral microscopy with Fourier plane recording,” Opt. Express 24(18), 20792–20798 (2016). [CrossRef]  

15. G. Scrofani, J. Sola-Pikabea, A. Llavador, E. Sanchez-Ortiga, J. C. Barreiro, G. Saavedra, J. Garcia-Sucerquia, and M. Martínez-Corral, “FIMic: design for ultimate 3D-integral microscopy of in-vivo biological samples,” Biomed. Opt. Express 9(1), 335–346 (2018). [CrossRef]  

16. L. Cong, Z. Wang, Y. Chai, W. Hang, C. Shang, W. Yang, L. Bai, J. Du, K. Wang, and Q. Wen, “Rapid whole brain imaging of neural activity in freely behaving larval zebrafish (Danio rerio),” eLife 6, e28158 (2017). [CrossRef]  

17. M. Gu, Advanced Optical Imaging Theory (Springer, 2000).

18. N. Delen and B. Hooker, “Free-space beam propagation between arbitrarily oriented planes based on full diffraction theory: a fast Fourier transform approach,” J. Opt. Soc. Am. A 15(4), 857–867 (1998). [CrossRef]  

19. F. Dell’Acqua, G. Rizzo, P. Scifo, R. A. Clarke, G. Scotti, and F. Fazio, “A model-based deconvolution approach to solve fiber crossing in diffusion-weighted MR imaging,” IEEE Trans. Biomed. Eng. 54(3), 462–472 (2007). [CrossRef]  

20. A. C. Kak and M. Slaney, “3. Algorithms for Reconstruction with Nondiffracting Sources,” in Principles of Computerized Tomographic Imaging (Society for Industrial and Applied Mathematics, 2001), pp. 49–112.

Supplementary Material (4)

Visualization 1: Tracking 200-nm fluorescent particles at a volume acquisition rate of 10 Hz. The corresponding images are shown in Fig. 2(e).
Visualization 2: Tracking 200-nm fluorescent particles at a volume acquisition rate of 26 Hz.
Visualization 3: Tracking 200-nm fluorescent particles at a volume acquisition rate of 100 Hz.
Visualization 4: Imaging mouse kidney tissue using FLFM. The video demonstrates a reconstructed volume of 67 µm × 67 µm × 20 µm in x, y and z, respectively. The corresponding images are shown in Fig. 4.



Figures (16)

Fig. 1. Fourier light-field microscopy (FLFM). (a) A schematic of the experimental setup for FLFM. The objective lens (OL) and the tube lens (TL) form an image at the native image plane (NIP), whose area is adjusted by an iris. The Fourier lens (FL) transforms the image at the NIP to the back focal plane of the FL, where the microlens array (MLA) is situated. The light-field information is recorded by the sCMOS camera at the back focal plane of the MLA. DM, dichroic mirror; M, mirror. The inset diagram illustrates image formation through the MLA for emitters at different axial positions, indicating that both the spatial and angular information are recorded in an uncompromised manner. (b, c) 3D perspective view (b) and stack projection (c) of the simulated PSF through a 5×5 MLA (effective pitch = 89 µm in the object space) within an axial range from −40 µm to 40 µm. (d) The corresponding experimental PSF of the system across the same range, showing good agreement with the numerical model. The depth information in (c) and (d) is coded as shown in the color-scale bars.
Fig. 2. Characterization of FLFM and imaging caliber samples. (a) Left panel, 3D view of a reconstructed 200-nm fluorescent bead using the MATLAB function ‘isosurface’. Right panel, the reconstructed cross-sectional images and the corresponding profiles in x-y, y-z and x-z across the center of the bead, exhibiting FWHM values of 1.98 µm, 2.07 µm, and 4.39 µm in x, y, and z, respectively. (b) Wide-field (i) and raw FLFM (ii) images of 200-nm fluorescent beads located on the focal plane. The inset in (i) shows the zoomed-in image of the boxed region. A reconstructed axial stack of (ii) using wave-optics deconvolution (iii) and ray-optics integral (iv) models. (v) and (vi) show the zoomed-in images (left) and interpolated cross-sectional profiles (right) of the corresponding boxed regions in (iii) and (iv), respectively. FLFM with the wave-optics model has been shown to resolve two beads separated by 2.90 µm, confirmed by the inset in (i). (c) The reconstructed cross-sectional images (left panel) and profiles along the dashed lines (right panel) of a surface-stained 6-µm fluorescent bead using FLFM with the wave-optics (top row) and ray-optics (middle row) models. The hollow structure was clearly observed using the wave-optics model, while the ray-optics model failed to provide sufficient resolution. The same sample was also imaged using conventional LFM (bottom row) near the NIP, where strong artifacts prohibited proper visualization of lateral and axial structures. (d) Raw light-field (top) and reconstructed 3D FLFM (bottom) images of 200-nm fluorescent beads distributed in a volume. The reconstructed axial positions of four beads were identified at −28 µm, −22 µm, −9 µm and −5µm. The dashed lines in the raw image represent the edges of the square-shaped microlenses. (e) Top left, 3D reconstructed trajectories of 200-nm fluorescent beads suspended in water and axially separated by > 30 µm, tracked at a volume acquisition time of 100 ms (see Visualization 1). Top right, zoomed-in 3D trajectory of the corresponding boxed region in the left image. Bottom, the trajectories of the beads in x-y, x-z, and y-z. Different time-points are linearly color-coded from 0 to 4 s. The FLs of ${f_{\textrm{FL}}} = 75\textrm{ mm}$ (a, c, d) and ${f_{\textrm{FL}}} = 100\textrm{ mm}$ (b, e) were employed. Scale bars: 2 µm (a, b (i, v, vi)), 5 µm (c), 20 µm (b (ii), d), 10 µm (b (iii, iv), e).
Fig. 3. Imaging pollen grains using FLFM. (a,b) Raw light-field (a) and reconstructed 3D FLFM (b) images of a pollen grain stained with hematoxylin and phloxine B. The spines of the pollen oriented into three dimensions can be observed in (b). The depth information across a 50-µm range in (b) is color-coded according to the color scale bar. (c) Selected z-stack images. The insets show the zoomed-in FLFM (left) and wide-field (right) images at z = 2 µm of two spines (both are 12.03 × 12.03 µm), respectively. The results show sensitive axial discrimination and 3D resolution of the pollen structure using FLFM. (d) Corresponding cross-sectional profiles along the dashed line in (c) at z = 2 µm. The profiles exhibited two spines separated by ∼4 µm and their FWHM values of 1–2 µm resolved by FLFM. Scale bars: 20 µm (a), 10 µm (b, c).
Fig. 4. Imaging mouse kidney tissue using FLFM (see Visualization 4). (a,b) Raw light-field (a) and reconstructed 3D FLFM (b) images of a cryostat section of mouse kidney stained with Alexa Fluor 568 phalloidin using a 75-mm Fourier lens. The inset in (b) shows the cross-sectional view in x-z of the corresponding layer marked in (b). The color represents intensity levels in (b). (c, d) Axial-stack images of the reconstructed volume by FLFM (c) and by scanning wide-field microscopy (d). The arrows indicate the enhanced signals and sectioning capability of FLFM that reveal fine 3D structural changes. The grids in the images are the edges of the microlenses when acquiring the corresponding wide-field images through the MLA. (e) The cross-sectional profile along the axial dimension of the region as marked in (b), exhibiting a FWHM value of 4.75 µm. (f) The cross-sectional profile along the dashed line in (c), exhibiting resolved filaments at z=−1.5 µm separated by 3.3 µm. Scale bars: 20 µm.
Fig. 5. Experimental setup and light propagation for traditional LFM (top) and FLFM (bottom) using a 40×, 0.95NA objective lens. The insets on the right show the corresponding light-field images on the camera. NOP, native object plane; OL, objective lens; TL, tube lens; NIP, native image plane; FL, Fourier lens; CP, camera plane.
Fig. 6. x-z view of the PSFs across the central row of microlenses for FLFM (left panel) and traditional LFM (right panel). Numerical (top panel) and experimental (bottom panel) results show good agreement. The dashed lines represent the NIP, and the redundantly aliased region can be observed in the PSF of traditional LFM. The experimental results show background noise due to the lower SNR. Scale bars: 10 µm.
Fig. 7. Left to right, raw light-field data, reconstructed 3D image, cross-sectional images at the NIP and the corresponding profiles of the 200-nm fluorescent bead located on the native object plane using traditional LFM. Compared to the result in Fig. 2(a), severe reconstruction artifacts can be observed for traditional LFM near the NIP.
Fig. 8. Analysis of the lateral resolution of FLFM. The figure shows the model of light propagation, image formation and symbols to determine the lateral resolution.
Fig. 9. Numerical study of the FWHM values of the reconstructed images of a point emitter in all three dimensions, as a function of axial positions from −20 µm to 20 µm. 5 iterations were used for deconvolution, consistent with Fig. 2(a).
Fig. 10. Analysis of the axial resolution of FLFM. (a) Left, the model and symbols to determine the axial resolution. Right, symbols and indices of the corresponding microlenses. (b) Simulated light propagation and identification of axial distributions of the PSF after microlenses. Scale bar: 20 µm.
Fig. 11. Analysis of the axial resolution of FLFM by reconstructing two axially separated point emitters. Scale bars: 20 µm (left panel), 2 µm (middle panel).
Fig. 12. Analysis of the field of view (FOV) of FLFM. The figure shows the model and symbols to determine the FOV.
Fig. 13. Analysis of the depth of field (DOF) of FLFM. (a) Ray optics sketch of the model and symbols to determine the DOF based on axial diffraction of light. (b) Simulated light propagation and identification of axial distributions of the PSF after microlenses. (c) Ray optics sketch of the model and symbols to determine the DOF based on the translation of the light-field pattern at varying depths. Scale bars: 20 µm.
Fig. 14. Algorithm flow chart of FLFM.
Fig. 15. Analysis for the reconstruction of the surface-stained structure (diameter = 20 µm). (a) top row from left to right, the structure sampled with 5 slices, raw light-field image, reconstructed image of the surface-stained structure, cross-sectional images in x-z of the sampled structure, reconstructed image and the overlay, showing difference between the original and reconstructed structures due to the limited sampling step size and overlapping spatial information. (b) Same images with respect to (a) but with 11 slices. (c) Top rows are the same images with respect to (a) but with infinite slices (or sufficiently small step size to be exact), showing increasingly continuous structure and the enhanced stretching effect in the axial dimension (e.g. the 3rd image from the left in the top row of (c)) compared to (a) and (b). The bottom row of (c) demonstrates the agreement of the lateral patterns in x-y on the focal plane of the raw (bottom left), reconstructed (bottom middle) and overlay (bottom right) structures. The numerical results are consistent with our experimental observation in Fig. 2(c). Scale bars: 20 µm (c, left on the top row), 5 µm (c, right on the top row and the bottom row).
Fig. 16. Mitigation of the cross talk using a different FL (${f_{\textrm{FL}}} = 50$ mm). Top row, from left to right: the densely sampled structure, raw light-field image, and reconstructed image of the surface-stained structure (diameter = 20 µm). Bottom row, from left to right: overlay of the original (red) and reconstructed (green) x-z (left) and x-y (middle) sections. Bottom right, the reconstructed hollow structure of the microsphere. Scale bars: 20 µm (top middle), 5 µm (bottom middle).

Tables (1)


Table 1. Design principle of FLFM.

Equations (5)


${U_i}(x,p) = \frac{M}{{f_{\textrm{obj}}^2{\lambda ^2}}}\exp \left[ { - \frac{{iu}}{{4{{\sin }^2}(\alpha /2)}}} \right] \times \int_0^\alpha {P(\theta )\exp \left[ { - \frac{{iu{{\sin }^2}(\theta /2)}}{{2{{\sin }^2}(\alpha /2)}}} \right]{J_0}\left[ {\frac{{\sin (\theta )}}{{\sin (\alpha )}}v} \right]\sin (\theta )\,d\theta }$

$\phi (x) = \textrm{rect}\left( {x/{d_{\textrm{MLA}}}} \right)\exp \left( { - \frac{{ik}}{{2{f_{\textrm{MLA}}}}}\|x\|_2^2} \right)$

$h(x,p) = {\mathcal{F}^{ - 1}}\left\{ {\mathcal{F}\left[ {\textrm{OFT}[{U_i}(x,p)]\,\Phi (x)} \right] \times \exp \left[ {i2\pi {f_{\textrm{MLA}}}\sqrt {{{(1/\lambda )}^2} - (f_x^2 + f_y^2)} } \right]} \right\}$

$O(x) = \int {{{|h(x,p)|}^2}g(p)\,dp}$

${g^{(k + 1)}} = \textrm{diag}\left[ {\textrm{diag}{{\left( {{H^T}H{g^{(k)}}} \right)}^{ - 1}}\left( {{H^T}O} \right)} \right]{g^{(k)}}$
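The last equation is the multiplicative update used for the volumetric reconstruction. Below is a minimal NumPy sketch of that update, with H treated as an explicit measurement matrix and O as the vectorized raw image; the function name, sizes and values are illustrative only and not the authors' implementation.

```python
# Minimal sketch of g_(k+1) = diag[diag(H^T H g_k)^-1 (H^T O)] g_k, with an
# explicit measurement matrix H (pixels x voxels); sizes are illustrative only.
import numpy as np

def deconvolve(H, O, n_iter=5, eps=1e-12):
    """Run the multiplicative update from a uniform non-negative initial volume."""
    g = np.ones(H.shape[1])       # g_(0): initial volume estimate
    HtO = H.T @ O                 # back-projected measurement, computed once
    for _ in range(n_iter):
        HtHg = H.T @ (H @ g)      # H^T H g_k: back-projected forward projection
        g = g * HtO / (HtHg + eps)  # element-wise ratio update (the diag[...] product)
    return g

# Toy usage with random numbers, for shape-checking only.
rng = np.random.default_rng(0)
H = rng.random((64, 32))          # 64 "camera pixels", 32 "voxels"
g_true = rng.random(32)
O = H @ g_true                    # synthetic noiseless measurement
g_est = deconvolve(H, O, n_iter=50)
```

In the shift-invariant FLFM geometry, the products with H and its transpose amount to convolutions with the system PSF, which is why an FFT-based implementation of the same update is the natural choice in practice.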