Optica Publishing Group

Phase-space deconvolution for light field microscopy

Open Access

Abstract

Light field microscopy, featuring snapshot large-scale three-dimensional (3D) fluorescence imaging, has attracted great interest in various biological applications, especially high-speed 3D calcium imaging. Traditional 3D deconvolution algorithms based on the beam propagation model facilitate high-resolution 3D reconstructions. However, such a high-precision model is not robust to experimental data with system errors such as optical aberrations and background fluorescence, which introduce strong periodic artifacts and reduce the image contrast. To solve this problem, we propose a phase-space deconvolution method for light field microscopy, which fully exploits the smoothness prior in the phase-space domain. By modeling the imaging process in the phase-space domain, we convert the spatially-nonuniform point spread function (PSF) into a spatially-uniform one of much smaller size. Experiments on various biological samples and a resolution chart verify the contrast enhancement, the much lower artifact level, and the 10-fold reduction in computational cost achieved by our method, without any hardware modification required.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Light field microscopy (LFM) has been widely applied in high-speed large-scale 3D fluorescence imaging [1,2] since it has the unique advantage of capturing the four-dimensional (4D) phase space [3–5] within a snapshot. Various biomedical applications, such as whole-animal calcium imaging [1] and 3D behavioral phenotyping [6], benefit from the unprecedented volumetric imaging speed. In the past decade, different LFM schemes [3,5,7–9] have been proposed to capture light field data with tradeoffs among spatial resolution, field of view (FOV), and imaging speed. Meanwhile, researchers have designed many algorithms [10–12] based on geometrical optics to reduce the artifacts in light-field-synthesized depth maps and digital refocusing in photography. However, very few algorithms based on wave optics, except for the straightforward 3D Richardson-Lucy (RL) deconvolution [4], have been exploited to fully extract the 3D information embedded in the phase-space measurements of LFM. Although the resolution of LFM can be greatly enhanced by the 3D RL deconvolution algorithm [4], strong artifacts close to the native object plane are introduced in the reconstructed volume due to low spatial sampling, and they become even worse under unavoidable system aberrations [13] and low signal-to-noise-ratio (SNR) conditions [14]. Especially for high numerical-aperture (NA) objective lenses with a small depth of field, many out-of-focus fluorescence signals are collected and sample-induced aberrations easily change the distribution of the point spread function (PSF) [15]. In this case, the periodic and sharp PSF used in 3D deconvolution for LFM, which is simulated by beam propagation after the phase modulation of a microlens array, can easily overfit complicated experimental data [16] and increase the artifacts. Such high-frequency sample-dependent artifacts, which are hard to remove, greatly reduce the image contrast in 3D visualization.
By applying hardware modifications to the LFM, the artifacts can be removed with an enhanced resolution at the cost of the depth of field [17] or FOV [5]. By exploiting the intrinsic properties of phase space, artifact-free high-speed 3D fluorescence imaging can be achieved [18,19], but the aperture coding in these setups reduces the light efficiency.

In addition, the 3D PSF in the traditional RL deconvolution of LFM is spatially nonuniform and usually represented as a large-scale matrix with many zero elements. This greatly raises the computational cost of the deconvolution process and restricts the reconstructed depth range due to the huge memory requirement [4]. In fact, the effective number of pixels containing most of the information is orders of magnitude smaller than the total pixel number of the PSF, which provides great potential to increase the reconstruction speed [20]. Although algorithms exploiting the spatiotemporal sparsity of time-lapse 3D video have shown great success in accelerating the process [2], they are limited to certain applications such as calcium imaging and cannot be applied to the observation of 3D morphological changes.

To solve these problems, we propose a phase-space deconvolution framework, which eliminates the artifacts in the 3D reconstruction of LFM data and substantially increases the image contrast and convergence speed. The microlens array in LFM samples the local coherence in a parallel way, capturing the 4D phase-space information within a snapshot. Since the semi-transparent assumption holds for most 3D fluorescence samples, the phase-space distribution under normal imaging conditions, without occlusions and nonlinear effects, should be smooth. However, traditional 3D deconvolution in LFM simply up-samples the solution space with a high-resolution volume, leading to a spatially-nonuniform 3D PSF, and never considers the smoothness of the phase-space distribution in linear beam propagation [3]. Therefore, pixelation and reconstruction artifacts appear with the discontinuity in the phase-space domain, originally induced by the low spatial sampling. Here, rather than using the 3D PSF directly simulated by beam propagation with the phase modulation of a microlens array, we convert the forward imaging process into an equivalent model in the phase-space dimension, which produces spatially-uniform 3D PSFs of much smaller size for different spatial frequency components. An up-sampling interpolation is then conducted in the phase-space domain to use the smoothness in phase space as a reconstruction prior [21]. We verify the algorithm on various biological samples and a USAF-1951 resolution chart, showing its superiority over conventional algorithms in terms of artifact reduction, contrast enhancement, and convergence speed.

2. Method

To verify our algorithm, we built a traditional LFM setup following the schematic diagram shown in Fig. 1(a). We insert a microlens array at the image plane of a commercial microscope with an objective lens of matched NA (63 × / 1.4 NA oil-immersion objective lens, Zeiss Apochromat). A relay system relays the back focal plane of the microlens array to the sensor plane for light field data collection. The magnification of the relay system is chosen as 1:1 so that each microlens (2.1 mm focal length with 100 μm pitch size, corresponding to about 1.43 μm at the sample plane) covers a certain number of sensor pixels, setting the angular resolution (~13 × 13 here). To model the PSF, we need the theoretical contribution of an arbitrary 3D point to each sensor measurement, owing to the incoherent property of fluorescence imaging [4]. For simplicity, we do not take the relay system into account during modeling. We set the 3D point at the axial plane z with a lateral coordinate p = (p1, p2). An arbitrary pixel on the sensor can be defined by a lateral coordinate u = (u1, u2) relative to the center of its microlens, whose center position is x = (x1, x2), as illustrated in Fig. 1(a). Grouping the sensor pixels by their coordinate u (marked with various colors in the inset of Fig. 1(a)) yields the different spatial frequency components.


Fig. 1 Schematic of the experimental setup and the principle of phase-space deconvolution. (a) The schematic of the experimental setup with illustrations showing the preprocessing procedures for phase-space deconvolution. Different spatial frequency components (marked with different colors) are extracted from LFM raw data into higher dimensions, followed by an up-sampling interpolation for high-resolution reconstruction. (b) The axial slice (at z = 5 μm) of the traditional point spread function (PSF) used in 3D deconvolution for LFM, with a cross-section profile to show its large size and periodic patterns. (c) The axial slices (at z = 5 μm) of the PSFs after realignment used in phase-space deconvolution for two typical spatial frequency components, with the cross-section profiles to show the small size and smooth property. Scale bar, 10 μm.


Then the PSF used for traditional light field deconvolution can be written as $h_l(\mathbf{x}+\mathbf{u},\mathbf{p},z)$ [4]. With increasing defocus distance, the wavefront of a single emitted source at each small area becomes closer to a plane wave, producing many zeros after each microlens (Fig. 1(b)). In addition, such a PSF is spatially nonuniform for sample points at different positions relative to the center of the microlens. To get rid of the redundancy and incorporate the smoothness constraint in phase space, we find that we can directly realign the pixels captured by LFM to obtain the 4D phase-space measurements, without any image calibration required in previous methods [5], since every microlens works as a local Fourier transform in the LFM scheme. For the model in phase space, the PSF can be represented as $h_p(\mathbf{x},\mathbf{p},z,\mathbf{u})$ in a higher dimension, with the realignment process according to the coordinates $\mathbf{x}=(x_1,x_2)$ and $\mathbf{u}=(u_1,u_2)$. The generation of the phase-space PSFs from the traditional PSF is also provided in Code 1 [22]. Figure 1(c) shows two PSFs of different spatial frequency components at z = 5 μm, illustrating a much smaller PSF size compared with the PSF in 3D (Fig. 1(b)). In addition, the realigned phase-space PSF is more continuous and smoother than the 3D PSF, which has periodic patterns.
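Because the realignment is a pure re-indexing of pixels, it reduces to a reshape and a transpose once the raw image is cropped to an integer number of microlenses. The sketch below is a minimal illustration (independent of the paper's Code 1); the argument `n_u`, the number of pixels behind each microlens per axis, is the only assumed parameter:

```python
import numpy as np

def realign_to_phase_space(raw, n_u=13):
    """Realign a raw LFM image into a 4D phase-space stack.

    raw : 2D raw sensor image; cropped to a multiple of n_u per axis
          (one microlens covers n_u x n_u pixels).
    n_u : pixels behind each microlens per axis (~13 in the setup above).
    Returns an array of shape (n_u, n_u, n_x, n_x): one low-resolution
    sub-aperture view per spatial-frequency coordinate u = (u1, u2).
    """
    n_x = min(raw.shape) // n_u
    crop = raw[:n_x * n_u, :n_x * n_u]
    # axes become (x1, u1, x2, u2); moving the u axes to the front
    # groups all pixels that share the same u into one spatial view
    return crop.reshape(n_x, n_u, n_x, n_u).transpose(1, 3, 0, 2)
```

Each slice `[u1, u2]` of the result is then one spatial-frequency component that pairs with the corresponding phase-space PSF.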

To analyze the PSF in phase space in detail, we describe the representation of $h_p(\mathbf{x},\mathbf{p},z,\mathbf{u})$ in the coordinates x and u defined previously. Then the influence of one point emitter, with lateral coordinate $\mathbf{p}=(p_1,p_2)$ and fixed axial coordinate $z$, on one sensor pixel behind a specific microlens can be represented as below:

$$h_p(\mathbf{x},\mathbf{p},z,\mathbf{u})=\int_{\boldsymbol{\omega}_{\mathbf{x}'-\mathbf{x}}}\left\|K_z\,\mathcal{F}_{\boldsymbol{\omega}_{\mathbf{x}'-\mathbf{x}}}\{U_z(\mathbf{x}'\mid\mathbf{p})\,t(\mathbf{x}'-\mathbf{x})\}\,s(\mathbf{u})\right\|_2^2\,d\boldsymbol{\omega}_{\mathbf{x}'-\mathbf{x}},\tag{1}$$
where $U_z(\mathbf{x}'\mid\mathbf{p})$ represents the spherical complex field confined by the objective NA at different lateral positions $\mathbf{x}'=(x'_1,x'_2)$ of the image plane, generated by a point source $(p_1,p_2,z)$. $K_z$ is an imaginary constant related to $z$. $t(\mathbf{x}'-\mathbf{x})$ represents the 2D rectangular window function in spatial sampling, whose window size is defined by the pitch of one microlens. $\mathcal{F}_{\boldsymbol{\omega}_{\mathbf{x}'-\mathbf{x}}}(\cdot)$ represents the 2D Fourier transformation with the spatial frequency coordinate $\boldsymbol{\omega}_{\mathbf{x}'-\mathbf{x}}$. $s(\mathbf{u})=\mathrm{rect}\!\left(\frac{\boldsymbol{\omega}_{\mathbf{x}'-\mathbf{x}}-\mathbf{u}}{d_s}\right)$ represents another 2D rectangular window function in spatial frequency sampling, determined by the sensor pixel size $d_s$ behind each microlens ($d_s = 6.5\,\mu\mathrm{m}$ for our system). Specifically, $U_z(\mathbf{x}\mid\mathbf{p})$ can be represented as below according to the previous study [4]:
$$U_z(\mathbf{x}\mid\mathbf{p})=\frac{M}{f_{\mathrm{obj}}^2\lambda^2}\exp\!\left(\frac{i2\pi z}{\lambda}\right)\int_0^{\alpha}\sqrt{\cos\theta}\,\exp\!\left(\frac{i4\pi z\sin^2(\theta/2)}{\lambda}\right)J_0\!\left(\frac{2\pi\sin\theta}{\lambda}\sqrt{(x_1-p_1)^2+(x_2-p_2)^2}\right)\sin\theta\,d\theta,\tag{2}$$
where $J_0(\cdot)$ is the zeroth-order Bessel function of the first kind. $f_{\mathrm{obj}}$ and $M$ are the focal length and the magnification of the objective, respectively. $\lambda$ is the wavelength of the emission fluorescence. $\alpha$ represents the half-angle corresponding to the numerical aperture.
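For a numerical feel of Eq. (2), the Debye-type integral can be evaluated directly by quadrature. In this sketch the constant prefactor $M/(f_{\mathrm{obj}}^2\lambda^2)$ is omitted, and the emission wavelength (0.52 μm), immersion index (1.518), and sampling density are illustrative assumptions rather than values quoted in the text:

```python
import numpy as np
from scipy.special import j0

def debye_field(r, z, wavelength=0.52, na=1.4, n_medium=1.518,
                n_theta=4000):
    """Evaluate the Debye-type integral of Eq. (2) by quadrature,
    up to the constant prefactor M / (f_obj^2 lambda^2).

    r : lateral distance sqrt((x1 - p1)^2 + (x2 - p2)^2), in um
    z : axial position of the point source, in um
    """
    alpha = np.arcsin(na / n_medium)          # half-angle of the NA
    theta = np.linspace(0.0, alpha, n_theta)
    integrand = (np.sqrt(np.cos(theta))
                 * np.exp(1j * 4 * np.pi * z * np.sin(theta / 2) ** 2
                          / wavelength)
                 * j0(2 * np.pi * np.sin(theta) * r / wavelength)
                 * np.sin(theta))
    # global phase exp(i 2 pi z / lambda) has unit magnitude
    return np.exp(1j * 2 * np.pi * z / wavelength) * np.trapz(integrand, theta)
```

Sampling $|U_z|$ over a lateral and axial grid reproduces the familiar defocus spreading: the on-axis amplitude is largest in focus and decays with defocus.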

With Parseval's theorem and a simple transformation, we can transform Eq. (1) into

$$h_p(\mathbf{x},\mathbf{p},z,\mathbf{u})=\int_{\mathbf{x}'-\mathbf{x}}\left\|K_z\{U_z(\mathbf{x}'\mid\mathbf{p})\,t(\mathbf{x}'-\mathbf{x})\}\otimes\mathcal{F}^{-1}_{\mathbf{x}'-\mathbf{x}}(s(\mathbf{u}))\right\|_2^2\,d(\mathbf{x}'-\mathbf{x})\overset{\mathbf{x}''=\mathbf{x}'-\mathbf{x}}{=}\int_{\mathbf{x}''}\left\|K_z\{U_z(\mathbf{x}''+\mathbf{x}\mid\mathbf{p})\,t(\mathbf{x}'')\}\otimes\mathcal{F}^{-1}_{\mathbf{x}''}(s(\mathbf{u}))\right\|_2^2\,d\mathbf{x}'',\tag{3}$$
where $\otimes$ represents the 2D convolution operation, and $\mathcal{F}^{-1}_{\mathbf{x}'-\mathbf{x}}(\cdot)$ represents the 2D inverse Fourier transformation with the spatial coordinate $\mathbf{x}'-\mathbf{x}$. The spatial-invariance property $h_p(\mathbf{x},\mathbf{p},z,\mathbf{u})=h_p(\mathbf{x}+\Delta\mathbf{x},\mathbf{p}+\Delta\mathbf{x},z,\mathbf{u})$ can be clearly observed from Eq. (3) for each spatial frequency component.

The light field measurements $LF(\mathbf{x})$ captured by the sensor can then be represented as

$$LF(\mathbf{x})=\int_z\int_{\mathbf{p}}g(\mathbf{p},z)\left|h_l(\mathbf{x}+\mathbf{u},\mathbf{p},z)\right|^2d\mathbf{p}\,dz,\tag{4}$$
where $g(\mathbf{p},z)$ is the intensity distribution of the 3D volume. After the pixel realignment based on the coordinates x and u, the 4D low-resolution phase space $WDF_L(\mathbf{x},\mathbf{u})$ can be obtained from $LF(\mathbf{x})$. Due to the physical limitation of the microlens array, we cannot obtain the full sampling of x, indicating a low spatial sampling in the phase-space domain, which is the key origin of the artifacts. Owing to the smoothness constraint in phase space, an interpolation can then be applied to the captured data $WDF_L(\mathbf{x},\mathbf{u})$ during preprocessing, which is not considered in traditional 3D RL deconvolution. We use a simple cubic interpolation and a sub-aperture filter to apply the prior in phase space. Finally, the phase space $WDF(\mathbf{x},\mathbf{u})$ used for further deconvolution can be computed as below:
$$WDF(\mathbf{x},\mathbf{u})=\mathrm{cubic}\!\left(WDF_L(\mathbf{x},\mathbf{u})\right)\otimes\mathrm{filter}_{\mathbf{u}},\tag{5}$$
where $\mathrm{cubic}(\cdot)$ represents the 2D cubic interpolation, enforcing the smoothness constraint of phase space [23], and $\mathrm{filter}_{\mathbf{u}}$ is the low-pass filter corresponding to the sub-aperture frequency u. More complicated algorithms [21] or learning-based frameworks [24] can be adopted for better performance. Therefore, the imaging model can be transformed into the phase-space-domain representation

$$WDF(\mathbf{x},\mathbf{u})=\int_z\int_{\mathbf{p}}g(\mathbf{p},z)\left|h_p(\mathbf{x},\mathbf{p},z,\mathbf{u})\right|^2d\mathbf{p}\,dz.\tag{6}$$
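The preprocessing step described above (cubic up-sampling of each sub-aperture view, then low-pass filtering) can be sketched as follows. A fixed Gaussian stands in for the frequency-dependent sub-aperture filter, which is an assumption of this sketch rather than the paper's exact filter:

```python
import numpy as np
from scipy.ndimage import zoom, gaussian_filter

def preprocess_phase_space(wdf_low, factor):
    """Up-sample each sub-aperture view with 2D cubic interpolation,
    then low-pass filter it, in the spirit of Eq. (5).

    wdf_low : realigned measurements, shape (n_u, n_u, n_x, n_x)
    factor  : integer spatial up-sampling factor
    """
    n_u1, n_u2, n_x1, n_x2 = wdf_low.shape
    out = np.empty((n_u1, n_u2, n_x1 * factor, n_x2 * factor))
    for u1 in range(n_u1):
        for u2 in range(n_u2):
            # order=3 gives cubic spline interpolation
            view = zoom(wdf_low[u1, u2], factor, order=3, mode='nearest')
            # stand-in low-pass filter (assumed, not the paper's filter_u)
            out[u1, u2] = gaussian_filter(view, sigma=1.0)
    return out
```

In practice the filter bandwidth should depend on the sub-aperture coordinate u, as the text notes; a per-u filter bank can be substituted for the fixed Gaussian without changing the structure of the loop.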

For algorithm implementation, we further discretize the aforementioned imaging model, so that each pixel sample can be represented as below:

$$H_{i,j,k}=\int_{\beta_i}\left(\int_{\alpha_j}\int_{\gamma_k}\left|h_p(\mathbf{x},\mathbf{p},z,\mathbf{u})\right|^2d\mathbf{u}\,d\mathbf{x}\right)d\mathbf{p}\,dz,\tag{7}$$
where $\alpha_j$ is the area of a spatial pixel in phase space with index $j=(j_1,j_2)$, corresponding to the center position of the microlens, and a range of $N_j=(N_{j_1},N_{j_2})$. $\beta_i$ is the voxel volume on the object side with index $i=(i_1,i_2,i_3)$, and $\gamma_k$ is the area of a sensor pixel, corresponding to a specific spatial frequency range for a sub-aperture with index $k=(k_1,k_2)$. Similarly, the phase space $WDF(\mathbf{x},\mathbf{u})$ and the object $g(\mathbf{p},z)$ can be discretized as

$$Y_{j,k}=\int_{\alpha_j}\int_{\gamma_k}WDF(\mathbf{x},\mathbf{u})\,d\mathbf{u}\,d\mathbf{x},\tag{8}$$
$$X_i=\int_{\beta_i}g(\mathbf{p},z)\,d\mathbf{p}\,dz.\tag{9}$$

Therefore, the imaging model can be discretized as

$$Y_{j,k}=\sum_i X_i H_{i,j,k},\quad k=(k_1,k_2)\in\Omega,\tag{10}$$
where $\Omega$ is the set of all spatial frequency components, whose total number equals the number of pixels behind each microlens (13 × 13 in our setup). The common noise assumption in fluorescence imaging is Poisson-distributed shot noise [4], which transforms the imaging model into
$$\tilde{Y}_{j,k}=\mathrm{Pois}\!\left(\sum_i X_i H_{i,j,k}\right),\tag{11}$$
where $\mathrm{Pois}(\cdot)$ represents the Poisson noise process. Different spatial frequency components then have different shot-noise distributions due to the energy difference of the corresponding PSFs in phase space. Specifically, the PSFs of high-spatial-frequency components usually have lower intensity than those of low spatial frequency, indicating a lower SNR in terms of shot noise [25]. To make use of this prior knowledge, we solve the reconstruction problem as separate optimizations for the different spatial frequency components, inspired by the ptychographic methods used for coherent imaging [21]. In practice, we use the Richardson-Lucy algorithm to update the volume with the different spatial frequency components sequentially, with a weight $w_k$ to balance the shot-noise distribution across components. Within a large iteration $iter$, we update the volume by the following iterative formula:
$$X_i^{(k,iter)}\leftarrow(1-cw_k)\,X_i^{(k-1,iter)}+cw_k\,X_i^{(k-1,iter)}\odot\left(\sum_j\left(\tilde{Y}_{j,k}\,./\Big(\sum_{i'}X_{i'}^{(k-1,iter)}H_{i',j,k}\Big)\right)H_{i,N_j-j,k}\right)./\left(\sum_j H_{i,N_j-j,k}\right),\tag{12}$$
where $\odot$ represents the element-wise (dot) product and $./$ represents element-wise division. $c$ is a constant to adjust the convergence speed (c = 80 for all experiments here). $X_i^{(k,iter)}$ represents the volume updated from the $k$th spatial frequency component in iteration $iter$; it is set to all ones for the first iteration, and $X_i^{(1,iter+1)}=X_i^{(N_k+1,iter)}$. The weight $w_k$ corresponding to the spatial frequency $\mathbf{u}_k$ can be calculated from the energy distribution of the PSFs as below:

$$w_k=\frac{\sum_i\left\|H_{i,\cdot,k}\right\|_1}{\sum_{k'\in\Omega}\sum_i\left\|H_{i,\cdot,k'}\right\|_1}.\tag{13}$$

After going through all the spatial frequency components, our algorithm moves to the next large iteration until $X_i$ converges; it usually takes only 2 iterations. The whole algorithm pipeline is illustrated in Fig. 2. Benefiting from the spatially-uniform PSF for each spatial frequency component, we can use a discrete convolution operator to significantly reduce the computational cost. The detailed implementation of the phase-space deconvolution algorithm with a demo image can be found in Code 1 [22].
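The sequential weighted RL update of Eq. (12) can be sketched for a toy single-depth, two-dimensional case. This is a minimal sketch under simplifying assumptions (one depth plane, dictionary inputs, and the constant c rescaled so that c·w_k stays within [0, 1]); the paper's full multi-depth implementation is in Code 1 [22]:

```python
import numpy as np
from scipy.signal import fftconvolve

def phase_space_rl(measurements, psfs, weights, n_iter=2, c=0.8):
    """Sequential weighted RL update over spatial-frequency components,
    a toy single-depth 2D sketch of the update rule above.

    measurements : dict {k: preprocessed phase-space image Y_k}
    psfs         : dict {k: spatially-uniform 2D PSF H_k, sum > 0}
    weights      : dict {k: scalar weight w_k}
    """
    shape = next(iter(measurements.values())).shape
    x = np.ones(shape)                      # all-ones initialization
    eps = 1e-12                             # avoid division by zero
    for _ in range(n_iter):
        for k, y in measurements.items():
            h = psfs[k]
            h_flip = h[::-1, ::-1]          # adjoint of the convolution
            fwd = fftconvolve(x, h, mode='same') + eps      # A x
            ratio = fftconvolve(y / fwd, h_flip, mode='same')
            norm = fftconvolve(np.ones(shape), h_flip, mode='same') + eps
            step = c * weights[k]
            # relaxed RL step: blend previous estimate with RL update
            x = (1 - step) * x + step * x * ratio / norm
    return x
```

Because each PSF is spatially uniform, the forward and adjoint operators are plain convolutions (here via FFT), which is exactly where the reported speedup over the spatially-varying 3D PSF comes from.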


Fig. 2 Pipeline of the phase-space deconvolution algorithm. (a) The procedures to calculate the PSFs used for phase-space deconvolution, with typical examples of the output for each step. (b) The pre-processing procedures applied to the captured light field image with typical outputs for each step. We first realign the pixels according to the coordinates defined in phase space and then up-sample the phase-space measurements with sub-aperture filtering to apply the smoothness prior in phase space. (c) The procedures of the iterative reconstruction with typical outputs for each step. Within each iteration, we update the volume with different spatial frequency components sequentially by the corresponding PSFs and preprocessed measurements.


3. Results

For a quantitative analysis of both reconstruction performance and convergence speed, we conducted a numerical simulation of two 8-μm-diameter fluorescence beads separated by 20 μm laterally, with the same parameters as the experimental setup described before (63 × / 1.4 NA oil objective lens and f/21, 100 μm pitch microlens array). We ran both the traditional 3D deconvolution and our phase-space deconvolution in Matlab 2017 on a computer with an Intel Core i9 processor (2.8 GHz) and 64 Gbytes of memory. The reconstructed SNR is calculated as below:

$$\mathrm{SNR}=10\log_{10}\frac{\left\|X\right\|_2^2}{\left\|X-Y\right\|_2^2},$$
where X represents the ground truth and Y represents the reconstructed 3D volume. The reconstructed axial slices of both methods with the highest SNR are shown in Fig. 3(a), together with the ground truth for detailed comparison. Our phase-space deconvolution can not only retrieve a sharper image with higher contrast across the whole depth range, but also produces much fewer artifacts close to the native object plane. The performance improvement is more remarkable for samples with background fluorescence. Some halo effects can be observed around the beads, which may be introduced by the insufficient sampling in the spatial domain of LFM; our deconvolution process further emphasizes this effect, which may reduce the quantification capability, but the intensity of the halo is much smaller than that of the useful information. In addition to the higher peak SNR and fewer artifacts, far fewer iterations are required for convergence by the phase-space deconvolution (Fig. 3(b)). The complicated and sharp PSF used in traditional 3D deconvolution can easily overfit the ill-posed high-resolution 3D reconstruction problem, as illustrated by the quickly-decreasing SNR with increasing iteration number. On the contrary, the phase-space deconvolution is more robust with the spatially-uniform and smooth PSF. Due to the high convergence speed of phase-space deconvolution, we calculate the computation time over multiple inner iterations within the large iteration covering all the spatial frequency components, as shown in Fig. 3(c). Taking 6 dB SNR as the reference, the phase-space deconvolution (~80 seconds to reach 6 dB) is more than 11 times faster than the traditional 3D RL deconvolution (~940 seconds to reach 6 dB) for LFM reconstruction. In addition, we add TV regularization [26] to both algorithms for comparison. Despite slight improvements in SNR and better robustness to overfitting for both algorithms, the advanced optimization increases the computational cost (Figs. 3(b) and 3(c)).
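The SNR metric defined above is a direct transcription in code:

```python
import numpy as np

def reconstruction_snr(ground_truth, recon):
    """Reconstruction SNR in dB: 10 log10(||X||_2^2 / ||X - Y||_2^2)."""
    x = np.asarray(ground_truth, dtype=float)
    y = np.asarray(recon, dtype=float)
    return 10.0 * np.log10(np.sum(x ** 2) / np.sum((x - y) ** 2))
```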


Fig. 3 Numerical simulations for quantitative analysis of reconstruction performance and convergence speed. (a) Different axial slices (arranged from left to right with depth marked on the top) reconstructed by the traditional 3D RL deconvolution (first row), the traditional 3D RL deconvolution with TV regularization (second row), the phase-space deconvolution (third row), and the phase-space deconvolution with TV regularization (fourth row) at highest SNR, together with the ground truth for comparison (fifth row). Scale bar, 10 μm. (b) The SNR-versus-iteration curve illustrates that the phase-space deconvolution has much faster convergence and more robust performance than the traditional 3D RL deconvolution for LFM reconstruction. TV regularization improves the reconstructed SNR slightly with better robustness to overfitting. (c) The SNR-versus-time curve shows an ~11-times reduction in computation cost for the phase-space deconvolution (with or without TV regularization) compared with the traditional 3D RL deconvolution, taking 6 dB SNR (dashed line) as reference. The advanced optimization with TV regularization slows down both algorithms similarly.


To experimentally verify the enhanced performance of our phase-space deconvolution, we imaged different biological samples with the LFM described before (Fig. 1(a)) and compared the results to wide-field microscopy (WFM) with axial scanning by the 63 × / 1.4 NA oil objective lens. The same excitation power and exposure time were used for each frame. Firstly, we imaged static GFP-labeled B16 cells, which were seeded on cover slides at 35%-50% confluency and cultured in RPMI-1640 medium overnight. The orthogonal maximum intensity projections (MIPs) of the volumes reconstructed by LFM with the 3D RL deconvolution using 2 iterations and 20 iterations are shown in Figs. 4(a) and 4(b), respectively. With small iteration numbers, the cell looks blurry without detailed structures. Moreover, in the axial planes close to the native object plane, shown on the right side, the reconstructed results have quite low resolution and appear as large pixel blocks. When we increase the iteration number, as shown in Fig. 4(b), the images become sharper but strong artifacts can be observed close to the native object plane. The distribution of the artifacts is related to the sample itself in a complicated way, so they cannot be regarded as a fixed noise pattern and are thus hard to remove. On the contrary, almost no artifacts are left in Fig. 4(c) with the phase-space deconvolution (2 iterations). In addition, our result also reveals many more subcellular structures than the 3D RL deconvolution with only 2 iterations. Figure 4(d) shows the results reconstructed by WFM using 3D deconvolution with 300 iterations, illustrating subcellular structures similar to those reconstructed by our method.


Fig. 4 Performance comparison on the GFP-labeled B16 cell. The orthogonal maximum intensity projections (MIP) of the reconstructed LFM volume with the 3D RL deconvolution using 2 iterations and 20 iterations are shown in (a) and (b) respectively. (c) The orthogonal MIPs of the reconstructed LFM volume with our phase-space deconvolution (2 iterations), showing no artifacts and more subcellular structures. (d) The orthogonal MIPs of the reconstructed volume by the wide-field deconvolution microscopy with axial scanning (300 iterations). Different axial slices close to the native object plane at different depths from the reconstructed volume are shown on the right for all the methods. Both deconvolution algorithms show the optical sectioning ability of LFM, while our phase-space deconvolution has no artifacts and achieves much higher contrast than the traditional 3D RL deconvolution. Scale bars, 10 μm.


Moreover, we imaged a histone-labeled C. elegans for a demonstration of dynamic samples (also with the 63 × / 1.4 NA oil objective lens), as shown in Fig. 5 and Visualization 1. Figure 5 displays the results reconstructed by the 3D RL deconvolution with 2 iterations, the 3D RL deconvolution with 20 iterations, and the phase-space deconvolution with 2 iterations, respectively. The volume reconstructed by the 3D RL deconvolution is quite blurry without enough iterations. However, if 10 times more iterations of the 3D RL deconvolution are used for higher resolution in most axial planes, heavy artifacts are introduced simultaneously (marked with yellow arrows in Fig. 5(b)) and the image contrast is greatly reduced. Our phase-space deconvolution exhibits clearer subcellular structures than the other methods, with no artifacts and a small iteration number (2 iterations). Due to the large size of the reconstructed volume (1079 × 1079 × 81 voxels, corresponding to ~100 Megavoxels), the 3D RL deconvolution takes around 188.3 minutes to process a single frame with an acceptable resolution regardless of artifacts, while only 14.6 minutes are required for our phase-space deconvolution with much better performance, as shown in Fig. 5(c). Such a computational acceleration makes the 3D reconstruction of a video with hundreds of frames feasible within one day, which is necessary for the practical use of LFM, especially for this kind of high-speed large-scale volumetric time-lapse imaging.


Fig. 5 The dynamics of a freely-moving histone-labeled C. elegans imaged at 50 volumes per second. The reconstructed orthogonal MIPs at one time stamp by the 3D RL deconvolution using 2 iterations and 20 iterations are shown in (a) and (b) respectively. Zoom-in views of the marked yellow box at different time stamps are shown on the right. Reconstruction by traditional 3D RL deconvolution with low iteration numbers has a low spatial resolution, while higher iteration numbers will induce more artifacts. (c) The reconstructed orthogonal MIPs by the phase-space deconvolution (2 iterations). The zoom-in views in the same area are shown on the right. Higher image contrast in both xy and xz planes can be observed with no artifacts and much less computation time. Scale bars, 10 μm.


To further demonstrate the contrast enhancement of our proposed phase-space deconvolution algorithm, we imaged a USAF-1951 resolution chart placed at different axial positions away from the native object plane (z = 0 μm) with a 10 × / 0.3 NA dry objective lens. Figure 6(a) shows the reconstructed in-focus plane (z = −150 μm) by the 3D RL deconvolution algorithm with 2 iterations. More iterations can further enhance the contrast, as shown in Fig. 6(b). The results obtained by the phase-space deconvolution algorithm show even better contrast in Fig. 6(c). In addition, the line pairs (group 5 element 6, 57 cycles/mm) that can be distinguished by the phase-space deconvolution show no contrast for the 3D RL deconvolution with the same iteration number. We further placed the resolution chart in the native object plane (z = 0 μm), as shown in Figs. 6(d)-6(f). The phase-space deconvolution shows higher contrast in the native object plane without the pixelation that appears in the 3D RL deconvolution. Additionally, we plot the modulation transfer function (MTF) of the reconstructed line pairs (group 5 element 6) with the resolution chart placed at different axial positions, which illustrates that both algorithms can achieve similar resolution in the defocused planes with enough iterations. However, the 3D RL deconvolution shows a drop of the MTF close to the native object plane, while the phase-space deconvolution achieves a smooth MTF curve. Our method cannot get around the low sampling density at the native focal plane, which is limited by the LFM, and cannot improve the intrinsic resolution of LFM. But with the smoothness constraint applied in the phase-space domain, we can reduce the reconstruction artifacts and improve the image contrast, especially for the axial planes close to the native object plane.


Fig. 6 Contrast comparison by imaging a USAF-1951 resolution chart. The resolution chart is placed at different axial planes away from the native object plane (z = 0 μm) and imaged using the LFM with a 10 × / 0.3 NA dry objective. (a-c) The reconstructed in-focus plane (z = −150 μm) by 3D RL deconvolution with 2 iterations, with 5 iterations, and phase-space deconvolution with 2 iterations, respectively. (d-f) The reconstructed in-focus plane (z = 0 μm) by 3D RL deconvolution with 2 iterations, with 5 iterations, and phase-space deconvolution with 2 iterations, respectively. The cross-section profiles of the marked yellow lines in a-f are shown on the right. (g) The modulation transfer function (MTF) of the line pairs (group 5 element 6) in the resolution chart placed at different axial positions. Scale bar, 100 μm.


In addition, we find that strong background fluorescence aggravates the artifacts in traditional 3D RL deconvolution and degrades the imaging performance, which is quite common for in vivo imaging [2]. Our phase-space deconvolution achieves better performance by reducing the artifacts. To verify this capability under strong background fluorescence, we imaged a GFP-labeled B16 tumor spheroid with multiple cells distributed in 3D, most of which contribute to the background. The GFP-B16 cells were maintained in RPMI-1640 medium supplemented with 10% FBS, 1% Pen/Strep antibiotics, and 1% NEAA (all from GIBCO). For the preparation of GFP-B16 tumor spheroids, 4 × 10^3 cells per well were seeded in round-bottomed 96-well plates (Corning) and cultured in RPMI-1640 medium supplemented with 10% FBS, 2% B-27 supplement (GIBCO), 2% methylcellulose (Sigma Aldrich), 1% Pen/Strep antibiotics, and 1% NEAA. After 2 days, formed spheroids were picked and transferred at 1 spheroid per well, and cultured for another 2 days. For imaging, the GFP-B16 tumor spheroids were transferred to a Lab-Tek II cover-glass-bottomed 8-chamber slide and imaged in HBSS supplemented with 2% FBS (all from Invitrogen) on our LFM (also with the 63 × / 1.4 NA oil objective lens). As shown in Fig. 7, traditional 3D deconvolution with 2 iterations shows blurry images of the different axial planes with almost no difference between them. When we increase the iteration number, the images become sharper but strong artifacts can be clearly observed in the planes close to the native object plane. On the contrary, our phase-space deconvolution achieves sharp images for all the axial planes without clear artifacts. This comparison shows the robustness of our algorithm for samples with strong background fluorescence.


Fig. 7 Performance comparison on the GFP-labeled B16 tumor spheroid with strong background fluorescence. (a-c) The orthogonal maximum intensity projections (MIPs) of the reconstructed volume by the 3D RL deconvolution with 2 iterations, with 20 iterations, and phase-space deconvolution with 2 iterations, respectively. (d-f) Different axial slices of the reconstructed volumes with the depths marked at the bottom and the algorithms on the left. Scale bars, 10 μm.


4. Conclusions

To summarize, we propose a phase-space deconvolution algorithm for 3D reconstruction in LFM. Compared with the traditional 3D RL deconvolution, our method achieves much better image contrast, significantly reduces the reconstruction artifacts, and lowers the computational cost roughly tenfold. The sudden drop of resolution and the rapid increase of artifacts close to the native object plane greatly limit the effective depth range of traditional LFM methods. Such a discontinuity also prevents clear visualization of the 3D volume and poses great challenges to quantitative cell analysis [27] and most image-processing algorithms, such as segmentation [28] and tracking [29]. The proposed phase-space deconvolution successfully retrieves the corrupted information and facilitates a smooth 3D reconstruction. We believe the improvements in both performance and speed can further promote the use of LFM in various biological studies. In addition, since our method requires no hardware modifications, unlike other performance-enhanced LFM methods [5,30,31], the tremendous amount of data [1] captured by LFM in previous studies can be reinterpreted with our algorithm, which may yield more insights through the enhanced contrast. The new modeling framework in the 4D phase-space domain may also inspire better optical designs for LFM and other computational optics.

Funding

National Natural Science Foundation of China (NSFC) (61327902, 61671265); Beijing Municipal Science & Technology Commission (BMSTC) (Z181100003118014).

Acknowledgment

We thank Guangshuo Ou and Hao Zhu for providing the histone-labeled C. elegans.

References

1. R. Prevedel, Y.-G. Yoon, M. Hoffmann, N. Pak, G. Wetzstein, S. Kato, T. Schrödel, R. Raskar, M. Zimmer, E. S. Boyden, and A. Vaziri, “Simultaneous whole-animal 3D imaging of neuronal activity using light-field microscopy,” Nat. Methods 11(7), 727–730 (2014). [CrossRef]   [PubMed]  

2. T. Nöbauer, O. Skocek, A. J. Pernía-Andrade, L. Weilguny, F. M. Traub, M. I. Molodtsov, and A. Vaziri, “Video rate volumetric Ca2+ imaging across cortex using seeded iterative demixing (SID) microscopy,” Nat. Methods 14(8), 811–818 (2017). [CrossRef]   [PubMed]  

3. L. Waller, G. Situ, and J. W. Fleischer, “Phase-space measurement and coherence synthesis of optical beams,” Nat. Photonics 6(7), 474–479 (2012). [CrossRef]  

4. M. Broxton, L. Grosenick, S. Yang, N. Cohen, A. Andalman, K. Deisseroth, and M. Levoy, “Wave optics theory and 3-D deconvolution for the light field microscope,” Opt. Express 21(21), 25418–25439 (2013). [CrossRef]   [PubMed]  

5. L. Cong, Z. Wang, Y. Chai, W. Hang, C. Shang, W. Yang, L. Bai, J. Du, K. Wang, and Q. Wen, “Rapid whole brain imaging of neural activity in freely behaving larval zebrafish (Danio rerio),” eLife 6, e28158 (2017). [CrossRef]   [PubMed]  

6. M. Shaw, H. Zhan, M. Elmi, V. Pawar, C. Essmann, and M. A. Srinivasan, “Three-dimensional behavioural phenotyping of freely moving C. elegans using quantitative light field microscopy,” PLoS One 13(7), e0200108 (2018). [CrossRef]   [PubMed]  

7. M. Levoy, R. Ng, A. Adams, M. Footer, and M. Horowitz, “Light field microscopy,” ACM Trans. Graph. 25(3), 924–934 (2006). [CrossRef]  

8. J. Wu, B. Xiong, X. Lin, J. He, J. Suo, and Q. Dai, “Snapshot hyperspectral volumetric microscopy,” Sci. Rep. 6(1), 24624 (2016). [CrossRef]   [PubMed]  

9. X. Lin, J. Wu, G. Zheng, and Q. Dai, “Camera array based light field microscopy,” Biomed. Opt. Express 6(9), 3179–3189 (2015). [CrossRef]   [PubMed]  

10. Z. Xiao, Q. Wang, G. Zhou, and J. Yu, “Aliasing Detection and Reduction Scheme on Angularly Undersampled Light Fields,” IEEE Trans. Image Process. 26(5), 2103–2115 (2017). [CrossRef]   [PubMed]  

11. M. Martínez-Corral and B. Javidi, “Fundamentals of 3D imaging and displays: a tutorial on integral imaging, light-field, and plenoptic systems,” Adv. Opt. Photonics 10(3), 512 (2018). [CrossRef]  

12. E. Lee, S. Yang, M. Han, and J. Kim, “Depth-based refocusing for reducing directional aliasing artifacts,” Opt. Express 24(24), 28065–28079 (2016). [CrossRef]   [PubMed]  

13. J. Demmerle, C. Innocent, A. J. North, G. Ball, M. Müller, E. Miron, A. Matsuda, I. M. Dobbie, Y. Markaki, and L. Schermelleh, “Strategic and practical guidelines for successful structured illumination microscopy,” Nat. Protoc. 12(5), 988–1010 (2017). [CrossRef]   [PubMed]  

14. X. Huang, J. Fan, L. Li, H. Liu, R. Wu, Y. Wu, L. Wei, H. Mao, A. Lal, P. Xi, L. Tang, Y. Zhang, Y. Liu, S. Tan, and L. Chen, “Fast, long-term, super-resolution imaging with Hessian structured illumination microscopy,” Nat. Biotechnol. 36(5), 451–459 (2018). [CrossRef]   [PubMed]  

15. T. L. Liu, S. Upadhyayula, D. E. Milkie, V. Singh, K. Wang, I. A. Swinburne, K. R. Mosaliganti, Z. M. Collins, T. W. Hiscock, J. Shea, A. Q. Kohrman, T. N. Medwig, D. Dambournet, R. Forster, B. Cunniff, Y. Ruan, H. Yashiro, S. Scholpp, E. M. Meyerowitz, D. Hockemeyer, D. G. Drubin, B. L. Martin, D. Q. Matus, M. Koyama, S. G. Megason, T. Kirchhausen, and E. Betzig, “Observing the cell in its native state: Imaging subcellular dynamics in multicellular organisms,” Science 360(6386), 1392 (2018). [PubMed]  

16. G. Vicidomini, S. W. Hell, and A. Schönle, “Automatic deconvolution of 4Pi-microscopy data with arbitrary phase,” Opt. Lett. 34(22), 3583–3585 (2009). [CrossRef]   [PubMed]  

17. H. Li, C. Guo, D. Kim-Holzapfel, W. Li, Y. Altshuller, B. Schroeder, W. Liu, Y. Meng, J. B. French, K.-I. Takamaru, M. A. Frohman, and S. Jia, “Fast, volumetric live-cell imaging using high-resolution light-field microscopy,” Biomed. Opt. Express 10(1), 29–49 (2018). [CrossRef]   [PubMed]  

18. H.-Y. Liu, J. Zhong, and L. Waller, “Multiplexed phase-space imaging for 3D fluorescence microscopy,” Opt. Express 25(13), 14986–14995 (2017). [CrossRef]   [PubMed]  

19. H.-Y. Liu, E. Jonas, L. Tian, J. Zhong, B. Recht, and L. Waller, “3D imaging in volumetric scattering media using phase-space measurements,” Opt. Express 23(11), 14461–14471 (2015). [CrossRef]   [PubMed]  

20. T. T. Do, L. Gan, N. H. Nguyen, and T. D. Tran, “Fast and efficient compressive sensing using structurally random matrices,” IEEE Trans. Signal Process. 60(1), 139–154 (2012). [CrossRef]  

21. Z. Zhang, Y. Liu, and Q. Dai, “Light field from micro-baseline image pair,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2015), pp. 3800–3809.

22. Z. Lu, “LightField-deconvolution,” GitHub (2019), https://github.com/bbncWLG/LightField-deconvolution.

23. R. G. Keys, “Cubic convolution interpolation for digital image processing,” IEEE Trans. Acoust. Speech Signal Process. 29(6), 1153–1160 (1981). [CrossRef]  

24. G. Wu, M. Zhao, L. Wang, Q. Dai, T. Chai, and Y. Liu, “Light field reconstruction using deep convolutional network on EPI,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2017), pp. 1638–1646.

25. G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nat. Photonics 7(9), 739–745 (2013). [CrossRef]   [PubMed]  

26. X. Ou, J. Chung, R. Horstmeyer, and C. Yang, “Aperture scanning Fourier ptychographic microscopy,” Biomed. Opt. Express 7(8), 3140–3150 (2016). [CrossRef]   [PubMed]  

27. C. Brede, M. Friedrich, A. L. Jordán-Garrote, S. S. Riedel, C. A. Bäuerlein, K. G. Heinze, T. Bopp, S. Schulz, A. Mottok, C. Kiesel, K. Mattenheimer, M. Ritz, V. von Krosigk, A. Rosenwald, H. Einsele, R. S. Negrin, G. S. Harms, and A. Beilhack, “Mapping immune processes in intact tissues at cellular resolution,” J. Clin. Invest. 122(12), 4439–4446 (2012). [CrossRef]   [PubMed]  

28. C. J. Niedworok, A. P. Y. Brown, M. Jorge Cardoso, P. Osten, S. Ourselin, M. Modat, and T. W. Margrie, “aMAP is a validated pipeline for registration and segmentation of high-resolution mouse brain data,” Nat. Commun. 7(1), 11879 (2016). [CrossRef]   [PubMed]  

29. J. Y. Tinevez, N. Perry, J. Schindelin, G. M. Hoopes, G. D. Reynolds, E. Laplantine, S. Y. Bednarek, S. L. Shorte, and K. W. Eliceiri, “TrackMate: An open and extensible platform for single-particle tracking,” Methods 115, 80–90 (2017). [CrossRef]   [PubMed]  

30. N. Cohen, S. Yang, A. Andalman, M. Broxton, L. Grosenick, K. Deisseroth, M. Horowitz, and M. Levoy, “Enhancing the performance of the light field microscope using wavefront coding,” Opt. Express 22(20), 24817–24839 (2014). [CrossRef]   [PubMed]  

31. M. A. Taylor, T. Nöbauer, A. Pernia-Andrade, F. Schlumm, and A. Vaziri, “Brain-wide 3D light-field imaging of neuronal activity with speckle-enhanced resolution,” Optica 5(4), 345 (2018). [CrossRef]  

Supplementary Material (2)

Code 1: MATLAB code for “Phase-space deconvolution for light field microscopy”
Visualization 1: The dynamics of a freely-moving histone-labeled C. elegans imaged at 50 volumes per second.



Figures (7)

Fig. 1 Schematic of the experimental setup and the principle of phase-space deconvolution. (a) The schematic of the experimental setup, with illustrations showing the preprocessing procedures for phase-space deconvolution. Different spatial frequency components (marked with different colors) are extracted from the LFM raw data into higher dimensions, followed by an up-sampling interpolation for high-resolution reconstruction. (b) The axial slice (at z = 5 μm) of the traditional point spread function (PSF) used in 3D deconvolution for LFM, with a cross-section profile showing its large size and periodic patterns. (c) The axial slices (at z = 5 μm) of the PSFs after realignment used in phase-space deconvolution for two typical spatial frequency components, with cross-section profiles showing their small size and smoothness. Scale bar, 10 μm.
Fig. 2 Pipeline of the phase-space deconvolution algorithm. (a) The procedures to calculate the PSFs used for phase-space deconvolution, with typical examples of the output of each step. (b) The pre-processing procedures applied to the captured light field image, with typical outputs of each step. We first realign the pixels according to the coordinates defined in phase space and then up-sample the phase-space measurements with sub-aperture filtering to apply the smoothness prior in phase space. (c) The procedures of the iterative reconstruction, with typical outputs of each step. Within each iteration, we update the volume with different spatial frequency components sequentially, using the corresponding PSFs and preprocessed measurements.
Fig. 3 Numerical simulations for quantitative analysis of reconstruction performance and convergence speed. (a) Different axial slices (arranged from left to right with depth marked on the top) reconstructed by the traditional 3D RL deconvolution (first row), the traditional 3D RL deconvolution with TV regularization (second row), the phase-space deconvolution (third row), and the phase-space deconvolution with TV regularization (fourth row) at the highest SNR, together with the ground truth for comparison (fifth row). Scale bar, 10 μm. (b) The SNR-versus-iteration curve illustrates that the phase-space deconvolution has a much faster convergence speed and more robust performance than the traditional 3D RL deconvolution for LFM reconstruction. TV regularization slightly improves the reconstructed SNR, with better robustness to overfitting. (c) The SNR-versus-time curve shows an ~11-fold reduction in computational cost for the phase-space deconvolution (with or without TV regularization) compared with the traditional 3D RL deconvolution, taking 6 dB SNR (dashed line) as the reference. TV regularization slows down both algorithms similarly.
Fig. 4 Performance comparison on the GFP-labeled B16 cell. The orthogonal maximum intensity projections (MIP) of the reconstructed LFM volume with the 3D RL deconvolution using 2 iterations and 20 iterations are shown in (a) and (b) respectively. (c) The orthogonal MIPs of the reconstructed LFM volume with our phase-space deconvolution (2 iterations), showing no artifacts and more subcellular structures. (d) The orthogonal MIPs of the reconstructed volume by the wide-field deconvolution microscopy with axial scanning (300 iterations). Different axial slices close to the native object plane at different depths from the reconstructed volume are shown on the right for all the methods. Both deconvolution algorithms show the optical sectioning ability of LFM, while our phase-space deconvolution has no artifacts and achieves much higher contrast than the traditional 3D RL deconvolution. Scale bars, 10 μm.
Fig. 5 The dynamics of a freely-moving histone-labeled C. elegans imaged at 50 volumes per second. The reconstructed orthogonal MIPs at one time stamp by the 3D RL deconvolution using 2 iterations and 20 iterations are shown in (a) and (b) respectively. Zoom-in views of the marked yellow box at different time stamps are shown on the right. Reconstruction by traditional 3D RL deconvolution with low iteration numbers has a low spatial resolution, while higher iteration numbers will induce more artifacts. (c) The reconstructed orthogonal MIPs by the phase-space deconvolution (2 iterations). The zoom-in views in the same area are shown on the right. Higher image contrast in both xy and xz planes can be observed with no artifacts and much less computation time. Scale bars, 10 μm.
Fig. 6 Contrast comparison by imaging a USAF-1951 resolution chart. The resolution chart is placed at different axial planes away from the native object plane (z = 0 μm) and imaged using the LFM with a 10 × 0.3NA dry objective. (a-c) The reconstructed in-focus plane (z = −150 μm) by 3D RL deconvolution with 2 iterations, with 5 iterations, and phase-space deconvolution with 2 iterations, respectively. (d-f) The reconstructed in-focus plane (z = 0 μm) by 3D RL deconvolution algorithm with 2 iterations, with 5 iterations, and phase-space deconvolution with 2 iterations, respectively. The cross-section profiles of the marked yellow lines in a-f are shown on the right. (g) The modulation transfer function (MTF) of the line pairs (group 5 line 6) in the resolution chart placed at different axial positions. Scale bar, 100 μm.

Equations (15)


$$h_p(x,p,z,u)=\int\big\|K_z\,\mathcal{F}_{\omega_{x-x'}}\{U_z(x'-p)\,t(x-x')\}\,s(u)\big\|_2^2\,d\omega_{x-x'},$$

$$U_z(x-p)=\frac{M}{f_{\mathrm{obj}}^2\lambda^2}\exp\!\left(\frac{i2\pi}{\lambda}z\right)\int_0^{\alpha}\sqrt{\cos\theta}\,\exp\!\left(\frac{i4\pi z\sin^2(\theta/2)}{\lambda}\right)J_0\!\left(\frac{2\pi\sin\theta}{\lambda}\sqrt{(x_1-p_1)^2+(x_2-p_2)^2}\right)\sin\theta\,d\theta,$$

$$h_p(x,p,z,u)=\int\big\|K_z\{U_z(x'-p)\,t(x-x')\}\ast\mathcal{F}^{-1}_{x-x'}(s(u))\big\|_2^2\,d(x-x')\;\overset{x''=x'-x}{=}\;\int\big\|K_z\{U_z(x+x''-p)\,t(x'')\}\ast\mathcal{F}^{-1}_{x''}(s(u))\big\|_2^2\,dx'',$$
$$\mathrm{LF}(x)=\int_z\!\int_p g(p,z)\,|h_l(x+u,p,z)|^2\,dp\,dz,$$

$$\mathrm{WDF}(x,u)=\mathrm{cubic}\big(\mathrm{WDF}_L(x,u)\big)\otimes\mathrm{filter}_u,$$

$$\mathrm{WDF}(x,u)=\int_z\!\int_p g(p,z)\,|h_p(x,p,z,u)|^2\,dp\,dz.$$
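The preprocessing that produces the phase-space measurements — realigning the raw sensor image into one low-resolution view per angular coordinate, then cubic interpolation of each view followed by a sub-aperture low-pass filter to impose the smoothness prior — can be sketched as below. This is a minimal illustration, not the authors' released code: the Gaussian kernel standing in for filter_u, the n_u pixel layout, and the function names are assumptions.

```python
import numpy as np
from scipy.ndimage import zoom, gaussian_filter

def realign_to_phase_space(lf_raw, n_u):
    """Realign an LFM raw image into phase-space views WDF_L(x, u):
    one low-resolution spatial image per angular coordinate u = (u1, u2).
    Assumes an n_u x n_u grid of pixels behind every microlens."""
    h, w = lf_raw.shape
    hs, ws = h // n_u, w // n_u
    views = np.empty((n_u, n_u, hs, ws))
    for u1 in range(n_u):
        for u2 in range(n_u):
            # Same pixel under every microlens -> one sub-aperture view.
            views[u1, u2] = lf_raw[u1::n_u, u2::n_u][:hs, :ws]
    return views

def upsample_phase_space(views, factor, sigma=1.0):
    """Apply the smoothness prior: cubic interpolation of each
    sub-aperture view followed by a low-pass sub-aperture filter
    (a Gaussian here, standing in for the paper's filter_u)."""
    n_u = views.shape[0]
    out = []
    for u1 in range(n_u):
        row = []
        for u2 in range(n_u):
            up = zoom(views[u1, u2], factor, order=3)  # cubic interpolation
            row.append(gaussian_filter(up, sigma))     # sub-aperture filtering
        out.append(row)
    return np.array(out)
```

The result is the 4D stack WDF(x, u) that the iterative reconstruction consumes, with each angular component stored as its own spatially uniform image.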
$$H_{i,j,k}=\int_{\beta_i}\!\left(\int_{\alpha_j}\!\int_{\gamma_k}|h_p(x,p,z,u)|^2\,du\,dx\right)dp\,dz,$$

$$Y_{j,k}=\int_{\alpha_j}\!\int_{\gamma_k}\mathrm{WDF}(x,u)\,du\,dx,$$

$$X_i=\int_{\beta_i}g(p,z)\,dp\,dz.$$

$$Y_{j,k}=\sum_i X_i H_{i,j,k},\qquad k\in\Omega=\{(k_1,k_2)\},$$

$$\tilde{Y}_{j,k}=\mathrm{Pois}\Big(\sum_i X_i H_{i,j,k}\Big),$$
$$X_i^{(k,\,\mathrm{iter})}\leftarrow(1-c\,w_k)\,X_i^{(k-1,\,\mathrm{iter})}+c\,w_k\,X_i^{(k-1,\,\mathrm{iter})}\,\frac{\sum_j\Big(\tilde{Y}_{j,k}\big/\sum_{i'}X_{i'}^{(k-1,\,\mathrm{iter})}H_{i',j,k}\Big)H_{i,\,N_j-j,\,k}}{\sum_j H_{i,\,N_j-j,\,k}},$$

$$w_k=\frac{\sum_i\|H_{i,k}\|_1}{\sum_{k'\in\Omega}\sum_i\|H_{i,k'}\|_1}.$$

$$\mathrm{SNR}=10\log_{10}\frac{\|X\|_2^2}{\|X-Y\|_2^2}.$$
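The discrete forward model Y_{j,k} = Σ_i X_i H_{i,j,k}, one step of the weighted RL update applied sequentially over phase-space components k, and the SNR metric can be sketched as follows. This is a minimal sketch under simplifying assumptions, not the authors' MATLAB code (Code 1): the sum over i is implemented as an FFT-based convolution per depth plane, which is valid because the phase-space PSFs are spatially uniform; the flipped PSF stands in for H_{i, N_j−j, k}; all function names are illustrative.

```python
import numpy as np
from scipy.signal import fftconvolve

def forward(X, H_k):
    """Project the volume X (nz, ny, nx) to one phase-space measurement
    using the depth-dependent, spatially-uniform PSF stack H_k (nz, ky, kx)."""
    return sum(fftconvolve(X[z], H_k[z], mode='same') for z in range(X.shape[0]))

def rl_step_component(X, Y_k, H_k, w_k, c=1.0, eps=1e-9):
    """One weighted Richardson-Lucy update for a single phase-space
    component k, blending the previous estimate with the RL-corrected
    one using weight c * w_k."""
    est = forward(X, H_k) + eps
    ratio = Y_k / est                     # measured / estimated
    H_flip = H_k[:, ::-1, ::-1]           # adjoint: flipped PSF
    X_new = np.empty_like(X)
    for z in range(X.shape[0]):
        num = fftconvolve(ratio, H_flip[z], mode='same')
        den = fftconvolve(np.ones_like(ratio), H_flip[z], mode='same') + eps
        X_new[z] = (1 - c * w_k) * X[z] + c * w_k * X[z] * num / den
    return np.maximum(X_new, 0)           # keep fluorescence non-negative

def snr_db(X, Y):
    """SNR metric used for evaluation: 10*log10(||X||^2 / ||X - Y||^2)."""
    return 10 * np.log10(np.sum(X**2) / np.sum((X - Y)**2))
```

Within each outer iteration, one would cycle k over the components in Ω with their energy-based weights w_k, calling rl_step_component once per component; the small, smooth PSFs keep each convolution cheap, which is where the reported speedup comes from.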