
Spatial-temporal low-rank prior for low-light volumetric fluorescence imaging

Open Access

Abstract

In biological fluorescence imaging, obtaining volumetric images with high spatial-temporal resolution under low-light conditions is a critical requirement. As a widely used snapshot volumetric imaging modality, light field microscopy (LFM) suffers from reconstruction artifacts that impede its imaging performance, especially under low-light conditions. Fortunately, low-rank prior-based approaches have recently shown great success in image, video and volume denoising. In this paper, we propose an approach based on a spatial-temporal low-rank prior that combines weighted nuclear norm minimization (WNNM) denoising with phase-space 3D deconvolution to enhance the performance of LFM under low-light conditions. We evaluated the method quantitatively through various numerical simulations. Experiments on fluorescence beads and Drosophila larvae were also conducted to show the effectiveness of our approach in biological applications.

© 2021 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

In biological fluorescence imaging, high light exposure provides high signal-to-noise ratio (SNR) images but leads to heavy phototoxicity and photobleaching. For longer capture durations and less damage to specimens, low-light conditions are always preferred. Commonly, there is a compromise among light intensity, exposure duration, specimen tolerance and image quality when observing neural activities or structures of Caenorhabditis elegans, Drosophila, zebrafish, mouse and other model organisms in vivo [1–3]. Since most biological processes take place within three-dimensional (3D) spaces at high spatial-temporal resolution, high-speed volumetric imaging is the preferred method to observe and record these dynamic processes. However, low light intensity and short exposure times bring severe noise to the acquired images. When capturing volumetric images under low-light conditions, whether through point scanning, axial scanning or snapshot acquisition, the imaging rate is always limited by the need to obtain high-SNR images. The captured low-SNR images greatly decrease both the quality and the spatial resolution of volumetric reconstruction results. Compared with other volumetric imaging modalities, light field microscopy (LFM) has attracted significant interest for its simplicity and efficiency. LFM has been used for observing neural activity and visualizing specific complex cellular structures of biological specimens for years [2,4–7]. However, under low-light conditions, LFM is even more susceptible to noise due to its multiplexed sampling of both spatial and angular information on one single sensor. The noisy measurements from LFM lead to severe artifacts in subsequent volume reconstructions. Although several deep-learning-based methods have been proposed for noise reduction in fluorescence imaging, they usually require specific data priors with a tradeoff in generalization [8,9].

The low-rank prior, which stems from the non-local self-similarity (NSS) property [10], has been extensively explored in image denoising. The NSS property states that for each patch in a natural image, similar patches exist in its neighborhood. Since similar patches have similar underlying structures, the grouped similar patches, after vectorization, form a low-rank matrix. Based on the NSS prior, a variety of methods convert image, video and volume denoising tasks into low-rank matrix recovery problems [11–14], while some others process the low-rank matrix by collaborative filtering [15–17]. The latter approaches are mainly applied for Gaussian noise removal. In image denoising, the block search window is a two-dimensional (2D) region around the reference patch [15], while in video denoising, the search area is extended to temporally adjacent video frames [16]. Further, in volume denoising, the reference patch is extended into a 3D cube, and the search window is also a 3D cube centered on the reference cube [17]. After convex relaxation, the low-rank matrix recovery problem can be transformed into nuclear norm minimization (NNM) [11]. The NNM approach can remove mixed noise in video more effectively than the collaborative filtering approach [12,18]. However, the NNM method has its own drawback, since it treats all singular values of the low-rank matrix with the same threshold. In fact, the singular values of a low-rank matrix have different importance, so they should be given different penalties. The weighted nuclear norm minimization (WNNM) method solves this problem by suppressing, with higher weights, the singular values of the low-rank matrix that correspond to artifacts and noise, obtaining even better restoration [13].

Inspired by the generation of low-rank matrices from images, videos and volumes, we propose an algorithm with a spatial-temporal low-rank prior for low-light fluorescence volumetric reconstruction. More specifically, we focus on the volumetric reconstruction of temporal sequences of low-light fluorescence LFM. In LFM, volumes can be reconstructed by 3D deconvolution from captured images with proper SNR [3,5,19], but artifacts near the native object plane remain an unavoidable challenge. Although the phase-space deconvolution and the dictionary-learning-based DiLFM framework achieve better results with a significant restraint of artifacts, especially near the native object plane [20,21], the SNR of the reconstructed volume remains unsatisfying. The NSS prior has already been used in light field image denoising, but those approaches focus mainly on video-like image sequences [22], epipolar plane image sequences [23] from specially arranged light field sub-aperture images, or disparity-compensated 4D patches in the light field directly [24]. The weighted nuclear norm minimization (WNNM) [13], as an improved low-rank approximation approach, shows great potential in light field denoising. However, none of these methods have analyzed or considered the influence of 3D reconstruction methods in fluorescence microscopy. Moreover, the observation of high-temporal-resolution biological processes usually requires the acquisition of thousands of images in a short time, resulting in tiny differences between adjacent frames of the captured image sequence. This makes the collected LFM image sequence exhibit a low-rank characteristic in the time dimension, which has not been exploited by existing methods yet. Therefore, in this work, we introduce the spatial-temporal low-rank prior into 3D phase-space deconvolution to improve the performance of LFM under low-light conditions.

The structure of the paper is as follows. In Section 2, we describe the whole procedure of our spatial-temporal low-rank prior enhanced approach for fluorescence LFM reconstruction. Section 3 verifies the effectiveness of the spatial-temporal low-rank prior in the reconstructed volume sequences, and evaluates its influence on the reconstruction resolution with fluorescence beads. Section 4 provides the simulation results with a USAF-1951 target. Experimental results of dynamic neuron-labeled Drosophila larvae with our approach are shown in Section 5. Section 6 summarizes and concludes the work.

2. Implementation

To test our approach, we built a standard LFM system, as shown in the schematic diagram in Fig. 1. We insert a microlens array (MLA) at the image plane of a commercial microscope (ZEISS Observer Z1). Two objective lenses (20×/0.5 NA dry, ZEISS EC Plan-NEOFLUAR and 40×/1.0 NA, ZEISS W Plan-APOCHROMAT) are utilized for different specimens. We use a 1:1 4f system to relay the light field images onto the camera sensor. Each microlens of the MLA (2.1 mm focal length and 100 μm pitch size) covers a certain number of sensor pixels (13×13 pixels here) for different angular information. The illumination is provided by a 488-nm laser (COHERENT OBIS Laser Box), and the image is captured by an sCMOS sensor (Andor Zyla 4.2).


Fig. 1. The schematic of the light field microscopy (LFM), working in an epi-fluorescence imaging mode.


3D deconvolution has been widely used in fluorescence imaging. Through iterative forward and backward projections, out-of-focus information is reassigned to its original location. Traditionally, a wave-optics-based Richardson-Lucy (RL) deconvolution algorithm is adopted for LFM 3D reconstruction [19]. In that algorithm, the point spread function (PSF) is calculated by the beam propagation model after MLA modulation. But this kind of PSF is spatially non-uniform and has many zero elements at its borders, which greatly slows down the reconstruction process. Based on the phase-space domain distribution in the linear beam propagation model, a recently proposed phase-space deconvolution framework [21] converts the former non-uniform PSF into a spatially uniform 3D PSF for different frequency components. The phase-space algorithm enables a significant reduction of reconstruction artifacts at roughly one tenth of the computational cost. However, even with the phase-space deconvolution framework, noisy LFM images still cause severe artifacts during volume reconstruction, due to the multiplexed sampling of both spatial and angular information on a single sensor.
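As a concrete illustration of the iterative forward/backward projection scheme described above, here is a minimal Richardson-Lucy sketch for a single 2D plane with a generic PSF. Note that the paper's reconstruction uses the phase-space 3D PSF; the 2D form below is only an assumption-laden stand-in to show the update rule:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=30, eps=1e-12):
    """Minimal Richardson-Lucy deconvolution sketch (2D, single plane,
    generic PSF -- illustrative only, not the paper's phase-space PSF)."""
    # Flat positive initialization of the estimated object.
    estimate = np.full_like(image, image.mean(), dtype=float)
    psf_flipped = psf[::-1, ::-1]  # adjoint (flipped) PSF for back-projection
    for _ in range(n_iter):
        forward = fftconvolve(estimate, psf, mode="same")        # forward projection
        ratio = image / (forward + eps)                          # measurement mismatch
        estimate *= fftconvolve(ratio, psf_flipped, mode="same")  # backward projection
    return estimate
```

Each iteration multiplies the current estimate by the back-projected ratio between the measured image and the re-blurred estimate, which is exactly the "reassignment" of out-of-focus energy described above.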

As described in Section 1, the low-rank prior has been applied in image, video and volume denoising, but it has not been exploited for volumetric temporal sequences. Previous studies of the low-rank prior on light field datasets have shown that more structural information leads to a higher peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) [22–24]. Our proposed approach builds upon both the phase-space deconvolution and the 3D spatial and temporal low-rank prior, focusing on the reconstructed volume sequence from low-light LFM images.

The whole framework of our approach is presented in Fig. 2. We initialize and reconstruct the noisy volume sequence from low-light LFM images with phase-space deconvolution. To fulfill our low-rank denoising requirements, we reconstruct volumes from multiple LFM images at adjacent time points during each iteration. Under low-light conditions, these volumes are quite noisy. We then slice these noisy volumes into overlapping 3D cubes and, for each cube, search for similar cubes in its surrounding zone. We name this cube the reference cube, and the surrounding zone the searching window. Different from previous low-rank approaches [15–17], the searching windows in this paper are four-dimensional (4D): a 3D spatial domain plus a 1D temporal domain. We choose similar cubes for the reference cube from its 3D spatial neighborhood and its temporally neighboring volumes (block matching). For each reference cube, after block matching in the 4D searching window, the similar cubes are grouped together to form 4D stacks and then vectorized into low-rank matrices. We then perform WNNM denoising to decrease the noise-induced artifacts. After denoising, the similar cubes in the denoised low-rank matrices are aggregated back to their original locations using a weighted average. These volumes are then used as the input of the phase-space deconvolution for the next iteration. The whole procedure is also shown in Algorithm 1 below.

[Algorithm 1: the proposed spatial-temporal low-rank enhanced reconstruction procedure]
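The alternation in Algorithm 1 can be summarized structurally in a few lines. Here `deconv_step` and `lowrank_denoise` are placeholder callables standing in for the paper's $Ph\_deconv$ and WNNM stages; this is a sketch of the control flow, not the authors' implementation:

```python
def reconstruct_sequence(lf_images, deconv_step, lowrank_denoise, n_iter=5):
    """Alternating reconstruction loop: one deconvolution update per
    frame, followed by a joint spatial-temporal low-rank denoising pass
    over the whole volume sequence, repeated n_iter times."""
    # Initialization: a first deconvolution pass on each raw LFM frame.
    volumes = [deconv_step(img, None) for img in lf_images]
    for _ in range(n_iter):
        # Joint 4D (3D space + time) low-rank denoising across volumes.
        volumes = lowrank_denoise(volumes)
        # Next deconvolution update, warm-started from the denoised volumes.
        volumes = [deconv_step(img, vol) for img, vol in zip(lf_images, volumes)]
    return volumes
```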


Fig. 2. The framework of our proposed method with a spatial-temporal low-rank prior for LFM volume reconstruction under low-light conditions.


We perform reconstruction of the volumes from the LFM temporal sequence ${img_{1,2, \cdots ,N}}$ with phase-space deconvolution. To take advantage of the spatial-temporal low-rank prior, after each iteration we generate the low-rank matrix along the three spatial dimensions and the temporal dimension of the reconstructed volume sequence, and then integrate the WNNM framework for volumetric denoising.

To suppress the strong reconstruction artifacts (especially close to the native image plane under low light), we apply the phase-space deconvolution framework [21]. The phase-space PSF used for deconvolution is denoted as ${h_p}({\textrm{x},z,\textrm{u}} )$, where $\textrm{x} = ({{x_1},{x_2}} )$ is the MLA center position, $z$ is the axial depth, and $\textrm{u} = ({{u_1},{u_2}} )$ is the coordinate relative to the MLA center $\textrm{x}$. According to [21], the PSF in the phase-space domain is smooth and spatially uniform for different spatial frequency components, which also reduces the computational cost.

As discussed above, the low-rank prior can further improve the recovered image quality under low-light conditions. Therefore, after each iteration of the phase-space 3D deconvolution $Ph\_deconv$, we conduct a denoising process that applies the spatial-temporal low-rank prior to the LFM temporal sequence. Concretely, we integrate the WNNM denoising approach with the 3D deconvolution after each iteration. We slice the searching window into cubes with the function $vol2pat$, find the similar cubes with the function $blockmatching$, sort these cubes according to the block matching results $match\_ind$, and group the most similar cubes into a low-rank matrix Y. For each matrix Y, we utilize the function $WNNM$ to find the approximate denoised matrix X.
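The 4D block-matching step above can be sketched as follows. The function name and sizes are illustrative (this is not the authors' $vol2pat$/$blockmatching$ code): for a reference cube in the center volume, candidate cubes are gathered from a spatial window in every volume of the temporal neighborhood, ranked by squared Euclidean distance, and the closest ones are vectorized into the columns of a low-rank matrix Y:

```python
import numpy as np

def block_match(volumes, ref_pos, cube=4, radius=2, n_similar=8):
    """volumes: list of 3D arrays (temporal neighborhood).
    ref_pos: (z, y, x) corner of the reference cube in the center volume.
    Returns the grouped matrix Y (cube^3 x n_similar) and cube origins."""
    zc, yc, xc = ref_pos
    ref = volumes[len(volumes) // 2][zc:zc + cube, yc:yc + cube, xc:xc + cube]
    candidates = []
    for t, vol in enumerate(volumes):                      # temporal dimension
        Z, H, W = vol.shape
        for z in range(max(0, zc - radius), min(Z - cube, zc + radius) + 1):
            for y in range(max(0, yc - radius), min(H - cube, yc + radius) + 1):
                for x in range(max(0, xc - radius), min(W - cube, xc + radius) + 1):
                    cand = vol[z:z + cube, y:y + cube, x:x + cube]
                    candidates.append((np.sum((cand - ref) ** 2), (t, z, y, x), cand))
    candidates.sort(key=lambda c: c[0])                    # rank by distance
    picked = candidates[:n_similar]
    Y = np.stack([c[2].ravel() for c in picked], axis=1)   # vectorize into columns
    return Y, [c[1] for c in picked]
```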

The WNNM problem is described as follows:

$$\hat{X} = \mathop{\arg\min}_X ||{Y - X} ||_F^2 + {||X ||_{w,\ast }},$$
where ${||X ||_{w,\ast }} = {\sum\nolimits_i {|{{\omega_i}{\sigma_i}(X)} |} _1}$ is the weighted nuclear norm and $w = [{\omega _1}, \cdots ,{\omega _n}]$ contains the weights for the singular values of X. For each cube ${y_i}$ in the reconstructed volume ${\textrm{g}_t}$, we search for its nonlocal similar cubes in its surrounding window and in the same windows of the volumes at adjacent time points, e.g., in ${\textrm{g}_{t - 2}},{\textrm{g}_{t - 1}},{\textrm{g}_{t + 1}},{\textrm{g}_{t + 2}}$ when the time window size is set to 5. Then we vectorize and stack these similar cubes into a matrix ${Y_j}$, as shown in Eq. (2). We optimize to find the denoised matrix ${X_j}$ that approximates ${Y_j}$, assuming the noise variance is $\sigma _n^2$. This process can be expressed as follows:
$$\hat{X}_j = \mathop{\arg\min}_{X_j} ||{{Y_j} - {X_j}} ||_F^2 + \sigma _n^2{||{{X_j}} ||_{w,\ast }}. $$

The weight for the $i$-th singular value of ${X_j}$ is defined as:

$$\omega_i = c\sqrt n /({\sigma _i}({X_j}) + \varepsilon ), $$
where c is a positive constant, n is the number of similar cubes in the matrix ${Y_j}$, and $\varepsilon$ is a small value to avoid division by zero. The initial ${\sigma _i}({X_j})$ is estimated as:
$${\hat{\sigma }_i}({X_j}) = \sqrt {\max (\sigma _i^2({Y_j}) - n\sigma _n^2,0)}. $$
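A minimal sketch of the WNNM step built from Eqs. (2)–(4): an SVD of the grouped matrix, weights from Eq. (3) with the initialization of Eq. (4), and a weighted shrinkage of the singular values. The constant `c`, the shrinkage form, and the number of reweighting passes are illustrative choices, not the paper's exact settings:

```python
import numpy as np

def wnnm_denoise(Y, sigma_n, c=2.8, eps=1e-8, n_reweight=3):
    """Weighted nuclear norm minimization on a patch matrix Y whose
    columns are vectorized similar cubes; sigma_n is the assumed noise
    standard deviation (sketch under stated assumptions)."""
    n = Y.shape[1]                                    # number of grouped cubes
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    # Eq. (4): initial estimate of the clean singular values.
    sigma_x = np.sqrt(np.maximum(s ** 2 - n * sigma_n ** 2, 0.0))
    for _ in range(n_reweight):
        w = c * np.sqrt(n) / (sigma_x + eps)          # Eq. (3): small sv -> large weight
        sigma_x = np.maximum(s - sigma_n ** 2 * w, 0.0)  # weighted soft-thresholding
    return (U * sigma_x) @ Vt
```

Because weights are inversely proportional to the estimated singular values, the large (signal-carrying) components are barely shrunk while the small (noise-carrying) components are heavily suppressed or zeroed, which is the key difference from plain NNM.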

After denoising, the estimated cubes in the matrix ${X_j}$ are aggregated back to their original positions in the volumes ${\textrm{g}_{t - 2}},{\textrm{g}_{t - 1}},{\textrm{g}_t},{\textrm{g}_{t + 1}},{\textrm{g}_{t + 2}}$. Then the algorithm proceeds to the next iteration of volumetric reconstruction.
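The write-back step can be sketched with value and weight accumulators. For simplicity this version uses a uniform-weight average over overlapping voxels, whereas the paper uses a weighted average; the buffer layout is an illustrative assumption:

```python
import numpy as np

def aggregate(n_t, vol_shape, cubes, positions):
    """Scatter denoised cubes back to their (t, z, y, x) origins and
    average overlapping voxels via value and weight accumulators.
    Voxels never covered by any cube are left at zero."""
    acc = np.zeros((n_t,) + vol_shape)   # accumulated cube values
    wgt = np.zeros((n_t,) + vol_shape)   # how many cubes touched each voxel
    for cube, (t, z, y, x) in zip(cubes, positions):
        dz, dy, dx = cube.shape
        acc[t, z:z + dz, y:y + dy, x:x + dx] += cube
        wgt[t, z:z + dz, y:y + dy, x:x + dx] += 1.0
    return acc / np.maximum(wgt, 1.0)
```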

3. Low-rank prior and resolution verification

The widely used VBM3D video denoising approach makes full use of the high temporal redundancy among video frames [16]. Intuitively, high temporal redundancy also exists in a volumetric temporal sequence; as a result, by introducing the higher-dimensional information, we can construct a lower-rank matrix for the same number of voxels or pixels. To verify this assumption, we imaged a neuron-labeled Drosophila embryo with our LFM system under the 40×/1.0 NA water-immersion objective, and then constructed low-rank matrices from the video and the volume sequence separately. We chose a single slice from the reconstructed volumes as a video sequence, and then compared the singular values of the low-rank matrices from the video and volume sequences. As shown in Fig. 3(a), for video denoising, the representative search window is labeled by the yellow square (left). The zoomed-in region shows the reference patch (solid orange square) from the center frame of the search window (middle). Similar patches (blue squares) of the reference patch are then vectorized and stacked into a low-rank matrix (right). In Fig. 3(b), for volume sequence denoising, the representative search window in the first volume is labeled by the yellow cube (left). The zoomed-in region shows the reference cube (orange outline) in the center volume of the 4D search window (middle). Similar cubes (blue outlines) of the reference cube are then vectorized and stacked into a low-rank matrix (right).


Fig. 3. Comparison of the accumulated singular values in the low-rank matrices from 2D patches and 3D cubes. (a) illustrates the block matching of similar patches for the reference patch along both temporal and spatial dimensions. The reference patch is selected from the center frame of the search window. The search window in the first frame is labeled by the yellow square (left); the zoomed-in search window shows the reference patch and similar patches in orange and blue squares (middle); the corresponding low-rank matrix is shown on the right. (b) illustrates the block matching of similar cubes across volumes. The search window in the first volume is labeled by the yellow cube (left). The zoomed-in search window shows the reference cube and similar cubes in orange and blue (middle). A sample low-rank matrix is shown on the right. (c) shows the accumulation results for the top 20 low-rank matrix singular values from about 400 reference patches (14×14) and cubes (7×7×4, 11×11×4).


To make a fair comparison, we selected three types of reference patches and cubes, and two types of searching windows. For the video sequence, we set the reference patch size as 14×14 and the searching window as 43×43×5. For our proposed volume sequence denoising approach, we set the reference cube size as 7×7×4 and 11×11×4 respectively, and the 4D searching window as 21×21×7×5. Under these parameters, the reference patch (14×14) and cube (7×7×4) and their searching windows cover the same number of pixels. We constructed low-rank matrices for more than 400 reference patches and cubes as above, and then accumulated and compared the singular values from the different types of reference patches and cubes. Since the singular values of different low-rank matrices differ greatly in magnitude, we normalized the singular values from each low-rank matrix, and then summed the singular values in descending order over all the reference patches or cubes. Figure 3(c) shows the accumulated singular values for the different types of reference patches and cubes. Consistent with our assumption, with the introduction of 3D spatial and temporal information, the accumulated singular values for the 7×7×4 reference cube (blue line) are more concentrated than those for the 14×14 reference patch (red line), which means a lower-rank matrix is constructed. Under the same 4D searching window, the accumulated singular values for the 7×7×4 reference cube are also more concentrated than those for the 11×11×4 reference cube (black line). This result also agrees with the non-local self-similarity property.
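The spectrum comparison of Fig. 3(c) can be reproduced schematically as follows: per-matrix normalization of the singular values, then accumulation and a cumulative sum in descending order. This is a sketch of the evaluation procedure, not the original analysis code:

```python
import numpy as np

def accumulated_spectrum(matrices, k=20):
    """Return the averaged cumulative sum of the top-k normalized
    singular values over a set of grouped low-rank matrices. A curve
    that rises faster has more energy in the leading components,
    i.e. a lower effective rank."""
    total = np.zeros(k)
    for M in matrices:
        s = np.linalg.svd(M, compute_uv=False)  # descending singular values
        s = s / s.sum()                         # per-matrix normalization
        m = min(k, s.size)
        total[:m] += s[:m]
    return np.cumsum(total) / len(matrices)
```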

Another question is how the spatial-temporal prior influences the spatial resolution of the reconstructed volume at different depths. Therefore, we imaged tens of 0.5-μm fluorescence beads (Thermo Scientific, FluoSpheres, carboxylate-modified microspheres) at different axial positions in our LFM system and then reconstructed them with the proposed approach. With the 20×/0.5 NA objective lens, we captured 81 LFM images from -40 μm to 40 μm by axial scanning, with a scanning step of 1 μm. Next, we compared the full-width-half-maximum (FWHM) of the beads with and without our approach. In the traditional phase-space approach, we conducted deconvolution on these LFM images frame by frame. In our proposed approach, to fill the 4D low-rank searching window, we reconstructed multiple volumes at each iteration, and then searched for similar cubes in each reconstructed volume near the location of the reference cubes. We set the cube size as 7×7×5, the searching window size as 13×13×9×5, and the number of similar cubes as 200. That is, we searched for similar cubes in 5 adjacent volumes, and in each volume, the search window is a 13×13×9 cube. After reconstruction, we calculated the FWHM of each bead in both the lateral and axial directions. The statistics of these FWHMs at different axial positions are shown in Fig. 4. Figures 4(a) and 4(b) illustrate the lateral and axial resolution of the beads at different axial positions respectively, with red for the traditional phase-space approach and blue for our proposed approach. Our proposed approach shows similar performance in both the axial and lateral resolutions compared with the traditional phase-space approach.
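The bead-based resolution measurement reduces to an FWHM estimate on 1D intensity profiles through each bead. A simple interpolation-based version, assuming a single-peaked profile (a sketch, not the authors' measurement code), looks like this:

```python
import numpy as np

def fwhm(profile, spacing=1.0):
    """Full width at half maximum of a single-peaked 1D intensity
    profile, with linear interpolation at the half-maximum crossings.
    `spacing` converts sample indices to physical units (e.g. microns)."""
    profile = np.asarray(profile, dtype=float)
    half = profile.max() / 2.0
    above = np.where(profile >= half)[0]
    left, right = above[0], above[-1]

    def crossing(i, j):
        # Linear interpolation between samples i (below half) and j (above).
        return i + (half - profile[i]) / (profile[j] - profile[i]) * (j - i)

    x_l = crossing(left - 1, left) if left > 0 else float(left)
    x_r = crossing(right + 1, right) if right < profile.size - 1 else float(right)
    return (x_r - x_l) * spacing
```

For a Gaussian profile with standard deviation sigma, the returned width should approach the analytic value $2\sqrt{2\ln 2}\,\sigma \approx 2.355\,\sigma$.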


Fig. 4. Resolution difference with and without low-rank prior enhancement. (a) The lateral resolution at different axial positions. (b) The axial resolution at different axial positions. The error bar stands for the standard deviation of the FWHM.


4. Quantitative evaluation

To quantitatively evaluate our approach, we simulated LF datasets of a USAF-1951 resolution target under both extremely-low-light and low-light conditions. The resolution chart was stacked along the z axis for 3 layers to form a 3D stack, and we introduced random shifts to the stack in the x and y directions to simulate a temporal sequence. Fluorescence microscopy images are dominated by Poisson noise, so we scaled the light field image to the range 1 to $\lambda $ and assumed that the brightest pixel follows a Poisson distribution with parameter $\lambda $. We set $\lambda = 1$ for extremely-low-light images and $\lambda = 10$ for low-light images. The simulated USAF-1951 LFM datasets were reconstructed through traditional phase-space deconvolution [20] and our low-rank prior approach respectively. The reconstructions of the extremely-low-light dataset are shown in Fig. 5(a), and the low-light results are shown in Fig. 5(b). Under both light conditions, our proposed approach achieves better results. This can also be quantitatively assessed through the PSNR and SSIM indexes. Specifically, the PSNR and SSIM indexes of both approaches after each iteration under the extremely-low-light condition are shown in Fig. 5(c), while those of the low-light results are shown in Fig. 5(d). The maximum PSNR and SSIM over iterations under different light levels are shown in Fig. 5(e). Our proposed approach provides much better results than the traditional phase-space deconvolution approach under both extremely-low-light and low-light circumstances. Under the extremely-low-light condition, our proposed approach reached its best PSNR and SSIM in about 3 to 4 iterations; under the low-light condition, it takes about 6 to 8 iterations to reach its best performance. The results also demonstrate amelioration of the over-fitting problem in phase-space deconvolution.
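The shot-noise model used in these simulations can be written in a few lines (a sketch of the described scaling plus Poisson sampling; the seed is arbitrary):

```python
import numpy as np

def simulate_low_light(lf_image, lam, seed=0):
    """Scale a clean light field image so its brightest pixel has mean
    `lam`, then draw each pixel from a Poisson distribution with that
    scaled mean (shot-noise-limited fluorescence measurement)."""
    scaled = lf_image * (lam / lf_image.max())
    rng = np.random.default_rng(seed)
    return rng.poisson(scaled).astype(float)
```

With `lam = 1` the measurement is dominated by shot noise (relative fluctuation of order $1/\sqrt{\lambda} = 1$), while `lam = 10` corresponds to the paper's milder low-light condition.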
To further investigate the reconstruction performance on thicker samples, we constructed a thicker 3D test chart by stacking 15 USAF-1951 test charts vertically. The light field image was scaled to the range 0 to 10, and the brightest pixel follows a Poisson distribution with $\lambda = 10$. We calculated the SSIM of the reconstructed image in every layer (Fig. 5(f)). Our proposed approach achieved better performance on every layer, demonstrating the efficiency of the proposed algorithm on thick samples.


Fig. 5. Comparison of the traditional and proposed approach results under extremely low light and low light conditions. (a) Reconstruction results (maximum intensity projection, MIP, on the x-y plane) of the traditional (left) and our proposed approach (right) for the simulated extremely-low-light USAF-1951 resolution target dataset. (b) Reconstruction results (MIP on the x-y plane) for the simulated low-light dataset of the traditional method (left) and our method (right). (c) The PSNR and SSIM evolution using the traditional and proposed approaches for the extremely-low-light dataset. (d) The PSNR and SSIM evolution for the low-light dataset. (e) The maximum PSNR and SSIM over iterations of the traditional and proposed approaches under different light levels. (f) SSIM in different vertical layers, comparing the phase-space deconvolution method and the proposed method.


Despite the prominent improvement in reconstruction quality, our proposed approach incurs an undeniable computational cost. To quantify this compromise, we performed quantitative tests on samples of different data sizes and evaluated the processing time and memory required to perform the two key steps in one iteration. It usually takes around 5 iterations for the algorithm to converge to an optimal result, depending on the light intensity and signal-to-noise ratio (SNR) (Fig. 5). The cube size used in the block matching test was 5×5×3, and the searching window size was 21×21×7×5. The experiment was performed on a PC equipped with an Intel Core i9-9920X CPU, an NVIDIA GeForce RTX 2080 Ti GPU and 128 GB memory. The results are shown in Table 1.


Table 1. Computational cost of the proposed method.

The memory and time cost of traditional phase-space deconvolution per iteration are listed in the column ‘Key step 1: Phase-space deconvolution’. The bottleneck of the proposed algorithm lies in the low-rank enhancement step, particularly in block matching and singular value decomposition (SVD). We would suggest applying the algorithm after determining the region of interest (ROI) from the raw data, which could be accessed through the pre-deconvolved center view of the raw light field image. This helps to recover the detailed spatial-temporal structures of the biological process of greatest interest while keeping the computational load acceptable. Deep-learning-based methods may also be employed to replace the block matching and SVD processes. Both strategies can potentially reduce the computational load, while the essence of the spatial-temporal low-rank prior remains.

Another lower-computational-cost substitute for the proposed method is to perform low-rank-based denoising after RL deconvolution. This approach may reduce the low-rank enhancement step to one or two iterations while still producing a better-quality reconstruction than deconvolution alone. The main drawback of this deconvolution-plus-denoising approach is the difficulty of determining the switching point between the two algorithms. In the presence of noise, RL deconvolution easily overfits to artifacts and never converges to a stable optimal result. It is therefore hard to decide in practice when the deconvolution iterations should be switched to denoising iterations, since no ground truth is available to validate the image quality. In contrast, our proposed algorithm overcomes this problem by continuously optimizing the reconstruction.

To further validate this issue, we performed a simulation test. A neutrophil video sequence was used as ground truth, and we added Poisson noise after forward convolution. The light field image was scaled to the range 0 to 3, and the brightest pixel follows a Poisson distribution with $\lambda = 3$. We then reconstructed the volume and evaluated the algorithm performance with two iteration strategies: our proposed alternating deconvolution-denoising scheme and a separate deconvolution-plus-denoising scheme. In the latter approach, we stopped the RL deconvolution when it reached the highest SSIM, which cannot be done in real experiments since no ground truth is available. The SSIM was calculated after every minor step (deconvolution or low-rank enhancement), and the results are summarized in Fig. 6. The separate iteration strategy could indeed improve image quality with only a few denoising iterations, but its maximum SSIM remained lower than that of the alternating iteration strategy.


Fig. 6. Reconstruction quality comparisons among different iteration strategies. (a) Structural similarity index (SSIM) of the different iteration strategies, calculated after each minor step (either RL deconvolution or low-rank enhancement). (b) 3D rendering of the reconstructed neutrophil cell from three strategies: deconvolution only (left), separate iteration (middle) and alternate iteration (right).


5. Experiment

We then validated our approach through experiments on Drosophila larvae using the LFM system described in Section 2. In the mushroom body lobes of the Drosophila larvae, OK107-GAL4 is combined with UAS to drive the expression of mCD8GFP. We fixed the exposure time at 100 ms throughout, and created 4 different light conditions by setting the laser light intensity to 1 mW, 2 mW, 5 mW and 10 mW respectively. We placed the specimens in the culture plate under the 20×/0.5 NA objective, and captured a temporal sequence of 100 LF images for each light intensity level (400 images in total). During reconstruction, we set the cube size as 6×6×6, the window size as 10×10×10×5, and the noise level as 20. For each reference cube, we searched the 5 temporally adjacent volumes for 100 similar cubes to generate the low-rank matrix. The reconstruction results from the phase-space approach and our proposed approach for normal-light (10 mW) and low-light LFM images are shown in Fig. 7 and Visualization 1. Figure 7(a) shows the fluorescence-labeled Drosophila larvae under normal light. Figure 7(b) shows the noisy volume sequence reconstructed from low-light LFM images with the phase-space deconvolution approach. Figure 7(c) shows the results of our proposed approach. Compared with Fig. 7(b), our proposed approach provides remarkably clearer results in all four volumes at different time points and an apparent decrease of noise. It also shows relatively consistent performance with the 10 mW reconstruction result using the phase-space approach in Fig. 7(a).


Fig. 7. Reconstruction results on neuron-labelled Drosophila larvae with and without low-rank prior. (a) The phase-space approach results with normal light images. (b) The phase-space approach results with low light images. (c) The proposed low-rank approach results with low-light images.


6. Conclusion

In conclusion, we introduced the spatial-temporal low-rank prior into phase-space deconvolution to enhance the volumetric reconstruction performance of LFM under low-light conditions. We first verified the spatial-temporal low-rank prior assumption on experimental data from normal fluorescence imaging and found that time-lapse volumes exhibit a stronger low-rank property than time-lapse images. We then confirmed that our algorithm has almost no influence on spatial resolution by imaging 0.5-μm fluorescence beads. Finally, we showed the effectiveness of our approach through both quantitative numerical simulations and experimental biological results. Note that our approach can be easily adapted to other low-light biological volume sequences without any hardware modifications.

However, in the current approach, a compromise always exists between the denoising performance and the window size for block matching. In other words, higher reconstruction performance requires a larger searching window and longer block matching time. Optimizing the parameters for the reference cube and searching window sizes plays an important role in balancing reconstruction performance and time consumption. An improved low-rank denoising approach could also speed up our reconstruction procedure [25]. Further improvements will focus on self-adjustable parameter optimization, acceleration of the time-consuming block matching, and better low-rank optimization algorithms.

Funding

National Key Research and Development Program of China (2020AAA0130000); National Natural Science Foundation of China (61927802, 62071272, 62088102).

Acknowledgments

We thank Jing He and Zhi Lu for providing the samples and Yunmin Zeng for discussions.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. Z. Zhang, L. Bai, L. Cong, P. Yu, T. Zhang, W. Shi, F. Li, J. Du, and K. Wang, “Imaging volumetric dynamics at high speed in mouse and zebrafish brain with confocal light field microscopy,” Nat. Biotechnol. 39(1), 74–83 (2021). [CrossRef]  

2. R. Prevedel, Y. Yoon, M. Hoffmann, N. Pak, G. Wetzstein, S. Kato, T. Schrödel, R. Raskar, M. Zimmer, E. S. Boyden, and A. Vaziri, “Simultaneous whole-animal 3D imaging of neuronal activity using light-field microscopy,” Nat. Methods 11(7), 727–730 (2014). [CrossRef]  

3. K. McDole, K. Branson, J. Freeman, W. C. Lemon, S. R. Pulver, and B. Höckendorf, “Whole-central nervous system functional imaging in larval Drosophila,” Nat. Commun. 6(1), 7924 (2015). [CrossRef]  

4. T. Nöbauer, O. Skocek, A. J. Pernía-Andrade, L. Weilguny, F. Martínez Traub, M. I. Molodtsov, and A. Vaziri, “Video rate volumetric Ca2+ imaging across cortex using seeded iterative demixing (SID) microscopy,” Nat. Methods 14(8), 811–818 (2017). [CrossRef]  

5. H. Li, C. Guo, D. Kim-Holzapfel, W. Li, Y. Altshuller, B. Schroeder, W. Liu, Y. Meng, J. B. French, K.-I. Takamaru, M. A. Frohman, and S. Jia, “Fast, volumetric live-cell imaging using high-resolution light-field microscopy,” Biomed. Opt. Express 10(1), 29–49 (2019). [CrossRef]  

6. O. Skocek, T. Nöbauer, L. Weilguny, F. Martínez Traub, C. N. Xia, M. I. Molodtsov, A. Grama, M. Yamagata, D. Aharoni, D. D. Cox, P. Golshani, and A. Vaziri, “High-speed volumetric imaging of neuronal activity in freely moving rodents,” Nat. Methods 15(6), 429–432 (2018). [CrossRef]  

7. J. Wu, Z. Lu, D. Jiang, Y. Guo, H. Qiao, Y. Zhang, T. Zhu, Y. Cai, X. Zhang, K. Zhanghao, H. Xie, T. Yan, G. Zhang, X. Li, Z. Jiang, X. Lin, L. Fang, B. Zhou, P. Xi, J. Fan, L. Yu, and Q. Dai, “Iterative tomography with digital adaptive optics permits hour-long intravital observation of 3D subcellular dynamics at millisecond scale,” Cell 184(12), 3318–3332 (2021). [CrossRef]  

8. X. Li, G. Zhang, J. Wu, Y. Zhang, Z. Zhao, X. Lin, H. Qiao, H. Xie, H. Wang, L. Fang, and Q. Dai, “Reinforcing neuron extraction and spike inference in calcium imaging using deep self-supervised denoising,” Nat. Methods 18(11), 1395–1400 (2021). [CrossRef]  

9. B. Mandracchia, X. Hua, C. Guo, J. Son, T. Urner, and S. Jia, “Fast and accurate sCMOS noise correction for fluorescence microscopy,” Nat. Commun. 11(1), 94 (2020). [CrossRef]  

10. A. Buades, B. Coll, and J. M. Morel, “A non-local algorithm for image denoising,” CVPR in IEEE, 60–65 (2005).

11. J. Cai, E. J. Candès, and Z. Shen, “A Singular Value Thresholding Algorithm for Matrix Completion,” SIAM J. Optim. 20(4), 1956–1982 (2010). [CrossRef]  

12. H. Ji, C. Liu, Z. Shen, and Y. Xu, “Robust video denoising using Low rank matrix completion,” CVPR in IEEE, 1791–1798 (2010).

13. S. Gu, L. Zhang, W. Zuo, and X. Feng, “Weighted nuclear norm minimization with application to image denoising,” CVPR in IEEE, 2862–2869 (2014).

14. Y. Peng, J. Suo, Q. Dai, and W. Xu, “Reweighted low-rank matrix recovery and its application in image restoration,” IEEE Trans. Cybern. 44(12), 2418–2430 (2014). [CrossRef]  

15. K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image Denoising by Sparse 3-D Transform-Domain Collaborative Filtering,” IEEE Trans. on Image Process. 16(8), 2080–2095 (2007). [CrossRef]  

16. M. Maggioni, G. Boracchi, A. Foi, and K. Egiazarian, “Video denoising, deblocking, and enhancement through separable 4-D nonlocal spatiotemporal transforms,” IEEE Trans. on Image Process. 21(9), 3952–3966 (2012). [CrossRef]  

17. M. Maggioni, V. Katkovnik, K. Egiazarian, and A. Foi, “Nonlocal transform-domain filter for volumetric data denoising and reconstruction,” IEEE Trans. on Image Process. 22(1), 119–133 (2013). [CrossRef]  

18. K. Dabov, A. Foi, and K. Egiazarian, “Video denoising by sparse 3D transform-domain collaborative filtering,” Eur. Signal Process. Conf. 16(8), 145–149 (2007).

19. M. Broxton, L. Grosenick, S. Yang, N. Cohen, A. Andalman, K. Deisseroth, and M. Levoy, “Wave optics theory and 3-D deconvolution for the light field microscope,” Opt. Express 21(21), 25418–25439 (2013). [CrossRef]  

20. Y. Zhang, B. Xiong, Y. Zhang, Z. Lu, J. Wu, and Q. Dai, “DiLFM: an artifact-suppressed and noise-robust light-field microscopy through dictionary learning,” Light Adv. Manuf. 10, 1546 (2021). [CrossRef]  

21. Z. Lu, J. Wu, H. Qiao, Y. Zhou, T. Yan, Z. Zhou, X. Zhang, J. Fan, and Q. Dai, “Phase-space deconvolution for light field microscopy,” Opt. Express 27(13), 18131–18145 (2019). [CrossRef]  

22. D. G. Dansereau, O. Pizarro, and S. B. Williams, “Linear volumetric focus for light field cameras,” ACM Trans. Graph. 34(2), 1–20 (2015). [CrossRef]  

23. A. Sepas-Moghaddam, P. L. Correia, and F. Pereira, “Light field denoising: Exploiting the redundancy of an epipolar sequence representation,” 3DTV-CON in IEEE, 1–4 (2016).

24. M. Alain and A. Smolic, “Light field denoising by sparse 5D transform domain collaborative filtering,” MMSP in IEEE, 1–6 (2017).

25. H. Yang, Y. Park, J. Yoon, and B. Jeong, “An Improved Weighted Nuclear Norm Minimization Method for Image Denoising,” IEEE Access 7, 97919–97927 (2019). [CrossRef]  

Supplementary Material (1)

Visualization 1: Reconstruction results on neuron-labelled Drosophila larvae with and without the low-rank prior. The video shows clearer volumes reconstructed with the proposed approach than with phase-space deconvolution without the low-rank prior.



Equations (4)

$$\hat{X} = \mathop{\arg\min}_{X}\; \|Y - X\|_F^2 + \|X\|_{w,*},$$

$$\hat{X}_j = \mathop{\arg\min}_{X_j}\; \|Y_j - X_j\|_F^2 + \sigma_n^2\,\|X_j\|_{w,*}.$$

$$\omega_i = c\sqrt{n}\,/\,\big(\sigma_i(X_j) + \varepsilon\big),$$

$$\hat{\sigma}_i(X_j) = \sqrt{\max\big(\sigma_i^2(Y_j) - n\,\sigma_n^2,\; 0\big)}.$$
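Equations (2)–(4) amount to a weighted shrinkage of the singular values of each grouped-cube matrix, as in WNNM [13]. A minimal NumPy sketch follows; the weight constant c = 2.8 is an assumed value for illustration, not one taken from the paper:

```python
import numpy as np

def wnnm_denoise(Yj, sigma_n, c=2.8, eps=1e-8):
    """One WNNM pass on a (d, n) matrix whose n columns are vectorised
    similar cubes (Eqs. (2)-(4)); c = 2.8 is an assumed constant."""
    d, n = Yj.shape
    U, s, Vt = np.linalg.svd(Yj, full_matrices=False)
    # Eq. (4): estimate the singular values of the clean matrix X_j
    s_hat = np.sqrt(np.maximum(s ** 2 - n * sigma_n ** 2, 0.0))
    # Eq. (3): smaller clean singular values receive larger weights
    w = c * np.sqrt(n) / (s_hat + eps)
    # weighted soft-thresholding keeps the dominant (signal) components
    return U @ np.diag(np.maximum(s - w, 0.0)) @ Vt

# toy check: a rank-1 matrix plus Gaussian noise
rng = np.random.default_rng(1)
clean = rng.normal(size=(216, 1)) @ np.ones((1, 100))
noisy = clean + 0.5 * rng.normal(size=clean.shape)
denoised = wnnm_denoise(noisy, sigma_n=0.5)
print(np.linalg.norm(noisy - clean), np.linalg.norm(denoised - clean))
```

Singular values near the noise floor receive large weights and are suppressed, while the dominant signal components are barely shrunk, which is why the step preserves structure while removing noise.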