
3DM: deep decomposition and deconvolution microscopy for rapid neural activity imaging

Open Access

Abstract

We report the development of deep decomposition and deconvolution microscopy (3DM), a computational microscopy method for the volumetric imaging of neural activity. 3DM overcomes the major challenge of deconvolution microscopy, the ill-posed inverse problem. We take advantage of the temporal sparsity of neural activity to reformulate and solve the inverse problem using two neural networks which perform sparse decomposition and deconvolution. We demonstrate the capability of 3DM via in vivo imaging of the neural activity of a whole larval zebrafish brain with a field of view of 1040 µm × 400 µm × 235 µm and with estimated lateral and axial resolutions of 1.7 µm and 5.4 µm, respectively, at imaging rates of up to 4.2 volumes per second.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

The development of genetically encoded calcium indicators (GECIs) [1–3] and optical microscopy techniques has allowed us to monitor the activity of neural populations with single-cell resolution across a large volumetric field of view [4]. Notably, whole brain imaging of larval zebrafish at cellular resolution has been demonstrated by multiple groups using technologically orthogonal methods, thanks to the transparency and small size of the organism [4–9]. Such 3-D brain activity imaging datasets are especially attractive as they offer a unique opportunity to map brain activity at the whole brain scale in an unbiased and systematic manner.

Most 3-D imaging methods are based on the sequential 2-D imaging of multiple planes combined with optical sectioning such as confocal imaging [10], multi-photon excitation [11–13] and light-sheet excitation [14,15]. While these methods offer excellent image quality, wide adoption of their extensions for high speed 3-D imaging [16,17] by the neuroscience community has been hindered by their high costs. Deconvolution microscopy [18], on the other hand, shifts the burden to the computational side, thereby simplifying the optical implementation. While deconvolution microscopy appears to be a cost-effective alternative to true optical sectioning methods, it has not been used for calcium imaging in 3-D because deconvolution becomes numerically ill-posed for densely labeled samples [19]. In such cases, the point spread functions (PSFs) of optical systems are essentially low-pass filtering convolution kernels. In other words, while deconvolution microscopy performs very well for sparsely labeled samples, its performance is severely degraded for densely labeled samples such as a brain with pan-neuronal GECIs [20,21].

Recently, a computational imaging technique called sparse decomposition light-field microscopy (SDLFM) was introduced [22] that utilized the temporal sparsity of fluorescent signals to introduce spatial sparsity in images. This was exploited to improve the resolution as in localization-based super-resolution microscopy methods [23–25]. To convert the temporal sparsity into spatial sparsity, the recorded raw light-field images were decomposed into a low rank component and a sparse component prior to the volume reconstruction. Because the volume reconstruction only needs to be applied to the sparse component, which represents the neural activity, each neuron can be localized beyond the optical resolution of the system. The potential of this technique was demonstrated by imaging the whole brain neural activity of larval zebrafish and adult Drosophila. However, SDLFM relies on robust principal component analysis (RPCA) [26] for low rank and sparse decomposition and on Richardson-Lucy iteration [27] for volume reconstruction, which are computationally expensive, requiring multiple days to process a video. Furthermore, RPCA requires storing the entire data and its replica in memory, which necessitates a workstation with hundreds of gigabytes of RAM to process the data.

Here, we propose deep decomposition and deconvolution microscopy (3DM), a computational microscopy method that offers a cost-effective solution for the rapid 3-D imaging of neural activity. The hardware implementation of 3DM is extremely simple, since it is based on an epi-fluorescence microscope. Like SDLFM, it alleviates the major limitation of deconvolution microscopy – the ill-posed inverse problem – by reformulating the inverse problem using low rank and sparse decomposition. Furthermore, 3DM overcomes the limitation of SDLFM by employing two neural networks for low rank and sparse decomposition and deconvolution, which reduces the computational costs by an order of magnitude.

2. Method

2.1 Overview of 3DM

3DM is a computational microscopy technique that turns an ordinary epi-fluorescence microscope into a high-speed neural activity imaging system by employing three components: an electrically tunable lens (ETL) for rapid axial scanning [28–30], a sparse decomposition network for separating neural activity from backgrounds, and a deconvolution network for accurate volume reconstruction. In our system (Fig. 1(a), Figs. S1 and S2), an ETL whose focal length is controlled by an electrical signal is placed at the Fourier plane of the system for rapid volumetric imaging [28]. The acquired raw images are essentially wide-field focal stacks that are swamped by out-of-focus light that needs to be removed. Unfortunately, even state-of-the-art deconvolution algorithms have limited ability to accurately recover the volume when the sample is densely labeled [19]. In [22], it was demonstrated that performing sparse decomposition prior to volume reconstruction of light-field images increases the effective resolution; the sparse decomposition translated the temporal sparsity of spikes into spatial sparsity in images, which enabled accurate volume reconstruction. Likewise, 3DM performs sparse decomposition before the volume reconstruction. Instead of using RPCA for the decomposition, which is computationally expensive and requires storing all of the data in RAM, we implemented a bilinear neural network for efficient approximation of RPCA (BEAR) [31]. BEAR is a neural network that can decompose arbitrarily sized data, with orders of magnitude speed improvement, without any need for training data. The sparse components are then fed to a deconvolution network, which is employed to improve the image quality and to eliminate the iterative deconvolution process that is extremely time consuming even with GPU acceleration. The data processing pipeline is illustrated in Fig. 1(b).
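As a concrete illustration of this data flow, the PyTorch sketch below outlines the pipeline under simplified assumptions: `decompose` stands in for a trained BEAR model returning the low rank component, `deconv_net` for the trained 3-D deconvolution network, and the (pixels × frames) matrix layout is illustrative. These helper names are hypothetical; the released implementation is available at https://github.com/NICALab/3DM.

```python
# Illustrative sketch of the 3DM pipeline (hypothetical helper names).
import torch

def run_3dm(raw_video, decompose, deconv_net):
    """raw_video: (T, Z, Y, X) tensor of wide-field focal stacks."""
    T = raw_video.shape[0]
    Y = raw_video.reshape(T, -1).t()        # pixels x frames data matrix
    L = decompose(Y)                        # low rank background (BEAR)
    S = torch.relu(Y - L)                   # non-negative sparse component
    S = S.t().reshape(raw_video.shape)      # back to (T, Z, Y, X)
    with torch.no_grad():                   # deconvolve each sparse volume
        recon = torch.stack([deconv_net(v[None, None])[0, 0] for v in S])
    return recon
```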


Fig. 1. Deep decomposition deconvolution microscopy (3DM). (a) Schematic of 3DM hardware. An electrically tunable lens (ETL) is conjugated to the back pupil plane of the objective lens. (b) Schematic of the 3DM algorithm. A raw video consists of a time series of 3-D volumes. Using a bilinear neural network for efficient approximation of RPCA (BEAR), the raw video is decomposed into a low rank component and a sparse component that correspond to the background and the neural activity, respectively. The decomposed sparse component is deconvolved using our 3-D deconvolution network.


2.2 3DM imaging system

2.2.1 Optical design

We built an epi-fluorescence microscope integrated with an ETL for axial scanning as shown in Fig. 1(a). For excitation, we used a blue LED ($\lambda$=470 nm, 250 mW, M470L3-C2, Thorlabs) which was collimated by an aspheric condenser lens (ACL25416U, Thorlabs). We used a 16x 0.8 NA water dipping objective lens (CFI175 LWD 16xW, Nikon), an achromatic lens (f=200 mm, 49-286, Edmund Optics) as the tube lens, and two identical achromatic lenses (f=150 mm, 49-285, Edmund Optics) for a 4f relay system. The ETL (EL-16-40-TC-VIS-5D-C, Optotune) was placed between the relay lenses, conjugate to the back pupil plane of the objective lens, in a vertical configuration to eliminate gravity-induced aberrations (Fig. S1) [28]. The ratio of the focal lengths of the tube lens and the relay lenses was chosen to ensure that the image of the objective's back aperture at the Fourier plane was smaller than the aperture of the ETL. The components list of our configuration is summarized in Table S1.

2.2.2 ETL control and image acquisition

The curvature of the ETL, which determines the optical focus of the microscope (Fig. S3), was controlled using an analog signal from a DAQ board (USB-6363, National Instruments), which was synchronized with the trigger output signals from the scientific complementary metal-oxide semiconductor (sCMOS) camera (Zyla 5.5 sCMOS, Andor). For image acquisition, custom software was implemented in MATLAB to control the stage controller, ETL lens driver, DAQ board, and sCMOS camera. A schematic diagram of the software and the timing diagram are shown in Fig. S4. To perform rapid axial scanning, we modulated the focal length of the ETL with a sawtooth waveform at the target volume rate. For whole brain imaging of larval zebrafish, the camera sequentially recorded 51 images at 215 frames per second in each period, including 3 images discarded due to the settling time of the ETL. The resulting imaging speed was 4.2 volumes per second (VPS), where each volume consisted of 48 z-slices (with a 5 µm step size between adjacent planes). The detailed setup is summarized in Table S2.
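As an illustration, the NumPy sketch below reproduces this timing for the zebrafish configuration. The DAQ sample rate and the ETL control voltage range are assumptions; the actual acquisition software is custom MATLAB code.

```python
# Sketch of the ETL drive waveform and camera trigger timing (assumed DAQ
# sample rate and voltage range; numbers follow the zebrafish configuration).
import numpy as np

frames_per_volume = 51                  # camera exposures per sawtooth period
frame_rate = 215.0                      # camera frames per second
volume_rate = frame_rate / frames_per_volume     # ~4.2 volumes per second
period = 1.0 / volume_rate

fs = 100_000                            # DAQ analog output sample rate (assumed)
t = np.arange(0, period, 1.0 / fs)
v_min, v_max = -1.0, 1.0                # ETL control range (assumed)
sawtooth = v_min + (v_max - v_min) * t / period  # one linear ramp per volume

trigger_times = np.arange(frames_per_volume) / frame_rate
usable = trigger_times[3:]              # first 3 frames discarded (ETL settling)
print(f"{volume_rate:.2f} VPS, {usable.size} z-slices per volume")
```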

2.3 Neural networks for 3DM

2.3.1 BEAR: network for unsupervised low rank and sparse decomposition

For low rank and sparse decomposition, we implemented BEAR (Fig. 2(a)), which performs low rank and sparse decomposition in a computationally efficient manner based on the following formulation of the optimization problem [31]:

$$\min_{W} ||S||_1 \textrm{ subject to } Y = L+ S \textrm{ and } L = WW^TY$$
where $W,Y,L,S,$ and $||\cdot ||_{1}$ are the network parameters, a data matrix, a low rank matrix, a sparse matrix, and the $L_1$ norm, respectively. Since the brightness of each neuron increases with activity, a positivity constraint was imposed on $S$ by inserting a ReLU layer at the output and modifying the loss function to $\mathcal {L} = ||S||_1 + \alpha ||Y-L-S||_{F}$, where $||\cdot ||_{F}$ and $\alpha$ are the Frobenius norm and a hyper-parameter, respectively. The modified optimization problem is as follows:
$$\min_{W} ||S||_1 + \alpha ||Y-L-S||_{F} \textrm{ subject to } S = \textit{ReLU }(Y-L) \textrm{ and } L=WW^TY.$$
Once the network was trained, the sparse component $S$ was obtained as $S = \textit { ReLU}(Y-WW^TY)$.
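A minimal PyTorch sketch of BEAR under this formulation is shown below. The (pixels × frames) matrix layout, the rank, and the full-batch training loop are illustrative simplifications; the actual implementation, including frame-wise mini-batching, is described in [31] and released at https://github.com/NICALab/3DM.

```python
# Minimal BEAR sketch: L = W W^T Y, S = ReLU(Y - L), trained with the loss
# ||S||_1 + alpha * ||Y - L - S||_F. Shapes and rank are illustrative.
import torch

class BEAR(torch.nn.Module):
    def __init__(self, n_pixels, rank):
        super().__init__()
        self.W = torch.nn.Parameter(0.01 * torch.randn(n_pixels, rank))

    def forward(self, Y):                   # Y: (n_pixels, n_frames)
        L = self.W @ (self.W.t() @ Y)       # low rank component
        S = torch.relu(Y - L)               # non-negative sparse component
        return L, S

def bear_loss(Y, L, S, alpha=10_000.0):     # alpha = 10,000 as in Sec. 2.3.3
    return S.abs().sum() + alpha * torch.norm(Y - L - S, p="fro")

# toy usage on random data (full batch for brevity)
Y = torch.rand(1024, 200)                   # 1024 pixels, 200 frames
model = BEAR(n_pixels=1024, rank=8)
opt = torch.optim.Adam(model.parameters(), lr=1e-5)
for _ in range(100):
    L, S = model(Y)
    loss = bear_loss(Y, L, S)
    opt.zero_grad(); loss.backward(); opt.step()
```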


Fig. 2. Neural networks for sparse decomposition and deconvolution. (a) Architecture of BEAR. The network takes the raw video $Y$ and produces the low rank component $L$. The non-negative sparse component $S$ is obtained as $S = \textit {ReLU }(Y-L)$. (b) Deconvolution network architecture. The deconvolved images are obtained by feeding the wide-field images to the network.


2.3.2 Network for 3-D deconvolution

2.3.2.1 Network architecture

For 3-D deconvolution, we employed the 3-D U-Net [32] architecture, which consists of contraction and expansion paths (Fig. 2(b)). In the contraction path, each layer contained two $3(z)\times 3(y)\times 3(x)$ convolutions, each followed by a LeakyReLU with a slope of 0.01, and a max pooling layer. Max pooling with two configurations was used alternately: $1(z)\times 2(y)\times 2(x)$ and $2(z)\times 2(y)\times 2(x)$. With each contraction step, we doubled the number of channels. In the expansion path, each layer consisted of a transpose convolution followed by channel-wise concatenation using a shortcut connection, and two $3(z)\times 3(y)\times 3(x)$ convolutions, each followed by a LeakyReLU with a slope of 0.01. With each transpose convolution, we halved the number of channels. The transpose convolution layers alternated between two configurations: a kernel size and stride of $2(z)\times 2(y)\times 2(x)$ and of $1(z)\times 2(y)\times 2(x)$. In the shortcut connection, the feature map from the contraction path at the same level was concatenated in a channel-wise manner. When the dimensions of the tensor after the transpose convolution and the feature map from the contraction path differed, we interpolated the feature map to match the dimensions. The network implementation is available at https://github.com/NICALab/3DM.
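The compact PyTorch sketch below illustrates this architecture with two contraction levels, showing in particular the alternating $(1,2,2)$/$(2,2,2)$ pooling and transpose-convolution strides and the interpolated skip connections. The depth, base channel count, and final $1\times1\times1$ output convolution are illustrative assumptions; the released network is at https://github.com/NICALab/3DM.

```python
# Compact 3-D U-Net sketch with alternating anisotropic pooling strides.
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(in_ch, out_ch):
    # two 3x3x3 convolutions, each followed by LeakyReLU(0.01)
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, 3, padding=1), nn.LeakyReLU(0.01),
        nn.Conv3d(out_ch, out_ch, 3, padding=1), nn.LeakyReLU(0.01))

class UNet3D(nn.Module):
    def __init__(self, base_ch=16):
        super().__init__()
        self.enc1 = conv_block(1, base_ch)
        self.enc2 = conv_block(base_ch, base_ch * 2)
        self.bottom = conv_block(base_ch * 2, base_ch * 4)
        self.pool1 = nn.MaxPool3d((1, 2, 2))     # first level: (1,2,2)
        self.pool2 = nn.MaxPool3d((2, 2, 2))     # second level: (2,2,2)
        self.up2 = nn.ConvTranspose3d(base_ch * 4, base_ch * 2,
                                      kernel_size=(2, 2, 2), stride=(2, 2, 2))
        self.dec2 = conv_block(base_ch * 4, base_ch * 2)
        self.up1 = nn.ConvTranspose3d(base_ch * 2, base_ch,
                                      kernel_size=(1, 2, 2), stride=(1, 2, 2))
        self.dec1 = conv_block(base_ch * 2, base_ch)
        self.out = nn.Conv3d(base_ch, 1, 1)

    def forward(self, x):                        # x: (N, 1, Z, Y, X)
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool1(e1))
        b = self.bottom(self.pool2(e2))
        d2 = self.up2(b)
        if d2.shape[2:] != e2.shape[2:]:         # interpolate skip if sizes differ
            e2 = F.interpolate(e2, size=d2.shape[2:])
        d2 = self.dec2(torch.cat([e2, d2], dim=1))
        d1 = self.up1(d2)
        if d1.shape[2:] != e1.shape[2:]:
            e1 = F.interpolate(e1, size=d1.shape[2:])
        d1 = self.dec1(torch.cat([e1, d1], dim=1))
        return self.out(d1)

# smoke test with a bead-sized patch
y = UNet3D()(torch.rand(1, 1, 48, 64, 64))
print(y.shape)                                   # torch.Size([1, 1, 48, 64, 64])
```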

2.3.2.2 Dataset for training

For training the deconvolution network, we used 3-D time series images of larval zebrafish brains obtained with confocal microscopy, together with synthetic bead volumes, as the ground truth images; the corresponding wide-field images were obtained by convolving the ground truth images with the PSF of the microscope. In addition, we used the sparse components of the time series images, obtained using BEAR, and the corresponding simulated wide-field images to ensure that our network could faithfully reconstruct the neuronal activity. For synthetic bead image generation, spheres with radii between 1 and 8 voxels were randomly distributed in each volume. The radii and the center locations of the spheres were sampled from uniform distributions. Each voxel value inside the beads was drawn from a normal distribution with a mean of one and a variance of 0.25. We used the theoretical PSF of the microscope obtained using the Born & Wolf PSF model [33]. Further details on each dataset are described in Table S3.
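A hedged sketch of the synthetic bead generation is given below. The volume size, bead count, and the Gaussian stand-in for the PSF are assumptions for illustration; the paper uses the theoretical Born & Wolf PSF [33].

```python
# Sketch: random bead volumes blurred with a stand-in PSF via FFT convolution.
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)

def make_bead_volume(shape=(48, 64, 64), n_beads=30):
    vol = np.zeros(shape, dtype=np.float32)
    zz, yy, xx = np.indices(shape)
    for _ in range(n_beads):
        r = rng.uniform(1, 8)                    # radius, uniform in [1, 8] voxels
        cz, cy, cx = (rng.uniform(0, s) for s in shape)
        mask = (zz - cz) ** 2 + (yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2
        vol[mask] = rng.normal(1.0, 0.5, size=mask.sum())  # variance 0.25
    return np.clip(vol, 0, None)

gt = make_bead_volume()
psf = np.zeros((25, 25, 25)); psf[12, 12, 12] = 1.0
psf = gaussian_filter(psf, sigma=(4, 1.5, 1.5))            # stand-in for Born & Wolf
widefield = fftconvolve(gt, psf / psf.sum(), mode="same")  # simulated raw image
```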

2.3.3 Training BEAR and the deconvolution network

All computations were performed on a single GPU (NVIDIA RTX 3090) and the networks were implemented using PyTorch [34]. To train BEAR, we set the hyperparameter $\alpha$ to 10,000 and used an Adam optimizer with a batch size of 16. The learning rate was 0.00001 and the number of training epochs was 100. The 3-D deconvolution network was trained with an $L_{1}$ loss function in a supervised manner. We generated pairs of patches for supervised learning by randomly cropping the same location of the synthetic wide-field image volumes and the ground truth volumes. The patch sizes were set to 48(z) $\times$ 160(y) $\times$ 160(x) for the zebrafish dataset and 48(z) $\times$ 64(y) $\times$ 64(x) for the synthetic beads, respectively. The training samples were augmented by flipping (horizontal and vertical) and rotating (90$^\circ$). Each pair was normalized by dividing by the maximum value of the wide-field volume patch. Finally, we added Poisson noise to the wide-field volume patch. We used an Adam optimizer with a batch size of 2. The learning rate was 0.0004, and the number of training epochs was 26,000.
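The sketch below illustrates one training step with these ingredients: matched random crops, flip/rotation augmentation, normalization by the maximum of the wide-field patch, Poisson noise, and an $L_1$ loss. The `photon_scale` parameter and the single-sample batch are illustrative assumptions.

```python
# One supervised training step for the deconvolution network (sketch).
import torch

def train_step(net, opt, widefield, ground_truth, patch=(48, 64, 64),
               photon_scale=1000.0):
    # random crop at the same location in both (Z, Y, X) volumes
    z0, y0, x0 = (torch.randint(0, s - p + 1, (1,)).item()
                  for s, p in zip(widefield.shape, patch))
    wf = widefield[z0:z0+patch[0], y0:y0+patch[1], x0:x0+patch[2]].clone()
    gt = ground_truth[z0:z0+patch[0], y0:y0+patch[1], x0:x0+patch[2]].clone()
    # augmentation: random flip and 90-degree in-plane rotation
    if torch.rand(1) < 0.5:
        wf, gt = wf.flip(-1), gt.flip(-1)
    if torch.rand(1) < 0.5:
        wf, gt = torch.rot90(wf, 1, (-2, -1)), torch.rot90(gt, 1, (-2, -1))
    # normalize by the maximum of the wide-field patch, then add Poisson noise
    m = wf.max().clamp(min=1e-8)
    wf, gt = wf / m, gt / m
    wf = torch.poisson(wf * photon_scale) / photon_scale
    pred = net(wf[None, None])                   # (1, 1, Z, Y, X)
    loss = torch.nn.functional.l1_loss(pred, gt[None, None])
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```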

2.4 3DM simulation

We performed the 3DM simulation using 3-D time series images of larval zebrafish brain obtained with confocal microscopy. We first generated synthetic raw 3DM images by convolving the confocal image volumes with the PSF of our optical system, and then reconstructed the volumes. To verify the effectiveness of each component of 3DM, we compared the results from four different methods:

  • sparse decomposition using BEAR and then deconvolution using a neural network (3DM).
  • sparse decomposition using BEAR and then Richardson-Lucy deconvolution (SD-RL).
  • direct deconvolution using a neural network.
  • direct Richardson-Lucy deconvolution.
The results from the direct deconvolution methods were decomposed using BEAR to determine the accuracy of the neural activity reconstruction. The number of iterations for the Richardson-Lucy deconvolution was set to 30. The reconstructed images were compared to the sparse component of the confocal microscopy data, and the mean squared error (MSE), peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM) were evaluated. Images of different fish samples were used for training and testing the deconvolution network.
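For reference, these metrics can be computed with scikit-image as sketched below; the data-range convention is an assumption.

```python
# Hedged metric computation: recon and reference are 3-D arrays of equal shape.
import numpy as np
from skimage.metrics import (mean_squared_error, peak_signal_noise_ratio,
                             structural_similarity)

def evaluate(recon, reference):
    dr = float(reference.max() - reference.min())
    return {"MSE": mean_squared_error(reference, recon),
            "PSNR": peak_signal_noise_ratio(reference, recon, data_range=dr),
            "SSIM": structural_similarity(reference, recon, data_range=dr)}
```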

2.5 Imaging larval zebrafish

For zebrafish experiments, larval zebrafish expressing pan-neuronal GCaMP7a [3,35] with a casper background were imaged at 3-4 days post-fertilization (dpf). The fish were paralyzed by immersion in a 0.25 mg/ml pancuronium bromide (Sigma-Aldrich) solution for 2 minutes and then embedded in 2.0% low melting point agarose (TopVision) in Petri dishes. After solidification of the agarose gel, the dishes were filled with standard fish water and the fish were imaged using the 3DM imaging system or a confocal microscope (Fig. S5). Confocal microscopy images of the larval zebrafish brains, used for the simulation and for training the deconvolution network, were obtained using a point-scanning confocal microscopy system (C2 Plus, Nikon) equipped with a 16x 0.8 NA water dipping objective lens. Fish were imaged with a pixel size of ~1.5 µm × 1.5 µm and an axial step size of 5 µm. We obtained the synthetic 4-D data of neural activity by concatenating the results from time-lapse imaging at different depths. The animal experiments conducted for this study were approved by the Institutional Animal Care and Use Committee (IACUC) of KAIST (KA2019-13).

2.6 Data analysis and rendering

For the extraction of neuronal activity from the larval zebrafish data, we obtained a temporal maximum intensity projection (MIP) of the reconstructed volumes, and the active neurons were directly segmented by applying 3-D matched filters [22] to the temporal MIP. The segmentation masks were applied to the time series volumes to extract the activity of the neurons. 3-D video rendering was performed using Napari [36] and BrainTour (Kioxia Corporation).
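An illustrative sketch of this extraction procedure is given below. The Gaussian matched-filter shape, peak threshold, and mask radius are assumptions standing in for the 3-D matched filters of [22].

```python
# Sketch: segment active neurons on the temporal MIP, then pool traces.
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def extract_traces(video, cell_sigma=(1.0, 2.0, 2.0), radius=3, thresh=0.2):
    """video: (T, Z, Y, X) array of reconstructed sparse volumes."""
    mip = video.max(axis=0)                          # temporal MIP (Z, Y, X)
    score = gaussian_filter(mip, cell_sigma)         # matched-filter response
    peaks = (score == maximum_filter(score, size=5)) \
            & (score > thresh * score.max())         # local maxima above threshold
    centers = np.argwhere(peaks)
    zz, yy, xx = np.indices(mip.shape)
    traces = []
    for cz, cy, cx in centers:
        mask = ((zz - cz) ** 2 + (yy - cy) ** 2 + (xx - cx) ** 2) <= radius ** 2
        traces.append(video[:, mask].mean(axis=1))   # mean activity per frame
    return centers, np.array(traces)
```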

3. Results

3.1 Performance verification via simulation

For a quantitative assessment of our approach, we performed a simulation study using the 3-D time series images of larval zebrafish brain obtained with confocal microscopy. The confocal images were used for generating the raw 3DM images and also as the ground truth for evaluating the MSE, PSNR and SSIM, as summarized in Table 1. 3DM, which performs sparse decomposition first and then deconvolution using the neural network, outperformed the other methods in all metrics on the test data (Fig. 3, Visualization 1). Performing sparse decomposition before deconvolution yielded more accurate results with both deconvolution methods, and the deconvolution network performed better than the Richardson-Lucy deconvolution regardless of the order of operations. In terms of computational costs, the Richardson-Lucy deconvolution and the deconvolution network took approximately 120 seconds and 0.45 seconds, respectively, for the deconvolution of a volume with 48 (z) $\times$ 240 (y) $\times$ 480 (x) voxels. Training the deconvolution network took approximately 1.5 days. The computation times of the decomposition and deconvolution are summarized in Table 2.


Fig. 3. Temporal maximum intensity projection (MIP) of the sparse components of the simulated images. Scale bar, 100 µm. Top, lateral MIP. Bottom, axial MIP. (a) Ground truth. (b) Wide-field image. (c) Direct Richardson-Lucy deconvolution result. (d) Direct network-based deconvolution result. (e) SD-RL result. (f) 3DM result.



Table 1. Quantitative comparison of four deconvolution methods through simulation. Mean squared error (MSE), peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM) are evaluated. 3DM outperforms the other methods in all metrics.


Table 2. Computation times of the decomposition and deconvolution (seconds).

3.2 3DM experiments

To demonstrate the capability of 3DM, we imaged the neural activity of entire larval zebrafish brains expressing GCaMP7a pan-neuronally at 3-4 days post fertilization (dpf). We imaged the spontaneous neuronal activity at imaging rates of 1–4.2 VPS with a volumetric field of view of 1040 µm (x) × 400 µm (y) × 235 µm (z) for 5–20 minutes. In the raw images, neuronal cell bodies were not visible, mainly due to the out-of-focus light (Fig. 4(a)). The image contrast was enhanced with Richardson-Lucy deconvolution, but the neurons were still not clearly visible due to the intrinsic limitation of deconvolution on densely labeled samples (Fig. 4(b)). With SD-RL, which performs sparse decomposition prior to the Richardson-Lucy deconvolution, the resolution was improved and the haze above and below the brain was removed (Fig. 4(c)). Finally, with 3DM reconstruction, each neuron was clearly resolved thanks to the increases in resolution, contrast and signal-to-noise ratio (Fig. 4(d),(e), Visualization 2, Visualization 3). The inference for a single volume took 0.63 seconds using the deconvolution network, while the Richardson-Lucy deconvolution took 133 seconds. To assess the spatial resolution, we performed a 2-D Fourier domain analysis [37] as shown in Fig. S6. The measured lateral and axial resolutions of the 3DM results were 1.7 µm and 5.4 µm, respectively, while those from SD-RL were 2.7 µm and 5.8 µm, respectively.
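A simplified sketch of such a Fourier-domain resolution estimate is shown below: the radially averaged power spectrum is compared against its noise floor and the resolution is taken as the reciprocal of the cutoff frequency. The noise-floor criterion here is a coarse stand-in for the procedure of [37].

```python
# Simplified Fourier-domain resolution estimate for a 2-D slice.
import numpy as np

def fourier_resolution(img2d, pixel_um):
    power = np.abs(np.fft.fftshift(np.fft.fft2(img2d))) ** 2
    cy, cx = np.array(power.shape) // 2
    yy, xx = np.indices(power.shape)
    r = np.hypot(yy - cy, xx - cx).astype(int)          # radial frequency bins
    counts = np.maximum(np.bincount(r.ravel()), 1)
    radial = np.bincount(r.ravel(), power.ravel()) / counts
    noise_floor = np.median(radial[len(radial) // 2:])  # outer spectrum
    cutoff = np.argmax(radial < 2.0 * noise_floor)      # first bin near the floor
    f_cut = cutoff / (power.shape[0] * pixel_um)        # cycles per micron
    return 1.0 / f_cut if f_cut > 0 else np.inf
```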


Fig. 4. Imaging whole brain and spinal cord of a larval zebrafish expressing pan-neuronal GCaMP7a using 3DM. (a) Temporal MIP of the raw images. Scale bar, 100 µm. Top, lateral MIP. Bottom, axial MIP. (b) Temporal MIP of the Richardson-Lucy deconvolution result. Scale bar, 100 µm. Top, lateral MIP. Bottom, axial MIP. (c) Temporal MIP of the SD-RL result. Scale bar, 100 µm. Top, lateral MIP. Bottom, axial MIP. (d) Temporal MIP of the 3DM result. Scale bar, 100 µm. Top, lateral MIP. Bottom, axial MIP. (e) Enlarged views of (a)-(d) showing the optic tectum (boxed area in (a)). Scale bar, 30 µm. (f) Squared norm of the Fourier transform of the images in (a)-(d) displayed in log scale. Top, Fourier transforms of lateral (xy) slices. Bottom, Fourier transforms of axial (xz) slices.


For further verification of the image quality, we superimposed the neuronal activity on the reconstructed low rank component (Fig. 5(a), Visualization 4, Visualization 5), which was obtained by feeding the low rank image from BEAR to the deconvolution network, and inspected axial slices at multiple time points when groups of adjacent neurons were simultaneously active. Such synchronicity locally and temporarily decreases the level of spatial sparsity, the main source of the resolution improvement in 3DM, and hence poses a challenge to 3DM. Fortunately, we found that the neuronal cell bodies were clearly resolved even with this high level of local synchronicity. Neuronal activities from ~3,000 neurons were extracted by directly segmenting the temporal MIP volume (Fig. 5(b), Fig. S7, Visualization 6, Visualization 7, Visualization 8). The extracted signals showed stable baselines thanks to the low crosstalk among adjacent neurons (Fig. 5(c), Fig. S8). In addition, the shape and the size of the neurons in the 3DM images matched well with those in the confocal image of the same larval zebrafish (Fig. S9).


Fig. 5. Imaging the neuronal activity of a larval zebrafish brain expressing pan-neuronal GCaMP7a using 3DM. (a) Three z-slices (at z = 70 µm, 100 µm, and 140 µm, from top to bottom; z = 0 µm indicates the top surface of the brain) from the deconvolved low rank image (left). Enlarged views of the boxed area with the neuronal activity superimposed on the low rank image at multiple time points (right). Scale bar, 50 µm. (b) Extracted neuronal activities shown as a heat map. (c) Randomly selected neurons and the corresponding extracted neuronal activity. The color of each selected neuron is matched to that of the corresponding activity trace. Scale bar, 50 µm.


As a side note, we tested the generalization capability of our deconvolution network with wide-field images of pollen grains and of a first instar Drosophila larva with pan-neuronal GFP expression, which were not only unseen during training but also largely different from the training data. In the deconvolution results, the out-of-focus light was successfully removed and the fine details of the pollen grains and the Drosophila larva were restored (Figs. S10 and S11, Visualization 9).

4. Conclusion

In summary, we implemented 3DM, a computational imaging method for high-speed volumetric imaging, and demonstrated its capability via in vivo imaging of the neural activity of whole larval zebrafish brains at up to 4.2 Hz. 3DM overcomes the ill-posedness of the inverse problem in deconvolution microscopy in two ways: by translating the temporal sparsity of neuronal activity into spatial sparsity of images, and by using a neural network for deconvolution that exploits prior knowledge of the data distribution. 3DM is extremely cost effective, as it can be implemented by adding an ETL and a 4f relay system to a standard microscope and a graphics card to a personal computer. The simplicity, performance and scalability of 3DM make it an attractive tool for the high-speed imaging of neural activity.

Funding

National Research Foundation of Korea (2020R1C1C1009869, NRF2021R1A4A102159411).

Acknowledgments

We thank Yosuke Bando (Kioxia Corporation and MIT Media Lab) for providing BrainTour. The zebrafish lines used for calcium imaging were provided by the Zebrafish Center for Disease Modeling (ZCDM), Korea. The Drosophila lines were provided by the Korea Drosophila Resource Center (KDRC), Korea. This research was supported by the National Research Foundation of Korea (2020R1C1C1009869, NRF2021R1A4A102159411).

Disclosures

The authors declare no conflicts of interest.

Data availability

The datasets presented in this paper are available from the corresponding author upon reasonable request.

Supplemental document

See Supplement 1 for supporting content.

References

1. L. Tian, S. Hires, T. Mao, D. Huber, E. Chiappe, S. Chalasani, L. Petreanu, J. Akerboom, S. Mckinney, E. Schreiter, C. Bargmann, V. Jayaraman, K. Svoboda, and L. Looger, “Imaging neural activity in worms, flies and mice with improved gcamp calcium indicators,” Nat. Methods 6(12), 875–881 (2009). [CrossRef]  

2. T.-W. Chen, T. J. Wardill, Y. Sun, S. R. Pulver, S. L. Renninger, A. Baohan, E. R. Schreiter, R. A. Kerr, M. B. Orger, V. Jayaraman, L. L. Looger, K. Svoboda, and D. S. Kim, “Ultrasensitive fluorescent proteins for imaging neuronal activity,” Nature 499(7458), 295–300 (2013). [CrossRef]  

3. A. Muto, M. Ohkura, G. Abe, J. Nakai, and K. Kawakami, “Real-time visualization of neuronal activity during perception,” Curr. Biol. 23(4), 307–311 (2013). [CrossRef]  

4. M. B. Ahrens, M. B. Orger, D. N. Robson, J. M. Li, and P. J. Keller, “Whole-brain functional imaging at cellular resolution using light-sheet microscopy,” Nat. Methods 10(5), 413–420 (2013). [CrossRef]  

5. T. Panier, S. Romano, R. Olive, T. Pietri, G. Sumbre, R. Candelier, and G. Debrégeas, “Fast functional imaging of multiple brain regions in intact zebrafish larvae using selective plane illumination microscopy,” Front. Neural Circuits 7, 65 (2013). [CrossRef]  

6. R. Tomer, M. Lovett-Barron, I. Kauvar, A. Andalman, V. M. Burns, S. Sankaran, L. Grosenick, M. Broxton, S. Yang, and K. Deisseroth, “Sped light sheet microscopy: fast mapping of biological system structure and function,” Cell 163(7), 1796–1806 (2015). [CrossRef]  

7. S. Quirin, N. Vladimirov, C.-T. Yang, D. S. Peterka, R. Yuste, and M. B. Ahrens, “Calcium imaging of neural circuits with extended depth-of-field light-sheet microscopy,” Opt. Lett. 41(5), 855–858 (2016). [CrossRef]  

8. D. H. Kim, J. Kim, J. C. Marques, A. Grama, D. G. Hildebrand, W. Gu, J. M. Li, and D. N. Robson, “Pan-neuronal calcium imaging with cellular resolution in freely swimming zebrafish,” Nat. Methods 14(11), 1107–1114 (2017). [CrossRef]  

9. V. Voleti, K. B. Patel, W. Li, C. P. Campos, S. Bharadwaj, H. Yu, C. Ford, M. J. Casper, R. W. Yan, W. Liang, C. Wen, K. D. Kimura, K. L. Targoff, and E. M. C. Hillman, “Real-time volumetric microscopy of in vivo dynamics and large-scale samples with scape 2.0,” Nat. Methods 16(10), 1054–1062 (2019). [CrossRef]  

10. J. Pawley, Handbook of biological confocal microscopy, vol. 236 (2006).

11. D. Kobat, M. E. Durst, N. Nishimura, A. W. Wong, C. B. Schaffer, and C. Xu, “Deep tissue multiphoton microscopy using longer wavelength excitation,” Opt. Express 17(16), 13354–13364 (2009). [CrossRef]  

12. S. L. Renninger and M. B. Orger, “Two-photon imaging of neural population activity in zebrafish,” Methods 62(3), 255–267 (2013). [CrossRef]  

13. N. G. Horton, K. Wang, D. Kobat, C. G. Clark, F. W. Wise, C. B. Schaffer, and C. Xu, “In vivo three-photon microscopy of subcortical structures within an intact mouse brain,” Nat. Photonics 7(3), 205–209 (2013). [CrossRef]  

14. M. Kumar, S. Kishore, J. Nasenbeny, D. L. McLean, and Y. Kozorovitskiy, “Integrated one-and two-photon scanned oblique plane illumination (sopi) microscopy for rapid volumetric imaging,” Opt. Express 26(10), 13027–13041 (2018). [CrossRef]  

15. E. M. Hillman, V. Voleti, W. Li, and H. Yu, “Light-sheet microscopy in neuroscience,” Annu. Rev. Neurosci. 42(1), 295–313 (2019). [CrossRef]  

16. T. Schrödel, R. Prevedel, K. Aumayr, M. Zimmer, and A. Vaziri, “Brain-wide 3d imaging of neuronal activity in caenorhabditis elegans with sculpted light,” Nat. Methods 10(10), 1013–1020 (2013). [CrossRef]  

17. R. Prevedel, A. J. Verhoef, A. J. Pernia-Andrade, S. Weisenburger, B. S. Huang, T. Nöbauer, A. Fernández, J. E. Delcour, P. Golshani, A. Baltuska, and A. Vaziri, “Fast volumetric calcium imaging across multiple cortical layers using sculpted light,” Nat. Methods 13(12), 1021–1028 (2016). [CrossRef]  

18. J.-B. Sibarita, “Deconvolution microscopy,” Microsc. Tech. pp. 201–243 (2005).

19. S. Hugelier, J. J. De Rooi, R. Bernex, S. Duwé, O. Devos, M. Sliwa, P. Dedecker, P. H. Eilers, and C. Ruckebusch, “Sparse deconvolution of high-density super-resolution images,” Sci. Rep. 6(1), 21413 (2016). [CrossRef]  

20. R. Prevedel, Y.-G. Yoon, M. Hoffmann, N. Pak, G. Wetzstein, S. Kato, T. Schrödel, R. Raskar, M. Zimmer, E. S. Boyden, and A. Vaziri, “Simultaneous whole-animal 3d imaging of neuronal activity using light-field microscopy,” Nat. Methods 11(7), 727–730 (2014). [CrossRef]  

21. L. Cong, Z. Wang, Y. Chai, W. Hang, C. Shang, W. Yang, L. Bai, J. Du, K. Wang, and Q. Wen, “Rapid whole brain imaging of neural activity in freely behaving larval zebrafish (danio rerio),” eLife 6, e28158 (2017). [CrossRef]  

22. Y.-G. Yoon, Z. Wang, N. Pak, D. Park, P. Dai, J. S. Kang, H.-J. Suk, P. Symvoulidis, B. Guner-Ataman, K. Wang, and E. S. Boyden, “Sparse decomposition light-field microscopy for high speed imaging of neuronal activity,” Optica 7(10), 1457–1468 (2020). [CrossRef]  

23. E. Betzig, G. H. Patterson, R. Sougrat, O. W. Lindwasser, S. Olenych, J. S. Bonifacino, M. W. Davidson, J. Lippincott-Schwartz, and H. F. Hess, “Imaging intracellular fluorescent proteins at nanometer resolution,” Science 313(5793), 1642–1645 (2006). [CrossRef]  

24. M. J. Rust, M. Bates, and X. Zhuang, “Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (storm),” Nat. Methods 3(10), 793–796 (2006). [CrossRef]  

25. A. Sharonov and R. M. Hochstrasser, “Wide-field subdiffraction imaging by accumulated binding of diffusing probes,” Proc. Natl. Acad. Sci. 103(50), 18911–18916 (2006). [CrossRef]  

26. E. J. Candès, X. Li, Y. Ma, and J. Wright, “Robust principal component analysis?” J. ACM (JACM) 58, 1–37 (2011).

27. N. Dey, L. Blanc-Feraud, C. Zimmer, P. Roux, Z. Kam, J.-C. Olivo-Marin, and J. Zerubia, “Richardson–lucy algorithm with total variation regularization for 3d confocal microscope deconvolution,” Microsc. Res. Tech. 69(4), 260–266 (2006). [CrossRef]  

28. F. O. Fahrbach, F. F. Voigt, B. Schmid, F. Helmchen, and J. Huisken, “Rapid 3d light-sheet microscopy with a tunable lens,” Opt. Express 21(18), 21010–21026 (2013). [CrossRef]  

29. K. Philipp, A. Smolarski, N. Koukourakis, A. Fischer, M. Stürmer, U. Wallrabe, and J. W. Czarske, “Volumetric hilo microscopy employing an electrically tunable lens,” Opt. Express 24(13), 15029–15041 (2016). [CrossRef]  

30. R. Shi, C. Jin, H. Xie, Y. Zhang, X. Li, Q. Dai, and L. Kong, “Multi-plane, wide-field fluorescent microscopy for biodynamic imaging in vivo,” Biomed. Opt. Express 10(12), 6625–6635 (2019). [CrossRef]  

31. S. Han, E.-S. Cho, I. Park, K. Shin, and Y.-G. Yoon, “Efficient neural network approximation of robust pca for automated analysis of calcium imaging data,” arXiv preprint arXiv:2108.01665 (2021).

32. Ö. Çiçek, A. Abdulkadir, S. S. Lienkamp, T. Brox, and O. Ronneberger, “3d u-net: learning dense volumetric segmentation from sparse annotation,” in International conference on medical image computing and computer-assisted intervention, (2016), pp. 424–432.

33. H. Kirshner, F. Aguet, D. Sage, and M. Unser, “3-d psf fitting for fluorescence microscopy: implementation and localization application,” J. Microsc. 249(1), 13–25 (2013). [CrossRef]  

34. A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala, “Pytorch: An imperative style, high-performance deep learning library,” Adv. neural information processing systems 32, 8026–8037 (2019).

35. Y.-M. Jeong, T.-I. Choi, K.-S. Hwang, J.-S. Lee, R. Gerlai, and C.-H. Kim, “Optogenetic manipulation of olfactory responses in transgenic zebrafish: A neurobiological and behavioral study,” Int. J. Mol. Sci. 22(13), 7191 (2021). [CrossRef]  

36. N. Sofroniew, K. Evans, J. Nunez-Iglesias, A. C. Solak, T. Lambert, kevinyamauchi, J. Freeman, L. Royer, S. Axelrod, P. Boone, T. Tung, jakirkham, P. Vemuri, M. Huang, H. Har-Gil, G. Buckley, Bryant, A. Rokem, wconnell, S. Li, R. Anderson, M. Bussonnier, Hector, H. Patterson, G. Gay, E. Perlman, D. Bennett, C. Gohlke, and A. de Siqueira, “napari/napari: 0.2.9,” (2020).

37. R. Mizutani, R. Saiga, S. Takekoshi, C. Inomoto, N. Nakamura, M. Itokawa, M. Arai, K. Oshima, A. Takeuchi, K. Uesugi, Y. Terada, and Y. Suzuki, “A method for estimating spatial resolution of real image in the fourier domain,” J. Microsc. 261(1), 57–66 (2016). [CrossRef]  

Supplementary Material (10)

NameDescription
Supplement 1       Supplementary material
Visualization 1       Sparse components of the simulated images. Top, lateral maximum intensity projection (MIP). Bottom, axial MIP. Scale bar, 100 µm. (a) Ground truth. (b) Wide-field image. (c) Direct Richardson-Lucy deconvolution result. (d) Direct network-based deconv
Visualization 2       Whole brain functional imaging of a larval zebrafish (4 dpf) expressing pan-neuronal GCaMP7a. MIP with rotating view. The entire brain volume was recorded at a volume rate of 4.2 Hz.
Visualization 3       Whole brain functional imaging of a larval zebrafish (4 dpf) expressing pan-neuronal GCaMP7a. MIP with rotating view. The entire brain volume was recorded at a volume rate of 1 Hz.
Visualization 4       Neuronal activity in a larval zebrafish brain expressing pan-neuronal GCaMP7a at multiple depths. The neuronal activity is superimposed on the deconvolved low-rank image. The entire brain volume was recorded at a volume rate of 4.2 Hz. Scale bar, 100
Visualization 5       Neuronal activity in a larval zebrafish brain expressing pan-neuronal GCaMP7a at multiple depths. The neuronal activity is superimposed on the deconvolved low-rank image. The entire brain volume was recorded at a volume rate of 1 Hz. Scale bar, 100 µ
Visualization 6       Axial slices of temporal MIP of whole brain of a larval zebrafish (4 dpf) expressing pan-neuronal GCaMP7a. The entire brain volume was recorded at a volume rate of 4.2 Hz. Temporal MIP of the raw video (left) and temporal MIP of the 3DM result (right
Visualization 7       Axial slices of temporal MIP of whole brain of a larval zebrafish (4 dpf) expressing pan-neuronal GCaMP7a. The entire brain volume was recorded at a volume rate of 1 Hz. Temporal MIP of the raw video (left) and temporal MIP of the 3DM result (right).
Visualization 8       3-D rendering of temporal MIP of whole brain of a larval zebrafish (4 dpf) expressing pan-neuronal GCaMP7a. The entire brain volume was recorded at a volume rate of 1 Hz. The video was rendered using BrainTour (Kioxia Inc.).
Visualization 9       Imaging of a first instar Drosophila larva with pan-neuronal GFP expression. The larva was recorded at a volume rate of 4.2 Hz.
