Optica Publishing Group

Deep learning-enabled efficient image restoration for 3D microscopy of turbid biological specimens

Open Access

Abstract

Though three-dimensional (3D) fluorescence microscopy has been an essential tool for modern life science research, light scattering by biological specimens fundamentally limits its wider application in live imaging. We hereby report a deep-learning approach, termed ScatNet, that learns the mapping from low-quality, light-scattered measurements to high-resolution targets, thereby allowing restoration of blurred, light-scattered 3D images of deep tissue. Our approach can computationally extend the imaging depth of current 3D fluorescence microscopes without the addition of complicated optics. Combining the ScatNet approach with cutting-edge light-sheet fluorescence microscopy (LSFM), we demonstrate image restoration of cell nuclei in the deep layers of live Drosophila melanogaster embryos at single-cell resolution. Applying our approach to two-photon excitation microscopy, we could improve the signal-to-noise ratio (SNR) and resolution of neurons in the mouse brain beyond the photon ballistic region.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

Corrections

12 October 2020: Typographical corrections were made to the author affiliations.

1. Introduction

A recurring challenge in biology is the attempt to look inside various biological specimens to study the structure and function of their cellular constituents [1,2]. These growing demands pose substantial challenges to current 3D light microscopy. Conventional 3D optical microscopy methods based on linear one-photon absorption, in either epi-illumination [3] or plane-illumination [4] modes, are limited to imaging the tissue surface (less than 100 µm) because highly anisotropic scattering blurs the images at increasingly greater depths. Tissue clearing methods [5,6] generally use chemical reagents to remove scattering cellular constituents and then apply a high-refractive-index solution for refractive index matching, thereby suppressing multiple light scattering and substantially increasing the imaging depth. However, tissue clearing cannot completely eliminate scattering, especially for very thick disordered media [7]. More importantly, clearing methods require tissue extraction and histological preparation of the sample, making them unsuitable for imaging intact tissues or living organisms [5]. Nonlinear two-photon-excited fluorescence microscopy (TPEM) provides superior optical sectioning and mitigates the light scattering effect, owing to the dominance of ballistic photons inherent in the nonlinear two-photon excitation process and the use of longer excitation wavelengths [8]. However, the contribution of ballistic fluorescence emission photons remains minor beyond about one scattering mean free path (50–90 µm at 600 nm in brain gray matter) and becomes negligible several hundred microns deep into brain tissue [8,9]. Temporal-focusing TPEM with generalized phase contrast [10] can further increase the imaging depth to a few hundred microns, but at the cost of longer acquisition time, making it difficult to use for live imaging.

Aside from direct acquisition of scattering-suppressed images, several hardware-based computational methods, such as adaptive optics [11] and speckle correlation [12] techniques, have been proposed to recover scattered images by extracting useful information from the blurred signals. Adaptive optics relies on highly sophisticated optics as well as extra response time for wavefront measurement and correction, making it less compatible with commercial confocal or TPE microscopes and less suited for observing highly dynamic samples. Speckle correlation methods exploit the correlation of laser speckle imaging and the optical memory effect, but the limited angular range of the memory effect restricts the imaging depth to the superficial layers of tissues.

Instead of recovering scattered signals with conventional optical models, recently emerging deep neural networks offer a promising alternative: they can learn an end-to-end image mapping from data pairs without explicit analytical modeling [13]. Deep learning based on convolutional neural networks (CNNs) [7] has become an effective approach for enhancing biomedical image quality. For example, fluorescence microscopy has recently benefited from advances in deep-learning-based enhancement such as image restoration [14], deconvolution [15], super-resolution [16,17], and style transformation [18,19].

Here we propose a deep-learning-enabled computational workflow, termed ScatNet, which can efficiently restore signals blurred by tissue scattering in a 3D fluorescence image stack while enhancing spatial resolution and image contrast. Based on multi-view acquisition and bead-based registration [20] with light-sheet microscopy, and a stepwise tissue-thickening experiment with TPEM, we construct a faithful 3D dataset containing well-registered scattering and scattering-free image pairs for training ScatNet (a U-Net architecture). Afterward, the well-trained ScatNet can predict a deblurred, high-quality image directly from the scattered input, without any hardware addition or complicated computation. To demonstrate its benefit to various microscopy modalities, we applied ScatNet restoration to single-view selective plane illumination microscopy (SPIM) images of Drosophila embryos and successfully recovered the completely blurred nuclear structures in the deep region of the embryo, showing its potential to substitute for the more complicated multi-view SPIM imaging strategy. We also demonstrate that ScatNet can double the penetration depth of commercial TPEM imaging of tagged neurons in Thy1-YFP mouse brain tissues.

2. Principle

We acquire well-registered pairs of 3D images showing different degrees of light scattering, using delicately designed setups based on multi-view light-sheet microscopy or two-photon excitation microscopy. The pairs of acquired raw images are first normalized and cropped into many patches to fit the network training (Fig. 1(a), step 1). A background filter is then applied to reduce their backgrounds (Fig. 1(a), step 2). After data pre-processing, the network treats the low-quality scattering and high-quality scattering-free images as blurred data and label data, respectively, to initiate the training process (Fig. 1(b), step 3). During each training epoch, the neural network learns to better restore high-quality images from the blurred inputs. The intermediate output generated in each epoch is compared with the fixed label data to optimize the loss function of the network, iteratively pushing the network toward its optimized state, at which it can predict high-quality outputs close enough to the label data (Fig. 1(b), steps 4-6). After the network is sufficiently trained, we apply it to the restoration of blurred 3D images of turbid samples acquired by LSFM or TPEM (Fig. 1(c), step 7). By computationally removing the blur caused by light scattering, ScatNet significantly improves the SNR and resolution of 3D images (Fig. 1(d), steps 8-10). In the following sections, we demonstrate how ScatNet can optimize LSFM and TPEM in live imaging of biological specimens, with substantially increased imaging depth.
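The pre-processing in steps 1-2 can be sketched as follows. This is an illustrative Python outline under our own assumptions: the patch size, stride, and the mean-intensity background threshold are hypothetical choices, not the published parameters.

```python
import numpy as np

def normalize(stack):
    """Min-max normalize a 3D stack to [0, 1] (step 1, part one)."""
    stack = stack.astype(np.float32)
    lo, hi = stack.min(), stack.max()
    return (stack - lo) / (hi - lo + 1e-8)

def crop_patches(stack, size=64, stride=64):
    """Crop a (Z, Y, X) stack into cubic patches (step 1, part two)."""
    z, y, x = stack.shape
    patches = []
    for i in range(0, z - size + 1, stride):
        for j in range(0, y - size + 1, stride):
            for k in range(0, x - size + 1, stride):
                patches.append(stack[i:i + size, j:j + size, k:k + size])
    return np.stack(patches)

def filter_background(blurred, label, threshold=0.05):
    """Discard patch pairs whose label patch is mostly background (step 2).
    The mean-intensity criterion is a hypothetical stand-in for the
    background filter described in the text."""
    keep = np.array([lbl.mean() > threshold for lbl in label])
    return blurred[keep], label[keep]
```

In practice the same crop coordinates must be used for the blurred stack and its registered label stack so the pairs stay aligned.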


Fig. 1. Workflow of ScatNet. (a) Data pre-preparation including the intensity normalization, cropping (step 1), and background filtration (step 2) of the raw 3D image pairs acquired by imaging setups that exclude/include effect of light scattering. (b) Iterative network (U-Net architecture) training based on the registered and preprocessed image pairs. The network generates intermediate outputs based on the input blurred data (scattering images) and compares them quantitatively with the label data (scattering-free images), to calculate the system loss function which can iteratively push the optimization of the network (step 3 to 6). (c) Real microscopy imaging of deep tissues (Drosophila embryo, mouse brain, etc.), which generates experimental images with low SNR and resolution by light scattering (step 7). (d) Restoration of the degraded experimental images through predicting higher-resolution, higher-SNR images (step 8, 9) using the well-trained ScatNet. The restored patches are finally stitched back into a complete 3D image volume of the sample (step 10).


3. Results

3.1 Characterization of ScatNet

We characterized the performance of ScatNet through point-spread-function (PSF) imaging of sub-diffraction fluorescent beads (∼500 nm in diameter, Lumisphere) using a line-synchronized Bessel light-sheet microscope built on an upright microscope (Olympus BX51, 800-nm light-sheet thickness, 20×/0.5 detection objective) [21]. The beads were embedded in a 1% agarose column mixed with a cluster of cultured human cervical cancer (HeLa) cells (∼500 × 500 × 600 µm3), which served as a scattering medium for the PSF imaging. In the experiment, we rotated the sample 180° to flip it upside down and thereby imaged the same beads twice (with and without the cells' scattering medium in the light path). After imaging a few regions of interest, bead-based 3D image registration was applied to precisely align the scattering and scattering-free PSF images for network training [20]. We then imaged another region of beads (∼90 × 90 × 25 µm3) using the same procedure, to acquire the scattered image and the corresponding ground truth (GT) for network application (Fig. 2(a)). As shown in Figs. 2(b)–2(d), ScatNet successfully restored the low-quality scattering image, achieving significantly improved spatial resolution as well as high structural similarity to the ground-truth image. Magnified views of the yz and xz projections of a selected region of interest (ROI) further reveal the details of the imaged/restored beads (Figs. 2(c), 2(d)). Moreover, we measured the full widths at half maximum (FWHMs) of ∼90 individual beads in the ROIs along the three axes. The results confirmed much narrower FWHMs for the restored PSFs compared to the raw ones.
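As a rough sketch of how such a per-bead FWHM measurement can be carried out, the following Python function (our own illustration, not the authors' code) estimates the FWHM of a single-peak 1D intensity profile by locating the half-maximum crossings with linear interpolation:

```python
import numpy as np

def fwhm(profile, spacing=1.0):
    """Full width at half maximum of a single-peak 1D intensity profile,
    interpolating linearly between samples at the half-max crossings.
    `spacing` is the physical sample spacing (e.g. µm per pixel)."""
    profile = np.asarray(profile, dtype=np.float64)
    profile = profile - profile.min()          # remove constant baseline
    half = profile.max() / 2.0
    idx = np.flatnonzero(profile >= half)      # samples above half max
    left, right = idx[0], idx[-1]
    # interpolate the left crossing between samples (left-1, left)
    if left > 0:
        x0 = left - 1 + (half - profile[left - 1]) / (profile[left] - profile[left - 1])
    else:
        x0 = float(left)
    # interpolate the right crossing between samples (right, right+1)
    if right < len(profile) - 1:
        x1 = right + (profile[right] - half) / (profile[right] - profile[right + 1])
    else:
        x1 = float(right)
    return (x1 - x0) * spacing
```

Applied to a line profile through a bead along x, y, or z, this recovers the per-axis resolution figures of the kind reported in Fig. 2(e).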


Fig. 2. Performance of ScatNet. (a) Light-sheet microscopy imaging of 3D fluorescent beads buried within a cluster of scattering HeLa cells. Two image stacks of the same beads are acquired from two opposite views (0° and 180°) and spatially registered to obtain pairs of scattering and scattering-free bead images. (b) 3D reconstructions of the same beads in scattering mode (raw input), scattering-free mode (ground truth), and after ScatNet restoration (network output). (c)-(d) Lateral (xy) and axial (xz) maximum intensity projections (MIPs) of the same selected ROIs (yellow boxes in b). The results clearly demonstrate a notable improvement in lateral and axial resolution by ScatNet. Scale bar, 5 µm. (e) ScatNet improves the lateral/axial resolution across the ∼25-µm overlapping volume of interest (∼90 beads), achieving relatively uniform resolutions of ∼1.09 ± 0.07 µm, ∼1.44 ± 0.23 µm and ∼1.77 ± 0.57 µm in x, y and z, respectively, compared to ∼1.95 ± 0.36 µm, ∼2.69 ± 0.43 µm and ∼2.72 ± 1.02 µm in the raw scattering image.


3.2 Improvement for light-sheet microscopy of live Drosophila melanogaster embryos

We applied ScatNet to SPIM imaging of live Drosophila melanogaster embryos, which suffers from strong tissue scattering, mainly from the lipids of the embryo body. Multi-view LSFM imaging, as described above, was therefore used to obtain a set of LSFM stacks (8-16 groups), which were further registered and fused to reconstruct a 3D image containing the complete sample signals. In our implementation, we used openly available SPIM images of Drosophila embryos for ScatNet training [20]. We divided the 6-view data into three groups, each containing a pair of corresponding upside-down 3D images (0°-180°, 45°-225°, 90°-270°). Following the registration and preprocessing steps described above, the three groups of training data contained well-registered scattering/scattering-free signals at deep/superficial layers for network training. The neural network predictions could then directly recover the blurred, low-contrast nuclei signals of a single-view SPIM image (Figs. 3(b), 3(c)), allowing complete visualization of all nuclei with quality similar to the multi-view-fused ground-truth result. Our network recovery procedure thereby eliminated the need for multi-view acquisition as well as for computation-demanding registration/fusion. Three selected ROIs suffering from weak, medium, and strong scattering were restored by ScatNet and compared with their corresponding ground truths (Fig. 3(d)). The network predictions were verified to have sufficient structural similarity (Fig. 3(d)), improved resolution (Fig. 3(e)), and enhanced SNR (Fig. 3(f)) in all regions, regardless of the degree of tissue scattering.
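The grouping of opposite views into training pairs can be illustrated with a short Python sketch. The stack names and the dictionary layout here are hypothetical; the point is only that each view below 180° is matched with its upside-down counterpart at that angle plus 180°:

```python
# Hypothetical labels for the stacks of a 6-view SPIM acquisition.
views = {0: "stack_000", 45: "stack_045", 90: "stack_090",
         180: "stack_180", 225: "stack_225", 270: "stack_270"}

def opposite_view_pairs(views):
    """Pair each view below 180° with the view rotated by 180°,
    yielding the (deep-side, superficial-side) training groups."""
    pairs = []
    for angle in sorted(views):
        if angle < 180 and (angle + 180) in views:
            pairs.append((views[angle], views[angle + 180]))
    return pairs
```

Each resulting pair then goes through the bead-based registration so that blurred deep-layer signals in one view line up with sharp superficial-layer signals in the opposite view.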


Fig. 3. Restoration of living Drosophila embryos imaged by LSFM. (a) 3D image stacks of Drosophila embryos acquired by multi-view SPIM and registered by multi-view registration [20]. The 3D stacks from different views are registered to obtain training pairs of scattering and scattering-free images from different parts of the embryo. (b) Full xy slices of a Drosophila embryo by single-view SPIM (raw), ScatNet (restoration) and multi-view fusion (ground truth). (c) 3D views of the raw, restored and ground-truth nuclei signals at the same embryo region. (d) ScatNet restoration of signals with different degrees of light scattering (at different depths). The comparative results are shown in both raw xy (rows 1-3) and reconstructed xz planes (row 4). The error maps of the raw and restored images, compared to the ground truths, are shown in the right two columns. (e) Intensity plots of lines through the nuclei resolved in the raw, restored, and ground-truth images. The profiles of the ScatNet restorations fit those of the ground truths well, showing that sufficiently high resolution as well as restoration fidelity is achieved regardless of the scattering degree. (f) The averaged SNR values of the image stacks shown in (c). The poor SNR of the raw images is also improved by ScatNet.


3.3 Improvement for TPEM of mouse brain

Light microscopy of live model animals, such as mice and rats, has become a widely used technique in life science research. Though optical imaging can provide highly specific structural and functional information, observation is often limited to very superficial layers owing to light scattering in tissue. For example, in in vivo two-photon excitation microscopy, the ballistic regime of photons (∼920 nm) in mouse brain tissue is usually limited to ∼200 µm in depth, which corresponds merely to the cortical layer I area of an intact brain. Various methods have been proposed to increase the imaging depth of current light microscopy, but most require adaptive optics or other extra hardware to be added to the original microscopes.

Here we used ScatNet to computationally address the scattering problem and substantially increase the imaging depth of a commercial two-photon excitation microscope (Olympus FV1000, Fig. 4(a)), with no need for a hardware retrofit. To quantitatively mimic the scattering effect at different imaging depths, we first imaged (Olympus XLPLN, 25×/1.05 water-dipping objective) the YFP-tagged neurons in a coronal slice of mouse brain (100-µm thickness, Thy1-YFP); we then imaged the same brain slice shielded by a series of homologous brain tissue slices with thicknesses of 100, 200 and 300 µm (Fig. 4(a)). Accordingly, we acquired four groups of 3D image stacks of neurons, serving as the ground-truth data (no shield, signal depth 0-100 µm), weak scattering data (100-µm shield, signal depth 100-200 µm), medium scattering data (200-µm shield, signal depth 200-300 µm), and strong scattering data (300-µm shield, signal depth 300-400 µm). In our ScatNet implementation, we paired the three groups of scattering images with the ground-truth images to construct a training system containing hierarchical scattering models (Fig. 4(a)). We then applied the well-trained ScatNet to the blurred images of neurons acquired at different scattering levels, i.e., with different shield depths (Figs. 4(b3), 4(b5), 4(b7)), and compared the results with the raw images (Figs. 4(b2), 4(b4), 4(b6)) as well as the ground truth (Fig. 4(b1)). The restored images show notably more details of the neuronal fibers than the ambiguous scattering inputs. Linecuts through individual neuronal fibers (Figs. 4(b1) to 4(b7)) resolved at 100 to 300-µm depth further confirmed the narrower FWHMs after ScatNet restoration (plots in Fig. 4(c)). Referring to the scattering-free ground truth, we also compared the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) values of the raw scattering and network-restored images across the entire imaging depth [22,23] (Figs. 4(d), 4(e)). These quantitative analyses verified the capability of ScatNet to enhance two-photon microscopy image quality for deep-tissue imaging. We thereby successfully demonstrated the application of ScatNet to signal recovery in a 300-µm turbid brain tissue (Fig. 5).
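For reference, the PSNR comparison can be computed in a few lines of NumPy, as below. The global single-window SSIM shown here is a simplified stand-in for the windowed SSIM presumably used for Fig. 4(d) (commonly computed with scikit-image), not the exact published metric:

```python
import numpy as np

def psnr(ground_truth, restored, data_range=1.0):
    """Peak signal-to-noise ratio (dB) against the scattering-free reference."""
    mse = np.mean((ground_truth - restored) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def ssim_global(a, b, data_range=1.0, k1=0.01, k2=0.03):
    """Simplified SSIM computed from global statistics of the whole volume
    (a single window), rather than the usual local sliding windows."""
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (va + vb + c2))
```

Both metrics assume the stacks are normalized to the same intensity range (here [0, 1]) and registered to the ground truth.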


Fig. 4. ScatNet restoration of neuronal signals in mouse brain slices (Thy1-YFP-M) imaged by TPEM. (a) Experimental design of the TPEM imaging for obtaining gradient scattering information by covering the sample with brain slices of different thicknesses. The network, trained on data encompassing gradient scattering information, is then able to recover signals suffering from different degrees of scattering. (b) Maximum intensity projections (MIPs) of the same 70 × 70 × 100 µm3 regions along the z- (top) and y-axis (bottom). (b1), (b2), (b4), (b6) show the raw imaging results through 0 (ground truth), 100, 200, and 300 µm of brain slices, respectively. (b3), (b5), (b7) show the results restored from the scattered signals in (b2), (b4), and (b6), respectively. (c) Intensity plots of linecuts through individual neuronal fibers resolved in b1 to b7. The comparative intensity profiles quantitatively verify the improved resolution as well as the high restoration fidelity of ScatNet for different degrees of scattering. (d), (e) Variation of the SSIM and PSNR values of the raw measurements and corresponding ScatNet restorations across the imaging depth. The ScatNet improvement is more obvious in deeper regions.



Fig. 5. Restoration of a 300-µm thick brain slice with neurons labelled by YFP (Thy1-YFP-M). (a), (b) 3D views of the raw and restored neurons in the same brain volume across entire 300-µm depth. Scale bar, 100 µm. (c) Lateral (xy) planes of the selected ROIs (dashed boxes in a and b) at 60 µm, 180 µm and 240 µm depth in the raw (upper row) and restoration (lower row) results, respectively. Scale bar, 80 µm. (d), (e) The 3D reconstructions of raw TPEM image and ScatNet-enabled TPEM image. Two pyramidal neurons from the deep regions (∼290 and ∼180 µm depth) in the raw and restored brain images are segmented (yellow and green color renderings) for comparison. Scale bar, 80 µm.


3.4 ScatNet restoration for TPEM image of a 300-µm brain slice

Besides the performance validation, we also applied our well-trained network to the 3D TPEM image of a 300-µm-thick brain slice, and it successfully restored the raw 3D images while preserving their structures.

The restoration results are shown and compared with the raw inputs in Fig. 5. We specifically compared planes at depths of 60, 180, and 240 µm, representing the superficial, middle, and deep layers in the raw and restored image stacks, to demonstrate the robustness of our model under different scattering conditions (Fig. 5(c)). Both the xy and xz reconstructed planes clearly reveal that ScatNet restoration maintained the original high-quality signals at the superficial layer while recovering the degraded signals at deep layers with increased resolution and contrast. We further segmented two pyramidal neurons from the deep regions of the raw and restored brain images (Figs. 5(d), 5(e)). The results verify that ScatNet restoration can provide substantially more detail for neuron tracing and is thus of practical value for potential applications.

4. Methods

4.1 Deep network structure

Restoration of scattered images is an inverse problem that aims to reconstruct a high-quality output image from a degraded input. In this work, we directly obtain the nonlinear input-output mapping using a deep neural network rather than a complicated optical scattering model. A U-Net structure (Fig. 6) is used here, as it has achieved good performance in various biomedical applications. Owing to elastic-deformation data augmentation, U-Net requires only a small number of label images and relatively short training times [14,24].
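A minimal Keras sketch of such a 3D U-Net is given below. The depth (two pooling levels), filter counts, and the sigmoid output are illustrative guesses, not the exact published configuration:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_scatnet(input_shape=(64, 64, 64, 1), base_filters=16):
    """Minimal 3D U-Net sketch: two encoder levels, a bottleneck, and two
    decoder levels with skip connections (Concatenate), mirroring Fig. 6."""
    inp = layers.Input(shape=input_shape)

    # encoder: 3x3x3 Conv + ReLU, then MaxPooling3D
    c1 = layers.Conv3D(base_filters, 3, padding="same", activation="relu")(inp)
    p1 = layers.MaxPooling3D(2)(c1)
    c2 = layers.Conv3D(base_filters * 2, 3, padding="same", activation="relu")(p1)
    p2 = layers.MaxPooling3D(2)(c2)

    # bottleneck
    b = layers.Conv3D(base_filters * 4, 3, padding="same", activation="relu")(p2)

    # decoder: UpSampling3D, concatenate the skip connection, then Conv
    u2 = layers.UpSampling3D(2)(b)
    d2 = layers.Conv3D(base_filters * 2, 3, padding="same",
                       activation="relu")(layers.Concatenate()([u2, c2]))
    u1 = layers.UpSampling3D(2)(d2)
    d1 = layers.Conv3D(base_filters, 3, padding="same",
                       activation="relu")(layers.Concatenate()([u1, c1]))

    out = layers.Conv3D(1, 1, padding="same", activation="sigmoid")(d1)
    return Model(inp, out)
```

The skip connections carry high-resolution encoder features directly to the decoder, which is what lets the network restore fine structure lost in the downsampling path.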


Fig. 6. Illustration of our U-Net-based network. The network is an encoder-decoder architecture with skip connections. "Conv" represents a convolutional layer; the 3×3×3 Conv layers use the Rectified Linear Unit (ReLU) as the activation function. "Max pool" is a MaxPooling3D layer, "Up Sample" is an UpSampling3D layer, and "Concat" denotes a concatenation operation.


4.2 Brain slices imaging

Before imaging, the sample brain slice (neurons labelled by Thy1-YFP) was mounted onto a glass slide. The packed sample was then carefully fixed on the microscope stage to prevent shifts caused by adding/removing scattering tissue slices. We used a commercial TPE microscope (Olympus FV1000, Olympus XLPLN, 25×/1.05 water-dipping objective) to image neurons in sample brain slices beneath different depths of scattering tissue. To obtain clear 3D volumes for network training, we first imaged the 100-µm-thick sample brain slices (frame size 512×512 pixels, step size 2 µm, 1×1×2 µm3 voxel). The imaging parameters for the ground-truth data were: 920-nm pulsed excitation with 10% power and 0-10% gain, corresponding to a power of 3-5 mW under the objective. Then, to obtain light-scattered, fuzzy 3D images for network training and validation, we added 1 to 3 blank brain slices (100-µm thickness each, no fluorescence labelling) onto the sample slice step by step, to simulate tissue scattering at depths from 0 to 400 µm. As the imaging depth of the sample signals increased with thicker blank tissue added, the laser power was gradually increased to obtain signals with sufficient SNR for subpixel registration with the ground truths. Finally, after 300 µm of blank tissue was added, the neuron signals became too weak to be processed for network training, indicating a maximum imaging depth of ∼400 µm in our experiment.

5. Discussion

Unlike classical approaches that introduce extra sophisticated optics or algorithms to suppress light scattering, our data-driven method is purely computational, based on the powerful prediction capability of deep neural networks. In addition, ScatNet provides simple, fast image restoration without laborious manual operations. In our applications, we successfully restored the scattering-induced blurring in light-sheet microscopy images of live Drosophila embryos, thereby allowing in vivo observation of embryo development with a basic light-sheet microscope setup. We also demonstrated that ScatNet substantially increases the imaging depth of TPEM in intact mouse brain without involving adaptive optics. The performance of ScatNet restoration, including the achieved resolution and signal accuracy, was also quantitatively analyzed and verified to be far better than the raw scattering inputs. Despite these advantages, as a proof of concept, our network method remains highly data dependent, relying on precise registration of experimental scattering and scattering-free data, which is difficult to obtain in some applications. We envision that this could be improved by recent advances in Generative Adversarial Networks (GANs), which can correlate unpaired images from the superficial and deep sides, or by the development of image degradation algorithms that generate intrinsically aligned synthetic scattering images from scattering-free measurements. Our method also still has to compromise between higher recovery fidelity and stronger scattering-induced degradation. We expect this could be addressed with the continuing development of deep learning, allowing more versatile architectures and more powerful training pipelines.

6. Conclusion

In summary, we have proposed a deep-learning-based method that can restore 3D microscopy images degraded by light scattering in biological tissues. The versatile enhancements provided by our method in wide-field, light-sheet, and two-photon excitation microscopes have been verified by imaging various types of biological tissues. Our method thus provides a paradigm for readily addressing the challenge of light scattering, which widely exists in deep-tissue imaging. Furthermore, through computational reconstruction of a 3D image with drastically improved quality, more in-depth biological applications, e.g., imaging beyond the superficial cortex area, or more accurate image-based analyses, e.g., region segmentation/annotation, could be realized in a simpler and more efficient way.

Appendix A: ScatNet on USAF resolution target with wide-field microscope

We also verified our approach on the 2D image of a standard USAF resolution target (Thorlabs R3L3S1N). A wide-field microscope (Olympus BX51, 4×/0.1 objective) was used to image the resolution target with/without coverage by a layer of tissue phantom (frosted tape), yielding light-scattered and corresponding scattering-free image pairs, respectively, with a highest resolution of 114.0 line pairs per mm (lpm). We then used ∼80% of these automatically aligned image pairs for 2D ScatNet training and the remaining ∼20% for validation. Before network implementation, the training images were cropped to 128×128 pixels and a background filter was applied to select the informative images. Image rotations and shears were applied to these pre-processed images to augment the data to 3750 image pairs. After the network training converged, we validated its performance on the testing data (the remaining 20%), which the network had not seen before. The results showed that our ScatNet was capable of restoring the line pairs (32.0 to 114.0 lpm) that were all blurred and distorted in the fuzzy raw images owing to strong, non-uniform light scattering by the tissue phantom (Fig. 7).
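The rotation-and-shear augmentation can be sketched as below. This is a hypothetical NumPy illustration restricted to 90°-step rotations and an integer-pixel shear, chosen so the blurred/label pair stays perfectly aligned without interpolation; the actual augmentation parameters were not published:

```python
import numpy as np

def augment_pair(blurred, label, rot_k=1, shear_px=2):
    """Apply the same 90°-step rotation and a simple horizontal integer
    shear to a (blurred, label) patch pair, keeping the two aligned."""
    def transform(img):
        img = np.rot90(img, k=rot_k)           # rotation in 90° steps
        out = np.empty_like(img)
        for r in range(img.shape[0]):          # row-wise circular shear
            out[r] = np.roll(img[r], (r * shear_px) % img.shape[1])
        return out
    return transform(blurred), transform(label)
```

Applying several (rot_k, shear_px) combinations to each pair multiplies the dataset size, which is how a modest set of crops can be expanded to thousands of training pairs.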


Fig. 7. ScatNet restoration of 2D image of USAF resolution target acquired through a layer of tissue phantom. Raw (tissue phantom covered), ScatNet-restoration and ground-truth (tape removed) images of the same area in the USAF resolution target, shown as top, middle, and bottom parts, respectively. (a) Comparative results of single line pairs in group 5 (32.0 to 57.0 lpm) before and after network restoration. (b) The highest resolution of single line pairs (group 6, 64.0 to 114.0 lpm) can be restored by our ScatNet. (c), (d) Intensity profiles of the lines through the 3rd and 6th line pairs in group 6, respectively. The profiles of ScatNet restoration fit those of ground-truths well, indicating sufficiently high resolution as well as restoration fidelity.


Appendix B: ScatNet restoration of single-view and two-view-fused Drosophila embryo images

We further compared the single-view input, the two-view-fused input, and their corresponding ScatNet outputs (Fig. 8). For the single-view input, ScatNet successfully restored the completely blurred nuclei at the deepest side of the embryo. The result was comparable to the two-view-fused input and its restored output. Since the two-view-fused input originally had better image quality, its ScatNet restoration unsurprisingly showed less improvement. This indicates that a trained ScatNet can flexibly restore signals with different degrees of degradation.


Fig. 8. Single-view LSFM input, two-view-fused LSFM input, and their ScatNet restorations. Scale bar, 40 µm.


Appendix C: Limitations of ScatNet and comparison of deep-learning methods

The details of network training are summarized in Table 1. Our approach took ∼12 s to restore a new dataset of a typical volume size of 512 × 512 × 128 voxels using a single Nvidia 2080Ti GPU. Like many data-driven CNN approaches, the performance of ScatNet depends strongly on the quality of the training dataset and of the input data. Accordingly, two major limitations were identified for ScatNet: failure when recovering completely blurred inputs, and poor performance caused by misalignment of the training image pairs (Fig. 9).
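Inference on a full volume proceeds patch by patch, with the restored patches stitched back in place (steps 8-10 in Fig. 1). The sketch below uses `predict` as a stand-in for the trained network's forward pass; the non-overlapping tiling is our own simplification (overlapping tiles with blending are often used in practice to hide seam artifacts):

```python
import numpy as np

def restore_volume(volume, predict, patch=64):
    """Tile a (Z, Y, X) volume into patches, run the network's `predict`
    function on each, and stitch the outputs back into a full volume."""
    out = np.zeros_like(volume, dtype=np.float32)
    z, y, x = volume.shape
    for i in range(0, z, patch):
        for j in range(0, y, patch):
            for k in range(0, x, patch):
                block = volume[i:i + patch, j:j + patch, k:k + patch]
                out[i:i + patch, j:j + patch, k:k + patch] = predict(block)
    return out
```

Because edge patches are simply clipped by the slicing, the function also handles volumes whose sides are not exact multiples of the patch size, as long as `predict` preserves the input shape.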


Fig. 9. Failure cases of ScatNet restoration. (a) Failed restorations of extreme inputs that are completely blurred by severe tissue scattering. (b) Restoration errors due to the lack of accurate registration. Without additional registration, noticeable misalignment between the blurred and label images (arrow indicators) leads to poor network training, which consequently causes incorrect model inference (Restoration 1). After the training data are well aligned by image registration, a much improved result (Restoration 2) is generated by the correctly trained model.



Table 1. Overview of the training details for the above-mentioned samples.

We also noted that network parameters, including the architecture and the loss function, can affect the final restored results. Here, we selected a region of interest (ROI) of ∼70 × 70 × 100 µm3 in the 100-µm-thick Thy1-YFP-M brain slice under the 200-µm shield to evaluate the accuracy and generalization capability of several deep-learning models. Using the same loss function (mean squared error, MSE, or cross entropy, CE), we compared the performance of ResNet with our U-Net-based ScatNet on the same brain neuron dataset, as shown in Figs. 10(c)–10(f). The observed difference possibly arises from the influence of long versus short skip connections: long skip connections provide a shortcut for gradient flow to shallow layers [25]. On the other hand, using the same network architecture, either the U-Net-based ScatNet or ResNet, we found that MSE might not be suitable for deep-tissue scattering tasks and that the CE loss performs better (Figs. 10(c), 10(d) vs. 10(e), 10(f)). Compared to pure denoising tasks, deep-tissue scattering is more complex owing to the multiple deflections of light randomly distributed across the whole image. In this case, the CE loss seeks the global distribution with maximum likelihood, which is more effective than the conventional MSE [26] (Fig. 10(f)).
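The two losses compared here can be written in a few lines of NumPy, as below. This is our illustration; since the paper does not spell out its CE formulation for continuous intensities, a pixel-wise binary cross entropy on [0, 1]-normalized intensities is assumed:

```python
import numpy as np

def mse_loss(pred, target):
    """Mean squared error between restored and ground-truth intensities."""
    return np.mean((pred - target) ** 2)

def bce_loss(pred, target, eps=1e-7):
    """Pixel-wise binary cross entropy on intensities normalized to [0, 1].
    Predictions are clipped away from 0 and 1 to keep the logs finite."""
    pred = np.clip(pred, eps, 1.0 - eps)
    return -np.mean(target * np.log(pred) + (1.0 - target) * np.log(1.0 - pred))
```

MSE penalizes per-pixel intensity error directly, while the cross entropy compares the predicted and target intensity distributions, which is one plausible reading of why it copes better with the globally distributed degradation caused by multiple scattering.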


Fig. 10. Comparison of the results restored by different deep-learning models with different parameters. (a)-(f) Axial (xz) maximum intensity projections (MIPs) of the same selected ROIs of the raw scattering image, the ground-truth scattering-free image, the restoration by ResNet with the MSE loss, the restoration by ScatNet with the MSE loss, the restoration by ResNet with the CE loss, and the restoration by ScatNet with the CE loss, respectively. Scale bar, 20 µm.


Funding

Junior Thousand Talents Program of China; Innovation Fund of WNLO (2019); National Key Research and Development Program of China (2017YFA0700501); National Natural Science Foundation of China (21874052, 61860206009).

Acknowledgments

We thank Hao Zhang for the help on the code implementation.

Disclosures

The authors declare no conflicts of interest.

References

1. A. L. Eberle, O. Selchow, M. Thaler, D. Zeidler, and R. Kirmse, “Mission (im)possible - mapping the brain becomes a reality,” Microscopy (Tokyo) 64(1), 45–55 (2015). [CrossRef]  

2. W. Yang and R. Yuste, “In vivo imaging of neural activity,” Nat. Methods 14(4), 349–359 (2017). [CrossRef]  

3. P. Frasconi, L. Silvestri, P. Soda, R. Cortini, F. S. Pavone, and G. Iannello, “Large-scale automated identification of mouse brain cells in confocal light sheet microscopy images,” Bioinformatics 30(17), i587–i593 (2014). [CrossRef]  

4. M. B. Ahrens, M. B. Orger, D. N. Robson, J. M. Li, and P. J. Keller, “Whole-brain functional imaging at cellular resolution using light-sheet microscopy,” Nat. Methods 10(5), 413–420 (2013). [CrossRef]  

5. E. A. Susaki, K. Tainaka, D. Perrin, H. Yukinaga, A. Kuno, and H. R. Ueda, “Advanced CUBIC protocols for whole-brain and whole-body clearing and imaging,” Nat. Protoc. 10(11), 1709–1727 (2015). [CrossRef]  

6. C. Pan, R. Cai, F. P. Quacquarelli, A. Ghasemigharagoz, A. Lourbopoulos, P. Matryba, N. Plesnila, M. Dichgans, F. Hellal, and A. Ertürk, “Shrinkage-mediated imaging of entire organs and organisms using uDISCO,” Nat. Methods 13(10), 859–867 (2016). [CrossRef]  

7. Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature 521(7553), 436–444 (2015). [CrossRef]  

8. F. Helmchen and W. Denk, “Deep tissue two-photon microscopy,” Nat. Methods 2(12), 932–940 (2005). [CrossRef]  

9. A. N. Yaroslavsky, P. C. Schulze, I. V. Yaroslavsky, R. Schober, F. Ulrich, and H.-J. Schwarzmaier, “Optical properties of selected native and coagulated human brain tissues in vitro in the visible and near infrared spectral range,” Phys. Med. Biol. 47(12), 2059–2073 (2002). [CrossRef]  

10. E. Papagiakoumou, A. Bègue, B. Leshem, O. Schwartz, B. M. Stell, J. Bradley, D. Oron, and V. Emiliani, “Functional patterned multiphoton excitation deep inside scattering tissue,” Nat. Photonics 7(4), 274–278 (2013). [CrossRef]  

11. L. Sherman, J. Y. Ye, O. Albert, and T. B. Norris, “Adaptive correction of depth-induced aberrations in multiphoton scanning microscopy using a deformable mirror,” J. Microsc. 206(1), 65–71 (2002). [CrossRef]  

12. J. Bertolotti, E. G. Van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491(7423), 232–234 (2012). [CrossRef]  

13. C. Belthangady and L. A. Royer, “Applications, promises, and pitfalls of deep learning for fluorescence image reconstruction,” Nat. Methods 16(12), 1215–1225 (2019). [CrossRef]  

14. M. Weigert, U. Schmidt, T. Boothe, A. Muller, A. Dibrov, A. Jain, B. Wilhelm, D. Schmidt, C. Broaddus, S. Culley, M. Rocha-Martins, F. Segovia-Miranda, C. Norden, R. Henriques, M. Zerial, M. Solimena, J. Rink, P. Tomancak, L. Royer, F. Jug, and E. W. Myers, “Content-aware image restoration: pushing the limits of fluorescence microscopy,” Nat. Methods 15(12), 1090–1097 (2018). [CrossRef]  

15. A. Shajkofci and M. Liebling, “Semi-blind spatially-variant deconvolution in optical microscopy with local point spread function estimation by use of convolutional neural networks,” in 2018 25th IEEE International Conference on Image Processing (ICIP) (IEEE, 2018), pp. 3818–3822.

16. H. Wang, Y. Rivenson, Y. Jin, Z. Wei, R. Gao, H. Gunaydin, L. A. Bentolila, C. Kural, and A. Ozcan, “Deep learning enables cross-modality super-resolution in fluorescence microscopy,” Nat. Methods 16(1), 103–110 (2019). [CrossRef]  

17. H. Zhang, C. Fang, X. Xie, Y. Yang, W. Mei, D. Jin, and P. Fei, “High-throughput, high-resolution deep learning microscopy based on registration-free generative adversarial network,” Biomed. Opt. Express 10(3), 1044–1063 (2019). [CrossRef]  

18. E. M. Christiansen, S. J. Yang, D. M. Ando, A. Javaherian, G. Skibinski, S. Lipnick, E. Mount, A. O’Neil, K. Shah, A. K. Lee, P. Goyal, W. Fedus, R. Poplin, A. Esteva, M. Berndl, L. L. Rubin, P. Nelson, and S. Finkbeiner, “In Silico Labeling: Predicting Fluorescent Labels in Unlabeled Images,” Cell 173(3), 792–803.e19 (2018). [CrossRef]  

19. C. Ounkomol, S. Seshamani, M. M. Maleckar, F. Collman, and G. R. Johnson, “Label-free prediction of three-dimensional fluorescence images from transmitted-light microscopy,” Nat. Methods 15(11), 917–920 (2018). [CrossRef]  

20. S. Preibisch, S. Saalfeld, J. Schindelin, and P. Tomancak, “Software for bead-based registration of selective plane illumination microscopy data,” Nat. Methods 7(6), 418–419 (2010). [CrossRef]  

21. C. Fang, T. Chu, T. Yu, Y. Huang, Y. Li, P. Wan, W. Feng, X. Wang, W. Mei, D. Zhu, and P. Fei, “Minutes-timescale 3D isotropic imaging of entire organs at subcellular resolution by content-aware compressed-sensing light-sheet microscopy,” bioRxiv, 825901 (2020).

22. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image Quality Assessment: From Error Visibility to Structural Similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004). [CrossRef]  

23. I. Arganda-Carreras, V. Kaynig, C. Rueden, K. W. Eliceiri, J. Schindelin, A. Cardona, and H. Sebastian Seung, “Trainable Weka Segmentation: a machine learning tool for microscopy pixel classification,” Bioinformatics btx180 (2017).

24. O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional Networks for Biomedical Image Segmentation,” in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, N. Navab, J. Hornegger, W. M. Wells, and A. F. Frangi, eds. (Springer International Publishing, Cham, 2015), pp. 234–241.

25. M. Drozdzal, E. Vorontsov, G. Chartrand, S. Kadoury, and C. Pal, “The importance of skip connections in biomedical image segmentation,” in Deep Learning and Data Labeling for Medical Applications (Springer, 2016), pp. 179–187.

26. R. Puetter, T. Gosnell, and A. Yahil, “Digital image reconstruction: Deblurring and denoising,” Annu. Rev. Astron. Astrophys. 43(1), 139–194 (2005). [CrossRef]  

References

  • View by:

  1. A. L. Eberle, O. Selchow, M. Thaler, D. Zeidler, and R. Kirmse, “Mission (im)possible - mapping the brain becomes a reality,” Microscopy (Tokyo) 64(1), 45–55 (2015).
    [Crossref]
  2. W. Yang and R. Yuste, “In vivo imaging of neural activity,” Nat. Methods 14(4), 349–359 (2017).
    [Crossref]
  3. P. Frasconi, L. Silvestri, P. Soda, R. Cortini, F. S. Pavone, and G. Iannello, “Large-scale automated identification of mouse brain cells in confocal light sheet microscopy images,” Bioinformatics 30(17), i587–i593 (2014).
    [Crossref]
  4. M. B. Ahrens, M. B. Orger, D. N. Robson, J. M. Li, and P. J. Keller, “Whole-brain functional imaging at cellular resolution using light-sheet microscopy,” Nat. Methods 10(5), 413–420 (2013).
    [Crossref]
  5. E. A. Susaki, K. Tainaka, D. Perrin, H. Yukinaga, A. Kuno, and H. R. Ueda, “Advanced CUBIC protocols for whole-brain and whole-body clearing and imaging,” Nat. Protoc. 10(11), 1709–1727 (2015).
    [Crossref]
  6. C. Pan, R. Cai, F. P. Quacquarelli, A. Ghasemigharagoz, A. Lourbopoulos, P. Matryba, N. Plesnila, M. Dichgans, F. Hellal, and A. Ertürk, “Shrinkage-mediated imaging of entire organs and organisms using uDISCO,” Nat. Methods 13(10), 859–867 (2016).
    [Crossref]
  7. Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature 521(7553), 436–444 (2015).
    [Crossref]
  8. F. Helmchen and W. Denk, “Deep tissue two-photon microscopy,” Nat. Methods 2(12), 932–940 (2005).
    [Crossref]
  9. A. N. Yaroslavsky, P. C. Schulze, I. V. Yaroslavsky, and R. Schober, H. J. S. J. P. i. Medicine, and Biology, “Optical properties of selected native and coagulated human brain tissues in vitro in the visible and near infrared spectral range,” Phys. Med. Biol. 47(12), 2059–2073 (2002).
    [Crossref]
  10. E. Papagiakoumou, A. Bègue, B. Leshem, O. Schwartz, B. M. Stell, J. Bradley, D. Oron, and V. Emiliani, “Functional patterned multiphoton excitation deep inside scattering tissue,” Nat. Photonics 7(4), 274–278 (2013).
    [Crossref]
  11. L. Sherman, J. Ye, O. Albert, and T. J. J. O. M. Norris, “Adaptive correction of depth-induced aberrations in multiphoton scanning microscopy using a deformable mirror,” J. Microsc. 206(1), 65–71 (2002).
    [Crossref]
  12. J. Bertolotti, E. G. Van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491(7423), 232–234 (2012).
    [Crossref]
  13. C. Belthangady and L. A. Royer, “Applications, promises, and pitfalls of deep learning for fluorescence image reconstruction,” Nat. Methods 16(12), 1215–1225 (2019).
    [Crossref]
  14. M. Weigert, U. Schmidt, T. Boothe, A. Muller, A. Dibrov, A. Jain, B. Wilhelm, D. Schmidt, C. Broaddus, S. Culley, M. Rocha-Martins, F. Segovia-Miranda, C. Norden, R. Henriques, M. Zerial, M. Solimena, J. Rink, P. Tomancak, L. Royer, F. Jug, and E. W. Myers, “Content-aware image restoration: pushing the limits of fluorescence microscopy,” Nat. Methods 15(12), 1090–1097 (2018).
    [Crossref]
  15. A. Shajkofci and M. Liebling, “Semi-blind spatially-variant deconvolution in optical microscopy with local point spread function estimation by use of convolutional neural networks,” in 2018 25th IEEE International Conference on Image Processing (ICIP) (IEEE, 2018), pp. 3818–3822.
  16. H. Wang, Y. Rivenson, Y. Jin, Z. Wei, R. Gao, H. Gunaydin, L. A. Bentolila, C. Kural, and A. Ozcan, “Deep learning enables cross-modality super-resolution in fluorescence microscopy,” Nat. Methods 16(1), 103–110 (2019).
    [Crossref]
  17. H. Zhang, C. Fang, X. Xie, Y. Yang, W. Mei, D. Jin, and P. Fei, “High-throughput, high-resolution deep learning microscopy based on registration-free generative adversarial network,” Biomed. Opt. Express 10(3), 1044–1063 (2019).
    [Crossref]
  18. E. M. Christiansen, S. J. Yang, D. M. Ando, A. Javaherian, G. Skibinski, S. Lipnick, E. Mount, A. O’Neil, K. Shah, A. K. Lee, P. Goyal, W. Fedus, R. Poplin, A. Esteva, M. Berndl, L. L. Rubin, P. Nelson, and S. Finkbeiner, “In Silico Labeling: Predicting Fluorescent Labels in Unlabeled Images,” Cell 173(3), 792–803.e19 (2018).
    [Crossref]
  19. C. Ounkomol, S. Seshamani, M. M. Maleckar, F. Collman, and G. R. Johnson, “Label-free prediction of three-dimensional fluorescence images from transmitted-light microscopy,” Nat. Methods 15(11), 917–920 (2018).
    [Crossref]
  20. S. Preibisch, S. Saalfeld, J. Schindelin, and P. Tomancak, “Software for bead-based registration of selective plane illumination microscopy data,” Nat. Methods 7(6), 418–419 (2010).
    [Crossref]
  21. C. Fang, T. Chu, T. Yu, Y. Huang, Y. Li, P. Wan, W. Feng, X. Wang, W. Mei, D. Zhu, and P. Fei, “Minutes-timescale 3D isotropic imaging of entire organs at subcellular resolution by content-aware compressed-sensing light-sheet microscopy,” bioRxiv, 825901 (2020).
  22. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. J. I. T. I. P. Simoncelli, “Image Quality Assessment: From Error Visibility to Structural Similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004).
    [Crossref]
  23. I. Arganda-Carreras, V. Kaynig, C. Rueden, K. W. Eliceiri, J. Schindelin, A. Cardona, and H. J. B. Sebastian Seung, “Trainable Weka Segmentation: a machine learning tool for microscopy pixel classification,” Bioinformatics btx180 (2017).
  24. O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional Networks for Biomedical Image Segmentation,” in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, N. Navab, J. Hornegger, W. M. Wells, and A. F. Frangi, eds. (Springer International Publishing, Cham, 2015), pp. 234–241.
  25. M. Drozdzal, E. Vorontsov, G. Chartrand, S. Kadoury, and C. Pal, “The importance of skip connections in biomedical image segmentation,” in Deep Learning and Data Labeling for Medical Applications (Springer, 2016), pp. 179–187.
  26. R. Puetter, T. Gosnell, and A. J. A. R. A. A. Yahil, “Digital image reconstruction: Deblurring and denoising,” Annu. Rev. Astron. Astrophys. 43(1), 139–194 (2005).
    [Crossref]

2019 (3)

C. Belthangady and L. A. Royer, “Applications, promises, and pitfalls of deep learning for fluorescence image reconstruction,” Nat. Methods 16(12), 1215–1225 (2019).
[Crossref]

H. Wang, Y. Rivenson, Y. Jin, Z. Wei, R. Gao, H. Gunaydin, L. A. Bentolila, C. Kural, and A. Ozcan, “Deep learning enables cross-modality super-resolution in fluorescence microscopy,” Nat. Methods 16(1), 103–110 (2019).
[Crossref]

H. Zhang, C. Fang, X. Xie, Y. Yang, W. Mei, D. Jin, and P. Fei, “High-throughput, high-resolution deep learning microscopy based on registration-free generative adversarial network,” Biomed. Opt. Express 10(3), 1044–1063 (2019).
[Crossref]

2018 (3)

E. M. Christiansen, S. J. Yang, D. M. Ando, A. Javaherian, G. Skibinski, S. Lipnick, E. Mount, A. O’Neil, K. Shah, A. K. Lee, P. Goyal, W. Fedus, R. Poplin, A. Esteva, M. Berndl, L. L. Rubin, P. Nelson, and S. Finkbeiner, “In Silico Labeling: Predicting Fluorescent Labels in Unlabeled Images,” Cell 173(3), 792–803.e19 (2018).
[Crossref]

C. Ounkomol, S. Seshamani, M. M. Maleckar, F. Collman, and G. R. Johnson, “Label-free prediction of three-dimensional fluorescence images from transmitted-light microscopy,” Nat. Methods 15(11), 917–920 (2018).
[Crossref]

M. Weigert, U. Schmidt, T. Boothe, A. Muller, A. Dibrov, A. Jain, B. Wilhelm, D. Schmidt, C. Broaddus, S. Culley, M. Rocha-Martins, F. Segovia-Miranda, C. Norden, R. Henriques, M. Zerial, M. Solimena, J. Rink, P. Tomancak, L. Royer, F. Jug, and E. W. Myers, “Content-aware image restoration: pushing the limits of fluorescence microscopy,” Nat. Methods 15(12), 1090–1097 (2018).
[Crossref]

2017 (1)

W. Yang and R. Yuste, “In vivo imaging of neural activity,” Nat. Methods 14(4), 349–359 (2017).
[Crossref]

2016 (1)

C. Pan, R. Cai, F. P. Quacquarelli, A. Ghasemigharagoz, A. Lourbopoulos, P. Matryba, N. Plesnila, M. Dichgans, F. Hellal, and A. Ertürk, “Shrinkage-mediated imaging of entire organs and organisms using uDISCO,” Nat. Methods 13(10), 859–867 (2016).
[Crossref]

2015 (3)

Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature 521(7553), 436–444 (2015).
[Crossref]

A. L. Eberle, O. Selchow, M. Thaler, D. Zeidler, and R. Kirmse, “Mission (im)possible - mapping the brain becomes a reality,” Microscopy (Tokyo) 64(1), 45–55 (2015).
[Crossref]

E. A. Susaki, K. Tainaka, D. Perrin, H. Yukinaga, A. Kuno, and H. R. Ueda, “Advanced CUBIC protocols for whole-brain and whole-body clearing and imaging,” Nat. Protoc. 10(11), 1709–1727 (2015).
[Crossref]

2014 (1)

P. Frasconi, L. Silvestri, P. Soda, R. Cortini, F. S. Pavone, and G. Iannello, “Large-scale automated identification of mouse brain cells in confocal light sheet microscopy images,” Bioinformatics 30(17), i587–i593 (2014).
[Crossref]

2013 (2)

M. B. Ahrens, M. B. Orger, D. N. Robson, J. M. Li, and P. J. Keller, “Whole-brain functional imaging at cellular resolution using light-sheet microscopy,” Nat. Methods 10(5), 413–420 (2013).
[Crossref]

E. Papagiakoumou, A. Bègue, B. Leshem, O. Schwartz, B. M. Stell, J. Bradley, D. Oron, and V. Emiliani, “Functional patterned multiphoton excitation deep inside scattering tissue,” Nat. Photonics 7(4), 274–278 (2013).
[Crossref]

2012 (1)

J. Bertolotti, E. G. Van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491(7423), 232–234 (2012).
[Crossref]

2010 (1)

S. Preibisch, S. Saalfeld, J. Schindelin, and P. Tomancak, “Software for bead-based registration of selective plane illumination microscopy data,” Nat. Methods 7(6), 418–419 (2010).
[Crossref]

2005 (2)

F. Helmchen and W. Denk, “Deep tissue two-photon microscopy,” Nat. Methods 2(12), 932–940 (2005).
[Crossref]

R. Puetter, T. Gosnell, and A. J. A. R. A. A. Yahil, “Digital image reconstruction: Deblurring and denoising,” Annu. Rev. Astron. Astrophys. 43(1), 139–194 (2005).
[Crossref]

2004 (1)

Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. J. I. T. I. P. Simoncelli, “Image Quality Assessment: From Error Visibility to Structural Similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004).
[Crossref]

2002 (2)

L. Sherman, J. Ye, O. Albert, and T. J. J. O. M. Norris, “Adaptive correction of depth-induced aberrations in multiphoton scanning microscopy using a deformable mirror,” J. Microsc. 206(1), 65–71 (2002).
[Crossref]

A. N. Yaroslavsky, P. C. Schulze, I. V. Yaroslavsky, and R. Schober, H. J. S. J. P. i. Medicine, and Biology, “Optical properties of selected native and coagulated human brain tissues in vitro in the visible and near infrared spectral range,” Phys. Med. Biol. 47(12), 2059–2073 (2002).
[Crossref]

Ahrens, M. B.

M. B. Ahrens, M. B. Orger, D. N. Robson, J. M. Li, and P. J. Keller, “Whole-brain functional imaging at cellular resolution using light-sheet microscopy,” Nat. Methods 10(5), 413–420 (2013).
[Crossref]

Albert, O.

L. Sherman, J. Ye, O. Albert, and T. J. J. O. M. Norris, “Adaptive correction of depth-induced aberrations in multiphoton scanning microscopy using a deformable mirror,” J. Microsc. 206(1), 65–71 (2002).
[Crossref]

Ando, D. M.

E. M. Christiansen, S. J. Yang, D. M. Ando, A. Javaherian, G. Skibinski, S. Lipnick, E. Mount, A. O’Neil, K. Shah, A. K. Lee, P. Goyal, W. Fedus, R. Poplin, A. Esteva, M. Berndl, L. L. Rubin, P. Nelson, and S. Finkbeiner, “In Silico Labeling: Predicting Fluorescent Labels in Unlabeled Images,” Cell 173(3), 792–803.e19 (2018).
[Crossref]

Arganda-Carreras, I.

I. Arganda-Carreras, V. Kaynig, C. Rueden, K. W. Eliceiri, J. Schindelin, A. Cardona, and H. J. B. Sebastian Seung, “Trainable Weka Segmentation: a machine learning tool for microscopy pixel classification,” Bioinformatics btx180 (2017).

Bègue, A.

E. Papagiakoumou, A. Bègue, B. Leshem, O. Schwartz, B. M. Stell, J. Bradley, D. Oron, and V. Emiliani, “Functional patterned multiphoton excitation deep inside scattering tissue,” Nat. Photonics 7(4), 274–278 (2013).
[Crossref]

Belthangady, C.

C. Belthangady and L. A. Royer, “Applications, promises, and pitfalls of deep learning for fluorescence image reconstruction,” Nat. Methods 16(12), 1215–1225 (2019).
[Crossref]

Bengio, Y.

Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature 521(7553), 436–444 (2015).
[Crossref]

Bentolila, L. A.

H. Wang, Y. Rivenson, Y. Jin, Z. Wei, R. Gao, H. Gunaydin, L. A. Bentolila, C. Kural, and A. Ozcan, “Deep learning enables cross-modality super-resolution in fluorescence microscopy,” Nat. Methods 16(1), 103–110 (2019).
[Crossref]

Berndl, M.

E. M. Christiansen, S. J. Yang, D. M. Ando, A. Javaherian, G. Skibinski, S. Lipnick, E. Mount, A. O’Neil, K. Shah, A. K. Lee, P. Goyal, W. Fedus, R. Poplin, A. Esteva, M. Berndl, L. L. Rubin, P. Nelson, and S. Finkbeiner, “In Silico Labeling: Predicting Fluorescent Labels in Unlabeled Images,” Cell 173(3), 792–803.e19 (2018).
[Crossref]

Bertolotti, J.

J. Bertolotti, E. G. Van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491(7423), 232–234 (2012).
[Crossref]

Blum, C.

J. Bertolotti, E. G. Van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491(7423), 232–234 (2012).
[Crossref]

Boothe, T.

M. Weigert, U. Schmidt, T. Boothe, A. Muller, A. Dibrov, A. Jain, B. Wilhelm, D. Schmidt, C. Broaddus, S. Culley, M. Rocha-Martins, F. Segovia-Miranda, C. Norden, R. Henriques, M. Zerial, M. Solimena, J. Rink, P. Tomancak, L. Royer, F. Jug, and E. W. Myers, “Content-aware image restoration: pushing the limits of fluorescence microscopy,” Nat. Methods 15(12), 1090–1097 (2018).
[Crossref]

Bovik, A. C.

Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. J. I. T. I. P. Simoncelli, “Image Quality Assessment: From Error Visibility to Structural Similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004).
[Crossref]

Bradley, J.

E. Papagiakoumou, A. Bègue, B. Leshem, O. Schwartz, B. M. Stell, J. Bradley, D. Oron, and V. Emiliani, “Functional patterned multiphoton excitation deep inside scattering tissue,” Nat. Photonics 7(4), 274–278 (2013).
[Crossref]

Broaddus, C.

M. Weigert, U. Schmidt, T. Boothe, A. Muller, A. Dibrov, A. Jain, B. Wilhelm, D. Schmidt, C. Broaddus, S. Culley, M. Rocha-Martins, F. Segovia-Miranda, C. Norden, R. Henriques, M. Zerial, M. Solimena, J. Rink, P. Tomancak, L. Royer, F. Jug, and E. W. Myers, “Content-aware image restoration: pushing the limits of fluorescence microscopy,” Nat. Methods 15(12), 1090–1097 (2018).
[Crossref]

Brox, T.

O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional Networks for Biomedical Image Segmentation,” in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, N. Navab, J. Hornegger, W. M. Wells, and A. F. Frangi, eds. (Springer International Publishing, Cham, 2015), pp. 234–241.

Cai, R.

C. Pan, R. Cai, F. P. Quacquarelli, A. Ghasemigharagoz, A. Lourbopoulos, P. Matryba, N. Plesnila, M. Dichgans, F. Hellal, and A. Ertürk, “Shrinkage-mediated imaging of entire organs and organisms using uDISCO,” Nat. Methods 13(10), 859–867 (2016).
[Crossref]

Cardona, A.

I. Arganda-Carreras, V. Kaynig, C. Rueden, K. W. Eliceiri, J. Schindelin, A. Cardona, and H. J. B. Sebastian Seung, “Trainable Weka Segmentation: a machine learning tool for microscopy pixel classification,” Bioinformatics btx180 (2017).

Chartrand, G.

M. Drozdzal, E. Vorontsov, G. Chartrand, S. Kadoury, and C. Pal, “The importance of skip connections in biomedical image segmentation,” in Deep Learning and Data Labeling for Medical Applications (Springer, 2016), pp. 179–187.

Christiansen, E. M.

E. M. Christiansen, S. J. Yang, D. M. Ando, A. Javaherian, G. Skibinski, S. Lipnick, E. Mount, A. O’Neil, K. Shah, A. K. Lee, P. Goyal, W. Fedus, R. Poplin, A. Esteva, M. Berndl, L. L. Rubin, P. Nelson, and S. Finkbeiner, “In Silico Labeling: Predicting Fluorescent Labels in Unlabeled Images,” Cell 173(3), 792–803.e19 (2018).
[Crossref]

Chu, T.

C. Fang, T. Chu, T. Yu, Y. Huang, Y. Li, P. Wan, W. Feng, X. Wang, W. Mei, D. Zhu, and P. Fei, “Minutes-timescale 3D isotropic imaging of entire organs at subcellular resolution by content-aware compressed-sensing light-sheet microscopy,” bioRxiv, 825901 (2020).

Collman, F.

C. Ounkomol, S. Seshamani, M. M. Maleckar, F. Collman, and G. R. Johnson, “Label-free prediction of three-dimensional fluorescence images from transmitted-light microscopy,” Nat. Methods 15(11), 917–920 (2018).
[Crossref]

Cortini, R.

P. Frasconi, L. Silvestri, P. Soda, R. Cortini, F. S. Pavone, and G. Iannello, “Large-scale automated identification of mouse brain cells in confocal light sheet microscopy images,” Bioinformatics 30(17), i587–i593 (2014).
[Crossref]

Culley, S.

M. Weigert, U. Schmidt, T. Boothe, A. Muller, A. Dibrov, A. Jain, B. Wilhelm, D. Schmidt, C. Broaddus, S. Culley, M. Rocha-Martins, F. Segovia-Miranda, C. Norden, R. Henriques, M. Zerial, M. Solimena, J. Rink, P. Tomancak, L. Royer, F. Jug, and E. W. Myers, “Content-aware image restoration: pushing the limits of fluorescence microscopy,” Nat. Methods 15(12), 1090–1097 (2018).
[Crossref]

Denk, W.

F. Helmchen and W. Denk, “Deep tissue two-photon microscopy,” Nat. Methods 2(12), 932–940 (2005).
[Crossref]

Dibrov, A.

M. Weigert, U. Schmidt, T. Boothe, A. Muller, A. Dibrov, A. Jain, B. Wilhelm, D. Schmidt, C. Broaddus, S. Culley, M. Rocha-Martins, F. Segovia-Miranda, C. Norden, R. Henriques, M. Zerial, M. Solimena, J. Rink, P. Tomancak, L. Royer, F. Jug, and E. W. Myers, “Content-aware image restoration: pushing the limits of fluorescence microscopy,” Nat. Methods 15(12), 1090–1097 (2018).
[Crossref]

Dichgans, M.

C. Pan, R. Cai, F. P. Quacquarelli, A. Ghasemigharagoz, A. Lourbopoulos, P. Matryba, N. Plesnila, M. Dichgans, F. Hellal, and A. Ertürk, “Shrinkage-mediated imaging of entire organs and organisms using uDISCO,” Nat. Methods 13(10), 859–867 (2016).
[Crossref]

Drozdzal, M.

M. Drozdzal, E. Vorontsov, G. Chartrand, S. Kadoury, and C. Pal, “The importance of skip connections in biomedical image segmentation,” in Deep Learning and Data Labeling for Medical Applications (Springer, 2016), pp. 179–187.

Eberle, A. L.

A. L. Eberle, O. Selchow, M. Thaler, D. Zeidler, and R. Kirmse, “Mission (im)possible - mapping the brain becomes a reality,” Microscopy (Tokyo) 64(1), 45–55 (2015).
[Crossref]

Eliceiri, K. W.

I. Arganda-Carreras, V. Kaynig, C. Rueden, K. W. Eliceiri, J. Schindelin, A. Cardona, and H. J. B. Sebastian Seung, “Trainable Weka Segmentation: a machine learning tool for microscopy pixel classification,” Bioinformatics btx180 (2017).

Emiliani, V.

E. Papagiakoumou, A. Bègue, B. Leshem, O. Schwartz, B. M. Stell, J. Bradley, D. Oron, and V. Emiliani, “Functional patterned multiphoton excitation deep inside scattering tissue,” Nat. Photonics 7(4), 274–278 (2013).
[Crossref]

Ertürk, A.

C. Pan, R. Cai, F. P. Quacquarelli, A. Ghasemigharagoz, A. Lourbopoulos, P. Matryba, N. Plesnila, M. Dichgans, F. Hellal, and A. Ertürk, “Shrinkage-mediated imaging of entire organs and organisms using uDISCO,” Nat. Methods 13(10), 859–867 (2016).
[Crossref]

Esteva, A.

E. M. Christiansen, S. J. Yang, D. M. Ando, A. Javaherian, G. Skibinski, S. Lipnick, E. Mount, A. O’Neil, K. Shah, A. K. Lee, P. Goyal, W. Fedus, R. Poplin, A. Esteva, M. Berndl, L. L. Rubin, P. Nelson, and S. Finkbeiner, “In Silico Labeling: Predicting Fluorescent Labels in Unlabeled Images,” Cell 173(3), 792–803.e19 (2018).
[Crossref]

Fang, C.

H. Zhang, C. Fang, X. Xie, Y. Yang, W. Mei, D. Jin, and P. Fei, “High-throughput, high-resolution deep learning microscopy based on registration-free generative adversarial network,” Biomed. Opt. Express 10(3), 1044–1063 (2019).
[Crossref]

C. Fang, T. Chu, T. Yu, Y. Huang, Y. Li, P. Wan, W. Feng, X. Wang, W. Mei, D. Zhu, and P. Fei, “Minutes-timescale 3D isotropic imaging of entire organs at subcellular resolution by content-aware compressed-sensing light-sheet microscopy,” bioRxiv, 825901 (2020).

Fedus, W.

E. M. Christiansen, S. J. Yang, D. M. Ando, A. Javaherian, G. Skibinski, S. Lipnick, E. Mount, A. O’Neil, K. Shah, A. K. Lee, P. Goyal, W. Fedus, R. Poplin, A. Esteva, M. Berndl, L. L. Rubin, P. Nelson, and S. Finkbeiner, “In Silico Labeling: Predicting Fluorescent Labels in Unlabeled Images,” Cell 173(3), 792–803.e19 (2018).
[Crossref]

Fei, P.

H. Zhang, C. Fang, X. Xie, Y. Yang, W. Mei, D. Jin, and P. Fei, “High-throughput, high-resolution deep learning microscopy based on registration-free generative adversarial network,” Biomed. Opt. Express 10(3), 1044–1063 (2019).
[Crossref]

C. Fang, T. Chu, T. Yu, Y. Huang, Y. Li, P. Wan, W. Feng, X. Wang, W. Mei, D. Zhu, and P. Fei, “Minutes-timescale 3D isotropic imaging of entire organs at subcellular resolution by content-aware compressed-sensing light-sheet microscopy,” bioRxiv, 825901 (2020).

Feng, W.

C. Fang, T. Chu, T. Yu, Y. Huang, Y. Li, P. Wan, W. Feng, X. Wang, W. Mei, D. Zhu, and P. Fei, “Minutes-timescale 3D isotropic imaging of entire organs at subcellular resolution by content-aware compressed-sensing light-sheet microscopy,” bioRxiv, 825901 (2020).

Finkbeiner, S.

E. M. Christiansen, S. J. Yang, D. M. Ando, A. Javaherian, G. Skibinski, S. Lipnick, E. Mount, A. O’Neil, K. Shah, A. K. Lee, P. Goyal, W. Fedus, R. Poplin, A. Esteva, M. Berndl, L. L. Rubin, P. Nelson, and S. Finkbeiner, “In Silico Labeling: Predicting Fluorescent Labels in Unlabeled Images,” Cell 173(3), 792–803.e19 (2018).
[Crossref]

Fischer, P.

O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional Networks for Biomedical Image Segmentation,” in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, N. Navab, J. Hornegger, W. M. Wells, and A. F. Frangi, eds. (Springer International Publishing, Cham, 2015), pp. 234–241.

Frasconi, P.

P. Frasconi, L. Silvestri, P. Soda, R. Cortini, F. S. Pavone, and G. Iannello, “Large-scale automated identification of mouse brain cells in confocal light sheet microscopy images,” Bioinformatics 30(17), i587–i593 (2014).
[Crossref]

Gao, R.

H. Wang, Y. Rivenson, Y. Jin, Z. Wei, R. Gao, H. Gunaydin, L. A. Bentolila, C. Kural, and A. Ozcan, “Deep learning enables cross-modality super-resolution in fluorescence microscopy,” Nat. Methods 16(1), 103–110 (2019).
[Crossref]

Ghasemigharagoz, A.

C. Pan, R. Cai, F. P. Quacquarelli, A. Ghasemigharagoz, A. Lourbopoulos, P. Matryba, N. Plesnila, M. Dichgans, F. Hellal, and A. Ertürk, “Shrinkage-mediated imaging of entire organs and organisms using uDISCO,” Nat. Methods 13(10), 859–867 (2016).
[Crossref]

Gosnell, T.

R. Puetter, T. Gosnell, and A. J. A. R. A. A. Yahil, “Digital image reconstruction: Deblurring and denoising,” Annu. Rev. Astron. Astrophys. 43(1), 139–194 (2005).
[Crossref]

Goyal, P.

E. M. Christiansen, S. J. Yang, D. M. Ando, A. Javaherian, G. Skibinski, S. Lipnick, E. Mount, A. O’Neil, K. Shah, A. K. Lee, P. Goyal, W. Fedus, R. Poplin, A. Esteva, M. Berndl, L. L. Rubin, P. Nelson, and S. Finkbeiner, “In Silico Labeling: Predicting Fluorescent Labels in Unlabeled Images,” Cell 173(3), 792–803.e19 (2018).
[Crossref]

Gunaydin, H.

H. Wang, Y. Rivenson, Y. Jin, Z. Wei, R. Gao, H. Gunaydin, L. A. Bentolila, C. Kural, and A. Ozcan, “Deep learning enables cross-modality super-resolution in fluorescence microscopy,” Nat. Methods 16(1), 103–110 (2019).
[Crossref]

Hellal, F.

C. Pan, R. Cai, F. P. Quacquarelli, A. Ghasemigharagoz, A. Lourbopoulos, P. Matryba, N. Plesnila, M. Dichgans, F. Hellal, and A. Ertürk, “Shrinkage-mediated imaging of entire organs and organisms using uDISCO,” Nat. Methods 13(10), 859–867 (2016).
[Crossref]

Helmchen, F.

F. Helmchen and W. Denk, “Deep tissue two-photon microscopy,” Nat. Methods 2(12), 932–940 (2005).
[Crossref]

Henriques, R.

M. Weigert, U. Schmidt, T. Boothe, A. Muller, A. Dibrov, A. Jain, B. Wilhelm, D. Schmidt, C. Broaddus, S. Culley, M. Rocha-Martins, F. Segovia-Miranda, C. Norden, R. Henriques, M. Zerial, M. Solimena, J. Rink, P. Tomancak, L. Royer, F. Jug, and E. W. Myers, “Content-aware image restoration: pushing the limits of fluorescence microscopy,” Nat. Methods 15(12), 1090–1097 (2018).
[Crossref]

Hinton, G.

Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature 521(7553), 436–444 (2015).
[Crossref]

Huang, Y.

C. Fang, T. Chu, T. Yu, Y. Huang, Y. Li, P. Wan, W. Feng, X. Wang, W. Mei, D. Zhu, and P. Fei, “Minutes-timescale 3D isotropic imaging of entire organs at subcellular resolution by content-aware compressed-sensing light-sheet microscopy,” bioRxiv, 825901 (2020).

Iannello, G.

P. Frasconi, L. Silvestri, P. Soda, R. Cortini, F. S. Pavone, and G. Iannello, “Large-scale automated identification of mouse brain cells in confocal light sheet microscopy images,” Bioinformatics 30(17), i587–i593 (2014).
[Crossref]

Jain, A.

M. Weigert, U. Schmidt, T. Boothe, A. Muller, A. Dibrov, A. Jain, B. Wilhelm, D. Schmidt, C. Broaddus, S. Culley, M. Rocha-Martins, F. Segovia-Miranda, C. Norden, R. Henriques, M. Zerial, M. Solimena, J. Rink, P. Tomancak, L. Royer, F. Jug, and E. W. Myers, “Content-aware image restoration: pushing the limits of fluorescence microscopy,” Nat. Methods 15(12), 1090–1097 (2018).
[Crossref]

Javaherian, A.

E. M. Christiansen, S. J. Yang, D. M. Ando, A. Javaherian, G. Skibinski, S. Lipnick, E. Mount, A. O’Neil, K. Shah, A. K. Lee, P. Goyal, W. Fedus, R. Poplin, A. Esteva, M. Berndl, L. L. Rubin, P. Nelson, and S. Finkbeiner, “In Silico Labeling: Predicting Fluorescent Labels in Unlabeled Images,” Cell 173(3), 792–803.e19 (2018).
[Crossref]

Jin, D.

H. Zhang, C. Fang, X. Xie, Y. Yang, W. Mei, D. Jin, and P. Fei, “High-throughput, high-resolution deep learning microscopy based on registration-free generative adversarial network,” Biomed. Opt. Express 10(3), 1044–1063 (2019).
[Crossref]

Jin, Y.

H. Wang, Y. Rivenson, Y. Jin, Z. Wei, R. Gao, H. Gunaydin, L. A. Bentolila, C. Kural, and A. Ozcan, “Deep learning enables cross-modality super-resolution in fluorescence microscopy,” Nat. Methods 16(1), 103–110 (2019).
[Crossref]

Johnson, G. R.

C. Ounkomol, S. Seshamani, M. M. Maleckar, F. Collman, and G. R. Johnson, “Label-free prediction of three-dimensional fluorescence images from transmitted-light microscopy,” Nat. Methods 15(11), 917–920 (2018).
[Crossref]

Jug, F.

M. Weigert, U. Schmidt, T. Boothe, A. Muller, A. Dibrov, A. Jain, B. Wilhelm, D. Schmidt, C. Broaddus, S. Culley, M. Rocha-Martins, F. Segovia-Miranda, C. Norden, R. Henriques, M. Zerial, M. Solimena, J. Rink, P. Tomancak, L. Royer, F. Jug, and E. W. Myers, “Content-aware image restoration: pushing the limits of fluorescence microscopy,” Nat. Methods 15(12), 1090–1097 (2018).
[Crossref]

Kadoury, S.

M. Drozdzal, E. Vorontsov, G. Chartrand, S. Kadoury, and C. Pal, “The importance of skip connections in biomedical image segmentation,” in Deep Learning and Data Labeling for Medical Applications (Springer, 2016), pp. 179–187.

Kaynig, V.

I. Arganda-Carreras, V. Kaynig, C. Rueden, K. W. Eliceiri, J. Schindelin, A. Cardona, and H. Sebastian Seung, “Trainable Weka Segmentation: a machine learning tool for microscopy pixel classification,” Bioinformatics btx180 (2017).

Keller, P. J.

M. B. Ahrens, M. B. Orger, D. N. Robson, J. M. Li, and P. J. Keller, “Whole-brain functional imaging at cellular resolution using light-sheet microscopy,” Nat. Methods 10(5), 413–420 (2013).
[Crossref]

Kirmse, R.

A. L. Eberle, O. Selchow, M. Thaler, D. Zeidler, and R. Kirmse, “Mission (im)possible - mapping the brain becomes a reality,” Microscopy (Tokyo) 64(1), 45–55 (2015).
[Crossref]

Kuno, A.

E. A. Susaki, K. Tainaka, D. Perrin, H. Yukinaga, A. Kuno, and H. R. Ueda, “Advanced CUBIC protocols for whole-brain and whole-body clearing and imaging,” Nat. Protoc. 10(11), 1709–1727 (2015).
[Crossref]

Kural, C.

H. Wang, Y. Rivenson, Y. Jin, Z. Wei, R. Gao, H. Gunaydin, L. A. Bentolila, C. Kural, and A. Ozcan, “Deep learning enables cross-modality super-resolution in fluorescence microscopy,” Nat. Methods 16(1), 103–110 (2019).
[Crossref]

Lagendijk, A.

J. Bertolotti, E. G. Van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491(7423), 232–234 (2012).
[Crossref]

LeCun, Y.

Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature 521(7553), 436–444 (2015).
[Crossref]

Lee, A. K.

E. M. Christiansen, S. J. Yang, D. M. Ando, A. Javaherian, G. Skibinski, S. Lipnick, E. Mount, A. O’Neil, K. Shah, A. K. Lee, P. Goyal, W. Fedus, R. Poplin, A. Esteva, M. Berndl, L. L. Rubin, P. Nelson, and S. Finkbeiner, “In Silico Labeling: Predicting Fluorescent Labels in Unlabeled Images,” Cell 173(3), 792–803.e19 (2018).
[Crossref]

Leshem, B.

E. Papagiakoumou, A. Bègue, B. Leshem, O. Schwartz, B. M. Stell, J. Bradley, D. Oron, and V. Emiliani, “Functional patterned multiphoton excitation deep inside scattering tissue,” Nat. Photonics 7(4), 274–278 (2013).
[Crossref]

Li, J. M.

M. B. Ahrens, M. B. Orger, D. N. Robson, J. M. Li, and P. J. Keller, “Whole-brain functional imaging at cellular resolution using light-sheet microscopy,” Nat. Methods 10(5), 413–420 (2013).
[Crossref]

Li, Y.

C. Fang, T. Chu, T. Yu, Y. Huang, Y. Li, P. Wan, W. Feng, X. Wang, W. Mei, D. Zhu, and P. Fei, “Minutes-timescale 3D isotropic imaging of entire organs at subcellular resolution by content-aware compressed-sensing light-sheet microscopy,” bioRxiv, 825901 (2020).

Liebling, M.

A. Shajkofci and M. Liebling, “Semi-blind spatially-variant deconvolution in optical microscopy with local point spread function estimation by use of convolutional neural networks,” in 2018 25th IEEE International Conference on Image Processing (ICIP) (IEEE, 2018), pp. 3818–3822.

Lipnick, S.

E. M. Christiansen, S. J. Yang, D. M. Ando, A. Javaherian, G. Skibinski, S. Lipnick, E. Mount, A. O’Neil, K. Shah, A. K. Lee, P. Goyal, W. Fedus, R. Poplin, A. Esteva, M. Berndl, L. L. Rubin, P. Nelson, and S. Finkbeiner, “In Silico Labeling: Predicting Fluorescent Labels in Unlabeled Images,” Cell 173(3), 792–803.e19 (2018).
[Crossref]

Lourbopoulos, A.

C. Pan, R. Cai, F. P. Quacquarelli, A. Ghasemigharagoz, A. Lourbopoulos, P. Matryba, N. Plesnila, M. Dichgans, F. Hellal, and A. Ertürk, “Shrinkage-mediated imaging of entire organs and organisms using uDISCO,” Nat. Methods 13(10), 859–867 (2016).
[Crossref]

Maleckar, M. M.

C. Ounkomol, S. Seshamani, M. M. Maleckar, F. Collman, and G. R. Johnson, “Label-free prediction of three-dimensional fluorescence images from transmitted-light microscopy,” Nat. Methods 15(11), 917–920 (2018).
[Crossref]

Matryba, P.

C. Pan, R. Cai, F. P. Quacquarelli, A. Ghasemigharagoz, A. Lourbopoulos, P. Matryba, N. Plesnila, M. Dichgans, F. Hellal, and A. Ertürk, “Shrinkage-mediated imaging of entire organs and organisms using uDISCO,” Nat. Methods 13(10), 859–867 (2016).
[Crossref]

Mei, W.

H. Zhang, C. Fang, X. Xie, Y. Yang, W. Mei, D. Jin, and P. Fei, “High-throughput, high-resolution deep learning microscopy based on registration-free generative adversarial network,” Biomed. Opt. Express 10(3), 1044–1063 (2019).
[Crossref]

C. Fang, T. Chu, T. Yu, Y. Huang, Y. Li, P. Wan, W. Feng, X. Wang, W. Mei, D. Zhu, and P. Fei, “Minutes-timescale 3D isotropic imaging of entire organs at subcellular resolution by content-aware compressed-sensing light-sheet microscopy,” bioRxiv, 825901 (2020).

Mosk, A. P.

J. Bertolotti, E. G. Van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491(7423), 232–234 (2012).
[Crossref]

Mount, E.

E. M. Christiansen, S. J. Yang, D. M. Ando, A. Javaherian, G. Skibinski, S. Lipnick, E. Mount, A. O’Neil, K. Shah, A. K. Lee, P. Goyal, W. Fedus, R. Poplin, A. Esteva, M. Berndl, L. L. Rubin, P. Nelson, and S. Finkbeiner, “In Silico Labeling: Predicting Fluorescent Labels in Unlabeled Images,” Cell 173(3), 792–803.e19 (2018).
[Crossref]

Muller, A.

M. Weigert, U. Schmidt, T. Boothe, A. Muller, A. Dibrov, A. Jain, B. Wilhelm, D. Schmidt, C. Broaddus, S. Culley, M. Rocha-Martins, F. Segovia-Miranda, C. Norden, R. Henriques, M. Zerial, M. Solimena, J. Rink, P. Tomancak, L. Royer, F. Jug, and E. W. Myers, “Content-aware image restoration: pushing the limits of fluorescence microscopy,” Nat. Methods 15(12), 1090–1097 (2018).
[Crossref]

Myers, E. W.

M. Weigert, U. Schmidt, T. Boothe, A. Muller, A. Dibrov, A. Jain, B. Wilhelm, D. Schmidt, C. Broaddus, S. Culley, M. Rocha-Martins, F. Segovia-Miranda, C. Norden, R. Henriques, M. Zerial, M. Solimena, J. Rink, P. Tomancak, L. Royer, F. Jug, and E. W. Myers, “Content-aware image restoration: pushing the limits of fluorescence microscopy,” Nat. Methods 15(12), 1090–1097 (2018).
[Crossref]

Nelson, P.

E. M. Christiansen, S. J. Yang, D. M. Ando, A. Javaherian, G. Skibinski, S. Lipnick, E. Mount, A. O’Neil, K. Shah, A. K. Lee, P. Goyal, W. Fedus, R. Poplin, A. Esteva, M. Berndl, L. L. Rubin, P. Nelson, and S. Finkbeiner, “In Silico Labeling: Predicting Fluorescent Labels in Unlabeled Images,” Cell 173(3), 792–803.e19 (2018).
[Crossref]

Norden, C.

M. Weigert, U. Schmidt, T. Boothe, A. Muller, A. Dibrov, A. Jain, B. Wilhelm, D. Schmidt, C. Broaddus, S. Culley, M. Rocha-Martins, F. Segovia-Miranda, C. Norden, R. Henriques, M. Zerial, M. Solimena, J. Rink, P. Tomancak, L. Royer, F. Jug, and E. W. Myers, “Content-aware image restoration: pushing the limits of fluorescence microscopy,” Nat. Methods 15(12), 1090–1097 (2018).
[Crossref]

Norris, T. B.

L. Sherman, J. Ye, O. Albert, and T. B. Norris, “Adaptive correction of depth-induced aberrations in multiphoton scanning microscopy using a deformable mirror,” J. Microsc. 206(1), 65–71 (2002).
[Crossref]

O’Neil, A.

E. M. Christiansen, S. J. Yang, D. M. Ando, A. Javaherian, G. Skibinski, S. Lipnick, E. Mount, A. O’Neil, K. Shah, A. K. Lee, P. Goyal, W. Fedus, R. Poplin, A. Esteva, M. Berndl, L. L. Rubin, P. Nelson, and S. Finkbeiner, “In Silico Labeling: Predicting Fluorescent Labels in Unlabeled Images,” Cell 173(3), 792–803.e19 (2018).
[Crossref]

Orger, M. B.

M. B. Ahrens, M. B. Orger, D. N. Robson, J. M. Li, and P. J. Keller, “Whole-brain functional imaging at cellular resolution using light-sheet microscopy,” Nat. Methods 10(5), 413–420 (2013).
[Crossref]

Oron, D.

E. Papagiakoumou, A. Bègue, B. Leshem, O. Schwartz, B. M. Stell, J. Bradley, D. Oron, and V. Emiliani, “Functional patterned multiphoton excitation deep inside scattering tissue,” Nat. Photonics 7(4), 274–278 (2013).
[Crossref]

Ounkomol, C.

C. Ounkomol, S. Seshamani, M. M. Maleckar, F. Collman, and G. R. Johnson, “Label-free prediction of three-dimensional fluorescence images from transmitted-light microscopy,” Nat. Methods 15(11), 917–920 (2018).
[Crossref]

Ozcan, A.

H. Wang, Y. Rivenson, Y. Jin, Z. Wei, R. Gao, H. Gunaydin, L. A. Bentolila, C. Kural, and A. Ozcan, “Deep learning enables cross-modality super-resolution in fluorescence microscopy,” Nat. Methods 16(1), 103–110 (2019).
[Crossref]

Pal, C.

M. Drozdzal, E. Vorontsov, G. Chartrand, S. Kadoury, and C. Pal, “The importance of skip connections in biomedical image segmentation,” in Deep Learning and Data Labeling for Medical Applications (Springer, 2016), pp. 179–187.

Pan, C.

C. Pan, R. Cai, F. P. Quacquarelli, A. Ghasemigharagoz, A. Lourbopoulos, P. Matryba, N. Plesnila, M. Dichgans, F. Hellal, and A. Ertürk, “Shrinkage-mediated imaging of entire organs and organisms using uDISCO,” Nat. Methods 13(10), 859–867 (2016).
[Crossref]

Papagiakoumou, E.

E. Papagiakoumou, A. Bègue, B. Leshem, O. Schwartz, B. M. Stell, J. Bradley, D. Oron, and V. Emiliani, “Functional patterned multiphoton excitation deep inside scattering tissue,” Nat. Photonics 7(4), 274–278 (2013).
[Crossref]

Pavone, F. S.

P. Frasconi, L. Silvestri, P. Soda, R. Cortini, F. S. Pavone, and G. Iannello, “Large-scale automated identification of mouse brain cells in confocal light sheet microscopy images,” Bioinformatics 30(17), i587–i593 (2014).
[Crossref]

Perrin, D.

E. A. Susaki, K. Tainaka, D. Perrin, H. Yukinaga, A. Kuno, and H. R. Ueda, “Advanced CUBIC protocols for whole-brain and whole-body clearing and imaging,” Nat. Protoc. 10(11), 1709–1727 (2015).
[Crossref]

Plesnila, N.

C. Pan, R. Cai, F. P. Quacquarelli, A. Ghasemigharagoz, A. Lourbopoulos, P. Matryba, N. Plesnila, M. Dichgans, F. Hellal, and A. Ertürk, “Shrinkage-mediated imaging of entire organs and organisms using uDISCO,” Nat. Methods 13(10), 859–867 (2016).
[Crossref]

Poplin, R.

E. M. Christiansen, S. J. Yang, D. M. Ando, A. Javaherian, G. Skibinski, S. Lipnick, E. Mount, A. O’Neil, K. Shah, A. K. Lee, P. Goyal, W. Fedus, R. Poplin, A. Esteva, M. Berndl, L. L. Rubin, P. Nelson, and S. Finkbeiner, “In Silico Labeling: Predicting Fluorescent Labels in Unlabeled Images,” Cell 173(3), 792–803.e19 (2018).
[Crossref]

Preibisch, S.

S. Preibisch, S. Saalfeld, J. Schindelin, and P. Tomancak, “Software for bead-based registration of selective plane illumination microscopy data,” Nat. Methods 7(6), 418–419 (2010).
[Crossref]

Puetter, R.

R. Puetter, T. Gosnell, and A. Yahil, “Digital image reconstruction: Deblurring and denoising,” Annu. Rev. Astron. Astrophys. 43(1), 139–194 (2005).
[Crossref]

Quacquarelli, F. P.

C. Pan, R. Cai, F. P. Quacquarelli, A. Ghasemigharagoz, A. Lourbopoulos, P. Matryba, N. Plesnila, M. Dichgans, F. Hellal, and A. Ertürk, “Shrinkage-mediated imaging of entire organs and organisms using uDISCO,” Nat. Methods 13(10), 859–867 (2016).
[Crossref]

Rink, J.

M. Weigert, U. Schmidt, T. Boothe, A. Muller, A. Dibrov, A. Jain, B. Wilhelm, D. Schmidt, C. Broaddus, S. Culley, M. Rocha-Martins, F. Segovia-Miranda, C. Norden, R. Henriques, M. Zerial, M. Solimena, J. Rink, P. Tomancak, L. Royer, F. Jug, and E. W. Myers, “Content-aware image restoration: pushing the limits of fluorescence microscopy,” Nat. Methods 15(12), 1090–1097 (2018).
[Crossref]

Rivenson, Y.

H. Wang, Y. Rivenson, Y. Jin, Z. Wei, R. Gao, H. Gunaydin, L. A. Bentolila, C. Kural, and A. Ozcan, “Deep learning enables cross-modality super-resolution in fluorescence microscopy,” Nat. Methods 16(1), 103–110 (2019).
[Crossref]

Robson, D. N.

M. B. Ahrens, M. B. Orger, D. N. Robson, J. M. Li, and P. J. Keller, “Whole-brain functional imaging at cellular resolution using light-sheet microscopy,” Nat. Methods 10(5), 413–420 (2013).
[Crossref]

Rocha-Martins, M.

M. Weigert, U. Schmidt, T. Boothe, A. Muller, A. Dibrov, A. Jain, B. Wilhelm, D. Schmidt, C. Broaddus, S. Culley, M. Rocha-Martins, F. Segovia-Miranda, C. Norden, R. Henriques, M. Zerial, M. Solimena, J. Rink, P. Tomancak, L. Royer, F. Jug, and E. W. Myers, “Content-aware image restoration: pushing the limits of fluorescence microscopy,” Nat. Methods 15(12), 1090–1097 (2018).
[Crossref]

Ronneberger, O.

O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional Networks for Biomedical Image Segmentation,” in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, N. Navab, J. Hornegger, W. M. Wells, and A. F. Frangi, eds. (Springer International Publishing, Cham, 2015), pp. 234–241.

Royer, L.

M. Weigert, U. Schmidt, T. Boothe, A. Muller, A. Dibrov, A. Jain, B. Wilhelm, D. Schmidt, C. Broaddus, S. Culley, M. Rocha-Martins, F. Segovia-Miranda, C. Norden, R. Henriques, M. Zerial, M. Solimena, J. Rink, P. Tomancak, L. Royer, F. Jug, and E. W. Myers, “Content-aware image restoration: pushing the limits of fluorescence microscopy,” Nat. Methods 15(12), 1090–1097 (2018).
[Crossref]

Royer, L. A.

C. Belthangady and L. A. Royer, “Applications, promises, and pitfalls of deep learning for fluorescence image reconstruction,” Nat. Methods 16(12), 1215–1225 (2019).
[Crossref]

Rubin, L. L.

E. M. Christiansen, S. J. Yang, D. M. Ando, A. Javaherian, G. Skibinski, S. Lipnick, E. Mount, A. O’Neil, K. Shah, A. K. Lee, P. Goyal, W. Fedus, R. Poplin, A. Esteva, M. Berndl, L. L. Rubin, P. Nelson, and S. Finkbeiner, “In Silico Labeling: Predicting Fluorescent Labels in Unlabeled Images,” Cell 173(3), 792–803.e19 (2018).
[Crossref]

Rueden, C.

I. Arganda-Carreras, V. Kaynig, C. Rueden, K. W. Eliceiri, J. Schindelin, A. Cardona, and H. Sebastian Seung, “Trainable Weka Segmentation: a machine learning tool for microscopy pixel classification,” Bioinformatics btx180 (2017).

Saalfeld, S.

S. Preibisch, S. Saalfeld, J. Schindelin, and P. Tomancak, “Software for bead-based registration of selective plane illumination microscopy data,” Nat. Methods 7(6), 418–419 (2010).
[Crossref]

Schindelin, J.

S. Preibisch, S. Saalfeld, J. Schindelin, and P. Tomancak, “Software for bead-based registration of selective plane illumination microscopy data,” Nat. Methods 7(6), 418–419 (2010).
[Crossref]

I. Arganda-Carreras, V. Kaynig, C. Rueden, K. W. Eliceiri, J. Schindelin, A. Cardona, and H. Sebastian Seung, “Trainable Weka Segmentation: a machine learning tool for microscopy pixel classification,” Bioinformatics btx180 (2017).

Schmidt, D.

M. Weigert, U. Schmidt, T. Boothe, A. Muller, A. Dibrov, A. Jain, B. Wilhelm, D. Schmidt, C. Broaddus, S. Culley, M. Rocha-Martins, F. Segovia-Miranda, C. Norden, R. Henriques, M. Zerial, M. Solimena, J. Rink, P. Tomancak, L. Royer, F. Jug, and E. W. Myers, “Content-aware image restoration: pushing the limits of fluorescence microscopy,” Nat. Methods 15(12), 1090–1097 (2018).
[Crossref]

Schmidt, U.

M. Weigert, U. Schmidt, T. Boothe, A. Muller, A. Dibrov, A. Jain, B. Wilhelm, D. Schmidt, C. Broaddus, S. Culley, M. Rocha-Martins, F. Segovia-Miranda, C. Norden, R. Henriques, M. Zerial, M. Solimena, J. Rink, P. Tomancak, L. Royer, F. Jug, and E. W. Myers, “Content-aware image restoration: pushing the limits of fluorescence microscopy,” Nat. Methods 15(12), 1090–1097 (2018).
[Crossref]

Schober, R.

A. N. Yaroslavsky, P. C. Schulze, I. V. Yaroslavsky, R. Schober, and H.-J. Schwarzmaier, “Optical properties of selected native and coagulated human brain tissues in vitro in the visible and near infrared spectral range,” Phys. Med. Biol. 47(12), 2059–2073 (2002).
[Crossref]

Schulze, P. C.

A. N. Yaroslavsky, P. C. Schulze, I. V. Yaroslavsky, R. Schober, and H.-J. Schwarzmaier, “Optical properties of selected native and coagulated human brain tissues in vitro in the visible and near infrared spectral range,” Phys. Med. Biol. 47(12), 2059–2073 (2002).
[Crossref]

Schwartz, O.

E. Papagiakoumou, A. Bègue, B. Leshem, O. Schwartz, B. M. Stell, J. Bradley, D. Oron, and V. Emiliani, “Functional patterned multiphoton excitation deep inside scattering tissue,” Nat. Photonics 7(4), 274–278 (2013).
[Crossref]

Sebastian Seung, H.

I. Arganda-Carreras, V. Kaynig, C. Rueden, K. W. Eliceiri, J. Schindelin, A. Cardona, and H. Sebastian Seung, “Trainable Weka Segmentation: a machine learning tool for microscopy pixel classification,” Bioinformatics btx180 (2017).

Segovia-Miranda, F.

M. Weigert, U. Schmidt, T. Boothe, A. Muller, A. Dibrov, A. Jain, B. Wilhelm, D. Schmidt, C. Broaddus, S. Culley, M. Rocha-Martins, F. Segovia-Miranda, C. Norden, R. Henriques, M. Zerial, M. Solimena, J. Rink, P. Tomancak, L. Royer, F. Jug, and E. W. Myers, “Content-aware image restoration: pushing the limits of fluorescence microscopy,” Nat. Methods 15(12), 1090–1097 (2018).
[Crossref]

Selchow, O.

A. L. Eberle, O. Selchow, M. Thaler, D. Zeidler, and R. Kirmse, “Mission (im)possible - mapping the brain becomes a reality,” Microscopy (Tokyo) 64(1), 45–55 (2015).
[Crossref]

Seshamani, S.

C. Ounkomol, S. Seshamani, M. M. Maleckar, F. Collman, and G. R. Johnson, “Label-free prediction of three-dimensional fluorescence images from transmitted-light microscopy,” Nat. Methods 15(11), 917–920 (2018).
[Crossref]

Shah, K.

E. M. Christiansen, S. J. Yang, D. M. Ando, A. Javaherian, G. Skibinski, S. Lipnick, E. Mount, A. O’Neil, K. Shah, A. K. Lee, P. Goyal, W. Fedus, R. Poplin, A. Esteva, M. Berndl, L. L. Rubin, P. Nelson, and S. Finkbeiner, “In Silico Labeling: Predicting Fluorescent Labels in Unlabeled Images,” Cell 173(3), 792–803.e19 (2018).
[Crossref]

Shajkofci, A.

A. Shajkofci and M. Liebling, “Semi-blind spatially-variant deconvolution in optical microscopy with local point spread function estimation by use of convolutional neural networks,” in 2018 25th IEEE International Conference on Image Processing (ICIP) (IEEE, 2018), pp. 3818–3822.

Sheikh, H. R.

Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image Quality Assessment: From Error Visibility to Structural Similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004).
[Crossref]

Sherman, L.

L. Sherman, J. Ye, O. Albert, and T. B. Norris, “Adaptive correction of depth-induced aberrations in multiphoton scanning microscopy using a deformable mirror,” J. Microsc. 206(1), 65–71 (2002).
[Crossref]

Silvestri, L.

P. Frasconi, L. Silvestri, P. Soda, R. Cortini, F. S. Pavone, and G. Iannello, “Large-scale automated identification of mouse brain cells in confocal light sheet microscopy images,” Bioinformatics 30(17), i587–i593 (2014).
[Crossref]

Simoncelli, E. P.

Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image Quality Assessment: From Error Visibility to Structural Similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004).
[Crossref]

Skibinski, G.

E. M. Christiansen, S. J. Yang, D. M. Ando, A. Javaherian, G. Skibinski, S. Lipnick, E. Mount, A. O’Neil, K. Shah, A. K. Lee, P. Goyal, W. Fedus, R. Poplin, A. Esteva, M. Berndl, L. L. Rubin, P. Nelson, and S. Finkbeiner, “In Silico Labeling: Predicting Fluorescent Labels in Unlabeled Images,” Cell 173(3), 792–803.e19 (2018).
[Crossref]

Soda, P.

P. Frasconi, L. Silvestri, P. Soda, R. Cortini, F. S. Pavone, and G. Iannello, “Large-scale automated identification of mouse brain cells in confocal light sheet microscopy images,” Bioinformatics 30(17), i587–i593 (2014).
[Crossref]

Solimena, M.

M. Weigert, U. Schmidt, T. Boothe, A. Muller, A. Dibrov, A. Jain, B. Wilhelm, D. Schmidt, C. Broaddus, S. Culley, M. Rocha-Martins, F. Segovia-Miranda, C. Norden, R. Henriques, M. Zerial, M. Solimena, J. Rink, P. Tomancak, L. Royer, F. Jug, and E. W. Myers, “Content-aware image restoration: pushing the limits of fluorescence microscopy,” Nat. Methods 15(12), 1090–1097 (2018).
[Crossref]

Stell, B. M.

E. Papagiakoumou, A. Bègue, B. Leshem, O. Schwartz, B. M. Stell, J. Bradley, D. Oron, and V. Emiliani, “Functional patterned multiphoton excitation deep inside scattering tissue,” Nat. Photonics 7(4), 274–278 (2013).
[Crossref]

Susaki, E. A.

E. A. Susaki, K. Tainaka, D. Perrin, H. Yukinaga, A. Kuno, and H. R. Ueda, “Advanced CUBIC protocols for whole-brain and whole-body clearing and imaging,” Nat. Protoc. 10(11), 1709–1727 (2015).
[Crossref]

Tainaka, K.

E. A. Susaki, K. Tainaka, D. Perrin, H. Yukinaga, A. Kuno, and H. R. Ueda, “Advanced CUBIC protocols for whole-brain and whole-body clearing and imaging,” Nat. Protoc. 10(11), 1709–1727 (2015).
[Crossref]

Thaler, M.

A. L. Eberle, O. Selchow, M. Thaler, D. Zeidler, and R. Kirmse, “Mission (im)possible - mapping the brain becomes a reality,” Microscopy (Tokyo) 64(1), 45–55 (2015).
[Crossref]

Tomancak, P.

M. Weigert, U. Schmidt, T. Boothe, A. Muller, A. Dibrov, A. Jain, B. Wilhelm, D. Schmidt, C. Broaddus, S. Culley, M. Rocha-Martins, F. Segovia-Miranda, C. Norden, R. Henriques, M. Zerial, M. Solimena, J. Rink, P. Tomancak, L. Royer, F. Jug, and E. W. Myers, “Content-aware image restoration: pushing the limits of fluorescence microscopy,” Nat. Methods 15(12), 1090–1097 (2018).
[Crossref]

S. Preibisch, S. Saalfeld, J. Schindelin, and P. Tomancak, “Software for bead-based registration of selective plane illumination microscopy data,” Nat. Methods 7(6), 418–419 (2010).
[Crossref]

Ueda, H. R.

E. A. Susaki, K. Tainaka, D. Perrin, H. Yukinaga, A. Kuno, and H. R. Ueda, “Advanced CUBIC protocols for whole-brain and whole-body clearing and imaging,” Nat. Protoc. 10(11), 1709–1727 (2015).
[Crossref]

Van Putten, E. G.

J. Bertolotti, E. G. Van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491(7423), 232–234 (2012).
[Crossref]

Vorontsov, E.

M. Drozdzal, E. Vorontsov, G. Chartrand, S. Kadoury, and C. Pal, “The importance of skip connections in biomedical image segmentation,” in Deep Learning and Data Labeling for Medical Applications (Springer, 2016), pp. 179–187.

Vos, W. L.

J. Bertolotti, E. G. Van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491(7423), 232–234 (2012).
[Crossref]

Wan, P.

C. Fang, T. Chu, T. Yu, Y. Huang, Y. Li, P. Wan, W. Feng, X. Wang, W. Mei, D. Zhu, and P. Fei, “Minutes-timescale 3D isotropic imaging of entire organs at subcellular resolution by content-aware compressed-sensing light-sheet microscopy,” bioRxiv, 825901 (2020).

Wang, H.

H. Wang, Y. Rivenson, Y. Jin, Z. Wei, R. Gao, H. Gunaydin, L. A. Bentolila, C. Kural, and A. Ozcan, “Deep learning enables cross-modality super-resolution in fluorescence microscopy,” Nat. Methods 16(1), 103–110 (2019).
[Crossref]

Wang, X.

C. Fang, T. Chu, T. Yu, Y. Huang, Y. Li, P. Wan, W. Feng, X. Wang, W. Mei, D. Zhu, and P. Fei, “Minutes-timescale 3D isotropic imaging of entire organs at subcellular resolution by content-aware compressed-sensing light-sheet microscopy,” bioRxiv, 825901 (2020).

Wang, Z.

Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image Quality Assessment: From Error Visibility to Structural Similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004).
[Crossref]

Wei, Z.

H. Wang, Y. Rivenson, Y. Jin, Z. Wei, R. Gao, H. Gunaydin, L. A. Bentolila, C. Kural, and A. Ozcan, “Deep learning enables cross-modality super-resolution in fluorescence microscopy,” Nat. Methods 16(1), 103–110 (2019).
[Crossref]

Weigert, M.

M. Weigert, U. Schmidt, T. Boothe, A. Muller, A. Dibrov, A. Jain, B. Wilhelm, D. Schmidt, C. Broaddus, S. Culley, M. Rocha-Martins, F. Segovia-Miranda, C. Norden, R. Henriques, M. Zerial, M. Solimena, J. Rink, P. Tomancak, L. Royer, F. Jug, and E. W. Myers, “Content-aware image restoration: pushing the limits of fluorescence microscopy,” Nat. Methods 15(12), 1090–1097 (2018).
[Crossref]

Wilhelm, B.

M. Weigert, U. Schmidt, T. Boothe, A. Muller, A. Dibrov, A. Jain, B. Wilhelm, D. Schmidt, C. Broaddus, S. Culley, M. Rocha-Martins, F. Segovia-Miranda, C. Norden, R. Henriques, M. Zerial, M. Solimena, J. Rink, P. Tomancak, L. Royer, F. Jug, and E. W. Myers, “Content-aware image restoration: pushing the limits of fluorescence microscopy,” Nat. Methods 15(12), 1090–1097 (2018).
[Crossref]

Xie, X.

H. Zhang, C. Fang, X. Xie, Y. Yang, W. Mei, D. Jin, and P. Fei, “High-throughput, high-resolution deep learning microscopy based on registration-free generative adversarial network,” Biomed. Opt. Express 10(3), 1044–1063 (2019).
[Crossref]

Yahil, A.

R. Puetter, T. Gosnell, and A. Yahil, “Digital image reconstruction: Deblurring and denoising,” Annu. Rev. Astron. Astrophys. 43(1), 139–194 (2005).
[Crossref]

Yang, S. J.

E. M. Christiansen, S. J. Yang, D. M. Ando, A. Javaherian, G. Skibinski, S. Lipnick, E. Mount, A. O’Neil, K. Shah, A. K. Lee, P. Goyal, W. Fedus, R. Poplin, A. Esteva, M. Berndl, L. L. Rubin, P. Nelson, and S. Finkbeiner, “In Silico Labeling: Predicting Fluorescent Labels in Unlabeled Images,” Cell 173(3), 792–803.e19 (2018).
[Crossref]

Yang, W.

W. Yang and R. Yuste, “In vivo imaging of neural activity,” Nat. Methods 14(4), 349–359 (2017).
[Crossref]

Yang, Y.

H. Zhang, C. Fang, X. Xie, Y. Yang, W. Mei, D. Jin, and P. Fei, “High-throughput, high-resolution deep learning microscopy based on registration-free generative adversarial network,” Biomed. Opt. Express 10(3), 1044–1063 (2019).
[Crossref]

Yaroslavsky, A. N.

A. N. Yaroslavsky, P. C. Schulze, I. V. Yaroslavsky, R. Schober, and H.-J. Schwarzmaier, “Optical properties of selected native and coagulated human brain tissues in vitro in the visible and near infrared spectral range,” Phys. Med. Biol. 47(12), 2059–2073 (2002).
[Crossref]

Yaroslavsky, I. V.

A. N. Yaroslavsky, P. C. Schulze, I. V. Yaroslavsky, R. Schober, and H.-J. Schwarzmaier, “Optical properties of selected native and coagulated human brain tissues in vitro in the visible and near infrared spectral range,” Phys. Med. Biol. 47(12), 2059–2073 (2002).
[Crossref]

Ye, J.

L. Sherman, J. Ye, O. Albert, and T. B. Norris, “Adaptive correction of depth-induced aberrations in multiphoton scanning microscopy using a deformable mirror,” J. Microsc. 206(1), 65–71 (2002).
[Crossref]

Yu, T.

C. Fang, T. Chu, T. Yu, Y. Huang, Y. Li, P. Wan, W. Feng, X. Wang, W. Mei, D. Zhu, and P. Fei, “Minutes-timescale 3D isotropic imaging of entire organs at subcellular resolution by content-aware compressed-sensing light-sheet microscopy,” bioRxiv, 825901 (2020).

Yukinaga, H.

E. A. Susaki, K. Tainaka, D. Perrin, H. Yukinaga, A. Kuno, and H. R. Ueda, “Advanced CUBIC protocols for whole-brain and whole-body clearing and imaging,” Nat. Protoc. 10(11), 1709–1727 (2015).
[Crossref]

Yuste, R.

W. Yang and R. Yuste, “In vivo imaging of neural activity,” Nat. Methods 14(4), 349–359 (2017).
[Crossref]

Zeidler, D.

A. L. Eberle, O. Selchow, M. Thaler, D. Zeidler, and R. Kirmse, “Mission (im)possible - mapping the brain becomes a reality,” Microscopy (Tokyo) 64(1), 45–55 (2015).
[Crossref]

Zerial, M.

M. Weigert, U. Schmidt, T. Boothe, A. Muller, A. Dibrov, A. Jain, B. Wilhelm, D. Schmidt, C. Broaddus, S. Culley, M. Rocha-Martins, F. Segovia-Miranda, C. Norden, R. Henriques, M. Zerial, M. Solimena, J. Rink, P. Tomancak, L. Royer, F. Jug, and E. W. Myers, “Content-aware image restoration: pushing the limits of fluorescence microscopy,” Nat. Methods 15(12), 1090–1097 (2018).
[Crossref]

Zhang, H.

H. Zhang, C. Fang, X. Xie, Y. Yang, W. Mei, D. Jin, and P. Fei, “High-throughput, high-resolution deep learning microscopy based on registration-free generative adversarial network,” Biomed. Opt. Express 10(3), 1044–1063 (2019).
[Crossref]

Zhu, D.

C. Fang, T. Chu, T. Yu, Y. Huang, Y. Li, P. Wan, W. Feng, X. Wang, W. Mei, D. Zhu, and P. Fei, “Minutes-timescale 3D isotropic imaging of entire organs at subcellular resolution by content-aware compressed-sensing light-sheet microscopy,” bioRxiv, 825901 (2020).

Annu. Rev. Astron. Astrophys. (1)

R. Puetter, T. Gosnell, and A. Yahil, “Digital image reconstruction: Deblurring and denoising,” Annu. Rev. Astron. Astrophys. 43(1), 139–194 (2005).
[Crossref]

Bioinformatics (1)

P. Frasconi, L. Silvestri, P. Soda, R. Cortini, F. S. Pavone, and G. Iannello, “Large-scale automated identification of mouse brain cells in confocal light sheet microscopy images,” Bioinformatics 30(17), i587–i593 (2014).
[Crossref]

Biomed. Opt. Express (1)

H. Zhang, C. Fang, X. Xie, Y. Yang, W. Mei, D. Jin, and P. Fei, “High-throughput, high-resolution deep learning microscopy based on registration-free generative adversarial network,” Biomed. Opt. Express 10(3), 1044–1063 (2019).
[Crossref]

Cell (1)

E. M. Christiansen, S. J. Yang, D. M. Ando, A. Javaherian, G. Skibinski, S. Lipnick, E. Mount, A. O’Neil, K. Shah, A. K. Lee, P. Goyal, W. Fedus, R. Poplin, A. Esteva, M. Berndl, L. L. Rubin, P. Nelson, and S. Finkbeiner, “In Silico Labeling: Predicting Fluorescent Labels in Unlabeled Images,” Cell 173(3), 792–803.e19 (2018).
[Crossref]

IEEE Trans. on Image Process. (1)

Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image Quality Assessment: From Error Visibility to Structural Similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004).
[Crossref]

J. Microsc. (1)

L. Sherman, J. Ye, O. Albert, and T. B. Norris, “Adaptive correction of depth-induced aberrations in multiphoton scanning microscopy using a deformable mirror,” J. Microsc. 206(1), 65–71 (2002).
[Crossref]

Microscopy (Tokyo) (1)

A. L. Eberle, O. Selchow, M. Thaler, D. Zeidler, and R. Kirmse, “Mission (im)possible - mapping the brain becomes a reality,” Microscopy (Tokyo) 64(1), 45–55 (2015).
[Crossref]

Nat. Methods (9)

W. Yang and R. Yuste, “In vivo imaging of neural activity,” Nat. Methods 14(4), 349–359 (2017).
[Crossref]

M. B. Ahrens, M. B. Orger, D. N. Robson, J. M. Li, and P. J. Keller, “Whole-brain functional imaging at cellular resolution using light-sheet microscopy,” Nat. Methods 10(5), 413–420 (2013).
[Crossref]

C. Pan, R. Cai, F. P. Quacquarelli, A. Ghasemigharagoz, A. Lourbopoulos, P. Matryba, N. Plesnila, M. Dichgans, F. Hellal, and A. Ertürk, “Shrinkage-mediated imaging of entire organs and organisms using uDISCO,” Nat. Methods 13(10), 859–867 (2016).
[Crossref]

F. Helmchen and W. Denk, “Deep tissue two-photon microscopy,” Nat. Methods 2(12), 932–940 (2005).
[Crossref]

C. Belthangady and L. A. Royer, “Applications, promises, and pitfalls of deep learning for fluorescence image reconstruction,” Nat. Methods 16(12), 1215–1225 (2019).
[Crossref]

M. Weigert, U. Schmidt, T. Boothe, A. Muller, A. Dibrov, A. Jain, B. Wilhelm, D. Schmidt, C. Broaddus, S. Culley, M. Rocha-Martins, F. Segovia-Miranda, C. Norden, R. Henriques, M. Zerial, M. Solimena, J. Rink, P. Tomancak, L. Royer, F. Jug, and E. W. Myers, “Content-aware image restoration: pushing the limits of fluorescence microscopy,” Nat. Methods 15(12), 1090–1097 (2018).
[Crossref]

C. Ounkomol, S. Seshamani, M. M. Maleckar, F. Collman, and G. R. Johnson, “Label-free prediction of three-dimensional fluorescence images from transmitted-light microscopy,” Nat. Methods 15(11), 917–920 (2018).
[Crossref]

S. Preibisch, S. Saalfeld, J. Schindelin, and P. Tomancak, “Software for bead-based registration of selective plane illumination microscopy data,” Nat. Methods 7(6), 418–419 (2010).
[Crossref]

H. Wang, Y. Rivenson, Y. Jin, Z. Wei, R. Gao, H. Gunaydin, L. A. Bentolila, C. Kural, and A. Ozcan, “Deep learning enables cross-modality super-resolution in fluorescence microscopy,” Nat. Methods 16(1), 103–110 (2019).
[Crossref]

Nat. Photonics (1)

E. Papagiakoumou, A. Bègue, B. Leshem, O. Schwartz, B. M. Stell, J. Bradley, D. Oron, and V. Emiliani, “Functional patterned multiphoton excitation deep inside scattering tissue,” Nat. Photonics 7(4), 274–278 (2013).
[Crossref]

Nat. Protoc. (1)

E. A. Susaki, K. Tainaka, D. Perrin, H. Yukinaga, A. Kuno, and H. R. Ueda, “Advanced CUBIC protocols for whole-brain and whole-body clearing and imaging,” Nat. Protoc. 10(11), 1709–1727 (2015).
[Crossref]

Nature (2)

Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature 521(7553), 436–444 (2015).

J. Bertolotti, E. G. Van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491(7423), 232–234 (2012).

A. N. Yaroslavsky, P. C. Schulze, I. V. Yaroslavsky, R. Schober, and H.-J. Schwarzmaier, “Optical properties of selected native and coagulated human brain tissues in vitro in the visible and near infrared spectral range,” Phys. Med. Biol. 47(12), 2059–2073 (2002).

A. Shajkofci and M. Liebling, “Semi-blind spatially-variant deconvolution in optical microscopy with local point spread function estimation by use of convolutional neural networks,” in 2018 25th IEEE International Conference on Image Processing (ICIP) (IEEE, 2018), pp. 3818–3822.

C. Fang, T. Chu, T. Yu, Y. Huang, Y. Li, P. Wan, W. Feng, X. Wang, W. Mei, D. Zhu, and P. Fei, “Minutes-timescale 3D isotropic imaging of entire organs at subcellular resolution by content-aware compressed-sensing light-sheet microscopy,” bioRxiv, 825901 (2020).

I. Arganda-Carreras, V. Kaynig, C. Rueden, K. W. Eliceiri, J. Schindelin, A. Cardona, and H. Sebastian Seung, “Trainable Weka Segmentation: a machine learning tool for microscopy pixel classification,” Bioinformatics 33(15), 2424–2426 (2017).

O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional Networks for Biomedical Image Segmentation,” in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, N. Navab, J. Hornegger, W. M. Wells, and A. F. Frangi, eds. (Springer International Publishing, Cham, 2015), pp. 234–241.

M. Drozdzal, E. Vorontsov, G. Chartrand, S. Kadoury, and C. Pal, “The importance of skip connections in biomedical image segmentation,” in Deep Learning and Data Labeling for Medical Applications (Springer, 2016), pp. 179–187.


Figures (10)

Fig. 1. Workflow of ScatNet. (a) Data pre-processing, including intensity normalization, cropping (step 1), and background filtration (step 2) of the raw 3D image pairs acquired by imaging setups that exclude/include the effect of light scattering. (b) Iterative network (U-Net architecture) training based on the registered and preprocessed image pairs. The network generates intermediate outputs from the input blurred data (scattering images) and compares them quantitatively with the label data (scattering-free images) to calculate the loss function, which iteratively drives the optimization of the network (steps 3 to 6). (c) Real microscopy imaging of deep tissues (Drosophila embryo, mouse brain, etc.), which yields experimental images with low SNR and resolution due to light scattering (step 7). (d) Restoration of the degraded experimental images by predicting higher-resolution, higher-SNR images (steps 8, 9) using the well-trained ScatNet. The restored patches are finally stitched back into a complete 3D image volume of the sample (step 10).
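As a rough illustration of steps 1–2 in (a), intensity normalization and patch cropping of a 3D stack can be sketched in NumPy. The percentile scheme, patch size, and stride below are assumptions for illustration, not the authors' exact settings.

```python
import numpy as np

def normalize_stack(stack):
    """Percentile-based intensity normalization to [0, 1].
    (A common choice; the exact scheme used by ScatNet is assumed here.)"""
    lo, hi = np.percentile(stack, (0.1, 99.9))
    return np.clip((stack - lo) / max(hi - lo, 1e-6), 0.0, 1.0)

def crop_patches(stack, patch=32, stride=32):
    """Crop a 3D stack into cubic patches (step 1 of the workflow)."""
    z, y, x = stack.shape
    patches = []
    for zi in range(0, z - patch + 1, stride):
        for yi in range(0, y - patch + 1, stride):
            for xi in range(0, x - patch + 1, stride):
                patches.append(stack[zi:zi+patch, yi:yi+patch, xi:xi+patch])
    return np.stack(patches)

# Example: a 64^3 stack cropped with a 32^3 patch and stride 32 yields 8 patches
stack = np.random.rand(64, 64, 64).astype(np.float32)
patches = crop_patches(normalize_stack(stack))
print(patches.shape)  # (8, 32, 32, 32)
```

Step 10 is the inverse operation: predicted patches are written back to their source coordinates and tiled into the full volume.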
Fig. 2. Performance of ScatNet. (a) Light-sheet microscopy imaging of 3D fluorescent beads embedded with a cluster of scattering HeLa cells. Two image stacks of the same beads are acquired under two opposite views (0° and 180°) and spatially registered to obtain pairs of scattering and scattering-free bead images. (b) 3D reconstructions of the same beads in scattering mode (raw input), scattering-free mode (ground truth), and after ScatNet restoration (network output). (c), (d) Lateral (xy) and axial (xz) maximum intensity projections (MIPs) of the same selected ROIs (yellow boxes in b). The results clearly demonstrate a notable improvement in lateral and axial resolution for ScatNet. Scale bar, 5 µm. (e) ScatNet improves the lateral/axial resolution across the ∼25-µm overlapping volume of interest (∼90 beads), achieving a relatively uniform resolution of ∼1.09 ± 0.07 µm, ∼1.44 ± 0.23 µm and ∼1.77 ± 0.57 µm in x, y and z, respectively, compared to ∼1.95 ± 0.36 µm, ∼2.69 ± 0.43 µm and ∼2.72 ± 1.02 µm in the raw scattering image.
Fig. 3. Restoration of living Drosophila embryos imaged by LSFM. (a) 3D image stacks of Drosophila embryos acquired by multi-view SPIM and registered by multi-view registration [20]. The 3D stacks from different views are registered to obtain training pairs of scattering and scattering-free images from different parts of the embryo. (b) Full xy slices of a Drosophila embryo by single-view SPIM (raw), ScatNet (restoration) and multi-view fusion (ground truth). (c) 3D views of raw, restored and ground-truth nuclei signals in the same embryo region. (d) ScatNet restoration of signals with different degrees of light scattering (at different depths). The comparative results are shown in both raw x-y (rows 1-3) and reconstructed x-z planes (row 4). The error maps of the raw and restored images, as compared to the ground truths, are correspondingly shown in the right two columns. (e) Intensity plots of lines through the nuclei resolved in the raw, restored, and ground-truth images. The plot profiles of ScatNet fit those of the ground truths well, showing that sufficiently high resolution and restoration fidelity are achieved regardless of the degree of scattering. (f) The averaged SNR values of the image stacks shown in (c). The poor SNR of the raw images is also improved by ScatNet.
Fig. 4. ScatNet restoration of neuronal signals in mouse brain slices (Thy1-YFP-M) imaged by TPEM. (a) The experimental design of TPEM imaging for obtaining graded scattering information by covering the sample with brain slices of different thicknesses. The network trained on data encompassing graded scattering information is then able to recover signals suffering from different degrees of scattering. (b) Maximum intensity projections (MIPs) of the same 70 × 70 × 100 µm³ regions along the z- (top) and y-axis (bottom). (b1), (b2), (b4), (b6) show the raw imaging results through 0 (ground truth), 100, 200, and 300 µm brain slices, respectively. (b3), (b5), (b7) show the results restored from the scattered signals in (b2), (b4), and (b6), respectively. (c) Intensity plots of linecuts through individual neuronal fibers resolved in b1 to b7. The comparative intensity profiles quantitatively verify the improved resolution as well as the high restoration fidelity of ScatNet for different degrees of scattering. (d), (e) Variation of the SSIM and PSNR values of the raw measurements and corresponding ScatNet restorations across the imaging depth. The ScatNet improvement is more pronounced in deeper regions.
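The PSNR metric plotted in (e) follows a standard definition; a minimal NumPy sketch is given below, assuming images normalized to a data range of 1.0 (the paper's exact implementation details are not specified here).

```python
import numpy as np

def psnr(restored, reference, data_range=1.0):
    """Peak signal-to-noise ratio in dB (standard definition)."""
    mse = np.mean((restored.astype(np.float64) - reference.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

# A uniform error of 0.1 gives MSE = 0.01, i.e. 20 dB for unit data range
ref = np.zeros((16, 16))
noisy = ref + 0.1
print(round(psnr(noisy, ref), 1))  # 20.0
```

SSIM is defined analogously over local luminance, contrast, and structure statistics; ready-made implementations exist in common image libraries.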
Fig. 5.
Fig. 5. Restoration of a 300-µm thick brain slice with neurons labelled by YFP (Thy1-YFP-M). (a), (b) 3D views of the raw and restored neurons in the same brain volume across entire 300-µm depth. Scale bar, 100 µm. (c) Lateral (xy) planes of the selected ROIs (dashed boxes in a and b) at 60 µm, 180 µm and 240 µm depth in the raw (upper row) and restoration (lower row) results, respectively. Scale bar, 80 µm. (d), (e) The 3D reconstructions of raw TPEM image and ScatNet-enabled TPEM image. Two pyramidal neurons from the deep regions (∼290 and ∼180 µm depth) in the raw and restored brain images are segmented (yellow and green color renderings) for comparison. Scale bar, 80 µm.
Fig. 6. Illustration of our U-Net-based network. The architecture is an encoder-decoder with skip connections. “Conv” denotes a convolutional layer; the 3×3×3 Conv layers use the Rectified Linear Unit (ReLU) as the activation function. “Max pool” is a MaxPooling3D layer, “Up Sample” is an UpSampling3D layer, and “Concat” denotes a concatenation operation.
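The three building blocks named in the caption can be mimicked in plain NumPy to see how the feature-map sizes evolve through one encoder/decoder level. This is only an illustrative sketch of 2× pooling, 2× up-sampling, and skip concatenation, not the network's actual implementation.

```python
import numpy as np

def max_pool3d(x):
    """2x MaxPooling3D via block reshaping (single-channel volume)."""
    z, y, w = x.shape
    return x.reshape(z // 2, 2, y // 2, 2, w // 2, 2).max(axis=(1, 3, 5))

def up_sample3d(x):
    """2x UpSampling3D by nearest-neighbour repetition."""
    return x.repeat(2, 0).repeat(2, 1).repeat(2, 2)

def concat(a, b):
    """'Concat' of an encoder map with a same-sized decoder map
    along a (new) channel axis, as in the skip connections."""
    return np.stack([a, b], axis=-1)

vol = np.arange(4 ** 3, dtype=np.float32).reshape(4, 4, 4)
down = max_pool3d(vol)   # encoder step: (4,4,4) -> (2,2,2)
up = up_sample3d(down)   # decoder step: (2,2,2) -> (4,4,4)
skip = concat(vol, up)   # skip connection: (4,4,4,2)
print(down.shape, up.shape, skip.shape)  # (2, 2, 2) (4, 4, 4) (4, 4, 4, 2)
```

The skip connections are why encoder and decoder maps at the same level must share spatial dimensions: the concatenation only works on equal-sized volumes.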
Fig. 7. ScatNet restoration of a 2D image of a USAF resolution target acquired through a layer of tissue phantom. Raw (tissue phantom covered), ScatNet-restored and ground-truth (tape removed) images of the same area of the USAF resolution target are shown as the top, middle, and bottom parts, respectively. (a) Comparative results for single line pairs in group 5 (32.0 to 57.0 lp/mm) before and after network restoration. (b) The finest single line pairs (group 6, 64.0 to 114.0 lp/mm) can be restored by ScatNet. (c), (d) Intensity profiles of lines through the 3rd and 6th line pairs in group 6, respectively. The profiles of the ScatNet restoration fit those of the ground truths well, indicating sufficiently high resolution as well as restoration fidelity.
Fig. 8. Single-view LSFM input, two-view-fused LSFM input, and their ScatNet restorations. Scale bar, 40 µm.
Fig. 9. Failure cases of ScatNet restoration. (a) Failed restorations of extreme inputs that are completely blurred by severe tissue scattering. (b) Restoration errors due to the lack of accurate registration. Without additional registration, noticeable misalignment between the blurred and label images (arrow indicators) leads to poor network training, which consequently causes incorrect model inference (Restoration 1). After the training data are well aligned using image registration, a much improved result (Restoration 2) is generated by the correctly trained model.
Fig. 10. Comparison of results restored by different deep-learning models with different parameters. (a)-(f) Axial (xz) maximum intensity projections (MIPs) of the same selected ROIs for the raw scattering image, the ground-truth scattering-free image, restoration by ResNet with MSE loss, restoration by ScatNet with MSE loss, restoration by ResNet with CE loss, and restoration by ScatNet with CE loss, respectively. Scale bar, 20 µm.

Tables (1)

Table 1. Overview of training details on above-mentioned samples.
