
Deep learning-based adaptive optics for light sheet fluorescence microscopy

Open Access

Abstract

Light sheet fluorescence microscopy (LSFM) is a high-speed imaging technique that is often used to image intact tissue-cleared specimens with cellular or subcellular resolution. Like other optical imaging systems, LSFM suffers from sample-induced optical aberrations that degrade image quality. Optical aberrations become more severe when imaging a few millimeters deep into tissue-cleared specimens, complicating subsequent analyses. Adaptive optics is commonly used to correct sample-induced aberrations using a deformable mirror. However, routinely used sensorless adaptive optics techniques are slow, as they require multiple images of the same region of interest to iteratively estimate the aberrations. Together with the fading of the fluorescent signal, this is a major limitation, as thousands of images are required to image a single intact organ even without adaptive optics. Thus, a fast and accurate aberration estimation method is needed. Here, we used deep learning techniques to estimate sample-induced aberrations from only two images of the same region of interest in cleared tissues. We show that applying the correction with a deformable mirror greatly improves image quality. We also introduce a sampling technique that requires a minimum number of images to train the network. Two conceptually different network architectures are compared: one that shares convolutional features and another that estimates each aberration independently. Overall, we have presented an efficient way to correct aberrations in LSFM and to improve image quality.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

High-resolution three-dimensional optical imaging methods have been instrumental in experimental biology and the life sciences [1,2]. Among the light microscopy modalities that provide optical sectioning (e.g., confocal and 2-photon), Light Sheet Fluorescence Microscopy (LSFM) is considered the fastest, as it avoids point-by-point scanning of the sample [3–5] by acquiring the entire field of view at once [6–8]. The fast acquisition speed (tens of frames per second) has made LSFM a key tool for documenting and quantitatively analyzing tissue-cleared specimens in various settings and models [9–24].

Similar to other optical imaging techniques, LSFM suffers from sample-induced aberrations in the detection path. In tissue-cleared samples, such aberrations occur deep in the specimen (i.e., starting at ∼1 mm) [3,25–27]. These specimen-based aberrations are irregular, as they can be influenced by factors such as the quality of the tissue clearing process, as well as the geometrical complexity and heterogeneity of the sample's refractive index. The optical aberrations are expressed as distortions in the wavefront, and adaptive optics (AO) has been developed to correct for them [27–32]. In AO, aberrations can be estimated with [33–38] or without [39–44] wavefront sensors, but most AO methods used in microscopy lack wavefront sensors [29,45]. Once estimated, the aberration is corrected using a deformable mirror (DM) or a spatial light modulator (SLM) placed in the exit pupil plane of the detection objective. In this paper, we focus on sample-induced aberrations in the detection path. Other studies are available on the correction of coplanarity aberrations (i.e., aberrations related to the illumination beam intersection with the focal plane of the detection objective) [3,4,25,46].

The sensorless method of AO uses an iterative process to estimate the wavefront aberration with high accuracy [32,42,47,48]. The wavefront in the exit pupil plane of the detection objective is represented by a combination of different Zernike modes that each represent a different type of aberration (e.g., $Z_2^0$ = defocus, $Z_2^2$ = vertical astigmatism, and $Z_2^{-2}$ = horizontal astigmatism) [49]. The amplitude of each Zernike mode describes the severity of that aberration, and the amplitude value is assessed using a progressive grid search algorithm that continuously evaluates the image. Although the sensorless method of AO is highly reliable and accurate, it requires capturing multiple images (∼8-15) per Zernike mode, which is too slow to evaluate local aberrations in whole organs (Fig. 1). Fast evaluation of aberrations can be achieved using optical phase conjugation [50], phase retrieval and phase diversity methods [51–53], parameterized point spread function (PSF) fitting methods [54–56] and pupil segmentation [57]. These methods are beneficial, but typically require expensive additional hardware to assess the aberration.
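
To make the iterative procedure concrete, the following sketch outlines a modal sensorless grid search of the kind described above. It is a minimal illustration rather than the routine used in this work; apply_zernike_to_dm, capture_image, and image_quality_metric are hypothetical placeholders for the DM control, camera acquisition, and sharpness metric, and the toy demonstration at the end replaces the hardware with a simulated quadratic metric.

import numpy as np

def sensorless_modal_search(modes, amplitude_grid, apply_zernike_to_dm,
                            capture_image, image_quality_metric):
    """Estimate one amplitude per Zernike mode by an image-based grid search.

    modes                 : list of Zernike mode identifiers
    amplitude_grid        : candidate amplitudes tried for every mode
    apply_zernike_to_dm   : callable that writes a {mode: amplitude} dict to the DM
    capture_image         : callable that returns an image with the current DM shape
    image_quality_metric  : callable returning a scalar sharpness score (higher = better)
    """
    corrections = {m: 0.0 for m in modes}
    for mode in modes:                         # correct one mode at a time
        scores = []
        for amp in amplitude_grid:             # ~8-15 images per mode in practice
            trial = dict(corrections, **{mode: amp})
            apply_zernike_to_dm(trial)
            scores.append(image_quality_metric(capture_image()))
        corrections[mode] = amplitude_grid[int(np.argmax(scores))]
    apply_zernike_to_dm(corrections)           # apply the final estimate
    return corrections

# Toy demonstration: the "hardware" is replaced by a simulated metric that
# peaks at a hidden optimum, so the search should recover values close to it.
true_opt = {"Z2^2": 0.10, "Z2^-2": -0.05}
state = {}
estimate = sensorless_modal_search(
    modes=list(true_opt),
    amplitude_grid=np.arange(-0.2, 0.2001, 0.05),
    apply_zernike_to_dm=state.update,
    capture_image=lambda: None,
    image_quality_metric=lambda _: -sum((state[m] - true_opt[m]) ** 2 for m in true_opt),
)
print(estimate)   # close to {'Z2^2': 0.10, 'Z2^-2': -0.05}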

Fig. 1. Specimen-based aberrations in light sheet fluorescence microscopy. Deep in a tissue-cleared specimen, the emitted wavefront accumulates spatially variable aberrations that distort the image. The non-aberrated image was obtained at the surface of the specimen (red spot), where sample-induced aberrations are minimal, while the aberrated image was obtained deep in the tissue (blue spot), without any correction.

Deep learning has greatly advanced image processing tasks such as image denoising, object detection, and more. Accordingly, deep learning is currently utilized to assess aberrations in sensorless AO [58,59], either using simulated or synthetic data for training, or using non-biological experimental data [60,61]. We used a 13-layer ResNet-based network that is trained and validated on tissue-cleared samples with biomedical value (e.g., porcine cochlea and mouse brain). The aberration estimation process uses only two defocused images, and the aberration correction is done using a DM. In our implementation, two different computational approaches are evaluated: the first approach uses one network to estimate the five dominant Zernike modes, whereas the other approach uses five independent networks to estimate each aberration independently. The latter approach proves to be more accurate, and it implies that aberrations can be estimated independently even when the images show the presence of other aberrations. The training dataset was acquired based on the aberration statistics determined experimentally, and this approach allowed us to record a relatively low number of images (7,000) to train our networks. Overall, the proposed technique facilitates real-time correction of aberrations in LSFM imaging and provides an elegant new procedure to minimize the number of images required for model training.

2. Methods

A schematic of our optical setup is shown in Fig. 2; the DM was used to correct the aberrations in the detection path. Our setup can also control the light sheet illumination beam roll and yaw angles using two dual galvanometer scanners to intersect the illumination beam with the focal plane of the detection objective [3,4,25,46]. Here we manually verified that the illumination beam is coplanar with the focal plane of the detection objective. We took this manual approach because, in this study, we focused on timely correction of the detection-side aberrations, i.e., the light path from the emitted signal to the camera.

Fig. 2. Schematic of the optical setup used for correcting optical aberrations in the detection path of a light sheet fluorescence microscope. A neural network is used to estimate the amplitudes of five Zernike modes from two defocused images, and the correction is done using a deformable mirror. A translation stage is used to correct for illumination beam defocus.

2.1 Sample preparation

A total of 5 male rat brains [62], 2 pig cochleae [20] and 2 mouse brains [63] underwent pretreatment with methanol, immunolabeling, and clearing [22]. The porcine cochlea samples (newborn) were stained using rabbit anti-Myosin VIIa (Cy3 as secondary). The mouse brain samples (postnatal day 30) were stained using chicken anti-GFP (Alexa Fluor 647 as secondary) and rabbit anti-RFP (Cy3 as secondary). The rat samples (embryonic day 14) were stained using rabbit anti-TH (Alexa Fluor 555 as secondary) and goat anti-5-HT (Alexa Fluor 647 as secondary). All the animals were harvested under the regulation and approval of the Institutional Animal Care and Use Committee (IACUC) at North Carolina State University.

2.2 Experimental setup

After clearing, samples were mounted and placed in a custom-built immersion chamber filled with dibenzyl ether (DBE) with a refractive index (RI) of 1.562. The experimental setup for controlling the illumination beam angle was described in detail before [20,25,46,64]. Figure 2 shows a schematic that excludes mirrors and lenses. Briefly, a Gaussian beam was emitted by a continuous-wave laser (Coherent; OBIS LS 561-50); the beam was then expanded and dithered up and down across the field of view at a high frequency (600 Hz) to generate the light sheet illumination. The calculated beam waist full-width at half-maximum (FWHM) was ∼8 µm. The scanning galvo system (Cambridge Technology; 6215 H) dithered the illumination beam and controlled the roll angle relative to the detection focal plane. Control of the scanning galvo was achieved using a dual-channel arbitrary function generator (Tektronix; AFG31022A), which can synchronize the phase between its two channels. To control the yaw angle, an additional dual galvo system was integrated (Thorlabs; GVS202) that was controlled using an analog signal (Measurement Computing; USB-1208HS-4AO). The detection objective lens (10×, numerical aperture (NA) 0.6, Olympus; XLPLN10XSVMP-2) was mounted on a translation stage (Newport; 561D-XYZ, and CONEX-TRB12CC motor) to correct for any defocus errors between the illumination beam and the detection lens focal plane. An emission filter (AVRO; FF01-593/40-25) was placed immediately after the detection objective in the detection path, followed by a mirror to direct the light to the DM (ALPAO; DM69-15). The DM had 69 actuators, with a stroke range of 40 µm. The light from the DM was reflected towards a tube lens (ASI; TL180-MMC) followed by a CMOS camera (Hamamatsu; C13440-20CU). To compensate for aberrations, the DM was positioned at the back pupil plane of the objective lens, which coincides with the Fourier plane of the specimen. After this correction, the parallel beam of light is collected by the tube lens located after the DM, which focuses it onto the camera. The tube lens performs an additional Fourier transform on the light transmitted through the specimen and objective lens before detection. A graphical user interface was written in MATLAB (2019b) and used during image acquisition.

2.3 Aberration estimation

2.3.1 Datasets acquisition for training and testing

Datasets for training and testing our computational models were recorded using the LSFM setup, and we relied on the slow but accurate sensorless AO method. The sensorless AO method required capturing 75 images for every field of view (FOV), which we used to generate ground-truth data to train our computational methods. Initially, we measured the static system aberrations using the iterative method at the surface of the sample, while the chamber was filled with DBE. These aberrations were not sample related. The estimated coefficients were used as the baseline for determining the specimen-based aberrations, which were then statistically analyzed. Specimens (5 in total) were mounted in the imaging chamber and the Zernike amplitudes were estimated at 30 locations with depths varying from the surface to 7 mm deep. Figure 3(a) shows the extent of the Zernike amplitudes (wave RMS (µm)) corresponding to the different aberrations. The aberration values in Fig. 3(a) differ from our previous study [25] because more data have been collected and added in this work. Based on the distribution of the aberrations, we recorded our datasets; simply put, when generating the training dataset, the recorded Zernike amplitude remained inside the amplitude range per aberration shown in Fig. 3(a) and Fig. S1. Next, we determined the minimum change in each Zernike mode amplitude that resulted in an observable change in image quality. We observed that a minimum change of 0.05 in the Zernike amplitude was necessary to have an impact on the imaging quality. Since smaller relative changes in the Zernike amplitude at higher amplitudes degraded the image more, the minimum change was determined at higher amplitudes. This factor was essential, as our aberration estimation model classifies the size of the aberration into predetermined ranges, and this factor determines how many classes we would have per aberration. Hence, in the dataset collection step, Zernike amplitudes in the range of -0.2 to 0.2 with a step size of 0.05 were selected for coma and astigmatism, and -0.05 to 0.05 with a step size of 0.05 for spherical aberration. Therefore, we had 19,683 different combinations of aberrations.
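
As a small sanity check of these numbers, the following sketch (assuming only the ranges and the 0.05 step quoted above) builds the discrete amplitude grids and counts the combinations that exhaustive recording would require.

import numpy as np

# Discrete amplitude grids implied by the text (wave RMS, µm): coma and
# astigmatism span -0.2 to 0.2 in steps of 0.05 (9 values each), spherical
# spans -0.05 to 0.05 in steps of 0.05 (3 values).
coma_astig_grid = np.round(np.arange(-0.2, 0.2001, 0.05), 2)
spherical_grid = np.round(np.arange(-0.05, 0.0501, 0.05), 2)

# Recording every combination of the four coma/astigmatism modes plus the
# spherical mode would require 9**4 * 3 acquisitions, which motivates the
# reduced sampling scheme described next.
n_combinations = len(coma_astig_grid) ** 4 * len(spherical_grid)
print(len(coma_astig_grid), len(spherical_grid), n_combinations)   # 9 3 19683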

Fig. 3. Specimen-induced aberrations and the methodology for dataset recording. (a) Quantification of specimen-induced aberrations using different samples of pig cochlea and rat brain. (b) A schematic that describes the recording of the training dataset. For one selected Zernike mode, its amplitude is sampled using the discrete amplitude generator (DAG), whereas the rest of the amplitudes are randomly sampled from a normal distribution (RAG) with zero mean and a rounded standard deviation that corresponds to (a).

Instead of recording every combination of different Zernike amplitudes for training, we took a different approach to minimize the size of the training dataset. In this approach, as shown in Fig. 3(b), we selected a discrete amplitude, using the discrete amplitude generator (DAG), for only one of the Zernike modes, whereas we treated the other Zernike mode amplitudes as noise and therefore sampled them randomly from a normal distribution (RAG) whose mean was zero and whose standard deviation was set according to Fig. 3(a). For example, if the DAG value for horizontal coma is 0.20, the Zernike amplitudes that correspond to vertical coma, astigmatism, and spherical aberration were sampled from a Gaussian distribution (Fig. 3(b)).
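
The sketch below illustrates this sampling scheme under the grids and step size described above; the mode names and the RAG standard deviations are illustrative placeholders, since the actual standard deviations follow the measured statistics in Fig. 3(a).

import numpy as np

rng = np.random.default_rng(0)

MODES = ["vert_coma", "horiz_coma", "vert_astig", "horiz_astig", "spherical"]
DAG = {m: np.round(np.arange(-0.2, 0.2001, 0.05), 2) for m in MODES}
DAG["spherical"] = np.round(np.arange(-0.05, 0.0501, 0.05), 2)

# Illustrative RAG standard deviations; the values used in the paper follow
# the measured aberration statistics in Fig. 3(a).
RAG_STD = {"vert_coma": 0.1, "horiz_coma": 0.1,
           "vert_astig": 0.1, "horiz_astig": 0.1, "spherical": 0.05}

def sample_training_aberration(selected_mode, dag_value):
    """One training configuration: the selected mode takes a discrete DAG value,
    every other mode is treated as noise drawn from a zero-mean normal (RAG)."""
    amps = {m: rng.normal(0.0, RAG_STD[m]) for m in MODES}
    amps[selected_mode] = dag_value
    return amps

# Cycling through all discrete values gives 9 x 4 + 3 = 39 configurations per
# spatial location, each recorded as a pair of defocused images.
configs = [sample_training_aberration(m, v) for m in MODES for v in DAG[m]]
print(len(configs))   # 39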

To acquire images for training, at each spatial location we estimated the aberrations using the sensorless AO method and corrected for them using the DM. Then we translated the sample by 10 µm in the z-direction, under the assumption that the applied aberration compensation was still optimal [25]. Given that the images after the correction did not exhibit any observable aberrations, we introduced known aberrations to the scene and captured two defocused images. Consequently, the two captured images could be associated with known aberrations. For each spatial location, we cycled through the five Zernike modes, starting from vertical coma, selecting the DAG value and adding the RAG values; for each configuration we captured two defocused images. We continued this process until we had captured all the discrete values for all the Zernike modes (39 in total). In this process, after recording all the DAG values for each Zernike mode, we also moved the sample by 10 µm in the z-direction to avoid fading. After completion, the sample was translated to a new spatial location, and the process was repeated. Overall, 9,750 instances were captured with known Zernike amplitudes. The dataset was then divided 70-30% into training and validation sets. For testing, an additional 1,900 cases were captured on new specimens. During the recording of the training data, we captured FOVs from all parts of the specimens, including the surface and deep within the tissue. This was done to ensure that low-SNR data resulting from deep tissue imaging were included in the training set, and to capture as many unique features of the sample as possible. This diverse training dataset enables the deep learning model to be more robust in estimating aberrations, even in challenging conditions where the SNR is low or the sample features are complex.

2.3.2 Network architecture and training process

We estimated five Zernike modes from two defocused images. These modes represent the dominant aberrations in optical imaging of biological samples: coma, astigmatism, and spherical aberration. Both regression and classification networks can be used to estimate the Zernike modes, but we preferred classification networks since the exact value of the aberration did not affect the image quality as long as the network accurately classified the images into the right DAG class. For classification, horizontal coma, vertical coma, horizontal astigmatism, and vertical astigmatism had nine classes that corresponded to Zernike amplitudes of -0.2, -0.15, -0.1, -0.05, 0, 0.05, 0.1, 0.15, and 0.2, whereas spherical aberration had only three classes that corresponded to Zernike amplitudes of -0.05, 0, and 0.05. The number of classes and amplitudes were determined as described earlier.
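
The mapping from a continuous amplitude to its class label can be illustrated with the short sketch below; it simply assigns the nearest class centre, mirroring how RAG-sampled amplitudes are assigned to classes later in this section (the function name is ours, for illustration only).

import numpy as np

# Class centres implied by the text: nine classes for each coma and
# astigmatism mode, three classes for spherical aberration.
COMA_ASTIG_CLASSES = np.round(np.arange(-0.2, 0.2001, 0.05), 2)
SPHERICAL_CLASSES = np.round(np.arange(-0.05, 0.0501, 0.05), 2)

def amplitude_to_class(amplitude, class_centres):
    """Return the index of the class centre nearest to a continuous amplitude."""
    return int(np.argmin(np.abs(class_centres - amplitude)))

print(amplitude_to_class(0.12, COMA_ASTIG_CLASSES))    # -> 6 (centre 0.10)
print(amplitude_to_class(-0.03, SPHERICAL_CLASSES))    # -> 0 (centre -0.05)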

We tested two architectures: a shared network, in which the convolutional features are shared among all aberration modes, and a non-shared network, in which an independent network estimates each aberration. For the shared network, we used a modified ResNet-based architecture [65], see Fig. 4(a) and Fig. S2. The ResNet structure was modified after the convolutional block: the features were linearized and connected to a fully connected layer of size 256 × 1, followed by another fully connected layer to predict the class for each Zernike mode. In the second configuration, the non-shared network (Fig. 4(b)), all aberration estimates were produced by separate networks.
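
For readers who prefer code, the following PyTorch sketch contrasts the two configurations. It uses a small stand-in convolutional backbone rather than the exact 13-layer modified ResNet, so layer counts and sizes are illustrative assumptions; only the shared-versus-independent structure of the classification heads reflects the description above.

import torch
import torch.nn as nn

N_CLASSES = [9, 9, 9, 9, 3]   # four coma/astigmatism modes, then spherical

class Backbone(nn.Module):
    """Stand-in for the modified convolutional block; the two defocused
    128 x 128 images enter as a 2-channel input."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, 256), nn.ReLU())

    def forward(self, x):
        return self.features(x)

class SharedNet(nn.Module):
    """One backbone whose features feed five classification heads (Fig. 4(a))."""
    def __init__(self):
        super().__init__()
        self.backbone = Backbone()
        self.heads = nn.ModuleList([nn.Linear(256, c) for c in N_CLASSES])

    def forward(self, x):
        f = self.backbone(x)
        return [head(f) for head in self.heads]   # one logit vector per mode

class NonSharedNet(nn.Module):
    """Five fully independent networks, one per Zernike mode (Fig. 4(b))."""
    def __init__(self):
        super().__init__()
        self.nets = nn.ModuleList(
            [nn.Sequential(Backbone(), nn.Linear(256, c)) for c in N_CLASSES])

    def forward(self, x):
        return [net(x) for net in self.nets]

# A batch of 4 image pairs, each pair stacked as two channels.
logits = SharedNet()(torch.randn(4, 2, 128, 128))
print([tuple(l.shape) for l in logits])   # [(4, 9), (4, 9), (4, 9), (4, 9), (4, 3)]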

Fig. 4. Network architecture. (a) The network receives two images as input, followed by three convolutional blocks (where the [3 × 3, 32] format implies a convolution filter of 3 × 3 and a total of 32 filters), a linear layer, and two fully connected layers that classify each aberration mode into sub-classes. In the shared network, all modes are connected to the convolutional features through fully connected layers. (b) The non-shared network has the same structure as the shared network, with the difference that no convolutional features are shared between the per-mode networks. The network is simplified for illustration and the ReLU and pooling layers are not shown.

During training, the network was fed two images with dimensions of 128 × 128 pixels each, which were randomly cropped from a 1024 × 1024-pixel window. This window was centered on the middle of the full field of view. As demonstrated in Fig. S3, the aberrations remained consistent throughout the entire field of view, indicating that the precise placement of the cropped images was not a critical factor for aberration estimation. The first image was an in-focus aberrated image, where the focus was determined manually, and the second image was a defocused aberrated image. The defocus was achieved by adding an amplitude of 0.25 to the defocus Zernike mode on the DM. Please note that Zernike amplitudes sampled from the Gaussian distribution (RAG) were assigned the nearest class for training and testing. During training, the input images were normalized to values between 0 and 1. The cross-entropy function was used as the loss function; for the non-shared network it was calculated separately for each aberration and summed to generate the final loss. A total of 6,825 captured and labeled cases were used for training and ∼2,000 cases were used for validation. The network was implemented in Python 3.5 with PyTorch 1.4.0 and trained on 4 Nvidia Tesla V100 16 GB GPUs on the Longleaf cluster. The learning rate was set to 0.00001 with the Adam optimizer, and all the image processing (normalization, saturation, and cropping) was done during training on the server.
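
A minimal training step consistent with this description might look as follows; it reuses the SharedNet class from the previous sketch, and the random tensors merely stand in for normalized image pairs and their nearest-class labels.

import torch
import torch.nn as nn

def multi_head_loss(logits_per_mode, targets_per_mode):
    """Sum of per-mode cross-entropy terms; for the non-shared configuration
    each term would come from its own network instead of a shared backbone."""
    ce = nn.CrossEntropyLoss()
    return sum(ce(logits, targets)
               for logits, targets in zip(logits_per_mode, targets_per_mode))

model = SharedNet()                                    # class from the sketch above
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)

# Random tensors stand in for a batch of normalized image pairs (values in
# [0, 1]) and their nearest-class labels for the five Zernike modes.
images = torch.rand(8, 2, 128, 128)
targets = [torch.randint(0, c, (8,)) for c in [9, 9, 9, 9, 3]]

optimizer.zero_grad()
loss = multi_head_loss(model(images), targets)
loss.backward()
optimizer.step()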

3. Results

3.1 Shared versus non-shared networks – accuracy of aberration estimation

We first compared the performance of the shared versus non-shared networks using a test set that contained ∼1,900 cases. A confusion matrix was then plotted for each aberration (Fig. 5(a) shows vertical coma for the two architectures), and the rest of the confusion matrices are shown in Fig. S4-S5. Figs. S6-S7 show the confusion matrices normalized by the number of images in each class. In Fig. 5(b), the normalized root-mean-square error (NRMSE) is presented, which was calculated by dividing the root-mean-square error (RMSE) value for each mode by the standard deviation of the corresponding RAG used for that Zernike mode. By dividing the RMSE by the standard deviation of the corresponding mode, we obtained a metric that reflects the relative error of the estimation for each mode, independent of its amplitude. Therefore, the NRMSE provides a better measure of the overall performance of the two architectures, as it accounts for the differences in the amplitudes of the aberrations. The results indicate that both architectures can classify the aberrations based on their strength with high accuracy (Fig. 5(b)). Additionally, we investigated the combined estimation error by adding all the Zernike mode errors per image; after averaging this value across 200 cases, we found an NRMSE of 3.64 for the non-shared network and 3.92 for the shared network. To summarize, the non-shared network has better estimation accuracy than the shared network. This result can be explained by the non-shared network having convolutional features dedicated to estimating each aberration.
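
For clarity, the NRMSE used here can be written as a short function; the amplitudes and the RAG standard deviation in the example are illustrative, not values from the experiments.

import numpy as np

def nrmse_per_mode(true_amps, estimated_amps, rag_std):
    """RMSE of the estimated amplitudes for one Zernike mode, normalized by the
    standard deviation of the RAG used for that mode."""
    true_amps = np.asarray(true_amps, dtype=float)
    estimated_amps = np.asarray(estimated_amps, dtype=float)
    rmse = np.sqrt(np.mean((estimated_amps - true_amps) ** 2))
    return rmse / rag_std

# Illustrative example with an assumed RAG standard deviation of 0.1.
print(nrmse_per_mode([0.10, -0.05, 0.00], [0.05, -0.05, 0.05], rag_std=0.1))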

Fig. 5. Quantitative comparison of the shared and non-shared networks. (a) Confusion matrix of vertical coma for the shared and non-shared networks. (b) Comparison of the normalized root mean square error (NRMSE) of different Zernike modes/aberrations (whiskers are the maximum/minimum and the box shows the standard deviation). We used the NRMSE measure since the standard deviations of the different aberrations were different. The figure shows data acquired from tissue-cleared rat brains.

3.2 Shared versus non-shared networks – improvement in image quality

From an imaging point of view, the improvement in image quality was the most important metric for comparing the shared and non-shared networks. To perform this comparison, first the aberrations were removed using the iterative sensorless AO approach. Then a known set of random Zernike amplitudes, which degraded the image quality, was displayed on the DM, and two images were captured. The aberrated pair of images was then fed into the two networks for estimating the Zernike amplitudes independently. Based on the two predictions the aberrations were corrected, and the resulting images were captured. Please note that, to cancel the effects of signal fading between the image acquisitions when aberrations were introduced, we moved our field of view a few micrometers in depth. Since the specimens were tissue-cleared organs, the aberrations and image content did not change drastically with this change in depth. Therefore, this change in depth did not affect our quantitative results. As an unbiased image quality metric, we used the Discrete Cosine Transform Energy (DCTE) [66] on the entire FOV. The improvement in image quality is shown in Fig. 6(a) and Fig. S8, where the original aberrated image and the corrected images for the shared and non-shared networks are shown and the improvement in image quality is evident. To quantitatively show the improvement in image quality, we repeated this process for 20 FOVs. Figure 6(b) shows the DCTE values for aberrated, shared-network-corrected, and non-shared-network-corrected images across 20 FOVs. These results demonstrate that the non-shared network can improve the image quality and that it has superior performance compared to the shared network. In terms of processing time, the shared network required 1.7 seconds, and all five non-shared networks needed 4 seconds to estimate the five Zernike modes. We did not quantify the impact of aberration corrections on the axial PSF in our study, as the NA of our objective is 0.6. In this range, the effect of aberrations on the axial PSF is relatively small [67], allowing us to confidently evaluate the overall impact of aberrations on image quality.
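
As a rough illustration of a DCT-based sharpness score, the sketch below computes the fraction of spectral energy outside the DC term; the exact DCTE definition follows [66] and may differ in detail from this generic implementation.

import numpy as np
from scipy.fft import dctn
from scipy.ndimage import uniform_filter

def dct_energy(image):
    """A DCT-based sharpness score: fraction of spectral energy outside the DC
    term. Sharper images place more energy in high-frequency DCT coefficients,
    so larger values indicate better focus."""
    img = np.asarray(image, dtype=float)
    coeffs = dctn(img, norm="ortho")          # 2D DCT-II
    total = np.sum(coeffs ** 2)
    dc = coeffs[0, 0] ** 2
    return (total - dc) / (total + 1e-12)

# Illustrative comparison: a sharp random texture versus a heavily smoothed copy.
rng = np.random.default_rng(1)
sharp = rng.random((256, 256))
blurred = uniform_filter(sharp, size=15)
print(dct_energy(sharp) > dct_energy(blurred))   # expected: True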

Fig. 6. Comparison of imaging performance for the shared and non-shared networks. (a) Aberrated and corrected images using the shared and non-shared networks. Images in the red boxes are zoomed-in regions. The estimated Zernike coefficients for the shared network and the non-shared network are [ZVC = 0.15, ZHC = -0.10, ZVA = -0.05, ZHA = -0.10, ZSP = 0.05] and [ZVC = 0.10, ZHC = -0.10, ZVA = -0.10, ZHA = -0.10, ZSP = 0], respectively. (b) Comparison of image quality before and after aberration correction using the shared and non-shared networks. DCTE is used to evaluate the image quality. The figure shows data acquired from tissue-cleared rat brains. VC – vertical coma, HC – horizontal coma, VA – vertical astigmatism, HA – horizontal astigmatism, SP – spherical.

3.3 Aberration estimation and correction – unseen tissue

We next tested our trained networks on unseen tissue, i.e., a tissue type that was not used during the training stage. The unseen tissue was a tissue-cleared mouse hemisphere, whereas the training dataset contained only tissue-cleared rat brains. As previously explained, a known set of aberrations was applied using the DM to generate the aberrated images. The recorded images were then sent to the shared and non-shared networks for Zernike amplitude estimation, and the correction was applied using the DM. Figure 7 shows a representative example and the DCTE values of the aberrated image and the corrected images based on the estimates provided by the shared and non-shared networks. As expected, the corrected images still suffered from minor aberrations, but the image quality was dramatically improved.

Fig. 7. Testing on unseen tissue. Aberrated and corrected images using the shared and non-shared networks with the corresponding image quality metric. The estimated Zernike coefficients for the shared network and the non-shared network are [ZVC = -0.10, ZHC = -0.10, ZVA = 0.15, ZHA = -0.10, ZSP = -0.05] and [ZVC = -0.15, ZHC = -0.10, ZVA = 0.15, ZHA = -0.10, ZSP = -0.05], respectively. Images show data acquired from a tissue-cleared mouse brain. VC – vertical coma, HC – horizontal coma, VA – vertical astigmatism, HA – horizontal astigmatism, SP – spherical.

4. Discussion and conclusion

Here, we have developed a deep neural network-based technique to correct aberrations in LSFM, where the aberration estimation process requires only two images. We aimed to correct for sample-induced aberrations in tissue-cleared samples, rather than fixed optical system-based aberrations. The source of these sample-induced aberrations is the heterogeneity of the tissue within the specimen. Since in tissue clearing the image acquisition stage involves acquiring hundreds of thousands of images, the need for a dynamic and fast aberration correction method is evident. Most sensorless AO techniques are slow, require the acquisition of tens of images, and estimate the aberrations without considering the aberrations' statistics, as they typically begin with a large initial amplitude estimate and then iteratively converge towards the aberration minimum by decreasing the range of the estimation. Here, we first quantified the statistics of specimen-based aberrations and their range, and we found that coma, astigmatism, and spherical aberration are the three primary specimen-induced aberrations. Then, we evaluated the required accuracy for aberration estimation, i.e., the maximum estimation error that does not affect the image quality. This estimate allows us to minimize the dataset size required for training: we select the value of only one Zernike amplitude at a time from a discrete, uniformly spaced set, while treating all other modes as independent noise and deciding their values by sampling from a normal distribution. Conceptually, this approach is similar to an autofocus routine, where the sharpest focal distance is evaluated while treating the rest of the aberrations as noise. Using this approach, we only captured aberrations in 39 separate classes, whereas the alternative, recording every permutation of the Zernike amplitudes as done conventionally, would require 19,683 classes in our implementation. Once the training dataset was acquired, we used two ResNet-based classification networks to estimate the Zernike amplitudes. In the first network, the convolutional features are shared among all the aberration modes, whereas the second architecture has five separate networks that each estimate one Zernike amplitude independently of the others. The training and testing results show that the independent-networks approach (non-shared) is more accurate (∼8%) than the shared network. When comparing the improvement in image quality before and after aberration corrections, we achieve 23% and 27% improvement for the shared and non-shared networks, respectively. This result is not surprising, as the non-shared network has more trainable parameters per Zernike mode than the shared network.

Additionally, using simulations, we also tested the improvement in network estimation when the number of defocused input images was changed from two to three. We found that the shared network has an NRMSE of 3.802 for two images and 3.787 for three images. For the non-shared network, the NRMSE is 3.408 for two images and 3.381 for three images. These results indicate that there is no significant improvement in aberration estimation when increasing the number of input images from two to three.

As mentioned earlier, aberrations become more pronounced when imaging deeper into the sample. However, these aberrations vary across different samples, indicating that they are not exclusively dependent on depth but are influenced by various factors, including the quality of clearing, the age of the animal, and the structural complexity (Fig. S1). To accommodate this variability in our study, we carefully chose fields of view from diverse locations within the sample, spanning from the surface to deeper regions within the tissue.

Although the non-shared network shows better results than the shared network, it requires each network to run separately. Thus, in live imaging of cells and tissues, the non-shared networks would have to run in parallel to match the computational time of the shared-network approach. In terms of computational time (without parallel computing solutions), 1.7 and 4 seconds were required to run the shared and non-shared networks, respectively. This appears to be the tradeoff in our proposed method, where a higher level of correction requires more time. Additionally, in our proposed method the images are focused manually, and we ignore the Zernike mode that corresponds to defocus. We take this approach because we want to isolate the detection optical path-based errors from the effects of illumination-based inaccuracies, which cannot be corrected using a deformable mirror. In our previous work we demonstrated that the defocus and the illumination angle can also be corrected using deep learning [65] and a galvo scanner. In the future, we envision our microscope being driven by deep learning, enabling simultaneous correction of multiple types of aberrations. Achieving crisp and high signal-to-noise images across the entire tissue-cleared specimen will greatly facilitate the image processing stage, which is the current bottleneck in applying tissue clearing techniques and imaging to basic biology research [64].

Funding

National Institute of Neurological Disorders and Stroke (R21NS129093, R56NS117019).

Acknowledgments

The authors would like to thank Dr. Adele Moatti and the NCSU Central Procedure Lab for their help with tissue collection.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from authors upon reasonable request.

Supplemental document

See Supplement 1 for supporting content.

References

1. P. J. Keller, “Imaging morphogenesis: technological advances and biological insights,” Science 340(6137), 1234168 (2013). [CrossRef]  

2. C. A. Combs, “Fluorescence Microscopy: A Concise Guide to Current Imaging Methods,” Curr. Protoc. Neurosci. Chapter 2, Unit 2.1 (2010).

3. L. A. Royer, W. C. Lemon, R. K. Chhetri, Y. Wan, M. Coleman, E. W. Myers, and P. J. Keller, “Adaptive light-sheet microscopy for long-term, high-resolution imaging in living organisms,” Nat. Biotechnol. 34(12), 1267–1278 (2016). [CrossRef]  

4. L. A. Royer, W. C. Lemon, R. K. Chhetri, and P. J. Keller, “A practical guide to adaptive light-sheet microscopy,” Nat Protocol 13(11), 2462–2500 (2018). [CrossRef]  

5. T. Chakraborty, M. K. Driscoll, E. Jeffery, et al., “Light-sheet microscopy of cleared tissues with isotropic, subcellular resolution,” Nat. Methods 16(11), 1109–1113 (2019). [CrossRef]  

6. E. H. K. Stelzer, F. Strobl, B.-J. Chang, F. Preusser, S. Preibisch, K. McDole, and R. Fiolka, “Light sheet fluorescence microscopy,” Nat. Rev. Methods Primer 1(1), 73 (2021). [CrossRef]  

7. M. Weber and J. Huisken, “Light sheet microscopy for real-time developmental biology,” Curr. Opin. Genet. Dev. 21(5), 566–572 (2011). [CrossRef]  

8. P. J. Keller and H.-U. Dodt, “Light sheet microscopy of living or cleared specimens,” Curr. Opin. Neurobiol. 22(1), 138–143 (2012). [CrossRef]  

9. T. Tian, Z. Yang, and X. Li, “Tissue clearing technique: Recent progress and biomedical applications,” J. Anat. 238(2), 489–507 (2021). [CrossRef]  

10. C. Pan, R. Cai, F. P. Quacquarelli, A. Ghasemigharagoz, A. Lourbopoulos, P. Matryba, N. Plesnila, M. Dichgans, F. Hellal, and A. Ertürk, “Shrinkage-mediated imaging of entire organs and organisms using uDISCO,” Nat. Methods 13(10), 859–867 (2016). [CrossRef]  

11. E. Lee, J. Choi, Y. Jo, J. Y. Kim, Y. J. Jang, H. M. Lee, S. Y. Kim, H.-J. Lee, K. Cho, N. Jung, E. M. Hur, S. J. Jeong, C. Moon, Y. Choe, I. J. Rhyu, H. Kim, and W. Sun, “ACT-PRESTO: Rapid and consistent tissue clearing and labeling method for 3-dimensional (3D) imaging,” Sci. Rep. 6(1), 18631 (2016). [CrossRef]  

12. H. Hama, H. Hioki, K. Namiki, T. Hoshida, H. Kurokawa, F. Ishidate, T. Kaneko, T. Akagi, T. Saito, T. Saido, and A. Miyawaki, “ScaleS: an optical clearing palette for biological imaging,” Nat. Neurosci. 18(10), 1518–1529 (2015). [CrossRef]  

13. M. R. Cronan, A. F. Rosenberg, S. H. Oehlers, J. W. Saelens, D. M. Sisk, K. L. Jurcic Smith, S. Lee, and D. M. Tobin, “CLARITY and PACT-based imaging of adult zebrafish and mouse for whole-animal analysis of infections,” Dis. Model. Mech. 8(12), 1643–1650 (2015). [CrossRef]  

14. I. Costantini, J.-P. Ghobril, A. P. Di Giovanna, A. L. A. Mascaro, L. Silvestri, M. C. Müllenbroich, L. Onofri, V. Conti, F. Vanzi, L. Sacconi, R. Guerrini, H. Markram, G. Iannello, and F. S. Pavone, “A versatile clearing agent for multi-modal brain imaging,” Sci. Rep. 5(1), 9808 (2015). [CrossRef]  

15. K. Chung, J. Wallace, S.-Y. Kim, S. Kalyanasundaram, A. S. Andalman, T. J. Davidson, J. J. Mirzabekov, K. A. Zalocusky, J. Mattis, A. K. Denisin, S. Pak, H. Bernstein, C. Ramakrishnan, L. Grosenick, V. Gradinaru, and K. Deisseroth, “Structural and molecular interrogation of intact biological systems,” Nature 497(7449), 332–337 (2013). [CrossRef]  

16. M. E. Boutin and D. Hoffman-Kim, “Application and assessment of optical clearing methods for imaging of tissue-engineered neural stem cell spheres,” Tissue Eng. Part C Methods 21(3), 292–302 (2015). [CrossRef]  

17. Y. Aoyagi, R. Kawakami, H. Osanai, T. Hibi, and T. Nemoto, “A rapid optical clearing protocol using 2,2′-thiodiethanol for microscopic observation of fixed mouse brain,” PLoS One 10(1), e0116280 (2015). [CrossRef]  

18. K. Sung, Y. Ding, J. Ma, H. Chen, V. Huang, M. Cheng, C. F. Yang, J. T. Kim, D. Eguchi, D. Di Carlo, T. K. Hsiai, A. Nakano, and R. P. Kulkarni, “Simplified three-dimensional tissue clearing and incorporation of colorimetric phenotyping,” Sci. Rep. 6(1), 30736 (2016). [CrossRef]  

19. M. Belle, D. Godefroy, C. Dominici, C. Heitz-Marchaland, P. Zelina, F. Hellal, F. Bradke, and A. Chédotal, “A simple method for 3D analysis of immunolabeled axonal tracts in a transparent nervous system,” Cell Rep. 9(4), 1191–1201 (2014). [CrossRef]  

20. A. Moatti, Y. Cai, C. Li, T. Sattler, L. Edwards, J. Piedrahita, F. S. Ligler, and A. Greenbaum, “Three-dimensional imaging of intact porcine cochlea using tissue clearing and custom-built light-sheet microscopy,” Biomed. Opt. Express 11(11), 6181–6196 (2020). [CrossRef]  

21. E. A. Susaki, K. Tainaka, D. Perrin, F. Kishino, T. Tawara, T. M. Watanabe, C. Yokoyama, H. Onoe, M. Eguchi, S. Yamaguchi, T. Abe, H. Kiyonari, Y. Shimizu, A. Miyawaki, H. Yokota, and H. R. Ueda, “Whole-brain imaging with single-cell resolution using chemical cocktails and computational analysis,” Cell 157(3), 726–739 (2014). [CrossRef]  

22. N. Renier, Z. Wu, D. J. Simon, J. Yang, P. Ariel, and M. Tessier-Lavigne, “iDISCO: a simple, rapid method to immunolabel large tissue samples for volume imaging,” Cell 159(4), 896–910 (2014). [CrossRef]  

23. H. R. Ueda, A. Ertürk, K. Chung, V. Gradinaru, A. Chédotal, P. Tomancak, and P. J. Keller, “Tissue clearing and its applications in neuroscience,” Nat. Rev. Neurosci. 21(2), 61–79 (2020). [CrossRef]  

24. A. Vieites-Prado and N. Renier, “Tissue clearing and 3D imaging in developmental biology,” Development 148(18), dev199369 (2021). [CrossRef]

25. M. R. Rai, C. Li, and A. Greenbaum, “Quantitative analysis of illumination and detection corrections in adaptive light sheet fluorescence microscopy,” Biomed. Opt. Express 13(5), 2960–2974 (2022). [CrossRef]  

26. C. Bourgenot, C. D. Saunter, J. M. Taylor, J. M. Girkin, and G. D. Love, “3D adaptive optics in a light sheet microscope,” Opt. Express 20(12), 13252–13261 (2012). [CrossRef]  

27. C. Zhang, W. Sun, Q. Mu, Z. Cao, X. Zhang, S. Wang, and L. Xuan, “Analysis of aberrations and performance evaluation of adaptive optics in two-photon light-sheet microscopy,” Opt. Commun. 435, 46–53 (2019). [CrossRef]  

28. M. J. Booth, “Adaptive optical microscopy: the ongoing quest for a perfect image,” Light: Sci. Appl. 3(4), e165 (2014). [CrossRef]  

29. A. Hubert, F. Harms, R. Juvénal, P. Treimany, X. Levecq, V. Loriette, G. Farkouh, F. Rouyer, and A. Fragola, “Adaptive optics light-sheet microscopy based on direct wavefront sensing without any guide star,” Opt. Lett. 44(10), 2514–2517 (2019). [CrossRef]  

30. Y. Liu, K. Lawrence, A. Malik, C. E. Gunderson, R. Ball, J. D. Lauderdale, and P. Kner, “Imaging neural activity in zebrafish larvae with adaptive optics and structured illumination light sheet microscopy,” Proc. SPIE 10886, 1088607 (2019). [CrossRef]  

31. V. Marx, “Microscopy: hello, adaptive optics,” Nat. Methods 14(12), 1133–1136 (2017). [CrossRef]  

32. C. Bourgenot, J. M. Taylor, C. D. Saunter, J. M. Girkin, and G. D. Love, “Light sheet adaptive optics microscope for 3D live imaging,” Proc. SPIE 8589, 85890W (2013). [CrossRef]  

33. J.-W. Cha, J. Ballesta, and P. T. C. So, “Shack-Hartmann wavefront-sensor-based adaptive optics system for multiphoton microscopy,” J. Biomed. Opt. 15(4), 046022 (2010). [CrossRef]  

34. A. G. Basden, D. Atkinson, N. A. Bharmal, et al., “Experience with wavefront sensor and deformable mirror interfaces for wide-field adaptive optics systems,” Mon. Not. R. Astron. Soc. 459(2), 1350–1359 (2016). [CrossRef]  

35. A. L. Rukosuev, A. V. Kudryashov, A. N. Lylova, V. V. Samarkin, and Y. V. Sheldakova, “Adaptive optics system for real-time wavefront correction,” Atmospheric Ocean. Opt. 28(4), 381–386 (2015). [CrossRef]  

36. O. Azucena, J. Crest, J. Cao, W. Sullivan, P. Kner, D. Gavel, D. Dillon, S. Olivier, and J. Kubby, “Wavefront aberration measurements and corrections through thick tissue using fluorescent microsphere reference beacons,” Opt. Express 18(16), 17521–17532 (2010). [CrossRef]  

37. X. Tao, A. Norton, M. Kissel, O. Azucena, and J. Kubby, “Adaptive optics two photon microscopy with direct wavefront sensing using autofluorescent guide-stars,” Proc. SPIE 8978, 89780D (2014). [CrossRef]  

38. K. Wang, D. Milkie, A. Saxena, P. Engerer, T. Misgeld, M. E. Bronner, J. Mumm, and E. Betzig, “Rapid adaptive optical recovery of optimal resolution over large volumes,” Nat. Methods 11(6), 625–628 (2014). [CrossRef]

39. Y. Liu and P. Kner, “Sensorless adaptive optics for light sheet microscopy,” in Imaging and Applied Optics Congress, OF2B.2 (2020).

40. M. J. Booth, “Wavefront sensorless adaptive optics for large aberrations,” Opt. Lett. 32(1), 5–7 (2007). [CrossRef]  

41. A. Jesacher and M. J. Booth, “Sensorless adaptive optics for microscopy,” Proc. SPIE 7931, 79310G (2011). [CrossRef]  

42. M. J. Booth, “Wave front sensor-less adaptive optics: a model-based approach using sphere packings,” Opt. Express 14(4), 1339–1352 (2006). [CrossRef]  

43. L. Wang, W. Yan, R. Li, X. Weng, J. Zhang, Z. Yang, L. Liu, T. Ye, and J. Qu, “Aberration correction for improving the image quality in STED microscopy using the genetic algorithm,” Nanophotonics 7(12), 1971–1980 (2018). [CrossRef]  

44. D. Débarre, E. J. Botcherby, T. Watanabe, S. Srinivas, M. J. Booth, and T. Wilson, “Image-based adaptive optics for two-photon microscopy,” Opt. Lett. 34(16), 2495–2497 (2009). [CrossRef]  

45. J. A. Kubby, Adaptive Optics for Biological Imaging (CRC Press, 2020).

46. C. Li, M. R. Rai, H. T. Ghashghaei, and A. Greenbaum, “Illumination angle correction during image acquisition in light-sheet fluorescence microscopy using deep learning,” Biomed. Opt. Express 13(2), 888–901 (2022). [CrossRef]  

47. A. Facomprez, E. Beaurepaire, and D. Débarre, “Accuracy of correction in modal sensorless adaptive optics,” Opt. Express 20(3), 2598–2612 (2012). [CrossRef]  

48. Q. Hu, J. Wang, J. Antonello, M. Hailstone, M. Wincott, R. Turcotte, D. Gala, and M. J. Booth, “A universal framework for microscope sensorless adaptive optics: Generalized aberration representations,” APL Photonics 5(10), 100801 (2020). [CrossRef]  

49. J. W. Goodman, Introduction to Fourier Optics (Roberts and Company Publishers, 2005).

50. L. Kong and M. Cui, “In vivo neuroimaging through the highly scattering tissue via iterative multi-photon adaptive compensation technique,” Opt. Express 23(5), 6145–6150 (2015). [CrossRef]  

51. R. A. Gonsalves, “Perspectives on phase retrieval and phase diversity in astronomy,” Proc. SPIE 9148, 91482P (2014). [CrossRef]  

52. P. Kner, “Phase diversity for three-dimensional imaging,” J. Opt. Soc. Am. A 30(10), 1980–1987 (2013). [CrossRef]  

53. A. P. Krishnan, C. Belthangady, C. Nyby, M. Lange, B. Yang, and L. A. Royer, “Optical Aberration Correction via Phase Diversity and Deep Learning,” bioRxiv, 215731498 (2020). [CrossRef]  

54. A. Aristov, B. Lelandais, E. Rensen, and C. Zimmer, “ZOLA-3D allows flexible 3D localization microscopy over an adjustable axial range,” Nat. Commun. 9(1), 2409 (2018). [CrossRef]  

55. B. Ferdman, E. Nehme, L. E. Weiss, R. Orange, O. Alalouf, and Y. Shechtman, “VIPR: vectorial implementation of phase retrieval for fast and accurate microscopic pixel-wise pupil estimation,” Opt. Express 28(7), 10179–10198 (2020). [CrossRef]  

56. B. Zhang, J. Zhu, K. Si, and W. Gong, “Deep learning assisted zonal adaptive aberration correction,” Front. Phys. 8, 634 (2021). [CrossRef]  

57. N. Ji, D. E. Milkie, and E. Betzig, “Adaptive optics via pupil segmentation for high-resolution imaging in biological tissues,” Nat. Methods 7(2), 141–147 (2010). [CrossRef]  

58. B. P. Cumming and M. Gu, “Direct determination of aberration functions in microscopy by an artificial neural network,” Opt. Express 28(10), 14511–14521 (2020). [CrossRef]  

59. L. Möckl, P. N. Petrov, and W. E. Moerner, “Accurate phase retrieval of complex 3D point spread functions with deep residual neural networks,” Appl. Phys. Lett. 115(25), 251106 (2019). [CrossRef]  

60. Q. Xin, G. Ju, C. Zhang, and S. Xu, “Object-independent image-based wavefront sensing approach using phase diversity images and deep learning,” Opt. Express 27(18), 26102–26119 (2019). [CrossRef]  

61. A. Sinha, J. Lee, S. Li, and G. Barbastathis, “Lensless computational imaging through deep learning,” Optica 4(9), 1117–1125 (2017). [CrossRef]  

62. A. J. Newell, V. A. Kapps, Y. Cai, M. R. Rai, G. St. Armour, B. M. Horman, K. D. Rock, S. K. Witchey, A. Greenbaum, and H. B. Patisaul, “Maternal organophosphate flame retardant exposure alters the developing mesencephalic dopamine system in fetal rat,” Toxicol. Sci. 191(2), 357–373 (2023). [CrossRef]  

63. Y. Cai, X. Zhang, C. Li, H. T. Ghashghaei, and A. Greenbaum, “COMBINe enables automated detection and classification of neurons and astrocytes in tissue-cleared mouse brains,” Cell Reports Methods 3(4), 100454 (2023). [CrossRef]  

64. C. Li, A. Moatti, X. Zhang, H. T. Ghashghaei, and A. Greenbaum, “Deep learning-based autofocus method enhances image quality in light-sheet fluorescence microscopy,” Biomed. Opt. Express 12(8), 5214–5226 (2021). [CrossRef]

65. K. He, X. Zhang, S. Ren, and J. Sun, “Deep Residual Learning for Image Recognition,” in IEEE Conference on Computer Vision and Pattern Recognition (2016), pp. 770–778.

66. C. H. Shen and H. H. Chen, “Robust focus measure for low-contrast images,” in Digest of Technical Papers International Conference on Consumer Electronics, 69–70 (2006).

67. N. Ji, “Adaptive optical fluorescence microscopy,” Nat. Methods 14(4), 374–380 (2017). [CrossRef]  
