Abstract
Compared to point-scanning multiphoton microscopy, line-scanning temporal focusing microscopy (LTFM) achieves high imaging speed while maintaining tight axial confinement. However, owing to its wide-field detection mode, LTFM suffers from shallow penetration depth as a result of the crosstalk induced by tissue scattering. In contrast to spatial filtering based on confocal slit detection, here we propose extended detection LTFM (ED-LTFM), the first wide-field two-photon imaging technique that extracts signals from scattered photons and thus effectively extends the imaging depth. By recording a succession of line-shaped excitation signals in 2D and reconstructing the signal under Hessian regularization, we can push the depth limit of wide-field imaging in scattering tissues. We validate the concept with numerical simulations, and demonstrate the enhanced imaging depth in in vivo imaging of mouse brains.
© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
1. Introduction
Benefiting from its inherent advantages of deep penetration, 3D sectioning capability, and low phototoxicity, multiphoton microscopy (MPM) has found wide application in biomedical studies, including neuroscience and immunology [1,2]. In conventional MPM, a tight focus is formed, and multi-dimensional imaging is generally performed by scanning the focus. However, the inertia of mechanical scanners and the need to collect sufficient fluorescence for an adequate signal-to-noise ratio limit the imaging speed [3–5], which hampers the study of most biological dynamics [6]. Recently, temporal focusing microscopy (TFM) has been proposed to achieve wide-field imaging while simultaneously maintaining optical sectioning [7,8]: by introducing angular dispersion to the excitation femtosecond pulses with a dispersive component, a spatiotemporal focus is formed where the different frequency components overlap at the focal plane of the objective lens. Progress in TFM has demonstrated confinement of two-photon wide-field excitation with decent axial resolution [9]. Compared with conventional point-scanning MPM, TFM enables high-speed imaging by parallel excitation [10,11]. Generally, there are two modalities: planar-excitation TFM [12–14] and line-scanning TFM (LTFM) [15,16]. In the former, a planar region of the sample is excited in parallel; in the latter, the sample is excited by a sweeping line. In comparison, the axial confinement is weak in planar-excitation TFM, while LTFM exhibits better axial confinement and scattering resistance [17].
The good balance between imaging speed and axial resolution makes LTFM ideal for various applications, including laser processing [18] and large-scale imaging of biological dynamics [19]. To exploit the potential of LTFM in deep tissue imaging, Rowlands et al. employed longer wavelengths to minimize the scattering experienced by the excitation beam [20]; we have proposed a focal modulation technique that modulates the excitation beam to eliminate the fluorescence background [21], and a hybrid spatio-spectral coherent adaptive compensation technique that corrects the aberrations experienced by the excitation beam [22].
Even though the various multi-photon excitation strategies mentioned above reduce the scattering experienced by the excitation beam in TFM, the crosstalk induced by tissue scattering of the emitted fluorescence remains unsolved. In particular, crosstalk between neighboring pixels in parallel readout via 2D sensors, such as sCMOS or EMCCD cameras, limits the signal-to-noise ratio (SNR) and the imaging depth in LTFM. To this end, confocal slit detection has recently been proposed [23,24]; it exploits the same principle as confocal microscopy, with a confocal slit conjugated to the line-shaped excitation for spatial filtering of the scattered fluorescence. In practice, a “virtual” confocal slit can be realized by setting the readout of an sCMOS camera to “rolling shutter” mode, which filters out the crosstalk in the direction orthogonal to the line. Such a technique effectively rejects the scattering-induced crosstalk between excitation lines, but it fails to reject the crosstalk along the excitation line. Moreover, in confocal slit detection, most of the fluorescence signal is lost when the scattering is severe.
Here we propose extended detection LTFM (ED-LTFM), a technique that maintains signal contrast and resists scattering-induced noise in deep tissue LTFM. A 2D fluorescence image is captured at each line-shaped excitation position, so that the signals, including the scattered ones, are fully recorded. Computational reconstruction is then performed to recover the underlying signal. Moreover, we incorporate Hessian regularization into the deconvolution for the first time, which ensures smooth transitions in the reconstructed images and thus reduces the artifacts induced by low SNR [25]. We demonstrate the enhanced performance of ED-LTFM in in vivo deep imaging of neurons in Thy1-YFP mouse brains and in dynamic imaging of microglia in CX3CR1-GFP mouse brains.
2. Imaging modeling
As shown in Fig. 1(a), we denote one slice of the 3D sample as s(x, y), where (x, y) are the lateral coordinates. The sample s(x, y) is excited line by line by steering the temporally focused laser line (oriented along the x-axis) across the sample along the y-axis. In conventional LTFM, the camera shutter stays open while the line-shaped excitation beam sweeps the sample from one end to the other. At each excitation position, the temporally focused laser line excites the sample, and the emitted fluorescence passes through the sample and the optical elements before being recorded by the camera. Unfortunately, the emitted fluorescence suffers from tissue scattering, and photons from different excitation positions mix on the sensor plane, as shown in Fig. 1(b). Consequently, the captured signal in wide-field detection (WD) LTFM can be written as

I_WD(x, y) = Σ_{y_s} ∫ s(x′, y_s) h(x − x′, y − y_s) dx′ = (s ∗ h)(x, y),  (1)

where h is the point spread function (PSF) of the system and y_s is the location of the line-shaped excitation beam. The captured image is susceptible to serious crosstalk along both the x- and y-axes if h is broadened by tissue scattering. For confocal slit detection (CSD), a detection slit is adopted to block the scattered photons. The captured signal in CSD can then be written as

I_CSD(x, y_s) = ∫ s(x′, y_s) h(x − x′, 0) dx′.  (2)
Even though the crosstalk along the y-axis can be effectively reduced by confocal slit detection, the crosstalk along the x-axis remains, as shown in Fig. 1(c). Moreover, if scattering enlarges the PSF h, the confocal slit cuts off the majority of the signal and degrades the final imaging SNR. To suppress the effect of scattering along both the x- and y-axes, we propose the ED-LTFM method, which fully utilizes the crosstalk information to recover the original signals.
More specifically, for each imaging depth, ED-LTFM acquires an image stack instead of a single image:

I_ED(x, y; y_s) = ∫ s(x′, y_s) h(x − x′, y − y_s) dx′.  (3)
In other words, I_ED(x, y; y_s) is the set of 2D images excited by the laser line at each excitation position y_s, as shown in Fig. 1(d). To recover the image s from the captured I_ED, we propose the following optimization problem:

ŝ = argmin_s ‖I_ED − A(s)‖₂² + R_H(s),  (4)

where A(·) denotes the linear forward model of Eq. (3), and

R_H(s) = λ₁‖∂²s/∂x²‖₁ + λ₁‖∂²s/∂y²‖₁ + λ₂‖∂²s/(∂x∂y)‖₁,  (5)

which is the Hessian regularization term. Here, λ₁ and λ₂ are weight parameters, and the subscripts 1 and 2 on the norms denote the L1 norm and the L2 norm. Note that our model assumes shift-invariance of the PSF, which is not obvious considering the complexity of tissue scattering. However, as presented in Section 5.3.1, we validate that in deep tissue imaging the scattering PSF is highly similar across the whole field of view (~80 μm), so the shift-invariance assumption is feasible.
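The three detection schemes can be compared numerically. Below is a minimal numpy sketch of the forward models, Eqs. (1) and (2) and the ED image stack: a single excited line is blurred by a shift-invariant scattering PSF, the WD image integrates all scan positions, and the CSD image keeps only the conjugate slit row. All function names are ours, and circular boundary conditions are assumed for simplicity.

```python
import numpy as np

def conv2_circ(a, k):
    # Circular 2D convolution via FFT; the kernel k has the same shape
    # as a and is centered at (rows//2, cols//2).
    K = np.fft.fft2(np.fft.ifftshift(k))
    return np.real(np.fft.ifft2(np.fft.fft2(a) * K))

def simulate_detection(sample, psf):
    """Toy WD / CSD / ED measurements from a 2D sample and a
    shift-invariant scattering PSF. The excitation line is one row,
    scanned over all rows (scan position y_s)."""
    ny, nx = sample.shape
    ed = np.zeros((ny, ny, nx))
    for ys in range(ny):
        line = np.zeros_like(sample)
        line[ys] = sample[ys]                 # only the excited line fluoresces
        ed[ys] = conv2_circ(line, psf)        # scattered 2D image of that line
    wd = ed.sum(axis=0)                       # shutter open for the whole sweep
    csd = ed[np.arange(ny), np.arange(ny)]    # keep only the conjugate slit row
    return wd, csd, ed
```

With a delta-function PSF (no scattering) all three schemes return the sample unchanged; with a broad PSF, the WD image shows 2D crosstalk while the CSD image loses the scattered photons that the ED stack still records.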
3. Computational reconstruction
3.1 Fitting of scattering PSF
To solve Eq. (4), we first need to determine the PSF h through curve fitting. Numerical simulations have shown that the pattern induced by tissue scattering is circularly symmetric [26,27]. Furthermore, Henyey and Greenstein introduced a scattering function that describes the scattering probability as a function of the scattering angle [28]:

p(θ) = (1/4π) (1 − g²) / (1 + g² − 2g cos θ)^{3/2},  (6)

in which θ and g are the scattering angle and the anisotropy parameter, respectively. For PSF fitting, Eq. (6) has to be transformed into the imaging coordinate system. We replace the angular coordinate θ with the lateral coordinates (x, y) by introducing a scaling parameter α, so that Eq. (6) can be reformulated as

h(x, y) = M (1 − g²) / (1 + g² − 2g cos(α√(x² + y²)))^{3/2},  (7)

where M is a normalizing constant that ensures the total intensity of the PSF is 1. Note that α can be expressed through the full width at half maximum (FWHM) of the PSF as

α = (2/FWHM) arccos[(1 + g² − 2^{2/3}(1 − g)²)/(2g)].  (8)

The modelled PSF with parameters (g, FWHM) can then be fitted to the captured point-source-like signals. The root mean square error (RMSE) is chosen as the fitting metric. The best parameter combination is then used to generate the scattering PSF for the subsequent deconvolution process.
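As a sanity check on the parametrization, the spatial H-G model based on Eq. (6) can be evaluated directly; the sketch below (peak-normalized rather than energy-normalized, and with a FWHM-to-α mapping derived from the half-maximum condition, so an assumption on our part) should drop to exactly half its peak at a radius of FWHM/2.

```python
import numpy as np

def hg_psf(r, g, fwhm):
    """Peak-normalized Henyey-Greenstein-shaped radial PSF.
    alpha ties the radius to the H-G angle so that the profile has the
    requested FWHM. Intended for forward scattering (g close to 1) and
    alpha * r <= pi, where the profile is monotonically decreasing."""
    alpha = (2.0 / fwhm) * np.arccos(
        (1 + g**2 - 2**(2.0 / 3.0) * (1 - g)**2) / (2 * g))
    prof = (1 - g**2) / (1 + g**2
                         - 2 * g * np.cos(alpha * np.asarray(r, float)))**1.5
    peak = (1 - g**2) / (1 - g)**3   # profile value at r = 0
    return prof / peak
```

An energy-normalizing constant (M in the text) would be obtained by dividing by the discrete sum of the sampled profile instead of the peak.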
We demonstrate the aforementioned process in Fig. 2. Similar to the guide-star techniques employed in adaptive optics [29,30], we capture several small structures located ~140 μm under the dura in the mouse brain in vivo and treat them as point sources. We then fit them with the proposed H-G function. Specifically, we first measure the dark noise of the camera (~103 counts) and subtract this value from the raw data. We then fit the data with the proposed H-G function; one of the fitting results is shown in Fig. 2(b).
We also show that the H-G function fits better than the Lorentzian function (Fig. 2(c)) and the Gaussian function (Fig. 2(d)) on the same raw data. Furthermore, we plot the RMSE fitting errors of 25 different point sources in Fig. 2(a), which shows that the H-G function provides a statistically significant improvement. We observe that the distribution of the PSF is heavy-tailed due to strong tissue scattering (raw pixels in Figs. 2(b)–2(d)), so the PSF model needs to parameterize both the FWHM and the tail distribution. However, the Lorentzian and Gaussian functions are each controlled by only one shape parameter, which makes it hard for them to fully characterize the scattering PSF in deep tissue imaging. This is why the Lorentzian and Gaussian fits show obvious errors in the “hump” part of Figs. 2(c) and 2(d). The H-G function, on the other hand, supports more flexible control of the scattering PSF shape and thus fits better. All PSFs in the following sections are modelled by the H-G function using the same procedure.
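The fitting step itself can be sketched as a brute-force search over (g, FWHM) with RMSE as the metric; this is a minimal illustration under our assumptions (peak-normalized profiles, hand-picked grids), not the paper's exact routine.

```python
import numpy as np

def hg_model(r, g, fwhm):
    # Peak-normalized H-G radial profile (same model as the text,
    # with our FWHM-based parametrization of alpha).
    alpha = (2.0 / fwhm) * np.arccos(
        (1 + g**2 - 2**(2.0 / 3.0) * (1 - g)**2) / (2 * g))
    prof = (1 - g**2) / (1 + g**2 - 2 * g * np.cos(alpha * r))**1.5
    return prof / ((1 - g**2) / (1 - g)**3)

def fit_hg(radii, profile, g_grid, fwhm_grid):
    """Grid-search fit of (g, FWHM) to a background-subtracted,
    peak-normalized radial profile; RMSE is the fitting metric."""
    best_g, best_fwhm, best_rmse = None, None, np.inf
    for g in g_grid:
        for fwhm in fwhm_grid:
            rmse = np.sqrt(np.mean((hg_model(radii, g, fwhm) - profile)**2))
            if rmse < best_rmse:
                best_g, best_fwhm, best_rmse = g, fwhm, rmse
    return best_g, best_fwhm, best_rmse
```

In practice one would refine the grid around the best coarse match, or hand the same residual to a continuous least-squares optimizer.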
3.2 Hessian regularized hybrid-deconvolution
In this section, we formulate the deconvolution algorithm to solve the optimization problem described in Eq. (4).
Firstly, we note

S(x, y, z) = s(x, z) δ(y − z)  (9)

and

H(x, y, z) = h(x, y) δ(z).  (10)

We can rewrite Eq. (4) as

Ŝ = argmin_S ‖I_ED − H ∗ S‖₂² + R_H(S),  (11)

where ∗ denotes 3D convolution. We adopt the alternating direction method of multipliers (ADMM) [31] to solve the above problem. First, three new variables b₁, b₂, b₃ are introduced and the equivalent minimization problem becomes

min_{S, b₁, b₂, b₃} ‖I_ED − H ∗ S‖₂² + λ₁‖b₁‖₁ + λ₁‖b₂‖₁ + λ₂‖b₃‖₁  subject to  bᵢ = Dᵢ S, i = 1, 2, 3,  (12)

where D₁ = ∂²/∂x², D₂ = ∂²/∂y², and D₃ = ∂²/(∂x∂y). After that, the augmented Lagrangian can be written as

L(S, bᵢ, vᵢ) = ‖I_ED − H ∗ S‖₂² + λ₁‖b₁‖₁ + λ₁‖b₂‖₁ + λ₂‖b₃‖₁ + (μ/2) Σᵢ ‖Dᵢ S − bᵢ + vᵢ‖₂²,  (13)

in which μ is the penalty parameter and the vᵢ are the scaled Lagrange multipliers. The problem can then be solved in a three-step iterative manner:

b₁ᵏ⁺¹ = argmin_{b₁} λ₁‖b₁‖₁ + (μ/2)‖D₁ Sᵏ − b₁ + v₁ᵏ‖₂²,  (14)

Sᵏ⁺¹ = argmin_S ‖I_ED − H ∗ S‖₂² + (μ/2) Σᵢ ‖Dᵢ S − bᵢᵏ⁺¹ + vᵢᵏ‖₂²,  (15)

v₁ᵏ⁺¹ = v₁ᵏ + D₁ Sᵏ⁺¹ − b₁ᵏ⁺¹,  (16)

and b₂, b₃ (with their multipliers v₂, v₃) are updated in the same way as b₁ and v₁.
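The auxiliary-variable and multiplier updates have closed forms: the ℓ1-penalized step reduces to element-wise soft-thresholding, and the multiplier step is plain dual ascent on the constraint. A minimal sketch (scaled-dual ADMM convention; names are ours):

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t*||.||_1 -- the closed-form solution of
    the auxiliary-variable (b_i) update."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def aux_and_multiplier_update(DiS, v, lam, mu):
    """One b_i / v_i update for a single Hessian component D_i S;
    the other components are updated identically."""
    b_new = soft_threshold(DiS + v, lam / mu)  # shrink toward sparsity
    v_new = v + DiS - b_new                    # dual ascent on b_i = D_i S
    return b_new, v_new
```

The threshold λ/μ makes explicit how the regularization weight and the penalty parameter jointly control how aggressively small Hessian components are zeroed out.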
We then detail the process of updating S and demonstrate that Eq. (15) has an analytical solution. Using Parseval's identity [32], the problem in Eq. (15) can be rewritten in the Fourier domain:

S̃ᵏ⁺¹ = argmin_{S̃} ‖Ĩ_ED − H̃ ⊙ S̃‖₂² + (μ/2) Σᵢ ‖D̃ᵢ ⊙ S̃ − b̃ᵢᵏ⁺¹ + ṽᵢᵏ‖₂²,  (17)

in which the symbols with a tilde represent the Fourier transforms of the original signals and ⊙ is the element-wise product. Since D₁ = ∂²/∂x², D₂ = ∂²/∂y², and D₃ = ∂²/(∂x∂y), the transfer functions D̃ᵢ are known analytically. In this way, the right-hand side of the above equation can be viewed as a quadratic function of S̃, and the analytical solution is

S̃ᵏ⁺¹ = [conj(H̃) ⊙ Ĩ_ED + (μ/2) Σᵢ conj(D̃ᵢ) ⊙ (b̃ᵢᵏ⁺¹ − ṽᵢᵏ)] / [|H̃|² + (μ/2) Σᵢ |D̃ᵢ|²],  (18)

where conj(·) denotes complex conjugation and the division is element-wise. Finally, Sᵏ⁺¹ is obtained by taking the inverse Fourier transform of S̃ᵏ⁺¹. The reconstruction software is available from Ref. [33]. Note that it is necessary to choose proper weight and penalty parameters λ₁, λ₂, and μ, so in the following experiments we use the grid-search technique to determine the best parameter combination with image sharpness as the metric [34]. All computational reconstructions are performed on a personal computer with an Intel Core i5-7500 CPU and 16.0 GB RAM. One iteration of Eqs. (14)–(16) takes ~0.6 seconds for a 650 × 400-pixel image, and the algorithm takes tens of iterations to converge.
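The analytical Fourier-domain solution of Eq. (15) amounts to a regularized Wiener-type division. A minimal numpy sketch (circular boundary conditions assumed; the transfer functions are passed in already in the Fourier domain, and the argument names are ours):

```python
import numpy as np

def s_update_fourier(I, H_f, D_fs, bs, vs, mu):
    """Closed-form least-squares update of S in the Fourier domain.
    I        : measured data (real space)
    H_f      : transfer function of the PSF (Fourier domain)
    D_fs     : list of derivative-operator transfer functions
    bs, vs   : current auxiliary variables and multipliers (real space)
    mu       : ADMM penalty parameter"""
    num = np.conj(H_f) * np.fft.fftn(I)
    den = np.abs(H_f)**2
    for D_f, b, v in zip(D_fs, bs, vs):
        num += (mu / 2) * np.conj(D_f) * np.fft.fftn(b - v)
        den += (mu / 2) * np.abs(D_f)**2
    return np.real(np.fft.ifftn(num / den))
```

With a unit transfer function and no regularization terms, the update degenerates to returning the data itself, which makes the expression easy to verify.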
4. Simulation results
After building the algorithm, we evaluate the performance of the proposed methods via numerical simulations (Fig. 3). We show the images of microtubules obtained by WD, CSD, and ED in Figs. 3(b)–3(d), respectively. We split the original microtubule image into lines (one per excitation position) and convolve each with the PSF h to generate the ED stack. Gaussian white noise at a 40 dB peak signal-to-noise ratio (PSNR) is added to simulate real experiments. After generating the ED stack, the WD and CSD images are calculated via Eqs. (1) and (2), and the ED result is reconstructed via Eq. (4). We can see that the ED result shows the lowest background among the three.
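The noise model used in the simulation can be made concrete; under our reading of the 40 dB setting, PSNR = 20 log10(peak/σ), so the noise standard deviation is the image peak divided by 100:

```python
import numpy as np

def add_psnr_noise(img, psnr_db, rng=None):
    """Add Gaussian white noise at a prescribed peak signal-to-noise
    ratio, PSNR = 20*log10(peak / sigma)."""
    rng = np.random.default_rng(0) if rng is None else rng
    sigma = img.max() / 10.0**(psnr_db / 20.0)
    return img + rng.normal(0.0, sigma, img.shape)
```

Sweeping `psnr_db` downward reproduces the kind of noise-level study summarized in Fig. 3(h).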
We also apply deconvolution with Hessian regularization to the WD and CSD images to show that the proposed ED technique still performs best, as shown in Figs. 3(e) and 3(f). For WD, the same PSF as in ED is used; for CSD, 1D deconvolution is performed on each detected slit, since only the slit signal is recorded in CSD. The 1D PSF is formed by taking the central line of the fitted PSF. Comparing Figs. 3(b) and 3(e), the improvement from deconvolution is obvious; however, deconvolution still cannot handle the severe crosstalk in WD. While CSD also improves after deconvolution (Figs. 3(c) and 3(f)), the signal loss introduced by the confocal slit limits its final performance. The proposed ED technique, on the other hand, is insusceptible to both the crosstalk from tissue scattering and the signal loss of confocal-slit-based spatial filtering, and thus achieves the best performance after the same deconvolution process.
We quantitatively measure the widths of the retrieved microtubules achieved with these methods and find that the ED result resembles the ground truth most closely, which demonstrates the advantage of ED-LTFM under strong scattering and noisy conditions. We also label the structural similarity index (SSIM) of each method in Figs. 3(b)–3(f), a metric widely used to evaluate the similarity of reconstructed images to the ground truth [35]. The SSIM with ED is 2.1 times higher than with CSD and 15.5 times higher than with WD. In Fig. 3(h), we compare the reconstruction SSIM of all the methods under different noise levels. Using 0.5 as the SSIM threshold, ED-LTFM extends the tolerable noise range by ~8 dB compared to the second-best method (CSD with deconvolution), which clearly shows the advantage of our method.
5. Experiments
5.1 Optical configuration
Figure 4 shows the optical configuration of ED-LTFM. We use an 80 MHz, ∼120 fs laser (Chameleon Discovery, Coherent) for two-photon excitation at 920 nm, followed by an electro-optic modulator (M3202RM, Conoptics) to control the laser intensity. The laser beam is expanded to ~5 mm with a telescope (L1: f = 60 mm, L2: f = 150 mm) and then scanned in the vertical direction with a one-dimensional galvanometer (GVS211, Thorlabs). The beam is focused to a thin line on the surface of the diffraction grating (Edmund Optics, 830 lines/mm) with a cylindrical lens (f = 300 mm). The incident angle on the grating is ∼50° so that the central wavelength of the first-order diffracted beam is perpendicular to the grating surface. The spectrally spread pulse is collimated with a collimating lens (L3: f = 200 mm), so that the expanded beam fills the back pupil of the objective (25 × , 1.05 NA, water immersion, Olympus, XLPLN25XWMP2). A line-shaped laser focus, around 80 µm in length, is formed at the focal plane of the objective. An epi-fluorescence setup is built for image acquisition, including a dichroic mirror (DMSP750B, Thorlabs), a bandpass filter (E510/80, Chroma), a 200 mm tube lens (L4, TTL200-A, Thorlabs), and an sCMOS camera (Zyla 5.5 plus, Andor). Three-dimensional imaging is performed by axially translating the sample stage (M-VP-25XA-XYZL, Newport).
In wide-field detection, the camera shutter stays open while the line-shaped beam scans the sample, yielding the WD image. To capture the ED stack, we park the beam in the center of the field-of-view (FOV) and translate the sample stage to perform the 1D scan, which simplifies the experimental setup for extended detection and leaves the imaging field unlimited by the FOV of the objective. In this case, the camera captures a 650 × 200-pixel image at each stage position, and the total stack is formed by 400 captures. To mimic confocal slit detection, we use the central rows of the captured image stack to recover the CSD image; the ED image is calculated with the proposed reconstruction algorithm. For a fair comparison, the exposure time of each row is the same in all three cases (50 ms). Note that the excitation line is along the x-axis, consistent with the derivation above.
5.2 Images of fluorescent beads
We first demonstrate the enhanced performance of the proposed ED technique by imaging 3 µm fluorescent beads (T14792, Thermo Fisher) under a 300 µm scattering phantom (non-fluorescent beads embedded in a 2% agar solution). Figures 5(a)–5(c) show the maximum intensity projections (MIPs) along the z-axis (over a 10-μm-thick stack of x-y images) of the beads via WD, CSD, and ED, respectively. We can see that the beads are seriously blurred under WD; CSD effectively reduces the blurring along the y-axis compared with WD but helps less along the x-axis. ED, on the other hand, effectively reduces the blurring along both axes. We further quantify the blurring reduction of the three methods by measuring the captured bead profiles along the x- and y-axes in Figs. 5(d) and 5(e), which strongly suggests that the proposed ED technique is effective in reducing scattering along both axes.
5.3 In vivo imaging of Thy1-YFP mice
Then we demonstrate the performance of ED-LTFM in in vivo imaging of living Thy1-YFP mice (JAX No. 003782). After craniotomy, we conduct acute imaging of neurons in the cerebral cortex of the living mice under anesthesia [36] (all procedures involving mice were approved by the Animal Care and Use Committees of Tsinghua University).
5.3.1 Validation of the PSF invariance
So far, we have assumed shift-invariance of the scattering PSF in the imaging model and reconstruction, which is not obvious considering that tissue structures and properties are complicated. To validate this assumption, we fit the PSF with the proposed H-G model at different locations, as shown in Fig. 6(a). We choose 3 targets for the fitting, draw the intensity distribution around each target, and search for the best fitting parameters, as shown in Figs. 6(b)–6(d). The fitted parameters vary little across locations, as shown in Fig. 6(e). These results suggest that the PSF is nearly shift-invariant across the whole FOV, so the deconvolution process of our proposed algorithm is feasible. We attribute the observed shift-invariance of the PSF to two factors: 1) although scattering properties vary across different regions of the mouse brain [37], they are highly similar locally [38]; 2) when the imaging depth exceeds the mean free path (<50 μm for emitted photons in our case [39,40]), the emitted fluorescence photons experience multiple scattering, which leads to similar scattering PSFs across the FOV.
5.3.2 ED outperforms WD and CSD in neuroimaging
We further compare ED, WD, and CSD in in vivo neuron imaging. In Figs. 7(a)–7(c), we show the maximum intensity projection (MIP) of neurons along the z-axis of a 13-µm-thick image stack (80–92 μm under the dura) acquired via WD, CSD, and ED, respectively. For a closer comparison, we show zoomed-in views of the captured images in Figs. 7(d)–7(f). As expected, the dendrites are blurred in WD due to the strong scattering, while the CSD technique helps eliminate the scattering-induced crosstalk along the y-axis, and ED effectively eliminates the crosstalk along both the x- and y-axes. In Fig. 7(j), we show the signal improvement of ED over WD and CSD by quantitatively comparing the intensity along the dashed lines in Figs. 7(d)–7(f).
In Fig. 7(k), we also show that the proposed ED technique improves the signal contrast along the z-axis, using the MIPs along the y-axis of a 10-μm-thick x-z image stack [labeled by the dashed boxes in Figs. 7(a)–7(c)] shown in Figs. 7(g)–7(i). The improvement brought by ED-LTFM is obvious.
5.3.3 Hessian regularization in ED helps to reduce reconstruction artifacts
We also verify the improvement brought by Hessian regularization. For comparison, we perform ED with and without Hessian regularization in the deconvolution step, as shown in Fig. 8. We can see that without regularization, the ED deconvolution process amplifies both the noise and the signal (Figs. 8(a) and 8(b)). Checking the details, we find that ED without Hessian regularization generates more structures than ED with Hessian regularization (Figs. 8(d) and 8(e)). We further check the result from CSD (which is free from any post-processing) and find that it matches the result from ED with Hessian regularization well (Figs. 8(d) and 8(f)). In other words, ED without Hessian regularization generates artifacts in the retrieved images. The introduced Hessian regularization helps to reduce artifacts and maintain the reconstruction fidelity.
5.4 In vivo imaging of CX3CR1-GFP mice
Finally, we demonstrate the performance of ED-LTFM in in vivo dynamic imaging of microglial cells in living CX3CR1-GFP mice (JAX No. 005582). In Figs. 9(a)–9(c), we show the MIPs along the z-axis of a 7-µm-thick image stack acquired via WD, CSD, and ED, respectively, at depths of 172–178 μm under the dura. To the best of our knowledge, such a penetration depth has not been demonstrated in vivo with LTFM before. We can see that the background noise in ED is suppressed significantly compared with both WD and CSD. Zooming into a small region of Figs. 9(a)–9(c), we can see that CSD eliminates the crosstalk along the y-axis effectively but fails along the x-axis, while fine processes that cannot even be recognized in the original WD images are recovered effectively by ED. We also image the "non-resting" dynamic movement of microglial cells over a total of 16 minutes within a depth range of 30 μm. In Figs. 9(g)–9(i), we can see that, compared with the results from WD and CSD, ED records the movement of the microglial processes with fine detail.
6. Conclusion
In summary, we propose a novel technique for overcoming tissue scattering in wide-field deep imaging through extended detection and computational reconstruction. Through both numerical simulations and in vivo imaging experiments, we have demonstrated that the proposed ED-LTFM can effectively enhance the penetration depth. Considering that the line rate of our sCMOS camera is about 2.2 × 10^5 Hz, ED-LTFM can enable a 55 Hz imaging rate (faster than the ~30 Hz of typical point-scanning two-photon microscopy with resonant galvo scanners) for a 650 × 400-pixel capture with a 10-pixel-wide extended detection; in that case, a femtosecond laser with a low repetition rate but high pulse energy should be used to ensure the SNR. Besides, since our PSF fitting technique relies on the detection of small sources, ED-LTFM works best for sparsely labelled tissues (in some cases, such as the microglial cells in Fig. 9, the targets are inherently sparse). It is worth noting that ED-LTFM can also be integrated with other strategies, such as three-photon LTFM [20], adaptive-optics-based LTFM [22], and multiphoton multispot systems [24], to push the imaging depth limit in scattering tissues even further.
Funding.
National Natural Science Foundation of China (NSFC) (61831014, 61771287, 61327902, 61741116, and 61722209).
Acknowledgments
YZ thanks Yingjun Tang for help with sample preparation. LK acknowledges the support of Tsinghua University and the "Thousand Talents Plan" Youth Program.
References
1. W. Denk, J. H. Strickler, and W. W. Webb, “Two-photon laser scanning fluorescence microscopy,” Science 248(4951), 73–76 (1990). [CrossRef] [PubMed]
2. J. Bewersdorf, R. Pick, and S. W. Hell, “Multifocal multiphoton microscopy,” Opt. Lett. 23(9), 655–657 (1998). [CrossRef] [PubMed]
3. E. J. Botcherby, C. W. Smith, M. M. Kohl, D. Débarre, M. J. Booth, R. Juškaitis, O. Paulsen, and T. Wilson, “Aberration-free three-dimensional multiphoton imaging of neuronal activity at kHz rates,” Proc. Natl. Acad. Sci. U.S.A. 109(8), 2919–2924 (2012). [CrossRef] [PubMed]
4. L. Kong, J. Tang, J. P. Little, Y. Yu, T. Lämmermann, C. P. Lin, R. N. Germain, and M. Cui, “Continuous volumetric imaging via an optical phase-locked ultrasound lens,” Nat. Methods 12(8), 759–762 (2015). [CrossRef] [PubMed]
5. N. Ji, J. Freeman, and S. L. Smith, “Technologies for imaging neural activity in large volumes,” Nat. Neurosci. 19(9), 1154–1164 (2016). [CrossRef] [PubMed]
6. W. Yang, J. E. Miller, L. Carrillo-Reid, E. Pnevmatikakis, L. Paninski, R. Yuste, and D. S. Peterka, “Simultaneous multi-plane imaging of neural circuits,” Neuron 89(2), 269–284 (2016). [CrossRef] [PubMed]
7. D. Oron and Y. Silberberg, “Spatiotemporal coherent control using shaped, temporally focused pulses,” Opt. Express 13(24), 9903–9908 (2005). [CrossRef] [PubMed]
8. G. Zhu, J. van Howe, M. Durst, W. Zipfel, and C. Xu, “Simultaneous spatial and temporal focusing of femtosecond pulses,” Opt. Express 13(6), 2153–2159 (2005). [CrossRef] [PubMed]
9. M. E. Durst, G. Zhu, and C. Xu, “Simultaneous Spatial and Temporal Focusing in Nonlinear Microscopy,” Opt. Commun. 281(7), 1796–1805 (2008). [CrossRef] [PubMed]
10. R. Prevedel, A. J. Verhoef, A. J. Pernía-Andrade, S. Weisenburger, B. S. Huang, T. Nöbauer, A. Fernández, J. E. Delcour, P. Golshani, A. Baltuska, and A. Vaziri, “Fast volumetric calcium imaging across multiple cortical layers using sculpted light,” Nat. Methods 13(12), 1021–1028 (2016). [CrossRef] [PubMed]
11. Y. Meng, W. Lin, C. Li, and S. C. Chen, “Fast two-snapshot structured illumination for temporal focusing microscopy with enhanced axial resolution,” Opt. Express 25(19), 23109–23121 (2017). [CrossRef] [PubMed]
12. J. N. Yih, Y. Y. Hu, Y. D. Sie, L. C. Cheng, C. H. Lien, and S. J. Chen, “Temporal focusing-based multiphoton excitation microscopy via digital micromirror device,” Opt. Lett. 39(11), 3134–3137 (2014). [CrossRef] [PubMed]
13. E. Papagiakoumou, A. Bègue, B. Leshem, O. Schwartz, B. M. Stell, J. Bradley, D. Oron, and V. Emiliani, “Functional patterned multiphoton excitation deep inside scattering tissue,” Nat. Photonics 7(4), 274–278 (2013). [CrossRef]
14. A. Escobet-Montalbán, R. Spesyvtsev, M. Chen, W. A. Saber, M. Andrews, C. S. Herrington, M. Mazilu, and K. Dholakia, “Wide-field multiphoton imaging through scattering media without correction,” Sci. Adv. 4, (2018).
15. Y. Xue, K. P. Berry, J. R. Boivin, D. Wadduwage, E. Nedivi, and P. T. C. So, “Scattering reduction by structured light illumination in line-scanning temporal focusing microscopy,” Biomed. Opt. Express 9(11), 5654–5666 (2018). [CrossRef] [PubMed]
16. E. Tal, D. Oron, and Y. Silberberg, “Improved depth resolution in video-rate line-scanning multiphoton microscopy using temporal focusing,” Opt. Lett. 30(13), 1686–1688 (2005). [CrossRef] [PubMed]
17. H. Dana, N. Kruger, A. Ellman, and S. Shoham, “Line temporal focusing characteristics in transparent and scattering media,” Opt. Express 21(5), 5677–5687 (2013). [CrossRef] [PubMed]
18. B. Sun, P. S. Salter, C. Roider, A. Jesacher, J. Strauss, J. Heberle, M. Schmidt, and M. J. Booth, “Four-dimensional light shaping: manipulating ultrafast spatiotemporal foci in space and time,” Light Sci. Appl. 7(1), 17117 (2018). [CrossRef] [PubMed]
19. H. Dana, A. Marom, S. Paluch, R. Dvorkin, I. Brosh, and S. Shoham, “Hybrid multiphoton volumetric functional imaging of large-scale bioengineered neuronal networks,” Nat. Commun. 5(1), 3997 (2014). [CrossRef] [PubMed]
20. C. J. Rowlands, D. Park, O. T. Bruns, K. D. Piatkevich, D. Fukumura, R. K. Jain, M. G. Bawendi, E. S. Boyden, and P. T. So, “Wide-field three-photon excitation in biological samples,” Light Sci. Appl. 6(5), e16255 (2017). [CrossRef] [PubMed]
21. Y. Zhang, L. Kong, H. Xie, X. Han, and Q. Dai, “Enhancing axial resolution and background rejection in line-scanning temporal focusing microscopy by focal modulation,” Opt. Express 26(17), 21518–21526 (2018). [CrossRef] [PubMed]
22. Y. Zhang, X. Li, H. Xie, L. Kong, and Q. Dai, “Hybrid spatio-spectral coherent adaptive compensation for line-scanning temporal focusing microscopy,” J. Phys. D 52(2), 024001 (2019). [CrossRef]
23. P. Rupprecht, R. Prevedel, F. Groessl, W. E. Haubensak, and A. Vaziri, “Optimizing and extending light-sculpting microscopy for fast functional imaging in neuroscience,” Biomed. Opt. Express 6(2), 353–368 (2015). [CrossRef] [PubMed]
24. M.-P. Adam, M. C. Müllenbroich, A. P. Di Giovanna, D. Alfieri, L. Silvestri, L. Sacconi, and F. S. Pavone, “Confocal multispot microscope for fast and deep imaging in semicleared tissues,” J. Biomed. Opt. 23(2), 1–4 (2018). [CrossRef] [PubMed]
25. S. Lefkimmiatis, A. Bourquard, and M. Unser, “Hessian-based norm regularization for image restoration with biomedical applications,” IEEE Trans. Image Process. 21(3), 983–995 (2012). [CrossRef] [PubMed]
26. B. G. Saar, C. W. Freudiger, J. Reichman, C. M. Stanley, G. R. Holtom, and X. S. Xie, “Video-rate molecular imaging in vivo with stimulated Raman scattering,” Science 330(6009), 1368–1370 (2010). [CrossRef] [PubMed]
27. P. Rupprecht, R. Prevedel, F. Groessl, W. E. Haubensak, and A. Vaziri, “Optimizing and extending light-sculpting microscopy for fast functional imaging in neuroscience,” Biomed. Opt. Express 6(2), 353–368 (2015). [CrossRef] [PubMed]
28. L. G. Henyey and J. L. Greenstein, “Diffuse radiation in the galaxy,” Astrophys. J. 93, 70–83 (1941). [CrossRef]
29. J. Liang, D. R. Williams, and D. T. Miller, “Supernormal vision and high-resolution retinal imaging through adaptive optics,” J. Opt. Soc. Am. A 14(11), 2884–2892 (1997). [CrossRef] [PubMed]
30. K. Wang, D. E. Milkie, A. Saxena, P. Engerer, T. Misgeld, M. E. Bronner, J. Mumm, and E. Betzig, “Rapid adaptive optical recovery of optimal resolution over large volumes,” Nat. Methods 11(6), 625–628 (2014). [CrossRef] [PubMed]
31. S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Foundations and Trends in Machine learning 3, 1–122 (2011).
32. M.-A. Parseval, “Mémoire sur les séries et sur l’intégration complète d’une équation aux différences partielles linéaires du second ordre, à coefficients constants,” Mém. prés. par divers savants. Acad. des Sciences, Paris 1(1), 638–648 (1806).
33. T. Zhou, “Source Code for ED_LTFM” (2019), https://github.com/rickyim/ED_LTFM.
34. D. Burke, B. Patton, F. Huang, J. Bewersdorf, and M. J. Booth, “Adaptive optics correction of specimen-induced aberrations in single-molecule switching microscopy,” Optica 2(2), 177–185 (2015). [CrossRef]
35. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13(4), 600–612 (2004). [CrossRef] [PubMed]
36. L. Kong and M. Cui, “In vivo fluorescence microscopy via iterative multi-photon adaptive compensation technique,” Opt. Express 22(20), 23786–23794 (2014). [CrossRef] [PubMed]
37. S. I. Al-Juboori, A. Dondzillo, E. A. Stubblefield, G. Felsen, T. C. Lei, and A. Klug, “Light scattering properties vary across different regions of the adult mouse brain,” PLoS One 8(7), e67626 (2013). [CrossRef] [PubMed]
38. C.-Y. Dong, B. Yu, L. L. Hsu, P. D. Kaplan, D. Blankschstein, R. Langer, and P. T. So, “Applications of two-photon fluorescence microscopy in deep-tissue imaging,” in Optical Sensing, Imaging, and Manipulation for Biological and Biomedical Applications, (International Society for Optics and Photonics, 2000), 105–115.
39. F. Helmchen and W. Denk, “Deep tissue two-photon microscopy,” Nat. Methods 2(12), 932–940 (2005). [CrossRef] [PubMed]
40. D. M. Chudakov, M. V. Matz, S. Lukyanov, and K. A. Lukyanov, “Fluorescent proteins and their applications in imaging living cells and tissues,” Physiol. Rev. 90(3), 1103–1163 (2010). [CrossRef] [PubMed]