Optica Publishing Group

Overcoming tissue scattering in wide-field two-photon imaging by extended detection and computational reconstruction

Open Access

Abstract

Compared to point-scanning multiphoton microscopy, line-scanning temporal focusing microscopy (LTFM) offers competitive imaging speed while maintaining tight axial confinement. However, owing to its wide-field detection mode, LTFM suffers from shallow penetration depth as a result of the crosstalk induced by tissue scattering. In contrast to spatial filtering based on confocal slit detection, here we propose extended detection LTFM (ED-LTFM), the first wide-field two-photon imaging technique to extract signals from scattered photons and thus effectively extend the imaging depth. By recording a succession of line-shaped excitation signals in 2D and reconstructing the signals under Hessian regularization, we push the depth limit of wide-field imaging in scattering tissues. We validate the concept with numerical simulations, and demonstrate the enhanced imaging depth in in vivo imaging of mouse brains.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Benefiting from its inherent advantages of deep penetration, 3D sectioning capability, and low phototoxicity, multiphoton microscopy (MPM) has found wide application in biomedical studies, including neuroscience and immunology [1,2]. In conventional MPM, a tight focus is formed, and multi-dimensional imaging is generally performed by scanning this focus. However, the inertia of mechanical scanners and the need to collect sufficient fluorescent signal for an adequate signal-to-noise ratio limit the imaging speed [3–5], which hampers the study of most biological dynamics [6]. Recently, temporal focusing microscopy (TFM) has been proposed to achieve wide-field imaging while maintaining optical sectioning [7,8]: by introducing angular dispersion to the excitation femtosecond pulses with a dispersive component, a spatiotemporal focus is formed where the different frequency components overlap at the focal plane of the objective lens. Recent progress in TFM has demonstrated the confinement of two-photon wide-field excitation with decent axial resolution [9]. Compared with conventional point-scanning MPM, TFM enables high-speed imaging through parallel excitation [10,11]. Generally, there are two modalities: planar-excitation TFM [12–14] and line-scanning TFM (LTFM) [15,16]. In the former, a planar region of the sample is excited in parallel; in the latter, the sample is excited by a sweeping line. In comparison, the axial confinement is weak in planar-excitation TFM, while LTFM exhibits better axial confinement and scattering resistance [17].

The good balance between imaging speed and axial resolution makes LTFM ideal for various applications, including laser processing [18] and large-scale imaging of biological dynamics [19]. To exploit the potential of LTFM in deep-tissue imaging, Rowland et al. have employed longer wavelengths to minimize the scattering experienced by the excitation beam [20]; we have proposed a focal modulation technique that modulates the excitation beam to eliminate the fluorescence background [21], and a hybrid spatio-spectral coherent adaptive compensation technique to correct the aberrations experienced by the excitation beam [22].

Even though the various multi-photon excitation strategies mentioned above reduce the effects of scattering on the excitation beam in TFM, the crosstalk induced by tissue scattering of the emitted fluorescence remains unsolved. Crosstalk between neighboring pixels in parallel readout via 2D sensors, such as sCMOS or EMCCD cameras, limits the signal-to-noise ratio (SNR) and the imaging depth of LTFM. To this end, confocal slit detection has recently been proposed [23,24], which exploits the same principle as confocal microscopy: a confocal slit conjugated to the line-shaped excitation spatially filters the scattered fluorescence. In practice, a “virtual” confocal slit can be realized by operating the sCMOS camera in “rolling shutter” readout mode, which filters out the crosstalk in the direction orthogonal to the line. Such a technique effectively resists the scattering-induced crosstalk between excitation lines, but it fails to resist the crosstalk along the excitation lines. Moreover, when scattering is severe, confocal slit detection rejects most of the fluorescent signal.

Here we propose extended detection LTFM (ED-LTFM), a technique that maintains signal contrast and resists scattering-induced noise in deep-tissue LTFM. A 2D fluorescent image is captured at each line-shaped excitation position, so that the signals, including the scattered signals, are fully recorded. Computational reconstruction is then performed to recover the signals. Moreover, we incorporate Hessian regularization into the deconvolution, for the first time, which ensures smooth transitions in the reconstructed images and thus reduces the artifacts induced by low SNR [25]. We demonstrate the enhanced performance of ED-LTFM in in vivo deep imaging of neurons in Thy1-YFP mouse brains and dynamic imaging of microglia in CX3CR1-GFP mouse brains.

2. Imaging modeling

As shown in Fig. 1(a), we denote one slice of the 3D sample as f(x,y), where (x,y) are the lateral coordinates. f(x,y) is excited line by line by steering the temporally focused laser line (oriented along the x-axis). In conventional LTFM, the camera shutter stays open while the line-shaped excitation beam sweeps the sample from one end to the other. At each excitation position, the temporally focused laser line excites the sample, and the emitted fluorescent signals pass through the sample and the optical elements before being recorded by the camera. Unfortunately, the emitted fluorescence suffers from tissue scattering, and photons from different excitation positions mix in the sensor plane, as shown in Fig. 1(b). Consequently, the captured signals in wide-field detection (WD) LTFM can be written as

$$p_{WD}(x,y)=\iiint h(x-x',\,y-e_y)\,f(x',y')\,\delta(y'-e_y)\,dx'\,dy'\,de_y \tag{1}$$
where h is the point spread function (PSF) of the system and e_y is the location of the line-shaped excitation beam. The captured image p_WD(x,y) is susceptible to serious crosstalk along both the x- and y-axes when h is strongly broadened by tissue scattering.
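To make the model concrete, here is a minimal numerical sketch (all values illustrative, not from the paper) showing that integrating Eq. (1) over the excitation positions reduces WD acquisition to a single 2D convolution of the sample with the scattering PSF:

```python
import numpy as np
from scipy.signal import fftconvolve

# Sparse toy sample f and an isotropic toy scattering PSF h (illustrative
# stand-ins; in the paper h is fitted from measured data).
f = np.zeros((64, 64))
f[20, 30] = 1.0
f[40, 10] = 0.5

yy, xx = np.mgrid[-15:16, -15:16]
h = np.exp(-0.3 * np.hypot(xx, yy))
h /= h.sum()

# Eq. (1): each line excitation e_y contributes h(x - x', y - e_y) f(x', e_y);
# summing over all e_y collapses the model to one 2D convolution h * f.
p_wd = np.zeros_like(f)
for ey in range(f.shape[0]):
    line = np.zeros_like(f)
    line[ey, :] = f[ey, :]                     # delta(y' - e_y) keeps one line
    p_wd += fftconvolve(line, h, mode="same")  # scattered emission of that line

assert np.allclose(p_wd, fftconvolve(f, h, mode="same"), atol=1e-12)
```

This is why WD is maximally vulnerable to crosstalk: the whole 2D PSF acts on every excited line.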


Fig. 1 Illustrations of the ED-LTFM method. (a) The scattering PSF h makes the excited line-shaped signals overlap along the y-axis (the scanning direction) and blur along the x-axis (the direction of the excitation line). (b) In conventional LTFM, line-shaped signals are integrated by the wide-field detection camera and thus overlap along both the x- and y-axes. (c) A confocal slit inserted before the sensor can reduce the crosstalk along the y-axis but not along the x-axis. (d) The extended detection (ED) technique records all of the signals, including the scattered signals, for subsequent computational reconstruction, which reduces crosstalk along both the x- and y-axes.


For confocal slit detection (CSD), a detection slit is adopted to block the scattered photons. The signals captured by CSD can then be written as

$$p_{CSD}(x,y)=\iint h(x-x',0)\,f(x',y')\,\delta(y-y')\,dx'\,dy' \tag{2}$$

Even though the crosstalk along the y-axis can be effectively reduced by confocal slit detection, the crosstalk along the x-axis remains, as shown in Fig. 1(c). Moreover, if scattering enlarges the PSF h, the confocal slit cuts off the majority of the signal and degrades the final imaging SNR. To suppress the effect of scattering along both axes, we propose the ED-LTFM method, which fully utilizes the crosstalk information to recover the original signals.
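A minimal sketch of the CSD model of Eq. (2), with an illustrative toy sample and PSF: each excited line is blurred only by the central cut h(x, 0) of the PSF, so crosstalk between lines vanishes while blur along the line remains:

```python
import numpy as np

# Toy sample with two point-like emitters (illustrative values only).
f = np.zeros((32, 32))
f[10, 8] = 1.0
f[22, 20] = 0.7

yy, xx = np.mgrid[-10:11, -10:11]
h = np.exp(-0.4 * np.hypot(xx, yy))
h /= h.sum()
h_line = h[h.shape[0] // 2, :]   # central cut h(x, 0) passed by the slit

# Eq. (2): per-line 1D convolution with the central PSF cut.
p_csd = np.array([np.convolve(row, h_line, mode="same") for row in f])

assert np.count_nonzero(p_csd[10]) > 1   # blur along x remains
assert np.count_nonzero(p_csd[11]) == 0  # no crosstalk into neighboring lines
```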

More specifically, for each imaging depth, ED-LTFM acquires an image stack rather than a single image:

$$p'(x,y,e_y)=\iint h(x-x',y)\,f(x',y')\,\delta(y'-e_y)\,dx'\,dy' \tag{3}$$
In other words, {p'(x,y,e_y)} is the set of images excited by the laser line at each excitation position, as shown in Fig. 1(d). To recover the image f(x,y) from the captured {p'(x,y,e_y)}, we propose the following optimization problem:
$$p_{ED}=\arg\min_f\left\{\mu\left\|\iint h(x-x',y)\,f(x',y')\,\delta(y'-e_y)\,dx'\,dy'-p'(x,y,e_y)\right\|_2^2+\lambda\,|R(f)|_1\right\} \tag{4}$$
where
$$R(f)=\iint\left(\left|\frac{\partial^2 f}{\partial x^2}\right|+\left|\frac{\partial^2 f}{\partial y^2}\right|+2\left|\frac{\partial^2 f}{\partial x\,\partial y}\right|\right)dx\,dy \tag{5}$$
which is the Hessian regularization term. Here, μ and λ are weighting parameters, and the subscripts 1 and 2 denote the L1 and L2 norms, respectively.

Note that our model assumes shift-invariance of the PSF, which is not obvious given the complexity of tissue scattering. However, as presented in Section 5.3.1, we validate that in deep-tissue imaging the scattering PSF is highly similar across the whole field of view (~80 μm), so the shift-invariance assumption is reasonable.

3. Computational reconstruction

3.1 Fitting of scattering PSF

To solve Eq. (4), we first need to obtain the PSF through curve fitting. Numerical simulations have shown that the pattern induced by tissue scattering is circularly symmetric [26,27]. Furthermore, Henyey and Greenstein introduced a scattering function that describes the scattering probability as a function of the scattering angle [28]

$$p(\theta\,|\,g)=\frac{1}{4\pi}\,\frac{1-g^2}{\left[1+g^2-2g\cos(\theta)\right]^{3/2}} \tag{6}$$
in which θ is the scattering angle and g is the anisotropy parameter.

For PSF fitting, Eq. (6) must be transformed into the imaging coordinate system. We replace the angular coordinate θ with the coordinates (x,y) by introducing a parameter α such that $\cos(\theta)=\alpha/\sqrt{x^2+y^2+\alpha^2}$. Eq. (6) can then be reformulated as

$$p(x,y\,|\,g,\alpha)=\frac{1}{M}\,\frac{1-g^2}{\left[1+g^2-\dfrac{2g\alpha}{\sqrt{x^2+y^2+\alpha^2}}\right]^{3/2}} \tag{7}$$
where M is a normalizing constant that ensures the total intensity of the PSF is 1. Note that α can be expressed in terms of the full width at half maximum (FWHM) of the PSF as

$$\alpha=\frac{\mathrm{FWHM}}{2\sqrt{\left(\dfrac{2g}{1+g^2-2^{2/3}(1-g)^2}\right)^2-1}} \tag{8}$$
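The expression for α above follows from the half-maximum condition of the reformulated H-G profile (a brief sketch of the algebra, supplied here for clarity). At the peak ($x=y=0$) we have $\cos\theta=1$, so the bracket equals $(1-g)^2$; the profile falls to half its peak value when

$$\left[1+g^2-2g\cos\theta_{1/2}\right]^{3/2}=2\,(1-g)^{3}\;\Longrightarrow\;\cos\theta_{1/2}=\frac{1+g^2-2^{2/3}(1-g)^2}{2g}.$$

Substituting $\cos\theta_{1/2}=\alpha/\sqrt{(\mathrm{FWHM}/2)^2+\alpha^2}$ and solving for $\alpha$ yields the relation above.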

The modelled PSF with parameters (α, g) can then be fitted to the captured point-source-like signals, with the root mean square error (RMSE) as the fitting metric. The best parameter combination is used to generate the scattering PSF for the subsequent deconvolution.
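As an illustrative sketch of this fitting procedure (synthetic data and a simple brute-force grid search standing in for the actual fitting routine; all parameter values are hypothetical):

```python
import numpy as np

def hg_psf(x, y, g, alpha):
    # H-G scattering PSF model in image coordinates, unit total intensity.
    cos_t = alpha / np.sqrt(x**2 + y**2 + alpha**2)
    p = (1 - g**2) / (1 + g**2 - 2 * g * cos_t) ** 1.5
    return p / p.sum()

# Recover (g, alpha) of a synthetic "guide star" by grid search on RMSE.
yy, xx = np.mgrid[-20:21, -20:21].astype(float)
target = hg_psf(xx, yy, g=0.9, alpha=4.0)

grid = [(g, a) for g in np.linspace(0.5, 0.99, 50)
               for a in np.linspace(1.0, 10.0, 46)]
g_fit, a_fit = min(
    grid, key=lambda ga: np.sqrt(np.mean((hg_psf(xx, yy, *ga) - target) ** 2))
)

assert abs(g_fit - 0.9) < 0.02 and abs(a_fit - 4.0) < 0.3
```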

We demonstrate this process in Fig. 2. Similar to the guide-star techniques employed in adaptive optics [29,30], we capture several small structures located ~140 μm below the dura of the mouse brain in vivo and treat them as point sources. We then fit them with the proposed H-G function. Specifically, we first measure the dark noise of the camera (~103) and subtract this value from the raw data; we then fit the data with the proposed H-G function, and one of the fitting results is shown in Fig. 2(b).


Fig. 2 PSF fitting with different functions. (a) Fitting RMSE of the H-G, Lorentz, and Gaussian functions. The boxplot is computed from 25 different scattering sources. Red line: median. Bottom and top edges of the blue box: 25th and 75th percentiles. The fitting error of the H-G function is significantly smaller than those of the other two models (P < 10^−9 and P < 10^−10, respectively, Student’s t-test). (b)(c)(d) Raw pixel intensities from the captured image of one point-like source (blue circles) and the best fits by the H-G function (b), the Lorentz function (c), and the Gaussian function (d).


We also show that the H-G function fits better than the Lorentz function (Fig. 2(c)) and the Gaussian function (Fig. 2(d)) on the same raw data. Furthermore, we plot the fitting RMSE for 25 different point sources in Fig. 2(a), which shows that the H-G function gives a statistically significant improvement. We observe that the distribution of the PSF is heavy-tailed due to strong tissue scattering (raw pixels in Figs. 2(b)–2(d)), so the PSF model needs to parameterize both the FWHM and the tail distribution. However, the Lorentz and Gaussian functions are each controlled by only one parameter, which makes it hard for them to fully characterize the scattering PSF in deep-tissue imaging. This is why the Lorentz and Gaussian functions produce obvious fitting errors in the “hump” part of Figs. 2(c) and 2(d). In contrast, the H-G function supports more flexible control of the scattering PSF and thus fits better. All PSFs in the following sections are modelled by the H-G function through the same process.

3.2 Hessian regularized hybrid-deconvolution

In this section, we formulate the deconvolution algorithm that solves the optimization problem described in Eq. (4).

First, we define

$$f_{3d}(x,y,e_y)=f(x,e_y)\,\delta(y) \tag{9}$$
and
$$h_{3d}(x,y,e_y)=h(x,y)\,\delta(e_y) \tag{10}$$
We can then rewrite Eq. (4) as
$$p_{ED}=\arg\min_f\left\{\mu\left\|(f_{3d}*h_{3d})(x,y,e_y)-p'(x,y,e_y)\right\|_2^2+\lambda\,|R(f)|_1\right\} \tag{11}$$
where * denotes 3D convolution.

We adopt the alternating direction method of multipliers (ADMM) [31] to solve the above problem. First, three new variables b1, b2, and b3 are introduced, and the equivalent minimization problem becomes

$$p_{ED}=\arg\min_f\left\{\mu\left\|f_{3d}*h_{3d}-p'\right\|_2^2+\lambda\left(|b_1|_1+|b_2|_1+|b_3|_1\right)\right\} \tag{12}$$
subject to the constraints
$$f_{xx}=b_1,\qquad f_{yy}=b_2,\qquad 2f_{xy}=b_3 \tag{13}$$
After that, the augmented Lagrangian could be written as:
$$L(f,b,u\,|\,\mu,\lambda,\rho)=\mu\|f_{3d}*h_{3d}-p'\|_2^2+\lambda\left(|b_1|_1+|b_2|_1+|b_3|_1\right)+\rho\left(\|f_{xx}-b_1+u_1\|_2^2+\|f_{yy}-b_2+u_2\|_2^2+\|2f_{xy}-b_3+u_3\|_2^2\right) \tag{14}$$
in which ρ is the penalty parameter. The problem could then be solved in a three-step iterative manner:

  • i. Update f^k:
$$f^k=\arg\min_f L(f,\,b^{k-1},\,u^{k-1}) \tag{15}$$
  • ii. Update b^k:
$$b^k=\arg\min_b L(f^k,\,b,\,u^{k-1}) \tag{16}$$

Specifically,
$$b_1^k=\begin{cases}f_{xx}^k+u_1^{k-1}-\dfrac{\lambda}{2\rho}, & \text{if } f_{xx}^k+u_1^{k-1}>\dfrac{\lambda}{2\rho}\\[4pt] f_{xx}^k+u_1^{k-1}+\dfrac{\lambda}{2\rho}, & \text{if } f_{xx}^k+u_1^{k-1}<-\dfrac{\lambda}{2\rho}\\[4pt] 0, & \text{otherwise}\end{cases} \tag{17}$$

and b_2^k, b_3^k are updated in the same way.

  • iii. Update u^k:
$$u_1^k=f_{xx}^k-b_1^k+u_1^{k-1} \tag{18}$$

    and u_2, u_3 are updated in the same way.
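Step ii is the standard element-wise soft-thresholding (shrinkage) operator; a minimal sketch with illustrative values of λ and ρ:

```python
import numpy as np

def soft_threshold(v, t):
    # Element-wise shrinkage: v - t where v > t, v + t where v < -t, else 0.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# b1 update: shrink f_xx^k + u1^{k-1} with threshold lambda / (2 * rho).
lam, rho = 1.0, 2.0                              # hypothetical weights
fxx_plus_u = np.array([-2.0, -0.2, 0.0, 0.1, 1.5])
b1 = soft_threshold(fxx_plus_u, lam / (2 * rho))

assert np.allclose(b1, [-1.75, 0.0, 0.0, 0.0, 1.25])
```

Entries with magnitude below the threshold are set exactly to zero, which is what promotes sparsity of the second derivatives.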

We now detail the process of updating f^k and show that Eq. (15) has an analytical solution. Using Parseval's identity [32], the problem in Eq. (15) can be rewritten in the Fourier domain:

$$\tilde{f}=\arg\min_f\left\{\mu\|\tilde{f}_{3d}\odot\tilde{h}_{3d}-\tilde{p}'\|_2^2+\rho\left(\|\tilde{\nabla}_{xx}\odot\tilde{f}-\tilde{b}_1+\tilde{u}_1\|_2^2+\|\tilde{\nabla}_{yy}\odot\tilde{f}-\tilde{b}_2+\tilde{u}_2\|_2^2+\|2\tilde{\nabla}_{xy}\odot\tilde{f}-\tilde{b}_3+\tilde{u}_3\|_2^2\right)\right\} \tag{19}$$
in which symbols with a tilde denote the Fourier transforms of the original signals, $\odot$ is the element-wise product, and $\tilde{\nabla}_{xx}$, $\tilde{\nabla}_{yy}$, $\tilde{\nabla}_{xy}$ are the Fourier transforms of the corresponding second-order derivative operators. Recall that we previously defined $h_{3d}(x,y,e_y)=h(x,y)\,\delta(e_y)$ and $f_{3d}(x,y,e_y)=f(x,e_y)\,\delta(y)$, so $\tilde{h}_{3d}(k_x,k_y,k_{e_y})=\tilde{h}(k_x,k_y)$ and $\tilde{f}_{3d}(k_x,k_y,k_{e_y})=\tilde{f}(k_x,k_{e_y})$. The right-hand side of the above equation can therefore be viewed as a function of the variable $\tilde{f}(k_x,k_{e_y})$, and the analytical solution is
$$\tilde{f}(k_x,k_{e_y})=\frac{N}{D} \tag{20}$$
where
$$N=\tilde{\nabla}_{xx}^{*}\odot\left(\tilde{b}_1(k_x,k_{e_y})-\tilde{u}_1(k_x,k_{e_y})\right)+\tilde{\nabla}_{yy}^{*}\odot\left(\tilde{b}_2(k_x,k_{e_y})-\tilde{u}_2(k_x,k_{e_y})\right)+2\tilde{\nabla}_{xy}^{*}\odot\left(\tilde{b}_3(k_x,k_{e_y})-\tilde{u}_3(k_x,k_{e_y})\right)+\frac{\mu}{\rho}\int \tilde{p}'(k_x,k_y,k_{e_y})\,\tilde{h}^{*}(k_x,k_y)\,dk_y \tag{21}$$
$$D=|\tilde{\nabla}_{xx}|^2+|\tilde{\nabla}_{yy}|^2+4|\tilde{\nabla}_{xy}|^2+\frac{\mu}{\rho}\int|\tilde{h}(k_x,k_y)|^2\,dk_y \tag{22}$$
Finally, f(x,y) is obtained by taking the inverse Fourier transform of $\tilde{f}(k_x,k_{e_y})$.
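The Parseval trick can be illustrated on a simplified 1D analogue (a hypothetical quadratic with circular convolution, not the full functional above): the minimizer is a per-frequency Wiener-like quotient, verifiable against the dense normal equations:

```python
import numpy as np

# Minimize ||h (*) f - p||^2 + rho * ||f - b||^2, (*) = circular convolution.
rng = np.random.default_rng(1)
n = 16
h, p, b = rng.normal(size=(3, n))
rho = 0.5

# Fourier domain: the objective is diagonal, so each frequency is solved
# independently as a quotient (numerator / denominator).
H, P, B = np.fft.fft(h), np.fft.fft(p), np.fft.fft(b)
f = np.fft.ifft((np.conj(H) * P + rho * B) / (np.abs(H) ** 2 + rho)).real

# Cross-check against the dense normal equations of the same quadratic.
C = np.array([[h[(i - j) % n] for j in range(n)] for i in range(n)])  # circulant
f_dense = np.linalg.solve(C.T @ C + rho * np.eye(n), C.T @ p + rho * b)
assert np.allclose(f, f_dense, atol=1e-8)
```

The diagonalization is what makes the f-update cheap: one forward and one inverse FFT per iteration instead of a large linear solve.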

The reconstruction software is available from Ref. [33]. Note that proper choices of the parameters μ and λ in Eq. (12) are necessary, so in the following experiments we use a grid search to determine the best parameters, with image sharpness as the metric [34]. All computational reconstructions are performed on a personal computer with an Intel Core i5-7500 CPU and 16.0 GB RAM. One iteration, as described in Eqs. (15)–(18), takes ~0.6 s for a 650 × 400-pixel image, and the algorithm takes tens of iterations to converge.

4. Simulation results

With the algorithm in place, we evaluate the performance of the proposed methods via numerical simulations (Fig. 3). We split a ground-truth microtubule image into lines along the excitation direction, then convolve each line with the PSF h to generate {p'(x,y,e_y)}. Gaussian white noise at 40 dB peak signal-to-noise ratio (PSNR) is added to mimic real experiments. After generating {p'(x,y,e_y)}, p_WD, p_CSD, and p_ED are calculated via Eqs. (1), (2), and (4), respectively; the reconstructed images are shown in Figs. 3(b)–3(d). Among the three, p_ED shows the lowest background.
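For reproducibility, here is one way to add noise at a prescribed PSNR (assuming the common peak-to-noise-standard-deviation convention; the exact definition used in the paper is not stated):

```python
import numpy as np

def add_noise_at_psnr(img, psnr_db, rng):
    # Assumed convention: PSNR = 20 * log10(peak / noise_std).
    sigma = img.max() / 10 ** (psnr_db / 20)
    return img + rng.normal(0.0, sigma, img.shape), sigma

rng = np.random.default_rng(0)
gt = np.zeros((256, 256))
gt[64:192, 64:192] = 1.0                 # stand-in for the ground-truth image
noisy, sigma = add_noise_at_psnr(gt, 40.0, rng)

assert abs(sigma - 0.01) < 1e-12         # 40 dB -> noise std = 1% of peak
assert abs((noisy - gt).std() - sigma) / sigma < 0.05
```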


Fig. 3 Numerical simulation results. (a) Ground-truth (GT) image of microtubules used in the simulations. (b)(c)(d) Images retrieved with WD, CSD, and ED, respectively. (e)(f) Images reconstructed with Hessian-deconvolution-enhanced WD and CSD, respectively. (g) Intensity along the dashed lines in (a)–(f). (h) Structural similarity (SSIM) index of the reconstructions by all methods under different noise levels, expressed as peak signal-to-noise ratio (PSNR). Black dashed line: threshold for acceptable reconstruction.


We also apply Hessian-regularized deconvolution to WD and CSD to show that the proposed ED technique still performs best, as shown in Figs. 3(e) and 3(f). For WD, the same PSF as in ED is used; for CSD, 1D deconvolution is performed on each detected slit, since only slit detection is available. The 1D PSF is obtained by selecting the central line of the fitted PSF. Comparing Figs. 3(b) and 3(e), the improvement from deconvolution is obvious; however, deconvolution still cannot handle the severe crosstalk in WD. While CSD also improves after deconvolution (Figs. 3(c) and 3(f)), the signal loss introduced by the confocal slit limits its final performance. In contrast, the proposed ED technique is insensitive to both the crosstalk from tissue scattering and the signal loss of confocal-slit-based spatial filtering, and thus achieves the best performance after the same deconvolution process.

We quantitatively measure the widths of the retrieved microtubules achieved with these methods and find that p_ED resembles the ground truth most closely, which demonstrates the advantage of ED-LTFM under strong scattering and noisy conditions. We also label the structural similarity (SSIM) index of each method in Figs. 3(b)–3(f), a metric widely used to evaluate the similarity of reconstructed images to the ground truth [35]. The SSIM with ED is 2.1 times that with CSD and 15.5 times that with WD. In Fig. 3(h), we compare the reconstruction SSIM of all methods under different noise levels. Using an SSIM of 0.5 as the threshold, ED-LTFM extends the tolerable noise range by ~8 dB compared to the second-best method (CSD with deconvolution), which clearly shows the advantage of our method.

5. Experiments

5.1 Optical configuration

Figure 4 shows the optical configuration of ED-LTFM. We use an 80 MHz, ~120 fs laser (Chameleon Discovery, Coherent) for two-photon excitation at 920 nm, followed by an electro-optical modulator (M3202RM, Conoptics) to control the laser intensity. The laser beam is expanded to ~5 mm with a telescope (L1: f = 60 mm, L2: f = 150 mm) and then scanned in the vertical direction with a one-dimensional galvanometer (GVS211, Thorlabs). The beam is focused to a thin line on the surface of a diffraction grating (Edmund Optics, 830 lines/mm) with a cylindrical lens (f = 300 mm). The incident angle on the grating is ~50° so that the central wavelength of the first-order diffracted light is perpendicular to the grating surface. The spectrally spread pulse is collimated with a collimating lens (L3: f = 200 mm), so that the expanded beam fills the back pupil of the objective (25×, 1.05 NA, water immersion, Olympus XLPLN25XWMP2). A line-shaped laser focus, around 80 µm in length, is formed at the focal plane of the objective. An epi-fluorescence detection path is built for image acquisition, including a dichroic mirror (DMSP750B, Thorlabs), a bandpass filter (E510/80, Chroma), a 200 mm tube lens (L4, TTL200-A, Thorlabs), and an sCMOS camera (Zyla 5.5 plus, Andor). Three-dimensional imaging is performed by axially translating the sample stage (M-VP-25XA-XYZL, Newport).


Fig. 4 Optical configuration of the ED-LTFM. EOM, electro-optical modulator; HWP, half wave plate; Cyl. Lens, cylindrical lens; D, dichroic mirror. Inset 1: In our setup, the excitation line is fixed and the sample stage moves to achieve line scan.


In wide-field detection (p_WD), the camera stays open while the line-shaped beam scans the sample. To capture {p'(x,y,e_y)}, we park the beam in the center of the field of view (FOV) and translate the sample stage to perform the 1D scan, which simplifies the experimental setup for extended detection and frees the imaging field from the FOV limit of the objective. In this case, the camera captures a 650 × 200-pixel image at each stage position, and the full {p'(x,y,e_y)} stack is formed from 400 captures. To mimic confocal slit detection, we use the central signal of the captured stack {p'(x,y,e_y)} to recover p_CSD(x,y). p_ED is calculated with the proposed reconstruction algorithm. For a fair comparison, the exposure time of each row is the same (50 ms) in all three cases. Note that the excitation line is along the x-axis, consistent with the derivation above.
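The acquisition geometry can be sketched as follows (a hypothetical illustration with random data as a stand-in, and dimensions scaled down 10× from the 400 × 200 × 650 stack for brevity):

```python
import numpy as np

# Real captures: 650 x 200 pixels at 400 stage positions (here 10x smaller).
n_pos, rows, cols = 40, 20, 65
rng = np.random.default_rng(0)
stack = rng.random((n_pos, rows, cols))   # stands in for {p'(x, y, e_y)}

# CSD is mimicked by keeping only the row conjugated to the excitation line.
p_csd = stack[:, rows // 2, :]
assert p_csd.shape == (n_pos, cols)

# WD integrates every capture on one sensor; with the stage scan this amounts
# to summing captures re-centered at their excitation positions.
p_wd = np.zeros((n_pos + rows, cols))
for ey in range(n_pos):
    p_wd[ey:ey + rows, :] += stack[ey]
assert np.isclose(p_wd.sum(), stack.sum())  # integration preserves all photons
```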

5.2 Images of fluorescent beads

We first demonstrate the enhanced performance of the proposed ED technique by imaging 3 µm fluorescent beads (T14792, Thermo Fisher) under a 300 µm scattering phantom (non-fluorescent beads embedded in a 2% agar solution). Figures 5(a)–5(c) show the maximum intensity projections (MIPs) along the z-axis of a 10-μm-thick stack of the beads via WD, CSD, and ED, respectively. The beads are seriously blurred under WD; CSD effectively reduces the blur along the y-axis compared to WD but helps less along the x-axis; ED effectively reduces the blur along both axes. We further quantify the blur reduction of the three methods by measuring the bead profiles along the x- and y-axes in Figs. 5(d) and 5(e), which strongly suggests that the proposed ED technique mitigates scattering along both axes.


Fig. 5 Images of 3 μm fluorescent beads with WD in (a), CSD in (b) and ED in (c). (d) and (e) show the intensity fluctuation along dashed lines in (a), (b) and (c). Scale bar: 5 μm.


5.3 In vivo imaging of Thy1-YFP mice

Then we demonstrate the performance of ED-LTFM in in vivo imaging of living Thy1-YFP mice (JAX No. 003782). After craniotomy, we conduct acute imaging of neurons in the cerebral cortex with the living mice under anesthesia [36] (all procedures involving mice were approved by the Animal Care and Use Committees of Tsinghua University).

5.3.1 Validation of the PSF invariance

So far, we have assumed shift-invariance of the scattering PSF in the imaging model and reconstruction, which is not obvious given the complexity of tissue structure and properties. To validate this assumption, we fit the PSF with the proposed H-G model at different locations, as shown in Fig. 6(a). We choose three targets for fitting, plot the intensity profile around each target, and search for the best-fitting parameters (α, g), as shown in Figs. 6(b)–6(d). The fitted parameters vary little across locations, as shown in Fig. 6(e). The results suggest that the PSF is nearly shift-invariant across the whole FOV, so the deconvolution in our proposed algorithm is feasible. We attribute the observed shift-invariance of the PSF to two factors: 1) scattering properties vary across different regions of the mouse brain [37], but they are highly similar locally [38]; 2) when the imaging depth exceeds the mean free path (<50 μm for emitted photons in our case [39,40]), emitted fluorescent photons experience multiple scattering, which leads to similar scattering PSFs across the FOV.


Fig. 6 Validation of PSF invariance across the whole FOV. (a) MIP of neurons along the z-axis of a 13-μm-thick image stack (80–92 μm under the dura) acquired with ED. To test the PSF invariance, we choose three targets across the whole FOV for fitting, marked by the white arrows. (b)(c)(d) Scattering profiles and fitting results at the target locations in (a). (e) The fitting results of (b)(c)(d) plotted together; the curves overlap closely, demonstrating that the PSF is nearly shift-invariant. Scale bar: 10 µm.


5.3.2 ED outperforms WD and CSD in neuroimaging

We further compare ED, WD, and CSD in in vivo neuron imaging. Figures 7(a)–7(c) show the MIPs of neurons along the z-axis of a 13-µm-thick image stack (80–92 μm under the dura) acquired via WD, CSD, and ED, respectively. For closer comparison, zoomed-in views are shown in Figs. 7(d)–7(f). As expected, the dendrites are blurred in WD due to strong scattering; CSD helps eliminate the scattering-induced crosstalk along the y-axis, while ED effectively eliminates the crosstalk along both the x- and y-axes. In Fig. 7(j), we show the signal improvement of ED over WD and CSD by quantitatively comparing the intensity along the dashed lines in Figs. 7(d)–7(f).


Fig. 7 Comparison of different techniques in in vivo deep imaging of neurons. (a)(b)(c) MIPs of neurons along z-axis of a 13-μm-thick image stack (80–92 μm under the dura) acquired with the WD, CSD and ED, respectively. (d)(e)(f) Zoom-in views of lateral area marked by the dashed box in (a), (b) and (c), respectively. (g)(h)(i) MIPs along y-axis of a 10 μm thick x-z stack [marked by the dashed box in (a)(b)(c)]. The MIPs of x-z stacks are shown with bilinear interpolation along z-axis to equate the lateral and axial pixel sizes. (j) Intensity profiles along the indicated lines in (d)(e)(f). (k) Intensity profiles along the indicated lines in (g)(h)(i). Scale bars in (a)(b)(c) are 10 μm, in (d)(e)(f) and (g)(h)(i) are 3 μm.


In Fig. 7(k), we also show that the proposed ED technique improves the signal contrast along the z-axis, using MIPs along the y-axis of a 10-μm-thick x-z image stack [marked by the dashed boxes in Figs. 7(a)–7(c)], shown in Figs. 7(g)–7(i). The improvement of ED-LTFM is obvious.

5.3.3 Hessian regularization in ED helps to reduce reconstruction artifacts

We also verify the improvement brought by Hessian regularization. For comparison, we run ED with and without Hessian regularization in the deconvolution step, as shown in Fig. 8. Without regularization, the ED deconvolution amplifies both the noise and the signals (Figs. 8(a) and 8(b)). Checking the details, we find that ED without Hessian regularization generates more structures than ED with it (Figs. 8(d) and 8(e)). We further check the CSD result (which is free of any post-processing) and find that it matches the result of ED with Hessian regularization well (Figs. 8(d) and 8(f)). In other words, ED without Hessian regularization generates artifacts in the retrieved images; the introduced Hessian regularization helps reduce artifacts and maintain reconstruction fidelity.


Fig. 8 Comparing ED reconstruction with and without Hessian regularization. (a) ED reconstruction with Hessian regularization. (b) ED reconstruction without Hessian regularization (ED no He). (c) CSD reconstruction, which is free from post processing. (d)(e)(f) Zoom-in views of the lateral area marked by white dashed boxes in (a)(b)(c), respectively. Scale bars in (a)(b)(c) are 10 µm, in (d)(e)(f) are 3 µm.


5.4 In vivo imaging of CX3CR1-GFP mice

Finally, we demonstrate the performance of ED-LTFM in in vivo dynamic imaging of microglia in living CX3CR1-GFP mice (JAX No. 005582). Figures 9(a)–9(c) show the MIPs along the z-axis of a 7-µm-thick image stack (172–178 μm under the dura) acquired via WD, CSD, and ED, respectively. To the best of our knowledge, such penetration depth has not previously been demonstrated in vivo with LTFM. The background noise with ED is suppressed significantly compared to both WD and CSD. Zooming into a small part of Figs. 9(a)–9(c), we see that CSD eliminates the crosstalk along the y-axis effectively but fails along the x-axis, while ED effectively recovers fine processes that cannot even be recognized in the original WD images. We also image the “non-resting” dynamics of microglia over a total time of 16 minutes within a depth range of 30 μm. In Figs. 9(g)–9(i), compared with WD and CSD, ED records the movement of microglial processes in fine detail.


Fig. 9 Comparison of different techniques in in vivo deep imaging of microglia and their dynamics. (a)(b)(c) MIPs of microglia along the z-axis of a 7-μm-thick image stack (172–178 μm under the dura) acquired with WD, CSD, and ED, respectively. (d)(e)(f) Zoomed-in views of the lateral area marked by the dashed boxes in (a), (b), and (c), respectively. (g)(h)(i) Temporal color-coded MIP sequences of microglia from a 30-μm-thick image stack (100–130 μm under the dura) acquired with WD, CSD, and ED. Scale bars in (a)(b)(c)(g)(h)(i) are 10 μm; in (d)(e)(f), 3 μm.


6. Conclusion

In summary, we have proposed a novel technique for overcoming tissue scattering in wide-field deep imaging through extended detection and computational reconstruction. Through both numerical simulations and in vivo imaging experiments, we have demonstrated that the proposed ED-LTFM effectively enhances the penetration depth. Given that the line rate of our sCMOS camera is about 2.2 × 10^5 Hz, ED-LTFM can support a 55 Hz imaging rate (faster than the ~30 Hz of typical point-scanning two-photon microscopy with resonant galvo scanners) for a 650 × 400-pixel capture with a 10-pixel-wide extended detection band, in which case a femtosecond laser with low repetition rate but high pulse energy should be used to ensure the SNR. Besides, since our PSF fitting relies on the detection of small sources, ED-LTFM works best for sparsely labelled tissues (in some cases, such as the microglia in Fig. 9, the targets are inherently sparse). It is worth noting that ED-LTFM can also be integrated with other strategies, such as three-photon LTFM [20], adaptive-optics-based LTFM [22], and multiphoton multispot systems [24], to push the limits of imaging depth in scattering tissues even further.
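The quoted frame rate follows directly from the readout arithmetic:

```python
# Frame-rate estimate from the Conclusion: sCMOS row (line) rate of 2.2e5 Hz,
# 400 excitation-line positions per frame, and a 10-pixel-wide extended
# detection band read out per position.
line_rate_hz = 2.2e5
scan_positions = 400
rows_per_position = 10

rows_per_frame = scan_positions * rows_per_position   # 4000 rows per frame
frame_rate_hz = line_rate_hz / rows_per_frame

assert rows_per_frame == 4000
assert round(frame_rate_hz) == 55
```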

Funding.

National Natural Science Foundation of China (NSFC) (61831014, 61771287, 61327902, 61741116, and 61722209).

Acknowledgments

YZ thanks Yingjun Tang for helps in sample preparations. LK thanks the support from Tsinghua University and the “Thousand Talents Plan” Youth Program.

References

1. W. Denk, J. H. Strickler, and W. W. Webb, "Two-photon laser scanning fluorescence microscopy," Science 248(4951), 73–76 (1990).

2. J. Bewersdorf, R. Pick, and S. W. Hell, "Multifocal multiphoton microscopy," Opt. Lett. 23(9), 655–657 (1998).

3. E. J. Botcherby, C. W. Smith, M. M. Kohl, D. Débarre, M. J. Booth, R. Juškaitis, O. Paulsen, and T. Wilson, "Aberration-free three-dimensional multiphoton imaging of neuronal activity at kHz rates," Proc. Natl. Acad. Sci. U.S.A. 109(8), 2919–2924 (2012).

4. L. Kong, J. Tang, J. P. Little, Y. Yu, T. Lämmermann, C. P. Lin, R. N. Germain, and M. Cui, "Continuous volumetric imaging via an optical phase-locked ultrasound lens," Nat. Methods 12(8), 759–762 (2015).

5. N. Ji, J. Freeman, and S. L. Smith, "Technologies for imaging neural activity in large volumes," Nat. Neurosci. 19(9), 1154–1164 (2016).

6. W. Yang, J. E. Miller, L. Carrillo-Reid, E. Pnevmatikakis, L. Paninski, R. Yuste, and D. S. Peterka, "Simultaneous multi-plane imaging of neural circuits," Neuron 89(2), 269–284 (2016).

7. D. Oron and Y. Silberberg, "Spatiotemporal coherent control using shaped, temporally focused pulses," Opt. Express 13(24), 9903–9908 (2005).

8. G. Zhu, J. van Howe, M. Durst, W. Zipfel, and C. Xu, "Simultaneous spatial and temporal focusing of femtosecond pulses," Opt. Express 13(6), 2153–2159 (2005).

9. M. E. Durst, G. Zhu, and C. Xu, "Simultaneous spatial and temporal focusing in nonlinear microscopy," Opt. Commun. 281(7), 1796–1805 (2008).

10. R. Prevedel, A. J. Verhoef, A. J. Pernía-Andrade, S. Weisenburger, B. S. Huang, T. Nöbauer, A. Fernández, J. E. Delcour, P. Golshani, A. Baltuska, and A. Vaziri, "Fast volumetric calcium imaging across multiple cortical layers using sculpted light," Nat. Methods 13(12), 1021–1028 (2016).

11. Y. Meng, W. Lin, C. Li, and S. C. Chen, "Fast two-snapshot structured illumination for temporal focusing microscopy with enhanced axial resolution," Opt. Express 25(19), 23109–23121 (2017).

12. J. N. Yih, Y. Y. Hu, Y. D. Sie, L. C. Cheng, C. H. Lien, and S. J. Chen, "Temporal focusing-based multiphoton excitation microscopy via digital micromirror device," Opt. Lett. 39(11), 3134–3137 (2014).

13. E. Papagiakoumou, A. Bègue, B. Leshem, O. Schwartz, B. M. Stell, J. Bradley, D. Oron, and V. Emiliani, "Functional patterned multiphoton excitation deep inside scattering tissue," Nat. Photonics 7(4), 274–278 (2013).

14. A. Escobet-Montalbán, R. Spesyvtsev, M. Chen, W. A. Saber, M. Andrews, C. S. Herrington, M. Mazilu, and K. Dholakia, "Wide-field multiphoton imaging through scattering media without correction," Sci. Adv. 4, (2018).

15. Y. Xue, K. P. Berry, J. R. Boivin, D. Wadduwage, E. Nedivi, and P. T. C. So, "Scattering reduction by structured light illumination in line-scanning temporal focusing microscopy," Biomed. Opt. Express 9(11), 5654–5666 (2018).

16. E. Tal, D. Oron, and Y. Silberberg, "Improved depth resolution in video-rate line-scanning multiphoton microscopy using temporal focusing," Opt. Lett. 30(13), 1686–1688 (2005).

17. H. Dana, N. Kruger, A. Ellman, and S. Shoham, "Line temporal focusing characteristics in transparent and scattering media," Opt. Express 21(5), 5677–5687 (2013).

18. B. Sun, P. S. Salter, C. Roider, A. Jesacher, J. Strauss, J. Heberle, M. Schmidt, and M. J. Booth, "Four-dimensional light shaping: manipulating ultrafast spatiotemporal foci in space and time," Light Sci. Appl. 7(1), 17117 (2018).

19. H. Dana, A. Marom, S. Paluch, R. Dvorkin, I. Brosh, and S. Shoham, "Hybrid multiphoton volumetric functional imaging of large-scale bioengineered neuronal networks," Nat. Commun. 5(1), 3997 (2014).

20. C. J. Rowlands, D. Park, O. T. Bruns, K. D. Piatkevich, D. Fukumura, R. K. Jain, M. G. Bawendi, E. S. Boyden, and P. T. So, "Wide-field three-photon excitation in biological samples," Light Sci. Appl. 6(5), e16255 (2017).

21. Y. Zhang, L. Kong, H. Xie, X. Han, and Q. Dai, "Enhancing axial resolution and background rejection in line-scanning temporal focusing microscopy by focal modulation," Opt. Express 26(17), 21518–21526 (2018).

22. Y. Zhang, X. Li, H. Xie, L. Kong, and Q. Dai, "Hybrid spatio-spectral coherent adaptive compensation for line-scanning temporal focusing microscopy," J. Phys. D 52(2), 024001 (2019).

23. P. Rupprecht, R. Prevedel, F. Groessl, W. E. Haubensak, and A. Vaziri, "Optimizing and extending light-sculpting microscopy for fast functional imaging in neuroscience," Biomed. Opt. Express 6(2), 353–368 (2015).

24. M.-P. Adam, M. C. Müllenbroich, A. P. Di Giovanna, D. Alfieri, L. Silvestri, L. Sacconi, and F. S. Pavone, "Confocal multispot microscope for fast and deep imaging in semicleared tissues," J. Biomed. Opt. 23(2), 1–4 (2018).

25. S. Lefkimmiatis, A. Bourquard, and M. Unser, "Hessian-based norm regularization for image restoration with biomedical applications," IEEE Trans. Image Process. 21(3), 983–995 (2012).

26. B. G. Saar, C. W. Freudiger, J. Reichman, C. M. Stanley, G. R. Holtom, and X. S. Xie, "Video-rate molecular imaging in vivo with stimulated Raman scattering," Science 330(6009), 1368–1370 (2010).

27. P. Rupprecht, R. Prevedel, F. Groessl, W. E. Haubensak, and A. Vaziri, "Optimizing and extending light-sculpting microscopy for fast functional imaging in neuroscience," Biomed. Opt. Express 6(2), 353–368 (2015).

28. L. G. Henyey and J. L. Greenstein, "Diffuse radiation in the galaxy," Astrophys. J. 93, 70–83 (1941).

29. J. Liang, D. R. Williams, and D. T. Miller, "Supernormal vision and high-resolution retinal imaging through adaptive optics," J. Opt. Soc. Am. A 14(11), 2884–2892 (1997).

30. K. Wang, D. E. Milkie, A. Saxena, P. Engerer, T. Misgeld, M. E. Bronner, J. Mumm, and E. Betzig, "Rapid adaptive optical recovery of optimal resolution over large volumes," Nat. Methods 11(6), 625–628 (2014).

31. S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, "Distributed optimization and statistical learning via the alternating direction method of multipliers," Foundations and Trends in Machine Learning 3(1), 1–122 (2011).

32. M.-A. Parseval, "Mémoire sur les séries et sur l'intégration complète d'une équation aux différences partielles linéaires du second ordre, à coefficients constants," Mém. prés. par divers savants. Acad. des Sciences, Paris 1(1), 638–648 (1806).

33. T. Zhou, "Source Code for ED_LTFM" (2019), https://github.com/rickyim/ED_LTFM.

34. D. Burke, B. Patton, F. Huang, J. Bewersdorf, and M. J. Booth, "Adaptive optics correction of specimen-induced aberrations in single-molecule switching microscopy," Optica 2(2), 177–185 (2015).

35. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: from error visibility to structural similarity," IEEE Trans. Image Process. 13(4), 600–612 (2004).

36. L. Kong and M. Cui, "In vivo fluorescence microscopy via iterative multi-photon adaptive compensation technique," Opt. Express 22(20), 23786–23794 (2014).

37. S. I. Al-Juboori, A. Dondzillo, E. A. Stubblefield, G. Felsen, T. C. Lei, and A. Klug, "Light scattering properties vary across different regions of the adult mouse brain," PLoS One 8(7), e67626 (2013).

38. C.-Y. Dong, B. Yu, L. L. Hsu, P. D. Kaplan, D. Blankschstein, R. Langer, and P. T. So, "Applications of two-photon fluorescence microscopy in deep-tissue imaging," in Optical Sensing, Imaging, and Manipulation for Biological and Biomedical Applications (International Society for Optics and Photonics, 2000), pp. 105–115.

39. F. Helmchen and W. Denk, "Deep tissue two-photon microscopy," Nat. Methods 2(12), 932–940 (2005).

40. D. M. Chudakov, M. V. Matz, S. Lukyanov, and K. A. Lukyanov, "Fluorescent proteins and their applications in imaging living cells and tissues," Physiol. Rev. 90(3), 1103–1163 (2010).



Figures (9)

Fig. 1 Illustrations of the ED-LTFM method. (a) The scattering PSF h makes the excited line-shape signals overlap along the y-axis (direction of scanning) and blur along the x-axis (direction of the excitation line). (b) Line-shape signals in conventional LTFM are integrated by the wide-field detection camera and thus overlap along both the x- and y-axes. (c) A confocal slit inserted before the sensor can reduce the crosstalk along the y-axis but not along the x-axis. (d) The extended detection (ED) technique records all of the signals, including scattered signals, for subsequent computational reconstruction, which can reduce crosstalk along both the x- and y-axes.
Fig. 2 PSF fitting with different functions. (a) Fitting RMSE of the H-G, Lorentz, and Gaussian functions. The boxplot is measured from 25 different scattering sources. Red line: median. Bottom and top edges of the blue box: 25th and 75th percentiles. The fitting error of H-G is significantly smaller than that of the other two models (P < 10^-9 and P < 10^-10, respectively, by Student's t-test). (b)(c)(d) Raw pixel intensity from the captured image of one point-like source (blue circles) and the best fits by the H-G function (b), Lorentz function (c), and Gaussian function (d).
Fig. 3 Numerical simulation results. (a) A ground-truth (GT) image of microtubules used in the simulations. (b)(c)(d) Images retrieved with WD, CSD, and ED, respectively. (e)(f) Images reconstructed with Hessian-deconvolution-enhanced WD and CSD, respectively. (g) Intensity along the dashed lines in (a)-(f). (h) Structural Similarity (SSIM) index of the reconstructions by all methods under different noise levels in peak signal-to-noise ratio (PSNR). Black dashed line: threshold for an acceptable reconstruction.
Fig. 4 Optical configuration of the ED-LTFM. EOM, electro-optic modulator; HWP, half-wave plate; Cyl. Lens, cylindrical lens; D, dichroic mirror. Inset 1: in our setup, the excitation line is fixed and the sample stage moves to achieve the line scan.
Fig. 5 Images of 3 μm fluorescent beads with WD (a), CSD (b), and ED (c). (d) and (e) show the intensity fluctuation along the dashed lines in (a), (b), and (c). Scale bar: 5 μm.
Fig. 6 Validation of PSF invariance across the whole FOV. (a) MIP of neurons along the z-axis of a 13-μm-thick image stack (80–92 μm under the dura) acquired with ED. To test the PSF invariance, we chose three targets across the whole FOV for fitting, marked by the white arrows. (b)(c)(d) Scattering profiles and the fitting results at the target locations in (a). (e) The fitting results of (b)(c)(d) plotted in one figure. The curves overlap closely, demonstrating that the PSF is invariant. Scale bar: 10 µm.
Fig. 7 Comparison of different techniques in in vivo deep imaging of neurons. (a)(b)(c) MIPs of neurons along the z-axis of a 13-μm-thick image stack (80–92 μm under the dura) acquired with WD, CSD, and ED, respectively. (d)(e)(f) Zoomed-in views of the lateral areas marked by the dashed boxes in (a), (b), and (c), respectively. (g)(h)(i) MIPs along the y-axis of a 10-μm-thick x-z stack [marked by the dashed boxes in (a)(b)(c)]. The MIPs of the x-z stacks are shown with bilinear interpolation along the z-axis to equate the lateral and axial pixel sizes. (j) Intensity profiles along the indicated lines in (d)(e)(f). (k) Intensity profiles along the indicated lines in (g)(h)(i). Scale bars in (a)(b)(c) are 10 μm; in (d)(e)(f) and (g)(h)(i), 3 μm.
Fig. 8 Comparison of ED reconstruction with and without Hessian regularization. (a) ED reconstruction with Hessian regularization. (b) ED reconstruction without Hessian regularization (ED no He). (c) CSD reconstruction, which is free from post-processing. (d)(e)(f) Zoomed-in views of the lateral areas marked by the white dashed boxes in (a)(b)(c), respectively. Scale bars in (a)(b)(c) are 10 µm; in (d)(e)(f), 3 µm.
Fig. 9 Comparison of different techniques in in vivo deep imaging of microglia cells and their dynamics. (a)(b)(c) MIPs of microglia cells along the z-axis of a 7-μm-thick image stack (172–178 μm under the dura) acquired with WD, CSD, and ED, respectively. (d)(e)(f) Zoomed-in views of the lateral areas marked by the dashed boxes in (a), (b), and (c), respectively. (g)(h)(i) Temporal color-coded MIP sequences of microglia cells along a 30-μm-thick image stack (100–130 μm under the dura) acquired with WD, CSD, and ED. Scale bars in (a)(b)(c)(g)(h)(i) are 10 μm; in (d)(e)(f), 3 μm.

Equations (21)


(1) $p_{\mathrm{WD}}(x,y)=\iiint h(x-x',\,y-e_y)\,f(x',y')\,\delta(y'-e_y)\,\mathrm{d}x'\,\mathrm{d}y'\,\mathrm{d}e_y$

(2) $p_{\mathrm{CSD}}(x,y)=\iint h(x-x',\,0)\,f(x',y')\,\delta(y-y')\,\mathrm{d}x'\,\mathrm{d}y'$

(3) $p'(x,y,e_y)=\iint h(x-x',\,y)\,f(x',y')\,\delta(y'-e_y)\,\mathrm{d}x'\,\mathrm{d}y'$

(4) $p_{\mathrm{ED}}=\arg\min_f\Big\{\mu\,\big\|\iint h(x-x',y)\,f(x',y')\,\delta(y'-e_y)\,\mathrm{d}x'\,\mathrm{d}y'-p'(x,y,e_y)\big\|_2^2+\lambda\,|R(f)|_1\Big\}$

(5) $R(f)=\iint\Big(\big|\tfrac{\partial^2 f}{\partial x^2}\big|_1+\big|\tfrac{\partial^2 f}{\partial y^2}\big|_1+\big|\tfrac{\partial^2 f}{\partial x\,\partial y}\big|_1\Big)\,\mathrm{d}x\,\mathrm{d}y$

(6) $p(\theta\,|\,g)=\dfrac{1}{4\pi}\,\dfrac{1-g^2}{\left[1+g^2-2g\cos\theta\right]^{3/2}}$

(7) $p(x,y\,|\,g,\alpha)=\dfrac{1}{M}\,\dfrac{1-g^2}{\left[1+g^2-2g\alpha/\sqrt{x^2+y^2+\alpha^2}\right]^{3/2}}$

(8) $\alpha=\dfrac{\mathrm{FWHM}/2}{\sqrt{\left(\dfrac{2g}{1+g^2-2^{2/3}(1-g)^2}\right)^2-1}}$

(9) $f_{3d}(x,y,e_y)=f(x,e_y)\,\delta(y)$

(10) $h_{3d}(x,y,e_y)=h(x,y)\,\delta(e_y)$

(11) $p_{\mathrm{ED}}=\arg\min_f\big\{\mu\,\|(f_{3d}*h_{3d})(x,y,e_y)-p'(x,y,e_y)\|_2^2+\lambda\,|R(f)|_1\big\}$

(12) $p_{\mathrm{ED}}=\arg\min_f\big\{\mu\,\|f_{3d}*h_{3d}-p'\|_2^2+\lambda\,(|b_1|_1+|b_2|_1+|b_3|_1)\big\}$

(13) $f_{xx}=b_1,\quad f_{yy}=b_2,\quad 2f_{xy}=b_3$

(14) $L(f,b,u\,|\,\mu,\lambda,\rho)=\mu\,\|f_{3d}*h_{3d}-p'\|_2^2+\lambda\,(|b_1|_1+|b_2|_1+|b_3|_1)+\rho\big(\|f_{xx}-b_1+u_1\|_2^2+\|f_{yy}-b_2+u_2\|_2^2+\|2f_{xy}-b_3+u_3\|_2^2\big)$

(15) $f^{k}=\arg\min_f L(f,\,b^{k-1},\,u^{k-1})$

(16) $b^{k}=\arg\min_b L(f^{k},\,b,\,u^{k-1})$

(17) $u_1^{k}=f_{xx}^{k}-b_1^{k}+u_1^{k-1}$

(18) $\tilde{f}=\arg\min_{\tilde{f}}\big\{\mu\,\|\tilde{f}_{3d}\,\tilde{h}_{3d}-\tilde{p}'\|_2^2+\rho\big(\|\tilde{\nabla}_{xx}\tilde{f}-\tilde{b}_1+\tilde{u}_1\|_2^2+\|\tilde{\nabla}_{yy}\tilde{f}-\tilde{b}_2+\tilde{u}_2\|_2^2+\|2\tilde{\nabla}_{xy}\tilde{f}-\tilde{b}_3+\tilde{u}_3\|_2^2\big)\big\}$

(19) $\tilde{f}(k_x,k_{e_y})=\dfrac{N}{D}$

(20) $N=\tilde{\nabla}_{xx}^{*}\big(\tilde{b}_1(k_x,k_{e_y})-\tilde{u}_1(k_x,k_{e_y})\big)+\tilde{\nabla}_{yy}^{*}\big(\tilde{b}_2(k_x,k_{e_y})-\tilde{u}_2(k_x,k_{e_y})\big)+2\tilde{\nabla}_{xy}^{*}\big(\tilde{b}_3(k_x,k_{e_y})-\tilde{u}_3(k_x,k_{e_y})\big)+\dfrac{\mu}{\rho}\int\tilde{p}'(k_x,k_y,k_{e_y})\,\tilde{h}^{*}(k_x,k_y)\,\mathrm{d}k_y$

(21) $D=|\tilde{\nabla}_{xx}|^2+|\tilde{\nabla}_{yy}|^2+4|\tilde{\nabla}_{xy}|^2+\dfrac{\mu}{\rho}\int|\tilde{h}(k_x,k_y)|^2\,\mathrm{d}k_y$
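The Henyey-Greenstein scattering PSF model above, and the half-maximum condition that fixes its depth parameter α from a target FWHM, can be checked numerically. A minimal Python sketch, with our own function names and the profile left unnormalized (the constant M is dropped):

```python
import math

def hg_psf(x, y, g, alpha):
    # Unnormalized 2D scattering PSF derived from the Henyey-Greenstein
    # phase function; the paper's normalization constant M is dropped.
    r = math.sqrt(x**2 + y**2 + alpha**2)
    return (1.0 - g**2) / (1.0 + g**2 - 2.0 * g * alpha / r) ** 1.5

def alpha_from_fwhm(fwhm, g):
    # Depth parameter alpha chosen so that the lateral profile falls to
    # half of its peak at a distance of fwhm/2 from the center.
    c = 2.0 * g / (1.0 + g**2 - 2.0 ** (2.0 / 3.0) * (1.0 - g) ** 2)
    return (fwhm / 2.0) / math.sqrt(c**2 - 1.0)

# Sanity check with g = 0.9 (strongly forward scattering, as in brain
# tissue) and a 10 um target FWHM: the profile halves at x = fwhm / 2.
g, fwhm = 0.9, 10.0
a = alpha_from_fwhm(fwhm, g)
ratio = hg_psf(fwhm / 2.0, 0.0, g, a) / hg_psf(0.0, 0.0, g, a)
print(round(ratio, 6))  # 0.5
```

In practice g and the FWHM would be obtained by fitting this profile to the measured images of point-like sources, as described in the text.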