Fast two-snapshot structured illumination for temporal focusing microscopy with enhanced axial resolution


Abstract

We present a new two-snapshot structured light illumination (SLI) reconstruction algorithm for fast image acquisition. The new algorithm, which requires only two mutually π phase-shifted raw structured images, is implemented on a custom-built temporal focusing fluorescence microscope (TFFM) to enhance its axial resolution via a digital micromirror device (DMD). First, the orientation of the modulated sinusoidal fringe patterns is automatically identified via spatial frequency vector detection. Subsequently, the modulated in-focal-plane images are obtained via rotation and subtraction. Lastly, a parallel amplitude demodulation method, derived based on the Hilbert transform, is applied to complete the decoding process. To demonstrate the new SLI algorithm, a TFFM is custom-constructed, where a DMD replaces the generic blazed grating in the system and simultaneously functions as a diffraction grating and a programmable binary mask, generating arbitrary fringe patterns. The experimental results show promising depth-discrimination capability with an axial resolution enhancement factor of 1.25, which matches well with the theoretical estimate, i.e., 1.27. Imaging experiments on pollen grain and mouse kidney samples have been performed. The results indicate that the two-snapshot algorithm provides contrast and optical cross-sectioning capability comparable to those of the conventional root-mean-square (RMS) reconstruction method. The two-snapshot method can be readily applied to any sinusoidally modulated illumination system to realize high-speed 3D imaging, as fewer frames are required for each in-focal-plane image restoration, i.e., the image acquisition speed is improved by 2.5 times for any two-photon system.

© 2017 Optical Society of America

1. Introduction

Two-photon excitation (TPE) microscopy has been widely used in the field of bio-imaging and noninvasive diagnosis since its first introduction in the early 1990s [1]. Importantly, TPE microscopy presents distinctive advantages in deep tissue and in vivo imaging due to its long (near-infrared) excitation wavelength and low photobleaching/phototoxicity effects, respectively [2]. A generic TPE system employs a scanning system that raster-scans the laser focus through a specimen; the emissions from each pixel are then collected to form an optical cross-section. To increase the imaging speed, i.e., to video rate or higher, one direct approach is to use high-speed scanners, e.g., resonant galvanometric scanners, polygonal scanners, and acousto-optic deflectors (AODs) [3–5]. However, in these systems high scanning speeds are achieved at the expense of pixel dwell time and may compromise the signal-to-noise ratio (SNR). To address the issue, one may excite or illuminate multiple pixels in the specimen simultaneously. Good examples include temporal focusing fluorescence microscopy (TFFM) and line-scanning or multifocal multiphoton microscopy [6–14]. Among the aforementioned methods, TFFM has a higher degree of parallelization, i.e., a higher speed. TFFM is realized by spatially and temporally focusing a femtosecond laser in the focal region [15]. Accordingly, the excitation speed can be as fast as the laser repetition rate [16], and the imaging speed is practically limited by the speed of cameras, i.e., 1–10 kHz.

Compared with a point-scanning system, the axial resolution of wide-field TFFM is slightly compromised [17,18], which has prevented its broad adoption and commercialization. Structured light illumination (SLI) has been applied in TFFM to improve its optical cross-sectioning capability [19]. In addition, the SLI algorithm also helps to suppress the effect of tissue scattering by acting as a virtual pinhole that computationally filters out the scattered photons outside the focal plane based on a set of raw structured images [20]. However, for any two-photon imaging system, at least five equally phase-stepped images are required to remove the sinusoidal modulation terms and obtain the in-focus image when the conventional root-mean-square (RMS) method is applied [21,22], which compromises the speed of TFFM. One way to reduce the number of required raw structured images is to apply the HiLo technique [23–25], which applies a high-pass filter to a uniform image and a low-pass filter to a structured image to form a fused HiLo image that contains all frequencies that can be collected by the imaging system. However, the optimal filter cut-off frequency and scaling factor (η) need to be empirically determined in the image reconstruction process; this prevents the HiLo technique from being fully automated, especially when switching among different biological specimens.

In this paper, we present a new two-snapshot SLI reconstruction algorithm for fast image acquisition. The new algorithm, which only requires two mutually π phase-shifted raw structured images, is implemented on a custom-designed TFFM to enhance its optical cross-sectioning capability via a digital micromirror device (DMD), which functions as a programmable blazed grating that carries the designed structured patterns. Mathematical models of the new SLI algorithm have been derived; imaging experiments are performed to verify the predicted performance. The new SLI algorithm can be implemented in any sinusoidally modulated illumination system in a fully automatic, adaptive, and robust fashion that enables high-resolution and high-speed 3D imaging. In the following sections, we report the (1) principle of generating structured patterns in a temporal focusing system via a DMD, (2) theoretical development of the two-snapshot SLI algorithm, and (3) SLI TFFM imaging results on pollen grain and mouse kidney samples.

2. Methods

2.1 Generation of temporal focusing and structured patterns via a DMD

Figure 1 illustrates the principle of generating structured images in a TFFM based on a DMD. A DMD consists of several million binary micromirrors; each mirror has two stable angular positions, i.e., ±12°, and typically operates at a speed of 4.2–32.5 kHz. As shown in Fig. 1(a), the femtosecond laser beam is first spatially dispersed by the DMD, which functions as a programmable blazed grating; a lens then collimates these beams, and lastly the objective lens recombines the beams in the focal region. As shown in Fig. 1(b), the dispersion causes the laser pulses to be broadened after the DMD and everywhere before and after the front focal plane of the objective due to the spectral–temporal relationship. Temporal focusing is thus achieved at the focal region, where all frequency components spatially overlap, leading to the shortest laser pulse and the capability of optical cross-sectioning [9,15]. Note that to ensure high diffraction efficiency, the incident angle of the laser beam on the DMD micromirror array should satisfy the blazing condition.

Fig. 1 Principles of generating structured images in a TFFM based on a DMD: (a) DMD disperses the laser pulses along the optical path until the objective lens (OBJ) recombines the spectral components, achieving temporal focusing at the focal region, where the shortest pulse is formed; CL: collimating lens; FFP: front focal plane; (b) illustration of pulse widening effect outside the focal region due to temporal focusing; (c) sinusoidal fringe patterns are formed in the focal plane by the interference of the 0th and ± 1st order diffractions; the characteristics of the fringes can be controlled by the vertical stripes on the DMD. The right image shows measured in-focus fluorescence emissions encoded with sinusoidal patterns. Note that in (a) and (c), the CL and OBJ form a 4-f system.

To encode sinusoidal fringe patterns onto the image, one may recall that the inverse Fourier transform of a sinusoidal fringe is a pair of spatial frequency components of opposite polarity, i.e., parallel stripes on the DMD. Accordingly, the sinusoidal fringes can be generated at the focal region by spatially selecting the 0th and ±1st order diffractions, which form the periodic fringes at the front focal plane of the objective lens, as shown in Fig. 1(c). Observing the results in Figs. 1(b) and 1(c), one may find that the plane of temporal focusing and the plane of the sinusoidal fringes coincide; accordingly, structured images encoded with the desired fringe patterns can be directly generated by controlling the dimensions of the "parallel stripes" on the DMD. In addition, since out-of-focus fluorescent signals do not contain the periodic fringe patterns, we can computationally remove the out-of-focus emissions and demodulate the sinusoidal fringe patterns from the in-focus images to enhance the axial resolution of the temporal focusing images. Next, we present the new two-snapshot SLI method and its application to in-focus signal decoding.

2.2 Fast two-snapshot SLI for in-focus emission demodulation

We first mathematically describe the imaging field. The image captured by the camera, D(r), can be expressed as:

$$D(\mathbf{r}) = S(\mathbf{r})\left\{ I_0(\mathbf{r})\left[ 1 + m\cos\!\left(2\pi\,\mathbf{p}_\theta\cdot\mathbf{r} + \varphi\right) \right] \right\}^2 \otimes H(\mathbf{r}) + D_{p2}(\mathbf{r}). \qquad (1)$$
where r := (x, y) is a 2D spatial position vector; S(r) represents the fluorophore density distribution within the illuminated sample; I0(r)[1 + m cos(2πpθ·r + φ)] is the intensity distribution of the interference of the 0th and ±1st order beams that generates the sinusoidal fringe pattern; m is the modulation factor; pθ = (|pθ|cosθ, |pθ|sinθ) is a frequency vector in the reciprocal space; φ is the initial phase of the fringes; H(r) is the point spread function (PSF) of the imaging system; ⊗ denotes convolution; and Dp2(r) represents the out-of-focus fluorescent signals. Note that the time variation of the femtosecond laser pulse is not considered in the analysis. For a two-photon imaging system, the in-focus emission is quadratically proportional to the excitation intensity, and thus the first term in D(r) is squared. The second term in D(r), i.e., the out-of-focus emissions, does not carry the frequency modulation [26].
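For readers who prefer a concrete picture of Eq. (1), the following is a minimal numerical sketch of the image-formation model, assuming a uniform illumination envelope I0, a Gaussian kernel standing in for the PSF H(r), and hypothetical values for the modulation factor, fringe frequency, and out-of-focus background (none of these values are taken from the experiments reported here):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def structured_image(S, m=0.8, p=0.05, theta=np.deg2rad(10.0), phi=0.0,
                     psf_sigma=1.5, background=0.2):
    """Simulate Eq. (1): D = S * {I0 [1 + m cos(2*pi*p_theta . r + phi)]}^2 (x) H + D_p2,
    with I0 = 1 and a Gaussian kernel as a stand-in for the PSF H(r)."""
    ny, nx = S.shape
    y, x = np.mgrid[0:ny, 0:nx]
    # sinusoidal fringe with spatial frequency p (cycles/pixel) along direction theta
    fringe = 1.0 + m * np.cos(2 * np.pi * p * (x * np.cos(theta) + y * np.sin(theta)) + phi)
    in_focus = S * fringe ** 2                                # two-photon emission: quadratic in excitation
    return gaussian_filter(in_focus, psf_sigma) + background  # blur by the PSF, add unmodulated D_p2

# two mutually pi phase-shifted raw structured images, as required by the algorithm
rng = np.random.default_rng(0)
S = rng.random((256, 256))                                    # synthetic fluorophore distribution S(r)
D1 = structured_image(S, phi=0.0)
D2 = structured_image(S, phi=np.pi)
```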

To derive the two-snapshot algorithm, the orientation of the modulated fringe patterns is first automatically identified via spatial frequency vector detection. The modulated in-focus images are then obtained by rotation and subtraction. Lastly, a parallel amplitude demodulation method, derived based on the Hilbert transform, is applied to complete the decoding process via the two phase-shifted structured images, i.e., D1(r) and D2(r). The process flow is illustrated in Fig. 2(a). In the following analysis, the phase difference of D1(r) and D2(r) is set to π to simplify the process, i.e., φ1−φ2 = (2l+1)π, where l is an integer. First, we consider the frequency vector, pθ, in the reciprocal space. The position of pθ can be determined in the reciprocal space by applying local maxima detection. This allows us to identify the skew angle θ of the fringe pattern, shown in Fig. 2(c), by using the formula θ = arctan(pv/pu), where (pu, pv) are the coordinates of pθ, as illustrated in Fig. 2(e). Next, we perform a planar rotation to ensure the fringe pattern is parallel to the y-axis, as shown in Fig. 2(f); the angle of rotation is set to π/2−θ. Image padding is applied before the rotation step in order to avoid introducing artifacts into the reconstructed images. The rotated structured images can be mathematically expressed as:

$$D_n'(\mathbf{r}) = S'(\mathbf{r})\,I_0'^{\,2}(\mathbf{r})\left\{ 1 + \frac{m^2}{2} + 2m\cos\!\left(2\pi|\mathbf{p}_\theta|x + \varphi_n\right) + \frac{m^2}{2}\cos\!\left[ 2\left(2\pi|\mathbf{p}_\theta|x + \varphi_n\right) \right] \right\} \otimes H(\mathbf{r}) + D_{p2}'(\mathbf{r}). \qquad (2)$$
where n = 1 or 2; S'(r)I0'²(r) represents the rotated in-focus fluorescent signal; and Dp2'(r) is the rotated out-of-focus fluorescent signal. Accordingly, the non-modulated term, Dp2'(r), can be removed by a subtraction step, i.e., Dd'(r) = D1'(r) − D2'(r). The result is expressed in Eq. (3):
$$D_d'(\mathbf{r}) \propto S'(\mathbf{r})\left[ I_0'^{\,2}(\mathbf{r})\cos\!\left(2\pi|\mathbf{p}_\theta|x + \varphi_1\right) \right] \otimes H(\mathbf{r}). \qquad (3)$$
where Dd'(r) is the subtracted image, as illustrated in Fig. 2(g). Note that Dd'(r) represents the sinusoidally modulated in-focus signal, where its amplitude, S'(r)I0'2(r), is our target of decoding.
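As an illustration of steps 1–3 (frequency-vector detection, padding and rotation, and subtraction), the following sketch operates on the two simulated π-shifted images D1 and D2 from the previous snippet; the peak-search strategy and function names are illustrative and are not taken from the authors' implementation:

```python
import numpy as np
from scipy.ndimage import rotate

def detect_fringe_angle(D):
    """Locate the +/-1st order peak p_theta in reciprocal space and return its skew angle theta."""
    F = np.abs(np.fft.fftshift(np.fft.fft2(D)))
    cy, cx = F.shape[0] // 2, F.shape[1] // 2
    F[cy - 3:cy + 4, cx - 3:cx + 4] = 0.0            # suppress the DC region before the peak search
    pv, pu = np.unravel_index(np.argmax(F), F.shape)
    return np.arctan2(pv - cy, pu - cx)              # theta = arctan(p_v / p_u)

def rotate_and_subtract(D1, D2):
    """Rotate so that the fringes are parallel to the y-axis, then remove the unmodulated term."""
    theta = detect_fringe_angle(D1)
    ang = np.rad2deg(np.pi / 2 - theta)              # rotation angle pi/2 - theta (Fig. 2(f))
    pad = max(D1.shape) // 2                         # zero padding to avoid rotation artifacts
    D1r = rotate(np.pad(D1, pad), ang, reshape=False, order=1)
    D2r = rotate(np.pad(D2, pad), ang, reshape=False, order=1)
    return D1r - D2r, theta                          # Eq. (3): D_d'(r) = D_1'(r) - D_2'(r)

Dd, theta = rotate_and_subtract(D1, D2)
```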

Fig. 2 Processing steps of the two-snapshot SLI algorithm: (a) flowchart of the two-snapshot SLI algorithm; (b) test object: image of the cameraman, i.e., S(r); (c) structured fringe pattern, 1+mcos(2πpθ·r+φn), n = 1 or 2; (d) generation of the raw structured image described in Eq. (1), i.e., the image is simulated by convolving the modulated test object image with the PSF of the optical system; (e) identification of the skew angle, θ, of the spatial frequency vector, pθ, in reciprocal space via local maxima detection; (f) rotation of the structured images by π/2−θ to align the fringe pattern parallel to the y-axis, described in Eq. (2); image padding is performed before rotation to avoid the generation of artifacts. The region of interest is indicated by the white box; (g) image subtraction to remove the Dp2'(r) term, described in Eq. (3); (h) result of the parallel Hilbert transform, where the s 1D vectors are reshaped into a 2D image; (i) amplitude demodulation result for the rotated in-focus image, described in Eq. (9); and (j) reconstructed in-focus image, i.e., S(r)I0²(r).

Next, a set of complex vectors is constructed based on Dd'(r) for decoding. First, the 2D image, Dd'(r), is decomposed into row vectors, i.e., V1(e1), V2(e2), …, Vk(ek), …, Vs(es), where s represents the total number of rows of Dd'(r); Vk(ek) represents the kth row vector (k = 1, 2, 3, …, s); and e1, e2, …, ek, …, es are the orthogonal bases of Dd'(r). Vk(ek) is mathematically expressed as:

$$V_k(\mathbf{e}_k) \propto S'(\mathbf{e}_k)\left[ I_0'^{\,2}(\mathbf{e}_k)\cos\!\left(2\pi|\mathbf{p}_\theta|x + \varphi_n\right) \right] \otimes H(\mathbf{r}). \qquad (4)$$

Accordingly, we can process the 1D complex vectors in a parallel fashion in the computer. To achieve this, we define VAk(ek) as:

$$V\!A_k(\mathbf{e}_k) = V_k(\mathbf{e}_k) + i\,V\!H_k(\mathbf{e}_k). \qquad (5)$$
where the real part is the kth row vector, Vk(ek); the imaginary part, VHk(ek), is generated by the 1D Hilbert transform, i.e., VHk(ek) = H{Vk(ek)} [27]. VHk(ek) can be mathematically expressed as:

$$V\!H_k(\mathbf{e}_k) \propto S'(\mathbf{e}_k)\left[ I_0'^{\,2}(\mathbf{e}_k)\sin\!\left(2\pi|\mathbf{p}_\theta|x + \varphi_n\right) \right] \otimes H(\mathbf{r}). \qquad (6)$$

Next, we perform amplitude demodulation in parallel by computing the Euclidean norms of the complex vectors. First, substituting the expressions of Vk(ek) and VHk(ek) into Eq. (5), we obtain:

$$V\!A_k(\mathbf{e}_k) \propto S'(\mathbf{e}_k)\,I_0'^{\,2}(\mathbf{e}_k)\left[ \cos\!\left(2\pi|\mathbf{p}_\theta|x + \varphi_n\right) + i\sin\!\left(2\pi|\mathbf{p}_\theta|x + \varphi_n\right) \right] \otimes H(\mathbf{r}). \qquad (7)$$

By taking the Euclidean norm of the complex vectors, we arrive at:

$$\left| V\!A_k(\mathbf{e}_k) \right| \propto S'(\mathbf{e}_k)\,I_0'^{\,2}(\mathbf{e}_k) \otimes H(\mathbf{r}). \qquad (8)$$

Importantly, from Eq. (8), |VAk(ek)| is proportional to S'(ek)I0'²(ek)⊗H(r), where all the sine and cosine modulation terms in the expression are removed. Thus, simply by stacking |VA1(e1)|, |VA2(e2)|, …, |VAk(ek)|, …, |VAs(es)|, the rotated in-focus image can be decoded and expressed as:

$$S'(\mathbf{r})\,I_0'^{\,2}(\mathbf{r}) \propto \left[\, \left| V\!A_1(\mathbf{e}_1) \right|,\ \left| V\!A_2(\mathbf{e}_2) \right|,\ \ldots,\ \left| V\!A_s(\mathbf{e}_s) \right| \,\right]^{T}. \qquad (9)$$

The decoded in-focus image is illustrated in Fig. 2(i). To fully restore the in-focus fluorescent signals, S(r)I0²(r), we lastly rotate S'(r)I0'²(r) by an angle of θ−π/2 to align the resultant image to its original orientation, as shown in Fig. 2(j). Note that since the fringes are parallel to the y-axis, it is unnecessary for our method to implement the bidimensional empirical mode decomposition (BEMD) and spiral phase Hilbert transform [28] in the amplitude demodulation step, making the algorithm simple and efficient. It is worthwhile to note that if the optical system is perfectly aligned such that the modulation fringes are parallel to the y-axis, steps 1 and 2, i.e., detection of the spatial frequency vector as well as image padding and rotation, may be entirely omitted, leading to a shorter calculation time.
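To make the row-wise demodulation of Eqs. (4)–(9) concrete, the following sketch applies a 1D Hilbert transform to every row of the rotated, subtracted image Dd from the previous snippet and takes the row-wise magnitudes; the cropping logic assumes the simple symmetric padding used above and is illustrative only:

```python
import numpy as np
from scipy.signal import hilbert
from scipy.ndimage import rotate

def demodulate_rows(Dd):
    """Eqs. (5)-(8): the analytic signal V_A = V + i*H{V} of each row removes the cosine carrier,
    so |V_A| is proportional to the demodulated in-focus signal S' * I0'^2 (x) H."""
    VA = hilbert(Dd, axis=1)             # all rows processed in parallel along the x-axis
    return np.abs(VA)                    # Eq. (9): stack of row magnitudes

def reconstruct(Dd, theta, original_shape):
    """Demodulate, rotate back by theta - pi/2, and crop to the original field of view."""
    rec = rotate(demodulate_rows(Dd), np.rad2deg(theta - np.pi / 2), reshape=False, order=1)
    oy = (rec.shape[0] - original_shape[0]) // 2
    ox = (rec.shape[1] - original_shape[1]) // 2
    return rec[oy:oy + original_shape[0], ox:ox + original_shape[1]]

in_focus = reconstruct(Dd, theta, S.shape)   # approximates S(r)*I0^2(r) convolved with the PSF
```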

3. Experimental results

3.1 Experimental setup

Figure 3 presents the optical configuration of the DMD-based TFFM. The laser source is a Ti:sapphire regenerative laser amplifier (Spitfire Pro, Spectra-Physics) with an average power of 4 W, a pulse width of 100 fs, a repetition rate of 1 kHz, a center wavelength of 800 nm, and an output beam diameter of ~12 mm. A mechanical shutter is included in the system to control the excitation time. A half-wave plate and a polarizing beam splitter together control the laser power. Two high-reflectance mirrors, M1 and M2, guide the laser beam to the DMD (DLP 4500, 912 × 1140 pixels; pixel size: 7.6 × 7.6 μm, Texas Instruments), where the DMD replaces the blazed grating in a generic temporal focusing system and simultaneously functions as a diffraction grating and a programmable binary mask. To achieve high efficiency, the incident angle of the laser beam is set to 24.8° with respect to the DMD surface so that the collected 5th order diffraction satisfies the blazing condition. After the collimating lens (CL), an objective lens (CFI Apo LWD 40X, NA 1.15, Nikon) recombines the dispersed spectral components on the x-z plane, shown in Fig. 1(a), and temporal focusing is achieved at the focal plane of the objective lens. Note that the objective lens simultaneously recombines the 0th and ±1st order diffractions on the y-z plane, shown in Fig. 1(c) and Fig. 3, to form the sinusoidal fringes at the focal plane. The field of view is ~300 × 250 μm², where each DMD pixel maps to an area of 270 × 270 nm². The sample is mounted on a precision xyz stage (MP-285, Sutter Instrument) for positioning. The emissions are collected by the objective lens, separated from the excitation light via a dichroic mirror (ET780lp, Chroma) and a band-pass filter, and lastly imaged onto a high-sensitivity electron-multiplying charge-coupled device (EMCCD) camera (ProEM, Princeton Instruments). A camera zoom lens (EF 70-200 mm f/2.8L IS II USM, Canon) is installed before the EMCCD camera for ease of adjusting the focal plane. A data acquisition (DAQ) card (USB-6363, National Instruments) collects and processes the signals to form images.
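As a quick consistency check of the quoted field of view, one can multiply the DMD pixel count by the stated 270 nm per-pixel mapping; this assumes that the full 1140 × 912 mirror array is imaged into the sample plane, which is our assumption rather than a statement from the text:

```python
# demagnification of one DMD pixel: 7.6 um -> ~0.27 um at the sample, i.e. roughly 28x
dmd_cols, dmd_rows = 1140, 912            # DLP 4500 mirror array
pixel_at_sample_um = 0.27                 # stated mapping of one DMD pixel at the sample
print(7.6 / pixel_at_sample_um)           # ~28x overall demagnification
print(dmd_cols * pixel_at_sample_um,      # ~308 um
      dmd_rows * pixel_at_sample_um)      # ~246 um, i.e. the quoted ~300 x 250 um^2 field of view
```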

Fig. 3 Schematics of the DMD-based SLI TFFM system. Laser: regenerative laser amplifier; HWP: half-wave plate; PBS: polarizing beam splitter; M1 - M3: mirrors; CL: collimating lens; DM: dichroic mirror; BPF: band pass filter; OBJ: objective lens; ZL: zoom lens.

3.2 Generation of sinusoidal fringes via the DMD

Before implementing the two-snapshot algorithm, we first verify that the DMD can precisely generate and control the designed sinusoidal fringe patterns. A thin layer of Rhodamine 6G (Rh6G) mixed in polymethyl-methacrylate (PMMA) is prepared [29] to illustrate the capability of the DMD. In the experiment, the on-off states of the DMD micromirrors are controlled electronically to generate parallel stripes of 50% duty cycle, as discussed in Section 2.1. The period of the stripe pattern is set to 2.70 µm, i.e., 10 DMD pixels.
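A minimal sketch of such a binary mirror map is given below, assuming vertical stripes with a 10-pixel period and 50% duty cycle across the full DLP 4500 array; the helper name is illustrative:

```python
import numpy as np

def stripe_pattern(rows=912, cols=1140, period=10):
    """Binary DMD mirror map: 'period'-pixel vertical stripes with a 50% duty cycle."""
    on_columns = (np.arange(cols) % period) < (period // 2)
    return np.tile(on_columns, (rows, 1)).astype(np.uint8)

pattern = stripe_pattern()   # e.g. the pattern of Fig. 4(a); its complement gives the pi-shifted pattern
```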

Note that π phase-shifted patterns can be generated by applying binary patterns of reversed states. Figures 4(a) and 4(b) present the binary patterns on the DMD that have opposite states; the imaging results of the Rh6G sample are presented in Figs. 4(c) and 4(d), respectively. The fluorescent intensity profiles along the red and blue lines in Figs. 4(c) and 4(d) are plotted in Fig. 4(e), where the circles represent the measured fluorescent intensities, and the solid lines represent the least-squares fitted results using [1 + cos(|pθ|x + φn)]², where n = 1 or 2. From the results, one can conclude that the measured intensities have sinusoidal profiles, and the two curves in Fig. 4(e) have a precise phase shift of 180°. Figure 4(f) presents the Fourier transform of the measured intensity data in Fig. 4(e), where one can clearly observe that the color-coded ±1st order peaks have opposite signs, confirming again the 180° phase shift. Importantly, from Fig. 4(f), we also observe the 2nd order peaks, as indicated by the black arrows, which represent the second harmonic frequency component, i.e., the (m²/2)cos[2(2π|pθ|x + φn)] term in Eq. (2).
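A least-squares fit of this form can be reproduced along the following lines, assuming x, y1, and y2 hold the sampled positions (in µm) and the two measured profiles; here they are replaced by synthetic stand-ins so the snippet runs on its own, and the parameter names and initial guesses are illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

def fringe_model(x, A, m, T, phi, c):
    """Two-photon fringe profile: A*[1 + m*cos(2*pi*x/T + phi)]^2 + offset."""
    return A * (1.0 + m * np.cos(2.0 * np.pi * x / T + phi)) ** 2 + c

# synthetic stand-ins for the measured profiles of Fig. 4(e) (period 2.70 um, pi phase shift)
rng = np.random.default_rng(1)
x = np.linspace(0.0, 13.5, 200)
y1 = fringe_model(x, 1.0, 0.9, 2.7, 0.3, 0.05) + 0.02 * rng.standard_normal(x.size)
y2 = fringe_model(x, 1.0, 0.9, 2.7, 0.3 + np.pi, 0.05) + 0.02 * rng.standard_normal(x.size)

p1, _ = curve_fit(fringe_model, x, y1, p0=[1.0, 0.8, 2.7, 0.0, 0.0])
p2, _ = curve_fit(fringe_model, x, y2, p0=[1.0, 0.8, 2.7, np.pi, 0.0])
print(np.rad2deg(p2[3] - p1[3]) % 360.0)   # expected to be close to 180 degrees
```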

Fig. 4 Generation of π phase-shifted sinusoidal fringe patterns via a DMD. (a) and (b): binary fringe patterns programmed to the DMD of opposite states; the period of the stripe pattern is ten pixels (2.70 µm); (c) and (d): imaging results of the Rh6G sample at the focal plane when applying DMD patterns of (a) and (b) respectively; (e) measured fluorescent intensity profiles across the red and blue lines in (c) and (d) respectively. The circles and solid lines represent the raw data and the least-squares fitting results, which have a 180° phase shift; (f) Fourier transform of the fluorescent intensity profiles in (e), where the 1st and 2nd order peaks can be clearly observed.

3.3 Axial resolution enhancement

In this section, we experimentally characterize the axial resolution of the SLI TFFM based on the two-snapshot algorithm via the thin Rh6G sample. The axial resolution is determined by axially scanning the Rh6G sample in 1 μm steps over a range of 20 μm, where the fluorescent intensity is recorded at every step. The results are presented in Fig. 5, where the red and blue circles represent the measured fluorescent intensities of the TFFM "without" and "with" the two-snapshot algorithm, respectively. The solid lines are least-squares Gaussian fits. Analyzing the results, one can conclude that the axial resolution, i.e., the full width at half maximum (FWHM), of the TFFM "without" and "with" the two-snapshot algorithm is 5.84 μm and 4.30 μm, respectively. Accordingly, the axial resolution enhancement factor, defined in [30], is calculated to be 1.25.
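The FWHM values above follow directly from the Gaussian fits; a minimal sketch of such a fit is shown below, assuming z holds the axial positions (in µm) and intensity the normalized fluorescence recorded at each step (both are measurement arrays not reproduced here, so the functions are only defined, not executed):

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(z, A, z0, sigma, c):
    return A * np.exp(-((z - z0) ** 2) / (2.0 * sigma ** 2)) + c

def axial_fwhm(z, intensity):
    """Least-squares Gaussian fit of the axial response; FWHM = 2*sqrt(2*ln 2)*sigma."""
    p0 = [intensity.max(), z[np.argmax(intensity)], 2.0, 0.0]
    popt, _ = curve_fit(gaussian, z, intensity, p0=p0)
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(popt[2])
```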

Fig. 5 Axial resolution characterization via the thin Rh6G sample; the red and blue circles represent the measured fluorescent intensities of the TFFM “without” and “with” the two-snapshot algorithm.

Note that the theoretical prediction of the axial resolution of a TFFM is derived and reported in [31,32]. By substituting our system characteristics, i.e., an objective NA of 1.15, a refractive index of the immersion fluid of n = 1.33, and a system magnification of M = 40, into the equation, the optimal axial resolution of our system is calculated to be 3.79 μm. Next, based on Stokseth's approximation [33], we can calculate the axial resolution for structured light illumination systems. By substituting the system characteristics again, the axial resolution is found to be 2.73 μm. Accordingly, the predicted axial resolution enhancement factor is 1.27, which matches well with the experimental result. It is worthwhile to note that the theoretical axial resolution of the two-snapshot algorithm is identical to that of the conventional RMS algorithm; in other words, the two-snapshot algorithm improves the data acquisition speed by a factor of 2.5 without sacrificing image resolution.

3.4 Imaging experiments on pollen grain samples

Imaging experiments on pollen grain samples (Item# 304264, Carolina Biological Supply) have been performed to characterize the performance of the TFFM with the two-snapshot algorithm. Figures 6(a)-6(c) present the imaging results of a pollen grain (~30 × 30 μm², cropped from the original image) at three different depths, i.e., 0 μm, 6 μm, and 12 μm, without SLI algorithms. From the images, one can observe the presence of out-of-focus fluorescent signals, which blur the image. Figures 6(d)-6(f) present the reconstructed images via two snapshots in the same imaging field. From the results, we find that the out-of-focus emissions have been effectively suppressed by the two-snapshot algorithm, and low-contrast features become evident and clear, i.e., the signal-to-background ratio (SBR) is improved. The effect may be clearly observed when one compares the contours of the pollen in Figs. 6(c) and 6(f). Figure 6(g) presents normalized intensity profiles along the green and blue dashed lines in Figs. 6(c) and 6(f), respectively. The results confirm the strong reduction in mean background intensity, i.e., from 0.69 to 0.36 (calculated along the dashed lines), and an increase of the SBR from 1.5 to 2.7. Overall, the imaging results demonstrate the capability of the two-snapshot method for (1) out-of-focus fluorescent emission rejection and (2) axial resolution enhancement.
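The background and SBR figures quoted above can be computed from the line profiles in a straightforward way; the sketch below assumes profile is a normalized intensity profile sampled along one dashed line and signal_mask is a boolean array marking the pixels on the pollen features (how the mask is chosen is our illustrative assumption, not described in the text):

```python
import numpy as np

def contrast_metrics(profile, signal_mask):
    """Return (mean background intensity, signal-to-background ratio) along a line profile."""
    signal = profile[signal_mask].mean()
    background = profile[~signal_mask].mean()
    return background, signal / background
```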

Fig. 6 TFFM imaging results of a pollen grain at three different depths. (a)-(c) cropped optical cross-sections of the pollen at 0 µm, 6 µm, and 12 µm, respectively; (d)-(f) two-snapshot reconstructed images of the same pollen grain at 0 µm, 6 µm, and 12 µm, respectively, where one may clearly observe the enhanced axial resolution and the effect of out-of-focus fluorescent emission rejection by comparing (c) and (f); (g) normalized intensity profiles along the red and blue dashed lines in (c) and (f). The scale bar is 10 μm. All raw images are collected at 20 ms exposure time.

3.5 Imaging experiments on mouse kidney slices

Next, we perform imaging experiments on mouse kidney slices, i.e., fluorescently labeled glomeruli and convoluted tubules sections with a thickness of 16 µm (F24630, Invitrogen). In this experiment, we compare the raw cross-sectional images with those from the conventional RMS algorithm as well as the two-snapshot algorithm. Figure 7 presents the TFFM optical cross-sectional imaging results "without" and "with" the SLI algorithms. Each image recorded from the EMCCD camera has 512 × 384 pixels, mapping to an imaging field of 204 × 154 µm². Figures 7(a)-7(e) present the raw images captured by the TFFM at different depths, i.e., −6 μm, −3 μm, 0 μm, 3 μm, and 6 μm. Figures 7(f)-7(j) present the reconstructed images based on the RMS algorithm at the five corresponding depths. (Note that the RMS algorithm requires five 2π/5 phase-shifted raw images for image reconstruction.) Figures 7(k)-7(o) present the reconstructed images based on the two-snapshot algorithm at the five corresponding depths, i.e., each column in Fig. 7 corresponds to the same depth and field of view. From the results, one may observe the strong presence of out-of-focus emissions in the raw images at different depths, i.e., Figs. 7(a)-7(e). Comparing the RMS and two-snapshot algorithms in Figs. 7(f)-7(j) and Figs. 7(k)-7(o), one may find that when the raw images are collected with the same exposure time, i.e., 20 ms, both algorithms effectively suppress the out-of-focus emissions and improve the axial resolution, and the reconstructed images have comparable noise levels. This can be confirmed by calculating the normalized background intensities of Figs. 7(b), 7(d), 7(g), 7(i), 7(l), and 7(n) inside the randomly selected dashed boxes, which are found to be 0.49, 0.46, 0.33, 0.26, 0.31, and 0.24, respectively. A video demonstration of the results in Fig. 7 is presented in Visualization 1. It is worthwhile to note that in Figs. 6 and 7, some artifacts are observed in the reconstructed images for both the RMS and two-snapshot algorithms. This is due to the low normalized modulation frequency, and may be removed by (1) using an objective lens of larger aperture; or (2) using a photodetector of better resolution, i.e., smaller pixels.

Fig. 7 TFFM imaging results of fluorescently labeled glomeruli and convoluted tubules samples (mouse kidney) “without” and “with” the SLI algorithms at five different depths, i.e., −6 μm, −3 μm, 0 μm, 3 μm and 6 μm from left to right; each column corresponds to the same depth and field of view. All raw images are collected at 20 ms exposure time. (a) - (e) raw images captured by the TFFM, (f) - (j) reconstructed images based on the RMS algorithm; (k) - (o) reconstructed images based on the two-snapshot algorithm; the scale bar is 20 μm.

In summary, compared with the conventional RMS algorithm, the two-snapshot algorithm has the following advantages: (1) the image acquisition speed is increased, i.e., by a factor of 2.5 in two-photon microscopy, as only two raw images are required versus five for the RMS method; and (2) at the same imaging speed, the two-snapshot method requires fewer raw images, which effectively increases the exposure time by a factor of 2.5, resulting in images with better signal-to-noise ratios. One limitation of the two-snapshot algorithm is that the Hilbert transform is a relatively time-consuming operation, and thus the reconstructed images may not be displayed in real time (note that the raw images can be displayed at the acquisition speed), whereas the RMS algorithm may allow real-time display when paired with a high-performance computer. This drawback may be addressed by more powerful computers with parallel computing techniques [34]. Compared with other algorithms that require two raw images, i.e., the HiLo technique [23–25] and BEMD [28], our new two-snapshot algorithm has the advantage of being completely adaptive, without the need for manual inspection and selection of critical operation parameters such as the scaling factor, cut-off frequency, or bidimensional intrinsic mode functions. Note that the two-snapshot algorithm also has superior computation speed compared with both the HiLo and BEMD methods.

4. Conclusion

We have presented a new two-snapshot SLI algorithm for fast image acquisition that requires only two mutually π phase-shifted raw images. The new algorithm is implemented on a custom-built DMD-based TFFM to demonstrate its unique characteristics, including enhanced axial resolution and background rejection capability. Mathematical models of the two-snapshot method have been derived. The axial resolution enhancement has been experimentally characterized; the results show an enhancement factor of 1.25, which matches well with the theoretical prediction, i.e., 1.27. Imaging experiments have been devised and performed on pollen grains as well as mouse kidney slices; the experimental results show clean optical cross-sectional images, confirming the strong background rejection capability and improved axial resolution. Importantly, these results have been achieved with only two raw images in a completely adaptive and automated fashion, promising 2.5 times higher image acquisition speed. The new two-snapshot algorithm can be readily adapted to any sinusoidally modulated imaging system, e.g., LED- or continuous-wave laser-based SLI microscopes [35,36], multifocal structured illumination microscopy [37], light sheet microscopy [38], or electron microscopy, and may generate significant impact in the field of biomedical imaging.

Funding

HKSAR Research Grants Council (RGC); General Research Fund (GRF) (CUHK 14202815); Innovation and Technology Commission (ITC), Innovation and Technology Fund (ITF), ITS/007/15P.

References and links

1. W. Denk, J. H. Strickler, and W. W. Webb, “Two-photon laser scanning fluorescence microscopy,” Science 248(4951), 73–76 (1990). [CrossRef]   [PubMed]  

2. F. Helmchen and W. Denk, “Deep tissue two-photon microscopy,” Nat. Methods 2(12), 932–940 (2005). [CrossRef]   [PubMed]  

3. K. H. Kim, C. Buehler, and P. T. So, “High-speed, two-photon scanning microscope,” Appl. Opt. 38(28), 6004–6009 (1999). [CrossRef]   [PubMed]  

4. J. D. Lechleiter, D.-T. Lin, and I. Sieneart, “Multi-photon laser scanning microscopy using an acoustic optical deflector,” Biophys. J. 83(4), 2292–2299 (2002). [CrossRef]   [PubMed]  

5. V. Iyer, T. M. Hoogland, and P. Saggau, “Fast Functional Imaging of Single Neurons Using Random-Access Multiphoton (RAMP) Microscopy,” J. Neurophysiol. 95(1), 535–545 (2006). [CrossRef]   [PubMed]  

6. D. Oron, E. Tal, and Y. Silberberg, “Scanningless depth-resolved microscopy,” Opt. Express 13(5), 1468–1476 (2005). [CrossRef]   [PubMed]  

7. E. Papagiakoumou, F. Anselmi, A. Bègue, V. de Sars, J. Glückstad, E. Y. Isacoff, and V. Emiliani, “Scanless two-photon excitation of channelrhodopsin-2,” Nat. Methods 7(10), 848–854 (2010). [CrossRef]   [PubMed]  

8. E. Block, M. Greco, D. Vitek, O. Masihzadeh, D. A. Ammar, M. Y. Kahook, N. Mandava, C. Durfee, and J. Squier, “Simultaneous spatial and temporal focusing for tissue ablation,” Biomed. Opt. Express 4(6), 831–841 (2013). [CrossRef]   [PubMed]  

9. J.-N. Yih, Y. Y. Hu, Y. D. Sie, L.-C. Cheng, C.-H. Lien, and S.-J. Chen, “Temporal focusing-based multiphoton excitation microscopy via digital micromirror device,” Opt. Lett. 39(11), 3134–3137 (2014). [CrossRef]   [PubMed]  

10. G. J. Brakenhoff, J. Squier, T. Norris, A. C. Bliton, M. H. Wade, and B. Athey, “Real-time two-photon confocal microscopy using a femtosecond, amplified Ti:sapphire system,” J. Microsc. 181(3), 253–259 (1996). [CrossRef]   [PubMed]  

11. J. Bewersdorf, R. Pick, and S. W. Hell, “Multifocal multiphoton microscopy,” Opt. Lett. 23(9), 655–657 (1998). [CrossRef]   [PubMed]  

12. A. H. Buist, M. Muller, J. Squier, and G. J. Brakenhoff, “Real time two-photon absorption microscopy using multi point excitation,” J. Microsc. 192(2), 217–226 (1998). [CrossRef]  

13. T. Nielsen, M. Fricke, D. Hellweg, and P. Andresen, “High efficiency beam splitter for multifocal multiphoton microscopy,” J. Microsc. 201(3), 368–376 (2001). [CrossRef]   [PubMed]  

14. K. H. Kim, C. Buehler, K. Bahlmann, T. Ragan, W.-C. A. Lee, E. Nedivi, E. L. Heffer, S. Fantini, and P. T. C. So, “Multifocal multiphoton microscopy based on multianode photomultiplier tubes,” Opt. Express 15(18), 11658–11678 (2007). [CrossRef]   [PubMed]  

15. G. Zhu, J. van Howe, M. Durst, W. Zipfel, and C. Xu, “Simultaneous spatial and temporal focusing of femtosecond pulses,” Opt. Express 13(6), 2153–2159 (2005). [CrossRef]   [PubMed]  

16. J. Jiang, D. Zhang, S. Walker, C. Gu, Y. Ke, W. H. Yung, and S. C. Chen, “Fast 3-D temporal focusing microscopy using an electrically tunable lens,” Opt. Express 23(19), 24362–24368 (2015). [CrossRef]   [PubMed]  

17. H. Dana and S. Shoham, “Numerical evaluation of temporal focusing characteristics in transparent and scattering media,” Opt. Express 19(6), 4937–4948 (2011). [CrossRef]   [PubMed]  

18. A. Straub, M. E. Durst, and C. Xu, “High speed multiphoton axial scanning through an optical fiber in a remotely scanned temporal focusing setup,” Biomed. Opt. Express 2(1), 80–88 (2010). [CrossRef]   [PubMed]  

19. H. Choi, E. Y. S. Yew, B. Hallacoglu, S. Fantini, C. J. R. Sheppard, and P. T. C. So, “Improvement of axial resolution and contrast in temporally focused widefield two-photon microscopy with structured light illumination,” Biomed. Opt. Express 4(7), 995–1005 (2013). [CrossRef]   [PubMed]  

20. M. A. Neil, R. Juškaitis, and T. Wilson, “Method of obtaining optical sectioning by using structured light in a conventional microscope,” Opt. Lett. 22(24), 1905–1907 (1997). [CrossRef]   [PubMed]  

21. K. Isobe, T. Takeda, K. Mochizuki, Q. Song, A. Suda, F. Kannari, H. Kawano, A. Kumagai, A. Miyawaki, and K. Midorikawa, “Enhancement of lateral resolution and optical sectioning capability of two-photon fluorescence microscopy by combining temporal-focusing with structured illumination,” Biomed. Opt. Express 4(11), 2396–2410 (2013). [CrossRef]   [PubMed]  

22. L.-C. Cheng, C.-H. Lien, Y. Da Sie, Y. Y. Hu, C.-Y. Lin, F.-C. Chien, C. Xu, C. Y. Dong, and S.-J. Chen, “Nonlinear structured-illumination enhanced temporal focusing multiphoton excitation microscopy with a digital micromirror device,” Biomed. Opt. Express 5(8), 2526–2536 (2014). [CrossRef]   [PubMed]  

23. D. Lim, K. K. Chu, and J. Mertz, “Wide-field fluorescence sectioning with hybrid speckle and uniform-illumination microscopy,” Opt. Lett. 33(16), 1819–1821 (2008). [CrossRef]   [PubMed]  

24. E. Y. S. Yew, H. Choi, D. Kim, and P. T. C. So, “Wide-field two-photon microscopy with temporal focusing and HiLo background rejection,” Proc. SPIE 7903, 79031O (2011). [CrossRef]  

25. C.-Y. Chang, Y. Y. Hu, C.-Y. Lin, C.-H. Lin, H.-Y. Chang, S.-F. Tsai, T.-W. Lin, and S.-J. Chen, “Fast volumetric imaging with patterned illumination via digital micro-mirror device-based temporal focusing multiphoton microscopy,” Biomed. Opt. Express 7(5), 1727–1736 (2016). [CrossRef]   [PubMed]  

26. D. Karadaglić and T. Wilson, “Image formation in structured illumination wide-field fluorescence microscopy,” Micron 39(7), 808–818 (2008). [CrossRef]   [PubMed]  

27. S. L. Hahn, Hilbert Transforms in Signal Processing (Artech Print on Demand, 1996).

28. K. Patorski, M. Trusiak, and T. Tkaczyk, “Optically-sectioned two-shot structured illumination microscopy with Hilbert-Huang processing,” Opt. Express 22(8), 9517–9527 (2014). [CrossRef]   [PubMed]  

29. M. A. Lauterbach, E. Ronzitti, J. R. Sternberg, C. Wyart, and V. Emiliani, “Fast Calcium Imaging with Optical Sectioning via HiLo Microscopy,” PLoS One 10(12), e0143681 (2015). [CrossRef]   [PubMed]  

30. M. G. L. Gustafsson, “Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy,” J. Microsc. 198(2), 82–87 (2000). [CrossRef]   [PubMed]  

31. M. E. Durst, G. Zhu, and C. Xu, “Simultaneous spatial and temporal focusing in nonlinear microscopy,” Opt. Commun. 281(7), 1796–1805 (2008). [CrossRef]   [PubMed]  

32. E. Y. S. Yew, C. J. R. Sheppard, and P. T. C. So, “Temporally focused wide-field two-photon microscopy: paraxial to vectorial,” Opt. Express 21(10), 12951–12963 (2013). [CrossRef]   [PubMed]  

33. P. A. Stokseth, “Properties of a Defocused Optical System,” J. Opt. Soc. Am. 59(10), 1314–1321 (1969). [CrossRef]  

34. Z. Yang, Y. Zhu, and Y. Pu, “Parallel Image Processing Based on CUDA,” in 2008 International Conference on Computer Science and Software Engineering (IEEE, 2008), pp. 198–201. [CrossRef]  

35. T. C. Schlichenmeyer, M. Wang, K. N. Elfer, and J. Q. Brown, “Video-rate structured illumination microscopy for high-throughput imaging of large tissue areas,” Biomed. Opt. Express 5(2), 366–377 (2014). [CrossRef]   [PubMed]  

36. M. A. Neil, A. Squire, R. Juskaitis, P. I. Bastiaens, and T. Wilson, “Wide-field optically sectioning fluorescence microscopy with laser illumination,” J. Microsc. 197(1), 1–4 (2000). [CrossRef]   [PubMed]  

37. A. G. York, S. H. Parekh, D. Dalle Nogare, R. S. Fischer, K. Temprine, M. Mione, A. B. Chitnis, C. A. Combs, and H. Shroff, “Resolution doubling in live, multicellular organisms via multifocal structured illumination microscopy,” Nat. Methods 9(7), 749–754 (2012). [CrossRef]   [PubMed]  

38. P. J. Keller, A. D. Schmidt, A. Santella, K. Khairy, Z. Bao, J. Wittbrodt, and E. H. Stelzer, “Fast, high-contrast imaging of animal development with scanned light sheet-based structured-illumination microscopy,” Nat. Methods 7(8), 637–642 (2010). [CrossRef]   [PubMed]  

Supplementary Material (1)

Visualization 1: TFFM imaging results of fluorescently labeled glomeruli and convoluted tubules samples (mouse kidney) "without" and "with" the SLI algorithms, i.e., RMS and two-snapshot methods, from −6 µm to 6 µm.
