
Miniature structured illumination microscope for in vivo 3D imaging of brain structures with optical sectioning

Open Access

Abstract

We present a miniature, lightweight fluorescence microscope with an electrowetting lens and an onboard CMOS sensor for high-resolution volumetric imaging, using structured illumination to reject out-of-focus and scattered light. The miniature microscope (SIMscope3D) delivers structured light through a coherent fiber bundle to obtain optical sectioning with an axial resolution of 18 µm. Volumetric imaging of eGFP-labeled cells in fixed mouse brain tissue at depths up to 260 µm is demonstrated. The ability of the SIMscope3D to provide background-free 3D imaging is shown by recording time series of microglia dynamics in awake mice at depths up to 120 µm in the brain.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

To further our understanding of neural circuits and their function, there is a need for new tools that can perform high-resolution imaging of neural dynamics in animals performing complex behaviors. This goal has motivated the development of miniature, lightweight microscopes that can be head-attached, providing unrestricted motion for studies of behaviors such as navigation or socialization. A variety of miniaturized microscopes have been developed for recording neural activity from large populations of neurons involved in a common neural computation. For example, fluorescence microscopy in awake animals expressing genetically encoded Ca²⁺ indicators can provide real-time functional information with single-cell specificity within local neural circuits. The widely used UCLA miniscope [1,2] includes an LED source and a miniature camera and is designed with enough spatial and temporal resolution to capture GCaMP fluorescence transients from single cells. However, these systems have a large depth of field and do not provide any 3D structural information. Imaging in 3D imposes further requirements on the microscope, including a low-aberration optical design along with the ability to perform optical sectioning and axial scanning to distinguish structures at different depths in the tissue. These capabilities open new applications in resolving the functions of neural circuits in different brain regions, as well as studies of structural cellular changes.

Fiber-coupled miniature microscopes using multiphoton excitation have been demonstrated for recording neuronal activity in the brain [3–8], along with the capability of volumetric imaging [4,5]. The multiphoton excitation processes used for imaging in these miniature microscopes have inherent optical sectioning and provide high spatial resolution. However, laser scanning requires active mechanical scanning elements that limit the acquisition frame rate, in addition to bulky and expensive ultrafast pulsed laser sources. These limitations are partly addressed by single-photon widefield miniature microscopes, e.g., the UCLA miniscope [1,2,9], NINscope [10], FinchScope [11], CHEndoscope [12], Miniscope [13] and Inscopix [14], with some versions even providing wireless capabilities [2], multi-site recording [10] and axial scanning [15]. Recent work used a widefield miniscope modified by placing a phase plate [16] or microlens array [17] in the optical detection path, which provides additional information for computational reconstruction in three dimensions. However, challenges associated with removal of scattered light, particularly for identifying structural features, still exist with these modified 3D miniscopes. There is a need for higher resolution imaging to clearly identify individual cells in dense tissue volumes, and to image processes of both neurons and non-neuronal glial cell populations in the brain.

The idea of obtaining optical sectioning in a conventional widefield microscope by projecting a single-spatial-frequency grid pattern at three relative spatial phase shifts was experimentally demonstrated by Neil et al. [18]. The reconstruction method, based on a square-law detection scheme, rejects the zero-spatial-frequency component, which is not attenuated out of focus, while retaining the components corresponding to the frequency of the grid pattern. The fluorescence generated by the grid pattern is imaged most sharply from the focal plane, hence providing inherent optical sectioning with this structured illumination microscopy method. Reconstruction techniques using structured illumination have also demonstrated sub-diffraction-limited imaging in biological tissue [19–23], in addition to providing optical sectioning. However, these imaging systems can require high numerical aperture (NA) objectives, which poses a challenge for miniature microscopes where weight and dimensions are critical parameters.

Here, we demonstrate the first fiber-coupled miniature microscope with optical-sectioning structured illumination to remove out-of-focus fluorescence and scattered light, enabling full 3D imaging with improved contrast in scattering tissue. The miniature microscope includes an active axial scanning element for volumetric imaging. The structured illumination miniature microscope with 3D imaging (SIMscope3D) uses a digital micromirror device [21] to create the structured illumination pattern, which is then relayed to the imaging plane using a coherent fiber bundle [24,25]. The onboard CMOS camera with a 2.2 µm pixel size enables high-lateral-resolution images free of artifacts from the fiber bundle. The electrowetting axial scanning element provides depth scanning up to 550 µm into the sample. Using the SIMscope3D, we demonstrate proof-of-principle optically sectioned high-resolution imaging up to 260 µm deep in fixed brain tissue labeled with PLP-eGFP, and time-series multiplane imaging of microglia process motion up to 120 µm deep in a head-fixed awake mouse.

2. Methods and materials

2.1 Imaging system

A light emitting diode (LED) at 470 nm (Thorlabs EP470S04) was used as the illumination source for our imaging system. The fiber-coupled LED output was collimated using an objective lens (Olympus UPlanSapo 10x/0.4 NA) and passed through an excitation filter (Chroma ET470/24m). The collimated and filtered output from the LED was then incident on the digital micromirror device (Texas Instruments DLP Lightcrafter 6500). The spatial pattern generated on the digital micromirror device (DMD) was focused onto the back focal plane of a microscope objective (Olympus UPlanSapo 10x/0.4 NA) using a 200 mm achromatic lens (Thorlabs AC508-200-A-ML). The patterned beam was then coupled into a 1.5 m long coherent fiber bundle (Fujikura FIGH-10-500N, 10,000 cores, 2.9 µm core diameter, and 4.5 µm core-to-core spacing [26]) at the focal plane of the microscope objective.
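The grid patterns loaded onto the DMD can be generated as simple binary gratings. The Python sketch below is a minimal illustration rather than the pattern-generation code used in this work: the 1920 × 1080 array size matches the DLP6500 mirror array and the 0, π/2, π phase set matches Section 2.3, but the grating period, the 0°/60°/120° orientations, and the function name dmd_grating are illustrative assumptions.

import numpy as np

def dmd_grating(shape=(1080, 1920), period_px=12, angle_deg=0.0, phase=0.0):
    # Binary grating for the DMD: mirrors ON where a shifted sinusoid is positive.
    # shape      -- (rows, cols) of the mirror array (the DLP6500 is 1080 x 1920)
    # period_px  -- grating period in mirror pixels (illustrative value)
    # angle_deg  -- grating orientation in degrees
    # phase      -- relative phase shift in radians (0, pi/2, pi in this work)
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    theta = np.deg2rad(angle_deg)
    u = xx * np.cos(theta) + yy * np.sin(theta)  # coordinate along the grating direction
    return (np.cos(2 * np.pi * u / period_px + phase) > 0).astype(np.uint8)

# Three orientations x three phases, as used for the OS-SIM acquisition
patterns = [dmd_grating(angle_deg=a, phase=p)
            for a in (0, 60, 120)
            for p in (0, np.pi / 2, np.pi)]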

The pattern generated on the DMD is relayed from the distal to the proximal end of the fiber bundle and coupled into the miniature microscope. The SIMscope3D is designed with an NA of 0.3 and a magnification of 2.2X, providing a circular FOV with a 207 µm diameter from the 460 µm active imaging diameter of the fiber bundle. Commercially available achromatic doublets are used in the optical design, along with a custom dichroic cube beam splitter (Shanghai Optics) to separate the excitation and fluorescence emission paths. The SIMscope3D utilizes an electrowetting axial scanning element (Corning Varioptic A-25H), which provides up to 550 µm of active axial scanning below the cover slip (Figures S2 and S3). The fluorescence generated at the imaging plane is imaged onto a CMOS sensor (Ximea MU9PM-MBRD) through an emission filter (Chroma ET525/50m) integrated with the microscope. The CMOS sensor has a pixel size of 2.2 µm, resulting in a lateral sampling of 1 µm/pixel. The total weight of the SIMscope3D was 6.7 g with a measured height of 30 mm. Pattern generation on the DMD, axial depth control using the electrowetting lens, and image acquisition from the CMOS sensor are controlled using a custom script in the µ-manager software environment. A schematic of the miniature imaging system, along with its optical sectioning and volumetric imaging capabilities, is shown in Fig. 1 (a detailed description of the SIMscope3D setup, design, characterization, and assembly is given in Figures S1-S4).
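As a consistency check of these figures, the object-plane sampling and field of view follow directly from the magnification. The short Python sketch below uses only values quoted above; the ~209 µm field-of-view estimate differs slightly from the reported 207 µm because the 2.2X magnification is rounded.

pixel_pitch_um = 2.2       # CMOS pixel size
magnification = 2.2        # SIMscope3D magnification (rounded)
bundle_diameter_um = 460   # active imaging diameter of the fiber bundle

object_sampling = pixel_pitch_um / magnification   # 1.0 um per pixel at the sample
nyquist_period = 2 * object_sampling               # ~2 um smallest resolvable period
fov_estimate = bundle_diameter_um / magnification  # ~209 um, vs. the reported 207 um

print(object_sampling, nyquist_period, fov_estimate)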


Fig. 1. Schematic of the SIMscope3D optical setup. (A) CAD rendering of the SIMscope3D; (B) Cross-sectional view of the SIMscope3D. The design consists of achromatic doublets for fiber coupling and focusing on the sample. The excitation light and fluorescence emission paths are separated by a dichroic cube. Active axial scanning up to 550 µm into the sample is achieved using an electrowetting lens. The fluorescence from the sample is collected onto an onboard CMOS sensor, as shown by the stripe pattern in (C); (D) SIM reconstructed image compared to the pseudo-widefield (p-WF) reconstruction in a PLP-eGFP labeled fixed tissue sample, showing the optical sectioning provided by SIM; (E) Demonstration of the depth resolving capability of the SIMscope3D with color-coded cells from different imaging planes in a PLP-eGFP labeled fixed tissue sample.


2.2 Imaging sample preparation

For resolution characterization, a sample of fluorescent microspheres (Fluormax G0100) suspended in agarose was prepared in a 35 mm cell culture dish. A coverslip was placed on the exposed surface of the agarose mixture to provide a uniform interface for imaging through. The agarose sample was then mounted under the SIMscope3D for imaging. For fixed tissue imaging, a 350 µm thick coronal slice of fixed, PLP-eGFP mouse brain tissue was mounted on a microscope slide in VECTASHIELD Plus Antifade Mounting Media (H-1900).

For in vivo imaging, cranial windows were implanted as previously described [27]. Briefly, 2 mm² cranial windows were implanted centered on stereotactic coordinates AP +1, ML +1.5 over the motor cortex. A head bar with a 12 mm diameter open window was used to head-fix the mouse without interfering with the SIMscope3D. The mice were anesthetized using 4.5% isoflurane and mounted to the head bar apparatus while unconscious prior to imaging. Imaging in identical locations was completed on the SIMscope3D and 2-photon systems within 5 hours of surgery. B6.129P-Cx3cr1tm1Litt/J (Jackson lab stock #005582) mice were used for all experiments. All experiments involving animals were conducted in accordance with protocols approved by the Animal Care and Use Committee at the University of Colorado Anschutz Medical Campus.

2.3 SIM reconstruction

The fluorescent bead, fixed tissue, and live animal images were acquired using three separate grating orientations with three phases per orientation, with an incident power of 7.4 µW per phase. The spatial frequency of the grating at the imaging plane was chosen as 68.9 mm⁻¹ ($\bar{\nu} = 0.108$) based on our previous preliminary work [28] to maintain modulation contrast at depths up to 300 µm in the tissue, while also not being limited by the core-to-core spacing of the fiber bundle. This frequency corresponds to an optical sectioning strength of 17.6 µm with the Stokseth approximation [18]. The exposure time per phase was 80 ms for fluorescent bead and fixed tissue acquisition and 300 ms for live animal imaging. Following previous approaches to extract optically sectioned images from sinusoidally patterned image sets with varying phase [18,29], we define

$$I(\phi_k) = I_0 + I_c\cos(\phi_k) + I_s\sin(\phi_k)$$
$$\text{for } \phi_k = \left[0, \frac{\pi}{2}, \pi\right],$$
$$\begin{array}{l} I_1 = I(\phi_k = 0) = I_0 + I_c \\ I_2 = I\left(\phi_k = \tfrac{\pi}{2}\right) = I_0 + I_s \\ I_3 = I(\phi_k = \pi) = I_0 - I_c \end{array}$$
where $I_0$ is the non-attenuated background and $I_c$ and $I_s$ are the cosine and sine components of the spatial frequency projected onto the imaging plane. The goal of OS-SIM is to recover the optically sectioned image $I_p = (I_c^2 + I_s^2)^{1/2}$. Given the set of phase angles used in this work,
$$I_p = \frac{1}{2}\left[(2I_2 - I_1 - I_3)^2 + (I_3 - I_1)^2\right]^{1/2}.$$

Optical sectioning can also be obtained using the more conventional choice of phase shifts of 0, 2π/3, and 4π/3 for uniform intensity exposure, which can be easily implemented by adjusting the DMD pattern. A more general OS-SIM algorithm valid for any set of phase angles is provided in Brown et al. [30]. Non-sectioned pseudo-widefield (p-WF) images were obtained by adding the images from the three phases and taking the mean intensity over the three orientations. To process SIMscope3D data, the µ-manager TIFF data were loaded, the OS-SIM image was calculated for each angle at every z-plane, the resulting OS-SIM images were averaged across angles at every z-plane, and the resulting z-stack was then corrected for axial attenuation using histogram matching to the brightest frame [31]. Denoising was performed on the attenuation-corrected OS-SIM z-stack using a J-invariance-tuned total variation algorithm [31–33]. The full pipeline is implemented in Python and provided as Supplemental File SIMscope3D Reconstruction (Code 1, Ref. [34]).
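As a guide to the released reconstruction code (Code 1, Ref. [34]), the following Python sketch outlines the per-plane pipeline described above using scikit-image. The function names, assumed array layout, and total-variation weight search range are illustrative assumptions, so the supplemental code should be consulted for the exact implementation.

import numpy as np
from skimage.exposure import match_histograms
from skimage.restoration import calibrate_denoiser, denoise_tv_chambolle

def os_sim(i1, i2, i3):
    # Optically sectioned image from three phase images (phases 0, pi/2, pi)
    return 0.5 * np.sqrt((2.0 * i2 - i1 - i3) ** 2 + (i3 - i1) ** 2)

def pseudo_widefield(plane):
    # p-WF image for one z-plane of shape (angles, 3, y, x): sum over phases, mean over angles
    return plane.sum(axis=1).mean(axis=0)

def reconstruct_volume(raw):
    # raw is assumed to have shape (z, angles, 3, y, x): three phases per angle at every z-plane
    volume = np.stack([
        np.mean([os_sim(*plane[a]) for a in range(plane.shape[0])], axis=0)
        for plane in raw.astype(float)
    ])
    # Correct axial attenuation by histogram matching every plane to the brightest frame
    brightest = volume[np.argmax(volume.mean(axis=(1, 2)))]
    volume = np.stack([match_histograms(z_img, brightest) for z_img in volume])
    # Total-variation denoising with the weight selected by J-invariant calibration
    denoiser = calibrate_denoiser(
        volume, denoise_tv_chambolle,
        denoise_parameters={'weight': np.linspace(0.01, 0.2, 10)})
    return denoiser(volume)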

3. Experimental results

3.1 Optical sectioning and axial resolution

The optical sectioning characteristics of the SIMscope3D were experimentally determined by imaging a thin fluorescent slide (Valley Scientific FluorCal FC-OCS-RGB), and the axial resolution by imaging 1 µm fluorescent beads. The electrowetting lens was set to the center of its actuation range (z = 275 µm, applied voltage of 54 V). A region was chosen with multiple fluorescent microspheres (Fig. 2(A)) in the FOV. Image acquisition for SIM was performed through this region of interest with 61 slices at an axial stepping interval of 1 µm for a total axial depth of 60 µm.


Fig. 2. (A) Image of 1 µm diameter fluorescent beads for axial resolution characterization; (B) Close up view of a single bead for characterizing the z-axis profile; (C) x-z cross-sectional view of the fluorescent beads; (D) The intensity vs axial depth for all fluorescent beads was fit to a Gaussian function, yielding an axial full width half maximum of 18.14 µm; (E) The intensity vs axial position profile for a thin fluorescent slide was fit to a Gaussian function to characterize the optical sectioning strength, yielding a full width half maximum of 17.8 µm; Error bars correspond to ±1 standard deviation.


A 25 × 25 pixel region around each fluorescent bead was cropped through the depth of the image stack (Fig. 2(B)). The axial intensity profile of each fluorescent bead (Fig. 2(C)) was extracted through the stack using an ImageJ macro, recording the maximum intensity in each slice. The maximum intensity vs. axial depth, averaged across all fluorescent beads, was fit with a single-term Gaussian function. The axial resolution of the SIMscope3D, as indicated by the full width at half maximum (FWHM) of this fit, is 18.14 ± 0.8 µm (Fig. 2(D)). Fitting a Gaussian to the intensity profile of the beads at the focal plane yields a FWHM of 1.36 ± 0.06 µm, indicating that the lateral resolution is limited to 2 µm by Nyquist sampling. The optical sectioning strength of the SIMscope3D was experimentally obtained by fitting the maximum intensity vs. axial depth from reconstructed images of the thin fluorescent slide. The Gaussian fit is shown in Fig. 2(E), and the optical sectioning strength characterized by the FWHM of this fit is 17.8 ± 0.4 µm.
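For reference, the FWHM values quoted above correspond to single-term Gaussian fits of intensity versus axial position. The Python sketch below shows such a fit using scipy; the helper names and initial-guess values are illustrative (the per-slice maxima in this work were extracted with an ImageJ macro, as noted above).

import numpy as np
from scipy.optimize import curve_fit

def gaussian(z, amplitude, center, sigma):
    # Single-term Gaussian used for the axial profile fits
    return amplitude * np.exp(-((z - center) ** 2) / (2.0 * sigma ** 2))

def axial_fwhm_um(z_um, intensity):
    # Fit intensity vs. axial position (um) and return the FWHM in micrometers
    p0 = [intensity.max(), z_um[np.argmax(intensity)], 5.0]  # rough initial guess
    popt, _ = curve_fit(gaussian, z_um, intensity, p0=p0)
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(popt[2])   # FWHM = 2*sqrt(2 ln 2) * sigma

# Example usage with the 61-slice, 1 um step bead stack described above:
# z = np.arange(61.0)                # axial positions in um
# fwhm = axial_fwhm_um(z, bead_max)  # bead_max: per-slice maximum intensities of one bead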

3.3 Fixed tissue imaging

To demonstrate the viability of the SIMscope3D for biological tissue imaging, we imaged regions of the striatum in fixed brain tissue labeled with PLP-eGFP. Figure 3 shows a comparison between p-WF and SIM reconstructed images. The effect of optical sectioning and rejection of out-of-focus fluorescence and scattered background is evident in Fig. 3(C), which compares line-cut intensity plots from the p-WF and SIM images. Owing to the higher contrast obtained via optical sectioning, the SIM images can even distinguish oligodendrocytes against significant background from nearby bundles of myelinated fibers.


Fig. 3. Fixed PLP-eGFP mouse brain tissue imaged with the SIMscope3D. Comparison between the pseudo widefield (A) and SIM reconstructed (B) image is shown; (C) The intensity profile of the line cut indicated in (A) and (B) demonstrating rejection of out of focus background leading to increased contrast and higher visibility of oligodendrocytes in the focus plane. The ability to resolve oligodendrocytes from nearby bundles of myelinated fibers (i and ii) is especially evident.


The SIMscope3D is designed to perform non-mechanical depth scanning up to 550 µm using the integrated electrowetting lens. To demonstrate this volumetric imaging capability, we acquired a stack with an axial stepping interval of 4.58 µm over a total axial depth of 330 µm. Figure 4 shows the ability of the SIMscope3D to image and distinguish cell bodies up to a depth of 260 µm (Visualization 1). Imaging beyond 260 µm is limited in part by the incident power, in addition to scattering and loss of modulation contrast at greater depths in the tissue.


Fig. 4. Image stack with increasing imaging depth collected using the SIMscope3D in the striatum region of fixed PLP-eGFP mouse brain coronal slice. With only 7.4 µW/phase incident on the sample, we are able to image and clearly distinguish cell bodies in the sample up to 260 µm deep.


3.4 Microglia processes in awake animals

Microglia, the brain’s tissue-resident macrophages, have highly dynamic branched processes that continuously surveil their environment. To image the motion of microglia processes in an awake mouse, the microscope field of view was aligned to a region with visible cells, positioned next to distinct vasculature landmarks. The SIMscope3D was focused onto the surface of the cranial window and volumetric images were acquired at five time points (t = 0, 15, 30, 45, 60 min); each volume stack took ∼3.5 min, with 15 min between stacks. The stacks ranged from 0 to 120 µm below the bottom of the cranial window with an axial stepping interval of 5 µm. The results from this experiment are presented in Fig. 5. The SIM images provide background-free, high-resolution imaging of cell bodies even at depth when compared with p-WF images (Fig. 5(A)). A maximum intensity projection of microglia somas up to a depth of 120 µm is shown in Fig. 5(B). Figure 5(B) also demonstrates the ability of the SIMscope3D to optically section cells in densely labeled tissue. The ability to reject out-of-focus background enables the SIMscope3D to capture a time series of the motion of microglial processes, as shown in Fig. 5(C). Additionally, reference two-photon (2P) images (Visualization 2) of the same FOV were acquired, using the vasculature landmarks for registration, and compared to the SIMscope3D images (Figure S5). The 2P images confirmed that the cells being imaged were indeed microglia. The SIMscope3D was able to resolve smaller microglial processes with improved contrast, whereas the p-WF image was unable to resolve these features at all (Figs. 5(D) and 5(E)). Finally, the SIMscope3D also produces images with greatly improved contrast compared with the p-WF images (Fig. 5(F)). This proof-of-concept experiment suggests that the SIMscope3D can perform long-term awake imaging of microglia, which will allow studies of the dynamic nature of the microglial response in mouse disease models in novel, low-cost ways.
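The contrast comparison reported in Fig. 5(F) (p = 0.00083, N = 5) amounts to a paired test over the five time points. The Python sketch below illustrates one plausible analysis, assuming an RMS contrast metric (standard deviation of the ROI divided by its mean) and a paired t-test; the exact contrast definition used for Fig. 5(F) is not specified in the text, so this is an illustrative stand-in rather than the published analysis.

import numpy as np
from scipy.stats import ttest_rel

def rms_contrast(roi):
    # One common contrast measure: standard deviation normalized by the mean (assumed here)
    roi = np.asarray(roi, dtype=float)
    return roi.std() / roi.mean()

def compare_contrast(sim_rois, pwf_rois):
    # Paired comparison of SIM vs. p-WF contrast over matched ROIs (one pair per time point)
    sim = np.array([rms_contrast(r) for r in sim_rois])
    pwf = np.array([rms_contrast(r) for r in pwf_rois])
    t_stat, p_value = ttest_rel(sim, pwf)
    return sim.mean(), pwf.mean(), p_value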


Fig. 5. (A) Image stack comparison of microglial cells between SIM reconstructed (left) and pseudo-widefield (right) images, with three different representative z depths (0 µm refers to bottom of cranial window); (B) Color-coded Z-depth max projections for xy, yz and xz planes at T = 0 min; (C) Representative microglial cells across five different time points from ROIs in (A) at depths of 20 µm (top row) and 45 µm (bottom row). Images were averaged across 3 slices (15 µm) for better SNR. White arrows indicate microglial process growth and cyan arrows indicate microglia process retraction; (D) T = 30, Z = 45 ROI from (C) comparing SIM (left) vs. pWF (right); (E) Normalized intensity line profiles for yellow line in (D); (F) Contrast comparison for yellow dotted square ROI in (D) for all five time points (p = .00083, N = 5).


4. Discussion

The components chosen in the design of the SIMscope3D, such as achromatic doublets to minimize chromatic shift, an electrowetting tunable lens for depth scanning, and a board-level 2.2 µm pixel CMOS sensor, are aimed at providing high contrast and resolution. This distinguishes the SIMscope3D from miniature microscopes designed for imaging neural activity from cells, where higher contrast and resolution are not required. In comparison, the SIMscope3D allows us to distinguish features that are lost with widefield techniques, as shown in Fig. 5. This is important for studies of structural changes in supporting cells, such as microglia.

The SIMscope3D also opens new applications in real-time, low-noise volumetric imaging of neural activity. The neural activity signals obtained by the most popular Inscopix and UCLA miniscopes rely heavily on computation-heavy post hoc image processing and noise reduction algorithms. Yet, in densely labeled samples, the neural activity signals obtained by these devices are influenced heavily by out-of-focus fluorescence and scattering from the tissue. For applications that require real-time neural signals, such as the newly emerged calcium-imaging-based brain-machine interface technology [35,36], the SIMscope3D, with slight modifications such as a faster frame rate CMOS sensor, is capable of providing high-quality real-time neural signals while maintaining the other advantages of miniscopes. The board-level camera is currently limited to 5 fps acquisition at full frame resolution. Using a custom board for the sensor (ON Semiconductor MT9P031 I12STM (Aptina)), frame rates of up to 14 fps are possible at full frame resolution, which can be increased further using region-of-interest acquisition. Additionally, considering the estimated weight distribution of the components in the SIMscope3D, the optical components contribute about 1.2 g, with the CMOS sensor and the 3D-printed enclosure adding approximately 1.9 g and 3.6 g, respectively (Figure S6). With optimization of the enclosure design and a custom lightweight board-level camera, the weight of the SIMscope3D can be reduced in future iterations to enable imaging in freely behaving mice.

The main advantage offered by the SIMscope3D is that it can optically section densely labeled scattering tissue (Fig. 5(B)), which improves image contrast (Figs. 5(E) and 5(F)). This provides the ability to identify prominent cellular features with not only lateral but also axial spatial information, giving researchers the ability to study the 3D structure of neuronal and non-neuronal populations in real time. In contrast, conventional miniature widefield microscopes do not offer optical sectioning and have a large depth of field, preventing extraction of axial information. An alternative solution for high-resolution imaging in freely moving animals is the two-photon fiber-coupled miniature microscope (2P-FCM), but these systems require bulky, expensive lasers and a more complex optical setup, making the SIMscope3D an easier and more cost-effective system to implement. Thus, the SIMscope3D offers a novel, easy-to-implement way for researchers to perform high-contrast, one-photon 3D imaging of neural populations in vivo.

Compared with 2P imaging, the SIMscope3D currently loses some microglial features and processes. However, the 2P images collected on a 1.0 NA benchtop system were highly saturated (Supplementary Figure S5B), which highlights the microglial processes whose signal is low in comparison with the cell bodies. The current SIMscope3D is limited by the amount of power incident at the imaging plane from our LED source (7.4 µW/phase). In future iterations of the system, the power can be increased to allow better imaging of small features with low fluorescence expression. Additionally, increasing the excitation power can greatly reduce the current exposure times, allowing faster frame rate imaging than 2P-FCM imaging, which is limited by laser scanning.

5. Conclusion

In this work, we present a design for a 3D structured illumination fiber-coupled miniature microscope with an onboard 2.2 µm pixel CMOS sensor and an active, non-mechanical axial scanning lens, capable of imaging at depths up to 550 µm in tissue. This is the first demonstration of a miniature microscope using structured illumination to acquire volumetric, high-resolution, optically sectioned images in live tissue.

We experimentally characterized the axial resolution of the SIMscope3D to be 18 µm (FWHM). We demonstrated proof-of-concept volumetric imaging in fixed brain tissue labeled with eGFP at depths up to 260 µm. The higher contrast obtained by SIM reconstruction due to optical sectioning enabled time-series volumetric imaging of microglia processes in an awake animal at depths up to 120 µm.

The SIMscope3D opens new applications in high-contrast structural volumetric imaging in awake animals. The benefits of this system include lower cost and the ability to use higher frame rates than 2P miniature microscopes, since the acquisition is in principle limited only by the frame rate of the camera. These features create new opportunities to investigate dynamic neural structure and function in behaving animals.

Funding

National Institutes of Health (R01 NS123665, R21 EY029458).

Acknowledgments

The authors would like to acknowledge Tarah A. Welton at Department of Bioengineering, University of Colorado Anschutz Medical Campus for help with the fixed coronal brain tissue. The authors acknowledge help from Forest Speed at Department of Bioengineering, University of Colorado Anschutz Medical Campus in collection of the optical sectioning data. The authors would also like to thank Dr. Mo Zohrabi at Department of Electrical, Computer and Energy Engineering, University of Colorado Boulder for technical assistance with Zemax OpticStudio modeling.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data presented in this paper may be obtained from the authors upon reasonable request. Code for analyzing the data presented in this paper is available in Code 1, Ref. [34].

Supplemental document

See Supplement 1 for supporting content.

References

1. D. J. Cai, D. Aharoni, T. Shuman, J. Shobe, J. Biane, W. Song, B. Wei, M. Veshkini, M. La-Vu, J. Lou, S. E. Flores, I. Kim, Y. Sano, M. Zhou, K. Baumgaertel, A. Lavi, M. Kamata, M. Tuszynski, M. Mayford, P. Golshani, and A. J. Silva, “A shared neural ensemble links distinct contextual memories encoded close in time,” Nature 534(7605), 115–118 (2016). [CrossRef]  

2. T. Shuman, D. Aharoni, D. J. Cai, C. R. Lee, S. Chavlis, L. Page-Harley, L. M. Vetere, Y. Feng, C. Y. Yang, I. Mollinedo-Gajate, L. Chen, Z. T. Pennington, J. Taxidis, S. E. Flores, K. Cheng, M. Javaherian, C. C. Kaba, N. Rao, M. La-Vu, I. Pandi, M. Shtrahman, K. I. Bakhurin, S. C. Masmanidis, B. S. Khakh, P. Poirazi, A. J. Silva, and P. Golshani, “Breakdown of spatial coding and interneuron synchronization in epileptic mice,” Nat. Neurosci. 23(2), 229–238 (2020). [CrossRef]  

3. A. Klioutchnikov, D. J. Wallace, M. H. Frosz, R. Zeltner, J. Sawinski, V. Pawlak, K. M. Voit, P. S. J. Russell, and J. N. D. Kerr, “Three-photon head-mounted microscope for imaging deep cortical layers in freely moving rats,” Nat. Methods 17(5), 509–513 (2020). [CrossRef]  

4. B. N. Ozbay, G. L. Futia, M. Ma, V. M. Bright, J. T. Gopinath, E. G. Hughes, D. Restrepo, and E. A. Gibson, “Three dimensional two-photon brain imaging in freely moving mice using a miniature fiber coupled microscope with active axial- scanning,” Sci. Rep. 8(1), 1–14 (2018). [CrossRef]  

5. W. Zong, R. Wu, S. Chen, J. Wu, H. Wang, Z. Zhao, G. Chen, R. Tu, D. Wu, Y. Hu, Y. Xu, Y. Wang, Z. Duan, H. Wu, Y. Zhang, J. Zhang, A. Wang, L. Chen, and H. Cheng, “Miniature two-photon microscopy for enlarged field-of-view, multi-plane and long-term brain imaging,” Nat. Methods 18(1), 46–49 (2021). [CrossRef]  

6. W. Zong, R. Wu, M. Li, Y. Hu, Y. Li, J. Li, H. Rong, H. Wu, Y. Xu, Y. Lu, H. Jia, M. Fan, Z. Zhou, Y. Zhang, A. Wang, L. Chen, and H. Cheng, “Fast high-resolution miniature two-photon microscopy for brain imaging in freely behaving mice,” Nat. Methods 14(7), 713–719 (2017). [CrossRef]  

7. W. Göbel, J. N. D. Kerr, A. Nimmerjahn, and F. Helmchen, “Miniaturized two-photon microscope based on a flexible coherent fiber bundle and a gradient-index lens objective,” Opt. Lett. 29(21), 2521 (2004). [CrossRef]  

8. B. A. Flusberg, J. C. Jung, E. D. Cocker, E. P. Anderson, and M. J. Schnitzer, “In vivo brain imaging using a portable 3.9 gram two-photon fluorescence microendoscope,” Opt. Lett. 30(17), 2272 (2005). [CrossRef]  

9. “UCLA Miniscope V4 Wiki,” http://miniscope.org/index.php/Miniscope_V4.

10. A. de Groot, B. J. G. van den Boom, R. M. van Genderen, J. Coppens, J. van Veldhuijzen, J. Bos, H. Hoedemaker, M. Negrello, I. Willuhn, C. I. De Zeeuw, and T. M. Hoogland, “Ninscope, a versatile miniscope for multi-region circuit investigations,” eLife 9, 1–24 (2020). [CrossRef]  

11. W. A. Liberti, L. N. Perkins, D. P. Leman, and T. J. Gardner, “An open source, wireless capable miniature microscope system,” J. Neural Eng. 14(4), 045001 (2017). [CrossRef]  

12. A. D. Jacob, A. I. Ramsaran, A. J. Mocle, L. M. Tran, C. Yan, P. W. Frankland, and S. A. Josselyn, “A compact head-mounted endoscope for in vivo calcium imaging in freely behaving mice,” Curr. Protoc. Neurosci. 84(1), e51–29 (2018). [CrossRef]  

13. L. Zhang, B. Liang, G. Barbera, S. Hawes, Y. Zhang, K. Stump, I. Baum, Y. Yang, Y. Li, and D. T. Lin, “Miniscope GRIN lens system for calcium imaging of neuronal activity from deep brain structures in behaving animals,” Curr. Protoc. Neurosci. 86(1), e56–21 (2019). [CrossRef]  

14. K. K. Ghosh, L. D. Burns, E. D. Cocker, A. Nimmerjahn, Y. Ziv, A. El Gamal, and M. J. Schnitzer, “Miniaturized integration of a fluorescence microscope,” Nat. Methods 8(10), 871–878 (2011). [CrossRef]  

15. Y. Hayashi, K. Kobayakawa, and R. Kobayakawa, “Large-scale calcium imaging with a head-mount axial scanning 3D fluorescence microscope,” bioRxiv 2–5 (2021).

16. K. Yanny, N. Antipa, W. Liberti, S. Dehaeck, K. Monakhova, F. L. Liu, K. Shen, R. Ng, and L. Waller, “Miniscope3D: optimized single-shot miniature 3D fluorescence microscopy,” Light: Sci. Appl. 9(1), 171 (2020). [CrossRef]  

17. O. Skocek, T. Nöbauer, L. Weilguny, F. Martínez Traub, C. N. Xia, M. I. Molodtsov, A. Grama, M. Yamagata, D. Aharoni, D. D. Cox, P. Golshani, and A. Vaziri, “High-speed volumetric imaging of neuronal activity in freely moving rodents,” Nat. Methods 15(6), 429–432 (2018). [CrossRef]  

18. M. A. Neil, R. Juskaitis, and T. Wilson, “Method of obtaining optical sectioning by using structured light in a conventional microscope,” Opt. Lett. 22(24), 1905–1907 (1997). [CrossRef]  

19. M. G. Gustafsson, “Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy,” J. Microsc. 198(2), 82–87 (2000). [CrossRef]  

20. A. G. York, S. H. Parekh, D. Dalle Nogare, R. S. Fischer, K. Temprine, M. Mione, A. B. Chitnis, C. A. Combs, and H. Shroff, “Resolution doubling in live, multicellular organisms via multifocal structured illumination microscopy,” Nat. Methods 9(7), 749–754 (2012). [CrossRef]  

21. D. Dan, M. Lei, B. Yao, W. Wang, M. Winterhalder, A. Zumbusch, Y. Qi, L. Xia, S. Yan, Y. Yang, P. Gao, T. Ye, and W. Zhao, “DMD-based LED-illumination Super-resolution and optical sectioning microscopy,” Sci. Rep. 3(1), 1116 (2013). [CrossRef]  

22. R. Heintzmann and T. Huser, “Super-resolution structured illumination microscopy,” Chem. Rev. 117(23), 13890–13908 (2017). [CrossRef]  

23. R. Turcotte, Y. Liang, M. Tanimoto, Q. Zhang, Z. Li, M. Koyama, E. Betzig, and N. Ji, “Dynamic super-resolution structured illumination imaging in the living brain,” Proc. Natl. Acad. Sci. 116(19), 9586–9591 (2019). [CrossRef]  

24. N. Bozinovic, C. Ventalon, T. Ford, and J. Mertz, “Fluorescence endomicroscopy with structured illumination,” Opt. Express 16(11), 8016–8025 (2008). [CrossRef]  

25. V. Szabo, C. Ventalon, V. De Sars, J. Bradley, and V. Emiliani, “Spatially selective holographic photoactivation and functional fluorescence imaging in freely behaving mice with a fiberscope,” Neuron 84(6), 1157–1169 (2014). [CrossRef]  

26. X. Chen, K. L. Reichenbach, and C. Xu, “Experimental and theoretical analysis of core-to-core coupling on fiber bundle imaging,” Opt. Express 16(26), 21598 (2008). [CrossRef]  

27. C. M. Bacmeister, H. J. Barr, C. R. McClain, M. A. Thornton, D. Nettles, C. G. Welle, and E. G. Hughes, “Motor learning promotes remyelination via new and surviving oligodendrocytes,” Nat. Neurosci. 23(7), 819–831 (2020). [CrossRef]  

28. G. M. Sanchez, O. D. Supekar, G. L. Futia, B. N. Ozbay, C. Welle, V. M. Bright, J. T. Gopinath, D. Restrepo, D. Shepherd, and E. A. Gibson, “Widefield fluorescence optical sectioning microscopy in a miniature fiber-coupled microscope with active axial scanning,” in Conference on Lasers and Electro-Optics (OSA, 2020), p. SW4P.4.

29. M. A. A. Neil, R. Juškaitis, and T. Wilson, “Real time 3D fluorescence microscopy by two beam interference illumination,” Opt. Commun. 153(1-3), 1–4 (1998). [CrossRef]  

30. P. T. Brown, R. Kruithoff, G. J. Seedorf, and D. P. Shepherd, “Multicolor structured illumination microscopy and quantitative control of polychromatic light with a digital micromirror device,” Biomed. Opt. Express 12(6), 3700 (2021). [CrossRef]  

31. S. Van Der Walt, J. L. Schönberger, J. Nunez-Iglesias, F. Boulogne, J. D. Warner, N. Yager, E. Gouillart, and T. Yu, “Scikit-image: Image processing in python,” PeerJ 2014, 1–18 (2014).

32. J. Batson and L. Royer, “Noise2Self : blind denoising by self-supervision,” in Proceedings of the 36th International Conference on Machine Learning (2019), pp. 524–533.

33. M. Nikolova, “An algorithm for total variation minimization and applications,” J. Math. Imaging Vis. 20(1/2), 89–97 (2004). [CrossRef]  

34. D. Shepherd, “SIMscope3D Reconstruction Code,” figshare (2022), https://doi.org/10.6084/m9.figshare.19142336.

35. D. J. O’Shea, E. Trautmann, C. Chandrasekaran, S. Stavisky, J. C. Kao, M. Sahani, S. Ryu, K. Deisseroth, and K. V. Shenoy, “The need for calcium imaging in nonhuman primates: New motor neuroscience and brain-machine interfaces,” Exp. Neurol. 287, 437–451 (2017). [CrossRef]  

36. C. Li, D. C. W. Chan, X. Yang, Y. Ke, and W. H. Yung, “Prediction of forelimb reach results from motor cortex activities based on calcium imaging and deep learning,” Front. Cell. Neurosci. 13, 1–12 (2019). [CrossRef]  

Supplementary Material (4)

Code 1: SIMscope3D Reconstruction Code
Supplement 1: Supplementary Figures
Visualization 1: Z-stack of Fixed Tissue Imaging using SIMscope3D labeled with PLP-eGFP
Visualization 2: Reference 2-Photon Images of Microglia
