Optica Publishing Group

Deep learning enables contrast-robust super-resolution reconstruction in structured illumination microscopy

Open Access

Abstract

Structured illumination microscopy (SIM) is a powerful technique for super-resolution (SR) image reconstruction. However, conventional SIM methods require high-contrast illumination patterns, which necessitate precision optics and highly stable light sources. To overcome these challenges, we propose a new method called contrast-robust structured illumination microscopy (CR-SIM). CR-SIM employs a deep residual neural network to enhance the quality of SIM imaging, particularly in scenarios involving low-contrast illumination stripes. The key contribution of this study is the achievement of reliable SR image reconstruction even in suboptimal illumination contrast conditions. The results of our study will benefit various scientific disciplines.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Structured illumination microscopy (SIM) stands out as a fast and minimally invasive super-resolution technique that offers remarkable enhancements in spatial resolution [1–3]. SIM is founded upon wide-field fluorescence microscopy, utilizing cosine-shaped structured illumination in place of a uniform wide-field illumination system [4–6]. By combining the frequency-shift effect inherent in structured illumination with advanced post-processing algorithms, SIM extracts high-frequency sample details, yielding a two-fold resolution improvement [7]. To date, various methods have been proposed to improve the reconstruction quality in SIM, such as total internal reflection fluorescence microscopy (TIRF) [8,9], Fair-SIM [10], and HiFi-SIM [11]. Sparse-SIM [12] is a deconvolution algorithm that exploits a priori knowledge about the sparsity and continuity of biological structures, increasing the resolution of SR microscopes by nearly two times. JSFR-SIM [13] implements a simplified workflow for SR-SIM, termed joint space and frequency reconstruction. JSFR-AR-SIM [14] integrates a high-speed reconstruction framework with a high-fidelity optimization approach designed to suppress sidelobe artifacts.

However, the generation of high-contrast (i.e., high-modulation-depth) stripe patterns is a key requirement of the SIM technique. In the conventional SIM reconstruction process, samples modulated with high-contrast structured illumination enable more accurate estimation of the phase, frequency, and other parameters of the acquired images [15]. Conversely, when the structured illumination has low contrast [16,17], inaccuracies often arise during parameter estimation, particularly in estimating the required frequency shift and phase shift, resulting in various reconstruction defects. Low contrast may result from inconsistent polarization of the illumination beam, thick specimens, or high background noise [18]. To address this problem, an algorithm based on inverse matrix computation was proposed; it performs better than the autocorrelation algorithm in simulated conditions with low modulation depth [19]. However, in actual application scenarios its calculations still rely on the high contrast of interference-based SIM. Lei proposed an alternative reconstruction algorithm based on the image recombination transform [20]; however, this algorithm also requires the precise frequency and phase of the illumination pattern to ensure imaging quality.

In recent years, deep learning methods such as BS-CNN [21] and CNNs-SIM [22] have been applied to biological research and have achieved good results. Deep neural networks have been trained to increase the apparent magnification and resolution of images, and deep learning for content-aware image restoration has shown great promise in denoising, isotropic imaging, and enhancing the signal-to-noise ratio. However, the potential of deep learning to boost SIM's performance under low-contrast conditions has not been explored.

This study introduces contrast-robust structured illumination microscopy (CR-SIM), which utilizes an end-to-end deep residual neural network designed to enhance the quality of SIM imaging, particularly in scenarios with low-contrast illumination stripes. We accomplish this by reconstructing images using deep neural networks trained on real images, enabling us to retrieve super-resolution information from low-contrast pattern-modulated samples. The proposed CR-SIM method has potential applications in biomedical and chemistry research owing to its ability to enhance image resolution and quality, enabling more detailed and accurate analysis. Compared to interferometric systems, the projection-based digital micromirror device structured illumination microscopy (DMD-SIM) system offers the advantages of a compact structure and low cost. However, the contrast of the illumination patterns projected onto the sample surface is constrained by the low-pass filtering characteristics of the optical system, which limits the application effectiveness of the projection-based DMD-SIM system [23–26]. In addition, challenges such as artifacts and noise persist, especially in complex samples, and hardware prerequisites and system complexities further hinder its widespread adoption. CR-SIM presents a promising solution for mitigating these limitations: by leveraging CR-SIM, DMD-SIM can potentially improve image quality under low-contrast conditions through adaptive illumination compensation. The combination of DMD-SIM and CR-SIM thus has significant potential for advancing their respective applications. The rest of this paper is organized as follows: Section 2 explains the methods employed in this study, Section 3 describes the results, and Section 4 provides the conclusions.

2. Methods

2.1 DL-based contrast-robust SIM

We present a deep learning-based method for SIM that aims to improve the quality of SIM images even in the case of low-contrast illumination stripes. First, we constructed a comprehensive training dataset comprising stripe patterns of different contrasts, including instances with low-contrast stripes, by simulating stripe patterns characterized by different levels of contrast. Here, we chose the highest-quality microtubule (MT) and clathrin-coated pit (CCP) images in the open-source BioSR [27] dataset as the ground truth of the training data. Each ground-truth image was paired with simulated degraded SIM images at varying contrast levels to form the training dataset, which served as the primary training input for our deep neural network. Next, we set up CR-SIM with an encoder and a decoder. The encoder extracts the relevant image features from the raw SIM images; these features contain the essential information necessary to enhance the quality of the final image. The decoder uses the extracted features and optimizes them to generate a high-quality target image. Finally, the decoder reconstructs the final SR image from the raw SIM images. A schematic of the proposed method is presented in Fig. 1.

Fig. 1. Schematic of CR-SIM. (a) Data generation pipeline for CR-SIM: The training input of the CR-SIM network was nine raw SIM images under different low-contrast levels, and the ground truth was the corresponding high-contrast SIM result at the same region. (b) Network training flow: The network weight is updated according to the loss between the predicted image generated by the network and the ground truth. (c) Simulation of illumination stripes with ten levels of contrast.

2.2 Generation of the training data

Obtaining high-quality outputs from raw SIM images can often be challenging, as controlling the contrast level of the illumination pattern during experimental acquisition is not always feasible. To address these limitations, simulations can be employed to generate a training dataset containing SIM images with varying levels of contrast in the stripes, including low-contrast ones. Such simulations generate illumination stripe patterns with precisely controlled contrast levels, providing a means to create synthetic raw SIM images. Through simulating the modulation of the sample by the illumination pattern, we generate synthetic images that closely mimic the characteristics of real-world SIM images.

To generate stripe patterns with different contrast levels, the simulation process modifies the properties of the illumination stripe. By adjusting the amplitude or intensity distribution of the simulated illumination stripe, patterns with varying contrast levels can be created. Consequently, synthetic raw SIM images can be generated using low-contrast stripes to replicate the challenges encountered in real-world scenarios.

The utilization of simulations to construct training datasets encompassing diverse contrast levels enables the development and optimization of algorithms, such as the proposed CR-SIM method, to effectively handle low-contrast conditions. These synthetic datasets facilitate the controlled training and validation of deep learning models and reconstruction algorithms, thereby facilitating their generalizability and robustness in real-world SIM applications. Accordingly, to create a training dataset of SIM images with different levels of contrast stripes, including low-contrast ones, we simulated illumination stripes with different contrast levels. The production process is illustrated in Figure 1(a). The preparation of the training dataset encompassed three primary steps.

Step 1: Illumination pattern generation

In the context of SIM, the intensity distribution of illumination stripe patterns can be expressed as:

$$I(\boldsymbol{r}) = I_0[1 + m \cos (\boldsymbol{k}_0 \cdot \boldsymbol{r} + \varphi)],$$
where $I_0$ represents the average intensity of the illumination stripe, $m$ is the stripe modulation contrast, $\boldsymbol{k}_0$ is the stripe spatial frequency vector, and $\varphi$ denotes the initial phase of the stripe. Equation (1) was employed in the simulation process to obtain the illumination stripe patterns in three different directions, each with three distinct phases.
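As an illustration, the stripe pattern of Eq. (1) can be generated directly on a pixel grid. The following minimal numpy sketch is ours, not the paper's implementation; the function name and the pixel-based frequency units are illustrative assumptions:

```python
import numpy as np

def illumination_pattern(shape, m, k0, phi, I0=1.0):
    """Cosine illumination stripe I(r) = I0 * [1 + m*cos(k0 . r + phi)].

    shape : (ny, nx) image size in pixels
    m     : stripe modulation contrast in [0, 1]
    k0    : (ky, kx) spatial frequency vector in rad/pixel (assumed units)
    phi   : initial phase in rad
    """
    ny, nx = shape
    y, x = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    return I0 * (1.0 + m * np.cos(k0[0] * y + k0[1] * x + phi))
```

A quick sanity check is that the Michelson contrast of the generated pattern, (max − min)/(max + min), equals the requested $m$.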

Step 2: Contrast of the illumination stripes

As described in Step 1, the contrast of the illumination stripes can be adjusted by controlling the value of $m$. Accordingly, we varied the value of $m$ to adjust the contrast and obtain stripe patterns with different contrasts. Specifically, we set ten contrast levels ranging from low to high. For two-dimensional SIM, three orientations and three phases for each orientation were typically acquired, resulting in a total of nine frames. Consequently, there were nine illumination stripe patterns for each contrast level, leading to ten groups of contrasts, each comprising nine images. This is illustrated in Figure 1(c).
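The contrast sweep described above (ten contrast levels, each with three orientations and three phases, i.e., 10 × 9 = 90 patterns) can be sketched as follows. The specific contrast range, stripe period, and orientation angles here are illustrative assumptions rather than the paper's exact parameters:

```python
import numpy as np

def pattern_stack(shape=(64, 64), n_levels=10, period_px=8):
    """Illumination stripes for every contrast level:
    n_levels contrast levels x 3 orientations x 3 phases."""
    contrasts = np.linspace(0.1, 1.0, n_levels)   # low -> high contrast (assumed range)
    thetas = [0, np.pi / 3, 2 * np.pi / 3]        # three stripe orientations
    phases = [0, 2 * np.pi / 3, 4 * np.pi / 3]    # three phase shifts
    k = 2 * np.pi / period_px                     # stripe spatial frequency (rad/pixel)
    ny, nx = shape
    y, x = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    stack = []
    for m in contrasts:
        for th in thetas:
            for ph in phases:
                arg = k * (np.cos(th) * x + np.sin(th) * y) + ph
                stack.append(1.0 + m * np.cos(arg))
    return np.array(stack)  # shape: (n_levels * 9, ny, nx)
```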

Step 3: Simulation of raw intensity images

To simulate raw intensity images, the fluorescent response of the sample was modeled by multiplying the illumination pattern intensity $I(r)$ with the sample structure $F(r)$, which was selected from the BioSR [27] dataset as the ground-truth image. The resulting image was then blurred with the intensity point spread function $H(r)$, and white Gaussian noise $N(r)$ was added to yield the final image $D(r)$. These steps are mathematically expressed as follows:

$$D(r)=(F(r)I(r))\otimes H(r)+N(r).$$

The simulation training data were generated using MATLAB. The training dataset comprised 9408 groups of images, while the testing dataset comprised 1568 groups of images.
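Although the authors generated the training data in MATLAB, the forward model of Eq. (2) translates directly to Python. In this hedged sketch, the Gaussian PSF width and noise level are arbitrary illustrative choices, not the paper's simulation parameters:

```python
import numpy as np

def simulate_raw_image(F, I, sigma_psf=2.0, noise_std=0.01, seed=0):
    """Forward model D(r) = (F(r) I(r)) (x) H(r) + N(r):
    modulate sample F by pattern I, blur with a Gaussian PSF H,
    then add white Gaussian noise N."""
    ny, nx = F.shape
    # Gaussian intensity PSF, centred, normalised to unit sum
    y, x = np.meshgrid(np.arange(ny) - ny // 2, np.arange(nx) - nx // 2, indexing="ij")
    H = np.exp(-(x**2 + y**2) / (2 * sigma_psf**2))
    H /= H.sum()
    # circular convolution via FFT (ifftshift moves the PSF peak to the origin)
    D = np.real(np.fft.ifft2(np.fft.fft2(F * I) * np.fft.fft2(np.fft.ifftshift(H))))
    rng = np.random.default_rng(seed)
    return D + rng.normal(0.0, noise_std, size=D.shape)
```

Because the PSF is normalised to unit sum, the (noise-free) blurred image preserves the mean intensity of the modulated sample, which is a convenient correctness check.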

2.3 Network architecture and training details

Herein, CR-SIM is used to reconstruct an SR image from $D(r)$, as shown in Figure 1(b). Our CR-SIM network is based on a Residual U-Net, which consists of an encoder-decoder structure with skip connections, as depicted in Figure 2. The encoder consists of four downsampling modules responsible for extracting image features from the input. The decoder incorporates four upsampling modules, which further optimize the features and generate the final target image from the encoder-processed input. Each downsampling module is composed of two 3$\times$3 convolutions and a 2$\times$2 max pooling layer, and each upsampling module contains two 3$\times$3 convolution layers. Every 3$\times$3 convolution in the network is followed by a rectified linear unit (ReLU) activation function to enhance the expressiveness of the model. The network also transfers the output feature maps of each downsampling stage in the encoder to the decoder through skip connections, which are channel-concatenated with the output feature maps of the upsampling layer at the corresponding stage. This fuses shallow and deep information and provides more semantic information for the decoding process. Further details of the CR-SIM model structure and training are provided in Supplement 1.
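As a rough illustration of the encoder-decoder geometry (the full architecture is in Supplement 1), the sketch below merely traces feature-map sizes through the four 2$\times$2-pooling and four upsampling stages; it is shape bookkeeping, not an implementation of the network, and the helper name is ours:

```python
def unet_shape_trace(h, w, depth=4):
    """Trace feature-map sizes through the 4 down / 4 up modules.

    Each downsampling module ends in 2x2 max pooling (halves h and w);
    each upsampling module doubles them back, so the input size must be
    divisible by 2**depth for the skip connections to align."""
    assert h % (2 ** depth) == 0 and w % (2 ** depth) == 0, \
        "input size must be divisible by 2**depth"
    sizes = [(h, w)]
    for _ in range(depth):            # encoder: 2x2 max pool halves each dim
        h, w = h // 2, w // 2
        sizes.append((h, w))
    for _ in range(depth):            # decoder: 2x upsampling restores each dim
        h, w = h * 2, w * 2
        sizes.append((h, w))
    return sizes
```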

Fig. 2. Architecture of the CR-SIM neural network.

After constructing the low-contrast CR-SIM neural network, the network was trained following the flowchart presented in Figure 1(b). The input, a set of SIM images with low-contrast structured illumination, was passed through the low-contrast CR-SIM neural network, yielding the predicted image of the network. To minimize the disparity between the predicted image and the true SIM image, the image discrepancy was calculated using a loss function. By iteratively optimizing the parameters to reduce the discrepancy between the output and the true values, the loss function value was minimized until the network converged, at which point the weight parameters were preserved.

The network was implemented using the Keras platform (version 2.3.1; Python 3.7). To update the parameters of the neural network, we adopted the Adam optimizer with a learning rate of $\alpha$ = 0.0001. The code was deployed on a server equipped with an AMD Ryzen 7 4800H with Radeon Graphics (16 CPUs), 16 GB RAM, and an NVIDIA GeForce RTX 2060 running the Windows 10 operating system.

3. Results

The performance of the CR-SIM neural network was assessed through simulations and experiments. While simulations serve as the ideal method for quantifying the reconstruction quality of CR-SIM, experiments verified its feasibility in actual low-contrast situations. To demonstrate the applicability of our algorithm to real experimental data, we successfully reconstructed SIM data captured under low-contrast illumination stripes, including data from polarization-unadapted SIM systems and projection DMD-SIM.

3.1 SIM reconstruction at varying contrast levels

To quantitatively measure the reconstruction quality of CR-SIM at different contrast levels, the contrast of the illumination stripes must be controlled. This was achieved through a validation experiment employing a simulation dataset, allowing us to evaluate the performance of the proposed method across different contrast levels.

In our experiments, we focused on two distinct samples: clathrin-coated pits (CCPs) and microtubules (MTs), representing specimens of increasing structural complexity. Figures 3 and 4 comprehensively present the reconstructed outputs for each specimen type at various contrast levels, including comparisons with Wide-Field, Hessian-SIM, and DFGAN. Compared to the Wide-Field image, the CR-SIM reconstruction exhibits markedly higher resolution: within a magnified region of interest, CR-SIM effectively separates entangled microtubules. CR-SIM performs comparably to DFGAN but exhibits fewer artifacts and less background noise. Notably, at the lowest contrast level, CR-SIM surpasses both DFGAN and Hessian-SIM in image reconstruction quality, and as the contrast level increases, CR-SIM consistently outperforms the alternatives, underscoring its robustness and adaptability to contrast variations. In summary, our validation experiment on the simulation dataset establishes the efficacy of CR-SIM in enhancing the resolution of reconstructed images.

To further enrich the comparative analysis, we quantified method performance by calculating the PSNR (peak signal-to-noise ratio) and SSIM (structural similarity index measure) of the different reconstruction approaches relative to the ground truth. As shown in Figure 5, CR-SIM provides the highest PSNR and SSIM at all contrast levels compared to the other methods. These quantitative results further verify the effective and reliable resolution improvement achieved by our method.
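For reference, PSNR and a simplified SSIM can be computed as below. Note that the SSIM here uses global (single-window) statistics for brevity, whereas published SSIM values normally use the sliding-window formulation, so this sketch is only indicative:

```python
import numpy as np

def psnr(ref, img, data_range=1.0):
    """Peak signal-to-noise ratio in dB against a ground-truth reference."""
    mse = np.mean((ref - img) ** 2)
    return np.inf if mse == 0 else 10 * np.log10(data_range**2 / mse)

def ssim_global(ref, img, data_range=1.0):
    """Simplified SSIM using global image statistics (no sliding window)."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_x, mu_y = ref.mean(), img.mean()
    var_x, var_y = ref.var(), img.var()
    cov = ((ref - mu_x) * (img - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
           ((mu_x**2 + mu_y**2 + c1) * (var_x + var_y + c2))
```

For full-windowed SSIM as used in the literature, a library implementation such as scikit-image's `structural_similarity` would be the usual choice.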

Fig. 3. Comparison of SIM reconstruction at varying contrast levels for the Wide-Field, Hessian-SIM, DFGAN, and CR-SIM methods. Representative SR images reconstructed by the (b) Wide-Field, (c) Hessian-SIM, (d) DFGAN, and (e) CR-SIM methods from SIM raw images of CCPs.

Fig. 4. Comparison of SIM reconstruction at varying contrast levels for the Wide-Field, Hessian-SIM, DFGAN, and CR-SIM methods. Representative SR images reconstructed by the (b) Wide-Field, (c) Hessian-SIM, (d) DFGAN, and (e) CR-SIM methods from SIM raw images of microtubules.

Fig. 5. PSNR and SSIM comparisons at varying contrast levels for the Wide-Field, Hessian-SIM, DFGAN and CR-SIM methods.

3.2 Experimental results in actual low-contrast situations with high-background thick samples

In this experimental section, we rigorously evaluate the super-resolution (SR) performance of Contrast-Robust Structured Illumination Microscopy (CR-SIM) under challenging conditions, specifically scenarios involving high background noise and thick specimens. The selected samples inherently possess both high background noise and thickness, replicating realistic challenges encountered in biological or materials science applications. The deliberate imprecision in polarization tuning adds complexity, mirroring difficulties faced by traditional imaging methods.

In this experiment, data were acquired using a home-built high-speed SIM system based on an electro-optic modulator (EOM) and a scanning galvanometer, with a multi-color laser (RGB-405-C/488/561/637-Cnm-50mW-DF60082-A, Changchun New Industries Optoelectronics Tech) serving as the light source. Within this system, EOMs (Thorlabs, EO-PM-NR-C4) and scanning galvanometers (Cambridge Technology, 8310K) were used to generate and shift the structured illumination pattern. The sample information modulated by the pattern was recorded as a single-phase image through an objective lens (Nikon, 100$\times$/NA 1.49) and captured by a digital CMOS camera (HAMAMATSU C13440-20CU). The raw images have a pixel size of 65 nm [28]. The resulting reconstructed outputs are shown in Figure 6, where they are compared to the outputs of traditional reconstruction methods: Fair-SIM [10] and IM-SIM [19]. The quality of the input and of the conventional SIM reconstructions was poor under these conditions. As shown in Figure 6(b)-(e), the reconstruction results of the traditional algorithms are almost submerged in noise, while only CR-SIM achieves robust reconstruction.

Fig. 6. Results in actual low-contrast situations. (a) Comparison of CR-SIM, Fair-SIM, and Wide-Field images. (b-d) Local images within the yellow boxes in (a), corresponding to the results of the Wide-Field, Fair-SIM, and CR-SIM methods, respectively. (e) Normalized intensity in (b)–(d). Scale bar: 5μm.

This section emphasizes the practical relevance of CR-SIM in addressing challenges that may be encountered in biological or materials science applications, where high background noise and thick specimens are not uncommon. The results provide valuable insights into the potential of CR-SIM to excel in demanding imaging conditions.

3.3 Experimental results of projection DMD-SIM

Despite its exceptional performance in high-resolution imaging, DMD-SIM has limitations owing to its dependence on high-contrast illumination, which restricts its effectiveness in low-contrast situations. Moreover, issues such as artifacts and noise persist, particularly in intricate sample environments. The demands for hardware and system complexities also impede its widespread adoption. Our specific focus was to assess whether CR-SIM could yield plausible reconstruction results from DMD-SIM data. To gauge the efficacy of CR-SIM, a comparative analysis involving the benchmarking of CR-SIM against conventional reconstruction methods (IM-SIM and Fair-SIM) was conducted.

In this experiment, data were acquired using a self-designed projection-type DMD system, with a four-channel LED (N5311-SLE, Changchun New Industries Optoelectronics Tech) serving as the light source. Within this system, the binary characteristics of the DMD (1920$\times$1080 pixels, pixel size of 7.56 $\mathrm{\mu}$m, DLP6500FYE from Texas Instruments, Dallas, Texas, USA) were harnessed for binary modulation of the incident light source amplitude. The sample information modulated by the stripe pattern was recorded as a single-phase image through an objective lens (Nikon, 100$\times$/NA 1.49) and captured by a digital CMOS camera (HAMAMATSU C13440-20CU). The raw images have a pixel size of 65 nm [29].

In our experimental setup, we used a set of nine single-phase images of huFIB cell microtubules. As shown in Figure 7(a), the imaging results for the Wide-Field, Fair-SIM, IM-SIM, and CR-SIM modalities were juxtaposed. Subsequently, we performed a detailed examination of the two localized magnified images, which are shown in Figure 7(b)–(e).

Fig. 7. Reconstruction of a SIM image under extreme low-contrast conditions. (a) Reconstruction results of the sample using Wide-Field, Fair-SIM, IM-SIM, and CR-SIM. (b) Enlarged images of two selected regions of interest (ROI-1 and ROI-2) from (a) in the Wide-Field mode. (c-e) Enlarged reconstruction results of ROI-1 and ROI-2 using Fair-SIM, IM-SIM, and CR-SIM, respectively. Scale bar: 5μm.

Due to the low-pass filtering characteristics of the projection system, the contrast of higher-frequency stripes is suppressed by the modulation curve. Consequently, both the traditional Fair-SIM and IM-SIM approaches failed to accurately calculate the phase and frequency, thereby distorting the original cellular structure. In contrast, CR-SIM provided a robust solution, effectively circumventing the limitations posed by high-frequency modulation and yielding substantially improved imaging results compared to the Wide-Field approach.

Figure 8(a) provides a comparison between the Wide-Field imaging of huFIB cell microtubules using a fully open DMD mirror and the CR-SIM reconstruction results. The figure indicates that CR-SIM exhibited significant improvements in resolution and background removal compared to the Wide-Field imaging. Figure 8(b) compares the CR-SIM and IM-SIM reconstructions. CR-SIM avoided the incoherence in the IM-SIM reconstruction results and did not introduce artifacts, resulting in a higher signal-to-noise ratio.

Fig. 8. Reconstruction results of the microtubule sample. (a) Comparison between CR-SIM and Wide-Field images. (b) Comparison between CR-SIM and IM-SIM images. (c-e) Enlarged images of the yellow boxes in (a), representing the results of the Wide-Field, IM-SIM, and CR-SIM methods, respectively. (f) Normalized intensity profiles along the yellow lines in (c-e).

The reconstruction outputs and line profiles across neighbouring microtubules for both CR-SIM and IM-SIM are shown in Figure 8(c)-(e). The displayed cropped region contains two parallel microtubules, separated by a gap smaller than the diffraction limit, and thus not resolved in the wide-field image. In the outputs from CR-SIM and IM-SIM, this gap is clearly visible. The distance between the peaks in the line profile for CR-SIM is 146 nm, which is close to the theoretically achievable resolution with standard SIM.
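A peak-to-peak separation such as the 146 nm quoted above can be read off a line profile programmatically. In the sketch below, the 65 nm pixel size is taken from the acquisition details above, while the function name and the simple peak-finding rule are our own illustrative choices:

```python
import numpy as np

def peak_separation(profile, pixel_size_nm=65.0):
    """Distance in nm between the two highest local maxima of a 1-D
    intensity profile drawn across neighbouring microtubules."""
    # interior local maxima: strictly greater than both neighbours
    idx = [i for i in range(1, len(profile) - 1)
           if profile[i] > profile[i - 1] and profile[i] > profile[i + 1]]
    assert len(idx) >= 2, "profile must contain at least two peaks"
    # take the two strongest peaks and return their pixel distance in nm
    top2 = sorted(sorted(idx, key=lambda i: profile[i], reverse=True)[:2])
    return (top2[1] - top2[0]) * pixel_size_nm
```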

These findings demonstrate that CR-SIM can effectively generate realistic low-contrast scenes and achieve robust reconstructions, even in the presence of noise. The ability of this method to outperform traditional techniques in these scenarios underscores its potential to enhance the quality of SIM in practical applications.

3.4 Generalizable experimental results from different structural samples

Deep learning poses inherent challenges in achieving generalization across diverse structural samples. In our experimental framework, the training dataset comprised two distinct sample types: microtubules (MTs) and clathrin-coated pits (CCPs). This deliberate selection aimed to capture the intricacies and variabilities inherent in structural features, establishing a robust foundation for our model. To rigorously assess the generalization capabilities of our proposed method, we conducted experiments with structural samples not included in the training dataset. Specifically, we introduced F-actin and mitochondria structures, encompassing a broader spectrum of morphological characteristics. This strategic diversification in sample structures serves as a stringent test of the adaptability and versatility of our approach.

We compared and analyzed the reconstruction results of CR-SIM against the traditional inverse-matrix SIM (IM-SIM) algorithm. Compared to IM-SIM, our CR-SIM method reconstructed the F-actin and mitochondria with less information loss and fewer artifacts (Figure 9). As seen in the local zoomed-in images in Figure 9(a), CR-SIM resolves the densely crisscrossing regions of the F-actin cytoskeleton more precisely than IM-SIM. As shown in Figure 9(b), CR-SIM can distinguish the ring-like mitochondrial morphology, which is not distinguishable in either the Wide-Field image or the IM-SIM result, due to insufficient resolution and noise, respectively. In conclusion, our method exhibits promising outcomes in terms of generalization, showcasing its adaptability to diverse structural samples beyond the confines of the training dataset. These findings reinforce the potential applicability and reliability of CR-SIM in broader microscopy applications.

Fig. 9. Comparison of CR-SIM and conventional IM-SIM algorithm for imaging different structures. (a) Wide-Field, super-resolution CR-SIM and IM-SIM results of F-actin under the excitation wavelength of 638 nm. Scale bars, 10 $\mathrm{\mu}$m (upper image) and 2 $\mathrm{\mu}$m (lower boxed magnified images). (b) Wide-Field, super-resolution CR-SIM and IM-SIM results of mitochondria under the excitation wavelength of 488 nm. Scale bars, 10 $\mathrm{\mu}$m (upper image) and 2 $\mathrm{\mu}$m (lower boxed magnified images).

4. Conclusions

In this study, we addressed the challenge of improving SR optical microscopy techniques, particularly for scenarios with low-contrast illumination stripes. We introduced CR-SIM, an innovative approach that employs an end-to-end deep residual neural network. The key contribution was the inclusion of low-contrast simulation data in the training dataset, which enabled CR-SIM to generate SR reconstructions that maintained the image quality and minimized the artifacts.

Our experiments demonstrated the effectiveness of CR-SIM in reconstructing SR images from raw data containing low-contrast illumination stripes. In particular, CR-SIM outperformed traditional SIM methods, exhibiting superior image reconstruction and fewer artifacts, particularly in high-contrast scenarios.

Although the DMD-SIM excels in high-resolution imaging, it faces limitations related to high-contrast illumination, artifacts, and hardware complexity, which hinder its widespread adoption. A potential solution to these challenges is the combination of DMD-SIM with CR-SIM, which will enable improved image quality under low-contrast conditions through adaptive illumination compensation, paving the way for enhanced research applications.

The future of SR imaging holds exciting possibilities, such as the integration of transfer learning and other advanced neural network architectures to further enhance the generalization and adaptability of these techniques. The ongoing development and refinement of CR-SIM, along with the continued exploration of its applications in various low-contrast scenarios, offer promising avenues for future research and technological advancements in the field of SR microscopy.

Funding

Research Initiation Project of Zhejiang Lab (2022NKOPI01); Natural Science Foundation of Zhejiang Province (LQ23F050010, LY23F050010); National Natural Science Foundation of China (61975188); Ningbo Key Scientific and Technological Project (2022Z123); National Key Research and Development Program of China (2021YFF0700302).

Disclosures

The authors declare no conflicts of interest.

Data availability

The source code supporting the conclusions of this article is available in [30].

Supplemental document

See Supplement 1 for supporting content.

References

1. Y. Wu and H. Shroff, “Faster, sharper, and deeper: structured illumination microscopy for biological imaging,” Nat. Methods 15(12), 1011–1019 (2018). [CrossRef]  

2. D. Li and E. Betzig, “Response to comment on “extended-resolution structured illumination imaging of endocytic and cytoskeletal dynamics”,” Science 352(6285), 527 (2016). [CrossRef]  

3. M. F. Langhorst, J. Schaffer, and B. Goetze, “Structure brings clarity: structured illumination microscopy in cell biology,” Biotechnol. J. 4(6), 858–865 (2009). [CrossRef]  

4. R. Heintzmann and M. G. Gustafsson, “Subdiffraction resolution in continuous samples,” Nat. Photonics 3(7), 362–364 (2009). [CrossRef]  

5. S. Cox, “Super-resolution imaging in live cells,” Dev. Biol. 401(1), 175–181 (2015). [CrossRef]  

6. M. G. Gustafsson, “Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy,” J. Microsc. 198(2), 82–87 (2000). [CrossRef]  

7. M. G. Gustafsson, “Nonlinear structured-illumination microscopy: wide-field fluorescence imaging with theoretically unlimited resolution,” Proc. Natl. Acad. Sci. 102(37), 13081–13086 (2005). [CrossRef]  

8. E. Chung, D. Kim, Y. Cui, et al., “Two-dimensional standing wave total internal reflection fluorescence microscopy: superresolution imaging of single molecular and biological specimens,” Biophys. J. 93(5), 1747–1757 (2007). [CrossRef]  

9. R. Fiolka, M. Beck, and A. Stemmer, “Structured illumination in total internal reflection fluorescence microscopy using a spatial light modulator,” Opt. Lett. 33(14), 1629–1631 (2008). [CrossRef]  

10. M. Müller, V. Mönkemöller, S. Hennig, et al., “Open-source image reconstruction of super-resolution structured illumination microscopy data in imagej,” Nat. Commun. 7(1), 10980 (2016). [CrossRef]  

11. G. Wen, S. Li, L. Wang, et al., “High-fidelity structured illumination microscopy by point-spread-function engineering,” Light: Sci. Appl. 10(1), 70 (2021). [CrossRef]  

12. W. Zhao, S. Zhao, L. Li, et al., “Sparse deconvolution improves the resolution of live-cell super-resolution fluorescence microscopy,” Nat. Biotechnol. 40(4), 606–617 (2022). [CrossRef]  

13. Z. Wang, T. Zhao, H. Hao, et al., “High-speed image reconstruction for optically sectioned, super-resolution structured illumination microscopy,” Adv. Photonics 4(2), 026003 (2022). [CrossRef]  

14. Z. Wang, T. Zhao, Y. Cai, et al., “Rapid, artifact-reduced, image reconstruction for super-resolution structured illumination microscopy,” Innovation 4(3), 100425 (2023). [CrossRef]  

15. L. Jin, B. Liu, F. Zhao, et al., “Deep learning enables structured illumination microscopy with low light levels and enhanced speed,” Nat. Commun. 11(1), 1934 (2020). [CrossRef]  

16. M. Weigert, U. Schmidt, T. Boothe, et al., “Content-aware image restoration: pushing the limits of fluorescence microscopy,” Nat. Methods 15(12), 1090–1097 (2018). [CrossRef]  

17. C. S. Smith, J. A. Slotman, L. Schermelleh, et al., “Structured illumination microscopy with noise-controlled image reconstructions,” Nat. Methods 18(7), 821–828 (2021). [CrossRef]  

18. K. Wen, Z. Gao, X. Fang, et al., “Structured illumination microscopy with partially coherent illumination for phase and fluorescent imaging,” Opt. Express 29(21), 33679–33693 (2021). [CrossRef]  

19. R. Cao, Y. Chen, W. Liu, et al., “Inverse matrix based phase estimation algorithm for structured illumination microscopy,” Biomed. Opt. Express 9(10), 5037–5051 (2018). [CrossRef]  

20. X. Zhou, M. Lei, D. Dan, et al., “Image recombination transform algorithm for superresolution structured illumination microscopy,” J. Biomed. Opt. 21(9), 096009 (2016). [CrossRef]  

21. E. Xypakis, G. Gosti, T. Giordani, et al., “Deep learning for blind structured illumination microscopy,” Sci. Rep. 12(1), 8623 (2022). [CrossRef]  

22. C. Ling, C. Zhang, M. Wang, et al., “Fast structured illumination microscopy via deep learning,” Photonics Res. 8(8), 1350 (2020). [CrossRef]  

23. B.-J. Chang, L.-J. Chou, Y.-C. Chang, et al., “Isotropic image in structured illumination microscopy patterned with a spatial light modulator,” Opt. Express 17(17), 14710–14721 (2009). [CrossRef]  

24. J.-Y. Lin, R.-P. Huang, P.-S. Tsai, et al., “Wide-field super-resolution optical sectioning microscopy using a single spatial light modulator,” J. Opt. A: Pure Appl. Opt. 11(1), 015301 (2009). [CrossRef]  

25. M. Li, Y. Li, W. Liu, et al., “Structured illumination microscopy using digital micro-mirror device and coherent light source,” Appl. Phys. Lett. 116(23), 233702 (2020). [CrossRef]  

26. D. Dan, M. Lei, B. Yao, et al., “DMD-based LED-illumination super-resolution and optical sectioning microscopy,” Sci. Rep. 3(1), 1116 (2013). [CrossRef]  

27. C. Qiao, D. Li, Y. Guo, et al., “Evaluation and development of deep neural networks for image super-resolution in optical microscopy,” Nat. Methods 18(2), 194–202 (2021). [CrossRef]  

28. Y. Chen, W. Liu, Z. Zhang, et al., “Multi-color live-cell super-resolution volume imaging with multi-angle interference microscopy,” Nat. Commun. 9(1), 4818 (2018). [CrossRef]  

29. Q. Liu, D. Zhou, J. Zhang, et al., “DMD-based compact SIM system with hexagonal-lattice-structured illumination,” Appl. Opt. 62(20), 5409–5415 (2023). [CrossRef]  

30. W. Liu, “Code for: Deep learning enables contrast-robust super-resolution reconstruction in structured illumination microscopy,” GitHub (2024), https://github.com/WenjieLab/Contrast-robust-SIM-reconstruction.

Supplementary Material (1)

Supplement 1 — Supplementary Information for Deep learning enables contrast-robust super-resolution reconstruction in structured illumination microscopy




Figures (9)

Fig. 1.
Fig. 1. Schematic of CR-SIM. (a) Data generation pipeline for CR-SIM: The training input of the CR-SIM network was nine raw SIM images under different low-contrast levels, and the ground truth was the corresponding high-contrast SIM result at the same region. (b) Network training flow: The network weight is updated according to the loss between the predicted image generated by the network and the ground truth. (c) Simulation of illumination stripes with ten levels of contrast.
Fig. 2.
Fig. 2. Architecture of the CR-SIM neural network.
Fig. 3.
Fig. 3. Comparison of SIM reconstruction at varying contrast levels for the Wide-Field, Hessian-SIM, DFGAN and CR-SIM methods. Representative SR images reconstructed by the (b) Wide-Field, (c) Hessian-SIM, (d) DFGAN and (e) CR-SIM methods from SIM raw images of CCPs.
Fig. 4.
Fig. 4. Comparison of SIM reconstruction at varying contrast levels for the Wide-Field, Hessian-SIM, DFGAN and CR-SIM methods. Representative SR images reconstructed by the (b) Wide-Field, (c) Hessian-SIM, (d) DFGAN and (e) CR-SIM methods from SIM raw images of microtubules.
Fig. 5.
Fig. 5. PSNR and SSIM comparisons at varying contrast levels for the Wide-Field, Hessian-SIM, DFGAN and CR-SIM methods.
Fig. 6.
Fig. 6. Results in actual low-contrast situations. (a) Comparison of CR-SIM, Fair-SIM, and Wide-Field images. (b-d) Local images within the yellow boxes in (a), corresponding to the results of the Wide-Field, Fair-SIM, and CR-SIM methods, respectively. (e) Normalized intensity in (b)–(d). Scale bar: 5 μm.
Fig. 7.
Fig. 7. Reconstruction of a SIM image under extreme low-contrast conditions. (a) Reconstruction results of the sample using Wide-Field, Fair-SIM, IM-SIM, and CR-SIM. (b) Enlarged images of two selected regions of interest (ROI-1 and ROI-2) from (a) in the Wide-Field mode. (c-e) Enlarged reconstruction results of ROI-1 and ROI-2 using Fair-SIM, IM-SIM, and CR-SIM, respectively. Scale bar: 5 μm.
Fig. 8.
Fig. 8. Reconstruction results of the microtubule sample. (a) Comparison between CR-SIM and Wide-Field images. (b) Comparison between CR-SIM and IM-SIM images. (c-e) Enlarged images of the yellow boxes in (a), representing the results of the Wide-Field, IM-SIM, and CR-SIM methods, respectively. (f) Normalized intensity profiles along the yellow lines in (c-e).
Fig. 9.
Fig. 9. Comparison of CR-SIM and conventional IM-SIM algorithm for imaging different structures. (a) Wide-Field, super-resolution CR-SIM and IM-SIM results of F-actin under the excitation wavelength of 638 nm. Scale bars, 10 μm (upper image) and 2 μm (lower boxed magnified images). (b) Wide-Field, super-resolution CR-SIM and IM-SIM results of mitochondria under the excitation wavelength of 488 nm. Scale bars, 10 μm (upper image) and 2 μm (lower boxed magnified images).

Equations (2)


$$I(\mathbf{r}) = I_0\left[1 + m\cos(\mathbf{k}_0\cdot\mathbf{r} + \varphi)\right],$$
$$D(\mathbf{r}) = \left(F(\mathbf{r})\,I(\mathbf{r})\right) \otimes H(\mathbf{r}) + N(\mathbf{r}).$$