Optica Publishing Group

Ultra-fast, universal super-resolution radial fluctuations (SRRF) algorithm for live-cell super-resolution microscopy

Open Access

Abstract

For a long time, the spatial resolution of fluorescence microscopy was strictly restricted by the diffraction limit. To overcome this problem, various super-resolution technologies have been developed. Super-resolution radial fluctuations (SRRF), an emerging type of super-resolution microscopy, directly analyzes raw images and generates super-resolution results without fluorophore localization, and therefore offers clear advantages in handling high-density data. Here, by accelerating the algorithm on a graphics processing unit (GPU) and programming it in Python, we expand the universality and improve the computing speed of the SRRF algorithm. We further apply our SRRF algorithm to different live-cell super-resolution microscopy methods with two types of fluorescence fluctuation sources: (i) direct stochastic optical reconstruction microscopy (dSTORM), in which the fluorophores themselves blink under specific buffer and laser conditions, and (ii) structured illumination microscopy (SIM) and modulated Airyscan, in which fluorescence fluctuations are artificially introduced by modulated laser illumination. With improved spatiotemporal resolution and image quality, our SRRF algorithm demonstrates its capability for live-cell super-resolution imaging, indicating wide applications in the life sciences.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

In the past few decades, super-resolution microscopy [1] based on different strategies has undergone a fast evolution and allowed biologists to observe cellular structures and dynamics in previously unattainable detail beyond the diffraction limit of resolution [2]. Among these methods, Single-Molecule Localization Microscopy (SMLM), including (fluorescence) Photoactivation Localization Microscopy (PALM/fPALM) and direct Stochastic Optical Reconstruction Microscopy (dSTORM) [3–6], obtains extremely high localization precision by activating fluorescent molecules sparsely. By localizing only a small subset of fluorescent molecules at a time, the theoretical resolution can reach 1 nm [1]. However, sparse blinking of fluorescent molecules is challenging to realize in live-cell SMLM, especially in dSTORM, since the laser intensities and STORM-buffer concentrations tolerated by living cells are lower than those used for fixed cells [6,7]. To achieve higher localization precision, some of the existing SMLM algorithms tend to treat regions with insufficient blinking as background [8,9]. As a result, many structural details are lost, leading to a misunderstanding of the structures and dynamics in living cells.

Super-Resolution Radial Fluctuations (SRRF), a super-resolution microscopy approach based on fluorescence fluctuations rather than localization, was developed by Henriques' group in 2016 [10]. For each pixel in the raw images, the authors defined a quantity called radiality, representing the degree of proximity to a fluorescence center: the higher the radiality, the closer the pixel is to the fluorescence center. Based on this spatial information, temporal correlations over time are then calculated to produce the final result. The generated SRRF image is thus a super-resolution image with reduced full width at half maximum (FWHM), bypassing the localization of individual fluorophores. To date, SRRF has been successfully applied to live-cell confocal, widefield, and Total Internal Reflection Fluorescence (TIRF) microscopies. A series of raw frames were used for SRRF reconstruction, and the dynamics of living samples expressing Green Fluorescent Protein (GFP) were super-resolved [10,11]. However, the existing live-cell SRRF approaches face several problems: (i) GFP shows limited fluorescence fluctuations, so temporal information cannot be extracted effectively [11,12]; (ii) the radiality calculation is time-consuming; and (iii) SRRF-Stream exploits parallel computing but is restricted to Andor cameras [13].

In this study, we re-developed the SRRF algorithm in Python [14], so that it can be imported into any Python project and called from LabVIEW programs [15], significantly expanding the application scenarios of SRRF microscopy. Moreover, our algorithm utilizes the Compute Unified Device Architecture (CUDA) [16,17] for parallel computing, which greatly speeds up the computation and thus makes it more suitable for live-cell studies. To explore its potential in live-cell super-resolution imaging, we applied our SRRF algorithm to a variety of super-resolution microscopy methods in which fluorescence fluctuations have two sources. (i) Fluorophore blinking (dSTORM): dyes blink at a much higher density in live-cell dSTORM than in fixed samples. When applied to live-cell dSTORM data, our program resolved F-actin with far more detail at a resolution of ∼32 nm, much better than a traditional SMLM algorithm [18]. (ii) Artificially modulated laser illumination (Structured Illumination Microscopy (SIM) [19] and modulated Airyscan [20]): in widefield microscopy, the patterned illumination of SIM can be regarded as an artificially introduced fluorescence fluctuation; in point-scanning Airyscan microscopy, modulated illumination with random or regular patterns can be introduced to generate fluorescence fluctuations. Tested on both methods, our SRRF algorithm showed improved resolution and proved capable of resolving dynamics in living cells as well.

2. Methods

The SRRF algorithm in this paper is composed of two parts (Fig. 1): a spatial and a temporal analysis. The spatial analysis uses the fluorescence distribution of each frame and generates a series of new frames called radiality maps; the temporal analysis merges the sequence of radiality maps into a single SRRF super-resolution image through higher-order temporal statistics, which is based on the fluctuations of fluorescence luminance.


Fig. 1. Flow chart of the SRRF algorithm. Arrows indicate the direction of the workflow. GPU: graphics processing unit.


2.1 Gradient map generation

In order to calculate the radiality, we first get a gradient map in both horizontal and vertical directions. For an image plane $I({x, y} )$, the gradients ${G_x}$ and ${G_y}$ are calculated by the following equations:

$$\left\{ {\begin{array}{c} {{G_x}[{x, y} ]= I[{x - 1, y} ]- I[{x + 1, y} ]}\\ {{G_y}[{x, y} ]= I[{x, y - 1} ]- I[{x, y + 1} ]} \end{array}} \right.$$
Since each element of the matrix is independent, we use parallel computing to accelerate the process. CuPy is a Python library that implements NumPy-compatible multi-dimensional arrays on CUDA and supports a subset of the NumPy interface [21]. In our algorithm, the matrix calculations are parallelized on the GPU with CuPy, so the indexing and subtraction here are very fast.
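In NumPy-compatible terms, the gradient maps of Eq. (1) reduce to two shifted subtractions. The sketch below uses NumPy with edge-replicated borders (an assumption for the boundary pixels, which Eq. (1) leaves undefined); swapping the import for CuPy runs the identical code on the GPU.

```python
import numpy as np  # `import cupy as np` runs the same code on the GPU

def gradient_maps(I):
    """Central-difference gradient maps of Eq. (1), treating axis 0 as x.
    Borders are edge-replicated so every pixel stays defined."""
    Ip = np.pad(I, 1, mode="edge")
    Gx = Ip[:-2, 1:-1] - Ip[2:, 1:-1]   # I[x-1, y] - I[x+1, y]
    Gy = Ip[1:-1, :-2] - Ip[1:-1, 2:]   # I[x, y-1] - I[x, y+1]
    return Gx, Gy

frame = np.arange(16.0).reshape(4, 4)
Gx, Gy = gradient_maps(frame)           # interior pixels: Gx == -8, Gy == -2
```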

2.2 Interpolation

We then zoom in the original frames and gradient maps to produce radiality maps on a sub-pixel basis. Catmull-Rom interpolation is an ideal method, since its C1 continuity prevents mosaic artifacts [22–24]. The magnification process consists of two steps.

Firstly, we interpolate the matrices in the vertical direction. Here we denote the magnification rate, the height and width of the frame (in pixels), the original frame, and the vertically interpolated frame by $M$, $H$, $W$, $P(x,y)$, and ${P_y}(x,y)$, respectively. Matrix indices start from 0. The coordinates of the new matrix, which give the vertical location of each pixel of the zoomed image on the original image, are then determined:

$$yc = \left[ {0\; \frac{1}{M}\; \ldots \; \; H - \frac{1}{M}} \right]$$
Secondly, we select the 4 nearest pixels for each output pixel for interpolation, clamping the indices to the image bounds:
$$uy = \left[ \begin{array}{c} {\max({\lfloor yc \rfloor - 1,\; 0} )}\\ {\lfloor yc \rfloor}\\ {\min({\lfloor yc \rfloor + 1,\; H - 1} )}\\ {\min({\lfloor yc \rfloor + 2,\; H - 1} )} \end{array}\right]$$
and then calculate the weights:
$$Qy = \frac{1}{2} \cdot \left[ {\begin{array}{cccc}0 &{ - 1} &2 &{- 1}\\ 2 &0 &{- 5} &3 \\ 0 &1 &4 &{- 3} \\ 0 &0 &{- 1} &1 \end{array}} \right] \left[{\begin{array}{c}{1} \\ {\{ yc \}} \\ {\{ yc \}^{2}} \\ {\{ yc \}^{3}} \end{array}} \right]$$
where $\lfloor a \rfloor$ stands for the floor of $a$, and $\{ a \} = a - \lfloor a \rfloor$ is its fractional part.

In this way, the interpolated image is generated by weighted summation:

$${P_y}({x,\; y} )= \; \mathop \sum \nolimits_{k = 0}^3 P({x,\; uy({y, k} )} )\cdot {Q_y}({y, k} )$$
Likewise, we follow the same procedure in the horizontal direction to obtain the fully interpolated frame. A simple way is to transpose ${P_y}$, repeat the same process on $P_y^T$, and transpose the result again.
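The two-pass magnification can be sketched in NumPy as follows. This is a portable sketch of Eqs. (2)–(5); the actual implementation runs the same array operations on the GPU with CuPy and also folds the drift offsets into the coordinates.

```python
import numpy as np

def catmull_rom_zoom_axis0(P, M):
    """Magnify P by factor M along axis 0 with Catmull-Rom weights (Eqs. 2-5)."""
    H = P.shape[0]
    yc = np.arange(H * M) / M                        # Eq. (2): sub-pixel coordinates
    fy = np.floor(yc).astype(int)
    t = yc - fy                                      # fractional part {yc}
    # Eq. (3): the four neighbouring rows, clamped to the image bounds
    uy = np.stack([np.clip(fy + k, 0, H - 1) for k in (-1, 0, 1, 2)])
    # Eq. (4): Catmull-Rom basis matrix applied to [1, t, t^2, t^3]
    B = 0.5 * np.array([[0., -1.,  2., -1.],
                        [2.,  0., -5.,  3.],
                        [0.,  1.,  4., -3.],
                        [0.,  0., -1.,  1.]])
    Qy = B @ np.stack([np.ones_like(t), t, t**2, t**3])
    # Eq. (5): weighted sum of the four neighbours
    return sum(P[uy[k]] * Qy[k][:, None] for k in range(4))

def catmull_rom_zoom(P, M):
    """Magnify in both directions: interpolate, transpose, repeat, transpose back."""
    return catmull_rom_zoom_axis0(catmull_rom_zoom_axis0(P, M).T, M).T

zoomed = catmull_rom_zoom(np.arange(16.0).reshape(4, 4), 2)   # (4,4) -> (8,8)
```

At integer coordinates the weights collapse to $[0, 1, 0, 0]$, so original pixel values are reproduced exactly, which is the interpolation property required here.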

2.3 Drift correction

In the temporal analysis, the pixels may drift because of vibrations during acquisition, so drift correction is necessary. The core of drift estimation is the cross-correlation of two sequential frames, which generates a correlation map whose maximum location gives the offset between the frames. The map is interpolated with SciPy's interp2d function, and the maximum is located with SciPy's optimize module [25]. As a result, the maximum location $({x_d},\; {y_d})$ is sub-pixel: both ${x_d}$ and ${y_d}$ are fractional.

After drift estimation, the offset is passed into the Catmull-Rom zooming function, and the interpolation uses the compensated coordinates instead of the original ones. The magnified frames are thus drift-corrected.
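The drift-estimation step can be sketched as below. This is an integer-pixel NumPy sketch of the cross-correlation idea only; the actual pipeline additionally interpolates the correlation map with SciPy to refine the peak to sub-pixel precision.

```python
import numpy as np

def estimate_drift(ref, moved):
    """Estimate the (row, col) offset of `moved` relative to `ref` by locating
    the maximum of their circular cross-correlation, computed via FFT."""
    corr = np.fft.ifft2(np.fft.fft2(moved) * np.conj(np.fft.fft2(ref))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # map peak positions into the signed range, so large lags become negative
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(0)
ref = rng.random((32, 32))
moved = np.roll(ref, (2, -3), axis=(0, 1))   # known drift of (2, -3) pixels
drift = estimate_drift(ref, moved)           # recovers (2, -3)
```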

2.4 Radiality map generation

Having zoomed the gradient maps and the original frames, we then calculate the radiality map [11]:

$${R_t}({x, y} )= \frac{1}{N}\mathop \sum \nolimits_{i = 1}^N sgn({{{\vec{G}}_i} \cdot \overrightarrow {{r_i}} } ){\left[ {1 - \frac{1}{{{r_i}}}\frac{{|{({{x_c} - x_i^{\prime}} ){G_{yi\; }} - ({{y_c} - y_i^{\prime}} ){G_{xi}}} |}}{{\sqrt {G_{xi}^2 + G_{yi}^2} }}} \right]^2}$$
For low-density image data, we multiply the radiality by the local gradient magnitude weight ${G_w}$:
$${G_w}({x, y} )= \frac{1}{N}\mathop \sum \nolimits_{i = 1}^N \frac{{sgn({{{\vec{G}}_i} \cdot \overrightarrow {{r_i}} } )({|{\overrightarrow {{G_i}} } |- |{\overrightarrow {{G_c}} } |} )}}{{{I_c}}}$$
The procedure contains many calculation steps. Despite the acceleration with CuPy, calling the different mathematical kernel functions still consumes a lot of time: for example, every time two matrices are added, CuPy has to launch a separate CUDA kernel. To save time, we wrote a new kernel that combines all of the operations in the formulas above in CUDA C source code and created a Python function from it with CuPy's RawKernel class. CUDA's NVRTC compiles the source code at runtime. We then derive the grid size, block size, and block number from the matrix shape, and finally pass the matrices and parameters to this RawKernel function.

The RawKernel function consists of several CUDA device functions. The complete flow chart of the calculation is shown in Fig. 2. The calculation for each pixel involves neighbouring sampling pixels, so bounds checking is done before the calculation. Then, after the required data are fetched from GPU memory, the calculation begins. Here we use the built-in hypot and rhypot functions to compute the absolute value of the gradient and its reciprocal.


Fig. 2. Flow chart of the calculation with CUDA functions.

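As a simplified illustration of this kernel-fusion idea, a fused kernel has the following shape. The names and the per-pixel arithmetic here are illustrative only, not the actual radiality kernel, which additionally samples ring neighbours and evaluates the full Eq. (6); this sketch requires a CUDA-capable GPU and NVRTC to compile.

```cuda
// Hypothetical fused elementwise kernel, compiled at runtime by NVRTC through
// cupy.RawKernel. One launch replaces a chain of per-operation kernel calls.
extern "C" __global__
void fused_radiality_step(const float* gx, const float* gy,
                          float* out, int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;     // bounds check, as in Fig. 2

    int i = y * width + x;
    // hypotf/rhypotf give the gradient magnitude and its reciprocal directly
    float mag  = hypotf(gx[i], gy[i]);
    float rmag = rhypotf(gx[i], gy[i]);
    // ...the sign, distance, and weighting terms of Eq. (6) would follow here,
    // all inside this one kernel instead of one CUDA launch per operation.
    out[i] = mag * rmag;                       // placeholder combination
}
```

On the Python side, such a source string is passed to `cupy.RawKernel(source, "fused_radiality_step")` and launched with grid and block sizes derived from the matrix shape.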

Two optional flags, do_gw and do_iw, influence the result as well as the procedure, but the relevant code is inserted as macros and can be removed at compile time if not needed. As a result, the kernel function does not have to check the flags in every loop iteration.

The custom kernel function saves hundreds of kernel launches. As a consequence, the total computation time is reduced by about two thirds.

2.5 Higher-order temporal analysis

The temporal analysis is performed by applying super-resolution optical fluctuation imaging (SOFI) [26] to the radiality maps:

$$\begin{aligned} TRAC2({x, y} ) & = \langle \delta {R_t} \cdot \delta {R_{t + \delta t}}\rangle\\ TRAC3({x, y} ) & = \langle \delta {R_t} \cdot \delta {R_{t + \delta t}} \cdot \delta {R_{t + 2\delta t}}\rangle\\ TRAC4({x, y} ) &= \langle \delta {R_t} \cdot \delta {R_{t + \delta t}} \cdot \delta {R_{t + 2\delta t}} \cdot \delta {R_{t + 3\delta t }}\rangle\\ & \quad - \langle \delta {R_t} \cdot \delta {R_{t + \delta t}}\rangle \langle\delta {R_{t + 2\delta t}} \cdot \delta {R_{t + 3\delta t}}\rangle \\ & \quad {- \langle \delta {R_t} \cdot \delta {R_{t + 2\delta t}}\rangle \langle\delta {R_{t + \delta t}} \cdot \delta {R_{t + 3\delta t}}}\rangle \\ &\quad { - \langle \delta {R_t} \cdot \delta {R_{t + 3\delta t}}\rangle \langle\delta {R_{t + \delta t}} \cdot \delta {R_{t + 2\delta t}}}\rangle\end{aligned}$$
where ${\delta}{\textrm{R}_\textrm{t}} = {R_t} - \langle R \rangle$, and $\langle \cdots \rangle$ stands for temporal average.

The temporal average here is a kind of reduction, in which a series of elements is reduced to one output, and CUDA programs must be specially designed to fully optimize it. Fortunately, CuPy's ReductionKernel easily meets this requirement: we pass it the input and output matrices and specify the operations to perform. By multiplying the different matrices in the map step and summing them in the reduce step, we obtain TRAC2 and TRAC3. TRAC4 additionally reuses TRAC2-type products; after the corresponding multiplications and subtractions, we obtain TRAC4 as well.
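The cumulants of Eq. (8) can be sketched portably as below, here in NumPy with a one-frame lag $\delta t$; the GPU version evaluates the same temporal averages with CuPy ReductionKernels.

```python
import numpy as np

def trac(R, order=2):
    """Temporal radiality auto-cumulant of a radiality stack R (frames, H, W),
    following Eq. (8) with a lag of one frame."""
    dR = R - R.mean(axis=0)                          # delta R_t
    if order == 2:
        return (dR[:-1] * dR[1:]).mean(axis=0)
    if order == 3:
        return (dR[:-2] * dR[1:-1] * dR[2:]).mean(axis=0)
    if order == 4:
        a, b, c, d = dR[:-3], dR[1:-2], dR[2:-1], dR[3:]
        return ((a * b * c * d).mean(axis=0)
                - (a * b).mean(axis=0) * (c * d).mean(axis=0)
                - (a * c).mean(axis=0) * (b * d).mean(axis=0)
                - (a * d).mean(axis=0) * (b * c).mean(axis=0))
    raise ValueError("order must be 2, 3, or 4")
```

A stack with no fluctuations yields a zero cumulant, as expected, since every $\delta R_t$ vanishes.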

3. Results and discussions

First, we demonstrated the speed-up of our algorithm. Standard low-density data consisting of 10,000 frames (64×64 pixels, 16 bit) [10] were used, and every 10 or 100 original frames were projected into one frame to create high-density images (i.e., the final data contained 1000 or 100 frames). The test was executed on the same computer (Intel Xeon Gold 5115 Central Processing Unit (CPU), 2.4 GHz, 10 cores; Nvidia GeForce RTX 2080 Ti GPU, 1.55 GHz), and the results are shown in Table 1. Compared with NanoJ-SRRF (CPU) [10], our algorithm improved the calculation speed by up to 78-fold, indicating its capability for real-time calculation.


Table 1. Computation time for tested SRRF algorithms

We then characterized our algorithm on various types of data with fluorescence fluctuations. First, fluorescence fluctuations can derive from photoblinking of the fluorophores. In live-cell dSTORM, realizing sparse blinking is challenging, since the laser intensities and dSTORM-buffer concentrations are much lower than those in fixed cells [6,7,27]. Moreover, owing to the principle of SMLM, the performance of mainstream SMLM algorithms decreases as the blinking density increases [28]. To evaluate our SRRF algorithm on live-cell dSTORM data, the Maximum likelihood algorithm encoded on a Graphics Processing Unit (MaLiang) and the ThunderSTORM algorithm [8,9] were used as controls.

Living primary cultured astrocytes were obtained from 1-day-old Wistar rats, stained with the fluorescent probe Lifeact-AF647-CPP [18], and imaged in live-cell dSTORM buffer (supplemented with 2% glucose, 6.7% of 1 M HEPES (pH 8.0), 0.5% beta-mercaptoethanol, and an oxygen-scavenging system (0.5 mg ml−1 glucose oxidase and 40 µg ml−1 catalase)). A total of 10,000 frames were acquired at 50 Hz, and the super-resolution images were rendered from sub-stacks of 500 frames. In the diffraction-limited TIRF image, actin filaments appeared as approximately parallel lines, sometimes assembled into bundles (Fig. 3(a)). In contrast, the super-resolved SRRF image showed more detail than the images from MaLiang or ThunderSTORM (Figs. 3(b)–3(d)), especially in regions with dense actin filaments (e.g., the upper right and lower left corners). For instance, at least 7 bundles of actin filaments were distinguished with the SRRF algorithm, while only 4 bundles could be resolved with MaLiang or ThunderSTORM (upper right corners). Notably, different trends of actin filaments were revealed from the same set of data (yellow arrowheads). Based on these rich details, we observed actin filament rearrangement in the time-lapse super-resolution images (Figs. 3(e)–3(g)). The two bundles (red dashed line) first gathered together at 40 s, then began to split up, and finally formed a larger gap at 200 s (red solid line). Remarkably, during this dynamic rearrangement, the actin filaments were measured with FWHMs of 22–33 nm (Figs. 3(f) and 3(g)), and the overall resolution was determined to be 32 nm [29]. In contrast, no distinguishable gap could be observed at the same position at any time point with the MaLiang or ThunderSTORM algorithm, owing to their lack of detail (Figs. 3(h) and 3(i)). As a consequence, the resolution acquired from the same raw data with MaLiang was restricted to 60 nm [18]. Figure 3(j) shows another example. SRRF behaved better especially where the actin filaments formed thin, dense, or crossing structures (Figs. 3(k)–3(m)). The yellow boxes in Figs. 3(j)–3(m) show that two bundles of actin filaments were resolved with SRRF, while MaLiang and ThunderSTORM failed to localize them.


Fig. 3. Comparison of the different algorithms for resolving the same live-cell dSTORM data. (a,j) Diffraction-limited TIRF images (128×128 pixels, 16 bit) of Astrocytes labeled with fluorescent probes for actin filaments. The dSTORM raw data rendered with (b,k) SRRF, (c,l) MaLiang, or (d,m) ThunderSTORM algorithm. (e,h,i) Enlarged images of the yellow-boxed region in (b), (c), (d), respectively. (f,g) Intensity profiles along the dashed and solid yellow line in (e). Each super-resolution image was reconstructed from 500 frames (128×128 pixels, 16 bit) at 50 Hz. Scale bars: (a-d,j-m) 5 µm, (e,h,i) 500 nm.


To further demonstrate the capacity of the SRRF algorithm for handling high-density dSTORM data, we utilized 30,000 frames of raw data acquired in fixed cells (Fig. 4(a)). Microtubules in fixed BSC-1 cells were labeled with Alexa Fluor 647 by immunofluorescence staining [27]. Figure 4(b) shows the fluorescence density of the first frame. Every 100, 200, or 300 original frames were projected into one frame, so the original density was increased 100-, 200-, or 300-fold (i.e., the final data included 300, 150, or 100 frames, respectively; Figs. 4(c)–4(h)). As the density increased, the structural continuity of the microtubules decreased significantly when the raw frames were processed with ThunderSTORM. In contrast, there was no significant decline in image quality when SRRF was used on the data with higher fluorescence density.


Fig. 4. Compatibility of the algorithms to high-density dSTORM data. (a) Diffraction-limited TIRF image projected from 30,000 frames (64×64 pixels, 16 bit) of raw data. (b) The first frame of the raw data. (c,e,g), Super-resolution images reconstructed with SRRF when the sample density was increased 100×, 200×, or 300×, respectively. (d,f,h), Super-resolution images reconstructed with ThunderSTORM when the sample density was increased 100×, 200×, or 300×, respectively. Scale bars: 1 µm.


Taken together, these dSTORM results suggest the robustness of our SRRF algorithm in handling high-density SMLM data, especially in live-cell imaging.

On the other hand, fluorescence fluctuations can also come from modulated illumination. SIM provides one way to artificially introduce fluorescence fluctuations in widefield imaging mode (Fig. 5). We first used a standard fluorescent sample [30,31] to resolve mitochondria in fixed bovine pulmonary artery endothelial cells (BPAEC) stained with MitoTracker Red CMXRos (Figs. 5(a)–5(c)). Each image was reconstructed from 9 frames of raw data (three directions, three phases), and the interpolation magnification was 4× for the SRRF algorithm. The resolution was obviously improved in comparison with the SIM algorithm (Figs. 5(d)–5(f)). We also applied the algorithms to live-cell images (Figs. 5(g)–5(i)). Living human bone osteosarcoma epithelial (U2OS) cells were labeled with SiR-tubulin [32] and imaged in SIM mode at 50-ms exposure time (i.e., 450 ms for a whole SIM frame) [33,34]. Despite the comparatively high background, the SRRF image presents higher resolution than the SIM image (Figs. 5(j)–5(l)). Meanwhile, Fig. 5(m) shows the clear dynamics of microtubules (yellow arrowheads).


Fig. 5. Potential of SRRF algorithm in resolving SIM data. (a,g) Widefield images (1024×1024 pixels, 16 bit) of mitochondria in fixed BPAEC and microtubules in living U2OS. (b,h) Super-resolution images processed with SRRF algorithm. (c,i) Super-resolution images resolved with SIM algorithm. (d,e,j,k) Enlarged images of the white-boxed region in (b), (c), (h), and (i), respectively. (f,l) Intensity profiles along the yellow and blue lines in (d),(e) or (j),(k), respectively. (m) Magnified time-lapse inset of the yellow-boxed region in (h). Each super-resolution image was reconstructed from 9 frames of SIM raw data. For the time-lapse images, representative frames are displayed. Scale bars: (a-c, g-i) 10 µm, (d,e,j,k,m) 1 µm.


In point-scanning microscopy, fluorescence fluctuations can also be generated by artificially modulating the laser illumination with either random or regular patterns. Here we introduced speckle (randomly generated) and vortex (phases graded from 0 to 2π over every 10 frames) patterns in super-resolution Airyscan microscopy, with unmodulated illumination (a conventional Gaussian spot) as a control. An Olympus IX71 microscope equipped with an oil-immersion objective lens (UPlan SApo 100x/1.4 Oil, Olympus) was used to collect the fluorescence. The Airyscan images were obtained by pixel reassignment and summation of the images from a detector array of 19 avalanche photodiodes (each detector covers an area of 0.2 Airy units (AU), and the whole array has an equivalent pinhole of 1 AU) [35]. 20-nm fluorescent beads were irradiated with a 640-nm laser, and a total of 100 frames were used for computation with 10× interpolation magnification (Figs. 6(a)–6(c)). The resolution improvement in the control group is comparable to that in the speckle mode (Fig. 6(d)), suggesting that the resolution can be significantly improved by SRRF even without artificially introduced fluctuations. In the vortex mode, by contrast, the same bead has an FWHM of 21 nm, indicating further enhancement when an appropriate modulation mode is used. Despite the resolution improvement, some adjacent beads were not separated successfully, owing to insufficient independence of the brightness fluctuations between the particles (Figs. 6(a)–6(c)). Airyscan images of living U2OS cells stained with Atto 647N [36] were also investigated (Figs. 6(e)–6(g)). The results showed rich mitochondrial dynamics, such as elongation, shortening, bending, fission, and fusion, demonstrating the capability of our SRRF algorithm for analyzing Airyscan images of living cells (yellow arrowheads; Fig. 6(g)).


Fig. 6. Images acquired with Airyscan technique and post-processed with SRRF algorithm. (a-c) Airyscan (Lower half; 500×500 pixels, 20 nm/pixel, 6-µs dwell time for each pixel, 16 bit) and SRRF result (Upper half) images of 20-nm fluorescent beads when using illumination laser with no modulation, speckle patterns, and vortex patterns, respectively. (d) Intensity profiles across the fluorescent bead in the blue-, green-, and yellow-boxes in (a), (b) and (c), respectively. (e,f) Airyscan (Left panel; 500×500 pixels, 50 nm/pixel, 4-µs dwell time for each pixel, 16 bit) and SRRF result (Right panel) images of mitochondria in living U2OS cells using illumination laser with speckle and vortex patterns. (g) Magnified time-lapse inset of the white-boxed region in (e). Each super-resolution image was reconstructed from 9 frames of raw data in (e)-(g). For the time-lapse images, representative frames are displayed. Scale bars: (a-c) 2 µm, (e,f) 5 µm, (g) 500 nm.


4. Conclusion

In this paper, we optimized the SRRF algorithm in Python, making the program accessible in more scenarios. This advantage stands out especially in comparison with SRRF-Stream, which is restricted to Andor cameras. Meanwhile, the algorithm uses parallel acceleration with CUDA, making it dozens of times faster than NanoJ-SRRF.

This re-developed SRRF algorithm was tested on various microscopy methods with fluorescence fluctuations from either fluorophore blinking or artificially modulated illumination. Applied to live-cell dSTORM data, the SRRF algorithm revealed far more detail than traditional SMLM algorithms while providing higher resolution (32 nm). We further demonstrated its capability for handling high-density dSTORM images by projecting hundreds of raw frames acquired in fixed samples into single frames. The SRRF algorithm can also be used on SIM and modulated-Airyscan data, providing high spatial resolution and clear dynamic details. In these methods, which lack obvious fluorophore blinking, super-resolved SRRF images can be obtained even without artificially introduced laser modulation, while specific illumination patterns further improve the resolution.

Overall, these results indicate the great potential of our SRRF algorithm for processing data from various types of live-cell super-resolution microscopy and should facilitate its wide application in the life sciences.

Funding

National Key R&D Program of China (2018YFA0701400); National Natural Science Foundation of China (61735017, 61827825, 31901059); Fundamental Research Funds for the Central Universities (2019XZZX003-06, 2019QNA5006); China Postdoctoral Science Foundation (2019M662042); Zhejiang Lab (2018EB0ZX01); ZJU-Sunny Photonics Innovation Center (2019-01).

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

1. Y. M. Sigal, R. Zhou, and X. Zhuang, “Visualizing and discovering cellular structures with super-resolution microscopy,” Science 361(6405), 880–887 (2018). [CrossRef]  

2. E. Abbe, “Beitrage zur Theorie des Mikroskops und der mikroskopischen Wahrnehmung,” Arkiv. Mikroskop. Anat. 9(1), 413–468 (1873). [CrossRef]  

3. E. Betzig, G. H. Patterson, R. Sougrat, O. W. Lindwasser, S. Olenych, J. S. Bonifacino, M. W. Davidson, J. Lippincott-Schwartz, and H. F. Hess, “Imaging intracellular fluorescent proteins at nanometer resolution,” Science 313(5793), 1642–1645 (2006). [CrossRef]  

4. M. J. Rust, M. Bates, and X. Zhuang, “Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM),” Nat. Methods 3(10), 793–796 (2006). [CrossRef]  

5. S. T. Hess, T. P. K. Girirajan, and M. D. Mason, “Ultra-high resolution imaging by fluorescence photoactivation localization microscopy,” Biophys. J. 91(11), 4258–4272 (2006). [CrossRef]  

6. U. Endesfelder and M. Heilemann, “Direct stochastic optical reconstruction microscopy (dSTORM),” Methods Mol. Biol. 1251, 263–276 (2015). [CrossRef]  

7. S. Waldchen, J. Lehmann, T. Klein, S. van de Linde, and M. Sauer, “Light-induced cell damage in live-cell super-resolution microscopy,” Sci. Rep. 5(1), 15348 (2015). [CrossRef]  

8. T. Quan, P. Li, F. Long, S. Zeng, Q. Luo, P. N. Hedde, G. U. Nienhaus, and Z. L. Huang, “Ultra-fast, high-precision image analysis for localization-based super resolution microscopy,” Opt. Express 18(11), 11867–11876 (2010). [CrossRef]  

9. M. Ovesny, P. Krizek, J. Borkovec, Z. Svindrych, and G. M. Hagen, “ThunderSTORM: a comprehensive ImageJ plug-in for PALM and STORM data analysis and super-resolution imaging,” Bioinformatics 30(16), 2389–2390 (2014). [CrossRef]  

10. N. Gustafsson, S. Culley, G. Ashdown, D. M. Owen, P. M. Pereira, and R. Henriques, “Fast live-cell conventional fluorophore nanoscopy with ImageJ through super-resolution radial fluctuations,” Nat. Commun. 7(1), 12471 (2016). [CrossRef]  

11. S. Culley, K. L. Tosheva, P. M. Pereira, and R. Henriques, “SRRF: Universal live-cell super-resolution microscopy,” Int. J. Biochem. Cell Biol. 101, 74–79 (2018). [CrossRef]  

12. K. A. Lukyanov, D. M. Chudakov, S. Lukyanov, and V. V. Verkhusha, “Photoactivatable fluorescent proteins,” Nat. Rev. Mol. Cell Biol. 6(11), 885–890 (2005). [CrossRef]  

13. J. Cooper, M. Browne, H. Gribben, M. Catney, C. Coates, A. Mullan, G. Wilde, and R. Henriques, “Real time multi-modal super-resolution microscopy through Super-Resolution Radial Fluctuations (SRRF-Stream),” Proc. SPIE 10884, 1088418 (2019). [CrossRef]  

14. T. E. Oliphant, “Python for scientific computing,” Comput. Sci. Eng. 9(3), 10–20 (2007). [CrossRef]  

15. C. J. Kalkman, “LabVIEW: a software system for data acquisition, data analysis, and instrument control,” J. Clin. Monit. 11(1), 51–58 (1995). [CrossRef]  

16. C.-T. Yang, C.-L. Huang, and C.-F. Lin, “Hybrid CUDA, OpenMP, and MPI parallel programming on multicore GPU clusters,” Comput. Phys. Commun. 182(1), 266–269 (2011). [CrossRef]  

17. J. D. Owens, M. Houston, D. Luebke, S. Green, J. E. Stone, and J. C. Phillips, “GPU Computing,” Proc. IEEE 96(5), 879–899 (2008). [CrossRef]  

18. Y. Han, M. Li, M. Zhang, X. Huang, L. Chen, X. Hao, C. Kuang, X. Liu, and Y.-H. Zhang, “Cell-permeable organic fluorescent probes for live-cell super-resolution imaging of actin filaments,” J. Chem. Technol. Biotechnol. 94(6), 2040–2046 (2019).

19. M. G. L. Gustafsson, “Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy,” J. Microsc. 198(2), 82–87 (2000).

20. J. Huff, “The Airyscan detector from ZEISS: confocal imaging with improved signal-to-noise ratio and super-resolution,” Nat. Methods 12(12), i–ii (2015).

21. J. Kossaifi, Y. Panagakis, A. Anandkumar, and M. Pantic, “TensorLy: Tensor learning in Python,” J. Mach. Learn. Res. 20(1), 925–930 (2019).

22. C. Twigg, “Catmull-Rom splines,” Computer 41(6), 4–6 (2003).

23. M. Hu and J. Q. Tan, “Adaptive osculatory rational interpolation for image processing,” J. Comput. Appl. Math. 195(1-2), 46–53 (2006).

24. C. Yuksel, S. Schaefer, and J. Keyser, “Parameterization and applications of Catmull-Rom curves,” Comput. Aided Design 43(7), 747–755 (2011).

25. B. G. Olivier, M. R. Johann, and J.-H. S. Hofmeyr, “Modelling cellular processes with Python and Scipy,” Mol. Biol. Rep. 29(1-2), 249–254 (2002).

26. T. Dertinger, R. Colyer, G. Iyer, S. Weiss, and J. Enderlein, “Fast, background-free, 3D super-resolution optical fluctuation imaging (SOFI),” Proc. Natl. Acad. Sci. U. S. A. 106(52), 22287–22292 (2009).

27. S. A. Jones, S.-H. Shim, J. He, and X. Zhuang, “Fast, three-dimensional super-resolution imaging of live cells,” Nat. Methods 8(6), 499–505 (2011).

28. D. Sage, T.-A. Pham, H. Babcock, T. Lukes, T. Pengo, J. Chao, R. Velmurugan, A. Herbert, A. Agrawal, S. Colabrese, A. Wheeler, A. Archetti, B. Rieger, R. Ober, G. M. Hagen, J.-B. Sibarita, J. Ries, R. Henriques, M. Unser, and S. Holden, “Super-resolution fight club: assessment of 2D and 3D single-molecule localization microscopy software,” Nat. Methods 16(5), 387–395 (2019).

29. A. C. Descloux, K. S. Grussmayer, and A. Radenovic, “Parameter-free image resolution estimation based on decorrelation analysis,” Nat. Methods 16(9), 918–924 (2019).

30. H. Huang, L. Yang, P. Zhang, K. Qiu, J. Huang, Y. Chen, J. Diao, J. Liu, L. Ji, J. Long, and H. Chao, “Real-time tracking mitochondrial dynamic remodeling with two-photon phosphorescent iridium (III) complexes,” Biomaterials 83, 321–331 (2016).

31. Q. Liu, Y. Chen, W. Liu, Y. Han, R. Cao, Z. Zhang, C. Kuang, and X. Liu, “Total internal reflection fluorescence pattern-illuminated Fourier ptychographic microscopy,” Opt. Laser. Eng. 123, 45–52 (2019).

32. G. Lukinavičius, L. Reymond, E. D’Este, A. Masharina, F. Göttfert, H. Ta, A. Güther, M. Fournier, S. Rizzo, H. Waldmann, C. Blaukopf, C. Sommer, D. W. Gerlich, H.-D. Arndt, S. W. Hell, and K. Johnsson, “Fluorogenic probes for live-cell imaging of the cytoskeleton,” Nat. Methods 11(7), 731–733 (2014).

33. R. Cao, Y. Chen, W. Liu, D. Zhu, C. Kuang, Y. Xu, and X. Liu, “Inverse matrix based phase estimation algorithm for structured illumination microscopy,” Biomed. Opt. Express 9(10), 5037–5051 (2018).

34. Y. Chen, R. Cao, W. Liu, D. Zhu, Z. Zhang, C. Kuang, and X. Liu, “Widefield and total internal reflection fluorescent structured illumination microscopy with scanning galvo mirrors,” J. Biomed. Opt. 23(4), 046007 (2018).

35. Z. Yu, S. Liu, D. Zhu, C. Kuang, and X. Liu, “Parallel detecting super-resolution microscopy using correlation based image restoration,” Opt. Commun. 404, 139–146 (2017).

36. Y. Han, M. Li, F. Qiu, M. Zhang, and Y. H. Zhang, “Cell-permeable organic fluorescent probes for live-cell long-term super-resolution imaging reveal lysosome-mitochondrion interactions,” Nat. Commun. 8(1), 1307 (2017).

Figures (6)

Fig. 1. Flow chart of the SRRF algorithm. Arrows indicate the direction of the workflow. GPU: graphics processing unit.
Fig. 2. Flow chart of the calculation with CUDA functions.
Fig. 3. Comparison of different algorithms resolving the same live-cell dSTORM data. (a,j) Diffraction-limited TIRF images (128×128 pixels, 16 bit) of astrocytes labeled with fluorescent probes for actin filaments. The dSTORM raw data rendered with the (b,k) SRRF, (c,l) MaLiang, or (d,m) ThunderSTORM algorithm. (e,h,i) Enlarged images of the yellow-boxed regions in (b), (c), and (d), respectively. (f,g) Intensity profiles along the dashed and solid yellow lines in (e). Each super-resolution image was reconstructed from 500 frames (128×128 pixels, 16 bit) acquired at 50 Hz. Scale bars: (a-d,j-m) 5 µm; (e,h,i) 500 nm.
Fig. 4. Compatibility of the algorithms with high-density dSTORM data. (a) Diffraction-limited TIRF image projected from 30,000 frames (64×64 pixels, 16 bit) of raw data. (b) The first frame of the raw data. (c,e,g) Super-resolution images reconstructed with SRRF when the sample density was increased 100×, 200×, or 300×, respectively. (d,f,h) Super-resolution images reconstructed with ThunderSTORM at the same increased densities. Scale bars: 1 µm.
Fig. 5. Potential of the SRRF algorithm in resolving SIM data. (a,g) Widefield images (1024×1024 pixels, 16 bit) of mitochondria in fixed BPAEC and microtubules in living U2OS cells. (b,h) Super-resolution images processed with the SRRF algorithm. (c,i) Super-resolution images resolved with the SIM algorithm. (d,e,j,k) Enlarged images of the white-boxed regions in (b), (c), (h), and (i), respectively. (f,l) Intensity profiles along the yellow and blue lines in (d),(e) or (j),(k), respectively. (m) Magnified time-lapse inset of the yellow-boxed region in (h). Each super-resolution image was reconstructed from 9 frames of SIM raw data. For the time-lapse images, representative frames are displayed. Scale bars: (a-c,g-i) 10 µm; (d,e,j,k,m) 1 µm.
Fig. 6. Images acquired with the Airyscan technique and post-processed with the SRRF algorithm. (a-c) Airyscan (lower half; 500×500 pixels, 20 nm/pixel, 6-µs dwell time per pixel, 16 bit) and SRRF (upper half) images of 20-nm fluorescent beads under laser illumination with no modulation, speckle patterns, and vortex patterns, respectively. (d) Intensity profiles across the fluorescent beads in the blue, green, and yellow boxes in (a), (b), and (c), respectively. (e,f) Airyscan (left panel; 500×500 pixels, 50 nm/pixel, 4-µs dwell time per pixel, 16 bit) and SRRF (right panel) images of mitochondria in living U2OS cells under speckle- and vortex-patterned illumination. (g) Magnified time-lapse inset of the white-boxed region in (e). Each super-resolution image in (e)-(g) was reconstructed from 9 frames of raw data. For the time-lapse images, representative frames are displayed. Scale bars: (a-c) 2 µm; (e,f) 5 µm; (g) 500 nm.

Tables (1)

Table 1. Computation time for tested SRRF algorithms

Equations (8)

$$\begin{cases} G_x[x,y] = I[x-1,\,y] - I[x+1,\,y] \\ G_y[x,y] = I[x,\,y-1] - I[x,\,y+1] \end{cases}$$
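As a minimal sketch of the gradient definition above (assuming NumPy, `[x, y]` axis ordering, and edge padding at the image borders, none of which is specified here):

```python
import numpy as np

def image_gradients(img):
    """Difference gradients G_x, G_y per the definition above.

    Border handling by edge padding is an assumption; the paper
    does not state its boundary treatment.
    """
    padded = np.pad(img, 1, mode="edge")
    # G_x[x, y] = I[x-1, y] - I[x+1, y]
    gx = padded[:-2, 1:-1] - padded[2:, 1:-1]
    # G_y[x, y] = I[x, y-1] - I[x, y+1]
    gy = padded[1:-1, :-2] - padded[1:-1, 2:]
    return gx, gy
```

On a linear intensity ramp along x, interior `gx` values are the constant −2 and `gy` is zero, as the symmetric difference predicts.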
$$y_c = \left[\,0,\ \tfrac{1}{M},\ \cdots,\ H - \tfrac{1}{M}\,\right]$$
$$u_y = \left[\,\min(\lfloor y_c \rfloor - 1,\,0),\ \lfloor y_c \rfloor,\ \max(\lfloor y_c \rfloor + 1,\,h-1),\ \max(\lfloor y_c \rfloor + 2,\,h-1)\,\right]$$
$$Q_y = \frac{1}{2}\begin{bmatrix} 0 & -1 & 2 & -1 \\ 2 & 0 & -5 & 3 \\ 0 & 1 & 4 & -3 \\ 0 & 0 & -1 & 1 \end{bmatrix}\begin{bmatrix} 1 \\ \{y_c\} \\ \{y_c\}^2 \\ \{y_c\}^3 \end{bmatrix}$$
$$P_y(x,y) = \sum_{k=0}^{3} P\big(x,\,u_y(x,k)\big)\,Q(x,k)$$
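The interpolation steps above (sub-pixel grid, clamped neighbour indices, Catmull-Rom basis weights, weighted sum of four neighbours) can be sketched in one dimension. `catmull_rom_upsample` and its clamped edge handling are illustrative assumptions, not the paper's GPU implementation:

```python
import numpy as np

def catmull_rom_upsample(signal, M):
    """Upsample a 1-D signal M-fold with Catmull-Rom splines."""
    h = len(signal)
    yc = np.arange(h * M) / M              # sub-pixel positions [0, 1/M, ...]
    base = np.floor(yc).astype(int)
    t = yc - base                          # fractional part {y_c}
    # Four neighbour indices per output sample, clamped to [0, h-1]
    idx = np.stack([np.clip(base + k, 0, h - 1) for k in (-1, 0, 1, 2)])
    # Catmull-Rom basis applied to [1, t, t^2, t^3]
    B = 0.5 * np.array([[0, -1,  2, -1],
                        [2,  0, -5,  3],
                        [0,  1,  4, -3],
                        [0,  0, -1,  1]])
    Q = B @ np.stack([np.ones_like(t), t, t ** 2, t ** 3])
    # Weighted sum of the four neighbours
    return np.sum(signal[idx] * Q, axis=0)
```

Because the Catmull-Rom weights sum to one and interpolate the control points, an input that is linear is reproduced exactly at both integer and half-integer positions.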
$$R_t(x,y) = \frac{1}{N}\sum_{i=1}^{N} \operatorname{sgn}\!\left(\vec{G}_i \cdot \vec{r}_i\right) \left[\,1 - \frac{1}{r_i}\,\frac{\left|\,(x_c - x_i)\,G_{y_i} - (y_c - y_i)\,G_{x_i}\right|}{\sqrt{G_{x_i}^2 + G_{y_i}^2}}\,\right]^{2}$$
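The radiality measure can be sketched for a single magnified pixel: each of the N ring points contributes the squared complement of its normalized gradient-line distance, signed by whether the gradient converges on the centre. The argument names and the small `eps` regulariser are illustrative assumptions:

```python
import numpy as np

def radiality(xc, yc, px, py, gx, gy, eps=1e-12):
    """Radiality of one magnified pixel at (xc, yc).

    px, py: coordinates of the N ring sample points;
    gx, gy: gradient components sampled at those points.
    """
    rx, ry = xc - px, yc - py              # vectors from ring points to centre
    r = np.hypot(rx, ry)
    # Perpendicular distance from the centre to each gradient line
    d = np.abs(rx * gy - ry * gx) / (np.hypot(gx, gy) + eps)
    # sgn(G_i . r_i): +1 when the gradient points toward the centre
    sign = np.sign(gx * rx + gy * ry)
    return np.mean(sign * (1.0 - d / r) ** 2)
```

For gradients that all point exactly at the centre, every distance term is zero and the radiality reaches its maximum of 1.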
$$G_w(x,y) = \frac{1}{N}\sum_{i=1}^{N} \operatorname{sgn}\!\left(\vec{G}_i \cdot \vec{r}_i\right) \left(\frac{|\vec{G}_i|}{|\vec{G}_c|}\right) I_c$$
$$\begin{aligned} \mathrm{TRAC}_2(x,y) &= \left\langle \delta R_t\,\delta R_{t+\delta t} \right\rangle \\ \mathrm{TRAC}_3(x,y) &= \left\langle \delta R_t\,\delta R_{t+\delta t}\,\delta R_{t+2\delta t} \right\rangle \\ \mathrm{TRAC}_4(x,y) &= \left\langle \delta R_t\,\delta R_{t+\delta t}\,\delta R_{t+2\delta t}\,\delta R_{t+3\delta t} \right\rangle - \left\langle \delta R_t\,\delta R_{t+\delta t} \right\rangle \left\langle \delta R_{t+2\delta t}\,\delta R_{t+3\delta t} \right\rangle \\ &\quad - \left\langle \delta R_t\,\delta R_{t+2\delta t} \right\rangle \left\langle \delta R_{t+\delta t}\,\delta R_{t+3\delta t} \right\rangle - \left\langle \delta R_t\,\delta R_{t+3\delta t} \right\rangle \left\langle \delta R_{t+\delta t}\,\delta R_{t+2\delta t} \right\rangle \end{aligned}$$
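The temporal radiality auto-cumulants can be sketched over a stack of radiality frames, with δR taken as the mean-subtracted time series. `trac` and its equal-length lag windows are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def trac(stack, order=2, dt=1):
    """Temporal radiality auto-cumulants TRAC2/3/4 of a (T, H, W) stack."""
    d = stack - stack.mean(axis=0)            # delta R_t
    T = d.shape[0]
    n = T - (order - 1) * dt                  # usable frames per lag window
    lag = lambda k: d[k * dt : k * dt + n]
    if order == 2:
        return (lag(0) * lag(1)).mean(axis=0)
    if order == 3:
        return (lag(0) * lag(1) * lag(2)).mean(axis=0)
    if order == 4:
        # Time-averaged lag products <dR_{t+k1*dt} ... dR_{t+km*dt}>
        m = lambda *ks: np.prod([lag(k) for k in ks], axis=0).mean(axis=0)
        return (m(0, 1, 2, 3) - m(0, 1) * m(2, 3)
                - m(0, 2) * m(1, 3) - m(0, 3) * m(1, 2))
    raise ValueError("order must be 2, 3, or 4")
```

A constant stack has zero fluctuation and hence zero cumulants; a frame-to-frame alternating signal gives TRAC2 = −1, since adjacent lags always have opposite sign.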