
Three-dimensional super-resolution structured illumination microscopy with maximum a posteriori probability image estimation

Open Access

Abstract

We introduce and demonstrate a new high performance image reconstruction method for super-resolution structured illumination microscopy based on maximum a posteriori probability estimation (MAP-SIM). Imaging performance is demonstrated on a variety of fluorescent samples of different thickness, labeling density and noise levels. The method provides good suppression of out of focus light, improves spatial resolution, and allows reconstruction of both 2D and 3D images of cells even in the case of weak signals. The method can be used to process both optical sectioning and super-resolution structured illumination microscopy data to create high quality super-resolution images.

© 2014 Optical Society of America

1. Introduction

Recently, new methods have been developed which circumvent the diffraction limit of optical microscopes. These include stimulated emission depletion microscopy [1] (STED), photoactivated localization microscopy [2,3] (PALM, FPALM), stochastic optical reconstruction microscopy [4] (STORM), super-resolution optical fluctuation imaging [5–7] (SOFI), and super-resolution structured illumination microscopy (SR-SIM) [8–10]. SR-SIM offers high photon efficiency, potentially high imaging rates, relatively low hardware requirements, and compatibility with most dyes and fluorescent proteins, making it an attractive method for a broad range of studies in cell biology.

SR-SIM uses illumination patterns with high spatial frequency (close to the resolution limit of the microscope) to illuminate the sample. High frequency information contained in the sample is encoded, through aliasing, into the acquired images. By acquiring multiple images with illumination patterns of different phases and orientations, aliased components can be separated and a high-resolution image reconstructed [8,9]. Two-dimensional SR-SIM enables a twofold resolution improvement in the lateral dimension [9,11,12], but does not provide optical sectioning. If a three-dimensional illumination pattern is used, resolution can also be improved in the axial direction [13,14].

Structured illumination microscopy has also been used for optical sectioning, but without lateral resolution enhancement (OS-SIM) [15]. Optically sectioned images can be calculated by taking the root mean square of the differences of the acquired images (square-law method), or by a form of homodyne detection [15]. Several other methods are also possible [16]. When combined with optimized illumination patterns, OS-SIM can achieve an axial resolution of ~300 nm [17,18]. This is about two to three fold better than is achievable in confocal laser scanning microscopy (CLSM) and is comparable to the axial resolution reported in 3D SR-SIM [13].

Recently, new concepts in structured illumination have appeared, such as combining OS-SIM and SR-SIM by weighting Fourier space image components [19], or the use of random speckle patterns for illuminating the sample (blind-SIM) [20]. Orieux et al. suggested a framework for SR-SIM based on Bayesian estimation for 2D image reconstruction [21], and we previously showed that Bayesian estimation methods have several advantages over the square-law method and can achieve a performance comparable to SR-SIM methods [22].

Here we propose an image reconstruction method for SIM which provides resolution improvement in all three dimensions using two-dimensional illumination patterns. Our method, maximum a posteriori probability SIM (MAP-SIM), is based on combining, via spectral merging in the frequency domain, maximum a posteriori probability estimation (for resolution improvement) and homodyne detection (for optical sectioning). We used a microscope setup in which the illumination pattern is generated by a spatial light modulator (SLM) together with incoherent illumination. MAP-SIM does not require precise knowledge of the point spread function (PSF), which must be carefully measured in most SR-SIM approaches. Additionally, MAP-SIM does not require precise knowledge of the pattern positions in each acquired image.

2. Theory

2.1 Maximum a posteriori probability estimation

An image acquired by the microscope can be modeled as a convolution of an ideal image of the real sample with the point spread function (PSF) of the microscope. The noise is a combination of different sources (e.g., photon noise, readout noise) and can be modeled as additive Gaussian noise with zero mean [23,24]. Image acquisition in structured illumination microscopy can then be described as

$$\mathbf{y}_k = \mathbf{H}\,\mathbf{M}_k\,\mathbf{x} + \mathbf{n}_k, \qquad (1)$$
where $k = 1, \dots, K$ indexes the sequence of illumination patterns, $\mathbf{y}_k$ denotes a vectorized (the matrix is converted into a column vector by stacking its columns on top of each other), diffraction-limited low-resolution (LR) image acquired by the camera using the $k$-th illumination pattern, $\mathbf{x}$ represents the unknown, vectorized high-resolution (HR) image, and $\mathbf{n}_k$ is a vectorized image containing additive noise; all of these vectors contain $m$ elements. The Toeplitz matrix $\mathbf{H}$ is an $m \times m$ matrix which models the convolution between the HR image and the PSF of the system, and $\mathbf{M}_k$ is an $m \times m$ diagonal matrix whose elements represent the $k$-th illumination pattern. We model the PSF of the microscope as an Airy function, see Section 2.2.
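
As an illustration of the forward model in Eq. (1), the following sketch simulates a single structured-illumination acquisition in Python/NumPy. It is not part of the original work: the OTF is assumed to be supplied in the Fourier domain (so that the convolution $\mathbf{H}$ can be applied by multiplication), and all function and parameter names are illustrative.

```python
import numpy as np
from numpy.fft import fft2, ifft2

def simulate_acquisition(x, otf, pattern, sigma=0.01, rng=None):
    """Sketch of Eq. (1): y_k = H M_k x + n_k for one illumination pattern.

    x       -- high-resolution image (2D array)
    otf     -- optical transfer function, same shape as x (Fourier-domain form of H)
    pattern -- illumination pattern M_k, same shape as x
    sigma   -- standard deviation of the additive zero-mean Gaussian noise n_k
    """
    rng = np.random.default_rng() if rng is None else rng
    illuminated = pattern * x                          # M_k x (element-wise product)
    blurred = np.real(ifft2(fft2(illuminated) * otf))  # H (M_k x) applied via the FFT
    noise = rng.normal(0.0, sigma, size=x.shape)       # n_k
    return blurred + noise
```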

The reconstruction of the HR image $\mathbf{x}$ can be performed using a Bayesian approach [21,25,26]. The maximum a posteriori estimate of $\mathbf{x}$ is obtained by maximizing the probability of the HR image given the observed LR images

$$\hat{\mathbf{x}} = \arg\max_{\mathbf{x}} \left[ P(\mathbf{x} \mid \mathbf{y}_1, \mathbf{y}_2, \dots, \mathbf{y}_K) \right]. \qquad (2)$$

Applying Bayes' theorem to the conditional probability in Eq. (2) and taking the logarithm, we obtain

$$\hat{\mathbf{x}} = \arg\max_{\mathbf{x}} \left[ \log P(\mathbf{y}_1, \mathbf{y}_2, \dots, \mathbf{y}_K \mid \mathbf{x}) + \log P(\mathbf{x}) \right]. \qquad (3)$$

Because the LR images $\mathbf{y}_k$ are independent measurements, we can write

$$P(\mathbf{y}_1, \mathbf{y}_2, \dots, \mathbf{y}_K \mid \mathbf{x}) = \prod_{k=1}^{K} P(\mathbf{y}_k \mid \mathbf{x}). \qquad (4)$$

The additive noise $\mathbf{n}_k$ in Eq. (1) is modeled as white Gaussian noise with zero mean and variance $\sigma^2$. The density function in Eq. (4) can be expressed as

$$P(\mathbf{y}_k \mid \mathbf{x}) \propto \exp\!\left( -\frac{\lVert \mathbf{y}_k - \mathbf{H}\mathbf{M}_k\mathbf{x} \rVert^2}{2\sigma^2} \right). \qquad (5)$$

Because of the presence of noise, the inversion of Eq. (1) is an ill-posed problem and some form of regularization is needed to ensure uniqueness of the solution. The regularization term in Eq. (3), provided by the density function $P(\mathbf{x})$, reflects prior knowledge about the HR image, such as a positivity constraint and image smoothness. Several kinds of priors and regularization techniques have been proposed within the Bayesian framework [27]. To impose a smoothness condition and to keep the cost function simple to minimize, we adopted a term composed of finite difference approximations of the first-order derivatives at each pixel location [28].

$$-\log P(\mathbf{x}) = \Gamma(\mathbf{x}) = \sum_i \left[ (\Delta_h \mathbf{x})_i^2 + (\Delta_v \mathbf{x})_i^2 \right]. \qquad (6)$$
Here $\Delta_h$ and $\Delta_v$ are the finite difference operators along the horizontal and vertical directions of the image, and $(\cdot)_i$ denotes the $i$-th element of a vector.
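
A minimal sketch of the regularization term of Eq. (6) and its gradient (needed later for the gradient descent update); the boundary handling (simply omitting out-of-range differences) and the function names are our own assumptions.

```python
import numpy as np

def regularizer(x):
    """Gamma(x) of Eq. (6): sum of squared first-order finite differences."""
    dh = np.diff(x, axis=1)          # horizontal differences (Delta_h x)
    dv = np.diff(x, axis=0)          # vertical differences   (Delta_v x)
    return np.sum(dh ** 2) + np.sum(dv ** 2)

def regularizer_gradient(x):
    """Gradient of Gamma(x), i.e. 2 (Delta_h^T Delta_h + Delta_v^T Delta_v) x."""
    dh = np.diff(x, axis=1)
    dv = np.diff(x, axis=0)
    g = np.zeros_like(x)
    g[:, :-1] -= 2.0 * dh            # adjoint of the horizontal difference operator
    g[:, 1:] += 2.0 * dh
    g[:-1, :] -= 2.0 * dv            # adjoint of the vertical difference operator
    g[1:, :] += 2.0 * dv
    return g
```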

Substituting Eqs. (5) and (6) into Eq. (3), the image reconstruction can be expressed as a minimization of the following cost function

$$\hat{\mathbf{x}} = \arg\min_{\mathbf{x}} \left[ \sum_{k=1}^{K} \lVert \mathbf{y}_k - \mathbf{H}\mathbf{M}_k\mathbf{x} \rVert^2 + \lambda\, \Gamma(\mathbf{x}) \right]. \qquad (7)$$

The cost function in Eq. (7) consists of two terms. The first term measures the squared error between the observed LR images and the LR images predicted from the estimated HR image. The second term is the regularization term; its contribution is controlled by the parameter $\lambda$, a small positive constant proportional to the noise variance $\sigma^2$, which sets the strength of the regularization. Equation (7) is minimized by gradient descent optimization, and the estimate of the unknown image $\mathbf{x}$ at the $(n+1)$-th iteration is obtained as

$$\mathbf{x}^{(n+1)} = \mathbf{x}^{(n)} - \alpha^{(n)} \mathbf{g}^{(n)}. \qquad (8)$$
Here $\alpha^{(n)}$ is the step size, $\mathbf{g}^{(n)}$ is the gradient of the cost function, and $n = 0, 1, 2, \dots$ is the iteration index. The iterations continue until $\lVert \alpha^{(n+1)} \mathbf{g}^{(n+1)} \rVert / \lVert \alpha^{(n)} \mathbf{g}^{(n)} \rVert < \varepsilon$, where $\varepsilon > 0$ is a user-defined stopping criterion. This allows the algorithm to stop very quickly, after only a few aggressive steps towards the minimum.
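
To make the update rule of Eq. (8) concrete, the sketch below minimizes Eq. (7) by gradient descent with a Barzilai-Borwein step size (discussed in Section 3.4). It is not the authors' MATLAB implementation: positivity is enforced by simple clipping, a symmetric PSF is assumed so that the adjoint of the convolution equals the convolution itself, and the names and default values are illustrative.

```python
import numpy as np
from numpy.fft import fft2, ifft2

def reg_gradient(x):
    """Gradient of the smoothness term Gamma(x) of Eq. (6) (see previous sketch)."""
    dh, dv = np.diff(x, axis=1), np.diff(x, axis=0)
    g = np.zeros_like(x)
    g[:, :-1] -= 2.0 * dh
    g[:, 1:] += 2.0 * dh
    g[:-1, :] -= 2.0 * dv
    g[1:, :] += 2.0 * dv
    return g

def conv_otf(img, otf):
    """Apply H (convolution with the PSF) in the frequency domain."""
    return np.real(ifft2(fft2(img) * otf))

def cost_gradient(x, ys, patterns, otf, lam):
    """Gradient of the cost in Eq. (7); assumes a symmetric PSF so that H^T = H."""
    g = lam * reg_gradient(x)
    for y_k, m_k in zip(ys, patterns):
        residual = conv_otf(m_k * x, otf) - y_k
        g += 2.0 * m_k * conv_otf(residual, otf)      # 2 M_k^T H^T (H M_k x - y_k)
    return g

def map_estimate(ys, patterns, otf, lam=1e-3, eps=1e-2, max_iter=20):
    """Minimize Eq. (7) by gradient descent with a Barzilai-Borwein step size."""
    x = np.sum(ys, axis=0)                            # initial guess: sum of the SIM images
    g = cost_gradient(x, ys, patterns, otf, lam)
    alpha, prev_step_norm = 0.5, None                 # fixed step for the first iteration
    for _ in range(max_iter):
        step = alpha * g
        x_new = np.clip(x - step, 0.0, None)          # descent step with positivity constraint
        if prev_step_norm is not None and np.linalg.norm(step) / prev_step_norm < eps:
            return x_new                              # stop once steps shrink rapidly
        g_new = cost_gradient(x_new, ys, patterns, otf, lam)
        s, d = (x_new - x).ravel(), (g_new - g).ravel()
        alpha = abs(np.dot(s, d)) / (np.dot(d, d) + 1e-12)   # Barzilai-Borwein step size
        x, g, prev_step_norm = x_new, g_new, np.linalg.norm(step)
    return x
```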

2.2 OTF modeling

The spatial frequency $f_c$ at which the optical transfer function (OTF) reaches zero determines the achievable resolution of a microscope, see Fig. 1(a). We model the PSF as an Airy disk [29], which in Fourier space leads to the OTF

$$\mathrm{OTF}(f) = \frac{1}{\pi} \left[ 2\cos^{-1}\!\left( \frac{\lvert f \rvert}{f_c} \right) - \sin\!\left( 2\cos^{-1}\!\left( \frac{\lvert f \rvert}{f_c} \right) \right) \right], \qquad (9)$$
where $f$ is the spatial frequency. The cut-off frequency $f_c$ is estimated by calculating the radial average of the power spectral density (PSD) of a widefield image, see Fig. 1(b).
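
A sketch of Eq. (9) evaluated on a discrete frequency grid follows (NumPy); the grid construction, units, and function name are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def airy_otf(shape, fc, pixel_size=1.0):
    """Evaluate the OTF of Eq. (9) on a 2D frequency grid.

    shape      -- (rows, cols) of the image
    fc         -- cut-off frequency, in cycles per unit of pixel_size
    pixel_size -- sampling step of the image
    """
    fy = np.fft.fftfreq(shape[0], d=pixel_size)
    fx = np.fft.fftfreq(shape[1], d=pixel_size)
    f = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))   # radial spatial frequency |f|
    rho = np.clip(f / fc, 0.0, 1.0)
    otf = (2.0 * np.arccos(rho) - np.sin(2.0 * np.arccos(rho))) / np.pi
    otf[f > fc] = 0.0                                    # OTF is zero beyond the cut-off
    return otf
```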

Fig. 1 Schematic of spectral merging. (a) Spatial frequencies in Fourier space, where fc is the cut-off frequency. (b) Power spectral density (PSD) in relation to the spatial frequency. (c) Blending frequency spectra of HR-MAP estimation and LR homodyne detection using low and high pass filters.

2.3 Spectral merging

MAP estimation of a high resolution image obtained with structured illumination microscopy enables reconstruction of images (HR-MAP) with details unresolvable in a widefield microscope. However, MAP estimation as described here does not suppress the out of focus light. On the other hand, the homodyne detection method

$$\mathbf{x}_{\mathrm{LR\text{-}HOM}} = \left\lvert \sum_{k=1}^{K} \mathbf{y}_k \exp\!\left( 2\pi i \frac{k}{K} \right) \right\rvert \qquad (10)$$
used in OS-SIM [15] provides images (LR-HOM) with optical sectioning but without resolution improvement. Noting that the unwanted out of focus light is predominantly present at low spatial frequencies, we merge the LR-HOM and HR-MAP images in the frequency domain, see Fig. 1(c), to obtain the final HR image (MAP-SIM). A low pass filter is applied to the LR-HOM image and a complementary high pass filter is applied to the HR-MAP image. O’Holleran and Shaw [19] used Gaussian weights with empirically adjusted standard deviations for weighting frequency components obtained by SR-SIM. We verified that Gaussian functions are well suited for our case, and we applied a weighting scheme based on a linear combination of the two merged components to preserve the total signal power
$$\mathbf{x}_{\mathrm{MAP\text{-}SIM}} = \mathcal{F}^{-1}\!\left\{ (1-\beta)\,\mathcal{F}\{\mathbf{x}_{\mathrm{LR\text{-}HOM}}\} \exp\!\left( -\frac{f^2}{2\rho^2} \right) + \beta\,\mathcal{F}\{\mathbf{x}_{\mathrm{HR\text{-}MAP}}\} \left( 1 - \exp\!\left( -\frac{f^2}{2\rho^2} \right) \right) \right\}, \qquad (11)$$
where $\mathcal{F}$ and $\mathcal{F}^{-1}$ denote the Fourier transform operator and its inverse, respectively, $f$ is the spatial frequency, $\rho$ is the standard deviation of the Gaussian filter, and $\beta$ is a positive weighting coefficient. The use of these variables and the application of Eq. (11) are described in more detail in Section 3.4.
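
The homodyne detection step of Eq. (10) and the spectral merging of Eq. (11) can be sketched as follows (NumPy). The default β = 0.8 mirrors the value discussed in Section 3.4; the normalized frequency grid, the handling of ρ, and the function names are our own assumptions.

```python
import numpy as np
from numpy.fft import fft2, ifft2, fftfreq

def homodyne(ys):
    """LR-HOM image of Eq. (10): modulus of the phase-weighted sum of the raw images."""
    K = len(ys)
    phases = np.exp(2j * np.pi * np.arange(1, K + 1) / K)
    return np.abs(np.tensordot(phases, np.asarray(ys), axes=1))

def spectral_merge(lr_hom, hr_map, rho, beta=0.8):
    """MAP-SIM image of Eq. (11): Gaussian low-pass on LR-HOM, complementary high-pass on HR-MAP."""
    fy = fftfreq(lr_hom.shape[0])
    fx = fftfreq(lr_hom.shape[1])
    f2 = np.add.outer(fy ** 2, fx ** 2)                  # squared radial frequency |f|^2
    lowpass = np.exp(-f2 / (2.0 * rho ** 2))
    merged = (1.0 - beta) * fft2(lr_hom) * lowpass \
             + beta * fft2(hr_map) * (1.0 - lowpass)
    return np.real(ifft2(merged))
```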

3. Methods

3.1 Microscope setup and acquisition

Our setup is based on an IX71 microscope equipped with UPLSAPO 100 × /1.40 NA and 60 × /1.35 NA oil immersion objectives (Olympus, Hamburg, Germany) [18], see Fig. 2. We used a NEO sCMOS camera (pixel size 6.5 μm). Focus was adjusted using a piezo-Z stage (resolution 1.5 nm, NanoScan-Z, Prior, Cambridge, UK). The desired illumination patterns were produced by a high speed ferroelectric liquid crystal on silicon (LCOS) microdisplay (SXGA-3DM, Forth Dimension Displays, Dalgety Bay, Scotland; 1280 × 1024 pixels, 13.62 µm pixel pitch). Similar LCOS microdisplays have been used previously in SIM [14,18], and in other fast optical sectioning systems such as programmable array microscopy (PAM) [30]. The display was illuminated by a home-built, three channel LED system based on high power LEDs (PT-54, Luminous Devices, Sunnyvale, California) with emission maxima at 460 nm, 525 nm, and 623 nm. The output of each LED was filtered with a band pass filter (450‒490 nm, 505‒555 nm, 633‒653 nm, resp., Chroma, Bellows Falls, Vermont), and the three wavelengths were combined with appropriate dichroic mirrors. The light was then vertically polarized with a linear polarizer (Edmund Optics, Barrington, NJ). We imaged the microdisplay into the microscope using a 180 mm focal length tube lens (U-TLU, Olympus) and polarizing beam splitter cube (Newport, Irvine, California). When using a 100 × objective, single microdisplay pixels are imaged into the sample with a nominal size of 136.2 nm, thus as diffraction-limited spots. Sample fluorescence was isolated with a dual band filter set appropriate for Cy3 and Cy5, or a single band set for GFP (Chroma).

Fig. 2 Structured illumination microscope: (a) the microscope setup, (b) examples of line grid illumination patterns. Top row shows a pattern sequence which creates homogeneous illumination. Bottom row shows several line grid patterns in different orientations. Blue are “on” pixels creating the illumination, gray are “off” pixels.

3.2. Illumination patterns

Most strategies in structured illumination microscopy assume that the set of illumination patterns required for image reconstruction consists of K equal shifts of the same pattern, such that the sum of all of the patterns results in homogeneous illumination. In our experiments, the illumination patterns created by the microdisplay consisted of a regular grid of lines. The lines were one microdisplay pixel thick (diffraction limited in the sample when using a 100 × objective) with a gap of several “off” pixels in between. The line grid was shifted by one pixel between each image acquisition to obtain a new illumination mask, see Fig. 2(b). Changing the spacing between the “on” pixels allows one to vary the spatial frequency of the pattern in the sample, which influences the signal to noise ratio of the result, imaging depth, and optical sectioning ability [18]. This spacing can be adjusted experimentally based on the sample; for example, when imaging deep into the sample, a lower pattern frequency may be required. In most experiments, we used pattern sequences with several orientations of the line grid pattern (0°, 90°, 45° and 135°) in order to achieve isotropic resolution improvement. Note that due to the square shape of the microdisplay pixels, more patterns are required in the diagonal direction to equally cover the whole image while keeping the spatial frequency of the pattern approximately the same.
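
The horizontal and vertical line grid sequences described above can be sketched as follows (NumPy). The diagonal orientations, which need more shifts to cover the image evenly, are omitted, and all names and parameters are illustrative.

```python
import numpy as np

def line_grid_patterns(shape, period, orientation="horizontal"):
    """One-pixel-wide line grids shifted by one pixel between acquisitions.

    The `period` patterns sum to a constant, i.e. homogeneous illumination.
    """
    rows, cols = shape
    n = rows if orientation == "horizontal" else cols
    idx = np.arange(n)
    patterns = []
    for shift in range(period):
        line = ((idx + shift) % period == 0).astype(float)
        if orientation == "horizontal":
            patterns.append(np.tile(line[:, None], (1, cols)))   # lines along image rows
        else:
            patterns.append(np.tile(line[None, :], (rows, 1)))   # lines along image columns
    return patterns
```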

3.3 Samples

HepG2 cells expressing the labeled histone H4-dendra2 [31] were maintained in DMEM supplemented with 10% FCS, 100 U/ml penicillin, and 100 U/ml streptomycin (all from Invitrogen, Carlsbad, CA, USA) at 37 °C, 5% CO2, and 100% humidity. Mowiol containing 1,4-diazabicyclo[2.2.2]octane (DABCO) was from Fluka (St. Louis, Missouri). Cells were grown on high precision #1.5 coverslips (Zeiss, Jena, Germany). Before imaging, cells were first washed with PBS, then fixed with 2% paraformaldehyde for 15 minutes at 4 °C. For imaging of actin, we permeabilized fixed cells with 0.1% Triton X-100 for 15 minutes at 4 °C, then labeled the cells with 2 nM Atto565-phalloidin (Atto-Tec, Siegen, Germany) for 30 minutes at room temperature. We then mounted the coverslips in Mowiol and sealed them onto clean slides with clear nail polish.

To illustrate the versatility of MAP-SIM, we imaged Drosophila salivary gland chromosomes (type 30-9066, Carolina Biological, Burlington, North Carolina), and pollen grains (type 30-4264, Carolina Biological). We also imaged mitochondria in bovine pulmonary artery endothelial (BPAE) cells labeled with MitoTracker Red CMXRos (FluoCells prepared slide #1, Invitrogen). The PSF of the microscope was measured using 100 nm TetraSpeck beads (Invitrogen).

3.4 Image processing

The input data were first normalized to the range [0, 1] according to their bit depth. To obtain the HR-MAP image, we minimized Eq. (7) using a gradient descent algorithm. The speed of convergence is strongly influenced by the iteration step size $\alpha^{(n)}$. With a fixed value ($\alpha = 0.5$), the algorithm converges in approximately 14 iterations, see Fig. 3(a). In order to speed up convergence, we used the Barzilai-Borwein method [32], a variation of standard gradient descent in which the step size is adapted in every iteration based on the changes of the image estimate and of the gradient of the cost function between consecutive iterations. This accelerates the convergence rate substantially. A good initial guess $\mathbf{x}^{(0)}$ of the HR-MAP image is also important for fast convergence. For this initial guess we used the sum of the acquired SIM images, which corresponds to a widefield image. Regularization of the problem in Eq. (7) is controlled by the small positive constant $\lambda$, which can be adjusted according to the noise conditions: if the noise increases, $\lambda$ should also increase. The value $\lambda = 0.001$ worked well for all tested samples. Setting the regularization parameter to a small value prevents oversmoothing of the image and potential loss of high-frequency information. We found that a stopping criterion of $\varepsilon = 0.01$ was a reasonable value. With these settings, good results are obtained in about four iterations, see Figs. 3(a)-3(b). The frequency spectrum of the estimated image was then apodized with a cosine bell function and transformed back to real space to obtain the HR-MAP image. Note that the convolution step $\mathbf{H}\mathbf{x}$ in Eq. (7) was performed in the frequency domain in order to achieve fast execution and memory efficiency.
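
The final apodization step could look like the sketch below; the paper does not specify the exact window parameterization, so the radius, its default value, and the function name are assumptions for illustration only.

```python
import numpy as np

def cosine_bell_apodize(img, f_max=0.5):
    """Multiply the image spectrum by a raised-cosine (cosine bell) window to
    suppress ringing from sharp spectral edges; f_max is the normalized frequency
    (0.5 = Nyquist) at which the window reaches zero (assumed value)."""
    fy = np.fft.fftfreq(img.shape[0])
    fx = np.fft.fftfreq(img.shape[1])
    f = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
    window = np.where(f <= f_max, 0.5 * (1.0 + np.cos(np.pi * f / f_max)), 0.0)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * window))
```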

Fig. 3 Choice of the iteration step size (coefficient alpha) and its influence on the convergence of the algorithm. (a) Cost function vs. number of iterations. Fixed step size (red) and step size given by the Barzilai-Borwein method (blue). (b) Region of interest from a test sample (phalloidin-labeled actin in a HepG2 cell). Shown are the first 4 iterations of the algorithm, where the step size was determined using the Barzilai-Borwein method.

The optically sectioned LR-HOM image and the super-resolution HR-MAP image are merged in the frequency domain by combining their Fourier spectra using Eq. (11), see the flowchart in Fig. 4. The balance between the LR-HOM and HR-MAP images is controlled by the coefficient $\beta$. To maximally exploit image details, it is preferable to put more emphasis on the HR-MAP image. We experimentally determined that $\beta = 0.8$ provides good results across the range of samples we imaged. The standard deviation of the Gaussian weighting function is related to the normalized frequency $f_c$, see Fig. 1(c).

Fig. 4 Flowchart of the MAP-SIM algorithm.

4. Results

4.1 Spatial resolution measurements

Spatial resolution was determined by averaging measurements from fifty individual 100 nm fluorescent beads. We used a 100 × /1.40 NA oil immersion objective and 460 nm LED excitation (emission 500 - 550 nm). Fourteen images were acquired at each z-plane (pattern orientation: 0°, 90°; number of shifts: 3; pattern period imaged in the sample: 409 nm; and orientation: 45°, 135°; shifts: 4; period: 385 nm). A region of interest (ROI) around every bead position (19 × 19 pixels) was extracted from both the widefield and MAP-SIM images. In order to align the position of the beads in each ROI, we registered the ROIs with sub-pixel accuracy using standard normalized cross-correlation methods. Intensity values were then fit with a Gaussian function and the full width at half maximum (FWHM) was determined in the axial and lateral directions. Figure 5 shows the resulting averaged FWHM values and PSF cross-sections.
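
The FWHM estimation from a bead profile can be sketched with a standard 1D Gaussian fit (SciPy); the fitting model, initial guesses, and function name below are ours, not taken from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def fwhm_from_profile(positions, intensities):
    """Fit a 1D Gaussian to a bead intensity cross-section and return its FWHM."""
    positions = np.asarray(positions, dtype=float)
    intensities = np.asarray(intensities, dtype=float)

    def gaussian(x, amp, mu, sigma, offset):
        return amp * np.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2)) + offset

    p0 = [intensities.max() - intensities.min(),             # amplitude guess
          positions[np.argmax(intensities)],                 # center guess
          (positions[-1] - positions[0]) / 10.0,             # width guess
          intensities.min()]                                 # offset guess
    popt, _ = curve_fit(gaussian, positions, intensities, p0=p0)
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(popt[2])   # FWHM = 2*sqrt(2 ln 2)*sigma
```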

Fig. 5 Measurements of the spatial resolution on a sample of fluorescent beads. Cross-sections of the PSF are obtained by averaging measurements over 50 beads along (a) lateral and (b) axial directions.

4.2 2D MAP-SIM

To demonstrate the lateral resolution improvement of MAP-SIM in a thin sample, we imaged a Drosophila salivary gland chromosome preparation. The images were acquired using 623 nm LED illumination and a 100 × /1.40 NA oil immersion objective. Forty-eight images were acquired at each z-plane (orientation: 0°, 90°; shifts: 10; period: 1.36 μm; and orientation: 45°, 135°; shifts: 14; period: 1.35 μm). The chromosome sample is quite thin (~1.5 µm), producing little out of focus light. Figure 6 demonstrates how MAP-SIM performs compared to the widefield and square-law methods in terms of contrast and lateral resolution. Plotting the intensity profile across the widefield, square-law and MAP-SIM images revealed many more fine details in MAP-SIM, see Fig. 6(g). We also plotted the normalized power spectral density vs. reduced spatial frequency, see Fig. 6(h). The reduced spatial frequency was normalized to the interval [0, 1] according to the maximum spatial frequency in the MAP-SIM image.

Fig. 6 Comparison of different imaging methods. Drosophila salivary gland chromosome sample. (a, d) Widefield image and region of interest. (b, e) Square-law method and ROI. (c, f) MAP-SIM and ROI. (g) Line profile of the images, indicated by the white line in (a). (h) Plot of normalized power spectral density vs. reduced spatial frequency for widefield, square law, and MAP-SIM approaches.

4.3 3D MAP-SIM

To demonstrate the optical sectioning characteristics of MAP-SIM, we imaged a relatively thick biological sample, a fluorescent pollen grain about 50 μm thick, see Fig. 7. The images were acquired using a 60 × /1.35 NA oil immersion objective. In this case a SIM pattern with a single orientation was used (orientation: 0°; number of shifts: 10; period: 2.27 μm). Ninety planes along the z-axis were scanned with a spacing of 500 nm. Lateral and axial cross sections of the pollen grain image in Fig. 7(c) reveal that MAP-SIM provides increased lateral and axial resolution compared to the widefield image. We also imaged Atto-532 phalloidin labeled actin in a HepG2 cell using the same illumination patterns as the pollen grain sample, see Fig. 8. Depth color coding was applied to the image using the isolum color map [33]. Maximum intensity projections of the color coded 3D MAP-SIM images are shown in Fig. 8(a)-8(c).

Fig. 7 Image of an autofluorescent pollen grain acquired using a 60 × /1.35 NA oil immersion objective. (a) Widefield image. (b) Square-law method. (c) MAP-SIM. Also shown are XZ projections taken along the pixel row indicated by the white line.

Fig. 8 Atto-532 phalloidin labeled actin in a HepG2 cell. Maximum intensity projections of the 3D stack. (a) Widefield, (b) square-law method, (c) MAP-SIM. Thickness of the sample is 7 µm. The look up table isolum [33] was used for depth color coding.

In our microscope set-up, the illumination pattern contrast is lower at high spatial frequencies compared to SR-SIM using coherent illumination. Despite this, we compared MAP-SIM and SR-SIM processing methods on images of HepG2 cells expressing the labeled histone H4-dendra2. In this sample we also labeled actin using Atto-532 phalloidin. The images were acquired using a 460 nm LED (H4-dendra2) and a 525 nm LED (Atto-532 phalloidin) with a 100 × /1.40 NA oil immersion objective. Twenty-four patterned images were acquired at each z-plane (orientation: 0°, 90°; shifts: 5; period: 681 nm; and orientation: 45°, 135°; shifts: 7; period: 674 nm). Figure 9(a)-9(c) shows a maximum intensity projection of 23 Z-planes. Figure 9(f)-9(g) shows results for a single optical section. To process the data using SR-SIM methods, we followed the approach of Gustafsson et al. [13]. We located peaks in the Fourier spectrum using a spatial calibration method derived from our previous work [18].

Fig. 9 Comparison between SR-SIM processing and MAP-SIM. Maximum intensity projection of 23 Z-planes in each of two color channels (red, green) for (a) widefield, (b) SR-SIM, (c) MAP-SIM. (d, e) Regions of interest indicated in (b, c). (f, g) Single optical section in the red channel.

4.4 SNR analysis and acquisition times

When using typical SR-SIM processing methods [13], high noise levels in the raw data can lead to inaccuracies when determining the shifts of the spectral components and thereby degrade the final super-resolution image. Thomas et al. examined the performance of SIM image reconstruction methods at low signal levels [34]. They showed an image reconstruction for a sample where the SNR of an equivalent widefield image was estimated as 12 dB.

We evaluated the performance of MAP-SIM under various noise conditions. Using a 100 × /1.40 NA oil immersion objective, images of MitoTracker-labeled mitochondria in BPAE cells were acquired with 525 nm LED excitation and acquisition times (for one SIM pattern position) ranging from 10 ms to 400 ms. Under these conditions the SNR of equivalent widefield images ranged from 2.7 dB to 21.3 dB. We found that MAP-SIM reconstruction was successful down to an SNR of about 5.9 dB. The SNR was measured in widefield images using manually selected signal and background regions; the results are shown in Fig. 10.
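
The paper does not give the exact SNR formula; one common convention for such a measurement from signal and background regions, sketched here purely as an assumption, is:

```python
import numpy as np

def snr_db(image, signal_mask, background_mask):
    """Estimate SNR in dB from manually selected signal and background regions
    (amplitude convention, 20*log10; definitions here are assumptions)."""
    signal = image[signal_mask].mean() - image[background_mask].mean()
    noise = image[background_mask].std()
    return 20.0 * np.log10(signal / noise)
```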

Fig. 10 Performance of the proposed MAP-SIM method under various noise conditions in comparison to the widefield image. Data acquisition times are 400 ms, 50 ms and 25 ms, respectively. Images are of mitochondria labeled with MitoTracker in BPAE cells.

5. Discussion

There are several advantages to the use of an incoherent illumination approach such as the one presented here. One is that we do not require a pupil plane mask to block the unwanted diffraction orders that are generated when using coherent illumination based on two-beam or three-beam laser interference. Such masks can be tricky to implement because different wavelength lasers are focused to different locations in the (reconstructed) pupil plane. This then requires numerous holes in the mask which must be precisely positioned. On the other hand, the contrast of our patterns is lower when compared to coherent interference patterns and we achieved a slightly lower resolution improvement than that typically reported in SR-SIM with coherent illumination.

The LCOS microdisplay used here can be configured with a variety of timing schemes which are supplied with the device. With the timing program that we used, the microdisplay can display an illumination pattern and switch to the next pattern in the sequence in 1.14 ms. Given a bright enough light source, fast enough camera, and appropriate sample, acquisition of raw SIM images at rates exceeding 800 Hz would therefore be possible. However, specifying the fastest possible acquisition rate, as is sometimes reported in SIM, is rather meaningless without consideration of the illumination power density, microscope objective, nature of the sample labeling, and other factors. The SNR analysis shown in Fig. 10 thus reflects an attempt to determine the relevant parameters based on measured quantities.

The MATLAB implementation of the MAP-SIM algorithm has not yet been optimized for speed. The reconstruction of the 765 × 735 pixel image shown in Fig. 6, which uses 48 patterned illumination images, took about 15 seconds on a conventional PC (Intel Core i7, 2.1 GHz, 8 GB RAM). We attribute the fast processing speed to the frequency domain convolution applied when solving Eq. (7), and to the Barzilai-Borwein method, which ensures fast convergence. Processing each 2D plane separately also reduces the required CPU time and lends itself to parallel processing of individual planes, which would significantly speed up reconstruction of 3D samples.

6. Conclusion

We introduced a fast and efficient MAP-SIM algorithm, which is suitable for processing data acquired by both optical sectioning and super-resolution structured illumination microscopy. The proposed algorithm creates high quality super-resolution images. The measured resolution was (144 ± 7) nm in the lateral direction and (299 ± 50) nm axially. The reconstruction of super-resolution images was successful even in the presence of high noise levels, where the SNR of the corresponding widefield images was about 5.9 dB. Image acquisition and data processing are both very fast, revealing an interesting potential for live cell imaging. The microscope setup uses a relatively inexpensive microdisplay with no moving parts together with low cost LED illumination and is a simple add-on to conventional widefield fluorescence microscopes. MAP-SIM processing should also prove useful for other illumination strategies such as TIRF-SIM or emerging combinations of SIM and light sheet microscopy.

Acknowledgments

T.L. thanks Prof. Theo Lasser for his kind help and valuable advice. G.H. thanks Lubomír Kováčik for useful discussions. This work was supported by the Czech Science Foundation (P102/10/1320, P302/12/G157, 14-15272P, P205/12/P392), by Charles University in Prague (PRVOUK P27/LF1/1 and UNCE 204022), by OPPK CZ.2.16/3.1.00/24010 and OPVK CZ.1.07/2.3.00/30.0030, by COST CZ LD12018, by Czech Technical University in Prague (SGS14/148/OHK3/2T/13), and by the Biotechnology and Biomedicine Center of the Academy of Sciences and Charles University in Vestec. T.L. acknowledges a SCIEX scholarship (project code 13.183).

References and links

1. S. W. Hell and J. Wichmann, “Breaking the diffraction resolution limit by stimulated emission: stimulated-emission-depletion fluorescence microscopy,” Opt. Lett. 19(11), 780–782 (1994). [CrossRef]   [PubMed]  

2. E. Betzig, G. H. Patterson, R. Sougrat, O. W. Lindwasser, S. Olenych, J. S. Bonifacino, M. W. Davidson, J. Lippincott-Schwartz, and H. F. Hess, “Imaging intracellular fluorescent proteins at nanometer resolution,” Science 313(5793), 1642–1645 (2006). [CrossRef]   [PubMed]  

3. S. T. Hess, T. P. K. Girirajan, and M. D. Mason, “Ultra-high resolution imaging by fluorescence photoactivation localization microscopy,” Biophys. J. 91(11), 4258–4272 (2006). [CrossRef]   [PubMed]  

4. M. J. Rust, M. Bates, and X. Zhuang, “Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM),” Nat. Methods 3(10), 793–796 (2006). [CrossRef]   [PubMed]  

5. T. Dertinger, R. Colyer, G. Iyer, S. Weiss, and J. Enderlein, “Fast, background-free, 3D super-resolution optical fluctuation imaging (SOFI),” Proc. Natl. Acad. Sci. U.S.A. 106(52), 22287–22292 (2009). [CrossRef]   [PubMed]  

6. S. Geissbuehler, C. Dellagiacoma, and T. Lasser, “Comparison between SOFI and STORM,” Biomed. Opt. Express 2(3), 408–420 (2011). [CrossRef]   [PubMed]  

7. S. Geissbuehler, N. L. Bocchio, C. Dellagiacoma, C. Berclaz, M. Leutenegger, and T. Lasser, “Mapping molecular statistics with balanced super-resolution optical fluctuation imaging (bSOFI),” Opt. Nanoscopy 1(1), 4 (2012). [CrossRef]  

8. R. Heintzmann and C. Cremer, “Laterally modulated excitation microscopy: improvement of resolution by using a diffraction grating,” Proc. SPIE 3568, 185–196 (1999). [CrossRef]  

9. M. G. L. Gustafsson, “Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy,” J. Microsc. 198(2), 82–87 (2000). [CrossRef]   [PubMed]  

10. M. G. L. Gustafsson, “Nonlinear structured-illumination microscopy: Wide-field fluorescence imaging with theoretically unlimited resolution,” Proc. Natl. Acad. Sci. U.S.A. 102(37), 13081–13086 (2005). [CrossRef]   [PubMed]  

11. P. Kner, B. B. Chhun, E. R. Griffis, L. Winoto, and M. G. L. Gustafsson, “Super-resolution video microscopy of live cells by structured illumination,” Nat. Methods 6(5), 339–342 (2009). [CrossRef]   [PubMed]  

12. L. M. Hirvonen, K. Wicker, O. Mandula, and R. Heintzmann, “Structured illumination microscopy of a living cell,” Eur. Biophys. J. 38(6), 807–812 (2009). [CrossRef]   [PubMed]  

13. M. G. L. Gustafsson, L. Shao, P. M. Carlton, C. J. R. Wang, I. N. Golubovskaya, W. Z. Cande, D. A. Agard, and J. W. Sedat, “Three-dimensional resolution doubling in wide-field fluorescence microscopy by structured illumination,” Biophys. J. 94(12), 4957–4970 (2008). [CrossRef]   [PubMed]  

14. L. Shao, P. Kner, E. H. Rego, and M. G. L. Gustafsson, “Super-resolution 3D microscopy of live whole cells using structured illumination,” Nat. Methods 8(12), 1044–1046 (2011). [CrossRef]   [PubMed]  

15. M. A. A. Neil, R. Juškaitis, and T. Wilson, “Method of obtaining optical sectioning by using structured light in a conventional microscope,” Opt. Lett. 22(24), 1905–1907 (1997). [CrossRef]   [PubMed]  

16. R. Heintzmann, “Structured illumination methods,” in Handbook of Biological Confocal Microscopy, J. B. Pawley, ed., 3rd ed. (Springer, 2006), pp. 265–279.

17. F. Chasles, B. Dubertret, and A. C. Boccara, “Optimization and characterization of a structured illumination microscope,” Opt. Express 15(24), 16130–16140 (2007). [CrossRef]   [PubMed]  

18. P. Křížek, I. Raška, and G. M. Hagen, “Flexible structured illumination microscope with a programmable illumination array,” Opt. Express 20(22), 24585–24599 (2012). [CrossRef]   [PubMed]  

19. K. O’Holleran and M. Shaw, “Optimized approaches for optical sectioning and resolution enhancement in 2D structured illumination microscopy,” Biomed. Opt. Express 5(8), 2580–2590 (2014). [CrossRef]   [PubMed]  

20. E. Mudry, K. Belkebir, J. Girard, J. Savatier, E. Le Moal, C. Nicoletti, M. Allain, and A. Sentenac, “Structured illumination microscopy using unknown speckle patterns,” Nat. Photonics 6(5), 312–315 (2012). [CrossRef]  

21. F. Orieux, E. Sepulveda, V. Loriette, B. Dubertret, and J.-C. Olivo-Marin, “Bayesian estimation for optimized structured illumination microscopy,” IEEE Trans. Image Process. 21(2), 601–614 (2012). [CrossRef]   [PubMed]  

22. T. Lukeš, G. M. Hagen, P. Křížek, Z. Švindrych, K. Fliegel, and M. Klíma, “Comparison of image reconstruction methods for structured illumination microscopy,” Proc. SPIE 9129, Biophotonics: Photonic Solutions for Better Health Care IV, 91293J (2014).

23. G. M. P. Van Kempen, L. J. Van Vliet, P. J. Verveer, and H. T. M. Van Der Voort, “A quantitative comparison of image restoration methods for confocal microscopy,” J. Microsc. 185(3), 354–365 (1997). [CrossRef]  

24. P. Sarder and A. Nehorai, “Deconvolution methods for 3-D fluorescence microscopy images,” IEEE Signal Process. Mag. 23(3), 32–45 (2006). [CrossRef]  

25. P. J. Verveer and T. M. Jovin, “Efficient superresolution restoration algorithms using maximum a posteriori estimations with application to fluorescence microscopy,” J. Opt. Soc. Am. A 14(8), 1696–1706 (1997). [CrossRef]  

26. P. J. Verveer, M. J. Gemkow, and T. M. Jovin, “A comparison of image restoration approaches applied to three-dimensional confocal and wide-field fluorescence microscopy,” J. Microsc. 193(1), 50–61 (1999). [CrossRef]   [PubMed]  

27. P. Milanfar, ed., Super-Resolution Imaging (CRC Press, 2011), p. 490.

28. S. Chaudhuri, Super-Resolution Imaging (Kluwer Academic Publishers, 2000), p. 279.

29. J. W. Goodman, Introduction to Fourier Optics, 2nd ed. (McGraw-Hill, 1996).

30. G. M. Hagen, W. Caarls, K. A. Lidke, A. H. B. De Vries, C. Fritsch, B. G. Barisas, D. J. Arndt-Jovin, and T. M. Jovin, “Fluorescence recovery after photobleaching and photoconversion in multiple arbitrary regions of interest using a programmable array microscope,” Microsc. Res. Tech. 72(6), 431–440 (2009). [CrossRef]   [PubMed]  

31. Z. Cvačková, M. Mašata, D. Stanĕk, H. Fidlerová, and I. Raška, “Chromatin position in human HepG2 cells: although being non-random, significantly changed in daughter cells,” J. Struct. Biol. 165(2), 107–117 (2009). [CrossRef]   [PubMed]  

32. J. Barzilai and J. M. Borwein, “Two-point step size gradient methods,” IMA J. Numer. Anal. 8(1), 141–148 (1988). [CrossRef]  

33. M. Geissbuehler and T. Lasser, “How to display data by color schemes compatible with red-green color perception deficiencies,” Opt. Express 21(8), 9862–9874 (2013). [CrossRef]   [PubMed]  

34. B. Thomas, M. Momany, and P. Kner, “Optical sectioning structured illumination microscopy with enhanced sensitivity,” J. Opt. 15(9), 094004 (2013). [CrossRef]  
