Structured illumination microscopy with unknown patterns and a statistical prior

Open Access

Abstract

Structured illumination microscopy (SIM) improves resolution by down-modulating high-frequency information of an object to fit within the passband of the optical system. Generally, the reconstruction process requires prior knowledge of the illumination patterns, which implies a well-calibrated and aberration-free system. Here, we propose a new algorithmic self-calibration strategy for SIM that does not need to know the exact patterns a priori, but only their covariance. The algorithm, termed PE-SIMS, includes a pattern-estimation (PE) step requiring the uniformity of the sum of the illumination patterns and a SIM reconstruction procedure using a statistical prior (SIMS). Additionally, we perform a pixel reassignment process (SIMS-PR) to enhance the reconstruction quality. We achieve 2× better resolution than a conventional widefield microscope, while remaining insensitive to aberration-induced pattern distortion and robust against parameter tuning.

© 2017 Optical Society of America

1. Introduction

The Abbe diffraction limit was considered the fundamental limit on the spatial resolution of an optical microscope for more than a hundred years. In the last decade, novel techniques have circumvented this limit in order to achieve super-resolution [1–7]. Structured illumination microscopy (SIM) [1–4], for example, uses illumination by multiple structured patterns to down-modulate high-spatial-frequency information of the object into the low-frequency region, where it can pass through the bandwidth of the microscope's optical transfer function (OTF) and be captured by the sensor. The reconstruction algorithm for SIM combines a demodulation process, which returns the high-spatial-frequency information to its original position in Fourier space, with synthetic aperture, which extends the support of the effective OTF. Various structured patterns have been used to realize SIM: periodic gratings [1–4], a single focal spot (confocal microscopy) [8,9], multifocal spots [10–13] and random speckles [13–22]. When the illumination patterns themselves are diffraction-limited, linear SIM is restricted to 2× the bandwidth of a widefield microscope [4], allowing up to ~2.4× resolution enhancement (metrics explained in Sec. 3).

In practice, structured illumination systems are sensitive to aberrations and experimental errors. To avoid reconstruction artifacts that degrade resolution, the patterns that are projected onto the sample must be known accurately. Periodic grating patterns can be parameterized by their contrast, period and phase angle, which may be estimated in post-processing [23–26]. For multifocal patterns, the location of each focal spot is required [10]. For random speckle patterns, the relative shifts of the patterns are needed [18, 19]. Even with careful calibration and high-quality optics, distortions caused by the sample may degrade the result.

To alleviate some of the experimental challenges, blind SIM was proposed, enabling SIM reconstruction with almost no prior knowledge of the patterns [16,17,21,22,27,28]. The only assumption is that the sum of all illumination patterns is uniform. Optimization-based algorithms have been adopted, including iterative least squares with positivity and equality constraints [16, 21, 27], joint support recovery [17] and $\ell_1$ sparsity constraints [22]. However, these algorithms are sensitive to parameter tuning and may show low contrast when reconstructing high spatial frequencies [16]. Another algorithm, speckle super-resolution optical fluctuation imaging (S-SOFI), realizes SOFI [29] by first projecting random speckle patterns onto the object and then using the statistical properties of the speckle patterns as a prior to reconstruct a high-resolution image [20]. S-SOFI is experimentally simple and robust; however, it only achieves a 1.6× resolution enhancement, instead of the 2.4× of conventional SIM techniques (as compared to a widefield microscope).

In this paper, we propose a new reconstruction algorithm for SIM that is applicable to any illumination patterns. Our method, termed pattern estimation structured illumination microscopy with a statistical prior (PE-SIMS), is as robust and insensitive to parameter tuning as S-SOFI, and achieves better resolution enhancement (up to 2×). Like blind SIM, the patterns need not be known (except for a requirement on the covariance of the patterns). We demonstrate our method using simulated and experimental results with both speckle and multifocal patterns. We discuss pattern design strategies to reduce the amount of data required and demonstrate an extension that uses pixel reassignment [30–34] to improve the reconstruction quality.

2. Theory and method

Our algorithm takes in a SIM dataset consisting of multiple images captured under different structured illumination patterns (e.g. random speckles, multifocal spots). We reconstruct the super-resolved image in two parts. The first part is an iterative optimization procedure for estimating each illumination pattern based on an approximated object. The second part reconstructs the high-resolution image using the estimated patterns and the measured images, along with a statistical prior. Before introducing these two parts, we start by defining the SIM forward model.

2.1. Forward model of structured illumination microscopy

A representative experimental setup is shown in Fig. 1. A digital micromirror device (DMD) spatial light modulator (SLM) is used to project patterns onto the object through an objective lens. The measured intensity for the $\ell$-th captured image is the product of the object's fluorescence distribution $o(\mathbf{r})$ with the illumination pattern $p_\ell(\mathbf{r})$, where $\mathbf{r} = (x, y)$ denotes the lateral position coordinates. This product is then convolved with the system's incoherent detection-side point spread function (PSF), $h_{\text{det}}(\mathbf{r})$:

$$I_\ell(\mathbf{r}) = [o(\mathbf{r})\,p_\ell(\mathbf{r})] \otimes h_{\text{det}}(\mathbf{r}) = \int o(\mathbf{r}')\,p_\ell(\mathbf{r}')\,h_{\text{det}}(\mathbf{r}-\mathbf{r}')\,d^2\mathbf{r}'. \tag{1}$$
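To make the forward model concrete, here is a minimal NumPy sketch of Eq. (1); it is not from the paper, and the Gaussian PSF model and function names are illustrative assumptions:

import numpy as np

def gaussian_psf(n, fwhm):
    # Centered 2D Gaussian PSF with the given FWHM (in pixels), unit sum.
    y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
    sigma = fwhm / 2.355
    h = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return h / h.sum()

def sim_forward(obj, pattern, h_det):
    # Eq. (1): blur the illuminated object o(r)p(r) with the detection PSF,
    # implemented as an FFT-based (circular) convolution.
    H_det = np.fft.fft2(np.fft.ifftshift(h_det))  # detection OTF
    return np.real(np.fft.ifft2(np.fft.fft2(obj * pattern) * H_det))

A stack of SIM measurements is then simulated by calling sim_forward once per illumination pattern.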

Fig. 1 Example experimental setup for structured illumination microscopy (SIM) using a digital micromirror device (DMD) to capture low-resolution images of the object modulated by different illumination patterns. Our PE-SIMS algorithm reconstructs both the super-resolved image and the unknown arbitrary illumination patterns.

2.2. Part 1: pattern estimation

The first part of our inverse algorithm is to estimate the illumination patterns. To do so, we start with a low-resolution approximation of the object. Then, we use this object and our measured images to iteratively estimate the patterns (see Fig. 2).

Fig. 2 The first part of our algorithm, Pattern Estimation (PE), iteratively estimates the illumination patterns from an approximated object given by the deconvolved widefield image.

Part 1a: approximate widefield image

If we already knew the object o(r), it would be straightforward to estimate the pattern for each measured image by dividing out the object from each of the measurements. However, the object o(r) is unknown. Hence, we start by making a rough estimate of the object. We first take the mean of all the measured images:

$$I_{\text{avg}}(\mathbf{r}) = \langle I_\ell(\mathbf{r})\rangle_\ell = [o(\mathbf{r})\,\langle p_\ell(\mathbf{r})\rangle_\ell] \otimes h_{\text{det}}(\mathbf{r}) \approx p_0\, o(\mathbf{r}) \otimes h_{\text{det}}(\mathbf{r}), \tag{2}$$
where $\langle\cdot\rangle_\ell$ is the mean operation with respect to $\ell$, and $p_0 = \langle p_\ell(\mathbf{r})\rangle_\ell$ is approximately a constant over the entire field of view. The resulting image is equivalent to the low-resolution widefield image if the sum of all illumination patterns is approximately uniform.

Part 1b: deconvolve widefield image

Since the widefield image represents the convolution of the object with its PSF, we can perform a deconvolution operation to estimate the low-resolution object:

$$o_{\text{est}}(\mathbf{r}) = \mathcal{F}^{-1}\left\{\frac{\tilde{I}_{\text{avg}}(\mathbf{u})\,\tilde{h}^*_{\text{det}}(\mathbf{u})}{|\tilde{h}_{\text{det}}(\mathbf{u})|^2+\beta}\right\}, \tag{3}$$
where $\mathcal{F}$ and $\mathcal{F}^{-1}$ denote the Fourier transform and its inverse, respectively, $\tilde{\ }$ denotes the Fourier transform of a function, $\mathbf{u} = (u_x, u_y)$ are the lateral spatial frequency coordinates and $\beta$ is a small Tikhonov regularization constant. Note that this object estimate has diffraction-limited resolution and will be used only for estimating the illumination patterns.
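A minimal NumPy sketch of this Tikhonov-regularized deconvolution (the default value of beta and the centered-PSF convention are assumptions for illustration):

import numpy as np

def tikhonov_deconv(img, h, beta=1e-3):
    # Eq. (3): Fourier-domain division by the OTF, damped by beta to
    # avoid amplifying noise where the OTF is small.
    H = np.fft.fft2(np.fft.ifftshift(h))  # OTF of the centered PSF h
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.conj(H) /
                                (np.abs(H)**2 + beta)))

The same routine is reused in Part 2c, with the effective SIMS transfer function in place of the detection OTF.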

Part 1c: pattern estimation

We then use the low-resolution object estimate $o_{\text{est}}(\mathbf{r})$ to recover each of the illumination patterns. Since each image is simply the product of the illumination and object, we could divide each image by the estimated object to get the pattern. However, we instead solve the problem as an optimization procedure in order to impose the correct Fourier support constraint and avoid reconstruction artifacts. The $\ell$-th pattern estimate is the solution to the following problem:

$$\begin{aligned}
\underset{p_\ell}{\text{minimize}}\quad f(p_\ell) &= f_{\text{diff}}(p_\ell) + \mathbb{I}_C(p_\ell) = \sum_{\mathbf{r}}\left|I_\ell(\mathbf{r}) - [o_{\text{est}}(\mathbf{r})\,p_\ell(\mathbf{r})] \otimes h_{\text{det}}(\mathbf{r})\right|^2 + \mathbb{I}_C(p_\ell),\\
\text{where}\quad \mathbb{I}_C(p_\ell) &= \begin{cases}0, & p_\ell \in C\\ +\infty, & p_\ell \notin C\end{cases},\qquad
C = \left\{p_\ell(\mathbf{r})\,\middle|\,\tilde{p}_\ell(\mathbf{u}) = 0,\ |\mathbf{u}| > \frac{2NA}{\lambda_{\text{illu}}}\right\}, \tag{4}
\end{aligned}$$
where $\lambda_{\text{illu}}$ is the wavelength of the excitation light. The first term of the cost function, $f_{\text{diff}}(p_\ell)$, in Eq. (4) is the least-squares error (residual) between the measured intensity and the predicted intensity based on our current estimate. The second term enforces a frequency support constraint on the illumination pattern via an indicator function $\mathbb{I}_C$. This is important for reducing artifacts in the pattern estimation, because a plain division of the measured image by the estimated object would create errors outside of this frequency support. In our epi-illumination geometry, the constraint is that the frequency content of each illumination pattern be confined within the OTF defined by the objective's NA.

We implement a proximal gradient descent algorithm [35], summarized in Subroutine 1. Proximal gradient descent is designed to solve convex optimization problems like ours that have two cost function terms: one being a differentiable cost function term (e.g. the residual) and the other being a constraint or regularization term (usually nondifferentiable). When the constraint is defined by an indicator function, as in Eq. (4), the method is also known as a projected gradient method.

Subroutine 1. Pattern Estimation

To implement this, we first compute the gradient of the differentiable cost function term with respect to $p_\ell(\mathbf{r})$:

$$g^{(k)}(\mathbf{r}) = \frac{\partial f_{\text{diff}}(p_\ell^{(k)})}{\partial p_\ell} = -2\,o_{\text{est}}(\mathbf{r})\left[h_{\text{det}}(-\mathbf{r}) \otimes \left(I_\ell(\mathbf{r}) - [o_{\text{est}}(\mathbf{r})\,p_\ell^{(k)}(\mathbf{r})] \otimes h_{\text{det}}(\mathbf{r})\right)\right], \tag{5}$$
where k denotes evaluation of the gradient using the pattern at the k-th iteration.

We define the projection operator $\mathcal{P}_C$ to force the information outside of the OTF to be zero at each iteration. To reduce high-frequency artifacts, the following soft-edge filter is used:

$$\mathcal{P}_C(y) = \mathcal{F}^{-1}\left\{\mathcal{F}\{y\}\,\frac{|\tilde{h}_{\text{illu}}(\mathbf{u})|^2}{|\tilde{h}_{\text{illu}}(\mathbf{u})|^2+\delta}\right\}, \tag{6}$$
where $h_{\text{illu}}(\mathbf{r})$ is the system's illumination-side PSF, and $\delta$ determines the amount of high-frequency information that is suppressed in the pattern estimation step. We repeat this process of updates and projections until convergence (typically ~50 iterations to estimate each pattern).

The convergence speed for proximal gradient descent is on the order of O(1/K) [35], indicating that the residual between the current and optimal cost functions is inversely proportional to the number of iterations K. To accelerate convergence, one extra step is conducted in Subroutine 1 to include the information of the previous estimate [36, 37]. The convergence rate for this accelerated proximal gradient method, O(1/K2) [37], is significantly faster than the normal proximal gradient method.
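The full pattern-estimation loop can be sketched in a few lines of NumPy. This is a hedged reimplementation of Subroutine 1 under our reading of Eqs. (4)–(6), not the paper's code; the step-size heuristic and the initialization are our own assumptions:

import numpy as np

def estimate_pattern(I_l, o_est, h_det, h_illu, n_iter=50, delta=1e-2):
    # Accelerated (Nesterov/FISTA-style) projected gradient descent for the
    # l-th pattern: minimize the residual of Eq. (4), projecting onto the
    # illumination Fourier support with the soft-edge filter of Eq. (6).
    fft2, ifft2 = np.fft.fft2, np.fft.ifft2
    H_det = fft2(np.fft.ifftshift(h_det))
    H_illu = fft2(np.fft.ifftshift(h_illu))
    soft_edge = np.abs(H_illu)**2 / (np.abs(H_illu)**2 + delta)

    def conv(x, H):  # FFT-based convolution with a precomputed OTF
        return np.real(ifft2(fft2(x) * H))

    def project(y):  # Eq. (6): suppress content outside the illumination OTF
        return np.real(ifft2(fft2(y) * soft_edge))

    # Step size ~ 1/L, with L an upper bound on the gradient's Lipschitz
    # constant (a heuristic, not a value from the paper).
    step = 0.5 / (o_est.max()**2 * np.abs(H_det).max()**2 + 1e-12)
    p = np.ones(I_l.shape)  # current estimate p^(k)
    q = p.copy()            # extrapolated point
    t = 1.0
    for _ in range(n_iter):
        resid = I_l - conv(o_est * q, H_det)              # data residual
        grad = -2 * o_est * conv(resid, np.conj(H_det))   # Eq. (5)
        p_new = project(q - step * grad)
        t_new = (1 + np.sqrt(1 + 4 * t**2)) / 2           # Nesterov momentum
        q = p_new + (t - 1) / t_new * (p_new - p)         # extrapolation step
        p, t = p_new, t_new
    return p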

2.3. Part 2: SIM with a statistical prior

Once we have recovered the illumination patterns, the second part of the algorithm is to reconstruct a high-resolution image from the measured dataset I (r) and the estimated patterns p (r). We call this part of the algorithm Structured Illumination Microscopy with a Statistical prior (SIMS), summarized in Fig. 3. There are four steps, which are explained below. We will also describe how the statistical prior is used and why this procedure gives better resolution.

Fig. 3 The second part of our algorithm, termed structured illumination microscopy with a statistical prior (SIMS), estimates the high-resolution object from the measured images and the estimated illumination patterns obtained in Part 1.

Part 2a: calculate the pattern-intensity covariance

Consider the case where the pattern $p_\ell(\mathbf{r})$ and the measured intensity $I_\ell(\mathbf{r})$ are random variables at each position $\mathbf{r}$. The $\ell$-th image is thus the $\ell$-th sample function for these random variables (one event out of the sample space). Covariance is a measure of how much two random variables change together. Since the intensity $I_\ell(\mathbf{r})$ is the blurred version of the product between the random pattern $p_\ell(\mathbf{r})$ and the deterministic object $o(\mathbf{r})$ (Eq. (1)), the covariance between the pattern and the intensity shows high similarity wherever the object $o(\mathbf{r})$ has signal, and thus allows us to find the object underneath the random-pattern illumination [14, 38–41]. We calculate this covariance image $I_{\text{cov}}(\mathbf{r})$ as

$$I_{\text{cov}}(\mathbf{r}) = \langle\Delta p_\ell(\mathbf{r})\,\Delta I_\ell(\mathbf{r})\rangle_\ell = \int o(\mathbf{r}')\,\langle\Delta p_\ell(\mathbf{r})\,\Delta p_\ell(\mathbf{r}')\rangle_\ell\, h_{\text{det}}(\mathbf{r}-\mathbf{r}')\,d^2\mathbf{r}', \tag{7}$$
where $\Delta I_\ell(\mathbf{r}) = I_\ell(\mathbf{r}) - \langle I_\ell(\mathbf{r})\rangle_\ell$ and $\Delta p_\ell(\mathbf{r}) = p_\ell(\mathbf{r}) - \langle p_\ell(\mathbf{r})\rangle_\ell$.

Regardless of which illumination patterns are imposed, the covariance image always gives an estimate of the object. However, the resolution of the reconstructed object may differ for different pattern statistics. We can quantify this by taking a closer look at the right-hand side of Eq. (7). The product of the detection PSF and the covariance between $p_\ell(\mathbf{r})$ and $p_\ell(\mathbf{r}')$ acts as the PSF of the covariance image, and thus determines its resolution. If the patterns are perfectly spatially correlated, the pattern-pattern covariance is a constant, and the pattern-intensity covariance image is a normal widefield image with PSF $h_{\text{det}}(\mathbf{r})$. If the patterns are perfectly spatially uncorrelated, the pattern-pattern covariance is $\langle|\Delta p_\ell(\mathbf{r})|^2\rangle_\ell\,\delta(\mathbf{r}-\mathbf{r}')$, which, for a constant variance, results in the PSF being a delta function and the object being reconstructed with perfect resolution. In practice, this is not achievable, since the illumination is bandlimited and thus cannot be perfectly uncorrelated. In the general case, to find the resolution (PSF) of the covariance image, we need to calculate the spatial covariance of the patterns, which is the subject of Part 2b, below.
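The covariance image of Eq. (7) reduces to a one-line sample estimate over the pattern index. A minimal NumPy sketch, assuming image and pattern stacks of shape (N_img, Ny, Nx) (the stacking convention and names are our own):

import numpy as np

def covariance_image(I_stack, p_stack):
    # Eq. (7): average, over the pattern index l, the product of the
    # zero-mean patterns and the zero-mean measured images.
    dI = I_stack - I_stack.mean(axis=0)  # delta I_l(r)
    dp = p_stack - p_stack.mean(axis=0)  # delta p_l(r)
    return (dp * dI).mean(axis=0)        # I_cov(r)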

Part 2b: calculate pattern-pattern covariance

To calculate the spatial covariance of the projected patterns, we first consider the pattern formation model. In our experiments, for example, we use a DMD to create random patterns at the sample plane. Assuming that the projected DMD pattern is sparse enough to avoid interference cross-terms, we can express our pattern under the incoherent model as

$$p_\ell(\mathbf{r}) = \int t_\ell(\mathbf{r}')\,h_{\text{illu}}(\mathbf{r}-\mathbf{r}')\,d^2\mathbf{r}', \tag{8}$$
where $t_\ell(\mathbf{r})$ is the $\ell$-th pattern on the DMD. With this model, the pattern-pattern covariance is
$$\begin{aligned}
\langle\Delta p_\ell(\mathbf{r})\,\Delta p_\ell(\mathbf{r}')\rangle_\ell &= \iint \langle\Delta t_\ell(\mathbf{r}_1)\,\Delta t_\ell(\mathbf{r}_2)\rangle_\ell\, h_{\text{illu}}(\mathbf{r}-\mathbf{r}_1)\,h_{\text{illu}}(\mathbf{r}'-\mathbf{r}_2)\,d^2\mathbf{r}_1\,d^2\mathbf{r}_2\\
&= \iint \gamma_t\,\langle\Delta t_\ell^2(\mathbf{r}_1)\rangle_\ell\,\delta(\mathbf{r}_1-\mathbf{r}_2)\, h_{\text{illu}}(\mathbf{r}-\mathbf{r}_1)\,h_{\text{illu}}(\mathbf{r}'-\mathbf{r}_2)\,d^2\mathbf{r}_1\,d^2\mathbf{r}_2\\
&\approx \alpha_t \int h_{\text{illu}}(\mathbf{r}-\mathbf{r}_1)\,h_{\text{illu}}(\mathbf{r}'-\mathbf{r}_1)\,d^2\mathbf{r}_1 = \alpha_t\,(h_{\text{illu}} \otimes h_{\text{illu}})(\mathbf{r}-\mathbf{r}'), \tag{9}
\end{aligned}$$
where we have used the assumption that the DMD pattern values at positions $\mathbf{r}_1$ and $\mathbf{r}_2$ are perfectly uncorrelated:
$$\langle\Delta t_\ell(\mathbf{r}_1)\,\Delta t_\ell(\mathbf{r}_2)\rangle_\ell = \gamma_t\,\langle\Delta t_\ell^2(\mathbf{r}_1)\rangle_\ell\,\delta(\mathbf{r}_1-\mathbf{r}_2) \approx \alpha_t\,\delta(\mathbf{r}_1-\mathbf{r}_2), \tag{10}$$
with $\gamma_t$ a constant that maintains unit consistency. This assumption is valid because the effective DMD pixel size is small compared to the FWHM of the optical system, and we can control $t_\ell(\mathbf{r})$ to create an uncorrelated pattern. In the experiment, each position of $t_\ell(\mathbf{r})$ is an independent and identically distributed random variable. When the number of patterns is large enough, the variance $\langle\Delta t_\ell^2(\mathbf{r}_1)\rangle_\ell$ approaches the same constant for all positions, so we can combine $\gamma_t$ and the variance into a single constant $\alpha_t$.

Ideally, we can assume $h_{\text{illu}}(\mathbf{r}) \approx h_{\text{det}}(\mathbf{r})$ when $\lambda_{\text{illu}} \approx \lambda_{\text{det}}$, where $\lambda_{\text{det}}$ is the wavelength of the detected fluorescent emission, and calculate the pattern-pattern covariance theoretically. We can also estimate $(h_{\text{illu}} \otimes h_{\text{illu}})(\mathbf{r})$ by numerically evaluating Eq. (9) using our estimated patterns, which accounts for possible aberrations in the illumination optics.
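The numerical route can use the Wiener-Khinchin relation: under the shift-invariance assumed in Eq. (9), the lag-dependent covariance is the average spatial autocorrelation of the zero-mean patterns. A NumPy sketch (the circular boundary handling and the normalization up to a constant are our own simplifications):

import numpy as np

def estimate_pattern_autocorr(p_stack):
    # Estimate (h_illu (x) h_illu), up to the constant alpha_t of Eq. (9),
    # by averaging the FFT-based autocorrelation of the zero-mean patterns
    # over the pattern index l (Wiener-Khinchin theorem).
    dp = p_stack - p_stack.mean(axis=0)
    power = np.mean(np.abs(np.fft.fft2(dp))**2, axis=0)
    acorr = np.real(np.fft.ifft2(power))
    return np.fft.fftshift(acorr)  # move the zero-lag peak to the center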

Part 2c: PSF deconvolution of the covariance image

The pattern-pattern covariance derived in Part 2b is related to the PSF of the pattern-intensity covariance calculated in Part 2a. Hence, we can plug the pattern-pattern covariance into Eq. (7) and write the covariance image as

$$I_{\text{cov}}(\mathbf{r}) = \langle\Delta p_\ell(\mathbf{r})\,\Delta I_\ell(\mathbf{r})\rangle_\ell = \alpha_t \int o(\mathbf{r}')\,[(h_{\text{illu}} \otimes h_{\text{illu}}) \cdot h_{\text{det}}](\mathbf{r}-\mathbf{r}')\,d^2\mathbf{r}'. \tag{11}$$

Importantly, the effective PSF for this covariance image is now $[(h_{\text{illu}} \otimes h_{\text{illu}}) \cdot h_{\text{det}}](\mathbf{r})$, and the corresponding effective OTF is $[|\tilde{h}_{\text{illu}}|^2 \otimes \tilde{h}_{\text{det}}](\mathbf{u})$. Since both $|\tilde{h}_{\text{illu}}|^2$ and $\tilde{h}_{\text{det}}$ have approximately the same Fourier support as the widefield OTF, their convolution covers around 2× the support of the widefield OTF, as in conventional SIM. Given the effective PSF, we implement a standard deconvolution to improve contrast at high spatial frequencies:

$$I_{\text{cov,dec}}(\mathbf{r}) = \mathcal{F}^{-1}\left\{\frac{\tilde{I}_{\text{cov}}(\mathbf{u})\,H^*(\mathbf{u})}{|H(\mathbf{u})|^2+\xi}\right\}, \tag{12}$$
where $H(\mathbf{u}) = [|\tilde{h}_{\text{illu}}|^2 \otimes \tilde{h}_{\text{det}}](\mathbf{u})$ and $\xi$ is a small regularization parameter.
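This step reuses the Tikhonov filter from Part 1b with the effective SIMS PSF in place of the detection PSF. A sketch under the same conventions as before; building the effective PSF in real space as the autocorrelation of h_illu times h_det is our reading of Eq. (11):

import numpy as np

def sims_deconv(I_cov, h_illu, h_det, xi=1e-3):
    # Effective PSF of Eq. (11): (h_illu (x) h_illu)(r) * h_det(r).
    fft2, ifft2 = np.fft.fft2, np.fft.ifft2
    H_illu = fft2(np.fft.ifftshift(h_illu))
    ac_illu = np.fft.fftshift(np.real(ifft2(np.abs(H_illu)**2)))
    psf_eff = ac_illu * h_det
    psf_eff /= psf_eff.sum()
    # Eq. (12): Tikhonov deconvolution with H(u) = FFT of the effective PSF.
    H = fft2(np.fft.ifftshift(psf_eff))
    return np.real(ifft2(fft2(I_cov) * np.conj(H) / (np.abs(H)**2 + xi)))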

Part 2d: shading correction operation

When the number of images is not large enough to give a uniform variance of the patterns at each pixel ($\langle\Delta t_\ell^2(\mathbf{r})\rangle_\ell$ from Eq. (9)), low-frequency shading artifacts will occur. Even if we assume the mean of the patterns to be flat, as in Eq. (2), the variance can still be non-uniform. This can be seen in the deconvolved covariance image in Fig. 3. To resolve this, we can estimate and correct for the variance across the image using our previously estimated projected patterns. Since the projected pattern $p_\ell(\mathbf{r})$ is a blurred version of the pattern on the DMD, by ignoring the high-frequency components of the DMD pattern we can approximate the variance of the DMD pattern by

$$\alpha_t(\mathbf{r}) = \gamma_t\,\langle\Delta t_\ell^2(\mathbf{r})\rangle_\ell \approx \gamma_t\,\langle\Delta p_\ell^2(\mathbf{r})\rangle_\ell. \tag{13}$$

We divide the spatially-varying variance $\alpha_t(\mathbf{r})$ in Eq. (11) out of the deconvolved SIMS image,

$$I_{\text{SIMS}}(\mathbf{r}) = I_{\text{cov,dec}}(\mathbf{r})\,\frac{\alpha_t(\mathbf{r})}{\alpha_t^2(\mathbf{r})+\epsilon}, \tag{14}$$
where $\epsilon$ is a regularizer and $I_{\text{SIMS}}(\mathbf{r})$ is the output of our SIMS reconstruction. The result of this step is our final reconstruction of the high-resolution object.
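Together, Eqs. (13) and (14) amount to a regularized division by the pattern variance map. A minimal sketch (the default value of eps is illustrative):

import numpy as np

def shading_correct(I_cov_dec, p_stack, eps=1e-2):
    # Eq. (13): approximate alpha_t(r) by the per-pixel variance of the
    # estimated projected patterns; Eq. (14): divide it out with damping.
    alpha = np.var(p_stack, axis=0)
    return I_cov_dec * alpha / (alpha**2 + eps)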

2.4. Parameter tuning and algorithm runtime

Our SIMS algorithm involves four regularizers: $\beta$, $\delta$, $\xi$, and $\epsilon$, introduced in Eqs. (3), (6), (12), and (14), respectively. Each is decoupled from the others and acts similarly to a typical Tikhonov regularizer, so they may be tuned independently. Generally, we want each regularizer to be as small as possible while still avoiding noise amplification.

The heuristic procedure for tuning the regularization parameters is as follows. First, we check whether the widefield images are well deconvolved by finding the smallest $\beta$ that gives the best resolution without obvious noise amplification. Next, we check the deconvolved covariance image by tuning the SIMS regularizer $\xi$ and the soft-edge filter regularizer $\delta$ using the same principle. Finally, we check the final reconstruction by using the smallest shading-correction regularizer $\epsilon$ that provides sufficient shading correction without evident noise amplification. Additionally, negative values in all of the deconvolved images are set to zero, since the fluorescence density is always positive.

The algorithm is implemented in MATLAB and run on a computer with an Intel i7 2.8 GHz CPU and 16 GB of DDR3 RAM under the OS X operating system. Reconstructing an image of 200 × 200 pixels from 400 measurements takes about 200 seconds. The bottleneck of the algorithm is the pattern estimation step: estimating each pattern takes around 0.5 seconds.

3. Results

3.1. Definition of resolution

Before introducing and comparing any SIM algorithms, we first define the resolution criterion considered in this paper. The resolution of a microscope image is usually defined by measuring the minimal resolvable distance between two points. Consider a widefield image with detection wavelength $\lambda$ and numerical aperture $NA$; the Abbe resolution criterion is then $0.5\lambda/NA$, the full width at half maximum (FWHM) of the widefield PSF. As two points get closer to each other, the contrast between them decreases. At the separation set by Abbe's limit, two infinitely small points observed under a widefield microscope give an overlapping two-point image with a central dip of contrast 0.01. Hence, the Abbe resolution criterion can be thought of as setting the minimum acceptable contrast between two points at 0.01. We therefore define the resolution of a microscope or a reconstruction algorithm by measuring the smallest resolvable fine features that have a contrast between them of at least 0.01.

3.2. Comparison of algorithms

Given this definition of resolution, we quantify the resolution of various algorithms in Fig. 4. The Siemens star test target ($o(r, \theta) = 1 + \cos(40\theta)$ in polar coordinates) has varying spatial frequencies along its radius. The resolution of each imaging method is quantified by reading off the minimal resolved period at which the contrast reaches 0.01. The effective modulation transfer function (MTF) of each method is shown in Fig. 4(b), measured as the contrast of the reconstructed Siemens star image at different radii.
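The target and the contrast read-out are easy to reproduce. The sketch below generates the star and measures Michelson contrast along a circle, which we use as a simple stand-in for the paper's contrast metric; the grid size, nearest-neighbor ring sampling, and function names are our own choices:

import numpy as np

def siemens_star(n=512, spokes=40):
    # Siemens star test object: o(r, theta) = 1 + cos(spokes * theta).
    y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
    return 1 + np.cos(spokes * np.arctan2(y, x))

def contrast_at_radius(img, radius, n_samples=4096):
    # Michelson contrast along a circle of the given radius (in pixels,
    # radius < n/2); for a star with S spokes this circle probes the
    # spatial period 2*pi*radius/S.
    n = img.shape[0]
    theta = np.linspace(0, 2 * np.pi, n_samples, endpoint=False)
    xs = (n // 2 + radius * np.cos(theta)).astype(int)
    ys = (n // 2 + radius * np.sin(theta)).astype(int)
    ring = img[ys, xs]
    return (ring.max() - ring.min()) / (ring.max() + ring.min() + 1e-12)

Sweeping the radius and plotting the contrast traces out an effective MTF curve like those in Fig. 4(b).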

Fig. 4 (a) Simulated reconstructions of a Siemens star target under a widefield microscope, deconvolved widefield, confocal microscope, deconvolved confocal, blind SIM [16], S-SOFI [20], our PE-SIMS and PE-SIMS-PR algorithms. (b) The effective modulation transfer function (MTF) of each method, given by the contrast of the reconstructed Siemens star image at different radii.

Our simulations use a SIM dataset with random patterns, so that we may compare against the previously proposed reconstruction algorithms of blind SIM [16] and S-SOFI [20]. We create $N_{\text{img}} = 400$ speckle-illuminated images from shifted random patterns on the DMD, scanned over 20 × 20 steps in the x and y directions with a step size of 0.6 FWHM of the PSF. In each pattern, only 10% of the DMD pixels are turned on. This noise-free situation allows us to compare the ideal achievable resolution of the different algorithms.

Figure 4(a) shows the widefield, deconvolved widefield, confocal, and deconvolved confocal images of the Siemens star, compared with blind SIM [16], S-SOFI [20] and our algorithm. At the bottom, we show the measured effective MTF for each algorithm. Visually, S-SOFI [20] produces the fewest artifacts.

To compare resolution, we use our definition of the minimal resolved separation at which the contrast drops to 0.01 and summarize the results in Table 1. The enhancement metric gives the ratio of resolution improvement over widefield imaging. S-SOFI resolves features down to 1.67× smaller than the widefield microscope, close to the 1.6× claimed in [20], and blind SIM achieves a 1.84× improvement but with lower contrast at high frequencies, consistent with [16]. Our PE-SIMS and PE-SIMS-PR (PE-SIMS with the pixel reassignment algorithm [30–34] described in Appendix B) algorithms give better resolution than the other methods, resolving features down to 1.84× and 2× smaller, respectively, close to the limit set by the deconvolved confocal image. Hence, our method performs the best of the blind algorithms.

Table 1. Achieved resolution for different algorithms

Ideally, if we knew all the patterns and our spatial modulation covered the full Fourier bandwidth of the objective, we could reconstruct out to $4NA/\lambda$ in Fourier space, achieving an enhancement of 2.42×, as in the case of the deconvolved confocal image or periodic SIM with known patterns. The blind algorithms, however, deal with an ill-posed problem ($N_{\text{img}}$ measured images are used to solve for $N_{\text{img}} + 1$ unknown images: the patterns plus the object) that only becomes well-posed through appropriate constraints. If the priors behind these constraints are not accurate enough, an algorithm may end up solving a different problem even once it is well-posed. This is why algorithms with different prior assumptions give different resolution performance on the same dataset, as seen in Table 1.

4. Experimental results

Our experimental setup is shown in Fig. 1. A laser beam (Thorlabs, CPS532, 4.5 mW) is expanded to impinge on a reflective DMD spatial light modulator (DLP® Discovery 4100, 0.7″ XGA, 1024 × 768 pixels, pixel size 13.6 µm). The DMD generates a total of $N_{\text{img}}$ random patterns (30% of DMD pixels turned on). These random illumination patterns are projected onto the object (with a demagnification of 60×) through a 4f system composed of a 200 mm convex lens and a 60× objective lens with NA = 0.8 (Nikon CFI). The resulting fluorescent light is collected with another 4f system formed by the same 60× objective and a 400 mm convex lens (magnification 120×). A dichroic mirror blocks the reflected illumination light (as in a typical epi-illumination setup). The images are taken with an sCMOS camera (PCO.edge 5.5, 2560 × 2160 pixels, pixel size 6.5 µm). Patterns are shifted on a 20 × 20 grid in the x and y directions with a step size of 0.6 FWHM of the PSF, while collecting images at each step. Our test object is carboxylate-modified red fluorescent beads (excitation wavelength 580 nm / emission wavelength 605 nm) with a mean diameter of 210 nm (F8810, Life Technologies).

Reconstruction results are shown in Fig. 5, demonstrating improved resolution using our PE-SIMS algorithm, as compared to standard widefield or deconvolved widefield images.

Fig. 5 Reconstructions of red fluorescent beads (Ex: 580 nm / Em: 605 nm) from the experiment using random pattern illumination with 20 × 20 scanning steps.

To quantitatively analyze the experimental results, we measure the resolved feature size of the reconstructed image and compare it to our theory. As shown in the cutline in Fig. 5, two fluorescent beads separated by 328 nm are clearly resolved by our method, while they are unresolvable in both the widefield and deconvolved widefield images. The contrast of this two-Gaussian profile shows that the two Gaussians are separated by 1.16× their FWHM, so the FWHM of the reconstructed beads is around 283 nm. Modeling each bead as a Gaussian with a FWHM of 140 nm (for beads 210 nm in diameter), we can deconvolve the bead shape from the reconstruction, obtaining a PSF FWHM of 240 nm for this case, below the diffraction limit of $\lambda/2NA = 371$ nm.
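As a quick consistency check on these numbers (our own, not the paper's): if both the bead profile and the effective PSF are modeled as Gaussians, their FWHMs combine in quadrature, so

$$\mathrm{FWHM}_{\mathrm{PSF}} = \sqrt{\mathrm{FWHM}_{\mathrm{meas}}^2 - \mathrm{FWHM}_{\mathrm{bead}}^2} = \sqrt{283^2 - 140^2}\ \mathrm{nm} \approx 246\ \mathrm{nm},$$

which agrees with the quoted 240 nm to within the rounding of the intermediate values.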

Our algorithm can be used on other types of SIM datasets, as long as the pattern-pattern covariance gives a point-like function at the center. As an example, we tested our algorithm on a dataset from a previous method, multispot SIM (MSIM) [10]. In MSIM, the patterns are a shifting grid of diffraction-limited spots. Since the previous MSIM implementation assumed known patterns, a calibration step captured an extra dataset with a uniform fluorescent sample in order to measure the patterns directly. Our algorithm ignores this calibration data, yet accurately reconstructs both the object and the patterns (see Fig. 6). The MSIM result using the calibration data is shown for comparison. The sample is microtubules stained with Alexa Fluor 488 in a fixed cell, observed under a TIRF 60× objective with NA = 1.45. Our PE-SIMS-PR reconstruction gives a result similar to the known-pattern MSIM reconstruction.

Fig. 6 Comparison of our algorithm on a dataset from multispot SIM (MSIM), which uses $N_{\text{img}} = 224$ scanned multi-spot patterns, from [10]. We show the deconvolved widefield image and the reconstructions using MSIM with known patterns, as well as our blind PE-SIMS algorithm with and without pixel reassignment.

5. Conclusion

We have proposed a robust algorithm that gives a 2× resolution improvement over widefield fluorescence imaging using a SIM dataset, without knowing the imposed patterns. Our algorithm first estimates each illumination pattern from a low-resolution approximate object and the measured intensities by solving a constrained convex optimization problem. We then synthesize a high-resolution image by calculating the covariance between the estimated patterns and the measured intensity images, followed by a deconvolution and shading correction to reach the final reconstruction. We quantified the resolution limits of our algorithm by the reconstructed contrast of a simulated Siemens star target. In simulations, we showed that our algorithm gives better resolution than previously proposed blind algorithms [16, 20]. We demonstrated this improvement experimentally with both random speckle pattern illumination and multi-spot scanned illumination.

Appendix A: reducing the number of images by multi-spot scanning

In this paper, we used 400 random speckle illumination patterns to reconstruct the image, far more than the 9-image requirement of conventional SIM [4]. This large number of images was required for high-quality reconstructions because the mean and variance of the illumination patterns must be sufficiently flat in order to avoid shading variations. Recall that we want $\alpha_t(\mathbf{r}) \approx \gamma_t\langle\Delta p_\ell^2(\mathbf{r})\rangle_\ell$ in Eq. (13) to be close to a constant, which requires the variance of the random patterns to be uniform. When the number of images $N_{\text{img}}$ goes down, this statistical assumption no longer holds. We use the shading correction algorithm (Sec. 2.3) to mitigate this problem by estimating the nonuniform variance, but it is still only an estimate. Hence, as the degree of variance nonuniformity increases (as the number of images decreases), the shading correction algorithm incurs errors.

Figure 7 shows simulations demonstrating the effect of reducing the number of images. We use the same random patterns as in Sec. 3.2, shifted with a step size of 0.6 FWHM of the PSF. As we decrease the number of images from 400 to 36, the reconstruction becomes worse due to shading errors. The shading map, $\alpha_t(\mathbf{r})\,o(\mathbf{r})$, is shown in the middle row of Fig. 7. The artifacts occur in regions where $\alpha_t(\mathbf{r})$ is dim and varying. Without knowing the patterns a priori, it is not possible to fully correct these shading effects.

Fig. 7 Results with simulated and experimental (fluorescent beads) datasets comparing random speckle and multi-spot illumination patterns. (middle row) Shading maps overlaid on the object. Decreasing the number of random patterns results in shading artifacts in the reconstruction. The random patterns are scanned in 20 × 20, 10 × 10, and 6 × 6 steps with the same step size of 0.6 FWHM of the PSF, while the multi-spot pattern is scanned with 6 × 6 steps.

Since we know that the artifacts appearing with too few images are due to a non-uniform $\alpha_t(\mathbf{r})$, we can attempt to design patterns that give a uniform variance map with a minimal number of images. We would like $\langle\Delta p_\ell^2(\mathbf{r})\rangle_\ell$ to be a uniform map. Consider the contribution from a single pattern: $\Delta p_\ell^2(\mathbf{r})$ is similar to the original pattern but with sharper bright spots. The ensemble average over $\ell$ sums up all these bright spots as the pattern is shifted around. For a shifted random pattern, we must capture many images in order for the summation of the bright spots to give a uniform map. An efficient way to make a sum of bright spots uniform is to use a periodic multi-spot pattern (see Fig. 7) [10, 11, 13]. The period of this multi-spot pattern is designed to be 6 shifting step sizes, so that a 6 × 6 scan gives a uniform shading map $\alpha_t(\mathbf{r})$, as shown in the sketch below. The resulting reconstruction, also shown in Fig. 7, is almost as good as the one illuminated with 400 shifted random patterns.
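A NumPy sketch of such a scanned multi-spot pattern (the grid size and the one-on-pixel-per-cell layout are illustrative assumptions; as in the text, the scan step is one sixth of the period, so the scanned spots tile the plane and, after blurring by the illumination PSF, the variance map is flat when the step is below the PSF width):

import numpy as np

def multispot_patterns(n=240, period=24, steps=6):
    # One 'on' DMD pixel per period-by-period cell, scanned over a
    # steps-by-steps grid with step size period // steps.
    step = period // steps
    patterns = np.zeros((steps * steps, n, n))
    for i in range(steps):
        for j in range(steps):
            p = np.zeros((n, n))
            p[i * step::period, j * step::period] = 1.0
            patterns[i * steps + j] = p
    return patterns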

Experimentally, we see similar trends in reconstruction quality for the different illumination strategies (see the bottom row of Fig. 7). Random pattern illumination of fluorescent beads with $N_{\text{img}} = 400$ and multi-spot illumination with $N_{\text{img}} = 36$ give very similar results, and shading artifacts become prominent as the number of random patterns is reduced. Note that we use the same algorithm for both the random and multi-spot illuminated datasets, because the PSFs of the pattern-intensity covariance images $I_{\text{cov}}(\mathbf{r})$ are the same in both cases.

To show that the PSF of the pattern-intensity covariance image is the same for random and multi-spot illumination, we must derive the pattern-pattern covariance $\langle\Delta p_\ell(\mathbf{r})\,\Delta p_\ell(\mathbf{r}')\rangle_\ell$ as we did in Part 2b of Sec. 2.3. To do so, we calculate the covariance of the patterns on the DMD, $\langle\Delta t_\ell(\mathbf{r})\,\Delta t_\ell(\mathbf{r}')\rangle_\ell$, and plug it into Eq. (9). For the multi-spot case, we can express the pattern on the DMD and its zero-mean version as

$$t_\ell(\mathbf{r}) = \Lambda\sum_{m,n}\delta(\mathbf{r}-\mathbf{r}_{mn}-\mathbf{r}_\ell) + t_0, \qquad \Delta t_\ell(\mathbf{r}) \approx \Lambda^2\sum_{m,n}\delta(\mathbf{r}-\mathbf{r}_{mn}-\mathbf{r}_\ell), \tag{15}$$
where $\mathbf{r}_{mn} = (m\Lambda, n\Lambda)$, $m$ and $n$ are integers, and $\Lambda$ is the period of the pattern. Then, we can calculate the covariance of the pattern on the DMD as
$$\langle\Delta t_\ell(\mathbf{r}_1)\,\Delta t_\ell(\mathbf{r}_2)\rangle_\ell = \int \Delta t(\mathbf{r}_1-\mathbf{r}_\ell)\,\Delta t(\mathbf{r}_2-\mathbf{r}_\ell)\,d^2\mathbf{r}_\ell = \Lambda^4\sum_{m,n}\delta(\mathbf{r}_1-\mathbf{r}_2-\mathbf{r}_{mn}) \otimes \sum_{m,n}\delta(\mathbf{r}_1-\mathbf{r}_2-\mathbf{r}_{mn}) \approx \Lambda^4\,\eta\sum_{m,n}\delta(\mathbf{r}_1-\mathbf{r}_2-\mathbf{r}_{mn}), \tag{16}$$
where $\eta$ is a constant that enforces unit consistency. Plugging this into Eq. (9), we can then calculate the pattern-pattern covariance as
$$\langle\Delta p_\ell(\mathbf{r})\,\Delta p_\ell(\mathbf{r}')\rangle_\ell = (h_{\text{illu}} \otimes h_{\text{illu}})(\mathbf{r}-\mathbf{r}') \otimes \Lambda^4\,\eta\sum_{m,n}\delta(\mathbf{r}-\mathbf{r}'-\mathbf{r}_{mn}). \tag{17}$$

Although the pattern-pattern covariance is now a periodic replica of $(h_{\text{illu}} \otimes h_{\text{illu}})(\mathbf{r})$, the PSF of the covariance image $I_{\text{cov}}(\mathbf{r})$ depends only on the product of $h_{\text{det}}(\mathbf{r})$ and $(h_{\text{illu}} \otimes h_{\text{illu}})(\mathbf{r}) \otimes \Lambda^4\eta\sum_{m,n}\delta(\mathbf{r}-\mathbf{r}_{mn})$, as derived in Eq. (7). If the period of the multi-spot pattern is large compared to the width of $(h_{\text{illu}} \otimes h_{\text{illu}})(\mathbf{r})$, the effective PSF is still $[(h_{\text{illu}} \otimes h_{\text{illu}}) \cdot h_{\text{det}}](\mathbf{r})$, the same as in the case of random pattern illumination.

Appendix B: enhanced SNR via pixel reassignment

In this section, we first discuss the similarity between SIMS and confocal microscopy. This leads to an extension of our method that incorporates the pixel reassignment procedure proposed in [30–34]. The covariance of the shifted pattern $p_\ell(\mathbf{r}-\mathbf{r}_s)$ with the intensity $I_\ell(\mathbf{r})$ still contains leftover information about the object. Pixel reassignment incorporates it in a straightforward fashion, giving better SNR in the final reconstruction.

In Sec. 2.3 of our SIMS procedure, we first calculate the covariance image $I_{\text{cov}}(\mathbf{r})$. The PSF of this covariance image is determined by imposing our statistical prior on the pattern-pattern covariance $\langle\Delta p_\ell(\mathbf{r})\,\Delta p_\ell(\mathbf{r}')\rangle_\ell$. The effect is similar to the illumination PSF of confocal microscopy [9]. Looking at Eq. (11), our covariance image with PSF $[(h_{\text{illu}} \otimes h_{\text{illu}}) \cdot h_{\text{det}}](\mathbf{r})$ is equivalent to a confocal image taken with illumination PSF $(h_{\text{illu}} \otimes h_{\text{illu}})(\mathbf{r})$ and detection PSF $h_{\text{det}}(\mathbf{r})$.

From the same SIM dataset, we can further use the shifted patterns $p_\ell(\mathbf{r}-\mathbf{r}_s)$ and correlate them with the intensities $I_\ell(\mathbf{r})$ to compute a series of shifted covariance images

$$I_{\text{cov}}^{s}(\mathbf{r},\mathbf{r}_s) = \langle\Delta p_\ell(\mathbf{r}-\mathbf{r}_s)\,\Delta I_\ell(\mathbf{r})\rangle_\ell = \int o(\mathbf{r}')\,\langle\Delta p_\ell(\mathbf{r}-\mathbf{r}_s)\,\Delta p_\ell(\mathbf{r}')\rangle_\ell\, h_{\text{det}}(\mathbf{r}-\mathbf{r}')\,d^2\mathbf{r}' = \alpha_t\int o(\mathbf{r}')\,(h_{\text{illu}} \otimes h_{\text{illu}})(\mathbf{r}-\mathbf{r}_s-\mathbf{r}')\,h_{\text{det}}(\mathbf{r}-\mathbf{r}')\,d^2\mathbf{r}'. \tag{18}$$

The PSF of the shifted covariance image $I_{\text{cov}}^{s}(\mathbf{r},\mathbf{r}_s)$ is the product of $(h_{\text{illu}} \otimes h_{\text{illu}})(\mathbf{r}-\mathbf{r}_s)$ and $h_{\text{det}}(\mathbf{r})$, whose center is approximately at $\mathbf{r}_s/2$. This image is the same as an image taken under a confocal microscope with a shifted pinhole. Thus, by shifting the patterns around and correlating with the intensity, we obtain the equivalent of many 2D confocal images taken with the pinhole at different positions. This is the same dataset as in the image scanning microscope, where the single-pixel detector and pinhole of a confocal system are replaced with a CCD [32, 33]. Though these images are not centered, they still contain information about the same object. Pixel reassignment was proposed in [31–34] as a way to incorporate this 4D information into a 2D image with better SNR.

Since the 2D images from $\mathbf{r}_s$-shifted patterns are approximately $\mathbf{r}_s/2$-shifted versions of the image at $\mathbf{r}_s = 0$, we can shift the information back to the center region and sum up all of these images to enhance the SNR, forming the pixel-reassigned (PR) image

$$I_{\text{PR}}(\mathbf{r}) = \int I_{\text{cov}}^{s}\!\left(\mathbf{r}+\tfrac{\mathbf{r}_s}{2},\,\mathbf{r}_s\right)d^2\mathbf{r}_s = \alpha_t\int o(\mathbf{r}')\left[\int (h_{\text{illu}} \otimes h_{\text{illu}})\!\left(\mathbf{r}-\tfrac{\mathbf{r}_s}{2}-\mathbf{r}'\right) h_{\text{det}}\!\left(\mathbf{r}+\tfrac{\mathbf{r}_s}{2}-\mathbf{r}'\right)d^2\mathbf{r}_s\right]d^2\mathbf{r}' = \alpha_t\int o(\mathbf{r}')\,[(h_{\text{illu}} \otimes h_{\text{illu}}) \otimes h_{\text{det}}](2(\mathbf{r}-\mathbf{r}'))\,d^2\mathbf{r}'. \tag{19}$$

This synthesized image using pixel reassignment has a PSF of $[(h_{\text{illu}} \otimes h_{\text{illu}}) \otimes h_{\text{det}}](2\mathbf{r})$. Figure 8(a) compares the SIMS PSF, $[(h_{\text{illu}} \otimes h_{\text{illu}}) \cdot h_{\text{det}}](\mathbf{r})$, with the PSF of SIMS with pixel reassignment, $[(h_{\text{illu}} \otimes h_{\text{illu}}) \otimes h_{\text{det}}](2\mathbf{r})$, in both real space and Fourier space (assuming $h_{\text{illu}} \approx h_{\text{det}}$). In real space, the PSF after pixel reassignment is broader than the one without. However, the OTF with pixel reassignment has larger values in the high-frequency region, where noise severely degrades image resolution. Thus, by summing up all the information we have, we obtain better SNR and an OTF that better handles noise in the high-frequency region. Since we know the PSF, $[(h_{\text{illu}} \otimes h_{\text{illu}}) \otimes h_{\text{det}}](2\mathbf{r})$, and the shading map, $\alpha_t(\mathbf{r})$, of this pixel-reassigned image $I_{\text{PR}}(\mathbf{r})$, we can again apply the deconvolution and shading correction operations described in Sec. 2.3 to obtain the PE-SIMS-PR reconstruction.
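A discretized NumPy sketch of Eqs. (18) and (19); the integer half-pixel handling (even shifts only) and the circular np.roll boundaries are our own simplifications:

import numpy as np

def pixel_reassign(I_stack, p_stack, shifts):
    # shifts: iterable of (sy, sx) integer pattern shifts in pixels; even
    # values keep the half-shift reassignment on the pixel grid.
    dI = I_stack - I_stack.mean(axis=0)
    dp = p_stack - p_stack.mean(axis=0)
    out = np.zeros(I_stack.shape[1:])
    for sy, sx in shifts:
        # Eq. (18): covariance with the (sy, sx)-shifted patterns gives a
        # shifted-pinhole 'confocal' view.
        view = (np.roll(dp, (sy, sx), axis=(1, 2)) * dI).mean(axis=0)
        # Eq. (19): reassign the view back by half the pattern shift, sum.
        out += np.roll(view, (-sy // 2, -sx // 2), axis=(1, 2))
    return out

The deconvolution and shading correction of Sec. 2.3 are then applied to the summed image, using the PSF $[(h_{\text{illu}} \otimes h_{\text{illu}}) \otimes h_{\text{det}}](2\mathbf{r})$.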

Fig. 8 (a) Comparison of the PSF and OTF for SIMS and SIMS with pixel reassignment (PR). (b) Comparisons of the deconvolved widefield image and the reconstructions of the 6 × 6 multi-spot scanned fluorescent beads with and without pixel reassignment.

Figure 8(b) compares the reconstructions of fluorescent beads using 6 × 6 multi-spot illumination with and without the pixel reassignment algorithm. Pixel reassignment yields sharper contrast when two beads are close together and helps clean up some background deconvolution errors. A cut-line plot of the fluorescent beads in Fig. 8(b) shows that the FWHM of a reconstructed bead from SIMS (300.3 nm) is larger than from SIMS-PR with pixel reassignment (254 nm), confirming the improved resolution.

Acknowledgments

The authors thank Prof. Michael Lustig, Jonathan Tamir, Michael Chen and Hsiou-Yuan Liu for helpful discussions. This research is funded by the Gordon and Betty Moore Foundation’s Data-Driven Discovery Initiative through Grant GBMF4562 to Laura Waller (UC Berkeley).

References and links

1. W. Lukosz, "Optical systems with resolving powers exceeding the classical limit. II," J. Opt. Soc. Am. 57, 932–941 (1967).

2. M. A. A. Neil, R. Juškaitis, and T. Wilson, "Method of obtaining optical sectioning by using structured light in a conventional microscope," Opt. Lett. 22, 1905–1907 (1997).

3. R. Heintzmann and C. Cremer, "Laterally modulated excitation microscopy: improvement of resolution by using a diffraction grating," Proc. SPIE 3568, 185–196 (1999).

4. M. G. Gustafsson, "Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy," J. Microsc. 198, 82–87 (2000).

5. E. Betzig, G. H. Patterson, R. Sougrat, O. W. Lindwasser, S. Olenych, J. S. Bonifacino, M. W. Davidson, J. Lippincott-Schwartz, and H. F. Hess, "Imaging intracellular fluorescent proteins at nanometer resolution," Science 313, 1642–1645 (2006).

6. M. J. Rust, M. Bates, and X. Zhuang, "Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM)," Nat. Methods 3, 793–795 (2006).

7. S. W. Hell and J. Wichmann, "Breaking the diffraction resolution limit by stimulated emission: stimulated-emission-depletion fluorescence microscopy," Opt. Lett. 19, 780–782 (1994).

8. T. Wilson and C. J. R. Sheppard, Theory and Practice of Scanning Optical Microscopy (Academic Press, 1984).

9. C. J. R. Sheppard, "Scanning confocal microscopy," in Encyclopedia of Optical Engineering (2003), pp. 2525–2544.

10. A. G. York, S. H. Parekh, D. D. Nogare, R. S. Fischer, K. Temprine, M. Mione, A. B. Chitnis, C. A. Combs, and H. Shroff, "Resolution doubling in live, multicellular organisms via multifocal structured illumination microscopy," Nat. Methods 9, 749–754 (2012).

11. A. G. York, P. Chandris, D. D. Nogare, J. Head, P. Wawrzusin, R. S. Fischer, A. Chitnis, and H. Shroff, "Instant super-resolution imaging in live cells and embryos via analog image processing," Nat. Methods 10, 1122–1126 (2013).

12. F. Ströhl and C. F. Kaminski, "A joint Richardson-Lucy deconvolution algorithm for the reconstruction of multifocal structured illumination microscopy data," Methods Appl. Fluoresc. 3, 014002 (2015).

13. N. Chakrova, R. Heintzmann, B. Rieger, and S. Stallinga, "Studying different illumination patterns for resolution improvement in fluorescence microscopy," Opt. Express 23, 31367–31383 (2015).

14. J. García, Z. Zalevsky, and D. Fixler, "Synthetic aperture superresolution by speckle pattern projection," Opt. Express 13, 6075–6078 (2005).

15. D. Sylman, V. Micó, J. García, and Z. Zalevsky, "Random angular coding for superresolved imaging," Appl. Opt. 49, 4874–4882 (2010).

16. E. Mudry, K. Belkebir, J. Girard, J. Savatier, E. L. Moal, C. Nicoletti, M. Allain, and A. Sentenac, "Structured illumination microscopy using unknown speckle patterns," Nat. Photon. 6, 312–315 (2012).

17. J. Min, J. Jang, D. Keum, S.-W. Ryu, C. Choi, K.-H. Jeong, and J. C. Ye, "Fluorescent microscopy beyond diffraction limits using speckle illumination and joint support recovery," Sci. Rep. 3, 2075 (2013).

18. S. Dong, P. Nanda, R. Shiradkar, K. Guo, and G. Zheng, "High-resolution fluorescence imaging via pattern-illuminated Fourier ptychography," Opt. Express 22, 20856–20870 (2014).

19. H. Yilmaz, E. G. van Putten, J. Bertolotti, A. Lagendijk, W. L. Vos, and A. P. Mosk, "Speckle correlation resolution enhancement of wide-field fluorescence imaging," Optica 2, 424–429 (2015).

20. M. Kim, C. Park, C. Rodriguez, Y. Park, and Y.-H. Cho, "Superresolution imaging with optical fluctuation using speckle patterns illumination," Sci. Rep. 5, 16525 (2015).

21. A. Negash, S. Labouesse, N. Sandeau, M. Allain, H. Giovannini, J. Idier, R. Heintzmann, P. C. Chaumet, K. Belkebir, and A. Sentenac, "Improving the axial and lateral resolution of three-dimensional fluorescence microscopy using random speckle illuminations," J. Opt. Soc. Am. A 33, 1089–1094 (2016).

22. S. Labouesse, M. Allain, J. Idier, S. Bourguignon, A. Negash, P. Liu, and A. Sentenac, "Joint reconstruction strategy for structured illumination microscopy with unknown illuminations," arXiv:1607.01980 (2016).

23. S. A. Shroff, J. R. Fienup, and D. R. Williams, "Phase-shift estimation in sinusoidally illuminated images for lateral superresolution," J. Opt. Soc. Am. A 26, 413–424 (2009).

24. S. A. Shroff, J. R. Fienup, and D. R. Williams, "Lateral superresolution using a posteriori phase shift estimation for a moving object: experimental results," J. Opt. Soc. Am. A 27, 1770–1782 (2010).

25. K. Wicker, O. Mandula, G. Best, R. Fiolka, and R. Heintzmann, "Phase optimisation for structured illumination microscopy," Opt. Express 21, 2032–2049 (2013).

26. K. Wicker, "Non-iterative determination of pattern phase in structured illumination microscopy using auto-correlations in Fourier space," Opt. Express 21, 24692–24701 (2013).

27. R. Ayuk, H. Giovannini, A. Jost, E. Mudry, J. Girard, T. Mangeat, N. Sandeau, R. Heintzmann, K. Wicker, K. Belkebir, and A. Sentenac, "Structured illumination fluorescence microscopy with distorted excitations using a filtered blind-SIM algorithm," Opt. Lett. 38, 4723–4726 (2013).

28. A. Jost, E. Tolstik, P. Feldmann, K. Wicker, A. Sentenac, and R. Heintzmann, "Optical sectioning and high resolution in single-slice structured illumination microscopy by thick slice blind-SIM reconstruction," PLoS ONE 10, e0132174 (2015).

29. T. Dertinger, R. Colyer, G. Iyer, S. Weiss, and J. Enderlein, "Fast, background-free, 3D super-resolution optical fluctuation imaging (SOFI)," Proc. Natl. Acad. Sci. USA 106, 22287–22292 (2009).

30. I. J. Cox, C. J. R. Sheppard, and T. Wilson, "Improvement in resolution by nearly confocal microscopy," Appl. Opt. 21, 778–781 (1982).

31. C. J. R. Sheppard, "Super-resolution in confocal imaging," Optik 80, 53–54 (1988).

32. C. B. Müller and J. Enderlein, "Image scanning microscopy," Phys. Rev. Lett. 104, 198101 (2010).

33. C. J. R. Sheppard, S. B. Mehta, and R. Heintzmann, "Superresolution by image scanning microscopy using pixel reassignment," Opt. Lett. 38, 2889–2892 (2013).

34. S. Roth, C. J. R. Sheppard, K. Wicker, and R. Heintzmann, "Optical photon reassignment microscopy (OPRA)," Optical Nanoscopy 2, 1–6 (2013).

35. N. Parikh and S. Boyd, "Proximal algorithms," Found. Trends Optim. 1, 123–231 (2013).

36. Y. Nesterov, "A method for solving the convex programming problem with convergence rate O(1/k²)," Dokl. Akad. Nauk SSSR 269, 543–547 (1983).

37. A. Beck and M. Teboulle, "A fast iterative shrinkage-thresholding algorithm for linear inverse problems," SIAM J. Imaging Sci. 2, 183–202 (2009).

38. T. Tanaami, S. Otsuki, N. Tomosada, Y. Kosugi, M. Shimizu, and H. Ishida, "High-speed 1-frame/ms scanning confocal microscope with a microlens and Nipkow disks," Appl. Opt. 41, 4704–4708 (2002).

39. J. G. Walker, "Non-scanning confocal fluorescence microscopy using speckle illumination," Opt. Commun. 189, 221–226 (2001).

40. S.-H. Jiang and J. G. Walker, "Experimental confirmation of non-scanning fluorescence confocal microscopy using speckle illumination," Opt. Commun. 238, 1–12 (2004).

41. R. Heintzmann and P. A. Benedetti, "High-resolution image reconstruction in fluorescence microscopy with patterned excitation," Appl. Opt. 45, 5037–5045 (2006).
