
Autonomous illumination control for localization microscopy

Open Access

Abstract

Super-resolution fluorescence microscopy improves spatial resolution, but this comes at a loss of image throughput and presents unique challenges in identifying optimal acquisition parameters. Microscope automation routines can offset these drawbacks, but thus far have required user inputs that presume a priori knowledge about the sample. Here, we develop a flexible illumination control system for localization microscopy comprised of two interacting components that require no sample-specific inputs: a self-tuning controller and a deep learning-based molecule density estimator that is accurate over an extended range of densities. This system obviates the need to fine-tune parameters and enables robust, autonomous illumination control for localization microscopy.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Single molecule localization microscopy (SMLM) is a suite of techniques for super-resolution fluorescence imaging that has generated great interest for bioimaging applications. Of these techniques, photoactivated localization microscopy (PALM) and stochastic optical reconstruction microscopy (STORM) achieve super-resolution by exploiting optically-induced transitions of single fluorescent markers between emitting and non-emitting states [1–3]. Due to the tradeoff between spatial and temporal resolutions, there is significant interest in improving their throughput; a single image typically takes minutes to acquire. Several approaches to this problem have been taken including high frame rate imaging [4], tailored illumination for large fields of view (FOVs) [5–7], and automation [8–13]. All of these approaches may be interpreted as means to collect more data at a fixed cost to the microscopist’s time. Automation is a particularly appealing line of technology development because it transfers repetitive tasks to a computer. Automated illumination strategies have enabled multiple field of view acquisitions without user intervention, but their flexibility has thus far been limited because they require extensive parameter tuning [8] and a priori knowledge of the sample response, and have often been targeted only to specific samples [9–11].

Automation also has the potential to enable—in real-time, and for a given sample—the optimization of acquisition parameters to better control the balance between the achievable resolution, degree of artifacts, and acquisition time. For example, a suboptimal transition rate between fluorescence emitting and non-emitting states will result in measurements that are either needlessly long or produce artifacts [14–16]. These artifacts must then be corrected in post-processing, lest they lead researchers to incorrect conclusions. Minimizing these errors at the point of acquisition is therefore an important step in the quality control process, but few freely available tools exist for this purpose.

In this work we address these limitations by developing an autonomous illumination control system for localization microscopy that adapts itself to each field-of-view and that requires a minimal effort in parameter tuning. We begin by establishing the primary components within the negative feedback loop that comprises the control system. The function of each component is decoupled from the others, which allows us to address their design independently of the system as a whole. With this philosophy in mind, we then develop a new algorithm for each component, intended to reduce the number of user inputs and increase the generality of the overall control system. The first is a parameter-free algorithm for counting emitters in an image. This algorithm, called Density Estimation by Fully Convolutional Networks (DEFCoN), outperforms fluorescent spot counters that are based on matched filters by greatly reducing their bias when signals from individual emitters are highly spatially overlapping. Furthermore, it can be readily adapted to new classes of data sets by re-training the network. The second component is a self-tuning controller that automatically adapts its gain parameters to each specific field-of-view by measuring the fluorescence excitation step response prior to acquisition. The self-tuning procedure both eliminates the guesswork in determining a valid set of control parameters and makes the system robust against heterogeneity within a single sample.

Finally, we reintegrate these components into the control system and show how they may help balance artifacts against imaging speed in PALM/STORM, requiring as input only a single free parameter that is general to the problem. These tools are freely provided to the community as the Automated Laser Illumination Control Algorithms (ALICA, pronounced ah-LEETZ-uh) plugin for Micro-Manager [17], a free and open-source software library for microscopy acquisition control.

2. Design of the illumination control system

2.1 Optical control of the active emitter density

Common photodynamical models employed in PALM/STORM are based on a system of states that correspond to the distinct energy levels of a fluorescent molecule. A transition from one state to another during a given time interval is a random event and occurs with a probability that depends only on the rate coefficient ascribed to that transition. In general, the rate coefficients can depend on a number of sample-specific factors, such as a fluorophore’s local environment and its chemical structure. At least one rate coefficient between fluorescence emitting and non-emitting states is proportional to the irradiance (power-per-area) integrated across the fluorophore’s absorption cross section. In direct STORM, for example, the irradiance of visible excitation light determines the transition rate from the emitting singlet state to the non-emitting triplet state; the irradiance of ultraviolet (UV) light influences the return rate from a non-emitting reduced state to the singlet state [18]. As another example, many photoswitchable fluorescent proteins (PS-FPs) are irreversibly switched into a red-shifted emission state at a rate that depends on the local UV irradiance [2,3].

The existence of these light-induced transitions allows the microscopist to optically tune the density of fluorophores in the emitting state by adjusting the power of the light source(s). Typically, the goal is to adjust the power until there is approximately one active emitter per diffraction-limited area on average. At lower densities, the acquisition will take longer to sample the underlying structure; at higher densities, artifacts begin to appear in the final data set because localizations may correspond to the centroid of multiple overlapping emitters and not their individual locations.
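To make the relationship between illumination power and active-emitter density concrete, the following is a minimal Python sketch of such a two-state (ON/OFF) model, with the OFF-to-ON rate proportional to the illumination power. All rate constants, the per-frame discretization, and the variable names are illustrative assumptions rather than values from this work.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_active_emitters(n_fluorophores=10_000, n_frames=2_000,
                             k_off=0.5, k_on_per_mw=1e-4, power_mw=10.0):
    """Two-state (ON/OFF) model with the OFF->ON rate proportional to the
    illumination power. Rates are per frame; a transition occurs during a
    frame with probability 1 - exp(-k)."""
    k_on = k_on_per_mw * power_mw
    p_on = 1.0 - np.exp(-k_on)    # P(OFF -> ON) within one frame
    p_off = 1.0 - np.exp(-k_off)  # P(ON -> OFF) within one frame

    on = np.zeros(n_fluorophores, dtype=bool)  # all fluorophores start OFF
    n_active = np.empty(n_frames)
    for t in range(n_frames):
        u = rng.random(n_fluorophores)
        on = np.where(on, u >= p_off, u < p_on)
        n_active[t] = on.sum()
    return n_active

# The steady-state ON fraction is k_on / (k_on + k_off), so more power
# means a higher density of active emitters:
for p in (1.0, 10.0, 100.0):
    print(p, simulate_active_emitters(power_mw=p)[-500:].mean())
```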

2.2 Control system design

The purpose of the illumination control system is to find and maintain a fixed density of active emitters throughout an acquisition. The system is implemented as a negative feedback loop [Fig. 1].

Fig. 1 The autonomous illumination control system. The three primary components in the feedback loop are represented as modular blocks, and the data passed between the components are indicated in italics.

The system consists of three generalized components. The first is the microscope, which contains the illumination source and provides raw images from its camera. The images are fed sequentially into an analyzer whose job is to estimate the density of active emitters. (In general, the analyzer may produce estimates of other quantities as well, such as the integrated intensity.) The controller is the third component and takes as inputs the analyzer’s most recent estimate of the active emitter density and the density set point, i.e. the desired emitter density to maintain during the experiment. The controller’s purpose is to compute the power of the illumination source that minimizes the absolute difference between the estimate and the set point. If the difference deviates from zero—as it would at the very start of a measurement or over time due to photobleaching—then the controller will apply a corrective adjustment to the light source’s output power.

The division of labor between the components carries several advantages. The components are weakly coupled; if one component fails to complete its computation before the previous one in the feedback loop produces new data, the other components can still continue their work unimpeded. Furthermore, the algorithms for the different components’ functionalities can be exchanged at will without affecting the others. This allows microscopists to adapt the control system to their particular samples and use cases. It also means that the optimization of each component may be treated as its own independent problem, rather than as a problem of the control system as a whole.
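ALICA itself is written in Java as a Micro-Manager plugin (Section 6); purely as an illustration of this decoupled design, here is a hypothetical Python sketch in which `camera` and `laser` are stand-in objects and the interface names are our own invention.

```python
from abc import ABC, abstractmethod

import numpy as np

class Analyzer(ABC):
    """Consumes camera frames and produces a scalar estimate,
    e.g. the density of active emitters."""
    @abstractmethod
    def process(self, frame: np.ndarray) -> None: ...

    @abstractmethod
    def get_estimate(self) -> float: ...

class Controller(ABC):
    """Maps the analyzer's estimate and a set point to a laser power."""
    @abstractmethod
    def compute_output(self, estimate: float, set_point: float) -> float: ...

def control_loop(camera, laser, analyzer: Analyzer,
                 controller: Controller, set_point: float) -> None:
    # The loop depends only on the abstract interfaces, so analyzer and
    # controller implementations can be swapped without touching each other.
    for frame in camera:
        analyzer.process(frame)
        power = controller.compute_output(analyzer.get_estimate(), set_point)
        laser.set_power(power)
```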

3. Estimating emitter counts with density maps

3.1 The spot counting problem

A natural choice for an analyzer for localization microscopy is a module that counts the number of single fluorescent molecules in an image [Fig. 2]. To study the accuracy of spot counting algorithms, we simulated 15,000 frames of a 2D SMLM image stack. The average number of active emitters was increased every 1000th frame by slightly increasing the transition rate from the off to the emitting state. The fluorophores were randomly arranged on a 2D microtubule network from the 2016 SMLMS Challenge [19] and were modeled with a two-state system whose simulated lifetimes were exponentially-distributed. Figure 2(a) shows a single frame from the simulation with the ground truth positions of emitters marked as red x’s. Spot counting algorithms that work on an image such as this one usually involve two steps. First, a filter is applied to the image that amplifies the signal from the fluorescent molecules while simultaneously suppressing the background. This step is often performed by convolution of the image with a matched filter whose frequency response is the conjugate of the Fourier transform of the microscope point spread function (PSF) [20]. Small regions of interest (ROIs) surrounding local maxima in the filtered image are then identified as single emitters. The detections in Fig. 2(a)—marked as cyan circles—were identified using a wavelet-based matched filter coupled with watershed segmentation and followed by a calculation of the centroid of connected components as described in [21] and implemented in the software package ThunderSTORM [22]. (The wavelet filter had a scale parameter of 2 and an order parameter of 3. The threshold setting was one standard deviation of the first wavelet level of the input image, and the ground truth density of emitters in Fig. 2(a) is approximately 2.1 µm−2.)
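For illustration only, the sketch below implements a generic version of this two-step scheme, using a difference-of-Gaussians band-pass filter as a stand-in for the wavelet matched filter of [21,22] and thresholded local maxima in place of watershed segmentation; the PSF width and threshold are arbitrary example values.

```python
import numpy as np
from scipy import ndimage

def count_spots(img, psf_sigma=1.3, nsigma=3.0):
    """Detection-based counting: band-pass the image with a difference of
    Gaussians (a crude matched filter for PSF-sized spots), then count
    thresholded local maxima."""
    img = img.astype(float)
    filtered = (ndimage.gaussian_filter(img, psf_sigma)
                - ndimage.gaussian_filter(img, 4.0 * psf_sigma))
    # A pixel is a local maximum if it equals the max of its neighborhood.
    is_max = ndimage.maximum_filter(filtered, size=5) == filtered
    detections = is_max & (filtered > nsigma * filtered.std())
    return int(detections.sum())
```

Like any detection-based counter, this sketch inherits the undercounting bias discussed below when the signals from nearby emitters overlap.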

Fig. 2 Bias in the detection of fluorescent spots. a) A single image from a simulated PALM acquisition demonstrates two types of counting errors: undercounting due to poor signal-to-noise (left arrow) and undercounting due to overlapping PSFs (right arrow). Red x’s: ground truth emitters; cyan circles: detected molecules using a wavelet matched filter. Scale bar: 1 µm. b) The number of fluorescence spots detected by the wavelet/watershed algorithm of [21] using different values for the B-spline scale parameters [22] vs. the true number of emitting fluorophores in images from a simulated PALM data set. The gray line indicates an unbiased result. Data points are binned averages with error bars representing the 95% confidence interval of the mean.

One can immediately see in Fig. 2(a) that the number of detections is less than the number of actual emitters. Furthermore, the bias towards undercounting appears to be a general feature of counting by direct detection and becomes worse as the true density of emitters increases; this makes the spot counter’s response to the true density nonlinear [Fig. 2(b)].

Two types of error contribute to this bias. The first is missed detections due to a poor signal-to-noise ratio (SNR); this error is largely inconsequential because the absence of emitters with poor SNR from a data set typically does not have an adverse effect on the SMLM reconstruction. The second error arises from the overlapping signals from closely-spaced emitters. In the context of automation performance, this would result in the control system erroneously concluding that there are fewer active emitters than there are in reality, thereby preventing the system from taking the correct action. The nonlinear response of the detection-based spot counter furthermore complicates the controller design.

An alternative to spot counting is to compute a quantity from the images that is somehow proportional to the number of spots, such as the sum over pixel values or the time that a pixel value spends above a given threshold [23]. While this approach can be made linear in the number of active emitters, it is susceptible to other types of errors that limit its use to samples where the only significant source of light is from the target fluorophores. Autofluorescence, contaminants, and out-of-focus fluorescence would all bias the emitter count estimate.

Another alternative would be to use multi-emitter subpixel localization routines. (An extensive and recent list of such routines may be found at [24].) In principle, these algorithms can perform unbiased spot counting in the case of overlapping signals by fitting the photon count distributions to models containing multiple emitters. They often require extensive parameter tuning, however, and are too slow to use for real-time applications or large FOVs.

3.2 Density map regression

The bias of detection-based counters suggests that a new approach is required to alleviate these issues. We therefore reformulated the problem of fluorescence spot counting as a regression problem over a density map [Fig. 3(a)]. In this formulation, a model is constructed that transforms an input image of fluorescent spots into a density map, i.e. a 2D image of the same size as the input and upon which a normalized Gaussian kernel is placed at each emitting fluorophore position. The integrated sum of the density map pixels over a subregion is equal to the number of spots it contains; the integral over the full density map is the estimated number of spots within the FOV. Density map regression has been successfully applied to problems in counting pedestrians, cars, and cells [25–27].
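Constructing such a target density map from ground-truth positions takes only a few lines; in this sketch the positions, image size, and kernel width are illustrative.

```python
import numpy as np

def density_map(positions, shape, sigma=1.0):
    """Place one normalized Gaussian kernel at each emitter position. The sum
    over any subregion of the map then equals the number of emitters in it."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    dmap = np.zeros(shape)
    for y, x in positions:
        kernel = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma ** 2))
        dmap += kernel / kernel.sum()  # each kernel integrates to exactly 1
    return dmap

dmap = density_map([(10.2, 12.7), (11.0, 14.1), (40.5, 3.3)], (64, 64))
print(dmap.sum())  # ~3.0: the integral over the map is the spot count
```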

Fig. 3 Density map estimation for fluorescence spot counting. a) A target density map generated from ground truth simulated data. The integral over the density map is the number of fluorescent spots in the FOV. Red x’s denote ground truth positions. b) The architecture of DEFCoN. (De)Conv.: (de)convolutional layer. ReLU: rectified linear units. The number of convolution kernels (and subsequent strided convolution kernels) used in each layer is indicated by the number below. For example, the first layer of the segmentation network is composed of 16 convolution kernels, each 3-by-3 in size, followed by ReLU activation, followed by 16 strided 3-by-3 convolutions, followed by ReLU.

Previous models for density map regression have utilized an ad hoc maximum excess over subarrays (MESA) distance [25] refined with ridge-regression [26], a random forest with hand-crafted features [28], or fully convolutional neural networks (FCNNs) [27,29,30]. FCNNs are particularly attractive because they do not require hand-crafted features and the model can be trained directly from images. In addition, their computational complexity scales linearly with the number of pixels, rendering them useful for real-time computation and competitive in terms of speed with detection-based algorithms [31]. To this end, we designed a spot counter called Density Estimation by Fully Convolutional Networks (DEFCoN) for density map regression of images of fluorescent spots [Fig. 3(b)].

3.2.1 Network architecture

DEFCoN’s architecture consists of two fully convolutional networks in series: a segmentation network and a density network. Each network is comprised of layers of (de)convolutional operations and nonlinear image transforms that first form a downsampling path and then an upsampling path. In the downsampling path, the convolution operations serve to extract features in the image at length scales that increase with each successive layer. In the upsampling path, an output image (either a segmentation map or density map) is constructed from these features. Training DEFCoN means finding the values of all the square (de)convolutional filters that produce accurate segmentation and density maps.

The segmentation network computes a parameter-free segmentation of the input image. The downsampling path of the segmentation network consists of three convolutional layers with 3 x 3 pixel kernels, separated by strided convolutional layers. Except where noted in Fig. 3(b), the activation function is the rectified linear unit (ReLU). The receptive field of the deepest layer corresponds to a 12 x 12 pixel region on the original image, which is large enough to capture information about the shape of clusters of fluorescent spots yet small enough to maintain the speed of the network’s computation. The upsampling path is made of two layers of eight deconvolution kernels followed by a 1 x 1 convolutional layer with a sigmoid activation function. During training, the network’s output is compared to a binary, ground truth segmentation mask using pixel-wise binary cross-entropy as the loss function. Essentially, the segmentation network performs a per-pixel classification where the output is a map of values that indicate the probabilities that the pixels contain signal from a fluorophore.

The density network transforms the segmentation map into the final density map and possesses a similar architecture to the segmentation network [Fig. 3(b)]. The deepest convolutional layer is made of 5 x 5 pixel kernels—making the receptive field 15 x 15 pixels. The final layer has a linear (rather than sigmoid) activation function because predicting pixel values is a regression problem, not a classification problem.
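The sketch below renders this architecture in the modern tf.keras functional API (the paper used TensorFlow 1.3.0 with Keras 2.0.8; see Section 3.2.3). The 16-kernel first layer, the 3 x 3 and 5 x 5 kernel sizes, the two eight-kernel deconvolution layers, and the sigmoid/linear output activations follow the text and Fig. 3(b); the remaining channel widths, the stride of 2, and the depth of the density network’s downsampling path are our assumptions.

```python
from tensorflow import keras
from tensorflow.keras import layers

inp = keras.Input(shape=(None, None, 1))  # grayscale frames of any size

# --- Segmentation network ---
# Downsampling path: three 3x3 conv layers separated by strided convolutions.
x = layers.Conv2D(16, 3, padding='same', activation='relu')(inp)  # 16 kernels (Fig. 3b)
x = layers.Conv2D(16, 3, strides=2, padding='same', activation='relu')(x)
x = layers.Conv2D(32, 3, padding='same', activation='relu')(x)    # assumed width
x = layers.Conv2D(32, 3, strides=2, padding='same', activation='relu')(x)
x = layers.Conv2D(64, 3, padding='same', activation='relu')(x)    # assumed width
# Upsampling path: two deconvolution layers of eight kernels each,
# then a 1x1 convolution with a sigmoid (per-pixel "signal" probability).
x = layers.Conv2DTranspose(8, 3, strides=2, padding='same', activation='relu')(x)
x = layers.Conv2DTranspose(8, 3, strides=2, padding='same', activation='relu')(x)
seg = layers.Conv2D(1, 1, activation='sigmoid', name='segmentation')(x)

# --- Density network ---
# Same pattern; the deepest layer uses 5x5 kernels and the output is linear.
y = layers.Conv2D(16, 3, padding='same', activation='relu')(seg)
y = layers.Conv2D(16, 3, strides=2, padding='same', activation='relu')(y)
y = layers.Conv2D(32, 5, padding='same', activation='relu')(y)    # 5x5 kernels
y = layers.Conv2DTranspose(8, 3, strides=2, padding='same', activation='relu')(y)
density = layers.Conv2D(1, 1, activation='linear', name='density')(y)

seg_model = keras.Model(inp, seg)       # trained alone in phase 1
full_model = keras.Model(inp, density)  # trained end-to-end in phase 2
```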

The reason for the inclusion of the segmentation network in DEFCoN is empirical; we found that the density estimation network alone does not generalize well to new data sets. This is likely due to the large degree of similarity between the input images and the density map estimates. In the absence of the segmentation network, the density network would learn how to minimize the counting error through subtle pixel-wise transformations rather than learning meaningful representations of what a fluorophore looks like. The result is that non-zero values would be sporadically placed in the background pixels of the resulting density maps, significantly biasing local counts. The addition of the segmentation network is our solution to avoid fine-tuning for improved generalization, such as is done in [27].

3.2.2 Loss functions

The DEFCoN network is trained in two phases [Fig. 4]. The segmentation network is trained alone in the first phase using ground truth segmentation masks generated from simulated data. Next, its weights are frozen and the combined segmentation/density network is trained in full, this time with ground truth density maps that are also generated from simulated data. As in [29], the loss function that is used for backpropagation while training the full network is comprised of two terms.

$$\ell = \ell_{\mathrm{pixel}} + \gamma\,\ell_{\mathrm{count}}$$

The first term is simply the sum of the squared pixel errors

$$\ell_{\mathrm{pixel}} = \sum_{i,j} \left( \hat{d}_{i,j} - d_{i,j} \right)^2$$

where $\hat{d}_{i,j}$ and $d_{i,j}$ are the values of pixel $(i,j)$ in the predicted and ground truth density maps, respectively. The second term, $\ell_{\mathrm{count}}$, penalizes the network for counting the number of spots incorrectly. Since the count is merely the sum of all the density map pixel values, this term is expressed as

$$\ell_{\mathrm{count}} = \left( \sum_{i,j} \hat{d}_{i,j} - \sum_{i,j} d_{i,j} \right)^2$$

Fig. 4 Training DEFCoN’s neural networks. The training takes place in two steps: first the segmentation network alone is trained on target segmentation maps. Then its weights are frozen and the full network is trained on the target density maps.

The parameter $\gamma$ varies the relative weight attributed to each term. If $\gamma$ is too small, each pixel can adopt a small offset that leads to a systematic counting error in the density map; if $\gamma$ is too large, the network will lose some local information, resulting in misshapen kernels. We empirically found a value of $\gamma = 0.01$ to give the best results.
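A direct transcription of this composite loss into a tf.keras-compatible loss function might look as follows; the tensor layout (batch, height, width, channel) is assumed.

```python
import tensorflow as tf

GAMMA = 0.01  # relative weight of the counting term, as chosen in the text

def defcon_loss(d_true, d_pred):
    """Composite loss l = l_pixel + gamma * l_count, computed per batch item."""
    # Sum of squared pixel errors over each density map.
    l_pixel = tf.reduce_sum(tf.square(d_pred - d_true), axis=[1, 2, 3])
    # Squared error between predicted and true total counts (map integrals).
    l_count = tf.square(tf.reduce_sum(d_pred, axis=[1, 2, 3])
                        - tf.reduce_sum(d_true, axis=[1, 2, 3]))
    return l_pixel + GAMMA * l_count
```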

3.2.3 Training data generation, augmentation, and preprocessing

DEFCoN is trained on simulated data from SASS, our in-house simulation and development platform [32]. 89,500 64-by-64 pixel training images were generated with signal-to-noise ratios (SNRs) varying between 2.0 and 17.2 and with fluorophore densities between 0.0 and 1.5 µm−2 to reflect the broad range of conditions encountered in SMLM. The camera pixel size varies between 50 nm and 135 nm. Half the training sets contain out-of-focus fluorophores using the Gibson-Lanni model for the point spread function (PSF) [33,34]. A third of the set is built using realistic microtubule simulations, while the other two thirds are made of randomly distributed fluorophores. Finally, a low-frequency random background is generated on half of the images using simplex noise [35,36].

The generated images are augmented further by random shifts of the brightness and contrast. Before being fed to the network, the input images are preprocessed using linear normalization, i.e. histogram stretching. If $I_{i,j}$ is the original intensity of pixel $(i,j)$, then the transformation is given by

$$I'_{i,j} = \frac{I_{i,j} - \min_{i,j}\left(I_{i,j}\right)}{\max_{i,j}\left(I_{i,j}\right) - \min_{i,j}\left(I_{i,j}\right)}$$

As a result, every pixel value in the image is between 0 and 1. Normalizing the inputs improves training speed and generalization.
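In NumPy, this preprocessing step is a one-liner guarded against constant images; the guard behavior is our assumption.

```python
import numpy as np

def normalize(img):
    """Linear normalization (histogram stretching) to the range [0, 1]."""
    img = img.astype(np.float32)
    lo, hi = img.min(), img.max()
    # Map a constant image to zeros rather than dividing by zero (assumption).
    return (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)
```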

The network was trained entirely on simulated data because it is necessary to know the ground-truth emitter positions over a wide range of densities. One could additionally use localizations derived from real images of single fluorescent emitters at low densities, and from multi-emitter algorithms at high densities. However, this would introduce uncertainty into the training data due to a finite localization precision and would be complicated by the presence of false positive and false negative localizations. Incorporating real data into the training set would only be necessary if the accuracy of the model is too low for a particular application, which is not the case here.

3.2.4 Target construction and training

To train DEFCoN, two target images were built for each training image: one segmentation mask and one density map. The density maps are created by adding Gaussian kernels to an empty image at the ground-truth positions of the emitters. Emitters whose total signals were less than 250 photons per frame were not included in the density map because they had a very poor SNR in the simulated image. For the kernels, we use the standard deviation σ = 1 pixel. We found this value to be a good compromise; smaller kernels do not have a good resolution, while larger kernels overlap too much. Each training image is stored with its corresponding density map.

The segmentation masks are built from the density maps. First, a threshold is applied to the Gaussian kernels in the ground truth density maps: every pixel with a density-map value over 0.03 is set to 1, and every pixel below is set to 0.
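Reusing the `density_map()` sketch from Section 3.2, target construction for one training image reduces to a threshold; the example positions are hypothetical, while σ = 1 pixel and the 0.03 threshold are the values given above.

```python
import numpy as np

ground_truth_positions = [(10.2, 12.7), (11.0, 14.1), (40.5, 3.3)]  # example
dmap = density_map(ground_truth_positions, (64, 64), sigma=1.0)
mask = (dmap > 0.03).astype(np.float32)  # binary segmentation mask target
```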

Training is done in two phases [Fig. 4]. First, the segmentation network alone is trained on 80,550 images, holding out 8950 images for validation. A dropout regularization layer [37] with a rate of 0.5 is applied after the deepest convolutional layer to prevent overfitting. The training is stopped when the performance on the validation data has stopped improving (early stopping). This double regularization (dropout and early stopping) ensures that the network generalizes well to new data sets.

The complete network is then trained end-to-end, feeding the images and comparing the output to the ground truth density maps. However, to keep the segmentation task completely separated from the density map inference, the weights in the segmentation network are frozen; only the weights of the density network are adjusted with backpropagation during the second training phase. In this configuration, the density network is trained with the same validation, optimization and regularization parameters as the segmentation network.

The network is implemented and trained using Tensorflow 1.3.0 [38] with the Keras 2.0.8 API [39]. We use the Adam optimization algorithm [40] with an initial learning rate of 0.001 for both training phases and with a batch size of 32. These parameters are not of crucial importance, but are chosen to achieve fast convergence. Training takes between one and two hours with an Nvidia GeForce GTX 1060 6GB GPU and an Intel Core i5-7600K CPU at 3.80 GHz.
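Put together, the two training phases might be scripted as below, using the reported optimizer, learning rate, and batch size. `seg_model` and `full_model` are the models from the architecture sketch, `defcon_loss` is the composite loss above, and the data arrays, epoch budget, and early-stopping patience are placeholders.

```python
from tensorflow import keras

early_stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=5,
                                           restore_best_weights=True)

# Phase 1: segmentation network alone, pixel-wise binary cross-entropy.
seg_model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001),
                  loss='binary_crossentropy')
seg_model.fit(train_images, train_masks, batch_size=32, epochs=100,
              validation_data=(val_images, val_masks), callbacks=[early_stop])

# Phase 2: freeze the segmentation weights and train the density network
# end-to-end with the composite loss.
for layer in seg_model.layers:
    layer.trainable = False
full_model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001),
                   loss=defcon_loss)
full_model.fit(train_images, train_dmaps, batch_size=32, epochs=100,
               validation_data=(val_images, val_dmaps), callbacks=[early_stop])
```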

3.3 DEFCoN performance

We tested DEFCoN against the ThunderSTORM implementation of the wavelet filtering and watershed algorithm because it is currently one of the top-performing segmentation algorithms for SMLM and performs well when spots overlap weakly [21,22]. Following [21], we first generated several simulated SMLM stacks consisting of 100 images, 128 x 128 pixels in size, of randomly distributed fluorophores with different mean densities of active emitters (in units of µm−2) and SNRs. Here, the SNR is defined as the ratio between the maximum value of the pixels spanned by the image of a single fluorescent molecule and the standard deviation of the neighboring background pixel values. Next, we applied each algorithm to the SMLM stacks and calculated a performance metric to compare the two, in this case the counting error:

$$CE = \frac{\left| \hat{N} - N \right|}{N}$$
where $\hat{N}$ and $N$ are the predicted and ground truth fluorophore counts, respectively. The mean counting errors from the test are displayed in Fig. 5.

Fig. 5 Comparison between DEFCoN and the wavelet filtering/watershed method from [21]. a) The mean counting error’s dependence on the density of randomly distributed fluorophores. The magenta tick indicates the density in the simulated data sets for panel b. b) The dependence of the error on the SNR. The magenta tick indicates the SNR of the simulated data sets in panel a.

DEFCoN performs extremely well at counting fluorescent spots across a range of fluorophore densities [Fig. 5(a)]. The wavelet filtering algorithm with watershed performs as well as DEFCoN at very sparse densities. However, its mean counting error grows at a rate that is ~5-7 times faster than DEFCoN’s at intermediate densities and an SNR of 10. At high densities, DEFCoN still outperforms the wavelet/watershed method, although the mean counting errors for the two methods grow at about the same rate. DEFCoN performs slightly worse than the wavelet/watershed method at low SNRs [Fig. 5(b)], with a mean counting error of 0.39 for DEFCoN vs. 0.27 for wavelets. This disparity decreases with increasing SNR until they perform similarly above an SNR of ~7.

Equally important for real-time spot counting is the speed with which each algorithm executes. A rough criterion is that the time required for the algorithm to produce a spot count should be less than 10 ms, which is the shortest exposure time of commercially available sCMOS cameras running at a full 2048 x 2048 pixel ROI. (EMCCD cameras are slower than their sCMOS counterparts at the full ROI size and equivalent bit depth.) The results of the speed comparisons calculated from the same data set are shown in Fig. 6.

Fig. 6 Execution times for DEFCoN and wavelet-based segmentation. The execution time scales linearly with the number of pixels in the image.

When implemented on a GPU, DEFCoN can produce a spot count in about 20 ms for images that are 512 x 512 pixels in size. The CPU implementation of DEFCoN performs similarly in speed to a CPU-based wavelet/watershed combination for image sizes larger than 256 x 256 pixels. As expected, the computation time of DEFCoN grows linearly with the number of pixels and is independent of the density of fluorescence spots.

Finally, we tested DEFCoN on the RealLS and RealHD data sets from the 2016 SMLMS Challenge [19]. RealLS is a low density data set where the active emitters are sparsely distributed in space; RealHD is a high density data set containing many overlapping fluorophores. Because no ground-truth data exists for these experimentally-derived data sets, 10 frames from RealLS and 5 from RealHD were given dot annotations by hand, where each dot marked the ground truth position of a visible fluorophore. The mean counting errors for DEFCoN and wavelets/watershed are displayed in Table 1. As expected, both algorithms perform well at low density, but DEFCoN produces more accurate counts (with respect to the annotations) in the high density data set, indicating that it approaches human performance at frame-by-frame counting.

Table 1. Mean counting errors on real data sets.

Taken together, these results indicate that DEFCoN outperforms the state-of-the-art detection-based approaches for fluorescence spot counting. The improved linearity and ability to work across a large range of active emitter densities makes its application in an illumination control system both general and robust. Perhaps most important for automation, however, is that its use requires no parameter tuning, relying instead on robust training data.

4. Controller self-tuning

4.1 The controller tuning problem

Having dealt with the problem of making accurate estimates of the density of emitters, we now turn to the problem of control: how does one compute the required illumination intensity without manually adjusting any of the control parameters? In what follows, we will restrict the discussion to control of a UV illumination source because fluorophores respond strongly to even relatively weak UV irradiance and because UV light controls the density of active emitters in both PALM and STORM.

Automation methods for photobleaching correction were first proposed in [8,9]. The controller in [8] counted fluorescent spots in real-time using the wavelet approach of [21], accepting as inputs three free parameters. The first two are thresholds for the average and maximum emitter counts; if either of these two values deviates by more than ±15% from its original value, then the controller adjusts the illumination power. The third parameter is the amount by which the illumination power is adjusted during each step. Likewise, the controller of [9] also makes discrete adjustments to the illumination power in values that are predetermined by the user. In addition, it requires a threshold value to separate the noise from the signal and another threshold that helps identify and remove pixels from the analysis that are always active, such as those contaminated by autofluorescence from dust particles.

Parameter tuning for these control systems may be performed in exploratory experiments to collect a priori knowledge about the typical sample response. The system performance will necessarily depend on how well the parameter values generalize to variability in the response during and between acquisitions. Large variability in labeling density, sample preparation, and the appearance of edge cases like brightly autofluorescent dust particles can invalidate a previously defined set of control parameter values, resulting in a suboptimal density of activated fluorophores. To our knowledge, no one has yet addressed the problem of automatically determining the optimal control parameters to use on a previously unseen FOV.

4.2 Proportional-integral controllers

In the context of SMLM, our strategy is to implement a proportional-integral (PI) controller to compute the power of the UV illumination source that will maintain a constant density of active emitters [Fig. 7(a)]. It accepts two inputs: the estimate of the density of emitters $N(t)$ and the desired density $N_0$, which is also called the set point. The difference between these two quantities is the error signal $e(t) = N(t) - N_0$, which is fed in parallel into the proportional and integral block components. The computed power $P$ of the illumination source is

$$P(t) = K_p e(t) + K_i \int_0^t e(t')\,\mathrm{d}t'$$

where $K_p$ and $K_i$ are the proportional and integral gains, respectively. The value in choosing PI control over either purely proportional control or stepping the illumination by pre-determined amounts is that it can maintain a long-term zero error signal while still achieving a fast response to perturbations [41].
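A discrete-time Python sketch of this controller, with one update per camera frame, follows. We write the error as $N_0 - N(t)$ so that positive gains from the tuning rules of Section 4.3 raise the power when the density falls below the set point; this differs from the equation above only in the overall sign convention.

```python
class PIController:
    """Minimal discrete-time PI controller for the illumination power."""

    def __init__(self, kp, ki, set_point):
        self.kp, self.ki, self.set_point = kp, ki, set_point
        self.integral = 0.0

    def compute_output(self, estimate, dt=1.0):
        error = self.set_point - estimate   # sign convention: N0 - N(t)
        self.integral += error * dt         # running integral of the error
        return self.kp * error + self.ki * self.integral
```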

Fig. 7 a) A proportional-integral (PI) controller. b) The number of detected localizations per frame in an acquisition where the UV laser power was controlled by the PI controller. c) The number of detected localizations per frame where the UV laser was manually adjusted.

Figure 7(b) demonstrates the ability of the PI controller to maintain a constant number of localizations throughout an experiment and to compensate for photobleaching. Briefly, microtubules in Cos7 cells were immunolabeled with AlexaFluor647 and imaged with STORM as described in [5]. UV irradiance on the sample was at most ~0.02 kW/cm2 taking into account neutral density filters and a uniformly illuminated area of 120 x 120 µm2. In Fig. 7(b), the UV laser power was steadily increased by a manually tuned PI controller in response to a decreasing density of active emitters due to photobleaching. Due to the sparsity of this particular sample, the emitters were detected in the images using a simple spot counting algorithm for the analyzer [42]. The stability in the number of detected localizations indicates good control of the density of active emitters. Figure 7(c), on the other hand, was taken on a similar field of view with the UV laser under manual control and demonstrates the difficulty in maintaining a fixed number of localizations without a feedback system in place.

4.3 Self-tuning for density set point control

Finding the correct values for the gain parameters of the PI controller is essential to achieving a stable and fast response to changes in both the set point and error signal. Severe oscillations in the illumination output or a slow response to changes in the emitter density may occur when the PI controller is not properly tuned. Furthermore, the optimal values for the gain parameters will vary with each FOV. For these reasons, we implemented a self-tuning procedure that is based on a set of rules derived from internal model control known as lambda tuning [43]. These rules are used to calculate the optimal values for Kp and Ki on a given FOV by measuring the sample’s step response to UV light [Fig. 8].

Fig. 8 Construction of the self-tuning procedure for the PI controller.

The lambda tuning rules for the PI controller are

$$K_p = \frac{\Delta P}{\Delta N} \cdot \frac{\tau}{\lambda + t_d}$$
$$K_i = K_p \tau^{-1}$$

In these expressions, $\Delta N$ represents the change in the analyzer’s average output in response to a step change $\Delta P$ in the output power of the illumination source. $\tau$ and $t_d$ are the response time and the dead time of the system, respectively. The former represents the amount of time it takes for the analyzer’s output to reach approximately $1 - e^{-1} \approx 63\%$ of its quasi-steady state value at the new laser power, whereas the latter represents the time after the change in laser power when a response is first detected. In our experience, the initial change in emitter density is nearly instantaneous when compared to the exposure time for a camera frame, so we set $t_d$ to zero. Small values of the parameter $\lambda$ will result in a fast response to changes about the set point, whereas large values will result in a slow response. The lambda tuning rules produce a response to a change in set point that settles out in a time of approximately $4\lambda$ without overshooting the set point value. $\lambda = 3\tau$ is recommended for stable set point control [43].

In practice, we find that it is easy to measure the step response $\Delta N$ but difficult to precisely measure $\tau$. Fortunately, the value for $\tau$ varies little between experiments and need not be precise; a guess often suffices. For example, $\tau \approx 10$ frames and $t_d = 0$ frames in Fig. 7. Using the recommended value from the lambda tuning rules of $\lambda = 3\tau = 30$ frames, this means that $K_p \approx 0.33\,\Delta P/\Delta N$, irrespective of the value of $\tau$, and $K_i = \left(0.03\ \mathrm{frames}^{-1}\right)\Delta P/\Delta N$. Even if $\tau$ were overestimated by an order of magnitude, the controller would bring the system into a quasi-steady state within a small fraction of the total acquisition time, which often extends over tens of thousands of frames.
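The tuning step itself is a few lines; the step-response numbers in the usage example are hypothetical.

```python
def lambda_tune(delta_p, delta_n, tau=10.0, dead_time=0.0):
    """Lambda tuning rules for the PI gains from a measured step response.

    delta_p: applied power step; delta_n: resulting change in the analyzer's
    average output; tau: response time in frames (a rough guess suffices).
    """
    lam = 3.0 * tau  # lambda = 3*tau, recommended for stable set point control
    kp = (delta_p / delta_n) * tau / (lam + dead_time)
    ki = kp / tau
    return kp, ki

# With tau = 10 frames and t_d = 0, as in Fig. 7:
kp, ki = lambda_tune(delta_p=2.0, delta_n=15.0)  # hypothetical step response
print(kp, ki)  # ~0.044 and ~0.0044 for this step, i.e. 0.33*dP/dN and kp/tau
```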

There remains one problem that results when the integral term in Eq. (6) becomes large. The integral acts as a form of memory for the error signal and, if allowed to accumulate, can cause the controller to become saturated such that it only outputs its maximum or minimum value. Large error signals accumulate in the controller’s memory, for example, when selecting a value for the set point that is either too high for the available illumination power or close to zero. (In the latter case, spurious detections keep the value of the measured density above zero and cause the integral term to accumulate a negative value.) This condition, more generally known as integral windup [41], is solved by placing upper and lower limits on the value for the integral term. The upper limit is set as the difference between the maximum possible laser output and the value of the proportional error term; the lower limit is the difference between the minimum output and the proportional term.
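In the PIController sketch above, this anti-windup rule becomes a clamp on the integral term; `p_min` and `p_max` are assumed attributes holding the laser’s output limits, and `ki` is assumed positive.

```python
def compute_output(self, estimate, dt=1.0):
    error = self.set_point - estimate
    proportional = self.kp * error
    self.integral += error * dt
    # Anti-windup: the integral contribution may only span the gap between
    # the laser's power limits and the proportional term.
    self.integral = min(max(self.integral,
                            (self.p_min - proportional) / self.ki),
                        (self.p_max - proportional) / self.ki)
    return proportional + self.ki * self.integral
```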

To better visualize how this self-tuning procedure works on a real microscope and sample, please see Visualization 1. In this video, a STORM acquisition of a mitochondrial sample is launched. Next, the self-tuning procedure is run. During this procedure, the output from the analyzer (DEFCoN in this example) is averaged over ten non-consecutive frames with the UV laser set to 0 mW. Then, a 2 mW step pulse is applied to the sample, and the analyzer’s output is again averaged. $K_p$ and $K_i$ are determined using the rules described above, which then allows the set point to be adjusted and the system to respond accordingly. In this case, the set point is placed at 20 counts per 100 µm2.

5. Managing tradeoffs between artifacts and imaging speed

5.1 The problem of optimal set point determination

To further improve the flexibility of the control system, we next examined ways to better determine a value for the set point. Roughly speaking, the optimum set point should produce super-resolved reconstructions with the fewest artifacts, the best resolution, and should take the least amount of time. However, the determination of an optimum value that satisfies these requirements is more challenging than it might first appear. For one, the optimum density of emitting fluorophores is not a global property but varies across the FOV with the structure’s local dimensionality [15]. Second, the quality of a super-resolution reconstruction depends on the algorithm chosen to perform the reconstruction [44], so the optimal acquisition settings may vary between algorithms. Third, we lack a rigorous, measurable definition of SMLM image quality that may be used for real-time optimization. In part, this is because a SMLM image is a function of all the images that contribute to the eventual reconstruction; it is difficult to predict the final image quality before the data is fully acquired. Finally, the tradeoffs that one is forced to make between the number of artifacts, resolution, and acquisition time mean that there is not one but rather many “optimal” solutions. To select one solution from this set requires that the microscopist explicitly specify the degree of tradeoffs that she or he is willing to make between these quantities.

We have addressed this final problem by translating the set point into a quantity that allows the microscopist to directly input the tradeoff they are willing to make between the resolution and the degree of artifacts. This approach employs a heuristic solution that is common to SMLM practitioners: the criterion that there should not be more than one emitter active per diffraction-limited area in any given camera frame. (We note, however, that other heuristics may be incorporated into this modular framework.) Combined with DEFCoN and the self-tuning controller, this effectively means that only one free parameter is necessary to determine the autonomous behavior of the activation laser.

5.2 Maximum local count control

The approach that we use here is to compute the highest local density of active emitters in an image from a density map estimate and subsequently use this quantity—and not the total count—as the controller’s input. The maximum local density of emitters arises naturally from a density map because the sum of the pixels over any subregion produces the number of emitters within that same subregion. We can transform the DEFCoN output into a map of local emitter densities through an extension of the so-called “gliding box algorithm” [45]. Briefly, a kernel of size n x n pixels whose values are all unity is convolved with the density map. During a single step of the convolution, the value of the pixel currently at the center of the gliding window is replaced with the sum of the pixels that fall within the window. The final result is a map whose values represent the local densities of emitters [Fig. 9]. The maximum local count (MLC) is the maximum value over the entirety of this new map:

$$\mathrm{MLC} = \max_{s \in S} \left( \sum_{(i,j) \in s} \hat{d}_{i,j} \right)$$
where $S$ is the set of all subregions $s$ of the same size within an estimated density map $\hat{d}_{i,j}$.
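The MLC computation reduces to a box filter followed by a maximum; a minimal sketch with SciPy (zero-padding at the image borders is our choice):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def max_local_count(density_map, window=7):
    """Maximum local count: slide an all-ones window over the density map
    (the 'gliding box'), then take the maximum of the local sums."""
    # uniform_filter returns the window mean; multiply by the window area
    # to recover the sum. Out-of-image pixels are treated as zero.
    local = uniform_filter(density_map, size=window, mode='constant') * window**2
    return float(local.max())
```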

Fig. 9 The local density estimates are made by computing the sum of the pixels in each subregion of a density map. (In this example, the size of the subregions is 7 x 7 pixels; an example is the red square on the left.) The maximum local count (the red square on the right) is the largest value found in the map of local density estimates.

Figure 10 demonstrates how the average MLC directly controls the tradeoff between artifacts and imaging speed on a simulated 2D microtubule network acquired with different average MLC values. Briefly, fluorophores whose photodynamics followed a simple two-state ON/OFF model were simulated with a 2D Gaussian PSF. 5000 raw frames were generated for different mean fluorophore off-times, and the fluorophores were localized with subpixel accuracy in ThunderSTORM [22]. The average MLC for each stack was computed over 7 x 7 pixel windows from DEFCoN’s density maps.

Fig. 10 The average maximum local count (AMLC) as a heuristic for set point determination. a) The AMLC value directly determines the tradeoff between the degree of artifacts in the SMLM image (precision and recall) and the rate at which localizations are detected. b) SMLM images of a simulated microtubule network acquired at different AMLC values. AMLC values are in the upper left corner of each image. False positive localizations are in red. Scale bar: 1 µm.

Figure 10(a) shows that the average MLC serves as a proxy for the tradeoff between the degree of artifacts and imaging speed. Increasing the set point (i.e., the average MLC) leads to a monotonic decrease in the precision and recall, both of which were calculated in ThunderSTORM with a radius threshold of 50 nm. (Precision is the ratio of true positive detections to the sum of true and false positives; recall is the ratio of true positives to the sum of true positives and false negatives.) The precision in particular, which depends on the number of false positives, reflects the degree of artifacts with smaller numbers indicating more artifacts. On the other hand, the localization detection rate increases monotonically with the MLC, and, as a result, so too does the final resolution of the image [46,47]. Though the details of the curves in Fig. 10(a) will vary with the underlying fluorophore distribution, we expect that the tradeoff between the two quantities is a general feature of SMLM imaging and precludes any determination of a single optimum for the irradiance.

Despite this fact, Fig. 10(b) demonstrates that the utility in the MLC as a set point lies in confining the false positives largely to the region of the sample where the ground truth density is highest. The selection of its value will depend on, for example, the type of localization algorithm used for analysis (single or multi-emitter) and the tradeoffs one is willing to make between imaging speed and the degree of artifacts. The MLC is therefore a useful parameter for automation systems because it is easily interpretable by the microscopist, eliminating much of the guesswork involved in setting its value.

6. Discussion

We have packaged these tools into a plugin for Micro-Manager 2.0 that is freely available to the community. The plugin, called ALICA, implements the control system in Fig. 1 and allows users to select between different algorithms for the analyzer and controller based on their specific needs. In particular, the combination of the self-tuning PI controller and DEFCoN are especially simple to use because they require a minimum of parameter tuning. As a result, we expect that more labs will be able to implement automation routines for high throughput SMLM.

To our knowledge, the self-tuning PI controller presented here is the first realization of a control system that directly measures the sample response to set its own parameters. We view the ability of the controller to adapt to new FOVs as an essential feature in super-resolution automation routines because of sample heterogeneity. As throughput in SMLM increases, the amount of heterogeneity encountered both within and between samples increases, which makes it difficult to predict a priori an optimum set of control parameter values. It is worth noting that the lambda tuning rules are just one of many sets of rules that may be employed for parameter tuning [43]; each has its own advantages in terms of speed and precision.

DEFCoN works by recognizing how to count molecules based on the shapes of multiple, overlapping spots. It does not, however, currently extract count information that is encoded in time. Accounting for such information may improve its accuracy in high density conditions, such as when imaging so-called zero-dimensional structures that appear as diffraction-limited spots in widefield images. We expect that extending DEFCoN to include temporal information would further decrease its bias, although it already works well across the range of emitter densities frequently encountered in SMLM. The real value in incorporating temporal information would be to extend ALICA’s toolset into more general fluctuation-based super-resolution modalities, such as SOFI and SRRF [48,49], which excel in dense environments of active emitters. We also note that the purpose of DEFCoN, i.e. real-time counting, is different from other recent Deep Learning approaches to SMLM, Deep-STORM and DeepLoco [50,51]. DEFCoN directly computes local densities of molecules and is tailored for real-time control systems. Both Deep-STORM and DeepLoco are intended to compute localizations and are tailored for high precision, post-acquisition analysis. To be clear, DEFCoN does not compute localizations; the density maps could, however, be used as a form of pre-processing to improve the accuracy of other localization algorithms.

In summary, we presented the development of an illumination control system for localization microscopy that works across a wide range of conditions with minimal parameter tuning. In doing so, we exercised the design principles that we believe are most important for autonomous super-resolution illumination systems: implement feedback, modularize the control loop, optimize the subsystems, and translate control parameters into quantities that are independent of the sample. Finally, we provide open-source tools so that these developments may continue to improve the quality and reproducibility of super-resolution microscopy data.

Data associated with this manuscript are available at https://doi.org/10.5281/zenodo.1303156 [52]. The software tools presented in this work—ALICA, DEFCoN, and SASS—are available at https://github.com/LEB-EPFL.

Funding

We thank the EPFL for their financial support. K.M.D. is supported by a SystemsX.ch Transition Postdoc Fellowship (2014/227).

Acknowledgments

We thank Frank Scheffold, Ricardo Henriques, Seamus J. Holden, Volkan Cevher, Paul Rolland, Daniel Sage, Timo Rey, and Christian Sieben for their generous feedback and fruitful discussions.

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

1. M. J. Rust, M. Bates, and X. Zhuang, “Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM),” Nat. Methods 3(10), 793–796 (2006). [CrossRef]   [PubMed]  

2. E. Betzig, G. H. Patterson, R. Sougrat, O. W. Lindwasser, S. Olenych, J. S. Bonifacino, M. W. Davidson, J. Lippincott-Schwartz, and H. F. Hess, “Imaging intracellular fluorescent proteins at nanometer resolution,” Science 313(5793), 1642–1645 (2006). [CrossRef]   [PubMed]  

3. S. T. Hess, T. P. K. Girirajan, and M. D. Mason, “Ultra-high resolution imaging by fluorescence photoactivation localization microscopy,” Biophys. J. 91(11), 4258–4272 (2006). [CrossRef]   [PubMed]  

4. Y. Lin, J. J. Long, F. Huang, W. C. Duim, S. Kirschbaum, Y. Zhang, L. K. Schroeder, A. A. Rebane, M. G. M. Velasco, A. Virrueta, D. W. Moonan, J. Jiao, S. Y. Hernandez, Y. Zhang, and J. Bewersdorf, “Quantifying and optimizing single-molecule switching nanoscopy at high speeds,” PLoS One 10(5), e0128135 (2015). [CrossRef]   [PubMed]  

5. K. M. Douglass, C. Sieben, A. Archetti, A. Lambert, and S. Manley, “Super-resolution imaging of multiple cells by optimised flat-field epi-illumination,” Nat. Photonics 10(11), 705–708 (2016). [CrossRef]   [PubMed]  

6. R. Diekmann, Ø. I. Helle, C. I. Øie, P. McCourt, T. R. Huser, M. Schüttpelz, and B. S. Ahluwalia, “Chip-based wide field-of-view nanoscopy,” Nat. Photonics 11(5), 322–328 (2017). [CrossRef]  

7. Z. Zhao, B. Xin, L. Li, and Z.-L. Huang, “High-power homogeneous illumination for super-resolution localization microscopy with large field-of-view,” Opt. Express 25(12), 13382–13395 (2017). [CrossRef]   [PubMed]  

8. A. Kechkar, D. Nair, M. Heilemann, D. Choquet, and J.-B. Sibarita, “Real-time analysis and visualization for single-molecule based super-resolution microscopy,” PLoS One 8(4), e62918 (2013). [CrossRef]   [PubMed]  

9. S. J. Holden, T. Pengo, K. L. Meibom, C. Fernandez Fernandez, J. Collier, and S. Manley, “High throughput 3D super-resolution microscopy reveals Caulobacter crescentus in vivo Z-ring organization,” Proc. Natl. Acad. Sci. U.S.A. 111(12), 4566–4571 (2014). [CrossRef]   [PubMed]  

10. J. P. Eberle, W. Muranyi, H. Erfle, and M. Gunkel, “Fully automated targeted confocal and single-molecule localization microscopy,” in Super-Resolution Microscopy. Methods in Molecular Biology (Humana, 2017), pp. 139–152.

11. M. Mund, J. A. van der Beek, J. Deschamps, S. Dmitrieff, P. Hoess, J. L. Monster, A. Picco, F. Nédélec, M. Kaksonen, and J. Ries, “Systematic analysis of the molecular architecture of endocytosis reveals a nanoscale actin nucleation template that drives efficient vesicle formation,” Cell 174, 884–896 (2017). [CrossRef]   [PubMed]  

12. F. Farzam and K. A. Lidke, “Automated multiple target superresolution imaging,” in Frontiers in Optics 2017 (OSA, 2017), p. FTh3D.3.

13. A. Beghin, A. Kechkar, C. Butler, F. Levet, M. Cabillic, O. Rossier, G. Giannone, R. Galland, D. Choquet, and J.-B. Sibarita, “Localization-based super-resolution imaging meets high-content screening,” Nat. Methods 14(12), 1184–1190 (2017). [CrossRef]   [PubMed]  

14. A. Burgert, S. Letschert, S. Doose, and M. Sauer, “Artifacts in single-molecule localization microscopy,” Histochem. Cell Biol. 144(2), 123–131 (2015). [CrossRef]   [PubMed]  

15. P. Fox-Roberts, R. Marsh, K. Pfisterer, A. Jayo, M. Parsons, and S. Cox, “Local dimensionality determines imaging speed in localization microscopy,” Nat. Commun. 8, 13558 (2017). [CrossRef]   [PubMed]  

16. J.-F. Rupprecht, A. Martinez-Marrades, Z. Zhang, R. Changede, P. Kanchanawong, and G. Tessier, “Trade-offs between structural integrity and acquisition time in stochastic super-resolution microscopy techniques,” Opt. Express 25(19), 23146–23163 (2017). [CrossRef]   [PubMed]  

17. A. D. Edelstein, M. A. Tsuchida, N. Amodaj, H. Pinkard, R. D. Vale, and N. Stuurman, “Advanced methods of microscope control using μManager software,” J. Biol. Methods 1(2), 10 (2014). [CrossRef]   [PubMed]  

18. M. Heilemann, S. van de Linde, M. Schüttpelz, R. Kasper, B. Seefeldt, A. Mukherjee, P. Tinnefeld, and M. Sauer, “Subdiffraction-resolution fluorescence imaging with conventional fluorescent probes,” Angew. Chem. Int. Ed. Engl. 47(33), 6172–6176 (2008). [CrossRef]   [PubMed]  

19. D. Sage, H. Kirshner, T. Pengo, N. Stuurman, J. Min, S. Manley, and M. Unser, “Quantitative evaluation of software packages for single-molecule localization microscopy,” Nat. Methods 12(8), 717–724 (2015). [CrossRef]   [PubMed]  

20. D. Sage, F. R. Neumann, F. Hediger, S. M. Gasser, and M. Unser, “Automatic tracking of individual fluorescence particles: application to the study of chromosome dynamics,” IEEE Trans. Image Process. 14(9), 1372–1383 (2005). [CrossRef]   [PubMed]  

21. I. Izeddin, J. Boulanger, V. Racine, C. G. Specht, A. Kechkar, D. Nair, A. Triller, D. Choquet, M. Dahan, and J. B. Sibarita, “Wavelet analysis for single molecule localization microscopy,” Opt. Express 20(3), 2081–2095 (2012). [CrossRef]   [PubMed]  

22. M. Ovesný, P. Křížek, J. Borkovec, Z. Svindrych, and G. M. Hagen, “ThunderSTORM: a comprehensive ImageJ plug-in for PALM and STORM data analysis and super-resolution imaging,” Bioinformatics 30(16), 2389–2390 (2014). [CrossRef]   [PubMed]  

23. S. Holden, T. Pengo, and S. Manley, “Optimisation and control of sampling rate in localisation microscopy,” in 10th International Conference on Sampling Theory and Applications (2013), pp. 281–284.

24. “Single-Molecule Localization Microscopy: Software Benchmarking,” http://bigwww.epfl.ch/smlm/challenge2016/index.html?p=participants.

25. V. Lempitsky and A. Zisserman, “Learning to count objects in images,” in Advances in Neural Information Processing Systems 23 (NIPS) (2010), pp. 1324–1332.

26. C. Arteta, V. Lempitsky, J. A. Noble, and A. Zisserman, “Interactive object counting,” in European Conference on Computer Vision – ECCV (Springer, 2014), pp. 504–518.

27. W. Xie, J. A. Noble, and A. Zisserman, “Microscopy cell counting and detection with fully convolutional regression networks,” Comput. Methods Biomech. Biomed. Eng. Imaging Vis. 6(3), 283–292 (2018). [CrossRef]  

28. L. Fiaschi, U. Koethe, R. Nair, and F. A. Hamprecht, “Learning to count with regression forest and structured labels,” in Proceedings of the 21st International Conference on Pattern Recognition (ICPR) (2012), pp. 2685–2688.

29. D. Kang, Z. Ma, and A. B. Chan, “Beyond counting: Comparisons of density maps for crowd analysis tasks - counting, detection, and tracking,” arXiv 1705.10118 (2017).

30. D. Oñoro-Rubio and R. J. López-Sastre, “Towards perspective-free object counting with deep learning,” in European Conference on Computer Vision (ECCV) (Springer, 2016), pp. 615–629. [CrossRef]  

31. L. He, X. Ren, Q. Gao, X. Zhao, B. Yao, and Y. Chao, “The connected-component labeling problem: A review of state-of-the-art algorithms,” Pattern Recognit. 70, 25–43 (2017). [CrossRef]  

32. M. Štefko, B. Ottino, K. M. Douglass, and S. Manley, “SMLM acquisition simulation software (SASS),” https://github.com/LEB-EPFL/SASS (2018).

33. S. F. Gibson and F. Lanni, “Experimental test of an analytical model of aberration in an oil-immersion objective lens used in three-dimensional light microscopy,” J. Opt. Soc. Am. A 9(1), 154–166 (1992). [CrossRef]   [PubMed]  

34. J. Li, F. Xue, and T. Blu, “Fast and accurate three-dimensional point spread function computation for fluorescence microscopy,” J. Opt. Soc. Am. A 34(6), 1029–1034 (2017). [CrossRef]   [PubMed]  

35. K. Perlin, “An image synthesizer,” in Proceedings of the 12th Annual Conference on Computer Graphics and Interactive Techniques - SIGGRAPH ’85 (ACM, 1985), pp. 287–296. [CrossRef]  

36. K. Spencer, “Open simplex noise,” https://gist.github.com/KdotJPG/b1270127455a94ac5d19 (2014).

37. N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: a simple way to prevent neural networks from overfitting,” J. Mach. Learn. Res. 15, 1929–1958 (2014).

38. M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, M. Kudlur, J. Levenberg, R. Monga, S. Moore, D. G. Murray, B. Steiner, P. Tucker, V. Vasudevan, P. Warden, M. Wicke, Y. Yu, and X. Zheng, “TensorFlow: A system for large-scale machine learning,” in 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI ’16) (2016), pp. 265–284.

39. F. Chollet, “Keras,” GitHub repository, https://github.com/keras-team/keras (2015).

40. D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv 1412.6980 (2014).

41. J. Bechhoefer, “Feedback for physicists: A tutorial essay on control,” Rev. Mod. Phys. 77(3), 783–836 (2005). [CrossRef]  

42. N. Stuurman, “SpotCounter (ImageJ),” http://imagej.net/SpotCounter (2017).

43. J. F. Smuts, Process Control for Practitioners: How to Tune PID Controllers and Optimize Control Loops (OptiControls Inc, 2011).

44. S. Culley, D. Albrecht, C. Jacobs, P. M. Pereira, C. Leterrier, J. Mercer, and R. Henriques, “Quantitative mapping and minimization of super-resolution optical imaging artifacts,” Nat. Methods 15(4), 263–266 (2018). [CrossRef]   [PubMed]  

45. C. Allain and M. Cloitre, “Characterizing the lacunarity of random and deterministic fractal sets,” Phys. Rev. A 44(6), 3552–3558 (1991). [CrossRef]   [PubMed]  

46. R. P. J. Nieuwenhuizen, K. A. Lidke, M. Bates, D. L. Puig, D. Grünwald, S. Stallinga, and B. Rieger, “Measuring image resolution in optical nanoscopy,” Nat. Methods 10(6), 557–562 (2013). [CrossRef]   [PubMed]  

47. N. Banterle, K. H. Bui, E. A. Lemke, and M. Beck, “Fourier ring correlation as a resolution criterion for super-resolution microscopy,” J. Struct. Biol. 183(3), 363–367 (2013). [CrossRef]   [PubMed]  

48. T. Dertinger, R. Colyer, G. Iyer, S. Weiss, and J. Enderlein, “Fast, background-free, 3D super-resolution optical fluctuation imaging (SOFI),” Proc. Natl. Acad. Sci. U.S.A. 106(52), 22287–22292 (2009). [CrossRef]   [PubMed]  

49. N. Gustafsson, S. Culley, G. Ashdown, D. M. Owen, P. M. Pereira, and R. Henriques, “Fast live-cell conventional fluorophore nanoscopy with ImageJ through super-resolution radial fluctuations,” Nat. Commun. 7, 12471 (2016). [CrossRef]   [PubMed]  

50. E. Nehme, L. E. Weiss, T. Michaeli, and Y. Shechtman, “Deep-STORM: super-resolution single-molecule microscopy by deep learning,” Optica 5(4), 458–464 (2018). [CrossRef]  

51. N. Boyd, E. Jonas, H. P. Babcock, and B. Recht, “DeepLoco: Fast 3D Localization Microscopy Using Neural Networks,” bioRxiv 267096 (2018).

52. M. Štefko, B. Ottino, K. M. Douglass, and S. Manley, “Autonomous illumination control in localization microscopy - Data,” (2018). [CrossRef]  

Supplementary Material (1)

Visualization 1: An example of how the self-tuning PI controller calibrates itself during an automated STORM image sequence. Fluorescent spots were counted using a fully convolutional neural network called DEFCoN.



Figures (10)

Fig. 1 The autonomous illumination control system. The three primary components in the feedback loop are represented as modular blocks, and the data passed between the components are indicated in italics.
Fig. 2 Bias in the detection of fluorescent spots. a) A single image from a simulated PALM acquisition demonstrates two types of counting errors: undercounting due to poor signal-to-noise (left arrow) and undercounting due to overlapping PSFs (right arrow). Red x’s: ground truth emitters; cyan circles: detected molecules using a wavelet matched filter. Scale bar: 1 µm. b) The number of fluorescent spots detected by the wavelet/watershed algorithm of [21] using different values for the B-spline scale parameters [22] vs. the true number of emitting fluorophores in images from a simulated PALM data set. The gray line indicates an unbiased result. Data points are binned averages with error bars representing the 95% confidence interval of the mean.
Fig. 3 Density map estimation for fluorescent spot counting. a) A target density map generated from ground truth simulated data. The integral over the density map is the number of fluorescent spots in the FOV. Red x’s denote ground truth positions. b) The architecture of DEFCoN. (De)Conv.: (de)convolutional layer. ReLU: rectified linear units. The number of convolution kernels (and subsequent strided convolution kernels) used in each layer is indicated by the number below. For example, the first layer of the segmentation network is composed of 16 convolution kernels, each 3-by-3 in size, followed by ReLU activation, followed by 16 strided 3-by-3 convolutions, followed by ReLU.
Fig. 4 Training DEFCoN’s neural networks. The training takes place in two steps: first, the segmentation network alone is trained on target segmentation maps; its weights are then frozen, and the full network is trained on the target density maps.
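The two-step procedure in the caption above amounts to freezing the trained segmentation weights before fitting the density-map branch. Below is a minimal sketch of that pattern in Keras, using toy stand-in models and placeholder data; the real DEFCoN architectures, losses, and hyperparameters are not reproduced here (the paper cites the Adam optimizer [40]).

```python
# Hedged sketch of two-step training with frozen weights; the models and
# data here are toy stand-ins, not the published DEFCoN networks.
import numpy as np
from tensorflow.keras import layers, models

seg_net = models.Sequential([
    layers.Conv2D(16, 3, padding="same", activation="relu", input_shape=(64, 64, 1)),
    layers.Conv2D(1, 1, activation="sigmoid"),
])
density_head = models.Sequential([layers.Conv2D(1, 1)])
full_net = models.Sequential([seg_net, density_head])

images = np.random.rand(8, 64, 64, 1)                                 # placeholder data
target_seg_maps = (np.random.rand(8, 64, 64, 1) > 0.5).astype("float32")
target_density_maps = np.random.rand(8, 64, 64, 1)

# Step 1: train the segmentation network on target segmentation maps.
seg_net.compile(optimizer="adam", loss="binary_crossentropy")
seg_net.fit(images, target_seg_maps, epochs=1)

# Step 2: freeze its weights, then train the full network on density maps.
seg_net.trainable = False
full_net.compile(optimizer="adam", loss="mse")
full_net.fit(images, target_density_maps, epochs=1)
```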
Fig. 5 Comparison between DEFCoN and the wavelet filtering/watershed method from [21]. a) The mean counting error’s dependence on the density of randomly distributed fluorophores. The magenta tick indicates the density in the simulated data sets for panel b. b) The dependence of the error on the SNR. The magenta tick indicates the SNR of the simulated data sets in panel a.
Fig. 6 Execution times for DEFCoN and wavelet-based segmentation. In both cases, the execution time scales linearly with the number of pixels in the image.
Fig. 7 a) A proportional-integral (PI) controller. b) The number of detected localizations per frame in an acquisition where the UV laser power was controlled by the PI controller. c) The number of detected localizations per frame where the UV laser was manually adjusted.
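A PI control law of the form P(t) = K_p e(t) + K_i ∫ e(t') dt' (see the Equations section below) can be sketched in a few lines. The following is a generic discrete-time implementation, not the paper's code; all names are illustrative.

```python
class PIController:
    """Generic discrete-time PI controller sketch; not the paper's implementation."""

    def __init__(self, kp, ki, set_point, dt):
        self.kp, self.ki = kp, ki      # proportional and integral gains
        self.set_point = set_point     # desired spot count per frame
        self.dt = dt                   # time between frames (s)
        self.integral = 0.0

    def update(self, measured_count):
        error = self.set_point - measured_count
        self.integral += error * self.dt          # rectangular approximation of the integral
        return self.kp * error + self.ki * self.integral  # new laser power command


# Illustrative usage: one control step per camera frame.
controller = PIController(kp=0.5, ki=0.1, set_point=30, dt=0.01)
power = controller.update(measured_count=25)
```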
Fig. 8 Construction of the self-tuning procedure for the PI controller.
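The self-tuning step reduces to evaluating the gain formulas listed in the Equations section below from a measured step response. A minimal sketch, assuming the step response yields a power change ΔP, a count change ΔN, a time constant τ, and a dead time t_d, with λ a user-chosen closed-loop time constant; how these quantities are measured is not shown here, and the names are illustrative.

```python
def tuned_gains(delta_p, delta_n, tau, t_dead, lam):
    """Compute PI gains from a measured excitation step response.

    Implements K_p = (delta_p / delta_n) * tau / (lam + t_dead)
    and K_i = K_p / tau, as listed in the Equations section.
    """
    k_p = (delta_p / delta_n) * tau / (lam + t_dead)
    k_i = k_p / tau
    return k_p, k_i
```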
Fig. 9 The local density estimates are made by computing the sum of the pixels in each subregion of a density map. (In this example, the subregions are 7 × 7 pixels; one such subregion is the red square on the left.) The maximum local count (the red square on the right) is the largest value found in the map of local density estimates.
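The computation in the caption above reduces to a sliding-window sum over the density map. A minimal NumPy sketch, assuming a 7 × 7 window as in the example; the function and variable names are illustrative.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def max_local_count(density_map, k=7):
    """Sum the density map over every k-by-k subregion and return the maximum."""
    windows = sliding_window_view(density_map, (k, k))  # shape (H-k+1, W-k+1, k, k)
    local_counts = windows.sum(axis=(-2, -1))           # map of local density estimates
    return local_counts.max()                           # maximum local count (MLC)
```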
Fig. 10 The average maximum local count (AMLC) as a heuristic for set point determination. a) The AMLC value directly determines the tradeoff between the degree of artifacts in the SMLM image (precision and recall) and the rate at which localizations are detected. b) SMLM images of a simulated microtubule network acquired at different AMLC values. AMLC values are in the upper left corner of each image. False positive localizations are in red. Scale bar: 1 µm.

Tables (1)

Table 1 Mean counting errors on real data sets.

Equations (9)

$$ l = l_{\mathrm{pixel}} + \gamma \, l_{\mathrm{count}} $$
$$ l_{\mathrm{pixel}} = \sum_{i,j} \left( \hat{d}_{i,j} - d_{i,j} \right)^{2} $$
$$ l_{\mathrm{count}} = \left( \sum_{i,j} \hat{d}_{i,j} - \sum_{i,j} d_{i,j} \right)^{2} $$
$$ I'_{i,j} = \frac{I_{i,j} - \min\!\left( I_{i,j} \right)}{\max\!\left( I_{i,j} \right) - \min\!\left( I_{i,j} \right)} $$
$$ \mathrm{CE} = \frac{\left| \hat{N} - N \right|}{N} $$
$$ P(t) = K_{p} \, e(t) + K_{i} \int_{0}^{t} e(t') \, \mathrm{d}t' $$
$$ K_{p} = \frac{\Delta P}{\Delta N} \, \frac{\tau}{\lambda + t_{d}} $$
$$ K_{i} = K_{p} \, \tau^{-1} $$
$$ \mathrm{MLC} = \max_{s \in S} \left( \sum_{(i,j) \in s} \hat{d}_{i,j} \right) $$
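As an illustration of the first three equations above, the composite training loss l = l_pixel + γ l_count could be written as a custom Keras loss. This is a sketch under the assumption of a standard (batch, H, W, 1) density-map tensor layout, with the value of γ left unspecified; it is not the published training code.

```python
import tensorflow as tf

def density_map_loss(gamma=1.0):
    """Sketch of l = l_pixel + gamma * l_count for density-map regression."""
    def loss(d_true, d_pred):
        # l_pixel: sum of squared per-pixel errors over each density map.
        l_pixel = tf.reduce_sum(tf.square(d_pred - d_true), axis=[1, 2, 3])
        # l_count: squared error between predicted and true total counts.
        l_count = tf.square(
            tf.reduce_sum(d_pred, axis=[1, 2, 3]) - tf.reduce_sum(d_true, axis=[1, 2, 3])
        )
        return l_pixel + gamma * l_count
    return loss
```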