
Fast, bias-free algorithm for tracking single particles with variable size and shape


Abstract

We introduce a fast and robust technique for single-particle tracking with nanometer accuracy. We extract the center-of-mass of the image of a single particle with a simple, iterative algorithm that efficiently suppresses background-induced bias in a simplistic centroid estimator. Unlike many commonly used algorithms, our position estimator requires no prior information about the shape or size of the tracked particle image and uses only simple arithmetic operations, making it appropriate for future hardware implementation and real-time feedback applications. We demonstrate it both numerically and experimentally, using an inexpensive CCD camera to localize 190 nm fluorescent microspheres to better than 5 nm.

©2008 Optical Society of America

1. Introduction

In single-particle tracking experiments using optical microscopy, fluorescent molecules, quantum dots, metal nanoparticles, and polymer microspheres are routinely localized on a 10 nm length scale, and in exceptional cases even on a 1 nm length scale, despite the much longer optical wavelength encoding that information. This ability to track the nanometer-scale motion of small objects with optical microscopy has greatly improved experimental access to nanoscale biological behavior [1, 2, 3, 4] and microscale-to-nanoscale colloid interactions [5, 6, 7].

In most cases, particle tracking is accomplished by analyzing a series of digital images obtained, for example, from the output of a CCD camera. However, the analysis of these images presents a challenging data processing task, and a correspondingly large body of research has been devoted to developing and testing particle-tracking software routines. When selecting an algorithm for extracting the underlying position of a particle from a noisy image, researchers have been forced to accept tradeoffs among accuracy, robustness, and speed. For example, it is generally accepted that the most accurate results can be achieved by fitting an image to a constrained (usually Gaussian) instrument resolution function [8, 9, 10]. Unfortunately, the computational complexity of such an algorithm renders it quite slow and therefore unsuitable for real-time or time-critical applications. Other numerical search algorithms offer much shorter execution times by incorporating prior knowledge of the image shape through masking [10] or spatial filtering [11]. Furthermore, a recently introduced algebraic solution for locating a two- or three-dimensional Gaussian from sparsely sampled brightness data provides an efficient particle localization algorithm with no numerical searching when the shape of the Gaussian function and the image background level are known [12, 13]. Various commercial tracking software packages have also been quantitatively evaluated, with emphasis on the care required to incorporate prior knowledge without introducing bias and errors in these automated packages [14]. However, all of these methods require foreknowledge of the particle shape and the image background, and they are not robust to variations in the size or shape of the tracked object that may arise when tracking an asymmetric object such as a nanorod or when a single particle diffuses into and out of focus.

Moreover, recently developed experimental techniques for manipulating nanoparticles using feedback control [15, 16, 17, 18, 19, 20] place stringent requirements on the speed and accuracy of particle-tracking routines. Because image processing routines are simply too slow, many particle-tracking feedback controllers use fast photodiodes rather than CCD cameras, sacrificing widefield spatial information for real-time control. However, real-time control of multiple, spatially-separated particles [21] demands an imaging method and will be directly enhanced by the development of simple, fast, and accurate algorithms for localizing individual particles within subsections of a larger image. Future experiments that combine nanometer-resolution particle localization with real-time feedback control will enable new techniques for monitoring and engineering structures at the nanoscale.

In this paper, we progress toward this goal of robust, real-time particle tracking by introducing a simple, fast estimator for extracting the center position of an object from an image corrupted by noise, pixelation, and a constant (unknown) background. Our algorithm is a refinement of the simple centroid or center-of-mass (CM) algorithm, which is computationally efficient but can only provide an unbiased estimate when a particle lies exactly at the center of the image. In the estimator algorithm developed here, we enforce this otherwise serendipitous arrangement by iteratively testing whether an object lies at the image center and subsequently refining the image window in order to better center the object. This procedure efficiently suppresses the estimator bias (exponentially in the number of iterations), resulting in a very fast, unbiased localization algorithm with sub-pixel accuracy. It requires few input parameters and makes no assumptions about the object shape, making it robust and effective for localizing objects with varying sizes and complex shapes in the presence of uncharacterized background noise. Furthermore, the algorithm is computationally simple and can be executed with a few lines of code. Its performance is comparable to, and in many cases exceeds, that of full nonlinear least-squares minimization, while its execution time is orders of magnitude shorter. For all of these reasons, our algorithm is a promising candidate for implementation in signal-processing hardware for real-time applications.

The paper is organized as follows. In section 2, we examine the error in the CM estimator in detail and show that this error can be estimated in real-time. In section 3, we define an iterative “Virtual Window” CM (VWCM) algorithm that exploits this fact to eliminate the average error in the CM estimator. In section 4, we present the results of extensive Monte Carlo simulations of the new estimator, and in section 5 we apply the VWCM in our own particle-tracking experiment. Finally, section 6 summarizes our results and provides an outlook for future applications and refinements.

2. Bias in the center-of-mass estimator

As mentioned in section 1, the center-of-mass (CM) estimator of a particle’s position is simple, very fast to execute, and provides an estimate with no assumptions about the shape or size of the underlying object. Unfortunately, it is strongly biased by any background signal and exhibits poor noise rejection outside the region of interest. In this section, we will derive the mean error, or bias, in the CM estimator and show that this error can itself be estimated in real-time.

With the output of a CCD camera in mind, we define an image to be a two-dimensional matrix $S$ with elements $S_{jk}$ representing the total number of counts in pixel $P_{jk}$ with width $\Delta$ centered at location $(x_{jk},y_{jk})$. [The pixel size $\Delta$ and coordinates $(x,y)$ are always considered in the object plane of the optical system, so that, for example, $\Delta$ is the actual CCD pixel size divided by the system magnification $M$.] $S$ represents the experimental data from a single shot of the experiment, e.g. a single CCD frame. In order to analyze, and later design, a position estimator algorithm, we must assume some underlying statistical model for such an image. For this purpose, we assume the signal from the particle is drawn from a Poisson distribution with spatially-dependent mean value $N_S(x,y)/\Delta^2$, which represents the point-spread function of the optical system convolved with the shape of the fluorescent object. This mean value is explicitly normalized by the pixel area $\Delta^2$ so that $N_S(x,y)$ represents a dimensionless number of counts. In order to calculate the mean number of counts in a particular pixel, averaged over an ensemble of individual images $S$, we must integrate $N_S(x,y)/\Delta^2$ over the area of that pixel, as in Eq. (1a) below.

For a general scenario, we can also consider a background signal drawn from a distribution with mean $N_B(x,y)/\Delta^2$ and (spatially-uncorrelated) variance $\sigma_B^2(x,y)/\Delta^2$. The noise term may represent scattered light or technical noise in the camera, with statistical properties (e.g. Poisson, Gaussian or other distribution) included in the definitions of $N_B$ and $\sigma_B^2$. Mathematically, we can now write the mean and covariance of an ensemble of frames $S$, each arising from the image of a particle at position $(x_0,y_0)$:

$$\langle S_{jk}\rangle=\iint_{P_{jk}}\frac{dx\,dy}{\Delta^{2}}\,\bigl[N_{S}(x-x_{0},y-y_{0})+N_{B}(x,y)\bigr] \tag{1a}$$

$$\langle S_{jk}S_{j'k'}\rangle=\langle S_{jk}\rangle\langle S_{j'k'}\rangle+\delta_{jj'}\delta_{kk'}\iint_{P_{jk}}\frac{dx\,dy}{\Delta^{2}}\,\bigl[N_{S}(x-x_{0},y-y_{0})+\sigma_{B}^{2}(x,y)\bigr]. \tag{1b}$$

The angle brackets $\langle\cdot\rangle$ denote the expectation value over an ensemble of images, while $\iint_{P_{jk}}$ denotes integration over the area of pixel $P_{jk}$, so that, for example, the mean count rate $\langle S_{jk}\rangle$ is just the integral of the spatially-varying average count rate over the pixel area. Without the background contribution, Eqs. (1a)–(1b) represent familiar Poisson counting statistics, where the mean, $\langle S_{jk}\rangle$, and variance, $\langle S_{jk}^{2}\rangle-\langle S_{jk}\rangle^{2}$, of the counts in pixel $P_{jk}$ are equal [22, 23]. Equations (1a)–(1b) give a prescription for calculating the statistics of the image $S$ that includes the effects of pixelation, image truncation, background noise, and counting statistics.
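To make the statistical model concrete, the following minimal Python sketch (ours, not part of the original paper) draws a single synthetic frame obeying Eqs. (1a)–(1b) for a Gaussian spot on a flat Poisson background. The Gaussian stand-in for $N_S$, the function name synthetic_frame, and all parameter values are illustrative assumptions; the pixel-area integral of Eq. (1a) is approximated on a sub-pixel grid.

import numpy as np

def synthetic_frame(n_pix=21, delta=123.0, x0=10.0, y0=-25.0, n_sig=5000.0,
                    n_bg=5000.0, sigma=200.0, oversample=8, rng=None):
    """One (n_pix x n_pix) frame of Poisson counts obeying Eqs. (1a)-(1b)."""
    rng = rng or np.random.default_rng()
    half = n_pix * delta / 2.0
    # Sub-pixel sample points: each pixel is divided into oversample^2 cells so
    # that summing cells approximates the pixel-area integral of Eq. (1a).
    s = (np.arange(n_pix * oversample) + 0.5) * (delta / oversample) - half
    xx, yy = np.meshgrid(s, s)
    rate = np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2.0 * sigma ** 2))
    rate *= n_sig / rate.sum()                   # spot carries n_sig counts on average
    rate = rate.reshape(n_pix, oversample, n_pix, oversample).sum(axis=(1, 3))
    rate += n_bg / n_pix ** 2                    # flat background, n_bg counts in total
    return rng.poisson(rate)                     # Poisson counting statistics, Eq. (1b)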

Now consider the CM estimator of the object’s $x$-position $x_0$:

$$\hat{x}_{\mathrm{CM}}=\frac{\sum_{jk}x_{jk}S_{jk}}{\sum_{jk}S_{jk}}. \tag{2}$$

While it is simple to calculate from the data, $\hat{x}_{\mathrm{CM}}$ is nevertheless a nonlinear function of the image $S$, making statistical calculations difficult. However, the statistics of a linearized approximation are accurate to order $\langle\mathcal{N}\rangle^{-3/2}$, where $\langle\mathcal{N}\rangle=\sum_{jk}\langle S_{jk}\rangle$ is the mean total number of counts in the image. The corresponding linearized approximation to Eq. (2) is

$$\hat{x}_{\mathrm{CM}}\approx\frac{1}{\langle\mathcal{N}\rangle}\sum_{jk}x_{jk}S_{jk},\qquad\langle\mathcal{N}\rangle=\sum_{jk}\langle S_{jk}\rangle. \tag{3}$$
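In code, the centroid estimator is a one-liner. The following transcription (ours) of Eq. (2) also fixes the coordinate convention assumed in the later sketches: pixel centers, with the array centered on the origin.

import numpy as np

def center_of_mass(S, delta=123.0):
    """Plain CM estimate (x, y) of Eq. (2); coordinates are pixel centers in nm."""
    n_rows, n_cols = S.shape
    x = (np.arange(n_cols) + 0.5) * delta - n_cols * delta / 2.0
    y = (np.arange(n_rows) + 0.5) * delta - n_rows * delta / 2.0
    total = S.sum()
    # Weight column (row) coordinates by the summed counts in each column (row).
    return (S.sum(axis=0) @ x) / total, (S.sum(axis=1) @ y) / total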

We can now calculate the mean and variance of the centroid estimator for an image $S_{jk}$ satisfying Eqs. (1a)–(1b):

$$\langle\hat{x}_{\mathrm{CM}}\rangle=\frac{1}{\langle\mathcal{N}\rangle}\sum_{jk}\left\{x_{jk}\iint_{P_{jk}}\frac{dx\,dy}{\Delta^{2}}\,\bigl[N_{S}(x-x_{0},y-y_{0})+N_{B}(x,y)\bigr]\right\} \tag{4}$$

$$\langle\hat{x}_{\mathrm{CM}}^{2}\rangle-\langle\hat{x}_{\mathrm{CM}}\rangle^{2}=\frac{1}{\langle\mathcal{N}\rangle^{2}}\sum_{jk}\left\{x_{jk}^{2}\iint_{P_{jk}}\frac{dx\,dy}{\Delta^{2}}\,\bigl[N_{S}(x-x_{0},y-y_{0})+\sigma_{B}^{2}(x,y)\bigr]\right\}. \tag{5}$$

Equations (4)–(5) allow us to calculate the statistics of the centroid estimator for any underlying image function $N_S(x,y)$ superimposed on a spatially-varying, Poisson- or Gaussian-distributed background. The mean and mean-square errors (bias and variance) resulting from pixelation, truncation, shot noise, and background noise can each be derived from these expressions. For more complicated systems, such as electron-multiplying CCD cameras, the mathematical form of the noise term [Eq. (1b)] must be modified to accommodate the signal-dependent noise introduced by on-chip gain. For the remainder of this paper, we will be concerned only with the mean value $\langle\hat{x}_{\mathrm{CM}}\rangle$, which determines the estimator bias; the noise (variance) terms are included for reference.

Let us now calculate the bias arising from a spatially-invariant background signal with $N_B(x,y)=N_B$. We leave the noise $\sigma_B^2(x,y)$ on this background level unspecified, as it does not enter a calculation of the bias. Introducing the shorthand notation,

$$N_{S}^{\Delta}(x_{jk}-x_{0},\,y_{jk}-y_{0})=\iint_{P_{jk}}\frac{dx\,dy}{\Delta^{2}}\,N_{S}(x-x_{0},y-y_{0})$$

and rearranging Eq. (4), we find that the average CM estimator error, the bias $B_{\mathrm{CM}}$, can be written as

$$B_{\mathrm{CM}}\equiv\langle\hat{x}_{\mathrm{CM}}\rangle-x_{0}=\frac{1}{\sum_{jk}N_{S}^{\Delta}(x_{jk}-x_{0},y_{jk}-y_{0})}\left[\sum_{jk}(x_{jk}-x_{0})\,N_{S}^{\Delta}(x_{jk}-x_{0},y_{jk}-y_{0})+N_{B}\sum_{jk}\bigl(x_{jk}-\langle\hat{x}_{\mathrm{CM}}\rangle\bigr)\right]. \tag{6}$$

It is important to note that, for $N_B(x,y)=N_B$, Eq. (4) and Eq. (6) are mathematically identical, and no approximations have yet been made. Let us now consider the different contributions to the bias in Eq. (6). The normalization factor $\sum_{jk}N_{S}^{\Delta}(x_{jk}-x_{0},y_{jk}-y_{0})$ represents the average number of detected counts from the particle itself and is only a weak function of $x_0$ and $y_0$ as long as the particle image is not strongly truncated at the edges of the array. We may then denote this term by $\langle\mathcal{N}_S\rangle$ and neglect its functional dependence on $(x_0,y_0)$. The first term in brackets on the right-hand side of Eq. (6) is a discretized analog of the continuous center-of-mass

$$\sum_{jk}(x_{jk}-x_{0})\,N_{S}^{\Delta}(x_{jk}-x_{0},y_{jk}-y_{0})\approx\iint\frac{dx\,dy}{\Delta^{2}}\,x\,N_{S}(x,y)=0. \tag{7}$$

The last integral is proportional to the $x$-coordinate of the center-of-mass of the image function $N_S(x,y)$, which we assume to be 0 [this condition can always be enforced through the definition of $N_S(x,y)$]. We assume that the particle is near the center of the image and is not significantly truncated at the edges of the array. Violations of the approximate equality in Eq. (7) then correspond to estimator bias arising from pixelation and truncation of the underlying image. Assuming these are negligible for now, we are left with the following expression for the estimator bias

$$B_{\mathrm{CM}}\approx\frac{\langle\mathcal{N}_{B}\rangle}{\langle\mathcal{N}_{S}\rangle}\bigl(\bar{x}-\langle\hat{x}_{\mathrm{CM}}\rangle\bigr) \tag{8}$$

where $\langle\mathcal{N}_S\rangle$ and $\langle\mathcal{N}_B\rangle$ are the average numbers of signal and background counts in the entire image, respectively, and $\bar{x}$ is the geometric center of the pixel array, i.e. the unweighted average $x$-coordinate. Equation (8) reveals that the estimator bias is proportional to the difference between the center of the pixel array $\bar{x}$ and the average estimate $\langle\hat{x}_{\mathrm{CM}}\rangle$; the constant of proportionality is the ratio of background to signal counts in the image. Recall that the estimator bias is unknown to the experimenter, since the underlying particle position $x_0$ is unknown. However, the difference between the estimate $\hat{x}_{\mathrm{CM}}$ and the center of the pixel array can be calculated in each shot of the experiment. In the algorithm described below, we exploit this fact to form an online estimate of the bias and then correct it by truncating the pixel window so as to center the particle in the image array.

Fig. 1. Cartoon illustration of the iterative VWCM algorithm operating on an image of a particle with a constant background. The background biases the center-of-mass (CM) estimate towards the center of the array. The first, biased centroid estimate (yellow) is offset by $\delta_{1,x}$ from the array center. At the second iteration, the window is truncated by an amount $2\delta_{1,x}$ along $x$ and a new centroid (blue) is calculated within this window. Where part of a pixel is truncated by the “virtual window,” its value is scaled proportionally to the relative area. At each iteration, the window is further adjusted until the center of the window and the CM estimate coincide, giving a bias-free estimate of the particle position.

3. Virtual window center-of-mass algorithm

In section 2, we derived the bias in the CM estimator $\hat{x}_{\mathrm{CM}}$ arising from a statistically constant background. In a realistic experimental scenario, this bias greatly limits the practical utility of the CM algorithm. When the background signal is sufficiently well characterized, its detrimental effects can be suppressed or eliminated through background subtraction, image thresholding, or both. However, these procedures depend critically on the choice of background levels and thresholds [9] and are unavailable in cases where the background is unknown, uncharacterized, or time-varying. In this section, we derive a more robust CM algorithm that requires no specific foreknowledge of the image to be analyzed but efficiently suppresses the bias arising from background levels.

Earlier, we found that the bias in the CM estimator $\hat{x}_{\mathrm{CM}}$ can itself be estimated in real time through Eq. (8). The central concept of our Virtual Window Center-of-Mass (VWCM) estimator is to use this information to modify the image window in order to center it on the object and eliminate the estimator bias. The procedure is iterative: at the $n$th iteration, we calculate the center-of-mass $\hat{x}_{\mathrm{CM}}^{(n)}$ and then modify the image array by eliminating a portion of the image near one edge, effectively shifting the geometric center $\bar{x}^{(n)}$ of the window. The update rule that defines our iterative algorithm (explained below) is to truncate the window at the $n$th iteration such that $\bar{x}^{(n+1)}=\hat{x}_{\mathrm{CM}}^{(n)}$.

To see how eliminating a portion of the image shifts the array center, consider eliminating a single row of pixels (width $\Delta$) from the edge of an array: the center of the resulting (rectangular) array is shifted by $\Delta/2$ as a result. If our image resolution were infinitely fine, we could translate the array center $\bar{x}$ by any desired amount by discarding arbitrarily small portions of the image at one edge. Since we do not have infinitely fine resolution in practice, we may still approximate a sub-pixel truncation of the array by weighting the pixels and pixel coordinates along one edge of the image. For example, if we wish to translate the array center by a small amount $\delta/2<\Delta/2$, we simply multiply the pixel intensities $S_{jk}$ along one edge by $1-\delta/\Delta$ and redefine the coordinates along that edge by $x_{jk}\rightarrow x_{jk}+\delta$. This procedure approximates the truncation of a region of width $\delta$ from the negative edge of the image, defining a virtual window shifted by $\delta/2$ as desired: $\bar{x}\rightarrow\bar{x}\pm\delta/2$. To complete the algorithm, the user sets two termination conditions, $\epsilon$ and $n_{\max}$, such that the algorithm terminates when $|\bar{x}^{(n+1)}-\bar{x}^{(n)}|/\Delta<\epsilon$ or the number of iterations exceeds $n_{\max}$. With these concepts, we can now precisely describe the VWCM algorithm, displayed graphically in Fig. 1:

Virtual window center-of-mass (VWCM) algorithm

1. Let the image $S^{(1)}=S$ and coordinates $(x_{jk}^{(1)},y_{jk}^{(1)})=(x_{jk},y_{jk})$ correspond to the raw data.

2. Beginning with $n=1$, calculate the center-of-mass $\hat{x}_{\mathrm{CM}}^{(n)}$ from the image $S^{(n)}$ and coordinates $x_{jk}^{(n)}$:

$$\hat{x}_{\mathrm{CM}}^{(n)}=\frac{\sum_{jk}x_{jk}^{(n)}S_{jk}^{(n)}}{\sum_{jk}S_{jk}^{(n)}}.$$

Repeat for $\hat{y}_{\mathrm{CM}}^{(n)}$.

3. Define a new image $S^{(n+1)}$ and new coordinate system $(x_{jk}^{(n+1)},y_{jk}^{(n+1)})$ by truncating the previous image such that

$$\bar{x}^{(n+1)}=\hat{x}_{\mathrm{CM}}^{(n)},\qquad\bar{y}^{(n+1)}=\hat{y}_{\mathrm{CM}}^{(n)}.$$

For subpixel shifts, use the virtual window procedure described above.

4. Iterate until $|\bar{x}^{(n+1)}-\bar{x}^{(n)}|/\Delta<\epsilon$ or $n=n_{\max}$.
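The complete procedure fits in a few dozen lines. The sketch below is our reference implementation of the listing above, not the authors' code: it tracks the virtual window as continuous edge coordinates on each axis, down-weights partially covered pixels by their remaining area fraction, and applies the coordinate redefinition from the text (mirroring it for cuts from the positive edge, which the text leaves implicit).

import numpy as np

def vwcm(S, delta=1.0, eps=1e-3, n_max=200):
    """VWCM estimate (x, y) for a nonnegative 2-D count array S."""
    S = np.asarray(S, dtype=float)
    n_rows, n_cols = S.shape
    # Pixel-center coordinates with the array centered on the origin.
    x = (np.arange(n_cols) + 0.5) * delta - n_cols * delta / 2.0
    y = (np.arange(n_rows) + 0.5) * delta - n_rows * delta / 2.0
    # Step 1: the virtual window initially spans the full array.
    x_lo, x_hi = x[0] - delta / 2.0, x[-1] + delta / 2.0
    y_lo, y_hi = y[0] - delta / 2.0, y[-1] + delta / 2.0

    def axis(coords, lo, hi):
        # Weight = fraction of each pixel remaining inside [lo, hi]; the
        # coordinate of a cut pixel is redefined as in the text (x -> x + cut).
        left, right = coords - delta / 2.0, coords + delta / 2.0
        overlap = np.clip(np.minimum(right, hi) - np.maximum(left, lo), 0.0, delta)
        cut_lo = np.clip(lo - left, 0.0, delta)
        cut_hi = np.clip(right - hi, 0.0, delta)
        return overlap / delta, coords + cut_lo - cut_hi

    prev = None
    for _ in range(n_max):
        wx, xc = axis(x, x_lo, x_hi)
        wy, yc = axis(y, y_lo, y_hi)
        W = np.outer(wy, wx) * S                   # virtually windowed image S^(n)
        x_cm = (W * xc[None, :]).sum() / W.sum()   # step 2: windowed centroid
        y_cm = (W * yc[:, None]).sum() / W.sum()
        # Step 4: stop once the estimate (hence the window center) stops moving.
        if prev and max(abs(x_cm - prev[0]), abs(y_cm - prev[1])) / delta < eps:
            break
        # Step 3: truncate one edge by twice the offset, so that the new window
        # center coincides with the current estimate.
        dx = x_cm - 0.5 * (x_lo + x_hi)
        x_lo, x_hi = (x_lo + 2 * dx, x_hi) if dx > 0 else (x_lo, x_hi + 2 * dx)
        dy = y_cm - 0.5 * (y_lo + y_hi)
        y_lo, y_hi = (y_lo + 2 * dy, y_hi) if dy > 0 else (y_lo, y_hi + 2 * dy)
        prev = (x_cm, y_cm)
    return x_cm, y_cm

Running this sketch on background-corrupted frames (after subtracting the array minimum, as described in section 4) should drive the mean error toward zero at roughly the geometric rate of Eq. (9) below.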

The update rule $\bar{x}^{(n+1)}=\hat{x}_{\mathrm{CM}}^{(n)}$ corresponds to a shift of the image that centers the array on the current estimate of the particle position. Denoting the bias in the $n$th iteration by $B_{\mathrm{CM}}^{(n)}=\langle\hat{x}_{\mathrm{CM}}^{(n)}\rangle-x_0$, we find from Eq. (8) and this update rule

$$B_{\mathrm{CM}}^{(n)}=\left(\frac{1}{1+\langle\mathcal{N}_{S}\rangle/\langle\mathcal{N}_{B}\rangle}\right)^{\!n-1}B_{\mathrm{CM}}^{(1)} \tag{9}$$

where $\langle\mathcal{N}_S\rangle$ and $\langle\mathcal{N}_B\rangle$ are the average numbers of counts arising from the signal and background, respectively. Equation (9) shows that the bias in the VWCM algorithm tends exponentially to zero with the number of iterations $n$. In fact, the signal-to-background ratio $\langle\mathcal{N}_S\rangle/\langle\mathcal{N}_B\rangle$ changes at each iteration, as the background and signal are truncated differently during the image-shifting procedure. However, the VWCM is designed to truncate more background than signal, so that $\langle\mathcal{N}_S\rangle/\langle\mathcal{N}_B\rangle$ increases at each iteration; Eq. (9) is therefore a conservative estimate, providing a lower bound on the convergence rate. Regardless, the correction is small, and we have found Eq. (9) to give an accurate prediction of the bias-suppression rate for a wide range of parameters. The convergence rate depends on the signal-to-background ratio $\langle\mathcal{N}_S\rangle/\langle\mathcal{N}_B\rangle$, but the algorithm requires no knowledge of this quantity. In fact, the algorithm requires no input beyond the image $S$, the pixel coordinates $(x_{jk},y_{jk})$, and the termination conditions $\epsilon$ and $n_{\max}$. Finally, note that the algorithm requires only simple arithmetic operations on the image and is therefore very fast, particularly when the signal-to-background ratio gives a satisfactory convergence rate.
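For completeness, the step from Eq. (8) to Eq. (9) can be made explicit. Writing $r=\langle\mathcal{N}_B\rangle/\langle\mathcal{N}_S\rangle$ and, per the approximations stated above, neglecting both fluctuations of $\bar{x}^{(n)}$ about $\langle\hat{x}_{\mathrm{CM}}^{(n-1)}\rangle$ and the iteration dependence of $r$, Eq. (8) at iteration $n$ gives

$$B_{\mathrm{CM}}^{(n)}=r\bigl(\bar{x}^{(n)}-\langle\hat{x}_{\mathrm{CM}}^{(n)}\rangle\bigr)=r\bigl(\bar{x}^{(n)}-x_{0}\bigr)-r\,B_{\mathrm{CM}}^{(n)}\;\;\Longrightarrow\;\;B_{\mathrm{CM}}^{(n)}=\frac{r}{1+r}\bigl(\bar{x}^{(n)}-x_{0}\bigr)=\frac{B_{\mathrm{CM}}^{(n-1)}}{1+\langle\mathcal{N}_{S}\rangle/\langle\mathcal{N}_{B}\rangle},$$

where the last equality uses the update rule $\bar{x}^{(n)}=\langle\hat{x}_{\mathrm{CM}}^{(n-1)}\rangle$, so that $\bar{x}^{(n)}-x_{0}=B_{\mathrm{CM}}^{(n-1)}$; iterating down to $n=1$ yields Eq. (9).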

4. Numerical simulations

We performed extensive numerical simulations in order to confirm the predicted features of the VWCM algorithm and to compare its performance with that of other algorithms. For this test, we implemented four algorithms in standard numerical analysis software: the Gaussian fit, the Gaussian mask [10], the CM, and the VWCM. We included the Gaussian fit because it is generally assumed to provide superior accuracy, and the Gaussian mask and CM because they are among the fastest-executing algorithms and therefore the most promising for real-time implementation.

For the Gaussian fit, we used a function of the form

$$f(x,y)=A\exp\!\left[-\frac{(x-x_{0})^{2}}{2\sigma_{x}^{2}}-\frac{(y-y_{0})^{2}}{2\sigma_{y}^{2}}\right]+B$$

where $x_0$, $y_0$, $\sigma_x$, $\sigma_y$, $A$, and $B$ were the fit parameters. For both the Gaussian mask (see Ref. [10] for details) and the VWCM, we used termination conditions $\epsilon=10^{-3}$ and $n_{\max}=200$. Finally, we preprocessed every image array $S$ passed to the algorithms by subtracting the minimum value from the entire array. This simple procedure ensures that the counts in each pixel are nonnegative (a prerequisite for the VWCM algorithm) and removes spuriously large offsets, which severely degrade the CM algorithm, make the Gaussian fit more sensitive to its initial fit conditions, and require more iterations to reach convergence in the VWCM.
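For comparison purposes, here is a minimal sketch (ours, not the implementation used in the paper) of the Gaussian-fit baseline: a six-parameter least-squares fit of the model above using scipy.optimize.curve_fit. The initial-guess heuristics are our own assumptions.

import numpy as np
from scipy.optimize import curve_fit

def gaussian_fit(S, delta=123.0):
    """Least-squares fit of the six-parameter model; returns the fitted (x0, y0)."""
    n_rows, n_cols = S.shape
    x = (np.arange(n_cols) + 0.5) * delta - n_cols * delta / 2.0
    y = (np.arange(n_rows) + 0.5) * delta - n_rows * delta / 2.0
    xx, yy = np.meshgrid(x, y)

    def model(coords, A, x0, y0, sx, sy, B):
        xg, yg = coords
        return (A * np.exp(-(xg - x0) ** 2 / (2 * sx ** 2)
                           - (yg - y0) ** 2 / (2 * sy ** 2)) + B).ravel()

    # Crude initial guesses (our heuristic): brightest pixel for (x0, y0),
    # one pixel width for the widths, min/max counts for B and A.
    j, k = np.unravel_index(np.argmax(S), S.shape)
    p0 = [float(S.max() - S.min()), x[k], y[j], delta, delta, float(S.min())]
    popt, _ = curve_fit(model, (xx, yy), S.ravel().astype(float), p0=p0)
    return popt[1], popt[2]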

A graphical summary of our simulations is shown in Fig. 2. In each case, we generated underlying count rate functions $N_S(x,y)$ corresponding to fluorescent objects with varying brightness, shape, and position, as described in the figure caption. We then integrated $N_S(x,y)$ over the pixel coordinates to generate the pixelated rate $N_S^{\Delta}(x_{jk},y_{jk})$. For each image realization, an array of Poisson-distributed random numbers was generated based on this rate. Finally, we added constant Poisson-distributed background noise, with a rate determined by the user-defined signal-to-background ratio $\langle\mathcal{N}_S\rangle/\langle\mathcal{N}_B\rangle$. The pixel size was taken to be $\Delta=123$ nm, corresponding to our experiment (see section 5). For each type of object, we varied the position across a square grid corresponding to one pixel, generating a set of 1000 images at each position. The underlying object position was estimated for each image using the four algorithms, and each resulting distribution is plotted in Fig. 2 as a circle centered at the mean estimated position with a radius of one standard deviation ($1\sigma$).
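A compact harness of the following form (ours) reproduces the structure of this test; the pixel-center-sampled Gaussian spot is a simplified stand-in for the dipole point-spread functions of Fig. 2, adequate when the spot is well sampled.

import numpy as np

DELTA = 123.0  # object-plane pixel size in nm, as in the text

def frame(x0, y0, n_pix=21, n_sig=5000.0, n_bg=5000.0, sigma=200.0, rng=None):
    """One Poisson frame: pixel-center-sampled Gaussian spot plus flat background."""
    rng = rng or np.random.default_rng()
    c = (np.arange(n_pix) + 0.5) * DELTA - n_pix * DELTA / 2.0
    xx, yy = np.meshgrid(c, c)
    spot = np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2.0 * sigma ** 2))
    return rng.poisson(n_sig * spot / spot.sum() + n_bg / n_pix ** 2)

def evaluate(estimator, n_images=200):
    """Bias and scatter of the x-estimate over a sub-pixel grid of true positions."""
    for x0 in np.linspace(-DELTA / 2, DELTA / 2, 5):
        est = []
        for _ in range(n_images):
            img = frame(x0, 0.0).astype(float)
            img -= img.min()                      # preprocessing step described above
            est.append(estimator(img)[0])
        est = np.asarray(est)
        print(f"x0 = {x0:6.1f} nm   bias = {est.mean() - x0:6.1f} nm   "
              f"std = {est.std():4.1f} nm")

Passing the center_of_mass or vwcm sketches from the earlier sections as the estimator should reproduce the qualitative contrast of Fig. 2: a large, position-dependent bias for the plain CM and a strongly suppressed bias for the VWCM.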

Note that the Gaussian fit requires an initial parameter set that strongly affects its convergence and accuracy, while the Gaussian mask requires a mask size and an initial position guess to execute. For each particular image, the Gaussian mask’s performance can be optimized by adjusting the mask size. Because robustness and automation are key performance objectives for our algorithm development, in Fig. 2 we optimized the Gaussian mask for the tightly focused image and then used that mask shape for the remaining images without further “by-hand” adjustment. In contrast to the Gaussian fit and mask algorithms, the CM and VWCM accept no input other than the image matrix $S$.

Fig. 2. Simulation results showing the bias and localization accuracy achieved for 21×21 pixel images using the Gaussian fit (GF), Gaussian mask (GM), CM, and VWCM algorithms. For each image, the underlying position was varied over a grid of positions spanning the center pixel ($\Delta=123$ nm, dashed squares). The resulting distribution of estimated positions is displayed as a circle centered at the mean with $1\sigma$ radius as defined in the text. “N.C.” denotes instances when the Gaussian fit and mask algorithms either failed to converge or had errors so large that the resulting $1\sigma$ circles exceeded the pixel. Image details (top row): the first four images represent the (non-paraxial) point-spread function of a dipole emitter [24] with wavelength $\lambda=550$ nm, at depths $z=0$ nm, 500 nm, 750 nm, and 1000 nm, imaged through a microscope with magnification $M=60$ and numerical aperture 1.2. The final image is a simple rod shape. Each image has $\langle\mathcal{N}_S\rangle=\langle\mathcal{N}_B\rangle=5000$ photons.

Fig. 3. Positions of a single 190 nm fluorescent microparticle stepped in 100 nm increments, as estimated by the Gaussian fit (GF, red), CM (green), and VWCM (blue). Results are plotted as in Fig. 2, but with $2\sigma$ radii for better visibility. Note the extreme distortion introduced by bias in the CM algorithm. As a typical example, we quote the following results for the upper-left position: the bias along $x$ (measured from the Gaussian fit estimate) $B_x$, the standard deviation along $x$ ($\sigma_x$), and the algorithm execution time ($T$). GF: $B_x=0$ nm, $\sigma_x=1.0$ nm, $T=66$ ms. CM: $B_x=127$ nm, $\sigma_x=0.6$ nm, $T=0.2$ ms. VWCM: $B_x=-1.3$ nm, $\sigma_x=2.2$ nm, $T=2.7$ ms.

Overall, the simulations show that the VWCM algorithm is fast, accurate, and robust. None of the other algorithms we investigated shared this combination of properties: the Gaussian fit has excellent accuracy and precision but is very slow and fails to converge in some cases; the Gaussian mask is both fast and accurate for a known, nearly Gaussian image shape, but its performance is significantly degraded when the mask is ill-suited to the object; the CM algorithm is the fastest but exhibits a severe bias towards the geometric center of the image array, as shown in Fig. 2 and predicted by Eq. (8). The VWCM completely eliminates this bias in the CM algorithm and performs robustly, with little variation in accuracy over a variety of underlying object shapes and sizes. It is less accurate than the Gaussian mask and Gaussian fit for tightly focused images within a large field, but exhibits comparable, and in some cases dramatically improved, accuracy for images that deviate from a Gaussian shape. Its execution time is typically a few times longer than that of the CM (corresponding to the number of iterations $n$ required to reach convergence), but 2 to 10 times shorter than the Gaussian mask and 100 to 1000 times shorter than the Gaussian fit, even with initial fit parameters chosen favorably to minimize “runaway” unstable cases that do not converge at all.

Fig. 4. VWCM position estimates (blue squares) for a binary aggregate of two microparticles stepped in 100 nm increments parallel to the image plane and 1 µm increments out of the focal plane. Approximately 30 images were taken at each position. Error bars represent one standard deviation in the estimated positions and are inside the data points in some cases. The standard deviation along $x$ ranged between 4.0 nm and 5.6 nm for this series. The red circles are a guide to the eye, spaced at 100 nm intervals.

5. Experimental results

In order to test the VWCM algorithm in a realistic experimental setting, we tracked the motion of fluorescent microparticles using digital video microscopy. Our samples consisted of 190 nm diameter fluorescent dye-labeled polystyrene microspheres dispersed and dried (immobilized) on a glass cover slip. The sample was illuminated by a 488 nm solid-state laser, and fluorescence was collected through a water-immersion objective (magnification 60x, NA 1.2) and separated from the excitation by a dichroic filter. Images were obtained with a CCD camera operating at 30 frames per second. Using a three-axis piezoelectric stage, we displaced particles with nanometer precision, and for each image we located the object using the four algorithms discussed in section 4.

In Fig. 3, we show the distribution of estimated particle positions when a single in-focus particle was scanned in 100 nm increments over a 5×5 grid. At each point, approximately 30 images were captured, and the resulting estimated position distributions are plotted as circles with $2\sigma$ radius (for clarity). This type of data, consisting of in-focus images within a large field of view, represents a “best-case” scenario for the Gaussian fit and mask algorithms (in both accuracy and execution time), since the point-spread function in this case is well approximated by a Gaussian of constant shape. The data in the figure show that the VWCM eliminates the bias in the CM and gives position estimates commensurate with the Gaussian fit and mask. The latter algorithms are roughly two times more accurate than the VWCM for this in-focus case, but the VWCM executes faster and requires no input parameters.

To see how the VWCM performs on images of a more complex nature, we captured images of an asymmetric aggregate of (most likely) two microspheres as it was moved out of the focal plane of the microscope. This asymmetric, extended object was moved in 100 nm increments over five steps while simultaneously being defocused by 1 µm at each step. The resulting images and position estimates are shown in Fig. 4. Despite the complicated, asymmetric particle shape, the VWCM algorithm tracks the particle motion with high fidelity. For data of this type, the initial parameters for the Gaussian fit and the mask size for the Gaussian mask algorithm would need to be tailored to each image in order to achieve satisfactory tracking. In contrast, the VWCM requires no adjustment for these (or any other) images.

6. Conclusions

In this paper, we described a new algorithm for finding the center of mass of a compact image while suppressing the bias due to a flat background. This VWCM algorithm iteratively centers the particle in a “virtual window” where there is no bias. We demonstrated the VWCM both numerically and experimentally, confirming its predicted simplicity and robustness. It is well-suited for real-time applications in which particles of various sizes and shapes will be tracked simply, robustly and accurately. In the future, we hope to extend our virtual-window method to treat images with a sloped (non-constant) background.

Acknowledgements

The authors gratefully acknowledge Peter Carmichael for stimulating discussions. A. B. is supported by the National Research Council.

References and links

1. M. J. Saxton and K. Jacobson, “Single-particle tracking: applications to membrane dynamics,” Annu. Rev. Biophys. Biomol. Struct. 26, 373–399 (1997).

2. A. Yildiz, J. N. Forkey, S. A. McKinney, T. Ha, Y. E. Goldman, and P. R. Selvin, “Myosin V walks hand-over-hand: Single fluorophore imaging with 1.5-nm localization,” Science 300, 2061–2065 (2003).

3. X. Michalet, F. F. Pinaud, L. A. Bentolila, J. M. Tsay, S. Doose, J. J. Li, G. Sundaresan, A. M. Wu, S. S. Gambhir, and S. Weiss, “Quantum Dots for Live Cells, in Vivo Imaging, and Diagnostics,” Science 307, 538–544 (2005).

4. D. Weihs, T. G. Mason, and M. A. Teitell, “Bio-Microrheology: A Frontier in Microrheology,” Biophys. J. 91, 4296–4305 (2006).

5. P. Bahukudumbi and M. A. Bevan, “Imaging energy landscapes with concentrated diffusing colloidal probes,” J. Chem. Phys. 126, 244702 (2007).

6. H.-J. Wu, W. Everett, S. Anekal, and M. Bevan, “Mapping Patterned Potential Energy Landscapes with Diffusing Colloidal Probes,” Langmuir 22, 6826–6836 (2006).

7. S. K. Sainis, V. Germain, and E. R. Dufresne, “Statistics of Particle Trajectories at Short Time Intervals Reveal fN-Scale Colloidal Forces,” Phys. Rev. Lett. 99, 018303 (2007).

8. N. Bobroff, “Position measurement with a resolution and noise-limited instrument,” Rev. Sci. Instrum. 57, 1152 (1986).

9. M. Cheezum, W. Walker, and W. Guilford, “Quantitative Comparison of Algorithms for Tracking Single Fluorescent Particles,” Biophys. J. 81, 2378–2388 (2001).

10. R. Thompson, D. Larson, and W. Webb, “Precise Nanometer Localization Analysis for Individual Fluorescent Probes,” Biophys. J. 82, 2775–2783 (2002).

11. J. Crocker and D. Grier, “Methods of Digital Video Microscopy for Colloidal Studies,” J. Colloid Interface Sci. 179, 298–310 (1996).

12. S. B. Andersson, “Position estimation of fluorescent probes in a confocal microscope,” in Proceedings of IEEE Conference on Decision and Control (IEEE, 2007), pp. 2445–2450.

13. T. Sun and S. Andersson, “Precise 3-D localization of fluorescent probes without numerical fitting,” in Proceedings of IEEE Annual International Conference of the Engineering in Medicine and Biology Society (IEEE, 2007), pp. 4181–4184.

14. B. Carter, G. Shubeita, and S. Gross, “Tracking single particles: a user-friendly quantitative evaluation,” Phys. Biol. 2, 60–72 (2005).

15. V. Levi, Q. Ruan, and E. Gratton, “3-D particle tracking in a two-photon microscope. Application to the study of molecular dynamics in cells,” Biophys. J. 88, 2919–2928 (2005).

16. A. J. Berglund and H. Mabuchi, “Tracking-FCS: Fluorescence Correlation Spectroscopy of Individual Particles,” Opt. Express 13, 8069–8082 (2005).

17. M. Armani, S. Chaudhary, R. Probst, and B. Shapiro, “Using feedback control and micro-fluidics to steer individual particles,” in Proceedings of IEEE Conference on Micro Electro Mechanical Systems (IEEE, 2005), pp. 855–858.

18. A. E. Cohen and W. E. Moerner, “Method for Trapping and Manipulating Nanoscale Objects in Solution,” Appl. Phys. Lett. 86, 093109 (2005).

19. H. Cang, C. M. Wong, C. S. Xu, A. H. Rizvi, and H. Yang, “Confocal three dimensional tracking of a single nanoparticle with concurrent spectroscopic readout,” Appl. Phys. Lett. 88, 223901 (2006).

20. K. McHale, A. J. Berglund, and H. Mabuchi, “Quantum dot photon statistics measured by three-dimensional particle tracking,” Nano Lett. 7, 3535–3539 (2007).

21. M. Armani, S. Chaudhary, R. Probst, and B. Shapiro, “Using feedback control of microflows to independently steer multiple particles,” IEEE J. Microelectromech. Syst. 15, 945–956 (2006).

22. C. W. Gardiner, Handbook of Stochastic Methods for Physics, Chemistry and the Natural Sciences, 2nd ed. (Springer-Verlag, 1985).

23. N. G. van Kampen, Stochastic Processes in Physics and Chemistry (Elsevier Science Pub. Co., 2001).

24. L. Novotny and B. Hecht, Principles of Nano-Optics (Cambridge University Press, 2006).
