
Position and orientation estimation of fixed dipole emitters using an effective Hermite point spread function model


Abstract

We introduce a method for determining the position and orientation of fixed dipole emitters based on a combination of polarimetry and spot shape detection. A key element is an effective Point Spread Function model based on Hermite functions. The model offers a good description of the shape variations with dipole orientation and polarization detection channel, and provides computational advantages over the exact vectorial description of dipole image formation. The realized localization uncertainty is comparable to the free dipole case in which spots are rotationally symmetric and can be well modeled with a Gaussian. This result holds for all dipole orientations, for all practical signal levels, and for defocus values within the depth of focus, implying that the massive localization bias for defocused emitters with tilted dipole axis found with Gaussian spot fitting is eliminated.

© 2012 Optical Society of America

1. Introduction

The topic of this paper is the role of emitter dipole orientation in localization microscopy techniques such as PALM, STORM, or GSDIM [1–5]. In these techniques, emitters can switch between off and on-states such that only a sparse subset of all emitters is in the on-state for each image frame of a time lapse series. The locations of all emitters can then be determined from the sequence of image frames, each of which has a randomly different sparse subset of active emitters. The localization uncertainty can be well below the diffraction limit, depends mainly on the photon count, and typically amounts to several tens of nanometers. In many cases the emitter can rotate or change its conformation freely during the lifetime of the excited state, typically a few ns. Over the many excitation-emission cycles comprising an image frame of typically 10 ms, an average over randomly distributed emission dipole orientations is then observed. In other cases the single emitter cannot rotate freely, such as in solid-state or cryofrozen samples, or for emitters immobilized at a surface or in a matrix, or rigidly bound to a macromolecular complex. In those cases the fixed dipole orientation needs to be taken into account in the analysis, as it affects the observed spot shape.

In a previous paper [6] we investigated the role of dipole orientation and classical optical aberrations on the localization accuracy of spot fitting with a Gaussian Point Spread Function (PSF) model. One of the key findings was that the combination of a tilted dipole axis and symmetric aberrations such as defocus or spherical aberration gives rise to a very large localization bias of tens of nanometers. Similar conclusions have been obtained by Engelhardt and co-workers [7], and previously, in the context of Total Internal Reflection Fluorescence (TIRF) imaging, by Enderlein and co-workers [8]. The results of simulated localizations of a tilted dipole emitter with defocus are shown in Fig. 1 in order to illustrate the problem. Clearly, the variance of the fit is small and about equal to the Cramér-Rao Lower Bound (CRLB). There is, however, a bias of about 50 nm, which is unacceptably high.


Fig. 1 Cross-sections of the PSF for a tilted dipole with defocus equal to the diffraction limit (left, (a)), and scatter plot of simulated localizations for a fixed dipole emitter with tilted dipole axis and defocus equal to the diffraction limit (corresponding to an axial object displacement of about 0.15 μm for the parameters assumed, NA_ob = 1.25 and λ = 500 nm) using a Gaussian PSF (right, (b)). The found positions, the 1σ confidence level, and the Gaussian Cramér-Rao Lower Bound (CRLB) are plotted. The localization was done using Maximum Likelihood Estimation (MLE) with a Gaussian PSF, taking 500 detected signal photons and 25 background photons, distributed over an 11×11 pixel Region Of Interest (ROI) according to Poisson statistics.


The first goal of the current paper is to solve this bias problem for fixed dipole emitters with a tilted dipole axis. To that end the orientation of the dipole emitter must be measured. There are basically two ways to deduce dipole orientation. The first method is based on the use of polarization optics in the detection light path. Gould et al. [9] split the image into two sub-images via a Polarizing Beam Splitter (PBS) for measuring the rotational anisotropy (via cross-polarized illumination-imaging). Pavani et al. [10] also use a PBS in the detection light path for a polarization contrast imaging modality. An accurate measurement of the full polarization, and thereby of both the polar and azimuthal dipole angle, requires a more sophisticated polarimeter architecture, e.g. as proposed in [11], or one of the architectures described in [12]. In particular the Azzam polarimeter [13], in which the image is split into four sub-images, seems a suitable option. The second method to determine dipole orientation is via spot shape information. The root cause of the tilted dipole bias problem is that the spot of a tilted dipole with a little defocus cannot be reliably distinguished from the spot of a horizontal dipole with a position offset. A differentiation in shape can be accomplished by defocused image acquisition. Several papers have appeared on the measurement of dipole orientation using spot shape variations of defocused images of single dipole emitters [14, 15]. Extensions of this idea to measure both orientation and position have appeared as well. Aguet and coworkers determine position and orientation from defocused images by including a steerable filter approach for solving the orientation variables into a Least Mean Squares (LMS) fit for the position [16]. Another study of combined position-orientation measurement has been presented by Mortensen and co-workers [17], who consider methods that apply to in-focus TIRF imaging. Both papers contain results for images with relatively low noise levels (corresponding to photon counts which we estimate to be of the order of 3 × 10³ [16] to 3 × 10⁴ [17]). It is not clear how these methods perform for more practical noise levels (corresponding to, say, 200–1000 photons per emitter per acquired image). In addition, the extent to which these methods suffer from the tilted dipole bias problem is not fully clarified.

A common denominator of all of the above work is the use of realistic vectorial PSF models, taking all effects of high NA and polarization into account [18, 19]. A major drawback of the use of such models in an iterative procedure to fit position and orientation is their computational inefficiency. This stands in stark contrast to the popular Gaussian PSF model, which works fine for freely and rapidly rotating dipoles [6] and can be used for real-time position estimation when programmed on parallel processing platforms such as graphics cards [20]. An additional, albeit minor, drawback of the full-fledged PSF models is that asymmetric aberrations, such as astigmatism for the purpose of 3D-imaging, cannot be easily introduced into the model.

In this paper we propose a method for determining position and orientation with two novel ingredients. First, we combine polarization and spot shape information to determine orientation, rather than rely on a single one of these routes. In particular, we propose the use of an Azzam polarimeter in the detection light path. We will show that this gives orientational information from polarization as well as from hitherto unreported spot shape variations. Second, we propose an effective PSF model based on Hermite functions. These provide a straightforward generalization of the isotropic Gaussian PSF model for describing the essentials of spot shape variations with dipole orientation and with the polarization branch in the detection light path. We show that the proposed method (i) solves the tilted dipole bias problem, (ii) has a defocus tolerance which is at least equal to Maréchal’s diffraction limit, and (iii) works well at moderate photon counts corresponding to usually encountered experimental conditions.

The paper is organized as follows. In section 2 the PSFs for imaging through the Azzam polarimeter are presented, and section 3 introduces the effective Hermite PSF model. The unknown parameters of the model are found by MLE fitting, described in section 4. Results are presented in section 5, conclusions and an outlook in section 6. A number of technical details are described in three appendices.

2. The Point Spread Functions in polarization imaging

A possible experimental implementation of the Azzam polarimeter is shown in Fig. 2. The beam captured by the objective is split into two parts, the polarization of one part is rotated over π/4, and then both beams are split in two by a PBS. The four images are captured on the four quadrants of a single EMCCD. Standard fields of view (512×512 pixels) are large enough to accommodate four images. Such a setup is quite akin to commonly used ‘Optosplit’ setups. Not drawn in the figure are optical path length compensation plates for making the defocus in all channels equal.


Fig. 2 Schematic view of a light path for simultaneous acquisition of four polarization channels on a single camera based on the Azzam polarimeter architecture. Light from emitter E is captured by lens L1 and imaged by lens L2 onto the camera. In the imaging branch the beam is split into two equal parts with a normal beam splitter BS. The two branches are color coded for the sake of clarity; the color does not indicate wavelength. The ‘bottom’ (red) branch passes through a polarization rotating (over π/4) component R. Both the ‘top’ (blue) and ‘bottom’ (red) branches pass a polarizing beam splitter (PBS) splitting each branch into two orthogonally polarized parts. The horizontal (x) and vertical (y) polarized beams of both branches are directed towards the camera (EMCCD) via three mirrors M1, M2, and M3. The four channels Ix, Iy, Ix′, and Iy′ are projected onto the four quadrants of the camera, thus generating four images of the emitter E.


Polarization rotation over π/4 can be achieved in a number of ways, for example with a quartz waveplate with fast axis along the optical axis (thus using only the optical activity, not the birefringence), with a twisted liquid crystal waveplate operating in the Mauguin regime, or with a set of two λ/2-plates, one with fast axis oriented along the x-axis and the other with fast axis oriented at π/8 with the x-axis. A different arrangement for producing the four polarization images can be made using a switchable polarization rotator. For example, placing a first fixed λ/2-plate and a second fast switchable λ/2-plate, such as a Ferroelectric Liquid Crystal based shutter, in front of the PBS allows the π/4 rotated polarization pairs of images to be recorded consecutively rather than in parallel. Presumably, other polarization optical schemes for reaching the same goal can be invented too.
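As an illustration of how the four channel signals relate to the transverse field components, the following minimal Jones-calculus sketch (ours, not part of the experimental description; it assumes an ideal, lossless 50/50 beam splitter and perfect polarization optics, and follows the sign convention of the rotated components used below) computes the four intensities for a given image-plane field:

```python
import numpy as np

def azzam_channel_intensities(E):
    """Illustrative sketch: intensities in the four channels x, y, x', y' of the
    polarimeter of Fig. 2 for a transverse image-plane Jones vector E = (Ex, Ey)."""
    Ex, Ey = E
    chi = np.pi / 4
    # components along the polarizer axes rotated over chi (second branch)
    Exr = np.cos(chi) * Ex + np.sin(chi) * Ey
    Eyr = -np.sin(chi) * Ex + np.cos(chi) * Ey
    # factor 1/2 for the 50/50 beam splitter feeding the two branches
    return {
        "Ix": 0.5 * np.abs(Ex) ** 2,
        "Iy": 0.5 * np.abs(Ey) ** 2,
        "Ix'": 0.5 * np.abs(Exr) ** 2,
        "Iy'": 0.5 * np.abs(Eyr) ** 2,
    }

# example: field of a dipole polarized along the x-axis
print(azzam_channel_intensities((1.0, 0.0)))
```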

The four PSFs corresponding to the four sub-images have a distinct shape depending on the emitter dipole orientation. These shapes are analyzed with the following formal development, which closely follows the notation and conventions of our previous paper [6]. For the sake of completeness, a summary of all definitions and conventions on coordinate scaling, and of the basic equations of dipole image formation, is given in appendix A. First we consider the pair of polarization images without π/4 polarization rotation. The electric field in the image plane (on the detector) at scaled position u⃗, originating from a dipole at scaled position u⃗_d with dipole oriented along the unit vector d⃗ = (d_x, d_y, d_z), is given by:

E_{\mathrm{im},j}(\vec{u},\vec{u}_d) = E_0 \sum_{k=x,y,z} w_{jk}(\vec{u}-\vec{u}_d)\, d_k,
for j = x,y, where E0 is the amplitude, and where the functions wjk (u⃗) are the Fourier transforms of the pupil function times a matrix of angular functions. If there are no aberrations or only azimuthally symmetric aberrations (defocus, spherical aberration) the electric field can be expressed in terms of the three Richards and Wolf type of functions Fk (k = 0,1,2) defined in appendix A. It appears that the contribution from F2 is generally much smaller than the contribution from the other two functions. It is therefore a reasonable approximation to neglect this term. The electric field then turns out to be:
E_{\mathrm{im},x}(u,\psi) = E_0\left[F_0(u)\, d_x + i F_1(u)\cos\psi\, d_z\right],
E_{\mathrm{im},y}(u,\psi) = E_0\left[F_0(u)\, d_y + i F_1(u)\sin\psi\, d_z\right],
where polar coordinates (u, ψ) are used in the image plane. Polarization rotation over an angle χ changes this to:
E_{\mathrm{im},x}(u,\psi) = E_0\left[F_0(u)\, d_x' + i F_1(u)\cos(\psi-\chi)\, d_z\right],
E_{\mathrm{im},y}(u,\psi) = E_0\left[F_0(u)\, d_y' + i F_1(u)\sin(\psi-\chi)\, d_z\right],
where the rotated dipole vector components are:
d_x' = \cos\chi\, d_x + \sin\chi\, d_y,
d_y' = -\sin\chi\, d_x + \cos\chi\, d_y.
The resulting intensities after passing through the PBS follow as:
\mathrm{PSF}_x(u,\psi) = E_0^2\left[|F_0(u)|^2 d_x'^2 + 2\,\mathrm{Im}\{F_0(u)F_1(u)^*\}\cos(\psi-\chi)\, d_x' d_z + |F_1(u)|^2\cos^2(\psi-\chi)\, d_z^2\right],
\mathrm{PSF}_y(u,\psi) = E_0^2\left[|F_0(u)|^2 d_y'^2 + 2\,\mathrm{Im}\{F_0(u)F_1(u)^*\}\sin(\psi-\chi)\, d_y' d_z + |F_1(u)|^2\sin^2(\psi-\chi)\, d_z^2\right],
where χ = 0,π/4 is the rotation angle. Clearly, the two PSFs with χ = π/4 (from now on referred to as the x′ and y′ channels) give rise to the same spot shapes as for the pair with χ = 0 (from now on referred to as the x and y channels), but now rotated over an angle π/4. The relative intensity of the two spots is however different, as it depends on the azimuthal angle of the dipole.

These expressions for the polarization PSFs describe the gross features of the spot shape as a function of dipole orientation. Figure 3 shows the numerically calculated PSFs for the four polarization channels for horizontal, tilted, and vertical dipoles. For (near) horizontal dipoles the first term on the r.h.s. of Eqs. (8) and (9) is dominant, giving rise to conventional peaked spots with peak intensities proportional to d_x² and d_y² for the x and y channels, and to d_x'² and d_y'² for the x′ and y′ channels. Clearly, these intensities can be very different depending on the azimuthal dipole angle. For (near) vertical dipoles the third term on the r.h.s. of Eqs. (8) and (9) is dominant, leading to double spots in each of the two channels, such that the line through the two sub-spots of each channel is oriented along the polarization axis of that channel. The sum of the x and y polarization spots, as well as the sum of the x′ and y′ channels, is an azimuthally symmetric doughnut spot. For tilted dipoles all terms matter, and we see a cross-over between the two different spot shapes. On top of that, the second term on the r.h.s. of Eqs. (8) and (9) is now of relevance and gives rise to offsets in the positions of the peaks. The offset is directed along the polarization axis of each of the polarization channels, i.e. the x-polarized spot has an offset in the x-direction, whereas the y-polarized spot has an offset in the y-direction. The offsets in the x′ and y′ channels are along the diagonals of the image. This behaviour enables the disentanglement of dipole tilt and defocus from a mere displacement of the emitter.
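The following sketch assembles the two channel PSFs of the expressions above for a given rotation angle χ, assuming the functions F0 and F1 are supplied numerically (e.g. evaluated as in appendix A); the function name and interface are ours:

```python
import numpy as np

def polarization_psfs(F0, F1, u, psi, d, chi=0.0, E0=1.0):
    """Channel PSFs of the two expressions above (the F2 contribution is neglected).
    F0, F1: callables returning the complex pupil integrals of appendix A at radial
    image coordinate u; d = (dx, dy, dz) is the dipole unit vector; chi = 0 or pi/4
    selects the unrotated or the polarization-rotated pair."""
    dx, dy, dz = d
    dxp = np.cos(chi) * dx + np.sin(chi) * dy   # rotated dipole components
    dyp = -np.sin(chi) * dx + np.cos(chi) * dy
    f0, f1 = F0(u), F1(u)
    cross = 2.0 * np.imag(f0 * np.conj(f1))
    psf_x = E0**2 * (np.abs(f0)**2 * dxp**2
                     + cross * np.cos(psi - chi) * dxp * dz
                     + np.abs(f1)**2 * np.cos(psi - chi)**2 * dz**2)
    psf_y = E0**2 * (np.abs(f0)**2 * dyp**2
                     + cross * np.sin(psi - chi) * dyp * dz
                     + np.abs(f1)**2 * np.sin(psi - chi)**2 * dz**2)
    return psf_x, psf_y
```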


Fig. 3 The polarization PSFs in the four detection channels of the Azzam polarimeter architecture for horizontal dipoles with azimuthal angle π/4 (top row), tilted dipoles with azimuthal angle π/4 (middle row), and vertical dipoles (bottom row), for a field of view of size 2.2×λ/NA, a water immersion objective (NA = 1.25, in a medium with n_med = 1.33), and diffraction limited defocus (72 mλ rms).


3. Effective Point Spread Function model based on Hermite functions

Localization algorithms usually require the numerical optimization of some merit function. In each iteration step the PSF model must be compared to actual image data. Clearly, this requires evaluating the full PSF in each iteration step, which is numerically demanding. Moreover, it is questionable whether such an exact PSF model is needed, as much of the finer detail of the spot shape is presumably washed away in the noise and background. For these two reasons it makes sense to develop a PSF model depending on functions that are numerically easy to calculate and nonetheless able to describe the wide variety of spot shapes that occur for different dipole orientations, with reasonable accuracy for practical signal-to-noise and signal-to-background levels. The simplicity and convenience of the popular Gaussian suggests that we try a modification of the Gaussian function, for example the product of a polynomial with a Gaussian. A function like x² exp (−x²/2σ²) can be tried for the doughnut spots that occur for vertical dipoles, a function such as x exp (−x²/2σ²) can describe the spot asymmetry for tilted dipoles, and a Gaussian exp (−x²/2σ²) reasonably fits the spot shape for horizontal dipoles. An effective PSF model can be found from a linear combination of these functions, weighted with factors quadratic in the dipole vector components. It appears to be advantageous to write these products of polynomials and a Gaussian in terms of Hermite functions. The Hermite functions are defined as:

\psi_n(u) = H_n(u)\exp(-u^2),
with H_n(u) the nth order Hermite polynomial (H_0(u) = 1, H_1(u) = 2u, H_2(u) = 4u² − 2, ...). Hermite functions appear in several areas of physics and engineering. The most famous example is as eigenfunctions of the Schrödinger equation for the quantum mechanical harmonic oscillator. Another application is in the Hermite-Gaussian beams which solve the paraxial wave equation [21].
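For reference, the Hermite functions defined above can be evaluated directly with standard library routines; a minimal sketch using SciPy's physicists' Hermite polynomials:

```python
import numpy as np
from scipy.special import eval_hermite

def hermite_function(n, u):
    """psi_n(u) = H_n(u) exp(-u^2), with H_n the physicists' Hermite polynomial."""
    return eval_hermite(n, u) * np.exp(-u**2)

u = np.linspace(-3.0, 3.0, 7)
print(hermite_function(0, u))  # Gaussian profile (horizontal dipoles)
print(hermite_function(1, u))  # odd lobe describing the tilted-dipole asymmetry
print(hermite_function(2, u))  # doughnut-like profile (vertical dipoles)
```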

The aforementioned linear combination of Hermite functions up to second order is:

\mathrm{PSF}_x(x,y) = \frac{A}{2\pi\sigma^2}\left(d_x'^2 + \tfrac{1}{2}\beta d_z^2\right)\Psi_{00}(x,y) + \frac{b}{a^2} + \frac{A\alpha\sqrt{\beta}}{2\pi\sigma^2}\, d_x' d_z\left(\cos\chi\,\Psi_{10}(x,y) + \sin\chi\,\Psi_{01}(x,y)\right) + \frac{A\beta}{8\pi\sigma^2}\, d_z^2\left(\cos^2\chi\,\Psi_{20}(x,y) + 2\sin\chi\cos\chi\,\Psi_{11}(x,y) + \sin^2\chi\,\Psi_{02}(x,y)\right),
\mathrm{PSF}_y(x,y) = \frac{A}{2\pi\sigma^2}\left(d_y'^2 + \tfrac{1}{2}\beta d_z^2\right)\Psi_{00}(x,y) + \frac{b}{a^2} + \frac{A\alpha\sqrt{\beta}}{2\pi\sigma^2}\, d_y' d_z\left(-\sin\chi\,\Psi_{10}(x,y) + \cos\chi\,\Psi_{01}(x,y)\right) + \frac{A\beta}{8\pi\sigma^2}\, d_z^2\left(\sin^2\chi\,\Psi_{20}(x,y) - 2\sin\chi\cos\chi\,\Psi_{11}(x,y) + \cos^2\chi\,\Psi_{02}(x,y)\right),
with the Hermite basis functions:
\Psi_{lm}(x,y) = \psi_l\!\left(\frac{x - x_0}{\sqrt{2}\sigma}\right)\psi_m\!\left(\frac{y - y_0}{\sqrt{2}\sigma}\right),
for integer l and m. Here x = u cosψ and y = u sinψ are the image plane coordinates, and χ is the polarization rotation angle which takes values equal to zero for the first image pair and equal to π/4 for the second image pair. The emitter coordinates are x0 and y0, A is a measure for the signal photon count, σ is the width of the PSF, the number of background photons in each polarization channel is b per pixel area a × a (the background is assumed to be unpolarized), α is an effective asymmetry parameter, and β is a measure of the relative magnitude of the doughnut contribution to the total spot. It can be shown that the PSFs are positive definite provided that the asymmetry parameter satisfies |α| ≤ 1 and the background parameter b ≥ 0.
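A minimal sketch of how this model can be evaluated on an image grid is given below; the equations are used in the reconstructed form given above, and the parameter ordering and function names are our own:

```python
import numpy as np
from scipy.special import eval_hermite

def psi(n, u):
    return eval_hermite(n, u) * np.exp(-u**2)

def Psi(l, m, x, y, x0, y0, sigma):
    ux = (x - x0) / (np.sqrt(2.0) * sigma)
    uy = (y - y0) / (np.sqrt(2.0) * sigma)
    return psi(l, ux) * psi(m, uy)

def hermite_psf_pair(x, y, x0, y0, sigma, d, A, alpha, beta, b, a, chi=0.0):
    """Hermite PSF model for one polarization pair (chi = 0 or pi/4). Returns
    (PSF_x, PSF_y) on the grid (x, y); a is the pixel size and b the background."""
    dx, dy, dz = d
    dxp = np.cos(chi) * dx + np.sin(chi) * dy   # rotated dipole components
    dyp = -np.sin(chi) * dx + np.cos(chi) * dy
    c, s = np.cos(chi), np.sin(chi)
    P = lambda l, m: Psi(l, m, x, y, x0, y0, sigma)
    quad_x = c**2 * P(2, 0) + 2*s*c * P(1, 1) + s**2 * P(0, 2)
    quad_y = s**2 * P(2, 0) - 2*s*c * P(1, 1) + c**2 * P(0, 2)
    psf_x = (A/(2*np.pi*sigma**2) * (dxp**2 + 0.5*beta*dz**2) * P(0, 0) + b/a**2
             + A*alpha*np.sqrt(beta)/(2*np.pi*sigma**2) * dxp*dz * (c*P(1, 0) + s*P(0, 1))
             + A*beta/(8*np.pi*sigma**2) * dz**2 * quad_x)
    psf_y = (A/(2*np.pi*sigma**2) * (dyp**2 + 0.5*beta*dz**2) * P(0, 0) + b/a**2
             + A*alpha*np.sqrt(beta)/(2*np.pi*sigma**2) * dyp*dz * (-s*P(1, 0) + c*P(0, 1))
             + A*beta/(8*np.pi*sigma**2) * dz**2 * quad_y)
    return psf_x, psf_y
```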

The distribution of the signal photons over the two channels is found from the integral of the PSFs over the image plane:

N_x = \int dx\, dy\left(\mathrm{PSF}_x(x,y) - b/a^2\right) = A\left(d_x'^2 + \tfrac{1}{2}\beta d_z^2\right),
N_y = \int dx\, dy\left(\mathrm{PSF}_y(x,y) - b/a^2\right) = A\left(d_y'^2 + \tfrac{1}{2}\beta d_z^2\right),
which gives a total signal photon count N = N_x + N_y = A\left(d_x'^2 + d_y'^2 + \beta d_z^2\right) = A\left(d_x^2 + d_y^2 + \beta d_z^2\right).

It appears to be convenient to absorb the signal photon count variable A, the unit-vector along the dipole axis d⃗ and the parameter β into a single vector D⃗ of three independent variables defined by:

(D_x, D_y, D_z) = \sqrt{A}\left(d_x, d_y, \sqrt{\beta}\, d_z\right).
With this substitution the model has eight free parameters: (x0, y0, σ, Dx, Dy, Dz, α, b). The signal photon count and the dipole unit vector can be found from the D⃗-vector via the inverse transformation:
N = D_x^2 + D_y^2 + D_z^2,
(d_x, d_y, d_z) = \frac{1}{\sqrt{D_x^2 + D_y^2 + D_z^2/\beta}}\left(D_x, D_y, D_z/\sqrt{\beta}\right).
This necessitates prior knowledge of the parameter β, which thus is the only ‘magic’ parameter of our model. It turns out that β only depends on fixed parameters such as NA and refractive indices and on the value of the asymmetry parameter α. Results in support of this connection are presented in section 5.

Deviations of the found orientation from a ground truth dipole direction d⃗0 = (sin θ0 cos ϕ0, sin θ0 sin ϕ0, cos θ0) can be quantified with two orientational fluctuation variables:

Q_p = \vec{p}_0\cdot\vec{d},
Q_s = \vec{s}_0\cdot\vec{d},
where p⃗0 = (cos θ0 cos ϕ0, cos θ0 sin ϕ0, − sin θ0) and s⃗0 = (−sin ϕ0, cos ϕ0, 0) are the two unit vectors in the polar and azimuthal direction.
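The bookkeeping between (A, d⃗, β), the vector D⃗, and the orientational fluctuation variables can be captured in a few lines; the sketch below (function names ours) implements the transformations defined above:

```python
import numpy as np

def pack_D(A, d, beta):
    """Combine the photon count A, dipole unit vector d, and beta into the vector D."""
    dx, dy, dz = d
    return np.sqrt(A) * np.array([dx, dy, np.sqrt(beta) * dz])

def unpack_D(D, beta):
    """Inverse transformation (Eq. (18)): recover N and the dipole unit vector from D."""
    Dx, Dy, Dz = D
    N = Dx**2 + Dy**2 + Dz**2
    d = np.array([Dx, Dy, Dz / np.sqrt(beta)])
    return N, d / np.linalg.norm(d)

def orientation_fluctuations(d, theta0, phi0):
    """Components of the found dipole axis d along the polar (p0) and azimuthal (s0)
    unit vectors of the ground-truth direction (theta0, phi0), i.e. Qp and Qs above."""
    p0 = np.array([np.cos(theta0)*np.cos(phi0), np.cos(theta0)*np.sin(phi0), -np.sin(theta0)])
    s0 = np.array([-np.sin(phi0), np.cos(phi0), 0.0])
    return np.dot(p0, d), np.dot(s0, d)
```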

The power captured by the pixel centered at (xk, yk) follows from integration of the intensity over the a × a pixel area:

\mu_{kl} = \int_{x_k - a/2}^{x_k + a/2} dx \int_{y_k - a/2}^{y_k + a/2} dy\; \mathrm{PSF}_l(x,y).
This integration can be done analytically using the different (recursion) relations satisfied by the Hermite functions:
\frac{d\psi_m(u)}{du} = \left(\frac{dH_m(u)}{du} - 2u H_m(u)\right)e^{-u^2} = \left(2m H_{m-1}(u) - 2u H_m(u)\right)e^{-u^2} = -H_{m+1}(u)\, e^{-u^2} = -\psi_{m+1}(u).
It now follows that:
\mu_{kx} = \frac{1}{\pi}\left[\left(D_x'^2 + \tfrac{1}{2}D_z^2\right)\Sigma_{00}(x_k,y_k) + \alpha D_x' D_z\left(\cos\chi\,\Sigma_{10}(x_k,y_k) + \sin\chi\,\Sigma_{01}(x_k,y_k)\right) + \tfrac{1}{4}D_z^2\left(\cos^2\chi\,\Sigma_{20}(x_k,y_k) + 2\sin\chi\cos\chi\,\Sigma_{11}(x_k,y_k) + \sin^2\chi\,\Sigma_{02}(x_k,y_k)\right)\right] + b,
\mu_{ky} = \frac{1}{\pi}\left[\left(D_y'^2 + \tfrac{1}{2}D_z^2\right)\Sigma_{00}(x_k,y_k) + \alpha D_y' D_z\left(-\sin\chi\,\Sigma_{10}(x_k,y_k) + \cos\chi\,\Sigma_{01}(x_k,y_k)\right) + \tfrac{1}{4}D_z^2\left(\sin^2\chi\,\Sigma_{20}(x_k,y_k) - 2\sin\chi\cos\chi\,\Sigma_{11}(x_k,y_k) + \cos^2\chi\,\Sigma_{02}(x_k,y_k)\right)\right] + b,
where:
\Sigma_{lm}(x_k,y_k) = \Delta\psi_{l-1}\!\left(\frac{x_k - x_0}{\sqrt{2}\sigma}\right)\Delta\psi_{m-1}\!\left(\frac{y_k - y_0}{\sqrt{2}\sigma}\right),
with the shorthand defined for functions f(u):
\Delta f(u) = f\!\left(u + \frac{a}{2\sqrt{2}\sigma}\right) - f\!\left(u - \frac{a}{2\sqrt{2}\sigma}\right),
and where the definition of the Hermite functions is extended by :
\psi_{-1}(u) = -\frac{\sqrt{\pi}}{2}\,\mathrm{erf}(u).
The components of the vector D⃗ in the rotated frame are D_x' = \cos\chi\, D_x + \sin\chi\, D_y and D_y' = -\sin\chi\, D_x + \cos\chi\, D_y.
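A minimal numerical sketch of these pixel-integrated rates is given below, assuming the reconstructed sign convention for ψ₋₁(u); the helper names are ours:

```python
import numpy as np
from scipy.special import erf, eval_hermite

def psi_ext(n, u):
    """Hermite functions, extended to n = -1 as defined above."""
    if n == -1:
        return -0.5 * np.sqrt(np.pi) * erf(u)
    return eval_hermite(n, u) * np.exp(-u**2)

def Sigma(l, m, xk, yk, x0, y0, sigma, a):
    """Sigma_lm(x_k, y_k) built from Delta-differences of psi_{l-1} and psi_{m-1}."""
    h = a / (2.0 * np.sqrt(2.0) * sigma)
    ux = (xk - x0) / (np.sqrt(2.0) * sigma)
    uy = (yk - y0) / (np.sqrt(2.0) * sigma)
    dlt = lambda n, u: psi_ext(n, u + h) - psi_ext(n, u - h)
    return dlt(l - 1, ux) * dlt(m - 1, uy)

def pixel_rates(xk, yk, x0, y0, sigma, D, alpha, b, a, chi=0.0):
    """Expected photon counts mu_kx, mu_ky of the pixel centred at (xk, yk)."""
    Dx, Dy, Dz = D
    Dxp = np.cos(chi) * Dx + np.sin(chi) * Dy
    Dyp = -np.sin(chi) * Dx + np.cos(chi) * Dy
    c, s = np.cos(chi), np.sin(chi)
    S = lambda l, m: Sigma(l, m, xk, yk, x0, y0, sigma, a)
    mu_x = (1/np.pi) * ((Dxp**2 + 0.5*Dz**2) * S(0, 0)
                        + alpha*Dxp*Dz * (c*S(1, 0) + s*S(0, 1))
                        + 0.25*Dz**2 * (c**2*S(2, 0) + 2*s*c*S(1, 1) + s**2*S(0, 2))) + b
    mu_y = (1/np.pi) * ((Dyp**2 + 0.5*Dz**2) * S(0, 0)
                        + alpha*Dyp*Dz * (-s*S(1, 0) + c*S(0, 1))
                        + 0.25*Dz**2 * (s**2*S(2, 0) - 2*s*c*S(1, 1) + c**2*S(0, 2))) + b
    return mu_x, mu_y
```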

The recursion relations provide another numerical advantage, apart from the exact integration over the pixel area. Namely, it is possible to obtain analytical expressions for derivatives of the pixel averaged PSF μkl w.r.t. the fit parameters, which can then be evaluated numerically with great efficiency. Such expressions are needed in the MLE fitting routine.

Figure 4 shows cross-sections along the x and y-axes of realistic vectorial PSFs and of Hermite PSF model fits for the four polarization channels, and for horizontal, tilted, and vertical dipoles. Apparently, the Hermite PSF model gives a good quantitative description of the different spot shapes; small differences arise only in the fringe structures surrounding the main spot.


Fig. 4 Cross-sections of realistic vectorial PSFs for diffraction limited defocus (72 mλ rms) and Hermite PSF model fits in the four polarization detection channels, arranged by rows, for three different dipole orientations (horizontal, tilted, vertical), arranged by columns. The horizontal and tilted dipoles have azimuthal angle π/4. Note that the vertical axes in the figures are adapted to the peak height of each image so as to make the spot shapes better visible. The x and y cross-sections for the two rotated polarization channels overlap exactly, thus making only two of the four curves visible in figures (g) to (i) (the red curve overlaps the blue curve, the green curve overlaps the magenta curve).


4. Levenberg-Marquardt based Maximum Likelihood Estimation

The observed parameters are the set of photon counts in the four channels N={nkl} for l = x,y,x′,y′, the unknown parameters are θ = {x0, y0, σ, Dx, Dy, Dz, α, b}. In Maximum Likelihood Estimation (MLE) the likelihood for observing N given the unknown parameters θ is maximized. It is customary to work with the logarithm of the likelihood function, as that appears to be a more amenable functional than the likelihood itself. The log-likelihood function is derived from the properties of the statistical process leading to the observed parameters N. In case the photon count per pixel follows a Poisson distribution with expectation value μkl(θ), we obtain the log-likelihood:

\log L = \sum_l\sum_k\left[n_{kl}\log\mu_{kl} - \mu_{kl} - \log\left(n_{kl}!\right)\right].
The log-likelihood function is maximized when the gradient of the functional w.r.t. the parameters θ is zero. The first order derivatives (gradient) and second order derivatives (Hessian) of the log-likelihood are given by:
G_i = \frac{\partial\log L}{\partial\theta_i} = \sum_{lk}\left(\frac{n_{kl}}{\mu_{kl}} - 1\right)\frac{\partial\mu_{kl}}{\partial\theta_i},
H_{ij} = \frac{\partial^2\log L}{\partial\theta_i\partial\theta_j} = -\sum_{lk}\frac{n_{kl}}{\mu_{kl}^2}\frac{\partial\mu_{kl}}{\partial\theta_i}\frac{\partial\mu_{kl}}{\partial\theta_j} + \sum_{lk}\left(\frac{n_{kl}}{\mu_{kl}} - 1\right)\frac{\partial^2\mu_{kl}}{\partial\theta_i\partial\theta_j}.
Expressions for the first order derivatives of the pixel Poisson-rates μkl, derived by applying the recursion relations satisfied by the Hermite functions, are given in appendix B. The terms in Eq. (30) involving the second order derivatives of the pixel Poisson-rates are usually neglected in the optimization procedure [22], giving rise to a simplified Hessian:
H^s_{ij} = -\sum_{lk}\frac{n_{kl}}{\mu_{kl}^2}\frac{\partial\mu_{kl}}{\partial\theta_i}\frac{\partial\mu_{kl}}{\partial\theta_j}.
The reason is that these terms are linear in the difference μ_{kl} − n_{kl}, and are therefore expected to be small close to the optimum, provided the model reasonably fits the observations and the initial values for the parameters are not too far off. An additional justification is that the precise form of the Hessian does not affect the optimum in parameter space, only the path of convergence to the optimum. There is also a practical advantage to using this simplified Hessian H^s, as the computation of the second order derivatives of the Poisson-rates is then avoided. It is noted that there is no fundamental reason not to calculate the second order derivatives; if wanted or needed, the full expression for the Hessian may be used as well, at the price of finding and evaluating expressions for the second order derivatives of the Poisson-rates.

The update rule for the parameters in each iteration step of the Levenberg-Marquardt optimization algorithm is:

\theta \rightarrow \theta - \left[H^s + \lambda\,\mathrm{diag}\!\left(H^s\right)\right]^{-1} G.
In case the parameter update does not result in an increase of the log-likelihood it is tried again with a Levenberg-Marquardt parameter λ that is increased by a factor η. In case the parameter update does result in an increase of the log-likelihood the parameter update is accepted and the Levenberg-Marquardt parameter λ is decreased by the factor η. We take as starting value λ = 1 and a multiplication factor η = 10. The iteration is terminated when the relative increase in the merit function is less than 5×10−5. Convergence is typically achieved in 6–10 iterations. In our experience the speed of convergence is better compared to the modified Newton-Raphson method of [20] (which uses λ = 1 and only the diagonal part of the Hessian), and the stability of convergence is an improvement over the pure Newton-Raphson method (which uses the setting λ = 0).
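The iteration can be summarized in a short generic routine; the sketch below (ours) assumes a user-supplied function that returns the pixel rates μ(θ) and their first derivatives, and implements the simplified-Hessian update and the λ bookkeeping described above:

```python
import numpy as np

def lm_mle(n, mu_and_jac, theta0, lam=1.0, eta=10.0, tol=5e-5, max_iter=100):
    """Generic Levenberg-Marquardt MLE sketch for Poisson data. `n` holds the observed
    pixel counts of all channels (flattened); `mu_and_jac(theta)` must return the model
    rates mu (same shape as n) and the Jacobian dmu/dtheta of shape (n.size, len(theta))."""
    theta = np.asarray(theta0, dtype=float)
    loglik = lambda mu: np.sum(n * np.log(mu) - mu)   # log(n!) term is theta-independent
    mu, J = mu_and_jac(theta)
    L = loglik(mu)
    for _ in range(max_iter):
        G = J.T @ (n / mu - 1.0)                      # gradient of log L
        Hs = -(J.T * (n / mu**2)) @ J                 # simplified Hessian
        step = -np.linalg.solve(Hs + lam * np.diag(np.diag(Hs)), G)
        mu_new, J_new = mu_and_jac(theta + step)
        L_new = loglik(mu_new)
        if L_new > L:                                 # accept the update, decrease lambda
            converged = (L_new - L) < tol * abs(L)
            theta, mu, J, L = theta + step, mu_new, J_new, L_new
            lam /= eta
            if converged:
                break
        else:                                         # reject the update, increase lambda
            lam *= eta
    return theta
```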

Initial values for the parameters are calculated with an algorithm described in appendix C. The method is based on fitting the model to the lowest (up to second) order moments of the light distribution across the ROI. This results in values that are already close to the optimum, especially for low background, thus improving convergence of the MLE procedure.

The Fisher information matrix is:

F_{ij} = \sum_{lk}\frac{1}{\mu_{kl}}\frac{\partial\mu_{kl}}{\partial\theta_i}\frac{\partial\mu_{kl}}{\partial\theta_j}.
In the next section we present results not for the vector components (Dx, Dy, Dz) but instead for the photon count N and for the orientational fluctuations (Qp, Qs). Such a change of variables (Dx, Dy, Dz) → (N, Qp, Qs) means that the Fisher matrix must be transformed as F → F′ = J F Jᵀ, with J the Jacobian of the coordinate transformation. The Cramér-Rao Lower Bound (CRLB) on the parameter covariances is found by inverting the Fisher information matrix. The CRLB gives the best possible accuracy in estimating the parameters θ_i, provided the model fits the underlying reality. Clearly, in our case there is no exact match between the two as we work with a simplified PSF model based on Hermite functions. We will show, however, that our estimator is to a great extent unbiased, in the sense that the parameter averages over all statistical realizations correspond to the assumed ground truth values of those parameters, thus lending support to the application of the CRLB concept.
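A sketch of the corresponding CRLB computation is given below; instead of transforming the Fisher matrix itself, it propagates the inverted matrix (the covariance) with the Jacobian of the new parameters with respect to the old ones, which is equivalent:

```python
import numpy as np

def crlb(mu, dmu, jac_new_wrt_old=None):
    """Cramér-Rao lower bounds from the Poisson Fisher matrix (sketch). `mu` are the
    model pixel rates of all channels (flattened), `dmu` their first derivatives with
    shape (mu.size, n_params). `jac_new_wrt_old` optionally propagates the result to
    transformed parameters, e.g. (Dx, Dy, Dz) -> (N, Qp, Qs)."""
    F = (dmu.T * (1.0 / mu)) @ dmu             # Fisher information matrix
    C = np.linalg.inv(F)                        # CRLB covariance of the fit parameters
    if jac_new_wrt_old is not None:
        C = jac_new_wrt_old @ C @ jac_new_wrt_old.T
    return np.sqrt(np.diag(C))                  # lower bounds on the standard deviations
```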

5. Numerical results

Results for the localization performance of the MLE algorithm are plotted in Fig. 5. In these simulations four polarization PSFs are generated according to the vectorial model described in appendix A. The NA is taken to be 1.25 (water immersion), the wavelength is 500 nm, each pixel is 80 nm in object space (slightly oversampled, Nyquist would imply 100 nm pixels), and the ROI is 11×11 pixels large. The emitter coordinates are drawn from a normal distribution with a width equal to one pixel. The four polarization images are distorted by Poisson noise and background. A total number of 500 signal photons and 25 background photons (making the signal-to-background ratio equal to 20) is distributed over the four images following Poisson statistics. So, the sum of the four images contains on average 500 signal photons and 25 background photons. The simulation is run for 500 instances. Fits with a final log-likelihood deviating more than three standard deviations from the mean are designated as outliers and are removed from the data set. Typically less than 1% of the runs results in an outlier, indicating that the fitting routine is quite robust. Figure 5 shows scatter plots of the found position errors for horizontal dipoles (θd = π/2, ϕd = π/4), tilted dipoles (θd = π/4, ϕd = π/4), and vertical dipoles (θd = 0), for the case of zero defocus and for defocus equal to the diffraction limit. For the zero defocus case localization errors are in the range 4–6 nm, where the largest localization error is for horizontal and tilted dipoles in the tilting direction. For comparison, rapidly rotating dipoles, for which the averaged dipole PSF can be well fitted with a Gaussian, give rise to a localization error of 4–5 nm for the same parameters and the same photon count. Clearly, the maximum achievable localization precision is hardly compromised by dividing the total number of photons over multiple images. The diffraction limited defocus case gives rise to localization errors in the range 5–9 nm, where the worst case occurs for horizontal and tilted dipoles in the tilting direction. The bias for tilted dipoles is reduced to about 5 nm, which is a reduction by an order of magnitude compared to the state-of-the-art Gaussian fitting [6]. The typical bias in each of the two coordinates is smaller by a factor of 2 to 3, and amounts to about 2 nm. This shows up in the localization uncertainty for random fixed dipoles (see Fig. 6), which is somewhat above the CRLB. In conclusion, the proposed MLE estimation achieves near optimum localization uncertainty with a small residual localization bias of 5 nm/2 nm (worst case/typical case), and hardly underperforms compared to the much simpler case of rapidly rotating dipoles, for which Gaussian PSF fitting works very well.
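For completeness, a minimal sketch of the noise model used in these simulations is given below (our own shorthand implementation; the four model PSF images themselves must be supplied):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_images(mu_channels, n_signal=500, n_background=25):
    """Normalize the four model PSF images so that they jointly contain `n_signal`
    expected signal photons, add a uniform background of `n_background` expected
    photons in total, and draw Poisson-distributed pixel counts. `mu_channels` is a
    list of four 2D arrays of relative (background-free) intensities."""
    mu = np.stack(mu_channels).astype(float)
    mu *= n_signal / mu.sum()                      # expected signal photons
    mu += n_background / mu.size                   # uniform background per pixel
    return rng.poisson(mu)                         # one noisy realization
```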


Fig. 5 Scatter plots of localizations for in focus images (left column) and for images with diffraction limited defocus (right column), for horizontal dipoles (top row), tilted dipoles (middle row), and vertical dipoles (bottom row).



Fig. 6 Scatter plots of localizations for images of fixed dipole emitters with random dipole orientation in focus (left) and with diffraction limited defocus (right).


The dipole orientation is determined from the dipole vector components (Dx, Dy, Dz) via Eq. (18). Figure 7 shows data connecting the found dipole vector components to the ground truth dipole direction along the unit vector (dx, dy, dz). The simulation was done for a set of 500 dipole emitters with random but fixed orientation in an aqueous medium (n_med = 1.33), both for conditions corresponding to a water immersion objective (NA = 1.25) and to an oil immersion objective (NA = 1.45, for capturing more light via supercritical fluorescence in TIRF-mode). The linear relation between D_hor/D_ver = \sqrt{D_x^2 + D_y^2}/D_z and d_hor/d_ver = \sqrt{d_x^2 + d_y^2}/d_z following from Eq. (18) works rather well. It is mentioned that the fitted value of β is based only on the orientations with intermediate tilt angles, as the scatter for orientations very close to the vertical or horizontal orientation tends to dominate the outcome of the fit. For the case of a water immersion objective, the coefficient of linearity turns out to depend quadratically on the asymmetry parameter: β_fit = β_0 − α²/2, where the parameter β_0 is found as β_0 = 0.66. Other values of the NA would necessitate a calibration similar to the one presented in Fig. 7 in order to find the right numerical value of β_0. For the case of an oil immersion objective the best fit indicates a constant value β = 1.7. The difference with the water immersion case is probably related to the evanescent wave contributions to the overall spot. It may be expected that the proper value of β depends not only on the NA but also on the refractive index of the medium. Prior knowledge of this latter parameter is thus also needed to calibrate the parameter β. An alternative to the phenomenological relationships presented here is to use a look-up table for connecting D⃗ to d⃗.
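One possible way to carry out such a calibration numerically is sketched below; it assumes the scaling (D_x, D_y, D_z) = √A(d_x, d_y, √β d_z) defined above, so that D_hor/D_ver = (d_hor/d_ver)/√β, and restricts the fit to intermediate tilt angles (the selection window is our own choice):

```python
import numpy as np

def calibrate_beta(D_fits, d_truths, tilt_range=(0.5, 2.0)):
    """Fit the slope of the found ratio D_hor/D_ver against the ground-truth ratio
    d_hor/d_ver, using only emitters with intermediate tilt."""
    D = np.asarray(D_fits, dtype=float)    # shape (n_emitters, 3): found (Dx, Dy, Dz)
    d = np.asarray(d_truths, dtype=float)  # shape (n_emitters, 3): true dipole unit vectors
    D_ratio = np.hypot(D[:, 0], D[:, 1]) / D[:, 2]
    d_ratio = np.hypot(d[:, 0], d[:, 1]) / d[:, 2]
    keep = (d_ratio > tilt_range[0]) & (d_ratio < tilt_range[1])
    # least-squares slope of a line through the origin
    slope = np.sum(D_ratio[keep] * d_ratio[keep]) / np.sum(d_ratio[keep] ** 2)
    # the assumed scaling implies D_hor/D_ver = (d_hor/d_ver)/sqrt(beta)
    return 1.0 / slope**2
```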


Fig. 7 (a) Plot of the found ratio Dhor/Dver as a function of the ground truth ratio dhor/dver for a set of random fixed dipoles, and for a water immersion objective with NA = 1.25, and a linear fit through the data. (b) Plot of the found coefficient of linearity as a function of the average value of the squared asymmetry parameter for a water immersion objective (NA = 1.25) and an oil immersion objective (NA = 1.45).


Figure 8 shows scatter plots of the orientational fluctuations for the same cases as considered for the localization. For the vertical dipole case the angular uncertainty is around 0.8 deg in all directions, which makes sense as nothing breaks the rotational symmetry around the vertical axis. For the intermediate, tilted dipole case the polar angle uncertainty is slightly larger, around 2.5 deg in focus and around 3.0 deg with defocus equal to the diffraction limit. For the horizontal dipole case the polar angle uncertainty is around 5 deg in focus and around 4 deg with defocus equal to the diffraction limit. The uncertainty in the azimuthal angle for both the tilted and horizontal dipole cases is around 1.6 deg both in and out of focus. The results for fixed dipoles with random orientation, shown in Fig. 9, give around 1.7 deg in the azimuthal direction and 4–5 deg in the polar direction, depending on the amount of defocus. Clearly, the nominal level of accuracy achieved, and the tolerance for defocus, are quite reasonable. The performance for most orientations is comparable to the results reported in [16], with the exception of vertical dipoles, for which the current results are an improvement.


Fig. 8 Scatter plots of the variations of the found orientation around the ground truth orientation in the polar direction (Qp) and in the azimuthal direction (Qs) for in focus images (left column) and for images with diffraction limited defocus (right column), for horizontal dipoles (top row), tilted dipoles (middle row), and vertical dipoles (bottom row).



Fig. 9 Scatter plots of the variations of the found orientation around the ground truth orientation for images of fixed dipole emitters with random dipole orientation in focus (left) and with diffraction limited defocus (right).


The comparison between the achieved accuracy and the averaged CRLB of the Hermite PSF model at the found parameters reveals several striking features. For the tilted dipole case the angular accuracy is a bit above the average CRLB, in line with prior expectations. For the horizontal dipole case the CRLB fits with the azimuthal uncertainty, but is much larger than the polar uncertainty. For the vertical dipole case the CRLB is much larger than the achieved uncertainty in both angular directions. The reason for this behaviour is that the Fisher matrix of the Hermite PSF model is singular for certain parameter values. This occurs when the asymmetry coefficient α = 0 and when Dz = 0 (horizontal dipole) or Dx = Dy = 0 (vertical dipole). Close to these singular cases the CRLB for the polar direction (horizontal dipole) or for both angular directions (vertical dipole) grows very large. There are a number of reasons why the CRLB diverges while the realized uncertainty found from the simulations remains finite. First of all, the fact that the Fisher matrix is singular at these special points does not imply that the actual Hessian is singular, as it differs from the Fisher matrix by a noise term proportional to the difference in observed and model pixel values n_k − μ_k, and by the additional diagonal Levenberg-Marquardt term. In our simulations it appears that for the vertical dipole case the Levenberg-Marquardt parameter λ indeed does not get small upon convergence to the optimum, as would be expected for a non-singular Hessian. A consequence of a (near) singular Hessian would be a deviation of the distribution of the found parameters from the normal distribution, which would be especially visible under low noise conditions, as then the difference between the Hessian and the singular Fisher matrix vanishes. We have done Kolmogorov-Smirnov tests on the distribution of the polar angle fluctuation Qp to check this. It turns out that indeed the distribution of found Qp is not a normal distribution for horizontal and vertical dipoles at high photon counts (above about 10³ for the vertical dipole case and even lower for the horizontal dipole case). A second reason for the difference between the averaged CRLB and the realized uncertainty is the variation in the asymmetry parameter α. For the horizontal and vertical case this parameter cannot be determined accurately, so values in the entire range −1 ≤ α ≤ 1 will be found. For non-zero α the Fisher matrix is not singular, thus regularizing the CRLB. We suspect that this is also the reason why the polar uncertainty for the horizontal dipole improves with defocus, as then the tendency for non-zero α increases. Yet a third reason may be found in the simplified nature of the Hermite PSF model. It may be the case that the Fisher matrix for the exact dipole PSF model is not singular for the horizontal and vertical dipole cases.

We have not developed a full understanding of why the realized uncertainty in the polar angle has the finite value it appears to have. We have tried several methods to regularize the calculation of the average CRLB in order to be able to predict the outcome of the simulations. For example, averaging the Fisher matrix and then taking the inverse instead of the other way around turns out to give a good fit with the observed polar uncertainty for the horizontal dipole case. However, this positive result does not extend to other values of the polar angle. Another regularization method that has been tried is to evaluate the CRLB of a suitable function of the polar angle, say cos (2θp), which has a zero derivative at θp = 0 and θp = π/2. Although such a procedure gives finite results for the CRLB at these two orientations, it does not give a proper description of the realized uncertainty in the numerical simulations. Summarizing, the good quality of the realized angular uncertainty may be somewhat fortuitous and at present beyond description by a predictive model. Finally, we speculate that on a fundamental level the use of defocus for determining dipole orientation may provide an advantage, as then a formally diverging CRLB may be avoided.

Figure 10 shows the rms localization error (RMSLE), the rms orientational error in the s-direction (RMSOEs), and in the p-direction (RMSOEp), as a function of the signal photon count for different angles and defocus values and for zero background. These error measures incorporate both bias and variance effects in a single number. As a benchmark for the localization results the Gaussian localization CRLB, equal to √2 σ/√N_ph (the factor √2 comes from adding the x and y contributions to the RMSLE), is plotted as well. In a recent paper Foreman and Török [12] show that the azimuthal angle of in-plane dipoles can be measured with a polarimeter architecture like the one considered in this paper with a CRLB equal to 1/(2√N_ph). This line is plotted in the orientational error curves as a benchmark. It appears that the localization uncertainty in focus follows the Gaussian CRLB quite closely. The error is slightly larger in case of defocus for horizontal and tilted dipoles, in particular due to the small residual bias of a few nm for the tilted dipole case. This bias shows up at high photon counts as a plateau to which the localization error levels off. The azimuthal orientation error is excellently described by the bound derived by Foreman and Török for the tilted dipole case, both in focus and out of focus. For the horizontal dipole case the agreement is reasonably good; the error is a bit below the bound for low photon counts and a bit above the bound for high photon counts. The vertical dipole case is well below the bound for low photon counts, but shows an increase in uncertainty for high photon counts. The uncertainty in the polar orientation is significantly larger than the benchmark, for intermediate tilt angles by a factor of about 2, for larger tilt angles by a factor of about 3. Both the relatively large uncertainty in the polar angle and the increase in angular uncertainty of vertical dipoles for large photon counts are attributed to the near-singular Fisher matrix close to the horizontal or vertical dipole orientation, giving rise to non-normal data distributions around the mean with long tails to high values of the orientational error. Another deteriorating effect is a small bias in the polar angle for the tilted dipole case at high photon counts.


Fig. 10 Plots of the rms localization error (RMSLE, top row), the rms orientational error in the s-direction (RMSOEs, middle row), and the rms orientational error in the p-direction (RMSOEp, bottom row) as a function of the signal photon count.


6. Conclusion

In summary, we have shown that the realized localization error closely follows the CRLB derived from the Gaussian PSF model, for all dipole orientations, for all practical photon counts and for defocus values at least up to the diffraction limit. Clearly, this provides a solution for the tilted dipole bias problem. The uncertainty in the azimuthal orientation is reasonably well described by the Foreman-Török formula, the uncertainty in the polar orientation appears to be somewhat worse. Our computational scheme is straightforward in view of the relative complexity of the shape variations of the polarization PSFs. The effective Hermite PSF model works well, and can be efficiently used in a MLE-fit for the unknown parameters. The Hermite PSF model provides significant analytical shortcuts for the integration over the pixel area and for the derivatives of the pixel Poisson rates with respect to the fit parameters.

A practical problem not considered here is the registration problem. The four images for the four polarization channels must be aligned with respect to each other with an accuracy typically better than 10 nm. Although this can be accomplished in principle with fiducials such as fluorescent beads, it requires careful experimentation and image analysis. Another aspect not considered here is the excess noise of the EMCCD. According to Mortensen et al. [17], MLE fitting with a Poissonian noise based log-likelihood gives the correct results, only the predicted uncertainty is a factor √2 larger due to the noise associated with the electron multiplication process. A final issue introducing inaccuracies beyond the effects reported in this paper is birefringence of the sample. In case the emitter is located behind a layer of cell material, this birefringence may disturb the polarization of the emitted light, thus compromising the measurement of orientation and position.

An interesting topic for further study is the cross-over between fixed and free dipoles for coherent polarized illumination. In that case the ratio between the fluorescence lifetime and the rotational diffusion time changes from zero to infinity. This will modify the PSF and the fitting procedure for position and orientation. Clearly, the uncertainty in determining the orientation will grow large upon approaching the free dipole case. The nature of this transition and a quantitative description of it are left for future work. It is noted that the ratio between the rotational diffusion time and the frame time is the relevant ratio if the illumination is unpolarized, e.g. if a lamp is used instead of a laser. A second topic of interest for future exploration is the generalization to the case of 3D-localization. It is not clear if the current approach with a simplified Hermite PSF model can be successfully applied to the bifocal [23, 24], astigmatic [25, 26], or helical [27] spot modifications needed for determining the axial position. Finally, an in-depth analysis of the effect of background on the uncertainty in position and orientation would provide a third topic worthy of investigation.

A. Summary of dipole image formation

The emitter dipole is located in a medium with refractive index nmed adjacent to a cover slip with refractive index ncov. The emitter is imaged with an immersion objective lens with numerical aperture NAob designed for an immersion fluid with refractive index nimm. The objective lens is assumed to be corrected for focusing onto the interface between the cover slip and the medium. The intersection of the optical axis with this interface is taken to be the origin of the coordinate system in object space. The emitted radiation is collected by the objective lens with focal length Fob and focused by the tube lens with focal length Fim onto the detector. Both lenses are assumed to be aplanatic and the imaging system is assumed to be telecentric, i.e. the aperture stop is located at the back focal plane of the objective lens, which coincides with the front focal plane of the tube lens. The magnification of the imaging system is M = Fim/Fob, thus making the numerical aperture in image space NAim = NAob/M. In practice M ≫ 1, so that NAim ≪ 1. The radius of the stop is given by R = FobNAob = FimNAim. The pupil coordinates are scaled with the stop radius R scaling the pupil to the unit circle, the object and image coordinates are scaled with the diffraction lengths λ/NAob (object space) and λ/NAim (image space). The (2D) position of the emitter with respect to the focal point is r⃗d = (xd, yd). The scaled position is u⃗d = NAobr⃗d/λ. The electric field in the pupil plane at point v⃗ = (vx, vy, 0) corresponds to the plane wave in object space with wavevector along (sin θmed cos ϕ, sin θmed sin ϕ, cos θmed), so vx = nmed sin θmed cos ϕ/NAob and vy = nmed sin θmed sin ϕ/NAob.

The electric field component j = x,y proportional to dipole component k = x,y,z is given by:

w_{jk}(\vec{u}) = \frac{1}{\pi}\int d^2v\; C(\vec{v})\,\frac{q_{jk}(\vec{v})}{\left(1 - v^2 NA_{ob}^2/n_{med}^2\right)^{1/4}}\exp\!\left(2\pi i\,\vec{u}\cdot\vec{v}\right),
where C (v⃗) is the pupil function (C (v⃗) = 0 for |v⃗| > 1). The polarization vectors are defined by:
q_{xk} = \cos\phi\, T_p p_k - \sin\phi\, T_s s_k,
q_{yk} = \sin\phi\, T_p p_k + \cos\phi\, T_s s_k,
which depend on the p and s basis polarization vectors p⃗ = (cos θmed cos ϕ, cos θmed sin ϕ, – sin θmed) and s⃗ = (−sin ϕ, cos ϕ,0) and on the Fresnel coefficients Ta = Ta,med–cov Ta,cov–imm for a = p,s and where the Fresnel coefficients for the two contributing interfaces are defined by Ta,1–2 = 2ca,1/ (ca,1 + ca,2) for a = p,s and with cp,l = nl / cos θl and cs,l = nl cos θl for l = med, cov, imm.

In case of azimuthal symmetry in the pupil function (only defocus and/or spherical aberration) the functions wjk (u⃗) can be written as:

w_{xx}(u,\psi) = F_0(u) + F_2(u)\cos(2\psi),
w_{xy}(u,\psi) = F_2(u)\sin(2\psi),
w_{xz}(u,\psi) = i F_1(u)\cos\psi,
w_{yx}(u,\psi) = F_2(u)\sin(2\psi),
w_{yy}(u,\psi) = F_0(u) - F_2(u)\cos(2\psi),
w_{yz}(u,\psi) = i F_1(u)\sin\psi,
with:
F_0(u) = 2\int_0^1 dv\, v\,\frac{T_s + T_p\sqrt{1 - v^2 NA_{ob}^2/n_{med}^2}}{2\left(1 - v^2 NA_{ob}^2/n_{med}^2\right)^{1/4}}\, J_0(2\pi u v)\exp\!\left(iW(v)\right),
F_1(u) = 2\int_0^1 dv\, v^2\,\frac{T_p\, NA_{ob}/n_{med}}{\left(1 - v^2 NA_{ob}^2/n_{med}^2\right)^{1/4}}\, J_1(2\pi u v)\exp\!\left(iW(v)\right),
F_2(u) = 2\int_0^1 dv\, v\,\frac{T_s - T_p\sqrt{1 - v^2 NA_{ob}^2/n_{med}^2}}{2\left(1 - v^2 NA_{ob}^2/n_{med}^2\right)^{1/4}}\, J_2(2\pi u v)\exp\!\left(iW(v)\right).
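A minimal numerical sketch of these pupil integrals is given below; to keep it short it sets T_s = T_p = 1 (index-matched interfaces), which is an assumption made here and not part of the model above, and W(v) is the aberration phase in radians supplied by the user:

```python
import numpy as np
from scipy.special import jv
from scipy.integrate import quad

def F_functions(u, W, NA=1.25, n_med=1.33):
    """Numerical sketch of F0, F1, F2 with Ts = Tp = 1."""
    s = NA / n_med
    cos_t = lambda v: np.sqrt(1.0 - (v * s)**2)          # cos(theta_med)

    def pupil_integral(order, angular):
        f = lambda v: angular(v) / cos_t(v)**0.5 * jv(order, 2.0*np.pi*u*v)
        re = quad(lambda v: np.cos(W(v)) * f(v), 0.0, 1.0)[0]
        im = quad(lambda v: np.sin(W(v)) * f(v), 0.0, 1.0)[0]
        return 2.0 * (re + 1j * im)

    F0 = pupil_integral(0, lambda v: v * (1.0 + cos_t(v)) / 2.0)
    F1 = pupil_integral(1, lambda v: v**2 * s)
    F2 = pupil_integral(2, lambda v: v * (1.0 - cos_t(v)) / 2.0)
    return F0, F1, F2

# example: aberration-free case (W = 0) at the centre of the spot
print(F_functions(u=0.0, W=lambda v: 0.0))
```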

B. Derivatives of the Poisson rate

The first order derivatives of the Poisson rates μkx and μky of pixel k w.r.t. the parameters θ are:

\frac{\partial\mu_{kx}}{\partial x_0} = \frac{1}{\sqrt{2}\pi\sigma}\left[\left(D_x'^2 + \tfrac{1}{2}D_z^2\right)\Sigma_{10} + \alpha D_x' D_z\left(\cos\chi\,\Sigma_{20} + \sin\chi\,\Sigma_{11}\right) + \tfrac{1}{4}D_z^2\left(\cos^2\chi\,\Sigma_{30} + 2\sin\chi\cos\chi\,\Sigma_{21} + \sin^2\chi\,\Sigma_{12}\right)\right],
\frac{\partial\mu_{ky}}{\partial x_0} = \frac{1}{\sqrt{2}\pi\sigma}\left[\left(D_y'^2 + \tfrac{1}{2}D_z^2\right)\Sigma_{10} + \alpha D_y' D_z\left(-\sin\chi\,\Sigma_{20} + \cos\chi\,\Sigma_{11}\right) + \tfrac{1}{4}D_z^2\left(\sin^2\chi\,\Sigma_{30} - 2\sin\chi\cos\chi\,\Sigma_{21} + \cos^2\chi\,\Sigma_{12}\right)\right],
\frac{\partial\mu_{kx}}{\partial y_0} = \frac{1}{\sqrt{2}\pi\sigma}\left[\left(D_x'^2 + \tfrac{1}{2}D_z^2\right)\Sigma_{01} + \alpha D_x' D_z\left(\cos\chi\,\Sigma_{11} + \sin\chi\,\Sigma_{02}\right) + \tfrac{1}{4}D_z^2\left(\cos^2\chi\,\Sigma_{21} + 2\sin\chi\cos\chi\,\Sigma_{12} + \sin^2\chi\,\Sigma_{03}\right)\right],
\frac{\partial\mu_{ky}}{\partial y_0} = \frac{1}{\sqrt{2}\pi\sigma}\left[\left(D_y'^2 + \tfrac{1}{2}D_z^2\right)\Sigma_{01} + \alpha D_y' D_z\left(-\sin\chi\,\Sigma_{11} + \cos\chi\,\Sigma_{02}\right) + \tfrac{1}{4}D_z^2\left(\sin^2\chi\,\Sigma_{21} - 2\sin\chi\cos\chi\,\Sigma_{12} + \cos^2\chi\,\Sigma_{03}\right)\right],
\frac{\partial\mu_{kx}}{\partial\sigma} = \frac{1}{\pi\sigma}\left[\left(D_x'^2 + \tfrac{1}{2}D_z^2\right)\Lambda_{00} + \alpha D_x' D_z\left(\cos\chi\,\Lambda_{10} + \sin\chi\,\Lambda_{01}\right) + \tfrac{1}{4}D_z^2\left(\cos^2\chi\,\Lambda_{20} + 2\sin\chi\cos\chi\,\Lambda_{11} + \sin^2\chi\,\Lambda_{02}\right)\right],
\frac{\partial\mu_{ky}}{\partial\sigma} = \frac{1}{\pi\sigma}\left[\left(D_y'^2 + \tfrac{1}{2}D_z^2\right)\Lambda_{00} + \alpha D_y' D_z\left(-\sin\chi\,\Lambda_{10} + \cos\chi\,\Lambda_{01}\right) + \tfrac{1}{4}D_z^2\left(\sin^2\chi\,\Lambda_{20} - 2\sin\chi\cos\chi\,\Lambda_{11} + \cos^2\chi\,\Lambda_{02}\right)\right],
\frac{\partial\mu_{kx}}{\partial D_x} = \frac{\cos\chi}{\pi}\left[2 D_x'\Sigma_{00} + \alpha D_z\left(\cos\chi\,\Sigma_{10} + \sin\chi\,\Sigma_{01}\right)\right],
\frac{\partial\mu_{ky}}{\partial D_x} = -\frac{\sin\chi}{\pi}\left[2 D_y'\Sigma_{00} + \alpha D_z\left(-\sin\chi\,\Sigma_{10} + \cos\chi\,\Sigma_{01}\right)\right],
\frac{\partial\mu_{kx}}{\partial D_y} = \frac{\sin\chi}{\pi}\left[2 D_x'\Sigma_{00} + \alpha D_z\left(\cos\chi\,\Sigma_{10} + \sin\chi\,\Sigma_{01}\right)\right],
\frac{\partial\mu_{ky}}{\partial D_y} = \frac{\cos\chi}{\pi}\left[2 D_y'\Sigma_{00} + \alpha D_z\left(-\sin\chi\,\Sigma_{10} + \cos\chi\,\Sigma_{01}\right)\right],
\frac{\partial\mu_{kx}}{\partial D_z} = \frac{1}{\pi}\left[D_z\Sigma_{00} + \alpha D_x'\left(\cos\chi\,\Sigma_{10} + \sin\chi\,\Sigma_{01}\right) + \tfrac{1}{2}D_z\left(\cos^2\chi\,\Sigma_{20} + 2\sin\chi\cos\chi\,\Sigma_{11} + \sin^2\chi\,\Sigma_{02}\right)\right],
\frac{\partial\mu_{ky}}{\partial D_z} = \frac{1}{\pi}\left[D_z\Sigma_{00} + \alpha D_y'\left(-\sin\chi\,\Sigma_{10} + \cos\chi\,\Sigma_{01}\right) + \tfrac{1}{2}D_z\left(\sin^2\chi\,\Sigma_{20} - 2\sin\chi\cos\chi\,\Sigma_{11} + \cos^2\chi\,\Sigma_{02}\right)\right],
\frac{\partial\mu_{kx}}{\partial\alpha} = \frac{1}{\pi}\, D_x' D_z\left(\cos\chi\,\Sigma_{10} + \sin\chi\,\Sigma_{01}\right),
\frac{\partial\mu_{ky}}{\partial\alpha} = \frac{1}{\pi}\, D_y' D_z\left(-\sin\chi\,\Sigma_{10} + \cos\chi\,\Sigma_{01}\right),
\frac{\partial\mu_{kx}}{\partial b} = 1,
\frac{\partial\mu_{ky}}{\partial b} = 1,
where the arguments of the Σlm are suppressed for the sake of brevity, and where:
\Lambda_{lm} = \Delta\!\left(\frac{x_k - x_0}{\sqrt{2}\sigma}\,\psi_l\!\left(\frac{x_k - x_0}{\sqrt{2}\sigma}\right)\right)\Delta\psi_{m-1}\!\left(\frac{y_k - y_0}{\sqrt{2}\sigma}\right) + \Delta\psi_{l-1}\!\left(\frac{x_k - x_0}{\sqrt{2}\sigma}\right)\Delta\!\left(\frac{y_k - y_0}{\sqrt{2}\sigma}\,\psi_m\!\left(\frac{y_k - y_0}{\sqrt{2}\sigma}\right)\right).
In evaluating these derivatives it is used that:
\frac{\partial}{\partial x_0}\,\Delta\psi_l\!\left(\frac{x_k - x_0}{\sqrt{2}\sigma}\right) = \frac{1}{\sqrt{2}\sigma}\,\Delta\psi_{l+1}\!\left(\frac{x_k - x_0}{\sqrt{2}\sigma}\right),
\frac{\partial}{\partial\sigma}\,\Delta\psi_l\!\left(\frac{x_k - x_0}{\sqrt{2}\sigma}\right) = \frac{1}{\sigma}\,\Delta\!\left(\frac{x_k - x_0}{\sqrt{2}\sigma}\,\psi_{l+1}\!\left(\frac{x_k - x_0}{\sqrt{2}\sigma}\right)\right),
and similar expressions for the analogous derivatives.

C. Initial value estimation by the method of moments

Initial values for the parameters are obtained from the lowest (up to second) order moments of the light distribution across the ROI. This method generalizes the centroid estimation of the emitter location, which works well provided that the PSF is accurately described by a Gaussian and provided there is no or only little background. It may therefore be expected that the generalization presented here also works well in the low-background limit. We will first describe how to extract estimates from each of the two pairs (with and without π/4 polarization rotation) of polarized images. Afterwards the two sets of values can be combined into the final estimate.

The Hermite PSF-model of Eqs. (11) and (12) gives rise to zeroth order moments (photon counts in the x and y polarization channels) as given by Eqs. (14) and (15). For the calculation of the first and second order moments it appears to be convenient to work with coordinates in a rotated frame, i.e. using x′ = x cos χ + y sin χ and y′ = −x sin χ + y cos χ and similar definitions for x′_0 and y′_0. The first order moments are:

\int dx\, dy\, \mathrm{PSF}_x(x,y)\, x' = \left(D_x^2 + \tfrac{1}{2}D_z^2\right)x_0' + \alpha\sqrt{2}\, D_x D_z\sigma,
\int dx\, dy\, \mathrm{PSF}_x(x,y)\, y' = \left(D_x^2 + \tfrac{1}{2}D_z^2\right)y_0',
\int dx\, dy\, \mathrm{PSF}_y(x,y)\, x' = \left(D_y^2 + \tfrac{1}{2}D_z^2\right)x_0',
\int dx\, dy\, \mathrm{PSF}_y(x,y)\, y' = \left(D_y^2 + \tfrac{1}{2}D_z^2\right)y_0' + \alpha\sqrt{2}\, D_y D_z\sigma.
Clearly, the offset term does not affect the first order moment of x′ in the y-polarization channel and the first order moment of y′ in the x-polarization channel. The second order moments are:
\int dx\, dy\, \mathrm{PSF}_x(x,y)\, x'^2 = \left(D_x^2 + \tfrac{1}{2}D_z^2\right)\left(\sigma^2 + x_0'^2\right) + D_z^2\sigma^2 + 2\sqrt{2}\,\alpha D_x D_z\sigma\, x_0',
\int dx\, dy\, \mathrm{PSF}_x(x,y)\, y'^2 = \left(D_x^2 + \tfrac{1}{2}D_z^2\right)\left(\sigma^2 + y_0'^2\right),
\int dx\, dy\, \mathrm{PSF}_y(x,y)\, x'^2 = \left(D_y^2 + \tfrac{1}{2}D_z^2\right)\left(\sigma^2 + x_0'^2\right),
\int dx\, dy\, \mathrm{PSF}_y(x,y)\, y'^2 = \left(D_y^2 + \tfrac{1}{2}D_z^2\right)\left(\sigma^2 + y_0'^2\right) + D_z^2\sigma^2 + 2\sqrt{2}\,\alpha D_y D_z\sigma\, y_0'.
The moments of x′y′ turn out to be zero in the model.

This set of lowest order moments is estimated from the pixel values n_k^j for all pixels k in channel j = x,y by calculating the sums:

M_0^j = \sum_k n_k^j,
\left(M_{1x}^j, M_{1y}^j\right) = \sum_k n_k^j\left(x_k', y_k'\right),
\left(M_{2x}^j, M_{2y}^j\right) = \sum_k n_k^j\left(x_k'^2, y_k'^2\right).
For the sake of clarity we will first describe how to find values in case of zero offset α = 0. In that case the zeroth and first order moments suffice for finding the emitter coordinates (centroid estimate):
x_{0v}' = \frac{M_{1x}^x + M_{1x}^y}{M_0^x + M_0^y},
y_{0v}' = \frac{M_{1y}^x + M_{1y}^y}{M_0^x + M_0^y}.
In the case of non-zero offset α ≠ 0 this estimate is biased but still has minimum variance. We will henceforth refer to this estimate as the minimum-variance estimate with subscript ’v’. With the quantities:
A_{2x}^j = M_{2x}^j - 2 M_{1x}^j x_{0v}' + M_0^j x_{0v}'^2,
A_{2y}^j = M_{2y}^j - 2 M_{1y}^j y_{0v}' + M_0^j y_{0v}'^2,
an estimate for the spot width follows as:
\sigma^2 = \frac{A_{2x}^y + A_{2y}^x}{M_0^x + M_0^y}.
The vector component Dz can now be obtained from:
D_z^2 = \frac{A_{2x}^x + A_{2y}^y - A_{2x}^y - A_{2y}^x}{2\sigma^2},
which allows the other two vector components to be deduced from the zeroth order moments by:
D_x^2 = M_0^x - \tfrac{1}{2}D_z^2,
D_y^2 = M_0^y - \tfrac{1}{2}D_z^2.
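Before turning to the case of non-zero offset, the zero-offset estimates above can be summarized in a short routine; the sketch below (function name ours) operates on the pixel counts of one polarization pair in its rotated frame:

```python
import numpy as np

def initial_values_zero_offset(n_x, n_y, xk, yk):
    """Moment-based estimates for alpha = 0. n_x, n_y are the pixel counts of the x and
    y channels and (xk, yk) the pixel-centre coordinates (arrays of the same shape, in
    the rotated frame of the pair)."""
    M0x, M0y = n_x.sum(), n_y.sum()
    x0 = (np.sum(n_x * xk) + np.sum(n_y * xk)) / (M0x + M0y)   # centroid estimate
    y0 = (np.sum(n_x * yk) + np.sum(n_y * yk)) / (M0x + M0y)
    # second order central moments A_2 of one coordinate in one channel
    A2 = lambda n, c, c0: np.sum(n * c**2) - 2.0 * np.sum(n * c) * c0 + n.sum() * c0**2
    sigma2 = (A2(n_y, xk, x0) + A2(n_x, yk, y0)) / (M0x + M0y)
    Dz2 = (A2(n_x, xk, x0) + A2(n_y, yk, y0)
           - A2(n_y, xk, x0) - A2(n_x, yk, y0)) / (2.0 * sigma2)
    Dx2 = M0x - 0.5 * Dz2
    Dy2 = M0y - 0.5 * Dz2
    return x0, y0, np.sqrt(sigma2), Dx2, Dy2, Dz2
```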
In case of a non-zero offset term α ≠ 0 the first and zeroth order moments give rise to a second estimate for the emitter position:
x_{0b}' = \frac{M_{1x}^y}{M_0^y},
y_{0b}' = \frac{M_{1y}^x}{M_0^x}.
This estimate is unbiased, hence the subscript ’b’, and at first sight seems preferable over the centroid estimate. However, the distribution of photons over the two channels can be rather uneven giving one of the two estimates a relatively large variance. Clearly, it is more sensible to look for an optimum bias-variance trade-off which has the lowest possible overall localization error. This optimum is found via a weighted sum of the two estimates:
x_0' = (1-\varepsilon)\, x_{0b}' + \varepsilon\, x_{0v}',
with ε a weighting parameter (and a similar expression for the y-coordinate). It is noted that translation invariance requires that the linear combination of the two fit values takes this form, with only a single degree of freedom in the weighting parameter. The optimum value of ε is determined by minimizing the (x-contribution to the) Mean Square Localization Error (MSLE), which is the sum of the variance V and the square of the bias Δ:
\mathrm{MSLE} = \left\langle\left(x_0' - x_{true}\right)^2\right\rangle = \left\langle\left(x_0' - \langle x_0'\rangle\right)^2\right\rangle + \left(\langle x_0'\rangle - x_{true}\right)^2 = V + \Delta^2.
The estimated variances of the minimum-variance and minimum-bias fit values are V = σ²/(M_0^x + M_0^y) and V_y = σ²/M_0^y, where an initial estimate for σ is given by Eq. (80). The bias is estimated as Δ = κ(x'_{0v} − x'_{0b}), with κ a parameter describing the confidence level of the bias estimate. It follows that the MSLE is given by:
\mathrm{MSLE} = (1-\varepsilon)^2 V_y + \varepsilon^2 V + \varepsilon^2\Delta^2.
The minimum of this expression is obtained for:
\varepsilon = \frac{V_y}{V + V_y + \Delta^2},
and the minimum MSLE is equal to:
\mathrm{MSLE} = \frac{V_y\left(V + \Delta^2\right)}{V + V_y + \Delta^2}.
For vertical dipoles, or horizontal dipoles with azimuthal angle in between the polarizer axes, we typically expect Δ² ≪ V_y ≈ 2V, giving MSLE ≈ V. For tilted dipoles we can have Δ² ≫ V_y ≈ 2V, giving MSLE ≈ 2V. For horizontal dipoles with azimuthal angle close to the orthogonal axis we expect V_y ≫ V, leading to MSLE ≈ V + Δ². For this last case a near zero bias confidence parameter κ is needed to keep Δ small. On the other hand it must be equal to unity for tilted dipoles with azimuthal angle in between the polarizer axes. A suitable choice is therefore:
\kappa = \frac{4 M_0^x M_0^y}{\left(M_0^x + M_0^y\right)^2}.
With this choice the worst case (tilted dipoles with large bias) gives a Mean Square Localization Error that is a factor of 2 worse than the ideal value V.
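A minimal sketch of this bias-variance trade-off for the x-coordinate of one polarization pair, using the variance expressions in the reconstructed form given above:

```python
import numpy as np

def weighted_position(M0x, M0y, M1x_x, M1x_y, sigma):
    """Combine the minimum-variance (centroid) and minimum-bias estimates with the
    weight epsilon that minimizes the MSLE."""
    x0v = (M1x_x + M1x_y) / (M0x + M0y)        # minimum-variance (centroid) estimate
    x0b = M1x_y / M0y                          # minimum-bias estimate
    V = sigma**2 / (M0x + M0y)                 # variance of the centroid estimate
    Vy = sigma**2 / M0y                        # variance of the minimum-bias estimate
    kappa = 4.0 * M0x * M0y / (M0x + M0y)**2   # confidence level of the bias estimate
    Delta = kappa * (x0v - x0b)                # estimated bias of the centroid
    eps = Vy / (V + Vy + Delta**2)             # optimal weight
    return (1.0 - eps) * x0b + eps * x0v
```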

Improved estimates can be found for the other parameters now that the position of the emitter is estimated with minimum MSLE. To that end, the minimum variance estimate in Eqs. (78) and (79) must be replaced by the minimum localization error estimate. The equations for estimating the spot width and the vector components (Dx, Dy, Dz) remain the same. Finally, the asymmetry coefficient α is obtained from:

\alpha = \frac{\sqrt{2}\left[D_x\left(M_{1x}^x - M_0^x x_0'\right) + D_y\left(M_{1y}^y - M_0^y y_0'\right)\right]}{2\left(D_x^2 + D_y^2\right)D_z\sigma}.

There is a connection between the method of moments presented here and the orthogonality relations satisfied by the Hermite polynomials:

\int du\, H_n(u) H_m(u)\exp(-u^2) = \sqrt{\pi}\, 2^n n!\,\delta_{nm}.
This orthogonality implies that the coefficients for the expansion of a spot in Hermite functions can be found by integrating the product of the spot and the particular Hermite polynomial over the image coordinates. Such integrals can be expressed as linear combinations of the moments of the spot. Clearly, the Hermite expansion coefficients, and hence the model parameters, can be directly expressed in terms of the moments.
