Optica Publishing Group

Super-resolution orientation estimation and localization of fluorescent dipoles using 3-D steerable filters

Open Access

Abstract

Fluorophores that are fixed during image acquisition produce a diffraction pattern that is characteristic of the orientation of the fluorophore’s underlying dipole. Fluorescence localization microscopy techniques such as PALM and STORM achieve super-resolution by applying Gaussian-based fitting algorithms to in-focus images of individual fluorophores; when applied to fixed dipoles, this can lead to a bias in the range of 5–20 nm. We introduce a method for the joint estimation of position and orientation of dipoles, based on the representation of a physically realistic image formation model as a 3-D steerable filter. Our approach relies on a single, defocused acquisition. We establish theoretical, localization-based resolution limits on estimation accuracy using Cramér-Rao bounds, and experimentally show that estimation accuracies of at least 5 nm for position and of at least 2 degrees for orientation can be achieved. Patterns generated by applying the image formation model to estimated position/orientation pairs closely match experimental observations.

©2009 Optical Society of America

1. Introduction

Fluorescence localization microscopy (FLM) has emerged as a powerful family of techniques for optically imaging biological samples at molecular resolutions, down to the nanometer scale [1]. This is achieved by employing specific fluorescent labels that can be activated [2], switched on [3], or that intrinsically blink [4], which make it possible to image sparse subsets of the fluorophores contained within the specimen in sequence. Fluorophores appear as spatially isolated spots in the resulting image frames; their center can then be computationally localized with an accuracy that far surpasses the resolution limit formulated by Abbe. A super-resolved image of the specimen is generated by imaging a large number of such fluorophore subsets, localizing every molecule within each frame, and combining the resulting positions to render a composite image at a finer scale. The resolution achieved by these techniques is thus directly dependent upon the achievable localization accuracy. This accuracy in turn depends on the number of photons collected [5] and on the image formation model used in the localization algorithm.

The primary implementations of FLM proposed to date include photoactivated localization microscopy (PALM) [6], fluorescence photoactivation localization microscopy (FPALM) [7], and stochastic optical reconstruction microscopy (STORM) [8]. Developed contemporaneously, these methods demonstrated resolutions at the 10 nm scale experimentally. Due to the long acquisition times required for the collection of a sufficient number of frames, these methods were initially designed to image 2-D sections of thin, fixed specimens. For such samples, it is generally assumed that the image of a single molecule corresponds to the in-focus section of the system’s 3-D point spread function (PSF), which can be approximated by a 2-D Gaussian function. Under the latter assumption, it has been shown that Gaussian-based fitting algorithms do not result in a significant loss in localization accuracy [9].

Before FLM was introduced and became practically feasible (suitable fluorophores were long lacking), localization-based approaches were generally limited to single molecule tracking applications, where individual fluorophores are inherently isolated. In a study of the progression of the molecular motor Myosin V on actin fibers, Yildiz et al. achieved a Gaussian localization-based resolution of 1.5 nm, and accordingly named their technique fluorescence imaging with one-nanometer accuracy (FIONA) [10]. In the framework of single particle tracking, theoretical limits on the achievable localization accuracy have been formulated for lateral (i.e. x-y) localization [11] and axial localization [12]; these limits translate into a measure of resolution when extended to incorporate multiple sources.

In all of the previously cited methods, it is assumed that individual fluorophores act as isotropically emitting point sources (which also implies that their image corresponds to a section of the system’s PSF). Whereas this is valid for molecules that freely rotate during image acquisition, it does not hold when imaging fixed fluorophores. Their corresponding diffraction patterns differ significantly from the PSF, and are highly specific of the orientation of the fluorophore’s underlying electromagnetic dipole [13, 14].

1.1. Localization of fluorescent dipoles

Dipole diffraction patterns are generally not radially symmetric, and as a consequence, their point of maximum intensity in the image plane is shifted with respect to the position of the fluorophore. Applying Gaussian-based localization to such patterns can lead to a bias in the range of 5–20 nm, even for dipoles that are imaged in focus [15]. This is especially significant in the context of FLM, where resolutions of the same order are sought. Due to the complexity of the dipole patterns, avoiding this bias requires localization methods based on a more accurate image formation model than a simple Gaussian.

Moreover, localization based on a physically realistic model makes it possible to estimate the dipole’s orientation in addition to its position. So far, this has been exploited in an extension of the FIONA technique named dipole orientation and position imaging (DOPI) [16]. In the study of Myosin V, the estimation of the fluorescent labels’ orientation made it possible to characterize its progression on actin much more precisely than with FIONA. A shortcoming of the current implementation of DOPI is its reliance on two separate procedures for orientation and position estimation.

The current state of the art in dipole orientation and position estimation is based on matched filtering of defocused images. Using an advanced image formation model for dipoles [17], Patra et al. proposed to estimate the orientation based on a finite number of precomputed templates corresponding to rotated versions of a dipole diffraction pattern for a given amount of defocus [18]. The accuracy of this approach is inherently limited by the angular sampling stemming from the finite number of templates, which is usually restricted due to computational cost. As a further consequence, the matched filtering approach limits position estimates to pixel-level accuracy. In DOPI, super-resolved position information is recovered by taking two images of every fluorophore: a defocused image for orientation estimation, and an in-focus image for position estimation, which uses the Gaussian-based localization method proposed in FIONA.

Other methods for estimating the orientation of fluorescent dipoles have been proposed in the literature. Often relying on specific instrumentation, most of these methods are incompatible with a joint position and orientation approach over a reasonably large field of view. Notable techniques are based on direct imaging of the emission pattern in the back focal plane of the objective lens (see, e.g., [19–21]), and on annular illumination [22].

1.2. New approach based on 3-D steerable filters

In this work, we introduce an efficient method for the joint estimation of position and orientation for fluorescent dipoles from a single (defocused) image. We show that image formation for dipoles can be expressed as a linear combination of six templates weighted by trigonometric functions of the dipole angles. As a consequence, the estimation problem can be formulated as the optimization of a 3-D steerable filter [23]. The resulting algorithm serves both as an initial detection of dipoles with localization accuracy at the pixel level, and as an efficient means for iteratively updating the orientation estimates when performing the subsequent sub-pixel localization to achieve super-resolution. We formulate the theoretical limits on estimation accuracy for position and orientation using Cramér-Rao bounds (CRB), and show that our method reaches these bounds. Notably, these limits make it possible to establish the acquisition settings (e.g., defocus) that will lead to the highest estimation accuracy. Our experimental results indicate that dipoles can be localized with a position accuracy of at least 5 nm and an orientation accuracy of at least 2 degrees, at a peak signal-to-noise ratio (PSNR) of approximately 25 dB.

1.3. Organization of the paper

In the next section, we introduce an image formation model for fluorescent dipoles and show how it decomposes into six non-orthogonal templates. Subsequently, in Section 3, we use this steerable decomposition to formulate a localization algorithm for the estimation of both the position and orientation of fluorescent dipoles. In Section 4, we establish the theoretical limits on the accuracy of these estimators, using Fisher information. Finally, in Section 5, we demonstrate our method on simulated and experimental data. We conclude with a discussion of potential applications and extensions of the proposed technique.

2. Image formation

A single fluorescent molecule can be characterized as a harmonically oscillating dipole with moment p = (sin θp cos ϕp, sin θp sin ϕp, cos θp) and position xp = (xp, yp, zp), where the angles θp and ϕp are the zenith (i.e., between the dipole and the optical axis) and azimuth angle of the dipole, respectively. In our notation, the subscript p indicates a dipole parameter. The intensity radiated by such a dipole is modeled by propagating the emitted electric field through the optical system. We begin by expressing the amplitude of the electric field in object space as

$$\boldsymbol{\varepsilon}_s = \langle\mathbf{p},\mathbf{e}_p^s\rangle\,\mathbf{e}_p^s + \langle\mathbf{p},\mathbf{e}_s\rangle\,\mathbf{e}_s,$$

where the vectors e_p^s and e_s are the unit vectors of the p- and s-polarized components of ε_s in the sample layer [24], as illustrated in Fig. 1 (the superscript s on the e_p vector denotes the sample medium; e_s remains constant throughout the system). A constant factor representing the magnitude (see, e.g., [24, 25]) has been neglected in this expression and will be reintroduced later. We assume a standard model for the object space, consisting of a sample layer (refractive index n_s), a glass layer (corresponding to the coverslip, index n_g), and an immersion layer (index n_i) [26]. After propagating through these layers and the objective, the field is given by

$$\boldsymbol{\varepsilon}_a = \langle\mathbf{p},\mathbf{e}_p^s\rangle\, t_p^{(1)} t_p^{(2)}\,\mathbf{e}_p^a + \langle\mathbf{p},\mathbf{e}_s\rangle\, t_s^{(1)} t_s^{(2)}\,\mathbf{e}_s,$$

where $t_p^{(l)}$ and $t_s^{(l)}$ are the Fresnel transmission coefficients for p-polarized and s-polarized light from layer l to layer l + 1, i.e.,

$$t_p^{(l)} = \frac{2 n_l \cos\theta_l}{n_{l+1}\cos\theta_l + n_l\cos\theta_{l+1}}, \qquad t_s^{(l)} = \frac{2 n_l \cos\theta_l}{n_l\cos\theta_l + n_{l+1}\cos\theta_{l+1}},$$

where $(n_1, n_2, n_3) = (n_s, n_g, n_i)$ for the configuration considered here. Note that Eq. (2) can easily be generalized to an arbitrary number of strata. The unit vectors of the relevant polarization directions, expressed in spherical coordinates, are

$$\begin{aligned}
\mathbf{e}_p^s &= (\cos\theta_s\cos\phi,\ \cos\theta_s\sin\phi,\ -\sin\theta_s)\\
&= \left(\tfrac{1}{n_s}\sqrt{n_s^2 - n_i^2\sin^2\theta}\,\cos\phi,\ \tfrac{1}{n_s}\sqrt{n_s^2 - n_i^2\sin^2\theta}\,\sin\phi,\ -\tfrac{n_i}{n_s}\sin\theta\right)\\
\mathbf{e}_p^a &= (\cos\phi,\ \sin\phi,\ 0)\\
\mathbf{e}_s &= (-\sin\phi,\ \cos\phi,\ 0),
\end{aligned}$$

where θs is the zenith angle between the wavevector k and the optical axis in the sample layer, θ is the corresponding angle in the immersion layer, and ϕ is the azimuth angle (see Fig. 1). The wavevector is thus parameterized as

$$\mathbf{k} = k\,n_i\,(\cos\phi\sin\theta,\ \sin\phi\sin\theta,\ \cos\theta),$$

where $k = 2\pi/\lambda$ is the wavenumber. The field in the detector plane, expressed in object space coordinates, is given by the Richards-Wolf integral [27, 28]:

$$\begin{aligned}
\boldsymbol{\varepsilon} &= \frac{iA_0}{\lambda}\int_0^{2\pi}\!\!\int_0^{\alpha} \boldsymbol{\varepsilon}_a\, e^{i\langle\mathbf{k},\mathbf{x}\rangle}\, e^{ik\Lambda(\theta;\boldsymbol{\tau})}\,\sqrt{\cos\theta}\,\sin\theta\, d\theta\, d\phi\\
&= \frac{iA_0}{\lambda}\int_0^{2\pi}\!\!\int_0^{\alpha} \boldsymbol{\varepsilon}_a\, e^{-ikr n_i\sin\theta\cos(\phi-\phi_d)}\, e^{ikn_i z\cos\theta}\, e^{ik\Lambda(\theta;\boldsymbol{\tau})}\,\sqrt{\cos\theta}\,\sin\theta\, d\theta\, d\phi,
\end{aligned}$$

Fig. 1. Electric field propagation in a microscope. The illustrated path lies in the direction of the azimuth angle ϕ.


where $A_0$ is the scalar amplitude of the field, λ its wavelength, and α the maximal angular aperture of the objective (i.e., $\alpha = \sin^{-1}(\mathrm{NA}/n_i)$). The phase component Λ(θ; τ) describes the system’s aberrations. Its argument

$$\boldsymbol{\tau} = (n_i,\ n_i^*,\ n_g,\ n_g^*,\ n_s,\ t_i^*,\ t_g,\ t_g^*)$$

is a vector containing the optical parameters of the setup; the subscripts on the refractive indices n and thicknesses t indicate the respective layer for each parameter (see Fig. 1), and an asterisk signals a design value (i.e., $n_i$ is the refractive index of the immersion medium under experimental conditions, and $n_i^*$ its corresponding design value). The field ε is a function of the point of observation $\mathbf{x} = (-r\cos\phi_d,\ -r\sin\phi_d,\ z)$ on the detector, where $r = \sqrt{(x-x_p)^2 + (y-y_p)^2}$ and $\phi_d = \tan^{-1}\left((y-y_p)/(x-x_p)\right)$; z denotes defocus. All units are expressed in object space coordinates, which allows us to place the origin of the coordinate system at the interface between the sample layer and the coverslip in order to facilitate the expression of the phase term Λ(θ; τ).
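To make the layered geometry concrete, the following sketch (our own illustration, not code from the paper) evaluates the Fresnel transmission coefficients of Eq. (2) together with the per-layer angles obtained from Snell's law; all function names and parameter values are illustrative assumptions.

```python
import numpy as np

def fresnel_t(n1, n2, cos1, cos2):
    """Transmission coefficients (t_p, t_s) from layer 1 into layer 2, Eq. (2)."""
    t_p = 2 * n1 * cos1 / (n2 * cos1 + n1 * cos2)
    t_s = 2 * n1 * cos1 / (n1 * cos1 + n2 * cos2)
    return t_p, t_s

def layer_cosines(theta_i, ni, ns, ng):
    """cos(theta_l) in the sample, glass, and immersion layers for a ray at
    angle theta_i in the immersion medium (Snell invariant: n_i sin(theta_i))."""
    s = ni * np.sin(theta_i)                   # conserved transverse component
    cos_s = np.emath.sqrt(ns**2 - s**2) / ns   # complex beyond the critical angle
    cos_g = np.emath.sqrt(ng**2 - s**2) / ng
    return cos_s, cos_g, np.cos(theta_i)

# Example: air sample (n_s = 1.0) behind a matched glass/oil pair (1.515)
cos_s, cos_g, cos_i = layer_cosines(0.2, ni=1.515, ns=1.0, ng=1.515)
tp1, ts1 = fresnel_t(1.0, 1.515, cos_s, cos_g)     # sample -> glass
tp2, ts2 = fresnel_t(1.515, 1.515, cos_g, cos_i)   # glass -> immersion
```

At normal incidence both coefficients reduce to 2n₁/(n₁+n₂), and for the index-matched glass/immersion pair the interface is transparent (t ≈ 1), which is a quick sanity check of the signs.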

We describe the system’s aberrations based on the optical path difference (OPD) between actual (experimental) imaging conditions and the corresponding design values of the system, as proposed in [26]. Aberrations are assumed to arise exclusively as the result of a mismatch between the refractive indices and/or the thicknesses of the different layers in the sample setup. State-of-the-art vectorial PSF calculations all incorporate this phase term (see, e.g., [29, 30]), and although the formalism employed differs from [26], these approaches lead to the same result for the phase and are considered equivalent [31, 32]. We thus replace the aberration term Λ(θ; τ) with the OPD Λ(θ, z; zp, τ), defined as

$$\begin{aligned}
\Lambda(\theta, z; z_p, \boldsymbol{\tau}) = {}&\left(z_p - z + n_i\left(\frac{z_p}{n_s} - \frac{t_g}{n_g} + \frac{t_g^*}{n_g^*} + \frac{t_i^*}{n_i^*}\right)\right) n_i\cos\theta\\
&+ z_p\sqrt{n_s^2 - n_i^2\sin^2\theta} + t_g\sqrt{n_g^2 - n_i^2\sin^2\theta}\\
&- t_g^*\sqrt{n_g^{*2} - n_i^2\sin^2\theta} - t_i^*\sqrt{n_i^{*2} - n_i^2\sin^2\theta}.
\end{aligned}$$

Note that Λ(θ,z;0,τ) is the phase term corresponding to the standard defocus model [33].

By substituting Eq. (2) into Eq. (6) and simplifying the result, we can rewrite the vector ε as

$$\boldsymbol{\varepsilon} = i\begin{bmatrix} I_0 + I_2\cos(2\phi_d) & I_2\sin(2\phi_d) & -2iI_1\cos\phi_d\\[2pt] I_2\sin(2\phi_d) & I_0 - I_2\cos(2\phi_d) & -2iI_1\sin\phi_d \end{bmatrix}\mathbf{p},$$

where

$$\begin{aligned}
I_0(\mathbf{x};\mathbf{x}_p,\boldsymbol{\tau}) &= \int_0^{\alpha} B_0(\theta)\left(t_s^{(1)}t_s^{(2)} + t_p^{(1)}t_p^{(2)}\,\frac{1}{n_s}\sqrt{n_s^2 - n_i^2\sin^2\theta}\right) d\theta\\
I_1(\mathbf{x};\mathbf{x}_p,\boldsymbol{\tau}) &= \int_0^{\alpha} B_1(\theta)\, t_p^{(1)}t_p^{(2)}\,\frac{n_i}{n_s}\sin\theta\, d\theta\\
I_2(\mathbf{x};\mathbf{x}_p,\boldsymbol{\tau}) &= \int_0^{\alpha} B_2(\theta)\left(t_s^{(1)}t_s^{(2)} - t_p^{(1)}t_p^{(2)}\,\frac{1}{n_s}\sqrt{n_s^2 - n_i^2\sin^2\theta}\right) d\theta
\end{aligned}$$

with

$$B_m(\theta) = \sqrt{\cos\theta}\,\sin\theta\, J_m(kr n_i\sin\theta)\, e^{ik\Lambda(\theta, z;\, z_p, \boldsymbol{\tau})}.$$

It should be noted that these integrals are standard for vectorial PSF calculations (for equivalent expressions see, e.g., [30, 32]). The amplitude factor $A_0$ was dropped from this formulation and will be reintroduced later. To keep the notation manageable, we omit the argument (x; x_p, τ) from functions such as $|I_0|^2$ from this point on.
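As a numerical sketch (not the paper's code), the integrals can be evaluated by direct quadrature. We assume the index-matched, aberration-free special case, where the transmission coefficients reduce to 1 and Λ reduces to the defocus term −z n_i cosθ, so the three integrands become (1 + cosθ), sinθ, and (1 − cosθ); parameter values are illustrative only.

```python
import numpy as np
from scipy.special import jv
from scipy.integrate import trapezoid

def dipole_integrals(r, z, wavelength=660e-9, NA=1.45, ni=1.515, n=2000):
    """I_0, I_1, I_2 at radial distance r and defocus z (index-matched case)."""
    k = 2 * np.pi / wavelength
    alpha = np.arcsin(NA / ni)
    theta = np.linspace(0.0, alpha, n)
    # B_m(theta) = sqrt(cos) * sin * J_m(k r n_i sin) * exp(ik Lambda)
    base = np.sqrt(np.cos(theta)) * np.sin(theta) \
           * np.exp(-1j * k * z * ni * np.cos(theta))
    krs = k * r * ni * np.sin(theta)
    I0 = trapezoid(base * jv(0, krs) * (1 + np.cos(theta)), theta)
    I1 = trapezoid(base * jv(1, krs) * np.sin(theta), theta)
    I2 = trapezoid(base * jv(2, krs) * (1 - np.cos(theta)), theta)
    return I0, I1, I2

I0, I1, I2 = dipole_integrals(r=100e-9, z=400e-9)
```

On the optical axis (r = 0) the Bessel factors force I₁ = I₂ = 0, so only the I₀ term contributes there, which is a convenient consistency check.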

The orientation-dependent intensity in the detector plane is accordingly given by

$$\begin{aligned}
h_{\theta_p,\phi_p}(\mathbf{x};\mathbf{x}_p,\boldsymbol{\tau}) &= \|\boldsymbol{\varepsilon}\|^2\\
&= \sin^2\theta_p\left(|I_0|^2 + |I_2|^2 + 2\cos(2\phi_p - 2\phi_d)\,\mathrm{Re}\{I_0^* I_2\}\right)\\
&\quad - 2\sin(2\theta_p)\cos(\phi_p - \phi_d)\,\mathrm{Im}\{I_1^*(I_0 + I_2)\} + 4|I_1|^2\cos^2\theta_p\\
&= \mathbf{p}^T \mathbf{M}\,\mathbf{p}.
\end{aligned}$$

An asterisk denotes complex conjugation, and $\mathbf{v}^T$ stands for the Hermitian transpose of the vector v; Re{v} and Im{v} represent the real and imaginary components of v, respectively. The symmetric matrix $\mathbf{M} = (m_{ij})_{1\le i,j\le 3}$ is specified by

$$\begin{aligned}
m_{11} &= |I_0|^2 + |I_2|^2 + 2\,\mathrm{Re}\{I_0^* I_2\}\cos 2\phi_d\\
m_{12} &= 2\,\mathrm{Re}\{I_0^* I_2\}\sin 2\phi_d\\
m_{13} &= -2\cos\phi_d\,\mathrm{Im}\{I_1^*(I_0 + I_2)\}\\
m_{22} &= |I_0|^2 + |I_2|^2 - 2\,\mathrm{Re}\{I_0^* I_2\}\cos 2\phi_d\\
m_{23} &= -2\sin\phi_d\,\mathrm{Im}\{I_1^*(I_0 + I_2)\}\\
m_{33} &= 4|I_1|^2.
\end{aligned}$$

The quadratic form in Eq. (12) is of particular interest, since it decouples the dipole orientation from the calculation of the field propagation. The dipole diffraction pattern can thus be modeled as the linear combination of six non-orthogonal templates, which forms the basis of the steerable filter-based algorithm presented in this work.
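As a minimal illustration of this decoupling, the sketch below assembles M from given values of I₀, I₁, I₂ at a single detector point and steers it with the dipole orientation via h = pᵀMp. The integral values are arbitrary placeholders, and the signs of the off-diagonal entries follow our expansion of ‖ε‖²; treat them as assumptions of the sketch.

```python
import numpy as np

def dipole_intensity(I0, I1, I2, phi_d, theta_p, phi_p):
    """h = p^T M p for one detector point, with M built from I_0, I_1, I_2."""
    c2, s2 = np.cos(2 * phi_d), np.sin(2 * phi_d)
    re02 = (np.conj(I0) * I2).real          # Re{I0* I2}
    im1 = (np.conj(I1) * (I0 + I2)).imag    # Im{I1* (I0 + I2)}
    M = np.array([
        [abs(I0)**2 + abs(I2)**2 + 2 * re02 * c2, 2 * re02 * s2, -2 * np.cos(phi_d) * im1],
        [2 * re02 * s2, abs(I0)**2 + abs(I2)**2 - 2 * re02 * c2, -2 * np.sin(phi_d) * im1],
        [-2 * np.cos(phi_d) * im1, -2 * np.sin(phi_d) * im1, 4 * abs(I1)**2]])
    p = np.array([np.sin(theta_p) * np.cos(phi_p),
                  np.sin(theta_p) * np.sin(phi_p),
                  np.cos(theta_p)])
    return p @ M @ p

h = dipole_intensity(1.0 + 0.3j, 0.2 - 0.1j, 0.1 + 0.2j,
                     phi_d=0.7, theta_p=1.0, phi_p=0.4)
```

Since h is the squared field magnitude, the quadratic form is nonnegative for every orientation, which is a useful check on the matrix entries.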

The result of Eq. (12) is consistent with the other vectorial PSF models cited earlier [28–32]: for a fluorophore that is freely rotating during exposure, the resulting intensity is obtained by integrating the above result over all possible orientations, i.e.,

$$\begin{aligned}
h(\mathbf{x};\mathbf{x}_p,\boldsymbol{\tau}) &= \int_0^{2\pi}\!\!\int_0^{\pi} h_{\theta_p,\phi_p}(\mathbf{x};\mathbf{x}_p,\boldsymbol{\tau})\,\sin\theta_p\, d\theta_p\, d\phi_p\\
&= \frac{8\pi}{3}\left(|I_0|^2 + 2|I_1|^2 + |I_2|^2\right).
\end{aligned}$$

An illustration of the different diffraction patterns observed for different orientation and defocus values is provided in Fig. 11 in the appendix.

2.1. Noise model

In single molecule fluorescence microscopy, shot noise is the dominant source of noise. Further potential sources include a background term due to autofluorescence, as well as residual signals from other fluorophores in the sample. Read-out noise in the detector may also contribute to observations. Whereas the former sources obey Poisson statistics, read-out noise is Gaussian distributed, which calls for an additive noise model. However, since the Poisson distribution rapidly converges towards a Gaussian with equal mean and variance when the variance is large enough (usually considered to be the case when σ² > 10), we propose a general noise model consisting of a shifted Poisson formulation that incorporates a term accounting for the read-out noise and the background. Consequently, we formulate the expected photon count q̄(x; x_p, τ) at a point x on the detector (in object-space coordinates) as

$$\bar q(\mathbf{x};\mathbf{x}_p,\boldsymbol{\tau}) = c\cdot\left(A\, h_{\theta_p,\phi_p}(\mathbf{x};\mathbf{x}_p,\boldsymbol{\tau}) + b\right),$$

where A is the amplitude, c is a conversion factor, and b is the sum of the background fluorescence signal and the variance $\sigma_r^2$ (in intensity) of the read-out noise. The probability of detecting q photons at x is then given by

$$P_{\bar q(\mathbf{x};\mathbf{x}_p,\boldsymbol{\tau})}(q) = \frac{e^{-\bar q(\mathbf{x};\mathbf{x}_p,\boldsymbol{\tau})}\,\bar q(\mathbf{x};\mathbf{x}_p,\boldsymbol{\tau})^{q}}{q!}.$$
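A minimal simulation of this observation model can be sketched as follows; the Gaussian blob below merely stands in for the dipole pattern h, and all numerical values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
y, x = np.mgrid[-10:11, -10:11]
h = np.exp(-(x**2 + y**2) / (2 * 3.0**2))   # placeholder pattern, NOT the dipole model
A, b, c = 200.0, 5.0, 1.0                   # amplitude, background + read-out variance, gain
q_bar = c * (A * h + b)                     # expected count per pixel
q = rng.poisson(q_bar)                      # one noisy acquisition
```

Because the offset b is folded into the Poisson mean, every pixel (including pure background) carries shot-noise-like fluctuations with variance equal to its mean, which is exactly the shifted-Poisson behavior described above.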

2.2. Pixelation

The pixelation of the detector has a non-negligible effect on the measured intensity distribution corresponding to a dipole, and must therefore be taken into account in the implementation of the model. Hereinafter, we assume that whenever a point of observation x represents a pixel on the CCD, functions of x incorporate integration over the pixel’s area (pixels are assumed to be contiguous and non-overlapping).
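One common way to carry out this per-pixel integration (an implementation assumption of this sketch, not a prescription from the paper) is to evaluate the continuous model on an oversampled grid and block-average within each pixel:

```python
import numpy as np

def pixelate(model, n_pix=15, pixel_size=100.0, oversample=4):
    """Average `model(x, y)` (object-space coordinates, e.g. nm) over
    contiguous, non-overlapping pixels via oversampling and block-averaging."""
    fine = pixel_size / oversample
    n = n_pix * oversample
    # subsample centers covering n_pix pixels, centered on the array
    coords = (np.arange(n) + 0.5) * fine - n_pix * pixel_size / 2
    X, Y = np.meshgrid(coords, coords)
    img = model(X, Y)
    # average the oversample x oversample block inside each pixel
    return img.reshape(n_pix, oversample, n_pix, oversample).mean(axis=(1, 3))

img = pixelate(lambda x, y: np.exp(-(x**2 + y**2) / (2 * 150.0**2)))
```

Higher oversampling factors approximate the true area integral more closely at a proportional computational cost; 4 x 4 subsamples per pixel is a typical trade-off.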

3. Dipole localization using steerable filters

A standard solution to estimating the position and orientation of arbitrarily rotated image features consists in correlating the input data with a set of filters corresponding to rotated versions of the feature template. This is the approach currently used in joint dipole orientation and position estimation methods [16, 18]. Mathematically, it is formulated as

$$(\theta_p^*(\mathbf{x}),\ \phi_p^*(\mathbf{x})) = \arg\max_{(\theta_p,\phi_p)}\ (f * g_{\theta_p,\phi_p})(\mathbf{x}),$$

where $\theta_p^*(\mathbf{x})$ and $\phi_p^*(\mathbf{x})$ are the optimal orientations at every point x, f(x) is the measured data, * is the convolution operator, and $g_{\theta_p,\phi_p}(\mathbf{x}) = h_{\theta_p,\phi_p}(\mathbf{x}; (0,0,z_p), \boldsymbol{\tau})$ is a feature template corresponding to the dipole diffraction pattern for the orientation (θp, ϕp). The accuracy of $\theta_p^*(\mathbf{x})$ and $\phi_p^*(\mathbf{x})$ depends on the sampling of θp and ϕp chosen for the optimization; for accuracies of a few degrees, this requires convolution with hundreds of templates, rendering the process very costly.

For specific feature template functions, rotation of the templates can be decoupled from the convolution by decomposing the template into a small number of basis templates that are interpolated by functions of the orientation. These functions, called steerable filters, were introduced by Freeman and Adelson [23]. The best known class of steerable filters are Gaussian derivatives, due to their applicability to edge and ridge detection in image processing [34].

The quadratic form of Eq. (12) shows that the 3-D rotation of a dipole can be decoupled from filtering; i.e., that the orientation and position estimation can be expressed through a 3-D steerable filter. Specifically, the dipole diffraction pattern is decomposed into six non-orthogonal templates, as illustrated in Fig. 2. Orientation estimation then amounts to filtering measured dipole patterns with these templates, followed by optimizing the trigonometric weighting functions that interpolate the dipole model. This process is illustrated in the filterbank representation of Fig. 3.


Fig. 2. High-resolution versions of the templates involved in the steerable decomposition of a dipole diffraction pattern. The example corresponds to a Cy5 dipole (λ = 660 nm) at an air/glass interface, imaged with a 100×, 1.45 NA oil immersion objective at 500 nm defocus. The template labels match the definitions given in Eq. (13).



Fig. 3. Filterbank implementation of the steerable dipole filters.


For each spatial location x, the dipole model is fitted to the data by minimizing the least-squares criterion

$$\begin{aligned}
J_{LS}(\mathbf{x};\theta_p,\phi_p) &= \int_{\Omega}\left(A\, h_{\theta_p,\phi_p}(\mathbf{v};\mathbf{x}_p,\boldsymbol{\tau}) - f(\mathbf{x}-\mathbf{v})\right)^2 d\mathbf{v}\\
&= \left\|A\, h_{\theta_p,\phi_p}\right\|^2 + \int_{\Omega} f(\mathbf{x}-\mathbf{v})^2\, d\mathbf{v} - 2A\,(h_{\theta_p,\phi_p} * f)(\mathbf{x}),
\end{aligned}$$

where f(x) is the observed signal from which the average background value has been subtracted, and where Ω ⊂ ℝ² is the support of $h_{\theta_p,\phi_p}$. The correlation term is steerable and can be expressed as

$$(h_{\theta_p,\phi_p} * f)(\mathbf{x}) = \sum_{i}\sum_{j} a_{ij}(\theta_p,\phi_p)\,(m_{ij} * f)(\mathbf{x}) = \mathbf{p}^T \mathbf{M}_f\,\mathbf{p},$$

where $a_{ij}(\theta_p, \phi_p)$ are the weighting functions given in Fig. 3, and where $[\mathbf{M}_f]_{ij} = m_{ij} * f$. Concretely, this means that we can compute Eq. (19) for each location x very efficiently by first convolving the image f with the six templates $m_{ij}$ (which yields $\mathbf{M}_f$), followed by applying the trigonometric weights $a_{ij}(\theta_p, \phi_p)$, which amounts to computing the quadratic form $\mathbf{p}^T\mathbf{M}_f\,\mathbf{p}$. Due to the symmetry of M, only six basis templates are involved in the process.
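The two-stage computation (six convolutions performed once, then cheap steering for every trial orientation) can be sketched as follows; the random templates below merely stand in for the precomputed dipole basis.

```python
import numpy as np
from scipy.ndimage import convolve

rng = np.random.default_rng(1)
image = rng.random((64, 64))
# random stand-ins for the six basis templates m_ij (i <= j)
templates = {(i, j): rng.standard_normal((11, 11)) for i in range(3) for j in range(i, 3)}

# six convolutions, performed once per image
Mf = {ij: convolve(image, t, mode='constant') for ij, t in templates.items()}

def steered_response(theta_p, phi_p):
    """p^T M_f p evaluated at every pixel for a trial orientation."""
    p = np.array([np.sin(theta_p) * np.cos(phi_p),
                  np.sin(theta_p) * np.sin(phi_p),
                  np.cos(theta_p)])
    out = np.zeros_like(image)
    for (i, j), m in Mf.items():
        out += (1 if i == j else 2) * p[i] * p[j] * m   # symmetry of M_f
    return out

resp = steered_response(np.pi / 3, np.pi / 4)
```

Once the six filtered images are in memory, each new orientation costs only six scalar-weighted sums over the image, which is what makes a fine angular search tractable compared to matched filtering.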

In order to simplify the expression for the optimization of Eq. (18), we rewrite the model energy term, which is independent of ϕp, as

$$\left\|A\, h_{\theta_p,\phi_p}(\mathbf{x};\mathbf{x}_p,\boldsymbol{\tau})\right\|^2 = A^2\,\mathbf{u}_{\theta_p}^T\mathbf{E}\,\mathbf{u}_{\theta_p},$$

where $\mathbf{u}_{\theta_p} = (\sin^2\theta_p,\ \sin 2\theta_p,\ \cos^2\theta_p)^T$, and where E is defined as

$$\mathbf{E} = \begin{bmatrix} \langle m_{11}^2\rangle & \langle m_{11}m_{13}\rangle & \langle m_{11}m_{33}\rangle\\ \langle m_{11}m_{13}\rangle & \langle m_{13}^2\rangle & \langle m_{13}m_{33}\rangle\\ \langle m_{11}m_{33}\rangle & \langle m_{13}m_{33}\rangle & \langle m_{33}^2\rangle \end{bmatrix}.$$

The notation $\langle m_{ij}\rangle$ stands for integration over the support Ω of $m_{ij}(\mathbf{x})$. We can then rewrite the cost criterion as

$$J(\mathbf{x};\theta_p,\phi_p) = A^2\,\mathbf{u}_{\theta_p}^T\mathbf{E}\,\mathbf{u}_{\theta_p} - 2A\,\mathbf{p}^T\mathbf{M}_f\,\mathbf{p}.$$

The data term $\int_{\Omega} f(\mathbf{x}-\mathbf{v})^2\, d\mathbf{v}$ in Eq. (18) has no effect on the optimization and was eliminated from the criterion.

3.1. Orientation estimation

The criterion in Eq. (22) cannot be solved in closed form for θp, ϕp, and A. However, for a given set of templates corresponding to a position estimate, an iterative algorithm is obtained by setting the partial derivatives

$$\begin{aligned}
\frac{\partial}{\partial\theta_p} J(\mathbf{x};\theta_p,\phi_p) &= 2A\left(A\,\mathbf{u}_{\theta_p}^T\mathbf{E}\,\partial_{\theta_p}\mathbf{u}_{\theta_p} - 2\,\mathbf{p}^T\mathbf{M}_f\,\partial_{\theta_p}\mathbf{p}\right)\\
\frac{\partial}{\partial\phi_p} J(\mathbf{x};\theta_p,\phi_p) &= -4A\,\mathbf{p}^T\mathbf{M}_f\,\partial_{\phi_p}\mathbf{p}
\end{aligned}$$

to zero, and alternately solving for θp and ϕp; we found this to be more efficient than a standard gradient-descent based approach. The quartic equations whose solution yields the optimal values for both angles are given in the appendix. The notation $\partial_t\mathbf{v}$ stands for the componentwise derivative of the vector v with respect to t. Between these iterations, the amplitude is updated using the least-squares estimator

$$\hat A = \frac{\mathbf{p}^T\mathbf{M}_f\,\mathbf{p}}{\mathbf{u}_{\theta_p}^T\mathbf{E}\,\mathbf{u}_{\theta_p}}.$$

For fixed values of xp and z, the angles can be estimated to sufficient accuracy in a small number of iterations (usually less than five).
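The alternating scheme can be sketched with synthetic matrices standing in for E and M_f; here a bounded 1-D scalar minimization replaces the paper's quartic root-finding, which is a simplification of this sketch rather than the method as published.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
G = rng.standard_normal((3, 3))
E = G @ G.T                        # symmetric positive-definite stand-in for E
Mf = rng.standard_normal((3, 3))
Mf = Mf + Mf.T                     # symmetric stand-in for M_f

def u_vec(tp):
    return np.array([np.sin(tp)**2, np.sin(2 * tp), np.cos(tp)**2])

def p_vec(tp, pp):
    return np.array([np.sin(tp) * np.cos(pp), np.sin(tp) * np.sin(pp), np.cos(tp)])

def cost(tp, pp, A):
    u, p = u_vec(tp), p_vec(tp, pp)
    return A**2 * (u @ E @ u) - 2 * A * (p @ Mf @ p)

tp, pp, A = 0.5, 0.5, 1.0
for _ in range(5):                 # a handful of iterations suffices
    tp = minimize_scalar(lambda t: cost(t, pp, A), bounds=(0, np.pi), method='bounded').x
    pp = minimize_scalar(lambda f: cost(tp, f, A), bounds=(0, 2 * np.pi), method='bounded').x
    A = (p_vec(tp, pp) @ Mf @ p_vec(tp, pp)) / (u_vec(tp) @ E @ u_vec(tp))
```

With the closed-form amplitude update applied last, the attained cost is guaranteed nonpositive, mirroring the behavior of the least-squares estimator in the text.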


Fig. 4. Schematic outline of the proposed detection algorithm. The steerable filter-based component yields results that are accurate at the pixel level (*finer scale results can be obtained by applying shifted versions of the feature templates). Every update of the position estimates generates a new set of appropriately shifted templates, from which the orientation is estimated at little cost by making use of the steerable decomposition.


3.2. Sub-pixel position estimation

Discretization of Eq. (22) yields a filter-based algorithm that returns position estimates with pixel-level accuracy. In order to recover super-resolved position estimates, an iterative fitting algorithm analogous to the Gaussian-based fitting algorithms employed by FLM can be applied to the dipole model in Eq. (12). In this work, this was achieved with the Levenberg-Marquardt algorithm. An alternative would be to use a maximum-likelihood-based cost criterion, which is optimal with respect to the Poisson-based noise model [12]. However, the localization accuracy achieved using the proposed least-squares criterion is near-optimal in the sense that it reaches the theoretical limits (see Section 4).
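As an illustration of this refinement step, the sketch below fits a continuous model to pixelated data with scipy's least-squares solver; a Gaussian spot stands in for the dipole model, and the default trust-region reflective solver plays the role of Levenberg-Marquardt. All parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(3)
yy, xx = np.mgrid[0:15, 0:15].astype(float)

def model(params):
    """Placeholder spot model with sub-pixel center (x0, y0) and amplitude A."""
    x0, y0, A = params
    return A * np.exp(-((xx - x0)**2 + (yy - y0)**2) / (2 * 2.0**2))

truth = np.array([7.3, 6.8, 120.0])
data = model(truth) + rng.normal(0.0, 1.0, xx.shape)   # noisy synthetic spot

# start from the pixel-level estimate and refine to sub-pixel accuracy
fit = least_squares(lambda p: (model(p) - data).ravel(), x0=[7.0, 7.0, 100.0])
x_hat, y_hat, A_hat = fit.x
```

Starting from the pixel-level detection, a few solver iterations recover the center to a small fraction of a pixel, which is the analogue of the super-resolved position estimate in the text.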

3.3. Joint estimation algorithm

In addition to the lateral position (xp, yp), amplitude A, and orientation (θp, ϕp) of dipoles, which can be estimated using the proposed steerable filter in the minimization of Eq. (22), the axial position zp of the dipole and the position of the focus z are further degrees of freedom in the model. Estimation of these two parameters can be challenging due to their mutual influence in the phase term of Eq. (8); possible solutions for resolving this ambiguity were discussed in [12], in the framework of 3-D localization of isotropic point sources.

However, for the distances zp from the coverslip that are observable under TIRF (total internal reflection fluorescence) excitation, which is currently the method of choice for FLM due to its thin excitation depth (around 100 nm), the axial shift-variance of the PSF is essentially negligible. This means that a dipole with axial position zp that is observed at the focal distance z is virtually indistinguishable from an identical dipole at the origin that is observed at z − zp. Consequently, we set zp = 0 for our experiments.

For a given value of defocus z, the minimization of Eq. (22) is achieved in two steps. The input image is first filtered with the six templates corresponding to z, which yields orientation and position estimates that are accurate at the pixel level. This is followed by an iterative optimization that refines the position estimates to sub-pixel accuracy. At each iteration, a new set of templates corresponding to the sub-pixel position estimates is evaluated; the corresponding orientation is computed with the method described in Section 3.1. If required, the estimation of z can be included between steps of this iterative procedure, both at the pixel level and the super-resolution level. A simplified flowchart representation of the algorithm is given in Fig. 4.

4. Dipole localization accuracy

Resolution in FLM is defined as a function of localization accuracy. Theoretical limits on localization accuracy can be established based on the Cramér-Rao bounds for the parameters of interest (see [11, 35] for an analysis of lateral localization accuracy, and [12] for axial localization accuracy). The CRB yields a lower bound on the variance of an estimator, provided that the latter is unbiased. For multi-parameter estimation, this bound is obtained by computing the inverse of the Fisher information matrix [36]. For our image formation model, this matrix is defined through

$$F_{ij} = \int_{\Omega} \frac{1}{\bar q}\,\frac{\partial \bar q}{\partial\vartheta_i}\,\frac{\partial \bar q}{\partial\vartheta_j}\, d\mathbf{x},$$

where the parameters to be estimated are ϑ = (xp, yp, θp, ϕp, z, A). In the matrix F, the cross-terms between ϕp and the other parameters are zero, which slightly simplifies the inversion. The CRBs for ϑ = (xp, yp, θp, z, A) are thus given by

$$\mathrm{Var}(\hat\vartheta_i) \ge \left[\mathbf{F}^{-1}\right]_{ii},$$

and the bound for ϕp by

$$\mathrm{Var}(\hat\phi_p) \ge \left(\int_{\Omega} \frac{1}{\bar q}\left(\frac{\partial\bar q}{\partial\phi_p}\right)^2 d\mathbf{x}\right)^{-1}.$$

The partial derivatives of Eq. (12) relevant to the computation of the CRBs are given in the appendix.
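The Fisher-information recipe can be sketched numerically as follows; a Gaussian spot with parameters (x0, y0, A) stands in for the dipole model, and the derivatives of q̄ are taken by central differences. Everything here is an illustrative stand-in, not the paper's computation.

```python
import numpy as np

yy, xx = np.mgrid[0:15, 0:15].astype(float)

def q_bar(theta):
    """Expected count: placeholder Gaussian spot plus constant background."""
    x0, y0, A = theta
    return A * np.exp(-((xx - x0)**2 + (yy - y0)**2) / (2 * 2.0**2)) + 5.0

def fisher(theta, eps=1e-5):
    """Fisher matrix F_ij = sum_x (1/q_bar) dq/dtheta_i dq/dtheta_j."""
    q = q_bar(theta)
    grads = []
    for i in range(len(theta)):
        d = np.zeros(len(theta)); d[i] = eps
        grads.append((q_bar(theta + d) - q_bar(theta - d)) / (2 * eps))
    return np.array([[np.sum(gi * gj / q) for gj in grads] for gi in grads])

F = fisher(np.array([7.0, 7.0, 500.0]))
crb = np.linalg.inv(F)
sigma_x = np.sqrt(crb[0, 0])   # lower bound on the std of the x-estimate, in pixels
```

Inverting F and reading off the diagonal gives the per-parameter lower bounds; for a bright spot the resulting localization bound is a small fraction of a pixel, in line with the scaling behavior discussed above.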

An example of the CRBs for all parameters is shown in Fig. 5, as a function of z and θp. As the different plots illustrate, the bounds are relatively flat for a large range of values. In-focus imaging leads to lower accuracy for orientation and defocus estimation. Off-focus imaging, on the other hand, preserves excellent localization accuracy in the plane, while leading to higher accuracies for orientation and defocus estimation. Note that these observations hold true for the general case; changing the system and acquisition parameters essentially amounts to a scaling of the bounds.

4.1. Performance of the algorithm

The localization accuracy of the proposed algorithm was evaluated by performing the fit on simulated acquisitions generated using the image formation model in Eq. (15). In Fig. 6, we show standard deviations of the estimation results for each parameter compared to the associated CRB. The standard deviations consistently match the value of the CRB, which indicates that the performance of the algorithm is near-optimal. A fully optimal solution for the proposed Poisson-based noise model would consist in a maximum-likelihood formulation. However, this would lead to a computationally less efficient solution compared to the filter-based approach obtained through the proposed least-squares formulation.

5. Results

5.1. Experimental setup

To demonstrate the performance of our algorithm experimentally, we imaged single Cy5 dipoles at an air/glass interface. Samples were prepared by drying a solution of Cy5 molecules bound to single-stranded DNA (ssDNA) fragments onto a coverslip (isolated Cy5 binds to the glass surface at a specific orientation, leading to limited diversity in the observed diffraction patterns). These were then imaged using TIRF excitation at a wavelength of λ = 635 nm. Imaging was performed on the microscope setup described in [37] (α Plan-Fluar 100 × 1.45 NA objective with Immersol 518F (Carl Zeiss Jena, Jena, Germany)), where we added a highly sensitive CCD camera (LucaEM S 658M (Andor, Belfast, Northern Ireland), 10 × 10 μm2 pixels) in the image plane. Images were acquired with an EM gain setting of 100 (the shot noise variance distortion resulting from the electron multiplying process [38] can be compensated in the background term of Eq. (15) in order to conserve the Poisson statistics). Despite weak photostability in air, individual molecules could be imaged for several tens of seconds before bleaching.


Fig. 5. Cramér-Rao bounds for (a) xp and yp, (b) z, (c) θp, (d) ϕp, and (e) A. The two surfaces in (a) are the maximum and minimum values of the bound for xp, and vice-versa for yp; the localization accuracy for these parameters varies as a function of ϕp. System parameters: ni = 1.515, ns = 1.00, λ = 565 nm. Average PSNR = 35 dB, background intensity level 20%.



Fig. 6. Performance of the 3-D steerable filter-based estimation of (a) xp and yp, (b) z, (c) θp and ϕp, and (d) A. The solid lines show the CRB for each parameter, and the markers correspond to the standard deviation of the estimation over 250 realizations of noise for each point. Parameters: NA = 1.45, ni = 1.515, ns = 1.00, z = 400 nm, average PSNR = 34 dB, background intensity level 10%.


5.2. Evaluation

For an experimental assessment of the accuracy of our algorithm, we applied the estimation to image sequences of individual molecules taken under identical acquisition conditions. Standard deviations on position, orientation, and defocus for such an experiment are given in Table 1; some frames from the sequence used are shown in Fig. 7. The results are in good agreement with the values predicted by the CRB. Specifically, it is the relative magnitude between the different standard deviations that precisely matches the behavior predicted by the CRB (for comparison, see Fig. 6). Although the average PSNR was relatively high for these acquisitions (26.43 dB, computed with respect to the average of all acquisitions), they provide an indication of experimentally achievable accuracy.

In a similar experiment, we compared the estimation accuracy over a series of frames of the same dipole taken at different levels of defocus. The resulting values, given in Table 2, indicate that the algorithm and image formation model are consistent and correspond well to the observed measurements. The frames from this experiment are shown in Fig. 8.

The performance of the steerable filter-based estimation of orientation and position is illustrated in Fig. 9 for simulated data and in Fig. 10 for an experimental acquisition. The image generated using the estimated positions and orientations matches the experimental measurement well, despite some residual aberrations that slightly modify the observed diffraction patterns.

6. Discussion

The method proposed in this paper should be of immediate benefit to optical imaging-based studies of molecular motors, as demonstrated in the DOPI paper by Toprak et al. [16]. These authors showed that the diffraction pattern of dipolar quantum dots [39] attached to myosin V can be clearly detected and tracked over time. Due to the computational cost of the matched filtering algorithm employed [18], the angular sampling of the templates was limited to 10° for ϕp and to 15° for θp, which also required the selection of dipoles oriented almost orthogonal to the optical axis for better accuracy (see supporting text of [16]). Due to a relatively small change in the observed diffraction pattern for values of θp between 0° and 45°, it was assumed that θp could only be determined with low accuracy using a matched filtering approach [18]. Our theoretical analysis shows that both angles in the parameterization of the dipole’s orientation can be recovered with high accuracy.


Fig. 7. Frames from the experiment described in Table 1. Scale bar: 1 μm.


Table 1. Mean μ and standard deviation σ for position, orientation, and defocus, measured over 22 images of a single Cy5 molecule. Three of these frames are shown in Fig. 7, along with the fitted model.


Fig. 8. Frames from the experiment described in Table 2. Scale bar: 1 μm.


Table 2. Mean μ and standard deviation σ for position and orientation measured over 4 images of two Cy5 molecules. These frames are shown in Fig. 8, along with the fitted model.


Fig. 9. Detection of dipole orientations on simulated data, using the proposed steerable filters. (a) Dipole patterns at random, pixel-level positions and orientations. (b) High-resolution image generated using detected positions and orientations. Parameters: NA = 1.45, ni = 1.515, ns = 1.00, z = 400 nm, average PSNR = 25 dB, background intensity level 20%. Scale bar: 1 μm.


DOPI requires acquisition of a pair of images: a defocused image for orientation estimation, and an in-focus image for Gaussian-based localization. As shown by Enderlein et al. [15], Gaussian-based localization applied to dipole diffraction patterns can introduce a significant bias in the estimated position, even for molecules that are imaged in focus. Our method removes these limitations: it performs well over a wide range of dipole orientations (only the estimation of ϕp for values of θp that are close to zero remains inherently inaccurate), and is relatively insensitive to the amount of defocus used, as shown in Fig. 5.

Recent advances in the development of FLM include extensions to 3-D, using either a combination of cylindrical optics and bivariate Gaussian fitting [40] or an interferometric estimation of fluorophore positions along the optical axis [41], as well as orientation-sensitive imaging using polarization-based techniques [42]. All of these approaches require some modification of the imaging system; the fitting algorithm described in this paper can be readily adapted to include 3-D localization, and is compatible with standard TIRF setups.


Fig. 10. (a) Dipole diffraction patterns for ssDNA-bound Cy5 molecules at an air/glass interface. (b) Diffraction patterns rendered using the orientations and positions estimated from (a), using the proposed algorithm and image formation model. Scale bar: 1 μm.


7. Conclusion

We have introduced an efficient algorithmic framework based on 3-D steerable filters for the estimation of fluorescent dipole positions and orientations. Image formation for fluorescent dipoles can be expressed as a function of six templates weighted by functions of the dipole’s orientation, leading to an effective filter-based estimation at the pixel-level, followed by an iterative refinement that yields super-resolved estimates. Experimental results on Cy5 dipoles demonstrate the potential of the proposed approach; estimation accuracies of the order of 5 nm for position and 2° for orientation are reported.
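The filter-based estimation summarized above can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' code: it assumes the six template correlations have already been computed and arranged into a symmetric 3×3 matrix of images M_f, so that the response at any orientation reduces to the quadratic form pᵀM_f p, where p is the unit vector along the dipole axis.

```python
import numpy as np

def dipole_axis(theta, phi):
    """Unit vector p parameterizing the dipole orientation (theta_p, phi_p)."""
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

def steerable_response(Mf, theta, phi):
    """Filter response p^T M_f p at every pixel.

    Mf: array of shape (3, 3, H, W); Mf[i, j] is the image correlated
    with template m_ij (symmetric in i and j).
    """
    p = dipole_axis(theta, phi)
    return np.einsum('i,ijxy,j->xy', p, Mf, p)
```

Because the image is correlated with the six templates only once, scanning a grid of candidate orientations at each pixel costs no more than evaluating this quadratic form repeatedly.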

A. Appendix

A.1. Quartic equations for orientation estimation

The orientation parameters θp and ϕp are estimated by iteratively solving the two quartic equations given below. The solution for θp is obtained by solving

$$
\begin{aligned}
&\tan^4\theta_p \left( f_{13}\cos\phi_p + f_{23}\sin\phi_p - A e_{12} \right) \\
&\quad + \tan^3\theta_p \left( f_{33} - \bigl( f_{11}\cos^2\phi_p + f_{12}\sin(2\phi_p) + f_{22}\sin^2\phi_p \bigr) + A \left( e_{11} - e_{13} - 2 e_{22} \right) \right) \\
&\quad + 3 \tan^2\theta_p \, A \left( e_{12} - e_{23} \right) \\
&\quad + \tan\theta_p \left( f_{33} - \bigl( f_{11}\cos^2\phi_p + f_{12}\sin(2\phi_p) + f_{22}\sin^2\phi_p \bigr) + A \left( e_{13} - e_{33} + 2 e_{22} \right) \right) \\
&\quad - \left( f_{13}\cos\phi_p + f_{23}\sin\phi_p - A e_{23} \right) = 0
\end{aligned}
$$

for tan θp, where [E]_{ij} = e_{ij}. Similarly, the solution for ϕp is obtained by solving

$$
\begin{aligned}
&\tan^4\phi_p \left( f_{12}^2 \sin^2\theta_p - f_{13}^2 \cos^2\theta_p \right) \\
&\quad + \tan^3\phi_p \left( f_{13} f_{23} \cos^2\theta_p + f_{12} \left( m_{11} - f_{22} \right) \sin^2\theta_p \right) \\
&\quad + \tan^2\phi_p \left( \bigl( \left( f_{11} - f_{22} \right)^2 - 2 f_{12}^2 \bigr) \sin^2\theta_p - \left( f_{13}^2 + f_{23}^2 \right) \cos^2\theta_p \right) \\
&\quad + \tan\phi_p \left( f_{13} f_{23} \cos^2\theta_p - f_{12} \left( f_{11} - f_{22} \right) \sin^2\theta_p \right) \\
&\quad + f_{12}^2 \sin^2\theta_p - f_{23}^2 \cos^2\theta_p = 0
\end{aligned}
$$

for tan ϕp, where [M_f]_{ij} = f_{ij}.
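Each iteration of this alternating scheme thus reduces to a degree-4 polynomial root-finding step; among the real roots, the candidate minimizing the least-squares criterion J is retained. A minimal sketch of that step (function name and root filtering are illustrative, not the authors' code):

```python
import numpy as np

def candidate_angles(c, tol=1e-9):
    """Real roots of the quartic c[0]*t^4 + c[1]*t^3 + ... + c[4] = 0,
    mapped to angles.

    c holds the five coefficients assembled from the e_ij and f_ij terms;
    each real root t is a candidate tan(theta_p) or tan(phi_p).
    """
    roots = np.roots(c)                       # all four complex roots
    real = roots[np.abs(roots.imag) < tol].real
    return np.arctan(real)                    # candidate angles in (-pi/2, pi/2)
```

In the alternating scheme, one would solve for θp with ϕp fixed, re-assemble the coefficients, solve for ϕp, and iterate until convergence.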

A.2. Derivatives of the dipole model

The partial derivatives required in the evaluation of the Fisher information and in the fitting algorithm are

$$
\begin{aligned}
\frac{\partial h_{\theta_p,\phi_p}}{\partial x_p} &= \sin^2\theta_p \left( 2\,\Re\!\left\{ I_0^* \frac{\partial I_0}{\partial x_p} + I_2^* \frac{\partial I_2}{\partial x_p} \right\} + 2 \cos(2\phi_p - 2\phi_d)\,\Re\!\left\{ I_0^* \frac{\partial I_2}{\partial x_p} + I_2^* \frac{\partial I_0}{\partial x_p} \right\} \right) \\
&\quad - 2 \sin(2\theta_p) \cos(\phi_p - \phi_d)\,\Im\!\left\{ I_1^* \left( \frac{\partial I_0}{\partial x_p} + \frac{\partial I_2}{\partial x_p} \right) - \frac{\partial I_1}{\partial x_p} \left( I_0^* + I_2^* \right) \right\} \\
&\quad + 8 \cos^2\theta_p\,\Re\!\left\{ I_1^* \frac{\partial I_1}{\partial x_p} \right\} \\
\frac{\partial h_{\theta_p,\phi_p}}{\partial \theta_p} &= \sin(2\theta_p) \left( |I_0|^2 + |I_2|^2 + 2 \cos(2\phi_p - 2\phi_d)\,\Re\{ I_0^* I_2 \} - 4 |I_1|^2 \right) \\
&\quad - 4 \cos(2\theta_p) \cos(\phi_p - \phi_d)\,\Im\{ I_1^* (I_0 + I_2) \} \\
\frac{\partial h_{\theta_p,\phi_p}}{\partial \phi_p} &= -4 \sin^2\theta_p \sin(2\phi_p - 2\phi_d)\,\Re\{ I_0^* I_2 \} + 2 \sin(2\theta_p) \sin(\phi_p - \phi_d)\,\Im\{ I_1^* (I_0 + I_2) \}.
\end{aligned}
$$
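These closed-form derivatives are straightforward to verify numerically. The sketch below (illustrative only, with arbitrary complex scalars standing in for the integrals I0, I1, I2 at a fixed pixel) implements the intensity model h and compares the analytic derivative with respect to ϕp against a central finite difference:

```python
import numpy as np

def dipole_intensity(theta, phi, I0, I1, I2, phi_d):
    """Intensity h_{theta_p, phi_p} with I0, I1, I2 treated as complex scalars."""
    return (np.sin(theta)**2 * (abs(I0)**2 + abs(I2)**2
            + 2 * np.cos(2*phi - 2*phi_d) * (np.conj(I0) * I2).real)
            - 2 * np.sin(2*theta) * np.cos(phi - phi_d) * (np.conj(I1) * (I0 + I2)).imag
            + 4 * abs(I1)**2 * np.cos(theta)**2)

def dh_dphi(theta, phi, I0, I1, I2, phi_d):
    """Analytic derivative of h with respect to phi_p (last expression above)."""
    return (-4 * np.sin(theta)**2 * np.sin(2*phi - 2*phi_d) * (np.conj(I0) * I2).real
            + 2 * np.sin(2*theta) * np.sin(phi - phi_d) * (np.conj(I1) * (I0 + I2)).imag)
```

The same finite-difference check applies to the derivatives with respect to θp and xp.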

Fig. 11. Intensity distribution generated by a Cy3 dipole (λ = 565 nm) at an air/glass interface for different values of θp and z, imaged with a 100×, 1.45 NA objective. The azimuth angle is fixed at ϕp = 0. The intensities are normalized across every focus value z. Every row, as well as planar rotations of each pattern, can be generated using six unique templates. Scale bar: 500 nm.


Acknowledgments

This work was supported by the Swiss National Science Foundation under grant 200020-109415, as well as by the Hasler foundation under grant 2033.

References and links

1. S. W. Hell, “Microscopy and its focal switch,” Nature Methods 6, 24–32 (2009). [CrossRef]   [PubMed]  

2. G. H. Patterson and J. Lippincott-Schwartz, “A photoactivatable GFP for selective photolabeling of proteins and cells,” Science 297, 1873–1877 (2002). [CrossRef]   [PubMed]  

3. M. Bates, T. R. Blosser, and X. Zhuang, “Short-range spectroscopic ruler based on a single-molecule optical switch,” Phys. Rev. Lett. 94, 108101(1–4) (2005). [CrossRef]   [PubMed]  

4. K. A. Lidke, B. Rieger, T. M. Jovin, and R. Heintzmann, “Superresolution by localization of quantum dots using blinking statistics,” Opt. Express 13, 7052–7062 (2005). [CrossRef]   [PubMed]  

5. R. E. Thompson, D. R. Larson, and W. W. Webb, “Precise nanometer localization analysis for individual fluorescent probes,” Biophys. J. 82, 2775–2783 (2002). [CrossRef]   [PubMed]  

6. E. Betzig, G. H. Patterson, R. Sougrat, O. W. Lindwasser, S. Olenych, J. S. Bonifacio, M. W. Davidson, J. Lippincott-Schwartz, and H. F. Hess, “Imaging intracellular fluorescent proteins at nanometer resolution,” Science 313, 1642–1645 (2006). [CrossRef]   [PubMed]  

7. S. T. Hess, T. P. K. Girirajan, and M. D. Mason, “Ultra-high resolution imaging by fluorescence photoactivation localization microscopy,” Biophys. J. 91, 4258–4272 (2006). [CrossRef]   [PubMed]  

8. M. J. Rust, M. Bates, and X. Zhuang, “Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM),” Nature Methods 3, 793–795 (2006). [CrossRef]   [PubMed]  

9. M. K. Cheezum, W. F. Walker, and W. H. Guilford, “Quantitative comparison of algorithms for tracking single fluorescent particles,” Biophys. J. 81, 2378–2388 (2001). [CrossRef]   [PubMed]  

10. A. Yildiz, J. N. Forkey, S. A. McKinney, T. Ha, Y. E. Goldman, and P. R. Selvin, “Myosin V walks hand-over-hand: single fluorophore imaging with 1.5-nm localization,” Science 300, 2061–2065 (2003). [CrossRef]   [PubMed]  

11. R. J. Ober, S. Ram, and S. Ward, “Localization accuracy in single-molecule microscopy,” Biophys. J. 86, 1185–1200 (2004). [CrossRef]   [PubMed]  

12. F. Aguet, D. Van De Ville, and M. Unser, “A maximum-likelihood formalism for sub-resolution axial localization of fluorescent nanoparticles,” Opt. Express 13, 10,503–10,522 (2005). [CrossRef]  

13. A. P. Bartko and R. M. Dickson, “Imaging three-dimensional single molecule orientations,” J. Phys. Chem. B 103, 11,237–11,241 (1999).

14. R. Schuster, M. Barth, A. Gruber, and F. Cichos, “Defocused wide field fluorescence imaging of single CdSe/ZnS quantum dots,” Chem. Phys. Lett. 413, 280–283 (2005). [CrossRef]  

15. J. Enderlein, E. Toprak, and P. R. Selvin, “Polarization effect on position accuracy of fluorophore localization,” Opt. Express 14, 8111–8120 (2006). [CrossRef]   [PubMed]  

16. E. Toprak, J. Enderlein, S. Syed, S. A. McKinney, R. G. Petschek, T. Ha, Y. E. Goldman, and P. R. Selvin, “Defocused orientation and position imaging (DOPI) of myosin V,” Proc. Natl. Acad. Sci. USA 103, 6495–6499 (2006). [CrossRef]   [PubMed]  

17. M. Böhmer and J. Enderlein, “Orientation imaging of single molecules by wide-field epifluorescence microscopy,” J. Opt. Soc. Am. A 20, 554–559 (2003). [CrossRef]  

18. D. P. Patra, I. Gregor, and J. Enderlein, “Image analysis of defocused single-molecule images for three-dimensional molecule orientation studies,” J. Phys. Chem. A 108, 6836–6841 (2004). [CrossRef]  

19. M. A. Lieb, J. M. Zavislan, and L. Novotny, “Single-molecule orientations determined by emission pattern imaging,” J. Opt. Soc. Am. B 21, 1210–1215 (2004). [CrossRef]  

20. Z. Sikorski and L. M. Davis, “Engineering the collected field for single-molecule orientation determination,” Opt. Express 16, 3660–3673 (2008). [CrossRef]   [PubMed]  

21. M. R. Foreman, C. M. Romero, and P. Török, “Determination of the three-dimensional orientation of single molecules,” Opt. Lett. 33, 1020–1022 (2008). [CrossRef]   [PubMed]  

22. B. Sick, B. Hecht, and L. Novotny, “Orientational imaging of single molecules by annular illumination,” Phys. Rev. Lett. 85, 4482–4485 (2000). [CrossRef]   [PubMed]  

23. W. T. Freeman and E. H. Adelson, “The design and use of steerable filters,” IEEE Trans. Pattern Anal. Mach. Intell. 13, 891–906 (1991). [CrossRef]  

24. E. H. Hellen and D. Axelrod, “Fluorescence emission at dielectric and metal-film interfaces,” J. Opt. Soc. Am. B 4, 337–350 (1987). [CrossRef]  

25. G. W. Ford and W. H. Weber, “Electromagnetic interactions of molecules with metal surfaces,” Phys. Rep. 113, 195–287 (1984). [CrossRef]  

26. S. F. Gibson and F. Lanni, “Experimental test of an analytical model of aberration in an oil-immersion objective lens used in three-dimensional light microscopy,” J. Opt. Soc. Am. A 8, 1601–1613 (1991). [CrossRef]  

27. E. Wolf, “Electromagnetic diffraction in optical systems—I. An integral representation of the image field,” Proc. R. Soc. London A 253, 349–357 (1959). [CrossRef]  

28. B. Richards and E. Wolf, “Electromagnetic diffraction in optical systems—II. Structure of the image field in an aplanatic system,” Proc. R. Soc. London A 253, 358–379 (1959). [CrossRef]  

29. S. W. Hell, G. Reiner, C. Cremer, and E. H. K. Stelzer, “Aberrations in confocal fluorescence microscopy induced by mismatches in refractive index,” J. Microsc. 169, 391–405 (1993). [CrossRef]  

30. P. Török and R. Varga, “Electromagnetic diffraction of light focused through a stratified medium,” Appl. Opt. 36, 2305–2312 (1997). [CrossRef]   [PubMed]  

31. O. Haeberlé, “Focusing of light through a stratified medium: a practical approach for computing microscope point spread functions. Part I: Conventional microscopy,” Opt. Commun. 216, 55–63 (2003). [CrossRef]  

32. A. Egner and S. W. Hell, “Equivalence of the Huygens-Fresnel and Debye approach for the calculation of high aperture point-spread functions in the presence of refractive index mismatch,” J. Microsc. 193, 244–249 (1999). [CrossRef]  

33. M. Born and E. Wolf, Principles of Optics, 7th ed. (Cambridge University Press, 1999).

34. M. Jacob and M. Unser, “Design of steerable filters for feature detection using Canny-like criteria,” IEEE Trans. Pattern Anal. Mach. Intell. 26, 1007–1019 (2004). [CrossRef]  

35. K. A. Winick, “Cramér-Rao lower bounds on the performance of charge-coupled-device optical position estimators,” J. Opt. Soc. Am. A 3, 1809–1815 (1986). [CrossRef]  

36. D. L. Snyder and M. I. Miller, Random point processes in time and space, 2nd ed. (Springer, 1991). [CrossRef]  

37. M. Leutenegger, H. Blom, J. Widengren, C. Eggeling, M. Gösch, R. A. Leitgeb, and T. Lasser, “Dual-color total internal reflection fluorescence cross-correlation spectroscopy,” J. Biomed. Opt. 11, 040502(1–3) (2006). [CrossRef]   [PubMed]  

38. M. S. Robbins and B. J. Hadwen, “The noise performance of electron multiplying charge-coupled devices,” IEEE Trans. Electron Devices 50, 1227–1232 (2003). [CrossRef]  

39. J. Hu, L.-s. Li, W. Yang, L. Manna, L.-w. Wang, and A. P. Alivisatos, “Linearly polarized emission from colloidal semiconductor quantum rods,” Science 292, 2060–2063 (2001). [CrossRef]   [PubMed]  

40. B. Huang, W. Wang, M. Bates, and X. Zhuang, “Three-dimensional super-resolution imaging by stochastic optical reconstruction microscopy,” Science 319, 810–813 (2008). [CrossRef]   [PubMed]  

41. G. Shtengel, J. A. Galbraith, C. G. Galbraith, J. Lippincott-Schwartz, J. M. Gillette, S. Manley, R. Sougrat, C. M. Waterman, P. Kanchanawong, M. W. Davidson, R. D. Fetter, and H. F. Hess, “Interferometric fluorescent super-resolution microscopy resolves 3D cellular ultrastructure,” Proc. Natl. Acad. Sci. USA 106, 3125–3130 (2009). [CrossRef]   [PubMed]  

42. T. J. Gould, M. S. Gunewardene, M. V. Gudheti, V. V. Verkhusha, S.-R. Yin, J. A. Gosse, and S. T. Hess, “Nanoscale imaging of molecular positions and anisotropies,” Nature Methods 5, 1027–1030 (2008). [CrossRef]   [PubMed]  


Tables (2)

Tables Icon

Table 1. Mean μ and standard deviation σ for position, orientation, and defocus, measured over 22 images of a single Cy5 molecule. Three of these frames are shown in Fig. 7, along with the fitted model.

Tables Icon

Table 2. Mean μ and standard deviation σ for position and orientation measured over 4 images of two Cy5 molecules. These frames are shown in Fig. 8, along with the fitted model.
