
On the single point resolution of on-axis digital holography

Open Access

Abstract

On-axis digital holography (DH) is becoming widely used for its time-resolved three-dimensional (3D) imaging capabilities. A 3D volume can be reconstructed from a single hologram. DH is applied as a metrological tool in experimental mechanics, biology, and fluid dynamics, and therefore the estimation and the improvement of the resolution are current challenges. However, the resolution depends on experimental parameters such as the recording distance, the sensor definition, the pixel size, and also on the location of the object in the field of view. This paper derives resolution bounds in DH by using estimation theory. The single point resolution expresses the standard deviations on the estimation of the spatial coordinates of a point source from its hologram. Cramér–Rao lower bounds give a lower limit for the resolution. The closed-form expressions of the Cramér–Rao lower bounds are obtained for a point source located on and out of the optical axis. The influences of the 3D location of the source, the numerical aperture, and the signal-to-noise ratio are studied.

© 2010 Optical Society of America

1. INTRODUCTION

On-axis digital holography (DH) is a metrological tool widely used in experimental mechanics, biology, and fluid dynamics [1], and therefore the estimation and the improvement of the resolution are key issues in this field [2, 3, 4, 5]. As the resolution depends on several experimental parameters (e.g., sensor definition, fill factor, and recording distance) and on the image processing algorithm used to perform the three-dimensional (3D) reconstruction, experimenters need methodologies to tune the experimental setup and to select the reconstruction that will provide the best achievable resolution.

First of all, let us recall that two definitions of resolution can be distinguished [6]: (1) the two point resolution is defined as the system’s capability to determine the separation of two point sources and (2) the single point resolution is defined as the system’s capacity to determine the location of a point source. The first definition is used when the imaging goal is to separate two objects or structures inside an object (for example, in astronomy, in microscopy, etc.). The second definition, directly linked to the accuracy of measurement, is more suited for metrological applications.

The Rayleigh criterion is the most cited resolution criterion in the literature. It is based on a two point resolution [definition (1)] for diffraction limited systems. According to the Rayleigh criterion, two point sources are just resolved if the central maximum of the intensity diffraction pattern produced by one point source coincides with the first zero of the intensity diffraction pattern produced by the other. For an imaging system with a square aperture and numerical aperture Ω, classical Rayleigh resolution limits (lateral δx, δy and axial δz) are well known [7],

$$\delta_x = \delta_y = \frac{\lambda}{\Omega}, \qquad \delta_z = \frac{\lambda}{\Omega^2}. \tag{1}$$

The Rayleigh criterion is based on the assumption of a known point spread function (PSF) and negligible noise. It should be pointed out [6] that, with numerical tools, if the PSF were known, the two-component model of the PSF could be fitted numerically to the observations with respect to the component locations and amplitudes. In a noiseless situation, a perfect fit would result, leading to a virtually unlimited resolution in spite of diffraction. It is therefore clear that any study of the practically achievable resolution in imaging should take the noise level into account (whatever definition of the resolution is chosen).

Resolution limits in DH have been discussed by many authors [2, 4, 5, 8, 9]. The commonly used approach for resolution estimation is to evaluate the Rayleigh (or Sparrow) resolution by estimating the width of the point spread function of the digital holographic system in the reconstructed planes. In this way, not only the effects of diffraction but also the sampling and pixel integration effects are taken into account.

In this paper, we choose to study single point resolution (more adapted to metrological applications), taking into account the effect of the noise level in the image. We present a methodology based on parameter estimation theory (see, e.g., [10]) to estimate the resolution. This type of approach has already been applied to many application fields [11, 12, 13, 14, 15, 16, 17]. Here we apply it to estimate the single point resolution in on-axis DH.

The resolution estimation problem can be formulated as the estimation of a point source location (x,y,z) from the analysis of its hologram (data). As the hologram of a point source can be modeled by a parametric function (parameters x, y, z) and a model of noise, the single point resolution is estimated by means of the variance on the parameters. The Cramér–Rao lower bound (CRLB) [18, 19] gives a lower bound on the variance of the estimators. This bound is reached asymptotically (for large data) by the maximum likelihood estimator. Let us note that the maximum likelihood estimator has already been used with success in DH to detect objects of simple geometrical form, leading to enhanced accuracy [20, 21, 22] and to a widening of the field of view, which is classically limited to the hologram borders.

The paper is organized as follows. In Section 2, we describe a parameter estimation approach to estimate a point source location (x,y,z). The single point resolution is then derived from the CRLB. In Section 3, we give the analytical expressions of the variance calculated on the optical axis. In Section 4, analytical expressions of single point resolution in the whole field of view are given and illustrated by resolution maps. Finally, in Section 5, influences of sampling and pixel integration are discussed.

2. STATISTICAL ESTIMATION OF SINGLE POINT RESOLUTION

In this section, we formulate single point resolution from the viewpoint of statistical parameter estimation theory. We consider here the estimation problem of a point source location (x,y,z) from the analysis of its hologram. The accuracy of the estimates can be determined by calculating the standard deviation of the estimates. The CRLB gives a lower bound for the variance of any unbiased estimator of the parameter to be reconstructed (e.g., x,y,z). The CRLB can thus be used to calculate the theoretical resolution limit.

2A. Single Point Hologram Model

The (optical) system response to a point source can be described by a parametric model. We will consider the case of a hologram of a single point source recorded on an idealized square sensor of side L (sampling and quantization effects are neglected; see Fig. 1). For a point (xk,yk) located on the sensor, the intensity of the diffraction pattern (parametric model gθ) under the Fresnel approximation is a radial chirp function and depends on the axial point source location z and the lateral point source location (x,y),

$$g_\theta(x_k, y_k) = A \,\sin\!\left( \pi\, \frac{(x - x_k)^2 + (y - y_k)^2}{\lambda z} \right), \tag{2}$$
with θ=(x,y,z) being the parameter vector and λ being the wavelength of the light, omitting the proportionality factor and the offset level. Due to noise, the measured image is a perturbed version of the model. As a first approximation, we consider the noise to be additive, white, and Gaussian.
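
To make the model concrete, the following minimal sketch samples Eq. (2) on a square sensor. It is an illustration only (not the authors' code); the default values of the sensor side L, the pixel count, and the wavelength are assumptions taken from the caption of Fig. 1.

```python
# Minimal sketch of the single point hologram model of Eq. (2).
# Default parameters (L, n_pix, wavelength) are assumed from the caption of
# Fig. 1: z = 100 mm, lambda = 0.532 um, Omega = L/(2z) ~ 8.6e-3, so L ~ 1.7 mm.
import numpy as np

def hologram_model(x, y, z, L=1.72e-3, n_pix=256, wavelength=0.532e-6, A=1.0):
    """Intensity pattern g_theta(x_k, y_k) of a point source located at (x, y, z)."""
    coords = (np.arange(n_pix) + 0.5) * L / n_pix - L / 2   # pixel-center coordinates
    xk, yk = np.meshgrid(coords, coords)
    r2 = (x - xk) ** 2 + (y - yk) ** 2
    return A * np.sin(np.pi * r2 / (wavelength * z))

# Example: the configuration illustrated in Fig. 1 (x = 0.2 L, y = 0.4 L, z = 100 mm).
g = hologram_model(x=0.2 * 1.72e-3, y=0.4 * 1.72e-3, z=0.1)
```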

2B. Cramér–Rao Lower Bound

According to the Cramér–Rao inequality, the covariance matrix of any unbiased estimator θ̂={θ̂i}i of the unknown vector parameter θ is bounded from below by the inverse of the so-called Fisher information matrix,

$$\operatorname{var}(\hat{\theta}_i) \geq \left[ I^{-1}(\theta) \right]_{i,i}, \tag{3}$$
where I(θ) is the n×n Fisher information matrix. It is connected with the curvature of the logarithmic-likelihood function ln p(d;θ) [10],
$$[I(\theta)]_{i,j} \overset{\mathrm{def}}{=} -\mathrm{E}\!\left[ \frac{\partial^2 \ln p(d;\theta)}{\partial \theta_i\, \partial \theta_j} \right]. \tag{4}$$
The logarithmic-likelihood function in the case of additive Gaussian white noise (with variance σb²) is given by
$$\ln p(d;\theta) = -\frac{1}{2\sigma_b^2} \sum_{x_k} \sum_{y_k} \left[ d(x_k, y_k) - g_\theta(x_k, y_k) \right]^2 + C, \tag{5}$$
where d represents the data (pixel values), gθ is the parametric model depending on the parameter vector θ, and C is a constant. Using Eqs. (4, 5) and neglecting sampling and quantization effects, the Fisher matrix becomes
$$[I(\theta)]_{i,j} = \frac{1}{\sigma_b^2}\,\frac{1}{L^2} \iint_{\mathrm{sensor}} \frac{\partial g_\theta(x_k, y_k)}{\partial \theta_i}\, \frac{\partial g_\theta(x_k, y_k)}{\partial \theta_j}\; \mathrm{d}x_k\, \mathrm{d}y_k. \tag{6}$$
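
As a cross-check of Eq. (6), the Fisher matrix can also be evaluated numerically by summing finite-difference gradients of the model over the pixel grid. The sketch below is a hedged illustration of this idea; it reuses the hologram_model function assumed in the sketch of Subsection 2A and normalizes the sum by the number of pixels, as in Eq. (17) of Section 5.

```python
# Hedged sketch: numerical Fisher information matrix for white Gaussian noise,
# approximating the model gradients by central finite differences and the
# integral of Eq. (6) by a pixel sum normalized by the number of pixels.
import numpy as np

def fisher_matrix(theta, model, sigma_b=0.1, steps=(1e-8, 1e-8, 1e-6)):
    """3x3 Fisher matrix for theta = (x, y, z); `model` maps (x, y, z) to an image."""
    grads = []
    for i, h in enumerate(steps):
        t_plus = np.array(theta, dtype=float)
        t_minus = t_plus.copy()
        t_plus[i] += h
        t_minus[i] -= h
        grads.append((model(*t_plus) - model(*t_minus)) / (2 * h))  # d g_theta / d theta_i
    n_pixels = grads[0].size
    return np.array([[np.sum(gi * gj) for gj in grads] for gi in grads]) / (sigma_b**2 * n_pixels)

# Example (with hologram_model from the sketch of Subsection 2A):
# I = fisher_matrix((0.0, 0.0, 0.1), hologram_model)
```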

The CRLB is asymptotically (for large samples) reached by maximum likelihood estimators. In DH, where the signal is distributed over the whole sensor, estimation is performed using a large set of independent identically distributed measurements (typically more than 10⁶). The maximum likelihood estimator therefore approaches the theoretical resolution limit given by the CRLB. In practice the logarithmic-likelihood function is maximized by numerical optimization, which in this case amounts to minimizing the squared difference between the measurements and the model. Note that if the minimization technique fails to reach the global minimum or if the noise level is too high, the resulting estimation error will exceed the CRLB.
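
For the white Gaussian noise model, maximizing the likelihood is thus equivalent to a nonlinear least-squares fit of the model to the hologram. The sketch below illustrates this with SciPy; it reuses the hologram_model function assumed above, the source location and starting point are illustrative, and, as noted above, a poor starting point may end in a local minimum.

```python
# Hedged sketch: maximum likelihood estimation of (x, y, z) as a nonlinear
# least-squares fit of the hologram model to noisy data (SNR = A / sigma_b = 10).
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
theta_true = (0.2e-4, 0.4e-4, 0.100)        # illustrative source location (m)
data = hologram_model(*theta_true) + 0.1 * rng.standard_normal((256, 256))

def residuals(theta):
    return (hologram_model(*theta) - data).ravel()     # d - g_theta, flattened

fit = least_squares(residuals, x0=(0.0, 0.0, 0.099))   # needs a reasonable starting point
x_hat, y_hat, z_hat = fit.x                            # far-off starts may hit local minima
```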

3. SINGLE POINT RESOLUTION ON THE OPTICAL AXIS

This section studies the variance of the parameters (x,y,z) in the simple case of a point source located on the optical axis (see Fig. 1). This case is of interest because it leads to an analytical expression of the resolution, which will be compared to the classical Rayleigh formulas of Eq. (1).

3A. Analytical Form of the Fisher Information Matrix

Neglecting the effect of sampling, the Fisher information matrix I can be calculated from Eq. (6). For a square sensor of side L, I has the form

$$[I(\theta)]_{i,j} = \frac{1}{\sigma_b^2}\,\frac{1}{L^2} \int_{-L/2}^{L/2}\!\int_{-L/2}^{L/2} \left. \left( \frac{\partial g_\theta(x_k, y_k)}{\partial \theta_i}\, \frac{\partial g_\theta(x_k, y_k)}{\partial \theta_j} \right) \right|_{\substack{x=0 \\ y=0}} \mathrm{d}x_k\, \mathrm{d}y_k. \tag{7}$$
Using symbolic computation software, the integral (7) can be calculated with the expression of the model (2). It gives an analytical expression composed of many terms that can be significantly simplified assuming
$$L^2 / \lambda z \gg 1. \tag{8}$$
The simple expressions obtained are
$$I_{xx} = I_{yy} \approx \frac{\pi^2 A^2}{\sigma_b^2}\, \frac{L^2}{6\lambda^2 z^2}, \qquad I_{zz} \approx \frac{\pi^2 A^2}{\sigma_b^2}\, \frac{7 L^4}{360\lambda^2 z^4}. \tag{9}$$
Let us note that assumption (8) is always satisfied under classical conditions. It means that more than two oscillations of the chirp function are recorded on the sensor. Due to the parity of ∂gθ(xk,yk)/∂θi evaluated at x=y=0 with respect to xk and yk, the off-diagonal terms (Ixy, Ixz, and Iyz) are null. This simple case of a point source located on the optical axis and a square sensor thus leads to a diagonal Fisher information matrix.

3B. Analytical Form of the Covariance Matrix

As the Fisher matrix is diagonal, the covariance matrix is also diagonal. Standard deviations on lateral (σ̊x,σ̊y) and axial (σ̊z) measurements are given by

$$\mathring{\sigma}_x = \mathring{\sigma}_y = \frac{\lambda}{\Omega}\,\frac{c_1}{\mathrm{SNR}}, \qquad \mathring{\sigma}_z = \frac{\lambda}{\Omega^2}\,\frac{c_2}{\mathrm{SNR}}, \tag{10}$$
with Ω being the numerical aperture (Ω=L/(2z)), c1=√6/(2π)≈0.4, c2=(3/(2π))√(10/7)≈0.6, and SNR=A/σb.
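
As a worked example, evaluating Eq. (10) for the parameters used in Fig. 2 (z = 100 mm, λ = 0.532 μm, Ω = 8.6×10⁻³, SNR = 10) gives lateral and axial bounds of roughly 2.4 μm and 0.4 mm, to be compared with the Rayleigh limits λ/Ω ≈ 62 μm and λ/Ω² ≈ 7.2 mm. A minimal sketch of this computation:

```python
# Worked example of Eq. (10); parameter values are taken from the caption of Fig. 2.
import numpy as np

wavelength, Omega, snr = 0.532e-6, 8.6e-3, 10.0
c1 = np.sqrt(6) / (2 * np.pi)               # ~0.39
c2 = 3 * np.sqrt(10 / 7) / (2 * np.pi)      # ~0.57
sigma_x0 = wavelength / Omega * c1 / snr        # ~2.4e-6 m (lateral bound)
sigma_z0 = wavelength / Omega**2 * c2 / snr     # ~4.1e-4 m (axial bound)
```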

Let us make some remarks about these expressions:

  • Each error is inversely proportional to the signal-to-noise ratio (SNR).
  • The covariance matrix is diagonal: the errors on the estimations of x, y, and z are not correlated.
  • The resolutions on x and z are proportional to the classical resolution formulas (1), with proportionality constants depending on the SNR. However, let us notice that the proportionality factors are different: whereas the Rayleigh criterion gives δz=δx/Ω, our study leads to σ̊z=(c2/c1)σ̊x/Ω with c2/c1≈1.5.

4. SINGLE POINT RESOLUTION MAP

Many studies about resolution (using various approaches) are restricted to the optical axis case. However, as the truncation of the signal varies with the lateral location (x,y) of the source (Fig. 1), the resolution should depend on x and y. The non-uniformity of the resolution in a plane has been presented in [5]. The authors analyzed it qualitatively by considering that the PSF (in reconstructed planes) is shift variant. Here we go further with a quantitative study. In this section we first neglect the sampling effect in order to derive analytical expressions of a single point resolution map. In the next section we discuss the limitations of these analytical expressions and propose a numerical way to calculate the resolution maps taking into account sampling effects and pixel integration.

4A. Analytical Form of the Fisher Information Matrix

In the more general case where the point source is not located on the optical axis but at coordinates (x,y,z), analytical expressions of the resolution can be obtained using the previous approach. In this case the model gθ is translated, which can be interpreted as a shift of the integration limits,

$$[I(\theta)]_{i,j} = \frac{1}{\sigma_b^2}\,\frac{1}{L^2} \int_{-L/2 - x}^{L/2 - x}\!\int_{-L/2 - y}^{L/2 - y} \left. \left( \frac{\partial g_\theta(x_k, y_k)}{\partial \theta_i}\, \frac{\partial g_\theta(x_k, y_k)}{\partial \theta_j} \right) \right|_{\substack{x=0 \\ y=0}} \mathrm{d}x_k\, \mathrm{d}y_k. \tag{11}$$

Using symbolic computation software and hypothesis (8), we obtain the analytical form of I,

$$I_{xx} \approx \frac{\pi^2 A^2}{\sigma_b^2}\,\frac{L^2}{\lambda^2 z^2}\,\frac{1 + 12\bar{x}^2}{6},$$
$$I_{yy} \approx \frac{\pi^2 A^2}{\sigma_b^2}\,\frac{L^2}{\lambda^2 z^2}\,\frac{1 + 12\bar{y}^2}{6},$$
$$I_{xy} = I_{yx} \approx \frac{\pi^2 A^2}{\sigma_b^2}\,\frac{L^2}{\lambda^2 z^2}\,2\bar{x}\bar{y},$$
$$I_{zz} \approx \frac{\pi^2 A^2}{\sigma_b^2}\,\frac{L^4}{\lambda^2 z^4}\,\frac{7 + 120(\bar{x}^2+\bar{y}^2) + 180(\bar{x}^2+\bar{y}^2)^2}{360},$$
$$I_{xz} = I_{zx} \approx -\frac{\pi^2 A^2}{\sigma_b^2}\,\frac{L^3}{\lambda^2 z^3}\,\bar{x}\left(\tfrac{1}{3} + \bar{x}^2 + \bar{y}^2\right),$$
$$I_{yz} = I_{zy} \approx -\frac{\pi^2 A^2}{\sigma_b^2}\,\frac{L^3}{\lambda^2 z^3}\,\bar{y}\left(\tfrac{1}{3} + \bar{x}^2 + \bar{y}^2\right), \tag{12}$$
with x̄=x/L and ȳ=y/L being the non-dimensional coordinates. Let us note that I is no longer diagonal and therefore the estimation errors on x, y, and z are correlated.
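
As a numerical cross-check of these expressions, the hedged sketch below assembles the 3×3 Fisher matrix from the approximate forms of Eq. (12) and inverts it to obtain the covariance matrix discussed in Subsection 4B; the values of L, z, λ, and SNR are illustrative assumptions consistent with Fig. 2.

```python
# Hedged sketch: off-axis Fisher matrix from Eq. (12) and covariance by inversion.
# L, z, wavelength, and SNR = A / sigma_b are assumed (illustrative) values.
import numpy as np

def fisher_off_axis(xb, yb, L=1.72e-3, z=0.1, wavelength=0.532e-6, snr=10.0):
    """Fisher matrix for a point source at normalized coordinates xb = x/L, yb = y/L."""
    c = (np.pi * snr) ** 2 / wavelength ** 2        # pi^2 A^2 / (sigma_b^2 lambda^2)
    rho2 = xb**2 + yb**2
    Ixx = c * L**2 / z**2 * (1 + 12 * xb**2) / 6
    Iyy = c * L**2 / z**2 * (1 + 12 * yb**2) / 6
    Ixy = c * L**2 / z**2 * 2 * xb * yb
    Izz = c * L**4 / z**4 * (7 + 120 * rho2 + 180 * rho2**2) / 360
    Ixz = -c * L**3 / z**3 * xb * (1 / 3 + rho2)
    Iyz = -c * L**3 / z**3 * yb * (1 / 3 + rho2)
    return np.array([[Ixx, Ixy, Ixz], [Ixy, Iyy, Iyz], [Ixz, Iyz, Izz]])

cov = np.linalg.inv(fisher_off_axis(0.3, 0.2))      # covariance matrix, Eq. (13)
sigma_x, sigma_y, sigma_z = np.sqrt(np.diag(cov))   # standard deviations at that location
```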

4B. Analytical Form of the Covariance Matrix

The covariance matrix is obtained by inverting the Fisher information matrix. By introducing σ̊x, σ̊y, and σ̊z obtained in the simple case of a point source located on the optical axis [Eq. (10)], and the non-dimensional radial coordinate ρ̄=√(x̄²+ȳ²), the covariance matrix can be written as

$$\begin{pmatrix}
\left(1 + \dfrac{156\,\bar{x}^2}{K}\right)\mathring{\sigma}_x^2 & \dfrac{156\,\bar{x}\bar{y}}{K}\,\mathring{\sigma}_x\mathring{\sigma}_y & 60\sqrt{\dfrac{7}{15}}\,\dfrac{\bar{x}\,(1 + 3\bar{\rho}^2)}{K}\,\mathring{\sigma}_x\mathring{\sigma}_z \\
\dfrac{156\,\bar{x}\bar{y}}{K}\,\mathring{\sigma}_x\mathring{\sigma}_y & \left(1 + \dfrac{156\,\bar{y}^2}{K}\right)\mathring{\sigma}_y^2 & 60\sqrt{\dfrac{7}{15}}\,\dfrac{\bar{y}\,(1 + 3\bar{\rho}^2)}{K}\,\mathring{\sigma}_y\mathring{\sigma}_z \\
60\sqrt{\dfrac{7}{15}}\,\dfrac{\bar{x}\,(1 + 3\bar{\rho}^2)}{K}\,\mathring{\sigma}_x\mathring{\sigma}_z & 60\sqrt{\dfrac{7}{15}}\,\dfrac{\bar{y}\,(1 + 3\bar{\rho}^2)}{K}\,\mathring{\sigma}_y\mathring{\sigma}_z & \left(1 + \dfrac{60\,\bar{\rho}^2(2 - 3\bar{\rho}^2)}{K}\right)\mathring{\sigma}_z^2
\end{pmatrix}, \tag{13}$$
where K = 7 − 36ρ̄² + 180ρ̄⁴ depends only on the radial coordinate.

4C. Standard Deviation Maps

Standard deviations are given by the square roots of the three diagonal terms of the covariance matrix,

$$\sigma_x = \mathring{\sigma}_x \sqrt{1 + \frac{156\,\bar{x}^2}{7 - 36\bar{\rho}^2 + 180\bar{\rho}^4}}, \qquad \sigma_y = \mathring{\sigma}_y \sqrt{1 + \frac{156\,\bar{y}^2}{7 - 36\bar{\rho}^2 + 180\bar{\rho}^4}}, \tag{14}$$
$$\sigma_z = \mathring{\sigma}_z \sqrt{1 + \frac{60\,\bar{\rho}^2\,(2 - 3\bar{\rho}^2)}{7 - 36\bar{\rho}^2 + 180\bar{\rho}^4}}. \tag{15}$$
Examples of standard deviation maps calculated using Eqs. (14, 15) are presented in Fig. 2. The squares in the centers of the figures represent the sensor boundaries. These maps illustrate that the accuracy of the estimation of the location of a point source varies with its lateral location (x̄,ȳ). This variation is not negligible: on the x error map the maximum is 2.0 times the minimum, while on the z error map the maximum is 3.7 times the minimum. Thanks to the non-dimensional variables x̄ and ȳ, the error maps are invariant up to a multiplicative constant when z or L changes. These analytical results have been compared with numerical integrations of Eq. (11) (see Section 5) to make sure that hypothesis (8) is valid: the relative error on the error maps is negligible (lower than 0.3%).
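
A minimal sketch of how such maps can be computed from Eqs. (14) and (15) is given below; the maps are expressed as ratios to the on-axis values, and the grid extent of two sensor widths mirrors Fig. 2 (the grid density is an arbitrary choice).

```python
# Hedged sketch: normalized resolution maps sigma/sigma_on_axis from Eqs. (14)-(15)
# over an extended field of two sensor widths (|x_bar| <= 1, |y_bar| <= 1).
import numpy as np

xb, yb = np.meshgrid(np.linspace(-1, 1, 401), np.linspace(-1, 1, 401))
rho2 = xb**2 + yb**2
K = 7 - 36 * rho2 + 180 * rho2**2
map_x = np.sqrt(1 + 156 * xb**2 / K)                  # sigma_x / on-axis value, Eq. (14)
map_y = np.sqrt(1 + 156 * yb**2 / K)                  # sigma_y / on-axis value, Eq. (14)
map_z = np.sqrt(1 + 60 * rho2 * (2 - 3 * rho2) / K)   # sigma_z / on-axis value, Eq. (15)
# Over this field, map_x spans a factor of about 2.0 and map_z a factor of
# about 3.7, consistent with the ratios quoted above.
```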

5. INFLUENCE OF SAMPLING AND PIXEL INTEGRATION

The DH reconstruction is in practice transversally limited to the hologram support. Within an inverse problem framework, field extrapolation can be performed to reconstruct fields with a transversal size up to four times the sensor size [22, 23]. Field extrapolation is challenging because the signal recorded on the sensor from out-of-field objects is weak, owing to pixel integration and to the attenuation of the diffraction lobes.

The simplified model (2) used to derive the expressions of the resolution [Eqs. (14, 15)] considers that the amplitude of the signal is independent of the transversal location of the point source, i.e., the signal is not attenuated even when the point source is located far away from the sensor boundaries. The obtained expressions of the resolution therefore cannot be used to assess the asymptotic behavior of the resolution in extrapolated fields: in the limit of an x or y location going to infinity, the formulas give a vanishing variance because signal attenuation and sampling are not taken into account.

We now study the impact of sampling and pixel integration on the resolution maps obtained in the previous section. When pixel integration is taken into account, the model gθp of the diffraction pattern is a chirp function modulated by two sinc functions [1],

$$g_\theta^p(x_k, y_k) = g_\theta(x_k, y_k)\, \operatorname{sinc}\!\left( \frac{\pi \kappa_x \Delta_x (x - x_k)}{\lambda z} \right) \operatorname{sinc}\!\left( \frac{\pi \kappa_y \Delta_y (y - y_k)}{\lambda z} \right), \tag{16}$$
with Δx and Δy being the sampling periods and κxΔx and κyΔy being the width and the height of the active area of the pixel. The integral of Eq. (6) cannot be computed in closed form for this model. We therefore consider the definition of the Fisher information matrix (4), using the sampling given by the sensor,
$$[I(\theta)]_{i,j} = \frac{1}{\sigma_b^2}\,\frac{1}{NM} \sum_{k=1}^{N} \sum_{l=1}^{M} \frac{\partial g_\theta^p(k\Delta_x, l\Delta_y)}{\partial \theta_i}\, \frac{\partial g_\theta^p(k\Delta_x, l\Delta_y)}{\partial \theta_j}, \tag{17}$$
where N is the number of rows and M is the number of columns of the sensor. Equation (17) can be computed for all x̄ and ȳ using fast Fourier transforms (FFTs) by noting that it expresses the correlation between a binary mask (equal to 1 on the sensor and 0 outside) and a product of model gradients, (∂gθp/∂θi)(∂gθp/∂θj) evaluated at x=y=0.
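
A possible implementation of this FFT trick is sketched below: the gradient product is computed once for an on-axis source on a grid covering the extended field, and its correlation with the binary sensor mask gives [I(θ)]i,j for every lateral source offset at once. The grid size, the placement of the mask, and the way the gradients are obtained are illustrative assumptions.

```python
# Hedged sketch of the FFT evaluation of Eq. (17) for all lateral source
# positions: circular correlation between the binary sensor-support mask and
# the product of model gradients computed for an on-axis source.
import numpy as np
from numpy.fft import fft2, ifft2

def fisher_map(grad_i, grad_j, n_sensor, sigma_b=0.1):
    """Map of [I(theta)]_{i,j} versus the lateral shift of the point source.

    grad_i, grad_j: gradients of the pixel-integrated model g^p_theta with
    respect to two parameters, sampled for an on-axis source on a grid that
    covers the extended field (larger than the sensor, ideally zero-padded
    to limit wrap-around of the circular correlation).
    """
    n_big = grad_i.shape[0]
    mask = np.zeros((n_big, n_big))
    mask[:n_sensor, :n_sensor] = 1.0                                    # 1 on the sensor, 0 outside
    corr = np.real(ifft2(np.conj(fft2(mask)) * fft2(grad_i * grad_j)))  # circular correlation
    return corr / (sigma_b**2 * n_sensor**2)                            # normalization of Eq. (17)
```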

The error maps calculated numerically from Eq. (17), with the same experimental parameters as in Subsection 4C, are close to those of Fig. 2. The relative differences (expressed in percent) between the two sets of error maps are presented in Fig. 3. The differences in the classical field (limited by the sensor boundaries) are less than 6% and reach 16% in an extended field of two times the sensor size. When the pixel number increases (with a constant width L) these differences decrease: for N=512 they are less than 1.5% in the classical field and less than 4% in the extended field.

Let us note first that, in agreement with intuition, when pixel integration is taken into account the errors tend to infinity as the point source moves away from the optical axis, since the signal tends to zero. Second, when pixel integration is taken into account, the shape of the error maps depends on z and L, contrary to the ideal sensor case of Sections 3 and 4 (where sampling is neglected).

6. CONCLUSION

On-axis digital holography (DH) is used in metrological applications; thus the resolution issues are essential in this field. Although the DH resolution has already been studied by many authors, quantitative studies have been limited to the resolution on the optical axis. It is well known that the resolution varies with the transversal location, but it has never been precisely quantified to our knowledge. In this paper we proposed a methodology to study resolution maps for DH over the whole field of view. By using a statistical framework and a parametric model of the image we calculate resolution maps from the CRLB. These resolution limits give the best resolutions achievable by any unbiased estimator. These bounds are reached asymptotically (for large data) by a maximum likelihood estimator such as those proposed in [20, 21, 22]. Note that the errors that occur when global optimization fails should also be considered (i.e., inability to numerically compute the maximum likelihood estimator) [20].

Without taking into account the effects of sampling, closed-form expressions of the resolution can be derived. These expressions, which are quick to compute, give a good approximation of the resolution in the classical field of view (limited by the sensor boundaries) for typical experimental setup parameters. The simple model of image formation can be made more complex (e.g., by taking into account sampling and pixel integration); in that case the sums of model gradient products can be computed numerically by FFT.

The methodology employed here to study the resolution in DH can be applied to many other important issues in DH by refining the noise model or changing the image formation model. For example, it can be used to estimate the lower bound of the accuracy of an object location in DH. The parametric model then depends not only on the position parameters (x,y,z) of the object but also on its size and shape. All these parameters are correlated, and a study of the error maps can be used to design an optimal setup (with respect to the accuracy of the object location estimation). Another example is to estimate the resolution improvement obtained by comparing different setups or optical techniques, such as off-axis DH, phase-shifting DH, or color DH.

Recently, maximum a posteriori (MAP) approaches have been shown to lead to reconstructions with few artifacts [24, 25, 26]. The theoretical study of the resolution of the obtained 3D reconstructions is challenging, though. The methodology and results described in this paper provide a lower bound on the resolution achievable with MAP techniques.

A possible extension of this study is to consider both the detection and estimation problems. Shahram and Milanfar followed a detection-theoretic approach for a detailed analysis of two point resolution in conventional imaging [15]. A similar framework could be applied to single point resolution in DH.

ACKNOWLEDGMENT

The authors are grateful to P. Réfrégier and E. Thiébaut for fruitful discussions.

Fig. 1 Illustration of the parametric model gθ(xk,yk) of an on-axis digital hologram of a point source (x=0.2L, y=0.4L, z=100 mm, λ=0.532 μm, Ω=8.6×10⁻³).

Fig. 2 Single point resolution in a transversal plane: (a) x-resolution map normalized by the value of the x-resolution on the optical axis; (b) normalized z-resolution map σz/σ̊z; (c) x-resolution for ȳ=0; (d) z-resolution for ȳ=0; for z=100 mm, λ=0.532 μm, Ω=8.6×10⁻³, and SNR=10. The squares in the centers of the figures represent the sensor boundaries.

Fig. 3 Relative differences (in percent) between the resolution maps of Fig. 2 and the resolution maps taking into account pixel integration, for z=100 mm, λ=0.532 μm, Ω=8.6×10⁻³, SNR=10, Δx=Δy=6.7 μm, κx=κy=1, and N=256. The squares in the centers of the figures represent the sensor boundaries.

REFERENCES

1. T. M. Kreis, Handbook of Holographic Interferometry (Wiley-VCH, 2005).

2. M. Jacquot, P. Sandoz, and G. Tribillon, “High resolution digital holography,” Opt. Commun. 190, 87–94 (2001).

3. A. Stern and B. Javidi, “Improved-resolution digital holography using the generalized sampling theorem for locally band-limited fields,” J. Opt. Soc. Am. A 23, 1227–1235 (2006).

4. J. Garcia-Sucerquia, W. Xu, S. K. Jericho, P. Klages, M. H. Jericho, and H. J. Kreuzer, “Digital in-line holographic microscopy,” Appl. Opt. 45, 836–850 (2006).

5. D. P. Kelly, B. M. Hennelly, N. Pandey, T. J. Naughton, and W. T. Rhodes, “Resolution limits in practical digital holographic systems,” Opt. Eng. (Bellingham) 48, 095801 (2009).

6. A. J. Den Dekker and A. Van den Bos, “Resolution: a survey,” J. Opt. Soc. Am. A 14, 547–557 (1997).

7. J. W. Goodman, Introduction to Fourier Optics (Roberts, 2005).

8. A. Stern and B. Javidi, “Analysis of practical sampling and reconstruction from Fresnel fields,” Opt. Eng. (Bellingham) 43, 239–250 (2004).

9. A. Stern and B. Javidi, “Space-bandwidth conditions for efficient phase-shifting digital holographic microscopy,” J. Opt. Soc. Am. A 25, 736–741 (2008).

10. S. M. Kay, Fundamentals of Statistical Signal Processing: Estimation Theory (Prentice-Hall, 2005).

11. P. Réfrégier, Noise Theory and Application to Physics: From Fluctuations to Information (Springer Verlag, 2004).

12. C. W. Helstrom, “Detection and resolution of incoherent objects by a background-limited optical system,” J. Opt. Soc. Am. 59, 164–175 (1969).

13. S. Ram, E. Sally Ward, and R. J. Ober, “A stochastic analysis of performance limits for optical microscopes,” Multidimens. Syst. Signal Process. 17, 27–57 (2006).

14. S. Van Aert, D. Van Dyck, and A. J. den Dekker, “Resolution of coherent and incoherent imaging systems reconsidered—Classical criteria and a statistical alternative,” Opt. Express 14, 3830–3839 (2006).

15. M. Shahram and P. Milanfar, “Statistical and information-theoretic analysis of resolution in imaging,” IEEE Trans. Inf. Theory 52, 3411–3437 (2006).

16. P. Réfrégier, J. Fade, and M. Roche, “Estimation precision of the degree of polarization from a single speckle intensity image,” Opt. Lett. 32, 739–741 (2007).

17. A. Sentenac, C. A. Guérin, P. C. Chaumet, F. Drsek, H. Giovannini, N. Bertaux, and M. Holschneider, “Influence of multiple scattering on the resolution of an imaging system: a Cramér–Rao analysis,” Opt. Express 15, 1340–1347 (2007).

18. C. R. Rao, “Information and the accuracy attainable in the estimation of statistical parameters,” Bull. Calcutta Math. Soc. 37, 81–89 (1945).

19. H. Cramér, Mathematical Methods of Statistics (Princeton U. Press, 1946).

20. F. Soulez, L. Denis, C. Fournier, E. Thiebaut, and C. Goepfert, “Inverse problem approach for particle digital holography: accurate location based on local optimization,” J. Opt. Soc. Am. A 24, 1164–1171 (2007).

21. S. H. Lee, Y. Roichman, G. R. Yi, S. H. Kim, S. M. Yang, A. van Blaaderen, P. van Oostrum, and D. G. Grier, “Characterizing and tracking single colloidal particles with video holographic microscopy,” Opt. Express 15, 18275–18282 (2007).

22. F. Soulez, L. Denis, E. Thiebaut, C. Fournier, and C. Goepfert, “Inverse problem approach for particle digital holography: out-of-field particle detection made possible,” J. Opt. Soc. Am. A 24, 3708–3716 (2007).

23. L. Denis, D. Lorenz, and D. Trede, “Greedy solution of ill-posed problems: Error bounds and exact inversion,” Inverse Probl. 25, 115017 (2009).

24. S. Sotthivirat and J. A. Fessler, “Penalized-likelihood image reconstruction for digital holography,” J. Opt. Soc. Am. A 21, 737–750 (2004).

25. L. Denis, D. Lorenz, E. Thiebaut, C. Fournier, and D. Trede, “Inline hologram reconstruction with sparsity constraints,” Opt. Lett. 34, 3475–3477 (2009).

26. D. J. Brady, K. Choi, D. L. Marks, R. Horisaki, and S. Lim, “Compressive holography,” Opt. Express 17, 13040–13049 (2009).
