Nanoscale depth reconstruction from defocus: within an optical diffraction model

Open Access

Abstract

Depth from defocus (DFD) is an effective optical method for reconstructing depth from 2D images. However, optical diffraction introduces optical path deviations that blur the image, and this blurring in turn leads to inaccurate DFD depth reconstructions. In this paper, a nanoscale depth reconstruction method from defocus that accounts for optical diffraction is proposed. A blurring model that incorporates optical diffraction is developed, leading to much higher accuracy in depth reconstruction. First, Fresnel diffraction in an optical system is analyzed, and a relationship between the intensity distribution and depth information is derived. Second, a blurred-imaging model based on relative blurring and heat diffusion is developed through curve fitting of a numerical model. In this way, a new DFD method that includes optical diffraction is obtained. Finally, experimental results show that the new algorithm is more effective for depth reconstruction at the nanoscale.

© 2014 Optical Society of America

1. Introduction

Depth from defocus (DFD), as introduced by Pentland [1], is known to be effective for depth reconstruction from 2D optical images. DFD is widely used in many fields such as remote sensing, robotics, and materials science [2, 3].

Traditional DFD methods calculate depth from a measurement of the blurring degree of two blurred images based on geometrical optics [4, 5], in which light is assumed to travel in straight lines. However, geometrical optics is inaccurate for high-resolution depth reconstruction for several reasons. 1) The depth calculation of traditional DFD assumes that optical diffraction can be ignored during the imaging process. Diffraction is a fundamental property of all wave phenomena, and in most optical systems the imaging beam is restricted to a circular hole by a diaphragm, so diffraction is always present. Observing a small object requires an optical system with a high magnification factor, and when the size of the object and of some elements in the optical system approach the wavelength of the imaging light, diffraction becomes more pronounced. 2) Depth reconstruction precision depends strongly on the defocus measurements. In traditional DFD, if the camera parameters of the imaging system are fixed, the defocus is assumed to result entirely from depth variation. In fact, even when the focused image-forming condition is fulfilled, the intensity distribution of a point on the image plane does not converge to a point, because of optical diffraction, as shown in Fig. 1. In this way, optical diffraction itself causes image blurring. Therefore, to improve depth reconstruction precision at the nanoscale, a method that relates optical diffraction to depth variation is necessary.

Fig. 1 Schematic of circular aperture diffraction.

The problem of DFD and optical diffraction has been addressed in a variety of contexts. FitzGerrell et al. [6] presented a two-dimensional function illustrating the effects of defocus on the optical transfer function (OTF) associated with a circularly symmetric pupil function, but their OTF did not consider optical diffraction. Stokseth [7] analyzed the optical properties of an aberration-free defocused optical system and compared the exact diffraction OTF and point spread function (PSF) with their geometrical counterparts. These properties of the defocus transfer function make it useful for analyzing optical systems with circularly symmetric pupils; however, the relationship between depth information and the OTF or PSF was not addressed in that work. In reference [8], a DFD formula for a diffraction-limited imaging system was derived, and the result shows that a correction factor is needed to compensate the traditional DFD reconstruction formula when an imaging system with a large sensor displacement is used. However, lacking a theoretical analysis of optical diffraction, the authors focused on a path-length error caused by diffraction rather than on a model relating depth information to the intensity distribution, which is closely tied to the image blurring. Until now, modeling depth information in the presence of optical diffraction has rarely been researched. In our previous work, a DFD method with fixed camera parameters was proposed [9, 10], but optical diffraction was not considered.

In this paper, a high resolution DFD method with optical diffraction is proposed. Our present approach is novel in several ways and provides a mathematical relation between optical diffraction and depth information. Firstly, the basic principle of Fresnel diffraction in an optical system is analyzed, and the relationship between Fresnel diffraction and depth information is developed. Secondly, a defocus imaging model with optical diffraction is developed through curve fitting of a numerical model, taking into account relative blurring and heat diffusion. The heat diffusion equations combined with optical diffraction are developed, and their solution is transformed into a dynamic optimization problem. Finally, experiments with static and dynamic samples are conducted and the results show that our proposed method is capable of obtaining more accurate depth information at the nanoscale.

2. Defocus with optical diffraction

Two typical types of diffraction exist: Fraunhofer diffraction and Fresnel diffraction [11–13]. In an optical imaging system, the diffraction that normally occurs is convergent-wave Fresnel diffraction; a diagrammatic sketch of the optical path is shown in Fig. 2. The amplitude at an arbitrary point P on the imaging plane can be described as [14]:

\tilde{E}_P=\frac{A}{R+b}\exp\!\left[ik\left(R+b+\frac{\rho^{2}}{2b}\right)\right]\sum_{n=1}^{\infty}\left(i\,\frac{R+b}{R}\,\frac{a}{\rho}\right)^{n}J_{n}\!\left(\frac{2\pi}{\lambda}\,\frac{a}{b}\,\rho\right) \qquad (1)
where A is the amplitude at unit distance from a source point S on the optical axis, a is the radius of the lens, k = 2π/λ, ρ is the distance from P to the optical axis, J_n is the nth-order Bessel function, λ is the wavelength of the incident light, R is the distance between the ideal imaging plane and the lens, and b is the distance between the actual imaging plane and the lens; when the imaging plane moves forward or backward by x_i (i = 1, 2), b becomes b_i (i = 1, 2).

Fig. 2 Diagrammatic sketch of the optical path.

A coordinate system OXYZ is constructed, in which X is the optical axis, O is the origin, and the YZ plane is the imaging plane. In this frame the imaging plane lies at x = −R + b. Since the XY and XZ planes are almost axisymmetric, their imaging properties can be taken to be the same, so by symmetry the parameter ρ is replaced by the parameter y. Here we focus on the intensity distribution near the ideal imaging point. Therefore, Eq. (1) can be transformed into:

\tilde{E}_P=\exp\!\left[ik\left(x+\frac{\rho^{2}}{2b}\right)\right]\left\{\frac{i\,J_{1}\!\left(\frac{2\pi}{\lambda}\,y\sin u\right)}{\frac{\pi}{\lambda}\,y\sin u}+\frac{\lambda}{\pi\sin^{2}u}\sum_{n=2}^{\infty}J_{n}\!\left(\frac{2\pi}{\lambda}\,y\sin u\right)\left(\frac{i}{y\sin u}\right)^{n}x^{\,n-1}\right\} \qquad (2)

Eq. (2) can then be written as:

\tilde{E}_P=\exp\!\left[ik\left(x+\frac{\rho^{2}}{2b}\right)\right]B\exp(i\beta) \qquad (3)

where

B\exp(i\beta)=\frac{i\,J_{1}\!\left(\frac{2\pi}{\lambda}\,y\sin u\right)}{\frac{\pi}{\lambda}\,y\sin u}+\frac{\lambda}{\pi\sin^{2}u}\sum_{n=2}^{\infty}\left(\frac{i}{y\sin u}\right)^{n}x^{\,n-1}J_{n}\!\left(\frac{2\pi}{\lambda}\,y\sin u\right),\qquad \sin u=\frac{a}{R},

x is the displacement of the imaging plane along the X axis, and y is the distance between P and O along the Y axis.

Then, the normalized intensity distribution of P is:

I_P=\tilde{E}_P\,\tilde{E}_P^{*}=B^{2} \qquad (4)

From Eqs. (2)–(4), the intensity distribution of an arbitrary point P on the imaging plane is a function of x and y. To observe its optical behavior, we set λ = 600 nm and sin u = 0.5; the intensity distribution along the y axis for different values of x is shown in Fig. 3. For a source point on the optical axis, the intensity is maximal where the imaging plane intersects the optical axis, and around this intersection point the intensity decreases with distance from the maximum.
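As a concrete illustration (a minimal numerical sketch, not the authors' code), the following Python snippet evaluates a truncated form of the series in Eqs. (2)–(4) and prints the normalized intensity at a few image-plane shifts x; the truncation order N_MAX and the sampling grids are assumptions chosen only for illustration.

```python
import numpy as np
from scipy.special import jv

LAM = 600e-9     # wavelength lambda = 600 nm (value used in the text)
SIN_U = 0.5      # sin u = a / R (value used in the text)
N_MAX = 40       # truncation order of the infinite series (assumption)

def field_B(x, y):
    """Complex factor B*exp(i*beta) of Eq. (3) for image-plane shift x and radius y (metres)."""
    y = np.where(y == 0.0, 1e-12, y)             # avoid the removable singularity at y = 0
    arg = 2.0 * np.pi / LAM * y * SIN_U          # argument of the Bessel functions
    term1 = 1j * jv(1, arg) / (np.pi / LAM * y * SIN_U)
    series = sum(jv(n, arg) * (1j / (y * SIN_U)) ** n * x ** (n - 1)
                 for n in range(2, N_MAX + 1))
    return term1 + LAM / (np.pi * SIN_U ** 2) * series

def intensity(x, y):
    """Normalized intensity I_P = |B|^2 of Eq. (4)."""
    return np.abs(field_B(x, y)) ** 2

if __name__ == "__main__":
    ys = np.linspace(-3e-6, 3e-6, 401)           # radial positions on the imaging plane (m)
    for x in (0.0, 0.5e-6, 1.0e-6):              # shifts of the imaging plane along X (m)
        on_axis = intensity(x, np.array([0.0]))[0]
        print(f"x = {x * 1e6:3.1f} um:  I(y = 0) = {on_axis:.3f},  "
              f"peak I = {intensity(x, ys).max():.3f}")
```

For x = 0 the profile reduces to an Airy-like pattern with unit on-axis intensity, while increasing x lowers and broadens the central lobe, which is the qualitative behavior plotted in Fig. 3.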

Fig. 3 Distribution curve along the optical axis when λ = 600 nm, sin u = 0.5.

Since, in a camera, the scale factor between object-distance and image-distance variations is the axial magnification m, the image-plane displacement x in Fig. 3 can be converted into a variation l of the object distance through:

l=x/m \qquad (5)

If all camera parameters are fixed, the intensity distribution produced by a variation of x can equally be produced by a variation of l, and the distribution of I_P(l, y) is almost the same as that of I_P(x, y), since l is a linear function of x. When l = 0, I_P(l, y) is not concentrated at a single point as geometrical optics would predict; instead, the image of a source point is a blurred round spot.

3. Blurring model comparison

In geometrical optics, when the focal length f, the object distance from the principal plane u, and the distance of the focused image from the lens v satisfy Eq. (6), the image of a source point is assumed to be a focused point and the imaging process is in focus. Otherwise, the image is a round spot and a blurred image appears.

\frac{1}{u}+\frac{1}{v}=\frac{1}{f} \qquad (6)

If the camera parameters are fixed during a defocused imaging process, depth can be reconstructed from the blurring degree of the blurred images. The radius r_g of the blurred round spot can be written as:

r_g=\frac{av}{2}\left|\frac{1}{f}-\frac{1}{v}-\frac{1}{s}\right| \qquad (7)

where s is the depth and a is the radius of the lens.

If the point spread function is a Gaussian function, the blurring kernel σ is:

\sigma^{2}=\gamma^{2}r_g^{2} \qquad (8)

where γ is a constant relating the blurring radius to the blurring degree; γ > 0 and it is determined via a calibration procedure.

Therefore, the reconstructed depth s in geometrical optics is:

s=\frac{1}{\dfrac{1}{f}-\dfrac{1}{v}\pm\dfrac{2\sigma}{av\gamma}} \qquad (9)
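As a small worked example (a sketch only; lengths in millimetres, with parameter values borrowed from Sections 3 and 5 and illustrative σ values), the geometrical-optics inversion of Eqs. (6)–(9) can be coded directly:

```python
def depth_geometric(sigma, f, v, a, gamma, sign=+1):
    """Depth s from Eq. (9); `sign` selects the +/- branch (near or far side of focus)."""
    return 1.0 / (1.0 / f - 1.0 / v + sign * 2.0 * sigma / (a * v * gamma))

if __name__ == "__main__":
    f, a, gamma = 0.357, 0.18, 300.0      # focal length (mm), lens radius (mm), calibration constant
    u0 = 3.4                              # focused object distance (mm)
    v = 1.0 / (1.0 / f - 1.0 / u0)        # focused image distance from the lens law, Eq. (6)
    for sigma in (0.0, 1.57e-4, 5.0e-4):  # illustrative blur kernels (mm)
        print(f"sigma = {sigma:.2e} mm  ->  s = {depth_geometric(sigma, f, v, a, gamma):.6f} mm")
```

With sigma = 0 this formula returns exactly the focused depth, which is the geometrical-optics prediction that the diffraction model below corrects.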

When optical diffraction is considered, with fixed camera parameters the blurring results from both depth variation and optical diffraction, and the blurring kernel can be written as:

\sigma=f(r_d)=F_d(\text{diffraction},\,s) \qquad (10)

From Fig. 3, we can see that I_P(l, y) is close to a Gaussian function of y for a fixed l, so each I_P(l, y) can be fitted with a Gaussian curve, as shown in Fig. 4, where the dots are the calculated values and the solid line is the fitted Gaussian curve. From each fitted curve we obtain the Gaussian kernel σ for a different l. If λ = 600 nm, sin u = 0.5, f = 0.357 mm, u = 3.4 mm, a = 0.18 mm, and γ = 300, the relationship between l and σ with and without optical diffraction is compared in Fig. 5, where the solid line is the result without optical diffraction and the dashed line is the result with optical diffraction. From Fig. 5, when the depth variation l is zero, σ is zero in geometrical optics, where diffraction is not considered, while it is 1.57 × 10^-4 when optical diffraction is included. As the depth variation l increases, the two values of σ approach each other. This is why optical diffraction is difficult to observe in a macroscale optical system, whereas at the nanoscale its influence cannot be ignored.
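A minimal sketch of this fitting step (again not the authors' code) is shown below: it generates the diffraction profile I_P(x, y) from the truncated series of Eqs. (2)–(4), fits a Gaussian to it with SciPy, and reports the fitted kernel σ for several shifts; the sampling grid, truncation order, and initial guess are assumptions.

```python
import numpy as np
from scipy.special import jv
from scipy.optimize import curve_fit

LAM, SIN_U, N_MAX = 600e-9, 0.5, 40          # lambda and sin u from the text; N_MAX is an assumption
YS = np.linspace(-3e-6, 3e-6, 401)           # radial sample positions on the imaging plane (m)

def intensity(x, y):
    """Normalized diffraction intensity I_P(x, y) from the truncated series of Eqs. (2)-(4)."""
    y = np.where(y == 0.0, 1e-12, y)
    arg = 2.0 * np.pi / LAM * y * SIN_U
    B = 1j * jv(1, arg) / (np.pi / LAM * y * SIN_U) + LAM / (np.pi * SIN_U ** 2) * sum(
        jv(n, arg) * (1j / (y * SIN_U)) ** n * x ** (n - 1) for n in range(2, N_MAX + 1))
    return np.abs(B) ** 2

def gaussian(y, amp, sigma):
    return amp * np.exp(-y ** 2 / (2.0 * sigma ** 2))

def blur_kernel(x):
    """Gaussian kernel sigma fitted to the diffraction profile at image-plane shift x."""
    popt, _ = curve_fit(gaussian, YS, intensity(x, YS), p0=(1.0, 5e-7))
    return abs(popt[1])

if __name__ == "__main__":
    for x in (0.0, 0.5e-6, 1.0e-6, 1.5e-6):
        print(f"x = {x * 1e6:3.1f} um  ->  fitted sigma = {blur_kernel(x) * 1e6:.3f} um")
```

The nonzero σ obtained at x = 0 is the diffraction-induced blur that geometrical optics misses; the quadratic fit of Section 4 is built on exactly this σ(l) relationship, with x converted to l through Eq. (5).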

Fig. 4 Fitted Gaussian curve with a fixed l.

Fig. 5 Depth variation and blurring degree comparison.

4. Depth reconstruction with optical diffraction

In order to calculate depth information from the blurring degree of blurred images, a numerical model relating l and σ is needed. From Fig. 5, the relationship between σ and l can be fitted with a quadratic curve, and the fitted curve in this paper is:

\sigma=al^{2}+bl+c \qquad (11)

So:

al^{2}+bl+c-\sigma=0 \qquad (12)

The solution of Eq. (12) is:

l=\frac{-b\pm\sqrt{b^{2}-4a(c-\sigma)}}{2a} \qquad (13)

The final depth can be calculated as:

s=s_{0}+l=s_{0}+\frac{-b\pm\sqrt{b^{2}-4a(c-\sigma)}}{2a} \qquad (14)
In Eq. (14), a, b, and c are known after the curve fitting, so to calculate the depth we only need to obtain the blurring kernel σ. In this paper, we calculate the depth information from the basic principles of blurred imaging.
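A small worked example of this inversion (with placeholder coefficients rather than the paper's calibration) is:

```python
import math

def depth_from_sigma(sigma, a, b, c, s0, branch=+1):
    """Depth s = s0 + l from Eq. (14); `branch` selects the +/- root of Eq. (13)."""
    disc = b * b - 4.0 * a * (c - sigma)
    if disc < 0.0:
        raise ValueError("sigma lies below the minimum of the fitted quadratic")
    l = (-b + branch * math.sqrt(disc)) / (2.0 * a)
    return s0 + l

if __name__ == "__main__":
    a, b, c = 2.0e3, 1.0e-2, 1.57e-4       # placeholder quadratic coefficients of Eq. (11)
    s0 = 3.4                               # focused depth (mm)
    for sigma in (1.6e-4, 2.0e-4, 5.0e-4):
        print(f"sigma = {sigma:.2e}  ->  s = {depth_from_sigma(sigma, a, b, c, s0):.6f} mm")
```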

For a real-aperture camera, the blurred image E measured on the imaging plane, for a given blurring setting of the optics and blurred-spot radius r_d, can be approximated by the following equation:

E(y,z)=\iint h(y,z,r_d)\,I(u,v)\,du\,dv \qquad (15)

where h is the point spread function (PSF), and E(y, z) and I(y, z) are the blurred image and the radiance image, respectively.

An important case that we will consider is that of a scene consisting of an equifocal plane, that is, a plane parallel to the image plane. In this case, the depth map satisfies s(y, z) = s, the PSF h is shift-invariant, that is, h(y, z, r_d) = h(y − u, z − v, r_d), and r_d is a constant. Hence, the image formation model becomes the following simple convolution:

E(y,z)=h(y,z)\ast I(y,z) \qquad (16)

where "∗" denotes convolution.

From Fig. 4, it can be seen that the intensity distribution of a random point on the imaging plane can be approximated with a Gaussian function. When the PSF is approximated by a shift-invariant Gaussian function, the imaging model in Eq. (16) can be formulated in terms of the isotropic heat equation in physics:

\begin{cases}\dot{u}(y,z,t)=\varepsilon\,\Delta u(y,z,t), & t\in(0,\infty)\\ u(y,z,0)=I(y,z)\end{cases} \qquad (17)
Considering u(y, z, 0) to be the radiance image, the solution of the diffusion equation is the convolution of that image with a temporally evolving Gaussian kernel, so the diffusion equation can be used to describe the blurred-imaging process. The solution u at a time t = τ plays the role of an image E(y, z) = u(y, z, τ) captured with a certain blurring setting related to τ. The parameter ε is called the diffusion coefficient and is nonnegative. The dot denotes differentiation in time, that is, \dot{u}=\partial u/\partial t, and Δ denotes the Laplacian operator:

\Delta u=\frac{\partial^{2}u}{\partial y^{2}}+\frac{\partial^{2}u}{\partial z^{2}} \qquad (18)

It is also easy to verify that the variance σ² is related to the diffusion coefficient ε via:

\sigma^{2}=2t\varepsilon \qquad (19)
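A quick numerical check of this equivalence (a sketch under stated assumptions: unit pixel spacing, explicit Euler time stepping, a synthetic random image) is given below; diffusing with constant ε for time t closely matches Gaussian filtering with σ = √(2tε).

```python
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

def diffuse(img, eps, t, dt=0.1):
    """Explicit finite-difference solution of the isotropic heat equation u_t = eps * Laplacian(u), Eq. (17)."""
    u = img.astype(float).copy()
    for _ in range(int(round(t / dt))):
        u += dt * eps * laplace(u)      # forward Euler step; stable for eps * dt <= 0.25
    return u

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    I = rng.random((64, 64))            # stand-in for the radiance image I(y, z)
    eps, t = 1.0, 2.0
    sigma = np.sqrt(2.0 * t * eps)      # Eq. (19): sigma^2 = 2 * t * eps
    u_pde = diffuse(I, eps, t)
    u_gauss = gaussian_filter(I, sigma)
    print("max |heat-equation solution - Gaussian blur| =", float(np.abs(u_pde - u_gauss).max()))
```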

When the depth map s does not describe an equifocal plane, the PSF is in general shift-varying. The equivalence with the isotropic heat equation no longer holds, and the diffusion process can instead be formulated in terms of the inhomogeneous diffusion equation:

\begin{cases}\dot{u}(y,z,t)=\nabla\cdot\bigl(\varepsilon(y,z)\,\nabla u(y,z,t)\bigr), & t\in(0,\infty)\\ u(y,z,0)=I(y,z)\end{cases} \qquad (20)

where "∇" denotes the gradient operator and "∇·" denotes the divergence operator:

\nabla=\left[\frac{\partial}{\partial y}\ \ \frac{\partial}{\partial z}\right]^{T},\qquad \nabla\cdot{}=\frac{\partial}{\partial y}+\frac{\partial}{\partial z} \qquad (21)

By assuming that the surface s is smooth, the diffusion coefficient ε(y, z) can again be related to the space-varying variance σ via:

\sigma^{2}(y,z)=2t\,\varepsilon(y,z) \qquad (22)
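For the shift-varying case, step 4 of the algorithm in Section 4 has to integrate this inhomogeneous equation numerically. The following conservative finite-difference sketch (assumptions: unit grid spacing, explicit Euler stepping, replicated boundaries, an arbitrary illustrative ε field) shows one way to do so.

```python
import numpy as np

def inhomogeneous_diffusion(u0, eps, t, dt=0.05):
    """Evolve u_t = div( eps(y, z) * grad u ), Eq. (20), with an explicit conservative scheme."""
    u = u0.astype(float).copy()
    for _ in range(int(round(t / dt))):
        up = np.pad(u, 1, mode="edge")
        ep = np.pad(eps, 1, mode="edge")
        # net flux difference in each direction, with eps averaged onto half-pixel positions
        fy = 0.5 * (ep[1:-1, 1:-1] + ep[2:, 1:-1]) * (up[2:, 1:-1] - up[1:-1, 1:-1]) \
           - 0.5 * (ep[1:-1, 1:-1] + ep[:-2, 1:-1]) * (up[1:-1, 1:-1] - up[:-2, 1:-1])
        fz = 0.5 * (ep[1:-1, 1:-1] + ep[1:-1, 2:]) * (up[1:-1, 2:] - up[1:-1, 1:-1]) \
           - 0.5 * (ep[1:-1, 1:-1] + ep[1:-1, :-2]) * (up[1:-1, 1:-1] - up[1:-1, :-2])
        u += dt * (fy + fz)             # forward Euler step; stable for eps_max * dt <= 0.25
    return u

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    E1 = rng.random((64, 64))                      # stand-in for the less blurred image
    yy, _ = np.mgrid[0:64, 0:64]
    eps = 0.5 + 0.5 * (yy / 63.0)                  # illustrative space-varying coefficient
    u_out = inhomogeneous_diffusion(E1, eps, t=1.0)
    print(u_out.shape, float(u_out.mean()))
```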

So far we have modeled the image E(y, z) via diffusion equations starting from the radiance image I(y, z), which is unknown. Without prior knowledge or an accurate restoration model, computing I(y, z) is very complicated, and even when it can be computed the resolution is usually very low. Rather than estimating it from two or more images, in this paper we introduce a model of the relative blurring between two blurred images.

Suppose there are two images E1(y, z) and E2(y, z) obtained with two different blurring settings σ1 and σ2, with σ1 < σ2 (that is, E1(y, z) is less blurred than E2(y, z)). From the imaging model of Eq. (15), E2(y, z) can then be written as:

E_2(y,z)=\iint h(y,z,\sigma_2^{2})\,I(u,v)\,du\,dv
=\iint \frac{1}{2\pi\sigma_2^{2}}\exp\!\left(-\frac{(y-u)^{2}+(z-v)^{2}}{2\sigma_2^{2}}\right)I(u,v)\,du\,dv
=\iint \frac{1}{2\pi(\sigma_2^{2}-\sigma_1^{2})}\exp\!\left(-\frac{(y-u)^{2}+(z-v)^{2}}{2(\sigma_2^{2}-\sigma_1^{2})}\right)du\,dv\iint \frac{1}{2\pi\sigma_1^{2}}\exp\!\left(-\frac{(u-\tilde{y})^{2}+(v-\tilde{z})^{2}}{2\sigma_1^{2}}\right)I(\tilde{y},\tilde{z})\,d\tilde{y}\,d\tilde{z}
=\iint \frac{1}{2\pi\Delta\sigma^{2}}\exp\!\left(-\frac{(y-u)^{2}+(z-v)^{2}}{2\Delta\sigma^{2}}\right)E_1(u,v)\,du\,dv
=\iint h(y,z,\Delta\sigma^{2})\,E_1(u,v)\,du\,dv
=h(y,z,\Delta\sigma^{2})\ast E_1(u,v) \qquad (23)

where Δσ² = σ2² − σ1² is the relative blurring between E1(y, z) and E2(y, z). Therefore, a blurred image can be described in terms of another blurred image and the relative blurring between them, and no radiance image is needed.
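A quick numerical check of this relative-blurring relation (synthetic data, blur values in pixel units chosen arbitrarily): blurring E1 by Δσ approximately reproduces E2, without ever needing the radiance image.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
I = rng.random((128, 128))        # radiance image, used here only to synthesize test data
sigma1, sigma2 = 1.0, 2.5         # blur of the two exposures, sigma1 < sigma2 (pixels)

E1 = gaussian_filter(I, sigma1)
E2 = gaussian_filter(I, sigma2)
delta_sigma = np.sqrt(sigma2 ** 2 - sigma1 ** 2)      # relative blurring of Eq. (23)
E2_from_E1 = gaussian_filter(E1, delta_sigma)

print("max |E2 - h(dsigma) * E1| =", float(np.abs(E2 - E2_from_E1).max()))
```

The residual printed above comes only from discretization and boundary handling; in the continuous model the two sides of Eq. (23) coincide exactly.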

Suppose E1(y, z), with depth map s1(y, z), is the blurred image acquired before a depth variation Δs(y, z) along the optical axis, and E2(y, z), with depth map s2(y, z), is the blurred image acquired after the depth variation; s0 is the focused depth and s1(y, z) − s2(y, z) = Δs(y, z). If the depth variation Δs is known, the initial depth s1(y, z) can be calculated with the following method.

Because of the relationship between the two blurred images established above, the blurring process between E1(y, z) and E2(y, z) can be described by the following heat diffusion equations:

\begin{cases}\dot{u}(y,z,t)=\nabla\cdot\bigl(\varepsilon(y,z)\,\nabla u(y,z,t)\bigr), & t\in(0,\infty)\\ u(y,z,0)=E_{1}(y,z)\\ u(y,z,\Delta t)=E_{2}(y,z)\end{cases} \qquad (24)

The relative blurring between E1(y, z) and E2(y, z) is:

\Delta\sigma^{2}=\sigma_{2}^{2}-\sigma_{1}^{2}=\bigl(al_{2}^{2}+bl_{2}+c\bigr)^{2}-\bigl(al_{1}^{2}+bl_{1}+c\bigr)^{2} \qquad (25)

One can view the time Δt as the variable encoding the global amount of blurring, and the diffusion coefficient ε as the variable encoding the depth map s via:

\varepsilon=\frac{\Delta\sigma^{2}}{2\Delta t}=\frac{\bigl(al_{2}^{2}+bl_{2}+c\bigr)^{2}-\bigl(al_{1}^{2}+bl_{1}+c\bigr)^{2}}{2\Delta t} \qquad (26)

After simplification, Eq. (25) can be written as:

al_{2}^{2}+bl_{2}+c=\pm\sqrt{\Delta\sigma^{2}+\bigl(al_{1}^{2}+bl_{1}+c\bigr)^{2}} \qquad (27)

Suppose:

c'=c\pm\sqrt{\bigl(al_{1}^{2}+bl_{1}+c\bigr)^{2}+\Delta\sigma^{2}} \qquad (28)

Then we get:

s=s_{0}+\frac{-b\pm\sqrt{b^{2}-4ac'}}{2a} \qquad (29)

To obtain a global algorithm, we construct the following optimization problem for computing the solution of the diffusion equations:

\tilde{s}=\arg\min_{s_{2}(y,z)}\iint\bigl(u(y,z,\Delta t)-E_{2}(y,z)\bigr)^{2}\,dy\,dz \qquad (30)

However, the optimization above is ill-posed: the minimum may not exist, and even if it exists, it may not be stable with respect to noise in the data. A common way to regularize the problem is to add a Tikhonov penalty [15]:

\tilde{s}=\arg\min_{s_{2}(y,z)}\iint\bigl(u(y,z,\Delta t)-E_{2}(y,z)\bigr)^{2}\,dy\,dz+\alpha\bigl\|\nabla s_{2}(y,z)\bigr\|^{2}+\alpha k\bigl\|\nabla^{2}s_{2}(y,z)\bigr\|^{2} \qquad (31)
where the additional terms impose a smoothness constraint on the depth map. In practice, α > 0 and k > 0 are chosen very small, so these terms have little practical influence on the cost energy, denoted as:

F(s)=\iint\bigl(u(y,z,\Delta t)-E_{2}(y,z)\bigr)^{2}\,dy\,dz+\alpha\|\nabla s\|^{2}+\alpha k\|\nabla^{2}s\|^{2} \qquad (32)

Thus the solution process is equivalent to the following:

\tilde{s}=\arg\min_{s}F(s)\quad\text{s.t. Eq. (14), Eq. (29)} \qquad (33)

Finally, the new depth value is then easily obtained. The algorithm can be divided into the following steps; a simplified numerical sketch of the resulting loop is given after the list:

  1. Give the camera parameters f, D, γ, v, s0; two blurred images E1 and E2; a threshold τ; the regularization weight α; and the step size β.
  2. Initialize the depth map with a plane s; for simplicity, the initial plane can be assumed to be an equifocal plane.
  3. Compute Eq. (25) to obtain the relative blurring.
  4. Compute Eq. (24) to obtain the solution u(y, z, Δt) of the diffusion equations.
  5. Compute Eq. (32) with the solution of step 4. If the cost energy is below the threshold τ, the algorithm stops; otherwise, update s with step size β according to:
     s_{t}=-F'(s) \qquad (34)
  6. Compute Eq. (29), update the depth map, and return to step 3.
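The simplified sketch below (not the authors' implementation) illustrates this loop for the special case of an equifocal plane, so the unknown reduces to a single offset l of E1 from the focused depth: the relative blur of Eq. (25) is applied to E1 as a Gaussian, the data term of Eq. (30) is evaluated, and, for robustness, a bounded scalar minimizer stands in for the gradient flow of Eq. (34). The quadratic coefficients, units, and synthetic images are placeholders.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.optimize import minimize_scalar

A, B, C = 0.5, 0.1, 0.5      # placeholder fit of sigma(l) = A*l^2 + B*l + C (sigma in pixels)
DELTA_L = 0.5                # known depth shift between the two exposures (same units as l)

def sigma_of_l(l):
    return A * l * l + B * l + C

def data_term(l, E1, E2):
    """Data term of Eq. (30) for a scalar (equifocal) offset estimate l."""
    dsig2 = sigma_of_l(l + DELTA_L) ** 2 - sigma_of_l(l) ** 2   # relative blurring, Eq. (25)
    u = gaussian_filter(E1, np.sqrt(dsig2))                     # stands in for u(y, z, dt) of Eq. (24)
    return float(np.sum((u - E2) ** 2))

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    scene = gaussian_filter(rng.random((64, 64)), 1.0)          # synthetic radiance image
    l_true = 2.0
    E1 = gaussian_filter(scene, sigma_of_l(l_true))             # less blurred exposure
    E2 = gaussian_filter(scene, sigma_of_l(l_true + DELTA_L))   # more blurred exposure
    res = minimize_scalar(data_term, bounds=(0.1, 5.0), args=(E1, E2), method="bounded")
    print(f"recovered offset l = {res.x:.3f} (true {l_true}); final depth is s = s0 + l, Eq. (14)")
```

For the full shift-varying depth map, the Gaussian step is replaced by the inhomogeneous diffusion of Eq. (24) (see the finite-difference sketch in Section 4) and the Tikhonov terms of Eq. (31) are added to the cost.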

5. Experiment

In order to validate the new algorithm, we use it to reconstruct the depth information of a static nano standard grid and a dynamic AFM cantilever, and then compare the results with those of our previous DFD method without optical diffraction. The height of the nano grid is 500 nm with a stated accuracy of 3%. In the dynamic experiment, a Physik Instrumente (PI) nanopositioning platform acts against the tip of the AFM cantilever, rising 100 nm at each step. The experiments use a HIROX-7700 microscope with a magnification factor of 7000. The remaining parameters are as follows: f = 0.357 mm, s0 = 3.4 mm, F-number = 2, D = f/2.

5.1 Static experiment

First, the standard grid is scanned by an atomic force microscope (AFM), a Veeco Dimension 3100, and the 3D image of the nanoscale grid is shown in Fig. 6. Then, the experiment is conducted on a 120 × 110 pixel region of the grid. The results are shown in Figs. 7–10. Figures 7(a) and 7(b) are the two blurred images, in which Fig. 7(a) is the blurred image before the depth variation and Fig. 7(b) is that after the depth variation. The reconstructed depth of the nano grid is shown in Figs. 8(a) and 8(b), where Fig. 8(a) is the depth reconstructed with the new DFD of this paper and Fig. 8(b) is the result of our previous DFD without optical diffraction. The unit of the depth axis is mm.

Fig. 6 Nano grid scanned by AFM.

Fig. 7 The blurred images of a static nano grid.

Fig. 8 Reconstructed 3D depth with/without diffraction.

In order to investigate the precision of the new algorithm, we first construct the error map φ between the true shape s in Fig. 6 and the estimated shape s̃; the computation formula is given in Eq. (35). The error maps with and without optical diffraction are shown in Figs. 9(a) and 9(b), where Fig. 9(a) is the error map with optical diffraction and Fig. 9(b) is that without optical diffraction. Then we calculate the average estimation error over 500 points with Eq. (36), the known height of the standard grid being 500 nm. To compare the precision of the two methods, we take a section of the 3D depth in Fig. 10, where the solid line is the result of the new method in this paper and the dashed line is the result of the traditional DFD.

\phi=\bigl\|\tilde{s}-s\bigr\|_{1} \qquad (35)

E_{ave}=\frac{1}{n}\sum_{k=1}^{n}\bigl|H_{k}-\tilde{H}_{k}\bigr| \qquad (36)

where n is the number of sample points, H_k is the true height from the AFM, and \tilde{H}_k is the estimated height at the kth point.
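For completeness, a minimal sketch of these error measures (array and variable names are placeholders; the error map is evaluated per pixel):

```python
import numpy as np

def error_map(s_est, s_true):
    """Per-pixel absolute error between estimated and true depth maps (cf. Eq. 35)."""
    return np.abs(np.asarray(s_est, float) - np.asarray(s_true, float))

def average_height_error(h_true, h_est):
    """Mean absolute height error E_ave over the sampled points, Eq. (36)."""
    h_true, h_est = np.asarray(h_true, float), np.asarray(h_est, float)
    return float(np.mean(np.abs(h_true - h_est)))

if __name__ == "__main__":
    h_true = np.full(500, 500.0)                                      # nominal grid height (nm)
    h_est = h_true + np.random.default_rng(3).normal(0.0, 50.0, 500)  # synthetic estimates
    print("E_ave =", round(average_height_error(h_true, h_est), 1), "nm")
```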

Fig. 9 Reconstructed 3D shape with/without diffraction.

Fig. 10 Arbitrary section shape of nano grid.

From Figs. 9(a)–9(b) and Fig. 10, the reconstructed depth error of the proposed algorithm with optical diffraction is smaller than that of the algorithm without it: E_ave of the DFD without diffraction is 91 nm, while E_ave of the new algorithm is 46 nm. This means that, with the new algorithm, the reconstruction error of the previous geometric-optics DFD is decreased by 49.5%. Furthermore, the depth reconstructed with the new algorithm is smaller than that of the previous DFD, which is consistent with the preceding analysis that blurred images result from the combination of optical diffraction and depth variation.

5.2 Dynamic experiment

We capture a blurred image of the AFM cantilever; then, each time the PI nanoplatform rises by a step of 100 nm, we capture another blurred image. We then reconstruct the depth information of the bent cantilever, and the results are shown in Figs. 11–13. Figures 11(a) and 11(b) are two blurred images, in which Fig. 11(a) is the blurred image before the depth variation and Fig. 11(b) is that after the depth variation. The reconstructed depth of the AFM cantilever is shown in Figs. 12(a) and 12(b), in which Fig. 12(a) is the depth reconstructed with optical diffraction and Fig. 12(b) is that without optical diffraction. Figure 13 is a depth section of the bent cantilever, where the solid line is the result of the new method in this paper and the dashed line is the result of the traditional DFD. In Fig. 13, in order to compare the depth differences at the nanoscale, we subtract 3.4 mm on the vertical axis; the unit of the depth axis is mm.

Fig. 11 The blurred images of the dynamic AFM cantilever.

Fig. 12 Reconstructed 3D depth with/without diffraction.

Fig. 13 Arbitrary section shape of the AFM cantilever.

From Figs. 12(a)–12(b) and Fig. 13, we can draw the following conclusions:

  • The end of the cantilever carrying the tip bends visibly; the height difference between the maximum bending value and the bottom of the hollow is 97 nm with our new method, while it is 145 nm with the previous DFD without optical diffraction.
  • The reconstructed depth of the platform is lower than that of the bent cantilever with our method, which agrees with the experimental situation; with the previous method, the platform is higher than the cantilever at some points, which is far from what is expected.

6. Conclusion

In this paper, a global nanoscale depth reconstruction method from defocus with optical diffraction is proposed and validated with static and dynamic samples. Our primary contribution is a relationship between Fresnel diffraction and blurred imaging. Our second contribution is an imaging model for defocus with optical diffraction, obtained through curve fitting and based on relative blurring and heat diffusion. In this way, a new DFD method that considers optical diffraction is constructed. Finally, a static standard nano grid and a dynamic AFM cantilever are used to validate the proposed DFD method at the nanoscale. The results show that the proposed algorithm is a more effective method for reconstructing depth information from blurred images at the nanoscale.

Acknowledgments

The authors acknowledge the funding support from the Natural Science Foundation of China (No. 61305025) and the Fundamental Research Funds for the Central Universities (N13050411).

References and links

1. A. P. Pentland, "A new sense for depth of field," IEEE Trans. Pattern Anal. Mach. Intell. 9(4), 523–531 (1987).

2. P. N. Vinay and C. Subhasis, "On defocus, diffusion and depth estimation," Pattern Recognit. Lett. 28(3), 311–319 (2007).

3. S. K. Nayar, M. Watanabe, and M. Noguchi, "Real-time focus range sensor," IEEE Trans. Pattern Anal. Mach. Intell. 18(12), 1186–1198 (1996).

4. P. Favaro, S. Soatto, M. Burger, and S. J. Osher, "Shape from defocus via diffusion," IEEE Trans. Pattern Anal. Mach. Intell. 30(3), 518–531 (2008).

5. P. Favaro, A. Mennucci, and S. Soatto, "Observing shape from blurred images," Int. J. Comput. Vis. 52(1), 25–43 (2003).

6. A. R. FitzGerrell, E. R. Dowski, Jr., and W. T. Cathey, "Defocus transfer function for circularly symmetric pupils," Appl. Opt. 36(23), 5796–5804 (1997).

7. P. A. Stokseth, "Properties of a defocused optical system," J. Opt. Soc. Am. 59(10), 1314–1321 (1969).

8. C. Mair and C. J. Goodman, "Diffraction-limited depth-from-defocus," Electron. Lett. 36(24), 2012–2013 (2000).

9. Y. J. Wei, Z. L. Dong, and C. D. Wu, "Depth measurement using single camera with fixed camera parameters," IET Comput. Vis. 6(1), 29–39 (2012).

10. Y. J. Wei, C. D. Wu, and Z. L. Dong, "Global depth reconstruction of nano grid with singly fixed camera," Sci. China Technol. Sci. 54(4), 1044–1052 (2011).

11. R. C. Word, J. P. S. Fitzgerald, and R. Konenkamp, "Direct imaging of optical diffraction in photoemission electron microscopy," Appl. Phys. Lett. 103(2), 021118 (2013).

12. I. Kantor, V. Prakapenka, A. Kantor, P. Dera, A. Kurnosov, S. Sinogeikin, N. Dubrovinskaia, and L. Dubrovinsky, "A new diamond anvil cell design for X-ray diffraction and optical measurements," Rev. Sci. Instrum. 83(12), 125102 (2012).

13. H. Oberst, D. Kouznetsov, K. Shimizu, J. Fujita, and F. Shimizu, "Fresnel diffraction mirror for an atomic wave," Phys. Rev. Lett. 94(1), 013203 (2005).

14. P. Wang, Y. G. Xu, W. Wang, and Z. J. Wang, "Analytic expression for Fresnel diffraction," J. Opt. Soc. Am. A 15(3), 684–688 (1998).

15. R. Lagnado and S. Osher, "A technique for calibrating derivative security pricing models: numerical solution of an inverse problem," J. Comput. Finance 1(1), 13–26 (1997).
