Modeling the depth-sectioning effect in reflection-mode dynamic speckle-field interferometric microscopy

Open Access

Abstract

Unlike most optical coherence microscopy (OCM) systems, dynamic speckle-field interferometric microscopy (DSIM) achieves depth sectioning through the spatial-coherence gating effect. Under high numerical aperture (NA) speckle-field illumination, our previous experiments have demonstrated less than 1 μm depth resolution in reflection-mode DSIM, while doubling the diffraction-limited resolution, as under structured illumination. However, there has not been a physical model that rigorously describes the speckle imaging process, in particular the sectioning effect under high illumination and imaging NA settings in DSIM. In this paper, we develop such a model based on the diffraction tomography theory and the speckle statistics. Using this model, we calculate the system response function, which is further used to obtain the depth resolution limit in reflection-mode DSIM. The theoretically calculated depth resolution limit is in excellent agreement with experimental results. We envision that our physical model will not only help in understanding the imaging process in DSIM, but also enable better design of such systems for depth-resolved measurements in biological cells and tissues.

© 2017 Optical Society of America

1. Introduction

Depth selectivity, or the so-called sectioning effect, is important in optical imaging of microscopic objects that have complex 3D features [1–3]. Over the years, many depth-resolved optical microscopy techniques have been proposed, including scanning confocal microscopy (SCM) [4,5], structured-illumination microscopy [6,7], two-photon fluorescence microscopy [8,9], light sheet microscopy [10,11], optical coherence tomography (OCT) [12,13], and optical diffraction tomography (ODT) [14,15]. Among these methods, SCM is the most widely implemented microscopy technique. Furthermore, C. J. R. Sheppard and his colleagues have pioneered the development of the 3D coherent transfer function (CTF) method to help understand the optical sectioning effect in SCM systems [16,17].

Interferometric microscopy offers extreme sensitivity in measuring sample deformation or absorption along the axial dimension without using fluorescence staining [18–22]. In an interferometric microscopy system, the sectioning effect can be realized via either the spatial- or the temporal-coherence property of light under wide-field imaging mode. OCT, as an interferometric imaging technique, normally uses the temporal-coherence gating effect to achieve depth-resolved measurements [13]. Its depth resolution is typically a few microns, which is mainly determined by the bandwidth or the temporal coherence of the light source used. Similarly, the spatial-coherence gating effect has also been utilized in interferometric microscopy to obtain depth-resolved measurements [23]. B. Redding et al. reported a full-field interferometric confocal imaging method, where the spatial coherence was manipulated by using a multimode fiber [24]. The measured spatial resolution, however, was limited to a few microns. Soon after, Y. Choi et al. demonstrated a reflection-mode dynamic speckle-field quantitative phase microscopy system with ~500 nm lateral resolution and ~1 μm depth resolution [25]. This type of system is promising for studying the nanoscale dynamics of depth-resolved structures, such as the plasma and nuclear membranes in complex eukaryotic cells. If applied to 3D imaging, this reflection phase imaging system can potentially solve the “missing cone” problem during image reconstruction, which otherwise requires a priori constraints, such as non-negativity and piecewise smoothness, for convergence [26,27].

Despite the recent experimental advances in depth-resolved interferometric imaging using dynamic speckle fields, there has not been a full physical model to describe the sectioning effect in such systems [23]. Most of the previous theoretical analysis of depth resolution was based on small scattering angle or paraxial approximations, including the SCM transfer function calculations [16,17], where the diffraction effects that potentially degrade the image reconstruction quality in high-NA imaging are not fully accounted for. Later, C. J. R. Sheppard's group also calculated the 3D CTF under high-NA imaging conditions for holographic tomography [28]. Recently, through solving the inverse scattering problem with the diffraction tomography theory, accurate 3D CTFs have been obtained for low temporal-coherence interferometric tomography systems, which enabled more precise 3D reconstruction with improved spatial resolution in all dimensions for tissue [29,30] and cellular imaging [31–33]. This highlights the importance of including the diffraction effects in coherent imaging.

In this paper, we extend the diffraction tomography theory to dynamic speckle-field interferometric microscopy (DSIM). We have developed a model to calculate the axial response function of reflection-mode DSIM systems, which can be used to determine the depth resolution. The theoretically calculated depth resolution agrees well with our previous experimental results [25]. In the following, a full description of the physical model is provided. First, we describe a typical reflection-mode DSIM system, including the light scattering process, the interference fields, and the detection measurement function. Then, we solve the scattered field from the inhomogeneous wave equation to calculate the cross-correlation function, which is directly related to the measured quantity. Finally, we calculate the axial response function of a thin 2D slice to obtain the depth resolution. Our study shows that the depth resolution is inversely proportional to the square of the NA of the illumination and imaging objective. In the Discussion section, we also verify that transmission-mode DSIM systems do not provide a sectioning effect for flat objects.

2. Reflection-mode dynamic speckle-field interferometric microscopy

In this section, we first describe the working principle of a typical reflection-mode DSIM system. Then, we solve the backward scattered field for an arbitrary object to determine the measurement function on the detector plane. This lays the foundation for calculating the axial response function and the depth resolution.

2.1 System configuration and working principle

Typically, a Linnik-type interferometer is used in a reflection-mode DSIM system. Figure 1(a) shows the schematic of such a system (more details can be found in [23,25]). The field of interest starts from the diffuser plane, consisting of a disk-shaped ground glass, which is conjugated to the back focal plane of the reference-arm objective (Obj1) as well as to that of the imaging-arm objective (Obj2). The sample surface, the reference mirror, and the detector are also in conjugate planes through the imaging optics. When the diffuser rotates at high speed (which allows sufficient averaging of speckles during the camera integration time), the generated dynamic speckle field forms a smooth distribution in the objective back aperture planes. According to the theoretical model that follows, it would be ideal for this field distribution to uniformly fill the back apertures of Obj1 and Obj2 to achieve the optimum illumination with the best sectioning effect. Next, we describe the fields involved in the imaging process, as depicted in Fig. 1(b).


Fig. 1 Illustration of reflection-mode DSIM. (a) The system configuration of a reflection-mode DSIM based on a Linnik-type interferometer; (b) A description of the electromagnetic fields involved in the imaging system.


In our imaging system, the diffuser is in the Fourier plane where the speckle field is generated. Following the theoretical framework in [23], we assume that the speckle field, immediately after the diffuser plane, has an angular spectrum distribution $S(k_{xi},k_{yi})$, where $(k_{xi},k_{yi})$ is the transverse wavevector. For simplicity, we assume a 1:1 4f relay system between the diffuser plane and the back focal planes (or back aperture planes) of Obj1 and Obj2. Thus, the speckle angular distribution at the back aperture planes is still $S(k_{xi},k_{yi})$. A particular wavevector $(k_{xi},k_{yi})$, corresponding to a physical point on the back focal plane of Obj1 at $(x_{d0},y_{d0})=(f\lambda_0 k_{xi}/2\pi,\ f\lambda_0 k_{yi}/2\pi)$, where $f$ is the focal length of the objective and $\lambda_0$ is the laser wavelength in free space, generates an incident plane wave $U_i(\mathbf{r})$ in the sample space, as shown in Fig. 1(b), given by

$$U_i(\mathbf{r}) = S(k_{xi},k_{yi})\,e^{i(k_{xi}x + k_{yi}y + k_{zi}z)}, \tag{1}$$
where $|\mathbf{k}_i|=\sqrt{k_{xi}^2+k_{yi}^2+k_{zi}^2}=\bar{n}\beta_0=\beta$ from the dispersion relation (since the incident field satisfies the homogeneous wave equation), $\beta_0=2\pi/\lambda_0$ is the propagation constant in free space, $\bar{n}$ is the medium refractive index, $\beta$ is the propagation constant in the medium, and $\mathbf{r}=(x,y,z)$ is the position vector. The plane wave illuminates the sample, described by the scattering potential $\chi(\mathbf{r})=n^2(x,y,z)-\bar{n}^2$, where $n(x,y,z)$ is the sample refractive index distribution. As a result, a backward scattered field is generated. To obtain depth-resolved measurements, the sample needs to be scanned along the axial direction around the focal plane. Assuming the sample focal displacement is $z_R$, the backward scattered field in the sample space and in the detector space is denoted by $U_{bs}(\mathbf{r};z_R)$ and $U_{bs}(\mathbf{r}_d;z_R)$, respectively, where $\mathbf{r}_d=(x_d,y_d,z_d)$. On the detector plane, there is also a plane wave component, $U_r(\mathbf{r}_d)$, coming from the reference arm, which has a form similar to that of the incident field,
$$U_r(\mathbf{r}_d) = S(k_{xi},k_{yi})\,e^{i(k_{xi}x_d + k_{yi}y_d + k_{zi}z_d)}. \tag{2}$$
The backscattered sample field and the reference field interfere at the detector plane, creating an intensity distribution. From the measured intensity, we obtain $2\,\mathrm{Re}\{\Gamma_{12}(\mathbf{r}_d;z_R)\}=2\,\mathrm{Re}\{U_{bs}(\mathbf{r}_d;z_R)\,U_r^{*}(\mathbf{r}_d)\}$, which is the real part of the cross-correlation function for each $z_R$. In this paper, we are interested in modeling the physical imaging process; thus, we need to fully describe the cross-correlation function, which requires solving for the sample scattered field.
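For completeness (this intermediate step is implicit above), the detected intensity for a single speckle realization is

$$I(\mathbf{r}_d;z_R) = \left|U_{bs}(\mathbf{r}_d;z_R)+U_r(\mathbf{r}_d)\right|^2 = |U_{bs}|^2 + |U_r|^2 + 2\,\mathrm{Re}\!\left\{U_{bs}(\mathbf{r}_d;z_R)\,U_r^{*}(\mathbf{r}_d)\right\},$$

and the last (cross) term is the measured quantity $2\,\mathrm{Re}\{\Gamma_{12}\}$; in practice it can be isolated from the self-interference terms, for example by phase shifting or off-axis demodulation (the specific detection scheme is not discussed here).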

2.2 Solving the backward scattered field

The sample scattered field can be described by the inhomogeneous wave equation [14]:

$$\nabla^2 U_s(\mathbf{r}) + \beta^2 U_s(\mathbf{r}) = -\beta_0^2\,\chi(\mathbf{r})\,U(\mathbf{r}), \tag{3}$$
where $U(\mathbf{r})$ is the total driving field, which consists of both the incident and the scattered fields, $U(\mathbf{r})=U_i(\mathbf{r})+U_s(\mathbf{r})$. Under the first-order Born approximation, we have $U(\mathbf{r})\approx U_i(\mathbf{r})$, which allows us to solve for the backward scattered field, denoted as $U_{bs}$, in the $z>0$ sample space, for different sample focal displacements as (refer to the Appendix for the derivation)
$$U_{bs}(\mathbf{k},z;z_R) = \frac{\beta_0^2\,S(k_{xi},k_{yi})\,e^{-iq(z-z_R)}\,e^{ik_{zi}z_R}}{2q}\,\chi(k_x-k_{xi},\,k_y-k_{yi},\,-q-k_{zi}), \tag{4}$$
where $\mathbf{k}=(k_x,k_y)$ is the Fourier transform variable with respect to the transverse coordinates $(x,y)$, and $q=\sqrt{\beta^2-k_x^2-k_y^2}$ is the axial projection of the scattered-field wavevector ($k_x$ and $k_y$ have units of m$^{-1}$). Note that, for simplicity, we use the same symbol for a physical quantity in different spaces and distinguish them by their arguments throughout this paper. For example, in Eq. (4), $\chi$ is in the 3D Fourier transform space, as evidenced by its variables. The imaging condition ensures that the field at $z=0$ (defined at the sample surface) is conjugated with the camera detector plane, $z_d=0$. Therefore,
$$U_{bs}(\mathbf{k}_d,z_d=0;z_R) = \frac{\beta_0^2\,S(k_{xi},k_{yi})\,P(k_x,k_y)\,e^{i(q+k_{zi})z_R}}{2q}\,\chi(k_x-k_{xi},\,k_y-k_{yi},\,-q-k_{zi}), \tag{5}$$
where $\mathbf{k}_d=(k_{dx},k_{dy})$ is the Fourier transform variable of the transverse detector coordinates $(x_d,y_d)$. $(x_d,y_d)$ and $(x,y)$ are related through a magnification $M$, i.e., $(x_d,y_d)=(Mx,My)$ and $(k_{dx},k_{dy})=(k_x/M,k_y/M)$. In Eq. (5), the aperture function $P(k_x,k_y)$ has been introduced, which defines the spatial frequency bandwidth limited by the objective numerical aperture. Next, the scattered field solution will be used to calculate the cross-correlation function and obtain the system response function.

3. System response function

In this section, we calculate the axial response function in reflection-mode DSIM. First, we calculate the scattered field from a thin step phase object. Then, we calculate the cross-correlation function $\Gamma_{12}$ by considering the speckle statistics.

3.1 Thin step object response

A homogeneous thin object, as described in Fig. 2, is used as the sample to calculate the axial response function. The object varies only along $z$, with infinite lateral extent and an axial width $z_0$; thus, its scattering potential can be described by a rectangle function in $z$ as $\chi(x,y,z)=\mathrm{rect}(z/z_0)$ (the constant part of the scattering potential has been ignored, as it does not contribute to the axial response calculation). The 3D Fourier transform of this scattering potential is

$$\chi(k_x,k_y,k_z) = \delta(k_x)\,\delta(k_y)\,\mathrm{sinc}(k_z z_0). \tag{6}$$
Substituting the above expression into Eq. (5), we obtain the backward scattered field in the sample space as,
$$U_{bs}(k_x,k_y,z;z_R) = \frac{\beta_0^2\,S(k_{xi},k_{yi})\,P(k_x,k_y)\,e^{-i\sqrt{\beta^2-k_x^2-k_y^2}\,(z-z_R)}\,e^{ik_{zi}z_R}}{2q}\,\delta(k_x-k_{xi})\,\delta(k_y-k_{yi})\,\mathrm{sinc}\!\left[\left(\sqrt{\beta^2-k_x^2-k_y^2}+k_{zi}\right)z_0\right]. \tag{7}$$
Next, we take a 2D inverse Fourier transform of Eq. (7) over $(k_x,k_y)$. This integral can be evaluated directly using the sifting property of the delta functions, which sample the integrand at $k_x=k_{xi}$ and $k_y=k_{yi}$ and supply the plane-wave factor $e^{i(k_{xi}x+k_{yi}y)}$. Hence,
$$U_{bs}(x,y,z;z_R) = \frac{\beta_0^2\,S(k_{xi},k_{yi})\,P(k_{xi},k_{yi})\,e^{i(k_{xi}x+k_{yi}y)}\,e^{-i\sqrt{\beta^2-k_{xi}^2-k_{yi}^2}\,(z-z_R)}\,e^{ik_{zi}z_R}}{2\sqrt{\beta^2-k_{xi}^2-k_{yi}^2}}\,\mathrm{sinc}\!\left[\left(\sqrt{\beta^2-k_{xi}^2-k_{yi}^2}+k_{zi}\right)z_0\right]. \tag{8}$$
The dispersion relation of the incident field gives $k_{zi}=\sqrt{\beta^2-k_{xi}^2-k_{yi}^2}$. Then, Eq. (8) simplifies to
$$U_{bs}(x,y,z;z_R) = \frac{\beta_0^2\,S(k_{xi},k_{yi})\,P(k_{xi},k_{yi})\,e^{i(k_{xi}x+k_{yi}y)}\,e^{-ik_{zi}(z-2z_R)}}{2k_{zi}}\,\mathrm{sinc}(2k_{zi}z_0). \tag{9}$$
If $z_0$ is very small, such that $k_{zi}z_0\rightarrow 0$ and $\mathrm{sinc}(2k_{zi}z_0)\rightarrow 1$, the scattered field becomes
$$U_{bs}(x,y,z) = \frac{\beta_0^2\,S(k_{xi},k_{yi})\,P(k_{xi},k_{yi})\,e^{i(k_{xi}x+k_{yi}y)}\,e^{-ik_{zi}z}\,e^{i2k_{zi}z_R}}{2k_{zi}}. \tag{10}$$
At the detector plane, we have
$$U_{bs}(x_d,y_d,z_d=0) = \frac{\beta_0^2\,S(k_{xi},k_{yi})\,P(k_{xi},k_{yi})\,e^{i(k_{xi}x+k_{yi}y)}\,e^{i2k_{zi}z_R}}{2k_{zi}}. \tag{11}$$
Equation (11) is the backward scattered field solution, where the phase term $e^{i2k_{zi}z_R}$ signifies the double path traveled by the field in the sample arm. Interestingly, in transmission-mode operation, the forward scattered field does not carry a $z_R$-dependent phase term, indicating that it cannot provide a sectioning effect for flat objects (see more details in the Discussion).


Fig. 2 Illustration of a thin step phase object, defined by a rectangle function.


3.2 Speckle-field statistics

Next, we calculate the correlation function while considering the speckle-field statistics. The speckle-field angular spectrum distribution S(kxi,kyi) is a complex function, which can be written as

$$S(k_{xi},k_{yi}) = |S(k_{xi},k_{yi})|\,e^{i\varphi(k_{xi},k_{yi})} = \frac{1}{N^2}A(k_{xi},k_{yi})\,e^{i\varphi(k_{xi},k_{yi})}, \tag{12}$$
where $N^2$ is the number of independent scattering areas. The distributions of $A(k_{xi},k_{yi})$ and $\varphi(k_{xi},k_{yi})$ have the following statistical properties (see Goodman [34]): the amplitude and phase elements are statistically independent of each other (i.e., the scattering area elements are unrelated, and the strength of a given scattered component bears no relation to its phase), and the phase values are uniformly distributed in the primary interval $(-\pi,\pi)$. The detector measurement yields the real part of the cross-correlation function $\Gamma_{12}$ between the sample backward scattered field and the reference field, i.e., $2\,\mathrm{Re}(\Gamma_{12})$, which includes all the speckle wavevector contributions weighted by the distribution function $S(k_{xi},k_{yi})$. Therefore, $2\,\mathrm{Re}(\Gamma_{12})$ is a summation over all possible individual correlation pairs $U_{bs}(k_{xi},k_{yi})\,U_r^{*}(k_{xi}',k_{yi}')$:
$$2\,\mathrm{Re}(\Gamma_{12}) = 2\,\mathrm{Re}\!\left(U_{bs}U_r^{*}\right) = 2\,\mathrm{Re}\!\left\{\left[\sum_{k_{xi},k_{yi}} U_{bs}(k_{xi},k_{yi})\right]\left[\sum_{k_{xi}',k_{yi}'} U_r^{*}(k_{xi}',k_{yi}')\right]\right\}. \tag{13}$$
In the above equation, we have changed the notation of $U_{bs}(x_d,y_d,z_d=0)$ and $U_r(x_d,y_d,z_d=0)$ to $U_{bs}(k_{xi},k_{yi})$ and $U_r(k_{xi},k_{yi})$ to make the mathematical operations clearer. With the solutions of $U_r$ and $U_{bs}$ given in Eq. (2) and Eq. (11), we can write down the exact form of $2\,\mathrm{Re}(\Gamma_{12})$, that is,
$$2\,\mathrm{Re}(\Gamma_{12}) \propto \sum_{k_{xi},k_{yi}}\left\{\sum_{k_{xi}',k_{yi}'}\frac{1}{k_{zi}}A(k_{xi},k_{yi})\,A(k_{xi}',k_{yi}')\cos\!\left[(k_{xi}-k_{xi}')x+(k_{yi}-k_{yi}')y+2k_{zi}z_R+\Delta\varphi\right]\right\}, \tag{14}$$
where $\Delta\varphi=\varphi(k_{xi},k_{yi})-\varphi(k_{xi}',k_{yi}')$. Using trigonometric identities, we can write $2\,\mathrm{Re}(\Gamma_{12})$ as
$$\begin{aligned}2\,\mathrm{Re}(\Gamma_{12}) \propto{}& \sum_{k_{xi},k_{yi}}\sum_{k_{xi}',k_{yi}'}\frac{1}{k_{zi}}A(k_{xi},k_{yi})\,A(k_{xi}',k_{yi}')\cos\!\left[(k_{xi}-k_{xi}')x+(k_{yi}-k_{yi}')y+2k_{zi}z_R\right]\cos(\Delta\varphi)\\&-\sum_{k_{xi},k_{yi}}\sum_{k_{xi}',k_{yi}'}\frac{1}{k_{zi}}A(k_{xi},k_{yi})\,A(k_{xi}',k_{yi}')\sin\!\left[(k_{xi}-k_{xi}')x+(k_{yi}-k_{yi}')y+2k_{zi}z_R\right]\sin(\Delta\varphi).\end{aligned} \tag{15}$$
Furthermore, $2\,\mathrm{Re}(\Gamma_{12})$ can be broken into two parts: the matched speckle terms ($k_{xi}=k_{xi}'$, $k_{yi}=k_{yi}'$) and the unmatched terms ($k_{xi}\neq k_{xi}'$ or $k_{yi}\neq k_{yi}'$). Since $\varphi(k_{xi},k_{yi})$ and $\varphi(k_{xi}',k_{yi}')$ are independently and uniformly distributed, their difference $\Delta\varphi$ (taken modulo $2\pi$) is also uniformly distributed for the unmatched terms. If we take the ensemble average over many speckle patterns, produced by the rotating diffuser that generates uncorrelated realizations, the unmatched correlation terms average to zero. Therefore, only the matched terms survive, leaving
$$2\,\mathrm{Re}(\Gamma_{12}) \propto \left\langle\sum_{k_{xi},k_{yi}}\frac{A^2(k_{xi},k_{yi})\,P(k_{xi},k_{yi})\cos(2k_{zi}z_R)}{k_{zi}}\right\rangle_M, \tag{16}$$
where $\langle\cdot\rangle_M$ denotes the ensemble average over $M$ realizations. When $M$ is very large, this ensemble average yields a smooth distribution for $A^2(k_{xi},k_{yi})$, which we call the original speckle spectral distribution $T_0(k_{xi},k_{yi})$, i.e., $T_0(k_{xi},k_{yi})=\langle A^2(k_{xi},k_{yi})\rangle$. Note that $T_0(k_{xi},k_{yi})$ is also band-limited by the objective aperture function $P(k_{xi},k_{yi})$, since the illumination and imaging paths share the same objective lens in the reflection-mode DSIM system. For this reason, we replace the term $A^2(k_{xi},k_{yi})$ with $T_0(k_{xi},k_{yi})\,P(k_{xi},k_{yi})$ in Eq. (16).
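To make the averaging argument above concrete, here is a minimal Monte Carlo sketch in Python (ours, not from the paper; the mode and realization counts are arbitrary) showing that, for phases drawn uniformly from $(-\pi,\pi)$, the unmatched correlation terms vanish on average while the matched terms survive:

```python
import numpy as np

# Monte Carlo illustration of the speckle-averaging argument of Section 3.2:
# only the matched (j == k) correlation terms survive the ensemble average
# over uncorrelated phase realizations; the unmatched terms average to zero.
rng = np.random.default_rng(0)
n_modes, n_real = 16, 5000            # illustrative sizes, not from the experiment

phi = rng.uniform(-np.pi, np.pi, size=(n_real, n_modes))
fields = np.exp(1j * phi)             # unit-amplitude speckle components (A = 1)

# <exp(i(phi_j - phi_k))> averaged over the n_real realizations
corr = (fields[:, :, None] * np.conj(fields[:, None, :])).mean(axis=0)

matched = np.abs(np.diag(corr)).mean()                          # -> 1 exactly
unmatched = np.abs(corr[~np.eye(n_modes, dtype=bool)]).mean()   # -> ~1/sqrt(n_real)
print(f"matched terms: {matched:.3f},  unmatched terms: {unmatched:.3f}")
```

The unmatched terms decay as the number of averaged realizations grows, which is the role played by the rotating diffuser during the camera integration time.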

3.3 The axial response function

In order to calculate the axial response function, the solution of 2Re(Γ12) in Eq. (16) is converted into an integral form as below,

$$2\,\mathrm{Re}(\Gamma_{12}) \propto \iint T_0(k_{xi},k_{yi})\,P^2(k_{xi},k_{yi})\,\frac{\cos\!\left(2\sqrt{\beta^2-k_{xi}^2-k_{yi}^2}\,z_R\right)}{\sqrt{\beta^2-k_{xi}^2-k_{yi}^2}}\,dk_{xi}\,dk_{yi}. \tag{17}$$
It is always desired that $T_0(k_{xi},k_{yi})$ be uniform within the objective back aperture area for the best depth selectivity. There are many ways to achieve this, such as magnifying the distribution with an additional 4f system [25]. For the best sectioning effect, $T_0(k_{xi},k_{yi})$ is assumed to be uniform and $P(k_{xi},k_{yi})$ is assumed to be a circular disk function. Thus, $P^2(k_{xi},k_{yi})$ is still the same circular disk function, whose radius is determined by the numerical aperture of the objective. By denoting $k_{xi}=k_r\cos\phi$ and $k_{yi}=k_r\sin\phi$, the integral in Eq. (17) is converted into polar coordinates as
$$2\,\mathrm{Re}(\Gamma_{12}) \propto \iint P(k_r\cos\phi,\,k_r\sin\phi)\,\frac{\cos\!\left(2\sqrt{\beta^2-k_r^2}\,z_R\right)}{\sqrt{\beta^2-k_r^2}}\,k_r\,dk_r\,d\phi. \tag{18}$$
The range of $k_r$ is limited by the objective numerical aperture through $P(k_r\cos\phi,k_r\sin\phi)$ to $[0,\ \mathrm{NA}_{obj}(2\pi/\lambda_0)]=[0,\ (2\pi\bar{n}/\lambda_0)\sin\theta_{\max}]=[0,\ \beta\sin\theta_{\max}]$. The integral over $\phi$ contributes only a constant factor of $2\pi$, since the integrand is circularly symmetric, giving
$$2\,\mathrm{Re}(\Gamma_{12}) \propto \int_0^{\beta\sin\theta_{\max}}\frac{\cos\!\left(2\sqrt{\beta^2-k_r^2}\,z_R\right)}{\sqrt{\beta^2-k_r^2}}\,k_r\,dk_r. \tag{19}$$
With the variable change $K=\sqrt{\beta^2-k_r^2}$, so that $k_r\,dk_r=-K\,dK$ and the integration limits map from $[0,\ \beta\sin\theta_{\max}]$ to $[\beta,\ \beta\cos\theta_{\max}]$, Eq. (19) becomes
$$2\,\mathrm{Re}(\Gamma_{12}) \propto \int_{\beta\cos\theta_{\max}}^{\beta}\cos(2Kz_R)\,dK. \tag{20}$$
The above integral can be easily evaluated to give an analytical solution as
$$2\,\mathrm{Re}(\Gamma_{12}) \propto \beta\left[\mathrm{sinc}(2\beta z_R/\pi) - \cos(\theta_{\max})\,\mathrm{sinc}\!\left(2\beta\cos(\theta_{\max})\,z_R/\pi\right)\right], \tag{21}$$
where the sinc function is defined as $\mathrm{sinc}(x)=\sin(\pi x)/(\pi x)$.
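Explicitly, the step from Eq. (20) to Eq. (21), which is not spelled out above, is

$$\int_{\beta\cos\theta_{\max}}^{\beta}\cos(2Kz_R)\,dK = \frac{\sin(2\beta z_R)-\sin(2\beta\cos\theta_{\max}\,z_R)}{2z_R} = \beta\,\mathrm{sinc}(2\beta z_R/\pi) - \beta\cos\theta_{\max}\,\mathrm{sinc}(2\beta\cos\theta_{\max}\,z_R/\pi),$$

which is the right-hand side of Eq. (21).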

The above framework establishes a mathematical model that describes the axial response of the reflection-mode DSIM system: given the objective numerical aperture and the illumination wavelength, we can compute $2\,\mathrm{Re}(\Gamma_{12})$ as a function of $z_R$ to obtain the axial response function of DSIM. Finally, with the axial response function, the depth resolution can also be determined.

4. Depth resolution

In this section, the axial response function model is tested using the specifications of an experimental system, and the model is then used to quantify the sectioning effect in terms of depth resolution. For this study, we use the parameters from our previous experiment [25], which also provides a way to validate our theoretical model. In that reflection-mode DSIM system, the laser wavelength is $\lambda_0=0.8$ μm, the sample host medium is water with refractive index $\bar{n}=1.33$, and two water-immersion objectives with $\mathrm{NA}_{obj}=1$ are used. Inserting these parameters into Eq. (21), we obtain the axial response function shown in Fig. 3, where the solid black curve is the axial response function, i.e., $2\,\mathrm{Re}(\Gamma_{12})$ versus the defocus position $z_R$. The dashed red curve is the envelope function, whose first zero occurs around $z_R=0.89$ μm and whose half-maximum point is around $z_R=0.53$ μm. The depth resolution $\delta z$, defined as the full width at half maximum (FWHM), is therefore 1.06 μm, in good agreement with our previous study [25].
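As a numerical cross-check of these numbers, the following Python sketch (ours, not from the paper) evaluates Eq. (21) for these parameters and locates the envelope's first zero and FWHM; the closed-form envelope used below follows from the sum-to-product identity $\sin A-\sin B=2\cos[(A+B)/2]\sin[(A-B)/2]$, a step not shown in the text.

```python
import numpy as np
from scipy.optimize import brentq

# Parameters of the experimental system in [25]
lam0 = 0.8                # free-space wavelength [um]
n_med = 1.33              # refractive index of the water host medium
NA = 1.0                  # objective numerical aperture

beta = 2 * np.pi * n_med / lam0             # propagation constant in the medium [rad/um]
cos_t = np.sqrt(1.0 - (NA / n_med) ** 2)    # cos(theta_max), with NA = n*sin(theta_max)

def axial_response(zR):
    """Eq. (21) up to a constant; np.sinc(x) = sin(pi*x)/(pi*x) matches the text's definition."""
    return beta * (np.sinc(2 * beta * zR / np.pi)
                   - cos_t * np.sinc(2 * beta * cos_t * zR / np.pi))

def envelope(zR):
    """Slowly varying envelope of Eq. (21), via sin A - sin B = 2 cos((A+B)/2) sin((A-B)/2)."""
    return beta * (1 - cos_t) * np.abs(np.sinc(beta * (1 - cos_t) * zR / np.pi))

zR = np.linspace(-2.0, 2.0, 801)            # defocus positions [um], as in Fig. 3
response = axial_response(zR)               # solid curve of Fig. 3 (up to normalization)

z_zero = np.pi / (beta * (1 - cos_t))       # first zero of the envelope
z_half = brentq(lambda z: envelope(z) - 0.5 * envelope(0.0), 1e-9, z_zero)
print(f"envelope first zero ~ {z_zero:.2f} um,  FWHM (depth resolution) ~ {2 * z_half:.2f} um")
```

Running this sketch reproduces the quoted values (first zero near 0.89 μm and FWHM near 1.06 μm).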


Fig. 3 Axial response function with NAobj=1. The axial response function is obtained by calculating 2Re(Γ12) at different defocus position zR.


Next, we study the relationship between the depth resolution and the objective numerical aperture. The depth resolution is obtained for $\mathrm{NA}_{obj}$ ranging from 0.6 to 1.2 in steps of 0.1. In Fig. 4, the depth resolution $\delta z$ is plotted as a function of $\mathrm{NA}_{obj}$ (black square markers). Curve fitting shows a $1/\mathrm{NA}_{obj}^2$ trend over the plotted range; the solid red line is described by $\delta z = 1.315/\mathrm{NA}_{obj}^2 - 0.262$. This result is expected, as the depth resolution normally degrades as $1/\mathrm{NA}_{obj}^2$ in coherent microscopy. Therefore, in a reflection-mode DSIM system, the higher the numerical aperture, the better the depth resolution. According to this calculation, a depth resolution as small as 0.65 μm can be achieved at $\mathrm{NA}_{obj}=1.2$. It should be noted that, to obtain the best depth resolution, a uniform speckle spectral distribution over the whole back aperture of the high-numerical-aperture objective is necessary.
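The sweep just described can be reproduced with a short, self-contained sketch (assuming, as above, that $\delta z$ is taken as the FWHM of the envelope of Eq. (21); the helper name depth_resolution is ours):

```python
import numpy as np
from scipy.optimize import brentq

lam0, n_med = 0.8, 1.33   # wavelength [um] and water refractive index, as in Section 4

def depth_resolution(NA):
    """FWHM of the axial-response envelope derived from Eq. (21) for a given NA_obj."""
    beta = 2 * np.pi * n_med / lam0
    cos_t = np.sqrt(1.0 - (NA / n_med) ** 2)
    env = lambda z: beta * (1 - cos_t) * np.abs(np.sinc(beta * (1 - cos_t) * z / np.pi))
    z_zero = np.pi / (beta * (1 - cos_t))     # first zero of the envelope
    z_half = brentq(lambda z: env(z) - 0.5 * env(0.0), 1e-9, z_zero)
    return 2 * z_half

for NA in np.arange(0.6, 1.21, 0.1):
    print(f"NA_obj = {NA:.1f}:  depth resolution ~ {depth_resolution(NA):.2f} um")
```

The printed values follow the $1/\mathrm{NA}_{obj}^2$ trend of the fit quoted above.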


Fig. 4 Relationship between depth resolution δz (vertical axis) and objective numerical aperture NAobj (horizontal axis).


5. Discussion

We have developed a physical model that precisely describes the sectioning effect in a reflection-mode speckle-field-illumination interferometric system. The sectioning effect arises from the spatially incoherent illumination. It is also possible to use a broadband source in such a system to further enhance the depth selectivity. However, it is not clear how much the depth selectivity can be enhanced by this addition. In principle, the present theoretical framework can be extended to incorporate temporal coherence to answer this interesting question. We note that frameworks considering temporal coherence have been reported for the transmission case in earlier publications [31,32]. Another important question is whether transmission-mode DSIM can provide a sectioning effect. To answer this, we calculate the forward scattered field using Eq. (33b) in the Appendix for the same step object described in Section 3. The field is given as

$$U_{fs}(k_x,k_y,z;z_R) = \frac{\beta_0^2\,S(k_{xi},k_{yi})\,P(k_x,k_y)\,e^{i\sqrt{\beta^2-k_x^2-k_y^2}\,(z-z_R)}\,e^{ik_{zi}z_R}}{2q}\,\delta(k_x-k_{xi})\,\delta(k_y-k_{yi})\,\mathrm{sinc}\!\left[\left(\sqrt{\beta^2-k_x^2-k_y^2}-k_{zi}\right)z_0\right]. \tag{22}$$
Similarly, we can Fourier transform the field into the spatial domain representation,
$$U_{fs}(x,y,z;z_R) = \frac{\beta_0^2\,S(k_{xi},k_{yi})\,P(k_{xi},k_{yi})\,e^{i(k_{xi}x+k_{yi}y)}\,e^{i\sqrt{\beta^2-k_{xi}^2-k_{yi}^2}\,(z-z_R)}\,e^{ik_{zi}z_R}}{2\sqrt{\beta^2-k_{xi}^2-k_{yi}^2}}\,\mathrm{sinc}\!\left[\left(\sqrt{\beta^2-k_{xi}^2-k_{yi}^2}-k_{zi}\right)z_0\right]. \tag{23}$$
The dispersion relation dictates that $k_{zi}=\sqrt{\beta^2-k_{xi}^2-k_{yi}^2}$, making
$$U_{fs}(x,y,z;z_R) = \frac{\beta_0^2\,S(k_{xi},k_{yi})\,P(k_{xi},k_{yi})\,e^{i(k_{xi}x+k_{yi}y)}\,e^{ik_{zi}(z-z_R)}\,e^{ik_{zi}z_R}}{2k_{zi}}\,\mathrm{sinc}(0) = \frac{\beta_0^2\,S(k_{xi},k_{yi})\,P(k_{xi},k_{yi})\,e^{i(k_{xi}x+k_{yi}y)}\,e^{ik_{zi}z}}{2k_{zi}}. \tag{24}$$
The detector plane $z_d=0$ sees the field at the $z=0$ plane as
$$U_{fs}(x_d,y_d,z_d=0;z_R) = \frac{\beta_0^2\,S(k_{xi},k_{yi})\,P(k_{xi},k_{yi})\,e^{i(k_{xi}x+k_{yi}y)}}{2k_{zi}}. \tag{25}$$
It turns out that the forward scattered field, as described in Eq. (25), is not a function of $z_R$, and therefore gives no sectioning effect. Note that the above calculation assumes a flat, thin object with no lateral structure. For objects that do have lateral features, there will be sectioning, as was demonstrated in [36]. The missing axial frequency information in the low transverse-frequency region is known as the “missing cone” problem in 3D optical imaging. Our following paper will discuss this issue in more detail by calculating the 3D CTF of both reflection- and transmission-mode DSIM systems.

6. Summary

In conclusion, we have developed a mathematical model to describe the axial response function in reflection-mode dynamic speckle-field interferometric microscopy. This model is based on the diffraction tomography theory and speckle statistics, and provides a spatial cross-correlation function. Using this function, the axial response function is obtained and used to determine the depth resolution. The theoretically calculated depth resolution is in excellent agreement with our experimental results. Using this method, the connection between depth resolution and objective numerical aperture is also studied, which reveals the expected inverse-square-law relationship. We envision that the developed physical model will contribute to the understanding of the sectioning effect in spatially incoherent illumination interferometry systems. It can also guide the design of such systems for better performance in the future.

Appendix: Optical diffraction tomography

We start with the inhomogeneous wave equation that describes the scattered field [31,32]:

$$\nabla^2 U_s(\mathbf{r}) + \beta^2 U_s(\mathbf{r}) = -\beta_0^2\,\chi(\mathbf{r})\,U(\mathbf{r}), \tag{26}$$
where $U(\mathbf{r})$ is approximated as $U_i(\mathbf{r})=S(k_{xi},k_{yi})\,e^{i(k_{xi}x+k_{yi}y+k_{zi}z)}$. Equation (26) can be solved in $k$-space (the spatial frequency space) by taking the 3D Fourier transform of both sides, namely
$$(\beta^2-k^2)\,U_s(k_x,k_y,k_z) = -\beta_0^2\,S(k_{xi},k_{yi})\,\chi(k_x,k_y,k_z)\otimes_{\mathbf{k}}\delta(k_x-k_{xi},k_y-k_{yi},k_z-k_{zi}) = -\beta_0^2\,S(k_{xi},k_{yi})\,\chi(k_x-k_{xi},k_y-k_{yi},k_z-k_{zi}), \tag{27}$$
where $(k_x,k_y,k_z)$ is the Fourier transform variable of $(x,y,z)$ and $k^2=k_x^2+k_y^2+k_z^2$. Assuming the object is centered at $z=z_R$, i.e., $\chi(x,y,z-z_R)$, the above equation can be revised to the following form to incorporate this sample shift:
$$(\beta^2-k^2)\,U_s(k_x,k_y,k_z) = -\beta_0^2\,S(k_{xi},k_{yi})\,\chi(k_x-k_{xi},k_y-k_{yi},k_z-k_{zi})\,e^{-i(k_z-k_{zi})z_R}. \tag{28}$$
By re-arranging Eq. (28), the scattered field Us(kx,ky,kz) is solved as
$$\begin{aligned}U_s(k_x,k_y,k_z) &= \frac{\beta_0^2\,S(k_{xi},k_{yi})\,\chi(k_x-k_{xi},k_y-k_{yi},k_z-k_{zi})\,e^{-i(k_z-k_{zi})z_R}}{k_z^2-(\beta^2-k_x^2-k_y^2)}\\&= \beta_0^2\,S(k_{xi},k_{yi})\,\chi(k_x-k_{xi},k_y-k_{yi},k_z-k_{zi})\,e^{-i(k_z-k_{zi})z_R}\,\frac{1}{2q}\left(\frac{1}{k_z-q}-\frac{1}{k_z+q}\right),\end{aligned} \tag{29}$$
where $q=\sqrt{\beta^2-k_x^2-k_y^2}$ (notice that this $q$ is not $k_z$, because the homogeneous dispersion relation does not apply to the scattered field [35]). The two fractional terms in Eq. (29), $1/(k_z-q)$ and $1/(k_z+q)$, correspond to the forward scattered and backward scattered fields, respectively. This can be seen by performing an inverse Fourier transform of Eq. (29) over $k_z$. The inverse Fourier transform of $1/k_z$ gives a sign function, $\mathrm{sgn}(z)$ (one can also write it as $1-2H(-z)$, where $H(z)$ is the Heaviside function). We can therefore write
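For reference, this follows from the standard principal-value integral (constants of the Fourier convention are absorbed into the overall prefactor):

$$\mathrm{P.V.}\int_{-\infty}^{+\infty}\frac{e^{ik_z z}}{k_z}\,dk_z = i\pi\,\mathrm{sgn}(z).$$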
$$U_s(k_x,k_y,z) = \beta_0^2\,S(k_{xi},k_{yi})\left[\chi(k_x-k_{xi},k_y-k_{yi},z-z_R)\,e^{ik_{zi}z}\right]\otimes_z\frac{1}{2q}\left(\mathrm{sgn}(z)\,e^{iqz}-\mathrm{sgn}(z)\,e^{-iqz}\right). \tag{30}$$
For backward scattering ($z<0$), we consider the $e^{-iqz}$ term. The backward scattered field, denoted as $U_{bs}$, has the following form:
$$U_{bs}(k_x,k_y,z) = \frac{\beta_0^2\,S(k_{xi},k_{yi})}{2}\left[\chi(k_x-k_{xi},k_y-k_{yi},z-z_R)\,e^{ik_{zi}z}\right]\otimes_z\frac{e^{-iqz}}{q}. \tag{31a}$$
For forward scattering ($z>0$), we consider the $e^{iqz}$ term, which results in a forward scattered field, denoted as $U_{fs}$, of the form:
$$U_{fs}(k_x,k_y,z) = \frac{\beta_0^2\,S(k_{xi},k_{yi})}{2}\left[\chi(k_x-k_{xi},k_y-k_{yi},z-z_R)\,e^{ik_{zi}z}\right]\otimes_z\frac{e^{iqz}}{q}. \tag{31b}$$
Next, we write the convolution in z in Eq. (31a) as an integral
$$\begin{aligned}U_{bs}(k_x,k_y,z) &= \frac{\beta_0^2\,S(k_{xi},k_{yi})}{2q}\int_{-\infty}^{+\infty}\chi(k_x-k_{xi},k_y-k_{yi},z'-z_R)\,e^{ik_{zi}z'}\,e^{-iq(z-z')}\,dz'\\&= \frac{\beta_0^2\,S(k_{xi},k_{yi})\,e^{-iqz}}{2q}\int_{-\infty}^{+\infty}\chi(k_x-k_{xi},k_y-k_{yi},z'-z_R)\,e^{i(q+k_{zi})z'}\,dz'\\&= \frac{\beta_0^2\,S(k_{xi},k_{yi})\,e^{-iqz}}{2q}\int_{-\infty}^{+\infty}\chi(k_x-k_{xi},k_y-k_{yi},z-z_R)\,e^{-iWz}\,dz,\end{aligned} \tag{32}$$
where $W=-q-k_{zi}$. The above integral is a Fourier transform over $z$ that turns $\chi$ back into the 3D Fourier transform domain, with a phase shift $e^{-iWz_R}=e^{i(q+k_{zi})z_R}$ in addition to the phase term $e^{-iqz}$, giving:
$$\begin{aligned}U_{bs}(k_x,k_y,z) &= \frac{\beta_0^2\,S(k_{xi},k_{yi})\,e^{-iqz}\,e^{i(q+k_{zi})z_R}}{2q}\,\chi(k_x-k_{xi},k_y-k_{yi},-q-k_{zi})\\&= \frac{\beta_0^2\,S(k_{xi},k_{yi})\,e^{-iq(z-z_R)}\,e^{ik_{zi}z_R}}{2q}\,\chi(k_x-k_{xi},k_y-k_{yi},-q-k_{zi}).\end{aligned} \tag{33a}$$
Following the same derivation, we can write the forward scattered field as:

$$\begin{aligned}U_{fs}(k_x,k_y,z) &= \frac{\beta_0^2\,S(k_{xi},k_{yi})\,e^{iqz}\,e^{i(k_{zi}-q)z_R}}{2q}\,\chi(k_x-k_{xi},k_y-k_{yi},q-k_{zi})\\&= \frac{\beta_0^2\,S(k_{xi},k_{yi})\,e^{iq(z-z_R)}\,e^{ik_{zi}z_R}}{2q}\,\chi(k_x-k_{xi},k_y-k_{yi},q-k_{zi}).\end{aligned} \tag{33b}$$

Funding

U.S. National Institutes of Health (NIH) grants NIH 9P41EB015871-26A1 and 1R01HL121386-01A1; Hamamatsu Corporation; and the National Research Foundation Singapore through the Singapore-MIT Alliance for Research and Technology's BioSystems and Micromechanics Inter-Disciplinary Research program.

References and links

1. D. A. Agard, “Optical sectioning microscopy: cellular architecture in three dimensions,” Annu. Rev. Biophys. Bioeng. 13(1), 191–219 (1984). [CrossRef]   [PubMed]  

2. J. A. Conchello and J. W. Lichtman, “Optical sectioning microscopy,” Nat. Methods 2(12), 920–931 (2005). [CrossRef]   [PubMed]  

3. P. J. Keller, F. Pampaloni, and E. H. K. Stelzer, “Life sciences require the third dimension,” Curr. Opin. Cell Biol. 18(1), 117–124 (2006). [CrossRef]   [PubMed]  

4. M. Minsky, “Memoir on inventing the confocal scanning microscope,” Scanning 10(4), 128–138 (1988). [CrossRef]  

5. T. Wilson, “Optical sectioning in confocal fluorescent microscopes,” J. Microsc-Oxford 154(2), 143–156 (1989). [CrossRef]  

6. M. A. A. Neil, R. Juskaitis, and T. Wilson, “Method of obtaining optical sectioning by using structured light in a conventional microscope,” Opt. Lett. 22(24), 1905–1907 (1997). [CrossRef]   [PubMed]  

7. M. G. L. Gustafsson, L. Shao, P. M. Carlton, C. J. R. Wang, I. N. Golubovskaya, W. Z. Cande, D. A. Agard, and J. W. Sedat, “Three-dimensional resolution doubling in wide-field fluorescence microscopy by structured illumination,” Biophys. J. 94(12), 4957–4970 (2008). [CrossRef]   [PubMed]  

8. P. T. C. So, C. Y. Dong, B. R. Masters, and K. M. Berland, “Two-photon excitation fluorescence microscopy,” Annu. Rev. Biomed. Eng. 2(1), 399–429 (2000). [CrossRef]   [PubMed]  

9. W. Denk, J. H. Strickler, and W. W. Webb, “Two-photon laser scanning fluorescence microscopy,” Science 248(4951), 73–76 (1990). [CrossRef]   [PubMed]  

10. H. U. Dodt, U. Leischner, A. Schierloh, N. Jährling, C. P. Mauch, K. Deininger, J. M. Deussing, M. Eder, W. Zieglgänsberger, and K. Becker, “Ultramicroscopy: three-dimensional visualization of neuronal networks in the whole mouse brain,” Nat. Methods 4(4), 331–336 (2007). [CrossRef]   [PubMed]  

11. J. Huisken, J. Swoger, F. Del Bene, J. Wittbrodt, and E. H. K. Stelzer, “Optical sectioning deep inside live embryos by selective plane illumination microscopy,” Science 305(5686), 1007–1009 (2004). [CrossRef]   [PubMed]  

12. D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and J. G. Fujimoto, “Optical coherence tomography,” Science 254(5035), 1178–1181 (1991). [CrossRef]   [PubMed]  

13. A. F. Fercher, W. Drexler, C. K. Hitzenberger, and T. Lasser, “Optical coherence tomography - principles and applications,” Rep. Prog. Phys. 66(2), 239–303 (2003). [CrossRef]  

14. E. Wolf, “Three-dimensional structure determination of semi-transparent objects from holographic data,” Opt. Commun. 1(4), 153–156 (1969). [CrossRef]  

15. W. Choi, C. Fang-Yen, K. Badizadegan, S. Oh, N. Lue, R. R. Dasari, and M. S. Feld, “Tomographic phase microscopy,” Nat. Methods 4(9), 717–719 (2007). [CrossRef]   [PubMed]  

16. C. J. R. Sheppard, M. Gu, and X. Q. Mao, “Three-dimensional coherent transfer-function in a reflection-mode confocal scanning microscope,” Opt. Commun. 81(5), 281–284 (1991). [CrossRef]  

17. M. Gu, Principles of Three Dimensional Imaging in Confocal Microscopes (World Scientific, Singapore; River Edge, NJ, 1996).

18. G. E. Sommargren, “Optical heterodyne profilometry,” Appl. Opt. 20(4), 610–618 (1981). [CrossRef]   [PubMed]  

19. K. Creath, “Phase-measurement interferometry techniques for nondestructive testing,” Moire Techniques, Holographic Interferometry, Optical NDT, and Applications to Fluid Mechanics 1554, 701–707 (1991).

20. B. Bhaduri, C. Edwards, H. Pham, R. Zhou, T. H. Nguyen, L. L. Goddard, and G. Popescu, “Diffraction phase microscopy: principles and applications in materials and life sciences,” Adv. Opt. Photonics 6(1), 57–119 (2014). [CrossRef]  

21. P. Hosseini, R. Zhou, Y. H. Kim, C. Peres, A. Diaspro, C. Kuang, Z. Yaqoob, and P. T. C. So, “Pushing phase and amplitude sensitivity limits in interferometric microscopy,” Opt. Lett. 41(7), 1656–1659 (2016). [CrossRef]   [PubMed]  

22. E. Cuche, P. Marquet, and C. Depeursinge, “Simultaneous amplitude-contrast and quantitative phase-contrast microscopy by numerical reconstruction of Fresnel off-axis holograms,” Appl. Opt. 38(34), 6994–7001 (1999). [CrossRef]   [PubMed]  

23. M. G. Somekh, C. W. See, and J. Goh, “Wide field amplitude and phase confocal microscope with speckle illumination,” Opt. Commun. 174(1-4), 75–80 (2000). [CrossRef]  

24. B. Redding, Y. Bromberg, M. A. Choma, and H. Cao, “Full-field interferometric confocal microscopy using a VCSEL array,” Opt. Lett. 39(15), 4446–4449 (2014). [CrossRef]   [PubMed]  

25. Y. Choi, P. Hosseini, W. Choi, R. R. Dasari, P. T. C. So, and Z. Yaqoob, “Dynamic speckle illumination wide-field reflection phase microscopy,” Opt. Lett. 39(20), 6062–6065 (2014). [CrossRef]   [PubMed]  

26. D. A. Agard, Y. Hiraoka, P. Shaw, and J. W. Sedat, “Fluorescence microscopy in three dimensions,” Methods Cell Biol. 30, 353–377 (1989). [CrossRef]   [PubMed]  

27. Y. Sung, W. Choi, N. Lue, R. R. Dasari, and Z. Yaqoob, “Stain-free quantification of chromosomes in live cells using regularized tomographic phase microscopy,” PLoS One 7(11), e49502 (2012). [CrossRef]   [PubMed]  

28. S. S. Kou and C. J. R. Sheppard, “Image formation in holographic tomography: high-aperture imaging conditions,” Appl. Opt. 48(34), H168–H175 (2009). [CrossRef]   [PubMed]  

29. T. S. Ralston, D. L. Marks, P. S. Carney, and S. A. Boppart, “Interferometric synthetic aperture microscopy,” Nat. Phys. 3(2), 129–134 (2007). [CrossRef]   [PubMed]  

30. N. D. Shemonski, F. A. South, Y. Z. Liu, S. G. Adie, P. S. Carney, and S. A. Boppart, “Computational high-resolution optical imaging of the living human retina,” Nat. Photonics 9(7), 440–443 (2015). [CrossRef]   [PubMed]  

31. T. Kim, R. Zhou, M. Mir, S. D. Babacan, P. S. Carney, L. L. Goddard, and G. Popescu, “White-light diffraction tomography of unlabelled live cells,” Nat. Photonics 8(3), 256–263 (2014). [CrossRef]  

32. R. Zhou, T. Kim, L. L. Goddard, and G. Popescu, “Inverse scattering solutions using low-coherence light,” Opt. Lett. 39(15), 4494–4497 (2014). [CrossRef]   [PubMed]  

33. T. Kim, R. J. Zhou, L. L. Goddard, and G. Popescu, “Solving inverse scattering problems in biological samples by quantitative phase imaging,” Laser Photonics Rev. 10(1), 13–39 (2016). [CrossRef]  

34. J. W. Goodman, “Statistical properties of laser speckle patterns,” in Laser Speckle and Related Phenomena (Springer, 1975), pp. 9–75.

35. M. Shan, V. Nastasa, and G. Popescu, “Statistical dispersion relation for spatially broadband fields,” Opt. Lett. 41(11), 2490–2492 (2016). [CrossRef]   [PubMed]  

36. Y. Choi, T. D. Yang, K. J. Lee, and W. Choi, “Full-field and single-shot quantitative phase microscopy using dynamic speckle illumination,” Opt. Lett. 36(13), 2465–2467 (2011). [CrossRef]   [PubMed]  
