Optica Publishing Group

Proposal of three-dimensional phase contrast holographic microscopy

Open Access

Abstract

We propose a three-dimensional phase contrast digital holographic microscopy. The object to be observed is a low-contrast transparent sample with a refractive index distribution, such as biological tissue. Low-contrast phase objects are converted into high-contrast images by the proposed microscope. In order to obtain high three-dimensional resolution, the direction of the pump plane wave is scanned, and the separate holographic images produced at each angle are acquired and decoded into complex amplitudes in Fourier space. The three-dimensional image is reconstructed in a computer from all the information acquired through the system. The resolution in the direction of the optical axis is increased by utilizing a 4π configuration of objective lenses.

©2007 Optical Society of America

1. Introduction

Recently, studies in biomedical science have advanced remarkably, and novel microscopies for biological materials have been actively developed. For example, confocal microscopy is widely used in biology, medical science, and pharmaceutical science. The development of confocal microscopy has made it possible to observe living, unstained biological tissue [1–3]. While the image can be formed by autofluorescence from the specimen in confocal microscopy, the radiance of autofluorescence typically yields low signal levels. The wave diffracted on reflection from or transmission through the sample also enables one to obtain an image. With confocal reflection microscopy, however, the signal from refractive index boundaries is emphasized, and a structure containing several phase layers cannot be visualized precisely, because the 0-order light does not interfere with the diffracted wave at the pinhole just before the detector [1]. While confocal transmission microscopy can resolve an absorptive object that has an absorption coefficient distribution, it cannot form a high contrast image of a phase object, so a detector with an excessively wide dynamic range is required [1]. Therefore, in most cases, the biological tissue is stained for fluorescence detection, which is, however, a major disadvantage in observing living tissue. Furthermore, it is difficult to separate the essential signal generated in the fluorescent dyes from the background autofluorescence.

Various tools have been developed that do not require staining the specimen before observation. Some multi-photon microscopies detect the second harmonic generation (SHG) signal [4–6], the third harmonic generation (THG) signal [7,8], or the coherent anti-Stokes Raman scattering (CARS) signal [9,10], which are generated through nonlinear optical interaction between the molecules in the specimen and the light. Multi-photon microscopy, however, requires a relatively expensive high power pulse laser as a light source. While phase contrast microscopy, which can convert a phase object into a high contrast irradiance image, is one of the most powerful tools for observing biological tissue, the three-dimensional (3-D) character of a phase object cannot be resolved [11,12]. Digital holographic microscopy, which can provide 3-D images by numerically calculating the complex amplitude of the wavefront diffracted by the object from a hologram recorded with a digital camera, does not have sufficient resolution [13–16]. Finally, although optical coherence tomography is an excellent instrument for biological tissue, the development of a broadband light source is indispensable in order to obtain high resolution in the depth direction [17,18].

In this paper, a powerful tool is proposed that can resolve an unstained specimen three-dimensionally and is operated easily. This technique permits 3-D visualization of objects at any viewing angle. 3-D image reconstruction is accomplished by using a digital holographic technique combined with scanning of the direction of the wave vector of the pump plane wave in order to increase the resolution. The digital holographic technique is used to acquire the complex amplitude of the object waves produced by the pump beam at every incident angle. The resolution of the microscopy system, analyzed using the 3-D optical transfer function (OTF), and an estimate of the error in object reconstruction are also discussed.

2. Setup of 3-D phase contrast holographic microscopy

The object is a low contrast refractive index distribution. Most biological tissues satisfy this condition. Unlike ordinary microscopy, which comprises an image-formation optical system, the interference pattern between an object wave and a reference wave is recorded with a two-dimensional digital camera placed in the Fourier plane of the object. The complex amplitude of the object wave is calculated in a computer using a digital holographic technique [13–16]. In order to gain high 3-D resolution, the direction of the pump beam is scanned two-dimensionally by a scanning mirror system, and the 3-D image of the local scattering cross section in the object is reconstructed from the complex amplitude information of the object waves produced over the angle range of the pump beam.


Fig. 1. Schematic of the setup of the 3-D phase contrast holographic microscopy. BS: Beam splitter. λ/2: Half wave plate. λ/4: Quarter wave plate. PBS: Polarized beam splitter.


Figure 1 shows a schematic of the 3-D holographic microscopy instrument. Any two successive lenses in Fig. 1 compose infinite image-formation systems. A stable near-infrared laser with a sufficiently long coherence length is used as the light source. Since the direction of polarization of the laser is perpendicular to the plane of the drawing (S-polarization), the beam is reflected by the polarized beam splitter (PBS1). The beam is collimated through a tube lens and a primary objective, and the object is illuminated by a plane wave whose angle is scanned two-dimensionally by the scanning mirror. While the maximum angle of the pump beam after propagating through the primary objective is approximately 64 degrees, which corresponds to a numerical aperture (NA) of 1.2 in water, the maximum beam angle after reflection by the scanning mirror is about 1.15 degrees, since the focal length of the tube lens is 60 times as long as that of the primary objective. Two primary objectives with identical specifications face each other, and a sample holder is set between them. After the 0-order light is transmitted through the sample, it propagates through the other primary objective and is transmitted through the polarized beam splitter (PBS1) after being changed into P-polarization by a half wave plate (λ/2-A). The 0-order light is collimated by the tube lens and reflected by the scanning mirror again, which changes the angle of the 0-order light back into the original angle of the pump beam emerging from the light source. After reflection by the scanning mirror, the 0-order light converges onto a micromirror about 200 μm in diameter. While a 50/50 beam splitter inevitably leads to light loss in the detection path, this loss can be reduced by using an 80/20 beam splitter. The majority of the 0-order light is reflected by the micromirror and is used as a reference wave for the hologram.
The remainder of the 0-order light is transmitted through the micromirror and advances toward the digital camera. A certain quantity of 0-order light is required to form the image. The transmitted 0-order light is used for compensation of the phase error, as will be discussed later.
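As a quick consistency check of the scan geometry described above, the two quoted angles follow from the NA and the 60× focal-length ratio alone, assuming the pupil height is conserved between the two lenses. A minimal sketch in Python (the refractive index of water, 1.33, is an assumed value; the other numbers are those quoted in the text):

```python
import math

# The two angles quoted above follow from NA = 1.2 and the 60x focal-length
# ratio alone; n_water = 1.33 is an assumed value for the immersion medium.
n_water = 1.33
NA = 1.2
ratio = 60                    # tube-lens / primary-objective focal lengths

theta_sample = math.degrees(math.asin(NA / n_water))  # in the water
theta_mirror = math.degrees(math.asin(NA / ratio))    # at the scanning mirror

print(f"max angle in the sample: {theta_sample:.0f} deg")   # ~64 deg
print(f"max angle at the mirror: {theta_mirror:.2f} deg")   # ~1.15 deg
```

Because the same pupil height h corresponds to a much longer focal length on the tube-lens side, the angular scan at the mirror is demagnified by the focal-length ratio.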

The transmitted component of the scattered wave emerging from the sample travels along the same optical path as the 0-order light and passes outside of the micromirror, whereas the low frequency components of the scattered wave are intercepted by the micromirror. The micromirror is set on the optical axis at the Fourier plane of the two primary objectives (the middle plane in the sample). The scattered wave passing outside of the micromirror is transmitted through PBS2 and propagates toward the digital camera set at the Fourier plane of the two primary objectives. The location of the micromirror is also conjugate to the digital camera, except for the reference beam path.

The reflected component of the scattered wave emerging from the sample propagates back along the optical path and is reflected by PBS1. It then follows the same optical path as the transmitted scattered wave. After being reflected by the scanning mirror, the reflected scattered wave passes around the outside of the micromirror. The S-polarization is reflected by PBS2, the polarization is converted to P-polarization by a half wave plate (λ/2-B), and the wave propagates toward the digital camera. In the same way as for the transmitted component, the frequency components of the reflected scattered wave in the vicinity of the 0-order light are intercepted by the micromirror. While the well-known high-NA depolarization effect occurs in the primary objectives, two polarizers placed on the transmission (P-polarization) and reflection (S-polarization) sides ensure the linear polarization.


Fig. 2. Description of the position of the 0-order light and the circular area of the scattered wave on the observation plane. The circular area shifts during the scanning of the pump direction. On the other hand, the arrival position of the 0-order light is fixed at the center.


The 0-order light transmitted through the micromirror separates into two paths at PBS2 after its polarization is converted to circular polarization by a micro quarter wave plate (λ/4) of the same size as the micromirror. The part of the 0-order light transmitted through PBS2 propagates to the digital camera along with the transmitted component of the scattered wave, and the other part, reflected by PBS2, propagates with the reflected component of the scattered wave. The two divided parts of the 0-order light are used to compensate the phase error for the transmitted and reflected components, which will be described in detail in Section 4. Although the micro quarter wave plate can be removed for a simple setup, in that case the phase error of the reflected component cannot be minimized. The digital camera can be divided into two parts, which are used for the transmission and reflection waves. Since the transmitted and reflected scattered waves and the reference wave are P-polarized on the observation plane of the digital camera, the transmission and reflection waves interfere with the reference wave, and holographic images are recorded by the digital camera. Since the central point of the scanning mirror is placed at the position conjugate to the middle point of the sample on the optical axis, the incident angle of the transmission and reflection waves remains perpendicular to the digital camera while the pump beam is scanned. The circular area of the scattered wave at the digital camera shifts during scanning. The 0-order light is fixed at the central position of the observation plane of each part, as shown in Fig. 2. The incident angle of the reference wave onto the digital camera never varies during the scanning, since the 0-order light is always reflected at the center of the micromirror after being reflected twice by the scanning mirror.
Holographic images of the transmission and reflection waves are recorded for each incident direction of the pump wave.


Fig. 3. Schematic of the sample holder and the mechanical aperture between the two Fourier lenses facing each other. The Fourier lens satisfies the condition of h = F sin θ.


The sample is set in a sample holder with two circular apertures fixed between the two primary objectives, as shown in Fig. 3. The primary objective in this system is a Fourier lens [19], in which the pupil aberration is corrected properly and the condition h = F sin θ is satisfied, where h represents the height of the principal ray on the pupil plane, F denotes the focal length, and θ stands for the angle of the principal ray emitted from the central point on the object plane to the optical axis (see Fig. 3). In particular, a water immersion objective lens with numerical aperture (NA) 1.2 can be designed and produced easily as a Fourier lens, whose specification is that the angular field of view of the incident plane wave on the object side is NA 1.2 in water and the NA on the detector side is 0.0075 in air. In this case, if the focal length of the tube lens is 200 mm and the magnification of the primary objective is 60X, the focal length of the primary objective is about 4.4 mm in water, and the field of view in the sample is approximately 50 μm. The digital camera is set to be conjugate to the pupil plane of the primary objective. If we assume that the optical system between the pupil plane of the primary objective and the detector plane comprises an image-formation system of magnification β, we obtain the relation

$$\mathrm{NA}\cdot F=\frac{\mu N}{4\beta},\tag{1}$$

where μ represents the size of a pixel of the digital camera and N denotes the number of pixels along a side of the digital camera. The length of the side of the detector plane is assumed to be equivalent to μN, which is twice as long as the diameter of the circular area of the scattered wave on the digital camera.
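The relation above fixes the relay magnification β once the camera geometry is chosen. A small sketch under assumed camera parameters (the pixel pitch and pixel count are illustrative, not from the text; the air-side focal length 200 mm / 60 is used so that NA·F gives the pupil height):

```python
# Solving NA*F = mu*N/(4*beta) for the relay magnification beta between the
# pupil plane and the camera. F follows from the 200 mm tube lens and 60x
# magnification given in the text; pixel pitch and count are assumed values.
NA = 1.2
F = 200e-3 / 60        # primary-objective focal length (air side), m
mu = 10e-6             # camera pixel pitch, m (assumed)
N = 2048               # pixels per side (assumed)

beta = mu * N / (4 * NA * F)
print(f"pupil radius NA*F = {NA * F * 1e3:.1f} mm, beta = {beta:.2f}")
```

With these assumed values the camera side length μN is 20.5 mm, so a modest relay magnification of about 1.3 images the 4 mm pupil onto half the camera width, as required.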

3. Diffraction due to low contrast phase object

Before explaining the algorithm for the object reconstruction, it is important to consider diffraction due to a phase object that has a low contrast refractive index distribution. The first Born approximation is applied, in which only the 0-order light and the first-order scattered wave are taken into account, as the 0-order light is overwhelmingly intense and the higher-order scattered waves are negligible. The refractive index distribution in the specimen is given by n(x) = n0 + εg(x), where n0 is a scalar constant representing the average refractive index, x stands for the three-dimensional position in the specimen, ε is a sufficiently small constant (ε ≪ 1), and g(x) is a scalar function related to the net refractive index distribution, whose range is -1 < g(x) < 1. If the object is illuminated by a plane wave, a scattered wave appears. Since the digital camera is set conjugate to the pupil plane of the primary objective, one can presume that the wave vector of the scattered light is recorded.

3.1 Electric field of the scattered wave

After the incident plane wave E(i)(x′,t) with wave vector n0k0 is scattered by the object, the total electric field, including the 0-order light and the scattered wave, on a reference sphere of infinite radius satisfies the following equation [20]:

$$\mathbf{E}(\mathbf{x}',t)=\mathbf{E}^{(i)}(\mathbf{x}',t)+\operatorname{rot}_{\mathbf{x}'}\operatorname{rot}_{\mathbf{x}'}\int\frac{\alpha(\mathbf{x})N(\mathbf{x})\,\mathbf{E}'\!\left(\mathbf{x},\,t-\lvert\mathbf{x}'-\mathbf{x}\rvert n_0/c\right)}{\lvert\mathbf{x}'-\mathbf{x}\rvert}\,d^3x,\tag{2}$$

where x′ is the three-dimensional position on the reference sphere, c is the speed of light in vacuum, α(x) is the polarizability relative to the average refractive index n0, and N(x) denotes the number of molecules per unit volume. The product α(x)N(x) is given by [20]

$$\alpha(\mathbf{x})N(\mathbf{x})=\frac{3}{4\pi}\,\frac{\{n(\mathbf{x})/n_0\}^2-1}{\{n(\mathbf{x})/n_0\}^2+2}\approx\frac{\varepsilon}{2\pi n_0}\,g(\mathbf{x}).\tag{3}$$

The effective electric field E′(x, t − ∣x′−x∣n0/c) in Eq. (2) is

$$\mathbf{E}'\!\left(\mathbf{x},\,t-\lvert\mathbf{x}'-\mathbf{x}\rvert n_0/c\right)=\mathbf{E}_0'(\mathbf{x})\exp[\,i n_0 k_0\lvert\mathbf{x}'-\mathbf{x}\rvert\,]\,e^{-i\omega t},\tag{4}$$

where ω is the angular frequency of the light used and k0 = ω/c is the magnitude of the wave vector k0. Since the reference sphere is located far from the object, that is, ∣x′−x∣ ≫ λ/2π, the integrand of the second term in Eq. (2) is approximated by [20]

$$\begin{pmatrix}E_r\\E_\theta\\E_\varphi\end{pmatrix}=\begin{pmatrix}0\\[0.5ex]\alpha(\mathbf{x})N(\mathbf{x})\sin\theta\,n_0^2k_0^2\,E_0'(\mathbf{x})\,\dfrac{\exp[\,i n_0 k_0\lvert\mathbf{x}'-\mathbf{x}\rvert\,]}{\lvert\mathbf{x}'-\mathbf{x}\rvert}\\[2ex]0\end{pmatrix}e^{-i\omega t},\tag{5}$$

where λ is the wavelength of light in vacuum, θ is the angle between the direction of the effective electric field E0′(x) and the propagation direction of the scattered light, exp[in0k0∣x′−x∣]/∣x′−x∣, and φ and r are the azimuthal angle and radius in polar coordinates. While Eq. (5) implies the well-known high-NA depolarization effect, two polarizers placed on the transmission and reflection sides ensure the linear polarization of the scattered waves, as mentioned above. Hereafter, the polarization effects of scattering are not taken into account; namely, the vector electric field given by Eq. (5) is treated as a scalar field whose amplitude is equivalent to Eθ with the approximation sin θ = 1. Note that this is not the paraxial approximation. If only the scalar electric field is considered, the integrand of the second term in Eq. (2) is

$$E_s(\mathbf{x}',\mathbf{x},t)=\alpha(\mathbf{x})N(\mathbf{x})\,n_0^2k_0^2\,E_0'(\mathbf{x})\,\frac{\exp[\,i n_0 k_0\lvert\mathbf{x}'-\mathbf{x}\rvert\,]}{\lvert\mathbf{x}'-\mathbf{x}\rvert}\,e^{-i\omega t}.\tag{6}$$

Assuming harmonic time dependence of the scalar electric field, E′(x,t) → E0′(x)e−iωt and E(i)(x′,t) → E0(i)(x′)e−iωt, and substituting Eqs. (3) and (6) into Eq. (2) yields

$$E_0(\mathbf{x}')=E_0^{(i)}(\mathbf{x}')+\frac{\varepsilon k_0^2 n_0}{2\pi}\int g(\mathbf{x})\,E_0'(\mathbf{x})\,\frac{\exp[\,i n_0 k_0\lvert\mathbf{x}'-\mathbf{x}\rvert\,]}{\lvert\mathbf{x}'-\mathbf{x}\rvert}\,d^3x.\tag{7}$$

Ignoring the second and higher orders of ε (the first-order Born approximation) and substituting E0(i)(x) = exp[in0k0·x] yields

$$\begin{aligned}E_0(\mathbf{x}')&=E_0^{(i)}(\mathbf{x}')+\frac{\varepsilon k_0^2 n_0}{2\pi}\int g(\mathbf{x})\,E_0^{(i)}(\mathbf{x})\,\frac{\exp[\,i n_0 k_0\lvert\mathbf{x}'-\mathbf{x}\rvert\,]}{\lvert\mathbf{x}'-\mathbf{x}\rvert}\,d^3x\\&=\exp[\,i n_0\mathbf{k}_0\cdot\mathbf{x}'\,]+\frac{\varepsilon k_0^2 n_0}{2\pi}\int g(\mathbf{x})\exp[\,i n_0\mathbf{k}_0\cdot\mathbf{x}\,]\,\frac{\exp[\,i n_0 k_0\lvert\mathbf{x}'-\mathbf{x}\rvert\,]}{\lvert\mathbf{x}'-\mathbf{x}\rvert}\,d^3x.\end{aligned}\tag{8}$$

Since the distance from any position x in the sample to an arbitrary position x′ on the reference sphere is infinite, the second term on the right side of Eq. (8) is proportional to the Fourier transform of g(x), which is measured at the digital camera.
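The statement that the far field of a weak phase object is proportional to the Fourier transform of g(x) can be illustrated numerically. The following is a 1-D thin-object toy model, not the full 3-D integral of Eq. (8); the grid and spatial frequency are arbitrary illustration values:

```python
import numpy as np

# 1-D thin-object toy model: transmitted field exp(i*eps*g) of a weak phase
# object, with the 0-order (plane-wave) part removed, compared against the
# first Born prediction i*eps*FT[g]. Grid and frequency are arbitrary.
x = np.linspace(-1, 1, 512, endpoint=False)
g = np.cos(2 * np.pi * 8 * x)             # single-frequency phase structure
eps = 1e-3                                # low contrast, eps << 1

scattered = np.fft.fft(np.exp(1j * eps * g) - 1)   # measured scattered part
born = 1j * eps * np.fft.fft(g)                    # first Born prediction

err = np.max(np.abs(scattered - born)) / np.max(np.abs(born))
print(f"relative error: {err:.1e}")       # O(eps), well below 1e-3 here
```

The residual is dominated by the second-order term ε²g²/2, so the agreement improves quadratically as the contrast ε decreases, which is the essence of the single-scattering assumption.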

3.2 Diffraction efficiency

We will consider the diffraction efficiency in the case of a simple object that contains only a single spatial frequency, namely g(x) = cos(κ·x) = exp[iκ·x]/2 + exp[-iκ·x]/2, where κ is the three-dimensional grating vector. For simplicity, we will consider the diffraction due to only the positive frequency component exp[iκ·x]/2. Substitution of g(x) = exp[iκ·x]/2 into Eq. (8) yields

$$E_0(\mathbf{x}')=\exp[\,i n_0\mathbf{k}_0\cdot\mathbf{x}'\,]+F_0(\mathbf{x}'),\tag{9}$$

with

$$\begin{aligned}F_0(\mathbf{x}')&=\frac{n_0\varepsilon k_0^2}{4\pi}\int\exp[\,i\boldsymbol{\kappa}\cdot\mathbf{x}\,]\exp[\,i n_0\mathbf{k}_0\cdot\mathbf{x}\,]\,\frac{\exp[\,i n_0 k_0\lvert\mathbf{x}'-\mathbf{x}\rvert\,]}{\lvert\mathbf{x}'-\mathbf{x}\rvert}\,d^3x\\&=\frac{n_0\varepsilon k_0^2}{4\pi}\int\exp\!\big[\,i\{(n_0k_x+\kappa_x)x+(n_0k_y+\kappa_y)y+(n_0k_z+\kappa_z)z\}\big]\,\frac{\exp[\,i n_0 k_0 R\,]}{R}\,d^3x,\end{aligned}\tag{10}$$

where R = √((x′−x)² + (y′−y)² + (z′−z)²), and kx, ky, and kz are the x, y, and z components of k0, respectively. The central position of the sample is defined as the origin of the coordinates, and the z direction is the optical axis of the system. While the integrals over x and y are conducted from negative infinity to positive infinity, the integration domain of x, which corresponds to the size of the sample, is assumed to be sufficiently small compared with the distance between the origin and the position x′ on the reference sphere,

$$\sqrt{x^2+y^2+z^2}\ll\sqrt{x'^2+y'^2+z'^2},\qquad xx'+yy'+zz'\ll x'^2+y'^2+z'^2,\tag{11}$$

which results in

$$R\approx\sqrt{x'^2+y'^2+z'^2}-\frac{xx'+yy'+zz'}{\sqrt{x'^2+y'^2+z'^2}}+\frac{x^2+y^2+z^2}{2\sqrt{x'^2+y'^2+z'^2}}.\tag{12}$$

Inserting Eq. (12) into the exponential function and R ≈ √(x′² + y′² + z′²) into the denominator in Eq. (10) yields

$$\begin{aligned}F_0(\mathbf{x}')=\frac{n_0\varepsilon k_0^2}{4\pi}\,\frac{\exp[\,i n_0 k_0 r'\,]}{r'}&\int_{-\infty}^{\infty}\exp[\,i(n_0k_x+\kappa_x-n_0k_x')x\,]\exp\!\left[\,i\frac{n_0k_0}{2r'}x^2\right]dx\\\times&\int_{-\infty}^{\infty}\exp[\,i(n_0k_y+\kappa_y-n_0k_y')y\,]\exp\!\left[\,i\frac{n_0k_0}{2r'}y^2\right]dy\\\times&\int_{-L/2}^{L/2}\exp[\,i(n_0k_z+\kappa_z-n_0k_z')z\,]\exp\!\left[\,i\frac{n_0k_0}{2r'}z^2\right]dz,\end{aligned}\tag{13}$$

where L represents the thickness of the sample in the z direction, kx′, ky′, and kz′ are the x, y, and z components of the wave vector of the scattered wave k0′, the relations k0x′/r′ = kx′, etc., are used, and r′ = √(x′² + y′² + z′²) ≫ √(x² + y² + z²).

Now we will consider the integral over x in Eq. (13),

$$J_x=\lim_{r'\to\infty}\int_{-r'}^{r'}\exp[\,iax\,]\exp[\,ibx^2\,]\,dx,\tag{14}$$

with a = n0kx + κx - n0kx′ and b = n0k0/(2r′). As r′ approaches infinity, b approaches zero. In order to evaluate the integral Jx, the integrand is multiplied by a factor exp[-βx²], where β is real and positive and approaches zero, β → +0 (β ≪ b). Then Eq. (14) becomes

$$\begin{aligned}J_x&=\lim_{r'\to\infty,\,\beta\to+0}\int_{-r'}^{r'}\exp[\,iax\,]\exp[\,ibx^2\,]\exp[\,-\beta x^2\,]\,dx\\&=\lim_{r'\to\infty,\,\beta\to+0}\exp\!\left[-\frac{a^2}{4(\beta-ib)}\right]\int_{-r'}^{r'}\exp\!\left[-\left(\sqrt{\beta-ib}\,x-\frac{ia}{2\sqrt{\beta-ib}}\right)^{\!2}\right]dx.\end{aligned}\tag{15}$$

Note that √(β−ib) is a complex number with phase φβ,b = −(1/2)tan⁻¹(b/β), which approaches −π/4 as β → +0 (β ≪ b). The change of variables z = √(β−ib)x − ia/(2√(β−ib)) allows one to write

$$J_x=\lim_{r'\to\infty,\,\beta\to+0}\exp\!\left[-\frac{a^2}{4(\beta-ib)}\right]\frac{1}{\sqrt{\beta-ib}}\int_C\exp[\,-z^2\,]\,dz,\tag{16}$$

where the contour C makes an angle φβ,b with the real axis. If β = (1/r′)^α (1 < α < 2), the contributions of the arcs connecting the endpoints of C with the real axis vanish in the limit. Since there are no singularities of the integrand anywhere between C and the real axis, one can deform the contour onto the real axis,

$$\begin{aligned}J_x&=\lim_{r'\to\infty}\exp\!\left[-\frac{a^2}{4(\beta-ib)}\right]\frac{1}{\sqrt{\beta-ib}}\int_{-r'}^{r'}\exp[\,-x^2\,]\,dx\\&=\lim_{r'\to\infty}\sqrt{\frac{\pi}{\beta-ib}}\,\exp\!\left[-\frac{i}{4b}a^2\right]\exp\!\left[-\frac{\beta}{4b^2}a^2\right]\\&=\lim_{\gamma\to\infty}\sqrt{\frac{i\pi}{b}}\,\exp\!\left[-\frac{i}{4b}a^2\right]\exp[\,-\gamma a^2\,].\end{aligned}\tag{17}$$

Replacing a = n0kx + κx - n0kx′ and b = n0k0/(2r′) yields

$$J_x=\lim_{\gamma\to\infty}\sqrt{\frac{2\pi i\,r'}{n_0k_0}}\,\exp\!\left[-\frac{i\,r'(n_0k_x+\kappa_x-n_0k_x')^2}{2n_0k_0}\right]\exp[\,-\gamma\,(n_0k_x+\kappa_x-n_0k_x')^2\,],\tag{18}$$

and likewise the integral over y is

$$J_y=\lim_{\gamma\to\infty}\sqrt{\frac{2\pi i\,r'}{n_0k_0}}\,\exp\!\left[-\frac{i\,r'(n_0k_y+\kappa_y-n_0k_y')^2}{2n_0k_0}\right]\exp[\,-\gamma\,(n_0k_y+\kappa_y-n_0k_y')^2\,].\tag{19}$$

When r′ approaches infinity (r′ → ∞), the integral over z in Eq. (13) is

$$J_z=L\,\mathrm{sinc}\!\left[\frac{L}{2}\,(n_0k_z+\kappa_z-n_0k_z')\right],\tag{20}$$

where sinc[x] = sin x / x. Substitution of these integrals Jx, Jy, and Jz into Eq. (13) yields

$$\begin{aligned}F_0(\mathbf{x}')&=\lim_{\gamma\to\infty}\frac{i\varepsilon k_0}{2}\,\exp[\,i n_0 k_0 r'\,]\,\exp\!\Big[-\frac{i\,r'}{2n_0k_0}\big\{(n_0k_x+\kappa_x-n_0k_x')^2+(n_0k_y+\kappa_y-n_0k_y')^2\big\}\Big]\\&\quad\times\exp\!\big[-\gamma\big\{(n_0k_x+\kappa_x-n_0k_x')^2+(n_0k_y+\kappa_y-n_0k_y')^2\big\}\big]\,L\,\mathrm{sinc}\!\Big[\frac{L}{2}(n_0k_z+\kappa_z-n_0k_z')\Big]\\&=\begin{cases}\dfrac{i\pi\varepsilon L}{\lambda}\,\exp[\,i n_0 k_0 r'\,]\,\mathrm{sinc}\!\Big[\dfrac{L}{2}(n_0k_z+\kappa_z-n_0k_z')\Big],&(n_0k_x+\kappa_x-n_0k_x'=0,\ n_0k_y+\kappa_y-n_0k_y'=0)\\[1.5ex]0,&(\text{otherwise}).\end{cases}\end{aligned}\tag{21}$$

The diffracted wave has a phase shift of π/2 relative to the 0-order light. When the Bragg condition n0k0 + κ - n0k0′ = 0 is satisfied, the diffraction efficiency η is

$$\eta=\left(\frac{\pi\varepsilon L}{\lambda}\right)^{\!2},\tag{22}$$

which corresponds to the well-known formula [21]. The diffraction efficiency of the phase object composed of a single spatial frequency is deduced assuming that only single scattering is taken into account; multiple scattering can be ignored if ε is sufficiently small. This result is approximately true as long as πεL/λ < 1. For example, ε < 0.006 when L = 45 μm and λ = 850 nm. If πεL/λ ≫ 1, η becomes a squared sine function and oscillates, since there is tight coupling between the 0-order light and the diffracted wave, and energy transfer between the two modes occurs periodically due to multiple scattering. Even if ε is somewhat large (ε > 0.006), an image can be reconstructed through this measurement system; however, the image might be deformed compared with the original object.
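The efficiency formula and the quoted validity bound can be checked with a few lines of arithmetic; L and λ below are the values used in the text:

```python
import math

# Hedged numerical check: eta = (pi*eps*L/lambda)^2 and the single-scattering
# bound pi*eps*L/lambda < 1, with the thickness and wavelength from the text.
lam = 850e-9                          # vacuum wavelength, m
L = 45e-6                             # sample thickness, m

eps_max = lam / (math.pi * L)         # contrast at which pi*eps*L/lam = 1
def eta(eps):
    """First-Born diffraction efficiency of a single-frequency phase grating."""
    return (math.pi * eps * L / lam) ** 2

print(f"eps_max = {eps_max:.4f}")     # ~0.006, as quoted in the text
print(f"eta(0.001) = {eta(0.001):.4f}")
```

At a tenth of the limiting contrast, the efficiency is already below 3%, which illustrates why the reference wave must be attenuated to a comparable level for good fringe contrast.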

4. Algorithm of object reconstruction

As mentioned above, the digital camera is conjugate to the pupil plane of the primary objective. The circular area of the scattered wave shifts two-dimensionally as the pump beam is scanned, with the position of the 0-order light fixed at the center of the camera area (see Fig. 2). The number of pixels of the digital camera for each of the transmitted and reflected components is N×N. It is useful to consider the detection position of the scattered wave on the camera to correspond to wave number space, since the entrance pupil of the primary objective is located at infinity and the radius of the reference sphere is also infinite. The complex amplitude of the scattered wave on the detection plane is calculated numerically by utilizing a digital off-axis holography technique [14–16].

The observable volume of the specimen is limited by two circular apertures on both sides of the sample. The diameters of the apertures are three times as large as that of the cross section of the pump beam, which is three times as large as the thickness of the sample. The pump beam is converted into a plane wave and is incident on the sample after converging onto a point in the pupil plane of the primary objective (PO1). The cross section of the pump beam on the aperture plane is a circle that is independent of the incident angle, which is one of the features of the Fourier lens (see Fig. 3). The NA of the scattered wave in the pupil plane of the primary objective (PO2) is confined by the apertures to be three times as large as that of the 0-order light.

In order to acquire the digital holographic image, the incident angle ϕ of the reference wave to the normal of the detection plane is adjusted as λ/sin ϕ = 4μ, which means that a unit cycle of the phase of the reference wave corresponds to four pixels of the digital camera. The difference in spatial frequency between the top and bottom, or the right and left sides, of the digital camera corresponds to 4NA/λ. In this case, the size of the object D in real space, which is calculated by Fourier transforming the wave number space, is

$$D=\frac{\lambda N}{4\,\mathrm{NA}}=\frac{F\lambda\beta}{\mu},\tag{23}$$

from Eq. (1). The diameter of the aperture R and the size of the object D have the relation R = 3D/4.
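The object size D and the aperture diameter R can be evaluated for concrete numbers; below, N is an assumed pixel count and λ an assumed near-infrared wavelength, while NA is the value used in the text:

```python
# Real-space support D = lam*N/(4*NA) of the Fourier-transformed hologram
# and the aperture diameter R = 3D/4. N is an assumed pixel count; lam is
# an assumed near-infrared wavelength; NA is the value used in the text.
lam = 850e-9
N = 2048
NA = 1.2

D = lam * N / (4 * NA)
R = 3 * D / 4
print(f"D = {D * 1e6:.0f} um, R = {R * 1e6:.0f} um")
```

Note that the reconstructed object described in the next paragraphs occupies only D/4 of this support, so a smaller camera or a tighter aperture trades field of view directly against pixel count.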


Fig. 4. Description of the digital hologram and the generation process of a single “twin partial sphere” in 3-D matrix. The circular parts of the complex amplitude generated by the digital hologram are projected onto the partial spheres in the frequency space.


Now the algorithm of the object reconstruction is considered. After the holographic images of the transmitted and reflected scattered waves produced by the pump beam at every incident angle are acquired with the digital camera, the holographic images of N×N pixels are Fourier transformed in the computer. As a result of that computation, the complex amplitude of the object wave is obtained on one side of the matrix and a conjugated wave appears on the other side, as shown in Fig. 4. While the object wave and the conjugated wave overlap, due to the relation R = 3D/4 the central areas of the object and conjugated waves, of size N/4, do not overlap. Because the irradiance of the reference wave is adjusted by an attenuator to be about a hundred times (ten times in amplitude) as intense as that of the diffraction wave at its most intense position, only a central bright spot produced by the dc term appears. The autocorrelation of the object wave, which is supposed to appear around the central bright spot, is negligible. A section of the object wave of N/4 × N/4 elements is cut out and the inverse Fourier transform is applied to this 2-D matrix, which implies that the size of the object to be reconstructed is restricted to D/4. As a result, the complex amplitude of the scattered wave on the pupil plane is obtained.
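The cut-and-inverse-transform step can be sketched as follows. This is a simplified one-sideband demodulation on synthetic data, with a carrier of one fringe per four pixels; it is an illustration of the principle, not the authors' implementation:

```python
import numpy as np

def demodulate(hologram):
    """Cut the object-wave sideband (N/4 x N/4) out of the hologram spectrum.

    Sketch of the step described in the text: FFT the N x N hologram,
    extract the region around the carrier (at N/4 cycles per side, i.e.
    one fringe per four pixels), and inverse-FFT the 2-D cut.
    """
    N = hologram.shape[0]
    spec = np.fft.fftshift(np.fft.fft2(hologram))
    c, n4 = N // 2 - N // 4, N // 4          # sideband centre and window size
    cut = spec[c - n4 // 2:c + n4 // 2,
               N // 2 - n4 // 2:N // 2 + n4 // 2]
    # Normalise so the recovered amplitude matches the object wave.
    return np.fft.ifft2(np.fft.ifftshift(cut)) * (n4 / N) ** 2

# Synthetic check with a weak object wave and a 4-pixel-fringe reference.
N = 256
y, x = np.mgrid[0:N, 0:N]
obj = 0.1 * np.exp(1j * 2 * np.pi * 3 * x / N)       # weak object wave
ref = np.exp(1j * 2 * np.pi * (N // 4) * y / N)      # tilted reference
rec = demodulate(np.abs(obj + ref) ** 2)
print(np.abs(rec).mean())                            # ~0.1, the object amplitude
```

Because the reference is much stronger than the object wave, the dc and autocorrelation terms stay near the spectrum centre and do not leak into the cut window, mirroring the argument in the text.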


Fig. 5. Positional relation among the partial spheres corresponding to each pump direction.


Since information about the amplitude of the scattered wave is contained in the circle of diameter N/8 in Fig. 4, the circle is cut out and projected onto the "partial sphere" that corresponds to the reference sphere and is mapped into wave number space. The partial sphere lies in a three-dimensional matrix of N/4 × N/4 × N/4 elements. The complex amplitudes in all elements other than those on the partial sphere are set to zero (see Fig. 4). In this step, the complex amplitude is projected in the z direction onto the digital partial sphere composed of the set of voxels through which the analog partial sphere passes. If an (x, y) element of the digital partial sphere consists of two voxels in the z direction, the amplitude is distributed equally to the two voxels. While this step can lead to some artifacts in real space, the error is reduced by using a partial sphere convolved with a Gaussian function spanning a few elements in the z direction. After the resultant partial sphere is digitized, the amplitude is projected onto the digital partial sphere with a weight given by the Gaussian distribution. In this case, the peripheral intensity of the image in real space in the z direction becomes weaker.
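The projection onto the digital partial sphere can be sketched as below. The mapping fz = √((n0/λ)² − fx² − fy²) − n0/λ places the 0-order component at the frequency origin for the transmission side; the grid size and the nearest-voxel quantization (without the Gaussian weighting described above) are simplifications:

```python
import numpy as np

def project_to_partial_sphere(amp2d, na=1.2, n0=1.33, lam=850e-9):
    """Project a demodulated 2-D spectrum onto the (transmission-side)
    partial sphere in a 3-D matrix, with the 0-order at the origin.

    Simplified sketch: nearest-voxel quantisation in z, no Gaussian
    weighting; grid sizes are arbitrary.
    """
    n = amp2d.shape[0]
    f_max = na / lam                          # transverse frequency cut-off
    f = np.fft.fftshift(np.fft.fftfreq(n, d=1.0 / (2 * f_max)))
    fx, fy = np.meshgrid(f, f)
    fr2 = fx ** 2 + fy ** 2
    inside = fr2 <= f_max ** 2
    fz = np.zeros_like(fx)
    fz[inside] = np.sqrt((n0 / lam) ** 2 - fr2[inside]) - n0 / lam
    # Nearest-voxel quantisation of fz onto the n-voxel frequency axis.
    df = 2 * f_max / n
    iz = np.clip(np.round(fz / df).astype(int) + n // 2, 0, n - 1)
    vol = np.zeros((n, n, n), dtype=complex)
    iy, ix = np.indices(amp2d.shape)
    vol[iz[inside], iy[inside], ix[inside]] = amp2d[inside]
    return vol

vol = project_to_partial_sphere(np.ones((32, 32)))
print(vol.shape, vol[16, 16, 16])
```

Replacing the hard assignment with a Gaussian spread over a few z-voxels, as the text suggests, reduces the staircase artifacts of this nearest-voxel version at the cost of a slight apodization of the axial field of view.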

The orientation of each partial sphere in frequency space is such that the 0-order light is placed at the frequency origin, as shown in Fig. 5. This construction allows one to grasp the physical properties of the system optical transfer function (OTF). If the system has no phase error in the measurement of the hologram corresponding to each partial sphere, the argument at the origin of the frequency space is zero. While the amplitude corresponding to the 0-order light lies on the partial sphere for the transmitted component, the amplitude of the 0-order light for the reflected component is located at the origin, apart from the partial sphere, which has a hole at the position corresponding to where the 0-order light arrives on the digital camera. The 0-order light arriving on the digital camera for the reflected component is subtracted from the partial sphere of the reflected component and is used for the phase error correction of the reflected component. The partial sphere for the reflected component and the 0-order light lie on the same sphere. The two 3-D matrices for the transmitted and reflected components are added, which results in a matrix composed of two partial spheres facing each other. This matrix of two partial spheres, which share the same radius and center of curvature, is hereafter referred to as a twin partial sphere.

We will consider two calculation methods to reconstruct the object. These two methods are equivalent for object reconstruction as long as the sample is low contrast, which will be discussed in detail in the next section. In the first method, the twin partial sphere is Fourier transformed after the phase shift of π/2 is added only to the 0-order light, and the square of the modulus is calculated. The same calculation is performed for every direction of the pump plane wave. The final reconstructed object, with the size of D/4 × D/4 × D/4 in real space, is obtained by adding all of these calculated matrices. Since the square of the modulus is independent of shifts in the Fourier plane, positioning of the twin partial sphere is not required. This method of 3-D object reconstruction corresponds to image formation in conventional microscopy with an incoherent Köhler illumination system.
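A minimal sketch of this first method, with stand-in random matrices in place of measured twin partial spheres (the π/2 shift is applied to the single voxel holding the 0-order amplitude, and no mutual registration of the spheres is performed, reflecting the shift-invariance noted above):

```python
import numpy as np

def reconstruct_incoherent(spheres, zero_order_index):
    """Add a pi/2 shift to the 0-order voxel of each twin partial sphere,
    Fourier transform, square the modulus, and accumulate over pump
    directions (no mutual registration of the spheres is needed)."""
    image = 0.0
    for s in spheres:
        s = s.copy()
        s[zero_order_index] *= 1j          # pi/2 phase shift on the 0-order
        image = image + np.abs(np.fft.ifftn(s)) ** 2
    return image

# Stand-in data: random complex 3-D matrices instead of measured spheres.
rng = np.random.default_rng(1)
n = 16
spheres = [rng.normal(size=(n, n, n)) + 1j * rng.normal(size=(n, n, n))
           for _ in range(4)]
img = reconstruct_incoherent(spheres, (n // 2, n // 2, n // 2))
print(img.shape, img.min() >= 0)
```

Summing intensities rather than amplitudes is what makes this variant insensitive to a global phase drift between successive holograms of the transmitted component.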

The relative optical path length between the transmitted scattered wave and the reference wave may change due to temperature variation or vibration during the measurement. A difference in phase among the measurements for each hologram could be generated, but the error in the reconstructed object vanishes if only the transmitted component is taken into account, since we calculate the square of the modulus of the Fourier transform of each complex amplitude in the frequency domain. However, the phase error remains for the reflected component. The relative phase difference between the transmitted and reflected components of the twin partial sphere arises from changes in the relative optical path lengths among the three paths of the transmitted and reflected diffraction waves and the reference wave. In order to minimize the phase error, the partial sphere for the transmitted (reflected) component is divided by the phase term of the 0-order light arriving on the digital camera for the transmitted (reflected) component before the two partial spheres are added to form the twin partial sphere. The optical path of the scattered wave is different from that of the 0-order light in the looped part outlined in Fig. 1, which includes the sample holder and PBS1. This looped part requires an extremely steady and stable mechanical design. The relative phase error caused by the looped part is discussed in Section 6.

In the second method, the object is reconstructed in the following way. After the complex amplitudes of the twin partial spheres for every pump direction are calculated, each twin partial sphere is translated so that the position corresponding to the 0-order light is placed at the origin of the 3-D matrix. The partial sphere of the transmitted (reflected) component in the twin partial sphere is divided by the phase term of the 0-order light arriving on the transmission (reflection) side of the digital camera in the same way as the first method. Thus, also in the second method, phase correction for each hologram by means of the 0-order light is achieved, and it ensures that the phase at the origin of the 3-D matrices is zero. Even with this phase correction, some relative phase error can remain in the reflected component, as described for the first calculation method. The total amplitude in the frequency domain is obtained by coherently adding the 3-D matrices of the twin partial spheres for every pump direction. Finally, the object intensity is reconstructed through a Fourier transform of the total amplitude after the phase shift of π/2 is added to the origin of the 3-D matrix, and taking the square of the modulus. These two techniques are closely related to filtered backprojection. The filtering is later done by the application of the inverse total OTF.
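A corresponding sketch of the second method. The division by the measured 0-order phase term is modelled as multiplication by exp(−iφ); the spheres and phases are stand-ins. A useful property to check is that a common phase drift between measurements cancels out:

```python
import numpy as np

def reconstruct_coherent(spheres, zero_phases, origin):
    """Coherent variant: phase-correct each twin partial sphere by its
    measured 0-order phase, sum the 3-D matrices, add a pi/2 shift at the
    frequency origin, Fourier transform, and take the squared modulus."""
    total = np.zeros_like(spheres[0])
    for s, phi in zip(spheres, zero_phases):
        total = total + s * np.exp(-1j * phi)   # divide out the 0-order phase
    total[origin] *= 1j                          # pi/2 shift at the origin
    return np.abs(np.fft.ifftn(total)) ** 2

rng = np.random.default_rng(2)
n = 16
spheres = [rng.normal(size=(n, n, n)) + 1j * rng.normal(size=(n, n, n))
           for _ in range(3)]
phases = [0.3, -1.1, 2.0]
img = reconstruct_coherent(spheres, phases, (n // 2, n // 2, n // 2))

# A common phase drift applied to all measurements is removed by the
# correction, so the reconstruction is unchanged.
img2 = reconstruct_coherent([s * np.exp(1j * 0.5) for s in spheres],
                            [p + 0.5 for p in phases],
                            (n // 2, n // 2, n // 2))
print(np.allclose(img, img2))   # True
```

Unlike the first method, the coherent sum requires each twin partial sphere to be translated so that its 0-order position coincides with the matrix origin before the addition, as described in the text.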


Fig. 6. 3-D entrance pupil function composed of the two pupil functions on the transmission and reflection sides. The pupil function is considered to be in the frequency space.

Download Full Size | PDF

5. Three-dimensional image formation feature

Consider the 3-D pupil functions P_T(f) for the transmission side and P_R(f) for the reflection side, which satisfy the relation P_T(f) = P_R*(−f), as shown in Fig. 6. For simplicity, it is assumed that P_T(f) (P_R(f)) is unity on the partial sphere for the transmission (reflection) side and zero outside it. That is,

$$P_T(\mathbf{f}) = \begin{cases} 1, & \text{(on shell of transmission side)} \\ 0, & \text{(others)}, \end{cases}$$
$$P_R(\mathbf{f}) = \begin{cases} 1, & \text{(on shell of reflection side)} \\ 0, & \text{(others)}. \end{cases}$$

In practical systems, the pupil function is not uniform but varies across the aperture; toward the edge of the NA, its value decreases. While uniform apodization is assumed hereafter, it can be replaced with an arbitrary apodization, in which case the equations can be developed in the same way as the derivation in this section.

The wave number of the pump wave 2πf_0 (= n_0 k_0) is scanned over the pupil function P_T(f), and the scattered wave emerging from the object is transmitted through both sides of the pupil P(f) = P_T(f) + P_R(f). The amplitude of the 0-order light, which can be assumed to be a real number after its phase is shifted by π/2, is attenuated by the micromirror before arriving at the digital camera, a fraction a (0 < a < 1) of its amplitude being removed. If the amplitude of the object is given by O(x), the amplitude on the twin partial sphere for a certain wave number of the pump wave 2πf_0 is

$$F(\mathbf{f},\mathbf{f}_0) = \left\{ \int O(\mathbf{x})\, e^{-i2\pi \mathbf{f}\cdot\mathbf{x}}\, d\mathbf{x} - a\,\delta(\mathbf{f}) \right\} \left\{ P_T(\mathbf{f}+\mathbf{f}_0) + P_R(\mathbf{f}+\mathbf{f}_0) \right\} P_T^{*}(\mathbf{f}_0),$$

where the asterisk stands for the complex conjugate. Based on the first method, the irradiance of the image of the reconstructed object I_I(x′) is given by

$$I_I(\mathbf{x}') = \int \left| \int F(\mathbf{f},\mathbf{f}_0)\, e^{i2\pi \mathbf{f}\cdot\mathbf{x}'}\, d\mathbf{f} \right|^2 d\mathbf{f}_0$$
$$= \int \left[ \int \left\{ \int O(\mathbf{x}_1)\, e^{-i2\pi \mathbf{f}_1\cdot\mathbf{x}_1}\, d\mathbf{x}_1 - a\,\delta(\mathbf{f}_1) \right\} \{P_T(\mathbf{f}_1+\mathbf{f}_0)+P_R(\mathbf{f}_1+\mathbf{f}_0)\}\, P_T^*(\mathbf{f}_0)\, e^{i2\pi \mathbf{f}_1\cdot\mathbf{x}'}\, d\mathbf{f}_1 \right]$$
$$\times \left[ \int \left\{ \int O^*(\mathbf{x}_2)\, e^{i2\pi \mathbf{f}_2\cdot\mathbf{x}_2}\, d\mathbf{x}_2 - a\,\delta(\mathbf{f}_2) \right\} \{P_T^*(\mathbf{f}_2+\mathbf{f}_0)+P_R^*(\mathbf{f}_2+\mathbf{f}_0)\}\, P_T(\mathbf{f}_0)\, e^{-i2\pi \mathbf{f}_2\cdot\mathbf{x}'}\, d\mathbf{f}_2 \right] d\mathbf{f}_0$$
$$= \int \left[ \int O(\mathbf{x}_1)\, P_T^*(\mathbf{f}_0)\, e^{-i2\pi \mathbf{f}_0\cdot(\mathbf{x}'-\mathbf{x}_1)} \{U_T(\mathbf{x}'-\mathbf{x}_1)+U_R(\mathbf{x}'-\mathbf{x}_1)\}\, d\mathbf{x}_1 - a\{P_T(\mathbf{f}_0)+P_R(\mathbf{f}_0)\}P_T^*(\mathbf{f}_0) \right]$$
$$\times \left[ \int O^*(\mathbf{x}_2)\, P_T(\mathbf{f}_0)\, e^{i2\pi \mathbf{f}_0\cdot(\mathbf{x}'-\mathbf{x}_2)} \{U_T^*(\mathbf{x}'-\mathbf{x}_2)+U_R^*(\mathbf{x}'-\mathbf{x}_2)\}\, d\mathbf{x}_2 - a\{P_T^*(\mathbf{f}_0)+P_R^*(\mathbf{f}_0)\}P_T(\mathbf{f}_0) \right] d\mathbf{f}_0,$$

where

$$U_T(\mathbf{x}) = \int P_T(\mathbf{f})\, e^{i2\pi \mathbf{f}\cdot\mathbf{x}}\, d\mathbf{f}$$
$$U_R(\mathbf{x}) = \int P_R(\mathbf{f})\, e^{i2\pi \mathbf{f}\cdot\mathbf{x}}\, d\mathbf{f}.$$

The Fourier transform of the 3-D entrance pupil function P(f) = P_T(f) + P_R(f) is equivalent to the 3-D coherent point spread function U(x) = U_T(x) + U_R(x), which satisfies the relations

$$U_T(\mathbf{x}) = U_R^*(\mathbf{x})$$
$$U_T(-\mathbf{x}) = U_T^*(\mathbf{x})$$
$$U_R(-\mathbf{x}) = U_R^*(\mathbf{x}).$$

Since the pupil function has rotational symmetry about the optical axis,

$$I_I(\mathbf{x}') = \iint \gamma(\mathbf{x}_1-\mathbf{x}_2)\, O(\mathbf{x}_1)\, O^*(\mathbf{x}_2)\, \{U_T(\mathbf{x}'-\mathbf{x}_1)+U_R(\mathbf{x}'-\mathbf{x}_1)\}\{U_T^*(\mathbf{x}'-\mathbf{x}_2)+U_R^*(\mathbf{x}'-\mathbf{x}_2)\}\, d\mathbf{x}_1\, d\mathbf{x}_2$$
$$- a \int O(\mathbf{x}_1)\, U_R(\mathbf{x}'-\mathbf{x}_1)\{U_T(\mathbf{x}'-\mathbf{x}_1)+U_R(\mathbf{x}'-\mathbf{x}_1)\}\, d\mathbf{x}_1$$
$$- a \int O^*(\mathbf{x}_2)\, U_T(\mathbf{x}'-\mathbf{x}_2)\{U_T^*(\mathbf{x}'-\mathbf{x}_2)+U_R^*(\mathbf{x}'-\mathbf{x}_2)\}\, d\mathbf{x}_2$$
$$+ a^2 \int |P_T(\mathbf{f}_0)|^4\, d\mathbf{f}_0,$$

with

$$\gamma(\mathbf{x}_1-\mathbf{x}_2) = \int |P_T(\mathbf{f}_0)|^2\, e^{i2\pi \mathbf{f}_0\cdot(\mathbf{x}_1-\mathbf{x}_2)}\, d\mathbf{f}_0,$$

which is referred to as the 3-D mutual intensity.

Consider a low contrast object O(x) = 1 + ε_0 o(x), where o(x) is proportional to g(x) and ε_0 ≪ 1. Inserting O(x) = 1 + ε_0 o(x) into Eq. (29) yields

$$I_I(\mathbf{x}') = \int \{1+\varepsilon_0 o(\mathbf{x})+\varepsilon_0^* o^*(\mathbf{x})\}\, |U_T(\mathbf{x}'-\mathbf{x})|^2\, d\mathbf{x}$$
$$+ \int \varepsilon_0\, o(\mathbf{x})\, U_R^2(\mathbf{x}'-\mathbf{x})\, d\mathbf{x} + \int \varepsilon_0^*\, o^*(\mathbf{x})\, U_T^2(\mathbf{x}'-\mathbf{x})\, d\mathbf{x}$$
$$- a \int \{1+\varepsilon_0 o(\mathbf{x})\}\{|U_T(\mathbf{x}'-\mathbf{x})|^2+U_R^2(\mathbf{x}'-\mathbf{x})\}\, d\mathbf{x}$$
$$- a \int \{1+\varepsilon_0^* o^*(\mathbf{x})\}\{|U_T(\mathbf{x}'-\mathbf{x})|^2+U_T^2(\mathbf{x}'-\mathbf{x})\}\, d\mathbf{x}$$
$$+ a^2 \int |U_T(\mathbf{x}'-\mathbf{x})|^2\, d\mathbf{x},$$

where Parseval’s theorem ∫|P_T(f_0)|² df_0 = ∫|U_T(x′−x)|² dx and the identity |P_T(f_0)|⁴ = |P_T(f_0)|², which holds for the binary pupil function, are used in the last term, and the second-order terms of ε_0 are ignored. A further simple calculation leads to

$$I_I(\mathbf{x}') = (1-a)\Big[(1-a)\int |U_T(\mathbf{x}'-\mathbf{x})|^2\, d\mathbf{x}$$
$$+ \int \varepsilon_0\, o(\mathbf{x})\{|U_T(\mathbf{x}'-\mathbf{x})|^2+U_R^2(\mathbf{x}'-\mathbf{x})\}\, d\mathbf{x}$$
$$+ \int \varepsilon_0^*\, o^*(\mathbf{x})\{|U_T(\mathbf{x}'-\mathbf{x})|^2+U_T^2(\mathbf{x}'-\mathbf{x})\}\, d\mathbf{x}\Big]$$
$$= (1-a)\int |U_T(\mathbf{x})|^2\, d\mathbf{x}\, \Big[(1-a)$$
$$+ \int \varepsilon_0\, \tilde{o}(\mathbf{f})\, \mathrm{OTF}(\mathbf{f})\, e^{i2\pi \mathbf{f}\cdot\mathbf{x}'}\, d\mathbf{f}$$
$$+ \int \varepsilon_0^*\, \tilde{o}^*(\mathbf{f})\, \mathrm{OTF}^*(\mathbf{f})\, e^{-i2\pi \mathbf{f}\cdot\mathbf{x}'}\, d\mathbf{f}\Big],$$

where ∫U_R²(x′−x) dx = ∫U_T²(x′−x) dx = 0 and õ(f) is the Fourier transform of o(x). The 3-D optical transfer function is defined as

$$\mathrm{OTF}(\mathbf{f}) = \frac{\int \{|U_T(\mathbf{x})|^2 + U_R^2(\mathbf{x})\}\, e^{-i2\pi \mathbf{f}\cdot\mathbf{x}}\, d\mathbf{x}}{\int |U_T(\mathbf{x})|^2\, d\mathbf{x}},$$

which is equivalent to the correlation between the entrance pupil P(f) = P_T(f) + P_R(f) and the pupil on the transmission side P_T(f) for the pump wave, that is

$$\mathrm{OTF}(\mathbf{f}) = \frac{\int P(\mathbf{f}')\, P_T^*(\mathbf{f}'-\mathbf{f})\, d\mathbf{f}'}{\int |P_T(\mathbf{f}')|^2\, d\mathbf{f}'}.$$
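Since the OTF is the correlation of the entrance pupil P = P_T + P_R with the transmission-side pupil P_T, normalized by the integral of |P_T|², it can be evaluated numerically with FFTs. The sketch below is a minimal illustration on a coarse grid; the grid size, shell radius, shell thickness, and NA ratio are hypothetical parameters, not values from this paper.

```python
import numpy as np

def pupil_caps(n=64, radius=0.35, na_ratio=0.8, thickness=0.03):
    """Binary partial-sphere pupils P_T, P_R on an n^3 frequency grid.

    na_ratio = NA/n0 sets the half-angle of each spherical cap; all numbers
    here are illustrative."""
    f = np.fft.fftfreq(n)                    # frequency axis, cycles per sample
    fx, fy, fz = np.meshgrid(f, f, f, indexing="ij")
    r = np.sqrt(fx**2 + fy**2 + fz**2)
    on_shell = np.abs(r - radius) < thickness
    cos_min = np.sqrt(1.0 - na_ratio**2)     # cap edge set by the NA cone
    p_t = (on_shell & (fz > cos_min * radius)).astype(float)   # transmission cap
    p_r = (on_shell & (fz < -cos_min * radius)).astype(float)  # reflection cap
    return p_t, p_r

def otf(p_t, p_r):
    """OTF(f) as the correlation of P = P_T + P_R with P_T, computed via FFTs
    and normalized by the integral of |P_T|^2."""
    p = p_t + p_r
    corr = np.fft.ifftn(np.fft.fftn(p) * np.conj(np.fft.fftn(p_t))).real
    return corr / p_t.sum()
```

With these binary caps the OTF equals 1 at the zero-frequency origin, and the contributions of the transmitted and reflected components occupy separate regions of frequency space, as in Fig. 7.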

Figure 7 shows the OTF for NA = 1.2 with the primary objective in water. The OTF has rotational symmetry about the f_z axis, where f_z is the spatial frequency in the direction of the optical axis; a cross section containing the f_z axis is shown in the figure. Note that spatial frequencies along the optical axis cannot be resolved for either the transmitted or the reflected component, since the scattered wave propagating in the vicinity of the 0-order light is intercepted by the micromirror, as in conventional phase contrast microscopy. The depth resolution is provided by the reflected component, and the part of the OTF corresponding to the reflected component lies in the region known as the missing cone. Although a gap exists between the two portions of the OTF corresponding to the transmitted and reflected components, and spatial frequencies in the gap cannot be resolved, the gap can be reduced by using higher-NA objectives.

Fig. 7. Optical transfer function in the case where the primary objective with NA = 1.2 in water is used. The OTF has rotational symmetry about the f_z axis.

If the amplitude of the object is a real-valued function O(x) = O*(x), which holds for a 3-D phase object with a low contrast refractive index distribution, Eq. (32) becomes

$$I_I(\mathbf{x}') = (1-a)^2 \int |U_T(\mathbf{x}'-\mathbf{x})|^2\, d\mathbf{x} + (1-a)\int \varepsilon_0\, o(\mathbf{x})\{2|U_T(\mathbf{x}'-\mathbf{x})|^2+U_R^2(\mathbf{x}'-\mathbf{x})+U_T^2(\mathbf{x}'-\mathbf{x})\}\, d\mathbf{x}$$
$$= (1-a)\int \{(1-a)+2\varepsilon_0\, o(\mathbf{x})\}\, \mathrm{PSF}(\mathbf{x}'-\mathbf{x})/2\, d\mathbf{x},$$

where

$$\mathrm{PSF}(\mathbf{x}'-\mathbf{x}) = 2|U_T(\mathbf{x}'-\mathbf{x})|^2 + U_R^2(\mathbf{x}'-\mathbf{x}) + U_T^2(\mathbf{x}'-\mathbf{x})$$
$$= |U(\mathbf{x}'-\mathbf{x})|^2$$

is the 3-D point spread function. Interestingly, the intensity of the low contrast object |O(x)|² = 1 + 2ε_0 o(x) is effectively converted into |O_a(x)|² = (1 − a) + 2ε_0 o(x), which means that the initial contrast of the object, 2ε_0, is enhanced to 2ε_0/(1 − a) by the attenuation of the 0-order light. Finally, a simple equation for object reconstruction is obtained,

$$I_I(\mathbf{x}') = (1-a)\left\{\int |O_a(\mathbf{x})|^2\, \mathrm{PSF}(\mathbf{x}'-\mathbf{x})/2\, d\mathbf{x}\right\}.$$

The irradiance of the image reconstructed through this algorithm is represented by the convolution of the object intensity, which has the enhanced contrast, with the 3-D point spread function. This method requires a relatively long computing time, because the 3-D Fourier transform is calculated in the computer for every incident direction of the pump wave.
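The contrast enhancement by 0-order attenuation described above can be checked with a few lines of arithmetic; the values of ε_0 and a below are arbitrary example values.

```python
# Contrast enhancement by attenuating the 0-order light: the object intensity
# |O(x)|^2 = 1 + 2*eps0*o(x) is effectively converted into
# |O_a(x)|^2 = (1 - a) + 2*eps0*o(x), so the modulation 2*eps0 relative to
# the background grows to 2*eps0 / (1 - a).
eps0 = 0.01                       # low contrast object, eps0 << 1 (example)
for a in (0.0, 0.9, 0.99):        # fraction of the 0-order amplitude removed
    print(a, 2 * eps0 / (1 - a))  # effective contrast after attenuation
```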

In the second method, the irradiance of the image is given by

$$I_{II}(\mathbf{x}') = \left| \int \left\{ \int F(\mathbf{f},\mathbf{f}_0)\, d\mathbf{f}_0 \right\} e^{i2\pi \mathbf{f}\cdot\mathbf{x}'}\, d\mathbf{f} \right|^2$$
$$= \left| \int \left[ \int \left\{ \int O(\mathbf{x})\, e^{-i2\pi \mathbf{f}\cdot\mathbf{x}}\, d\mathbf{x} - a\,\delta(\mathbf{f}) \right\} \{P_T(\mathbf{f}+\mathbf{f}_0)+P_R(\mathbf{f}+\mathbf{f}_0)\}\, P_T^*(\mathbf{f}_0)\, d\mathbf{f}_0 \right] e^{i2\pi \mathbf{f}\cdot\mathbf{x}'}\, d\mathbf{f} \right|^2$$
$$= \left| \int O(\mathbf{x})\, U_R(\mathbf{x}'-\mathbf{x})\{U_T(\mathbf{x}'-\mathbf{x})+U_R(\mathbf{x}'-\mathbf{x})\}\, d\mathbf{x} - a\int |P_T(\mathbf{f}_0)|^2\, d\mathbf{f}_0 \right|^2$$
$$= \left| \int O(\mathbf{x})\{|U_T(\mathbf{x}'-\mathbf{x})|^2+U_R^2(\mathbf{x}'-\mathbf{x})\}\, d\mathbf{x} - a\int |U_T(\mathbf{x}'-\mathbf{x})|^2\, d\mathbf{x} \right|^2,$$

where Parseval’s theorem is used. If the sample is a low contrast object, substituting O(x) = 1 + ε_0 o(x) into Eq. (38) yields

$$I_{II}(\mathbf{x}') = \left| \int \{1+\varepsilon_0 o(\mathbf{x})\}\{|U_T(\mathbf{x}'-\mathbf{x})|^2+U_R^2(\mathbf{x}'-\mathbf{x})\}\, d\mathbf{x} - a\int |U_T(\mathbf{x}'-\mathbf{x})|^2\, d\mathbf{x} \right|^2$$
$$= A(1-a)\Big[(1-a)\int |U_T(\mathbf{x}'-\mathbf{x})|^2\, d\mathbf{x}$$
$$+ \int \varepsilon_0\, o(\mathbf{x})\{|U_T(\mathbf{x}'-\mathbf{x})|^2+U_R^2(\mathbf{x}'-\mathbf{x})\}\, d\mathbf{x}$$
$$+ \int \varepsilon_0^*\, o^*(\mathbf{x})\{|U_T(\mathbf{x}'-\mathbf{x})|^2+U_T^2(\mathbf{x}'-\mathbf{x})\}\, d\mathbf{x}\Big]$$
$$= A\, I_I(\mathbf{x}'),$$

where A = ∫|U_T(x′−x)|² dx and the second-order terms of ε_0 are ignored. The image irradiance of the object reconstructed through the second method is proportional to that of the first algorithm, as long as the sample is a low contrast object. The second method reduces the computing time, because it requires only one 3-D Fourier transform, in the final stage. The images of the low contrast object reconstructed by the two methods thus show identical optical features.

The image can be deconvoluted with the OTF in the same way as a conventional optical system. In the first method, each shifted partial sphere is divided by the OTF before it is Fourier transformed. In the second method, the deconvolution is achieved by dividing the total amplitude in the frequency domain by the OTF before taking the square of the modulus. Both methods are phase error free in the transmitted component, so low-frequency objects can be resolved with almost no error in reconstruction. While the relative phase shifts between different reflected holograms can be minimized by analyzing the phase in the overlap areas, a slight dc phase difference between the transmitted and reflected components remains because of the looped part.
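As a practical note, dividing by the OTF is ill-conditioned wherever the OTF is close to zero, for instance in the gap between the transmitted and reflected passbands. A common remedy, not specified in the text, is a regularized (Wiener-style) inverse; the sketch below illustrates this for the second method, with a hypothetical function name and regularization constant.

```python
import numpy as np

def deconvolve_total_amplitude(total_amplitude, otf, eps=1e-3):
    """Second-method deconvolution sketch: divide the coherently summed
    frequency-domain amplitude by the OTF before the final transform and the
    squared modulus. The small constant eps is a Wiener-style regularization,
    added here to avoid amplifying noise where the OTF is nearly zero; it is
    not part of the original prescription."""
    inverse_filter = np.conj(otf) / (np.abs(otf) ** 2 + eps)
    corrected = total_amplitude * inverse_filter
    return np.abs(np.fft.ifftn(corrected)) ** 2
```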

6. Error estimation of object reconstruction

In the looped part, the optical path lengths between the centers of the sample holder and PBS1 on the transmission and reflection sides are designed to be equivalent. The optical path lengths for the transmitted and reflected diffraction waves between the micromirror and the digital camera are also designed to be identical. As mentioned above, any phase errors due to changes in the relative optical length among the three paths of the transmitted diffraction wave, the reflected diffraction wave, and the reference wave between the micromirror and the digital camera are corrected in the computer. However, another phase error remains for the reflected component, due to the change in relative optical length between the transmission and reflection sides in the looped part. Even if an extremely steady and stable mechanical design is achieved for this part, changes in the optical path lengths could occur during scanning, and every hologram could have a slightly different dc phase component.

Error in object reconstruction is now evaluated, assuming that the phase errors of the holograms form a Gaussian-distributed random pattern with (mean, standard deviation) = (0, σ). The root mean square (RMS) error is given by

$$\mathrm{RMS}(\sigma) = \sqrt{\frac{\sum_{i=1}^{N}\left(I_i^{\sigma}-I_i^{0}\right)^2}{N}},$$

where N represents the number of elements in the 3-D matrix of the image, I_i^σ denotes the value of the i-th element of the image with phase error σ, and I_i^0 is the value of the i-th element of the image with no error. Figure 8 shows an example of the calculated normalized RMS for a cubic object 3.0 μm in size. The RMS is evaluated for σ between zero and 2π × 0.65. Each data point in Fig. 8 is an average over five hundred images, with a different Gaussian white noise distribution used for each image. The calculation is performed on deconvoluted images. The object is well reconstructed as long as the standard deviation of the error is less than 0.2λ, which can be achieved with an ordinary mechanical design. Since the phase error affects only the reflected component and the part of the OTF corresponding to the reflected component is located apart from the origin, objects composed of only low spatial frequencies can be resolved almost perfectly. Different object distributions were also calculated; it turns out that the configuration of the object does not strongly affect the RMS, and all results show a similar RMS-versus-σ relationship.
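The Monte Carlo evaluation described above can be sketched as follows. The reconstruction model is left abstract: `reconstruct` is a hypothetical user-supplied function mapping the per-hologram dc phase errors to a reconstructed image, and the normalization of the RMS is an assumption, since the text does not state how the RMS in Fig. 8 is normalized.

```python
import numpy as np

rng = np.random.default_rng(0)

def rms_error(reconstruct, n_holograms, sigma, n_trials=500):
    """Monte Carlo estimate of a normalized RMS reconstruction error.

    reconstruct(phase_errors) -> 3-D image array, where phase_errors[k] is the
    dc phase error of hologram k (a hypothetical model of the system). The
    errors are drawn as Gaussian white noise with mean 0 and std sigma, and a
    fresh draw is used for each trial, as in the averaging behind Fig. 8."""
    i0 = reconstruct(np.zeros(n_holograms))        # error-free reference image
    norm = np.sqrt(np.mean(i0**2))                 # normalization (assumption)
    trials = []
    for _ in range(n_trials):
        errs = rng.normal(0.0, sigma, size=n_holograms)
        i_sigma = reconstruct(errs)
        trials.append(np.sqrt(np.mean((i_sigma - i0) ** 2)) / norm)
    return float(np.mean(trials))
```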

Fig. 8. Error in object reconstruction in the case of the test sample (a cube of 3.0 μm in size) where the wavelength of 850 nm is used.

7. Discussion

The setup of the system composed only of the transmitted component (T-type) is simple and free of phase error, and is therefore more practical. While this T-type microscopy already offers high performance, the entire system composed of the transmitted and reflected components (TR-type) yields a higher optical resolution, particularly in the depth direction. For 3-D specimens, as is common in biological studies, high 3-D resolution is often required to visualize the image at any viewing angle. T-type microscopy, however, remains adequate for 3-D specimens composed of low spatial frequencies.

In a practical experiment, a CCD (charge-coupled device) or CMOS (complementary metal-oxide-semiconductor) image sensor can be used as the digital camera. One of the most reasonable sizes N² for the digital camera is 1024 × 1024 pixels for each of the transmitted and reflected components. In this case, the size of the 3-D matrix (N/4)³ for the image, which is Fourier transformed three-dimensionally in the computer, is 256³ elements. The size of the reconstructed object in real space, D/4, is approximately 45 μm (the resolution λ/2NA is 354 nm) if a pump beam of 850 nm wavelength is used as the light source and NA = 1.2.

While confocal microscopy must process each axial section separately and sequentially to acquire a 3-D image, the holographic microscopy proposed here requires only two-dimensional scanning to reconstruct the 3-D specimen. The total time to acquire a 3-D image through this system depends on the number of scanning directions. The image acquisition time will be at least a couple of minutes with current detector and computer technology. If the size of the 3-D matrix for the image is 256³, 128² × π/4 scan directions are required for maximum resolution, where the coefficient π/4 reflects the circular pupil. However, the number of scanning directions can be reduced by balancing image quality against scanning time.
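The numbers quoted in the two paragraphs above (voxel resolution, reconstructed field size, and scan-direction count) can be verified with a few lines of arithmetic; the factor of 2 between the voxel pitch and the resolution λ/2NA is one plausible reading of the sampling, not a value stated explicitly in the text.

```python
import math

n_pixels = 1024                        # camera pixels per side, per component
n_matrix = n_pixels // 4               # 3-D image matrix is (N/4)^3 = 256^3
wavelength_nm = 850.0
na = 1.2

resolution_nm = wavelength_nm / (2 * na)           # lambda / 2NA, about 354 nm
field_um = n_matrix * resolution_nm / 2 / 1000.0   # D/4, about 45 um
n_scan = (n_matrix // 2) ** 2 * math.pi / 4        # 128^2 * pi/4 directions

print(resolution_nm, field_um, round(n_scan))
```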

Near-infrared light, for example at a wavelength of 850 nm, is used as the light source in this system because most biological specimens are transparent in that wavelength region. The specimen can then be considered a phase object that can be reconstructed through this system. Even if some absorbing materials exist in the specimen, an image can be obtained that contains information on both the absorbing part and the phase part. An image of only the phase part is provided by adding the phase shift of π/2 to the 0-order light, and an image of only the absorbing part is obtained separately by not adding the phase shift. While an absorbing object can be resolved by other microscopy techniques, such as transmission confocal microscopy, one of the advantages of this system is its ability to visualize 3-D phase objects. As mentioned above, the first Born approximation is assumed in the generation of the scattered wave through the interaction between the object and the pump wave, which implies that the specimen must be a low contrast object. If the specimen has a high contrast refractive index distribution, the object cannot be well reconstructed; this limits the range of specimens to which the system applies.

8. Conclusion

A novel 3-D holographic microscopy is proposed that reconstructs low contrast phase objects and converts them into high contrast images. Two algorithms for object reconstruction are given, and it is shown that the two methods yield identical optical features of the image if the object has a low contrast structure. The second algorithm is more useful because of its shorter computing time. The mechanical tolerance is also evaluated, which indicates the upper limit of the allowable phase error during scanning. The whole system, including the mechanical design, can be built with current technology. Further advances in two-dimensional detectors and computing power will improve the image quality and reduce the computing time.

References and links

1. T. Wilson, Confocal Microscopy (Academic Press, 1990).

2. W. B. Amos, J. G. White, and M. Fordham, “Use of confocal imaging in the study of biological structures,” Appl. Opt. 26, 3239 (1987).

3. G. J. Brakenhoff, H. T. M. van der Voort, E. A. van Spronsen, and N. Nanninga, “3-Dimensional imaging of biological structures by high resolution confocal scanning laser microscopy,” Scanning Microsc. 2, 33 (1988).

4. I. Freund and M. Deutsch, “2nd-harmonic microscopy of biological tissue,” Opt. Lett. 11, 94 (1986).

5. P. J. Campagnola, H. A. Clark, W. A. Mohler, A. Lewis, and L. M. Loew, “Second-harmonic imaging microscopy of living cells,” J. Biomed. Opt. 6, 277 (2001).

6. J. Mertz and L. Moreaux, “Second-harmonic generation by focused excitation of inhomogeneously distributed scatterers,” Opt. Commun. 196, 25 (2001).

7. Y. Barad, H. Eisenberg, M. Horowitz, and Y. Silberberg, “Nonlinear scanning laser microscopy by third-harmonic generation,” Appl. Phys. Lett. 70, 922 (1997).

8. M. Muller, J. Squier, K. R. Wilson, and G. J. Brakenhoff, “3D microscopy of transparent objects using third-harmonic generation,” J. Microsc. 191, 266 (1998).

9. M. D. Duncan, J. Reintjes, and T. J. Manuccia, “Scanning coherent anti-Stokes Raman microscope,” Opt. Lett. 7, 350 (1982).

10. A. Zumbusch, G. R. Holtom, and X. S. Xie, “Vibrational microscopy using coherent anti-Stokes Raman scattering,” Phys. Rev. Lett. 82, 4014 (1999).

11. F. Zernike, “Das Phasenkontrastverfahren bei der mikroskopischen Beobachtung,” Z. Tech. Phys. 16, 454 (1935).

12. F. Zernike, “How I discovered phase contrast,” Science 121, 345 (1955).

13. W. S. Haddad, D. Cullen, J. C. Solem, J. W. Longworth, A. McPherson, K. Boyer, and C. K. Rhodes, “Fourier-transform holographic microscope,” Appl. Opt. 31, 4973 (1992).

14. U. Schnars and W. Jüptner, “Direct recording of holograms by a CCD target and numerical reconstruction,” Appl. Opt. 33, 179 (1994).

15. J. H. Massig, “Digital off-axis holography with a synthetic aperture,” Opt. Lett. 27, 2179 (2002).

16. S. Kostianovski, S. G. Lipson, and E. N. Ribak, “Interference microscopy and Fourier fringe analysis applied to measuring the spatial refractive-index distribution,” Appl. Opt. 32, 4744 (1993).

17. D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and J. G. Fujimoto, “Optical coherence tomography,” Science 254, 1178 (1991).

18. T. Dresel, G. Hausler, and H. Venzke, “Three-dimensional sensing of rough surfaces by coherence radar,” Appl. Opt. 31, 919 (1992).

19. M. Mansuripur, Classical Optics and its Applications (Cambridge University Press, 2002).

20. M. Born and E. Wolf, Principles of Optics, 5th ed. (Pergamon Press, 1974).

21. H. Kogelnik, “Coupled wave theory for thick hologram gratings,” Bell Syst. Tech. J. 48, 2909 (1969).


Figures

Fig. 1. Schematic of the setup of the 3-D phase contrast holographic microscopy. BS: Beam splitter. λ/2: Half wave plate. λ/4: Quarter wave plate. PBS: Polarized beam splitter.

Fig. 2. Description of the position of the 0-order light and the circular area of the scattered wave on the observation plane. The circular area shifts during the scanning of the pump direction. On the other hand, the arrival position of the 0-order light is fixed at the center.

Fig. 3. Schematic of the sample holder and the mechanical aperture between the two Fourier lenses facing each other. The Fourier lens satisfies the condition h = F sin θ.

Fig. 4. Description of the digital hologram and the generation process of a single “twin partial sphere” in the 3-D matrix. The circular parts of the complex amplitude generated by the digital hologram are projected onto the partial spheres in the frequency space.

Fig. 5. Positional relation among the partial spheres corresponding to each pump direction.
