
Phase imaging of cells by simultaneous dual-wavelength reflection digital holography


Abstract

We present a phase-imaging technique to quantitatively study the three-dimensional structure of cells. The method, based on simultaneous dual-wavelength digital holography, extends the axial range over which unambiguous phase imaging can be performed. The technique is capable of nanometer axial resolution. The noise level, which increases as a result of using two wavelengths, is then reduced to that of a single wavelength. The method compares favorably to software unwrapping, as it does not produce non-existent phase steps. The curvature mismatch between the reference and object beams is numerically compensated. 3D images of SKOV-3 ovarian cancer cells are presented.

©2008 Optical Society of America

1. Introduction

In holography, the interference between the coherent object and reference waves produces a holographic recording, which contains information about not only the intensity of light (amplitude signal), but also its phase. Conventional holography uses a photographic plate to record the interference pattern; the hologram is then developed by photochemical processes. If this hologram is illuminated with the original reference wave, the light diffracts and propagates in such a way that the original optical field is reproduced. Since the holographic image retains not just the amplitude but also the phase of the original optical field, it is an exact 3D replica of the original object.

Since the conventional process of holographic recording on photographic plates is rather complicated and time-consuming, the emphasis has recently been shifting towards digital holography [1]. In digital holography, the hologram is sampled by a high-resolution CCD array [2–4] and the detected light intensity profile is transferred to a computer as an array of numbers. The propagation of the optical field, which is completely and accurately described by diffraction theory [5], is performed via numerical reconstruction of the image as another array of complex numbers representing the amplitude and phase of the optical field. Furthermore, in addition to rapid image acquisition and access to quantitative amplitude and phase information, various image processing techniques can be applied to the complex field, which is not possible in real-space holography. Previously, numerical reconstruction has been performed using the Fresnel transform, Huygens convolution, and angular spectrum methods [6–8].

Digital holography has been utilized for microscopic image formation. Examples include imaging of microstructures and biological systems [9–11]. Since high-magnification microscopic images have a small depth of focus, the possibility to numerically focus a holographic image (which can be done from just a single hologram) makes digital holographic microscopy especially important [12, 13]. Also, direct access to both the amplitude and the phase information allows for the numerical correction of aberrations, such as curvature and anamorphism [14]. As the light wave propagates through, or reflects from, a microscopic object, its phase changes, and these phase changes can be converted into intensity variations. The phase change indicates the change in the optical path length, which can then be converted to physical thickness, providing the sample height information. This property of holograms offers a phase-contrast technique, which can be used for quantitative 3D imaging.

Many microscopic biological specimens, such as cells and their intracellular constituents, are mostly transparent, and therefore are problematic for conventional bright-field microscopy. A number of techniques are known to qualitatively convert phase changes into observable amplitude variations. For example, Zernike phase contrast (ZPC) microscopy uses a spatial filter and a phase plate to translate phase into intensity modulation. Differential interference contrast (DIC) microscopy (also known as Nomarski interference contrast) uses two polarized light beams, which take slightly different paths through the microscopic sample. As their optical path lengths differ, the beams interfere when they are recombined, creating shadow effects that give the appearance of a three-dimensional image. However, neither ZPC nor DIC can be easily utilized to extract quantitative phase information. On the other hand, by giving direct access to the quantitative phase information, digital holographic microscopy offers a way to map the phase image of an object and convert this phase map into an optical thickness profile.

However, the phase imaging of objects whose optical thickness variation is greater than the wavelength of light is ambiguous. Once the phase change exceeds 2π, the phase wraps and the image suffers a discontinuity. Such phases therefore need to be unwrapped using, for instance, a software algorithm that looks for jumps in the phase image and shifts them up or down depending on the surrounding pixels. The usual software algorithms are computationally demanding and cannot correctly process complex phase topologies. We have previously introduced a dual-wavelength phase-imaging technique that removes the 2π discontinuities by generating two phase maps at two different wavelengths and comparing them. The method then takes one of the original single-wavelength phase images and uses the dual-wavelength image as a guide to unwrap it [15]. This keeps the overall noise level low compared to the dual-wavelength phase image.

Here, we present a phase-imaging technique to quantitatively study the three-dimensional structure of cells. A similar study has previously been performed using a single wavelength and software phase unwrapping [16]. We have obtained 3D images of SKOV-3 ovarian cancer cells with diffraction-limited lateral resolution and axial resolution on the order of 5 nm. The cells display intracellular features with sufficient clarity to measure the thickness of the cell's lamellipodium and observe the features of its nucleus. Our dual-wavelength method allows faster imaging, as the only time constraint is the speed at which the two angular spectra for both wavelengths are calculated. Moreover, in certain cases, a software unwrapping algorithm can mistakenly identify low-intensity areas as multiple phase steps, producing nonexistent height features. This problem is not present in dual-wavelength optical unwrapping, as it does not rely on surrounding pixels to correct the phase discontinuities, but simply compares the two single-wavelength phase images, taken simultaneously. In addition, we present a simple way of correcting the curvature mismatch between the reference and object beams, based on a phase correction within the angular spectrum algorithm.

2. Experimental apparatus

Figure 1 shows the experimental apparatus. It is based on two overlapping Michelson interferometers (one for each wavelength), which enables us to adjust the location of the first-order components produced by each wavelength in Fourier space (see below). The idea is similar to the setups based on the modified Mach-Zehnder configuration used previously [17–21], and to the setups based on the Michelson interferometer [22–26]. A He-Ne laser (λ1=633 nm) and a diode-pumped solid-state laser (λ2=532 nm) were used as coherent light sources. Both beams are attenuated by neutral density filters (ND) and then passed through the microscope objectives (OBJ11/OBJ12) which, together with the apertures A and collimating lenses L11/L12, produce plane waves. Their intensity is further controlled by the polarizing filters P1 and P2. Beam splitters BS1 and BS2 divide the beams into the reference and the object arms. Two separate reference arms are used to fine-tune the location of the first-order diffraction peaks and separate them in the Fourier domain. Lenses L21 and L22 and the 20x microscope objective OBJ1 again collimate the beams in the object arm. The wave fronts in both reference arms remain spherical, and the resulting curvature mismatch is digitally removed. An interference filter is placed into the reference arm of the diode-pumped solid-state (λ=532 nm) laser. It is designed to pass only this wavelength and to block the back-reflection of the other laser. The interference pattern between the reflected reference waves and the object wave is recorded by the CCD camera. A relative angle can be introduced between the object beam and each of the two reference beams by slightly tilting the reference-arm mirrors. By introducing different tilts in two orthogonal directions for the two reference beams, we can separate each spectral component in Fourier space (see Fig. 2(b) below), which allows us to capture both wavelengths simultaneously.

Fig. 1. Multi-wavelength digital holography apparatus. The focal lengths of lenses L21 and L22 are 17.5 cm and 10 cm, respectively. The beams are collimated between L11 and L21 and between L12 and L22, and are again collimated after the 20x microscope objective OBJ1.

3. Multi-wavelength digital holographic phase imaging

3.1 Angular spectrum method

Once a hologram has been acquired, it is reconstructed by numerically propagating the optical field along the direction perpendicular to the hologram plane (z-direction) in accordance with the laws of diffraction. In the case of Fraunhofer diffraction, the Fresnel-Kirchhoff integral can be expressed as [27]:

A_0(k_x,k_y;0) = \iint E_0(x,y;0)\,\exp[-i(k_x x + k_y y)]\,dx\,dy \qquad (1)

where kx and ky are the spatial frequencies corresponding to x and y, respectively. E0(x,y;z=0) is the intensity distribution recorded by the CCD camera. This is the expression for the Fourier transform, and A0(kx,ky;0) is the angular spectrum of the optical field E0(x,y;z=0) at the hologram plane z=0. The object’s angular spectrum consists of a zero-order and a pair of first-order terms. One of the first-order terms is the angular spectrum of the object field and the other is its phase-inverted version. Figure 2(a) shows the hologram of a USAF resolution target recorded by our dual-wavelength experimental setup. The two crossing interference fringe patterns, formed by the two wavelengths, can be clearly seen. Figure 2(b) presents the Fourier spectrum with the two pairs of first-order components, corresponding to the two wavelengths, clearly visible.

Fig. 2. Two-wavelength hologram of a USAF resolution target: (a) digital hologram (640×480 pixels) and (b) the Fourier spectrum of the hologram, with the first-order components of the red and green wavelengths shown.

The field E0(x,y;z=0) can be regarded as a superposition of many plane waves propagating in different directions in space, with the complex amplitude of each component equal to A0(kx,ky;0). The angular spectrum can then be propagated in space along the z-axis:

A(k_x,k_y;z) = A_0(k_x,k_y;0)\,\exp[i k_z z] \qquad (2)

where exp[ikz z] is the complex transfer function, kz = √(k² − kx² − ky²), and k = 2π/λ. Here, there is no requirement for z to be larger than a certain minimum value, as in the case of the Fresnel transform or Huygens convolution. The complex wave field at an arbitrary z can be obtained by performing the inverse Fourier transform:

E(x,y;z) = \iint A(k_x,k_y;z)\,\exp[i(k_x x + k_y y)]\,dk_x\,dk_y \qquad (3)

Since both integrals in Eq. (1) and Eq. (3) are computed via the FFT algorithm, the angular spectrum method is well suited for real-time imaging.
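As an illustration, the following is a minimal numerical sketch (in Python/NumPy; our own paraphrase, not the authors' code) of the angular spectrum reconstruction described by Eqs. (1)–(3): the hologram is Fourier transformed, one first-order component is isolated with a simple circular bandpass mask and re-centered to remove the carrier tilt, and the spectrum is propagated with the transfer function exp[ikz z] before the inverse transform. The crop center and radius are illustrative parameters that would be chosen from the measured spectrum of Fig. 2(b).

```python
import numpy as np

def angular_spectrum_reconstruct(hologram, wavelength, dx, z, center, radius):
    """Reconstruct the complex field at distance z from an off-axis hologram.

    hologram : 2D real array (CCD intensity), pixel pitch dx (same units as wavelength)
    center   : (row, col) of the chosen first-order peak in the shifted FFT (illustrative)
    radius   : radius of the circular bandpass mask around that peak (illustrative)
    """
    ny, nx = hologram.shape
    k = 2 * np.pi / wavelength

    # Eq. (1): angular spectrum of the recorded hologram
    A0 = np.fft.fftshift(np.fft.fft2(hologram))

    # Isolate one first-order component with a circular mask (Fourier filtering)
    rows, cols = np.indices((ny, nx))
    mask = (rows - center[0])**2 + (cols - center[1])**2 <= radius**2
    A0_filtered = np.where(mask, A0, 0)
    # Re-center the selected component so the off-axis carrier tilt is removed
    A0_filtered = np.roll(A0_filtered,
                          (ny // 2 - center[0], nx // 2 - center[1]), axis=(0, 1))

    # Spatial frequency grids kx, ky on the CCD sampling lattice
    kx = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(nx, d=dx))
    ky = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(ny, d=dx))
    KX, KY = np.meshgrid(kx, ky)

    # Eq. (2): propagate by the transfer function exp[i kz z]
    # (evanescent components are simply clipped in this sketch)
    kz = np.sqrt(np.maximum(k**2 - KX**2 - KY**2, 0.0).astype(complex))
    A_z = A0_filtered * np.exp(1j * kz * z)

    # Eq. (3): inverse transform gives the complex field; the phase map is np.angle(field)
    return np.fft.ifft2(np.fft.ifftshift(A_z))
```

Only two FFTs per wavelength are needed, which is what makes single-frame, real-time dual-wavelength imaging practical.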

3.2 Curvature correction

The angular spectrum method described above is based on the premise that the reference and object waves are both plane waves. However, in the real setup, each wave has its wavefront curvature, resulting in a curvature mismatch. Consider the complex field captured by a CCD camera (see Fig. 3).

Fig. 3. R is the wave’s radius of curvature centered at C, which can be determined experimentally for a given setup; r̄ is the vector from the center of the CCD matrix (point O) to an arbitrary point A, and r̄0 is the vector from the center of the CCD matrix to P, the projection of the center of curvature onto the CCD matrix. Here |r̄| = √(x² + y²), where x and y are the coordinates of A, and |r̄0| = √(x0² + y0²), where x0 and y0 are the coordinates of P.

The phase mismatch can be compensated numerically by multiplying the original “flat” field E0(x,y;z=0) by the phase factor exp[iϕ], where ϕ = kd is the phase difference between A and O. Here k = 2π/λ, where λ is the wavelength of light and d is the optical path difference:

d = CA - CO = \sqrt{CP^2 + PA^2} - \sqrt{CP^2 + PO^2} \qquad (4)

From geometry:

d = \sqrt{R^2 + |\bar{r} - \bar{r}_0|^2} - \sqrt{R^2 + |\bar{r}_0|^2} = \sqrt{R^2 + (x - x_0)^2 + (y - y_0)^2} - \sqrt{R^2 + x_0^2 + y_0^2} \qquad (5)

The difference can be positive or negative, depending on the angle of the curvature we are compensating. Finally,

E(x,y;0) = E_0(x,y;0)\,\exp\!\left[\pm ik\left(\sqrt{R^2 + (x - x_0)^2 + (y - y_0)^2} - \sqrt{R^2 + x_0^2 + y_0^2}\right)\right] \qquad (6)

which is the exact expression for the curvature-corrected field. This expression agrees with the approximation from Ref. [28] in the case R ≫ r and r̄0 → 0:

k\left(\sqrt{R^2 + r^2} - R\right) = kR\left[\sqrt{1 + \frac{r^2}{R^2}} - 1\right] \approx \frac{2\pi R}{\lambda}\left[1 + \frac{r^2}{2R^2} - 1\right] = \frac{2\pi}{\lambda}\,\frac{x^2 + y^2}{2R} \qquad (7)

It is worth noting that Eq. (7) is a known expression for Newton’s rings, which means that if the object is a plane mirror, the resulting interference pattern would be a set of concentric rings with dark fringes of radius √(mRλ), where m = 0, 1, 2, …. Therefore, for a wavelength of 532 nm and R = 3 cm, the radius of the first fringe is 126 μm and a total of 3 fringes are visible in the 174 μm frame (see Fig. 4 below). If the field of view is increased, more fringes become visible and at some point aliasing may occur. One can use this formula as an analytical expression to avoid fringe aliasing. For example, for the parameters above, in order for the fringes to alias (fewer than 2 pixels per fringe), one would have to have a field of view large enough for over 100 fringes.
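A short numerical check (Python; our own back-of-the-envelope sketch, with the frame size and pixel count taken from Fig. 4 and the fringe-spacing criterion being our rough reading of "2 pixels per fringe"):

```python
import numpy as np

wavelength = 532e-9          # m
R = 0.03                     # m, radius of curvature
frame = 174e-6               # m, field of view
pixels = 450
pixel = frame / pixels       # m, pixel size in the image plane

# Dark-fringe radii r_m = sqrt(m * R * lambda); the first one is ~126 um
m = np.arange(1, 5)
print(np.sqrt(m * R * wavelength) * 1e6)

# Near radius r the local fringe spacing is approximately R*lambda / (2*r);
# a rough anti-aliasing condition is to keep this spacing above ~2 pixels
r_alias = R * wavelength / (4 * pixel)
print("fringes begin to alias beyond r =", r_alias * 1e6, "um")
```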

If the parameters are chosen correctly, even a substantial curvature mismatch can be compensated. Figure 4 shows the phase image of the USAF resolution target covered with a layer of aluminum to make it entirely reflective. The pattern on the resolution target is elevated approximately 100 nm above the flat background. Figure 4(a) shows the reconstructed image before the curvature correction. Figure 4(b) is the same image after the curvature correction was applied, and the curvature mismatch completely compensated.
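The correction itself is a single element-wise multiplication. Below is a minimal sketch (Python/NumPy; not the authors' code) of Eqs. (5) and (6), where R, x0 and y0 are calibration parameters determined experimentally for a given setup, and the sign is chosen according to which beam carries the excess curvature:

```python
import numpy as np

def curvature_correct(field, wavelength, dx, R, x0=0.0, y0=0.0, sign=+1):
    """Multiply the reconstructed field by the spherical phase factor of Eq. (6).

    field  : complex 2D array E0(x, y; 0)
    dx     : pixel size in the hologram plane (same units as wavelength and R)
    R      : radius of curvature of the mismatched wavefront (calibrated)
    x0, y0 : projection of the center of curvature onto the CCD (calibrated)
    sign   : +1 or -1, depending on the sense of the curvature being compensated
    """
    ny, nx = field.shape
    k = 2 * np.pi / wavelength
    x = (np.arange(nx) - nx / 2) * dx
    y = (np.arange(ny) - ny / 2) * dx
    X, Y = np.meshgrid(x, y)

    # Optical path difference d of Eq. (5)
    d = np.sqrt(R**2 + (X - x0)**2 + (Y - y0)**2) - np.sqrt(R**2 + x0**2 + y0**2)

    # Eq. (6): apply the compensating phase
    return field * np.exp(1j * sign * k * d)
```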

Fig. 4. The reconstructed phase image of the USAF resolution target (a) without curvature correction and (b) with curvature correction applied. The images are 174×174 μm2 (450×450 pixels).

3.3 Multi-wavelength phase imaging and optical thickness

If the object is reflective, like the resolution target in Fig. 4, its surface height profile h(x, y) is related to the phase map ϕ(x, y) of the holographic reconstruction at a given wavelength by

h(x,y) = \frac{\lambda}{4\pi}\,\phi(x,y) \qquad (8)

On the other hand, if the object is a mostly transparent cell on the reflective substrate, so that the light propagates through it, reflects from the substrate and propagates back, the physical thickness is

h(x,y) = \frac{\lambda\,\phi(x,y)}{4\pi\,(n - n_0)} \qquad (9)

where (n − n0) is the refractive index difference between the cell and air. Figure 5(a) shows the phase map of the aluminum-covered USAF resolution target. The step size in Fig. 5(a) is approximately 2.2 radians, which can be converted to height using Eq. (8). This is consistent with the AFM scan of the same area, shown in Fig. 5(b), with the step height equal to approximately 100 nm in both images.
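As a quick illustration of Eqs. (8) and (9) (Python; the refractive index values are the assumed constants discussed in Section 4, with air taken as n0 = 1):

```python
import numpy as np

def height_reflective(phase, wavelength):
    # Eq. (8): reflective surface, double pass
    return wavelength * phase / (4 * np.pi)

def height_transparent(phase, wavelength, n_cell=1.375, n_medium=1.0):
    # Eq. (9): transparent cell on a reflective substrate, double pass through the cell
    return wavelength * phase / (4 * np.pi * (n_cell - n_medium))

# Example: the ~2.2 rad step of Fig. 5(a) at 633 nm gives ~110 nm, consistent
# with the ~100 nm step measured by AFM.
print(height_reflective(2.2, 633e-9) * 1e9)
```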

Fig. 5. (a) Phase map and height profile for λ=633 nm. (b) For comparison, AFM image and height profile of the same area.

In the phase profile above, the height variation is less than the wavelength of light. The phase images of objects with variations in height greater than the wavelength are ambiguous and the phase maps exhibit discontinuities. If the simultaneous dual-wavelength phase imaging is performed, each phase map suffers a discontinuity every time the total phase change exceeds 2π, but since the two wavelengths are different, the discontinuities occur at different points of the image. It is possible to use this information to unwrap the phase by comparing the two maps.

The phase images produced by each wavelength are independently filtered in the Fourier domain and two phase maps are obtained. Figure 6 shows the phase images of the USAF resolution target imaged at a slight angle. The images produced with a single wavelength exhibit multiple phase steps [see Figs. 6(a), 6(b)]. By comparing the phase images from each wavelength, the 2π phase ambiguities can be resolved. The new phase map [see Fig. 6(c)] is equivalent to a phase map created with a synthetic beat wavelength:

\Lambda_{12} = \frac{\lambda_1 \lambda_2}{\lambda_1 - \lambda_2} \qquad (10)

For λ1=633 nm and λ2=532 nm, Λ12=3334 nm, which is large enough to remove the discontinuities seen in Figs. 6(a), 6(b).

The downside of this method is that the phase noise is amplified by the same factor as the range. However, one can then use this dual-wavelength “coarse” map as a guide, together with one of the original phase maps (ϕ1 or ϕ2), to produce a low-noise “fine” phase map. The method (detailed in Ref. [15]) takes one of the original single-wavelength (say λ1) phase images and corrects its phase jumps using the coarse map as a guide. If the noise in the coarse phase map is excessive, some of the single-wavelength segments might still end up vertically shifted from their correct position by λ1 [26], creating phase image artifacts.

Since the value of this shift is always λ1, these errors can then be corrected in software by simply looking for such jumps and shifting them up or down back to their proper place. In comparison to the coarse map, the noise in the resulting fine map [see Fig. 6(d)] is much lower, while the axial range is still the same. Indeed, while the rms noise in the flat area of the resolution target is about 40 nm for the coarse map, for the fine map it is almost the same as for the single wavelength (both on the order of 6 nm).
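The following sketch (Python/NumPy; our paraphrase of the coarse/fine procedure of Ref. [15], expressed in terms of optical path length, not the authors' code) illustrates the idea: the two wrapped phase maps are combined into a coarse map with range Λ12, which then guides the integer-multiple correction of one single-wavelength map.

```python
import numpy as np

def dual_wavelength_unwrap(phi1, phi2, lam1=633e-9, lam2=532e-9):
    """Coarse/fine dual-wavelength unwrapping (a sketch of the method of Ref. [15]).

    phi1, phi2 : wrapped phase maps (radians) measured at lam1 and lam2.
    Returns the coarse and fine optical path length (OPL) maps; for a reflective
    surface the height is half of the OPL, cf. Eq. (8).
    """
    beat = lam1 * lam2 / abs(lam1 - lam2)        # Eq. (10), ~3334 nm here

    # Coarse map: the phase difference wraps only once per beat wavelength.
    # The subtraction order assumes lam2 < lam1, so phi2 grows faster with OPL.
    phi12 = np.mod(phi2 - phi1, 2 * np.pi)
    opl_coarse = beat * phi12 / (2 * np.pi)

    # Fine map: single-wavelength OPL (ambiguous modulo lam1), shifted by the
    # integer number of lam1 that brings it closest to the coarse map
    opl1 = lam1 * np.mod(phi1, 2 * np.pi) / (2 * np.pi)
    opl_fine = opl1 + lam1 * np.round((opl_coarse - opl1) / lam1)

    # Residual segments offset by lam1 (caused by excessive coarse-map noise)
    # are corrected afterwards by comparison with neighboring pixels, as
    # described in the text.
    return opl_coarse, opl_fine
```

With λ1 = 633 nm and λ2 = 532 nm this reproduces Λ12 ≈ 3334 nm, while the fine map inherits the noise of the single-wavelength map rather than that of the coarse map.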

Fig. 6. Phase maps for (a) λ1=532 nm and (b) λ2=633 nm; (c) 3D rendering of the synthetic dual-wavelength phase map with beat wavelength Λ12=3334 nm and (d) reduced-noise fine map (the images are 174×174 μm2, 450×450 pixels, and the vertical scale for (a) and (b) is in radians).

4. Results

Here, we have applied the dual-wavelength phase imaging method to 3D imaging of SKOV-3 ovarian cancer cells. Figure 7 shows a confluent group of cells: Fig. 7(a) shows the intensity image, which is similar to what one can see with an ordinary microscope, Fig. 7(b) displays a single-wavelength wrapped phase image, and Fig. 7(c) shows the coarse dual-wavelength unwrapped phase image. Finally, Fig. 7(d) displays a 3D rendering of the final fine map, where we see the cells connecting together with grooves between them. The area at the bottom of the images is the exposed part of the gold substrate, to which the cells are bound. The measurements of the optical thickness of the cells can then be performed using Eq. (9). One also needs to assume a value for the cells’ refractive index, which we took to be 1.375. While it may not be possible to precisely determine the refractive index of the cell at each individual point, this number is always close to the refractive index of water and unlikely to deviate by more than a few percent.

Fig. 7. Confluent SKOV-3 ovarian cancer cells: (a) amplitude image, (b) reconstructed phase for λ=532 nm, (c) dual-wavelength coarse phase image and (d) 3D rendering of fine map. All images are 92×92 μm2 (240×240 pixels).

Figure 8 shows the image of a single SKOV-3 cell, where the cell’s nucleus and pseudopodia are clearly seen. Once again, by using the phase-to-thickness conversion of Eq. (9), we can easily determine the 3D features of the cell. In addition to phase images for (a) a single wavelength, (b) the coarse map and (c) a 3D rendering of the fine map, Fig. 8 displays the line thickness profile, which indicates, for example, that the overall cell height is about 1.47 μm. A separate measurement indicates that the thickness of the cell’s pseudopodia (lamellipodia) is around 270 nm.

Finally, Fig. 9 shows a different confluent area of the same sample. Once again, the phase images generated using one wavelength clearly exhibit a number of 2π phase steps [see Fig. 9(b)], while the dual-wavelength unwrapped phase map in Fig. 9(c) shows a few spots where discontinuities are still present. These spots correspond to the lower-intensity areas on the sample where no interference fringes were obtained. As a result, the phase there is random noise, which gives rise to multiple 2π phase steps. The images in Fig. 9(d) and Fig. 9(e) show the results of optical and software unwrapping, respectively. The software phase unwrapping algorithm starts at a certain point of an image and moves along a one-dimensional path (e.g. a straight line or spiral). If it encounters what looks like a phase wrap (the next pixel is approximately 2π higher or lower than the previous one), it corrects the phase down or up. If the image has noisy areas, where the phase oscillates randomly, the software algorithm may take the noise for a real feature and create nonexistent steps in the phase profile. In the dual-wavelength technique, on the other hand, every step in the unwrapped map corresponds to a real feature of the sample. Notice that the software unwrapping algorithm erroneously created a phase step [upper right corner of Fig. 9(e)], which clearly does not correspond to the real thickness profile of the sample.
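For reference, here is a minimal sketch (Python; a generic illustration of path-following unwrapping, not the specific algorithm used for Fig. 9(e)) of the procedure described above. It is exactly the reliance on neighboring pixels, visible in the loop below, that lets noisy low-intensity regions inject spurious 2π steps:

```python
import numpy as np

def unwrap_along_path(wrapped):
    """Unwrap a 1D wrapped-phase array by comparing each pixel with the previous one."""
    unwrapped = np.array(wrapped, dtype=float)
    for i in range(1, unwrapped.size):
        step = unwrapped[i] - unwrapped[i - 1]
        # A jump larger than pi is assumed to be a wrap; in noisy regions this
        # assumption fails and fictitious phase steps are created.
        if step > np.pi:
            unwrapped[i:] -= 2 * np.pi
        elif step < -np.pi:
            unwrapped[i:] += 2 * np.pi
    return unwrapped

def unwrap_rows(phase_map):
    """Row-by-row 'straight line' unwrapping of a 2D phase map."""
    return np.array([unwrap_along_path(row) for row in phase_map])
```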

Fig. 8. A single SKOV-3 cell: (a) reconstructed phase for λ=633 nm, (b) dual-wavelength coarse phase image, (c) 3D rendering of fine map and (d) line thickness profile. All images are 63.5×59 μm2 (165×153 pixels).

Fig. 9. Comparison between optical and software unwrapping: (a) amplitude image, (b) single-wavelength phase image, (c) coarse map, (d) 3D rendering of the dual-wavelength fine phase map and (e) software-unwrapped phase map. Images are 98×98 μm2 (256×256 pixels).

5. Conclusion

We have demonstrated the application of digital holography to studying cells. The use of two wavelengths together with the fine-map algorithm allows us to increase the maximum height of the features that can be unambiguously imaged while keeping the noise down to single-wavelength levels of only a few nanometers. As a result, the accuracy and the level of detail of the dual-wavelength images of cells presented here are superior to what has been previously demonstrated. In comparison to software unwrapping, the dual-wavelength optical unwrapping method is advantageous, as it requires no intensive computation and can handle complex phase topologies. The proposed method of curvature correction is simple and effective enough that the experiment can be implemented without microscope objectives in the reference arms of the Michelson interferometer. This greatly simplifies the optical setup and makes the initial adjustment of the apparatus much easier. The simultaneous dual-wavelength setup, utilized together with the angular spectrum algorithm, provides an easy way to acquire single-frame images in real time, which can be used to study cell migration.

Acknowledgments

The authors wish to acknowledge Richard Everly of Nanomaterials & Nanomanufacturing Center at the University of South Florida for his work in making the aluminum coating for the USAF resolution target and Joshua Robinson at the Department of Physics at the University of South Florida for his help in imaging the target using AFM.

References and links

1. W. Jueptner and U. Schnars, Digital Holography, (Springer Verlag, 2004).

2. U. Schnars, “Direct phase determination in hologram interferometry with use of digitally recorded holograms,” J. Opt. Soc. Am. A 11, 2011–2015 (1994). [CrossRef]

3. U. Schnars and W. P. Jueptner, “Direct recording of holograms by a CCD target and numerical reconstruction,” Appl. Opt. 33, 179–181 (1994). [CrossRef]   [PubMed]  

4. U. Schnars and W. P. Jueptner, “Digital recording and numerical reconstruction of holograms,” Meas. Sci. Technol. 13, R85–R101 (2002). [CrossRef]  

5. J. W. Goodman, Introduction to Fourier Optics, 2nd ed. (McGraw-Hill, New York, 1996).

6. S. Grilli, P. Ferraro, S. De Nicola, A. Finizio, G. Pierattini, and R. Meucci, “Whole optical wavefields reconstruction by digital holography,” Opt. Express 9, 294–302 (2001). [CrossRef]   [PubMed]  

7. L. F. Yu and L. L. Cai, “Iterative algorithm with a constraint condition for numerical reconstruction of a three-dimensional object from its hologram,” J. Opt. Soc. Am. A 18, 1033–1045 (2001). [CrossRef]  

8. K. Matsushima, H. Schimmel, and F. Wyrowski, “Fast calculation method for optical diffraction on tilted planes by use of the angular spectrum of plane waves,” J. Opt. Soc. Am. A 20, 1755–1762 (2003). [CrossRef]  

9. L. Xu, X. Peng, J. Miao, and A. K. Asundi, “Studies of digital microscopic holography with applications to microstructure testing,” Appl. Opt. 40, 5046–5051 (2001). [CrossRef]  

10. W.S. Haddad, D. Cullen, J. C. Solem, J. W. Longworth, A. McPherson, K. Boyer, and C. K. Rhodes, “Fourier-transform holographic microscope,” Appl. Opt. 31, 4973–4978 (1992). [CrossRef]   [PubMed]  

11. W. Xu, M. H. Jericho, I. A. Meinertzhagen, and H. J. Kreuzer, “Digital in-line holography for biological applications,” Proc. Natl. Acad. Sci. USA 98, 11301–11305 (2001). [CrossRef]   [PubMed]

12. A. Barty, K. A. Nugent, D. Paganin, and A. Roberts, “Quantitative optical phase microscopy,” Opt. Lett. 23, 817–819 (1998). [CrossRef]  

13. E. Cuche, F. Bevilacqua, and C. Depeursinge, “Digital holography for quantitative phase-contrast imaging,” Opt. Lett. 24, 291–293 (1999). [CrossRef]  

14. P. Ferraro, S. De Nicola, A. Finizio, G. Coppola, S. Grilli, C. Magro, and G. Pierattini, “Compensation of the inherent wave front curvature in digital holographic coherent microscopy for quantitative phase-contrast imaging,” Appl. Opt. 42, 1938–1946 (2003). [CrossRef]   [PubMed]  

15. J. Gass, A. Dakoff, and M. K. Kim, “Phase imaging without 2π-ambiguity by multiple-wavelength digital holography,” Opt. Lett. 28, 1141–1143 (2003). [CrossRef]   [PubMed]  

16. C. J. Mann, L. Yu, C. M. Lo, and M. K. Kim, “High-resolution quantitative phase-contrast microscopy by digital holography,” Opt. Express 13, 8693–8698 (2005). [CrossRef]   [PubMed]  

17. K. Tobin and P. Bingham, “Optical Spatial Heterodyned Interferometry for applications in Semiconductor Inspection and Metrology,” Proc. SPIE 6162, 616203 (2006). [CrossRef]  

18. J. Kühn, T. Colomb, F. Montfort, F. Charrière, Y. Emery, E. Cuche, P. Marquet, and C. Depeursinge, “Realtime dual-wavelength digital holographic microscopy with a single hologram acquisition,” Opt. Express 15, 7231–7242 (2007). [CrossRef]   [PubMed]  

19. P. Ferraro, L. Miccio, S. Grilli, M. Paturzo, S. De Nicola, A. Finizio, R. Osellame, and P. Laporta, “Quantitative Phase Microscopy of microstructures with extended measurement range and correction of chromatic aberrations by multiwavelength digital holography,” Opt. Express 15, 14591–14600, (2007). [CrossRef]   [PubMed]  

20. D. Parshall and M. K. Kim, “Digital holographic microscopy with dual wavelength phase unwrapping,” Appl. Opt. 45, 451–459 (2006). [CrossRef]   [PubMed]  

21. M. K. Kim, L. Yu, and C. J. Mann, “Digital holography and multi-wavelength interference techniques,” in Digital Holography and Three-Dimensional Display, T. C. Poon, ed. (Springer, 2006).

22. N. Warnasooriya and M. K. Kim, “LED-based multi-wavelength phase imaging interference microscopy,” Opt. Express 15, 9239–9247 (2007). [CrossRef]   [PubMed]

23. A. Khmaladze and M. Kim, “Quantitative Phase Contrast Imaging of Cells by Multi-Wavelength Digital Holography,” in Conference on Lasers and Electro-Optics (CLEO), Technical Digest (CD), (Optical Society of America, 2007), paper JTuA52A. [CrossRef]  

24. A. Khmaladze, C. J. Mann, and M. K. Kim, “Phase Contrast Movies of Cell Migration by Multi-Wavelength Digital Holography,” in Digital Holography and Three-Dimensional Imaging (DH), Technical Digest (CD), (Optical Society of America, 2007), paper DMB3.

25. A. Khmaladze, A. Restrepo-Martínez, M. K. Kim, R. Castañeda, and A. Blandón, “The Application of Dual-Wavelength Reflection Digital Holography for Detection of Pores in Coal Samples,” in Digital Holography and Three-Dimensional Imaging (DH), Technical Digest (CD),(Optical Society of America, 2008), paper DMB5.

26. A. Khmaladze, A. Restrepo-Martínez, M. K. Kim, R. Castañeda, and A. Blandón, “Simultaneous Dual-Wavelength Reflection Digital Holography Applied to the Study of the Porous Coal Samples,” Appl. Opt. 47, 3203–3210 (2008). [CrossRef]   [PubMed]  

27. M. Born and E. Wolf, Principles of Optics (Pergamon, 1964).

28. P. Ferraro, S. De Nicola, A. Finizio, G. Coppola, S. Grilli, C. Magro, and G. Pierattini, “Compensation of the inherent wave front curvature in digital holographic coherent microscopy for quantitative phase-contrast imaging,” Appl. Opt. 42, 1938–1946 (2003). [CrossRef]   [PubMed]  
