Quantitative phase microscopy using defocusing by means of a spatial light modulator

Abstract

A new method for recovering the quantitative phase information of microscopic samples is presented. It is based on a spatial light modulator (SLM) and digital image processing as key elements to extract the sample’s phase distribution. By displaying a set of lenses with different focal power, the SLM produces a set of defocused images of the input sample at the CCD plane. The recorded images are then numerically processed to retrieve the phase information. This iterative process is based on the wave propagation equation and yields a complex amplitude image containing both the amplitude and phase distributions of the wavefront diffracted by the input sample. The proposed configuration is a non-interferometric architecture (conventional transmission imaging mode) with no moving elements. Experimental results are in excellent agreement with those obtained by conventional digital holographic microscopy (DHM).

©2010 Optical Society of America

1. Introduction

The principle of imaging in holography was established by Dennis Gabor in 1948 [1]. In its basic architecture, Gabor’s setup implies an in-line configuration where the imaging wave produced by diffraction at the sample plane and the reference wave (coming from the non-diffracted light passing through the sample) interfere at the recording plane. This interference results in a holographic recording in which complete information about the object wavefront becomes accessible. Nowadays, the hologram is recorded by electronic devices (typically a CCD or CMOS camera), and digital Fourier filtering as well as numerical reconstruction algorithms are commonly used to obtain the image [2–6]. In that sense, DHM combines the high-quality imaging provided by microscopy, the whole object wavefront recovery provided by holography, and the numerical processing capabilities provided by computers [7–11]. As a result, DHM overcomes the limited depth of focus of high-NA lenses and allows the visualization of phase samples that are not visible under conventional microscope imaging.

In DHM, the basic architecture is defined by an interferometric setup where the imaging system is placed in one branch (imaging arm) and a reference beam arriving from a second branch (reference arm) is reinserted at the CCD plane. Owing to its interferometric underlying principle, different classical interferometric configurations can be employed [7,12–14]. In any case, DHM implies mixing an imaging beam and a reference beam in order to recover the complex amplitude distribution of the wavefront diffracted by the input sample. Such phase and amplitude extraction can be performed in an off-axis configuration using Fourier filtering [2,15] or in an on-axis mode using a phase-shifting process [15–17].

But phase retrieval can also be performed without the addition of a reference beam, that is, by considering only numerical reconstructions derived from the intensity variations of the wavefront diffracted by the input object [18–25]. Thus, Barty et al. [24] proposed a non-interferometric method for the extraction of quantitative phase information based on the analysis of how the propagating beam is affected by the sample. They used an ordinary transmission microscope in which different intensity images (in-focus and defocused images) were measured by moving the sample along the z axis with a stepper motor. The resulting intensity measurements acted as inputs to a deterministic phase-retrieval algorithm based on the transport of intensity equation [20–22].

In a similar way, Pedrini et al. [26–28] reported a non-interferometric complex-wavefront reconstruction method based on recording the volume speckle field from an object and applying a phase retrieval algorithm. Pedrini et al. validated their method working without lenses, displacing the CCD axially to provide the different intensity measurement planes. Then, reconstruction algorithms based on iteration of the wave propagation equation allow the recovery of the object’s complex amplitude wavefront. This single-beam multiple-intensity reconstruction (SBMR) method has also been validated in shape, deformation and angular displacement measurements of three-dimensional (3D) objects [29,30]. However, in both non-interferometric approaches [24,26–30], the ability to defocus the input sample in order to measure several intensity distributions is obtained by mechanically displacing either the sample [24] or the CCD camera [25–30]. Thus, such approaches become extremely sensitive to small misalignments due to non-orthogonalities and tilts in the experimental setup, and to noise coming from environmental disturbances between the different recorded images.

In order to improve the capabilities of previous approaches [25–27], Bao et al. reported a different method in which the set of diffracted patterns is recorded not by displacing the CCD but by tuning the illumination wavelength [31]. The experimental setup thus becomes static, with no moving components. This approach improves the convergence of the phase retrieval algorithm in comparison with previous works [25–27] while reducing the noise associated with the use of mechanical components. Recently, several applications have been proposed in microscopy validating the use of an SLM as a programmable, versatile element for quantitative phase [32], phase contrast [33,34] and differential interference contrast [35] microscopy.

In this manuscript, we present a new approach in the field of digital microscopy for phase retrieval based on an SLM, in which no moving elements are required. Similar to the approach presented in Ref. [24], the experimental setup is an in-line configuration (a conventional imaging system in transmission) where a microscope lens magnifies the input sample onto a CCD placed at the output image plane. Nevertheless, an SLM is inserted in the experimental setup to provide the recording of different defocused images. The defocus is produced by displaying at the SLM a phase-profile lens with finite focal length (non-zero power). Thus, by varying the focal length of the lens displayed at the SLM, it is possible to record a set of defocused images in a way that is equivalent to the multiple intensity recordings obtained by displacing the CCD in the SBMR method. Once the whole set of defocused images is stored in the computer’s memory, they are digitally processed in a way similar to that presented in previous works [26,27], but taking into account that the addition of a lens at the SLM not only changes the imaging plane but also its magnification. This numerical process is carefully described in Section 2 and finally yields the phase information of the input sample.

The paper is organized as follows. Section 2 describes both the system setup and the numerical processing, illustrated with experimental results for a USAF resolution test target to provide a clearer understanding of the approach. Section 3 presents experimental results for a 3D biosample. Section 4 concludes the paper.

2. Analysis of the proposed method and experimental calibration

The optical assembly used to experimentally demonstrate the proposed approach is depicted in Fig. 1. It is a non-interferometric microscope configuration in which a collimated laser beam is used as the light source illuminating the input sample. The imaging system is an infinity-corrected, long-working-distance microscope lens that, in combination with a tube lens, images the input sample onto the CCD. Between the microscope lens and the tube lens, an SLM is placed in either a transmissive or a reflective configuration. In Fig. 1 the SLM is drawn in transmissive configuration, but the experimental validation is performed in reflective mode. The SLM is controlled by a computer in order to display the different lenses. Two additional polarizers (one before the input sample and the other after the SLM) optimize the SLM phase modulation.

Fig. 1 Experimental setup drawing for phase retrieval in transmission mode.

When no lens is displayed at the SLM, the CCD images a given two-dimensional (2D) section of the 3D sample, and objects outside the depth of focus of the lens appear blurred. This situation corresponds to Fig. 2(a), where only the ray tracing in red is focused at the CCD. Then, we display different positive and negative lenses at the SLM. Obviously, the lens power that the SLM can display is very low in comparison with that of the microscope lens. However, this small variation provides a refocusing capability over different sections of the 3D input sample, that is, different transversal planes of the sample are brought into focus at the CCD by varying the power of the SLM-displayed lens. When negative lenses are considered [Fig. 2(b)], we image sections of the sample that are farther away than in the case where no lens is displayed at the SLM. The opposite happens when positive lenses are used [Fig. 2(c)].

Fig. 2 Ray tracing for a 3D sample in the experimental setup: (a), (b) and (c) correspond to the central, left and right parts of the 3D sample (red, blue and green rays, respectively) when no lens, a negative lens and a positive lens are displayed at the SLM, respectively. Notice that the polarizer has been removed from the drawing for simplicity.

This refocusing ability implies an additional advantage of the proposed method in comparison with previous works [26–31]: since different transversal sections of the 3D input sample become imaged, the phase retrieval algorithm uses real intensity imaging planes along the iteration procedure. That is, some of the amplitude distributions used during the phase iteration process are real imaging planes instead of diffracted wavefronts. This allows better final image quality once the whole process is finished, as we will see in the experimental validation.

In the following lines, we present the numerical manipulation involved in the proposed approach. The qualitative description of the computational procedure is accompanied by experimental results obtained when a high-resolution negative USAF test target is used as the input object. Aside from being useful for clarifying the numerical processing, the USAF test processing will be used as a calibration stage for the proposed setup when more complex samples (phase samples) are considered (provided that the defocus generated by the SLM is the same).

First, a set of 9 intensity images corresponding to different focus settings of the input sample is stored in the computer’s memory. Let us name such intensities I_N, where N varies from 1 to 9. Each one of these 9 images is obtained with a different lens displayed at the SLM and is connected to a different transversal section of the input object. In Fig. 3 we can see the whole set of images, where case (e) corresponds to the no-lens case at the SLM. Since the final image magnification in the infinity-corrected imaging configuration depends on the ratio between the focal lengths of the tube lens and the microscope objective, an increase in the power of the tube lens implies a reduction in the overall image magnification. Thus, positive lenses at the SLM produce a reduction in the image magnification [Fig. 3, from (a) to (d)], while negative lenses increase it [Fig. 3, from (f) to (i)]. Both cases are compared with the no-lens SLM imaging case [Fig. 3(e)].

Fig. 3 From (a) to (i), raw direct images of the USAF test central part obtained when varying the power of the lens displayed at the SLM.

Before applying the Rayleigh-Sommerfeld (R-S) based phase retrieval algorithm to the digital propagation between the different intensity planes, the I_N inputs must be matched in magnification and transverse location. Otherwise, the numerical propagation would require magnification control and would become computationally costly. Moreover, the precise propagation distance between planes is also needed.

The change in magnification and location between images can be easily computed since the experimental setup works in infinity imaging mode. In this case, the magnification β' is simply a quotient of focal lengths of the system, taking into account the distance e between the SLM and the tube lens:

$$\beta' = \frac{f_S\, f_T}{f_O\,(f_S + f_T - e)} \tag{1}$$

where f_O, f_T and f_S are the focal lengths of the microscope lens, the tube lens and the lens encoded in the SLM, respectively. On the other hand, the axial location of the image is given by the back focal length D defined as

$$D = \frac{f_T\,(e - f_S)}{e - (f_S + f_T)} \tag{2}$$

These expressions are valid for thin lenses and are thus accurate in our case, where the tube lens focal length is much longer than the lens thickness. Nevertheless, the distance e between the SLM and the tube lens has to be measured on the system and, owing to the system complexity, there is a significant uncertainty in its measurement. For experimental simplicity, the magnification and axial location of the images can be obtained from the recorded images themselves, without the need for accurate physical measurements. This significantly simplifies the experiments, at the price of some parameter-matching steps that can be automated.
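
As an illustration of Eqs. (1) and (2), a short numerical sketch is given below; the focal lengths and the SLM-tube lens distance used in the example call are assumed values, not measurements of the actual setup.

```python
def magnification_and_bfd(f_O, f_T, f_S, e):
    """Evaluate Eqs. (1)-(2): lateral magnification beta' and back focal length D
    (measured from the tube lens) for the SLM lens + tube lens combination.
    f_O, f_T, f_S: focal lengths of objective, tube lens and SLM lens; e: SLM-tube
    lens separation. All quantities in the same length unit."""
    beta = (f_S * f_T) / (f_O * (f_S + f_T - e))   # Eq. (1)
    D = (f_T * (e - f_S)) / (e - (f_S + f_T))      # Eq. (2)
    return beta, D

# Example with assumed values: 4 mm objective, 300 mm tube lens, a weak 4 m SLM lens
# and e = 100 mm (all lengths in meters).
print(magnification_and_bfd(f_O=0.004, f_T=0.3, f_S=4.0, e=0.1))
```

As expected, letting f_S grow towards infinity (no lens on the SLM) drives β' towards the usual f_T/f_O ratio of the infinity-corrected arrangement and D towards f_T.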

The procedure for accurately matching the magnification and lateral placement, as well as the propagation distance between planes, involves a two-step algorithm based on the maximization of the correlation value. The first step gives a good estimate of the lateral shift and scale factor for each image, while the second step refines these values and provides the propagation distance. Note that in this calibration step we use the fact that one of the images is in focus and that (because it is a binary pattern with small depth variations) the phase of the object can be neglected in a first approximation.

First, different scaled versions of each defocused image (I_N with N≠5) are correlated in intensity with the image obtained when no lens is displayed at the SLM [I_5, Fig. 3(e)]. The correlation maximum and the displacement of the correlation peak from the image centre provide estimates of the best scale factor and of the image displacement, respectively, to be applied to each I_N (N≠5) image in this first round of the adjustment procedure.

However, all the correlation operations performed in the first step are computed between blurred images (since they originate from varying the SLM lens power) and the in-focus image (no-lens imaging case). So, in order to obtain more accurate values of the scale factors and displacements, we need to consider an additional variable: the propagation distance. Thus, the second step iterates the propagation distance together with the scale factors and displacements. Numerical propagation is based on the well-known R-S equation computed through the convolution operation [4,11,15]. Thus, the diffraction integral is calculated using three Fourier transformations via the convolution theorem, that is, RS(x,y;d) = FT^-1{FT{U(x,y)}·FT{h(x,y;d)}}, where RS(x,y;d) is the propagated wave field, U(x,y) is the recorded hologram, h(x,y;d) is the impulse response of free-space propagation (its definition can be found in Ref. [15], page 115, Eq. (3.73)), (x,y) are the spatial coordinates, FT is the Fourier transform operation (realized with the FFT algorithm) and d is the propagation distance.
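
A minimal numerical sketch of this convolution-based propagation is given below, written with the Fresnel form of the transfer function (the same approximation used later in the iteration); the sampling step dx is an assumed parameter of the sketch.

```python
import numpy as np

def propagate(U, d, wavelength, dx):
    """Propagate the complex field U over a distance d using the convolution form of
    the diffraction integral, RS = FT^-1{FT{U} * H}, with the Fresnel transfer
    function. dx is the sampling step of U (all lengths in the same unit)."""
    ny, nx = U.shape
    fx = np.fft.fftfreq(nx, d=dx)                      # spatial frequencies along x
    fy = np.fft.fftfreq(ny, d=dx)                      # spatial frequencies along y
    FX, FY = np.meshgrid(fx, fy)
    H = np.exp(1j * 2 * np.pi * d / wavelength) * \
        np.exp(-1j * np.pi * wavelength * d * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(U) * H)
```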

Here, the recorded hologram is the amplitude distribution obtained as the square root of the in-focus image intensity (I_5). This input image is digitally propagated to different distances and correlated with scaled and shifted versions of the rest of the images (I_N with N≠5). The starting values for the scale and shift are those obtained in the first step. After the first round of propagations, we store in the computer’s memory the propagation distances that maximize the correlation peak. With these best values, we refine the scale factor and displacement of each image (I_N with N≠5). Then, we re-scale and re-shift each defocused image and recompute the correlation while slightly varying the propagation distance. This process is repeated until the difference between new and previous values is lower than 10^-3.
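
The first adjustment step can be sketched as follows; the list of trial scale factors is an assumption, the size-matching helper exists only to keep the sketch self-contained, and the second-step refinement of the propagation distance (which alternates propagate() calls with this search) is omitted for brevity.

```python
import numpy as np
from scipy.ndimage import zoom

def crop_or_pad_to(img, shape):
    """Centre-crop or zero-pad img to the requested shape (helper for the sketch)."""
    out = np.zeros(shape, dtype=img.dtype)
    sy, sx = min(shape[0], img.shape[0]), min(shape[1], img.shape[1])
    oy, ox = (shape[0] - sy) // 2, (shape[1] - sx) // 2
    iy, ix = (img.shape[0] - sy) // 2, (img.shape[1] - sx) // 2
    out[oy:oy + sy, ox:ox + sx] = img[iy:iy + sy, ix:ix + sx]
    return out

def estimate_scale_and_shift(I_n, I_ref, scales):
    """For a defocused image I_n, rescale it by each trial factor, cross-correlate the
    result with the in-focus reference I_ref, and keep the scale with the highest
    correlation peak; the peak offset from the centre gives the lateral shift."""
    best_scale, best_value, best_shift = None, -np.inf, (0, 0)
    for s in scales:
        Is = crop_or_pad_to(zoom(I_n, s, order=1), I_ref.shape)
        corr = np.fft.fftshift(np.abs(
            np.fft.ifft2(np.fft.fft2(I_ref) * np.conj(np.fft.fft2(Is)))))
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        if corr[peak] > best_value:
            best_scale, best_value = s, corr[peak]
            best_shift = (peak[0] - I_ref.shape[0] // 2, peak[1] - I_ref.shape[1] // 2)
    return best_scale, best_shift
```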

Two results are obtained from this two-step process. On the one hand, a new set of intensity images (denoted I'_N), having both the same object size and the same lateral position, is obtained; Fig. 4 depicts this new set of I'_N images. On the other hand, the propagation distance between the different images is known precisely.

Fig. 4 From (a) to (i), images after compensation of the magnification introduced by the lens displayed at the SLM, for the USAF test central part.

Under these conditions, the phase retrieval algorithm is applied to reconstruct the complex amplitude distribution of the diffracted object wave field. The phase iteration process starts with an initial complex amplitude distribution U_1(x,y) obtained from the square root of the first intensity image [I'_1, Fig. 4(a)] multiplied by an initial constant phase: U_1(x,y) = [I'_1(x,y)]^(1/2) exp(iφ_0(x,y)), with φ_0(x,y) = 0. This initial complex amplitude distribution U_1(x,y) is digitally propagated to the next measured plane, that is, to the image represented by I'_2 [Fig. 4(b)]. Once again, numerical computation of the R-S equation through the convolution approach is considered, but now the recorded hologram is the complex amplitude distribution U_1(x,y) and the Fresnel approximation is used in the calculation of the Fourier transform of the impulse response, H(u,v;d) = FT{h(x,y;d)}, where (u,v) are the spatial-frequency coordinates and the definition of H(u,v;d) can be found in Ref. [15], page 117, Eq. (3.84). Then, the calculation of the propagated wave field from the first measured intensity plane to the second one, separated by a distance d_2, simplifies to RS_2(x,y;d_2) = FT^-1{T_1(u,v)·H(u,v;d_2)}, where T_1(u,v) = FT{U_1(x,y)} is the Fourier transform of the initial complex wave field.

This procedure is repeated for every plane where measurements are performed, that is, RS_{N+1}(x,y;d_{N+1}) = FT^-1{T_N(u,v)·H(u,v;d_{N+1})}. However, for subsequent propagations (N≠1), we retain the phase distribution φ_N(x,y) coming from the previous propagation and replace the obtained amplitude by the square root of the intensity measured at that plane: T_N(u,v) = FT{U_N(x,y)} = FT{[I'_N(x,y)]^(1/2) exp(iφ_N(x,y))}. Let us call one full pass of this iterative process, from the first image (I'_1) to the last one (I'_9) step by step through all the images, a cycle. Once one cycle is completed, we propagate from I'_9 back to I'_1 and the iterative process starts again, that is, a second cycle begins. This iterative process is repeated until the error of the reconstructed image at the imaging plane [I_5, Fig. 3(e)] falls below a predefined threshold; the error is quantified as the root mean square error (rmse) between the measured image I'_5 and the image obtained by propagating the last image of the set (I'_9) to plane number 5 after a given number of cycles. We found that the rmse stabilizes after 5-6 cycles.
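
The cycle just described can be summarized with the sketch below, which reuses the propagate() function from the earlier sketch; here the return from I'_9 to I'_1 at the end of each cycle is implemented as a single backward propagation over the summed distance, which is one possible reading of the procedure above.

```python
import numpy as np

def phase_retrieval_cycles(images, distances, wavelength, dx, n_cycles=6):
    """Iterative reconstruction sketch: 'images' holds the registered intensities
    I'_1 ... I'_9 and 'distances' the calibrated separations d_2 ... d_9 between
    consecutive planes. At each measured plane the propagated phase is kept and
    the amplitude is replaced by sqrt(I'_N)."""
    U = np.sqrt(images[0]).astype(complex)                 # U_1 = sqrt(I'_1) * exp(i*0)
    for _ in range(n_cycles):
        for n in range(1, len(images)):                    # forward pass I'_1 -> I'_9
            U = propagate(U, distances[n - 1], wavelength, dx)
            U = np.sqrt(images[n]) * np.exp(1j * np.angle(U))
        U = propagate(U, -sum(distances), wavelength, dx)  # back to plane 1 for the next cycle
        U = np.sqrt(images[0]) * np.exp(1j * np.angle(U))
    return U                                               # retrieved complex amplitude at plane 1
```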

Finally, the whole process retrieves the phase information of the input sample and, thus, the entire complex amplitude wavefront. Figure 5(a) depicts the case when the central part of the USAF test target is directly imaged onto the CCD, case (b) corresponds to the direct propagation of image I'_9 [Fig. 4(i)] without applying the proposed approach, and case (c) shows the resulting image obtained after 6 cycles. Figure 5(d) plots the normalized variation of the rmse as the number of cycles increases (from 1 to 25). We stop the iteration process at cycle number 6, when the rmse value equals the background rmse. We can see that no image reconstruction is possible when the proposed approach is not applied [case (b)], while very good image quality is reconstructed with only 6 cycles of the iteration process.
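
The stopping criterion can be computed with a small helper such as the one below; normalizing both amplitudes to their maxima before comparison is an assumption, since the exact normalization is not specified above.

```python
import numpy as np

def normalized_rmse(reconstructed_field, measured_intensity):
    """rmse between the modulus of the field propagated to the imaging plane and the
    measured in-focus amplitude sqrt(I'_5), after scaling both to unit maximum."""
    a = np.abs(reconstructed_field)
    b = np.sqrt(measured_intensity)
    a, b = a / a.max(), b / b.max()
    return np.sqrt(np.mean((a - b) ** 2))
```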

Fig. 5 (a) Direct imaging of the USAF test central part. (b) and (c) propagated images resulting without and with 6 cycles in the iteration process, respectively. (d) Representation of the normalized rmse (vertical coordinate) versus the number of cycles (horizontal coordinate).

Finally, we have checked the robustness and the capabilities of the proposed approach when fewer reconstruction planes are considered in the iteration process. In particular, we have omitted planes number 4, 5 (the imaging plane) and 6, corresponding to images (d)-(f) of Fig. 4, respectively, from the reconstruction process. That is, we are considering only defocused images in the phase iteration reconstruction. The resulting reconstructions are depicted in Fig. 6, where the number of cycles must be increased from 6 to 13 in order to achieve a result similar to the one obtained in Fig. 5(c).

Fig. 6 (a) Representation of the normalized rmse (vertical coordinate) versus the number of cycles (horizontal coordinate). (b)-(c) Propagated images resulting after 1 and 13 iteration cycles, respectively, when images I'_4-I'_5-I'_6 are eliminated from the iteration process.

3. Experimental validation

The experimental setup is shown in Fig. 7. A collimated and horizontally polarized laser beam (Roithner Lasertechnik green diode-pumped laser module, 532 nm wavelength) impinges on the input sample. A long-working-distance, infinity-corrected microscope lens (Mitutoyo M Plan Apo, 0.55 NA) is used as the imaging lens, providing the image of the input sample at infinity. Imaging onto the CCD (Basler A312f, 582 × 782 pixels, 8.3 µm pixel size, 12 bits/pixel) is achieved by a tube lens (a doublet with 300 mm focal length). In this configuration, the resulting image has a magnification of 75×.

Fig. 7 Picture of the experimental setup in reflective configuration.

A reflective SLM (Holoeye HEO 1080P, 1920 × 1080 pixel resolution, 8 µm pixel pitch) placed between the microscope lens and the tube lens, together with a standard cube beam splitter (20 mm side), allows the reflective configuration. The SLM is connected to a computer, where the different lenses are generated as images and transferred to the SLM for display. Matlab software is used to manage the whole process and to perform the required digital image processing. Two linear polarizers, one before the input sample and the other after the SLM, allow high-efficiency phase modulation at the SLM. Additional neutral density filters (not visible in the picture) are used to adjust the laser beam power.

As input sample we used a swine sperm biosample enclosed in a counting chamber with a thickness of 20 µm. The sperm cells have an elliptical head of approximately 6 × 9 µm, a total length of about 55 µm, and a tail width of about 2 µm near the head and below 1 µm at the end. The sample is unstained and dried, so the sperm cells are fixed for the experiments. Because of the drying, the cells are fixed at different sections of the chamber. Figure 8 shows two of the nine sections considered in the phase iteration process, in which different sperm cells appear in focus. Since the sample is essentially a phase object, the cells appear nearly invisible in conventional bright-field imaging mode.

Fig. 8 Two sections of the 3D biosample where different sperm cells are in focus. The images have been scaled to be equalized in magnification.

The lens displayed at the SLM has the following mathematical expression: l = exp(ik(x² + y²)), where (x,y) are the spatial coordinates and k is the variable parameter that sets the focal length of the lens. As k is varied over the range [-0.0002, 0.0002], the optical power of the displayed lens varies from zero (no lens) up to a value corresponding to a focal length of approximately ±4 m. Within this range, k was varied linearly in fixed increments, allowing the recording of the whole set of 9 images; no evidence of any need for a different (non-linear) spacing between images was observed. Although the SLM allows displaying stronger lenses, we stopped at this value since focal lengths shorter than 4 m introduce additional lens diffraction orders into the recorded images. One of those higher diffraction orders produced by the SLM lens can be seen as a small vertical white rectangle in the centre of the image depicted in Fig. 8(b). Once the whole set of 9 recorded images is stored in the computer’s memory, the images are shifted and rescaled according to the values provided by the USAF test case.
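
As an illustration, the lens patterns could be generated as in the sketch below, where (x,y) are taken as pixel indices centred on the 1920 × 1080 panel and the wrapped phase is mapped to 8-bit grey levels; both choices, and the resulting scaling between k and focal length (which also depends on pixel pitch, wavelength and SLM calibration), are assumptions of the sketch.

```python
import numpy as np

def slm_lens_phase(k, nx=1920, ny=1080):
    """Quadratic lens phase l = exp(i*k*(x^2 + y^2)) for the SLM, returned as an
    8-bit grey-level image of the wrapped phase."""
    x = np.arange(nx) - nx / 2
    y = np.arange(ny) - ny / 2
    X, Y = np.meshgrid(x, y)
    wrapped = np.mod(k * (X**2 + Y**2), 2 * np.pi)        # wrap the lens phase to [0, 2*pi)
    return (wrapped / (2 * np.pi) * 255).astype(np.uint8)

# Nine lens strengths spanning the range used in the experiment, k in [-0.0002, 0.0002]
patterns = [slm_lens_phase(k) for k in np.linspace(-2e-4, 2e-4, 9)]
```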

Then, the phase extraction iteration is performed. In this case, the iteration cycle is repeated 15 times. The obtained results (real part and phase distribution) are depicted in Fig. 9. We can see that the cells that are essentially invisible in Figs. 8(a) and 8(b) now become clearly visible in Figs. 9(a)/9(c) and 9(b)/9(d), respectively.

Fig. 9 Real (a)-(b) and phase (c)-(d) distributions of the retrieved complex amplitude images obtained with the proposed approach, corresponding to the sections shown in Fig. 8.

Finally, the recovered phase distribution obtained with the iterative method is compared with the one obtained using conventional DHM. To allow this comparison, we have assembled a Mach-Zehnder interferometric configuration in which the image of the biosample provided by the microscope lens interferes with an off-axis reference beam inserted at the CCD plane. The off-axis holographic recording permits the recovery of the transmitted frequency band by Fourier transforming the recorded hologram and filtering one of the hologram diffraction orders. Once the filtered spectral distribution is centred in the Fourier domain, an inverse Fourier transform retrieves the complex amplitude image. Figure 10 compares the 3D plots of the unwrapped phase distributions in both cases. Although the cells are not the same in both figures, they come from the same swine sperm biosample, thus allowing direct comparison. As one can see, the phase step between the background and the highest part of the sperm heads is around 2.5 rad in both figures, showing a high degree of agreement between the unwrapped phase distributions provided by both methods.
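
For completeness, the off-axis reconstruction used for this comparison can be sketched as follows; the binary mask that isolates one diffraction order is assumed to be built beforehand from the carrier-fringe frequency, and phase unwrapping (needed for the 3D plots of Fig. 10) is left out.

```python
import numpy as np

def off_axis_phase(hologram, mask):
    """Fourier transform the off-axis hologram, keep one diffraction order with the
    binary mask, re-centre it in the Fourier domain and inverse transform to recover
    the complex amplitude; the wrapped phase is returned."""
    H = np.fft.fftshift(np.fft.fft2(hologram))
    order = H * mask
    cy, cx = np.unravel_index(np.argmax(np.abs(order)), order.shape)
    order = np.roll(order, (order.shape[0] // 2 - cy, order.shape[1] // 2 - cx), axis=(0, 1))
    field = np.fft.ifft2(np.fft.ifftshift(order))
    return np.angle(field)
```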

Fig. 10 3D representations of sperm cells from the unwrapped phase distribution: (a) group of cells marked with a solid white rectangle in Fig. 9(c), and (b) another group of sperm cells of the same biosample obtained with a conventional DHM configuration. The gray-level scale represents optical phase in radians.

4. Conclusions and discussion

We have presented a new approach for phase extraction in which neither an interferometric architecture nor moving elements are required. The experimental setup is an ordinary transmission microscope in which an SLM is inserted before the CCD plane. The SLM allows the recording of a set of defocused images of the input sample by displaying a set of lenses with different focal powers. The set of recorded intensity images is stored in the computer’s memory and digitally processed to retrieve the phase information. The computational process involves scale and shift image equalization, propagation between image planes and, finally, phase iteration. The results are in good agreement with those obtained by conventional DHM.

Experimental validation has been reported for a synthetic object (USAF resolution test target) and for a biosample (swine sperm sample). The USAF test case also acts as a preliminary calibration stage to adjust the main parameters of the system: magnification, lateral shift, and propagation distance between the different recorded images.

The proposed method has the main advantage of retrieving the phase information with a non-interferometric configuration, and it differs from previous works in the following respects. First, the experimental setup is static, with no mechanical components. Thus, vibrations, small tilts and misalignments (typically of the order of the pixel size) in the system, and noise coming from phase disturbances between recorded images are avoided. Once the magnification, lateral image translation and propagation distance are fixed in the preliminary calibration stage (USAF calibration process), the application of the method to any other sample is straightforward. Moreover, since the lenses can be displayed at the SLM and synchronized with the CCD recording in the millisecond range, the proposed method becomes faster than others involving, to cite an example, the recording of 21 intensity planes spaced 1 mm apart as in Ref. [26]. This provides the proposed method with video-recording capabilities.

Second, compared with other methods for phase extraction in digital holography [25–30], the proposed method allows high-resolution direct imaging of the input sample since it involves the use of microscope lenses. Also, as shown in Section 2, the result is greatly improved regarding the number of cycles needed for phase retrieval, since the proposed method includes imaging planes in the phase iteration procedure. For instance, Ref. [25] requires 75 iterations when considering 3 recording planes.

And third, the proposed method is valid for any type of sample, unlike other methods in which diffuse objects are inherent to the process because they are based on the measurement of the volume speckle field [26–31]. In fact, when a non-diffuse object is considered in those methods, a random plate (ground glass diffuser or similar mask) is used to provide the necessary speckle field [27–31]. Here, experiments are conducted for non-diffuse objects, including a 3D biosample where the sperm cells do not shadow one another. However, there is no reason why the method could not be successfully applied to diffuse samples having limited thickness, because otherwise the retrieved phase distribution coming from a volume sample would be distorted and could not be ascribed to a given sample section.

Acknowledgements

The authors want to thank Prof. Carles Soler and Paco Blasco from Proiser R + D S.L. for providing the swine sperm sample. Also, part of this work was supported by the Spanish Ministerio de Educación y Ciencia under the project FIS2007-60626.

References and links

1. D. Gabor, “A new microscopic principle,” Nature 161(4098), 777–778 (1948).

2. U. Schnars and W. Jüptner, “Direct recording of holograms by a CCD target and numerical reconstruction,” Appl. Opt. 33(2), 179–181 (1994).

3. U. Schnars, “Direct phase determination in hologram interferometry with use of digitally recorded holograms,” J. Opt. Soc. Am. A 11(7), 2011–2015 (1994).

4. L. P. Yaroslavsky, Digital Holography and Digital Image Processing: Principles, Methods, Algorithms (Kluwer, 2003).

5. J. Garcia-Sucerquia, W. Xu, S. K. Jericho, P. Klages, M. H. Jericho, and H. J. Kreuzer, “Digital in-line holographic microscopy,” Appl. Opt. 45(5), 836–850 (2006).

6. V. Micó, J. García, Z. Zalevsky, and B. Javidi, “Phase-shifting Gabor holography,” Opt. Lett. 34(10), 1492–1494 (2009).

7. P. Marquet, B. Rappaz, P. J. Magistretti, E. Cuche, Y. Emery, T. Colomb, and Ch. Depeursinge, “Digital holographic microscopy: a noninvasive contrast imaging technique allowing quantitative visualization of living cells with subwavelength axial accuracy,” Opt. Lett. 30(5), 468–470 (2005).

8. F. Charrière, A. Marian, F. Montfort, J. Kuehn, T. Colomb, E. Cuche, P. Marquet, and Ch. Depeursinge, “Cell refractive index tomography by digital holographic microscopy,” Opt. Lett. 31(2), 178–180 (2006).

9. G. Popescu, T. Ikeda, R. R. Dasari, and M. S. Feld, “Diffraction phase microscopy for quantifying cell structure and dynamics,” Opt. Lett. 31(6), 775–777 (2006).

10. B. Kemper and G. von Bally, “Digital holographic microscopy for live cell applications and technical inspection,” Appl. Opt. 47(4), A52–A61 (2008).

11. V. Micó, Z. Zalevsky, C. Ferreira, and J. García, “Superresolution digital holographic microscopy for three-dimensional samples,” Opt. Express 16(23), 19260–19270 (2008).

12. H. Iwai, C. Fang-Yen, G. Popescu, A. Wax, K. Badizadegan, R. R. Dasari, and M. S. Feld, “Quantitative phase imaging using actively stabilized phase-shifting low-coherence interferometry,” Opt. Lett. 29(20), 2399–2401 (2004).

13. S. Reichelt and H. Zappe, “Combined Twyman-Green and Mach-Zehnder interferometer for microlens testing,” Appl. Opt. 44(27), 5786–5792 (2005).

14. Y. K. Park, G. Popescu, K. Badizadegan, R. R. Dasari, and M. S. Feld, “Fresnel particle tracing in three dimensions using diffraction phase microscopy,” Opt. Lett. 32(7), 811–813 (2007).

15. T. Kreis, Handbook of Holographic Interferometry: Optical and Digital Methods (Wiley-VCH, 2005).

16. I. Yamaguchi and T. Zhang, “Phase-shifting digital holography,” Opt. Lett. 22(16), 1268–1270 (1997).

17. I. Yamaguchi, J. Kato, S. Ohta, and J. Mizuno, “Image formation in phase-shifting digital holography and applications to microscopy,” Appl. Opt. 40(34), 6177–6186 (2001).

18. R. W. Gerchberg and W. O. Saxton, “A practical algorithm for the determination of phase from image and diffraction plane pictures,” Optik (Stuttg.) 35, 237–246 (1972).

19. J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21(15), 2758–2769 (1982).

20. M. R. Teague, “Deterministic phase retrieval: a Green’s function solution,” J. Opt. Soc. Am. 73(11), 1434–1441 (1983).

21. N. Streibl, “Phase imaging by the transport equation of intensity,” Opt. Commun. 49(1), 6–10 (1984).

22. M. R. Teague, “Image formation in terms of transport equation,” J. Opt. Soc. Am. A 2(11), 2019–2026 (1985).

23. G. Z. Yang, B. Z. Dong, B. Y. Gu, J. Zhuang, and O. K. Ersoy, “Gerchberg-Saxton and Yang-Gu algorithms for phase retrieval in a nonunitary transform system: a comparison,” Appl. Opt. 33(2), 209–218 (1994).

24. A. Barty, K. A. Nugent, D. Paganin, and A. Roberts, “Quantitative optical phase microscopy,” Opt. Lett. 23(11), 817–819 (1998).

25. Y. Zhang, G. Pedrini, W. Osten, and H. Tiziani, “Whole optical wave field reconstruction from double or multi in-line holograms by phase retrieval algorithm,” Opt. Express 11(24), 3234–3241 (2003).

26. G. Pedrini, W. Osten, and Y. Zhang, “Wave-front reconstruction from a sequence of interferograms recorded at different planes,” Opt. Lett. 30(8), 833–835 (2005).

27. P. Almoro, G. Pedrini, and W. Osten, “Complete wavefront reconstruction using sequential intensity measurements of a volume speckle field,” Appl. Opt. 45(34), 8596–8605 (2006).

28. P. Almoro, G. Pedrini, and W. Osten, “Aperture synthesis in phase retrieval using a volume-speckle field,” Opt. Lett. 32(7), 733–735 (2007).

29. A. Anand, V. K. Chhaniwal, P. Almoro, G. Pedrini, and W. Osten, “Shape and deformation measurements of 3D objects using volume speckle field and phase retrieval,” Opt. Lett. 34(10), 1522–1524 (2009).

30. P. F. Almoro, G. Pedrini, A. Anand, W. Osten, and S. G. Hanson, “Angular displacement and deformation analyses using a speckle-based wavefront sensor,” Appl. Opt. 48(5), 932–940 (2009).

31. P. Bao, F. Zhang, G. Pedrini, and W. Osten, “Phase retrieval using multiple illumination wavelengths,” Opt. Lett. 33(4), 309–311 (2008).

32. G. Popescu, L. P. Deflores, J. C. Vaughan, K. Badizadegan, H. Iwai, R. R. Dasari, and M. S. Feld, “Fourier phase microscopy for investigation of biological structures and dynamics,” Opt. Lett. 29(21), 2503–2505 (2004).

33. S. Fürhapter, A. Jesacher, S. Bernet, and M. Ritsch-Marte, “Spiral phase contrast imaging in microscopy,” Opt. Express 13(3), 689–694 (2005).

34. Ch. Maurer, A. Jesacher, S. Bernet, and M. Ritsch-Marte, “Phase contrast microscopy with full numerical aperture illumination,” Opt. Express 16(24), 19821–19829 (2008).

35. T. J. McIntyre, Ch. Maurer, S. Bernet, and M. Ritsch-Marte, “Differential interference contrast imaging using a spatial light modulator,” Opt. Lett. 34(19), 2988–2990 (2009).
