
Wavefield imaging via iterative retrieval based on phase modulation diversity


Abstract

We present a fast and robust non-interferometric wavefield retrieval approach suitable for imaging of both the amplitude and phase distributions of scalar coherent beams. It is based on the diversity of intensity measurements obtained under controlled astigmatism and can be easily implemented in standard imaging systems. Its application to imaging in microscopy is experimentally studied. Relevant examples illustrate the capabilities of the approach for image super-resolution, numerical refocusing, quantitative imaging and phase mapping.

©2011 Optical Society of America

1. Introduction

Both digital holography and iterative phase retrieval techniques are widely used for visualization as well as for quantitative imaging in microscopy and in other relevant research areas. Such techniques rely on the numerical capabilities provided by computers to reconstruct the image of an object from the measurement of the intensity distribution of the coherent wave diffracted by the object. While in digital holographic microscopy (DHM) the object wavefront is encoded as an interference pattern [1–5], in phase retrieval several intensity measurements of the propagated beam are required to reconstruct the phase of the object wavefront [6–11].

To mitigate problems such as twin-image and dc-term (zeroth-order diffraction) overlapping in the hologram reconstruction, which distort the image of the object, additional techniques such as phase-shifting interferometry [12, 13] are often applied in DHM. This increases the setup complexity and requires high stability during the recording of the interferograms used for accurate recovery of the wavefront’s phase information.

On the other hand, iterative phase retrieval techniques can provide robustness and a versatile experimental implementation without the need for an interference process with a reference beam. However, such phase retrieval algorithms also suffer from several limitations, depending upon the considered approach. In the case of conventional iterative approaches based on the Gerchberg-Saxton (GS) algorithm [6], the main limitation is the ambiguity (e.g. defocus and twin-image) in the retrieved phase caused by convergence stagnation in spurious local minima. Moreover, the reconstruction is more difficult if not only the phase but also the amplitude of the signal in the object plane is unknown, as is well known in coherent optical and X-ray imaging [14–16], wavefront sensing, etc.

In the last few years, several strategies have been developed to overcome such problems and to improve the iterative phase retrieval algorithms. Most of them rely on the use of multiple measurements of diverse intensity distributions (also referred to as constraint images) to increase the algorithm robustness and to mitigate ambiguity and stagnation [16–18]. For instance, Faulkner and Rodenburg [17, 18] introduced a form of diversity that consists of acquiring a set of far-field intensity patterns obtained when the object is displaced transversely relative to a pre-designed illumination beam. However, it requires accurate knowledge of both the illumination beam and the transverse displacements of the object. Moreover, a high-precision translation stage has to be used for its experimental implementation.

Another relevant way to increase the information diversity is based on an appropriate transformation of the diffracted field by means of an astigmatic operation [19–23]. Specifically, the technique developed in Ref. [23] consists of the measurement of several constraint images (square root of the intensity distribution) that correspond to Wigner distribution projections of the beam. The controlled astigmatism breaks the symmetry of the imaging system and in this way guarantees the uniqueness of the retrieved complex-field amplitude. It also makes the iterative GS algorithm fast and robust against noisy measurements, ambiguities and stagnation. Indeed, only a few constraint images are required to retrieve the phase information after tens rather than thousands of iterations. Its application to accurate laser beam characterization was experimentally demonstrated [23]. We also underline that the implementation of controlled aberrations (e.g. astigmatism) has been found useful for phase retrieval in the context of electron diffraction imaging [24–26].

In this work we generalize the approach proposed in Ref. [23] to the retrieval of both the amplitude and phase distributions, without a priori knowledge about the object, which allows its implementation in imaging systems. The proposed technique and its performance are briefly discussed in Section 2. An optical system for its experimental implementation, which can be easily inserted in other imaging devices, is described in Section 3. Section 4 is devoted to the applications of the proposed approach for imaging in the context of transmission microscopy. In particular, we study the method’s capabilities for relevant imaging tasks such as resolution enhancement of the retrieved image, numerical refocusing and quantitative phase mapping. The problem of detection and localization of micron-sized particles is also studied using the proposed approach, and the experimental results are compared with those provided by a conventional in-line holography technique. Relevant parameters (e.g. shape and refractive index) of such micro-particles are determined from the retrieved signal and compared with the values quoted by the manufacturer. Concluding remarks are drawn in Section 5.

2. Algorithm for iterative wavefield retrieval

The proposed algorithm exploits the diversity provided by a controlled astigmatic phase modulation of the object beam f(x, y) under study. The modulated beam is free-space propagated a fixed distance z and is expressed as f_out(r_o; j) = F_z[f(r_in) L_j(r_in)](r_o):

f_{out}(\mathbf{r}_o;j)=\frac{\exp(i2\pi z/\lambda)}{i\lambda z}\int f(\mathbf{r}_{in})\,L_j(\mathbf{r}_{in})\,\exp\!\left(i\pi\,\frac{(x_o-x_{in})^2+(y_o-y_{in})^2}{\lambda z}\right)\mathrm{d}\mathbf{r}_{in},\qquad (1)
where r_{in,o} = (x_{in,o}, y_{in,o}) are the input and output spatial coordinates, j is an integer and λ is the wavelength. Here L_j(r_in) stands for the transmittance function of a convergent cylindrical lens with variable focal length f_j = α_j z that controls the astigmatism of the constraint images |f_out(r_o; j)| (square root of the measured intensity) and, therefore, provides the required information diversity.
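
For numerical work, the Fresnel propagation operator F_z can be evaluated with FFTs. The following minimal NumPy sketch uses the transfer-function form of the Fresnel propagator, equivalent in the paraxial regime to the single-integral form of Eq. (1); the function name and sampling conventions are illustrative choices, not part of the original implementation.

```python
import numpy as np

def fresnel_propagate(field, wavelength, z, dx):
    """Fresnel propagation of a sampled complex field over a distance z
    (transfer-function method; input and output share the same sampling).

    field      : 2D complex array on a square grid
    wavelength : wavelength [m]
    z          : propagation distance [m]; a negative z back-propagates
    dx         : pixel pitch [m]
    """
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                      # spatial frequencies [1/m]
    FX, FY = np.meshgrid(fx, fx)
    # Fresnel transfer function exp(ikz) * exp(-i*pi*lambda*z*(fx^2 + fy^2))
    H = np.exp(1j * 2 * np.pi * z / wavelength) \
        * np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(field) * H)
```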

This procedure is inspired by the iterative phase retrieval algorithm developed in Ref. [23], in which such a varifocal cylindrical lens has a fixed orientation. It is important to note that that algorithm requires the measurement of constraint images (j = 1, 2, ..., M) which include the intensity distribution of the object field f(r_in) itself, whose direct acquisition is not always possible. In this work, we generalize the approach to retrieve both the amplitude and phase distributions of the object field. In this case, the diversity required for accurate signal recovery is provided by modulating the object field f(r_in) with an astigmatic phase in two orthogonal directions. For instance, a cylindrical lens L_j(r_in) oriented along the x- and y-axis can be used for odd and even j, respectively. Notice that the orientation of the cylindrical lenses (e.g. along the x- and y-axis) is arbitrary as long as they remain orthogonally oriented. Nevertheless, because the constraint images are often registered in a square area of the CCD chip (as in our case), it can be useful to rotate them by an angle of π/4 to optimize the acquisition of such an astigmatic field. In this case, the lenses necessary for wavefield retrieval can be expressed as

L_j(x,y)=\exp\!\left(-i\pi\,\frac{\left(x+(-1)^{j}y\right)^{2}}{2\lambda f_j}\right),\qquad (2)
which corresponds to a cylindrical lens rotated by π/4 and −π/4 for odd and even j, respectively.
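
A corresponding sketch of the lens transmittance of Eq. (2) is given below; the sign is chosen for a convergent lens consistent with the kernel of Eq. (1), and the grid size and pitch are again assumed parameters.

```python
import numpy as np

def cylindrical_lens(n, dx, wavelength, f_j, j):
    """Phase-only transmittance L_j of Eq. (2): a convergent cylindrical lens of
    focal length f_j rotated by +pi/4 for odd j and -pi/4 for even j."""
    x = (np.arange(n) - n / 2) * dx
    X, Y = np.meshgrid(x, x)
    return np.exp(-1j * np.pi * (X + (-1)**j * Y)**2 / (2 * wavelength * f_j))
```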

The flow chart of the proposed iterative wavefield retrieval is depicted in Fig. 1. The pupil function P(x, y) of the lens is used as the initial wavefield estimate ω_{k=1,j=1}, which serves as a realistic starting guess. In each iteration loop k = 1, 2, ..., N, all the measured constraint images E_j = |F_z[f(r_in) L_j(r_in)]|, with j = 1, 2, ..., M (M constraint images), are applied as shown in Fig. 1. Therefore, after N × M iterations a successful wavefield retrieval is expected: ω_{N,M}(x, y) = f(r_in). Notice that the entire wavefield is updated in each iteration step rather than just the phase distribution, as is done in the retrieval algorithm developed in Ref. [23]. A minimal numerical sketch of this loop is given below.
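
The sketch follows our reading of Fig. 1: forward propagation of the current estimate through L_j, replacement of its modulus by the measured constraint E_j, back-propagation over −z and removal of the lens phase by L_j*. The back-propagation step and the exact update rule are assumptions, since the flow chart itself is not reproduced here; the sketch reuses fresnel_propagate and cylindrical_lens from the snippets above.

```python
import numpy as np

def retrieve(E, lenses, wavelength, z, dx, omega0, n_loops):
    """Iterative wavefield retrieval from M astigmatic constraint images.

    E       : list of M measured constraint images (amplitudes |F_z[f L_j]|)
    lenses  : list of M phase masks L_j (see cylindrical_lens above)
    omega0  : initial estimate, e.g. the pupil function P(x, y)
    n_loops : number of outer loops N (total iterations = N * M)
    """
    omega = omega0.astype(complex)
    for _ in range(n_loops):
        for Ej, Lj in zip(E, lenses):
            g = fresnel_propagate(omega * Lj, wavelength, z, dx)    # forward model F_z[omega * L_j]
            g = Ej * np.exp(1j * np.angle(g))                       # impose the measured modulus
            # back-propagate and remove the lens phase; the entire field (amplitude
            # and phase) becomes the new estimate, as stated in the text
            omega = fresnel_propagate(g, wavelength, -z, dx) * np.conj(Lj)
    return omega
```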

Fig. 1 Flow chart of the proposed wavefield retrieval algorithm, where * stands for the complex conjugate.

To illustrate the algorithm performance, let us consider as input beam the Laguerre–Gaussian (LG) mode given by

LG_{p,l}^{\pm}(\mathbf{r};w)=\left(\frac{\sqrt{2\pi}\,(x\pm iy)}{w}\right)^{l}\mathcal{L}_{p}^{l}\!\left(\frac{2\pi}{w^{2}}\,r^{2}\right)\exp\!\left(-\frac{\pi}{w^{2}}\,r^{2}\right),\qquad (3)
where r² = x² + y², w is the beam waist, and L_p^l is the generalized Laguerre polynomial with radial index p and azimuthal index l. In particular, we study the input beam f(r_in) = LG_{2,4}^+, see Fig. 2(a), with a beam waist w = √(λz), where the wavelength λ = 532 nm and the propagation distance z = 10 cm correspond to our experimental setup described in Section 3. We recall that LG beams are stable under free-space propagation, which means that their form is preserved except for scaling. Moreover, an LG mode cannot be distinguished from its complex conjugate (twin image) under propagation. These facts make the wavefield retrieval of an LG mode difficult, and it is therefore well suited for testing signal recovery algorithms.
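
The test beam can be sampled directly from Eq. (3) using SciPy's generalized Laguerre polynomials; the grid size and pixel pitch below are illustrative choices matching the simulation and camera parameters quoted in this work.

```python
import numpy as np
from scipy.special import eval_genlaguerre

def lg_mode(p, l, w, n=1024, dx=4.4e-6):
    """Sample LG_{p,l}^{+} of Eq. (3) on an n x n grid with pixel pitch dx."""
    x = (np.arange(n) - n / 2) * dx
    X, Y = np.meshgrid(x, x)
    r2 = X**2 + Y**2
    return (np.sqrt(2 * np.pi) * (X + 1j * Y) / w)**l \
        * eval_genlaguerre(p, l, 2 * np.pi * r2 / w**2) \
        * np.exp(-np.pi * r2 / w**2)

# test beam of this section: f = LG_{2,4}^{+} with beam waist w = sqrt(lambda * z)
wavelength, z = 532e-9, 0.10
f_in = lg_mode(p=2, l=4, w=np.sqrt(wavelength * z))
```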

Fig. 2 (a) Intensity and phase distributions of the LG_{2,4}^+ input signal. (b) The wavefield ω_{N,M}(x, y) retrieved after N × M = 96 iterations using several constraint images M. (c) Evolution of the RMSE as a function of the number of iterations and constraint images by using the proposed algorithm. (d) The RMSE evolution for the case of the algorithm type 2, in which arithmetic averaging for wavefield estimation is performed in each iteration step.

The algorithm convergence is monitored by calculating the relative root mean square error (RMSE) of the estimated constraints Ẽ_{k,j} = |F_z[ω_{k,j} · L_j(r_in)]|, corresponding to each iteration, with respect to the measured ones E_j, as follows:

\mathrm{RMSE}(k,j)=\left(\int\left[E_j(\mathbf{r})-\tilde{E}_{k,j}(\mathbf{r})\right]^{2}\mathrm{d}\mathbf{r}\right)^{1/2}\times\left(\int\left[E_j(\mathbf{r})\right]^{2}\mathrm{d}\mathbf{r}\right)^{-1/2}.\qquad (4)
The corresponding wavefields retrieved after N × M = 96 iterations are shown in Fig. 2(b), while the corresponding evolution of the RMSE is shown in Fig. 2(c). These results demonstrate that increasing the number M of constraint images significantly speeds up the convergence of the RMSE. Convergence in the wavefield retrieval is reached at N × M = 96 iterations with an RMSE value of 0.04 (M = 2), 0.01 (M = 3), and 5 × 10−3 (M = 4). We conclude that at least three constraint images are necessary to obtain successful signal retrieval using a low number of iterations. We recall that each constraint image E_j is associated with a single value of the focal length given by f_j = α_j z [see the lens function Eq. (2)], where the orientation of the cylindrical lens alternates according to the rotation angle (−1)^j π/4. In particular, we have chosen α_1 = 0.35, α_2 = 0.5, α_3 = 0.75 and α_4 = 1.
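
Putting the previous sketches together, a hypothetical end-to-end simulation of this numerical test (constraint images generated from the LG beam, retrieval with M = 4, and the relative RMSE of Eq. (4)) could look as follows; the pupil radius is taken from Section 4.1 and the remaining values from this section.

```python
import numpy as np

n, dx = 1024, 4.4e-6
x = (np.arange(n) - n / 2) * dx
X, Y = np.meshgrid(x, x)
pupil = (np.hypot(X, Y) < 0.8e-3).astype(complex)        # 0.8 mm pupil radius (Section 4.1)

alphas = [0.35, 0.5, 0.75, 1.0]                          # alpha_1 ... alpha_4
lenses = [cylindrical_lens(n, dx, wavelength, a * z, j)
          for j, a in enumerate(alphas, start=1)]

# simulated constraint images E_j = |F_z[f L_j]| from the LG test beam f_in
E = [np.abs(fresnel_propagate(f_in * Lj, wavelength, z, dx)) for Lj in lenses]

omega = retrieve(E, lenses, wavelength, z, dx, pupil, n_loops=96 // len(E))

# relative RMSE of Eq. (4), here evaluated against the last constraint image
E_est = np.abs(fresnel_propagate(omega * lenses[-1], wavelength, z, dx))
rmse = np.sqrt(np.sum((E[-1] - E_est)**2) / np.sum(E[-1]**2))
```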

To obtain an accurate signal recovery and mitigate ambiguities caused by noisy experimental measurements, the number M of constraint images has to be increased. This was previously demonstrated for the astigmatic phase retrieval approaches reported in Refs. [22, 23]. We underline that this fact is ignored in the astigmatic phase retrieval technique proposed by Henderson et al. [20], in which only three constraint images are used. In the latter work, an iterative algorithm comprising forward and backward beam propagation is also applied: two cylindrical lenses (orthogonally oriented as well) with fixed and identical focal lengths are used for generating two constraint images, while the third one corresponds to the measurement of the far-field diffraction pattern. Such an algorithm is based on the astigmatic phase retrieval approach reported by Nugent et al. [19]. In contrast to our approach, in Henderson’s algorithm the wavefield is estimated via an arithmetic average of the three retrieved signals obtained in each iteration step. Notice that this averaging process is often introduced in order to provide robustness against noise [27]. However, it significantly degrades the algorithm convergence, although the considered diversity effectively provides uniqueness [20]. This fact can be easily demonstrated by comparing the behavior of the RMSE convergence obtained without and with the implementation of such an averaging process in the algorithm, see Fig. 2(c) and 2(d), respectively. Indeed, in our case the RMSE convergence is stable [Fig. 2(c)] and 6× faster than the one obtained using Henderson’s method, referred to as algorithm type 2 in Fig. 2(d). Specifically, for M = 3 constraint images the RMSE reaches 0.01 after 96 iterations, whereas using Henderson’s algorithm about 576 iterations are required to reach the same RMSE value. Therefore, the proposed technique provides accurate and fast signal retrieval using tens instead of hundreds of iterations. It is important to note that the same improvement in the algorithm convergence is also obtained when M = 2 and 4 constraint images are applied, whose RMSE evolution is also shown in Fig. 2(d).

In our case the numerical simulation was performed using 1024 × 1024 pixels, where the Fast Fourier Transform (FFT) is applied for accurate calculation of the Fresnel diffraction integral, Eq. (1), see for example [28]. The algorithm typically converges within 1 minute (GNU/Linux workstation, Intel Core i7 processor, Matlab R2010a) after N × M = 96 iterations, where M is the number of constraint images of 1024 × 1024 pixels. These facts make the described method a versatile tool for wavefield retrieval. Notice that almost real-time calculation can be reached by using a highly parallel processor, such as a graphics processing unit [29].

3. Experimental implementation

The lenses L_j(x, y) necessary for wavefield retrieval [see Eq. (2)] are addressed into a programmable spatial light modulator (SLM) operating in phase-only modulation. Specifically, we use a reflective LCoS-SLM (Holoeye PLUTO, 8-bit gray-level, pixel pitch of 8 μm and 1920 × 1080 pixels), which was calibrated for a 2π phase shift at the wavelength λ = 532 nm and corrected for the static aberrations caused by the deviation from flatness of the SLM display, as reported in Ref. [23]. The wavefront flatness was compensated up to λ/15 RMS over a circular area of 4 mm in diameter of the SLM display, where the digital lenses are addressed, which guarantees an accurate lens implementation. In this work we apply the same experimental setup described in Ref. [23], in which a collimated Nd:YAG laser (linearly polarized, emitting at λ = 532 nm with an output power of 300 mW) is spatially modulated by the SLM in order to generate such signals. Nevertheless, conventional glass cylindrical lenses can be used instead of the digital ones.

The experimental setup is sketched in Fig. 3. Two relay lenses (RL1 and RL2) are set as a 4-f system or telescope for spatial filtering, to isolate the addressed signal from the unmodulated light as well as from unwanted diffraction terms caused by the discrete structure of the SLM display. Notice that this spatial filtering is achieved at the Fourier plane of the spherical relay lens RL1, whereas the SLM is located at its back focal plane. Therefore, the encoded lens L_j(x, y) is generated at the output plane (at z_0 = 0) of the telescope, which corresponds to the focal plane of relay lens RL2, see Fig. 3. The cylindrical lens L_j(x, y) addressed into the SLM was set with a focal length f_j = α_j z/s² to account for the telescope magnification s. The intensity distribution of the outgoing beam is recorded by a CCD camera (Spiricon SP620U, 12-bit gray-level, pixel pitch of 4.4 μm and 1620 × 1200 pixels), which is placed at a distance z for acquiring the constraint images.
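
For illustration, an 8-bit, phase-wrapped version of Eq. (2) with the magnification-corrected focal length f_j = α_j z/s² could be generated as in the hypothetical sketch below; the quantization scheme and the centering of the pattern on the panel are assumptions.

```python
import numpy as np

def slm_lens_pattern(n_y, n_x, pitch, wavelength, alpha, z, s, j, levels=256):
    """8-bit gray-level pattern of the cylindrical lens L_j addressed to the SLM,
    with focal length f_j = alpha * z / s**2 to account for the telescope
    magnification s."""
    f_j = alpha * z / s**2
    y = (np.arange(n_y) - n_y / 2) * pitch
    x = (np.arange(n_x) - n_x / 2) * pitch
    X, Y = np.meshgrid(x, y)
    phase = -np.pi * (X + (-1)**j * Y)**2 / (2 * wavelength * f_j)   # Eq. (2)
    return np.uint8(np.mod(phase, 2 * np.pi) / (2 * np.pi) * (levels - 1))

# PLUTO-like panel: 1920 x 1080 pixels, 8 um pitch, lambda = 532 nm, s = 0.4, z = 10 cm
pattern = slm_lens_pattern(1080, 1920, 8e-6, 532e-9, alpha=0.35, z=0.10, s=0.4, j=1)
```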

Fig. 3 Experimental setup used for the digital lens L_j(x, y) generation. BS is a cube beam splitter, RL1 and RL2 are spherical relay lenses (with a diameter of 5.1 cm) working as a 0.4× telescope, with focal lengths f1 = 25 cm and f2 = 10 cm in our case. SF is a spatial filter located at the Fourier plane of RL1 for filtering of unwanted diffraction terms caused by the discrete structure of the SLM display.

The measurement of the constraint images is mostly limited by the size of the active area of the CCD chip. Notice that the numerical aperture (NA) of such a system is given by both the chip size of the CCD and the propagation distance z. Therefore, in order to reach a maximum NA, the distance z was chosen as short as possible. Indeed, the shortest focal length that the SLM is able to implement at the considered wavelength is about f_min = 15 cm. Since the lens addressed into the SLM has a focal length given by f_j = α_j z/s², where s = 0.4 is the magnification of the telescope and α_min = 0.35, the shortest propagation distance is z = 10 cm. To reduce as much as possible the artifacts and ambiguities caused by noisy experimental measurements and the limited dynamic range of the CCD camera, we use M = 8 constraint images obtained for the transformation parameter values α = 0.35, 0.42, 0.5, 0.62, 0.75, 0.87, 1, and 1.12. The analysis shows that at least four constraint images are necessary to obtain successful retrieval from experimental data.
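
The relation between these numbers and the resolution quoted later in Section 4.1 can be checked with a short estimate; the relations NA ≈ (half-width of the used CCD area)/z and coherent cutoff = NA/λ are our simplifications, not statements from the original text.

```python
pixel_pitch, n_used, z, wavelength = 4.4e-6, 1024, 0.10, 532e-9
half_aperture = n_used * pixel_pitch / 2      # ~2.25 mm half-width of the used CCD area
na = half_aperture / z                        # ~0.023
cutoff_lp_per_mm = na / wavelength / 1e3      # ~42 lp/mm, close to the 40 lp/mm of Section 4.1
```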

In the next Section, we study the implementation of the proposed experimental setup for quantitative imaging in microscopy. Specifically, two experimental schemes are considered for this application. In the first one, the object under study is placed at the plane z_0 = 0, where it is illuminated by the probe beams corresponding to the lenses L_j(x, y), with j = 1, 2, ..., M. In the second scheme, the light scattered by the object is directly imaged onto the SLM using a microscope objective. The main goal of these experiments is to demonstrate an accurate retrieval of the wavefield, from which the image of the specimen can be synthesized and analyzed.

4. Imaging in microscopy by means of wavefield retrieval

The proposed technique for wavefield retrieval has several advantages: i) it is robust because the measurement does not require interferometry and the optical elements in the experimental setups are fixed; ii) it provides an accurate wavefield recovery since ambiguities such as twin-image and defocus (typically found in conventional GS algorithms) are suppressed thanks to the constraint diversity; iii) it is fast both in the acquisition of the required constraint images –the CCD camera and SLM can be synchronized to perform an automatic measurement at video rates (25 fps)– and in the convergence of the retrieval algorithm. We also underline that additional postprocessing tasks such as image resampling and digital alignment of the constraint images are not required in our approach.

These facts make the proposed approach a promising tool for imaging applications. Here we consider some of them related to the transmission microscopy geometry. In the following examples, the convergence of the iterative algorithm is typically reached after N × M = 96 iterations using M = 8 constraint images.

4.1. Transmission microscopy scheme via multi-illumination probe beams

In this microscopy scheme the specimen (placed at z_0 = 0, see Fig. 3) is sequentially illuminated by several probe or scanning beams P(x, y)L_j(x, y), where P(x, y) is the circular pupil of the addressed lens. Specifically, this pupil function is fixed with a radius of 0.8 mm and corresponds to a circular area of 4 mm in diameter of the SLM display; here the relay telescope magnification has been taken into account. The specimen image is obtained from the wavefield retrieved at z_0 = 0, while the constraint images are acquired by the CCD camera as displayed in Fig. 4(a).

Fig. 4 (a) Setup scheme for transmission microscopy. (b) Extension of the recording area by moving the CCD chip in order to achieve super-resolution in the retrieved image.

In our case the light scattered from the specimen is registered at a fixed distance z = 10 cm in the central area of the CCD chip corresponding to 1024 × 1024 pixels. In order to enhance the resolution of the retrieved field, one can register the adjacent scattered field for every illumination beam by moving the CCD camera in the recording plane, for example as displayed in Fig. 4(b). This allows generating an extended synthetic aperture (SA) with an effective spatial cutoff frequency higher than the cutoff given by the limited CCD aperture. We underline that an extended SA in the Fourier domain is often applied for super-resolution imaging, see for example Refs. [5, 30]. In this technique, the corresponding parts of the retrieved field (obtained from the constraint images registered at each CCD position) are properly assembled in the Fourier domain. Specifically, this is achieved by performing the Fourier transform of each retrieved part of the signal to compose an extended Fourier spectrum, from which the superresolved image is generated by a simple inverse Fourier transformation.
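
A hedged sketch of this assembly step is given below: the Fourier spectrum of each retrieved sub-field is pasted at the spectral position corresponding to its CCD displacement and the super-resolved image is obtained with a single inverse FFT. Registration, phase matching and blending of overlapping regions, which a real implementation would need, are omitted here.

```python
import numpy as np

def assemble_synthetic_aperture(parts, offsets, n_big=3072, n=1024):
    """Compose an extended Fourier spectrum from retrieved sub-fields.

    parts   : list of n x n complex fields retrieved for each CCD position
    offsets : list of (row, col) spectral offsets in pixels for each part
    """
    spectrum = np.zeros((n_big, n_big), dtype=complex)
    c = n_big // 2
    for field, (dy, dx) in zip(parts, offsets):
        F = np.fft.fftshift(np.fft.fft2(field))
        r0, c0 = c + dy - n // 2, c + dx - n // 2
        spectrum[r0:r0 + n, c0:c0 + n] = F     # naive paste, no blending of overlaps
    return np.fft.ifft2(np.fft.ifftshift(spectrum))
```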

To experimentally demonstrate the performance of the wavefield retrieval algorithm for imaging applications, we first consider a high-resolution USAF test target as an object. The last resolved elements in the retrieved image, Fig. 5(a), are G5-E3, corresponding to a cutoff frequency of 40 lp/mm. This limit in the spatial resolution can be extended up to 114 lp/mm (G6-E6), see Fig. 5(b) and its zoomed inset, by assembling five retrieved fields obtained from the measurement of the constraint images as sketched in Fig. 4(b). This implies a spatial resolution enhancement of about 3× in the retrieved image, whose size is increased to 3072 × 3072 pixels.

Fig. 5 (a) Retrieved image of a high-resolution test target (USAF). (b) Image enhancement obtained using the proposed SA super-resolution technique. (c) Intensity distribution of the retrieved wavefield corresponding to onion skin specimen and its superresolved image (d). Nucleus as well as the cell membranes are clearly distinguished, as observed in the corresponding zoomed insets.

This technique also allows successful imaging of the wavefields produced by biological specimens. In particular, we study an onion skin specimen enclosed in a flow chamber (filled with distilled water) made by attaching two glass coverslips (thickness about 0.17 mm). The retrieved intensity distribution of the sample is shown in Fig. 5(c). As observed, the whole cell structure of the specimen is clearly distinguished: the nucleus as well as the cell membranes. The superresolved image in Fig. 5(d) reveals further details. For instance, the nucleus and membranes are better resolved and distinguished from the rest of the cell structure [see the zoomed inset of Fig. 5(d)], which also presents fine details. These experimental results demonstrate an accurate wavefield retrieval suitable for imaging with a wide field of view. Nevertheless, for a proper analysis of the phase distribution of a specimen wavefield, a microscope objective providing appropriate magnification is required. This case is studied in detail in the next subsection.

4.2. Transmission microscopy scheme using a microscope objective

The proposed optical setup for wavefield retrieval can be easily assembled with a conventional microscope. Specifically, the light scattered by the specimen is imaged through the microscope objective (MO) onto the SLM display, as sketched in Fig. 6. The optical path between the back focal plane of the MO and the SLM display is set according to a tube lens distance of 17 cm. Therefore, the image of the specimen is obtained at the output plane z_0 = 0 of the relay telescope, see Fig. 6, while the tube lens is digitally addressed into the SLM together with the cylindrical lenses L_j(x, y) necessary for the wavefield retrieval. Applying the algorithm described in Section 2, we are able to retrieve the wavefield at z_0 = 0 from the measurement of the constraint images acquired by the CCD camera at the position z = 10 cm.
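
A hypothetical sketch of the combined SLM pattern for this scheme (a digital spherical tube lens added to the cylindrical lens of Eq. (2)) is shown below; the tube-lens focal length f_tube is an assumed parameter, since only the 17 cm tube-lens distance is stated above.

```python
import numpy as np

def tube_plus_cyl_pattern(n_y, n_x, pitch, wavelength, f_tube, alpha, z, s, j, levels=256):
    """8-bit SLM pattern: digital spherical tube lens added to the cylindrical lens L_j."""
    y = (np.arange(n_y) - n_y / 2) * pitch
    x = (np.arange(n_x) - n_x / 2) * pitch
    X, Y = np.meshgrid(x, y)
    phase = -np.pi * (X**2 + Y**2) / (wavelength * f_tube) \
            - np.pi * (X + (-1)**j * Y)**2 / (2 * wavelength * alpha * z / s**2)
    return np.uint8(np.mod(phase, 2 * np.pi) / (2 * np.pi) * (levels - 1))
```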

Fig. 6 Experimental setup for wavefield retrieval of the light scattered by the sample, which is imaged by means of the microscope objective (MO). The specimen is illuminated by focusing a collimated laser beam using a condenser.

Notice that the absorption coefficient and the real part of the refractive index, n, of the sample play an important role in the imaging process because both affect the transmitted wavefield. Therefore, both the amplitude and phase distributions of the scattered field must be recovered to achieve a quantitative analysis of the specimen. In particular, refractive index mapping, associated with the phase distribution, is useful for biological applications because it reveals cell activity, molecular dynamics, etc. We underline that in recent years tomographic techniques based on wavefield recovery have been developed for 3D imaging of live cells, see for example [31] and references cited therein. The knowledge of the amplitude and phase of the scattered field also allows its digital processing, including filtering, denoising, image segmentation, etc., to be performed more effectively. Here we demonstrate with several examples what information about the specimen can be obtained from the analysis of the intensity and phase distributions of the scattered field retrieved by the proposed approach.

Let us first study the onion skin object, which is imaged using a Zeiss 16× objective (0.4 NA, Carl Zeiss Jena Apochromat) at the plane z_0 = 0, see Fig. 7(a). The intensity [Fig. 7(b)] and phase [Fig. 7(c)] distributions of the light modulated by the specimen, projected onto the conjugated plane z_0 = 0, are simultaneously retrieved using the constraint images registered at the plane z. Notice that the synthesized image, Fig. 7(b), is slightly blurred with respect to the image in Fig. 7(a) due to the limited CCD aperture (cutoff frequency of 40 lp/mm) of the recording setup, as previously discussed. While the information content of the intensity images is almost the same, which in particular underlines the accuracy of the retrieval method, the phase map provides additional details about the cell structure. The nucleus and cell membrane are clearly distinguished in the unwrapped phase distribution as well as in its 3D representation, displayed in Fig. 7(d) and 7(e), respectively. For accurate phase unwrapping we applied the algorithm reported in Ref. [32].
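
The reliability-sorting unwrapping algorithm of Ref. [32] is available, for instance, in scikit-image, so the unwrapping step can be reproduced on the retrieved field (here called omega) as sketched below.

```python
import numpy as np
from skimage.restoration import unwrap_phase

wrapped = np.angle(omega)    # wrapped phase of the retrieved wavefield at z_0 = 0
phi = unwrap_phase(wrapped)  # 2D unwrapping based on the algorithm of Ref. [32]
```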

Fig. 7 (a) Intensity distribution of the specimen imaged at plane z 0 = 0 using the setup depicted in Fig. 6. Intensity and phase distributions of the retrieved wavefield are shown in (b) and (c), respectively. (d) Unwrapped phase distribution corresponding to the central region of the imaged specimen (b). (e) 3D representation of the unwrapped phase that reveals the cell structure of the onion skin object.

The retrieved amplitude and phase distributions of the light modulated by the object can also be used for numerical refocusing. Here we demonstrate this technique on the synthesized field recovered using the proposed method. As a test object we consider an aggregate of polystyrene spheres of 5.0 ± 0.3 μm in radius (Duke Scientific Lot 9949), dispersed in immersion oil to avoid motion artifacts in the retrieved image. Its intensity distribution, imaged at the plane z_0 = 0 using a Zeiss 40× objective (0.95 NA, Carl Zeiss Jena Apochromat), is shown in Fig. 8(a). The light scattered by the beads and the unscattered portion of the illumination laser beam form an interference pattern yielding an in-line hologram, which is clearly observed in Fig. 8(a). The retrieved intensity and phase of the field are displayed in Fig. 8(b) and 8(c), respectively. From the unwrapped phase displayed in Fig. 8(d) and 8(e) we conclude that several particles, labeled as 1, 2, 3 and 4, are arranged on top of each other. In particular, particles #1 and #2 are partially hidden behind the #3 and #4 ones. This fact cannot be clearly distinguished from the intensity distribution alone.

Fig. 8 (a) Aggregate of polystyrene spheres imaged at plane z 0 = 0 using the setup depicted in Fig. 6. Intensity and phase distributions of the retrieved wavefield are shown in (b) and (c), respectively. (d) Unwrapped phase distribution and its 3D representation (e).

The particle position can be accurately measured by performing numerical refocusing of the retrieved wavefield. Indeed, the synthesized wavefield is numerically propagated from the conjugated image plane z_0 = 0 until all particle centers are found by locating the corresponding maxima in the diffracted field, see Fig. 9. Notice that the particles are labeled in the order in which they appear focused. Specifically, the spheres #5–7 are focused at z = 11 ± 1 μm, #8–10 at z = 18 ± 1 μm, while the bead #11 is located at z = 23 ± 1 μm. Because the particles #1 and #2 are attached to the ends of the #3 and #4 ones, respectively, their axial positions cannot be accurately measured by applying this technique. Nevertheless, the light focused by the pairs of particles (#1 and #3) and (#2 and #4) is observed at z = 5 ± 1 μm and z = 8 ± 1 μm at the transversal positions (in the XY plane) indicated as bead #3 and #4, respectively.
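
A minimal sketch of this refocusing scan is given below; it reuses fresnel_propagate from Section 2 and takes the brightest pixel as a simple proxy for a particle coming into focus. The mapping between the numerical propagation distance and the object-space defocus (which involves the objective magnification) is left out, so the z values passed in are only placeholders.

```python
import numpy as np

def refocus_stack(field, wavelength, dx, z_values):
    """Propagate the retrieved field to a set of distances and record, for each
    distance, the location of the brightest pixel."""
    peaks = []
    for z in z_values:
        intensity = np.abs(fresnel_propagate(field, wavelength, z, dx))**2
        peaks.append((z, np.unravel_index(np.argmax(intensity), intensity.shape)))
    return peaks
```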

Fig. 9 Numerical refocusing of the retrieved field [z = 0 μm, corresponding to Fig. 8(b) and 8(c)] reveals the particle positions at several propagation distances z. The particles are labeled as they appear focused, by locating the maximum of the diffracted light.

Let us now compare the considered particle localization method with another one based on the numerical reconstruction of in-line holograms, which is often applied for fast particle tracking in microscopy, see for example [3, 33]. In the latter case, the particle position can also be found by locating the maximum intensity in the propagated field. This technique requires a well-defined interference pattern in the hologram, such as the one observed in Fig. 8(a). Notice that a single particle can be detected if its corresponding diffracted field is properly encoded in the hologram. In the case of closely aggregated particles, the diffraction pattern of the whole structure is mostly encoded rather than the pattern emerging from each particle. Thus, it is expected that only some beads of the aggregate can be detected by using this holographic method. Indeed, as demonstrated in Fig. 10, only the beads #7, #8, #10 and #11 are successfully detected.

Fig. 10 Numerical reconstruction at several propagation distances z of the in-line hologram shown in Fig. 8(a).

From the unwrapped phase distribution ϕ(x, y) shown in Fig. 8(d), it is also possible to determine the bead radius R and its refractive index n_b. However, accurate knowledge of the refractive index of the surrounding fluid (immersion oil, n_s = 1.52 at 589 nm) is required. Because the particle is spherical in shape, one can estimate both the radius of the bead and the refractive index difference n_b − n_s by a linear fit of the square of the phase ϕ²(x, y) against r² = (x − x_o)² + (y − y_o)². Notice that in the case of a spherical bead the square of the phase is theoretically given by the expression

\phi^{2}(x,y)=\frac{16\pi^{2}(n_b-n_s)^{2}}{\lambda^{2}}\left(R^{2}-r^{2}\right),\qquad (5)
see Ref. [34] in which this technique was demonstrated for polystyrene beads as well.

In our case, the exact determination of the center (x_o, y_o) of the sphere was obtained from the numerical refocusing results previously discussed. This allows an easy determination of the phase profile of the bead needed for an accurate fit. In particular, we consider the bead #7, for which the phase profile determined by averaging along four directions is shown in Fig. 11(a). The corresponding best linear fit is displayed in Fig. 11(b). A refractive index difference n_b − n_s = 0.063 ± 0.004 is obtained from the slope of the fitted line according to Eq. (5). Thus, the measured refractive index at 532 nm is n_b = 1.583 ± 0.004, whereas a bead radius of R = (5.0 ± 0.2) μm is found from the intercept of the fitted line. These values are consistent with the ones quoted by the manufacturer, R = (5.0 ± 0.3) μm and n_b = 1.59–1.60 at 589 nm. The refractive index determination is in good agreement with the value reported by Ma et al. [35] (n_b = 1.588 at 532 nm), although slightly smaller than the ones quoted in Ref. [36] (n_b = 1.5986) and Ref. [34] (n_b = 1.602). These results demonstrate that the proposed approach can be used for the determination of quantitative information about the object. Moreover, the considered examples illustrate the potential applications of the proposed wavefield retrieval approach for advanced microscopic imaging, including enhanced resolution, numerical refocusing, quantitative imaging and phase mapping.
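
The fitting step can be reproduced with a standard linear least-squares fit, as in the hedged sketch below; phi_profile and r are assumed to hold the averaged phase profile of bead #7 and the radial coordinate (in metres) measured from the bead centre found by refocusing.

```python
import numpy as np

wavelength, ns = 532e-9, 1.52
# Eq. (5): phi^2 = -(16 pi^2 dn^2 / lambda^2) r^2 + (16 pi^2 dn^2 / lambda^2) R^2,
# i.e. phi^2 is linear in r^2 with a negative slope.
slope, intercept = np.polyfit(r**2, phi_profile**2, 1)
dn = np.sqrt(-slope) * wavelength / (4 * np.pi)                    # n_b - n_s from the slope
R = np.sqrt(intercept / (16 * np.pi**2 * dn**2 / wavelength**2))   # bead radius from the intercept
nb = ns + dn
```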

Fig. 11 (a) Averaged phase profile ϕ of the polystyrene bead #7. (b) Square of the phase profile (ϕ 2) and the fit corresponding to Eq. (5).

5. Conclusions

We have developed an iterative wavefield retrieval approach that exploits controlled astigmatic phase modulation to obtain appropriate diversity in the intensity measurements. The algorithm is robust against noise, ambiguities and stagnation, and provides uniqueness of the recovery. A comparison with a similar technique demonstrates that tens instead of hundreds of iterations are needed for its convergence.

The experimental setup for the implementation of the proposed method is highly robust (all the optical elements and the CCD camera are fixed) and its incorporation into other optical schemes is straightforward. It is based on an SLM that accurately implements the aberration-corrected cylindrical lenses required in this wavefield retrieval approach. A digital lens implementation at video rates, synchronized with the CCD camera, provides fast intensity measurements (constraint images) in the millisecond range. The high versatility of the proposed setup also allows a simple implementation of advanced techniques for improving the resolution of the retrieved image.

The fast and accurate retrieval of complex-valued fields has been experimentally demonstrated for several applications in microscopic imaging. In particular, it has been demonstrated that super-resolution in the retrieved image of the object can be easily obtained using the proposed technique for generating an extended synthetic aperture. Indeed, a super-resolution factor of about 3× in the cutoff frequency of the setup was reached by properly assembling five adjacent parts of the scattered field. The proposed approach promises to make wavefield retrieval easier and more practical for the quantitative imaging required in relevant optical applications. For instance, it has been demonstrated that the shape, size and refractive index of polystyrene micro-particles can be accurately determined from the retrieved field. These results are in good agreement with the values quoted by the manufacturer, although further research is needed to definitively establish the accuracy of the method.

Acknowledgments

The financial support of the Spanish Ministry of Science and Innovation under projects TEC2008-04105 and TEC2010-20307 is acknowledged. José A. Rodrigo gratefully acknowledges a “Juan de la Cierva” grant.

References

1. U. Schnars, “Direct phase determination in hologram interferometry with use of digitally recorded holograms,” J. Opt. Soc. Am. A 11, 2011–2015 (1994).

2. P. Marquet, B. Rappaz, P. J. Magistretti, E. Cuche, Y. Emery, T. Colomb, and C. Depeursinge, “Digital holographic microscopy: a noninvasive contrast imaging technique allowing quantitative visualization of living cells with subwavelength axial accuracy,” Opt. Lett. 30, 468–470 (2005).

3. J. Garcia-Sucerquia, W. Xu, S. K. Jericho, P. Klages, M. H. Jericho, and H. J. Kreuzer, “Digital in-line holographic microscopy,” Appl. Opt. 45, 836–850 (2006).

4. F. Charrière, A. Marian, F. Montfort, J. Kuehn, T. Colomb, E. Cuche, P. Marquet, and C. Depeursinge, “Cell refractive index tomography by digital holographic microscopy,” Opt. Lett. 31, 178–180 (2006).

5. V. Mico, Z. Zalevsky, C. Ferreira, and J. García, “Superresolution digital holographic microscopy for three-dimensional samples,” Opt. Express 16, 19260–19270 (2008).

6. R. W. Gerchberg and W. O. Saxton, “A practical algorithm for the determination of phase from image and diffraction plane pictures,” Optik 35, 237–246 (1972).

7. J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21, 2758–2769 (1982).

8. M. R. Teague, “Deterministic phase retrieval: a Green’s function solution,” J. Opt. Soc. Am. 73, 1434–1441 (1983).

9. M. R. Teague, “Image formation in terms of the transport equation,” J. Opt. Soc. Am. A 2, 2019–2026 (1985).

10. A. Barty, K. A. Nugent, D. Paganin, and A. Roberts, “Quantitative optical phase microscopy,” Opt. Lett. 23, 817–819 (1998).

11. Y. Zhang, G. Pedrini, W. Osten, and H. Tiziani, “Whole optical wave field reconstruction from double or multi in-line holograms by phase retrieval algorithm,” Opt. Express 11, 3234–3241 (2003).

12. I. Yamaguchi and T. Zhang, “Phase-shifting digital holography,” Opt. Lett. 22, 1268–1270 (1997).

13. I. Yamaguchi, J. Kato, S. Ohta, and J. Mizuno, “Image formation in phase-shifting digital holography and applications to microscopy,” Appl. Opt. 40, 6177–6186 (2001).

14. J. N. Cederquist, J. R. Fienup, J. C. Marron, and R. G. Paxman, “Phase retrieval from experimental far-field speckle data,” Opt. Lett. 13, 619–621 (1988).

15. J. Miao, P. Charalambous, J. Kirz, and D. Sayre, “Extending the methodology of X-ray crystallography to allow imaging of micrometre-sized non-crystalline specimens,” Nature 400, 342–344 (1999).

16. F. Zhang, G. Pedrini, and W. Osten, “Phase retrieval of arbitrary complex-valued fields through aperture-plane modulation,” Phys. Rev. A 75, 043805 (2007).

17. H. M. L. Faulkner and J. M. Rodenburg, “Movable aperture lensless transmission microscopy: a novel phase retrieval algorithm,” Phys. Rev. Lett. 93, 023903 (2004).

18. J. M. Rodenburg and H. M. L. Faulkner, “A phase retrieval algorithm for shifting illumination,” Appl. Phys. Lett. 85, 4795–4797 (2004).

19. K. A. Nugent, A. G. Peele, H. N. Chapman, and A. P. Mancuso, “Unique phase recovery for nonperiodic objects,” Phys. Rev. Lett. 91, 203902 (2003).

20. C. A. Henderson, G. J. Williams, A. G. Peele, H. M. Quiney, and K. A. Nugent, “Astigmatic phase retrieval: an experimental demonstration,” Opt. Express 17, 11905–11915 (2009).

21. T. Alieva and J. A. Rodrigo, “Iterative phase retrieval from Wigner distribution projections,” in Signal Recovery and Synthesis, OSA Technical Digest (CD) (Optical Society of America, 2009), p. STuD2.

22. J. A. Rodrigo, H. Duadi, T. Alieva, and Z. Zalevsky, “Multi-stage phase retrieval algorithm based upon the gyrator transform,” Opt. Express 18, 1510–1520 (2010).

23. J. A. Rodrigo, T. Alieva, A. Cámara, O. Martínez-Matos, P. Cheben, and M. L. Calvo, “Characterization of holographically generated beams via phase retrieval based on Wigner distribution projections,” Opt. Express 19, 6064–6077 (2011).

24. L. J. Allen, M. P. Oxley, and D. Paganin, “Computational aberration correction for an arbitrary linear imaging system,” Phys. Rev. Lett. 87, 123902 (2001).

25. W. McBride, N. L. O’Leary, K. A. Nugent, and L. J. Allen, “Astigmatic electron diffraction imaging: a novel mode for structure determination,” Acta Crystallogr., Sect. A: Found. Crystallogr. 61, 321–324 (2005).

26. T. C. Petersen and V. J. Keast, “Astigmatic intensity equation for electron microscopy based phase retrieval,” Ultramicroscopy 107, 635–643 (2007).

27. L. J. Allen, H. M. L. Faulkner, K. A. Nugent, M. P. Oxley, and D. Paganin, “Phase retrieval from images in the presence of first-order vortices,” Phys. Rev. E 63, 037602 (2001).

28. D. Mendlovic, Z. Zalevsky, and N. Konforti, “Computation considerations and fast algorithms for calculating the diffraction integral,” J. Mod. Opt. 44, 407–414 (1997).

29. T. Shimobaba, T. Ito, N. Masuda, Y. Abe, Y. Ichihashi, H. Nakayama, N. Takada, A. Shiraki, and T. Sugie, “Numerical calculation library for diffraction integrals using the graphic processing unit: the GPU-based wave optics library,” J. Opt. A, Pure Appl. Opt. 10, 075308 (2008).

30. L. Granero, V. Micó, Z. Zalevsky, and J. García, “Synthetic aperture superresolved microscopy in digital lensless Fourier holography by time and angular multiplexing of the object information,” Appl. Opt. 49, 845–857 (2010).

31. Y. Sung, W. Choi, C. Fang-Yen, K. Badizadegan, R. R. Dasari, and M. S. Feld, “Optical diffraction tomography for high resolution live cell imaging,” Opt. Express 17, 266–277 (2009).

32. M. A. Herráez, D. R. Burton, M. J. Lalor, and M. A. Gdeisat, “Fast two-dimensional phase-unwrapping algorithm based on sorting by reliability following a noncontinuous path,” Appl. Opt. 41, 7437–7444 (2002).

33. F. C. Cheong, B. J. Krishnatreya, and D. G. Grier, “Strategies for three-dimensional particle tracking with holographic video microscopy,” Opt. Express 18, 13563–13573 (2010).

34. T. J. McIntyre, C. Maurer, S. Fassl, S. Khan, S. Bernet, and M. Ritsch-Marte, “Quantitative SLM-based differential interference contrast imaging,” Opt. Express 18, 14063–14078 (2010).

35. X. Ma, J. Q. Lu, R. S. Brock, K. M. Jacobs, P. Yang, and X.-H. Hu, “Determination of complex refractive index of polystyrene microspheres from 370 to 1610 nm,” Phys. Med. Biol. 48, 4165 (2003).

36. I. D. Nikolov and C. D. Ivanov, “Optical plastic refractive measurements in the visible and the near-infrared regions,” Appl. Opt. 39, 2067–2070 (2000).
