
Lensless digital holography with diffuse illumination through a pseudo-random phase mask

Open Access

Abstract

Microscopic imaging with a setup consisting of a pseudo-random phase mask and an open CMOS camera, but without an imaging objective, is demonstrated. The pseudo-random phase mask acts as a diffuser for an incoming laser beam, scattering a speckle pattern onto a CMOS chip, where it is recorded once as a reference. A sample which is afterwards inserted somewhere in the optical beam path changes the speckle pattern. A single (non-iterative) image processing step, comparing the modified speckle pattern with the previously recorded one, generates a sharp image of the sample. After a first calibration the method works in real time and allows quantitative imaging of complex (amplitude and phase) samples in an extended three-dimensional volume. Since no lenses are used, the method is free from lens aberrations. Compared to standard inline holography, the diffuse sample illumination improves the axial sectioning capability by increasing the effective numerical aperture in the illumination path, and it suppresses the undesired twin images. For demonstration, a high resolution spatial light modulator (SLM) is programmed to act as the pseudo-random phase mask. We show experimental results, imaging microscopic biological samples, such as insects, within an extended volume at a distance of 15 cm with a transverse and longitudinal resolution of about 60 μm and 400 μm, respectively.

© 2011 Optical Society of America

1. Introduction

Inline holography, invented by Gabor around 1948 [1, 2], is a holographic method where object and reference beams are not spatially separated, i.e. the reference beam consists of the zero-order Fourier component (or carrier wave) of the signal beam. The method has several advantages: it is very stable against vibrations or phase fluctuations, since the two beam components travel along the same path, and the longitudinal coherence length of the illumination light can be very low. The first optical holograms were in fact recorded with light from a mercury arc lamp. Since the development of high resolution digital cameras, digital Gabor holography has also been established as an alternative method in optical microscopy [3].

On the other hand, a disadvantage of the method is that a “twin image” appears, which is an inverted (and phase conjugate) copy of the original image and often degrades the quality of the reconstructed image. Furthermore, the plane wave illumination produces sharp “shadows” of all objects in the sample volume simultaneously, such that the reconstructed image of a transverse sample plane (“optical sectioning”) is disturbed by sharply reconstructed objects from other planes.

Here we demonstrate a modified concept of digital inline holography which avoids these disadvantages. In our approach the sample is illuminated with diffuse light, which has, however, a known phase distribution. The sample image is calculated from its scattered far-field (or Fresnel-regime) intensity distribution by numerically comparing the scattered speckle pattern with a previously recorded reference pattern.

In our experiment the sample is located between a high resolution pseudo-random phase mask and a CMOS camera chip. Since the phase of each pixel of the phase mask is known, the complex amplitude of the diffracted speckle pattern (without the sample) can be calculated in the plane of the image sensor by numerical Fresnel propagation. If a sample is then placed in the beam path, the diffracted speckle pattern changes accordingly. Since the phase of the undisturbed original speckle pattern is known (by numerical wave propagation of the illumination beam through the pseudo-random phase mask into the camera plane) a sharp image of the sample can be calculated in a single processing step, consisting mainly of a fast two-dimensional Fourier transform. The reconstructed image field can afterwards be propagated numerically to any transverse plane in the optical beam path, thus providing three-dimensional image information by numerical post-processing of the recorded speckle pattern.

One main advantage of using an illumination field scattered from a random phase mask, instead of a plane wave as in inline holography, is an increase of the longitudinal resolution or sectioning capability: the effective numerical aperture (NA), given by the sum of the illumination and imaging NAs, is approximately doubled compared to plane wave illumination. Since the longitudinal resolution scales with 1/NA², this effect is particularly significant. We demonstrate this by simultaneously imaging two millimeter-sized samples with a longitudinal separation of 5 cm in a single speckle pattern, showing that both samples can be reconstructed independently. Furthermore, the images can be reconstructed without the appearance of disturbing twin images. Due to the diffuse reference beam, only the desired first diffraction order is sharply reconstructed, whereas all other orders (including the minus first order, which is responsible for the twin image) are dispersed into a uniform background [4, 5].

The concept of using a random phase mask in the optical beam path for improving phase retrieval methods has been proposed earlier, using a specially manufactured phase plate [6] or a spatial light modulator [7]. In these publications the main advantages of using a random phase plate are described from another point of view, mainly as spreading the image information over a larger frequency bandwidth, and therefore the method is called spread-spectrum phase retrieval. Although this method is closely related to our approach, sharing the common advantages of the diffuse image wave, the optical implementation and the numerical image reconstruction methods are different; e.g. in our approach the image is reconstructed from a single recorded speckle pattern by quasi-interferometric comparison with a previously recorded reference speckle pattern.

For numerical reconstruction of a digital inline hologram the full complex amplitude of the reference wave has to be known. This is no problem in standard inline holography using, for example, plane wave illumination, since the reference wave (being the zero diffraction order of the transmitted light) just corresponds to the illumination wave, and thus has a uniform amplitude and constant phase. However, in our more general case of a speckle illumination field, both amplitude and phase of the reference wave first have to be determined. This could be done, for example, by a preliminary interferometric measurement, in the camera plane, of the complex amplitude of the undisturbed speckle pattern before the sample is inserted. This approach would even work with a standard scattering plate (e.g. ground glass) as phase mask. However, for demonstration purposes we use a high resolution spatial light modulator, which can display a pseudo-random (i.e. known) phase pattern in the range between 0 and 2π with a resolution of 8 μm (pixel size). From the known phase distribution in the SLM plane, the complex amplitude of the diffracted field in the camera plane is calculated, which corresponds to the desired reference wave. In order to map position and size of the numerically calculated speckle pattern in the camera plane onto the actually recorded images, a preliminary image registration step is required, which, however, can also be performed straightforwardly by using the programmability of the SLM, as will be shown later. These calibration steps have to be done only once for a given setup, and from then on imaging can be done in real time, i.e. from each recorded camera frame the light field in the entire volume between SLM and camera plane can be reconstructed.

2. Experimental approach

Figure 1 summarizes the steps required to record and process a series of images. The upper line (steps 1 and 2) shows the preliminary tasks which have to be done only a single time before (or after) a measurement. The first step is the image registration, where a test hologram displayed at the SLM is sharply reconstructed in the camera plane. Spatial mapping of the experimentally recorded and the theoretically expected images provides a means to map any phase mask displayed at the SLM to a corresponding pixel image recorded by the camera. In step 2 a pseudo random phase mask (with known phase distribution) is displayed at the SLM, and the scattered speckle pattern is recorded by the camera as a reference for further measurements. The phase of each pixel of the recorded image field can be calculated based on the knowledge of the phase mask displayed at the SLM, using the previously obtained image registration information for correctly overlapping the calculated phase with the experimentally recorded speckle image. The next two tasks describe the actual imaging process. In step 3 a sample is placed in the optical path between the SLM (displaying still the same random phase mask as in step 2) and the camera, and an image (or a series of images) of the correspondingly changed speckle pattern is recorded and stored. Finally, numerical data processing (step 4) of the recorded images is carried out by subtracting the intensities of the reference from the image speckle patterns and assigning the calculated phase (step 2) of the reference speckle pattern to the difference. The corresponding complex field corresponds to a hologram of the sample and can be sharply reconstructed in different axial planes by standard numerical back-propagation methods - in our case we use the “spectrum of plane waves method” (with the corresponding propagation operator indicated in the figure).
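
Step 4 relies on numerical propagation of the wave field between the SLM and camera planes. As an illustration of the “spectrum of plane waves” method named there, the following is a minimal NumPy sketch; the function name, the sampling assumptions and the suppression of evanescent waves are ours, not taken from the paper:

    import numpy as np

    def angular_spectrum_propagate(field, wavelength, pitch, z):
        # Propagate a sampled complex field over the distance z (meters) with
        # the "spectrum of plane waves" method: FFT, multiplication with the
        # free-space transfer function, inverse FFT. Negative z back-propagates.
        ny, nx = field.shape
        fx = np.fft.fftfreq(nx, d=pitch)          # spatial frequencies [1/m]
        fy = np.fft.fftfreq(ny, d=pitch)
        FX, FY = np.meshgrid(fx, fy)
        arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
        kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
        H = np.exp(1j * kz * z) * (arg > 0)       # evanescent waves suppressed
        return np.fft.ifft2(np.fft.fft2(field) * H)

With a pseudo-random mask phi_slm displayed at the SLM, the reference speckle field in the camera plane would then be angular_spectrum_propagate(np.exp(1j * phi_slm), 633e-9, 8e-6, 0.15), whose complex angle gives the reference phase used in step 4.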


Fig. 1 Principle of lensless imaging. Step 1: Image registration: a test hologram displayed at the SLM is illuminated with a slightly divergent laser beam and reconstructed in the camera plane. Mapping the experimentally recorded image onto the numerically reconstructed one allows one to derive a transformation matrix which is afterwards applied to all other experimentally recorded images. Step 2: A pseudo-random phase pattern (with transmission function T_R) is displayed at the SLM. The corresponding speckle pattern generated in the camera plane (R) is recorded as a reference. The corresponding complex phase angle (Φ_R) of the speckle pattern is numerically computed and mapped pixel by pixel onto the experimental speckle intensity image by using the previously determined spatial transformation matrix. Step 3: Actual imaging procedure: samples in the optical beam path generate a series of new speckle patterns (S_i) which are recorded by the camera. Step 4: Image reconstruction: the complex amplitude of the object’s wave field in the camera plane is obtained by numerical subtraction of the (normalized) image and reference intensity images, (S_i − R)/√((S_i + R)/2), and by assigning the complex phase Φ_R of the reference speckle pattern to the difference. Then the wavefront is numerically propagated to different axial planes between the SLM and the camera (the so-called “spectrum of plane waves” propagation operator indicated in Fig. 1), in order to reconstruct sharp images of the sample.


In more detail, the image reconstruction procedure can be understood as follows: first a pseudo-random mask with a transmission function T_R(x_S, y_S) = exp(iΦ_SLM(x_S, y_S)) is displayed in the SLM plane (x_S, y_S), consisting of uniformly distributed phase levels in an interval between 0 and 2π. The correspondingly diffracted speckle image R(x, y) in the camera plane (x, y) is recorded as a reference. Thus R = |F{exp(iΦ_SLM)}|², where the operator F{...} denotes Fresnel (or Fourier) propagation of the wave field from the SLM to the camera plane.

In the next step we assume that a sample object is inserted in the SLM plane (the calculation for other positions is straightforward by propagating the field with the respective propagators). Furthermore it is assumed that the complex transmission function of the sample, O(x_S, y_S) = 1 + ΔO(x_S, y_S) with |ΔO(x_S, y_S)| ≪ 1, is close to unity, i.e. the sample is only a small disturbance for the transmitted speckle field. Then the complete wave field behind the SLM and the object becomes T_R O = T_R + ΔO T_R = exp(iΦ_SLM) + ΔO exp(iΦ_SLM). In this case the image intensity S(x, y) in the camera plane becomes S = F{T_R + ΔO T_R} F*{T_R + ΔO T_R}, where the “*” symbol denotes the complex conjugate.

Using the linearity of the Fourier transform this can be expanded to

$$S = F\{T_R\}\,F^*\{T_R\} + F\{T_R\}\,F^*\{\Delta O\, T_R\} + F^*\{T_R\}\,F\{\Delta O\, T_R\} + F\{\Delta O\, T_R\}\,F^*\{\Delta O\, T_R\}. \tag{1}$$

The first term in the sum corresponds to the intensity R of the undisturbed speckle image, whereas the last term can be neglected, considering that |ΔO| ≪ 1. Under this assumption the difference between the two speckle images with and without inserted sample object becomes

$$S - R \approx F\{T_R\}\,F^*\{\Delta O\, T_R\} + F^*\{T_R\}\,F\{\Delta O\, T_R\}. \tag{2}$$

Therefore, in order to obtain the desired complex transmission function of the object, we have to calculate:

$$F^{-1}\{(S-R)/F^*\{T_R\}\}/T_R = \Delta O + F^{-1}\left\{(F\{T_R\}/F^*\{T_R\})\,F^*\{\Delta O\, T_R\}\right\}/T_R, \tag{3}$$
where the operator F⁻¹ denotes the inverse Fresnel transform (from the camera plane to the SLM plane). Note that the second term on the right side contains the ratio F{T_R}/F*{T_R} as part of the argument of the inverse Fresnel transform, corresponding to a randomly distributed speckle field. Therefore the inverse Fresnel transform of this term (even after multiplication with the additional factor F*{ΔO T_R}) also results in a random speckle field, which is distributed uniformly in the SLM (= object) plane, and can be regarded as “speckle noise” with a total (integrated) intensity which corresponds to the intensity of the reconstructed object. However, since the object is localized and the speckle field is homogeneously distributed across the whole object plane, the object can still be reconstructed with a high signal-to-noise ratio against a diluted background. Thus one finally obtains
$$\Delta O = F^{-1}\{(S-R)/F^*\{T_R\}\}/T_R + \text{speckle noise}. \tag{4}$$

The result of this calculation is a complex “dark field” image of the sample (i.e. only ΔO, not O, is obtained), where both amplitude and complex phase of the object are reconstructed. This is possible since the term F*{T_R} = √R exp(−iΦ_R) is known, i.e. R is measured as the intensity distribution of the reference image in the camera plane, and the corresponding phase Φ_R is numerically calculated from the known transmission function T_R of the pseudo-random phase mask. In the experiment it is advantageous to reduce artifacts due to the division by small absolute values of F*{T_R} by approximating F*{exp(iΦ_SLM)} ≈ √((R+S)/2) exp(−iΦ_R). In this case one gets approximately:

$$\Delta O = F^{-1}\!\left\{\frac{S-R}{\sqrt{(S+R)/2}}\,\exp(i\Phi_R)\right\}\exp(-i\Phi_{SLM}). \tag{5}$$
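
As a consistency check of Eqs. (1)–(5), the full recording and reconstruction chain can be simulated numerically. The following sketch reuses angular_spectrum_propagate() from the earlier sketch; the grid size, random seed, test object and the small regularization constant are our assumptions:

    import numpy as np
    # assumes angular_spectrum_propagate() from the earlier sketch

    N, lam, pitch, z = 1024, 633e-9, 8e-6, 0.15
    rng = np.random.default_rng(0)
    phi_slm = rng.uniform(0, 2 * np.pi, (N, N))      # known pseudo-random mask
    T_R = np.exp(1j * phi_slm)

    U_R = angular_spectrum_propagate(T_R, lam, pitch, z)   # reference speckle field
    R, phi_R = np.abs(U_R) ** 2, np.angle(U_R)             # R measured, phi_R computed

    dO = np.zeros((N, N), dtype=complex)             # weak object, |dO| << 1
    dO[480:544, 480:544] = -0.1                      # small absorbing patch
    S = np.abs(angular_spectrum_propagate(T_R * (1 + dO), lam, pitch, z)) ** 2

    # Eq. (5): normalized difference, reference phase attached, back-propagation
    holo = (S - R) / (np.sqrt((S + R) / 2) + 1e-9) * np.exp(1j * phi_R)
    rec = angular_spectrum_propagate(holo, lam, pitch, -z) * np.exp(-1j * phi_slm)
    # |rec| shows the patch on a diluted speckle background ("dark field" image)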

We now describe the individual steps taken in a demonstration experiment. The optical arrangement is shown in Fig. 2: a continuous-wave helium-neon laser (10 mW power at a wavelength of 633 nm, with a bandwidth on the order of 1 GHz, lasing in the TEM00 mode) is used for illumination. The beam is expanded by a set of lenses (not shown) and illuminates the surface of a reflective SLM (Holoeye HEO 1080P) with a slightly divergent beam through a non-polarizing beamsplitter cube. Directly behind the laser the linear beam polarization direction is optimized for SLM incidence by a half-wave plate (not shown), such that the SLM acts as an almost pure phase modulator, affecting the diffracted polarization only negligibly. The SLM has a resolution of 1920 × 1080 pixels, each with a square shape and an edge length of 8 μm. The SLM is connected to the digital graphics card output of a computer and displays a copy of the actual computer screen image. The gray values of each pixel at the computer monitor are converted into refractive index variations of the liquid crystals at the corresponding SLM pixels, such that 256 (8-bit) phase levels within a range between 0 and 2π can be displayed by each SLM pixel. Only a square region in the center of the SLM surface, consisting of 1024 × 1024 pixels (corresponding to an area of approximately 8 × 8 mm²), is used in the experiment; the remaining area is shielded by a square aperture of black cardboard. The light diffracted off the SLM surface passes through the beamsplitter cube and is reflected by a mirror to the chip of a CMOS camera (Canon EOS 1000D) at a distance of approximately 15 cm from the SLM. The camera chip has a size of 22.2 × 14.8 mm² and a resolution of 3888 × 2592 color pixels. The distance between the SLM and the camera is chosen such that all first-order diffracted light from the SLM reaches the CMOS chip surface. The CMOS camera is connected via a USB cable to a computer for remote control. For adjustment, the camera is operated in video mode, showing a real-time image of the light intensity at the CMOS chip on the computer monitor. For image recording, however, the camera operates in full resolution mode, recording the speckle images in uncompressed raw format. Due to the red laser illumination, only the red channel of the RGB image data is used for further data processing.


Fig. 2 Experimental setup. For a detailed explanation see the text.


For calibration (step 1 in Fig. 1) a numerically calculated Fresnel hologram of a test pattern is displayed at the SLM, and the correspondingly diffracted image is recorded by the camera. An example of such a recorded test pattern is shown in Fig. 3. First the on-axis phase hologram of the test pattern, containing some number-labeled cross-lines used for image registration, is calculated as a far-field (Fourier) hologram using an iterative Fourier transform algorithm [8]. Then it is transformed into a Fresnel hologram which reconstructs sharply at a distance of 15 cm behind the SLM screen by multiplying the Fourier hologram pixel by pixel with a parabolic lens term, namely exp(iπr²/(λz)), where r is the radius measured from the center of the SLM, λ is the light wavelength (633 nm) and z is the desired distance (15 cm) at which the hologram should be sharply reconstructed. Due to the offset divergence of the incoming laser beam, the actual reconstruction distance is slightly larger than the programmed Fresnel distance of 15 cm, and the camera is positioned in the experimentally determined sharp image plane. The advantage of the Fresnel setup is that the zeroth diffraction order of the hologram, i.e. the merely reflected component of the light, which due to the limited diffraction efficiency of the SLM still amounts to about 5% of the total image intensity, does not focus to a point in the image plane (as in a Fourier hologram), but is instead distributed over the CMOS chip surface [9]. Due to this “intensity dilution” its intensity is much weaker than that of the sharply reconstructed image structures and thus it can be neglected during post-processing of the recorded images.
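
For a phase-only hologram, the Fourier-to-Fresnel conversion described above amounts to adding the parabolic lens phase πr²/(λz) modulo 2π, which is equivalent to the pixelwise multiplication with exp(iπr²/(λz)). A minimal sketch (function name and default values are our assumptions):

    import numpy as np

    def to_fresnel_hologram(phi_fourier, wavelength=633e-9, pitch=8e-6, z=0.15):
        # Add the parabolic lens phase pi*r^2/(lambda*z), wrapped to [0, 2*pi),
        # so that the hologram reconstructs sharply at the distance z.
        n = phi_fourier.shape[0]                   # square SLM region (e.g. 1024 px)
        x = (np.arange(n) - n / 2) * pitch         # coordinates from the SLM center
        X, Y = np.meshgrid(x, x)
        lens = np.pi * (X ** 2 + Y ** 2) / (wavelength * z)
        return np.mod(phi_fourier + lens, 2 * np.pi)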


Fig. 3 Reconstructed on-axis Fresnel hologram of a test pattern recorded by a CMOS camera. The image is used for geometric mapping (image registration) of the experimentally projected holograms with the numerically reconstructed images. The corresponding phase mask displayed at the SLM was calculated with an iterative Fourier transform algorithm as a phase-only on-axis Fresnel hologram with a reconstruction distance of 15 cm. The imaging distance of 15 cm was chosen, since in this case the diffracted test pattern has approximately the same size as the camera chip. The dashed square (which is not part of the test hologram) was included afterwards to indicate the boundaries of the programmed hologram. The reconstructed image parts around the dashed square are reproductions of the inner image, which appear in higher diffraction orders due to the SLM pixelation.


The main purpose of the test pattern is to map the positions in the experimentally recorded image onto those in the numerically reconstructed image in the computer. This is done by propagating the known phase pattern displayed at the SLM with a Fresnel propagator into the camera plane at a distance of 15 cm, and comparing the position of this numerically reconstructed image with the experimentally recorded one. In our data processing software MATLAB, an interactive algorithm performs the required procedure of so-called “image registration” by selecting a set of equivalent test points in the theoretically and experimentally reconstructed images. From this set of corresponding points a transformation matrix is calculated and stored. This transformation matrix is then applied to all further experimentally recorded images and has the effect of adapting their size, orientation and possible geometric distortion, such that afterwards the position of each experimentally recorded image pixel exactly corresponds to that of its numerical reconstruction. Although this procedure maps only the theoretically expected intensity images onto the experimentally recorded ones, one can also calculate the phase of each image pixel in the camera plane from the numerical reconstruction of the SLM pattern.
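
As an illustration of how a transformation matrix can be derived from such point pairs, the following is a least-squares affine fit in NumPy; the interactive MATLAB routine used in the paper may fit a more general transform, so this is only a sketch of the idea:

    import numpy as np

    def fit_affine(src, dst):
        # src, dst: (N, 2) arrays of matched control points, N >= 3.
        # Solves A @ M = dst in the least-squares sense for the 3x2 matrix M.
        A = np.hstack([src, np.ones((len(src), 1))])     # rows [x, y, 1]
        M, *_ = np.linalg.lstsq(A, dst, rcond=None)
        return M

    def apply_transform(M, pts):
        # Map points through the fitted affine transform.
        return np.hstack([pts, np.ones((len(pts), 1))]) @ M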

The next step is to display a pseudo-random phase mask at the SLM, with each pixel having a randomly chosen phase in the interval between 0 and 2π (step 2 in Fig. 1). In this case the SLM phase mask acts as an almost ideal scatterer which produces a two-dimensional diffuse speckle pattern at the camera. This speckle pattern is then stored as a reference for all further measurements. Note that due to the knowledge of the phase mask displayed at the SLM, also the phase of each pixel in the camera plane can be calculated.

After these preliminary steps, which have to be done only a single time before the measurements, a small sample object is inserted somewhere into the beam path, while the SLM is still displaying the same pseudo-random phase pattern as before (step 3 in Fig. 1). The presence of the sample changes the speckle pattern recorded by the camera.

In a first experiment we placed a “fly” with a size of approximately 2 × 2 mm² directly at the surface of the SLM. The insect acts as a mixed amplitude and phase sample, since it contains both transparent (wings) and absorptive (body) parts.

The numerical reconstruction of the sample (indicated in Fig. 1, step 4) is performed according to Eq. (5). Figures 4(a) and 4(b) show the recorded speckle pattern without and with the sample in the optical path, respectively. The absolute value of the difference between the speckle patterns with (S) and without (R) the inserted sample, normalized by √((S+R)/2), is shown in Fig. 4(c). Note that the difference between the images of (a) and (b) represents an array that contains positive and negative values. This array is then multiplied pixel by pixel with exp(iΦ_R) (which is calculated by numerically propagating the SLM phase pattern into the camera plane, see step 4 in Fig. 1). Afterwards the resulting complex number array is back-propagated into the SLM plane by inverting the Fresnel operation used before for the calculation of the camera image from the SLM phase mask, and divided by the phase term exp(iΦ_SLM), thus removing the offset phase of the illumination light. The squared absolute value of this back-propagated image corresponds to a sharp intensity image of the sample in the SLM plane, in this case an image of the fly placed on top of the SLM (shown in (d)). The phase of the calculated complex amplitude (shown in (e)) corresponds to the phase of the sample object, i.e. it is a quantitative measure for the optical thickness of the object.


Fig. 4 a: Speckle pattern without sample in the beam path. b: Speckle pattern after insertion of a sample on top of the SLM surface. c: Intensity difference between speckle patterns with (S) and without (R) the inserted sample, normalized by √((S+R)/2). d: Numerically reconstructed intensity image of a “fly” placed on the SLM surface. e: Numerically reconstructed complex phase image.


To demonstrate that imaging is possible in an extended three-dimensional volume, the “fly” was placed on top of the SLM surface, and a second sample, namely an ant, on top of the beamsplitter cube, such that both specimens were in the optical beam path between SLM and camera, with a relative distance of approximately 5 cm. The recorded speckle pattern was processed as described before, and the resulting complex wave field was numerically propagated to different axial positions between the SLM and the camera plane.

Figure 5(a) shows the result of the reconstruction in the SLM plane, corresponding to a sharp image of the “fly” located there. After numerical refocussing by a distance of 2.5 cm, corresponding approximately to the middle axial position between the two insects, the whole image is blurred (b). Finally, after a further propagation of 2.5 cm, the image of the “fly” has completely vanished and a sharp image of the ant placed on top of the beamsplitter cube is reconstructed (c). For comparison with standard inline holography the whole imaging sequence was repeated in (d)–(f) for plane wave illumination (by displaying just a plane phase front at the SLM). Although non-overlapping image parts of fly and ant also appear sharply in the corresponding focal planes, the overlapping parts of the images disturb each other, such that the two imaged objects cannot be identified. The fact that the diffuse illumination in (a)–(c) allows one to discriminate between the two axially separated objects is due to the corresponding increase in the effective numerical aperture of the illumination beam.


Fig. 5 Comparison of axial sectioning for diffuse and for plane wave illumination. Two specimens, namely a “fly” on top of the SLM surface and an ant on top of the beamsplitter cube at a distance of 5 cm from the SLM surface, were inserted in the optical path. The sequences (a)–(c) and (d)–(f) show the results of the image reconstruction process at corresponding focal planes for diffuse and plane wave illumination, respectively. (a) and (d): numerical image reconstruction in the SLM plane. (b) and (e): numerically refocussed image at a distance of 2.5 cm from the SLM surface. (c) and (f): numerically refocussed image at a distance of 5 cm from the SLM surface (corresponding to the top surface of the beamsplitter cube). All images were reconstructed from the same experimentally recorded speckle pattern. A continuous version demonstrating the effect of numerical focus tuning is shown in the included movie (Media 1, 3.6 MB). Obviously, the diffuse illumination enables independent imaging of the axially separated samples, whereas plane wave illumination results in an overlap of the two images in all numerically refocussed planes.


An MPG movie (Media 1, 3.6 MB), which gives a better impression of the effects of numerical refocussing, is enclosed. The movie consists of a series of two-dimensional wave field reconstructions in continuously changing axial planes, starting near the SLM plane and moving to the surface plane of the beamsplitter cube. Note that the whole movie is generated by numerically processing only a single recorded speckle pattern. This suggests applying the method to the three-dimensional imaging of dynamic processes, since the changing speckle patterns can be recorded at rates that are practically only limited by the maximal recording speed of the image sensor. The three-dimensional information of the field in the whole optical beam path can afterwards be extracted by numerical post-processing.
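
Producing such a focus sweep only requires back-propagating the single hologram to a stack of planes. A sketch, assuming the complex camera-plane hologram holo and angular_spectrum_propagate() from the earlier sketches:

    import numpy as np
    # assumes 'holo' and angular_spectrum_propagate() from the earlier sketches

    z_planes = np.linspace(0.0, 0.15, 100)       # camera plane back toward the SLM
    stack = [np.abs(angular_spectrum_propagate(holo, 633e-9, 8e-6, -z)) ** 2
             for z in z_planes]
    # writing the frames of 'stack' out one by one reproduces the focus-tuning movie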

In order to estimate the transverse and axial resolution of our imaging setup we performed another series of experiments with test objects placed on top of the beamsplitter cube. Inserting a metal-coated transmissive USAF resolution target at this position results in the reconstructed image shown in Fig. 6(a). The image quality is in this case strongly reduced by speckle noise, which is due to the considerable thickness (2 mm) of the glass target, which changes the phase of the reference speckle pattern in the camera plane with respect to its numerically calculated distribution (the basis of our image reconstruction). Additionally, the resolution target does not satisfy the condition used in the derivation of Eq. (1) of being only a small disturbance to the illumination wave, which also reduces the image quality. Nevertheless a resolution of at least 63 μm is obtained, and it may be expected that this would improve considerably for a better suited sample object.


Fig. 6 a: Reconstructed image of a resolution target, showing a lateral resolution on the order of 50 μm. b, c: Crossed hairs, axially separated by 400 μm, alternately in focus.


In order to estimate the axial resolution we placed a “crosshair” sample on top of the beamsplitter cube, consisting of two clamped, crossed human hairs with a relative distance of 400 μm. Figures 6(b) and 6(c) show two reconstructed images of the sample, numerically focussed sharply in the planes of the horizontal (b) and vertical (c) hairs. A comparison of the two figures shows that the focal planes can be clearly distinguished, i.e. in (b) the horizontal hair is clearly sharper than the vertical one, whereas in (c) the situation is reversed. At a separation of 2 mm, the peak intensity of the blurred hair is about half of the peak intensity of the one in focus. For thinner objects the difference would be even more distinct, which is why we estimate the achieved axial resolution to be better than 2 mm.

Theoretically it is expected that the transverse image resolution d is limited by the pixel size of the SLM (p = 8 μm). The theoretical limit d ≈ λ/(2NA) (the factor 2 arises because the imaging and illumination NAs are approximately equal and add) is determined by the numerical aperture NA of the imaging arrangement, which is given by the distance z between camera and SLM and the size L of the camera chip, i.e. NA ≈ L/(2z). In the initial alignment a test hologram was programmed such that it used the full resolution of the SLM to diffract a test pattern to the camera, which just filled the camera chip. Since the maximal diffraction angle α of the SLM is limited by its minimal grating constant, corresponding to two pixel diameters p, namely sin(α) = λ/(2p), the maximal image size in the camera plane, which also corresponds to the size of the camera chip L, is given by L = 2z sin(α) = zλ/p. Comparing this with the resolution limit d, we find that d ≈ p, as expected. Note that this does not change considerably when shifting the sample to another axial position, since a shift which, e.g., increases the imaging NA simultaneously decreases the illumination NA, such that the total NA, given by the sum of the two, remains approximately equal (if the sample is in the middle between the SLM and the camera).
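
The numbers behind this estimate can be checked directly; a small sketch with the values quoted in the text:

    # wavelength, SLM pixel pitch, SLM-camera distance (values from the text)
    lam, p, z = 633e-9, 8e-6, 0.15
    L = z * lam / p            # maximal first-order image size: ~11.9 mm
    NA = L / (2 * z)           # one-sided numerical aperture: ~0.040
    d = lam / (2 * NA)         # transverse resolution limit: ~8 um, i.e. d ~ p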

The axial resolution is expected to be on the order of the Rayleigh range of the setup, given by 4λ/NA² ≈ 0.6 mm, which is close to our experimentally estimated value of < 2 mm.

The sharply reconstructed images are surrounded by a speckled background due to the simplified numerical reconstruction process, which corresponds to a numerical hologram reconstruction and thus produces in principle an undesired phase conjugate diffraction order (the twin image) with the same intensity as the reconstructed image. However, the image intensity which in standard inline holography is localized in the twin image is in our case distributed diffusely over the whole image plane as a diluted background speckle pattern. In principle, this can be avoided by more extensive numerical processing with so-called phase retrieval methods [10–12], which iteratively calculate a complex wave field from its known (or measured) amplitude and/or phase distributions in two different transverse planes, the so-called boundary conditions. This seems to be very well suited to our situation, since the known phase distribution of the pseudo-random phase mask corresponds to a boundary condition with a very high and detailed information content, which should allow an accurate and fast convergence of the phase retrieval algorithm. Thus, our straightforward holographic reconstruction may be used as a first iteration step of a more extensive phase retrieval algorithm, which might enable background-free quantitative reconstruction of the samples. Nevertheless our simpler quasi-holographic reconstruction method has the advantage that its numerical effort consists only of a single two-dimensional Fourier transform, which can be calculated in real time (at video rate), as compared to elaborate phase retrieval methods which often use hundreds of iteration steps.
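
As an illustration of how such a refinement could look, here is a minimal error-reduction (Gerchberg-Saxton type) loop; the measured camera amplitude √S and the known illumination phase Φ_SLM serve as the two boundary conditions, and the passivity constraint |O| ≤ 1 is our added assumption for a transmissive sample, not a step described in the paper:

    import numpy as np
    # assumes angular_spectrum_propagate() from the earlier sketch

    def refine_object(obj0, S, phi_slm, lam=633e-9, pitch=8e-6, z=0.15, n_iter=100):
        # obj0: starting estimate of the object transmission O = 1 + dO,
        # e.g. the result of the single-step holographic reconstruction.
        obj, amp_cam = obj0.copy(), np.sqrt(S)
        for _ in range(n_iter):
            cam = angular_spectrum_propagate(obj * np.exp(1j * phi_slm), lam, pitch, z)
            cam = amp_cam * np.exp(1j * np.angle(cam))    # enforce measured amplitude
            obj = angular_spectrum_propagate(cam, lam, pitch, -z) * np.exp(-1j * phi_slm)
            obj = np.minimum(np.abs(obj), 1.0) * np.exp(1j * np.angle(obj))  # |O| <= 1
        return obj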

3. Conclusion

The demonstrated method of optical imaging without imaging optics can be advantageous due to its simplicity. In our approach it uses a low-cost consumer camera to produce images of complex samples in a widely extended volume, without the geometric image distortions which usually derive from lens aberrations. After a first calibration, consisting of image registration with a test pattern and recording of a reference speckle pattern without the sample, the method works at rates that are practically only limited by the acquisition speed of the image sensor, and each recorded frame contains the full information of the wave field in the volume between the diffusing mask and the camera, which can be recovered afterwards by numerical image processing. The currently used SLM makes the experiment still expensive, but in principle it can be replaced by a standard diffuser with a known phase profile, which can either be manufactured with the methods of diffractive optics, or can even be a standard (e.g. ground glass) diffuser which is measured interferometrically once before being employed for imaging.

Compared to on-axis holography with plane wave illumination, the diffuse illumination has the advantage of avoiding the twin-image problem, i.e. it produces no sharply imaged diffraction orders besides the desired image field. Furthermore, it approximately doubles the effective numerical aperture, which results in an increased axial resolution that allows independent imaging in different axial planes.

The method can also be adapted for the real-time detection of dynamic changes happening between the recording of two adjacent camera frames. This might be achieved by using each preceding camera frame as a reference for the next (instead of recording a first reference image without the inserted sample). In this case the described processing will just image the differences between two adjacent frames, for example highlighting the moving boundaries of a dynamic object.
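
A sketch of this frame-to-frame difference mode; reusing the numerically computed reference phase Φ_R is our assumption, valid as long as each frame remains a weak disturbance of the original reference speckle field:

    import numpy as np
    # assumes angular_spectrum_propagate() from the earlier sketch

    def difference_images(frames, phi_R, lam=633e-9, pitch=8e-6, z=0.15):
        # frames: sequence of recorded raw speckle images (hypothetical input).
        # Each preceding frame serves as the reference for the next, so the
        # reconstruction highlights only what changed between two exposures.
        out, prev = [], frames[0].astype(float)
        for f in frames[1:]:
            S = f.astype(float)
            holo = (S - prev) / (np.sqrt((S + prev) / 2) + 1e-9) * np.exp(1j * phi_R)
            out.append(np.abs(angular_spectrum_propagate(holo, lam, pitch, -z)))
            prev = S
        return out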

Acknowledgments

This work was supported by the Austrian Science Foundation (FWF) Project No. P19582-N20.

References and links

1. D. Gabor, “A new microscopic principle,” Nature 161, 777–778 (1948). [CrossRef]   [PubMed]  

2. D. Gabor, “Microscopy by reconstructed wave-fronts,” Proc. R. Soc. London Ser. A 197, 454–487 (1949). [CrossRef]  

3. I. Moon, M. Daneshpanah, A. Anand, and B. Javidi, “Cell identification with computational 3-D holographic microscopy,” Opt. Photon. News 22, 18–23 (2011). [CrossRef]  

4. T. Nomura and M. Imbe, “Single-exposure phase-shifting digital holography using a random-phase reference wave,” Opt. Lett. 35, 2281–2283 (2010). [CrossRef]   [PubMed]  

5. C. Maurer, A. Schwaighofer, A. Jesacher, S. Bernet, and M. Ritsch-Marte, “Suppression of undesired diffraction orders of binary phase holograms,” Appl. Opt. 47, 3994–3998 (2008). [CrossRef]   [PubMed]  

6. F. Zhang, G. Pedrini, and W. Osten, “Phase retrieval of arbitrary complex-valued fields through aperture-plane modulation,” Phys. Rev. A 75, 043805 (2007). [CrossRef]  

7. C. Kohler, F. Zhang, and W. Osten, “Characterization of a spatial light modulator and its application in phase retrieval,” Appl. Opt. 48, 4003–4008 (2009). [CrossRef]   [PubMed]  

8. An explanation of iterative Fourier transform algorithms can be found for example in: B. C. Kress and P. Meyrueis (Eds.) “Digital Diffractive Optics,” 1st ed. (John Wiley & Sons, 2000) ISBN-13: 978-0-471-98447-4.

9. A. Jesacher, S. Fürhapter, S. Bernet, and M. Ritsch-Marte, “Diffractive optical tweezers in the Fresnel regime,” Opt. Express 12, 2243–2250 (2004). [CrossRef]   [PubMed]  

10. J. R. Fienup, “Reconstruction of an object from the modulus of its Fourier transform,” Opt. Lett. 3, 27–29 (1978). [CrossRef]   [PubMed]  

11. J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21, 2758–2769 (1982). [CrossRef]   [PubMed]  

12. J. N. Cederquist, J. R. Fienup, J. C. Marron, and R. G. Paxman, “Phase retrieval from experimental far-field speckle data,” Opt. Lett. 13, 619–621 (1988). [CrossRef]   [PubMed]

Supplementary Material (1)

Media 1: MPG (3375 KB)     
