
Surpassing digital holography limits by lensless object scanning holography

Open Access

Abstract

We present lensless object scanning holography (LOSH), a fully lensless method capable of improving image quality in reflective digital Fourier holography by means of an extremely simplified experimental setup. LOSH is based on the recording and digital post-processing of a set of digital lensless holograms and results in a synthetic image with improved resolution, field of view (FOV), signal-to-noise ratio (SNR), and depth of field (DOF). The superresolution (SR) effect arises from the generation of a synthetic aperture (SA) based on the linear movement of the inspected object. The same scanning principle enlarges the object FOV. SNR enhancement is achieved by speckle suppression and averaging of coherent artifacts due to the coherent addition of the multiple partially overlapping bandpass images. DOF extension is obtained by digital refocusing to different object sections. Experimental results showing an impressive image quality improvement are reported for a one-dimensional reflective resolution test target.

©2012 Optical Society of America

1. Introduction

Digital holography (DH) is based on the same holographic principles as classical holography, but the holographic plate recording medium is replaced by a solid-state image recording device (typically a CCD or CMOS camera) composed of a matrix of light-sensitive elements [1,2]. DH has the great advantage derived from the flexibility and versatility of the numerical reconstruction provided by computers. However, DH suffers from the drawbacks inherent to electronic devices while maintaining the characteristics of coherent illumination. The use of digital cameras imposes a limitation on both the spatial resolution and the field of view (FOV) of the reconstructed image due to the limited number of pixels and the pixel size of the electronic imaging device. In addition, the use of coherent illumination generates noise (mainly due to speckle and coherent artifacts) which also degrades the spatial resolution, the signal-to-noise ratio (SNR), and the phase accuracy of the reconstructed images [3]. For those reasons, many attempts have been made to overcome these drawbacks since the first implementations of DH [4,5].

As for the FOV limitation, scanning a larger hologram using the matrix detector is the most widely used approach for improving performance in DH. By scanning we mean a relative motion between the object and the digital camera, and it implies the generation of an expanded hologram containing a bigger portion of the diffracted object wavefront than in the non-scanning case. The generation of that expanded hologram is commonly known as aperture synthesis and can be implemented using, in essence, four different strategies [6-28]. The first one deals with the displacement of the camera at the recording plane [6-11], thus synthesizing a larger hologram in comparison with the one provided by a single camera position. The second performs synthetic aperture (SA) generation by angular multiplexing of the spectral object information [12-16]. Angular multiplexing is implemented by tilted beam illumination over the object in a similar way as in digital holographic microscopy [17-21] and allows the recovery of additional spatial frequencies falling outside the digital camera sensitive area when on-axis illumination is used. The third option is related with the use of additional optical elements (typically diffraction gratings) that are placed between the object and the digital camera [22-26]. The diffraction grating directs toward the camera an additional portion of the diffracted object wavefront, allowing the generation of the synthetic aperture hologram. And finally, the fourth case is based on object movement instead of shifting the digital camera [27,28].

On the other hand, speckle has been validated as a usable tool for, for instance, depth discrimination [29], superresolution (SR) [30], phase retrieval [31], shape measurement [32], and wavefront sensing [33]. However, in other applications speckle is an unwanted feature to be reduced in order to improve image quality. Thus, coherent noise reduction in DH has been attained by using partially spatially coherent sources [34], with implementations based on digital image processing [35], by approaches based on the averaging of multiple holograms recorded with different wavelengths [36] or polarization states [37], or by slightly shifting the sample [38] or changing the phase of the illumination [39] between recordings, just to cite a few. In addition, aperture synthesis [7,8,13,20,27] has also been validated as a useful strategy for reducing both the size and the contrast of the speckle noise as a consequence of, respectively, the SA enlargement and the overlapping between elementary apertures when generating the SA.

Among the previously reported methods to achieve SA generation in DH [6-28], approaches based on object movement [27,28] are by far the least exploited ones. In fact, only rotation has been considered until now. In this report, we propose the use of object translation to synthesize a final digital image with improved capabilities concerning resolution, FOV, SNR, and DOF in reflective DH. The proposed method, named Lensless Object Scanning Holography (LOSH), profits from the linear movement of the object to record a set of reflective digital lensless Fourier holograms containing different spatial as well as spatial-frequency information of the scanned object. The different spatial information comes from a different object region being recovered for each hologram due to the linear object displacement. The different spatial-frequency content accounts for a given object area being illuminated under different angular directions depending on its position along the scanned direction, assuming that the illumination beam is sufficiently divergent. Those two characteristics produce object FOV enlargement and resolution improvement, respectively, when considering the whole set of recorded holograms. Moreover, if the object displacement between contiguous holograms is small in comparison with the FOV retrieved from a single hologram, the SNR of the final image is also enhanced due to the averaging of the coherent noise when adding the whole set of recorded holograms. As a result, the final image provided by LOSH resembles an image obtained under white light illumination, but with the difference that not only intensity but also phase information is accessible. Thus, an extended DOF image can also be synthesized, owing to the coherent nature of LOSH, by numerical propagation to different axial distances.

The paper is organized as follows. Section 2 presents the experimental layout as well as a theoretical description of the numerical processing involved in LOSH. Experimental results are reported in Section 3 for reflective objects concerning FOV enlargement, resolution and SNR enhancement, and DOF extension. Finally, Section 4 concludes the paper.

2. Theoretical analysis

2.1 System description

The experimental setup of LOSH is depicted in Fig. 1. It is a classical lensless Fourier holographic configuration implemented in a Michelson interferometer. In the imaging branch, a divergent spherical point source illuminates the input object after reflecting off a non-polarizing beam splitter cube (BS). The back-diffracted wavefront reaches the hologram plane at the CCD. The light transmitted by the BS is reflected back by a reference mirror and arrives at the CCD. This reference beam has the form of a spherical beam diverging from a virtual point source located at the same distance, generated as the image of the real point source in the reference mirror. If the input object is placed at just the same distance from the CCD (D in Fig. 1(b)) but in the imaging branch, the recorded hologram becomes a digital lensless Fourier hologram in which the input object is imaged at one of the diffraction orders of the recorded hologram. In order to maximize the resolution of the experimental layout, the reference mirror is glued to the BS using index-matching immersion oil. This configuration is maintained for every input object position when considering the object displacement.

Fig. 1 Upper view of the proposed LOSH layout: (a) ray tracing for imaging and reference beams, and (b) identification of the main parameters for the mathematical analysis.

2.2 Mathematical analysis

Let us characterize the amplitude distribution of the reflective input object by O(x1, y1). This object is illuminated by a spherical divergent wave A·exp[jk(x1² + y1²)/(2D0)] coming from the point source that is initially located at an effective distance D0. This effective distance takes into account the passage of the spherical wave through the higher-index medium of the BS. This is not troublesome since it affects both interferometric branches equally.

According to the geometry of Fig. 1, the point source is placed at a distance d1 from the upper face of the BS and the CCD detector at a distance d2 from the right face. Then, d1 is related to the position of the object through the definition of the virtual point source plane in order to assure the holographic recording in a lensless Fourier configuration. And d2 is, in principle, arbitrary, but it determines the region of the input object that can be fully recovered in the reconstruction process (the region that we will name the FOV).

For the sake of simplicity in the mathematical analysis, let us suppose that d1 = d2. In this particular case, the object is placed at half the distance between the illuminating point source and the CCD. Moreover, the distance D between the object and the CCD becomes equal to D0. Thus, just after reflection on the input object and except for constant amplitude factors, the field distribution provided by the imaging arm is

U(x_1, y_1) = \exp\left[ \frac{jk}{2D} (x_1^2 + y_1^2) \right] O(x_1, y_1).    (1)

Applying Fresnel diffraction and dropping the irrelevant linear phase factor, the field distribution at the plane (x2, y2) where the CCD is located can be written as

U(x_2, y_2) = \frac{1}{j\lambda D} \exp\left[ \frac{jk}{2D} (x_2^2 + y_2^2) \right] \iint U(x_1, y_1) \exp\left[ \frac{jk}{2D} (x_1^2 + y_1^2) \right] \exp\left[ -\frac{j2\pi}{\lambda D} (x_1 x_2 + y_1 y_2) \right] dx_1\, dy_1 = C \exp\left[ \frac{jk}{2D} (x_2^2 + y_2^2) \right] \tilde{F}(u, v)    (2)
where

\tilde{F}(u, v) = \iint U(x_1, y_1) \exp\left[ \frac{jk}{2D} (x_1^2 + y_1^2) \right] \exp\left[ -j2\pi (u x_1 + v y_1) \right] dx_1\, dy_1 = FT\left\{ U(x_1, y_1) \exp\left[ \frac{jk}{2D} (x_1^2 + y_1^2) \right] \right\}    (3)

with the spatial frequencies (u, v) = (x2/(λD), y2/(λD)) and C = 1/(jλD).

The complex amplitude distribution of Eq. (2) interferes with the reference beam generated at a virtual point source located at the same distance D in front of the CCD. If, moreover, the reference mirror is slightly tilted, the reference virtual point source becomes shifted a distance b off the optical axis. In our case, this shift is achieved by tilting both the BS and the reference mirror in the direction that is perpendicular to the object scanning direction. Assuming that the object is moved along the x-direction, b is perpendicular to Fig. 1 (y-direction). For instance, when b < 0, the virtual reference point is located below the axis along the y-direction. Thus, the reference beam can be written as

R(x_2, y_2) = R_0 \exp\left[ \frac{jk}{2D} (x_2^2 + y_2^2) \right] \exp\left( \frac{j2\pi b y_2}{\lambda D} \right).    (4)

Then, the CCD records a lensless Fourier transform (FT) hologram as the intensity distribution I(x2, y2) = |U(x2, y2) + R(x2, y2)|² provided by the addition of Eqs. (2) and (4). Assuming a linear relation between amplitude response and intensity in the reconstruction process, the two diffracted terms that give rise to the images in the Fourier domain can be written as

U(x_2, y_2)\, R^{*}(x_2, y_2) = R_0 C \exp(-j2\pi b v)\, \tilde{F}(u, v)
U^{*}(x_2, y_2)\, R(x_2, y_2) = R_0 C^{*} \exp(j2\pi b v)\, \tilde{F}^{*}(u, v)    (5)
where F̃(u, v) and F̃*(u, v) are defined as

\tilde{F}(u, v) = FT\left\{ U(x_1, y_1) \exp\left[ \frac{jk}{2D} (x_1^2 + y_1^2) \right] \right\} = FT\left\{ O(x_1, y_1) \exp\left[ \frac{jk}{D} (x_1^2 + y_1^2) \right] \right\}
\tilde{F}^{*}(u, v) = FT\left\{ U^{*}(-x_1, -y_1) \exp\left[ -\frac{jk}{2D} (x_1^2 + y_1^2) \right] \right\} = FT\left\{ O^{*}(-x_1, -y_1) \exp\left[ -\frac{jk}{D} (x_1^2 + y_1^2) \right] \right\}.    (6)

The situation is that of a usual lensless Fourier hologram. For the reconstruction process we consider that the amplitude transmittance of the hologram is illuminated by an axial plane wave. Then, the amplitudes of the diffracted waves are proportional to the FTs of O(x1, y1) and O*(-x1, -y1). In any case, to get the final images we need a spherical lens to carry out the FT of the hologram transmittance. If the focal length of the lens is f and we propagate to the back focal plane (x3, y3) of the lens, we get a complex amplitude proportional to

O\left[ \alpha x_3, \alpha y_3 - b \right] \exp\left\{ \frac{j2\pi}{\lambda D} \left[ (\alpha x_3)^2 + (\alpha y_3 - b)^2 \right] \right\}
O^{*}\left[ -\alpha x_3, -(\alpha y_3 + b) \right] \exp\left\{ -\frac{j2\pi}{\lambda D} \left[ (\alpha x_3)^2 + (\alpha y_3 + b)^2 \right] \right\}    (7)
where α = D/f. Aside from phase factors, Eq. (7) represents a direct image of the input object centered at (0, b/α) and an inverted image of the input object centered at (0, -b/α). Obviously, all this reconstruction process is performed numerically.
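
Numerically, the reconstruction of one off-axis lensless Fourier hologram therefore reduces to a single FFT followed by cropping of the +1 order around its known off-axis position. The following minimal NumPy sketch illustrates this step (function and variable names are ours; the authors performed the processing in MATLAB):

import numpy as np

def reconstruct_lensless_fourier(hologram):
    # One centered FFT plays the role of the Fourier-transforming lens: the zero
    # order appears at the center and the twin images at (0, +/- b/alpha).
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(hologram.astype(float))))

def crop_plus_one_order(reconstruction, center, half_side):
    # Extract the direct (+1 order) image around its off-axis position "center"
    # (row, column indices of the carrier peak), leaving out the zero order.
    r, c = center
    return reconstruction[r - half_side:r + half_side, c - half_side:c + half_side]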

Now, we want to recall that LOSH is based on the recording of a set of digital lensless Fourier holograms when the object is linearly moved along a given direction (x-direction). Then, each hologram provides a recovered image corresponding to a different area of the input object (that one illuminated during each specific recording). Finally, an image with improved performance is synthesized by properly assembling all those images.

Taking into account the restrictions imposed by the CCD (limited pixel size and limited number of pixels), both the spatial frequency content (i.e., the resolution) and the FOV become limited accordingly for each successive hologram. Let us start with the FOV limitation issue. The characteristic of a FT hologram is that each object point is encoded as an interferometric fringe pattern of a unique and constant spatial frequency ([40], p. 365). Since the reference wave originates at the point (0, b), the object point with coordinates (x0, y0) is encoded as a sinusoidal fringe pattern whose spatial frequencies are given by

f_x = \frac{x_0}{\lambda D} \quad \text{and} \quad f_y = \frac{y_0 - b}{\lambda D}.    (8)

Considering an object inscribed in a square of side L centered on-axis, that is, O(x1, y1)·rect(x1/L, y1/L), the frequency intervals where the object points are encoded are

[f_{x1}, f_{x2}] = \left[ -\frac{L}{2\lambda D}, \frac{L}{2\lambda D} \right] \quad \text{and} \quad [f_{y1}, f_{y2}] = \left[ \frac{-L/2 - b}{\lambda D}, \frac{L/2 - b}{\lambda D} \right].    (9)

Nevertheless, although the CCD records all those frequencies, only a square portion of the object will be fully recovered in the reconstruction process. This square corresponds to the part of the geometrical projection of the object that falls on the CCD; we name the side of such a square L'. Since LOSH is applied along the x-direction, the carrier frequency introduced along the y-direction is related only to the FOV constraints.

The CCD detector is able to record an interferometric pattern only below the Nyquist frequency defined by the CCD pixel size. The upper limit corresponds to the recording of two successive lines of the interferometric pattern with only one non-excited pixel between them. Let us call P the pixel size. Thus, the minimum fringe period is 2P, which corresponds to a maximum frequency of ν = 1/(2P) line pairs. That frequency corresponds to the farthest point of the object with respect to the reference point. This condition can be written as

\frac{L/2 - b_M}{\lambda D} = \frac{1}{2P}    (10)
where, for a given object and a given CCD detector, the maximum value of b that assures that all the points of the object are recorded is given by

b_M = \frac{L}{2} - \frac{\lambda D}{2P} = \frac{1}{2}\left( L - \frac{\lambda D}{P} \right).    (11)

As a conclusion, the FOV in the y-direction (as well as in the x-direction) is limited by the pixel size of the detector. This limitation of the FOV is characteristic of FT holograms. Under the experimental conditions of the proposed setup, we are far from this limit when considering our fully recovered image.

But there exists another limitation of the FOV when obtaining the twin images in the reconstruction process. As LOSH is performed numerically, the twin images are displayed in a square matrix of 1024 × 1024 pixels, each image is displaced by the amount b/α = bf/D along the y-direction, and each one has to be contained in, at most, half of that matrix. In addition, the background terms (zero-order term of the hologram) are located around the origin of the coordinate system and limit even more the portion of the matrix available for displaying the images. Note that, since both twin images are symmetrically placed with respect to the center of the digital matrix, we focus only on one of them (the direct image).

As we have previously commented, the direct image is centered at the point (0, b/α) and is inscribed in a square of side L/α. Its farthest point is placed at a distance L/(2α) + b/α, while the closest one is at -L/(2α) + b/α. Since the background term has a half-height equal to L/α, to avoid overlapping between the background and the image it is necessary that

\frac{b}{\alpha} - \frac{L}{2\alpha} \geq \frac{L}{\alpha} \;\Longrightarrow\; \frac{b}{\alpha} \geq \frac{3L}{2\alpha}.    (12)

Therefore, taking into account Eqs. (11) and (12), we can select the value of b that optimizes the experimental setup.

Let us consider now the multiplexing proposed by LOSH in the x-direction. Let us suppose that we move the object a distance ξ from its initial position. The corresponding new intervals of frequencies are now

[f'_{x1}, f'_{x2}] = \left[ \frac{-L/2 + \xi}{\lambda D}, \frac{L/2 + \xi}{\lambda D} \right] \quad \text{and} \quad [f'_{y1}, f'_{y2}] = \left[ \frac{-L/2 - b}{\lambda D}, \frac{L/2 - b}{\lambda D} \right]    (13)
that is, the same interval in the y-direction [f'y1, f'y2] and a new one in the multiplexed direction [f'x1, f'x2]. Thus, when performing object scanning, the CCD records the information of different object regions in different spatial frequency bands. In other words, the new frequency interval [f'x1, f'x2] contains higher spatial-frequency information of the input object than the case where no scanning is considered. This procedure is valid as long as the frequency of the interferometric pattern is lower than the Nyquist sampling frequency of the CCD. The farthest object point that can be recorded in the hologram is given by

\frac{L/2 + \xi_M}{\lambda D} = \frac{1}{2P}.    (14)

So the maximum displacement of the object from its central position is defined by

\xi_M = \frac{\lambda D}{2P} - \frac{L}{2}.    (15)

This equation shows that the FOV along the multiplexing direction can also be limited by the characteristics of the CCD detector.

To study the generation of the SA that gives rise to the SR effect, let us start with the object in its initial centered position. Since the CCD is also centered on-axis, the cutoff frequency of the system is uc = NA/(Kλ), where NA is the numerical aperture of the detector and K ≈ 0.82 for coherent imaging systems ([41], p. 471). Then, the spatial frequency band transmitted by the system for the central object point along the direction of interest (x-direction) is

[u_1, u_2] = \left[ -\frac{H}{2K\lambda D}, \frac{H}{2K\lambda D} \right].    (16)

Nevertheless, as stated above, the portion of the object that is illuminated and projected onto the aperture of the CCD detector is a square of side L'. Thus, the highest frequency transmitted for a point at the edge of the FOV is

u_M = \frac{(H + L')/2}{K\lambda D}.    (17)

Thus, when the object is moved a given distance ξ, the detector provides a spatial frequency band that depends on the value of ξ. For the central point, the transmitted band is

[u'_1, u'_2] = \left[ \frac{\xi - H/2}{K\lambda D}, \frac{\xi + H/2}{K\lambda D} \right].    (18)

Considering a value of the displacement ξ = H/2, the transmitted band becomes

[u'_1, u'_2] = \left[ 0, \frac{H}{K\lambda D} \right] = [0, 2u_2]    (19)
and for ξ = H, the band transmitted is

[u'_1, u'_2] = \left[ \frac{H}{2K\lambda D}, \frac{3H}{2K\lambda D} \right] = [u_2, 3u_2].    (20)

Equations (16) and (20) show how adjacent spatial frequency bands are recorded by the system due to the object movement along the x-direction. In other words, a SA that increases the initial cutoff frequency by a factor of 3 can be numerically generated after properly assembling all the information contained in the set of recorded holograms. Then, the resolution limit is also improved by a factor of 3. For displacement values ξ ≤ H, we get partial overlapping of the different bands transmitted by the system. In particular, for ξ = H/2, the aperture is expanded by a factor of 2.
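
As a numerical illustration of Eqs. (16)-(20), the following sketch of ours evaluates the transmitted bands for ξ = 0, H/2, and H using the experimental values of Section 3 (λ, D, P, and H are taken from the text; the script itself is not part of the original processing):

K = 0.82                      # coherent imaging constant used in the text
wavelength = 850e-9           # VCSEL wavelength (m)
D = 0.136                     # effective object-CCD distance (m)
H = 800 * 6.7e-6              # width of the CCD area used (m)

u2 = H / (2 * K * wavelength * D)   # conventional cutoff for the central object point
for xi in (0.0, H / 2, H):
    band = ((xi - H / 2) / (K * wavelength * D), (xi + H / 2) / (K * wavelength * D))
    print(f"xi = {xi * 1e3:.2f} mm -> transmitted band = [{band[0]:.3g}, {band[1]:.3g}] m^-1")
# The union of the three bands spans [-u2, 3*u2]: a synthetic aperture three times
# wider along the scan direction, in agreement with Eqs. (16)-(20).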

2.3 Algorithm

Following the mathematical analysis, the procedure for obtaining the superresolved and field-extended image consists of the superposition of the contributions obtained from the elementary holograms recorded at each object position. The reconstruction algorithm consists of the correct superposition of the elementary outputs in the image domain. Since the reconstructed image will have an extended FOV and enhanced resolution, we start by allocating an image buffer with the final resolution and spatial extent.

For each object position, a lensless FT hologram is obtained. Therefore, a standard-resolution, standard-FOV reconstruction can be obtained for each position by a simple FT of the recorded hologram. The different orders are separated by adding a carrier frequency to the hologram through a slight tilt of the reference beam. This permits the spatial separation of the image in one of the reconstructed orders of the FT, the reconstructed bandpass version of the object lying in the first order. This image, with some modifications listed below, is added to the final image matrix at the proper location.

Prior to addition onto the final image, each image is rescaled to accommodate the enhancement of resolution, resulting in a smooth image with "empty resolution". This is accomplished by zero padding in the FT of each recovered image. The zero-padding procedure implies a double FT process in which an image size bigger than the initial one is defined. Then, the FT of the original image is placed at the center of the new matrix and zeros are added around it to fill the new image size. When coming back through the second FT, the resulting image becomes convolved with an interpolation (sinc) function arising from the FT of the rectangular (or square) boundary defined by the transition between the zeros and the image FT. We use the same step to correct the phase in the object plane (a spherical phase factor, as shown above). At the same step we can correct additional factors due to accidental (random but fixed) small displacements or tilts in the setup, or a slight mismatch in the equalization of the object-camera distance with respect to the source-camera distance. It is also possible to correct any other possible error, such as the residual astigmatism or other optical aberrations introduced by the geometry or by the BS [42].
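
The Fourier-domain zero padding described above can be sketched as follows (a minimal NumPy example of the interpolation step only; the phase and aberration corrections mentioned in the text are omitted, and the function name is ours):

import numpy as np

def upsample_by_zero_padding(image, factor):
    # Interpolate "image" by an integer "factor" via Fourier-domain zero padding;
    # no new information is added ("empty resolution"), and the sharp spectrum/zeros
    # transition acts as the sinc interpolator described in the text.
    n, m = image.shape
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    padded = np.zeros((n * factor, m * factor), dtype=complex)
    r0, c0 = (n * factor - n) // 2, (m * factor - m) // 2
    padded[r0:r0 + n, c0:c0 + m] = spectrum          # original FT centered in the larger matrix
    return np.fft.ifft2(np.fft.ifftshift(padded)) * factor ** 2   # rescale for the larger grid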

The reconstructed image from each elementary hologram undergoes the same process. Each image corresponds to a different shift of the object, resulting in a different part of the object being captured in the reconstructed image field. Each elementary image is displaced according to the displacement of the object. The exact displacement can be determined by simple correlation with the previous frames or by a previous calibration with a reticle. Once the scaling, phase correction, and shift processes are performed, the resulting images are accumulated in the image buffer allocated in the first step.
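
A possible implementation of the registration and accumulation steps is sketched below (our own NumPy sketch assuming integer-pixel shifts; the authors mention either correlation with previous frames or a prior calibration with a reticle, and all names here are ours):

import numpy as np

def estimate_shift(reference, image):
    # Integer-pixel displacement of "image" with respect to "reference" obtained
    # from the peak of their FFT-based cross-correlation.
    cc = np.fft.ifft2(np.fft.fft2(reference) * np.conj(np.fft.fft2(image)))
    peak = np.array(np.unravel_index(np.argmax(np.abs(cc)), cc.shape))
    size = np.array(cc.shape)
    peak[peak > size // 2] -= size[peak > size // 2]   # wrap to signed shifts
    return peak                                        # (rows, columns)

def accumulate(sub_images, shifts, canvas_shape, origin):
    # Coherently add the rescaled, phase-corrected sub-images into the final buffer.
    canvas = np.zeros(canvas_shape, dtype=complex)
    for img, (dy, dx) in zip(sub_images, shifts):
        r, c = origin[0] + dy, origin[1] + dx
        canvas[r:r + img.shape[0], c:c + img.shape[1]] += img
    return canvas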

3. Experimental validation

The fundamental components of the LOSH assembly are depicted in Fig. 2, adopting a Michelson interferometric configuration. An animation of the setup presented in Fig. 2 is provided in Media 1. For all the upcoming experiments, a divergent laser beam emitted by a single VCSEL source (1 mW optical power, 850 nm wavelength, ±15° beam divergence) is split by a non-polarizing beam splitter (BS) cube (BK7, 25.4 mm side, near-IR broadband coating). The transmitted beam becomes the reference beam of our interferometric setup and is reflected back by a metallic mirror (25.4 mm diameter, protected silver coating) that is attached to the beam splitter cube by optical index-matching oil. The beam reflected at the BS cube is directed toward the input object (reflective T-90 resolution test from Applied Image Inc.), illuminating an extended area of it according to the divergence of the VCSEL source and the distance between object and source. The input object is mounted on a motorized linear translation stage to allow object scanning. Finally, all the holograms are recorded by a monochrome CCD imaging device (Kappa DC2, 12 bit dynamic range, 800 × 800 pixels as selected image size, 6.7 μm pixel size) and digitally processed with MATLAB software. Notice that no lenses are included in the experimental setup (only reflective surfaces). In the following subsections we show how the LOSH method improves image quality in digital holography.

Fig. 2 Experimental setup for LOSH approach (Media 1).

3.1 FOV enlargement

First, the LOSH concept extends the object FOV. Since we are implementing a digital lensless Fourier holographic architecture in off-axis mode, the object FOV is retrieved from one of the diffraction orders of the hologram in the Fourier domain. Moreover, because the hologram has the same size in both directions (800 × 800 pixels), the recovered FOV becomes a square containing a given area of the input object. This square FOV has a side equal to L' according to Subsection 2.2 and defines the conventional object FOV that LOSH extends in the following way.

As a consequence of the object scanning, different object regions pass through the conventional FOV in a time sequence. Thus, the recording of a set of holograms corresponding to different illuminated areas of the input object allows the synthesis of an image where each single recovered FOV is properly assembled into a bigger object FOV image. Figure 3 depicts a single frame (the one where the central part of the T-90 test is centered in the FOV) of a video movie (Media 2) showing the generation of the expanded FOV image: the recorded hologram is presented in (a), the FT of the hologram is included in (b), (c) shows for clarity the upper hologram order where the input object is imaged (object scanning and fixed FOV), (d) presents an intermediate synthetic expanded image where the object is fixed and the FOV is scanned, and the addition of all the individual FOV images (the synthesized image with partially expanded FOV) is displayed in (e).

Fig. 3 Full LOSH process yielding the generation of an extended FOV (Media 2).

In this experiment, the total object movement range is 7.5 mm and the number of recorded holograms is 30; thus, the object step between holograms is 0.25 mm. According to the images included in Fig. 4, we have estimated a square conventional FOV of side L' = 2.6 mm (dimensions of the illuminated square area of the test presented in Fig. 4(a)), while the extended FOV (Fig. 4(b)) is approximately 10 × 2.5 mm (width × height). So, the FOV improvement factor in the multiplexed direction is close to 4. Note also that the object movement starting from the central position (3.75 mm) is in good agreement with the maximum allowed displacement of the object provided by Eq. (15) (3.65 mm), considering an effective distance of D0 = 13.6 cm between the test target and the CCD (see Subsection 3.2).
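
As a quick check of Eq. (15) against these numbers (a sketch of ours that assumes L equal to the full ≈10 mm lateral extent of the scanned target, a value not stated explicitly above):

wavelength, D, P = 850e-9, 0.136, 6.7e-6   # m
L = 10e-3                                  # assumed full lateral extent of the scanned target (m)
xi_max = wavelength * D / (2 * P) - L / 2  # Eq. (15)
print(xi_max)                              # ~3.6e-3 m, consistent with the 3.65 mm quoted above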

Fig. 4 FOV extension by LOSH. (a) Conventional FOV recovered when considering a single digital lensless Fourier hologram. And (b) extended FOV image after applying LOSH.

3.2 Resolution improvement

We will focus now on the resolution improvement provided by LOSH. The experiment is the same as presented in the previous subsection, and the distance between the test target and the CCD is approximately 14.5 cm. However, the BS cube brings the test closer to the camera. Under the paraxial approximation, the displacement Δs introduced by a plane-parallel plate is given by Δs = e(1 - 1/n), with e and n the thickness and the refractive index of the plate, respectively. According to the technical specifications provided by Schott (http://www.schott.com), the refractive index of BK7 optical glass at the VCSEL wavelength is 1.51, and then the shift introduced by the BS cube is around Δs = 8.6 mm. Thus, the effective distance (D0) between the test target and the CCD is approximately D0 = 13.6 cm. We recall that, in our setup, D = D0.

Assuming such an effective distance and the paraxial approximation, the NA provided by the CCD square sensitive area along the horizontal direction is NA ≅ H/(2D0), with H the CCD width. This value (NA = 0.02) provides a theoretical resolution limit equal to R = Kλ/NA = 35 μm. Figure 5(a) depicts the low-resolution image provided by the proposed layout, corresponding to the case where the last resolved element of the test is placed on-axis, that is, centered in the FOV. Otherwise, the element would be illuminated with a slightly tilted beam and would produce an image with improved resolution. We can see that the last resolved element is the 5th element (starting from the largest period and going down) of the 2nd group (identified by two black squares in the test), which is marked with a black arrow and corresponds to 25 lp/mm or, equivalently, a 40 μm pitch. This experimental result matches the theoretical one (35 μm), since the next element in the resolution test (the 6th element of the 2nd group, corresponding to 30 lp/mm or 33 μm) is below the theoretical resolution limit.
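
These numbers can be reproduced directly from the quantities given above (a small script of ours; all values are taken from the text):

n_bk7 = 1.51                    # BK7 refractive index at 850 nm
e = 25.4e-3                     # BS cube side (m)
delta_s = e * (1 - 1 / n_bk7)   # shift introduced by the cube: ~8.6e-3 m
D0 = 0.145 - delta_s            # effective test-CCD distance: ~0.136 m
H = 800 * 6.7e-6                # width of the CCD area used (m)
NA = H / (2 * D0)               # ~0.02
R = 0.82 * 850e-9 / NA          # coherent resolution limit: ~35e-6 m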

Fig. 5 Resolution improvement by LOSH. (a) Low resolution image. (b) Superresolved image after applying LOSH. (c) Magnification of the black rectangle depicted in (b). And (d) horizontal plots of the elements marked with the red and blue lines in (c).

Then, LOSH is applied and the resulting image is shown in Fig. 5(b). Note that this image is the central part of the image presented in Fig. 4(b). To clearly show the SR effect, Figs. 5(c) and 5(d) respectively depict a magnification of the area marked with a black solid-line rectangle in case (b) and two average plots along the 9th and 10th elements of the test in case (c), identified by blue and red lines. We can see that the 15 vertical bars of each test element are now resolved up to the 10th element, which corresponds to 80 lp/mm (or 12.5 μm). This value provides a resolution gain factor equal to 3.2.

In addition, we can interpret the SR effect as a SA generation process where the cutoff frequency provided by the imaging system becomes synthetically expanded [6-28]. In such a case, Fig. 6(a) depicts the conventional square aperture corresponding to the FT of the image depicted in Fig. 5(a), while Fig. 6(b) shows the SA generated from the FT of the image presented in Fig. 5(b). The synthetic aperture after applying LOSH is wider than the conventional one along the multiplexed (horizontal) direction.

Fig. 6 (a) Square aperture representing the proposed system layout at the Fourier domain. And (b) SA generated after applying LOSH.

3.3 SNR enhancement

Since LOSH implies the addition of multiple bandpass images that partially overlap, the noise sources of coherent imaging systems are expected to be reduced. In our case, speckle noise is suppressed and the coherent artifacts are averaged, showing an improvement in the final image quality from both qualitative (visual) and quantitative (SNR) points of view.

We have calculated the SNR as the ratio of the signal mean to the standard deviation of the noise and compared its values for two cases. The first one concerns a single holographic recording obtained when the central part of the test is centered in the FOV of the experimental layout, that is, it corresponds to the central illuminated part of Fig. 4(a). The second one relates to the same area of interest when applying LOSH, that is, it corresponds to Fig. 5(b). In both cases, we have defined a binary mask to separate the contribution of the signal (those points of the images corresponding to the squares and the vertical bars of the input object) and the background (those points of the images that are not considered as signal). The binary mask for evaluating the signal can be easily obtained by thresholding the intensity of the superresolved image. In this case, the background is rendered with zeros and the image details with ones. The inverse mask is the one applied for calculating the standard deviation of the background noise. Results are summarized in Table 1.
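
The SNR figure used in Table 1 can be computed, for instance, as follows (our own sketch; the mask construction by intensity thresholding is assumed to have been done beforehand):

import numpy as np

def snr(intensity, signal_mask):
    # SNR as used in Table 1: mean of the signal divided by the standard deviation
    # of the background; "signal_mask" is 1 over the test features and 0 elsewhere.
    mask = signal_mask.astype(bool)
    return intensity[mask].mean() / intensity[~mask].std()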

Table 1. SNR Analysis

3.4 DOF extension

Finally, in order to make the coherent nature of LOSH evident, we present an additional experiment concerning DOF extension. Since phase information is available, numerical propagation algorithms can be implemented to focus at different object sections. This is true with and without applying LOSH, since the proposed experimental setup retrieves the phase distribution in both cases. However, the synthetic extended-DOF image obtained using LOSH also retains the enhancements in resolution, FOV, and SNR. In this new experiment, a micrometer slide is obliquely attached to the front face of the resolution test. The micrometer slide (Edmund Optics, Metric Scale 1 mm/100 Div) is made up of two crossed scales. The horizontal scale is placed in such a way that it is parallel to the resolution test surface but at a shorter axial distance from the CCD than the resolution test. The vertical scale has a variable axial distance from the CCD according to the angle defined by the oblique slide positioning.

Figure 7 includes the experimental results. The images of cases (a) and (b) correspond to a single hologram where the resolution test and the horizontal scale of the micrometer slide are in focus, respectively. This is the extended-DOF case for the conventional imaging mode (LOSH is not applied). The refocusing of the slide scale image is achieved by digital propagation, using the angular spectrum method, to the axial distance that provides the best image quality of the scale (not by mechanical displacement of the input object). We can see that now the last resolved element of the resolution test is the 6th element of the 2nd group (marked with a white arrow in (a)), and from (b) we can resolve the numbers 20-30-40 of the horizontal slide scale. Note that the FOV is restricted and the vertical slide scale is not visible.
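
The digital refocusing mentioned above can be sketched with a standard angular spectrum propagator (our own NumPy implementation of the general method; parameter names are ours):

import numpy as np

def angular_spectrum_propagate(field, wavelength, pixel, dz):
    # Propagate a complex field by an axial distance dz (angular spectrum method).
    n, m = field.shape
    fy = np.fft.fftfreq(n, d=pixel)[:, None]
    fx = np.fft.fftfreq(m, d=pixel)[None, :]
    arg = 1.0 / wavelength ** 2 - fx ** 2 - fy ** 2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    transfer = np.where(arg > 0, np.exp(1j * kz * dz), 0.0)   # evanescent waves discarded
    return np.fft.ifft2(np.fft.fft2(field) * transfer)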

Fig. 7 DOF extension by LOSH. (a)-(b) Conventional low resolution images with the resolution test and the horizontal slide scale in focus, respectively. (c)-(d) Synthesized image after applying LOSH showing the resolution test and the horizontal slide scale, respectively. (e) Single frame of the video movie showing digital refocusing ability (Media 3). And (f) synthesized expanded DOF image after applying LOSH.

Then, LOSH is applied, improving resolution, FOV, and SNR, and allowing the generation of an expanded-DOF image. Case (c) corresponds to the superresolved image when focusing at the resolution test plane. Case (d) focuses at the horizontal slide scale where, aside from improving the global image quality (see the numbers 20-30-40 of the scale in comparison with (b)), the vertical scale also appears now as a consequence of the FOV improvement. But the vertical scale is not all in focus due to the oblique slide positioning. In case (e) we show a single frame of a video movie (Media 3) in which digital propagation to different axial distances is applied to image different input object details. And a synthesized expanded-DOF image is presented in case (f) as a consequence of taking the in-focus regions of the input object at different propagation distances.

4. Conclusions

LOSH has been presented as a method capable of improving image quality in fully lensless reflective digital Fourier holography. The experimental setup is extremely simple and includes no lenses. LOSH proposes the movement (or scanning) of the input object as the key point to record a set of digital lensless holograms containing different spatial and spatial-frequency content. After proper digital post-processing of those holograms, LOSH outputs a synthetic image with improved resolution, FOV, SNR, and DOF, each of those improved attributes coming from a different LOSH virtue. Experimental results for a 1D reflective object are presented, validating the proposed approach.

The proposed method can be implemented in applications where object scanning does not represent a restriction. Thus, in-line inspection of printed circuits, quality-control analysis of micro-barcodes, and micro-optofluidic systems can take advantage of LOSH. In addition, owing to the fully lensless nature of LOSH, applications involving wavelengths where lenses are difficult to manufacture (extreme UV or soft X-ray) can also benefit from the LOSH concept.

Acknowledgment

Part of this work has been supported through grants from the Spanish Ministerio de Educación y Ciencia under the project FIS2010-16646.

References and links

1. L. P. Yaroslavsky, Digital Holography and Digital Image Processing: Principles, Methods, Algorithms (Kluwer Academic, 2003).

2. U. Schnars and W. P. O. Jüptner, Digital Holography (Springer-Verlag, Heidelberg, 2005).

3. J. W. Goodman, Speckle Phenomena: Theory and Applications (Roberts & Company, 2006).

4. J. W. Goodman and R. W. Lawrence, “Digital image formation from electronically detected holograms,” Appl. Phys. Lett. 11(3), 77–79 (1967).

5. T. Huang, “Digital holography,” Proc. IEEE 59(9), 1335–1346 (1971).

6. F. Le Clerc, M. Gross, and L. Collot, “Synthetic-aperture experiment in the visible with on-axis digital heterodyne holography,” Opt. Lett. 26(20), 1550–1552 (2001).

7. J. H. Massig, “Digital off-axis holography with a synthetic aperture,” Opt. Lett. 27(24), 2179–2181 (2002).

8. P. Almoro, G. Pedrini, and W. Osten, “Aperture synthesis in phase retrieval using a volume-speckle field,” Opt. Lett. 32(7), 733–735 (2007).

9. J. Di, J. Zhao, H. Jiang, P. Zhang, Q. Fan, and W. Sun, “High resolution digital holographic microscopy with a wide field of view based on a synthetic aperture technique and use of linear CCD scanning,” Appl. Opt. 47(30), 5654–5659 (2008).

10. V. Micó, L. Granero, Z. Zalevsky, and J. García, “Superresolved phase-shifting Gabor holography by CCD shift,” J. Opt. A, Pure Appl. Opt. 11(12), 125408 (2009).

11. B. Katz and J. Rosen, “Super-resolution in incoherent optical imaging using synthetic aperture with Fresnel elements,” Opt. Express 18(2), 962–972 (2010).

12. C. Yuan, H. Zhai, and H. Liu, “Angular multiplexing in pulsed digital holography for aperture synthesis,” Opt. Lett. 33(20), 2356–2358 (2008).

13. P. Feng, X. Wen, and R. Lu, “Long-working-distance synthetic aperture Fresnel off-axis digital holography,” Opt. Express 17(7), 5473–5480 (2009).

14. L. Granero, V. Micó, Z. Zalevsky, and J. García, “Synthetic aperture superresolved microscopy in digital lensless Fourier holography by time and angular multiplexing of the object information,” Appl. Opt. 49(5), 845–857 (2010).

15. V. Micó and Z. Zalevsky, “Superresolved digital in-line holographic microscopy for high-resolution lensless biological imaging,” J. Biomed. Opt. 15(4), 046027 (2010).

16. L. Granero, Z. Zalevsky, and V. Micó, “Single-exposure two-dimensional superresolution in digital holography using a vertical cavity surface-emitting laser source array,” Opt. Lett. 36(7), 1149–1151 (2011).

17. Y. Kuznetsova, A. Neumann, and S. R. Brueck, “Imaging interferometric microscopy-approaching the linear systems limits of optical resolution,” Opt. Express 15(11), 6651–6663 (2007).

18. V. Micó, Z. Zalevsky, C. Ferreira, and J. García, “Superresolution digital holographic microscopy for three-dimensional samples,” Opt. Express 16(23), 19260–19270 (2008).

19. T. R. Hillman, T. Gutzler, S. A. Alexandrov, and D. D. Sampson, “High-resolution, wide-field object reconstruction with synthetic aperture Fourier holographic optical microscopy,” Opt. Express 17(10), 7873–7892 (2009).

20. M. Kim, Y. Choi, C. Fang-Yen, Y. Sung, R. R. Dasari, M. S. Feld, and W. Choi, “High-speed synthetic aperture microscopy for live cell imaging,” Opt. Lett. 36(2), 148–150 (2011).

21. A. Calabuig, V. Micó, J. Garcia, Z. Zalevsky, and C. Ferreira, “Single-exposure super-resolved interferometric microscopy by red-green-blue multiplexing,” Opt. Lett. 36(6), 885–887 (2011).

22. C. Liu, Z. Liu, F. Bo, Y. Wang, and J. Zhu, “Super-resolution digital holographic imaging method,” Appl. Phys. Lett. 81(17), 3143–3145 (2002).

23. M. S. Hezaveh, M. R. Riahi, R. Massudi, and H. Latifi, “Digital holographic scanning of large objects using a rotating optical slab,” Int. J. Imaging Syst. Technol. 16(6), 258–261 (2006).

24. M. Paturzo, F. Merola, S. Grilli, S. De Nicola, A. Finizio, and P. Ferraro, “Super-resolution in digital holography by a two-dimensional dynamic phase grating,” Opt. Express 16(21), 17107–17118 (2008).

25. L. Granero, V. Micó, Z. Zalevsky, and J. García, “Superresolution imaging method using phase-shifting digital lensless Fourier holography,” Opt. Express 17(17), 15008–15022 (2009).

26. M. Paturzo and P. Ferraro, “Correct self-assembling of spatial frequencies in super-resolution synthetic aperture digital holography,” Opt. Lett. 34(23), 3650–3652 (2009).

27. R. Binet, J. Colineau, and J.-C. Lehureau, “Short-range synthetic aperture imaging at 633 nm by digital holography,” Appl. Opt. 41(23), 4775–4782 (2002).

28. Y. Zhang, X. Lu, Y. Luo, L. Zhong, and C. She, “Synthetic aperture holography by movement of object,” Proc. SPIE 5636, 581–588 (2005).

29. C. Ventalon and J. Mertz, “Quasi-confocal fluorescence sectioning with dynamic speckle illumination,” Opt. Lett. 30(24), 3350–3352 (2005).

30. J. García, Z. Zalevsky, and D. Fixler, “Synthetic aperture superresolution by speckle pattern projection,” Opt. Express 13(16), 6073–6078 (2005).

31. P. Almoro, G. Pedrini, and W. Osten, “Complete wavefront reconstruction using sequential intensity measurements of a volume speckle field,” Appl. Opt. 45(34), 8596–8605 (2006).

32. A. Anand, V. K. Chhaniwal, P. Almoro, G. Pedrini, and W. Osten, “Shape and deformation measurements of 3D objects using volume speckle field and phase retrieval,” Opt. Lett. 34(10), 1522–1524 (2009).

33. P. F. Almoro and S. G. Hanson, “Wavefront sensing using speckles with fringe compensation,” Opt. Express 16(11), 7608–7618 (2008).

34. F. Dubois, L. Joannes, and J.-C. Legros, “Improved three-dimensional imaging with a digital holography microscope with a source of partial spatial coherence,” Appl. Opt. 38(34), 7085–7094 (1999).

35. J. Maycock, B. M. Hennelly, J. B. McDonald, Y. Frauel, A. Castro, B. Javidi, and T. J. Naughton, “Reduction of speckle in digital holography by discrete Fourier filtering,” J. Opt. Soc. Am. A 24(6), 1617–1622 (2007).

36. T. Nomura, M. Okamura, E. Nitanai, and T. Numata, “Image quality improvement of digital holography by superposition of reconstructed images obtained by multiple wavelengths,” Appl. Opt. 47(19), D38–D43 (2008).

37. L. Rong, W. Xiao, F. Pan, S. Liu, and R. Li, “Speckle noise reduction in digital holography by use of multiple polarization holograms,” Chin. Opt. Lett. 8(7), 653–655 (2010).

38. F. Pan, W. Xiao, S. Liu, F. J. Wang, L. Rong, and R. Li, “Coherent noise reduction in digital holographic phase contrast microscopy by slightly shifting object,” Opt. Express 19(5), 3862–3869 (2011).

39. Y. K. Park, W. Choi, Z. Yaqoob, R. Dasari, K. Badizadegan, and M. S. Feld, “Speckle-field digital holographic microscopy,” Opt. Express 17(15), 12285–12292 (2009).

40. J. Goodman, Introduction to Fourier Optics, 2nd ed. (McGraw-Hill, New York, 1996).

41. M. Born and E. Wolf, Principles of Optics, 7th (expanded) ed. (Cambridge University Press, 1999).

42. T. Colomb, J. Kühn, F. Charrière, C. Depeursinge, P. Marquet, and N. Aspert, “Total aberrations compensation in digital holographic microscopy with a reference conjugated hologram,” Opt. Express 14(10), 4300–4306 (2006).

Supplementary Material (3)

Media 1: MOV (179 KB)     
Media 2: MOV (3036 KB)     
Media 3: MOV (1907 KB)     
