Optica Publishing Group

Image-based registration for synthetic aperture holography

Open Access

Abstract

High pixel count apertures for digital holography may be synthesized by scanning smaller aperture detector arrays. Characterization of and compensation for registration errors in the detector array position and pitch, and for phase instability between the reference and object fields, are major challenges in scanned systems. We use a secondary sensor to monitor phase and image-based registration parameter estimators to demonstrate near diffraction-limited resolution from a 63.4 mm aperture synthesized by scanning a 5.28 mm subaperture over 144 transverse positions. We demonstrate 60 μm resolution at 2 m range.

©2011 Optical Society of America

1. Introduction

Aperture synthesis is used to increase the resolution of coherent sensors [1]. Synthetic aperture holography has been demonstrated by scanning the detector in off-axis digital holography. Two effects of the scanning measurement complicate coherent aperture synthesis: subaperture registration errors and the phase instability of the reference field relative to the object field.

A cross-correlation method [2, 3] has been used to estimate the registration errors. The method performs a similarity test on the overlap between adjacent measurements. Massig demonstrated improved resolution and reduced speckle sizes in the reconstructed image at a distance of 0.8 m [4]. In that study, a 12.7 × 12.7 mm synthetic aperture was formed by scanning a 6.4 × 8.3 mm sensor. Binet et al. synthesized a 27.4 × 1.7 mm aperture by scanning a sensor with an effective area of 1.7 × 1.7 mm at 1.5 m [5].

This paper proposes an image-based method of coherent aperture synthesis for scanned measurements. An image-based metric, the sharpness metric [6], is used to estimate the registration errors. Because the sharpness metric does not rely on measurement overlap, the image-based method is insensitive to changes in the speckle pattern [7] caused by phase instability, and every measurement contributes to improving the image resolution.

Previous studies have accounted for phase instability by various methods. For example, Mico et al. synthesized nine measurements in a regular octagon geometry, improving resolution by a factor of 3 in digital holographic microscopy [8]. The phase instability was compensated by using phase factors up to the 2nd order that represent constant, linear, and quadratic phase error. Jiang et al. used the same phase factors for a 25.9 × 23.3 mm aperture synthesis in digital Fresnel holography [9].

In this paper, phase instability is mathematically analyzed and compensated. The mathematical model represents phase instability as a spatial displacement of the point source that generates a wide reference field. The spatial displacement may be caused by experimental instability such as vibration, drift, and temperature fluctuations. By correcting the displaced position of the reference field, phase instability is alleviated in the object field. Thus, a physical and concise representation of phase instability is possible in aperture synthesis.

The angular spectrum method [10, 11] supports the mathematical modeling of phase instability. The angular spectrum method does not depend on the Fresnel approximation, which assumes paraxial propagation. Thus, the reference field can be modeled accurately for a large aperture synthesis.

A secondary camera is designed to monitor the piston phase error, one form of phase instability. Since the piston phase error is parameterized in the optical measurement, the number of estimation variables decreases in the computational process. A specific scanning scheme also reduces the number of estimation variables in the computational domain.

A hierarchical strategy is adopted for the computational estimation. The estimation process first solves for the hologram patch errors within a block and then for the hologram block errors between blocks within the synthetic aperture hologram. This two-step strategy efficiently breaks the large synthesis problem into small subproblems.

The main result is a near diffraction-limited image synthesis achieving 60 μm resolution over a 63.4 mm field at a range of 2 m. The hologram size is equivalent to 14400 × 14400 pixels. Because the system is lensless, the large synthetic aperture can be realized in a thin imager.

Depth imaging is also demonstrated by numerically focusing the image-based synthetic aperture hologram. The phase information of the field is recorded and processed [12, 13], so numerical refocusing forms the object image at any desired depth.

2. Problem formulation

Figure 1 shows a simplified schematic of the measurement process. The object plane and the detector plane are defined with the spatial coordinates (x, y) and (u, v), respectively. Also, z_d and z_r denote the ranges of the object and of the reference point from the detector array along the optical axis. A hologram patch is one measurement of the scanning aperture; a hologram block is composed of hologram patches; and a wide aperture (WA) hologram is the collection of hologram blocks. The symbol A denotes the set of spatial coordinates of all locations in the synthetic aperture area, as shown in Fig. 1(b). The set A is partitioned into I × J subsets, where A_ij denotes the (i,j)-th subset of A: ∪_ij A_ij = A.


Fig. 1 Schematic for image-based synthetic aperture holography: (a) scattered field Es and reference field R form a hologram in the detector plane by propagating the distances, zd and zr, respectively, and (b) dynamic camera scans patch by patch within a reinitialized block. Note that Ãi j describes the error-impacted measurement of a measurement subset Ai j.


An illumination field is incident on a diffuse object confined in a field of view (FOV) and creates a field scattered off the object. The incident and object fields are denoted E_i(x,y) and E_o(x,y), respectively. A point source located below the object creates a reference field R(x,y). The object field E_o(x,y) propagates to the detector plane and creates a field E_s(u,v;z_d) incident on the focal plane array (FPA). Likewise, the reference field R(x,y) propagates to the detector plane and creates the propagated reference field R(u,v;z_r). The fields E_o(x,y) and E_s(u,v;z_d) are related by

$$E_s(u,v;z_d) = (E_o \circledast h)(u,v;z_d) = \iint E_o(x,y)\, h(u-x,\, v-y;\, z_d)\, dx\, dy,$$
where ⊛ denotes a 2D convolution operator, and h denotes the point spread function (PSF) [11] given by
$$h(u,v;z) = \left[ \frac{-jz}{\lambda (z^2+u^2+v^2)} + \frac{z}{2\pi (z^2+u^2+v^2)^{3/2}} \right] e^{\,j \frac{2\pi}{\lambda} \sqrt{z^2+u^2+v^2}},$$
whose 2D Fourier transform (FT) is the angular spectrum transfer function [11]:
$$\mathcal{F}\{h(u,v;z)\} = e^{\,j \frac{2\pi z}{\lambda}\sqrt{1 - (\lambda f_u)^2 - (\lambda f_v)^2}}.$$
Propagated fields Es (u, v; zd) and R (u, v; zr) form an interference pattern on the detector plane whose intensity is given by
$$I(u,v;z_d,z_r) = \left| R(u,v;z_r) + E_s(u,v;z_d) \right|^2,$$
whose partition is measured by scanning the FPA.
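The angular spectrum propagation used throughout can be sketched in a few lines of Python. This is a minimal illustration under our own naming and sampling assumptions, not the authors' code:

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, pitch, z):
    """Propagate a sampled 2D field over a distance z with the angular
    spectrum transfer function exp(j*2*pi*z/lam*sqrt(1-(lam*fu)^2-(lam*fv)^2)).
    Unlike the Fresnel kernel, no paraxial approximation is made."""
    n, m = field.shape
    fu = np.fft.fftfreq(m, d=pitch)            # spatial frequencies along u
    fv = np.fft.fftfreq(n, d=pitch)            # spatial frequencies along v
    FU, FV = np.meshgrid(fu, fv)
    arg = 1.0 - (wavelength * FU) ** 2 - (wavelength * FV) ** 2
    # Evanescent components (arg < 0) are suppressed rather than propagated.
    H = np.where(arg >= 0.0,
                 np.exp(1j * 2.0 * np.pi * z / wavelength
                        * np.sqrt(np.maximum(arg, 0.0))),
                 0.0)
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Backpropagation to the object plane corresponds to calling the same function with a negative z, the adjoint of the forward kernel for propagating components.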

Ideally, once the interference intensity measurements I^{ij}(u,v;z_d,z_r) for all i and j are collected, we can register them together by placing them in the positions given by the A_ij, as shown in Fig. 1(b), and so synthesize the full synthetic aperture intensity. A typical holographic filtering process [10] can then extract the scattered field E_s(u,v;z_d), which is backpropagated with an adjoint operator to form a coherent image of the object field E_o(x,y).

However, in real experiments, creating a synthetic aperture hologram can involve various errors that will degrade the reconstructed image resolution. These errors are described, modeled, and analyzed in the following subsection.

2.1. Error sources

In synthetic aperture holography, a large diffracted optical field scattered off a 2D object is measured as many hologram patches. Each patch is measured with an FPA that is moved to a designated position, paused for a few seconds to let vibrations settle, and then moved to the next position. This process takes time in linear proportion to the number of hologram patches, so temporal and spatial changes can occur, causing several different errors in the measurements from one hologram patch to another.

While there are many types of errors, we are mostly concerned about the following:

  1. Piston phase errors: unknown changes in the constant phase of the interference intensity measurements Ii j (u, v; zd, zr),
  2. Detector registration errors: unknown errors in the exact positions of Ii j (u, v; zd, zr) caused by the inaccuracy of the 2D translation stage that scans the FPA,
  3. Reference field errors: unknown relative changes in the position of the reference field to the object field, which may be caused by the experimental instability (e.g. vibration and temperature fluctuations),
  4. Reference field discrepancy: unknown discrepancy in the phase of the reference field caused by the non-ideal generation of the spherical field.

2.2. Mathematical modeling of errors

The ideal interference intensity measurement I^{ij}(u,v;z_d,z_r) is captured by placing the sensor array at the (i,j)-th position, as shown in Fig. 1(b). It consists of the reference and scattered fields, R^{ij}(u,v;z_r) and E_s^{ij}(u,v;z_d), and can be expressed as

$$I^{ij}(u,v;z_d,z_r) = \big| R^{ij}(u,v;z_r) + E_s^{ij}(u,v;z_d) \big|^2, \quad (u,v) \in A_{ij}$$
$$= |R^{ij}(u,v;z_r)|^2 + |E_s^{ij}(u,v;z_d)|^2 + R^{*,ij}(u,v;z_r)\, E_s^{ij}(u,v;z_d) + R^{ij}(u,v;z_r)\, E_s^{*,ij}(u,v;z_d),$$
where the superscript '*' denotes the complex conjugate. Applying Fourier filtering to Eq. (6) yields the (i,j)-th field measurement D^{ij}(u,v;z_d,z_r) = R^{*,ij}(u,v;z_r) E_s^{ij}(u,v;z_d).
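The off-axis filtering step can be sketched as follows. This is a simplified illustration under our own assumptions: the carrier offset of the +1 order in the FFT is known from the reference-beam tilt, and a square window of a quarter of the bandwidth per axis is kept, in the spirit of the filtering described in section 4:

```python
import numpy as np

def extract_field(intensity, carrier_offset, bw_frac=4):
    """Isolate the +1 term D = R* Es of an off-axis hologram by Fourier
    filtering: keep a window of 1/bw_frac of the bandwidth (per axis)
    around the carrier peak, whose pixel offset from DC must be known,
    then inverse transform."""
    n, m = intensity.shape
    F = np.fft.fftshift(np.fft.fft2(intensity))
    cy = n // 2 + carrier_offset[1]            # carrier row
    cx = m // 2 + carrier_offset[0]            # carrier column
    hy, hx = n // (2 * bw_frac), m // (2 * bw_frac)
    W = np.zeros_like(F)
    W[cy - hy:cy + hy, cx - hx:cx + hx] = F[cy - hy:cy + hy, cx - hx:cx + hx]
    return np.fft.ifft2(np.fft.ifftshift(W))
```

For a plane-wave-like reference with sufficient tilt, the returned field approximates R* Es with the DC and conjugate terms rejected.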

Accounting for the errors described above, the error-impacted field measurement D̃^{ij}(u,v;z_d,z_r) is redefined as

$$\tilde{D}^{ij}(u,v;z_d,z_r) = D^{ij}(u,v;z_d,z_r;\theta_e^{ij}) = e^{j\varphi_c^{ij}}\, \tilde{R}^{*,ij}(u,v;z_r)\, \tilde{E}_s^{ij}(u,v;z_d)\, e^{j\varphi_r^{ij}(u,v)}.$$
The error parameter vector is defined as θ_e^{ij} = [φ_c^{ij}, e_d^{ij}, e_t^{ij}, φ_r^{ij}(u,v)]^T, with the superscript 'T' denoting a transpose: φ_c^{ij} are the piston phase errors of error source 1, e_d^{ij} are the detector registration errors of error source 2, e_t^{ij} are the transverse errors combining the detector registration errors of source 2 with the reference field errors of source 3, and φ_r^{ij}(u,v) is the reference field discrepancy of source 4.

The inaccurate reference field R̃^{ij}(u,v;z_r) and the inaccurate scattered field Ẽ_s^{ij}(u,v;z_d) may be expressed using the PSF [11],

$$\tilde{R}^{ij}(u,v;z_r) = h(u,v;z_r) \circledast \delta\big(u + e_{d,u}^{ij} + e_{f,u}^{ij},\; v + e_{d,v}^{ij} + e_{f,v}^{ij}\big) = h(u,v;z_r) \circledast \delta\big(u + e_{t,u}^{ij},\; v + e_{t,v}^{ij}\big),$$
$$\tilde{E}_s^{ij}(u,v;z_d) = E_s^{ij}\big(u + e_{d,u}^{ij},\; v + e_{d,v}^{ij};\; z_d\big) = \big[E_i(x,y)\, E_o(x,y)\big] \circledast h(u,v;z_d) \circledast \delta\big(u + e_{d,u}^{ij},\; v + e_{d,v}^{ij}\big).$$
Here the reference field R̃^{ij}(u,v;z_r) is impacted by the transverse error e_t^{ij} = (e_{t,u}^{ij}, e_{t,v}^{ij}), which is the combination of the detector registration error e_d^{ij} = (e_{d,u}^{ij}, e_{d,v}^{ij}) and the reference field error e_f^{ij} = (e_{f,u}^{ij}, e_{f,v}^{ij}). The scattered field Ẽ_s^{ij}(u,v;z_d) is impacted by the detector error e_d^{ij} alone. Note that the depth error of a hologram patch is not considered in this error model; this is justified because the axial resolution is coarse enough that the effect of axial displacement is negligible.
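The shifted-delta convolutions in this error model amount to translating a sampled field, which can be applied (including sub-pixel shifts) via the Fourier shift theorem. A minimal sketch under our own naming:

```python
import numpy as np

def shift_field(field, du, dv, pitch):
    """Translate a sampled field: f(u, v) -> f(u + du, v + dv), i.e. the
    convolution with a shifted delta in the error model, implemented with
    the Fourier shift theorem so du, dv need not be integer pixels."""
    n, m = field.shape
    fu = np.fft.fftfreq(m, d=pitch)            # frequencies along u (columns)
    fv = np.fft.fftfreq(n, d=pitch)            # frequencies along v (rows)
    FU, FV = np.meshgrid(fu, fv)
    phase_ramp = np.exp(1j * 2.0 * np.pi * (FU * du + FV * dv))
    return np.fft.ifft2(np.fft.fft2(field) * phase_ramp)
```

For an integer-pixel shift this reduces exactly to a circular roll of the sample grid.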

3. Computational methods for aperture synthesis

Fig. 2 shows the flow chart of the estimation processes for WA hologram synthesis. After piston phase compensation (subsection 3.1), the error parameter vector becomes

$$\theta_e^{ij} = [e_{dp}^{ij},\, e_{db}^{ij},\, e_t^{ij},\, \varphi_r^{ij}(u,v)]^T,$$
where the detector registration errors e_d^{ij} are split into patch errors e_dp^{ij} and block errors e_db^{ij}. After the patch errors e_dp^{ij} are estimated in hologram block synthesis (subsection 3.2), the block errors e_db^{ij} remain; these are constant for i, j ∈ B_mn, where B_mn is the set of all patch indices (i, j) in the (m,n)-th block. Thus the error parameter vector becomes
$$\theta_e^{ij} = [e_{db}^{ij},\, e_t^{ij},\, \varphi_r^{ij}(u,v)]^T.$$
WA hologram synthesis (subsection 3.3) reduces the error parameter vector to
$$\theta_e^{ij} = [\varphi_r^{ij}(u,v)]^T.$$
Finally, the reference field discrepancy φ_r^{ij}(u,v) is estimated in the reference field estimation of subsection 3.4.


Fig. 2 Flow chart of the error estimation processes for image-based synthetic aperture holography. The estimated errors are denoted in the processes.


3.1. Piston phase compensation

A secondary camera is used to eliminate the piston phase errors φ_c^{ij} in the measurement. The secondary FPA is set up to monitor the piston phase fluctuations of the WA hologram field. The piston phase errors φ_c^{ij} are estimated as

$$\varphi_c^{ij} = \operatorname{angle}\Big\{ \sum_{u,v} \operatorname{sign}\big\{ s^{ij}(u,v)\,/\,s^{11}(u,v) \big\} \Big\},$$
where s^{ij}(u,v) denotes the complex hologram image of the (i,j)-th hologram patch. To obtain the field variation relative to the first hologram image, s^{ij}(u,v) is divided by s^{11}(u,v). The MATLAB function sign normalizes each complex ratio to unit modulus, so the phase estimate is not corrupted by phase wrapping; the MATLAB function angle returns the phase angle, in radians, of a complex element.
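The piston phase estimator above can be sketched directly in Python, using the NumPy equivalents of MATLAB's sign and angle (the function name is ours):

```python
import numpy as np

def piston_phase(s_ij, s_11):
    """Estimate the piston phase of patch (i,j) relative to the first
    patch: normalize the pointwise ratio to unit modulus (MATLAB's sign
    for complex arguments) and sum before taking the angle, which
    averages out noise without phase-unwrapping ambiguities."""
    ratio = s_ij / s_11
    unit = ratio / np.abs(ratio)       # complex 'sign': e^{j * phase difference}
    return np.angle(np.sum(unit))
```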

3.2. Hologram block synthesis (hologram patch based process)

Recall that a hologram block is a specified group of hologram patches within the WA hologram. We alleviate the detector registration errors by registering the hologram patches in each block, shifting each patch by a few pixels that are to be estimated. Since the reference field change is negligible within a hologram block (so the associated errors are constant within a block), the detector registration errors are the main source of degradation in the block synthesis.

The object field E_o^{mn} of the (m,n)-th hologram block is backpropagated with the angular spectrum method [10] from the corrected field measurements D̃^{ij}(u − e_{dp,u}^{ij}, v − e_{dp,v}^{ij}; z_d, z_r), where i = 1, ..., I and j = 1, ..., J index the hologram patches belonging to the (m,n)-th block:

$$E_o^{mn}(x,y;\, e_{dp,u}, e_{dp,v}) = \mathcal{F}^{-1}\Big\{ \mathcal{F}\big\{ R^{mn}(u,v;z_r) \times \tilde{D}^{mn}(u - e_{dp,u},\, v - e_{dp,v};\, z_d, z_r) \big\}\, e^{-j z_d \sqrt{k^2 - k_u^2 - k_v^2}} \Big\},$$
where R^{mn}(u,v;z_r) is the numerically generated spherical field of the hologram block size. The backpropagated field image is evaluated with the sharpness metric [14], expressed as
$$\Omega_{SM}(e_{dp}) = \sum_{(x,y)\in G} I(x,y;\, e_{dp})^{0.5},$$
$$e_{dp} = [e_{dp,u}^{11},\, e_{dp,v}^{11},\, \ldots,\, e_{dp,u}^{IJ},\, e_{dp,v}^{IJ}]^T,$$
where the image intensity is defined as I(x,y; e_{dp,u}, e_{dp,v}) = E_o^{mn}(x,y) · E_o^{*,mn}(x,y) and G is the set of coordinates of the area containing the guiding features. The sharpness metric favors the concentration of energy at a few points, distinguishing the true image from plausible alternatives. The detector registration errors of the hologram patches are estimated and corrected by minimizing the sharpness metric Ω_SM(e_dp) over the guiding feature images,
$$\hat{e}_{dp} = \arg\min_{e_{dp}} \Omega_{SM}(e_{dp}).$$
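A brute-force version of this patch-registration search can be sketched as follows. The `reconstruct` closure here is a toy stand-in of our own design, modeling misregistration as blur rather than performing actual holographic backpropagation; only the metric and the search mirror the text:

```python
import numpy as np
from itertools import product

def sharpness(image_intensity):
    """Omega_SM: sum of sqrt(intensity) over the evaluated region. For a
    fixed total energy this is smallest when the energy concentrates on
    a few pixels, i.e. when the image is sharp."""
    return np.sum(np.sqrt(image_intensity))

def estimate_patch_shift(reconstruct, search_radius=5):
    """Brute-force search over integer pixel shifts (the paper searches a
    5-pixel range) for the registration correction whose reconstruction
    minimizes the sharpness metric. reconstruct(du, dv) must return the
    image intensity for a candidate correction."""
    shifts = range(-search_radius, search_radius + 1)
    return min(product(shifts, shifts),
               key=lambda s: sharpness(reconstruct(*s)))

def make_toy_reconstruct(true_shift=(2, -1), size=33):
    """Toy stand-in for holographic reconstruction: misregistration is
    modeled as a blur whose width grows with the residual shift error."""
    y, x = np.mgrid[:size, :size] - size // 2
    def reconstruct(du, dv):
        err = np.hypot(du - true_shift[0], dv - true_shift[1])
        sigma = 1.0 + err
        img = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
        return img / img.sum()                 # fixed total energy
    return reconstruct
```

On this toy model, `estimate_patch_shift(make_toy_reconstruct())` recovers the true shift (2, −1).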

3.3. Hologram synthesis (hologram block based process)

After the hologram block synthesis, we estimate the detector registration errors and the reference field errors of the hologram blocks. Both errors dominate the WA hologram synthesis because the hologram blocks suffer from both phase instability and registration errors.

A block B_mn lies in the m-th row and n-th column of the matrix of blocks. The detector registration errors of the hologram blocks are defined as e_db^{mn} = (e_{db,u}^{mn}, e_{db,v}^{mn}), where m = 1, ..., M and n = 1, ..., N. The WA hologram field measurement is then expressed by summing the estimated hologram blocks D̃^{mn}(u,v;z_d,z_r),

$$D(u,v;z_d,z_r) = \sum_{m,n} \tilde{D}^{mn}(u - e_{db,u}^{mn},\, v - e_{db,v}^{mn};\, z_d, z_r).$$
The transverse errors of the hologram blocks, e_t^{mn} = (e_{t,u}^{mn}, e_{t,v}^{mn}) for all i, j ∈ B_mn, are also considered. The estimated WA reference field is expressed as
$$R(u,v;z_r) = \sum_{m,n} \tilde{R}^{mn}(u - e_{t,u}^{mn},\, v - e_{t,v}^{mn};\, z_r).$$

The estimated scattered field E_s(u,v;z_d) is obtained by multiplying the estimated WA hologram field measurement by the estimated WA reference field,

$$E_s(u,v;\, e_{db,u}, e_{db,v}, e_{t,u}, e_{t,v}) = \tilde{R}(u - e_{t,u},\, v - e_{t,v};\, z_r) \times \tilde{D}(u - e_{db,u},\, v - e_{db,v};\, z_d, z_r).$$
Then the estimated WA object field E_o(x,y) is obtained by the backpropagation method,
$$E_o(x,y;\, e_{db,u}, e_{db,v}, e_{t,u}, e_{t,v}) = \mathcal{F}^{-1}\Big\{ \mathcal{F}\big\{ E_s(u,v;\, e_{db,u}, e_{db,v}, e_{t,u}, e_{t,v}) \big\}\, e^{-j z_d \sqrt{k^2 - k_u^2 - k_v^2}} \Big\}.$$

To estimate the errors e_{db,r} = (e_{db,u}, e_{db,v}, e_{t,u}, e_{t,v}), we again apply the sharpness metric [14] to the guiding feature images,

$$\Omega_{SM}(e_{db,r}) = \sum_{(x,y)\in G} I(x,y;\, e_{db,r})^{0.5},$$
$$e_{db,r} = [e_{db,u}^{11},\, e_{db,v}^{11},\, e_{t,u}^{11},\, e_{t,v}^{11},\, \ldots,\, e_{db,u}^{MN},\, e_{db,v}^{MN},\, e_{t,u}^{MN},\, e_{t,v}^{MN}]^T.$$
Thus, the estimation is performed as,
$$\hat{e}_{db,r} = \arg\min_{e_{db,r}} \Omega_{SM}(e_{db,r}).$$

3.4. Reference field estimation

After the preceding error estimation steps, only the reference field discrepancy remains. To generate a phase estimate, 2D Chebyshev polynomials are used; they require fewer terms than Zernike polynomials to represent a phase over a 2D rectangular aperture. The phase estimate has the form

$$\varphi_r(u,v;\, C_k) = \sum_k C_k P_k(u,v),$$
where P_k(u,v) and C_k are the Chebyshev basis functions and coefficients, respectively. Here Chebyshev polynomials of the first kind are used up to 4th order (15 terms, representing piston, tilt, focal shift, primary coma, and primary spherical phase error).
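The Chebyshev phase screen can be sketched with NumPy's polynomial module. The term ordering and the normalization of (u, v) to [−1, 1] are our assumptions; the paper does not specify them:

```python
import numpy as np
from numpy.polynomial.chebyshev import chebval2d

def chebyshev_phase(u, v, coeffs, order=4):
    """Evaluate phi_r(u, v) = sum_k C_k P_k(u, v), where the P_k are
    products T_m(u) T_n(v) of first-kind Chebyshev polynomials with
    total order m + n <= order (15 terms for order 4). u and v must be
    scaled to [-1, 1] over the aperture; coeffs lists C_k in order of
    increasing total degree."""
    c = np.zeros((order + 1, order + 1))
    k = 0
    for total in range(order + 1):             # enumerate terms by total order
        for m in range(total + 1):
            c[m, total - m] = coeffs[k]
            k += 1
    return chebval2d(u, v, c)
```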

The estimated scattered field E_s(u,v;z_d) is multiplied by the phase estimate, and the phase coefficients are estimated by minimizing the sharpness metric on the guiding feature images. The estimated WA object field E_o(x,y) is obtained by the backpropagation method,

$$E_o(x,y;\, C_k) = \mathcal{F}^{-1}\Big\{ \mathcal{F}\big\{ e^{j\varphi_r(u,v)}\, E_s(u,v;z_d) \big\}\, e^{-j z_d \sqrt{k^2 - k_u^2 - k_v^2}} \Big\}.$$
The sharpness metric [14] is again,
$$\Omega_{SM}(C_k) = \sum_{(x,y)\in G} I(x,y;\, C_k)^{0.5}.$$
Finally, the estimation is,
$$\hat{C}_k = \arg\min_{C_k} \Omega_{SM}(C_k),$$
where the estimation starts with the initial phase coefficients of zeros.

4. Experiment

We designed our experiment to demonstrate coherent aperture synthesis in digital holography. The optical setup comprises field generation and field detection. In the field generation, three beams were used: two for the object and reference illumination of off-axis holography, and one for illuminating the guiding features of image-based synthetic aperture holography. A HeNe laser with a wavelength of 633 nm and a power of 20 mW provided the monochromatic light source for the hologram measurement, as shown in Fig. 3. The laser beam was split by beam splitters (BS1 and BS2), producing two object beams and one reference beam.


Fig. 3 Experimental setup: (a) the field generation consists of a HeNe (633 nm laser), M (mirrors), BS1–5 (beam splitters), AF (1951 USAF resolution target), OBJ+P (microscopic objective lens and pinhole), and L (lens), and (b) photographs of the field generation system (left) and the field detection system (right). Object 1 is a performance test object and Object 2 is a depth imaging object.


In the reference beam path, two mirrors were used to step down the reference beam while maintaining the laser polarization perpendicular to the optical table. Polarization directly affects the fringe visibility of the interference, so maintaining it is critical for a highly visible hologram measurement. The reference beam was then guided by a mirror and spatially filtered by a 25 μm pinhole and a 0.65 numerical aperture (NA) microscope objective. The high-NA objective generates a wide spherical field at the detector plane. Note that the center of the reference field was vertically 100 mm below the center of the guiding features and axially 2.032 m (within ±2 mm) from the surface of the FPA.

In the object beam for the guiding features, the beam after the beam splitter (BS2) was filtered and collimated by a microscope objective, a pinhole, and an f/3 lens to illuminate the 2D object features. The filtered and expanded beam was further split into three guiding-feature illumination beams by two beam splitters (BS3 and BS4). In the other object beam, for the target object, the beam split by BS1 was guided to illuminate the target through a 2 inch beam splitter (BS5). A lens (L) diverged the beam to illuminate the full extent of the target object. The generated object and reference fields then interfere, forming a hologram field in the detector plane.

A 1951 USAF resolution target (AF target) was used as the 2D guiding features, quantifying our estimation process in terms of image resolution. A diffuser placed behind the AF target spreads the scattered field more uniformly, producing speckle patterns. Fig. 3 shows three AF targets in total: the two AF targets at the sides were used as the guiding features, and the one at the center was used as the test object for the hologram synthesis. Note that the axial position of the guiding features is 2.034 m (within ±2 mm) from the surface of the FPA and 2 mm from the reference point source. The individual AF targets were placed 30 mm apart horizontally in the same object plane.

A reflective 2D object was added to demonstrate depth imaging in the synthetic aperture holography. A computer CPU chip was placed 35 mm below the AF targets' vertical location and 1.99 m from the surface of the FPA along the optical axis. The logo inscription on the CPU chip was illuminated with a beam 22 mm in diameter.

The theoretical resolution and FOV are determined by the number of pixels and the pixel pitch of the sensor array through the Fraunhofer diffraction formulas [11]:

$$\delta x = \frac{\lambda z}{N \delta u},$$
$$\Delta x = \frac{\lambda z}{\delta u},$$
where λ is the illumination wavelength, z is the propagation distance, δu is the pixel pitch, and N is the number of pixels. Our experiments use a pixel pitch of 4.4 μm, corresponding to the monochrome CCD of the Point Grey GRAS-20S4M-C FPA. We use a square patch of 1200×1200 pixels on the array, so a single-aperture hologram captures a 5.28×5.28 mm aperture and a 12×12 synthetic aperture captures a 63.36×63.36 mm aperture. At the object range of 2 m, the theoretical resolution and FOV are 20 μm and 288×288 mm (where N = 12 hologram patches × 1200 pixels/patch = 14400 pixels).

In practice, the FOV is reduced to account for the separation of the reference and object fields. In the Fourier filtering of off-axis holography, only one fourth of the total hologram bandwidth is used to avoid the effect of undesired signals. The numerical backpropagation method also limits the FOV because of the way it is analytically derived [15]. In the angular spectrum method, the effective resolution and FOV are independent of the propagation range z:

$$\delta x_{\mathrm{eff}} = \delta u,$$
$$\Delta x_{\mathrm{eff}} = N \delta u.$$
Thus, the effective FOV of the 12×12 synthetic aperture holography becomes 63.36×63.36 mm, equal to the synthetic aperture size, and the effective image pixel resolution is 4.4 μm, equal to the pixel pitch of the FPA. The angular spectrum method was used because the Fresnel approximation method is designed for a small-FOV object at the center.
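The quoted numbers follow directly from these formulas; a quick check (variable names are ours):

```python
# Quick check of the quoted numbers from the Fraunhofer formulas above.
wavelength = 633e-9          # HeNe wavelength (m)
z = 2.0                      # object range (m)
pitch = 4.4e-6               # FPA pixel pitch delta_u (m)
N = 12 * 1200                # pixels per axis of the synthetic aperture

aperture = N * pitch                     # synthetic aperture width
delta_x = wavelength * z / (N * pitch)   # diffraction-limited resolution
fov = wavelength * z / pitch             # theoretical field of view
print(f"aperture = {aperture * 1e3:.2f} mm")                       # 63.36 mm
print(f"delta_x  = {delta_x * 1e6:.1f} um")                        # 20.0 um
print(f"FOV      = {fov * 1e3:.0f} mm")                            # 288 mm
print(f"speckle-limited resolution = {3 * delta_x * 1e6:.0f} um")  # 60 um
```

The speckle factor of 3 applied in section 5 turns the 20 μm diffraction limit into the 60 μm figure quoted in the abstract.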

4.1. Stereo camera system for piston phase compensation

A stereo camera system is designed to compensate for the piston phase errors of the 144 hologram patches. One static camera is placed 50 mm from the center of the hologram scanning area, as shown in Fig. 1(a) (see also Fig. 3), to record the piston phase fluctuation of the hologram field.

The piston phase fluctuation over time is the dominant phase instability in the scanning measurement. Its effect is that the reconstructed images from the hologram patches can combine destructively, degrading the image resolution below the theoretical limit.

The idea of the stereo camera system is based on the assumption that the hologram field shares common phase fluctuations over the scanning area. Thus, the piston phase at two distant locations will be highly correlated such that the static camera can be used to estimate the piston phase fluctuations.

To verify the validity of this assumption, two fixed cameras were tested. Both cameras simultaneously captured 25 image frames at two-second intervals. Using Eq. (13), the piston phase fluctuations shown in Fig. 4 were obtained. The continuous red and dotted blue lines are the relative phase variations of camera 1 and camera 2, respectively; the two plots are strongly correlated over the frames.


Fig. 4 Piston phase correlation in two distant cameras: the red continuous line is the relative phase variation of camera 1 and the blue dotted line is the relative phase variation of camera 2.


4.2. Reinitialization points scheme for hologram scanning

A reinitialization-point scheme assigns an initial measurement point to each block: 2×2 reinitialization points are set to cover the 2D WA hologram area, as shown in Fig. 1(b). The 2D WA hologram area is equivalent to a 12×12 array of hologram patches without any overlap. Each block, composed of 6×6 hologram patches, is raster scanned starting from its reinitialization point. The number of hologram patches per block is chosen so that each block contains only localized detector registration errors.

This scanning scheme is designed to support the computational methods for hologram synthesis: since the WA hologram is scanned block by block, the detector registration errors dominate within a block, while for the WA hologram synthesis we estimate the detector registration errors and the reference field errors between blocks. In the measurement, we used a 600 mm two-axis translation stage (Newport M-IMS600CC) with a specified mechanical resolution of 1.25 μm and a bi-directional repeatability of 1.0–2.5 μm. However, the guaranteed accuracy is 15 μm, and the inaccuracy accumulates linearly along the translation axis.
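The reinitialized raster scan can be sketched as follows (a schematic of the nominal scan geometry; the function name and pixel units are ours):

```python
def scan_positions(blocks=(2, 2), patches=(6, 6), patch_size=1200):
    """Enumerate nominal FPA positions (in pixels) for the reinitialized
    raster scan: the stage returns to each block's reinitialization
    corner, then raster-scans that block's 6x6 patches, so registration
    errors accumulate only within a block."""
    positions = []
    for bm in range(blocks[0]):
        for bn in range(blocks[1]):
            # Reinitialization point: top-left corner of block (bm, bn).
            u0 = bm * patches[0] * patch_size
            v0 = bn * patches[1] * patch_size
            for i in range(patches[0]):
                for j in range(patches[1]):
                    positions.append((u0 + i * patch_size,
                                      v0 + j * patch_size))
    return positions
```

With the defaults this yields the 144 patch positions of the 12×12, 14400-pixel-wide synthetic aperture.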

5. Processes and Results

The data processing of image-based synthetic aperture holography followed the computational methods described in section 3. To process the WA hologram data (14400 × 14400 pixels), a Dell Precision T5500 was used with an Intel Xeon CPU at 2.27 GHz, 48 GB RAM, and the Windows 7 64-bit operating system.

The processing time was dominated by the hologram block synthesis, the WA hologram synthesis, and the reference field estimation. The hologram block synthesis searched for the detector registration errors within a range of 5 pixels while minimizing the sharpness metric; the range was set by the 15 μm guaranteed accuracy of the translation stage. To speed up the estimation, each row of a hologram block was assumed to have identical detector registration errors, which is reasonable since the translation stage achieves bi-directional repeatability in adjacent row measurements. The detector registration errors were then found by sweeping the possible transverse errors within this range, which took about one hour.

Both the WA hologram synthesis and the reference field estimation used the unconstrained multivariate minimum search in MATLAB. The algorithm uses a quasi-Newton line search, which converges stably but slowly. The WA hologram synthesis required about 4 hours for 5 iterations, and the reference field estimation took about 6 hours for 5 iterations.

In the experiment, two transversely separated AF targets were used as the guiding features to avoid local minima in the estimation. Fig. 5 shows the evolution of the estimation over the 12×12 hologram patches. The raw data image suffers from all the errors θ_e^{ij} = [φ_c^{ij}, e_dp^{ij}, e_db^{ij}, e_t^{ij}, φ_r^{ij}(u,v)]^T, resulting in the periodic ghost images and blurs in Fig. 5(a). Compensating for the piston phase errors φ_c^{ij} effectively mitigated the ghost images; however, ghosts and blurs still remain in Fig. 5(b). Estimating the detector patch errors e_dp^{ij} removed the ghost images in Fig. 5(c). Due to the remaining blur, the two AF targets resolve only the features (group 2, element 5), whose resolution corresponds to 158 μm.


Fig. 5 Evolution of estimation effects on the guiding features image: the left and right hand guiding features of (a) the raw data image, (b) the piston phase compensated image, (c) the hologram block synthesized image, (d) the WA hologram synthesized image, (e) the zoomed-in image of (d), and (f) the zoomed-in image of reference field estimated image. Also, (g) the estimated reference field discrepancy. Note that the image pixel resolution is 4.4 μm in the angular spectrum method.


In the WA hologram synthesis, the detector block errors and reference field errors of the hologram blocks, [e_db^{ij}, e_t^{ij}], were estimated, resolving the features of group 3 in Fig. 5(d). Finally, the estimation of the reference field discrepancy restores the resolution of the features (group 4, element 1), corresponding to the theoretical resolution of 62.5 μm, in Fig. 5(f). Also, Fig. 5(e) and (f) show the effect of the reference field discrepancy estimation on the zoomed-in images: the estimation helps to resolve the features (group 4, element 1) marked by a red circle. The estimated phase of the reference field is shown in Fig. 5(g). Note that the quoted theoretical resolution is obtained by multiplying the diffraction-limited resolution by a speckle factor of 3 [11, 16].

Fig. 6 shows the evolution of image resolution with the number of hologram patches in the estimated guiding features. The image of the 1×1 hologram patch barely resolves any feature (groups 2 and 3) in Fig. 6(a) and (d). The image of the 3×3 hologram patches resolves the features (group 2, element 1), whose resolution corresponds to the theoretical resolution of 250 μm, in Fig. 6(b) and (e). The image of the 12×12 hologram patches resolves the features (group 4, element 1), whose resolution corresponds to the theoretical resolution of 62.5 μm, in Fig. 6(c) and (f). Here we used the speckle-affected resolution [16].


Fig. 6 Resolution improvement to the number of hologram patches in the guiding features: (a), (b), and (c) show the images of 1×1, 3×3, and 12×12 hologram patches in the left-hand AF targets. Also, (d), (e), and (f) show the images of 1×1, 3×3, and 12×12 hologram patches in the right-hand AF targets.


Another experiment demonstrated the holographic images of three AF targets and one reflective 2D object. A zoom-in movie proceeds from a full-FOV image at 63.4 × 63.4 mm to a zoomed-in image at 2.1 × 2.1 mm (Fig. 7). The starting image is 14400 × 14400 pixels, and the final image is 480 × 480 pixels; the images were downsampled to 480 × 480 pixels by bicubic interpolation. The two AF targets at the sides were used as the guiding features to estimate the synthesis errors, and the CPU chip was the object for depth imaging. The two AF targets were transversely separated by 60 mm on the same object plane.


Fig. 7 (Media 1) Zoom-in movie starts from a full FOV image at 63.4 × 63.4 mm to a zoomed-in image at 2.1 × 2.1 mm. The zoomed image focuses on the CPU chip.


Fig. 8 shows the images estimated by the hologram synthesis. The two AF targets at the sides (see Fig. 8(a) and (c)) were used as the guiding features, and the center one (Fig. 8(b)) was used as the performance test target. Unlike the unestimated images in Fig. 8(a) and (c), the estimated images in Fig. 8(d) and (f) show mitigated ghost images and blurs. In Fig. 8(e), the numbers in group 2 of the center AF target are also legible. Thus, the estimation strategy using the guiding features is verified to be useful in synthetic aperture holography. Compared with the images of Fig. 5, the resolution here is degraded by the limited dynamic range of the detector: the increased field signal saturates the detector more easily as the number of objects increases.


Fig. 8 The images of the AF targets in the depth imaging experiment: (a) left-hand guiding features, (b) performance test features, and (c) right-hand guiding features of the raw data. Also, (d) left-hand guiding features, (e) performance test features, and (f) right-hand guiding features of the hologram synthesis.


Fig. 9 shows the feasibility of depth imaging. The logo inscription of a CPU chip is brought into focus by backpropagating the estimated synthetic aperture hologram. The resolution improvement with the number of hologram patches is presented in Fig. 9(a), (b), and (c): the more hologram patches are synthesized, the smaller the letters that become readable. Fig. 9(d) and (e) show the effect of error estimation on the in-focus image; the zoomed-in image has better sharpness in the estimated case in Fig. 9(e). Fig. 9(f) shows an incoherent baseline image of the logo inscription.


Fig. 9 The image of the CPU chip: resolution improvement to the number of hologram patches showing the images of (a) the 1×1 hologram patch, (b) the 3×3 hologram patches, and (c) the 12×12 hologram patches. Also, comparison of the images of (d) the un-synthesized hologram, (e) the synthesized hologram, and (f) the real photograph. Note that this object is placed 44 mm closer to the detector plane than the guiding features’ plane.


In Fig. 10, the monitored piston phase errors show a temporal drift over the 144 scanned hologram patches. The hologram block synthesis estimated the horizontal and vertical detector errors as $e_{dp,u}^{ij} = 0$ and $e_{dp,v}^{ij} = 1i$, respectively (for m = 1 and n = 1). The other blocks showed the same detector errors as the block with m = 1 and n = 1. Table 1 lists the detector registration errors and reference field errors estimated for the hologram blocks, and Table 2 lists the estimates for the reference field discrepancy.
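A minimal sketch of the piston-drift idea: if two complex patches observe the same field up to a piston phase, that phase can be read off as the angle of their normalized pointwise correlation. This is a simplified stand-in for the paper's $\varphi_c$ estimator, not its exact implementation.

```python
import numpy as np

def piston_phase(patch, ref_patch):
    """Estimate the relative piston phase between two complex patches as the
    angle of the summed, unit-normalized pointwise correlation."""
    s = patch * np.conj(ref_patch)
    return np.angle(np.sum(s / np.abs(s)))   # each term has unit magnitude

rng = np.random.default_rng(0)
ref = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
patch = ref * np.exp(1j * 0.7)               # same field with 0.7 rad piston drift
print(round(piston_phase(patch, ref), 3))    # recovers ~0.7 rad
```

Multiplying each patch by $e^{-j\hat{\varphi}_c}$ before synthesis removes the drift shown in Fig. 10.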


Fig. 10 The monitored piston phase variation of scanned 144 hologram patches.



Table 1. The estimated parameters of the detector registration errors and the reference field errors for the WA hologram synthesis.


Table 2. The Chebyshev coefficients for the reference field discrepancy.

6. Conclusion

This paper described a method to compensate for scanning effects in image-based synthetic aperture holography. We used the method to restore near diffraction-limited resolution over a 63.4 × 63.4 mm synthetic aperture. This result suggests that high pixel count imaging at the gigapixel scale may be achieved with available computational power and memory. Depth imaging was also demonstrated using a reflective object placed 44 mm from the guiding features' plane, which implies that the hologram synthesis is valid for reconstructing 3D spatial information; synthetic aperture holography can therefore be used to image 3D samples. Since the synthetic aperture increases the numerical aperture of the measurement system, both the transverse and depth resolution improve.

Acknowledgments

This research was supported by DARPA under AFOSR contract FA9550-06-1-0230. The authors thank James Fienup for helpful suggestions.

References and links

1. C. W. Sherwin, P. Ruina, and R. D. Rawcliffe, “Some early developments in synthetic aperture radar systems,” IRE Trans. Mil. Electron. 6, 111–115 (1962). [CrossRef]  

2. L. G. Brown, “A survey of image registration techniques,” ACM Comput. Surv. 24, 325–376 (1992). [CrossRef]

3. L. Romero and F. Calderon, A Tutorial on Parametric Image Registration (I-Tech, 2007).

4. J. H. Massig, “Digital off-axis holography with a synthetic aperture,” Opt. Lett. 27, 2179–2181 (2002). [CrossRef]

5. R. Binet, J. Colineau, and J.-C. Lehureau, “Short-range synthetic aperture imaging at 633 nm by digital holography,” Appl. Opt. 41, 4775–4782 (2002). [CrossRef]   [PubMed]  

6. J. R. Fienup and J. J. Miller, “Aberration correction by maximizing generalized sharpness metrics,” J. Opt. Soc. Am. A 20, 609–620 (2003). [CrossRef]  

7. J. W. Goodman, Speckle Phenomena in Optics - Theory and Applications (Roberts and Company, 2007).

8. V. Mico, Z. Zalevsky, C. Ferreira, and J. García, “Superresolution digital holographic microscopy for three-dimensional samples,” Opt. Express 16, 19260–19270 (2008). [CrossRef]

9. H. Jiang, J. Zhao, J. Di, and C. Qin, “Numerically correcting the joint misplacement of the sub-holograms in spatial synthetic aperture digital Fresnel holography,” Opt. Express 17, 18836–18842 (2009). [CrossRef]  

10. J. W. Goodman, Introduction to Fourier Optics, 3rd ed. (Roberts and Company, 2005).

11. D. J. Brady, Optical Imaging and Spectroscopy (Wiley, 2009). [CrossRef]  

12. U. Schnars and W. P. O. Juptner, “Digital recording and numerical reconstruction of holograms,” Meas. Sci. Technol. 13, R85–R101 (2002). [CrossRef]  

13. B. Javidi, P. Ferraro, S.-H. Hong, S. De Nicola, A. Finizio, D. Alfieri, and G. Pierattini, “Three-dimensional image fusion by use of multiwavelength digital holography,” Opt. Lett. 30, 144–146 (2005). [CrossRef]   [PubMed]  

14. S. T. Thurman and J. R. Fienup, “Phase-error correction in digital holography,” J. Opt. Soc. Am. A 25, 983–994 (2008). [CrossRef]  

15. T. M. Kreis, M. Adams, and W. P. O. Jueptner, “Methods of digital holography: a comparison,” Proc. SPIE 3098, 224–233 (1997). [CrossRef]  

16. A. Kozma and C. R. Christensen, “Effects of speckle on resolution,” J. Opt. Soc. Am. 66, 1257–1260 (1976). [CrossRef]

Supplementary Material (1)

Media 1: MOV (2366 KB)     



Figures (10)

Fig. 1 Schematic for image-based synthetic aperture holography: (a) the scattered field $E_s$ and the reference field $R$ form a hologram in the detector plane after propagating the distances $z_d$ and $z_r$, respectively, and (b) the dynamic camera scans patch by patch within a reinitialized block. Note that $\tilde{A}^{ij}$ denotes the error-impacted measurement of a measurement subset $A^{ij}$.

Fig. 2 Flow chart of the error estimation processes for image-based synthetic aperture holography. The estimated errors are denoted in the processes.

Fig. 3 Experimental setup: (a) the field generation consists of a HeNe laser (633 nm), M (mirrors), BS1–5 (beam splitters), AF (1951 USAF resolution target), OBJ+P (microscope objective and pinhole), and L (lens); (b) photographs of the field generation system (left) and the field detection system (right). Object 1 is the performance test object and Object 2 is the depth imaging object.

Fig. 4 Piston phase correlation in two distant cameras: the red continuous line is the relative phase variation of camera 1 and the blue dotted line is that of camera 2.

Fig. 5 Evolution of estimation effects on the guiding features image: the left- and right-hand guiding features of (a) the raw data image, (b) the piston phase compensated image, (c) the hologram block synthesized image, (d) the WA hologram synthesized image, (e) the zoomed-in image of (d), and (f) the zoomed-in image of the reference field estimated image. Also, (g) the estimated reference field discrepancy. Note that the image pixel resolution is 4.4 μm in the angular spectrum method.

Fig. 6 Resolution improvement with the number of hologram patches in the guiding features: (a), (b), and (c) show the images of 1×1, 3×3, and 12×12 hologram patches for the left-hand AF target. Also, (d), (e), and (f) show the images of 1×1, 3×3, and 12×12 hologram patches for the right-hand AF target.


Equations (32)

$$E_s(u,v;z_d) = (E_o \ast h)(u,v;z_d) = \iint E_o(x,y)\, h(u-x,\, v-y;\, z_d)\, dx\, dy$$
$$h(u,v;z) = \left[ \frac{jz}{\lambda\,(z^2+u^2+v^2)} + \frac{z}{2\pi\,(z^2+u^2+v^2)^{3/2}} \right] e^{\,j\frac{2\pi}{\lambda}\sqrt{z^2+u^2+v^2}}$$
$$\mathcal{F}\{ h(u,v;z) \} = e^{\,j\frac{2\pi z}{\lambda}\sqrt{1-(\lambda f_u)^2-(\lambda f_v)^2}}$$
$$I(u,v;z_d,z_r) = \left| R(u,v;z_r) + E_s(u,v;z_d) \right|^2$$
$$I^{ij}(u,v;z_d,z_r) = \left| R^{ij}(u,v;z_r) + E_s^{ij}(u,v;z_d) \right|^2, \quad (u,v)\in A^{ij} = |R^{ij}(u,v;z_r)|^2 + |E_s^{ij}(u,v;z_d)|^2$$
$$\qquad + R^{*,ij}(u,v;z_r)\, E_s^{ij}(u,v;z_d) + R^{ij}(u,v;z_r)\, E_s^{*,ij}(u,v;z_d)$$
$$\tilde{D}^{ij}(u,v;z_d,z_r) = D^{ij}(u,v;z_d,z_r;\theta_e^{ij}) = e^{j\varphi_c^{ij}}\, \tilde{R}^{*,ij}(u,v;z_r)\, \tilde{E}_s^{ij}(u,v;z_d)\, e^{j\varphi_r^{ij}(u,v)}$$
$$\tilde{R}^{ij}(u,v;z_r) = h(u,v;z_r) \ast \delta(u+e_{d,u}^{ij}+e_{f,u}^{ij},\, v+e_{d,v}^{ij}+e_{f,v}^{ij}) = h(u,v;z_r) \ast \delta(u+e_{t,u}^{ij},\, v+e_{t,v}^{ij})$$
$$\tilde{E}_s^{ij}(u,v;z_d) = E_s^{ij}(u+e_{d,u}^{ij},\, v+e_{d,v}^{ij};\, z_d) = \left[ E_i(x,y)\, E_o(x,y) \right] \ast h(u,v;z_d) \ast \delta(u+e_{d,u}^{ij},\, v+e_{d,v}^{ij})$$
$$\theta_e^{ij} = \left[ e_{dp}^{ij},\, e_{db}^{ij},\, e_t^{ij},\, \varphi_r^{ij}(u,v) \right]^T$$
$$\theta_e^{ij} = \left[ e_{db}^{ij},\, e_t^{ij},\, \varphi_t^{ij}(u,v) \right]^T$$
$$\theta_e^{ij} = \left[ \varphi_r^{ij}(u,v) \right]^T$$
$$\varphi_c^{ij} = \operatorname{angle}\Big\{ \sum_u \sum_v \operatorname{sign}\big\{ s^{ij}(u,v) / s^{11}(u,v) \big\} \Big\}$$
$$E_o^{mn}(x,y; e_{dp,u}, e_{dp,v}) = \mathcal{F}^{-1}\Big\{ \mathcal{F}\big\{ R^{mn}(u,v;z_r) \times \tilde{D}^{mn}(u-e_{dp,u},\, v-e_{dp,v};\, z_d, z_r) \big\}\, e^{j z_d \sqrt{k^2-k_u^2-k_v^2}} \Big\}$$
$$\Omega_{SM}(e_{dp}) = \sum_x \sum_y G_I(x,y; e_{dp})^{0.5}$$
$$e_{dp} = \left[ e_{dp,u}^{11}, e_{dp,v}^{11}, \ldots, e_{dp,u}^{IJ}, e_{dp,v}^{IJ} \right]^T$$
$$\hat{e}_{dp} = \arg\min_{e_{dp}} \Omega_{SM}(e_{dp})$$
$$D(u,v;z_d,z_r) = \sum_m \sum_n \tilde{D}^{mn}(u-e_{db,u}^{mn},\, v-e_{db,v}^{mn};\, z_d, z_r)$$
$$R(u,v;z_r) = \sum_m \sum_n \tilde{R}^{mn}(u-e_{t,u}^{mn},\, v-e_{t,v}^{mn};\, z_r)$$
$$E_s(u,v; e_{db,u}, e_{db,v}, e_{t,u}, e_{t,v}) = \tilde{R}(u-e_{t,u},\, v-e_{t,v};\, z_r) \times \tilde{D}(u-e_{db,u},\, v-e_{db,v};\, z_d, z_r)$$
$$E_o(x,y; e_{db,u}, e_{db,v}, e_{t,u}, e_{t,v}) = \mathcal{F}^{-1}\Big\{ \mathcal{F}\big\{ E_s(u,v; e_{db,u}, e_{db,v}, e_{t,u}, e_{t,v}) \big\} \times e^{j z_d \sqrt{k^2-k_u^2-k_v^2}} \Big\}$$
$$\Omega_{SM}(e_{db,r}) = \sum_x \sum_y G_I(x,y; e_{db,r})^{0.5}$$
$$e_{db,r} = \left[ e_{db,u}^{11}, e_{db,v}^{11}, e_{t,u}^{11}, e_{t,v}^{11}, \ldots, e_{db,u}^{MN}, e_{db,v}^{MN}, e_{t,u}^{MN}, e_{t,v}^{MN} \right]^T$$
$$\hat{e}_{db,r} = \arg\min_{e_{db,r}} \Omega_{SM}(e_{db,r})$$
$$\varphi_r(u,v; C_k) = \sum_k C_k P_k(u,v)$$
$$E_o(x,y; C_k) = \mathcal{F}^{-1}\Big\{ \mathcal{F}\big\{ e^{j\varphi_r(u,v)}\, E_s(u,v;z_d) \big\}\, e^{j z_d \sqrt{k^2-k_u^2-k_v^2}} \Big\}$$
$$\Omega_{SM}(C_k) = \sum_x \sum_y G_I(x,y; C_k)^{0.5}$$
$$\hat{C}_k = \arg\min_{C_k} \Omega_{SM}(C_k)$$
$$\delta x = \frac{\lambda z}{N\, \delta u}$$
$$\Delta x = \frac{\lambda z}{\delta u}$$
$$\delta x_{\mathrm{eff}} = \delta u$$
$$\Delta x_{\mathrm{eff}} = N\, \delta u$$
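As a numeric sanity check on the sampling relations above, the following uses assumed illustrative values (633 nm HeNe wavelength, z = 2 m range, 4.4 μm pixel pitch, N = 14400 synthesized pixels per side, so that N·δu ≈ 63.4 mm matches the synthetic aperture size):

```python
# Assumed illustrative values: HeNe wavelength, 2 m range, 4.4 um pitch,
# 14400 synthesized pixels per side.
lam, z, du, N = 633e-9, 2.0, 4.4e-6, 14400

dx_fresnel = lam * z / (N * du)   # Fresnel-scaling image pixel (delta x)
FOV_fresnel = lam * z / du        # Fresnel-scaling field of view (Delta x)
dx_eff, FOV_eff = du, N * du      # angular spectrum: pixel = pitch, FOV = aperture

print(f"aperture = {N*du*1e3:.2f} mm, dx_fresnel = {dx_fresnel*1e6:.1f} um")
```

Note how the aperture-dependent term λz/(Nδu) shrinks as more patches are synthesized, which is the quantitative form of the resolution gain reported for the 63.4 mm aperture.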