Optica Publishing Group

Three dimensional digital holographic aperture synthesis

Open Access

Abstract

Aperture synthesis techniques are applied to temporally and spatially diverse digital holograms recorded with a fast focal-plane array. Because the technique fully resolves the downrange dimension using wide-bandwidth FMCW linear-chirp waveforms, extremely high resolution three dimensional (3D) images can be obtained even at very long standoff ranges. This allows excellent 3D image formation even when targets have significant structure or discontinuities, which are typically poorly rendered with multi-baseline synthetic aperture ladar or multi-wavelength holographic aperture ladar approaches. The background for the system is described and system performance is demonstrated through both simulation and experiments.

© 2015 Optical Society of America

1. Introduction

Three dimensional (3D) imaging is a rapidly expanding research field, with approaches that capitalize on parallel readout architectures seeing significant recent advancement [1]. For example, in flash ladar systems the target is broadly illuminated with a high energy pulsed laser and a receive aperture is used to image the target onto a fast focal-plane array (FPA). The fast FPA typically consists of an array of avalanche photodiodes (APDs) coupled to an advanced read-out-integrated-circuit (ROIC), which also allows it to time resolve the return pulses [2]. Several companies now manufacture fast FPAs operating either in linear or Geiger mode for short- and long-range flash ladar, respectively. Over the past decade, incoherent, direct-detect flash ladar has been used for both terrestrial and airborne applications including mapping and autonomous navigation [1,2]. However, the achievable crossrange resolution in flash ladar is limited by the conventional diffraction limit imparted by the receive aperture size, while the downrange resolution is limited by the effective bandwidth of the ladar system. These resolution constraints typically limit the applicability of flash ladar, especially when ultra-high resolution is required, such as in large-volume metrology applications [1].

Synthetic aperture ladar (SAL) [3], distributed or sparse aperture imaging [4], and holographic aperture ladar (HAL) [5,6] are coherent imaging techniques that coherently combine spatially and temporally diverse target returns to overcome the conventional diffraction limit. By recording individual estimates of the electric field with either a single receiver or a sensor array and by considering the relative motion between the transceiver and the target, these field estimates can be registered or “stitched” in the pupil plane to produce enhanced resolution two dimensional (2D) imagery. To form 3D imagery, two or more baselines in an interferometric SAL system [7], or two or more wavelengths in a distributed aperture or HAL system [8] have been proposed and demonstrated. The unresolved dimension is derived from the measured phase difference between the spatial or wavelength multiplexed 2D images. Because these techniques rely on a few synthetic wavelengths or spatial frequencies, targets with significant structure or discontinuities are poorly rendered. Recent work extended the number of discrete wavelengths to 256 to address these issues [9].

In their paper, Duncan and Dierking briefly suggest that the HAL approach, when paired with a fully coherent ranging system, could overcome this limitation and provide fully resolved 3D imagery while preserving the ability to enhance the crossrange resolution via aperture synthesis [5]. In this paper, we demonstrate, for the first time, fully downrange-resolved coherent imaging with aperture synthesis by use of an FMCW chirped heterodyne ladar system coupled with a fast FPA. We term the approach 3D-HAL for brevity throughout this paper because the approach derives much of its aperture synthesis technique from Ref. 5. However, the approach can also be considered as extending the 3D imaging approach described by Marron and Schroeder by applying aperture synthesis techniques in the lateral dimensions [10].

Because 3D-HAL relies upon a parallel readout architecture, it has advantages similar to direct-detect flash ladar systems. However, because the approach is coherent, it has many additional capabilities including: 1) enhanced crossrange resolution through aperture synthesis approaches, 2) access to ultra-high downrange resolution through stretched processing of chirped waveforms, 3) the ability to utilize digital phase correction and image sharpness metrics for correcting optical or atmospheric aberrations [6], and 4) advanced Doppler or interferometric processing to measure the coherent evolution of the 3D point cloud. The 3D-HAL approach also uses heterodyne detection, which is extremely sensitive when shot-noise-limited, and which enables coherent integration to increase carrier-to-noise ratios. This makes 3D-HAL ideally suited for use in photon-starved environments typical of flash ladar. In this paper, we provide background for the 3D-HAL approach and provide simulation and experimental results for 3D aperture synthesis. We also explore the coherent nature of the point cloud, by tracking the phase evolution of 3D images when the target undergoes thermal expansion, allowing interferometric scale point-cloud resolution.

2. Background and simulation

HAL is a form of digital holography that resolves both transverse dimensions of the scene, but not the downrange dimension [5,6]. Reference 5 provides a thorough theoretical treatment of crossrange aperture synthesis using a sensor array, including both spotlight and stripmap holographic transformations, and the reader is referred there for details of the approach. Through the use of these transformations, the pupil-plane field segments are properly registered so that a larger synthesized complex pupil-plane field is created without residual errors due to the given translational or rotational geometry. The sensor array can be placed in the pupil plane or in the focal plane, since they are connected through a Fourier transform.

In 3D-HAL, we introduce a temporally-modulated, linear-chirp waveform for ranging at each aperture location, which is similar to a SAL system. While any coherent form of laser ranging could be utilized, FMCW chirped heterodyne ranging has the advantage of providing very high downrange resolution with a reduced receiver bandwidth via optical stretched processing [11]. The achievable range resolution, δR, of this technique is limited by the chirp bandwidth, B, and is described by the relation, δR=c/(2B), where c is the speed of light.
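The range-resolution relation above is straightforward to evaluate; the short sketch below applies it to the 102 GHz chirp bandwidth used in the experiments of Section 3:

```python
# Downrange resolution of FMCW stretched processing: delta_R = c / (2 B).
# The 102 GHz bandwidth below is taken from the experiments in this paper;
# any coherent linear chirp obeys the same relation.
c = 299_792_458.0  # speed of light (m/s)

def range_resolution(bandwidth_hz: float) -> float:
    """Downrange resolution limit for a linear chirp of the given bandwidth."""
    return c / (2.0 * bandwidth_hz)

delta_r = range_resolution(102e9)  # ~1.47 mm
```

This ~1.5 mm figure is consistent with the downrange voxel size reported for the experimental system in Section 3.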

Figure 1(a) shows a schematic diagram for a generalized 3D-HAL system. Here a chirped transmit source, after being suitably amplified, flood illuminates a target. A fast FPA is used to record both the temporal and spatial interference between a local oscillator (not shown) and the complex return field for a given location along the dimension of motion. This is referred to as a field segment, go(x,y,tf), where x and y are crossrange coordinates and tf is the time during which the temporal interference is recorded (i.e. “fast time” in SAL formalism). Throughout this paper, we refer to sensor arrays that are capable of recording this temporal interference as “fast FPAs” to distinguish them from more traditional sensor arrays that operate with very slow frame rates. However, in 3D-HAL the fast FPAs need not be restricted to being placed in the “focal plane” as the acronym suggests.

Fig. 1 (a) Schematic of a generalized 3D-HAL system. A chirped transmit source flood illuminates a target (Tx) and the fast focal plane array (FPA) records the interference between a local oscillator (not shown) and the complex return field at various positions along a line of travel (labeled a-g). The total synthetic aperture, dSA, provides enhanced resolution along the direction of platform motion. (b) The sequence of chirp waveforms, the temporally dependent field segment for chirp b, and the fully registered synthetic field for all FPA positions or “data cube”, which is the fully sampled Fourier space representation of the scene.

Aperture synthesis depends on translation between the speckle field and the receive aperture in the aperture plane. A multitude of configurations achieve this effect [3–7]. In our analysis, the transceiver locations are labeled a-g in Fig. 1(a) and we assume that the transmitter moves with the fast FPA in a monostatic configuration. However, this assumption is not required, as bi-static and multi-static configurations are also possible [4,7]. The synthetic aperture size, dSA, is shown and the enhanced crossrange resolution is nominally δCR=λR/(2dSA+dAP), where λ is the carrier wavelength, R is the range to target, dAP is the fast FPA width in the dimension of travel, and the step size between aperture locations is assumed to be dAP/2 [5].

Figure 1(b) shows the corresponding sequence of chirp waveforms and an example recorded field segment for aperture location “b” is drawn as a small cube. It is important to note that the fast FPA samples the field spatially at the pixel spacing in the aperture plane, while the frame rate of the fast FPA gives the temporal sampling. The pixel spacing sets the overall field-of-view of the system in the crossrange dimensions, while the frame rate is proportional to the maximum range difference that can be measured in the downrange dimension. Each recorded field segment is subsequently registered during a post-processing step with the other field segments, resulting in a larger synthesized or fully registered field estimate as shown in the bottom of Fig. 1(b). We also refer to this structure as a “data cube”. Registration can either be performed by measuring the aperture positions with high precision or by using data driven registration techniques with sufficient overlap between field segments. Even after registration, the field segments typically remain incoherent with each other because the aperture motion must technically be compensated to less than the optical carrier wavelength, which is often not physically possible. Phase gradient autofocus or prominent point algorithms provide coherence across field segments and the data can be further enhanced using digital image correction for atmospheric or other aberrations [6].
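When the aperture positions are measured with high precision, the registration step described above amounts to writing each complex field segment into a larger synthesized pupil array at its known pixel offset. The sketch below shows this minimal case; it deliberately omits the sub-wavelength phasing (e.g. phase gradient autofocus) that the text notes is needed for full coherence across segments, and all array sizes are arbitrary:

```python
import numpy as np

# Minimal sketch of pupil-plane registration with known aperture positions:
# each complex field segment is placed into a larger synthesized array at its
# measured (row, col) pixel offset. Real systems additionally require
# sub-wavelength phase correction across segments, which is omitted here.
def stitch_segments(segments, offsets, full_shape):
    """segments: list of 2-D complex arrays; offsets: (row, col) pixel offsets."""
    synth = np.zeros(full_shape, dtype=complex)
    for seg, (r0, c0) in zip(segments, offsets):
        h, w = seg.shape
        synth[r0:r0 + h, c0:c0 + w] = seg
    return synth

# Two adjacent 4x4 segments stitched side by side into a 4x8 synthesized field.
rng = np.random.default_rng(0)
tiles = [np.exp(1j * rng.uniform(0, 2 * np.pi, (4, 4))) for _ in range(2)]
synth = stitch_segments(tiles, [(0, 0), (0, 4)], (4, 8))
```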

This data cube can be processed in a variety of ways to form useful 2D and 3D imagery. For example, any 2D slice through both transverse dimensions for a fixed fast time can be processed as a HAL image using a 2D fast Fourier transform (FFT). Similarly, any 2D slice of the data cube in a downrange / transverse combination can be processed as a SAL image again using a 2D FFT. However, by transforming the whole data cube, with a 3D FFT, all dimensions are compressed into a fully resolved 3D distribution. Once fully compressed, peak extraction is performed along the downrange dimension to construct a point cloud.
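The full 3D compression and peak extraction described above can be sketched in a few lines. The data cube here is a toy single-scatterer signal (a 3-D complex sinusoid with assumed frequencies), standing in for a registered field estimate:

```python
import numpy as np

# Sketch of data-cube compression: a 3-D FFT resolves both crossrange
# dimensions and the downrange dimension simultaneously; a peak search along
# the range axis then yields one point per crossrange pixel. The cube below
# is a synthetic single scatterer, not real ladar data.
ny, nx, nt = 16, 16, 64
ky, kx, kr = 3, 5, 20  # assumed crossrange/downrange frequencies of the scatterer
y, x, t = np.meshgrid(np.arange(ny), np.arange(nx), np.arange(nt), indexing="ij")
cube = np.exp(2j * np.pi * (ky * y / ny + kx * x / nx + kr * t / nt))

image3d = np.abs(np.fft.fftn(cube))      # fully resolved 3-D distribution
range_bins = np.argmax(image3d, axis=2)  # peak extraction along downrange
```

For this single scatterer the 3-D image peaks at the expected crossrange pixel and range bin, from which a one-point cloud would be extracted.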

We have developed an advanced 3D-HAL model to simulate a variety of system architectures. Platform motion and/or target rotation has been incorporated for aperture synthesis. To demonstrate resolution enhancement, we simulated 3D-HAL using an Air Force bar target from a standoff distance of 2 km. The fast FPA was assumed to be 200 x 280 pixels with a pixel spacing of 1.25 mm. Thirty-five total aperture locations were simulated as shown in Fig. 2(a), with an aperture gain of 5x in the vertical dimension and 7x in the horizontal dimension. Figure 2(b) shows a 3D rendering for a single pupil-plane segment. Figure 2(c) shows the full aperture synthesis and demonstrates the anticipated resolution enhancement in both transverse dimensions.

Fig. 2 A diagram (a) of the pupil-plane field segments and the 5x7 grid of segments. 3D renderings of an Air Force bar target for (b) a single segment and (c) a synthesized field showing 5x vertical and 7x horizontal resolution enhancement. Visualization 1 is a movie comparing the point clouds for the single segment and the stitched field.

3. Experimental demonstrations

3D-HAL demonstrations were conducted using a lens-less digital holographic setup. A stabilized FMCW linear-chirp laser at 1064 nm was used to flood illuminate various targets 2.5 meters away, each of which was mounted on a rotary stage. The laser output power was approximately 15 mW. The linear chirp had a bandwidth of 102 GHz and was repeated at a 5 Hz rate. A mirror near the target was used to create a local oscillator beam. A Basler acA2000-340km CMOS array recorded the interference between the local oscillator and the target returns synchronized with the chirp output. A Bitflow Karbon frame grabber card was used to capture a 200x280 pixel region-of-interest (ROI) at a frame rate of 1.8 kHz. The resulting field segments had 360 frames in fast time and were 200 ms in duration.

The ROI was shifted to sample the full extent of the fast FPA for aperture synthesis. This is effectively a bi-static mode with a moving receive aperture and a stationary transmitter. The bi-static mode halves the effective synthetic aperture, doubling the size of the resolution cell [4,7]. However, the stationary fast FPA results in ideal spatial placement of the field segments and simplifies the processing. The size of the ROI limits the fast FPA to a 5 x 7 tiling, resulting in 1000 x 1960 utilized pixels, similar to Fig. 2(a). The resultant voxel is 1.5 mm x 968 µm x 494 µm. The tiles required piston phasing, which was achieved with a ball-lens retro-reflector placed in the scene. Given the 5 Hz chirp repetition rate, the collection time of a data cube is roughly 7 seconds. Several data cubes can allow for coherent integration of the time-domain signal for increased SNR. A rotary stage rotated the target by the angle subtended by the FPA from the target to achieve an independent speckle realization; surface migration was not significant in this operation. The data in Figs. 3 and 4 are derived from four speckle realizations of the full data cube. We have successfully used target rotation for aperture synthesis, but we preferred to use the additional information for speckle averaging to improve image contrast.
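The benefit of the four-realization speckle averaging used above can be sketched statistically: incoherently averaging the intensities of N independent speckle realizations reduces speckle contrast by roughly 1/sqrt(N). The images below are synthetic unit-mean exponential speckle, a stand-in for the |3D FFT|² of each data cube:

```python
import numpy as np

# Sketch of speckle averaging: intensities from N independent speckle
# realizations of the same scene are averaged incoherently, reducing speckle
# contrast (std/mean) from ~1 toward ~1/sqrt(N). Fully developed speckle
# intensity is exponentially distributed with unit contrast.
rng = np.random.default_rng(1)
n_realizations = 4  # four data cubes, as in the experiments described above
speckle = rng.exponential(1.0, (n_realizations, 64, 64))
averaged = speckle.mean(axis=0)

contrast_single = speckle[0].std() / speckle[0].mean()  # ~1.0
contrast_avg = averaged.std() / averaged.mean()         # ~1/sqrt(4) = 0.5
```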

Fig. 3 Demonstration of resolution enhancement using 3D-HAL. (a) Photograph of an aluminum plate overlaid with machined features. (b) Point cloud derived from processing of a single 200x280 field segment. (c) Point cloud derived from processing of the fully synthesized aperture of size 1000x1960 colored by depth out of the plane. Visualization 2 shows the cross range resolution enhancement of the stitched field versus the single segment.

Fig. 4 Photographs of (a) a shaving razor and (b) a paper cup. Single field segment and full synthesized aperture 3D-HAL renderings of (c-d) the razor and (e-f) the paper cup respectively (see Visualization 3 and Visualization 4 for resolution comparison).

To demonstrate resolution enhancement, we first imaged a satin-finished aluminum plate with precision machined features. The features included vertical and horizontal grooves at three different widths (300, 600, and 900 µm) and two different depths (100 and 200 µm). A corner of the aluminum plate was removed at a depth of 500 µm to assess the “step” resolution of the system. Holes of varying diameters (1, 4, 6 and 10 mm) were also drilled through the plate. The 3D image was formed using a single field segment as well as the entire coherently stitched array to demonstrate the resolution enhancement capability. The results of this experiment are shown in Fig. 3.

Figures 4(a) and 4(b) are photographs of two common objects imaged with the 3D-HAL system. These targets were chosen to provide different material and shape characteristics using familiar objects. Figure 4 also shows the single field segment and the fully synthesized aperture 3D-HAL imagery rendered with log scale intensity shading for Figs. 4(c) and 4(d), the shaving razor, and Figs. 4(e) and 4(f), the 12 ounce paper cup. The synthetic point clouds provide more detail, allowing for example the grip on the razor to be resolved.

Finally, to fully demonstrate the coherent nature of point clouds rendered with the 3D-HAL approach, the phase information of the various points was tracked over sequential captures. Because this phase is proportional to the wavelength of the optical carrier, very small scale changes can be observed and mapped to the 3D rendering. To explore this capability, we imaged the thermal expansion of a coffee cup in the seconds after hot water was introduced into the container. The ROI was fixed at 400x560 pixels, with 2x2 binning, resulting in a narrower field-of-view, which allowed a 5 Hz frame rate. The phase value of each voxel over the sequential capture was then referenced to the phase of the identical voxel in the first capture. This phase was unwrapped and rendered onto a surface reconstruction of the point cloud to visualize thermal expansion of the mug on the micron scale. The results are shown in Fig. 5. The observed phase evolution demonstrates that more thermal expansion is occurring in the bottom half of the mug where the hot water is present. In order to correctly utilize this phase to measure displacement, the surface normal is estimated and compared to the line of sight of the 3D-HAL system as shown in Fig. 5(c) and 5(d).
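The phase-to-displacement conversion underlying Figs. 5(c) and 5(d) can be sketched directly: a round-trip path change of one carrier wavelength produces 4π of interferometric phase, so the line-of-sight displacement is φλ/(4π), and dividing by the cosine of the angle between the surface normal and the line of sight estimates the displacement along the normal. This is a minimal sketch of that projection, not the full surface-normal estimation used for Fig. 5:

```python
import numpy as np

# Converting unwrapped interferometric phase to surface displacement.
# Round-trip geometry: one wavelength of path change = 4*pi of phase, so
# d_los = phi * lambda / (4*pi). Dividing by cos(theta), where theta is the
# angle between the surface normal and the line of sight, projects the
# line-of-sight displacement onto the surface normal.
wavelength = 1.064e-6  # carrier wavelength (m), as in the experiments

def displacement(phase_rad: float, incidence_angle_rad: float) -> float:
    d_los = phase_rad * wavelength / (4.0 * np.pi)
    return d_los / np.cos(incidence_angle_rad)

# e.g. 2*pi of unwrapped phase viewed head-on corresponds to half a wavelength
d = displacement(2.0 * np.pi, 0.0)  # 0.532 micron of expansion
```

At micron-scale sensitivities like this, even the modest thermal expansion of the mug accumulates many radians of phase over the 60 second capture.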

Fig. 5 (a) Picture of the coffee mug and the boiling water level. (b) A colormap of the unwrapped phase after 60 seconds rendered onto the point cloud (see Visualization 5). (c) The observed phase change is proportional to the red arrows, which represent the dot product of the thermal expansion (exaggerated) along the line of sight of the 3D-HAL system. (d) By taking this projection into account, an estimate of the displacement can be made.

4. Conclusion

In this paper, the 3D-HAL technique has been presented and demonstrated. Aperture synthesis showed dramatic resolution enhancement over single field segments while fully resolving the 3D information content of the target. The fully coherent nature of the point clouds was also demonstrated by tracking the phase of a thermally expanding coffee cup.

The parallel readout of FPAs makes 3D-HAL similar to direct-detect flash ladar systems with the additional benefit of coherent detection. While the approach is similar to both HAL and SAL architectures, the use of coherent downrange waveforms provides unprecedented extraction of 3D content. These 3D images are fully resolved and will perform better at 3D imaging targets with significant structure or discontinuities when compared to multi-wavelength HAL or multi-baseline SAL approaches. There are many possibilities for further research for 3D-HAL including exploration of data-driven registration of the field segments, more advanced Doppler processing of moving targets, and longer range performance.

Acknowledgements

We gratefully acknowledge AFRL-WPAFB and an anonymous reviewer. This work was supported by the U.S. Air Force under Grant No. FA 8650-14-M-1793.

References and links

1. G. Berkovic and E. Shafir, “Optical methods for distance and displacement measurements,” Adv. Opt. Photonics 4(4), 441–471 (2012).

2. B. F. Aull, A. H. Loomis, D. J. Young, R. M. Heinrichs, B. J. Felton, P. J. Daniels, and D. J. Landers, “Geiger-mode avalanche photodiodes for three-dimensional imaging,” Linc. Lab. J. 13, 335–350 (2002).

3. S. M. Beck, J. R. Buck, W. F. Buell, R. P. Dickinson, D. A. Kozlowski, N. J. Marechal, and T. J. Wright, “Synthetic-aperture imaging laser radar: laboratory demonstration and signal processing,” Appl. Opt. 44(35), 7621–7629 (2005).

4. D. J. Rabb, D. F. Jameson, J. W. Stafford, and A. J. Stokes, “Multi-transmitter aperture synthesis,” Opt. Express 18(24), 24937–24945 (2010).

5. B. D. Duncan and M. P. Dierking, “Holographic aperture ladar: erratum,” Appl. Opt. 52(4), 706–708 (2013).

6. A. E. Tippie, A. Kumar, and J. R. Fienup, “High-resolution synthetic-aperture digital holography with digital phase and pupil correction,” Opt. Express 19(13), 12027–12038 (2011).

7. S. Crouch and Z. W. Barber, “Laboratory demonstrations of interferometric and spotlight synthetic aperture ladar techniques,” Opt. Express 20(22), 24237–24246 (2012).

8. B. R. Dapore, D. J. Rabb, and J. W. Haus, “Phase noise analysis of two wavelength coherent imaging system,” Opt. Express 21(25), 30642–30652 (2013).

9. J. Stafford, B. Duncan, and D. J. Rabb, “Range-compressed holographic aperture ladar,” in Imaging and Applied Optics 2015, OSA Technical Digest (online) (Optical Society of America, 2015), paper LM1F.3.

10. J. C. Marron and K. S. Schroeder, “Three-dimensional lensless imaging using laser frequency diversity,” Appl. Opt. 31(2), 255–262 (1992).

11. P. A. Roos, R. R. Reibel, T. Berg, B. Kaylor, Z. W. Barber, and W. R. Babbitt, “Ultrabroadband optical chirp linearization for precision metrology applications,” Opt. Lett. 34(23), 3692–3694 (2009).

Supplementary Material (5)

Visualization 1: AVI (14262 KB)      A movie showing the 3D point cloud for an Air Force bar target derived from first a single field segment followed by the fully synthetic field.
Visualization 2: AVI (13373 KB)      A movie that compares the point clouds for the single segment and stitched fields for the machined aluminum plate.
Visualization 3: AVI (13251 KB)      A movie comparing the low and high resolution point clouds for the razor.
Visualization 4: AVI (12480 KB)      A movie comparing the low and high resolution point clouds for the paper cup.
Visualization 5: AVI (2432 KB)      A movie depicting the interferometric phase evolution derived from temporally coherent 3D-HAL images for a coffee cup undergoing thermal expansion.
