Terahertz digital holographic imaging

Open Access

Abstract

This tutorial describes the application of digital holography to the terahertz spectral region and demonstrates how to reconstruct images of complex dielectric targets. Using highly coherent terahertz sources, high-fidelity amplitude and phase reconstructions are achieved, but because the millimeter-scale wavelengths approach the size of the decimeter-sized targets and optical components, undesirable aperture diffraction degrades the quality of the reconstructions. Consequently, off-axis terahertz digital holography differs significantly from its visible light counterpart. This tutorial addresses these challenges within the angular spectrum method and the Fresnel approximation for digital hologram reconstruction, from which the longitudinal and transverse resolution limits may be specified. We observed longitudinal resolution (λ/284) almost two times better than has been achieved with visible light digital holographic microscopy and demonstrate that submicrometer longitudinal resolution is possible using millimeter wavelengths for an imager limited ultimately by the phase stability of the terahertz source and/or receiver. Minimizing the number of optical components, using only large reflective optics, maximizing the angle of the off-axis reference beam, and judicious selection of spatial frequency filters all contribute to improving the quality of the reconstructed image. As in visible wavelength analog holography, the observed transverse resolution in terahertz digital holography is comparable to the wavelength but improves for features near the edge of the imaged object compared with features near the center, a behavior characterized by a modified description of the holographic transfer function introduced here. Holograms were recorded by raster scanning a sensitive superheterodyne receiver, and several visibly transparent and opaque dielectric structures were quantitatively examined to demonstrate the compelling application of terahertz digital holography for nondestructive test, evaluation, and analysis.

1. Introduction

Thanks to remarkable advances in terahertz frequency source and receiver technologies over the last decade, the potential for practical, commercially viable applications is increasingly promising and compelling [1–6]. However, photography in the terahertz region remains in its infancy, in spite of decades of research, hampered by detectors that are either sensitive or inexpensive but never both. Early forms of terahertz imaging collected the reflected or transmitted radiation from an object illuminated confocally, and a two-dimensional image was formed by raster scanning the object through the confocal spot [7–10]. Even today, slow, mechanically scanned confocal single-pixel imaging systems using either ultrafast optical techniques or Schottky diodes dominate the research community and commercial markets. While terahertz focal plane arrays are improving in sensitivity, pixel pitch, and array size [11–14], their performance in sensitivity and dynamic range does not approach that of single-pixel superheterodyne receiver scanning systems. Most other terahertz imaging systems are broad-banded, thus increasing the ambient and blackbody noise levels and reducing the obtainable signal-to-noise ratio (SNR) and contrast ratio.

As these analog broadband imaging systems mature, the alternative of narrowband terahertz imaging has only rarely been exploited, usually in the form of extremely high-frequency radar [15–20]. When highly coherent radiation from such narrowband sources scatters from targets and environmental scenes, it produces speckle that can severely degrade image quality [21–23]. Moreover, coherent imaging can be complicated by stray or vignetted radiation scattered into the beam path, thereby modulating the desired signal by producing undesirable interference and diffraction that obfuscates reconstructions or lowers the dynamic range of the system with clutter. The challenge of suppressing coherent “standing wave” modes is amplified by the relatively long wavelength of terahertz radiation in comparison to the size of the optical components, a diffraction issue that is rarely encountered in visible light coherent imaging.

To address these challenges, holographic imaging in the terahertz spectral region has recently emerged as a promising coherent imaging technique, representing the natural convergence of decades of experience with optical holography and synthetic aperture radar (SAR) [24–27]. As with SAR imaging, we may use sensitive narrowband heterodyne receivers with more than 100 dB of dynamic range [28–32]. From optical holography, we may use interferometric techniques to create interferograms so that phase and amplitude information may be collected simultaneously. The combination of these techniques heralds terahertz holography as a compelling tool for quantitative analysis that, by generating accurate phase plots to extract optical path differences (OPDs), will find many practical uses for the measurement of refractive indices, internal dimensions, and subwavelength-size discontinuities in materials.

Because holographic imaging can produce high-quality, high-resolution images without forming a focal spot that must be scanned across an object, the development of focal plane array detectors has transformed the field of holography. The comparatively recent innovation of digital holography (DH) now allows detectors and computers to acquire, store, and process holograms digitally [33–36], a critical requirement for the terahertz spectral region because of the lack of appropriate photorefractive recording media. Until recently, most work on DH was performed in the infrared, visible, ultraviolet, and even the x-ray regions [37–41] because of the widespread availability of charge-coupled device (CCD) cameras as the digital recording medium. As both single (raster-scanned) terahertz detectors and terahertz detector array technologies have matured, DH has recently been extended into the terahertz spectral region, and several proof-of-concept demonstrations of off-axis DH have appeared using coherent sources between 0.3 and 3 THz [25,26,42–61]. Because the axial focus can be adjusted arbitrarily and quickly by changing the reconstruction distance in the reconstruction algorithm, the object location does not need to be known a priori, an important consideration if visibly opaque objects are imaged. In response to the growing need for quantitative analytical tools, especially for visually opaque structures, terahertz digital holography (TDH) has subsequently found its way into many applications including biomedical imaging, security screening, nondestructive test and evaluation (NDT&E), and analysis of systems undergoing microfluidic flow, vibrations, and thermal deformations [62–69].

Extensions to terahertz frequency computed tomography [46,70] and the adaptation of techniques that would accelerate image acquisition using compressive sampling and Bayesian techniques [29,71–75] have also recently appeared. Many of these demonstrations benefit from unprecedented longitudinal resolution made possible by the high coherence and phase stability of electronically generated THz radiation. Given that optics used in TDH can be of high optical quality (λ_visible/10) and that terahertz wavelengths are 1000 times longer, it is possible for TDH to have phase resolution as narrow as λ_terahertz/10,000, remarkably providing nanometer-scale longitudinal or “axial” spatial resolution using sub-millimeter wavelengths.

Although resolution in the perpendicular or “transverse” dimension is limited by the wavelength of the radiation, standard approximations routinely made in optical holography must be reconsidered when the sizes of the imaged objects approach the wavelength of the radiation used (decimeters and millimeters, respectively). Indeed, given that the diffraction-limited angular resolution from a circular aperture of diameter a is sin(θ) = 1.22λ/a, the resolution of a holographic imager is reduced by a factor of 1000 when moving from the visible (λ ≈ 0.5 μm) to the terahertz (λ ≈ 0.5 mm) for a given optical system. Clearly, TDH experiences much greater diffraction than traditional optical holography, and diffraction introduced by the finite size of the optics must be minimized so that only diffraction caused by the object is recorded. The dimensionless Fresnel number of an optical component,

N_F=\frac{(2a)^2}{\lambda d},
determines the extent of diffraction at distance d from the aperture (or object) of radius a for wavelength λ [76], with N_F ≪ 1 and N_F ≫ 1 representing the Fraunhofer and Fresnel diffraction regimes, respectively [76]. Diffraction is quite noticeable for N_F < 10, a regime common for terahertz holographic interferometers of typical path length d = 1 m with optical components of size 2a = 5 cm and wavelengths λ > 0.25 mm (f < 1.2 THz). Since optical holography is routinely performed deep in the Fresnel regime with N_F ≫ 1, assumptions that are widely used there do not necessarily apply in the terahertz region, and some fundamentals of holographic imaging must be reconsidered, as discussed later.
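As a quick numerical check of this regime, the following minimal Python sketch (an illustration, not code from the original work) evaluates Eq. (1) using the representative values quoted above; the function name and unit choices are assumptions made here.

```python
def fresnel_number(diameter_m, wavelength_m, distance_m):
    """Fresnel number N_F = (2a)^2 / (lambda d), Eq. (1), with 2a the diameter
    of the aperture (or object)."""
    return diameter_m**2 / (wavelength_m * distance_m)

# Representative values from the text: 2a = 5 cm optic, d = 1 m path,
# lambda = 0.25 mm (about 1.2 THz)
print(fresnel_number(5e-2, 0.25e-3, 1.0))   # 10.0; longer wavelengths give N_F < 10
```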

Here we provide an introductory tutorial on the application of off-axis DH to the terahertz spectral region, combining insights from both optical DH and microwave SAR. We demonstrate that longitudinal resolution better than λ/100 may be routinely achieved in a nonoptimized interferometer with a raster-scanned detector, while resolution better than λ/1000 is possible with a low phase noise source, a static detector array, and mechanically stable high-quality optical components. Moreover, a reexamination of holographic principles in this highly diffractive long-wavelength regime reveals that the transverse resolution improves as the target moves increasingly off-axis, ultimately limited by the size of the detector aperture and its ability to resolve high spatial frequency fringes in the hologram. TDH may be used to reconstruct complex three-dimensional surfaces, and the material-dependent penetration depth of terahertz radiation [77] makes TDH an attractive tool for nondestructive analysis of visually opaque objects and their dielectric properties. The utility of this technique will become an especially attractive alternative to standard terahertz imaging methodologies as large arrays of sensitive terahertz detectors become available.

2. Principles of Digital Holography

2.1. Off-Axis Digital Holography

To begin, let us review the basics of off-axis holography and its application to TDH. Holographic imagers require an interferometer that spatially combines the coherent scatter from the object (O(x,y,z0)) with a plane wave reference beam (R), and the resulting two-dimensional interferogram H(x,y,z1) encodes the amplitude and phase information as [78]

H(x,y,z)=|R|^2+|O(x,y,z)|^2+R\,O^*(x,y,z)+R^*O(x,y,z).
This is typically accomplished using a Mach–Zehnder interferometer with two beam splitters, one to separate the coherent beam from the source into object and reference beams, then another to recombine those beams after the object beam has scattered from the object [Fig. 1(a)].

Figure 1. (a) Standard Mach–Zehnder interferometer. (b) Geometry for an off-axis terahertz holographic interferometer.

The hologram is recorded by some storage medium in the image plane located at z_1 = d, and it is reconstructed by illuminating the stored interferogram with an identical reference beam. The goal of the reconstruction process is to find O*(x,y,0) at the object location z_0 = 0. In traditional optical holography, the hologram is recorded on an analog film and reconstructed with a coherent light source, typically a laser, by illuminating H(x,y,d) with R and observing O*(x,y,0). In DH, the same optics are used to create a real hologram in space, but it is then recorded by a detector or digital camera, and the reconstruction is performed on a computer by digitally propagating the recorded field back to recover O(x,y,0) using a scalar diffraction theory algorithm. Typically, the reconstruction uses the Fresnel diffraction method of scalar diffraction theory [79], but the Huygens convolution method [80,81] and, more recently, the angular spectrum method (i.e., the Rayleigh–Sommerfeld diffraction formula for scalar diffraction theory) are used as well [82]. These approaches are applicable to both in-line (Gabor type) and off-axis (Leith–Upatnieks) holography [83].

In contrast to in-line holography, the reference wave for off-axis holography does not copropagate along the same direction as the object beam but instead combines with the object beam at the hologram recording plane at an angle θ from it. Figure 1 illustrates a traditional and nontraditional Mach–Zehnder interferometer with θ labeled. The traditional setup [Fig. 1(a)] is commonly used in optical DH as it creates nearly equal path lengths for the object and reference waves [84], a requirement for maintaining high coherence between the beams. In contrast, minimizing the number of optical components and achieving large off-axis angles is essential for TDH; hence, the interferometer uses just one beam splitter and one mirror [Fig. 1(b)]. The simplest of these may be the Lloyd-mirror-type setup, which uses just one mirror and no beam splitter [42]. The difference in path lengths between the object and reference beams is of little concern considering the high coherence characteristics of the terahertz source, so in this nontraditional setup, large off-axis angles can be more easily realized. The reference wave R(y) can be described as

R(y)=R\,e^{ik_0y\sin\theta},
where k_0 = 2π/λ_0, λ_0 is the free-space wavelength, and the off-axis angle θ is assumed to lie in the y–z plane. Equation (2) can then be rewritten for the off-axis hologram,
H(x,y)=|R|^2+|O|^2+R\,e^{ik_0y\sin\theta}O^*(x,y)+R^*e^{-ik_0y\sin\theta}O(x,y).
All four terms of Eq. (4) are recorded in the hologram. However, during the reconstruction process, these terms will separate because they do not propagate along the same path. The essence of off-axis holography is to use this spatial separation of the four terms so that a reconstruction may be made of either the third or fourth terms, which are conjugates of each other.

The separation of the interference terms is best described in the spatial frequency domain. Performing a two-dimensional Fourier transform F{H(x,y)} on Eq. (4) produces a spatial frequency spectrum containing the Fourier spectra of the four terms in Eq. (4), which can be abbreviated as |O|² + |R|² + O*R + OR*. For each Fourier spectrum, the spatial frequency f (described in units of cycles/mm) helps define resolution. For example, a sinusoidal amplitude pattern with a spatial period of 0.5 mm is expressed in the spatial frequency domain with a spatial frequency of 2 cycles/mm. Complex spatial patterns from complex objects can be decomposed into their constituent spatial frequencies via a Fourier transform. The largest spatial frequency that can be recorded in the hologram and reconstructed successfully is called the cutoff frequency, which is a measure of the system’s resolution. As will be shown later, the cutoff frequency can depend on a variety of physical parameters in the experiment and on the digital reconstruction process itself.

|F{|O|²}| represents the autocorrelation of O, which extends over twice the object’s spatial frequency bandwidth (2B) and is located symmetrically around the origin f_x, f_y = 0 [79]. The variables f_x, f_y are the spatial frequencies corresponding to the x and y directions of the hologram, and |F{|R|²}| is a delta function located at f_x, f_y = 0, assuming R is a plane wave. |F{O*R}| contains the spectrum of the real image of O, and it is located symmetrically around f_x = 0, f_y = sin θ/λ, where λ is the wavelength of radiation. Similarly, |F{OR*}| contains the spectrum of the virtual, conjugate image of O and is located symmetrically around f_x = 0, f_y = −sin θ/λ. Figure 2(a) illustrates the locations of the four spectra for the case when they are fully separated in spatial frequency space. According to Fig. 2(a), the minimum off-axis angle θ_min to avoid overlapping of the individual spectra is

\theta_{\min}=\sin^{-1}\!\left(\frac{3B}{k_0}\right).
This is a rather stringent requirement, and it is often the case that Eq. (5) overestimates the relationship between the bandwidth and the minimum off-axis angle. Indeed, if the reference wave is much stronger in magnitude than the object wave, the spectra may be separated successfully at angles smaller than predicted by Eq. (5) but never smaller than
\theta_{\min}=\sin^{-1}\!\left(\frac{B}{k_0}\right),
as illustrated in Fig. 2(b). This angular separation constrains the geometry of the interferometer used for holographic reconstructions.
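To make Eqs. (5) and (6) concrete, the short Python sketch below (an illustration, not from the original work) evaluates the minimum off-axis angle for an assumed object bandwidth; expressing B in the same radian spatial-frequency units as k_0, and the 20% example value, are assumptions of this sketch.

```python
import numpy as np

def theta_min_deg(B, wavelength, strong_reference=False):
    """Minimum off-axis angle, Eqs. (5) and (6).  B is the object's spatial
    frequency bandwidth in the same radian units as k0 = 2*pi/lambda
    (an assumption of this sketch); a strong reference relaxes 3B to B."""
    k0 = 2 * np.pi / wavelength
    factor = 1.0 if strong_reference else 3.0
    return np.degrees(np.arcsin(factor * B / k0))

wavelength = 0.625e-3                      # 0.48 THz
B = 0.2 * (2 * np.pi / wavelength)         # hypothetical bandwidth, 20% of k0
print(theta_min_deg(B, wavelength))        # ~36.9 deg for full separation, Eq. (5)
print(theta_min_deg(B, wavelength, True))  # ~11.5 deg lower bound, Eq. (6)
```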

Figure 2. Illustration of separable Fourier spectra using off-axis holography for (a) small and (b) large reference wave signal levels [26].

2.2. In-Line and Gabor-Type Digital Holography

Unlike off-axis DH, in-line DH and Gabor-type DH do not introduce an off-axis angle. The Gabor-type interferometer assumes that the object is fairly transparent, so it does not need to separate the object and reference beams. Instead, it allows the waves modulated and unmodulated by the object to interfere at the detector to form the hologram. In-line holography, which does not require a transparent object, separates a reference beam from the object beam before its interaction with the object, then recombines them after the object to interfere on axis at the detector [52,63].

As compared with off-axis holography, the detector’s spatial bandwidth requirement is reduced for in-line and Gabor-type holography (by an amount equal to the spatial frequency associated with the off-axis angle). This tremendous advantage comes with a cost: reconstructing the hologram requires additional efforts to remove the spatial frequency content of the autocorrelation and virtual image conjugate. A powerful technique for isolating the desired frequency content is phase shifting digital holography (PSDH) [35]. Because PSDH requires multiple hologram acquisitions, it is impractical for single-pixel TDH. However, PSDH could one day become the preferred option for high-resolution TDH when large focal plane arrays of sensitive terahertz detectors become available so that hologram acquisitions can occur quickly.

2.3. Scalar Diffraction Theory

Scalar diffraction theory uses several assumptions, which may be extended from traditional optical holographic imaging to the terahertz region. If the electric and magnetic fields of a propagating electromagnetic wave can be represented by a single wave equation, scalar diffraction theory may be applied as a very good approximation to the solution of the wave propagation problem [79]. This single wave equation holds if the propagation medium is a linear, isotropic, homogeneous, and nondispersive dielectric such as air. Note that conventional holography is an inherently compressive technique, and the object is regarded as an infinitely thin transparency. Therefore, this review will not consider multiple or “multipath” interactions between the object and terahertz wave but will assume that the recorded object wave field is the result of a single interaction between the incident wave and object.

To describe the wave field as it propagates through air after its interaction with an arbitrary object, scalar diffraction theory is highly appropriate. If the wave field incident on the object is a plane wave at normal incidence, any deviations in the reconstructed image from the observed or expected physical appearance of the object must then be further investigated using other methods. The angular spectrum method for reconstructing digital holograms will be described next, followed by a short summary of the Fresnel diffraction reconstruction method.

2.4. Angular Spectrum Method

When considering the propagation of a wave distribution according to scalar diffraction theory, most textbooks calculate the wave distribution at an observation point in space with Green’s theorem and the alternative Green’s function, giving the Rayleigh–Sommerfeld diffraction formula [79]. Instead of revisiting this derivation, we review another method to arrive at the same solution by using the Fourier transform technique, also known as the angular spectrum method [35,36], because it will be used here as the primary method to reconstruct holograms.

The angular spectrum method calculates the Fourier transform of a diffraction pattern U(x,y) to decompose U into constituent plane waves. Each plane wave can be represented by a single spatial frequency pair fx,fy equivalent to the direction cosines of the plane wave with respect to the transverse x-y plane. The entire spectrum of plane waves calculated by the Fourier transform can therefore be denoted as the angular spectrum of U [79]. Following the derivation of the scalar diffraction formula by Poon [35], the angular spectrum may be written as

A(f_x,f_y,z)=\iint_{-\infty}^{\infty}U(x,y,z)\,e^{-i2\pi(f_xx+f_yy)}\,dx\,dy.
The derivation of the diffraction formula using the angular spectrum method involves the use of the Helmholtz equation, which may be derived from the wave equation to give [85]
\nabla^2U(x,y,z)+k^2U(x,y,z)=0.
Working in the spatial frequency domain to find a solution to the Helmholtz equation using the angular spectrum method, U(x,y,z) can be transformed with a two-dimensional Fourier transform defined as
F(f_x,f_y)=\mathcal{F}\{U(x,y,z)\}=\iint_{-\infty}^{\infty}U(x,y,z)\,e^{-i2\pi(f_xx+f_yy)}\,dx\,dy.
The Helmholtz equation, including its second-order derivatives of a three-dimensional spatial signal, must also be transformed. According to the Fourier transform pair [35],
\mathcal{F}\!\left\{\frac{\partial^2}{\partial x^2}U(x,y,z)\right\}=(if_x)^2F(f_x,f_y),
a Fourier transform representation of the Helmholtz equation is
\frac{\partial^2F(f_x,f_y,z)}{\partial z^2}+k^2\left(1-\frac{f_x^2}{k^2}-\frac{f_y^2}{k^2}\right)F(f_x,f_y,z)=0.
A solution to Eq. (11) is [35,36]
F(f_x,f_y,z)=F(f_x,f_y,0)\,e^{iz\sqrt{k^2-f_x^2-f_y^2}}.
Equation (12) can be slightly modified to the format of the angular spectrum presented by Goodman as [79]
A\!\left(\frac{\alpha}{\lambda},\frac{\beta}{\lambda},z\right)=A\!\left(\frac{\alpha}{\lambda},\frac{\beta}{\lambda},0\right)e^{i\frac{2\pi}{\lambda}z\sqrt{1-\alpha^2-\beta^2}},
where k = 2π/λ, f_x = α/λ, f_y = β/λ, and α, β are the direction cosines determined by the direction of the plane waves. Equation (13), which states that the angular spectrum at any location z can be easily calculated from the propagation kernel exp[i(2π/λ)z√(1 − α² − β²)] if the angular spectrum is known at z = 0, yields the same results as the Rayleigh–Sommerfeld solution [86] for the propagation of diffracted fields.

We next consider how the angular spectrum method may be applied to DH. In conventional holography, the object is described by an infinitely thin complex transmittance function t_o(x,y). Assuming the wave field incident on the object is a plane wave e^{iωt}, where ω = 2πc/λ, the wave field U(x,y,0) immediately after the object is U(x,y,0) = t_o(x,y)e^{iωt}. Defining the object location as z = 0 and the hologram plane location as z = d, as shown in the experimental setup of Fig. 1(b), the angular spectrum at z = 0 becomes

A\!\left(\frac{\alpha}{\lambda},\frac{\beta}{\lambda},0\right)=1\cdot T_0\!\left(\frac{\alpha}{\lambda},\frac{\beta}{\lambda},0\right),
where
T_0\!\left(\frac{\alpha}{\lambda},\frac{\beta}{\lambda},0\right)=\iint_{-\infty}^{\infty}t_o(x,y,0)\,e^{-i\frac{2\pi}{\lambda}(\alpha x+\beta y)}\,dx\,dy.
Equation (13) may now be used to propagate T0 to the hologram plane where the object wave field U(x,y,d) is recorded. Of course, U(x,y,d) is recorded with the goal of calculating the original transmittance function to(x,y), and this process requires two Fourier transforms and one field propagation,
t_o(x,y,0)=\mathcal{F}^{-1}\!\left\{\mathcal{F}\{U(x,y,d)\}\,e^{-i\frac{2\pi}{\lambda}d\sqrt{1-\alpha^2-\beta^2}}\right\}.
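A minimal Python sketch of the angular spectrum propagation in Eqs. (13) and (16) follows (an illustration under the stated assumptions, not code from the original work); the function name, the FFT-based sampling, and the suppression of evanescent components are choices made here.

```python
import numpy as np

def angular_spectrum_propagate(U, wavelength, dx, dy, z):
    """Propagate a sampled complex field U(x, y) over a distance z using the
    angular spectrum kernel of Eq. (13); calling it with z = -d backpropagates
    the recorded field U(x, y, d) toward the object plane as in Eq. (16)."""
    ny, nx = U.shape
    fx = np.fft.fftfreq(nx, d=dx)              # spatial frequencies, cycles/length
    fy = np.fft.fftfreq(ny, d=dy)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX)**2 - (wavelength * FY)**2
    kernel = np.exp(1j * 2 * np.pi / wavelength * z * np.sqrt(np.maximum(arg, 0.0)))
    kernel = kernel * (arg > 0)                # drop evanescent components
    A = np.fft.fft2(U)                         # angular spectrum at the input plane
    return np.fft.ifft2(A * kernel)            # field at the output plane
```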

It must be reiterated that in conventional holography, each object voxel is described by a single amplitude and phase pair. Therefore, if the object is not uniform in the axial dimensions, the reconstructed to(x,y) may be an inaccurate representation of the original transmittance function. Therefore, unless several holograms are recorded at different orientations of the object and special reconstruction techniques are applied, such as diffraction tomographic holography [87,88], one must accept that holography is an inherently compressive technique, which requires the use of certain assumptions about the object. Among these are assumptions that the object’s complex refractive index is axially uniform on a pixel-by-pixel basis. In addition, if the object is a phase-modulating object, its geometrical composition or complex refractive index must be well known to infer physical thickness from optical thickness measurements or vice versa.

2.5. Fresnel Diffraction

Most TDH occurs just within the Fresnel diffraction regime defined by Eq. (1), given that objects are typically larger than the millimeter-scale wavelengths and the interferometer is meter-sized. For this regime, the Fresnel diffraction formula may be derived from the angular spectrum in Eq. (7) under the paraxial approximation, which states that [79]

f_x^2+f_y^2\ll\frac{1}{\lambda^2}.
Equation (16) can then be written as [35]
t_o(x,y,0)=\mathcal{F}^{-1}\!\left\{\mathcal{F}\{U(x,y,d)\}\,e^{-i\frac{2\pi}{\lambda}d}\,e^{i\pi\lambda d(f_x^2+f_y^2)}\right\}.
In order to arrive at the well-known Fresnel diffraction formula, the inverse Fourier transform must be calculated so that Eq. (18) is expressed as the convolution
t_o(x,y,0)=\frac{i\,e^{-i\frac{2\pi}{\lambda}d}}{\lambda d}\,U(x,y,d)\ast e^{-i\frac{\pi}{\lambda d}(x^2+y^2)}.
Equation (19) is the Fresnel diffraction formula in convolution form, and an example of a Fresnel reconstruction of a digital hologram is presented in Fig. 3.
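For completeness, a minimal sketch of Eq. (18) in its transfer-function form is given below (an illustration, not code from the original work). Note that this form still uses a forward and an inverse FFT; the single-transform economy mentioned in the following paragraph belongs to the convolution form of Eq. (19).

```python
import numpy as np

def fresnel_backpropagate(U, wavelength, dx, dy, d):
    """Paraxial (Fresnel) reconstruction following Eq. (18): backpropagate the
    recorded field U(x, y, d) by a distance d to the object plane at z = 0."""
    ny, nx = U.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dy)
    FX, FY = np.meshgrid(fx, fy)
    kernel = np.exp(-1j * 2 * np.pi * d / wavelength) \
           * np.exp(1j * np.pi * wavelength * d * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(U) * kernel)
```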

Figure 3. Fresnel reconstruction of real image (lower right), autocorrelation (center), and virtual image conjugate terms (upper left) for a digital hologram recorded of a Siemens star target at 0.48 THz. Visualization 1 illustrates the reconstructed wave field as it is backpropagated from the hologram plane to the object plane.

In this manner, the angular spectrum and Fresnel diffraction methods, which describe wave field propagation in free space, may be used to reconstruct terahertz digital holograms. An advantage of the Fresnel approximation is the fact that only one Fourier transform is required. But because the Fresnel reconstruction method does not allow for digital filtering as part of the reconstruction process, it cannot take advantage of one of the major attributes of DH. Furthermore, the second Fourier transform required for the angular spectrum method is easy to perform on a computer. Therefore, the remainder of this tutorial uses only the angular spectrum method. Following a description of the experimental techniques required to record these holograms, the attributes and limitations of the angular spectrum reconstruction process on TDH, accompanied by the digital representations of the aforementioned equations, will be presented.

3. Techniques for Terahertz Holographic Imaging

As mentioned above, a variety of in-line or off-axis interferometers may be used for coherent terahertz imaging applications. To explore the essential components and their minimum requirements, we describe a prototypical terahertz digital holographic imager we constructed to capture holograms of amplitude and phase objects in transmission. This off-axis holographic imager was designed to maximize the hologram size acquired at high transverse and axial image resolutions and to permit analysis of the resolution performance characteristics of the imager. Several hologram reconstructions were performed on a variety of objects, starting with simple phase plates and pure amplitude targets described in the next two sections, then moving on to more intricate structures containing voids and lossy materials in the final section.

3.1. Terahertz Sources and Detectors

To construct an off-axis holographic imager collecting interferograms of a transmissive target, an appropriate interferometer must be built with due consideration given to the power, phase stability, and frequency tunability of the terahertz source; the sensitivity of the terahertz detector or receiver; and the optical figure and size of the necessary optics. Regarding the source, the imager requires a highly accurate, stable, monochromatic continuous wave terahertz source because a single hologram acquisition can take from hours to days if only a single-pixel detector is available to scan across the hologram plane. Compelling options include a stabilized terahertz laser, a backward wave oscillator, or frequency multipliers and amplifiers following a synthesized microwave signal source.

Two complementary detector technologies are considered, the diode detector and the subharmonic mixer receiver. Broadband Schottky diode detectors with band-dependent bandwidths ranging from tens to hundreds of GHz are widely used throughout the millimeter and terahertz bands. When used for incoherent detection, these square law detectors operate with noise-equivalent powers (NEPs) on the order of −110 dBm/Hz, but the system NEP may be improved by techniques such as lock-in detection at the cost of integration time. Although they are not the most sensitive terahertz detectors, they are comparatively inexpensive and highly practical square law devices that do not require any cryogenics, biasing, or peripheral equipment for operation. In contrast, a narrowband superheterodyne receiver, which frequency mixes terahertz signals to lower frequencies where they are easily filtered, amplified, and digitized, represents a significant improvement in sensitivity. Assuming a typical noise temperature for a 0.500–0.750 THz receiver of 2000 K (noise figure ≈ 9 dB), the receiver NEP for coherent detection improves to approximately −165 dBm/Hz, according to

\mathrm{NEP}=10\log_{10}(kTb)+\mathrm{NF}=-174\,\mathrm{dBm/Hz}+9\,\mathrm{dB},
where 10·log10(kTb) is the thermal noise floor at temperature T. For a representative acquisition video bandwidth b of 20 kHz, the typical NEP = −122 dBm for coherent detection represents a roughly 70 dB improvement over a Schottky diode incoherent square law detector with a 250 GHz bandwidth. Further detail on coherent and incoherent detection at terahertz frequencies can be found in [89]. These improvements in dynamic range and sensitivity come at a price, and not just in cost. Terahertz heterodyne receivers require a synthesized microwave source and an amplifier multiplier chain, plus a digitizer and additional frequency downconverters capable of capturing an intermediate frequency (IF) typically >1.0 GHz to minimize 1/f noise.
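The arithmetic behind Eq. (20) can be checked with a few lines of Python (an illustration only, assuming the common 290 K reference temperature):

```python
import numpy as np

k_B = 1.380649e-23        # Boltzmann constant, J/K
T   = 290.0               # assumed reference temperature, K
NF  = 9.0                 # receiver noise figure, dB (0.5-0.75 THz example in the text)
b   = 20e3                # acquisition video bandwidth, Hz

thermal_dBm = 10 * np.log10(k_B * T * b / 1e-3)   # in-band thermal noise, dBm
print(thermal_dBm)         # ~ -131 dBm  (-174 dBm/Hz + 10*log10(20 kHz))
print(thermal_dBm + NF)    # ~ -122 dBm receiver NEP, matching the value quoted above
```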

3.2. Terahertz Optical Components

As previously discussed, the amount of diffraction observed in the hologram plane depends on the relative size of the optical components, the wavelength of the radiation, and the size of the interferometer through the Fresnel number defined in Eq. (1). Standard size lenses and mirrors used in the visible and even infrared regions are usually too small to avoid significant diffraction at terahertz frequencies. Large optics are needed, but the fact that holography is an interferometric technique still requires optics of high quality. Optical components with λ_visible/10 peak-to-valley surface figure have a surface figure of λ_terahertz/10,000 at terahertz frequencies, but even if the surface quality of terahertz optics is sufficient, the material uniformity and isotropic behavior may not be, especially for refractive lenses and dielectric windows. Manufacturing of some dielectric materials produces anisotropy and stress-induced birefringence, and inhomogeneity-induced aberrations will be quite evident if an imperfect lens is used to expand and/or collimate the beam. Moreover, terahertz lenses and windows are made from materials whose refractive indices range from 1.5–2.5, significantly larger than the typical 1.3–1.5 for optical materials. This can cause Fabry–Perot etalon-type standing wave fringes between the surfaces, and those fringes will degrade the quality of the interferogram.

To avoid such refractive anisotropies and fringes, reflective optics are recommended for DH, including the use of large off-axis parabolic mirrors (OAPMs) to produce planar terahertz beams from an expanding source beam while minimizing diffraction. With the exception of a Lloyd’s mirror setup, which uses no beam splitter [55], a lens-less Fourier transform holographic imager minimally requires only one OAPM, one beam splitter, and one large planar mirror to redirect the reference beam to overlap with the object beam at the hologram plane (which is also the imaging plane). Although a very flat (surface variations < λ/100) wire grid mesh may be used as a beam splitter, it may have undesirable polarimetric properties. Instead, a suitable nonpolarizing beam splitter may be constructed from a thin (≈25 μm) dielectric film of polyethylene terephthalate (PET, also known as Mylar), whose reflectivity may be increased by applying a thin metallic coating, using metallic spray paint for example.

Frequency and phase stability are critically important, especially if a scanned single-pixel detector is used and acquisition takes hours or days. This is particularly challenging given the large optics required, although the longer wavelengths do make the interferometer more resilient than those operating at visible wavelengths. Nevertheless, vibration isolation remains necessary, including enclosures to minimize air flow through the experiment.

3.3. Spatial Resolution and Coherent Transfer Function

In lens-less Fourier transform holography, the achievable image resolution is determined by four parameters: the wavelength, the recorded hologram size, the detector pixel aperture, and the off-axis angle between the object and reference beams. If no lenses or imaging optics are used between the object and hologram planes, the hologram plane coincides with the system aperture, entrance pupil, exit pupil, and imaging plane. Consequently, measurements in the hologram plane determine both the amount of radiation collected and the image space numerical aperture.

For coherent imaging, the maximum achievable image resolution is defined by a spatial cutoff frequency fc (cycles/millimeter), determined by the off-axis reference angle, image space numerical aperture, and detector pixel size [35,79]. In digital off-axis holography, three spatial frequencies actually determine the overall system spatial cutoff frequency fc. The first is fci, which is associated with the hologram size and effective numerical aperture. For a spatial frequency to be resolvable at the Rayleigh resolution limit (visibility ≈ 0.2 for incoherent imaging [90]), this cutoff frequency fci is fundamentally set by diffraction and defined by the wavelength λ and the system f-number (f/#) specified in the hologram (image) plane as

f_{ci}=\frac{1}{\lambda\sqrt{1+(2\,f/\#)^2}}=\frac{\mathrm{NA}_i}{\lambda},
where NAi is the image space numerical aperture. To maximize fci at a given wavelength, one must maximize the numerical aperture between the object and hologram plane, which requires the object–detector distance to be as short as possible. Indeed, the numerical aperture may ultimately be limited by the size of the object and the achievable clearance between it and the reference beam.

The second spatial frequency, fcd, depends on the size of the detector aperture w, which determines the maximum spatial frequency the detector can resolve in the hologram plane. It is defined as the spatial frequency that corresponds to the first zero of the detector’s optical transfer function (OTF), which in turn is related to the Fourier transform of the detector’s pixel aperture [79]. If the detector and receiver waveguide apertures are rectangular functions,

\mathrm{OTF}\propto\left|\mathcal{F}\!\left\{\mathrm{rect}\!\left(\frac{x}{w}\right)\right\}\right|^2\propto\mathrm{sinc}^2(w f),
where w is the size of the detector aperture in a given dimension (the x coordinate for this illustration). The first zero of the sinc² function occurs at f = 1/w, so the detector size cutoff frequency fcd in this dimension is simply
f_{cd}=\frac{1}{w}.

The third spatial frequency is defined by the off-axis angle between the object and reference beams. The superposition of the planar reference and object beams in the absence of an object produces a fringe pattern with a spatial frequency

f_{\mathrm{off}}=\frac{\sin\theta}{\lambda},
where θ is the angle between the reference and object beams. The resolution limit of the holographic imager is usually determined by the smallest of these three spatial frequencies. Since fci is determined by the size and location of the object and fcd is fixed for a given detector, foff is the only spatial frequency that can be adjusted experimentally, so an objective when designing a terahertz holographic imager is to maximize θ to maximize foff.
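To make these three cutoff frequencies concrete, the sketch below (an illustration, not code from the original work) estimates them using, for example, the prototype values quoted in Subsection 3.4 (150 mm hologram, 170 mm image distance, 46.89° effective off-axis angle) and the w = 0.432 mm open-waveguide aperture quoted in Subsection 3.5; the geometric estimate of the image-space NA from the hologram half-width is an assumption made here.

```python
import numpy as np

def cutoff_frequencies(wavelength_mm, hologram_size_mm, distance_mm,
                       detector_aperture_mm, theta_deg):
    """Cutoff frequencies of Subsection 3.3, all in cycles/mm."""
    half = hologram_size_mm / 2.0
    NA_i = half / np.hypot(half, distance_mm)              # sine of the marginal-ray angle
    f_ci = NA_i / wavelength_mm                             # Eq. (21)
    f_cd = 1.0 / detector_aperture_mm                       # Eq. (23)
    f_off = np.sin(np.radians(theta_deg)) / wavelength_mm   # Eq. (24)
    return f_ci, f_cd, f_off

f_ci, f_cd, f_off = cutoff_frequencies(0.625, 150.0, 170.0, 0.432, 46.89)
print(f_ci, f_off, f_cd)      # ~0.65, ~1.17, ~2.31 cycles/mm at 0.48 THz
print(f_ci + f_off <= f_cd)   # True: the detector resolves the carrier plus image bandwidth
```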

3.4. Terahertz Holographic Imager

Combining these optical components, a prototypical holographic imager may be constructed. Figure 4(a) shows a three-dimensional rendering of one such design for which the reference beam and mirror were raised above the plane defined by the other optics, including the terahertz source, in order to maximize θ. As will be explained shortly, this was done to increase the separation of the object and reference beams in the image plane by creating large, nonzero off-axis angles in both azimuth and elevation. Of course, the reference and object beams combine at the detector to form the hologram. Figure 4(b) shows the experimental setup in more detail. The source is located at the effective focal length (EFL) of the 90° OAPM in order to produce a collimated wavefront that is directed onto the beam splitter. The 50:50 beam splitter, which bifurcates the beam into object and reference beams, is aligned so that it elevates the reference beam before it is redirected back down to interfere with the object beam in the image plane. The system has been optimized to generate a large off-axis angle and short object–detector distance while ensuring that the reference beam is not clipped by the beam splitter or object.

Figure 4. (a) Three-dimensional drawing of a holographic imager. (b) Top view of the off-axis holographic imager, for which the detector location defines the hologram plane [44].

As an illustration, Table 1 lists the spatial frequencies associated with this experimental configuration, for which the image size was 150 × 150 mm and the image distance was 170 mm. For the configuration used here, the effective off-axis angle θ of the reference beam is 46.89°, composed of azimuth and elevation angles of θa = 41.2° and θe = 22.4°, respectively. Assuming waveguide-mounted Schottky diode detectors or receivers are used, the first column lists the rectangular waveguide band designations for three bands spanning 0.230–0.750 THz, and the second column provides the associated detector aperture dimension w that is inversely related to fcd. A fundamental requirement for off-axis holography is that the detector must be able to resolve the spatial frequency presented by the off-axis interferogram; hence, fcd ≥ foff. Even with its large θ, our experimental configuration easily achieves this criterion, suggesting room for improvement. However, foff cannot be increased much before reaching the limit set by the size of the object through fci. Therefore, a well-designed interferometer will achieve fcd ≥ fci + foff. The last column of Table 1 presents the maximum spatial frequency the detector will see due to the image numerical aperture and the off-axis angle. Note that in every case, the detector resolution exceeds that of the experimental setup, so even higher resolution images are possible.


Table 1. Spatial Cutoff Frequencies

These experimental parameters approach the limit of what is possible using terahertz lens-less Fourier transform holography at these frequencies. As will be described in Subsection 3.6 below, one disadvantage of off-axis holography compared with in-line holography is the spatial bandwidth requirement imposed on the detector aperture, since the off-axis spatial frequency is added to the intrinsic hologram spatial frequencies produced by the object. A comparison between the detector spatial frequency and Nyquist sampling of the maximum spatial frequency content in the image plane indicates that this holographic imaging system has been optimized for the 0.2–0.8 THz region. A further increase in the off-axis angle would require a larger beam splitter and planar reference mirror, since their usable area grows with a 1/cos θ relationship. Furthermore, at such large angles, the reference beam changes from circular to elliptical, which reduces the actual overlap of the reference and object beams.

3.5. Hologram Acquisition

Ideally, TDH would use a densely packed terahertz focal plane array of detectors or receivers to record the hologram. However, state-of-the-art focal plane arrays at terahertz frequencies are in their developmental infancy, and existing focal plane arrays, such as broadband room temperature bolometers, exhibit poor sensitivity and dynamic range at terahertz frequencies [91,92]. Because of the paucity of good detector arrays, the majority of high-fidelity terahertz imaging methods utilize a single, sensitive, often narrowband detector [20]. The single-pixel detector, whether it be a Schottky diode or superheterodyne receiver, can be mounted on two linear translation stages mounted orthogonally to allow for horizontal and vertical raster scanning of the detector. Such scanned single-pixel detectors are less expensive and more sensitive, but acquiring a hologram by scanning can be quite slow. Using commercial translation stages to raster scan the detector across the hologram plane, a 150×150mm hologram takes us 2.5 days to acquire.

In addition, unwanted Fabry–Perot fringes are insidious and always degrade the quality of the hologram, so it is as important to misalign some elements as it is to align other elements. By orienting the detector at an angle slightly oblique from the normal of the object and reference beams, reflections at the detector interface will not reenter the beam path. Although attaching a horn antenna to the detector would increase the detected signal, it also causes significant reflections. More importantly, its large effective aperture (defined by the horn antenna pattern) significantly reduces the detector’s spatial resolution and, therefore, the quality of the holographic reconstruction. For example, an open WR3.4 waveguide detector (w = 0.432 mm) places the instrument in the desirable regime where instrument resolution is determined by the object size and offset angle of the reference beam (fci, foff < fcd = 2.31 cycles/mm). However, if a standard horn antenna is attached (w = 5.6 mm), the detector will have better impedance matching to free space, but its resolution will be severely limited by the size of the horn antenna (fci, foff > fcd = 0.179 cycles/mm). By maximizing the offset angle and minimizing the detector aperture size, we maximize the spatial resolution of the instrument. For the images presented in this tutorial, we eliminated the horn antenna, but this trade-off of spatial resolution and signal strength must be made for each application. Oversampling (pixel pitch Δx, Δy less than the size of the detector aperture) does prove beneficial for the reconstruction of the holograms.

Because the hologram plane is large, the terahertz power at a given location will be tiny, so a heterodyne receiver was used for most of the measurements reported here. For example, our WR1.5 receiver uses a WR1.5 subharmonic mixer accepting an RF of 0.5–0.75 THz with a local oscillator frequency of 0.250–0.375 THz. An IF in the 1–2 GHz range is quite acceptable, especially given the availability of inexpensive commercial amplifiers and filters in this region. Our receiver’s dynamic range was approximately 100 dB with a noise figure of 11 dB at room temperature. To record a 150 mm × 150 mm hologram using a 50 μW source at 0.74 THz, the irradiance uniformly distributed across the hologram plane is I = 2.22·10⁻⁷ W/cm² (−36 dBm/cm²). For a WR1.5 rectangular waveguide (area A = 0.07277 mm²), the power captured by the open-ended waveguide is −68 dBm. So, for an NEP of −122 dBm with a 20 kHz acquisition bandwidth, an SNR of 54 dB is possible without the use of a horn antenna. Indeed, for our measurements, an SNR of 50 dB was routinely measured at 0.74 THz, while in the lower frequency 0.220–0.320 THz band where sources emit 50 mW and receivers have lower noise figures of NF = 6.5 dB, the SNR can be as high as 90 dB.
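The link budget quoted in this paragraph can be reproduced with a few lines of Python (an arithmetic illustration using only the numbers given in the text):

```python
import numpy as np

P_source_dBm   = 10 * np.log10(50e-6 / 1e-3)                  # 50 uW source -> -13 dBm
irradiance_dBm = P_source_dBm - 10 * np.log10(15.0 * 15.0)    # spread over 15 x 15 cm^2
A_wr15_cm2     = 0.07277e-2                                   # WR1.5 aperture, 0.07277 mm^2
P_rx_dBm       = irradiance_dBm + 10 * np.log10(A_wr15_cm2)   # power into the open waveguide
NEP_dBm        = -122.0                                       # 20 kHz bandwidth, from Eq. (20)
print(irradiance_dBm)          # ~ -36 dBm/cm^2
print(P_rx_dBm)                # ~ -68 dBm captured
print(P_rx_dBm - NEP_dBm)      # ~ 54 dB available SNR
```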

3.6. Reconstructing the Image

The heart of DH is the use of the angular spectrum method to reconstruct an image from the hologram. Specifically, this technique allows one to access and modify the hologram digitally in its spatial frequency domain. The angular spectrum method makes use of the fact that any wave field can be decomposed into individual plane waves traveling in different directions. These plane waves add to create a complex wave field, such as a hologram. After decomposition into its individual plane waves, these plane waves can be digitally propagated a distance z to recover the complex wave field at any other location in space. If, for example, this distance is equal to the distance between the object and hologram plane, the hologram is reconstructed in the form of the real image.

Of course, the first step is the creation and digital acquisition of the hologram itself, as described above. The associated fringe frequency will increase with increasing frequency for a given off-axis angle. The second step is the reconstruction of the digital hologram using two-dimensional discrete Fourier transforms and inverse Fourier transforms. This reconstruction recovers the four terms that need to be separated: reference term, autocorrelation term, real spectrum, and virtual spectrum. The reconstruction then focuses on isolating and transforming the real spectrum into the image.

Unlike conventional holography, which relies on the wave propagation of the reconstruction to separate these terms, in DH, the terms are separated in the spatial frequency domain before they are digitally propagated. This is accomplished by first transforming the hologram into the frequency domain, which is equivalent to the plane wave decomposition, by computing the fast Fourier transform,

F_h(f_{x_m},f_{y_n})=\mathrm{FFT}\{U_h(m,n)\},
where Uh(m,n) is the two-dimensional hologram recorded at pixel indices m (x direction) and n (y direction), and Fh represents the decomposed plane waves with spatial frequencies fxm and fyn. A more accurate method is to multiply Uh(m,n) by the reference wave—as is done in conventional holography—before applying Eq. (25). This approach does not limit the accuracy to Δfxm, Δfyn since any phase tilt in the object beam can be corrected by adjusting the reference angle, a very beneficial attribute given the extreme subwavelength sensitivity of the interferometer. In fact, the digital reference angle may be adjusted by monitoring the phase plot until all remaining tilt is removed. By this approach, the reference wave R(m,n) may be described digitally as
R(m,n)=e^{i\frac{2\pi}{\lambda}(m\sin\theta_x+n\sin\theta_y)},
which is multiplied by Uh(m,n) before applying Eq. (25).

Figure 5 shows a representative hologram in its recorded spatial domain and its Fourier-transformed spatial frequency domain. The autocorrelation term is located in the center of the latter image, while the real and virtual spectra are diagonally displaced towards the bottom right and top left, respectively, according to the off-axis angle in the x and y directions. The empty space between these three terms indicates that the spatial bandwidth of these terms is narrow enough that they may be separated, after which spectral filtering and a reconstruction may be attempted. Prior to the reconstruction, however, the spatial frequency offset due to the reference wave must also be removed, which can be accomplished by digitally shifting Fh(fxm,fyn) such that the frequency content associated with the real image, Fo(fxm,fyn), is centered about fxm,fyn=0.

Figure 5. (a) Amplitude target (Siemens star), (b) hologram recorded at 0.550 THz with small insert showing representative interference fringes, and (c) real part of the spatial frequency map showing the autocorrelation, real, and virtual image terms. (d) Portion of spatial frequency map (c) labeling various spatial frequency cutoffs Wh and indicating a circular spectral filter.

The real advantage of DH comes next, as a frequency filter Wh is digitally defined to retain the spatial frequency content of the real image while rejecting that of the object field autocorrelation, reference field, and virtual object field. Because the filter is digitally applied, it may be circular or have a tailored shape determined by the application and the bandwidth of the real image. For illustrative purposes, the objects presented here have circular symmetry, so a circular binary filter was used, and in Subsection 5.4 we will explore how its performance depends on filter radius. The filter function Wh then multiplies Fh to isolate the object spectrum Fo,

F_o(f_{x_m},f_{y_n})=F_h(f_{x_m},f_{y_n})\cdot W_h(f_{x_m},f_{y_n}),
as shown in Fig. 5(d).

Following implementation of Eq. (27), the wave field can be propagated to its original location in the object plane, completing the reconstruction. This backpropagation step is accomplished by multiplying Fo(fxm,fyn) by the spatial frequency transfer function Hf(fxm,fyn), commonly called the backpropagation kernel [84],

H_f(f_{x_m},f_{y_n})=e^{i\frac{2\pi}{\lambda}z\sqrt{1-(\lambda f_{x_m})^2-(\lambda f_{y_n})^2}}.
The reconstructed object Ur(m,n) is then found by an inverse Fourier transform,
U_r(m,n)=\mathrm{IFFT}\{F_o(f_{x_m},f_{y_n})\cdot H_f(f_{x_m},f_{y_n})\},
which completes the angular spectrum reconstruction process. This reconstruction process will now be illustrated by imaging and examining exemplary phase (wedge prism) and amplitude (resolution target) objects.
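Pulling Eqs. (25)–(29) together, the following compact Python sketch reconstructs an off-axis hologram with the angular spectrum method. It is an illustration rather than the code used in this work, and the function name, the circular filter radius, and the sign conventions (z = −d backpropagates to the object plane) are assumptions made here.

```python
import numpy as np

def reconstruct_hologram(U_h, wavelength, dx, dy, z, theta_x, theta_y, f_radius):
    """Angular spectrum reconstruction of an off-axis digital hologram U_h
    sampled at pitch (dx, dy), following Eqs. (25)-(29)."""
    ny, nx = U_h.shape
    y, x = np.meshgrid(np.arange(ny) * dy, np.arange(nx) * dx, indexing="ij")

    # Eq. (26): digital reference wave centers the real-image spectrum
    R = np.exp(1j * 2 * np.pi / wavelength * (x * np.sin(theta_x) + y * np.sin(theta_y)))
    F_h = np.fft.fftshift(np.fft.fft2(U_h * R))                        # Eq. (25)

    # Eq. (27): circular binary filter W_h retains the real-image spectrum
    fx = np.fft.fftshift(np.fft.fftfreq(nx, d=dx))
    fy = np.fft.fftshift(np.fft.fftfreq(ny, d=dy))
    FX, FY = np.meshgrid(fx, fy)
    W_h = (FX**2 + FY**2) <= f_radius**2
    F_o = F_h * W_h

    # Eqs. (28)-(29): backpropagation kernel and inverse transform
    arg = 1.0 - (wavelength * FX)**2 - (wavelength * FY)**2
    H_f = np.exp(1j * 2 * np.pi / wavelength * z * np.sqrt(np.maximum(arg, 0.0))) * (arg > 0)
    return np.fft.ifft2(np.fft.ifftshift(F_o * H_f))
```

Amplitude and phase maps then follow from the returned complex field as described in Section 4.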

4. Reconstruction of Phase Holograms

With this understanding of the fundamentals of lens-less terahertz Fourier transform holography, and having derived the transfer function Hf of the instrument, we may now explore and investigate the reconstruction process. This will be illustrated through a series of measurements that describe the performance of the holographic imager in its classical form, presenting, reconstructing, and analyzing holograms of phase objects acquired at seven frequencies (0.230, 0.320, 0.400, 0.480, 0.550, 0.660, and 0.740 THz). All holograms were recorded at the same reference angle, pixel pitch, hologram size, and object–hologram distance. Hence, differences in the reconstructed images depend solely on the spectral response of the target, the spectral cutoff frequencies, and the digital reconstruction processes. The lengths of both beam paths were less than 2 m, and the controlled laboratory environment kept humidity levels constant during hologram acquisition, so atmospheric absorption effects did not affect the measurements. Instead, mechanical stability of the experimental components was crucial, even at these long wavelengths, and the thin film beam splitter had to be shielded from air currents in the lab.

The analysis of these phase object holograms will reveal the highest phase resolution ever reported using TDH: we show that terahertz radiation can be used to measure OPDs on the order of micrometers, which corresponds to an unprecedented phase resolution of λ/300 without the use of microscopic techniques. By comparison, a state-of-the-art visible light DH technique using digital holographic microscopy has reported a 5 nm resolution at 683 nm, or λ/136 [93]. Examples of applications of terahertz phase mapping as well as amplitude mapping will be presented later [26].

4.1. Reconstructing Wedge Prisms

Four wedge prisms with different wedge angles were used as targets to demonstrate and quantify the phase reconstruction performance of the instrument. The prisms were made of N-BK7 glass, which is fairly absorptive and reflective at terahertz frequencies. The real refractive index is 2.50 across the 0.250–0.750 THz band, and the absorption coefficients are 2, 5, and 12 cm⁻¹ at 0.250, 0.500, and 0.750 THz, respectively [94]. The wedges were 25.0 mm in diameter and had deviation angles of 1°, 2°, 5°, and 10° at 632.8 nm, which respectively correspond to wedge angles of 1.933°, 3.867°, 9.500°, and 18.133°. By recording holograms at the aforementioned wavelengths, the phase accuracy of the phase reconstructed images will be quantitatively assessed. Conversely, the measured optical path length in each wedge will be used to measure the refractive index of N-BK7 at these frequencies. The four prism wedges were mounted in rigid EV45 Conductive Crosslink anti-electrostatic foam [95], which acts as an excellent terahertz absorber and diffuser (Fig. 6).

Figure 6. Prism wedges mounted in rigid terahertz absorbing foam, with wedge angles of 1°, 2°, 10°, and 5°, going clockwise from the upper left corner.

Holograms 150 × 150 mm² in size were collected at all seven frequencies and are displayed in Fig. 7(a). Notice that the intensity distribution is oval shaped, a consequence of the elliptical reference wave entering the hologram plane at the off-axis angle θ. Next, notice that the refracted spots from the 1°, 2°, and 5° wedges are easily identified in the 0.230 and 0.320 THz holograms. Increasing the wedge angle increased the amount each spot was refracted, with the 5° spot refracting to the edge of the hologram and the 10° spot refracting outside the hologram entirely. The appearance of spots confirms that these samples are essentially pure phase objects that do not produce high spatial frequencies in the diffraction pattern (except from their edges).

Figure 7. (a) Phase object holograms from 0.230–0.740 THz. (b) Fourier domain maps of the associated phase object holograms.

The refracted spots become less noticeable above 0.400 THz because of the increasing N-BK7 absorption with increasing frequency. Although the reference wave overwhelms the object wave so that the diffracted wedge spots are not apparent in the 0.480 THz hologram, high-quality reconstructions are still possible. In fact, we can expect a higher quality object reconstruction from the 0.480 THz hologram than from the 0.230 THz hologram, not just because of the higher spatial and phase resolution achievable at higher frequencies, but also because a strong reference wave was shown to be beneficial for separating the real image spectrum from the autocorrelation term.

This increasing separation with increasing frequency can be clearly seen in Fig. 7(b), which presents the Fourier domain decompositions of each hologram. The holograms have already been multiplied by the reference wave so that the real image term is aligned to the center of the Fourier domain plots. A frequency-dependent comparison of the separation between the autocorrelation term and the two conjugate image spectra shows that the three terms overlap significantly for the 0.230 and 0.320 THz holograms, a problem not easily corrected by filtering. In fact, the circular halo centered on the autocorrelation term, which unfavorably overlaps the off-axis terms, is a complication caused by the comparatively strong signal strength of the object wave field at these lower frequencies. At higher frequencies, especially 0.400 and 0.480 THz, the three terms are separated sufficiently so that a Fourier domain spatial filter isolating this term will improve the reconstruction considerably. The increased separation is primarily caused by the way the spatial cutoff frequency associated with the off-axis angle increases as the frequency is increased. But, this benefit comes at a cost: the object wave irradiance decreases with increasing frequency due to the increasing absorption of the wedges, decreasing the strength of the spatial frequencies observed in the autocorrelation term. Indeed, at the highest frequencies (0.550, 0.660, and 0.740 THz), the signal strength of the spectral content for the real image term almost vanishes, so all that remains are the narrow autocorrelation and off-axis terms from the reference wave. Consequently, the reconstructions at these higher frequencies produce noisy images.

4.2. Analysis of the Reconstruction

To complete good reconstructions, several simple digital processing steps are needed. Theoretically, the same process is followed for phase and amplitude reconstructions, aside from the fact that the amplitude plot of the reconstructed wave field is found by multiplying the wave field by its complex conjugate while the phase plot is found from the inverse tangent of the ratio of its imaginary and real parts. Assuming the off-axis reference angle is known accurately, phase reconstructions are significantly improved by removing spatial frequencies not associated with the object. Figure 8 illustrates how an initial phase reconstruction can be significantly improved through minor corrections to remove these obvious obfuscating spatial frequencies.
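In code, these two maps are obtained directly from the reconstructed complex field; a minimal illustration (with a synthetic stand-in field, not data from this work) is:

```python
import numpy as np

U_r = np.exp(1j * np.linspace(0, 4 * np.pi, 64)).reshape(8, 8)   # stand-in complex field
amplitude = np.abs(U_r)**2                    # field multiplied by its complex conjugate
phase = np.arctan2(U_r.imag, U_r.real)        # wrapped phase in (-pi, pi]
```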

Figure 8. (a) Phase reconstruction of the wedges from Fig. 6 degraded by clutter noise. (b) Phase reconstruction with a narrower filter. (c) Phase reconstruction after tilt correction. Visualization 2 and Visualization 3 respectively provide amplitude and phase reconstructions of the backpropagation from the hologram plane to the object plane.

Figure 8(a) shows the reconstructed phase image of the 0.480 THz hologram using the spatial frequency filter Wh(fxm, fyn) = circ(fci) offset by foff = fθa + fθe. From this reconstruction, the 1°, 2°, and 5° wedges are clearly seen, and the observed increase in fringe spatial frequency with increasing wedge angle will allow each wedge angle to be recovered. Although the high-frequency fringe pattern at the location of the fourth wedge suggests that the 10° wedge may also be reconstructed, this is actually an artifact caused by leakage around the edge of the filter that allows high-frequency spectra from the autocorrelation term to appear. Confirmation that this is an artifact comes from opening the filter even further and observing that the fringe pattern extends throughout the reconstruction and is therefore not caused by the object. Even the reconstruction of the 5° wedge is partially scrambled by the same high-frequency pattern, indicating that the filter size must be reduced.

The simple filter used in Fig. 8(a), discussed in more detail in Subsection 5.4, is not ideal because it retains a segment of the fx=fx, fy=0 line that causes some ringing in the amplitude reconstruction. Figure 8(b), which was generated with a narrower filter that excludes the fx=fx, fy=0 line, shows that the reconstruction has been improved significantly throughout the entire image. With this filter, the reconstruction of the 5° phase wedge shows a much cleaner fringe pattern, while the reconstruction of the 10° phase wedge exhibits only random noise, confirming the expectation that refraction prevented its diffraction spot from being recorded. However, the sample exhibits a residual diagonal tilt of about λ/2 (300 μm) from the top left corner (bright yellow) to the bottom right corner (dark blue). This remaining tilt can be digitally corrected by small adjustments of the off-axis angles: adjusting θa by 0.12° and θe by 0.2° flattens the background [Fig. 8(c)].
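
In practice this tilt correction amounts to multiplying the reconstructed field by a plane-wave phase ramp whose carrier frequencies correspond to the small angle corrections. A minimal numpy sketch, with hypothetical function and parameter names and values taken from the example above, is shown here; it is illustrative rather than the processing code actually used.

```python
import numpy as np

def remove_residual_tilt(u, wavelength, pixel_pitch, d_theta_az_deg, d_theta_el_deg):
    """Multiply a reconstructed complex field u by a plane-wave phase ramp that
    compensates small errors in the assumed off-axis angles (e.g., corrections
    of 0.12 deg in azimuth and 0.2 deg in elevation)."""
    ny, nx = u.shape
    y, x = np.mgrid[0:ny, 0:nx] * pixel_pitch                 # physical coordinates
    dfx = np.sin(np.radians(d_theta_az_deg)) / wavelength     # carrier-frequency corrections
    dfy = np.sin(np.radians(d_theta_el_deg)) / wavelength
    return u * np.exp(-1j * 2 * np.pi * (dfx * x + dfy * y))

# Example: 0.48 THz (wavelength 0.625 mm), 0.5 mm pixel pitch (assumed values).
u = np.ones((300, 300), dtype=complex)
u_flat = remove_residual_tilt(u, wavelength=0.625, pixel_pitch=0.5,
                              d_theta_az_deg=0.12, d_theta_el_deg=0.2)
```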

Figure 9 displays the amplitude and phase reconstructions at all frequencies. The low spatial resolution and interfering autocorrelation term make the 0.230 and 0.320 THz amplitude reconstructions appear blurry, and the fringes in the phase reconstructions are not crisp and exhibit a slow modulation. Conversely, the 0.550, 0.660, and 0.740 THz reconstructions are quite noisy because of the strong absorption by the wedges. Indeed, the 5° wedge shows a strong amplitude gradient at all frequencies caused by the thickness gradient of the wedge. The 0.400 and 0.480 THz reconstructions demonstrate the best trade-off between crisp, high spatial frequency lines and a good SNR.

Figure 9. Amplitude and phase reconstructions of the four prism wedges over the frequency range 0.230 to 0.740 THz.

4.3. Shape Reconstruction

For further quantitative analysis of the reconstructed 0.480 THz image, consider Fig. 10, which summarizes the steps required to isolate and unwrap the phase wedges. The arrangement of the individual prism wedges is shown in the photograph of Fig. 10(a). Figure 10(b) displays the phase reconstruction as three- and two-dimensional plots. Figure 10(c) shows the isolated phase wedges after simple cropping of Fig. 10(b); no other signal processing was performed. To unwrap the phase map of Fig. 10(c) requires only a simple algorithm to stitch together segments differing in phase by 2π. As displayed in Fig. 10(d), the prism wedges become simple objects with no phase ambiguities. Of course, if more complex structures need to be unwrapped, a more complex algorithm must be developed, or the two-wavelength phase unwrapping method discussed in Subsection 6.2 may be applied to extend the unambiguous depth.
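
For a well-behaved phase map such as this one, the stitching step can be as simple as adding or subtracting 2π wherever the jump between neighboring samples exceeds π, which is exactly what numpy.unwrap does along one dimension. The sketch below illustrates that simple algorithm; it is not the specific unwrapping code used for these measurements.

```python
import numpy as np

def unwrap_1d(phase):
    """Stitch together segments of a wrapped 1D phase profile (radians) by adding
    multiples of 2*pi whenever the jump between adjacent samples exceeds pi."""
    out = np.array(phase, dtype=float)
    for j in range(1, out.size):
        jump = out[j] - out[j - 1]
        if jump > np.pi:
            out[j:] -= 2 * np.pi
        elif jump < -np.pi:
            out[j:] += 2 * np.pi
    return out

# Example: a wrapped linear ramp, as produced by a phase wedge.
x = np.linspace(0, 10 * np.pi, 200)
wrapped = np.angle(np.exp(1j * x))
assert np.allclose(unwrap_1d(wrapped), np.unwrap(wrapped))
```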

Figure 10. Unwrapping steps of the 0.480 THz phase reconstruction, from (a) an optical photograph of the four wedges (placed on a red plate instead of in the absorbing foam so their orientations may be seen), (b) hologram phase reconstruction, (c) cropped phase reconstruction, and (d) unwrapped phase map.

In order to extract the slopes of the wedges from these holographic reconstructions, the individual unwrapped phase maps are plotted in Fig. 11(a). The slopes of the wedges along the lines overlaid on these plots may be extracted, and these slopes may be further analyzed to recover the phase resolution of the reconstruction and estimate the effective refractive index of N-BK7 at 0.480 THz. The phase resolution was first extracted from the 1° wedge because it displayed the smallest phase step between adjacent pixels; the mean step along the overlaid line was 0.0342±0.0035 radians. In order to translate radians into a measure of depth, the refractive index must be known. Rather than use the index reported in the literature, which is 2.50 at 0.480 THz [94], a linear fit to the slopes of the three wedges was performed to calculate the refractive index from the measurement. Figure 11(b) plots these slopes, for which a refractive index of n=2.54±0.03 gives the best fit, as described in the next section.
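
The fit itself is a one-line least-squares problem once the expected relationship between phase slope and wedge angle, (2π/λ)(n − 1)tan θw, is written down. The sketch below illustrates the idea with rounded, illustrative slope values rather than the measured data.

```python
import numpy as np

def index_from_wedge_slopes(phase_slopes, wedge_angles_deg, wavelength):
    """Estimate the refractive index from unwrapped-phase slopes (rad per mm) of
    several wedges.  For a wedge of angle theta_w the expected slope is
    (2*pi/wavelength)*(n - 1)*tan(theta_w), so a linear fit of slope versus
    tan(theta_w) yields n."""
    t = np.tan(np.radians(np.asarray(wedge_angles_deg)))
    s = np.asarray(phase_slopes)
    coeff = np.polyfit(t, s, 1)[0]            # slope of the linear fit
    return 1.0 + coeff * wavelength / (2 * np.pi)

# Illustrative values at 0.480 THz (wavelength 0.625 mm) for the 1, 2, and 5 deg wedges.
n_est = index_from_wedge_slopes([0.27, 0.54, 1.36], [1, 2, 5], wavelength=0.625)
```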

Figure 11. (a) Enlarged view of unwrapped phase wedges from Fig. 10(d) using the 0.48 THz holographic phase reconstruction. (b) Measurements and fitted slopes of phase wedges.

4.4. Longitudinal Resolution

The phase resolution of the reconstruction process depends on many factors, including the phase stability of the terahertz source, environmental conditions such as mechanical vibrations, the scan accuracy and hysteresis of the scanning stages, and even how well the reference and object wavefronts are described by the hologram reconstruction process. According to the manufacturer, the phase stability of our source is better than 0.5 deg, which translates to λ/720 or 0.87 μm at 0.48 THz. This is the depth resolution limit for our holographic imager. The measured depth resolution Δd, defined here as the mean of the phase difference between adjacent pixels of the 1° wedge, may be calculated using

Δd = Δφ · λ / [2π(n − 1)],
where Δφ is the mean phase step between adjacent pixels. Δd was calculated to be 2.2 ± 0.2 μm. Expressed as a fraction of the wavelength, Δd corresponds to λ/284, only a factor of 2.5 larger than the ultimate resolution limit!
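
Plugging the quoted numbers into this expression reproduces the result; the short calculation below is included only as a sanity check of the arithmetic.

```python
import numpy as np

# Depth resolution Delta_d = Delta_phi * lambda / (2*pi*(n - 1)), using the values
# quoted in the text (mean phase step 0.0342 rad, 0.480 THz, n = 2.54).
wavelength = 0.625          # mm at 0.480 THz
n = 2.54
delta_phi = 0.0342          # rad
delta_d = delta_phi * wavelength / (2 * np.pi * (n - 1))   # ~2.2e-3 mm = 2.2 um
print(f"depth resolution = {delta_d * 1e3:.1f} um  (= lambda/{wavelength / delta_d:.0f})")
```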

The largest source of pixel-to-pixel phase error may be the reported 1 μm accuracy of the scanning stages, a problem that may be resolved by using better stages or a stationary imaging array. Environmental disturbances, such as mechanical vibrations of the beam splitter and optical mounts, air currents, and temperature fluctuations, present another source of phase error. Perhaps even more significant is the assumption that the reference and object beams are plane waves, because this ignores problems caused by misalignments between the source and collimating OAPMs. The plane wave agreement can be measured by recording a hologram without an object and assuming a plane wave reference beam. Any deviations from an actual plane wave may then be removed by subtracting the reconstructed phase hologram without an object from the reconstructed phase hologram with an object [35,80].

5. Reconstruction of Amplitude Holograms

Having explored the reconstruction of holograms from “phase objects,” we proceed now to more comprehensive reconstructions of true objects, including both phase and amplitude. By expanding the system transfer function to include the reconstruction process, we present a series of measurements that describe the performance of the holographic imager. For this, we use an amplitude resolution target located in the center of the object plane so that the transfer function is independent of the size 2a or lateral location of the object. Therefore, the transfer function derived from these measurements is compared to the nominal cutoff spatial frequency a/(λd) for 2a ≪ d, as well as to other cutoff frequencies defined by the various spatial frequency filters employed in the reconstruction process. This transfer function will then be examined as a function of the lateral location of a single point object and the hologram aperture size to illustrate how these parameters can affect the transfer function’s spatial cutoff frequency.

The performance of the holographic system will be characterized as a function of frequency between 0.230 and 0.740 THz. The detectors were identical in physical size, and all holograms were recorded at the same reference angle, pixel pitch, hologram size, and object–hologram distance. Thus, any differences in the reconstructed images are independent of the optical setup and must depend on the spectral response of the target, the spectral cutoff frequencies, and the digital reconstruction processes.

5.1. Coherent Transfer Function for Small Objects

As described in Subsection 3.3, a measure of the spatial resolution of a DH imager is the coherent transfer function. For an object of linear spatial radius a, we first consider a lens-less Fourier hologram with object–hologram distance d ≫ 2a, as illustrated in Fig. 12(a). In this configuration, the object height is assumed to be small compared with the size of the hologram plane, and the beam illuminating the object is considered to be a planar wave front.

Figure 12. (a) Object and hologram planes in lens-less Fourier transform holography for which the object size is small compared to d. (b) Illustration of diffraction rings in the hologram plane from a large object (or from point scatterers in the center and at the edge of the object plane).

The resolution of an imaging system may be conveniently described by its spatial cutoff frequency. Considering Fig. 12(a) and omitting the reference beam for now, the maximum spatial frequencies seen in the hologram plane due to structures in the center and on the edge of the object follow Eq. (24), so that fc-center = sin α1/λ and fc-edge = sin α2/λ. However, for small objects, d ≫ 2a, α1 ≈ α2, and fc-center ≈ fc-edge. Following the discussion of the amplitude transfer function by Goodman [79], the cutoff frequency of this geometry is described by α1, and the transfer function H(fx,fy) is related to a pupil function P(x,y) defined by the hologram plane

P(x,y) = rect(x/(2a)) · rect(y/(2a)),
and
H(fx,fy) = rect(λd·fx/(2a)) · rect(λd·fy/(2a)).
Solving Eq. (32) along fx, fy when H(fx,fy) = 0 gives the familiar cutoff frequency,
fc = a/(λd).
The object used for these experiments is a 50 mm diameter Siemens star target (Fig. 5) centered in the object plane with a spoke angle of 10° and a total of 36 spokes. The spatial frequency varied gradually from 0.229 cycles/mm at the outer rim to 114.5 cycles/mm at the center of the target. Hence, the spatial frequency of the star target can be expressed as a function of radius. Figure 13 displays the amplitude reconstruction of the star target at the same seven frequencies used in the previous section. Because of spatial frequency filtering, no autocorrelation term is visible, an advantage of the angular spectrum method. The horizontal line in the center of the target, which is most visible in the 0.230 THz reconstruction, is a polarization effect. Here, the spoke separation is so small that the target acts like a wire grid polarizer, allowing only the horizontal linearly polarized radiation to pass through.
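
The relationship between the star geometry and the resolution limit can be made concrete with a short calculation. The sketch below uses approximate values for the geometry described in this tutorial (75 mm half-aperture, roughly 170 mm object–hologram distance) and is illustrative only.

```python
import numpy as np

def cutoff_frequency(a_mm, wavelength_mm, d_mm):
    """Classical cutoff frequency f_c = a/(lambda*d) of Eq. (33)."""
    return a_mm / (wavelength_mm * d_mm)

def star_frequency(radius_mm, n_cycles=36):
    """Local spatial frequency (cycles/mm) of a 36-spoke Siemens star at radius r:
    f(r) = n_cycles/(2*pi*r).  At the 25 mm outer rim this gives ~0.229 cycles/mm."""
    return n_cycles / (2 * np.pi * radius_mm)

# Assumed geometry: a = 75 mm, d = 170 mm, 0.480 THz (wavelength 0.625 mm).
fc = cutoff_frequency(75.0, 0.625, 170.0)      # ~0.71 cycles/mm
f_rim = star_frequency(25.0)                   # ~0.23 cycles/mm at the outer rim
r_min = 36 / (2 * np.pi * fc)                  # ~8 mm: spokes inside this radius blur together
```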

Figure 13. Angular spectrum reconstruction of the star target between 0.230 and 0.740 THz. Visualization 4 presents an amplitude reconstruction of the backpropagation from the hologram plane to the object plane using the angular spectrum method at 0.660 THz.

The measured transfer functions deduced from Fig. 13 for each frequency are plotted in Fig. 14, overlaid by dotted lines representing the maximum spatial frequencies predicted by Eq. (33). To define the empirical maximum resolution of the experiment using the star target, we used as a threshold the spatial frequency (cycles/mm) at which at least 30 of the 36 spokes were resolved at the Rayleigh limit. Figure 14 confirms the expectation that the spatial resolution improves with increasing frequency. The abrupt drop in the measured curves, which occurs at spatial frequencies slightly lower than predicted by Eq. (33), is due to the cutoff frequency of the filter Wh.

Figure 14. Measured transfer function of the holographic imager using the angular spectrum reconstruction method, compared with the expected frequency-dependent resolution limits (dotted vertical lines) calculated using Eq. (33).

5.2. Coherence Transfer Function for Large Objects

It is important to note that Goodman [79] specified that Eq. (32) and, therefore, Eq. (33) are approximately true for plane wave illumination if the object is sufficiently small in extent, as shown in Fig. 12(a). For large objects, such as those described here, this analysis is incomplete. In this case, the transfer function of this TDH imager deviates quite significantly from the classical representation for small objects for two primary reasons: the unusually large image-space numerical aperture of the system and the large object size compared to the object–hologram plane distance. Recall that the object–hologram plane distance of 170 mm is comparable to both the 50 mm object and the 150 × 150 mm recorded hologram.

Here we present a new transfer function to describe TDH in this regime, validated by empirical results that reveal how lens-less Fourier transform holography can resolve wavelength-scale features, without additional optics, at standoff distances of tens to hundreds of centimeters. Consequently, this technique could become highly practical with the development of terahertz focal plane arrays so that real-time imaging could become possible.

Experimental measurements confirm that the transfer function depends not only on wavelength λ, the size of the object plane 2a, and the distance between the object plane and the hologram plane d, but also on the coordinate of the scatterer in the object plane. Aside from Goodman’s assumption of a converging object beam [79], which is often omitted in other discussions, the coordinate of the scatterer within the object plane is not considered at all. In the following, we develop a theoretical construct and algorithm that incorporates the object coordinate of the scatterer into this transfer function. It is important to clarify that the validity of Eq. (32) is not in question, only that it requires additional attention in this regime. We assume that the optical axis passes through the centers of the object and hologram planes and that both planes are oriented parallel to each other. This assumption holds for almost every transmission imaging configuration.

Holography in general can be explained by the recording of a collection of point objects, whose spherical diffraction patterns interfere with a reference plane wave. In the case of an in-line (Gabor-type) hologram, the interference pattern then resembles that of a collection of Fresnel zone rings [35]. In the case of an off-axis hologram, these Fresnel zone rings are shifted in spatial frequency according to the off-axis angle of the reference wave. If the detector’s spatial frequency bandwidth is large enough to record the frequency shift of the reference wave and the spatial frequencies of the object, then the quality of the reconstruction remains a function of one’s ability to process the hologram digitally. Theoretically, the recorded object diffraction patterns for in-line or off-axis holography are identical, assuming perfect signal processing using spatial filtering, phase shifting, or other techniques to eliminate the twin-image or zero-order light. Hence, the following discussion applies to both in-line and off-axis holography.

Guided by Fig. 12(b), we consider a larger object and the two radii shown, which correspond to the rings of a Fresnel zone plate originating from the object center and edge. We can use the corresponding maximum recorded diffraction angles α1 and α2 and their respective radii r1 and r2 to find the spatial frequency. From Fig. 12(b) and Eq. (33), the diffraction angle, radius, and spatial frequency are related by

f = r · cos α / (d · λ).
Figure 12(b) suggests that a diffracting wave that originated from a point towards the edge of a large object is recorded at a higher spatial frequency than a diffracting wave that originated from the center of the object, i.e., f2>f1. Furthermore, the recorded high-frequency diffraction ring represented by r2 in Fig. 12(b) is captured only partially (one half of the full diffraction ring in this case), while the diffraction ring represented by r1 is captured fully. When a diffraction ring is only partially recorded, some of the signal associated with it is lost, and the amplitude of its transfer function is reduced, i.e., |H(f2)|<|H(f1)|. The total amount of the signal collected from a given spatial frequency can then be defined as the fractional circumference R(x,y,r) of a diffraction ring with radius r, which originated from a point object in the object coordinate (x,y) and is contained within the boundaries of the hologram plane,
R(x,y,r) = (arc length within the boundaries of the hologram plane) / (2πr).
The goal then is to find the intercepts between a circle of radius r and a hologram plane with dimensions 2a × 2a. The object size here was chosen to be 2a as well, but the algorithm does not require that. The algorithm considers 11 cases corresponding to various intersections between the square hologram aperture and a diffraction ring that originated from a point object (x,y) somewhere in the object plane. These 11 cases are discussed in detail in Appendices A and B and used below.
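
R(x,y,r) can also be estimated numerically by sampling points around the ring and counting how many land inside the aperture, which provides a convenient cross-check of the analytic cases. A minimal sketch of that numerical stand-in (not the algorithm itself) is:

```python
import numpy as np

def fractional_ring(x, y, r, a, n_samples=4096):
    """Numerically estimate R(x, y, r): the fraction of a diffraction ring of radius r,
    centered above the object point (x, y), that falls inside the 2a x 2a hologram
    aperture (0 <= x, y <= 2a)."""
    phi = np.linspace(0.0, 2 * np.pi, n_samples, endpoint=False)
    px, py = x + r * np.cos(phi), y + r * np.sin(phi)
    inside = (px >= 0) & (px <= 2 * a) & (py >= 0) & (py <= 2 * a)
    return inside.mean()

# Ring from the aperture center versus a ring from near a corner of the object plane.
a = 75.0
print(fractional_ring(a, a, 60.0, a))       # 1.0: fully captured
print(fractional_ring(5.0, 5.0, 60.0, a))   # ~0.28: mostly clipped by the aperture
```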

5.3. Lateral Spatial Resolution Limits

A validation of this algorithm was performed with a simulation of six identical diffracting object waves at 0.6 THz. Each object was modeled as two circular holes spaced 1 mm apart. The diffracted waves originating from these objects, which were placed on a diagonal with increasing distance from the center of the object plane as shown in Fig. 15(a), were calculated for the hologram plane using angular spectrum propagation theory. According to this modified transfer function, if the cutoff frequency of Eq. (33) falls below the spatial frequency associated with the spacing of the two holes, then a reconstruction of the two holes should be unsuccessful for the object at the center of the object plane. However, if the spatial frequency associated with the hole separation is smaller than 2fc, for example, then the object placed in the corner of the object plane should still be resolved, though at a reduced magnitude, i.e., the reconstructed object should appear dimmer in comparison to the object in the center of the object plane.
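
A minimal sketch of the angular spectrum propagation used for this kind of simulation is shown below; the grid size, pixel pitch, and pinhole placement are assumptions chosen only for illustration.

```python
import numpy as np

def angular_spectrum_propagate(u0, wavelength, pixel_pitch, z):
    """Propagate a complex field u0 (2D array, sample spacing pixel_pitch) a distance z
    with the angular spectrum method: FFT, multiply by the transfer function
    exp(i*2*pi*z*sqrt(1/lambda^2 - fx^2 - fy^2)), inverse FFT.  Evanescent components
    (fx^2 + fy^2 > 1/lambda^2) are suppressed."""
    ny, nx = u0.shape
    fx = np.fft.fftfreq(nx, d=pixel_pitch)
    fy = np.fft.fftfreq(ny, d=pixel_pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kernel = np.where(arg > 0,
                      np.exp(1j * 2 * np.pi * z * np.sqrt(np.maximum(arg, 0.0))), 0)
    return np.fft.ifft2(np.fft.fft2(u0) * kernel)

# Two pinholes 1 mm apart at 0.6 THz (wavelength 0.5 mm), propagated 170 mm.
pp = 0.25                                    # mm per pixel (assumed)
u0 = np.zeros((512, 512), dtype=complex)
u0[256, 254] = 1.0
u0[256, 258] = 1.0                           # 4 pixels * 0.25 mm = 1 mm separation
u_holo = angular_spectrum_propagate(u0, wavelength=0.5, pixel_pitch=pp, z=170.0)
```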

Figure 15. (a) Simulated two point objects 1 mm apart at various locations in the object plane to illustrate the lateral resolution of the transfer function. (b) Calculated amplitude representation of the diffracted object waves, recorded for each location in the hologram plane at 0.6 THz.

The ability to resolve object pairs at locations between the two extremes should improve gradually as the pairs move from center to corner. Figure 15(b) plots the cross sections through each pair after reconstruction for each location indicated in Fig. 15(a). The plots confirm that the pairs labeled 1–4 are essentially unresolved, while the pairs at the edge of the object plane labeled 5 and 6 are resolved but weak, as shown by the clear separation but reduced amplitude of the two peaks. Note that the simulation was intended to provide a visualization of this algorithm, and no absolute dimensions were provided.

Transfer functions are usually shown as a function of spatial frequency, not object location. |H(x,y,fx,fy)| can certainly be plotted as a function of fx, fy for defined point object locations, as shown in Fig. 16. In order to generate these plots, the target half-length a was chosen to be 75 mm, the object–detector distance was set at 180 mm, and the wavelength was set to 0.5 mm (0.6 THz). The inset of Fig. 16 shows the nine different locations where a single point scatterer was placed to emit a diffracting wave. |H(x,y,fx,fy)| is then plotted for each scatterer assuming symmetry fx = fy. The cutoff frequency from Eq. (33) overlays these calculations as a solid red line, which corresponds to 0.83 cycles/mm. As expected, a point scatterer at location 1, centered in the object plane, closely follows the reference curve from Eq. (33). The point scatterer farthest from the object center (5) indicates that spatial frequencies higher than 2 cycles/mm can be resolved there. However, this comes at a price: H associated with scatterer 5 quickly drops to 0.3 even for low spatial frequencies. Abrupt changes observed in the transfer function occur when the diffraction ring crosses an additional side or edge as its diameter increases with increasing spatial frequency.

Figure 16. Transfer function at 0.6 THz for nine point scatterer locations within the object plane.

To illustrate how this spatial dependence affects the reconstruction of an actual object, Fig. 17 shows two amplitude reconstructions of the Siemens star target with high spatial frequency content at 0.550 THz: one with the target located in the center of the object plane, and the other with the target at the edge of the object plane. The red circle in both reconstructions of Fig. 17(a) represents a star spoke separation whose spatial frequency is 0.45cycles/mm. This spatial resolution coincides with the maximum resolvable feature size of the target if placed in the center of the object plane. In contrast, the resolved spokes of the target located towards the edge of the object plane extend beyond the red circle, confirming the improved resolution there.

Figure 17. (a) Reconstructions of the targets in two locations at 0.55 THz. (b) Frequency domain spatial Fourier transform of hologram. (c) Empirical comparison of the transfer functions for the two object locations, confirming the higher spatial frequency content when the target is placed near the edge of the object plane.

A representation of the star target in the spatial frequency domain [Fig. 17(b)] is another useful way to compare the captured spatial frequency content. Again, the red circle corresponds to a spatial frequency of 0.45 cycles/mm. The spatial frequency map of the target placed at the object edge confirms that higher spatial frequencies are recorded (beyond the red circle in the positive fx dimension) than for the object placed in the center. Lastly, Fig. 17(c) plots the transfer function for both cases, overlaid by simulations using the previously described algorithm. The cutoff frequency according to Eq. (33) is also presented in black. This graph again confirms that for an object near the edge, a spatial frequency close to 2fc can be reconstructed, although at a reduced magnitude. The simulation and measured results for the edge object follow a common trend but are offset in magnitude, most likely because the simulation considers a single point object at a single object plane location, whereas the transfer function of the star target is obtained over a larger area. The sharp cutoff of the measured transfer function comes from the analysis code, which classifies the target as unresolved at a given spatial frequency if fewer than 30 of the 36 spokes are resolved. How aberrations such as coma and astigmatism affect image quality for this imaging system with its high numerical aperture is discussed in Appendix C.

5.4. Filter Choices for the Angular Spectrum Reconstruction

As noted previously, the results of the angular spectrum reconstruction process critically depend on the selection of a suitable spatial frequency filter Wh(fxm, fyn). We use a simple circular filter: the Fourier domain maps are multiplied by 1 inside a circle of radius fw in the (fxm, fyn) plane and by 0 outside. Here we explore the sensitivity of the reconstruction to the selection of fw. Recall that one of the major improvements to the holographic imager was an off-axis angle that extended both in azimuth and elevation, allowing greater separation and cleaner filtering of the real image term from the other terms. Figure 18(a), which displays the spatial frequency map of the 0.400 THz hologram, clearly shows the motivation behind this improvement. The Fourier transform of the hologram plane shows a strong line feature along the fx=0, fy=fy and fx=fx, fy=0 axes. This feature stems from the 150 × 150 mm² detection aperture. These lines do not exist in the actual object but would introduce an artifact in the backpropagation kernel that produces significant ringing in the reconstructed image if not rejected. In order to reject these lines, we consider filters corresponding to three off-axis angles: the spatial frequency associated with the off-axis angle in azimuth, in elevation, and the combined off-axis angle. We consider also a fourth filter associated with the hologram aperture size cutoff frequency described by Eq. (33) and a fifth filter described by the textbook relationship [Eq. (5)] between the off-axis angle and the maximum separable bandwidth for off-axis holography, sin θ/λ = 3B [79]. Lastly, we consider a sixth filter that uses the spatial cutoff frequency associated with the azimuth off-axis angle combined with a Kaiser apodization window to remove the fx=0, fy=fy and fx=fx, fy=0 spectral lines.
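
Constructing such a circular filter centered on the off-axis carrier is straightforward in the Fourier domain. The sketch below uses the angles quoted for this setup and a placeholder cutoff radius; it is an illustration of the idea, not the exact processing code used for the measurements.

```python
import numpy as np

def circular_offaxis_filter(shape, pixel_pitch, wavelength,
                            theta_az_deg, theta_el_deg, f_cut):
    """Binary filter W_h: 1 inside a circle of radius f_cut (cycles/mm) centered on the
    spatial carrier set by the off-axis angles, 0 outside.  The text selects f_cut
    between sin(theta)/(3*lambda) and sin(theta)/lambda."""
    ny, nx = shape
    fx = np.fft.fftshift(np.fft.fftfreq(nx, d=pixel_pitch))
    fy = np.fft.fftshift(np.fft.fftfreq(ny, d=pixel_pitch))
    FX, FY = np.meshgrid(fx, fy)
    f_az = np.sin(np.radians(theta_az_deg)) / wavelength    # ~0.88 cycles/mm at 0.400 THz
    f_el = np.sin(np.radians(theta_el_deg)) / wavelength    # ~0.51 cycles/mm at 0.400 THz
    return ((FX - f_az)**2 + (FY - f_el)**2 <= f_cut**2).astype(float)

# 0.400 THz (0.75 mm), 41.2 deg azimuth and 22.4 deg elevation angles, filter 4 radius.
W = circular_offaxis_filter((300, 300), pixel_pitch=0.5, wavelength=0.75,
                            theta_az_deg=41.2, theta_el_deg=22.4, f_cut=0.59)
```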

Figure 18. (a) Spatial frequency map of 0.400 THz hologram and (b) real image spectrum with overlaid filter circles.

Figure 18(b) shows five rings centered about the spectrum of the real image. Each ring represents a filter radius choice, and the performance of all six filters will be discussed in the following. Table 2 summarizes the six filters, labeled to match the rings in Fig. 18. Note that the ultimate choice of filter depends not only on the desired image resolution or quality but also on other applications such as edge detection. If edge detection is desired, a bandpass filter that isolates the real image spectrum while excluding the low-frequency content at the spectrum center must be used. Alternatively, the circular filter could be chosen in the form of a notch filter to detect features that generate specific spatial frequencies.

Table 2. Filters Related to the Physical Quantities of the Experimental Setup

Figure 19 displays, for each filter listed in Table 2, the filtered spectrum that is backpropagated as well as the associated image reconstruction. We start with the largest spatial cutoff frequency of fw = 0.98 cycles/mm [Fig. 19(a)], corresponding to the total off-axis angle of 46.9° and shown as red circle 1 in Fig. 18(b). The reconstruction in Fig. 19(b) includes a large portion of the autocorrelation term, which degrades the reconstructed real image so severely that it is barely recognizable. Clearly, this filter is too large. In fact, fw = sin θ/λ is equivalent to Eq. (6) and can be considered the maximum spatial filter range; a stronger reference wave would improve the filter performance.

Figure 19. (a) Filtered spectrum with fw=0.98cycles/mm and (b) reconstructed image. (c) Filtered spectrum with fw=0.88cycles/mm and (d) reconstructed image. (e) Filtered spectrum with fw=0.88cycles/mm and an applied Kaiser apodization window, and (f) reconstructed image. (g) Filtered spectrum with fw=0.59 cycles/mm and (h) reconstructed image. (i) Filtered spectrum with fw=0.51cycles/mm and (j) reconstructed image. (k) Filtered spectrum with fw=0.33cycles/mm and (l) reconstructed image.

Filter 2 corresponds to the azimuth off-axis angle of 41.2°, equivalent to a spatial cutoff frequency of fw = 0.88 cycles/mm, as shown by the blue circle in Fig. 18(b). Figures 19(c) and 19(d) show the filtered spectrum and reconstructed image, respectively. Here, the autocorrelation term is completely rejected, which significantly improves the quality of the reconstruction. However, the fx=fx, fy=0 line remains, producing a fringe pattern that overlays the star target and causes a ragged appearance of the star spokes. A closer look at Fig. 19(c) shows that the filter cutoff frequency has not reached the maximum spatial frequency content of the real image, suggesting that the filter radius should be decreased further.

However, before we consider the next filter reduction, a variation to the filter in Fig. 19(c) was used to reject the fx=fx,fy=0 line by apodization of the hologram in software. The goal of an apodization window is to attenuate the signal near the hologram edges to about 0.5 of the original signal. This will reduce the fx=fx,fy=0 line while retaining at least half of the high spatial frequency information, which is captured towards the edge of the hologram aperture. After trying several apodizations, including Bartlett, Blackman, Chebyshev, Hamming, and Gaussian functions, the Kaiser window WK(m) was found to be the most suitable filter, defined as

WK(m) = I0(0.5π · √(1 − [2m/(M − 1) − 1]²)) / I0(0.5π),
in one dimension, where m is the pixel index (1 ≤ m ≤ M) and I0 is the zeroth-order modified Bessel function of the first kind [96]. Figure 19(e) shows the result when the Kaiser apodization window was applied to the hologram as the initial reconstruction step, and the same cutoff frequency as in Fig. 19(c) was used. The Kaiser window successfully suppressed the fx=fx, fy=0 spectral line, which almost completely removed the ringing artifact in the reconstructed image in Fig. 19(f). However, the high spatial frequency content was also reduced, which can be seen in the larger unresolved central area of the star target and the blurry appearance of the image.
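
With β = 0.5π, the expression above coincides (up to the indexing convention) with the standard Kaiser window available in numpy, so the two-dimensional apodization can be applied as an outer product of one-dimensional windows. A minimal sketch, with a placeholder hologram array standing in for recorded data:

```python
import numpy as np

# Two-dimensional Kaiser apodization of a hologram before the Fourier transform.
hologram = np.ones((300, 300))                     # placeholder for the recorded hologram
beta = 0.5 * np.pi
w2d = np.outer(np.kaiser(hologram.shape[0], beta), # np.kaiser(M, beta) is the 1D window
               np.kaiser(hologram.shape[1], beta))
apodized = hologram * w2d                          # tapers the signal near the aperture edges
```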

Filter 4, the next narrower filter tried, corresponds to the spatial cutoff frequency in Eq. (33), which is fw=0.59cycles/mm and is shown by the black circle in Fig. 18(b). A comparison with the previous filters shows that this filter, depicted in Fig. 19(g), improves the reconstructed high-frequency content significantly. Nevertheless, this filter still contains a small portion of the fx=fx,fy=0 spectral line, which causes some ringing in the right portion of the reconstructed image in Fig. 19(h). This obfuscating line can be moved beyond the spatial cutoff frequency with apodization or if the elevation off-axis angle is increased from 22.4° to 26.3°.

The fifth filter is associated with the elevation off-axis angle of 22.4°. It corresponds to fw = 0.51 cycles/mm and is shown as a purple ring in Fig. 18(b). At this filter radius, the spectral lines due to the hologram aperture are completely rejected, and the ringing in the reconstruction is suppressed. However, since this filter is slightly smaller than fc according to Eq. (33), one should expect a small loss (∼15%) in resolution. Figures 19(i) and 19(j) show the spectrum and reconstruction, respectively, confirming that the ringing has been completely removed at the expense of some transverse resolution. The choice between a filter that completely removes the effects of the hologram aperture and a filter that retains the classical cutoff frequency fc of Eq. (33) depends on the application. One must also consider that the ringing will be apparent in the phase reconstruction imagery. Hence, the required quality of the phase plot must be a contributing factor in the choice of the cutoff frequency.

The sixth and last filter considered is related to the spatial cutoff frequency of the object bandwidth B, which was discussed in Eq. (5). According to Goodman [79], the maximum separable bandwidth B equals 1/3 of the off-axis angle's spatial frequency. Figures 19(k) and 19(l), respectively, show the spectrum and reconstruction in this case for fw = 0.33 cycles/mm. This is a worst-case scenario, as Goodman clarifies: if the signal strength of the reference beam is sufficiently large, this requirement can be relaxed, as suggested by Fig. 18, up to the opposite extreme where fw is chosen to be that of filter 1. Therefore, while filter 1 with fw = sin θ/λ can be regarded as the maximum filter radius [Eqs. (6) and (24)], filter 6 with fw = sin θ/(3λ) can be regarded as the minimum filter radius [Eq. (5)].

Using this analysis, it is then up to the researcher to find an optimum filter between these extremes depending on the application. An important lesson of this analysis is that the minimum off-axis angles in elevation and azimuth should be chosen so that their associated spatial frequencies match the cutoff frequency of the hologram aperture. This condition was almost satisfied here, and we elected to remove the hologram aperture effects completely, at the cost of reducing the cutoff frequency from fw = 0.59 to fw = 0.51 cycles/mm.

6. Applications and Outlook

Now that the basics of phase and amplitude reconstructions have been outlined for TDH, we conclude with some examples of how they may be applied for quantitative analytical imaging of representative targets. We concentrate on two illustrative applications of TDH: imaging the internal void structure of visibly opaque dielectrics and imaging a thick terahertz lens using the dual-wavelength reconstruction method.

6.1. Voids within Visibly Opaque Dielectrics

Among the many applications of TDH, an area of particular interest is the detection and spatial mapping of inclusions embedded within visually opaque hosts. We consider the application of TDH for the NDT&E of two visibly opaque dielectric objects containing air and polymer inclusions to simulate unwanted voids, cracks, or delamination that could threaten the integrity of a structure and lead to failure if not detected.

First consider a sample containing nine air voids of differing depths, arranged in a 3×3 array [44]. A Stratasys Eden 350 three-dimensional printer fabricated the visually opaque sample containing various types of voids, as shown in Fig. 20(a). The samples were made from Objet VeroBlackPlus RGD875, a black printing material, while the void regions were empty air pockets. Table 3 summarizes the optical properties of the two three-dimensional printed dielectric materials, measured at 0.495 and 0.710 THz by time-domain terahertz spectroscopy [44].

Figure 20. (a) Photograph, (b) hologram, and (c) phase reconstruction of the 3×3 void sample at 0.480 THz.

Table 3. Optical Properties of Three-Dimensional Printed Materials

Following the procedure outlined in Section 5, the measured hologram at 0.480 THz (0.625 mm wavelength) is plotted in Fig. 20, as well as the associated phase reconstruction. The photograph provides no evidence of the voids within the visually opaque sample, and since the sample was made to have a uniform thickness, tiles with thinner voids are covered with more black material. The presence of voids becomes evident as OPDs, visualized as brightness variations between tiles and the surrounding borders. Gray-colored voids mean that the path difference between the tiles and the borders is close to 0. Bright voids mean that the path difference is between 0 and π, and dark voids mean that the path difference is between −π and 0. One should note that the phase reconstruction does not explicitly prove the presence of air voids, only that there is a difference in effective refractive index or material properties between neighboring areas.

Considering just the void in row 1, column 2 [the top-center void in Fig. 20(c)], one can compare OPDs and calculate real thickness variations with subwavelength resolution [Fig. 21(a)]. When these thickness variations are presented as a contour plot in Fig. 21(b), where successive colored contours represent thickness steps every 15 μm, striations and imperfections produced during fabrication are revealed at a level consistent with the resolution limitations of the three-dimensional printer for the materials used. Such a contour plot, exploiting the high phase resolution of TDH to provide high depth resolution (<15μm for λ=625μm), powerfully illustrates its potential for high-resolution NDT&E.

Figure 21. (a) Magnified phase reconstruction of a single void (row 1, column 2 of Fig. 20), and (b) topographic plot (contours every 15 μm) illustrating that the void depth variations caused by three-dimensional printing are less than 50 μm.

A sample containing 400 air-filled square voids, each 3mm×3mm and sequentially increased in depth from 15 μm to 6000 μm in 15 μm steps, has also been investigated [44]. As with the 3×3 sample in Fig. 20, these 400 voids were capped by dielectric surfaces, so each void produced Fabry–Perot cavity effects. We investigated the imager’s ability to measure air voids of different thicknesses at 0.495 and 0.710 THz. The image size was 200×200mm2, and with a pixel step size of 0.1 mm, one hologram recording required about 24 h to complete. Of course, the acquisition speed would reduce to seconds or less with the use of focal plane array technology, which will become the preferred approach when their sensitivity approaches that of state-of-the-art single-pixel Schottky diode detectors.

The phase image in Fig. 22 shows that voids with a depth of 45 μm (third cell in the first row) could be distinguished clearly, while the two thinnest voids of 15 and 30 μm, which challenged the printer accuracy of ±14 μm but not the 2.2 μm depth resolution of our imager, could be seen only partially. These observations are consistent with the micrometer-scale depth resolution observed in Section 4 using the phase wedge prisms. Such a depth resolution is superior to what is achievable with state-of-the-art time-domain techniques [97].

Figure 22. Magnified phase image of first three cells for 0.495 THz phase reconstruction, for which the depth of the void is labeled.

A third sample, shown in Fig. 23, included nine 1 mm thick void structures capped above and below by 1 mm of RGD875 for a total object thickness of 3 mm. The voids were filled not with air but with an absorbing dielectric called SUP705 whose real refractive indices are similar to the RGD875 dielectric but whose absorption coefficient is more than 3 times larger (Table 3). The composition of these nine void structures was varied in order to explore a variety of possible orientations and sizes of cracks and voids.

Figure 23. Photograph of sample A and drawing of its internal structure [44].

The hologram acquisition and reconstruction processes were simulated using the angular spectrum method to provide intuition and guide interpretation of the measurements. The simulation included all physical dimensions and refractive indices of the samples, the propagation of the object and reference beams, and the estimated image resolution of the holographic imager according to Eq. (33), which is 1.01 cycles/mm for 0.495 THz, and 1.45 cycles/mm for 0.710 THz. The sample A transfer function t(x,y) may be formulated as

tm(x,y) = exp(−αm·Lm) · exp(i·(2π/λ)·(nm − 1)·Lm),
or
tm+s(x,y) = exp(−[(2/3)·αm·Lm + αs·Ls]) · exp(i·(2π/λ)·[(2/3)·(nm − 1)·Lm + (ns − 1)·Ls]),
where Lm = 3 mm and Ls = 1 mm are the thicknesses of the entire sample and the embedded support material structures, respectively, and α and n are given in Table 3. The choice of t(x,y) depends on whether the material column at (x,y) includes support material, represented by tm+s(x,y), or consists only of model material, represented by tm(x,y). Sample A was simulated as a dielectric resolution chart with transmission and absorption characteristics. Reflection at each model/support material interface was omitted in this approximation for two reasons: the tiny difference of about 0.05 in the real refractive indices of the model and support materials, and the strong absorption by the support material, which produces a negligible void finesse of less than 10⁻⁴. Figure 24 compares the simulations and measurements at 0.495 and 0.710 THz.
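
A sketch of how these transmission functions might be evaluated pixel by pixel is given below; the material constants are placeholders standing in for the Table 3 values, not the actual measured properties.

```python
import numpy as np

def transmission(L_m, L_s, alpha_m, n_m, alpha_s, n_s, wavelength):
    """Complex transmission of one column of sample A: attenuation exp(-alpha*L) and
    phase delay exp(i*2*pi*(n - 1)*L/lambda).  Where 1 mm of support material is
    embedded, only 2/3 of the total thickness L_m is model material."""
    t_model_only = (np.exp(-alpha_m * L_m)
                    * np.exp(1j * 2 * np.pi * (n_m - 1) * L_m / wavelength))
    t_with_support = (np.exp(-(2 / 3 * alpha_m * L_m + alpha_s * L_s))
                      * np.exp(1j * 2 * np.pi / wavelength
                               * (2 / 3 * (n_m - 1) * L_m + (n_s - 1) * L_s)))
    return t_model_only, t_with_support

# Illustrative call at 0.495 THz (wavelength ~0.606 mm); alpha in 1/mm, L in mm,
# with placeholder material constants.
t_m, t_ms = transmission(L_m=3.0, L_s=1.0, alpha_m=0.5, n_m=1.65,
                         alpha_s=1.6, n_s=1.60, wavelength=0.606)
```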

Figure 24. (a) Simulated (left) and measured (right) holograms and amplitude reconstructions of sample A at 0.495 THz. (b) Simulated (left) and measured (right) holograms and amplitude reconstructions of sample A at 0.710 THz [44].

As expected, the reconstructed images at 0.710 THz are better resolved than those at 0.495 THz. Furthermore, the strong difference in their respective absorption coefficients makes the support material show up at a high contrast to the surrounding black bulk material in both simulation and analysis. A quantitative representation of the lateral resolution was carried out on the structure with the random channels of varying thicknesses, shown magnified in Fig. 25. Both sections were analyzed by calculating their two-dimensional Fourier transforms. The cutoff frequencies of the measurements were 0.7cycles/mm and 1.0cycles/mm at 0.495 and 0.710 THz, respectively, which is slightly lower than predicted by Eq. (33). However, this was to be expected because the spatial filter of the measurement was again chosen according to the off-axis angle to eliminate the strong line of spatial frequencies associated with the hologram aperture. The 0.495 THz image shows greater contrast because of the greater source power and lower receiver noise.

Figure 25. Magnified view of the structure corresponding to column 1, row 2 of sample A, reconstructed at 0.495 and 0.710 THz.

6.2. Dual-Wavelength Reconstruction

Most of the voids considered in the previous section were less than a wavelength thick, so no phase ambiguity was introduced in the reconstruction. For thicker materials, phase wrapping can produce phase ambiguity that can significantly complicate reconstructions, especially if there are abrupt changes in path length. Specifically, for an optical path step larger than λ, or a physical step larger than λ/(n − 1), the associated phase wraps beyond 2π and produces an ambiguity. Depending on the structure, statistical computer algorithms can sometimes unwrap the phase successfully. However, certain objects are more difficult to reconstruct using one hologram, such as a high-aspect-ratio object whose abrupt steps produce actual phase changes exceeding 2π. This is exacerbated by the fact that the measurement cannot indicate whether the phase step corresponds to an increase or decrease in optical path length. These limitations may be overcome by the dual-wavelength reconstruction method [84]. By recording two holograms of the same object at slightly different wavelengths, this method may be used to extend the unambiguous phase reconstruction range. Dual-wavelength reconstruction is highly applicable to TDH because the wavelength of the source may be changed easily [98].

For two holograms of the same object recorded at wavelengths λa and λb, the optical path difference ΔOPDj+1 between two adjacent pixels j and j+1 of each hologram can be written as

ΔOPDj+1 = OPDj+1 − OPDj.
In order to simplify the discussion here without loss of generality, we assume a refractive index n=1 for each wavelength and a one-dimensional reconstruction. We then obtain the reconstructed phase maps as
ΔOPDj+1 = [φ(j+1)a/(2π) + m]·λa − [φja/(2π)]·λa,
and
ΔOPDj+1 = [φ(j+1)b/(2π) + p]·λb − [φjb/(2π)]·λb,
where φji are the reconstructed phase values, and m and p are integers. If m, p ≥ 1, a phase ambiguity is present between adjacent pixels. Note that there are two equations but three unknowns: ΔOPDj+1, m, and p. This was to be expected, as two holograms will only extend the range of the 2π phase ambiguity, not eliminate it. This constraint is acknowledged by setting m = p, then combining Eqs. (40) and (41) to solve for m by
m = [(φ(j+1)a − φja)·λa − (φ(j+1)b − φjb)·λb] / [2π(λb − λa)].
Then by substituting Eq. (42) into Eq. (40), we can now solve for the OPD [98],
ΔOPDj+1 = (1/(2π)) · [(φ(j+1)a − φja) − (φ(j+1)b − φjb)] · λa·λb/(λa − λb).
The wavelength ratio at the end of Eq. (43) is commonly called the beat wavelength,
λbeat = (λa·λb)/|λb − λa|,
and it represents a new synthetic wavelength that determines the new 2π phase-ambiguity range. This new unambiguous phase range, which is effectively determined by the wavelength difference, is limited by the phase noise of the individual phase profiles [99]. The dual-wavelength process amplifies the phase noise by a factor proportional to the beat wavelength, thus introducing noise-induced phase steps if λa and λb are chosen too close to each other.

To illustrate dual-wavelength reconstruction, Fig. 26 shows the reconstructed phase maps of a Teflon lens at 0.485 and 0.500 THz, where false coloring of the cross-section plots clearly indicates the 2π phase steps and the phase ranging between −π and π.

Figure 26. (a) Photograph of lens and (b) phase reconstructions of holograms that were acquired at 0.485 and 0.500 THz, for which the (c) cross-section plots cross the center of the phase maps.

The first step in the dual-wavelength reconstruction process is to subtract the phase reconstruction of λa from the reconstruction of λb, as shown in Fig. 27. This generates a new phase map with a range between −2π and 2π. Next, every pixel that has a phase value of less than zero is increased by 2π. This produces a new phase map with an unambiguous 2π range that corresponds to λbeat (Fig. 28). Although the phase reconstructions at 0.485 THz and 0.500 THz are unambiguous over an optical path length of 0.619 mm and 0.6 mm, respectively, applying the dual-wavelength reconstruction process extends this length to 19.5 mm.
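
The entire procedure reduces to a few array operations; the sketch below follows the steps just described and computes the beat wavelength for the 0.485/0.500 THz pair (the exact value depends on the precise tuning, and the text quotes 19.5 mm).

```python
import numpy as np

def dual_wavelength_unwrap(phi_a, phi_b, wavelength_a, wavelength_b):
    """Dual-wavelength reconstruction: subtract the two wrapped phase maps, add 2*pi
    wherever the difference is negative, and scale by the beat wavelength
    lambda_beat = lambda_a*lambda_b/|lambda_b - lambda_a| to obtain an OPD map that is
    unambiguous over one beat wavelength."""
    beat = wavelength_a * wavelength_b / abs(wavelength_b - wavelength_a)
    diff = phi_a - phi_b                     # ranges between -2*pi and 2*pi
    diff[diff < 0] += 2 * np.pi              # now a single unambiguous 0..2*pi range
    return diff * beat / (2 * np.pi), beat   # OPD map in the units of the wavelengths

# 0.485 and 0.500 THz: wavelengths ~0.618 mm and ~0.600 mm, beat wavelength ~20 mm,
# close to the 19.5 mm quoted above.
lam_a, lam_b = 299.792 / 485.0, 299.792 / 500.0
opd, lam_beat = dual_wavelength_unwrap(np.zeros((4, 4)), np.zeros((4, 4)), lam_a, lam_b)
```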

Figure 27. Phase map of lens and corresponding cross section after one phase map was subtracted from the other.

Figure 28. New phase map and center cross section of lens reconstruction, free from phase ambiguities.

The dual-wavelength reconstruction method thereby offers a powerful way to reconstruct phase holograms that exceed the unambiguous range at one wavelength, removing any ambiguity up to the aforementioned synthetic wavelength. To explore how the dual-wavelength method may be used for quantitative analysis, we repeat the above experiment [26] and unwrap the image of the 52 mm diameter plano–convex polymethylpentene (PMP) lens shown in Fig. 29. From this, we will attempt to recover both its refractive index, which is reported to be 1.43 at 0.3 THz [95], and the variation of the lens thickness from the edge to the center, which was measured by caliper to be hc = 2.00 ± 0.01 mm.

Figure 29. (a) Plano–convex PMP lens used as phase object for the dual-wavelength reconstruction process. (b) Holograms of the lens recorded at 0.680 and 0.725 THz [26].

Figure 29 also plots the recorded holograms at 0.680 THz and 0.725 THz, corresponding to wavelengths of λ1 = 0.4409 mm and λ2 = 0.4135 mm, respectively. The field of view is 80 × 80 mm² with 400 × 400 pixels and a pixel spacing of 0.2 mm. The strong signal in the center of the hologram indicated that the beam was converging from the lens to the hologram plane, and we took care to ensure that the object–hologram distance was shorter than the focal length of the lens so that the phase information of the lens would be captured by many detector pixels and a reliable phase reconstruction could be made. Note that the interference fringes are clearly resolved here because the off-axis angle of this experimental setup was only 12°, much smaller than the 47° angle used in the holography experiments described in the previous sections.

Each hologram was then processed using the angular spectrum method at a reconstruction distance of 280 mm to obtain the amplitude and phase reconstructions shown in Fig. 30. The mount can be clearly seen on the right side of the image. The amplitude reconstruction shows the expected diffraction pattern near the edge and an unexpected X-shaped feature, likely stress birefringence caused either by the curing of the PMP or by the pressure from the mounts. The phase reconstruction clearly shows at least three circular phase steps spanning the range from −π to π, as one would expect for a hemispherical lens, from the center of the lens to the edge.

Figure 30. Amplitude reconstructions of PMP lens at (a) 0.680 and 0.725 THz. Phase reconstructions of lens at (b) 0.680 and 0.725 THz [26].

Following the dual-wavelength reconstruction procedure outlined above, the two phase profiles are subtracted so that the values of the new phase map range between −2π and 2π. Adding 2π wherever the phase map is less than zero generates a new phase map that is free of 2π phase discontinuities over the range of the beat wavelength λbeat = 6.65 mm. The remaining step is to convert the phase map into an optical thickness profile, achieved by multiplying the phase map by the beat wavelength and dividing it by 2π. The resultant unwrapped phase profile is plotted in Fig. 31, including a side view of the unwrapped phase map with the vertical scale in radians. The OPD between the edge and the center of the lens corresponds to a phase difference of 0.89 ± 0.10 radians. Given the measured thickness variation of the lens, a refractive index of n = 1.46 was ascertained from the OPD at 0.700 THz.
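
As a consistency check, the quoted phase difference, beat wavelength, and caliper-measured thickness variation reproduce the deduced index to within the stated uncertainty; the short calculation below is illustrative only.

```python
import numpy as np

# Refractive index from the unwrapped dual-wavelength phase map of the PMP lens:
# OPD = delta_phi * lambda_beat / (2*pi) and n = 1 + OPD / h, with h the caliper-
# measured center-to-edge thickness variation.  Values are those quoted in the text.
delta_phi = 0.89            # rad, edge-to-center phase difference
lam_beat = 6.65             # mm
h = 2.00                    # mm
opd = delta_phi * lam_beat / (2 * np.pi)      # ~0.94 mm
n = 1 + opd / h             # ~1.47, consistent with n = 1.46 within the +/-0.10 rad uncertainty
```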

Figure 31. Unwrapped phase map of PMP lens from three perspectives: (a) above, (b) cross section in radians, and (c) pseudo-three-dimensional [26].

The deduced refractive index of 1.46 is in good agreement with values reported in the literature [100], especially given the possibility of curing inhomogeneities and stress birefringence evident in the amplitude reconstruction. The cross-sectional plot indicates the technique provided a subwavelength thickness resolution of 10 μm (λ/40 at 0.725 THz). The lens’s edge thickness, measured by caliper to be 10.60 mm, is between λbeat and 2λbeat. To confirm the deduced refractive index, the lens edge thickness was calculated from the measurements shown in Fig. 31(b). Using the deduced refractive index, the lens thickness at the edge is estimated to be 10.20 ± 1.00 mm, in excellent agreement with the measured value, especially considering the previously mentioned refractive index variations.

These demonstrations clearly indicate the potential of TDH for NDT&E, especially for objects with micrometer-to-millimeter-scale thicknesses that may be opaque in the visible region but transparent to terahertz radiation. For example, amplitude reconstructions may reveal wiring defects on multilayer circuit boards, whereas phase reconstructions can reveal inhomogeneities in dielectrics. As previously mentioned, strides to develop terahertz focal plane arrays would further expand possible applications to include one-shot or short-duration events such as vibration analysis and stress-induced birefringence.

6.3. Outlook

In this tutorial, we have detailed how to perform DH in the terahertz spectral region and discussed the achievable spatial resolution of an imager. The fundamentals presented here, soundly grounded in decades of work in visible wavelength holography, DH, and SAR, are adapted to the unique challenges of the terahertz spectral region, including the greater role of diffraction, the opportunity for a simplified off-axis interferometer, and the incredible sensitivity and spatial resolution afforded by superheterodyne receivers.

Two additional techniques in the field of DH deserve special attention for their potential application in the terahertz region: PSDH and diffraction tomographic holography. As described in Section 2.2, PSDH used with in-line interferometers can separate the autocorrelation term from the real and virtual images [101]. Several holograms must be recorded, each with a phase-shifted reference wave. For a four-hologram reconstruction, reference wave phase shifts of π/2, π, and 3π/2 are easily accomplished at terahertz frequencies with a high accuracy translation stage. Although this technique can also be applied to off-axis holography, the in-line technique allows the step size to be increased and the acquisition time to be decreased significantly.
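
For reference, the standard four-step combination that recovers the complex object field from such phase-shifted in-line holograms is sketched below; this is the textbook formula rather than a description of a specific terahertz implementation.

```python
import numpy as np

def psdh_four_step(I0, I90, I180, I270, reference):
    """Combine four in-line holograms recorded with reference phase shifts of 0, pi/2,
    pi, and 3*pi/2 to recover the complex object field at the hologram plane:
    O * conj(R) = [(I0 - I180) + 1j*(I90 - I270)] / 4, then divide by conj(R)."""
    o_rconj = ((I0 - I180) + 1j * (I90 - I270)) / 4.0
    return o_rconj / np.conj(reference)

# Synthetic check with a known object and a unit-amplitude plane-wave reference.
obj = 0.5 * np.exp(1j * 0.3) * np.ones((8, 8))
ref = np.ones((8, 8), dtype=complex)
I = [np.abs(obj + ref * np.exp(1j * d))**2 for d in (0, np.pi / 2, np.pi, 3 * np.pi / 2)]
recovered = psdh_four_step(*I, reference=ref)     # ~equal to obj
```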

Diffraction tomographic holography reconstructs a three-dimensional image by acquiring multiple holograms as the object is rotated through the beam, much like in inverse SAR [102]. Unlike optical coherence tomography, which often assumes nondiffracting pencil beams, the Fourier diffraction theorem must be used to account for the diffracting terahertz beams [35]. Once a sufficient number of holographic slices has been recorded and transformed to the Ewald sphere, performing an inverse Fourier transform with field backpropagation will yield the refractive index distribution of a geometrically well-defined object function. Alternatively, if the object has a homogeneous refractive index distribution, abnormal geometric structures or imperfections can be found, even within visibly opaque structures.

Indeed, it is well known that the ability of terahertz radiation to penetrate most dielectrics presents an important opportunity for nondestructive imaging of composite materials using nonionizing radiation to locate embedded features, identify imperfections, and confirm the quality of joints, seams, and laminates. Now, by combining the benefits of commercially available sources of highly coherent radiation with the maturation of focal plane array detectors in the terahertz regime, TDH is poised to become a practical, potentially revolutionary alternative to traditional terahertz imaging techniques.

Appendix A: The Transfer Function

Holography in general can be explained by the recording of a collection of point objects, whose spherical diffraction patterns interfere with a reference plane wave. In the case of an in-line (Gabor-type) hologram, the interference pattern then resembles that of a collection of Fresnel zone rings [35]. In the case of an off-axis hologram, these Fresnel zone rings are shifted in spatial frequency according to the off-axis angle of the reference wave. If the detector’s spatial frequency bandwidth is large enough to record the frequency shift of the reference wave and the spatial frequencies of the object, then the quality of the reconstruction remains a function of one’s ability to signal-process the hologram. Theoretically, the recorded object diffraction patterns for in-line or off-axis holography—assuming perfect signal processing using spatial filtering, phase shifting, or other techniques to eliminate the twin-image or zero-order light—are identical. Hence, the following discussion is general to both in-line and off-axis holography.

Figure 12(b) is identical to Fig. 12(a) with the exception that we now consider a larger object and the two radii shown, which correspond to the rings of a Fresnel zone plate originating from the object center and edge. We can use the corresponding maximum recorded diffraction angles α1 and α2 and their respective radii r1 and r2 to find the spatial frequency. From Fig. 12(b) and Eq. (33), the diffraction angle, radius, and spatial frequency are related by

f = r · cos α / (d · λ).
Figure 12(b) suggests that a diffracting wave that originated from a point towards the edge of a large object is recorded at a higher spatial frequency than a diffracting wave that originated from the center of the object. Furthermore, the recorded high-frequency diffraction ring represented by r2 in Fig. 12(b) is captured only partially (one half of the full diffraction ring in this case), while the diffraction ring represented by r1 is captured fully. The partial recording of a diffraction ring means that some of the signal associated with f2 is lost, which should be represented by a reduced amplitude of |H(f2)|. The total amount of signal collected at a given spatial frequency can then be defined as the fractional circumference R(x,y,r) of a diffraction ring with radius r, which originated from a point object at the object coordinate (x,y) and is contained within the boundaries of the hologram plane,
R(x,y,r) = (arc length within the boundaries of the hologram plane) / (2πr).
The goal then is to find the intercepts between a circle of radius r and a hologram plane with dimensions 2a × 2a. The object size here was chosen to be 2a as well, but the algorithm does not require that. The algorithm considers 11 cases corresponding to various intersections between the square hologram aperture and a diffraction ring that originated from a point object (x,y) somewhere in the object plane. These 11 cases are detailed below.

A.1. Transfer Function for r ≤ a

Figures 32(a)–32(d) show the first four cases for r ≤ a. Recall that the object and hologram aperture sizes are 2a×2a, as shown by the square apertures, and r is associated with the size of a diffraction ring (red circles) in the hologram plane. The location of the scatterer in the object plane is described by the distances x, y, 2a−x, and 2a−y to the four sides of the aperture.

Figure 32. Diffraction ring–hologram plane intersections for r ≤ a for cases (1)–(4).

The diffraction ring in red increases in radius from Fig. 32(a) to Fig. 32(d) and evolves from being fully enclosed by the hologram aperture to being partially clipped. The angles θ1 and θ2, measured in radians, describe the intercepts between the diffraction ring and the hologram aperture and are useful variables for calculating the arc of the diffraction ring contained in the hologram. Cases (1)–(4), corresponding to Figs. 32(a)–32(d), are presented below in mathematical notation.

First, let us relabel the distances x, y, 2a−x, 2a−y as A, B, C, D such that A = min[x, y, 2a−x, 2a−y], D = max[x, y, 2a−x, 2a−y], and A < B < C < D. In other words, we arrange x, y, 2a−x, 2a−y from smallest to largest. The four cases are then described as

$$\text{Case (1): } r \le A, \quad \text{then } R(x,y,r) = 1,$$
$$\text{Case (2): } r > A,\; r \le B,\; \theta_1 = \cos^{-1}\frac{A}{r}, \quad \text{then } R(x,y,r) = \frac{2\pi r - 2\theta_1 r}{2\pi r},$$
$$\text{Case (3): } r > B,\; r \le \sqrt{A^2+B^2},\; r \le C,\; \theta_1 = \cos^{-1}\frac{A}{r},\; \theta_2 = \cos^{-1}\frac{B}{r}, \quad \text{then } R(x,y,r) = \frac{2\pi r - 2(\theta_1+\theta_2)\,r}{2\pi r},$$
$$\text{Case (4): } r > \sqrt{A^2+B^2},\; r \le C,\; \theta_1 = \cos^{-1}\frac{A}{r},\; \theta_2 = \cos^{-1}\frac{B}{r}, \quad \text{then } R(x,y,r) = 1 - \frac{\left(\tfrac{\pi}{2}+\theta_1+\theta_2\right) r}{2\pi r}.$$
Let us take a closer look at Cases (1)–(4). Case (1) includes all spatial frequencies whose diffraction rings are completely captured by the hologram plane, as shown in Fig. 32(a). Case (2) covers a diffraction ring that crosses one side of the hologram aperture [Fig. 32(b)]. Case (3) takes two side crossings into account but excludes diffraction rings that pass around a corner of the aperture [Fig. 32(c)]. Finally, Case (4) represents a scenario in which the ring encloses one corner of the hologram aperture [Fig. 32(d)]. As previously noted, r ≤ a for all of these cases. Also note that r = a corresponds to Eq. (33).

Figure 33. Diffraction ring–hologram plane intersections for a < r < 2a and r ≤ D for cases (5)–(7).

A.2. Transfer Function for a < r ≤ 2a

The following three cases cover diffraction ring radii with a < r ≤ 2a and r ≤ D. Again, these cases are described mathematically below. Figure 33(a) includes radii that cross the aperture at three sides [Case (5)]. Figure 33(b) represents one side crossing and one corner inclusion [Case (6)]. Figure 33(c) shows two corner inclusions but no single side crossing [Case (7)],

$$\text{Case (5): } r > C,\; r \le \sqrt{A^2+B^2},\; r \le \sqrt{A^2+C^2},\; r \le D,\; \theta_1 = \cos^{-1}\frac{A}{r},\; \theta_2 = \cos^{-1}\frac{B}{r},\; \theta_3 = \cos^{-1}\frac{C}{r}, \quad \text{then } R(x,y,r) = \frac{2\pi r - 2(\theta_1+\theta_2+\theta_3)\,r}{2\pi r},$$
$$\text{Case (6): } r > C,\; r > \sqrt{A^2+B^2},\; r \le \sqrt{A^2+C^2},\; r \le D,\; \theta_1 = \cos^{-1}\frac{A}{r},\; \theta_2 = \cos^{-1}\frac{B}{r},\; \theta_3 = \cos^{-1}\frac{C}{r}, \quad \text{then } R(x,y,r) = \frac{2\pi r - \left(\tfrac{\pi}{2}+\theta_1+\theta_2+2\theta_3\right) r}{2\pi r},$$
$$\text{Case (7): } r > \sqrt{A^2+B^2},\; r > \sqrt{A^2+C^2},\; r \le D,\; \theta_2 = \cos^{-1}\frac{B}{r},\; \theta_3 = \cos^{-1}\frac{C}{r}, \quad \text{then } R(x,y,r) = \frac{2\pi r - (\pi+\theta_2+\theta_3)\,r}{2\pi r}.$$

A.3. Transfer Function for 2a < r ≤ √8·a

The remaining four cases cover radii with D < r ≤ √8·a; r = √8·a is the limiting case for which no diffraction ring intersects the hologram plane, and therefore R(x,y,r) = 0. These four cases are shown in Fig. 34, followed by their mathematical representations:

$$\text{Case (8): } r > D,\; r \le \sqrt{A^2+B^2},\; r \le \sqrt{A^2+C^2},\; r \le \sqrt{B^2+D^2},\; r \le \sqrt{C^2+D^2},\; \theta_1 = \cos^{-1}\frac{A}{r},\; \theta_2 = \cos^{-1}\frac{B}{r},\; \theta_3 = \cos^{-1}\frac{C}{r},\; \theta_4 = \cos^{-1}\frac{D}{r}, \quad \text{then } R(x,y,r) = \frac{2\pi r - 2(\theta_1+\theta_2+\theta_3+\theta_4)\,r}{2\pi r},$$
$$\text{Case (9): } r > D,\; r > \sqrt{A^2+B^2},\; r \le \sqrt{A^2+C^2},\; r \le \sqrt{B^2+D^2},\; r \le \sqrt{C^2+D^2},\; \theta_1 = \cos^{-1}\frac{A}{r},\; \theta_2 = \cos^{-1}\frac{B}{r},\; \theta_3 = \cos^{-1}\frac{C}{r},\; \theta_4 = \cos^{-1}\frac{D}{r}, \quad \text{then } R(x,y,r) = \frac{2\pi r - \left(\tfrac{\pi}{2}+\theta_1+\theta_2+2\theta_3+2\theta_4\right) r}{2\pi r},$$
$$\text{Case (10): } r > D,\; r > \sqrt{A^2+B^2},\; r > \sqrt{A^2+C^2},\; r \le \sqrt{B^2+D^2},\; r \le \sqrt{C^2+D^2},\; \theta_2 = \cos^{-1}\frac{B}{r},\; \theta_3 = \cos^{-1}\frac{C}{r},\; \theta_4 = \cos^{-1}\frac{D}{r}, \quad \text{then } R(x,y,r) = \frac{2\pi r - (\pi+\theta_2+\theta_3+2\theta_4)\,r}{2\pi r},$$
$$\text{Case (11): } r > D,\; r > \sqrt{A^2+B^2},\; r > \sqrt{A^2+C^2},\; r > \sqrt{B^2+D^2},\; r \le \sqrt{C^2+D^2},\; \theta_3 = \cos^{-1}\frac{C}{r},\; \theta_4 = \cos^{-1}\frac{D}{r}, \quad \text{then } R(x,y,r) = \frac{2\pi r - \left(\tfrac{3\pi}{2}+\theta_3+\theta_4\right) r}{2\pi r}.$$

Figure 34. Diffraction ring–hologram plane intersections for r ≤ √8·a for cases (8)–(11).
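Taken together, the 11 cases define R(x,y,r) for any object point and any ring radius up to √8·a. Purely as an illustration, the following Python sketch evaluates R(x,y,r) numerically. Rather than enumerating the 11 branches explicitly, it takes an equivalent route: each aperture side that the ring crosses excludes one arc, and the merged angular measure of those arcs gives the clipped fraction. The function name, the coordinate convention (object point measured from one corner of the 2a × 2a aperture), and the 75 mm half-aperture used in the example are assumptions for illustration only.

```python
import numpy as np

def fractional_circumference(x, y, r, a):
    """Fraction R(x, y, r) of a diffraction ring of radius r, centered on the
    object point (x, y), that lies inside a 2a x 2a hologram aperture spanning
    [0, 2a] x [0, 2a].

    Equivalent to the 11 closed-form cases of Appendix A: each aperture side
    at distance d_i < r clips an arc of half-angle arccos(d_i / r) centered on
    that side's outward normal, and R is one minus the merged (union) angular
    measure of those arcs divided by 2*pi.
    """
    two_pi = 2.0 * np.pi
    # (distance to side, direction of that side's outward normal in radians)
    sides = [(x, np.pi),                  # left side
             (2.0 * a - x, 0.0),          # right side
             (y, -np.pi / 2.0),           # bottom side
             (2.0 * a - y, np.pi / 2.0)]  # top side

    # Excluded angular intervals on the ring, one per side the ring crosses.
    intervals = []
    for d, normal in sides:
        if r > d:
            half = np.arccos(np.clip(d / r, -1.0, 1.0))
            intervals.append(((normal - half) % two_pi, (normal + half) % two_pi))

    if not intervals:        # Case (1): ring fully enclosed by the aperture
        return 1.0

    # Split wrap-around intervals, then merge overlaps and sum the excluded arc.
    pieces = []
    for lo, hi in intervals:
        if hi < lo:
            pieces += [(lo, two_pi), (0.0, hi)]
        else:
            pieces.append((lo, hi))
    pieces.sort()
    excluded = 0.0
    cur_lo, cur_hi = pieces[0]
    for lo, hi in pieces[1:]:
        if lo > cur_hi:
            excluded += cur_hi - cur_lo
            cur_lo, cur_hi = lo, hi
        else:
            cur_hi = max(cur_hi, hi)
    excluded += cur_hi - cur_lo

    return max(0.0, 1.0 - excluded / two_pi)


# Example with an assumed half-aperture a = 75 mm: a ring of radius a centered
# on the middle of the aperture is fully captured [Case (1)], while a ring of
# radius 1.2a launched from the same point is clipped by all four sides.
a = 75.0
print(fractional_circumference(a, a, a, a))        # 1.0
print(fractional_circumference(a, a, 1.2 * a, a))  # about 0.25
```

For an object point at the aperture center, the sketch returns R = 1 for r ≤ a [Case (1)] and R ≈ 0.25 for r = 1.2a, in agreement with Case (8) evaluated with A = B = C = D = a.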

Appendix B: The Transfer Function as a Function of Object Coordinate

As previously mentioned, R(x,y,r) describes the amount of signal that is collected for a spatial frequency related to r. This signal then translates into a new transfer function |H| that is a function of the scatterer location in the object plane,

$$|H(x,y,f_x,f_y)| = R\big(x,y,r(f_x,f_y)\big), \tag{B1}$$
while assuming a rotationally symmetric point scatterer. Equation (B1) suggests that each point (x,y) in the object plane results in an individual transfer function for the hologram plane. Equation (B1) also shows that |H(x,y,f_x,f_y)| may now be plotted as a function of object location (x,y) for specific spatial frequencies. Figure 35 shows three cases of |H(x,y,f_x,f_y)| for three different spatial frequencies. The base represents the location of the object scatterer in the object plane that generates the spatial frequency recorded in the hologram plane. The false coloring of |H| shows the relative signal strength (full signal strength = 1) at which that spatial frequency is captured in the hologram plane. Figure 35(a) shows the transfer function for small spatial frequencies; these frequencies are captured at full magnitude for essentially all object locations. An object size of 150 mm × 150 mm was assumed. As the spatial frequency increases to 0.5f_c, the magnitude of the transfer function is equal to 1 only over a reduced object area in the center of the object plane. For f_x, f_y = f_c, the frequency is still transferred for all object coordinates, but with a magnitude of 1 only for a point scatterer located in the center of the object plane. The central axis is then the location for which Eq. (33) is always satisfied.

Figure 35. Three transfer functions |H(x,y,f_x,f_y)| for the spatial frequency range 0 < f_x, f_y ≤ f_c.
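As a brute-force cross-check of maps such as those in Fig. 35, one can also estimate R(x,y,r), and hence |H(x,y,f_x,f_y)|, by sampling points uniformly along a diffraction ring and counting the fraction that lands inside the aperture. The sketch below is an assumption-laden illustration (grid size, sample count, and the 75 mm half-aperture are not taken from the experiment); for the ring radius r = a, which corresponds to the cutoff frequency f_c of Eq. (33), it reproduces unit transfer on the central axis and a drop to about 0.25 at the corners of the object plane.

```python
import numpy as np

def R_monte_carlo(x, y, r, a, n=20_000, seed=0):
    """Monte Carlo estimate of R(x, y, r): the fraction of a diffraction ring
    of radius r, centered on the object point (x, y), that falls inside the
    2a x 2a hologram aperture [0, 2a] x [0, 2a]."""
    rng = np.random.default_rng(seed)
    phi = rng.uniform(0.0, 2.0 * np.pi, n)            # samples along the ring
    px, py = x + r * np.cos(phi), y + r * np.sin(phi)
    inside = (px >= 0) & (px <= 2 * a) & (py >= 0) & (py <= 2 * a)
    return inside.mean()

# Map |H(x, y, f)| = R(x, y, r(f)) over a coarse object grid for ring radius
# r = a (the cutoff frequency f_c).
a = 75.0                                              # assumed half-aperture, mm
xs = np.linspace(0.0, 2 * a, 31)
H = np.array([[R_monte_carlo(x, y, a, a) for x in xs] for y in xs])
print(H[15, 15], H[0, 0])   # ~1.0 at the center, ~0.25 at a corner
```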

Let us now consider spatial frequencies greater than f_c. Equation (32) suggests that their transfer function should be zero. However, a point object emitting a diffracting wave from the edge of the object is clearly recorded at f > f_c, although at reduced signal strength. Figure 36 considers spatial frequencies for which f_c < f ≤ 2f_c. Specifically, Fig. 36(a) shows the transfer function for a spatial frequency slightly larger than f_c: f = 1.05f_c. As one may expect, the transfer function drops quickly near the intersection of the optical axis with the object plane, confirming the behavior described by Eq. (32) at the object center. Furthermore, the maximum magnitude of the transfer function drops significantly, to about 0.65. Increasing the spatial frequency further to f = 1.5f_c, as shown in Fig. 36(b), and then to f = 2f_c in Fig. 36(c), reveals an area for which |H(x,y,f)| = 0. This area originated at the object center for f = f_c and expands from there as the spatial frequency is increased. A spatial frequency of f = 2f_c is resolved only at the edges of the object. Furthermore, the magnitude drops to levels where the SNR may limit the ability to capture the information. Also, f = 2f_c becomes difficult to isolate from the zero-order term using spatial filtering, and the detector’s bandwidth may not be large enough to record this frequency.

Figure 36. Three transfer functions |H(x,y,f_x,f_y)| for the spatial frequency range f_c < f_x, f_y ≤ 2f_c.

For completeness, let us consider spatial frequencies even greater than f = 2f_c. Figures 37(a)–37(c) correspond to f = 2.25f_c, 2.5f_c, and √8·f_c, respectively. f = √8·f_c represents the limit for which a diffraction ring originating from anywhere on the object never intersects the hologram plane.

Figure 37. Three transfer functions |H(x,y,f_x,f_y)| for the spatial frequency range 2f_c < f_x, f_y ≤ √8·f_c.

A validation of this algorithm was performed with a simulation of six identical diffracting object waves and is described in Section 5.3 of the tutorial.
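The published validation is the one described in Section 5.3; purely to illustrate the idea, a minimal numerical sketch of the same kind of check is given below. A handful of spherical object waves are synthesized on a square hologram aperture and refocused with an angular spectrum backpropagation step. All geometry, the wavelength, and the scatterer positions are assumed values chosen for illustration, not those of the tutorial's simulation.

```python
import numpy as np

# Assumed geometry: 2a x 2a aperture, recording distance zd, wavelength lam
# (about 0.55 THz), N x N samples.
a, zd, lam, N = 75.0, 300.0, 0.545, 512         # mm and grid size
x = np.linspace(-a, a, N, endpoint=False)
X, Y = np.meshgrid(x, x)

# Point scatterers in the object plane (assumed positions, in mm).
points = [(0, 0), (40, 0), (-40, 0), (0, 40), (0, -40), (55, 55)]

# Object wave at the hologram plane: a sum of spherical waves, implicitly
# clipped by the square aperture -- the clipping is what the transfer
# function of Appendix A describes.
U = np.zeros((N, N), dtype=complex)
for x0, y0 in points:
    dist = np.sqrt((X - x0) ** 2 + (Y - y0) ** 2 + zd ** 2)
    U += np.exp(1j * 2 * np.pi * dist / lam) / dist

# Angular spectrum backpropagation by the distance zd (evanescent waves cut).
fx = np.fft.fftfreq(N, d=x[1] - x[0])
FX, FY = np.meshgrid(fx, fx)
arg = 1.0 / lam ** 2 - FX ** 2 - FY ** 2
kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
U0 = np.fft.ifft2(np.fft.fft2(U) * np.where(arg > 0, np.exp(-1j * kz * zd), 0))

# |U0| peaks at the scatterer locations; the coordinates of the brightest
# pixel are printed for a quick sanity check.
iy, ix = np.unravel_index(np.abs(U0).argmax(), U0.shape)
print(x[ix], x[iy])
```

Points placed toward the edge of the object plane should refocus to tighter spots than the central one, qualitatively reproducing the position-dependent transfer function derived above.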

Appendix C: Aberrations

As has been shown, significant improvement in transverse resolution is possible for features located toward the edge of a large object plane. In fact, a lens-less Fourier transform setup is advantageous over a setup that uses imaging optics such as lenses because, whenever high numerical apertures are involved, one must consider aberrations, especially off-axis aberrations such as coma and astigmatism.

Generally, aberrations in optics are regarded as the factors that enlarge a spot diagram beyond an ideal point. Spot diagrams do not account for diffraction; hence, a diffraction-limited optical system without aberrations is represented by a single-point spot (ray-tracing) diagram. Aberrations generally increase as the aperture size and the field angle of the imaging system increase, while the size of the diffraction-limited spot (the main lobe of the Airy pattern) decreases. As long as the aberrations produce a spot diagram that is smaller than the diffraction-limited spot size, the system is considered diffraction limited in the presence of aberrations.

In this study, no lens is used to image the object plane onto an image plane. Nevertheless, one can imagine replacing the detector plane with a lens of the same aperture as the detector plane. What, then, would be the aberrations of an imaging system that used such a lens to produce an image at unit magnification?

Let us consider coma and astigmatism, as both are off-axis aberrations that increase as the aperture size and the field angle grow [90]. Figure 38 defines the chief ray, the pupil diameter (2a), and the effective focal length (EFL) for a diffracting wave that originated from the edge of the object plane. Note that the chief ray, represented by ū, was drawn to intersect the center of the hologram plane so as to represent the field angle for which a spatial frequency of f_c according to Eq. (33) is recorded. The f-number is defined as f/# = EFL/(2y). From Geary [90], the maximum coma wavefront error w_c varies directly with ū and inversely with (f/#)², while the maximum astigmatism wavefront error w_a varies directly with ū² and inversely with f/#, where ū is the tangent of the field angle associated with the chief ray:

$$w_{c,\max} = \frac{0.667\,\bar{u}\,y}{16\,(f/\#)^2}, \tag{C1}$$
$$w_{a,\max} = \frac{\bar{u}^2\,y}{4\,(f/\#)}. \tag{C2}$$
Equations (C1) and (C2) give the maximum wavefront aberrations, in millimeters, of a plano–convex lens with an assumed refractive index of n = 1.5. The unit of dimension is set by y, which is 75 mm at its maximum. A calculation of w_c,max and w_a,max with the relevant values f/# = 1.13, ū = 0.44, and y = 75 mm results in w_c,max = 1.07 mm and w_a,max = 3.21 mm, for a total aberrated wavefront of 4.28 mm, or 7.85λ.
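The arithmetic of the preceding paragraph can be reproduced in a few lines of Python. This is only a numerical check of Eqs. (C1) and (C2) using the values quoted above; the 0.550 THz wavelength conversion is an assumption added here to express the total in waves.

```python
# Minimal numeric check of Eqs. (C1) and (C2) with the quoted inputs.
f_number, u_bar, y_mm = 1.13, 0.44, 75.0
wavelength_mm = 2.998e8 / 0.550e12 * 1e3        # ~0.545 mm at 0.550 THz

w_c_max = 0.667 * u_bar * y_mm / (16 * f_number ** 2)   # coma, Eq. (C1)
w_a_max = u_bar ** 2 * y_mm / (4 * f_number)            # astigmatism, Eq. (C2)
total_mm = w_c_max + w_a_max

print(f"w_c,max = {w_c_max:.2f} mm, w_a,max = {w_a_max:.2f} mm")
print(f"total   = {total_mm:.2f} mm = {total_mm / wavelength_mm:.2f} waves")
# Rounds to roughly 1.08 mm + 3.21 mm = 4.29 mm, i.e., close to the
# 7.85 waves quoted in the text.
```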

In order to visualize an aberration of roughly 10 waves, Fig. 39 shows a star object reconstructed from a hologram captured at 0.550 THz. Figure 39(a) shows the unaberrated reconstruction, while Fig. 39(b) shows a reconstruction with 10 waves of aberration in the form of defocus. The comparison shows significant image degradation for the aberrated reconstruction. A detailed investigation of a star target as a way of characterizing the performance of the experiment is provided in Section 5. The lens-less Fourier transform holography experiment, however, does not introduce such optical aberrations.

Figure 38. Holography setup assuming an imaging lens at the hologram plane.

Figure 39. Comparison between hologram reconstruction (a) without and (b) with aberrations.

Funding

U.S. Army Combat Capabilities Development Command Aviation & Missile Center (In-house laboratory innovative research program).

Acknowledgment

The authors would like to acknowledge the collaborative contributions by M. Kim and D. A. Gregory during the initial demonstrations of TDH, with W.-R. Ng and M. Gehm on the work with three-dimensional printed samples, and the assistance of J. G. Simmons during the final phases of this work.

References

1. E. Pickwell and V. P. Wallace, “Biomedical applications of terahertz technology,” J. Phys. D 39, R301–R310 (2006). [CrossRef]  

2. M. Tonouchi, “Cutting-edge terahertz technology,” Nat. Photonics 1, 97–105 (2007). [CrossRef]  

3. P. H. Siegel, “Terahertz technology,” IEEE Trans. Microwave Theory Tech. 50, 910–928 (2002). [CrossRef]  

4. M. C. Kemp, P. F. Taday, B. E. Cole, J. A. Cluff, A. J. Fitzgerald, and W. R. Tribe, “Security applications of terahertz technology,” Proc. SPIE 5070, 44–52 (2003). [CrossRef]  

5. H.-J. Song and T. Nagatsuma, Handbook of Terahertz Technologies: Devices and Applications (Pan Stanford, 2015).

6. E. Jacobs, R. G. Driggers, K. A. Krapels, F. C. De Lucia, and D. T. Petkie, “Terahertz imaging performance model for concealed weapon identification,” Proc. SPIE 5619, 98–107 (2004). [CrossRef]  

7. B. B. Hu and M. C. Nuss, “Imaging with terahertz waves,” Opt. Lett. 20, 1716–1718 (1995). [CrossRef]  

8. K. Kawase, Y. Ogawa, Y. Watanabe, and H. Inoue, “Non-destructive terahertz imaging of illicit drugs using spectral fingerprints,” Opt. Express 11, 2549–2554 (2003). [CrossRef]  

9. D. M. Mittleman, M. Gupta, R. Neelamani, R. G. Baraniuk, J. V. Rudd, and M. Koch, “Recent advances in terahertz imaging,” Appl. Phys. B 68, 1085–1094 (1999). [CrossRef]  

10. D. Arnone, C. Ciesla, and M. Pepper, “Terahertz imaging comes into view,” Phys. World 13, 35–40 (2000). [CrossRef]  

11. E. Ojefors, U. R. Pfeiffer, A. Lisauskas, and H. G. Roskos, “A 0.65 THz focal-plane array in a quarter-micron CMOS process technology,” IEEE J. Solid-State Circuits 44, 1968–1976 (2009). [CrossRef]  

12. A. Rogalski and F. Sizov, “Terahertz detectors and focal plane arrays,” Opto-Electron. Rev. 19, 346–404 (2011). [CrossRef]  

13. R. Al Hadi, J. Grzyb, B. Heinemann, and U. R. Pfeiffer, “A terahertz detector array in a SiGe HBT technology,” IEEE J. Solid-State Circuits 48, 2002–2010 (2013). [CrossRef]  

14. A. W. M. Lee, B. S. Williams, S. Kumar, Q. Hu, and J. L. Reno, “Real-time imaging using a 4.3-THz quantum cascade laser and a 320 × 240 microbolometer focal-plane array,” IEEE Photon. Technol. Lett. 18, 1415–1417 (2006). [CrossRef]  

15. R. W. McMillan, “Terahertz imaging, millimeter-wave RADAR BT,” in Advances in Sensing with Security Applications, J. Byrnes and G. Ostheimer, eds. (Springer, 2006), pp. 243–268.

16. A. A. Danylov, T. M. Goyette, J. Waldman, M. J. Coulombe, A. J. Gatesman, R. H. Giles, X. Qian, N. Chandrayan, S. Vangala, K. Termkoa, W. D. Goodhue, and W. E. Nixon, “Terahertz inverse synthetic aperture radar (ISAR) imaging with a quantum cascade laser transmitter,” Opt. Express 18, 16264–16272 (2010). [CrossRef]  

17. A. Semenov, H. Richter, U. Böttger, and H.-W. Hübers, “Imaging terahertz radar for security applications,” Proc. SPIE 6949, 694902 (2008). [CrossRef]  

18. K. B. Cooper, R. J. Dengler, N. Llombart, T. Bryllert, G. Chattopadhyay, E. Schlecht, J. Gill, C. Lee, A. Skalare, I. Mehdi, and P. H. Siegel, “Penetrating 3-D imaging at 4- and 25-m range using a submillimeter-wave radar,” IEEE Trans. Microwave Theory Tech. 56, 2771–2778 (2008). [CrossRef]  

19. K. Cooper and A. R. Dengler, “A high-resolution imaging radar at 580 GHz,” IEEE Microwave Wireless Compon. Lett. 18, 64–66 (2008). [CrossRef]  

20. N. Karpowicz, H. Zhong, J. Xu, K.-I. Lin, J. Hwang, and X.-C. Zhang, “Comparison between pulsed terahertz time-domain imaging and continuous wave terahertz imaging,” Semicond. Sci. Technol. 20, 293–299 (2005). [CrossRef]  

21. D. T. Petkie, C. Casto, F. C. De Lucia, S. R. Murrill, B. Redman, R. L. Espinola, C. C. Franck, E. L. Jacobs, S. T. Griffin, C. E. Halford, J. Reynolds, S. O’Brien, and D. Tofsted, “Active and passive imaging in the THz spectral region: phenomenology, dynamic range, modes, and illumination,” J. Opt. Soc. Am. B 25, 1523–1531 (2008). [CrossRef]  

22. M. A. Patrick, J. A. Holt, C. D. Joye, and F. C. De Lucia, “Elimination of speckle and target orientation requirements in millimeter-wave active imaging by modulated multimode mixing illumination,” J. Opt. Soc. Am. A 29, 2643–2656 (2012). [CrossRef]  

23. M. A. Patrick, C. D. Joye, and F. C. De Lucia, “Multimode illumination for speckle reduction and angle neutrality in millimeter wave active imaging: range and time-resolved mode averaging,” J. Opt. Soc. Am. A 31, 2135–2141 (2014). [CrossRef]  

24. E. N. Leith, “Quasi-holographic techniques in the microwave region,” Proc. IEEE 59, 1305–1318 (1971). [CrossRef]  

25. S. Ding, Q. Li, Y.-D. Li, and Q. Wang, “Continuous-wave terahertz digital holography by use of a pyroelectric array camera,” Opt. Lett. 36, 1993–1995 (2011). [CrossRef]  

26. M. S. Heimbeck, M. K. Kim, D. A. Gregory, and H. O. Everitt, “Terahertz digital holography using angular spectrum and dual wavelength reconstruction methods,” Opt. Express 19, 9192–9200 (2011). [CrossRef]  

27. L. Valzania, Y. Zhao, L. Rong, D. Wang, M. Georges, E. Hack, and P. Zolliker, “THz coherent lensless imaging,” Appl. Opt. 58, G256–G275 (2019). [CrossRef]  

28. M. Caris, S. Stanko, S. Palm, R. Sommer, A. Wahlen, and N. Pohl, “300 GHz radar for high resolution SAR and ISAR applications,” in 16th International Radar Symposium (IRS) (2015), pp. 577–580.

29. A. Mrozack, M. Heimbeck, D. L. Marks, J. Richard, H. O. Everitt, and D. J. Brady, “Adaptive millimeter-wave synthetic aperture imaging for compressive sampling of sparse scenes,” Opt. Express 22, 13515–13530 (2014). [CrossRef]  

30. J. T. Richard and H. O. Everitt, “Millimeter wave and terahertz synthetic aperture radar for locating metallic scatterers embedded in scattering media,” IEEE Trans. Terahertz Sci. Technol. 7, 732–740 (2017). [CrossRef]  

31. S. Palm, R. Sommer, M. Caris, N. Pohl, A. Tessmann, and U. Stilla, “Ultra-high resolution SAR in lower terahertz domain for applications in mobile mapping,” in German Microwave Conference (GeMiC) (2016), pp. 205–208.

32. V. Krozer, T. Loffler, J. Dall, A. Kusk, F. Eichhorn, R. K. Olsson, J. D. Buron, P. U. Jepsen, V. Zhurbenko, and T. Jensen, “Terahertz imaging systems with aperture synthesis techniques,” IEEE Trans. Microwave Theory Tech. 58, 2027–2039 (2010). [CrossRef]  

33. P. Hariharan, Optical Holography: Principles, Techniques, and Applications, 2nd ed. (Cambridge Univ. Press, 1996).

34. U. Schnars and W. Juptner, “Direct recording of holograms by a CCD target and numerical reconstruction,” Appl. Opt. 33, 179–181 (1994). [CrossRef]  

35. T.-C. Poon and J. Liu, Introduction to Modern Digital Holography, 1st ed. (Cambridge University, 2014).

36. P. Picart and J. Li, Digital Holography, 1st ed. (Wiley-ISTE, 2012).

37. X. Yu, M. Cross, C. Liu, D. C. Clark, D. T. Haynie, and M. K. Kim, “Quantitative imaging and measurement of cell-substrate surface deformation by digital holography,” J. Mod. Opt. 59, 1591–1598 (2012). [CrossRef]  

38. A. Khmaladze, M. K. Kim, and C. M. Lo, “Phase imaging of cells by simultaneous dual-wavelength reflection digital holography,” Opt. Express 16, 10900–10911 (2008). [CrossRef]  

39. S. De Nicola, P. Ferraro, S. Grilli, L. Miccio, R. Meucci, P. K. Buah-Bassuah, and F. T. Arecchi, “Infrared digital reflective-holographic 3D shape measurements,” Opt. Commun. 281, 1445–1449 (2008). [CrossRef]  

40. A. Faridian, D. Hopp, G. Pedrini, U. Eigenthaler, M. Hirscher, and W. Osten, “Nanoscale imaging using deep ultraviolet digital holographic microscopy,” Opt. Express 18, 14159–14164 (2010). [CrossRef]  

41. J. Geilhufe, B. Pfau, M. Schneider, F. Buttner, C. M. Gunther, S. Werner, S. Schaffert, E. Guehrs, S. Frommel, M. Klaui, and S. Eisebitt, “Monolithic focused reference beam x-ray holography,” Nat. Commun. 5, 3008 (2014). [CrossRef]  

42. P. Zolliker and E. Hack, “THz holography in reflection using a high resolution microbolometer array,” Opt. Express 23, 10957–10967 (2015). [CrossRef]  

43. H. Huang, L. Rong, D. Wang, W. Li, Q. Deng, B. Li, Y. Wang, Z. Zhan, X. Wang, and W. Wu, “Synthetic aperture in terahertz in-line digital holography for resolution enhancement,” Appl. Opt. 55, A43–A48 (2016). [CrossRef]  

44. M. S. Heimbeck, W. R. Ng, D. R. Golish, M. E. Gehm, and H. O. Everitt, “Terahertz digital holographic imaging of voids within visibly opaque dielectrics,” IEEE Trans. Terahertz Sci. Technol. 5, 110–116 (2015). [CrossRef]  

45. H. Huang, D. Wang, L. Rong, X. Zhou, Z. Li, and Y. Wang, “Application of autofocusing methods in continuous-wave terahertz in-line digital holography,” Opt. Commun. 346, 93–98 (2015). [CrossRef]  

46. M. S. Heimbeck, D. L. Marks, D. Brady, and H. O. Everitt, “Terahertz interferometric synthetic aperture tomography for confocal imaging systems,” Opt. Lett. 37, 1316–1318 (2012). [CrossRef]  

47. L. Valzania, P. Zolliker, and E. Hack, “Topography of hidden objects using THz digital holography with multi-beam interferences,” Opt. Express 25, 11038–11047 (2017). [CrossRef]  

48. X. Gao, C. Li, and G. Fang, “Study of image reconstruction for terahertz indirect holography with quasi-optics receiver,” J. Opt. Soc. Am. A 30, 1291–1296 (2013). [CrossRef]  

49. H. Huang, D. Wang, W. Li, L. Rong, Z. D. Taylor, Q. Deng, B. Li, Y. Wang, W. Wu, and S. Panezai, “Continuous-wave terahertz multi-plane in-line digital holography,” Opt. Laser Eng. 94, 76–81 (2017). [CrossRef]  

50. H. Huang, D. Wang, L. Rong, and Y. Wang, “Experimental imaging research on continuous-wave terahertz in-line digital holography,” Proc. SPIE 9199, 919909 (2014). [CrossRef]  

51. Q. Deng, W. Li, X. Wang, Z. Li, H. Huang, C. Shen, Z. Zhan, R. Zou, T. Jiang, and W. Wu, “High-resolution terahertz inline digital holography based on quantum cascade laser,” Opt. Eng. 56, 113102 (2017). [CrossRef]  

52. K. Xue, Q. Li, Y.-D. Li, and Q. Wang, “Continuous-wave terahertz in-line digital holography,” Opt. Lett. 37, 3228–3230 (2012). [CrossRef]  

53. M. S. Heimbeck, M. K. Kim, D. A. Gregory, and H. O. Everitt, “Terahertz digital off-axis holography for non-destructive testing,” in International Conference on Infrared, Millimeter, and Terahertz Waves (IRMMW-THz) (International Conference on Infrared, 2011), pp. 1–2.

54. D. Wang, L. Rong, X. Zhou, and Y. Wang, Phase Retrieval for Terahertz Digital Holography (Optical Society of America, 2014).

55. E. Hack and P. Zolliker, “Terahertz holography for imaging amplitude and phase objects,” Opt. Express 22, 16079–16086 (2014). [CrossRef]  

56. N. V. Petrov, M. S. Kulya, A. N. Tsypkin, V. G. Bespalov, and A. Gorodetsky, “Application of terahertz pulse time-domain holography for phase imaging,” IEEE Trans. Terahertz Sci. Technol. 6, 464–472 (2016). [CrossRef]  

57. Q. Li, K. Xue, Y. D. Li, and Q. Wang, “Experimental research on terahertz Gabor inline digital holography of concealed objects,” Appl. Opt. 51, 7052–7058 (2012). [CrossRef]  

58. M. Locatelli, M. Ravaro, S. Bartalini, L. Consolino, M. S. Vitiello, R. Cicchi, F. Pavone, and P. De Natale, “Real-time terahertz digital holography with a quantum cascade laser,” Sci. Rep. 5, 13566 (2015). [CrossRef]  

59. Q. Li, S. H. Ding, Y. D. Li, K. Xue, and Q. Wang, “Experimental research on resolution improvement in CW THz digital holography,” Appl. Phys. B 107, 103–110 (2012). [CrossRef]  

60. N. V. Petrov, A. A. Gorodetsky, and V. G. Bespalov, “Holography and phase retrieval in terahertz imaging,” Proc. SPIE 8846, 88460S (2013). [CrossRef]  

61. X. Wang, W. Xiong, W. Sun, and Y. Zhang, “Coaxial waveguide mode reconstruction and analysis with THz digital holography,” Opt. Express 20, 7706–7715 (2012). [CrossRef]  

62. M. Locatelli, E. Pugliese, M. Paturzo, V. Bianco, A. Finizio, A. Pelagotti, P. Poggi, L. Miccio, P. Meucci, and R. Ferraro, “Imaging live humans through smoke and flames using far-infrared digital holography,” Opt. Express 21, 5379–5390 (2013). [CrossRef]  

63. L. Rong, T. Latychevskaia, C. Chen, D. Wang, Z. Yu, X. Zhou, Z. Li, H. Huang, Y. Wang, and Z. Zhou, “Terahertz in-line digital holography of human hepatocellular carcinoma tissue,” Sci. Rep. 5, 8445 (2015). [CrossRef]  

64. L. Rong, T. Latychevskaia, D. Wang, X. Zhou, H. Huang, Z. Li, and Y. Wang, “Terahertz in-line digital holography of dragonfly hindwing: amplitude and phase reconstruction at enhanced resolution by extrapolation,” Opt. Express 22, 17236–17245 (2014). [CrossRef]  

65. M. P. Georges, J. F. Vandenrijt, C. Thizy, Y. Stockman, P. Queeckers, F. Dubois, and D. Doyle, “Digital holographic interferometry with CO2 lasers and diffuse illumination applied to large space reflector metrology,” Appl. Opt. 52, A102–A116 (2013). [CrossRef]  

66. V. Bianco, M. Paturzo, O. Gennari, A. Finizio, and P. Ferraro, “Imaging through scattering microfluidic channels by digital holography for information recovery in lab on chip,” Opt. Express 21, 23985–23996 (2013). [CrossRef]  

67. W. E. Baughman, H. Yokus, S. Balci, D. S. Wilbert, P. Kung, and S. M. Kim, “Observation of hydrofluoric acid burns on osseous tissues by means of terahertz spectroscopic imaging,” IEEE Trans. Terahertz Sci. Technol. 3, 387–394 (2013). [CrossRef]  

68. W. E. Baughman, D. S. Wilbert, S. Balci, M. Bolus, M. Baker, P. Kung, S. M. Kim, M. S. Heimbeck, and H. O. Everitt, “Comparative reconstructions of THz spectroscopic imaging for non-destructive testing and biomedical imaging,” Proc. SPIE 8363, 83630W (2012). [CrossRef]  

69. L. Rong, D. Wang, T. Latychevskaia, X. Zhou, and Y. Wang, “Continuous-wave terahertz digital holography on biological specimen,” in Digital Holography and Three-Dimensional Imaging (2014), Vol. 2, paper DThB-2.

70. Q. Li, Y. D. Li, S. H. Ding, and Q. Wang, “Terahertz computed tomography using a continuous-wave gas laser,” J. Infrared Millimeter Terahertz Waves 33, 548–558 (2012). [CrossRef]  

71. C. Cull, D. Wikner, J. Mait, and M. Mattheiss, “Millimeter-wave compressive holography,” Appl. Opt. 49, E67–E82 (2010). [CrossRef]  

72. Q. Li and Y. D. Li, “Continuous-wave 2.52 terahertz Gabor inline compressive holographic tomography,” Appl. Phys. B 117, 585–596 (2014). [CrossRef]  

73. Y. D. Li, Q. Li, J. Q. Hu, and Y. P. Zhao, “Compressive sensing algorithm for 2D reconstruction of THz digital holography,” Chin. Opt. Lett. 13, S11101 (2015). [CrossRef]  

74. G. Chen and Q. Li, “Markov chain Monte Carlo sampling based terahertz holography image denoising,” Appl. Opt. 54, 4345–4351 (2015). [CrossRef]  

75. R. Zhu, J. T. Richard, D. J. Brady, D. L. Marks, and H. O. Everitt, “Compressive sensing and adaptive sampling applied to millimeter wave inverse synthetic aperture imaging,” Opt. Express 25, 2270–2284 (2017). [CrossRef]  

76. F. A. Jenkins and H. E. White, Fundamentals of Optics, 3rd ed. (McGraw-Hill, 1957).

77. D. L. Woolard, R. Brown, M. Pepper, and M. Kemp, “Terahertz frequency sensing and imaging: a time of reckoning future applications?” Proc. IEEE 93, 1722–1743 (2005). [CrossRef]  

78. D. Gabor, “A new microscopic principle,” Nature 161, 777–778 (1948). [CrossRef]  

79. J. W. Goodman, Introduction to Fourier Optics, 3rd ed. (Roberts & Company, 2005).

80. J. C. Li, P. Tankam, Z. J. Peng, and P. Picart, “Digital holographic reconstruction of large objects using a convolution approach and adjustable magnification,” Opt. Lett. 34, 572–574 (2009). [CrossRef]  

81. S. Grilli, P. Ferraro, S. De Nicola, A. Finizio, G. Pierattini, and R. Meucci, “Whole optical wavefields reconstruction by digital holography,” Opt. Express 9, 294–302 (2001). [CrossRef]  

82. Y. Lingfeng and M. K. Kim, “Wavelength-scanning digital interference holography for tomographic three-dimensional imaging by use of the angular spectrum method,” Opt. Lett. 30, 2092–2094 (2005). [CrossRef]  

83. E. N. Leith and J. Upatnieks, “Wavefront reconstruction with continuous-tone objects,” J. Opt. Soc. Am. 53, 1377–1381 (1963). [CrossRef]  

84. M. K. Kim, L. Yu, and C. J. Mann, “Interference techniques in digital holography,” J. Opt. A 8, S518–S523 (2006). [CrossRef]  

85. E. Hecht, Optics, 5th ed. (Addison-Wesley, 2015).

86. G. C. Sherman, “Application of the convolution theorem to Rayleigh’s integral formulas,” J. Opt. Soc. Am. 57, 546–547 (1967). [CrossRef]  

87. E. Wolf, “Three-dimensional structure determination of semi-transparent objects from holographic data,” Opt. Commun. 1, 153–156 (1969). [CrossRef]  

88. Y. Sung, W. Choi, C. Fang-Yen, K. Badizadegan, R. R. Dasari, and M. S. Feld, “Optical diffraction tomography for high resolution live cell imaging,” Opt. Express 17, 266–277 (2009). [CrossRef]  

89. A. I. Harris, “Coherent and incoherent detection,” in Proc. 29th Liege International Astrophysical Colloquium from Ground-Based to Space-Borne Sub-mm Astronomy (1990), Vol. 29, pp. 165–169.

90. J. M. Geary, Introduction to Lens Design: with Practical ZEMAX Examples, 1st ed. (William-Bell, 2002).

91. A. W. M. Lee and Q. Hu, “Real-time, continuous-wave terahertz imaging by use of a microbolometer focal-plane array,” Opt. Lett. 30, 2563–2565 (2005). [CrossRef]  

92. H. Tao, N. I. Landy, C. M. Bingham, X. Zhang, R. D. Averitt, and W. J. Padilla, “A metamaterial absorber for the terahertz regime: design, fabrication, and characterization,” Opt. Express 16, 7188–7197 (2008). [CrossRef]  

93. B. Rappaz, A. Barbul, A. Hoffmann, D. Boss, R. Korenstein, C. Depeursinge, P. J. Magistretti, and P. Marquet, “Spatial analysis of erythrocyte membrane fluctuations by digital holographic microscopy,” Blood Cells Mol. Dis. 42, 228–232 (2009). [CrossRef]  

94. M. Naftaly and R. E. Miles, “Terahertz time-domain spectroscopy: a new tool for the study of glasses in the far infrared,” J. Non-Cryst. Solids 351, 3341–3346 (2005). [CrossRef]  

95. https://corstat.com/product/permanent-esd-safe-foams.

96. J. F. Kaiser and R. W. Schafer, “On the use of the I0-sinh window for spectrum analysis,” IEEE Trans. Acoust. Speech Signal Process. 28, 105–107 (1980). [CrossRef]  

97. K. D. D. Willis and A. D. Wilson, “InfraStructs: fabricating information inside physical objects for imaging in the terahertz region,” ACM Trans. Graph. 32, 138 (2013). [CrossRef]  

98. Y. Cheng, “Two-wavelength phase shifting interferometry,” Appl. Opt. 23, 4539–4543 (1984). [CrossRef]  

99. J. Gass, A. Dakoff, and M. K. Kim, “Phase imaging without 2π ambiguity by multiwavelength digital holography,” Opt. Lett. 28, 1141–1143 (2003). [CrossRef]  

100. J. W. Lamb, “Miscellaneous data on materials for millimetre and submillimetre optics,” Int. J. Infrared Millimeter Waves 17, 1997–2034 (1996). [CrossRef]  

101. J. Rosen, G. Indebetouw, and G. Brooker, “Homodyne scanning holography,” Opt. Express 14, 4280–4285 (2007). [CrossRef]  

102. Y.-C. Lin, C.-J. Chen, and T.-C. Poon, “Optical sectioning with a low-coherence phase-shifting digital holographic microscope,” Appl. Opt. 50, B25–B30 (2011). [CrossRef]  

Martin S. Heimbeck is a physicist at the U.S. Army Space and Missile Defense Command Technical Center. There he performs basic and applied laser research in the field of directed energy and collaborates in terahertz technologies to include digital holography, radar imaging, and high-bandwidth datalinks. He received his Ph.D. in Optical Sciences and Engineering from the University of Alabama in Huntsville in 2016 and is a recipient of the Presidential Early Career Award for Scientists and Engineers (PECASE).

Henry O. Everitt is a member of the Department of Defense (DoD) senior executive service at the U.S. Army’s Combat Capabilities Development Command (CCDC) Aviation & Missile Center, where he serves as the Army’s senior technologist (ST) in optical sciences. There he performs collaborative spectroscopic investigations of wide bandgap semiconductors, nanophotonics, ultraviolet plasmonics and photocatalysis, molecular physics and lasers, and terahertz technologies (imaging, holography, RADAR, communications). He received his Ph.D. in Physics from Duke University in 1990, holds adjunct faculty positions at several universities, and is a Fellow of The Optical Society, the American Physical Society, the American Association for the Advancement of Science, and the Army Research Laboratory (Emeritus).

Supplementary Material (4)

Name    Description
Visualization 1       This video accompanies Figure 2.3 of the tutorial.
Visualization 2       This video of an amplitude reconstruction accompanies Figure 4.3.
Visualization 3       This video of a phase reconstruction accompanies Figure 4.3.
Visualization 4       This video accompanies Figure 5.2 of the tutorial.




