Optica Publishing Group

W-band sparse synthetic aperture for computational imaging

Open Access

Abstract

We present a sparse synthetic-aperture, active imaging system at W-band (75 – 110 GHz) that uses sub-harmonic mixer modules. The system employs mechanical scanning of the receiver module position and a fixed transmitter module. A vector network analyzer provides the back-end detection. A full-wave forward model allows accurate construction of the image transfer matrix. We solve the inverse problem to reconstruct scenes using a regularized least-squares technique. We demonstrate far-field, diffraction-limited imaging of 2D and 3D objects, achieving a cross-range resolution of 3 mm and a depth (range) resolution of 4 mm. Furthermore, we develop an information-based metric to evaluate the performance of a given image transfer matrix for noise-limited computational imaging systems. We use this metric to find the optimal gain of the radiating element for a given range, both theoretically and experimentally in our system.

© 2016 Optical Society of America

1. Introduction

Millimeter waves (30 – 300 GHz) occupy a strategic part of the electromagnetic spectrum because they can penetrate many non-metallic barriers, including walls, clothing, smoke and fog, while at the same time allowing objects to be probed with resolution superior to that of microwaves. Hence, millimeter-wave imaging systems have found applications in the fields of radio astronomy [1], remote sensing [2], biomedical imaging [3] and security surveillance [6].

The current state of the art in advanced millimeter-wave imaging systems used in airports and security checkpoints employs a wideband, cylindrically scanning, 3D imaging system [4,5]. Specifically, the L3 ProVision ATD body scanner deployed in United States airports consists of two vertical linear antenna arrays (with 384 antenna elements in each array) operating at Ka-band (24.5 – 30 GHz) [5]. The arrays are mounted on a cylindrical mechanical scanning stage and employ 3D near-field holographic image reconstruction techniques, which are computationally very intensive [6]. In contrast, an imaging system better suited to security screening applications might incorporate a planar form factor with fixed (or electrically tuned or switched) elements and operate at greater stand-off. Employing fewer elements could mitigate analog hardware costs, as well as computational requirements, if a suitable compressed image reconstruction algorithm were to enable comparable imaging performance.

In recent years, advances in CMOS and SiGe BiCMOS devices have significantly reduced the per-channel cost of operating in this band, enabling more widespread use of millimeter waves for both communications and imaging [7–9]. However, over the same period, computational resources have advanced at an even greater pace, so that the optimal trade-off position strongly favors using more computational resources and less analog source and acquisition hardware. The techniques of computational imaging make such a trade-off possible [10]. Not to be confused with image processing, computational imagers use hardware-specific algorithms and generally do not produce useful or recognizable images before the data is processed.

The unique constraints and capabilities of millimeter waves and the associated source and acquisition hardware lead quite naturally to unconventional imaging systems. The specific system we investigate here is active and phase sensitive; it employs a sparse, under-sampled, mechanically scanned, planar array aperture; it encodes spatial diversity over a swept frequency range; and it reconstructs images from acquired data sets that are under-determined (i.e., compressed imaging).

With the notable exception of astronomy, most millimeter-wave imaging is active, since ambient, terrestrial, millimeter-wave radiation is quite weak [11]. Also, since phase-sensitive measurements are readily supported by the hardware up through millimeter wave frequencies, active imagers are usually also holographic, and do not require focusing optics [6]. Though less common, active imagers also offer the opportunity to encode distinct spatial patterns (modes) on the sensor and source fields. We opted for dominant spatial encoding on the sensor side, the usual case.

Our source is composed of a single-element radiator at a fixed position, co-planar with and adjacent to our sensor array, which is mechanically scanned and synthetic. Mechanical scanning precludes real-time imaging of dynamic objects, but the performance of a hypothetical parallel-acquisition, multi-static imager can be evaluated from the results obtained with this prototype system. For this purpose, our synthetic aperture is both sparse and under-sampled, to keep the number of channels to a more realistic number.

Our objects of interest are at most weakly dispersive over the millimeter-wave spectrum. Hyperspectral information is thus fairly limited and not pursued. Instead, we take advantage of the option to use data acquired at different frequencies to reconstruct a single, monochromatic image. The multi-frequency measurements can be acquired either in series (with a frequency ramp) or in parallel (frequency multiplexed). We used a frequency ramp, but again some aspects of parallel-acquisition performance may be evaluated using our simpler system. (In any case, available frequency-ramp repetition rates allow for real-time use in many circumstances.) For the multi-frequency measurements to provide useful, independent information, they must encode independent spatial information. Frequency-encoded spatial diversity can be supplied by an aperture with intrinsic frequency dispersion [14,15] or by the frequency dispersion of the propagation of complex field patterns between the measurement aperture and the object. We rely on the latter, since our aperture elements are either open-ended waveguides or horn radiators that have field patterns fixed versus frequency, leading to a simpler aperture design. Leveraging frequency-encoded spatial diversity allows additional target spatial information to be acquired with the simple addition of a swept oscillator to the measurement hardware. These frequency-encoding approaches are in contrast to the well-known single-pixel camera, which typically uses a switched (temporally encoded) approach to spatial diversity [16,17]. Temporally encoded spatial diversity relies on creating complex, dynamically programmable, spatially encoding apertures, which could potentially allow the use of a targeted and possibly orthogonal spatial basis. Of course, hybrid systems can employ both temporal and frequency encoding.

Finally, to maximize the utilization of hypothetically channel-constrained parallel-acquisition hardware, we reconstruct images with more voxels than the number of measurements, i.e., we perform under-determined reconstructions, or compressive imaging [18,19]. We reconstruct 2D and 3D images and demonstrate diffraction-limited resolution with our experimental system, in agreement with our simulated results. Additionally, we develop a figure of merit for noise-limited computational imaging systems based on the singular value decomposition (SVD) and the Shannon-Hartley theorem, and verify its usefulness experimentally.

2. W-band imaging system setup

Our imaging system consists of a W-band transmitter (Tx) and receiver (Rx), which are subharmonic mixer extensions of a vector network analyzer (VNA). The Tx is mounted at a fixed position and the Rx is mounted on an x-y scanning stage to create a synthetic receiver array. Interfaced to an Agilent PNA 5227A VNA through an N5260A millimeter-wave head module, the extensions are provided with a local oscillator (9.375 – 13.75 GHz) to mix between an intermediate frequency (5 – 300 MHz) and the W-band (75 – 110 GHz). The W-band signal ports are WR-10 waveguide. Both the bare waveguide flange and a standard-gain horn were used as radiating elements. The Tx is connected to port 1 of the VNA and the Rx to port 2. The VNA's frequency-offset option is used in measuring and recording the transmission coefficients (S21). These coefficients comprise the measurement set from which images were reconstructed. The setup is shown in Fig. 1.

Fig. 1 W-band setup showing the Tx and Rx modules. The Tx is at a fixed location and the Rx is on an x–y scanning stage.

3. Forward model and reconstruction

Using the first-order Born approximation [22], we assume our objects can be represented by a regularly spaced grid of point scatterers in the imaging domain. There is a linear relationship between the cross-sections of this grid of scatterers, represented by the complex-valued vector, f, and our set of measurements, represented by the complex-valued vector, g. This linear relationship is referred to as the image transfer matrix (or H-matrix) and it comprises our forward model,

$$ \mathbf{g}_{[M\times 1]} = \mathbf{H}_{[M\times N]}\,\mathbf{f}_{[N\times 1]} \tag{1} $$
where we have M measurements and N scattering grid points (or voxels) in the imaging space (also called the scene). The forward model, H, is generally fairly straightforward to compute, as will be shown below. However, reconstruction of an image from measurements is the inverse problem, the solutions of which are often under-determined and not unique. To aid in the solution of this often ill-posed problem, prior information, such as scene sparsity, scattering cross-section bounds, or object location, can be used. In this work, we use the regularized least squares approach to reconstruct a scene estimate fe,
$$ \mathbf{f}_e = \underset{\mathbf{f}}{\arg\min}\; \|\mathbf{H}\mathbf{f} - \mathbf{g}\|_2^2 + \Upsilon \|\mathbf{f}\|_1 \tag{2} $$
where ‖·‖2 is the L2 norm, ϒ > 0 is a regularization parameter and ‖·‖1 is the L1 norm. For most large under-determined systems, minimal L1-norm regularization is sufficient [12,13]. The regularization favors images with smaller combined cross-section. The first-order Born approximation assumes that the grid of point scatterers (representing the object) is weakly interacting. However, strong interactions that are relatively local will only result in a re-scaling of local scattering strength, which may not significantly affect the reconstructed image geometry. (Strong interactions that are substantially non-local are likely to create image artifacts.) For example, a convex reflective surface with radius of curvature large compared to the wavelength should be reasonably well represented (and imaged) using this model. Conceptually, an element of the image transfer matrix quantifies (with magnitude and phase) the signal path from source to scattering point to detector, for a unit scattering point. The row index of the transfer matrix element specifies the location of the transmitter and receiver, and the frequency. The column index specifies the location of the scattering point. The signal path includes the propagation of fields from the transmitting element to a point scatterer, the propagation of scattered fields from that point to the receiving element, and the partial acceptance of the field energy by the receiving aperture. Invoking reciprocity, the back-scatter propagation and receiver acceptance can be replaced by a forward propagation of the receiving aperture mode. This representation of the transfer matrix is more symmetric and reduces the complexity of the calculation.
$$ H(\{r_T, r_R, \omega\}, \{r_S\}) = \alpha(\omega)\, E(r_S; r_T, \omega)\, E(r_S; r_R, \omega) \tag{3} $$
where E(rS; rT or rR, ω) is the electric field propagated from either the transmitting or receiving element to the location of the scattering point, rS. The factor α(ω) includes any non-free-space parts of the signal path, which may be accounted for either through a hardware calibration (that de-embeds the measurement to the element aperture planes) or by inclusion in the transfer matrix. The arguments of H are grouped into sets that comprise the row and column indices. For example, the row index, m, labels all possible combinations of Tx position, Rx position and frequency.
$$ H_{mn} = H(\{r_T, r_R, \omega\}_m,\; \{r_S\}_n) \tag{4} $$
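Given H and g, the reconstruction of Eq. (2) can be performed with a simple iterative soft-thresholding (ISTA) loop. The sketch below is a minimal illustration of that technique, not the authors' solver; the function name and parameter defaults are our own.

```python
import numpy as np

def reconstruct_ista(H, g, upsilon=1e-3, n_iter=200):
    """Estimate f_e = argmin_f ||Hf - g||_2^2 + upsilon*||f||_1
    via iterative soft-thresholding (ISTA), for complex H and g."""
    # Step size from the spectral norm: Lipschitz constant of the data-term gradient
    L = 2 * np.linalg.norm(H, 2) ** 2
    f = np.zeros(H.shape[1], dtype=complex)
    for _ in range(n_iter):
        grad = 2 * H.conj().T @ (H @ f - g)     # gradient of ||Hf - g||_2^2
        z = f - grad / L                        # gradient step
        mag = np.abs(z)
        shrink = np.maximum(mag - upsilon / L, 0.0)
        f = z * (shrink / np.maximum(mag, 1e-30))  # complex soft threshold: shrink magnitude, keep phase
    return f
```

For an under-determined system (M < N) with a sparse scene, a few hundred iterations typically suffice to fit the data while suppressing spurious voxels.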

We perform the necessary field propagations and construct the image transfer matrix by numerical computation. In our system, we use the bare flange of a WR-10 waveguide or a standard-gain pyramidal horn as the transmitting and receiving element apertures. Both of these have known field patterns, and the former has a simple analytical expression (i.e., the TE10 mode). The electric field pattern at the mouth of a WR-10 open-ended waveguide is given by,

$$ \mathbf{E} = E_0 \cos\!\left(\frac{\pi y}{a}\right)\hat{x} \tag{5} $$
where a = 2.54 mm and b = 1.27 mm (see inset in Fig. 2) are the dimensions of the WR-10 rectangular waveguide. Similarly, for a standard-gain pyramidal horn, the near-field electric pattern is given by,
$$ \mathbf{E} = E_0 \cos\!\left(\frac{\pi y}{a}\right) \exp\!\left[-\frac{jk}{2}\left(\frac{x^2}{R_1} + \frac{y^2}{R_2}\right)\right]\hat{x} \tag{6} $$
where a = 26.2 mm and b = 20.3 mm (see inset in Fig. 2) are the dimensions of the WR-10 standard-gain pyramidal horn, and R1 and R2 are the E- and H-plane phase-center distances. The aperture fields of the open-ended waveguide and the horn, given in Eqs. (5) and (6), can be converted to magnetic surface currents using the surface equivalence theorem [20],
$$ \mathbf{M}_s = -2\,\hat{n} \times \mathbf{E} \tag{7} $$
where n̂ = ẑ is the surface normal. The magnetic surface current in the aperture plane can be converted to a set of magnetic dipoles,
$$ \mathbf{m}_p = \left(\frac{j\,\Delta x\, \Delta y}{\omega \mu_0}\right) \mathbf{M}_s \tag{8} $$
where Δx and Δy are the near-field pixel dimensions used for discretization. These individual dipoles can be propagated to the scene plane of interest using the Green's function and summed to obtain the overall response of the Tx/Rx at a given scene voxel.
$$ \mathbf{E}(r_S; r_R\,\text{or}\,r_T, \omega) = \frac{j\omega\mu_0}{4\pi} \sum_p \left[(\mathbf{m}_p \times \hat{R}_p)\left(\frac{jk}{R_p} - \frac{1}{R_p^2}\right) e^{-jkR_p}\right] \tag{9} $$
where k = 2π/λ is the wavenumber and Rp = |rS − rR,p| (for the Rx) or Rp = |rS − rT,p| (for the Tx) is the distance from the p-th magnetic dipole to the scene voxel, with R̂p the corresponding unit vector. The above equation needs to be computed for each Tx/Rx position in the aperture plane in order to construct the H matrix shown in Eq. (3).
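As an illustration of Eqs. (5) and (7)–(9), the sketch below discretizes the TE10 aperture field of an open-ended WR-10 guide into magnetic dipoles and sums their fields at a scene voxel. It is a simplified, uncalibrated version of the forward model (α(ω) and any normalization omitted), and the function names, grid densities, and geometry conventions are our own assumptions.

```python
import numpy as np

MU0 = 4e-7 * np.pi       # free-space permeability (H/m)
C0 = 299792458.0         # speed of light (m/s)

def wr10_dipoles(center, a=2.54e-3, b=1.27e-3, nx=6, ny=12, omega=2*np.pi*92.5e9):
    """TE10 aperture field of an open-ended WR-10 guide (Eq. 5), converted
    to magnetic point dipoles via Eqs. (7)-(8). Aperture in the z=0 plane;
    E is x-polarized with a cosine taper along y."""
    dx, dy = b / nx, a / ny
    x = (np.arange(nx) - (nx - 1) / 2) * dx
    y = (np.arange(ny) - (ny - 1) / 2) * dy
    X, Y = np.meshgrid(x, y, indexing="ij")
    Ex = np.cos(np.pi * Y / a)                       # Eq. (5) with E0 = 1
    pos = np.stack([X.ravel() + center[0],
                    Y.ravel() + center[1],
                    np.zeros(X.size)], axis=1)
    Ms = np.zeros((X.size, 3), dtype=complex)
    Ms[:, 1] = -2 * Ex.ravel()                       # Ms = -2 z_hat x E (Eq. 7)
    mom = (1j * dx * dy / (omega * MU0)) * Ms        # Eq. (8)
    return pos, mom

def field_at(r_s, pos, mom, omega):
    """Summed magnetic-dipole field at scene voxel r_s (Eq. 9)."""
    k = omega / C0
    Rv = r_s - pos
    R = np.linalg.norm(Rv, axis=1)[:, None]
    term = (1j * k / R - 1 / R**2) * np.exp(-1j * k * R)
    return (1j * omega * MU0 / (4 * np.pi)) * np.sum(np.cross(mom, Rv / R) * term, axis=0)

def h_element(r_tx, r_rx, r_s, omega):
    """One (uncalibrated) H-matrix element: the co-polarized product of the
    fields propagated from the Tx and Rx apertures to a voxel (Eq. 3)."""
    E_t = field_at(r_s, *wr10_dipoles(r_tx, omega=omega), omega)
    E_r = field_at(r_s, *wr10_dipoles(r_rx, omega=omega), omega)
    return E_t[0] * E_r[0]
```

Looping `h_element` over all (Rx position, frequency) rows and all voxel columns yields an H matrix up to the calibration factor α(ω).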

Fig. 2 Forward-model setup showing the stationary Tx and the scanned Rx grid on the source/measurement aperture, along with target voxels on the scene plane.

We explored two propagation algorithms. The first approach propagates the element-aperture electric field to a distance zS using the fast-Fourier-transform-based angular spectrum method (ASM). The other approach converts the waveguide fields at the aperture to magnetic dipole moments using the surface equivalence theorem, then sums all the dipole fields at a given scene voxel using the dipole Green's function. The periodicity of discrete Fourier transforms and the divergence of the source fields require the Fourier-domain window to be scaled up substantially with propagation distance zS. For this reason we found the Green's function method to be more efficient in our desired configurations. In Eq. (9), each dipole response at a given voxel can be calculated independently of the other dipoles. This operation is easily parallelized, and hence the Green's function method was implemented on a GPU.
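For reference, the FFT-based angular spectrum method mentioned above can be sketched as follows. This is our own minimal version, assuming the e^{+jωt} phase convention of Eq. (9); in practice the aperture array must be zero-padded (not shown) to suppress the wrap-around artifacts that made this approach less attractive at large zS.

```python
import numpy as np

def angular_spectrum_propagate(E0, dx, dy, z, lam):
    """Propagate a sampled aperture field E0[y, x] a distance z using the
    FFT-based angular spectrum method."""
    ny, nx = E0.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dy)
    KX, KY = np.meshgrid(kx, ky)
    k = 2 * np.pi / lam
    # conj() flips the imaginary part so evanescent components decay
    # (with the exp(-j*kz*z) propagator below)
    kz = np.conj(np.sqrt((k**2 - KX**2 - KY**2).astype(complex)))
    return np.fft.ifft2(np.fft.fft2(E0) * np.exp(-1j * kz * z))
```

A uniform input field should simply pick up the on-axis phase exp(-jkz), which provides a quick sanity check of the sign conventions.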

4. Experimental imaging

As explained in the previous section, the H matrix is constructed computationally. However, this H matrix is incomplete without knowing the relative amplitude and phase of the Tx and Rx aperture fields as a function of frequency. This requires one calibration measurement between every Tx and Rx pair (in our case, one pair). The calibration measurement is configured with the Tx aperture connected directly to the Rx. In this configuration, S21 provides the required through measurement. The resulting complex spectrum is used as a scaling factor when calculating the H matrix. The imaging capabilities of this setup were characterized by reconstructing resolution targets at different stand-off distances. The resolution targets consist of copper strips glued onto a wooden board. The target consisting of 3 mm wide copper strips was placed 15 cm away from the Tx/Rx aperture plane. The Rx was positioned on an 11 × 11 sparse rectangular grid with a spacing of 1.2 cm. The distance between the farthest Rx position and the stationary Tx position was ~ 12 cm. This largest baseline, D, determines the effective size of the synthetic aperture, which in turn governs the cross-range resolution. The cross-range resolution is given by [21],

$$ \Delta_{cr} = \frac{\lambda R}{2D} \tag{10} $$
where λ is the wavelength and R is the distance between the aperture plane and the target (i.e., the stand-off distance). At each Rx position, complex S21 data for 101 frequency points (0.35 GHz frequency spacing) were recorded. This corresponds to a total number of measurements M = 121 × 1 × 101 = 12221. For the 3 mm resolution target, 3D reconstruction was performed for a total number of scene voxels N = 60000. Figure 3 shows the reconstruction of a 2D target performed using Eq. (2) at a stand-off distance of 15 cm. The image reconstructions in both simulation and measurement clearly resolve the 3 mm strips. Similarly, a target consisting of 5 mm copper strips was placed 30 cm away from the Tx/Rx aperture plane. The cross-range resolution worsens with the increase in stand-off distance R. For the 5 mm resolution target, 3D reconstruction was performed for a total number of scene voxels N = 208000. The image reconstruction of this target is shown in Fig. 4. The 5 mm wide strips are clearly resolved. We perform reconstruction using 3D scene volumes, even for the 2D planar targets, thus using only rough prior knowledge of the plane of the scatterer. The depth/range resolution is given by [21],
$$ \Delta_{dr} = \frac{c}{2B} \tag{11} $$
where c is the speed of light in the medium and B is the bandwidth. To quantify the depth resolution of our system, we used a target consisting of three 5 mm wide copper strips. One of the strips was raised 4 mm above the other two strips. For this target, 3D reconstruction was performed for a total number of scene voxels N = 180000. The image reconstructions from both simulation and measurement are shown in Fig. 5. The effect of measurement bandwidth on such a frequency-diverse computational imaging system was also studied (see Appendix A). Reducing the measurement bandwidth decreases the target spatial information that can be acquired by the imaging system.
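Plugging the experimental parameters into Eqs. (10) and (11) reproduces the quoted figures (assuming the band-center wavelength; the 3 mm and 5 mm strip widths sit at or above these diffraction limits):

```python
C0 = 299792458.0  # speed of light in vacuum (m/s)

def cross_range_res(lam, R, D):
    """Eq. (10): cross-range resolution for aperture baseline D."""
    return lam * R / (2 * D)

def depth_res(B):
    """Eq. (11): depth (range) resolution for bandwidth B."""
    return C0 / (2 * B)

lam = C0 / 92.5e9  # band-center wavelength, ~3.24 mm
print(cross_range_res(lam, R=0.15, D=0.12) * 1e3)  # ~2.0 mm at 15 cm stand-off
print(cross_range_res(lam, R=0.30, D=0.12) * 1e3)  # ~4.1 mm at 30 cm stand-off
print(depth_res(35e9) * 1e3)                       # ~4.3 mm for the 35 GHz band
```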

Fig. 3 a) Optical image of the 3 mm resolution target made of copper strips. b) W-band reconstructed images of the target from both simulation and measurement at a stand-off distance of 15 cm. c) The Rx grid and Tx locations relative to the target.

Fig. 4 a) Optical image of the 5 mm resolution target made of copper strips. b) W-band reconstructed images of the target from both simulation and measurement at a stand-off distance of 30 cm.

Fig. 5 a) Optical image of the 4 mm depth-resolution target made of copper strips. b) W-band reconstructed images (perspective and side views) of the target from both simulation and measurement at a stand-off distance of 30 cm.

Some differences between the simulation and experimental image reconstruction results are observed. We believe these differences are primarily due to differences between the configuration geometry used for the propagation model (incorporated in the construction of the forward-model H matrix) and the actual positions and orientations of the Tx and Rx modules used in the experiment. Additionally, analytic expressions were used for the waveguide and horn aperture fields in the forward model, which may have differed somewhat from the actual experimentally realized fields. We do not believe, however, that deviations of the experimental noise characteristics from the assumed noise model were significant factors, since the signal-to-noise levels were generally high. Despite these potential sources of error, we found the computed forward model, embodied by the H matrix and used in solving the inverse problem, effective for diffraction-limited experimental image reconstructions.

This method qualifies as a sparse imaging technique because the scene is spatially sampled below the Nyquist limit (unlike a conventional synthetic aperture radar) and the total number of voxels reconstructed is substantially greater than the number of measurements (N > M). This overall approach can easily be scaled to multiple Txs and Rxs to form a multi-static imaging system without requiring mechanical scanning for data acquisition. In our single fixed-Tx system, the aperture size, D, is constrained by the measurement signal-to-noise ratio. This limits the achievable cross-range resolution and field of view. Because our setup is active with a planar Tx/Rx aperture, specularity can also be an issue [23]. Specular areas of the target with surface normals that point away from the Tx/Rx pair do not contribute significant signal, and hence such areas appear weakly, or not at all, in the reconstructions. This can be mitigated by larger or partially (or totally) curved enclosing apertures. The problem can also be overcome by exploiting polarization information [24]. Our measurements were not optimized to highlight this difficulty. With multiple Txs and Rxs, one could create a larger aperture, leading to better cross-range resolution at greater stand-off distances while also reducing specularity effects.

5. Information metric for computational imaging

The image transfer matrix, H, is one of the key factors that affect the quality of image reconstructions in a noise-limited computational imaging system. Hence, it is useful to develop a strategy to quantify the performance of a system described by this matrix. Starting with Eq. (1), g = Hf, we can find an information metric (or figure of merit) for characterizing a given H matrix. Consider an ensemble of possible scene vectors, {f}. If the scattering of scene voxels is uncorrelated and the spread of scattering amplitudes is equal in all voxels, then the covariance of the ensemble is given by

$$ \Sigma_f = \Delta_f^2\, I \tag{12} $$
where I is the identity matrix and Δf is the spread of scattering amplitudes over the ensemble {f}. Equation (12) could potentially include correlations of the scene voxels, leading to off-diagonal elements in the covariance matrix, and this prior could also be exploited in the image reconstructions. Since we do not leverage this prior in our image reconstructions, the assumption of uncorrelated scene voxels in the calculation of the information metric is valid. We can find the related covariance in the measurement space of g with the standard uncertainty propagation formula applied to Eq. (1)
$$ \Sigma_g = H\, \Sigma_f\, H^\dagger \tag{13} $$

This matrix describes the a priori known range of possible values of the measurements and their correlations. If we have both the a priori measurement ranges and the measurement uncertainty, we can find the added information of the measurements using the Shannon-Hartley theorem. If we find a basis in which the range and uncertainty covariances are mutually diagonal, the total added information of the measurement set can be easily computed. To find a suitable mutually diagonalizing basis we perform the singular value decomposition H = USV†; the g-covariance matrix then becomes,

$$ \Sigma_g = H \Sigma_f H^\dagger = U S V^\dagger\, \Delta_f^2 I\, (U S V^\dagger)^\dagger = \Delta_f^2\, U S S^\dagger U^\dagger $$
where we have used the unitarity of V. Next, we find the measurement uncertainty of g. If we assume the measurement noise is uncorrelated and equal for all measurement components, the measurement noise covariance matrix is,
$$ \sigma_g = \delta_g^2\, I \tag{14} $$
where δg is the measurement noise magnitude and I is the identity matrix. Note that we will use Σ to denote the covariance associated with the ensemble of different possible scenes (i.e., the range) and σ to denote the covariance associated with the measurement noise (i.e., the uncertainty), though the latter is non-standard. Though σg is diagonal, Σg is not. The singular value decomposition of H provides a mutually diagonalizing basis through the unitary matrix U. Define a new "measurement" vector
$$ \gamma = U^\dagger g \tag{15} $$

Now error propagate the range- and uncertainty-covariance matrices to this basis

$$ \sigma_\gamma = U^\dagger \sigma_g\, U = U^\dagger \delta_g^2 I\, U = \delta_g^2\, I \tag{16} $$
$$ \Sigma_\gamma = U^\dagger \Sigma_g\, U = U^\dagger \Delta_f^2\, U S S^\dagger U^\dagger\, U = \Delta_f^2\, S S^\dagger \tag{17} $$

The covariances in this basis are mutually diagonal and their components represent independent measurements both in terms of range and uncertainty. Employing the Shannon–Hartley theorem, the added information of one such measurement component is

$$ Q_m = \Delta t\, B \log_2\!\left[\frac{\Sigma_{\gamma,mm}}{\sigma_{\gamma,mm}} + 1\right] = \Delta t\, B \log_2\!\left[\left(\frac{\Delta_f}{\delta_g}\, S_{mm}\right)^2 + 1\right] \tag{18} $$
where ΔtB is the measurement time–bandwidth product. The total measurement–added information is just the sum of these independent pieces of information
$$ Q = \sum_{m=1}^{M} Q_m \tag{19} $$
where M is the total number of measurements.
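Given the singular values of H, the metric above reduces to a few lines of code. The sketch below is our own illustration (function name and inputs assumed); it also checks numerically that the U basis of the SVD diagonalizes the measurement-range covariance, as the derivation requires.

```python
import numpy as np

def information_metric(H, delta_f, delta_g, dtB=1.0):
    """Total added information Q (Eq. 19), assuming uncorrelated scene
    voxels (Eq. 12) and white measurement noise (Eq. 14)."""
    s = np.linalg.svd(H, compute_uv=False)                  # singular values S_mm
    q_m = dtB * np.log2((delta_f * s / delta_g) ** 2 + 1)   # per-mode info (Eq. 18)
    return q_m.sum()

# Numerical check of the diagonalization step: with Delta_f = 1, the
# measurement-range covariance in the U basis equals S S^dagger (Eq. 17).
rng = np.random.default_rng(0)
H = rng.standard_normal((8, 20)) + 1j * rng.standard_normal((8, 20))
U, s, Vh = np.linalg.svd(H, full_matrices=False)
Sigma_gamma = U.conj().T @ (H @ H.conj().T) @ U
assert np.allclose(Sigma_gamma, np.diag(s**2))

print(information_metric(H, delta_f=1.0, delta_g=0.1))
```

Modes whose singular values fall below the noise level contribute essentially zero bits, which is why the metric saturates rather than growing with the raw measurement count.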

To use this metric we require the singular values of the image transfer matrix, Smm, the measurement noise level, δg, and the scattering variance, Δf. The singular values come from the H matrix, which is determined as described above. The measurement noise level, δg, can be found from the variance of multiple measurements of the same configuration. (The multiple measurements should be net measurements, including any calibration procedures. We used a Rx-position and frequency dependent calibration.) Since we assume a constant noise level in the metric analysis, we assign a typical value for the noise level based on measurements of different configurations. The scattering variance, Δf, should scale with the maximum scattering cross-section that will be observed for the ensemble of measurements being considered. In particular, it will be dependent on the size of the scene voxels. One could find the maximum scattering cross-section analytically from fundamental electromagnetic scattering calculations and scale the result appropriately to the desired H matrix. Alternatively, one could perform a reconstruction from measured data, g, using an arbitrarily scaled H matrix and a strong scattering target. Then the largest magnitudes in the reconstructed scene vector, f, provide a maximum scattering cross-section appropriately scaled to the H matrix used. Based on this maximum scattering cross-section, the scattering variance, Δf, can be assigned.

6. Gain and stand-off distance trade-off

We use the information metric analysis described in the previous section to evaluate the trade-off between Tx/Rx aperture gain and the stand-off distance of the target. This approach is particularly useful when designing computational imaging systems. We construct different H matrices corresponding to every combination of gain and stand-off values. For each H matrix, we evaluate the total information Q for an uncorrelated scene ensemble, given by Eq. (19). We assume a measurement time-bandwidth product ΔtB = 1, and take δg and Δf from measurement (as described in the previous section). Figure 6(b) shows the normalized information metric Q as a function of gain and range. Arguing heuristically, a lower-gain element (e.g., an open-ended WR10 waveguide) for the Tx and Rx leads to a beam with wider divergence that illuminates the target more broadly, as compared to the highly directive beam provided by a high-gain element (e.g., a standard-gain WR10 pyramidal horn). The higher-gain element can fail to significantly illuminate the target at wider Tx/Rx baselines and thus miss some of the available information. However, at larger stand-off distances, the highly directive beam will illuminate the target sufficiently broadly and provide greater signal-to-noise (and thus greater information) than the broader beam.

Fig. 6 a) The Tx and Rx modules with a brass wire as the target. b) Normalized total information metric, Q, as a function of Tx/Rx aperture gain and stand-off range. Points (1), (3) and (5) correspond to WR10 open-ended waveguide element apertures (low gain) for both Tx and Rx. Points (2), (4) and (6) correspond to WR10 standard pyramidal horn element apertures (high gain) for both Tx and Rx.

We experimentally demonstrate this effect by choosing a 1 mm thick wire scatterer as the target and reconstructing this same target in different scenarios. The image reconstructions were performed at three different stand-off distances and two element gain values. The two gain values corresponded to a WR10 open-ended waveguide (~ 6 dB gain at 92.5 GHz) and a WR10 standard pyramidal horn (~ 22 dB gain at 92.5 GHz). The white ovals in Fig. 6(b) mark these six scenarios on the normalized information metric plot. At a stand-off range of 4 cm, the open-ended waveguide setup resolves the wire target better than the horn setup; the corresponding reconstructions for both cases are shown in Fig. 7. At a target distance of 20 cm, the open-ended waveguide and horn element apertures perform about equally well, in qualitative agreement with the information metric shown in Fig. 6(b). The image reconstructions are shown in Fig. 8. At 50 cm, the reconstruction from the open-ended waveguide setup fails completely, but the imaging system with the horn still reconstructs the target effectively. This is shown in Fig. 9. In all the above cases, the effective synthetic aperture size was kept constant. A single thin-wire target (with constant scattering cross-section at all stand-off distances) was chosen for this demonstration so that one can also easily visualize the change in resolution and field of view as gain and stand-off distance vary.

Fig. 7 Image reconstructions of a 1 mm wire target at a stand-off distance of 4 cm with the WR10 open-ended waveguide and the standard-gain pyramidal horn on Tx/Rx. The line plots correspond to the dashed-line region in the 2D measured plots. The information metric Q for these two cases corresponds to marked points (1) and (2) in Fig. 6.

Fig. 8 Image reconstructions of a 1 mm wire target at a stand-off distance of 20 cm with the WR10 open-ended waveguide and the standard-gain pyramidal horn on Tx/Rx. The line plots correspond to the dashed-line region in the 2D measured plots. The information metric Q for these two cases corresponds to marked points (3) and (4) in Fig. 6.

Fig. 9 Image reconstructions of a 1 mm wire target at a stand-off distance of 50 cm with the WR10 open-ended waveguide and the standard-gain pyramidal horn on Tx/Rx. The line plots correspond to the dashed-line region in the 2D measured plots. The information metric Q for these two cases corresponds to marked points (5) and (6) in Fig. 6.

7. Conclusion

We have demonstrated diffraction-limited imaging in the W-band with a system that is both under-sampled and under-determined. Specifically, the synthetic aperture of receiver locations is spaced at intervals greater than that prescribed by the Shannon-Nyquist sampling theorem, and the number of voxels in the reconstruction space exceeds the total number of measurements. Though this system employs a mechanically scanned synthetic receiver aperture with periodic locations, the methods of analysis and image reconstruction could easily be applied to a system with any combination of mechanically scanned or parallel receivers and/or transmitters, at aperiodic locations. In particular, the information metric described here applies to any noise-limited imaging system for which a forward-model determination allows the construction of the image transfer matrix. This metric provided the answer to the simple, yet fundamental, question of the optimal element gain for a given target range. Such questions can be difficult to answer for a complex computational imaging system.

Appendix A Effect of measurement bandwidth on image reconstructions

Figure 10 shows the effect of reducing the bandwidth on a frequency-diverse computational imaging system. Figure 10a) corresponds to the image reconstruction of the 3 mm resolution target using the complete 100% bandwidth (75 – 110 GHz) measured data. All the features of the original target are effectively resolved. Upon reducing the measurement bandwidth to 70% (75 – 100 GHz), 25% (88 – 97 GHz) and 10% (90.75 – 94.25 GHz), the experimental image reconstructions degrade accordingly (Figs. 10 b), c) and d), respectively). The reduction of measurement bandwidth also has an adverse effect on depth resolution.


Fig. 10 Effect of reducing the measurement bandwidth in our frequency-diverse computational imaging system. Shown are the experimental image reconstructions for the 3 mm resolution target.


Acknowledgments

This work was supported by the Department of Homeland Security, Science and Technology Directorate (Contract No. HSHQDC-12-C-00049). The published material represents the position of the authors and not necessarily that of the DHS or S&T. We would like to acknowledge Prof. David Smith (Duke University) and Dr. Alec Rose (Evolv Technology) for their help with the simulations.

References and links

1. A. Wootten and A. R. Thompson, “The Atacama Large Millimeter/Submillimeter Array,” Proceedings of the IEEE 97, 1463–1471 (2009). [CrossRef]  

2. G. M. Rebeiz, D. P. Kasilingam, Y. Guo, P. A. Stimson, and D. B. Rutledge, “Monolithic millimeter-wave two-dimensional horn imaging arrays,” IEEE Trans. Antennas Propag. 38, 1473–1482 (1990). [CrossRef]  

3. Z. Q. Zhang and Q. H. Liu, “Three-dimensional nonlinear image reconstruction for microwave biomedical imaging,” IEEE Trans. Biomed. Eng. 51, 544–548 (2004). [CrossRef]   [PubMed]  

4. R. Appleby and C. Cameron, “Seeing hidden objects with millimetre waves,” Physics World 25, 35 (2012). [CrossRef]  

5. D. M. Sheen, D. L. McMakin, and T. E. Hall, “Near-field three-dimensional radar imaging techniques and applications,” Appl. Opt. 49, E83–E93 (2010). [CrossRef]   [PubMed]  

6. D. M. Sheen, D. L. McMakin, and T. E. Hall, “Three-dimensional millimeter-wave imaging for concealed weapon detection,” IEEE Trans. Microwave Theory Tech. 49, 1581–1592 (2001). [CrossRef]  

7. E. Ojefors, B. Heinemann, and U. R. Pfeiffer, “Active 220- and 325-GHz Frequency Multiplier Chains in an SiGe HBT Technology,” IEEE Trans. Microwave Theory Tech. 59, 1311–1318 (2011). [CrossRef]  

8. B. A. Floyd, S. K. Reynolds, U. R. Pfeiffer, T. Zwick, T. Beukema, and B. Gaucher, “SiGe bipolar transceiver circuits operating at 60 GHz,” IEEE Journal of Solid-State Circuits 40, 156–167 (2005). [CrossRef]  

9. E. Ojefors and U. R. Pfeiffer, “A 650 GHz SiGe receiver front-end for terahertz imaging arrays,” in IEEE International Solid-State Circuits Conference Digest of Technical Papers (ISSCC) (IEEE, 2010), pp. 430–431.

10. B. Gonzalez-Valdes, G. Allan, Y. Rodriguez-Vaqueiro, Y. Alvarez, S. Mantzavinos, M. Nickerson, B. Berkowitz, J. A. Martinez-Lorenzo, F. Las-Heras, and C. M. Rappaport, “Sparse Array Optimization Using Simulated Annealing and Compressed Sensing for Near-Field Millimeter Wave Imaging,” IEEE Trans. Antennas Propag. 62, 1716–1722 (2014). [CrossRef]  

11. S. Stanko, F. Kloppel, J. Huck, D. Notel, M. Hagelen, G. Briese, A. Gregor, S. Erukulla, H.-H. Fuchs, H. Essen, and A. Pagels, “Remote concealed weapon detection in millimeter-wave region: active and passive,” Proc. SPIE 6396, 639606 (2006). [CrossRef]  

12. D. L. Donoho, “For most large underdetermined systems of linear equations the minimal L1-norm solution is also the sparsest solution,” Comm. on Pure and Applied Math. 59, 797–829 (2006). [CrossRef]  

13. L. C. Potter, E. Ertin, J. T. Parker, and M. Cetin, “Sparsity and Compressed Sensing in Radar Imaging,” Proceedings of the IEEE 98, 1006–1020 (2010). [CrossRef]  

14. G. Lipworth, A. Mrozack, J. Hunt, D. Marks, T. Driscoll, D. Brady, and D. Smith, “Metamaterial apertures for coherent computational imaging on the physical layer,” J. Opt. Soc. Am. A 30, 1603–1612 (2013). [CrossRef]  

15. J. Hunt, J. Gollub, T. Driscoll, G. Lipworth, A. Mrozack, M. Reynolds, D. Brady, and D. Smith, “Metamaterial microwave holographic imaging system,” J. Opt. Soc. Am. A 31, 2109–2119 (2014). [CrossRef]  

16. P. Llull, X. Liao, X. Yuan, J. Yang, D. Kittle, L. Carin, G. Sapiro, and D. Brady, “Coded aperture compressive temporal imaging,” Opt. Express 21, 10526–10545 (2013). [CrossRef]   [PubMed]  

17. C. M. Watts, D. Shrekenhamer, J. Montoya, G. Lipworth, J. Hunt, T. Sleasman, S. Krishna, D. R. Smith, and W. J. Padilla, “Terahertz compressive imaging with metamaterial spatial light modulators,” Nat. Photonics 8, 605 (2014). [CrossRef]  

18. E. J. Candes, J. Romberg, and T. Tao, “Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information,” IEEE Transactions on Information Theory 52, 489–509 (2006). [CrossRef]  

19. C. Cull, D. Wikner, J. Mait, M. Mattheiss, and D. Brady, “Millimeter-wave compressive holography,” Appl. Opt. 49, E67–E82 (2010). [CrossRef]   [PubMed]  

20. C. A. Balanis, Advanced Engineering Electromagnetics (Wiley, 1989).

21. M. I. Skolnik, Introduction to Radar Systems (McGraw-Hill, 1980).

22. M. Born and E. Wolf, Principles of Optics (Cambridge University, 1999). [CrossRef]  

23. G. Charvat, A. Temme, M. Feigin, and R. Raskar, “Time-of-Flight Microwave Camera,” Scientific Reports 5, 14709 (2015). [CrossRef]   [PubMed]  

24. S. Rahmann and N. Canterakis, “Reconstruction of specular surfaces using polarization imaging,” Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition (IEEE, 2001), pp. 149–155.



