Beamforming and holography image formation methods: an analytic study


Abstract

Beamforming and holographic imaging procedures are widely used in many applications such as radar sensing, sonar, and microwave medical imaging. Nevertheless, an analytical comparison of the two methods has not been reported. In this paper, the point spread functions pertaining to the two methods are determined analytically. This allows a formal comparison of the two techniques and makes it easy to highlight how the performance depends on the configuration parameters, including the frequency range, the number of scatterers, and the data discretization. It is demonstrated that beamforming and holography achieve essentially the same resolution, but beamforming requires a cheaper (fewer sensors) configuration.

© 2016 Optical Society of America

1. Introduction

Since the invention of radar and the rapid growth of sensor networks, wave-sensing devices have undergone significant progress. Arrays of sensors have recently become a crucial part of many imaging technologies. Just like an optical lens, a sensor array forms an aperture that can focus and steer a beam. Recent technical advances in micromachining and solid-state electronics offer the possibility of manufacturing arrays with thousands of sensors [1]. The development of sensor arrays has accelerated with the adoption of beamforming (BF) imaging methods used in radar sensing [2, 3], sonar [4], non-destructive testing [5] and medical diagnostics [6].

Beamforming (BF) refers to a large class of focusing algorithms widely employed in radar imaging [3]. In general, BF can be described as a time-shift technique, since it typically involves time-shifting the signals so as to isolate the reflections from a particular synthetic focal point [7]. The synthetic focal point is then scanned to create an image of the region of interest. The success of the BF approach relies primarily on the very good trade-off between achievable performance and procedural complexity. The delay-and-sum (DAS) beamformer is the simplest form of BF [8]. Many different extensions of the DAS beamformer have been devised and compared [9], including the delay-multiply-and-sum (DMAS) beamformer [10] and the enhanced DAS (EDAS) beamformer [11].

In parallel with BF methods, wavefront reconstruction techniques have also been developed and employed for radar imaging applications in recent years. Wavefront algorithms focus signals by performing a series of operations in the frequency domain. The spectrum of the collected reflections is calculated, both along the signal travel time and along the scan trajectory. The phase of the signals is modified in order to transfer the data from the spatial-temporal domain, where they are originally acquired, to the spatial domain, where they are displayed. Finally, the Fourier transform of the compensated spectrum is computed in order to properly visualize the data [12]. Holographic imaging (HI) methods have populated the literature of different application areas, ranging from seismic prospecting to breast cancer detection [13], for many years. An unconventional holography procedure is proposed in [14], where the object is reconstructed as the distribution of a spatial coherence function. In [15] the holography method is employed to measure the oscillation amplitude of vibrating objects. Other examples of application of holographic methods include wave-interference migration [16], in-depth field extrapolation [17], range migration [18] and Stolt migration [19]. All of these methods employ the spatial spectral representation of the solutions of the wave equation to develop imaging schemes that rely on Fourier transformations [20, 21]. Therefore, they are particularly appealing from the computational perspective, as their implementation can exploit the computational efficiency of the Fast Fourier Transform (FFT). Essentially, all of these methods are variants of Rayleigh-Sommerfeld holography [22].

Both BF and HI methods have been investigated as potentially computationally efficient methods for microwave medical imaging. Microwave medical imaging has emerged as one of the promising imaging modalities of the last two decades. Based fundamentally on the dielectric contrast between tissues, it offers a low-cost, non-ionising and non-invasive method for medical imaging. From a practical perspective, the imaging process involves illuminating the tissue of interest with a wideband radar pulse and recording the reflected (and sometimes transmitted) signals from any dielectric boundaries present within the tissue. Several different mechanisms for combining these signals to create an image have been proposed. These algorithms can be divided into two broad categories: those that seek to identify the presence and location of significant dielectric scatterers in the tissue, and those whose aim is to reconstruct the entire dielectric profile of the tissue under examination. Both BF and HI fall into the former category.

In this paper, the two most common radar image formation algorithms, namely BF and HI, will be described and compared. It is worth remarking that the literature already contains contributions that compare BF and HI. For example, in [23] BF and HI are numerically compared in the framework of sound source visualization for a very-near-zone configuration and planar sources, whereas in [24] the comparison takes place in the presence of reverberation. Here, the comparison will be worked out analytically for a scattering scenario which is pertinent to breast imaging. An accurate analysis of each algorithm will be carried out in terms of critical parameters such as the operating frequency range, the number of scatterers and the data discretization. This formal analytical approach enables a rigorous performance comparison of these techniques, which has not been previously reported. In fact, only a vague qualitative comparison can be extrapolated from the literature, where highly-cited contributions present the performance of these procedures on the basis of significantly different datasets and experimental configurations. The comparison in this paper will be carried out in terms of the geometrical characteristics of the point spread function (PSF), which is the image of a point scatterer as seen by the imaging system.

As a first step in the analytical comparison, the scattering configuration for this investigation, a simple two-dimensional circular mono-static array, is introduced in Section 2. While practical imaging scenarios are undoubtedly much more complex, the simplified model presented here is an ideal platform on which to derive the analytical results required to compare the relative performance of both algorithms. Section 3 evaluates the performance of BF for both non-windowed and windowed configurations, and as a function of frequency. Section 3 also considers the effect of multiple scatterers. The holographic technique is similarly discussed in Section 4. In Section 5 the effects of a realistic data discretization (i.e., a limited number of sensor locations) are taken into account for both methods. The paper concludes with Section 6, where the results of the study are summarised and discussed.

2. Scattering configuration

In this section we introduce the scattering configuration used to evaluate and compare the imaging algorithms, along with the pertinent equations describing the scattering phenomenon.

In order to focus on a comparative study, an idealized scattering scenario is adopted. More specifically, the scattering problem is considered for a two-dimensional scalar configuration. This means that invariance is assumed along the z-axis [see Fig. 1] and that the incident electromagnetic field is TM-polarized, being radiated by a filamentary electric current directed along the z-axis.

Fig. 1 Pictorial view of the scattering scene. Invariance is assumed along the z-axis.

The analysis is confined to the multi-monostatic case. Accordingly, the scattered field is collected only at the position of the transmitter as the latter moves in order to synthesize the measurement aperture. Alternatively, if a real array structure is available, the transmitting antennas are turned on one at a time and data are collected only by the same antenna.

Sensors are assumed to be located on a circle of radius R which encloses the scattering region. Hence, the positions of the sensors are identified by the angular coordinate θ. These will be denoted as θ_n, n = 1, …, N, with N being the number of adopted sensors, when we explicitly refer to the case of discrete positions.

Finally, the background medium is assumed to be homogeneous, lossless and a priori known. Of course this assumption is questionable in practical conditions. However, our approach in this paper is to employ a sufficiently simple model to allow us to derive broad but important analytical results regarding the advantages and disadvantages of each image formation method.

A linear or planar measurement set-up is normally suitable for some non-invasive diagnostic scenarios such as subsurface and through-wall imaging. In other cases, such as breast imaging, a much more convenient (and commonly used) set-up is a circular configuration where sensors are arranged (or moved) in a circle around the imaging region.

According to the previous assumptions, after linearizing the scattering by means of the Born approximation [25], the model equation in the frequency domain is given as:

\[ U_S(\omega,\theta) = \left(\frac{\omega}{2v}\right)^2 S(\omega)\int_D \left\{ H_0^{(2)}\!\left[\frac{\omega}{v}\sqrt{R^2+r^2-2Rr\cos(\theta-\phi)}\right]\right\}^2 \chi(\bar{r})\,d\bar{r} \qquad (1) \]
where $\bar{R}=(R,\theta)$, $\bar{r}=(r,\phi)$, $U_S(\omega,\theta)$ is the scattered field data, $S(\omega)$ is the temporal Fourier spectrum of the transmitted pulse, $v$ is the background propagation speed and $D$ is the spatial region under investigation. $\chi(\bar{r})$ is the object function, which describes the scatterers in terms of their shape and electromagnetic parameters. In particular, for an ensemble of scatterers that are small compared to the wavelength, $\chi(\bar{r})=\sum_p \chi_p\,\delta(\bar{r}-\bar{r}_p)$. Finally, $(1/4j)\,H_0^{(2)}(\cdot)$ is the two-dimensional scalar Green function for the pertinent Helmholtz equation, $H_0^{(2)}(\cdot)$ being the Hankel function of the second kind and zero order.

It should be noted that the contrast function χ(r¯) accounts for the relative (with respect to the background medium) variation of the electromagnetic parameters introduced by the scatterers. Hence, it is generally also frequency dependent. Such a dependence has been neglected in this analysis.

This section is completed by rearranging Eq. (1) according to the asymptotic expansion of the Hankel function (i.e., $H_0^{(2)}(x) \simeq \sqrt{2/(\pi x)}\,\exp[-j(x-\pi/4)]$), which will be useful later on:

\[ U_S(\omega,\theta) = \frac{j\omega}{2\pi v}\, S(\omega)\int_D \frac{\exp\!\left(-2j\frac{\omega}{v}\sqrt{R^2+r^2-2Rr\cos(\theta-\phi)}\right)}{\sqrt{R^2+r^2-2Rr\cos(\theta-\phi)}}\, \chi(\bar{r})\,d\bar{r} \qquad (2) \]
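For readers who wish to reproduce the analysis numerically, the monostatic model of Eq. (1) is simple to evaluate for an ensemble of point-like scatterers. The Python sketch below is a minimal illustration under the stated assumptions (|S(ω)| = 1, lossless homogeneous background); the parameter values are illustrative only and are not taken from the paper.

```python
import numpy as np
from scipy.special import hankel2

# Illustrative parameters (not from the paper): background speed v = c/3,
# measurement circle of radius R, and a few point scatterers with contrasts chi_p.
c = 3e8
v = c / 3.0
R = 0.07                                    # radius of the measurement circle [m]
freqs = np.linspace(1e9, 3e9, 51)           # frequency samples over the band [Hz]
thetas = np.linspace(0, 2*np.pi, 36, endpoint=False)   # sensor angles
scatterers = [((0.00, 0.00), 1.0),          # ((x, z), contrast chi_p)
              ((0.02, 0.01), 0.5)]

def scattered_field(freqs, thetas):
    """Monostatic scattered field U_S(omega, theta) for point scatterers,
    following Eq. (1): squared Green's function weighted by the contrast."""
    US = np.zeros((freqs.size, thetas.size), dtype=complex)
    for fi, f in enumerate(freqs):
        k = 2 * np.pi * f / v               # wavenumber omega / v
        for ti, th in enumerate(thetas):
            xs, zs = R * np.cos(th), R * np.sin(th)     # sensor position
            for (x, z), chi in scatterers:
                d = np.hypot(xs - x, zs - z)            # sensor-scatterer distance
                US[fi, ti] += (k / 2.0)**2 * hankel2(0, k * d)**2 * chi
    return US

US = scattered_field(freqs, thetas)
print(US.shape, np.abs(US).max())
```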

3. Beamforming

The starting point of BF methods is the time-domain version of Eq. (2). This is easily obtained by Fourier transforming that equation with respect to ω, which yields:

\[ u_S(t,\theta) = \int_D s\!\left[t-\tau(\theta,\bar{r})\right]\chi(\bar{r})\,d\bar{r} \qquad (3) \]
where $s(t)$ is related to the transmitted pulse, being the Fourier transform of $(j\omega/2\pi v)\,S(\omega)/[R^2+r^2-2Rr\cos(\theta-\phi)]^{1/2}$, and $\tau(\theta,\bar{r})=(2/v)\,[R^2+r^2-2Rr\cos(\theta-\phi)]^{1/2}$ is the round-trip delay.

It is clear that neglecting losses, in the background medium and scatterers, is equivalent to assuming that the scattering from a point-like scatterer does not change the shape of the transmitted pulse. However, dispersive effects could be significant, especially for wide-band systems. In such cases dispersion effects need to be accounted or compensated for. Within the BF framework the second option is by far the more usual. However, here, for the sake of simplicity, it is assumed that (3) strictly holds.

The image obtained by the DAS beamforming is given by:

\[ I_{BF}(\bar{r}) = \int W(t)\left[\int_0^{2\pi} u_S\!\left[t-T(\theta,\bar{r}),\theta\right]d\theta\right]^2 dt \qquad (4) \]
where $I_{BF}(\bar{r})$ is the image obtained after BF is completed, $W(t)$ is a suitable time window and $T(\theta,\bar{r})=\max_{\theta,\bar{r}}\{\tau(\theta,\bar{r})\}-\tau(\theta,\bar{r})$.
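As a rough illustration of Eq. (4), the delay-and-sum step can be sketched as follows. The snippet assumes the time-domain traces u_S(t, θ_n) are already available as a matrix sampled at rate fs; all names are placeholders, the delay alignment uses nearest-sample shifting rather than interpolation, and the code is an illustrative sketch under these assumptions rather than the authors' implementation.

```python
import numpy as np

def das_image(traces, fs, sensor_angles, R, v, grid_x, grid_z, window=None):
    """Delay-and-sum image following Eq. (4): shift each trace by the round-trip
    delay to the trial point, sum over sensors, square, and integrate over the
    (optional) time window W(t)."""
    n_t = traces.shape[1]
    if window is None:
        window = np.ones(n_t)              # no windowing, as in Section 3.1
    tau_max = 4.0 * R / v                  # simple upper bound used in place of max tau
    image = np.zeros((grid_x.size, grid_z.size))
    for ix, x in enumerate(grid_x):
        for iz, z in enumerate(grid_z):
            summed = np.zeros(n_t)
            for n, th in enumerate(sensor_angles):
                xs, zs = R * np.cos(th), R * np.sin(th)
                tau = 2.0 * np.hypot(xs - x, zs - z) / v        # round-trip delay
                shift = int(round((tau_max - tau) * fs))        # T(theta, r) in samples
                summed += np.roll(traces[n], shift)             # nearest-sample delay (wraps at edges)
            image[ix, iz] = np.sum(window * summed**2) / fs
    return image
```

Replacing `window` by a one-sample indicator centred on the alignment instant reproduces the limit case W(t) = δ(t) discussed in Section 3.2.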

3.1. No windowing

In order to analyze the achievable performance and for comparison purposes (with respect to the HI method to be presented in the next section), it is convenient to go back to the frequency domain and rewrite Eq. (4) in terms of Fourier transform with respect to time t. While doing this, the windowing due to W(t) is at first ignored. Thanks to Parseval’s equality, Eq. (4) can be rewritten as:

\[ I_{BF}^{NW}(\bar{r}) = \int_\Omega \left|\int_0^{2\pi} U_S(\omega,\theta)\,\exp[j\omega\tau(\theta,\bar{r})]\,d\theta\right|^2 d\omega \qquad (5) \]
where US(ω,θ) is given by Eq. (2) and the transmitted pulse is considered band-limited to Ω.

Now, substituting into Eq. (5) the expression of $U_S(\omega,\theta)$ given by Eq. (2) yields:

\[ I_{BF}^{NW}(\bar{r}) = \int_\Omega \left|\frac{\omega}{2\pi v}S(\omega)\right|^2 \left|\int_D \chi(\bar{r}')\int_0^{2\pi} \frac{\exp\{j\omega[\tau(\theta,\bar{r})-\tau(\theta,\bar{r}')]\}}{\sqrt{R^2+r'^2-2Rr'\cos(\theta-\phi')}}\,d\theta\,d\bar{r}'\right|^2 d\omega \qquad (6) \]

Eq. (6) represents the mathematical model of the imaging procedure. That is to say, the operator $\mathcal{BF}^{NW}$,

\[ \mathcal{BF}^{NW}:\chi \rightarrow \tilde{\chi} = I_{BF}^{NW} \qquad (7) \]
is the operator that “transforms” the scatterer into the image. As can be seen, this operator is nonlinear. Therefore, further considerations on using the point spread function (i.e., the reconstruction of a point-like scatterer) to assess the achievable performance need to be discussed. In this section we proceed to the study of the reconstruction of a single point-like scatterer. In a subsequent section, the effects of such a non-linearity for the case of two point-like scatterers will be discussed.

Let a single point-like scatterer be located at r¯p. It is shown in the appendix that its reconstruction, and hence the point spread function (PSF), is given by:

\[ PSF^{NW}(\bar{r},\bar{r}_p) \propto \left|\frac{v\left[\Phi(2k_{max}|\bar{r}-\bar{r}_p|)-\Phi(2k_{min}|\bar{r}-\bar{r}_p|)\right]}{2\pi R^2\,(2|\bar{r}-\bar{r}_p|)^3}\right| \qquad (8) \]
where $k=\omega/v$ is the wavenumber, $k_{min}$ and $k_{max}$ are the edges of the employed wavenumber band $K_\Omega$ and
\[ \Phi(\alpha)=\int_0^\alpha y^2 J_0^2(y)\,dy = \frac{1}{8}\left[\left.(2y^3+y)J_0^2(y)+2y^2 J_0(y)J_1(y)+2y^3 J_1^2(y)\right|_0^\alpha - \int_0^\alpha J_0^2(y)\,dy\right] \qquad (9) \]
with
\[ \int_0^\alpha J_0^2(y)\,dy = \left. y\left[J_0^2(y)+J_1^2(y)\right]\right|_0^\alpha + \int_0^\alpha J_1^2(y)\,dy \]

Note that, without loss of generality, $|S(\omega)| = 1$ has been assumed and that the transmitter/receiver sensors (i.e., Tx/Rx) are not close to the trial and scatterer positions $\bar{r}$ and $\bar{r}_p$.
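The closed form in Eq. (9), together with the identity below it, is easy to verify numerically. The following short check, written for this presentation rather than taken from the paper, compares Eq. (9), with its residual non-elementary ∫J0² term evaluated by quadrature, against direct quadrature of y²J0²(y).

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0, j1

def phi_direct(alpha):
    """Direct numerical evaluation of Phi(alpha) = int_0^alpha y^2 J0(y)^2 dy."""
    val, _ = quad(lambda y: y**2 * j0(y)**2, 0.0, alpha)
    return val

def phi_closed(alpha):
    """Closed form of Eq. (9); the residual integral of J0^2 is evaluated
    numerically since it has no elementary antiderivative."""
    bracket = ((2*alpha**3 + alpha) * j0(alpha)**2
               + 2*alpha**2 * j0(alpha) * j1(alpha)
               + 2*alpha**3 * j1(alpha)**2)      # evaluated term at y = alpha (it vanishes at 0)
    int_j0sq, _ = quad(lambda y: j0(y)**2, 0.0, alpha)
    return (bracket - int_j0sq) / 8.0

for a in (1.0, 5.0, 20.0):
    print(a, phi_direct(a), phi_closed(a))
```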

In order to check the accuracy of the closed-form PSF estimate, in Fig. 2(a) the reconstruction of a point-like scatterer obtained by implementing Eq. (6) is compared to the result returned by Eq. (8). It must be remarked that the integral term $\int J_1^2(y)\,dy$ is often ignored in the PSF computation, as it generally gives a negligible contribution [26]. However, in the following this term has been retained.

Fig. 2 (a) Comparison of the normalized point spread functions returned by Eqs. (6) (denoted as BFNW) and (8) (denoted as psfBFNW). Cut views along the x-axis are displayed for two scatterer locations, (0,0) (top panel) and (0,4λm) (middle panel), where λm is the wavelength in a medium with v = c/3 relative to the bottom frequency of the band [1,3] GHz. The bottom panel shows the difference between the two point spread functions. (b) Comparison of the point spread functions returned by Eqs. (13) (denoted as BFW) and (15) (denoted as psfBFW). Cut views along the x-axis are displayed for two scatterer locations, (0,0) (top panel) and (0,4λm) (bottom panel), where λm is the wavelength in a medium with v = c/3 relative to the bottom frequency of the band [1,3] GHz.

As can be seen, the two PSFs agree very well as far as the main beam is concerned. Instead, as can be expected from the arguments reported in the appendix, differences arise in the side-lobe structure. This is even more evident from the bottom panel of the same figure. However, these differences do not depend on the point-like scatterer's position.

3.2. Time-windowing

The windowing typically used in DAS beamformers has been neglected up to now. Therefore, it is important to study how the previous results change in relation to W(t). In order to better highlight the differences with respect to the previous case, the limit case of W(t) = δ(t) is considered. This choice also makes the derivation simpler.

According to Eq. (4), the scattered pulses are aligned at the time instant $T_W=\max_{\theta,\bar{r}}\{\tau(\theta,\bar{r})\}$. Hence, $W(t) = \delta(t-T_W)$ and (4) becomes:

\[ I_{BF}^{W}(\bar{r}) = \left|\int_0^{2\pi} u_S\!\left[T_W-T(\theta,\bar{r}),\theta\right]d\theta\right|^2 \qquad (10) \]

Now, (10) can be conveniently recast as:

\[ I_{BF}^{W}(\bar{r}) = \left|\int_0^{2\pi}\!\!\int u_S(t,\theta)\,\delta[t-\tau(\theta,\bar{r})]\,dt\,d\theta\right|^2 \qquad (11) \]

This equation is useful since it highlights the equivalence between the BF and the so-called diffraction summation (i.e., pixel driven approach), which is a focusing technique routinely applied in seismic exploration [16]. Moreover, it allows us to perform an analytical evaluation of the PSF in the same way as in the previous case. To this end, Eq. (11) in the temporal Fourier domain reads as:

\[ I_{BF}^{W}(\bar{r}) = \left|\int_\Omega\int_0^{2\pi} U_S(\omega,\theta)\,\exp[j\omega\tau(\theta,\bar{r})]\,d\theta\,d\omega\right|^2 \qquad (12) \]

Using the expression of US(ω,θ) returned by (2) yields

\[ I_{BF}^{W}(\bar{r}) = \left|\int_D \chi(\bar{r}')\int_\Omega \frac{j\omega}{2\pi v}\int_0^{2\pi} \frac{\exp\{-2j(\omega/v)\,[\,|\bar{R}-\bar{r}'|-|\bar{R}-\bar{r}|\,]\}}{|\bar{R}-\bar{r}'|}\,d\theta\,d\omega\,d\bar{r}'\right|^2 \qquad (13) \]

Eq. (13) represents the mathematical model of the imaging procedure, that is:

\[ \mathcal{BF}^{W}:\chi \rightarrow \tilde{\chi} = I_{BF}^{W} \qquad (14) \]

As in the previous case, the imaging operator is non-linear. However, we also proceed to the study of the PSF.

Therefore, consider a single point-like scatterer located at $\bar{r}_p$; its reconstruction is then given by (see the Appendix):

\[ PSF^{W}(\bar{r},\bar{r}_p) \propto \left\{\frac{v\left[\Psi(2k_{max}|\bar{r}-\bar{r}_p|)-\Psi(2k_{min}|\bar{r}-\bar{r}_p|)\right]}{R\,(2|\bar{r}-\bar{r}_p|)^2}\right\}^2 \qquad (15) \]
with
\[ \Psi(\alpha)=\int_0^\alpha y\,J_0(y)\,dy = \left. y\,J_1(y)\right|_0^\alpha \qquad (16) \]

We conclude this section by numerically verifying the quality of the analytical PSF estimate [see Fig. 2(b)]. As can be seen, the agreement here is even better than in the previous case. In particular, it is noted that the estimate in Eq. (15) matches the actual PSF very well also in the side-lobe region. Moreover, it can also be seen that the side lobes have a less pronounced structure here. Both these circumstances are clearly consequences of the introduced time-windowing.
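For numerical experimentation, Eq. (12) lends itself to a compact frequency-domain implementation of the windowed beamformer. The sketch below (illustrative grid; not the authors' code) assumes the scattered data U_S(ω, θ_n) are available as a matrix, for instance from the forward-model sketch in Section 2, and forms the image by summing the phase-compensated data over sensors and frequencies before taking the squared modulus.

```python
import numpy as np

def windowed_bf_image(US, freqs, thetas, R, v, grid_x, grid_z):
    """Windowed beamforming in the frequency domain, following Eq. (12):
    I(r) = | sum_omega sum_theta U_S(omega, theta) exp(j omega tau(theta, r)) |^2."""
    omega = 2 * np.pi * freqs
    xs, zs = R * np.cos(thetas), R * np.sin(thetas)     # sensor coordinates
    image = np.zeros((grid_x.size, grid_z.size))
    for ix, x in enumerate(grid_x):
        for iz, z in enumerate(grid_z):
            tau = 2.0 * np.hypot(xs - x, zs - z) / v    # round-trip delays, one per sensor
            phase = np.exp(1j * np.outer(omega, tau))   # shape (n_freq, n_sensors)
            image[ix, iz] = np.abs(np.sum(US * phase))**2
    return image
```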

3.3. Effect of frequency

It is often stated that in order to achieve a finer resolution, a wider frequency band is required. This is certainly true in the case of aspect-limited configurations where the scattered field cannot be collected all around the scattering scene, as in SAR or GPR applications [27]. However, for the case at hand, this statement no longer holds. This can be easily understood by looking at the closed-form expression of the PSF derived above. For the sake of simplicity, Figures 3(a) and 3(b) show the PSF in order to support the discussion graphically. More specifically, Fig. 3(a) reports the PSF as the frequency band increases. As expected, the larger the frequency band, the narrower the PSF. Instead, in Fig. 3(b) the frequency bandwidth is kept constant at 2 GHz but its allocation is varied. It is seen that, even in this case, the main beam of the PSF narrows, and hence the resolution improves, when the same frequency band is moved towards higher frequencies. This result is very interesting, as it allows us to conclude that wide frequency bands are not necessary to obtain a high resolution. However, the price to pay is the occurrence of a small ripple due to an increase of the side lobes. This can actually be expected by glancing at the spatial Fourier transform of the PSF reported in Eq. (44) (note that Eq. (44) actually refers to the square root of the PSF; this, however, does not change the discussion). Indeed, the Fourier transform of the PSF presents a hole at low spatial frequencies. Having fixed the frequency band (i.e., kmax − kmin), this hole widens as the adopted frequencies move towards higher values. Hence, the side lobes increase as well. This is somewhat similar to what happens for a spectrum of the form Π[(ω − ωav)/B] + Π[(ω + ωav)/B], with ω being the frequency, B the bandwidth and ωav the translation. Of course, for the case at hand, a quantitative estimation of these side lobes can be obtained thanks to our analytical estimation of the PSF. Figure 4(a) shows the level of the first side lobe (i.e., the maximum one), normalized to the maximum of the PSF, as the frequency band or the frequency allocation vary. As expected, and in accordance with Figures 3, when the bandwidth grows the side lobes reduce; the opposite is true when the bandwidth is fixed and the central frequency increases. In particular, it is seen that in both cases the side lobe level tends to saturate to a constant value. This phenomenon can be easily explained, and the corresponding side lobe level estimated, by returning to Eq. (42). From this equation, the following PSF approximations, $PSF^{W}(\bar{r},\bar{r}_p)_{app}$, can be deduced.
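Because the closed form of Eq. (15) is available, the trends of Figs. 3 and 4 can be explored cheaply in a few lines. The sketch below (illustrative speed and aperture radius, Ψ taken from Eq. (16)) evaluates the estimated PSF^W along a radial cut for several allocations of a fixed 2 GHz band; it is a convenience script written for this discussion, not code from the paper.

```python
import numpy as np
from scipy.special import j1

def psi(alpha):
    """Psi(alpha) = int_0^alpha y J0(y) dy = alpha J1(alpha), Eq. (16)."""
    return alpha * j1(alpha)

def psf_w(d, fmin, fmax, v, R):
    """Closed-form windowed-BF PSF of Eq. (15) versus distance d = |r - r_p|."""
    kmin, kmax = 2*np.pi*fmin/v, 2*np.pi*fmax/v
    d = np.where(d == 0, 1e-12, d)          # avoid division by zero at the peak
    return (v * (psi(2*kmax*d) - psi(2*kmin*d)) / (R * (2*d)**2))**2

v = 3e8 / 3.0                               # background speed c/3 (illustrative)
R = 0.07                                    # measurement radius [m] (illustrative)
d = np.linspace(0, 0.03, 600)               # radial distance from the scatterer [m]
for band in [(1e9, 3e9), (3e9, 5e9), (5e9, 7e9), (7e9, 9e9)]:
    p = psf_w(d, band[0], band[1], v, R)
    # distance at which the estimated PSF first falls to half its peak value
    print(band, "first half-power crossing at", d[np.argmax(p < 0.5 * p.max())], "m")
```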

Fig. 3 (a) Cut view along the x-axis of the reconstruction of a point object obtained by the windowed beamforming procedure, for the same parameters as in Fig. 2(b) and different frequency bandwidths: [1,3] GHz (top panel); [1,5] GHz (middle panel); [1,10] GHz (bottom panel). (b) Cut view along the x-axis of the reconstruction of a point object obtained by the windowed beamforming procedure, for the same parameters as in Fig. 2(b) and different frequency allocations: [1,3] GHz, [3,5] GHz, [5,7] GHz and [7,9] GHz (from top to bottom).

Fig. 4 (a) Level of the first side lobe normalized to the maximum of the PSF as the frequency band (top panel) and the central frequency (bottom panel) change. (b) Comparing the cut views along the x-axis of the reconstructions returned by Eqs. (13), (15) and (17) for [1,10] GHz (top panel) and Eqs. (13), (15) and (18) for [8,10] GHz (bottom panel).

When the frequency band is so large that $k_{max} \gg k_{min}$, the spectral integral $\int_{2k_{min}}^{2k_{max}}(\cdot)\,dk$ can be approximated by $\int_{0}^{2k_{max}}(\cdot)\,dk$; hence:

\[ PSF^{W}(\bar{r},\bar{r}_p)_{app} = \left\{\frac{v\,\Psi(2k_{max}|\bar{r}-\bar{r}_p|)}{R\,(2|\bar{r}-\bar{r}_p|)^2}\right\}^2 \qquad (17) \]

Instead, when $k_{max}-k_{min}$ is kept fixed while $k_{av}=(k_{max}+k_{min})/2$ increases, so that $k_{max}-k_{min}\ll k_{av}$, the integration in k can be approximated by the value of the integrand at $k_{av}$. Therefore, from Eq. (42) it follows that:

\[ PSF^{W}(\bar{r},\bar{r}_p)_{app} = \left[(v/R)\,k_{av}\,J_0(2k_{av}|\bar{r}-\bar{r}_p|)\right]^2 \qquad (18) \]

Therefore, the side lobe levels tend to those returned by Eqs. (17) and (18). Figure 4(b) shows the excellent agreement between Eqs. (17) and (18) and Eqs. (13) and (15).

Similar conclusions hold for the non-windowed beamforming procedure and are hence omitted for brevity.

3.4. The two-scatterer case

As mentioned above, the BF operators $I_{BF}^{i}$, with i ∈ {NW, W}, are non-linear. Accordingly, when the scattering scene encompasses more than one scatterer, the corresponding reconstruction is not merely a superposition of the associated PSFs (one for each scatterer). This clearly calls into question the use of the PSF as a method to foresee the achievable performance. In this section we address this question by considering the case of two point-like scatterers, which is the simplest relevant scenario among the multiple-target cases.

Let us assume two point scatterers are located at $\bar{r}_1$ and $\bar{r}_2$, so that $\chi(\bar{r})=\chi_1\delta(\bar{r}-\bar{r}_1)+\chi_2\delta(\bar{r}-\bar{r}_2)$. After the beamforming procedure the reconstruction is expressed as:

\[ I_{BF}^{i}(\bar{r}) = I_{BF}^{i}(\bar{r},\bar{r}_1,\chi_1) + I_{BF}^{i}(\bar{r},\bar{r}_2,\chi_2) + M_{BF}^{i}(\bar{r},\bar{r}_1,\bar{r}_2,\chi_1,\chi_2) \qquad (19) \]
where the last term on the right-hand side accounts for the “interaction” between the reconstructions of the two scatterers. If $|\bar{r}_1-\bar{r}_2|$ is sufficiently large, then clearly
\[ I_{BF}^{i}(\bar{r}) \simeq I_{BF}^{i}(\bar{r},\bar{r}_1,\chi_1) + I_{BF}^{i}(\bar{r},\bar{r}_2,\chi_2) \qquad (20) \]
and the PSF works in predicting the achievable performance. As the scatterers get closer, it is expected that the previous conclusion no longer holds.

In order to check these arguments, we consider the reconstruction of two point scatterers (of equal scattering amplitude) while varying the distance between them. The corresponding results are reported in Figs. 5(a) and 5(b). For comparison purposes, the “linear” superpositions returned by Eq. (20) are shown for both the exact and the estimated PSFs. As anticipated, (19) and (20) are practically coincident (top and middle panels) as long as the scattering objects are sufficiently separated: this distance turns out to be as large as the PSF main beam width. Note that the PSF main beam width is basically the same for the non-windowed and windowed cases. Of course, this distance changes with the parameters of the configuration, but it can always be estimated since the closed-form expression of the PSF is available. For example, for the case considered here, this distance is roughly 0.18λm, λm being the wavelength in a medium with v = c/3 and relative to the bottom frequency of the band [1,3] GHz. At shorter distances, the mutual term $M_{BF}^{i}$ starts to be relevant and a faster degradation of resolution occurs. Hence, it can be concluded that if the PSF main beam width is used to define the resolution, the mutual term does not play a relevant role and the achievable performance can be safely foreseen through the estimated PSFs.
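The role of the mutual term in Eq. (19) can also be illustrated directly from the windowed-BF expressions: for point-like scatterers the quantity inside the modulus of Eq. (42) is linear in the contrasts, so squaring it yields the two single-scatterer images plus a cross term. The sketch below (illustrative parameters; scalar amplitude factors dropped) compares the full image of Eq. (19) with the linear superposition of Eq. (20) along a cut through the two scatterers, for the separations used in Fig. 5.

```python
import numpy as np
from scipy.special import j0

v = 3e8 / 3.0
fmin, fmax = 1e9, 3e9
kband = np.linspace(2*np.pi*fmin/v, 2*np.pi*fmax/v, 200)   # wavenumber band K_Omega

def focused_field(dist):
    """Band-integrated kernel int_{K_Omega} k J0(2 k d) dk appearing in Eq. (42)."""
    return np.trapz(kband * j0(2*np.outer(dist, kband)), kband, axis=1)

lam = v / fmin                               # wavelength at the bottom of the band
x = np.linspace(-0.5*lam, 0.8*lam, 400)      # cut through the two scatterers
for sep in (0.3*lam, 0.18*lam, 0.1*lam):     # separations as in Fig. 5
    f1 = focused_field(np.abs(x - 0.0))      # scatterer 1, chi_1 = 1
    f2 = focused_field(np.abs(x - sep))      # scatterer 2, chi_2 = 1
    full = (f1 + f2)**2                      # Eq. (19): includes the mutual term
    linear = f1**2 + f2**2                   # Eq. (20): superposition of single-scatterer images
    print(sep/lam, "relative weight of the mutual term:",
          np.max(np.abs(full - linear)) / np.max(full))
```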

Fig. 5 (a) Cut view along the x-axis of the reconstruction of two scattering point objects obtained by the non-windowed beamforming procedure for the same parameters as in Fig. 2(a). The green and the dashed blue lines refer to the outcomes of Eqs. (19) and (20), respectively. Red lines instead report Eq. (20) implemented by using the estimated point spread function returned by Eq. (8). (b) Cut view along the x-axis of the reconstruction of two scattering point objects obtained by the windowed beamforming procedure for the same parameters as in Fig. 2(b). The green and the dashed blue lines refer to the outcomes of Eqs. (19) and (20), respectively. Red lines instead report Eq. (20) implemented by using the estimated point spread function returned by Eq. (15). (c) Cut view along the x-axis of the reconstruction of two scattering point objects for the same parameters as in Fig. 2(b). The green lines refer to holography as given by Eq. (28) and the blue lines to the windowed beamforming returned by Eq. (13). For all three procedures the objects have been located at $\bar{r}_1=(0,0)$ and $\bar{r}_2=(0.3,0)\lambda_m$ (top panel); $\bar{r}_1=(0,0)$ and $\bar{r}_2=(0.18,0)\lambda_m$ (middle panel); $\bar{r}_1=(0,0)$ and $\bar{r}_2=(0.1,0)\lambda_m$ (bottom panel).

4. Holography

In this section, we describe the steps necessary to arrive at the PSF estimation of the HI performance. Typically, HI methods require the scattered field data to be collected over a planar (rectilinear for 2D cases) measurement aperture. This is because Cartesian spatial coordinates naturally match the spatial Fourier transform. To deal with the 2-D circular configuration, HI methods need to be adapted; this has been done previously, for example in [28]. For convenience, Eq. (2) is reported here and rearranged as:

\[ U_S(\omega,\theta) = \int_D K(\omega,R,\theta,r,\phi)\,\chi(\bar{r})\,d\bar{r} \qquad (21) \]
where the kernel function is given by:
\[ K(\omega,R,\theta,r,\phi) = \frac{j\omega}{2\pi v}\,\frac{\exp\!\left(-2j\frac{\omega}{v}\sqrt{R^2+r^2-2Rr\cos(\theta-\phi)}\right)}{\sqrt{R^2+r^2-2Rr\cos(\theta-\phi)}} \]

The first step is to Fourier transform with respect to θ, giving the following:

\[ U_S(\omega,\varepsilon) = \int_D FT_\theta\!\left[K(\omega,R,\theta,r,\phi)\right]\chi(\bar{r})\,d\bar{r} \qquad (22) \]
where $FT_\theta(\cdot)$ denotes the above-mentioned Fourier transform and ε is the spectral variable corresponding to θ. This transformation can be established analytically by using the method of stationary phase [29]. This step allows us to highlight the presence of a phase term which depends only on the observation variables and not on the scattering inclusions. Therefore, to avoid phase wrapping, this term is compensated for as follows:
\[ \tilde{U}_S(\omega,\varepsilon) = \int_D \exp[j\psi(\omega,\varepsilon,R)]\,FT_\theta\!\left[K(\omega,R,\theta,r,\phi)\right]\chi(\bar{r})\,d\bar{r} \qquad (23) \]

Then by Fourier transforming back to the θ domain, we obtain the following (still by employing the stationary phase method):

\[ \tilde{U}_S(\omega,\theta) = \int_D \exp[-2jkr\cos(\theta-\phi)]\,\chi(\bar{r})\,d\bar{r} \qquad (24) \]
where an amplitude factor has been ignored. Finally, Eq. (24) can be rewritten as
\[ \tilde{U}_S(k_x,k_z) = \int_D \exp[-j(k_x x + k_z z)]\,\chi(x,z)\,dx\,dz \qquad (25) \]
with $k_x = 2k\cos\theta$ and $k_z = 2k\sin\theta$. Accordingly, reconstructions can be obtained by a simple 2D Fourier transform. Of course, in order to proceed this way, the data should be properly rearranged from the ω–θ domain to the $k_x$–$k_z$ space. This requires suitable interpolation and re-sampling procedures. Here, this stage is omitted (see [19] or, more recently, [28] for details). Let $\Omega_{k_x,k_z}$ be the data spectral support. The HI reconstruction process
\[ \mathcal{H}ol:\chi \rightarrow \tilde{\chi} \qquad (26) \]
writes as
\[ \tilde{\chi}(x,z) = \int_D \int_{\Omega_{k_x,k_z}} \exp\{j[k_x(x-x') + k_z(z-z')]\}\,dk_x\,dk_z\,\chi(x',z')\,dx'\,dz' \qquad (27) \]

It is noted that holographic imaging is a linear procedure. Moreover, the PSF (see Appendix) is given by:

\[ PSF_{Hol}(\bar{r},\bar{r}_p) = 2\pi\,\frac{\Psi(2k_{max}|\bar{r}-\bar{r}_p|) - \Psi(2k_{min}|\bar{r}-\bar{r}_p|)}{|\bar{r}-\bar{r}_p|^2} \qquad (28) \]

Therefore, no interaction term like $M_{BF}^{i}(\bar{r},\bar{r}_1,\bar{r}_2,\chi_1,\chi_2)$ is present in the case of two scatterers. As the latter term was responsible for the loss of resolution for closely located scatterers in beamforming, one could argue that holography should achieve a better resolution. As shown in Fig. 5(c), this is not the case. In addition, the holography PSF exhibits a higher side-lobe level, even when the scatterers are resolved. These points can be explained by noting that holography returns a PSF which coincides with the square root of the PSF of the windowed beamforming case (apart from some scalar terms that were disregarded in the derivation). This entails that, even though the zeros of the PSF remain unchanged (with respect to the BF), both the main beam and the side-lobe level do indeed increase.
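Equation (43) in the Appendix states that the holographic PSF is the two-dimensional Fourier transform of the indicator function of the annular spectral support; this is easy to visualize numerically. The sketch below (illustrative band and grid, written for this presentation) builds the annulus 2k_min ≤ β ≤ 2k_max in k-space and obtains the PSF with an inverse FFT, which can then be compared with the closed form of Eq. (28).

```python
import numpy as np

v = 3e8 / 3.0
fmin, fmax = 1e9, 3e9
kmin, kmax = 2*np.pi*fmin/v, 2*np.pi*fmax/v

# k-space grid large enough to contain the annulus 2*kmin <= beta <= 2*kmax.
n = 512
k_axis = np.linspace(-4*kmax, 4*kmax, n, endpoint=False)
KX, KZ = np.meshgrid(k_axis, k_axis, indexing="ij")
beta = np.hypot(KX, KZ)
annulus = ((beta >= 2*kmin) & (beta <= 2*kmax)).astype(float)

# PSF as the (inverse) 2D Fourier transform of the annular indicator, Eq. (43).
psf = np.abs(np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(annulus))))
psf /= psf.max()
centre = n // 2
print("normalized radial cut near the peak:", psf[centre, centre:centre + 8])
```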

5. Discrete data points

In the previous sections we considered an ideal situation where data were collected continuously all around the scattering scene. Analytical estimations of the achievable resolution were derived for both the beamforming and the holographic methods. This is in itself a remarkable result, as it appears for the first time in the literature, to the best of our knowledge.

In practical situations the number of data samples is finite. Therefore, it is appropriate to investigate the minimum number of sensors (if any) that should be deployed around the scattering scene in order to obtain the same results as the ideal case.

To this end, we now particularize the previous analysis to the case of N measurement points. Looking at Eqs. (37) and (41), it is evident that the number of spatial positions at which data are collected enters the evaluation of the integral along θ (which is the same for both of the considered beamforming schemes). Therefore, it is expected that, as the number of measurements is reduced, this term will be responsible for the occurrence of spurious artefacts, in the same way as aliasing appears when undersampled functions are Fourier transformed. Accordingly, determining the minimum number of spatial measurements required for the performance to remain stable (as compared to the ideal configuration) is equivalent to studying that integral term. In particular, the question we focus on is the following: to determine a condition (if any) that ensures that, even for a finite number of spatial measurement points, the outcome of the θ integration does not introduce spurious peaks inside the investigation domain.

To this end, we rewrite the θ integration as:

\[ \int_0^{2\pi} H_0^{(2)}(2k|\bar{R}-\bar{r}|)\,H_0^{(1)}(2k|\bar{R}-\bar{r}_p|)\,d\theta \qquad (29) \]

The discrete counterpart, occurring when data are finite, is then:

\[ \sum_{n=0}^{N-1} H_0^{(2)}(2k|\bar{R}_n-\bar{r}|)\,H_0^{(1)}(2k|\bar{R}_n-\bar{r}_p|) \qquad (30) \]
where the n-th measurement point has coordinates $(R,\theta_n)$. By employing the addition theorem for Hankel functions, (30) can be recast as:
\[ \sum_{h,l} H_h^{(2)}(2kR)\,H_{h-lN}^{(1)}(2kR)\,J_h(2kr)\,J_{h-lN}(2kr_p)\,\exp[jh(\theta_p-\theta)]\,\exp[-jlN\theta_p] \qquad (31) \]
where, without loss of generality, N has been assumed to be an odd number. Now, by reasoning as in the previous sections and properly applying the addition theorem for Bessel functions, Eq. (31) can be approximated as:
\[ |H_0^{(2)}(2kR)|^2 \sum_l j^{lN}\,J_{lN}(2k|\bar{r}-\bar{r}_p|)\,\exp[jlN\arg(\bar{r}-\bar{r}_p)] \qquad (32) \]

Note that the term jlN takes into account the phase differences between Hh(2)() and HhlN(1)() in Eq. (31) and the term arg(r¯r¯p) instead means the phase of the difference vector r¯r¯p. Therefore, it finally follows that:

\[ |H_0^{(2)}(2kR)|^2\left\{J_0(2k|\bar{r}-\bar{r}_p|) + \sum_{l\neq 0} j^{lN}\,J_{lN}(2k|\bar{r}-\bar{r}_p|)\,\exp[jlN\arg(\bar{r}-\bar{r}_p)]\right\} \qquad (33) \]

Eq. (33) allows us to point out the effect of sampling the data. Indeed, sampling leads to an additional term (with respect to the continuous case): the series on the r.h.s. of (33). This term is clearly responsible for possible artefacts. Therefore, in order to avoid artefacts, N should be chosen so as to make this term negligible. By recalling the asymptotic expansion of Bessel functions for orders much greater than their arguments, it readily follows that, for the spurious term to be negligible (at each point inside the detection scene), it should be

\[ N \geq 4kR_b \qquad (34) \]
with $R_b < R$ being the radius of the circular investigation domain. Finally, to ensure that sampling does not produce significant effects at any frequency, a conservative constraint is
\[ N \geq 4k_{max}R_b \qquad (35) \]
with $k_{max}$ corresponding to the highest adopted frequency.

As for the HI method, a quick analysis of Eq. (27) reveals that the above condition also guarantees a similar resilience to the sampling effect. However, it must be remarked that holography reconstructions are not obtained by directly implementing Eq. (27), but rather by FFT procedures. Therefore, the sampling should ensure that aliasing does not appear while performing the FFT. In this regard the Nyquist criterion must be fulfilled. By following this reasoning on the “instantaneous frequency” of the kernel function involved in the FFT procedures, it has been shown that [30]

\[ N \geq 2\pi\,\frac{4k_{max}R\,R_b}{\sqrt{R^2+R_b^2}} \qquad (36) \]

Even though Eqs. (35) and (36) look very similar, the estimate we established dictates that nearly three times fewer samples are required to avoid aliasing with beamforming than with holography under the same conditions.

In order to validate the presented theory, we show in the left panel of Fig. 6 the normalized reconstruction of a point scatterer for a varying number of measurements. A single frequency f = 3 GHz has been considered along with a medium with v = c/3. The investigation domain D is assumed to be a circle of radius 1.8λm, whereas measurements are taken over a concentric circle of radius 2.1λm, with λm being the wavelength in the medium at 3 GHz. The scatterer is located at the centre of D. For the case at hand, Eq. (35) suggests N > 45 to avoid artefacts. Indeed, this is the case, as shown in Fig. 6(a). When the number of points is halved (N = 22) or further reduced, spurious artefacts actually corrupt the reconstructions [see Figs. 6(b), 6(c), and 6(d)]. These results are perfectly consistent with the theory. What is more, through Eq. (33) it is possible to foresee at which distance (from the scatterer) such artefacts begin to corrupt the image. This is detailed below.

Fig. 6 Left panel: angular summation in Eq. (30) for different numbers of measurement points. (a) N = 45. (b) N = 22. (c) N = 17. (d) N = 8. According to the theory, spurious artefacts start at $|\bar{r}-\bar{r}_p| = N/2k$. Middle panel: no-windowed beamforming for [1,3] GHz. (e) N = 22. (f) N = 17. (g) N = 8. Right panel: windowed beamforming for [1,3] GHz. (h) N = 22. (i) N = 17. (l) N = 8.

First, it is obvious from Eq. (33) that artefacts start when $J_N(2k|\bar{r}-\bar{r}_p|)$ becomes significant. Second, it is known that $J_n(x) \simeq 0$ for x < n [31]. It then follows that artefacts can appear only for $|\bar{r}-\bar{r}_p| > N/2k$. If N is chosen as in (35), no point in D can satisfy the latter condition and thus no artefacts occur. On the contrary, when the number of measurements is reduced, $J_N(2k|\bar{r}-\bar{r}_p|)$ becomes relevant. For example, when the measurements are halved, artefacts appear for $|\bar{r}-\bar{r}_p| > N/4k$ (for the case at hand, at about 1.74λm from the scatterer). By further reducing the measurements as in Figs. 6(c) and 6(d), the same reasoning says that the aliased region should start closer to the scatterer, at 1.35λm and 0.63λm, respectively. This is very well verified by the numerical examples.

At this stage we have shown that, as long as criterion (35) is fulfilled, discrete data give rise to reconstructions in the same way as if no sampling occurred. On the other hand, artefacts appear at a distance (from the scatterer) that can be estimated from the properties of the Bessel functions. However, the angular integration we have focused on so far is only part of the imaging procedure. Indeed, beamforming also requires a frequency integration. Therefore, by summing up contributions at different frequencies, one can expect to get “satisfactory” reconstructions even though the number of spatial measurements is below that dictated by (35). This can be understood as follows. Assume that (35) is not satisfied. Then single-frequency reconstructions (or parts of them) will in general be crowded with artefacts. Now, the point is that the spurious artefacts are frequency dependent. In fact, the series term in (33) is a function of frequency. Therefore, it generally happens that artefacts appearing at a given frequency are located at a different place when the frequency is changed. On the contrary, the main beam of the reconstruction (i.e., the first term in Eq. (33)) always peaks at the actual scatterer's location. Therefore, while summing over frequencies, the artefacts tend to be averaged out whereas the main beam is not. The result is a clearer (fewer artefacts) reconstruction. This expectation is checked in the middle and right panels of Fig. 6, where the windowed [see Figs. 6(e), 6(f), and 6(g)] and non-windowed [see Figs. 6(h), 6(i), and 6(l)] beamforming schemes are shown for the same case as in the left panel of Fig. 6 but for the frequency band [1,3] GHz. As can be seen, the frequency band greatly helps in reducing artefacts. Moreover, the windowed case is far better than the non-windowed beamforming. This is consistent with the previous sections, where it has been shown that the no-window case has a larger side-lobe structure.
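To close the section, the numbers quoted above are easy to reproduce. The following sketch (written for this presentation, not taken from the paper) evaluates the sampling criterion of Eq. (35) for the Section 5 geometry, probes the discrete angular summation of Eq. (30) along a radial cut for several values of N, and finally sums the same quantity coherently over the [1,3] GHz band for an undersampled array to illustrate how the frequency integration tends to average the artefacts out.

```python
import numpy as np
from scipy.special import hankel1, hankel2

# Geometry of the Section 5 example: v = c/3, f up to 3 GHz,
# investigation radius R_b = 1.8*lambda_m, measurement radius R = 2.1*lambda_m.
v = 3e8 / 3.0
f_max = 3e9
lam = v / f_max
k_max = 2 * np.pi / lam
R, Rb = 2.1 * lam, 1.8 * lam
rp = np.array([0.0, 0.0])                     # scatterer at the centre of D

print("criterion (35): N >=", 4 * k_max * Rb)  # ~45, as stated in the text

def angular_sum(r, k, N):
    """Discrete angular summation of Eq. (30) for N sensors on the circle."""
    th = 2 * np.pi * np.arange(N) / N
    Rn = R * np.stack([np.cos(th), np.sin(th)], axis=1)
    d1 = np.linalg.norm(Rn - r, axis=1)
    d2 = np.linalg.norm(Rn - rp, axis=1)
    return np.sum(hankel2(0, 2 * k * d1) * hankel1(0, 2 * k * d2))

radii = np.linspace(0.0, Rb, 180)             # radial cut away from the scatterer

# Single-frequency behaviour: artefacts predicted beyond |r - rp| = N / (2k).
for N in (45, 22, 17, 8):
    print(N, "sensors, predicted artefact onset at", N / (2 * k_max) / lam, "lambda_m")

# Band summation over [1,3] GHz for an undersampled array: the artefacts, which
# move with frequency, partially average out, whereas the peak at rp does not.
N = 17
freqs = np.linspace(1e9, 3e9, 41)
single = np.abs([angular_sum(np.array([x, 0.0]), k_max, N) for x in radii])
band = np.abs([sum(angular_sum(np.array([x, 0.0]), 2 * np.pi * f / v, N) for f in freqs)
               for x in radii])
print("artefact-to-peak ratio, single frequency:", single[90:].max() / single[0])
print("artefact-to-peak ratio, band-summed     :", band[90:].max() / band[0])
```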

6. Conclusions

This paper has investigated the achievable performance of BF and HI in a multi-monostatic configuration for a two-dimensional homogeneous detection domain. The analysis has considered BF in both its windowed and non-windowed versions. A simplified homogeneous scenario has been considered, which allowed an analytical estimation of the pertinent PSFs. Hence, this choice enabled a rigorous comparison of the algorithms in terms of the achievable resolution and the role of the operating frequency range, the number of scatterers and the data discretization. In particular, we have shown that, as long as criterion (35) is fulfilled, discrete data produce reconstructions in the same way as if no sampling occurred. On the other hand, artefacts appear at a distance (from the scatterer) that can be estimated from the properties of the Bessel functions. This analysis has also demonstrated that nearly three times fewer samples are required to avoid aliasing using BF compared to HI under the same conditions. Finally, the analysis has shown that the frequency bandwidth can greatly help in reducing artefacts and that, as expected, BF in its windowed version is superior to the non-windowed case.

Finally, it is worth remarking that the results presented herein hold true as long as evanescent waves are negligible. This stems from the adopted asymptotic Hankel function approximation and also from the truncated Fourier series used while evaluating the Hankel function addition theorem. Near-field configurations, where evanescent waves are relevant, allow a better resolution to be obtained [23]. This, however, requires deploying the sensors very close to the scattering scene and is not feasible for the case at hand. Indeed, here the scattering domain is supported over a 2D region (not a curve) and the sensors are deployed all around it. Therefore, a very-near-zone configuration would entail a region to be imaged much smaller than the wavelength. This of course is not feasible in the framework of breast imaging, which is our main concern.

The effects of noise, losses, dispersion and dielectric-property mismatch between the investigated domain and its background on the performance of these imaging procedures are also important aspects to consider when assessing their performance. Although these further considerations can also be carried out in analytical terms, they will be discussed in future contributions.

Appendix

In this appendix we provide the detailed derivation of the point spread functions reported in Sections 3 and 4. To this end, we consider a single point-like scatterer located at $\bar{r}_p$ and assume $|S(\omega)| = 1$.

No-windowed beamforming

Eq. (6) can be rewritten as

\[ I_{BF}^{NW}(\bar{r}) = \frac{v|\chi_p|^2}{4}\int_{K_\Omega} k^4 \left|\int_0^{2\pi} \sqrt{\frac{|\bar{R}-\bar{r}|}{|\bar{R}-\bar{r}_p|}}\times\left[\sqrt{\frac{2j}{2k\pi}}\,\frac{\exp(-2jk|\bar{R}-\bar{r}|)}{\sqrt{|\bar{R}-\bar{r}|}}\right]\times\left[\sqrt{\frac{-2j}{2k\pi}}\,\frac{\exp(2jk|\bar{R}-\bar{r}_p|)}{\sqrt{|\bar{R}-\bar{r}_p|}}\right]d\theta\right|^2 dk \qquad (37) \]
where $k=\omega/v$ is the wavenumber and $K_\Omega$ is the band in the wavenumber domain. The terms within the square brackets can easily be recognized as the Hankel function $H_0^{(2)}(\cdot)$ and its conjugate (i.e., their asymptotic versions). The amplitude term $\sqrt{|\bar{R}-\bar{r}|/|\bar{R}-\bar{r}_p|}$ is neglected; this appears reasonable, at least for the evaluation of the PSF main lobe. Accordingly, the integration in θ can be readily evaluated by exploiting the Hankel addition theorem. Therefore, (37) can be rewritten as:
\[ I_{BF}^{NW}(\bar{r}) = \frac{v\pi|\chi_p|^2}{2}\int_{K_\Omega} k^4 \left|\sum_n |H_n^{(2)}(2kR)|^2\,J_n(2kr)\,J_n(2kr_p)\,\exp[jn(\phi-\phi_p)]\right|^2 dk \qquad (38) \]
where Jn(·) are Bessel functions.

If the Tx/Rx positions are not close to the trial and scatterer positions $\bar{r}$ and $\bar{r}_p$, then only $2L+1$ terms, with $L = \max\{2kr, 2kr_p\}$, are relevant in the previous series [32]. Moreover, in this case the amplitude of the Hankel function can be considered constant with the index n. This is due to the asymptotic behaviour of the Hankel function when the argument is greater than the order. Accordingly, (38) can be approximated as:

\[ I_{BF}^{NW}(\bar{r}) = \frac{v\pi|\chi_p|^2}{2}\int_{K_\Omega} k^4\,|H_0^{(2)}(2kR)|^2 \left|\sum_{n=-L}^{L} J_n(2kr)\,J_n(2kr_p)\,\exp[jn(\phi-\phi_p)]\right|^2 dk \qquad (39) \]

The summation in (39) can be extended back to an infinite series, since the added terms do not change the result significantly. Therefore, by applying the addition theorem for Bessel functions, one finally obtains:

\[ I_{BF}^{NW}(\bar{r}) = \frac{v|\chi_p|^2}{2\pi R^2}\int_{K_\Omega} k^2\,J_0^2(2k|\bar{r}-\bar{r}_p|)\,dk \qquad (40) \]
which is known in closed form. In particular, Eq. (8) is obtained through the use of (9).

Windowed beamforming

In this case Eq. (13) is of concern. It can be rewritten as:

\[ I_{BF}^{W}(\bar{r}) = \left|\frac{jv}{2}\,\chi_p\int_{K_\Omega} k^2 \int_0^{2\pi}\left[\sqrt{\frac{2j}{2k\pi}}\,\frac{\exp(-2jk|\bar{R}-\bar{r}|)}{\sqrt{|\bar{R}-\bar{r}|}}\right]\times\left[\sqrt{\frac{-2j}{2k\pi}}\,\frac{\exp(2jk|\bar{R}-\bar{r}_p|)}{\sqrt{|\bar{R}-\bar{r}_p|}}\right]d\theta\,dk\right|^2 \qquad (41) \]
where, as before, the amplitude term $\sqrt{|\bar{R}-\bar{r}|/|\bar{R}-\bar{r}_p|}$ has been neglected. Accordingly, the integration in θ can be readily evaluated by exploiting the Hankel addition theorem, as done for Eq. (38). Therefore, we obtain the following:
\[ I_{BF}^{W}(\bar{r}) = \left|\frac{v}{R}\,\chi_p\int_{K_\Omega} k\,J_0(2k|\bar{r}-\bar{r}_p|)\,dk\right|^2 \qquad (42) \]
which, when evaluated by means of Eq. (16), yields (15).

Holography

Based on Eq. (27), the point spread function can be expressed as

\[ PSF(\bar{r}) = \int_{\Omega_{k_x,k_z}} \exp(j\,\bar{\beta}\cdot\bar{\Delta}_r)\,dk_x\,dk_z \qquad (43) \]
where $\bar{\beta}=(k_x,k_z)$ and $\bar{\Delta}_r=(x-x_p,\,z-z_p)$.

It becomes obvious that the shape of the data support in the $k_x$–$k_z$ space dictates the PSF behavior. For the case at hand, it is easy to realize, from the very definition of the spectral variables, that the data support is the circular shell $\Omega_{k_x,k_z}=\{(k_x,k_z): 2k_{min}\leq\beta\leq 2k_{max}\}$. Therefore, Eq. (43) can be conveniently rewritten as

\[ PSF(\bar{r}) = \int_{2k_{min}}^{2k_{max}}\int_0^{2\pi} \exp[j\beta\Delta_r\cos(\phi_\beta-\phi_\Delta)]\,\beta\,d\beta\,d\phi_\beta \qquad (44) \]

where $\phi_\beta$ and $\phi_\Delta$ are the phases of $\bar{\beta}$ and $\bar{\Delta}_r$, respectively. This double integral can be readily evaluated by employing the Jacobi-Anger expansion [33] of the exponential kernel for the angular integral and the result of Eq. (16) for the radial integration.

References and links

1. M. P. Andre, H. S. Janee, P. J. Martin, G. P. Otto, B. A. Spivey, and D. A. Palmer, “High-speed data acquisition in a diffraction tomography system employing large-scale toroidal arrays,” Int. J. Imaging Syst. Technol. 8(1), 137–147 (1997). [CrossRef]  

2. B. D. Van Veen and K. M. Buckley, “Beamforming: a versatile approach to spatial filtering,” IEEE ASSP Mag. 5(2), 4–24 (1988). [CrossRef]  

3. H. Krim and M. Viberg, “Two decades of array signal processing research: the parametric approach,” IEEE Signal Process. Mag. 13(4), 67–94 (1996). [CrossRef]  

4. A. B. Baggeroer, “Sonar arrays and array processing,” Rev. Prog. Quant. Nondestr. Eval. 13(3), 137(2005). [CrossRef]  

5. B. Drinkwater and P. Wilcox, “Ultrasonic arrays for non-destructive evaluation: a review,” NDT and E Int. 39(6), 525–541 (2006). [CrossRef]  

6. P. N. T. Wells, “Ultrasonic imaging of the human body,” Rep. Prog. Phys. 62(5), 671–722 (1999). [CrossRef]  

7. J. Chen, K. Yao, and R. Hudson, “Source localization and beamforming,” IEEE Signal Process. Mag. 19(2), 30–39 (2002). [CrossRef]  

8. S. C. Hagness, A. Taflove, and J. E. Bridges, “Two-dimensional FDTD analysis of a pulsed microwave confocal system for breast cancer detection: Fixed focus and antenna array sensors,” IEEE Trans. Biomed. Eng. 45(12), 1470–1479 (1998). [CrossRef]   [PubMed]  

9. M. O’Halloran, M. Glavin, and E. Jones, “Effects of fibroglandular tissue distribution on data-independent beam-forming algorithms,” Progress in Electromagnetics Research 97, 141–158 (2009). [CrossRef]  

10. H. Lim, N. Nhung, E. Li, and N. Thang, “Confocal microwave imaging for breast cancer detection: delay-multiply-and-sum image reconstruction algorithm,” IEEE Trans. Biomed. Eng. 55(6), 1697–1704 (2008). [CrossRef]   [PubMed]  

11. M. Klemm, I. J. Craddock, J. A. Leendertz, A. Preece, and R. Benjamin, “Improved delay-and-sum beamforming algorithm for breast cancer detection,” Int. J. Antennas Propag. 2008, 761402 (2008).

12. E. Wolf, “Determination of the amplitude and the phase of scattered fields by holography,” J. Opt. Soc. Am. 60(1), 18–20 (1970). [CrossRef]  

13. F. Soldovieri and R. Solimene, “Ground penetrating radar subsurface imaging of buried objects,” in Radar Technology (Guy Kouemou, 2010). [CrossRef]  

14. M. Takeda, W. Wang, Z. Duan, and Y. Miyamoto, “Coherence holography,” Opt. Express 13(23), 9629–9635 (2005). [CrossRef]   [PubMed]  

15. F. Joud, F. Laloe, M. Atlan, J. Hare, and M. Gross, “Imaging a vibrating object by sideband digital holography,” Opt. Express 17, 2774–2779 (2009). [CrossRef]   [PubMed]  

16. J. Gazdag and P. Sguazzero, “Migration of Seismic Data,” Proc. IEEE 72(10), 1302–1315 (1984). [CrossRef]  

17. C. P. Oden, H. M. Powers, D. L. Wright, and G. R. Olhoeft, “Improving GPR Image Resolution in Lossy Ground Using Dispersive Migration,” IEEE Trans. Geosc. Rem. Sens. 45, 2492–2500 (2007). [CrossRef]  

18. J. M. Lopez-Sanchez and J. Fortuny-Guasch, “3-D radar imaging using range migration techniques,” IEEE Trans. Antennas Propag. 48(5), 728–737 (2000). [CrossRef]  

19. R. H. Stolt, “Migration by Fourier transform,” Geophys. 43(1), 23–48 (1978). [CrossRef]  

20. J. Goodman, Introduction to Fourier Optics (McGraw Hill, 1968).

21. M. Soumekh, Synthetic Aperture Radar Signal Processing with Matlab Algorithms (Wiley-Interscience, 1999).

22. K. J. Langenberg, Applied Inverse Problems for Acoustic, Electromagnetic and Elastic Wave Scattering. Basic Methods for Tomography and Inverse Problems (Hilger, 1987).

23. Y. T. Cho, M. J. Roan, and J. S. Bolton, “A comparison of near-field beamforming and acoustical holography for sound source visualization,” Proc. Inst. Mech. Eng., Part C: J. Mech. Eng. Sci. 223, 819–834 (2009). [CrossRef]  

24. E. Julliard, S. Pauzin, F. Simon, and D. Biron, “Acoustic sources localization in presence of reverberation,” J. Acoust. Soc. Am. 118, 1886 (2005). [CrossRef]  

25. W. C. Chew, Waves and Fields in Inhomogeneous Media, (Van Nostrand Reinhold, 1990).

26. Y.-D. Joh, Y. M. Kwon, J. Y. Huh, and W.-K. Park, “Structure analysis of single-and multi-frequency subspace migrations in inverse scattering problems,” Prog. Electromag. Res. 136, 607–622 (2013). [CrossRef]  

27. R. Solimene, I. Catapano, G. Gennarelli, A. Dell’Aversano, A. Cuccaro, and F. Soldovieri, “SAR imaging algorithms and some unconventional applications: a unified mathematical overview,” IEEE Signal Process. Mag. 31(4), 90–98 (2014). [CrossRef]  

28. D. Flores-Tapia and S. Pistorius, “Real time breast microwave radar image reconstruction using circular holography: a study of experimental feasibility,” Med Phys. 38, 5420–5431 (2011). [CrossRef]   [PubMed]  

29. N. Bleistein and R. A. Handelsman, Asymptotic Expansions of Integrals (Dover Publications, 1986).

30. D. Flores-Tapia and S. Pistorius, “Spatial sampling constraints on breast microwave radar scan acquired along circular geometries,” in IEEE International Symposium on Biomedical Imaging: From Nano to Macro (2011) pp. 496–499.

31. L. Gatteschi, “Sul comportamento asintotico delle funzioni di Bessel di prima specie di ordine ed argomento quasi uguali,” Ann. Mat. Pura Appl. 43(4), 97–117 (1957). [CrossRef]  

32. A. Brancaccio, G. Leone, and R. Pierri, “Information content of Born scattered fields: Results in the circular cylindrical case,” J. Opt. Soc. Am. A 15, 1909–1917 (1998). [CrossRef]  

33. D. Colton and R. Kress, Inverse Acoustic and Electromagnetic Scattering Theory (Springer-Verlag, 1992). [CrossRef]  
