Abstract
In this paper, we propose an approach to overcome the well-known “diffraction limit” when imaging sources several wavelengths away. We employ superdirectivity antenna concepts to design a well-controlled superoscillatory filter (SOF) based on the properties of Tschebyscheff polynomials. The SOF is applied to the images reconstructed by holographic algorithms which are based on the back-propagation principle. We demonstrate the capability of this approach when imaging point-sources several wavelengths away in one-, two-, and three-dimensional imaging with super-resolution. We also investigate the robustness of the proposed algorithm with respect to the sharpness of the SOF, the presence of noise, the imaging distance, and the size of the scanning aperture.
©2013 Optical Society of America
1. Introduction
The well-known “diffraction limit” restricts the resolution of conventional imaging approaches. Many techniques have been proposed so far to overcome this limit. Most of these approaches exploit the evanescent field components in the vicinity of the imaged object, e.g. see [1-4], although in some instances the near-field information can be projected into the far field [5,6]. The evanescent components contain information about the finer details of the electromagnetic field variation. Overcoming the diffraction limit for longer imaging distances is very challenging due to the fast attenuation of the evanescent waves.
Furthermore, a number of approaches have been proposed so far to overcome the diffraction limit without the need for evanescent waves. A good summary of some early efforts can be found in [7]; these include an approach based on the sampling theorem in the frequency domain [8], an approach based on the prolate-spheroidal wave-function expansion when imaging through a lens [9], and an iterative approach [10] based on minimizing a defined ‘error energy’ function until a convergence criterion is reached.
In this paper, we propose an approach to overcome the diffraction limit without evanescent waves based on the properties of superoscillatory (SO) wave variations. These variations can be realized by employing an expansion of band-limited functions. In [11], SO wave variations have been realized by employing the prolate-spheroidal wave functions [12]. More specifically, a SO field was constructed as a series of spheroidal wave functions within a prescribed size of the imaged region. A mask has been fabricated to generate sub-wavelength beams in the optical regime for imaging at distances many wavelengths away. Also, in [13], a super-resolution microscope has been proposed for far-field optical imaging. This imaging technique utilizes a binary amplitude mask, comprising optimized concentric rings of different widths and diameters, for focusing of light from a laser into a subwavelength SO spot in the far-field. This beam is then scanned to construct an image of the object with subwavelength resolution.
An alternative approach to the synthesis of superoscillations has been proposed [14,15], by employing superdirective antenna concepts. It has been shown that superdirectivity and superoscillations are dual phenomena in the spectral and spatial frequency domains, respectively. By leveraging this analogy, the designer has access to the large body of information available in the antenna literature to synthesize controlled SO beams. For example, the use of Tschebyscheff polynomials leads to sidelobes of equal amplitude and a well-controlled ratio between the peak of the main SO beam and these sidelobes. By employing this concept and the properties of Tschebyscheff polynomials in the design of superdirective antennas [16], a SO waveform has been physically realized five wavelengths away from an antenna array using only propagating waves [15].
In this paper, we propose a simple yet effective technique to perform sub-wavelength imaging. Instead of a hardware implementation to achieve sub-wavelength focusing of electromagnetic waves (as performed in [11], [13,15]), our approach is based on the design and application of a superoscillatory filter (SOF) utilizing the theory presented in [14]. This filter is applied to the complex-valued images already reconstructed by conventional holographic algorithms [17-19] which are based on the principle of back-propagation. The resolution of these holographic algorithms is subject to the diffraction limit when imaging sources several wavelengths away from the imaging apparatus. Here, we demonstrate the capability of the proposed approach in improving the resolution of the images beyond the diffraction limit in one-dimensional (1D), two-dimensional (2D), and three-dimensional (3D) imaging. Our study is based on simulation data from point sources. We also investigate the robustness of the proposed algorithm with respect to various parameters such as the sharpness of the SOF, the presence of noise, the imaging distance, and the size of the scanning aperture.
2. Overcoming the diffraction limit with a SOF
Figure 1(a) illustrates the holographic imaging setup for 2D or 3D imaging. The coherent data is measured on a 2D aperture plane and holographic algorithms are employed to reconstruct images of the sources on the imaged plane/planes. Single frequency data is employed to perform 2D imaging [17,18] at a single range position (z-position) while acquisition of wideband data allows for 3D imaging of the sources [17,19] at various range positions (z-positions).
Here, following the guidelines in [17-19], for simplicity we start our formulation assuming a linear aperture and a linear imaged domain (both along the x direction) as shown in Fig. 1(b). We also assume single frequency data acquisition for now, which allows for 1D imaging over the inspected region. After we describe the proposed approach for improving the resolution in 1D imaging, the extension of the approach to 2D and 3D imaging is straightforward.
With reference to Fig. 1(b), for sources with current density $J(x)$ on the imaged line, the measured field $E(x')$ on the scanned line is obtained from

$$E(x') = \int J(x)\, g(x' - x)\, dx \qquad (1)$$
where $J(x)$ is the current density of the sources on the imaged line and $g(x)$ is the Green’s function. Equation (1) is a convolution integral. Taking the Fourier transform (FT) of both sides of Eq. (1) gives

$$\tilde{E}(k_x) = \tilde{J}(k_x)\, \tilde{G}(k_x) \qquad (2)$$
where $\tilde{E}(k_x)$, $\tilde{J}(k_x)$, and $\tilde{G}(k_x)$ are the FTs of $E(x)$, $J(x)$, and $g(x)$, respectively.

2.1 Holographic imaging and diffraction-limited resolution
In the 1D holographic imaging algorithm, knowing $\tilde{E}(k_x)$ and $\tilde{G}(k_x)$ in Eq. (2), the estimated FT of the image of the source is obtained as

$$\tilde{J}_{\text{est}}(k_x) = \frac{\tilde{E}(k_x)}{\tilde{G}(k_x)} \qquad (3)$$
The reconstructed image is then obtained by taking the inverse FT of $\tilde{J}_{\text{est}}(k_x)$. Since the scanned line has a finite size, the maximum measurable wavenumber corresponding to the x axis is $k_{x,\max} = k \sin\theta_{\max}$, as illustrated in Fig. 1(b), where $\theta_{\max}$ is the angle subtended by the aperture. Thus, Eq. (3) is valid only within the spectral range $|k_x| \le k \sin\theta_{\max}$.
To study the resolution of this approach, we follow the reconstruction process for a point-source on the z-axis, i.e. $J(x) = \delta(x)$ and $\tilde{J}(k_x) = 1$. Following the above expressions, it is easy to show that the estimated FT of the image of this source is obtained as

$$\tilde{J}_{\text{est}}(k_x) = 1, \qquad |k_x| \le k \sin\theta_{\max} \qquad (4)$$
Therefore, the reconstructed image is a “sinc” function:

$$J_{\text{est}}(x) \propto \frac{\sin(k \sin\theta_{\max}\, x)}{k \sin\theta_{\max}\, x} \qquad (5)$$

The first zero of this “sinc” function occurs at $x = \pi/(k \sin\theta_{\max}) = \lambda/(2 \sin\theta_{\max})$. The diffraction-limited resolution is the minimum distance between two identical point-sources such that they can be resolved well in the reconstructed image. So, this resolution is approximately $\lambda/2$. In practice, when the scanned aperture is sufficiently large ($\sin\theta_{\max} \approx 1$), the distance between two identical sources should be approximately $0.78\lambda$ so that the dip between the two peaks in the image (corresponding to the positions of the two sources) drops below the 0.7 level (the half-power level). This criterion is appropriate since in this paper we consider coherent imaging detection processes.
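As a numerical sanity check of this criterion, the diffraction-limited point-spread function and the dip between two such point sources can be sketched as follows. This is a simple illustration assuming λ = 1 and an ideal flat spectrum over the full propagating band; it is not the holographic reconstruction chain itself.

```python
import numpy as np

def sinc_image(x, wavelength=1.0, sin_theta_max=1.0):
    """Diffraction-limited image of a point source at x = 0: the inverse FT
    of a flat spectrum over |k_x| <= k*sin(theta_max)."""
    kx_max = 2 * np.pi / wavelength * sin_theta_max
    # np.sinc(t) = sin(pi*t)/(pi*t), so rescale the argument accordingly
    return np.sinc(kx_max * x / np.pi)

def two_source_dip(d, wavelength=1.0):
    """Normalized level of the coherent image midway between two identical
    point sources separated by d (in units of the wavelength)."""
    x = np.linspace(-2, 2, 4001) * wavelength
    img = np.abs(sinc_image(x - d / 2) + sinc_image(x + d / 2))
    img /= img.max()
    return img[np.argmin(np.abs(x))]
```

Scanning `two_source_dip` over separations shows the dip crossing the 0.7 half-power level for separations somewhat below 0.8λ, consistent with the figure quoted in section 4.3.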
The holographic approach for 2D imaging is very similar to the 1D imaging described above [17,18]. For 3D holographic imaging, however, wideband data has to be acquired and the processing involves more complicated steps and approximations. In [17], single-frequency 2D holographic imaging methods have been merged with a wideband 2D synthetic aperture radar (SAR) approach to propose a 3D holographic imaging algorithm. The processing relies on an assumed analytical (exponential) form of the incident field and the free-space Green’s function in order to cast the inversion expression in the form of a 3D inverse FT. Re-sampling of the data in the k-space is also necessary. Although in [17] $k_z$ is treated as an independent Fourier variable corresponding to the z direction, this parameter actually depends on the other two Fourier variables $k_x$ and $k_y$, corresponding to the x and y directions, respectively. These approximations introduce errors in the results of the 3D holographic imaging algorithm proposed in [17]. In [19], however, the introduction of the parameter $k_z$ is avoided. Instead, the 2D FTs of the images at various range positions are found by solving a number of systems of equations in a least-squares sense. Eventually, the complex-valued 2D images at various range positions are obtained by taking the inverse FT of the solutions.
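The least-squares step of [19] described above can be sketched for a single transverse wavenumber pair as follows. The exponential kernel below is a simplifying assumption for illustration only, standing in for the full Green's-function kernel used in [19].

```python
import numpy as np

def solve_range_images(E_meas, kz, z_planes):
    """For one (k_x, k_y) pair: relate the measured spectra at N_f frequencies
    to the image spectra at N_z range planes through the matrix
    A[f, l] = exp(-1j * kz[f] * z[l]) and solve in the least-squares sense."""
    A = np.exp(-1j * np.outer(kz, z_planes))   # (N_f, N_z) system matrix
    sol, *_ = np.linalg.lstsq(A, E_meas, rcond=None)
    return sol                                  # image spectra, one per plane
```

Repeating this solve for every sampled (k_x, k_y) pair and taking the inverse 2D FT of the solutions plane by plane yields the complex-valued images at the various range positions.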
In the following sections, we first describe an approach to design a SOF and then we propose applications of this filter to overcome the diffraction-limited resolution in the holographic imaging algorithms.
2.2 Design of 1D and 2D SOFs
In this section, we present the approaches to design one-dimensional (1D) and two-dimensional (2D) SOFs. The SOF is a band-limited function such that it operates over the spatial bandwidth available through the propagating waves only, i.e. $|k_x| \le k$. However, in the spatial domain, it should have a narrower main lobe compared to the diffraction-limited “sinc” function, such that when it is applied to the images resulting from the holographic algorithm, it introduces an improvement in the resolution, as described later.
To design a filter with such properties, we employ the SO waveform design concepts. Superoscillation is the phenomenon in which a waveform possesses variations faster than its highest spectral component. Similar to [14,15], we employ superdirective antenna design concepts to construct a SOF. The SO waveform is synthesized by the proper expansion of Tschebyscheff polynomials. It is performed by finding the proper sets of zeros which yield the SO waveform with the desired narrow width of the main lobe and prescribed side-lobe levels.
Here, we first summarize the process of designing a SOF with spectral lines distributed evenly in the spectral domain within the bandwidth corresponding to the propagating waves. In the spectral frequency domain, the SOF is written as

$$\tilde{F}(k_x) = \sum_{n=-N}^{N} d_n\, \delta(k_x - n\,\Delta k) \qquad (6)$$

where $\Delta k$ is a uniform line spacing and $d_n$, $n = -N, \ldots, N$, are the coefficients (values of the spectral lines) to be determined. The spatial variation of this filter can then be written as

$$F(x) = \sum_{n=-N}^{N} d_n\, e^{j n \Delta k x} \qquad (7)$$

Defining $u = \cos(\Delta k\, x/2)$, it is easy to show that for $2N+1$ spectral lines, with symmetrical strength around the center line ($d_{-n} = d_n$), Eq. (7) can be written as [16]

$$F(x) = d_0 + 2\sum_{n=1}^{N} d_n\, T_{2n}(u) \qquad (8)$$

where $T_n(u)$ is the n-th order Tschebyscheff polynomial. It is feasible to design the SOF employing the useful properties of Tschebyscheff polynomials, including [16]: (1) all zeros of $T_n(u)$ occur in the interval $-1 \le u \le 1$, and (2) the maximum and minimum values of $T_n(u)$ lying in the interval $-1 \le u \le 1$ are $\pm 1$. A linear shifting of the desired portion of the Tschebyscheff polynomial of the correct degree into the range of $u$ defined by $|x| \le x_{\max}$ for the given array, where $2x_{\max}$ is the designed spatial extent of the filter, followed by equating the coefficients of Eq. (8) to those of the transformed Tschebyscheff function, leads to the determination of the coefficients $d_n$ [16]. Knowing the positions of the nulls, the spatial variation of the SOF is obtained from Eq. (8).

Furthermore, to construct a 2D SOF, the simplest method is to take the array of spectral lines along the $k_x$-axis as an element of another similar array along the $k_y$-axis. Assuming similar parameters for the filter along both the $k_x$ and $k_y$ directions, the 2D variation of the SOF is obtained from

$$F_{2D}(x, y) = F(x)\, F(y) \qquad (9)$$
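To make the construction concrete, the following sketch builds a band-limited waveform in the equivalent product (zero) form rather than by the Tschebyscheff coefficient matching described above. The zero locations here are hand-picked for illustration only, not optimized as in [14,16], so the sidelobe levels are not equiripple.

```python
import numpy as np

# Illustrative parameters (lambda = 1): 2N + 1 = 5 spectral lines spaced
# dk = k/N so that they span the propagating band |k_x| <= k.
k = 2 * np.pi
N = 2
dk = k / N

# Hand-picked symmetric zero pairs (an assumption for illustration, NOT the
# Tschebyscheff-optimized zeros). The first zero at 0.25*lambda lies inside
# the diffraction-limited first zero at 0.5*lambda.
zeros = [0.25, 0.45]

def sof(x):
    """Band-limited waveform in product form,
    F(x) ~ prod_i (cos(dk*x) - cos(dk*x_i)), normalized so that F(0) = 1."""
    x = np.asarray(x, dtype=float)
    F = np.ones_like(x)
    for xi in zeros:
        F = F * (np.cos(dk * x) - np.cos(dk * xi))
    F0 = np.prod([1.0 - np.cos(dk * xi) for xi in zeros])
    return F / F0
```

Evaluating `sof` on a grid shows the characteristic superoscillatory behavior: a main lobe narrower than the diffraction-limited sinc within a restricted field of view, with the waveform growing rapidly outside that region.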
2.3 Application of the SOF to overcome the diffraction limited resolution
To overcome the diffraction-limited resolution, we propose applying the SOF designed in the previous section to the complex-valued images obtained from the holographic imaging techniques. As described above, the SOF is designed using the range of $k_x$ and $k_y$ values corresponding to the propagating waves only, i.e. $|k_x|, |k_y| \le k$. Applying such a filter to the reconstructed values obtained from holographic imaging provides improved resolution, as described later in this section. Since the filter is designed to act on the transverse spatial spectrum corresponding to propagating waves only, the improvement in the resolution can be obtained far beyond the evanescent region.
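The filtering step itself is a simple spectral multiplication. Below is a minimal sketch, assuming the complex-valued image and the sampled SOF live on the same uniform spatial grid and ignoring windowing details:

```python
import numpy as np

def apply_sof(image, sof_spatial):
    """Apply the SOF to a complex-valued reconstructed image by multiplying
    the two spectra (equivalently, a circular convolution in the spatial
    domain). Both inputs are 1D arrays sampled on the same grid."""
    return np.fft.ifft(np.fft.fft(image) * np.fft.fft(sof_spatial))
```

For an ideal point-source image, whose spectrum is flat, the output reduces to the SOF's own spatial variation, which is exactly the mechanism exploited in the remainder of this section.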
Figure 2 illustrates the block diagram of the proposed algorithm in the spectral domain. For example, for a point-source, from Eq. (4), the estimated FT of the image obtained from holographic reconstruction is ideally 1 within the spectrum of the propagating waves. When we apply the SOF to this image, the FT of the outcome, $\tilde{I}(k_x)$, is the product of the FT of the estimated image, $\tilde{J}_{\text{est}}(k_x)$, and the FT of the filter, $\tilde{F}(k_x)$. So, for a point-source, ideally we have

$$\tilde{I}(k_x) = \tilde{J}_{\text{est}}(k_x)\, \tilde{F}(k_x) = \tilde{F}(k_x) \qquad (10)$$
Thus, when taking the inverse FT of Eq. (10), we obtain an image with SO variation whose width is arbitrarily narrower (corresponding to the sharpness of the main lobe of the designed SOF) than the diffraction-limited “sinc” function obtained in Eq. (5). Similarly, for multiple point-sources, the application of the SOF leads to better resolving capability. The only limitation is that the field of view (FOV) has to be limited, as is well known for all imaging approaches based on the formation of SO variations [9,11,13-15].

3. Results
We hereby present various simulation examples to demonstrate the capabilities of applying 1D and 2D SOFs to improve the resolution of images obtained in 1D, 2D, and 3D holographic imaging of point-sources.
While showing the improvement in the resolution by the proposed approach, we also investigate the robustness of the proposed approach to various factors including: number of spectral lines 2N + 1 employed to design the SOF, level of additive white Gaussian noise (WGN), imaging distance, and size of the scanning aperture. We perform this study for 1D imaging but the conclusions can be extended to 2D and 3D imaging.
3.1 Applying SOF in 1D imaging
First, we employ the concept described in section 2 to design SOFs at the frequency of 3 GHz. Figure 3 shows the normalized spectra of three 1D SOFs designed with 5 (N = 2), 7 (N = 3), and 9 (N = 4) spectral lines in the spectral range provided by the propagating waves, i.e. $|k_x| \le k$ (this is with the assumption that the scanned aperture is large enough such that $\sin\theta_{\max} \approx 1$). The filters have been designed to have the narrowest main lobe in the spatial domain with constant sidelobes at 20% of the main lobe’s strength within the prescribed field of view on both sides of the peak. It is observed from Fig. 3 that the spatial variations of these SOFs have narrower main lobes compared with the diffraction-limited “sinc” function employing the same spectrum. It is also observed that as N increases, the width of the main lobe decreases. However, with increasing N, the spatial variation of the SOF becomes more sensitive to tolerances in the values of the spectral lines. Although this may not be important in the design of the filter itself (we fine-tune the values), it becomes significant when we employ the SOF in the proposed approach to improve the resolution. In section 2, we described that when applying the SOF in imaging of a point-source, in the ideal scenario the FT of the filtered image is the same as the FT of the SOF. However, if the estimated FT of the point-source image is not exactly 1, the spectrum of the filtered image deviates from that of the SOF. Its inverse FT may then not lead to a satisfactory spatial variation. This situation can occur due to a small scanning aperture or high levels of noise, and it leads to difficulties in employing the proposed approach to improve the resolution with SOFs of higher order N. Thus, there is a trade-off between employing SOFs designed with smaller N (wider main lobe and less sensitive to the values of the spectral lines) and SOFs designed with larger N (narrower main lobe and more sensitive to the values of the spectral lines).
Figure 4 shows the reconstructed images of two closely spaced point-sources, both without applying a SOF and when applying the SOFs whose spectral and spatial variations are shown in Fig. 3. The same FOV is used for this example and the other 1D examples in this paper. It is observed that the resolution of the image is improved when applying the SOFs with N = 2 and N = 3 compared to the original image obtained from holographic reconstruction. The level of the dip between the two peaks in the image is lower and the width of the peaks corresponding to the two sources is narrower compared with the image obtained directly from the holographic imaging. Therefore, the resolution has indeed been improved. It should be noted that the truncation of the images to the left and the right of the peaks, for higher N, is due to the limited FOV inherent in SO-based imaging [11,14].
Furthermore, when comparing the results of using the SOF with N = 3 with the one obtained with N = 2, the two lobes corresponding to the two sources are narrower with the dip between them being sharper. This is due to the fact that the SOF with N = 3 has a narrower main lobe compared to the one designed with N = 2 as shown in Fig. 3. However, the reconstructed image becomes worse when applying the SOF with N = 4. Undesired displacements of the main lobes are observed in the image. This is due to the higher sensitivity of the spatial variation of the SOF with N = 4 with respect to the tolerances in the values of the spectral lines. Thus, the finite size of the aperture causes large errors in the results as described before.
To be more realistic, we investigate the effect of adding WGN to the field acquired on the aperture. We have added WGN with signal-to-noise ratio (SNR) values of 80 dB (almost noiseless), 30 dB, 15 dB, and 5 dB to the simulated field acquired over the scanned aperture. Figure 5 shows the results of applying the designed SOF with 7 spectral lines (N = 3) from Fig. 3 to the imaging of two point-sources in free space. It is observed that the proposed approach performs well at the reasonable SNR value of 30 dB. As expected, the results degrade as the SNR value decreases.
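For reproducibility, noise at a prescribed SNR can be added to the simulated aperture field as in the following sketch (complex WGN, with the signal power averaged over the samples; the seed is arbitrary):

```python
import numpy as np

def add_wgn(field, snr_db, rng=None):
    """Add complex white Gaussian noise to simulated aperture data at a
    prescribed SNR in dB, relative to the mean signal power."""
    rng = np.random.default_rng(0) if rng is None else rng
    p_sig = np.mean(np.abs(field) ** 2)
    p_noise = p_sig / 10.0 ** (snr_db / 10.0)
    # Split the noise power equally between real and imaginary parts.
    noise = np.sqrt(p_noise / 2.0) * (rng.standard_normal(field.shape)
                                      + 1j * rng.standard_normal(field.shape))
    return field + noise
```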
Here, we also investigate the effect of increasing the imaging distance on the reconstructed images. Figure 6 shows the results of applying the designed SOF with 7 spectral lines (N = 3) from Fig. 3 to the imaging of two point-sources in free space. In this example, we add WGN with an SNR value of 30 dB to the field acquired on the aperture. The results in Figs. 6(a)-6(c) are shown for three successively larger imaging distances. It is observed that the quality of the reconstructed images degrades with increasing imaging distance. This degradation can, however, be compensated by increasing the aperture size. For instance, Fig. 6(d) shows the reconstructed image for the largest imaging distance when we double the aperture size. It is observed that the quality of the image improves with increasing aperture size since more propagating waves are now captured.
3.2 Applying SOF in 2D imaging
Here, we employ the 1D SOF of the previous section designed with 5 spectral lines (N = 2) and construct a 2D SOF with the approach described in section 2.2. Figure 7 compares the magnitude of the constructed 2D SOF with that of the 2D diffraction-limited “sinc” function. It is observed that the bright spot corresponding to the main lobe of the 2D SOF is smaller in size than that of the “sinc” function. We show below that this leads to an improvement in the resolution when employing this SOF.
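On a discrete grid, this separable construction amounts to an outer product of the 1D filter with itself; a minimal sketch:

```python
import numpy as np

def sof_2d(sof_1d):
    """Separable 2D SOF: treating the k_x array of spectral lines as an
    element of an identical array along k_y makes the spatial filter factor
    as F(x, y) = F(x) * F(y), i.e. an outer product of grid samples."""
    sof_1d = np.asarray(sof_1d)
    return np.outer(sof_1d, sof_1d)
```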
Figure 8(a) shows the results of employing the 2D holographic image reconstruction proposed in [17,18] for four point-sources at (−0.43, 0, 5)λ, (0.43, 0, 5)λ, (0, −0.43, 5)λ, and (0, 0.43, 5)λ. The same FOV is used for this example and the 3D imaging example presented in the next section.
Here, we have first added WGN with SNR = 30 dB to the simulated field. The four sources cannot be resolved in the image of Fig. 8(a). Figure 8(b) shows the result when we apply the designed 2D SOF to the complex-valued image obtained from the holographic imaging. This clearly improves the quality of the image, with the four sources resolved.
Figures 8(c) and 8(d) also show the reconstructed 2D images when we add WGN with SNR = 20 dB to the simulated field acquired over the aperture. These figures show the degradation of the results as the SNR value decreases.
3.3 Applying SOF in 3D imaging
In order to perform 3D imaging with holographic algorithms, wideband data is acquired over the scanned aperture [17,19]. As a 3D imaging example, we first compute the field over the aperture produced by four point-sources at (−0.43, 0, 5)λc, (0.43, 0, 5)λc, (0, −0.43, 4)λc, and (0, 0.43, 4)λc, where λc is the wavelength at the center frequency. The data is acquired over the frequency range of 3 GHz to 10 GHz.
Figure 9(a) shows the results of applying the 3D holographic image reconstruction proposed in [19] with the free-space Green’s function. We have added WGN with SNR = 30 dB to the simulated field. The four sources cannot be resolved in the images at z = 4λc and z = 5λc. To improve the quality of the images, a SOF is designed at the center frequency of the band. Figure 9(b) shows the results when we apply this 2D SOF to the complex-valued images at each range location. This clearly improves the quality of the images, with the sources being resolved at both z = 4λc and z = 5λc.
Figures 9(c)-9(f) also show the reconstructed images when we add WGN with SNR = 10 dB and −12 dB to the simulated field. These figures show the degradation of the results as the SNR value decreases. However, it can be observed that the 3D imaging algorithm is very robust to additive WGN. Our study shows that this robustness is due to the fact that 3D holographic imaging itself is very robust to noise, owing to the additional information from the multiple frequencies involved [19]. Thus, the complex-valued images resulting from applying 3D holographic imaging with low SNR in the acquired data still possess values accurate enough to be improved when applying the SOF.
4. Discussion
In this section, we briefly discuss the spatial and frequency sampling criteria for achieving reliable reconstruction results, the modeling of the noise in an imaging system, and the achieved improvement in the resolution using the proposed SOF approach.
4.1 Spatial and Frequency Sampling Criteria
The discrete nature of the data acquisition process (detectors are finite and scanning happens in discrete steps) as well as digital image reconstruction requires the data and the image to be discretized. In a successful discretization scheme, the Nyquist sampling criterion should be satisfied, i.e. the phase shift from one sample to the next should be less than $\pi$ rad. Thus, the spatial and frequency sampling rates depend on the wavelength, the size of the aperture, the distance to the object, etc.
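As an order-of-magnitude illustration, this Nyquist condition can be turned into worst-case step sizes for one-way travel. This is a sketch assuming full angular coverage; the geometry-dependent expressions derived in [17] are discussed next.

```python
C0 = 299_792_458.0  # speed of light in vacuum (m/s)

def sampling_steps(f_max_hz, r_max_m):
    """Worst-case Nyquist step sizes for one-way travel (< pi phase shift
    per sample): the spatial step along the aperture and the frequency step
    for wideband (3D) imaging. r_max_m is the largest receiver-to-source
    distance; tighter geometry-dependent bounds exist [17]."""
    lam_min = C0 / f_max_hz
    dx_max = lam_min / 2.0           # from k * dx < pi with k = 2*pi/lam_min
    df_max = C0 / (2.0 * r_max_m)    # from (2*pi*df/c) * r_max < pi
    return dx_max, df_max
```

For example, at a top frequency of 10 GHz and a 1 m maximum range, the spatial step must stay below about 1.5 cm.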
Useful expressions for the sampling criteria have been derived in [17]. With a slight modification to account for the one-way travel of the wave between the sources and the receiver, instead of the two-way travel between the transmitter/receiver and the object as in [17], the spatial sampling steps $\Delta x'$ and $\Delta y'$ and the sampling step $\Delta f$ for the frequency should satisfy the criteria

$$\Delta x' \le \frac{\lambda_{\min}}{2}, \qquad \Delta y' \le \frac{\lambda_{\min}}{2} \qquad (11)$$

$$\Delta f \le \frac{c}{2 R_{\max}} \qquad (12)$$
where $\lambda_{\min}$ is the shortest operating wavelength, $c$ is the speed of light, and $R_{\max}$ is the maximum distance of the receiver to the imaged source. In the studied examples, we satisfy (11), and for the 3D imaging example in Fig. 9 we have used a frequency sampling step that satisfies the criterion in (12) for the $R_{\max}$ corresponding to the case where the receiver is at the edge of the aperture and measures the field due to a source possibly present on the plane z = 7λc. Figure 10 shows the 1D images for the example shown in Fig. 5 for a spatial sampling step that satisfies (11) and one that does not. It is observed that violating the condition in (11) leads to errors in the image.

4.2 Modeling the Noise
So far, we have considered the WGN model. This is an appropriate model for many imaging systems in which the noise is generated by natural sources such as thermal vibrations of atoms in conductors (referred to as thermal noise or Johnson-Nyquist noise). Indeed, this is a suitable noise model in the microwave spectrum, where the above simulations have been carried out, since electronic receivers there are limited by thermal noise.
At optical frequencies, photodetectors are limited by shot noise, which is associated with the random arrival of photons. Thus, even if other types of noise are small enough to be neglected in an imaging system, the SNR cannot exceed a fundamental limit called the shot-noise limit (also called quantum-limited noise). For such low-photon-count imaging systems, the SNR is equal to $\bar{n}$, where $\bar{n}$ is the mean number of photons received within the measurement time [20]. In an imaging system with modest photon counts, this noise is modeled by the Poisson distribution.
Figure 11 shows the images for the example in Fig. 5 when adding Poisson noise (PN) with SNR values of 80 dB (almost noiseless), 30 dB, 15 dB, and 5 dB to the acquired field on the aperture. It is observed that the dip in the image of the two sources crosses the 0.7 level (half-power level) at an SNR of almost 14 dB. This SNR value is very close to the corresponding SNR value of 15 dB at which the imaging algorithm starts to break down with WGN (Fig. 5(c)). This level of noise corresponds to $\bar{n} \approx 25$ photons, which is indeed a very low photon count. In many imaging systems, however, the number of received photons is much higher, which leads to higher SNR values when considering only the shot noise. In such systems, other sources of noise such as thermal noise are usually dominant and are commonly modeled by WGN. Also, even for systems where the shot noise is dominant, this type of noise can still be approximated by WGN at high SNR values. In fact, it has been shown that for a large number of photons (say, larger than 1000, corresponding to an SNR value of 30 dB), the Gaussian distribution provides an excellent approximation to the Poisson distribution for modeling shot noise [21]. This is expected from the central limit theorem [22].
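The identity SNR = n̄ for Poisson counts, and its dB conversion used above, can be checked numerically with a small sketch:

```python
import numpy as np

def shot_noise_snr(n_mean, trials=200000, seed=1):
    """Empirical SNR of Poisson-distributed photon counts. Since the Poisson
    variance equals its mean, SNR = mean^2 / variance tends to n_mean."""
    counts = np.random.default_rng(seed).poisson(n_mean, trials)
    return counts.mean() ** 2 / counts.var()
```

With `n_mean = 1000` this gives an SNR near 1000, i.e. 10·log10(1000) ≈ 30 dB, matching the value quoted above; for such large counts a Gaussian with matching mean and variance is an excellent stand-in for the Poisson law.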
4.3 Improvement in the Resolution
Here, we assess the improvement in the resolution more quantitatively. For this purpose, we again consider the example in Fig. 5, assuming an SNR of 30 dB when adding WGN and PN. Figure 12 shows the reconstructed images for the cases in which the dip between the two peaks corresponding to two identical sources crosses the 0.7 level (half-power level), with and without applying the SOF. It is observed that for both WGN and PN, without applying the SOF the distance between the two sources must be almost 0.78λ, while with the SOF it can be reduced to almost 0.64λ. This is an improvement of approximately 18% in the resolution. According to our study, the resolution can be improved further using data at multiple frequencies. For example, by acquiring data over the frequency band of 3 GHz to 10 GHz and using a reconstruction approach similar to 3D imaging (see section 3.3), the resolution of 1D imaging can be improved by approximately 30% using the SOF when compared to plain image reconstruction over the same frequency range [17,19].
5. Conclusion
In this paper, we have proposed a computational approach to overcome the diffraction limit for imaging sources several wavelengths away from a scanned aperture. The method is based on applying a well-controlled SOF to the reconstructed complex-valued images, with the filter designed using a recently proposed analogy between antenna superdirectivity and superoscillations [14]. We demonstrated the capability of this approach in improving the resolution of 1D, 2D, and 3D images obtained from the holographic imaging algorithms reported in [17-19]. We have added white Gaussian noise and Poisson noise to the simulated fields acquired over the aperture to show the performance of the proposed algorithm in improving the resolution in the presence of noise. Although theoretically we can achieve arbitrarily high resolution with the proposed approach, in practice the achieved improvement in resolution is limited by the finite size of the scanned aperture, the working distance between the sources and the scanned aperture, and measurement noise. Our simulations show that useful results can be obtained at realistic signal-to-noise levels (e.g. 30 dB for a 5λ working distance). Remarkably, we have also found that the robustness to noise can be increased significantly by acquiring data over a range of frequencies, as demonstrated in our 3D reconstruction cases.
References and links
1. J. B. Pendry, “Negative refraction makes a perfect lens,” Phys. Rev. Lett. 85(18), 3966–3969 (2000). [CrossRef] [PubMed]
2. A. Grbic and G. V. Eleftheriades, “Overcoming the diffraction limit with a planar left-handed transmission-line lens,” Phys. Rev. Lett. 92(11), 117403 (2004). [CrossRef] [PubMed]
3. A. Grbic, L. Jiang, and R. Merlin, “Near-field plates: Subdiffraction focusing with patterned surfaces,” Science 320(5875), 511–513 (2008). [CrossRef] [PubMed]
4. L. Markley and G. V. Eleftheriades, “Meta-screens and near-field antenna-arrays: a new perspective on subwavelength focusing and imaging,” Metamaterials 5(2–3), 97–106 (2011).
5. Z. Jacob, L. V. Alekseyev, and E. Narimanov, “Optical hyperlens: Far-field imaging beyond the diffraction limit,” Opt. Express 14(18), 8247–8256 (2006). [CrossRef] [PubMed]
6. A. Salandrino and N. Engheta, “Far-field subdiffraction optical microscopy using metamaterial crystals: Theory and simulations,” Phys. Rev. B 74(7), 075103 (2006). [CrossRef]
7. J. W. Goodman, Introduction to Fourier Optics (McGraw-Hill, 1996).
8. J. L. Harris, “Diffraction and resolving power,” J. Opt. Soc. Am. 54(7), 931–933 (1964). [CrossRef]
9. C. W. Barnes, “Object restoration in a diffraction-limited imaging system,” J. Opt. Soc. Am. 56(5), 575–578 (1966). [CrossRef]
10. R. W. Gerchberg, “Super-resolution through error energy reduction,” Opt. Acta (Lond.) 21(9), 709–720 (1974). [CrossRef]
11. F. M. Huang and N. I. Zheludev, “Super-resolution without evanescent waves,” Nano Lett. 9(3), 1249–1254 (2009). [CrossRef] [PubMed]
12. D. Slepian and H. O. Pollak, “Prolate spheroidal wave functions, Fourier analysis and uncertainty-I,” Bell Syst. Tech. J. 40(1), 43–63 (1961).
13. E. T. F. Rogers, J. Lindberg, T. Roy, S. Savo, J. E. Chad, M. R. Dennis, and N. I. Zheludev, “A super-oscillatory lens optical microscope for subwavelength imaging,” Nat. Mater. 11(5), 432–435 (2012). [CrossRef] [PubMed]
14. A. M. H. Wong and G. V. Eleftheriades, “Adaptation of Schelkunoff’s superdirective antenna theory for the realization of superoscillatory antenna arrays,” IEEE Antennas Wirel. Propag. Lett. 9, 315–318 (2010). [CrossRef]
15. A. M. H. Wong and G. V. Eleftheriades, “Sub-wavelength focusing at the multi-wavelength range using superoscillations: an experimental demonstration,” IEEE Trans. Antenn. Propag. 59(12), 4766–4776 (2011). [CrossRef]
16. N. Yaru, “A note on super-gain antenna arrays,” Proc. IRE 39(9), 1081–1085 (1951). [CrossRef]
17. D. M. Sheen, D. L. McMakin, and T. E. Hall, “Three-dimensional millimeter-wave imaging for concealed weapon detection,” IEEE Trans. Microw. Theory Tech. 49(9), 1581–1592 (2001). [CrossRef]
18. M. Ravan, R. K. Amineh, and N. K. Nikolova, “Two-dimensional near-field microwave holography,” Inverse Probl. 26(5), 055011 (2010). [CrossRef]
19. R. K. Amineh, M. Ravan, A. Khalatpour, and N. K. Nikolova, “Three-dimensional near-field microwave holography using reflected and transmitted signals,” IEEE Trans. Antenn. Propag. 59(12), 4777–4789 (2011). [CrossRef]
20. B. E. A. Saleh and M. C. Teich, Fundamentals of Photonics (Wiley, 1991).
21. S. O. Rice, “Mathematical analysis of random noise,” Bell Syst. Tech. J. 23, 282–332 (1944), 24, 46–156 (1945).
22. C. M. Grinstead and J. L. Snell, Introduction to Probability (American Mathematical Society, 1997).