Far-field signature of sub-wavelength microscopic objects


Abstract

Information about microscopic objects with features smaller than the diffraction limit is almost entirely lost in a far-field diffraction image but could be partly recovered with data completion techniques. Any such approach critically depends on the level of noise. This new path to superresolution has recently been investigated with the use of compressed sensing and machine learning. We demonstrate a two-stage technique based on deconvolution and genetic optimization which enables the recovery of objects with features of 1/10 of the wavelength. We indicate that $\ell_1$-norm based optimization in the Fourier domain, unrelated to sparsity, is more robust to noise than its $\ell_2$-based counterpart. We also introduce an extremely fast general-purpose restricted-domain calculation method for Fourier-transform-based iterative algorithms operating on sparse data.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Superresolving optical microscopy is becoming increasingly important in medical and biological applications, in nanotechnology, materials science, etc. Recent advances in computational imaging [1,2] and deep learning [3] reopen the question of how much resolution can be enhanced by data completion methods [4-8]. While scanning near-field optical microscopy [9] as well as techniques based on fluorescence microscopy [10] allow one to reach a deeply sub-wavelength resolution down to the order of several nanometers [11], the resolution of classical optical imaging is restricted by the Abbe diffraction limit. Fluorescence microscopy gave rise to methods that overcome this limit, such as stochastic optical reconstruction microscopy [12], photo-activated localization microscopy [13], or stimulated emission depletion [14,15]. These techniques bypass the diffraction limit, which refers to two-point resolution or the width of the point spread function but not to the localization precision of point sources [16], whereas optical detection of isolated sub-wavelength objects remains a part of these methods. However, the Abbe limit alone only says that the images of small, closely positioned objects overlap, which is not equivalent to saying that they cannot be resolved by computational methods. Also, due to the low-pass filtering nature of optical imaging, a large part of the image spatial spectrum representing small image features is lost in the measurement. As a result, any attempt to reconstruct the original image at a resolution better than the diffraction limit without further assumptions about the object represents an ambiguous inverse problem even in the absence of noise. For specific assumptions on object and noise models, image reconstruction may be treated in the frameworks of decision theory or information theory, and in such a formulation spatial resolution is fundamentally limited only by the signal-to-noise ratio (SNR) and the information capacity of the imaging system [17-22].

In practice, though, it is difficult to reach a resolution significantly exceeding the diffraction limit by purely optical means [21,23]. Respective methods range from using a high-refractive-index immersion liquid and a high-numerical-aperture (NA>1) objective, on which the diffraction limit actually depends, through structured illumination [24] and deconvolution techniques [25-27], up to the development of novel optical set-ups such as 4Pi confocal optics [28,29].

Resolution depends on the use of the available degrees of freedom of the imaging system [18,30], given in terms of its space-bandwidth product and independent polarization channels [21,31,32]. It may be increased by using a priori knowledge about the object combined with signal modulation and reconstruction techniques [5,22]. A well-known encoding technique that extends the measured spatial spectrum of the specimen by copying high-frequency information to lower frequencies is based on the use of gratings [18,30,33]. More generally, modulation may involve structured illumination [24], speckle pattern projection [34,35], or structured illumination varied on a sub-wavelength scale [33,36-38]. A promising novel approach to superresolution is based on superoscillations [39-41]. Imaging with nanospheres has also been shown to overcome the diffraction limit [42]. Finally, digital processing with deep neural networks may enhance the spatial resolution of regular microscopic images slightly beyond Abbe's limit [4].

In this paper we make use of the far-field intensity pattern obtained under coherent illumination to deduce information on the shape and location of sub-wavelength sized objects. The specimen is placed in slits of a sub-wavelength binary metallic grating. Interference of the Fourier spectra of the object(s) and the grating enhances the far-field intensity modulation introduced by the object(s). This approach resembles object recovery in deconvolution microscopy [25]. It is also similar to interscale mixing microscopy (IMM) [8,43], although we do not share the opinion [8] that a binary grating non-overlapping with the object could introduce mixing of the evanescent spectrum of the object into the far-field pattern. In fact, this concept is not upheld in [7]. Nonetheless, 1D objects with sizes on the order of $1/10$ of the wavelength have been characterized experimentally with IMM [8]. This is much below the resolutions on the order of half of the Abbe limit obtained with structured illumination microscopy and confocal microscopy with various algorithms [44], or the $2.3$-fold enhancement obtained with chip-based structured illumination [45]. As compared to IMM, we use a different signal reconstruction method and consider a 2D situation. The far-field diffraction intensity pattern is captured and processed to obtain information on the object. We apply a two-step image reconstruction method for sub-wavelength objects and demonstrate that signal recovery from the far-field diffraction pattern remains feasible in 2D, despite the significant drop in the fraction of diffracted light within the diffraction pattern when the 2D situation is compared against 1D. Numerical recovery of sub-wavelength objects from their far-field interference signatures is computationally challenging and depends on using further assumptions, for instance on the sparsity of the objects. The overall resolution and the amount of detail that may be recovered from the non-evanescent field is strongly limited by the SNR.

2. Recovery of sub-wavelength objects from a far-field interference pattern

Reconstructing an object from the intensity measurement of its far-field interference pattern is an ambiguous inverse problem. The proposed method consists of two parts. The first stage is derived from the framework of deconvolution microscopy. Then we apply an original optimization procedure with a genetic algorithm using a criterion evaluated with a restricted-domain Fourier transform.

Figure 1 shows a sample object placed together with a binary grating and introduces the notation for the geometric features. The field from the grating interferes with that from the object(s) placed within the slits of the grating. The intensity distribution is measured in the Fourier plane of the objective. The overall far-field intensity is $I(k_x,k_y)\propto |(\hat R(k_x,k_y) + \hat O(k_x,k_y))\cdot \hat H(k_x,k_y)|^2$, where $\hat H(k_x,k_y)$ is the optical transfer function (OTF) and its squared modulus is the modulation transfer function (MTF) of the optical system, $O(x,y)$ is the object field, and $R(x,y)$ is the reference field created by the grating. The caret denotes the 2D Fourier transform. The OTF is a low-pass function with a cut-off at $|\textbf {k}|<NA\cdot k_0$, where NA is the numerical aperture and $k_0$ is the wavenumber. When the object $O$ fits in the grating slits, it is not modulated by the grating $R$, and the far field contains a superposition of the two spectra filtered independently by the same OTF. In effect, the far field holds no information about the spatial spectrum of the object above the cut-off, and the role of the grating in the measurement is other than to shift the evanescent spectrum below the cut-off. For small objects, the interference term $I_{int}=2\,\mathrm{Re}(\hat R^{*}(k_x,k_y) \cdot \hat O(k_x,k_y))\cdot |\hat H(k_x,k_y)|^2$ present in $I(k_x,k_y)$ carries a lot more energy than the object term $|\hat O(k_x,k_y)|^2$. For instance, for 2D square objects and a 2D grating with square masks, the reinforcement of the intensity signal due to interference is on the order of the squared ratio of their surfaces and may be substantial. Figure 2 illustrates this enhancement for a 2D $25\times 25$ rectangular grating with $\Lambda =275$ nm, $w/\Lambda =50\%$ at the wavelength of $\lambda =532$ nm and for $NA=1.49$, when the size of the object is $15$ nm. The interference contribution from this deeply sub-wavelength object is clearly a measurable correction to the far-field intensity of the grating. The interference pattern also encodes the phase of $\hat O$ as interference fringes. This justifies the use of the grating in our set-up. At the same time, the interference mechanism enhances noise in a similar way as object information.
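For readers who prefer code, the following minimal Python sketch builds this forward model and separates the interference and object terms. The pixel sampling, grid size, sign convention of the object field, and the particular object placement are our own illustrative assumptions, not the exact simulation parameters used for Figs. 1 and 2.

```python
import numpy as np

# Minimal sketch of the forward model I ∝ |(R_hat + O_hat) * H_hat|^2 discussed above.
wavelength, NA = 532e-9, 1.49
k0 = 2 * np.pi / wavelength
n_pix, dx = 2048, 5e-9                      # assumed sampling: 5 nm pixels

# Spatial-frequency grid and a binary circular low-pass OTF with cut-off |k| < NA*k0
k = 2 * np.pi * np.fft.fftfreq(n_pix, d=dx)
KX, KY = np.meshgrid(k, k, indexing="ij")
H_hat = (np.hypot(KX, KY) < NA * k0).astype(float)

# Reference field R: 25x25-period binary grating (pitch 275 nm, w = pitch/2 masks);
# object field O: a 15 nm square placed inside one of the slits
x = (np.arange(n_pix) - n_pix // 2) * dx
X, Y = np.meshgrid(x, x, indexing="ij")
pitch, w, N_per = 275e-9, 0.5 * 275e-9, 25
masks = (np.mod(X, pitch) < w) & (np.mod(Y, pitch) < w)
extent = (np.abs(X) < N_per * pitch / 2) & (np.abs(Y) < N_per * pitch / 2)
R = extent * (1.0 - masks)
O = ((np.abs(X - 200e-9) < 7.5e-9) & (np.abs(Y - 200e-9) < 7.5e-9)).astype(float)

R_hat, O_hat = np.fft.fft2(R), np.fft.fft2(O)
I = np.abs((R_hat + O_hat) * H_hat) ** 2                          # measured far-field intensity
I_int = 2 * np.real(np.conj(R_hat) * O_hat) * np.abs(H_hat) ** 2  # interference term
I_obj = np.abs(O_hat * H_hat) ** 2                                # much weaker object term
```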

Fig. 1. (a) Sub-wavelength sized object (in red) placed in a slit of a 2D binary grating. (b) Far-field intensity expressed in dB, obtained for the object shown in Fig. 3(a). The dynamic range of the measurement is limited to 30 dB, which results in masking of the overexposed low-frequency information.


Fig. 2. Contributions to the far-field interference pattern at different points of the far-field image (sorted by intensity and normalized by the measured mean far-field intensity $<I>$). The interference term $I_{int}/<I>$ (in red) is $2$ to $3$ orders of magnitude larger than the intensity of the object $|\hat O|^2$ (green) and can be measured. (a) Far field determined with a conical modulation transfer function (MTF) of a lens; (b) far field determined with a binary band-limited circularly-shaped MTF (for MTF-compensated systems or for Fresnel diffraction).


Recording the far-field interference pattern $I(k_x,k_y)$ resembles recording a Fourier hologram with a reference beam $\hat R(k_x,k_y)$. The notation $\hat R$ underlines the role of a reference beam in the holographic recording, played here by the far-field diffraction pattern of the grating. As in deconvolution microscopy, we want to recover $O(x,y)$ from the interference pattern. For a 2D diffraction grating $\hat R$ can be written as

$$\hat R(k_x,k_y)=\hat P(k_x,k_y)\cdot \hat S(k_x,k_y)/ \hat Q(k_x,k_y),$$
where $\hat P(k_x,k_y)$ is the Fourier transform of the elementary cell of the grating, $\hat P=const-w^2 {\textrm{sinc}} \left ({k_x w}/{2}\right ){\textrm{sinc}} \left ( {k_y w}/{2}\right )$, and $\hat S/\hat Q$ depends on the distribution of cells in the finite-sized grating, $\hat S(k_x,k_y)= {\sin \left ({ k_x N\Lambda }/{2}\right )} {\sin \left ({ k_y N\Lambda }/{2}\right )}$, and $\hat Q(k_x,k_y)= {\sin \left ({k_y \Lambda }/{2}\right )\sin \left ({k_x \Lambda }/{2}\right )}$. Here $N$, $w$, and $\Lambda$ denote the number of grating periods in each dimension, the width of a square grating mask, and the grating pitch, respectively, as indicated in Fig. 1. The object may be approximately recovered with the deconvolution formula
$$O^{deconv}(x,y)=\mathcal{F}^{-1} \left\{\frac{I\cdot \hat Q}{\hat P \cdot \hat S \cdot|\hat H|^2}- \frac{\hat P \cdot \hat S}{\hat Q}\right\},$$
where the inverse Fourier transform is calculated over the part of the domain for which $|\hat P \cdot \hat S \cdot \hat Q \cdot \hat H|>0$. The result is diffraction limited. For a sub-wavelength sized object all fine details are lost, and closely positioned objects cannot be resolved, but the locations of the objects may be approximately determined, restricting the vast domain that has to be examined in the subsequent computationally intensive optimization. We use the deconvolution formula (2) only to estimate the region of pixels where the object(s) may be located.
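A minimal sketch of this deconvolution stage is given below, continuing the notation of the previous sketch. The regularization threshold `eps` and the value of the constant term in $\hat P$ (taken here as $\Lambda^2$) are our own assumptions made for numerical stability.

```python
import numpy as np

# Sketch of the deconvolution stage, Eqs. (1)-(2).
def deconvolve(I, KX, KY, H_hat, N=25, pitch=275e-9, w=0.5 * 275e-9, eps=1e-3):
    sinc = lambda u: np.sinc(u / np.pi)          # np.sinc(x) = sin(pi*x)/(pi*x), so this gives sin(u)/u
    P_hat = pitch**2 - w**2 * sinc(KX * w / 2) * sinc(KY * w / 2)   # elementary cell; 'const' assumed = pitch^2
    S_hat = np.sin(KX * N * pitch / 2) * np.sin(KY * N * pitch / 2)
    Q_hat = np.sin(KX * pitch / 2) * np.sin(KY * pitch / 2)

    # Restrict to frequencies where |P*S*Q*H| is safely above zero (Eq. (2) is only applied there)
    mask = np.abs(P_hat * S_hat * Q_hat) * np.abs(H_hat) > eps
    denom = P_hat * S_hat * np.abs(H_hat) ** 2
    spec = np.zeros_like(I, dtype=complex)
    spec[mask] = I[mask] * Q_hat[mask] / denom[mask] - P_hat[mask] * S_hat[mask] / Q_hat[mask]
    return np.fft.ifft2(spec)                    # diffraction-limited estimate O_deconv(x, y)
```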

The second stage of the algorithm consists of minimizing an error function dependent on the hypothetical object location and shape using the actual far-field intensity measurement. We consider both the $\ell _1$ and $\ell _2$ norms to construct the criterion,

$$F_p(O^{hyp})= \left( \sum_{i\in\Omega} \left(\left|I(k_{x_i},k_{y_i}) -I^{hyp}(\hat O^{hyp}(k_{x_i},k_{y_i}))\right|^2\right)^p\right)^{1/p},$$
where $O^{hyp}$ is the tested hypothesis of the object's shape, and $I^{hyp}=|\hat H(k_{x_i},k_{y_i})|^2\cdot |\hat R(k_{x_i},k_{y_i}) + \hat O^{hyp}(k_{x_i},k_{y_i})|^2$ is the corresponding far-field intensity, which may be compared to the actual measurement $I$.

In Eq. (3), $p=1,2$ decides upon the use of the $\ell _1$ or $\ell _2$ norm, and $\Omega$ denotes a subset of spatial frequencies selected adaptively based on the signal-to-noise ratio. Optimization is computationally intensive and may only be successful with additional constraints imposed on the object $O(x,y)$. We assume that $O$ is sparse and is located in the region first estimated in the deconvolution stage. The size of this region considered here is on the order of $1000$ pixels, while the number of spatial frequencies in $\Omega$ is on the order of $10^2$-$10^4$ (we used the value of $300$ most of the time). Optimization is further simplified for binary objects of a known and fixed or parametrized shape.
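For concreteness, the criterion evaluation over a restricted frequency set $\Omega$ could be sketched as follows. Here $\Omega$ is simply passed in as a list of flat indices into the spectral grid; in the full method the spectrum of the hypothetical object would be computed with the restricted-domain DFT of the Appendix rather than a full FFT.

```python
import numpy as np

# Sketch of the criterion of Eq. (3) evaluated on a restricted frequency set Omega.
# p selects the l1 (p=1) or l2 (p=2) variant of the criterion.
def criterion(O_hyp, I_meas, R_hat, H_hat, omega, p=1):
    O_hat = np.fft.fft2(O_hyp)            # placeholder for the restricted-domain DFT of the Appendix
    I_hyp = np.abs(H_hat) ** 2 * np.abs(R_hat + O_hat) ** 2
    resid = np.abs(I_meas.ravel()[omega] - I_hyp.ravel()[omega]) ** 2
    return np.sum(resid ** p) ** (1.0 / p)
```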

In practice, Eq. (3) has a large number of local minima and cannot be minimized with gradient descent methods. In Fig. 4 we show a typical shape of $F_p$ near the global minimum, as a function of two parameters corresponding to the possible size of the object when its location is already known. The minimum is broad, and some local minima are also present. The situation becomes much more complicated when more degrees of freedom need to be included in the optimization, and in practice the number of local minima makes optimization difficult.

For this reason, we have used a genetic algorithm to find the minimum, which has the additional advantage that additional structural information about the objects is easy to encode. The genetic algorithm was also more robust than Gerchberg-Saxton type iterative optimization.

Finally, it is crucial to minimize the evaluation time of the criterion. Equation (3) is formulated in the Fourier domain, but the constraints on $O(x,y)$ concerning object location, sparsity, shape or parametrization can only be easily specified in the image domain. Thus a 2D Fourier transform has to be calculated every time we evaluate the criterion. It would be extremely inefficient to work with dense zero-padded matrices with high-resolution sampling and use the FFT algorithm for this purpose. Instead, we calculate the discrete Fourier transform directly over small subsets of the signal and spectral domains. This approach was faster by up to $3$-$4$ orders of magnitude than using the FFT from the highly optimized FFTW package included in Matlab. The details of the genetic algorithm are described in the next section and the details of the restricted-domain Fourier transform in the Appendix.

3. Computational imaging beyond the diffraction limit

In order to minimize the cost function defined in Eq. (3) with respect to the positions and sizes of objects, we utilize a dedicated genetic algorithm. We consider a population of solutions, each of which contains information about positions and sizes for a set of objects. The objects have a discrete pixelated structure and are located within the area of $\simeq 1000$ pixels selected with the deconvolution formula (2). The initial population is generated randomly with a uniform distribution of objects over the allowed area. Then, in each algorithm iteration, a new population is generated with the better half of the solutions preserved, and the other half regenerated with three genetic operators: mutation applied to object position, mutation applied to object size, and crossover.
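One generation of such an algorithm might be sketched as below. The solution encoding (one (x, y, size) triple per object) and the mutation magnitudes are our own illustrative choices, not the exact operators used to produce the results in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sketch of one generation of the genetic algorithm described above. A 'solution' is an
# (n_objects, 3) integer array of (x, y, size) in pixels; allowed_pixels is the list of
# candidate (x, y) positions pre-selected by the deconvolution stage.
def next_generation(population, fitness, allowed_pixels):
    order = np.argsort(fitness)                     # lower criterion value = better solution
    survivors = [population[i] for i in order[: len(population) // 2]]
    children = []
    while len(survivors) + len(children) < len(population):
        a, b = rng.choice(len(survivors), size=2, replace=False)
        child = survivors[a].copy()
        j = rng.integers(child.shape[0])            # pick one object of the parent solution
        op = rng.integers(3)
        if op == 0:                                 # mutate object position within the allowed region
            child[j, :2] = allowed_pixels[rng.integers(len(allowed_pixels))]
        elif op == 1:                               # mutate object size by +/- 1 pixel
            child[j, 2] = max(1, child[j, 2] + rng.choice([-1, 1]))
        else:                                       # crossover: take this object from the second parent
            child[j] = survivors[b][j]
        children.append(child)
    return survivors + children
```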

The genetic algorithm can be easily adapted to work with parametrized objects. In this work we also analyze arbitrarily oriented line-shaped objects fully characterized by the coordinates of their ends and their width.

Figure 5(a) presents an example of three sub-wavelength objects reconstructed with the genetic algorithm. The objects and their preliminary localization with deconvolution are shown in Fig. 3. The algorithm does not always converge to the exact solution but clearly achieves a localization accuracy on the order of $\lambda /10$ and resolves objects located at distances smaller than the diffraction limit. In Fig. 5(b) we show a similar example, but with a single linear object parametrized with $5$ coefficients representing the coordinates of the two ends and the thickness. The genetic algorithm is aware of the possibility to parametrize the object, and the coefficients are recovered without error. Optimization with the genetic algorithm is also shown in the supplementary materials (see Visualization 1).

Fig. 3. Recovery of sub-wavelength sized objects from a far-field interference pattern by deconvolution: (a) Three $15$ nm objects placed on the grating (length expressed in units of the diffraction limit). The corresponding far-field interference pattern is shown in Fig. 1(b). (b) Objects recovered from the far-field interference pattern using Eq. (2). Isolated objects can be identified and localized, but the image is diffraction limited and contains a spurious mirror image. The deconvolved image is the result of the first-stage object recovery, which is later refined by numerical optimization beyond the diffraction limit.


Fig. 4. Criterion $F_1$ calculated in the case when two square objects with sizes $53\times 53$ nm and $23\times 23$ nm are located in close vicinity. The tested hypothesis assumes that we know the exact object locations and the size of the larger object. $F_1$ is analyzed as a function of the size of the smaller object. This example shows that although the global minimum appears for the correct hypothesis about the size of the object, other local minima also exist.


Fig. 5. Recovery of sub-wavelength sized objects from a far-field interference pattern with a genetic algorithm (red). The potential areas where the objects may be located (marked in yellow) were initially determined by deconvolution. The actual object locations are shown in green. (a) Three $15$ nm objects found approximately (see Fig. 3(b)); (b) single linear object parametrized with $5$ coefficients found exactly. Optimization with the genetic algorithm is also shown in Visualization 1.


4. Influence of noise on image reconstruction

The presence of noise in the far-field intensity measurement has a profound influence on the possibility of object reconstruction and on its quality. As a measure of noise intensity we will use the ratio of its standard deviation $\sigma _n$ to the intensity of the far field averaged over the measured region, $<I>$ (the same normalization of far-field contributions was used in Fig. 2). Noise $n$ is complex and affects the complex field additively, $I(k_x,k_y)\propto |(\hat R(k_x,k_y) + \hat O(k_x,k_y))\cdot \hat H(k_x,k_y)+ n|^2$. Due to the presence of noise, optimization of the criteria $F_1$ and $F_2$ is not equivalent, and there is no guarantee that either of them still has a global minimum for the correct hypothesis about the microscopic object shapes and locations. The criterion $F_1$ based on the $\ell _1$ norm is more robust to noise than $F_2$. We note that this observation cannot be simply attributed to linking object sparsity to the value of the $\ell _1$ norm, because the criteria are formulated in the Fourier rather than the object domain, and only sparse objects are considered in the genetic algorithm. In Fig. 6 we compare the two criteria calculated in the presence of noise for hypotheses about object shape or location near the correct values. The global minimum of the optimization problem becomes shallower with an increasing level of noise. By comparing $\partial F_i/\partial \sigma _n$ with $\partial F_i/\partial \epsilon$, where $\epsilon$ represents the location error or size error, we notice that the $F_2$ criterion loses sensitivity to these errors much faster than $F_1$ when noise is present. A fundamental question is up to what level of noise $\sigma _n$ the criteria $F_1$ and $F_2$ retain their global minima at the correct location or in its close vicinity. When this level is exceeded, it is no longer possible to recover the objects, independently of the computational resources available. In practice, though, for more complicated problems it may be extremely difficult to find the global minimum, and especially a flat and shallow minimum may easily be missed by numerical optimization.
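A sketch of this noise model is shown below. The exact scaling between the field-level standard deviation $\sigma_n$ of the complex noise and the intensity normalization by $<I>$ is our own assumption for the purpose of illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Sketch of the noise model: complex noise n added to the low-pass-filtered field before
# detection, with the noise level quoted relative to the mean measured intensity <I>.
def noisy_measurement(R_hat, O_hat, H_hat, sigma_rel):
    field = (R_hat + O_hat) * H_hat
    I_clean = np.abs(field) ** 2
    sigma_n = sigma_rel * I_clean.mean()          # noise level relative to <I> (assumed scaling)
    n = sigma_n * (rng.standard_normal(field.shape)
                   + 1j * rng.standard_normal(field.shape)) / np.sqrt(2)
    return np.abs(field + n) ** 2
```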

Fig. 6. Sensitivity of $\ell _1$ and $\ell _2$ norm based criteria to noise and distortions. Criteria $F_1$ and $F_2$ are calculated in the presence of noise when the actual object consists of a single $30$ nm square spot (a),(b) or of two such objects separated by $30$ nm (c). The change of the shape of the minimum with noise indicates that the $\ell _1$-based criterion is more robust to noise. (a) The tested hypothesis assumes a correct size but includes a position error; (b) the tested hypothesis assumes a correct position and proportions but includes a scale error; (c) the tested hypothesis assumes correct positions and sizes but an unknown separation.


We will now focus on localizing a single binary-valued linearly-shaped sub-wavelength sized object positioned on the discrete rectangular pixel grid (such as shown in Fig. 5(b)). A regular object of this kind could be interesting for data-storage or security applications, where a simple far-field interference measurement provides more information than it is possible to see with an optical microscope at the same wavelength. An object that is fully parametrized by $5$ integer numbers is a convenient test example, since for this number of degrees of freedom we are able to compare the operation of an iterative algorithm with a brute-force optimization of the criteria $F_1$ and $F_2$ over the entire parameter space. Such a comparison, calculated at various noise levels, is presented in Fig. 7. The fidelity of the object reconstruction, with values between $0$ and $100\%$, is defined as

$$fidelity=\max \left(1-\frac{S_{err}}{S_{obj}} ,0\right),$$
where $S_{obj}$ represents the actual surface of the objects and $S_{err}$ represents the incorrectly identified object surface (including both omitted and erroneously attributed areas). A fidelity of $100\%$ corresponds to a perfectly identified object, while a fidelity of $0\%$ usually signifies that less than half of the object pixels have been correctly identified. Two important conclusions may be drawn from the results in Fig. 7. The first is that the $\ell _1$-norm based optimization is more robust to noise than the $\ell _2$-based optimization. The second is that sensitivity to noise places a severe limitation on the possibility of sub-wavelength sized object recovery from far-field intensity information. Since we have tested the full parameter space for a rather idealistic object parametrization, we do not expect any optimization method, including methods of compressed sensing or deep learning, to overcome these limitations. At the same time, the recovery is feasible provided the SNR is sufficiently high, which may potentially be achieved by temporal or spatial signal averaging, limiting the aperture, improving experimental stability, or other noise-reduction techniques.
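For binary object masks defined on the pixel grid, Eq. (4) reduces to a few lines. The sketch below assumes both masks are boolean arrays of the same shape, with the true mask non-empty.

```python
import numpy as np

# Sketch of the fidelity measure of Eq. (4) for binary object masks on the pixel grid.
def fidelity(obj_true, obj_rec):
    s_obj = np.count_nonzero(obj_true)                 # actual object surface (in pixels)
    s_err = np.count_nonzero(obj_true != obj_rec)      # omitted + erroneously attributed pixels
    return max(1.0 - s_err / s_obj, 0.0)
```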

Fig. 7. Reconstruction fidelity of a sub-wavelength linear object found from the far-field interference pattern in the presence of noise by optimization of the criterion defined in Eq. (3) with either $p=2$ (a),(c) or $p=1$ (b),(d). An example of the object is shown in Fig. 5(b). (a),(b) Results corresponding to a global minimum obtained by brute-force optimization; (c),(d) results obtained with the genetic algorithm. The object consists of a single $33$ nm-thick line with length varied between $30$ and $120$ nm. Results are extremely sensitive to noise and indicate a better noise robustness obtained with the $\ell _1$ norm than with the $\ell _2$ norm.


Signal intensity in the far field directly affects the value of the SNR, so objects with a larger total surface or spatial density yield a higher SNR than the small objects considered so far. In Fig. 8 we compare the sensitivity of the $F_1$ criterion to noise and distortions for spatially sub-wavelength sized objects distributed within a circular region at two different spatial densities ($\rho =10\%$ and $35\%$). By comparing the results from Figs. 8(c) and (d) we see that increasing the spatial density of objects $\rho$ improves robustness to additive noise. Besides, for a larger surface density $\rho$, $F_1$ as a function of $fidelity$ becomes monotonic, which is desirable for numerical optimization. The catch is that for larger and more complicated objects the $fidelity$ criterion loses its direct relation to resolution. Larger, though still sub-wavelength sized, and denser objects can be more easily localized than the tiny and closely positioned ones considered in Fig. 6. But when it comes to precisely finding the boundary of a selected fragment of the object in the vicinity of another one, the problem becomes as difficult as before.

Fig. 8. Sensitivity of $\ell _1$ based criterion to noise and distortions for spatially distributed sub-wavelength sized objects with varying spatial densities $\rho$. (a),(b) Objects distributed with the surface density of $\rho =10\%$ over a circular area shown in dark blue, and object locations corresponding to a hypothesis having fidelity equal to (a) $80\%$ or (b) $0\%$ drawn in light blue. (c),(d) Normalized criterion $F_1$ as a function of fidelity of the hypothesis, with different curves corresponding to different levels of noise. Surface density equals $\rho =10\%$ (c), and $\rho =35\%$ (d). Increasing object density and surface improves noise robustness of the $\ell _1$ based criterion but the fidelity criterion loses a clear connection to resolution.


5. Conclusion

We have examined the possibility of recovering geometrical information on sub-wavelength sized objects, not limited to their location, from a far-field interference pattern obtained under coherent illumination. The far-field signature of the microscopic objects considered by us does not include spatial frequencies beyond the cut-off, i.e. those corresponding to evanescent waves or spatial frequencies lost in classical imaging optics with an objective having a given numerical aperture (with $NA=1.49$ assumed in the presented results). We have proposed a two-step object recovery algorithm, with the first diffraction-limited step based on deconvolution and the second based on numerical optimization. This second step involves the use of a genetic algorithm and a restricted-domain Fourier transform (described in the Appendix), the purpose of which is to speed up calculation of the far field for sparse objects when only a limited part of the Fourier coefficients is required. Overall, the method makes it possible to recover sub-wavelength sized objects with sizes on the order of $\lambda /10$ from far-field information, although for more complicated scenes the method becomes computationally intensive and does not always converge to the correct result. Sub-wavelength object recovery is only possible at a very low level of noise. Even with a known object parametrization, a small level of noise introduces false minima to the cost functions, making the true hypothesis impossible to distinguish from alternatives. This limitation concerns object recovery with both $\ell _1$- and $\ell _2$-based criterion functions and is unlikely to be mitigated by the use of sophisticated optimization frameworks such as compressive sensing or deep learning. At the same time, the robustness of the $\ell _1$-norm based criterion to noise is considerably better than that of its $\ell _2$-based counterpart. Compressive sensing heavily relies on the $\ell _1$ norm as a measure of signal sparsity, which leads to convex, computationally tractable optimization criteria. Perhaps the noise-robustness properties of the $\ell _1$ norm are, in effect, underestimated. Our optimization of the $\ell _1$ norm applied to dense Fourier-domain information is, to the best of our knowledge, original.

It is unlikely that information recovery from a far-field signature could become a significant alternative to existing superresolving wide-field microscopic techniques. Prospective applications may be connected with security purposes, where a specific microscopic object that cannot be analyzed by classical optical microscopy could be detected, localized and verified by examination of the far-field diffraction pattern. Another area of interest is high-density data storage, where the geometry of sub-wavelength sized features could be used to enhance the amount of information readable without shortening the laser wavelength.

Appendix

It is hard to overestimate the role of the discrete Fourier transform (DFT) in numerical techniques used in optics, especially in areas related to Fresnel and Fraunhofer diffraction [46] that depend on the use of the 2D DFT. The DFT is heavily used in iterative multiple-projection methods applied in the design of diffractive elements, in deconvolution, and in phase retrieval algorithms [47-50], as well as for pulse retrieval in frequency-resolved optical gating, etc. The fast Fourier transform (FFT) algorithm [51] is certainly the most commonly used. However, the need for uniform sampling in both the signal and frequency domains at the same time implies operating on large zero-padded arrays, which is inefficient. Also, the shifted Fresnel transform [52], as well as techniques involving nonuniform sampling with Bluestein's algorithm [53], do not allow one to operate on arbitrarily selected parts of the signal and Fourier domains.

This Appendix describes the way we calculate the 2D discrete Fourier transform (DFT) over restricted parts of the image and spectral domains. Sample code is available at [54]. We apply the 2D DFT definition directly to sparse matrices and express the result as a simple matrix-vector product, where the matrix is dynamically constructed from a set of values precalculated in advance. This allows us to efficiently calculate the 2D DFT of arbitrarily distributed sparse objects at arbitrarily selected frequencies. We consider this result an important coding detail rather than a novel method, which, however, may give a huge speedup to iterative sparse diffraction calculations. Using the restricted-domain DFT we were able to decrease the calculation time by $2$ to $3$ orders of magnitude, compared to the use of an ordinary FFT with full matrices.

The 2D DFT of a vector $v$ (indexed with two indices) is

$$\hat v_{k_x,k_y}\propto \sum_{m_x=0}^{N_x-1} \sum_{m_y=0}^{N_y-1} v_{m_x,m_y}\cdot \exp\left(2\pi i \left(\frac{m_x k_x}{N_x} +\frac{m_y k_y}{N_y} \right)\right).$$
Let $N$ be the least common multiple of $N_x$ and $N_y$, with $N=P_x\cdot N_x=P_y\cdot N_y$. Let $\phi (l)=\exp (2\pi i l/N)$ with $l=0,\dots,N-1$. Now the 2D DFT may be written as,
$$\hat v_{k_x,k_y}\propto \sum_{m_x=0}^{N_x-1} \sum_{m_y=0}^{N_y-1} v_{m_x,m_y}\cdot \phi\left((m_x P_x k_x+m_y P_y k_y)mod_N\right),$$
where $mod$ is the modulo operation. The same can be written with a single index assuming column ordering of the images, $k=k_x N_y+k_y$, $m=m_x N_y+m_y$,
$$\hat v_{k\in\Omega}\propto \sum_{m\in\Theta} v_m\cdot \phi\left((m_x(m)\cdot P_x\cdot k_x(k)+m_y(m)\cdot P_y\cdot k_y(k))mod_N\right).$$
Assume that $v$ is a sparse image with only $M\ll N_x\cdot N_y$ elements, making a set $\Theta$, and that we need to determine only $K\ll N_x\cdot N_y$ Fourier coefficients, which make a set $\Omega$. Since the $N$ elements of $\phi$ may be precalculated in advance, according to Eq. (7) we only need to perform $K\cdot M$ multiplications in a single matrix-by-vector multiplication. A full 2D FFT would require $N_y\cdot N_x \cdot \log_2 (N_x N_y)$ operations in the best case, when $N_x$ and $N_y$ are powers of $2$. Additional savings in computational time come from a much lower memory usage and the possibility of storing sparse matrices in the CPU cache.
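A minimal Python sketch of Eq. (7) is given below; the reference code is available at [54], and this translation, including the choice to build the phase-index table with integer outer products, is our own illustration.

```python
import numpy as np

# Sketch of the restricted-domain 2D DFT of Eq. (7): the DFT of a sparse image reduces to
# a single (K x M) matrix-vector product built from the phase table phi.
def restricted_dft(values, mx, my, kx, ky, Nx, Ny):
    """values[m] at image pixels (mx[m], my[m]); returns coefficients at (kx[k], ky[k])."""
    N = np.lcm(Nx, Ny)                            # least common multiple: N = Px*Nx = Py*Ny
    Px, Py = N // Nx, N // Ny
    phi = np.exp(2j * np.pi * np.arange(N) / N)   # in practice precalculated once and reused
    idx = (np.outer(kx * Px, mx) + np.outer(ky * Py, my)) % N   # (K, M) table of phase indices
    return phi[idx] @ values                      # K*M multiplications instead of a full FFT
```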

Funding

Narodowe Centrum Nauki (UMO-2017/27/B/ST7/00885).

Disclosures

The authors declare no conflicts of interest.

References

1. J. N. Mait, G. W. Euliss, and R. A. Athale, “Computational imaging,” Adv. Opt. Photonics 10(2), 409–483 (2018). [CrossRef]  

2. A. Stefanoiu, G. Scrofani, G. Saavedra, M. Martínez-Corral, and T. Lasser, “What about computational super-resolution in fluorescence fourier light field microscopy?” Opt. Express 28(11), 16554–16568 (2020). [CrossRef]  

3. G. Barbastathis, A. Ozcan, and G. Situ, “On the use of deep learning for computational imaging,” Optica 6(8), 921–943 (2019). [CrossRef]  

4. Y. Rivenson, Z. Gorocs, H. Gunaydin, Y. Zhang, H. Wang, and A. Ozcan, “Deep learning microscopy,” Optica 4(11), 1437–1443 (2017). [CrossRef]  

5. A. Szameit, Y. Shechtman, E. Osherovich, E. Bullkich, P. Sidorenko, H. Dana, S. Steiner, E. B. Kley, S. Gazit, T. Cohen-Hyams, S. Shoham, M. Zibulevsky, I. Yavneh, Y. C. Eldar, O. Cohen, and M. Segev, “Sparsity-based single-shot subwavelength coherent diffractive imaging,” Nat. Mater. 11(5), 455–459 (2012). [CrossRef]  

6. L. Zhu, W. Zhang, D. Elnatan, and B. Huang, “Faster storm using compressed sensing,” Nat. Methods 9(7), 721–723 (2012). [CrossRef]  

7. A. Ghosh, D. J. Roth, L. H. Nicholls, W. P. Wardley, A. V. Zayats, and V. A. Podolskiy, “Machine learning – based diffractive imaging with subwavelength resolution,” arXiv:2005.03595 (2020).

8. C. M. Roberts, N. Olivier, W. P. Wardley, S. Inampudi, W. Dickson, A. V. Zayats, and V. A. Podolskiy, “Interscale mixing microscopy: far-field imaging beyond the diffraction limit,” Optica 3(8), 803–808 (2016). [CrossRef]  

9. S. Kawata, ed., Near Field Optics and Surface Plasmon Polaritons (Springer, 2001).

10. S. W. Hell, “Far-field optical nanoscopy,” Science 316(5828), 1153–1158 (2007). [CrossRef]  

11. M. Rajaei, M. A. Almajhadi, J. Zeng, and H. K. Wickramasinghe, “Near-field nanoprobing using si tip-au nanoparticle photoinduced force microscopy with 120:1 signal-to-noise ratio, sub-6-nm resolution,” Opt. Express 26(20), 26365–26376 (2018). [CrossRef]  

12. M. J. Rust, M. Bates, and X. Zhuang, “Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM),” Nat. Methods 3(10), 793–796 (2006). [CrossRef]  

13. E. Betzig, G. H. Patterson, R. Sougrat, O. W. Lindwasser, S. Olenych, J. S. Bonifacino, M. W. Davidson, J. Lippincott-Schwartz, and H. F. Hess, “Imaging intracellular fluorescent proteins at nanometer resolution,” Science 313(5793), 1642–1645 (2006). [CrossRef]  

14. B. Harke, J. Keller, C. K. Ullal, V. Westphal, A. Schönle, and S. W. Hell, “Resolution scaling in STED microscopy,” Opt. Express 16(6), 4154–4162 (2008). [CrossRef]  

15. S. Hell and J. Wichmann, “Breaking the diffraction resolution limit by stimulated emission,” Opt. Lett. 19(11), 780–782 (1994). [CrossRef]  

16. C. Cremer and B. R. Masters, “Resolution enhancement techniques in microscopy,” Eur. Phys. J. H 38(3), 281–344 (2013). [CrossRef]  

17. J. Harris, “Resolving power and decision theory,” J. Opt. Soc. Am. 54(5), 606–611 (1964). [CrossRef]  

18. W. Lukosz, “Optical systems with resolving powers exceeding the classical limit. II,” J. Opt. Soc. Am. 57(7), 932–941 (1967). [CrossRef]  

19. D. L. Fried, “Resolution, signal-to-noise ratio, and measurement precision,” J. Opt. Soc. Am. 69(3), 399–406 (1979). [CrossRef]  

20. I. J. Cox and C. J. Sheppard, “Information capacity and resolution in an optical system,” J. Opt. Soc. Am. A 3(8), 1152–1158 (1986). [CrossRef]  

21. C. J. Sheppard, “Fundamentals of superresolution,” Micron 38(2), 165–169 (2007). [CrossRef]  

22. A. J. den Dekker and A. van den Bos, “Resolution: a survey,” J. Opt. Soc. Am. A 14(3), 547 (1997). [CrossRef]  

23. J. Bechhoefer, “What is superresolution microscopy?” Am. J. Phys. 83(1), 22–29 (2015). [CrossRef]  

24. M. G. Gustaffson, “Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy,” J. Microsc. 198(2), 82–87 (2000). [CrossRef]  

25. J.-B. Sibarita, “Deconvolution microscopy,” Adv. Biochem. Eng./Biotechnol. 95, 201–243 (2005). [CrossRef]  

26. C. Liu, Z. Liu, F. Bo, Y. Wang, and J. Zhu, “Super-resolution digital holographic imaging method,” Appl. Phys. Lett. 81(17), 3143–3145 (2002). [CrossRef]  

27. T. Latychevskaia and H.-W. Fink, “Coherent microscopy at resolution beyond diffraction limit using post-experimental data extrapolation,” Appl. Phys. Lett. 103(20), 204105 (2013). [CrossRef]  

28. S. Hell and E. H. K. Stelzer, “Fundamental improvement of resolution with a 4Pi-confocal fluorescence microscope using two-photon excitation,” Opt. Commun. 93(5-6), 277–282 (1992). [CrossRef]  

29. S. W. Hell, R. Schmidt, and A. Egner, “Diffraction-unlimited three-dimensional optical nanoscopy with opposing lenses,” Nat. Photonics 3(7), 381–387 (2009). [CrossRef]  

30. W. Lukosz, “Optical systems with resolving powers exceeding the classical limit,” J. Opt. Soc. Am. 56(11), 1463–1472 (1966). [CrossRef]  

31. C. J. Sheppard and K. Larkin, “Information capacity and resolution in three-dimensional imaging,” Optik 113(12), 548–550 (2003). [CrossRef]  

32. M. R. Andrews, P. P. Mitra, and R. deCarvalho, “Tripling the capacity of wireless communications using electromagnetic polarization,” Nature 409(6818), 316–318 (2001). [CrossRef]  

33. A. Sentenac, P. C. Chaumet, and K. Belkebi, “Beyond the Rayleigh criterion: Grating assisted far-field optical diffraction tomography,” Phys. Rev. Lett. 97(24), 243901 (2006). [CrossRef]  

34. J. Garcia, Z. Zalevsky, and D. Fixler, “Synthetic aperture superresolution by speckle pattern projection,” Opt. Express 13(16), 6073–6078 (2005). [CrossRef]  

35. M. Shergei, Y. Beiderman, J. García, and Z. Zalevsky, “Rounding noise effects’ reduction for estimated movement of speckle patterns,” Opt. Express 26(19), 24663–24677 (2018). [CrossRef]  

36. Z. Liu, S. Durant, H. Lee, Y. Pikus, N. Fang, Y. Xiong, C. Sun, and X. Zhang, “Far-field optical superlens,” Nano Lett. 7(2), 403–408 (2007). [CrossRef]  

37. E. Narimanov, “Hyperstructured illumination,” ACS Photonics 3(6), 1090–1094 (2016). [CrossRef]  

38. Q. Ma, H. Qian, S. Montoya, W. Bao, L. Ferrari, H. Hu, E. Khan, Y. Wang, E. E. Fullerton, E. E. Narimanov, X. Zhang, and Z. Liu, “Experimental demonstration of hyperbolic metamaterial assisted illumination nanoscopy,” ACS Nano 12(11), 11316–11322 (2018). [CrossRef]  

39. G. Chen, Z.-Q. Wen, and C.-W. Qiu, “Superoscillation: from physics to optical applications,” Light: Sci. Appl. 8(1), 56 (2019). [CrossRef]  

40. G. H. Yuan, E. T. F. Rogers, and N. I. Zheludev, ““Plasmonics” in free space: observation of giant wavevectors, vortices, and energy backflow in super-oscillatory optical fields,” Light: Sci. Appl. 8(1), 2 (2019). [CrossRef]  

41. T. Pu, V. Savinov, G. Yuan, N. Papasimakis, and N. I. Zheludev, “Unlabelled Far-field Deeply Subwavelength Superoscillatory Imaging (DSSI),” arXiv:1908.00946 (2019).

42. Z. Wang, W. Guo, L. Li, B. Lukyanchuk, A. Khan, Z. Liu, Z. Chen, and M. Hong, “Optical virtual imaging at 50 nm lateral resolution with a white-light nanoscope,” Nat. Commun. 2(1), 218 (2011). [CrossRef]  

43. S. Inampudi, N. Kuhta, and V. A. Podolskiy, “Interscale mixing microscopy: numerically stable imaging of wavelength- scale objects with sub-wavelength resolution and far field measurements,” Opt. Express 23(3), 2753–2763 (2015). [CrossRef]  

44. L.-H. Yeh, L. Tian, and L. Waller, “Structured illumination microscopy with unknown patterns and a statistical prior,” Opt. Express 8(2), 695–711 (2017). [CrossRef]  

45. O. I. Helle, F. T. Dullo, M. Lahrberg, J.-C. Tinguely, O. G. Helleso, and B. S. Ahluwalia, “Structured illumination microscopy using a photonic chip,” Nat. Photonics 14(7), 431–438 (2020). [CrossRef]  

46. D. Mas, J. Garcia, C. Ferreira, L. M. Bernardo, and F. Marinho, “Fast algorithms for free-space diffraction patterns calculation,” Opt. Commun. 164(4-6), 233–245 (1999). [CrossRef]  

47. J. R. Fienup, “Phase retrieval algorithms: a personal tour [invited],” Appl. Opt. 52(1), 45–56 (2013). [CrossRef]  

48. H. H. Bauschke, P. L. Combettes, and D. R. Luke, “Phase retrieval, error reduction algorithm, and Fienup variants: a view from convex optimization,” J. Opt. Soc. Am. A 19(7), 1334–1345 (2002). [CrossRef]  

49. F. Momey, L. Denis, T. Olivier, and C. Fournier, “From Fienup’s phase retrieval techniques to regularized inversion for in-line holography: tutorial,” J. Opt. Soc. Am. A 36(12), D62–D80 (2019). [CrossRef]  

50. T. Latychevskaia and H.-W. Fink, “Reconstruction of purely absorbing, absorbing and phase-shifting, and strong phase-shifting objects from their single-shot in-line holograms,” Appl. Opt. 54(13), 3925–3932 (2015). [CrossRef]  

51. J. Cooley and J. Tukey, “An algorithm for the machine calculation of complex Fourier series,” Math. Comp. 19(90), 297 (1965). [CrossRef]  

52. R. P. Muffoletto, J. M. Tyler, and J. E. Tohline, “Shifted Fresnel diffraction for computational holography,” Opt. Express 15(9), 5631–5640 (2007). [CrossRef]  

53. Y. Hu, Z. Wang, X. Wang, S. Ji, C. Zhang, J. Li, W. Zhu, D. Wu, and J. Chu, “Efficient full-path optical calculation of scalar and vector diffraction using the Bluestein method,” Light: Sci. Appl. 9(1), 119 (2020). [CrossRef]  

54. M. Bancerek, K. Czajkowski, and R. Kotynski, Restricted domain Fourier transform (code) (2020). https://github.com/rkotynski/RDFT.

Supplementary Material (1)

Visualization 1: Convergence of the genetic algorithm applied to recover a superresolution image.
