
Camera phasing in multi-aperture coherent imaging

Open Access

Abstract

The resolution of a diffraction-limited imaging system is inversely proportional to the aperture size. Instead of using a single large aperture, multiple small apertures can be used to synthesize a large aperture. Such a multi-aperture system is modular, and typically more reliable and less costly. On the other hand, a multi-aperture system requires phasing the sub-apertures to within a fraction of a wavelength. So far in the literature, only piston, tip, and tilt inter-aperture errors have been addressed. In this paper, we present an approach to correct for rotational and translational errors as well.

© 2012 Optical Society of America

1. Introduction

A conventional imaging system with a circular, aberration-free aperture is said to be diffraction-limited; its point spread function (PSF) is an Airy pattern whose width is inversely proportional to the aperture diameter. Aberrations present in the aperture spread energy away from the PSF peak, resulting in blurred imagery. Resolution improves with increasing aperture diameter, but so do the weight, volume, and cost. Instead of using a single large aperture, multiple small apertures can be combined to synthesize a large aperture. This aperture synthesis requires phasing the sub-apertures to within a fraction of a wavelength, which is challenging when the wavelength is on the order of one micron. The multi-aperture imaging system described here uses a coherent detection method to measure the field (both amplitude and phase) within each sub-aperture. In a digital, post-detection process, each measured sub-aperture field is placed into a common pupil plane at a position corresponding to the physical location of the sub-aperture. The composite pupil plane field is then digitally propagated to the image plane. If no relative phase errors exist between the sub-apertures, the sub-apertures are said to be phased. If the sub-apertures are diffraction-limited and phased, an image is formed with improved resolution based on the synthetic array dimensions [1].

1.1. Optical field measurement via spatial heterodyne detection

In order to synthesize a large aperture from multiple small apertures, the complex-valued field radiating off the object must be measured within each sub-aperture. The object is flood illuminated with a coherent laser source. Coherent detection is accomplished at the receiver by optically mixing the field radiating off the object with a local oscillator reference beam [2, 3]. In the spatial heterodyne coherent detection technique, a two-dimensional detector array at a sub-aperture records

$$I_k(x,y) = |a_k(x,y) + r(x,y)|^2,$$
where $a_k(x, y)$ is the complex object field captured at the $k$th sub-aperture and $r(x, y) = e^{j2\pi(u_0 x + v_0 y)}$ is the tilted reference beam, also known as the local oscillator. The complex-valued field is obtained by taking the Fourier transform of $I_k(x, y)$. Defining $\hat{a}_k(u, v)$ and $\hat{r}(u, v)$ as the Fourier transforms of the pupil plane field $a_k(x, y)$ and the local oscillator $r(x, y)$, the Fourier transform of $I_k(x, y)$ becomes
$$\mathcal{F}(I_k(x,y)) = \mathcal{F}(|a_k(x,y)+r(x,y)|^2) = \mathcal{F}(|a_k(x,y)|^2) + \mathcal{F}(|r(x,y)|^2) + \mathcal{F}(a_k(x,y)\,r^*(x,y)) + \mathcal{F}(a_k^*(x,y)\,r(x,y)) = R(\hat{a}_k(u,v)) + R(\hat{r}(u,v)) + \hat{a}_k(u-u_0, v-v_0) + \hat{a}_k^*(u+u_0, v+v_0),$$
where $R(\cdot)$ is the autocorrelation function, and the offset $(u_0, v_0)$ is due to the tilted local oscillator in the spatial heterodyne mixing. The $\hat{a}_k(u, v)$ term is spatially separated from its conjugate term and the autocorrelation terms by virtue of the spatial offset. Therefore, the desired pupil plane field, $\hat{a}_k(u, v)$, is obtained by cropping the region of $\mathcal{F}(I_k(x, y))$ around $(u_0, v_0)$. In addition to the spatial heterodyne technique, the object field $a_k(x, y)$ can be measured using other interferometric methods such as phase-shifting interferometry or estimated from the object intensity using a phase retrieval algorithm [4, 5].
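As a concrete illustration of this demodulation step, the following Python sketch (NumPy assumed) isolates the $\hat{a}_k$ term by Fourier transforming the recorded intensity and cropping a window around the local-oscillator offset; the function name, pixel offsets, and crop size are illustrative and not part of the original system.

```python
import numpy as np

def demodulate_hologram(I_k, u0_pix, v0_pix, half_width):
    """Recover the sub-aperture field term from the recorded intensity I_k.

    u0_pix, v0_pix : offset of the desired term in FFT pixels (set by the LO tilt).
    half_width     : half-size of the square window cropped around that term.
    """
    F = np.fft.fftshift(np.fft.fft2(I_k))            # Fourier transform of the hologram
    cy = I_k.shape[0] // 2 + v0_pix                  # row of the desired term
    cx = I_k.shape[1] // 2 + u0_pix                  # column of the desired term
    # Crop around (u0, v0); the LO tilt must be large enough that this window
    # does not overlap the autocorrelation terms or the conjugate term.
    return F[cy - half_width:cy + half_width, cx - half_width:cx + half_width]
```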

1.2. Aperture synthesis

The field radiating from the object is measured across each of the $K$ sub-aperture pupils. Each measured pupil plane field $a_k(x, y)$ is placed on a blank aperture field at its corresponding spatial location to form a composite pupil field. Mathematically, the composite pupil field can be written as $a_{\mathrm{comp}}^{(1:K)}(x,y) = \sum_{k=1}^{K} a_k(x - x_k, y - y_k)$, where $(x_k, y_k)$ is the center location of the $k$th sub-aperture, and $K$ is the number of sub-apertures. (The notation $a_{\mathrm{comp}}^{(1:K)}(x, y)$ indicates that all sub-apertures from 1 to $K$ are used to form the composite, as opposed to $a_{\mathrm{comp}}^{(1,k)}(x, y)$, where the first and $k$th sub-apertures are used to form the composite.) The composite pupil field is then digitally propagated to the image plane, where the magnitude squared of the field becomes the intensity image. In Fig. 1, we show realizations $b(u,v) = |\hat{a}_{\mathrm{comp}}^{(1:K)}(u, v)|^2$ obtained with single-aperture ($K = 1$) and multi-aperture ($K = 3$) configurations. First, notice that a single realization is of poor quality due to speckle noise. When multiple realizations are averaged, the image quality improves since the statistically independent realizations of speckle noise tend to cancel out. It is known that, in the presence of speckle noise, the signal-to-noise ratio grows with the square root of the number of realizations averaged [6]. Second, notice that in the three-aperture configuration, the resolution improves along the horizontal axis but not the vertical axis, which is due to the arrangement of the sub-apertures [7]. (A more detailed description of the simulations is given in Section 3.)
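The composite-and-propagate operation, together with speckle averaging, can be sketched as follows; the array placement by top-left corner, the function names, and the use of a plain FFT as the digital propagation are illustrative assumptions rather than the authors' exact implementation.

```python
import numpy as np

def composite_pupil(fields, corners, grid_shape):
    """Place each measured sub-aperture field on a blank composite pupil grid."""
    pupil = np.zeros(grid_shape, dtype=complex)
    for a_k, (row, col) in zip(fields, corners):
        h, w = a_k.shape
        pupil[row:row + h, col:col + w] += a_k       # a_k(x - x_k, y - y_k)
    return pupil

def image_intensity(pupil):
    """Digitally propagate the composite pupil to the image plane and take |.|^2."""
    return np.abs(np.fft.fftshift(np.fft.fft2(pupil)))**2

# Averaging R independent speckle realizations improves the SNR roughly as sqrt(R):
# b_avg = np.mean([image_intensity(composite_pupil(f, corners, shape))
#                  for f in realizations], axis=0)
```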


Fig. 1 (a) Single aperture configuration; magnitude of the pupil field ak(x, y) is shown. (b) Three-aperture configuration; magnitude of the pupil field ak(x, y) is shown. (c) One realization b(u,v) from single aperture. (d) Average of 15 realizations from single aperture. (e) Average of 60 realizations from single aperture. (f) Average of 60 realizations from three-aperture configuration.


The imaging process requires correction of aberrations, including atmospheric turbulence, intra-aperture aberrations such as defocus, astigmatism, coma, and spherical aberration, and inter-aperture aberrations such as piston, tip, and tilt. It has been shown that these aberrations can be modeled as phase distortions in the pupil plane and fixed through phase correction (also known as phasing). A commonly used method is to define a sharpness measure on the object image and determine the optimal weights of Zernike polynomials applied to the phases of the sub-aperture fields $a_k(x, y)$ [8–11]. The Zernike polynomials, illustrated in Fig. 2, form an orthogonal basis, and have been used extensively for identifying and correcting lens aberrations over circular apertures [12].
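For reference, the low-order Zernike modes discussed in this paper can be evaluated on a unit-disk grid as in the sketch below; the (unnormalized) closed forms are the standard ones, and the function name and grid size are illustrative.

```python
import numpy as np

def zernike_basis(npix):
    """Evaluate a few low-order Zernike polynomials Z_n^m on an npix x npix unit disk."""
    y, x = np.mgrid[-1:1:1j * npix, -1:1:1j * npix]
    rho, phi = np.hypot(x, y), np.arctan2(y, x)
    disk = (rho <= 1.0).astype(float)
    Z = {
        (0, 0):  np.ones_like(rho),                     # piston
        (1, -1): rho * np.sin(phi),                     # tip
        (1, 1):  rho * np.cos(phi),                     # tilt
        (2, -2): rho**2 * np.sin(2 * phi),              # oblique astigmatism
        (2, 0):  2 * rho**2 - 1,                        # defocus
        (2, 2):  rho**2 * np.cos(2 * phi),              # vertical astigmatism
        (3, -1): (3 * rho**3 - 2 * rho) * np.sin(phi),  # vertical coma
        (3, 1):  (3 * rho**3 - 2 * rho) * np.cos(phi),  # horizontal coma
        (4, 0):  6 * rho**4 - 6 * rho**2 + 1,           # primary spherical
    }
    return {nm: Zp * disk for nm, Zp in Z.items()}
```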


Fig. 2 Zernike polynomials up to fifth degree are shown.


So far in the literature, only piston, tip, and tilt inter-aperture errors have been addressed. In this paper, we will show how to correct rotational and translational errors in addition to piston/tip/tilt errors. We will show that rotational errors can be corrected through a coordinate transformation followed by a Fourier-domain phase correction, while translational errors and piston/tip/tilt errors are corrected in the image plane and pupil plane, respectively.

In Sections 2.1 and 2.2, we will review the Zernike-based intra-aperture and inter-aperture (piston/tip/tilt) corrections, respectively. We will then present our underlying idea for correcting rotational errors in Section 2.3, and translational errors in Section 2.4. The algorithm to correct intra- and inter-aperture aberrations (including piston/tip/tilt, rotation, and translation) in multi-aperture systems is presented in Section 2.5. In Section 3, we will provide experimental results to show the performance of the algorithm. Conclusions are given in Section 4.

2. Phase correction of sub-apertures

2.1. Intra-aperture correction

Intra-aperture aberrations, including defocus, astigmatism, coma, and spherical aberration, can be modeled as phase distortions of the pupil field $a_k(x, y)$, and can be corrected by determining the Zernike polynomial weights that optimize a measure $S(\cdot)$ applied to the reconstructed image $b(u,v)$. Depending on whether the measure $S(\cdot)$ outputs a low or a high value for sharp images, the optimization is either a minimization or a maximization problem. In this paper, we use the convention of minimizing the measure $S(\cdot)$. As the correction is applied to the phase of each pupil field, the optimal Zernike coefficients $\hat{w}_{k,p}$ for the $k$th sub-aperture are

$$\hat{w}_{k,1}, \ldots, \hat{w}_{k,P} = \arg\min \left\{ S\!\left( \left| \mathcal{F}\!\left( a_k(x,y)\, e^{j \sum_{p=1}^{P} w_{k,p} Z_p(x,y)} \right) \right|^2 \right) \right\},$$
where $Z_1(x, y), \ldots, Z_P(x, y)$ are the Zernike polynomials used. When there are multiple realizations, the input to the measure $S(\cdot)$ is the average of all realizations.
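A minimal sketch of this intra-aperture optimization follows, assuming a NumPy pupil field `a_k`, a list `Z` of Zernike arrays (e.g. from `zernike_basis` above), and SciPy's general-purpose minimizer in place of any specific solver; the 0.5-exponent power measure matches the convention of minimizing $S(\cdot)$.

```python
import numpy as np
from scipy.optimize import minimize

def sharpness(b):
    """Generalized power measure sum |b|^0.5; smaller values mean sharper images [10]."""
    return np.sum(np.abs(b)**0.5)

def correct_intra_aperture(a_k, Z):
    """Estimate Zernike coefficients w_{k,p} that minimize S on the reconstructed image."""
    def cost(w):
        phase = sum(wp * Zp for wp, Zp in zip(w, Z))
        b = np.abs(np.fft.fft2(a_k * np.exp(1j * phase)))**2
        return sharpness(b)
    return minimize(cost, x0=np.zeros(len(Z)), method="BFGS").x
```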

2.2. Piston/Tip/Tilt correction

The inter-aperture aberrations piston, tip, and tilt can also be modeled as phase distortions in the pupil plane and are typically corrected using the Zernike polynomials in a similar way. In the case of multiple sub-apertures, one of the sub-apertures is taken as the reference sub-aperture, and all other sub-apertures are corrected with respect to the reference. Setting the first sub-aperture as the reference, the composite of the first and the $k$th fields is

$$a_{\mathrm{comp}}^{(1,k)}(x,y) = a_1(x - x_1, y - y_1) + a_k(x - x_k, y - y_k)\, e^{j \sum_{p=1}^{P} w_{k,p} Z_p(x,y)},$$
where the Zernike polynomials $Z_p$ are the piston, tip, and tilt terms $Z_0^0$, $Z_1^{-1}$, and $Z_1^1$ from Fig. 2, and $w_{k,p}$ are the coefficients of these polynomials. The optimal values of these coefficients are determined by
$$\hat{w}_{k,1}, \ldots, \hat{w}_{k,P} = \arg\min \left\{ S\!\left( \left| \mathcal{F}\!\left( a_{\mathrm{comp}}^{(1,k)}(x,y) \right) \right|^2 \right) \right\}.$$
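The piston/tip/tilt step follows the same optimization pattern, now over the composite of the reference and the $k$th sub-aperture. In the sketch below, `a1_grid` and `ak_grid` are assumed to be the two fields already embedded on the common composite grid, and `Z_ptt` holds the piston, tip, and tilt polynomials evaluated on that grid; these names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def correct_ptt(a1_grid, ak_grid, Z_ptt, sharpness):
    """Estimate piston/tip/tilt coefficients for sub-aperture k relative to the reference."""
    def cost(w):
        phase = sum(wp * Zp for wp, Zp in zip(w, Z_ptt))
        a_comp = a1_grid + ak_grid * np.exp(1j * phase)   # composite of Eq. (4)
        return sharpness(np.abs(np.fft.fft2(a_comp))**2)  # measure of Eq. (5)
    return minimize(cost, x0=np.zeros(len(Z_ptt)), method="BFGS").x
```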

2.3. Rotation correction

In addition to the piston/tip/tilt errors, it is possible that the sensor at a sub-aperture is rotated by some angle $\theta$ about the optical axis. In other words, the measured pupil plane field at a sub-aperture is $a_k(x', y')$ instead of $a_k(x, y)$, where $(x', y') = (x\cos\theta - y\sin\theta,\ x\sin\theta + y\cos\theta)$. Such a rotational error cannot be fixed with pupil plane phasing using Zernike polynomials; it would also degrade the accuracy of the piston/tip/tilt correction.

The problem can be solved by transforming the pupil plane field from the Cartesian coordinate system to the polar coordinate system, where the rotation becomes a circular shift along the angular axis. (The polar coordinates are calculated as $(\rho, \phi) = (\sqrt{x^2 + y^2}, \tan^{-1}(y/x))$; an example is provided in Fig. 3.) Defining $a_k^{(\mathrm{polar})}(\rho, \phi)$ as the coordinate-transformed version of $a_k(x, y)$, the rotated field $a_k(x', y')$ will have the polar version $a_k^{(\mathrm{polar})}(\rho, \phi + \theta)$. Let $\hat{a}_k^{(\mathrm{polar})}(u_\rho, v_\phi)$ be the Fourier transform of $a_k^{(\mathrm{polar})}(\rho, \phi)$; then the shifted version $a_k^{(\mathrm{polar})}(\rho, \phi + \theta)$ will have a Fourier transform of $\hat{a}_k^{(\mathrm{polar})}(u_\rho, v_\phi)\, e^{j2\pi\theta v_\phi}$. That is, a rotational error in $a_k(x, y)$ corresponds to a linear phase shift in $\hat{a}_k^{(\mathrm{polar})}(u_\rho, v_\phi)$. This phase shift can be corrected similarly to the way piston/tip/tilt errors are fixed. Following the same convention, the rotational correction of the $k$th sub-aperture is achieved by determining the optimal coefficient

$$\hat{w}_k = \arg\min \left\{ S\!\left( \left| \mathcal{F}\!\left( a_{\mathrm{comp}}^{(1,k)}(x,y) \right) \right|^2 \right) \right\},$$
in which case the composite is formed as
$$a_{\mathrm{comp}}^{(1,k)}(x,y) = a_1(x - x_1, y - y_1) + \mathcal{P}^{-1}\!\left[ \mathcal{F}^{-1}\!\left( \mathcal{F}\!\left( \mathcal{P}\!\left[ a_k(x - x_k, y - y_k) \right] \right) e^{j w_k Z_1^1(u_\rho, v_\phi)} \right) \right],$$
where the operator $\mathcal{P}$ transforms from Cartesian to polar coordinates, and $\mathcal{P}^{-1}$ is the inverse transformation.
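A sketch of the $\mathcal{P}$ operator is given below, assuming SciPy's `map_coordinates` for the Cartesian-to-polar resampling; the grid sizes and interpolation order are illustrative. Once the field is on a $(\rho, \phi)$ grid, the rotation shows up as a circular shift along $\phi$, i.e. a linear phase along $v_\phi$ after a Fourier transform, and the coefficient $\hat{w}_k$ can be estimated with the same sharpness minimization used for piston/tip/tilt.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def to_polar(a, n_rho=256, n_phi=256):
    """P: resample a square complex field onto a (rho, phi) grid (bilinear)."""
    n = a.shape[0]
    rho = np.linspace(0.0, n / 2 - 1, n_rho)
    phi = np.linspace(-np.pi, np.pi, n_phi, endpoint=False)
    R, PHI = np.meshgrid(rho, phi, indexing="ij")
    rows = n / 2 + R * np.sin(PHI)                  # Cartesian sample locations
    cols = n / 2 + R * np.cos(PHI)
    sample = lambda img: map_coordinates(img, [rows, cols], order=1)
    return sample(a.real) + 1j * sample(a.imag)     # interpolate real/imag separately

# A rotation of a_k by theta circularly shifts the phi axis, so the Fourier transform
# of the polar field picks up the linear phase exp(j 2 pi theta v_phi) used in Eq. (7).
```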


Fig. 3 The phase of a sub-aperture in Cartesian and polar coordinates is shown. A rotation in Cartesian coordinate system corresponds to a circular shift along the θ axis in the polar coordinate system.


Notice that we use the Zernike polynomial $Z_1^1$, which corresponds to a linear phase shift along the horizontal (that is, the angular) axis. However, one should be careful that the domain of the polynomial is now rectangular, unlike the previous cases, where it is circular. (Therefore, strictly speaking, the term Zernike could be dropped, and the polynomial can simply be referred to as a linear phase polynomial.)

Here, one might argue for fixing the rotational error directly in the pupil plane by optimizing over the rotation angle. This approach, however, is problematic as it requires resampling the pupil field at each candidate angle and is not as accurate as the Fourier-domain approach. In fact, within the context of incoherent image registration, Fourier-domain phase correlation has been demonstrated to achieve subpixel-accurate registration with much less computational complexity than spatial-domain approaches [13].

2.4. Shift correction

Now, suppose that the actual position of the $k$th sub-aperture is offset by an amount $(\delta x_k, \delta y_k)$; that is, the camera measurement is $I_k(x + \delta x_k, y + \delta y_k)$. This spatial shift (translational error) corresponds to a phase shift in the image plane:

$$\begin{aligned}
\mathcal{F}(I_k(x + \delta x_k, y + \delta y_k)) &= \mathcal{F}(|a_k(x + \delta x_k, y + \delta y_k) + r(x + \delta x_k, y + \delta y_k)|^2) \\
&= \mathcal{F}(|a_k(x + \delta x_k, y + \delta y_k)|^2) + \mathcal{F}(|r(x + \delta x_k, y + \delta y_k)|^2) \\
&\quad + \mathcal{F}(a_k(x + \delta x_k, y + \delta y_k)\, r^*(x + \delta x_k, y + \delta y_k)) + \mathcal{F}(a_k^*(x + \delta x_k, y + \delta y_k)\, r(x + \delta x_k, y + \delta y_k)) \\
&= R(\hat{a}_k(u,v)\, e^{j2\pi(\delta x_k u + \delta y_k v)}) + R(\hat{r}(u,v)\, e^{j2\pi(\delta x_k u + \delta y_k v)}) \\
&\quad + \hat{a}_k(u - u_0, v - v_0)\, e^{j2\pi(\delta x_k (u - u_0) + \delta y_k (v - v_0))}\, e^{j2\pi(\delta x_k u_0 + \delta y_k v_0)} \\
&\quad + \hat{a}_k^*(u + u_0, v + v_0)\, e^{j2\pi(\delta x_k (u + u_0) + \delta y_k (v + v_0))}\, e^{j2\pi(\delta x_k u_0 + \delta y_k v_0)}.
\end{aligned}$$

The local region around $(u_0, v_0)$ is then $\hat{a}_k(u, v)\, e^{j2\pi(\delta x_k u + \delta y_k v)}\, e^{j2\pi(\delta x_k u_0 + \delta y_k v_0)}$. That is, the phase distortion in $\hat{a}_k(u, v)$ consists of a constant term and two linear terms, one in each direction. This can be handled by the first three Zernike polynomials, $Z_0^0$, $Z_1^{-1}$, and $Z_1^1$.
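A quick numerical check of this shift/linear-phase relationship, using a hypothetical grid and shift and NumPy's FFT conventions: multiplying the spectrum by the corresponding linear phase reproduces a circular shift of the field, which is exactly the kind of distortion the piston/tip/tilt basis then absorbs.

```python
import numpy as np

n, dx, dy = 64, 3, -2                       # illustrative grid size and shift in pixels
a = np.random.randn(n, n) + 1j * np.random.randn(n, n)
u = np.fft.fftfreq(n)[None, :]              # horizontal spatial frequencies
v = np.fft.fftfreq(n)[:, None]              # vertical spatial frequencies
A_shifted = np.fft.fft2(a) * np.exp(-1j * 2 * np.pi * (dx * u + dy * v))
a_shifted = np.fft.ifft2(A_shifted)
# The linear phase in the Fourier domain is equivalent to a (circular) spatial shift:
print(np.allclose(a_shifted, np.roll(a, shift=(dy, dx), axis=(0, 1))))   # True
```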

2.5. Proposed phasing algorithm

The overall illustration of the idea is given in Fig. 4. The corrections are done in three domains: the intra-aperture and piston/tip/tilt corrections are done on the pupil plane field; the rotation correction is done on the Fourier transform of the polar transformed pupil plane field; and the shift correction is done on the image plane field. The transformations between these domains are invertible; therefore, the signal can be transformed to a particular domain, fixed for a certain type of aberration, and then taken back to another domain. In all cases, the optimization is done by minimizing the measure S(·), which is applied on b(u,v).


Fig. 4 Illustration of the domains to do different corrections. The indices of the complex fields and their Fourier transforms are not included in this illustration.


The proposed algorithm to correct for intra- and inter-aperture aberrations is given in Algorithm 1. First, the intra-aperture aberrations are corrected. Then, the inter-aperture piston/tip/tilt, rotation, and translation corrections are done iteratively. In the first iteration, one of the sub-apertures is set as the reference sub-aperture, and each of the remaining sub-apertures is corrected with respect to the reference sub-aperture sequentially. For the rest of the iterations, the input to the sharpness measure includes all sub-apertures. When all the sub-apertures are included, the resulting image has finer spatial resolution, which yields more accurate parameter estimation. In our experiments, two iterations were sufficient, and subsequent iterations did not produce significant visual quality improvement.
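The control flow described above can be summarized with the following sketch, assuming the helper functions from the earlier sketches (`composite_pupil`, `correct_intra_aperture`, `correct_ptt`, `sharpness`) are in scope and that the rotation and shift corrections plug into the same inner loop in their respective domains; this is an outline of the procedure, not the paper's exact implementation of Algorithm 1.

```python
import numpy as np

def phase_multi_aperture(fields, corners, grid_shape, Z_intra, Z_ptt, n_iters=2):
    # Step 1: intra-aperture correction of each sub-aperture independently.
    fields = [a_k * np.exp(1j * sum(w * Zp for w, Zp in
                                    zip(correct_intra_aperture(a_k, Z_intra), Z_intra)))
              for a_k in fields]
    # Embed each corrected sub-aperture field on the common composite grid.
    embedded = [composite_pupil([a_k], [c], grid_shape) for a_k, c in zip(fields, corners)]

    # Step 2: iterative inter-aperture correction (piston/tip/tilt shown; the rotation
    # and shift steps reuse the same optimization in the polar-Fourier and image domains).
    for it in range(n_iters):
        for k in range(1, len(embedded)):
            # First iteration: reference sub-aperture only; later iterations: all others.
            ref = embedded[0] if it == 0 else sum(e for j, e in enumerate(embedded) if j != k)
            w = correct_ptt(ref, embedded[k], Z_ptt, sharpness)
            embedded[k] = embedded[k] * np.exp(1j * sum(wp * Zp for wp, Zp in zip(w, Z_ptt)))
    return sum(embedded)
```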


Algorithm 1: Multi-aperture phasing algorithm.

3. Experimental evaluation

The imaging simulation models an optically diffuse object at range $L$, which is flood illuminated by a coherent laser source of wavelength $\lambda$. The complex-valued field reflected off the object is modeled with amplitude equal to the square root of the object's intensity reflectance and phase equal to a uniformly distributed random variable over $-\pi$ to $\pi$. The complex-valued field in the receiver plane, subject to the paraxial approximation, is given by the Fresnel diffraction integral [14]. Analytic evaluation of this Fresnel diffraction integral is difficult for all but a few very simple object geometries. Therefore, the Fresnel diffraction integral was numerically evaluated using the angular spectrum propagation method [15]. However, the angular spectrum method of wave propagation is limited by discrete Fourier transform wraparound effects that occur when the wavefront spreads in the transverse dimensions and reflects at the computational grid boundaries. This wraparound effect is the most onerous limitation for wavefront propagations from diffuse objects, which have inherently large divergence. In order to mitigate this wraparound effect, the wave propagation is performed in multiple partial propagations. After each partial propagation, the wavefront energy reflected at the computational grid boundary is absorbed by an annular attenuating function. The central, on-axis region of the propagating wavefront is unattenuated. The partial propagation distance is limited such that the spreading wavefront does not reflect into the central, on-axis region.
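A minimal sketch of this split-step angular-spectrum propagation with an absorbing edge follows; the super-Gaussian absorber profile, its width, and the step count are illustrative choices, not the authors' exact parameters.

```python
import numpy as np

def angular_spectrum_step(field, wavelength, dx, dz):
    """Propagate a uniformly sampled field by dz with the angular spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    H = np.where(arg > 0, np.exp(1j * 2 * np.pi * dz * np.sqrt(np.maximum(arg, 0.0))), 0.0)
    return np.fft.ifft2(np.fft.fft2(field) * H)      # evanescent components dropped

def propagate_with_absorber(field, wavelength, dx, total_dz, n_steps=10):
    """Split the path into partial propagations, attenuating the grid edges after each."""
    n = field.shape[0]
    y, x = np.mgrid[-1:1:1j * n, -1:1:1j * n]
    absorber = np.exp(-(np.hypot(x, y) / 0.9)**16)   # ~1 on axis, rolls off near the edge
    for _ in range(n_steps):
        field = absorber * angular_spectrum_step(field, wavelength, dx, total_dz / n_steps)
    return field

# e.g. propagate_with_absorber(field, wavelength=1.55e-6, dx=182e-6, total_dz=100.0)
```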

The object plane and receiver pupil planes in the simulation consisted of N = 2048 × 2048 computational grids with identical 182 μm sample spacings in both planes. The optical wavelength, $\lambda$, was 1.55 μm, and the range, $L$, from the receiver pupil plane to the object was 100 meters. The numerical propagation consisted of 10 partial propagations of 10 meters each to avoid the wraparound effects described above.

The optical field in the receiver pupil plane was measured in three sub-apertures with 48 mm diameters and 70 mm center-to-center spacings. In Fig. 5, we display the focused image obtained by averaging 60 realizations in the three-aperture configuration. This is the best possible result that can be achieved when there are no aberrations.


Fig. 5 Focused image, obtained by averaging 60 realizations, in the three sub-aperture design is shown.


To simulate inter-aperture aberrations, we added random piston/tip/tilt and rotation errors to each sub-aperture. Figures 6(a), 6(b), and 6(c) show the average realizations in each sub-aperture. The first step of the algorithm is the intra-aperture correction, which is carried out as given in Eq. (3). The Zernike polynomials used are $Z_2^{-2}$, $Z_2^0$, $Z_2^2$, and $Z_4^2$. The optimization is done using the fminunc function of MATLAB, minimizing the well-known power measure $\sum_{u,v} |b(u,v)|^{0.5}$ [10]. The output of this step is given in Figs. 6(d), 6(e), and 6(f). As shown in Fig. 6(g), the composite at this point suffers from piston/tip/tilt and rotation errors. Next, we do the inter-aperture corrections as described in Algorithm 1. The same optimization technique and sharpness measure mentioned above are used. Figure 6(h) shows the result with piston/tip/tilt correction only. Figure 6(i) shows the result when both piston/tip/tilt and rotation/shift corrections are done. Figure 7 shows selected zoomed regions to highlight the effectiveness of the algorithm. (Compare the recovered result in Fig. 7(d) with the best possible result given in Fig. 5.)


Fig. 6 (a)(b)(c) Average sub-aperture realizations. (d)(e)(f) Intra-aperture correction. (g) Composite without any inter-aperture correction. (h) After piston/tip/tilt correction. (i) Result with piston/tip/tilt and rotation/shift correction.



Fig. 7 Zoomed regions from the previous figure. (a) Averaged first sub-aperture. (b) Composite before any inter-aperture correction. (c) Result with piston/tip/tilt correction. (d) Result with piston/tip/tilt and rotation/shift correction.


4. Conclusions

In this paper, we present a method to correct for rotational and translational errors in addition to piston/tip/tilt errors in multi-aperture coherent imaging. For rotational correction, the pupil field is transformed from Cartesian to polar coordinates, where a rotation becomes a circular shift, which is then fixed in the Fourier domain by a linear phase correction. For translational errors, the correction is done on the phase of the image plane field. As the piston/tip/tilt, rotational, and translational corrections are done in different domains, a sequential algorithm is adopted, where one domain correction is done at a time. The corrections can be done iteratively; in our experiments, two iterations (the first with two sub-apertures as inputs, the second with all sub-apertures as inputs) produced satisfactory results. It is also possible to apply the idea to dynamic scenes, where frame-by-frame error correction has to be done. Finally, a straightforward extension of the method can address pupil magnification errors as well: instead of transforming the pupil data from Cartesian to polar coordinates, we may transform it to log-polar coordinates, where a magnification of the pupil data corresponds to a shift along the log axis. Such a shift becomes a phase shift when the Fourier transform is taken, and can be fixed through a linear phase correction.

References and links

1. N. J. Miller, M. P. Dierking, and B. D. Duncan, "Optical sparse aperture imaging," Appl. Opt. 46(23), 5933–5943 (2007).

2. E. N. Leith and J. Upatnieks, "Wavefront reconstruction and communication theory," J. Opt. Soc. Am. 52(10), 1123–1128 (1962).

3. J. C. Marron, R. L. Kendrick, N. Seldomridge, T. D. Grow, and T. A. Hoft, "Atmospheric turbulence correction using digital holographic detection: experimental results," Opt. Express 17(14), 11638–11651 (2009).

4. D. Malacara, Optical Shop Testing (Wiley, 2007).

5. J. R. Fienup, "Lensless coherent imaging by phase retrieval with an illumination pattern constraint," Opt. Express 14(2), 498–508 (2006).

6. J. W. Goodman, Speckle Phenomena in Optics: Theory and Applications (Roberts and Company, 2006).

7. D. Rabb, D. Jameson, A. Stokes, and J. Stafford, "Distributed aperture synthesis," Opt. Express 18(10), 10334–10342 (2010).

8. R. A. Muller and A. Buffington, "Real-time correction of atmospherically degraded telescope images through image sharpening," J. Opt. Soc. Am. 64(9), 1200–1210 (1974).

9. R. G. Paxman and J. C. Marron, "Aberration correction of speckled imagery with an image sharpness criterion," Proc. SPIE 976, 37–47 (1988).

10. J. R. Fienup and J. J. Miller, "Aberration correction by maximizing generalized sharpness metrics," J. Opt. Soc. Am. A 20(4), 609–620 (2003).

11. S. T. Thurman and J. R. Fienup, "Phase-error correction in digital holography," J. Opt. Soc. Am. A 25(4), 983–994 (2008).

12. R. J. Noll, "Zernike polynomials and atmospheric turbulence," J. Opt. Soc. Am. 66(3), 207–211 (1976).

13. B. S. Reddy and B. N. Chatterji, "An FFT-based technique for translation, rotation, and scale-invariant image registration," IEEE Trans. Image Process. 5(8), 1266–1271 (1996).

14. J. W. Goodman, Introduction to Fourier Optics (Roberts and Company, 2004).

15. J. D. Schmidt, Numerical Simulation of Optical Wave Propagation (SPIE, 2010).
