Optica Publishing Group

Multi-aperture Fourier transform imaging spectroscopy: theory and imaging properties

Open Access

Abstract

Fourier transform imaging spectroscopy (FTIS) can be performed with a multi-aperture optical system by making a series of intensity measurements, while introducing optical path differences (OPD’s) between various subapertures, and recovering spectral data by the standard Fourier post-processing technique. The imaging properties for multi-aperture FTIS are investigated by examining the imaging transfer functions for the recovered spectral images. For systems with physically separated subapertures, the imaging transfer functions are shown to vanish necessarily at the DC spatial frequency. Also, it is shown that the spatial frequency coverage of particular systems may be improved substantially by simultaneously introducing multiple OPD’s during the measurements, at the expense of limiting spectral coverage and causing the spectral resolution to vary with spatial frequency.

©2005 Optical Society of America

1. Introduction

Multi-aperture systems use a number of relatively small-aperture optics together in such a way that the resolution is comparable to that of a larger single-aperture system. Such systems include segmented-aperture telescopes and multiple-telescope arrays (MTA’s), an example of which is illustrated in Fig. 1. Such resolution can only be achieved when the optical path lengths through the subapertures (one segment of the aperture or a single telescope in the array) are equal. In a real system, this is accomplished by adjusting path length control elements for each subaperture. In Fig. 1 these elements are shown as “optical trombones,” the length of which may be adjusted by moving a corner mirror. Advantages of multi-aperture systems over comparable monolithic systems include lower weight and volume [1], and reduced cost [2]. Reduced weight and volume are especially important for space-deployed systems. For example, the design for NASA’s James Webb Space Telescope includes a segmented primary that will be folded up during launch [3]. One challenging aspect of using multi-aperture systems is phasing the subapertures. Kendrick et al. [4] have demonstrated closed-loop phasing of a nine-aperture system while imaging an extended object using the phase diversity technique [5]. If a multi-aperture system is sparse, additional tradeoffs include longer exposure times [6] and increased need for image post-processing [7].

Fourier transform spectroscopy [8] is a standard method for obtaining spectral data through post-processing of a series of polychromatic intensity measurements. The technique can be employed in an imaging system by relaying the image through a Michelson interferometer [9], which is used to introduce the optical path differences (OPD’s) necessary for performing the spectroscopy [10,11]. One alternative to the Michelson design is double Fourier transform interferometry [12,13], where the spectroscopy and imaging are performed by Fourier transforming temporal and spatial coherence measurements, respectively.

Fig. 1. Illustration of multiple-telescope array with four subaperture telescopes.

Another alternative for performing Fourier transform imaging spectroscopy (FTIS) with multi-aperture systems is to use the path length control elements associated with each subaperture to introduce the required OPD’s [14]. This technique (patent pending) was demonstrated by Kendrick et al. [15], who used a two-telescope system to obtain spectra for an array of point-like objects. Here, we develop a theory for this technique based on the principles of physical optics and partial coherence theory. Section 2 describes a system model and gives an expression for the intensity measurements. Section 3 shows how spectral data can be calculated from the image intensity using the standard Fourier transform technique. Section 4 discusses several aspects of the system related to the fact that the spectral images are typically complex-valued. Section 5 discusses the imaging properties of such systems and shows that spectral images obtained with this technique are missing low spatial frequency content. Also, it is shown that the imaging properties of some systems can be improved significantly by introducing multiple OPD’s simultaneously during data collection. Section 6 deals with the spectral resolution of the instrument. Section 7 presents simulation results that illustrate many of the points made in earlier sections. Section 8 is a concluding summary with some comments on image reconstruction techniques. The appendix details a partial coherence analysis of the optical system.

2. Imaging model

While an ideal spectroscopic system is reflective, our modeling is based on the simplified, equivalent thin-lens refractive system shown in Fig. 2. Shown are: (i) an object plane with coordinates (xo,yo), (ii) a collimating lens of focal length fo, (iii) a pupil plane with coordinates (ξ,η), containing the various subapertures and associated path-delay elements, (iv) an imaging lens of focal length fi, and (v) an image plane with coordinates (x,y). The subapertures are grouped together according to the path delays introduced during data collection. In general there are Q groups indexed by the integer q∈[1,Q]. The amplitude transmittance of the pupil and associated delay elements is written as

$$T_{\mathrm{pup}}(\xi,\eta,\nu,\tau)=\sum_{q=1}^{Q}T_q(\xi,\eta,\nu)\exp(i2\pi\nu\gamma_q\tau),\tag{1}$$

where ν is the optical frequency, τ is a time-delay variable, and Tq(ξ,η,ν) and γq are respectively the amplitude transmittance and relative delay rate of the qth subaperture group. Each Tq(ξ,η,ν) is written as a function of ν to allow for aberrations. The path delay common to the qth group is given by cγqτ, where c is the speed of light (note that this restricts the model to delays that are linear in time). Without loss of generality, the subaperture groups are organized such that γ1=0, γq+1>γq, and γQ=1. In this context, a conventional FTIS system based on a Michelson interferometer can be modeled as a system with two identical, overlapping subaperture groups (formed by the beamsplitter in a real system) with a path delay equal to the OPD between the arms of the interferometer.
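To make the pupil model concrete, Eq. (1) can be sampled on a discrete grid. The following Python/NumPy sketch (all grid, radius, and center values are illustrative assumptions, and `pupil_transmittance` is a hypothetical helper, not part of the paper) builds the transmittance for two circular subaperture groups:

```python
import numpy as np

# Sketch of Eq. (1): Q = 2 circular subaperture groups on a sampled
# (xi, eta) pupil grid; all geometry values below are assumed.
N = 256                                # grid samples per side
D = 0.5                                # grid half-width [m] (assumed)
xi = np.linspace(-D, D, N)
XI, ETA = np.meshgrid(xi, xi)

R = 0.05                               # subaperture radius [m] (assumed)
centers = [(-0.15, 0.0), (0.15, 0.0)]  # group centers [m] (assumed)
gammas = [0.0, 1.0]                    # relative delay rates: gamma_1 = 0, gamma_Q = 1

def pupil_transmittance(nu, tau):
    """T_pup(xi, eta, nu, tau): sum of group masks times delay phases."""
    T = np.zeros((N, N), dtype=complex)
    for (cx, cy), g in zip(centers, gammas):
        mask = (XI - cx) ** 2 + (ETA - cy) ** 2 <= R ** 2
        T += mask * np.exp(1j * 2 * np.pi * nu * g * tau)
    return T
```

At τ=0 all delay phases vanish and the transmittance reduces to the binary pupil mask; as τ grows, only the γq≠0 group acquires a phase.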

Fig. 2. Simplified refractive model for a multi-aperture optical system.

For a spatially incoherent object, the image plane intensity I(x,y,τ), which is a function of the time-delay variable τ, can be written in terms of the object spectral density So(xo,yo,ν) as

$$I(x,y,\tau)=\kappa\iiint\frac{1}{M^{2}}\,S_o\!\left(\frac{x'}{M},\frac{y'}{M},\nu\right)h(x-x',y-y',\nu,\tau)\,dx'\,dy'\,d\nu,\tag{2}$$

where κ is a constant, M=-fi/fo is the system magnification, x′=Mxo, y′=Myo , and h(x,y,ν,τ) is the monochromatic point spread function (PSF) (intensity impulse response) for the system, which can be written as

$$h(x,y,\nu,\tau)=\sum_{q=1}^{Q}h_{q,q}(x,y,\nu)+\sum_{p=1}^{Q}\sum_{\substack{q=1\\q\neq p}}^{Q}h_{p,q}(x,y,\nu)\exp\left[i2\pi\nu(\gamma_p-\gamma_q)\tau\right].\tag{3}$$

The terms hp,q (x,y,ν) are referred to as spectral point spread functions (SPSF’s) and are defined as

$$h_{p,q}(x,y,\nu)=t_p(x,y,\nu)\,t_q^{*}(x,y,\nu),\tag{4}$$

where tq (x,y,ν) is the coherent impulse response of the qth subaperture group, given by

$$t_q(x,y,\nu)=\frac{1}{\lambda^{2}f_i^{2}}\iint T_q(\xi,\eta,\nu)\exp\!\left[-i\frac{2\pi}{\lambda f_i}(x\xi+y\eta)\right]d\xi\,d\eta.\tag{5}$$

The terms tq (x,y,ν) can be complex-valued since the subaperture groups are asymmetric about, or offset from, the optical axis in the pupil plane. Note that the path delays introduced for the spectroscopy are included in Eq. (3) as additional phase terms; any other phase terms (aberrations) are included in Tq (ξ,η,ν). The spectroscopy is based on temporal coherence effects, but the role of spatial coherence may not be immediately obvious. For this reason, the Appendix contains a derivation of Eq. (2) based on partial coherence theory.
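The chain from group transmittance to SPSF can be sketched numerically: a discrete analogue of Eq. (5) gives each tq as a Fourier transform of its pupil mask, and Eq. (4) then forms the SPSF’s. All grid and geometry values below are assumptions made for illustration:

```python
import numpy as np

# Sketch: coherent impulse responses t_q (discrete Eq. (5)) and SPSF's
# h_{p,q} = t_p t_q^* (Eq. (4)) for two separated circular subapertures.
N = 256
pupil_width = 0.5                      # pupil grid width [m] (assumed)
xi = (np.arange(N) - N // 2) * (pupil_width / N)
XI, ETA = np.meshgrid(xi, xi)
R, sep = 0.04, 0.2                     # subaperture radius / center offset [m] (assumed)

T1 = ((XI + sep / 2) ** 2 + ETA ** 2 <= R ** 2).astype(complex)
T2 = ((XI - sep / 2) ** 2 + ETA ** 2 <= R ** 2).astype(complex)

def coherent_response(T):
    # centered 2-D Fourier transform of the group transmittance
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(T)))

t1, t2 = coherent_response(T1), coherent_response(T2)
h11 = (t1 * np.conj(t1)).real          # diagonal SPSF: ordinary subaperture PSF
h12 = t1 * np.conj(t2)                 # cross SPSF: complex-valued in general
```

Because the two groups are offset from the optical axis, t1 and t2 carry different linear phases, so the cross term h12 is complex while the diagonal term h11 is real and nonnegative, as stated above.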

The normalized monochromatic optical transfer function (OTF) for the system can be written as

$$H(f_x,f_y,\nu,\tau)=\frac{\displaystyle\iint h(x,y,\nu,\tau)\exp[-i2\pi(f_xx+f_yy)]\,dx\,dy}{\displaystyle\iint h(x,y,\nu,\tau)\,dx\,dy}$$
$$=\frac{T_{\mathrm{pup}}(\lambda f_if_x,\lambda f_if_y,\nu,\tau)\star T_{\mathrm{pup}}(\lambda f_if_x,\lambda f_if_y,\nu,\tau)}{\displaystyle\iint\left|T_{\mathrm{pup}}(\xi,\eta,\nu,\tau)\right|^{2}d\xi\,d\eta}$$
$$=\sum_{q=1}^{Q}H_{q,q}(f_x,f_y,\nu)+\sum_{p=1}^{Q}\sum_{\substack{q=1\\q\neq p}}^{Q}H_{p,q}(f_x,f_y,\nu)\exp\left[i2\pi\nu(\gamma_p-\gamma_q)\tau\right],\tag{6}$$

where the ⋆ symbol indicates a two-dimensional cross-correlation with respect to the spatial-frequency coordinates fx and fy, the second equality follows from Eqs. (4) and (5), and the terms Hp,q(fx,fy,ν) are referred to as spectral optical transfer functions (SOTF’s), which are defined as the normalized two-dimensional Fourier transforms of the corresponding SPSF’s, i.e.,

$$H_{p,q}(f_x,f_y,\nu)=\frac{\displaystyle\iint h_{p,q}(x,y,\nu)\exp[-i2\pi(f_xx+f_yy)]\,dx\,dy}{\displaystyle\iint h(x,y,\nu,\tau)\,dx\,dy}=\frac{T_p(\lambda f_if_x,\lambda f_if_y,\nu)\star T_q(\lambda f_if_x,\lambda f_if_y,\nu)}{\displaystyle\iint\left|T_{\mathrm{pup}}(\xi,\eta,\nu,\tau)\right|^{2}d\xi\,d\eta}.\tag{7}$$

For the multiple-aperture case, the denominator of this expression is independent of τ, and it equals the area of the entire pupil when the pupil is binary. Note that both the PSF and the OTF consist of a double summation of terms that are modulated with respect to the time-delay variable τ, plus a single summation of unmodulated terms. For a Michelson system, note that the SPSF and SOTF are equivalent to the normal PSF and OTF for incoherent imaging, since the subaperture groups are identical and overlapping.
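The normalization in the first line of Eq. (6) can be checked numerically: Fourier transforming a sampled PSF and dividing by its sum gives a transfer function equal to unity at DC, with fringe terms producing sidebands of relative weight 1/2. The toy PSF below (a Gaussian stand-in for the Airy envelope times a fringe pattern, with assumed numbers) is a sketch, not the paper’s simulation:

```python
import numpy as np

# Sketch: normalized OTF from a discrete PSF (first line of Eq. (6)).
N = 256
x = np.arange(N) - N // 2
X, Y = np.meshgrid(x, x)
envelope = np.exp(-(X ** 2 + Y ** 2) / (2 * 20.0 ** 2))   # stand-in envelope
h = envelope * (1 + np.cos(2 * np.pi * X / 16.0))         # fringes under envelope

H = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(h)))
H /= h.sum()                                              # normalize: H(0,0) = 1
```

The fringe period of 16 samples places modulated sidebands 16 bins from DC, each with magnitude 1/2 of the unmodulated central term, mirroring the modulated/unmodulated split of Eq. (6).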

3. Spectral data

Spectral information can be obtained from a series of image-plane intensity measurements by the standard Fourier technique: (i) subtracting the fringe bias at each image point and (ii) Fourier transforming the data along the τ-dimension to the ν′-domain. Starting from Eq. (2) and performing these steps yields the spectral image

$$S_i(x,y,\nu')=\kappa\sum_{p=1}^{Q}\sum_{\substack{q=1\\q\neq p}}^{Q}\frac{1}{\gamma_p-\gamma_q}\iint\frac{1}{M^{2}}\,S_o\!\left(\frac{x'}{M},\frac{y'}{M},\frac{\nu'}{\gamma_p-\gamma_q}\right)h_{p,q}\!\left(x-x',y-y',\frac{\nu'}{\gamma_p-\gamma_q}\right)dx'\,dy'.\tag{8}$$

Transforming this equation to the spatial frequency domain yields

$$G_i(f_x,f_y,\nu')=\kappa\sum_{p=1}^{Q}\sum_{\substack{q=1\\q\neq p}}^{Q}\frac{1}{\gamma_p-\gamma_q}\,G_o\!\left(Mf_x,Mf_y,\frac{\nu'}{\gamma_p-\gamma_q}\right)H_{p,q}\!\left(f_x,f_y,\frac{\nu'}{\gamma_p-\gamma_q}\right),\tag{9}$$

where the spectral-spatial transforms Gi(fx,fy,ν′) and Go(fx,fy,ν) are the two-dimensional spatial Fourier transforms of the spectral image and the object spectral density, respectively. Notice that the spectral image in Eq. (8) is a double summation of the object spectral density convolved with each of the SPSF terms that are modulated in Eq. (3), i.e., terms for which γp−γq≠0. Thus, each term contains unique spatial information, since it is convolved with a different SPSF. This is evident in Eq. (9) from the fact that the only spatial frequencies in the recovered spectral data are those passed by SOTF terms that are modulated in Eq. (6). Also notice that the spectral dimension of each term in Eqs. (8) and (9) is scaled by the factor 1/(γp−γq). Thus, terms for which |γp−γq|≠1 appear at scaled optical frequencies ν′=(γp−γq)ν in the ν′-domain. This will occur only for Q≥3. If there are just two groups of subapertures (Q=2), then there is just a single value |γ1−γ2|=1.

When Q≥3, it is desirable to map the data in each term of Eqs. (8) and (9) back to the base optical frequencies, thus forming a composite spectral image that contains all of the collected spatial information in each spectral band. Typically, a multi-aperture system is designed such that the OTF does not have any gaps with missing spatial frequencies. This implies that the SOTF terms Hp,q(fx,fy,ν) will overlap somewhat in the (fx,fy) plane, so the different terms cannot be completely separated in that plane. However, the terms can be separated with respect to ν′ by limiting the spectral bandwidth of the object and choosing the relative delay rates appropriately. To illustrate, suppose the object spectrum is limited to optical frequencies in the range ν1≤ν≤ν2 by a spectral filter placed in the system during the measurements. Then spectral data will appear in Si(x,y,ν′) at multiple intervals in the ν′-domain, given by (γp−γq)ν1≤ν′≤(γp−γq)ν2 for all unique, non-zero values of γp−γq. The band limits (ν1 and ν2) and the relative delay rates (γq’s) can be chosen such that these intervals do not overlap, making the data separable in the ν′-domain. For example, for Q=3, γ1=0, γ2=1/3, and γ3=1, the spectra are separated in ν′ space if ν2−ν1<ν2/3. An example of this is shown in Sec. 7. Assuming this is the case, the data in each term of Eq. (8) can be mapped to the base optical frequencies ν to form a composite spectral image

$$S_{\mathrm{comp}}(x,y,\nu)=\sum_{\Delta\gamma>0}\Delta\gamma\;S_i(x,y,\Delta\gamma\,\nu)\quad\text{for }\nu_1\le\nu\le\nu_2,\tag{10}$$

where Δγ=γp−γq denotes the relative delay rate differences. Substituting from Eq. (8) yields

$$S_{\mathrm{comp}}(x,y,\nu)=\kappa\sum_{p=1}^{Q}\sum_{\substack{q=1\\\Delta\gamma>0}}^{Q}\iint\frac{1}{M^{2}}\,S_o\!\left(\frac{x'}{M},\frac{y'}{M},\nu\right)h_{p,q}(x-x',y-y',\nu)\,dx'\,dy'\quad\text{for }\nu_1\le\nu\le\nu_2.\tag{11}$$
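The separability condition of this section can be checked numerically. The short sketch below (pure Python, arbitrary frequency units) uses the Q=3 delay rates and the band limits that appear later in Sec. 7, lists the ν′ interval occupied by each positive delay-rate difference, and tests that the intervals do not overlap:

```python
# Sketch: band-separability check for Q = 3, gamma = (0, 1/3, 1),
# with nu1 = 0.9*nu0 and nu2 = 1.1*nu0 as in Sec. 7 (nu0 = 1 assumed).
nu0 = 1.0
nu1, nu2 = 0.9 * nu0, 1.1 * nu0
gammas = [0.0, 1.0 / 3.0, 1.0]

# positive delay-rate differences and the nu' interval each term occupies
dgs = sorted({g2 - g1 for g1 in gammas for g2 in gammas if g2 > g1})
bands = [(dg * nu1, dg * nu2) for dg in dgs]   # since nu' = dg * nu

# intervals are separable if each band ends before the next begins
separable = all(hi < lo_next for (_, hi), (lo_next, _) in zip(bands, bands[1:]))
```

With these values the three bands sit at 1/3, 2/3, and 1 times the base frequencies and do not overlap, consistent with the condition ν2−ν1<ν2/3 stated above.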

4. Complex-valued spectral images

The image intensity I(x,y,τ) is real-valued and nonnegative, since both the object spectral density and the PSF are real and nonnegative. However, the spectral images derived from the FTIS measurements can be complex-valued, since Si (x,y,ν′) is related to the measurements by a complex Fourier transform. Since the measurements are real-valued, the recovered spectral data has particular symmetry properties. Specifically, the spectral image possesses Hermitian symmetry about the zero temporal frequency, i.e.,

$$S_i(x,y,-\nu')=S_i^{*}(x,y,\nu'),\tag{12}$$

and the spectral-spatial transform is Hermitian about the origin of the (fx,fy,ν′) domain, i.e.,

$$G_i(f_x,f_y,\nu')=G_i^{*}(-f_x,-f_y,-\nu').\tag{13}$$

Note that the object spectral density So(xo,yo,ν) is a one-sided spectrum, i.e., non-zero only for positive frequencies ν>0, but the spectral image cube has a two-sided spectrum. Referring to Eqs. (8) and (9), one can see that the spectral data at positive and negative temporal frequencies consist of terms for which γp−γq>0 and γp−γq<0, respectively. Equation (12) states that the spectral image values at negative optical frequencies are the complex conjugates of those at positive frequencies. This can be seen from Eq. (8) and from the fact that hp,q(x,y,ν)=h*q,p(x,y,ν) [see Eq. (4)]. The Hermitian symmetry in the (fx,fy,ν′) domain expressed by Eq. (13) is apparent in Eq. (9), since the Fourier transform of the real-valued object spectral density is Hermitian about the DC spatial frequency, i.e., Go(fx,fy,ν)=G*o(−fx,−fy,ν), and since Hp,q(fx,fy,ν)=H*q,p(−fx,−fy,ν) [see Eq. (7)]. At positive optical frequencies, the spectral image only contains spatial frequencies corresponding to vector separations oriented from subaperture group p toward subaperture group q, where γp−γq>0. Spatial frequencies corresponding to the oppositely-oriented vector separations appear at negative optical frequencies.

In certain cases, which include Michelson-based systems, the spectral images are real-valued, because the subaperture groups possess a particular symmetry in the pupil plane. In this case, the spatial frequency data, and thus the SOTF terms, must possess Hermitian symmetry about the DC spatial frequency in each spectral band, i.e., Hp,q(fx,fy,ν)=H*p,q(−fx,−fy,ν). Along with Eq. (7), this condition implies that Hp,q(fx,fy,ν)=Hq,p(fx,fy,ν). Note that this relation holds for Michelson-based systems (with common-path aberrations only), since the subaperture groups are identical and overlapping. Also, real-valued spectral images imply that the fringe packets described by I(x,y,τ) are symmetric with respect to the time-delay variable τ, while complex-valued spectral images imply that the fringe packets are asymmetric. In systems where the fringe packets are symmetric, such as those based on the Michelson interferometer design, measurements only need to be made for either positive or negative time delays. For a general multi-aperture system, however, the fringes will usually be asymmetric, and thus measurements need to be made for both positive and negative time delays.

Returning to the more general case, it is important to note that the real and imaginary parts of Si(x,y,ν′) are linearly related to the object spectral density So(x,y,ν). Hence, these quantities are the appropriate ones for image reconstruction. On the other hand, the magnitude and phase of Si(x,y,ν′) are nonlinearly related to So(x,y,ν). Hence, the spatial frequency content of |Si(x,y,ν′)| or arg{Si(x,y,ν′)}, unlike that of Re{Si(x,y,ν′)} or Im{Si(x,y,ν′)}, is not directly related to the spatial frequency content of So(x,y,ν). The point-object simulation of Sec. 7.1 illustrates how the magnitude and phase of Si(x,y,ν′) can vary with position in the image plane.

The spatial frequency content of both the real and imaginary parts of a spectral image are directly related to the spatial frequency content of the object spectral density. To show this, note that the real part of a complex-valued spectral image can be written as

$$S_i^{(\mathrm{Re})}(x,y,\nu')=\tfrac{1}{2}\left[S_i(x,y,\nu')+S_i^{*}(x,y,\nu')\right],\tag{14}$$

and its spatial Fourier transform is given by

$$G_i^{(\mathrm{Re})}(f_x,f_y,\nu')=\tfrac{1}{2}\left[G_i(f_x,f_y,\nu')+G_i^{*}(-f_x,-f_y,\nu')\right],\tag{15}$$

where it is emphasized that Gi(Re)(fx,fy,ν′) is ordinarily complex-valued. Using Eq. (9) and the fact that Go(fx,fy,ν)=G*o(−fx,−fy,ν) yields

$$G_i^{(\mathrm{Re})}(f_x,f_y,\nu')=\kappa\sum_{p=1}^{Q}\sum_{\substack{q=1\\q\neq p}}^{Q}\frac{1}{\gamma_p-\gamma_q}\,G_o\!\left(Mf_x,Mf_y,\frac{\nu'}{\gamma_p-\gamma_q}\right)\times\tfrac{1}{2}\left[H_{p,q}\!\left(f_x,f_y,\frac{\nu'}{\gamma_p-\gamma_q}\right)+H_{p,q}^{*}\!\left(-f_x,-f_y,\frac{\nu'}{\gamma_p-\gamma_q}\right)\right].\tag{16}$$

Similarly, the spatial transform of the imaginary part of the spectral image, Gi(Im)(fx,fy,ν′), can be written as

$$G_i^{(\mathrm{Im})}(f_x,f_y,\nu')=\kappa\sum_{p=1}^{Q}\sum_{\substack{q=1\\q\neq p}}^{Q}\frac{1}{\gamma_p-\gamma_q}\,G_o\!\left(Mf_x,Mf_y,\frac{\nu'}{\gamma_p-\gamma_q}\right)\times\tfrac{1}{2i}\left[H_{p,q}\!\left(f_x,f_y,\frac{\nu'}{\gamma_p-\gamma_q}\right)-H_{p,q}^{*}\!\left(-f_x,-f_y,\frac{\nu'}{\gamma_p-\gamma_q}\right)\right].\tag{17}$$

Thus, the spatial frequency content of the real or imaginary part of the complex-valued spectral image is related to the spatial frequency content of the object spectral density through the SOTF terms. In a system with no aberrations, each SOTF term Hp,q(fx,fy,ν) is real-valued and nonnegative, and the spatial frequency content of Gi(Re)(fx,fy,ν′) and Gi(Im)(fx,fy,ν′) is equivalent (to within a multiple of π/2 phase shift) at spatial frequencies where there is no overlap between these terms, according to Eqs. (16) and (17). In regions where the terms do overlap, the phase shifts associated with the various terms in the expression for Gi(Im)(fx,fy,ν′) may cause the net transfer function to vanish, while the terms in the summation for Gi(Re)(fx,fy,ν′) add in phase. In such cases, the real part of the spectral image will contain more information than the imaginary part. The example in Sec. 7.2 illustrates this point.

5. Imaging properties

In essence, the SOTF’s are the spatial transfer functions for the spectral images and thus determine the imaging properties of the system. According to Eq. (7), each SOTF is calculated as the cross-correlation between two subaperture groups, rather than as the autocorrelation of the entire aperture, as the OTF of a normal imaging system is. It was noted above that the SOTF for a Michelson-based FTIS is equivalent to the traditional OTF, since that system can be described as two identical, overlapping subaperture groups. In a multi-aperture system, however, the subaperture groups are physically separated in the pupil plane, so the SOTF’s necessarily vanish at the DC spatial frequency and in some neighborhood around it. If the minimum separation in the pupil plane between two subaperture groups is d, then the SOTF vanishes for spatial frequencies below the cutoff frequency fc=d/(λfi). For this reason, spectral images from a multi-aperture system are zero-mean, high-pass-filtered versions of the object.
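The DC null can be illustrated by evaluating the cross-correlation of Eq. (7) for two separated circular subapertures. In this NumPy sketch (pixel units; the radius and separation are assumed values), the SOTF support is a disk of radius 2R centered on the vector separation between the groups, so it vanishes at DC and everywhere within the edge-to-edge gap d:

```python
import numpy as np

# Sketch: SOTF of two separated subapertures as a cross-correlation
# of their pupil masks (Eq. (7), unnormalized); geometry is assumed.
N = 256
xi = np.arange(N) - N // 2
XI, ETA = np.meshgrid(xi, xi)
R, d_centers = 20, 100                  # radius / center separation [pixels]

T1 = ((XI + d_centers / 2) ** 2 + ETA ** 2 <= R ** 2).astype(float)
T2 = ((XI - d_centers / 2) ** 2 + ETA ** 2 <= R ** 2).astype(float)

# cross-correlation via the Fourier convolution theorem
F1, F2 = np.fft.fft2(T1), np.fft.fft2(T2)
sotf = np.fft.fftshift(np.fft.ifft2(F1 * np.conj(F2))).real
```

Since the masks are disjoint, the zero-lag value (the DC spatial frequency) is exactly zero, and the support is offset from the origin by the center-to-center separation; the edge of the null region corresponds to the cutoff fc=d/(λfi) quoted above, with d=d_centers−2R in these pixel units.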

For a given arrangement of subapertures, the spatial frequency content of the spectral images is dependent on the grouping of the subapertures. In some cases, the use of more than two groups can improve the imaging properties by providing additional spatial frequency content. However, having more than two groups implies the use of fractional delay rates, i.e., 0<γq <1 for q≠1 or Q. In such cases, spectral data will appear at scaled optical frequencies (which can be corrected), bandwidth limitations must be imposed on the system, and the relative delay rates must be chosen such that the data is separable in the ν′-dimension, as discussed in Sec. 3. An additional trade-off for using fractional delay rates is variable spectral resolution at different spatial frequencies, as will be shown in the next section.

6. Spectral resolution

In practice, the image intensity can only be measured over a finite range of time delay values, i.e., -τmaxττmax . Taking this into account yields the following expression for the image spectral data instead of Eq. (8)

$$S_i(x,y,\nu')=\kappa\sum_{p=1}^{Q}\sum_{\substack{q=1\\q\neq p}}^{Q}\iiint\frac{1}{M^{2}}\,S_o\!\left(\frac{x'}{M},\frac{y'}{M},\nu\right)h_{p,q}(x-x',y-y',\nu)\times 2\tau_{\max}\,\mathrm{sinc}\!\left[2\tau_{\max}(\gamma_p-\gamma_q)\!\left(\frac{\nu'}{\gamma_p-\gamma_q}-\nu\right)\right]dx'\,dy'\,d\nu,\tag{18}$$

where sinc(ν)=sin(πν)/(πν). Notice that the spectral image is now convolved in the spectral dimension with a sinc function that limits the spectral resolution. If the object data is bandlimited in the spectral dimension to the interval ν1≤ν≤ν2, and the data leakage between each of the intervals (γp−γq)ν1≤ν′≤(γp−γq)ν2 is negligible, then the composite spectral image is given approximately by

Scomp(x,y,ν)κp=1Qq=1Δγ>0Q1M2So(xM,yM,ν)hp,q(xx,yy,ν)
×2τmax(γpγq)sin⁡c[2τmax(γpγq)(νν)]dxdydνforν1νν2.

In this equation, it is easy to see that each term in the summation is convolved with a sinc function having a zero-to-first-null width of 1/[2τmax(γp−γq)]. The spectral resolution of each term degrades as γp−γq decreases, because the effective range of path delays over which data is collected for that term is scaled by the same factor. By transforming Eq. (19) to the spatial frequency domain, it is easy to see that the spectral resolution varies with spatial frequency for Q>2.
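As a quick worked example (with an assumed scan range τmax; this is arithmetic on the width formula, not a value from the paper), the zero-to-first-null widths for the Q=3 delay rates of Sec. 7 scale inversely with γp−γq:

```python
# Sketch: sinc widths 1/[2*tau_max*(gamma_p - gamma_q)] from Eq. (19)
# for the Q = 3 example; tau_max is an assumed scan range.
nu0 = 1.0                          # mean optical frequency (arbitrary units)
tau_max = 100.0 / nu0              # maximum time delay (assumed)

widths = {dg: 1.0 / (2.0 * tau_max * dg) for dg in (1.0, 2.0 / 3.0, 1.0 / 3.0)}
# the dg = 1/3 term is convolved with a sinc three times wider than the dg = 1 term
```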

7. Simulation examples

This section presents two multi-aperture FTIS simulations based on an aberration-free system having three subapertures in the equilateral-triangle arrangement shown in Fig. 3. Each subaperture is circular with radius R, and the displacement of each subaperture from the optical axis is r=1.5R. The coordinates of the subaperture centers are (ξ1,η1)=(0, r), (ξ2,η2)=(√3r/2, −r/2), and (ξ3,η3)=(−√3r/2, −r/2), making the closest separation between two subapertures √3r−2R. The subapertures are grouped individually with the following relative delay rates: γ1=0, γ2=1/3, and γ3=1. In both simulations the spectrum is assumed to be limited to the interval ν1≤ν≤ν2, where ν1=0.9ν0, ν2=1.1ν0, and ν0 is the mean optical frequency.
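The geometry quoted above is easy to verify numerically; in this sketch R=1 is assumed:

```python
import numpy as np

# Sketch: equilateral-triangle subaperture geometry of Sec. 7 (R = 1 assumed).
R = 1.0
r = 1.5 * R
centers = np.array([(0.0, r),
                    (np.sqrt(3) * r / 2, -r / 2),
                    (-np.sqrt(3) * r / 2, -r / 2)])

# center-to-center distance for every pair is sqrt(3)*r, so the
# closest edge-to-edge separation is sqrt(3)*r - 2R
d01 = np.linalg.norm(centers[0] - centers[1])
```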

Fig. 4 shows a single frame of a movie that illustrates the effect of the OPD’s on the pupil function, the PSF, and the OTF of the three-telescope system used for the simulations, at the mean optical frequency ν0, over the range of time delays 0≤τ≤3/ν0. Fig. 4(a) indicates the magnitude of the relative phase delay modulo 2π for each subaperture by grayscale tone (white represents zero phase delay and black represents ±π phase delay). Fig. 4(b) shows the monochromatic PSF, where the circle represents the Airy disk radius for a single subaperture at ν=ν0. The PSF can be viewed as a set of interference fringes underneath an Airy envelope function, which is the diffraction pattern for a single subaperture. As the time-delay variable changes, the fringes move under the envelope. Fig. 4(c) shows the magnitude of the real part of the OTF. Notice that only spatial frequencies corresponding to vector separations between subapertures are modulated during the movie, and the rate of modulation for each such spatial frequency is proportional to the difference between the relative delay rates of the corresponding pair of subapertures.

Fig. 3. Pupil of optical system used in simulations.

Fig. 4. Movie (455 KB) showing the effect of the OPD’s on the optical system at ν=ν0 as the time-delay variable is changed from τ=0 to τ=3/ν0: (a) the magnitude of the relative phase delay of each subaperture, where white represents 0 and black represents ±π, (b) the PSF, and (c) the magnitude of the real part of the OTF.

Fig. 5. Localization of FTIS signal in: (a) the raw intensity data cube, (b) the spectral image cube, and (c) the spectral-spatial transform cube. In each cube, the FTIS signal is localized to the darkly shaded regions.

Fig. 5 shows the localization of the FTIS signal in three transform domains for the example parameters above. Fig. 5(a) represents the intensity measurements I(x,y,τ). In this domain the signal, which is essentially a fringe packet at each image point, occupies the whole domain. Fig. 5(b) represents the spectral image Si(x,y,ν′). In these examples, the signal is localized to six spectral bands along the ν′-dimension. By Hermitian symmetry about the plane ν′=0, the signal at negative ν′ is the complex conjugate of the signal at positive ν′. Of the three spectral bands for ν′>0, the one at largest ν′ represents spectral image data at the base optical frequencies (from the interaction of subapertures 1 and 3), the middle band represents data scaled to 2/3 of the base optical frequencies (from subapertures 2 and 3), and the third, at smallest ν′, represents data scaled to 1/3 of the base optical frequencies (from subapertures 1 and 2). Fig. 5(c) represents the spectral-spatial transform Gi(fx,fy,ν′). Here, the FTIS signal is further localized to the support of the SOTF terms; each semi-transparent skewed cone represents the support of one SOTF term. From this figure we can see how the spectral and spatial frequency information can be separated. The sparsity of the data in this domain will also enable significant noise filtering.

7.1. Point object

The purpose of this point-object example is to provide a physical understanding of the effects that contribute to the magnitude and phase of the recovered spectral data. This simulation is based on an object with the spectral density

$$S_o(x',y',\nu)=E\,\mathrm{rect}\!\left[\frac{\nu-\nu_0}{\nu_2-\nu_1}\right]\delta(x',y'),\tag{20}$$

where E is a constant with units of W m−2 Hz−1, rect(ν) equals unity for |ν|≤1/2 and vanishes elsewhere, and δ(x′,y′) is the two-dimensional Dirac delta function. This represents an on-axis point source with uniform spectral exitance in the band of interest. The image intensity is obtained by substituting this expression into Eq. (2) and simplifying to yield

$$I(x,y,\tau)=\kappa E\int_{\nu_1}^{\nu_2}h(x,y,\nu,\tau)\,d\nu,\tag{21}$$

where the PSF h(x,y,ν,τ) is given by Eqs. (3) and (4) with

$$t_q(x,y,\nu)=\frac{\pi R^{2}}{\lambda^{2}f_i^{2}}\,\mathrm{jinc}\!\left(\frac{2R}{\lambda f_i}\sqrt{x^{2}+y^{2}}\right)\exp\!\left[-i\frac{2\pi}{\lambda f_i}(x\xi_q+y\eta_q)\right],\tag{22}$$

where jinc(ρ)=2J1(πρ)/(πρ), and J1 is the first-order Bessel function of the first kind. Fig. 6(a) and (b) show the calculated image intensity as a function of the time-delay variable at two points in the image plane: (i) Point A with coordinates (xA,yA)=(0,0), and (ii) Point B with coordinates (xB,yB)=(0, 0.61λ0fi/R), where λ0=c/ν0. Note that Point A corresponds to the geometric image location of the point object, and the distance between the points is equal to the Airy disk radius corresponding to the diffraction pattern of a single subaperture at the mean optical frequency ν0. The data in the figure are in units of I0, the intensity at Point A for τ=0. The figure shows that the fringe packet at Point A is symmetric about τ=0, while the fringe packet at Point B is asymmetric. Fig. 6(c), (d), and (e) show the intensity contributions at Point B due to the interference between each pair of subapertures. In general, each contribution is symmetric about some non-zero time delay, i.e., τp,q for the contribution from subaperture groups p and q. The following expression for τp,q can be obtained by substituting Eqs. (4) and (22) into Eq. (3) and solving for the time delay that yields zero phase for the (q,p) term at Point B,

$$\tau_{p,q}=\frac{x_B(\xi_p-\xi_q)+y_B(\eta_p-\eta_q)}{c\,f_i\,(\gamma_p-\gamma_q)}.\tag{23}$$

Since each contribution has a different shift, the fringe packet at Point B is asymmetric. The fringe packet at Point A is symmetric, because each contribution there is centered about τ=0. All points in the image of an extended scene will have a mixture of the characteristics of Points A and B, especially for sparse-aperture systems, whose PSF’s have sidelobes much larger than those of conventional filled-aperture systems. Fig. 7 shows the spectral data at Points A and B for positive temporal frequencies in the ν′-domain. Notice that three scaled versions of the object spectral data are clearly visible. The data at the base optical frequencies is due to the interference between subapertures 1 and 3, since γ3−γ1=1; the data scaled to 2/3 of the base frequencies is associated with subapertures 2 and 3, since γ3−γ2=2/3; and the data closest to the origin, associated with subapertures 1 and 2, is scaled to 1/3 of the base optical frequencies, since γ2−γ1=1/3. The recovered spectra at Points A and B are real- and complex-valued, respectively, since the corresponding fringe packets are symmetric and asymmetric, respectively, as shown in Fig. 6. The spectral content at each point depends on the SPSF’s. Note that the spectral data at Point A is bluer than the actual object spectral density, since higher optical frequencies are focused more tightly onto the geometric image point than lower optical frequencies, as is the case for ordinary imaging of point objects. Also, the magnitude of the spectral data at Point B goes to zero at ν0/3, 2ν0/3, and ν0, since Point B is located where the SPSF’s (centered about Point A) vanish for ν0. The ringing artifacts in the spectral data are due to the convolution in the spectral dimension with a sinc function [see Eq. (18)]. These artifacts can be reduced by applying a window function to the intensity data in the τ-dimension before taking the Fourier transform.
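Equation (23) can be evaluated for the triangle geometry of Sec. 7 to confirm that the three pairwise contributions at Point B are centered at different delays while those at Point A all sit at τ=0. In this sketch, units are chosen so that c=fi=ν0=1 (hence 0.61λ0fi/R=0.61 with R=1), and `tau_pq` is a hypothetical helper:

```python
import numpy as np

# Sketch: fringe-packet center delays tau_{p,q} from Eq. (23) for the
# triangle geometry of Sec. 7; units with c = f_i = nu0 = R = 1 assumed.
c = fi = lam0 = R = 1.0
r = 1.5 * R
centers = [(0.0, r),
           (np.sqrt(3) * r / 2, -r / 2),
           (-np.sqrt(3) * r / 2, -r / 2)]
gammas = [0.0, 1.0 / 3.0, 1.0]

def tau_pq(point, p, q):
    (xb, yb) = point
    (xp, yp), (xq, yq) = centers[p], centers[q]
    return (xb * (xp - xq) + yb * (yp - yq)) / (c * fi * (gammas[p] - gammas[q]))

A = (0.0, 0.0)
B = (0.0, 0.61 * lam0 * fi / R)
shifts_A = [tau_pq(A, p, q) for p, q in [(1, 0), (2, 1), (2, 0)]]
shifts_B = [tau_pq(B, p, q) for p, q in [(1, 0), (2, 1), (2, 0)]]
```

The three shifts at Point B come out distinct (one is zero because the corresponding subaperture separation is purely horizontal while Point B is displaced vertically), which is exactly why the superposed fringe packet there is asymmetric.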

Fig. 6. Image intensity versus τ for the point object simulation: (a) at Point A, (b) at Point B, and contributions to the intensity at Point B due to the interference between subapertures: (c) 1 and 2, (d) 2 and 3, and (e) 1 and 3.

Fig. 7. Spectral data from point object simulation at positive temporal frequencies in the ν′-domain: (a) at Point A (real-valued) and (b) at Point B (real and imaginary parts).

7.2. Extended object

The second simulation used data from NASA’s Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) [16] as object data. The dimensions of the object data cube are 128×128 samples in the spatial dimensions and 103 samples in the spectral dimension. While the AVIRIS data is uniformly sampled in wavelength over a specific spectral range, in our simulations optical frequencies were arbitrarily assigned to each spectral band such that the object data was uniformly sampled in frequency, spanning the range 0.9ν0≤ν≤1.1ν0. Fig. 8 shows a movie of the object data versus ν, the relative size of the pupil in the spatial frequency domain at ν=1.03ν0, and a movie of the simulated image intensity versus τ. Note that the movie of the object data goes completely dark in frames that correspond to atmospheric absorption bands in the AVIRIS data. Also note that the fringe modulation in the image intensity movie is largest near τ=0, which occurs halfway through the movie. Fig. 9 shows the recovered spectral image for three values of ν′. The top row shows the real part of the complex-valued spectral images, and the bottom row indicates the spatial frequency content of each image by showing the magnitude of the spectral-spatial transform in the corresponding spectral bands. The left-hand column shows data that appears at one-third of the base optical frequency, at ν′=0.34ν0 (from subapertures 1 and 2); the middle column shows data that appears at two-thirds of the base optical frequency, at ν′=0.68ν0 (from subapertures 2 and 3); and the right-hand column shows data that appears at the base optical frequency, at ν′=1.03ν0 (from subapertures 1 and 3). This data clearly illustrates the advantage of using fractional delay rates. For example, if subaperture 2 were grouped with subaperture 1, then the spatial frequency content shown in the left-hand column of Fig. 9 would be absent from the data. Fig. 10 shows the real and imaginary parts of the composite spectral image at the base optical frequency 1.03ν0. Notice that the spatial frequency content of the real and imaginary parts of the image is equivalent everywhere except in the vicinity of a diagonal line passing through the DC spatial frequency, where the spatial frequency content of the real part adds constructively and that of the imaginary part adds destructively. Also notice that even though the composite spectral image has spatial frequency data in all directions, there is a finite region around DC where the spatial frequency data is missing. As a result, the spectral images are bipolar, zero-mean, high-pass-filtered versions of the object spectral density.

Fig. 8. The extended-object simulation: (a) movie (582 KB) of the object data versus ν (the still frame shows the data at ν = 1.03ν0), (b) size of the pupil in spatial frequencies corresponding to ν = 1.03ν0, and (c) movie (746 KB) of the image intensity versus τ (the still frame shows the image intensity at τ = 0).


Fig. 9. Spectral image data from the extended-object simulation. The top row shows the real part of spectral images at: (a) ν′ = 0.34ν0, (c) ν′ = 0.68ν0, and (e) ν′ = 1.03ν0. The bottom row shows the Fourier magnitude of each image. For the spectral images, note that dark grays represent negative values and light grays represent positive values.


Fig. 10. Composite spectral image data from the extended-object simulation: (a) the real part of the spectral image at ν = 1.03ν0, (b) the corresponding Fourier magnitude, (c) the imaginary part of the same spectral image, and (d) the corresponding Fourier magnitude. For the spectral images, note that dark grays represent negative values, middle gray represents zero, and light grays represent positive values.


8. Discussion

Fourier transform imaging spectroscopy can be performed with a multi-aperture optical system by using existing path-length control elements to introduce the required OPD’s. The theory presented shows that spectral data can be obtained from polychromatic intensity measurements by the standard Fourier technique, but the DC spatial frequency components are missing from the resulting spectral images. This is because the spatial transfer functions for these images, the SOTF’s, are given by the cross-correlations between the pupil functions for groups of subapertures that have different path delays during data collection. Since the subapertures do not normally overlap physically, the SOTF’s vanish in some finite region around the DC spatial frequency, so the spectral images are also missing some low spatial frequency content. This poses an interesting image reconstruction problem. Linear algorithms, such as the Wiener-Helstrom filter [17], cannot reconstruct the missing low spatial frequencies. However, nonlinear algorithms may be able to reconstruct the missing data based on constraints and specific assumptions about the object. It is unclear whether superresolution algorithms [18], which are typically used to fill in missing high spatial frequencies, can fill in missing low spatial frequency data. We have, however, had some success filling in the low spatial frequencies by maximizing a derivative-based sharpness metric [19], which assumes that the object consists of regions that are piecewise uniform in the spatial dimensions, subject to constraints requiring the reconstruction to be consistent with the panchromatic fringe bias data [20].
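The standard Fourier technique referred to above can be illustrated with a toy single-pixel example. The sketch below assumes a hypothetical two-line spectrum, samples the fringe term of the intensity record over the time-delay variable τ, and recovers the line frequencies from the temporal-frequency peaks; it is a minimal illustration, not the paper’s full simulation:

```python
import numpy as np

nu0 = 1.0                          # base optical frequency (arbitrary units)
lines = [0.95 * nu0, 1.05 * nu0]   # hypothetical two-line spectrum

# Fringe term of one pixel's intensity record versus the time delay tau:
# I(tau) ~ sum over nu of S(nu) cos(2 pi nu tau).
M = 1024
dtau = 1.0 / (8.0 * nu0)           # sampling well above the Nyquist rate
tau = np.arange(M) * dtau
I = sum(np.cos(2 * np.pi * nu * tau) for nu in lines)

# Standard Fourier post-processing: transform over tau and read the
# spectrum off the temporal-frequency peaks.
spec = np.abs(np.fft.rfft(I))
freqs = np.fft.rfftfreq(M, d=dtau)
top2 = np.sort(freqs[np.argsort(spec)[-2:]])
print(top2)  # two peaks near 0.95 and 1.05
```

The recovered peak locations match the assumed lines to within the spectral resolution set by the total delay range, 1/(M·dtau).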

In particular systems, the imaging properties can be improved significantly by introducing multiple OPD’s between the subapertures for each intensity measurement instead of using a single OPD. This technique offers the ability to collect spectral data over a larger area of the spatial frequency plane, but has two significant trade-offs: (i) the spectral bandwidth of the system needs to be limited and (ii) the spectral resolution varies with spatial frequency.
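The bookkeeping behind multiple simultaneous OPD’s can be checked arithmetically. Assuming hypothetical delay rates γ = (0, 1/3, 1), matching the fractional-rate pattern of the three-subaperture example above, each subaperture pair produces fringes at the temporal frequency |γp − γq|ν and therefore lands in its own ν′ band:

```python
from itertools import combinations

nu = 1.0                  # optical frequency (arbitrary units)
gamma = [0.0, 1/3, 1.0]   # hypothetical delay rates for subapertures 1, 2, 3

# Each subaperture pair (p, q) produces fringes at temporal frequency
# |gamma_p - gamma_q| * nu, so its spectral data appears at a distinct nu'.
bands = {(p + 1, q + 1): abs(gamma[p] - gamma[q]) * nu
         for p, q in combinations(range(3), 2)}
print(bands)  # pairs (1,2), (2,3), (1,3) map to about nu/3, 2nu/3, nu
```

Because the three bands are disjoint, each baseline’s spatial frequency data is separable in the ν′ domain; the price, as noted above, is that the usable spectral bandwidth must be limited so the bands do not overlap.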

Appendix

Multi-aperture FTIS is based on temporal coherence effects, but the role of spatial coherence effects may not be immediately obvious. For this reason, this section presents a derivation of Eq. (2) based on partial coherence theory. The cross-spectral density function is propagated through the system using Fresnel-like transforms and generalized transmission functions. An expression for the image intensity is given for a general partially coherent object and then simplified for a spatially incoherent object. The final result shows that spatial coherence effects do not play a role in the measurements.

The cross-spectral density in a plane z = constant is defined in Section 4.3.2 of Ref. [21] as

W^{(z)}(x_1, y_1, x_2, y_2, \nu)\,\delta(\nu - \nu') = \left\langle V(x_1, y_1, \nu)\, V^{*}(x_2, y_2, \nu') \right\rangle,

where V(x,y,ν) is the generalized temporal Fourier transform of the analytic signal representation of the scalar electric field at the point (x,y) in the plane of interest, and the angle brackets denote an ensemble average. Note that this definition is the complex conjugate of the quantity in Ref. [21], in order to conform to the convention of Refs. [22,23]. The cross-spectral density obeys two Helmholtz equations and can be propagated from a plane z=0 to a plane z=d>0 by two applications of Rayleigh’s first diffraction formula (see Sec. 4.4.2 of Ref. [21]). By making the standard paraxial physical-optics approximations, the propagation equation can be written in the following form

W^{(d)}(x_1, y_1, x_2, y_2, \nu) = \frac{1}{\lambda^2 d^2} \iiiint dx_1'\, dy_1'\, dx_2'\, dy_2'\; W^{(0)}(x_1', y_1', x_2', y_2', \nu)
\times \exp\left\{ \frac{i\pi}{\lambda d} \left[ (x_1 - x_1')^2 + (y_1 - y_1')^2 - (x_2 - x_2')^2 - (y_2 - y_2')^2 \right] \right\},

where W^{(0)}(x_1, y_1, x_2, y_2, ν) and W^{(d)}(x_1, y_1, x_2, y_2, ν) represent the cross-spectral densities in the planes z=0 and z=d, respectively, the distance between the planes is assumed to be many optical wavelengths (d ≫ λ), and the Fresnel approximation [24] has been used.
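For a coherent field, the cross-spectral density factorizes as W = V(x1,y1)V*(x2,y2), so each application of the quadratic-phase kernel above reduces to an ordinary Fresnel propagation of V. The one-dimensional sketch below (toy grid and units, all parameters hypothetical) uses the mathematically equivalent transfer-function form of Fresnel propagation and checks its unitarity:

```python
import numpy as np

lam, d = 0.5e-6, 1.0           # wavelength and propagation distance (m)
N, dx = 1024, 10e-6            # grid samples and spacing (m)
x = (np.arange(N) - N // 2) * dx
f = np.fft.fftfreq(N, d=dx)    # spatial frequencies (1/m)

# Input field: a uniform slit of half-width 0.25 mm (toy geometry).
V0 = (np.abs(x) <= 0.25e-3).astype(complex)

# Fresnel propagation in transfer-function form: each plane-wave component
# acquires the quadratic phase -pi * lam * d * f^2 (constant phase omitted).
H = np.exp(-1j * np.pi * lam * d * f**2)
Vd = np.fft.ifft(np.fft.fft(V0) * H)

# |H| = 1, so the propagation is unitary and energy is conserved on the grid.
E0 = np.sum(np.abs(V0)**2)
Ed = np.sum(np.abs(Vd)**2)
print(abs(Ed - E0) / E0 < 1e-9)  # True
```

Propagating the cross-spectral density itself would apply this kernel once in the (x1, y1) coordinates and once, conjugated, in the (x2, y2) coordinates, exactly as in the four-fold integral above.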

The concept of a generalized pupil function for the scalar optical field can be applied to the cross-spectral density. If T(x,y,ν) describes the complex amplitude transmission in the plane z=0, such that

V_{\mathrm{trans}}(x, y, \nu) = T(x, y, \nu)\, V_{\mathrm{inc}}(x, y, \nu),

where Vinc(x,y,ν) represents the field incident from the half-space z<0 and Vtrans(x,y,ν) represents the field transmitted into the half-space z>0, then by substitution into Eq. (24), one can write

W^{(0)}_{\mathrm{trans}}(x_1, y_1, x_2, y_2, \nu) = T(x_1, y_1, \nu)\, T^{*}(x_2, y_2, \nu)\, W^{(0)}_{\mathrm{inc}}(x_1, y_1, x_2, y_2, \nu),

where W^{(0)}_inc(x_1, y_1, x_2, y_2, ν) and W^{(0)}_trans(x_1, y_1, x_2, y_2, ν) represent the incident and transmitted cross-spectral densities, respectively. The standard transmission function for a lens is given in Section 5.1 of Ref. [24], and the transmission function for the pupil plane, Tpup(ξ,η,ν,τ), is given in Eq. (1). Note that the pupil transmission function is written explicitly as a function of the time-delay variable.

The cross-spectral density is propagated through the multi-aperture FTIS system shown in Fig. 2 by repeated application of Eqs. (25) and (27). After simplification, the cross-spectral density in the image plane W^{(i)}(x_1, y_1, x_2, y_2, ν, τ) can be expressed as

W^{(i)}(x_1, y_1, x_2, y_2, \nu, \tau) = \iiiint dx_1'\, dy_1'\, dx_2'\, dy_2'\; \frac{1}{M^2}\, W^{(o)}\!\left( \frac{x_1'}{M}, \frac{y_1'}{M}, \frac{x_2'}{M}, \frac{y_2'}{M}, \nu \right)
\times \exp\left[ \frac{i\pi}{\lambda f_o} \left( 1 - \frac{d_1}{f_o} \right) \left( \frac{x_1'^2}{M^2} + \frac{y_1'^2}{M^2} - \frac{x_2'^2}{M^2} - \frac{y_2'^2}{M^2} \right) \right]
\times \exp\left[ \frac{i\pi}{\lambda f_i} \left( 1 - \frac{d_2}{f_i} \right) \left( x_1^2 + y_1^2 - x_2^2 - y_2^2 \right) \right]
\times \sum_{q=1}^{Q} \sum_{p=1}^{Q} t_q(x_1 - x_1', y_1 - y_1', \nu)\, t_p^{*}(x_2 - x_2', y_2 - y_2', \nu)
\times \exp[-i 2\pi \nu (\gamma_q - \gamma_p) \tau],

where W^{(o)}(x_1', y_1', x_2', y_2', ν) is the cross-spectral density in the object plane. The image intensity I(x,y,τ) is related to the cross-spectral density by [21]

I(x, y, \tau) = \int W^{(i)}(x, y, x, y, \nu, \tau)\, d\nu.

For a spatially incoherent object, the object cross-spectral density can be written as [25]

W^{(o)}(x_1', y_1', x_2', y_2', \nu) = \kappa\, S_o(x_1', y_1', \nu)\, \delta(x_1' - x_2',\, y_1' - y_2'),

where So(x′,y′,ν) is the spectral density of the object and κ = λ²/π for a perfectly incoherent object. Substituting Eqs. (28) and (30) into Eq. (29) and simplifying yields Eq. (2).
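The collapse of the double integration for a δ-correlated object can be checked numerically. The sketch below uses a toy one-dimensional, single-aperture system with one spectral band (all parameters hypothetical): propagating the δ-correlated cross-spectral density and taking its diagonal gives exactly the incoherent convolution of So with |t|², confirming that spatial coherence effects drop out:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
So = rng.random(N)                   # toy 1-D object spectral density, one band
x = np.arange(N) - N // 2
t = np.sinc(x / 4).astype(complex)   # toy coherent amplitude PSF t(x)
T = np.array([[t[(i - j) % N] for j in range(N)]
              for i in range(N)])    # t(x - x'), periodic for simplicity

# Route 1: propagate the delta-correlated cross-spectral density,
# W_o(x1', x2') = kappa * So(x1') * delta(x1' - x2') with kappa = 1,
# and take the diagonal of the image-plane cross-spectral density.
Wo = np.diag(So).astype(complex)
Wi = T @ Wo @ T.conj().T
I_coh = np.real(np.diag(Wi))         # I(x) = W_i(x, x)

# Route 2: ordinary incoherent imaging, So convolved with |t|^2.
I_conv = (np.abs(T)**2) @ So

print(np.allclose(I_coh, I_conv))  # True: spatial coherence drops out
```

The two routes agree term by term: the δ function removes one integration, leaving only the squared magnitude of the coherent amplitude spread function, which is the single-subaperture form of the PSF in Eq. (2).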

Acknowledgment

This work was supported by Lockheed Martin Corporation.

References and Links

1. J. S. Fender, “Synthetic apertures: an overview,” in Synthetic Aperture Systems, J. S. Fender, ed., Proc. SPIE 440, 2–7 (1983).

2. S.-J. Chung, D. W. Miller, and O. L. de Weck, “Design and implementation of sparse aperture imaging systems,” in Highly Innovative Space Telescope Concepts, H. A. MacEwen, ed., Proc. SPIE 4849, 181–191 (2002).

3. D. Redding, S. Basinger, A. E. Lowman, A. Kissil, P. Bely, R. Burg, and R. Lyon, “Wavefront sensing for a next generation space telescope,” in Space Telescopes and Instruments V, P. Y. Bely and J. B. Breckinridge, eds., Proc. SPIE 3356, 758–772 (1998).

4. R. L. Kendrick, A. L. Duncan, and R. Sigler, “Imaging Fizeau interferometer: experimental results,” presented at Frontiers in Optics, Tucson, Arizona, 5–9 Oct. 2003 (post-deadline paper 15).

5. R. G. Paxman, T. J. Schulz, and J. R. Fienup, “Joint estimation of object and aberrations by using phase diversity,” J. Opt. Soc. Am. A 9, 1072–1085 (1992).

6. J. R. Fienup, “MTF and integration time versus fill factor for sparse-aperture imaging systems,” in Imaging Technologies and Telescopes, J. W. Bilbro, et al., eds., Proc. SPIE 4091, 43–47 (2000).

7. J. R. Fienup, D. Griffith, L. Harrington, A. M. Kowalczyk, J. J. Miller, and J. A. Mooney, “Comparison of reconstruction algorithms for images from sparse-aperture systems,” in Image Reconstruction from Incomplete Data II, P. J. Bones, et al., eds., Proc. SPIE 4792, 1–8 (2002).

8. J. Kauppinen and J. Partanen, Fourier Transforms in Spectroscopy (Wiley-VCH, Berlin, 2001).

9. N. J. E. Johnson, “Spectral imaging with the Michelson interferometer,” in Infrared Imaging Systems Technology, Proc. SPIE 226, 2–9 (1980).

10. C. L. Bennett, M. Carter, D. Fields, and J. Hernandez, “Imaging Fourier transform spectrometer,” in Imaging Spectrometry of the Terrestrial Environment, G. Vane, ed., Proc. SPIE 1937, 191–200 (1993).

11. M. R. Carter, C. L. Bennett, D. J. Fields, and F. D. Lee, “Livermore imaging Fourier transform infrared spectrometer,” in Imaging Spectrometry, M. R. Descour, J. M. Mooney, D. L. Perry, and L. R. Illing, eds., Proc. SPIE 2480, 380–386 (1995).

12. K. Itoh and Y. Ohtsuka, “Fourier transform spectral imaging: retrieval of source information from three-dimensional spatial coherence,” J. Opt. Soc. Am. A 3, 94–100 (1986).

13. J.-M. Mariotti and S. T. Ridgway, “Double Fourier spatio-spectral interferometry: combining high spectral and high spatial resolution in the near infrared,” Astron. Astrophys. 195, 350–363 (1988).

14. M. Frayman and J. A. Jamieson, “Scene imaging and spectroscopy using a spatial spectral interferometer,” in Amplitude and Intensity Spatial Interferometry, J. B. Breckinridge, ed., Proc. SPIE 1237, 585–603 (1990).

15. R. L. Kendrick, E. H. Smith, and A. L. Duncan, “Imaging Fourier transform spectrometry with a Fizeau interferometer,” in Interferometry in Space, M. Shao, ed., Proc. SPIE 4852, 657–662 (2003).

16. Provided through the courtesy of Jet Propulsion Laboratory, California Institute of Technology, Pasadena, California (http://aviris.jpl.nasa.gov/).

17. C. W. Helstrom, “Image restoration by the method of least squares,” J. Opt. Soc. Am. 57, 297–303 (1967).

18. B. R. Hunt, “Super-resolution of images: algorithms, principles, performance,” International Journal of Imaging Systems and Technology 6, 297–304 (1995).

19. S. T. Thurman and J. R. Fienup, “Fourier transform imaging spectroscopy with a multiple-aperture telescope: band-by-band image reconstruction,” in Optical, Infrared, and Millimeter Space Telescopes, J. C. Mather, ed., Proc. SPIE 5487-68 (2004).

20. S. T. Thurman and J. R. Fienup, “Reconstruction of multispectral image cubes from multiple-telescope array Fourier transform imaging spectrometer,” presented at Frontiers in Optics, Rochester, New York, 10–14 Oct. 2004, paper FTuB3.

21. L. Mandel and E. Wolf, Optical Coherence and Quantum Optics (Cambridge University Press, Cambridge, 1995).

22. M. Born and E. Wolf, Principles of Optics, 7th (expanded) ed. (Cambridge University Press, Cambridge, 2002), Sec. 10.2.

23. J. W. Goodman, Statistical Optics (Wiley, New York, 2000), Sec. 3.5.

24. J. Goodman, Introduction to Fourier Optics, 2nd ed. (McGraw-Hill, New York, 1996).

25. M. J. Beran and G. B. Parrent Jr., “The mutual coherence of incoherent radiation,” Nuovo Cimento 27, 1049–1065 (1963).

Supplementary Material (3)

Media 1: AVI (455 KB)     
Media 2: AVI (582 KB)     
Media 3: AVI (746 KB)     

Figures (10)

Fig. 1.
Fig. 1. Illustration of multiple-telescope array with four subaperture telescopes.
Fig. 2.
Fig. 2. Simplified refractive model for a multi-aperture optical system.
Fig. 3.
Fig. 3. Pupil of optical system used in simulations.
Fig. 4.
Fig. 4. Movie (455 KB) showing the effect of the OPD’s on the optical system at ν = ν0 as the time-delay variable is changed from τ = 0 to τ = 3/ν0: (a) the magnitude of the relative phase delay of each subaperture, where white represents 0 and black represents ±π, (b) the PSF, and (c) the magnitude of the real part of the OTF.
Fig. 5.
Fig. 5. Localization of FTIS signal in: (a) the raw intensity data cube, (b) the spectral image cube, and (c) the spectral-spatial transform cube. In each cube, the FTIS signal is localized to the darkly shaded regions.
Fig. 6.
Fig. 6. Image intensity versus τ for the point object simulation: (a) at Point A, (b) at Point B, and contributions to the intensity at Point B due to the interference between subapertures: (c) 1 and 2, (d) 2 and 3, and (e) 1 and 3.
Fig. 7.
Fig. 7. Spectral data from point object simulation at positive temporal frequencies in the ν′-domain: (a) at Point A (real-valued) and (b) at Point B (real and imaginary parts).
