Optica Publishing Group

Single snap-shot double field optical zoom

Open Access

Abstract

In this paper we present a new approach that provides super-resolved imaging at the center of the field of view while still allowing the remainder of the original field of view to be seen at the original resolution. The operation resembles optical zooming, except that the zoomed and non-zoomed images are obtained simultaneously, from a single snapshot and with a single imaging lens. The technique utilizes a special static coding element and a post-processing algorithm, without any mechanical movement.

©2005 Optical Society of America

1. Introduction

Optical zooming is basically a super resolution technique, since its purpose is to obtain resolution higher than that provided by the imaging system prior to zooming. The physical restrictions that limit the spatial resolution of an imaging system are either the size of the aperture of the imaging lens or the geometrical parameters of the detection array, such as its pitch and fill factor. Whichever limitation is more restrictive prevails.

The common optical realization of optical zoom includes several lenses and a mechanical mechanism, as in Ref. [1]. Other principles do not involve mechanical movements but rather time-adaptive concepts that allow varying the overall focal length of the lens. Refs. [2–13] present several works dealing with zooming lenses. Basically, then, the zooming operation is an increase in the focal length of the imaging module, providing a smaller footprint of each detector pixel on the object. The spatial resolution improvement in the center of the field of view during the zooming process is obtained because the footprint of each pixel on the object equals ΔxR/F, where Δx is the pitch of the pixels of the detector, F the focal length and R the distance to the object. Thus, the regular optical zooming operation has two major disadvantages. The first is that increasing the focal length, for instance by a factor of 3, while preserving the F-number results in an increase of the imaging module volume by a factor of 3³ = 27. This means more weight and less reliability (due to the mechanical mechanism). The second disadvantage is that the zoomed and non-zoomed images are not obtained simultaneously, and the resolution improvement in the central part of the field of view comes at the expense of a decreased field of view.
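As a quick numeric illustration of the footprint relation ΔxR/F and of the volume penalty described above (the pitch, focal length and distance below are illustrative assumptions, not values from the paper):

```python
# Footprint of one detector pixel on the object: footprint = dx * R / F.
# All values below are illustrative assumptions.
dx = 9e-6   # detector pixel pitch [m]
F = 50e-3   # focal length [m]
R = 100.0   # distance to the object [m]

footprint = dx * R / F
print(footprint)          # ~0.018 m per pixel on the object

# A x3 optical zoom triples F, shrinking the footprint threefold ...
print(dx * R / (3 * F))   # ~0.006 m

# ... but preserving the F-number scales the module volume by 3**3
print(3 ** 3)             # 27
```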

In this paper we present a novel zooming approach in which, instead of several lenses, only a single lens is used. In addition, since no movement is required and the focal length is not changed, the imaging module volume is not increased. Note that the resolution improvement in the center of the field of view is not due to an increase of the focal length F but rather to the generation of smaller effective pixels in that spatial region, i.e., a reduction of Δx by the same factor by which F would otherwise have been increased. Finally, the zoomed and non-zoomed images are obtained simultaneously, in a single snapshot. It should be noted that the resolution improvement obtained in the central part of the field of view follows the idea presented in Ref. [14]. However, Ref. [14] shows only how to obtain the resolution improvement; in this paper we show how to obtain this improvement without sacrificing the field of view, i.e., while also retaining the non-zoomed resolution in the remaining part of the field of view. Note that having improved resolution in the central part of the field of view while simultaneously preserving the original non-zoomed resolution in the outer parts yields more spatially resolved points than the number of pixels in the detector array. Such an outcome is made possible by a trade-off in the dynamic range of the captured image.

The operation principle is based on the following: the image resolution obtained using a common single lens is higher in the center of the field of view and degrades towards the periphery. This property is essential for the proposed operation principle, and it arises because the surface on which a perfect image is obtained is a sphere rather than a plane. The optical limit for the resolution obtained in the center is proportional to λF/D (where λ is the wavelength, F is the focal length and D is the aperture of the lens). For many detectors this resolution limit is far less restrictive, and harder to reach, than the restriction imposed by the sampling pitch of the detector. Consequently, in such cases the detector forces a poorer image quality. In our technique the optics shall provide, in the center of the field of view, an optical resolution limited by diffraction, while in the remaining part of the field of view the optics shall provide a resolution limit equal to the detector's sampling pitch. In this manner, exploiting the aliasing effect caused by the sampling of the detector and performing some digital post-processing results in a super-resolved image: it has diffraction-limited resolution in the central region of the field of view while preserving the original geometrical resolution in its outer parts.

In section 2 we present the theory of the suggested approach. In section 3 we present the experimental investigation and section 4 concludes the paper.

2. Theory

We will now derive the theory showing how we may obtain simultaneously the improved resolution in the central part of the field of view (zoomed image) while preserving the original non-zoomed resolution in the outer parts.

2.1 Preliminary

For the sake of simplicity the analysis of the method will be one-dimensional (1-D). The two-dimensional deduction is straightforward.

Let us take a 1-D positive object s(x) (see Fig. 1) and denote by LT its spatial support. This object has a minimal resolution detail, denoted δx, in its central part LC. In the following mathematical analysis we will take LC to be 1/6 of LT, although other ratios can be chosen. The finest optically resolved detail in the remaining periphery is three times larger, 3δx, which equals the geometrical limitation of the pitch of the sampling detection array (see Fig. 1). This limitation is determined by the optics and exists prior to the digital sampling performed by the detection array.

Fig. 1. One-dimensional object. The minimal details in the central part are three times finer than those in the periphery.

One wishes to image this object using an ideal aberration-free optical system with a magnification factor of 1. The image is captured using a camera with a pixel pitch of 3δx, while the pixels are assumed to represent an ideal spatial Dirac impulse train. The proposed method enables resolving details with high resolution in the central part, in spite of the larger pitch, without a decrease in the field of view. In optical terms, we obtain an optical zoom of ×3 in the central 1/6 of the field of view while simultaneously retaining the ×1 resolution (without zooming) in the other 5/6 of the field of view. All of this is obtained from a single optically coded and then digitally processed image. The penalty is the introduction of some noise into the obtained image. The optical coding involves insertion of a certain spatial coding grating in the entrance pupil plane of the imaging lens. The super resolving approach that increases the resolution in the central 1/6 part of the field of view is based upon the approach presented in Ref. [14]. The case investigated here deals with coherent illumination, although the extension to the non-coherent case is straightforward, as described in Ref. [14].

Note that the geometrical super resolution method described in Ref. [14] is equivalent to the realization of an optical zoom in the central part of the field of view, since the footprint seen, in the super resolved image, over the observed object equals (R/F)·(Δx/κ), where R is the distance between the camera and the object, F is the focal length, Δx is the pitch of the pixels of the camera (Δx = 3δx) and κ is the geometrical super resolution factor (we always discuss the case of κ = 3). In case an optical zoom of factor κ is performed, the focal length is changed to κF and thus the footprint equals (R/(κF))·Δx. It is easily seen that both expressions are identical. Thus, in Ref. [14] we showed how, without changing the focal length, we perform optical zooming, which is actually done by performing geometrical super resolution. However, the condition for the operation of the approach presented in Ref. [14] is that the input object occupies no more than 1/κ of the field of view. As previously mentioned, in this paper we show how this field of view restriction is removed, so that one obtains simultaneously the super resolved image (the optically zoomed image) in the center of the field of view and the original resolution image in its outer part.
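The identity between the two footprint expressions above can be checked directly; the numeric values below are illustrative, and any positive R, F and Δx give the same outcome:

```python
# Footprint with smaller effective pixels vs. footprint with a longer
# focal length; the two expressions from the text coincide for kappa = 3.
# Values are illustrative assumptions.
R, F, dx, kappa = 100.0, 50e-3, 9e-6, 3

super_resolution = (R / F) * (dx / kappa)   # dx -> dx / kappa, F unchanged
optical_zoom = (R / (kappa * F)) * dx       # F -> kappa * F, dx unchanged

assert abs(super_resolution - optical_zoom) < 1e-15
```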

2.2 Mathematical general description

We denote by S(ν) the Fourier transform of the object s(x), with ν the spatial frequency coordinate belonging to the spectral range [-νmax, νmax], where νmax is the maximal spatial frequency of the object; it is inversely related to the spatial resolution δx. We virtually divide the Fourier content into three equal regions:

  1. Left third S-1(ν) with ν ∊ [-νmax, -1/3 νmax]
  2. Central third S0(ν) with ν ∊ [-1/3 νmax, 1/3 νmax]
  3. Right third S1(ν) with ν ∊ [1/3 νmax, νmax].
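This virtual division of the spectrum can be sketched numerically on a hypothetical discrete 1-D signal (the array length and the use of numpy's FFT are assumptions made for illustration):

```python
import numpy as np

# Divide the (fftshift-ordered) spectrum of a discrete 1-D signal into
# three equal bands, mirroring S_-1, S_0 and S_1 above.
N = 96                                       # illustrative length, divisible by 3
s = np.random.default_rng(0).standard_normal(N)
S = np.fft.fftshift(np.fft.fft(s))           # ordered from -nu_max to +nu_max

third = N // 3
S_m1, S_0, S_p1 = S[:third], S[third:2 * third], S[2 * third:]

# The three bands tile the full spectral range without overlap
assert np.allclose(np.concatenate([S_m1, S_0, S_p1]), S)
```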

The spatial grating multiplies these spectral components so that a certain degree of orthogonality between the components is created. The coding grating mask also consists of three regions:

  1. Left third G-1(ν) with ν ∊ [-νmax, -1/3 νmax]
  2. Central third G0(ν) with ν ∊ [-1/3 νmax, 1/3 νmax]
  3. Right third G1(ν) with ν ∊ [1/3 νmax, νmax].

The chosen mask fulfils the orthogonality condition of:

$$G_l(\nu)\,G_k(\nu)=\delta[l,k]$$

where δ[l,k] is the Kronecker delta function. When the image is under-sampled by the detector, an aliasing effect takes place. In fact, the aliasing is essentially a folding of S-1(ν) and S1(ν) into the central third of the spectrum. Therefore, the spectrum of the captured image equals:

$$I(\nu)=\sum_{k=-1}^{1}S_k(\nu)\,G_k(\nu),\qquad \nu\in\left[-\tfrac{1}{3}\nu_{\max},\,\tfrac{1}{3}\nu_{\max}\right]$$
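The folding caused by under-sampling can be verified with a short numerical experiment (the sizes and the decimation factor of 3 are illustrative):

```python
import numpy as np

# Decimating a signal by 3 folds the two outer thirds of its DFT onto
# the central third -- the aliasing exploited by the method above.
N, M = 96, 32                      # N = 3 * M, illustrative sizes
x = np.random.default_rng(1).standard_normal(N)
X = np.fft.fft(x)

y = x[::3]                         # under-sampling by the detector
Y = np.fft.fft(y)

# Standard DFT decimation identity: Y[k] = (X[k] + X[k+M] + X[k+2M]) / 3
folded = (X[:M] + X[M:2 * M] + X[2 * M:]) / 3
assert np.allclose(Y, folded)
```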

To improve the clarity of the presentation, let us briefly recall the derivation made in Ref. [14]. We examine a simple situation in which we want to enhance the resolution by a factor of three, assuming an ideal CCD in which the pixels are infinitely small and are placed at a distance Δx from one another (according to Fig. 1, Δx = 3δx). We will now show that if one is willing to sacrifice all but the central 1/3 of the field of view, one can obtain an improvement of the resolution in that central 1/3 by a factor of 3 (without increasing the focal length by a factor of 3). In the case of ideal sampling, the sampling function of the CCD [denoted CCD(x)] is modeled as an infinite train of impulses:

$$\mathrm{CCD}(x)=\sum_{n=-\infty}^{\infty}\delta(x-n\,\Delta x)$$

As previously mentioned the coding mask [denoted as CDMÃ(ν)] is divided into three sub functions as follows:

$$\widetilde{\mathrm{CDMA}}(\nu)=\sum_{n=-1}^{1}G_n(\nu-n\,\Delta\nu)$$

The CDMA mask is multiplied in the Fourier plane with the spectrum of the input signal s(x) [denoted S(ν)]. This is obtained since the coding mask is positioned in the coherent transfer function (CTF) plane of the imaging lens. In the coherent case, the Fourier transform of the imaged object is obtained in the CTF plane. In the non-coherent case this position is also related to the spectrum of the imaged object.

This spatial distribution is multiplied by CCD(x), the sampling grid of the CCD, which means that it is convolved with the Fourier transform of the CCD grid in the spectral domain:

$$D(\nu)=\left[S(\nu)\sum_{n=-1}^{1}G_n(\nu-n\,\Delta\nu)\right]*\left[\sum_{n=-\infty}^{\infty}\delta\!\left(\nu-n\,\frac{2\pi}{\Delta x}\right)\right]$$

where * denotes the convolution operation. Since Δν = 2π/Δx, the last expression can be simplified to:

$$D(\nu)=S(\nu)\sum_{n=-1}^{1}G_n(\nu-n\,\Delta\nu)*\sum_{n=-\infty}^{\infty}\delta(\nu-n\,\Delta\nu)=\sum_{n=-\infty}^{\infty}S(\nu-n\,\Delta\nu)\left[\sum_{k=-1}^{1}G_k\bigl(\nu-(n+k)\,\Delta\nu\bigr)\right]$$

Image retrieval is simply achieved by Fourier transforming the grabbed output, multiplying it with the original coding mask and then downsampling:

$$R(\nu)=D(\nu)\,\widetilde{\mathrm{CDMA}}(\nu)=\left\{\sum_{n=-\infty}^{\infty}S(\nu-n\,\Delta\nu)\left[\sum_{k=-1}^{1}G_k\bigl(\nu-(n+k)\,\Delta\nu\bigr)\right]\right\}\left[\sum_{m=-1}^{1}G_m(\nu-m\,\Delta\nu)\right]$$
$$=\sum_{n=-1}^{1}S(\nu-n\,\Delta\nu)\,G_n(\nu-n\,\Delta\nu)=S(\nu)\,\widetilde{\mathrm{CDMA}}(\nu)\ \xrightarrow{\ \text{downsampling}\ }\ S(\nu)$$

We see that modulating the input's spectrum by multiplying it with the coding mask correctly prevents data corruption due to aliasing. This insight was proven in Ref. [14] and demonstrated experimentally. It indeed demonstrates super resolution, i.e., an effect equivalent to seeing an image with ×3 zoom without changing the focal length. But this improvement is obtained only in the central 1/3 of the field of view, while the input object occupies only 1/3 of the field of view. Let us now continue by proving that we can obtain the super resolved image in the central field of view without the need to pay with the outer 2/3 of the field.
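The coding/decoding chain just recalled can be sketched in a few lines. The interleaved-comb mask used here is a simplification chosen so that the orthogonality condition holds exactly after folding; the paper itself uses a Dammann-type grating, and all sizes are illustrative:

```python
import numpy as np

N, M = 96, 32                                 # N = 3 * M, illustrative sizes
rng = np.random.default_rng(2)
x = rng.standard_normal(N)
X = np.fft.fft(x)

# Coding mask: in the band r = k // M keep only bins whose residue k % 3
# matches r % 3, so after folding each output bin receives exactly one band.
k = np.arange(N)
mask = ((k % 3) == (k // M) % 3).astype(float)

x_coded = np.fft.ifft(X * mask)               # coding in the CTF plane
y = x_coded[::3]                              # detector sampling (aliasing)
Y = np.fft.fft(y)

# Decoding: unfold the aliased spectrum by re-multiplying with the mask.
X_rec = 3.0 * np.tile(Y, 3) * mask
assert np.allclose(X_rec, X * mask)           # coded spectrum survives aliasing intact
```

The decoded spectrum equals the coded one, S(ν) times the mask, with no cross-talk between the folded thirds; recovering the remaining bins relies on the object occupying only 1/κ of the field of view, as stated above.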

The grating of Eq. (1) is illustrated in Fig. 2 in a folded manner: G-1(ν) and G1(ν) are folded into the central third of the spectrum: G0(ν).

Fig. 2. The spatial grating positioned in the CTF plane. Its three parts G-1(ν), G0(ν) and G1(ν) are plotted in a folded manner. The period of G0(ν) is three times smaller than the period of G-1(ν) and G1(ν).

As a result, I(ν) can be described as composed of so-called "macro-pixels". Each macro-pixel consists of the S-1(ν), S1(ν) and S0(ν) contributions [see Fig. 3(a)–3(c)]. The structure presented in Figs. 3(a) and 3(b) is the theoretical goal, since it provides a full and simple orthogonality condition. In reality, however, such a binary-like coding grating will have a finite number of harmonics. Therefore, the spectral structure of the "macro-pixels" will be different. However, if properly designed, it will still remain orthogonal (when the proper locations are observed) and will resemble the structure shown in Fig. 3(c).

Fig. 3. Orthogonality and macro-pixels: (a) An example of orthogonal coding: in each spectral region there is a macro-pixel with a certain non-zero pixel. (b) After aliasing, all non-zero pixels are folded in a non-overlapping way, providing orthogonality. (c) Due to the real realization of the grating, the true structure differs slightly from the theory presented in 3(a) and 3(b).

Next we formulate the reconstruction algorithm for the original image. The orthogonal coding grating mask is a Dammann-like phase structure whose spatial effect is similar to replication. The mask is designed such that different replications are generated for the high frequency content [G-1(ν) and G1(ν)] and for the low frequency content [G0(ν)], as seen in Fig. 4. The replications for the high frequencies are 1/6 of the field of view apart, and for the low frequencies 1/2 of the field of view apart.

Fig. 4. Spatial effect of the coding mask. (a) Replication of the high spectral content. (b) Replication of the low spectral content.

  1. We shall first reconstruct the high frequency content S-1(ν) and S1(ν) by sampling I(ν). The spatial contents of S-1(ν) and S1(ν) occupy only a fraction of the field of view LT. Therefore it is possible to keep only every 6th (LT/LC) sample without losing information; the other samples are calculated using interpolation. Figure 5(a) illustrates the sampling grid. Note that at the sampling points S-1(ν) and S1(ν) are orthogonal. On the other hand, a certain noise is added to the sampled high frequency content due to S0(ν). In order to minimize this noise effect, each sample value is taken as the algebraic average over its neighborhood. Figure 5(b) shows the Fourier transform of the grating illustrated in Fig. 5(a). As one may note, it resembles 7 delta functions: the two pairs of delta functions appearing on both sides of the central delta resemble spatial derivatives, since each of those pairs contains one positive and one negative delta with a small spatial shift introduced between them. Those two pairs that make the derivative correspond to the two replications (the -1 and the 1 orders) related to the high frequencies [Fig. 4(a)]. The outer two deltas correspond to the two replications (again the -1 and 1 orders) of the low frequencies [Fig. 4(b)].

     Fig. 5. (a) Sampling the high frequency content: S-1(ν) samples are marked with oe-13-24-9858-i001, S1(ν) samples with oe-13-24-9858-i002. (b) The Fourier transform of the grating.

  2. Next, we subtract the reconstructed S-1(ν) and S1(ν) from I(ν). Ideally this leaves us with only the low frequency content, which is expressed in the spatial domain as:

    $$i_L(x)=(s_0*g_0)(x)\cdot\mathrm{rect}\!\left(\frac{x}{L_T}\right)$$

    where s0 and g0 are the inverse Fourier transforms of S0(ν) and G0(ν), respectively, and '*' stands for the convolution operation. rect(x/LT) is defined as:

    $$\mathrm{rect}\!\left(\frac{x}{L_T}\right)=\begin{cases}1,&|x|\le \frac{L_T}{2}\\[2pt]0,&\text{otherwise}\end{cases}$$

    The function g0(x) in fact consists of three Dirac impulse functions:

    $$g_0(x)=\sum_{n=-1}^{1}a_n\,\delta\!\left(x-n\,\frac{L_T}{2}\right)$$

  3. Now we divide iL(x) and s0(x) each into a set of 6 equally-supported functions, denoted correspondingly rj(x), j = 1,…,6 and fj(x), j = 1,…,6. These two sets of functions are related through 6 linear equations, which can be well understood by observing Fig. 4(b):

    $$\begin{aligned}r_1(x)&=a_0f_1(x)+a_1f_4(x)\\ r_2(x)&=a_0f_2(x)+a_1f_5(x)\\ r_3(x)&=a_0f_3(x)+a_1f_6(x)\\ r_4(x)&=a_0f_4(x)+a_1f_1(x)\\ r_5(x)&=a_0f_5(x)+a_1f_2(x)\\ r_6(x)&=a_0f_6(x)+a_1f_3(x)\end{aligned}$$

    or alternatively through a 6×6 matrix:

    $$\begin{bmatrix}r_1(x)\\r_2(x)\\r_3(x)\\r_4(x)\\r_5(x)\\r_6(x)\end{bmatrix}=\begin{bmatrix}a_0&0&0&a_1&0&0\\0&a_0&0&0&a_1&0\\0&0&a_0&0&0&a_1\\a_1&0&0&a_0&0&0\\0&a_1&0&0&a_0&0\\0&0&a_1&0&0&a_0\end{bmatrix}\begin{bmatrix}f_1(x)\\f_2(x)\\f_3(x)\\f_4(x)\\f_5(x)\\f_6(x)\end{bmatrix}$$

    By inverting the matrix we find fj(x) and therefore s0(x), which is the low frequency content of the original image information. Note that the fj(x) are the original 6 spatial regions of s(x), while the rj(x) are the spatial distributions obtained in each of the 6 regions after generation of the replications in the CCD plane. Eqs. 11–12 correspond to the low frequency shift seen in Fig. 4(b); the an are the coefficients by which each of the 3 replications in Fig. 4(b) is multiplied.
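The inversion of the 6×6 system described in step 3 can be sketched directly. The coefficients a0, a1 and the segment values are illustrative stand-ins; the matrix is invertible whenever a0 ≠ ±a1:

```python
import numpy as np

# Solve r = A f for the six field-of-view segments f_j, given the
# replication coefficients a0, a1 (illustrative values).
a0, a1 = 1.0, 0.4
A = np.array([
    [a0, 0,  0,  a1, 0,  0 ],
    [0,  a0, 0,  0,  a1, 0 ],
    [0,  0,  a0, 0,  0,  a1],
    [a1, 0,  0,  a0, 0,  0 ],
    [0,  a1, 0,  0,  a0, 0 ],
    [0,  0,  a1, 0,  0,  a0],
])

f_true = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])  # hypothetical segments
r = A @ f_true                                     # replicated observations
f = np.linalg.solve(A, r)                          # recover the segments
assert np.allclose(f, f_true)
```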

3. Simulation investigation

In the simulations we assume that the test object is imaged with an optical imaging system having a resolution limit in the periphery equal to the detector array pitch; in the central part of the field of view the optical resolution is three times greater than in the periphery. A Lena image is used as the object, with a high frequency 2-D barcode planted at its center. This barcode is under-sampled if only every third pixel of it is taken into consideration. Therefore the central high frequency content of the image, that is the barcode pattern, is under-sampled, or low-pass filtered, by the detector. To adapt the notations of Fig. 1, the resolution of the Lena image is 3δx while the resolution of the barcode pattern is δx. A grating element (the coding mask) was attached to the imaging lens (the CTF plane, or the entrance pupil of the lens) as depicted in the experimental setup of Fig. 6(a). The grating contained a different Dammann grating (see Ref. [15]) in the central and outer parts of the mask, as described in Fig. 4. The mask itself is illustrated in Fig. 6(b).

Fig. 6. (a) The experimental setup. (b) The coding Dammann mask that was attached to the imaging lens.

The three regions of the grating depicted in Fig. 3(c) are merely shifted cosine functions. In this arrangement the high frequency content is sampled at 1/6 of the basic sampling rate, since the spatial extent of S-1(ν) and S1(ν) is LC = LT/6. The Fourier transform of the grating is merely several impulse functions, which in the spatial domain generate the 6 shifted replicas of the object, as shown in Fig. 5(b). After recovering the high frequency content, one should solve a set of 6 linear equations (see Eqs. 11 or 12) in order to reconstruct the low frequency content S0(ν). Figure 7(a) presents the non-zoomed image in which the full field of view is seen. In this case, though, the central high resolution barcode structure cannot be resolved [see Fig. 7(a)]. In Fig. 7(b) we applied regular optical zooming to the image of Fig. 7(a). Here the field of view is reduced by a factor of 3, but the spatial resolution is improved by the same factor and now the central barcode structure can be resolved.

Fig. 7. (a) The non-zoomed test object used in simulations: Lena image with a high frequency two-dimensional barcode pattern at its center. (b) The ×3 zoomed test target where one may see the high frequency barcode pattern.

In the final stage we applied our post-processing algorithm to the captured image. The resulting image is shown in Fig. 8. This image proves the concept presented in this paper: the high frequency central field of view (×3 optical zoom) is retrieved along with the non-zoomed remaining field of view. Obviously, the 6×6 spatial blocks seen in the reconstructed image in Fig. 8 can be removed by proper image processing and enhancement, which was not applied to the obtained image.

Fig. 8. The result obtained after the digital decoding. One may see the full field of view and the zoomed, highly resolved barcode pattern at the center of the field of view.

The approach was tested with other input images as well and produced similar outcomes. It is important to note that in this manuscript we show only a proof of principle for the suggested approach. The noise-reduction algorithm and the investigation of this concept under non-coherent and polychromatic illumination are left for future work.

4. Conclusions

In this paper we have presented a new approach for obtaining optical zooming in which no moving elements are required and a single lens with a special coding mask is used. The main advantage of the proposed approach is that the zoomed central field of view and the non-zoomed full field of view are obtained simultaneously. This effectively yields a number of spatially resolved points that exceeds the number of pixels in the detector array. The described outcome is obtained by attaching a special coding grating to the imaging lens and applying an appropriate digital post-processing algorithm.

References and links

1. R. B. Johnson and C. Feng, "Mechanically compensated zoom lenses with a single moving element," Appl. Opt. 31, 2274–2280 (1992).

2. E. C. Tam, "Smart electro-optical zoom lens," Opt. Lett. 17, 369–371 (1992).

3. H. Tsuchida, N. Aoki, K. Hyakumura, and K. Yamamoto, "Design of zoom lens systems that use gradient-index materials," Appl. Opt. 31, 2279–2286 (1992).

4. R. J. Pegis and W. G. Peck, "First-order design theory for linearly compensated zoom systems," J. Opt. Soc. Am. 52, 905–911 (1962).

5. G. Wooters and E. W. Silvertooth, "Optically compensated zoom lens," J. Opt. Soc. Am. 55, 347–355 (1965).

6. T. ChunKan, "Design of zoom system by the varifocal differential equation. I," Appl. Opt. 31, 2265–2273 (1992).

7. Y. Ito, "Complicated pin-and-slot mechanism for a zoom lens," Appl. Opt. 18, 750–758 (1979).

8. D. R. Shafer, "Zoom null lens," Appl. Opt. 18, 3863–3870 (1979).

9. K. Tanaka, "Paraxial analysis of mechanically compensated zoom lenses. 1: Four-component type," Appl. Opt. 21, 2174–2181 (1982).

10. D. Y. Zhang, N. Justis, and Y. H. Lo, "Integrated fluidic adaptive zoom lens," Opt. Lett. 29, 2855–2857 (2004).

11. A. Walther, "Zoom lens and computer algebra," J. Opt. Soc. Am. A 16, 198–204 (1999).

12. M. N. Akram and M. H. Asghar, "Step-zoom dual-field-of-view infrared telescope," Appl. Opt. 42, 2312–2316 (2003).

13. A. Walther, "Angle eikonals for a perfect zoom system," J. Opt. Soc. Am. A 18, 1968–1971 (2001).

14. J. Solomon, Z. Zalevsky, and D. Mendlovic, "Geometrical super resolution by code division multiplexing," Appl. Opt. 44, 32–40 (2005).

15. H. Dammann and E. Klotz, "Coherent optical generation and inspection of two-dimensional periodic structures," Opt. Acta 24, 505–515 (1977).



