Optica Publishing Group

Fractional Fourier single-pixel imaging

Open Access

Abstract

Single-pixel imaging technology has a number of advantages over conventional imaging approaches, such as a wide operating wavelength region, compressive sampling, low light radiation dose and insensitivity to distortion. Here, we report on a novel single-pixel imaging technique based on the fractional Fourier transform (FRFT), which captures images by acquiring the fractional-domain information of targets. With structured illumination composed of two-dimensional FRFT basis patterns, the FRFT coefficients of the object can be measured by single-pixel detection. The object image is then obtained by performing the inverse FRFT on the measurements. Furthermore, the proposed method can reconstruct the object image from sub-Nyquist measurements because image data are sparse in the fractional domain. In comparison with traditional single-pixel imaging, it provides a new degree of freedom, namely the fractional order, and therefore offers more flexibility and new features for practical applications. In experiments, the proposed method has been applied to edge detection of objects, with an adjustable parameter serving as a new degree of freedom.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Single-pixel imaging [1] is an emerging technology developed from computational ghost imaging [2–6] and the single-pixel camera [7,8]. Employing the principle of correlation measurement, it can reconstruct an image of the object from the spatial information of the illumination and the total light intensity transmitted or reflected by the object. At present, multi-pixel array detectors, including digital CCD/CMOS cameras based on photosensitive materials such as silicon and InGaAs, are widely used for conventional optical imaging. Although these array detectors have high spatial resolution, they are generally only responsive to visible light, near-infrared light and nearby optical bands. In other words, limited by the manufacturing techniques for array detectors, conventional optical imaging cannot work in some spectral bands of practical value, such as terahertz and X-ray. Interestingly, single-pixel imaging technology enables the capture of an object image with single-point detection, and therefore provides an ideal solution for imaging in optical bands where conventional imaging approaches are unavailable, including terahertz [9] and X-ray [10–13]. In addition, it has a number of advantages over conventional imaging approaches, such as sub-Nyquist sampling [7,14], low light exposure dose [13,15], and insensitivity to turbid [16,17], scattering [18,19], nonlinear [20,21] and dispersive media [22]. As a result, it has been one of the research hotspots in recent years and has many potential applications in the fields of biomedical imaging [13,23,24], communication [22,25,26], environmental monitoring [27], scientific research [28,29] and three-dimensional imaging [30,31].

As an improved version of single-pixel imaging, the Fourier single-pixel technique makes use of Fourier analysis in both data acquisition and image reconstruction, and therefore offers advantages such as high imaging quality and high efficiency [32–38]. Using a series of Fourier basis patterns to modulate the spatial distribution of the illumination light field, one can measure the Fourier spectrum of the object image point by point with a single-pixel detector. The object image can then be recovered by performing a two-dimensional inverse Fourier transform on the measured Fourier spectrum. So far, the Fourier single-pixel technique has developed from two-dimensional to three-dimensional imaging [39–41], from monochromatic to color imaging [42,43], from static to dynamic imaging [44–46], from macroscopic to microscopic imaging [47,48], and from single-modal to multi-modal imaging [39,49]. However, as an extension of the Fourier transform, the fractional Fourier transform (FRFT) has not been applied to single-pixel imaging to date.

Mathematically, the Fourier transform maps a time-domain signal into the frequency domain, and the inverse Fourier transform maps a frequency-domain signal back into the time domain. The FRFT is a generalization of the Fourier transform: it maps a signal (in either the time or the frequency domain) into a domain intermediate between time and frequency. In other words, it is a rotation in the time-frequency plane, and the rotation angle is determined by the fractional order. Fractional Fourier domains therefore form a continuum of domains making arbitrary angles with the time and frequency axes on the time-frequency plane. In the field of imaging and image processing, the FRFT provides significant advantages that other methods, including the Fourier and Hadamard transforms, cannot offer, such as the fractional order as a new degree of freedom and high efficiency for 2D chirp or chirp-like signal analysis and processing [50]. As a result, it offers new perspectives for imaging applications and has been a powerful tool for image filtering, watermarking, encryption, deblurring, defocusing and pattern recognition [35,38,51–59].

In this manuscript, we propose and demonstrate the FRFT single-pixel technique, which provides more degrees of freedom than Fourier single-pixel imaging. In the proof-of-principle experiment, the target image is successfully obtained by the proposed method under different fractional orders. With 4N different illumination patterns, the proposed method and the traditional Fourier single-pixel technique can both extract the N-pixel spatial information of the object, where N is the total pixel number of the illumination patterns. Using the fast algorithm of the FRFT [60], their reconstruction times are at the same level. Because image data is usually sparse in the fractional domain, a considerable number of measurements can be saved by compressive sampling in FRFT single-pixel imaging. A modified version of FRFT single-pixel imaging realizes edge extraction of objects, with an adjustable parameter that makes the proposed method more powerful and flexible. The energy of the fractional spectrum of chirp/chirp-like patterns concentrates in a single point or a few points. Therefore, the proposed technique offers higher sampling efficiency than other methods in chirp/chirp-like pattern acquisition. Furthermore, it can be used for the analysis of chirp-like patterns, because the physical parameters of chirp-like patterns are easily extracted in the fractional domain [61]. As a result, the proposed technique has new features and potential advantages in applications involving 2D chirp or chirp-like signal acquisition and analysis.

2. Principle

Mathematically, one-dimensional continuous FRFT is defined as

$${F_p}(u) = A\int_{ - \infty }^\infty {\textrm{exp} [{\textrm{j}\mathrm{\pi }({u^2}\cot\alpha - 2ux\csc\alpha + {x^2}\cot\alpha )} ]} f(x)\textrm{d}x,$$
where x and u denote the coordinates in spatial domain and fractional domain respectively, and j the imaginary unit. Besides, α=pπ/2≠nπ, where n is an integer. Fp(u) represents the p-order FRFT of the function f (x). A is given by
$$A = \frac{{\textrm{exp} [{ - \textrm{j}\mathrm{\pi }\,\textrm{sgn}(\sin \alpha )/4 + \textrm{j}\alpha /2} ]}}{{\sqrt {|{\sin \alpha } |} }}.$$

The spatial light modulator used in a single-pixel imaging system, such as a DMD or a liquid crystal modulator, can only provide discrete modulation of the spatial light field. Therefore, let us derive the mathematical expressions of the discrete FRFT and inverse FRFT from the continuous FRFT (see Appendix). The discrete FRFT and inverse FRFT can then be applied to single-pixel imaging and image reconstruction.
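As a concrete illustration, the discrete FRFT and its inverse derived in the Appendix can be sketched numerically. The following is a minimal numpy sketch (ours, not the authors' code; the function name `frft_matrix` is our own), implementing Eqs. (11) and (15)–(17):

```python
import numpy as np

def frft_matrix(N, p):
    """N x N discrete FRFT matrix following the Appendix (Eqs. (11), (15), (16))."""
    alpha = p * np.pi / 2
    A = np.exp(1j * (p - 1) * np.pi / 4) / np.sqrt(np.abs(np.sin(alpha)))  # Eq. (10), 0<p<2
    n = np.arange(1, N + 1) - N / 2      # spatial parameter n_r = -N/2+1, ..., N/2
    m = 2 * np.arange(1, N + 1) - N      # fractional parameter m_s = -N+2, -N+4, ..., N
    cot, csc = 1 / np.tan(alpha), 1 / np.sin(alpha)
    phase = (cot * m[:, None] ** 2 / (2 * N)
             - csc * m[:, None] * n[None, :] / N
             + cot * n[None, :] ** 2 / (2 * N))
    return A / np.sqrt(2 * N) * np.exp(1j * np.pi * phase)

N, p = 64, 0.8
M = frft_matrix(N, p)
f = np.random.default_rng(0).random(N)   # arbitrary real test signal
F = M @ f                                # forward discrete FRFT, Eq. (13)
f_rec = np.linalg.solve(M, F)            # inverse FRFT, Eq. (17)
```

Since the discrete FRFT matrix is generally not unitary, the inverse is obtained by solving the linear system of Eq. (17) rather than by taking the conjugate transpose.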

3. Experiment

3.1 Experimental setup and method

The experimental setup of FRFT single-pixel imaging is shown in Fig. 1. It consists of a commercial LCD-based digital projector (XQ-13, XianQi Technology Co., Ltd.) as the illumination device, a silicon photodiode power sensor (S130C, Thorlabs Inc.) as the single-pixel detection device, and a panda toy as the target object to be imaged. The data of the illumination patterns are generated by a computer and sent to the projector via a USB cable. By adjusting the focal length of the projector lens, the patterns are projected sharply onto the object plane with an illuminated area of 15×15 cm². The projector switches the illumination patterns every 0.25 s and the power sensor samples synchronously. The computer records the power values measured by the sensor, i.e. the spectral information of the object in the fractional domain, and then reconstructs the object image.


Fig. 1. Experimental set-up of FRFT single-pixel imaging.


In experiment, the normalized brightness level of the pixel at r-th column, v-th row of illumination patterns for the projection is given by

$${P_{{\phi _0},s,w}}({r,v} )= {{\{{1 + \cos [{\phi_x}(s,r) + {\phi_y}(w,v) + {\phi_0}]} \}} / 2},$$
where s, w = 1, 2, …, N are fractional-domain indices and ϕ0 is the initial phase. When N=128 and p=0.5, some typical illumination patterns for FRFT single-pixel imaging, i.e. 2D FRFT basis patterns, are shown in Fig. 2.
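For reference, these basis patterns can be generated numerically from Eq. (3), with the phases ϕx and ϕy given by Eq. (19) of the Appendix. A minimal numpy sketch (ours; the helper names are our own):

```python
import numpy as np

def frft_phases(N, p):
    """Phase table phi(index, position) of Eq. (19); phi_x and phi_y share this form."""
    alpha = p * np.pi / 2
    cot, csc = 1 / np.tan(alpha), 1 / np.sin(alpha)
    n = np.arange(1, N + 1) - N / 2      # n_r (or n_v)
    m = 2 * np.arange(1, N + 1) - N      # m_s (or m_w)
    return np.pi * (cot * m[:, None] ** 2 / (2 * N)
                    - csc * m[:, None] * n[None, :] / N
                    + cot * n[None, :] ** 2 / (2 * N))

def pattern(N, p, s, w, phi0=0.0):
    """Illumination pattern of Eq. (3); rows are v, columns are r (1-based s, w)."""
    phi = frft_phases(N, p)
    return (1 + np.cos(phi[s - 1][None, :] + phi[w - 1][:, None] + phi0)) / 2

P = pattern(128, 0.5, 65, 65)            # cf. Fig. 2(a)
```

Each pattern is real and normalized to [0, 1], as required by an intensity-only projector.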


Fig. 2. When N=128 and p=0.5, typical illumination patterns for FRFT single-pixel imaging with (a) s=65, w=65, (b) s=65, w=128, (c) s=128, w=65, (d) s=128, w=128.


The photocurrent of the power sensor is proportional to the total light power reflected from the object:

$${I_{{\phi _0}}}(s,w) = {i_0}\sum\limits_{v = 1}^N {\sum\limits_{r = 1}^N {{P_{{\phi _0},s,w}}({r,v} )\cdot f({x_r},{y_v})} } ,$$
where i0 is a constant which depends on the responsivity, size and location of the power sensor, and f (xr, yv) represents the reflection function of the object. According to Eq. (18), we can obtain the FRFT of f (xr, yv), up to a constant factor, from the measurements I0(s, w), Iπ/2(s, w), Iπ(s, w) and I3π/2(s, w):
$${F^{\prime}_p}(u{x_s},u{y_w}) = {I_0}(s,w) - {I_\mathrm{\pi }}(s,w) + \textrm{j[}{I_{{{\textrm{3}\mathrm{\pi }} / 2}}}(s,w) - {I_{{\mathrm{\pi } / 2}}}(s,w)] = \frac{{2N{i_0}}}{{A_\alpha ^2}}{F_p}(u{x_s},u{y_w}).$$
where uxs and uyw are fractional-domain coordinates.

The illumination patterns for projection must be real and non-negative, and therefore cannot directly take the complex form exp(jϕ). Accordingly, the differential measurement technique has to be applied to obtain the fractional spectrum. By performing the reconstruction algorithm of Eq. (20) on the measured F′p(uxs, uyw), the object image can be easily recovered.
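The four-phase differential scheme of Eqs. (4)–(5) can be checked in simulation. The following numpy sketch (ours, with hypothetical parameter choices) simulates the four single-pixel measurements for one fractional-domain point and verifies that their combination reproduces the FRFT coefficient of Eq. (18) up to the constant 2Ni0/Aα²:

```python
import numpy as np

def frft_phases(N, p):
    """Phase table phi(index, position) of Eq. (19)."""
    alpha = p * np.pi / 2
    cot, csc = 1 / np.tan(alpha), 1 / np.sin(alpha)
    n = np.arange(1, N + 1) - N / 2
    m = 2 * np.arange(1, N + 1) - N
    return np.pi * (cot * m[:, None] ** 2 / (2 * N)
                    - csc * m[:, None] * n[None, :] / N
                    + cot * n[None, :] ** 2 / (2 * N))

N, p, i0 = 32, 0.8, 1.0
alpha = p * np.pi / 2
A = np.exp(1j * (p - 1) * np.pi / 4) / np.sqrt(np.abs(np.sin(alpha)))  # Eq. (10)
phi = frft_phases(N, p)
obj = np.random.default_rng(1).random((N, N))  # object reflectivity f(x_r, y_v), axes (r, v)
s, w = 5, 9                                    # one fractional-domain point (1-based)

def measure(phi0):
    """Single-pixel value of Eq. (4) for initial phase phi0."""
    P = (1 + np.cos(phi[s - 1][:, None] + phi[w - 1][None, :] + phi0)) / 2
    return i0 * np.sum(P * obj)

# Differential combination of Eq. (5)
Fprime = (measure(0) - measure(np.pi)
          + 1j * (measure(3 * np.pi / 2) - measure(np.pi / 2)))

# Ground truth: 2D discrete FRFT coefficient of Eq. (18)
K = A / np.sqrt(2 * N) * np.exp(1j * phi)      # FRFT matrix, Eq. (16)
F_true = (K @ obj @ K.T)[s - 1, w - 1]
expected = 2 * N * i0 / A ** 2 * F_true        # scale factor of Eq. (5)
```

The four intensity-only measurements combine into one complex fractional coefficient, which is why each FRFT coefficient costs four patterns.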

3.2 Experimental results of FRFT single-pixel imaging

According to the principle described above, 2D FRFT basis patterns with a resolution of 128×128 pixels are successively projected onto the object plane. The power sensor sequentially measures the corresponding optical power reflected from the object. Each FRFT coefficient corresponds to four illumination patterns/power measurements.

Under different fractional orders, the experimental results for the fractional spectrum of the object image are shown in Fig. 3(a)-(j). When p is small, the distribution of the measured fractional spectrum is quite close to the object image. As p increases, the energy concentrates in the components with low fractional-domain coordinates. When p=1, the measurement degenerates into the Fourier spectrum of the object image. When p>1, the spectral energy disperses as p increases further. Furthermore, the p-order and (2−p)-order fractional spectra are always centrosymmetric with respect to each other. These phenomena are completely consistent with the properties of the FRFT [50], verifying the validity of the proposed method.


Fig. 3. Amplitudes of the fractional spectrum measured by the proposed method under different fractional orders. The origin of each spectrum is located at the center. Note that the results in (a), (b) and (h)-(j) are calculated from the experimental data shown in (c)-(g) by Eqs. (8) and (12).


By performing the p-order inverse FRFT on the corresponding measurement of the fractional spectrum, the object images can be recovered, as shown in Fig. 4(a)-(j). The FRFT single-pixel technique captures the object image successfully regardless of the fractional order. The data post-processing time for each reconstructed image is ∼0.3 ms using a laptop with a six-core CPU (i5-8400, Intel Corp.).


Fig. 4. Object images recovered by FRFT single-pixel imaging under different fractional orders. The number of illumination patterns/measurements used for every given result is 4×128×128 = 65536.


3.3 FRFT single-pixel imaging using sub-Nyquist sampling

As discussed in the Appendix, the fractional spectrum is measured experimentally only when 0.5≤p≤1.5. For p<0.5 or p>1.5, the fractional spectrum can be calculated from these measurements by Eqs. (8) and (12). Therefore, we only consider the case of 0.5≤p≤1.5 here. Obviously, the fractional spectrum data is quite sparse when 0.5≤p≤1.5, as shown in Fig. 3(c)-(g). As a result, it is unnecessary to acquire all of the fractional spectrum for reconstruction; in other words, the FRFT acts as a sparse representation for image data compression. Thus we introduce sub-Nyquist sampling into FRFT single-pixel imaging to improve the efficiency of data acquisition.

In experiment, even if only the spectral data at the central part of the fractional domain is measured, as shown in Fig. 5(a)-(e), the corresponding object images can still be recovered, as shown in Fig. 5(f)-(j). The closer p is to 1, the sparser the fractional spectrum of the image is. Therefore, the compression ratio (CR) can be as low as 37%, 28%, 19%, 28% and 37% when p is 0.6, 0.8, 1, 1.2 and 1.4 respectively, where CR is defined as the ratio of the number of measured fractional-domain points to the number of pixels of the reconstructed image. In addition, the mean-squared errors (MSE) of Fig. 5(f)-(j) are 144, 120, 87, 106 and 134 respectively. The experimental relationship between CR and image signal-to-noise ratio (SNR) is shown in Fig. 5(k). High-frequency noise is removed by the low-pass filtering inherent in compressive sampling, so there is roughly a negative correlation between image quality and CR. When p is close to 0.5 or 1.5, the fractional spectrum is relatively wide; in this case, both the signal and the noise of the image are reduced by compressive sampling if CR is small. This is why the SNR improvement for p=0.8 and 1.2 is always more significant than that for p=0.6 and 1.4 when CR≤50%.
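The sub-Nyquist strategy can be illustrated numerically: keep only a central window of fractional-domain coefficients, zero the rest, and reconstruct with Eq. (20). A numpy sketch (ours; the smooth Gaussian test object and the 25% window are our own choices, not the paper's experimental settings):

```python
import numpy as np

def frft_matrix(N, p):
    """Discrete FRFT matrix per the Appendix (Eqs. (11), (15), (16))."""
    alpha = p * np.pi / 2
    A = np.exp(1j * (p - 1) * np.pi / 4) / np.sqrt(np.abs(np.sin(alpha)))
    n = np.arange(1, N + 1) - N / 2
    m = 2 * np.arange(1, N + 1) - N
    cot, csc = 1 / np.tan(alpha), 1 / np.sin(alpha)
    phase = (cot * m[:, None] ** 2 / (2 * N)
             - csc * m[:, None] * n[None, :] / N
             + cot * n[None, :] ** 2 / (2 * N))
    return A / np.sqrt(2 * N) * np.exp(1j * np.pi * phase)

N, p = 32, 0.8
M = frft_matrix(N, p)
x = np.arange(N) - N / 2
obj = np.exp(-(x[:, None] ** 2 + x[None, :] ** 2) / (2 * 4.0 ** 2))  # smooth test object
F = M @ obj @ M.T                          # full 2D fractional spectrum

# Sub-Nyquist sampling: keep only the central half of indices in each dimension (CR = 25%)
mask = np.zeros((N, N), bool)
lo, hi = N // 4, 3 * N // 4
mask[lo:hi, lo:hi] = True
F_sub = np.where(mask, F, 0)

Minv = np.linalg.inv(M)
rec = (Minv @ (Minv @ F_sub.T).T).real     # reconstruction, Eq. (20)
err = np.linalg.norm(rec - obj) / np.linalg.norm(obj)
```

Because the smooth object's fractional spectrum concentrates near the center for p near 1, discarding three quarters of the coefficients changes the reconstruction only slightly.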


Fig. 5. (a)-(e) Amplitudes of fractional spectrum obtained by sub-Nyquist sampling under different fractional orders. (f)-(j) Object images recovered by FRFT single-pixel imaging using sub-Nyquist sampling under different fractional orders. (k) The relationship of CR and image SNR. When CR=37%, 28% and 19%, the numbers of illumination patterns/measurements are 24248, 18350 and 12452 respectively.


3.4 FRFT single-pixel imaging for edge extraction

In the process of image reconstruction for FRFT single-pixel imaging, a slight change in the order of the inverse FRFT leads to a slight difference at the edges of the reconstructed image. Therefore, this property can be applied to edge extraction, as described in Eq. (6).

$$edge({x_r},{y_v}) = \left\{ {\begin{array}{cc} {|{{F_{ - p}}[{{{F^{\prime}}_p}({u{x_s},u{y_w}} )} ]- {F_{ - ({p + \Delta p} )}}[{{{F^{\prime}}_p}({u{x_s},u{y_w}} )} ]} |,}&{0.5 \le p < 1,}\\ {|{{F_{ - p}}[{{{F^{\prime}}_p}({u{x_s},u{y_w}} )} ]- {F_{ - ({p - \Delta p} )}}[{{{F^{\prime}}_p}({u{x_s},u{y_w}} )} ]} |,}&{1 < p \le 1.5,} \end{array}} \right.$$
where F′p(uxs, uyw) denotes the measured fractional spectrum in p-order FRFT single-pixel imaging, F−p denotes the p-order inverse FRFT, and Δp is a small increment.

Mathematically, Eq. (6) can be rewritten as Eq. (7), which shows that the edge extraction effect actually depends only on Δp. In experiment, with different values of p and Δp, the object edges are successfully extracted at different levels by the modified FRFT single-pixel imaging, as shown in Fig. 6. Because only Δp determines the difference at the edges of the reconstructed image, the experimental edge extraction results are insensitive to p but sensitive to Δp, as indicated by Eq. (7).

$$edge({x_r},{y_v}) \propto \left\{ {\begin{array}{cc} {|{f({{x_r},{y_v}} )- {F_{ - \Delta p}}[{f({{x_r},{y_v}} )} ]} |,}&{0.5 \le p < 1,}\\ {|{f({{x_r},{y_v}} )- {F_{\Delta p}}[{f({{x_r},{y_v}} )} ]} |,}&{1 < p \le 1.5.} \end{array}} \right.$$

When Δp is relatively small, e.g. 0.02, the major features of the object are well detected with a clear appearance, but some details are not distinguished, as shown in Fig. 6(a)-(d). When Δp is set to 0.07 or 0.12, the texture and edge details of the object are more distinct and brighter, as shown in Fig. 6(e)-(l). When Δp is too large, e.g. 0.17, the texture and edge details of the object are preserved, but the edge extraction results become blurred owing to thick edge lines, as shown in Fig. 6(m)-(p). Therefore, as an additional degree of freedom for edge extraction, the adjustable parameter Δp makes the proposed method more flexible and able to meet the needs of many types of applications.
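For illustration, Eq. (6) can be simulated directly: reconstruct once with the matched inverse order and once with an order offset by Δp, then take the modulus of the difference. A numpy sketch (ours; the binary square object and parameter values are our own choices):

```python
import numpy as np

def frft_matrix(N, p):
    """Discrete FRFT matrix per the Appendix (Eqs. (11), (15), (16))."""
    alpha = p * np.pi / 2
    A = np.exp(1j * (p - 1) * np.pi / 4) / np.sqrt(np.abs(np.sin(alpha)))
    n = np.arange(1, N + 1) - N / 2
    m = 2 * np.arange(1, N + 1) - N
    cot, csc = 1 / np.tan(alpha), 1 / np.sin(alpha)
    phase = (cot * m[:, None] ** 2 / (2 * N)
             - csc * m[:, None] * n[None, :] / N
             + cot * n[None, :] ** 2 / (2 * N))
    return A / np.sqrt(2 * N) * np.exp(1j * np.pi * phase)

N, p, dp = 32, 0.8, 0.07
obj = np.zeros((N, N))
obj[8:24, 8:24] = 1.0                            # binary square object

Mp = frft_matrix(N, p)
F = Mp @ obj @ Mp.T                              # stands in for the measured spectrum F'_p

def inv_frft2(q, F):
    """q-order inverse 2D FRFT, i.e. F_{-q}[.], via the matrix inverse."""
    Minv = np.linalg.inv(frft_matrix(N, q))
    return Minv @ F @ Minv.T

# Eq. (6), branch 0.5 <= p < 1
edge = np.abs(inv_frft2(p, F) - inv_frft2(p + dp, F))
```

The matched-order reconstruction recovers the object exactly, while the offset order slightly "defocuses" it, so the difference map lights up along intensity discontinuities.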


Fig. 6. The edge extraction results of FRFT single-pixel imaging under different parameters. For the edge extraction results, the compression ratio is 37%, 28%, 28% and 37% respectively, when p is 0.6, 0.8, 1.2 and 1.4. When CR=37% and 28%, the numbers of illumination patterns/measurements are 24248 and 18350 respectively. The data post-processing time for each result is ∼1 ms using a laptop with six-core CPU (i5-8400, Intel corp.).


4. Discussion and conclusion

Mathematically, a single-frequency signal is sparser in the Fourier domain than in a fractional domain. Thus Fourier single-pixel imaging can achieve higher measurement efficiency than the proposed technique if there are few spatial frequency components in the spectrum of the object image. However, as an extension of the classical Fourier transform, the FRFT provides special properties and inherent capabilities that the Fourier transform does not, such as high efficiency for 2D chirp or chirp-like signal acquisition and analysis.

For example, imaging chirp/chirp-like patterns is important for applications such as optical surface quality testing based on interferometry [62–65], fingerprint matching [53,66] and chirp-based watermark recognition for images and videos [54,67,68]. In some cases, the 2D reference chirp/chirp-like patterns, i.e. the Newton's-rings fringe pattern, the template fingerprint and the chirp watermark, are actually known. Therefore, these 2D chirp patterns can be represented by a single point or a few points with known locations in the fractional domain. The numerical results for the fractional spectrum, Fourier spectrum and Hadamard spectrum [33] of a chirp pattern are presented to compare the performance of the different single-pixel methods. The chirp pattern with a resolution of 128×128 pixels, shown in Fig. 7(a), is used as the object. The energy of the fractional spectrum concentrates in a single pixel, as shown in Fig. 7(b), because the basis patterns of the FRFT are a set of chirp patterns with different initial phases and positions. In contrast, the energies of the Fourier spectrum and the Hadamard spectrum disperse over a large area, as shown in Fig. 7(c) and (d). The CR for FRFT, Fourier and Hadamard single-pixel imaging is 0.006%, 67% and 61% respectively. Therefore, the proposed technique offers a far smaller CR than the other two methods in chirp pattern acquisition. Furthermore, it can be used for the analysis of chirp-like patterns, because the physical parameters of chirp-like patterns are easily extracted in the fractional domain [61]. Although the FRFT single-pixel method is demonstrated with a non-coherent configuration in the proof-of-principle experiment, it is a general technique and could also be applied to coherent imaging, such as laser interferometry for metrology applications [62–65]. For interference patterns obtained by a laser interferometer system, the proposed technique has a potentially unique advantage in the suppression of chirp noise caused by environmental factors such as dust, because chirp noise can be removed conveniently by fractional-domain filtering [61,69,70], owing to the energy concentration of chirp patterns in the fractional domain.
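The concentration argument can be reproduced numerically on a small scale: for a chirp that matches one FRFT basis pattern, the fractional spectrum piles its energy into essentially one coefficient, while the ordinary Fourier spectrum spreads it widely. A numpy sketch (ours; pattern size and chirp location are our own choices, and `np.fft.fft2` stands in for Fourier single-pixel measurement):

```python
import numpy as np

def frft_phases(N, p):
    """Phase table phi(index, position) of Eq. (19)."""
    alpha = p * np.pi / 2
    cot, csc = 1 / np.tan(alpha), 1 / np.sin(alpha)
    n = np.arange(1, N + 1) - N / 2
    m = 2 * np.arange(1, N + 1) - N
    return np.pi * (cot * m[:, None] ** 2 / (2 * N)
                    - csc * m[:, None] * n[None, :] / N
                    + cot * n[None, :] ** 2 / (2 * N))

N, p = 64, 0.5
phi = frft_phases(N, p)
s0, w0 = 40, 40                                   # fractional-domain location of the chirp
chirp = np.cos(phi[s0 - 1][:, None] + phi[w0 - 1][None, :])  # real 2D chirp, axes (r, v)

alpha = p * np.pi / 2
A = np.exp(1j * (p - 1) * np.pi / 4) / np.sqrt(np.abs(np.sin(alpha)))
M = A / np.sqrt(2 * N) * np.exp(1j * phi)         # discrete FRFT matrix, Eq. (16)

F_frft = M @ chirp @ M.T                          # fractional spectrum: one dominant coefficient
F_fft = np.fft.fft2(chirp)                        # Fourier spectrum: spread over many coefficients

def top1_fraction(F):
    """Fraction of total spectral energy carried by the single largest coefficient."""
    E = np.abs(F) ** 2
    return E.max() / E.sum()

frac_frft = top1_fraction(F_frft)
frac_fft = top1_fraction(F_fft)
```

On this example the fractional spectrum is far more concentrated than the Fourier one, mirroring the CR comparison of Fig. 7.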


Fig. 7. The numerical comparison of FRFT single-pixel imaging, Fourier single-pixel imaging and Hadamard single-pixel imaging when 2D chirp pattern is used as the object. (a) 2D chirp pattern. (b) The fractional spectrum obtained in FRFT single-pixel imaging with p=0.5. (c) The Fourier spectrum obtained in Fourier single-pixel imaging. (d) The Hadamard spectrum obtained in Hadamard single-pixel imaging.


Moreover, compared with Fourier single-pixel imaging, arbitrary-order FRFT single-pixel imaging has new features and potential advantages in specific applications such as image deblurring, image denoising, optical encryption, image hashing and pattern recognition [51,52,55–59,71–74]. In addition, just like the traditional single-pixel technique, it can be further extended to dynamic, full-color and three-dimensional imaging [1,28,30,36,39,41,75]. Many strategies, such as high-speed illumination using a binarization algorithm and a DMD [39], three different sampling strategies [33] and simplex coding [76], could be used to increase the imaging speed and reduce the number of measurements of the proposed method.

We have proposed and demonstrated a novel single-pixel imaging method using the FRFT. Compared with traditional single-pixel imaging based on the Fourier transform, the proposed method provides an additional degree of freedom, namely the fractional order, to meet various application requirements flexibly. The experimental results show that the image data measured by the proposed method is quite sparse; therefore, sub-Nyquist sampling has been introduced into the proposed method for efficient compressive imaging. Many edge extraction methods, such as linear filtering in the Fourier domain [77], have been presented, but not all of them offer an adjustable edge extraction effect. The proposed FRFT single-pixel imaging realizes edge extraction of objects with an adjustable parameter, which makes the method more flexible. We believe that this study will promote the development of the next generation of high-performance single-pixel imaging technology, and provide a powerful tool for many important applications in the biomedical, military and industrial fields, such as optical encryption, edge extraction and target recognition.

Appendix

According to the periodic and symmetric properties of FRFT [50], we have

$$\begin{aligned} {F_{p + 4n}}(u) &= {F_p}(u),\\ {F_{p + 2}}(u) &= {F_p}( - u). \end{aligned}$$

Therefore, we let 0<p<2 and 0<α<π for simplicity. Equation (1) could be rewritten as

$${F_p}(u) = {A_\alpha }\textrm{exp} (\textrm{j}\mathrm{\pi }{u^2}\cot\alpha )\int_{ - \infty }^\infty {\textrm{exp} ( - 2\textrm{j}\mathrm{\pi }ux\csc\alpha + \textrm{j}\mathrm{\pi }{x^2}\cot\alpha )f(x)} \textrm{d}x,$$
where
$${A_\alpha } = {{\textrm{exp} \left[ {\textrm{j}(p - 1)\frac{\mathrm{\pi }}{4}} \right]} / {\sqrt {|{\sin \alpha } |} }}.$$

According to the digital computation algorithm of FRFT [60], the discrete version of Eq. (9) is expressed as

$${F_p}({u_s}) = \frac{{{A_\alpha }}}{{\sqrt {2N} }}\textrm{exp} [\frac{{\textrm{j}\mathrm{\pi }(\cot\alpha )m_s^2}}{{2N}}]\sum\limits_{r = 1}^N {\textrm{exp} [ - \frac{{\textrm{j}\mathrm{\pi }(\csc\alpha ){m_s}{n_r}}}{N} + \frac{{\textrm{j}\mathrm{\pi }(\cot\alpha )n_r^2}}{{2N}}]} f({x_r}),$$
where xr and us are sampling points in the spatial and fractional domains, and r and s are the spatial and fractional-domain indices respectively. The sampling point number is N=2Δx², where Δx is an integer denoting the normalized interval length of the spatial and frequency domains [60]. The spatial parameter nr = −N/2+1, −N/2+2, …, N/2 for r = 1, 2, …, N. The fractional-domain parameter ms = −N+2, −N+4, …, N for s = 1, 2, …, N. With p=1, Eq. (11) degenerates into the discrete Fourier transform. Note that Eq. (11) is derived for 0.5≤p≤1.5 [60]. Using the index additivity property of the FRFT shown in Eq. (12), one can extend the range of p from [0.5, 1.5] to [0, 2]. For example, let q=1 if 1.5≤p≤2, so that 0.5≤p−q≤1; then Fp can be calculated as the Fourier transform of Fp−q(u). Similarly, let q=−1 if 0≤p≤0.5; then Fp can be calculated as the inverse Fourier transform of Fp−q(u).
$${F_p}(u) = {F_q}[{F_{p - q}}](u).$$

For convenience of calculation, Eq. (11) can be written in the matrix form:

$${F_{N \times 1}} = FRF{T_{N \times N}} \cdot {f_{N \times 1}},$$
where FN×1 and fN×1 are the N×1 vectors of discretized Fp(us) and f (xr), given by
$$\begin{aligned} {F_{N \times 1}} &= {\left[ {\begin{array}{cccc} {{F_p}({u_1})}&{{F_p}({u_2})}& \cdots &{{F_p}({u_N})} \end{array}} \right]^\textrm{T}},\\ {f_{N \times 1}} &= {\left[ {\begin{array}{cccc} {f({x_1})}&{f({x_2})}& \cdots &{f({x_N})} \end{array}} \right]^\textrm{T}}. \end{aligned}$$

The N×N FRFT matrix is expressed as

$$FRF{T_{N \times N}} = \left[ {\begin{array}{cccc} {FRFT({1,1} )}&{FRFT({1,2} )}& \cdots &{FRFT({1,N} )}\\ {FRFT({2,1} )}&{FRFT({2,2} )}& \cdots &{FRFT({2,N} )}\\ \vdots & \vdots & \ddots & \vdots \\ {FRFT({N,1} )}&{FRFT({N,2} )}& \cdots &{FRFT({N,N} )} \end{array}} \right],$$
where the entries of the FRFT matrix are given by
$$FRFT({s,r} )= \frac{{{A_\alpha }}}{{\sqrt {2N} }}\textrm{exp} [\frac{{\textrm{j}\mathrm{\pi }(\cot\alpha )m_s^2}}{{2N}} - \frac{{\textrm{j}\mathrm{\pi }(\csc\alpha ){m_s}{n_r}}}{N} + \frac{{\textrm{j}\mathrm{\pi }(\cot\alpha )n_r^2}}{{2N}}].$$

Conversely, one can reconstruct fN×1 from FN×1 by the inverse FRFT:

$${f_{N \times 1}} = {(FRFT_{N \times N}^{})^{ - 1}} \cdot {F_{N \times 1}},$$
where (FRFTN×N)−1 is the inverse of the FRFT matrix.

In the two-dimensional case, discrete FRFT is further written as

$$\begin{aligned} {F_p}(u{x_s},u{y_w}) &= \frac{{A_\alpha ^2}}{{2N}}\sum\limits_{v = 1}^N {\sum\limits_{r = 1}^N {\textrm{exp} [\textrm{j}{\phi _x}(s,r) + \textrm{j}{\phi _y}(w,v)]} } f({x_r},{y_v})\\ &= \frac{{A_\alpha ^2}}{{2N}}\sum\limits_{v = 1}^N {\sum\limits_{r = 1}^N {\{ \cos [{\phi _x}(s,r) + {\phi _y}(w,v)] + \textrm{j}\sin [{\phi _x}(s,r) + {\phi _y}(w,v)]\} } } f({x_r},{y_v}), \end{aligned}$$
where ux and uy are fractional-domain coordinates corresponding to spatial coordinates x and y respectively. r and v are spatial indices, while s and w are fractional-domain indices. The phases ϕx(s, r) and ϕy(w, v) are expressed as
$$\begin{aligned} {\phi _x}(s,r) &= \frac{{\mathrm{\pi }(\cot\alpha )m_s^2}}{{2N}} - \frac{{\mathrm{\pi }(\csc\alpha ){m_s}{n_r}}}{N} + \frac{{\mathrm{\pi }(\cot\alpha )n_r^2}}{{2N}},\\ {\phi _y}(w,v) &= \frac{{\mathrm{\pi }(\cot\alpha )m_w^2}}{{2N}} - \frac{{\mathrm{\pi }(\csc\alpha ){m_w}{n_v}}}{N} + \frac{{\mathrm{\pi }(\cot\alpha )n_v^2}}{{2N}}. \end{aligned}$$

By performing the one-dimensional inverse FRFT along ux direction and uy direction in fractional domain, f (xr, yv) could be reconstructed from Fp(uxs, uyw). Thus the matrix form of reconstruction algorithm is

$${f_{N \times N}} = {(FRFT_{N \times N}^{})^{ - 1}} \cdot {[{(FRFT_{N \times N}^{})^{ - 1}} \cdot {({F_{N \times N}})^\textrm{T}}]^\textrm{T}},$$
where FN×N and fN×N are the N×N matrices of discretized Fp(uxs, uyw) and f (xr, yv). Alternatively, the fast algorithm of inverse FRFT could be used as the reconstruction algorithm to gain higher image reconstruction speed [60].
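The separable 2D pipeline above can be verified numerically: apply the FRFT matrix along both axes and undo it with the reconstruction of Eq. (20). A minimal numpy sketch (ours):

```python
import numpy as np

def frft_matrix(N, p):
    """Discrete FRFT matrix per Eqs. (11), (15), (16)."""
    alpha = p * np.pi / 2
    A = np.exp(1j * (p - 1) * np.pi / 4) / np.sqrt(np.abs(np.sin(alpha)))
    n = np.arange(1, N + 1) - N / 2
    m = 2 * np.arange(1, N + 1) - N
    cot, csc = 1 / np.tan(alpha), 1 / np.sin(alpha)
    phase = (cot * m[:, None] ** 2 / (2 * N)
             - csc * m[:, None] * n[None, :] / N
             + cot * n[None, :] ** 2 / (2 * N))
    return A / np.sqrt(2 * N) * np.exp(1j * np.pi * phase)

N, p = 32, 0.7
M = frft_matrix(N, p)
obj = np.random.default_rng(2).random((N, N))    # f(x_r, y_v), axes (r, v)

# Forward separable 2D FRFT: transform along r, then along v
F = M @ obj @ M.T

# Reconstruction, Eq. (20): f = M^{-1} (M^{-1} F^T)^T
Minv = np.linalg.inv(M)
rec = Minv @ (Minv @ F.T).T
```

The two one-dimensional inverse transforms recover the image to machine precision; in practice the fast FRFT algorithm [60] replaces the explicit matrix inverse for speed.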

For a better understanding of the theory, the notations used here and their descriptions are listed in Table 1.


Table 1. Nomenclature

Funding

National Natural Science Foundation of China (61905015); Open Research Fund Program of the State Key Laboratory of Low-Dimensional Quantum Physics (KF201908); Open Project of National Engineering Laboratory for Forensic Science (2020NELKFKT01).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. M. P. Edgar, G. M. Gibson, and M. J. Padgett, “Principles and prospects for single-pixel imaging,” Nat. Photonics 13(1), 13–20 (2019). [CrossRef]  

2. R. I. Khakimov, B. M. Henson, D. K. Shin, S. S. Hodgman, R. G. Dall, K. G. H. Baldwin, and A. G. Truscott, “Ghost imaging with atoms,” Nature 540(7631), 100–103 (2016). [CrossRef]  

3. B. I. Erkmen and J. H. Shapiro, “Ghost imaging: from quantum to classical to computational,” Adv. Opt. Photon. 2(4), 405–450 (2010). [CrossRef]  

4. P. Moreau, E. Toninelli, T. Gregory, and M. J. Padgett, “Ghost imaging using optical correlations,” Laser Photonics Rev. 12(1), 1700143 (2018). [CrossRef]  

5. S. Liansheng, W. Jiahao, T. Ailing, and A. Asundi, “Optical image hiding under framework of computational ghost imaging based on an expansion strategy,” Opt. Express 27(5), 7213–7225 (2019). [CrossRef]  

6. Z. Ye, H.-C. Liu, and J. Xiong, “Computational ghost imaging with spatiotemporal encoding pseudo-random binary patterns,” Opt. Express 28(21), 31163–31179 (2020). [CrossRef]  

7. M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Signal Process. Mag. 25(2), 83–91 (2008). [CrossRef]  

8. N. Radwell, K. J. Mitchell, G. M. Gibson, M. P. Edgar, R. Bowman, and M. J. Padgett, “Single-pixel infrared and visible microscope,” Optica 1(5), 285–289 (2014). [CrossRef]  

9. R. I. Stantchev, B. Sun, S. M. Hornett, P. A. Hobson, G. M. Gibson, M. J. Padgett, and E. Hendry, “Noninvasive, near-field terahertz imaging of hidden objects using a single-pixel detector,” Sci. Adv. 2(6), e1600190 (2016). [CrossRef]  

10. D. Pelliccia, A. Rack, M. Scheel, V. Cantelli, and D. M. Paganin, “Experimental X-Ray Ghost Imaging,” Phys. Rev. Lett. 117(11), 113902 (2016). [CrossRef]  

11. H. Yu, R. Lu, S. Han, H. Xie, G. Du, T. Xiao, and D. Zhu, “Fourier-Transform Ghost Imaging with Hard X Rays,” Phys. Rev. Lett. 117(11), 113901 (2016). [CrossRef]  

12. J. Cheng and S. Han, “Incoherent Coincidence Imaging and Its Applicability in X-ray Diffraction,” Phys. Rev. Lett. 92(9), 093903 (2004). [CrossRef]  

13. A. Zhang, Y. He, L. Wu, L. Chen, and B. Wang, “Tabletop x-ray ghost imaging with ultra-low radiation,” Optica 5(4), 374–377 (2018). [CrossRef]  

14. O. Katz, Y. Bromberg, and Y. Silberberg, “Compressive ghost imaging,” Appl. Phys. Lett. 95(13), 131110 (2009). [CrossRef]  

15. P. A. Morris, R. S. Aspden, J. E. C. Bell, R. W. Boyd, and M. J. Padgett, “Imaging with a small number of photons,” Nat. Commun. 6(1), 5913 (2015). [CrossRef]  

16. M. Bina, D. Magatti, M. Molteni, A. Gatti, and F. Ferri, “Backscattering Differential Ghost Imaging in Turbid Media,” Phys. Rev. Lett. 110(8), 083901 (2013). [CrossRef]  

17. R. Meyers, K. Deacon, and Y. Shih, “Positive-Negative Turbulence-Free Ghost Imaging Experiments,” Appl. Phys. Lett. 100(13), 131114 (2012). [CrossRef]  

18. Y.-K. Xu, W.-T. Liu, E.-F. Zhang, Q. Li, H.-Y. Dai, and P.-X. Chen, “Is ghost imaging intrinsically more powerful against scattering?” Opt. Express 23(26), 32993–33000 (2015). [CrossRef]  

19. X. Zhang, H. Yin, R. Li, J. Hong, S. Ai, W. Zhang, C. Wang, J. Hsieh, Q. Li, and P. Xue, “Adaptive ghost imaging,” Opt. Express 28(12), 17232–17240 (2020). [CrossRef]  

20. X. Zhang, H. Yin, R. Li, J. Hong, Q. Li, and P. Xue, “Ghost network analyzer,” New J. Phys. 22(1), 013040 (2020). [CrossRef]  

21. X. Zhang, H. Yin, R. Li, J. Hong, C. Wang, S. Ai, J. Hsieh, B. He, Z. Chen, Q. Li, and P. Xue, “Distortion-free frequency response measurements,” J. Phys. D Appl. Phys. 53(39), 39LT02 (2020). [CrossRef]  

22. P. Ryczkowski, M. Barbier, A. T. Friberg, J. M. Dudley, and G. Genty, “Ghost imaging in the time domain,” Nat. Photonics 10(3), 167–170 (2016). [CrossRef]  

23. C. G. Graff and E. Y. Sidky, “Compressive sensing in medical imaging,” Appl. Opt. 54(8), C23–C44 (2015). [CrossRef]  

24. S. Ota, R. Horisaki, Y. Kawamura, M. Ugawa, I. Sato, K. Hashimoto, R. Kamesawa, K. Setoyama, S. Yamaguchi, K. Fujiu, K. Waki, and H. Noji, “Ghost cytometry,” Science 360(6394), 1246–1251 (2018). [CrossRef]  

25. F. Devaux, P.-A. Moreau, S. Denis, and E. Lantz, “Computational temporal ghost imaging,” Optica 3(7), 698–701 (2016). [CrossRef]  

26. Y.-K. Xu, S.-H. Sun, W.-T. Liu, G.-Z. Tang, J.-Y. Liu, and P.-X. Chen, “Detecting fast signals beyond bandwidth of detectors based on computational temporal ghost imaging,” Opt. Express 26(1), 99–107 (2018). [CrossRef]  

27. G. M. Gibson, B. Sun, M. P. Edgar, D. B. Phillips, N. Hempler, G. T. Maker, G. P. A. Malcolm, and M. J. Padgett, “Real-time imaging of methane gas leaks using a single-pixel camera,” Opt. Express 25(4), 2998–3005 (2017). [CrossRef]  

28. N. Huynh, E. Zhang, M. Betcke, S. Arridge, P. Beard, and B. Cox, “Single-pixel optical camera for video rate ultrasonic imaging,” Optica 3(1), 26–29 (2016). [CrossRef]  

29. R. S. Aspden, N. R. Gemmell, P. A. Morris, D. S. Tasca, L. Mertens, M. G. Tanner, R. A. Kirkwood, A. Ruggeri, A. Tosi, R. W. Boyd, G. S. Buller, R. H. Hadfield, and M. J. Padgett, “Photon-sparse microscopy: visible light imaging using infrared illumination,” Optica 2(12), 1049–1052 (2015). [CrossRef]  

30. B. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, and M. J. Padgett, “3D computational imaging with single-pixel detectors,” Science 340(6134), 844–847 (2013). [CrossRef]  

31. M. Sun, M. P. Edgar, G. M. Gibson, B. Sun, N. Radwell, R. Lamb, and M. J. Padgett, “Single-pixel three-dimensional imaging with time-based depth resolution,” Nat. Commun. 7(1), 12010 (2016). [CrossRef]  

32. Z. Zhang, X. Ma, and J. Zhong, “Single-pixel imaging by means of Fourier spectrum acquisition,” Nat. Commun. 6(1), 6225 (2015). [CrossRef]  

33. Z. Zhang, X. Wang, G. Zheng, and J. Zhong, “Hadamard single-pixel imaging versus Fourier single-pixel imaging,” Opt. Express 25(16), 19619–19639 (2017). [CrossRef]  

34. J. Huang, D. Shi, K. Yuan, S. Hu, and Y. Wang, “Computational-weighted Fourier single-pixel imaging via binary illumination,” Opt. Express 26(13), 16547–16559 (2018). [CrossRef]  

35. B. Xu, H. Jiang, H. Zhao, X. Li, and S. Zhu, “Projector-defocusing rectification for Fourier single-pixel imaging,” Opt. Express 26(4), 5005–5017 (2018). [CrossRef]  

36. Z. Zhang, X. Wang, G. Zheng, and J. Zhong, “Fast Fourier single-pixel imaging via binary illumination,” Sci. Rep. 7(1), 12029 (2017). [CrossRef]  

37. R. Li, J. Hong, X. Zhou, C. Wang, Z. Chen, B. He, Z. Hu, N. Zhang, Q. Li, P. Xue, and X. Zhang, “SNR study on Fourier single-pixel imaging,” New J. Phys. 23(7), 073025 (2021). [CrossRef]  

38. Z. Ye, P. Qiu, H. Wang, J. Xiong, and K. Wang, “Image watermarking and fusion based on Fourier single-pixel imaging with weighed light source,” Opt. Express 27(25), 36505–36523 (2019). [CrossRef]  

39. Z. Zhang, S. Liu, J. Peng, M. Yao, G. Zheng, and J. Zhong, “Simultaneous spatial, spectral, and 3D compressive imaging via efficient Fourier single-pixel measurements,” Optica 5(3), 315–319 (2018). [CrossRef]  

40. Z. Zhang and J. Zhong, “Three-dimensional single-pixel imaging with far fewer measurements than effective image pixels,” Opt. Lett. 41(11), 2497–2500 (2016). [CrossRef]  

41. H. Jiang, H. Zhai, Y. Xu, X. Li, and H. Zhao, “3D shape measurement of translucent objects based on Fourier single-pixel imaging in projector-camera system,” Opt. Express 27(23), 33564–33574 (2019). [CrossRef]  

42. M. Yao, Z. Cai, X. Qiu, S. Li, J. Peng, and J. Zhong, “Full-color light-field microscopy via single-pixel imaging,” Opt. Express 28(5), 6521 (2020). [CrossRef]  

43. M. Torabzadeh, I.-Y. Park, R. Bartels, A. Durkin, and B. Tromberg, “Compressed single pixel imaging in the spatial frequency domain,” J. Biomed. Opt. 22(3), 030501 (2017). [CrossRef]  

44. S. Jiao, M. Sun, Y. Gao, T. Lei, Z. Xie, and X. Yuan, “Motion estimation and quality enhancement for a single image in dynamic single-pixel imaging,” Opt. Express 27(9), 12841–12854 (2019). [CrossRef]  

45. T. Qi, Y. Jiang, H. Wang, and L. Guo, “Image reconstruction of dynamic infrared single-pixel imaging system,” Opt. Commun. 410, 35–39 (2018). [CrossRef]  

46. H. Chen, J. Shi, X. Liu, Z. Niu, and G. Zeng, “Single-pixel non-imaging object recognition by means of Fourier spectrum acquisition,” Opt. Commun. 413, 269–275 (2018). [CrossRef]  

47. J. Peng, M. Yao, J. Chen, Z. Zhang, and J. Zhong, “Micro-tomography via single-pixel imaging,” Opt. Express 26(24), 31094–31105 (2018). [CrossRef]  

48. Y. Chen, S. Liu, X.-R. Yao, Q. Zhao, X.-F. Liu, B. Liu, and G.-J. Zhai, “Discrete cosine single-pixel microscopic compressive imaging via fast binary modulation,” Opt. Commun. 454, 124512 (2020). [CrossRef]  

49. Z. Zhang, T. Lu, J. Peng, and J. Zhong, “Fourier single-pixel imaging techniques and applications,” Infrared and Laser Engineering 48(6), 603002 (2019). [CrossRef]  

50. H. M. Ozaktas, Z. Zalevsky, and M. A. Kutay, The Fractional Fourier Transform with Applications in Optics and Signal Processing (Wiley, 2001).

51. Y.-S. Zhang, F. Zhang, B.-Z. Li, and R. Tao, “Fractional domain varying-order differential denoising method,” Opt. Eng. 53(10), 1–8 (2014). [CrossRef]  

52. H. Wan, R. Tao, and Y. Wang, “Fractional Fourier-Contourlet Deblurring of Space Variant Degradation Coupled with Noise,” in 2010 First International Conference on Pervasive Computing, Signal Processing and Applications (2010), pp. 632–635.

53. L. Xu, Y. Xin, X. Bai, and Q. Li, “Fingerprint Recognition Based on Joint Fractional Fourier Transform Correlator,” in IEEE International Conference on Information Theory and Information Security (ICITIS 2011), pp. 1049–1052.

54. L. Gao, L. Qi, Y. Wang, E. Chen, S. Yang, and L. Guan, “Rotation Invariance in 2D-FRFT with Application to Digital Image Watermarking,” J. Signal Process. Syst. 72(2), 133–148 (2013). [CrossRef]  

55. R. Tao, J. Lang, and Y. Wang, “Optical image encryption based on the multiple-parameter fractional Fourier transform,” Opt. Lett. 33(6), 581–583 (2008). [CrossRef]  

56. B. Hennelly and J. T. Sheridan, “Optical image encryption by random shifting in fractional Fourier domains,” Opt. Lett. 28(4), 269–271 (2003). [CrossRef]  

57. R. Tao, X. Meng, and Y. Wang, “Image Encryption With Multiorders of Fractional Fourier Transforms,” IEEE Trans. Inf. Forensics Security 5(4), 734–738 (2010). [CrossRef]  

58. L. Gao, L. Qi, and L. Guan, “The Property of Frequency Shift in 2D-FRFT Domain With Application to Image Encryption,” IEEE Signal Process. Lett. 28, 185–189 (2021). [CrossRef]  

59. D. Cui, J. Zuo, and M. Xiao, “A Modified Robust Image Hashing Using Fractional Fourier Transform for Image Retrieval,” in Advances in Wireless Networks and Information Systems, Q. Luo, ed. (Springer-Verlag, 2010).

60. H. M. Ozaktas, O. Arikan, M. A. Kutay, and G. Bozdagt, “Digital computation of the fractional Fourier transform,” IEEE Trans. Signal Process. 44(9), 2141–2150 (1996). [CrossRef]  

61. M. Lu, J.-M. Wang, F. Zhang, and R. Tao, “Chirp images in 2-D fractional Fourier transform domain,” in 2016 IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC) (2016), pp. 1–4.

62. G. S. D. Gordon, F. Feng, Q. Kang, Y. Jung, J. Sahu, and T. Wilkinson, “Coherent, focus-corrected imaging of optical fiber facets using a single-pixel detector,” Opt. Lett. 39(20), 6034–6037 (2014). [CrossRef]  

63. P. Clemente, V. Durán, E. Tajahuerce, P. Andrés, V. Climent, and J. Lancis, “Compressive holography with a single-pixel detector,” Opt. Lett. 38(14), 2524–2527 (2013). [CrossRef]  

64. S. Zhao, S. Chen, X. Wang, R. Liu, P. Zhang, H. Li, H. Gao, and F. Li, “Measuring the complex spectrum of orbital angular momentum and radial index with a single-pixel detector,” Opt. Lett. 45(21), 5990–5993 (2020). [CrossRef]  

65. J. Li, Y. Li, Y. Wang, K. Li, R. Li, J. Li, and Y. Pan, “Two-step Holographic Imaging Method based on Single-pixel Compressive Imaging,” J. Opt. Soc. Korea 18(2), 146–150 (2014). [CrossRef]  

66. H.-S. Lee, H.-J. Maeng, and Y.-S. Bae, “Fake Finger Detection Using the Fractional Fourier Transform,” in Biometric ID Management and Multimodal Communication (Springer Berlin Heidelberg, 2009), pp. 318–324.

67. F. Q. Yu, Z. K. Zhang, and M. H. Xu, “A Digital Watermarking Algorithm for Image Based on Fractional Fourier Transform,” in 2006 1st IEEE Conference on Industrial Electronics and Applications (2006), pp. 1–5.

68. F. Yu, Z. Zhang, and D. Yan, “A robust algorithm for image watermarking based on FRFT and DFT,” International Conference on Complex Systems and Applications 13, 903–907 (2006).

69. M.-F. Lu, G.-Q. Ni, T. Wang, F. Zhang, R. Tao, and J. Yuan, “Method for reducing Newton's rings pattern in the scanned image reproduced with film scanners,” in International Conference on Optical Instruments and Technology (OIT2013) (SPIE, 2013).

70. M.-F. Lu, G.-Q. Ni, T.-Z. Bai, R. Tao, and F. Zhang, “Method for suppressing the quantization error of Newton’s rings fringe pattern,” Opt. Eng. 52(10), 103105 (2013). [CrossRef]  

71. W. Yang, Z. Feng, W. Liu, and X. Zou, “Blurred defocused image restoration based on FRFT,” Wuhan Univ. J. Nat. Sci. 12(3), 496–500 (2007). [CrossRef]  

72. A. W. Lohmann, Z. Zalevsky, and D. Mendlovic, “Synthesis of pattern recognition filters for fractional Fourier processing,” Opt. Commun. 128(4-6), 199–204 (1996). [CrossRef]  

73. J. Cai and G. Feng, “Human action recognition in the fractional Fourier domain,” in 3rd IAPR Asian Conference on Pattern Recognition (ACPR) (2015), pp. 660–664.

74. X.-Y. Jing, H.-S. Wong, and D. Zhang, “Face recognition based on discriminant fractional Fourier feature extraction,” Pattern Recogn. Lett. 27(13), 1465–1471 (2006). [CrossRef]  

75. S. S. Welsh, M. P. Edgar, R. Bowman, P. Jonathan, B. Sun, and M. J. Padgett, “Fast full-color computational imaging with single-pixel detectors,” Opt. Express 21(20), 23068–23074 (2013). [CrossRef]  

76. K. M. Czajkowski, A. Pastuszczak, and R. Kotyński, “Single-pixel imaging with sampling distributed over simplex vertices,” Opt. Lett. 44(5), 1241–1244 (2019). [CrossRef]  

77. H. Ren, S. Zhao, and J. Gruska, “Edge detection based on single-pixel imaging,” Opt. Express 26(5), 5501–5511 (2018). [CrossRef]  

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.



Figures (7)

Fig. 1. Experimental set-up of FRFT single-pixel imaging.

Fig. 2. Typical illumination patterns for FRFT single-pixel imaging with N=128 and p=0.5: (a) s=65, w=65; (b) s=65, w=128; (c) s=128, w=65; (d) s=128, w=128.

Fig. 3. Amplitudes of the fractional spectra measured by the proposed method under different fractional orders; the origin of each spectrum is centered. Note that the results in (a), (b) and (h)-(j) are calculated from the experimental data shown in (c)-(g) by Eqs. (8) and (12).

Fig. 4. Object images recovered by FRFT single-pixel imaging under different fractional orders. The number of illumination patterns/measurements used for each result is 4×128×128 = 65536.

Fig. 5. (a)-(e) Amplitudes of the fractional spectra obtained by sub-Nyquist sampling under different fractional orders. (f)-(j) Object images recovered by FRFT single-pixel imaging using sub-Nyquist sampling under different fractional orders. (k) The relationship between compression ratio (CR) and image SNR. When CR = 37%, 28% and 19%, the numbers of illumination patterns/measurements are 24248, 18350 and 12452, respectively.

Fig. 6. Edge-extraction results of FRFT single-pixel imaging under different parameters. The compression ratio is 37%, 28%, 28% and 37% when p is 0.6, 0.8, 1.2 and 1.4, respectively. When CR = 37% and 28%, the numbers of illumination patterns/measurements are 24248 and 18350, respectively. The data post-processing time for each result is ∼1 ms on a laptop with a six-core CPU (Intel Core i5-8400).

Fig. 7. Numerical comparison of FRFT single-pixel imaging, Fourier single-pixel imaging and Hadamard single-pixel imaging when a 2D chirp pattern is used as the object. (a) 2D chirp pattern. (b) The fractional spectrum obtained in FRFT single-pixel imaging with p=0.5. (c) The Fourier spectrum obtained in Fourier single-pixel imaging. (d) The Hadamard spectrum obtained in Hadamard single-pixel imaging.

Tables (1)

Table 1. Nomenclature

Equations (20)


$$F_p(u) = A \int_{-\infty}^{\infty} \exp\left[ j\pi \left( u^2 \cot\alpha - 2ux\csc\alpha + x^2\cot\alpha \right) \right] f(x)\,dx, \tag{1}$$

$$A = \exp\left[ -j\pi\,\mathrm{sgn}(\sin\alpha)/4 + j\alpha/2 \right] \big/ \sqrt{|\sin\alpha|}. \tag{2}$$

$$P_{\phi_0,s,w}(r,v) = \left\{ 1 + \cos\left[ \phi_x(s,r) + \phi_y(w,v) + \phi_0 \right] \right\} / 2, \tag{3}$$

$$I_{\phi_0}(s,w) = i_0 \sum_{v=1}^{N} \sum_{r=1}^{N} P_{\phi_0,s,w}(r,v)\, f(x_r, y_v), \tag{4}$$

$$\tilde{F}_p(u_{x_s}, u_{y_w}) = I_0(s,w) - I_{\pi}(s,w) + j\left[ I_{3\pi/2}(s,w) - I_{\pi/2}(s,w) \right] = \frac{2N i_0}{A_\alpha^2}\, F_p(u_{x_s}, u_{y_w}). \tag{5}$$
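The four-step phase-shifting combination above can be checked numerically: for any phase pattern, the bucket measurements under shifts 0, π/2, π and 3π/2 combine into the complex coefficient i₀ ΣΣ exp[j(φx+φy)] f. A minimal sketch, with a random phase map and object standing in for the actual FRFT base patterns:

```python
import numpy as np

# Four-step phase shifting: patterns P_phi0 = {1 + cos(phi + phi0)}/2 are
# projected, and a single-pixel detector records I_phi0 = i0 * sum(P * f).
# Random phi and f are placeholders for the FRFT base patterns and object.
rng = np.random.default_rng(1)
N = 64
f = rng.random((N, N))                  # hypothetical object reflectivity
phi = rng.uniform(0, 2 * np.pi, (N, N)) # phi_x(s,r) + phi_y(w,v) for one (s,w)
i0 = 0.7                                # detector scale factor (assumed)

def bucket(phi0):
    """Total intensity measured under one phase-shifted pattern."""
    P = (1 + np.cos(phi + phi0)) / 2
    return i0 * np.sum(P * f)

I = {phi0: bucket(phi0) for phi0 in (0, np.pi / 2, np.pi, 3 * np.pi / 2)}

# Combine the four measurements into one complex coefficient.
coeff = I[0] - I[np.pi] + 1j * (I[3 * np.pi / 2] - I[np.pi / 2])

# The combination should equal i0 * sum(exp(j*phi) * f) exactly.
expected = i0 * np.sum(np.exp(1j * phi) * f)
err = abs(coeff - expected)
```

The constant background term in each pattern cancels in the subtractions, which is why the scheme is robust to a DC offset in the illumination.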
$$\mathrm{edge}(x_r, y_v) = \begin{cases} \left| \mathcal{F}^{-p}\left[ F_p(u_{x_s}, u_{y_w}) \right] - \mathcal{F}^{-(p+\Delta p)}\left[ F_p(u_{x_s}, u_{y_w}) \right] \right|, & 0.5 \le p < 1, \\[4pt] \left| \mathcal{F}^{-p}\left[ F_p(u_{x_s}, u_{y_w}) \right] - \mathcal{F}^{-(p-\Delta p)}\left[ F_p(u_{x_s}, u_{y_w}) \right] \right|, & 1 < p \le 1.5, \end{cases} \tag{6}$$

$$\mathrm{edge}(x_r, y_v) \approx \begin{cases} \left| f(x_r, y_v) - \mathcal{F}^{-\Delta p}\left[ f(x_r, y_v) \right] \right|, & 0.5 \le p < 1, \\[4pt] \left| f(x_r, y_v) - \mathcal{F}^{\Delta p}\left[ f(x_r, y_v) \right] \right|, & 1 < p \le 1.5, \end{cases} \tag{7}$$
$$F_{p+4n}(u) = F_p(u), \qquad F_{p+2}(u) = F_p(-u). \tag{8}$$

$$F_p(u) = A_\alpha \exp\left( j\pi u^2 \cot\alpha \right) \int_{-\infty}^{\infty} \exp\left( -2j\pi u x \csc\alpha + j\pi x^2 \cot\alpha \right) f(x)\,dx, \tag{9}$$

$$A_\alpha = \exp\left[ j(p-1)\pi/4 \right] \big/ \sqrt{|\sin\alpha|}. \tag{10}$$

$$F_p(u_s) = \frac{A_\alpha}{\sqrt{2N}} \exp\left[ j\pi \cot\alpha\, \frac{m_s^2}{2N} \right] \sum_{r=1}^{N} \exp\left[ -j\pi \csc\alpha\, \frac{m_s n_r}{N} + j\pi \cot\alpha\, \frac{n_r^2}{2N} \right] f(x_r), \tag{11}$$

$$F_p(u) = \mathcal{F}^{q}\left[ F_{p-q} \right](u). \tag{12}$$

$$\mathbf{F}_{N\times 1} = \mathbf{FRFT}_{N\times N}\, \mathbf{f}_{N\times 1}, \tag{13}$$

$$\mathbf{F}_{N\times 1} = \left[ F_p(u_1)\ \ F_p(u_2)\ \cdots\ F_p(u_N) \right]^{T}, \qquad \mathbf{f}_{N\times 1} = \left[ f(x_1)\ \ f(x_2)\ \cdots\ f(x_N) \right]^{T}. \tag{14}$$

$$\mathbf{FRFT}_{N\times N} = \begin{bmatrix} FRFT(1,1) & FRFT(1,2) & \cdots & FRFT(1,N) \\ FRFT(2,1) & FRFT(2,2) & \cdots & FRFT(2,N) \\ \vdots & \vdots & \ddots & \vdots \\ FRFT(N,1) & FRFT(N,2) & \cdots & FRFT(N,N) \end{bmatrix}, \tag{15}$$

$$FRFT(s,r) = \frac{A_\alpha}{\sqrt{2N}} \exp\left[ j\pi \cot\alpha\, \frac{m_s^2}{2N} - j\pi \csc\alpha\, \frac{m_s n_r}{N} + j\pi \cot\alpha\, \frac{n_r^2}{2N} \right]. \tag{16}$$

$$\mathbf{f}_{N\times 1} = \left( \mathbf{FRFT}_{N\times N} \right)^{-1} \mathbf{F}_{N\times 1}, \tag{17}$$

$$F_p(u_{x_s}, u_{y_w}) = \frac{A_\alpha^2}{2N} \sum_{v=1}^{N} \sum_{r=1}^{N} \exp\left[ j\phi_x(s,r) + j\phi_y(w,v) \right] f(x_r, y_v) = \frac{A_\alpha^2}{2N} \sum_{v=1}^{N} \sum_{r=1}^{N} \left\{ \cos\left[ \phi_x(s,r) + \phi_y(w,v) \right] + j \sin\left[ \phi_x(s,r) + \phi_y(w,v) \right] \right\} f(x_r, y_v), \tag{18}$$

$$\phi_x(s,r) = \pi \cot\alpha\, \frac{m_s^2}{2N} - \pi \csc\alpha\, \frac{m_s n_r}{N} + \pi \cot\alpha\, \frac{n_r^2}{2N}, \qquad \phi_y(w,v) = \pi \cot\alpha\, \frac{m_w^2}{2N} - \pi \csc\alpha\, \frac{m_w n_v}{N} + \pi \cot\alpha\, \frac{n_v^2}{2N}. \tag{19}$$

$$\mathbf{f}_{N\times N} = \left( \mathbf{FRFT}_{N\times N} \right)^{-1} \left[ \left( \mathbf{FRFT}_{N\times N} \right)^{-1} \left( \mathbf{F}_{N\times N} \right)^{T} \right]^{T}, \tag{20}$$