
Programmable optical real-time processing system for ultra-wide bandwidth electronic reconnaissance

Open Access

Abstract

Electronic reconnaissance aims to detect signals over a wide bandwidth and extract their parameters, modulation types, direction of arrival, and so on. It is difficult for digital signal processing devices to operate in real time in an ultra-wide bandwidth environment. This paper proposes a programmable optical system that can process signals with an instantaneous bandwidth of up to 40 GHz in real time. In the optical system, the signals are reconstructed on the wavefront of a laser beam. The beam carrying the signals passes through an optical system composed of lenses, beam splitters, light modulators, etc. The signal processing operation is accomplished when the beam arrives at a focal plane, and the processing results are acquired by a high-speed camera. Typical pulse description words can be derived from the results. The proposed optical system has a nanosecond processing delay due to its meter-length light path.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

With the rapid progress of electronic technology, the frequency band range of electromagnetic signals becomes larger and the modulation types become more complex [1,2]. Modern electronic reconnaissance requires parameter extraction, matched filtering, and direction of arrival (DOA) estimation of ultra-wide bandwidth (UWB) signals in real time [3,4]. However, limited by the processing ability of digital signal processing (DSP) devices, current electronic reconnaissance systems struggle to provide real-time processing of UWB signals [5,6], and their ability to process signals with unknown modulation types is also limited [7]. The processing methods [8] and requirements of UWB electronic reconnaissance signals are poorly matched to traditional DSP devices; parallel devices are more suitable for this kind of processing [9,10]. With the development of information optics [11,12], some researchers have tried to use phase masks and optical lenses to process specific signals with spatial light [13,14]. Although this approach improves processing speed and reduces power consumption [15], it is difficult to adapt to changing signal processing tasks because such systems are fixed [16,17]. In addition, due to the difficulty of manufacturing cylindrical lenses and their unreliable performance [18], it is inappropriate to use optical lenses to realize the one-dimensional Fourier transform of a signal or other one-dimensional operations. Besides, many optical imaging works consider only the processing method and ignore the necessity of loading the amplitude and phase onto the wavefront at the same time [19,20]; it is difficult to realize accurate signal processing with such methods.

Considering the problems mentioned above, this paper proposes a programmable optical system that exhibits good performance in real-time UWB electronic reconnaissance, achieving instantaneous full coverage of the 0–40 GHz bandwidth with 16-bit quantization accuracy. The system consists of three modules. First, we designed an amplitude-phase modulation module to reconstruct signals on the wavefront of a laser beam. Then, the processing operation is implemented by a programmable optical device. Last, the processed spatial light distribution is captured by the analysis module, and the processing results are calculated from this distribution. The system achieves a nanosecond processing delay due to its meter-length light path, and the programmable optical device can meet the requirements of changing applications.

To demonstrate the performance of this system in real-time electronic reconnaissance, some typical applications of electronic reconnaissance are carried out for system testing. The experimental results show that the system can process signals in the 0–40 GHz bandwidth in real time, calculate unknown frequency modulation slopes, and detect the DOA of 8 targets in 64 channels. In this paper, the principle and architecture of the system are illustrated in Section 2, the experimental results are presented in Section 3, and the conclusion is drawn in the last section.

2. Principle

As shown in Fig. 1, the optical system proposed in this paper processes UWB electronic reconnaissance signals in three steps: signal modulation, processing, and analysis.

Fig. 1. Processing flow of electronic reconnaissance signal in optical system

This section is divided into two parts. The first part introduces the principle of signal modulation and two modulation methods corresponding to different signals. The second part discusses three different signal processing methods and analysis algorithms for different electronic reconnaissance requirements.

2.1 Signal modulation

In order to realize optical processing of the signal, it is necessary to reconstruct the signal on the laser beam. We designed a signal modulation module by combining a Digital Micromirror Device (DMD), a Spatial Light Modulator (SLM), a spatial filter, and an optical 4f system. DMD and SLM are both programmable reflective 2-D diffraction devices that can modulate spatial light [21]. Their reflective surfaces are made up of millions of pixels, which are independently controlled to realize modulation with 8-bit quantization accuracy [22]. The DMD realizes amplitude modulation of the spatial light reaching each pixel [23], while the SLM realizes phase modulation [24]. Given the size of the devices, a 532 nm laser is selected as the signal carrier to ensure the compactness and adjustability of the system.
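As a minimal sketch of this split modulation scheme (assuming a simple linear gray-level mapping; real devices need calibrated lookup tables), a complex baseband signal can be separated into an 8-bit amplitude frame for the DMD and an 8-bit phase frame for the SLM:

```python
import numpy as np

def to_dmd_slm_frames(signal, n_bits=8):
    """Split a complex baseband signal into an 8-bit amplitude frame
    (for the DMD) and an 8-bit phase frame (for the SLM).
    Illustrative sketch; the linear mapping is an assumption."""
    levels = 2 ** n_bits - 1                       # 255 gray levels
    amp = np.abs(signal)
    amp_frame = np.round(amp / amp.max() * levels).astype(np.uint8)
    # Map phase from [-pi, pi] onto [0, 255] gray levels (2*pi modulation depth).
    phase = np.angle(signal)
    phase_frame = np.round((phase + np.pi) / (2 * np.pi) * levels).astype(np.uint8)
    return amp_frame, phase_frame

# Example: a short constant-amplitude complex exponential
t = np.arange(256) / 256.0
sig = np.exp(2j * np.pi * 10 * t)
amp_frame, phase_frame = to_dmd_slm_frames(sig)
```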

For the spatial light modulation of the signal, the core problem is to modulate the amplitude information and phase information of the signal onto the same wavefront. Therefore, we use an optical 4f system and a spatial filter to realize the cascade of DMD and SLM, as shown in Fig. 2.

Fig. 2. Signal modulation module. DMD and SLM are cascaded through an optical 4f system composed of two lenses with different focal lengths, where ${\textrm{x}_i}$ and ${y_i}$ are the coordinates of each plane perpendicular to the laser propagation direction.

In this module, the complex amplitude distribution of spatial light modulated by DMD and SLM meets the Fresnel diffraction formula:

$$\begin{aligned} U(x,y) &= \frac{1}{{j\lambda z}}\textrm{exp} (jkz)\int\!\!\!\int\limits_\infty {U({x_0},{y_0}) \times } \\ &\textrm{exp} \{ j\frac{k}{{2z}}[{(x - {x_0})^2} + {(y - {y_0})^2}]\} d{x_0}d{y_0} \end{aligned}, $$
where $U(x,y)$ is the complex amplitude distribution of spatial light on observation surface; x and y are the coordinates of the observation surface with respect to the origin of the optical axis in the spatial domain; $\lambda $ is the wavelength of laser; k is the wave number; z is the distance from observation surface to DMD or SLM; $U({x_0},{y_0})$ is the complex amplitude distribution of spatial light field after DMD or SLM modulation; ${x_0}$ and ${y_0}$ are the coordinates of the DMD or SLM surface. In this system, the propagation of spatial light meets Eq. (1). Spatial light passing through a lens with a focal length of f will be superimposed with the phase of:
$${t_l}(x,y) = \textrm{exp} [ - j\frac{k}{{2f}}({x^2} + {y^2})], $$
where x and y are the coordinates of the lens plane. According to formulas (1) and (2), assuming that the spatial optical complex amplitude distribution modulated by DMD is $C({x_0},{y_0})$, the complex amplitude distribution at the focal plane of lens1 is:
$$U({x_1},{y_1}) = \frac{1}{{j\lambda {f_1}}}{\cal F}{\{ C({x_0},{y_0})\} _{{f_x} = \frac{{{x_1}}}{{\lambda {f_1}}},{f_y} = \frac{{{y_1}}}{{\lambda {f_1}}}}}, $$
where ${x_1}$ and ${y_1}$ are the coordinates of the focal plane; ${f_1}$ is the focal length of lens1; ${\cal F}$ is the symbol of Fourier transform. Since DMD is a two-dimensional diffraction device, in order not to affect the result of optical 4f system, a spatial filter needs to be placed at the focal plane of lens1 to eliminate high-order diffraction. Similarly, lens2 can realize inverse Fourier transform:
$$A({x_2},{y_2}) = C( - \frac{{{f_2}{x_0}}}{{{f_1}}}, - \frac{{{f_2}{y_0}}}{{{f_1}}}), $$
where ${f_2}$ is the focal length of lens2; $A({x_2},{y_2})$ is the complex amplitude distribution on the surface of SLM. DMD and SLM have the same resolution, but the size of each pixel is different. According to formula (4), the ratio of ${f_1}$ and ${f_2}$ should be equal to the ratio of the reflecting surface size of DMD and SLM, and the signal amplitude should be loaded according to central symmetry on DMD to ensure the range of $A({x_2},{y_2})$ matches the reflecting surface of SLM. The phase information is modulated by SLM, and the modulation result is:
$$U({x_2},{y_2}) = C( - \frac{{{f_2}{x_0}}}{{{f_1}}}, - \frac{{{f_2}{y_0}}}{{{f_1}}}) \times P({x_2},{y_2}), $$
where $U({x_2},{y_2})$ is the output of the modulation module; $P({x_2},{y_2})$ is the phase of the signal. Thus, the signal to be processed is modulated onto the spatial light, and we use a non-polarizing beam splitter (NPBS) to reflect the modulated spatial light to the next module.
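The cascade of formulas (3)–(5) can be checked numerically. The sketch below treats the unit-magnification case ${f_1} = {f_2}$, where two successive Fourier transforms simply invert the DMD image before the SLM phase is applied; the grid size and patterns are illustrative:

```python
import numpy as np

# Numerical check of the 4f image inversion (Eq. (4)) for unit magnification
# (f1 = f2): two successive Fourier transforms return the input field inverted,
# C(x0, y0) -> C(-x0, -y0). Grid and pattern are illustrative.
N = 256
C = np.zeros((N, N))
C[40:90, 60:200] = 1.0                 # amplitude pattern loaded on the DMD

# Lens1 and lens2 each perform a forward Fourier transform (Eq. (3));
# the 1/N**2 factor normalizes the discrete transform pair.
A = np.fft.fft2(np.fft.fft2(C)) / N**2

# Expected result: the DMD pattern imaged upside down (the DFT index
# convention keeps sample 0 fixed, hence the extra roll).
flipped = np.roll(np.roll(C[::-1, ::-1], 1, axis=0), 1, axis=1)
assert np.allclose(A, flipped, atol=1e-6)

# Eq. (5): the SLM then multiplies this image by the signal phase P(x2, y2).
P = np.exp(1j * np.linspace(0, np.pi, N))   # hypothetical phase profile
U2 = A * P
```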

According to different reconnaissance requirements, we design two signal loading strategies: one for conventional signals and one for multichannel signals. For conventional UWB electronic reconnaissance signals, we designed a segmented loading mode to make full use of the parallel processing ability of spatial light and extract the time-frequency parameters of signals in real time. The input signal in the 0–40 GHz range is separated into sub-bands by a comb filter and then down-converted to the 0–20 MHz range. After these steps, the phase information and amplitude information are loaded into the signal modulation module, as shown in Fig. 3.
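The bookkeeping of this segmented loading can be sketched as follows; the 20 MHz sub-band width matches the 2000 segments of Fig. 3, and the function name is illustrative:

```python
# Sketch of the segmented loading bookkeeping: a tone anywhere in 0-40 GHz is
# assigned to a 20 MHz sub-band (its segment index along the short side) and
# down-converted to its 0-20 MHz residual. 2000 segments x 20 MHz = 40 GHz.
SUBBAND_HZ = 20e6

def segment(freq_hz):
    n = int(freq_hz // SUBBAND_HZ)        # sub-band / segment number
    baseband = freq_hz - n * SUBBAND_HZ   # frequency after down-conversion
    return n, baseband

n, fb = segment(12.345e9)                 # a 12.345 GHz tone
# lands in segment 617 with a 5 MHz baseband residual
assert n == 617 and abs(fb - 5e6) < 1.0
```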

Fig. 3. Position of each sub-band loaded on signal modulation module. The short side is divided into 2000 segments, and each segment is loaded with signals of different frequency bands, which can realize UWB signal loading.

For multichannel signals, the host computer can change the signal loading strategy on DMD and SLM, as shown in Fig. 4.

Fig. 4. Loading strategy of multichannel signals. The signals received by each channel are loaded into a line on DMD / SLM in sequence, so the short side of DMD / SLM represents the serial number of the channel, and the long side loads the amplitude / phase of signal.

The signals received by the array form a unique phase difference in the channel dimension, and this phase difference appears as a frequency point after processing. The analysis of signal direction is thus transformed into the analysis of spatial frequency in optics. The direction analysis accuracy of this method depends on the sampling rate of the SLM and the performance of the receiving antenna. By analyzing the frequency point, the DOA and parameters of multichannel signals can be obtained at the same time.
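A minimal digital analogue of this idea (with assumed array geometry and RF wavelength) shows how a constant inter-channel phase step becomes a single frequency point under a one-dimensional Fourier transform:

```python
import numpy as np

# Sketch: a plane wave hitting a uniform linear array produces a constant
# phase step between channels; stacked along the SLM's channel dimension this
# is a single spatial frequency, so a 1-D Fourier transform reveals the DOA.
# All numbers below are illustrative assumptions.
n_ch = 64
l = 0.015                      # antenna spacing (m), assumed
lam_rf = 0.03                  # RF wavelength (m), e.g. a 10 GHz signal
theta = np.deg2rad(20.0)       # true direction of arrival

k = np.arange(n_ch)
phase_step = 2 * np.pi * l * np.sin(theta) / lam_rf
channels = np.exp(1j * phase_step * k)     # one snapshot across the array

spectrum = np.abs(np.fft.fft(channels, 1024))   # zero-padded 1-D transform
peak = np.argmax(spectrum)                 # the frequency point <-> DOA
f_norm = peak / 1024                       # cycles per channel
theta_est = np.arcsin(f_norm * lam_rf / l)
assert abs(np.rad2deg(theta_est) - 20.0) < 1.0
```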

2.2 Signal processing and analysis

After completing signal modulation, we designed the signal processing and analysis module according to the characteristics of electronic reconnaissance signal.

As shown in Fig. 5, we used another SLM for processing, and a qCMOS camera is used for receiving. Although the function of SLM1 is to modulate the phase of signals, it can also process signals through phase superposition. This means that the system includes two programmable processing devices, which can work alone or together to achieve different processing methods. When the processing method changes, the algorithm for analyzing the results is changed at the same time by the host computer. The detailed methods are discussed as follows.

Fig. 5. Signal processing and analysis module. SLM1 is the SLM in modulation module and SLM2 is another SLM for processing. The results are captured by a qCMOS camera and sent to the host computer.

According to Fig. 3, to extract the parameters of conventional electronic reconnaissance signals, the one-dimensional Fourier transform of the signal (long side) and the imaging of each sub-band (short side) should be carried out simultaneously. The phase loaded on SLM2 is (assuming the x dimension is the Fourier transform dimension):

$${t_3}({x_3},{y_3}) = \textrm{exp} [ - j\frac{k}{{2d}}({x_3}^2 + \frac{{{y_3}^2}}{2})]. $$

According to the above formula, SLM2 can be seen as a virtual lens with different focal lengths in x and y dimensions, and the phase distribution is shown in Fig. 6.

Fig. 6. The phase to realize one-dimensional Fourier transform.

Because of the 8-bit quantization accuracy of the SLM, the spatial phase relation needs to be quantized into gray levels, as shown in Fig. 7.
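The quantization of the Eq. (6) phase into an SLM gray image can be sketched as follows; the propagation distance and grid size are illustrative, and the 3.74 µm pitch follows the SLM quoted in Section 3:

```python
import numpy as np

# Sketch of the Eq. (6) pattern: a virtual lens whose focal length in the
# y dimension is twice that in x, wrapped to [0, 2*pi) and quantized to the
# SLM's 8-bit gray levels (a Fig. 7-style image). Distance d is assumed.
N, M = 512, 512
ls = 3.74e-6                   # SLM pixel pitch (m)
lam = 532e-9                   # laser wavelength (m)
d = 0.5                        # propagation distance (m), assumed
k = 2 * np.pi / lam

y, x = np.mgrid[-N // 2:N // 2, -M // 2:M // 2]
x = x * ls
y = y * ls
phase = -(k / (2 * d)) * (x**2 + y**2 / 2)   # Eq. (6) phase profile

# Wrap to one 2*pi period and quantize to 8-bit gray levels for the SLM.
gray = np.round(np.mod(phase, 2 * np.pi) / (2 * np.pi) * 255).astype(np.uint8)
```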

Fig. 7. 8-bit gray image of one-dimensional Fourier transform phase.

According to formulas (1), (3), and (6), the complex amplitude distribution on the camera plane is:

$$\left\{ \begin{array}{l} {U_4}({x_4}) = \frac{1}{{j\lambda d}}{\cal F}{\{ {U_2}({x_2})\}_{{f_x} = \frac{{{x_4}}}{{\lambda d}}}}\\ {U_4}({y_4}) = {U_2}( - {y_2}) \end{array} \right.. $$

From the above formula, each sub-band realizes a one-dimensional Fourier transform separately, and the arrangement order of the sub-bands does not change due to equal-size imaging. Therefore, the type, frequency distribution, and other characteristic parameters of signals can be obtained from the camera output. Different signals have different frequency characteristics, so the signal type can be determined according to the complex amplitude distribution captured by the qCMOS camera. The resolution of the camera is ${N_c}\ast {M_c}$, and the pixel size is ${l_c}$. The resolution of the SLM is ${N_s}\ast {M_s}$, and the pixel size is ${l_s}$. We designed an appropriate distance d to ensure the range of the wavefront matches the size of the SLM and camera:

$$d = \frac{{{l_s}{l_c}{M_c}}}{{2\lambda }}$$
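Evaluating Eq. (8) with the device values quoted in Section 3 (SLM pitch 3.74 µm, camera pitch 4.6 µm, 4096 camera columns, 532 nm laser) gives a concrete sense of scale:

```python
# Eq. (8): propagation distance that matches the transformed field to the
# camera. The values are the Section 3 devices; d itself is not stated in
# the paper, so this is simply the formula evaluated.
ls, lc, Mc, lam = 3.74e-6, 4.6e-6, 4096, 532e-9
d = ls * lc * Mc / (2 * lam)   # about 6.6 cm
```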

The center frequency ${f_c}$ can be calculated in two parts:

$${f_c} = 20[n({y_4}) + \frac{{{x_4}}}{{{M_c}}}](MHz), $$
where ${x_4}/{M_c}$ indicates the position of the signal in the 20 MHz sub-band; $n({y_4}) = {l_c}{y_4}/{l_s}$ represents the order number of this sub-band. For some typical electronic reconnaissance signals, such as the linear frequency modulation (LFM) signal, the bandwidth B also needs to be calculated. The calculation method is:
$$B = \frac{{m \times 20}}{{{M_c}}}(MHz), $$
where m is the total number of pixels that the signal occupies.
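The readout of Eqs. (9) and (10) amounts to simple pixel arithmetic; below is a sketch with the Section 3 device values as defaults (the example coordinates are arbitrary):

```python
# Sketch of the readout in Eqs. (9) and (10): converting camera pixel
# coordinates into the signal's center frequency and (for LFM) bandwidth.
# Defaults are the Section 3 devices: camera pitch lc = 4.6 um, SLM pitch
# ls = 3.74 um, Mc = 4096 camera columns.
def center_freq_mhz(x4, y4, Mc=4096, lc=4.6e-6, ls=3.74e-6):
    n = lc * y4 / ls                   # sub-band order number n(y4)
    return 20.0 * (n + x4 / Mc)        # Eq. (9), in MHz

def lfm_bandwidth_mhz(m, Mc=4096):
    return m * 20.0 / Mc               # Eq. (10), in MHz

# e.g. a bright spot at (x4, y4) = (2048, 100); coordinates are illustrative
fc = center_freq_mhz(2048, 100)
```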

Processing an LFM signal requires not only obtaining its parameters but also matched filtering. If matched filtering in the x dimension is to be realized, a secondary phase needs to be superimposed on SLM1:

$${t_2}({x_2},{y_2}) = \textrm{exp} [ - j\frac{k}{{2d}}({x_2}^2 + {y_2}^2)]. $$

According to formulas (5) and (11), the spatial complex amplitude distribution modulated by SLM1 is ${U^{\prime}}({x_2},{y_2})$:

$$\begin{aligned} {U^{\prime}}({x_2},{y_2}) &= U({x_2},{y_2}) \times \textrm{exp} [ - j\frac{k}{{2d}}({x_2}^2 + {y_2}^2)]\\ &= C( - \frac{{{f_2}{x_0}}}{{{f_1}}}, - \frac{{{f_2}{y_0}}}{{{f_1}}}) \times P({x_2},{y_2}) \times \textrm{exp} [ - j\frac{k}{{2d}}({x_2}^2 + {y_2}^2)] \end{aligned}. $$

When the spatial light propagates to SLM2, the spatial complex amplitude distribution is:

$$U({x_3},{y_3}) = \frac{1}{{j\lambda d}} \times \textrm{exp} [j\frac{k}{{2d}}({x_3}^2 + {y_3}^2)] \times {\cal F}{\{ U({x_2},{y_2})\} _{{f_x} = \frac{{{x_3}}}{{\lambda d}},{f_y} = \frac{{{y_3}}}{{\lambda d}}}}. $$

According to the above formula, a secondary phase $\textrm{exp} [jk/(2d)\ast ({x_3}^2 + {y_3}^2)]$ is generated. In order to eliminate the influence of this phase and realize inverse Fourier transform, we superimpose an opposite secondary phase with twice the curvature while loading the matched filter phase on SLM2. The phase loaded on SLM2 is:

$${t_3}({x_3},{y_3}) = S({x_3},{y_3}) \times \textrm{exp} [ - j\frac{k}{d}({x_3}^2 + {y_3}^2)], $$
where $S({x_3},{y_3})$ is matched filter phase in frequency domain. The phase relation is shown in Fig. 8.
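A sketch of the Eq. (14) pattern is below. The quadratic-phase matched filter for an LFM signal is written as $\exp(j\pi f^2/\mu)$, which is an assumed standard form (the paper does not give S explicitly), and the chirp rate and distances are illustrative:

```python
import numpy as np

# Sketch of the SLM2 pattern in Eq. (14): the frequency-domain matched-filter
# phase S(x3, y3) multiplied by a quadratic phase with twice the curvature of
# the Eq. (6) lens term, which cancels the residual quadratic phase in
# Eq. (13) and realizes the inverse transform. All values are illustrative.
N = 512
ls = 3.74e-6                   # SLM pixel pitch (m)
lam = 532e-9                   # laser wavelength (m)
d = 0.5                        # propagation distance (m), assumed
k = 2 * np.pi / lam
mu = 1e12                      # assumed LFM chirp rate (Hz/s)

y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2] * ls
fx = x / (lam * d)             # spatial frequency at SLM2: f_x = x3/(lam*d)

S = np.exp(1j * np.pi * fx**2 / mu)            # matched-filter phase (assumed form)
t3 = S * np.exp(-1j * k / d * (x**2 + y**2))   # Eq. (14)

# 8-bit quantization of the combined phase for display on SLM2 (Fig. 9-style).
gray = np.round(np.mod(np.angle(t3), 2 * np.pi) / (2 * np.pi) * 255).astype(np.uint8)
```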

Fig. 8. The phase loaded on SLM2 to realize matched filtering.

The phase after 8-bit quantization is shown in Fig. 9.

Fig. 9. 8-bit gray image of matched filtering phase.

After propagation and processing by SLM2, the LFM signal has completed matched filtering and the inverse Fourier transform. However, the frequency modulation slope of an LFM signal in general electronic reconnaissance is unknown. For LFM signals with an unknown frequency modulation slope, we designed a fast iterative way to realize adaptive matched filtering. By evaluating the processing results captured by the qCMOS camera, the host computer constantly updates the frequency modulation slope of the matched filter phase on SLM2, as shown in Fig. 10.

As shown in Fig. 10, the frequency modulation slope of the phase loaded on SLM2 is changed continuously until the bright line on the qCMOS camera is focused into a bright spot, at which point the focusing is complete. In the same way as for the point-frequency signal in formula (9), the center frequency of the LFM signal can be calculated from the pixel coordinates of the bright spot. At the same time, the extraction of the frequency modulation slope of the signal is completed during the focusing process.
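A one-dimensional digital stand-in for this iterative focusing (the real system updates the SLM2 phase and scores the camera image instead) can be sketched as a search for the chirp rate that maximizes a concentration metric; all parameters are illustrative:

```python
import numpy as np

# Sketch of adaptive matched filtering: sweep candidate chirp rates and keep
# the one whose compressed output is most concentrated -- the digital analogue
# of "the bright line focuses into a bright spot" on the camera.
fs = 40e6
t = np.arange(4096) / fs
mu_true = 2e11                          # unknown chirp rate (Hz/s), assumed
sig = np.exp(1j * np.pi * mu_true * t**2)

def sharpness(mu):
    # Dechirp with candidate rate mu; the correct mu collapses the spectrum
    # of the dechirped signal into a single bright bin.
    spec = np.abs(np.fft.fft(sig * np.exp(-1j * np.pi * mu * t**2)))
    return spec.max() / spec.sum()      # peak-to-energy concentration metric

candidates = np.linspace(1e11, 3e11, 201)
mu_est = candidates[np.argmax([sharpness(mu) for mu in candidates])]
assert abs(mu_est - mu_true) / mu_true < 0.01
```

The real loop would evaluate the same kind of sharpness metric on the qCMOS image and reload the SLM2 phase each iteration.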

Fig. 10. Partial phase change of SLM2 in iteration. The final phase is the matched filter that suits the LFM signal with unknown frequency modulation slope, and the frequency modulation slope can be known.

For the DOA of signals, the direction of multichannel signals can be obtained in real time by realizing a one-dimensional Fourier transform through the signal processing module. The channel dimension then shows a certain frequency, which represents the phase change between channels in the optical system. The frequency calculated by the optical system is:

$${f_o} = \frac{{y - {y_0}}}{{d\lambda }}, $$
where y is pixel coordinates in this direction; ${y_0}$ is the central coordinate of the channel dimension of the camera plane. It is worth noting that the frequency of the optical system needs to be converted into the frequency generated by the actual multi-channel phase difference. The calculation method is as follows:
$${f_r} = \frac{{2{f_o}}}{{{f_s}l}}, $$
where ${f_r}$ is the real frequency; ${f_o}$ is the frequency calculated by the optical system; ${f_s}$ is the sampling rate of the SLM, which is the reciprocal of the pixel size ${l_s}$; l is the receiving antenna interval. After obtaining ${f_r}$, the direction of arrival can be calculated according to the signal wavelength and antenna spacing. For the signal dimension, the required signal parameters can be obtained by selecting a processing method mentioned above according to the characteristics of the signal.
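The readout chain of Eqs. (15) and (16) can be sketched as below. The paper leaves some unit conventions implicit, so this is a best-effort reading; the propagation distance and antenna spacing are assumptions, and ${y_0} = 677$ follows Table 3:

```python
# Eqs. (15) and (16): pixel offset along the channel dimension -> optical
# spatial frequency f_o -> real inter-channel frequency f_r. The DOA then
# follows from f_r together with the signal wavelength and antenna spacing.
lam = 532e-9                 # laser wavelength (m)
lc = 4.6e-6                  # camera pixel pitch (m), Section 3
ls = 3.74e-6                 # SLM pixel pitch (m); sampling rate fs = 1/ls
d = 0.0662                   # propagation distance from Eq. (8) (m)
l_ant = 0.015                # antenna spacing (m), assumed

def real_freq(y_px, y0_px=677):          # y0 = 677 as in Table 3
    y = (y_px - y0_px) * lc              # pixel index -> position on camera (m)
    fo = y / (d * lam)                   # Eq. (15)
    fr = 2 * fo * ls / l_ant             # Eq. (16), with fs = 1/ls
    return fr

# Spots farther from the channel-dimension center correspond to steeper
# inter-channel phase gradients, i.e. directions farther off boresight.
assert real_freq(700) > real_freq(680) > 0
```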

3. Experimental result

In order to evaluate the performance of our system, we performed experiments on different electronic reconnaissance signals in the 0–40 GHz range and different processing methods. The signals in the experiment are generated by a signal source (8564E, HP) and an arbitrary waveform generator (M8190A, Keysight). The signals are received by a down converter (SFDC45A, Sinolink) and sent to the optical system.

As shown in Fig. 11, the system was built on an optical experimental platform to quantitatively test its signal processing ability. The amplitude of the signal was loaded on the DMD (HDSLM756D, UPOLabs), and phase-only SLMs (GAEA2, Holoeye) were used to load the phase of the signal and to perform the processing. The system was illuminated by a collimated and expanded 532 nm coherent laser beam, and the optical 4f system consists of two lenses ($\phi = 25.4mm$, Thorlabs) and a spatial filter (KT310, Thorlabs). All processing results were captured by an ORCA-Quest qCMOS camera (C15550-20UP, Hamamatsu).

Fig. 11. Processing system experimental prototype.

In order to verify the instantaneous bandwidth of the system, eight sinusoidal signals with different frequencies in the 0–40 GHz range are sent to the optical system simultaneously, and the processing result is shown in Fig. 12.

As shown in Fig. 12, the spatial coordinates of the eight bright spots are read out through the qCMOS camera. Since the pixel size of the camera is 4.6µm and the pixel size of the SLM is 3.74µm, $n({y_4})$ in formula (9) is expressed as $n({y_4}) = 4.6\ast {y_4}/3.74$ (if a bright spot occupies more than one pixel, the brightest pixel in the middle is taken). With the camera resolution of 2304*4096, formula (9) becomes:

$${f_c} = 20(\frac{{4.6{y_4}}}{{3.74}} + \frac{{{x_4}}}{{4096}})(MHz). $$

Substituting the pixel coordinates of the eight points into the above formula, the frequency of each point is calculated and compared with the set frequency. The results are shown in Table 1.

Fig. 12. Instantaneous full coverage of frequency range.

Fig. 13. Processing result of LFM signal.

Table 1. Processing results in the instantaneous frequency range of 0–40GHz

From the results in Table 1, this system can realize instantaneous processing over the 0–40 GHz frequency range with accurate performance in all frequency bands.

For the LFM signal (Fig. 13), the number of signal pixels m is obtained through the camera and substituted into formula (10). According to this formula, the bandwidth B is:

$$B = \frac{{m \times 20}}{{4096}}(MHz). $$

The processing result is shown in Fig. 13.

As mentioned in Section 2, the matched filtering of the LFM signal is realized by iteration, and the frequency modulation slope is obtained. The result is shown in Fig. 14.

Fig. 14. Focused LFM signal.

In the same way as the calculation method for the point-frequency signal, the center frequency of the focused LFM signal can also be calculated using formula (17), and Table 2 shows the results for different LFM signals:

Table 2. Frequency modulation slope analysis of LFM signals

For the DOA of multichannel signals, 64 channels are used to receive signals from an unknown direction and send them to the optical system. In this experiment, we used an LFM signal as the test signal. Our system performs matched filtering on the LFM signal and judges the direction of the signal at the same time, all in real time. The processing results are shown in Fig. 15.

According to formulas (15) and (16), the direction of arrival can be calculated, and the frequency characteristics of the signal can be obtained from the distribution of the other dimension. The distribution along the long side in Fig. 15 is the matched filtering result of the LFM signal. The frequency modulation slope can be obtained by the method shown in Fig. 10. If further analysis of the frequency-domain waveform is needed, a section can be taken in this direction. On this basis, signals in another seven directions are added: signals from eight different directions are received simultaneously and sent to the optical system for processing (Fig. 16).

Fig. 15. Processing results of multi-channel signals (1 direction).

Fig. 16. Processing results of multi-channel signals (8 directions).

The actual signal directions and calculation results are shown in Table 3.

From the results in Table 3, this system can accurately distinguish signals from different directions and process them in real time.

Table 3. Processing results of multi-channel signals (8 directions, and ${y_0} = 677$)

4. Conclusion

The programmable optical system proposed in this paper realizes real-time extraction of UWB signal parameters, modulation types, and DOA in electronic reconnaissance. It addresses the large processing delay and inflexible processing methods of traditional electronic reconnaissance processing systems, and provides a new idea for UWB signal processing. Through the DMD and SLM cascaded by an optical 4f system, the complete reconstruction of signal amplitude and phase is realized. The programmable high-speed processing module provides pixel-level accurate processing in real time and replaces traditional optical devices. In the experiments, real electromagnetic signals were input into the system, and high-accuracy results were obtained. In this system, all computation is performed optically after the signal is received by the down converter, and digital processing is used only for result analysis. Signals propagate through the system at the speed of light, so its processing speed far exceeds that of traditional electronic reconnaissance processors when handling large amounts of data and UWB signals. For signals with complex modulation types and large bandwidths, the system can realize adaptive focusing. Moreover, the architecture is scalable and the devices are programmable, which can meet the requirements of complex and changing signal processing methods.

However, the optoelectronic devices used in the system, such as the SLM and DMD, are restricted by factors such as phase quantization accuracy, refresh rate, and resolution, which is a major obstacle to further improving signal processing speed and accuracy. Future studies should focus on the development of new optoelectronic materials and devices to realize the optical processing of electronic reconnaissance signals at larger scale, faster speed, and higher accuracy.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. X. Fan, T. Li, and S. Su, “Intrapulse modulation type recognition for pulse compression radar signal,” J. Appl. Remote Sens 11(03), 1 (2017). [CrossRef]  

2. F. Guo, Y. Fan, Y. Zhou, C. Xhou, and Q. Li, Space electronic reconnaissance: localization theories and methods (John Wiley & Sons, 2014).

3. Y. Li, J. Yu, and L. He, “TOA and DOA Estimation for IR-UWB Signals: An Efficient Algorithm by Improved Root-MUSIC,” Wireless Communications and Mobile Computing 2021(2021).

4. L. Shi, Z. Huang, and X. Feng, “Recognizing the formations of CVBG based on shape context using electronic reconnaissance data,” Electron. Lett. 57(14), 562–563 (2021). [CrossRef]  

5. K. J. Bowers, R. A. Lippert, R. O. Dror, and D. E. Shaw, “Improved twiddle access for fast fourier transforms,” IEEE Trans. Signal Process. 58(3), 1122–1130 (2010). [CrossRef]  

6. C. Nader, N. Björsell, and P. Händel, “Unfolding the frequency spectrum for undersampled wideband data,” Signal Processing 91(5), 1347–1350 (2011). [CrossRef]  

7. D. Zeng, H. Cheng, J. Zhu, and B. Tang, “Parameter estimation of LFM signal intercepted by synchronous Nyquist folding receiver,” PIER C 23, 69–81 (2011). [CrossRef]  

8. M. Guo, Y. D. Zhang, and T. Chen, “DOA estimation using compressed sparse array,” IEEE Trans. Signal Process. 66(15), 4133–4146 (2018). [CrossRef]  

9. A. Yz, B. Msa, B. Pwa, B. Qma, X. Li, A. Dl, and B. Bwa, “Space-based correction method for LED array misalignment in Fourier ptychographic microscopy,” Opt. Commun. 514, 128163 (2022). [CrossRef]  

10. Z. Gu, Y. Gao, and X. Liu, “Optronic convolutional neural networks of multi-layers with different functions executed in optics for image classification,” Opt. Express 29(4), 5877–5889 (2021). [CrossRef]  

11. J. W. Goodman, Introduction to Fourier Optics (McGraw-Hill, 1968).

12. J. A. Fox, “Optical information processing: Society of Photo-optical Instrumentation Engineers,” Opt. Laser Technol. 83, 155 (1977).

13. P. M. Duffieux, The Fourier Transform and Its Applications to Optics (Wiley, 1983).

14. W. Liu, Z. Li, H. Cheng, C. Tang, J. Li, S. Zhang, S. Chen, and J. Tian, “Metasurface Enabled Wide-Angle Fourier Lens,” Adv. Mater. 30, 1706368 (2018). [CrossRef]  

15. D. Wang, R. Ouyang, K. Wang, T. Fu, and X. Zhang, “Optical SAR data processing configuration with simultaneous azimuth and range matching filtering,” Appl. Opt. 59(33), 10441 (2020). [CrossRef]  

16. X. Lin, Y. Rivenson, N. T. Yardimci, M. Veli, M. Jarrahi, and A. Ozcan, “All-Optical Machine Learning Using Diffractive Deep Neural Networks,” Science 361(6406), 1004–1008 (2018). [CrossRef]  

17. P. Berger, Y. Attal, M. Schwarz, S. Molin, A. Louchet-Chauvet, T. Chanelière, J. Gouët, D. Dolfi, and L. Morvan, “RF Spectrum Analyzer for Pulsed Signals: Ultra-Wide Instantaneous Bandwidth, High Sensitivity, and High Time-Resolution,” J. Lightwave Technol. 34(20), 4658–4663 (2016). [CrossRef]  

18. W. Wang, L. Lin, X. Yang, J. Cui, C. Du, and X. Luo, “Design of oblate cylindrical perfect lens using coordinate transformation,” Opt. Express 16(11), 8094–8105 (2008). [CrossRef]  

19. J. Chang, S. Vincent, D. Xiong, H. Wolfgang, and W. Gordon, “Hybrid optical-electronic convolutional neural networks with optimized diffractive optics for image classification,” Sci. Rep. 8(1), 12324 (2018). [CrossRef]  

20. J. Bueno, S. Maktoobi, L. Froehly, I. Fischer, and D. Brunner, “Reinforcement Learning in a large scale photonic Recurrent Neural Network,” Optica 5(6), 756 (2018). [CrossRef]  

21. M. Papaioannou, E. Plum, J. O. Valente, E. T. Rogers, and N. I. Zheludev, “Two-dimensional control of light with light on metasurfaces,” Light: Sci. Appl. 5(4), e16070 (2016). [CrossRef]  

22. “GAEA-2 10 MEGAPIXEL PHASE ONLY LCOS-SLM (REFLECTIVE)” (Holoeye, 2016), https://holoeye.com/gaea-4k-phase-only-spatial-light-modulator.

23. “Introduction to digital micromirror device (DMD)” (Texas Instruments Incorporated, 2020), https://www.ti.com/lit/pdf/dlpa008.

24. D. Vettese, “Microdisplays: Liquid crystal on silicon,” Nat. Photonics 4(11), 752–754 (2010). [CrossRef]


