
High-resolution ISAR imaging based on photonic receiving for high-accuracy automatic target recognition

Open Access

Abstract

A scheme of high-resolution inverse synthetic aperture radar (ISAR) imaging based on photonic receiving is demonstrated. In the scheme, linear frequency modulated (LFM) pulse echoes with 8 GHz bandwidth at a center frequency of 36 GHz are directly sampled with a photonic analog-to-digital converter (PADC). The ISAR images of complex targets can be constructed without a detection range swath limitation owing to the fidelity of the sampled results. The images of two pyramids demonstrate a two-dimensional (2D) resolution of 3.3 cm × 1.9 cm. Furthermore, automatic target recognition (ATR) is performed on the high-resolution experimental dataset with the assistance of deep learning. Despite the small training dataset containing only 50 samples for each model, the ATR accuracy for three complex targets is still validated to be 95% on a test dataset with an equal number of samples.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

The widespread use of unmanned aerial vehicles (UAV) makes the supervision system a crucial factor in ensuring social security [1–3]. Detection and recognition are important steps in a supervision system, and inverse synthetic aperture radar (ISAR) is considered an effective method for both due to its ability to image the contour features of moving targets [4,5]. To improve the performance of automatic target recognition (ATR) based on ISAR imaging, higher imaging resolution is needed [6]. The limited bandwidth of electronic devices makes it hard to achieve high imaging resolution in electronic ISAR systems [7], whereas microwave photonic methods have been widely investigated in radar systems because they are promising to break the bandwidth limitation, solve the problem of multiple frequency octaves, and provide higher resolution for target detection compared to electronic radar architectures [8–12]. In previous research, photonic de-chirping reception of linear frequency modulated (LFM) signals has been the mainstream method to expand the receiving bandwidth, since the de-chirping process lessens the pressure on electronic analog-to-digital converters (EADC). However, de-chirping reception imposes a detection range swath limitation in exchange [13]. The extra effort to coarsely estimate the target position before imaging makes ATR based on ISAR inconvenient in practical applications. The photonic analog-to-digital converter (PADC) is promising for adoption in radar receivers to solve the above problems owing to its high performance [14] and potential for integration [15].

In this work, we propose a new ISAR receiver scheme based on the photonic analog-to-digital converter (PADC) to improve the detection and recognition of small targets. We demonstrate that the PADC-based receiver can directly sample an LFM pulse signal with 8 GHz bandwidth at a 36 GHz radiofrequency (RF). The fidelity of the sampling results frees detection and recognition from the range swath limitation. Based on the sampling results, we show that recognizable two-dimensional (2D) ISAR images of small and complex targets can be constructed. The resolution of the ISAR images is validated to be ∼3.3 cm × ∼1.9 cm. To test the performance of automatic target recognition (ATR), we build an ISAR dataset of small and complex targets containing 100 samples for each model and divide it equally into a training dataset and a test dataset. Without requiring hundreds of training samples, the recognition accuracy based on the high-resolution dataset reaches 95% with a convolutional neural network (CNN), even though some of the ISAR images are hard for humans to recognize.

2. Principle and experimental setup

2.1 PADC-based ISAR imaging

Figure 1(a) shows the experimental setup of ISAR imaging based on the channel-interleaved PADC. For the transmitting part, the RF signal is obtained with the arbitrary waveform generator (AWG, KEYSIGHT M8195A) and the microwave signal generator (MSG, KEYSIGHT N5166B). The baseband LFM pulse signal generated by the AWG has a bandwidth from 1 GHz to 9 GHz and a pulse width of 4 µs. The MSG generates the carrier signal with a carrier frequency of 31 GHz. With the electrical mixer (MIXER, Marki MM1-1140H), we obtain the RF LFM pulse signal with a bandwidth from 32 GHz to 40 GHz. Alternatively, the RF signal could be generated with other photonic methods, such as time stretching or adopting the second-order optical sidebands, in practical applications [9,10]. After the signal power is amplified by the amplifier (AMP1, CONNPHY CMP-18G-40G-3021-K), the RF LFM pulse signal is transmitted by the horn antenna (TX, INFOMW LB-28-25-C2-KF). The transmitted LFM pulse signal is reflected by the moving targets, which rotate at around 2 r/s. For the receiving part, another horn antenna (RX, INFOMW LB-28-25-C2-KF) is used to receive the LFM pulse signal echoes. To ensure that weak scattering points are extracted, another amplifier (AMP2, CONNPHY CLN-36G-40G-3030-K) is used before the signal is input to the channel-interleaved PADC.

Fig. 1. (a) Experimental setup of ISAR imaging based on the channel-interleaved PADC. (b) Structure of the CNN for ATR. AWG: arbitrary waveform generator; MSG: microwave signal generator; AMP: amplifier; OFC: optical frequency comb; MZM: Mach-Zehnder modulator; DOMZM: dual-output Mach-Zehnder modulator; TDL: tunable delay line; PD: photodetectors; EADC: electrical analog-to-digital converter; Tx: transmitting horn antenna; Rx: receiving horn antenna.


In the channel-interleaved PADC, an optical frequency comb (OFC) is obtained with the actively mode-locked laser (AMLL, PSL-40-TT), generating the optical sampling pulse sequence and serving as the optical clock. The repetition frequency of the optical sampling pulse sequence is set to 40 GHz, so the PADC has a sampling rate of 40 GSa/s. LFM pulse signal echoes are modulated onto the optical sampling pulse sequence with a Mach-Zehnder modulator (MZM, EOSPACE AX-0MVS-40-PFA-PFA-LV). The receiving bandwidth of the radar receiver is equivalent to the bandwidth of the MZM (40 GHz). To relieve the pressure of broad bandwidth and high sampling rate on the EADCs, a dual-output MZM (DOMZM, EOSPACE, AX-1 × 2-0MxSS-20-SFU-LV) is adopted as the demultiplexing module [16]. The demultiplexing process is driven by a microwave signal at 20 GHz, and it divides the modulated and discretized pulse sequence with a sampling rate of 40 GSa/s into two pulse sequences, each with a sampling rate of 20 GSa/s. This reduction of the pulse sampling rate means that the demands on the sampling rate and bandwidth of the EADCs are lowered. The relative delay of the two discretized pulse sequences is adjusted to 25 ps with a tunable delay line (TDL, General Photonics MDL-002) array, ensuring that the sampling results can be quantized at the same time in the EADCs. After photoelectric conversion by the photodetector (PD, EOT ET-3500F) array with a bandwidth of ∼10 GHz, the electrical sequences are digitized with the EADC (KEYSIGHT DSO-S 804A) array to acquire the quantized result. The bandwidth of the EADCs is set to 10 GHz according to the principle of channel-interleaved PADCs [17], which is only a quarter of the receiving bandwidth of the radar receiver. Therefore, the adoption of the PADC provides the radar receiver with an ultrawide receiving bandwidth and reduces the demands on the electronic devices.
Owing to the ultrawide bandwidth, the wide-band RF LFM pulse signal can be directly quantized, and the fidelity of the signal is preserved after quantization because the effective number of bits (ENOB) of the PADC has been validated to be over 7 bits [14]. It is worth noting that more interleaved channels could be introduced to further reduce the bandwidth and sampling rate of the EADCs [14].
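The channel-interleaving idea above can be illustrated with a toy numerical sketch (our illustration, not the authors' code): a full-rate record is split into two half-rate streams, each within reach of a slower EADC, and re-interleaved in the digital domain.

```python
import numpy as np

# Toy illustration of 2-channel interleaved sampling (assumed test signal).
fs = 40e9                          # aggregate PADC sampling rate, 40 GSa/s
t = np.arange(4096) / fs
x = np.cos(2 * np.pi * 3.6e9 * t)  # stand-in for the modulated echo samples

# Demultiplexing: even/odd samples go to two 20 GSa/s channels; the odd
# channel corresponds to a 25 ps (= 1 / 40 GHz) relative delay.
ch0, ch1 = x[0::2], x[1::2]

# Each slow channel is digitized separately, then re-interleaved digitally
# to recover the full-rate record.
y = np.empty_like(x)
y[0::2], y[1::2] = ch0, ch1
```

Each slow stream carries only half the sample rate, which is why the EADC requirements in the setup drop accordingly.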

Based on the digitized results of the original RF LFM pulse signal, the ISAR imaging process can be performed in the digital domain. A digitized RF LFM pulse echo from a single-point target can be expressed as:

$$\textrm{S}(t) = rect(\frac{{t - R/c - {T_p}/2}}{{{T_p}}})\cos (2\pi {f_c}(t - \frac{R}{c} - \frac{{{T_p}}}{2}) + \frac{B}{{{T_p}}}\pi {(t - \frac{R}{c} - \frac{{{T_p}}}{2})^2}),$$
where fc is the RF center frequency, c is the speed of light, B is the bandwidth of the RF LFM pulse signal, R is the distance between the radar system and the single-point target, and Tp is the pulse duration. The exact value of R is extracted through a pulse compression process. In the analog de-chirping reception for radars [10,12], pulse compression is realized in two steps. First, a reference signal is multiplied with the pulse echo, and the low-frequency part of the product is digitized with a low-speed EADC. The reference signal is usually the transmitted RF LFM pulse signal with a fixed delay to keep the coherence of the radar receiver, which is expressed as the following equation:
$$\mathrm{S^{\prime}}(t) = rect(\frac{{t - {R_{ref}}/c - {T_p}/2}}{{{T_p}}})\cos (2\pi {f_c}(t - \frac{{{R_{ref}}}}{c} - \frac{{{T_p}}}{2}) + \frac{B}{{{T_p}}}\pi {(t - \frac{{{R_{ref}}}}{c} - \frac{{{T_p}}}{2})^2}),$$
where Rref is the equivalent distance resulting from the delay of the optical link. Due to the different rectangular time windows in Eq. (1) and Eq. (2), the detection range swath of R is limited to [Rref, Rref + cTp]. Then, a Fourier transform (FT) is applied to the digitized result of the low-speed EADC. Finally, the radial range resolution is derived as:
$${\delta ^{\prime}_r} = \frac{c}{{2B}}\frac{{{T_p}}}{{({T_p} - (R - {R_{ref}})/c)}}.$$

Therefore, as Eq. (3) shows, the radial resolution deteriorates with a longer detection distance R. However, this deterioration can be eliminated with digital operations. Owing to the fidelity of the digitized results, the coherence of the radar receiver is always preserved and is not limited by the form of the digital reference signal. In our experiment, the reference signal is expressed as:

$$\textrm{R}(t) = \cos (2\pi {f_c}(t - \frac{{{R_{ref}}}}{c} - \frac{{{T_p}}}{2}) + \frac{B}{{{T_p}}}\pi {(t - \frac{{{R_{ref}}}}{c} - \frac{{{T_p}}}{2})^2}).$$

Although the expanded bandwidth of the reference signal might result in spectral aliasing, it does not influence the accuracy of the digital de-chirping process. Moreover, without the limitation of the rectangular time window in Eq. (2), the radial resolution in our scheme is not influenced by the detection distance. After the accumulation of several pulses, the radial resolution and the cross-range resolution are calculated as [18]:

$$\begin{aligned} {\delta _c} &= \frac{c}{{2\theta {f_c}}}\\ {\delta _r} &= \frac{c}{{2B}}, \end{aligned}$$
where θ is the total viewing angle of the moving targets. Equation (5) shows that, by directly sampling the RF signal with broad bandwidth and a high center frequency, the PADC-based receiving method provides high resolution for ISAR imaging.
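The digital de-chirping of Eqs. (2)–(4) can be sketched numerically. The following simulation is our illustration, with assumed parameters (the 100 GSa/s simulation rate, analytic complex signals, and the omission of the rectangular windows are simplifications, not the paper's processing chain): it de-chirps a simulated echo against the digital reference and recovers the target range from the beat frequency.

```python
import numpy as np

# Chirp parameters from the paper; simulation rate and signal model assumed.
c, B, Tp, fc = 3e8, 8e9, 4e-6, 36e9
fs = 100e9                            # simulation sampling rate
k = B / Tp                            # chirp rate
t = np.arange(int(round(Tp * fs))) / fs

def lfm(delay):
    """Analytic LFM pulse with two-way delay `delay` (rect window omitted)."""
    tt = t - delay
    return np.exp(1j * (2 * np.pi * fc * tt + np.pi * k * tt**2))

R_ref, dR = 0.0, 0.5                  # reference range and target offset (m)
echo = lfm(2 * (R_ref + dR) / c)      # echo, cf. Eq. (1)
ref = lfm(2 * R_ref / c)              # digital reference, cf. Eq. (4)

beat = echo * np.conj(ref)            # de-chirp: residual tone at f_b
spec = np.abs(np.fft.fft(beat))
f_b = np.fft.fftfreq(t.size, 1 / fs)[np.argmax(spec)]

dR_est = abs(f_b) * c / (2 * k)       # |f_b| = k * 2*dR/c  =>  range offset
```

With B = 8 GHz, the recovered range agrees with the true offset to within one range cell, c/2B ≈ 1.875 cm, independent of dR, consistent with Eq. (5).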

2.2 Automatic target recognition with a convolutional neural network

With high-resolution range profiles based on photonic methods, high accuracy of ATR on small targets has been achieved [19]. However, the ATR scheme in that work targets static objects. In contrast, ISAR imaging is considered more suitable for ATR of moving targets. In the experiment, a 2D CNN is adopted, which has proved effective in image feature extraction and target recognition [20]. The structure of the CNN is shown in Fig. 1(b). Before the ISAR images are input into the CNN, a standardization process is applied. The size of the standardized ISAR images is 121 × 121 and the pixel values are normalized to the range from 0 to 255. In the CNN model, the features of the ISAR images are extracted with multiple 2D convolution kernels and activated with rectified linear units (ReLU). Then, the input image is down-sampled through max-pooling in the Maxpool operation. The Conv and Maxpool operations are repeated to extract more detailed features. Finally, a fully connected layer turns the multiple small feature maps into a vector representing the probabilities of the recognition results, and the Softmax layer outputs the final result. As the crucial operation, the process of feature extraction with 2D convolution kernels can be expressed as

$$\textrm{z}_{i,j}^k = \sum\limits_{s = 1}^S {\sum\limits_{x = 1}^X {\sum\limits_{y = 1}^Y {w_{x,y,s}^k\textrm{v}_{i + x - 1,j + y - 1}^s} } } ,$$
where v is the input of the 2D convolution, w is the weight of the 2D convolution kernels, z is the output of the convolution, X and Y are the dimensions of the convolution kernels, and k and s are the indices of the output and input feature maps, respectively. The value of vi,j is the intensity of the scattering points on the targets, and the 2D convolution kernel extracts the topological shape consistent with the strong scattering points. The additional interactions of scattering points such as v1,6 and v2,5 make the network model more expressive compared to 1D convolution [21,22].
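Equation (6) amounts to a "valid"-mode 2D cross-correlation summed over the S input feature maps. A minimal NumPy sketch (the map and kernel sizes are illustrative, not taken from the paper):

```python
import numpy as np

def conv2d(v, w):
    """Eq. (6): v has shape (S, H, W); w has shape (S, X, Y).
    Returns one output map z of shape (H - X + 1, W - Y + 1)."""
    S, H, W = v.shape
    _, X, Y = w.shape
    z = np.zeros((H - X + 1, W - Y + 1))
    for i in range(z.shape[0]):
        for j in range(z.shape[1]):
            # z[i, j] = sum_{s,x,y} w[s, x, y] * v[s, i + x, j + y]
            z[i, j] = np.sum(w * v[:, i:i + X, j:j + Y])
    return z

rng = np.random.default_rng(0)
v = rng.standard_normal((3, 8, 8))   # 3 input maps (illustrative sizes)
w = rng.standard_normal((3, 3, 3))   # one 3x3 kernel per input map
z = conv2d(v, w)                     # single 6x6 output feature map
```

In a full CNN layer, one such kernel stack exists per output map k; the kernel's 2D footprint is what captures the cross-row interactions (e.g., v1,6 with v2,5) that a 1D convolution cannot.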

3. Experimental results

3.1 High-resolution ISAR imaging

First, we directly sample the RF LFM pulse signal and its echoes with our PADC-based radar receiver and validate the fidelity of the digitized results. The time-domain diagram in Fig. 2(a) shows the digital echo waveform reflected from two pyramids, which are around 1.6 m away from the radar. The length of the pulse is around 4 µs. The RF LFM pulse signal ranges from 32 GHz to 40 GHz, as shown in Fig. 2(b). Pulse compression is applied to the digital echo waveform, as shown in Fig. 2(c). Two peaks exist at a distance of around 1.6 m, demonstrating a resolution of at least ∼2 cm, consistent with the nominal value of 1.875 cm according to Eq. (5). To further validate that the high resolution is not deteriorated by the distance between the targets and the radar, we simulate an increasing distance by adjusting the triggering delay of the AWG. When the equivalent distance is around 150 m, the receiving result is shown in Figs. 2(d) and (e), and the resolution is still ∼2 cm, as shown in Fig. 2(f).

Fig. 2. Experimental results with the PADC-based receiver. The first row denotes the results with a detection distance of around 1.6 m, including (a) the digitized result in the time domain, (b) the digitized result in the time-frequency domain and (c) the pulse compression result. The second row represents the results with an equivalent detection distance of around 150 m, including (d) the digitized result in the time domain, (e) the digitized result in the time-frequency domain and (f) the pulse compression result.


Second, we validate the experimental resolution of the ISAR imaging, whose theoretical value is 3.14 cm × 1.875 cm according to Eq. (5). To demonstrate the results of two pyramids more clearly, we first image a single pyramid as a comparison. The pyramid is placed on a rotating platform with absorbing materials around it, as shown in Fig. 3(a). The ISAR image of the single pyramid is shown in Fig. 3(b); there is only one peak, at (0.113 m, -0.074 m), within the 1 m × 1 m area. More details are revealed in the three-dimensional diagram in Fig. 3(c). On the basis of the single pyramid, we measure the actual resolution with two pyramids. Two pyramids are placed on the rotating platform with a cross-range distance of ∼3 cm and a radial distance of ∼1.9 cm, as shown in Fig. 3(d). Compared with the result for a single pyramid, there are two peaks within the imaging area, as shown in Fig. 3(e). The coordinates of the two peaks are (-0.033 m, 0.037 m) and (-0.066 m, 0.018 m), respectively. Because there is no overlap between the half-height points of the two peaks, the resolution of our PADC-based ISAR is demonstrated to be at least ∼3.3 cm × ∼1.9 cm according to the definition of radar imaging resolution. The experimental result is very close to the theoretical resolution.
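The 2D image formation used above can be sketched as conventional range-Doppler processing: after de-chirping, a fast-time FFT maps the beat frequency to range, and a slow-time FFT over pulses maps the rotation-induced Doppler to cross-range. The simulation below is our illustration; the pulse repetition frequency, beat sampling rate, and scatterer position are assumed values, not taken from the paper.

```python
import numpy as np

# Paper's chirp parameters; PRF and de-chirped sampling rate are assumed.
c, B, Tp, fc = 3e8, 8e9, 4e-6, 36e9
k = B / Tp
fs, n_pulse, prf = 25e6, 64, 10e3      # beat sampling rate, pulses, PRF
omega = 2 * np.pi * 2                  # rotation rate, ~2 r/s
t = np.arange(int(round(Tp * fs))) / fs

x0, y0 = 0.10, 0.05                    # scatterer position on turntable (m)
data = np.zeros((n_pulse, t.size), dtype=complex)
for m in range(n_pulse):
    th = omega * m / prf                    # rotation angle at pulse m
    r = x0 * np.cos(th) - y0 * np.sin(th)   # radial coordinate of scatterer
    f_b = k * 2 * r / c                     # de-chirped beat frequency (range)
    phase = 4 * np.pi * fc * r / c          # pulse-to-pulse phase (Doppler)
    data[m] = np.exp(1j * (2 * np.pi * f_b * t + phase))

# Fast-time FFT -> range axis; slow-time FFT -> cross-range axis.
img = np.abs(np.fft.fftshift(np.fft.fft2(data)))
```

The single scatterer focuses to one dominant peak in the 2D image; the cross-range scaling follows δc = c/(2θfc) from Eq. (5), with θ the total rotation angle spanned by the pulses.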

Fig. 3. Photographs in the experiment and the ISAR images of a single pyramid and two pyramids. The first row denotes the results of a single pyramid, including (a) the photograph, (b) the ISAR image and (c) the ISAR image in 3D perspective. The second row represents the results of two pyramids, including (d) the photograph, (e) the ISAR image and (f) the ISAR image in 3D perspective.


3.2 High-accuracy ATR

To evaluate the performance of ATR based on our PADC-based ISAR, we build a dataset of ISAR images. Three small and complex models are used as the targets to be classified: a fixed-wing aircraft model (Plane), a four-axis drone model (UAV) and a 1:144 Y-20 model (Y20), as shown in Figs. 4(a)-(c). In the process of building the dataset, the randomness and diversity of the moving postures are ensured. As a result, we generate 100 images for each model. Due to the high imaging resolution, recognizable ISAR images are obtained, as shown in Figs. 4(d)-(f). However, the motion of the models can degrade the imaging performance, especially when the models are in improper moving postures; the resulting ISAR images can be unrecognizable, as shown in Figs. 4(g)-(i).

Fig. 4. (a)(b)(c) Three models to be recognized, in order of Plane, UAV, Y20. (d)(e)(f) ISAR images of the three models with proper moving postures. (g)(h)(i) ISAR images of the three models with improper moving postures.


Before the images are fed into the CNN, we apply several preprocessing steps. First, we make a linear-to-logarithmic conversion so that the pixel values are expressed in decibels (dB). Then, we apply a threshold decision with the threshold set to 45 dB, which is 15 dB lower than the peak value. Finally, the pixel values are normalized to the interval from 0 to 255. To validate the performance of the CNN with a small training dataset, only 50 samples per model are used in the training process. The initial learning rate of our 2D CNN model is set to 0.0001, and an Adam optimizer [23] is used to adjust the learning rate automatically. We repeat the training and testing processes 20 times using K-fold cross-validation. According to the median result, after 200 training epochs the CNN converges (Fig. 5(b)) and the ATR accuracy reaches 95% (Fig. 5(a)).
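The preprocessing chain described above (linear-to-dB conversion, thresholding at 15 dB below the peak, normalization to [0, 255]) can be sketched as follows; the input image here is a synthetic stand-in, not experimental data.

```python
import numpy as np

# Synthetic stand-in for one linear-scale ISAR image (121 x 121 pixels).
rng = np.random.default_rng(1)
img = rng.random((121, 121)) * 1e6

img_db = 10 * np.log10(img + 1e-12)         # linear-to-logarithmic (dB)
threshold = img_db.max() - 15               # 15 dB below the peak value
img_db = np.clip(img_db, threshold, None)   # threshold decision

# Normalize the pixel values to the interval [0, 255].
out = (img_db - img_db.min()) / (img_db.max() - img_db.min()) * 255
```

The clipping floors weak clutter at the threshold so that only the strong scattering structure survives normalization, which matches the fixed dynamic range the CNN expects.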

Fig. 5. (a) Accuracy of different ATR schemes in contrast experiments with small datasets. (b) Loss curves of different ATR schemes in contrast experiments with small datasets. RP: 1D range profile.


3.3 Discussion

The ISAR imaging based on photonic receiving solves several problems for ATR in realistic applications. First, the high imaging resolution helps improve the accuracy of ATR. In our contrast experiments, we adopt an LFM signal with 4 GHz bandwidth in the same ATR processes; the final ATR accuracy is validated to be only 89%, as shown in Fig. 5(a). Second, it alleviates the difficulty of obtaining a huge training dataset in realistic applications. In our previous work, the ATR accuracy based on range profiles (RP) with hundreds of training samples reached about 90% [19]. However, as compared in Fig. 5(a), the performance of ATR based on range profiles is limited to about 75% when the training dataset (50 samples) is the same as for the ISAR imaging. In contrast, since the migration through resolution cells (MTRC) is negligible in our ISAR configuration, the ISAR imaging can be viewed as a feature pre-extraction of the phase information from the same range profiles. This feature pre-extraction helps solve the over-fitting problem in deep learning with small datasets [24].

The feature differences in the network output are enlarged after the pre-extraction, as shown in Fig. 6(a) and Fig. 6(b), improving the accuracy of ATR. However, there is still an inconvenience for our method in realistic applications. Although the high sampling rate of the PADC enables flexible signal processing methods in the digital domain, it increases the amount of data, so data processing and storage remain challenging. Therefore, it is necessary to study pipeline algorithms and hardware implementations matched to the parallel configuration of the PADC in the near future.

Fig. 6. (a) t-SNE feature maps of different models in the ATR based on ISAR images. (b) t-SNE feature maps of different models in the ATR based on range profiles.


4. Conclusion

We have demonstrated high-resolution ISAR imaging based on a photonic receiving method. Owing to the broad receiving bandwidth and the ability of direct RF sampling, the resolution is validated to be as high as ∼3.3 cm × ∼1.9 cm and is not deteriorated by the detection range swath limitation. Furthermore, we have established a small ISAR imaging dataset of moving, complex models. Based on the dataset, we have validated the ATR performance with a CNN. Due to the feature pre-extraction in ISAR imaging, the ATR with ISAR images achieves higher recognition accuracy and faster convergence than the ATR based on range profiles when only 50 training samples per model are available. The final accuracy is verified to be at least 95% on a test dataset containing an equal number of samples. With the ability to detect and recognize small objects, our method is also a promising solution for airport bird detection and foreign object debris (FOD) detection, in addition to UAV supervision tasks.

Funding

National Key Research and Development Program of China (2019YFB2203700); National Natural Science Foundation of China (61822508).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but can be obtained from the authors upon reasonable request.

References

1. S. Hayat, E. Yanmaz, and R. Muzaffar, “Survey on Unmanned Aerial Vehicle Networks for Civil Applications: A Communications Viewpoint,” IEEE Commun. Surv. Tutorials 18(4), 2624–2661 (2016). [CrossRef]  

2. H. Menouar, I. Guvenc, K. Akkaya, A. S. Uluagac, A. Kadri, and A. Tuncer, “UAV-Enabled Intelligent Transportation Systems for the Smart City: Applications and Challenges,” IEEE Commun. Mag. 55(3), 22–28 (2017). [CrossRef]  

3. M. Lort, A. Aguasca, C. López-Martínez, and T. M. Marín, “Initial Evaluation of SAR Capabilities in UAV Multicopter Platforms,” IEEE J. Sel. Top. Appl. Earth Observations Remote Sensing 11(1), 127–140 (2018). [CrossRef]  

4. F. Berizzi, E. D. Mese, M. Diani, and M. Martorella, “High-resolution ISAR imaging of maneuvering targets by means of the range instantaneous Doppler technique: modeling and performance analysis,” IEEE Trans. on Image Process. 10(12), 1880–1890 (2001). [CrossRef]  

5. G. Li, H. Zhang, X. Wang, and X.-G. Xia, “ISAR 2-D Imaging of Uniformly Rotating Targets via Matching Pursuit,” IEEE Trans. Aerosp. Electron. Syst. 48(2), 1838–1846 (2012). [CrossRef]  

6. D. Pastina, F. Santi, and M. Bucciarelli, “Multi-angle distributed ISAR with stepped-frequency waveforms for surveillance and recognition,” in Proceedings of IEEE CIE International Conference on Radar (IEEE, 2011), pp. 528–532.

7. A. Khilo, S. J. Spector, M. E. Grein, A. H. Nejadmalayeri, C. W. Holzwarth, M. Y. Sander, M. S. Dahlem, M. Y. Peng, M. W. Geis, N. A. DiLello, J. U. Yoon, A. Motamedi, J. S. Orcutt, J. P. Wang, C. M. Sorace-Agaskar, M. A. Popović, J. Sun, G. Zhou, H. Byun, J. Chen, J. L. Hoyt, H. I. Smith, R. J. Ram, M. Perrott, T. M. Lyszczarz, E. P. Ippen, and F. X. Kärtner, “Photonic ADC: overcoming the bottleneck of electronic jitter,” Opt. Express 20(4), 4454–4469 (2012). [CrossRef]  

8. P. Ghelfi, F. Laghezza, F. Scotti, G. Serafino, A. Capria, S. Pinna, D. Onori, C. Porzi, M. Scaffardi, A. Malacarne, V. Vercesi, E. Lazzeri, F. Berizzi, and A. Bogoni, “A fully photonics-based coherent radar system,” Nature 507(7492), 341–345 (2014). [CrossRef]  

9. W. Zou, H. Zhang, X. Long, S. Zhang, Y. Cui, and J. Chen, “All optical central-frequency-programmable and bandwidth tailorable radar,” Sci. Rep. 6(1), 19786 (2016). [CrossRef]  

10. F. Zhang, Q. Guo, and S. Pan, “Photonics-based real-time ultra-high-range-resolution radar with broadband signal generation and processing,” Sci. Rep. 7(1), 13848 (2017). [CrossRef]  

11. S. Peng, S. Li, X. Xue, X. Xiao, D. Wu, X. Zheng, and B. Zhou, “High-resolution W-band ISAR imaging system utilizing a logic-operation-based photonic digital-to-analog converter,” Opt. Express 26(2), 1978–1987 (2018). [CrossRef]  

12. J. Dong, F. Zhang, Z. Jiao, Q. Sun, and W. Li, “Microwave photonic radar with a fiber-distributed antenna array for three-dimensional imaging,” Opt. Express 28(13), 19113–19125 (2020). [CrossRef]  

13. Z. Mo, C. Liu, J. Yang, Y. Sun, R. Wang, J. Dong, and W. Li, “Microwave photonic de-chirp receiver for breaking the detection range swath limitation,” Opt. Express 29(7), 11314–11327 (2021). [CrossRef]  

14. S. Xu, X. Zou, B. Ma, J. Chen, L. Yu, and W. Zou, “Deep-learning-powered photonic analog-to-digital conversion,” Light: Sci. Appl. 8(1), 66 (2019). [CrossRef]  

15. A. H. Nejadmalayeri, M. Grein, A. Khilo, J. P. Wang, M. Y. Sander, M. Peng, C. M. Sorace, E. P. Ippen, and F. X. Kärtner, “A 16-fs aperture-jitter photonic ADC: 7.0 ENOB at 40 GHz,” in CLEO:2011 - Laser Applications to Photonic Applications, OSA Technical Digest (CD) (Optica Publishing Group, 2011), paper CThI6.

16. N. Qian, L. Zhang, J. Chen, and W. Zou, “Characterization of the frequency response of channel-interleaved photonic ADCs based on the optical time-division demultiplexer,” IEEE Photonics J. 13(5), 1–9 (2021). [CrossRef]  

17. G. Yang, W. Zou, L. Yu, N. Qian, and J. Chen, “Investigation of electronic aperture jitter effect in channel-interleaved photonic analog-to-digital converter,” Opt. Express 27(6), 9205–9214 (2019). [CrossRef]  

18. V. C. Chen and M. Martorella, Inverse Synthetic Aperture Radar Imaging: Principles, Algorithms and Applications (SciTech, 2014), chap. 2.

19. J. Wan, S. Xu, and W. Zou, “High-accuracy automatic target recognition scheme based on a photonic analog-to-digital converter and a convolutional neural network,” Opt. Lett. 45(24), 6855–6858 (2020). [CrossRef]  

20. K. He, X. Zhang, S. Ren, and J. Sun, “Deep Residual Learning for Image Recognition,” IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016), pp. 770–778.

21. T. Dettmers, P. Minervini, P. Stenetorp, and S. Riedel, “Convolutional 2D Knowledge Graph Embeddings,” AAAI Conference on Artificial Intelligence (2017), pp. 1811–1818.

22. S. Ha, J. Yun, and S. Choi, “Multi-modal Convolutional Neural Networks for Activity Recognition,” in Proceedings of IEEE International Conference on Systems, Man, and Cybernetics (IEEE, 2015), pp. 3017–3022. [CrossRef]  

23. D. Kingma and J. Ba, “Adam: A Method for Stochastic Optimization,” https://arxiv.org/abs/1412.6980.

24. P. Rajpurkar, A. Park, J. Irvin, C. Chute, M. Bereket, D. Mastrodicasa, C. P. Langlotz, M. P. Langren, A. Y. Ng, and B. N. Pate, “AppendiXNet: Deep Learning for Diagnosis of Appendicitis from A Small Dataset of CT Exams Using Video Pretraining,” Sci. Rep. 10(1), 3958 (2020). [CrossRef]  
