Optica Publishing Group

Complex image reconstruction and synthetic aperture laser imaging for moving targets based on direct-detection detector array

Open Access

Abstract

According to the principle of synthetic aperture ladar, high-resolution imaging can be achieved as long as relative motion exists between the target and the ladar. The imaging system features a large field of view, the use of narrow-band laser signals, and easy engineering implementation. A complex image reconstruction method and a synthetic aperture laser imaging method for moving targets, based on a spatial light modulator and a direct-detection detector array, are proposed. Far-field simulations and near-field experiments were carried out for the stop-and-go target and the continuous-moving target. It is verified that the complex image reconstruction method can equivalently realize coherent detection of the target and reflect its phase information corresponding to the laser wavelength. The reconstructed multi-frame complex images can be applied to synthetic aperture laser imaging, which forms high-resolution images of moving targets under far-/near-field conditions.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Similar to optical imaging systems, the resolution of laser imaging is limited by the diffraction limit of the system, and systems with large apertures enable high-resolution imaging [1]. Due to the difficulties in manufacturing and platform-loading of large-aperture systems, synthetic aperture ladar (SAL) / inverse synthetic aperture ladar (ISAL) [2,3], which are based on a single-element coherent detector, and optical synthetic aperture (OSA) [4–6] are generally applied to improve imaging resolution.

With the development of laser detector technology, the scale of the coherent detector array (CDA) has been increasing [7–9]. If the CDA is set in the focal plane of the ladar to receive the complex image, the instantaneous field of view of the imaging system can be expanded with the resolution guaranteed, and the detection performance of the system can be improved. Combining the Fourier transform of the lens and the CDA, pitch-azimuth images can be generated, which are similar to conventional optical images. Compared with SAL/ISAL, imaging based on the CDA dramatically simplifies the imaging steps and has better noise immunity. Moreover, it is unnecessary for the imaging system to transmit or process broadband signals, which allows the system to be implemented in engineering easily.

However, the scale of the CDA is limited by the large data volume and transmission difficulties, while large-scale direct-detection detector arrays (DDAs), represented by charge-coupled device (CCD) and complementary metal-oxide-semiconductor (CMOS) cameras, are already widely applied. Research on synthetic aperture digital holography [10] for high-resolution imaging has been conducted. For example, Pan Feng et al. combined Fresnel digital holography with incoherent processing to improve the resolution of microscopic images in 2009 [11]; A. Pelagotti et al. (2012) [12] and Samuel T. Thurman et al. (2015) [13] both used lensless holography and enlarged the range of the hologram to form a high-resolution image, by moving the DDA and by illuminating the target with laser transmitting signals from multiple angles, respectively.

Since complex image reconstruction methods such as Fresnel holography are limited by the target distance, a spatial light modulator (SLM) is applied to complex image reconstruction based on the principle of phase-shifted digital holography, and an approach to estimating the laser echo signal intensity image of a continuous-moving target is analyzed. Based on the concepts of matched filtering and coherent imaging, we introduce the method of array-detectors-based synthetic aperture laser (ASAL) imaging, and provide far-field target simulations as well as experimental results for near-field stop-and-go targets and continuous-moving targets.

2. Complex image reconstruction and ASAL imaging

The diagram of the complex image reconstruction and ASAL imaging system based on the SLM and DDA is shown in Fig. 1.

Fig. 1. The diagram of the system

The beam-expanded laser transmitting signal illuminates the moving target, and the scattered signal forms the laser echo signal. After being processed by the 2-D Fourier transform of the lens, the laser echo signal and the laser local oscillator (LLO), which is modulated by the SLM, mix in space and constitute a hologram. The hologram is received by the DDA and is applied in complex image reconstruction and ASAL imaging. The LLO is processed by the beam expander, the SLM, the polarizer, and the lens sequentially. The beam expander introduces the second-order phase to the LLO, which enlarges the spot on the SLM and reduces the focusing effect of the lens at the same time.

Based on spatial optical path mixing, the complex image reconstruction method in Section 2.1.1 uses the non-polarizing cube beam splitter (NPBS) and the SLM to make the DDA equivalently realize the function of a CDA, which lays the foundation for ASAL imaging.

2.1 Complex image reconstruction method based on the SLM and DDA

2.1.1 Complex image reconstruction

To reduce the impact of the target distance, a complex image reconstruction method based on phase-shifting digital holography [14,15], combining the SLM, the LLO, and the DDA, is introduced in this section.

When the target moves continuously, the motion can be decomposed into a transverse motion parallel to the plane of the imaging system and a radial motion, with speeds denoted as ${v_c}$ and ${v_r}$. If the integration duration of the DDA is T, the transverse motion distance of the target within T must not exceed the field of view corresponding to one detector to avoid blurring the image, i.e.,

$${v_c}T \le \frac{a}{f}R$$
where a is the size of each detector in the DDA, f is the focal length of the lens, and R is the initial distance between the geometric center of the target and the imaging system. ${v_r}$ and the instability of the target surface cause the Doppler frequency ${f_d}({x,y} )$, where $x - y$ is the image domain. Taking the LLO with 0° phase shift as an example, because of the integral processing of the DDA, the expression of the hologram is
$$\scalebox{0.88}{$\begin{array}{l} {I_0}({x,y,n} )\\ = \int\limits_{ - T/2}^{T/2} {\left\{ {I_{loc0}^2({x,y,n,t} )+ I_{img0}^2({x,y,n,t} )+ 2\sqrt {{I_{loc0}}({x,y,n,t} )} \sqrt {{I_{img0}}({x,y,n,t} )} \cos [{2\pi {f_d}({x,y} )t + \varDelta {\varphi_{img}}({x,y,n} )} ]} \right\}} dt \end{array}$}$$
where $n$ is the image frame number, and t is the time variable of the laser echo signal and the LLO within T. If the magnitudes and phases of the LLO and the laser echo signal are independent of t within T, Eq. (2) can be simplified as
$$\scalebox{0.88}{$\begin{array}{l} {I_0}({x,y,n} )\\ = T \cdot [{I_{loc0}^2({x,y,n} )+ I_{img0}^2({x,y,n} )} ]+ 2\sqrt {{I_{loc0}}({x,y,n} )} \sqrt {{I_{img0}}({x,y,n} )} \int\limits_{ - T/2}^{T/2} {\cos [{2\pi {f_d}({x,y} )t + \varDelta {\varphi_{img}}({x,y,n} )} ]dt} \\ = T \cdot [{I_{loc0}^2({x,y,n} )+ I_{img0}^2({x,y,n} )} ]+ \frac{{2\sqrt {{I_{loc0}}({x,y,n} )} \sqrt {{I_{img0}}({x,y,n} )} }}{{\pi {f_d}}}\sin [{\pi {f_d}({x,y} )T} ]\cos [{\varDelta {\varphi_{img}}({x,y,n} )} ]\end{array}$}$$

Under the same premise, the expression of the hologram with the 90°-phase-shift LLO is

$$\begin{array}{l} {I_{90}}({x,y,n} )\\ = T \cdot [{I_{loc90}^2({x,y,n} )+ I_{img90}^2({x,y,n} )} ]+ \frac{{2\sqrt {{I_{loc90}}({x,y,n} )} \sqrt {{I_{img90}}({x,y,n} )} }}{{\pi {f_d}}}\sin [{\pi {f_d}({x,y} )T} ]\sin [{\varDelta {\varphi_{img}}({x,y,n} )} ]\end{array}$$

Combining Eqs. (3) and (4), under the condition that ${f_d}({x,y} )T$ is not an integer, it can be found that the Doppler frequency and the integral processing have little impact on the phase of the holograms. Reducing the integration time of the DDA helps to alleviate complex image blurring caused by the continuous motion of the target.
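The blur constraint of Eq. (1) and the fringe attenuation caused by the Doppler frequency and the DDA integration can be checked numerically. A minimal sketch (the function names and the numeric values are illustrative, not from the paper):

```python
import numpy as np

def max_transverse_speed(a, f, R, T):
    """Eq. (1) rearranged: the transverse speed v_c must satisfy
    v_c <= a*R/(f*T) so the target stays within one detector's
    field of view during the integration time T."""
    return a * R / (f * T)

def fringe_attenuation(f_d, T):
    """Relative amplitude of the interference term after integrating over T,
    compared with a stationary target: |sinc(f_d*T)|, where
    np.sinc(x) = sin(pi*x)/(pi*x). It vanishes when f_d*T is an integer."""
    return np.abs(np.sinc(f_d * T))

# Illustrative numbers: 10 um pixels, 100 mm focal length, 10 m range, 1 ms
print(max_transverse_speed(10e-6, 0.1, 10.0, 1e-3))  # ~1.0 m/s
print(fringe_attenuation(100.0, 1e-3))               # ~0.98, fringes kept
print(fringe_attenuation(100.0, 1e-2))               # ~0, fringes lost
```

Shortening T (or lowering the Doppler frequency) keeps the fringe contrast high, consistent with the analysis above.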

To simplify the analysis, the effect of the Doppler frequency and the integral processing on the holograms is not considered below. The expressions of the holograms when the LLO is phase-modulated at 0° and 90°, respectively, are

$${I_0}({x,y,n} )= I_{loc0}^2({x,y,n} )+ I_{img0}^2({x,y,n} )+ 2\sqrt {{I_{loc0}}({x,y,n} )} \sqrt {{I_{img0}}({x,y,n} )} \cos [{\varDelta {\varphi_{img}}({x,y,n} )} ]$$
$${I_{90}}({x,y,n} )= I_{loc90}^2({x,y,n} )+ I_{img90}^2({x,y,n} )+ 2\sqrt {{I_{loc90}}({x,y,n} )} \sqrt {{I_{img90}}({x,y,n} )} \sin [{\varDelta {\varphi_{img}}({x,y,n} )} ]$$

To reduce the number of image acquisitions by the DDA and make it possible to form complex images of the continuous-moving target, the phase of the LLO is modulated by the SLM, producing 0° and 90° phase shifts alternating at 1-pixel intervals along the row or column direction of the DDA. As shown in Fig. 2, taking the phase modulation of the LLO in the column direction of the DDA as an example, the hologram $I({x,y,n} )$, the intensity image of the LLO ${I_{loc}}({x,y,n} )$, and the intensity image of the laser echo signal ${I_{img}}({x,y,n} )$ acquired by the DDA can each be split at 1-line intervals to form ${I_0}({x,y,n} )$ and ${I_{90}}({x,y,n} )$, ${I_{loc0}}({x,y,n} )$ and ${I_{loc90}}({x,y,n} )$, and ${I_{img0}}({x,y,n} )$ and ${I_{img90}}({x,y,n} )$, corresponding to the LLO with 0° and 90° phase shifts, respectively. Under the condition that the intensity image of the laser echo signal is continuous in the direction of the LLO phase modulation, ${I_{img0}}({x,y,n} )\approx {I_{img90}}({x,y,n} )$. If both intensity images are denoted as ${I_{img}}^{\prime}({x,y,n} )= A({x,y,n} )$, the complex image can be reconstructed as

$$\begin{aligned} U({x,y,n} )&= A({x,y,n} )\cdot \exp \{{j\varDelta {\varphi_{img}}({x,y,n} )} \}\\ &= \frac{{{I_0}({x,y,n} )- {I_{loc0}}({x,y,n} )- {I_{img0}}({x,y,n} )}}{{2\sqrt {{I_{loc0}}({x,y,n} )} }} + j\frac{{{I_{90}}({x,y,n} )- {I_{loc90}}({x,y,n} )- {I_{img90}}({x,y,n} )}}{{2\sqrt {{I_{loc90}}({x,y,n} )} }}\\& \approx \frac{{{I_0}({x,y,n} )- {I_{loc0}}({x,y,n} )- {I_{img}}^{\prime}({x,y,n} )}}{{2\sqrt {{I_{loc0}}({x,y,n} )} }} + j\frac{{{I_{90}}({x,y,n} )- {I_{loc90}}({x,y,n} )- {I_{img}}^{\prime}({x,y,n} )}}{{2\sqrt {{I_{loc90}}({x,y,n} )} }} \end{aligned}$$
$$\varDelta {\varphi _{img}}({x,y,n} )= {\varphi _{img}}({x,y,n} )- {\varphi _{loc}}({x,y,n} )$$
where $A({x,y,n} )$ and $\varDelta {\varphi _{img}}({x,y,n} )$ are the magnitude of the complex image and the phase difference between the complex image and the LLO, respectively. ${\varphi _{loc}}({x,y,n} )$ is the initial phase of the LLO, and ${\varphi _{img}}({x,y,n} )$ is the phase generated by the moving target distance, which reflects the tiny movement of the target at the laser-wavelength level.

Fig. 2. Schematic diagram of the complex image reconstruction method

The complex image reconstruction method in this section introduces the SLM to reduce the number of holograms the DDA must collect and the acquisition time, which makes it possible to generate complex images of the continuous-moving target.
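The row-interleaved splitting and the reconstruction of Eq. (7) can be sketched in a few lines of NumPy. This is an illustrative stand-in (the array names and the assignment of 0° shifts to even rows are assumptions), not the experimental processing chain:

```python
import numpy as np

def reconstruct_complex_image(I, I_loc, I_img):
    """Sketch of Eq. (7). The SLM shifts the LLO phase by 0 deg on even rows
    and 90 deg on odd rows (assumed ordering), so each DDA frame is split at
    1-line intervals into the two phase-shifted holograms."""
    I0, I90 = I[0::2, :], I[1::2, :]
    Iloc0, Iloc90 = I_loc[0::2, :], I_loc[1::2, :]
    # The echo intensity is assumed continuous along the modulation direction,
    # so a single estimate I_img' serves both phase channels.
    Iimg = I_img[0::2, :]
    re = (I0 - Iloc0 - Iimg) / (2.0 * np.sqrt(Iloc0))
    im = (I90 - Iloc90 - Iimg) / (2.0 * np.sqrt(Iloc90))
    return re + 1j * im
```

Feeding in synthetic holograms built from a known magnitude and phase difference returns $A\exp ( j\varDelta \varphi )$, mirroring the derivation above.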

On the foundation of the complex image, the spatial sampling signal (SpSaS) is defined below. As shown in Fig. 3(a), from a physical point of view, the spatial sampling domain ${x_{lens}} - {y_{lens}}$ lies in front of the lens, and the distance between the image domain $x - y$ and the lens equals the focal length f of the lens. The complex image is formed when the SpSaS is processed by the 2-D Fourier transform of the lens. The value of the SpSaS is the same as that of the complex image spectrum, whose unit is wave number; since the spatial sampling domain is defined on the lens surface, the unit of the SpSaS can also be converted to a unit of length, such as mm. Since the units of the spatial sampling domain ${x_{lens}} - {y_{lens}}$ and the image domain $x - y$ are unified, the following sections refer to ${x_{lens}}$ and $x$ as the X-axis direction, and to ${y_{lens}}$ and $y$ as the Y-axis direction, for ease of presentation.
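Numerically, the lens transform linking the two domains behaves like a 2-D Fourier transform pair; a minimal NumPy stand-in (scaling and coordinate factors omitted, random data used only as a placeholder):

```python
import numpy as np

# The complex image (x-y domain) and the SpSaS (x_lens-y_lens domain on the
# lens surface) form a 2-D Fourier transform pair through the lens.
rng = np.random.default_rng(0)
complex_image = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))

spsas = np.fft.ifft2(complex_image)   # image domain -> spatial sampling domain
recovered = np.fft.fft2(spsas)        # the lens re-forms the complex image
assert np.allclose(recovered, complex_image)
```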

Fig. 3. The image domain / spatial sampling domain conversion and geometric parameters of the imaging system. (a) Schematic conversion of the image domain and the spatial sampling domain. (b) Schematic of the imaging system geometric parameters.

According to Fig. 3(b), the expression of ${\varphi _{img}}({x,y,n} )$ is

$$\scalebox{0.92}{$\displaystyle{\varphi _{img}}({x,y,n} )= angle\left\{ {{\cal F}\left\{ {P({{x_{lens}},{y_{lens}}} )\cdot \sum {{A_i}^{\prime}({{x_{lens}},{y_{lens}}} )\exp \left\{ { - j\frac{{2\pi }}{\lambda }[{R_t^i(n )+ R_r^i({{x_{lens}},{y_{lens}},n} )} ]} \right\}} } \right\}} \right\}$}$$
$$R_t^i(n )= \sqrt {{{({x_{lens}^t - x_{lens}^{i,n}} )}^2} + {{({y_{lens}^t - y_{lens}^{i,n}} )}^2} + {{({z_{lens}^t - z_{lens}^{i,n}} )}^2}}$$
$$R_r^i({{x_{lens}},{y_{lens}},n} )= \sqrt {{{({{x_{lens}} - x_{lens}^{i,n}} )}^2} + {{({{y_{lens}} - y_{lens}^{i,n}} )}^2} + {{({{z_{lens}} - z_{lens}^{i,n}} )}^2}}$$
where $\lambda$ is the laser wavelength, $angle\{{\cdot} \}$ is the phase extraction operation, ${\cal F}\{{\cdot} \}$ is the 2-D Fourier transform, $P({{x_{lens}},{y_{lens}}} )$ represents the aperture of the imaging system, and $({x_{lens}^t,y_{lens}^t,z_{lens}^t} )$ is the coordinate of the laser transmitting signal collimator. Since the complex moving target can be decomposed into a collection of point targets $\sum {({x_{lens}^{i,n},y_{lens}^{i,n},z_{lens}^{i,n}} )}$, we let $({x_{lens}^{i,n},y_{lens}^{i,n},z_{lens}^{i,n}} )$ be the coordinate of the $i$-th point target, and $R_t^i(n )$ and $R_r^i({{x_{lens}},{y_{lens}},n} )$ be the propagation distances of the laser transmitting signal and the laser echo signal when the $n$-th hologram is acquired. The above coordinates are defined in the spatial sampling coordinate system ${x_{lens}} - {y_{lens}} - {z_{lens}}$.

2.1.2 Intensity image estimation of the laser echo signal for the continuous-moving target

When the target moves continuously, the laser transmitting signal and the LLO are invariable, so the reconstruction of multi-frame complex images can share the same intensity image of the LLO. However, the motion of the target makes the hologram and the intensity image of the laser echo signal ${I_{img}}({x,y,n} )$ time-varying, and it is difficult for the DDA to collect the two images at the same time.

When the target surface is rough, it is possible to estimate ${I_{img}}({x,y,n} )$ by constructing a filter based on the holograms and the LLO intensity images. The estimation steps are as follows.

  • Subtract the corresponding LLO intensity images ${I_{loc0}}({x,y,n} )$ and ${I_{loc90}}({x,y,n} )$ from the holograms ${I_0}({x,y,n} )$ and ${I_{90}}({x,y,n} )$, respectively, where the amplitudes of ${I_{loc0}}({x,y,n} )$ and ${I_{loc90}}({x,y,n} )$ are similar
    $${I_0}({x,y,n} )- {I_{loc0}}({x,y,n} )= {I_{img}}^{\prime}({x,y,n} )+ 2\sqrt {{I_{img}}^{\prime}({x,y,n} )} \sqrt {{I_{loc0}}({x,y,n} )} \cos [{\varDelta {\varphi_{img}}({x,y,n} )} ]$$
    $${I_{90}}({x,y,n} )- {I_{loc90}}({x,y,n} )= {I_{img}}^{\prime}({x,y,n} )+ 2\sqrt {{I_{img}}^{\prime}({x,y,n} )} \sqrt {{I_{loc0}}({x,y,n} )} \sin [{\varDelta {\varphi_{img}}({x,y,n} )} ]$$
  • Subtract $[{{I_{90}}({x,y,n} )- {I_{loc90}}({x,y,n} )} ]$ from $[{{I_0}({x,y,n} )- {I_{loc0}}({x,y,n} )} ]$ to form the subtractive term
    $$\begin{array}{l} [{{I_0}({x,y,n} )- {I_{loc0}}({x,y,n} )} ]- [{{I_{90}}({x,y,n} )- {I_{loc90}}({x,y,n} )} ]\\ = 2\sqrt {{I_{img}}^{\prime}({x,y,n} )} \sqrt {{I_{loc0}}({x,y,n} )} \{{\cos [{\varDelta {\varphi_{img}}({x,y,n} )} ]- \sin [{\varDelta {\varphi_{img}}({x,y,n} )} ]} \}\end{array}$$

As can be seen in Eq. (14), ${I_{img}}({x,y,n} )$ is removed in the subtractive term.

  • Compute the SpSaS of the subtractive term, and perform median filtering on it.
  • Construct the filter used to estimate ${I_{img}}({x,y,n} )$ by setting a threshold for the processed SpSaS.
  • Add $[{{I_0}({x,y,n} )- {I_{loc0}}({x,y,n} )} ]$ and $[{{I_{90}}({x,y,n} )- {I_{loc90}}({x,y,n} )} ]$ to form the additive term
    $$\begin{array}{l} [{{I_0}({x,y,n} )- {I_{loc0}}({x,y,n} )} ]+ [{{I_{90}}({x,y,n} )- {I_{loc90}}({x,y,n} )} ]\\ = 2{I_{img}}^{\prime}({x,y,n} )+ 2\sqrt {{I_{img}}^{\prime}({x,y,n} )} \sqrt {{I_{loc0}}({x,y,n} )} \{{\cos [{\varDelta {\varphi_{img}}({x,y,n} )} ]+ \sin [{\varDelta {\varphi_{img}}({x,y,n} )} ]} \}\end{array}$$
  • Process the additive term with the filter constructed above and divide the result by 2 to obtain the estimated ${I_{img}}({x,y,n} )$.
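The steps above can be sketched as a short pipeline. The threshold value, the 3x3 median-filter size, and the choice of which spectral bins to keep are illustrative assumptions, not values from the paper:

```python
import numpy as np
from scipy.ndimage import median_filter

def estimate_echo_intensity(I0, I90, Iloc0, Iloc90, threshold=0.1):
    """Sketch of the estimation steps. The subtractive term (Eq. (14)) cancels
    the echo intensity; its median-filtered SpSaS magnitude is thresholded
    into a binary filter suppressing the cross-term bins, and applying that
    filter to the additive term (Eq. (15)) isolates 2*I_img."""
    d0, d90 = I0 - Iloc0, I90 - Iloc90
    sub = d0 - d90                         # Eq. (14): echo intensity removed
    mag = median_filter(np.abs(np.fft.ifft2(sub)), size=3)
    mask = mag <= threshold * mag.max()    # keep bins where cross terms are weak
    add = d0 + d90                         # Eq. (15): contains 2*I_img
    filtered = np.fft.fft2(np.fft.ifft2(add) * mask)
    return np.real(filtered) / 2.0
```

With the cross terms absent, the estimator returns the echo intensity exactly; with real data the threshold trades residual cross-term leakage against loss of echo spectrum.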

2.2 ASAL imaging method and far-field target simulation

The processing flow of ASAL imaging for the moving target includes target motion direction and parameter estimation, image registration, second-order phase compensation of SpSaSs, vibration phase compensation, SpSaS splicing, and image quality enhancement. Since steps such as second-order phase compensation of SpSaSs are consistent with those in Ref. [16], target motion direction and parameter estimation, vibration phase compensation, and image quality enhancement are mainly analyzed in this section.

To verify the effectiveness of the imaging method, simulations of the far-field target are also carried out according to the far-field condition $R > {{2{D^2}} / \lambda }$, where D is the effective aperture of the imaging system [16]. A 3-D satellite and a 3 × 3 point array are the targets in the simulations. When the targets move in a plane parallel to the imaging system, the simulation parameters are shown in Table 1.
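The far-field condition is a one-line check; the aperture and wavelength below are illustrative, not the Table 1 values:

```python
def is_far_field(R, D, wavelength):
    """Far-field condition R > 2*D**2/lambda, with D the effective aperture
    of the imaging system."""
    return R > 2.0 * D**2 / wavelength

# A 10 mm aperture at 1.55 um requires R greater than about 129 m
print(is_far_field(200.0, 0.01, 1.55e-6))   # True
print(is_far_field(100.0, 0.01, 1.55e-6))   # False
```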

Table 1. Simulation parameters of ASAL imaging in the far-field condition

2.2.1 Target motion direction estimation

When the target moves at a low speed or the DDA works at a high frame rate, the interferometric phase of two adjacent complex images has sparse fringes, and the direction of target motion can be estimated. The processing steps are as follows.

  • Register the complex images according to their magnitudes.
  • Calculate the interferometric phase of the complex images.
  • Process the interferometric phase with a median filter.
  • Compute the gradient direction map of the interferometric phase, which represents the anticlockwise angle between the gradient direction and the positive direction of the X-axis.
  • Draw the histogram of the gradient direction map.
  • Take the gradient direction with the maximum count as the estimated target motion direction.
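These steps reduce to a gradient-direction histogram; a sketch with registration and median filtering omitted (function name and grid are illustrative):

```python
import numpy as np

def estimate_motion_direction(phase1, phase2, bins=100):
    """Wrap the interferometric phase of two registered complex-image phases,
    compute its gradient direction map, and return the histogram peak in
    degrees, anticlockwise from the positive X-axis. The 100-bin histogram
    matches the 3.6 deg angular resolution quoted in the text."""
    interf = np.angle(np.exp(1j * (phase2 - phase1)))   # wrapped difference
    gy, gx = np.gradient(interf)                        # rows ~ y, cols ~ x
    direction = np.degrees(np.arctan2(gy, gx)) % 360.0
    hist, edges = np.histogram(direction.ravel(), bins=bins, range=(0.0, 360.0))
    k = int(np.argmax(hist))
    return 0.5 * (edges[k] + edges[k + 1])              # bin centre
```

For a pure phase ramp along X the peak falls in the first 3.6° bin and the estimate is its centre, 1.8°, matching the behaviour described for X-axis motion.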

When the target moves 0.5 mm in the positive direction of the X-axis, and 1 mm in the positive directions of both the X-axis and the Y-axis, respectively, the interferometric phases of the complex images before and after the target motion and the histograms of their gradient direction maps are shown in Fig. 4. With the histograms divided into 100 intervals, the angular resolution of target motion direction estimation is 3.6°, and increasing the number of histogram intervals reduces the estimation error. As shown in Fig. 4, the gradient directions are concentrated in the intervals of 0°∼3.6° and 43.2°∼46.8°, the motion direction estimates are 1.8° and 45°, and the estimation errors are 1.8° and 0°.

Fig. 4. Simulation for the far-field target motion direction estimation. (a) The interferometric phase and the histogram of the gradient direction map when the target moves in the positive direction of the X-axis by 0.5 mm. (b) The interferometric phase and the histogram of the gradient direction map when the target moves in the positive directions of the X-axis and the Y-axis by 1 mm.

When the interferometric phase has dense fringes, the number of fringes can be reduced by a calibration phase formed from the geometric relationship between the imaging system and the target. The calibration phase is similar to the flat-earth phase in interferometric synthetic aperture radar (InSAR) [17,18]. The calibrated interferometric phase can then be applied to target motion direction estimation.

2.2.2 Target motion parameter estimation

Taking the target moving in the positive direction of the X-axis as an example, the process of target motion parameter estimation is as follows.

  • Form the $x - y - n$ matrix with the strong scattering points of the multi-frame complex images.
  • Perform the 2-D inverse Fourier transform in the $x - y$ plane to form the ${x_{lens}} - {y_{lens}} - n$ matrix.
  • Sum the ${x_{lens}} - {y_{lens}} - n$ matrix coherently in the ${x_{lens}} - {y_{lens}}$ plane to construct the signal $S(n )$.
  • Calculate the unwrapped phase [19] $\varphi (n )$ of $S(n )$.
  • Fit the first-order phase of $\varphi (n )$ and remove it from $\varphi (n )$.
  • Fit the second-order phase $\varDelta \varphi (n )$ of $\varphi (n )$; the change in distance $\varDelta R(n )$ of the moving target can be calculated according to $\varDelta \varphi (n )={-} \frac{{4\pi }}{\lambda }\varDelta R(n )$.
  • Estimate the distance of target motion in the X-axis direction $\varDelta x(n )$ according to $\varDelta R(n )= \sqrt {{R^2} + \varDelta x{{(n )}^2}} - R$.
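The phase-fitting steps above can be sketched end-to-end; the construction of $S(n)$ from the complex-image stack is replaced here by a synthetic signal, and all numeric values are illustrative:

```python
import numpy as np

def estimate_motion_distance(S, R, wavelength):
    """Sketch for an already-constructed S(n): unwrap the phase, remove the
    fitted first-order term, keep the fitted second-order term delta_phi(n),
    convert it to a range change via delta_phi = -4*pi/lambda * delta_R, and
    invert delta_R = sqrt(R**2 + delta_x**2) - R for the transverse distance."""
    n = np.arange(len(S))
    phi = np.unwrap(np.angle(S))
    phi = phi - np.polyval(np.polyfit(n, phi, 1), n)   # remove first-order phase
    c2 = np.polyfit(n, phi, 2)[0]                      # second-order coefficient
    dR = -wavelength * (c2 * n**2) / (4.0 * np.pi)
    dR = np.maximum(dR, 0.0)                           # guard against fit noise
    return np.sqrt((dR + R)**2 - R**2)
```

Feeding in a synthetic $S(n)$ for a uniformly translating target (plus an arbitrary linear phase) recovers the per-frame transverse distance, even though, as noted below, the motion is far smaller than one image-domain resolution cell.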

According to the parameters in Table 1, under the noise-free condition and under the condition that the complex images' signal-to-noise ratio (SNR) is 5 dB, simulations of target motion parameter estimation and the estimation errors are shown in Fig. 5; the estimation errors are on the order of 0.5 mm and 8 mm, respectively. Since the motion distance is far smaller than the resolution corresponding to one detector, the magnitude of the complex image cannot reflect the motion of the target, while the estimation method above achieves parameters accurate enough for image registration and SpSaS splicing.

Fig. 5. Simulation for the far-field target motion parameter estimation. (a) The estimation value, actual value and estimation error of the target motion distance under the noise-free condition. (b) The estimation value, actual value and estimation error of the target motion distance under the condition that complex images’ SNR is 5 dB.

If the target moves in two dimensions, $\varDelta x(n )$ above can be replaced by the in-plane target motion distance $\varDelta r(n )$, which is projected onto the X-axis and the Y-axis according to the motion direction to form $\varDelta x(n )$ and $\varDelta y(n )$.

2.2.3 Vibration phase compensation

In practical conditions, vibration leads to defocusing of the ASAL image. To reduce the influence of vibration on imaging, under the condition that the rigid target moves in the positive direction of the X-axis and vibrates radially, the vibration phase compensation method is as follows.

  • Form the $x - y - n$ matrix with the strong scattering points of the multi-frame complex images.
  • Perform the 2-D inverse Fourier transform in the $x - y$ plane to form the ${x_{lens}} - {y_{lens}} - n$ matrix, which includes the multi-frame SpSaSs.
  • Set the coordinate of the strong scattering point to be $({{x_p},{y_p}} )$, and compensate the SpSaSs with
    $$\varphi ({{x_{lens}},{y_{lens}},n} )= \frac{{2\pi }}{\lambda }\varDelta R^{\prime}({{x_{lens}},{y_{lens}},n} )$$
    $$\varDelta R^{\prime}({{x_{lens}},{y_{lens}},n} )= {R_p}({{x_{lens}},{y_{lens}},n} )- {R_c}({{x_{lens}},{y_{lens}},n} )$$
    $$\begin{array}{c} {R_p}({{x_{lens}},{y_{lens}},n} )= \sqrt {{{[{{x_{lens}} - {x_p} - \varDelta x(n )} ]}^2} + {{({{y_{lens}} - {y_p}} )}^2} + {R^2}} \\ + \sqrt {{{[{{x_p} + \varDelta x(n )} ]}^2} + y_p^2 + {R^2}} \end{array}$$
    $${R_c}({{x_{lens}},{y_{lens}},n} )= \sqrt {{{[{{x_{lens}} - \varDelta x(n )} ]}^2} + y_{lens}^2 + {R^2}} + \sqrt {\varDelta x{{(n )}^2} + y_{lens}^2 + {R^2}}$$
    where ${R_p}({{x_{lens}},{y_{lens}},n} )$ and ${R_c}({{x_{lens}},{y_{lens}},n} )$ are the distance variations of the strong scattering point and the scene midpoint during the target motion.
  • Sum the ${x_{lens}} - {y_{lens}} - n$ matrix coherently in the ${x_{lens}} - {y_{lens}}$ plane to construct the 1-D signal $S^{\prime}(n )$.
  • Obtain the unwrapped phase $\varphi ^{\prime}(n )$ of $S^{\prime}(n )$, and remove $\varDelta \varphi (n )$ estimated in Section 2.2.2 from $\varphi ^{\prime}(n )$.
  • Estimate the vibration phase ${\varphi _v}(n )$ from $\varphi ^{\prime}(n )$ through the space correlation algorithm (SCA) [20].
  • Compensate each SpSaS with ${\varphi _v}(n )$, and perform the 2-D Fourier transform in the ${x_{lens}} - {y_{lens}}$ plane to transform the ${x_{lens}} - {y_{lens}} - n$ matrix to the $x - y - n$ matrix, which includes the multi-frame complex images with the vibration phase compensated.
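The geometric compensation of Eqs. (16)-(19) translates directly into code; this sketch evaluates the compensation phase on given lens-plane coordinate arrays (argument names and units are illustrative):

```python
import numpy as np

def compensation_phase(x_lens, y_lens, xp, yp, dx_n, R, wavelength):
    """Eqs. (16)-(19): phase compensating a strong scattering point at
    (xp, yp) relative to the scene midpoint when the target has moved dx_n
    along the X-axis at frame n. x_lens/y_lens are lens-plane coordinates."""
    Rp = (np.sqrt((x_lens - xp - dx_n)**2 + (y_lens - yp)**2 + R**2)
          + np.sqrt((xp + dx_n)**2 + yp**2 + R**2))
    Rc = (np.sqrt((x_lens - dx_n)**2 + y_lens**2 + R**2)
          + np.sqrt(dx_n**2 + y_lens**2 + R**2))
    return 2.0 * np.pi / wavelength * (Rp - Rc)
```

As a sanity check, a scattering point at the scene midpoint with no motion produces zero compensation phase along the $y_{lens}=0$ axis.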

Under the noise-free condition and the condition that complex images’ SNR is -25 dB, the vibration phase estimation and its error of the far-field rigid target are shown in Fig. 6. And it can be found that the method is still effective when the SNR of complex images is -25 dB. With the target nonrigid, multiple strong scattering points can be selected and vibration phases of corresponding areas can be estimated based on the method above.

Fig. 6. Simulations for the far-field target vibration phase estimation. (a) The estimation value, actual value and estimation error of the vibration phase under the noise-free condition. (b) The estimation value, actual value and estimation error of the vibration phase under the condition that complex images’ SNR is -25 dB.

2.2.4 Spatial sampling signal splicing and resolution improvement of images

The range of the SpSaS corresponding to the ASAL image can be enlarged by SpSaS splicing to realize high-resolution imaging. If the target moves $\varDelta x(n )$ and $\varDelta y(n )$ in the X-axis and Y-axis directions, respectively, the complex image is multiplied by the linear phase $\varDelta {k_x}(n )x + \varDelta {k_y}(n )y$ to shift the center of the corresponding SpSaS. The expressions of $\varDelta {k_x}(n )$ and $\varDelta {k_y}(n )$ can be summarized as

$$\varDelta {k_u}(n )= \frac{{2\pi }}{\lambda }\arctan \left[ {\frac{{2\varDelta u(n )}}{R}} \right],u = x\textrm{ or }y$$

Linear phase processing in the image domain is equivalent to moving the SpSaS by $2\varDelta x(n )$ and $2\varDelta y(n )$.
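This equivalence between a linear phase in the image domain and a shift of the SpSaS is the discrete Fourier shift theorem; a minimal check (the 5-bin shift and random data are arbitrary placeholders):

```python
import numpy as np

N = 64
img = (np.random.default_rng(1).standard_normal((N, N))
       + 1j * np.random.default_rng(2).standard_normal((N, N)))
shift_bins = 5                                      # arbitrary shift
ramp = np.exp(1j * 2 * np.pi * shift_bins * np.arange(N) / N)

spsas = np.fft.ifft2(img)                           # SpSaS of the image
spsas_shifted = np.fft.ifft2(img * ramp[np.newaxis, :])
# Multiplying by a linear phase along x shifts the SpSaS along x_lens.
assert np.allclose(np.roll(spsas, -shift_bins, axis=1), spsas_shifted)
```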

Based on the parameters in Table 1, the single-frame complex image and the profile of the point array target are shown in Fig. 7, where the 3 × 3 point array target with a spacing of 0.3 m cannot be resolved. When the target moves along the X-axis, the ASAL image and the profile of the point array target are shown in Fig. 8. Compared with Fig. 7, the resolution of the ASAL image is improved by a factor of 6, reaching about 0.18 m, and the point array target can be resolved.

Fig. 7. The single-frame complex image, its point array target profile, and SpSaS. (a) The magnitude and the point array target profile of the single-frame complex image. (b) The phase of the single-frame complex image. (c) The SpSaS of the single-frame complex image.

Fig. 8. The ASAL image, its point array target profile, and SpSaS. (a) The magnitude and the point array target profile of the ASAL image. (b) The phase of the ASAL image. (c) The SpSaS of the ASAL image.

2.2.5 Image quality enhancement

The improvement of image quality in this section mainly includes three aspects: image focusing, sidelobe suppression, and SNR improvement. The phase gradient autofocus (PGA) algorithm [21] and multi-look processing [22] can be applied to image focusing and SNR improvement, respectively.

Sidelobe suppression of the image can be realized through Compressed Sensing (CS) [23] and Complex Dual Apodization (CDA) algorithms [24,25], and the processing steps are as follows.

  • Set a threshold and process each row and then each column of the image by CS in turn to obtain CS-processed image 1.
  • Set another threshold and repeat the process above for each column and then each row of the image to generate CS-processed image 2.
  • Apply CDA to the two CS-processed images to realize sidelobe suppression.
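As an illustration of the apodization idea behind the CDA stage (the CS stages and the per-component logic of full complex dual apodization are omitted here), a simplified dual-apodization pass keeps, per pixel, the smaller magnitude between a uniform-weighted and a Hann-weighted image:

```python
import numpy as np

def dual_apodization(spsas):
    """Simplified dual apodization: form one image from the uniform-weighted
    SpSaS and one from a Hann-weighted SpSaS, then keep the pixel with the
    smaller magnitude. Sidelobes are suppressed, while the mainlobe, where
    both images agree, keeps the uniform-weighting resolution."""
    ny, nx = spsas.shape
    win = np.outer(np.hanning(ny), np.hanning(nx))
    img_uniform = np.fft.fft2(spsas)
    img_weighted = np.fft.fft2(spsas * win)
    keep_weighted = np.abs(img_weighted) < np.abs(img_uniform)
    return np.where(keep_weighted, img_weighted, img_uniform)
```

By construction the output magnitude never exceeds that of the uniform-weighted image, which is the sidelobe-suppression property exploited by the full CDA algorithm.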

3. Experimental verification

3.1 Experimental system

The photo of the complex image reconstruction and ASAL imaging experimental system based on the SLM and the DDA, and the image of the model car collected by the DDA, are shown in Fig. 9. The model car is used as the stop-and-go target and the continuous-moving target in the experiments.

Fig. 9. The photo of the experimental system and the image of the target. (a) The photo of the experimental system. (b) The photo of the model car target. (c) The DDA collected image of the model car target.

A 1:9 PMFS is used to transmit the output of the laser to the collimators of the laser transmitting signal and the LLO, whose spot size is increased through a beam expander. The collimator of the laser transmitting signal achieves the equivalent function of a beam expander through defocusing. The LLO modulated by the phase-only reflective SLM (HOLOEYE PLUTO-2.1) and the laser echo signal are combined by the NPBS, received by the DDA (a short-wave infrared camera, Bobcat 320), and saved for complex image reconstruction. The target moves in a plane parallel to the imaging system, the DDA acquires multi-frame holograms during the motion, and ASAL imaging can be carried out. The parameters of the experimental system are shown in Table 2.

Table 2. Parameters of the experimental system

3.2 Experimental data processing

The following experiments use a metal disc and a model car as targets, which are not rigidly connected to the imaging system. To verify the effectiveness of ASAL imaging, multi-frame holograms of the stop-and-go target and the continuous-moving target are collected and processed to form high-resolution images. The image quality is evaluated through resolution, image entropy, and contrast [26].

3.2.1 Complex image reconstruction

Before and after moving the collimator of the laser transmitting signal, complex images of the metal disc are reconstructed, and the interferometric phase of the target's complex images can be formed. The photo, the complex image, and the interferometric phase with and without median filtering are illustrated in Figs. 10(a)-(f). The interference fringes reflect the phase change caused by the collimator motion.

Fig. 10. The photo, the complex image, the interferometric phase, and its calibration result of the metal disc target. (a) The photo of the metal disc target. (b)-(d) The magnitude, phase, and Lissajous figure of the metal disc target’s complex image. (e)-(f) The interferometric phase of the metal disc target’s complex images with and without median filtering. (g)-(h) The calibration results of (e) with and without median filtering.

Figures 10(g)-(h) show the calibration results of the interferometric phase, which are formed based on the 0.13 mm motion distance of the collimator and the imaging system structure. After calibration, the interference fringes disappear, which verifies the effectiveness of the approach to constructing the phase information of complex images.

The complex image of the stop-and-go target and the interferometric phases are shown in Fig. 11. The diagonal fringes in the interferometric phase of the non-simultaneous complex images shown in Fig. 11(d) indicate that, without a rigid connection between the target and the imaging system, a phase error arises and the phase of the complex image varies even when the target is stationary.


Fig. 11. The complex image and the interferometric phase of the stop-and-go target. (a)-(c) The magnitude, phase and Lissajous figure of the stop-and-go target’s complex image. (d) The interferometric phase of stationary target’s complex images. (e) The interferometric phase of complex images before and after the target motion. (f) The normalized mean phase of (e) in the Y-axis direction.


The interferometric phase of the complex images formed before and after the target moves 1.1 mm in the X-axis direction, and its normalized mean phase in the Y-axis direction, are illustrated in Figs. 11(e)(f). Since the number of interference fringes matches the number of phase-change cycles caused by the motion, the reconstructed complex images reflect tiny movements of the target, which lays the foundation for ASAL imaging.
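
A fringe count consistent with the motion can be extracted automatically, for example by locating the dominant spatial frequency of the complex interferogram; this is an illustrative sketch, not the motion-estimation method of Section 2.2:

```python
import numpy as np

def fringe_count(phase_row):
    """Estimate the number of full fringe cycles across one row of a
    wrapped interferometric phase by locating the dominant spatial
    frequency of exp(j*phase)."""
    spec = np.abs(np.fft.fft(np.exp(1j * phase_row)))
    spec[0] = 0.0                # ignore the DC term
    k = int(np.argmax(spec))
    n = len(phase_row)
    return min(k, n - k)         # fold negative frequencies
```

Counting fringes this way, and knowing how many phase cycles a unit of motion produces, gives a direct check on the estimated motion distance.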

The interferometric phases in Figs. 11(d)(e) are filtered by the normalized magnitudes of the complex images.

Following the approach in Section 2.1, the normalized and median-filtered SpSaS of the subtractive term in Eq. (14) and the filter constructed from it are shown in Figs. 12(a)(b). The constructed filter is used to estimate the intensity image of the laser echo signal for the continuous-moving target. The reconstructed complex image and its Lissajous figure are listed in Figs. 12(c)-(e). The Lissajous figure indicates that the real and imaginary parts of the complex image have good orthogonality.
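
In the spirit of the two-step quadrature reconstruction of Eq. (7), the real and imaginary parts of the complex image follow from the 0° and 90° holograms once the local-oscillator intensities and the echo-intensity estimate are subtracted; a minimal sketch (variable names are mine, and `i_img` stands for the estimate I'img produced by the filter above):

```python
import numpy as np

def reconstruct_complex_image(i0, i90, i_loc0, i_loc90, i_img):
    """Two-step quadrature reconstruction in the spirit of Eq. (7):
    subtracting the LO intensity and the echo-intensity estimate from
    the 0-degree and 90-degree holograms yields the real and imaginary
    parts of the complex image."""
    re = (i0 - i_loc0 - i_img) / (2.0 * np.sqrt(i_loc0))
    im = (i90 - i_loc90 - i_img) / (2.0 * np.sqrt(i_loc90))
    return re + 1j * im
```

Feeding in holograms synthesized from a known amplitude and phase returns exactly that complex value, which is the consistency check behind the Lissajous-figure orthogonality test.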


Fig. 12. The SpSaS of the subtractive term, the filter constructed, the complex image and its Lissajous figure of the continuous-moving target. (a) The normalized median filtering result of the subtractive term’s SpSaS. (b) The filter constructed to estimate the intensity image of the laser echo signal for the continuous-moving target. (c) The magnitude of the continuous-moving target’s complex image. (d) The phase of the continuous-moving target’s complex image. (e) Lissajous figure of the continuous-moving target’s complex image.


3.2.2 ASAL imaging for the stop-and-go target

Since the LLO fails to cover the DDA completely, the covered region of the image is selected before the complex image reconstruction; the detector scale corresponding to the selected image is 80 × 140. When the target is 1.5 m away, the field of view of the imaging system is 0.069 m × 0.12 m, while the image resolution corresponding to the angular resolution of a single detector and the diffraction limit of the imaging system are 0.857 mm and 0.113 mm, respectively.
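
These numbers are mutually consistent: the field of view is the selected detector scale times the single-detector image resolution, and the diffraction limit follows from λR/D. A quick sketch, where the wavelength and receiving aperture are assumptions chosen to reproduce the quoted figures (neither is stated in this excerpt):

```python
R = 1.5                      # target distance, m
pixel_res = 0.857e-3         # single-detector image resolution, m
n_x, n_y = 80, 140           # selected detector scale

# field of view = detector scale x per-detector resolution
fov_x, fov_y = n_x * pixel_res, n_y * pixel_res   # ~0.069 m x 0.12 m

lam = 1.55e-6                # wavelength, m (assumed)
D = 0.0206                   # receiving aperture, m (assumed)
diff_limit = lam * R / D     # ~0.113 mm diffraction-limited resolution
```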

Without phase information, the image of the stop-and-go target received by the DDA and the profile of the point target are shown in Fig. 13(a); the resolution is about 1.33 mm, the entropy is 10.4873, and the contrast is 0.4549. With a resolution of about 0.907 mm, an entropy of 9.3469, and a contrast of 1.2877, the reconstructed single-frame complex image and its SpSaS are shown in Figs. 13(b)(c).


Fig. 13. The image received by the DDA, the single-frame complex image, the ASAL images, and their corresponding point target profiles of the stop-and-go target. (a) The image received by the DDA and the corresponding point target profile of the stop-and-go target. (b)(c) The SpSaS, magnitude, and the point target profile of the stop-and-go target’s single-frame complex image. (d)(e) The SpSaS, magnitude, and the point target profile of the stop-and-go target’s ASAL image with sparse SpSaS. (f)(g) The SpSaS, magnitude, and the point target profile of the stop-and-go target’s ASAL image with SpSaSs successively spliced. (h) The PGA compensation phase for (g). (i) The magnitude and the point target profile of (g) after PGA processing.


When the stop-and-go target moves in the negative direction of the X-axis, 7 frames of complex images are formed; the ASAL image and the SpSaS are listed in Figs. 13(d)(e). The experiment was carried out under the near-field condition. The motion parameters of the target can be estimated with the method in Section 2.2 together with magnitude-image comparison: the motion distances of the target between images are about 1.8 mm, 2.5 mm, 4.1 mm, 1.4 mm, 2.5 mm, and 2 mm in turn. When the SpSaSs are spliced according to these motion distances, ASAL imaging improves the resolution by about 10.5 times. After PGA processing, the resolution, entropy, and contrast of the ASAL image are 0.06 mm, 9.1141, and 1.3892, respectively.

The sparse SpSaS of the ASAL image exacerbates the impact of sidelobes. The ASAL imaging shown in Figs. 13(f)(g) splices the SpSaSs successively to alleviate the problems caused by sparsity, and the resolution is enhanced by a factor of 6 compared with the low-resolution complex images. After processing with the PGA compensation phase in Fig. 13(h), the resolution, entropy, and contrast of the ASAL image shown in Fig. 13(i) are 0.11 mm, 9.0983, and 1.3918, and sidelobe suppression increases the contrast to 2.0413.
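
The splicing operation itself can be illustrated in one dimension: each frame contributes a band of the target spectrum whose centre is offset in proportion to the target motion, and placing the bands side by side yields a contiguous wide spectrum, hence finer resolution. A toy sketch (the band width and shifts are arbitrary, chosen only to show the mechanism):

```python
import numpy as np

target = np.zeros(256)
target[100] = target[110] = 1.0      # two closely spaced point targets
spec = np.fft.fft(target)            # full target spectrum

band = 16                            # spectrum samples one frame captures

def frame_band(shift):
    """Band captured by one frame; the motion between frames shifts
    the band centre across the target spectrum."""
    return np.roll(spec, -shift)[:band]

# splicing 7 adjacent bands gives 7x the bandwidth of a single frame
spliced = np.concatenate([frame_band(k * band) for k in range(7)])
```

The spliced result equals the first 112 samples of the full spectrum, so the synthesized aperture behaves like a single receiver seven times wider; gaps between the bands (the sparse case in Figs. 13(d)(e)) are what raise the sidelobes.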

To verify the SNR enhancement of multi-look processing, 9 frames of complex images with motion distances of 1.13 mm, 1.1 mm, 1.5 mm, 0.6 mm, 1 mm, 1 mm, 0.9 mm, and 0.9 mm are applied to ASAL imaging. The SpSaS of the ASAL image is hardly sparse when the SpSaSs of the complex images are spliced according to these motion distances. The ASAL image based on all the complex images is shown in Fig. 14(a), and its SNR is -4.2 dB. Dividing the complex images into 3 groups, each group of 3 frames forms an ASAL image with approximately 3 times resolution improvement; the SNRs of these ASAL images are -0.62 dB, -2.08 dB, and -3.53 dB, and Fig. 14(b) shows one of them. Figure 14(c) is the incoherent addition of the 3 ASAL images, which is equivalent to 3-look processing of Fig. 14(a), and its SNR is 2.48 dB. Multi-look processing thus effectively improves the SNR of the ASAL image at the expense of resolution, so the number of complex images must be balanced between resolution and SNR improvement in ASAL imaging.
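
The speckle-averaging mechanism behind this gain can be demonstrated with synthetic data; this sketch uses a mean-over-standard-deviation SNR convention, which may differ from the paper's definition, and fully developed speckle (exponential intensity statistics) as an assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

def snr_db(img):
    """SNR as mean intensity over its standard deviation, in dB."""
    return 20 * np.log10(img.mean() / img.std())

# Three independent speckle intensity images of a uniform scene:
# exponential statistics, so mean/std is 1 (0 dB) for a single look.
looks = [rng.exponential(1.0, size=(256, 256)) for _ in range(3)]

single = snr_db(looks[0])
multilook = snr_db(sum(looks) / 3)   # incoherent addition of 3 looks
```

Averaging L independent looks shrinks the speckle standard deviation by a factor of √L while the mean is unchanged, which is why the incoherent addition in Fig. 14(c) outperforms the individual group images.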


Fig. 14. The SNR improvement of the ASAL images through multi-look processing. (a) The ASAL image based on 9-frame complex images. (b) The ASAL image based on 3-frame complex images. (c) The 3-time multi-look processing of (a).


3.2.3 ASAL imaging for the continuous-moving target

With the detector scale corresponding to the image covered by the LLO being 80 × 80 and the target distance being 1.6 m, the field of view of the imaging system is 0.073 m × 0.073 m. The image resolution corresponding to the angular resolution of a single detector and the diffraction limit of the imaging system are 0.914 mm and 0.12 mm, respectively. The single-frame complex image and its SpSaS are shown in Figs. 15(a)(b); the resolution is 1.08 mm, the entropy is 10.1169, and the contrast is 0.7696.


Fig. 15. The single-frame complex image and ASAL images of the continuous-moving target. (a)(b) The SpSaS, the magnitude, and the point target profile of the continuous-moving target’s single-frame complex image. (c)(d) The SpSaS, the magnitude, and the point target profile of the continuous-moving target’s ASAL image. (e) Sidelobe suppression processing result of (d).


Referring to the parameters in Table 2, the speed of the target must not exceed 1.828 m/s to avoid image blurring. During the motion of the target in the negative direction of the X-axis, 11 frames of complex images are reconstructed. The target motion distances between images are about 0.4 mm, 0.55 mm, 0.55 mm, 0.92 mm, 1 mm, 0.57 mm, 0.65 mm, 0.7 mm, 0.9 mm, and 1 mm, from which the transverse velocity is estimated to be around 0.058 m/s. The resolution of the ASAL image is improved by 6.3 times to reach 0.15 mm, the entropy is 9.1141, and the contrast is 0.9164. The ASAL image, the SpSaS, and the point target profile are listed in Figs. 15(c)-(e), and sidelobe suppression processing further enhances the contrast to 1.6583.
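
Per Eq. (1), the target must move less than one detector footprint (a/f)·R during the integration time T. A sketch reproducing the quoted 1.828 m/s bound; since Table 2 is not reproduced in this excerpt, the integration time is an assumption chosen to match that value:

```python
ifov = 0.914e-3 / 1.6    # single-detector angular resolution a/f, rad
R = 1.6                  # target distance, m
T = 0.5e-3               # integration time, s (assumed)

# Eq. (1): v_c * T <= (a/f) * R, so the speed limit is one
# detector footprint per integration time.
v_max = ifov * R / T
```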

When the target moves in the negative directions of the X-axis and Y-axis simultaneously and passes through the spot of the laser transmitting signal, the holograms collected by the DDA are shown in Fig. 16(a). The target moves 0.0737 m in the X-axis direction and 0.0579 m in the Y-axis direction within 0.66 s, so the model car traverses the spot from beginning to end at an estimated transverse speed of 0.14 m/s, and 51 frames of complex images are reconstructed. Without resolution improvement, Fig. 16(b) is the incoherent addition of the complex images after registration, with an entropy of 11.6910 and a contrast of 1.3468.


Fig. 16. ASAL imaging for the 2-D continuous-moving model car target. (a) Holograms collected by the DDA during the target passing through the spot of the laser transmitting signal. (b) The incoherent addition result based on 51-frame complex images. (c) The SpSaS of the ASAL image based on 51-frame complex images. (d) ASAL images of the front, body, and rear of the target with resolution improved about 10 times. (e) The high-resolution image of the entire model car target formed by 5-frame ASAL images added incoherently. (f) The sidelobe suppression result of (e).


If all the complex images are used in ASAL imaging, Fig. 16(c) shows that the SpSaS is highly sparse in the ${x_{lens}} - {y_{lens}}$ domain. When frames 1-10, 11-21, 22-32, 33-43, and 44-51 of the complex images are applied to ASAL imaging separately, the resolution can theoretically be improved by a factor of about 10, and the ASAL images of the front, body, and rear of the target are listed in Fig. 16(d). Incoherently adding the registered ASAL images forms the image of the entire model car target shown in Fig. 16(e), which has an entropy of 11.6342 and a contrast of 1.3572. As illustrated in Fig. 16(f), sidelobe suppression processing reduces the entropy to 11.0639 and enhances the contrast to 2.1875.
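
The grouping and incoherent combination described above can be sketched as follows; the helper names are mine, and the group boundaries are the frame ranges quoted in the text (written 0-based):

```python
import numpy as np

def group_frames(n_frames, groups):
    """Split frame indices into sub-aperture groups, e.g. the
    frames 1-10, 11-21, 22-32, 33-43, 44-51 used above."""
    idx = np.arange(n_frames)
    return [idx[a:b] for a, b in groups]

def incoherent_sum(images):
    """Incoherently add registered complex sub-images: intensities
    add and phases are discarded, so speckle averages out instead
    of interfering."""
    return sum(np.abs(im) ** 2 for im in images)
```

Each group synthesizes a high-resolution view of part of the car; the incoherent sum of the registered group images then covers the whole target without requiring phase coherence between groups.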

4. Conclusion

The ASAL experimental system for moving targets based on the SLM, the LLO, and the DDA is introduced above, and the complex image reconstruction method, the intensity image estimation of the laser echo signal for the continuous-moving target, and the ASAL imaging approach are elaborated. Compared with SAL/ISAL, ASAL imaging can generate high-resolution images in the pitch and azimuth directions and features a large field of view, fast imaging speed, good noise resistance, a narrowband transmitted signal, and a simple system requiring little equipment.

Through ASAL imaging for the stop-and-go target and the continuous-moving target, the resolutions are improved from 0.907 mm and 1.08 mm to 0.06 mm and 0.15 mm, respectively, the entropies are reduced, and the contrasts are enhanced. The validity of the complex image reconstruction and ASAL imaging methods is verified, indicating that the approaches introduced are valuable for high-resolution imaging of moving targets under far- and near-field conditions.

Due to the limitation of experimental conditions, errors exist in the complex image reconstruction and the target motion parameter estimation, and they affect the quality of the ASAL images in the 2-D continuous-moving target experiment; the experimental system therefore needs to be optimized. Subsequent research includes two aspects. On the one hand, experiments will be carried out to test the performance of the ASAL imaging method under far-field conditions. On the other hand, a single-element coherent detector connected to an analog-to-digital converter (ADC) will be added to the imaging system to obtain more information about the target, such as distance and speed, which may further improve the accuracy of the parameter estimation in ASAL imaging.

Funding

National Natural Science Foundation of China (62371440).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. M. Soumekh, Synthetic Aperture Radar Signal Processing with MATLAB Algorithms (Wiley, 1999).

2. S. Crouch and Z. W. Barber, “Laboratory demonstrations of interferometric and spotlight synthetic aperture ladar techniques,” Opt. Express 20(22), 24237–24246 (2012). [CrossRef]  

3. A. Cui, D. Li, J. Wu, et al., “Moving target imaging of a dual-channel ISAL with binary phase shift keying signals and large squint angles,” Appl. Opt. 61(18), 5466–5473 (2022). [CrossRef]  

4. J. C. Mather, “An overview of the James Webb Space Telescope (JWST) project,” Proc. SPIE 5487, 550–563 (2004). [CrossRef]  

5. S. Shectman and M. Johns, “GMT overview,” Proc. SPIE 7733, 54–58 (2010).

6. R. L. Kendrick, J. Aubrun, R. Bell, et al., “Wide-field Fizeau imaging telescope: experimental results,” Appl. Opt. 45(18), 4235–4240 (2006). [CrossRef]  

7. C. Rogers, A. Y. Piggott, D. J. Thomson, et al., “A universal 3D imaging sensor on a silicon photonics platform,” Nature 590(7845), 256–261 (2021). [CrossRef]  

8. J. Gao, J. Sun, and M. Cong, “Research on an FM/cw ladar system using a 64 × 64 InGaAs metal-semiconductor-metal self-mixing focal plane array of detectors,” Appl. Opt. 56(10), 2858–2862 (2017). [CrossRef]  

9. K. Hu, Y. Zhao, Y. Sheng, et al., “A low-power CMOS trans-impedance amplifier for FM/cw ladar imaging system,” Proc. SPIE 8905, 89052K (2013). [CrossRef]  

10. W. Zhang, L. Cao, and G. Jin, “Review on high resolution and large field of view digital holography,” Infrared and Laser Engineering 48(06), 104–120 (2019).

11. P. Feng, X. Wen, and R. Lu, “Long-working-distance synthetic aperture Fresnel off-axis digital holography,” Opt. Express 17(7), 5473–5480 (2009). [CrossRef]  

12. A. Pelagotti, M. Paturzo, M. Locatelli, et al., “An automatic method for assembling a large synthetic aperture digital hologram,” Opt. Express 20(5), 4830–4839 (2012). [CrossRef]  

13. S. T. Thurman and A. Bratcher, “Multiplexed synthetic-aperture digital holography,” Appl. Opt. 54(3), 559–568 (2015). [CrossRef]  

14. M. Zhang, P. Gao, K. Wen, et al., “A comprehensive review on parallel phase-shifting digital holography,” Acta Photonica Sinica 50(07), 9–31 (2021).

15. J. Liu and T. Poon, “Two-step-only quadrature phase-shifting digital holography,” Opt. Lett. 34(3), 250–252 (2009). [CrossRef]  

16. A. Cui, D. Li, J. Wu, et al., “Laser Synthetic Aperture Coherent Imaging for Micro-Rotating Objects Based on Array Detectors,” IEEE Photonics J. 14(6), 1–9 (2022). [CrossRef]  

17. Z. Bao, M. Xing, and T. Wang, Radar imaging technology (Publishing House of Electronics Industry, 2005).

18. K. Desai, P. Joshi, S. Chirakkal, et al., “Analysis of performance of flat earth phase removal methods,” Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. XLII-5, 207–209 (2018). [CrossRef]  

19. The MathWorks, Inc., “MATLAB unwrap - Shift phase angles,” https://ww2.mathworks.cn/help/matlab/ref/unwrap.html

20. X. Hu, D. Li, J. Du, et al., “Vibration estimation of synthetic aperture lidar based on division of inner view field by two detectors along track,” in Proceedings of IEEE International Geoscience and Remote Sensing Symposium (IEEE, 2016), pp. 4561–4564.

21. D. E. Wahl, P. H. Eichel, D. C. Ghiglia, et al., “Phase gradient autofocus-a robust tool for high resolution SAR phase correction,” IEEE Trans. Aerosp. Electron. Syst. 30(3), 827–835 (1994). [CrossRef]  

22. K. Ouchi, “On the multilook images of moving targets by synthetic aperture radars,” IEEE Trans. Antennas Propagat. 33(8), 823–827 (1985). [CrossRef]  

23. D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory 52(4), 1289–1306 (2006). [CrossRef]  

24. H. C. Stankwitz, R. J. Dallaire, and J. R. Fienup, “Nonlinear apodization for sidelobe control in SAR imagery,” IEEE Trans. Aerosp. Electron. Syst. 31(1), 267–279 (1995). [CrossRef]  

25. V. T. Vu and M. I. Pettersson, “Sidelobe Control for Bistatic SAR Imaging,” IEEE Geosci. Remote Sens. Lett. 19, 1–5 (2021).

26. X. Li, G. Liu, and J. Ni, “Autofocusing of ISAR image based on entropy minimization,” IEEE Trans. Aerosp. Electron. Syst. 35(4), 1240–1252 (1999). [CrossRef]  




Figures

Fig. 1.
Fig. 1. The diagram of the system
Fig. 2.
Fig. 2. Schematic diagram of the complex image reconstruction method
Fig. 3.
Fig. 3. The image domain / spatial sampling domain conversion and geometric parameters of the imaging system. (a) Schematic conversion of the image domain and the spatial sampling domain. (b) Schematic of the imaging system geometric parameters.
Fig. 4.
Fig. 4. Simulation for the far-field target motion direction estimation. (a) The interferometric phase and the histogram of gradient direction map when the target moves in the positive direction of the X-axis by 0.5 mm. (b) The interferometric phase and the histogram of gradient direction map when the target moves in the positive direction of the X-axis and the Y-axis by 1 mm
Fig. 5.
Fig. 5. Simulation for the far-field target motion parameter estimation. (a) The estimation value, actual value and estimation error of the target motion distance under the noise-free condition. (b) The estimation value, actual value and estimation error of the target motion distance under the condition that complex images’ SNR is 5 dB.
Fig. 6.
Fig. 6. Simulations for the far-field target vibration phase estimation. (a) The estimation value, actual value and estimation error of the vibration phase under the noise-free condition. (b) The estimation value, actual value and estimation error of the vibration phase under the condition that complex images’ SNR is -25 dB.
Fig. 7.
Fig. 7. The single-frame complex image, its point array target profile, and SpSaS. (a) The magnitude and the point array target profile of the single-frame complex image. (b) The phase of the single-frame complex image. (c) The SpSaS of the single-frame complex image.
Fig. 8.
Fig. 8. The ASAL image, its point array target profile, and SpSaS. (a) The magnitude and the point array target profile of the ASAL image. (b) The phase of the ASAL image. (c) The SpSaS of the ASAL image.
Fig. 9.
Fig. 9. The photo of the experimental system and the image of the target. (a) The photo of the experimental system. (b) The photo of the model car target. (c) The DDA collected image of the model car target.

Tables (2)


Table 1. Simulation parameters of ASAL imaging in the far-field condition


Table 2. Parameters of the experimental system

Equations (20)


$${v_c}T \le \frac{a}{f}R$$
$$\begin{array}{l} {I_0}({x,y,n} )\\ = \int\limits_{ - T/2}^{T/2} {\left\{ {I_{loc0}^2({x,y,n,t} )+ I_{img0}^2({x,y,n,t} )+ 2\sqrt {{I_{loc0}}({x,y,n,t} )} \sqrt {{I_{img0}}({x,y,n,t} )} \cos [{2\pi {f_d}({x,y} )t + \varDelta {\varphi_{img}}({x,y,n} )} ]} \right\}} dt \end{array}$$
$$\begin{array}{l} {I_0}({x,y,n} )\\ = T \cdot [{I_{loc0}^2({x,y,n} )+ I_{img0}^2({x,y,n} )} ]+ 2\sqrt {{I_{loc0}}({x,y,n} )} \sqrt {{I_{img0}}({x,y,n} )} \int\limits_{ - T/2}^{T/2} {\cos [{2\pi {f_d}({x,y} )t + \varDelta {\varphi_{img}}({x,y,n} )} ]dt} \\ = T \cdot [{I_{loc0}^2({x,y,n} )+ I_{img0}^2({x,y,n} )} ]+ \frac{{2\sqrt {{I_{loc0}}({x,y,n} )} \sqrt {{I_{img0}}({x,y,n} )} }}{{\pi {f_d}}}\sin [{\pi {f_d}({x,y} )T} ]\cos [{\varDelta {\varphi_{img}}({x,y,n} )} ]\end{array}$$
$$\begin{array}{l} {I_{90}}({x,y,n} )\\ = T \cdot [{I_{loc90}^2({x,y,n} )+ I_{img90}^2({x,y,n} )} ]+ \frac{{2\sqrt {{I_{loc90}}({x,y,n} )} \sqrt {{I_{img90}}({x,y,n} )} }}{{\pi {f_d}}}\sin [{\pi {f_d}({x,y} )T} ]\sin [{\varDelta {\varphi_{img}}({x,y,n} )} ]\end{array}$$
$${I_0}({x,y,n} )= I_{loc0}^2({x,y,n} )+ I_{img0}^2({x,y,n} )+ 2\sqrt {{I_{loc0}}({x,y,n} )} \sqrt {{I_{img0}}({x,y,n} )} \cos [{\varDelta {\varphi_{img}}({x,y,n} )} ]$$
$${I_{90}}({x,y,n} )= I_{loc90}^2({x,y,n} )+ I_{img90}^2({x,y,n} )+ 2\sqrt {{I_{loc90}}({x,y,n} )} \sqrt {{I_{img90}}({x,y,n} )} \sin [{\varDelta {\varphi_{img}}({x,y,n} )} ]$$
$$\begin{aligned} U({x,y,n} )&= A({x,y,n} )\cdot \exp \{{j\varDelta {\varphi_{img}}({x,y,n} )} \}\\ &= \frac{{{I_0}({x,y,n} )- {I_{loc0}}({x,y,n} )- {I_{img0}}({x,y,n} )}}{{2\sqrt {{I_{loc0}}({x,y,n} )} }} + j\frac{{{I_{90}}({x,y,n} )- {I_{loc90}}({x,y,n} )- {I_{img90}}({x,y,n} )}}{{2\sqrt {{I_{loc90}}({x,y,n} )} }}\\& \approx \frac{{{I_0}({x,y,n} )- {I_{loc0}}({x,y,n} )- {I_{img}}^{\prime}({x,y,n} )}}{{2\sqrt {{I_{loc0}}({x,y,n} )} }} + j\frac{{{I_{90}}({x,y,n} )- {I_{loc90}}({x,y,n} )- {I_{img}}^{\prime}({x,y,n} )}}{{2\sqrt {{I_{loc90}}({x,y,n} )} }} \end{aligned}$$
$$\varDelta {\varphi _{img}}({x,y,n} )= {\varphi _{img}}({x,y,n} )- {\varphi _{loc}}({x,y,n} )$$
$${\varphi _{img}}({x,y,n} )= angle\left\{ {{\cal F}\left\{ {P({{x_{lens}},{y_{lens}}} )\cdot \sum {{A_i}^{\prime}({{x_{lens}},{y_{lens}}} )\exp \left\{ { - \frac{{2\pi }}{\lambda }[{R_t^i(n )+ R_r^i({{x_{lens}},{y_{lens}},n} )} ]} \right\}} } \right\}} \right\}$$
$$R_t^i(n )= \sqrt {{{({x_{lens}^t - x_{lens}^{i,n}} )}^2} + {{({y_{lens}^t - y_{lens}^{i,n}} )}^2} + {{({z_{lens}^t - z_{lens}^{i,n}} )}^2}}$$
$$R_r^i({{x_{lens}},{y_{lens}},n} )= \sqrt {{{({{x_{lens}} - x_{lens}^{i,n}} )}^2} + {{({{y_{lens}} - y_{lens}^{i,n}} )}^2} + {{({{z_{lens}} - z_{lens}^{i,n}} )}^2}}$$
$${I_0}({x,y,n} )- {I_{loc0}}({x,y,n} )= {I_{img}}^{\prime}({x,y,n} )+ 2\sqrt {{I_{img}}^{\prime}({x,y,n} )} \sqrt {{I_{loc0}}({x,y,n} )} \cos [{\varDelta {\varphi_{img}}({x,y,n} )} ]$$
$${I_{90}}({x,y,n} )- {I_{loc90}}({x,y,n} )= {I_{img}}^{\prime}({x,y,n} )+ 2\sqrt {{I_{img}}^{\prime}({x,y,n} )} \sqrt {{I_{loc0}}({x,y,n} )} \sin [{\varDelta {\varphi_{img}}({x,y,n} )} ]$$
$$\begin{array}{l} [{{I_0}({x,y,n} )- {I_{loc0}}({x,y,n} )} ]- [{{I_{90}}({x,y,n} )- {I_{loc90}}({x,y,n} )} ]\\ = 2\sqrt {{I_{img}}^{\prime}({x,y,n} )} \sqrt {{I_{loc0}}({x,y,n} )} \{{\cos [{\varDelta {\varphi_{img}}({x,y,n} )} ]- \sin [{\varDelta {\varphi_{img}}({x,y,n} )} ]} \}\end{array}$$
$$\begin{array}{l} [{{I_0}({x,y,n} )- {I_{loc0}}({x,y,n} )} ]+ [{{I_{90}}({x,y,n} )- {I_{loc90}}({x,y,n} )} ]\\ = 2{I_{img}}^{\prime}({x,y,n} )+ 2\sqrt {{I_{img}}^{\prime}({x,y,n} )} \sqrt {{I_{loc0}}({x,y,n} )} \{{\cos [{\varDelta {\varphi_{img}}({x,y,n} )} ]+ \sin [{\varDelta {\varphi_{img}}({x,y,n} )} ]} \}\end{array}$$
$$\varphi ({{x_{lens}},{y_{lens}},n} )= \frac{{2\pi }}{\lambda }\varDelta R^{\prime}({{x_{lens}},{y_{lens}},n} )$$
$$\varDelta R^{\prime}({{x_{lens}},{y_{lens}},n} )= {R_p}({{x_{lens}},{y_{lens}},n} )- {R_c}({{x_{lens}},{y_{lens}},n} )$$
$$\begin{array}{c} {R_p}({{x_{lens}},{y_{lens}},n} )= \sqrt {{{[{{x_{lens}} - {x_p} - \varDelta x(n )} ]}^2} + {{({{y_{lens}} - {y_p}} )}^2} + {R^2}} \\ + \sqrt {{{[{{x_p} + \varDelta x(n )} ]}^2} + y_p^2 + {R^2}} \end{array}$$
$${R_c}({{x_{lens}},{y_{lens}},n} )= \sqrt {{{[{{x_{lens}} - \varDelta x(n )} ]}^2} + y_{lens}^2 + {R^2}} + \sqrt {\varDelta x{{(n )}^2} + y_{lens}^2 + {R^2}}$$
$$\varDelta {k_u}(n )= \frac{{2\pi }}{\lambda }\arctan \left[ {\frac{{2\varDelta u(n )}}{R}} \right],u = x\textrm{ or }y$$