Optica Publishing Group

Pulse-compression ghost imaging lidar via coherent detection

Open Access

Abstract

Ghost imaging (GI) lidar, a novel remote sensing technique, has received increasing interest in recent years. By combining the pulse-compression technique and coherent detection with GI, we propose a new lidar system called pulse-compression GI lidar. Our analytical results, backed up by numerical simulations, demonstrate that pulse-compression GI lidar can obtain a target's spatial intensity distribution, range, and moving velocity. Compared with a conventional pulsed GI lidar system, pulse-compression GI lidar can readily achieve high single-pulse energy by using a long pulse without sacrificing range resolution, and its coherent-detection mechanism eliminates the influence of stray light, which helps improve the detection sensitivity and detection range.

© 2016 Optical Society of America

1. Introduction

Ghost imaging (GI) is a novel non-scanning imaging method that obtains a target's image with a single-pixel bucket detector [1–8]. Owing to its high detection sensitivity, GI has great application prospects in remote sensing, and several kinds of GI lidar systems have been proposed [9–12]. Recently, a pulsed three-dimensional (3D) GI lidar was demonstrated and a high-resolution 3D image of a natural scene at about 1.0 km range was reported [13]. In this system, the range image was obtained by a simple pulse ranging method, while the azimuth images were reconstructed by computing the intensity correlation function between the received signal and the reference spatial intensity distribution. Because pulsed 3D GI lidar employs direct energy detection, it requires both high single-pulse energy and high peak power to obtain a sufficient signal-to-noise ratio (SNR). Moreover, the range resolution of pulsed 3D GI lidar is determined by the laser's pulse width. High range resolution therefore requires a laser with a shorter pulse width and a detector with a broader response bandwidth, which usually means that the transmitting system's single-pulse energy will be relatively low for a pulsed 3D GI lidar with a high pulse repetition frequency. However, since the detection range of pulsed 3D GI lidar mainly depends on the single-pulse energy, high range resolution and long detection range cannot be achieved simultaneously.

Coherent detection and the pulse-compression technique offer a way to resolve this conflict, as demonstrated in chirped amplitude-modulated (chirped-AM) lidar [14,15]. A chirped-AM light pulse of long duration is emitted, and the return light is received by coherent detection. With the de-chirping method, the return light is compressed into a sharp frequency component after a Fast Fourier Transform (FFT), yielding high range resolution. Based on this pulse-compression method, high range resolution and long detection range can be obtained simultaneously [16]. Meanwhile, chirped-AM lidar can also measure the velocity of a moving target [17]. Therefore, if we apply chirped modulation to a pseudo-thermal light source and use coherent detection to receive the signal reflected from targets, it is possible to build a new GI lidar (called pulse-compression GI lidar) with better capabilities that overcomes the difficulties faced by pulsed GI lidar. The paper is organized as follows: in Section 2, the system setup and theoretical scheme, including the signal model, light propagation, signal detection, image reconstruction and correction method, are presented; in Section 3, numerical simulations are presented to back up our theoretical results and some comparisons between our proposed GI lidar and conventional pulsed 3D GI lidar are discussed; in Section 4, conclusions are drawn.

2. System setup and analytical results

Figure 1 shows the schematic of pulse-compression GI lidar. A laser source with wavelength λ = 1550 nm is connected to an amplitude modulator, and a waveform generator provides the chirped waveform s(t) for the modulator. The chirped-AM light is split into two parts: the detecting light and the local oscillator (LO). The spatiotemporally modulated light Es(xs, t) is obtained by passing the chirped-AM detecting light through a rotating ground glass controlled by a step motor. The light is then divided by a beam splitter (BS) into a reference path and a test path. In the reference path, the light is transformed into the far field by an fr–fr optical system and the spatial intensity distribution Ir(xr) is recorded by a charge-coupled device (CCD) camera. In the test path, the light illuminates a 3D target located in the far field of the source (namely, of the rotating ground-glass plane). The backscattered light from the target is coupled into an optical fiber and then mixed with the LO light in a 2×4 90° optical hybrid. The four output ports of the optical hybrid are connected to two balanced detectors, where de-chirping is performed. After two bandpass filters (BPF), the complex de-chirped current ĩ(xt, t) is frequency-analyzed by an FFT and the corresponding intensity spectrum It(xt, f) is obtained. By measuring the intensity correlation between Ir(xr) and It(xt, f), we aim to obtain the target's information.


Fig. 1 The schematic of pulse-compression ghost imaging lidar via coherent detection.


According to GI theory [1–6], the intensity fluctuation correlation function between the distribution Ir (xr) and the intensity spectrum It (xt, f) can be expressed as

$$\Delta G^{(2,2)}(x_r,x_t,f)=\left\langle I_r(x_r)\,I_t(x_t,f)\right\rangle-\left\langle I_r(x_r)\right\rangle\left\langle I_t(x_t,f)\right\rangle, \tag{1}$$
where ⟨ · ⟩ denotes the ensemble average.

Since the temporal bandwidth of the chirped waveform s(t) is narrow relative to the optical frequency, the spatiotemporally modulated light field can be treated as a quasi-monochromatic, classical scalar wave [18]. The complex envelope of the light transmitted through the rotating ground glass can be described as

$$E_{s,n}(x_s,t)=\left[1+m\,s(t-nT)\right]P(t-nT)\,E_{s,n}(x_s), \tag{2}$$
where m is the modulation depth, n denotes the nth pulse, and T is the pulse interval; P(t) is a rectangular pulse waveform defined as
$$P(t)=\begin{cases}1, & 0<t\le T_0\\[2pt] 0, & \text{otherwise}\end{cases} \tag{3}$$
where $T_0$ is the pulse duration and satisfies $T_0<T$. $E_{s,n}(x_s)=A_n(x_s)\exp\left[j\phi_n(x_s)\right]$ is the spatial amplitude and phase modulation, with the statistical moment
$$\left\langle E_{s,n}(x_s)\,E^{*}_{s,n}(x'_s)\right\rangle=I_0\,\delta(x_s-x'_s), \tag{4}$$
where $x_s$ denotes the transverse coordinate at the rotating-ground-glass plane, $I_0$ is a constant, and $\delta(x)$ is the Dirac delta function. The chirped waveform s(t) is
$$s(t)=\cos\left(2\pi f_0 t+\pi\beta t^2\right). \tag{5}$$
In Eq. (5), $f_0$ is the starting frequency and $\beta=B/T_0$ is the frequency modulation rate, where B is the temporal bandwidth of the chirped waveform.
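The temporal part of the source model in Eqs. (2), (3) and (5) can be sketched numerically. The specific sample values below (pulse duration, bandwidth, starting frequency, sampling rate) are illustrative assumptions, scaled down from the paper's parameters so the arrays stay small:

```python
import numpy as np

# Illustrative, scaled-down parameters (our assumption, not the paper's values).
T0 = 1e-4           # pulse duration [s]
B = 1e6             # chirp bandwidth [Hz]
f0 = 1e5            # starting frequency [Hz]
beta = B / T0       # frequency modulation rate beta = B/T0 [Hz/s]
fs = 10 * (f0 + B)  # sampling rate, well above the highest chirp frequency
t = np.arange(0, T0, 1 / fs)

def P(t, T0):
    """Rectangular pulse of Eq. (3): 1 for 0 < t <= T0, else 0."""
    return ((t > 0) & (t <= T0)).astype(float)

def s(t):
    """Chirped waveform of Eq. (5): cos(2*pi*f0*t + pi*beta*t**2)."""
    return np.cos(2 * np.pi * f0 * t + np.pi * beta * t ** 2)

m = 1.0                               # modulation depth
envelope = (1 + m * s(t)) * P(t, T0)  # temporal factor of Eq. (2)
```

The instantaneous frequency of s(t) sweeps linearly from f0 to f0 + B over the pulse duration, which is what the de-chirping step later exploits.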

The propagation of the light field is described by the extended Huygens–Fresnel principle [19]. For the optical system depicted in Fig. 1, because the reference CCD camera records all the light intensity during the pulse duration, the light intensity distribution $I_n(x_r)$ is

$$I_n(x_r)\propto\int dt\left|\int dx_s\,E_{s,n}\!\left(x_s,t-\frac{2f_r}{c}\right)\exp\!\left(-\frac{j2\pi x_r x_s}{\lambda f_r}\right)\right|^2=\int dt\left[1+m\,s\!\left(t-\frac{2f_r}{c}-nT\right)\right]^2 P^2\!\left(t-\frac{2f_r}{c}-nT\right)\left|\int dx_s\,E_{s,n}(x_s)\exp\!\left(-\frac{j2\pi x_r x_s}{\lambda f_r}\right)\right|^2\propto\left|\int dx_s\,E_{s,n}(x_s)\exp\!\left(-\frac{j2\pi x_r x_s}{\lambda f_r}\right)\right|^2, \tag{6}$$
where fr is the focal length of the lens, c is the speed of light and xr denotes the transverse coordinate at the CCD camera plane.

In the test path, the light illuminates the target at range zi, and the backscattered light of the target propagates to the receiving aperture plane. The light field at the target plane zi is

$$E_{o,z_i,n}(x_o,t)=\frac{\exp(jkz_i)}{j\lambda z_i}\int dx_s\,E_{s,n}\!\left(x_s,t-\frac{z_i}{c}\right)\exp\!\left[\frac{j\pi\left(x_o-x_s\right)^2}{\lambda z_i}\right], \tag{7}$$
where $x_o$ is the transverse coordinate at the target plane. For the target at range $z_i$, its backscattered light field at the receiving aperture plane is
$$E_{t,z_i,n}(x_t,t)=\frac{\exp(jkz_i)}{j\lambda z_i}\int dx_o\,E_{o,z_i,n}\!\left(x_o,t-\frac{z_i}{c}\right)o_{z_i}(x_o)\exp\!\left[\frac{j\pi\left(x_t-x_o\right)^2}{\lambda z_i}\right], \tag{8}$$
where $x_t$ denotes the transverse coordinate at the receiving aperture and $o_{z_i}(x_o)$ is the average reflection coefficient of the planar target at the plane $z_i$.

As depicted in Fig. 1, the target is modeled as a set of quasi-planar, spatially extended objects located at discrete ranges $z_i$, where the distances satisfy $\mathrm{Min}(z_i)>D_s^2/\lambda$ (namely, the target lies in the far field of the source); here $D_s$ is the transverse size of the laser beam on the ground-glass plane and $\mathrm{Min}(z_i)$ is the minimum distance between the target and the source. Moreover, the light illuminating the planar object at a given range cannot reach the objects on the planes behind it, which means the planar objects have no transverse overlap. The total light field at the receiving aperture plane is then given by

$$E_{t,n}(x_t,t)=\sum_i E_{t,z_i,n}(x_t,t)=\sum_i\frac{\exp(j2kz_i)}{\left(j\lambda z_i\right)^2}\iint dx_s\,dx_o\,E_{s,n}\!\left(x_s,t-\frac{2z_i}{c}\right)\exp\!\left[\frac{j\pi\left(x_o-x_s\right)^2}{\lambda z_i}\right]o_{z_i}(x_o)\exp\!\left[\frac{j\pi\left(x_t-x_o\right)^2}{\lambda z_i}\right]. \tag{9}$$

If an object moves along the optical axis, its range can be written as $z_i=z_{i0}+v_it$, where $z_{i0}$ is the range when the first pulse meets the object and $v_i$ is the radial velocity. Since the pulse duration is short, we neglect the object's displacement during a single pulse, except in the phase term $\exp(j2kz_i)$ in Eq. (9). Then Eq. (9) becomes

$$E_{t,n}(x_t,t)=\sum_i\left\{1+m\,s\!\left[t-\frac{2z_{i,n}}{c}-nT\right]\right\}P\!\left[t-\frac{2z_{i,n}}{c}-nT\right]\exp\!\left[j2\pi f_{d_i}t\right]E_{t,i,n}(x_t), \tag{10}$$
where $z_{i,n}=z_{i0}+v_inT$ is the range when the nth pulse meets the object, $f_{d_i}=2v_i/\lambda$ is the Doppler frequency, and $E_{t,i,n}(x_t)$ is given by
$$E_{t,i,n}(x_t)=A_{t,i,n}(x_t)\exp\!\left[j\phi_{t,i,n}(x_t)\right]\propto\frac{\exp(j2kz_{i0})}{\left(j\lambda z_{i,n}\right)^2}\iint dx_s\,dx_o\,E_{s,n}(x_s)\exp\!\left[\frac{j\pi\left(x_o-x_s\right)^2}{\lambda z_{i,n}}\right]o_{z_{i,n}}(x_o)\exp\!\left[\frac{j\pi\left(x_t-x_o\right)^2}{\lambda z_{i,n}}\right]. \tag{11}$$

The LO light field is assumed to be uniform, namely

$$E_{LO,n}(x_t,t)=\left[1+m\,s(t-nT)\right]P(t-nT)\,A_{LO}\exp\!\left[j\phi_{LO,n}\right], \tag{12}$$
where $A_{LO}$ and $\phi_{LO,n}$ are the amplitude and phase of the nth LO pulse, respectively. In a coherent detection system, the signal light must be spatially coherent across the receiving aperture to obtain maximum mixing efficiency. For example, in the system depicted in Fig. 1, if the transverse scale of the target is L, then the transverse coherence length of the light field on the receiving aperture is ∼ λzi/L. To maintain coherence over a certain field of view, the transverse size of the receiver should not exceed this constraint [20].

The signal light $E_{t,n}(x_t,t)$, together with the stray light $E_{st,n}(x_t,t)$, enters the signal port of the 2×4 90° optical hybrid. The LO light is delayed into four quadrature states by the optical hybrid. The four outputs of the optical hybrid are [21]

$$\begin{aligned}
&E_{t,n}(x_t,t)+E_{st,n}(x_t,t)+E_{LO,n}(x_t,t)\\
&E_{t,n}(x_t,t)+E_{st,n}(x_t,t)-E_{LO,n}(x_t,t)\\
&E_{t,n}(x_t,t)+E_{st,n}(x_t,t)+E_{LO,n}(x_t,t)\exp(j\pi/2)\\
&E_{t,n}(x_t,t)+E_{st,n}(x_t,t)-E_{LO,n}(x_t,t)\exp(j\pi/2)
\end{aligned} \tag{13}$$
In the two balanced detectors, the light signal is converted into current, namely
$$\begin{aligned}
I:\ &\left|E_{t,n}(x_t,t)+E_{st,n}(x_t,t)+E_{LO,n}(x_t,t)\right|^2-\left|E_{t,n}(x_t,t)+E_{st,n}(x_t,t)-E_{LO,n}(x_t,t)\right|^2\\
Q:\ &\left|E_{t,n}(x_t,t)+E_{st,n}(x_t,t)+E_{LO,n}(x_t,t)\exp(j\pi/2)\right|^2-\left|E_{t,n}(x_t,t)+E_{st,n}(x_t,t)-E_{LO,n}(x_t,t)\exp(j\pi/2)\right|^2.
\end{aligned} \tag{14}$$
As Eq. (14) shows, the baseband current is obtained by differential detection, and only the interference terms remain. Since the stray light is incoherent with both the LO light and the signal light, the output is
$$\begin{aligned}
I:\ &2\left[E_{t,n}(x_t,t)\,E^{*}_{LO,n}(x_t,t)+E^{*}_{t,n}(x_t,t)\,E_{LO,n}(x_t,t)\right]\\
Q:\ &2\left[E_{t,n}(x_t,t)\,E^{*}_{LO,n}(x_t,t)\exp(-j\pi/2)+E^{*}_{t,n}(x_t,t)\,E_{LO,n}(x_t,t)\exp(j\pi/2)\right],
\end{aligned} \tag{15}$$
where ∗ denotes complex conjugation. As Eq. (15) shows, coherent detection eliminates the influence of stray light. The de-chirping process happens simultaneously in the balanced detection, and Eq. (15) becomes
$$\begin{aligned}
i_{I,n}(x_t,t)\propto{}&\sum_i\left\{1+\frac{m}{2}\cos\!\left(4\pi\beta z_{i,n}t/c+\alpha_{i,n}\right)+m\,s\!\left(t-2z_{i,n}/c-nT\right)+m\,s(t-nT)\right\}\\
&\times P(t-nT)\,A_{LO}A_{t,i,n}(x_t)\cos\!\left[2\pi f_{d_i}t+\phi_{t,i,n}(x_t)-\phi_{LO,n}\right]\\
i_{Q,n}(x_t,t)\propto{}&\sum_i\left\{1+\frac{m}{2}\cos\!\left(4\pi\beta z_{i,n}t/c+\alpha_{i,n}\right)+m\,s\!\left(t-2z_{i,n}/c-nT\right)+m\,s(t-nT)\right\}\\
&\times P(t-nT)\,A_{LO}A_{t,i,n}(x_t)\sin\!\left[2\pi f_{d_i}t+\phi_{t,i,n}(x_t)-\phi_{LO,n}\right],
\end{aligned} \tag{16}$$
where $\alpha_{i,n}=4\pi f_0z_{i,n}/c-4\pi\beta nT_0/c-\pi\beta\left(2z_{i,n}/c\right)^2$. By choosing proper parameters to satisfy $2\beta z_{i,n}/c<f_0$, only the beat-frequency component is left. The complex output current of the coherent detection is
$$\tilde{i}_n(x_t,t)=\mathrm{BPF}\!\left[i_{I,n}(x_t,t)\right]+j\,\mathrm{BPF}\!\left[i_{Q,n}(x_t,t)\right]=\sum_i\frac{m}{2}\cos\!\left[4\pi\beta z_{i,n}t/c+\alpha_{i,n}\right]P(t-nT)\,A_{LO}A_{t,i,n}(x_t)\exp\!\left\{j\left[2\pi f_{d_i}t+\phi_{t,i,n}(x_t)-\phi_{LO,n}\right]\right\}. \tag{17}$$
After the FFT, the intensity spectrum $I_{t,n}(x_t,f)$ is
$$I_{t,n}(x_t,f)\propto\sum_i\left\{\mathrm{sinc}^2\!\left[T_0\!\left(f-f_{d_i}\right)\right]+\frac{m^2}{4}\mathrm{sinc}^2\!\left[T_0\!\left(f-f_{d_i}-\frac{2z_{i,n}\beta}{c}\right)\right]\right\}\left|A_{LO}\right|^2\left|A_{t,i,n}(x_t)\right|^2, \tag{18}$$
where $\mathrm{sinc}(x)=\sin(\pi x)/(\pi x)$. In Eq. (18), the terms $\mathrm{sinc}^2\!\left[T_0\!\left(f-f_{d_i}\right)\right]$ and $\mathrm{sinc}^2\!\left[T_0\!\left(f-f_{d_i}-2z_{i,n}\beta/c\right)\right]$ carry the Doppler-frequency and Doppler-range-frequency information, respectively, which shows that the structure of the backscattered light's intensity spectrum is determined by the object's motion status.

If the target is static during the sampling time, then the Doppler frequency $f_{d_i}=0$ and that component is filtered out by the BPF. Meanwhile, the Doppler-range frequency $f_{b_{i,n}}=f_{d_i}+2z_{i,n}\beta/c$ reduces to a pure range frequency $2z_{i0}\beta/c$ that does not change during the whole sampling time. Then Eq. (18) becomes

$$I_{t,n}(x_t,f)\propto\sum_i\frac{m^2}{4}\mathrm{sinc}^2\!\left[T_0\!\left(f-\frac{2z_{i0}\beta}{c}\right)\right]\left|A_{LO}\right|^2\left|A_{t,i,n}(x_t)\right|^2. \tag{19}$$
Eq. (19) demonstrates that the frequency resolution is $\Delta f=1/T_0$, which corresponds to a range resolution $\Delta z=c\,\Delta f/(2\beta)=c/2B$. If the targets shown in Fig. 1 have surfaces that are sufficiently rough (on the scale of an optical wavelength), then $\left\langle o_{z_i}(x)\,o^{*}_{z_i}(x')\right\rangle=O_{z_i}(x)\,\delta(x-x')$ [11]. By substituting Eq. (4), Eq. (6), Eq. (11) and Eq. (19) into Eq. (1), and assuming the field fluctuations obey a complex circular Gaussian random process with zero mean [22], after some calculation we obtain
$$\Delta G^{(2,2)}\!\left(x_r,x_t,f=2z_{i0}\beta/c\right)\propto\frac{m^2}{4}\left|A_{LO}\right|^2\int dx_o\,O_{z_{i0}}(x_o)\,\mathrm{sinc}^2\!\left[\frac{D_s}{\lambda f_r}\!\left(x_r-\frac{f_r}{z_{i0}}x_o\right)\right]. \tag{20}$$
Eq. (20) shows that the system's angular resolution is $\lambda/D_s$.
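The ranging principle behind the static-target result can be checked numerically: a delayed copy of the chirp, mixed with the transmitted chirp, beats at $f_b=2\beta z/c$, and the FFT peak recovers the range with resolution $c/2B$. The sketch below uses scaled-down parameters of our own choosing (and sets the starting frequency to zero for simplicity):

```python
import numpy as np

c = 3e8
T0 = 1e-3            # pulse duration [s] (illustrative, not the paper's value)
B = 1e6              # chirp bandwidth [Hz]
beta = B / T0        # chirp rate [Hz/s]
z = 1500.0           # one-way target range [m]
tau = 2 * z / c      # round-trip delay [s]

fs = 8e6             # sampling rate [Hz]
t = np.arange(0, T0, 1 / fs)
phase = lambda tt: np.pi * beta * tt ** 2   # chirp phase with f0 = 0
tx = np.cos(phase(t))                       # transmitted chirp
rx = np.cos(phase(t - tau))                 # delayed echo
beat = tx * rx                              # mixing: stationary tone at beta*tau

spec = np.abs(np.fft.rfft(beat * np.hanning(beat.size)))
freqs = np.fft.rfftfreq(beat.size, 1 / fs)
f_peak = freqs[np.argmax(spec[5:]) + 5]     # skip the DC region
z_est = f_peak * c / (2 * beta)             # invert f_b = 2*beta*z/c
```

With these numbers the beat tone sits at $\beta\tau=10$ kHz, so `z_est` lands on 1500 m to within one frequency bin; the bin width $1/T_0$ corresponds exactly to the range resolution $c/2B$ (150 m here).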

If an object moves along the optical axis, Eq. (18) shows that the intensity spectrum contains two peak components, namely the Doppler frequency $f_{d_i}=2v_i/\lambda$ and the Doppler-range frequency $f_{b_{i,n}}=f_{d_i}+2z_{i,n}\beta/c$. For an object moving at constant velocity, the Doppler frequency is fixed during the whole sampling time. If the object's displacement during sampling does not exceed the axial correlation depth of the light field [8], then substituting the Doppler-frequency term of Eq. (18), together with Eq. (4), Eq. (6) and Eq. (11), into Eq. (1) gives

$$\Delta G^{(2,2)}\!\left(x_r,x_t,f=f_{d_i}\right)\propto\left|A_{LO}\right|^2\int dx_o\,O_{z_{i,n}}(x_o)\,\mathrm{sinc}^2\!\left[\frac{D_s}{\lambda f_r}\!\left(x_r-\frac{f_r}{z_{i,n}}x_o\right)\right]. \tag{21}$$

The Doppler-range frequency can be expressed as $f_{b_{i,n}}=f_{d_i}+2\left(z_{i0}+v_inT\right)\beta/c$, which means that $f_{b_{i,n}}$ is proportional to the radial velocity and to the pulse index; it therefore changes linearly with the pulse index n during sampling. To reconstruct the ghost image from the Doppler-range frequency component, we must track $f_{b_{i,n}}$ for every single pulse. Substituting the Doppler-range frequency intensity of Eq. (18), together with Eq. (4), Eq. (6) and Eq. (11), into Eq. (1) gives

$$\Delta G^{(2,2)}\!\left(x_r,x_t,f=f_{b_{i,n}}\right)\propto\frac{m^2}{4}\left|A_{LO}\right|^2\int dx_o\,O_{z_{i,n}}(x_o)\,\mathrm{sinc}^2\!\left[\frac{D_s}{\lambda f_r}\!\left(x_r-\frac{f_r}{z_{i,n}}x_o\right)\right]. \tag{22}$$
From Eqs. (21) and (22), because the image magnification $f_r/z_{i,n}$ changes with the pulse index n, the transverse resolution of GI is degraded [23]. The motion blur is $\Delta x_r\approx\left(f_r/z_{i,n_{min}}-f_r/z_{i,n_{max}}\right)\Delta x_i$, where $z_{i,n_{min}}$ and $z_{i,n_{max}}$ are the minimum and maximum ranges between the object plane and the source plane over the whole sampling time, and $\Delta x_i$ is the object's transverse offset from the optical axis [23]. We emphasize that if $\Delta x_r$ is smaller than the speckle's transverse size $\lambda f_r/D_s$ on the CCD camera plane, the axial motion does not degrade the imaging resolution. If $\Delta x_r$ is larger than $\lambda f_r/D_s$, the image is blurred, but the motion blur can easily be removed by resizing the reference speckle, since we can obtain the object's range for each single pulse [23]. Moreover, Eq. (21) and Eq. (22) are clearly identical in the transverse dimension, so a moving planar object produces two identical tomographic images. For a moving planar object, the corrected range at the nth pulse and the velocity are obtained by computing
$$z_{i,n}=\frac{c\left(f_{b_{i,n}}-f_{d_i}\right)}{2\beta},\qquad v_i=\frac{\lambda f_{d_i}}{2}. \tag{23}$$
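Eq. (23) is a direct inversion once the two peak frequencies are read off the spectrum. A minimal helper, checked against the object-1 frequencies quoted later in the simulation section (β = B/T0 = 1 GHz / 400 µs; the function name is our own):

```python
C = 3e8               # speed of light [m/s]
WAVELENGTH = 1550e-9  # laser wavelength [m]

def range_and_velocity(f_beat, f_doppler, beta):
    """Invert Eq. (23): corrected range z_{i,n} and radial velocity v_i
    from the Doppler-range peak f_beat and the Doppler peak f_doppler."""
    z = C * (f_beat - f_doppler) / (2 * beta)
    v = WAVELENGTH * f_doppler / 2
    return z, v

beta = 1e9 / 400e-6   # B / T0 from the simulation parameters
# Object 1 in the moving-target simulation: f_b ~ 3.476 MHz, f_d ~ 0.129 MHz
z1, v1 = range_and_velocity(3.476e6, 0.129e6, beta)
```

This reproduces the values quoted for object 1 (z ≈ 200.82 m, v ≈ 0.1 m/s).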

If the target contains both static and moving planar objects, Eqs. (20)–(22) imply that the reconstructed images are obtained from a static range-frequency component, a Doppler-frequency component or a Doppler-range-frequency component. Thus the most important step is to determine whether a reconstructed image comes from a static scatterer or a moving scatterer. According to Eqs. (20)–(22), the corresponding tomographic images at frequency f can be reconstructed independently. Because the target does not overlap in the transverse dimension, if no tomographic images overlap at $x_r=x_{r0}$, the scatterer there is static, whereas if two tomographic images overlap at $x_r=x_{r0}$, the corresponding scatterer is moving. For a moving scatterer, the velocity and corrected range are obtained from Eq. (23), and we emphasize that the measured distance between the object plane and the lidar system corresponds to the range at the time of emitting the last pulse of the sampling process. If deblurring is necessary, the ghost image is reconstructed using the resized reference speckle and the corresponding Doppler-range frequency intensity components. Finally, the target's spatial intensity distribution, range, and moving velocity can all be obtained by pulse-compression GI lidar.

Furthermore, as shown in Eqs. (20)–(22), the target's information can be extracted with a single point-like detector when the measurement process reaches the ensemble average. In practice, however, to obtain a reasonable information output rate the measurement number is usually small, and the coherent detection efficiency degrades significantly due to fluctuations of the backscattered light field, so the visibility of GI with a single coherent detector is very poor [24]. Following [25], we can employ a random sparse coherent-detection array to improve the detection SNR by summing the restored intensity spectra, which is equivalent to increasing the measurement number, and the visibility of GI is thereby enhanced.

3. Simulation results and discussion

In order to verify the analytical results of our proposed pulse-compression GI lidar, we perform numerical simulations; Fig. 2 shows the simulation process. The spatiotemporal source is discretized as a two-dimensional (2D) lattice, namely $E_{i,j}(t)=\left[1+m\,s(t)\right]A_{i,j}\exp\left[j\phi_{i,j}\right]$. Following [26], the amplitude and phase are statistically independent of each other, and all element sources are independent and identically distributed. The amplitude $A_{i,j}$ obeys a Rayleigh distribution and the phase $\phi_{i,j}$ is uniformly distributed on (0, 2π).
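One realization of this discretized pseudo-thermal source can be generated as follows (the lattice size and Rayleigh scale below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def speckle_field(n, rng):
    """Return an n x n complex source field E_{i,j} = A_{i,j} exp(j*phi_{i,j}),
    with independent, identically distributed lattice elements."""
    amplitude = rng.rayleigh(scale=1.0, size=(n, n))    # A_{i,j} ~ Rayleigh
    phase = rng.uniform(0.0, 2.0 * np.pi, size=(n, n))  # phi_{i,j} ~ U(0, 2*pi)
    return amplitude * np.exp(1j * phase)

E = speckle_field(64, rng)
```

Note that a Rayleigh amplitude with an independent uniform phase makes each element a zero-mean circular complex Gaussian variable, consistent with the Gaussian-statistics assumption used after Eq. (19).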


Fig. 2 Schematic of simulation and process.


The reference intensity distribution $I_{r,n}(x_r)$ is obtained by computing Eq. (6). In the test path, the target's reflection function is discretized as a set of 2D lattices corresponding to the different planar objects. Since the planar objects do not occlude each other, the light propagation for each planar object can be computed independently. The spatiotemporal light fields $E_{o,z_i,n}(x_o,t)$ and $E_{t,z_i,n}(x_t,t)$ are obtained by numerically evaluating Eq. (7) and Eq. (8), respectively. To demonstrate our lidar's performance in scenarios with stray light, a random stray light field $E_{st,n}(x_t,t)$ is generated at the receiving aperture plane. Following Eq. (9), the total field $E_{t,n}(x_t,t)$ at the receiving aperture is the coherent superposition of all the planar objects' return fields plus the stray light.

The detection process is simulated by computing Eqs. (13)–(15). Using a digital BPF, we obtain the baseband current $\tilde{i}_n(x_t,t)$ corresponding to Eq. (17). To simulate a random sparse detection array, we randomly pick positions on the receiving aperture plane, as shown in Fig. 2. After the FFT, the intensity spectrum $I_{t,n,k}(f)$ is obtained, where k denotes the kth detector at $x_t$. Finally, the image of pulse-compression GI lidar is reconstructed by computing the following correlation function

$$\Delta G^{(2,2)}(x_r,f)=\frac{1}{N}\sum_n I_{r,n}(x_r)\left[\sum_k I_{t,n,k}(f)-\gamma\sum_{x_r}I_{r,n}(x_r)\right], \tag{24}$$
where $\gamma=\sum_n\sum_k I_{t,n,k}(f)\big/\sum_n\sum_{x_r}I_{r,n}(x_r)$; Eq. (24) is the differential ghost imaging (DGI) reconstruction algorithm [27,28]. In addition, N denotes the total measurement number.
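The DGI estimator of Eq. (24) can be sketched compactly. The toy usage below replaces the full lidar chain with a synthetic transmissive "object" (an assumption purely for illustration): pixels 5 and 20 reflect light, and the reconstruction peaks there:

```python
import numpy as np

def dgi_reconstruct(I_ref, I_test):
    """Differential ghost imaging estimator of Eq. (24).

    I_ref  : (N, P) array, reference intensity pattern of each pulse (P pixels).
    I_test : (N,) array, bucket value per pulse (here standing in for the summed
             intensity spectra of the sparse detector array at one frequency bin).
    """
    I_ref = np.asarray(I_ref, dtype=float)
    I_test = np.asarray(I_test, dtype=float)
    ref_sum = I_ref.sum(axis=1)            # sum over x_r for each pulse
    gamma = I_test.sum() / ref_sum.sum()   # normalization factor gamma
    weights = I_test - gamma * ref_sum     # differential bucket signal
    return (I_ref * weights[:, None]).mean(axis=0)

rng = np.random.default_rng(1)
patterns = rng.exponential(size=(5000, 32))  # thermal-like speckle intensities
obj = np.zeros(32)
obj[5] = obj[20] = 1.0                       # hypothetical two-pixel object
bucket = patterns @ obj                      # bucket signal per pulse
image = dgi_reconstruct(patterns, bucket)    # peaks at the object pixels
```

The choice of γ makes the differential weights sum exactly to zero over the N pulses, which is what suppresses the constant background term relative to plain correlation GI.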

In the numerical simulations, the parameters are set as follows: Ds = 2 mm, fr = 250 mm, T0 = 400 μs, T = 500 μs, m = 1 and B = 1 GHz. As shown in Fig. 1, the three planar objects are identical double slits (slit width a = 0.5 m, slit height h = 1.5 m, and center-to-center separation d = 0.87 m) at different ranges (object 1 at 199.9 m, object 2 at 200 m and object 3 at 200.3 m) with different transverse positions. The distances between the centers of objects 1, 2 and 3 and the optical axis are ∆x1 ≈ 1.99 m, ∆x2 ≈ 1.64 m, and ∆x3 ≈ 1.58 m, respectively. For the scenario with moving components, objects 1 and 2 have radial velocities of 0.1 m/s and 1 m/s away from the lidar, respectively, while object 3 is static. The total measurement number is N = 20000, so the whole sampling time is 10 s. In this case, the motion blurs for objects 1 and 2 are ∆xr,1 ≈ (fr/z1,0 − fr/z1,N)∆x1 ≈ 12 μm and ∆xr,2 ≈ (fr/z2,0 − fr/z2,N)∆x2 ≈ 97 μm, respectively. However, the speckle's transverse size on the CCD camera plane is λfr/Ds ≈ 194 μm; thus, as discussed after Eq. (22), we do not need to resize the reference speckle recorded by the CCD camera in the image reconstruction process.
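The motion-blur and speckle-size numbers above follow directly from the stated parameters and can be verified in a few lines (shown here for object 1):

```python
# Sanity check of the quoted simulation numbers for object 1.
wavelength = 1550e-9                   # laser wavelength [m]
Ds, fr = 2e-3, 250e-3                  # beam size on ground glass, focal length [m]
speckle_size = wavelength * fr / Ds    # speckle transverse size on the CCD, ~194 um

z1_0, dx1, v1 = 199.9, 1.99, 0.1       # initial range [m], offset [m], velocity [m/s]
N, T = 20000, 500e-6                   # measurement number and pulse interval [s]
z1_N = z1_0 + v1 * N * T               # range at the last pulse (+1 m over 10 s)
blur1 = (fr / z1_0 - fr / z1_N) * dx1  # transverse motion blur, ~12 um
# blur1 < speckle_size, so no reference-speckle resizing is needed here.
```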

Figure 3 presents the results of pulse-compression GI lidar for a static scenario. Column (1) shows the intensity spectrum of the random sparse detection array with 1, 25 and 100 detectors (the sampling number is 20000), and Column (2) shows the intensity spectrum of a single pulse. In Column (1), Line1 and Line2 are two peak frequency components, corresponding to P1 and P2 in Column (2), respectively. Both Line1 and Line2 are perpendicular to the frequency axis, which means that the objects are static. By computing Eq. (24), the corresponding reconstructed images for the labeled peak frequency components Line1 and Line2 are illustrated in Column (3). As displayed in Fig. 3(a), the intensity spectrum's SNR for a single coherent detector is low, and so is the quality of the GI reconstruction result. However, when random sparse receivers are used to collect the backscattered light from the target, both the intensity spectrum's SNR and the GI reconstruction quality increase dramatically with the number of receivers. In addition, the frequencies of Line1 and Line2 are fLine1 ≈ 3.331 MHz and fLine2 ≈ 3.338 MHz, and the corresponding distances are zLine1 ≈ 199.86 m and zLine2 ≈ 200.29 m. As predicted by the theory, the range resolution of pulse-compression GI lidar is c/2B ≈ 0.15 m; thus the spectra of object 1 and object 2 both appear in P1 and, as shown in Column (3), object 1 and object 2 appear in the same tomographic image.


Fig. 3 Simulation results of pulse-compression GI lidar for a static scenario (the target consists of three planar objects). (a), (b) and (c) are the intensity spectrum and image reconstruction results using 1, 25, and 100 coherent receivers, respectively (averaged over 20000 measurements); Columns (1)–(3) present the intensity spectrum of the 20000 pulses, the intensity spectrum of a single pulse, and the GI reconstruction results for peak frequency components Line1 and Line2, respectively. The different colors present the tomographic images at different ranges. In addition, the frequencies of Line1 and Line2 are fLine1 ≈ 3.331 MHz and fLine2 ≈ 3.338 MHz, and the corresponding distances are zLine1 ≈ 199.86 m and zLine2 ≈ 200.29 m.


When imaging a scenario with moving objects, Fig. 4(a) shows the Doppler domain of the frequency spectrum, while Fig. 4(b) shows the Doppler-range domain. In Fig. 4(a), Line1 and Line2 are perpendicular to the frequency axis, which suggests that the two objects move with two different constant velocities. In Fig. 4(b), Line3 stays within one resolution cell during the sampling time, so it corresponds to a static object, namely object 3. Neither Line4 nor Line5 is perpendicular to the frequency axis, which means the corresponding objects are moving during the sampling process. For image reconstruction, we simply feed the peak values along Lines 1–5 into Eq. (24), and the corresponding results are shown in (c)–(g). According to the image correction process described above, we can identify that Figs. 4(c) and 4(f) correspond to the same planar object (the image of object 1), Fig. 4(e) is a static object (the image of object 3), while Figs. 4(d) and 4(g) are the image of object 2. Since the offset of a moving target during the sampling time cannot be ignored, the peak frequency of the last sampling pulse is used to compute the corrected range. According to Figs. 4(a) and 4(b), the frequencies of Line1 to Line5 at the time of the last sampling pulse are fLine1 ≈ 0.129 MHz, fLine2 ≈ 1.290 MHz, fLine3 ≈ 3.338 MHz, fLine4 ≈ 3.476 MHz and fLine5 ≈ 4.790 MHz, respectively. By computing Eq. (23), the corrected ranges and velocities are z1 ≈ 200.82 m, v1 ≈ 0.100 m/s, z2 ≈ 210.00 m, v2 ≈ 0.9998 m/s and z3 ≈ 200.28 m, v3 = 0 m/s. Therefore, as shown in Fig. 4(k), the 3D image of the target scenario can be obtained.


Fig. 4 Simulation results of pulse-compression GI lidar for a scenario with moving objects. The target setup is the same as in Fig. 3 except that planar objects 1 and 2 have radial velocities of 0.1 m/s and 1 m/s, respectively. (a) and (b) are the Doppler domain and Doppler-range domain of the intensity spectrum of the random sparse detection array with 100 detectors, respectively. (c)–(g) are the GI reconstruction results for the peak frequency components Line1–Line5 (averaged over 20000 measurements); (h)–(j) are the reconstructed images of object 1, object 3 and object 2, respectively; (k) is the reconstructed image of the 3D target. The different colors present the tomographic images at different ranges.


In remote sensing applications, we usually cannot obtain a signal with high SNR because of stray light. To illustrate the performance of pulse-compression GI lidar in scenarios with stray light, we compare conventional pulsed GI lidar and pulse-compression GI lidar. For the conventional pulsed GI lidar, we use a pulsed laser with a pulse width of 1 ns. Using the same simulation parameters as Fig. 3 and 25 random sparse detectors, Fig. 5 gives the reconstruction results of conventional pulsed GI lidar and pulse-compression GI lidar when the detection SNR for a single detector is 1 dB, 3 dB, 5 dB, and 10 dB. The image signal-to-noise ratio δ defined in [27] is given in Fig. 5 to evaluate the quality of the reconstruction results. It is clearly seen that the reconstruction quality of pulsed GI lidar increases with the detection SNR. In contrast, pulse-compression GI lidar hardly depends on the detection SNR because the LO light does not interfere with the stray light, which means that pulse-compression GI lidar can eliminate the influence of stray light on imaging quality. Moreover, for pulsed 3D GI lidar, the detection range mainly depends on the single-pulse energy, so high range resolution and long detection range cannot be achieved simultaneously. For pulse-compression GI lidar, in contrast, high emitted energy is obtained by using a long pulse, while the chirped amplitude modulation and pulse-compression method make it possible to obtain the same ranging resolution as a pulsed lidar with a short pulse width. Therefore, pulse-compression GI lidar has clear advantages in remote sensing compared with conventional pulsed 3D GI lidar.


Fig. 5 Simulation results of conventional pulsed 3D GI lidar and pulse-compression GI lidar at different levels of stray light (with 25 random sparse detectors and averaged over 5000 measurements). The upper row shows the reconstruction results of conventional pulsed 3D GI lidar and the bottom row those of pulse-compression GI lidar. (a) SNR = 1 dB; (b) SNR = 3 dB; (c) SNR = 5 dB; (d) SNR = 10 dB. The image signal-to-noise ratio δ is given above each reconstruction result.


4. Conclusion

In summary, we have demonstrated by theoretical analysis and numerical simulation that coherent detection and pulse compression can be applied to GI lidar to image a 3D scenario with moving components. The emitted laser light is spatiotemporally modulated, and the received pulse is de-chirped in the optical domain using coherent detection. The proposed pulse-compression GI lidar uses a long-duration, low-peak-power pulse to obtain high range resolution. Compared with conventional pulsed 3D GI lidar, pulse-compression GI lidar can effectively eliminate the influence of stray light on imaging quality, which is very useful for remote sensing of weak signals.

Funding

The Hi-Tech Research and Development Program of China (2013AA122901); National Natural Science Foundation of China (NSFC) (61571427); Youth Innovation Promotion Association of the Chinese Academy of Sciences (2013162).

References and links

1. M. D'Angelo and Y. Shih, "Quantum imaging," Laser Phys. Lett. 2, 567–596 (2005). [CrossRef]  

2. D. Z. Cao, J. Xiong, and K. Wang, “Geometrical optics in correlated imaging systems,” Phys. Rev. A 71, 013801 (2005). [CrossRef]  

3. D. Zhang, Y. Zhai, L. Wu, and X. Chen, “Correlated two-photon imaging with true thermal light,” Opt. Lett. 30, 2354–2356 (2005). [CrossRef]   [PubMed]  

4. F. Ferri, D. Magatti, A. Gatti, M. Bache, E. Brambilla, and L. A. Lugiato, “High-resolution ghost image and ghost diffraction experiments with thermal light,” Phys. Rev. Lett. 94, 183602 (2005). [CrossRef]   [PubMed]  

5. A. Gatti, M. Bache, D. Magatti, E. Brambilla, F. Ferri, and L. A. Lugiato, “Coherent imaging with pseudo-thermal incoherent light,” J. Mod. Opt. 53, 739–760 (2006). [CrossRef]  

6. W. Gong, P. Zhang, X. Shen, and S. Han, “Ghost “pinhole” imaging in Fraunhofer region,” Appl. Phys. Lett. 95, 071110 (2009). [CrossRef]  

7. J. H. Shapiro and R. W. Boyd, “The physics of ghost imaging,” Quantum Inf. Process. 11, 949–993 (2012). [CrossRef]  

8. W. Gong and S. Han, “The influence of axial correlation depth of light field on lensless ghost imaging,” J. Opt. Soc. Am. B 27, 675–678 (2010). [CrossRef]  

9. C. Zhao, W. Gong, M. Chen, E. Li, H. Wang, W. Xu, and S. Han, “Ghost imaging lidar via sparsity constraints,” Appl. Phys. Lett. 101, 141123 (2012). [CrossRef]  

10. M. Chen, E. Li, W. Gong, Z. Bo, X. Xu, C. Zhao, X. Shen, W. Xu, and S. Han, “Ghost imaging lidar via sparsity constraints in real atmosphere,” Opt. Photon. J. 3, 83–85 (2013). [CrossRef]  

11. N. D. Hardy and J. H. Shapiro, “Computational ghost imaging versus imaging laser radar for three-dimensional imaging,” Phys. Rev. A. 87, 023820 (2013). [CrossRef]  

12. Y. Zhu, J. Shi, H. Li, and G. Zeng, “Three-dimensional ghost imaging based on periodic diffraction correlation imaging,” Chin. Opt. Lett. 12, 071101 (2014). [CrossRef]  

13. W. Gong, C. Zhao, H. Yu, M. Chen, H. Wang, W. Xu, and S. Han, "Three-dimensional ghost imaging lidar via sparsity constraint," Sci. Rep. 6, 26133 (2016). [CrossRef]  

14. B. Stann, B. C. Redman, W. Lawler, M. Giza, J. Dammann, and K. Krapels, “Chirped amplitude modulation ladar for range and Doppler measurements and 3-D imaging,” Proc. SPIE 6550, 655005 (2007). [CrossRef]  

15. A. Peter, C. Allen, and R. Hui, "Chirped lidar using simplified homodyne detection," J. Lightwave Technol. 27, 3351 (2009). [CrossRef]  

16. C. Allen, Y. Cobanoglu, S. K. Chong, and S. Gogineni, “Performance of a 1319 nm laser radar using RF pulse compression,” in Geoscience and Remote Sensing Symposium (IEEE, 2001), pp.997–999.

17. X. Yu, G. Hong, Y. Ling, and R. Shu, “Research on range-Doppler homodyne detection system,” Proc. SPIE 8196, 819618 (2011). [CrossRef]  

18. J. W. Strohbehn, Laser Beam Propagation in the Atmosphere (Springer, 1978). [CrossRef]  

19. J. W. Goodman, Introduction to Fourier Optics (McGraw-Hill, 1968).

20. V. V. Protopopov, Laser Heterodyning (Springer, 2009). [CrossRef]  

21. L. G. Kazovsky, L. Curtis, W. C. Young, and N. K. Cheung, "All-fiber 90° optical hybrid for coherent communications," Appl. Opt. 26, 437 (1987). [CrossRef]   [PubMed]  

22. J. W. Goodman, Statistical Optics (Wiley, 1985).

23. X. Li, C. Deng, M. Chen, W. Gong, and S. Han, “Ghost imaging for an axially moving target with an unknown constant speed,” Photon. Res. 3, 153–157 (2015). [CrossRef]  

24. C. Wang, D. Zhang, Y. Bai, and B. Chen, “Ghost imaging for a reflected object with a rough surface,” Phys. Rev. A 82, 063814 (2010). [CrossRef]  

25. K. P. Chan and D. K. Killinger, “Enhanced detection of atmospheric-turbulence-distorted 1-μm coherent lidar returns using a two-dimensional heterodyne detector array,” Opt. Lett. 16, 1219 (1991). [CrossRef]   [PubMed]  

26. M. Zhang, Q. Wei, X. Shen, Y. Liu, H. Liu, and S. Han, “Statistical optics based numerical modeling of ghost imaging and its experimental approval,” Acta Opt. Sin. 27, 1858–1866 (2007).

27. W. Gong and S. Han, “A method to improve the visibility of ghost images obtained by thermal light,” Phys. Lett. A 374, 1005 (2010). [CrossRef]  

28. F. Ferri, D. Magatti, L. A. Lugiato, and A. Gatti, “Differential ghost imaging,” Phys. Rev. Lett. 104, 253603 (2010). [CrossRef]   [PubMed]  

Figures (5)

Fig. 1 The schematic of pulse-compression ghost imaging lidar via coherent detection.
Fig. 2 Schematic of the simulation and processing.
Fig. 3 Simulation results of pulse-compression GI lidar for a static scenario (the target consists of three planar objects). (a), (b) and (c) are the intensity spectra and image reconstruction results obtained with 1, 25, and 100 coherent receivers, respectively (averaged over 20000 measurements); Columns (1)–(3) present the intensity spectrum of the 20000 pulses, the intensity spectrum of a single pulse, and the GI reconstruction results for the peak frequency components Line1 and Line2, respectively. The different colors present the tomographic images at different ranges. In addition, the frequencies of Line1 and Line2 are fLine1 ≈ 3.331 MHz and fLine2 ≈ 3.338 MHz, and the corresponding distances are zLine1 ≈ 199.86 m and zLine2 ≈ 200.29 m.
Fig. 4 Simulation results of pulse-compression GI lidar for a scenario with moving objects. The target setup is the same as in Fig. 3 except that planar objects 1 and 2 have radial velocities of 0.1 m/s and 1 m/s, respectively. (a) and (b) are the Doppler domain and the range-Doppler region of the intensity spectrum of the random sparse detection array with 100 detectors, respectively. (c)–(g) are the GI reconstruction results for the peak frequency components Line1–Line5 (averaged over 20000 measurements); (h)–(j) are the reconstructed images of object 1, object 3 and object 2, respectively; (k) is the reconstructed image of the 3D target. The different colors present the tomographic images at different ranges.
Fig. 5 Simulation results of conventional pulsed 3D GI lidar and pulse-compression GI lidar at different levels of stray light (with 25 random sparse detectors, averaged over 5000 measurements). The upper row shows the reconstruction results of conventional pulsed 3D GI lidar and the bottom row those of pulse-compression GI lidar. (a) SNR = 1 dB; (b) SNR = 3 dB; (c) SNR = 5 dB; (d) SNR = 10 dB. The reconstruction signal-to-noise ratio δ is given above each result.

Equations (24)


\Delta G^{(2,2)}(x_r, x_t, f) = \left\langle I_r(x_r)\, I_t(x_t, f) \right\rangle - \left\langle I_r(x_r) \right\rangle \left\langle I_t(x_t, f) \right\rangle
E_{s,n}(x_s, t) = \left[ 1 + m\, s(t - nT) \right] P(t - nT)\, E_{s,n}(x_s)
P(t) = \begin{cases} 1, & 0 < t \le T_0 \\ 0, & \text{else} \end{cases}
\left\langle E_{s,n}(x_s)\, E^{*}_{s,n}(x'_s) \right\rangle = I_0\, \delta(x_s - x'_s)
s(t) = \cos\!\left( 2\pi f_0 t + \pi \beta t^2 \right)
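The chirp s(t) is what enables pulse compression: a round-trip delay τ = 2z/c, mixed against an undelayed copy of the same chirp, produces a beat tone at f_b = βτ = 2zβ/c, so a long pulse still yields fine range resolution. A minimal numerical sketch of this dechirping step is below; the chirp rate β, pulse width T0, sampling rate and target range are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

# Dechirping sketch: mixing a delayed chirp with the local chirp gives a
# beat at f_b = 2*z*beta/c, from which the range z is recovered.
# All parameter values are illustrative assumptions.
c = 3e8           # speed of light (m/s)
beta = 2.5e12     # chirp rate (Hz/s), assumed
f0 = 0.0          # baseband chirp start frequency (Hz), assumed
T0 = 100e-6       # pulse width (s), assumed
fs = 50e6         # sampling rate (Hz), assumed
z = 200.0         # target range (m), assumed

t = np.arange(0, T0, 1 / fs)
tau = 2 * z / c   # round-trip delay
s_lo = np.cos(2 * np.pi * f0 * t + np.pi * beta * t**2)               # local chirp
s_rx = np.cos(2 * np.pi * f0 * (t - tau) + np.pi * beta * (t - tau)**2)  # echo
# (for simplicity the echo is modeled over the full window)

beat = s_lo * s_rx                                   # dechirp (mix)
spec = np.abs(np.fft.rfft(beat * np.hanning(len(beat))))
freqs = np.fft.rfftfreq(len(beat), 1 / fs)
f_b = freqs[np.argmax(spec[1:]) + 1]                 # beat peak, skipping DC

z_est = c * f_b / (2 * beta)                         # invert f_b = 2*z*beta/c
```

With these numbers the beat sits near 3.33 MHz and z_est recovers z to within the frequency-bin resolution c/(2βT0), even though the pulse itself is 100 µs long.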
I_n(x_r) \propto \int \mathrm{d}t \left| \int \mathrm{d}x_s\, E_{s,n}\!\left(x_s, t - \frac{2 f_r}{c}\right) \exp\!\left(-j \frac{2\pi x_r x_s}{\lambda f_r}\right) \right|^2 = \int \mathrm{d}t \left[ 1 + m\, s\!\left(t - \frac{2 f_r}{c} - nT\right) \right]^2 P^2\!\left(t - \frac{2 f_r}{c} - nT\right) \left| \int \mathrm{d}x_s\, E_{s,n}(x_s) \exp\!\left(-j \frac{2\pi x_r x_s}{\lambda f_r}\right) \right|^2 \propto \left| \int \mathrm{d}x_s\, E_{s,n}(x_s) \exp\!\left(-j \frac{2\pi x_r x_s}{\lambda f_r}\right) \right|^2
E_{o,z_i,n}(x_o, t) = \frac{\exp(j k z_i)}{j \lambda z_i} \int \mathrm{d}x_s\, E_{s,n}\!\left(x_s, t - \frac{z_i}{c}\right) \exp\!\left[ \frac{j\pi (x_o - x_s)^2}{\lambda z_i} \right]
E_{t,z_i,n}(x_t, t) = \frac{\exp(j k z_i)}{j \lambda z_i} \int \mathrm{d}x_o\, E_{o,z_i,n}\!\left(x_o, t - \frac{z_i}{c}\right) o_{z_i}(x_o) \exp\!\left[ \frac{j\pi (x_t - x_o)^2}{\lambda z_i} \right]
E_{t,n}(x_t, t) = \sum_i E_{t,z_i,n}(x_t, t) = \sum_i \frac{\exp(j 2 k z_i)}{(j \lambda z_i)^2} \iint \mathrm{d}x_s\, \mathrm{d}x_o\, E_{s,n}\!\left(x_s, t - \frac{2 z_i}{c}\right) \exp\!\left[ \frac{j\pi (x_o - x_s)^2}{\lambda z_i} \right] o_{z_i}(x_o) \exp\!\left[ \frac{j\pi (x_t - x_o)^2}{\lambda z_i} \right]
E_{t,n}(x_t, t) = \sum_i \left\{ 1 + m\, s\!\left[ t - \frac{2 z_{i,n}}{c} - nT \right] \right\} P\!\left[ t - \frac{2 z_{i,n}}{c} - nT \right] \exp\!\left[ j 2\pi f_{d_i} t \right] E_{t,i,n}(x_t)
E_{t,i,n}(x_t) = A_{t,i,n}(x_t) \exp\!\left[ j \phi_{t,i,n}(x_t) \right] \propto \frac{\exp(j 2 k z_{i0})}{(j \lambda z_{i,n})^2} \iint \mathrm{d}x_s\, \mathrm{d}x_o\, E_{s,n}(x_s) \exp\!\left[ \frac{j\pi (x_o - x_s)^2}{\lambda z_{i,n}} \right] o_{z_{i,n}}(x_o) \exp\!\left[ \frac{j\pi (x_t - x_o)^2}{\lambda z_{i,n}} \right]
E_{LO,n}(x_t, t) = \left[ 1 + m\, s(t - nT) \right] P(t - nT)\, A_{LO} \exp\!\left[ j \phi_{LO,n} \right]
E_{t,n}(x_t,t) + E_{st,n}(x_t,t) + E_{LO,n}(x_t,t), \quad E_{t,n}(x_t,t) + E_{st,n}(x_t,t) - E_{LO,n}(x_t,t), \quad E_{t,n}(x_t,t) + E_{st,n}(x_t,t) + E_{LO,n}(x_t,t) \exp(j\pi/2), \quad E_{t,n}(x_t,t) + E_{st,n}(x_t,t) - E_{LO,n}(x_t,t) \exp(j\pi/2)
I:\; \left| E_{t,n}(x_t,t) + E_{st,n}(x_t,t) + E_{LO,n}(x_t,t) \right|^2 - \left| E_{t,n}(x_t,t) + E_{st,n}(x_t,t) - E_{LO,n}(x_t,t) \right|^2, \qquad Q:\; \left| E_{t,n}(x_t,t) + E_{st,n}(x_t,t) + E_{LO,n}(x_t,t) \exp(j\pi/2) \right|^2 - \left| E_{t,n}(x_t,t) + E_{st,n}(x_t,t) - E_{LO,n}(x_t,t) \exp(j\pi/2) \right|^2
I:\; 2 \left[ E^{*}_{t,n}(x_t,t)\, E_{LO,n}(x_t,t) + E_{t,n}(x_t,t)\, E^{*}_{LO,n}(x_t,t) \right], \qquad Q:\; 2 \left[ E^{*}_{t,n}(x_t,t)\, E_{LO,n}(x_t,t) \exp(j\pi/2) + E_{t,n}(x_t,t)\, E^{*}_{LO,n}(x_t,t) \exp(-j\pi/2) \right]
i_{I,n}(x_t,t) \propto \sum_i \left\{ 1 + \frac{m^2}{2} \cos\!\left( \frac{4\pi\beta z_{i,n} t}{c} + \alpha_{i,n} \right) + m\, s\!\left( t - \frac{2 z_{i,n}}{c} - nT \right) + m\, s(t - nT) \right\} \times P(t - nT)\, A_{LO}\, A_{t,i,n}(x_t) \cos\!\left[ 2\pi f_{d_i} t + \phi_{t,i,n}(x_t) - \phi_{LO,n} \right], \qquad i_{Q,n}(x_t,t) \propto \sum_i \left\{ 1 + \frac{m^2}{2} \cos\!\left( \frac{4\pi\beta z_{i,n} t}{c} + \alpha_{i,n} \right) + m\, s\!\left( t - \frac{2 z_{i,n}}{c} - nT \right) + m\, s(t - nT) \right\} \times P(t - nT)\, A_{LO}\, A_{t,i,n}(x_t) \sin\!\left[ 2\pi f_{d_i} t + \phi_{t,i,n}(x_t) - \phi_{LO,n} \right]
\tilde{i}_n(x_t,t) = \mathrm{BPF}\!\left[ i_{I,n}(x_t,t) \right] + j\, \mathrm{BPF}\!\left[ i_{Q,n}(x_t,t) \right] = \sum_i \frac{m^2}{2} \cos\!\left[ \frac{4\pi\beta z_{i,n} t}{c} + \alpha_{i,n} \right] P(t - nT) \times A_{LO}\, A_{t,i,n}(x_t) \exp\!\left\{ j \left[ 2\pi f_{d_i} t + \phi_{t,i,n}(x_t) - \phi_{LO,n} \right] \right\}
I_{t,n}(x_t, f) \propto \sum_i \left\{ \mathrm{sinc}^2\!\left[ T_0 (f - f_{d_i}) \right] + \frac{m^2}{4} \mathrm{sinc}^2\!\left[ T_0 \left( f - f_{d_i} - \frac{2 z_{i,n} \beta}{c} \right) \right] \right\} |A_{LO}|^2\, |A_{t,i,n}(x_t)|^2
I_{t,n}(x_t, f) \propto \sum_i \frac{m^2}{4} \mathrm{sinc}^2\!\left[ T_0 \left( f - \frac{2 z_{i0} \beta}{c} \right) \right] |A_{LO}|^2\, |A_{t,i,n}(x_t)|^2
\Delta G^{(2,2)}\!\left(x_r, x_t, f = 2 z_{i0} \beta / c\right) \propto \frac{m^2}{4} |A_{LO}|^2 \int \mathrm{d}x_o\, O_{z_{i0}}(x_o)\, \mathrm{sinc}^2\!\left[ \frac{D_s}{\lambda f_r} \left( x_r - \frac{f_r}{z_{i0}} x_o \right) \right]
\Delta G^{(2,2)}\!\left(x_r, x_t, f = f_{d_i}\right) \propto |A_{LO}|^2 \int \mathrm{d}x_o\, O_{z_{i,n}}(x_o)\, \mathrm{sinc}^2\!\left[ \frac{D_s}{\lambda f_r} \left( x_r - \frac{f_r}{z_{i,n}} x_o \right) \right]
\Delta G^{(2,2)}\!\left(x_r, x_t, f = f_{b_i,n}\right) \propto \frac{m^2}{4} |A_{LO}|^2 \int \mathrm{d}x_o\, O_{z_{i,n}}(x_o)\, \mathrm{sinc}^2\!\left[ \frac{D_s}{\lambda f_r} \left( x_r - \frac{f_r}{z_{i,n}} x_o \right) \right]
z_{i,n} = \frac{c \left( f_{b_i,n} - f_{d_i} \right)}{2\beta}, \qquad v_i = \frac{\lambda f_{d_i}}{2}
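The two relations above turn the measured beat frequency f_b and Doppler shift f_d directly into range and radial velocity. A small sketch of this inversion follows; the chirp rate β and wavelength λ are illustrative assumptions chosen so that the Line1 frequency of Fig. 3 (f_b ≈ 3.331 MHz) maps to its quoted range of ≈ 199.86 m.

```python
# Invert z = c*(f_b - f_d)/(2*beta) and v = lambda*f_d/2.
# beta and lam are illustrative assumptions, not values stated in the paper.
c = 3e8          # speed of light (m/s)
beta = 2.5e12    # chirp rate (Hz/s), assumed
lam = 1.55e-6    # optical wavelength (m), assumed

def range_velocity(f_b, f_d):
    """Range and radial velocity from beat and Doppler peak frequencies."""
    z = c * (f_b - f_d) / (2 * beta)   # target range (m)
    v = lam * f_d / 2                  # radial velocity (m/s)
    return z, v

# Static target (f_d = 0) at the Line1 peak of Fig. 3: f_b ≈ 3.331 MHz
z1, v1 = range_velocity(3.331e6, 0.0)  # z1 ≈ 199.86 m, v1 = 0
```

For a moving object the same beat spectrum shows a second peak at f_d, so one spectrum yields both z and v without a separate ranging measurement.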
\Delta G^{(2,2)}(x_r, f) = \frac{1}{N} \sum_n I_{r,n}(x_r) \left[ \sum_k I_{t,n,k}(f) - \gamma \sum_{x_r} I_{r,n}(x_r) \right]
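The estimator above is a differential GI correlation [28]: the bucket value is correlated with the reference speckle patterns after subtracting a γ-weighted copy of the total reference intensity, which removes the constant background. A minimal 1-D sketch with synthetic, thermal-like speckle (made-up statistics, not the paper's simulation) is:

```python
import numpy as np

# Differential GI sketch: recover a 1-D object from speckle patterns I_r
# and bucket values S via dG = mean(I_r * (S - gamma*R)), per Eq. (24).
# Object shape and speckle model are illustrative assumptions.
rng = np.random.default_rng(0)
Npix, N = 64, 20000
obj = np.zeros(Npix)
obj[20:30] = 1.0                               # binary object (assumed)

I_r = rng.exponential(1.0, size=(N, Npix))     # thermal-like speckle patterns
S = I_r @ obj                                  # bucket signal, sum_k I_t,n,k
R = I_r.sum(axis=1)                            # total reference intensity
gamma = S.mean() / R.mean()                    # differential weight

dG = (I_r * (S - gamma * R)[:, None]).mean(axis=0)   # Eq. (24) estimator
```

Up to an offset and scale, dG reproduces the object's transmission profile; the γ-subtraction is what suppresses the DC term that would otherwise dominate the correlation at low modulation depth.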