
Ghost imaging lidar system for remote imaging

Open Access

Abstract

Research toward practical applications of ghost imaging lidar systems, especially at longer sensing distances, has become increasingly urgent in recent years. In this paper we develop a ghost imaging lidar system that extends remote imaging: the transmission distance of the collimated pseudo-thermal beam can be increased substantially for long-range operation, while simply shifting the adjustable lens assembly produces a wide field of view suited to short-range imaging. Based on the proposed lidar system, the behavior of the illuminating field of view, the energy density, and the reconstructed images is analyzed and verified experimentally. Some considerations on further improvement of this lidar system are also discussed.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Long-range light detection and ranging (lidar), a form of active optical imaging, offers high resolution, high accuracy and all-time operation, enabling widespread applications in remote sensing, airborne surveillance and target recognition [1-3]. Among active optical imaging lidar systems, ghost imaging (GI) is an optical sensing method that reconstructs the image of a target from intensity fluctuation correlations; its non-local nature and anti-disturbance capability make it an effective strategy for practical long-range applications [4-7]. Early on, Han et al. proposed a GI lidar system that extended pseudo-thermal-light GI to remote imaging [8] by miniaturizing the reference optical system [9]. Tremendous efforts have since been devoted to advancing the application of remote ghost imaging [10-16]. It is noted that the illuminating field of such systems diverges rapidly with transmission distance because of the large divergence angle, which partly limits the extension of remote GI to longer ranges. Moreover, a frontier question in the practical application of GI lidar systems is always the distance limit, and a central aim is to extend the imaging range as far as possible.

Without loss of generality, we take the effective distance of a GI lidar system to be the range over which the imaging system can operate. The major obstacles to extending GI to remote applications are the echo signal strength, the detectivity of the receiver and the laser power. In fact, only part of the photons from the target can be received, because of the limited optical aperture of the receiving system and the power loss during transmission, especially in long-range detection [8,13,17]. And the more target information that needs to be recovered, the more photons must be detected within a finite number of measurements [18]. Choosing a specific experimental configuration to diminish the influence of turbulence [9] based on analyses of turbulence effects on GI [19,20], exploiting special beams to suppress detrimental propagation effects [21,22], and combining pulse-compression techniques with coherent detection to improve immunity to background light [23,24] can all reduce transmission losses and thus improve the effective distance. When the echo signal decreases rapidly with imaging distance, a receiver with high sensitivity is required. In particular, single-photon detectors and arrays provide extraordinary single-photon sensitivity and better timing resolution than analog optical detectors [25-30]. But a hard bound is physically imposed on the speed of the photoelectric imaging procedure under low photon flux, because a long integration time is indispensable to collect sufficient photons and to record or computationally recover the commensurate target information [26,27]. That is, the imaging efficiency is severely constrained by the number of measured photons and the photon accumulation time per measurement, even with a highly sensitive single-photon avalanche diode (SPAD), which is seriously detrimental to real-time imaging. Moreover, for a given outdoor environment with a stable transmission loss factor and a lidar system with fixed detection capability, the operational range is mainly determined by the laser power. However, the laser power of current GI systems for long-range detection is limited by operational safety and by the system volume that can be accommodated on the payload platform, especially on airborne and vehicular platforms. Further extending the imaging range of a GI lidar system therefore presents enormous challenges when the laser power and the detectivity of the receiver are restricted.

In this paper, we develop a ghost imaging lidar system with a set of lens assembly that extends remote imaging and provides a wide field of view (FOV) at short range. Based on the Collins formula and optical transfer matrix theory, expressions for the ghost image and the FOV of the proposed lidar system are derived. The transmission distance of the collimated pseudo-thermal beam, which can approximately be regarded as coherent light, can be increased substantially, which suits long-range imaging. A larger FOV can be obtained for staring imaging over short range when the illuminating beam output from the adjustable lens assembly is divergent and beam-expanded. The illuminating FOV, the energy density within the FOV and the reconstructed ghost images of the proposed GI lidar system are then analyzed, and the analysis is confirmed experimentally.

2. Theoretical analysis

Owing to its collimating and beam-expanding capability, a set of lens assembly can be introduced to make the illuminating field more practical. Herein, a novel configuration of a GI lidar system is proposed, as depicted schematically in Fig. 1. The incoherent thermal light, after passing through a collimating lens Ln1 with focal length f, is split into the reference and object beams by a beam splitter (BS); the distance L1 between the source and Ln1 equals the focal length f. The intensity distribution of the speckle field after a transmission distance Lr in the reference path is detected by a CCD camera. In the object path, the light beam propagates a distance Lt1, passes through a collimating and beam-expanding system consisting of Ln2 (focal length ft1) and Ln3 (focal length ft2), and illuminates an object after propagating freely over a distance Lt2; the echo is then collected by a photodetector. The distance between Ln2 and Ln3 is L2. According to GI theory, the object information is obtained by calculating the intensity fluctuation correlation function [7]

$$G(x_r, x_t) = \left| \int \left\langle E^{\ast}(u_1)\, E(u_2') \right\rangle h_r(u_1, x_r)\, h_t^{\ast}(u_2', x_t)\, du_1\, du_2' \right|^2 \tag{1}$$
where u1 and u2′ denote the transverse coordinates at the source plane, $\langle E^{\ast}(u_1) E(u_2') \rangle$ is the first-order correlation function of the source, xr and xt denote the transverse coordinates at the CCD detector and at the object to be imaged, and hr(u1, xr) and ht(u2′, xt) are the impulse response functions of the reference and object paths, respectively. According to the Collins formula [31] and the derivation method in our previous work [32], the ghost image can be written as
$$G\!\left( \frac{x_r}{M} \right) \propto |t(\xi)|^2 \otimes h(\xi) \tag{2}$$
with
$$h(\xi) \propto \exp\left\{ -\frac{\xi^2}{\Delta_{PSF}^2} \right\} = \exp\left\{ -\frac{\xi^2}{\left( \dfrac{\lambda |B_{t1}|}{\pi w_0} \sqrt{1 + \dfrac{\pi^2 w_0^4}{4\lambda^2}\left( \dfrac{A_r}{B_r} - \dfrac{A_{t1}}{B_{t1}} \right)^2} \right)^{2}} \right\} \tag{3}$$
where t(ξ) is the transmission function of the object with ξ the object-plane coordinate, M denotes the imaging magnification factor, ΔPSF represents the width of the point-spread function (PSF), λ is the light wavelength, w0 is the 1/e2 intensity radius of the Gaussian light, and Ai, Bi, Ci and Di (i = r, t1 and t2) are the optical transfer matrix elements of the reference and object paths, respectively. The magnification can be expressed as M = Bt1 / Br [32]. From Eq. (3), the best imaging condition is obtained when ΔPSF attains its minimum value; the reconstructed image at the CCD plane is then scaled down by a factor of M with respect to the actual object.
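To make the transfer-matrix elements in Eqs. (2) and (3) concrete, the short Python sketch below composes standard thin-lens and free-space ray-transfer (ABCD) matrices along the two paths, following our reading of Fig. 1, and evaluates the PSF width of Eq. (3). The lens parameters match those later listed with Fig. 3, while the offset d and object distance Lt2 are illustrative choices rather than values prescribed by the paper.

```python
import numpy as np

# Thin-lens and free-space ray-transfer (ABCD) matrices.
def lens(f):
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

def free(L):
    return np.array([[1.0, L], [0.0, 1.0]])

# Illustrative parameters in metres (the lens values match those reported with
# Fig. 3: f = 0.1, Lt1 = 0.175, ft1 = 0.05, ft2 = 0.5); d and Lt2 are assumed here.
f, Lt1, ft1, ft2 = 0.1, 0.175, 0.05, 0.5
d, Lt2 = 0.0, 5.0                     # Ln3 offset and object distance
lam, w0 = 532e-9, 0.5e-3              # wavelength and 1/e^2 source radius
L2 = ft1 + ft2 + d                    # Ln2-Ln3 separation

# Reference-path distance satisfying the best imaging condition, Eq. (8).
Lr = (-ft1 * ft2**2 + ft2**2 * Lt1 - d * (ft1 - Lt1) * (ft2 - Lt2)
      + ft1**2 * (-ft2 + Lt2)) / (ft2**2 + d * (ft2 - Lt2))

# Object path: source -> f -> Ln1(f) -> Lt1 -> Ln2(ft1) -> L2 -> Ln3(ft2) -> Lt2 -> object.
Mt = free(Lt2) @ lens(ft2) @ free(L2) @ lens(ft1) @ free(Lt1) @ lens(f) @ free(f)
# Reference path: source -> f -> Ln1(f) -> Lr -> CCD.
Mref = free(Lr) @ lens(f) @ free(f)

(At1, Bt1), (Ar, Br) = Mt[0], Mref[0]
M = Bt1 / Br                                              # magnification, cf. Eq. (6)
dpsf = (lam * abs(Bt1) / (np.pi * w0)) * np.sqrt(
    1 + (np.pi**2 * w0**4 / (4 * lam**2)) * (Ar / Br - At1 / Bt1)**2)   # width in Eq. (3)
print(f"M = {M:.2f}, Delta_PSF = {dpsf * 1e6:.0f} um")
```

With d = 0 this composition reproduces M = -ft2/ft1 and the minimum PSF width λ|Bt1|/(πw0), consistent with Eqs. (6) and (7).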

Fig. 1. System configuration of GI lidar system with lens assembly.

In our system, the lens assembly in the object path is designed to be adjustable. We stipulate that the position of Ln3 for which the output beam is collimated is the standard position. Based on the geometric construction method for optical systems in Ref. [33], we plot the optical pathway diagrams for Ln3 deviating and not deviating from the standard position, as shown in Fig. 2, where S denotes the radius of the aperture of Ln3, St denotes the FOV radius at the object plane, d is the offset from the standard position, θ is the divergence angle of the illuminating field, F denotes the back focus of Ln2 and the blue dashed line represents a lens at the standard position. It is worth mentioning that the distance L2 between Ln2 and Ln3 can then be rewritten as ft1 + ft2 + d. In plotting the diagrams, the light converging at F is treated as a point source so that the optical pathways remain clearly distinguishable. In the theoretical analysis, we suppose that the diameter of the light field illuminating Ln3 is always larger than the aperture of Ln3, to guarantee that the radius of the illuminating field at Ln3 is always S. Meanwhile, we define the offset of Ln3 as d = 0 in Fig. 2(b), and d > 0 and d < 0 in Figs. 2(a) and 2(c) when Ln3 is positioned to the left and to the right of the standard position, respectively. From Fig. 2, the FOV radius St at the object plane and the divergence angle θ are given by the following expressions after geometric calculation,

$$S_t = \begin{cases} \dfrac{L_{t2} S d}{2 f_{t2}\left( f_{t2} - d \right)} & \text{for } d > 0 \\[2ex] S & \text{for } d = 0 \\[2ex] \dfrac{\left| f_{t2} S \left( f_{t2} + d \right) - S d L_{t2} \right|}{f_{t2}\left( f_{t2} + d \right)} & \text{for } d < 0 \end{cases} \qquad \theta = \begin{cases} \arctan\!\left( \dfrac{S d}{f_{t2}\left( f_{t2} - d \right)} \right) & \text{for } d \neq 0 \\[2ex] 0 & \text{for } d = 0 \end{cases} \tag{4}$$
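As a minimal numerical sketch of Eq. (4): the values of f_t2 and L_t2 below follow the experimental section, whereas the aperture radius S and the offsets d are assumed for illustration only.

```python
import numpy as np

def fov_radius(S, f_t2, d, L_t2):
    """FOV radius S_t at the object plane, following the piecewise form of Eq. (4)."""
    if d > 0:
        return L_t2 * S * d / (2 * f_t2 * (f_t2 - d))
    if d == 0:
        return S
    return abs(f_t2 * S * (f_t2 + d) - S * d * L_t2) / (f_t2 * (f_t2 + d))

def divergence_angle(S, f_t2, d):
    """Divergence angle theta of the illuminating field, following Eq. (4)."""
    return 0.0 if d == 0 else np.arctan(S * d / (f_t2 * (f_t2 - d)))

# f_t2 = 0.5 m and L_t2 = 5 m follow the experimental section; S and the
# offsets d below are assumed values for illustration only.
S, f_t2, L_t2 = 0.02, 0.5, 5.0
for d in (-0.02, 0.0, 0.02):
    St = fov_radius(S, f_t2, d, L_t2)
    th = divergence_angle(S, f_t2, d)
    print(f"d = {d * 1e3:+.0f} mm: S_t = {St * 100:.2f} cm, theta = {np.degrees(th):+.2f} deg")
```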

Fig. 2. Optical pathway diagram of Ln3 deviating and not deviating from the focal point.

As schematically depicted in Fig. 2(b), where d = 0 and Ln3 is at the standard position, the FOV remains constant as Lt2 increases. From Figs. 2(a) and 2(c), one can observe that the FOV grows with increasing Lt2 when Ln3 is shifted to the left, whereas it first decreases and then increases when Ln3 is located to the right of the standard position. Obviously, according to geometrical optics, the collimated output beam with no offset of Ln3 can theoretically propagate to infinity. Considering the actual transmission, we assume that the effective imaging distances of the divergent illuminating field with divergence angle θ (see Figs. 2(a) and 2(c)) and of the collimated illuminating field (see Fig. 2(b)) are Ldiv and Lcoll, respectively, when the output illuminating power of the two systems is fixed and the light energy per unit area reaches the same threshold at which the detector can still operate effectively. That is, the corresponding FOVs of the two systems at the object plane should then be equal. The effective imaging distance Lcoll can then be calculated as

$$L_{coll} = \frac{\pi p^2 \sqrt{\left( \dfrac{p + L_{div}\tan\theta}{p} \right)^2 - 1}}{\lambda} \tag{5}$$
where p is the exit pupil of the adjustable lens assembly in Fig. 2. To show clearly the improvement in imaging distance of the collimated system over the divergent system, Table 1 lists the ratios of Lcoll to Ldiv for different exit pupils and divergence angles according to Eq. (5). Here λ is 532 nm and Ldiv is set to 1000 m as a reference value. The imaging distance of the collimated system is theoretically improved by orders of magnitude compared with that of the divergent system with a given divergence angle. From these results, one can clearly conclude that the collimated GI lidar system of Fig. 2(b) is well suited to long range. For the actual detection and imaging of a moving target, consider a practical scenario in which an aerial vehicle is heading towards the receiver of the lidar system: the demand for remote detection is that the illuminating field can travel far enough to acquire and track the target effectively, and the collimated beam of Fig. 2(b) satisfies this case well. When the scene changes from far to near, a larger FOV may be required to cover the target adequately, so that the target can be imaged and its global information obtained at short range. In addition, an illuminating field that is divergent, or has a large divergence angle, is always unfavorable for long-range imaging. Opportunely, this "disadvantage" of the divergent illuminating field can be turned into an advantage over short range: we can exploit the position offset of Ln3 to obtain a large FOV for short-range imaging, as shown in Figs. 2(a) and 2(c). The transmission loss also decreases gradually as the target approaches the lidar system, which contributes to effective imaging with a large FOV at short range. It can be concluded that adjusting the position of Ln3 can alternately provide a collimated illuminating field and a well-matched FOV, meeting the demands of long-range and short-range imaging simultaneously.

Table 1. Ratios of Lcoll to Ldiv under different exit pupil p and divergence angle θ.
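To reproduce the kind of ratios listed in Table 1, Eq. (5) can be evaluated directly; the sketch below does so with λ = 532 nm and Ldiv = 1000 m as stated in the text, while the specific exit-pupil sizes p and divergence angles θ are illustrative assumptions, not necessarily those tabulated.

```python
import numpy as np

def l_coll(p, theta, l_div, lam=532e-9):
    """Effective imaging distance of the collimated system, Eq. (5)."""
    return np.pi * p**2 * np.sqrt(((p + l_div * np.tan(theta)) / p)**2 - 1) / lam

l_div = 1000.0                                # reference divergent-system distance (m)
for p in (0.02, 0.05):                        # assumed exit-pupil sizes (m)
    for theta in (1e-3, 5e-3):                # assumed divergence angles (rad)
        ratio = l_coll(p, theta, l_div) / l_div
        print(f"p = {p * 100:.0f} cm, theta = {theta * 1e3:.0f} mrad: "
              f"L_coll/L_div = {ratio:.1e}")
```

For these assumed values the ratio already reaches two to three orders of magnitude, in line with the improvement described in the text.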

Furthermore, the imaging magnification M, the minimum PSF width min ΔPSF that determines the best imaging resolution, and the reference-path distance Lr under the best imaging condition in our lidar system can also be calculated as

$$M = \frac{B_{t1}}{B_r} = -\frac{f_{t2}^2 + d\left( f_{t2} - L_{t2} \right)}{f_{t1} f_{t2}}, \tag{6}$$
$$\min \Delta_{PSF} = \frac{\lambda |B_{t1}|}{\pi w_0} = \frac{\lambda f}{\pi w_0} |M|, \tag{7}$$
$$L_r = \frac{-f_{t1} f_{t2}^2 + f_{t2}^2 L_{t1} - d\left( f_{t1} - L_{t1} \right)\left( f_{t2} - L_{t2} \right) + f_{t1}^2\left( -f_{t2} + L_{t2} \right)}{f_{t2}^2 + d\left( f_{t2} - L_{t2} \right)}. \tag{8}$$

Moreover, the resolution limit of the GI system, which is determined by the point-spread function of the system [34], can be calculated from Eq. (7). A GI lidar system for an adjustable range can then be designed in practical applications by selecting an appropriate lens assembly and simply adjusting the position of Ln3 according to Eqs. (4)–(8). Further, we simulate the ghost images of a double-slit object with Ln3 deviating and not deviating from the focal point to visualize the impact of the position offset on imaging, as shown to the right of each panel in Fig. 2. The object distance is kept constant and the left and right offsets of Ln3 are equal in the simulation. The visualizations show that the reconstructed image is enlarged as the FOV is reduced, and the imaging magnification obtained in the simulation also validates the rationality and correctness of Eqs. (2) and (6).
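For such a design exercise one can evaluate the closed-form expressions of Eqs. (6)–(8) directly. The sketch below does this for the lens parameters reported with Fig. 3 and a few assumed offsets d; it is an illustration of how the formulas are used, not the procedure behind the simulations in Fig. 2.

```python
import math

def design(f, f_t1, f_t2, L_t1, L_t2, d, lam=532e-9, w0=0.5e-3):
    """Magnification, best resolution and reference distance from Eqs. (6)-(8)."""
    M = -(f_t2**2 + d * (f_t2 - L_t2)) / (f_t1 * f_t2)                     # Eq. (6)
    min_psf = lam * f * abs(M) / (math.pi * w0)                            # Eq. (7)
    L_r = (-f_t1 * f_t2**2 + f_t2**2 * L_t1
           - d * (f_t1 - L_t1) * (f_t2 - L_t2)
           + f_t1**2 * (-f_t2 + L_t2)) / (f_t2**2 + d * (f_t2 - L_t2))     # Eq. (8)
    return M, min_psf, L_r

# Lens parameters (metres) as reported with Fig. 3; the offsets d are assumed.
for d in (-0.01, 0.0, 0.01):
    M, psf, Lr = design(f=0.1, f_t1=0.05, f_t2=0.5, L_t1=0.175, L_t2=5.0, d=d)
    print(f"d = {d * 1e3:+.0f} mm: M = {M:.2f}, "
          f"min dPSF = {psf * 1e3:.2f} mm, Lr = {Lr * 1e3:.0f} mm")
```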

3. Experimental verification

To verify the above results, we carry out GI experiments with the optical configuration illustrated in Fig. 3. The pseudo-thermal light source, consisting of a laser with center wavelength λ = 532 nm and a rotating ground glass (RGG), passes through a collimating lens Ln1 and forms random speckle patterns with w0 = 0.5 mm. The transmission aperture of the circular aperture (CA) can be adjusted to provide sufficient light flux at the target plane, and the aperture size is kept constant throughout the experiments. In the object path, the distance between Ln2 and Ln3 is adjusted by a translation stage with a stepping-motor actuator (DHC, GCD-203100 M; positioning accuracy about 0.1 mm). The echo photons reflected from the target are collected by a light concentrator and then detected by a bucket detector (BD) without spatial resolution (Thorlabs PDA100A2; gain range 0 to 70 dB). In the reference path, a high-resolution CCD (Thorlabs DCCC3260C; 1936 × 1216 pixels; pixel size 5.86 µm) records the intensity distribution of the speckle pattern.
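The image retrieval combines the CCD speckle frames and the bucket values through the intensity-fluctuation-correlation estimator introduced with Eq. (1). A minimal sketch on synthetic data is given below; the independent per-pixel speckle statistics and the double-slit pixel geometry are assumptions for illustration, not the recorded experimental data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal simulation of the intensity-fluctuation-correlation reconstruction
# (not the experimental data pipeline): synthetic per-pixel speckle frames
# stand in for the CCD records and an arbitrary double-slit mask for the target.
N, H, W = 5000, 64, 64                       # 5000 measurements, as in the experiment
obj = np.zeros((H, W))
obj[24:40, 20:26] = 1.0                      # left slit  (assumed pixel geometry)
obj[24:40, 38:44] = 1.0                      # right slit (assumed pixel geometry)

frames = rng.exponential(1.0, size=(N, H, W))          # pseudo-thermal speckle intensities
bucket = frames.reshape(N, -1) @ obj.ravel()           # bucket values: light through object

# Second-order correlation of intensity fluctuations: G = <I_r B> - <I_r><B>.
G = (frames.reshape(N, -1).T @ bucket / N).reshape(H, W) \
    - frames.mean(axis=0) * bucket.mean()
G /= np.abs(G).max()                                    # normalized ghost image of the slits
```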

Fig. 3. The experimental setup of GI lidar system with adjustable lens assembly. In the experiments, L1 = f = 100 mm, Lt1 = 175 mm, ft1 = 50 mm and ft2 = 500 mm, respectively, and Lr is calculated from Eq. (8) under different conditions.

According to the above analysis, the proposed lidar system theoretically provides a stable FOV when the distance between Ln2 and Ln3 equals the sum of the two focal lengths, i.e., L2 = ft1 + ft2. To validate the FOV expression for varying Lt2 in Eq. (4), we measure the illuminating FOV and the light energy per unit area at different Lt2 when the lens assembly of Ln2 and Ln3 is collimated. For comparison, the FOVs measured at Lt2 = 0, 2, 5, 10, 15 and 20 m are shown in Fig. 4(a). A rectangle printed on cardboard, with inner side 3 cm and outer side 6 cm, is used as a reference object to gauge the FOV at the object plane visually. From the images taken with a simple hand-held camera, one can clearly see that, compared with the original FOV at Lt2 = 0 shown in Fig. 4(a1), the illuminating FOV remains essentially unchanged as Lt2 increases, apart from the diffraction behavior of the hard-edged aperture, which validates the expression for the illuminating field St in Eq. (4). The collimated light is therefore capable of extending the effective imaging distance, and such an enhancement of the detection distance is of great significance for remote imaging. Not only the FOV but also the luminous flux arriving at the object plane strongly influences long-range imaging, because the detected signal-to-noise ratio depends heavily on the echo light. Next, we measure the total intensity of light transmitted through a rectangular aperture with an area of 20 mm2 within the FOV. Normalizing by the maximum measured value, the distribution of the bucket values per unit area is also plotted for comparison, as depicted in Fig. 4(b). We find that the normalized intensity values over the same area in the FOV fluctuate slightly but remain almost unchanged with increasing Lt2. The results in Fig. 4 demonstrate that the illuminating FOV and the energy density over the collimated sensing range are constant, indicating an advantage for long-range imaging.

Fig. 4. (a) FOV with Lt2 = 0, 2, 5, 10, 15 and 20 m. (b) Normalized intensity per unit area with Lt2 = 2 ∼ 20 m.

In assessing the merits of GI lidar for practical applications, the evaluation usually focuses on the effective imaging distance. We therefore carry out imaging experiments at different Lt2 with our collimated GI lidar system to verify the derivation in Eq. (2). From Eqs. (3) and (6), the imaging result depends strongly on the imaging magnification, so the magnification M, determined by the ratio of the two focal lengths ft2 / ft1 = 10 from Eq. (6), is kept constant to eliminate its influence. Limited by the laboratory conditions, we only perform GI experiments for Lt2 in the range of 0 to 20 m, and the object is a double slit with slit width 1.5 mm, center-to-center separation 4.5 mm and slit height 7 mm, chosen according to the relationship between the critical resolution and the speckle size [34]. During the imaging procedure, the total number of measurements is set to 5000, here and in what follows. The GI results are shown in Fig. 5, where Fig. 5(a) corresponds to the original object and Figs. 5(b)–(f) correspond to the reconstructed images with Lt2 = 2, 5, 10, 15 and 20 m, respectively. The sizes of the reconstructed double-slit objects are almost exactly the same even as Lt2 increases. The height of the reconstructed double slit is always about 0.7 mm, obtained from the actual pixel size of the CCD camera and the number of pixels spanned by the reconstructed slits. That is, the reconstructed images are reduced by a factor of about 10, which confirms the magnification M.
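The size check in the last sentence amounts to a simple pixel-to-millimetre conversion; a hedged sketch follows, where the pixel count is an assumed value consistent with the quoted 0.7 mm height rather than a measured figure.

```python
# Rough arithmetic behind the reported demagnification: the reconstructed slit
# height in CCD pixels times the 5.86 um pixel pitch gives the image-plane size,
# and its ratio to the real 7 mm slit height gives |M|. The pixel count below is
# an assumed value consistent with the ~0.7 mm height quoted in the text.
pixel_pitch_mm = 5.86e-3      # CCD pixel size from the setup description
recon_height_px = 120         # assumed pixel extent of the reconstructed slits
object_height_mm = 7.0

image_height_mm = recon_height_px * pixel_pitch_mm        # about 0.70 mm
print(f"|M| ~ {object_height_mm / image_height_mm:.1f}")  # about 10, i.e. f_t2 / f_t1
```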

Fig. 5. (a) Original object. (b)–(f) Reconstructed objects through GI with different Lt2, i.e., Lt2 = 2, 5, 10, 15 and 20 m, respectively.

We then adjust the position of Ln3 to make the output beam divergent and obtain a larger FOV, following the description of Figs. 2 and 3, thereby switching from long-range to short-range imaging. Here, the distance L2 = ft1 + ft2 + d. Figures 6 and 7 show the experimental results for adjustable offsets of Ln3, with the FOVs always evaluated at the plane Lt2 = 5 m. We specify again that the offset d is positive when Ln3 is located to the left of the standard position and negative when it is located to the right. Figure 6(a1) shows the illuminating FOV with no offset of Ln3, and Figs. 6(a2)–(a10) show the FOVs for offsets ranging from -40 mm to 40 mm in 10 mm steps. As displayed in Fig. 6, the FOV first becomes smaller and then larger, consistent with the tendency of St in Fig. 2. We also measure the diameters of the FOVs with a ruler; the measured values are 5.2, 4.5, 4, 3.4, 3, 2.4, 1.9, 1.35 and 0.82 cm, respectively, in basic agreement with the derivation in Eq. (4). Additionally, the light intensity transmitted through the same cardboard aperture as in Fig. 4(b) is detected, and the normalized intensities are plotted in Fig. 6(b). The offset d is limited to a maximum of 25 mm because of the upper sensitivity limit of our photodetector. The normalized values clearly increase with increasing d, showing that a larger FOV lowers the energy density within the illuminating FOV. We then conduct GI experiments to demonstrate how the imaging changes with the varying FOV. The comparison between the original object and the reconstructed ghost images for different offsets of Ln3 is presented in Fig. 7. After calculating and comparing the heights of the reconstructed images with the original double slit, the imaging magnifications match Eq. (6) well. Moreover, from Figs. 7(a)–(k), the imaging quality gradually decreases; the reason is that the corresponding FOV decreases as the offset of Ln3 increases numerically (see Fig. 6(a)), resulting in a decreasing speckle size. That is, the match between the slit width (the smallest local detail to be distinguished) and the speckle size at the object plane is reduced. Therefore, in actual detection an appropriate FOV should be selected by adjusting the position of Ln3 according to the size and characteristics of the target to be imaged.

Fig. 6. (a) FOV and (b) normalized intensity per unit area with different positions of Ln3.

Fig. 7. (a) Original object. (b)–(k) Reconstructed ghost-images with different positions of Ln3.

4. Discussion

Different from traditional scanning imaging lidar, which recovers target information by directly measuring the light echo reflected from the target, GI lidar offers high detection sensitivity and strong anti-disturbance capability in target detection and information extraction, because a point-like photodetector is used to collect the echo. The GI lidar system proposed in this paper combines concentrated illuminating power for long-range imaging with an adjustable FOV at short range. For remote imaging, a longer detection range than that of the traditional lens-projection GI lidar [8] can be obtained with our system, even when the effective reflective area and reflection characteristics of the target and the detector sensitivity are the same, owing to the collimating and beam-expanding lens assembly. Meanwhile, a larger illuminating FOV for comprehensive imaging can be obtained simply by adjusting the lens assembly in the object path when the target is close to the receiver, which suits short-range imaging applications.

In general, an intuitive and longstanding goal in the study of GI lidar for remote imaging is the coexistence of a large FOV and a long range. Compared with existing GI lidar systems, the illuminating field output from the collimating and beam-expanding lens assembly can reach longer distances, where the pseudo-thermal light distribution within the field of view is uniform and the illumination power is relatively high. However, the imaging coverage at the object plane is still insufficient for large-scale targets such as aircraft or guided missiles, because the FOV is limited by the numerical aperture of the lenses and the diaphragm. To address this issue, a beam-steering device, such as a fast steering mirror or an optical gimbal mount, could be installed after the lens assembly, yielding a larger scanning FOV for remote imaging. Setting aside the higher mechanical complexity and the scanning refresh rate limited by the motor speed, such a steering device is accurate and stable over a wide FOV angle, which is extremely important in an unstable outdoor environment. Overall, such refinements of the experimental setup can help adapt a practical GI lidar system with a larger FOV and longer range to capture multiple targets, especially on moving platforms.

Additionally, in our experiments the image of the object is reconstructed via the intensity-fluctuation-correlation strategy, which is the classic but basic linear algorithm. In practical detection, the real-time performance of the tracking and imaging method and better imaging quality are also important for capturing the target continuously and stably. At the cost of some computational complexity in data processing, the image quality can be further improved using nonlinear algorithms [8,35,36] or deep learning [37,38]. In future work, outdoor experiments over longer ranges based on the GI lidar system proposed in this paper will be carried out, and the imaging method will be developed further in combination with suitable imaging algorithms. Meanwhile, advanced adaptive optics can be introduced into the proposed system to compensate for the influence of turbulence on the imaging results over long ranges [39,40]. Without loss of generality, fine imaging at long range and a wide FOV at short range can be implemented in our GI lidar scheme simply by selecting a lens assembly with suitable parameters and adjusting the position of Ln3. The resulting flexibility of the experimental setup makes our scheme promising for target detection and imaging in remote sensing, with applications in military missions, air traffic control and navigation.

5. Conclusion

In conclusion, we have developed a ghost imaging lidar system with a set of lens assembly that extends long-range imaging and provides a wide FOV at short range. The variation of the field of view and the analytical formulas of the imaging parameters, with the lens assembly deviating or not deviating from the focal point, are derived on the basis of the Collins formula and optical transfer matrix theory. The improvement in imaging distance of the collimated system over the divergent system is also analyzed. Guided by the theoretical derivation, we experimentally compare the illuminating FOV, the energy density within the FOV and the reconstructed ghost images at different object distances by adjusting the lens position, and the experimental results agree well with the theory. This adjustable lens assembly can be generalized to other GI lidar systems for remote imaging, with potential in long-range active-illumination detection and imaging.

Funding

National Natural Science Foundation of China (61871431, 61971184, 62001162, 62101187); Fundamental Research Funds for the Central Universities (531118010757); Natural Science Foundation of Hunan Province (2022JJ40091).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. B. Schwarz, "Lidar: mapping the world in 3D," Nat. Photonics 4(7), 429–430 (2010).

2. C. L. Glennie, W. E. Carter, R. L. Shrestha, and W. E. Dietrich, "Geodetic imaging with airborne lidar: the Earth's surface revealed," Rep. Prog. Phys. 76(8), 086801 (2013).

3. A. B. Gschwendtner and W. E. Keicher, "Development of coherent laser radar at Lincoln Laboratory," Lincoln Lab. J. 12, 383–396 (2000).

4. T. B. Pittman, Y. H. Shih, D. V. Strekalov, and A. V. Sergienko, "Optical imaging by means of two-photon quantum entanglement," Phys. Rev. A 52(5), R3429–R3432 (1995).

5. D. V. Strekalov, A. V. Sergienko, D. N. Klyshko, and Y. H. Shih, "Observation of two-photon 'ghost' interference and diffraction," Phys. Rev. Lett. 74(18), 3600–3603 (1995).

6. A. Gatti, E. Brambilla, M. Bache, and L. A. Lugiato, "Ghost imaging with thermal light: comparing entanglement and classical correlation," Phys. Rev. Lett. 93(9), 093602 (2004).

7. J. Cheng and S. S. Han, "Incoherent coincidence imaging and its applicability in X-ray diffraction," Phys. Rev. Lett. 92(9), 093903 (2004).

8. C. Q. Zhao, W. L. Gong, M. L. Chen, E. R. Li, H. Wang, W. D. Xu, and S. S. Han, "Ghost imaging lidar via sparsity constraints," Appl. Phys. Lett. 101(14), 141123 (2012).

9. P. B. Dixon, G. A. Howland, K. W. C. Chan, C. O. Hale, B. Rodenburg, N. D. Hardy, J. H. Shapiro, D. S. Simon, A. V. Sergienko, R. W. Boyd, and J. C. Howell, "Quantum ghost imaging through turbulence," Phys. Rev. A 83(5), 051803 (2011).

10. C. L. Wang, X. D. Mei, L. Pan, P. W. Wang, W. Li, X. Gao, M. L. Xie, J. F. Han, H. Zhang, Z. W. Bo, M. L. Chen, W. L. Gong, and S. S. Han, "NIR 3D GISC lidar on a balloon-borne platform," in Imaging and Applied Optics 2017 (3D, AIO, COSI, IS, MATH, pcAOP), OSA Technical Digest (online) (Optica Publishing Group, 2017), paper JTu5A.11.

11. R. Meyers and K. Deacon, "Quantum ghost imaging experiments at ARL," Proc. SPIE 7815, 78150I (2010).

12. D. Li, D. Yang, S. Sun, Y. G. Li, L. Jiang, H. Z. Lin, and W. T. Liu, "Enhancing robustness of ghost imaging against environment noise via cross-correlation in time domain," Opt. Express 29(20), 31068–31077 (2021).

13. C. L. Wang, X. D. Mei, L. Pan, P. W. Wang, W. Li, X. Gao, Z. W. Bo, M. L. Chen, W. L. Gong, and S. S. Han, "Airborne near infrared three-dimensional ghost imaging LiDAR via sparsity constraint," Remote Sensing 10(5), 732 (2018).

14. X. D. Mei, L. Pan, P. W. Wang, W. L. Gong, and S. S. Han, "Experimental demonstration of vehicle-borne near infrared three-dimensional ghost imaging LiDAR," in Conference on Lasers and Electro-Optics, OSA Technical Digest (Optica Publishing Group, 2019), paper JW2A.7.

15. W. L. Gong, C. Q. Zhao, H. Yu, M. L. Chen, W. D. Xu, and S. S. Han, "Three-dimensional ghost imaging lidar via sparsity constraint," Sci. Rep. 6(1), 26133 (2016).

16. S. Ma, C. Y. Hu, C. L. Wang, Z. T. Liu, and S. S. Han, "Multi-scale ghost imaging LiDAR via sparsity constraints using push-broom scanning," Opt. Commun. 448, 89–92 (2019).

17. X. P. Wang and Z. H. Lin, "Nonrandom microwave ghost imaging," IEEE Trans. Geosci. Remote Sensing 56(8), 4747–4764 (2018).

18. S. D. Johnson, P. A. Moreau, T. Gregory, and M. J. Padgett, "How many photons does it take to form an image?" Appl. Phys. Lett. 116(26), 260504 (2020).

19. J. Cheng, "Ghost imaging through turbulent atmosphere," Opt. Express 17(10), 7916–7921 (2009).

20. P. Zhang, W. L. Gong, X. Shen, and S. S. Han, "Correlated imaging through atmospheric turbulence," Phys. Rev. A 82(3), 033817 (2010).

21. R. E. Meyers, K. S. Deacon, A. D. Tunick, and Y. H. Shih, "Virtual ghost imaging through turbulence and obscurants using Bessel beam illumination," Appl. Phys. Lett. 100(6), 061126 (2012).

22. W. Tan, Y. F. Bai, X. W. Huang, T. Jiang, S. Q. Nan, X. P. F. Zou, and X. Q. Fu, "Enhancing critical resolution of a ghost imaging system by using a vortex beam," Opt. Express 30(9), 14061–14072 (2022).

23. C. J. Deng, W. L. Gong, and S. S. Han, "Pulse-compression ghost imaging lidar via coherent detection," Opt. Express 24(23), 25983–25994 (2016).

24. L. Pan, C. J. Deng, Z. W. Bo, X. Yuan, D. M. Zhu, W. L. Gong, and S. S. Han, "Experimental investigation of chirped amplitude modulation heterodyne ghost imaging," Opt. Express 28(14), 20808–20816 (2020).

25. X. L. Liu, J. H. Shi, X. Y. Wu, and G. H. Zeng, "Fast first-photon ghost imaging," Sci. Rep. 8(1), 5012 (2018).

26. Z. P. Li, X. Huang, Y. Cao, B. Wang, Y. H. Li, W. J. Jin, C. Yu, J. Zhang, Q. Zhang, C. Z. Peng, F. H. Xu, and J. W. Pan, "Single-photon computational 3D imaging at 45 km," Photonics Res. 8(9), 1532–1540 (2020).

27. Z. P. Li, J. T. Ye, X. Huang, P. Y. Jiang, Y. Cao, Y. Hong, C. Yu, J. Zhang, Q. Zhang, C. Z. Peng, F. H. Xu, and J. W. Pan, "Single-photon imaging over 200 km," Optica 8(3), 344–349 (2021).

28. G. Buller and A. Wallace, "Ranging and three-dimensional imaging using time-correlated single-photon counting and point-by-point acquisition," IEEE J. Select. Topics Quantum Electron. 13(4), 1006–1015 (2007).

29. R. H. Hadfield, "Single-photon detectors for optical quantum information applications," Nat. Photonics 3(12), 696–705 (2009).

30. F. Villa, R. Lussana, D. Bronzi, S. Tisa, A. Tosi, F. Zappa, A. D. Mora, D. Contini, D. Durini, S. Weyers, and W. Brockherde, "CMOS imager with 1024 SPADs and TDCs for single-photon timing and 3-D time-of-flight," IEEE J. Select. Topics Quantum Electron. 20(6), 364–373 (2014).

31. S. A. Collins, "Lens-systems diffraction integral written in terms of matrix optics," J. Opt. Soc. Am. 60(9), 1168–1177 (1970).

32. Y. Gao, Y. F. Bai, and X. Q. Fu, "Point-spread function in ghost imaging system with thermal light," Opt. Express 24(22), 25856–25866 (2016).

33. L. Levi, Applied Optics (Wiley, 1968).

34. W. Tan, X. W. Huang, T. Jiang, S. Q. Nan, Q. Fu, X. P. F. Zou, Y. F. Bai, and X. Q. Fu, "Critical resolution in ghost imaging system with pseudo-thermal light," Results Phys. 32(5), 105104 (2022).

35. O. Katz, Y. Bromberg, and Y. Silberberg, "Compressive ghost imaging," Appl. Phys. Lett. 95(13), 131110 (2009).

36. M. Aßmann and M. Bayer, "Compressive adaptive computational ghost imaging," Sci. Rep. 3(1), 1545 (2013).

37. Y. C. He, G. Wang, G. X. Dong, S. T. Zhu, H. Chen, A. X. Zhang, and Z. Xu, "Ghost imaging based on deep learning," Sci. Rep. 8(1), 6469 (2018).

38. F. Wang, H. Wang, H. C. Wang, G. W. Li, and G. H. Situ, "Learning from simulation: an end-to-end deep-learning approach for computational ghost imaging," Opt. Express 27(18), 25560–25572 (2019).

39. J. W. Hardy, J. E. Lefebvre, and C. L. Koliopoulos, "Real-time atmospheric compensation," J. Opt. Soc. Am. 67(3), 360–369 (1977).

40. J. C. Christou, J. Girkin, C. Kulcsar, and L. K. Young, "Feature issue introduction: applications of adaptive optics," Opt. Express 29(8), 11533–11537 (2021).
