
Three-dimensional single-photon imaging through realistic fog in an outdoor environment during the day

Open Access

Abstract

Due to the strong scattering of fog and the strong background noise, the signal-to-background ratio (SBR) is extremely low, which severely limits the 3D imaging capability of a single-photon detector array through fog. Here, we propose an outdoor three-dimensional imaging algorithm for imaging through fog, which can separate signal photons from non-signal photons (scattering and noise photons) at an SBR as low as 0.003. This is achieved by using an observation model based on the multinomial distribution to compensate for the pile-up, and by using dual-Gamma estimation to eliminate non-signal photons. We show that the proposed algorithm enables accurate 3D imaging of a target at 1.4 km under a visibility of 1.7 km. Compared with traditional algorithms, the target recovery (TR) of the reconstructed image is improved by 20.5%, and the relative average ranging error (RARE) is reduced by 28.2%. The algorithm has been successfully demonstrated for targets at different distances and imaging times. This research extends the fog scattering estimation model from indoor to outdoor environments, and improves the weather adaptability of single-photon detector arrays.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

In recent years, active 3D imaging has been widely used in many fields, including non-line-of-sight imaging [1,2], underwater imaging [3–5] and imaging through scattering media [6]. For these cases, the key requirements of active 3D imaging are photodetectors with high detection sensitivity and fast acquisition rate, and timing circuits with high timing accuracy and a wide dynamic operating range. Owing to the high optical sensitivity of single-photon detectors and their low echo-energy requirements [7], the single-photon counting lidar approach has emerged as an important candidate technology for many remote sensing applications that require three-dimensional imaging [8,9].

However, the remote 3D imaging results of single-photon counting lidar are often limited by environmental conditions, especially natural fog. The strong absorption and scattering of fog attenuate the laser energy markedly over a relatively short transmission distance. Moreover, multiple scattering during transmission in fog readily changes the original propagation direction of the light. These factors reduce the effective imaging distance of the lidar, increase the degree of imaging degradation [10], and reduce the imaging resolution [11]. Overcoming the scattering effect of fog and improving imaging quality in foggy environments therefore remains an important open problem for single-photon counting lidar. Several studies have investigated three-dimensional single-photon imaging through different types of atmospheric obscurants. Kijima et al. estimated the scattering characteristics using multiple time-gated exposures and achieved image defogging of close-range indoor and outdoor targets [12]. The Heriot-Watt University group compared four algorithms (pixel-wise cross-correlation [5], the RDI-TV algorithm [13], the unmixing algorithm [14] and the M-NR3D algorithm [7]) for reconstructing target images through high levels of atmospheric obscurants. Subsequently, they proposed the M2R3D algorithm to reconstruct the depth image of a target at about 150 m through artificial smoke [15]. Our goal is different: we aim to establish a time-domain distribution model based on the collision theory of photons and fog particles.

Several groups have also studied the temporal distribution of smoke-scattered photons. Based on a physical lidar model, Sang et al. used an expectation-maximization (EM) algorithm to estimate the fog profile and achieve image defogging [16]. Mau et al. fitted the echo signal with a mixture of log-normal and Gaussian distributions [17]. Unfortunately, both estimation models give poor image restoration in fog. However, Satat et al. showed that the time profile of light backscattered by fog follows a distribution (Gamma) that is different from that of light reflected from objects occluded by fog (Gaussian) [18]. Their reconstruction algorithm, based on this Gamma distribution model, was effectively verified in indoor dehazing experiments and is referred to by Liu et al. as the all-parameter estimation algorithm (APEA) [19]. Liu and Guo subsequently proposed a single-parameter estimation algorithm (SPEA) based on the same model [19,20]. The difference is that APEA uses a maximum likelihood algorithm to estimate the time profile of the fog directly, while SPEA uses the measured attenuation coefficient to estimate it. In our recent work, we used a dual-parameter estimation algorithm based on the continuous wavelet transform (CWT) to achieve more accurate indoor depth imaging through smoke [21,22]. It is worth noting that in these studies, short-wavelength lasers (wavelength no greater than 1064 nm) were used to detect short-range targets. In addition to the photons reflected by the target, the echo signal contains a clear smoke-scattering component; under these conditions the model estimation algorithms are applicable to different smoke types and visibilities.

However, these studies were carried out in indoor artificial smoke without considering background photons. In an actual fog scattering environment, the extent of the fog along the detection path is hundreds of times that of indoor artificial smoke. Moreover, the strong background noise in nature is an important factor limiting the reconstruction ability of scattering estimation models. The main purpose of this paper is to introduce an imaging algorithm for a single-photon detector array that is robust for reconstructing long-range targets in daytime foggy environments. With our algorithm, the per-pixel target position can be estimated accurately under low visibility, and the completeness of the reconstructed depth image is significantly improved.

Therefore, in this paper, we develop a dual-Gamma estimation algorithm for outdoor depth imaging through fog using a single-photon detector array. To the best of our knowledge, this is the first time a model estimation algorithm has been applied to long-range depth imaging in an outdoor fog environment. We first introduce a multinomial-distribution observation model for single-photon counting lidar in fog that accounts for background and scattering photons. Then, to counter the influence of strong noise photons on the Gamma distribution model of scattered photons, we propose a depth imaging algorithm through fog based on dual-Gamma estimation that suppresses both noise and scattered photons. Finally, we use this algorithm to reconstruct depth images of targets at a visibility of 1.7 km and compare it with traditional signal extraction algorithms. The depth imaging ability of the proposed algorithm under low visibility is tested for different target distances and imaging times.

Compared with the previous algorithms [19–22], the model estimation algorithm in this paper differs as follows:

  • (1) Unlike indoor artificial smoke, the three-dimensional imaging in this paper is performed through realistic fog, and the influence of strong background photons on the time-domain echo signal is also considered.
  • (2) The fog in this paper is distributed over the entire range between the target and the lidar, spanning hundreds of meters, not only over a small region.
  • (3) In the previous algorithms, the smoke echo signal was confined to a local range within the gate and its waveform was approximately Gaussian. Here, the fog echo signal is distributed throughout the gate and its waveform is approximately exponential.
  • (4) Previous algorithms used a single Gamma fit to estimate the smoke echo signal. The algorithm proposed here uses a dual-Gamma fit: the first Gamma fit estimates the raw counting results in order to correct the histogram, and the second Gamma fit estimates the fog contribution of the corrected histogram after pile-up compensation so that the target echo photons can be extracted.
  • (5) The proposed algorithm adaptively estimates the fog echo signal received by each pixel of the single-photon detector array lidar. Pixel-by-pixel signal estimation and median filtering based on spatial information improve the three-dimensional imaging ability through realistic fog.

2. Observation model through fog

The main challenge of single-photon counting lidar for three-dimensional imaging through fog is that the strong backscattering of fog leads to an uneven distribution of scattered photons in the observations. The photons received in a fog environment therefore mainly include scattered photons λfog, reflected photons λtarget, background noise photons λb and system noise. Each photon count follows a Poisson distribution P(·):

$$H_t \sim P[\lambda'(t)] = P[\lambda_{\mathrm{target}}(t) + \lambda_{\mathrm{fog}}(t) + \lambda_b + b_d]$$
$$\lambda_{\mathrm{target}}(t) = \eta \alpha f(t - 2R/c)$$
where Ht is the observed histogram, λ′(t) is the mean photon number, η is the detector efficiency, α is the number of photons reflected by the target (which can be calculated from the lidar equation), f is the system impulse response (assumed known from measurement), R is the target distance, c is the speed of light, and bd is the dark count of the detector.

The wavelength of our lidar is 1064 nm. In addition to the photons reflected by the target, the echo signal contains a large number of photons scattered by the fog. We therefore use a model estimation algorithm based on the Gamma distribution to estimate the fog echo signal. The time-domain distribution of photons after multiple scattering in the scattering medium is [18,21]:

$$\lambda_{\mathrm{fog}}(t) = \eta r \times \frac{\beta^{K+1}}{K!}\, t^{K} \exp(-\beta t)$$
where K is the maximum number of scattering events, r is related to the backscattering coefficient of the fog, and β is the average number of scattering events per time bin for photons arriving at the detector plane.
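To make the Gamma fog model concrete, the following minimal Python sketch evaluates Eq. (3) for the two fog conditions later simulated in Fig. 1; the amplitude r = 1 and the unit detector efficiency are illustrative placeholders, not measured system values.

```python
import numpy as np
from scipy.special import gamma as gamma_fn  # generalises K! to non-integer K

def fog_response(t, r, beta, K, eta=1.0):
    """Gamma-shaped temporal profile of fog-scattered photons, Eq. (3):
    lambda_fog(t) = eta * r * beta^(K+1) / K! * t^K * exp(-beta * t)."""
    return eta * r * beta ** (K + 1) / gamma_fn(K + 1) * t ** K * np.exp(-beta * t)

# Illustrative evaluation over a 600-bin gate for the two cases of Fig. 1.
t = np.arange(1, 601)
profile_a = fog_response(t, r=1.0, beta=0.003, K=1.1)
profile_b = fog_response(t, r=1.0, beta=0.01, K=1.4)
```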

Owing to the dead-time of the single-photon counting lidar, the detection probability of the i-th time bin depends on the detection probabilities of the preceding bins. For a lidar system with a long dead-time, the single-photon detector can respond to only one photon within the gate, i.e. it operates in single-trigger mode [23]. When strong scattering and background photons are always present in the gate, the SPAD (single-photon avalanche diode) records only the first returning photon in each emission pulse cycle and discards photons arriving afterwards. This leads to a systematic bias between the observed histogram and the true echo photon response; in particular, the bias reduces the number of signal photons recorded later in the gate. This phenomenon of photon accumulation at the front of the gate is called pile-up [24].

To compensate for the pile-up, we model our observed histograms with a multinomial distribution. This model is suitable for single-photon detectors working in both the low-flux and high-flux regimes; details are given in Ref. [24]. The probability P(H|λ′) of observing histogram H in the daytime outdoor fog environment is:

$$\begin{aligned} P(H|\lambda') &= \frac{N!}{H_1! \cdots H_T!\,(N - \mathbf{1}^T H)!}\, P(\mathrm{no\ detections})^{N - \mathbf{1}^T H} \prod_{i=1}^{T} P(i|\lambda')^{H_i} \\ &= \frac{N!}{H_1! \cdots H_T!\,(N - \mathbf{1}^T H)!}\, \exp(-\mathbf{1}^T \lambda')^{N - \mathbf{1}^T H} \prod_{i=1}^{T} \left( \exp\!\Big(-\sum_{k=1}^{i-1} \lambda'_k\Big) - \exp\!\Big(-\sum_{k=1}^{i} \lambda'_k\Big) \right)^{H_i} \end{aligned}$$
where T is the maximum bin number and N is the total number of laser pulses. The negative log-likelihood function obtained from Eq. (4) is
$$\begin{aligned} \Lambda(H) &= -\log P(H|\lambda') \\ &= \mathbf{1}^T \lambda' (N - \mathbf{1}^T H) + \sum_{i=1}^{T} H_i \sum_{k=1}^{i-1} \lambda'_k - \sum_{i=1}^{T} H_i \log(1 - e^{-\lambda'_i}) \end{aligned}$$

The photon number of the i-th time bin is obtained by maximizing the likelihood:

$$\lambda'_i = \log\left(1 + \frac{H_i}{N - \sum_{k=1}^{i} H_k}\right)$$
where λ′ is the echo photon rate function of a single laser pulse. The target echo photons of a single laser pulse are therefore λtarget = λ′ − λfog − λb − bd.
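As a minimal sketch of the pile-up compensation of Eq. (6), the function below recovers the per-pulse photon rate from an observed histogram; the array names are hypothetical and a small constant guards against division by zero.

```python
import numpy as np

def pileup_compensate(H, N):
    """Maximum-likelihood photon rate lambda'_i (Eq. 6) under the multinomial
    observation model: lambda'_i = log(1 + H_i / (N - sum_{k<=i} H_k)),
    where H is the observed histogram and N is the number of laser pulses."""
    H = np.asarray(H, dtype=float)
    remaining = N - np.cumsum(H)                   # pulses not yet triggered by bin i
    return np.log1p(H / np.maximum(remaining, 1e-12))

# The per-pulse target term then follows by subtraction:
# lambda_target = pileup_compensate(H, N) - lambda_fog - lambda_b - b_d
```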

For a single-photon detector array, the laser spot distribution, fog distribution and detector efficiency differ from pixel to pixel. The average scattering parameter β therefore differs per pixel, leading to large differences in the received scattering echo signals. Accurate per-pixel estimation of the fog echo signal is thus the key to outdoor depth imaging in fog. In this paper, a multinomial observation model and a Gamma distribution model are combined to achieve 3D single-photon imaging through realistic fog in an outdoor environment during the day.

3. Depth reconstruction based on dual-Gamma estimation

When using single-photon counting lidar for depth imaging in an outdoor fog environment, the photons reflected by the target are affected not only by scattered photons but also by sky background photons. In the daytime, the sky background noise contribution to the registered signal at wavelength λ can be evaluated as [25]:

$$\lambda_b = \eta \eta_r P_b \pi \left(\frac{\psi}{2}\right)^2 \Delta\lambda\, A_r\, \Delta t\, \frac{\lambda}{hc}$$
where ηr is the transmittance of the receiving optical system, Pb is the sky spectral radiance (W/m²/sr/nm), ψ is the receiving field angle (rad), Δλ is the bandwidth of the filter (nm), Ar is the effective aperture area of the receiving optical system, and Δt is the width of a time bin. For a wavelength of 1064 nm, the sky spectral radiance under daytime conditions can be obtained from the solar spectrum, Pb = 0.05 W/m²/sr/nm. Based on our system parameters, the noise photon rate in each time bin is therefore set to λb = 0.005 in the simulation.
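The snippet below evaluates Eq. (7) with the stated Pb and a 1.25 ns bin; the receiver parameters (η, ηr, ψ, Δλ, Ar) are placeholder values chosen for illustration only, not the actual system parameters of Table 2, although they yield a value close to the λb ≈ 0.005 used in the simulation.

```python
import numpy as np

h, c = 6.626e-34, 3.0e8          # Planck constant [J*s], speed of light [m/s]
wavelength = 1064e-9             # operating wavelength [m]
P_b = 0.05                       # sky spectral radiance [W / m^2 / sr / nm] (from the text)
delta_t = 1.25e-9                # time-bin width [s] (Fig. 2)

# Placeholder receiver parameters -- illustrative only.
eta, eta_r = 0.2, 0.5            # detector efficiency, receiver-optics transmittance
psi = 1.0e-4                     # receiving field angle [rad]
delta_lam = 10.0                 # filter bandwidth [nm]
A_r = np.pi * 0.025 ** 2         # effective receiver aperture area [m^2]

lambda_b = (eta * eta_r * P_b * np.pi * (psi / 2) ** 2 * delta_lam * A_r * delta_t
            * wavelength / (h * c))
print(f"background photons per time bin: {lambda_b:.3e}")
```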

Since the fog echo photons follow the Gamma distribution, we use different β and K to simulate the fog echo signal under different conditions. Figure 1 shows the influence of noise photons on the fog observation results. When there are no noise photons, the fog histogram obtained by Monte Carlo simulation is consistent with the photon-number distribution. However, with noise photons, the simulated histogram differs markedly from the photon-number distribution: photons pile up at the front of the gate, and the strong noise photons distort the Gamma distribution of the fog.

Fig. 1. Scattering echoes of different fog distributions, λb = 0.005. The first row: β = 0.003, K = 1.1, the second row: β = 0.01, K = 1.4. The first column is the photon numbers, the second column is the detection probability, and the third column is the Monte Carlo (MC) simulation results and Gamma fitting curve.

The Monte Carlo simulation results with different parameters are fitted with a Gamma distribution. Table 1 lists the maximum scattering number K and the average scattering parameter β obtained by fitting. When the histogram counts contain noise photons, the parameters estimated directly from a Gamma fit have large errors: although the echo signal containing scattering and noise photons can still be fitted well by a Gamma function, the estimated parameters differ substantially from the true values. Moreover, the pile-up causes serious waveform distortion. These effects are very harmful to depth image reconstruction with a single-photon detector array in daytime fog. For better depth image reconstruction, we therefore focus on accurate estimation of the total echo photons λ′ and the fog scattering photons λfog.
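To illustrate how single-trigger pile-up and background noise distort the Gamma profile (the effect shown in Fig. 1 and Table 1), the following Monte Carlo sketch simulates a histogram acquisition; the fog parameters match the second case of Fig. 1, while the amplitude r = 1 and the pulse count are illustrative assumptions.

```python
import numpy as np
from math import gamma as gamma_fn

rng = np.random.default_rng(0)

def simulate_histogram(lam, n_pulses):
    """Single-trigger Monte Carlo acquisition: for each laser pulse, draw
    Poisson counts per bin from the rate vector lam and record only the
    first bin that fires (the dead-time spans the rest of the gate)."""
    H = np.zeros(len(lam), dtype=int)
    for _ in range(n_pulses):
        fired = np.nonzero(rng.poisson(lam))[0]
        if fired.size:
            H[fired[0]] += 1
    return H

t = np.arange(1, 601)
beta, K, r = 0.01, 1.4, 1.0                                  # second row of Fig. 1
lam_fog = r * beta ** (K + 1) / gamma_fn(K + 1) * t ** K * np.exp(-beta * t)

H_clean = simulate_histogram(lam_fog, 20_000)                # fog only
H_noisy = simulate_histogram(lam_fog + 0.005, 20_000)        # fog plus lambda_b = 0.005
# A direct Gamma fit of H_noisy now returns biased (beta, K), as in Table 1.
```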

Table 1. Gamma distribution fitting results under different fog conditions

To solve these problems, the target depth image reconstruction algorithm for the outdoor fog environment adopts a three-step strategy.

The first step is histogram correction, corresponding to the first Gamma estimation. Because the system is aimed at outdoor long-distance targets, noise photons introduce anomalous points in the echo signal. According to Eq. (6), the value of each time bin in the histogram directly affects the estimated photon numbers, so to reduce the estimation error the observed histogram data H1 must be denoised per pixel. The simulation results in Fig. 1 show that the Poisson-triggered histogram (containing scattered and noise photons) also approximately follows a Gamma distribution. We therefore first perform a Gamma fit on the raw histogram data; the estimate y1 is the blue line in Fig. 2. The residual is then calculated as ΔH = H1 − y1. Since the residual ΔH contains the signal, the traditional log-matched filtering algorithm [26] is used to find the initial signal position, which is set as the centre of a signal window ɛ whose width is twice the width of the system impulse response. The counts of the residual ΔH that fall inside the window ɛ are retained and defined as the initial signal (the green curve in Fig. 2). A threshold ξ is set for the residual counts: if the residual ΔH outside the signal window satisfies |ΔH| > ξ, the value in the corresponding time bin is replaced with its smooth-filtered value (the filter window is twice the width of the system impulse response). The final corrected histogram H2 is shown as the red line in Fig. 2.
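The sketch below outlines this first step under simplifying assumptions: scipy's least-squares curve_fit stands in for the Gamma estimation, a plain matched filter against the instrument response stands in for the log-matched filter of Ref. [26], and the threshold ξ and initial fit guesses are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.ndimage import uniform_filter1d
from math import gamma as gamma_fn

def gamma_model(t, r, beta, K):
    return r * beta ** (K + 1) / gamma_fn(K + 1) * t ** K * np.exp(-beta * t)

def correct_histogram(H1, irf, xi=5.0):
    """Step 1 (sketch): fit a Gamma profile to the raw per-pixel histogram,
    locate the signal window inside the residual, and smooth anomalous bins
    outside the window. Returns the corrected histogram H2 and the window."""
    H1 = np.asarray(H1, dtype=float)
    t = np.arange(1, len(H1) + 1, dtype=float)
    p0 = [H1.max(), 0.01, 1.0]                                # illustrative initial guess
    popt, _ = curve_fit(gamma_model, t, H1, p0=p0, maxfev=10_000)
    y1 = gamma_model(t, *popt)                                # first Gamma estimate (blue line, Fig. 2)
    dH = H1 - y1                                              # residual containing the signal

    # Matched filter against the IRF (stand-in for the log-matched filter of Ref. [26]).
    score = np.correlate(dH, irf / irf.sum(), mode="same")
    centre = int(np.argmax(score))
    half = len(irf)                                           # window width ~ twice the IRF width
    window = np.zeros(len(H1), dtype=bool)
    window[max(centre - half, 0):centre + half] = True

    smooth = uniform_filter1d(H1, size=max(2 * len(irf), 3))
    outliers = (~window) & (np.abs(dH) > xi)
    H2 = H1.copy()
    H2[outliers] = smooth[outliers]                           # corrected histogram (red line, Fig. 2)
    return H2, window
```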

Fig. 2. Histogram correction example of a single pixel. The visibility is 1.7 km. The target distance is 1.4 km. The delay of the gate is 9000 ns. The time Bin is 1.25 ns.

The second step is to separate the signal photons from the non-signal photons, corresponding to the second Gamma estimation. Based on the corrected histogram H2, the photon number λ′ is obtained by solving Eq. (6) by maximum likelihood; this process is the pile-up compensation. Since the background noise can be regarded as approximately constant, a median estimator is used to estimate the noise level. The echo signal H3, containing only scattered photons and target-reflected photons, is then obtained. Maximum likelihood estimation is used again to estimate the parameters of the fog echo signal, and the least squares method refines the parameters (r, β, K) to obtain the best estimate λfog. This separates the signal photons from the scattered photons, λtarget = H3 − λfog. The processing uses all N pulses to estimate the scattered photons.
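A minimal sketch of this second step is given below; a least-squares Gamma fit stands in for the maximum-likelihood estimation followed by least-squares refinement described above, and the initial guess is illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit
from math import gamma as gamma_fn

def gamma_model(t, r, beta, K):
    return r * beta ** (K + 1) / gamma_fn(K + 1) * t ** K * np.exp(-beta * t)

def separate_signal(H2, N):
    """Step 2 (sketch): pile-up compensation of the corrected histogram (Eq. 6),
    median estimate of the constant background level, a second Gamma fit of the
    fog component, and subtraction to leave the target photons."""
    H2 = np.asarray(H2, dtype=float)
    lam = np.log1p(H2 / np.maximum(N - np.cumsum(H2), 1e-12))   # pile-up compensation
    lam_b = np.median(lam)                                      # background as a constant level
    H3 = np.clip(lam - lam_b, 0.0, None)                        # scattered + reflected photons

    t = np.arange(1, len(H3) + 1, dtype=float)
    p0 = [H3.max(), 0.01, 1.0]                                  # illustrative initial guess
    (r, beta, K), _ = curve_fit(gamma_model, t, H3, p0=p0, maxfev=10_000)
    lam_fog = gamma_model(t, r, beta, K)
    lam_target = np.clip(H3 - lam_fog, 0.0, None)               # separated signal photons
    return lam_target, lam_fog, lam_b
```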

The third step is target depth image reconstruction. For complex targets (different distances, area ratios and reflectivities) within the field of view of a single pixel, the signal extracted by the above processing contains multiple target peaks. The purpose here is to restore a 64 × 64 pixel image; super-resolution imaging is not required. We therefore use a pulse-like selection algorithm to select the echo with the maximum pulse width as the target signal of the pixel. For the separated signal photons, a log-matched filtering algorithm is used to estimate the target position per pixel. However, strong noise photons and strong fog scattering still leave some noise in the reconstructed depth image. We therefore compute a histogram of the reconstructed depths and remove outliers according to its distribution, and a spatial median filter is used to fill in the missing pixels of the denoised depth image.
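The sketch below illustrates this third step for a cube of separated per-pixel signals; a plain matched filter again stands in for the log-matched filter, the outlier rule on the depth histogram is an illustrative choice, and the pulse-like selection of multi-peak pixels is omitted for brevity.

```python
import numpy as np
from scipy.ndimage import median_filter

def reconstruct_depth(signal_cube, irf, bin_ns=1.25):
    """Step 3 (sketch): per-pixel position estimation of the separated signal,
    histogram-based outlier removal, and spatial median filling of the removed
    pixels. signal_cube has shape (rows, cols, bins); returns depth in metres."""
    rows, cols, _ = signal_cube.shape
    kernel = irf / irf.sum()
    depth_bins = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            score = np.correlate(signal_cube[r, c], kernel, mode="same")
            depth_bins[r, c] = np.argmax(score)                 # per-pixel target bin

    # Remove pixels whose depth falls in sparsely populated histogram bins.
    hist, edges = np.histogram(depth_bins, bins=50)
    sparse = hist < 0.01 * depth_bins.size                      # illustrative threshold
    idx = np.clip(np.digitize(depth_bins, edges) - 1, 0, len(hist) - 1)
    noisy = sparse[idx]

    # Fill removed pixels with the 3x3 spatial median of their neighbourhood.
    filled = median_filter(depth_bins, size=3)
    depth_bins[noisy] = filled[noisy]
    return depth_bins * bin_ns * 0.15                           # 1 ns round trip = 0.15 m
```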

The proposed reconstruction algorithm, called the dual-Gamma estimation algorithm for depth imaging in outdoor foggy environments, is summarized in Fig. 3. A detailed description of the Gamma estimation step can be found in our previous work [21].

Fig. 3. Brief introduction of depth imaging algorithm in outdoor foggy environment based on dual-Gamma estimation.

4. Data acquisition and result analysis

4.1 Experiment and data description

In our recent work, we carried out target depth imaging experiments in an indoor smoke environment and achieved good reconstruction results [21]. To test the reconstruction ability of the model estimation algorithm in a realistic environment, we carried out outdoor long-range target depth imaging in fog. The system was adjusted to meet the requirements of long-range target detection in fog. The maximum average pulse power of the system is 3 W. Gaussian fitting is used to obtain the system impulse response, with FWHM = 4 ns. The main parameters are summarized in Table 2. Owing to a circuit design issue, the system tends to produce a large number of photon counts in the first 19 time bins of the gate; these counts are much larger than the target counts and are stable across these bins, which is very harmful to the model estimation algorithm. We therefore use a time-delay circuit to filter out this noise, i.e. the detector does not record photoelectrons in the first 19 time bins of the gate. Consequently, the histogram in Fig. 2 starts from the 20th time bin.

Table 2. Summary of main parameters

The experimental scene is shown in Fig. 4(a); at this moment the sky illuminance is 6200 lx. The scene contains four targets at different distances: A, B, C and D. Target D is the farthest, at about 1.7 km; only its very blurred outline is visible in the camera image, and targets farther than 1.7 km cannot be found in Fig. 4(a). Atmospheric visibility is the maximum distance at which a person with normal vision can see and recognize the contour and shape of a target during the day [27]. We therefore approximately take the atmospheric visibility to be about 1.7 km. We attempted to reconstruct the depth image of target D, but the result is very poor owing to the low SBR. The reconstruction of target A, at a distance of 1.4 km, is very good, as confirmed in the next section. This paper therefore mainly tests the reconstruction ability of the proposed algorithm on target A.

Fig. 4. Scene and depth image reconstruction results under different conditions. (a) Camera image in foggy environment, (b) log-matched filter reconstruction result of target A with fog, (c) the camera image of target A without fog, (d) log-matched filter reconstruction results of target A without fog.

The order of target detection in our experiment is target D, target C, target B and then target A. After the detection of target A is completed, we turn off the lidar system without changing its parameters. Then, when there is no fog, the lidar system is restarted to detect target A again. The experimental scene at this time is shown in Fig. 4(c). The depth image of target A is reconstructed by the log-matched filter, and the result is shown in Fig. 4(d). We take this result as the ideal image of target A and use it to evaluate the restoration ability of all algorithms.

4.2 Results and analysis

To test the target depth image reconstruction ability of the proposed algorithm in a real fog environment, especially at low visibility or very short acquisition time, we reconstruct the depth image of target A in Fig. 4(a). Figure 5 compares our algorithm with the traditional Coates algorithm (which compensates for pile-up) [24,28] and the APEA (all-parameter estimation algorithm) [18]. The number of imaging frames equals the number of laser emission pulses; 20 k imaging frames correspond to an acquisition time of 1 s. At this setting, the proposed algorithm recovers more target surface information, especially in the lower region of the target. As the number of imaging frames decreases, the recovery ability of all algorithms decreases. With 5 k imaging frames (0.25 s acquisition), the proposed algorithm recovers more of the target contours than the other two algorithms, but the image completeness is poor and the reconstruction in some local areas of the target is worse than that of the APEA.

Fig. 5. Depth image reconstruction results of target A under different imaging frames.

For a single-photon lidar system, the SBR (signal-to-background ratio) decreases rapidly as the imaging distance R increases, which severely limits the reconstruction of useful images [8]. Here, we use the SBR to describe the quality of the imaging environment in fog and compare the reconstruction ability of the four algorithms under different SBR conditions. We define the SBR as the number of signal photons (i.e., the back-reflections from the target) divided by the number of noise photons (i.e., ambient light, dark counts and scattering photons) within the 600-bin timing gate. Figure 6 shows the histogram and photon-number distribution corresponding to the minimum SBR at which each of the four algorithms can still reconstruct target A. As expected, the proposed algorithm can extract the signal at the lowest SBR, 0.003. We also use the SNR [29] and the photons per pixel (PPP) [8,9] to describe the strength of the returned signal for a 1 s acquisition time; the results are shown in Table 3. The proposed algorithm operates at the lowest SNR and PPP, 0.474 and 75.7, respectively. Note that these indicators are calculated after pile-up compensation based on Eq. (6). Therefore, the proposed algorithm has the strongest ability to extract a small number of signal photons at low SBR.
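For reference, the SBR as defined above can be computed from the separated per-pixel components with the small helper below (the argument names are hypothetical).

```python
import numpy as np

def compute_sbr(lam_target, lam_fog, lam_b, b_d=0.0):
    """SBR as defined in the text: target back-reflection photons divided by
    non-signal photons (fog scattering, ambient light and dark counts)
    within the 600-bin timing gate."""
    signal = float(np.sum(lam_target))
    background = float(np.sum(lam_fog)) + (lam_b + b_d) * len(lam_target)
    return signal / background
```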

Fig. 6. Histogram corresponding to the lowest SBR value that can be processed by different algorithms. The number of imaging frames is 20k. The first row is the histogram, and the second row is the photon number distribution after pile-up compensation.

Table 3. Minimum index of reconstruction by different algorithms

To compare the reconstruction ability of the three algorithms, we again use two metrics, target recovery (TR) and relative average ranging error (RARE) [21], to evaluate the reconstructed images. The target recovery is defined as:

$$N_{ef}(i) = \begin{cases} 1, & |d(i) - d_s(i)| \le d_{th} \\ 0, & |d(i) - d_s(i)| > d_{th} \end{cases}$$
$$TR = \frac{\mathrm{sum}(N_{ef}(i))}{N_{total}}$$
where d is the reconstructed distance, ds is the ideal distance, dth is the distance error threshold, i indexes the pixels of the image, Nef marks the pixels whose distance error is within the threshold, and Ntotal is the total number of pixels in the image. The echo signal FWHM of this system is 4 ns, and we regard errors within half of the FWHM as acceptable, so the distance error threshold dth is 2 ns. The closer TR is to 1, the better the algorithm recovers the target.

The relative average ranging error is defined as:

$$RARE = \sqrt{\frac{\sum_{j=1}^{\mathrm{sum}(N_{ef})} (d_{ef}(j) - d_s(j))^2}{\mathrm{sum}(N_{ef})}}$$
where def is the reconstructed distance of a pixel judged to belong to the target, ds is the ideal distance of the corresponding pixel, and j indexes the pixels that satisfy the distance error threshold. The smaller the RARE, the more accurate the reconstructed target image.
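A small sketch computing both metrics from per-pixel distance maps is given below; d and d_s are assumed to be expressed in the same units as the threshold dth (here time bins or nanoseconds).

```python
import numpy as np

def tr_and_rare(d, d_s, d_th=2.0):
    """Target recovery (Eqs. 8-9) and relative average ranging error (Eq. 10).
    d and d_s are the reconstructed and ideal per-pixel distances; pixels with
    |d - d_s| <= d_th count as correctly recovered."""
    d = np.asarray(d, dtype=float).ravel()
    d_s = np.asarray(d_s, dtype=float).ravel()
    n_ef = np.abs(d - d_s) <= d_th                     # Eq. (8)
    tr = n_ef.sum() / d.size                           # Eq. (9)
    rare = np.sqrt(np.mean((d[n_ef] - d_s[n_ef]) ** 2)) if n_ef.any() else np.inf  # Eq. (10)
    return tr, rare
```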

Table 4 shows the TR and RARE obtained with the different algorithms. As the number of imaging frames decreases, the TR of the images reconstructed by the three algorithms decreases while the RARE increases. The results show that, for target imaging in an outdoor low-visibility fog environment, the detection ability of the lidar system can be improved by increasing the acquisition time. However, the proposed algorithm is the most sensitive to the number of imaging frames, because the processing requires two Gamma estimations of the echo signal: when the number of imaging frames is small, fewer echo photons are received, and a small number of photon counts easily causes errors in the parameter estimation or even estimation failure. As the number of imaging frames increases, our algorithm shows a clear reconstruction advantage. When the number of imaging frames is 20 k, the TR is 0.3088 and the RARE is 0.6162; compared with the APEA, TR is increased by 20.5% and RARE is reduced by 28.2%. Therefore, when the number of imaging frames is sufficient, the proposed algorithm achieves the highest TR and the lowest RARE of the compared algorithms.

Table 4. Comparison of evaluation results of reconstructed images

Since increasing the acquisition time improves the detection ability of the system, we set the acquisition time to 1 s, i.e. 20 k imaging frames, and image targets B and C in Fig. 4(a); their reconstructed depth images are shown in Fig. 7. The distances of target B and target C are 0.8 km and 0.5 km, respectively. Because both targets are close relative to the visibility of 1.7 km, the number of photons reflected by the targets is very large. Therefore, compared with the results for target A, the reconstructions of targets B and C by the four algorithms are more complete; the closer the target, the better the reconstruction. However, for both targets B and C, the proposed algorithm shows the best target recovery, and the target edge contours are recovered more completely. For target B, the proposed algorithm achieves better edge extraction for the buildings within the red box of its reconstructed image, making the target outline more distinct. For target C, the proposed algorithm reconstructs the window and column above the roof (red box in its reconstructed image), which none of the other three algorithms can do. These results show that the proposed algorithm offers stronger spatial detail without increasing the number of reconstructed image pixels. Therefore, when the target distance is less than the visibility, the proposed algorithm is well suited to target depth imaging in foggy environments.

Fig. 7. Depth image reconstruction results of different targets. The number of imaging frames is 20 k. The first row is the image of target B without fog and the reconstruction results of target B with fog. The second row is the image of target C without fog and the reconstruction results of target C with fog.

5. Conclusion

Due to the strong scattering of fog and the strong background noise, the signal-to-background ratio (SBR) of each pixel of a single-photon detector array is extremely low, and traditional signal extraction algorithms are limited under this extreme condition. This paper therefore proposes an outdoor three-dimensional imaging algorithm suitable for a single-photon detector array, named the dual-Gamma estimation algorithm, which accounts for the influence of noise photons and scattered photons on the observations. An observation model based on the multinomial distribution is used to compensate for the pile-up caused by scattering and noise photons. The algorithm has been successfully demonstrated for targets at different distances and imaging times. At a visibility of 1.7 km, the SBR of a target at a distance of 1.4 km in fog is as low as 0.003; the proposed algorithm still successfully separates the reflected signal photons from non-signal photons. The target recovery of the reconstructed image is 0.3088 and the relative average ranging error is 0.6162; compared with the traditional algorithm, the target recovery is improved by 20.5% and the relative average ranging error is reduced by 28.2%. For targets at different distances under 1.7 km visibility, the proposed algorithm shows the best target recovery, and the target edge contours in the restored images are more complete. This research extends the fog scattering estimation model from indoor to outdoor environments and improves the weather adaptability of single-photon detector arrays. The proposed algorithm can only reconstruct the depth image when the target distance is less than the visibility. To further improve the detection ability of the system, in addition to increasing the laser power, scattered and background photons should also be suppressed in the optical design; laser waveform shaping is another effective technique. These topics will be the focus of our future work.

Acknowledgments

The authors are grateful to the anonymous reviewers for their constructive comments.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. M. O’Toole, D. B. Lindell, and G. Wetzstein, “Confocal non-line-of-sight imaging based on the light-cone transform,” Nature 555(7696), 338–341 (2018). [CrossRef]  

2. D. B. Lindell, G. Wetzstein, and M. O’Toole, “Wave-based non-line-of-sight imaging using fast f-k migration,” ACM Trans. Graph. 38(4), 1–13 (2019). [CrossRef]  

3. A. Maccarone, A. McCarthy, X. Ren, R. E. Warburton, A. M. Wallace, J. Moffat, Y. Petillot, and G. S. Buller, “Underwater depth imaging using time-correlated single-photon counting,” Opt. Express 23(26), 33911–33926 (2015). [CrossRef]  

4. A. Halimi, A. Maccarone, A. McCarthy, S. McLaughlin, and G. S. Buller, “Object depth profile and reflectivity restoration from sparse single-photon data acquired in underwater environments,” IEEE Trans. Comput. Imaging 3(3), 472–484 (2017). [CrossRef]  

5. A. Maccarone, F. M. D. Rocca, A. McCarthy, R. Henderson, and G. S. Buller, “Three-dimensional imaging of stationary and moving targets in turbid underwater environments using a single-photon detector array,” Opt. Express 27(20), 28437–28456 (2019). [CrossRef]  

6. D. B. Lindell and G. Wetzstein, “Three-dimensional imaging through scattering media based on confocal diffuse tomography,” Nat Commun 11(1), 4517 (2020). [CrossRef]  

7. R. Tobin, A. Halimi, A. McCarthy, M. Laurenzis, F. Christnacher, and G. S. Buller, “Three-dimensional single-photon imaging through obscurants,” Opt. Express 27(4), 4590–4611 (2019). [CrossRef]  

8. Z. Li, X. Huang, Y. Cao, B. Wang, Y. Li, W. Jin, C. Yu, J. Zhang, Q. Zhang, C. Peng, F. Xu, and J. Pan, “Single-photon computational 3D imaging at 45 km,” Photonics Res. 8(9), 1532–1540 (2020). [CrossRef]  

9. Z. Li, J. Ye, X. Huang, P. Jiang, Y. Cao, Y. Hong, C. Yu, J. Zhang, Q. Zhang, C. Peng, F. Xu, and J.-W. Pan, “Single-photon imaging over 200 km,” Optica 8(3), 344–349 (2021). [CrossRef]  

10. Y. Xu, J. Wen, L. Fei, and Z. Zhang, “Review of Video and Image Defogging Algorithms and Related Studies on Image Restoration and Enhancement,” IEEE Access 4, 165–188 (2016).

11. S. Zhao, L. Zhang, S. Huang, Y. Shen, S. Zhao, and Y. Yang, “Evaluation of Defogging: A Real-World Benchmark Dataset, A New Criterion and Baselines,” 2019 IEEE International Conference on Multimedia and Expo (ICME), 1840–1845 (2019).

12. D. Kijima, T. Kushida, H. Kitajima, K. Tanaka, H. Kubo, T. Funatomi, and Y. Mukaigawa, “Time-of-flight imaging in fog using multiple time-gated exposures,” Opt. Express 29(5), 6453–6467 (2021). [CrossRef]  

13. A. M. Pawlikowska, A. Halimi, R. A. Lamb, and G. S. Buller, “Single-photon three-dimensional imaging at up to 10 kilometers range,” Opt. Express 25(10), 11919–11931 (2017). [CrossRef]  

14. J. Rapp and V. K. Goyal, “A Few Photons Among Many: Unmixing Signal and Noise for Photon-Efficient Active Imaging,” IEEE Trans. Comput. Imaging 3(3), 445–459 (2017).

15. R. Tobin, A. Halimi, A. McCarthy, P. J. Soan, and G. S. Buller, “Robust real-time 3D imaging of moving scenes through atmospheric obscurant using single-photon LiDAR,” Sci Rep 11(1), 11236 (2021). [CrossRef]  

16. T. H. Sang, S. Tsai, and T. Yu, “Mitigating Effects of Uniform Fog on SPAD Lidars,” IEEE Sensors Letters 4(9), 1–4 (2020).

17. J. Mau, V. Devrelis, G. Day, J. Trumpf, and D. V. Delic, “The use of statistical mixture models to reduce noise in SPAD images of fog-obscured environments,” Proc. SPIE 11525, SPIE Future Sensing Technologies, 115250P (2020).

18. G. Satat, M. Tancik, and R. Raskar, “Towards photography through realistic fog,” 2018 IEEE International Conference on Computational Photography (ICCP), 1–10 (2018).

19. D. Liu, J. Sun, S. Gao, L. Ma, P. Jiang, S. Guo, and X. Zhou, “Single-parameter estimation construction algorithm for Gm-APD ladar imaging through fog,” Opt. Commun. 482, 126558 (2021). [CrossRef]  

20. S. Guo, W. Lu, J. Sun, D. Liu, X. Zhou, and P. Jiang, “Single quantity estimation method for single photon lidar dehazing imaging,” Optics and Precision Engineering 29(6), 1234–1241 (2021). [CrossRef]  

21. Y. Zhang, S. Li, J. Sun, D. Liu, X. Zhang, X. Yang, and X. Zhou, “Dual-parameter estimation algorithm for Gm-APD Lidar depth imaging through smoke,” Measurement 196, 111269 (2022). [CrossRef]  

22. Y. Zhang, S. Li, P. Jiang, J. Sun, D. Liu, X. Yang, X. Zhang, and H. Zhang, “Depth imaging through realistic fog using Gm-APD Lidar,” Proc. SPIE 11907, Sixteenth National Conference on Laser Technology and Optoelectronics, 119070N (2021).

23. Y. Zhang, S. Li, J. Sun, X. Zhang, and R. Zhang, “Detection of the near-field targets by non-coaxial underwater single-photon counting lidar,” Optik 259, 169010 (2022). [CrossRef]  

24. F. Heide, S. Diamond, D. B. Lindell, and G. Wetzstein, “Sub-picosecond photon-efficient 3D imaging using single-photon sensors,” Sci Rep 8(1), 17726 (2018). [CrossRef]  

25. J. Xian, D. Sun, S. Amoruso, W. Xu, and X. Wang, “Parameter optimization of a visibility LiDAR for sea-fog early warnings,” Opt. Express 28(16), 23829–23845 (2020). [CrossRef]  

26. D. Shin, A. Kirmani, V. K. Goyal, and J. H. Shapiro, “Photon-efficient computational 3-D and reflectivity imaging with single-photon detectors,” IEEE Trans. Comput. Imaging 1(2), 112–125 (2015).

27. R. Rao, “Vision through atmosphere and atmospheric visibility,” Acta optica sinica 30(9), 2486–2492 (2010). [CrossRef]  

28. P. B. Coates, “The correction for photon ‘pile-up’ in the measurement of radiative lifetimes,” Journal of Physics E: Scientific Instruments 1(8), 878 (1968).

29. S. Pellegrini, G. S. Buller, J. M. Smith, A. M. Wallace, and S. Cova, “Laser-based distance measurement using picosecond resolution time-correlated single photon counting,” Meas. Sci. Technol. 11(6), 712–716 (2000). [CrossRef]  

