
Photon-limited single-pixel imaging

Open Access

Abstract

Photon-limited imaging techniques are desired for tasks that capture and reconstruct images from a small number of detected photons. However, achieving high photon-efficiency remains a challenge. Here, we propose a novel photon-limited imaging technique that exploits the consistency between the photon-detection probability in a single pulse and the light-intensity distribution in a single-pixel correlated imaging system. We demonstrate theoretically and experimentally that our technique can reconstruct a high-quality 3D image using only one pulse per frame, thereby achieving a high photon efficiency of 0.01 detected photons per pixel. Long-distance field experiments on a 100 km cooperative target and a 3 km practical target are conducted to verify its feasibility. Compared with conventional single-pixel imaging, which requires hundreds or thousands of pulses per frame, our technique saves two orders of magnitude in total light power and acquisition time.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

To reconstruct an image of an object, conventional imaging techniques usually require $10^3-10^5$ photons per pixel [1–3]. However, in many situations only a small number of photons can be detected because of the limited detection time and light power. The photon-limited imaging technique emerges as a potential solution to these problems; such techniques have attracted great interest because of their important applications in extremely low light conditions, such as night vision [4], biological imaging [5], remote sensing [6], and so forth.

In photon-limited imaging, researchers attempt to increase the photon-efficiency by using fewer photons per pixel to restore the image. Kirmani, Shin, et al. investigated photon-limited imaging and presented the first-photon imaging (FPI) technique, which can recover the scene's reflectivity from the first reflected photon of each pixel by using a scanning single-photon detector or a single-photon camera [7,8]. Using a total-variation restoration optimization algorithm [9,10], Pawlikowska et al. reconstructed kilometer-range depth profiles with average signal returns of less than one photon per pixel [11]. Nevertheless, the performance of single-photon cameras or scanners [7,8,11,12] for photon-limited imaging is constrained by the resolution of the optical system or by the scanning and detection times when the objective scene is too large. Recently, by implementing ghost imaging [13–15] and single-pixel imaging [16–20] configurations with single-photon detectors, some interesting photon-limited imaging results have been achieved [21–23]. In particular, by employing an entanglement source and a compressed-sensing algorithm, Morris et al. reconstructed images from raw data comprising fewer than one detected photon per pixel [22,23]. However, imaging systems with entangled photon sources are currently difficult to apply in practical scenarios such as remote sensing. With a classical source, our group investigated photon-limited computational ghost imaging based on the time-correlated single-photon-counting (TCSPC) technique at low light levels [24]. In addition, we studied the first-photon ghost imaging (FPGI) technique [25], which can reconstruct images using no more than 0.1 photons per pixel. Conventional ghost imaging requires hundreds of photons to be detected, involving tens of thousands of pulses per frame, and at least thousands of frames are needed to restore an image. The FPGI technique reduces data manipulation and detection time by roughly two orders of magnitude compared with conventional ghost imaging [17,24]. However, FPGI still requires hundreds of laser pulses in each frame, and the unused empty reflected pulses occupy most of the detection time.

Since the single-photon detector is inherently a 1-bit single-pixel measurement device with a certain probability distribution, time-based depth imaging can be obtained by simply superimposing the measurement patterns that produce a single-photon event (each pattern corresponds to one pulse illumination). We call this novel photon-limited imaging technique single-pulse single-photon single-pixel imaging (SP$^3$I). The technique is based on a specific property of the photon-detection probability distribution of a single-photon detector in single-pixel imaging: neglecting the patterns in which no photons are detected does not change the original probability distribution. Considering that such neglecting makes it difficult to estimate the intensity and distance of targets, we operated within the constraints of Poisson statistics and exploited the sparsity of the targets and the measurement patterns, both in the spatial domain and in their discrete distribution in the time domain. Therefore, when the pulse energy is reduced, our system can still reconstruct high-quality images by increasing the pulse repetition frequency, as long as the total detection time and light power remain unchanged. This makes the SP$^3$I technique practical. Because our technique requires only a single pulse illumination for each frame, it reduces the number of detected photons, the number of laser pulses, and the acquisition time by two orders of magnitude compared with state-of-the-art single-pixel imaging methods [16–19]. Furthermore, our system outperforms scanning and array-detection imaging systems, because it needs no scanner, spatially resolved detector, or coaxial transceiver architecture, which greatly simplifies the setup and lowers the requirement on the detectors' spatial resolution (for example, single-photon array detectors for some frequency bands, such as infrared light, are immature and costly). Another advantage of our system is that it can image with fewer photon detections. That is because the SP$^3$I technique employs the single-pixel correlated imaging configuration, so the spatial and temporal information of many pixels can be extracted from one photon detection, whereas scanning and array-detection imaging cannot reconstruct an image unless they detect the same number of photons as the number of pixels in that image [7,11]. To demonstrate the feasibility of the SP$^3$I technique, we performed numerical simulations and conducted imaging experiments on a $100$ km cooperative target and a $3$ km practical target. In this scenario, a $192\times 108$ pixel depth image has been reconstructed at the limit of $0.01$ detected photons per pixel. In addition, a contrast-to-noise ratio (CNR) model has been established to characterize the imaging quality of the SP$^3$I imaging system. Results show that the theoretical CNRs accord with the simulation and experimental results.

2. Methods

2.1 Imaging setup

The schematic diagram of our proposed imaging system, based on the SP$^3$I technique with single-photon detection, is plotted in Fig. 1. In the employed scheme, a laser beam irradiates the target and the reflected photons are collected by a lens. The light then illuminates the programmable patterns of a digital micromirror device (DMD). The DMD is an array of $1080\times 1920$ independently addressable micromirror units for spatial modulation. For each pattern interval, the DMD controller loads every memory cell with the value '1' or '0', which determines whether the light illuminating the corresponding pixels can be detected. The signal light passes through a ground glass, a filter, and a converging lens, and is finally detected by a single-photon avalanche diode (SPAD) detector. Since the receiving area of the SPAD is limited, the ground-glass homogenizer is used to homogenize the spatial intensity distribution of the light field, so that accurate intensity fluctuations of the entire light field can be obtained under different DMD modulations. The spatial modulation is performed with a series of binary random speckle patterns. The digital raw signal includes the photon click events recorded by the SPAD and the synchronization signals from the DMD and the laser pulses. These signals are fed into the TCSPC module (Siminics) and finally processed by a computer.
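As a minimal sketch (not part of the original setup description), the following Python snippet illustrates how the binary random sampling patterns described above might be generated, assuming $192\times 108$ pattern pixels each mapped onto a $10\times 10$ block of micromirrors and $M_2=200$ 'on' pixels per pattern; the function name and random seed are our own choices.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def make_dmd_pattern(ny=108, nx=192, n_on=200, block=10):
    """Binary random sampling pattern: n_on pixels are set to '1' (detect),
    and each pattern pixel is expanded to a block x block group of micromirrors."""
    pattern = np.zeros(ny * nx, dtype=np.uint8)
    on_idx = rng.choice(ny * nx, size=n_on, replace=False)
    pattern[on_idx] = 1
    pattern = pattern.reshape(ny, nx)
    # Expand to the full 1080 x 1920 micromirror array of the DMD.
    mirrors = np.kron(pattern, np.ones((block, block), dtype=np.uint8))
    return pattern, mirrors

pattern, mirrors = make_dmd_pattern()
print(pattern.shape, mirrors.shape)   # (108, 192) (1080, 1920)
```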

Fig. 1. The schematic diagram of our imaging system based on the SP$^3$I technique. DMD: digital micromirror device, SPAD: single-photon avalanche diode, TCSPC: time-correlated single-photon-counting module.

2.2 SP$^3$I technique

According to the configuration presented in Fig. 1, we describe the mechanism of the SP$^3$I technique in this subsection. Let the reflection intensity distribution of the object be ${O(x,y)}$; the total photon number ${N}_{i}$ associated with the $i^{th}$ measurement pattern $R_i(x,y)$ is given by

$${{{N}}_{{i}}} = \iint{N_0{R_i}(x,y)O(x,y)dxdy },$$
where $(x,y)$ denotes the two-dimensional transverse location, with a strict one-to-one correspondence between the object plane and the modulation plane (i.e., the working region of the DMD on which the returned light field is modulated), and $N_0$ is the average photon number reflected from a unit pixel of the target illuminated by a laser pulse. Assuming that the imaging system detects $m$ reflective points after the DMD's modulation with the $i^{th}$ measurement pattern and denoting the reflectivity of each point by $\alpha _j\,(j=1,\ldots,m)$, the total photon number collected by the single-pixel detector is expressed as
$$N_i(m) = N_0\sum_{j = 1}^m {{\alpha _j}} = N_0m\bar \alpha,$$
where $\bar \alpha$ is the mean reflectivity value after the interaction between the target and DMD pixels. Generally, $\bar \alpha$ can be regarded as a constant because the number of sampling pixels of each measurement pattern is large enough.
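As an illustrative numerical check of Eqs. (1)-(2) (not from the original paper), the sketch below evaluates the returned photon number for a random sparse reflectivity map and one binary pattern; $N_0$, the object, and the pattern are all assumed values.

```python
import numpy as np

rng = np.random.default_rng(1)

N0 = 0.05                                                      # mean photons per illuminated unit pixel (assumed)
O = rng.random((108, 192)) * (rng.random((108, 192)) < 0.01)   # sparse reflectivity map (assumed)
R_i = (rng.random((108, 192)) < 0.01).astype(float)            # one binary measurement pattern

overlap = R_i * O                                     # reflective points selected by this pattern
N_i = N0 * overlap.sum()                              # Eq. (1): discretized integral
m = np.count_nonzero(overlap)                         # number of reflective points
alpha_bar = overlap[overlap > 0].mean() if m else 0.0 # mean reflectivity of those points
print(N_i, N0 * m * alpha_bar)                        # Eq. (2) gives the same value
```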

The probability distribution function of the photon number $N_i$ with $m$ reflective points follows the hypergeometric distribution

$$P_h(m) = \frac{{C_{{M_1}}^mC_{M - {M_1}}^{{M_2} - m}}}{{C_M^{{M_2}}}},$$
where $C_{n}^k$ refers to the number of combinations of $k$ out of $n$ without repetition, which equals $\frac {{n!}}{{k!(n - k)!}}$, and $m\le \min \{M_1,M_2\}$. Here $M$ is the total number of pixels of each measurement pattern (the target reconstructed by these patterns also has $M$ pixels), $M_1$ is the number of high-reflection pixels on the target plane, and $M_2$ is the number of high-reflectivity sampling pixels on the DMD, which are marked by ‘1’ in the measurement pattern. Thus, the sparsities of the target and the measurement patterns are $M_1/M$ and $M_2/M$, respectively. Under low-flux measurement conditions, single-photon detection obeys Poisson statistics. Let $\eta$ be the photon-detection efficiency. The background photons from the environment can commonly be neglected when the pulse duration is much shorter than the pulse repetition period. Then, the probability that no reflected photons are detected from a single-pulse shot is given by
$$P_0=e^{-\eta N_i}=e^{-m N_r/ \bar m}$$
where $N_r=\eta N_0 \bar \alpha \bar m$ and $\bar m$ is the average value of $m$. When $N_r$ is small, one may regard it as the photon counting rate which depends closely on the light source intensity.
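For concreteness (a sketch of ours, not part of the paper), Eqs. (3)-(4) can be evaluated with the default parameters used later in Fig. 3; scipy's hypergeometric distribution implements exactly the pmf of Eq. (3).

```python
import numpy as np
from scipy.stats import hypergeom

# Eq. (3): probability of m reflective points under one binary pattern,
# with the paper's default values M = 20736, M1 = M2 = 200.
M, M1, M2 = 20736, 200, 200
m = np.arange(0, min(M1, M2) + 1)
P_h = hypergeom.pmf(m, M, M1, M2)

# Eq. (4): Poisson probability that a single pulse yields no detected photon,
# for an assumed photon counting rate N_r = 0.1.
N_r = 0.1
m_bar = M1 * M2 / M                           # mean of the hypergeometric distribution
P_0 = np.exp(-m * N_r / m_bar)

print(round(m_bar, 2), round(P_h.sum(), 6))   # ~1.93 overlapping points on average; pmf sums to 1
```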

Frames whose pulses produce no photon detection do not contribute to the imaging; therefore, we identify only a pulse with a single-photon event as an effective detection. Then, under the various intensity modulations performed by the DMD, the probability distribution of photon detections is given by

$$P_r\left( {m} \right) = \frac{P_h( 1 - P_0)}{\tilde{P}_1},$$
where $\tilde {P}_1$ denotes the average of $1-P_0$. One can readily see that the obtained probability $P_r(m)$ retains the hypergeometric-distribution feature. In Fig. 2, we compare the curves of the normalized $P_r$ and $P_h$. Clearly, the probability $P_r(m)$, which is the conditional distribution of $m$ after a photon detection, is almost the same as the original intensity distribution $P_h(m)$. This means that the detection results containing only single-photon events follow the same statistical distribution as the case in which the reflected intensity of all patterns is detected. Note that detecting all received pulses is necessary for conventional single-pixel imaging, whereas detection results with output ‘0’ are ignored in the proposed SP$^3$I technique. In addition, owing to the inherent 1-bit property of the single-photon detector in single-pixel imaging, the image can be reconstructed by simply superimposing each measurement pattern weighted by 0 or 1. This is clearly different from conventional single-pixel imaging and ghost imaging configurations.
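A small Monte Carlo sketch (ours, with assumed parameters) mirrors the comparison of Fig. 2: simulate $m$ for many patterns, keep only those that register a click with probability $1-P_0(m)$, and compare the normalized $m$-histograms of all patterns and of the clicked subset.

```python
import numpy as np
from scipy.stats import hypergeom

rng = np.random.default_rng(2)

M, M1, M2, N_r, K = 20736, 200, 200, 0.1, 200_000
m_bar = M1 * M2 / M

m_all = hypergeom.rvs(M, M1, M2, size=K, random_state=rng)   # m for every pattern
p_click = 1.0 - np.exp(-m_all * N_r / m_bar)                 # 1 - P_0 for each pattern
clicked = rng.random(K) < p_click                            # effective detections only

for label, data in [("all patterns", m_all), ("clicked only", m_all[clicked])]:
    hist = np.bincount(data, minlength=8)[:8] / data.size
    print(label, np.round(hist, 3))   # normalized histograms of m (cf. Fig. 2)
```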

Fig. 2. Comparison of the probability distributions $P_r(m)$ and $P_h(m)$.

Two approaches, i.e., the compressive-sensing algorithm and the correlation-computation algorithm [14–17], are usually employed to reconstruct the image from a fixed single-pixel detector. Using the results of the effective detections, we reconstruct the image by superposing the effective patterns $R_j(x,y)$ with the correlation algorithm,

$$O(x,y) = \frac{1}{K_e}\sum_{j = 1}^{K_e} ({R_{j}}(x,y) - \overline {{R_{j}}(x,y)} ),$$
where $K_e$ is the number of effective patterns chosen by photon events, and $\overline {R_j(x,y)}$ is the ensemble average for $K$ patterns. Because our photon counting rate is very low, the data processing efficiency is remarkably improved by discarding patterns that did not register any photon.
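A minimal sketch of the reconstruction step of Eq. (6), assuming the patterns and the per-pulse click flags are already available as arrays (the function name and interface are ours):

```python
import numpy as np

def reconstruct(patterns, clicked):
    """Eq. (6): correlation reconstruction from the effective patterns only.

    patterns : (K, ny, nx) binary measurement patterns
    clicked  : (K,) boolean, True where the pulse produced a photon click
    """
    R_mean = patterns.mean(axis=0)          # ensemble average over all K patterns
    R_eff = patterns[clicked]               # keep only the K_e effective patterns
    return (R_eff - R_mean).mean(axis=0)    # superpose with weight 1, subtract the mean
```

Because only the clicked patterns enter the sum, the patterns that registered no photon can be discarded immediately after acquisition, which is the data-saving step described above.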

The signal intensity $I_s(m)$ and the background intensity $I_b(m)$ of the reconstructed image are given by

$${I_s(m)} = m N_r/M_1$$
and
$${I_b(m)} = N_r({M_2 - m})/({M - M_1}).$$
respectively. The CNR is employed to characterize the imaging quality [26]. It is defined as the ratio of the average signal value ${\mu }$ to the standard deviation of the signal ${\sigma }$. Suppose we measure $K$ independent and identically distributed frames; according to the Central Limit Theorem and the Law of Large Numbers, the expected value of the sample mean equals that of a single measurement, and the variance of the sample mean is $1/K$ of the single-measurement variance. Therefore, the CNR is given by
$$CNR = 20\log_{10}{\frac{{\mu}_{s}-{\mu}_{b}}{\sqrt {\left. ({\sigma}^2_{s}+{\sigma}^2_{b}) \middle /2 {K} \right.}}} ,$$
where ${\mu _s} = \sum\limits _m {I_s(m)P_r(m)}$, ${\sigma _s}^2=\sum\limits _m {\left [I_s(m)-\mu _s\right ]^2P_r(m)}$, ${\mu _b} = \sum\limits _m {I_b(m)P_r(m)}$, and ${\sigma _b}^2=\sum\limits _m {\left [I_b(m)-\mu _b\right ]^2P_r(m)}$. To illustrate the behavior of the CNR in the SP$^3$I technique, its dependence on various variables is plotted in Fig. 3. In Fig. 3(a), the CNR first rises with increasing sparsity of the target and measurement patterns, then holds a stable value, and finally declines rapidly. This behavior can be understood as a tradeoff between the amount of information carried by one detection and the difficulty of recovering the image from that detection. Figure 3(b) shows that when the sparsity is relatively small the CNR increases as the photon counting rate goes up, whereas when the sparsity is relatively large the CNR first increases and then declines with a rising photon counting rate. That is because, when many points are detected by a bucket detector in a ghost-imaging system, a high photon counting rate makes it difficult to distinguish which patterns are effective. Figures 3(c) and 3(d) demonstrate clearly that the CNR rises as the number of measurements increases and, under the other default parameter values we set, the CNR goes up with increasing photon counting rate and decreases as the sparsity increases.
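The theoretical CNR of Eq. (9) can be evaluated directly from the distributions above. The sketch below (ours) does this for the paper's default parameters, assuming that dividing by $\tilde{P}_1$ amounts to normalizing $P_h(1-P_0)$ to unit sum:

```python
import numpy as np
from scipy.stats import hypergeom

def cnr_db(K=2e4, N_r=0.1, M1=200, M2=200, M=20736):
    """Theoretical CNR of Eq. (9), built from Eqs. (3)-(5) and Eqs. (7)-(8)."""
    m = np.arange(0, min(M1, M2) + 1)
    P_h = hypergeom.pmf(m, M, M1, M2)                # Eq. (3)
    m_bar = M1 * M2 / M
    P_0 = np.exp(-m * N_r / m_bar)                   # Eq. (4)
    P_r = P_h * (1.0 - P_0)
    P_r /= P_r.sum()                                 # Eq. (5), normalized by P~_1

    I_s = m * N_r / M1                               # Eq. (7)
    I_b = N_r * (M2 - m) / (M - M1)                  # Eq. (8)
    mu_s, mu_b = (I_s * P_r).sum(), (I_b * P_r).sum()
    var_s = ((I_s - mu_s) ** 2 * P_r).sum()
    var_b = ((I_b - mu_b) ** 2 * P_r).sum()
    return 20 * np.log10((mu_s - mu_b) / np.sqrt((var_s + var_b) / (2 * K)))

print(round(cnr_db(), 2))   # CNR in dB for the default parameters of Fig. 3
```

Note that, because $K$ appears only under the square root, the CNR in dB grows as $10\log_{10}K$, consistent with the declining growth rate seen in Figs. 3(c) and 3(d).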

Fig. 3. Impacts of different variables on CNR. (a) CNR vs. sparsity of target and measurement patterns $M_1/M$ and $M_2/M$; (b) CNR vs. sparsity of measurement patterns $M_2/M$ and the photon counting rate $N_r$; (c) CNR vs. the total number of measurements $K$ and the sparsity of measurement patterns $M_2/M$; (d) CNR vs. $K$ and photon counting rate $N_r$. The default values of $K$, $N_r$, $M_1$, $M_2$ and $M$ are $2\times 10^4$, $0.1$, $200$, $200$ and $20736$, respectively.

Let $N_t$ be the total number of measured photons for $K$ measurements at a given CNR value, and $M$ is the number of pixels of the retrieved image. The photon-efficiency (i.e., photons per pixel) can be defined generally as follows,

$$\eta_p=\frac{N_t}{M} \,\,\,\,\,\, s.t.\,\,\,\, CNR,$$
where $s.t.$ denotes ‘subject to’ the given CNR value.
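As a back-of-envelope illustration (ours, using numbers quoted in this paper), the photon-efficiency of Eq. (10) for a $192\times 108$ pixel image at $0.01$ detected photons per pixel corresponds to roughly two hundred detected photons in total:

```python
M = 192 * 108          # pixels of the retrieved image
eta_p = 0.01           # photon-efficiency reported for the field experiments
N_t = eta_p * M        # total detected photons implied by Eq. (10)
print(M, round(N_t))   # 20736 pixels, about 207 detected photons
```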

3. Results

3.1 Verification in simulation conditions

We compared our SP$^3$I technique with FPI in simulation; as shown in Fig. 4, SP$^3$I is able to capture and reconstruct a small target in a large field more quickly and clearly than FPI. To demonstrate the effectiveness of the SP$^3$I technique, we performed more detailed simulations, adopting a $192 \times 108$ image of $4$ letters, i.e., ‘S’, ‘J’, ‘T’, ‘U’, with a longitudinal distance of $10$ cm between adjacent letters. The dependence of the imaging quality on the photon counting rate $N_r$, which is controlled by the light pulse power, is demonstrated in Fig. 5. First we consider the relationship between the imaging quality and the photon counting rate $N_r$. Figure 5(a-f) shows that the reconstructed image improves with increasing photon counting rate $N_r$, provided that the target and the measurement patterns are properly sparse. This feature is in accordance with Fig. 3(b) when $M_2$ is relatively small. Equivalently, the CNR marked under the images grows monotonically while its growth rate declines. This is because, as more patterns are selected with increasing photon counting rate, more effective patterns are overlapped for the image reconstruction at the beginning; in that regime the signal increments exceed the noise. However, as more and more measurement patterns are overlaid, the signal increment becomes small, so that the figures can only be distinguished through differences in the TOF, and the growth rate of the CNR declines. When $M_2$ is larger, the noise increases and deteriorates the image, as demonstrated in Fig. 3(b). Next, we consider the dependence of the imaging quality on the total number of measurements. In this case, the quality of the images improves as the number of measurements rises, as shown in Fig. 5(g-l).
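The end-to-end Monte Carlo below (ours) sketches this simulation procedure for a single toy rectangular reflector rather than the ‘SJTU’ letters, with one pulse per pattern and the detected-photon scaling chosen so that the counting rate equals the assumed $N_r$:

```python
import numpy as np

rng = np.random.default_rng(3)

ny, nx = 108, 192
M = ny * nx
target = np.zeros((ny, nx))
target[40:70, 80:115] = 1.0                 # hypothetical binary reflector
M1 = int(target.sum())

K, M2, N_r = 30_000, 200, 0.1               # measurements, pattern sparsity, counting rate
m_bar = M1 * M2 / M
photons_per_point = N_r / m_bar             # Eq. (4) scaling: eta * N0 * alpha_bar

sum_all = np.zeros((ny, nx))                # running sums instead of storing all K patterns
sum_eff = np.zeros((ny, nx))
n_eff = 0
for _ in range(K):
    p = np.zeros((ny, nx))
    p.flat[rng.choice(M, size=M2, replace=False)] = 1
    sum_all += p
    mean_photons = photons_per_point * (p * target).sum()
    if rng.poisson(mean_photons) > 0:       # single-photon click on this pulse
        sum_eff += p
        n_eff += 1

image = sum_eff / n_eff - sum_all / K       # Eq. (6)
print(n_eff, "effective patterns,",
      round(n_eff / M, 3), "detected photons per pixel (one photon used per click)")
```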

Fig. 4. Comparison of FPI and SP$^3$I from different number of measurements. FPI is performed by scanning each pixel in turn from top to bottom, from left to right. The gray areas are waiting to be scanned. The number of scanning measurements is equal to the number of pixels recovered.

Fig. 5. Reconstructed images under different simulation conditions. (a-f) Results with the photon-counting rate from $0.01$ to $0.3$; (g-l) results under different numbers of measurements from $2$k to $30$k; (m-r) results with the number of measurements and photon-counting rate changing simultaneously at a photon-efficiency of $0.48$; (s-x) results of targets modulated by patterns of different sparsity.

Taking the parameters of a relatively clear image, Fig. 5(j), as the reference, we changed the light source power and the total number of measurements at the same time while keeping the number of superimposed patterns unchanged. As shown in Fig. 5(m-r), the reconstructed images have similar CNRs. This means that the pulse energy can be reduced by increasing the laser pulse repetition frequency within a given detection time, which extends the practical applications of the SP$^3$I technique. Figure 5(s-x) shows the results for targets modulated by patterns of different sparsity at $0.01$ detected photons per pixel. They exhibit the same trend as the theoretical model [Figs. 3(b) and 3(c)]: the CNR decreases with increasing $M_2$. From Figs. 3 and 5, one can see that our proposed technique can obtain an optimal imaging scheme under almost any photon-limited imaging condition by choosing appropriate parameters.

With representative parameters, the reconstructed images are shown in Fig. 6. These results are obtained under the conditions of a $0.1$ photon counting rate and $30$k measurement patterns. The original image of the object is plotted in Fig. 6(a). By setting the time gates according to the time-of-flight (TOF) distribution of the photons reflected from the 4 different longitudinal distances and overlaying the corresponding effective patterns, the reconstructed image shown in Fig. 6(b) is obtained. In addition, we plot the intensity map and the depth map in Figs. 6(c) and 6(d). Our simulation demonstrates that a completely clear and intact object image with depth resolution can be obtained under the condition of $0.1$ photons per pixel at $CNR=8.64$ dB.
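The time gating described above might be organized as in the following sketch (ours): effective patterns are grouped by the TOF of their photon click and one transverse slice is reconstructed per TOF bin; the helper name, interface, and bin width are assumptions.

```python
import numpy as np

def depth_slices(eff_patterns, tof_ps, R_mean, bin_ps=200):
    """Reconstruct one transverse slice per TOF bin from the effective patterns.

    eff_patterns : (K_e, ny, nx) binary patterns that registered a photon click
    tof_ps       : (K_e,) arrival time of each click relative to the laser sync, in ps
    R_mean       : (ny, nx) ensemble average of all K measurement patterns
    bin_ps       : TOF bin width in ps (assumed)
    """
    bins = np.round(np.asarray(tof_ps) / bin_ps).astype(int)
    slices = {}
    for b in np.unique(bins):
        sel = bins == b
        slices[b * bin_ps] = (eff_patterns[sel] - R_mean).mean(axis=0)   # Eq. (6) per bin
    return slices   # key: TOF in ps; range follows from c * t / 2
```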

Fig. 6. Simulation results for targets at different longitudinal distances. (a) The original target image; (b) Result by time slice at $CNR=8.64$dB; (c) Intensity map; (d) Longitudinal distance map.

3.2 Verification in far-field illumination conditions

To demonstrate the feasibility of the SP$^3$I technique, we also conducted a proof-of-principle remote field experiment and a practical-target experiment. The geographical environment, located in Qinghai province of China, and the actual equipment of the field experiment are shown in Fig. 7. $532$ nm laser pulses with $100$ ps duration and $10$ kHz repetition frequency propagate in free space and irradiate the target located one hundred kilometers away. The target consists of ten reflecting arrays arranged in pairs spaced $6$ meters apart on the slope of a mountain, with a distance of $12$ meters between each pair in a column along the light propagation direction. Each array is composed of $9$ corner cubes of $64$ mm diameter, arranged in a $3\times 3$ square matrix on a $1\times 1\,m^2$ iron rack. The reflected light from the target is collected by a telescope. At each measurement, the DMD projects a pattern with $192\times 108$ pixels for spatial light-field modulation, and each pixel consists of $10\times 10$ micromirror units. A TCSPC module (SIMINICS MT6420) combined with a SPAD (SIMINICS MSPD64: Multichannel SPD System) was used to record photon events. The timing circuits of this module allow a high measurement rate of up to $12.5$ million counts per second (Mcps) and provide a time resolution of $64$ ps. The expected depth resolution is $3$ cm for the $100$ ps pulse duration, and the spatial resolution is $2.3$ m. The same imaging system is also used for a 3-D letter "pi" $3$ km away from the transceivers. The target object "pi" is made of a white foam plastic board stuck on a black paperboard, and its left leg is thinner than the rest of the letter by $9$ cm in the longitudinal direction. The atmospheric visibility [27], the distance at which the object under observation can no longer be seen by the naked eye, was about $60$ km when we performed the experiments. The experimental results for the $100$ km cooperative target and the $3$ km practical target, obtained under the conditions of a $0.1$ photon counting rate and $30$k measurement patterns, are shown in Fig. 8 and Fig. 9. Figures 8(a) and 8(b) show the 2D and 3D reconstructed images of the $100$ km target, respectively. The target appears on five transverse slices with different longitudinal distances, and each slice has two faintly distinguishable points. Figure 9(b) shows the two-dimensional reconstructed image of the $3$ km target. For clarity, we filter out background noise by zeroing small numerical terms, crop and enlarge the local area of the target, and then plot the intensity map and depth map as shown in Figs. 9(c) and 9(d). The experimental results demonstrate that our system is able to reconstruct 3D images at $11$ dB CNR from the $100$ km cooperative target as well as $10.4$ dB CNR from the $3$ km practical target with only around $0.1$ detected photons per pixel.
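As a quick consistency check (ours, using only numbers quoted in Fig. 9), the $600$ ps separation between the two TOF peaks maps through $\Delta R = c\,\Delta t/2$ to the $9$ cm longitudinal offset of the target's left leg:

```python
c = 3.0e8                                # speed of light, m/s
dt = (2087600 - 2087000) * 1e-12         # TOF difference between the two peaks, s
print(c * dt / 2)                        # 0.09 m, i.e. the 9 cm depth step
```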

Fig. 7. 100 km field experiment. (a) Geographic map of the Qinghai Lake. (b) The transceiver devices: a pulsed laser (Onefive GmbH Katana-10 XP, 532 nm); a DMD (ViALUX V4395 DLP) with mirror pitch 10.8 $\mu m$; a telescope (Celestron EdgeHD 1100 279 mm (11") Optical Tube Assembly (OTA)); a single-photon single-pixel detector with 64 optical fiber probes (SIMINICS MSPD64: Multichannel SPD System); a TCSPC module (SIMINICS MT6420). (c) One of the ten reflector arrays, each composed of nine corner cubes.

Fig. 8. Experimental results for $100$km targets including $5$ different longitudinal distances. (a) 2D reconstruction of $192\times 108$ pixels image with $11$dB CNR by $0.1$ photons detection per pixel. (b) 3D Slice map. The distances are $99.993$, $100.005$, $100.017$, $100.029$, $100.041$ km, respectively, as calculated by the flight time of reflected photons. (c) Intensity map. (d) Longitudinal Distance map.

Fig. 9. Experimental results for the practical $3$km target including two different longitudinal distances with a difference of $9$cm. (a) Original target in the field scene; (b) 2D reconstruction of $192\times 108$ pixels image with $10.4$dB CNR by $0.1$ photons detection per pixel; (c) Intensity map; (d) Longitudinal distance map. The longitudinal distances are $3013.05$ and $3013.14$ meters, as calculated by the two peaks of the flight time of reflected photons at $2087000$ ps and $2087600$ ps.

The photon counting rate and the number of measurement patterns are two key factors in the SP$^3$I imaging system. To determine the imaging efficiency of our proposed technique in a practical scenario, we plotted the experimental and theoretical CNR curves versus the total number of measurements under four different photon counting rates in Fig. 10. When the photon counting rate is as low as $0.01$, we can still reconstruct an image of $5$ dB CNR with only $0.01$ detected photons per pixel. The theoretical curves from Eq. (9) are in good accordance with the experimental ones. Under the chosen sparsities of the target and the measurement patterns, the CNR of the reconstructed image becomes higher as the photon counting rate improves. It also continues to grow, with a declining growth rate, as the number of measurements increases. These results are consistent with the simulation, and both verify the feasibility and advantages of the proposed SP$^3$I technique.

Fig. 10. Comparison of the CNR values of the experimental results (scattered points) and the theoretical CNR curves plotted from Eq. (9).

In these field experiments, we irradiate the entire imaging area and then modulate the returned light field with sparse sampling patterns, which wastes a large amount of source energy; moreover, the speed of current modulators is no more than tens of kilohertz. This limitation can be overcome by generating a rapidly modulated sparse light field: if the spatial light changes fast enough, the imaging time will be greatly shortened. In addition, upgrading and improving the system components and structure can improve the detection efficiency of our system, so that the application of SP$^3$I is no longer limited to cooperative targets. We are seeking specific experimental solutions to improve the reception efficiency of the system, and will apply our technique to more scenarios with natural objects in the near future.

4. Conclusion

In conclusion, we proposed a novel SP$^3$I technique in this work. Compared with conventional single-pixel imaging techniques, our method can extract more spatial and temporal information from a single photon detection, thereby improving the photon efficiency and imaging quality. During signal acquisition, only one pulse is required per frame; therefore, the laser power and acquisition-time consumption are reduced by two orders of magnitude [16–19,22,25]. The simulations and field experiments demonstrated the feasibility and practicality of the proposed SP$^3$I technique, which can restore a $5$ dB CNR image from only $0.01$ detected photons per pixel. Our approach could improve the efficiency of various existing single-photon-detection single-pixel imaging scenarios, such as passive imaging of sparse fluorescence targets, to reconstruct a super-resolution image efficiently with a single-pixel detector and a fast spatial light modulator. Additionally, a CNR model is established to analyze the noise components that affect the image quality. We anticipate that the proposed technique can be widely applied to enhance the performance of computational imaging methods that rely on sequential correlation measurements, from biological microscopy to remote sensing.

The modulation and reception efficiency of the optical field in the SP$^3$I technique can be improved through experimental design optimization and device upgrades, which will enable our technique to be used under a variety of photon-limited conditions. Moreover, the imaging performance can be further improved by incorporating variations of the reconstruction algorithm, such as the standard numerical solvers of Refs. [11,28,29] or the sparsity of natural images in the spatial-frequency domain [22]. These issues will be investigated in our future work.

Funding

National Natural Science Foundation of China (61471239, 61631014); National High-tech Research and Development Program (2013AA122901).

Acknowledgments

G.H. Zeng defined the scientific goals and conceived the project. J. H. Shi designed the experiment, J. H. Shi, X.L. Liu, L. Sun and Y.H. Li conducted the experiments, X.L. Liu and J.P. Fan performed the data analysis. G.H. Zeng, J. H. Shi, X.L. Liu and J.P. Fan wrote the manuscript.

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

1. W. M. Ruyten, “CCD Arrays, Cameras, and Displays, by Gerald C. Holst,” Optics & Photonics News 8 (1997).

2. Y. Chen, J. D. Müller, P. T. So, and E. Gratton, “The photon counting histogram in fluorescence fluctuation spectroscopy,” Biophys. J. 77(1), 553–567 (1999). [CrossRef]  

3. A. McCarthy, R. J. Collins, N. J. Krichel, V. Fernández, A. M. Wallace, and G. S. Buller, “Long-range time-of-flight scanning sensor based on high-speed time-correlated single-photon counting,” Appl. Opt. 48(32), 6241–6251 (2009). [CrossRef]  

4. J.-E. Kallhammer, “Imaging: The road ahead for car night-vision,” Nat. Photonics sample, 12–13 (2006). [CrossRef]  

5. D. M. McClatchy, E. J. Rizzo, W. A. Wells, P. P. Cheney, J. C. Hwang, K. D. Paulsen, B. W. Pogue, and S. C. Kanick, “Wide-field quantitative imaging of tissue microstructure using sub-diffuse spatial frequency domain imaging,” Optica 3(6), 613–621 (2016). [CrossRef]  

6. A. McCarthy, N. J. Krichel, N. R. Gemmell, X. Ren, M. G. Tanner, S. N. Dorenbos, V. Zwiller, R. H. Hadfield, and G. S. Buller, “Kilometer-range, high resolution depth imaging via 1560 nm wavelength single-photon detection,” Opt. Express 21(7), 8904–8915 (2013). [CrossRef]  

7. A. Kirmani, D. Venkatraman, D. Shin, A. Colaço, F. N. Wong, J. H. Shapiro, and V. K. Goyal, “First-photon imaging,” Science 343(6166), 58–61 (2014). [CrossRef]  

8. D. Shin, F. Xu, D. Venkatraman, R. Lussana, F. Villa, F. Zappa, V. K. Goyal, F. N. Wong, and J. H. Shapiro, “Photon-efficient imaging with a single-photon camera,” Nat. Commun. 7(1), 12046 (2016). [CrossRef]  

9. A. Halimi, Y. Altmann, A. McCarthy, X. Ren, R. Tobin, G. S. Buller, and S. McLaughlin, “Restoration of intensity and depth images constructed using sparse single-photon data,” in 2016 24th European Signal Processing Conference (EUSIPCO), (2016), pp. 86–90.

10. E. D. Kolaczyk, “Bayesian multiscale models for poisson processes,” J. Am. Stat. Assoc. 94(447), 920–933 (1999). [CrossRef]  

11. A. M. Pawlikowska, A. Halimi, R. A. Lamb, and G. S. Buller, “Single-photon three-dimensional imaging at up to 10 kilometers range,” Opt. Express 25(10), 11919–11931 (2017). [CrossRef]  

12. J. P. Oliveira, J. M. Bioucas-Dias, and M. A. Figueiredo, “Adaptive total variation image deblurring: a majorization–minimization approach,” Signal Processing 89(9), 1683–1693 (2009). [CrossRef]  

13. T. Pittman, Y. Shih, D. Strekalov, and A. Sergienko, “Optical imaging by means of two-photon quantum entanglement,” Phys. Rev. A 52(5), R3429–R3432 (1995). [CrossRef]  

14. J. H. Shapiro, “Computational ghost imaging,” Phys. Rev. A 78(6), 061802 (2008). [CrossRef]  

15. O. Katz, Y. Bromberg, and Y. Silberberg, “Compressive ghost imaging,” Appl. Phys. Lett. 95(13), 131110 (2009). [CrossRef]  

16. M. F. Duarte, M. A. Davenport, D. Takbar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE signal processing magazine 25(2), 83–91 (2008). [CrossRef]  

17. B. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, and M. J. Padgett, “3d computational imaging with single-pixel detectors,” Science 340(6134), 844–847 (2013). [CrossRef]  

18. M.-J. Sun, M. P. Edgar, G. M. Gibson, B. Sun, N. Radwell, R. Lamb, and M. J. Padgett, “Single-pixel three-dimensional imaging with time-based depth resolution,” Nat. Commun. 7(1), 12010 (2016). [CrossRef]  

19. W. Gong, C. Zhao, H. Yu, M. Chen, W. Xu, and S. Han, “Three-dimensional ghost imaging lidar via sparsity constraint,” Sci. Rep. 6(1), 26133 (2016). [CrossRef]  

20. W. L. Chan, K. Charan, D. Takhar, K. F. Kelly, R. G. Baraniuk, and D. M. Mittleman, “A single-pixel terahertz imaging system based on compressed sensing,” Appl. Phys. Lett. 93(12), 121105 (2008). [CrossRef]  

21. J. Ke and E. Y. Lam, “Fast compressive measurements acquisition using optimized binary sensing matrices for low-light-level imaging,” Opt. Express 24(9), 9869 (2016). [CrossRef]  

22. P. A. Morris, R. S. Aspden, J. E. Bell, R. W. Boyd, and M. J. Padgett, “Imaging with a small number of photons,” Nat. Commun. 6(1), 5913 (2015). [CrossRef]  

23. R. S. Aspden, N. R. Gemmell, P. A. Morris, D. S. Tasca, L. Mertens, M. G. Tanner, R. A. Kirkwood, A. Ruggeri, A. Tosi, R. W. Boyd, G. S. Buller, R. H. Hadfield, and M. J. Padgett, “Photon-sparse microscopy: visible light imaging using infrared illumination,” Optica 2(12), 1049–1052 (2015). [CrossRef]  

24. Y. Yang, J. Shi, F. Cao, J. Peng, and G. Zeng, “Computational imaging based on time-correlated single-photon-counting technique at low light level,” Appl. Opt. 54(31), 9277–9283 (2015). [CrossRef]  

25. X. Liu, J. Shi, X. Wu, and G. Zeng, “Fast first-photon ghost imaging,” Sci. Rep. 8(1), 5012 (2018). [CrossRef]  

26. Z. Yang, Y. Sun, S. Qu, Y. Yu, R. Yan, A.-X. Zhang, and L.-A. Wu, “Noise reduction in computational ghost imaging by interpolated monitoring,” Appl. Opt. 57(21), 6097–6101 (2018). [CrossRef]  

27. W. Dan, L. Quan, F. Junhong, J. Xiaowei, G. Rui, and L. Meiqi, “Change characteristics of low visibility along highways in Hebei province during 2016–2017,” J. Arid Meteorol. 37, 639–647 (2019).

28. Z. T. Harmany, R. F. Marcia, and R. M. Willett, “This is SPIRAL-TAP: Sparse Poisson intensity reconstruction algorithms – theory and practice,” IEEE Trans. on Image Process. 21(3), 1084–1096 (2012). [CrossRef]  

29. M.-J. Sun, M. P. Edgar, D. B. Phillips, G. M. Gibson, and M. J. Padgett, “Improving the signal-to-noise ratio of single-pixel imaging using digital microscanning,” Opt. Express 24(10), 10476–10485 (2016). [CrossRef]  
