
Focal-plane three-dimensional imaging method based on temporal ghost imaging: a proof of concept simulation

Open Access

Abstract

A new focal-plane three-dimensional (3D) imaging method based on temporal ghost imaging is proposed and demonstrated. By exploiting the advantages of temporal ghost imaging, this method enables the use of slow integrating cameras and facilitates 3D surface imaging within the framework of sequential flood-illumination and focal-plane detection. The depth information is obtained by temporal correlation between the received and reference signals over multiple shots, and the reflectivity information is obtained by single-shot flash imaging. The feasibility and performance of this focal-plane 3D imaging method have been verified through theoretical analysis and numerical experiments.

© 2020 Optical Society of America

1. INTRODUCTION

In order to accurately perceive the physical world around us, various 3D surface imaging technologies have been proposed, developed, and commercialized in the past several decades [1–3]. Each 3D imaging technique has its own set of advantages and disadvantages with respect to key performance indexes such as accuracy, resolution, speed, cost, and reliability. From the perspectives of system stability, hardware maturity, and cost, several focal-plane 3D imaging technologies using standard framing cameras have attracted significant research attention, including structured-light 3D surface imaging [1], monocular and binocular vision [4,5], and intensity-encoded 3D flash imaging [6,7]. The main principle of structured-light 3D surface imaging is to extract the 3D surface shape from the distortion of a projected structured-light pattern. For close-range imaging, structured-light 3D imaging can achieve high accuracy, but its performance degrades significantly in long-range scenarios. Monocular and binocular vision both employ simple systems, but their accuracy is insufficient for applications that require high-resolution depth information. Intensity-encoded 3D flash imaging converts the time-of-flight of pulsed light into intensity information using a polarizer and a Pockels cell, achieving high-resolution 3D imaging at high speed. However, this technique is unlikely to be used for long-range applications, particularly those that require low size, weight, and power [8].

Ghost imaging is an interesting non-local imaging technique. It originated in research on entangled photons, but it has since been demonstrated with classical thermal light [9–16]. An important feature of ghost imaging is that it extracts high-resolution information through correlation measurements between received and reference signals, even though the received signals carry no resolution information and the reference signals never interact with the object of interest [17–21]. In the time domain, this detection scheme manifests as a slow integrating detector without time-resolving capability; such a detector can probe temporal objects, whose temporal structure is then reconstructed from the high-resolution reference signal [22–24]. In recent years, temporal ghost imaging has received extensive attention, and its characteristics and applications have been widely studied [25–31]. This slow integrating detection model suits framing cameras, which are the most commercially mature of the large-array sensors. Based on this idea, we propose a new 3D imaging method via temporal ghost imaging (TGI) within the framework of sequential flood-illumination and focal-plane detection. The imaging process is realized in two steps: 2D imaging by a single shot and focal-plane depth imaging by multiple shots. The principle of focal-plane depth imaging is to convert pixel-to-pixel differences in laser flight time into differences in effective energy integrating time (EEIT). The EEIT value is obtained by TGI between the reference and received signals. The imaging scheme and reconstruction principle are presented in this paper, and the feasibility and performance are verified and investigated through numerical experiments.

2. MODEL AND METHOD

Figure 1 presents the implementation scheme for focal-plane 3D imaging based on TGI. An amplitude modulated laser generator, which transmits laser segments with random fluctuation in the time domain, is utilized as the illuminating source. The laser is first divided into two beams by a beam-splitter: the reference beam and the object beam. The reference beam is directly detected by a fast detector, which records the intensity fluctuation of each laser segment as $ {I_r}(i \cdot \tau ) $, $ i = 1,2 \cdots P $, where $ \tau $ is the sampling period of the fast detector and $ P $ is the number of samples per segment. A schematic diagram of $ K $ laser segments is shown in the upper left part of Fig. 1. The object beam is shaped and expanded by a beam expander to make its transverse intensity uniform. Laser segments are projected onto the object in sequence and then reflected back. A receiving telescope images the object onto the sensitive plane of the camera, which is located on the focal plane of the telescope. Since the surface of a 3D object is non-planar along the optical propagation axis, light reflected from different points on the object's surface experiences different flight times. For example, light reflected from the front parts of the object reaches the sensitive plane earlier than light from the back parts. The delay of the camera shutter relative to the laser emission, together with the shutter width, is set so that only part of each reflected light segment reaches the camera before the shutter closes. The inset drawing at the bottom right of Fig. 1 shows the timing structure of the camera shutter and the reflected segments at different pixels. In this detection mode, the EEIT of each pixel is determined by the relative time delay of the reflected segment, not by the shutter width. This means that the relative depth of the object at each pixel is proportional to the difference in EEIT between pixels. The camera sequentially detects and records all the received light fields as $ {I_t}(x,y) $, where $ (x,y) $ denotes the coordinate of each camera pixel. The amplitude modulated laser generator, fast detector, and camera work synchronously during the sampling process. Because the received signals contain only part of each reference segment, the correlation between the received signals $ {I_t} $ and the accumulated intensities of the reference signals $ {I_{\rm sum}}(S) $ varies as the accumulated time of reference signals (ATRS) $ S $ changes. The accumulated intensity of the reference signals $ {I_{\rm sum}} $ can be written as a function of $ S $ as follows:

$${I_{{\rm sum}}}(S) = \sum\limits_{i = 1}^S {I_r}(i \cdot \tau ),\quad S = 1,2 \cdots P.$$
For pixel $ (x,y) $, the correlation function $ C(x,y,S) $ between received signals and reference signals with different ATRS is defined as
$$C(x,y,S) = \frac{{\left\langle {\Delta {I_t}(x,y)\Delta {I_{{\rm sum}}}(S)} \right\rangle }}{{\sqrt {\left\langle {{{\left[ {\Delta {I_t}(x,y)} \right]}^2}} \right\rangle \left\langle {{{\left[ {\Delta {I_{{\rm sum}}}(S)} \right]}^2}} \right\rangle } }},$$
where $ \Delta I = I - \langle I \rangle $ and $ \langle \cdot \rangle $ denotes the ensemble average. Because the random intensity fluctuations at different times are nearly uncorrelated, the correlation function $ C(x,y,S) $ first grows as the variable $ S $ increases. However, $ C(x,y,S) $ drops once $ S $ exceeds the EEIT of the received signals $ {I_t}(x,y) $: since the temporal correlation of the reference signals between any two different time moments is not perfectly zero, the additional accumulated energy degrades the correlation between received and reference signals. Therefore, $ C(x,y,S) $ reaches its maximum when $ S $ equals the EEIT of the received signals $ {I_t}(x,y) $. In this regard, Eq. (6) below provides a more precise theoretical analysis. The EEIT of pixel $ (x,y) $ is obtained as
$$T(x,y) = \arg \mathop {\max }\limits_{S:S \gt 0} C(x,y,S).$$
Performing the correlation computation pixel by pixel reconstructs the EEIT of each pixel, $ T(x,y) $. Since the relative difference of $ T(x,y) $ is proportional to the relative depth at each pixel, the depth image of the object can be acquired as $ R(x,y) = c \cdot T(x,y)/2 $, where $ c $ is the speed of light. Combining the relative depth information from TGI with the 2D image from a single shot yields the 3D image of the object.
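To make the reconstruction procedure concrete, the following minimal Python sketch implements Eqs. (1)–(3) and the depth conversion $ R = cT/2 $. The array shapes, variable names, and use of NumPy are our own illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def reconstruct_depth(I_t, I_r, tau, c=3.0e8):
    """Sketch of TGI depth reconstruction, Eqs. (1)-(3).

    I_t : (K, H, W) stack of camera frames, one per laser segment (assumed layout).
    I_r : (K, P) reference intensities sampled at period tau.
    Returns the relative depth map R(x, y) = c * T(x, y) / 2.
    """
    K, P = I_r.shape
    # Eq. (1): accumulated reference intensity I_sum(S) for S = 1..P.
    I_sum = np.cumsum(I_r, axis=1)                      # (K, P)
    # Fluctuations about the ensemble mean over the K measurements.
    dI_t = I_t - I_t.mean(axis=0)                       # (K, H, W)
    dI_sum = I_sum - I_sum.mean(axis=0)                 # (K, P)
    # Eq. (2): normalized correlation C(x, y, S) for every pixel and every S.
    cov = np.einsum('khw,kp->hwp', dI_t, dI_sum) / K
    norm = (np.sqrt((dI_t ** 2).mean(axis=0))[..., None]
            * np.sqrt((dI_sum ** 2).mean(axis=0)))
    C = cov / norm
    # Eq. (3): the EEIT of each pixel is the S that maximizes C (+1: S is 1-based).
    T = (C.argmax(axis=-1) + 1) * tau
    return 0.5 * c * T
```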

Fig. 1. Implementation scheme of the focal-plane 3D imaging method based on TGI. Inset drawing in the bottom right is the timing structure of the camera shutter and the reflected segments at different pixels.


3. NUMERICAL EXPERIMENTS

In order to investigate the feasibility and performance of the proposed 3D imaging method, numerical experiments were performed according to the scheme shown in Fig. 1. The amplitude modulated laser source, with a wavelength of 1064 nm, sequentially transmits laser segments whose intensities follow a uniform random distribution. The imaging object is a simulated city scene, as shown in Fig. 2(a), consisting of several kinds of city buildings with different heights and a road dividing the scene into two parts. A tower with a height of 160 mm, corresponding to an EEIT of 700 sampling periods of the reference signal, is located in the bottom right corner and is the tallest building. The transverse size of the city scene is $ 150\;{\rm mm} \times 150\;{\rm mm} $. The optical aperture and focal length of the receiving telescope are 25.4 mm and 20 mm, respectively. The city scene is 10 m away from the telescope, far enough that the entire 3D scene lies within the depth of field of the receiving telescope. Figure 2(b) displays the single-shot 2D image of the city scene observed from the top down, while Fig. 2(c) shows the multiple-shot depth image of the 3D object obtained with the proposed method. In the focal-plane depth imaging process, additive white Gaussian noise, which may be caused by dark current or background radiation, is taken into account. In the framework of ghost imaging, the detection signal-to-noise ratio (DSNR) is defined as the ratio of the mean value of the detection signals to the noise standard deviation. Because the EEIT differs from pixel to pixel, the DSNR also differs between pixels, even though the noise variance is nearly identical across pixels. In this paper, we define the DSNR of the pixel with maximum EEIT as the DSNR of the whole imaging object. Figure 2(c) was obtained with $ K = 6000 $ measurements and $ {\rm DSNR} = 15\;{\rm dB} $. Figure 2(d) is the 3D composite image of Figs. 2(b) and 2(c). The depth information of the city scene is displayed in different colors following the HSV (hue, saturation, value) color model. The results in Fig. 2 demonstrate that the proposed focal-plane 3D imaging method based on temporal ghost imaging is feasible for acquiring the 3D information of an object.
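As a sketch of the simulated measurement of a single pixel, the snippet below generates uniform random laser segments and adds Gaussian noise at a target DSNR. The value $ P = 1000 $ and the convention that the dB figure is $ 20\log_{10} $ of the mean-signal-to-noise-STD ratio are our own assumptions; the paper states neither.

```python
import numpy as np

rng = np.random.default_rng(0)
K, P = 6000, 1000          # measurements; samples per segment (P is assumed)
M = 700                    # EEIT of this pixel, in sampling periods
DSNR_dB = 15

# Laser segments with uniform random intensity fluctuation (Sec. 3).
I_r = rng.uniform(0.0, 1.0, size=(K, P))

# Noise-free camera reading for a pixel whose EEIT spans M samples, Eq. (4).
I_t = I_r[:, :M].sum(axis=1)

# Additive white Gaussian noise scaled so that DSNR = mean(signal)/std(noise),
# with the dB value assumed to be 20*log10 of that amplitude ratio.
sigma = I_t.mean() / 10 ** (DSNR_dB / 20)
I_t_noisy = I_t + rng.normal(0.0, sigma, size=K)
```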


Fig. 2. Simulation result of focal-plane 3D imaging method based on TGI. (a) Imaging target with a city scene. (b) 2D image of the city scene when observing from top down. (c) Depth image of the city scene. (d) 3D composite image of (b) and (c).



Fig. 3. Effect of $ M $ and $ \rho $ on depth imaging quality. (a) Correlation function $ C(x,y,S) $ when $ M $ is equal to 100, 200, and 600, which correspond to the red, blue, and green curves, respectively. (b) RMSE of reconstructed depth image when $ M $ changes from 100 to 600. (c) Correlation function $ C(x,y,S) $ when $ \rho $ is equal to 0.001, 0.01, and 0.1, which correspond to the red, blue, and green curves, respectively. (d) RMSE of reconstructed depth image when $ \rho $ changes from 0.001 to 0.2.


Because the acquisition of 2D images using focal-plane flash imaging is very mature, the imaging quality of the proposed 3D imaging method depends mainly on the depth imaging process. In order to clarify the factors that may affect depth imaging, the properties of the correlation function $ C(x,y,S) $ are investigated. As predicted above, $ C(x,y,S) $ increases first and then decreases as the variable $ S $ changes from 1 to $ P $. A steeper slope of $ C(x,y,S) $ makes the determination of the EEIT more robust against noise interference. Assuming that the EEIT of pixel $ (x,y) $ corresponds to $ M $ sampling periods, we obtain

$${I_t}(x,y) = \sum\limits_{i = 1}^M {I_r}(i \cdot \tau ).$$
Assume that the average intensity of the illuminating laser is uniform in time, i.e., $ \langle {I_r}(i \cdot \tau )\rangle = \langle I\rangle $, and that the temporal correlation of the laser between any two different time moments is a constant $ \rho $, $ 0 \le |\rho | \lt 1 $, defined as
$$\rho = \frac{{\left\langle {\Delta {I_r}(i \cdot \tau )\Delta {I_r}(j \cdot \tau )} \right\rangle }}{{\sqrt {\left\langle {{{\left[ {\Delta {I_r}(i \cdot \tau )} \right]}^2}} \right\rangle \left\langle {{{\left[ {\Delta {I_r}(j \cdot \tau )} \right]}^2}} \right\rangle } }},\quad i \ne j.$$
Substituting Eqs. (1), (4), and (5) into Eq. (2), the correlation function can be expressed piecewise as
$$C(x,y,S) = \left\{ {\begin{array}{cc}{\sqrt {\frac{{S[1 + (M - 1)\rho ]}}{{M[1 + (S - 1)\rho ]}}} ,}&{S \le M;}\\[7pt]{\sqrt {\frac{{M[1 + (S - 1)\rho ]}}{{S[1 + (M - 1)\rho ]}}} ,}&{S \gt M.}\end{array}} \right.$$
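Equation (6) can be evaluated directly; the short sketch below (with illustrative parameter values, not taken from the paper) reproduces the rise-and-fall shape of the correlation curve and confirms that the peak sits at $ S = M $.

```python
import numpy as np

def C_theory(S, M, rho):
    """Piecewise correlation function of Eq. (6)."""
    S = np.asarray(S, dtype=float)
    rising = np.sqrt(S * (1 + (M - 1) * rho) / (M * (1 + (S - 1) * rho)))
    falling = np.sqrt(M * (1 + (S - 1) * rho) / (S * (1 + (M - 1) * rho)))
    return np.where(S <= M, rising, falling)

S = np.arange(1, 1001)
for rho in (0.001, 0.01, 0.1):
    C = C_theory(S, M=100, rho=rho)
    # The peak always sits at S = M; larger rho flattens the curve around it.
    print(f"rho={rho}: peak at S={S[C.argmax()]}, C_max={C.max():.3f}")
```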

Fig. 4. Effect of measurements and DSNR on depth imaging quality. (a)–(d) Trend of reconstruction quality as the DSNR varies from 0 dB to 20 dB with 6000 measurements. (e)–(h) Trend of reconstruction quality as the number of measurements varies from 2000 to 30,000 with a DSNR of 10 dB. The $ x $-coordinate of (b)–(d) and (f)–(h) represents the spatial distribution of the 1D object, while the $ y $-coordinate represents depth information.


Considering the derivative of Eq. (6) over the feasible region of the variable $ S $ and parameters $ \rho $ and $ M $, $ C(x,y,S) $ is monotonically increasing when $ S \le M $ and decreasing when $ S \gt M $. Furthermore, the slope of $ C(x,y,S) $ is related to both $ \rho $ and $ M $. To confirm this analysis, another numerical experiment was performed. Figure 3 displays the variation of $ C(x,y,S) $ with different $ M $ or $ \rho $ and its influence on the root-mean-square error (RMSE) of the reconstructed depth image, under the conditions $ K = 6000 $ measurements and $ {\rm DSNR} = 15\;{\rm dB} $. Figure 3(a) shows that the slope of $ C(x,y,S) $ becomes gentler as $ M $ increases; the red, blue, and green curves correspond to $ M = 100 $, 200, and 600, respectively. As a result, the reconstruction quality of the depth image gradually degrades, as revealed by the RMSE in Fig. 3(b). In addition, Fig. 3(c) shows that small changes in $ \rho $ dramatically affect the slope of the correlation function; the red, blue, and green curves correspond to $ \rho = 0.001 $, 0.01, and 0.1, respectively. When $ \rho $ equals 0.1, $ C(x,y,S) $ approaches a flat-topped function, whose peak location is susceptible to noise interference. Figure 3(d) likewise reveals that the RMSE of the depth image increases markedly as $ \rho $ grows from 0.001 to 0.2. This indicates that short EEITs and temporally incoherent laser sources help reconstruct the depth information accurately.
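The monotonicity claim can be made explicit by differentiating the expressions under the square roots in Eq. (6) with respect to $ S $:
$$\frac{d}{dS}\!\left[\frac{S}{1 + (S - 1)\rho}\right] = \frac{1 - \rho}{[1 + (S - 1)\rho]^2} \gt 0, \qquad \frac{d}{dS}\!\left[\frac{1 + (S - 1)\rho}{S}\right] = \frac{\rho - 1}{S^2} \lt 0,$$
for $ 0 \le \rho \lt 1 $. The first expression governs the branch $ S \le M $ and the second the branch $ S \gt M $, so $ C(x,y,S) $ rises up to $ S = M $, falls beyond it, and peaks exactly at $ S = M $.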

The number of measurements and the DSNR are two other important factors that influence the imaging quality. Numerical experiments with different numbers of measurements and different DSNRs are presented in Fig. 4. In order to exhibit the reconstructed results more clearly, a one-dimensional (1D) object is used in this experiment. The 1D imaging object has 160 pixels, and its maximum EEIT corresponds to 700 sampling periods. This experiment is implemented with $ \rho = 0.001 $. Figure 4(a) presents the trend of the reconstruction quality, again described by the RMSE, as the DSNR varies from 0 dB to 20 dB with $ K = 6000 $ measurements. The imaging quality clearly improves as the DSNR increases. When the DSNR reaches 20 dB, the proposed 3D imaging method reconstructs the depth information of the object almost without deviation. Figures 4(b)–4(d) display the reconstructed results for DSNRs of 6 dB, 10 dB, and 20 dB, respectively. On the other hand, Fig. 4(e) presents the reconstruction quality as the number of measurements varies from 2000 to 30,000 with a DSNR of 10 dB. Figures 4(f)–4(h) display the reconstructed results for 2000, 10,000, and 30,000 measurements, respectively. This implies that increasing the number of measurements can improve the imaging quality to some degree; however, a high-quality result may require a very large number of measurements when the DSNR is relatively low.

In the framework of focal-plane imaging with a frame camera as the detecting sensor, the exposure time jitter of the camera is an essential factor affecting reconstruction quality. Jitter of the exposure start time results in uncertainty in the EEIT across measurements. It can be predicted that the greater the amplitude of the exposure start time jitter, the lower the effective DSNR of the measurements. The results in Fig. 5 confirm this prediction through simulation experiments using the 1D object described earlier, with $ {\rm DSNR} = 10\;{\rm dB} $, $ \rho = 0.001 $, and $ K = 30{,}000 $. Figures 5(a)–5(c), which correspond to exposure start time jitter with standard deviations (STDs) of 5, 20, and 35 sampling periods, show that the accuracy of the reconstructed depth information decreases as the exposure jitter increases. The top right corner of each panel shows the histogram of the corresponding exposure start time jitter. Figure 5(d) quantitatively displays the change in the RMSE of the retrieved depth images as the STD of the exposure jitter varies from 0 to 35 sampling periods.
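One simple way to model this jitter in simulation (our own sketch; the paper does not detail its jitter model) is to let the effective integration length of each shot fluctuate about the nominal EEIT:

```python
import numpy as np

rng = np.random.default_rng(1)
K, P, M = 30000, 1000, 700   # measurements; samples per segment (assumed); EEIT
jitter_std = 20              # STD of exposure start time, in sampling periods

I_r = rng.uniform(0.0, 1.0, size=(K, P))

# A shifted shutter opening changes how many samples of the reflected
# segment are integrated, so the effective EEIT varies from shot to shot.
M_eff = np.clip(M + np.round(rng.normal(0.0, jitter_std, size=K)).astype(int),
                1, P)
I_t = np.array([I_r[k, :m].sum() for k, m in enumerate(M_eff)])
```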


Fig. 5. Influence of frame camera exposure jitter on depth imaging quality. (a)–(c) Reconstructed depth images of a 1D object when the standard deviation (STD) of camera exposure jitter is 5, 20, and 35 sampling periods. The top right corner of each figure is the corresponding histogram of camera exposure jitter. (d) Trend of reconstructed depth images’ RMSE as STD of camera exposure jitter varies from 0 to 35 sampling periods.


4. DISCUSSION AND CONCLUSION

A focal-plane 3D imaging method based on temporal ghost imaging has been proposed, and the main factors affecting the imaging quality have been investigated. Of all these factors, the DSNR is probably the most important in determining the imaging quality. Although the factors in Fig. 3 affect the slope of the correlation function, a high DSNR makes the curve of the correlation function $ C(x,y,S) $ smooth, which is beneficial for locating the maximum of $ C(x,y,S) $ and thereby generating a depth image with high accuracy. Camera exposure jitter, in turn, directly degrades the effective DSNR. In situations of relatively low DSNR, increasing the number of measurements is a good way to overcome the effects of noise, but this approach has limits: too many measurements are unsuitable for rapid imaging. Nevertheless, the proposed 3D imaging method has clear advantages in other respects. First, it can be performed with a simple focal-plane imaging setup, and the depth information can be acquired with a slow frame camera. Second, the method relaxes the energy stability requirements on the light source; for example, the random intensity fluctuations of a laser diode were used to realize temporal ghost imaging in Ref. [22].

In conclusion, this composite 3D imaging method uses a single shot to obtain the 2D image and multiple shots to obtain the depth image of the object. The core depth reconstruction procedure is implemented by illumination with an amplitude modulated laser source and detection with a standard framing camera located on the focal plane of a receiving telescope. This simple imaging framework not only increases the stability of the imaging system but also reduces hardware cost. It may provide a 3D surface imaging method for applications where only frame cameras are available.

Funding

National Natural Science Foundation of China (61571427); Youth Innovation Promotion Association of the Chinese Academy of Sciences (2013162); Civil Aerospace Program (D040301).

Acknowledgment

Helpful advice on the writing of this paper was provided by Enrong Li, Key Laboratory for Quantum Optics and Center for Cold Atom Physics of CAS, Shanghai Institute of Optics and Fine Mechanics.

Disclosures

The authors declare no conflicts of interest.

REFERENCES

1. J. Geng, “Structured-light 3D surface imaging: a tutorial,” Adv. Opt. Photon. 3, 128–160 (2011). [CrossRef]  

2. P. McManamon, “Review of ladar: a historic, yet emerging, sensor technology with rich phenomenology,” Opt. Eng. 51, 060901 (2012). [CrossRef]  

3. H. Nguyen, D. Nguyen, Z. Wang, H. Kieu, and M. Le, “Real-time, high-accuracy 3D imaging and shape measurement,” Appl. Opt. 54, A9–A17 (2015). [CrossRef]  

4. J. Michels, A. Saxena, and A. Y. Ng, “High speed obstacle avoidance using monocular vision and reinforcement learning,” in 22nd International Conference on Machine Learning (ACM, 2005), pp. 593–600.

5. S. D. Cochran and G. Medioni, “3-D surface description from binocular stereo,” IEEE Trans. Pattern Anal. Mach. Intell. 14, 981–994 (1992). [CrossRef]  

6. L. A. Tamburino and J. Taboada, “Laser imaging and ranging system using one camera,” U.S. patent 5,162,861 (November 10, 1992).

7. J. Taboada and L. A. Tamburino, “Laser imaging and ranging system using two cameras,” U.S. patent 5,157,451 (October 20, 1992).

8. K. W. Ayer, W. C. Martin, J. M. Jacobs, and R. H. Fetner, “Laser imaging and ranging system (LIMARS): a proof-of-concept experiment,” Proc. SPIE 1633, 54–63 (1992). [CrossRef]  

9. T. Pittman, Y. Shih, D. Strekalov, and A. Sergienko, “Optical imaging by means of two-photon quantum entanglement,” Phys. Rev. A 52, R3429–R3432 (1995). [CrossRef]  

10. D. Strekalov, A. Sergienko, D. Klyshko, and Y. Shih, “Observation of two-photon “ghost” interference and diffraction,” Phys. Rev. Lett. 74, 3600–3603 (1995). [CrossRef]  

11. A. F. Abouraddy, B. E. A. Saleh, A. V. Sergienko, and M. C. Teich, “Role of entanglement in two-photon imaging,” Phys. Rev. Lett. 87, 123602 (2001). [CrossRef]  

12. R. S. Bennink, S. J. Bentley, and R. W. Boyd, ““two-photon” coincidence imaging with a classical source,” Phys. Rev. Lett. 89, 113601 (2002). [CrossRef]  

13. A. Gatti, E. Brambilla, M. Bache, and L. A. Lugiato, “Correlated imaging, quantum and classical,” Phys. Rev. A 70, 013802 (2004). [CrossRef]  

14. J. Cheng and S. Han, “Incoherent coincidence imaging and its applicability in x-ray diffraction,” Phys. Rev. Lett. 92, 093903 (2004). [CrossRef]  

15. D. Zhang, Y.-H. Zhai, L.-A. Wu, and X.-H. Chen, “Correlated two-photon imaging with true thermal light,” Opt. Lett. 30, 2354–2356 (2005). [CrossRef]  

16. J. H. Shapiro, “Computational ghost imaging,” Phys. Rev. A 78, 061802 (2008). [CrossRef]  

17. J. H. Shapiro and R. W. Boyd, “The physics of ghost imaging,” Quantum Inf. Process. 11, 949–993 (2012). [CrossRef]  

18. D.-Z. Cao, J. Xiong, and K. Wang, “Geometrical optics in correlated imaging systems,” Phys. Rev. A 71, 013801 (2005). [CrossRef]  

19. F. Ferri, D. Magatti, A. Gatti, M. Bache, E. Brambilla, and L. A. Lugiato, “High-resolution ghost image and ghost diffraction experiments with thermal light,” Phys. Rev. Lett. 94, 183602 (2005). [CrossRef]  

20. S. Nan, Y. Bai, X. Shi, Q. Shen, L. Qu, H. Li, and X. Fu, “Experimental investigation of ghost imaging of reflective objects with different surface roughness,” Photon. Res. 5, 372–376 (2017). [CrossRef]  

21. Y.-K. Xu, W.-T. Liu, E.-F. Zhang, Q. Li, H.-Y. Dai, and P.-X. Chen, “Is ghost imaging intrinsically more powerful against scattering?” Opt. Express 23, 32993–33000 (2015). [CrossRef]  

22. P. Ryczkowski, M. Barbier, A. T. Friberg, J. M. Dudley, and G. Genty, “Ghost imaging in the time domain,” Nat. Photonics 10, 167–170 (2016). [CrossRef]  

23. F. Devaux, P.-A. Moreau, S. Denis, and E. Lantz, “Computational temporal ghost imaging,” Optica 3, 698–701 (2016). [CrossRef]  

24. Z. Chen, H. Li, Y. Li, J. Shi, and G. Zeng, “Temporal ghost imaging with a chaotic laser,” Opt. Eng. 52, 076103 (2013). [CrossRef]  

25. J. Liu, J. Wang, H. Chen, H. Zheng, Y. Liu, Y. Zhou, F.-L. Li, and Z. Xu, “High visibility temporal ghost imaging with classical light,” Opt. Commun. 410, 824–829 (2018). [CrossRef]  

26. X. Yao, W. Zhang, H. Li, L. You, Z. Wang, and Y. Huang, “Long-distance thermal temporal ghost imaging over optical fibers,” Opt. Lett. 43, 759–762 (2018). [CrossRef]  

27. Y.-K. Xu, S.-H. Sun, W.-T. Liu, G.-Z. Tang, J.-Y. Liu, and P.-X. Chen, “Detecting fast signals beyond bandwidth of detectors based on computational temporal ghost imaging,” Opt. Express 26, 99–107 (2018). [CrossRef]  

28. J. Tang, Y. Tang, K. He, L. Lu, D. Zhang, M. Cheng, L. Deng, D. Liu, and M. Zhang, “Computational temporal ghost imaging using intensity-only detection over a single optical fiber,” IEEE Photon. J. 10, 1–9 (2018). [CrossRef]  

29. H. Wu, P. Ryczkowski, A. T. Friberg, J. M. Dudley, and G. Genty, “Temporal ghost imaging using wavelength conversion and two-color detection,” Optica 6, 902–906 (2019). [CrossRef]  

30. W. Jiang, X. Li, S. Jiang, Y. Wang, Z. Zhang, G. He, and B. Sun, “Increase the frame rate of a camera via temporal ghost imaging,” Opt. Lasers Eng. 122, 164–169 (2019). [CrossRef]  

31. J. Wu, F.-X. Wang, W. Chen, S. Wang, D.-Y. He, Z.-Q. Yin, G.-C. Guo, and Z.-F. Han, “Temporal ghost imaging for quantum device evaluation,” Opt. Lett. 44, 2522–2525 (2019). [CrossRef]  
