Underwater computational ghost imaging

Open Access

Abstract

Since the quality of classical underwater optical imaging is constrained by the absorptive and scattering nature of the underwater environment, long-distance imaging is a challenging task. Ghost imaging is a second-order correlation imaging method based on coincidence measurement; its disturbance-free character and wide angle of view (AOV) can effectively improve imaging results. We have investigated computational ghost imaging under underwater conditions at different turbidities and from different viewing angles. The reconstruction obtained by computational ghost imaging remains satisfactory even in scenarios where the classical optical imaging method completely fails. Moreover, underwater computational ghost imaging supports a wide range of AOV: satisfactory imaging results are obtained regardless of the detector's location. These results provide a promising alternative for underwater optical imaging.

© 2017 Optical Society of America

1. Introduction

In underwater environments, the quality of classical optical imaging is inevitably constrained by absorption and scattering, and long-distance imaging becomes particularly difficult. In contrast, ghost imaging (GI) [1–3], an imaging method based on coincidence measurement, has the advantage of being disturbance-free [4–8], which can effectively extend the imaging distance. In a typical GI setup, one beam from a light source is divided into a signal beam and a reference beam. The signal beam illuminates an object and is collected by a bucket detector, while the reference beam is recorded by a charge-coupled device (CCD), which acts as a spatially resolving detector. Neither the bucket detector nor the CCD alone can reveal the spatial distribution of the object; the object image is obtained by the coincidence measurement of the two beams. Pittman et al. [1] first demonstrated GI in 1995 using entangled signal and reference beams produced by spontaneous parametric down-conversion. Later, it was shown experimentally that a classical-state source can also yield a ghost image [11, 12]. Pseudothermal light, i.e., laser light rendered spatially incoherent by passage through a rotating ground glass, can likewise be used to form ghost images [13–19], and high-order thermal ghost imaging has also been implemented [20, 21]. Recent studies show that GI is gradually moving from the laboratory to practical applications such as x-ray ghost imaging [24, 25], ghost imaging lidar [26], and others [27–34]. Conventional GI requires two beams, which is inconvenient for practical use. Remarkably, computational ghost imaging (CGI) solves this problem by keeping only the object beam and replacing the reference beam with a calculated field pattern [22, 23]. Although the setup is simplified, the key advantage of GI, namely being disturbance-free, remains.

Although the influence of turbulence or scattering media on GI has been discussed in previous work [5–10], the application of GI under underwater conditions and the angle of view (AOV) of GI are rarely reported. In this paper, we experimentally investigate CGI underwater over long equivalent distances and from different viewing angles, emulating a range of marine environments. To emulate the effect of long distance in the experiments, china clay is added to the water in controlled amounts to change its turbidity, which can be regarded as equivalent to increasing the propagation distance. The experimental results validate that CGI is not sensitive to changes of turbidity in the water, and the reconstruction obtained by CGI can be considerably better than that of the classical optical imaging method. Furthermore, underwater CGI has a good AOV; that is, it tolerates a wide range of imaging angles, and satisfactory imaging results are obtained regardless of the detector's location.

This paper is structured as follows. In Section 2, the theory of underwater CGI is introduced. The experimental evaluation under underwater conditions is given in Section 3. Finally, concluding remarks are given in Section 4.

2. Theory

In order to provide a basis for the following discussion, the CGI method is briefly reviewed, based on [22]. Figure 1 shows a schematic diagram of CGI. The field E_L(ρ_L, t) and its corresponding intensity I_1(ρ_L, t) at any distance L from the light source E(ρ_0, t) can be calculated as follows

E_L(\rho_L, t) = \int E(\rho_0, t)\, \frac{-i}{L\lambda}\, e^{ikL}\, e^{\frac{ik}{2L}|\rho_L - \rho_0|^2}\, d\rho_0,    (1)
I_1(\rho_L, t) = |E_L(\rho_L, t)|^2,    (2)
where ρ_0 and ρ_L are the spatial coordinates of the plane at the light source and at distance L, respectively, λ is the wavelength, and k = 2π/λ. The intensity transmitted or reflected by the object is collected by the bucket detector and is given by
I_2(t) = \int_{A_1} |E_L(\rho_L, t)|^2\, |T(\rho)|^2\, d\rho,    (3)
where T(ρ) is the aperture function of the object. The correlation function of CGI then follows as
C(\rho_L) = \langle I_1(\rho_L, t)\, I_2(t) \rangle = q^2 \eta^2 A_2 \left( \frac{2P}{\pi a_L^2} \right)^2 \int_{A_1} e^{-|\rho_L - \rho_0|^2 / \rho_l^2}\, |T(\rho)|^2\, d\rho,    (4)
where a_0 and a_L = 2L/(kρ_0) are the intensity radii at the light source and at distance L, respectively, ρ_l = 2L/(k a_0), η is the quantum efficiency of the bucket detector, A_1 is the area of the bucket detector, A_2 is the computed area of the reference beam pattern after propagating the distance L, and P is the fluctuation of the light source. Using this correlation function, the image of the object can be obtained. Taking the noise (scattering effect) into account, the intensity of each sample can be expressed in the following matrix form, which explains the anti-scattering feature of CGI:
\begin{bmatrix} I_2(t_1) \\ I_2(t_2) \\ \vdots \\ I_2(t_M) \end{bmatrix} = \begin{bmatrix} I_1(t_1,\rho_1) & I_1(t_1,\rho_2) & \cdots & I_1(t_1,\rho_Q) \\ I_1(t_2,\rho_1) & I_1(t_2,\rho_2) & \cdots & I_1(t_2,\rho_Q) \\ \vdots & \vdots & \ddots & \vdots \\ I_1(t_M,\rho_1) & I_1(t_M,\rho_2) & \cdots & I_1(t_M,\rho_Q) \end{bmatrix} \begin{bmatrix} O(\rho_1) \\ O(\rho_2) \\ \vdots \\ O(\rho_Q) \end{bmatrix} + \begin{bmatrix} n(t_1) \\ n(t_2) \\ \vdots \\ n(t_M) \end{bmatrix},    (5)
where {ρ_1, ρ_2, …, ρ_Q} ∈ ρ_L, the sequence {I_1(t_1, ρ_Q), I_1(t_2, ρ_Q), …, I_1(t_M, ρ_Q)} ⊂ I_1(ρ_L, t) is the reference CFP at the pixel under test ρ_Q, O(ρ_Q) is the object scattering coefficient vector, and n(t_M) is the noise.
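To make the correlation reconstruction of Eqs. (4)–(5) concrete, the following minimal Python sketch (not part of the original work; the 32×32 cross object, the 3000 pseudorandom binary patterns, and the noiseless bucket model are illustrative assumptions) simulates the CFPs at the object plane, forms the bucket measurements, and recovers the object from the covariance ⟨I_1 I_2⟩ − ⟨I_1⟩⟨I_2⟩.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative object: a reflecting cross on a 32x32 grid (assumption).
N = 32
obj = np.zeros((N, N))
obj[N//2 - 2:N//2 + 2, 4:-4] = 1.0
obj[4:-4, N//2 - 2:N//2 + 2] = 1.0

# Calculated field patterns (CFPs): pseudorandom binary DMD patterns,
# taken here as already propagated to the object plane (simplification).
M = 3000
cfp = rng.integers(0, 2, size=(M, N, N)).astype(float)

# Bucket measurements: total light reflected by the object for each pattern.
bucket = np.tensordot(cfp, obj, axes=([1, 2], [1, 2]))

# Second-order correlation reconstruction: <I1*I2> - <I1><I2>.
recon = np.tensordot(bucket, cfp, axes=(0, 0)) / M - bucket.mean() * cfp.mean(axis=0)

print("correlation with ground truth:",
      np.corrcoef(recon.ravel(), obj.ravel())[0, 1])

In practice the CFPs would be obtained by numerically propagating the DMD patterns over the distance L according to Eq. (1); they are used directly at the object plane here for brevity.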

Fig. 1 The computational ghost imaging setup. DMD: Digital Micromirror Device, CFP: Calculated Field Pattern, BD: Bucket Detector.

3. Experiments and results

We have experimentally investigated CGI under underwater conditions, changing the turbidity of the water to emulate long propagation distances and varying the viewing angle, thereby simulating a range of marine environments.

3.1. The disturbance-free ability of underwater CGI

The scheme for underwater CGI is shown in Fig. 2. The light source is a spatially incoherent beam generated by applying pseudorandom binary illumination patterns to a digital micromirror device (DMD) illuminated by an LED, mounted on a sliding rail. The beam illuminates the object, a reflecting cross placed in the water at different positions (A, B and C), and the photons diffracted from the object are collected by a bucket detector (BD), which is also mounted on a sliding rail. Finally, the object image is obtained by correlating the intensity measured by the BD with the CFP at the object plane.

Fig. 2 Experimental setup for underwater computational ghost imaging with object placed on different positions (A, B and C).

In the experiment, the object is placed in a square container with 1-meter sides filled with water to simulate the underwater environment. To characterize the performance of underwater CGI, we designed experiments to search for the most practical configuration for underwater imaging or detection. The object was placed at position A, position B and position C, respectively, as shown in Fig. 2, and the light source and the BD were slid to the corresponding positions for the reconstruction of the object. In detail, the object at position A is imaged when the light source illuminates the object through window II and the bucket detector collects photons diffracted from the object through window 2; the object at position B is imaged when the light source illuminates through window I and the bucket detector collects through window 2; and the object at position C is imaged when the light source illuminates through window I and the bucket detector collects through window 1.

First, using a simulation with a Gaussian blur as the disturbance factor to model scattering by the water, the disturbance-free ability of CGI at the different positions (A, B and C) is qualitatively analyzed. The Gaussian blur function can be defined as G(x, y) = (1/(2πσ²)) exp(−(x² + y²)/(2σ²)), where x is the distance from the origin along the horizontal axis, y is the distance from the origin along the vertical axis, and σ is the standard deviation of the Gaussian distribution. In two dimensions, this formula produces a surface whose contours are concentric circles with a Gaussian profile about the center point. To reflect the practical situation, the disturbance factor G(x, y) is introduced into Eq. (5): for the object at position A it acts on the first term of the right-hand side; for the object at position B it acts on both the first and the second term; and for the object at position C it acts on the second term only, as illustrated by the sketch below. The simulated imaging results for the object at position A, position B and position C, shown in Fig. 3, indicate that the reconstruction varies distinctly with the position of the object: the results for positions A and B are not as good as that for position C.
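The following Python sketch (ours, not the authors' simulation code; the blur width σ = 2 pixels, the 30% bucket noise level, and the object are illustrative assumptions) mimics the three cases by applying the Gaussian blur to the illumination that actually reaches the object (position A), applying the blur and adding noise on the bucket signal (position B), or representing the disturbance after the object simply as noise on the bucket signal (position C), while always reconstructing with the undisturbed calculated patterns.

import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)

N, M, sigma = 32, 3000, 2.0          # grid size, frames, blur width (assumptions)
obj = np.zeros((N, N))
obj[N//2 - 2:N//2 + 2, 4:-4] = 1.0   # illustrative reflecting cross
obj[4:-4, N//2 - 2:N//2 + 2] = 1.0

cfp = rng.integers(0, 2, size=(M, N, N)).astype(float)   # calculated patterns

def reconstruct(real_illumination, bucket_noise=0.0):
    """Correlate the *calculated* CFP with bucket values produced by the
    *actual* illumination, optionally adding noise on the bucket signal."""
    bucket = np.tensordot(real_illumination, obj, axes=([1, 2], [1, 2]))
    bucket = bucket + bucket_noise * rng.standard_normal(M) * bucket.std()
    return np.tensordot(bucket, cfp, axes=(0, 0)) / M - bucket.mean() * cfp.mean(axis=0)

blurred = gaussian_filter(cfp, sigma=(0, sigma, sigma))   # scattering before the object

recon_A = reconstruct(blurred)                    # position A: CFP term disturbed
recon_B = reconstruct(blurred, bucket_noise=0.3)  # position B: both terms disturbed
recon_C = reconstruct(cfp, bucket_noise=0.3)      # position C: only bucket signal disturbed

for name, r in [("A", recon_A), ("B", recon_B), ("C", recon_C)]:
    print(name, np.corrcoef(r.ravel(), obj.ravel())[0, 1])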

Fig. 3 Simulated imaging results of the setup with the object at position A, position B and position C.

Next, to verify the simulated results of Fig. 3, the corresponding experiments using the setup of Fig. 2 were carried out. By adding china clay to change the turbidity of the water from pure water (0 NTU) to 80 NTU (NTU is the unit of turbidity), CGI reconstruction results with 500, 1000, 2000 and 3000 sampling frames were obtained, as shown in Fig. 4. The imaging results with the object at position A turn out to be mediocre, since the water disturbs the CFP and leaves it in an unknown state. The imaging results with the object at position B are similar to those at position A. The imaging results with the object at position C show that CGI can still obtain the image of the object even when the turbidity is increased to a fairly high level (over 50 NTU), whereas the setups with the object at positions A and B fail to image well with the same 3000 sampling frames. The experimental results are in accordance with the simulation analysis. Equation (5) with the disturbance factor explains why better results are obtained with the object at position C than in the other configurations: the reference CFP I_1(t_M, ρ_Q) is almost unchanged when the object is at position C. From the imaging results at position C, we can see that I_1(t_M, ρ_Q) is not affected by scattering in the water when the object is close to the light source, and only the noise n(t_M) is influenced. Therefore, following this analysis, the disturbance-free ability of CGI is examined separately for the two cases (case 1: object at position A or B, whose CFP is influenced by turbidity; case 2: object at position C, whose CFP is not influenced by turbidity).

Fig. 4 Experimental imaging results of the setup with object on position A, position B and position C. NTU: the unit of turbidity, fs: frames.

Since the disturbance-free ability in case 1 is determined by how well the CFP is maintained, we focus on the dependence of the CFP on the properties of the water. Figure 5(a) shows how the CFPs on the object plane vary with increasing turbidity; the pictures are taken every 5 NTU with a camera of fixed aperture. The luminance of the CFPs decreases as the turbidity increases, and three regions can be distinguished. In the low-turbidity region, from 0 NTU to 40 NTU, the CFPs are almost unchanged. In the medium-turbidity region, from 40 NTU to 85 NTU, the CFPs change gradually with turbidity. In the high-turbidity region, above 85 NTU, the CFPs are changed drastically. Figures 5(a1)–5(a4), extracted from Fig. 5(a) at turbidities of 0, 40, 85, and 90 NTU, illustrate the impact of turbidity on the CFP. The circle in Fig. 5(a1) marks the model profile. The CFP circled in Fig. 5(a2) is still recognizable in the low-turbidity region, for example below 40 NTU. The CFP circled in Fig. 5(a3) is distorted in the medium-turbidity region between 40 NTU and 85 NTU. In the high-turbidity region above 85 NTU, the CFP circled in Fig. 5(a4) cannot be resolved at all. Nevertheless, in both the medium- and high-turbidity regions, the CGI method manages to obtain the object image with a sufficient number of sampling frames.

Fig. 5 (a) The actual CFPs with different turbidity captured with CCD. (a1), (a2), (a3) and (a4) are the contours of CFPs with 0NTU, 40NTU, 85NTU, and 90NTU, respectively. (b) The plots of the light power attenuation of CFPs and water sample attenuation coefficient with different turbidity. (c) The plot of correlation coefficient of CFPs with different turbidities.

Moreover, the light-power attenuation of the CFPs and the attenuation coefficient of the water for different turbidities are plotted in Fig. 5(b). The attenuation coefficient is defined as C = ln(I_i/I_o)/z, where I_i and I_o are the light intensities before and after the water, and z is the propagation distance. To explore the CFPs on the object plane further, the correlation coefficient of the CFPs for different turbidities is plotted in Fig. 5(c). The correlation coefficient is defined as Corr(x, y) = cov(x, y)/(σ_x σ_y), where cov is the covariance and σ_x and σ_y are the standard deviations of x and y. The water at 0 NTU is selected as the reference, i.e., the correlation coefficient at 0 NTU is the autocorrelation of the CFP through pure water, taken to be 1; the correlation coefficients of the remaining CFPs are cross-correlations between the 0 NTU CFP and the CFP at the corresponding turbidity, as shown in Fig. 5(c). A clear decline of the correlation coefficient from 0 NTU to 25 NTU is observed, and the curve flattens from 25 NTU to 50 NTU. When the turbidity is increased beyond 50 NTU, the correlation coefficient fluctuates. When the correlation coefficient of the CFPs is higher than 0.2, i.e., the turbidity of the water is below 50 NTU, the image can be obtained by CGI with the object at any of the positions (A, B and C). However, when the correlation coefficient of the CFPs is lower than 0.2, i.e., the turbidity is above 50 NTU, the imaging results with the object at position C outperform those with the object at positions A and B for the same number of sampling frames. This also shows that the imaging quality of CGI depends on the maintenance of the CFP on the object plane.
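For reference, here is a short Python sketch of the two figures of merit used above, the attenuation coefficient and the CFP correlation coefficient; the synthetic CFP arrays, the intensity values, and the 1 m path length are placeholders rather than measured data.

import numpy as np

def attenuation_coefficient(i_in, i_out, z):
    """Beer-Lambert attenuation coefficient C = ln(I_i / I_o) / z."""
    return np.log(i_in / i_out) / z

def cfp_correlation(reference, sample):
    """Pearson correlation Corr(x, y) = cov(x, y) / (sigma_x * sigma_y)
    between two CFP images, flattened to 1-D."""
    return np.corrcoef(reference.ravel(), sample.ravel())[0, 1]

# Illustrative use with synthetic CFPs (the real ones are camera images).
rng = np.random.default_rng(2)
cfp_clear = rng.random((64, 64))                           # stands in for the 0 NTU CFP
cfp_turbid = 0.5 * cfp_clear + 0.5 * rng.random((64, 64))  # partially decorrelated CFP

print(attenuation_coefficient(i_in=1.0, i_out=0.6, z=1.0))  # 1 m path length
print(cfp_correlation(cfp_clear, cfp_clear))    # reference vs. itself -> 1.0
print(cfp_correlation(cfp_clear, cfp_turbid))   # drops below 1 as turbidity rises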

In case 2, the greatest advantage is that the CFP is not affected by the disturbance, which acts only on the second term of Eq. (5), leading to better performance in the imaging process. The imaging results with the object at position C show that CGI can obtain the image under extreme conditions in which the setups with the object at positions A and B fail, as shown in Fig. 3 and Fig. 4. The remaining question for this case is how changes of turbidity influence the imaging quality. The imaging quality is usually characterized by the signal-to-noise ratio (SNR), which we use in the present work. Here, SNR is defined as SNR = S/δ, where S is the average signal intensity and δ is the variance of the background intensity.
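A minimal sketch of this SNR metric is given below; the noisy test image and the signal/background masks are illustrative assumptions, not data from the experiment.

import numpy as np

def snr(image, signal_mask, background_mask):
    """SNR = S / delta: mean intensity over the signal region divided by
    the variance of the intensity over the background region."""
    s = image[signal_mask].mean()
    delta = image[background_mask].var()
    return s / delta

# Illustrative use: a noisy reconstruction with a bright central square.
rng = np.random.default_rng(3)
img = rng.normal(0.1, 0.05, size=(64, 64))
img[24:40, 24:40] += 1.0

sig = np.zeros_like(img, dtype=bool)
sig[24:40, 24:40] = True
print(snr(img, sig, ~sig))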

Figure 6(a) compares the SNR of CGI and of CCD imaging for different turbidities with 3000 sampling frames. The quality of the traditional imaging method, which uses a CCD instead of the BD, declines as the turbidity increases, whereas CGI appears robust against the increase of turbidity. In the low-turbidity region from 0 NTU to 40 NTU, the SNR of the traditional imaging method (CCD) is higher than that of CGI. In the medium-turbidity region from 40 NTU to 85 NTU, the imaging quality of CGI remains robust while that of the traditional non-correlated imaging method declines severely. In the high-turbidity region above 85 NTU, the imaging quality of CGI still stays at a relatively higher level than that of the traditional imaging method (CCD). Since the reference CFP is scarcely affected by scattering in the water, CGI remains sensitive and can further enhance the result of underwater imaging.

Fig. 6 (a) The plot of SNR with different turbidities (CGI vs. CCD) with sampling of 3000 frames. (b) The plot of SNR with different turbidities (0 NTU/40 NTU/85 NTU/90 NTU) with respect to number of sampling.

In addition, owing to its anti-scattering nature, CGI can weaken the influence of the water with changing turbidity for the object at position C, even if the CFP changes with the turbidity. Under extreme conditions, for example with high-turbidity water, CGI can still obtain the object image as long as a sufficient number of sampling frames is acquired, as shown in Fig. 6(b); to obtain better imaging results, more sampling frames are needed. As the turbidity increases, the imaging results remain distinguishable. Not only in the low-turbidity region from 0 NTU to 40 NTU, but also in the medium-turbidity region from 40 NTU to 85 NTU and the high-turbidity region above 85 NTU, CGI turns out to be a good solution for extreme underwater environments, whereas under the same conditions the quality of the traditional imaging method degrades dramatically.

3.2. The AOV of underwater CGI

Figure 7 shows the experimental apparatus for studying the angle of view (AOV) of CGI, where the position of the light source is kept fixed and the BD is moved along a circular sliding rail to obtain imaging results from different viewing angles. First, the object plane is set perpendicular to the beam from the light source, which is denoted as 0 degrees. A pseudothermal light beam is generated by a light source composed of a DMD and an LED, as in the previous experimental setup. The image of the object, a reflecting cross, is obtained by correlating the intensity measured by the BD with the CFP at the object plane.

Fig. 7 (a) Experimental setups for underwater CGI from different view angles. (b) Principle of BD. (c) Principle of CCD. BD: Bucket Detector.

First, the reflection characteristics of the object in the different experimental configurations are studied. The spatial intensity distributions of the reflected light, measured with the object illuminated by the light source at 20 degrees to the left, 20 degrees to the right, and 0 degrees from its initial position, are shown in Figs. 8(a)–8(c) and illustrate the reflection characteristics of the object. These profiles follow a Gaussian distribution centered on the specular reflection axis; for example, when the light source is located 20 degrees to the left, as shown in Fig. 8(a), the spatial distribution of the reflected beam is centered at 20 degrees to the right. The curves in Figs. 8(b) and 8(c) are only partially presented because the light source and the BD are placed on the same circular sliding rail.

Fig. 8 (a) Angular light-power profile with the light source 20 degrees to the left of its initial position. (b) Angular light-power profile with the light source 20 degrees to the right of its initial position. (c) Angular light-power profile with the light source at its initial position, together with the SNR at 0 degrees for different angles with 30000 sampling frames. (d) SNR versus number of sampling frames for different angles with the light source at its initial position. (1)–(8): angles from 10 to 80 degrees, respectively.

Second, the AOV of CGI is studied in the following experiment, taking the 0-degree case as an example. The AOV range of CGI is characterized by the SNR of the image. As the viewing angle varies, the imaging results turn out to be robust: the SNR remains acceptable even when the detection angle is far outside the full width at half maximum (FWHM) of the reflected-light distribution, as shown in Fig. 8(c), which demonstrates that CGI has a wider AOV, as illustrated by the sketch below. The AOV of the traditional CCD imaging method is usually confined within the FWHM of this distribution. In addition, the array detection of a CCD is limited by the threshold of the photosensitive layer of each pixel, so its imaging sensitivity is effectively equal to the detector sensitivity, as shown in Fig. 7(c). The imaging sensitivity of CGI, however, is higher than the detector sensitivity because of the bucket detector's photon-collection capability, as shown in Fig. 7(b). We therefore believe that the AOV of CGI is wider than that of the traditional CCD imaging method.
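The sketch below makes the FWHM argument concrete: it evaluates an assumed Gaussian angular reflectance profile, computes its FWHM (2*sqrt(2 ln 2)*σ), and flags which detector angles would still be usable; the angular spread and the two detection thresholds are illustrative assumptions, not values from the experiment.

import numpy as np

# Assumed Gaussian angular reflectance profile centered on the specular axis.
sigma_deg = 15.0                                  # illustrative angular spread
fwhm = 2 * np.sqrt(2 * np.log(2)) * sigma_deg
print(f"FWHM of the reflectance profile: {fwhm:.1f} degrees")

angles = np.arange(10, 90, 10)                    # detector angles, as in Fig. 8
profile = np.exp(-angles**2 / (2 * sigma_deg**2)) # relative reflected power

# A CCD needs each pixel to exceed a threshold; a bucket detector only needs
# the total collected power to be measurable (illustrative thresholds).
ccd_ok = profile > 0.5    # roughly "inside the FWHM"
bd_ok = profile > 0.02    # far weaker signals still usable via correlation

for a, p, c, b in zip(angles, profile, ccd_ok, bd_ok):
    print(f"{a:2d} deg  power={p:.3f}  CCD={'yes' if c else 'no'}  CGI/BD={'yes' if b else 'no'}")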

Lastly, as shown in Fig. 8(d), the object image can be obtained over a wide range of viewing angles, and this feature further enhances the imaging capability of CGI under water. Increasing the number of sampling frames improves the quality of the CGI reconstruction. Because the object is diffusely reflective, the image can be reconstructed as long as the detector collects enough photons. Meanwhile, the image is reconstructed without distortion even at fairly large imaging angles, since the reconstruction geometry is determined by the position of the light source, whereas graphic distortion may occur in the traditional optical imaging process when the detector is placed at a large imaging angle. This feature of CGI is not only useful for practical applications but is also fundamentally important.

4. Conclusion

In conclusion, we have designed experiments to demonstrate the potential of CGI under underwater conditions, which might be useful for practical underwater imaging applications. CGI can weaken the influence of water whose turbidity is varied to emulate long-distance underwater imaging or detection, and it offers a wide AOV under extreme conditions; these advantages place it ahead of traditional underwater optical imaging methods. CGI can therefore be a better solution for underwater imaging applications.

Funding

National Basic Research Program of China (973 Program) (Grant No. 2015CB654602); 111 Project of China (Grant No. B14040); Fundamental Research Funds for the Central Universities.

References and links

1. T. B. Pittman, Y. H. Shih, D. V. Strekalov, and A. V. Sergienko, “Optical imaging by means of two-photon quantum entanglement,” Phys. Rev. A 52, R3429–R3432 (1995).

2. R. Meyers, K. S. Deacon, and Y. Shih, “Ghost-imaging experiment by measuring reflected photons,” Phys. Rev. A 77, 1912–1917 (2008).

3. Y. Shih, The Physics of Ghost Imaging (Springer, 2012).

4. D. Magatti, “Coherent imaging with pseudo-thermal incoherent light,” J. Mod. Opt. 53, 739–760 (2006).

5. P. Zhang, W. Gong, S. Xia, and S. Han, “Correlated imaging through atmospheric turbulence,” Phys. Rev. A 82, 5183–5191 (2010).

6. A. K. Jha, G. A. Tyler, and R. W. Boyd, “Effects of atmospheric turbulence on the entanglement of spatial two-qubit states,” Phys. Rev. A 81, 1532 (2010).

7. R. E. Meyers, K. S. Deacon, and Y. Shih, “Turbulence-free ghost imaging,” Appl. Phys. Lett. 98, 111115 (2011).

8. B. Dixon, “Quantum ghost imaging through turbulence,” Phys. Rev. A 83, 911–915 (2011).

9. W. Gong and S. Han, “Correlated imaging in scattering media,” Opt. Lett. 36, 394–396 (2011).

10. M. Bina, D. Magatti, M. Molteni, A. Gatti, L. A. Lugiato, and F. Ferri, “Backscattering differential ghost imaging in turbid media,” Phys. Rev. Lett. 110, 1–7 (2012).

11. R. S. Bennink, S. J. Bentley, and R. W. Boyd, “‘Two-photon’ coincidence imaging with a classical source,” Phys. Rev. Lett. 89, 113601 (2002).

12. R. S. Bennink, S. J. Bentley, R. W. Boyd, and J. C. Howell, “Quantum and classical coincidence imaging,” Phys. Rev. Lett. 92, 069901 (2004).

13. A. Gatti, E. Brambilla, M. Bache, and L. Lugiato, “Correlated imaging, quantum and classical,” Phys. Rev. A 70, 235–238 (2004).

14. A. Gatti, E. Brambilla, M. Bache, and L. A. Lugiato, “Ghost imaging with thermal light: comparing entanglement and classical correlation,” Phys. Rev. Lett. 93, 093602 (2004).

15. Y. Cai and S. Y. Zhu, “Ghost imaging with incoherent and partially coherent light radiation,” Phys. Rev. E 71, 122–133 (2005).

16. V. Alejandra, S. Giuliano, D. Milena, and S. Yanhua, “Two-photon imaging with thermal light,” Phys. Rev. Lett. 94, 063601 (2005).

17. F. Ferri, D. Magatti, A. Gatti, M. Bache, E. Brambilla, and L. A. Lugiato, “High-resolution ghost image and ghost diffraction experiments with thermal light,” Phys. Rev. Lett. 94, 183602 (2005).

18. L. Basano and P. Ottonello, “Experiment in lensless ghost imaging with thermal light,” Appl. Phys. Lett. 89, 091109 (2006).

19. L. Basano and P. Ottonello, “A conceptual experiment on single-beam coincidence detection with pseudothermal light,” Opt. Express 15, 12386–12394 (2007).

20. K. W. C. Chan, M. N. O’Sullivan, and R. W. Boyd, “High-order thermal ghost imaging,” Opt. Lett. 34, 3343–3345 (2009).

21. K. W. Chan, M. N. O’Sullivan, and R. W. Boyd, “Optimization of thermal ghost imaging: high-order correlations vs. background subtraction,” Opt. Express 18, 5562–5573 (2010).

22. J. H. Shapiro, “Computational ghost imaging,” Phys. Rev. A 78, 1–2 (2009).

23. Y. Bromberg, O. Katz, and Y. Silberberg, “Ghost imaging with a single detector,” Phys. Rev. A 79, 1744–1747 (2009).

24. H. Yu, R. H. Lu, S. S. Han, H. L. Xie, G. H. Du, T. Q. Xiao, and D. M. Zhu, “Fourier-transform ghost imaging with hard X-rays,” Phys. Rev. Lett. 117, 113901 (2016).

25. D. Pelliccia, A. Rack, M. Scheel, V. Cantelli, and D. M. Paganin, “Experimental x-ray ghost imaging,” Phys. Rev. Lett. 117, 113902 (2016).

26. C. Q. Zhao, W. L. Gong, M. L. Chen, E. R. Li, H. Wang, W. D. Xu, and S. S. Han, “Ghost imaging lidar via sparsity constraints,” Appl. Phys. Lett. 101, 141123 (2012).

27. N. Radwell, K. J. Mitchell, G. M. Gibson, M. P. Edgar, R. Bowman, and M. J. Padgett, “Single-pixel infrared and visible microscope,” Optica 1, 285–289 (2014).

28. W. K. Yu, X. R. Yao, X. F. Liu, R. M. Lan, L. A. Wu, G. J. Zhai, and Q. Zhao, “Compressive microscopic imaging with ‘positive-negative’ light modulation,” Opt. Commun. 371, 105–111 (2016).

29. W. L. Gong and S. S. Han, “High-resolution far-field ghost imaging via sparsity constraint,” Sci. Rep. 5, 9280 (2015).

30. O. Katz, Y. Bromberg, and Y. Silberberg, “Compressive ghost imaging,” Appl. Phys. Lett. 95, 131119 (2009).

31. B. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, and M. J. Padgett, “3D computational imaging with single-pixel detectors,” Science 340, 844–847 (2013).

32. M. J. Sun, M. P. Edgar, G. M. Gibson, B. Sun, N. Radwell, R. Lamb, and M. J. Padgett, “Single-pixel three-dimensional imaging with time-based depth resolution,” Nat. Commun. 7, 12010 (2016).

33. D. Shin, F. H. Xu, D. Venkatraman, R. Lussana, F. Villa, F. Zappa, V. K. Goyal, F. N. C. Wong, and J. H. Shapiro, “Photon-efficient imaging with a single-photon camera,” Nat. Commun. 7, 12046 (2016).

34. R. I. Khakimov, B. M. Henson, D. K. Shin, S. S. Hodgman, R. G. Dall, K. G. H. Baldwin, and A. G. Truscott, “Ghost imaging with atoms,” arXiv:1607.02240v1 (2016).
