
Improving the noise immunity of 3D computational ghost imaging


Abstract

Computational ghost imaging (CGI) can build a three-dimensional (3D) image from the reconstructed shading images. However, it is easily degraded by the noise accumulated during the 3D reconstruction. More importantly, the selection of the initial growing position also significantly affects the quality of the formed 3D image. In this paper, we apply the technique of sub-pixel displacement to achieve smooth shading images in noisy environments and propose a method for selecting the optimal initial growing position to preserve the stereo features of the object. We demonstrate that the surfaces of 3D images reconstructed using our proposed method are more accurate than those achieved by previously used methods. Our research should promote the development of 3D imaging using CGI in noisy environments.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Ghost imaging (GI), also known as correlation imaging, is an imaging method that utilizes intensity correlation to image an object. In 1995, Shih et al. confirmed that spatial correlation exists between a pair of entangled photons and realized ghost imaging for the first time [1]. Later, Boyd et al. implemented ghost imaging experiments with a pseudo-thermal light source [2]. In 2004, Han et al. proposed a lensless Fourier-transform scheme based on a ghost imaging system [3], and this scheme was later implemented in the hard X-ray band [4]. In 2008, J. H. Shapiro proposed the computational ghost imaging (CGI) structure, which simplifies traditional ghost imaging [5]; it was realized experimentally by Silberberg et al. [6]. In CGI, the incident light is encoded by a spatial light modulator or digital micromirror device, and the diffraction pattern at a certain distance is calculated as the virtual reference light. This removes the need for an array detector; CGI has therefore attracted much interest, and significant research has been devoted to improving GI technology [7–15].

Ghost imaging also has three-dimensional (3D) imaging capability [16–19]. In 2013, Sun et al. successfully reconstructed a 3D image with a CGI system based on a method similar to photometric stereo [20]. In this method, a series of shading images is obtained from a set of spatially separated bucket detectors: the scene illumination is identical for each, but because each detector occupies a different position, the images appear as if lit from different directions, and the corresponding gradient map is calculated from them. The 3D shape is reconstructed starting from the initial growing position (IGP) and working outwards to obtain the whole surface of the object.

However, the quality of the reconstructed 3D ghost image limits the application of this technique. In practice, bucket-detector noise and illumination noise reduce the quality of the reconstructed images. More importantly, the effect of noise on a 2D shading image accumulates during the 3D reconstruction process, which significantly distorts the 3D images. This condition easily arises in low-light-level environments. In general, two factors affect the quality of the 3D image formed using this method [20] in noisy environments: 1. the quality of the obtained shading images; 2. the selection of the IGP. As demonstrated in GI theory, the signal-to-noise ratio (SNR) of the formed 2D images decreases as the resolution increases [21]. Therefore, how to achieve a high-resolution shading image with low reconstruction noise is the key question for building a smooth 3D image. Additionally, since the IGP sits at the beginning of the reconstruction, its selection is also critical to prevent the 3D features of the object from being damaged by the accumulated error resulting from the noise.

In this paper, we propose a strategy to solve these two problems in noisy environments, where two kinds of noise, device noise and unstable illumination, are considered. The proposed strategy includes two parts. Firstly, we apply the technique of sub-pixel displacement (SPD) to obtain high-resolution shading images with markedly improved quality; the correspondingly formed 3D images are thus smoother than those achieved using the traditional method. Secondly, a method for selecting the optimal initial growing position (OIGP) is proposed, so as to preserve the stereo features of the object. With increasing noise, the surface of the 3D image reconstructed using our strategy remains significantly more accurate, while the one obtained by the traditional method becomes almost unrecognizable. Therefore, our approach has a positive effect on 3D imaging in noisy environments.

2. SPD methodology

2.1. 3D CGI system

The schematic of the 3D imaging with bucket detectors is shown in Fig. 1. Our experimental setup consists of a digital light projector (DLP4500, Texas Instruments) that illuminates an object with a set of patterns, and three spatially separated bucket detectors (DET100A, Thorlabs) that collect the reflected light. The signal of each bucket detector is collected by a data acquisition board (NI6210, National Instruments). The data acquisition rate is 1.5 × 10^6 Hz and the exposure time of the bucket detector for each measurement is ∼ 0.0267 s. All devices are controlled by LabVIEW, and the measurement rate in the given system is ∼ 3.4 Hz. The bucket detectors are located in a plane 325 mm away from the object. The whole space can be described by a Cartesian coordinate system with three axes (x, y, z), with the center of the projected light patterns on the object plane defined as the origin. The 2D image of the object can be reconstructed by Eq. (1):

$$G_k(x,y) = \left\langle s_i^k\, p_i(x,y) \right\rangle - \left\langle s_i^k \right\rangle \left\langle p_i(x,y) \right\rangle, \qquad (1)$$
where 〈···〉 denotes the ensemble average over N patterns, $p_i$ is the $i$-th virtual projected pattern, and $s_i^k$ is the $i$-th measurement by bucket detector $k$ ($k$ = up, left, right).
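For readers who want to experiment, a minimal NumPy sketch of Eq. (1) is given below; the function and variable names are ours, and the patterns and bucket signals are assumed to be pre-assembled arrays.

```python
import numpy as np

def cgi_reconstruct(patterns, signals):
    """Eq. (1): correlate bucket signals with the virtual patterns.

    patterns : (N, H, W) array, the i-th projected pattern p_i(x, y)
    signals  : (N,) array, the bucket measurements s_i^k of detector k
    returns  : (H, W) ghost image G_k(x, y)
    """
    s = signals.reshape(-1, 1, 1)
    # <s_i^k p_i(x,y)> - <s_i^k><p_i(x,y)>, ensemble average over N patterns
    return (s * patterns).mean(axis=0) - s.mean() * patterns.mean(axis=0)
```

For detector $k$, `signals` would hold the $N$ differential bucket values described in Section 2.4.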

Fig. 1 A schematic of the experiment setup. The light projector illuminates the object with computer-generated patterns. The three bucket detectors, marked as left, up and right respectively, are located at (−165, 79.5, −325), (0, −203.5, −325) and (165, 79.5, −325) in units of millimeters. The 2D shading images in the dashed box are obtained from the three bucket detectors respectively.

2.2. 3D reconstruction with CGI

“Shape from shading” is a classical method in the field of 3D imaging. It derives the 3D image from a single image with one source of illumination, under the assumption that the object exhibits uniform Lambertian (matte) reflectance. In the 3D CGI of previous work [20], by contrast, the 3D image is reconstructed from multiple images acquired by different detectors, a technique similar to photometric stereo [20]. In this method, the intensity of a pixel in the formed shading images can be expressed as:

$$I_k(x,y) = I_s\, \alpha\, (\vec{d}_k \cdot \vec{n}), \qquad (2)$$
where $I_s$ is the intensity of the light source, $\alpha$, a constant for a perfect Lambertian surface, is the object surface albedo, $\vec{d}_k$ is the unit vector from the object surface pointing to the $k$-th bucket detector, and $\vec{n} = (n_x, n_y, n_z)^T$ is the vector normal to the object surface. $\vec{n}$ can be expressed as:
$$\vec{n} = \frac{1}{I_s \alpha}\, (D^{-1} I), \qquad (3)$$
where $I$ is the array containing the corresponding image intensities, $I = [I_{\mathrm{up}}, I_{\mathrm{left}}, I_{\mathrm{right}}]^T$, and $D$ is the matrix containing the three unit detector vectors, $D = [\vec{d}_{\mathrm{up}}, \vec{d}_{\mathrm{left}}, \vec{d}_{\mathrm{right}}]^T$.

The surface gradient can then be obtained from $\vec{n}$:

$$p = \frac{\partial z}{\partial x} = -\frac{n_x}{n_z}, \qquad q = \frac{\partial z}{\partial y} = -\frac{n_y}{n_z}, \qquad (4)$$
where p and q are the gradient in the direction of x and y respectively.
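Eqs. (3) and (4) amount to a single 3 × 3 matrix inversion applied to every pixel; below is a sketch under the assumption of three co-registered shading images and known unit detector vectors. The names are ours, and the gradient signs follow the convention written in Eq. (4).

```python
import numpy as np

def surface_gradients(I_up, I_left, I_right, d_up, d_left, d_right):
    """Eqs. (3)-(4): photometric-stereo normals and surface gradients.

    I_* : (H, W) shading images; d_* : length-3 unit detector vectors.
    """
    D = np.stack([d_up, d_left, d_right])   # D = [d_up, d_left, d_right]^T
    I = np.stack([im.ravel() for im in (I_up, I_left, I_right)])  # 3 x (H*W)
    n = np.linalg.inv(D) @ I        # proportional to n; the I_s * alpha
    n /= np.linalg.norm(n, axis=0)  # factor drops out once n is normalised
    nx, ny, nz = (c.reshape(I_up.shape) for c in n)
    p = -nx / nz                    # p = dz/dx, sign per Eq. (4)
    q = -ny / nz                    # q = dz/dy
    return p, q, nz                 # nz is kept for the OIGP test (Section 3)
```

The returned `p` and `q` feed directly into the integration step described next.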

We then integrate the gradient map to obtain a height map of the object surface. The 3D image reconstruction starts from a pixel within the object's region of the shading images (the IGP) and works outwards to the four pixels adjacent to it. Since the gradient of each pixel is known, the height of a pixel can be estimated from the known height of its nearest neighbour and the gradient of the pixel itself. The heights of successive pixels are solved repeatedly until all pixels' heights are obtained.
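This outward-growing integration can be pictured as a breadth-first traversal from the IGP. The following is a simplified illustrative sketch, not the authors' exact integrator; it assumes the gradient of the current pixel links it to each of its four neighbours.

```python
from collections import deque
import numpy as np

def integrate_height(p, q, igp):
    """Grow a height map outward from the initial growing position (IGP).

    Each unsolved pixel gets the height of an already-solved neighbour plus
    the gradient step between them; a breadth-first queue visits the four
    neighbours of every solved pixel, as described in the text.
    """
    H, W = p.shape
    z = np.full((H, W), np.nan)
    z[igp] = 0.0                     # reference height at the seed pixel
    queue = deque([igp])
    while queue:
        r, c = queue.popleft()
        # (row step, col step, height increment): columns ~ x, rows ~ y
        for dr, dc, step in ((0, 1, p[r, c]), (0, -1, -p[r, c]),
                             (1, 0, q[r, c]), (-1, 0, -q[r, c])):
            rr, cc = r + dr, c + dc
            if 0 <= rr < H and 0 <= cc < W and np.isnan(z[rr, cc]):
                z[rr, cc] = z[r, c] + step
                queue.append((rr, cc))
    return z
```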

2.3. SPD-based CGI

The SPD method uses four low-resolution $M/2 \times M/2$ images to achieve a high-resolution $M \times M$ image. The left of Fig. 2 shows the method of shifting patterns to obtain the four low-resolution images. Firstly, a set of patterns ($M/2 \times M/2$ pixels) is projected onto the object and the original-pattern image ($I_{OP}$) is reconstructed. The patterns are then shifted to the right, bottom, and bottom-right diagonal directions by half a pixel respectively to obtain the right-pattern image ($I_{RP}$), bottom-pattern image ($I_{BP}$) and diagonal-pattern image ($I_{DP}$). The four low-resolution images are overlaid on the $M \times M$ high-resolution mesh, each co-registered at its shifted location, and the high-resolution image can be retrieved by Eq. (5):

$$I_{\mathrm{SPD}} = \frac{1}{4}\left(I_{\mathrm{OP}} + I_{\mathrm{BP}} + I_{\mathrm{RP}} + I_{\mathrm{DP}}\right). \qquad (5)$$
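A sketch of Eq. (5) follows, assuming co-registration by placing each $M/2 \times M/2$ image onto the $M \times M$ mesh, where half a low-resolution pixel corresponds to one high-resolution pixel. Note that `np.roll` wraps at the borders; a careful implementation would crop or pad instead.

```python
import numpy as np

def spd_compose(I_op, I_rp, I_bp, I_dp):
    """Eq. (5): average four half-pixel-shifted low-resolution images.

    Each (M/2, M/2) image is expanded onto the (M, M) mesh (one low-res
    pixel -> 2x2 block) and placed at its shifted location.
    """
    def place(img, dr, dc):
        hi = np.kron(img, np.ones((2, 2)))         # upsample to M x M
        return np.roll(hi, (dr, dc), axis=(0, 1))  # shift; wraps at borders
    return 0.25 * (place(I_op, 0, 0) + place(I_rp, 0, 1) +
                   place(I_bp, 1, 0) + place(I_dp, 1, 1))
```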

Fig. 2 The SPD-based image reconstruction method. The low-resolution image $I_{OP}$ is obtained by using the patterns with a resolution of $M/2 \times M/2$; the patterns are then shifted to the right, bottom, and bottom right by half a pixel (as shown on the left side) to get the low-resolution images $I_{RP}$, $I_{BP}$, and $I_{DP}$ as shown on the right side.

Although this method modestly reduces the contrast, it significantly improves the quality of the image compared to the ones achieved by the traditional method (here named normal high-resolution imaging, NHR). In this paper, the quality of the formed 2D shading image is evaluated by SNR. In the absence of noise, SPD is equivalent to the convolution of NHR with the kernel shown in Eq. (6) [21]:

$$\kappa = \frac{1}{16}\begin{bmatrix} 1 & 2 & 1 \\ 2 & 4 & 2 \\ 1 & 2 & 1 \end{bmatrix}. \qquad (6)$$

However, when noise is considered, the SNR of images obtained by traditional methods is inversely proportional to the resolution; the SNR of low-resolution images is therefore higher than that of high-resolution images. An SPD-based image is composed of several normal low-resolution images, so it inherits the corresponding high SNR without reducing resolution. In particular, the number of measurements required by SPD to obtain the 2D image equals that of NHR.
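The noise-free equivalence in Eq. (6) can be checked numerically; here is a sketch using scipy.ndimage (the boundary mode is our choice, not stated in the text).

```python
import numpy as np
from scipy.ndimage import convolve

# Eq. (6): the smoothing kernel that maps a noise-free NHR image to SPD
kappa = np.array([[1, 2, 1],
                  [2, 4, 2],
                  [1, 2, 1]]) / 16.0

def nhr_c(image_nhr):
    """NHR-C: convolve a normal high-resolution image with kappa."""
    return convolve(image_nhr, kappa, mode='nearest')
```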

2.4. 3D reconstruction with CGI based on SPD

To verify the effectiveness of applying SPD in 3D CGI, the relevant experiments are conducted. A face#1-model with approximate dimensions 100 mm × 60 mm × 55 mm (see the upper left of Fig. 3) and a face#2-model of 60 mm × 80 mm × 50 mm (see the upper right of Fig. 3) are used in the 3D CGI system and pixelated into 32 × 32 through the patterns. When only device noise is present, it is denoted $n_0$. Five different random fluctuations (marked $f_1$ to $f_5$) are then added to the measured intensity of the bucket detector to simulate the unstable illumination. Denoting the averaged intensity of the bucket detector as $\bar{s}$, the average of $f_1$ to $f_5$ increases from 2% to 10% of $\bar{s}$ in steps of 2%, and the total noise in these five conditions is marked $n_1$ to $n_5$ respectively, where $n_i = f_i + n_0$. Figures 3(a)–3(c) and Figs. 3(j)–3(l) show high-resolution images obtained from the three bucket detectors (up, left and right) by using SPD with noise $n_4$. In order to make a fair comparison, NHR experiments are performed to obtain the 64 × 64 high-resolution images, and the NHR images are convolved with the kernel of Eq. (6) to get the smoothed shading images, named NHR-C, shown in Figs. 3(d)–3(f) and Figs. 3(m)–3(o). As can be seen from the figures, for the two 2D images obtained from the same bucket detector, the 2D image obtained by SPD is much smoother than the one achieved by NHR-C. Here, we take the 2D shading image of the face#1-model using the bucket detector up as an example. The facial features (Fig. 3(a)) such as the eyes and the mouth are clearly distinguishable in the SPD-based 2D image, whereas the NHR-C-based 2D images are greatly affected by noise. A similar behavior of SPD can be observed in the 2D images of the face#2-model as shown in Figs. 3(j)–3(l) and Figs. 3(m)–3(o). Therefore, SPD decreases the effect of noise noticeably, which can offer high quality 2D images for further 3D reconstruction.
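For simulation, our reading of the noise model is $n_i = f_i + n_0$, with the mean of $f_i$ set to a given fraction of the mean bucket signal. The distribution of the fluctuation is not stated in the text, so the exponential choice below is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_illumination_noise(signals, level):
    """Add a fluctuation f_i with mean level * mean(signals) on top of the
    device noise already contained in `signals`; level runs from 0.02 (n1)
    to 0.10 (n5).  The non-negative exponential distribution is an
    assumption; the paper only specifies the fluctuation's mean.
    """
    return signals + rng.exponential(level * signals.mean(),
                                     size=signals.shape)
```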

Fig. 3 (a)–(f), (j)–(o) 2D shading images from the three bucket detectors obtained by SPD and NHR-C respectively when the noise is $n_4$; (g)–(i), (p)–(r) SNR curves of images reconstructed by SPD and NHR-C at different noise levels.

The SNR curves of the 2D images obtained by the two methods are calculated at different noise levels and presented in Figs. 3(g)–3(i) and Figs. 3(p)–3(r). According to [21], the SNR is given by:

$$\mathrm{SNR} = \frac{2 \times \left(\langle I_f \rangle - \langle I_b \rangle\right)}{\sigma_f + \sigma_b}, \qquad (7)$$
where 〈If〉, 〈Ib〉, σf and σb denote the average intensity of the object, the average intensity of the background, and the standard deviations of the intensities of the object and of the background, respectively.
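Eq. (7) is straightforward to compute given a mask separating object pixels from background; a minimal sketch, assuming the mask is known:

```python
import numpy as np

def snr(image, object_mask):
    """Eq. (7): SNR from foreground/background statistics.

    object_mask : boolean (H, W) array marking the object pixels.
    """
    fg, bg = image[object_mask], image[~object_mask]
    return 2.0 * (fg.mean() - bg.mean()) / (fg.std() + bg.std())
```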

As seen in Fig. 3, the SNR curves of SPD- and NHR-C-based 2D images both show a downward trend as the noise level increases, but the SNR curves of SPD are significantly higher than those of NHR-C. It can therefore be concluded that applying the SPD technique to 3D CGI significantly reduces the influence of noise on 2D shading images, providing high quality 2D shading images for the subsequent 3D reconstruction.

The steps of 3D reconstruction with CGI based on SPD are as follows (a condensed code sketch is given after the list):

  1. The computer pre-generates the Hadamard matrix H [22,23] of size $(M/2)^2 \times (M/2)^2$ and reshapes each row of H into a new matrix of size $M/2 \times M/2$ as a projected pattern. The $i$-th pattern and its inverse pattern are then projected onto the object, and the differential value of the two captured intensities at the $k$-th bucket detector is recorded as $s_i^k$.
  2. Four low-resolution images $I_{OP}$, $I_{RP}$, $I_{BP}$, and $I_{DP}$ of the object are generated using SPD.
  3. The four low-resolution images are substituted into Eq. (5) to build a high-resolution shading image of the object. Three high-resolution shading images are reconstructed, one from each of the three detectors.
  4. The three high-resolution shading images are substituted into Eq. (3) to obtain the normal vector of the object surface using photometric stereo.
  5. A gradient map of the object surface is obtained from the normal vector via Eq. (4).
  6. The gradient map is integrated to obtain the height map of the object surface, completing the 3D shape reconstruction of the object.
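Condensing steps 1–6, and reusing the helper sketches above (`cgi_reconstruct`, `spd_compose`, `surface_gradients`, `integrate_height`) together with the `select_oigp` sketch of Section 3, a hypothetical end-to-end driver might look as follows. `measure()` is a stand-in for the projector and bucket-detector hardware, not part of any real API.

```python
import numpy as np
from scipy.linalg import hadamard

def make_patterns(M):
    """Step 1: Hadamard matrix of order (M/2)^2, each row reshaped into an
    (M/2, M/2) pattern; (M/2)^2 must be a power of two (e.g. M = 64)."""
    n = (M // 2) ** 2
    return hadamard(n).reshape(n, M // 2, M // 2)

def reconstruct_3d(M, measure, detectors, d_vectors):
    """Steps 2-6.  measure(pattern, shift, k) is a hypothetical stand-in
    for the hardware: it projects the pattern (and its inverse) at the
    given half-pixel shift and returns the differential signal s_i^k."""
    patterns = make_patterns(M)
    shifts = {'OP': (0, 0), 'RP': (0, 1), 'BP': (1, 0), 'DP': (1, 1)}
    shading = []
    for k in detectors:                       # e.g. ('up', 'left', 'right')
        low = {name: cgi_reconstruct(patterns, np.array(
                   [measure(p_i, sh, k) for p_i in patterns]))
               for name, sh in shifts.items()}            # steps 2-3
        shading.append(spd_compose(low['OP'], low['RP'],
                                   low['BP'], low['DP']))
    p, q, nz = surface_gradients(*shading, *d_vectors)    # steps 4-5
    return integrate_height(p, q, select_oigp(p, q, nz))  # step 6 + OIGP
```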

In order to verify the behavior of SPD applied to 3D CGI, the 2D shading images obtained by SPD and NHR-C are used to reconstruct the corresponding 3D images, as shown in Fig. 4, where $n_0$, $n_2$ and $n_4$ are considered.

Fig. 4 (a), (h) Objects; (b)–(d), (i)–(k) 3D reconstruction results using NHR-C with different noise for the face#1-model and face#2-model respectively; (e)–(g), (l)–(n) 3D reconstruction results using SPD with different noise for the face#1-model and face#2-model respectively.

Firstly, consider the case where the quality of the reconstructed 3D images is affected by $n_0$ only. The 3D shapes of the objects (shown in Figs. 4(e) and 4(l)) are built from three SPD-based high-resolution shading images. For comparison, the 3D images reconstructed from NHR-C shading images are shown in Figs. 4(b) and 4(i). Taking the face#1-model as an example, the SPD-based 3D image is smoother than the NHR-C-based one, especially on the cheek (see the area marked by the red dashed box).

Secondly, the 3D reconstruction results for the two noise levels $n_2$ and $n_4$ are shown in Figs. 4(c) and 4(f), Figs. 4(j) and 4(m), Figs. 4(d) and 4(g), and Figs. 4(k) and 4(n) respectively. When the noise reaches $n_2$, the facial folds on the NHR-C-based 3D image become more pronounced and the shape of the nose is distorted, while in the SPD-based 3D images the smoothness is retained. When the noise is increased to $n_4$, the reconstruction results of NHR-C are seriously damaged and the object is highly distorted, which is avoided in the SPD-based 3D images. The same behavior can be observed in the results for the face#2-model, which proves that SPD can maintain the smoothness of the reconstructed 3D image.

Note that a hill shape exists on the reconstructed face#1-model (see the area indicated by the red dashed box in Fig. 4(a)) where the surface should be flat. This is caused by an unoptimized IGP and will be improved in the next section.

3. Selecting the optimal initial growing position (OIGP)

3.1. The principle of OIGP

The quality of the 3D image formed from the three shading images is also determined by the initial growing position (IGP). A 3D image reconstructed from a position in a flat area is much better than one reconstructed from an uneven area (the results are shown below). We therefore propose a method to automatically select a relatively flat IGP to reduce the error of the formed 3D image.

Firstly, we calculate the degree of flatness ($G_{pq}$) of the object by Eq. (8) (as shown in Fig. 5(b)), where $p_x$, $p_y$, $q_x$ and $q_y$ denote $\partial p/\partial x$, $\partial p/\partial y$, $\partial q/\partial x$ and $\partial q/\partial y$. A window (3 × 3 pixels) scans the image of $G_{pq}$ to evaluate the degree of flatness of each local region by Eq. (9), where the superscript $t$ indicates the $t$-th window position; the map of $v$ is shown in Fig. 5(c) (see the white window in Fig. 5(b); the arrows indicate the scanning directions). For a specific kind of object, the size of the scanning window should first be calibrated. For example, for an object whose surface curvature changes slowly, the window size should be relatively large; otherwise a smaller window should be applied.

$$G_{pq} = |p_x| + |p_y| + |q_x| + |q_y|, \qquad (8)$$
$$v^{(t)}(x,y) = \frac{1}{9} \sum_{i=0,\,j=0}^{i=2,\,j=2} G_{pq}(x+t+i,\; y+t+j), \qquad (9)$$

In Fig. 5(c), smaller values represent relatively flat places, which can be roughly selected by the condition $v^{(t)} \le T$. In our given system, $T$ is set as 20% of $\bar{v}$ ($\bar{v}$ indicates the average value of $v$), and the selected candidate positions are marked by yellow dots in Fig. 5(c). After calculating the standard deviation $\sigma_z$ of $n_z$ within each candidate window, the center pixel of the window with the minimum $\sigma_z$ is taken as the OIGP for the reconstruction algorithm, as shown in Fig. 5(d).
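A sketch of the OIGP selection (Eqs. (8) and (9) plus the $\sigma_z$ test) follows; the derivative and windowing choices (`np.gradient`, `scipy.ndimage.uniform_filter`) are ours, as is the `frac` default standing in for the 20% threshold.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def select_oigp(p, q, nz, win=3, frac=0.2):
    """Eqs. (8)-(9) plus the sigma_z test: pick the flattest window.

    Candidates are positions with v <= frac * mean(v); among them the one
    whose window has the smallest standard deviation of n_z wins.
    """
    py, px = np.gradient(p)                   # p_y = dp/dy, p_x = dp/dx
    qy, qx = np.gradient(q)
    G = np.abs(px) + np.abs(py) + np.abs(qx) + np.abs(qy)   # Eq. (8)
    v = uniform_filter(G, size=win)                         # Eq. (9)
    # candidate positions: relatively flat regions (relax frac if empty)
    candidates = np.argwhere(v <= frac * v.mean())
    # local variance of n_z; minimising it is equivalent to minimising
    # the local standard deviation sigma_z
    var_z = uniform_filter(nz ** 2, win) - uniform_filter(nz, win) ** 2
    best = min(candidates, key=lambda rc: var_z[tuple(rc)])
    return tuple(best)
```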

Fig. 5 The flow chart of the selection of the OIGP. (a) Images of parameters p and q; (b) the image of Gpq; (c) the image of v with the selected candidate positions marked in yellow; (d) the selected OIGP.

To verify the OIGP, we compare in Fig. 6 the 3D images formed by NHR-C with noise $n_0$, with and without using the OIGP. The results using the OIGP are shown in Figs. 6(b) and 6(g) for the two objects respectively, with the selected OIGP marked by a circular symbol. The OIGP chosen by our proposed method preserves the object's stereo features, as shown in Figs. 6(b) and 6(g), even though the formed 3D image is still affected by reconstruction noise. For comparison, 3D images reconstructed from three other random IGPs are shown in Figs. 6(c)–6(e) and Figs. 6(h)–6(j) for the two objects respectively; there the features of the object suffer significant distortion, especially on the nose and cheek of the face#1-model. Therefore, the proposed method is beneficial for maintaining the local features in the 3D reconstruction process.

Fig. 6 (a), (f) 2D shading images by NHR-C; (b), (g) 3D reconstruction results with the OIGP by NHR-C; (c)–(e), (h)–(j) 3D reconstruction results with random IGPs by NHR-C.

3.2. 3D image built by SPD and OIGP

As shown in the previous discussions, the shading images formed by SPD-based CGI can be significantly smoothed without additional filtering algorithms or more measurements; however, this alone cannot guarantee the accuracy of the reconstructed 3D shape. On the other hand, the proposed method for selecting the OIGP reduces the accumulated error in reconstruction and maintains the features of the object. Combining these two methods should therefore yield better results.

The 2D images are obtained by SPD and then an OIGP is estimated by the proposed method. The reconstructed 3D images are shown in Figs. 7(b)–7(d) and Figs. 7(l)–7(n). It can be seen that smooth and accurate 3D images of the two different objects are formed. Taking the face#1-model as an example, the uneven fluctuation on the cheek of the formed image disappears, which benefits from the use of SPD, and the nose is as sharp as in the object because the OIGP is applied. Furthermore, the main features are also maintained by using SPD and OIGP as the noise level increases (see Figs. 7(c), 7(d), 7(m) and 7(n)). For a fair comparison, the results of SPD alone and of NHR-C with OIGP are also shown in Fig. 7; the reconstruction results of SPD and OIGP together are significantly superior to those of the other methods. Note that, for the face#2-model, the 3D reconstruction results of SPD in Fig. 7 differ from those shown in Fig. 4, which reflects that the IGP can also introduce instability into the 3D image even when smooth 2D images are achieved by SPD. This shows the necessity of developing the OIGP to achieve a reliable 3D shape.

Fig. 7 (a), (k) Error probe fluctuation; (b)–(d), (l)–(n) SPD and OIGP under different noise conditions (n0, n2, n4); (e)–(g), (o)–(q) SPD under different noise conditions (n0, n2, n4); (h)–(j), (r)–(t) NHR-C and OIGP under different noise conditions (n0, n2, n4).

To quantitatively evaluate the quality of the 3D image formed using our strategy, we asked two trained observers to use a vernier caliper to measure the relative depth of different positions on the object as error probes. In the face#1-model, we chose representative positions as the error probes, including the corners of the eyes, the nose, and the mouth. According to the measurements, the averaged error $e_a$ of the probes is ∼ 3.88 mm when only $n_0$ is considered (corresponding to Fig. 7(b)). With increasing noise, $e_a$ increases only slightly; for example, even when the noise is $n_4$, $e_a$ is approximately 4.41 mm (corresponding to Fig. 7(d)). Among all error probes, the depth errors of the probes close to the selected OIGP are relatively small, while the errors of the positions close to the boundary of the object are relatively large, because the error in the 3D image accumulates during reconstruction. For the face#2-model, the error probes are selected using criteria similar to those in Fig. 7(a), as shown in Fig. 7(k). The average error is ∼ 4.55 mm when only $n_0$ is present; when the noise is $n_4$, $e_a$ increases to ∼ 5.24 mm. The above error analysis verifies the accuracy of our proposed strategy.

4. Conclusion

We propose a strategy to form a 3D image of an object in noisy environments using several single-pixel detectors. With the same number of measurements, we first obtain four low-resolution images and then use them to compose a high-resolution image by the SPD technique, which improves the smoothness of the formed 3D image at the expense of a modest reduction in the contrast of the 2D images. Furthermore, a method for selecting the OIGP is proposed to maintain the stereo features of the object based on the gradient images of the object's surface.

The 3D shape reconstructed using our method is smoother and closer to the original object than those formed by the traditional method [20] in noisy environments, without the need for more measurements or a de-noising algorithm. This would increase the efficiency of 3D imaging techniques in low-light-level conditions where the object might be light sensitive.

Funding

National Natural Science Foundation of China (NSFC) (61501242); Open Research Fund in 2017 of Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense (3091601410409).

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

1. T. B. Pittman, Y. H. Shih, D. V. Strekalov, and A. V. Sergienko, "Optical imaging by means of two-photon quantum entanglement," Phys. Rev. A 52, R3429 (1995).

2. R. S. Bennink, S. J. Bentley, and R. W. Boyd, "'Two-photon' coincidence imaging with a classical source," Phys. Rev. Lett. 89, 113601 (2002).

3. J. Cheng and S. Han, "Incoherent coincidence imaging and its applicability in x-ray diffraction," Phys. Rev. Lett. 92, 093903 (2004).

4. H. Yu, R. Lu, S. Han, H. Xie, G. Du, T. Xiao, and D. Zhu, "Fourier-transform ghost imaging with hard X rays," Phys. Rev. Lett. 117, 113901 (2016).

5. J. H. Shapiro, "Computational ghost imaging," Phys. Rev. A 78, 061802 (2008).

6. Y. Bromberg, O. Katz, and Y. Silberberg, "Ghost imaging with a single detector," Phys. Rev. A 79, 053840 (2009).

7. F. Ferri, D. Magatti, L. A. Lugiato, and A. Gatti, "Differential ghost imaging," Phys. Rev. Lett. 104, 253603 (2010).

8. S. C. Song, M. J. Sun, and L. A. Wu, "Improving the signal-to-noise ratio of thermal ghost imaging based on positive negative intensity correlation," Opt. Commun. 366, 8–12 (2016).

9. B. Sun, M. Edgar, R. Bowman, L. Vittert, S. Welsh, A. Bowman, and M. Padgett, "Differential computational ghost imaging," in Computational Optical Sensing and Imaging (Optical Society of America, 2013), pp. CTu1C–4.

10. K. H. Luo, B. Huang, W. M. Zheng, and L. A. Wu, "Nonlocal imaging by conditional averaging of random reference measurements," Chin. Phys. Lett. 29, 074216 (2012).

11. M. J. Sun, M. F. Li, and L. A. Wu, "Nonlocal imaging of a reflective object using positive and negative correlations," Appl. Opt. 54, 7494 (2015).

12. B. Sun, S. S. Welsh, M. P. Edgar, J. H. Shapiro, and M. J. Padgett, "Normalized ghost imaging," Opt. Express 20, 16892–16901 (2012).

13. D. B. Phillips, M. J. Sun, J. M. Taylor, M. P. Edgar, S. M. Barnett, G. G. Gibson, and M. J. Padgett, "Adaptive foveated single-pixel imaging with dynamic supersampling," Sci. Adv. 3, e1601782 (2017).

14. S. Ota, R. Horisaki, Y. Kawamura, M. Ugawa, I. Sato, K. Hashimoto, R. Kamesawa, K. Setoyama, S. Yamaguchi, and K. Fujiu, "Ghost cytometry," Science 360, 1246–1251 (2018).

15. L. Meng, W. Wang, H. Wang, H. Wang, G. Li, N. Chen, and G. Situ, "Deep-learning-based ghost imaging," Sci. Rep. 7, 17865 (2017).

16. Y. Zhang, M. P. Edgar, B. Sun, N. Radwell, G. M. Gibson, and M. J. Padgett, "3D single-pixel video," J. Opt. 18, 035203 (2016).

17. M. J. Sun, M. P. Edgar, G. M. Gibson, B. Sun, N. Radwell, R. Lamb, and M. J. Padgett, "Single-pixel three-dimensional imaging with time-based depth resolution," Nat. Commun. 7, 12010 (2016).

18. W. Gong and S. Han, "High-resolution far-field ghost imaging via sparsity constraint," Sci. Rep. 5, 9280 (2015).

19. E. Salvador-Balaguer, P. Latorre-Carmona, C. Chabert, F. Pla, J. Lancis, and E. Tajahuerce, "Low-cost single-pixel 3D imaging by using an LED array," Opt. Express 26, 15623–15631 (2018).

20. B. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, and M. J. Padgett, "3D computational imaging with single-pixel detectors," Science 340, 844–847 (2013).

21. M. J. Sun, M. P. Edgar, D. B. Phillips, G. M. Gibson, and M. J. Padgett, "Improving the signal-to-noise ratio of single-pixel imaging using digital microscanning," Opt. Express 24, 10476–10485 (2016).

22. W. Pratt, J. Kane, and H. C. Andrews, "Hadamard transform image coding," Proc. IEEE 57, 58–68 (1969).

23. N. J. Sloane and M. Harwit, "Masks for Hadamard transform optics, and weighing designs," Appl. Opt. 15, 107 (1976).



