Optica Publishing Group

Adaptive compressed 3D ghost imaging based on the variation of surface normals

Open Access

Abstract

Three-dimensional (3D) images can be reconstructed by a computational ghost imaging system with single-pixel detectors based on photometric stereo, but the large number of measurements and the long imaging time required are obstacles to its development. Moreover, the compressibility of the target's surface normals has not been fully studied, which wastes sampling efficiency in single-pixel imaging. In this paper, we propose a method to adaptively measure an object's 3D information based on its surface normals. In the proposed method, regions of the object's surface are illuminated by patterns of different spatial resolutions according to the variation of the surface normals. Experimental results demonstrate that the proposed scheme reduces the number of measurements while preserving the quality of the reconstructed 3D image.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Ghost imaging (GI) is a novel imaging technique that forms images from the intensity correlation between light fields. In 1995, Y. H. Shih and co-workers realized ghost imaging with entangled photons for the first time [1]. Later, Boyd et al. demonstrated classical ghost imaging with a pseudo-thermal light source [2]. In 2008, J. H. Shapiro proposed a computational ghost imaging (CGI) system [3], which uses optical modulation devices such as spatial light modulators (SLMs) and digital micromirror devices (DMDs) to generate predictable reference light fields/patterns, removing the requirement for array detectors. This method simplifies the ghost imaging system, improves its practicality, and has attracted a great deal of attention to ghost imaging [4–9]. Of course, ghost imaging technology has some limitations. The most obvious one is its rather long imaging time, which includes the time for data acquisition and post-processing. Furthermore, the higher the resolution of the acquired image, the larger the amount of data required, and therefore the longer the sampling and reconstruction time. Generally speaking, the resolution and the imaging rate of a ghost imaging system are in contradiction.

To address this problem, researchers have improved ghost imaging systems from two aspects: imaging setups [10,11] and algorithms [12,13]. On the algorithmic side, several ghost imaging schemes based on compressed sensing (CS) have been proposed [13,14]. However, as Averbuch et al. pointed out, the reconstruction processes of these methods usually rely on iterative algorithms to solve optimization problems, which increases the computational burden. What is more, as the image resolution increases, the time consumed increases exponentially. To overcome these defects, they proposed a new image sampling technique suitable for single-pixel imaging systems, called adaptive compressed sampling (ACS) [15]. This technique adaptively predicts edge regions from the wavelet tree of a low-resolution image, performs high-resolution sampling on those regions, and finally recovers the target image by an inverse wavelet transform. Similar ideas have been applied to improve the performance of ghost imaging systems [16,17].

In recent years, three-dimensional (3D) imaging techniques based on ghost imaging systems have emerged, often realized by applying photon time-of-flight (TOF) or photometric stereo methods from the 3D imaging field to ghost imaging systems [18–23]. In 2013, B. Sun et al. successfully reconstructed 3D images with a CGI system using photometric stereo theory [24]. However, for 3D ghost imaging, time and computational consumption remain major obstacles to its development. In the field of TOF-based 3D imaging, a photon-counting 3D imaging technique based on the ACS method has been proposed [25]. That scheme improves the imaging speed while preserving image details by performing adaptive-resolution sampling based on the wavelet information of the target's depth map. For 3D imaging systems based on photometric stereo theory, however, adaptive sampling has not yet been discussed, leaving the system burdened by time and computational cost.

To address this issue, we propose a multi-resolution 3D ghost imaging scheme that combines photometric stereo vision technology. In this paper, we explore the compressibility of three-dimensional information based on the variation of surface normals. The imaging region is divided into two parts according to the object's local degree of flatness, which is evaluated from the variation of surface normals calculated from low-resolution images. In the flat area, sampling can be reduced, while more samples are taken in the uneven area to maintain the stereo features of the object. In this way, the redundancy in the sampling process can be exploited to reduce the number of measurements and thus achieve adaptive compressed sampling. This technique alleviates the contradiction between sampling rate and image resolution, and consequently helps improve the speed of the imaging system.

2. Methodology

2.1 3D CGI system

By means of a computational ghost imaging system (shown in Fig. 1), we obtain shading images of the target from different detectors, and then form the 3D shape of the target using photometric stereo. In this 3D CGI system, a computer pre-generates a series of Hadamard-based patterns and projects them onto the target with a digital light projector (DLP4500, Texas Instruments). Three spatially separated bucket detectors (DET100A, Thorlabs) receive the intensity of the reflected light. The signal of each bucket detector is collected by a data acquisition board (NI6210, National Instruments) and transferred to the computer to form an image of the target according to Eq. (1). All devices in the system are controlled by LabVIEW, and the measurement rate is $\sim 3.5\,\mathrm{Hz}$.

$$I_i (x,\;y)=\left \langle A^k(x,\;y) \cdot s_i^k \right\rangle -\left \langle A^k(x,\;y) \right\rangle \left \langle s_i^k \right\rangle$$
where $i$ ($i=up,\;left,\;right$) is the detector label, $I_i(x,\;y)$ is the shading image captured by bucket detector $i$, $A^k(x,\;y)$ is the $k$-th illumination pattern, $s_i^k$ is the $k$-th measurement by bucket detector $i$, and $\left \langle \cdots \right \rangle$ denotes the ensemble average over $N$ patterns.
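As a sanity check, the correlation of Eq. (1) can be sketched in a few lines of NumPy. This is our own illustrative code, not part of the published system; the function name, array shapes, and the simulated binary patterns are assumptions.

```python
import numpy as np

def ghost_image(patterns, signals):
    """Eq. (1): I(x,y) = <A^k(x,y) s^k> - <A^k(x,y)><s^k>.
    patterns: (K, H, W) illumination patterns A^k
    signals:  (K,)      bucket-detector measurements s^k
    """
    K = patterns.shape[0]
    corr = np.tensordot(signals, patterns, axes=(0, 0)) / K  # <A s>
    return corr - patterns.mean(axis=0) * signals.mean()     # minus <A><s>

# simulated acquisition: bucket value = total light reflected by the object
rng = np.random.default_rng(0)
patterns = rng.choice([0.0, 1.0], size=(4096, 8, 8))
obj = np.zeros((8, 8)); obj[2:5, 2:5] = 1.0
signals = (patterns * obj).sum(axis=(1, 2))
recon = ghost_image(patterns, signals)  # a bright square emerges at [2:5, 2:5]
```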


Fig. 1. A schematic of 3D CGI system. The entire space can be represented by Cartesian coordinates $(x,\;y,\;z)$ originating from the center of projected patterns on the object plane. The three bucket detectors marked as up, left and right are located at $(15, -224, -425)$, $(-187, 77, -425)$, $(145, 65, -425)$ in the unit of millimeter, respectively.


According to “shape from shading,” a classical method in the field of 3D imaging, if the object exhibits uniform Lambertian reflectance and the light source is located at infinity, the intensity image is determined purely by the local surface orientation:

$$I(x,\;y) = R(p(x,\;y),q(x,\;y))$$
where $R(p,\;q)$ is the reflection map, and $(p,\;q)=(z_x,\;z_y)$ is the surface gradient ($p=\frac {\partial z}{\partial x}=\frac {n_x}{n_z}$, $q=\frac {\partial z}{\partial y}=\frac {n_y}{n_z}$). The unit normal vector at any point on the surface of the object can be expressed as:
$$\vec{n} = (n_x,\;n_y,\;n_z)^T = \frac{(p,\;q,\;1)^T}{\sqrt{p^2+q^2+1}}$$
Furthermore, according to the principle of photometric stereo applied in previous 3D CGI works [22,24], the intensity of the shading image is given by Eq. (4):
$$I_i (x,\;y) = I_s \alpha (\vec{d_i}\cdot \vec{n})$$
where $I_s$ represents the intensity of the light source, $\alpha$, a constant for a Lambertian reflector, denotes the object's surface albedo, and $\vec {d_i}$ represents the unit vector from the object surface pointing to the $i$-th detector. According to Eq. (4), $\vec {n}$ can be calculated by:
$$\vec{n} = \frac{1}{I_s \alpha}\,\boldsymbol{{D}}^{{-}1} \boldsymbol{{I}}$$
where $\boldsymbol { {D}}$ is the matrix of the three unit detector vectors, $\boldsymbol { {D}} = [\vec {d}_{up},\vec {d}_{left},\vec {d}_{right}]^T$, and $\boldsymbol { {I}}$ is the array of corresponding image intensities, $\boldsymbol { {I}} = [I_{up},I_{left},I_{right}]^T$. The surface gradient $(p,\;q)$ is obtained from $\vec {n}$, and a height map of the object's surface is then achieved by integrating the surface gradient. Thereby, a 3D image of the target is reconstructed.
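The normal-recovery chain of Eqs. (3)-(5) can be sketched as follows. This is a minimal illustration under our own assumptions (axis conventions, and a naive cumulative-sum integration standing in for a proper gradient-field integrator); it is not the authors' implementation.

```python
import numpy as np

def photometric_stereo(I_up, I_left, I_right, d_up, d_left, d_right):
    """Eqs. (3)-(5): recover unit normals from three shading images.
    n is proportional to D^{-1} I; the scale I_s * alpha drops out
    when n is normalized to unit length."""
    D = np.stack([d_up, d_left, d_right])            # 3x3 detector matrix
    I = np.stack([I_up, I_left, I_right], axis=-1)   # (H, W, 3) intensities
    n = I @ np.linalg.inv(D).T                       # D^{-1} I at every pixel
    n /= np.linalg.norm(n, axis=-1, keepdims=True)   # unit normal vectors
    p = n[..., 0] / n[..., 2]                        # p = n_x / n_z
    q = n[..., 1] / n[..., 2]                        # q = n_y / n_z
    # naive height map: cumulative sums stand in for gradient integration
    z = 0.5 * (np.cumsum(p, axis=1) + np.cumsum(q, axis=0))
    return n, z
```

In practice a least-squares or Fourier-domain (Frankot-Chellappa) integrator is preferable to plain cumulative sums, which accumulate noise along the scan direction.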

2.2 Adaptive compressed 3D ghost imaging based on surface normals

To strike a balance between imaging time and image resolution, we propose an adaptive compressed 3D ghost imaging technique based on the variation of surface normals. As shown in Fig. 2, the steps are as follows:

  • 1. Form three low-resolution 2D ghost images $I_i^L\,(i=up,\;left,\;right)$ of the target with bucket detectors located at the top, left and right of it, respectively;
  • 2. Calculate the surface normals of the target from $I_i^L$, and further obtain the target's surface flatness parameter $G_{VN}$ and the local surface flatness map $v(x,\;y)$;
  • 3. Estimate the high- and low-resolution regions according to $v(x,\;y)$, and generate a template $m(x,\;y)$ to distinguish them;
  • 4. Build the corresponding high-resolution Hadamard patterns based on $m(x,\;y)$;
  • 5. Use the high-resolution patterns to form three high-resolution shading images $I_i^H\,(i=up,\;left,\;right)$ of the selected part of the target;
  • 6. Fuse the low-resolution images $I_i^L$ with the high-resolution images $I_i^H$ to get adaptive-resolution shading images $I_i^A\,(i = up,\;left,\;right)$;
  • 7. From $I_i^A$, reconstruct a three-dimensional image of the target under the principle of photometric stereo.


Fig. 2. Flow chart of 3D ghost imaging based on adaptive sampling. (a) The object. (b) Low-resolution patterns based on Hadamard matrix. (c) Low-resolution shading images. (d) The local flatness map $v(x,\;y)$. (e) Template $m(x,\;y)$. (f) High-resolution patterns. (g) High-resolution shading images of target’s uneven region. (h) Adaptive-resolution shading images. (i) The 3D image of object.


2.2.1 Adaptive compressed sampling scheme based on surface normals

The surface gradients $(p,\;q)$ indicate the trend of surface variation. An area can be considered flat if its trend of change is consistent. When obtaining the depth information of the object, it is the degree of flatness that indicates where more measurements are required. For flat areas, the number of samples can be reduced, because the height of each point can be recovered from low-resolution information. An uneven area, however, requires relatively more measurements to preserve the details of the target's shape, so it should be imaged at high resolution. Therefore, the flat area can be regarded as compressible, while the uneven area is incompressible. The degree of flatness of the target can be expressed with the vector $(p,\;q)$ as:

$$G_{VN} =|p_x|+|p_y|+|q_x|+|q_y|$$
where $p_x$, $p_y$, $q_x$ and $q_y$ indicate $\frac {\partial {p}}{\partial {x}}$, $\frac {\partial {p}}{\partial {y}}$, $\frac {\partial {q}}{\partial {x}}$ and $\frac {\partial {q}}{\partial {y}}$, respectively.

In the method of adaptive compressed sampling based on surface normals, we first preview the target scene using a set of low-resolution patterns with a resolution of $2^N\times 2^N$ pixels, and calculate $G_{VN}$ from the three obtained images. A window of $3\times 3$ pixels is used to scan the $G_{VN}$ map to evaluate the degree of flatness of the target's local regions:

$$v(x,\;y) = \frac{1}{9}\sum_{i={-}1,\;j={-}1}^{i=1,\;j=1}G_{VN}(x+i,\;y+j)$$
The smaller the value of $v(x,\;y)$, the flatter the local area is. A threshold $T$ is needed to divide the imaging area into two parts according to the local flatness. The region composed of pixels satisfying $v(x,\;y)\,>\,T$ is $r_H$, representing the uneven area. In order to take advantage of the orthogonality of the Hadamard matrix, the threshold should be set to ensure that the number of pixels in $r_H$ is a power of 2. For example, suppose $r_H$ occupies $2^{2N-k}\,(k=1,2,3,\ldots )$ pixels, i.e., $r_H$ is $1/2^k$ of the initial imaging area. Here, the parameter $k$ determines the threshold and affects the accuracy in identifying flat/uneven areas. In practical applications, it is necessary to tune the value of $k$ for different targets. The division result can be represented by a binary matrix $m(x,\;y)$:
$$m(x,\;y) = \begin{cases} 1 & v(x,\;y)>T\\ 0 & v(x,\;y) \le T \end{cases}$$
where the pixels with a value of one in $m(x,\;y)$ constitute the area $r_H$, which is expected to be imaged with high resolution.
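Eqs. (6)-(8) can be condensed into a short routine. The following is a sketch under our own assumptions: `np.gradient` stands in for the partial derivatives, edge padding implements the $3\times 3$ window at the borders, and, per the power-of-two requirement above, the threshold is realized implicitly by keeping the `n_high` pixels with the largest $v$.

```python
import numpy as np

def flatness_mask(p, q, n_high):
    """Eqs. (6)-(8): G_VN from the variation of the gradients, averaged
    over a 3x3 window into v(x,y); the n_high pixels with the largest v
    form r_H (m = 1). n_high should be a power of two so that a Hadamard
    matrix of matching order exists."""
    G = (np.abs(np.gradient(p, axis=1)) + np.abs(np.gradient(p, axis=0))
         + np.abs(np.gradient(q, axis=1)) + np.abs(np.gradient(q, axis=0)))
    # 3x3 mean filter (Eq. (7)) via edge padding and nine shifted slices
    Gp = np.pad(G, 1, mode='edge')
    v = sum(Gp[1 + i:G.shape[0] + 1 + i, 1 + j:G.shape[1] + 1 + j]
            for i in (-1, 0, 1) for j in (-1, 0, 1)) / 9.0
    # implicit threshold T: select exactly n_high pixels (Eq. (8))
    m = np.zeros(v.size)
    m[np.argsort(v.ravel())[-n_high:]] = 1.0
    return v, m.reshape(v.shape)
```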

2.2.2 Generate the high-resolution template and patterns

Reference patterns of high resolution required for the second sampling stage are generated according to the template $m(x,\;y)$, as shown in Fig. 3(a). The first task is to upgrade the resolution of the illumination pattern, that is, to divide each pixel in a low-resolution pattern into $2^{\alpha } \times 2^{\alpha }\,(\alpha = 1,2,3,\ldots )$ pixels. In this way, the resolution of the reference light field becomes $2^L\times 2^L\,(L=N+\alpha )$. The process can be expressed as:

$$M(X,Y)=m(x,\;y)\otimes \underbrace{\begin{bmatrix} 1 & \cdots & 1\\ \vdots & \ddots & \vdots \\ 1 & \cdots & 1 \end{bmatrix}}_{2^{\alpha}}$$
where $\otimes$ denotes the Kronecker product. The part of $M(X,Y)$ with a value of one is the high-resolution imaging area, marked as $R_H$. Like $r_H$, $R_H$ is $1/2^k$ of the initial imaging area. Therefore, $R_H$ occupies $2^T\,(T= 2L-k)$ pixels, and the task in the second sampling stage is to measure the intensity map of these $2^T$ pixels with Hadamard-based patterns.


Fig. 3. Flow chart for generation process of high-resolution illumination patterns based on Hadamard matrix. The area with blue grid in (a), (b) and (c) is the predicted high resolution area, while shaded area is low resolution area. (a) The low-resolution template $m(x,\;y)\,(size: 2^N\times 2^N)$. (b) The high-resolution template $M(X,Y)\,(size:2^L\times 2^L)$. (c) $M(n)\,(size: 1\times 2^{2L})$. (d) Hadamard matrix $H_T\,(size: 2^T\times 2^T)$. (e) Measurement matrix $H_L\,(size: 2^T\times 2^{2L})$. (f) High-resolution patterns.


According to the high-resolution template $M(X,Y)$, the corresponding Hadamard-based patterns are generated as follows. First, the matrix $M(X,Y)$ is reshaped into a vector $M(n)$ (shown in Figs. 3(b) and 3(c)), and the relationship between the point $(X,Y)$ and $n\,(0\le n< 2^{2L})$ can be described by Eq. (10).

$$\begin{cases} X = [n/2^L]+1\\ Y = n\ \mathrm{mod}\ 2^L \end{cases}$$
where $[\cdots ]$ denotes taking the integer part, and $\mathrm{mod}$ denotes taking the remainder. Then, according to the number of pixels $2^T$ to be imaged in the second stage, a Hadamard matrix $H_T$ of size $2^T\times 2^T$ is required, from which the high-resolution measurement matrix $H_L(m,\;n)$ can be expressed as:
$$H_L(m,\;n) = \begin{cases} H_T\,(m,\;n') & M(n)=1\\ 0 & M(n)=0 \end{cases}$$
The row number $m\,(1\le m \le 2^T)$ of matrix $H_L$ is the serial number of the patterns, and the parameter $n'$ satisfies Eq. (12):
$$n' = \sum_{i=1}^n M(i)$$
Finally, the high-resolution measurement matrix $H_L(m,\;n)$ (Fig. 3(e)) is reshaped row by row to generate $2^T$ frames of high-resolution patterns with $2^L\times 2^L$ pixels each (Fig. 3(f)). High-resolution shading images $I_i^H\,(i=up,\;left,\;right)$ are then captured using the patterns formed above.
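The pattern-generation pipeline of Eqs. (9)-(12) can be sketched as below. The function names and the Sylvester construction of the Hadamard matrix are our own choices; `n_pix` (the paper's $2^T$) must be a power of two.

```python
import numpy as np

def sylvester_hadamard(n):
    """Hadamard matrix of order n (n a power of two), Sylvester construction."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def highres_patterns(m, alpha):
    """Eqs. (9)-(12): upscale template m by 2^alpha via a Kronecker product,
    then scatter the columns of a small Hadamard matrix into the selected
    pixels of full-size patterns; unselected pixels stay dark."""
    M = np.kron(m, np.ones((2**alpha, 2**alpha)))   # Eq. (9)
    sel = M.ravel() == 1                            # flattened template M(n)
    n_pix = int(sel.sum())                          # number of hi-res pixels, 2^T
    H_T = sylvester_hadamard(n_pix)                 # source matrix of Eq. (11)
    H_L = np.zeros((n_pix, M.size))
    H_L[:, sel] = H_T                               # Eqs. (11)-(12): place columns
    return H_L.reshape(n_pix, *M.shape)             # one 2^L x 2^L frame per row
```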

2.2.3 Fuse images with different resolutions

The scheme of compressed sampling based on surface normals involves fusing images with different resolutions. Ideally, the gray value of a pixel in a low-resolution image is the sum of the gray values of the corresponding pixels in a high-resolution image.

The first step is to resize the low-resolution images $I_i^L$, according to Eq. (9), into images $I_i^T$ of $2^L\times 2^L$ pixels. The adaptive-resolution images $I_i^A$ are then obtained by fusing $I_i^T$ and $I_i^H$, as shown in Eq. (13).

$$I_i^A(x,\;y) = (1-M(x,\;y))\,I_i^T(x,\;y)+\kappa_i\cdot M(x,\;y)\,I_i^H(x,\;y)$$
where $\kappa _i$ is a factor that guarantees the energy of the region in $I_i^T$ equals that of the corresponding part in $I_i^H$:
$$\kappa_i = \frac{\mathop{\sum}\limits_{x=1}^{x=2^L}\mathop{\sum}\limits_{y=1}^{y=2^L}I_i^T(x,\;y)M(x,\;y)}{\mathop{\sum}\limits_{x=1}^{x=2^L}\mathop{\sum}\limits_{y=1}^{y=2^L}I_i^H(x,\;y)M(x,\;y)}\qquad (i=up,\;left,\;right)$$
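The fusion of Eqs. (13)-(14) is essentially a masked blend per detector; a sketch with illustrative names:

```python
import numpy as np

def fuse(I_T, I_H, M):
    """Eqs. (13)-(14): blend the upscaled low-resolution image I_T with the
    high-resolution image I_H inside mask M, scaling I_H by kappa so that
    both carry the same total energy over the fused region."""
    kappa = (I_T * M).sum() / (I_H * M).sum()       # Eq. (14)
    return (1 - M) * I_T + kappa * M * I_H          # Eq. (13)
```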

2.2.4 Sampling compression ratio

To capture an image with $2^L\times 2^L$ pixels, a traditional Hadamard-based single-pixel imaging scheme requires $2^L\times 2^L$ samples. The scheme proposed in this paper, however, can reduce the number of samples significantly. There are two sampling stages in the execution of the whole scheme. The initial sampling stage obtains images with $2^N\times 2^N$ pixels, so the number of samples for each image is $2^N\times 2^N$. Although the second sampling stage forms images with a resolution of $2^L\times 2^L$, the actual number of samples for each image is only $2^T$, determined by the number of pixels in the high-resolution imaging region.

For images of the same resolution, the ratio of the total number of samples required by our method to that of the conventional scheme can be expressed as:

$$\begin{aligned} \frac{N_A}{N_T} & = \frac{2^N\times 2^N+2^T}{2^L\times 2^L}=\frac{1}{2^{2\alpha}}+\frac{1}{2^k} \end{aligned}\qquad(\alpha = 1,2,3,\ldots,\;k = 1,2,3,\ldots)$$
where $N_A$ is the total number of samples required by our scheme and $N_T$ is that required by the conventional sampling scheme. It can be seen from Eq. (15) that $N_A<N_T$ and $\frac {N_A}{N_T}\le \frac {3}{4}$, which means the proposed scheme can indeed reduce the number of samples, and the compression is no less than 25%. Moreover, the sampling compression ratio can be controlled through the values of $\alpha$ and $k$.
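Eq. (15) directly gives the sampling budget. A tiny helper (our own naming) reproduces the ratios used later in the experiment:

```python
def compression_ratio(alpha, k):
    """Eq. (15): ratio of samples used by the adaptive scheme to a full
    Hadamard scan at the same 2^L x 2^L resolution, N_A / N_T."""
    return 1 / 2**(2 * alpha) + 1 / 2**k

# alpha = 1, k = 1: ratio 0.75 -> a 25% reduction (the face model)
# alpha = 1, k = 2: ratio 0.50 -> a 50% reduction (the simple model)
```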

3. Experiment

The schematic diagram of the experimental setup is shown in Fig. 1. We take a plane with a hemispherical protrusion in its center and part of a plaster face model as experimental targets. Experimental results are shown in Fig. 4. We first illuminate the object with patterns at a resolution of $32\times 32\,(2^N\times 2^N, N=5)$ pixels, where each pixel is 1.25 mm in size. With the 3D CGI system, low-resolution $(32\times 32)$ shading images of the two targets are formed from the three spatially separated detectors, as shown in Figs. 4(a.1)-(a.3) and 4(g.1)-(g.3). Then, the flatness degree $G_{VN}$ is calculated, as shown in Figs. 4(b) and (h), and the corresponding local flatness maps $v(x,\;y)$ are obtained. Here, for the simple model, we select the 256 pixels with the largest values in the $v(x,\;y)$ map as $r_H$, which means only a quarter $(k=2)$ of the initial imaging area needs to be measured in the high-resolution imaging stage. For the face model, however, 512 pixels are selected $(k=1)$ because the target is relatively complicated. In this way, the areas with sharp changes in the two targets are selected. For example, in the face model, the cheek area is excluded while the nose and eye regions are retained (see Fig. 4(i)), consistent with the nature of the target. Then we directly image the selected area with $64\times 64\,(2^L\times 2^L, L=6,\, \alpha =1)$ pixel patterns (see Figs. 4(j.1)-(j.3)), and merge them with the low-resolution images to obtain the adaptive-resolution images (see Figs. 4(k.1)-(k.3)); the fusion process is described in Section 2.2.3. Finally, a 3D image (Fig. 4(l)) is reconstructed.


Fig. 4. Results of 3D ghost imaging based on adaptive compressed sampling. The pictures above dotted line are images of a simple model consisting of a plane and a hemisphere while below are images of local face model. (a.1), (a.2), (a.3) and (g.1), (g.2), (g.3) are low-resolution images of the two targets, obtained by three bucket detectors located at up, left and right, respectively. (b) and (h) are maps of $G_{VN}$, and (c) and (i) are area division template $m(x,\;y)$. (d.1), (d.2), (d.3) and (j.1), (j.2), (j.3) are high-resolution images. (e.1), (e.2), (e.3), and (k.1), (k.2), (k.3) are adaptive-resolution images. (f) and (l) are three-dimensional images of the two targets reconstructed by our proposed scheme.


To verify the efficiency of our proposed method, we select two regions of the face model (shown in Fig. 5(a)) for further analysis: a relatively flat region (labeled region 1) and a strongly varying region (labeled region 2). We compare the results of our proposed method to the low- and high-resolution 3D images formed by the traditional scheme. For region 1, it can be seen that, inheriting the high image quality of the low-resolution result (Fig. 5(b.2)), our method reconstructs the flat region (Fig. 5(b.1)) smoothly. This benefit comes mainly from the fusion strategy in Eq. (13). Moreover, the high-resolution imaging area of our method is smaller than that of the traditional scheme, which reduces the measurement noise and manifests as fewer local fluctuations in the 3D images. The same phenomenon can be observed in region 2, verifying the ability of our method to maintain local 3D details while reducing noise. As seen in Fig. 6, the behavior of our algorithm is governed by the value of $k$: as $k$ increases, the noise-reduction capability of our method strengthens, while as $k$ decreases, more details are measured along with more noise. Thus, through the parameter $k$, our method controls not only the number of samples but also, to some extent, the character of the 3D result.


Fig. 5. (a) 3D reconstruction result of the scheme proposed by this paper. (b.1) and (c.1) are the results for region 1 and region 2 by our proposed method. (b.2) and (c.2) are the low-resolution results by traditional scheme, while (b.3) and (c.3) are high-resolution results.



Fig. 6. (a) Object. (b.1), (c.1) and (d.1) are the area division templates $m(x,\;y)$ for schemes that reduce the number of samples by 0% $(k = 0)$, 25% $(k = 1)$ and 50% $(k = 2)$, respectively, while (b.2), (c.2) and (d.2) are the corresponding results.



Fig. 7. (a) and (e) mark (in red) the positions where the depth errors are probed on the two targets. (b.1), (b.2) and (f.1), (f.2) are low-resolution 3D imaging results by the traditional scheme for the simple model and the partial face model; (c.1), (c.2) and (g.1), (g.2) are high-resolution 3D imaging results of the traditional scheme; (d.1), (d.2) and (h.1), (h.2) are results of the scheme proposed in this paper.


To evaluate the quality of the proposed 3D ghost imaging scheme quantitatively, we select several representative positions on the targets, measure their relative depths with a vernier caliper, and calculate the depth errors of the recovered 3D images. For the hemisphere, we select the points marked in red in Fig. 7(a) and measure their depths relative to the plane. The average depth error of the proposed scheme is $\sim 0.635\,\mathrm{mm}$, while that of the traditional scheme is $\sim 1.123\,\mathrm{mm}$. For the partial face model, we select the points marked in red in Fig. 7(e) and measure their depths relative to the tip of the nose. According to these measurements, the average error of our proposed method is about $2.76\,\mathrm{mm}$, and that of the traditional method is $\sim 2.15\,\mathrm{mm}$. As for the sampling compression ratio, Eq. (15) shows that the adaptive sampling scheme reduces the number of samples by 50% for the simple object and by 25% for the face model. In general, the method in this paper reduces the number of samples without substantially increasing the depth error.

4. Conclusion

We have proposed an adaptive compressed 3D ghost imaging scheme based on the variation of surface normals. We identify the compressibility of a target in three-dimensional space according to its surface normals and realize sampling compression to form an adaptive-resolution 3D image of the target. Data compression is implemented during the sampling process, which reduces data redundancy during 3D imaging and facilitates the transmission and storage of data as well. For high-resolution 3D imaging, this strategy prevents the system's computational overhead from growing significantly. Also, benefiting from the flexibility of the single-pixel configuration, our technique can be extended to infrared wavebands. In addition, we propose a Hadamard-based partial imaging method that makes full use of the orthogonality of the Hadamard matrix while retaining the advantages of single-pixel imaging; the scheme successfully images discontinuous and irregular regions. Of course, the limitation of this local measurement scheme is obvious: the Hadamard matrix restricts the number of pixels to be imaged, limiting the adaptability of the compression ratio. Other measurement matrices could be used instead to overcome this defect. We believe that our work provides an effective solution for 3D imaging and single-pixel imaging.

Funding

National Natural Science Foundation of China (61501242, 61905108, 61875088).

References

1. T. B. Pittman, Y. H. Shih, D. V. Strekalov, and A. V. Sergienko, “Optical imaging by means of two-photon quantum entanglement,” Phys. Rev. A 52(5), R3429–R3432 (1995). [CrossRef]  

2. R. S. Bennink, S. J. Bentley, and R. W. Boyd, ““two-photon” coincidence imaging with a classical source,” Phys. Rev. Lett. 89(11), 113601 (2002). [CrossRef]  

3. J. H. Shapiro, “Computational ghost imaging,” Phys. Rev. A 78(6), 061802 (2008). [CrossRef]  

4. F. Ferri, D. Magatti, L. A. Lugiato, and A. Gatti, “Differential ghost imaging,” Phys. Rev. Lett. 104(25), 253603 (2010). [CrossRef]  

5. S.-C. Song, M.-J. Sun, and L.-A. Wu, “Improving the signal-to-noise ratio of thermal ghost imaging based on positive-negative intensity correlation,” Opt. Commun. 366, 8–12 (2016). [CrossRef]  

6. B. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, and M. J. Padgett, “Differential computational ghost imaging,” in Imaging and Applied Optics (Optical Society of America, 2013), p. CTu1C.4.

7. L. Kai-Hong, H. Bo-Qiang, Z. Wei-Mou, and W. Ling-An, “Nonlocal imaging by conditional averaging of random reference measurements,” Chin. Phys. Lett. 29(7), 074216 (2012). [CrossRef]  

8. M.-J. Sun, M.-F. Li, and L.-A. Wu, “Nonlocal imaging of a reflective object using positive and negative correlations,” Appl. Opt. 54(25), 7494–7499 (2015). [CrossRef]  

9. B. Sun, S. S. Welsh, M. P. Edgar, J. H. Shapiro, and M. J. Padgett, “Normalized ghost imaging,” Opt. Express 20(15), 16892–16901 (2012). [CrossRef]  

10. Z.-H. Xu, W. Chen, J. Penuelas, M. Padgett, and M.-J. Sun, “1000 fps computational ghost imaging using LED-based structured illumination,” Opt. Express 26(3), 2427–2434 (2018). [CrossRef]  

11. S. Jiang, X. Li, Z. Zhang, W. Jiang, Y. Wang, G. He, Y. Wang, and B. Sun, “Scan efficiency of structured illumination in iterative single pixel imaging,” Opt. Express 27(16), 22499–22507 (2019). [CrossRef]  

12. W. Jiang, X. Li, S. Jiang, Y. Wang, Z. Zhang, G. He, and B. Sun, “Increase the frame rate of a camera via temporal ghost imaging,” Opt. Lasers Eng. 122, 164–169 (2019). [CrossRef]  

13. O. Katz, Y. Bromberg, and Y. Silberberg, “Compressive ghost imaging,” Appl. Phys. Lett. 95(13), 131110 (2009). [CrossRef]  

14. P. Zerom, K. W. C. Chan, J. C. Howell, and R. W. Boyd, “Entangled-photon compressive ghost imaging,” Phys. Rev. A 84(6), 061804 (2011). [CrossRef]  

15. A. Averbuch, S. Dekel, and S. Deutsch, “Adaptive compressed image sensing using dictionaries,” SIAM J. Imaging Sci. 5(1), 57–89 (2012). [CrossRef]  

16. M. Aßmann and M. Bayer, “Compressive adaptive computational ghost imaging,” Sci. Rep. 3(1), 1545 (2013). [CrossRef]  

17. W.-K. Yu, M.-F. Li, X.-R. Yao, X.-F. Liu, L.-A. Wu, and G.-J. Zhai, “Adaptive compressive ghost imaging based on wavelet trees and sparse representation,” Opt. Express 22(6), 7133–7144 (2014). [CrossRef]  

18. Y. Zhang, M. P. Edgar, B. Sun, N. Radwell, G. M. Gibson, and M. J. Padgett, “3D single-pixel video,” J. Opt. 18(3), 035203 (2016). [CrossRef]  

19. M.-J. Sun, M. P. Edgar, G. M. Gibson, B. Sun, N. Radwell, R. Lamb, and M. J. Padgett, “Single-pixel three-dimensional imaging with time-based depth resolution,” Nat. Commun. 7(1), 12010 (2016). [CrossRef]  

20. W. Gong and S. Han, “High-resolution far-field ghost imaging via sparsity constraint,” Sci. Rep. 5(1), 9280 (2015). [CrossRef]  

21. E. Salvador-Balaguer, P. Latorre-Carmona, C. Chabert, F. Pla, J. Lancis, and E. Tajahuerce, “Low-cost single-pixel 3D imaging by using an LED array,” Opt. Express 26(12), 15623–15631 (2018). [CrossRef]  

22. L. Zhang, Z. Lin, R. He, Y. Qian, Q. Chen, and W. Zhang, “Improving the noise immunity of 3D computational ghost imaging,” Opt. Express 27(3), 2344–2353 (2019). [CrossRef]  

23. M.-J. Sun and J.-M. Zhang, “Single-pixel imaging and its application in three-dimensional reconstruction: a brief review,” Sensors 19(3), 732 (2019). [CrossRef]  

24. B. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, and M. J. Padgett, “3D computational imaging with single-pixel detectors,” Science 340(6134), 844–847 (2013). [CrossRef]  

25. H. Dai, G. Gu, W. He, L. Ye, T. Mao, and Q. Chen, “Adaptive compressed photon counting 3D imaging based on wavelet trees and depth map sparse representation,” Opt. Express 24(23), 26080–26096 (2016). [CrossRef]  
