
Pseudo-thermal imaging by using sequential-deviations for real-time image reconstruction


Abstract

Ghost imaging technologies acquire images through the intensity correlation of reference patterns and bucket values. Among them, an interesting method named correspondence imaging can generate positive-negative images by merely conditionally averaging reference patterns, but it still requires full/over sampling so that the ensemble average of the bucket values can serve as a selection criterion, causing a long acquisition time. Here, we propose a sequential-deviation ghost imaging approach, which realizes real-time reconstruction of positive-negative images with an image quality close to that of differential ghost imaging. Since comparison with the ensemble average is no longer necessary, this method improves the real-time performance. An explanation of its essence is also given here. Both simulation and experimental results demonstrate the feasibility of this technique. This work may complement the theory of ghost imaging.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Ghost imaging (GI) indirectly acquires object information by correlating two beams: one (the object beam) interacts with an object and is measured by a single-pixel (bucket) detector, while the other (the reference beam) does not interact with the object and is recorded by a high-spatial-resolution scanning photodetector or a spatially resolved array detector. Neither detector’s signal alone can form an image of the object, hence the recovered result of GI is called a ghost image. The first GI experiment used a biphoton source [1] and was explained as a quantum phenomenon. Later, GI was also realized with thermal light [2,3] and pseudo-thermal light [4], so GI could also be interpreted in terms of classical intensity correlation (i.e., statistical weighted averaging). A basic paradigm of pseudo-thermal GI uses a rotating ground-glass diffuser to turn a coherent laser beam into a time-varying light field, and a beam splitter to divide this light field into two beams. In computational GI [5,6], it was found that the pseudo-thermal source and beam splitter could be replaced with a deterministic illuminator that passes the laser beam through a spatial light modulator (SLM). In this way, GI could be simplified to a single light path with only a single-pixel detector, by precomputing the reference patterns. Once the total intensities and the patterns are known, the image can be reconstructed. To our knowledge, the idea of encoding and sampling image information can be traced back to the early flying-spot camera, which used a Nipkow disc patterned with a spiral array of holes to raster-scan the scene. Obviously, in single-pixel scanning schemes, the illumination flux is limited and the imaging mainly relies on the mechanical scanning device. One solution lies in the development of computational imaging techniques, such as Hadamard imaging [7,8] and Fourier imaging [9] (both use a complete orthogonal basis for sampling and matrix inversion for reconstruction [10,11]), the GI mentioned above, computational illumination imaging, the single-pixel camera via compressive sampling [12], etc.

As we know, a single-pixel detector offers lower cost and faster timing response than array detectors. Single-pixel imaging (SPI) generally refers to a technique that yields images by using a series of patterns to sample the object and recording the total intensities with such a single-pixel detector [13]. Under this definition, computational GI is one of the SPI techniques. When considering the history of SPI, it is worth noting that the term SPI was first used in the pioneering paper by Duarte et al. [12]. SPI schemes [14–16] actually run counter to the trend of increasing the pixel count of traditional cameras, and provide possibilities for many scenarios where pixelated array detectors are unavailable.

Generally, the calculation of GI is faster than matrix inversion or optimized iterative algorithms. But to obtain high-quality images, the required number of measurements is much larger than the signal dimension, which undoubtedly increases the acquisition time. Moreover, the transmission, storage, and loading of the large reference pattern matrices gradually erode this computational advantage. These issues are particularly critical in real-time image acquisition applications. For example, when using X-ray GI techniques [17,18] to observe living organisms, long acquisition times will cause irreversible damage to the specimens. In recent years, many GI methods [19–23] have sprung up, such as the background-term removal method ($\Delta$GI) [19], differential ghost imaging (DGI) [20], and the like. Among them, DGI is widely recognized as the best-performing correlation function. Lately, an interesting GI mechanism named correspondence imaging (CI) was put forward [24–28], which can achieve positive-negative images with high visibility just by conditionally averaging a subset of the reference patterns, without any correlation calculations. The positive (bright object) and negative (dark object, the opposite of the former) images can be defined as two visually complementary images whose sum (or a combination of the two, each weighted by a coefficient) is generally a uniformly distributed light field. The shortcomings of CI are also obvious: it still needs full or over sampling, and it requires the ensemble average of the bucket values as a threshold to split the reference patterns into two subsets before reconstruction. Therefore, to some extent, the principle of CI limits its real-time practical applications, and its imaging mechanism still deserves further improvement.

In this paper, based on previous work on complementary/differential measurements [29–33], a method named sequential-deviation GI (SGI) is presented. The main idea is to take full advantage of shift-multiplexing and to take the difference of two adjacent bucket values or random patterns (the $(i+1)$th bucket value/pattern minus the $i$th one, or vice versa), so that double measurements of a complementary pattern pair (one pattern followed by its complement) are no longer necessary [30,32]. Since the reconstruction of the ghost images is a gradual addition process, the object can be imaged in real time during data collection, without waiting for all the measurements to finish. Thus we can terminate the sampling at any time once the quality requirement is satisfied. Compared with CI, this method not only improves the quality of positive-negative images, but also reduces memory consumption. We have verified its feasibility with numerical simulation. Without loss of generality, the technique is demonstrated on a classic double-arm lensless GI setup with a pseudo-thermal light source; since the patterns can also be encoded directly onto an SLM as in computational GI, our method can also provide new ideas for low-calculation-cost SPI. Additionally, since the background noise is independently and identically distributed, using sequential deviations magnifies the difference between adjacent patterns or single-pixel (bucket) values, especially in a double-arm GI setup, and helps to effectively reduce the DC component of the background noise. Finally, we provide an explanation of the SGI principle via probability theory.

2. GI theory and SGI method

In this section, we first briefly review conventional GI approaches to show the evolutionary process and development tendency of GI. Then, on the basis of these frameworks, a new GI scheme named SGI is proposed, with a simple real-time reconstruction function as well as good imaging quality. SGI has three modes: mode-1 obtains legible ghost images, while its variants mode-2 and mode-3 acquire clear positive and negative images.

2.1 Brief review of GI

In GI, the second-order correlation of reference signal $I_R(x_R)$ and bucket signal $S_B$ is defined as

$$G^{(2)}=\left\langle S_BI_R(x_R)\right\rangle,$$
where $\left \langle u\right \rangle =\frac {1}{N}\sum _{i=1}^Nu^i$ denotes the ensemble average of the signal $u$, and the coordinate $x_R$ refers to the spatial position in the reference arm. For the sake of simplicity, here we express $x_R$ in one dimension. $S_{B}=\int _{A_l}I_B(x_B)T(x_B)dx_B$, where $x_B$ and $I_B$ are the spatial coordinate and the light field of the object beam, respectively, $T(x_B)$ denotes the transmission function of the object, and $A_l$ stands for the integration area. By deducting the ensemble average terms from $S_B$ and $I_R(x_R)$, we have
$$\begin{aligned} \Delta G^{(2)}&=\left\langle(S_B-\left\langle S_B\right\rangle)(I_R(x_R)-\left\langle I_R(x_R)\right\rangle)\right\rangle\\ &=\left\langle S_BI_R(x_R)\right\rangle-\left\langle S_B\right\rangle\left\langle I_R(x_R)\right\rangle. \end{aligned}$$
This formula is called $\Delta$GI, a typical correlation function of GI. However, this method is prone to failure in the case of unstable light sources.

Using $\frac {\left \langle S_B\right \rangle }{\left \langle S_R\right \rangle }S_R$ to replace $\left \langle S_B\right \rangle$, we can get

$$\begin{aligned} G_{DGI}^{(2)}&=\left\langle(S_B-\frac{\left\langle S_B\right\rangle}{\left\langle S_R\right\rangle}S_R)(I_R(x_R)-\left\langle I_R(x_R)\right\rangle)\right\rangle\\ &=\left\langle S_BI_R(x_R)\right\rangle-\frac{\left\langle S_B\right\rangle}{\left\langle S_R\right\rangle}\left\langle S_RI_R(x_R)\right\rangle, \end{aligned}$$
where $S_R=\int _{A_l}I_R(x_R)dx_{R}$. This more elaborate form is named DGI, which greatly improves the imaging quality compared with traditional GI. Since DGI scales its subtracted term by the instantaneous total reference intensity $S_{R}$, it works well in harsh measurement environments.
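As an illustration, these correlation functions can be written compactly in NumPy. The following is a minimal sketch under assumed array shapes, with `patterns` (shape $(N, H, W)$, the reference fields $I_R$) and `buckets` (shape $(N,)$, the bucket values $S_B$) as illustrative names:

```python
import numpy as np

def gi_reconstruct(patterns, buckets):
    """Classic second-order correlation <S_B I_R> of Eq. (1)."""
    return np.tensordot(buckets, patterns, axes=1) / len(buckets)

def delta_gi_reconstruct(patterns, buckets):
    """Delta-GI of Eq. (2): <S_B I_R> - <S_B><I_R>."""
    return gi_reconstruct(patterns, buckets) - buckets.mean() * patterns.mean(axis=0)

def dgi_reconstruct(patterns, buckets):
    """DGI of Eq. (3): <S_B I_R> - (<S_B>/<S_R>) <S_R I_R>."""
    s_r = patterns.sum(axis=(1, 2))          # total intensity of each pattern
    return (gi_reconstruct(patterns, buckets)
            - buckets.mean() / s_r.mean() * gi_reconstruct(patterns, s_r))
```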

As we know, $\left \langle \cdots \right \rangle$ is an $O(n)$ operation, and $\left \langle u\right \rangle$ at the $i$th sampling moment can be recursively calculated from $\left \langle u\right \rangle$ at the $(i-1)$th sampling moment in $O(1)$ time, i.e., $\left \langle u\right \rangle _i=\frac {(i-1)\left \langle u\right \rangle _{i-1}+u_i}{i}$. By this means, correlation-based GI algorithms such as ${G^{(2)}}$, $\Delta G^{(2)}$ and $G_{DGI}^{(2)}$ can update the ensemble averages with every incoming measurement. But in $\Delta G^{(2)}$ and $G_{DGI}^{(2)}$, the role of deducting $\left \langle S_B\right \rangle \left \langle I_R(x_R)\right \rangle$ or $\frac {\left \langle S_B\right \rangle }{\left \langle S_R\right \rangle }\left \langle S_RI_R(x_R)\right \rangle$ is to shift the mean of the background part of the recovered images to almost 0, which generally requires a large number of measurements to obtain an accurate $\left \langle S_B\right \rangle$ or $\frac {\left \langle S_B\right \rangle }{\left \langle S_R\right \rangle }$ in the subtrahends.
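The recursive mean update above is a one-liner; a sketch:

```python
def update_mean(prev_mean, x_i, i):
    """<u>_i = ((i-1) <u>_{i-1} + u_i) / i, with i counted from 1."""
    return ((i - 1) * prev_mean + x_i) / i

mean = 0.0
for i, x in enumerate([3.0, 5.0, 4.0], start=1):
    mean = update_mean(mean, x, i)  # running average of the first i samples
```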

In CI, one can conditionally average the reference signals, which are cleverly divided into two pattern subsets according to the bucket mean, i.e., the sign of $S_B-\left \langle S_B\right \rangle$, to produce the positive and negative ghost images:

$$\begin{cases} G_{CI_+}=\left\langle I_{R_+}\right\rangle,\textrm{ for }\left\{S_{B_+}\mid S_B-\left\langle S_B\right\rangle\geq 0\right\},\\ G_{CI_-}=\left\langle I_{R_-}\right\rangle,\textrm{ for }\left\{S_{B_-}\mid S_B-\left\langle S_B\right\rangle<0\right\}. \end{cases}$$
Without the need to multiply the patterns by their corresponding bucket values (weights), CI undoubtedly reduces the computational complexity. However, since the average value $\left \langle S_B\right \rangle$ is necessary for pattern division, CI still requires full sampling before image reconstruction: it is a process of massive data acquisition followed by selection and calculation. It is worth mentioning that although one can pre-sample a certain number of bucket values and then decide whether each subsequent measurement should be kept for the final reconstruction (each logical judgment itself being based on a sample), the actual number of measurements remains very high. This is wasteful: imagine a scenario in which we sample a great many signals, yet in the end only a few of them are useful for the final reconstruction. This raises a fundamental question: why spend so much effort acquiring all the data when we know that most of it will be discarded? Could the differential technique be applied to adjacent bucket values or adjacent random patterns, so that data acquisition can be accompanied by reconstruction without throwing anything away?
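For comparison with the SGI code below, a minimal sketch of the CI reconstruction of Eq. (4) (same assumed array names as before) is:

```python
import numpy as np

def ci_reconstruct(patterns, buckets):
    """Split patterns by the sign of S_B - <S_B>, then average each subset."""
    mask = buckets - buckets.mean() >= 0   # <S_B> requires the full data set
    g_pos = patterns[mask].mean(axis=0)    # positive image, G_{CI+}
    g_neg = patterns[~mask].mean(axis=0)   # negative image, G_{CI-}
    return g_pos, g_neg
```

Note that the mask depends on the mean over all bucket values, which is exactly why CI cannot start reconstructing before the sampling is complete.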

2.2 Sequential-deviation ghost imaging

In order to address the above problems, we propose the SGI method here. First, the reference patterns and bucket values are numbered according to the measurement sequence. Then, starting again from Eq. (1), a sequential-deviation strategy is utilized. Different from Eq. (2), we use $S_{B_{k}}$ and $I_{R_{k}}$ to replace $\left \langle S_B\right \rangle$ and $\left \langle I_R(x_R)\right \rangle$. In this way, we define three modes: mode-1 performs sequential-deviation operations on both quantities, while mode-2 and mode-3 carry out the sequential-deviation operation only on adjacent bucket values or neighbouring reference patterns, respectively. Mathematically, for $k=1,2,\ldots ,N-1$, we have

$$\textrm{mode-1: }G^{(2)}_{both}=\left\langle(S_{B_{k+1}}-S_{B_k})(I_{R_{k+1}}-I_{R_k})\right\rangle;$$
$$\textrm{mode-2: } \begin{cases} G^{(2)}_{B_+}=\left\langle(S_{B_{k+1}}-S_{B_k})I_{R_{k+1}}\right\rangle,\\ G^{(2)}_{B_-}=\left\langle(S_{B_{k+1}}-S_{B_k})I_{R_k}\right\rangle; \end{cases}$$
$$\textrm{mode-3: } \begin{cases} G^{(2)}_{R_+}=\left\langle S_{B_{k+1}}(I_{R_{k+1}}-I_{R_k})\right\rangle,\\ G^{(2)}_{R_-}=\left\langle S_{B_k}(I_{R_{k+1}}-I_{R_k})\right\rangle. \end{cases}$$
For $N$ measurements, $N-1$ pairs can be obtained. Certainly, when using a stable light source, we can further subtract the first measurement (or pattern) from the last one to form complete $N$ pairs. As a comparison, in traditional complementary modulation schemes [30,32], $N$ measurements only generate $N/2$ effective pairs. Moreover, it is impossible to modulate complementary patterns in a double-arm GI setup with a rotating ground glass. Note that here we only shift one subscript to compute the difference; we could also shift multiple subscripts, with similar results. In mathematical terms, the biggest difference between CI, DGI and SGI is that SGI does not need to subtract the ensemble averages ($\left \langle S_B\right \rangle$ or $\left \langle I_R(x_R)\right \rangle$). SGI can thus realize data acquisition accompanied by reconstruction, without discarding most of the measurement data as CI does. Since the whole calculation of SGI is linear and cumulative, we can reconstruct the positive-negative images in real time and terminate the sampling as soon as the image is clear enough. A minimal sketch of the three modes is given below.
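The following NumPy sketch implements Eqs. (5)–(7) batch-wise for clarity (a streaming variant suited to real-time use is sketched in Section 3); the array names are illustrative assumptions:

```python
import numpy as np

def sgi_reconstruct(patterns, buckets):
    """patterns: (N, H, W) reference fields; buckets: (N,) bucket values."""
    d_b = np.diff(buckets)            # S_{B,k+1} - S_{B,k}, shape (N-1,)
    d_r = np.diff(patterns, axis=0)   # I_{R,k+1} - I_{R,k}, shape (N-1, H, W)
    n = len(d_b)
    mode1 = np.tensordot(d_b, d_r, axes=1) / n                # Eq. (5), G_both
    mode2 = (np.tensordot(d_b, patterns[1:], axes=1) / n,     # Eq. (6), G_{B+}
             np.tensordot(d_b, patterns[:-1], axes=1) / n)    # Eq. (6), G_{B-}
    mode3 = (np.tensordot(buckets[1:], d_r, axes=1) / n,      # Eq. (7), G_{R+}
             np.tensordot(buckets[:-1], d_r, axes=1) / n)     # Eq. (7), G_{R-}
    return mode1, mode2, mode3
```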

3. Simulation and experimental results

In order to evaluate the image quality, we introduce the peak signal-to-noise ratio (PSNR) as a quantitative measure:

$$\textrm{PSNR}=10\log_{10}(255^2/\textrm{MSE}),$$
where $\textrm {MSE}=\frac {1}{st}\sum \nolimits _{p,\;q=1}^{s,\;t}[U_o(p,\;q)-\tilde U(p,\;q)]^2$ describes the squared distance between the recovered image $\tilde U$ and the original image $U_o$ over all $s\times t$ pixels. Generally, the larger the PSNR value, the better the reconstructed image quality.
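In code, Eq. (8) reduces to a few lines (assuming both images are arrays scaled to the 8-bit range $[0, 255]$):

```python
import numpy as np

def psnr(original, recovered):
    """PSNR of Eq. (8) for 8-bit images."""
    mse = np.mean((original.astype(float) - recovered.astype(float)) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)
```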

To demonstrate the feasibility and performance of SGI, we create a gray-scale image of $128\times 128$ pixels bearing the letters “BIT” (the abbreviation of Beijing Institute of Technology) as the original image (see Fig. 1(a)), and use random patterns of the same pixel size for numerical simulations of $\Delta G^{(2)}$, DGI, CI and SGI. In Fig. 1, we give the PSNR value below each reconstructed image. For fairness, the total number of measurements is the same for all algorithms, $N=16,384$. Since the function $G^{(2)}$ depends on the statistics of a large amount of data and suffers from light fluctuations and noise, it is difficult to distinguish the object information in the image recovered directly with $\left \langle S_BI_R(x_R)\right \rangle$ at a 100% sampling ratio. Thus, for a fair comparison, we use $\Delta G^{(2)}$ instead of $G^{(2)}$. Although CI uses roughly half of the patterns for each of the positive and negative images, it does not reduce the total number of measurements. Here, Figs. 1(b)–1(d) are the results of $\Delta G^{(2)}$, DGI, and SGI mode-1, respectively. It can be seen that the performance of SGI and DGI is better than that of $\Delta G^{(2)}$, and that SGI obtains an image quality almost as good as that of DGI, just by subtracting neighboring patterns and bucket values (rather than the ensemble averages) in the classic second-order correlation function. Additionally, similar to CI (Figs. 1(e)–1(f)), our SGI mode-2 and mode-3 also recover positive-negative ghost images, with better image quality for the same total number of measurements, as shown in Figs. 1(g)–1(h) and 1(i)–1(j). That is, to obtain the same quality of positive-negative images, SGI mode-2 and mode-3 require fewer measurements than CI. Besides, the image qualities of SGI mode-2 and mode-3 are almost the same; both reduce redundant measurements as much as possible, but still cannot reduce the number of measurements by half while matching the image quality of CI, which will be a focus of our future research. Furthermore, since there is no need to compute the ensemble average of the bucket values for pattern grouping, SGI allows the data measurement process and the reconstruction operation to be synchronized, improving the real-time performance in practical applications. It should be noted that we take the negative image of the original object as the reference when calculating the PSNRs of the negative images; all PSNR calculations for the other negative images hereafter follow the same convention.

Fig. 1. Simulation results of $\Delta G^{(2)}$, DGI, CI, SGI mode-1, mode-2 and mode-3. (a) is the original picture of $128\times 128$ pixels. (b)–(d) are the recovered results of $\Delta G^{(2)}$, DGI and SGI mode-1. (e)–(f) are CI positive-negative images, i.e., $G_{CI_+}$ and $G_{CI_-}$. (g)–(h) are retrieved positive-negative images of SGI mode-2, i.e., $G^{(2)}_{B_+}$ and $G^{(2)}_{B_-}$, and (i)–(j) are restored positive-negative images of SGI mode-3, i.e., $G^{(2)}_{R_+}$ and $G^{(2)}_{R_-}$.

In terms of calculation time, although the positive-negative image acquisition of CI involves only the averaging of patterns, which is a very simple computation, it also involves a large number of pattern classification and logical judgment operations, so its image reconstruction time is the longest. DGI and $\Delta G^{(2)}$ take the second longest, followed by SGI mode-1, the mode-3 positive-negative images, and the mode-2 positive-negative images. The computational complexity of SGI mode-1 is inherently lower than that of DGI, while its double sequential-deviation operations are more complex than the single sequential-deviation operations of SGI mode-2 and mode-3. The differential operations between neighbouring bucket values (in SGI mode-2) take less computation time than those between adjacent patterns (in SGI mode-3). Since the formulas of $G_{DGI}^{(2)}$ and $\Delta G^{(2)}$ differ little, their calculation times are almost the same. Given this, to some extent, SGI can be seen as a compromise between CI and DGI, combining their strengths while circumventing their weaknesses.

Our optical experiments are based on a classic double-arm lensless GI setup, as shown in Fig. 2, where a semiconductor laser beam of wavelength $\lambda =532$ nm passes through a diffuser (here, a ground-glass disk rotating at 0.3 rad/s under the control of a stepper motor) to produce the pseudo-thermal light. A beam expander expands the laser beam to a diameter of 1.92 mm so that it illuminates a larger region of the ground glass, yielding relatively finer speckles. The light field of time-varying random speckles passes through an aperture and is reflected by a mirror onto a 50:50 beam splitter (BS), which divides the light into two arms. One arm (the reference arm) merely records the light field distribution $I_R(x_R)$ of the source with a charge-coupled device (CCD), i.e., an array detector, while the other (the object arm) collects the total intensity transmitted through the object with a single-pixel (bucket) detector, denoted by $S_B$. After $N$ measurements, we can reconstruct the object images via intensity correlations. Since the ground-glass disk is rotated continuously by the stepper motor, the imaging speed of this optical system mainly depends on the frame rate of the CCD, here 20 Hz. Of course, commercially available industrial array detectors with ultrafast frame rates could further improve the imaging speed of this system.

Fig. 2. Schematic diagram of the double-arm lensless GI setup.

In the experiments, we used a film printed with the Chinese character “light” as the object to be detected, and chose a fixed imaging region of $300\times 300$ pixels, so the speckle patterns and the recovered images were also of $300\times 300$ pixels. Using a high-speed camera (CCD) and a bucket detector synchronized with it, we recorded both the reference and bucket signals. Here, another CCD served as the bucket detector, integrating the total light intensity falling on its photosensitive surface during each synchronization period. The projection of one speckle pattern onto the object is shown in Fig. 3(a). To avoid periodicity in the ground-glass patterns, we added some disturbances/displacements to the central axis of rotation. The experimental results of $\Delta G^{(2)}$, DGI and SGI mode-1 are presented in Figs. 3(b)–3(d), and the reconstructed positive-negative images of CI, SGI mode-2 and mode-3 are given in Figs. 3(e)–3(f), 3(g)–3(h) and 3(i)–3(j), all with 50,000 measurements.

Fig. 3. Experimental results. (a) is the light field distribution on the photosensitive surface of the bucket detector in the object arm, i.e., the interaction result of one speckle pattern and the light field of the object. (b)–(d) are the recovered results of $\Delta G^{(2)}$, DGI and SGI mode-1; (e)–(f), (g)–(h) and (i)–(j) are the positive and negative ghost images of CI, SGI mode-2 and mode-3, respectively; (k) gives a cross-section plot of the gray-scale images in (c) and (d) to make the image quality easier to compare, by choosing the same row of the recovered images indicated by a red line. (l) is the cross-section plot of retrieved gray-scale images (e), (g) and (i) with the same row, while (m) is the intensity profile of reconstructed gray-scale images (f), (h) and (j). The horizontal coordinate $x$ of (k)–(m) denotes the column pixel index of the restored images.

To compare the reconstructed images more intuitively, we plotted the gray-scale values in the 87th row (containing 300 pixels in total) of the recovered images (marked by the red lines in Figs. 3(c)–3(j)). In Fig. 3(k), the curves of DGI and SGI mode-1 agree very well, with three distinct peaks corresponding to the three bright regions of the selected row in the reconstructed images (see Figs. 3(c)–3(d)). Thus, SGI mode-1 achieves a performance similar to DGI from a reasonable number of measurements, with the outline of the object sharply distinguished, consistent with the aforementioned simulation results; yet the computational complexity of SGI mode-1 is lower than that of DGI. It can also be seen that SGI mode-1 obtains a much better image quality than CI, without the need to compare each bucket value with the ensemble average for pattern grouping, thereby realizing real-time positive-negative imaging. Figures 3(l) and 3(m) (the former for the positive images, the latter for the negative images) give another two cross-section plots of the retrieved gray-scale images, from which we can see that the peak values of CI are not very clear, being almost submerged in the background noise. This again brings out the superiority of SGI mode-2 and mode-3 in the reconstruction of positive and negative images. For the PSNR calculation, we took a photo of the object as the original image. The computed PSNRs below the recovered images show that both DGI and our SGI are better suited to actual harsh measurement environments.

As presented in Table 1, we also provide the memory consumption of running the SGI mode-1, DGI and CI algorithms on the experimental data, measured by random access memory (RAM) usage. In the experiments, each picture taken by the CCD occupies hundreds of KB, so 50,000 such pictures occupy 32.2 GB, and a large number of temporary variables are involved in the programs. Conventional CI needs to import all the data into memory and then use the computed ensemble average of the bucket values to classify the patterns, which inevitably leads to a surge in memory consumption: CI requires a total of 106,809 MB of memory here, which is impossible for an ordinary computer with limited memory. The DGI algorithm can be written in an iterative form, which greatly reduces memory consumption. Our SGI method performs the weighted summation in real time with each measurement and requires fewer intermediate variables than DGI. As a result, SGI has the lowest memory consumption, only 6.27% of that of CI, suitable for a laptop computer. The memory consumption of SGI mode-2 and mode-3 is almost the same as that of SGI mode-1. A sketch of this streaming accumulation is given after Table 1.

Table 1. Memory consumption of different methods
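The key to the low memory footprint is that SGI mode-1 only needs a running sum plus the previous pattern and bucket value; nothing else has to stay in memory. A hedged sketch of such a streaming accumulator (class and variable names are ours, for illustration) is:

```python
import numpy as np

class StreamingSGI:
    """Fold each measurement into the SGI mode-1 estimate as it arrives."""
    def __init__(self, shape):
        self.acc = np.zeros(shape)    # running sum of (dS_B)(dI_R)
        self.prev_pattern = None
        self.prev_bucket = None
        self.pairs = 0

    def push(self, pattern, bucket):
        if self.prev_pattern is not None:
            self.acc += (bucket - self.prev_bucket) * (pattern - self.prev_pattern)
            self.pairs += 1
        self.prev_pattern, self.prev_bucket = pattern, bucket

    def image(self):
        return self.acc / max(self.pairs, 1)   # current estimate, at any time
```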

Next, we verified the real-time imaging performance of the SGI scheme against that of $\Delta$GI (see Figs. 4(a)–4(j)). Taking SGI mode-1 as an example, we gradually increased the number of sampled patterns from 300 to 30,000; the corresponding results of $300\times 300$ pixels are given in Figs. 4(k)–4(t), in which the image quality improves steadily with the measurement number. It can be clearly seen that the reconstruction quality of SGI mode-1 is better than that of $\Delta$GI at any sampling moment, which also demonstrates the real-time imaging performance of SGI mode-1. In our experiments, when the number of sampled patterns reached 500 (i.e., a 0.56% sampling ratio), we could faintly recognize the object contour in the SGI mode-1 result. When the number of measurements increased to 5,000 (i.e., a 5.6% sampling ratio), SGI mode-1 yielded a relatively clear image whose sharpness is actually sufficient for many practical applications. To reach the image quality of Fig. 4(p), $\Delta$GI required 12,000 to 20,000 measurements. For applications with higher clarity requirements, we can continue to take measurements and add the subsequent data to the SGI mode-1 function in real time. Although $\Delta$GI also has such real-time imaging capability, it usually requires many more measurements to achieve the same reconstruction quality as SGI mode-1, so its real-time performance is not as good. Furthermore, the sampling of SGI mode-1 can be terminated at any time depending on the visibility requirement, without presetting a large measurement number to obtain satisfactory image quality, which is significantly better than CI, which uses $\langle S_B\rangle$ as the selection criterion for pattern classification and logical judgment only after full oversampling and hence takes a longer acquisition time.

Generally, the measurement noise in two adjacent instantaneous samples taken within a very short time interval does not change much, so using sequential deviations can effectively suppress the noise, especially in the double-arm GI scheme with a rotating ground glass. To quantify the denoising accuracy and the reconstruction improvement, we recalculated the images using measurement numbers from 500 to 5,500 (where the average background term is not yet accurate enough) in steps of 200. We then computed the variances of both the background and object parts for $\Delta$GI and SGI mode-1, after stretching the recovered pixel means of these two parts to two identical levels (e.g., 0 and 1) for a fair comparison; a sketch of this normalization is given after Fig. 4. From Figs. 4(u)–4(v), we can see that at these low sampling ratios, our SGI mode-1 always achieves smaller variances in both parts than $\Delta$GI. In theory, the smaller the variance of the background part, the more accurate the denoising; and the smaller the variance of the object part, the more accurate the acquisition of the object information. In addition, as the measurement number increases, the variances gradually decrease, in line with statistical law. Therefore, the curves further demonstrate the denoising advantage of our sequential-deviation method in real-time imaging and prove that SGI outperforms $\Delta$GI.

Fig. 4. Comparisons of experimental $\Delta$GI (a)–(j) and SGI mode-1 (k)–(t) reconstructions, using 300 to 30,000 measurements. (u) and (v) are the variance curves of $\Delta$GI and SGI mode-1 recovered images as a function of the measurement number (from 500 to 5,500), corresponding to the background part (BP) and the object part (OP), respectively.
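A sketch of the normalization used in the variance comparison above (the ground-truth object mask is an assumed input) might be:

```python
import numpy as np

def part_variances(image, object_mask):
    """Rescale so the background mean is 0 and the object mean is 1,
    then return the variances of the two parts."""
    bp, op = image[~object_mask], image[object_mask]
    normalized = (image - bp.mean()) / (op.mean() - bp.mean())
    return normalized[~object_mask].var(), normalized[object_mask].var()
```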

In order to test the robustness of SGI (mode-1 for example; the other two modes behave similarly) against the temperature drift (TD) of the light source, we simulated three kinds of fluctuations acting directly on each measured reference pattern and each bucket value, resulting in changes to $S_{R}$, as shown in Fig. 5(a); a minimal sketch of this drift injection follows Fig. 5. Due to the limitations of our experimental instruments, we did not directly change the intensity of the light source, because the intensity of our source was constant; however, the final effects of the two implementations are the same, and with a light source offering programmable intensity adjustment, the intensity-change experiment could easily be carried out directly. In Fig. 5(a), the ordinate $S_{R}$ denotes the total light intensity of each pattern, which also characterizes the stability of the light source. For traditional intensity correlation functions like $G^{(2)}$ and $\Delta$GI, the TDs of the source have a great impact on the final results; ultimately, these algorithms are not robust to fluctuations of the $S_{R}$ values. Here, the reconstructions of SGI mode-1 and $G_{CI_+}-G_{CI_-}$ under three kinds of TDs are presented in Figs. 5(b)–5(d) and 5(e)–5(g), all obtained from $N=16,384$ measurements. From these graphs, we can clearly see that our SGI mode-1 method removes the effect of source instability under the different TD curves of $S_{R}$ and maintains a high PSNR value, while CI is strongly affected by the different TDs. The reason is that SGI mode-1 involves the pairwise subtraction of successive adjacent patterns, which makes its modified $S_{R}$ fluctuate slightly around zero and leads to good stability. In CI, by contrast, the statistical average function fails to take the $S_{R}$ term into account, so CI is sensitive to the drastic light fluctuations caused by source instability. Since DGI considers the influence of $S_{R}$ in its subtracted term, it is also robust against TD. Therefore, our SGI method provides another alternative for eliminating the influence of light instability.

Fig. 5. Reconstructed results under different kinds of temperature drift of the light source. (a) shows three different kinds of temperature drifts (TD-1, TD-2 and TD-3). (b)–(d) and (e)–(g) are the restored images of SGI mode-1 and $G_{CI_+}-G_{CI_-}$ under three kinds of temperature drifts of the light source, respectively.
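A minimal sketch of the drift injection described above (the drift curve is an assumed multiplicative factor acting on every pattern and bucket value) is:

```python
import numpy as np

def apply_drift(patterns, buckets, drift):
    """drift: shape (N,), e.g. a slow ramp 1 + 0.5 * np.linspace(0, 1, N)."""
    return patterns * drift[:, None, None], buckets * drift
```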

4. Discussion

Now, let us understand SGI from the standpoint of probability theory, whereby its imaging principle becomes very clear. Suppose that the object and the reference patterns are all divided into $m$ units (i.e., pixels) according to the size of the speckles, where $m$ is the total number of pixels in both the object and reference planes. For mathematical simplicity, we also assume that any two units (pixels) are statistically independent of each other, and denote the gray-scale value of the $i$th object pixel by $d_{i}$ ($i=1,2,\ldots ,\;m$), where $i$ indicates the spatial coordinate of the pixel; the pattern intensity at each pixel follows the identical distribution $I$. For the $k$th pattern, the random variables $I_{1,\;k}$, $I_{2,\;k}$, $\ldots$, $I_{m,\;k}$ are independently and identically distributed (IID).

Here, we assume that $I_{R_{i,\;k}}$ stands for the gray-scale value of the $i$th pixel of the $k$th reference pattern. The $k$th bucket value and the total intensity of the $k$th reference pattern can be defined as $S_{B_k}=\sum _{i=1}^{m}d_{i}I_{R_{i,\;k}}=d_{j}I_{R_{j,\;k}}+\sum _{i\neq j}d_{i}I_{R_{i,\;k}}$ and $S_{R_k}=\sum _{i=1}^{m}I_{R_{i,\;k}}$, respectively. Then, we have $G^{(2)}(j)=\frac {1}{n}\sum _{k=1}^{n}S_{B_{k}}I_{R_{j,\;k}}=\frac {1}{n}\sum _{k=1}^{n}(d_{j}I_{R_{j,\;k}}+\sum _{i\neq j}d_{i}I_{R_{i,\;k}})I_{R_{j,\;k}}$ and $G^{(2)}_{both}(j)=\frac {1}{n-1}\sum _{k=1}^{n-1}(S_{B_{k+1}}-S_{B_{k}})(I_{R_{j,\;k+1}}-I_{R_{j,\;k}})$. For $G^{(2)}$, the mathematical expectation and variance of the $j$th pixel are written as

$$\begin{aligned} E(S_{B_k}I_{R_{j,\;k}})&=E[(d_{j}I_{R_{j,\;k}}+\sum_{i\neq j}d_{i}I_{R_{i,\;k}})I_{R_{j,\;k}}]\\ &=E(d_{j}I_{R_{j,\;k}}I_{R_{j,\;k}}+\sum_{i\neq j}d_iI_{R_{i,\;k}}I_{R_{j,\;k}})\\ &=E(d_jI_{R_{j,\;k}}I_{R_{j,\;k}})+E(\sum_{i\neq j}d_iI_{R_{i,\;k}}I_{R_{j,\;k}})\\ &=d_{j}E(I^2)+\sum_{i\neq j}d_{i}E(I)^2\\ &=d_{j}[E(I^2)-E(I)^2]+\sum d_{i}E(I)^{2}\\ &=const_1d_{j}+const_2=\mu_{1}, \end{aligned}$$
$$D(S_{B_k}I_{R_{j,\;k}})=E(S_{B_k}^{2}I_{R_{j,\;k}}^{2})-E(S_{B_k}I_{R_{j,\;k}})^{2}=\sigma^{2}_{1}.$$
For SGI mode-1, the mathematical expectation and the variance of the $j$th pixel are given by
$$\begin{aligned} &E[(S_{B_{k+1}}-S_{B_{k}})(I_{R_{j,\;k+1}}-I_{R_{j,\;k}})]\\ =&E(S_{B_{k+1}}I_{R_{j,\;k+1}})+E(S_{B_{k}}I_{R_{j,\;k}})-E(S_{B_{k+1}}I_{R_{j,\;k}})-E(S_{B_{k}}I_{R_{j,\;k+1}})\\ =&2[E(S_{B_k}I_{R_{j,\;k}})-E(S_{B_k})E(I_{R_{j,\;k}})]\\ =&2d_{j}(E(I^2)-E(I)^{2})\\ =&const_3d_{j}=\mu_{2}, \end{aligned}$$
$$\begin{aligned} &D[(S_{B_{k+1}}-S_{B_{k}})(I_{R_{j,\;k+1}}-I_{R_{j,\;k}})]\\ =&D[S_{B_{k+1}}I_{R_{j,\;k+1}}+S_{B_{k}}I_{R_{j,\;k}}-S_{B_{k+1}}I_{R_{j,\;k}}-S_{B_{k}}I_{R_{j,\;k+1}}]\\ =&2E(S_{B_k}^{2}I_{R_{j,\;k}}^{2})+2E(S_{B_k}^{2})E(I_{R_{j,\;k}}^{2})-4E(S_{B_k})^{2}E(I_{R_{j,\;k}})^{2}-4E(S_{B_k})E(S_{B_k}I_{R_{j,\;k}}^{2})\\ &+8E(S_{B_k}I_{R_{j,\;k}})E(S_{B_k})E(I_{R_{j,\;k}})-4E(S_{B_k}^{2}I_{R_{j,\;k}})E(I_{R_{j,\;k}})\\ =&\sigma^{2}_{2}. \end{aligned}$$
Therefore, $D(G^{(2)}(j))=D(S_{B_k}I_{R_{j,\;k}})/n=\sigma ^{2}_{1}/n$ and $D(G^{(2)}_{both}(j))=D[(S_{B_{k+1}}-S_{B_{k}})(I_{R_{j,\;k+1}}-I_{R_{j,\;k}})]/(n-1)=\sigma ^{2}_{2}/(n-1)$. Note that in the derivation of Eq. (12) we used the formula $D(X+Y)=D(X)+D(Y)+2Cov(X,Y)$, where $X$ and $Y$ represent two random variables, and $Cov(X,Y)$ denotes their covariance.

As we know, an IID random variable $X$ generally has a mathematical expectation and variance, i.e., $E(X)=\mu$, $D(X)=\sigma ^{2}$. According to the central limit theorem in mathematical statistics, for any $x$, we have $\mathop {\lim }_{n\to \infty }P\{\frac {\sum _{k=1}^{n}X_k-n\mu }{\sigma \sqrt {n}}\leq x\}=\frac {1}{\sqrt {2\pi }}\int _{-\infty }^{x}e^{-\frac {t^2}{2}}dt=\Phi (x)$, where $\Phi (x)$ denotes the standard normal distribution function. The theorem states that when $n$ is very large, the random variable $Y_{n}=\sum _{k=1}^{n}X_{k}$ approximately obeys the normal distribution $N(n\mu ,\;n\sigma ^{2})$.

Based on Eqs. (9)–(12), let $X_k=S_{B_k}I_{R_{j,\;k}}$; then $Y_{n}=\sum _{k=1}^{n}X_{k}$ (i.e., $nG^{(2)}(j)$) theoretically obeys the normal distribution, according to the central limit theorem. As for SGI mode-1, let $X_k=(S_{B_{k+1}}-S_{B_{k}})(I_{R_{j,\;k+1}}-I_{R_{j,\;k}})$; any two of them that are far enough apart are clearly independent of each other. Now consider adjacent elements, taking $X_1$ and $X_2$ as an example: $X_1=(S_{B_2}-S_{B_1})(I_{R_{j,2}}-I_{R_{j,1}})=S_{B_2}I_{R_{j,2}}-S_{B_2}I_{R_{j,1}}-S_{B_1}I_{R_{j,2}}+S_{B_1}I_{R_{j,1}}=S_{B_1}I_{R_{j,1}}+c_1$ and $X_2=(S_{B_3}-S_{B_2})(I_{R_{j,3}}-I_{R_{j,2}})=S_{B_3}I_{R_{j,3}}-S_{B_3}I_{R_{j,2}}-S_{B_2}I_{R_{j,3}}+S_{B_2}I_{R_{j,2}}=S_{B_3}I_{R_{j,3}}+c_2$. Although $c_1$ and $c_2$ have a certain relevance, even if they were exactly equal to each other, $S_{B_1}I_{R_{j,1}}$ and $S_{B_3}I_{R_{j,3}}$ are totally random, so the neighbouring elements $X_1$ and $X_2$ are also independent of each other. Hence $Y_n=\sum _{k=1}^{n-1}X_{k}$ (i.e., $(n-1)G^{(2)}_{both}(j)$) also obeys the normal distribution for large $n$, according to the central limit theorem.

Now, we perform a statistical analysis of the simulation results to verify the above theory. For simplicity, we use an original 0-1 binary image of $128\times 128$ pixels, as shown in Fig. 6(a). The pixel coordinates are the same in the reference arm and the object arm, and we divide them into two sets, one for bright pixels (1) and the other for dark pixels (0), which we call the object part and the background part. Then, we separately computed the probability of the recovered pixel values falling in these two sets for both $G^{(2)}$ and SGI mode-1 and plotted the corresponding probability density curves, compared with their Gaussian theoretical curves, as shown in Figs. 6(b)–6(c); a sketch of this comparison is given after Fig. 6. The Gaussian theoretical curves are obtained from the computed theoretical mean and variance. The abscissa of these graphs is the pixel value of the reconstructed images, while the ordinate indicates the probability of occurrence of these values. The statistical data presented here prove that the recovered pixel values obey the normal distribution, consistent with the theory. It can also be clearly seen that the probability density curves (both statistical data and theoretical curves) of $G^{(2)}$ for the two coordinate sets almost completely coincide, while those of SGI mode-1 are clearly separated. This is why SGI mode-1 outperforms $G^{(2)}$.

Fig. 6. Probability density function of recovered pixel values of $G^{(2)}$, SGI mode-1, mode-2 and mode-3. (a) is an original binary image; (b)–(g) are the probability density distributions and the Gaussian theoretical curves of recovered pixel values falling in the object part (1) and the background part (0), of $G^{(2)}$, SGI mode-1, mode-2 $G_{B_+}^{(2)}$, mode-2 $G_{B_-}^{(2)}$, mode-3 $G_{R_+}^{(2)}$ and mode-3 $G_{R_-}^{(2)}$, respectively. The asterisks and circles represent the simulation data (SD), and the solid lines stand for the theoretical curves (TCs).
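The statistical check above can be sketched as follows (illustrative names; the Gaussian is parameterized by the sample mean and standard deviation of each part):

```python
import numpy as np

def empirical_vs_gaussian(image, object_mask, bins=50):
    """Histogram the recovered pixel values of each part together with
    the matching Gaussian density, for plotting against each other."""
    curves = {}
    for name, values in (("object", image[object_mask]),
                         ("background", image[~object_mask])):
        hist, edges = np.histogram(values, bins=bins, density=True)
        centers = 0.5 * (edges[:-1] + edges[1:])
        mu, sigma = values.mean(), values.std()
        gauss = np.exp(-(centers - mu) ** 2 / (2 * sigma ** 2)) \
                / (sigma * np.sqrt(2 * np.pi))
        curves[name] = (centers, hist, gauss)
    return curves
```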

Next, we offer an explanation of how the positive and negative images form. It seems that CI does not need to multiply each pattern by its weight to obtain images, requiring only the conditional averaging of some of the reference patterns. But in fact the weights are completely binarized to 1 and 0 for acquiring positive images (or 0 and 1 for negative images): the ensemble average is subtracted from the diverse bucket weights and the result is binarized, so this subtraction generates a positive-negative distribution of weights. As for our SGI mode-2 and mode-3, also called bucket and pattern sequential-deviations (successive deviations) respectively, they subtract from either the bucket values or the reference patterns their adjacent, uncorrelated counterparts. In this way, the bucket or pattern signal acquires a positive-negative distribution of values, similar to the process of CI, but without binarization. Another notable feature is that SGI mode-2 and mode-3 also shift the direct current (DC) background to 0 via sequential deviations, whereas both $G_{CI_+}$ and $G_{CI_-}$ fail to remove the background noise. Thereby, the image quality of SGI mode-2 and mode-3 under the same number of measurements is better than that of CI. Note that $G^{(2)}_{both}=G^{(2)}_{B}=G^{(2)}_{R}$, where $G^{(2)}_{B}=G^{(2)}_{B_+}-G^{(2)}_{B_-}$ and $G^{(2)}_{R}=G^{(2)}_{R_+}-G^{(2)}_{R_-}$; thus SGI mode-1 can be regarded as a combination of the two forms in SGI mode-2 or mode-3.

Similar to Eqs. (9)–(12), we can also derive the means and variances of SGI mode-2 and mode-3. Coincidentally, the mean of $(S_{B_{k+1}}-S_{B_k})I_{R_{j,\;k+1}}$ equals that of $S_{B_{k+1}}(I_{R_{j,\;k+1}}-I_{R_{j,\;k}})$, while the mean of $(S_{B_{k+1}}-S_{B_k})I_{R_{j,\;k}}$ equals that of $S_{B_{k}}(I_{R_{j,\;k+1}}-I_{R_{j,\;k}})$. They are

$$\begin{aligned} E[(S_{B_{k+1}}-S_{B_{k}})I_{R_{j,\;k+1}}]&=E[S_{B_{k+1}}(I_{R_{j,\;k+1}}-I_{R_{j,\;k}})]\\ &=E(S_{B_{k+1}}I_{R_{j,\;k+1}})-E(S_{B})E(I)\\ &=d_{j}[E(I^{2})-E(I)^{2}], \end{aligned}$$
$$\begin{aligned} E[(S_{B_{k+1}}-S_{B_{k}})I_{R_{j,\;k}}]&=E[S_{B_{k}}(I_{R_{j,\;k+1}}-I_{R_{j,\;k}})]\\ &=E(S_{B})E(I)-E(S_{B_{k}}I_{R_{j,\;k}})\\ &={-}d_{j}[E(I^{2})-E(I)^{2}]. \end{aligned}$$
From the above equations, we can see that Eqs. (13) and (14) differ from each other only by a sign, which explains why positive-negative images are produced in SGI mode-2 and mode-3. We can also deduce the corresponding variance formulas
$$\begin{aligned} &D[(S_{B_{k+1}}-S_{B_{k}})I_{R_{j,\;k+1}}]\\ =&D(S_{B_{k+1}}I_{R_{j,\;k+1}})+D(S_{B_{k}}I_{R_{j,\;k+1}})-2Cov(S_{B_{k+1}}I_{R_{j,\;k+1}},S_{B_{k}}I_{R_{j,\;k+1}})\\ =&D(S_{B_{k+1}}I_{R_{j,\;k+1}})+E(S_{B}^{2})E(I^{2})-E(S_{B})^{2}E(I)^{2}\\ &-2[E(S_{B_{k+1}}I_{R_{j,\;k+1}}^{2})E(S_{B})-E(S_{B_{k+1}}I_{R_{j,\;k+1}})E(S_{B})E(I)], \end{aligned}$$
$$\begin{aligned} &D[(S_{B_{k+1}}-S_{B_{k}})I_{R_{j,\;k}}]\\ =&D(S_{B_{k+1}}I_{R_{j,\;k}})+D(S_{B_{k}}I_{R_{j,\;k}})-2Cov(S_{B_{k+1}}I_{R_{j,\;k}},S_{B_{k}}I_{R_{j,\;k}})\\ =&E(S_{B}^{2})E(I^{2})-E(S_{B})^{2}E(I)^{2}+D(S_{B_{k}}I_{R_{j,\;k}})\\ &-2[E(S_{B_{k}}I_{R_{j,\;k}}^{2})E(S_{B})-E(S_{B_{k}}I_{R_{j,\;k}})E(S_{B})E(I)], \end{aligned}$$
$$\begin{aligned} &D[S_{B_{k+1}}(I_{R_{j,\;k+1}}-I_{R_{j,\;k}})]\\ =&D(S_{B_{k+1}}I_{R_{j,\;k+1}})+D(S_{B_{k+1}}I_{R_{j,\;k}})-2Cov(S_{B_{k+1}}I_{R_{j,\;k+1}},S_{B_{k+1}}I_{R_{j,\;k}})\\ =&D(S_{B_{k+1}}I_{R_{j,\;k+1}})+E(S_{B}^{2})E(I^{2})-E(S_{B})^{2}E(I)^{2}\\ &-2[E(S_{B_{k+1}}^{2}I_{R_{j,\;k+1}})E(I)-E(S_{B_{k+1}}I_{R_{j,\;k+1}})E(S_{B})E(I)], \end{aligned}$$
$$\begin{aligned} &D[S_{B_{k}}(I_{R_{j,\;k+1}}-I_{R_{j,\;k}})]\\ =&D(S_{B_{k}}I_{R_{j,\;k+1}})+D(S_{B_{k}}I_{R_{j,\;k}})-2Cov(S_{B_{k}}I_{R_{j,\;k+1}},S_{B_{k}}I_{R_{j,\;k}})\\ =&E(S_{B}^{2})E(I^{2})-E(S_{B})^{2}E(I)^{2}+D(S_{B_{k}}I_{R_{j,\;k}})\\ &-2[E(S_{B_{k}}^{2}I_{R_{j,\;k}})E(I)-E(S_{B_{k}}I_{R_{j,\;k}})E(S_{B})E(I)]. \end{aligned}$$
Their probability density distributions are given in Figs. 6(d)–6(e) and 6(f)–6(g), again compared with the theoretical Gaussian curves. From these curves, we can visually see the differences in the probability density distributions between the positive and negative images of SGI mode-2 and mode-3. In Figs. 6(d) and 6(f), the probability density curves calculated from the background part of the reconstructed image (simulation data 0), together with their theoretical curves (TCs 0), all lie to the left of the probability density curves computed from the object part of the recovered image (simulation data 1) and the corresponding TCs 1. The results of Figs. 6(e) and 6(g) are just the opposite. These comparisons are well consistent with the physical theory. In addition, we also calculated the reconstructed pixel mean values of the background part and the object part, together with the theoretical curves obtained from Eqs. (13) and (14), as shown in Figs. 7(a) and 7(b). The straight line with a positive (or negative) slope represents the functional relationship between the reconstructed pixel means of the positive (or negative) images calculated by Eq. (13) (or Eq. (14)) and the gray-scale value $d_j$ of the object. We can see from Fig. 7 that the data points of the reconstructed pixel averages in the object part and the background part fall right on the theoretical curves. To better present the data, we add error bars to each data point; the height of each error bar indicates the standard deviation of that point. It is worth mentioning that, according to Eqs. (13) and (14), if a gray-scale object is used, the reconstructed pixel mean computed from each region of the same original gray-scale value will also fall on these straight lines.

Fig. 7. Data analysis charts of SGI mode-2 (a) and mode-3 (b) reconstructions, where the abscissa represents the gray-scale value of the original object (here we use a binary object image with its gray-scale value being either 0 or 1), and the ordinate indicates the reconstructed pixel mean. The asterisks and circles denote the reconstructed pixel mean values calculated from the object part and the background part, respectively.

Recalling $\Delta G^{(2)}$ or DGI, the role of deducting $\left \langle S_B\right \rangle \left \langle I_R(x_R)\right \rangle$ or $\frac {\left \langle S_B\right \rangle }{\left \langle S_R\right \rangle }\left \langle S_RI_R(x_R)\right \rangle$ is to shift the mean of the background part of the recovered images to almost 0 and to reduce the influence of the DC component of the background noise. From this perspective, our SGI method provides an alternative solution for noise reduction. Different from CI, our SGI approach realizes data acquisition accompanied by real-time reconstruction. Although the imaging quality of SGI is slightly inferior to that of DGI for the same number of measurements in simulation, in actual experimental measurements the difference in reconstruction performance between them is very small. Besides, our SGI method provides an alternative way to form positive and negative images. Furthermore, the three modes of SGI may have many potential applications, such as secure communication and optical encryption, which will be the focus of our future work. Since each mode has its own characteristics, one can choose a specific mode depending on the needs; thus it is difficult to say which mode is the best.

As we know, GI and compressed sensing (CS) share the same mathematical measurement model, relying upon the spatial correlation between the modulated patterns and the single-pixel intensities. Applying a CS algorithm to pseudo-thermal GI data can significantly improve the quality of the recovered images with far fewer measurements than GI requires, a technique called compressive GI [34]. Thus, we also investigated whether our method can be processed by CS. Here, we use the TVAL3 algorithm [35] and a random matrix uniformly distributed over the interval (0, 1) for the CS reconstruction; a sketch of how the sequential-deviation data feed the CS model is given after Fig. 8. The sampling ratio used here is 10%. As shown in Fig. 8, compared with the conventional compressive GI result (see Fig. 8(a)), we give the reconstructed results of sequential-deviation CS (SCS), also with three modes (see Figs. 8(b)–8(f)). From the results, it can be seen that SCS mode-1 reconstructs the object information almost perfectly, with an image quality far superior to that of compressive GI. Combining SGI mode-2 with the CS algorithm to generate positive and negative images is feasible (see Figs. 8(c)–8(d)), but with SGI mode-3 we are unable to acquire positive and negative images (see Figs. 8(e)–8(f)). This is because CS is strictly based on linear equations and is more sensitive to inconsistency in the patterns than to inconsistency in the single-pixel (bucket) values. Even so, these variants give us more choices for image reconstruction.

Fig. 8. Reconstructions using SCS methods. (a) and (b) are the recovered images of conventional compressive GI and SCS mode-1. (c)–(d) are the positive and negative images restored by SCS mode-2. (e)–(f) are the reconstructed results of SCS mode-3, applying the CS algorithm to the data processed by SGI mode-3 $G^{(2)}_{R_+}$ and $G^{(2)}_{R_-}$.
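For clarity, the following hedged sketch shows how the three sequential-deviation measurement systems can be assembled before being handed to a TV-regularized solver such as TVAL3 (the solver itself is not reproduced here; `A` and `y` are illustrative names for the flattened pattern matrix and the bucket vector):

```python
import numpy as np

def scs_systems(A, y):
    """A: (N, m) flattened patterns; y: (N,) bucket values."""
    dA, dy = np.diff(A, axis=0), np.diff(y)
    mode1 = (dA, dy)     # consistent linear model: dy = dA @ x
    mode2 = (A[1:], dy)  # bucket deviations only (approximate model)
    mode3 = (dA, y[1:])  # pattern deviations only; the model mismatch
                         # explains why SCS mode-3 fails to image
    return mode1, mode2, mode3
```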

5. Conclusion

In summary, we have presented an SGI method with three modes based on a simple sequential-deviation strategy. Among them, SGI mode-1 acquires an image quality close to that of DGI with less memory consumption, while SGI mode-2 and mode-3 recover positive-negative images like CI, but without the need to compare bucket values with the ensemble average. In these modes, complementary pattern modulation is no longer necessary. Since our method reconstructs images through a gradual addition process, it realizes data acquisition accompanied by real-time reconstruction, i.e., the sampling can be terminated at any time once the image quality satisfies the requirements. We have demonstrated the feasibility of this technique with both numerical simulation and a double-arm lensless GI setup. On the basis of probability theory, we have provided a brief explanation of the SGI principle, offering new insights into the physical nature of forming positive and negative ghost images. The possibility of combining SGI with CS has also been investigated. We believe that this technology will be easily extended to computational GI schemes in the near future and will enable many practical applications with real-time requirements.

Funding

Natural Science Foundation of Beijing Municipality (4184098); National Natural Science Foundation of China (61801022); National Key Research and Development Program of China (2016YFE0131500); Civil Space Project of China (D040301); International Science and Technology Cooperation Special Project of Beijing Institute of Technology (GZ2018185101); Beijing Excellent Talents Cultivation Project - Youth Backbone Individual Project (none).

Disclosures

The authors declare no conflicts of interest.

References

1. T. B. Pittman, Y. H. Shih, D. V. Strekalov, and A. V. Sergienko, “Optical imaging by means of two-photon quantum entanglement,” Phys. Rev. A 52(5), R3429–R3432 (1995).

2. D. Zhang, Y.-H. Zhai, and L.-A. Wu, “Correlated two-photon imaging with true thermal light,” Opt. Lett. 30(18), 2354–2356 (2005).

3. F. Ferri, D. Magatti, A. Gatti, M. Bache, E. Brambilla, and L. A. Lugiato, “High-resolution ghost image and ghost diffraction experiments with thermal light,” Phys. Rev. Lett. 94(18), 183602 (2005).

4. J. Xiong, D.-Z. Cao, F. Huang, H.-G. Li, X.-J. Sun, and K. Wang, “Experimental observation of classical subwavelength interference with a pseudothermal light source,” Phys. Rev. Lett. 94(17), 173601 (2005).

5. J. H. Shapiro, “Computational ghost imaging,” Phys. Rev. A 78(6), 061802 (2008).

6. Y. Bromberg, O. Katz, and Y. Silberberg, “Ghost imaging with a single detector,” Phys. Rev. A 79(5), 053840 (2009).

7. J. A. Decker, “Hadamard-transform image scanning,” Appl. Opt. 9(6), 1392–1395 (1970).

8. N. Huynh, E. Zhang, M. Betcke, S. Arridge, P. Beard, and B. Cox, “Single-pixel optical camera for video rate ultrasonic imaging,” Optica 3(1), 26–29 (2016).

9. Z. Zhang, X. Ma, and J. Zhong, “Single-pixel imaging by means of Fourier spectrum acquisition,” Nat. Commun. 6(1), 6225 (2015).

10. Z. Zhang, X. Wang, G. Zheng, and J. Zhong, “Hadamard single-pixel imaging versus Fourier single-pixel imaging,” Opt. Express 25(16), 19619–19639 (2017).

11. W.-K. Yu, A.-D. Xiong, X.-R. Yao, G.-J. Zhai, and Q. Zhao, “Efficient phase retrieval based on dark fringe extraction and phase pattern construction with a good anti-noise capability,” Opt. Commun. 402, 413–421 (2017).

12. M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Signal Process. Mag. 25(2), 83–91 (2008).

13. M. P. Edgar, G. M. Gibson, and M. J. Padgett, “Principles and prospects for single-pixel imaging,” Nat. Photonics 13(1), 13–20 (2019).

14. H.-C. Liu, B. Yang, Q. Guo, J. Shi, C. Guan, G. Zheng, H. Mühlenbernd, G. Li, T. Zentgraf, and S. Zhang, “Single-pixel computational ghost imaging with helicity-dependent metasurface hologram,” Sci. Adv. 3(9), e1701477 (2017).

15. K. Shibuya, T. Minamikawa, Y. Mizutani, H. Yamamoto, K. Minoshima, T. Yasui, and T. Iwata, “Scan-less hyperspectral dual-comb single-pixel-imaging in both amplitude and phase,” Opt. Express 25(18), 21947–21957 (2017).

16. G. Musarra, A. Lyons, E. Conca, Y. Altmann, F. Villa, F. Zappa, M. J. Padgett, and D. Faccio, “Non-line-of-sight three-dimensional imaging with a single-pixel camera,” Phys. Rev. Appl. 12(1), 011002 (2019).

17. H. Yu, R. Lu, S. Han, H. Xie, G. Du, T. Xiao, and D. Zhu, “Fourier-transform ghost imaging with hard X rays,” Phys. Rev. Lett. 117(11), 113901 (2016).

18. A.-X. Zhang, Y.-H. He, L.-A. Wu, L.-M. Chen, and B.-B. Wang, “Tabletop x-ray ghost imaging with ultra-low radiation,” Optica 5(4), 374–377 (2018).

19. A. Gatti, E. Brambilla, M. Bache, and L. A. Lugiato, “Ghost imaging with thermal light: comparing entanglement and classical correlation,” Phys. Rev. Lett. 93(9), 093602 (2004).

20. F. Ferri, D. Magatti, L. A. Lugiato, and A. Gatti, “Differential ghost imaging,” Phys. Rev. Lett. 104(25), 253603 (2010).

21. W.-K. Yu, M.-F. Li, X.-R. Yao, X.-F. Liu, L.-A. Wu, and G.-J. Zhai, “Adaptive compressive ghost imaging based on wavelet trees and sparse representation,” Opt. Express 22(6), 7133–7144 (2014).

22. X.-R. Yao, W.-K. Yu, X.-F. Liu, L.-Z. Li, M.-F. Li, L.-A. Wu, and G.-J. Zhai, “Iterative denoising of ghost imaging,” Opt. Express 22(20), 24268–24275 (2014).

23. A. M. Paniagua-Diaz, I. Starshynov, N. Fayard, A. Goetschy, R. Pierrat, R. Carminati, and J. Bertolotti, “Blind ghost imaging,” Optica 6(4), 460–464 (2019).

24. L.-A. Wu and K.-H. Luo, “Two-photon imaging with entangled and thermal light,” AIP Conf. Proc. 1384(1), 223–228 (2011).

25. K.-H. Luo, B.-Q. Huang, W.-M. Zheng, and L.-A. Wu, “Nonlocal imaging by conditional averaging of random reference measurements,” Chin. Phys. Lett. 29(7), 074216 (2012).

26. J. Wen, “Forming positive-negative images using conditioned partial measurements from reference arm in ghost imaging,” J. Opt. Soc. Am. A 29(9), 1906–1911 (2012).

27. W.-K. Yu, X.-R. Yao, X.-F. Liu, L.-Z. Li, and G.-J. Zhai, “Ghost imaging based on Pearson correlation coefficients,” Chin. Phys. B 24(5), 054203 (2015).

28. M.-J. Sun, M.-F. Li, and L.-A. Wu, “Nonlocal imaging of a reflective object using positive and negative correlations,” Appl. Opt. 54(25), 7494–7499 (2015).

29. W.-K. Yu, X.-F. Liu, X.-R. Yao, C. Wang, Y. Zhai, and G.-J. Zhai, “Complementary compressive imaging for the telescopic system,” Sci. Rep. 4(1), 5834 (2015).

30. W.-K. Yu, X.-R. Yao, X.-F. Liu, L.-Z. Li, and G.-J. Zhai, “Compressive moving target tracking with thermal light based on complementary sampling,” Appl. Opt. 54(13), 4249–4254 (2015).

31. K. M. Czajkowski, A. Pastuszczak, and R. Kotyński, “Single-pixel imaging with sampling distributed over simplex vertices,” Opt. Lett. 44(5), 1241–1244 (2019).

32. W.-K. Yu, X.-R. Yao, X.-F. Liu, L.-Z. Li, and G.-J. Zhai, “Three-dimensional single-pixel compressive reflectivity imaging based on complementary modulation,” Appl. Opt. 54(3), 363–367 (2015).

33. W.-K. Yu, X.-R. Yao, X.-F. Liu, R.-M. Lan, L.-A. Wu, and G.-J. Zhai, “Compressive microscopic imaging with “positive-negative” light modulation,” Opt. Commun. 371, 105–111 (2016).

34. O. Katz, Y. Bromberg, and Y. Silberberg, “Compressive ghost imaging,” Appl. Phys. Lett. 95(13), 131110 (2009).

35. C. B. Li, “An efficient algorithm for total variation regularization with applications to the single pixel camera and compressive sensing,” M.Sc. thesis (Rice University, 2010).



Figures (8)

Fig. 1. Simulation results of $\Delta G^{(2)}$, DGI, CI, and SGI mode-1, mode-2 and mode-3. (a) is the original picture of $128\times 128$ pixels. (b)–(d) are the recovered results of $\Delta G^{(2)}$, DGI and SGI mode-1. (e)–(f) are the CI positive-negative images, i.e., $G_{CI_+}$ and $G_{CI_-}$. (g)–(h) are the retrieved positive-negative images of SGI mode-2, i.e., $G^{(2)}_{B_+}$ and $G^{(2)}_{B_-}$, and (i)–(j) are the restored positive-negative images of SGI mode-3, i.e., $G^{(2)}_{R_+}$ and $G^{(2)}_{R_-}$.

Fig. 2. Schematic diagram of the double-arm lensless GI setup.

Fig. 3. Experimental results. (a) is the light field distribution on the photosensitive surface of the bucket detector in the object arm, i.e., the interaction result of one speckle pattern with the light field of the object. (b)–(d) are the recovered results of $\Delta G^{(2)}$, DGI and SGI mode-1; (e)–(f), (g)–(h) and (i)–(j) are the positive and negative ghost images of CI, SGI mode-2 and mode-3, respectively; (k) gives a cross-section plot of the gray-scale images in (c) and (d), taken along the same row of the recovered images (indicated by a red line), to make the image quality easier to compare. (l) is the cross-section plot of the retrieved gray-scale images (e), (g) and (i) for the same row, while (m) is the intensity profile of the reconstructed gray-scale images (f), (h) and (j). The horizontal coordinate $x$ of (k)–(m) denotes the column pixel index of the restored images.

Fig. 4. Comparisons of experimental $\Delta$GI (a)–(j) and SGI mode-1 (k)–(t) reconstructions, using 300 to 30,000 measurements. (u) and (v) are the variance curves of the $\Delta$GI and SGI mode-1 recovered images as a function of the measurement number (from 500 to 5,500), corresponding to the background part (BP) and the object part (OP), respectively.

Fig. 5. Reconstructed results under different kinds of temperature drift of the light source. (a) shows three different kinds of temperature drift (TD-1, TD-2 and TD-3). (b)–(d) and (e)–(g) are the restored images of SGI mode-1 and $G_{CI_+}-G_{CI_-}$ under the three kinds of temperature drift of the light source, respectively.

Fig. 6. Probability density function of the recovered pixel values of $G^{(2)}$, SGI mode-1, mode-2 and mode-3. (a) is the original binary image; (b)–(g) are the probability density distributions and the Gaussian theoretical curves of the recovered pixel values falling in the object part (1) and the background part (0), for $G^{(2)}$, SGI mode-1, mode-2 $G_{B_+}^{(2)}$, mode-2 $G_{B_-}^{(2)}$, mode-3 $G_{R_+}^{(2)}$ and mode-3 $G_{R_-}^{(2)}$, respectively. The asterisks and circles represent the simulation data (SD), and the solid lines stand for the theoretical curves (TCs).

Fig. 7. Data analysis charts of SGI mode-2 (a) and mode-3 (b) reconstructions, where the abscissa represents the gray-scale value of the original object (here a binary object image whose gray-scale value is either 0 or 1), and the ordinate indicates the reconstructed pixel mean. The asterisks and circles denote the reconstructed pixel means calculated from the object part and the background part, respectively.

Fig. 8. Reconstructions using SCS methods. (a) and (b) are the recovered images of conventional compressive GI and SCS mode-1. (c)–(d) are the positive and negative images restored by SCS mode-2. (e)–(f) are the reconstructed results of SCS mode-3, obtained by applying the CS algorithm to the data processed by SGI mode-3, i.e., $G^{(2)}_{R_+}$ and $G^{(2)}_{R_-}$.

Tables (1)

Table 1. Memory consumption of different methods

Equations (18)

$$G^{(2)} = \left\langle S_B I_R(x_R) \right\rangle ,$$

$$\Delta G^{(2)} = \left\langle \left( S_B - \langle S_B \rangle \right) \left( I_R(x_R) - \langle I_R(x_R) \rangle \right) \right\rangle = \langle S_B I_R(x_R) \rangle - \langle S_B \rangle \langle I_R(x_R) \rangle .$$

$$G_{DGI}^{(2)} = \left\langle \left( S_B - \frac{\langle S_B \rangle}{\langle S_R \rangle} S_R \right) \left( I_R(x_R) - \langle I_R(x_R) \rangle \right) \right\rangle = \langle S_B I_R(x_R) \rangle - \frac{\langle S_B \rangle}{\langle S_R \rangle} \langle S_R I_R(x_R) \rangle ,$$

$$\begin{cases} G_{CI_+} = \langle I_{R_+} \rangle , & \text{for } \left\{ S_{B_+} \,\middle|\, S_B - \langle S_B \rangle \ge 0 \right\} , \\ G_{CI_-} = \langle I_{R_-} \rangle , & \text{for } \left\{ S_{B_-} \,\middle|\, S_B - \langle S_B \rangle < 0 \right\} . \end{cases}$$
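For concreteness, the conditional averaging of the CI scheme above can be written in a few lines. The NumPy sketch below (the function name and array layout are our own assumptions, not the authors' code) makes the sampling requirement explicit: the selection criterion needs the ensemble mean of all bucket values before any pattern can be sorted.

```python
import numpy as np

def ci_reconstruct(patterns, buckets):
    """Correspondence-imaging sketch: conditionally average the reference
    patterns by the sign of the bucket-value deviation from its mean.

    patterns: (K, H, W) reference speckle patterns I_R
    buckets:  (K,) bucket values S_B
    """
    mean_b = buckets.mean()               # <S_B>: requires the full run
    positive = buckets - mean_b >= 0      # selection criterion
    g_pos = patterns[positive].mean(axis=0)    # positive image G_CI+
    g_neg = patterns[~positive].mean(axis=0)   # negative image G_CI-
    return g_pos, g_neg
```

Because `mean_b` is taken over the whole measurement run, CI cannot begin sorting patterns, let alone reconstructing, until full/over sampling is complete.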
$$\text{mode-1: } G_{both}^{(2)} = \left\langle \left( S_{B_{k+1}} - S_{B_k} \right) \left( I_{R_{k+1}} - I_{R_k} \right) \right\rangle ;$$

$$\text{mode-2: } \begin{cases} G_{B_+}^{(2)} = \left\langle \left( S_{B_{k+1}} - S_{B_k} \right) I_{R_{k+1}} \right\rangle , \\ G_{B_-}^{(2)} = \left\langle \left( S_{B_{k+1}} - S_{B_k} \right) I_{R_k} \right\rangle ; \end{cases}$$

$$\text{mode-3: } \begin{cases} G_{R_+}^{(2)} = \left\langle S_{B_{k+1}} \left( I_{R_{k+1}} - I_{R_k} \right) \right\rangle , \\ G_{R_-}^{(2)} = \left\langle S_{B_k} \left( I_{R_{k+1}} - I_{R_k} \right) \right\rangle . \end{cases}$$
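The three SGI modes translate directly into pairwise averages over consecutive measurements. Here is a minimal NumPy sketch under the same assumed array layout (again, our own naming, offered as an illustration rather than the authors' implementation):

```python
import numpy as np

def sgi_reconstruct(patterns, buckets, mode=1):
    """Sequential-deviation GI sketch for mode-1/2/3.

    patterns: (K, H, W) reference speckle patterns I_R
    buckets:  (K,) bucket values S_B
    Mode-1 returns a single image; modes 2 and 3 return a
    (positive, negative) image pair.
    """
    dS = buckets[1:] - buckets[:-1]      # S_{B,k+1} - S_{B,k}
    dI = patterns[1:] - patterns[:-1]    # I_{R,k+1} - I_{R,k}
    if mode == 1:
        return np.mean(dS[:, None, None] * dI, axis=0)            # G_both
    if mode == 2:
        return (np.mean(dS[:, None, None] * patterns[1:], axis=0),    # G_B+
                np.mean(dS[:, None, None] * patterns[:-1], axis=0))   # G_B-
    if mode == 3:
        return (np.mean(buckets[1:, None, None] * dI, axis=0),        # G_R+
                np.mean(buckets[:-1, None, None] * dI, axis=0))       # G_R-
    raise ValueError("mode must be 1, 2 or 3")
```

Every term involves only the current and the previous measurement, so the running averages can be updated as each new pattern-bucket pair arrives; no ensemble mean of the bucket values is ever needed, which is what permits real-time positive-negative reconstruction.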
$$\mathrm{PSNR} = 10 \log_{10} \left( 255^2 / \mathrm{MSE} \right) ,$$
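A direct transcription of this metric for 8-bit gray-scale images (the helper name is ours):

```python
import numpy as np

def psnr(reference, recovered):
    """Peak signal-to-noise ratio in dB, assuming a peak value of 255."""
    mse = np.mean((reference.astype(float) - recovered.astype(float)) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)
```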
$$\begin{aligned} E(S_{B_k} I_{R_{j,k}}) &= E\Big[\Big( d_j I_{R_{j,k}} + \sum_{i \neq j} d_i I_{R_{i,k}} \Big) I_{R_{j,k}}\Big] = E\Big( d_j I_{R_{j,k}} I_{R_{j,k}} + \sum_{i \neq j} d_i I_{R_{i,k}} I_{R_{j,k}} \Big) \\ &= E( d_j I_{R_{j,k}} I_{R_{j,k}} ) + E\Big( \sum_{i \neq j} d_i I_{R_{i,k}} I_{R_{j,k}} \Big) = d_j E(I^2) + \sum_{i \neq j} d_i E(I)^2 \\ &= d_j \left[ E(I^2) - E(I)^2 \right] + \sum_i d_i E(I)^2 = \mathrm{const}_1 \, d_j + \mathrm{const}_2 = \mu_1 , \end{aligned}$$

$$D(S_{B_k} I_{R_{j,k}}) = E(S_{B_k}^2 I_{R_{j,k}}^2) - E(S_{B_k} I_{R_{j,k}})^2 = \sigma_1^2 .$$

$$\begin{aligned} E[(S_{B_{k+1}} - S_{B_k})(I_{R_{j,k+1}} - I_{R_{j,k}})] &= E(S_{B_{k+1}} I_{R_{j,k+1}}) + E(S_{B_k} I_{R_{j,k}}) - E(S_{B_{k+1}} I_{R_{j,k}}) - E(S_{B_k} I_{R_{j,k+1}}) \\ &= 2 \left[ E(S_{B_k} I_{R_{j,k}}) - E(S_{B_k}) E(I_{R_{j,k}}) \right] = 2 d_j \left( E(I^2) - E(I)^2 \right) = \mathrm{const}_3 \, d_j = \mu_2 , \end{aligned}$$

$$\begin{aligned} D[(S_{B_{k+1}} - S_{B_k})(I_{R_{j,k+1}} - I_{R_{j,k}})] &= D[S_{B_{k+1}} I_{R_{j,k+1}} + S_{B_k} I_{R_{j,k}} - S_{B_{k+1}} I_{R_{j,k}} - S_{B_k} I_{R_{j,k+1}}] \\ &= 2 E(S_{B_k}^2 I_{R_{j,k}}^2) + 2 E(S_{B_k}^2) E(I_{R_{j,k}}^2) - 4 E(S_{B_k})^2 E(I_{R_{j,k}})^2 \\ &\quad - 4 E(S_{B_k}) E(S_{B_k} I_{R_{j,k}}^2) + 8 E(S_{B_k} I_{R_{j,k}}) E(S_{B_k}) E(I_{R_{j,k}}) \\ &\quad - 4 E(S_{B_k}^2 I_{R_{j,k}}) E(I_{R_{j,k}}) = \sigma_2^2 . \end{aligned}$$

$$E[(S_{B_{k+1}} - S_{B_k}) I_{R_{j,k+1}}] = E[S_{B_{k+1}} (I_{R_{j,k+1}} - I_{R_{j,k}})] = E(S_{B_{k+1}} I_{R_{j,k+1}}) - E(S_B) E(I) = d_j \left[ E(I^2) - E(I)^2 \right] ,$$

$$E[(S_{B_{k+1}} - S_{B_k}) I_{R_{j,k}}] = E[S_{B_k} (I_{R_{j,k+1}} - I_{R_{j,k}})] = E(S_B) E(I) - E(S_{B_k} I_{R_{j,k}}) = -d_j \left[ E(I^2) - E(I)^2 \right] .$$

$$\begin{aligned} D[(S_{B_{k+1}} - S_{B_k}) I_{R_{j,k+1}}] &= D(S_{B_{k+1}} I_{R_{j,k+1}}) + D(S_{B_k} I_{R_{j,k+1}}) - 2\,\mathrm{Cov}(S_{B_{k+1}} I_{R_{j,k+1}}, S_{B_k} I_{R_{j,k+1}}) \\ &= D(S_{B_{k+1}} I_{R_{j,k+1}}) + E(S_B^2) E(I^2) - E(S_B)^2 E(I)^2 \\ &\quad - 2 \left[ E(S_{B_{k+1}} I_{R_{j,k+1}}^2) E(S_B) - E(S_{B_{k+1}} I_{R_{j,k+1}}) E(S_B) E(I) \right] , \end{aligned}$$

$$\begin{aligned} D[(S_{B_{k+1}} - S_{B_k}) I_{R_{j,k}}] &= D(S_{B_{k+1}} I_{R_{j,k}}) + D(S_{B_k} I_{R_{j,k}}) - 2\,\mathrm{Cov}(S_{B_{k+1}} I_{R_{j,k}}, S_{B_k} I_{R_{j,k}}) \\ &= E(S_B^2) E(I^2) - E(S_B)^2 E(I)^2 + D(S_{B_k} I_{R_{j,k}}) \\ &\quad - 2 \left[ E(S_{B_k} I_{R_{j,k}}^2) E(S_B) - E(S_{B_k} I_{R_{j,k}}) E(S_B) E(I) \right] , \end{aligned}$$

$$\begin{aligned} D[S_{B_{k+1}} (I_{R_{j,k+1}} - I_{R_{j,k}})] &= D(S_{B_{k+1}} I_{R_{j,k+1}}) + D(S_{B_{k+1}} I_{R_{j,k}}) - 2\,\mathrm{Cov}(S_{B_{k+1}} I_{R_{j,k+1}}, S_{B_{k+1}} I_{R_{j,k}}) \\ &= D(S_{B_{k+1}} I_{R_{j,k+1}}) + E(S_B^2) E(I^2) - E(S_B)^2 E(I)^2 \\ &\quad - 2 \left[ E(S_{B_{k+1}}^2 I_{R_{j,k+1}}) E(I) - E(S_{B_{k+1}} I_{R_{j,k+1}}) E(S_B) E(I) \right] , \end{aligned}$$

$$\begin{aligned} D[S_{B_k} (I_{R_{j,k+1}} - I_{R_{j,k}})] &= D(S_{B_k} I_{R_{j,k+1}}) + D(S_{B_k} I_{R_{j,k}}) - 2\,\mathrm{Cov}(S_{B_k} I_{R_{j,k+1}}, S_{B_k} I_{R_{j,k}}) \\ &= E(S_B^2) E(I^2) - E(S_B)^2 E(I)^2 + D(S_{B_k} I_{R_{j,k}}) \\ &\quad - 2 \left[ E(S_{B_k}^2 I_{R_{j,k}}) E(I) - E(S_{B_k} I_{R_{j,k}}) E(S_B) E(I) \right] . \end{aligned}$$
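The first-moment results above are straightforward to sanity-check numerically. The toy Monte Carlo sketch below (our own construction: a 1-D "object" $d_j$ illuminated by negative-exponential speckle, for which $E(I^2) - E(I)^2 = 1$) verifies that the mode-1 estimator converges to $2 d_j [E(I^2) - E(I)^2]$:

```python
import numpy as np

rng = np.random.default_rng(0)
K, N = 200_000, 16                   # measurements, pixels of a toy 1-D object
d = rng.random(N)                    # hypothetical transmittance values d_j
I = rng.exponential(1.0, (K, N))     # pseudo-thermal speckle intensities
S = I @ d                            # bucket values S_{B,k} = sum_j d_j I_{R,j,k}

dS = np.diff(S)                      # sequential bucket deviations
dI = np.diff(I, axis=0)              # sequential pattern deviations
g1 = np.mean(dS[:, None] * dI, axis=0)            # mode-1 estimate per pixel
theory = 2 * d * (np.mean(I**2) - np.mean(I)**2)  # 2 d_j [E(I^2) - E(I)^2]
print(np.allclose(g1, theory, atol=0.1))          # typically prints True
```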