
Optimization of retina-like illumination patterns in ghost imaging


Abstract

Ghost imaging (GI) reconstructs images using a single-pixel or bucket detector, which offers the advantages of scattering robustness, wide spectral coverage, and beyond-visual-field imaging. However, the technique needs a large number of measurements to obtain a sharp image, and numerous methods have been proposed to overcome this disadvantage. Retina-like patterns, as one of the compressive sensing approaches, enhance the imaging quality of the region of interest (ROI) while keeping the number of measurements unchanged. The design of the retina-like patterns determines the performance of the ROI in the reconstructed image. Unlike the conventional method of filling the ROI with random patterns, this paper proposes optimizing retina-like patterns by filling the ROI with patterns that contain the sparsity prior of the objects. The proposed method is verified by simulations and experiments against conventional GI, retina-like GI, and GI using patterns optimized by principal component analysis, and it obtains the best imaging quality in the ROI among the compared methods. The good generalization capability of the optimized retina-like patterns is also verified. Feature information of the target can be acquired while designing the size and position of the ROI of the retina-like patterns, which in turn guides the optimization of the ROI pattern. The proposed method facilitates the realization of high-quality GI.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Multi-pixel detectors are used as light detection devices in conventional optical imaging systems. However, multi-pixel detectors may become expensive or impractical at some specific wavebands, such as the infrared or deep ultraviolet [1]. Pittman et al. first used entangled photon pairs to demonstrate the feasibility of ghost imaging (GI) in 1995 [2]. GI, also known as correlated or single-pixel imaging [3–8], provides object information by correlating illumination patterns of known light field distribution with a sequence of light intensities collected by a single-pixel or bucket detector. This unique imaging scheme, which endows GI with the advantages of scattering robustness, wide spectral coverage, and beyond-visual-field imaging, has been adopted in many related fields, including three-dimensional imaging [9–12], terahertz imaging [13–15], multispectral imaging [16–18], and imaging through scattering media [3,19,20].

In conventional GI, a large number of patterns must be projected onto the target to obtain high-quality images, which costs a considerable amount of time [21]. Balancing imaging quality and efficiency remains a major challenge for good GI performance. Most studies focus on two aspects: the modulation patterns of the illumination [22–24] and the reconstruction algorithm [25–27]. Nevertheless, the tradeoff between efficiency and quality is still a problem in region-of-interest (ROI) imaging [28]. Inspired by the foveated vision of the human eye, modulation patterns with retina-like structures have been proposed to obtain high imaging quality in the ROI while keeping the number of samples unchanged [28]. Retina-like patterns have attracted the attention of many scholars in recent years. Zhang et al. [29] proposed a model of three-dimensional GI combined with retina-like structures to improve imaging efficiency, realizing retina-like properties such as scaling and rotation invariance. Zhai et al. [30] introduced a foveated GI based on deep learning to realize the intelligent selection of the ROI; the intelligent ROI selection was achieved by applying generative adversarial networks based on the single-shot multibox detector architecture [31], and a higher PSNR in the ROI was obtained compared with uniform-resolution GI. Gao et al. [32] proposed a compressive GI called ROI-guided compressive sensing GI, in which the ROI is acquired through conscious eye control and previous imaging information from fast Fourier single-pixel imaging is used to improve the visual effect and imaging quality. A recent study presented parallel retina-like computational GI, which combines a multi-pixel detector with retina-like patterns and performs better than conventional parallel GI and retina-like GI (RGI) [33].

Despite the advantages of RGI, studies on the optimization of the retina-like illumination patterns are lacking. Previous work has concentrated on the location and size of the ROI or on the application of retina-like structures [29–32], whereas the patterns filled into the ROI of retina-like patterns have not been studied: the illumination patterns applied to retina-like structures are mostly random binary patterns, which perform poorly in acquiring object information. This paper therefore proposes improving the imaging quality of RGI by optimizing the retina-like patterns, namely by filling the ROI with patterns that contain the sparsity prior of the objects. Methods to obtain the sparsity prior of objects, such as imaging dictionaries and principal component analysis (PCA), have been proposed in previous single-pixel imaging studies [34]. PCA is selected here to generate patterns containing the sparsity prior of objects, and the retina-like patterns are then optimized to improve the imaging quality of the ROI. The optimized retina-like patterns are demonstrated in GI through simulations and experiments, and the imaging quality of the proposed scheme is evaluated against conventional GI, RGI, and GI based on PCA.

2. Principles

The principle of RGI using patterns optimized by PCA (PCA-RGI) is shown in Fig. 1. A sequence of optimized retina-like patterns is loaded onto the digital micromirror device (DMD) by a computer. Light from the source illuminates the DMD, and the reflected, pattern-modulated beam is projected onto the object to be imaged. The light reflected from or transmitted through the object is converged by a focusing lens onto the single-pixel detector. The light intensity is collected by a data acquisition board and then transmitted to the computer for reconstruction with the corresponding patterns.

Fig. 1. Principle of PCA-RGI.

The measurement principle of RGI can be described as follows:

$${I_t} = \sum\limits_{x,y} {{S_t}({x,y} )O({x,y} )}, $$
where It represents the light intensity collected by the single-pixel detector, and t is the time index; O(x, y) represents the object, and (x, y) represents the 2D Cartesian coordinates in the scene; St(x, y) represents the optimized retina-like patterns. The conventional reconstruction algorithm is the second-order correlation algorithm. However, compressive sensing (CS) algorithms have been proven to achieve better image reconstruction than conventional algorithms [35]. The total variation (TV) regularization prior algorithm [36] is selected in this paper for image reconstruction. Unlike the second-order correlation algorithm, the TV-based CS reconstruction algorithm casts the image reconstruction as a constrained optimization problem. The optimization model of TV-based CS is shown as follows:
$$\begin{array}{ll} \min &{||c ||_{{l_1}}}\\ s.t. &GO^{\prime} = c \\ &S^{\prime}O^{\prime} = I^{\prime} \end{array}$$
where G represents the gradient calculation matrix, and c is the corresponding coefficient vector; ${||\cdot ||_{{l_1}}}$ denotes the $l_1$ norm; $S^{\prime} \in {R^{T \times a}}$ represents the optimized retina-like patterns (T patterns are observed, and each pattern comprises a = x × y pixels); $O^{\prime} \in {R^{a \times 1}}$ represents the object aligned as a vector; $I^{\prime} \in {R^{T \times 1}}$ represents the light intensity.
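To make the measurement and reconstruction pipeline concrete, the following minimal sketch simulates Eq. (1) with random binary patterns and recovers the object by gradient descent on a smoothed, unconstrained relaxation of the TV model above. This is not the solver used in the paper; the toy object, the pattern count T, the TV weight lam, and the iteration count are illustrative assumptions, and a dedicated TV-based CS solver would be used in practice.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32                        # image side; a = n * n pixels
a, T = n * n, 600             # pixels per pattern, number of measurements

obj = np.zeros((n, n))        # toy object: a bright square
obj[10:22, 10:22] = 1.0

S = rng.integers(0, 2, (T, a)).astype(float)   # random binary patterns S_t
I = S @ obj.ravel()                            # Eq. (1): I_t = sum_{x,y} S_t O

def grad_operator(n):
    """Finite-difference matrix G stacking horizontal and vertical gradients."""
    D = -np.eye(n) + np.eye(n, k=1)            # forward difference
    D[-1, :] = 0                               # no wrap-around at the boundary
    return np.vstack([np.kron(np.eye(n), D),   # horizontal differences
                      np.kron(D, np.eye(n))])  # vertical differences

G = grad_operator(n)
lam, eps = 0.05, 1e-8                          # illustrative TV weight / smoothing
step = 1.0 / np.linalg.norm(S, 2) ** 2         # safe step for the data term

x = np.zeros(a)
for _ in range(500):                           # smoothed-TV gradient descent
    r = S @ x - I                              # data residual S'O' - I'
    g = G @ x                                  # image gradients GO'
    x -= step * (S.T @ r + lam * (G.T @ (g / np.sqrt(g ** 2 + eps))))

recon = x.reshape(n, n)                        # TV-regularized reconstruction
```

The matrix G here plays the role of the gradient calculation matrix in the optimization model, and the smoothed l1 penalty on GO′ approximates the TV constraint.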

The PCA-based illumination patterns used to fill the ROI of the retina-like patterns are generated for the DMD in three steps. The first step is to generate grayscale PCA-based patterns from a training dataset. A large number of representative images is required to constitute the training dataset, and the optimal patterns are then designed on the basis of the common features extracted from it. The dataset is assumed to contain M objects and N variables. The original training dataset is represented by a matrix X of order M × N, which is shown as:

$$X = \left[ {\begin{array}{cccc} {{x_{11}}}&{{x_{12}}}& \cdots &{{x_{1N}}}\\ {{x_{21}}}&{{x_{22}}}& \cdots &{{x_{2N}}}\\ \vdots & \vdots & \ddots & \vdots \\ {{x_{M1}}}&{{x_{M2}}}& \cdots &{{x_{MN}}} \end{array}} \right], $$
where each row represents one training image which is converted to a row, and xmn represents the value of the nth (n ≤ N) pixel in the mth (m ≤ M) training image. The raw dataset comprises many training images, and the variables are generally not calculated in the same units. Therefore, the raw dataset must be standardized. The standardized process can be expressed as:
$$x_{mn}^{\ast} = ({x_{mn}} - \left\langle {{x_n}} \right\rangle )/{S_n}, $$
where $\left\langle {{x_n}} \right\rangle$ represents the average value of column n, and Sn represents the standard deviation of column n. The covariance matrix $\Sigma$ of size N × N is then calculated as shown below:
$$\Sigma = \left[ {\begin{array}{cccc} {{\mathop{\rm{cov}}} (x_1^\ast ,x_1^\ast )}&{{\mathop{\rm{cov}}} (x_1^\ast ,x_2^\ast )}& \cdots &{{\mathop{\rm{cov}}} (x_1^\ast ,x_N^\ast )}\\ {{\mathop{\rm{cov}}} (x_2^\ast ,x_1^\ast )}&{{\mathop{\rm{cov}}} (x_2^\ast ,x_2^\ast )}& \cdots &{{\mathop{\rm{cov}}} (x_2^\ast ,x_N^\ast )}\\ \vdots & \vdots & \ddots & \vdots \\ {{\mathop{\rm{cov}}} (x_N^\ast ,x_1^\ast )}&{{\mathop{\rm{cov}}} (x_N^\ast ,x_2^\ast )}& \cdots &{{\mathop{\rm{cov}}} (x_N^\ast ,x_N^\ast )} \end{array}} \right], $$
where cov represents the covariance calculation, and $x_n^{\ast}$ represents the nth column vector of the standardized dataset. The covariance matrix $\Sigma$ is then decomposed into eigenvalues and eigenvectors, which is shown as:
$${Q^T}\Sigma Q = \left[ {\begin{array}{cccc} {{\lambda_1}}&{}&{}&{}\\ {}&{{\lambda_2}}&{}&{}\\ {}&{}& \ddots &{}\\ {}&{}&{}&{{\lambda_N}} \end{array}} \right], $$
where Q represents the eigenvector matrix of $\Sigma$ with size N × N, and λn represents the nth eigenvalue of $\Sigma$. The eigenvectors in Q are then arranged in descending order of their corresponding eigenvalues. Each eigenvector of Q, which represents one principal component, becomes an illumination pattern after being reshaped into a two-dimensional array.
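The first step might be implemented as in the sketch below; the random stand-in training stack, image size, and pattern count are illustrative assumptions replacing the resized STL-10 images used in the paper.

```python
import numpy as np

def pca_patterns(train_imgs, n_patterns):
    """Generate grayscale PCA-based patterns from training images.

    train_imgs has shape (M, h, w); returns (n_patterns, h, w).
    """
    M, h, w = train_imgs.shape
    X = train_imgs.reshape(M, h * w).astype(float)   # training matrix X, Eq. (3)
    std = X.std(axis=0) + 1e-12                      # avoid division by zero
    Xs = (X - X.mean(axis=0)) / std                  # standardization, Eq. (4)
    Sigma = np.cov(Xs, rowvar=False)                 # covariance matrix, Eq. (5)
    evals, Q = np.linalg.eigh(Sigma)                 # eigendecomposition, Eq. (6)
    order = np.argsort(evals)[::-1]                  # descending eigenvalues
    Q = Q[:, order]
    # each eigenvector (one principal component) reshapes into one pattern
    return Q[:, :n_patterns].T.reshape(n_patterns, h, w)

rng = np.random.default_rng(0)
train = rng.random((200, 16, 16))    # assumption: 200 tiny stand-in images
patterns = pca_patterns(train, 64)   # first 64 grayscale PCA patterns
```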

The second step is extracting the pixels with positive values from the PCA-based illumination patterns. The pixel intensity values of the principal components can be positive or negative, but only non-negative intensities can be realized by the projection device in a practical optical system. In previous work [35], the positive and negative parts were projected separately, and the light intensity corresponding to the original illumination pattern was then obtained by subtraction. In the current study, the pixels with positive values are extracted from the PCA-based illumination patterns, and those with negative values are set to zero.

The third step is binarizing the grayscale PCA-based illumination patterns, since grayscale patterns cannot be directly loaded onto the DMD. The binarization of grayscale patterns involves two typical strategies: spatial and temporal dithering [4]. The temporal dithering strategy is selected here: each grayscale pattern is split into eight binary bit planes, and the lowest-order bit plane is used in place of the original grayscale pattern in consideration of imaging efficiency.
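The second and third steps might then be combined as follows; the normalization to [0, 1] before 8-bit quantization is an assumption, since the paper does not specify the scaling.

```python
import numpy as np

def to_dmd_binary(pattern):
    """Clip a grayscale PCA pattern to positive values and binarize it.

    Negative pixels are set to zero (step two); the result is quantized to
    8 bits and split into eight binary bit planes (temporal dithering,
    step three), of which the lowest-order plane is returned, as in the text.
    """
    pos = np.maximum(pattern, 0.0)                  # keep positive pixels only
    if pos.max() > 0:
        pos = pos / pos.max()                       # assumed [0, 1] scaling
    gray8 = np.round(pos * 255).astype(np.uint8)    # 8-bit grayscale pattern
    planes = [(gray8 >> b) & 1 for b in range(8)]   # bit planes, LSB first
    return planes[0].astype(np.uint8)               # lowest-order bit plane
```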

In previous work, the ROI of the retina-like patterns was filled with random patterns carrying no features [29–32]. Unlike these conventional methods, the ROI is here filled with PCA-based patterns containing the sparsity prior of the objects, thereby optimizing the retina-like illumination patterns. The proposed method is compared with GI using random patterns (Random-GI), GI using conventional retina-like patterns (Random-RGI), and GI using PCA-based patterns (PCA-GI) through simulations and experiments, which demonstrate its advantages; a sketch of the pattern composition follows.
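A minimal sketch of this composition, under illustrative assumptions about the ROI size and position, is given below. The periphery is kept as a plain random binary fill; a full retina-like structure would additionally coarsen the peripheral resolution toward the edges.

```python
import numpy as np

def retina_like_pattern(roi_pattern, size, roi_origin, rng):
    """Compose one optimized retina-like pattern.

    The ROI is filled with a binarized PCA-based pattern; the rest of the
    pattern keeps the conventional random binary fill.
    """
    pattern = rng.integers(0, 2, size).astype(np.uint8)  # random periphery
    r0, c0 = roi_origin
    rh, rw = roi_pattern.shape
    pattern[r0:r0 + rh, c0:c0 + rw] = roi_pattern        # PCA fill in the ROI
    return pattern

rng = np.random.default_rng(0)
roi = rng.integers(0, 2, (24, 24)).astype(np.uint8)      # stand-in PCA bit plane
p = retina_like_pattern(roi, (64, 64), (20, 20), rng)    # 64 x 64 pattern
```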

3. Simulations and experiments

3.1 Simulations

Simulations with two different objects were performed separately to evaluate the performance of PCA-RGI compared with Random-GI, Random-RGI, and PCA-GI.

The performance of the final reconstructed images was compared quantitatively using the peak signal-to-noise ratio (PSNR) [37] and structural similarity index measure (SSIM) [38] as the evaluation indexes. The PSNR is defined as:

$$\left\{ \begin{array}{l} \textrm{PSNR} = 10{\log_{10}}\frac{{{{({{2^k} - 1})}^2}}}{{\textrm{MSE}}}\\ \textrm{MSE} = \frac{1}{a}\sum\limits_{x,y} {{{({O^{\prime}({x,y}) - O({x,y})})}^2}} \end{array} \right., $$
where MSE represents the mean square error, O′(x, y) represents the reconstructed image, and k is the number of bits, set to 8. A higher PSNR indicates better imaging quality. The SSIM is defined as:
$$\textrm{SSI}{\textrm{M}_{x,y}} = \frac{{({2\mu \mu^{\prime} + {c_1}})({2\omega + {c_2}})}}{{({{\mu^2} + {\mu^{\prime 2}} + {c_1}})({{\sigma^2} + {\sigma^{\prime 2}} + {c_2}})}}, $$
where μ and μ′ respectively represent the average values of O(x, y) and O′(x, y); σ² and σ′² represent their variances; ω represents the covariance between O(x, y) and O′(x, y); c1 = (k1 × L)² and c2 = (k2 × L)² are constants with k1 = 0.01, k2 = 0.03, and L = 1. An SSIM value close to 1 indicates good image quality.
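Both metrics follow directly from Eqs. (7) and (8). The sketch below evaluates SSIM with a single global window over the compared region (the whole image or an ROI crop), whereas the standard SSIM of [38] averages local sliding windows; the simplification is an assumption made for brevity.

```python
import numpy as np

def psnr(ref, rec, k=8):
    """PSNR per Eq. (7); inputs use a (2**k - 1) intensity scale."""
    mse = np.mean((rec.astype(float) - ref.astype(float)) ** 2)
    return 10.0 * np.log10((2 ** k - 1) ** 2 / mse)

def ssim_global(ref, rec, k1=0.01, k2=0.03, L=1.0):
    """Single-window SSIM per Eq. (8) over the whole array."""
    ref, rec = ref.astype(float), rec.astype(float)
    mu, mup = ref.mean(), rec.mean()
    var, varp = ref.var(), rec.var()
    cov = ((ref - mu) * (rec - mup)).mean()        # omega in Eq. (8)
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    return ((2 * mu * mup + c1) * (2 * cov + c2)) / \
           ((mu ** 2 + mup ** 2 + c1) * (var + varp + c2))
```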

First, several simulation settings were clarified. A total of 60,000 unlabeled grayscale images were selected from the STL-10 dataset [39] as the training dataset for the PCA-based illumination patterns. The dataset covers 10 categories, including bird, cat, car, and others. All images of the dataset were resized to 64 × 64 pixels, the same size as the objects to be imaged. Sampling was simulated with two test images, “coco” and “cameraman,” under different measurement conditions; “coco” belongs to one of the training categories, while “cameraman” does not. The numbers of illumination patterns required at sampling ratios of 10%, 20%, 30%, 40%, 50%, and 100% are 410, 819, 1229, 1638, 2048, and 4096, respectively. PSNR and SSIM are used to evaluate the imaging quality of the overall area and of the ROI for quantitative comparison.

The reconstruction results of “coco” are shown in Fig. 2. The ROI reconstructed by PCA-RGI is sharper than those of the other methods, especially when the number of measurements is low. Moreover, the methods with PCA-based illumination patterns perform better than the two methods with random patterns.

Fig. 2. Comparison results of “coco” test image reconstructed by Random-GI, Random-RGI, PCA-GI, and PCA-RGI under different measurements.

The quantitative comparison results for “coco” are shown in Table 1. Considering the imaging quality of the overall image, the PSNR and SSIM of PCA-RGI are better than those of the three other methods in most conditions; however, the methods with non-retina-like patterns are better than those with retina-like patterns when the number of measurements is low. Considering the imaging quality of the ROI, the PSNR and SSIM of PCA-RGI are always the best, followed by PCA-GI, Random-RGI, and Random-GI. Thus, the illumination patterns optimized by PCA acquire object information more effectively than random patterns.

Table 1. Quantitative comparison results of test image “coco”

The simulation was also conducted on the test image “cameraman,” which does not belong to the training dataset, to verify the generalization capability of the optimized PCA-based retina-like illumination patterns. The settings of this second simulation are the same as those of the first one. The comparison results for the “cameraman” test image are shown in Fig. 3. The ROI reconstructed by PCA-RGI is sharper than those of the three other methods, consistent with the results for “coco.” The optimized retina-like illumination patterns thus also acquire object information rapidly for objects outside the training categories.

Fig. 3. Comparison results of “cameraman” test image reconstructed by Random-GI, Random-RGI, PCA-GI, and PCA-RGI under different measurements.

The quantitative comparison results of “cameraman” are shown in Table 2. Considering the imaging quality of the overall image, the PSNR and SSIM of PCA-GI are better than those of the three other methods. This finding differs from that of “coco” because “cameraman” contains additional background information in the region outside the ROI. However, considering the imaging quality of the ROI, the PSNR and SSIM of PCA-RGI are better than those of the three other methods, followed by PCA-GI, Random-RGI, and Random-GI. Thus, the method with optimized retina-like illumination patterns performs effectively because the patterns contain sparsity prior information of objects, even when the object does not belong to the categories of the training dataset.

Table 2. Quantitative comparison results of test image “cameraman”

The quantitative results of “coco” and “cameraman” in the ROI are then compared. The PSNR and SSIM of PCA-RGI minus those of Random-RGI are used to express the increment in imaging quality obtained by applying the optimized retina-like patterns. The comparison results are shown in Fig. 4, wherein the increments of the PSNR and SSIM of “coco” are larger than those of “cameraman.” This result verifies that the method with optimized PCA-based retina-like patterns performs best when reconstructing objects from the same categories as the training dataset. Meanwhile, the method also has generalization capability, although the increment is smaller than for objects of the training categories.

Fig. 4. Quantitative comparison of the results of “coco” and “cameraman” in ROI. (a) The increment of the PSNR of “coco” and “cameraman,” obtained by subtracting the PSNR of Random-RGI from that of PCA-RGI. (b) The increment of the SSIM, obtained by subtracting the SSIM of Random-RGI from that of PCA-RGI.

Measurements in practical optical systems are always corrupted by noise from ambient light and circuit current, which the above simulations did not consider. Simulations on the influence of measurement noise are therefore performed, and the robustness of the methods with different illumination patterns is compared in the ROI. White Gaussian noise is added to the light intensity measurements to simulate different noise levels, using the built-in MATLAB function wgn(). The power of the added noise is given in dBW, i.e., as an absolute noise power; the larger the noise power, the more the noise degrades the imaging quality. The reconstruction results of “coco” with 2048 and 4096 measurements are taken as the reference images. The comparison results are shown in Fig. 5. The results reveal that the imaging quality of the different methods decreases to different degrees as the noise power increases. With 2048 measurements, the methods with non-retina-like patterns perform better than those with retina-like patterns when the noise power is high; with 4096 measurements, the methods with retina-like patterns perform better. Quantitative comparison results are shown in Fig. 6. PCA-RGI has the best performance at both 2048 and 4096 measurements, except at 2048 measurements when the noise power exceeds −10 dBW; even then, PCA-RGI always performs better than Random-RGI. The robustness of the methods with retina-like patterns is worse than that of the methods with non-retina-like patterns under low measurement numbers and sufficiently high noise power, because the retina-like structure enhances the ROI at the expense of the non-ROI information and thereby also amplifies the effect of noise.
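A NumPy equivalent of the wgn() call might look as follows; the conversion from dBW to noise variance assumes the 1-ohm load that wgn() uses by default.

```python
import numpy as np

def add_wgn(I, power_dBW, rng):
    """Add white Gaussian noise of a given absolute power in dBW.

    The noise variance is 10**(power_dBW / 10) watts into a 1-ohm load,
    matching MATLAB's wgn() convention.
    """
    sigma = np.sqrt(10.0 ** (power_dBW / 10.0))
    return I + rng.normal(0.0, sigma, size=np.shape(I))

rng = np.random.default_rng(0)
I_noisy = add_wgn(np.ones(2048), -10.0, rng)   # -10 dBW noise on 2048 samples
```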

Fig. 5. Noise comparison results of the “coco” test image in ROI reconstructed by Random-GI, Random-RGI, PCA-GI, and PCA-RGI under different noise powers.

Fig. 6. Quantitative comparison results of the “coco” test image in ROI reconstructed by Random-GI, Random-RGI, PCA-GI, and PCA-RGI under different noise powers. (a) PSNR with 2048 measurements. (b) SSIM with 2048 measurements. (c) PSNR with 4096 measurements. (d) SSIM with 4096 measurements.

3.2 Experiments

The experimental setup is shown in Fig. 7. The setup comprises the illumination part, the detection part, and the objects. The illumination part comprises a light-emitting diode, a DMD (Texas Instruments DLP Discovery 4100 development kit), and a lens. The light-emitting diode operates at 400–760 nm (@20 W), the maximum binary modulation rate of the DMD is 22 kHz, and the focal length of the projection lens is 150 mm. The detection part comprises a photodetector (Thorlabs PDA36A, active area of 13 mm2), a data acquisition board (PICO6404E, sampling at 1 MS/s), and a computer. Three objects are selected for the experiments: a modified United States Air Force (USAF) resolution test chart, an image of a cat, and an image of “BIT.” The PCA-based patterns are trained on the STL-10 dataset, and only the image of a cat belongs to the training categories. The experiments were conducted in an environment with ambient noise, whose level depends on the working power of the light source; throughout the experiments, the light source operates at maximum power.

Fig. 7. Experimental setup.

First, the image of a cat is imaged, with the number of measurements set to 410, 819, 1229, 1638, 2048, and 4096. The experimental results are shown in Figs. 8 and 9. Considering the ROI, the imaging quality of PCA-RGI and Random-RGI improves as the measurements increase. PCA-RGI has the best performance, followed by PCA-GI, Random-GI, and Random-RGI. The performance of Random-RGI is substantially degraded by the ambient noise when the number of measurements is low. Moreover, Random-GI and PCA-GI are also affected by noise, and their image quality begins to decrease as the sampling approaches full sampling. PCA-based patterns acquire information in the ROI more effectively than the other methods, which validates the superiority of the optimized retina-like illumination patterns.

Fig. 8. Experimental results of the “CAT” image reconstructed by Random-GI, Random-RGI, PCA-GI, and PCA-RGI under different measurements.

Fig. 9. Quantitative comparison results of the “CAT” image in ROI reconstructed by Random-GI, Random-RGI, PCA-GI, and PCA-RGI under different measurements. (a) PSNR value. (b) SSIM value.

The experimental results for the “USAF” and “BIT” images are respectively shown in Figs. 10 and 11. PCA-RGI has the best imaging quality in the ROI. The imaging quality of Random-RGI is influenced by noise and is even worse than that of Random-GI. The imaging quality of PCA-RGI is also affected by noise, but its robustness is better than that of Random-RGI.

Fig. 10. Experimental results of the “USAF” and “BIT” images reconstructed by Random-GI, Random-RGI, PCA-GI, and PCA-RGI under different measurements.

Fig. 11. Quantitative comparison results of the “USAF” and “BIT” images in ROI reconstructed by Random-GI, Random-RGI, PCA-GI, and PCA-RGI under different measurements. (a) PSNR value of “USAF”. (b) SSIM value of “USAF”. (c) PSNR value of “BIT”. (d) SSIM value of “BIT”.

The quantitative results of “CAT,” “USAF,” and “BIT” in the ROI are compared in Fig. 12. The increment in imaging quality for “CAT” is larger than that for the two other objects because the PCA-based retina-like illumination patterns are most effective for objects belonging to the training dataset. Therefore, the proposed method with optimized retina-like illumination patterns yields better imaging quality in the ROI for objects of the training categories, while also retaining good generalization capability.

Fig. 12. Quantitative comparison of the results of “CAT,” “USAF,” and “BIT” in ROI. (a) The increment of the PSNR, obtained by subtracting the PSNR of Random-RGI from that of PCA-RGI. (b) The increment of the SSIM, obtained by subtracting the SSIM of Random-RGI from that of PCA-RGI.

4. Discussions and conclusions

Conventional retina-like patterns fill the ROI with random patterns, which perform poorly in acquiring object information. Illumination patterns carrying the sparsity prior of objects can be obtained by PCA, and the optimization of the retina-like illumination pattern in the ROI by PCA is presented in this paper. The optimized retina-like patterns are generated by filling the ROI with patterns trained by PCA while keeping the rest of the conventional random retina-like pattern. The simulation and experimental results suggest that PCA-RGI with optimized retina-like illumination patterns achieves better imaging quality in the ROI than Random-GI, Random-RGI, and PCA-GI. Meanwhile, PCA-RGI has good generalization capability: it also improves the imaging quality of the ROI for objects outside the training dataset, although the improvement is larger for objects that belong to it. If there is more than one object in the scene and the category is unknown, a dataset covering many categories (e.g., ImageNet) can be selected for training. However, PCA can only extract patterns reflecting dataset-wide features, similar in function to a single convolution layer of a deep learning network; the ROI patterns could therefore be further improved by deep learning or other methods that add extra prior knowledge. In current applications of RGI, a small number of samples is first acquired to obtain the location and size of the ROI, after which the retina-like patterns are designed and projected. While the size and location of the target are being obtained, the target can also be classified to extract features, so adding such prior knowledge to the design of the retina-like patterns can further improve the imaging quality of the ROI. The proposed optimization method for retina-like patterns facilitates the realization of high-performance GI.

Funding

Foundation Enhancement Program (2019-JCJQ-JJ-273); National Natural Science Foundation of China (61871031, 61875012, 61905014).

Acknowledgments

The authors thank the editor and the anonymous reviewers for their valuable suggestions.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. M. P. Edgar, G. M. Gibson, and M. J. Padgett, “Principles and prospects for single-pixel imaging,” Nat. Photonics 13(1), 13–20 (2019). [CrossRef]  

2. T. B. Pittman, Y. H. Shih, D. V. Strekalov, and A. V. Sergienko, “Optical imaging by means of two-photon quantum entanglement,” Phys. Rev. A 52(5), R3429–R3432 (1995). [CrossRef]  

3. W. Gong and S. Han, “Correlated imaging in scattering media,” Opt. Lett. 36(3), 394–396 (2011). [CrossRef]  

4. J. Huang, D. Shi, K. Yuan, S. Hu, and Y. Wang, “Computational-weighted Fourier single-pixel imaging via binary illumination,” Opt. Express 26(13), 16547–16559 (2018). [CrossRef]  

5. H. Deng, X. Gao, M. Ma, P. Yao, Q. Guan, X. Zhong, and J. Zhang, “Fourier single-pixel imaging using fewer illumination patterns,” Appl. Phys. Lett. 114(22), 221906 (2019). [CrossRef]  

6. H. Jiang, S. Zhu, H. Zhao, B. Xu, and X. Li, “Adaptive regional single-pixel imaging based on the Fourier slice theorem,” Opt. Express 25(13), 15118–15130 (2017). [CrossRef]  

7. Z. Zhang, X. Wang, G. Zheng, and J. Zhong, “Fast Fourier single-pixel imaging via binary illumination,” Sci. Rep. 7(1), 12029 (2017). [CrossRef]  

8. B. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, and M. J. Padgett, “3-D Computational imaging with single-pixel detectors,” Science 340(6134), 844–847 (2013). [CrossRef]  

9. M. J. Sun, M. P. Edgar, G. M. Gibson, B. Sun, N. Radwell, R. Lamb, and M. J. Padgett, “Single-pixel three-dimensional imaging with time-based depth resolution,” Nat. Commun. 7(1), 12010 (2016). [CrossRef]  

10. Z. Zhang and J. Zhong, “Three-dimensional single-pixel imaging with far fewer measurements than effective image pixels,” Opt. Lett. 41(11), 2497–2500 (2016). [CrossRef]  

11. Z. Zhang, S. Liu, J. Peng, M. Yao, G. Zheng, and J. Zhong, “Simultaneous spatial, spectral, and 3D compressive imaging via efficient Fourier single-pixel measurements,” Optica 5(3), 315–319 (2018). [CrossRef]

12. D. Z. Cao, J. Xiong, and K. Wang, “Geometrical optics in correlated imaging systems,” Phys. Rev. A 71(1), 013801 (2005). [CrossRef]  

13. C. M. Watts, D. Shrekenhamer, J. Montoya, G. Lipworth, J. Hunt, T. Sleasman, S. Krishna, D. R. Smith, and W. J. Padilla, “Terahertz compressive imaging with metamaterial spatial light modulators,” Nat. Photonics 8(8), 605–609 (2014). [CrossRef]  

14. W. L. Chan, K. Charan, D. Takhar, K. F. Kelly, R. G. Baraniuk, and D. M. Mittleman, “A single-pixel terahertz imaging system based on compressed sensing,” Appl. Phys. Lett. 93(12), 121105 (2008). [CrossRef]  

15. R. I. Stantchev, B. Q. Sun, S. M. Hornett, P. A. Hobson, G. M. Gibson, M. J. Padgett, and E. Hendry, “Noninvasive, near-field terahertz imaging of hidden objects using a single-pixel detector,” Sci. Adv. 2(6), e1600190 (2016). [CrossRef]  

16. M. P. Edgar, G. M. Gibson, R. W. Bowman, B. Sun, N. Radwell, K. J. Mitchell, S. S. Welsh, and M. J. Padgett, “Simultaneous real-time visible and infrared video with single-pixel detectors,” Sci. Rep. 5(1), 10669 (2015). [CrossRef]  

17. L. Bian, J. Suo, G. Situ, Z. Li, J. Fan, F. Chen, and Q. Dai, “Multispectral imaging using a single bucket detector,” Sci. Rep. 6(1), 24752 (2016). [CrossRef]  

18. S. S. Welsh, M. P. Edgar, R. Bowman, P. Jonathan, B. Sun, and M. J. Padgett, “Fast full-color computational imaging with single-pixel detectors,” Opt. Express 21(20), 23068–23074 (2013). [CrossRef]  

19. G. Satat, M. Tancik, O. Gupta, B. Heshmat, and R. Raskar, “Object classification through scattering media with deep learning on time resolved measurement,” Opt. Express 25(15), 17466–17479 (2017). [CrossRef]

20. O. Katz, P. Heidmann, M. Fink, and S. Gigan, “Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations,” Nat. Photonics 8(10), 784–790 (2014). [CrossRef]  

21. B. I. Erkmen and J. H. Shapiro, “Signal-to-noise ratio of Gaussian-state ghost imaging,” Phys. Rev. A 79(2), 023833 (2009). [CrossRef]  

22. L. Wang and S. Zhao, “Fast reconstructed and high-quality ghost imaging with fast Walsh-Hadamard transform,” Photonics Res. 4(6), 240–244 (2016). [CrossRef]  

23. M. Alemohammad, J. R. Stroud, B. T. Bosworth, and M. A. Foster, “High-speed all-optical Haar wavelet transform for real-time image compression,” Opt. Express 25(9), 9802–9811 (2017). [CrossRef]  

24. Z. Zhang, X. Ma, and J. Zhong, “Single-pixel imaging by means of Fourier spectrum acquisition,” Nat. Commun. 6(1), 6225 (2015). [CrossRef]  

25. S. Rizvi, J. Cao, K. Zhang, and Q. Hao, “Improving Imaging Quality of Real-time Fourier Single-pixel Imaging via Deep learning,” Sensors 19(19), 4190 (2019). [CrossRef]  

26. F. Wang, H. Wang, H. Wang, G. Li, and G. Situ, “Learning from simulation: An end-to-end deep-learning approach for computational ghost imaging,” Opt. Express 27(18), 25560–25572 (2019). [CrossRef]  

27. M. Lu, X. Shen, and S. Han, “Ghost imaging via compressive sampling based on digital micromirror device,” Acta Opt. Sin. 31(7), 0711002 (2011). [CrossRef]  

28. D. B. Phillips, M. J. Sun, and J. M. Taylor, “Adaptive foveated single-pixel imaging with dynamic supersampling,” Sci. Adv. 3(4), e1601782 (2017). [CrossRef]  

29. K. Y. Zhang, J. Cao, Q. Hao, F. H. Zhang, Y. C. Feng, and Y. Cheng, “Modeling and Simulations of Retina-Like Three-Dimensional Computational Ghost Imaging,” IEEE Photonics J. 11(1), 1–13 (2019). [CrossRef]  

30. X. Zhai, Z. Cheng, Y. Hu, Z. Liang, and Y. Wei, “Foveated ghost imaging based on deep learning,” Opt. Commun. 448, 69–75 (2019). [CrossRef]  

31. W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg, “SSD: Single shot multibox detector,” in European Conference on Computer Vision (Springer, 2016), pp. 21–37.

32. Z. Q. Gao, X. M. Cheng, L. F. Zhang, Y. Hu, and Q. Hao, “Compressive ghost imaging in scattering media guided by region of interest,” J. Opt. 22(5), 055704 (2020). [CrossRef]  

33. J. Cao, D. Zhou, F. H. Zhang, H. Cui, Y. Q. Zhang, and Q. Hao, “A Novel Approach of Parallel Retina-Like Computational Ghost Imaging,” Sensors 20(24), 7093 (2020). [CrossRef]  

34. J. Feng, S. M. Jiao, Y. Gao, T. Lei, and L. P. Du, “Design of optimal illumination patterns in single-pixel imaging using image dictionaries,” IEEE Photonics J. 12(4), 1–9 (2020). [CrossRef]  

35. L. Bian, J. Suo, Q. Dai, and F. Chen, “Experimental comparison of single-pixel imaging algorithms,” J. Opt. Soc. Am. A 35(1), 78 (2018). [CrossRef]  

36. C. Zuo, Q. Chen, W. Qu, and A. Asundi, “Phase aberration compensation in digital holographic microscopy based on principal component analysis,” Opt. Lett. 38(10), 1724–1726 (2013). [CrossRef]  

37. H. C. Liu, B. Yang, Q. Guo, J. Shi, C. Guan, G. Zheng, H. Mühlenbernd, G. Li, P. Zentgraf, and P. Zhang, “Single-pixel computational ghost imaging with helicity-dependent metasurface hologram,” Sci. Adv. 3(9), e1701477 (2017). [CrossRef]  

38. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: From error visibility to structural similarity,” IEEE Trans. Image Process 13(4), 600–612 (2004). [CrossRef]  

39. A. Coates, A. Y. Ng, and H. Lee, “An analysis of single-layer networks in unsupervised feature learning,” in Proc. 14th International Conference on Artificial Intelligence and Statistics (AISTATS), Vol. 15 of JMLR Workshop and Conference Proceedings (2011), pp. 215–223.
