
Color computational ghost imaging based on a plug-and-play generalized alternating projection

Open Access

Abstract

Computational ghost imaging (CGI), in which an image is retrieved from the known speckle patterns that illuminate the object and the total transmitted intensity, has advanced greatly because of its advantages and potential applications at all wavelengths. However, achieving high-quality imaging with short acquisition times has proven challenging, especially in color CGI. In this paper, we present a new color CGI method that reconstructs high-fidelity images at a relatively low sampling rate (0.0625) by using a plug-and-play generalized alternating projection (PnP-GAP) algorithm. The spatial distribution and color information of the object are encoded simultaneously into a one-dimensional light intensity sequence, measured by a single-pixel detector, by combining randomly distributed speckle patterns with a Bayer color mask as modulation patterns. A pre-trained deep denoising network is utilized in the PnP-GAP algorithm to achieve better results. Furthermore, a joint reconstruction and demosaicking method is developed to restore the target color information more faithfully. Simulations and optical experiments verify the feasibility and superiority of the proposed scheme in comparison with classical reconstruction algorithms. This new color CGI scheme will enable CGI to acquire information in real scenes more effectively and further promote its practical applications.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Ghost imaging (GI), also known as correlated imaging, has received extensive attention and development since it was first proposed [1–3]. Conventional GI consists of a reference arm, which records the speckle field illuminating the object, and an object arm, which measures the total light intensity from the object with a single-pixel detector (SPD). The image is obtained by performing a correlation operation between the output signals of the two arms. In computational ghost imaging (CGI), the reference arm is replaced by a spatial light modulator that generates a deterministic speckle field [4,5]. The imaging system is thus greatly simplified, and CGI has enabled many applications, such as three-dimensional imaging [6,7], biomedical imaging [8,9], optical encryption [10,11], and so on. Although CGI has many advantages, it generally requires a large number of samplings to reconstruct a high-quality image, and the image quality is easily limited by hardware conditions and external interference. The minimum number of samplings required equals the total pixel number of the reconstructed target image, i.e., the Nyquist sampling limit [12,13]. Reducing the number of samplings may seriously degrade image quality.

In the past decades, an increasing number of methods have been proposed to improve imaging quality and decrease the sampling ratio (SR) below the Nyquist limit [14–16]. However, most of these methods focus on the reconstruction of grayscale images, which lack the color information that human vision is sensitive to. A direct way to obtain a color image is to acquire the red (R), green (G), and blue (B) components separately in three monochromatic reconstructions, which is quite inefficient and difficult to implement. Recently, many methods have been proposed to improve the imaging efficiency of color CGI. Welsh et al. proposed a scheme in which three spectrally filtered SPDs obtain the RGB components separately in a single round of measurement [17]. However, this method increases the complexity of the CGI system and limits its practical application. Subsequently, combined with time-division multiplexing, color CGI was achieved by dividing the whole sampling time into three independent time periods [18]. This scheme requires only one SPD and one round of measurement; however, the imaging quality is susceptible to disturbances from intensity fluctuations of the ambient light [19,20]. Exploiting the sparsity and non-local self-similarity of the image, compressed sensing ghost imaging (CSGI) has been utilized to improve the quality of color CGI [21]. Fourier and Hadamard patterns, which are orthogonal in the time or spatial domain, have also been applied to color CGI [22,23]. Although these methods achieve good image quality, they tend to require more measurements and struggle to realize color CGI at very low sampling rates. Recently, deep learning has shown outstanding performance in the computational imaging field and is widely used to improve the efficiency of GI [24–26]. It has been successfully applied to color image reconstruction in CGI with the help of specially designed neural networks [27–29]. However, these methods usually rely on supervised learning strategies, which require a large dataset to train a network in advance, and the generalization ability of the network is limited.

Recovering a two-dimensional image from the one-dimensional light intensity under sub-Nyquist sampling is an ill-posed optimization problem. The general approach is to iteratively minimize a loss function consisting of a data-fidelity term and a replaceable, hand-picked regularization term [30]. The data-fidelity term enforces the reconstructed image to match the measurement under the constraints of the imaging model. The regularization term enforces prior knowledge of the target image; common prior characteristics include sparsity, non-negativity, non-local self-similarity, and so on. However, hand-picked priors are not necessarily representative of the target image. Recently, deep learning methods have been demonstrated to be effective in capturing image priors from data and have been widely used to solve denoising problems [31]. As an optimization solver, the generalized alternating projection (GAP) algorithm has demonstrated excellent performance in various compressive sensing (CS) problems [32]. Plug-and-play (PnP) [33] makes the GAP algorithm more effective in dealing with different problems by integrating modern denoising priors, such as total variation (TV), block-matching and 3D filtering (BM3D), or deep learning-based denoisers, into the iterative optimization algorithm (PnP-GAP) [34,35]. To meet the practical imaging requirements of high compression ratio, high quality, and high fidelity, a new active color CGI scheme under sub-Nyquist sampling is proposed. During the measurement, randomly distributed structured patterns combined with a Bayer color mask are projected to simultaneously encode the spatial and color information of the image into the one-dimensional light intensity. PnP-GAP combined with a pre-trained deep denoising prior network is used to denoise and reconstruct the spatial distribution of the target image, and demosaicking is fused into each iteration to simultaneously recover the color information. Compared with traditional algorithms, the proposed method can achieve high signal-to-noise-ratio reconstruction at a low SR (0.0625), and the color information of the reconstructed image is much closer to the original image. This method breaks through a bottleneck of color CGI and enables it to obtain information in real scenes more quickly and with higher quality.

The outline of this paper is as follows: Section 2 describes the principles and procedure of the method, Section 3 provides verification through simulations and optical experiments, and Section 4 presents the conclusions.

2. Theoretical analysis

2.1 Principle of color CGI

The experimental setup of color CGI is shown in Fig. 1(a). The color illumination patterns are projected onto the target object by a projector (Panasonic 3 Liquid Crystal Display). After being modulated by the object, the light freely propagates a certain distance to an SPD (Thorlabs PDA-100A) that collects the total light intensity. A data acquisition card (DAQ, NI USB-6341) digitizes the collected light intensity and transmits it to the computer. The color illumination patterns are composed of random binary patterns Di(x, y) and a Bayer color mask Sj(x, y). The specific generation process is shown in Fig. 1(b); the Bayer color mask is designed with sets of 2 × 2 pixels [22]. In each set, the pixels are covered with R, G, or B filters, with two of the four pixels covered in green. The Bayer color mask can be decomposed into three color masks (bottom panel of Fig. 1(b)). The random binary patterns are modulated by the Bayer color mask to encode the color information of the target object. Thus, the color illumination patterns Pi, j(x, y) can be written as:

$$P_{i,j}(x,y) = D_i(x,y)\cdot S_j(x,y),$$
where i = 1, 2, …, N is the measurement index and N is the total number of measurements; j = R, G, B indexes the three color channels; "·" stands for the Hadamard (element-wise) product; and (x, y) are the spatial position coordinates. After N color illumination patterns have been projected onto the object, the SPD collects a light intensity sequence Y. The process can be expressed as:
$$\begin{aligned} Y_i &= \sum\limits_{x,y} \sum\limits_j \mu_j P_{i,j}(x,y) \cdot O_j(x,y)\\ &= \sum\limits_{x,y} \sum\limits_j \mu_j D_i(x,y) \cdot S_j(x,y) \cdot O_j(x,y)\\ &= \sum\limits_{x,y} D_i(x,y) \cdot O_m(x,y), \end{aligned}$$
where Oj(x, y) represents the reflectivity distribution of the color object, µj are the response coefficients of the SPD to the R, G, and B light, and Om(x, y) stands for the mosaic image of the target color object. In general, unequal response coefficients inevitably cause color distortion in color CGI; we therefore perform a color correction by measuring them in advance in the optical experiment. After a correlation between the random binary patterns and the light intensity, as in conventional CGI, we can reconstruct the mosaic image of the object as follows:
$$\widehat{O}_m(x,y) = \frac{1}{N}\sum\limits_{i = 1}^N (Y_i - \langle Y \rangle) D_i(x,y),$$
where 〈Y〉 is the arithmetic mean of the light intensity sequence. After color correction and demosaicking, the target color image can be recovered.
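To make the measurement model concrete, the following Python/NumPy sketch simulates Eqs. (1)–(3). The RGGB tile layout, the random test object, and the unit response coefficients are illustrative assumptions rather than details fixed by the text.

import numpy as np

rng = np.random.default_rng(0)
rows = cols = 128                       # image size used later in the paper
N = rows * cols // 4                    # number of measurements (SR = 0.25 here)

# Bayer color mask S_j built from 2x2 sets; channel order is (R, G, B) and
# the RGGB layout below is one common choice (assumed, not specified here).
S = np.zeros((3, rows, cols))
S[0, 0::2, 0::2] = 1                    # R
S[1, 0::2, 1::2] = 1                    # G
S[1, 1::2, 0::2] = 1                    # G (two of the four pixels are green)
S[2, 1::2, 1::2] = 1                    # B

D = rng.integers(0, 2, (N, rows, cols)).astype(float)  # random binary patterns D_i
O = rng.random((3, rows, cols))         # stand-in reflectivity O_j(x, y)
mu = np.array([1.0, 1.0, 1.0])          # SPD response coefficients mu_j (ideal detector)

# Eq. (1) folded into Eq. (2): each SPD reading is the inner product of D_i
# with the mosaic image O_m = sum_j mu_j * S_j * O_j.
O_m = np.sum(mu[:, None, None] * S * O, axis=0)
Y = np.einsum('ixy,xy->i', D, O_m)

# Eq. (3): conventional correlation reconstruction of the mosaic image.
O_m_hat = np.einsum('i,ixy->xy', Y - Y.mean(), D) / N

Demosaicking O_m_hat then yields the color image; this is exactly the pipeline that Section 2.2 improves upon.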


Fig. 1. (a) The experimental setup schematic diagram of the color CGI. (b) The generation process of the color illumination patterns.


2.2 PnP-GAP for color CGI based on deep denoising and demosaicking prior

Assume that the total number of pixels in a single channel of the color object image is M; the SR can then be written as N / M. Each random binary pattern can be rearranged into a row vector di, and the N vectors are stacked in projection order into an N × M matrix H:

$$H = \left[ d_1^T, d_2^T, \cdots, d_N^T \right]^T,$$
where T represents the transpose of the matrix. After vectorizing and rearranging the mosaic image into a column vector fm, Eq. (2) can be written as:
$$Y = H{f_m}.$$
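In code, the matrix form of Eqs. (4)–(5) is just a reshape of the quantities from the previous sketch:

H = D.reshape(N, -1)                    # Eq. (4): each pattern becomes a row d_i^T
f_m = O_m.reshape(-1)                   # vectorized mosaic image
assert np.allclose(Y, H @ f_m)          # Eq. (5): Y = H f_m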

Apparently, when the SR is relatively low (N < M), recovering the image from the compressive samples amounts to solving an under-determined linear inverse problem. To constrain the range of the solution more precisely, a regularization penalty term is generally employed. The inverse problem of solving for the mosaic image can thus be modeled as:

$$\widehat {{f_m}} = \mathop {\arg \min }\limits_{{f_m}} ||{Y - H{f_m}} ||_2^2 + \lambda g({f_m}),$$
where "arg min" stands for solving the minimization problem, g(·) represents the regularization term, λ is the weight coefficient between the data-fidelity term and the regularization term, and ‖·‖2 is the L2 norm.

To make the solution of the optimization problem more flexible, the proposed scheme uses PnP-GAP [33,34]. By introducing an auxiliary variable v, the forward physics model and the image prior are separated into two modules: the image reconstruction and denoising processes are independent of each other, and changing the prior model involves only one module. Thus, state-of-the-art denoising priors can be paired with the physical forward model. In previous color CGI schemes, the mosaic image of the object is reconstructed first, and the color information is then recovered by a demosaicking algorithm. However, at low SR the spatial information and the color mask information of the image are damaged by the CS algorithm, so it is difficult for a demosaicking algorithm to fully restore the color information, which further complicates the reconstruction. To break this bottleneck, we embed the demosaicking prior into the iterative image reconstruction procedure, recovering the color information while reconstructing the spatial information and thereby preventing color loss.

According to the GAP architecture, the unconstrained optimization in Eq. (6) can be converted to:

$$(\widehat{f_m},\widehat{v}) = \mathop{\arg\min}\limits_{(f_m,v)} \frac{1}{2}\|f_m - v\|_2^2 + \lambda g(v), \quad \textrm{subject to } Y = H f_m,$$
which can be solved by the following sequence of sub-problems:

Solving fm: fm(k+1) is updated via a Euclidean projection of v(k) onto the linear manifold Y = Hfm:

$${f_m}^{(k + 1)} = {v^{(k)}} + {H^T}{(H{H^T})^{ - 1}}(Y - H{v^{(k)}}),$$
where the superscript k denotes the iteration number.
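A minimal sketch of this projection step, reusing H and Y from above; precomputing (HH^T)^(-1) is affordable at this scale, though for larger N one would solve the linear system per iteration (or use a diagonal approximation) instead.

HHT_inv = np.linalg.inv(H @ H.T)        # N x N, computed once outside the loop

def gap_projection(v, H, Y, HHT_inv):
    # Eq. (8): Euclidean projection of v^(k) onto the manifold Y = H f_m.
    return v + H.T @ (HHT_inv @ (Y - H @ v))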

Solving v: given fm, v is updated by merging the denoising and demosaicking priors:

$$\widetilde{v}_j^{(k+1)} = \Phi_{Dm}\big(f_m^{(k+1)}\big),$$
$${v'_j}^{(k+1)} = \Phi_{D\sigma}\big(\widetilde{v}_j^{(k+1)}\big),$$
$$v^{(k+1)} = \sum\limits_j \mu_j S_j \cdot {v'_j}^{(k+1)},$$
where ΦDm and ΦDσ are the off-the-shelf demosaicking and denoising algorithms used. The flowchart of the algorithm is shown in Fig. 2. Different denoising and demosaicking algorithms can be plugged in to obtain different results. A deep neural network can learn comprehensive prior information of images from a large dataset during training. Thus, we pre-train a Gaussian denoising network [36] and a deep demosaicking network [37] as the denoising and demosaicking priors in PnP-GAP, respectively. The image information is reconstructed as faithfully as possible while noise is eliminated and color information is restored.
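Putting Eqs. (8)–(11) together gives the loop sketched below. The functions demosaick (returning a three-channel image) and denoise stand in for the pre-trained DDN and FFDNet networks; their interfaces, the iteration count, and the noise level sigma are our assumptions for illustration.

def pnp_gap_color(H, Y, S, mu, shape, demosaick, denoise, n_iter=50, sigma=0.05):
    # Initialization v^(0) = H^T Y (used in Section 3.1 to speed up convergence).
    v = H.T @ Y
    HHT_inv = np.linalg.inv(H @ H.T)
    rgb = None
    for k in range(n_iter):
        # Eq. (8): data-fidelity projection onto Y = H f_m.
        f = v + H.T @ (HHT_inv @ (Y - H @ v))
        mosaic = f.reshape(shape)
        # Eq. (9): demosaicking prior lifts the mosaic to a color estimate.
        rgb = demosaick(mosaic)                        # shape (3, rows, cols)
        # Eq. (10): channel-wise denoising prior.
        rgb = np.stack([denoise(ch, sigma) for ch in rgb])
        # Eq. (11): re-mosaic the denoised color estimate for the next iteration.
        v = np.sum(mu[:, None, None] * S * rgb, axis=0).reshape(-1)
    return rgb

Swapping demosaick and denoise for classical routines (e.g., bilinear demosaicking and BM3D) would yield non-learned variants of the same loop; the PnP structure requires no other changes.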


Fig. 2. The flowchart of the proposed algorithm.


3. Results

To verify the feasibility and effectiveness of the proposed method, we perform a comparative study by computer simulations and optical experiments.

3.1 Simulations

In the simulations, we assume that the detector has a uniform response across different spectra. The performance of the proposed method on different target images under different SRs is tested and compared with other traditional CGI reconstruction methods. Two 128 × 128 pixel color images from the STL-10 and Mirflickr datasets [38,39] are selected as target images. The measurement matrix H consists of randomly distributed −1 and +1 entries. Before the iteration starts, we initialize v(0) = HTY to speed up convergence. In each iteration, the image reconstructed by the physical forward model is passed through a pre-trained deep demosaicking network (DDN) [37] and a pre-trained deep denoising network (FFDNet) [36]. After multiple cycles of denoising and demosaicking, the color target image is reconstructed. In addition, we reconstruct the mosaic images of the target objects using the classic correlation CGI algorithm and the CSGI algorithm based on TVAL3 [40]; the same DDN is then used to recover the color information. We also test the performance of the PnP-GAP algorithm with only the denoising prior, i.e., with the demosaicking separated from the iteration. The simulation results of the different algorithms at SRs from 0.0625 to 1 are shown in Fig. 3. Visually, the proposed method performs well in color recovery at all SRs, and as the SR increases, the reconstructed images contain richer details and come closer to the ground truth. In contrast, the traditional correlation algorithm can hardly reconstruct clear results even when the SR is 1. Although the CSGI and PnP-GAP algorithms can restore image details and reconstruct clear results, the color information is seriously damaged under sub-Nyquist sampling: in reconstructing the mosaic images, the Bayer color mask information is lost through the multiple iterations and image priors, so it is difficult for the demosaicking algorithm to recover the color information. In the proposed method, because the demosaicking algorithm is embedded in each iteration, the Bayer color mask is well preserved and the color information of the reconstructed images is close to the original image. In addition, at low SR the proposed method also recovers high-frequency detail well; for example, in the reconstruction of the target "STOP", the edges of the letters can be recovered. A more detailed quantitative description of the simulation results is provided in Supplement 1.


Fig. 3. Simulation results of different reconstruction algorithms at different SRs. (a) The “bird”. (b) The “STOP”. GT: groundtruth.


In addition, we compare the computation time of the different reconstruction algorithms; the results are shown in Table 1. The conventional correlation algorithm has the shortest computation time, but the reconstruction quality is extremely poor and the recovered images are barely distinguishable. The computation time of CS-based color CGI grows with the SR, since larger matrix operations take longer. In the PnP-GAP algorithm, each iteration includes an additional denoising step, which adds to the computation time. On this basis, the proposed method adds one more demosaicking prior per iteration, so its reconstruction time is longer than that of the traditional PnP-GAP algorithm. When the SR is below 0.25, increasing it raises the computational complexity, so the computation time grows. When the SR is 0.5 or larger, although the proposed algorithm handles a larger measurement matrix, it does not require many iterations to obtain satisfactory reconstructions. Although the computation time of the proposed method is somewhat longer, it is far superior to the other methods in recovering the spatial and color information of the image at a low sampling rate.


Table 1. The computation time (s) of different reconstruction algorithms

It can be seen from the above analysis and the supplementary material that embedding the demosaicking algorithm into PnP-GAP better reconstructs the color information in CGI. The flexible framework of PnP-GAP allows us to introduce different denoising and demosaicking prior algorithms, which affect the final reconstruction results. Through the effective combination of a deep denoising network and a deep demosaicking network with the PnP-GAP algorithm, we realize high-quality image reconstruction for color CGI at an SR of 0.0625.

3.2 Optical experiments

In the optical experiments, the relative response coefficients of the CGI detection system to the three color channels were first measured by projecting red, green, and blue patterns of the same size and amplitude onto white paper and recording the reflected light intensity received by the SPD. The relative response of the CGI experimental system to the different colors is µR : µG : µB = 1 : 1.572 : 1.158. In the color CGI reconstruction, this relative spectral response of the detector is used for color correction before the demosaicking procedure. The colored English letters "SDU" printed on white paper and the head of a colored object, a "Sprinkler", are sampled as imaging objects by the color CGI system shown in Fig. 1(a). The random speckle patterns used in the experiments are the same as in the simulations, so the resolution of the reconstructed experimental images is also 128 × 128. Because the patterns contain negative pixel values and to reduce the influence of noise, differential measurement is adopted.
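The color correction amounts to a per-pixel rescaling of the mosaic estimate by the measured channel responses. A minimal sketch, reusing the mask S from the earlier sketches and assuming the same RGGB layout:

mu_measured = np.array([1.0, 1.572, 1.158])     # muR : muG : muB from the experiment
inv_response = np.sum(S / mu_measured[:, None, None], axis=0)  # 1/mu_j at each pixel
O_m_corrected = O_m_hat * inv_response          # applied before demosaicking

Because each pixel of the Bayer mask belongs to exactly one channel, the sum over j simply selects 1/mu_j for that pixel's channel.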

To compare with the proposed method, after the light intensities are collected by the SPD, the correlation-based CGI, CSGI, and traditional PnP-GAP (with FFDNet) algorithms are used to reconstruct the target mosaic images under different SRs. Furthermore, for the proposed method, the reconstruction results without color correction before the demosaicking prior in each iteration are also compared. After the mosaic images are obtained by the different algorithms, the same deep demosaicking network (DDN) is used to restore the color information of the targets. The reconstruction results of "SDU" and the sprinkler are shown in Figs. 4(a) and 4(b), respectively.


Fig. 4. Experimental results of different reconstruction algorithms at different SRs. (a) The “SDU”. (b) The “Sprinkler”.


It can be seen from the figures that the experimental results are basically consistent with the conclusions obtained in the simulations. Because of the loss of spatial information and the color Bayer mask, it is difficult for the other three classical methods to reconstruct the spatial distribution and color of the images with high quality at low SRs; their reconstructed images contain only the rough outline of the objects and a single color. In contrast, even when the SR is as low as 0.0625, the reconstructed images of the proposed method retain more spatial detail, and their color information is closer to the ground truth. For the object "SDU" with its simple structure, the method achieves reconstructions that are almost consistent with the original image at an SR of 0.0625, and the edges of the English letters are very clear. For the head of the sprinkler, with its more complex structure, the reconstructed results of the proposed method also contain more detailed information. In addition, if the proposed method does not perform color correction, the spatial distribution of the objects can still be recovered, but the color deviates far from the original objects: since the SPD has a stronger response to green, the reconstructed results are greener than the originals. Furthermore, although the differential measurement method is used in the experiments, the system instruments themselves introduce noise that is difficult to eliminate. The traditional iterative optimization algorithms are more significantly affected by this noise: the quality of their reconstruction results deteriorates faster, and more noise appears in the results. The proposed method includes two priors, denoising and demosaicking, so its reconstruction results contain less noise and are more robust. A more detailed quantitative description of the optical experimental results is provided in Section 4 of Supplement 1.

4. Conclusion

In conclusion, we have proposed a new PnP-GAP algorithm for the reconstruction of color CGI at low sampling rates. By integrating a deep denoising prior into the PnP-GAP framework and applying a joint reconstruction and demosaicking method, we overcome the shortcomings of previous algorithms, such as low imaging efficiency, difficulty of implementation, and limited generalization ability. Through simulations and optical experiments, we have demonstrated the superiority of the proposed algorithm over the classical correlation-based CGI, CSGI, and PnP-GAP algorithms. Different denoising and demosaicking prior algorithms were utilized to explore their impacts on the proposed method; the pre-trained deep denoising network and deep demosaicking network performed best. In addition, we analyzed the response of the SPD to the different color channels, which enables more realistic reconstruction results after the demosaicking operation. Finally, the proposed method achieves high-quality, high-fidelity reconstructions of color objects at a relatively low SR (0.0625). It is worth pointing out, however, that the method relies on large-scale matrix operations, so the computational burden grows when the reconstructed image resolution is large; further efforts should be made to address this problem. In summary, we believe that this new reconstruction method for color CGI will enable CGI to obtain information in real scenes more quickly and effectively and will further promote its practical applications.

Funding

National Natural Science Foundation of China (61775121); Beijing Municipal Natural Science Foundation (4222081).

Disclosures

The authors declare that there are no conflicts of interest related to this article. We thank the reviewers for some useful suggestions.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Supplemental document

See Supplement 1 for supporting content.

References

1. T. B. Pittman, Y. H. Shih, D. V. Strekalov, and A. V. Sergienko, “Optical imaging by means of two-photon quantum entanglement,” Phys. Rev. A 52(5), R3429–R3432 (1995). [CrossRef]  

2. R. S. Bennink, S. J. Bentley, and R. W. Boyd, “Two-Photon coincidence imaging with a classical source,” Phys. Rev. Lett. 89(11), 113601 (2002). [CrossRef]  

3. A. Gatti, E. Brambilla, M. Bache, and L. A. Lugiato, “Correlated imaging, quantum and classical,” Phys. Rev. A 70(1), 013802 (2004). [CrossRef]  

4. J. H. Shapiro, “Computational ghost imaging,” Phys. Rev. A 78(6), 061802 (2008). [CrossRef]  

5. Y. Bromberg, O. Katz, and Y. Silberberg, “Ghost imaging with a single detector,” Phys. Rev. A 79(5), 053840 (2009). [CrossRef]  

6. B. Sun, M. Edgar, R. Bowman, L. Vittert, S. Welsh, A. Bowman, and M. Padgett, “3D computational imaging with single-pixel detectors,” Science 340(6134), 844–847 (2013). [CrossRef]  

7. M. Sun, M. Edgar, G. Gibson, B. Sun, N. Radwell, R. Lamb, and M. Padgett, “Single-pixel three-dimensional imaging with time-based depth resolution,” Nat. Commun. 7(1), 12010 (2016). [CrossRef]  

8. J. Bertolotti, E. G. van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491(7423), 232–234 (2012). [CrossRef]  

9. W. Gong and S. Han, “Correlated imaging in scattering media,” Opt. Lett. 36(3), 394–396 (2011). [CrossRef]  

10. P. Clemente, V. Durán, V. Torres-Company, E. Tajahuerce, and J. Lancis, “Optical encryption based on computational ghost imaging,” Opt. Lett. 35(14), 2391–2393 (2010). [CrossRef]  

11. J. Wu, Z. Xie, Z. Liu, W. Liu, Y. Zhang, and S. Liu, “Multiple-image encryption based on computational ghost imaging,” Opt. Commun. 359(1), 38–43 (2016). [CrossRef]  

12. R. L. Cook, “Stochastic sampling in computer graphics,” ACM Trans. Graph. 5(1), 51–72 (1986). [CrossRef]  

13. J. A. Tropp, J. N. Laska, M. F. Duarte, J. K. Romberg, and R. G. Baraniuk, "Beyond Nyquist: Efficient sampling of sparse bandlimited signals," IEEE Trans. Inform. Theory 56(1), 520–544 (2010). [CrossRef]

14. O. Katz, Y. Bromberg, and Y. Silberberg, “Compressive ghost imaging,” Appl. Phys. Lett. 95(13), 131110 (2009). [CrossRef]  

15. F. Ferri, D. Magatti, L. A. Lugiato, and A. Gatti, “Differential ghost imaging,” Phys. Rev. Lett. 104(25), 253603 (2010). [CrossRef]  

16. B. Sun, S. S. Welsh, M. P. Edgar, J. H. Shapiro, and M. J. Padgett, “Normalized ghost imaging,” Opt. Express 20(15), 16892–16901 (2012). [CrossRef]  

17. S. S. Welsh, M. P. Edgar, R. Bowman, P. Jonathan, B. Sun, and M. J. Padgett, “Fast full-color computational imaging with single-pixel detectors,” Opt. Express 21(20), 23068–23074 (2013). [CrossRef]  

18. E. Salvador-Balaguer, P. Clemente, E. Tajahuerce, F. Pla, and J. Lancis, “Full-color stereoscopic imaging with a single-pixel photodetector,” J. Disp. Technol. 12(4), 1 (2015). [CrossRef]  

19. T. Torii, Y. Haruse, S. Sugimoto, and Y. Kasaba, “Time division ghost imaging,” Opt. Express 29(8), 12081–12092 (2021). [CrossRef]  

20. A. J. C. Moreira, R. T. Valadas, and A. M. de Oliveira Duarte, “Optical interference produced by artificial light,” Wirel. Netw. 3(2), 131–140 (1997). [CrossRef]  

21. P. W. Wang, C. L. Wang, C. P. Yu, S. Yue, W. L. Gong, and S. S. Han, “Color ghost imaging via sparsity constraint and non-local self-similarity,” Chin. Opt. Lett. 19(2), 021102 (2021). [CrossRef]  

22. Z. Zhang, S. Liu, J. Peng, M. Yao, G. Zheng, and J. Zhong, "Simultaneous spatial, spectral, and 3D compressive imaging via efficient Fourier single-pixel measurements," Optica 5(3), 315–319 (2018). [CrossRef]

23. Q. Yi, L. Z. Heng, L. Liang, Z. Guangcan, C. F. Siong, and Z. Guangya, “Hadamard transform-based hyperspectral imaging using a single-pixel detector,” Opt. Express 28(11), 16126–16139 (2020). [CrossRef]  

24. M. Lyu, W. Wang, H. Wang, H. Wang, G. Li, N. Chen, and G. Situ, "Deep learning-based ghost imaging," Sci. Rep. 7(1), 17865 (2017). [CrossRef]

25. Y. He, G. Wang, G. Dong, S. Zhu, H. Chen, A. Zhang, and Z. Xu, "Ghost imaging based on deep learning," Sci. Rep. 8(1), 6469 (2018). [CrossRef]

26. F. Wang, H. Wang, H. Wang, G. Li, and G. Situ, “Learning from simulation: An end-to-end deep-learning approach for computational ghost imaging,” Opt. Express 27(18), 25560–25572 (2019). [CrossRef]  

27. Y. Ni, D. Zhou, S. Yuan, X. Bai, Z. Xu, J. Chen, C. Li, and X. Zhou, “Color computational ghost imaging based on a generative adversarial network,” Opt. Lett. 46(8), 1840–1843 (2021). [CrossRef]  

28. H. Liu, Y. N. Chen, L. Zhang, D. H. Li, and X. W. Li, “Color ghost imaging through the scattering media based on A-cGAN,” Opt. Lett. 47(3), 569–572 (2022). [CrossRef]  

29. Z. Yu, Y. Liu, J. X. Li, X. Bai, Z. Z. Yang, Y. Ni, and X. Zhou, “Color computational ghost imaging by deep learning based on simulation data training,” Appl. Opt. 61(4), 1022–1029 (2022). [CrossRef]  

30. A. Beck and M. Teboulle, “A fast iterative shrinkage-thresholding algorithm for linear inverse problems,” SIAM J. Imaging Sci. 2(1), 183–202 (2009). [CrossRef]  

31. D. Ulyanov, A. Vedaldi, and V. Lempitsky, “Deep image prior,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 9446–9454 (2018).

32. X. Liao, H. Li, and L. Carin, “Generalized alternating projection for weighted-L2,1 minimization with applications to model-based compressive sensing,” SIAM J. Imaging Sci. 7(2), 797–823 (2014). [CrossRef]  

33. S. V. Venkatakrishnan, C. A. Bouman, and B. Wohlberg, “Plug-and-play priors for model based reconstruction,” in Proc. IEEE Global Conf. Signal Inf. Process., 945–948 (2013).

34. X. Yuan, “Generalized alternating projection based total variation minimization for compressive sensing,” in Proc. IEEE Int. Conf. Image Process., 2539–2543 (2016).

35. M. Qiao, X. Liu, and X. Yuan, “Snapshot temporal compressive microscopy using an iterative algorithm with untrained neural networks,” Opt. Lett. 46(8), 1888–1891 (2021). [CrossRef]  

36. K. Zhang, W. Zuo, and L. Zhang, "FFDNet: Toward a fast and flexible solution for CNN-based image denoising," IEEE Trans. on Image Process. 27(9), 4608–4622 (2018). [CrossRef]

37. R. J. Tan, K. Zhang, W. M. Zuo, and L. Zhang, “Color image demosaicking via deep residual learning,” in Proc. IEEE Int. Conf. Multimedia Expo, 793–798 (2017).

38. A. Coates, A. Y. Ng, and H. Lee, “An analysis of single-layer networks in unsupervised feature learning,” J. Mach. Learn. Res. 15, 215–223 (2011).

39. M. J. Huiskes and M. S. Lew, "The MIR Flickr retrieval evaluation," in Proc. 1st ACM Int. Conf. Multimedia Inf. Retr., 39–43 (2008).

40. C. Li, W. Yin, H. Jiang, and Y. Zhang, "An efficient augmented Lagrangian method with applications to total variation minimization," Comput. Optim. Appl. 56(3), 507–530 (2013). [CrossRef]
