
Block matching low-rank for ghost imaging

Open Access

Abstract

High-quality ghost imaging (GI) under low sampling is very important for scientific research and practical applications, and how to reconstruct a high-quality image from few samplings has long been a focus of ghost imaging research. In this work, based on the hypothesis that the matrix stacked from the vectors of an image's nonlocal similar patches is of low rank and has sparse singular values, we theoretically and experimentally demonstrate a method that applies projected Landweber regularization and block matching low-rank denoising to obtain excellent images under low sampling, which we call block matching low-rank ghost imaging (BLRGI). Compared with "GI via sparsity constraint," "joint iteration GI" and "total variation based GI," both simulation and experiment show that BLRGI achieves better imaging quality at low sampling in terms of peak signal-to-noise ratio, structural similarity index and visual observation.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Ghost imaging (GI) is a newly developed imaging technique based on the correlation of light-field fluctuations: it reconstructs the object through the intensity correlation between the object beam and the reference beam. The total intensity of the object beam, which carries the object's information, is collected by a bucket detector, while the reference beam is detected directly by a spatially resolving detector [1,2].

In 2008, computational ghost imaging (CGI) was proposed by Shapiro [3] and first experimentally verified by Bromberg [4]. Since then, many researchers have paid close attention to the practical application of ghost imaging [5,6] due to its flexible optical design, compatibility with computer processing, and high signal-to-noise ratio (SNR) compared with conventional GI [7–9]. Sun et al. [10] achieved 3-D computational imaging with multiple single-pixel detectors and a digital light projector (DLP), further promoting the application of computational ghost imaging in real scenes [11,12].

To apply ghost imaging in practice, the problems of long imaging time and low imaging quality must be solved. Recently, various ghost imaging algorithms have been proposed to address these two problems, among which the most popular is compressive ghost imaging, which reconstructs the image based on compressive sensing. As a new signal processing method, compressive sensing exploits the sparse structure of natural images [13] and enables ghost imaging from sub-Nyquist sampling. It can obtain high-quality reconstructed images while greatly reducing the acquisition time [13–17].

As far as we know, there are two main classes of compressive ghost imaging methods: one relies on a regularization step followed by denoising [18–21], whereas the other is based on a variational optimization problem with penalty terms. Methods in the first category preserve features such as edges but suffer from ringing artifacts near them. Methods in the second category exploit pixel-wise prior knowledge (e.g., minimizing the total variation to enforce local smoothness) or global prior features (e.g., forcing sparsity of discrete wavelet transform coefficients to ensure the dominance of low frequencies) of natural images for higher imaging quality [22–25].

Owing to the nonlocal self-similarity of images, many patches similar to a given patch can be found across the whole image. In this paper, we stack the vectors of these similar patches into a low-rank matrix and design a threshold shrinkage algorithm by approximating this low-rank matrix. We integrate the low-rank method into the compressive ghost imaging scheme, which we call block matching low-rank ghost imaging (BLRGI). The iterative process consists of two parts: regularization and denoising. The output of the regularization step is a sharp but noisy estimated image. In the denoising step, the low-rank matrix approximation is applied to the output of the regularization step to suppress the undersampling noise and artifacts. Furthermore, the estimate of the noise variance is updated in each iteration to compute the threshold parameter. Simulation and experiment show that the proposed algorithm obtains high imaging quality both numerically and in visual perception.

To our knowledge, Ref. [25] proposed a ghost imaging method under a low-rank constraint that takes advantage of the regularity between rows or columns of a two-dimensional image. Compared with applying the low-rank constraint directly to the entire image, our approach has the following advantages. First, we use a decoupled iterative scheme with SVD shrinkage, in which an efficient projected Landweber regularization serves as the preprocessing step to extract more details. Second, in the denoising step, we exploit the nonlocal self-similarity of images and process the regularized image in a sliding-window manner with blocks of fixed size; the vectors of similar blocks are stacked into a low-rank matrix, and the noise is attenuated by threshold shrinkage. Using similar blocks for the low-rank constraint removes noise effectively while preserving image texture details. Our method obtains high-quality reconstructed images by alternately performing the regularization and denoising steps.

2. Block matching low-rank ghost imaging

Our scheme is presented in Fig. 1. As the figure shows, the proposed method contains a regularization step and a denoising step, which are carried out alternately. In particular, our scheme is based on decoupling the regularization and denoising steps in the GI process: (1) projected Landweber regularization in the first step and (2) a denoising step using a block matching low-rank approach. We describe these two steps in detail in this section.

Fig. 1. The flowchart of BLRGI.

In GI, the speckle field of the $m$-th sampling is recorded as $I_m(i,j)$, where $m=1,2,\ldots,M$ and $M$ is the number of samplings, and the transmission beam modulated by the object with transmission coefficient $O(i,j)$ (of size $r \times c$) is measured by the bucket detector. The result of the $m$-th sampling is recorded as $B_m$. Each speckle intensity $I_m(i,j)$ is then rearranged as a row vector $\Psi_m$ of size $1\times N$ ($N=r\times c$). Repeating this for all $M$ samplings, we obtain the following $M\times N$ sensing matrix $A$:

$$A=\left[ \begin{array}{c} \Psi_1\\ \Psi_2\\ \vdots \\ \Psi_M \end{array} \right]= \left[ \begin{array}{cccc} I_1(1,1) & I_1(1,2) & \cdots & I_1(r,c) \\ I_2(1,1) & I_2(1,2) & \cdots & I_2(r,c) \\ \vdots & \vdots & \ddots & \vdots \\ I_M(1,1) & I_M(1,2) & \cdots & I_M(r,c) \\ \end{array}\right]$$
The $M$ results from the bucket detector can be arranged as an $M \times 1$ column vector $Y$:
$$Y=[B_1,B_2,\ldots,B_M]^T$$
If we denote the unknown target object $O(i, j)$ as an $N$-dimensional column vector $X$ ($N \times 1$), then we have the framework $Y = AX$, whose matrix form is expressed as follows:
$$Y=\left[ \begin{array}{c} B_1\\ B_2\\ \vdots \\ B_M \end{array} \right]= \left[ \begin{array}{cccc} I_1(1,1) & I_1(1,2) & \cdots & I_1(r,c) \\ I_2(1,1) & I_2(1,2) & \cdots & I_2(r,c) \\ \vdots & \vdots & \ddots & \vdots \\ I_M(1,1) & I_M(1,2) & \cdots & I_M(r,c) \\ \end{array}\right] \left[ \begin{array}{c} x_1\\ x_2\\ \vdots \\ x_N \end{array} \right]$$
Image reconstruction amounts to obtaining the unknown $X$ by solving the inverse problem $Y=AX$. If the number of samplings $M$ is less than $N$ ($M<N$), this problem is ill-posed. Once the random speckle matrix $A$ and the bucket values $Y$ from the ghost imaging system are obtained, we apply our scheme to reconstruct the image from these data. Next, we introduce this method in detail.
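For concreteness, the following minimal sketch (not the authors' code) shows how the sensing matrix $A$ and bucket vector $Y$ can be assembled from $M$ speckle patterns under the noise-free forward model $Y = AX$; the function name, array sizes and the Bernoulli toy data are illustrative assumptions.

```python
import numpy as np

def build_measurements(speckles, obj):
    """speckles: (M, r, c) array of speckle patterns I_m(i, j);
    obj: (r, c) array of transmission coefficients O(i, j)."""
    M = speckles.shape[0]
    A = speckles.reshape(M, -1)        # each row is a flattened pattern Psi_m (1 x N)
    Y = A @ obj.reshape(-1, 1)         # bucket values B_m = sum_ij I_m(i,j) O(i,j)
    return A, Y

# toy example: binary (Bernoulli) speckles and a random 16 x 16 binary object
rng = np.random.default_rng(0)
obj = (rng.random((16, 16)) > 0.5).astype(float)
speckles = rng.integers(0, 2, size=(100, 16, 16)).astype(float)
A, Y = build_measurements(speckles, obj)   # A: 100 x 256, Y: 100 x 1
```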

2.1 Proposed ghost imaging scheme

In general, the quality of images reconstructed by the second-order correlation equation is poor at low measurement numbers, whereas we expect to obtain high-quality reconstructions with fewer samplings. Compressive ghost imaging can achieve high-quality results with fewer measurements, and the method proposed in this paper belongs to this class. Our method is based on iterating regularization and denoising steps.

Step 1: Projected Landweber regularization. In the regularization step, we obtain the preprocessed image by the following projected Landweber iteration regularization (PLIR). Compared with other regularization methods (such as Tikhonov regularization), PLIR is well suited to large problems, and its results are stable and easy to implement. The result of PLIR is denoted $X^{(k)}$ (in our method, we set $X^{(0)}=\textbf {0}$):

$$X^{(k)} = X^{(k-1)}+ DA^T(Y-AX^{(k-1)}), ~~~~~~k=1, 2, \ldots, K$$
where $D$ is set to the pseudo-inverse of $A^T A$. Here, $X^{(k)}$ is the approximate image at the $k$-th iteration, and $A^T$ denotes the transpose of $A$. By taking this regularization step, we first obtain an initial, noisy estimate $X^{(1)}$ from $Y$, $A$ and $X^{(0)}$ in a single iteration.
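As a rough illustration of this step (a sketch, not the authors' implementation), the update of Eq. (4) can be written as follows; $D$ is taken as the pseudo-inverse of $A^TA$ as stated above, and since the text does not specify the projection, a nonnegativity clip is assumed here as one common choice for the "projected" part.

```python
import numpy as np

def plir(A, Y, X_prev):
    """One projected Landweber update, Eq. (4), with D = (A^T A)^+."""
    D = np.linalg.pinv(A.T @ A)                  # in practice D can be precomputed once
    X = X_prev + D @ (A.T @ (Y - A @ X_prev))    # Landweber step
    return np.clip(X, 0.0, None)                 # assumed projection: nonnegativity
```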

The goal of ghost imaging is to reconstruct a sharper image with low sampling. However, the regularization step of Eq. (4) has the side effect of introducing new artifacts. This regularization step is recorded as $X^{(k)} = PLIR(A,Y,X^{(k-1)})$. To suppress the amplified noise and the artifacts introduced by Eq. (4), we apply the block matching low-rank minimization method to filter the preprocessed image $X^{(k)}$ in the denoising step.

Step 2: Block matching low-rank denoising. The block matching low-rank (BMLR) approach has shown promising performance in image denoising. Hence, we integrate it into the compressive ghost imaging scheme.

The low-rank matrix recovery problem aims to estimate a low-rank matrix $S$ from its observation matrix $X$. This is a non-convex problem, but it can be relaxed into a convex one using the nuclear norm:

$$\hat{S}= \arg \min_{S} \|S\|_{*}, ~~ s.t. \|X-S\|_2^2\leq \eta^2$$
where $\|S\|_{*}$ denotes the nuclear norm of the matrix $S$, defined as the sum of $S$'s singular values, i.e., $\|S\|_{*}=\sum _i \sigma _{i}{(S)}$, with $\sigma _{i}{(S)}$ the $i$-th singular value of $S$, and $\eta$ is a small number that measures the proximity between $X$ and $S$. To solve Eq. (5), the following formulation is used in the SVD domain:
$$(U,\Sigma, V)=\arg \min_{U,\Sigma,V} \|X-U\Sigma V^T\|^2_2 +\sum_i \sigma_{i}(S)$$
Here, $U$ and $V$ are the orthogonal matrices. The authors of [26] have proved that the optimal solution of Eq. (6) can be simply achieved by the singular soft-thresholding operation:
$$\left\{ \begin{array}{lr} (U,\Sigma,V)={SVD}(X)\\ \hat{\Sigma}=S_{\tau}(\Sigma)\\ \end{array} \right.$$
where $S_{\tau }$ denotes the soft-thresholding operator with threshold $\tau$, and the reconstructed data matrix $\hat {S}$ can be obtained conveniently by $\hat {S}=U \hat {\Sigma } V^T$.
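A minimal sketch of this singular soft-thresholding, assuming a fixed threshold $\tau$ supplied by the caller (the paper updates the threshold from the estimated noise variance in each iteration), might look as follows.

```python
import numpy as np

def svd_soft_threshold(X, tau):
    """Eq. (7): SVD of X, soft-threshold the singular values, rebuild S_hat."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_hat = np.maximum(s - tau, 0.0)     # S_tau(Sigma): soft-thresholding
    return (U * s_hat) @ Vt              # S_hat = U Sigma_hat V^T
```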

We explore the nonlocal self-similarity approach based on the SVD. For a given reference patch $p$ from a noisy image, we select a group of patches from the image that are similar to $p$; the similarity of two patches is defined as in [27]. Suppose there are $\mathcal {N}(p)$ such similar patches (including $p$), labeled by $j$ with $1\leq j \leq \mathcal {N}(p)$. We then stack these similar patch vectors into a matrix $X_p = [x_1,x_2,\ldots ,x_j,\ldots ,x_{\mathcal {N}(p)}]$. Owing to the similarity of its columns, this matrix is approximately low-rank and has sparse singular values, so we apply the low-rank minimization method to reduce the undersampling noise.

For each local patch $x_p$ of size $b\times b$ from the image $X^{(k)}$, we search for a group of $\mathcal {N}(p)$ nonlocal similar patches in the image (in practice, in a large-enough image area) by block matching. Here, we define the block distance as the normalized squared $l^2$-norm of the difference between two blocks:

$$d(x_p,\bar{x}_p)=\frac{1}{b^2}\|x_p-\bar{x}_p\|^2_2$$
where $\bar {x}_p$ is an arbitrary block in the search neighborhood. We select the $\mathcal {N}(p)$ patches with the smallest block distances, and $\mathcal {N}(p)$ is set differently according to the image and the number of samplings.

By stacking these similar patches' vectors into a $b^2\times \mathcal {N}(p)$ matrix, denoted by $X_p$, we get $X_p = S_p+\Gamma _p$, where $S_p$ and $\Gamma _p$ denote the patch matrices of the clean image and of the undersampling noise, respectively.
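The block matching and stacking described above could be sketched as follows; the search-window radius and the default values of $b$ and $\mathcal{N}(p)$ are illustrative assumptions, since the text only requires a "large-enough image area".

```python
import numpy as np

def match_patches(img, i0, j0, b=5, Np=30, radius=20):
    """Find the Np patches closest to the reference patch at (i0, j0) under Eq. (8)
    and stack them as the columns of a (b^2 x Np) matrix X_p."""
    ref = img[i0:i0 + b, j0:j0 + b]
    H, W = img.shape
    cands = []
    for i in range(max(0, i0 - radius), min(H - b, i0 + radius) + 1):
        for j in range(max(0, j0 - radius), min(W - b, j0 + radius) + 1):
            blk = img[i:i + b, j:j + b]
            cands.append((np.sum((ref - blk) ** 2) / b**2, i, j))   # Eq. (8)
    cands.sort(key=lambda t: t[0])
    sel = cands[:Np]
    Xp = np.stack([img[i:i + b, j:j + b].ravel() for _, i, j in sel], axis=1)
    return Xp, [(i, j) for _, i, j in sel]
```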

Then, we utilize the SVD to calculate the singular values of the matrix formed by these similar patches. For a natural image, $S_p$ should be a low-rank matrix; thus, we can apply the low-rank matrix approximation [Eq. (7)] to estimate $S_p$ from $X_p$:

$$\widehat{S_p}=\arg \min_{S_p}\|S_p\|_{\ast},~~~ s.t. \|X_p-S_p\|_2^2\leq \eta^2$$
Here, $\|S_p\|_{\ast }$ denotes the nuclear norm of the matrix $S_p$.

The optimal solution of Eq. (9) can be obtained by the singular soft-thresholding operation.

$$\left\{ \begin{array}{lr} (U_p,\Sigma_p,V_p)={SVD}(X_p)\\ \widehat{\Sigma_p}=S_{\tau_p}(\Sigma_p)\\ \end{array} \right.$$
where $U_p$ and $V_p$ are the orthonormal matrices. The reconstructed image matrix $\widehat {S_p}$ is obtained by $\widehat {S_p}=U_p \widehat {\Sigma _p} V_p^T$.

The whole image is then reconstructed by aggregating all the denoised patches and is recorded as $q^{(k)}$. The block matching low-rank denoising process is recorded as $q^{(k)}=BMLR(X^{(k)})$. The result is returned to Step 1 for the next regularization iteration, with the previous estimate set to the denoised image, i.e., $X^{(k)} = q^{(k)}$.
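The aggregation of overlapping denoised patches into $q^{(k)}$ is not spelled out in the text; a simple sketch using plain per-pixel averaging (one common choice, assumed here) is shown below.

```python
import numpy as np

def aggregate(shape, patches, positions, b=5):
    """Average all denoised b x b patches back into an image of the given shape."""
    acc = np.zeros(shape)
    cnt = np.zeros(shape)
    for patch, (i, j) in zip(patches, positions):
        acc[i:i + b, j:j + b] += patch
        cnt[i:i + b, j:j + b] += 1.0
    return acc / np.maximum(cnt, 1.0)   # avoid division by zero for uncovered pixels
```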

We integrate the block matching low-rank minimization method into the compressive ghost imaging problem, leading to a powerful algorithm. In our scheme, regularization and denoising are performed alternately; when a preset number of iterations is reached, the iteration stops and the final reconstructed image is obtained. The whole ghost imaging algorithm is summarized in Algorithm 1 and is recorded as BLRGI in this paper.

Algorithm 1. The BLRGI algorithm.
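Reading Algorithm 1 as the alternation described above, a condensed sketch (not the authors' code) could look as follows; it reuses the earlier plir, match_patches, svd_soft_threshold and aggregate sketches, keeps the threshold $\tau$ fixed for simplicity, and uses the 3-pixel reference-patch step mentioned in Sec. 3.1.

```python
import numpy as np

def blrgi(A, Y, shape, K=50, b=5, Np=30, step=3, tau=0.1, tol=1e-6):
    """Alternate PLIR (Step 1) and BMLR denoising (Step 2) for up to K iterations."""
    r, c = shape
    X = np.zeros((r * c, 1))                     # X^(0) = 0
    for k in range(K):
        X = plir(A, Y, X)                        # Step 1: regularization, Eq. (4)
        img = X.reshape(r, c)
        patches, positions = [], []
        for i0 in range(0, r - b + 1, step):     # Step 2: sliding-window BMLR
            for j0 in range(0, c - b + 1, step):
                Xp, pos = match_patches(img, i0, j0, b, Np)
                Sp_hat = svd_soft_threshold(Xp, tau)          # Eqs. (9)-(10)
                for col, (i, j) in enumerate(pos):
                    patches.append(Sp_hat[:, col].reshape(b, b))
                    positions.append((i, j))
        q = aggregate((r, c), patches, positions, b)          # denoised image q^(k)
        converged = np.mean((q - img) ** 2) < tol
        X = q.reshape(-1, 1)                     # feed q^(k) back into the next PLIR step
        if converged:                            # stop once the estimate stabilizes
            break
    return X.reshape(r, c)
```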

3. Results

To test the effectiveness of our proposed scheme, we demonstrate its performance via numerical simulations and experimental results, and compare it with the TV-based GI (TVAL3) [24], ghost imaging via sparsity constraint (GISC) in the discrete cosine domain [16] and joint iteration compressive ghost imaging (JIGI) [18,21] algorithms.

3.1 Numerical simulation results

To objectively evaluate the performance of our proposed method, we measure the reconstruction quality quantitatively in terms of the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM) [28,29]. PSNR and SSIM reflect the similarity between the reconstructed image and the original image. They are defined as follows:

$$\textrm{PSNR}=10\times \log_{10} \left[\frac{maxVal^2}{\textrm{MSE}}\right],$$
where $\textrm {MSE}=\frac {1}{r\times c}\sum _{i=1}^{r}\sum _{j=1}^{c}[u(i,j)-x(i,j)]^2$ and $maxVal$ is the maximum possible pixel value of the image, and:
$$\textrm{SSIM}(u,x)=\frac{(2\mu_u \mu_x+C_1)(2\sigma_{u x} +C_2)}{(\mu^2_u+\mu^2_x+C_1)(\sigma^2_u+\sigma^2_x+C_2)},$$
here, $u$ represents the original image consisting of $r\times c$ pixels, and $x$ denotes the reconstructed image. $L$ is the dynamic range of the image pixels, which takes the value $255$ in our paper. $\mu _u$ and $\mu _x$ are the means of $u$ and $x$, respectively; $\sigma _u$ and $\sigma _x$ are their standard deviations; and $\sigma _{ux}$ is the cross correlation of $u$ and $x$ after removing their means. The terms $C_1$ and $C_2$ are small positive constants that stabilize each factor; in our simulations, we set $C_1=C_2=(0.05L)^2$. Naturally, the larger the PSNR and SSIM values, the better the quality of the reconstructed image.
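For reference, the two metrics as defined above can be computed with the following sketch; note that the single-window SSIM shown here follows Eq. (12) literally, whereas practical SSIM implementations [28] average a locally windowed version.

```python
import numpy as np

def psnr(u, x, max_val=255.0):
    """Eq. (11) with MSE computed over all r x c pixels."""
    mse = np.mean((u.astype(float) - x.astype(float)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(u, x, L=255.0):
    """Eq. (12) evaluated once over the whole image, with C1 = C2 = (0.05 L)^2."""
    C1 = C2 = (0.05 * L) ** 2
    u = u.astype(float); x = x.astype(float)
    mu_u, mu_x = u.mean(), x.mean()
    var_u, var_x = u.var(), x.var()
    cov_ux = ((u - mu_u) * (x - mu_x)).mean()
    return ((2 * mu_u * mu_x + C1) * (2 * cov_ux + C2)) / \
           ((mu_u**2 + mu_x**2 + C1) * (var_u + var_x + C2))
```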

In the block matching low-rank denoising, the block size $b$ may influence the imaging performance. If the block size $b$ is too large, the result becomes over-smoothed; if it is too small, edges are not well preserved. In the ghost imaging process we therefore need to trade off smoothing against edge preservation. In this paper, we select the block size $b$ from $4$, $5$ and $6$, adjusting it according to the best imaging results in simulations or experiments. We choose the first reference patch from the top-left corner of the image and use a step of 3 pixels in both rows and columns to move from one reference patch to the next. All simulations are performed in MATLAB R2013a on an Intel(R) Core(TM) i5-8250U processor (1.6 GHz) with 32 GB of memory. To estimate the complexity of our method for an $N \times N$ image, we assume that the average time to find the similar patches of each reference patch is $T$. The SVD of each group of size $b^2 \times B$ costs $O(B \times b^4)$, and the regularization costs $O(M\times N^2)$ per iterative update. Hence, the total complexity per iteration is $O[N^2(B \times b^4+T)+M\times N^2]$.

The first object to be imaged is the binary object "gong" (with $128 \times 128$ pixels). Figure 2 shows the reconstructed images of TVAL3, GISC, JIGI and BLRGI with sampling numbers of 500, 600, 700, 800 and 900, respectively. From Fig. 2, we can see that the quality of the ghost imaging results of all four methods improves as the sampling number increases. When the sampling number is 500, the results of BLRGI are better than those of the other methods, and the shape of the object can be distinguished although it remains blurry. When the number of samplings increases to 900, the image quality of our method is still better than the others, and the result is visually close to the original image.

Fig. 2. Simulation results of "gong" image with TVAL3, GISC, JIGI and BLRGI under different sampling numbers $M$.

To numerically compare these four kinds of ghost imaging results, we calculated their PSNRs and SSIMs under different sampling numbers, shown in Figs. 3(a) and 3(b). In Fig. 3(a), we observe that when the number of samplings is less than 950, the PSNR of BLRGI is significantly larger than that of the other algorithms; in particular, when $M=600$, the PSNR values of TVAL3, GISC, JIGI and BLRGI are 9.89 dB, 6.97 dB, 9.88 dB and 12.15 dB, respectively. From Fig. 2, we can also see that the reconstruction of BLRGI is visibly clearer than the other results. The PSNR of JIGI becomes slightly greater than that of BLRGI when $M>950$; e.g., the PSNR values of JIGI and BLRGI are 26.65 dB and 23.24 dB when the number of samplings is 1000. However, Fig. 3(b) shows that the SSIM of BLRGI is still larger than that of JIGI in this regime: the PSNR of JIGI is about $12\%$ higher than that of BLRGI, but the SSIM of BLRGI is about $14\%$ higher than that of JIGI. Therefore, our algorithm shows better image reconstruction ability under low sampling numbers.

Fig. 3. Numerical curves of PSNR and SSIM under different $M$ with TVAL3, GISC, JIGI and BLRGI for "gong" image.

In fact, our method BLRGI is applicable not only to binary objects but also to gray-scale objects. To verify this, we choose a more complicated gray-scale object, "cameraman," to show the imaging result intuitively. The ghost imaging results using TVAL3, GISC, JIGI and BLRGI are shown in Fig. 4 for different numbers of samplings $M$, and the corresponding numerical PSNR and SSIM results are shown in Fig. 5. From Fig. 4, we can see that the imaging results of BLRGI are clearer than those of TVAL3, GISC and JIGI at the same sampling numbers. Careful observation of Figs. 5(a) and 5(b) shows that the PSNR and SSIM values of BLRGI are correspondingly higher than those of the other methods under the same sampling number. From these figures and curves, we can see that our results are superior to the other methods, both subjectively and objectively.

Fig. 4. Simulation results of "cameraman" image with TVAL3, GISC, JIGI and BLRGI under $M$ sampling.

Fig. 5. Numerical curves of PSNR and SSIM under different samples with TVAL3, GISC, JIGI and BLRGI for "cameraman" image.

As described above, our method BLRGI is an iterative process of regularization and denoising steps. To verify the effectiveness of each step, Fig. 6 shows the regularized images $X^{(k)}$ (first row), the denoised images $q^{(k)}$ (second row) and the residual images (third row) between the original image $O$ and the reconstructed image $q^{(k)}$ at each iteration number $k$ for the "gong" image with $M=900$. From left to right, as the number of iterations increases, the image obtained by regularization contains more and more detail, while the residual between the original image and the denoised image $q^{(k)}$ contains less and less information; that is, the reconstruction result of BLRGI gets closer to the original image.

Fig. 6. Regularized result, denoising result and residual image under different iteration numbers with 900 samples for "gong" image.

To illustrate this point, Fig. 7 plots the mean squared error (MSE), which measures the proximity of the reconstructed image to the original image, against the iteration number $k$. From Fig. 7, we observe that as the iteration number grows, the MSE curve decreases monotonically and ultimately becomes flat and stable, exhibiting the good stability of the proposed method. One can also observe that about 50 iterations are typically sufficient. When the MSE is almost stable or $k$ reaches the maximum number of iterations $K$, we stop iterating and obtain the final reconstructed image.

Fig. 7. MSE of the BLRGI reconstructed images versus iteration number for "gong" image with 900 samplings.

3.2 Experimental results

The schematic of the experimental system is shown in Fig. 8. In our experimental system, we use binary Bernoulli random speckle matrices to obtain the ghost imaging results and apply a commercial digital light projector (DLP, Hitachi HCP-3050X, $1024 \times 768$ pixels with pixel size $12.5\times 12.5~{\mu m}^2$, 3000 lumens) as the light source to illuminate the object. The object to be imaged is a rubber and a modified 1951 USAF resolution test pattern printed on an A4 sheet of paper. We use $128 \times 128$ pixels for each binary speckle pattern, and the field of view just covers the object region. The reflected signal light is collected by a Si transimpedance amplified photodetector (Thorlabs, PDA100A-EC, 320-1100 nm, 2.4 MHz BW, 100 $mm^2$).

Fig. 8. Experiment schematic diagram of BLRGI.

The object to be reconstructed is shown in Fig. 9(d), and the reconstructed results of TVAL3, GISC, JIGI and BLRGI with different sampling numbers ($M=3000$, $M=4000$, $M=5000$) are shown in Figs. 9(a)–9(c), respectively. From Figs. 9(a)–9(c), we observe that our result is smoother than those of TVAL3, GISC and JIGI: it recovers not only the contour of the object but also part of the details of the 1951 USAF resolution test pattern at the same sampling. From these figures, we find that BLRGI obtains higher-resolution and visually more discernible images compared with the other three methods under the same sampling number.

Fig. 9. Experimental reconstructed results with different sampling numbers (3000, 4000, 5000) and the original object.

4. Conclusion

In this paper, compressive ghost imaging via a block matching low-rank approach is developed, which exploits the image's nonlocal self-similarity. The scheme uses a decoupled iterative procedure with SVD shrinkage. First, projected Landweber regularization is used to obtain a preprocessed image; then the nonlocal self-similarity of the image is exploited to process the regularized image in a sliding-window manner with blocks of fixed size. The vectors of similar blocks are stacked into a low-rank matrix, and the noise is attenuated by threshold shrinkage. Both numerical simulations and experimental realizations show its superiority in visual observation and numerical values.

Funding

Shanghai Institute of Technology Talent Introduction (YJ2019-7); the Special Funds for Provincial Industrial Innovation in Jilin Province (2018C040-4, 2019C025).

Disclosures

The authors declare no conflicts of interest.

References

1. T. B. Pittman, Y. H. Shih, D. V. Strekalov, and A. V. Sergienko, “Optical imaging by means of two-photon quantum entanglement,” Phys. Rev. A 52(5), R3429–R3432 (1995). [CrossRef]  

2. A. Gatti, E. Brambilla, M. Bache, and L. A. Lugiato, “Ghost imaging with thermal light: comparing entanglement and classical correlation,” Phys. Rev. Lett. 93(9), 093602 (2004). [CrossRef]

3. J. H. Shapiro and B. I. Erkmen, “Ghost imaging: from quantum to classical to computational,” Adv. Opt. Photonics 2(4), 405–450 (2010). [CrossRef]  

4. Y. Bromberg, O. Katz, and Y. Silberberg, “Ghost imaging with a single detector,” Phys. Rev. A 79(5), 053840 (2009). [CrossRef]  

5. W. Gong, C. Zhao, H. Yu, M. Chen, W. Xu, and S. Han, “Three-dimensional ghost imaging lidar via sparsity constraint,” Sci. Rep. 6(1), 26133 (2016). [CrossRef]  

6. W. K. Yu, X. R. Yao, X. F. Liu, R. M. Lan, L. A. Wu, G. J. Zhai, and Q. Zhao, “Compressive microscopic imaging with "positive-negative" light modulation,” Opt. Commun. 371, 105–111 (2016). [CrossRef]  

7. C. Luo, H. Xu, and J. Cheng, “High-resolution ghost imaging experiments with cosh-gaussian modulated incoherent sources,” J. Opt. Soc. Am. A 32(3), 482–485 (2015). [CrossRef]  

8. C. L. Luo, J. Cheng, A. X. Chen, and Z. M. Liu, “Computational ghost imaging with higher-order cosh-gaussian modulated incoherent sources in atmospheric turbulence,” Opt. Commun. 352, 155–160 (2015). [CrossRef]  

9. C. Zhou, T. Tian, C. Gao, W. Gong, and L. Song, “Multi-resolution progressive computational ghost imaging,” J. Opt. 21(5), 055702 (2019). [CrossRef]  

10. B. Sun, M. Edgar, R. Bowman, L. Vittert, S. Welsh, A. Bowman, and M. Padgett, “3d computational imaging with single-pixel detectors,” Science 340(6134), 844–847 (2013). [CrossRef]  

11. L. Shen, Y. Xu-Ri, Y. Wen-Kai, W. Ling-An, and Z. Guang-Jie, “High-speed secure key distribution over an optical network based on computational correlation imaging,” Opt. Lett. 38(12), 2144–2146 (2013). [CrossRef]  

12. M. P. Edgar, G. M. Gibson, R. W. Bowman, B. Sun, N. Radwell, K. J. Mitchell, S. S. Welsh, and M. J. Padgett, “Simultaneous real-time visible and infrared video with single-pixel detectors,” Sci. Rep. 5(1), 10669 (2015). [CrossRef]  

13. M. Aßmann and M. Bayer, “Compressive adaptive computational ghost imaging,” Sci. Rep. 3(1), 1545 (2013). [CrossRef]  

14. W. Gong and S. Han, “Super-resolution far-field ghost imaging via compressive sampling,” arXiv preprint arXiv:0911.4750 (2009).

15. W. Hui and S. Han, “Coherent ghost imaging based on sparsity constraint without phase-sensitive detection,” Europhys. Lett. 98(2), 24003 (2012). [CrossRef]  

16. W. Gong and S. Han, “High-resolution far-field ghost imaging via sparsity constraint,” Sci. Rep. 5(1), 9280 (2015). [CrossRef]  

17. A. Averbuch, S. Dekel, and S. Deutsch, “Adaptive compressed image sensing using dictionaries,” SIAM J. on Imaging Sci. 5(1), 57–89 (2012). [CrossRef]  

18. H. Huang, C. Zhou, T. Tian, D. Liu, and L. Song, “High-quality compressive ghost imaging,” Opt. Commun. 412, 60–65 (2018). [CrossRef]  

19. D. Pelliccia, M. P. Olbinado, A. Rack, A. M. Kingston, G. R. Myers, and D. M. Paganin, “Towards a practical implementation of x-ray ghost imaging with synchrotron light,” IUCrJ 5(4), 428–438 (2018). [CrossRef]  

20. A. M. Kingston, G. R. Myers, D. Pelliccia, I. D. Svalbe, and D. M. Paganin, “X-ray ghost-tomography: Artefacts, dose distribution, and mask considerations,” IEEE Trans. Comput. Imaging 5(1), 136–149 (2019). [CrossRef]  

21. C. Zhou, G. Wang, H. Huang, L. Song, and K. Xue, “Edge detection based on joint iteration ghost imaging,” Opt. Express 27(19), 27295–27307 (2019). [CrossRef]  

22. X.-R. Yao, W.-K. Yu, X.-F. Liu, L.-Z. Li, M.-F. Li, L.-A. Wu, and G.-J. Zhai, “Iterative denoising of ghost imaging,” Opt. Express 22(20), 24268–24275 (2014). [CrossRef]  

23. X. Hu, J. Suo, T. Yue, L. Bian, and Q. Dai, “Patch-primitive driven compressive ghost imaging,” Opt. Express 23(9), 11092–11104 (2015). [CrossRef]  

24. Y. Huo, H. He, and F. Chen, “Compressive adaptive ghost imaging via sharing mechanism and fellow relationship,” Appl. Opt. 55(12), 3356–3367 (2016). [CrossRef]  

25. G. Wu, T. Li, J. Li, B. Luo, and H. Guo, “Ghost imaging under low-rank constraint,” Opt. Lett. 44(17), 4311–4314 (2019). [CrossRef]  

26. J. F. Cai, E. J. Candès, and Z. Shen, “A singular value thresholding algorithm for matrix completion,” SIAM J. on Optim. 20(4), 1956–1982 (2010). [CrossRef]  

27. K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3-d transform-domain collaborative filtering,” IEEE Trans. Image Process. 16(8), 2080–2095 (2007). [CrossRef]

28. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13(4), 600–612 (2004). [CrossRef]

29. A. Hore and D. Ziou, “Image quality metrics: PSNR vs. SSIM,” in 2010 20th International Conference on Pattern Recognition, (IEEE, 2010), pp. 2366–2369.
