Optica Publishing Group

Denoising in SVD-based ghost imaging

Open Access

Abstract

By the method of singular-value decomposition (SVD), ghost imaging (GI) reconstructs images with high efficiency. However, a small amount of noise can greatly degrade or even destroy the object information. In this paper, we experimentally investigate the method of truncated SVD (TSVD), which keeps only the first few largest singular values, to enhance the image quality. The contrast-to-noise ratio and structural similarity of the images are improved with appropriate truncation ratios. To further improve the image quality, we analyze the noise effects on TSVD-based GI and introduce additional filters. TSVD-based GI may find applications in rapid imaging in complex environments.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Singular-value decomposition (SVD) has become a powerful tool in digital image processing. In SVD, a matrix is represented in a space determined by two unitary matrices. SVD is effective in classical image filtering, enhancement, compression, and denoising [1]. Tufts et al. [2] estimated signal components from noisy data by SVD. Edelson et al. [3] robustly estimated the parameters of exponentially damped sinusoidal signals in additive noise with techniques of SVD and linear prediction. Multichannel SVD-based image denoising with truncated eigenvalues was proposed by Wongsawat et al. [4]. Hou [5] adopted adaptive SVD in the wavelet domain for image denoising. Jha et al. [6] applied SVD denoising to electronic nose data processing. Soobhany et al. [7] used SVD to identify the source camera phones of images. Li et al. [8] compared two SVD-based color image compression schemes.

Recently, SVD has been applied in ghost imaging (GI), which works with two correlated beams [9–13]. One beam illuminates the object and is then collected by a bucket detector. The other travels freely and is recorded by a pixelated detector. By the method of SVD, GI reconstructs object information with high efficiency [14–19]. The key technique, according to the Moore-Penrose pseudo-inverse theorem [20], is to introduce the pseudo-inverse of the measurement matrix, as in pseudo-inverse GI [14]. High-resolution images can be acquired in pseudo-inverse GI [15,16]. An experiment on iterative pseudo-inverse GI was performed with no prior information [17]. A blind watermarking algorithm was applied in pseudo-inverse GI to encrypt watermarks in optical images [18]. Optimized point spread functions were adopted in pseudo-inverse GI to realize sub-Nyquist sampling [19].

In practical applications, however, the object information in SVD-based GI is often destroyed when even a small amount of noise appears in the object beam. The additive noise comes from i) the separate propagation of the object and reference beams, ii) imbalance between the object and reference beams, and iii) ambient noise. To mitigate the noise effect on GI, Zhou et al. [21] used a mask-based scheme. Wu et al. [22] set up a convolutional neural network for denoising in computational GI. Wang et al. [23] denoised GI through principal components analysis and a compandor. Besides the reconstruction algorithm, these denoising schemes need auxiliary algorithms to reduce the noise level. In this paper, we investigate the truncated SVD (TSVD) method to reduce the influence of noise in SVD-based GI. Our method simultaneously reconstructs and denoises the ghost images. Such an integrated scheme can produce images with both high efficiency and high quality for GI.

Our paper is organized as follows. Section 2 presents the experimental setup and the matrix theory of TSVD-based GI. The contrast-to-noise ratio (CNR) and structural similarity (SSIM) are introduced to assess the image quality. In Sec. 3, the experimental results of the Hanbury Brown–Twiss (HBT) effect [24] and GI with the TSVD method are exhibited. The noise analysis and further image filtering are discussed in Sec. 4 and Sec. 5, respectively. Conclusions are drawn in Sec. 6.

2. Theory of TSVD-based GI

The experimental setup of traditional GI is shown in Fig. 1. A beam from a laser diode (LD) is projected onto a rotating ground-glass (GG) plate, which is driven by a stepping motor. The diameter of the laser spot on the GG plate is 3.1 mm. The output dynamic speckles are divided by a beam splitter (BS) into two daughter beams. One beam illuminates the object, which is a USAF resolution test chart. The other beam propagates freely and is registered by a charge-coupled device (CCD). The distances from the GG plate to CCD2 and to the object T are both 25.0 cm. The object beam is collected by another CCD with the help of a collecting lens. The signals of the two CCDs are registered and quantized by the data acquisition system (DAS) and input to a personal computer (PC). Images are reconstructed with SVD algorithms on the PC.


Fig. 1. GI experimental setup. LD: laser diode. GG: ground glass. BS: beam splitter. T: object. L: lens. DAS: data acquisition system. PC: personal computer. CCD: charge coupled device.


2.1 Theory of SVD-based GI

For simplicity in mathematics, we assume a one-dimensional object ${\mathbf x} = {[{x_1},{x_2}, \cdots ,{x_N}]^T}$ to be imaged, where $N$ is the total pixel number and T denotes transpose. The temporally random laser patterns form the measurement matrix

$${{\mathbf A}_{M \times N}} = \left( {\begin{array}{*{20}{c}} {A(1,1)}&{A(1,2)}& \cdots &{A(1,N)}\\ {A(2,1)}&{A(2,2)}& \cdots &{A(2,N)}\\ \vdots & \vdots & \ddots & \vdots \\ {A(M,1)}&{A(M,2)}& \cdots &{A(M,N)} \end{array}} \right),$$
where M is the sampling number. The measurement matrix ${\mathbf A}$ is registered by CCD2 in Fig. 1, and each row of ${\mathbf A}$ is one of the samples. The bucket detector CCD1 measures the total object signals
$${{\mathbf y}_{M \times 1}} = {{\mathbf A}_{M \times N}}{{\mathbf x}_{N \times 1}} + {{\mathbf y^{\prime}}_{M \times 1}},$$
where ${{\mathbf y^{\prime}}_{M \times 1}}$ denotes noise in object signal. In traditional GI, the image is retrieved from the normalized second-order correlation function
$${{\mathbf g}_{N \times 1}} = \frac{{M \times {\mathbf A}_{N \times M}^T{{\mathbf y}_{M \times 1}}}}{{({\mathbf A}_{N \times M}^T{{\mathbf i}_{M \times 1}})({\mathbf y}_{1 \times M}^T{{\mathbf i}_{M \times 1}})}},$$
where ${{\mathbf i}_{M \times 1}}$ is a vector of all ones, and the division divides each element of the numerator by the corresponding element of the denominator. Due to the background contribution in Eq. (3), the image information is usually obtained by
$${{\mathbf x^{\prime}}_{N \times 1}} = ({{\mathbf g}_{N \times 1}} - 1)\frac{{N \times {\mathbf y}_{1 \times M}^T{{\mathbf i}_{M \times 1}}}}{{{\mathbf i}_{1 \times N}^T{\mathbf A}_{N \times M}^T{{\mathbf i}_{M \times 1}}}}.$$

The quality of the image from Eq. (4) depends greatly on the sampling number M. In general, a greater M yields a higher image quality.
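The reconstruction of Eqs. (2)-(4) can be sketched numerically. The following snippet is only an illustrative simulation (the object, the sizes, and the uniform speckle statistics are our own assumptions, not the experimental data): with a large sampling number M, the correlation of Eq. (3) recovers the binary object up to an overall scale plus residual fluctuations.

```python
import numpy as np

# Illustrative 1-D simulation of traditional GI, Eqs. (2)-(4); the object,
# sizes, and speckle statistics are assumptions, not the paper's data.
rng = np.random.default_rng(0)
N, M = 32, 20000                      # pixels and sampling number (M >> N)
x = np.zeros(N); x[10:20] = 1.0       # binary test object
A = rng.random((M, N))                # measurement matrix: one speckle pattern per row
y = A @ x                             # bucket signals, Eq. (2), noise-free here

i_M, i_N = np.ones(M), np.ones(N)
# Eq. (3): normalized second-order correlation (element-wise division)
g = M * (A.T @ y) / ((A.T @ i_M) * (y @ i_M))
# Eq. (4): subtract the background contribution and rescale
x_rec = (g - 1.0) * (N * (y @ i_M)) / (i_N @ (A.T @ i_M))
```

With these settings the recovered profile is proportional to the object, and its fluctuations shrink as M grows, consistent with the statement above.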

In SVD-based GI, an approximate inverse matrix of ${\mathbf A}$ is obtained and applied. The measurement matrix ${\mathbf A}$ can be singular-value decomposed by

$${{\mathbf A}_{M \times N}} = {{\mathbf U}_{M \times M}}{{\mathbf \Sigma }_{M \times N}}{\mathbf V}_{N \times N}^T,$$
where ${\mathbf U}$ and ${\mathbf V}$ are unitary matrices, and ${{\mathbf \Sigma }_{M \times N}} = {\mathbf U}_{M \times M}^T{{\mathbf A}_{M \times N}}{{\mathbf V}_{N \times N}}$ is a diagonal matrix. According to the Moore-Penrose theorem [20], the approximate inverse matrix of ${\mathbf A}$ is
$${\mathbf A}_{N \times M}^{ - 1} = {{\mathbf V}_{N \times N}}{\mathbf \Sigma }_{N \times M}^{ - 1}{\mathbf U}_{M \times M}^T,$$
where all the nondiagonal elements of ${{\mathbf \Sigma }_{M \times N}}$ and ${\mathbf \Sigma }_{N \times M}^{ - 1}$ are zeros. The image retrieved from SVD-based GI is
$${{\mathbf x^{\prime\prime}}_{N \times 1}} = {\mathbf A}_{N \times M}^{ - 1}{{\mathbf y}_{M \times 1}} \simeq {{\mathbf x}_{N \times 1}} + {{\mathbf e}_{N \times 1}},$$
where the image error is ${{\mathbf e}_{N \times 1}} = {\mathbf A}_{N \times M}^{ - 1}{{\mathbf y^{\prime}}_{M \times 1}}$. If the noise is negligible, a perfect image can be reconstructed. If the error ${{\mathbf e}_{N \times 1}}$ is heavy enough, however, it will greatly degrade, or even destroy, the image.
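A compact numerical sketch of Eqs. (5)-(7) follows (a square, full-rank case; the sizes and noise level are illustrative assumptions): without noise, the pseudo-inverse reconstruction is essentially exact, while even a small additive noise term is amplified through the small singular values.

```python
import numpy as np

# SVD-based GI reconstruction, Eqs. (5)-(7); sizes and noise level are
# illustrative assumptions (square, full-rank case).
rng = np.random.default_rng(1)
N = M = 64
x = np.zeros(N); x[20:40] = 1.0        # binary test object
A = rng.random((M, N))                 # measurement matrix

U, s, Vt = np.linalg.svd(A)            # Eq. (5): A = U Sigma V^T
A_inv = Vt.T @ np.diag(1.0 / s) @ U.T  # Eq. (6): pseudo-inverse of A

x_clean = A_inv @ (A @ x)              # Eq. (7) with negligible noise
noise = 0.01 * rng.standard_normal(M)  # small additive noise y'
x_noisy = A_inv @ (A @ x + noise)      # the error e = A^{-1} y' is amplified
```

The noiseless reconstruction matches the object to machine precision, whereas the noisy one does not; this gap is the motivation for the truncation introduced next in Sec. 2.2.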

2.2 Image reconstruction in TSVD-based GI

To reduce the noise, we use the method of TSVD filtering, in which the approximate inverse matrix of ${\mathbf A}$ is replaced with

$$\tilde{{\mathbf A}}_{N \times M}^{ - 1} = {\tilde{{\mathbf V}}_{N \times r}}\tilde{{\mathbf \varSigma }}_{r \times r}^{ - 1}\tilde{{\mathbf U}}_{r \times M}^T,$$
where $r < \min (M,N)$. The elements of the diagonal matrix ${{\mathbf \Sigma }_{M \times N}}$ are arranged in descending order, so the elements of the diagonal matrix ${\mathbf \Sigma }_{N \times M}^{ - 1}$ are in ascending order. We keep only the first r elements (indices 1 to r) of ${\mathbf \Sigma }_{N \times M}^{ - 1}$ to form $\tilde{{\mathbf \varSigma }}_{r \times r}^{ - 1}$. In effect, the TSVD filter applies an ideal low-pass filter to the diagonal matrices ${\mathbf \Sigma }$ and ${{\mathbf \Sigma }^{ - 1}}$. The images can then be reconstructed with the approximate inverse matrix ${\tilde{{\mathbf A}}^{ - 1}}$:
$${\tilde{{\mathbf x}}_{N \times 1}} = \tilde{{\mathbf A}}_{N \times M}^{ - 1}{{\mathbf y}_{M \times 1}}.$$

With an appropriately chosen r, the reconstructed images can be made clear, since the noise occupies the high-frequency domain and TSVD works as a low-pass filter.
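The truncation of Eq. (8) can be illustrated in a few lines (the matrix size and choice of r below are assumptions). Because the worst-case noise gain of the inverse equals the reciprocal of its smallest retained singular value, truncating to the r largest singular values replaces a gain of $1/{s_{\min }}$ by the much smaller $1/{s_r}$:

```python
import numpy as np

# TSVD inverse of Eq. (8) on an illustrative random measurement matrix.
rng = np.random.default_rng(2)
N = M = 64
A = rng.random((M, N))
U, s, Vt = np.linalg.svd(A)            # singular values s in descending order

def truncated_inverse(r):
    """Eq. (8): build tilde A^{-1} from the r largest singular values."""
    return Vt[:r].T @ np.diag(1.0 / s[:r]) @ U[:, :r].T

A_inv_full = truncated_inverse(N)      # ordinary pseudo-inverse, Eq. (6)
A_inv_tsvd = truncated_inverse(N // 4) # keep only the first quarter

gain_full = np.linalg.norm(A_inv_full, 2)  # spectral norm = 1 / s[-1]
gain_tsvd = np.linalg.norm(A_inv_tsvd, 2)  # spectral norm = 1 / s[r-1]
```

The truncated inverse thus bounds how strongly any noise component can be magnified, at the price of discarding the object components that live in the truncated subspace.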

For a binary object, the image contrast can be defined by the difference

$$\overline {{x_1}} - \overline {{x_0}} = \frac{1}{{{N_1}}}\sum\limits_{{x_i} = 1} {{{\tilde{x}}_i}} - \frac{1}{{{N_0}}}\sum\limits_{{x_i} = 0} {{{\tilde{x}}_i}} ,$$
where ${N_1}$ (${N_0}$) is the number of object elements with ${x_i} = 1$ (${x_i} = 0$), and $\overline {{x_1}}$ ($\overline {{x_0}}$) is the mean of the image pixels corresponding to ${x_i} = 1$ (${x_i} = 0$). The image contrast-to-noise ratio (CNR) [25] is then defined by
$${C_{nr}} = \frac{{\overline {{x_1}} - \overline {{x_0}} }}{{\sqrt {\overline {\Delta x_1^2} + \overline {\Delta x_0^2} } }},$$
where the fluctuations are
$$\overline {\Delta x_1^2} = \frac{1}{{{N_1}}}\sum\limits_{{x_i} = 1} {({{\tilde{x}}_i}} - \overline {{x_1}} {)^2},\;\overline {\Delta x_0^2} = \frac{1}{{{N_0}}}\sum\limits_{{x_i} = 0} {{{({{\tilde{x}}_i} - \overline {{x_0}} )}^2}} .$$

Another factor to assess image quality is the structural similarity index (SSIM), which is defined in Ref. [26].
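For reference, the CNR of Eqs. (10)-(12) reduces to a few lines of code; the tiny "reconstruction" below is made-up synthetic data, used purely to exercise the definition.

```python
import numpy as np

# CNR of Eqs. (10)-(12) for a binary ground-truth object; the example
# arrays are made-up illustrative data.
def cnr(x_rec, x_true):
    ones = x_rec[x_true == 1]            # pixels where the object is 1
    zeros = x_rec[x_true == 0]           # pixels where the object is 0
    contrast = ones.mean() - zeros.mean()                 # Eq. (10)
    return contrast / np.sqrt(ones.var() + zeros.var())   # Eqs. (11)-(12)

x_true = np.array([0, 0, 1, 1, 1, 0, 1, 0], dtype=float)
x_rec = np.array([0.1, -0.1, 0.9, 1.1, 1.0, 0.0, 1.0, 0.0])
# cnr(x_rec, x_true) -> 10.0 for this toy data
```

Note that `np.var` uses the population (1/N) form, matching the fluctuation definitions of Eq. (12).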

3. Experimental results

We perform experiments on the HBT effect and on TSVD-based GI. The experimental results are shown below.

3.1 TSVD for HBT experiment

In the experiment of the HBT effect, the object is replaced by CCD1. We fix one pixel of CCD1 as the object and register the series of optical intensities on it. In CCD2, we select an area of $64 \times 64$ pixels and record all the intensities to form the measurement matrix. Hence the total pixel number is $N = 4,096$. In the experiment, the sampling number is $M = 4,096$, so the size of the measurement matrix ${\mathbf A}$ is $4,096 \times 4,096$. Figure 2 shows the HBT curves (left column) and the corresponding patterns (right column) with different truncation parameters.


Fig. 2. Results of HBT effect with TSVD. The truncation parameters are (a) $r = N$, (b) $0.4N$, (c) ${0.4^2}N$, (d) ${0.4^3}N$, (e) ${0.4^4}N$, and (f) ${0.4^5}N$.


We apply SVD to the measurement matrix by Eq. (5) and calculate the pseudo-inverse matrix by Eq. (6). Accordingly, the normal HBT curve and pattern are calculated with Eq. (7) and shown in Fig. 2(a). The data points are random and no peak appears, indicating a failed HBT experiment.

We then apply TSVD through Eqs. (8) and (9) to the HBT effect by setting the truncation parameters $r = 0.4N$, ${0.4^2}N$, ${0.4^3}N$, ${0.4^4}N$, and ${0.4^5}N$ in Figs. 2(b), 2(c), 2(d), 2(e), and 2(f), respectively. Note that the curves and patterns in Fig. 2 are normalized by their own peaks. The absence of peaks in Figs. 2(a) and 2(b) suffices to illustrate the existence of noise in the experiment, as stated in Sec. 1. However, HBT peaks emerge as r decreases. The full widths at half maximum of the HBT peaks in Figs. 2(c), 2(d), 2(e), and 2(f) are 0.21 mm, 0.32 mm, 0.37 mm, and 0.44 mm, respectively. A smaller truncation parameter $r$ thus gives rise to a wider peak.

Generally speaking, the HBT effect signifies point-to-point imaging, and the HBT peaks serve as the point spread function (PSF). The emergence of the HBT peaks indicates that the TSVD method is effective for GI to recover the object information in a noisy environment. With suitable truncation parameters, TSVD is able to reconstruct high-quality images.

3.2 TSVD-based GI with a binary object

In the experiment of TSVD-based GI, the object is part of the USAF resolution test chart, as shown in Fig. 1. The image resolution is $80 \times 125$ pixels, so the pixel number is $N = 10,000$. We perform image reconstruction with sampling number $M = 10,000$; therefore, the size of the measurement matrix ${\mathbf A}$ is $10,000 \times 10,000$.

After calculating the pseudo-inverse matrix of ${\mathbf A}$ and using Eqs. (7) and (9), we obtain the reconstructed images shown in Fig. 3. No image information is obtained in Figs. 3(a) and 3(b), with truncation parameters $r = N$ and $r = 0.4N$, respectively. The reason is that SVD-based GI amplifies the noise, and the heavily amplified noise swamps the image information.


Fig. 3. Results of TSVD-based GI. The truncated parameters are (a) $r = N$, (b) $0.4N$, (c) ${0.4^2}N$, (d) ${0.4^3}N$, (e) ${0.4^4}N$, and (f) ${0.4^5}N$.


The truncation parameters are $r = {0.4^2}N$, ${0.4^3}N$, ${0.4^4}N$, and ${0.4^5}N$ in Figs. 3(c), 3(d), 3(e), and 3(f), respectively. The object information is now retrieved in these images, but the image quality first improves and then degrades as the truncation parameter r decreases. Among them, the image in Fig. 3(d) is the best.

4. Noise analysis

To simulate the noise impact on TSVD-based GI, another (random) measurement matrix ${\mathbf A^{\prime}}$, with the same size as the matrix ${\mathbf A}$ used in Fig. 3, is acquired from the experiment shown in Fig. 1.

4.1 Additive noise

The additive noise contribution in the object beam in Eq. (2) is written as

$${{\mathbf y^{\prime}}_{M \times 1}} = {r_n}{{\mathbf A}_{M \times N}^{\prime}}{{\mathbf i}_{N \times 1}},$$
where ${r_n}$ is the noise ratio. The image error in Eq. (7) becomes
$${{\mathbf e}_{N \times 1}} = {\mathbf A}_{N \times M}^{ - 1}{{\mathbf y^{\prime}}_{M \times 1}} = {r_n}{\mathbf A}_{N \times M}^{ - 1}{{\mathbf A}_{M \times N}^{\prime}}{{\mathbf i}_{N \times 1}},$$
where ${\mathbf A}_{N \times M}^{ - 1}{{\mathbf A}_{M \times N}^{\prime}}$ is no longer a diagonal matrix like ${\mathbf \Sigma }_{N \times M}^{ - 1}$ in Eq. (6), and the error is completely independent of the image. Indeed, the nondiagonal elements of the noise transform ${\mathbf A}_{N \times M}^{ - 1}{{\mathbf A}_{M \times N}^{\prime}}$ cause noise amplification and greatly degrade the quality of the reconstructed images. The randomness and disorder of ${\mathbf A}_{N \times M}^{ - 1}{{\mathbf A}_{M \times N}^{\prime}}$ are analyzed in Supplement 1.
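The nondiagonal character of the noise transform can be checked numerically. The sketch below (the sizes and the two stand-in random matrices are our own assumptions) forms ${\mathbf A}^{-1}{\mathbf A}'$ for two independent random matrices and measures how much of its energy lies off the diagonal:

```python
import numpy as np

# Eq. (14): the noise transform A^{-1} A' for two independent random
# matrices is far from diagonal, so additive noise spreads over all
# image pixels. Sizes and matrices are illustrative assumptions.
rng = np.random.default_rng(3)
N = 64
A = rng.random((N, N))                 # measurement matrix
A_prime = rng.random((N, N))           # independent noise matrix A'

T = np.linalg.inv(A) @ A_prime         # noise-transform matrix of Eq. (14)
off_diag = T - np.diag(np.diag(T))     # zero out the diagonal
off_fraction = np.linalg.norm(off_diag) / np.linalg.norm(T)
```

Here `off_fraction` comes out close to 1: almost all of the transform's energy sits in the nondiagonal elements, which is the amplification mechanism described above.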

The CNRs (${R_{cn}}$) and SSIMs (${S_{sim}}$) of the reconstructed images under different ratios of additive noise are plotted in Figs. 4(a) and 4(b), respectively. The red solid, green dashed, blue dotted, and cyan dash-dotted lines with rectangles are the numerical simulations for noise ratios ${r_n} = 0.1$, $0.2$, $0.3$, and $0.4$, respectively. Obviously, a small amount of noise can completely ruin the reconstructed images, since both CNR and SSIM tend to zero without truncation ($r = N$). Both quality parameters slowly increase and then rapidly decrease as the truncation ratio decreases. Also, the maxima of CNR and SSIM decrease as the noise ratio increases; it is hard to retrieve the object information under heavy noise in the object beam. We can also see from Fig. 4 that the optimal truncation parameter decreases as the noise ratio increases, so one should choose a different optimal parameter for each noise level.


Fig. 4. CNRs (a) and SSIMs (b) of images with different additive noises.


For comparison, the CNR and SSIM of the reconstructed images in our real experiment are plotted as the black lines with hollow circles in Fig. 4. By further comparing the numerical simulations in Fig. 4 with the experimental results, we estimate the noise ratio to be about ${r_n} = 0.44$ for the images in Sec. 3. Such heavy noise is the very reason why no object information was retrieved in Figs. 3(a) and 3(b).

4.2 Multiplicative noise

In the case of multiplicative noise, the object beam can be written as

$${{\mathbf y}_{M \times 1}} = ({{\mathbf A}_{M \times N}} \cdot {{\mathbf A^{\prime}}_{M \times N}}){{\mathbf x}_{N \times 1}} = {{\mathbf A}_{M \times N}}{{\mathbf x}_{N \times 1}} + ({{\mathbf A}_{M \times N}} \cdot {{\mathbf A^{\prime\prime}}_{M \times N}}){{\mathbf x}_{N \times 1}},$$
where the dot ${\cdot}$ denotes element-by-element multiplication of the matrices ${\mathbf A}$ and ${\mathbf A^{\prime}}$, and ${{\mathbf A^{\prime\prime}}_{M \times N}}$ is obtained by subtracting unity from each element of ${{\mathbf A^{\prime}}_{M \times N}}$. Comparing with Eq. (2), we write the noise contribution in Eq. (15) as
$${{\mathbf y^{\prime}}_{M \times 1}} = ({{\mathbf A}_{M \times N}} \cdot {{\mathbf A^{\prime\prime}}_{M \times N}}){{\mathbf x}_{N \times 1}}.$$

The image error in Eq. (7) becomes

$${{\mathbf e}_{N \times 1}} = {\mathbf A}_{N \times M}^{ - 1}{{\mathbf y^{\prime}}_{M \times 1}} = {\mathbf A}_{N \times M}^{ - 1}({{\mathbf A}_{M \times N}} \cdot {{\mathbf A^{\prime\prime}}_{M \times N}}){{\mathbf x}_{N \times 1}},$$
where ${\mathbf A}_{N \times M}^{ - 1}({{\mathbf A}_{M \times N}} \cdot {{\mathbf A^{\prime\prime}}_{M \times N}})$ is no longer a diagonal matrix either. This product is also random and disordered (see Supplement 1).
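The same numerical check applies to the multiplicative case. In the sketch below (all matrices, sizes, and the noise amplitude are illustrative assumptions), the transform of Eq. (17) is formed with element-wise multiplication and again carries most of its energy off the diagonal:

```python
import numpy as np

# Eq. (17): the multiplicative-noise transform A^{-1} (A . A'') is also
# nondiagonal. Matrices, sizes, and amplitude are illustrative assumptions.
rng = np.random.default_rng(5)
N = 64
A = rng.random((N, N))                       # measurement matrix
A_second = 0.1 * (rng.random((N, N)) - 0.5)  # zero-mean stand-in for A''

T = np.linalg.inv(A) @ (A * A_second)  # '*' is the element-wise dot of Eq. (15)
off_diag = T - np.diag(np.diag(T))
off_fraction = np.linalg.norm(off_diag) / np.linalg.norm(T)
```

As in the additive case, `off_fraction` is close to 1, so multiplicative noise is likewise scattered across all image pixels by the inversion.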

The CNRs and SSIMs of the reconstructed images under different multiplicative noises are plotted in Figs. 5(a) and 5(b), respectively. The red solid, green dashed, blue dotted, and cyan dash-dotted lines with rectangles are the numerical simulations for noise ratios ${r_n} = 0.1$, $0.2$, $0.3$, and $0.4$, respectively. Here the noise ratio is the ratio of the noise average $\left\langle {{\mathbf A^{\prime\prime}}} \right\rangle$ to the measurement average $\left\langle {\mathbf A} \right\rangle$. Again, a small amount of noise can completely ruin the reconstructed images, since both CNR and SSIM tend to zero without truncation ($r = N$).


Fig. 5. CNR and SSIM of images with different multiplicative noises.


A remarkable feature of the CNRs and SSIMs with multiplicative noise is that the simulation results do not agree with the experimental results, which are plotted as black lines with hollow circles. The experimental CNR is very close to the simulated one for ${r_n} = 0.1$. However, the experimental SSIM deviates greatly from the simulated value, as shown in Fig. 5(b). By comparing the simulations for additive (Fig. 4) and multiplicative (Fig. 5) noise, we conclude that the additive noise dominates.

5. Further filtering

Though TSVD is helpful for image reconstruction in GI, residual noise still greatly degrades the image quality. One needs to further improve the image quality for subsequent applications, such as pattern recognition and feature extraction (Algorithm 1).


Algorithm 1. Further filtering the reconstructed images

In TSVD-based GI, the randomness and disorder of the transform matrices ${\mathbf U}$ and ${\mathbf V}$ are analyzed in Supplement 1. This feature of TSVD breaks the links between different image pixels, especially in the presence of noise. In other words, after TSVD, image correlation exists only among the nearest neighbors of a given pixel. Such correlations can be exploited to further reduce the noise level and enhance the image quality [27].

The simple algorithm of Algorithm 1 is further applied to all the reconstructed images in Fig. 3. The kernel of the algorithm is to repeat convolution and deconvolution with the same PSF. To reflect the nearest-neighbor effects, the PSF is a disk of radius 1 pixel,

$$h = \left( {\begin{array}{*{20}{c}} {0.0251}&{0.1453}&{0.0251}\\ {0.1453}&{0.3183}&{0.1453}\\ {0.0251}&{0.1453}&{0.0251} \end{array}} \right).$$

The cycle index in our algorithm is $k = 20$.
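The paper does not spell out the deconvolution step of Algorithm 1, so the following is only a hypothetical sketch: FFT-based circular convolution with the disk PSF of Eq. (18), paired with a Tikhonov-regularized FFT deconvolution, repeated k = 20 times. The test image, noise level, and regularization constant are our own assumptions.

```python
import numpy as np

# Hypothetical sketch of Algorithm 1: repeat convolution and deconvolution
# with the same PSF h of Eq. (18). The deconvolution here is a Tikhonov-
# regularized FFT division (our assumption; the paper's variant may differ).
h = np.array([[0.0251, 0.1453, 0.0251],
              [0.1453, 0.3183, 0.1453],
              [0.0251, 0.1453, 0.0251]])  # disk PSF of radius 1, Eq. (18)

def filter_cycles(img, psf, k=20, eps=1e-2):
    H = np.fft.fft2(psf, s=img.shape)      # PSF transfer function (zero-padded)
    F = np.fft.fft2(img)
    for _ in range(k):
        F = F * H                                    # convolution
        F = F * np.conj(H) / (np.abs(H)**2 + eps)    # regularized deconvolution
    return np.real(np.fft.ifft2(F))

rng = np.random.default_rng(4)
clean = np.zeros((32, 32)); clean[8:24, 12:20] = 1.0   # toy binary image
noisy = clean + 0.5 * rng.standard_normal((32, 32))
filtered = filter_cycles(noisy, h)
```

Each cycle multiplies every spatial frequency by $|H{|^2}/(|H{|^2} + \epsilon ) \le 1$, so the k repetitions strongly suppress the high frequencies where the disk-PSF response is small, which is exactly the nearest-neighbor smoothing described above.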

Though the algorithm is very simple, it improves the images markedly. Figure 6 shows the images of Fig. 3 after further filtering. All the images are improved in quality, except the one in Fig. 6(a).


Fig. 6. Further filtered images in TSVD-based GI in Fig. 3.


The quality parameters CNR and SSIM of the images in Figs. 3 and 6 are plotted in Fig. 7. The blue solid lines with circles are the quality parameters of the images in Fig. 3, and the red dashed lines with squares are those of the images in Fig. 6. The variations of these quality parameters with the truncation parameter accord with the simulations in Fig. 4. Nevertheless, the quality parameters of the images after further filtering increase greatly. Specifically, the quality parameters of the image in Fig. 6(b) are CNR ${R_{cn}} = 2.372$ and SSIM ${S_{sim}} = 0.179$, while those of the image in Fig. 3(b) are ${R_{cn}} = 0.104$ and ${S_{sim}} = 0.001$. In fact, the nearest-neighbor correlation of the image in Fig. 3(b) is retained; it is merely buried in the noise. After some loops of convolution and deconvolution, the noise is reduced and the image information is enhanced. The quality improvement demonstrates the effectiveness of our technique.


Fig. 7. CNRs (a) and SSIMs (b) of the images in Figs. 3 and 6.


6. Conclusion

In summary, we experimentally investigated image reconstruction in SVD-based GI under noisy circumstances. The object information is often washed out in reconstruction with pseudo-inverse matrices. We investigated the TSVD method, in which only the first few largest singular values are kept, to improve the quality of the reconstructed images. The object information can be retrieved with appropriate choices of the truncation ratio. We numerically simulated the effect of noise in the TSVD method and found that the optimal truncation ratio depends on the noise level. High-quality images can be obtained by further filtering the reconstructed images after applying the TSVD method.

We should note that the TSVD method is also valid for GI in sub-Nyquist sampling cases ($M < N$). The image quality parameters show a similar variation with the truncation ratio, as long as the sampling number M is sufficient. However, if the sampling number is severely inadequate, the quality of the reconstructed images becomes very poor, even after further filtering. Since image reconstruction and denoising are united in our technique, TSVD-based GI may find applications in rapid ghost imaging for image recognition and feature extraction.

Funding

National Natural Science Foundation of China (11674273).

Acknowledgments

The authors thank Suheng Zhang, Xinbing Song and Yuehua Su for helpful discussions.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Supplemental document

See Supplement 1 for supporting content.

References

1. H. Andrews and C. Patterson, “Singular value decompositions and digital image processing,” IEEE Trans. Acoust., Speech, Signal Process. 24(1), 26–53 (1976).

2. D. W. Tufts, R. Kumaresan, and I. Kirsteins, “Data adaptive signal estimation by singular value decomposition of a data matrix,” Proc. IEEE 70(6), 684–685 (1982).

3. G. S. Edelson, R. Kumaresan, and D. W. Tufts, “A low rank weighted matrix approximation method for robust estimation of sinusoid parameters,” in Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing, vol. 5, 533–536 (1992).

4. Y. Wongsawat, K. R. Rao, and S. Oraintara, “Multichannel SVD-based image denoising,” in Proc. IEEE Int. Symp. Circuits and Systems, vol. 6, 5990–5993 (2005).

5. Z. Hou, “Adaptive singular value decomposition in wavelet domain for image denoising,” Pattern Recognit. 36(8), 1747–1763 (2003).

6. S. K. Jha and R. D. S. Yadava, “Denoising by singular value decomposition and its application to electronic nose data processing,” IEEE Sensors J. 11(1), 35–44 (2011).

7. A. R. Soobhany, K. P. Lam, P. Fletcher, and D. Collins, “Source identification of camera phones using SVD,” in Proc. IEEE Int. Conf. Image Processing, 4497–4501 (2013).

8. Y. Li, M. Wei, F. Zhang, J. Zhao, and K. K. R. Choo, “Comparison of two SVD-based color image compression schemes,” PLoS One 12(3), e0172746 (2017).

9. T. B. Pittman, Y. H. Shih, D. V. Strekalov, and A. V. Sergienko, “Optical imaging by means of two-photon quantum entanglement,” Phys. Rev. A 52(5), R3429–R3432 (1995).

10. R. S. Bennink, S. J. Bentley, and R. W. Boyd, “Two-photon coincidence imaging with a classical source,” Phys. Rev. Lett. 89(11), 113601 (2002).

11. F. Ferri, D. Magatti, A. Gatti, M. Bache, E. Brambilla, and L. A. Lugiato, “High-resolution ghost image and ghost diffraction experiments with thermal light,” Phys. Rev. Lett. 94(18), 183602 (2005).

12. D. Z. Cao, J. Xiong, and K. Wang, “Geometrical optics in correlated imaging systems,” Phys. Rev. A 71(1), 013801 (2005).

13. D. Zhang, Y. H. Zhai, L. A. Wu, and X. H. Chen, “Correlated two-photon imaging with true thermal light,” Opt. Lett. 30(18), 2354–2356 (2005).

14. C. Zhang, S. Guo, J. Cao, J. Guan, and F. Gao, “Object reconstitution using pseudo-inverse for ghost imaging,” Opt. Express 22(24), 30063–30073 (2014).

15. W. Gong, “High-resolution pseudo-inverse ghost imaging,” Photon. Res. 3(5), 234–237 (2015).

16. X. Zhang, X. Meng, X. Yang, Y. Wang, Y. Yin, X. Li, X. Peng, W. He, G. Dong, and H. Chen, “Singular value decomposition ghost imaging,” Opt. Express 26(10), 12948–12958 (2018).

17. X. Lv, S. Guo, C. Wang, C. Yang, H. Zhang, J. Song, W. Gong, and F. Gao, “Experimental investigation of iterative pseudoinverse ghost imaging,” IEEE Photonics J. 10(3), 3900708 (2018).

18. S. Wang, X. Meng, Y. Yin, Y. Wang, X. Yang, X. Zhang, X. Peng, W. He, G. Dong, and H. Chen, “Optical image watermarking based on singular value decomposition ghost imaging and lifting wavelet transform,” Opt. Lasers Eng. 114, 76–82 (2019).

19. W. Gong, “Sub-Nyquist ghost imaging by optimizing point spread function,” Opt. Express 29(11), 17591–17601 (2021).

20. A. Albert, Regression and the Moore-Penrose Pseudoinverse (Academic, New York, 1972).

21. Y. Zhou, S. X. Guo, F. Zhong, and T. Zhang, “Mask-based denoising scheme for ghost imaging,” Chin. Phys. B 28(8), 084204 (2019).

22. H. Wu, R. Wang, G. Zhao, H. Xiao, J. Liang, D. Wang, X. Tian, L. Cheng, and X. Zhang, “Deep-learning denoising computational ghost imaging,” Opt. Lasers Eng. 134, 106183 (2020).

23. G. Wang, H. Zheng, W. Wang, Y. He, J. Liu, H. Chen, Y. Zhou, and Z. Xu, “Denoising ghost imaging via principal components analysis and compandor,” Opt. Lasers Eng. 110, 236–243 (2018).

24. R. Hanbury Brown and R. Q. Twiss, “Correlation between photons in two coherent beams of light,” Nature 177(4497), 27–29 (1956).

25. K. W. C. Chan, M. N. O’Sullivan, and R. W. Boyd, “High-order thermal ghost imaging,” Opt. Lett. 34(21), 3343–3345 (2009).

26. W. Zhou, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13(4), 600–612 (2004).

27. R. C. Gonzalez and R. E. Woods, Digital Image Processing, 4th ed. (Prentice Hall, 2018).
