Optica Publishing Group

Implicit image processing with ghost imaging

Open Access

Abstract

In computational ghost imaging, the object is illuminated with a sequence of known patterns and the scattered light is collected using a detector that has no spatial resolution. Using those patterns and the total intensity measurement from the detector, one can reconstruct the desired image. Here we study how the reconstructed image is modified if the patterns used for the illumination are not the same as the reconstruction patterns and show that one can choose how to illuminate the object, such that the reconstruction process behaves like a spatial filtering operation on the image. The ability to directly measure a processed image allows one to bypass the post-processing steps and thus avoid any noise amplification they imply. As a simple example we show the case of an edge-detection filter.

Published by Optica Publishing Group under the terms of the Creative Commons Attribution 4.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.

1. Introduction

Ghost imaging relies on the combination of two signals which individually are insufficient for image formation [1–3]: the sequence of patterns illuminating the object, and the transmitted (or scattered) light, measured with a single element (bucket) detector [4–11].

In computational ghost imaging one has a great deal of control over the choice of the projected patterns, allowing one to tailor them based on knowledge of the nature of the object. In Principal Component Analysis the illuminating wavefront is designed to match the principal components of the object [12,13], and in other adaptive imaging works the spatial resolution of the wavefront is enhanced locally in response to the detection of high-frequency regions of the object [14,15]. It is however less common to see the illumination basis modified in response to the way in which the image is to be processed [16].

In this paper we show that any post-processing step which can be described by a matrix multiplication with the image, such as convolution with an image filter, can be incorporated into the illumination basis. Doing so enables one to avoid image noise amplification by the filtering process, at the cost of increasing the complexity of the projected patterns. We demonstrate this technique experimentally for a basic edge-detection filter in a modified raster basis (edge-detection is one of the fundamental feature identification steps, and has been incorporated into recent computational imaging strategies [17–21]). We compare the resulting signal-to-noise ratios (SNRs) of our technique with those obtained via post-processing with the same filter. We also discuss a theoretical method to predict the performance of an arbitrary filter.

2. Changing the illumination basis

The ghost imaging measurement process can be described as follows in Bra-Ket notation, with $N \times N$ pixel projection patterns or images represented as $N^{2} \times 1$ element column vectors for convenience. The reconstructed image $|{I}\rangle$ of the object $|{O}\rangle$ can be written

$$|{I}\rangle = \sum_j \langle {\psi_j|O}\rangle|{\psi_j}\rangle,$$
where $|{\psi _j}\rangle$ is the $j$th pattern in the basis $\Psi$ illuminating the object. The inner product $\langle {\psi _j|O}\rangle$, which becomes the weighting coefficient of $|{\psi _j}\rangle$ in the reconstruction, measures the spatial overlap between the projected pattern and object and is recorded with a bucket detector.

Typically the illumination basis $\Psi$ is the same basis in which the image is reconstructed, but this need not be the case. A change from an illumination basis $\Psi$ to a new basis $\Phi$ can be written as $\Phi = B \, \Psi$, where $B$ is the matrix that performs the basis change. If one makes this substitution for the illuminating basis in Eq. (1)

$$|{I}\rangle = \sum_j \langle {\phi_j|O}\rangle|{\psi_j}\rangle = \sum_j \langle {\psi_j|B^{T}|O}\rangle|{\psi_j}\rangle = \sum_j \left\langle \psi_j\right\rvert \left( B^{T} \left\lvert O \right\rangle \right)|{\psi_j}\rangle,$$
one can see by comparison with Eq. (1) that the reconstructed object effectively becomes $(B^{T}|{O}\rangle )$. The matrix $B$ can then be chosen to perform, directly during the measurement process, any desired operation that can be expressed as a matrix multiplication (which includes convolution with any filter kernel $K$). There is a lot of freedom in choosing $\Psi$ and $K$ (and thus $B$) but, as we discuss below, this choice has an impact on both the amount of signal available and the complexity of the patterns to be projected.
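As an illustrative sanity check (a minimal Python sketch with a random orthonormal basis and an arbitrary matrix $B$ standing in for a real filter; not the experimental code), measuring the coefficients in the modified basis $\Phi = B\,\Psi$ but reconstructing in $\Psi$ indeed yields $B^{T}|O\rangle$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16                               # number of pixels (flattened image vector)
O = rng.random(n)                    # hypothetical object |O>
B = rng.random((n, n))               # arbitrary linear operation (stand-in filter)

# Random orthonormal reconstruction basis: the columns of Q are the |psi_j>.
Q, _ = np.linalg.qr(rng.random((n, n)))
Phi = B @ Q                          # modified illumination basis, |phi_j> = B|psi_j>

coeffs = Phi.T @ O                   # bucket-detector readings <phi_j|O>
I = Q @ coeffs                       # reconstruction in the ORIGINAL basis Psi

# The reconstructed image is the processed object B^T |O>.
assert np.allclose(I, B.T @ O)
```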

To demonstrate this equivalence, we chose as our operation convolution with the edge-detection filter kernel $K$:

$$K = \begin{bmatrix} 0 & -1 & 0\\ -1 & 0 & 1\\ 0 & 1 & 0 \end{bmatrix}.$$

A 1D convolution between two discrete signals can be written in the form of a matrix multiplication by converting one signal to the appropriate circulant (a subclass of Toeplitz) matrix [22]. The matrix $B$ that performs the convolution with a given kernel $K$ can be constructed by extending this process to 2D signals with additional zero-padding and flattening steps. Examples of the resulting illumination patterns ($\langle {\psi _j}|B^{T}$), are shown in Fig. 1 for the cases where the initial basis $\Psi$ is either the canonical or Hadamard basis.
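This construction can be sketched numerically (an illustrative Python fragment, not the authors' code): build the convolution matrix column by column by convolving delta images with the kernel under cyclic boundary conditions, then transpose, since the reconstruction yields $B^{T}|O\rangle$:

```python
import numpy as np

K = np.array([[0, -1, 0],
              [-1, 0, 1],
              [0, 1, 0]], dtype=float)   # edge-detection kernel of Eq. (3)

def cyclic_conv2d(img, kernel):
    """2D convolution with cyclic (wrap-around) boundary conditions."""
    ky, kx = kernel.shape
    out = np.zeros_like(img, dtype=float)
    for dy in range(ky):
        for dx in range(kx):
            out += kernel[dy, dx] * np.roll(img, (dy - ky // 2, dx - kx // 2),
                                            axis=(0, 1))
    return out

def conv_matrix(kernel, N):
    """Doubly-block-circulant matrix C with C @ img.ravel() == conv(img).ravel()."""
    C = np.zeros((N * N, N * N))
    for j in range(N * N):
        delta = np.zeros((N, N))
        delta.flat[j] = 1.0              # convolve each delta image (one column)
        C[:, j] = cyclic_conv2d(delta, kernel).ravel()
    return C

N = 8
C = conv_matrix(K, N)
img = np.random.default_rng(1).random((N, N))
assert np.allclose(C @ img.ravel(), cyclic_conv2d(img, K).ravel())
B = C.T   # in the paper's notation the reconstruction gives B^T|O>, so B = C^T
```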


Fig. 1. Examples of illumination patterns before and after multiplication with $B$, in the canonical basis (a) then (b) and Hadamard basis (c) then (d) respectively. The matrix $B$ is such that it performs the edge-detection operation of Eq. (3), with cyclic boundary conditions. The patterns shown are the 85$^{\text {th}}$ in 16$\times$16 resolution bases.


The modified illumination patterns are the result of convolving the original patterns with the filter kernel $K$, providing a more intuitive method of generating the matrix $B$. As with any image convolution a choice of boundary conditions is required, for which we have chosen cyclic conditions which wrap the values at opposite edges [23].

In the following sections we demonstrate this method experimentally for the edge-detection kernel (Eq. (3)) in the canonical basis and compare with the results obtained when the filtering operation is instead applied after reconstruction.

3. Method

3.1 Experimental setup

The experimental setup used to compare post-processed ghost images with those generated with a modified illumination basis is shown in Fig. 2. A 455 nm fibre-coupled LED is collimated by plano-convex lens L1 (f = 35 mm) and illuminates the 1080p resolution digital micromirror device (DMD). A beamsplitter allows the LED output to be monitored by a photodiode (PD1). The patterned light from the DMD is then imaged onto the object at reduced magnification by plano-convex lens L2 (f = 200 mm) and biconvex lens L3 (f = 35 mm). Finally, the light transmitted by the object is collected by biconvex lens L4 (f = 25.4 mm) and focused onto the photodiode PD2. The signal from PD2 can then be divided by the signal from PD1, compensating for fluctuations in the LED output.


Fig. 2. Schematic of the experimental setup. A fibre coupled blue LED is collimated and illuminates a DMD. The DMD is imaged at reduced magnification by lenses L2 and L3 onto a planar transmissive object (Obj). The transmitted light is then collected by lens L4 and focused onto photodiode 2 (PD2). The photodiode 1 (PD1) and beam-splitter combination allow for compensation for fluctuations in the light source intensity.


3.2 Performing a fair comparison

One important consequence of modifying the illumination basis is that it might increase the number of unique intensity values required to generate the new projection patterns, unless spatial dithering techniques are employed [24]. This can be seen in Fig. 1. To experimentally generate such patterns with a digital micromirror device, which is only capable of binary amplitude modulation, one needs to project and measure multiple times per desired pattern. The number of projectable sub-patterns needed is such that the desired pattern can be formed from a linear combination of sub-patterns.

For the edge-detection filter $K$ in Eq. (3), this results in a factor of 2 increase in the total number of projected patterns, when comparing the canonical basis with its edge-detection counterpart (e.g. Fig. 1(a) and (b) respectively), as the negative patterns need to be projected separately, and the difference between the two patterns taken. This is similar to the standard difference measurements employed to achieve negative mask values when using Hadamard patterns [25]. We compensate for this in our comparison by repeating each raster pattern twice and using their mean, keeping the total number of projections and measurement time for each method consistent.
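The difference measurement can be sketched as follows (a hedged illustration; the signed pattern here is just the kernel itself, standing in for one modified raster pattern):

```python
import numpy as np

# Hypothetical signed illumination pattern (cf. Fig. 1(b)).
pattern = np.array([[0, -1, 0],
                    [-1, 0, 1],
                    [0, 1, 0]], dtype=float)

pos = np.clip(pattern, 0, None)    # projectable non-negative sub-pattern
neg = np.clip(-pattern, 0, None)   # projectable sub-pattern for the negative part

# The signed bucket reading is the difference of the two physical readings:
# <pattern|O> = <pos|O> - <neg|O>.
O = np.random.default_rng(2).random((3, 3))
assert np.isclose(np.sum(pattern * O), np.sum(pos * O) - np.sum(neg * O))
```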

3.3 Measurement and reconstruction

For the 64 $\times$ 64 resolution images used in this comparison, the number of patterns required in the raster basis is 4096. As discussed in Section 3.2, in the modified edge-detection basis a complementary pair of measurements is required for each raster pattern, so the total number of measurements taken for each method is 8192, using repeat readings for the raster projections.

In the raster method an illumination similar to that in Fig. 1(a) is projected and the signal on photodiode 2 is averaged for a given integration time, followed by a repeat reading for the same projection. The final weighting coefficient $\langle {\psi _j|O}\rangle$ as in Eq. (1) is the mean of these two detector readings, divided by the power normalisation reading on photodiode 1, which is recorded once per pair of patterns. In this conventional raster imaging the reconstruction pattern $|{\psi _j}\rangle$ for each pair is the same as the illumination pattern. This process is repeated for each of the 4096 patterns in the raster basis. The reconstructed image is then post-processed by convolution with the kernel $K$ of Eq. (3) to form the ‘post-processed’ result used in the comparisons.

For the ‘basis-processed’ results using a modified form of the raster illumination basis, the process is similar. The pattern pairs are no longer repeats, but look akin to the positive and then negative components of Fig. 1(b). The weighting coefficient $\langle {\phi _j|O}\rangle$ as in Eq. (2) is the photodiode 2 reading for the second pattern subtracted from that of the first, then divided by that of photodiode 1. The reconstruction pattern $|{\psi _j}\rangle$ now differs from the illumination patterns and is the corresponding unmodified raster pattern. This process is repeated for each of the 4096 raster patterns. No post-processing of the reconstructed image is necessary.

The detector integration times tested vary from 20 ms to 220 ms in 20 ms increments, and are as indicated in the figures for a given result.

3.4 Quantifying the signal-to-noise ratio

We characterize the quality of the reconstructed images from each method and over a range of detector integration times by means of the signal-to-noise ratio (SNR). We calculate the image signal-to-noise ratios by comparing the intensity values in defined peak signal ($\langle I_P \rangle$) and background ($\langle I_{B} \rangle$) regions as

$$\text{SNR} = \frac{\langle I_P \rangle - \langle I_{B} \rangle}{\sigma_{B}},$$
where $\langle.\rangle$ denotes the spatial average over a region and $\sigma _B$ the standard deviation of the intensity in the background region. The background region is selected manually and is shown in green in Fig. 3(b). We define the peak signal locations as those with the top 10$\%$ of intensity values, taken from a high SNR experimental image whilst excluding values at the borders.
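Equation (4) amounts to the following (an illustrative sketch using a synthetic image and hand-picked masks in place of the manually selected experimental regions):

```python
import numpy as np

def snr(image, peak_mask, background_mask):
    """Eq. (4): (mean peak - mean background) / std of background."""
    bg = image[background_mask]
    return (image[peak_mask].mean() - bg.mean()) / bg.std()

rng = np.random.default_rng(3)
img = rng.normal(0.0, 0.1, (64, 64))   # background noise
img[20:30, 20:30] += 1.0               # synthetic bright feature

peak_mask = np.zeros(img.shape, dtype=bool)
peak_mask[20:30, 20:30] = True         # peak-signal region
background_mask = np.zeros(img.shape, dtype=bool)
background_mask[40:60, 40:60] = True   # manually chosen background region

print(snr(img, peak_mask, background_mask))   # roughly 10 for these parameters
```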


Fig. 3. (a) USAF 1951 negative resolution target with the region imaged marked by a red dashed rectangle. (b) The region highlighted in (a), with the green dashed shape indicating the region defined as the background for later SNR calculations.


4. Experimental results

The object, a USAF target as shown in Fig. 3, was imaged in the experimental configuration of Fig. 2 for a range of integration times. Figure 4 shows the main result of this paper: a comparison between the images obtained when projecting the modified illumination basis (‘basis-processed’, left column) and when using a raster basis which is then post-processed by convolution with $K$ (Eq. (3)) (‘post-processed’, right column). The comparison clearly demonstrates that the modified patterns perform the desired spatial filtering operation. When the integration times and thus SNRs of the images are high (Fig. 4(a) and (b)), the difference in image quality is subtle, whilst in the low-SNR regime (Fig. 4(e) and (f)) the improvement offered by using a modified illumination basis becomes quite clear.


Fig. 4. Comparison between the ghost images obtained using a modified projection basis (‘basis-processed’, left column) and those measured with a raster basis and then convolved with the edge detection kernel $K$ (‘post-processed’, right column). The three rows show varying detector integration times, increasing from bottom to top as 20, 100 and 220 ms. The experimental method is as described in section 3. The images are 64 $\times$ 64 resolution.


The visual differences apparent in Fig. 4 have two contributing factors. First, the SNR is higher in the basis-processed case. This is quantified in Fig. 5 as a factor of 2 enhancement over a range of integration times. The second difference is that the spatial character of the noise has been detrimentally modified in the post-processing case. A post-processing filter mixes neighbouring pixels and therefore produces spatially correlated noise, whilst in the basis-processing approach the noise remains uncorrelated (white) and is less likely to be misinterpreted as part of the object signal.
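This change in the spatial character of the noise is easy to check numerically (an illustrative sketch): for the kernel of Eq. (3), convolving white Gaussian noise introduces a strong correlation between diagonally adjacent pixels:

```python
import numpy as np

rng = np.random.default_rng(6)
K = np.array([[0, -1, 0], [-1, 0, 1], [0, 1, 0]], dtype=float)

noise = rng.normal(0.0, 1.0, (256, 256))          # white (uncorrelated) noise
filtered = sum(K[dy, dx] * np.roll(noise, (dy - 1, dx - 1), axis=(0, 1))
               for dy in range(3) for dx in range(3))

def diag_corr(a):
    """Correlation coefficient between diagonally adjacent pixels."""
    return np.corrcoef(a[:-1, :-1].ravel(), a[1:, 1:].ravel())[0, 1]

print(diag_corr(noise))      # ~0: white noise is uncorrelated
print(diag_corr(filtered))   # ~-0.5: filtering has correlated the noise
```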


Fig. 5. Comparison of calculated SNRs from experimental images acquired using the basis vs post-processing methods. The error bars are calculated from the variation in three repeat measurements. Images were 64 $\times$ 64 resolution. Lines are a guide to the eye.


5. Noise amplification model

In order to explain theoretically the difference in the signal-to-noise ratios of the two approaches we consider a simple theoretical framework. We model the detector noise in our measurement system, which we assume dominates, as additive white Gaussian noise. If an image is corrupted by additive white noise, a given filter kernel $K$ will increase the standard deviation of the noise by a factor of $\sqrt{E_K}$, where $E_K$ is the energy of the filter, defined as [26,27]:

$$E_K = \int_{\mathbb{R}^{2}} |K(x,y)|^{2}\,\mathrm{d}x\,\mathrm{d}y.$$

For $K$ equal to the edge-detection filter of Eq. (3) used in the presented experiments, $E_K = 4$ and the filter increases the standard deviation of the noise by a factor of 2. The additive Gaussian detector noise introduced with the measurement of the signal from the $i$th pattern is denoted by $\sigma_i$; these noise terms are all assumed to follow the same distribution.
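The discrete analogue of Eq. (5), together with a quick Monte-Carlo check of the $\sqrt{E_K}$ amplification (illustrative only):

```python
import numpy as np

K = np.array([[0, -1, 0],
              [-1, 0, 1],
              [0, 1, 0]], dtype=float)

E_K = float(np.sum(K**2))    # discrete filter energy, cf. Eq. (5)
assert E_K == 4.0            # so the noise std should grow by sqrt(E_K) = 2

rng = np.random.default_rng(4)
noise = rng.normal(0.0, 1.0, (256, 256))      # unit-variance white noise
filtered = sum(K[dy, dx] * np.roll(noise, (dy - 1, dx - 1), axis=(0, 1))
               for dy in range(3) for dx in range(3))

print(filtered.std())        # close to sqrt(E_K) = 2
```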

We represent the power normalisation step with a factor $A$, which tracks gradual changes in the lamp intensity with time. The normalisation will generally be imperfect due to two problems: first, the measurement of $A$ is itself noisy, and introduces a new detector noise $\sigma_A$. Second, either of the photodiode measurements may contain a background, e.g. due to stray light. These are denoted $B_m$ and $B_n$ for the measurement and normalisation signals respectively.

If the basis we want to use is such that it needs $N$ patterns to be projected for the response to every basis element to be measured (for the kernel in Eq. (3), $N=2$), a fair comparison requires us to repeat and average each measurement in the post-processing method $N$ times, which results in a signal

$$\begin{aligned} S_P = & \, K * \left[\frac{1}{N} \sum_{i=1}^{N} \frac{A O +B_m + \sigma_i}{A +B_n+\sigma_A} \right] = K * \left[\frac{N\,AO +N\, B_m + \sum_{i=1}^{N}\sigma_i }{N\,A +N\, B_n +N\,\sigma_A} \right] = \\ = & \, K * \left[\frac{O +B_m/A + \sum_{i=1}^{N} \frac{\sigma_i}{N\,A}}{1 +B_n/A+\frac{\sigma_A}{A}} \right] = K * \left[\frac{O +\frac{1}{A} \left( B_m+\sum_{i=1}^{N}\frac{\sigma_i}{N} \right)} {1 +\frac{1}{A} \left(B_n+\sigma_A \right)} \right] , \end{aligned}$$
where $*$ denotes the 2D convolution. In the case where $A$ is large compared to both the noise and the background (i.e. there is a decent amount of signal compared with the artefacts), we can expand around $A^{-1}=0$ to obtain (to first order):
$$\begin{aligned} S_P & \simeq K * \left[ O + \frac{\left(B_m+\sum_{i=1}^{N}\frac{\sigma_i}{N} \right) - O \left( B_n+\sigma_A \right)}{A} \right]=\\ & = K*O -B_n\frac{K*O}{A}-\frac{K*(\sigma_A O)}{A} +\frac{K*B_m}{A}+ \frac{K*\left( \sum_{i=1}^{N}\frac{\sigma_i}{N} \right)}{A} . \end{aligned}$$

For the ‘basis-processed’ signal $S_B$, the kernel will be a linear combination of patterns that can be projected, and thus $K= \sum _{i=1}^{N} c_i K_i$, where $c_i$ are the coefficients (for the kernel in Eq. (3), $K = K_1 - K_2$) and we can proceed similarly

$$\begin{aligned} S_B & = \frac{\sum_{i=1}^{N} c_i\left( K_i * [AO+B_m] + \sigma_i \right) }{A + B_n + \sigma_A} = \frac{K * [O+ \frac{B_m}{A}] + \sum_{i=1}^{N} \frac{c_i\sigma_i}{A}}{1+ \frac{B_n + \sigma_A}{A} } =\\ & \simeq K*O -\frac{B_n K*O + \sum_{i=1}^{N} c_i\sigma_i - \sigma_A\, K*O}{A} =\\ & = K*O -B_n\frac{K*O}{A} - \sigma_A\frac{K*O}{A}+ \sum_{i=1}^{N} \frac{c_i\sigma_i}{A}, \end{aligned}$$
where we neglected all terms quadratic in $A^{-1}$. In both cases $K*O$ is the desired signal, and the other terms represent the unwanted propagated noise (which is always a positive quantity). If the amount of stray light is small, i.e. both $B_n$ and $B_m$ are negligible, the signal-to-noise ratio in the two cases is
$$\begin{aligned} SNR_P & \simeq A \frac{K*O}{K*(\sigma_A O) + \frac{K*(\sum_{i=1}^{N} \sigma_i)}{N}} \\ SNR_B & \simeq A \frac{K*O}{\sigma_A K*O + \sum_{i=1}^{N} c_i\sigma_i } \end{aligned}$$
and thus
$$\frac{SNR_B}{SNR_P} \simeq \frac{K*(\sigma_A O) + \frac{1}{N}K*(\sum_{i=1}^{N} \sigma_i)}{\sigma_A K*O + \sum_{i=1}^{N} c_i \sigma_i}.$$

As $K*\sigma_i = \sqrt{E_K}\,\sigma_i$ (in the sense of the noise standard deviation), and approximating $K*(\sigma_i O) \approx \sqrt{E_K}\,\sigma_i\, K*O$, the above equation simplifies to

$$\frac{SNR_B}{SNR_P} \sim \frac{\sqrt{E_K} \sigma_A K*O + \frac{\sqrt{E_K}}{N} \sum_{i=1}^{N} \sigma_i}{\sigma_A K*O + \sum_{i=1}^{N} c_i \sigma_i} .$$

If the $\sigma_A\, K*O$ term dominates, the ratio between the two signal-to-noise ratios is approximately $\sqrt{E_K}$, i.e. for the kernel in Eq. (3) we have an improvement of approximately 2, consistent with the experimental results. Notice that kernels with larger $E_K$ will lead to a larger ratio between the signal-to-noise ratios. In the opposite limit, i.e. when $\sigma_A\, K*O$ is very small, the ratio of the SNRs will depend on how exactly the kernel has to be broken down to obtain a set of projectable patterns (i.e. it will depend on the coefficients $c_i$), but will be of the order of $\sqrt{E_K}/N$, which for the kernel in Eq. (3) is $1$.
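The $\sigma_A$-dominated limit can be checked with a small simulation (a sketch under the simplified measurement model of this section, with backgrounds neglected and illustrative noise parameters, not a model of the actual apparatus):

```python
import numpy as np

rng = np.random.default_rng(5)
K = np.array([[0, -1, 0], [-1, 0, 1], [0, 1, 0]], dtype=float)

def cconv(img, k):
    """Cyclic 2D convolution with a 3x3 kernel."""
    return sum(k[dy, dx] * np.roll(img, (dy - 1, dx - 1), axis=(0, 1))
               for dy in range(3) for dx in range(3))

O = rng.random((64, 64))                 # synthetic object
A, sig, sigA = 100.0, 0.1, 5.0           # lamp power; detector and normalisation noise
truth = cconv(O, K)                      # noiseless processed image K*O
norm = A + rng.normal(0, sigA, O.shape)  # noisy PD1 reading, one per pattern pair

# Post-processed: two repeated raster readings averaged, normalised, then filtered.
raster = (2 * A * O + rng.normal(0, sig, O.shape)
          + rng.normal(0, sig, O.shape)) / 2
S_P = cconv(raster / norm, K)

# Basis-processed: project the positive and negative parts of K separately (N = 2).
K1, K2 = np.clip(K, 0, None), np.clip(-K, 0, None)
r1 = A * cconv(O, K1) + rng.normal(0, sig, O.shape)
r2 = A * cconv(O, K2) + rng.normal(0, sig, O.shape)
S_B = (r1 - r2) / norm

ratio = np.std(S_P - truth) / np.std(S_B - truth)   # ~ SNR_B / SNR_P
print(ratio)    # close to sqrt(E_K) = 2 when sigma_A dominates
```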

6. Conclusions

At its very core ghost imaging is the choice of basis patterns (the illuminations), the measurement of the coefficients (the intensity measured by the bucket detector), and the reconstruction of the image using the two. A major advantage of ghost imaging is the complete freedom in the choice of basis, and we have shown that measuring the expansion coefficients in a basis different from the one used for the reconstruction extends this freedom further. In particular, one can use it to recover any linear map of the image, thus skipping the post-processing step. We have shown this experimentally for the simple example of an edge-detection filter.

We also studied how the signal-to-noise ratios compare between performing a post-processing filtering and directly measuring the processed image using ghost imaging. Whether the ghost imaging approach results in a smaller amount of noise depends on the linear map/filter used, and on the experimental details (e.g. the noise in the intensity normalization), but in our experiment we found that the ghost imaging SNR was a factor of 2 better than that of the equivalent post-processed image.

Funding

Engineering and Physical Sciences Research Council (EP/L015331/1); Leverhulme Trust.

Acknowledgements

The authors acknowledge useful discussions with David B. Phillips.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data created during this research are openly available from Ref. [28].

References

1. T. B. Pittman, Y. H. Shih, D. V. Strekalov, and A. V. Sergienko, “Optical imaging by means of two-photon quantum entanglement,” Phys. Rev. A 52(5), R3429–R3432 (1995). [CrossRef]  

2. R. S. Bennink, S. J. Bentley, and R. W. Boyd, ““Two-Photon” Coincidence Imaging with a Classical Source,” Phys. Rev. Lett. 89(11), 113601 (2002). [CrossRef]  

3. A. Gatti, E. Brambilla, M. Bache, and L. A. Lugiato, “Ghost imaging with thermal light: Comparing entanglement and classical correlation,” Phys. Rev. Lett. 93(9), 093602 (2004). [CrossRef]  

4. J. H. Shapiro, “Computational ghost imaging,” Phys. Rev. A 78(6), 061802 (2008). [CrossRef]  

5. Y. Bromberg, O. Katz, and Y. Silberberg, “Ghost imaging with a single detector,” Phys. Rev. A 79(5), 053840 (2009). [CrossRef]  

6. B. I. Erkmen, “Computational ghost imaging for remote sensing,” J. Opt. Soc. Am. A 29(5), 782–789 (2012). [CrossRef]  

7. A. Valencia, G. Scarcelli, M. D’Angelo, and Y. Shih, “Two-photon imaging with thermal light,” Phys. Rev. Lett. 94(6), 063601 (2005). [CrossRef]  

8. F. Ferri, D. Magatti, A. Gatti, M. Bache, E. Brambilla, and L. A. Lugiato, “High-resolution ghost image and ghost diffraction experiments with thermal light,” Phys. Rev. Lett. 94(18), 183602 (2005). [CrossRef]  

9. P. Zerom, Z. Shi, M. N. O’Sullivan, K. W. C. Chan, M. Krogstad, J. H. Shapiro, and R. W. Boyd, “Thermal ghost imaging with averaged speckle patterns,” Phys. Rev. A 86(6), 063817 (2012). [CrossRef]  

10. M. J. Padgett and R. W. Boyd, “An introduction to ghost imaging: Quantum and classical,” Philos. Trans. R. Soc., A 375(2099), 20160233 (2017). [CrossRef]  

11. J. H. Shapiro and R. W. Boyd, “The physics of ghost imaging,” Quantum Inf. Process. 11(4), 949–993 (2012). [CrossRef]  

12. M. A. Neifeld and P. Shankar, “Feature-specific imaging,” Appl. Opt. 42(17), 3379–3389 (2003). [CrossRef]  

13. M. Liang, Y. Li, H. Meng, M. A. Neifeld, and H. Xin, “Reconfigurable array design to realize principal component analysis (PCA)-based microwave compressive sensing imaging system,” Antennas Wirel. Propag. Lett. 14, 1039–1042 (2015). [CrossRef]  

14. M. Aßmann and M. Bayer, “Compressive adaptive computational ghost imaging,” Sci. Rep. 3(1), 1545 (2013). [CrossRef]  

15. D. B. Phillips, M. J. Sun, J. M. Taylor, M. P. Edgar, S. M. Barnett, G. M. Gibson, and M. J. Padgett, “Adaptive foveated single-pixel imaging with dynamic supersampling,” Sci. Adv. 3(4), e1601782 (2017). [CrossRef]  

16. P. del Hougne, M. F. Imani, A. V. Diebold, R. Horstmeyer, and D. R. Smith, “Learned Integrated Sensing Pipeline: Reconfigurable Metasurface Transceivers as Trainable Physical Layer in an Artificial Neural Network,” Adv. Sci. 7(3), 1901913 (2020). [CrossRef]  

17. X.-F. Liu, X.-R. Yao, R.-M. Lan, C. Wang, and G.-J. Zhai, “Edge detection based on gradient ghost imaging,” Opt. Express 23(26), 33802 (2015). [CrossRef]  

18. H. Ren, S. Zhao, and J. Gruska, “Edge detection based on single-pixel imaging,” Opt. Express 26(5), 5501 (2018). [CrossRef]  

19. L. Wang, L. Zou, and S. Zhao, “Edge detection based on subpixel-speckle-shifting ghost imaging,” Opt. Commun. 407, 181–185 (2018). [CrossRef]  

20. H.-D. Ren, L. Wang, and S.-M. Zhao, “Efficient edge detection based on ghost imaging,” OSA Continuum 2(1), 64–73 (2019). [CrossRef]  

21. Z. Ye, J. Xiong, and H. C. Liu, “Ghost Difference Imaging Using One Single-Pixel Detector,” Phys. Rev. Appl. 15(3), 034035 (2021). [CrossRef]  

22. R. M. Gray, “Toeplitz and circulant matrices: A review,” Foundations Trends Commun. Inf. Theory 2(3), 155–239 (2005). [CrossRef]  

23. W. Burger and M. J. Burge, Digital Image Processing - An Algorithmic Introduction Using Java (Springer, London, 2008), 2nd ed.

24. Z. Zhang, X. Wang, G. Zheng, and J. Zhong, “Fast Fourier single-pixel imaging via binary illumination,” Sci. Rep. 7(1), 12029 (2017). [CrossRef]  

25. L. Wang and S. Zhao, “Fast reconstructed and high-quality ghost imaging with fast Walsh–Hadamard transform,” Photonics Res. 4(6), 240–244 (2016). [CrossRef]  

26. M. Jacob and M. Unser, “Design of steerable filters for feature detection using Canny-like criteria,” IEEE Trans. Pattern Anal. Machine Intell. 26(8), 1007–1019 (2004). [CrossRef]  

27. R. N. Strickland and M. Y. Aly, “Image Sharpness Enhancement Using Adaptive 3X3 Convolution Masks,” Opt. Eng. 24(4), 244683 (1985). [CrossRef]  

28. H. Penketh, W. L. Barnes, and J. Bertolotti, “Dataset- Ghost Image Processing,” Zenodo (2021), https://doi.org/10.5281/zenodo.5779444.



