Optica Publishing Group

Dual-mode adaptive-SVD ghost imaging

Open Access

Abstract

In this paper, we present a dual-mode adaptive singular value decomposition ghost imaging (A-SVD GI) method, which can be easily switched between the modes of imaging and edge detection. It adaptively localizes the foreground pixels via a threshold selection method. Then only the foreground region is illuminated by the singular value decomposition (SVD)-based patterns, consequently retrieving high-quality images at lower sampling ratios. By changing the selection range of foreground pixels, the A-SVD GI can be switched to the edge detection mode to directly reveal the edge of objects, without needing the original image. We investigate the performance of these two modes through both numerical simulations and experiments. We also develop a single-round scheme to halve the number of measurements in experiments, instead of separately illuminating positive and negative patterns as in traditional methods. The binarized SVD patterns, generated by the spatial dithering method, are modulated by a digital micromirror device (DMD) to speed up the data acquisition. This dual-mode A-SVD GI can be applied in various applications, such as remote sensing and target recognition, and could be further extended for multi-modality functional imaging/detection.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Ghost imaging (GI) has received significant attention in both classical and quantum physics because of its ability to form images using a single-pixel detector without spatial resolution [1–3]. Traditional ghost imaging uses two light beams: one illuminates a series of random light fields onto the object, and the intensity of the scattered or transmitted light after the object is collected by a single-pixel detector, while the other, the reference beam, is recorded by an array detector [4]. The object image can be retrieved from the second-order correlation function between the reference light fields and the total light intensities measured by the single-pixel detector. Various light sources have been demonstrated in ghost imaging, e.g., quantum entangled photons [5], pseudo-thermal light with a rotating ground glass [6], LED light [7], X-rays [8,9], and terahertz bands [10–14]. Without requiring an array detector to record reference patterns, computational ghost imaging has been developed as a commonly used technique [15–17], in which a spatial light modulator (SLM) is utilized to generate programmable illumination patterns. A passive version of computational ghost imaging shares the same experimental scheme with single-pixel imaging, in which the image of the object, instead of the illumination light, is modulated by a digital micromirror device (DMD) or an SLM. Since ghost imaging is a flexible and cost-effective way to retrieve images, it has to date been applied to many fields such as lidar detection [18], time-resolved hyperspectral imaging [10,19], dark-field imaging [20], fluorescence or phase imaging [21], and edge detection [22,23].

Many modified methods have been proposed to improve the imaging quality of GI. Differential ghost imaging (DGI) [24] can improve the signal-to-noise ratio (SNR) of the results, but its performance is still limited. Compressive sensing (CS) based ghost imaging [25] can recover high-SNR images at the cost of computation time. Orthogonally structured patterns have also been applied to this field, such as the Hadamard basis [26], Fourier basis [27], discrete cosine basis [28], and wavelet transformation basis [29]. Recent advances in GI have also been reported with the help of machine learning [30,31] and optimization algorithms such as genetic algorithms [32], which can also be used to focus light through scattering media [33,34]. Pseudo-inverse ghost imaging (PGI) [35] can acquire a high-quality image with fewer measurements. To further improve the recovered image quality and shorten the reconstruction time, singular value decomposition ghost imaging (SVD GI) was proposed, in which orthogonal patterns are generated using the singular value decomposition (SVD) operation [36,37]. However, further investigations are needed, considering the variable sparsity of imaging scenes and the versatile demands of practical applications. Meanwhile, SVD GI requires a two-round differential detection to project the positive and negative illumination patterns, respectively.

In this paper, we propose a dual-mode adaptive-SVD ghost imaging (A-SVD GI) method for both imaging and edge detection of objects, with a reduced number of measurements enabled by a region-adaptive detection strategy. In the first step, the rough outline is obtained by illuminating low-resolution SVD patterns. All the pixels in the acquired low-resolution image are roughly classified into the foreground (containing the object) and background (containing no object) by a threshold selection method. In the second step, high-resolution SVD patterns allocated only to the foreground region are illuminated to obtain the final object images. By simply changing the selection range of foreground pixels, A-SVD GI can be switched to the edge detection mode, which directly reveals the edge of objects. We numerically and experimentally demonstrate this method, followed by comparisons with other methods. We also halve the number of SVD pattern measurements by using a single-round detection method, instead of projecting the positive and negative patterns separately as in traditional SVD GI. Moreover, the spatial dithering method [38] is applied to further improve the refresh rate of the illumination patterns.

2. Principles

2.1 Principles of SVD-GI

In ghost imaging, the image of the object, with M illumination patterns and M corresponding detections, can be obtained as follows [35,36]:

$$\begin{aligned}\overline {\overline {\hat{O}} } ({x,y} ) &= \frac{1}{M}\sum\limits_{i = 1}^M ({{B_i} - \langle B \rangle} )\,{\overline {\overline{I}} _i}({x,y} )= {\overline {\overline{\Phi }} ^T}\bar{B}/M - \langle B \rangle\langle\overline {\overline{{I}}}\rangle\\ &= \frac{1}{M}\left[ {\begin{array}{cccc} {{I_1}({1,1} )}&{{I_2}({1,1} )}& \ldots &{{I_M}({1,1} )}\\ {{I_1}({1,2} )}&{{I_2}({1,2} )}& \ldots &{{I_M}({1,2} )}\\ \vdots & \vdots & \ddots & \vdots \\ {{I_1}({p,p} )}&{{I_2}({p,p} )}& \ldots &{{I_M}({p,p} )} \end{array}} \right]\left[ {\begin{array}{c} {{B_1}}\\ {{B_2}}\\ \vdots \\ {{B_M}} \end{array}} \right] - \langle B \rangle\left[ {\begin{array}{c} \langle{I({1,1} )}\rangle\\ \langle{I({1,2} )}\rangle\\ \vdots \\ \langle{I({p,p} )}\rangle \end{array}} \right],\end{aligned}$$
where the matrix ${\overline {\overline{I}} _i}({x,y} )$ is the $i$-th illumination pattern, the vector $\bar{B}$ represents the total light intensities received by the single-pixel detector, the symbol $\langle{\cdots}\rangle $ denotes an ensemble average over M measurements, and $\overline {\overline{\Phi }} $ is the measurement matrix in which each column is formed by a vectorized illumination pattern.

If the object $\overline {\overline{O}} ({x,y} )$ is vectorized to a column vector of size ${p^2} \times 1$, there is a linear relationship between the detected signals $\bar{B}$ and the object $\overline {\overline{O}} ({x,y} )$ after M detections:

$$\bar{B} = \overline {\overline{\Phi }} {\left[ {\begin{array}{cccc} {O({1,1} )}&{O({1,2} )}&{\ldots }&{O({p,p} )} \end{array}} \right]^T}. $$

Thus, the image reconstruction of ghost imaging can be briefly expressed as follows:

$$\overline {\overline {\hat{O}} } ({x,y} )= \frac{1}{M}{\overline {\overline{\Phi }} ^T}\overline {\overline{\Phi }} \overline {\overline{O}} . $$

Applying singular value decomposition to the random measurement matrix $\overline {\overline{\Phi }} $ and replacing all the singular values with 1, a brand-new measurement matrix ${\overline {\overline{\Phi }} _{svd}}$ can be obtained:

$${\overline {\overline{\Phi }} _{svd}} = \overline {\overline{U}} {\left[ {\begin{array}{cc} {{{\overline {\overline{E}} }_{M \times M}}}&0 \end{array}} \right]_{M \times N}}{\overline {\overline{V}} ^T}, $$
where ${\overline {\overline{E}} _{M \times M}}$ is an identity matrix, and $\overline {\overline{U}} $ and $\overline {\overline{V}} $ are the matrices of left and right singular vectors, respectively.

The SVD GI can be expressed by replacing the measurement matrix $\overline {\overline{\Phi }} $ in Eq. (3) with ${\overline {\overline{\Phi }} _{svd}}$. The SVD measurement matrix is orthogonal, leading to better imaging quality in ghost imaging [39]. Another advantage of SVD patterns is that they impose no limitation on the pattern size, while the Hadamard basis requires the image size to be a power of 2.
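As a concrete illustration, the construction of ${\overline {\overline{\Phi }} _{svd}}$ and the correlation reconstruction can be sketched in a few lines of NumPy; the pattern size, measurement number, and toy object below are arbitrary choices for demonstration, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)
p, M = 16, 200                       # p x p image, M measurements (M <= p*p)
N = p * p

# Random measurement matrix: each row is one vectorized illumination pattern.
Phi = rng.standard_normal((M, N))

# Replace all singular values with 1 to obtain orthonormal SVD patterns.
U, s, Vt = np.linalg.svd(Phi, full_matrices=False)
Phi_svd = U @ Vt                     # rows satisfy Phi_svd @ Phi_svd.T = I

# Forward model: single-pixel signals for a toy binary object.
obj = np.zeros((p, p))
obj[4:12, 4:12] = 1.0
B = Phi_svd @ obj.ravel()

# Reconstruction: with orthonormal rows, Phi^T Phi projects the object
# onto the measured subspace (the 1/M factor of Eq. (3) is only a scaling).
recon = (Phi_svd.T @ B).reshape(p, p)
```

Setting the singular values to 1 keeps the row space of the random matrix while making the rows orthonormal, which is what improves the conditioning of the reconstruction.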

2.2 Dual-mode adaptive-SVD ghost imaging

In most imaging scenes, the entire region is not fully occupied by the object. Thus, if we can localize the rough area occupied by the object, fewer measurements are needed to retrieve high-quality images compared with conventional ghost imaging methods.

As shown in Fig. 1, the flowchart of the proposed A-SVD GI method consists of two steps. Firstly, a small number of low-resolution patterns are used to obtain a blurred image to localize the region that contains the object, as the preliminary detection. In the case shown in Fig. 1(a), a $160 \times 160$-pixel area is illuminated by the $32 \times 32$-superpixel SVD patterns, which means each superpixel consists of $5 \times 5$ pixels. The low-resolution image on the right of Fig. 1(a) is obtained by applying 1024 illumination patterns with a resolution of $32 \times 32$, corresponding to a sampling ratio of 1/25 for the final $160 \times 160$-pixel retrieved image.
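The superpixel expansion described above can be sketched as follows; block replication with `np.kron` is one straightforward way to map each $32 \times 32$ superpixel pattern onto the $160 \times 160$ DMD plane (the random low-resolution pattern here is a placeholder for an actual SVD pattern):

```python
import numpy as np

N, n = 160, 5                 # full DMD resolution and superpixel size
m = N // n                    # 32 superpixels per side

# One low-resolution pattern (placeholder for a 32 x 32 SVD pattern).
low = np.random.default_rng(1).random((m, m))

# Replicate each superpixel value over an n x n block of physical pixels.
high = np.kron(low, np.ones((n, n)))
```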


Fig. 1. Flowchart of A-SVD GI for both imaging and edge detection of objects. (a) Schematic of the preliminary detection of SVD GI to obtain the rough outline of the object. The entire region containing $160 \times 160$ pixels is divided into $32 \times 32$ superpixels, each superpixel containing $5 \times 5$ pixels. A series of orthogonal patterns, with a resolution of $32 \times 32$, are generated by the SVD operation. The low-resolution image is obtained in the preliminary detection. (b) Schematic of the imaging mode. The histogram of the pixel intensity values of the obtained low-resolution image is plotted on the right. The ${N_S}$ superpixels in the preliminary result whose values are larger than the threshold ${k_1}$ are classified into the foreground region, indicated by the green region in the histogram. A series of patterns with a resolution of $160 \times 160$, in which only the foreground regions are allocated with SVD matrices and the background regions are allocated with 0, are then illuminated on the object plane. The rug plot at the bottom of the histogram visualizes the distribution of the pixel values. The ground-truth-like result is obtained using the proposed method. (c) Schematic of the edge detection mode of A-SVD GI. The intensity histogram of the low-resolution image is also shown on the right. Here, however, the superpixels with values in the range of $[{{k_1},\,{k_2}} ]$ (the green region) are selected as the foreground region to perform edge detection. A binary Mona Lisa image is adapted as the original object with permission (© Can Stock Photo Inc. / [YuriV] / www.canstockphoto.com).


In the second step, the method branches into two different modes: imaging and edge detection of objects. We normalize the low-resolution image obtained in the first step to $[{0,1} ]$ and plot its histogram, as shown on the right of Fig. 1(b)&(c). The normalization method used in this paper is min-max feature scaling, expressed as $\tilde{X} = ({X - {X_{\textrm{min}}}} )/({X_{\textrm{max}}} - {X_{\textrm{min}}})$. For imaging purposes, we select the foreground region in which the pixel value is in the range of $\sigma \in [{k,1} ]$. In other words, pixels with higher values are foreground and those with lower values are background. The threshold k is determined by the Otsu variance-based algorithm [39], which can be expressed as finding the maximum of the between-class variance $\sigma _B^2$:

$$\sigma _B^2(k )= \frac{{{{[{{\mu_T}\omega (k )- \mu (k )} ]}^2}}}{{\omega (k )[{1 - \omega (k )} ]}}, $$
where $\omega (k )$ and $\mu (k )$ are the zeroth- and first-order cumulative moments of the histogram up to the $k$-th level, and ${\mu _T}$ is the total mean level of the image. In practice, the threshold value can be reduced by multiplying by a factor to avoid missing image details. The selected foreground region is then allocated with the values of SVD patterns, while the pixels in the background regions are filled with ‘0’, to form the high-resolution illumination patterns. Finally, a high-quality image is retrieved by Eqs. (3)&(4), as shown on the left of Fig. 1(b). Note that the background region is directly ignored to reduce both the number of measurements and the calculation time.
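A minimal sketch of this threshold selection is given below; `otsu_threshold` is an illustrative helper implementing Eq. (5) on a normalized histogram, and the synthetic bimodal data and the relaxation factor of 0.9 are assumptions for demonstration only:

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Illustrative Otsu threshold maximizing the between-class
    variance of Eq. (5), for values normalized to [0, 1]."""
    hist, edges = np.histogram(img.ravel(), bins=nbins, range=(0.0, 1.0))
    p = hist / hist.sum()                    # normalized histogram
    levels = 0.5 * (edges[:-1] + edges[1:])  # bin centers
    omega = np.cumsum(p)                     # zeroth cumulative moment
    mu = np.cumsum(p * levels)               # first cumulative moment
    mu_T = mu[-1]                            # total mean level
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_B2 = (mu_T * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_B2 = np.where(np.isfinite(sigma_B2), sigma_B2, np.nan)
    return levels[np.nanargmax(sigma_B2)]

# Synthetic bimodal low-resolution image: dark background, bright object.
rng = np.random.default_rng(2)
img = np.concatenate([rng.random(500) * 0.2, 0.8 + rng.random(500) * 0.2])

k = otsu_threshold(img)
foreground = img >= 0.9 * k   # relaxed threshold to avoid losing details
```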

For the purpose of edge detection, we can simply localize a rough region of the object edge by choosing the pixels with values belonging to a selected range $\sigma \in [{{k_1},{k_2}} ]$. The potential edge pixels are those with values between the two thresholds ${k_1}$ and ${k_2}$, which are determined by the Otsu method and an empirical factor, respectively. The superpixels with values larger than the upper bound ${k_2}$ can be regarded as regions located inside objects, while those with values between ${k_1}$ and ${k_2}$ can be considered regions not fully occupied by objects, i.e., edge regions. Then, the high-resolution SVD patterns allocated only in the edge regions are generated to retrieve the edge of the object, as shown on the left of Fig. 1(c). Here, 1024 patterns and 5175 patterns are used in the two steps, respectively, corresponding to a sampling ratio of 24.21%.
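The difference between the two modes thus reduces to the range of selected superpixel values; the sketch below uses hypothetical thresholds and a synthetic low-resolution image, where only the partially occupied superpixels fall between $k_1$ and $k_2$:

```python
import numpy as np

# Synthetic normalized low-resolution image: a bright 16 x 16 square whose
# top row of superpixels is only partially occupied (value 0.5).
low = np.zeros((32, 32))
low[8:24, 8:24] = 1.0
low[8, 8:24] = 0.5

k1, k2 = 0.3, 0.95            # hypothetical Otsu threshold and upper bound

# Imaging mode: every superpixel above k1 is foreground.
fg_imaging = low >= k1

# Edge-detection mode: only superpixels in [k1, k2], i.e. the partially
# occupied ones, are selected for the second high-resolution step.
fg_edge = (low >= k1) & (low <= k2)
```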

2.3 Single-round SVD GI measurement

Welsh et al. proposed a differential method to split the original pattern into a positive pattern ${\overline {\overline{P}} _ + } = (1 + {\overline {\overline{P}} _O})/2$ and a negative pattern ${\overline {\overline{P}}_- } = (1 - {\overline {\overline{P}}_O})/2$, to solve the problem that negative values cannot be physically projected [40]. The SVD orthogonal patterns consist of positive and negative elements, so they are also split into positive and negative patterns in traditional SVD GI, as shown in Fig. 2(a). The difference between the two total light intensities corresponding to these two patterns is the response of the original orthogonal pattern. This method thus doubles the number of measurements and the measurement time in experiments.


Fig. 2. Single-round measurement scheme for A-SVD GI. (a) Traditional differential measurement method. Each original pattern is divided into a positive and a negative pattern. The detected signals are acquired by the difference of detected two light intensities using the positive and negative patterns. (b) Proposed single-round measurement. The original pattern is normalized to $[{0,\; 1} ]$, as the projection pattern. An all ‘1’ pattern is introduced as the auxiliary pattern for all projected SVD patterns. Each original pattern can be represented by the difference of the projection pattern and the auxiliary pattern.


Here, we develop a single-round detection method for the illumination of the SVD pattern to halve the measurement numbers, as shown in Fig. 2 (b). The principle is formulated as follows:

$${\overline {\overline{P}} _O} = {c_1}{\overline {\overline{P}} _P} - {c_2}{\overline {\overline{P}} _A} = {c_1}\,\frac{{\overline {\overline{P}} _O} - \textrm{min}({{\overline {\overline{P}} }_O})}{\textrm{max}({{\overline {\overline{P}} }_O}) - \textrm{min}({{\overline {\overline{P}} }_O})} - {c_2}\overline {\overline{E}}, $$
where $\overline {\overline {{P_O}} } $, $\overline {\overline {{P_P}} } $ and $\overline {\overline {{P_A}} } $ are the original SVD pattern, the projected pattern, and the auxiliary pattern, respectively. ${c_1}$ and ${c_2}$ are introduced as coefficients.

We normalize the original pattern $\overline {\overline {{P_O}} } $ into the range [0,1] to obtain the projected pattern $\overline {\overline {{P_P}} } $. An all-ones auxiliary pattern $\overline {\overline {{P_A}} } $, multiplied by a coefficient ${c_2}$ for all pixels, is introduced. To satisfy Eq. (6), the coefficient ${c_1}$ equals the difference between the maximal and minimal values of the original pattern, and the coefficient ${c_2}$ equals the absolute value of the minimum of the original pattern. Thus, by additionally projecting an auxiliary pattern, the measurement for the SVD orthogonal basis can be achieved in a single-round scheme in which only the normalized SVD patterns are projected, saving half of the projection patterns required in traditional SVD GI. Such a single-round measurement scheme can also be applied to ghost/single-pixel imaging using other orthogonal bases [41].
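The single-round scheme can be checked numerically; the sketch below assumes the original pattern contains at least one negative entry, so that $c_2 = |\min(\cdot)|$ is the required offset:

```python
import numpy as np

# One original SVD-like pattern with both positive and negative entries.
P_O = np.random.default_rng(4).standard_normal((32, 32))

# Projected pattern: original normalized to [0, 1] (displayable on a DMD).
P_P = (P_O - P_O.min()) / (P_O.max() - P_O.min())
P_A = np.ones_like(P_O)       # auxiliary all-ones pattern, projected once

c1 = P_O.max() - P_O.min()    # scale coefficient
c2 = abs(P_O.min())           # offset coefficient (the minimum is negative)

# Eq. (6): the original pattern is recovered from the two projectable ones,
# so by linearity the responses obey B_O = c1 * B_P - c2 * B_A, where B_A
# needs to be measured only once for the whole pattern sequence.
recovered = c1 * P_P - c2 * P_A
```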

For A-SVD GI assisted by the proposed single-round measurement, the number of projected patterns, M, is represented as:

$$M = {M_1} + {M_2} + 2, $$
where ${M_1}$ and ${M_2}$ are the numbers of projected patterns in the two steps, respectively. Two auxiliary patterns are used for the detection in the two steps. Considering the size of the whole region is $N \times N$ pixels, ${M_1}$ is related to the size of defined superpixels. If every superpixel contains $n \times n$ pixels, ${M_1}$ can be expressed as:
$${M_1} = \frac{N}{n} \times \frac{N}{n}. $$

Assuming that there are ${N_S}$ selected foreground superpixels, the measurement number ${M_2}$ in the second step is:

$${M_2} = n \times n \times {N_S}. $$

Then, the total number of measurements is shown as follows:

$$M = {\left( {\frac{N}{n}} \right)^2} + {n^2}{N_S} + 2 \ge 2N\sqrt {{N_S}} + 2 \approx 2N\sqrt {{N_S}} . $$

In most cases, $N\sqrt {{N_S}} $ is hundreds or thousands of times larger than two, so the last term is negligible. The sampling ratio of A-SVD GI is summarized as:

$$\eta = \frac{M}{{{N^2}}} = \frac{1}{{{n^2}}} + {n^2}\frac{{{N_S}}}{{{N^2}}} + \frac{2}{{{N^2}}} \ge 2\frac{{\sqrt {{N_S}} }}{N} + \frac{2}{{{N^2}}} \approx 2\frac{{\sqrt {{N_S}} }}{N}. $$
Eq. (11) shows that the sampling ratio of our A-SVD GI depends on the ratio between the number of selected foreground superpixels and the total number of pixels. Compared with the full sampling of the whole scene in traditional SVD GI, the sampling ratio of A-SVD GI is lower because the background pixels are ignored in the second step, especially when imaging an object that occupies only a small part of the scene.
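Under the simplifying assumption that the number of foreground superpixels $N_S$ is held fixed while the superpixel size $n$ varies (in practice $N_S$ itself depends on $n$), the measurement count of Eqs. (7)-(9) and the AM-GM lower bound of Eq. (10) can be checked numerically:

```python
import numpy as np

def measurements(N, n, N_S):
    """Total projected patterns in single-round A-SVD GI, Eqs. (7)-(9)."""
    return (N // n) ** 2 + n * n * N_S + 2

N, N_S = 160, 120             # illustrative scene size and foreground count

# AM-GM lower bound of Eq. (10): M >= 2 N sqrt(N_S) + 2.
bound = 2 * N * np.sqrt(N_S) + 2
for n in (2, 4, 5, 8):
    assert measurements(N, n, N_S) >= bound
```

The bound is approached when $(N/n)^2 = n^2 N_S$, i.e. when the superpixel size balances the costs of the two steps.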

3. Simulation results

3.1 Numerical comparison of different methods

We compare numerical simulations of A-SVD GI with other methods, including DGI, PGI, and SVD GI, as shown in Fig. 3. Here, the original object is a $128 \times 128$-pixel image with multiple squares at different distances, as shown in the left part of Fig. 3(a). The size of superpixels is defined as $2 \times 2$ in this part. To present all imaging results with the same color bar, the pixel intensity values of each result are normalized to [0,1]. The correlation coefficient (CC) [42] is introduced to evaluate the image quality. Under the sampling ratio of 41.65%, the reconstructed results of the different methods are shown in the right part of Fig. 3(a). The zoom-in comparison in Fig. 3(b) clearly illustrates the different image quality of these methods for the smallest squares in the rectangle marked red. The results show that all these methods can distinguish the squares with larger separations, although the results of DGI, PGI, and SVD GI have a lower SNR. For the tiny squares with smaller separations (1 or 2 pixels) shown in the rectangle marked red, A-SVD GI can retrieve a high-quality image, whereas the other methods cannot reconstruct recognizable results, as shown in Fig. 3(b).


Fig. 3. Numerical comparison of imaging results between different methods under the sampling ratio of 41.65%. The original object is shown on the left of (a). The results of different methods, including DGI, PGI, SVD GI, and A-SVD GI, are shown in the right part of (a). (b) The zoom-in results of different methods for closely-distributed squares, as marked in the red square of (a). Here the superpixel size is $2 \times 2$.


We also conduct simulations for another object with a different superpixel size. For the object ‘BUAA’ with a resolution of $128 \times 128$, A-SVD GI requires a lower sampling ratio with a defined superpixel size of $4 \times 4$. The imaging results of the different methods are compared in Fig. 4. Under the sampling ratio of 24.7%, A-SVD GI can achieve full sampling for the pixels containing objects, leading to a high imaging quality, which is also indicated by the high CC (0.999) shown in Fig. 4, while the other GI methods suffer from background noise, leading to a poor SNR. PGI and SVD GI show similar imaging quality because PGI also relies on the SVD operation to calculate the pseudo-inverse of the measurement matrix. However, PGI (11.88 s) costs more calculation time than SVD GI (0.15 s). Because the background pixels containing no object are ignored in the second detection step, the recovered result of A-SVD GI shows a higher image quality compared with the other classical GI methods.


Fig. 4. Numerical comparison of different methods with the object ‘BUAA’. The original object is shown in the left part. Under the sampling ratio of 24.7%, the comparison between different methods is shown in the right part. Here the pixel resolution is $128 \times 128$. The superpixel size is $4 \times 4$.



Fig. 5. The relationship between the correlation coefficient (CC) and the sampling ratio for different methods. Four results under a sampling ratio of 18.97% are shown in (b): A-SVD GI; (c): PGI; (d): SVD GI; (e): DGI.


The relationship between the CC and the sampling ratio for the different methods is plotted in Fig. 5. Because the superpixel size is $4 \times 4$, the sampling ratio required to achieve full sampling for the low-resolution result in the first step is 6.25%. By detecting only the foreground region distinguished from the low-resolution result, instead of the full region, A-SVD GI shows a better performance than the other methods, as shown in Fig. 5(b)-(e). A-SVD GI achieves a high imaging quality (CC = 0.999) once the sampling ratio exceeds the threshold (24.7%) corresponding to full sampling of all foreground pixels, shown as the red line in Fig. 5. The imaging results under the sampling ratio of 24.7% are shown in Fig. 4, which demonstrate that A-SVD GI can retrieve a ground-truth-like result. The CCs of PGI (green line) and SVD GI (blue line) show similar trends, higher than that of DGI, as the sampling ratio increases from 7% to 100%.

We further study the performance of our method with a grayscale object. The original object contains two faces, ‘Happy’ and ‘Sad’, the letters of ‘Happy’ and ‘Sad’, and the letters of ‘Ghost Imaging’ with gradient values, as shown in Fig. 6. The results show that, for the grayscale object, A-SVD GI still maintains a good imaging quality (CC = 0.999) with a total time consumption of 23.28 s for the two steps. Note that the generation of the high-resolution patterns via the SVD operation takes the majority of this time (22.57 s). The time consumption of DGI and SVD GI is 0.52 s and 0.43 s, respectively. PGI keeps an imaging quality comparable to SVD GI at the price of time consumption (129.87 s). Even though A-SVD GI shows more details than the other GI methods, this result also reveals a minor limitation of A-SVD GI: some pixels with low grayscale levels may be regarded as background pixels. In this case, a part of the capital letter ‘G’ with a grayscale of 0.03 is not revealed by A-SVD GI. Because the threshold k in Eq. (5) can hardly distinguish object pixels with low values from the background, some details of the objects are immersed in the background.


Fig. 6. Numerical comparison between different methods for grayscale objects.


To test the robustness of A-SVD GI, we further compare the influence of detection noise by different methods under a sampling ratio of 50%, as shown in Fig. 7. The Gaussian white noise is added to detection signals $\bar{B}$ in Eq. (2).


Fig. 7. Performance comparison between different methods in a noisy environment under the sampling ratio of 50%. (a) The solid lines with different colors show the relationship between the CC and the SNR of the detection. (b) The result of A-SVD GI+ under an SNR of 20 dB. Five results under an SNR of 15 dB are shown in (c): A-SVD GI+; (d): SVD GI; (e): PGI; (f): A-SVD GI; (g): DGI.


The results show that A-SVD GI can maintain high-quality imaging in a noisy environment with an SNR higher than 20 dB, plotted as the blue line in Fig. 7(a). The added noise makes it hard to distinguish the foreground from the background when the SNR is lower than 15 dB. Background pixels influenced by the noise are misrecognized as foreground and illuminated in the second step. Under such conditions, the sampling ratio of 50% is not sufficient to achieve full sampling of the foreground region, leading to a decrease in the CC of A-SVD GI. To increase the robustness of A-SVD GI, we multiply the results obtained in the two steps, leading to more robust results denoted as A-SVD GI+, plotted as the red line in Fig. 7(a). The result of A-SVD GI+ under an SNR of 15 dB is also shown in Fig. 7(c), in which the word ‘Sad’ is more recognizable than in the other methods, shown in Fig. 7(d)-(g).
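The A-SVD GI+ combination amounts to a pixel-wise product of the two normalized results; the arrays below are synthetic stand-ins for the upsampled step-one image and the step-two reconstruction:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4                          # superpixel size

# Stand-ins for the two normalized results in [0, 1]: the low-resolution
# step-one image upsampled to full resolution, and the step-two result.
low_up = np.kron(rng.random((8, 8)), np.ones((n, n)))
high = rng.random((32, 32))

# A-SVD GI+: the pixel-wise product suppresses any pixel that is dark in
# either result, which damps noise-induced background in the final image.
result_plus = low_up * high
```

Since both factors lie in [0, 1], the product can never exceed either input, so noisy background pixels that survive one step are attenuated by the other.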

In this section, we first choose an object containing tiny structures, as shown in Fig. 3, to reveal that A-SVD GI can reconstruct high-quality images. Then, we show that this method can be designed with varied superpixel sizes and retrieve images at a lower sampling ratio in Fig. 4. The area occupied by objects in Fig. 4 (1764 pixels) is larger than that in Fig. 3 (1503 pixels), where both original images are $128 \times 128$ pixels. However, the sampling ratio in Fig. 4 (24.7%) is lower than that in Fig. 3 (41.65%), due to the use of larger superpixels ($2 \times 2$ for Fig. 3 and $4 \times 4$ for Fig. 4). A further study of the relationship between image quality and sampling ratio is presented in Fig. 5. It is noted that A-SVD GI is also applicable to grayscale objects (Fig. 6) and has the advantage of better image quality when the SNR of the measurements decreases (Fig. 7).

3.2 Mode of edge detection of A-SVD GI

As shown in the schematic of the imaging mode of A-SVD GI in Fig. 1(b), the superpixels with values $\sigma \in [{k,1} ]$ in the low-resolution image obtained by the preliminary detection are selected for further detection, resulting in imaging with good quality. In this part, we instead select the superpixels with values $\sigma \in [{{k_1},{k_2}} ]$ to roughly localize the pixels containing the edge region of the object. Further detection is then conducted on these regions in the second step to reveal the edge of the object. The numerical results for the objects ‘a square and a circle’ and ‘two gentlemen with hats’ are shown in Fig. 8(a) & (b), respectively, with the upper bound ${k_2}$ varying from 0.55 to 1.


Fig. 8. Mode of edge detection of A-SVD GI. The original object of (a) ‘a square and a circle’ and (b) ‘two gentlemen with hats’ are shown in the left part. The right part shows the numerical results with varying ${k_2}$ ranging from 0.55 to 1. Here the pixel resolution is $128 \times 128$ for all the images.


Figure 8(a) shows that A-SVD GI can be switched to the edge detection mode for the object ‘a square and a circle’ if the upper bound ${k_2} \le 0.95$. A part of the edge information starts to be revealed when ${k_2} = 0.55$. For ${k_2} = 0.95$, a high-quality image of the complete edge is presented. However, more and more details of the object edge are lost as ${k_2}$ decreases. The lower bound ${k_1}$, determined by the Otsu method, varies from 0.477 to 0.484 in the different simulations shown in Fig. 8(a). The sampling ratio for edge detection in this case (${k_2} = 0.95$) is 28.61%. For the object ‘two gentlemen with hats’, the object’s edge line is still completely revealed by the edge detection mode of A-SVD GI if $0.9 \le {k_2} \le 0.95$, as shown in Fig. 8(b). In this case, the lower bound ${k_1}$ ranges from 0.457 to 0.461, and the corresponding sampling ratios for ${k_2} = 0.9$ and ${k_2} = 0.95$ are 30.83% and 32.01%, respectively.

4. Experiments

4.1 Experimental setup

Our experimental setup is shown in Fig. 9(a). A 532 nm laser (MGL-FN-532-500 mW, CNI) is used as the light source. After being expanded and collimated, the light beam is reflected by a digital micromirror device (DMD, ViALUX GmbH V-7001), where a series of binary modulation patterns are generated. These illumination patterns are then projected onto the object plane via two lenses. The object is a 1951 USAF test target (R3L3S1N, Thorlabs). After passing through the object, the transmitted light is focused by a collection lens and measured by an amplified photodiode (PDA100A2, Thorlabs). A purpose-built Python algorithm and a multifunction data acquisition device (USB-6353, National Instruments) are used to control the DMD and synchronize the detector with the DMD. The refresh rate of the DMD is set at 10 kHz for fast data acquisition.


Fig. 9. (a) Experimental setup. M: mirror; L1- L5: lenses; DMD: Digital Micromirror Device; O: object; SPD: single-pixel detector; NI-DAQ: National Instruments-Data acquisition device; PC: Personal computer. (b) Schematic of the spatial dithering method used for the illumination of grayscale patterns by the DMD.


The DMD illuminates binary patterns using its micromirror array. To generate grayscale patterns via the DMD, the spatial dithering method is used, which can be summarized in three steps [38], as shown in Fig. 9(b):

  • 1) Enlarge the original patterns to $aN \times aN$ by interpolation.
  • 2) Every grayscale pixel is represented by $a \times a$ binary pixels.
  • 3) Apply an error-diffusion dithering algorithm to generate a binarized grayscale pattern.
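The steps above can be sketched as follows; Floyd-Steinberg error diffusion is used here as one common choice of diffusion-dithering kernel, not necessarily the exact algorithm of Ref. [38]:

```python
import numpy as np

def floyd_steinberg(gray):
    """Binarize a grayscale pattern in [0, 1] by error-diffusion
    dithering (Floyd-Steinberg weights), preserving local mean intensity."""
    img = gray.astype(float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            out[y, x] = 1.0 if img[y, x] >= 0.5 else 0.0
            err = img[y, x] - out[y, x]          # quantization error
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16    # right
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16   # down-left
                img[y + 1, x] += err * 5 / 16           # down
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16   # down-right
    return out

a = 4                                            # scale factor of steps 1-2
pattern = np.random.default_rng(6).random((16, 16))   # grayscale stand-in
enlarged = np.kron(pattern, np.ones((a, a)))     # steps 1-2: enlarge to aN x aN
binary = floyd_steinberg(enlarged)               # step 3: dither to binary
```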

4.2 A-SVD ghost imaging

In the experiment, a part of the USAF target is imaged to test the performance of A-SVD GI. The resolution of the first sample is $96 \times 96$, while that of the second and third samples is $48 \times 48$. The superpixel size is also different for these samples, which is $4 \times 4$ for the second sample and $2 \times 2$ for the first and third samples. The sampling ratio $\eta $ for different samples is shown in the top part of Fig. 10, which is 26.7%, 29.6%, and 37.4%, respectively. The retrieved results by different methods, including DGI, SVD GI, and A-SVD GI, are shown in Fig. 10.


Fig. 10. Comparison of experimental results of different methods, including DGI, SVD GI, and A-SVD GI. The sampling ratios for samples 1, 2, and 3 are 26.7%, 29.6%, and 37.4%, respectively. The image sizes are $96 \times 96$, $48 \times 48,$ and $48 \times 48$, respectively. The sizes of superpixel are $2 \times 2$, $4 \times 4,$ and $2 \times 2$, respectively. All the results are normalized and share the same color bar, shown in the lower right corner.


The experimental results of the three samples show that A-SVD GI achieves better imaging quality and a higher SNR than the other two methods. The experiment on ‘Sample 2’ shows that if the superpixel size is not chosen properly, some redundant background pixels near the object are detected in the second step, leading to a lower SNR. The comparison between ‘Sample 2’ and ‘Sample 3’ shows that the sampling ratio changes with the superpixel size: as the superpixel size increases, the sampling number of the first step ${M_1}$ decreases, as shown in Eq. (8), while that of the second step ${M_2}$ increases, since more background pixels near the object are involved. Consequently, the experiment on ‘Sample 2’, with a superpixel size of $4 \times 4$, shows a lower sampling ratio than that of ‘Sample 3’. The comparison between the imaging results of ‘Sample 1’ and ‘Sample 3’ shows that the ratio of the number of foreground pixels containing objects to the total number of pixels plays an important role in the sampling ratio, in agreement with Eq. (11).
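The trade-off between the two steps can be checked directly from Eqs. (7)-(11). The short sketch below, with an illustrative foreground superpixel count $N_S$ (not the value measured in the experiment), shows how ${M_1}$, ${M_2}$, and $\eta$ move as the superpixel size $n$ changes:

```python
def sampling_ratio(N, n, N_S):
    """Measurement count and sampling ratio of A-SVD GI, per Eqs. (7)-(11).

    N: image side length in pixels; n: superpixel side length;
    N_S: number of foreground superpixels found by the preliminary detection.
    """
    M1 = (N // n) ** 2        # Eq. (8): low-resolution preliminary detection
    M2 = n * n * N_S          # Eq. (9): full-resolution scan of the foreground
    M = M1 + M2 + 2           # Eq. (7): two extra auxiliary measurements
    return M, M / N ** 2      # Eq. (11): sampling ratio eta

# Illustrative trade-off at fixed N and N_S (in practice N_S also depends on n):
for n in (2, 4, 8):
    M, eta = sampling_ratio(N=96, n=n, N_S=100)
    print(f"n={n}: M={M}, eta={eta:.1%}")
```

Note that Eq. (10) gives the lower bound $M \ge 2N\sqrt{N_S} + 2$, which is attained when $n^2 = N/\sqrt{N_S}$.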

4.3 Mode of edge detection of A-SVD GI

We experimentally demonstrate the edge-detection mode of A-SVD GI for four samples, as shown in Fig. 11. These samples are parts of the USAF target, including a square and numbers at different positions. The preliminary detection alone can only roughly show the outline of the samples, as shown in the upper part of Fig. 11. After further detection on the edge region, a clear edge of the objects is revealed via A-SVD GI, as shown in the lower part of Fig. 11.


Fig. 11. Experimental result of A-SVD GI for edge detection. All the results are normalized and share the same color bar, as shown in the lower right corner.


The experimental result confirms that A-SVD GI can be switched to the edge-detection mode by changing the range of selected pixels $\sigma \in [{{k_1},{k_2}} ]$. A clear edge is directly revealed for sample 1, as shown in the first column of Fig. 11. To overcome the non-uniform light field in the experiments on the objects ‘1’ and ‘2’, a mean filtering method can be applied to the preliminary imaging result, which improves the threshold selection so that no edge details are lost, as shown in Fig. 11.
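A minimal sketch of this pre-processing step is given below, combining a simple box (mean) filter with the Otsu threshold selection of Eq. (5) [39]. The function names and the box-filter size are our assumptions for illustration; the authors' exact implementation may differ.

```python
import numpy as np

def mean_filter(img, k=3):
    """Box (mean) filter to suppress non-uniform illumination before thresholding."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def otsu_threshold(img, bins=256):
    """Otsu's method [39]: maximize the between-class variance of Eq. (5)."""
    hist, edges = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    omega = np.cumsum(p)                 # class probability omega(k)
    mu = np.cumsum(p * centers)          # first-order cumulative moment mu(k)
    mu_T = mu[-1]                        # total mean mu_T
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b2 = (mu_T * omega - mu) ** 2 / (omega * (1 - omega))
    return centers[np.nanargmax(sigma_b2)]

def select_edge_region(low_res, k1, k2):
    """Edge mode: keep superpixels whose filtered values fall in [k1, k2]."""
    smooth = mean_filter(low_res)
    return (smooth >= k1) & (smooth <= k2)
```

In the imaging mode, only the lower bound ${k_1}$ (e.g. from `otsu_threshold`) is used; the edge-detection mode additionally caps the selection at ${k_2}$, so that only the transition region around the object boundary is re-illuminated.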

5. Conclusions and discussion

To conclude, we propose and experimentally demonstrate a dual-mode, adaptive scheme of ghost imaging, termed A-SVD GI, which can be easily switched between the modes of imaging and edge detection by changing the selected range of foreground pixels. With the help of a pre-detection, only the foreground regions are measured, while the pixels in the background regions are ignored, thereby reducing the sampling ratio. The numerical and experimental results show that A-SVD GI retrieves high-quality images in the imaging mode. The edge-detection mode is also experimentally validated. In addition, we develop a single-round scheme to halve the number of illumination patterns. After being binarized by the spatial dithering method, the SVD patterns are modulated by a DMD at a high frame rate.
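Our reading of the single-round scheme of Eq. (6) can be sketched numerically: each bipolar SVD pattern is normalized to a projection pattern in $[0, 1]$, a single all-'1' auxiliary pattern is measured once for the whole sequence, and the bipolar signal is recovered by linearity. The constants $c_1$ and $c_2$ below follow from the normalization; this is an illustrative sketch, not the authors' code.

```python
import numpy as np

def single_round_signals(patterns, measure):
    """Recover the signals of bipolar SVD patterns from single-round measurements.

    patterns: list of 2D arrays with possibly negative entries (original patterns).
    measure: callable driving/simulating the detector; here it returns the total
             transmitted intensity sum(pattern * object).
    """
    ones = np.ones_like(patterns[0])
    B_A = measure(ones)                   # one auxiliary all-'1' measurement
    signals = []
    for P_O in patterns:
        lo, hi = P_O.min(), P_O.max()
        P_P = (P_O - lo) / (hi - lo)      # projection pattern normalized to [0, 1]
        c1, c2 = hi - lo, -lo             # so that P_O = c1 * P_P - c2 * E, Eq. (6)
        signals.append(c1 * measure(P_P) - c2 * B_A)
    return signals

# Sanity check against direct (bipolar) illumination:
rng = np.random.default_rng(0)
obj = rng.random((8, 8))
pats = [rng.standard_normal((8, 8)) for _ in range(3)]
direct = [float((p * obj).sum()) for p in pats]
recovered = single_round_signals(pats, lambda p: float((p * obj).sum()))
print(np.allclose(direct, recovered))  # True
```

Since the auxiliary measurement is shared by all patterns, $M$ patterns require only $M + 1$ projections instead of the $2M$ of the traditional positive/negative differential scheme.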

It is noted that the proposed method can also be applied to grayscale objects, although most of the objects demonstrated in this work are binary. For grayscale objects, some low-intensity pixels of the object image may be misclassified as background. However, this does not affect the proposed method for salient object detection in practical vision tasks such as object recognition or edge detection [43], in which the imaging system focuses on the areas occupied by the main objects. A further modification could compensate for the low-grayscale areas based on the measurements of the preliminary detection.

Although we demonstrate the dual-mode A-SVD GI at visible wavelengths, the scheme can also be applied in other variants of computational imaging methods at various wavelengths. Even though a regional adaptive scheme of conventional GI has been reported, it is not suitable for multiple objects because its localization algorithm, based on the Fourier slice theorem, cannot recognize multiple objects efficiently [44]. Our proposed method could be widely used in applications where multiple objects occupy only a part of the whole region. Further study is still needed to improve the imaging speed and robustness of A-SVD GI. Apart from the Otsu threshold selection method used in this paper, many advanced image segmentation methods [45], including active contours [46], graph cuts [47], and Markov random fields [48], have the potential to localize the object more accurately. The efficiency of A-SVD GI may also be further enhanced by defining superpixels with flexible shapes, such as the foveated patterns [49]. This dual-mode adaptive GI scheme could also be further extended toward multi-modality for versatile functional applications.

Funding

China Postdoctoral Science Foundation (2022M720347); National Natural Science Foundation of China (11804018, 62075004, 62275010); Beijing Municipal Natural Science Foundation (1232027, 4212051); Fundamental Research Funds for the Central Universities (YWF-22-L-1253).

Acknowledgments

We would like to thank Beihang University and the Fundamental Research Funds for the Central Universities for financial support.

Disclosures

The authors declare no conflicts of interest.

Data availability

Example datasets and reconstruction codes are available in [50]. Full datasets are available from the authors upon reasonable request.

References

1. T. B. Pittman, Y. H. Shih, D. V. Strekalov, and A. V. Sergienko, “Optical imaging by means of two-photon quantum entanglement,” Phys. Rev. A 52(5), R3429–R3432 (1995). [CrossRef]  

2. G. B. Lemos, V. Borish, G. D. Cole, S. Ramelow, R. Lapkiewicz, and A. Zeilinger, “Quantum imaging with undetected photons,” Nature 512(7515), 409–412 (2014). [CrossRef]  

3. P.-A. Moreau, E. Toninelli, T. Gregory, and M. J. Padgett, “Imaging with quantum states of light,” Nat. Rev. Phys. 1(6), 367–380 (2019). [CrossRef]  

4. D. Zhang, Y.-H. Zhai, L.-A. Wu, and X.-H. Chen, “Correlated two-photon imaging with true thermal light,” Opt. Lett. 30(18), 2354–2356 (2005). [CrossRef]  

5. N. Bornman, M. Agnew, F. Zhu, A. Vallés, A. Forbes, and J. Leach, “Ghost imaging using entanglement-swapped photons,” npj Quantum Inform. 5(1), 63 (2019). [CrossRef]  

6. A. Gatti, M. Bache, D. Magatti, E. Brambilla, F. Ferri, and L. Lugiato, “Coherent imaging with pseudo-thermal incoherent light,” J. Mod. Opt. 53(5-6), 739–760 (2006). [CrossRef]  

7. Z.-H. Xu, W. Chen, J. Penuelas, M. Padgett, and M.-J. Sun, “1000fps computational ghost imaging using LED-based structured illumination,” Opt. Express 26(3), 2427–2434 (2018). [CrossRef]  

8. Y. Klein, A. Schori, I. Dolbnya, K. Sawhney, and S. Shwartz, “X-ray computational ghost imaging with single-pixel detector,” Opt. Express 27(3), 3284–3293 (2019). [CrossRef]  

9. H. Yu, R. Lu, S. Han, H. Xie, G. Du, T. Xiao, and D. Zhu, “Fourier-transform ghost imaging with hard X rays,” Phys. Rev. Lett. 117(11), 113901 (2016). [CrossRef]  

10. L. Olivieri, J. S. T. Gongora, L. Peters, V. Cecconi, A. Cutrona, J. Tunesi, R. Tucker, A. Pasquazi, and M. Peccianti, “Hyperspectral terahertz microscopy via nonlinear ghost imaging,” Optica 7(2), 186–191 (2020). [CrossRef]  

11. S.-C. Chen, Z. Feng, J. Li, W. Tan, L.-H. Du, J. Cai, Y. Ma, K. He, H. Ding, Z.-H. Zhai, Z.-R. Li, C.-W. Qiu, X.-C. Zhang, and L.-G. Zhu, “Ghost spintronic THz-emitter-array microscope,” Light: Sci. Appl. 9(1), 99 (2020). [CrossRef]  

12. J. S. Totero Gongora, L. Olivieri, L. Peters, J. Tunesi, V. Cecconi, A. Cutrona, R. Tucker, V. Kumar, A. Pasquazi, and M. Peccianti, “Route to Intelligent Imaging Reconstruction via Terahertz Nonlinear Ghost Imaging,” Micromachines 11(5), 521 (2020). [CrossRef]  

13. V. Kumar, V. Cecconi, L. Peters, J. Bertolotti, A. Pasquazi, J. S. Totero Gongora, and M. Peccianti, “Deterministic Terahertz Wave Control in Scattering Media,” ACS Photonics 9(8), 2634–2642 (2022). [CrossRef]  

14. R. I. Stantchev, D. B. Phillips, P. Hobson, S. M. Hornett, M. J. Padgett, and E. Hendry, “Compressed sensing with near-field THz radiation,” Optica 4(8), 989–992 (2017). [CrossRef]  

15. Y. Bromberg, O. Katz, and Y. Silberberg, “Ghost imaging with a single detector,” Phys. Rev. A 79(5), 053840 (2009). [CrossRef]  

16. J. H. Shapiro, “Computational ghost imaging,” Phys. Rev. A 78(6), 061802 (2008). [CrossRef]  

17. P. Zheng, Q. Tan, and H.-C. Liu, “Inverse computational ghost imaging for image encryption,” Opt. Express 29(14), 21290–21299 (2021). [CrossRef]  

18. S. Ma, Z. Liu, C. Wang, C. Hu, E. Li, W. Gong, Z. Tong, J. Wu, X. Shen, and S. Han, “Ghost imaging LiDAR via sparsity constraints using push-broom scanning,” Opt. Express 27(9), 13219–13228 (2019). [CrossRef]  

19. F. Rousset, N. Ducros, F. Peyrin, G. Valentini, C. D’andrea, and A. Farina, “Time-resolved multispectral imaging based on an adaptive single-pixel camera,” Opt. Express 26(8), 10550–10558 (2018). [CrossRef]  

20. L.-Y. Dou, D.-Z. Cao, L. Gao, and X.-B. Song, “Dark-field ghost imaging,” Opt. Express 28(25), 37167–37176 (2020). [CrossRef]  

21. M. P. Edgar, G. M. Gibson, and M. J. Padgett, “Principles and prospects for single-pixel imaging,” Nat. Photonics 13(1), 13–20 (2019). [CrossRef]  

22. H. Ren, S. Zhao, and J. Gruska, “Edge detection based on single-pixel imaging,” Opt. Express 26(5), 5501–5511 (2018). [CrossRef]  

23. C. Zhou, G. Wang, H. Huang, L. Song, and K. Xue, “Edge detection based on joint iteration ghost imaging,” Opt. Express 27(19), 27295–27307 (2019). [CrossRef]  

24. F. Ferri, D. Magatti, L. A. Lugiato, and A. Gatti, “Differential ghost imaging,” Phys. Rev. Lett. 104(25), 253603 (2010). [CrossRef]  

25. Z. Zhang, S. Liu, J. Peng, M. Yao, G. Zheng, and J. Zhong, “Simultaneous spatial, spectral, and 3D compressive imaging via efficient Fourier single-pixel measurements,” Optica 5(3), 315–319 (2018). [CrossRef]  

26. M. J. Sun, M. P. Edgar, G. M. Gibson, B. Sun, N. Radwell, R. Lamb, and M. J. Padgett, “Single-pixel three-dimensional imaging with time-based depth resolution,” Nat. Commun. 7(1), 12010 (2016). [CrossRef]  

27. D. Li, Z. Gao, and L. Bian, “Efficient large-scale single-pixel imaging,” Opt. Lett. 47(21), 5461–5464 (2022). [CrossRef]  

28. B.-L. Liu, Z.-H. Yang, X. Liu, and L.-A. Wu, “Coloured computational imaging with single-pixel detectors based on a 2D discrete cosine transform,” J. Mod. Opt. 64(3), 259–264 (2017). [CrossRef]  

29. M. Xi, H. Chen, Y. Yuan, G. Wang, Y. He, Y. Liang, J. Liu, H. Zheng, and Z. Xu, “Bi-frequency 3D ghost imaging with Haar wavelet transform,” Opt. Express 27(22), 32349–32359 (2019). [CrossRef]  

30. Z. Zhang, X. Li, S. Zheng, M. Yao, G. Zheng, and J. Zhong, “Image-free classification of fast-moving objects using “learned” structured illumination and single-pixel detection,” Opt. Express 28(9), 13269–13278 (2020). [CrossRef]  

31. Z. Wang, W. Zhao, A. Zhai, P. He, and D. Wang, “DQN based single-pixel imaging,” Opt. Express 29(10), 15463–15477 (2021). [CrossRef]  

32. B. Liu, F. Wang, C. Chen, F. Dong, and D. McGloin, “Self-evolving ghost imaging,” Optica 8(10), 1340–1349 (2021). [CrossRef]  

33. V. Cecconi, V. Kumar, A. Pasquazi, J. Totero Gongora, and M. Peccianti, “Nonlinear field-control of terahertz waves in random media for spatiotemporal focusing,” Open Res. Europe 2, 32 (2022). [CrossRef]  

34. I. M. Vellekoop and A. P. Mosk, “Focusing coherent light through opaque strongly scattering media,” Opt. Lett. 32(16), 2309–2311 (2007). [CrossRef]  

35. C. Zhang, S. Guo, J. Cao, J. Guan, and F. Gao, “Object reconstitution using pseudo-inverse for ghost imaging,” Opt. Express 22(24), 30063–30073 (2014). [CrossRef]  

36. X. Zhang, X. Meng, X. Yang, Y. Wang, Y. Yin, X. Li, X. Peng, W. He, G. Dong, and H. Chen, “Singular value decomposition ghost imaging,” Opt. Express 26(10), 12948–12958 (2018). [CrossRef]  

37. L.-Y. Chen, C. Wang, X.-Y. Xiao, C. Ren, D.-J. Zhang, Z. Li, and D.-Z. Cao, “Denoising in SVD-based ghost imaging,” Opt. Express 30(4), 6248–6257 (2022). [CrossRef]  

38. Z. Zhang, X. Wang, G. Zheng, and J. Zhong, “Fast Fourier single-pixel imaging via binary illumination,” Sci. Rep. 7(1), 12029 (2017). [CrossRef]  

39. N. Otsu, “A threshold selection method from gray-level histograms,” IEEE Trans. Syst. Man Cybern. 9(1), 62–66 (1979). [CrossRef]  

40. S. S. Welsh, M. P. Edgar, R. Bowman, B. Sun, and M. J. Padgett, “Near video-rate linear Stokes imaging with single-pixel detectors,” J. Opt. 17(2), 025705 (2015). [CrossRef]  

41. Z. Yu, X.-Q. Wang, C. Gao, Z. Li, H. Zhao, and Z. Yao, “Differential Hadamard ghost imaging via single-round detection,” Opt. Express 29(25), 41457–41466 (2021). [CrossRef]  

42. X.-F. Meng, L.-Z. Cai, X.-L. Yang, X.-X. Shen, and G.-Y. Dong, “Information security system by iterative multiple-phase retrieval and pixel random permutation,” Appl. Opt. 45(14), 3289–3297 (2006). [CrossRef]  

43. Y. Li, J. Shi, L. Sun, X. Wu, and G. Zeng, “Single-Pixel Salient Object Detection via Discrete Cosine Spectrum Acquisition and Deep Learning,” IEEE Photonics Technol. Lett. 32(21), 1381–1384 (2020). [CrossRef]  

44. H. Jiang, S. Zhu, H. Zhao, B. Xu, and X. Li, “Adaptive regional single-pixel imaging based on the Fourier slice theorem,” Opt. Express 25(13), 15118–15130 (2017). [CrossRef]  

45. S. Minaee, Y. Boykov, F. Porikli, A. Plaza, N. Kehtarnavaz, and D. Terzopoulos, “Image Segmentation Using Deep Learning: A Survey,” IEEE Trans. Pattern Anal. Mach. Intell. 44, 3523–3542 (2022).

46. L. Wang, G. Chen, D. Shi, Y. Chang, S. Chan, J. Pu, and X. Yang, “Active contours driven by edge entropy fitting energy for image segmentation,” Signal Process 149, 27–35 (2018). [CrossRef]  

47. Y. Boykov, O. Veksler, and R. Zabih, “Fast approximate energy minimization via graph cuts,” IEEE Trans. Pattern Anal. Mach. Intell. 23(11), 1222–1239 (2001). [CrossRef]  

48. A. Blake, P. Kohli, and C. Rother, Markov random fields for vision and image processing (MIT press, 2011).

49. D. B. Phillips, M.-J. Sun, J. M. Taylor, M. P. Edgar, S. M. Barnett, G. M. Gibson, and M. J. Padgett, “Adaptive foveated single-pixel imaging with dynamic supersampling,” Sci. Adv. 3(4), e1601782 (2017). [CrossRef]  

50. D. Wang, “Dual-mode adaptive-SVD ghost imaging,” Zenodo (2023), https://doi.org/10.5281/zenodo.7740867.




Equations (11)

$$\hat{O}(x,y)=\frac{1}{M}\sum_{i=1}^{M}\left(B_i-\bar{B}\right)I_i(x,y)=\Phi^{\mathrm{T}}\mathbf{B}/M-\bar{B}\,\bar{I}=\frac{1}{M}\begin{bmatrix} I_1(1,1) & I_2(1,1) & \cdots & I_M(1,1)\\ I_1(1,2) & I_2(1,2) & \cdots & I_M(1,2)\\ \vdots & \vdots & \ddots & \vdots\\ I_1(p,p) & I_2(p,p) & \cdots & I_M(p,p)\end{bmatrix}\begin{bmatrix} B_1\\ B_2\\ \vdots\\ B_M\end{bmatrix}-\bar{B}\begin{bmatrix} \bar{I}(1,1)\\ \bar{I}(1,2)\\ \vdots\\ \bar{I}(p,p)\end{bmatrix},\tag{1}$$

$$\mathbf{B}=\Phi\,{\left[O(1,1)\;\;O(1,2)\;\;\cdots\;\;O(p,p)\right]}^{\mathrm{T}}.\tag{2}$$

$$\hat{O}(x,y)=\frac{1}{M}\Phi^{\mathrm{T}}\Phi\,O.\tag{3}$$

$$\Phi_{\mathrm{svd}}=U\,{\left[\,E_{M\times M}\;\;0\,\right]}_{M\times N}\,V^{\mathrm{T}},\tag{4}$$

$$\sigma_B^2(k)=\frac{{\left[\mu_T\,\omega(k)-\mu(k)\right]}^2}{\omega(k)\left[1-\omega(k)\right]},\tag{5}$$

$$P_O=c_1P_P-c_2P_A=c_1\left(P_O-\min(P_O)\right)/\left(\max(P_O)-\min(P_O)\right)-c_2E,\tag{6}$$

$$M=M_1+M_2+2,\tag{7}$$

$$M_1=\frac{N}{n}\times\frac{N}{n}.\tag{8}$$

$$M_2=n\times n\times N_S.\tag{9}$$

$$M={\left(\frac{N}{n}\right)}^2+n^2N_S+2\ge 2N\sqrt{N_S}+2\approx 2N\sqrt{N_S}.\tag{10}$$

$$\eta=\frac{M}{N^2}=\frac{1}{n^2}+\frac{n^2N_S}{N^2}+\frac{2}{N^2}\ge\frac{2\sqrt{N_S}}{N}+\frac{2}{N^2}\approx\frac{2\sqrt{N_S}}{N}.\tag{11}$$