Sparse Fourier single-pixel imaging

Abstract

Fourier single-pixel imaging is one of the main single-pixel imaging techniques. To improve imaging efficiency, recent methods typically select the low-frequency components and discard the high-frequency information to reduce the number of acquired samples. However, sampling only a small number of low-frequency components leads to the loss of object details and reduces the imaging resolution. At the same time, the ringing effect in the restored image due to frequency truncation is significant. In this paper, a new sparse Fourier single-pixel imaging method is proposed that reduces the number of samples while improving image quality. The proposed method exploits the characteristics of the Fourier spectrum distribution, whereby the power of image information decreases gradually from low to high frequencies in Fourier space. A variable-density random sampling matrix is employed to achieve random sampling with Fourier single-pixel imaging technology, and the sparse Fourier spectra are then processed with compressive sensing algorithms to recover high-quality information about the object. The new algorithm can effectively improve the quality of object restoration compared with existing Fourier single-pixel imaging methods that acquire only the low-frequency components. Additionally, when the resolution of the system is diffraction limited, super-resolution imaging can also be achieved. Experimental results demonstrate both the correctness and the effectiveness of the proposed method.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Single-pixel imaging [1–6] uses a series of spatially distributed light patterns to illuminate an object, and then uses a non-scanning detector to acquire the reflected light intensity from the object. The image of the object can be recovered by exploiting the correlation between the recorded light intensities and the spatially distributed illumination patterns. Single-pixel imaging has been applied in many fields, such as gas imaging, three-dimensional imaging, and other imaging modalities [2–6]. The spatial distribution of the illuminating light and the restoration algorithm are two key system factors; their properties determine the imaging quality and imaging speed of the system.

Single-pixel imaging systems initially used illuminating light with a random distribution of intensity and a simple correlation algorithm to acquire an image of the object. However, such an implementation requires a large number of spatially distributed light patterns and a vast number of samples, yet the quality of the restored image is still not high. For example, even when Sun et al. [7] used 10⁶ illumination patterns and a correlation algorithm to acquire the image of an object with an acquisition matrix of 200×200 pixels, the image quality was still not sufficiently high compared with images obtained by traditional imaging techniques. Following research developments in the field, compressed sensing (CS) technology was introduced into the system to restore object image information [8]. Accordingly, it became possible to use fewer random illumination patterns to obtain higher-quality images than those produced by the correlation algorithm. Nevertheless, the image quality was still not sufficiently high.

Recently, illumination with orthogonal Hadamard and Fourier distributions [9–15] has attracted the attention of researchers and has been extensively used in single-pixel imaging systems. Owing to the orthogonality of the illumination patterns, the corresponding inverse transform can be used to acquire the image of the object, which has led to tremendous progress in imaging quality: the quality of the acquired image is now not much different from that obtained with conventional imaging. Fourier single-pixel imaging is one of the main single-pixel imaging techniques. It employs the Fourier transform properties, whereby the Fourier space of the object can be acquired using illumination light with a Fourier distribution; the inverse Fourier transform is then applied to obtain the image. Due to the limited modulation rates of existing optical modulators [12,15], real-time imaging cannot be performed when the number of image pixels is large and the object spectrum is fully sampled. When the number of required patterns becomes very large for high-resolution imaging, the time consumption of the method becomes unacceptably high, which severely limits the application of Fourier single-pixel imaging. Therefore, previous papers [12–15] exploited the fact that the energy of the Fourier spectrum is mostly concentrated in the low-frequency components. Accordingly, only the low-frequency part of the Fourier spectrum is acquired to reduce the number of samples, and real-time Fourier single-pixel imaging is thus realized. However, sampling only a small number of low-frequency components causes a loss of many object details and reduces the imaging resolution. At the same time, the ringing effect in the restored image owing to frequency truncation becomes significant. Correspondingly, the problem that still needs to be solved is the development of a method that reduces the number of structured illumination patterns while maintaining image quality. According to CS theory, the Fourier space of most objects is sparse, and the spectral components can be sparsely sampled to recover high-quality object information [16]. In this paper, a new sparse Fourier single-pixel imaging method is proposed, which consists of two parts: first, random spectral information of the object is obtained using Fourier single-pixel imaging technology; second, the sparse Fourier spectrum is processed with a CS algorithm [16] to recover high-quality information about the object. To date, several imaging systems with matrix detectors have been built and have experimentally confirmed the effectiveness of the CS approach [17–22]. Our algorithm uses the characteristics of the Fourier spectrum distribution, and a variable-density random sampling matrix is employed to acquire the spectrum randomly. For the same number of samples, our algorithm can effectively improve the quality of object restoration compared with the existing Fourier single-pixel imaging methods [12–15], which acquire only the low-frequency part of Fourier space. Additionally, when the resolution of the system is diffraction limited, super-resolution imaging can also be achieved with our method.

This paper is divided into four parts. The first is this introduction. The second part presents and analyzes the principle of the method. The third part verifies the proposed method experimentally. The last part summarizes the entire study.

2. Principles and methods

This section is divided into two parts that present and analyze the theory on which the algorithm is developed.

2.1 Fourier single-pixel imaging

The object t(x, y) is illuminated with speckle patterns Bθ(x, y) that have specific Fourier spatial distributions, and the reflected intensity I from the object is collected using a single-pixel detector. This process can be expressed mathematically as follows:

$${\textrm{I}_\theta }({f_x},{f_y}) = \sum\limits_{\textrm{x,y}} {{\textrm{B}_\theta }(x,y;\,{f_x},{f_y}) \bullet t(x,y)} ,$$
where Σ represents the process of accumulating the reflected light of an object to obtain the total intensity of the reflected light, x and y represent the spatial coordinates, fx and fy represent the spectral parameters, and θ represents the phase parameter. The illumination speckle Bθ is represented by a cosine distribution, which is expressed as follows:
$${B_\theta }(x,y;{f_x},{f_y}) = a + b\cos (2\pi {f_x}x/M + 2\pi {f_y}y/N + \theta ),$$
where a is a constant, b is the intensity modulation coefficient, and M and N are the dimensions of the image. When four-step Fourier spectrum acquisition methods are used, the values of θ are equal to 0, π/2, π, and 3π/2, respectively. At this time, the obtained intensity values are expressed as follows:
$$\begin{array}{{c}} {{\textrm{I}_\textrm{0}}({f_x},{f_y}) = \sum\limits_{\textrm{x,y}} {\textrm{[}a + b\cos (2\pi {f_x}x/M + 2\pi {f_y}y/N + \textrm{0})] \bullet t(x,y)} }\\ {{\textrm{I}_{\pi \textrm{/2}}}({f_x},{f_y}) = \sum\limits_{\textrm{x,y}} {\textrm{[}a + b\cos (2\pi {f_x}x/M + 2\pi {f_y}y/N + \pi \textrm{/2})] \bullet t(x,y)} }\\ {{\textrm{I}_\pi }({f_x},{f_y}) = \sum\limits_{\textrm{x,y}} {\textrm{[}a + b\cos (2\pi {f_x}x/M + 2\pi {f_y}y/N + \pi )] \bullet t(x,y)} }\\ {{\textrm{I}_{\textrm{3}\pi \textrm{/2}}}({f_x},{f_y}) = \sum\limits_{\textrm{x,y}} {\textrm{[}a + b\cos (2\pi {f_x}x/M + 2\pi {f_y}y/N + \textrm{3}\pi \textrm{/2})] \bullet t(x,y)} } \end{array},$$
The four values obtained based on Eq. (3) are used to estimate the Fourier spectrum, which can be expressed with the following formula.
$$T({f_x},{f_y}) = \frac{1}{{2b}}\{ [{\textrm{I}_\pi }({f_x},{f_y}) - {\textrm{I}_0}({f_x},{f_y})] + j[{\textrm{I}_{3\pi /2}}({f_x},{f_y}) - {\textrm{I}_{\pi /2}}({f_x},{f_y})]\} ,$$
where T is the Fourier spectrum of the object t. When the frequency spectrum of the object is obtained, the spatial distribution of the object can be obtained by applying the corresponding inverse transformation.
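
To make the four-step acquisition concrete, the following is a minimal numerical sketch of Eqs. (1)–(4); the function name `fourier_coefficient`, the default values a = b = 0.5, and the assumption of a noiseless detector are illustrative choices, not part of the original system.

```python
import numpy as np

def fourier_coefficient(t, fx, fy, a=0.5, b=0.5):
    """Sketch of four-step Fourier single-pixel acquisition of one spectral
    coefficient T(fx, fy) of the object image t (Eqs. (1)-(4)), assuming an
    ideal, noiseless single-pixel detector."""
    M, N = t.shape
    x = np.arange(M)[:, None]   # spatial coordinate along the first dimension
    y = np.arange(N)[None, :]   # spatial coordinate along the second dimension
    phase = 2 * np.pi * fx * x / M + 2 * np.pi * fy * y / N
    # Eqs. (2)/(3): project four phase-shifted cosine patterns and record the
    # total reflected intensity for each one.
    I0, I1, I2, I3 = [np.sum((a + b * np.cos(phase + theta)) * t)
                      for theta in (0.0, np.pi / 2, np.pi, 3 * np.pi / 2)]
    # Eq. (4): combine the four intensities into one complex Fourier coefficient.
    return ((I2 - I0) + 1j * (I3 - I1)) / (2 * b)
```

Sweeping fx and fy over half of the spectral plane and filling the remainder by conjugate symmetry yields the full spectrum, which can then be inverted with an inverse Fourier transform.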

In theory, the number of projection samples needed to obtain one complex value of the spectrum is equal to four [4]. Because of the complex conjugate symmetry of the Fourier spectrum, the number of illumination patterns (and thus samples) required to obtain the full spectrum of an M×N-pixel image is 2×M×N. It is known that the spectral energy of most objects is concentrated in the low-frequency part. To reduce the number of system samples, the authors of [12–15] collected only a specific low-frequency part to improve the imaging speed. However, such processing reduces the resolution of the acquired image. Zhang et al. [11] studied three low-frequency acquisition schemes, based on square, diamond, and circular sampling patterns. The results showed that circular acquisition of the low frequencies can yield high-quality images. An example is shown in Fig. 1. In this case, only 20% of the object spectrum (its low-frequency part) was acquired with circular sampling, and the object image was then obtained by inverse Fourier transformation. Because only the low-frequency part was acquired, considerable detail is lost, as evidenced by the reconstructed image. Therefore, it is necessary to study new sampling methods that reduce the number of samples while maintaining high restoration quality.
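
As an illustrative sketch of this low-frequency truncation (not the authors' exact procedure), the following applies a circular mask covering roughly 20% of the spectrum of a synthetic binary fringe pattern and reconstructs by inverse FFT; the 128×128 image size and the fringe period are arbitrary assumptions.

```python
import numpy as np

M = N = 128                                                   # assumed image size
t = np.tile(((np.arange(N) // 8) % 2).astype(float), (M, 1))  # binary fringe object

T = np.fft.fftshift(np.fft.fft2(t))                           # centered spectrum

# Circular low-pass mask whose area is ~20% of all spectral points.
u, v = np.meshgrid(np.arange(M) - M // 2, np.arange(N) - N // 2, indexing="ij")
radius = np.sqrt(0.20 * M * N / np.pi)
mask = (u ** 2 + v ** 2) <= radius ** 2
print("sampled fraction:", mask.mean())                       # ~0.20

# Discarding the high frequencies and inverting blurs edges and causes ringing.
t_low = np.real(np.fft.ifft2(np.fft.ifftshift(T * mask)))
```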

Fig. 1. Low-frequency sampling: (A) and (B) represent the original image and its spectrum, (C) and (D) represent the low-frequency sampling spectrum and the restored image.

2.2 Sparse sampling CS restoration

Because the spectra of most object images are sparse, high-quality object images can be acquired by randomly sampling the spectral information [16–22]. The process of recovering high-quality object information from randomly sampled spectral information can be formulated as the following optimization problem:

$$\arg \min \textrm{ }||{Ft - T^{\prime}} ||_2^2 + {\lambda _1}{||t ||_1} + {\lambda _2}TV(t),$$
where TV is the total variation operator, ‖·‖1 and ‖·‖2 denote the ℓ1 and ℓ2 norms, and λ1 and λ2 are the respective regularization parameters. Here, t is the object image to be reconstructed, rearranged into a vector of size MN×1. The matrix T′ contains the acquired undersampled spectral data and has size K×1. The matrix F is the undersampled Fourier transform and has size K×MN. The elements of each row of F are obtained by rearranging the elements of the corresponding cosine distribution matrix Bs. For example, if the s-th element of T′ is the value measured at frequencies fx and fy, then the s-th row of F is obtained by rearranging Bs, which is expressed as follows.
$$\begin{array}{l} {B_\textrm{s}} = [{B_\pi }(x,y;{f_x},{f_y}) - {B_\textrm{0}}(x,y;{f_x},{f_y})]\\ \textrm{ } + j[{B_{3\pi /2}}(x,y;{f_x},{f_y}) - {B_{\pi \textrm{/2}}}(x,y;{f_x},{f_y})]. \end{array}$$
Equation (5) is the optimization equation of a standard CS algorithm [16].
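
A minimal reconstruction sketch in the spirit of Eq. (5) is given below. It uses iterative soft-thresholding (ISTA) with only the ℓ1 term; the omission of the TV term, the choice of solver, the step size, and the regularization weight are simplifications and assumptions, not the exact algorithm of [16] used by the authors. The input `T_meas` is assumed to be the centered, orthonormally scaled spectrum with unmeasured entries set to zero.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||x||_1, applied element-wise."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def cs_reconstruct(T_meas, mask, lam=1e-3, n_iter=200):
    """ISTA-style sketch of Eq. (5) without the TV term.
    T_meas : centered, orthonormally scaled spectrum (unmeasured entries zero).
    mask   : boolean sampling matrix (True where a coefficient was measured)."""
    t = np.zeros(mask.shape)
    step = 1.0            # the masked orthonormal FFT has Lipschitz constant 1
    for _ in range(n_iter):
        # Gradient (up to a constant factor) of the data term ||F t - T'||_2^2,
        # restricted to the sampled spectral points.
        T_est = np.fft.fftshift(np.fft.fft2(t, norm="ortho"))
        resid = (T_est - T_meas) * mask
        grad = np.real(np.fft.ifft2(np.fft.ifftshift(resid), norm="ortho"))
        # Gradient step on the data term, then soft-thresholding for the l1 term.
        t = soft_threshold(t - step * grad, step * lam)
    return t
```

Adding the TV term of Eq. (5) would replace the thresholding step with a proximal operator that also penalizes image gradients.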

In this study, three different types of frequency sampling operators were studied.

The first one sampled the entire set of spectral components with equal probability. The sampling points are thus distributed equally throughout the entire spectral domain.

The second scheme selected spectral points with a random sampling scheme of variable sampling density, based on the high-frequency sparseness and low-frequency concentration of the object spectrum. The variable-density random sampling matrix has a high sampling density at low frequencies and a low sampling density at high frequencies. We shift the zero-frequency component to the center of the spectrum, and the probability that a point at radius r is sampled is expressed as follows:

$$\rho (r) = \left\{ {\begin{array}{{ll}} 1&{r \le R}\\ {{{(1 - r)}^p}}&{r > R} \end{array}} \right.$$
where r is the distance from the sampling point to the center of the spectrum, normalized to the range (0, 1), R is the normalized radius of the fully sampled low-frequency region, and p is the polynomial exponent.

The third one used circular sampling with the center of the spectral domain as the center point of the sampling scheme. Accordingly, only the low-frequency part of Fourier space was acquired. This method was used in existing papers [9–15]. This form generally arises in Fourier single-pixel imaging systems that are diffraction limited, whereby the spectrum obtained by the system is limited to a low-frequency circular region [11].

Fourier single-pixel imaging can acquire the real and imaginary parts of the spectrum separately. Owing to the complex conjugate symmetry of the Fourier spectrum, random sampling points in the first and second quadrants of the rectangular coordinate system are measured by the Fourier single-pixel imaging system, while the sampling points in the third and fourth quadrants are filled in based on the conjugate symmetry of the acquired samples. Figure 2 shows the three sampling matrices at a sampling rate of 40%. The left graph is a random sampling matrix with equal probability. The middle figure is a random sampling matrix with variable density; the parameter R is equal to 0.05, the parameter p is set to two, and the sampling scheme is generated by a Monte Carlo algorithm. The figure on the right shows a circular sampling matrix. The white points, with unity values, represent the sampled points; the black points, with zero values, represent nonsampled points.
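
The following sketch generates such a variable-density mask by Monte Carlo sampling according to Eq. (7). Drawing exactly the target number of points with probabilities proportional to ρ(r), the parameter names `rate` and `seed`, and the way conjugate symmetry is enforced are implementation assumptions rather than the authors' exact procedure.

```python
import numpy as np

def variable_density_mask(M, N, R=0.05, p=2, rate=0.40, seed=0):
    """Monte Carlo sketch of the variable-density sampling matrix of Eq. (7)."""
    rng = np.random.default_rng(seed)
    u, v = np.meshgrid(np.arange(M) - M // 2, np.arange(N) - N // 2, indexing="ij")
    r = np.sqrt(u.astype(float) ** 2 + v ** 2)
    r /= r.max()                                  # normalize r into (0, 1)
    rho = np.where(r <= R, 1.0, (1.0 - r) ** p)   # sampling density of Eq. (7)
    # Draw the target number of points with probabilities proportional to rho.
    K = int(rate * M * N)
    idx = rng.choice(M * N, size=K, replace=False, p=rho.ravel() / rho.sum())
    mask = np.zeros(M * N, dtype=bool)
    mask[idx] = True
    mask = mask.reshape(M, N)
    # Enforce complex-conjugate symmetry: mirror every sampled point about the
    # spectrum center (np.roll compensates for the even-length flip offset).
    return mask | np.roll(mask[::-1, ::-1], (1, 1), axis=(0, 1))
```

Note that the symmetrization step can raise the realized sampling rate slightly above `rate`.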

Fig. 2. Sampling schemes. (Left) Equal probability random sampling scheme, (middle) variable-density random sampling scheme, and (right) circular sampling scheme.

Three types of sampling schemes F have been used to sample the Fourier spectrum of the objects, and a compressed sensing algorithm was then used to restore the object information. Experiments and their results are described next.

3. Experiments

Because of the conjugate symmetry of the object spectrum, when a Fourier single-pixel imaging system is used to sample the Fourier space, only the first and second quadrants of the Cartesian coordinate system are sampled, and the third and fourth quadrants are automatically complemented based on the conjugate symmetry. In the experiments, the root-mean-square error (RMSE) [12] is used to evaluate the quality of the restored images. The relevant equation is expressed as follows:

$$RMSE = \sqrt {\frac{1}{{M \times N}}\sum\limits_{x,y = 1}^{M,N} {{{[{t_r}(x,y) - {t_o}(x,y)]}^2}} } ,$$
where tr(x,y) and to(x,y) are the grayscale values of the (x,y)-th pixel in the reconstructed and original images, respectively. All images are normalized to unity. The smaller the RMSE is, the better the recovered quality is.
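
A direct implementation of Eq. (8) is straightforward; the function name `rmse` and the assumption that both images are already normalized to [0, 1] are made here for illustration.

```python
import numpy as np

def rmse(t_rec, t_orig):
    """Root-mean-square error of Eq. (8); both inputs are assumed to be images
    of identical size, already normalized to the range [0, 1]."""
    diff = np.asarray(t_rec, dtype=float) - np.asarray(t_orig, dtype=float)
    return np.sqrt(np.mean(diff ** 2))
```

Applied, for instance, to the truncation sketch shown earlier as rmse(t_low, t), a returned value of 0.05 corresponds to an RMSE of 5%.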

3.1 Computational simulation

In the first group of experiments, the simulations were carried out with a binary resolution fringe plate. Results for sampling rates (SR) of 40%, 30%, 20%, and 10% were compared and studied. The results are shown in Fig. 3. Low-resolution sampling means that only the selected low-frequency part was sampled with circular sampling, and the image information was then obtained with the inverse Fourier transform. Uniform density random undersampling means that the spectral information was sampled with random, equal-density sampling, and variable density random undersampling means that the spectrum was sampled with variable-density, nonuniform random sampling. Zero-fill indicates that the unsampled spectrum was filled with zeros and the image information was obtained with the inverse Fourier transform. CS represents the results of restoring the object image information using a compressed sensing algorithm. The image in the figure represents the restored image, while the mask represents the form of the sampling matrix in the frequency domain. The points colored in white represent the sampled points, and the points colored in black represent the points that were not sampled. Taking the 40% sampling rate as an example, the number of sampling points is the same (6554) in all restoration scenarios, even though the distributions of the sampling masks differ. Based on the simulation results, it can be observed that the image restored by the inverse Fourier transform after filling the unsampled random spectrum with zeros is of poor quality, whereas the image quality obtained by processing the random spectra with the CS algorithm is significantly improved. At the same time, the image quality obtained by variable density random sampling and CS restoration is the best. With low-frequency sampling and direct restoration, various image details are lost and the edges become blurred; with the sampling strategy and CS reconstruction algorithm proposed in this paper, the originally lost image details are recovered and the edges become clear. The RMSE values were calculated, and the results are shown in Table 1. It can also be observed that the image quality obtained by variable density random sampling and CS restoration is the best. The RMSE errors reported include low-resolution sampling (LR–RMSE), uniform density random undersampling with zero-filling (UZF–RMSE), uniform density random undersampling with CS (UCS–RMSE), variable density random undersampling with zero-filling (VZF–RMSE), and variable density random undersampling with CS (VCS–RMSE).

Fig. 3. Restoration results of the binary fringe. Restoration results following low-resolution sampling exhibit edge expansion and details loss. Noted is the presence of noise in the zero-filled restoration results, while the intensity of noise for uniform sampling is larger than that of variable density sampling. CS restoration results yield high-quality outcomes, and a large number of image details can be presented. At the same time, the quality of the CS restoration results with variable density sampling is better than those of other methods.

Table 1. RMSE of restoration results of binary stripe resolution plate

The second group of experiments used the grayscale "cameraman" photographic image for simulation. Similarly, results obtained with sampling rates (SR) of 40%, 30%, 20%, and 10% were compared and studied. The settings in this simulation were consistent with those of the previous group, and the results are shown in Fig. 4. It can be observed that there is considerable noise interference in the image restored by the inverse Fourier transform after filling the unsampled spectral space with zeros, and the noise interference is more pronounced for the uniformly sampled image. The CS algorithm improves the quality of the image obtained by processing the random spectrum. At the same time, the image quality obtained by variable density random sampling and CS restoration is the best. Similarly, in the case of low-frequency sampling and direct restoration, a large number of image details are lost and the edges become blurred, whereas the algorithm proposed in this paper restores a large number of the lost image details and yields well-defined edges. Compared with the previous binary image, the quality of the restored image degrades at the same sampling rate, but the image restored with the CS algorithm is still better than that obtained by direct low-frequency sampling. The RMSE values of the obtained results were calculated and are listed in Table 2. It can also be observed that, for the same number of samples, the image quality obtained by variable density random sampling and CS restoration is better than those of the other methods.

Fig. 4. Grayscale image restoration results. Restoration results following low-resolution sampling exhibit edge expansion and details loss. There is considerable noise in the zero-filled restored results, and the noise of uniform sampling has a larger intensity than that associated with variable density sampling. CS restoration results are of high quality, and a large number of image details can be observed. At the same time, the quality of the CS restoration results with variable density sampling is better than those of other methods.

Table 2. RMSE of cameraman image restoration

The third group of experiments considered the existence of the diffraction limit of the system. In this case, the system has a low-pass characteristic and a specific cut-off frequency, denoted as f0 in the low-resolution spectrum of Fig. 5. The spectrum acquired by the system is thus limited to the central circular area shown in the low-resolution spectrum of Fig. 5, and the images obtained by the system are shown as the low-resolution images in Fig. 5. It can be observed that, owing to the diffraction limit, the details of the image are lost and the edges are blurred. The spectral values obtained from this circular region were restored by the CS algorithm, and the detailed results are shown in the CS-restored image. According to the results, the edges of the image become clearer after the use of the CS algorithm, some of the details that were lost in the low-resolution image are recovered, and the ringing effect is suppressed effectively. The results also show that the proposed algorithm can effectively restore spectral information beyond the cut-off frequency; therefore, it has the ability to perform super-resolution imaging beyond the diffraction limit. In addition, quantitative analyses were carried out. The RMSE of the image reconstructed under the diffraction limit was 24.03%, while the RMSE of the image obtained by the proposed algorithm was 0.87%. These results show that the proposed algorithm can effectively improve the image quality and transcend the limitation of the diffraction limit.

Fig. 5. Restoration results based on the diffraction limit. The label “Low-resolution” represents the image and spectral information subject to the diffraction limit. The label “CS restored image” represents the spatial and spectral information that is restored by CS. The label “Original” represents the original image and its spectral information.

3.2 Experimental data processing

In [12], we used Fourier single-pixel imaging to obtain the object spectrum and image information. The experimental setup is shown in Fig. 6; detailed parameter settings can be found in [12]. In this paper, we use the static experimental results for processing and analysis, and use the 100% spectrum restoration result as the original image to calculate the RMSE values. Similarly, we use the results associated with different SR values (40%, 30%, 20%, and 10%) for analysis. The outcomes are shown in Fig. 7. Similar to the above results, the image acquired by inverse Fourier transformation of the randomly sampled spectrum is affected by noise, and the noise associated with uniform random sampling is much larger in intensity than that associated with variable density random sampling. According to these results, the image quality obtained by variable density nonuniform sampling and CS restoration is clearly higher than those of the other methods. The local area of the restored image (the selected area is indicated by the red box in the original image) is enlarged, and the results are shown in Fig. 8. It can be observed that the image details and signal-to-noise ratio obtained by the variable density compressed sensing algorithm are significantly improved. Considering a sampling rate of 10% as an example, the digits in the image recovered by low-frequency sampling can no longer be identified, whereas they can be identified in the image recovered by variable density random sampling and the CS algorithm. The results are similar to the simulation results above, which proves that the proposed algorithm is also effective on experimental data. In addition, we use the RMSE to quantitatively analyze the obtained results. As shown in Table 3, the quantitative results prove the effectiveness of the proposed algorithm.

Fig. 6. Experimental setup.

Fig. 7. Restoration outcomes. Restoration results following low-resolution sampling exhibit loss of details. Extensive noise is observed in the zero-filled restoration results, and the noise of uniform sampling is larger than that of variable density sampling. The quality of the CS restoration results obtained from variable density sampling is higher than those of other methods. Considering a sampling rate of 10% as an example, the reconstructed results of CS with variable density sampling allow identification of the digits that are non-recognizable in the other results. The red box indicates the selected image enlargement area.

Fig. 8. Magnified views of experimental results.

Table 3. RMSE of experimental restoration results

Assuming that the system is limited by the diffraction limit, the highest frequency that can be obtained is f0 (identified in the spectral domain of Fig. 9). Figure 9 shows the corresponding results. It can be observed that, owing to the diffraction limit, the edges of the image obtained by direct detection degrade and the details are lost. At the same time, owing to the influence of frequency truncation, Gibbs ringing appears in the image. After the image is restored with CS, the edges of the image become clearer, additional details are presented, and the ringing effect in the image is suppressed. The frequency-domain results also confirm this conclusion: the high-frequency information beyond the cut-off frequency of the image spectrum can be restored with the proposed algorithm, so a part of the detailed information lost in the low-resolution image can be recovered. Two local areas of the image, annotated with red and green borders, are selected for further illustration. The third row of Fig. 9 shows the two corresponding groups of three magnified views; from left to right are the low-resolution image, the image restored by the proposed algorithm, and the original image. It can be clearly observed from the local images that the proposed algorithm can effectively overcome the limitation of the diffraction limit of the system and obtain high-resolution image information. The RMSE values of the low-resolution image and the image acquired by the proposed algorithm were also calculated and were 5.69% and 0.76%, respectively. These quantitative results also prove the effectiveness of the proposed algorithm.

Fig. 9. Restoration outcomes based on the diffraction limit. The “Low-resolution” label represents the spatial and spectral information of the image subject to the diffraction limit. The “CS restored” label represents the spatial and spectral information of the image restored by CS. The “Original” label represents the spatial and spectral information of the original image. The third row enlarges the corresponding local areas of the three groups of images.

4. Discussion and conclusions

According to the sparsity of the spectral components in Fourier single-pixel imaging systems, random sparse sampling and CS algorithms can be used to restore object image information. Three sampling schemes were used to study the resulting image quality. The results showed that the image quality obtained by variable density random sampling and compressed sensing was the highest. Variable density random sampling is consistent with the property that the power of image information decreases gradually from low to high frequencies in the Fourier spectrum: the degree of information sparsity in the low-frequency part is small, so the sampling density there is high, while the degree of information sparsity in the high-frequency part is large, so the sampling density there is low. Compared with the existing Fourier single-pixel imaging methods, in which only low-frequency information is selected and high-frequency information is discarded, this algorithm can obtain higher-quality image information using the same number of samples. At the same time, in view of the diffraction limit, it can effectively recover high-frequency information beyond the cut-off frequency to obtain high-definition and high-resolution images. The validity of the proposed algorithm was verified by simulations and experiments. The main limitation of this algorithm is that the CS reconstruction requires a large number of iterations and increases the computational complexity. In future studies, optimized algorithms or dedicated computational hardware will be explored to reduce the computational complexity, improve computational efficiency, and find a balance between imaging quality and imaging efficiency.

Funding

National Natural Science Foundation of China (11404344, 41505019 and 41475001); State Key Laboratory of Pulsed Power Laser Technology (SKL2018ZR10).

References

1. R. S. Aspden, N. R. Gemmell, P. A. Morris, D. S. Tasca, L. Mertens, M. G. Tanner, R. A. Kirkwood, A. Ruggeri, A. Tosi, R. W. Boyd, G. S. Buller, R. H. Hadfield, and M. J. Padgett, “Photon-sparse microscopy: visible light imaging using infrared illumination,” Optica 2(12), 1049–1052 (2015). [CrossRef]  

2. N. Huynh, E. Zhang, M. Betcke, S. Arridge, P. Beard, and B. Cox, “Single-pixel optical camera for video rate ultrasonic imaging,” Optica 3(1), 26–29 (2016). [CrossRef]  

3. L. Olivieri, J. S. Totero Gongora, A. Pasquazi, and M. Peccianti, “Time-Resolved Nonlinear Ghost Imaging,” ACS Photonics 5(8), 3379–3388 (2018). [CrossRef]  

4. R. I. Stantchev, B. Q. Sun, S. M. Hornett, P. A. Hobson, G. M. Gibson, M. J. Padgett, and E. Hendry, “Noninvasive, near-field terahertz imaging of hidden objects using a single-pixel detector,” Sci. Adv. 2(6), e1600190 (2016). [CrossRef]  

5. M. J. Sun, M. P. Edgar, G. M. Gibson, B. Sun, N. Radwell, R. Lamb, and M. J. Padgett, “Single-pixel three-dimensional imaging with time-based depth resolution,” Nat. Commun. 7(1), 12010 (2016). [CrossRef]  

6. G. M. Gibson, B. Q. Sun, M. P. Edgar, D. B. Phillips, N. Hempler, G. T. Maker, G. P. A. Malcolm, and M. J. Padgett, “Real-time imaging of methane gas leaks using a single-pixel camera,” Opt. Express 25(4), 2998–3005 (2017). [CrossRef]  

7. B. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, and M. J. Padgett, “3D computational imaging with single-pixel detectors,” Science 340(6134), 844–847 (2013). [CrossRef]  

8. M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Signal Process. Mag. 25(2), 83–91 (2008). [CrossRef]  

9. M. P. Edgar, G. M. Gibson, R. W. Bowman, B. Sun, N. Radwell, K. J. Mitchell, S. S. Welsh, and M. J. Padgett, “Simultaneous real-time visible and infrared video with single-pixel detectors,” Sci. Rep. 5(1), 10669 (2015). [CrossRef]  

10. Z. Zhang, X. Ma, and J. Zhong, “Single-pixel imaging by means of Fourier spectrum acquisition,” Nat. Commun. 6(1), 6225 (2015). [CrossRef]  

11. Z. Zhang, X. Wang, G. Zheng, and J. Zhong, “Hadamard single-pixel imaging versus Fourier single-pixel imaging,” Opt. Express 25(16), 19619–19639 (2017). [CrossRef]  

12. J. Huang, D. Shi, K. Yuan, S. Hu, and Y. Wang, “Computational-weighted Fourier single-pixel imaging via binary illumination,” Opt. Express 26(13), 16547–16559 (2018). [CrossRef]  

13. L. Bian, J. Suo, X. Hu, F. Chen, and Q. Dai, “Efficient single pixel imaging in Fourier space,” J. Opt. 18(8), 085704 (2016). [CrossRef]  

14. Z. Zhang, S. Liu, J. Peng, M. Yao, G. Zheng, and J. Zhong, “Simultaneous spatial, spectrum, and 3D compressive imaging via efficient Fourier single-pixel measurements,” Optica 5(3), 315–319 (2018). [CrossRef]  

15. Z. Zhang, X. Wang, G. Zheng, and J. Zhong, “Fast Fourier single-pixel imaging via binary illumination,” Sci. Rep. 7(1), 12029 (2017). [CrossRef]  

16. E. J. Candes and M. B. Wakin, “An introduction to compressive sampling,” IEEE Signal Process. Mag. 25(2), 21–30 (2008). [CrossRef]  

17. Y. Hitomi, J. W. Gu, M. Gupta, T. Mitsunaga, and S. K. Nayar, “Video from a single coded exposure photograph using a learned over-complete dictionary,” in Proceedings of 2011 IEEE International Conference on Computer Vision (ICCV), (2011), pp. 287–294.

18. L. Llull, X. Liao, X. Yuan, J. Yang, D. Kittle, L. Carlin, G. Sapiro, and D. J. Brady, “Coded aperture compressive temporal imaging,” Opt. Express 21(9), 10526–10545 (2013). [CrossRef]  

19. L. Galvis, H. Arguello, and G. R. Arce, “Coded aperture design in mismatched compressive spectral imaging,” Appl. Opt. 54(33), 9875–9882 (2015). [CrossRef]  

20. X. Lin, G. Wetzstein, Y. B. Liu, and Q. H. Dai, “Dual-coded compressive hyperspectral imaging,” Opt. Lett. 39(7), 2044–2047 (2014). [CrossRef]  

21. R. Koller, L. Schmid, N. Matsuda, T. Niederberger, L. Spinoula, O. Cossairt, G. Schuster, and A. K. Katsaggelos, “High spatio-temporal resolution video with compressed sensing,” Opt. Express 23(12), 15992–16007 (2015). [CrossRef]  

22. H. Rueda, D. Lau, and G. R. Arce, “Multi-spectral compressive snapshot imaging using RGB image sensors,” Opt. Express 23(9), 12207–12221 (2015). [CrossRef]  
