Optica Publishing Group

Projector-defocusing rectification for Fourier single-pixel imaging

Open Access

Abstract

Fourier single-pixel imaging (FSI) is an efficient single-pixel imaging method for obtaining 2D images of high quality (resolution and signal-to-noise ratio). It projects sinusoidal patterns onto the object and reconstructs the image from the reflected light. The typical FSI system consists of a single-pixel detector and a digital projector. However, defocusing of the projector lens blurs the projected patterns, which reduces the imaging quality. In this work, we propose projector-defocusing rectification for FSI, which corrects projector defocusing for the first time. The proposed method rectifies the Fourier coefficients using the amplitude ratio between the original and defocused patterns, which we acquire through a controlled experiment on a whiteboard. The enhancement of imaging quality under imperfect conditions is demonstrated by simulations and experiments.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Traditional photographic techniques mainly adopt charge-coupled devices or complementary metal oxide semiconductors, both of which use multi-pixel arrays to obtain images. By contrast, the novel single-pixel imaging (SPI) uses only a single-pixel detector and is thus unrestricted by spatial resolution when reconstructing images. Therefore, SPI has been widely used in 3D [1–3], terahertz [4, 5], and multispectral [6] imaging.

SPI was first introduced in the “flying-spot” camera [7]; it used a perforated disk (a Nipkow disk) to modulate a light source into scanning light spots, but the resulting image exhibited a low signal-to-noise ratio (SNR). SPI was later developed into conventional ghost imaging [8], which is based on probabilistic mathematics but has low reliability. Indirect measurement, such as computational imaging [9], was subsequently introduced, meaning the detection unit does not need a direct line of sight to the object. For instance, Zhang et al. [10] proposed Fourier single-pixel imaging (FSI) to reconstruct images with indirect light. In their work, sinusoidal structured patterns were projected onto the object by a digital projector, and the reflected light was collected by a single-pixel detector to reconstruct the image through its Fourier spectrum. This method achieves higher-quality images than other methods. However, many sources of inaccuracy remain in the FSI scheme. For instance, projector defocusing degrades image quality during pattern projection, an important step of FSI.

In the pattern projection of FSI, the defocusing caused by the projector lens results in differences between the original and defocused patterns. The Fourier coefficients calculated from the reflected light are therefore inaccurate and cannot be used to reconstruct the original image. Li et al. [11] used binary pattern optimization to address the defocusing problem, but that method requires reaching a maximum iteration count and does not apply to SPI. Sun et al. [12] adopted digital microscanning to improve the SNR of SPI. In [13–16], defocusing was corrected using images at two different depths, which is inapplicable to SPI owing to its low efficiency. The projector defocusing that alters the projected fringes therefore remains unsolved.

In our previous work [17], we built a conventional SPI system and presented the adaptive regional single-pixel imaging (ARSI) method to improve imaging efficiency. In this study, we propose a projector-defocusing rectification (PDR) method that corrects projector defocusing in the frequency domain and can reconstruct high-quality images. PDR is based on the conventional FSI method and uses Fourier spectra to reconstruct images. To solve the defocusing problem, we first build a model of projector defocusing according to defocusing theory and its effect on four-step phase-shifting sinusoidal patterns. In this model, defocusing only changes the amplitude of the fringes for a given sinusoidal pattern [18]; thus, the amplitude ratio between the original and defocused patterns is fixed at a specified frequency. We then design a new algorithm that rectifies projector defocusing on the basis of the defocusing model and the FSI method. We prove that the measured Fourier coefficients are proportional to the pattern amplitude when the other conditions are fixed. Thus, the Fourier spectrum can be rectified by the amplitude ratio and then used to reconstruct high-quality images. The PDR method introduces a way of correcting defocusing and improves image quality effectively.

2. Principles

In conventional FSI, sinusoidal patterns are projected onto the object, and the reflected light is collected by a single-pixel detector. The image is then reconstructed by a computer through the inverse Fourier transform (IFT). Projector defocusing is a source of inaccuracy in this imaging process: it causes the patterns projected onto the object to be blurred, so the interaction between the object and the patterns no longer precisely represents the Fourier spectrum of the object. Therefore, we propose a novel SPI method based on defocusing rectification to improve image quality.

In this section, we first briefly introduce the conventional FSI method and its key formulas. A model of projector defocusing is then built according to defocusing theory and its effect on sinusoidal patterns. We finally design a rectification algorithm for projector defocusing on the basis of the defocusing model and the FSI method.

2.1 Fourier single-pixel imaging

In FSI, phase-shifting sinusoidal structured-light patterns are projected onto the scene, and the reflected light is collected by a single-pixel detector [10]. The final image is reconstructed with the fast IFT algorithm.

The schematic of the FSI is shown in Fig. 1.


Fig. 1 Schematic of FSI. The digital projector projects illuminated patterns onto the object, which is located 0.5 m away from the experimental system. The reflected light is collected by a lens and detected by a photodiode. An ADC transfers the detected signal to a computer. The image is reconstructed using the obtained data.


In [10], a 2D sinusoidal pattern is determined by its spatial frequency (fx, fy) and initial phase φ:

$$P_\phi(x,y;f_x,f_y)=B(f_x,f_y)+A(f_x,f_y)\cos(2\pi f_x x+2\pi f_y y+\phi), \tag{1}$$
where (x,y) represents the 2D Cartesian coordinates in the scene, B(fx, fy) denotes the DC term equal to the average pattern intensity at frequency (fx, fy), and A(fx, fy) is the amplitude of the pattern at frequency (fx, fy).
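As an illustration, the four-step phase-shifting patterns of Eq. (1) can be generated with a few lines of NumPy. This is a minimal sketch, not the authors' code; normalizing (fx, fy) to cycles per pattern is our assumption.

```python
import numpy as np

def sinusoid_pattern(fx, fy, phi, size=(64, 64), A=100.0, B=100.0):
    """Sinusoidal pattern of Eq. (1); (fx, fy) counts cycles across the
    pattern (an assumed normalization), A is the amplitude, B the DC term."""
    y, x = np.mgrid[0:size[0], 0:size[1]]
    return B + A * np.cos(2 * np.pi * fx * x / size[1]
                          + 2 * np.pi * fy * y / size[0] + phi)

# The four phase shifts used in four-step phase-shifting FSI.
patterns = [sinusoid_pattern(3, 2, phi)
            for phi in (0, np.pi / 2, np.pi, 3 * np.pi / 2)]
```

For integer frequencies the pattern average equals B, and patterns of opposite phases sum to 2B, which the four-step differencing below exploits.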

The total response of the single-pixel detector is

$$D_\phi(f_x,f_y)=D_n+k\iint_\Omega R(x,y)\,P_\phi(x,y;f_x,f_y)\,dx\,dy, \tag{2}$$
where Dn denotes the response to background illumination, k is a factor depending on the size and location of the detector, Ω is the illuminated area, and R(x,y) is the distribution of the surface reflectivity of the imaged objects in the scene.

After projecting the four phase-shifted patterns, i.e., P0, Pπ/2, Pπ, and P3π/2, we obtain the Fourier coefficients in the form of

$$F\{R(x,y)\}=\frac{1}{2bk}\left\{\left[D_0(f_x,f_y)-D_\pi(f_x,f_y)\right]+j\left[D_{\pi/2}(f_x,f_y)-D_{3\pi/2}(f_x,f_y)\right]\right\}, \tag{3}$$
where j denotes the imaginary unit, and F is the Fourier transform.
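The measurement loop of Eqs. (1)–(3) can be simulated end to end. The following is a minimal NumPy sketch under idealized assumptions (no background response Dn, unit detector factor k, and the constant 2bk taken as 2A); it measures every frequency for clarity, although by conjugate symmetry half would suffice:

```python
import numpy as np

def fsi_reconstruct(R, A=1.0, B=1.0):
    """Simulate conventional FSI: for each spatial frequency, project four
    phase-shifted sinusoids, record the bucket signal, form the Fourier
    coefficient per Eq. (3), and invert the assembled spectrum."""
    M, N = R.shape
    y, x = np.mgrid[0:M, 0:N]
    F = np.zeros((M, N), dtype=complex)
    for fy in range(M):
        for fx in range(N):
            arg = 2 * np.pi * (fx * x / N + fy * y / M)
            # Idealized single-pixel responses (D_n = 0, k = 1).
            D = [np.sum(R * (B + A * np.cos(arg + phi)))
                 for phi in (0, np.pi / 2, np.pi, 3 * np.pi / 2)]
            F[fy, fx] = ((D[0] - D[2]) + 1j * (D[1] - D[3])) / (2 * A)
    return np.real(np.fft.ifft2(F))
```

Under these assumptions the recovered spectrum equals the discrete Fourier transform of the reflectivity map, so the inverse FFT returns the scene exactly.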

In this process, the defocusing of the projector affects the accuracy of the patterns projected on the object, which results in an actual pattern P′(x,y;fx,fy) that differs from the designed one and, hence, an inaccurate Fourier spectrum F′{R(x,y)}. We build a defocusing model that establishes the relationship between F′{R(x,y)} and F{R(x,y)} to solve this issue.

2.2 Model of projector defocusing

The defocusing comes from the projector lens, as shown in Fig. 2. When the object is not placed on the focal plane of the projector, the projected pattern is defocused, as illustrated in Fig. 2(a). We establish the relationship between the original and defocused patterns using defocusing theory and its effect on sinusoidal patterns so that the defocusing can be correctly rectified.


Fig. 2 Projector defocusing. (a) Defocusing schematic. The light emitted by the object point is focused on the focal plane through the lens, and the image is defocused due to the noncoincidence of the image and focal planes. (b) PSF. The PSF is generally approximated by a circular Gaussian function as Eq. (5), where the standard deviation σ represents the defocusing level.


2.2.1 Defocusing theory

Given (fx, fy), the defocusing effect of the projector can be expressed as

$$P'(x,y;f_x,f_y)=G(x,y)\otimes P(x,y;f_x,f_y), \tag{4}$$
$$G(x,y)=\frac{1}{2\pi\sigma^2}\exp\!\left(-\frac{x^2+y^2}{2\sigma^2}\right), \tag{5}$$
where P(x,y;fx,fy) is the original sinusoidal fringe pattern, P′(x,y;fx,fy) is the defocused pattern, G(x,y) is the point spread function (PSF) [Fig. 2(b)], and ⊗ represents convolution.
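The effect of Eqs. (4) and (5) on a single fringe can be reproduced with a Gaussian blur. The following 1-D sketch is our own illustration, using `scipy.ndimage.gaussian_filter` with periodic boundaries so that the sinusoid remains a sinusoid:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

N, fx = 128, 10
A, B = 100.0, 100.0
x = np.arange(N)
P = B + A * np.cos(2 * np.pi * fx * x / N)           # 1-D slice of Eq. (1)
P_def = gaussian_filter(P, sigma=2.0, mode='wrap')   # Eq. (4), Gaussian PSF

# Defocusing preserves the mean but attenuates the fringe amplitude;
# the attenuation factor is the amplitude ratio alpha at this frequency.
alpha = (P_def.max() - P_def.min()) / (P.max() - P.min())
```

For a continuous Gaussian PSF the attenuation is exp(−2π²σ²f²) with f in cycles per pixel, so alpha is approximately 0.62 for these parameters.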

2.2.2 Defocusing effect on Fourier single-pixel imaging

According to [18], the pattern P(x,y) is sinusoidal; thus, the defocusing G(x,y) only reduces the amplitude of P(x,y). The attenuation varies significantly with the sinusoid frequency, i.e.,

$$A'(f_x,f_y)=\alpha(f_x,f_y)\,A(f_x,f_y), \tag{6}$$
where A(fx, fy) represents the amplitude of the original pattern at frequency (fx, fy), A′(fx, fy) is the amplitude of the defocused pattern at frequency (fx, fy), and α(fx, fy) denotes the amplitude ratio of the defocused pattern to the original pattern at frequency (fx, fy).

The average gray value of patterns does not change, which means

$$P'(x,y;f_x,f_y)-B(f_x,f_y)=\alpha(f_x,f_y)\left[P(x,y;f_x,f_y)-B(f_x,f_y)\right], \tag{7}$$
where B(fx, fy) represents the common average gray value of the original and defocused patterns at frequency (fx, fy).

The amplitude change due to defocusing is shown in Fig. 3. The amplitude of the defocused pattern [Fig. 3(b)] is lower than that of the original pattern [Fig. 3(a)]. Curves of the central row extracted from the two patterns are compared in Fig. 3(c).


Fig. 3 Amplitude change due to defocusing. (a) Original pattern. (b) Defocused pattern. (c) Curves of the same row in the two patterns. The blue curve represents the original pattern, and the orange curve represents the defocused pattern. After defocusing, the average value is maintained, whereas the pattern amplitude decreases.


With projector defocusing, the response of the single-pixel detector becomes

$$D'_\phi(f_x,f_y)=D_n+k\iint_\Omega R(x,y)\,P'_\phi(x,y;f_x,f_y)\,dx\,dy. \tag{8}$$

The inaccurate Fourier coefficient is

$$F'\{R(x,y)\}=\frac{1}{2bk}\left\{\left[D'_0(f_x,f_y)-D'_\pi(f_x,f_y)\right]+j\left[D'_{\pi/2}(f_x,f_y)-D'_{3\pi/2}(f_x,f_y)\right]\right\}=\frac{\alpha(f_x,f_y)}{2bk}\left\{\left[D_0(f_x,f_y)-D_\pi(f_x,f_y)\right]+j\left[D_{\pi/2}(f_x,f_y)-D_{3\pi/2}(f_x,f_y)\right]\right\}. \tag{9}$$

Combining Eqs. (3) and (9), we obtain

$$F'\{R(x,y)\}=\alpha(f_x,f_y)\,F\{R(x,y)\}. \tag{10}$$

We obtain a defocusing model based on sinusoidal fringes by using the above principles. The amplitude is the only quantity affected by defocusing. Hence, once we know the amplitude ratio α(fx, fy), we can rectify the defocusing and attain accurate Fourier coefficients.
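Equation (10) can be checked numerically for a single frequency: measuring with amplitude-attenuated patterns scales the recovered Fourier coefficient by exactly α. The sketch below uses the same idealized detector model as before (Dn = 0, k = 1, and the constant 2bk taken as 2A):

```python
import numpy as np

rng = np.random.default_rng(0)
M = N = 32
R = rng.random((M, N))                 # arbitrary reflectivity map
y, x = np.mgrid[0:M, 0:N]
fx, fy, A, B, alpha = 3, 5, 100.0, 100.0, 0.7
arg = 2 * np.pi * (fx * x / N + fy * y / M)

def coeff(amp):
    """Fourier coefficient from four phase-shifted measurements, Eq. (3)."""
    D = [np.sum(R * (B + amp * np.cos(arg + p)))
         for p in (0, np.pi / 2, np.pi, 3 * np.pi / 2)]
    return ((D[0] - D[2]) + 1j * (D[1] - D[3])) / (2 * A)

F_true = coeff(A)          # focused patterns
F_meas = coeff(alpha * A)  # defocused patterns: amplitude scaled by alpha
# F_meas equals alpha * F_true, as Eq. (10) predicts.
```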

2.3 Rectification method for projector defocusing

We propose the PDR method on the basis of the conventional FSI method and the projector-defocusing model. We have shown that the effect of defocusing on the Fourier coefficients is fully represented by the amplitude ratio between the original and defocused patterns. Thus, we rectify the defocusing by dividing the measured Fourier coefficients by the amplitude ratio, inverting Eq. (10).

2.3.1 Amplitude ratio

From Eq. (6), the amplitude ratio can be calculated as

$$\alpha(f_x,f_y)=\frac{A'(f_x,f_y)}{A(f_x,f_y)}. \tag{11}$$

The values of A(fx, fy) and B(fx, fy) of the original pattern P(fx, fy) are set manually when generating the patterns. The amplitude A′(fx, fy) of the defocused pattern P′(fx, fy) is therefore the only unknown required to compute the amplitude ratio α(fx, fy).

We adopt a pixelated camera to capture the projected pattern on a whiteboard as Pc(fx, fy). Considering the influence of light propagation between the projector and the camera, we conclude that

$$P_c(f_x,f_y)=r\,P'(f_x,f_y), \tag{12}$$
where r denotes the light propagation coefficient between the projector and the camera, including the whiteboard reflectivity.

Thus, for the captured pattern Pc(fx, fy), the amplitude and average value are

$$A_c(f_x,f_y)=r\,A'(f_x,f_y), \tag{13}$$
$$B_c(f_x,f_y)=r\,B'(f_x,f_y)=r\,B(f_x,f_y). \tag{14}$$

From Eqs. (13) and (14),

$$A'(f_x,f_y)=\frac{A_c(f_x,f_y)\,B(f_x,f_y)}{B_c(f_x,f_y)}. \tag{15}$$

To eliminate the influence of r,

$$\alpha(f_x,f_y)=\frac{A'(f_x,f_y)}{A(f_x,f_y)}=\frac{A_c(f_x,f_y)\,B(f_x,f_y)}{A(f_x,f_y)\,B_c(f_x,f_y)}. \tag{16}$$

The amplitude and average value of the captured pattern Pc(fx, fy) can be calculated according to the features of the four-step phase-shift images as

$$A_c(f_x,f_y)=\frac{1}{2MN}\sum_{x=1}^{M}\sum_{y=1}^{N}\sqrt{\left[P_{c0}(x,y;f_x,f_y)-P_{c\pi}(x,y;f_x,f_y)\right]^2+\left[P_{c\pi/2}(x,y;f_x,f_y)-P_{c3\pi/2}(x,y;f_x,f_y)\right]^2}, \tag{17}$$
$$B_c(f_x,f_y)=\frac{1}{4MN}\sum_{x=1}^{M}\sum_{y=1}^{N}\left[P_{c0}(x,y;f_x,f_y)+P_{c\pi/2}(x,y;f_x,f_y)+P_{c\pi}(x,y;f_x,f_y)+P_{c3\pi/2}(x,y;f_x,f_y)\right], \tag{18}$$
where the size of the captured pattern Pc(fx, fy) is M × N.
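The calibration of Eqs. (16)–(18) reduces to simple array arithmetic on the four captured whiteboard images. Below is a NumPy sketch with hypothetical variable names of our own; note that the unknown propagation coefficient r cancels between the captured amplitude and average:

```python
import numpy as np

def amplitude_ratio(Pc, A, B):
    """alpha(fx, fy) per Eq. (16) from the four captured patterns
    Pc = [Pc_0, Pc_pi/2, Pc_pi, Pc_3pi/2]; A and B are the amplitude and
    average value set when generating the projected patterns."""
    Pc0, Pc90, Pc180, Pc270 = Pc
    M, N = Pc0.shape
    # Eq. (17): amplitude of the captured pattern.
    Ac = np.sum(np.sqrt((Pc0 - Pc180) ** 2
                        + (Pc90 - Pc270) ** 2)) / (2 * M * N)
    # Eq. (18): average value of the captured pattern.
    Bc = np.sum(Pc0 + Pc90 + Pc180 + Pc270) / (4 * M * N)
    # Eq. (16): the propagation coefficient r cancels between Ac and Bc.
    return Ac * B / (A * Bc)
```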

2.3.2 Rectified Fourier coefficients with amplitude ratio

To acquire the correct Fourier coefficient, we rectify it as

$$F\{R(x,y)\}=\frac{1}{\alpha(f_x,f_y)}\,F'\{R(x,y)\}. \tag{19}$$

We can then reconstruct a higher-quality image with the fast IFT algorithm.
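The rectification of Eq. (19) is a pointwise division of the measured spectrum by the calibrated ratio map, followed by an inverse FFT. A minimal sketch (the `eps` floor guarding near-zero ratios is our own addition, not part of the paper's method):

```python
import numpy as np

def rectify_spectrum(F_meas, alpha, eps=1e-6):
    """Eq. (19): divide each measured Fourier coefficient by the calibrated
    amplitude ratio, then reconstruct with the inverse FFT."""
    F_corr = F_meas / np.maximum(alpha, eps)
    return np.real(np.fft.ifft2(F_corr))
```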

2.3.3 Simulation

To validate the correctness of the rectification method, we simulate with an actual picture (Fig. 4).


Fig. 4 Actual picture (200 × 200 pixels) used as the simulation target.


The defocusing function is set to a Gaussian filter (3 × 3 pixels with a standard deviation of 0.6 pixels) in this simulation. The pattern size is 200 × 200 pixels, and the average gray value and amplitude of the patterns are both set to 100 as standard.

For our rectification method, we first need to acquire the amplitude ratio between the original and defocused patterns. As shown in Fig. 5, the ratio decreases with frequency, consistent with the fact that high-frequency patterns have narrow fringes and are thus more easily influenced by defocusing.


Fig. 5 Amplitude ratio (zero-frequency component shifted to the center). (a) Mesh plot of the amplitude ratio. (b) Cross section of the amplitude ratio.


We then rectify the Fourier coefficients with the amplitude ratio and reconstruct the image with the inverse fast Fourier transform. The comparison among the original image, the conventional FSI image without PDR, and the FSI image with PDR is shown in Fig. 6. The image shown in Fig. 6(a) becomes blurred in the conventional FSI [Fig. 6(b)], whereas the FSI with PDR [Fig. 6(c)] yields an image almost identical to the original.


Fig. 6 Image quality comparison among reconstructed SPI images. (a) Original image. (b) Conventional FSI image. (c) FSI image with PDR.


We quantify the image quality by the correlation coefficient r given in Eq. (20). This correlation coefficient increases with the resemblance between two images and is defined as

$$r=\frac{\sum_m\sum_n\left(A_{mn}-\bar A\right)\left(B_{mn}-\bar B\right)}{\sqrt{\left[\sum_m\sum_n\left(A_{mn}-\bar A\right)^2\right]\left[\sum_m\sum_n\left(B_{mn}-\bar B\right)^2\right]}}, \tag{20}$$
where r is the correlation of the two images, A and B are the image matrices with pixel indices m and n, and Ā, B̄ denote the mean values of A and B.
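Equation (20) is the standard 2-D correlation coefficient (MATLAB's corr2). A NumPy equivalent:

```python
import numpy as np

def corr2(A, B):
    """Correlation coefficient of two equally sized images, Eq. (20)."""
    a = A - A.mean()
    b = B - B.mean()
    return np.sum(a * b) / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2))
```

Identical images give r = 1, and any affine rescaling I → cI + d with c > 0 leaves r unchanged, so r measures structural resemblance rather than absolute intensity.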

The correlation coefficients of the conventional FSI image and the FSI image with PDR are 0.9867 and 0.9979, respectively (Fig. 6). Comparing these correlations shows that the image quality of FSI with PDR is evidently higher than that of FSI without PDR.

We further compare the details of the images by zooming in on a patch. Figure 7 shows that the patch in the FSI image has low sharpness, whereas with PDR the patch is almost the same as the original one. We select the middle row of the patch for comparison: the red curve represents the original patch, the green curve the FSI patch, and the blue curve the patch of the FSI with PDR. The gradient apparently decreases in the FSI image, whereas the FSI with PDR faithfully restores it.


Fig. 7 Comparison of patches. (a) Original patch. (b) FSI patch. (c) Patch of FSI with PDR. (d) Gray values along the middle row of the original and FSI patches. (e) Gray values along the middle row of the original patch and the patch of FSI with PDR.


Taking noise into consideration, we conduct the simulation with black and white stripes, as shown in Fig. 8. Compared with the FSI stripes, the stripes of FSI with PDR have higher similarity to the original stripes. Because of noise, there are gray noise points in the background in Figs. 8(b) and 8(c). PDR at present cannot deal with this noise; therefore, bilateral filtering [19] is used to address the problem in the experiments.


Fig. 8 Comparison between stripes. (a) Original stripes. (b) FSI stripes. (c) Stripes of FSI with PDR.


3. Experiments

Our goal in the experiment is to demonstrate the feasibility and efficiency of the proposed rectification method. For the setup [Fig. 9], a digital projector is selected as the spatial light modulator (SLM) to produce high-contrast structured patterns. The digital projector contains a light source with a wavelength of 455 nm. A pattern that illuminates the scene is projected onto the scene by the digital projector every 0.1 s. In step 1, the reflected light is collected by a camera. In step 2, the reflected light is collected by a lens and detected by a single-pixel detector, which is in fact a photodiode. A 455 nm bandpass filter is fixed in front of the single-pixel detector. The collected analog signal is converted into a digital signal by an analog-to-digital converter (ADC) and processed using a computer. The object is placed 0.5 m away from the lens of the digital projector. The defocusing coefficient is determined by the projector parameters. The pattern size is determined by the matching relationship between the projector and the camera and by the size of the resulting image. The average gray value of the patterns is 100, and the amplitude is set to 100 as standard. The experiment is conducted in a dark environment to eliminate the influence of environmental illumination.


Fig. 9 Experimental setup. (a) The reflected light is collected by a camera. (b) The reflected light is collected by a lens and detected by a single-pixel detector (photodiode).


In our method, we need to build the defocusing model by calibrating the amplitude ratio between the original and defocused patterns. A whiteboard is used as the control group in the experiment to improve reflectivity. Four-step phase-shifting sinusoidal patterns at all frequencies are projected on the whiteboard, and the reflected patterns are captured by a camera. For a specified frequency, the original pattern amplitude is set in the projector, and the defocused pattern amplitude can be calculated from the reflected pattern. The fundamental frequency is first measured on the basis of the image size and the symmetry of Fourier coefficients, and half of the coefficients are then measured. Image accuracy is affected by the uneven reflectivity of the printed paper surface, and quantization noise exerts a certain influence on the correction of high-frequency information. Thus, we apply a bilateral filter to the result of FSI with PDR.

Because we adopt four-step phase-shifting sinusoidal structured light patterns, four patterns are needed to acquire one Fourier coefficient, and owing to the conjugate symmetry of Fourier coefficients, only half of them need to be acquired. The control experiment on the whiteboard requires projecting the same number of patterns, but it needs to be performed only once for a fixed experimental setup. In the first experiment, we project 16,000 patterns, and the whole illumination process lasts around 4 hours. The second experiment uses 64,000 projected patterns and takes around 16 hours.

To quantitatively compare the imaging results of FSI and FSI with PDR, we use three letters E of different sizes on a black background as the target object, as shown in Fig. 10. The widths of the white bars are 2, 1.6, and 0.6 cm. The two reconstructed images have a resolution of 80 × 50 pixels.


Fig. 10 Image reconstruction. (a) FSI image. (b) Image of FSI with PDR. (c) Gray value curve of the center line in 2 cm bar. (d) Gray value curve of the center line in 1.6 cm bar. (e) Gray value curve of the center line in 0.6 cm bar.


We use root mean square (RMS) contrast to compare image quality numerically [20]. This metric does not depend on the angular frequency content or the spatial distribution of contrast in the image and is defined as the standard deviation of the pixel intensities:

$$C=\sqrt{\frac{1}{MN}\sum_{i=0}^{M-1}\sum_{j=0}^{N-1}\left(I_{ij}-\bar I\right)^2}, \tag{21}$$
where C is the RMS contrast of the image, M and N are the width and height of the image in pixels, I_ij is the intensity of the (i, j)-th element of the 2D image of size M by N, and Ī is the average intensity over all pixels in the image.
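Equation (21) is simply the standard deviation of the pixel values, so a NumPy version is one line:

```python
import numpy as np

def rms_contrast(I):
    """RMS contrast, Eq. (21): standard deviation of the pixel intensities."""
    I = np.asarray(I, dtype=float)
    return np.sqrt(np.mean((I - I.mean()) ** 2))
```

This is equivalent to `np.std(I)`; a constant image has zero RMS contrast.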

The RMS contrast of the FSI image, calculated by the above formula, is 22.6905. The RMS contrast of the PDR image is 27.1196, an improvement of nearly 20 percent.

We select a normal printed picture as the object. After completing the control experiment, we image the object by FSI and by FSI with PDR. To demonstrate the superiority of PDR, we also apply an image-restoration method to the FSI image to rectify the defocusing. By comparing a series of methods, we select one of the best image restoration methods, the Wiener filter, and set the PSF manually to achieve the best restoration result. The comparison is shown in Fig. 11.


Fig. 11 Image reconstruction. (a) Original reference image. (It is a photo of the printed paper and is not exactly the same as a perfect reconstruction.) (b) FSI image. (c) Processed image of FSI with Wiener filter. (d) Image of FSI with PDR. The three reconstructed images have a resolution of 160 × 100. Unlike the original image, the FSI image is blurred. The Wiener filter rectifies the defocusing to some extent, but the PDR image restores more details and achieves higher image quality.


Intuitive perception of the images shows that the details in FSI with PDR are more obvious and the image is clearer. Calculation by the above formula shows that the RMS contrast of the original reference image shot by a digital camera is 26.0981. The RMS contrast of the FSI image is 19.5656, that of the Wiener-filtered image is 27.3186, and that of the PDR image is 28.1790. The PDR achieves the best result. As with the Wiener filter, most image restoration methods require determining the PSF of the defocusing, which makes them impractical in this circumstance. PDR can recover high-frequency information, but it also amplifies some noise, which results in several dark dots in the background.

The aforementioned experiments illustrate that the image quality with PDR is greatly improved compared with that of the conventional FSI method. The proposed PDR can be applied to any situation in which the defocusing is not severe enough to destroy all high-frequency information. The FSI accuracy can be significantly improved by rectifying projector defocusing.

4. Conclusions

We conduct FSI with PDR by adding a control experiment to acquire the defocusing coefficients and thereby optimize the image quality of conventional FSI. The proposed PDR method achieves a high level of reconstructed image quality compared with the conventional SPI method.

The core of the proposed PDR is to calculate the influence of projector defocusing and rectify it via our newly proposed algorithm. In practice, we project patterns on a whiteboard and calculate the amplitude ratio from the reflected light; the defocusing coefficients derived in this way are then applied in the reconstruction of the target object.

PDR applies especially to conventional FSI. The influence of defocusing is significant in the high-frequency region; hence, the PDR method can reconstruct high-quality images with considerable detail.

However, at high frequencies, PDR cannot distinguish noise from the effective signal, which results in noise amplification. Although bilateral filtering can reduce the image noise to some extent, eliminating high-frequency noise is one direction of our future research. Besides, the objects for FSI in the experiments were 2D pictures placed at a constant focal distance. When the scene to be imaged is 3D and complex, the PSF of the projector will vary with distance, and PDR will not work well. Thus, applying the PDR method to 3D scenes is a challenging task and another direction of our future research.

FSI with PDR is experimentally capable of producing higher-quality images of a 2D object than the conventional FSI method. The proposed method can be widely used in areas where projector defocusing needs to be considered and rectified to improve imaging accuracy. Moreover, PDR introduces a way to correct defocusing on the basis of the Fourier spectrum, which can be applied to other frequency-domain methods.

Funding

National Natural Science Foundation of China (61475013, 61735003 and 61227806), Program for Changjiang Scholars and Innovative Research Team in University (IRT_16R02).

References and links

1. B. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, and M. J. Padgett, “3-D Computational imaging with single-pixel detectors,” Science 340(6134), 844–847 (2013). [CrossRef]   [PubMed]  

2. Z. Zhang and J. Zhong, “Three-dimensional single-pixel imaging with far fewer measurements than effective image pixels,” Opt. Lett. 41(11), 2497–2500 (2016). [CrossRef]   [PubMed]  

3. M. J. Sun, M. P. Edgar, G. M. Gibson, B. Sun, N. Radwell, R. Lamb, and M. J. Padgett, “Single-pixel three-dimensional imaging with time-based depth resolution,” Nat. Commun. 7, 12010 (2016). [CrossRef]   [PubMed]  

4. W. Chan, K. Charan, D. Takhar, K. Kelly, R. Baraniuk, and D. Mittleman, “A single-pixel terahertz imaging system based on compressed sensing,” Appl. Phys. Lett. 93(12), 121105 (2008). [CrossRef]  

5. C. Watts, D. Shrekenhamer, J. Montoya, G. Lipworth, J. Hunt, T. Sleasman, S. Krishna, D. Smith, and W. Padilla, “Terahertz compressive imaging with metamaterial spatial light modulators,” Nat. Photonics 8(8), 605–609 (2014). [CrossRef]  

6. L. Bian, J. Suo, G. Situ, Z. Li, J. Fan, F. Chen, and Q. Dai, “Multispectral imaging using a single bucket detector,” Sci. Rep. 6(1), 24752 (2016). [CrossRef]   [PubMed]  

7. P. Sen, B. Chen, G. Garg, S. Marschner, M. Horowitz, M. Levoy, and H. Lensch, “Dual photography,” ACM Trans. Graph. 24(3), 745–755 (2005). [CrossRef]  

8. T. B. Pittman, Y. H. Shih, D. V. Strekalov, and A. V. Sergienko, “Optical imaging by means of two-photon quantum entanglement,” Phys. Rev. A 52(5), R3429–R3432 (1995). [CrossRef]   [PubMed]  

9. J. H. Shapiro, “Computational ghost imaging,” Phys. Rev. A 78(6), 061802 (2008). [CrossRef]  

10. Z. Zhang, X. Ma, and J. Zhong, “Single-pixel imaging by means of Fourier spectrum acquisition,” Nat. Commun. 6(1), 6225 (2015). [CrossRef]   [PubMed]  

11. X. Li and Z. Zhang, “High-quality fringe pattern generation based on binary pattern optimization with projector defocusing,” J. Opt. Technol. 84(1), 32–40 (2017). [CrossRef]  

12. M. J. Sun, M. P. Edgar, D. B. Phillips, G. M. Gibson, and M. J. Padgett, “Improving the signal-to-noise ratio of single-pixel imaging using digital microscanning,” Opt. Express 24(10), 10476–10485 (2016). [CrossRef]   [PubMed]  

13. K. Fraser, D. V. Arnold, and G. Dellaire, “Projected Barzilai-Borwein Method with Infeasible Iterates for Nonnegative Least-Squares Image Deblurring,” in Proceedings of IEEE Conference on Computer and Robot Vision (IEEE, 2014), pp. 504–509. [CrossRef]  

14. A. R. Barakat, “Dilute aperture diffraction imagery and object reconstruction,” Opt. Eng. 29(2), 131–139 (1990). [CrossRef]  

15. C. W. Helstrom, “Image Restoration by the Method of Least Squares,” J. Opt. Soc. Am. 57(3), 297–303 (1967). [CrossRef]  

16. M. E. Daube-Witherspoon and G. Muehllehner, “An Iterative Image Space Reconstruction Algorthm Suitable for Volume ECT,” IEEE Trans. Med. Imaging 5(2), 61–66 (1986). [CrossRef]   [PubMed]  

17. H. Jiang, S. Zhu, H. Zhao, B. Xu, and X. Li, “Adaptive regional single-pixel imaging based on the Fourier slice theorem,” Opt. Express 25(13), 15118–15130 (2017). [CrossRef]   [PubMed]  

18. M. Gupta, Y. Tian, S. G. Narasimhan, and L. Zhang, “A Combined Theory of Defocused Illumination and Global Light Transport,” Int. J. Comput. Vis. 98(2), 146–167 (2012). [CrossRef]  

19. S. Paris, P. Kornprobst, and J. Tumblin, “Bilateral Filtering,” Int. J. Numer. Methods Eng. 63(13), 1911–1938 (2009).

20. E. Peli, “Contrast in complex images,” J. Opt. Soc. Am. A 7(10), 2032–2040 (1990). [CrossRef]   [PubMed]  


Figures (11)

Fig. 1 Schematic of FSI. The digital projector projects illumination patterns onto the object, which is located 0.5 m away from the experimental system. The reflected light is collected by a lens and detected by a photodiode. An ADC transfers the detected signal to a computer, and the image is reconstructed from the obtained data.

Fig. 2 Projector defocusing. (a) Defocusing schematic. The light emitted by the object point is focused on the focal plane through the lens, and the image is defocused because the image plane and the focal plane do not coincide. (b) PSF. The PSF is generally approximated by a circular Gaussian function as in Eq. (5), where the standard deviation σ represents the defocusing level.

Fig. 3 Amplitude change due to defocusing. (a) Original pattern. (b) Defocused pattern. (c) Curves of the same row in the two patterns. The blue curve represents the original pattern, and the orange curve represents the defocused pattern. After defocusing, the average value is maintained, whereas the pattern amplitude decreases.

Fig. 4 Actual picture (200 × 200 pixels). The defocus function is set to a Gaussian filter (3 × 3 pixels with a standard deviation of 0.6 pixels) in this simulation. The pattern size is 200 × 200 pixels, the average gray value is 100, and the amplitude is set to 100 as the standard.

Fig. 5 Amplitude ratio (with the zero-frequency component moved to the center). (a) Mesh plot of the amplitude ratio. (b) Tangent plane of the amplitude ratio. The ratio decreases with frequency, which corresponds to the fact that high-frequency patterns have narrow fringes and are more easily affected by defocusing.

Fig. 6 Image-quality comparison among reconstructed SPI images. (a) Original image. (b) FSI image. (c) FSI image with PDR. The image in Fig. 6(a) becomes blurred under conventional FSI, as shown in Fig. 6(b), whereas FSI with PDR reproduces it almost exactly, as shown in Fig. 6(c). We quantify image quality by the correlation coefficient r defined in Eq. (20). The correlation coefficients of the conventional FSI image and the FSI image with PDR are 0.9867 and 0.9979, respectively; the comparison shows that FSI with PDR achieves evidently higher image quality than FSI without PDR.

Fig. 7 Comparison of a patch. (a) Original patch. (b) FSI patch. (c) Patch of FSI with PDR. (d) Gray values of the original and FSI patches. (e) Gray values of the original patch and the patch of FSI with PDR. The middle row of the patch is selected for comparison. The red curve represents the original patch, the green curve the FSI patch, and the blue curve the patch of FSI with PDR. The gradient clearly decreases in the FSI image, whereas FSI with PDR reproduces the image almost perfectly.

Fig. 8 Comparison between stripes. (a) Original stripes. (b) FSI stripes. (c) Stripes of FSI with PDR. Compared with the FSI stripes, the stripes of FSI with PDR are more similar to the original stripes. Because of noise, gray noise points appear in the background of Figs. 8(b) and 8(c).

Fig. 9 Experimental setup. A digital projector is selected as the SLM to produce high-contrast structured patterns. The projector contains a light source with a wavelength of 455 nm. An illumination pattern is projected onto the scene by the digital projector every 0.1 s. In (a), the reflected light is collected by a camera. In (b), the reflected light is collected by a lens and detected by a single-pixel detector (a photodiode). A 455 nm bandpass filter is fixed in front of the single-pixel detector. The collected analog signal is converted into a digital signal by an ADC and processed by a computer. The object is placed 0.5 m away from the lens of the digital projector.

Fig. 10 Image reconstruction. (a) FSI image. (b) Image of FSI with PDR. (c) Gray-value curve of the center line of the 2 cm bar. (d) Gray-value curve of the center line of the 1.6 cm bar. (e) Gray-value curve of the center line of the 0.6 cm bar.

Fig. 11 Image reconstruction. (a) Original reference image. (It is a photo of the printed paper and is not exactly the same as a perfect reconstruction.) (b) FSI image. (c) FSI image processed with a Wiener filter. (d) Image of FSI with PDR. The three reconstructed images have a resolution of 160 × 100 pixels. Unlike the original image, the FSI image is blurred. The Wiener filter rectifies the defocusing to some extent, but the PDR image recovers more details and achieves higher image quality.
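The amplitude loss illustrated in Figs. 3–5 can be reproduced with a short simulation. The sketch below is illustrative only: the spatial frequency `fx`, the use of `scipy.ndimage.gaussian_filter` as the defocus PSF, and the std-ratio estimate of the amplitude ratio are choices of this example, with σ = 0.6 pixels matching the setting quoted for Fig. 4.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Illustrative simulation of defocusing a sinusoid pattern.
# Assumptions (not taken from the paper): the spatial frequency fx and
# the use of scipy's gaussian_filter as the defocus PSF.
N = 200
fx = 1 / 25                        # 8 full fringe periods across the pattern
X, Y = np.meshgrid(np.arange(N), np.arange(N))
pattern = 100 + 100 * np.cos(2 * np.pi * fx * X)   # mean 100, amplitude 100

defocused = gaussian_filter(pattern, sigma=0.6)    # Gaussian PSF as in Eq. (5)

# The DC term (mean) survives the blur while the fringe amplitude shrinks;
# the std ratio estimates the rectification factor alpha at this frequency,
# close to the continuous-kernel value exp(-2*pi^2*sigma^2*fx^2).
alpha = defocused.std() / pattern.std()
print(f"mean before/after: {pattern.mean():.2f} / {defocused.mean():.2f}")
print(f"amplitude ratio alpha ~ {alpha:.3f}")
```

Sweeping `fx` upward reproduces the monotonic decay of the ratio seen in Fig. 5: narrower fringes lose proportionally more amplitude under the same blur.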

Equations (21)

$$P_\phi(x,y;f_x,f_y)=B(f_x,f_y)+A(f_x,f_y)\cos\!\left(2\pi f_x x+2\pi f_y y+\phi\right),\tag{1}$$
$$D_\phi(f_x,f_y)=D_n+k\iint_\Omega R(x,y)\,P_\phi(x,y;f_x,f_y)\,dx\,dy,\tag{2}$$
$$F\{R(x,y)\}=\frac{1}{2bk}\Big\{\big[D_0(f_x,f_y)-D_\pi(f_x,f_y)\big]+j\big[D_{\pi/2}(f_x,f_y)-D_{3\pi/2}(f_x,f_y)\big]\Big\},\tag{3}$$
$$P'(x,y;f_x,f_y)=G(x,y)*P(x,y;f_x,f_y),\tag{4}$$
$$G(x,y)=\frac{1}{2\pi\sigma^2}\exp\!\left(-\frac{x^2+y^2}{2\sigma^2}\right),\tag{5}$$
$$A'(f_x,f_y)=\alpha(f_x,f_y)\,A(f_x,f_y),\tag{6}$$
$$P'(x,y;f_x,f_y)-B(f_x,f_y)=\alpha(f_x,f_y)\big(P(x,y;f_x,f_y)-B(f_x,f_y)\big),\tag{7}$$
$$D'_\phi(f_x,f_y)=D_n+k\iint_\Omega R(x,y)\,P'_\phi(x,y;f_x,f_y)\,dx\,dy.\tag{8}$$
$$F'\{R(x,y)\}=\frac{1}{2bk}\Big\{\big[D'_0(f_x,f_y)-D'_\pi(f_x,f_y)\big]+j\big[D'_{\pi/2}(f_x,f_y)-D'_{3\pi/2}(f_x,f_y)\big]\Big\}=\frac{\alpha(f_x,f_y)}{2bk}\Big\{\big[D_0(f_x,f_y)-D_\pi(f_x,f_y)\big]+j\big[D_{\pi/2}(f_x,f_y)-D_{3\pi/2}(f_x,f_y)\big]\Big\}.\tag{9}$$
$$F'\{R(x,y)\}=\alpha(f_x,f_y)\,F\{R(x,y)\}.\tag{10}$$
$$\alpha(f_x,f_y)=\frac{A'(f_x,f_y)}{A(f_x,f_y)}.\tag{11}$$
$$P_c(f_x,f_y)=r\,P'(f_x,f_y),\tag{12}$$
$$A_c(f_x,f_y)=r\,A'(f_x,f_y),\tag{13}$$
$$B_c(f_x,f_y)=r\,B'(f_x,f_y)=r\,B(f_x,f_y).\tag{14}$$
$$A'(f_x,f_y)=\frac{A_c(f_x,f_y)\,B(f_x,f_y)}{B_c(f_x,f_y)}.\tag{15}$$
$$\alpha(f_x,f_y)=\frac{A'(f_x,f_y)}{A(f_x,f_y)}=\frac{A_c(f_x,f_y)\,B(f_x,f_y)}{A(f_x,f_y)\,B_c(f_x,f_y)}.\tag{16}$$
$$A_c(f_x,f_y)=\frac{1}{2MN}\sum_{x=1}^{M}\sum_{y=1}^{N}\sqrt{\big(P_{c0}(x,y;f_x,f_y)-P_{c\pi}(x,y;f_x,f_y)\big)^2+\big(P_{c\pi/2}(x,y;f_x,f_y)-P_{c3\pi/2}(x,y;f_x,f_y)\big)^2},\tag{17}$$
$$B''(f_x,f_y)=\frac{1}{4MN}\sum_{x=1}^{M}\sum_{y=1}^{N}\big(P_{c0}(x,y;f_x,f_y)+P_{c\pi/2}(x,y;f_x,f_y)+P_{c\pi}(x,y;f_x,f_y)+P_{c3\pi/2}(x,y;f_x,f_y)\big),\tag{18}$$
$$F\{R(x,y)\}=\frac{1}{\alpha(f_x,f_y)}\,F'\{R(x,y)\}.\tag{19}$$
$$r=\frac{\sum_m\sum_n\big(A_{mn}-\bar{A}\big)\big(B_{mn}-\bar{B}\big)}{\sqrt{\Big(\sum_m\sum_n\big(A_{mn}-\bar{A}\big)^2\Big)\Big(\sum_m\sum_n\big(B_{mn}-\bar{B}\big)^2\Big)}},\tag{20}$$
$$C=\sqrt{\frac{1}{M\times N}\sum_{x=1}^{M-1}\sum_{y=1}^{N-1}\big(I_{xy}-\bar{I}\big)^2},\tag{21}$$
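Taken together, Eqs. (10), (11), and (19) state that the defocused measurement yields α·F, so dividing each measured Fourier coefficient by the calibrated amplitude ratio α restores the true spectrum. The following is a minimal numerical sketch of that step; the Gaussian closed form used for α here is an assumption consistent with the PSF of Eq. (5), whereas in the actual method α is measured on a whiteboard via Eqs. (16)–(18).

```python
import numpy as np

# Sketch of projector-defocusing rectification on a synthetic spectrum.
# Assumption: alpha takes the Gaussian closed form implied by Eq. (5);
# the paper instead measures alpha in a whiteboard calibration.
rng = np.random.default_rng(0)
R = rng.random((64, 64))            # test reflectivity image
F = np.fft.fft2(R)                  # ideal Fourier spectrum

sigma = 1.2                         # assumed defocus level in pixels
f = np.fft.fftfreq(64)
FX, FY = np.meshgrid(f, f, indexing="ij")
alpha = np.exp(-2 * np.pi**2 * sigma**2 * (FX**2 + FY**2))

F_blur = alpha * F                  # what defocused FSI would measure, Eq. (10)
F_rect = F_blur / alpha             # rectification, Eq. (19)
R_rect = np.fft.ifft2(F_rect).real

err = np.max(np.abs(R_rect - R))
print(f"max reconstruction error: {err:.2e}")
```

In a real measurement the division amplifies detector noise wherever α is small, so the defocusing level and the accuracy of the whiteboard calibration bound the highest spatial frequency that can usefully be rectified.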
