
Single-shot phase imaging with randomized light (SPIRaL)

Abstract

We present a method for single-shot phase imaging with randomized light (SPIRaL). In SPIRaL, the complex (amplitude and phase) field of an object illuminated with a randomized coherent beam is captured with an image sensor, without the need for any reference light. The object field is retrieved from the single captured intensity image by a compressive sensing-based algorithm with a sparsity constraint. SPIRaL offers higher observation speed, higher light efficiency, and greater implementation flexibility than previous methods. We demonstrate SPIRaL numerically and experimentally.

© 2016 Optical Society of America

1. Introduction

Phase imaging observes both the amplitude and the phase of a light wave, which together describe its complex field [1–3]. It has been applied to a wide range of fields, especially the life sciences, where phase information is important because many biological specimens have low light absorption. Various methods have been proposed for quantitative phase imaging, and they can generally be categorized into digital holography (DH) and coherent diffractive imaging (CDI) [4, 5].

DH and CDI have been widely used in various spectral regimes. However, they involve a serious tradeoff between the number of shots and the space-bandwidth product (SBP). In DH, where reference light is introduced so that the complex field can be determined quantitatively from the interference, off-axis holography realizes single-shot phase imaging at the cost of a reduced spatial-frequency bandwidth, and phase-shifting holography recovers the full complex field by using multiple shots with different phase shifts [6,7]. The tradeoff is more serious in CDI than in DH, although CDI has the advantage that it does not need any reference light. In CDI, the size of the object must be much smaller than that of the captured image for the reconstruction process known as phase retrieval (PR) [8]. This limited region of the object is referred to as the support. Several multi-shot approaches for extending the limited object size in CDI have been demonstrated, such as ptychography [9–11].

To solve the above tradeoff problem in DH and CDI, we have recently demonstrated a technique called single-shot phase imaging with a coded aperture (SPICA) [12–14]. SPICA observes a single intensity image of the diffraction pattern through a coded aperture (CA), which is a randomized pinhole array placed between the object and the image sensor as the support in PR. The object complex field is reconstructed by PR with a CA-based support constraint and a sparsity-based algorithm used in compressive sensing [15]. However, the pinhole array-based CA causes several issues. The pinhole array reduces the amount of object light. Also, the non-pinhole area on the CA must perfectly obstruct object light for the PR. It is difficult to implement such a structure in several wavelength regions, such as the X-ray region, where most materials have very low light absorption.

In this paper, we extend the coded-diffraction approach used in SPICA to single-shot phase imaging with randomized light (SPIRaL), which is based on structured illumination (SI). SPIRaL uses SI instead of the CA in SPICA, allowing it to utilize all of the light from the object. Furthermore, speckle-based SI, which is easily implemented in various spectral regions, can be used in SPIRaL. Although SI has been used for multi-shot phase imaging because it enhances the uniqueness of the solution in PR [16–20], SPIRaL improves on the observation speed of those multi-shot methods by requiring only a single shot. SI is also an established approach for increasing the SBP of imaging systems, for example in super-resolution and multidimensional imaging [21–25]. SPIRaL is compatible with those SI-based high-SBP techniques.

2. Method

In SPIRaL, as shown in Fig. 1, the complex object is illuminated with coherent randomized light (RaL) from the generator, and the propagating field is observed by the image sensor, which captures a single intensity image without any reference light. This imaging process is modeled as follows

$$|g|^2 = |P_z M f|^2, \tag{1}$$
where $g \in \mathbb{C}^{(N_x \times N_y) \times 1}$ is the vectorized complex field on the image sensor, $P_z \in \mathbb{C}^{(N_x \times N_y) \times (N_x \times N_y)}$ is a Toeplitz matrix for Fresnel propagation over a distance $z$ [26], $M \in \mathbb{C}^{(N_x \times N_y) \times (N_x \times N_y)}$ is a diagonal matrix that modulates the object field with the RaL, and $f \in \mathbb{C}^{(N_x \times N_y) \times 1}$ is the vectorized complex object field. Here, the captured image in SPIRaL corresponds to $|g|^2$, $z$ is the distance between the object and the image sensor, and $N_x$ and $N_y$ are the numbers of elements along the x- and y-axes, respectively, as shown in Fig. 1.
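To make the forward model concrete, the following is a minimal sketch of Eq. (1) in Python with NumPy (not the authors' code, which was written in MATLAB). It assumes a Fresnel transfer-function implementation as a stand-in for the propagation operator $P_z$, borrows the simulation parameters from Section 3, and uses illustrative function and variable names throughout.

```python
import numpy as np

def fresnel_propagate(field, wavelength, pitch, z):
    """Propagate a 2-D complex field over a distance z with the Fresnel
    transfer function (used here as a stand-in for the operator P_z)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    fxx, fyy = np.meshgrid(fx, fy)
    h = np.exp(1j * 2 * np.pi * z / wavelength) * \
        np.exp(-1j * np.pi * wavelength * z * (fxx**2 + fyy**2))
    return np.fft.ifft2(np.fft.fft2(field) * h)

# Illustrative parameters taken from the simulation in Section 3
nx = ny = 512           # sensor pixel count (Nx x Ny)
pitch = 5e-6            # 5 um pixel pitch
wavelength = 532e-9     # 532 nm
z = 5e-2                # 5 cm object-to-sensor distance

rng = np.random.default_rng(0)
f = np.ones((ny, nx), dtype=complex)     # placeholder object field
m = rng.uniform(0, 1, (ny, nx)) * \
    np.exp(1j * rng.uniform(-np.pi, np.pi, (ny, nx)))  # randomized light (diagonal of M)

g = fresnel_propagate(m * f, wavelength, pitch, z)   # complex field on the sensor
captured = np.abs(g)**2                              # single intensity image, Eq. (1)
```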

Fig. 1 Setup of SPIRaL.

The imaging process of SPIRaL is nonlinear, as shown in Eq. (1). The CA-based PR used in SPICA cannot be applied to the inverse problem of SPIRaL because no support for PR is available in this case. To solve this inverse problem, Eq. (1) is rewritten as

$$|g|^2 = |P_{z_2} P_{z_1} M f|^2, \tag{2}$$
where $z_1 + z_2 = z$ and $P_{z_2} P_{z_1} = P_z$. Here, the distances $z_1$ and $z_2$ are introduced virtually so that alternating projection, which has been applied to ptychography [27, 28], can be used for the reconstruction in SPIRaL. The following two optimization problems are solved alternately:
$$\hat{g} = \arg\min_{g} \left\| \, |g|^2 - |P_{z_2} a|^2 \, \right\|_2^2, \tag{3}$$
$$\hat{f} = \arg\min_{f} \left\| a - P_{z_1} M f \right\|_2^2 + \tau \, \Phi(f), \tag{4}$$
where $a \in \mathbb{C}^{(N_x \times N_y) \times 1}$ is the auxiliary function for the alternating projection, $\| \cdot \|_2$ is the $\ell_2$-norm, $\Phi(\cdot)$ is a sparsity-based regularizer, and $\tau$ is a parameter for the regularization. The auxiliary function $a$ is the complex field at a distance $z_1$ from the object and at a distance $z_2$ from the image sensor.

The optimization problem in Eq. (3) is solved with the Gerchberg–Saxton method:

$$\hat{g} = \left( |g|^2 \oslash |P_{z_2} a|^2 \right)^{1/2} \otimes P_{z_2} a, \tag{5}$$
where $\oslash$ denotes element-wise division and $\otimes$ denotes element-wise multiplication [29]. The optimization problem in Eq. (4) is solved by a sparse solver, and various sparse solvers have been proposed [30–32]. In the reconstruction of SPIRaL, the auxiliary function $a$ is updated in each iteration of the sparse solver by applying Eq. (5) and back-propagating $\hat{g}$ over the auxiliary distance $z_2$ from the image sensor. In this paper, the two-step iterative shrinkage/thresholding algorithm (TwIST) is used as the sparse solver, and the total variation is used as the regularizer [30, 33].
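As a sketch of the reconstruction described by Eqs. (3)–(5), the loop below alternates the Gerchberg–Saxton amplitude replacement of Eq. (5) with an update of the object estimate, reusing the fresnel_propagate helper defined above. Note that the object update here is a plain gradient step on the data-fidelity term of Eq. (4); the paper instead uses TwIST with a total-variation regularizer, so this should be read as a simplified stand-in rather than the authors' implementation.

```python
def reconstruct(captured, m, wavelength, pitch, z1, z2, n_iter=500, step=1.0):
    """Simplified alternating-projection reconstruction for SPIRaL."""
    amp = np.sqrt(captured)                         # measured amplitude |g|
    f_hat = np.ones_like(captured, dtype=complex)   # initial object estimate
    a = fresnel_propagate(m * f_hat, wavelength, pitch, z1)
    for _ in range(n_iter):
        # Eq. (5): impose the measured amplitude on the sensor plane
        b = fresnel_propagate(a, wavelength, pitch, z2)
        g_hat = amp * b / (np.abs(b) + 1e-12)
        # Update the auxiliary function a by back-propagating over z2
        a = fresnel_propagate(g_hat, wavelength, pitch, -z2)
        # Gradient step on ||a - P_z1 M f||^2 (the TwIST/TV update is omitted in this sketch)
        residual = fresnel_propagate(m * f_hat, wavelength, pitch, z1) - a
        f_hat -= step * np.conj(m) * fresnel_propagate(residual, wavelength, pitch, -z1)
    return f_hat

f_rec = reconstruct(captured, m, wavelength, pitch, z1=3e-2, z2=2e-2)
```

Because the RaL amplitude in this sketch is at most 1 and the propagators are unitary, a unit step keeps the gradient update stable; in the paper it is the TV-regularized TwIST update that enforces the sparsity constraint on which the single-shot inversion relies.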

3. Simulation

SPIRaL was verified via simulation, as shown in Fig. 2. In the simulation, the distance $z$ was 5 cm, and the wavelength was 532 nm. An image sensor with a pixel count of 512 × 512 ($= N_x \times N_y$) pixels and a pixel pitch of 5 μm × 5 μm was assumed. The amplitude and phase of the object are shown in Figs. 2(a) and 2(b), respectively. The amplitude and phase of the RaL on the object plane were randomly set, pixel by pixel, to values between 0 and 1 and between −π and π, respectively, as shown in Figs. 2(c) and 2(d). Amplitude-only or phase-only RaL can also be used in SPIRaL. Figure 2(e) is the captured image, to which white Gaussian noise with a signal-to-noise ratio (SNR) of 30 dB was added. The auxiliary distances $z_1$ and $z_2$ were set to 3 cm and 2 cm, respectively. The number of iterations in the reconstruction was 500. Several trials were performed to adjust these parameters, and the values that realized the best reconstruction fidelity were chosen. The amplitude and phase of the reconstruction are shown in Figs. 2(f) and 2(g). Both the amplitude and phase were reconstructed successfully, and the peak SNR (PSNR) between the object and the reconstruction was 27.3 dB. The reconstruction code was implemented in MATLAB without any parallelization. The calculation time was five minutes on a computer equipped with an Intel Core i7 processor with a clock rate of 2.8 GHz and 16 GB of memory.
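As a minimal illustration of how the noisy measurement and the reconstruction fidelity can be produced and evaluated, the snippet below uses the standard definitions of SNR-based white Gaussian noise and PSNR; the function and variable names are illustrative and not taken from the paper.

```python
def add_white_gaussian_noise(image, snr_db, rng=None):
    """Add white Gaussian noise to an intensity image at a given SNR in dB."""
    rng = np.random.default_rng() if rng is None else rng
    signal_power = np.mean(image**2)
    noise_power = signal_power / 10**(snr_db / 10)
    return image + rng.normal(0.0, np.sqrt(noise_power), image.shape)

def psnr(reference, estimate):
    """Peak signal-to-noise ratio between the object and its reconstruction."""
    mse = np.mean(np.abs(reference - estimate)**2)
    return 10 * np.log10(np.max(np.abs(reference))**2 / mse)

noisy_capture = add_white_gaussian_noise(captured, snr_db=30)
```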

Fig. 2 Simulation of SPIRaL. The (a) amplitude and (b) phase of the object. The (c) amplitude and (d) phase of the illumination, where the upper-right parts show magnified patterns. (e) The captured image. The (f) amplitude and (g) phase of the reconstructed result. Phases are normalized in the interval [−π, π].

Figure 3 plots the relationship between the measurement SNR and the reconstruction PSNR, which quantifies the fidelity of the reconstruction. In the plot, the crosses show the average PSNRs and the error bars show the minimum and maximum PSNRs over five different RaL patterns. The reconstruction PSNRs were about 28 dB for measurements with high SNRs. As shown in the plot, the reconstruction algorithm converged well when the measurement SNR was 10 dB or higher.

Fig. 3 Relationship between the measurement noise and the reconstruction fidelity in the simulation in Fig. 2.

4. Experiments

4.1. Oil and water

In the experimental demonstration of SPIRaL, a mixture of oil and water on a slide glass was used as the object. The RaL was generated with a spatial light modulator (LC2002, manufactured by Holoeye, pixel count: 600 × 800 pixels, pixel pitch: 32 μm × 32 μm) and a laser diode (L658P050, manufactured by Thorlabs, wavelength: 658 nm). An image sensor (CoolSNAP MYO, manufactured by Photometrics, pixel count: 1460 × 1940 pixels, pixel pitch: 4.54 μm × 4.54 μm) was set at a distance of $z$ = 3.9 cm from the object. $N_x$ and $N_y$ were both 740.

The amplitude and phase of the RaL on the object plane are shown in Figs. 4(a) and 4(b), respectively. Figure 4(c) is the captured intensity image. The amplitude and phase of the reconstruction, where the auxiliary distances $z_1$ and $z_2$ were chosen to be 2.4 cm and 1.5 cm, are shown in Figs. 4(d) and 4(e), respectively. Here the phase was unwrapped [34]. A reference image of the area surrounded by the square in Fig. 4(d) was captured by a microscope (IX71, manufactured by Olympus, 5× objective lens), as shown in Fig. 4(f). The microscopic image had a higher resolution than that of SPIRaL because the former had a higher numerical aperture.

Fig. 4 SPIRaL experiment with a mixture of oil and water. The (a) amplitude and (b) phase, which is normalized in the interval [−π, π], of the illumination, where the upper-right parts show magnified patterns. (c) The captured image. The (d) amplitude and (e) phase, which is normalized in the interval [0, 6π], of the reconstructed result. (f) Image of the region surrounded by the square in (d) observed with a microscope, where the scale bar is 300 μm.

4.2. Spatial resolution

The spatial resolution of SPIRaL was verified using an opaque wire with a thickness of 30 μm, as shown in Fig. 5. The object distance $z$ was 4.0 cm, and the auxiliary distances $z_1$ and $z_2$ were 2.5 cm and 1.5 cm, respectively. The other configurations were the same as those in the first experiment. The amplitude and phase of the RaL on the object plane are shown in Figs. 5(a) and 5(b), respectively. Figure 5(c) is the captured image. Both sides of the captured image were computationally replaced with zeros, as shown in Fig. 5(c), so that the point spread function (PSF) would be larger than the thickness of the wire, as required for verifying the spatial resolution. The width of the nonzero area was 100 pixels. In this case, the size $\delta$ of the PSF of SPIRaL is approximately calculated, based on DH, as

$$\delta \approx \frac{z \lambda}{w}, \tag{6}$$
where $\lambda$ is the illumination wavelength (658 nm) and $w$ is the width of the nonzero area (4.54 μm × 100 pixels), which gives $\delta$ = 58 μm, larger than the thickness of the wire [26].
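Written out explicitly with these values ($z$ = 4.0 cm, $\lambda$ = 658 nm, $w$ = 4.54 μm × 100 = 454 μm), the estimate is
$$\delta \approx \frac{0.040\,\mathrm{m} \times 658 \times 10^{-9}\,\mathrm{m}}{454 \times 10^{-6}\,\mathrm{m}} \approx 58\,\mu\mathrm{m}.$$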

Fig. 5 Experimental evaluation of the spatial resolution with a wire. The (a) amplitude and (b) phase of the illumination, where the upper-right parts show magnified patterns. (c) The captured image, where both sides are replaced with zeros. (d) The reconstructed amplitude, where the upper-right part shows a magnified image of the square area, and (e) the reconstructed phase. (f) The line profile indicated by the black broken line in the magnified image in (d). Phases are normalized in the interval [−π, π].

The reconstructed amplitude and phase are shown in Figs. 5(d) and 5(e), respectively. Figure 5(f) shows the line profile of the reconstructed amplitude together with the theoretical profile, which was obtained as the convolution between the wire and the PSF, where the wire was approximated by an inverted rectangle with a width of 30 μm and the PSF was approximated by a Gaussian distribution with a full width at half maximum (FWHM) of 58 μm. The FWHMs of the experimental and theoretical profiles were 63 μm and 61 μm, respectively. The experimental FWHM of 63 μm corresponds to the convolution between the wire and a PSF with an FWHM of 59 μm.

4.3. Quantitative phase measurement

The performance of the phase measurement in SPIRaL was quantitatively evaluated by imaging a convex lens with a focal length of 80 cm. The object distance $z$ was 5.0 cm, and the auxiliary distances $z_1$ and $z_2$ were 3.5 cm and 1.5 cm, respectively. The other configurations were the same as those in the previous two experiments. The amplitude and phase of the RaL are shown in Figs. 6(a) and 6(b), respectively. Figure 6(c) is the captured image. The reconstructed amplitude and phase after unwrapping are shown in Figs. 6(d) and 6(e), respectively. The reconstructed amplitude has artifacts because the gradient of the lens surface is large, especially in the fringe area, and thus it is not sparse in the total-variation domain. The line profiles of the amplitude and phase are plotted in Figs. 6(f) and 6(g), which also contain the theoretical profiles of the lens as references. The root-mean-square errors of the amplitude and phase between the experimental and theoretical profiles were 0.32 and 0.29, respectively.
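For context, a common model for the phase imparted by a thin lens of focal length $F$ (here 80 cm) at radius $r$ from its axis is the parabolic profile
$$\phi(r) = -\frac{\pi r^2}{\lambda F},$$
which is the kind of reference the theoretical profiles presumably follow; this is an assumption on our part, since the text describes the references only as the theoretical profiles of the lens.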

Fig. 6 Experimental evaluation of the phase measurement with a convex lens. The (a) amplitude and (b) phase, which is normalized in the interval [−π, π], of the illumination, where the upper-right parts show magnified patterns. (c) The captured image. The (d) amplitude and (e) phase, which is normalized in the interval [0, 9π], of the reconstructed result. The line profiles of the (f) amplitude and (g) phase indicated by the white broken line in (d).

5. Conclusion

We have presented a method for single-shot phase imaging with randomized light (SPIRaL). In SPIRaL, an object is illuminated with a randomized coherent beam, and the propagating field is captured by a camera without the need for reference light. The complex field of the object is reconstructed from a single captured intensity image by using TwIST in conjunction with the Gerchberg–Saxton method based on alternating projection. SPIRaL was demonstrated numerically and experimentally.

SPIRaL realizes reference-free single-shot phase imaging of objects with a high SBP. Therefore, compared with previous phase imaging methods, SPIRaL achieves higher observation throughput, higher light-usage efficiency, and a simpler optical setup. Although the computational cost of SPIRaL is high compared with conventional phase imaging methods, it may be alleviated by using a graphics processing unit because the processing in SPIRaL is pixel-wise parallel. In future work, a more detailed analysis and optimization of the imaging conditions in SPIRaL, e.g., the sparsity of the object, should be addressed. This analysis will include quantitative comparisons of SPIRaL with traditional phase imaging methods, such as DH. It will also be interesting to integrate established SI techniques into SPIRaL [21–25].

Acknowledgments

This work was supported by JSPS KAKENHI Grant Number 15K21128.

References and links

1. B. Bhaduri, C. Edwards, H. Pham, R. Zhou, T. H. Nguyen, L. L. Goddard, and G. Popescu, “Diffraction phase microscopy: principles and applications in materials and life sciences,” Adv. Opt. Photon. 6, 57–119 (2014).

2. K. Lee, K. Kim, J. Jung, J. Heo, S. Cho, S. Lee, G. Chang, Y. Jo, H. Park, and Y. Park, “Quantitative phase imaging techniques for the study of cell pathophysiology: from principles to applications,” Sensors 13, 4170–4191 (2013).

3. P. Marquet, C. Depeursinge, and P. J. Magistretti, “Review of quantitative phase-digital holographic microscopy: promising novel imaging technique to resolve neuronal network activity and identify cellular biomarkers of psychiatric disorders,” Neurophotonics 1, 20901 (2014).

4. G. Nehmetallah and P. P. Banerjee, “Applications of digital and analog holography in three-dimensional imaging,” Adv. Opt. Photon. 4, 472–553 (2012).

5. J. Miao, T. Ishikawa, I. K. Robinson, and M. M. Murnane, “Beyond crystallography: diffractive imaging using coherent x-ray light sources,” Science 348, 530–535 (2015).

6. E. N. Leith and J. Upatnieks, “Reconstructed wavefronts and communication theory,” J. Opt. Soc. Am. 52, 1123–1128 (1962).

7. I. Yamaguchi and T. Zhang, “Phase-shifting digital holography,” Opt. Lett. 22, 1268–1270 (1997).

8. J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21, 2758–2769 (1982).

9. P. Thibault, M. Dierolf, A. Menzel, O. Bunk, C. David, and F. Pfeiffer, “High-resolution scanning X-ray diffraction microscopy,” Science 321, 379–382 (2008).

10. G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nat. Photonics 7, 739–745 (2013).

11. N. Streibl, “Phase imaging by the transport equation of intensity,” Opt. Commun. 49, 6–10 (1984).

12. R. Horisaki, Y. Ogura, M. Aino, and J. Tanida, “Single-shot phase imaging with a coded aperture,” Opt. Lett. 39, 6466–6469 (2014).

13. R. Horisaki and J. Tanida, “Multidimensional object acquisition by single-shot phase imaging with a coded aperture,” Opt. Express 23, 9696–9704 (2015).

14. R. Horisaki, R. Egami, and J. Tanida, “Experimental demonstration of single-shot phase imaging with a coded aperture,” Opt. Express 23, 28691–28697 (2015).

15. D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory 52, 1289–1306 (2006).

16. A. Fannjiang and W. Liao, “Phase retrieval with random phase illumination,” J. Opt. Soc. Am. A 29, 1847–1859 (2012).

17. A. Maiden, G. Morrison, B. Kaulich, A. Gianoncelli, and J. Rodenburg, “Soft X-ray spectromicroscopy using ptychography with randomly phased illumination,” Nat. Commun. 4, 1669 (2013).

18. E. J. Candès, X. Li, and M. Soltanolkotabi, “Phase retrieval from coded diffraction patterns,” Appl. Comput. Harmon. Anal. 39, 277–299 (2014).

19. P. Gao, G. Pedrini, C. Zuo, and W. Osten, “Phase retrieval using spatially modulated illumination,” Opt. Lett. 39, 3615–3618 (2014).

20. A. Suzuki and Y. Takahashi, “Dark-field X-ray ptychography,” Opt. Express 23, 16429–16438 (2015).

21. J. Geng, “Structured-light 3D surface imaging: a tutorial,” Adv. Opt. Photon. 3, 128–160 (2011).

22. M. Saxena, G. Eluru, and S. S. Gorthi, “Structured illumination microscopy,” Adv. Opt. Photon. 7, 241–275 (2015).

23. E. Mudry, K. Belkebir, J. Girard, J. Savatier, E. Le Moal, C. Nicoletti, M. Allain, and A. Sentenac, “Structured illumination microscopy using unknown speckle patterns,” Nat. Photonics 6, 312–315 (2012).

24. R. Horisaki, N. Fukata, and J. Tanida, “A compressive active stereo imaging system with random pattern projection,” Appl. Phys. Express 5, 72501 (2012).

25. H. Matsui, R. Horisaki, and J. Tanida, “Computational structured illumination,” Appl. Opt. 54, 8742–8746 (2015).

26. J. W. Goodman, Introduction to Fourier Optics (McGraw-Hill, 1996).

27. A. M. Maiden and J. M. Rodenburg, “An improved ptychographical phase retrieval algorithm for diffractive imaging,” Ultramicroscopy 109, 1256–1262 (2009).

28. L. Tian, X. Li, K. Ramchandran, and L. Waller, “Multiplexed coded illumination for Fourier Ptychography with an LED array microscope,” Biomed. Opt. Express 5, 2376–2389 (2014).

29. R. W. Gerchberg and W. O. Saxton, “A practical algorithm for the determination of the phase from image and diffraction plane pictures,” Optik 35, 237–246 (1972).

30. J. M. Bioucas-Dias and M. A. T. Figueiredo, “A new TwIST: Two-step iterative shrinkage/thresholding algorithms for image restoration,” IEEE Trans. Image Process. 16, 2992–3004 (2007).

31. M. A. T. Figueiredo, R. D. Nowak, and S. J. Wright, “Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems,” IEEE J. Sel. Top. Signal Process. 1, 586–597 (2007).

32. S. J. Wright, R. D. Nowak, and M. A. T. Figueiredo, “Sparse reconstruction by separable approximation,” IEEE Trans. Signal Process. 57, 2479–2493 (2009).

33. L. I. Rudin, S. Osher, and E. Fatemi, “Nonlinear total variation based noise removal algorithms,” Phys. D 60, 259–268 (1992).

34. R. M. Goldstein, H. A. Zebker, and C. L. Werner, “Satellite radar interferometry: Two-dimensional phase unwrapping,” Radio Sci. 23, 713–720 (1988).
