Optica Publishing Group

Imaging objects through scattering layers and around corners by retrieval of the scattered point spread function

Open Access

Abstract

We demonstrate a high-speed method to image objects through thin scattering media and around corners. The method employs a reference object of known shape to retrieve the speckle-like point spread function of the scatterer. We extract the point spread function of the scatterer from a dynamic scene that includes a static reference object and use it to image the dynamic objects. Sharp images are reconstructed from the transmission through a diffuser and from the reflection off a rough surface. The sharp and clean images reconstructed from single-shot data exemplify the robustness of the method.

© 2017 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

When light passes through, or reflects off, a turbid medium, it is scattered and the information it carries is scrambled. This makes looking through diffusing layers or around corners impossible [1]. Scattering is a problem in many imaging scenarios, for instance atmospheric turbulence in ground-based astronomy, biological tissue in medical imaging, and foggy weather in daily life [2–4]. It is in principle possible to retrieve the image of an object obscured by a scattering medium, as the information is not completely lost. Indeed, it has been demonstrated that a speckle pattern propagated through clear space contains enough information to reconstruct the image of an object [5]. Recently, a variety of approaches have been demonstrated to solve this problem, such as time-of-flight imaging [6], time reversal or phase conjugation [7–14], transmission matrix measurement [15–18], wavefront shaping techniques [19–23], digital holography [24–27], speckle autocorrelation methods [28–34], and other speckle correlation methods [35,36]. The transmission matrix fully characterizes the effect of a scattering medium; once it is measured, the image of the hidden object can be reconstructed. Speckle autocorrelation has been shown to be a viable method for non-invasive imaging through uncharacterized opaque media, within the range of the memory effect [37–39]. The method relies on the Gerchberg–Saxton (GS) algorithm to find the Fourier phase of the object, whose Fourier modulus is obtained from the autocorrelation of the speckle patterns. In wavefront shaping methods, the incident light wave is modulated with a designed spatial phase to generate a focused wave behind the scattering medium. After the wavefront shaping process, objects that are obscured by the scattering medium can be imaged. In a previous demonstration, He et al. realized imaging of an unknown object through a diffuser by exploiting wavefront shaping with a known object [21]. This method first requires an iterative process that shapes the speckle pattern into the reference object, and the reference object must be removed after this calibration. More recently, with the point spread function (PSF) measured before data acquisition [40], Zhuang et al. showed high-speed full-color imaging through a diffuser [41]. Several other deconvolution experiments have been reported [42–44] that require measurement of the PSF in advance. Recently, Antipa et al. demonstrated reconstruction of the light field and digital refocusing from the caustic pattern caused by a phase diffuser [45]. These methods require detailed characterization of the scattering medium, with full access to both sides of the medium, which may not be available in many practical settings. A non-invasive PSF retrieval method was recently demonstrated [46], based on acquiring several speckle images in different planes.

In this paper, we present a single-shot speckle-imaging method to reconstruct incoherently illuminated objects that are hidden by thin scattering media, requiring only the placement of a reference object of known shape. We use the fact that the transmission pattern due to a single point of the reference object is a high-contrast interference pattern similar to laser speckle [47], as long as the coherence length of the light is longer than the typical difference in path lengths of the scattered light. The transmission pattern of an extended incoherently illuminated object is the intensity sum of many such speckle patterns, each due to a single coherence area of the object. The contrast C of the incoherent sum is approximately C = 1/√N, with N the number of equally illuminated independent coherence areas in the object. Although their contrast is lower than that of laser speckle, we refer to such incoherent sums as speckle. If the scattering medium is sufficiently thin, the optical memory effect [37] ensures that the speckle pattern from each coherence area has the same shape and is only displaced [38–40]. In this case one can regard it as the point spread function (PSF) of the scattering medium, and the observed intensity pattern is the convolution of the PSF and the object’s shape. Our imaging method is based on retrieving the scatterer’s PSF through a single-shot deconvolution algorithm and requires no iteration. Moreover, in a dynamic scene the reference speckle does not need a separate acquisition process and can be extracted by averaging the speckle patterns. The proposed method is so robust that reconstruction of a hidden object is possible even with only part of the speckle pattern. We extend the method to non-invasively image an object in the reflection from the rough surface of a metal plate.
Important future applications of our method may be found in security monitoring and bio-medical imaging such as otoscopy and laryngoscopy.
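The 1/√N contrast scaling is easy to verify numerically. The sketch below (our own illustration, not part of the experiment) sums N independent fully developed speckle intensity patterns, modelled by their exponential per-pixel statistics with spatial correlations ignored, and evaluates the contrast C = std/mean:

```python
import numpy as np

rng = np.random.default_rng(0)

def incoherent_sum_contrast(n_areas, n_pixels=200_000):
    """Contrast C = std/mean of an incoherent sum of n_areas
    independent fully developed speckle patterns. Each pattern's
    intensity is exponentially distributed (unit contrast);
    spatial correlations are ignored in this toy model."""
    total = rng.exponential(size=(n_areas, n_pixels)).sum(axis=0)
    return total.std() / total.mean()

for n in (1, 4, 16, 64):
    print(n, incoherent_sum_contrast(n))  # approaches 1/sqrt(n)
```

The sum of n unit-mean exponential variables is Gamma-distributed with contrast exactly 1/√n, which the sample estimate approaches for large pixel counts.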

2. Principle

The experimental setup is shown schematically in Fig. 1. A known object (transmittance plate of a letter “H”) [Fig. 2(a)] is illuminated by an incoherent beam from a light emitting diode (LED). The light is transmitted through the scattering medium, an optical diffuser, which scrambles the wavefront, and a speckle pattern is observed on the CCD (charge coupled device). Under incoherent illumination, the speckle pattern S_R of the reference object can be expressed as the convolution of its intensity pattern I_R and the speckle-like PSF of the diffuser,

S_R(x, y) = I_R(x, y) ∗ PSF(x, y),  (1)
where S_R is the speckle pattern resulting from the known reference object and ∗ denotes the convolution product. When an unknown test object with intensity pattern I_T is introduced next to the reference object, the speckle pattern S_sum on the CCD is the sum of the intensities of the individual speckle patterns:
S_sum = S_R + S_T = (I_R + I_T) ∗ PSF,  (2)
where S_T is the speckle pattern of the unknown object “T”. For brevity, we have dropped the coordinates (x, y) from the equations. The relation between the two speckle patterns can be exploited to retrieve the image of the unknown object “T”. A convolution in the spatial domain is a multiplication in the spatial frequency domain, which yields
F{S_R} = F{I_R} × F{PSF},  (3)
F{S_sum} = F{I_R + I_T} × F{PSF},  (4)
where “F” represents the Fourier transform and × indicates multiplication. The reconstructed intensity I_D is found from a measured speckle pattern S_M as
I_D = F⁻¹{F{I_R} × F{S_M} / F{S_R}},  (5)
where “F⁻¹” stands for the inverse Fourier transform. We note that the reference object is reconstructed along with the test object unless it is removed before recording the speckle pattern. In Eq. (5) the division in the frequency domain is implemented as an effective Wiener deconvolution, which is more stable against noise [48]. This procedure amounts to an implicit retrieval of the PSF. In principle the same result is obtained through explicit retrieval of the PSF, which requires one extra deconvolution step and hence is slightly less robust. It can be seen from Eq. (5) that spatial frequencies for which F{I_R} = 0 are not reconstructed. Hence the spatial frequency content of the reference object determines the resolution of the reconstruction.
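Equation (5) is straightforward to prototype. The sketch below is our own illustration on synthetic data: a random pattern stands in for the speckle-like PSF and two hypothetical block-shaped objects play the roles of “H” and “T”. The frequency-domain division is implemented as a regularized Wiener filter, so frequencies where F{S_R} is small do not blow up (eps is a hand-chosen noise-level parameter):

```python
import numpy as np

rng = np.random.default_rng(1)

def conv2(a, b):
    """Cyclic 2D convolution via FFT (all patterns are real)."""
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

def reconstruct(i_ref, s_ref, s_meas, eps=1e-4):
    """Eq. (5): I_D = F^-1{ F{I_R} x F{S_M} / F{S_R} }, with the
    division done as a Wiener filter; eps sets the assumed noise
    floor relative to the mean spectral power of S_R."""
    Fi, Fr, Fm = np.fft.fft2(i_ref), np.fft.fft2(s_ref), np.fft.fft2(s_meas)
    wiener = np.conj(Fr) / (np.abs(Fr) ** 2 + eps * np.mean(np.abs(Fr) ** 2))
    return np.abs(np.fft.ifft2(Fi * Fm * wiener))

# Synthetic scene: a vertical reference bar plus a hidden horizontal bar.
n = 128
psf = rng.random((n, n))                       # speckle-like PSF stand-in
i_ref = np.zeros((n, n)); i_ref[40:60, 40:44] = 1.0
i_test = np.zeros((n, n)); i_test[70:74, 60:90] = 1.0
s_ref = conv2(i_ref, psf)                      # reference speckle
s_sum = conv2(i_ref + i_test, psf)             # speckle of both objects

i_d = reconstruct(i_ref, s_ref, s_sum)         # recovers i_ref + i_test
```

As the text notes, the reconstruction contains both objects, filtered by the spatial frequency content of the reference: frequencies near the zeros of F{I_R} are suppressed by the Wiener regularization rather than amplified.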

Fig. 1 Experimental setup.

Fig. 2 (a) The reference object. The scale bar is 500 µm and applies to all images. (b) The speckle pattern of the reference object. (c) The reference object and the unknown object. (d) The corresponding speckle pattern. (e) The reconstructed image.

3. Experiments

In the setup shown in Fig. 1, a commercial LED (λ = 630–650 nm) is used as the light source. The light passes through the object (transmittance plate) before impinging on the diffuser (a turbid plastic sheet). A CCD (Basler, dark-60 µm) captures the resulting speckle pattern. The distance from the object plane to the diffuser surface (d1) is 190 mm and that from the CCD to the diffuser surface (d2) is 100 mm. The effective magnification of the scattering lens is M_scat = d2/d1.

The image reconstruction process is depicted in Fig. 2. The speckle pattern of a reference object (letter “H” shown in Fig. 2(a)) is shown in Fig. 2(b). A test object, here a letter “T” [Fig. 2(c)], is then added adjacent to the reference object, resulting in a different speckle pattern, shown in Fig. 2(d). Using the reference object to retrieve the PSF, the image of the letters behind the diffuser is recovered according to Eq. (5). The reconstructed result is shown in Fig. 2(e). The reconstruction is performed in parallel on an NVidia GTX 970 GPU in approximately 25 ms, which is fast enough for real-time imaging applications.

The reconstruction depends on the magnification of the system. In Fig. 3 we show the reconstruction results versus the magnification assumed in the deconvolution process. When the correct magnification is used (here, M_scat = 0.53), the reconstructed images are distinct and clear. A deviation from the actual magnification factor leads to increased background noise and artefacts. In systems where the distance d1 of the reference object is unknown, one can generate several reference images with different estimated system magnifications. The clearest reconstruction then indicates the correct system magnification, and thereby yields the distance d1 of the reference object.
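This magnification search can be automated: render the known reference shape at several candidate magnifications, deconvolve with each, and score the results with a sharpness metric. The sketch below is our own illustration on synthetic data; the nearest-neighbour rescaling and the L2/L1 sparsity score are assumed choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 128

def conv2(a, b):
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

def rescale(img, m):
    """Nearest-neighbour zoom by factor m about the image centre."""
    c = img.shape[0] // 2
    idx = np.round((np.arange(img.shape[0]) - c) / m + c).astype(int)
    ok = (idx >= 0) & (idx < img.shape[0])
    out = np.zeros_like(img)
    out[np.ix_(ok, ok)] = img[np.ix_(idx[ok], idx[ok])]
    return out

def reconstruct(i_ref, s_ref, s_meas, eps=1e-4):
    """Eq. (5) with Wiener-regularized division (cf. section 2)."""
    Fi, Fr, Fm = np.fft.fft2(i_ref), np.fft.fft2(s_ref), np.fft.fft2(s_meas)
    w = np.conj(Fr) / (np.abs(Fr) ** 2 + eps * np.mean(np.abs(Fr) ** 2))
    return np.abs(np.fft.ifft2(Fi * Fm * w))

# Ground truth: the reference square is imaged at magnification 0.5.
ref_shape = np.zeros((n, n)); ref_shape[54:74, 54:74] = 1.0
i_test = np.zeros((n, n)); i_test[90:96, 30:50] = 1.0
psf = rng.random((n, n))
s_ref = conv2(rescale(ref_shape, 0.5), psf)
s_sum = conv2(rescale(ref_shape, 0.5) + i_test, psf)

def sharpness(img):
    """L2/L1 ratio: higher for sparse, artefact-free reconstructions."""
    return np.sqrt((img ** 2).sum()) / img.sum()

scores = {m: sharpness(reconstruct(rescale(ref_shape, m), s_ref, s_sum))
          for m in (0.3, 0.4, 0.5, 0.6, 0.7)}
best = max(scores, key=scores.get)   # expected to be 0.5
```

A wrong assumed magnification mismatches the numerator F{I_R} against the measured S_R, spreading the reconstruction and lowering its sparsity score, so the maximum singles out the correct value.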

Fig. 3 Reconstructed images of the hidden object “T” versus assumed magnification factor. (a) The image of the transmittance plate. The magnifications of the “H” assumed in the deconvolution are (b) 0.3, (c) 0.4, (d) 0.53 (the experimental value), (e) 0.6, (f) 0.7.

In our situation, where the PSF is broader than the object, the speckle pattern has holographic properties in the sense that only a fraction of the recorded pattern is needed to reconstruct an image. Figure 4 depicts reconstructions based on partial data. The results obtained with one half and one quarter of the speckle image are shown in Figs. 4(c) and 4(f), respectively. As expected, the signal-to-noise ratio degrades as the fraction of the image used in the reconstruction decreases.

Fig. 4 Reconstructed images from part of the speckle pattern. (a) Half of the speckle pattern of the reference object “H”. (b) Half of the speckle pattern of the unknown object “T” and the reference object. (c) The corresponding reconstructed image. (d) A quarter of the speckle pattern of the reference object. (e) A quarter of the speckle pattern of the combined objects. (f) The corresponding reconstructed image.

We measure the field of view (FOV) and the depth of field (DOF) of our method by reconstructing a 50-µm pinhole object. To measure the field of view, a reference object “H” is first placed in the object plane and its speckle pattern is recorded. The reference object is then replaced by the pinhole, which is translated along the horizontal axis while the corresponding speckle patterns are recorded. The CCD and the object plane are placed close to the diffuser (Newport 10° light shaping diffuser) so that the speckle pattern is fully recorded for every position of the pinhole (d1 = 80 mm and d2 = 26 mm). The intensity of the pinhole in the reconstructed images drops gradually with displacement. The full width at half maximum (FWHM) of the intensity curve is 8 mm, corresponding to a FOV of 100 mrad. Figure 5(a) shows the FOV measurement. To measure the DOF, the pinhole is translated along the optical axis and the corresponding speckle images are recorded. The intensity of the pinhole in the reconstructed images is plotted in Fig. 5(b), showing a DOF of 20 mm. When we reconstruct an unknown object away from the reference plane, the quality of the reconstructed images degrades and the background noise increases [Fig. 5(c)]. This gives our method depth-sectioning capability [41, 42]. The depth of field is expected to depend on the effective NA, which in our setup is restricted by the small angular spread of the light source. In contrast, phase retrieval methods that effectively use an object as its own reference [12, 29] have a semi-infinite DOF, as long as the object is far enough from the scattering layer.

Fig. 5 The measurement of FOV and DOF. (a) The intensity of the reconstructed images of the pinhole moved in the lateral direction and the fitted curve of the memory-effect intensity correlations [38]. The value used for the effective thickness of the diffuser is 4.2 µm, leading to a FWHM of 4 mm. (b) The intensity of the reconstructed images of the pinhole versus axial position. The Gaussian fit has a FWHM of 20 mm. (c) Reconstructed images of the object located at different planes. A negative displacement corresponds to a larger distance between the object and the diffuser.

If a point-like reference source smaller than a coherence area is available, measuring the PSF becomes conceptually very simple, as the transmission pattern of a point source is directly the PSF, with a contrast of order 1. However, if the illumination is incoherent (e.g. fluorescence), a single point source gives rise to a weak signal. For the same illumination conditions and exposure time (1 ms here), the speckle pattern of an extended reference object is much brighter. An example is shown in Fig. 6, where the experiment is repeated with a small amount of background light. The background light is hardly noticeable in the speckle pattern of the extended reference object in Fig. 6(a), and a good-quality image is obtained with our method, see Fig. 6(c). However, under the same illumination and exposure conditions the background light overwhelms the speckle of the point source, as shown in Fig. 6(b). As a result the deconvolution fails, yielding only noise, as demonstrated in Fig. 6(d). Hence an extended reference object offers a clear advantage when illumination is weak or spurious backgrounds are present.

Fig. 6 The influence of background light. (a) The intensity-normalized speckle pattern of the reference object. (b) Speckle pattern of a 50-µm pinhole on the same spatial scale. The exposure time is 1 ms in both cases. (c) Reconstructed image using the speckle of the reference object. (d) Reconstruction of the same object using the speckle of the point source (same spatial scale).

4. Static reference speckle extraction from a dynamic scene and subsequent dynamic image reconstruction

In many speckle-based imaging methods, motion of the test object is a problem. In contrast, in our deconvolution method it is highly advantageous if the test object is dynamic: the reference pattern is then extracted from the time-averaged speckle pattern of the dynamic scene, and no separate reference measurement is needed.

The experimental setup for dynamic imaging is shown in Fig. 7. A digital projector (with its lens removed) is used to generate intensity objects at the plane of its intensity-mode liquid crystal screen. The diffuser (Newport 10° light shaping diffuser) scatters the light from the projector, and a lens system composed of a 4× objective and a tube lens collects the scattered light for a better signal-to-noise ratio (SNR) [41]. The collection lens system does not image the diffuser onto the CCD. The distance from the projector plane to the diffuser surface is 20.7 cm and that from the objective lens to the diffuser surface is 0.7 cm. The total magnification of the imaging system was measured to be M = 0.1 using the method of Fig. 3.

Fig. 7 The experimental setup for imaging a dynamic object through a diffuser.

The projector displays an image of a small disk moving around a bifurcation [Fig. 8(a)], a crude model of a fluorescent cell in motion around a static object. The CCD captures a series of speckle images of the scene with different positions of the disk. By averaging this series of speckle patterns, the speckle pattern of the reference object is obtained [Fig. 8(b)]. With the speckle pattern of the reference object, the image of the moving object is retrieved from the corresponding speckle pattern by applying Eq. (5). Figures 8(f)–8(h) show the reconstructed images of the moving object at different locations. A video of dynamic imaging through the diffuser can be found in the supplementary Visualization 1. The number of frames needed to obtain the speckle pattern of the reference object depends on the complexity of the dynamic scene. For the model in Fig. 8, 360 frames were used; however, an acceptable SNR was already obtained with 100 images. The method could even work with time-varying scattering media, as long as the medium changes slowly enough that a sufficient number of reference images can be taken before the PSF changes appreciably. Motion of the test objects themselves during the exposure time of the camera leads to motion blur comparable to that of a normal camera.
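The time-averaging step can be sketched as follows (our own synthetic illustration: a random pattern stands in for the PSF and a small wandering square for the moving object). The contribution of the moving object averages out to a nearly flat background, so the mean frame approximates the speckle pattern of the static reference:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 128

def conv2(a, b):
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

psf = rng.random((n, n))                              # speckle-like PSF stand-in
i_ref = np.zeros((n, n)); i_ref[40:80, 62:66] = 1.0   # static reference object

# Record frames of the dynamic scene: static reference + moving square.
frames = []
for _ in range(200):
    y, x = rng.integers(0, n - 4, size=2)             # random disk position
    scene = i_ref.copy()
    scene[y:y + 4, x:x + 4] += 1.0
    frames.append(conv2(scene, psf))

s_ref_est = np.mean(frames, axis=0)   # moving part averages toward a flat offset
s_ref_true = conv2(i_ref, psf)

corr = np.corrcoef(s_ref_est.ravel(), s_ref_true.ravel())[0, 1]
```

The residual of the averaged moving object mainly perturbs the lowest spatial frequencies, so s_ref_est can be used in place of S_R in Eq. (5); with more frames the approximation improves.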

Fig. 8 Imaging a moving object (a disk displayed on the projector) through a diffuser. (a) The reference object (bifurcation) and the disk. (b) The approximated speckle pattern of the reference object (mean value of 360 speckle images with different positions of the disk). (c), (d), (e): Three original images with the disk in different positions. (f), (g), (h): The corresponding reconstructed images. The scale bar in (a) is 3 mm and also applies to (c), (d) and (e). The scale bar in (b) is 0.3 mm and also applies to (f), (g) and (h). (See Visualization 1 for an animation.)

The key to successful implementation of the dynamic-scene-based reference speckle extraction is the presence of a static reference object in the scene. The reconstruction algorithm only needs to know the shape of the reference object. An exciting but as yet speculative application is imaging near an implanted object in tissue [49]: the implanted object remains static while fluorescently marked objects or cells that flow around it are imaged.

5. Looking around corners using nonspecular reflections

In many situations, such as laryngoscopy or otoscopy, it is useful to extract an image from a reflection off a nonspecular surface. Here we demonstrate that PSF retrieval recovers images from the reflection off a nonspecular test sample, a rough metal plate. We use a 4f conjugated setup that images the surface of a mirror onto that of the metal plate, as shown in Fig. 9. First, a test object [Fig. 10(a)] is illuminated with LED light and the transmitted light is incident on the metal plate [Fig. 10(c)] at an angle. The speckle pattern due to the scattered light is acquired by the CCD and is shown in Fig. 10(b). Afterwards, another beam is used to illuminate the reference object “H” at normal incidence, such that its diffraction pattern on the metal plate overlaps with that of the test object. The corresponding speckle pattern captured on the CCD is shown in Fig. 10(e).

Fig. 9 Imaging an object via reflection off a metal plate. The mirror and the metal plate are conjugated by the 4f system, whose lenses have a focal length of 150 mm. The dotted line represents the light that illuminates the unknown object and is reflected by the metal plate.

Fig. 10 Imaging in the reflection from a rough surface. (a) The unknown object “T”. The scale bar is 500 µm and applies to all other images. (b) The speckle pattern of the object “T”. (c) The metal plate, which has a rough surface. (d) The reference object “H” and (e) the corresponding speckle pattern. (f) The reconstructed image of the unknown object “T”.

Compared to the previous cases (sections 3 and 4), here the known and unknown objects are separated in space. The two speckle patterns and the prior knowledge of the reference object are used to retrieve the unknown object, even though the unknown object is presented at an angle (approx. 1 degree) different from that of the reference. Figure 10(f) shows the retrieved image with a magnification factor of 0.345. Note that here the reference object is no longer required to be close to the test object, which greatly improves the practicality of the method. The method is limited to small incidence angles, as the reflected light must still pass through the lens system and reach the CCD; if the angle is too large, no light enters the lens system. This limitation of our apparatus could be mitigated by tilting the lens system to align with the near-specular reflected light.

6. Discussion

Our method uses the speckle-like pattern produced by a known reference object to retrieve the point spread function of the scattering medium. The reference object does not need to be point-like and it does not need to be present separately or to be removed before an image of another object is reconstructed. Compared to other methods like feedback-based wavefront shaping and transmission matrix methods, the speckle point spread function retrieval method is robust, high speed and requires a much simpler setup as it can work with incoherent light. Compared to time-of-flight methods, our method requires less specific hardware and can form an image in a single exposure [6].

However, the point spread function is valid only within the angular range of the optical memory effect, and consequently the corresponding FOV is rather small and depends on the illuminated area on the diffusing medium. The FOV may be enlarged by shrinking and scanning the illumination light [21], or it could be enlarged computationally if a moving object is used as the reference. The signal-to-noise ratio of the method is sensitive to the complexity of the reference and unknown objects: very complex and large objects give rise to a low-contrast speckle image. In Ref. [29] the influence of noise was studied for an imaging method based on iterative phase retrieval; it was found that if the signal-to-noise ratio falls below a certain threshold the iterations do not converge. In our method the signal and any noise are propagated through linear filters without any iteration, and the reconstructed image gradually becomes noisier as the complexity of the unknown object increases, without any threshold. However, our method is not linear in the reference speckle, which appears in the denominator of Eq. (5). Reconstruction is impossible if noise overwhelms the reference, leading to spurious zeros in the denominator; this corresponds to the situation shown in Fig. 6(d).

7. Conclusion

A method is presented to image objects behind or around a scattering medium based on retrieval of the point spread function by speckle deconvolution. The proposed method relies on the prior knowledge of the shape of a reference object, and makes use of the memory effect to faithfully reconstruct the image of objects near the reference object. The recovery approach is single-shot and high speed, with calculations performed on a millisecond time scale. Moreover, the speckle from the static reference object can be extracted from a dynamic scene. Thus the proposed method has the potential to image moving objects behind a scattering medium. This principle could give rise to applications in tissue optics, when reference objects whose shape is known, for example through non-optical imaging, are present. Moreover, we can use the same principle to image through reflections in non-specular surfaces, where the PSF is measured at a different angle from the object, opening up new possibilities for imaging in confined spaces.

Funding

Netherlands Organization for Scientific Research (Vici 68047618); National Natural Science Foundation of China (11534017 and 61575223); China Scholarship Council (201606380037).

Acknowledgments

We thank Jeroen Bosch, Siddharth Ghosh and Pritam Pai for helpful discussions.

References and links

1. I. Freund, “Looking through walls and around corners,” Physica A 168, 49–65 (1990).

2. A. P. Mosk, A. Lagendijk, G. Lerosey, and M. Fink, “Controlling waves in space and time for imaging and focusing in complex media,” Nat. Photonics 6, 283–292 (2012).

3. V. Tuchin, Tissue Optics, (SPIE, 2007).

4. M. Gu, X. Gan, and X. Deng, Image Reconstruction. Microscopic Imaging through Turbid Media (Springer Berlin Heidelberg, 2015).

5. P. S. Idell, J. D. Gonglewski, D. G. Voelz, and J. Knopp, “Image synthesis from nonimaged laser-speckle patterns: experimental verification,” Opt. Lett. 14(3), 154–156 (1989). [PubMed]  

6. A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. G. Bawendi, and R. Raskar, “Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging,” Nat. Commun. 3, 745 (2012). [PubMed]  

7. Z. Yaqoob, D. Psaltis, M. S. Feld, and C. Yang, “Optical Phase Conjugation for Turbidity Suppression in Biological Samples,” Nat. Photonics 2(2), 110–115 (2008). [PubMed]  

8. C.-L. Hsieh, Y. Pu, R. Grange, G. Laporte, and D. Psaltis, “Imaging through turbid layers by scanning the phase conjugated second harmonic radiation from a nanoparticle,” Opt. Express 18(20), 20723–20731 (2010). [PubMed]  

9. X. Yang, C.-L. Hsieh, Y. Pu, and D. Psaltis, “Three-dimensional scanning microscopy through thin turbid media,” Opt. Express 20(3), 2500–2506 (2012). [PubMed]  

10. K. Si, R. Fiolka, and M. Cui, “Fluorescence imaging beyond the ballistic regime by ultrasound pulse guided digital phase conjugation,” Nat. Photonics 6(10), 657–661 (2012). [PubMed]  

11. Y. M. Wang, B. Judkewitz, C. A. Dimarzio, and C. Yang, “Deep-tissue focal fluorescence imaging with digitally time-reversed ultrasound-encoded light,” Nat. Commun. 3, 928 (2012). [PubMed]  

12. C. Ma, X. Xu, Y. Liu, and L. V. Wang, “Time-reversed adapted-perturbation (TRAP) optical focusing onto dynamic objects inside scattering media,” Nat. Photonics 8(12), 931–936 (2014). [PubMed]  

13. K. Wu, Q. Cheng, Y. Shi, H. Wang, and G. P. Wang, “Hiding scattering layers for noninvasive imaging of hidden objects,” Sci. Rep. 5, 8375 (2015). [PubMed]  

14. I. N. Papadopoulos, J. Jouhanneau, J. A. Poulet, and B. Judkewitz, “Scattering compensation by focus scanning holographic aberration probing (F-SHARP),” Nat. Photonics 11, 116–123 (2017).

15. S. Popoff, G. Lerosey, M. Fink, A. C. Boccara, and S. Gigan, “Image transmission through an opaque material,” Nat. Commun. 1, 81 (2010). [PubMed]  

16. S. M. Popoff, G. Lerosey, R. Carminati, M. Fink, A. C. Boccara, and S. Gigan, “Measuring the transmission matrix in optics: an approach to the study and control of light propagation in disordered media,” Phys. Rev. Lett. 104(10), 100601 (2010). [PubMed]  

17. Y. Choi, C. Yoon, M. Kim, T. D. Yang, C. Fang-Yen, R. R. Dasari, K. J. Lee, and W. Choi, “Scanner-Free and Wide-Field Endoscopic Imaging by Using a Single Multimode Optical Fiber,” Phys. Rev. Lett. 109(20), 203901 (2012). [PubMed]  

18. M. Kim, W. Choi, Y. Choi, C. Yoon, and W. Choi, “Transmission matrix of a scattering medium and its applications in biophotonics,” Opt. Express 23(10), 12648–12668 (2015). [PubMed]  

19. I. M. Vellekoop and A. P. Mosk, “Focusing coherent light through opaque strongly scattering media,” Opt. Lett. 32(16), 2309–2311 (2007). [PubMed]  

20. O. Katz, E. Small, and Y. Silberberg, “Looking around corners and through thin turbid layers in real time with scattered incoherent light,” Nat. Photonics 6, 549–553 (2012).

21. H. He, Y. Guan, and J. Zhou, “Image restoration through thin turbid layers by correlation with a known object,” Opt. Express 21(10), 12539–12545 (2013). [PubMed]  

22. I. M. Vellekoop, “Feedback-based wavefront shaping,” Opt. Express 23(9), 12189–12206 (2015). [PubMed]  

23. R. Horstmeyer, H. Ruan, and C. Yang, “Guidestar-assisted wavefront-shaping methods for focusing light into biological tissue,” Nat. Photonics 9, 563–571 (2015). [PubMed]  

24. W. Harm, C. Roider, A. Jesacher, S. Bernet, and M. Ritsch-Marte, “Lensless imaging through thin diffusive media,” Opt. Express 22(18), 22146–22156 (2014). [PubMed]  

25. A. K. Singh, D. N. Naik, G. Pedrini, M. Takeda, and W. Osten, “Looking through a diffuser and around an opaque surface: a holographic approach,” Opt. Express 22(7), 7694–7701 (2014). [PubMed]  

26. S. Li and J. Zhong, “Dynamic imaging through turbid media based on digital holography,” J. Opt. Soc. Am. A 31(3), 480–486 (2014). [PubMed]  

27. A. K. Singh, D. N. Naik, G. Pedrini, M. Takeda, and W. Osten, “Exploiting scattering media for exploring 3D objects,” Light Sci. Appl. 6, e16219 (2017).

28. J. Bertolotti, E. G. van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491(7423), 232–234 (2012). [PubMed]  

29. O. Katz, P. Heidmann, M. Fink, and S. Gigan, “Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations,” Nat. Photonics 8, 784–790 (2014).

30. K. T. Takasaki and J. W. Fleischer, “Phase-space measurement for depth-resolved memory-effect imaging,” Opt. Express 22(25), 31426–31433 (2014).

31. X. Yang, Y. Pu, and D. Psaltis, “Imaging blood cells through scattering biological tissue using speckle scanning microscopy,” Opt. Express 22(3), 3405–3413 (2014).

32. E. Edrei and G. Scarcelli, “Optical imaging through dynamic turbid media using the Fourier-domain shower-curtain effect,” Optica 3(1), 71–74 (2016).

33. T. Wu, O. Katz, X. Shao, and S. Gigan, “Single-shot diffraction-limited imaging through scattering layers via bispectrum analysis,” Opt. Lett. 41(21), 5003–5006 (2016).

34. M. Cua, E. H. Zhou, and C. Yang, “Imaging moving targets through scattering media,” Opt. Express 25(4), 3935–3945 (2017).

35. J. A. Newman and K. J. Webb, “Imaging optical fields through heavily scattering media,” Phys. Rev. Lett. 113(26), 263903 (2014).

36. J. A. Newman, Q. Luo, and K. J. Webb, “Imaging Hidden Objects with Spatial Speckle Intensity Correlations over Object Position,” Phys. Rev. Lett. 116(7), 073902 (2016).

37. I. Freund, M. Rosenbluh, and S. Feng, “Memory Effects in Propagation of Optical Waves Through Disordered Media,” Phys. Rev. Lett. 61(20), 2328–2331 (1988).

38. S. Feng, C. Kane, P. A. Lee, and A. D. Stone, “Correlations and Fluctuations of Coherent Wave Transmission through Disordered Media,” Phys. Rev. Lett. 61(7), 834–837 (1988).

39. S. Schott, J. Bertolotti, J. F. Léger, L. Bourdieu, and S. Gigan, “Characterization of the angular memory effect of scattered light in biological tissues,” Opt. Express 23(10), 13505–13516 (2015).

40. X. Xie, Y. Chen, K. Yang, and J. Zhou, “Harnessing the point-spread function for high-resolution far-field optical microscopy,” Phys. Rev. Lett. 113(26), 263901 (2014).

41. H. Zhuang, H. He, X. Xie, and J. Zhou, “High speed color imaging through scattering media with a large field of view,” Sci. Rep. 6, 32696 (2016).

42. E. Edrei and G. Scarcelli, “Memory-effect based deconvolution microscopy for super-resolution imaging through scattering media,” Sci. Rep. 6, 33558 (2016).

43. S. Lee, K. Lee, S. Shin, and Y. Park, “Generalized image deconvolution by exploiting spatially variant point spread functions,” arXiv:1703.08974 (2017).

44. S. Sahoo, D. Tang, and C. Dang, “Single-shot multispectral imaging with a monochromatic camera,” arXiv:1707.09453 (2017).

Supplementary Material (1)

Visualization 1: Visualization of the reconstruction depicted in Fig. 6


Figures (10)

Fig. 1 Experimental setup.
Fig. 2 (a) The reference object. The scale bar is 500 µm and applies to all images. (b) The speckle pattern of the reference object. (c) The reference object and the unknown object. (d) The corresponding speckle pattern. (e) The reconstructed image.
Fig. 3 Reconstructed images of the hidden object “T” versus the assumed magnification factor. (a) The image of the transmittance plate. The magnifications of the “H” assumed in the deconvolution are (b) 0.3, (c) 0.4, (d) 0.53 (the experimental value), (e) 0.6, and (f) 0.7.
Fig. 4 Reconstructed images from part of the speckle pattern. (a) Half of the speckle pattern of the reference object “H”. (b) The speckle pattern of the unknown object “T” and the reference object. (c) The corresponding reconstructed image. (d) A quarter of the speckle pattern of the reference object. (e) A quarter of the speckle pattern of the combined objects. (f) The corresponding reconstructed image.
Fig. 5 The measurement of the FOV and DOF. (a) The intensity of the reconstructed images of the pinhole moved in the lateral direction, together with the fitted curve of the memory-effect intensity correlations [38]. The value used for the effective thickness of the diffuser is 4.2 µm, leading to a FWHM of 4 mm. (b) The intensity of the reconstructed images of the pinhole versus axial position. The Gaussian fit has a FWHM of 20 mm. (c) Reconstructed images of the object located at different planes. A negative displacement corresponds to a larger distance between the object and the diffuser.
Fig. 6 The influence of background light. (a) The intensity-normalized speckle pattern of the reference object. (b) The speckle pattern of a 50-µm pinhole on the same spatial scale. The exposure time is 1 ms in both cases. (c) The reconstructed image using the speckle of the reference object. (d) The reconstruction of the same object using the speckle of the point source (same spatial scale).
Fig. 7 The experimental setup for imaging a dynamic object through a diffuser.
Fig. 8 Imaging a moving object (a disk displayed on the projector) through a diffuser. (a) The reference object (bifurcation) and the disk. (b) The approximated speckle pattern of the reference object (mean of 360 speckle images with the disk in different positions). (c), (d), (e) Three original images with the disk in different positions. (f), (g), (h) The corresponding reconstructed images. The scale bar in (a) is 3 mm and also applies to (c), (d), and (e). The scale bar in (b) is 0.3 mm and also applies to (f), (g), and (h). (See Visualization 1 for an animation.)
Fig. 9 Imaging an object around a metal plate. The mirror and the metal plate are conjugated by the 4f systems, in which the focal length of the lenses is 150 mm. The dotted line represents the light that illuminates the unknown object and is reflected by the metal plate.
Fig. 10 Imaging in reflection from a rough surface. (a) The unknown object “T”. The scale bar is 500 µm and applies to all images. (b) The speckle pattern of the object “T”. (c) The metal plate, whose surface is rather rough. (d) The reference object “H” and (e) its corresponding speckle pattern. (f) The reconstructed image of the unknown object “T”.

Equations (5)


$$S_R = I_R(x,y) \ast PSF(x,y),$$

$$S_{sum} = S_R + S_T = \left( I_R + I_T \right) \ast PSF,$$

$$\mathcal{F}\{S_R\} = \mathcal{F}\{I_R\} \times \mathcal{F}\{PSF\},$$

$$\mathcal{F}\{S_{sum}\} = \mathcal{F}\{I_R + I_T\} \times \mathcal{F}\{PSF\},$$

$$I_D = \mathcal{F}^{-1}\left\{ \frac{\mathcal{F}\{I_R\} \times \mathcal{F}\{S_M\}}{\mathcal{F}\{S_R\}} \right\},$$
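The deconvolution chain above can be checked numerically on synthetic data. The NumPy sketch below is only an illustration of the algebra, not the paper's implementation: the random reference object and PSF, the interpretation of the measured speckle S_M as the combined speckle S_sum, and the small regularization term `eps` in the spectral division are all assumptions of this sketch.

```python
import numpy as np

def fft_convolve(img, psf):
    # Circular convolution via FFT: speckle = object convolved with PSF.
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)))

def retrieve_scene(I_R, S_R, S_M, eps=1e-6):
    # Regularized form of I_D = F^-1{ F{I_R} x F{S_M} / F{S_R} }.
    # The Tikhonov-style term eps (an assumption of this sketch)
    # stabilizes the division where F{S_R} is close to zero.
    F_SR = np.fft.fft2(S_R)
    H = np.conj(F_SR) / (np.abs(F_SR) ** 2 + eps)
    I_D = np.fft.ifft2(np.fft.fft2(I_R) * np.fft.fft2(S_M) * H)
    return np.real(I_D)

rng = np.random.default_rng(0)
n = 64
I_R = rng.random((n, n))            # synthetic reference object (random, so
                                    # its spectrum has no exact zeros)
I_T = np.zeros((n, n))
I_T[28:36, 10:20] = 1.0             # synthetic "unknown" object
psf = rng.random((n, n))            # speckle-like PSF of the scatterer

S_R = fft_convolve(I_R, psf)        # speckle of the reference object alone
S_M = fft_convolve(I_R + I_T, psf)  # measured speckle of the full scene

I_D = retrieve_scene(I_R, S_R, S_M)
# The deconvolution should recover the full scene I_R + I_T.
err = np.linalg.norm(I_D - (I_R + I_T)) / np.linalg.norm(I_R + I_T)
```

The division by F{S_R} cancels the unknown F{PSF}, which is why a known reference object suffices to image the rest of the scene; the reconstruction error `err` stays small as long as the spectrum of the reference speckle has no deep nulls.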