
Imaging moving targets through scattering media

Open Access

Abstract

Optical microscopy in complex, inhomogeneous media is challenging due to the presence of multiply scattered light that limits the depths at which diffraction-limited resolution can be achieved. One way to circumvent the degradation in resolution is to use speckle-correlation-based imaging (SCI) techniques, which permit imaging of objects inside scattering media at diffraction-limited resolution. However, SCI methods are currently limited to imaging sparsely tagged objects in a dark-field scenario. In this work, we demonstrate the ability to image hidden, moving objects in a bright-field scenario. By using a deterministic phase modulator to generate a spatially incoherent light source, the background contribution can be kept constant between acquisitions and subtracted out. In this way, the signal arising from the object can be isolated, and the object can be reconstructed with high fidelity. With the ability to effectively isolate the object signal, our work is not limited to imaging bright objects in the dark-field case, but also works in bright-field scenarios, with non-emitting objects.

© 2017 Optical Society of America

1. Introduction

Optical imaging is challenging in turbid media, where multiple scattering of light causes a degradation of resolution and limits the depths at which we can reliably image (< 1 mm in biological tissue) without having to resort to destructive optical clearing or sectioning techniques [1]. Many approaches currently exist to filter out the multiply scattered light and detect only the unscattered (ballistic) or minimally scattered photons. These include methods such as time and coherence gating, which separate the ballistic photons from the scattered photons based on their transit time to the detector [2, 3]; methods that rely on preserving the initial angular momentum or polarization modulation [4–7]; and methods that rely on spatial confinement, such as confocal and multi-photon microscopy [1, 8]. An issue with methods that rely on detecting only the minimally scattered photons is the maximum achievable depth of penetration, since the chance of detecting a quasi-ballistic photon decreases exponentially with increasing depth.

Instead of rejecting the scattered photons, other approaches have aimed to take advantage of the information inherent within the detected speckle field that arises from multiply scattered light. Wavefront shaping (WFS) techniques exploit the principles of time-reversal to undo the effect of scattering and enable focusing of light in thick, scattering media [9–12]. However, WFS usually requires long acquisition times to measure the transmission matrix, and/or the presence of a guide star. On the other hand, speckle-correlation-based imaging (SCI) approaches exploit the angular correlations inherent within the scattering process to reconstruct the hidden object and do not need long acquisition times or a guide star [13,14]. However, SCI methods are limited to working in dark-field scenarios, with sparsely-tagged objects [14], since the detected light must consist solely of light arising from the object.

In this work, we demonstrate imaging of hidden moving objects in a bright-field scenario by leveraging the temporal correlations inherent in the scattering process to separate and remove the dominating contribution from the background [15, 16]. To create a spatially incoherent light source, a spatial light modulator (SLM) was used to apply the same set of random phase patterns during different acquisitions. The use of a deterministic phase modulator ensured that the background contribution remained constant across the detected images. By removing the background component, the speckle pattern from the object was isolated, and the object was reconstructed with high fidelity. Using this technique, we experimentally demonstrate successful recovery of moving objects that would otherwise be obscured by scattering media.

2. Principle

Figure 1 presents an overview of our system. A moving object, hidden at a distance u behind a scattering medium, is illuminated using a spatially incoherent, narrow-band light source. The scattered light is detected by a high-resolution camera that is placed at a distance v from the scattering medium.

Fig. 1 Principle behind non-invasive imaging of obscured moving objects. A) A spatially incoherent light source illuminates a moving object hidden behind a visually opaque turbid medium. The resultant speckle field is captured by a camera sensor. B) Speckle images are acquired by the camera sensor at different times, with the object moving between the captures. The scattering media prevents us from resolving the object. C) The hidden object can be retrieved from the seemingly random speckle images by taking advantage of inherent angular correlations in the scattering pattern. i) Each captured image In consists of a background, B, minus the imaged object, where the imaged object is the convolution of the PSF of the scattering media, S, and the object pattern, O. ii) Although the background signal dominates over the object, it can be subtracted out by taking the difference between the two captured images, ΔI. iii) The object autocorrelation O ★ O is approximated by autocorrelating the difference image ΔI. iv) The hidden object can be reconstructed from the object autocorrelation by using phase retrieval techniques.

In the absence of any correlations in the scattering pattern, the detected image is merely a speckle intensity field. However, by exploiting the deterministic nature of scattering, the hidden object can be recovered [Fig. 1(c)]. Let us first consider the case where light is confined to emit solely within an isoplanatic range, as defined by the angular memory effect (ME). In this case, the detected light can be mathematically represented as

I=S*O,
where S is the point spread function (PSF) of the light scattering process, or equivalently the speckle intensity distribution at the camera arising from a single point source at the object plane; and O is the object, defined as the collection of points through which light can be transmitted [14]. For this paper, we use the operator * to denote convolution. The memory effect region can be approximated as δx = uλ/(πL), where L is the thickness of the scattering media, λ is the wavelength of light, and u is the distance between the scattering media and the object.
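
To make the forward model concrete, the following sketch simulates the relation I = S * O numerically: a speckle PSF is generated from a random-phase pupil (a stand-in for the scattering medium's response to a point source) and convolved with a small binary object. All sizes, the pupil model, and the object shape are illustrative assumptions, not the experimental parameters; variable names mirror the notation in the text.

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)
N = 256

# Speckle PSF S: intensity of a band-limited random field. The aperture radius r
# sets the speckle grain size (illustrative stand-in for the real scattering PSF).
y, x = np.ogrid[-N // 2:N // 2, -N // 2:N // 2]
r = 32
pupil = (x**2 + y**2 < r**2) * np.exp(1j * 2 * np.pi * rng.random((N, N)))
S = np.abs(np.fft.ifft2(np.fft.ifftshift(pupil)))**2

# Object O: a small binary transmission pattern lying within one isoplanatic patch.
O = np.zeros((N, N))
O[120:140, 124:128] = 1.0   # vertical bar
O[128:132, 124:148] = 1.0   # horizontal bar (together: an "L"-shaped object)

# Detected image within the memory-effect region: I = S * O.
I = fftconvolve(S, O, mode="same")
```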

If we now consider the case of an absorptive object in a bright-field scenario, then the majority of the detected light arises from the background. Using superposition, the detected intensity image I can be mathematically described as

I = B − S * O,
where B is the speckle intensity image arising from the scattered light transmitted through the medium, and S * O is the portion that the object would have contributed if it were transmitting, as opposed to blocking, light [Fig. 1(c,i)]. Due to the dominating contribution from the background B, we cannot retrieve O from I alone. By acquiring multiple intensity images with the background, but not the object, constant between acquisitions, we can remove the background signal and thereby retrieve the object.

One strategy to achieve this is to use a moving object. If the object dimensions fall within the ME region, the contribution of the object in each image can be represented as the convolution of the object pattern with an acquisition-dependent PSF. As long as the rest of the sample is static, the speckle field arising from the background will remain unchanged and can be subtracted out by taking the difference between captures. That is,

In = B − Sn * O,    n = 1, 2, …, N
and ΔIn = In+1 − In = (Sn − Sn+1) * O,
where In denotes the nth captured image. Since the scattering PSF is a delta-correlated process (Sn(x) ★ Sn(x) ≈ δ(x)), taking the autocorrelation (AC) of the image ΔI yields the object autocorrelation (OAC), plus additional noise terms [Fig. 1(c,iii)]. That is,
ΔIn ★ ΔIn ≈ 2 × (O ★ O) − (Sn ★ Sn+1 + Sn+1 ★ Sn) * O = 2 × (O ★ O) − noise,
where ★ denotes autocorrelation. We shall refer to ΔIn ★ ΔIn as the speckle autocorrelation (SAC).
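
As a minimal illustration of this step, the sketch below (assuming two captures I1 and I2 as NumPy arrays, e.g. from the simulation above) forms the difference image and computes its autocorrelation via the Wiener–Khinchin theorem; the constant background cancels in the subtraction.

```python
import numpy as np

def autocorrelate(img):
    """FFT-based autocorrelation of a mean-subtracted image (Wiener-Khinchin theorem)."""
    img = img - img.mean()
    return np.fft.fftshift(np.fft.ifft2(np.abs(np.fft.fft2(img))**2).real)

def speckle_autocorrelation(I1, I2):
    """SAC of the difference image: the static background B cancels in I2 - I1,
    leaving (S2 - S1) * O, whose autocorrelation approximates the OAC terms above."""
    dI = I2 - I1
    return autocorrelate(dI)
```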

The object can be recovered from the SAC by using phase retrieval techniques, such as the Fienup iterative phase retrieval methods, to recover the Fourier phase [Fig. 1(c,iv)] [17]. The resultant object will have an image size dictated by the magnification of the system, M = v/u.
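
A compact sketch of a Fienup-type reconstruction from an estimated OAC is given below: the Fourier magnitude of the object follows from ℱ(O ★ O) = |ℱ(O)|², and hybrid input-output iterations followed by error reduction impose that magnitude together with a non-negativity constraint. This is a generic textbook variant [17], not the authors' exact implementation; the iteration counts and β are illustrative.

```python
import numpy as np

def phase_retrieve(A, n_hio=500, n_er=100, beta=0.9, seed=1):
    """Recover a real, non-negative object from its (centered) autocorrelation A."""
    rng = np.random.default_rng(seed)
    # Fourier magnitude of the object, from F(O ★ O) = |F(O)|^2.
    mag = np.sqrt(np.abs(np.fft.fft2(np.fft.ifftshift(A))))
    g = rng.random(A.shape)                      # random non-negative starting guess
    for it in range(n_hio + n_er):
        G = np.fft.fft2(g)
        G = mag * np.exp(1j * np.angle(G))       # enforce the measured magnitude
        g_new = np.fft.ifft2(G).real
        bad = g_new < 0                          # object-domain constraint violation
        if it < n_hio:                           # hybrid input-output update
            g = np.where(bad, g - beta * g_new, g_new)
        else:                                    # error-reduction update
            g = np.where(bad, 0.0, g_new)
    return g
```

In the experiments described below, the input A would be the deconvolved and averaged SAC, optionally with a loose support constraint added to the object-domain step.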

2.1. Effect of travel distance

Depending on the distance traveled by the object, the PSFs Sn, n = 1, 2, ... may or may not be correlated. Figure 2 illustrates the effect of travel distance, relative to the ME range, on the SAC. The speckle intensity images I1, I2 were determined using simulation. For comparison, the autocorrelation of the object/target, A = O ★ O, has also been provided [Fig. 2(A, “Object AC”)]. For simplicity, only the case of two image captures (n = 1, 2) has been considered.

Fig. 2 Impact of object travel distance on the computed speckle autocorrelation (SAC). A) The scattering PSFs experienced by an object have a degree of correlation C(Δx) that depends on the distance the object traveled. When C(Δx) ≥ 0.5 (shown in red), the object is considered to have traveled within the memory effect (ME) region. For comparison, the object and its autocorrelation (AC) are displayed. B) When the object travels inside the ME region, the SAC contains three copies of the object autocorrelation (OAC): a centered, positive copy and two negative copies shifted by an amount proportional to the object travel distance. The OAC can be determined by either deconvolving the SAC or by thresholding out the negative portions (negative with reference to the mean, background level). The object can be reconstructed from the estimated OAC using phase retrieval techniques. C) When the object travels a distance where C(Δx) ≈ 0, only a single copy of the OAC is seen, with additional noise from the cross-correlation between uncorrelated PSFs. The colormap used to display the AC and reconstructed object is normalized, with 0 corresponding to the mean background level.

For a moving object, the associated PSFs S1, S2 will have a degree of correlation C(Δx) based on the object travel distance Δx. For scattering media with thicknesses L greater than the mean free path, the degree of correlation can be approximated using the angular correlation function

C(Δx) = [kΘL / sinh(kΘL)]²,
where k = 2π/λ, L is the thickness of the scattering medium, and Θ ≈ Δx/u [18–20]. When C(Δx) > 0.5, the object is considered to have traveled within the ME field of view. The following sections describe three possible cases in more detail: C(Δx) ≈ 1, C(Δx) > 0.5, and C(Δx) → 0.
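
Before turning to these cases, note that the correlation function above can be evaluated directly. The helper below is a plain transcription of the expression for C(Δx) with Θ ≈ Δx/u (all lengths in meters); the function and argument names are our own.

```python
import numpy as np

def memory_effect_correlation(dx, u, L, wavelength=532e-9):
    """C(dx) = [k*theta*L / sinh(k*theta*L)]**2 with theta ~ dx/u."""
    k = 2 * np.pi / wavelength
    arg = k * (np.asarray(dx, dtype=float) / u) * L
    with np.errstate(divide="ignore", invalid="ignore"):
        c = (arg / np.sinh(arg))**2
    return np.where(arg == 0, 1.0, c)            # C -> 1 as dx -> 0

# Example call with illustrative numbers (object step dx, object distance u,
# scattering layer thickness L):
# memory_effect_correlation(dx=0.5e-3, u=0.25, L=1e-4)
```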

Case 1: Object travels a distance where C(Δx) ≈ 1

In the case where the object travels a small distance (such that C(Δx) ≈ 1), we have

S2(xi) ≈ S1(xi + Δxi),
where x = (x, y), xi = (xi, yi) are coordinates in the object plane and image plane respectively, Δx is the distance the object traveled in the object plane, and Δxi = MΔx. We can equivalently consider the PSF to be the same in both captures and have the object travel between captures.

That is,

O2 = O(xi + Δxi),
ΔI = S * [O(xi) − O(xi + Δxi)],
and ΔI ★ ΔI = 2A(xi) − A(xi + Δxi) − A(xi − Δxi),
where A = O ★ O is the object autocorrelation (OAC). The SAC contains three copies of the OAC: a positive copy centered at x = (0, 0), and two negative copies shifted by an amount commensurate with the object travel distance [Fig. 2(B, “Speckle AC”)].

Since C(Δx) ≈ 1 when Δx ≈ 0, the object may travel a distance shorter than the extent of its autocorrelation. In this case, the SAC will yield positive and negative copies of the OAC that overlap [Fig. 2(i)]. The OAC can be recovered using deconvolution [Fig. 2(i, “Deconv. SAC.”)]. Using thresholding to remove the negative portions will adversely impact the positive copy and result in an incomplete estimation of the OAC [Fig. 2(i, “SAC>0”)]. For the results presented in Fig. 2, the objects were reconstructed by applying an iterative phase retrieval algorithm to the deconvolved SAC [13,14,17].

Case 2: Object travels a distance where C(Δx) > 0.5

In the regime where the object travels within the angular ME range (C(Δx) > 0.5), S1 and S2 are correlated. To highlight the impact of the degree of correlation C(Δx) on the SAC, we can mathematically represent S2 as:

S2 = C(Δx) S1(xi + Δxi) + √(1 − [C(Δx)]²) S′,
where S′ is a speckle intensity pattern that is uncorrelated with S1. The scattering PSFs in the equation above are mean-subtracted speckle intensities. Representing S2 in the form above allows us to preserve the speckle intensity statistics (that is, the speckle intensity variance and mean satisfy 𝕍[S1] = 𝕍[S2] and 𝔼[S1] = 𝔼[S2], respectively).

Using Eq. (11), Eqs. (4) and (5) become

ΔI = (S1 − C(Δx) S1(xi + Δxi) − √(1 − [C(Δx)]²) S′) * O
and ΔI ★ ΔI ≈ 2A(xi) − C(Δx) A(xi ± Δxi) + √(1 − [C(Δx)]²) × noise,
where the last equation follows from noting that the speckle fields are a delta-correlated process and that the cross-correlation of two uncorrelated speckle intensities yields noise.

The SAC still contains three copies of the OAC. However, the ratio of the intensities of the positive and negative OAC copies is determined by the ME correlation function C(Δx). Moreover, since S2 ≠ S1, there is an additional noise term that increases with decreasing C(Δx). Since there is no overlap between the positive and negative OAC copies, the OAC can be retrieved by either thresholding out the portions of the SAC that are smaller than the background value [Fig. 2(ii, “SAC>0”)], or by deconvolving the image [Fig. 2(ii, “Deconv. SAC.”)]. Appendix 1 provides more details on the deconvolution algorithm.
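
For simulations such as those in Fig. 2, a partially correlated PSF pair can be synthesized directly from the decomposition above. The sketch below is our own illustration: it uses a spatially white exponential field as the uncorrelated pattern S′ (a simplification of real speckle, which has a finite grain size) and works on mean-subtracted intensities so that the mean and variance of S1 are preserved in S2.

```python
import numpy as np

def correlated_psf(S1, C, shift, seed=2):
    """S2 = C * S1(x + dx) + sqrt(1 - C^2) * S', on mean-subtracted intensities.

    `shift` is the PSF displacement in camera pixels, e.g. (dy, dx)."""
    rng = np.random.default_rng(seed)
    mu = S1.mean()
    S1_shifted = np.roll(S1, shift, axis=(0, 1)) - mu
    # Spatially white, exponentially distributed stand-in for an uncorrelated
    # speckle intensity with the same mean and variance as fully developed speckle.
    S_prime = rng.exponential(mu, size=S1.shape) - mu
    S2 = C * S1_shifted + np.sqrt(1.0 - C**2) * S_prime
    return S2 + mu                               # restore the mean: E[S2] = E[S1]
```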

Case 3: Object travels a distance where C(Δx) ≈ 0

In the case where the object travels outside the memory effect region between captures, S1 and S2 are uncorrelated, and Eq. (13) can be simplified as Eq. (5). Comparing the SAC in Fig. 2(iii) with those in Fig. 2(i–ii), we see that the SAC in the case where the object travels farther than the ME region exhibits more noise. This is expected due to the additional noise term caused by the cross-correlation S1 ★ S2 that is not present in Case 1.

From the above, in all cases (for C(Δx) ∈ [0, 1]), we can successfully retrieve the object autocorrelation from the acquired speckle images I1, I2. From the estimated OAC, phase retrieval techniques can then be applied to reconstruct the object at diffraction-limited resolution.

3. Results

For the experimental demonstration, a laser light beam (CrystaLaser CS532-150-S; λ = 532 nm) was expanded (1/e² diameter of 20 cm) and reflected off a phase-only spatial light modulator (SLM; Holoeye PLUTO-VIS) to generate a spatially incoherent light source (Fig. 3).

Fig. 3 Experimental setup for imaging hidden moving objects. A spatially incoherent source is generated by reflecting an expanded laser beam (λ = 532 nm; 1/e² diameter of 20 cm) off a spatial light modulator (SLM), which applies a temporally varying set of random phase patterns. The light source is transmitted through the moving object and scattered by the turbid media. The emitted scattered light is collected by a camera. An aperture controls the final object resolution and the speckle size at the camera. Lens focal length = 400 mm.

An SLM was used in place of a rotating diffuser in order to generate a deterministic, temporally variant set of 50 to 100 random phase patterns. This set of patterns was used for all the acquisitions to ensure that the background light captured remained constant. The object and camera (pco.edge 5.5, PCO-Tech, USA) were placed at distances u = 20–30 cm and v = 10–15 cm from the scattering media (DG10-120 diffuser; Thorlabs, USA), respectively (Fig. 3).

To ensure that only the object moved between successive image captures, a transmissive SLM (tSLM; Holoeye LC2002 with polarizer) coupled with a polarizer (Thorlabs, LPVISE200-A) was used for amplitude modulation, and served as the object (Fig. 4). For each object, a set of n = 4 images, I1, …, I4, was acquired, with the object moving 1.5 mm between acquisitions. The raw camera images [Fig. 4(b)] display a seemingly random light pattern that is similar for different objects. This is due to the dominant contribution of the background.

Fig. 4 Experimental imaging of moving targets hidden behind a diffuser. A) The “object” is hidden behind a scattering medium and attenuates light transmission. The object was moved 1.5 mm between acquisitions. B) Due to the presence of the scattering medium, the object is obscured, and the camera image I1 is dominated by the scattered light from the background. C) The ideal object autocorrelation (AC). D) The speckle autocorrelation ΔI ★ ΔI ≈ O ★ O. E) By applying phase retrieval on the speckle autocorrelation, the hidden object was reconstructed with high fidelity. Scale bar = 500 μm.

From each successive pair of acquired images, the OAC [Fig. 4(d)] was estimated by deconvolving the SAC. The deconvolved SAC images were then averaged to reduce noise and yield a better estimate of the OAC. A Fienup-type iterative phase retrieval method was applied to reconstruct the hidden object with high fidelity [Fig. 4(e)] [13,14,17]. One modification that was made to the algorithm was to add an object support to the object constraints; this object support was determined from the OAC support [21,22]. In all cases, the obscured object was successfully reconstructed [Fig. 4(e)].
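
The averaging and support steps described above can be sketched as follows (our own illustration; `sacs` would hold the per-pair deconvolved SAC estimates, and the loose support mask exploits the fact that the autocorrelation support is roughly twice the object support [21]).

```python
import numpy as np

def average_oac(sacs):
    """Average the per-pair OAC estimates to suppress speckle noise."""
    return np.mean(np.stack(sacs), axis=0)

def loose_support(oac, frac=0.05):
    """Centered support mask whose half-extent is half the thresholded OAC
    half-extent, since the autocorrelation support is ~twice the object support."""
    ys, xs = np.nonzero(oac > frac * oac.max())
    cy, cx = oac.shape[0] // 2, oac.shape[1] // 2
    hy = max((ys.max() - ys.min() + 1) // 4, 1)   # ~half of the object extent
    hx = max((xs.max() - xs.min() + 1) // 4, 1)
    mask = np.zeros(oac.shape, dtype=bool)
    mask[cy - hy:cy + hy + 1, cx - hx:cx + hx + 1] = True
    return mask
```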

To experimentally demonstrate the effect of object travel distance, we moved an object a distance of 0.5, 1, and 3 mm between image acquisitions, and looked at the corresponding SAC and reconstructed object (Fig. 5). As expected, the SAC contained three copies of the OAC. We also compared the effect of processing the SAC using deconvolution [Fig. 5(b)] vs. thresholding [Fig. 5(c)].

Fig. 5 Experimental results showing the effect of object motion distance on the speckle autocorrelation (SAC) and object reconstruction. A) A diagram showing the position and shape of the object at both time captures, and the SAC, showing three shifted copies of the object autocorrelation (OAC). The effect of applying B) deconvolution and C) thresholding to retain the positive portion (with respect to the mean level) for estimating the OAC from the SAC was compared in three cases (i–iii). The hidden object was reconstructed by applying Fienup phase retrieval on the estimated OAC. Colormap: green is positive, blue is negative (with respect to the mean value, in black). Scale bar: 500 μm.

For Case i, the object traveled a distance Δx < δx, and both the object positions and the OAC copies in the SAC overlapped in space between successive acquisitions. In the case of object overlap, only the non-overlapping portion of the object can be retrieved [Fig. 5(i)]. Comparing the result of deconvolution vs. thresholding, the reconstructed image from the deconvolved SAC more closely resembles the original object [Fig. 5(i,b)]. However, in both cases, what we are left with is an incomplete OAC and reconstructed object.

For Case ii, the object traveled a distance δx < Δx ≤ 2δx. Since the OAC support is approximately twice the object support, the positive and negative copies of the OAC overlapped [Fig. 5(ii)] [21]. Due to the overlap, thresholding resulted in an imperfect object reconstruction [Fig. 5(ii,c)]. In contrast, by deconvolving, the signal from the negative copies can be used to gain a better estimate of the OAC, from which the object can be reconstructed [Fig. 5(ii,b)].

For Case iii, the object traveled a distance Δx ≫ 2δx, and there was no overlap in the SAC. Due to the large Δx, C(Δx) decreased, and correspondingly, the noise increased. Since the signal-to-noise ratio (SNR) of the negative copies decreased, the entire OAC cannot be seen in the negative copies [Fig. 5(iii,a)]; thus, performing a deconvolution results in a noisy, imperfect OAC [Fig. 5(iii,b)], and it is more advisable to use thresholding to retain only the positive portion of the SAC [Fig. 5(iii,c)]. If we compare the reconstructed objects in both cases, we see that the object from the thresholded result more closely resembles the original object.

3.1. Imaging moving objects hidden between scattering media

To further demonstrate our imaging technique, we placed a moving object between two diffusers (Newport 10° Light Shaping Diffuser, Thorlabs DG10-220-MD) [Fig. 6(A)]. A moving object (a bent black wire) was flipped in and out of the light path between image captures, such that I2 = B. We blocked the partially-developed speckle field (from the propagation of the SLM phase pattern) and used only the fully-developed speckle pattern [23]. This fully-developed speckle pattern was transmitted through both scattering media and the moving object. The emitted scattered light was detected by a camera.

Fig. 6 Experimental retrieval of moving targets hidden within a scattering object. A) Schematic of the experimental setup. A spatially incoherent light source is generated by reflecting an expanded laser beam off a spatial light modulator (SLM) that applies a temporally variant random phase pattern. The partially developed speckle field component is blocked, and only the fully-developed speckle field transmits through the moving object and two scattering layers. The emitted scattered light is collected by a camera. An aperture controls the resolution and the speckle size at the camera. B) Experimental result of a moving target. Two speckle intensity images, I1, I2, were captured, with the target present for the first capture and absent for the second. The background halos in I1 and I2 were removed prior to computing the difference ΔI = I2 − I1 ≈ S1 * O. The speckle autocorrelation yielded an estimate of the object autocorrelation, from which the target was retrieved by applying Fienup phase retrieval. Lens focal length = 400 mm.

The background halo from each detected speckle intensity image was estimated and removed by performing Gaussian filtering (500×500 kernel, σ = 100), and then dividing each image by the background halo [14]. The SAC was then computed to estimate the OAC, from which phase retrieval was applied to reconstruct the hidden object. Although the object is fully obscured from both sides by scattering media and cannot be resolved from the camera image alone, using our technique, we were able to successfully reconstruct the hidden object with high fidelity [Fig. 6(B)].
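
A sketch of this halo-removal step, assuming `img` is a raw speckle image held as a NumPy array; `scipy.ndimage.gaussian_filter` with σ = 100 pixels stands in for the 500×500 Gaussian kernel described above.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def remove_halo(img, sigma=100):
    """Estimate the slowly varying background envelope and divide it out."""
    halo = gaussian_filter(np.asarray(img, dtype=float), sigma=sigma)
    return img / (halo + 1e-12)                  # epsilon avoids division by zero
```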

4. Discussion and conclusion

In this paper, we demonstrated successful reconstruction of moving targets that were hidden behind an optically turbid medium. Although the angular memory effect has already been used to demonstrate imaging of hidden targets, to the best of our knowledge, these prior systems were limited to imaging sparsely-tagged objects in a dark-field scenario [13,14,24]. We extended this work to imaging in the bright-field scenario by exploiting the temporal correlations inherent in the scattering process to remove the dominating contribution from the background and isolate the signal arising from the object [15,16]. Although we demonstrated our results on non-emitting objects in the bright-field scenario, our technique works equally well with transmissive or reflective objects. A cursory examination reveals that, when In = B + Sn * O and ΔI = In − In+1, the speckle autocorrelation is still given by Eq. (5), similar to imaging absorptive objects in the bright-field scenario. In the remainder of this section, we discuss some of the factors that impact system performance.

Firstly, our method depends on the angular correlations inherent in the scattering process. Thus, the object dimension should fall within the angular memory effect field of view (FOV), approximated using the full-width at half-maximum (FWHM) of the correlation function, uλ/(πL). The axial extent of the object, δz, should also fall within the axial decorrelation length, 2λ/π (u/D)² [25]. Since the ME FOV is inversely proportional to L, our technique works best with thin scattering media, or through more anisotropically scattering media, since anisotropy enhances the angular memory effect range [20]. Strongly anisotropic media, such as biological tissue, also exhibit the translational memory effect, which may be exploited to further improve the fidelity of imaging through scattering layers [26].

Secondly, to maximize SNR and minimize overlap, the object travel distance should be such that δx < Δx and C(Δx) ≥ 0.5, since smaller values of C(Δx) result in higher levels of noise. However, if the object moves such a large distance that it no longer falls within the laser beam, then I2 = B and ΔI = S1 * O, and we can still retrieve the object with high fidelity. In all these cases, successful retrieval of the object is dependent on the background light pattern remaining constant between successive image captures. Thus, the illuminated portion of the tissue should remain constant between image captures, and the time between image captures should fall well within the temporal decorrelation time of the scattering sample. For biological samples, the temporal decorrelation time is related to the motion of the scatterers embedded within [27].

Imaging through biological samples can be achieved using a faster system. The imaging speed in our current design was limited by the refresh rate of the SLM (≈ 8 Hz) and by the exposure time required to capture an image (50–200 ms). With a more powerful laser, or a faster deterministic random phase modulator, it would be possible to shorten our imaging time and extend our work to imaging within non-static samples, such as biological tissue.

A third factor in the fidelity of the reconstruction is the complexity of the object and the size of the background relative to the object. The dynamic range of the camera should be large enough to resolve the equivalent speckle signal from the object. Since the signal contrast is inversely related to the object complexity [14], the dynamic range of the camera limits the maximum object complexity. To maximize the SNR, the camera exposure and laser power should be adjusted such that the full well depth of the camera is utilized. A camera with a larger well depth and dynamic range would provide higher SNR and the capability to image more complex objects. The diameter of the aperture in the system can be adjusted to fine-tune the image resolution and control the object complexity.

Lastly, each speckle grain at the camera should satisfy the Nyquist sampling criterion and be easily resolvable. At the same time, the number of speckle grains that are captured in each image should also be maximized in order to maximize SNR. Although the scattering PSFs are ideally a delta-correlated process, in practice, we are only sampling a finite extent of the PSF. Thus, the PSF autocorrelation yields a delta function plus some background noise which can be minimized by increasing the number of captured speckle grains [14]. Due to Nyquist requirements, the maximum number of speckle grains is a function of the camera resolution; thus, a high resolution camera would provide lower noise. Another method to reduce this speckle noise is to take multiple acquisitions and compute the average of the speckle autocorrelation images.

In conclusion, we demonstrated successful imaging of hidden moving targets through scattering samples. The temporal and angular correlations inherent in the scattered light pattern allowed us to reconstruct the hidden object in cases where multiply scattered light dominates over ballistic light. This paper presented a first proof of concept. Although we demonstrated imaging of binary-amplitude targets, our system can also be extended to imaging gray-scale targets [28]. Since our imaging technique utilizes the angular memory effect, it is scalable. Moreover, our method does not require access inside the scattering media, and can therefore be used as a black-box imaging system. With appropriate optimization, this opens up potential for use in applications involving the tracking of moving objects in turbid environments, such as fog or underwater.

Appendix 1 - Deconvolving the speckle autocorrelation

To deconvolve the speckle autocorrelation (SAC), ΔI ★ ΔI, Wiener deconvolution was applied to reduce the deconvolution noise. We briefly describe the process here. We can rewrite Eq. (13) as

g = ΔI ★ ΔI ≈ A * h + n = y + n,
where h(xi) = 2δ(xi) − C(Δx)δ(xi ± Δxi), A = O ★ O, and n is the noise term. In this case, Wiener deconvolution estimates A by applying
ℱ(A) = ℱ(g) ℱ(h) / (|ℱ(h)|² + k) ≈ ℱ(y) / ℱ(h),
where ℱ is the Fourier transform operator, and k = ℱ(n)/ℱ(g) ≈ 1/SNR estimates the noise level relative to the signal [29]. Since all object ACs have a peak value of A(xi = (0, 0)) = Σx O(x)², to determine h from the SAC, we estimated the value of C(Δx) from the ratio of the negative to positive peak values in the SAC. The locations of the negative peaks, with respect to the centered, positive peak, provided the value of the shift Δxi.
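
A minimal sketch of this Wiener step is given below, assuming the peak shift (in camera pixels) and the correlation value C have already been read off the SAC as described above; the regularizer k plays the role of the inverse SNR. The function name and defaults are our own.

```python
import numpy as np

def wiener_deconvolve_sac(sac, shift, C, k=1e-2):
    """Estimate the OAC A from g = A * h, with h = 2d(x) - C d(x - dx) - C d(x + dx)."""
    h = np.zeros_like(sac, dtype=float)
    cy, cx = sac.shape[0] // 2, sac.shape[1] // 2
    h[cy, cx] = 2.0                              # central positive delta
    h[cy + shift[0], cx + shift[1]] -= C         # negative delta at +shift
    h[cy - shift[0], cx - shift[1]] -= C         # negative delta at -shift
    G = np.fft.fft2(np.fft.ifftshift(sac))
    H = np.fft.fft2(np.fft.ifftshift(h))
    A = np.fft.ifft2(G * np.conj(H) / (np.abs(H)**2 + k)).real
    return np.fft.fftshift(A)
```

A larger k suppresses noise amplification at the cost of blurring the recovered OAC; in practice it would be tuned to the measured SNR.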

Funding

National Institutes of Health (NIH 1U01NS090577); GIST-Caltech Collaborative Research (CG2016); Natural Sciences and Engineering Research Council of Canada (NSERC PGSD3).

Acknowledgments

The authors would like to thank Joshua Brake for helpful feedback on the manuscript.

References and links

1. V. Ntziachristos, “Going deeper than microscopy: the optical imaging frontier in biology,” Nat. Methods 7, 603–614 (2010).

2. D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, and J. G. Fujimoto, “Optical coherence tomography,” Science 254, 1178 (1991).

3. S. Andersson-Engels, O. Jarlman, R. Berg, and S. Svanberg, “Time-resolved transillumination for medical diagnostics,” Opt. Lett. 15, 1179–1181 (1990).

4. G. H. Chapman, M. Trinh, N. Pfeiffer, G. Chu, and D. Lee, “Angular domain imaging of objects within highly scattering media using silicon micromachined collimating arrays,” IEEE J. Quantum Electron. 9, 257–266 (2003).

5. S. Kang, S. Jeong, W. Choi, H. Ko, T. D. Yang, J. H. Joo, J.-S. Lee, Y.-S. Lim, Q.-H. Park, and W. Choi, “Imaging deep within a scattering medium using collective accumulation of single-scattered waves,” Nat. Photon. 9, 253–258 (2015).

6. H. Ramachandran and A. Narayanan, “Two-dimensional imaging through turbid media using a continuous wave light source,” Opt. Commun. 154, 255–260 (1998).

7. S. Sudarsanam, J. Mathew, S. Panigrahi, J. Fade, M. Alouini, and H. Ramachandran, “Real-time imaging through strongly scattering media: seeing through turbid media, instantly,” Sci. Rep. 6, 25033 (2016).

8. F. Helmchen and W. Denk, “Deep tissue two-photon microscopy,” Nat. Methods 2, 932–940 (2005).

9. A. P. Mosk, A. Lagendijk, G. Lerosey, and M. Fink, “Controlling waves in space and time for imaging and focusing in complex media,” Nat. Photon. 6, 283–292 (2012).

10. I. M. Vellekoop and A. Mosk, “Focusing coherent light through opaque strongly scattering media,” Opt. Lett. 32, 2309–2311 (2007).

11. X. Xu, H. Liu, and L. V. Wang, “Time-reversed ultrasonically encoded optical focusing into scattering media,” Nat. Photon. 5, 154–157 (2011).

12. Y. M. Wang, B. Judkewitz, C. A. DiMarzio, and C. Yang, “Deep-tissue focal fluorescence imaging with digitally time-reversed ultrasound-encoded light,” Nat. Commun. 3, 928 (2012).

13. J. Bertolotti, E. G. van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491, 232–234 (2012).

14. O. Katz, P. Heidmann, M. Fink, and S. Gigan, “Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations,” Nat. Photon. 8, 784–790 (2014).

15. E. H. Zhou, H. Ruan, C. Yang, and B. Judkewitz, “Focusing on moving targets through scattering samples,” Optica 1, 227–232 (2014).

16. C. Ma, X. Xu, Y. Liu, and L. V. Wang, “Time-reversed adapted-perturbation (TRAP) optical focusing onto dynamic objects inside scattering media,” Nat. Photon. 8, 931–936 (2014).

17. J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21, 2758–2769 (1982).

18. S. Feng, C. Kane, P. A. Lee, and A. D. Stone, “Correlations and fluctuations of coherent wave transmission through disordered media,” Phys. Rev. Lett. 61, 834 (1988).

19. R. Berkovits, M. Kaveh, and S. Feng, “Memory effect of waves in disordered systems: a real-space approach,” Phys. Rev. B 40, 737 (1989).

20. S. Schott, J. Bertolotti, J.-F. Léger, L. Bourdieu, and S. Gigan, “Characterization of the angular memory effect of scattered light in biological tissues,” Opt. Express 23, 13505–13516 (2015).

21. J. R. Fienup, T. Crimmins, and W. Holsztynski, “Reconstruction of the support of an object from the support of its autocorrelation,” J. Opt. Soc. Am. 72, 610–624 (1982).

22. J. Fienup and C. Wackerman, “Phase-retrieval stagnation problems and solutions,” J. Opt. Soc. Am. A 3, 1897–1907 (1986).

23. B. Ruffing and J. Fleischer, “Spectral correlation of partially or fully developed speckle patterns generated by rough surfaces,” J. Opt. Soc. Am. A 2, 1637–1643 (1985).

24. O. Katz, E. Small, and Y. Silberberg, “Looking around corners and through thin turbid layers in real time with scattered incoherent light,” Nat. Photon. 6, 549–553 (2012).

25. I. Freund, “Looking through walls and around corners,” Phys. A 168, 49–65 (1990).

26. B. Judkewitz, R. Horstmeyer, I. M. Vellekoop, I. N. Papadopoulos, and C. Yang, “Translation correlations in anisotropically scattering media,” Nat. Phys. 11, 684–689 (2015).

27. J. Brake, M. Jang, and C. Yang, “Analyzing the relationship between decorrelation time and tissue thickness in acute rat brain slices using multispeckle diffusing wave spectroscopy,” J. Opt. Soc. Am. A 33, 270–275 (2016).

28. H. Li, T. Wu, J. Liu, C. Gong, and X. Shao, “Simulation and experimental verification for imaging of gray-scale objects through scattering layers,” Appl. Opt. 55, 9731–9737 (2016).

29. R. C. Gonzalez and R. E. Woods, Digital Image Processing, 3rd ed. (Prentice-Hall, 2006).
