
3D imaging in volumetric scattering media using phase-space measurements


Abstract

We demonstrate the use of phase-space imaging for 3D localization of multiple point sources inside scattering material. The effect of scattering is to spread angular (spatial frequency) information, which can be measured by phase-space imaging. We derive a multi-slice forward model for homogeneous volumetric scattering, then develop a reconstruction algorithm that exploits sparsity in order to further constrain the problem. By using 4D measurements for 3D reconstruction, the dimensionality mismatch provides significant robustness to multiple scattering, with either static or dynamic diffusers. Experimentally, our high-resolution 4D phase-space data is collected by a spectrogram setup, with results successfully recovering the 3D positions of multiple LEDs embedded in turbid scattering media.

© 2015 Optical Society of America

1. Introduction

Imaging through scattering remains one of the most important problems in optics. Examples are ubiquitous: imaging in foggy weather, detecting objects behind diffuse glass, and in vivo biological studies. In many situations, scattering media do not absorb light but scatter it many times, causing images to look diffused. The scattered images still contain significant information about the object, however, so one has a chance to undo the scattering either optically or computationally. Some approaches aim to filter out the scattered light [1–5], leaving a very weak signal. Instead, we wish to use the scattered light in reconstructing the signal [6]. Phase conjugation does this [7–9], but only works at a single point in space. Related methods use the coherent transmission matrix [10–13] or adaptive optics [14] to undo scattering, either computationally or physically. Here, we demonstrate a new framework for 3D reconstruction of incoherent point objects (e.g. fluorophores, LEDs) well beyond the single-scattering regime, by measuring space-angle distributions (phase space).

Although the scattering process is completely deterministic, exact inversion through turbid media would require full characterization of all scattering events across 3D space and time. Since this is infeasible with current technology, we turn to statistical methods. In contrast to previous studies, we work in the 4D phase-space domain, which relates to second-order correlations of the wave-field [15]. Phase space describes light not only by its 2D positional information (x, y), but also by its 2D distribution of spatial frequency (ux, uy) at each position. Spatial frequency and angle of propagation are directly related, so phase space can be thought of as the wave-optical generalization of light fields [16, 17]. The effect of scattering is to locally spread light into a statistical distribution of angles (spatial frequencies); hence, it is not surprising that angle-resolved measurements are useful for imaging through scattering.

The key idea in this paper is that each point source in 3D traces out a unique 2D plane within the 4D phase space. Just a few parameters (intercepts and slope of the plane) fully define a point source’s 3D position, so 4D phase-space measurements are extremely redundant. This redundancy can be exploited to mitigate the effects of volumetric scattering, which blurs phase space in all dimensions. By modeling the blur as a convolution in 4D phase space and using it to constrain the image reconstruction procedure, we show that information about the object remains long after the image appears completely diffused.

Previous studies have used lenslet arrays to recover 2D objects behind thin scattering screens by utilizing multiple perspectives [18–20], requiring coherent (laser) illumination and moving diffusers. Other works have used phase retrieval to image through thin diffusers by making a single-scattering approximation [21–27], including recent work using phase-space measurements for calibrating depth [28]. However, most scattering material is volumetric and dynamic, so single or static scattering approximations are not applicable. Here, we demonstrate the validity of our method both for thick and thin diffusers and both static and dynamic scattering.

The assumptions made here are that our object consists of a sparse set of point sources (i.e. fewer point sources than the number of 4D pixels) and that the scattering material is homogeneous. This holds for situations including fluorophores in biological tissue or vehicle detection in fog. The measurement system captures phase space with very high resolution [29] by using spectrogram measurements [30] taken with a digital micromirror device (DMD) array. The concepts presented here, however, are general and can be used with any phase-space dataset (e.g. light field cameras).

Our inverse algorithm consists of sparse signal recovery in 4D phase space via constrained optimization procedures. A multi-slice forward model [31–33] includes multiple scattering and propagation effects, leading to a well-defined inverse problem. In addition to solving for the 3D particle locations, our method is also able to estimate the (scalar) scattering coefficient. Success degrades with increased scattering or decreased sparsity. The limits are difficult to quantify, since they depend in a complicated way on the object, the geometry and the amount of scattering. The important point is that the volume and resolution of data that we can reconstruct will scale not with the number of measurements, but rather with the sparsity, such that this method could, in the future, be scaled up to very large volumes with high-resolution recovery.

2. Phase space basics

Phase space refers to any function that simultaneously describes space and spatial frequency (angle) [15, 34]. For example, light fields [16, 17] in geometric optics describe both position and angle of each ray. To include phase and diffraction effects, the more general wave optics descriptors of phase space (e.g. Wigner functions, spectrograms) must be used [35,36]. For both ray and wave optics, propagation is described by a simple shearing operation in phase space, allowing one to trace rays (waves) backward through 3D space to find depth.

Here, we use the phase-space Wigner function, which incorporates both wave-optical and geometric effects. The 4D Wigner function, W(r, u), has two transverse spatial coordinates, r = (x, y), and two spatial frequency coordinates, u = (ux, uy). It is defined as [34]

$$W(\mathbf{r},\mathbf{u}) = \int \left\langle \tilde{f}^{\,*}\!\left(\mathbf{u}+\mathbf{u}'/2\right)\, \tilde{f}\!\left(\mathbf{u}-\mathbf{u}'/2\right) \right\rangle e^{i 2\pi \mathbf{u}' \cdot \mathbf{r}}\, d^2\mathbf{u}', \tag{1}$$
where f̃(u) is a quasi-monochromatic, partially coherent complex field in the Fourier domain, * represents the complex conjugate and ⟨·⟩ denotes an ensemble average for the case of partial coherence.
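As a quick numerical illustration (not part of the original experiments), the Wigner function of a sampled 1D field can be computed directly from the equivalent real-space form of Eq. (1); all grid choices below are illustrative assumptions.

```python
import numpy as np

# Discrete 1D sketch of Eq. (1), using the equivalent real-space form
# W(x, u) = \int f(x + s/2) f*(x - s/2) e^{-i 2 pi u s} ds. The grid is
# circularly indexed, and the frequency axis comes out scaled by a factor
# of 2 relative to the usual FFT bins -- acceptable for an illustration.

def wigner_1d(f):
    n = len(f)
    m = np.arange(-(n // 2), n // 2)
    W = np.zeros((n, n))
    for k in range(n):
        corr = f[(k + m) % n] * np.conj(f[(k - m) % n])   # f(x+s/2) f*(x-s/2)
        W[k] = np.real(np.fft.fftshift(np.fft.fft(np.fft.ifftshift(corr))))
    return W   # rows: position x, columns: spatial frequency u

x = np.arange(256) - 128
f = np.exp(-x**2 / 200.0).astype(complex)   # toy coherent Gaussian field
W = wigner_1d(f)
intensity = W.sum(axis=1)                   # proportional to |f(x)|^2
```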

Our experimental system does not measure the Wigner function of the light, but rather its Fourier spectrogram. Spectrograms are measured by scanning an aperture across the signal and capturing its local power spectrum for each position of the aperture [29]. Here, the aperture is scanned in Fourier space while collecting real-space images, so the resulting Fourier spectrogram is written as

$$I(\mathbf{r},\mathbf{u}) = \left| \int \tilde{f}(\mathbf{u}')\, a(\mathbf{u}'-\mathbf{u})\, e^{i 2\pi \mathbf{u}' \cdot \mathbf{r}}\, d^2\mathbf{u}' \right|^2, \tag{2}$$
where a is the window function for the aperture. The Fourier spectrogram is related to the Wigner function by a simple convolution operation, I(r, u) = W(r, u) ⊗ Wa(r, u), where ⊗ denotes convolution over both r and u and Wa is the Wigner function of the aperture, a. For this reason, the spectrogram is often called a smoothed Wigner function, and in the limit they become equivalent. A traditional 2D intensity image is a projection of the Wigner function over all spatial frequencies, Intensity(r) = ∬W(r, u) d²u. Integrating instead over all positions gives the spatial spectrum, Spectrum(u) = ∬W(r, u) d²r.
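The aperture-scanning measurement of Eq. (2) is equally easy to simulate. The following 1D sketch scans a rectangular Fourier-space window across a toy field (the window width and grid are illustrative assumptions) and records a real-space intensity image for each window position.

```python
import numpy as np

# 1D sketch of Eq. (2): for each aperture center u0, window the Fourier-domain
# field, inverse transform, and record the intensity. Summing the result over
# u approximates the ordinary intensity image, smoothed by the aperture.

n = 256
x = np.arange(n) - n / 2
f = np.exp(-x**2 / 800.0) * np.exp(1j * 1e-3 * x**2)  # defocused toy beam
f_tilde = np.fft.fftshift(np.fft.fft(f))              # field in the Fourier domain
u = np.fft.fftshift(np.fft.fftfreq(n))                # frequency axis (cycles/sample)

def fourier_spectrogram(f_tilde, u, width=0.05):
    """I(x, u0) = |IFT{ f_tilde(u') a(u' - u0) }|^2 for each center u0."""
    I = np.zeros((len(u), len(u)))
    for j, u0 in enumerate(u):
        a = (np.abs(u - u0) < width / 2).astype(float)   # rectangular window
        g = np.fft.ifft(np.fft.ifftshift(f_tilde * a))   # real-space image
        I[:, j] = np.abs(g) ** 2
    return I                                             # axes: (x, u)

I = fourier_spectrogram(f_tilde, u)
intensity_proj = I.sum(axis=1)   # projection over u ~ smoothed intensity image
```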

One of the most appealing reasons for using the phase-space framework is that propagation becomes a geometric shearing operation, independent of the coherence properties of the light [15, 34]. Denoting the propagation operator as 𝒫Δz, we describe propagation of distance Δz in Wigner space as

$$\mathcal{P}_{\Delta z} W(\mathbf{r},\mathbf{u}) = \int W(\mathbf{r}',\mathbf{u})\, \delta\!\left(\mathbf{r} - \mathbf{r}' - \lambda \Delta z\, \mathbf{u}/n_r\right) d^2\mathbf{r}', \tag{3}$$
where λ is wavelength in vacuum, nr is the refractive index of the material and δ is the Dirac delta function. This is exactly analogous to the shift-and-add approach for digital refocusing of light field data [37], since angles and spatial frequencies are related by λu/nr = (sinθx, sinθy), where (θx, θy) is the angle of propagation represented in the light field framework. When wave-optical phase-space functions are used, as here, phase and diffraction are fully accounted for.
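As a minimal sketch of this shearing (with an illustrative discretization, not the paper's data), each angle row of a sampled 2D phase-space slice is shifted laterally and the result summed over ux to form a digitally refocused image:

```python
import numpy as np

# Sketch of Eq. (3): shear a sampled phase-space slice W[ux, x] by a
# propagation distance dz, then project over ux for the refocused image --
# the wave-optical counterpart of light-field shift-and-add [37].

def refocus(W, x, ux, dz, lam=650e-9, nr=1.0):
    """Shift each angle row by lam*dz*u/nr, then sum over ux."""
    img = np.zeros_like(x)
    for i, u in enumerate(ux):
        img += np.interp(x - lam * dz * u / nr, x, W[i], left=0.0, right=0.0)
    return img
```

Choosing dz equal to minus a point source's depth collapses its tilted line back to a point, which is the geometric picture behind the depth recovery of the next section.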

3. 3D localization of point sources in scattering

Consider the phase-space description of an in-focus point source (dropping the y dimensions for simplicity). The point source emits light at a single point along x and isotropically in all directions, so its phase space is a vertical line covering all frequencies (angles). Since propagation shears phase space (Eq. (3)), moving the point source out of focus will cause the line to tilt proportionately. Thus, the intersection of the line with ux = 0 defines the transverse position of the point source and the slope of the line defines its depth above the focal plane (z position):

$$\frac{du_x}{dx} = \frac{n_r}{\lambda z}, \tag{4}$$
where z is the distance between the point source and the focus plane of the imaging system. This simple model for 3D positions of point sources is easily extended to multiple point sources that are incoherent with each other (e.g. fluorophores). Each point source creates a distinct line in phase space that linearly superimposes with the others (see Fig. 1b). Since any line can be fully represented by just two parameters (intercept and slope), measuring the full phase space provides a significant amount of redundant data for 3D localization; this is important for undoing the scattering. Note that when we consider both x and y dimensions, phase space extends from 2D to 4D and the line becomes a hyperplane in the higher-dimensional space, creating even more redundancy. Thus, determining the 3D positions of point sources becomes a plane-fitting problem which is highly overdetermined when using 4D phase-space measurements.
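For a concrete worked example of Eq. (4), suppose a line has been traced through a measured (x, ux) slice; fitting its slope and inverting gives the depth. The sample numbers below are invented for illustration.

```python
import numpy as np

# Worked example of Eq. (4): the slope of a point source's line in (x, ux)
# gives depth z = n_r / (lambda * du_x/dx). Sample points are illustrative.

lam, nr = 650e-9, 1.33                       # wavelength (m), refractive index
x_pts = np.array([0.0, 1e-4, 2e-4])          # x samples along the line (m)
ux_pts = np.array([0.0, 4.09e5, 8.18e5])     # measured ux at those x (1/m)

slope = np.polyfit(x_pts, ux_pts, 1)[0]      # du_x/dx from a least-squares fit
z = nr / (lam * slope)                       # depth above the focal plane (m)
print(f"z = {z * 1e3:.2f} mm")               # ~0.50 mm for these numbers
```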

Fig. 1 Phase-space measurements for three point sources at different depths, with and without scattering media. (a) The 3D positions of the point sources. (b) A phase-space slice and intensity cut-line for the case of no scattering. Each point source creates a line whose intercept with ux = 0 defines its lateral position and whose slope defines its depth. (c) A phase-space slice and intensity cut-line for point sources in volumetric scattering material. The lines blur in proportion to each point source's depth inside the scattering material.


When volumetric scattering is introduced, light will be spread along the angle dimension as it propagates, causing the phase space function to blur along both r and u. Thus, point sources at deeper depths will be both more tilted and more spread out (see Fig. 1c). In the following section, we derive an analytical phase-space model for this volumetric scattering of a point source and use it as a forward model for our inverse algorithm, which attempts to identify the 3D location of many point sources simultaneously.

4. Algorithm

To reconstruct the 3D positions of the point sources from a 4D phase-space dataset, we first develop an analytic forward model for scattering, which is used in the inverse problem. By assuming a point source in homogeneous scattering media with Gaussian spreading statistics, our forward model becomes a tilted 4D Gaussian in phase space. It is derived with a multi-slice approach, in order to account for multiple scattering. Next, we use this forward model in solving the inverse problem, where we implement an atomic norm optimization procedure subject to constraints of sparsity and the physics of our forward model.

4.1. Forward model derivation

The volumetric scattering medium is modeled as a stack of infinitely many transverse planes of thin dynamic diffusers separated by homogeneous refractive index material. The Wigner function of the emitted light from a point source is thus propagated and scattered repeatedly as it passes through the scattering material. First, it propagates through a small distance, Δz, in the homogeneous material, as described in Eq. (3). Next, it is scattered by a diffusing plane, denoted by the operator 𝒟σ,N,

$$\mathcal{D}_{\sigma,N} W(\mathbf{r},\mathbf{u}) = \int W(\mathbf{r},\mathbf{u}')\, \frac{N}{2\pi\sigma^2} \exp\!\left(-\frac{N}{2\sigma^2}\left(\mathbf{u}-\mathbf{u}'\right)^2\right) d^2\mathbf{u}'. \tag{5}$$
Here σ is the scattering coefficient (defined as the standard deviation of the angular spreading) and W(r, u) is the Wigner function of the wave-field immediately before the diffuser. Thus, scattering is modeled as an in-place Gaussian spreading of the angle (frequency) information, which accurately describes dynamic diffusers and is a good approximation for static diffusers. Note that ∬𝒟σ,NW(r, u) d²u = ∬W(r, u′) d²u′, which means that the light spreads without changing its position or overall intensity. In our multi-slice approach, this process of propagate-and-scatter is repeated for each z step through the scattering material until the light reaches the exit plane. Multi-slice approaches [31–33] have been used previously for coherent wave-fields and nonlinear propagation [38]. Here, to apply these ideas in phase space, we seek an analytical expression for the limit of infinitesimal z steps.

Consider a point source located at (rs, zs), described by the Wigner function Ws(r, u) = δ(r − rs). The light then propagates a total distance ℓ through the volumetric scattering medium, divided into N layers. Letting WN(r, u) denote the Wigner function of the 2D field after the last layer of scattering material (exit plane), we have

$$W_N(\mathbf{r},\mathbf{u}) = \left(\mathcal{D}_{\sigma,N}\, \mathcal{P}_{\Delta z = \ell/N}\right)^{N} W_s(\mathbf{r},\mathbf{u}). \tag{6}$$
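As a sanity check, the repeated propagate-and-scatter of Eq. (6) can be simulated directly on a sampled phase-space slice. The following 1D sketch (grid sizes and parameter values are illustrative assumptions, not the paper's settings) alternates the shear of Eq. (3) with the Gaussian angular blur of Eq. (5); the result approaches the analytic N → ∞ limit derived next.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# 1D multi-slice sketch of Eq. (6): alternate the shear of Eq. (3) with the
# Gaussian angular spreading of Eq. (5) to push a point source's phase space
# through a depth L of scattering medium. All parameters are illustrative.

n_x, n_u = 512, 256
dx, du = 1e-6, 2e3                    # sampling in x (m) and ux (1/m)
lam, nr = 650e-9, 1.4
sigma = 2e4                           # total angular-spread std, in ux units
L, N = 1e-3, 50                       # medium thickness (m), number of slices

x = (np.arange(n_x) - n_x / 2) * dx
ux = (np.arange(n_u) - n_u / 2) * du
W = np.zeros((n_u, n_x))
W[:, n_x // 2] = 1.0                  # point source: a vertical line in phase space

dz = L / N
for _ in range(N):
    for i, u in enumerate(ux):        # propagate dz: shear each ux-row in x (Eq. (3))
        W[i] = np.interp(x - lam * dz * u / nr, x, W[i], left=0.0, right=0.0)
    # scatter: blur along ux with per-slice std sigma/sqrt(N) (Eq. (5))
    W = gaussian_filter1d(W, sigma / np.sqrt(N) / du, axis=0)
# W now approximates the tilted Gaussian of the N -> infinity limit, Eq. (7).
```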

In order to derive the analytical phase-space model for the case of N → ∞, W∞(r, u), we use Feynman path integrals [39],

$$W_\infty(\mathbf{r},\mathbf{u}) = \lim_{N\to\infty} W_N(\mathbf{r},\mathbf{u}) = \frac{n_r^2}{2\pi\lambda^2\ell^2\sigma^2} \exp\!\left(-\frac{n_r^2}{2\lambda^2\ell^2\sigma^2}\left(\mathbf{r}-\mathbf{r}_s-\lambda\ell\,\mathbf{u}/n_r\right)^2\right). \tag{7}$$
Equation (7) is our forward model describing the phase space at the exit of the scattering medium for a point source at depth zs. In our actual imaging system, however, the exit plane is at axial distance zd, which is not necessarily the native focus plane of the microscope. Therefore, the phase space that we measure may include an extra propagation operation due to the imaging system. The light still passes through a distance ℓ = zd − zs of scattering material, and is then subject to an extra free-space back-propagation of W∞(r, u) from the exit plane, z = zd, to the actual focus, z = 0. Note that the interface between different refractive indices does not affect the phase-space function, unlike the light field, since the complex field does not change across the interface. As a result, the Wigner function at our measurement plane is described by W(r, u) = 𝒫′−zd W∞(r, u), where 𝒫′ represents propagation in air. Accounting for the difference in refractive index between air and the scattering medium, the expression becomes
$$W(\mathbf{r},\mathbf{u}) = \frac{n_r^2}{2\pi\lambda^2(z_d-z_s)^2\sigma^2} \exp\!\left(-\frac{n_r^2}{2\lambda^2(z_d-z_s)^2\sigma^2}\left(\mathbf{r}-\mathbf{r}_s+\lambda\!\left(z_d-\frac{z_d-z_s}{n_r}\right)\mathbf{u}\right)^2\right). \tag{8}$$
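For intuition, the forward model is easy to evaluate on a 2D (x, ux) slice. The sketch below uses the 1D analogue of Eq. (8) (one transverse dimension, Gaussian normalized in 1D); all parameter values are illustrative assumptions, and σ in particular is chosen only to make the blur visible on this grid.

```python
import numpy as np

# 1D-slice analogue of the forward model, Eq. (8): a tilted Gaussian in
# (x, ux) whose tilt and blur width both grow with the source depth
# inside the medium. All values below are illustrative assumptions.

def atom_slice(x, ux, xs, zs, zd, lam=650e-9, nr=1.4, sigma=4e7):
    s = lam * (zd - zs) * sigma / nr                  # blur std transverse to the line
    X, U = np.meshgrid(x, ux)                         # rows: ux, columns: x
    arg = X - xs + lam * (zd - (zd - zs) / nr) * U    # distance from the source's line
    return np.exp(-arg**2 / (2 * s**2)) / np.sqrt(2 * np.pi * s**2)

x = np.linspace(-1e-3, 1e-3, 256)                     # transverse positions (m)
ux = np.linspace(-2e5, 2e5, 128)                      # spatial frequencies (1/m)
W_shallow = atom_slice(x, ux, xs=0.0, zs=4.5e-3, zd=5e-3)  # 0.5 mm deep: narrow blur
W_deep = atom_slice(x, ux, xs=0.0, zs=0.0, zd=5e-3)        # 5 mm deep: wide blur
```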

4.2. Reconstruction

The goal for reconstruction is to find both the transverse position rs and the depth zs of many point sources simultaneously. Our method is based on minimizing the atomic norm [40], which promotes sparsity in the solution, in a similar fashion to compressed sensing. Using Eq. (8), we can calculate the phase space for each possible point source in 3D. The collection of all of these phase-space functions forms an atom set. The phase-space measurement from multiple point sources will then be a non-negative linear combination of the atoms in this set. Our algorithm aims to decompose the measurements into a sparse combination of these atoms by minimizing the reconstruction error while jointly optimizing for a sparse number of atoms.

First, we define the atom set using the forward model in Eq. (8),

$$\mathcal{A} = \left\{ a(\mathbf{r},\mathbf{u};\mathbf{r}_s,z_s) = \frac{n_r^2}{2\pi\lambda^2(z_d-z_s)^2\sigma^2}\, e^{-\frac{n_r^2}{2\lambda^2(z_d-z_s)^2\sigma^2}\left(\mathbf{r}-\mathbf{r}_s+\lambda\left(z_d-(z_d-z_s)/n_r\right)\mathbf{u}\right)^2},\ \forall\, \mathbf{r}_s, z_s \right\}, \tag{9}$$
where the atom a(r, u; rs, zs) is a 4D function of (r, u) parametrized by the 3D position (rs, zs). Therefore, the decomposed weighted sum is
$$\hat{I}(\mathbf{r},\mathbf{u}) = \sum_{\mathbf{r}_s, z_s} c(\mathbf{r}_s, z_s)\, a(\mathbf{r},\mathbf{u};\mathbf{r}_s, z_s), \tag{10}$$
where c(rs, zs) is a non-negative coefficient describing the radiating power of a point source at position (rs, zs). Equation (10) can be regarded as a phase space prediction of the expected measurement data, given our current estimate of the object. To find the sparse c(rs, zs) that minimizes the difference between the prediction and the measured data, we solve the following optimization problem
$$\min_{c \geq 0}\ \sum_{\mathbf{r},\mathbf{u}} \left| I(\mathbf{r},\mathbf{u}) - \hat{I}(\mathbf{r},\mathbf{u}) \right|^2 + \mu \sum_{\mathbf{r}_s, z_s} \left| c(\mathbf{r}_s, z_s) \right|, \tag{11}$$
where μ is a tuned regularization constant and I(r, u) is the measured phase space. Here the sum is over the continuous parameter space of all possible source locations. In this work, we discretize the set of candidate positions. After discretization, the non-smooth convex optimization reduces to the popular ℓ1-minimization problem, also known as the LASSO [41]. This formulation encourages sparsity in the recovered point sources and performs well in our experiments. We use an accelerated proximal gradient method [42], which iteratively minimizes the reconstruction error (the squared term) and “shrinks” the c vector towards a sparse solution. We then threshold the recovered c(rs, zs) by τ and spatially cluster it [43], before estimating the position of each point source as the centroid of its cluster.
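A minimal sketch of this inner solve follows, with plain (unaccelerated) proximal gradient descent standing in for the accelerated method of [42], and a random stand-in dictionary in place of the real atoms so that it runs standalone. In the actual problem, the columns of A would be the discretized atoms of Eq. (9) flattened to vectors, and y the flattened phase-space measurement.

```python
import numpy as np

# Non-negative LASSO (Eq. (11)) by proximal gradient descent (a sketch).
# A: stand-in atom dictionary; y: synthetic measurement, so this runs as-is.

rng = np.random.default_rng(0)
n_pix, n_atoms = 400, 120
A = np.abs(rng.normal(size=(n_pix, n_atoms)))       # stand-in atom dictionary
c_true = np.zeros(n_atoms)
c_true[[7, 42, 99]] = [1.0, 0.6, 0.8]               # three hidden "point sources"
y = A @ c_true + 0.01 * rng.normal(size=n_pix)      # noisy measurement

def nonneg_lasso(A, y, mu, n_iter=500):
    """Minimize sum((y - A c)^2) + mu * sum(c) subject to c >= 0."""
    step = 1.0 / (2 * np.linalg.norm(A, 2) ** 2)    # 1/L for the data-term gradient
    c = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = 2 * A.T @ (A @ c - y)                # gradient of the squared error
        c = np.maximum(c - step * (grad + mu), 0.0) # shrink, then project to c >= 0
    return c

c_hat = nonneg_lasso(A, y, mu=0.1)
support = np.flatnonzero(c_hat > 1e-3)              # thresholded source candidates
```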

While the parameters nr and zd in Eq. (8) can be reasonably assumed to be known (or measured), the amount of scattering is often unknown and cannot be assumed a priori. Thus, our algorithm implements an outer loop optimization over different scattering coefficients in order to solve for σ as well as the point source positions. We generate multiple 𝒜 sets with different σ and solve Eq. (11) with the Î given by each 𝒜, one at a time. The σ that minimizes Eq. (11) and the detected points in that optimization are chosen to be the final outcome of the algorithm.
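This outer loop can be sketched as a simple grid search, reusing nonneg_lasso from the sketch above; build_dictionary is a hypothetical helper that would discretize the atoms of Eq. (9) for a candidate σ, and y is the flattened measurement as before.

```python
import numpy as np

# Grid search over the unknown scattering coefficient: rebuild the atom
# dictionary for each candidate sigma, solve Eq. (11), keep the best fit.

def objective(A, y, c, mu):
    """Value of Eq. (11) for a candidate solution."""
    return np.sum((y - A @ c) ** 2) + mu * np.sum(np.abs(c))

mu = 0.1
best_val, best_sigma, best_c = np.inf, None, None
for sigma in np.logspace(-4, -1, 10):        # candidate scattering coefficients
    A_sigma = build_dictionary(sigma)        # hypothetical: atoms of Eq. (9) for this sigma
    c_hat = nonneg_lasso(A_sigma, y, mu)     # inner solve (sketch above)
    val = objective(A_sigma, y, c_hat, mu)
    if val < best_val:
        best_val, best_sigma, best_c = val, sigma, c_hat
```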

5. Experiments

To measure the Fourier spectrogram (Eq. (2)), we require a programmable aperture in the Fourier domain of the object [29]. Building a folded 4f system (FL1 = 225 mm, FL2 = 175 mm) after the object, we place a reflective DMD (DLP® Discovery 4100, 0.7″ XGA) in the Fourier plane in order to rapidly create and scan the apertures, while capturing real-space intensity images (see Fig. 2a). Our sCMOS camera (PCO.edge 5.5) is placed at the output of the 4f system and fully synced with the DMD controller via a hardware link. Acquisition can achieve camera-limited maximum frame rates (∼1 kHz), but is typically limited by light efficiency to tens of seconds per dataset. Conveniently, phase-space sampling density can be flexibly traded off against acquisition time and noise performance. We capture data at the highest resolution possible, though it may be excessive for the simple objects used here. Alternatively, the optical system could be replaced by a single-shot light field camera [16], at the cost of lower resolution.

Fig. 2 Experimental localization of LEDs at different depths, with and without rotating diffusers between them. (a) The setup uses an object consisting of 3 LEDs and a spectrogram measurement system that employs a DMD in Fourier space to scan the aperture. (b–d) Intensity, phase space (along the green line) and recovered LED positions for the cases (b) without scattering, (c) with diffuser 3 only, and (d) with all three diffusers. The algorithm successfully recovers the position and depth of each LED in all cases.


We perform two sets of experiments, using LEDs placed at various locations within a 3D volume. First, we scan a 1D window (strip) across the DMD and collect 2D phase-space data for the case of multiple discrete rotating (dynamic) diffusers. Second, we scan a 2D window across the DMD and collect 4D phase-space data for the case of LEDs embedded in scattering gelatin, to demonstrate the applicability of the algorithm to volumetric scattering effects.

5.1. 1D object with multiple thin diffusers

The object for the 1D experiment consists of 3 LEDs (dimensions 0.9 mm × 1.5 mm) in a line, which serve as point sources with different transverse (x) and depth (z) positions. The LEDs have center wavelength 657 nm and bandwidth 38.8 nm; we use a bandpass color filter (650 ± 5 nm) to suppress dispersion effects from the DMD. The window has a width of 1.74 mm and is stepped across 225 positions in angle space (step size 54 μm) to use the full area of the DMD. The 2D phase-space slices are computed along the green line (averaged over a small area) and plotted at the corresponding ux in Fig. 2. The phase-space image is then input to our algorithm to recover the positions and depths of the point sources. In the algorithm, we set up 1000 sample points in the x coordinate and 200 sample planes in z. The sampling size is 11.7 μm in x and 1.5 mm in z. The scattering parameter, σ, was found to be 3 × 10⁻⁴ for the case of no scattering and 1.1 × 10⁻² for both the 1-diffuser and 3-diffuser cases.

With no diffusers (non-scattering case), the LED positions are easily determined from the phase-space measurements (Fig. 2b). Each LED traces out a distinct line through the phase-space measurement, allowing us to find its lateral and depth position using our algorithm. Notice that the traditional 2D intensity image (taken with an open window in Fourier space) does not contain sufficient information for depth detection, even in the non-scattering case.

When a thin rotating diffuser is placed after the LEDs (diffuser 3 in Fig. 2a), the phase space lines become blurred, with LEDs further away from the camera blurring more, as expected (Fig. 2c). In this case, the intensity image is completely diffused. However, the three lines in phase space are still distinguishable, and so the three point sources can still be located with good accuracy. Interestingly, as additional diffusers are added between LEDs to create a multiple scattering situation, the phase space blurring only becomes marginally worse (Fig. 2d). This demonstrates that the scattering material closest to the exit plane will play the largest role, since more light passes through it. Even with three rotating diffusers placed between the LEDs, we are able to successfully recover the position and depth of the LEDs.

5.2. 2D object with volumetric scattering material

Next, we extend our experiments to 4D acquisition and truly volumetric scattering. The target consists of 8 LEDs (center wavelength 524 nm, bandwidth 32.0 nm) imaged both with and without a surrounding scattering medium of chopped gelatin in air. The air gaps between the diffusing gelatin create a refractive index mismatch which provides a strongly forward-scattering situation intended to mimic that of biological tissue. The scattering is assumed to be homogeneous across the volume since the gelatin is distributed approximately uniformly. The data collection procedure scans a 2D square window with side length 1.09 mm across 26 × 26 positions (step size 365 μm) on the DMD, collecting 676 images (16 bit) in total, each with 5.53 megapixels. This large dataset (∼7 GB) demonstrates the ability of our system to capture very high resolution phase-space data, and by implementing our algorithm in a distributed computing environment, we are able to solve the computational inverse problem. However, since the density of point sources is small in this experiment, we find that it is sufficient to down-sample the 4D data to 128 × 128 × 8 × 8 before running our algorithm. We reconstruct a 3D result of 128 × 128 × 128 voxels and recover a scattering parameter σ of 5 × 10⁻³ for the case without scattering and 1 × 10⁻² for the case with scattering.

The results of this experiment are shown in Fig. 3 for measurements taken both before and after adding the scattering gelatin. We illustrate the 4D data by showing only a few 2D slices along ux–x and uy–x of the acquired 4D phase space. Each line in a 2D slice corresponds to a portion of the plane generated by a point source. After adding the scattering material, the intensity images blur beyond distinction, and the phase-space slices also blur. However, the phase-space images still retain sufficient information for localizing each LED. Note that even when the 2D phase-space slices seem completely blurred, there may still be sufficient information for localization when considering the entire 4D dataset. Thus, the 3D recovery succeeds even in strongly multiply scattering situations. Of course, the algorithm will fail with too much scattering or with reduced phase-space sampling. In the resulting 3D localizations, the variance of the estimate in the z direction is somewhat worse than in the lateral directions, due to the small range of angles captured. One of the LEDs is localized at an errant position, which we believe is due to artifacts caused by the blocking of some light by the wire leads on the LEDs. For the non-occluded LEDs, however, the algorithm correctly localizes the LEDs with good accuracy, and the recovered scattering parameters are consistent with those found in the 1D experiments.

Fig. 3 4D phase-space experiments with and without volumetric scattering. The first column contains the 2D intensity images captured by a traditional camera, and the second column shows example 2D phase-space slices (along the red lines in the intensity images). The third column shows the recovered x–z positions of the point sources (y not shown) and the last column shows the reconstructed 3D positions of the LEDs. Each colored dot corresponds to a successfully recovered LED position, while red dots mark failures due to occlusion by the finite size of the LEDs.


6. Discussion

Our setup is analogous to that presented in [29], with several key differences. Instead of placing the camera in Fourier space and scanning a real-space window, here we place the camera in real space and scan a Fourier-space window. This leads to better real-space resolution and a larger field of view, even when down-sampling the aperture positions to reduce acquisition time. Additionally, since most objects have a large DC term, capturing images in real space (as opposed to Fourier space) tends to alleviate the dynamic range problems encountered in [29]. Finally, instead of an LCoS modulator, a DMD is used here because it is orders of magnitude faster in modulating the light. Since the DMD is not polarization sensitive, we further obtain 2× better photon efficiency for unpolarized objects and avoid crossed-polarizer leakage. The disadvantages of the DMD are that it suffers from color dispersion artifacts and must be used in a folded geometry, which can cause aberrations at high angles.

7. Conclusion

The experiments presented here offer a proof of concept for our proposed method of using 4D phase-space measurements to robustly localize a sparse set of point sources through both dynamic and volumetric scattering media. The method does not require coherent or active illumination, so it works with existing imaging systems. Future work will exploit the large pixel counts of both our camera and DMD to capture very large 4D datasets (∼terabytes) and localize millions of point sources in scattering media. We plan to implement faster multiplexing methods [44–46] and explore alternative phase-space imaging schemes, such as modified spectrograms [47], light fields [16] and point-scanning systems [48–51].

Acknowledgments

The authors thank Ziji Liu for help with experiments and Jason Fleischer for helpful discussions. This work was supported by the Office of Naval Research (grant N00014-14-1-0083).

References and links

1. L. Wang, P. Ho, C. Liu, G. Zhang, and R. Alfano, “Ballistic 2-D imaging through scattering walls using an ultrafast optical Kerr gate,” Science 253, 769–771 (1991).

2. W. Denk, J. H. Strickler, and W. W. Webb, “Two-photon laser scanning fluorescence microscopy,” Science 248, 73–76 (1990).

3. D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee, T. Flotte, K. Gregory, C. A. Puliafito, et al., “Optical coherence tomography,” Science 254, 1178–1181 (1991).

4. S. Farsiu, J. Christofferson, B. Eriksson, P. Milanfar, B. Friedlander, A. Shakouri, and R. Nowak, “Statistical detection and imaging of objects hidden in turbid media using ballistic photons,” Appl. Opt. 46, 5805–5822 (2007).

5. J. Kim, D. Lanman, Y. Mukaigawa, and R. Raskar, “Descattering transmission via angular filtering,” in Computer Vision – ECCV 2010, vol. 6311 of Lecture Notes in Computer Science, K. Daniilidis, P. Maragos, and N. Paragios, eds. (Springer, 2010), pp. 86–99.

6. D. V. Dylov, L. Waller, and J. W. Fleischer, “Instability-driven recovery of diffused images,” Opt. Lett. 36, 3711–3713 (2011).

7. M. Cui and C. Yang, “Implementation of a digital optical phase conjugation system and its application to study the robustness of turbidity suppression by phase conjugation,” Opt. Express 18, 3444–3455 (2010).

8. C.-L. Hsieh, Y. Pu, R. Grange, G. Laporte, and D. Psaltis, “Imaging through turbid layers by scanning the phase conjugated second harmonic radiation from a nanoparticle,” Opt. Express 18, 20723–20731 (2010).

9. E. J. McDowell, M. Cui, I. M. Vellekoop, V. Senekerimyan, Z. Yaqoob, and C. Yang, “Turbidity suppression from the ballistic to the diffusive regime in biological tissues using optical phase conjugation,” J. Biomed. Opt. 15, 025004 (2010).

10. S. Gigan, S. Popoff, A. Liutkus, D. Martina, O. Katz, G. Chardon, R. Carminati, G. Lerosey, M. Fink, A. Boccara, I. Carron, and L. Daudet, “Image transmission through a scattering medium: Inverse problem and sparsity-based imaging,” in 13th Workshop on Information Optics (WIO, 2014), pp. 1–3.

11. M. Kim, Y. Choi, C. Yoon, W. Choi, J. Kim, Q.-H. Park, and W. Choi, “Maximal energy transport through disordered media with the implementation of transmission eigenchannels,” Nat. Photonics 6, 583–587 (2012).

12. S. M. Popoff, G. Lerosey, R. Carminati, M. Fink, A. C. Boccara, and S. Gigan, “Measuring the transmission matrix in optics: An approach to the study and control of light propagation in disordered media,” Phys. Rev. Lett. 104, 100601 (2010).

13. I. Vellekoop and A. Mosk, “Focusing coherent light through opaque strongly scattering media,” Opt. Lett. 32, 2309–2311 (2007).

14. N. Ji, D. E. Milkie, and E. Betzig, “Adaptive optics via pupil segmentation for high-resolution imaging in biological tissues,” Nat. Methods 7, 141–147 (2010).

15. M. Bastiaans, “The Wigner distribution function applied to optical signals and systems,” Opt. Commun. 25, 26–30 (1978).

16. R. Ng, M. Levoy, M. Bredif, G. Duval, M. Horowitz, and P. Hanrahan, “Light field photography with a hand-held plenoptic camera,” Tech. Rep. CTSR 2005-02 (Stanford University, 2005).

17. M. Levoy, “Light fields and computational imaging,” Computer 39, 46–55 (2006).

18. J. Rosen and D. Abookasis, “Seeing through biological tissues using the fly eye principle,” Opt. Express 11, 3605–3611 (2003).

19. I. Moon and B. Javidi, “Three-dimensional visualization of objects in scattering medium by use of computational integral imaging,” Opt. Express 16, 13080–13089 (2008).

20. D. Abookasis and T. Moshe, “Feasibility study of hidden flow imaging based on laser speckle technique using multiperspectives contrast images,” Opt. Lasers Eng. 62, 38–45 (2014).

21. J. Goodman, W. Huntley, Jr., D. Jackson, and M. Lehmann, “Wavefront-reconstruction imaging through random media,” Appl. Phys. Lett. 8, 311–313 (1966).

22. H. Kogelnik and K. Pennington, “Holographic imaging through a random medium,” J. Opt. Soc. Am. 58, 273–274 (1968).

23. J. D. Gaskill, “Imaging through a randomly inhomogeneous medium by wavefront reconstruction,” J. Opt. Soc. Am. 58, 600–608 (1968).

24. E. Small, O. Katz, and Y. Silberberg, “Spatiotemporal focusing through a thin scattering layer,” Opt. Express 20, 5189–5195 (2012).

25. A. K. Singh, D. N. Naik, G. Pedrini, M. Takeda, and W. Osten, “Looking through a diffuser and around an opaque surface: A holographic approach,” Opt. Express 22, 7694–7701 (2014).

26. O. Katz, E. Small, and Y. Silberberg, “Looking around corners and through thin turbid layers in real time with scattered incoherent light,” Nat. Photonics 6, 549–553 (2012).

27. O. Katz, P. Heidmann, M. Fink, and S. Gigan, “Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations,” Nat. Photonics 8, 784–790 (2014).

28. K. T. Takasaki and J. W. Fleischer, “Phase-space measurement for depth-resolved memory-effect imaging,” Opt. Express 22, 31426–31433 (2014).

29. L. Waller, G. Situ, and J. Fleischer, “Phase-space measurement and coherence synthesis of optical beams,” Nat. Photonics 6, 474–479 (2012).

30. H. Bartelt, K. Brenner, and A. Lohmann, “The Wigner distribution function and its optical production,” Opt. Commun. 32, 32–38 (1980).

31. J. M. Cowley and A. F. Moodie, “The scattering of electrons by atoms and crystals. I. A new theoretical approach,” Acta Crystallogr. 10, 609–619 (1957).

32. A. M. Maiden, M. J. Humphry, and J. M. Rodenburg, “Ptychographic transmission microscopy in three dimensions using a multi-slice approach,” J. Opt. Soc. Am. A 29, 1606–1614 (2012).

33. L. Tian and L. Waller, “3D intensity and phase imaging from light field measurements in an LED array microscope,” Optica 2, 104–111 (2015).

34. M. Testorf, B. Hennelly, and J. Ojeda-Castañeda, Phase-Space Optics: Fundamentals and Applications (McGraw-Hill Professional, 2009).

35. Z. Zhang and M. Levoy, “Wigner distributions and how they relate to the light field,” in IEEE International Conference on Computational Photography (ICCP, 2009), pp. 1–10.

36. M. A. Alonso, “Wigner functions in optics: describing beams as ray bundles and pulses as particle ensembles,” Adv. Opt. Photon. 3, 272–365 (2011).

37. M. Levoy, B. Chen, V. Vaish, M. Horowitz, I. McDowall, and M. Bolas, “Synthetic aperture confocal imaging,” ACM Trans. Graph. 23, 825–834 (2004).

38. G. Agrawal, Nonlinear Fiber Optics, 3rd ed. (Academic Press, 2001).

39. R. MacKenzie, “Path integral methods and applications,” arXiv e-prints (1999).

40. B. Bhaskar, G. Tang, and B. Recht, “Atomic norm denoising with applications to line spectral estimation,” IEEE Trans. Signal Process. 61, 5987–5999 (2013).

41. R. Tibshirani, “Regression shrinkage and selection via the lasso,” J. R. Stat. Soc. Ser. B 58, 267–288 (1996).

42. S. Wright, R. Nowak, and M. Figueiredo, “Sparse reconstruction by separable approximation,” IEEE Trans. Signal Process. 57, 2479–2493 (2009).

43. G. Tang, B. Bhaskar, and B. Recht, “Sparse recovery over continuous dictionaries – just discretize,” in Asilomar Conference on Signals, Systems and Computers (2013), pp. 1043–1047.

44. L. Tian, X. Li, K. Ramchandran, and L. Waller, “Multiplexed coded illumination for Fourier ptychography with an LED array microscope,” Biomed. Opt. Express 5, 2376–2389 (2014).

45. H. Nagahara, C. Zhou, T. Watanabe, H. Ishiguro, and S. K. Nayar, “Programmable aperture camera using LCoS,” in Computer Vision – ECCV 2010 (Springer, 2010), pp. 337–350.

46. C.-K. Liang, T.-H. Lin, B.-Y. Wong, C. Liu, and H. H. Chen, “Programmable aperture photography: multiplexed light field acquisition,” ACM Trans. Graph. 27, 55 (2008).

47. J. K. Wood, K. A. Sharma, S. Cho, T. G. Brown, and M. A. Alonso, “Using shadows to measure spatial coherence,” Opt. Lett. 39, 4927–4930 (2014).

48. A. Wax and J. E. Thomas, “Measurement of smoothed Wigner phase-space distributions for small-angle scattering in a turbid medium,” J. Opt. Soc. Am. A 15, 1896–1908 (1998).

49. F. Robles, R. Graf, and A. Wax, “Dual window method for processing spectroscopic optical coherence tomography signals with simultaneously high spectral and temporal resolution,” Opt. Express 17, 6799–6812 (2009).

50. F. Robles, C. Wilson, G. Grant, and A. Wax, “Molecular imaging true-colour spectroscopic optical coherence tomography,” Nat. Photonics 5, 744–747 (2011).

51. F. E. Robles and A. Wax, “Separating the scattering and absorption coefficients using the real and imaginary parts of the refractive index with low-coherence interferometry,” Opt. Lett. 35, 2843–2845 (2010).

