Performing imaging with scattered light is challenging due to the complex and random modulation imposed upon the light by the scatterer. Persistent correlations, such as the optical memory effect (ME), enable high-fidelity, diffraction-limited imaging through scattering media without any prior knowledge of or access to the scattering media. However, conventional ME techniques have been limited to gray-scale imaging. We overcome this restriction by using spectral coding and compressed sensing to realize snapshot color imaging through scattering media. We demonstrate our method and obtain high-fidelity multispectral images using both emulated data (spanning the visible and infrared) and experimental data (in the visible).
© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
Imaging through scattering media is of great interest and has important applications in many fields, such as biological, medical, and astronomical imaging. The challenge is that the optical paths of the incident photons are changed in complicated ways as they pass through the scatterer, resulting in a seemingly random speckle pattern on the other side. Several imaging techniques have been developed to circumvent the effects of the scatterer and extract information about the object from the speckle pattern. Examples include time-gating [1], adaptive optics [2], wavefront shaping [3–5], transmission matrix methods [6–9], and deep-learning-based methods [10–12]. However, these techniques require either prior knowledge about or access to the object or scattering medium for accurate results. For example, transmission matrix methods require one to measure the system point spread function (PSF), while deep learning methods must be supplied with representative object–speckle training pairs. These requirements, which amount to having access to or control over both sides of the scattering medium, can be impractical in many applications.
In contrast, a separate approach known as memory effect (ME) imaging [14,15] enables one to perform imaging through a scatterer without any prior knowledge about the scatterer or object. The ME approach relies on the fact that object points within the ME range of the scatterer [16,17] yield highly correlated speckle patterns, which can be computationally processed to produce an image of the object using only a single speckle measurement. While promising, ME imaging has traditionally been limited by the requirement that the illumination be narrowband (less than 1 nm) in order to produce high-contrast speckle patterns and, correspondingly, high-fidelity images.
To overcome ME imaging’s traditional constraint of narrowband operation, several methods have been developed to mitigate or resolve the spectral content of the signal. For example, broadband (more than 10–15 nm) and multispectral reference illumination have been used to generate high-fidelity gray-scale images through a scatterer [19–21]; however, these approaches have limited utility for applications that require spectral information [13,22–24]. To date, two methods have been developed for realizing color ME imaging. The first approach requires access to the optical source or the object prior to the scatterer [8,25], which is incompatible with the goals of noninvasive imaging and, as such, sacrifices one of ME imaging’s key advantages. The second method relies on direct measurement of the particular system’s PSF, for example, using a Bayer filter [26] or a monochrome camera [27]. Regardless of the choice of detector, these methods assume prior knowledge of the wavelength-dependent PSF of the system and employ a deconvolution-based algorithm to recover the speckle associated with each spectral channel. While these methods perform well, measurement of the wavelength-dependent PSF may be impractical or impossible, such as when the scatterer is dynamic or not accessible ahead of the imaging task. Furthermore, the application of deconvolution-based image processing implicitly assumes that the PSFs associated with different spectral channels are uncorrelated, which is not necessarily true and is inconsistent with the fundamental assumptions of ME imaging.
In this paper, we propose and demonstrate a method for realizing multispectral imaging of objects through scattering media using only a single speckle measurement with a monochrome camera. In contrast to previous approaches, we require neither a priori knowledge about the system PSF or source spectrum nor any assumptions about the properties of the spectral content of the signal. To accomplish this, we employ a coded aperture in combination with a wavelength-dependent element (e.g., a prism or grating) to encode the spectral content within the low-contrast, multiplexed speckle signal [28–31]. We then use a compressed sensing (CS) algorithm to recover the speckle signals associated with each independent spectral channel, which are then processed via correlation-based algorithms [32,33]. Using this approach, we produce high-fidelity images of a target object at five well-separated spectral channels between 450 nm and 750 nm using emulated data. We also demonstrate experimental imaging of three well-separated spectral channels between 450 nm and 650 nm and six contiguous spectral channels between 515 nm and 575 nm.
The rest of the paper is organized as follows. Section 2 introduces the forward model and implementation at a general level, and Section 3 describes the details of implementation as well as the data processing techniques. Section 4 demonstrates our results with both emulated and experimental data, and Section 5 discusses the fundamental limitations and practical concerns of our technique. Finally, the last section summarizes the work and discusses future opportunities.
2. PRINCIPLE OF OPERATION
At a conceptual level, imaging through a scatterer involves three key elements: an object (either externally illuminated or self-luminous), a scatterer, and a detector (see Fig. 1). As in conventional ME systems, we consider an object whose angular extent is contained within the ME field of view and located a distance behind a scatterer. After interacting with the scatterer, the light propagates a further distance before arriving at a detector. While conventional ME imaging involves a standard camera as the detector, we instead employ a coded detector module that includes a coded aperture and a wavelength-dependent optical element. The purpose of this element is to uniquely modulate each spectral channel prior to their combination and transduction at the monochrome detector. Thus, rather than simply measuring a low-contrast speckle whose spectral channels are inextricably mingled, we instead record a spectrally multiplexed signal that is well conditioned for separation via computational methods. It is important to note that our capacity to demultiplex this measurement is based on our knowledge of the properties of the coded aperture and wavelength-dependent element, and does not require any characterization of or assumptions about the scatterer or light source.
To explain how the process works in greater detail, we describe quantitatively the measurement scheme in the context of conventional ME physics. For objects contained within the ME field of view, the speckle pattern at a single wavelength λ is the convolution of the object O_λ with the system PSF P_λ at that wavelength, which can be written as I_λ = O_λ ∗ P_λ (1). In a conventional measurement, the speckle associated with a multispectral object is therefore the combination of all of the associated wavelength-dependent speckle patterns, I = Σ_λ I_λ (2), a low-contrast pattern in which the spectral channels are inseparably mixed. In our coded detector, each channel’s speckle is instead modulated by the coded aperture C and laterally shifted by the dispersive element before summation at the monochrome detector, so that the measurement becomes I_meas = Σ_λ S_λ[C · (O_λ ∗ P_λ)] (3), where S_λ denotes the wavelength-dependent shift. (See Fig. 2 and Fig. S3 in Supplement 1.)
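As an illustrative sketch (not the authors' code), the forward model of Eq. (3) can be emulated in a few lines of NumPy. The array sizes, the use of circular convolution, and the integer dispersion shifts are simplifying assumptions made here for clarity:

```python
import numpy as np

rng = np.random.default_rng(0)

def coded_multiplexed_speckle(objects, psfs, code, shifts):
    """Emulate the coded detector measurement of Eq. (3): each spectral
    channel's speckle (object convolved with that wavelength's PSF) is
    masked by the coded aperture, shifted by the prism dispersion, and
    summed on the monochrome detector."""
    meas = np.zeros(code.shape)
    for obj, psf, dx in zip(objects, psfs, shifts):
        # per-channel speckle: circular convolution via the FFT
        speckle = np.real(np.fft.ifft2(np.fft.fft2(obj, s=code.shape) *
                                       np.fft.fft2(psf, s=code.shape)))
        coded = code * speckle              # coded-aperture modulation
        meas += np.roll(coded, dx, axis=1)  # wavelength-dependent shift
    return meas
```

In a real system the per-channel PSFs are unknown; this sketch is only useful for building intuition about the structure of the multiplexed measurement and for generating emulated data.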
After we measure the multiplexed speckle, the next step is to use knowledge of the coding and dispersion to separately recover the speckle in each spectral band. We note, first, that Eq. (3) is an underdetermined system and, as such, necessitates a CS approach to inversion. Such approaches rely on the recovered signal of interest being sparse in some basis; while appropriate bases are well known for some types of images (e.g., natural scenes), the random-seeming speckle signals require a different approach. Following the method we developed in the context of temporal coding of scatter [31] (the details of which are outlined in Section 6 of Supplement 1), we employ dictionary learning to determine a sparse basis with which to represent the speckle. The resulting dictionary, trained on speckle images from a variety of different measurement configurations, is very general and does not depend on the specific objects and scatterers involved in generating the measurement. To be explicitly clear, this dictionary training step does not require access to the scatterer used in the actual imaging configuration; instead, off-line training with a distinct scatterer is sufficient. This is therefore consistent with our goal of performing color imaging with no a priori information about the scatterer. To recover the estimated speckle at each wavelength, we use a dictionary-based orthogonal matching pursuit (OMP) algorithm [34,35].
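The core OMP iteration can be sketched as follows. This is a generic, minimal NumPy implementation over an arbitrary dictionary, not the authors' patch-based, MIL-accelerated version; the dictionary `D`, sparsity budget `k`, and measurement `y` are illustrative placeholders:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily select up to k dictionary
    atoms (columns of D) and jointly least-squares fit their
    coefficients so that D @ x approximates the measurement y."""
    residual = y.astype(float).copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        # atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # re-fit all selected atoms jointly (the "orthogonal" step)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = coef
        residual = y - D @ x
        if np.linalg.norm(residual) < 1e-10:
            break
    return x
```

In the actual pipeline this greedy fit is applied patch-by-patch, with the dictionary atoms replicated per spectral channel according to the known code and shift, so that the recovered coefficients yield one speckle estimate per wavelength.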
Finally, we obtain an image of the object by calculating the autocorrelation of each spectral channel independently and then inverting the autocorrelation [32,33] at each wavelength (see Fig. 2). The images at each wavelength are then combined to generate a composite color image of the estimated object. This technique makes no assumptions about correlations between the spectral channels and requires only that the PSF be sufficiently random that its autocorrelation is relatively sharply peaked. In addition, it relies only on information about the coded detector (prior calibration of the coded aperture and a pretrained library), which makes the approach truly noninvasive and enables single-shot operation.
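The per-channel autocorrelation is conveniently computed in the Fourier domain. A minimal sketch (mean subtraction and the shift of the zero lag to the array center are conventional choices, not details taken from the paper):

```python
import numpy as np

def autocorrelation(img):
    """Mean-subtracted circular autocorrelation via the
    Wiener-Khinchin theorem: the autocorrelation is the inverse FFT
    of the power spectrum. fftshift places the zero lag at the
    array center."""
    f = np.fft.fft2(img - img.mean())
    ac = np.real(np.fft.ifft2(np.abs(f) ** 2))
    return np.fft.fftshift(ac)
```

Because the power spectrum discards the Fourier phase, the autocorrelation is invariant to translations of the speckle, which is precisely why the ME allows the object autocorrelation to be estimated from a shifted, scrambled speckle pattern.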
3. IMPLEMENTATION

A. Experimental Setup
As discussed above and shown in Fig. 1, our experiment requires an object, scatterer, and coded detector module. To generate the color object in a controlled and well-characterized way, we use a multistep process. We first use a broadband lamp (Newport 66921, 500 W) with a spectral range from 200 to 2500 nm, which then passes through an integrating sphere so that it is sufficiently spatially incoherent. The light then passes through optical elements (e.g., a spectral filter and/or a spatial light modulator (SLM)—see Section 1 in Supplement 1 for details) that act to define the spatiospectral properties of the object. In this work, we consider two types of objects: those with well-separated and contiguous spectra. In the former case, we choose the spacing of the spectral channels to exceed the spectral correlation length of the diffuser (i.e., the resulting speckle patterns are uncorrelated); in the latter case, we choose the spacing to be less than the spectral correlation length.
For the scatterer, we use a 600 grit ground glass diffuser (Thorlabs DG20-600-MD) that is mounted 24 cm away from the back focal plane of the imaging lens. The diffuser’s spectral decorrelation length is approximately 15 nm FWHM (see Fig. S2 in Supplement 1). The light scatters as it passes through the diffuser, and a 4 mm diameter aperture immediately after the diffuser limits the angular range of the scattered light reaching the detector (i.e., acts as a field stop). This field stop is placed at the scatterer for convenience but could be engineered into the optical train of the receiver and therefore does not require access to the scatterer. We note that the technique is not limited to a particular choice of diffuser and that it works equally well for diffuse reflections; the main requirement is that the scattering element should provide an accessible ME regime.
Relative to conventional snapshot ME imaging setups, the key addition to our system is the coded detector. We choose here to use a coded aperture snapshot spectral imager (CASSI) designed to operate between ultraviolet and near-IR wavelengths. In the coded detector, we use a binary, chrome-on-glass coded aperture with a 50% transmissive random pattern. After modulation by the coded aperture, the light passes through a double Amici prism, which imposes a wavelength-dependent shift while maintaining the system’s inline optical axis. Finally, an sCMOS camera (Andor Neo 5.5) focused on the coded aperture plane measures the optical signal. In order to reduce background light and stray reflections in the system, we implement a series of bellows between the optical elements and cover the setup with a black box.
B. Emulated Data

In addition to the fully experimental configuration described in Section 3.A, we also create emulated data by digitally combining empirical data. Given that the measurement is related to the object and the wavelength-dependent PSF according to Eq. (3), we first measure experimentally the uncoded speckle PSF at multiple wavelengths. For the data shown in Fig. 3, we employ bandpass filters centered at 450, 550, 650, 700, and 750 nm (Thorlabs FB450-10, FB550-10, FB650-10, FB700-10, and FB750-10, respectively), each with a 10 nm bandwidth (i.e., slightly within the ME spectral correlation length of the scatterer). We then digitally convolve a target color object with the appropriate spectral components of the measured PSF to generate the emulated speckle that would be produced by that object, and modulate the result by the code pattern. Here, we choose a 50% open, random binary code pattern with a fixed code feature size (in detector pixels). Finally, we apply a wavelength-dependent shift based on the known dispersion of the prism, and sum the resulting coded speckle signals together. The resulting emulated data are therefore based on a mixture of empirical measurement (e.g., of the PSFs and key system parameters) and known physics, but lack any explicit model error. As such, we can use this approach to explore system design tradeoffs and understand fundamental performance limits.
C. Data Processing
The input to the data processing algorithms is the gray-scale, multiplexed coded speckle image measured by the camera. We use only the central region of the frame, however, which contains enough speckle grains to produce high-quality autocorrelation patterns without including aberrations due to the relay optics. To recover the speckle from the individual spectral channels, we employ a dictionary-based OMP algorithm. We train our overcomplete speckle dictionary, which contains 512 dictionary elements, using the beta process factor analysis (BPFA) algorithm and over 73,000 speckle patches measured on a completely separate experimental setup (as described in Ref. [31]). Each speckle patch is small enough that training remains computationally tractable while still representing the speckle structure with high fidelity. With the dictionary in place, we use a matrix inversion lemma (MIL) implementation of OMP that projects the multiplexed speckle onto the dictionary space and iteratively calculates the optimal dictionary elements and corresponding coefficients to recover the subframe speckle patterns of each spectral channel (see Section 6 of Supplement 1 for additional discussion).
Once the separate speckle images at each spectral channel are determined, we proceed with correlation-based processing. When computing the autocorrelation of the recovered speckle signals, we first normalize the speckle image by dividing it by a low-pass version of itself (obtained by convolving the raw image with a uniform kernel). We then choose a patch at the center of the normalized image and smooth it using a Gaussian filter with a standard deviation of 1 pixel. To recover the object associated with each spectral channel, we take the central patch of the autocorrelation and normalize it by subtracting the minimal value and dividing the background-subtracted pattern by its maximal value. We then invert each autocorrelation separately using phase retrieval implemented via conjugate gradient descent. We use a random initial guess (although one could, instead, use a bispectrum-based initial guess to avoid the twin-image problem) and estimate the magnitude of the Fourier transform of the object by taking the square root of the Fourier transform of the product of the autocorrelation and a 2D Tukey window (to avoid edge effects). We enforce realness and nonnegativity constraints on the object and typically run several hundred iterations. To recover an accurate relative intensity (or gray-scale value) for each of the normalized spectral channels, we weight each one by the ratio of the mean recovered speckle intensity to the total signal (the sum over all pixels) in the estimated object at that channel. We note that this method makes no assumption about the number of channels, the overall spectral range, or the existence of correlations between spectral channels.
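The alternating-constraint structure of the phase retrieval step can be illustrated with a short NumPy sketch. The authors use a conjugate-gradient implementation with additional preprocessing (Tukey windowing, bispectrum-based initialization as an option); the sketch below substitutes the simpler Fienup error-reduction iteration, which enforces the same two constraints (measured Fourier magnitude; real, nonnegative object):

```python
import numpy as np

def error_reduction(fourier_mag, n_iter=200, seed=0):
    """Minimal Fienup-style error-reduction phase retrieval: alternate
    between enforcing the measured Fourier magnitude and enforcing
    realness / nonnegativity in the object domain."""
    rng = np.random.default_rng(seed)
    # random initial phase (a bispectrum-based guess could be used instead)
    phase = np.exp(2j * np.pi * rng.random(fourier_mag.shape))
    g = np.real(np.fft.ifft2(fourier_mag * phase))
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        # Fourier-domain constraint: replace magnitude, keep phase
        G = fourier_mag * np.exp(1j * np.angle(G))
        g = np.real(np.fft.ifft2(G))
        g[g < 0] = 0.0  # object-domain constraint: nonnegativity
    return g

# For an autocorrelation ac with its zero lag at the [0,0] corner, the
# Fourier magnitude input would be obtained (clipping small negative
# values caused by noise) as:
#   mag = np.sqrt(np.clip(np.real(np.fft.fft2(ac)), 0, None))
```

Error reduction converges more slowly than conjugate-gradient or hybrid input-output variants, but it exposes the two constraints explicitly, which is the point of the sketch.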
However, as discussed in Section 7 of Supplement 1, one can coregister the spectral channels by exploiting the existence of spatial correlations in the appearance of the object across different spectral bands and/or correlations between the speckle in different channels, if available.
4. RESULTS

A. Emulation Results: Well-Separated Multispectral Imaging
Figure 3 shows examples of multispectral, snapshot imaging through a scatterer for two emulated objects with well-separated spectra. As described in Section 3.B, we consider five separate spectral channels spanning 300 nm across the visible to near-IR. The top row of Fig. 3(a) contains the ground truth object, which consists of several numbers (shown both in false color and broken out by spectral channel). When plotting the false color object, we map the intensity profile of each wavelength into the CIE 1931 RGB space. The recovered object, shown in the bottom row of Fig. 3(a) both in false color and in terms of separate spectral channels, demonstrates that the technique yields excellent imaging performance and negligible cross talk between spectral channels. Finally, we demonstrate the fidelity of the recovered spectra by comparing the spectral intensity (averaged across all bright pixels) of the ground truth and recovered images in Fig. 3(b). Figures 3(c) and 3(d) show the corresponding ground truth and recovered multispectral images (along with associated spectra) for a cell from the stem of a cotton plant (spatial profile obtained via 100 Anatomy Botany Prepared Microscope Slides—Set D No. 13). For the purposes of the emulation, we assigned overlapping spatial regions of the cell structure to different spectral channels. To assess quantitatively the quality of the recovered images, we calculate the structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR) values relative to the ground truth object on a per-channel basis. Table 1 shows that each of the five channels has an SSIM index of 0.8–0.9 and a PSNR of more than 20 dB. Thus, despite the fact that the speckle signal itself is very low contrast when five 10 nm wide spectral bands are superimposed at the detector, our technique of coded detection can accurately recover the spatial–spectral object properties.
Furthermore, we demonstrate the capacity to perform color imaging beyond the traditional three-channel (i.e., RGB), visible imaging paradigm.
B. Experiment Results: Well-Separated Spectral Channels
While the emulation results demonstrate the power of the approach in a well-controlled environment, we turn next to fully experimental data. We consider three 8–12 nm wide spectral channels centered at 450, 550, and 650 nm that, when combined with different relative weights, generate a broad range of colors. Figure 4 shows a comparison between the ground truth and recovered object for a multicolor letter “H.” The object’s extent corresponds to the largest undistorted area we can image at the object plane. We use an exposure time of approximately 1800 s (dictated by the lossy method used to generate the controllable object) to ensure an SNR of 60–70 dB. This SNR is not a requirement but, for this proof-of-concept experiment, is sufficient to ensure success for a range of object complexities. We note, however, that real-world application of this technique can be orders of magnitude faster, as discussed in Section 4 of Supplement 1. From left to right, the top row in Fig. 4 contains the ground truth object at each wavelength as well as the full-color object. To measure this ground truth, we use a machine vision camera with the corresponding bandpass filters to image the spectral components directly, and obtain the full-color image by summing the resulting spectral channels. The second row shows the autocorrelation patterns of each spectral channel reconstructed from the multiplexed measurement, which serve as the inputs to the phase retrieval data processing step. The third row shows the reconstructed object in each spectral channel as well as the full-color object. We find that we are able to recover not only the correct spatial structure in the separate channels, but also the correct relative gray-scale intensity of the object within each spectral channel.
The full-color image shows that the relative weights between spectral channels are also correct: the color of the combined reconstructed image matches the ground truth, and the SSIM index (PSNR) is greater than 0.92 (26 dB) for every channel (see Table 1). To further demonstrate this point, the final row of Fig. 4 shows the intensity of the ground truth and reconstructed object along a horizontal slice through the object (normalized such that the highest intensity across all channels equals unity). The normalized intensities of the ground truth and reconstructed object match very well in all spectral bands. Thus, despite the presence of noise and potential model error (e.g., due to imperfect system characterization), the resulting image is of high quality and in good agreement with the emulated results.
C. Experiment Results: Contiguous Spectral Channels
While the experimental results in the previous section demonstrate that the approach works well for an object with well-separated spectral bands, we look next at an object with a continuous spectral range of 60 nm. The object consists of a letter “X” and a plus sign (“+”) [see Fig. 5(a)]. The letter “X” has a relatively uniform and continuous spectrum between 515 nm and 575 nm, while the plus sign has a structured spectrum predominantly located between 535 nm and 575 nm [see Fig. 5(b)]. We create the color object by placing a 60 nm wide bandpass filter over the entire object and a long-pass filter over the plus sign (see Section 1 of Supplement 1 for additional details). For this configuration, we use an exposure time of 120 s to obtain our target SNR of 70 dB. In reconstruction, we divide the 60 nm spectrum into six contiguous, 10 nm wide channels [indicated in Fig. 5(b)]. We find that the reconstructed images agree well with the spatial and spectral content of the ground truth object [see Figs. 5(a)–5(c) and Table 1]. These results complement the spectrally well-separated images in Section 4.B and demonstrate that the presence or absence of spectral correlations in the measured speckle does not impact the performance of our method. In fact, we find that the system spectral performance is determined mainly by its calibration and the details of the coded detector, rather than by the object spectral content (see Section 5 of Supplement 1 for additional discussion).
5. DISCUSSION

Our method uses a general, pretrained speckle dictionary and a known coded aperture; as neither of these elements is specific to the scatterer or object, we require only a single multiplexed monochrome measurement to recover a color object. In addition, because we use a patch-based dictionary with small speckle patches for the demultiplexing, the speckle recovery step captures the fundamental structure of a speckle grain and explicitly does not include any speckle correlations. Therefore, our method operates without reliance on or assumptions about the correlation between spectral channels and can even be applied to different scattering materials. This is consistent with our goals that the method be noninvasive and snapshot (i.e., not rely on temporal or ensemble averaging, as employed elsewhere). Furthermore, the fact that our technique only involves modifications to the detector subsystem makes it particularly applicable to astronomical imaging and biological imaging of self-luminous objects [3,39]. We note, however, that the method does assume that the object fits within the ME angular field of view and that the individual spectral channels we reconstruct not be too broad (i.e., wider than the spectral correlation length). In the rest of this section, we discuss the dependence of the technique on key system parameters.
In conventional ME imaging, a speckle grain should be at least Nyquist sampled on the detector plane to produce a sharp autocorrelation pattern. Our coding approach introduces two additional sampling conditions that should be met to recover high-fidelity object images. First, the dictionary element patch size should be large enough to contain meaningful speckle structure, although larger patches render OMP estimation and dictionary training more computationally expensive. Second, each speckle patch should be sufficiently modulated by the code pattern, and a single code feature should be at least Nyquist sampled on the detector. In our experiment, each code feature of the physical coded aperture subtends several pixels on the detector; thus, each speckle patch was coded by multiple code features on average, thereby enabling unique recovery of each speckle patch.
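The sampling conditions above can be collected into a simple design check. The numeric sizes used here (speckle grain, code feature, and patch, all in detector pixels) are hypothetical placeholders, not the paper's values:

```python
def coding_is_well_sampled(grain_px, feature_px, patch_px, min_features=4):
    """Check the three sampling conditions discussed above, given
    hypothetical sizes (in detector pixels) for the speckle grain,
    the code feature as imaged on the detector, and the dictionary
    patch side length."""
    nyquist_speckle = grain_px >= 2.0  # speckle grain Nyquist sampled
    nyquist_code = feature_px >= 2.0   # code feature Nyquist sampled
    # each patch should contain several code features so that every
    # patch receives a (near-)unique modulation
    enough_features = (patch_px / feature_px) ** 2 >= min_features
    return nyquist_speckle and nyquist_code and enough_features
```

A check of this form is useful when trading off patch size (OMP cost) against code feature size (modulation strength) during system design.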
It is worth noting that the binary coded aperture and the double Amici prism are not indispensable to the spectral coding task. Instead of a conventional, static coded aperture, one can use an SLM or DMD [40,41] to realize gray-scale, spectral, or dynamic coding. Also, depending on the system requirements, one can choose a prism, a diffraction grating [41,43], or a spectrally sensitive SLM as the dispersive element. Similarly, one can relax the snapshot requirement and take multiple multiplexed measurements or use side information [44,46] to improve the image quality. Coregistration of the reconstructed spectral channels can be a concern; the availability of cross-channel correlations in either the object spatial structure or the measured speckle can provide a solution. However, for the case of widely separated spectral channels with absolutely no correlation in the object structure, the proposed method would be expected to fail. Additional multishot measurement approaches are a potential remedy and are currently under investigation.
As is common in computational imaging systems, the performance of our technique is ultimately limited by model error. In our system, this mismatch can arise due to the shift between the actual coding pattern involved in the multiplexed measurement and the calibrated coding pattern. This typically arises due to changes in environmental parameters over a slow time scale, and we find that the shift is under 3% of the camera pixel size.
6. CONCLUSION

In conclusion, we have developed a coded, single-shot technique for multispectral imaging through a scatterer. We modulate the wavelength-dependent speckle with a coded aperture to take one multiplexed measurement and recover the wavelength-dependent speckle with a dictionary-based OMP algorithm. The multispectral object information is then retrieved from the demultiplexed speckle by computing the autocorrelation pattern and running a phase retrieval algorithm. We demonstrate our method with five-channel emulation in the range of 450–750 nm, three-channel well-separated experimental results in the range of 450–650 nm, and six-channel contiguous experimental results in the range of 515–575 nm; however, one can easily extend this to a wider spectral range (as long as the channels can be separated and measured with the detector). Our approach can also be combined with polarization coding and/or temporal coding [29,31] to enable multidimensional estimation of occluded objects in a variety of scenarios.
Defense Advanced Research Projects Agency (DARPA) (HR0011-16-C-0027).
X. L. constructed the system, performed the experiment, collected and processed the data, and drafted the manuscript. J. A. G. helped develop the fundamental concept, oversaw the experimental development, and helped draft the manuscript. M. E. G. conceived the fundamental idea, provided guidance on experiment and analysis, and helped draft the manuscript.
See Supplement 1 for supporting content.
1. M. D. Duncan, R. Mahon, L. L. Tankersley, and J. Reintjes, “Time-gated imaging through scattering media using stimulated Raman amplification,” Opt. Lett. 16, 1868–1870 (1991). [CrossRef]
2. J. M. Beckers, “Adaptive optics for astronomy—principles, performance, and applications,” Annu. Rev. Astron. Astrophys. 31, 13–62 (1993). [CrossRef]
3. R. Horstmeyer, H. Ruan, and C. Yang, “Guidestar-assisted wavefront-shaping methods for focusing light into biological tissue,” Nat. Photonics 9, 563–571 (2015). [CrossRef]
4. S. Popoff, G. Lerosey, M. Fink, A. C. Boccara, and S. Gigan, “Image transmission through an opaque material,” Nat. Commun. 1, 81 (2010). [CrossRef]
5. V. Durán, F. Soldevila, E. Irles, P. Clemente, E. Tajahuerce, P. Andrés, and J. Lancis, “Compressive imaging in scattering media,” Opt. Express 23, 14424–14433 (2015). [CrossRef]
6. S. M. Popoff, G. Lerosey, R. Carminati, M. Fink, A. C. Boccara, and S. Gigan, “Measuring the transmission matrix in optics: an approach to the study and control of light propagation in disordered media,” Phys. Rev. Lett. 104, 100601 (2010). [CrossRef]
7. O. Katz, E. Small, and Y. Silberberg, “Looking around corners and through thin turbid layers in real time with scattered incoherent light,” Nat. Photonics 6, 549–553 (2012). [CrossRef]
8. R. French, S. Gigan, and O. L. Muskens, “Speckle-based hyperspectral imaging combining multiple scattering and compressive sensing in nanowire mats,” Opt. Lett. 42, 1820–1823 (2017). [CrossRef]
9. R. French, S. Gigan, and O. L. Muskens, “Snapshot fiber spectral imaging using speckle correlations and compressive sensing,” Opt. Express 26, 32302–32316 (2018). [CrossRef]
10. S. Li, M. Deng, J. Lee, A. Sinha, and G. Barbastathis, “Imaging through glass diffusers using densely connected convolutional networks,” Optica 5, 803–813 (2018). [CrossRef]
11. M. Lyu, H. Wang, G. Li, and G. Situ, “Exploit imaging through opaque wall via deep learning,” arXiv:1708.07881 (2017).
12. G. Satat, M. Tancik, O. Gupta, B. Heshmat, and R. Raskar, “Object classification through scattering media with deep learning on time resolved measurement,” Opt. Express 25, 17466–17479 (2017). [CrossRef]
13. E. K. Hege, D. O’Connell, W. Johnson, S. Basty, and E. L. Dereniak, “Hyperspectral imaging for astronomy and space surveillance,” Proc. SPIE 5159, 380–392 (2004). [CrossRef]
14. J. Bertolotti, E. G. van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491, 232–234 (2012). [CrossRef]
15. O. Katz, P. Heidmann, M. Fink, and S. Gigan, “Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations,” Nat. Photonics 8, 784–790 (2014). [CrossRef]
16. S. Feng, C. Kane, P. A. Lee, and A. D. Stone, “Correlations and fluctuations of coherent wave transmission through disordered media,” Phys. Rev. Lett. 61, 834–837 (1988). [CrossRef]
17. I. Freund, M. Rosenbluh, and S. Feng, “Memory effects in propagation of optical waves through disordered media,” Phys. Rev. Lett. 61, 2328–2331 (1988). [CrossRef]
18. F. van Beijnum, E. G. van Putten, A. Lagendijk, and A. P. Mosk, “Frequency bandwidth of light focused through turbid media,” Opt. Lett. 36, 373–375 (2011). [CrossRef]
19. H. Li, T. Wu, J. Liu, C. Gong, and X. Shao, “Simulation and experimental verification for imaging of gray-scale objects through scattering layers,” Appl. Opt. 55, 9731–9737 (2016). [CrossRef]
20. T. Wu, C. Guo, and X. Shao, “Non-invasive imaging through thin scattering layers with broadband illumination,” arXiv:1809.06854 (2018).
21. X. Xu, X. Xie, A. Thendiyammal, H. Zhuang, J. Xie, Y. Liu, J. Zhou, and A. P. Mosk, “Imaging of objects through a thin scattering layer using a spectrally and spatially separated reference,” Opt. Express 26, 15073–15083 (2018). [CrossRef]
22. G. Shaw and H. Burke, “Spectral imaging for remote sensing,” Lincoln Lab. J. 14, 3–28 (2003).
23. D. Lu and Q. Weng, “Urban classification using full spectral information of Landsat ETM + imagery in Marion County,” Photogramm. Eng. Remote Sens. 71, 1275–1284 (2005). [CrossRef]
24. D. Wu and D.-W. Sun, “Advanced applications of hyperspectral imaging technology for food quality and safety analysis and assessment: a review part I: fundamentals,” Innov. Food Sci. Emerg. Technol. 19, 1–14 (2013). [CrossRef]
25. L. Zhu, J. Liu, L. Feng, C. Guo, T. Wu, and X. Shao, “Recovering the spectral and spatial information of an object behind a scattering media,” OSA Continuum 1, 553–563 (2018). [CrossRef]
26. H. Zhuang, H. He, X. Xie, and J. Zhou, “High speed color imaging through scattering media with a large field of view,” Sci. Rep. 6, 32696 (2016). [CrossRef]
27. S. K. Sahoo, D. Tang, and C. Dang, “Single-shot multispectral imaging with a monochromatic camera,” Optica 4, 1209–1213 (2017). [CrossRef]
28. M. E. Gehm, R. John, D. J. Brady, R. M. Willett, and T. J. Schulz, “Single-shot compressive spectral imaging with a dual-disperser architecture,” Opt. Express 15, 14013–14027 (2007). [CrossRef]
29. P. Llull, X. Liao, X. Yuan, J. Yang, D. Kittle, L. Carin, G. Sapiro, and D. J. Brady, “Coded aperture compressive temporal imaging,” Opt. Express 21, 10526–10545 (2013). [CrossRef]
30. T.-H. Tsai and D. J. Brady, “Coded aperture snapshot spectral polarization imaging,” Appl. Opt. 52, 2153–2161 (2013). [CrossRef]
31. X. Li, A. Stevens, J. A. Greenberg, and M. E. Gehm, “Single-shot memory-effect video,” Sci. Rep. 8, 13402 (2018). [CrossRef]
32. J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21, 2758–2769 (1982). [CrossRef]
33. T. Wu, O. Katz, X. Shao, and S. Gigan, “Single-shot diffraction-limited imaging through scattering layers via bispectrum analysis,” Opt. Lett. 41, 5003–5006 (2016). [CrossRef]
34. T. T. Cai and L. Wang, “Orthogonal matching pursuit for sparse signal recovery with noise,” IEEE Trans. Inf. Theory 57, 4680–4688 (2011). [CrossRef]
35. Y. Fang, L. Chen, J. Wu, and B. Huang, “GPU implementation of orthogonal matching pursuit for compressive sensing,” in 17th International Conference on Parallel and Distributed Systems (ICPADS) (2011), pp. 1044–1047.
36. D. S. Kittle, D. L. Marks, and D. J. Brady, “Design and fabrication of an ultraviolet-visible coded aperture snapshot spectral imager,” Opt. Eng. 51, 071403 (2012). [CrossRef]
37. M. Zhou, H. Chen, J. Paisley, L. Ren, L. Li, Z. Xing, D. Dunson, G. Sapiro, and L. Carin, “Nonparametric Bayesian dictionary learning for analysis of noisy and incomplete images,” IEEE Trans. Image Process. 21, 130–144 (2012). [CrossRef]
38. T. Smith and J. Guild, “The C.I.E. colorimetric standards and their use,” Trans. Opt. Soc. 33, 73–134 (1931). [CrossRef]
39. J. Schneider and C. Aegerter, “Guide star based deconvolution for imaging behind turbid media,” J. Eur. Opt. Soc. Rapid Publ. 14, 21 (2018). [CrossRef]
40. H. Rueda, H. Arguello, and G. R. Arce, “DMD-based implementation of patterned optical filter arrays for compressive spectral imaging,” J. Opt. Soc. Am. A 32, 80–89 (2015). [CrossRef]
41. X. Lin, G. Wetzstein, Y. Liu, and Q. Dai, “Dual-coded compressive hyperspectral imaging,” Opt. Lett. 39, 2044–2047 (2014). [CrossRef]
42. N. Diaz, H. Rueda, and H. Arguello, “High-dynamic range compressive spectral imaging by grayscale coded aperture adaptive filtering,” Ingeniería e Investigación 35, 53–60 (2015). [CrossRef]
43. Y. August, C. Vachman, Y. Rivenson, and A. Stern, “Compressive hyperspectral imaging by random separable projections in both the spatial and the spectral domains,” Appl. Opt. 52, D46–D54 (2013). [CrossRef]
44. X. Yuan, T.-H. Tsai, R. Zhu, P. Llull, D. Brady, and L. Carin, “Compressive hyperspectral imaging with side information,” IEEE J. Sel. Top. Signal Process. 9, 964–976 (2015). [CrossRef]
45. C. V. Correa, H. Arguello, and G. R. Arce, “Spatiotemporal blue noise coded aperture design for multi-shot compressive spectral imaging,” J. Opt. Soc. Am. A 33, 2312–2322 (2016). [CrossRef]
46. X. Yuan, Y. Sun, and S. Pang, “Compressive video sensing with side information,” Appl. Opt. 56, 2697–2704 (2017). [CrossRef]