Optica Publishing Group

Single-shot multispectral imaging through a thin scatterer

Open Access

Abstract

Performing imaging with scattered light is challenging due to the complex and random modulation imposed upon the light by the scatterer. Persistent correlations, such as the optical memory effect (ME), enable high-fidelity, diffraction-limited imaging through scattering media without any prior knowledge of or access to the scattering media. However, conventional ME techniques have been limited to gray-scale imaging. We overcome this restriction by using spectral coding and compressed sensing to realize snapshot color imaging through scattering media. We demonstrate our method and obtain high-fidelity multispectral images using both emulated data (spanning the visible and infrared) and experimental data (in the visible).

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. INTRODUCTION

Imaging through scattering media is of great interest and has important applications in many fields such as biological, medical, and astronomical imaging. The challenge of this problem is that the optical paths of the incident photons are changed in complicated ways as they pass through the scatterer, resulting in a seemingly random speckle pattern on the other side. Several imaging techniques have been developed to circumvent the effects of the scatterer and enable one to extract information about the object from the speckle pattern. Examples of these methods include the use of time-gating [1], adaptive optics [2], wavefront shaping [3–5], transmission matrix [6–9], and deep-learning-based methods [10–12]. However, these techniques require either prior knowledge about or access to the object or scattering media for accurate results. For example, transmission matrix methods require one to measure the system point spread function (PSF), while one must supply deep learning methods with representative object–speckle training pairs. These requirements, which amount to having access to or control over both sides of the scattering medium, can be impractical in many applications [13].

In contrast, a separate approach known as memory effect (ME) imaging [14,15] enables one to perform imaging through a scatterer without any prior knowledge about the scatterer or object. The ME approach relies on the fact that object points within the ME range of the scatterer [16,17] yield highly correlated speckle patterns, which can be computationally processed to produce an image of the object using only a single speckle measurement [15]. While promising, ME imaging has traditionally been limited by the requirement that the illumination be narrowband (less than 1 nm) [15] in order to produce high-contrast speckle patterns [18] and, correspondingly, high-fidelity images.

To overcome ME imaging’s traditional constraint of narrowband operation, several methods have been developed to mitigate or resolve the spectral content of the signal. For example, broadband (more than 10–15 nm) and multispectral reference illumination have been used to generate high-fidelity gray-scale images through a scatterer [19–21]; however, these approaches have limited utility for applications that require spectral information [13,22–24]. To date, two methods have been developed for realizing color ME imaging. The first approach requires access to the optical source or the object prior to the scatterer [8,25], which is incompatible with the goals of noninvasive imaging and, as such, sacrifices one of ME imaging’s key advantages. The second method relies on direct measurement of the particular system’s PSF, for example, using a Bayer filter [26] or monochrome camera [27]. Regardless of the choice of detector, these methods assume prior knowledge of the wavelength-dependent PSF (Sλ) of the system and employ a deconvolution-based algorithm to recover the speckle associated with each spectral channel. While these methods perform well, measurement of Sλ may be impractical or impossible, such as when the scatterer is dynamic or not accessible ahead of the imaging task. Furthermore, the application of deconvolution-based image processing implicitly assumes that the PSFs associated with different spectral channels are uncorrelated, which is not necessarily true and inconsistent with the fundamental assumptions of ME imaging.

In this paper, we propose and demonstrate a method for realizing multispectral imaging of objects through scattering media using only a single speckle measurement with a monochrome camera. In contrast to previous approaches, we require no a priori knowledge of the system PSF or source spectrum, nor do we make any assumptions about the spectral content of the signal. To accomplish this, we employ a coded aperture in combination with a wavelength-dependent element (e.g., a prism or grating) to encode the spectral content within the low-contrast, multiplexed speckle signal [28–31]. We then use a compressed sensing (CS) algorithm to recover the speckle signals associated with each independent spectral channel, which are then processed via correlation-based algorithms [32,33]. Using this approach, we produce high-fidelity images of a target object at five well-separated spectral channels between 450 nm and 750 nm using emulated data. We also demonstrate experimental imaging of three well-separated spectral channels between 450 nm and 650 nm and six contiguous spectral channels between 515 nm and 575 nm.

The rest of the paper is organized as follows. Section 2 introduces the forward model and implementation at a general level, and Section 3 describes the details of implementation as well as the data processing techniques. Section 4 demonstrates our results with both emulated and experimental data, and Section 5 discusses the fundamental limitations and practical concerns of our technique. Finally, the last section summarizes the work and discusses future opportunities.

2. PRINCIPLE OF OPERATION

At a conceptual level, imaging through scatter involves three key elements: an object (either externally illuminated or self-luminous), a scatterer, and a detector (see Fig. 1). As in conventional ME systems, we consider an object whose angular extent is contained within the ME field of view [7] and located a distance u behind a scatterer. After interacting with the scatterer, the light then propagates a distance v before arriving at a detector. While conventional ME imaging involves a standard camera as the detector, we instead employ a coded detector module that includes a coded aperture and wavelength-dependent optical element. The purpose of this element is to uniquely modulate each spectral channel prior to their combination and transduction at the monochrome detector. Thus, rather than simply measuring a low-contrast speckle whose spectral channels are inextricably mingled, we instead record a spectrally multiplexed signal that is well-conditioned for separation via computational methods. It is important to note that our capacity to demultiplex this measurement is based on our knowledge of the properties of the coded aperture and wavelength-dependent element, and does not require any characterization of or assumptions about the scatterer or light source.


Fig. 1. Schematic of single-shot multispectral imaging through scatterer. The setup consists of a color object, scatterer, and coded detector. In the object section, a lamp acts as a spatially incoherent light source to illuminate the color object. The scatterer consists of a ground glass diffuser and an aperture that acts like a field stop for the coded detector. A coded detector, consisting of a coded aperture, prism, and monochrome camera (coupled via appropriate relay optics), records the multiplexed, coded speckle signal. The coded detector represents the key modification of traditional ME imaging schemes required by our approach.


To explain how the process works in greater detail, we describe quantitatively the measurement scheme in the context of conventional ME physics. For objects contained within the ME field of view, the speckle pattern Iλ of a single wavelength is the convolution of the object Oλ and system PSF at that wavelength Sλ, which can be written as Iλ=Oλ*Sλ. The resulting speckle associated with multispectral object O can therefore be expressed as the combination of all associated wavelength-dependent speckle patterns as

I = O*S = Σλ Oλ*Sλ = Σλ Iλ,  (1)
where Oλ is the spectral component of the color object and we consider a well-separated set of wavelengths (although the summation is replaced by an integral for the continuous case). In our spectrally coded ME imaging technique, we first modulate the speckle at each wavelength by the same code pattern, T. The prism and associated relay optics then map each coded speckle signal to different shifted but overlapping locations on the camera. When designed correctly, the spatial shift on the camera plane (x,y) only exists in the horizontal direction such that
Tλ(x,y) = T(x + d(λ−λ0), y).  (2)
Here, Tλ(x,y) is the wavelength-dependent coding pattern, which is independent of the object and scatterer and need only be characterized prior to measuring the speckle, and d(λ−λ0) is the wavelength-dependent spatial shift due to the prism (defined relative to a reference wavelength λ0). The multiplexed measurement
I(x,y) = Σλ Tλ(x,y)·Iλ(x,y),  (3)
therefore, corresponds to a single low-contrast, blurry-looking speckle image in which each spectral band is distinctly modulated by the code (see Fig. 2 and Fig. S3 in Supplement 1).
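As an illustration, the forward model above can be sketched numerically. Everything below (objects, PSFs, dispersion constant, array sizes) is an arbitrary stand-in rather than a measured system value, and a circular convolution stands in for the true convolution:

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 64                                  # detector patch size (illustrative)
wavelengths = [450e-9, 550e-9, 650e-9]      # three stand-in channels
lam0 = wavelengths[0]                       # reference wavelength

# Random stand-ins for the spectral objects O_lambda and speckle PSFs S_lambda
O = {lam: rng.random((H, W)) for lam in wavelengths}
S = {lam: rng.random((H, W)) for lam in wavelengths}

# One shared 50%-open binary code T, padded so horizontal shifts stay in bounds
pad = 8
T = (rng.random((H, W + 2 * pad)) < 0.5).astype(float)

def shift_px(lam, disp=4e7):
    """Toy linear dispersion model d(lam - lam0), in whole pixels."""
    return int(round(disp * (lam - lam0)))

I = np.zeros((H, W))
for lam in wavelengths:
    # Speckle at this wavelength: circular stand-in for the convolution O_lam * S_lam
    I_lam = np.fft.ifft2(np.fft.fft2(O[lam]) * np.fft.fft2(S[lam])).real
    s = shift_px(lam)
    T_lam = T[:, pad + s : pad + s + W]     # horizontally shifted code pattern
    I += T_lam * I_lam                      # multiplexed, coded measurement
```

Each channel's speckle reaches the detector masked by a differently shifted copy of the same code, which is what conditions the single monochrome frame I for later separation.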


Fig. 2. Data acquisition and reconstruction pipeline. In the memory effect regime, the wavelength-dependent speckle Iλ is the convolution of the object spectral component and the PSF corresponding to the wavelength. During data acquisition, Iλ(x,y) is coded by a random binary mask Tλ(x,y), and the multiplexed speckle is the summation of the coded speckles across spectral channels at the camera plane. We recover independent speckle frames I^λ(x,y) using a dictionary-based OMP algorithm. We calculate the autocorrelation of each channel individually and reconstruct the spectral information of object O^λ(x,y) with a phase retrieval algorithm. The color “LENS” object is reconstructed using emulated data.


After we measure the multiplexed speckle, the next step involves using knowledge of Tλ to separately recover the speckle at each spectral band. We note, first, that Eq. (3) is an underdetermined system and, as such, necessitates a CS approach to inversion. Such approaches rely on the fact that the recovered signal of interest is sparse in some basis; while appropriate bases are well-known for some types of images (e.g., natural scenes), the random-seeming speckle signals require a different approach. Following the method we developed in the context of temporal coding of scatter [31] (the details of which are outlined in Section 6 of Supplement 1), we employ dictionary learning to determine a sparse basis with which to represent the speckle. The resulting dictionary, trained on speckle images from a variety of different measurement configurations, is very general and does not depend on the specific objects and scatterers involved in the generation of I(x,y). To be explicitly clear, this dictionary training step does not require access to the scatterer used in the actual imaging configuration; instead, off-line training with a distinct scatterer is sufficient. This is therefore consistent with our goal of performing color imaging with no a priori information about the scatterer. To recover the estimated speckle at each wavelength I^λ(x,y), we use a dictionary-based orthogonal matching pursuit (OMP) algorithm [34,35].
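The demultiplexing step can be illustrated with a generic OMP solver; this is a simplified stand-in for the MIL-based implementation described in Supplement 1, and the dictionary, code masks, and sparse coefficients below are synthetic:

```python
import numpy as np

def omp(A, y, n_nonzero):
    """Greedy orthogonal matching pursuit: find sparse x with y ~ A x."""
    residual = y.astype(float).copy()
    support = []
    x = np.zeros(A.shape[1])
    coeffs = np.zeros(0)
    for _ in range(n_nonzero):
        j = int(np.argmax(np.abs(A.T @ residual)))   # most-correlated atom
        if j not in support:
            support.append(j)
        # Least-squares re-fit on the current support, then update the residual
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
    x[support] = coeffs
    return x

# Toy two-channel demultiplexing: one measurement patch, per-channel code masks
rng = np.random.default_rng(1)
n_pix, n_atoms = 256, 512                       # 16x16 patch, 512-atom dictionary
D = rng.standard_normal((n_pix, n_atoms))
D /= np.linalg.norm(D, axis=0)                  # unit-norm atoms
T1 = (rng.random(n_pix) < 0.5).astype(float)    # complementary binary masks
T2 = 1.0 - T1
A = np.hstack([T1[:, None] * D, T2[:, None] * D])  # masked joint dictionary

x_true = np.zeros(2 * n_atoms)
x_true[10], x_true[700] = 1.5, -2.0             # one active atom per channel
y = A @ x_true                                  # multiplexed measurement
x_hat = omp(A, y, n_nonzero=2)
```

Because each channel's dictionary columns are masked by a different code, the joint matrix A is well-conditioned for separation even though the unmasked problem would be hopelessly underdetermined.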

Finally, we obtain an image of the object O^λ by calculating the autocorrelation of each spectral channel independently, I^λ ⋆ I^λ, and then inverting the autocorrelation [32,33] at each wavelength (see Fig. 2). The images at each wavelength are then combined to generate a composite color image of the estimated object O^. This technique makes no assumptions about correlations between the spectral channels and requires only the assumption that Sλ be sufficiently random that its autocorrelation is relatively sharply peaked [14]. In addition, it requires only information about the coded detector, relying on prior calibration of the coded aperture and a pretrained dictionary, which makes the approach truly noninvasive and enables single-shot operation.
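As a sketch of the inversion step, the snippet below implements the simpler error-reduction variant of phase retrieval with the same realness and nonnegativity constraints; the actual implementation uses conjugate gradient descent, so this is illustrative only:

```python
import numpy as np

def error_reduction(autocorr, n_iter=200, seed=0):
    """Error-reduction phase retrieval from a centered autocorrelation patch."""
    # Fourier magnitude of the object: sqrt of the autocorrelation's spectrum
    spectrum = np.fft.fft2(np.fft.ifftshift(autocorr)).real
    mag = np.sqrt(np.maximum(spectrum, 0))       # clip small negative round-off
    rng = np.random.default_rng(seed)
    obj = rng.random(autocorr.shape)             # random initial guess
    for _ in range(n_iter):
        F = np.fft.fft2(obj)
        F = mag * np.exp(1j * np.angle(F))       # impose the measured magnitude
        obj = np.fft.ifft2(F).real               # back to object space
        obj[obj < 0] = 0                         # realness + nonnegativity
    return obj
```

As the text notes, a random initial guess leaves the twin-image ambiguity unresolved; a bispectrum-based initialization is one way to avoid it.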

3. METHODS

A. Experimental Setup

As discussed above and shown in Fig. 1, our experiment requires an object, scatterer, and coded detector module. To generate the color object in a controlled and well-characterized way, we use a multistep process. We first use a broadband lamp (Newport 66921, 500 W) with a spectral range from 200 to 2500 nm, which then passes through an integrating sphere so that it is sufficiently spatially incoherent. The light then passes through optical elements (e.g., a spectral filter and/or a spatial light modulator (SLM)—see Section 1 in Supplement 1 for details) that act to define the spatiospectral properties of the object. In this work, we consider two types of objects: those with well-separated and contiguous spectra. In the former case, we choose the spacing of the spectral channels to exceed the spectral correlation length of the diffuser (i.e., the resulting speckle patterns are uncorrelated); in the latter case, we choose the spacing to be less than the spectral correlation length.

For the scatterer, we use a 600 grit ground glass diffuser (Thorlabs DG20-600-MD) that is mounted 24 cm away from the back focal plane of the imaging lens. The diffuser’s spectral decorrelation length is approximately 15 nm FWHM (see Fig. S2 in Supplement 1). The light scatters as it passes through the diffuser, and a 4 mm diameter aperture immediately after the diffuser limits the angular range of the scattered light reaching the detector (i.e., acts as a field stop). This field stop is placed at the scatterer for convenience but could be engineered into the optical train of the receiver and therefore does not require access to the scatterer. We note that the technique is not limited to a particular choice of diffuser and that it works equally well for diffuse reflections; the main requirement is that the scattering element should provide an accessible ME regime.

Relative to conventional snapshot ME imaging setups [15], the key addition to our system is our coded detector. We choose here to use a coded aperture snapshot spectral imager (CASSI) designed to operate between ultraviolet and near-IR wavelengths [36]. In the coded detector, we use a binary, chrome-on-glass coded aperture with a 50% transmissive random pattern and 14.8 μm × 14.8 μm features. After modulation by the coded aperture, the light passes through a double Amici prism, which imposes a wavelength-dependent shift while maintaining the system’s inline optical axis. Finally, a CMOS camera (Andor Neo 5.5 with 2560×2160 pixels) focused on the coded aperture plane measures the optical signal. In order to reduce background light and stray reflections in the system, we implement a series of bellows between the optical elements and cover the setup with a black box.

B. Emulation

In addition to the fully experimental configuration described in Section 3.A, we also create emulated data by digitally combining empirical data. Given that the measurement is related to Sλ and Tλ according to Eq. (3), we first measure experimentally the uncoded speckle PSF Sλ at multiple wavelengths. For the data shown in Fig. 3, we employ bandpass filters centered at 450, 550, 650, 700, and 750 nm (Thorlabs FB450-10, FB550-10, FB650-10, FB700-10 and FB750-10, respectively), each with a 10 nm bandwidth (i.e., slightly within the ME spectral correlation length of the scatterer). We then digitally convolve a target color object with the appropriate spectral components of Sλ to generate the emulated speckle that would be produced by that object, and modulate the result by the code pattern. Here, we choose a 50% open, random binary code pattern with code feature size of 2×2 detector pixels. Finally, we apply a wavelength-dependent shift based on the known dispersion of the prism [36], and sum the resulting coded speckle signals together. The resulting emulated data is therefore based on a mixture of empirical measurement (e.g., of Sλ and key parameters) and known physics, but lacks any explicit model error. As such, we can use this approach to explore system design tradeoffs and understand fundamental performance limits.


Fig. 3. Emulation results of snapshot color ME with 5 spectral channels. (a), (c) Color object and normalized radiance plot of each spectral component for ground truth (top) and recovered image (bottom), corresponding to the number object and the cell from the stem of a cotton plant, respectively. (b), (d) Comparison of the original spectrum and the recovered spectrum averaged across all bright pixels for the number object and the cell from the stem of a cotton plant, respectively.


C. Data Processing

The input to the data processing algorithms is the gray-scale, multiplexed coded speckle images measured by the camera. We use only the center 1224×1224 pixels, however, which contain enough speckle grains to produce high-quality autocorrelation patterns without including aberrations due to the relay optics. To recover the speckle from the individual spectral channels, we employ a dictionary-based OMP algorithm. We train our overcomplete speckle dictionary, which contains 512 dictionary elements, using the beta process factor analysis (BPFA [37]) algorithm and over 73,000 speckle patches measured on a completely separate experimental setup (as described in Ref. [31]). Each of the speckle patches consists of 16×16 pixels, which is capable of representing the speckle structure with high fidelity while ensuring that training is computationally tractable. With the dictionary in place, we use a matrix inversion lemma (MIL) implementation of OMP that projects the multiplexed speckle to the dictionary space and iteratively calculates the optimal dictionary elements and corresponding coefficients to recover the subframe speckle patterns of each spectral channel (see Section 6 of Supplement 1 for additional discussion).

Once the separate speckle images at each spectral channel are determined, we proceed by using correlation-based processing [32]. When computing the autocorrelation of the recovered speckle signals, we first normalize the speckle image by dividing it by a low-pass version of itself (obtained by convolving the raw image with a uniform 100×100 kernel). We then choose the 1024×1024 pixel patch in the center of the normalized image and smooth it using a Gaussian filter with a standard deviation of 1 pixel. To recover the object associated with each spectral channel, we choose the center 256×256 patch of the autocorrelation and normalize it by subtracting the minimal value and dividing the background-subtracted pattern by its maximal value. We then invert each autocorrelation separately using phase retrieval implemented with the conjugate gradient descent method. We use a random initial guess (although one could, instead, use a bispectrum-based initial guess [33] to avoid the twin image problem) and estimate the magnitude of the Fourier transform of the object by taking the square root of the Fourier transform of the product of the autocorrelation and a 2D Tukey window (to avoid edge effects). We enforce realness and nonnegativity constraints on the object and typically run several hundred iterations. To recover an accurate relative intensity (or gray-scale value) for each of the normalized spectral channels, we weight each one by the ratio of the mean recovered speckle intensity to the total signal in the estimated object at that channel, wλ = mean(I^λ)/Σp O^λ, where Σp represents a sum over all pixels in that image. We note that this method makes no assumption about the number of channels, overall spectral range, or existence of correlations between spectral channels.
However, as discussed in Section 7 of Supplement 1, one can coregister the spectral channels by exploiting the existence of spatial correlations in the appearance of the object across different spectral bands and/or correlations between the speckle in different channels, if available.
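The preprocessing pipeline above can be sketched as follows. The default kernel and patch sizes follow the text, while the Tukey windowing and phase retrieval stages are omitted for brevity; the input here is a random stand-in for a recovered speckle frame:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def normalized_autocorrelation(speckle, lp=100, crop=1024, out=256):
    """Autocorrelation preprocessing; default sizes follow the text."""
    # Divide out the slowly varying envelope (uniform low-pass kernel)
    flat = speckle / np.maximum(uniform_filter(speckle, size=lp), 1e-12)
    # Central crop and mild smoothing (Gaussian, sigma = 1 pixel)
    h, w = flat.shape
    c = flat[(h - crop) // 2:(h + crop) // 2, (w - crop) // 2:(w + crop) // 2]
    c = gaussian_filter(c, sigma=1)
    # Autocorrelation via the Wiener–Khinchin theorem
    F = np.fft.fft2(c - c.mean())
    ac = np.fft.fftshift(np.fft.ifft2(np.abs(F) ** 2).real)
    # Keep the central patch and normalize to [0, 1]
    a = ac[(crop - out) // 2:(crop + out) // 2,
           (crop - out) // 2:(crop + out) // 2]
    a = a - a.min()
    return a / a.max()
```

Applied per channel to the recovered speckle, the output of this function is what feeds the phase retrieval step; for small test images the sizes can simply be scaled down.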

4. RESULTS

A. Emulation Results: Well-Separated Multispectral Imaging

Figure 3 shows examples of multispectral, snapshot imaging through a scatterer for two emulated objects with well-separated spectra. As described in Section 3.B, we consider five separate spectral channels spanning 300 nm across the visible to near-IR. The top row of Fig. 3(a) contains the ground truth object, which consists of several numbers (shown both in false color and broken out by spectral channel). When plotting the false color object, we map the intensity profile of each wavelength into the CIE 1931 RGB space [38]. The recovered object, shown in the bottom row of Fig. 3(a) both in false color and in terms of separate spectral channels, demonstrates that the technique yields excellent imaging performance and negligible cross talk between spectral channels. Finally, we demonstrate the fidelity of the recovered spectra by comparing the spectral intensity (averaged across all bright pixels) of the ground truth and recovered images in Fig. 3(b). Figures 3(c) and 3(d) show the corresponding ground truth and recovered multispectral images (along with associated spectra) for a cell from the stem of a cotton plant (spatial profile obtained via 100 Anatomy Botany Prepared Microscope Slides—Set D No. 13). For the purposes of the emulation, we assigned overlapping spatial regions of the cell structure to different spectral channels. To assess quantitatively the quality of the recovered images, we calculate the structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR) values relative to the ground truth object on a per-channel basis. Table 1 shows that each of the five channels has an SSIM index of 0.8–0.9 and a PSNR of more than 20 dB. Thus, despite the fact that the speckle signal itself is very low contrast when five 10 nm wide spectral bands are superimposed at the detector, our technique of coded detection can accurately recover the spatial–spectral object properties.
Furthermore, we demonstrate the capacity to perform color imaging beyond the traditional three-channel (i.e., RGB), visible imaging paradigm.


Table 1. Quantitative Performance of Experimental Results

B. Experiment Results: Well-Separated Spectral Channels

While the emulation results demonstrate the power of the approach in a well-controlled environment, we turn next to fully experimental data. We consider three 8–12 nm wide spectral channels centered at 450, 550, and 650 nm that, when combined with different relative weights, generate a broad range of colors. Figure 4 shows a comparison between the ground truth and recovered object for a multicolor letter “H.” The object has an extent of 1.8 mm × 1.8 mm, corresponding to the largest undistorted area we can image at the object plane. We use an exposure time of approximately 1800 s (dictated by the lossy method used to generate the controllable object) to ensure an SNR of 60–70 dB. This SNR is not a requirement but, for this proof-of-concept experiment, is sufficient to ensure success for a range of object complexities. We note, however, that real-world application of this technique can be orders of magnitude faster (as discussed in Section 4 of Supplement 1). From left to right, the top row in Fig. 4 contains the ground truth object at each wavelength, Oλ, as well as the full-color object, respectively. To measure this ground truth, we use a machine vision camera with the corresponding bandpass filters to image the spectral components directly, and obtain the full-color image by summing the resulting spectral channels. The second row shows the autocorrelation patterns of each reconstructed spectral channel that form the multiplexed measurement, which serve as the inputs to the phase retrieval data processing step. The third row shows the reconstructed object in each spectral channel as well as the full-color object. We find that we are able to recover not only the correct spatial structure in the separate channels, but also the correct relative gray-scale intensity of the object within each spectral channel.
The full-color image shows that the relative weights between spectral channels are also correct, as the combined reconstructed image color matches the ground truth and the SSIM index (PSNR) is greater than 0.92 (26 dB) for every channel (see Table 1). To further demonstrate this point, the final row of Fig. 4 shows the intensity of the ground truth and reconstructed object along a horizontal slice through the object (normalized here such that the highest intensity across all channels equals unity). The normalized intensities of the ground truth and reconstructed object match very well in all spectral bands. Thus, despite the presence of noise and potential model error (e.g., due to imperfect system characterization), the resulting image is of high quality and in good agreement with the emulated results.


Fig. 4. Reconstruction results of a color “H” object. The first, second, and third rows correspond to the ground truth object, recovered speckle autocorrelations, and estimated object, respectively. The last row shows a comparison between the normalized intensity along a slice through the object (indicated by the horizontal dashed white line) for each spectral channel. From left to right, the results correspond to the 650 nm, 550 nm, and 450 nm spectral bands. The fourth column shows full-color composite representations of the ground truth and recovered objects.


C. Experiment Results: Contiguous Spectral Channels

While the experimental results in the previous section demonstrate that the approach works well for an object with separated spectral bands, we look next at an object with a continuous spectral range of 60 nm. The object consists of a letter “X” and a plus sign (“+”), with a total spatial extent of 1.7 mm × 1.7 mm [see Fig. 5(a)]. The letter “X” has a relatively uniform and continuous spectrum between 515 nm and 575 nm, while the plus sign has a structured spectrum predominantly located between 535 nm and 575 nm [see Fig. 5(b)]. We create the color object by placing a 60 nm wide bandpass filter over the entire object and a long-pass filter over the plus sign (see Section 1 of Supplement 1 for additional details). For this configuration, we use an exposure time of 120 s to obtain our target SNR of 70 dB. In reconstruction, we divide the 60 nm spectrum into six contiguous, 10 nm wide channels [indicated in Fig. 5(b)]. We find that the reconstructed images agree well with the spatial and spectral content of the ground truth object [see Figs. 5(a)–5(c) and Table 1]. These results complement the spectrally well-separated images in Section 4.B and demonstrate that the presence or absence of spectral correlations in the measured speckle does not impact the performance of our method. In fact, we find that the system spectral performance is determined mainly by its calibration and details of the coded detector, rather than the object spectral content (see Section 5 of Supplement 1 for additional discussion).


Fig. 5. Reconstruction results of a contiguous spectrum object. (a) The ground truth object and recovered color object consisting of an “X” and “+”. (b) Ground truth and recovered spectra of the “X” (black, blue) and “+” (yellow, red), respectively. The gray lines indicate the spectral bin edges (each 10 nm wide and centered at 520, 530, 540, 550, 560, and 570 nm). (c) Recovered autocorrelations and the corresponding phase retrieval results in 10 nm bands at the specified wavelengths.


5. DISCUSSION

Our method uses a general, pretrained speckle dictionary and known coded aperture, and, as neither of these elements is specific to the scatterer or object, we require only a single multiplexed monochrome measurement to recover a color object. In addition, with our use of a patch-based dictionary with small speckle patches (16×16 pixels) for the demultiplexing, the speckle recovery step captures the fundamental structure of a speckle grain and explicitly does not include any speckle correlations. Therefore, our method operates without reliance on or assumptions about the correlation between spectral channels and can even be applied to different scattering materials. This is consistent with our goals that the method be noninvasive and snapshot (i.e., not rely on temporal or ensemble averaging, as employed by Refs. [7] and [20]). Furthermore, the fact that our technique only involves modifications to the detector subsystem makes it particularly applicable to astronomical imaging [13] and biological imaging of self-luminous objects [3,39]. We note, however, that the method does assume that the object fits within the ME angular field of view and that the individual spectral channels we reconstruct are not too broad (i.e., not wider than the spectral correlation length). In the rest of this section, we discuss the dependence of the technique on key system parameters.

In conventional ME imaging, a speckle grain should be at least Nyquist sampled on the detector plane to produce a sharp autocorrelation pattern. Our coding approach introduces two additional sampling conditions that should be met to recover high-fidelity object images. First, the dictionary element patch size should be large enough to contain meaningful speckle structure, although larger patches render OMP estimation and dictionary training more computationally expensive. Second, each speckle patch should be sufficiently modulated by the code pattern, and a single code feature should be at least Nyquist sampled on the detector. In our experiment, the code feature size of the physical coded aperture is 14.8 μm × 14.8 μm and subtends approximately 2.3×2.3 pixels on the detector. Thus, each speckle patch was coded by an average of 7×7 code features, thereby enabling unique recovery of each speckle patch.
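These sampling numbers are mutually consistent, which can be checked with a quick calculation; note that the 6.5 μm pixel pitch below is our assumption (typical of sCMOS detectors such as the one used here) rather than a value stated in the text:

```python
# Sanity check of the sampling conditions quoted above.
feature_um = 14.8       # physical code feature size (given)
pixel_um = 6.5          # detector pixel pitch: our assumption, not given
patch_px = 16           # dictionary patch size (given)

feature_px = feature_um / pixel_um            # feature size on the detector
features_per_patch = patch_px / feature_px    # code features across one patch
```

With these values, a code feature subtends roughly 2.3 pixels and each 16-pixel patch sees roughly 7 code features, matching the figures quoted in the text.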

It is worth noting that the binary coded aperture and the double Amici prism are not indispensable to the spectral coding task. Instead of a conventional, static coded aperture, one can use an SLM or DMD [40,41] to realize gray-scale [42], spectral, or dynamic coding. Also, depending on the system requirements, one can choose either a prism, diffraction grating [41,43], or spectrally sensitive SLM [44] as the dispersive element. Similarly, one can relax the snapshot requirement and take multiple multiplexed measurements [45] or use side information [44,46] to improve the image quality. Coregistration of the reconstructed spectral channels can be a concern; cross-channel correlations in either the object spatial structure or the measured speckle, when available, provide a solution. However, for the case of widely separated spectral channels with absolutely no correlation in the object structure, the proposed method would be expected to fail. Additional multishot measurement approaches are a potential remedy and are currently under investigation.

As is common in computational imaging systems, the performance of our technique is ultimately limited by model error. In our system, this mismatch can arise due to the shift between the actual coding pattern involved in the multiplexed measurement and the calibrated coding pattern. This typically arises due to changes in environmental parameters over a slow time scale, and we find that the shift is under 3% of the camera pixel size.

6. CONCLUSION

In conclusion, we have developed a coded, single-shot technique for multispectral imaging through a scatterer. We modulate the wavelength-dependent speckle with a coded aperture to take one multiplexed measurement and then recover the individual wavelength-dependent speckle patterns with a dictionary-based OMP algorithm. The multispectral object information is retrieved from the demultiplexed speckle by computing the autocorrelation pattern and running a phase retrieval algorithm. We demonstrate our method with a five-channel emulation, experimental results for three well-separated channels in the range of 450–750 nm, and experimental results for six contiguous channels in the range of 515–575 nm; one can readily extend this to a wider spectral range, as long as the channels can be separated and measured with the detector. Our approach can also be combined with polarization coding [30] and/or temporal coding [29,31] to enable multidimensional estimation of occluded objects in a variety of scenarios.
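The object-recovery stage of the pipeline above can be sketched in a few lines. This is a minimal, illustrative version only: it computes the speckle autocorrelation via the Wiener-Khinchin relation and runs a basic error-reduction loop, whereas the full method uses a hybrid input-output phase retrieval algorithm [32] applied to the demultiplexed speckle of each channel:

```python
import numpy as np

def autocorrelation(speckle):
    # Wiener-Khinchin: autocorrelation from the speckle's power spectrum.
    F = np.fft.fft2(speckle - speckle.mean())
    return np.real(np.fft.ifft2(np.abs(F) ** 2))

def retrieve(ac, iters=100, seed=0):
    # Minimal error-reduction phase retrieval (illustrative; not the HIO
    # variant used in the paper). The autocorrelation's Fourier transform
    # gives the object's Fourier magnitude.
    mag = np.sqrt(np.abs(np.fft.fft2(ac)))
    g = np.random.default_rng(seed).random(ac.shape)
    for _ in range(iters):
        G = mag * np.exp(1j * np.angle(np.fft.fft2(g)))  # magnitude constraint
        g = np.clip(np.real(np.fft.ifft2(G)), 0, None)   # non-negativity
    return g

speckle = np.random.default_rng(1).random((64, 64))      # stand-in speckle
estimate = retrieve(autocorrelation(speckle))
assert estimate.shape == (64, 64) and estimate.min() >= 0
```

Running this per spectral channel on the demultiplexed speckle yields the per-channel object estimates that are then combined into the multispectral image.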

Funding

Defense Advanced Research Projects Agency (DARPA) (HR0011-16-C-0027).

Acknowledgment

X. L. constructed the system, performed the experiment, collected and processed the data, and drafted the manuscript. J. A. G. helped develop the fundamental concept, oversaw the experimental development, and helped draft the manuscript. M. E. G. conceived the fundamental idea, provided guidance on experiment and analysis, and helped draft the manuscript.

 

See Supplement 1 for supporting content.

REFERENCES

1. M. D. Duncan, R. Mahon, L. L. Tankersley, and J. Reintjes, “Time-gated imaging through scattering media using stimulated Raman amplification,” Opt. Lett. 16, 1868–1870 (1991).

2. J. M. Beckers, “Adaptive optics for astronomy—principles, performance, and applications,” Annu. Rev. Astron. Astrophys. 31, 13–62 (1993).

3. R. Horstmeyer, H. Ruan, and C. Yang, “Guidestar-assisted wavefront-shaping methods for focusing light into biological tissue,” Nat. Photonics 9, 563–571 (2015).

4. S. Popoff, G. Lerosey, M. Fink, A. C. Boccara, and S. Gigan, “Image transmission through an opaque material,” Nat. Commun. 1, 81 (2010).

5. V. Durán, F. Soldevila, E. Irles, P. Clemente, E. Tajahuerce, P. Andrés, and J. Lancis, “Compressive imaging in scattering media,” Opt. Express 23, 14424–14433 (2015).

6. S. M. Popoff, G. Lerosey, R. Carminati, M. Fink, A. C. Boccara, and S. Gigan, “Measuring the transmission matrix in optics: an approach to the study and control of light propagation in disordered media,” Phys. Rev. Lett. 104, 100601 (2010).

7. O. Katz, E. Small, and Y. Silberberg, “Looking around corners and through thin turbid layers in real time with scattered incoherent light,” Nat. Photonics 6, 549–553 (2012).

8. R. French, S. Gigan, and O. L. Muskens, “Speckle-based hyperspectral imaging combining multiple scattering and compressive sensing in nanowire mats,” Opt. Lett. 42, 1820–1823 (2017).

9. R. French, S. Gigan, and O. L. Muskens, “Snapshot fiber spectral imaging using speckle correlations and compressive sensing,” Opt. Express 26, 32302–32316 (2018).

10. S. Li, M. Deng, J. Lee, A. Sinha, and G. Barbastathis, “Imaging through glass diffusers using densely connected convolutional networks,” Optica 5, 803–813 (2018).

11. M. Lyu, H. Wang, G. Li, and G. Situ, “Exploit imaging through opaque wall via deep learning,” arXiv:1708.07881 (2017).

12. G. Satat, M. Tancik, O. Gupta, B. Heshmat, and R. Raskar, “Object classification through scattering media with deep learning on time resolved measurement,” Opt. Express 25, 17466–17479 (2017).

13. E. K. Hege, D. O’Connell, W. Johnson, S. Basty, and E. L. Dereniak, “Hyperspectral imaging for astronomy and space surveillance,” Proc. SPIE 5159, 380–392 (2004).

14. J. Bertolotti, E. G. van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491, 232–234 (2012).

15. O. Katz, P. Heidmann, M. Fink, and S. Gigan, “Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations,” Nat. Photonics 8, 784–790 (2014).

16. S. Feng, C. Kane, P. A. Lee, and A. D. Stone, “Correlations and fluctuations of coherent wave transmission through disordered media,” Phys. Rev. Lett. 61, 834–837 (1988).

17. I. Freund, M. Rosenbluh, and S. Feng, “Memory effects in propagation of optical waves through disordered media,” Phys. Rev. Lett. 61, 2328–2331 (1988).

18. F. van Beijnum, E. G. van Putten, A. Lagendijk, and A. P. Mosk, “Frequency bandwidth of light focused through turbid media,” Opt. Lett. 36, 373–375 (2011).

19. H. Li, T. Wu, J. Liu, C. Gong, and X. Shao, “Simulation and experimental verification for imaging of gray-scale objects through scattering layers,” Appl. Opt. 55, 9731–9737 (2016).

20. T. Wu, C. Guo, and X. Shao, “Non-invasive imaging through thin scattering layers with broadband illumination,” arXiv:1809.06854 (2018).

21. X. Xu, X. Xie, A. Thendiyammal, H. Zhuang, J. Xie, Y. Liu, J. Zhou, and A. P. Mosk, “Imaging of objects through a thin scattering layer using a spectrally and spatially separated reference,” Opt. Express 26, 15073–15083 (2018).

22. G. Shaw and H. Burke, “Spectral imaging for remote sensing,” Lincoln Lab. J. 14, 3–28 (2003).

23. D. Lu and Q. Weng, “Urban classification using full spectral information of Landsat ETM+ imagery in Marion County,” Photogramm. Eng. Remote Sens. 71, 1275–1284 (2005).

24. D. Wu and D.-W. Sun, “Advanced applications of hyperspectral imaging technology for food quality and safety analysis and assessment: a review part I: fundamentals,” Innov. Food Sci. Emerg. Technol. 19, 1–14 (2013).

25. L. Zhu, J. Liu, L. Feng, C. Guo, T. Wu, and X. Shao, “Recovering the spectral and spatial information of an object behind a scattering media,” OSA Continuum 1, 553–563 (2018).

26. H. Zhuang, H. He, X. Xie, and J. Zhou, “High speed color imaging through scattering media with a large field of view,” Sci. Rep. 6, 32696 (2016).

27. S. K. Sahoo, D. Tang, and C. Dang, “Single-shot multispectral imaging with a monochromatic camera,” Optica 4, 1209–1213 (2017).

28. M. E. Gehm, R. John, D. J. Brady, R. M. Willett, and T. J. Schulz, “Single-shot compressive spectral imaging with a dual-disperser architecture,” Opt. Express 15, 14013–14027 (2007).

29. P. Llull, X. Liao, X. Yuan, J. Yang, D. Kittle, L. Carin, G. Sapiro, and D. J. Brady, “Coded aperture compressive temporal imaging,” Opt. Express 21, 10526–10545 (2013).

30. T.-H. Tsai and D. J. Brady, “Coded aperture snapshot spectral polarization imaging,” Appl. Opt. 52, 2153–2161 (2013).

31. X. Li, A. Stevens, J. A. Greenberg, and M. E. Gehm, “Single-shot memory-effect video,” Sci. Rep. 8, 13402 (2018).

32. J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21, 2758–2769 (1982).

33. T. Wu, O. Katz, X. Shao, and S. Gigan, “Single-shot diffraction-limited imaging through scattering layers via bispectrum analysis,” Opt. Lett. 41, 5003–5006 (2016).

34. T. T. Cai and L. Wang, “Orthogonal matching pursuit for sparse signal recovery with noise,” IEEE Trans. Inf. Theory 57, 4680–4688 (2011).

35. Y. Fang, L. Chen, J. Wu, and B. Huang, “GPU implementation of orthogonal matching pursuit for compressive sensing,” in 17th International Conference on Parallel and Distributed Systems (ICPADS) (2011), pp. 1044–1047.

36. D. S. Kittle, D. L. Marks, and D. J. Brady, “Design and fabrication of an ultraviolet-visible coded aperture snapshot spectral imager,” Opt. Eng. 51, 071403 (2012).

37. M. Zhou, H. Chen, J. Paisley, L. Ren, L. Li, Z. Xing, D. Dunson, G. Sapiro, and L. Carin, “Nonparametric Bayesian dictionary learning for analysis of noisy and incomplete images,” IEEE Trans. Image Process. 21, 130–144 (2012).

38. T. Smith and J. Guild, “The C.I.E. colorimetric standards and their use,” Trans. Opt. Soc. 33, 73–134 (1931).

39. J. Schneider and C. Aegerter, “Guide star based deconvolution for imaging behind turbid media,” J. Eur. Opt. Soc. Rapid Publ. 14, 21 (2018).

40. H. Rueda, H. Arguello, and G. R. Arce, “DMD-based implementation of patterned optical filter arrays for compressive spectral imaging,” J. Opt. Soc. Am. A 32, 80–89 (2015).

41. X. Lin, G. Wetzstein, Y. Liu, and Q. Dai, “Dual-coded compressive hyperspectral imaging,” Opt. Lett. 39, 2044–2047 (2014).

42. N. Diaz, H. Rueda, and H. Arguello, “High-dynamic range compressive spectral imaging by grayscale coded aperture adaptive filtering,” Ingeniería e Investigación 35, 53–60 (2015).

43. Y. August, C. Vachman, Y. Rivenson, and A. Stern, “Compressive hyperspectral imaging by random separable projections in both the spatial and the spectral domains,” Appl. Opt. 52, D46–D54 (2013).

44. X. Yuan, T.-H. Tsai, R. Zhu, P. Llull, D. Brady, and L. Carin, “Compressive hyperspectral imaging with side information,” IEEE J. Sel. Top. Signal Process. 9, 964–976 (2015).

45. C. V. Correa, H. Arguello, and G. R. Arce, “Spatiotemporal blue noise coded aperture design for multi-shot compressive spectral imaging,” J. Opt. Soc. Am. A 33, 2312–2322 (2016).

46. X. Yuan, Y. Sun, and S. Pang, “Compressive video sensing with side information,” Appl. Opt. 56, 2697–2704 (2017).

[Crossref]

Paisley, J.

M. Zhou, H. Chen, J. Paisley, L. Ren, L. Li, Z. Xing, D. Dunson, G. Sapiro, and L. Carin, “Nonparametric Bayesian dictionary learning for analysis of noisy and incomplete images,” IEEE Trans. Image Process. 21, 130–144 (2012).
[Crossref]

Pang, S.

Popoff, S.

S. Popoff, G. Lerosey, M. Fink, A. C. Boccara, and S. Gigan, “Image transmission through an opaque material,” Nat. Commun. 1, 81 (2010).
[Crossref]

Popoff, S. M.

S. M. Popoff, G. Lerosey, R. Carminati, M. Fink, A. C. Boccara, and S. Gigan, “Measuring the transmission matrix in optics: an approach to the study and control of light propagation in disordered media,” Phys. Rev. Lett. 104, 100601 (2010).
[Crossref]

Raskar, R.

Reintjes, J.

Ren, L.

M. Zhou, H. Chen, J. Paisley, L. Ren, L. Li, Z. Xing, D. Dunson, G. Sapiro, and L. Carin, “Nonparametric Bayesian dictionary learning for analysis of noisy and incomplete images,” IEEE Trans. Image Process. 21, 130–144 (2012).
[Crossref]

Rivenson, Y.

Rosenbluh, M.

I. Freund, M. Rosenbluh, and S. Feng, “Memory effects in propagation of optical waves through disordered media,” Phys. Rev. Lett. 61, 2328–2331 (1988).
[Crossref]

Ruan, H.

R. Horstmeyer, H. Ruan, and C. Yang, “Guidestar-assisted wavefront-shaping methods for focusing light into biological tissue,” Nat. Photonics 9, 563–571 (2015).
[Crossref]

Rueda, H.

N. Diaz, H. Rueda, and H. Arguello, “High-dynamic range compressive spectral imaging by grayscale coded aperture adaptive filtering,” Ingeniería e InvestiIgación 35, 53–60 (2015).
[Crossref]

H. Rueda, H. Arguello, and G. R. Arce, “DMD-based implementation of patterned optical filter arrays for compressive spectral imaging,” J. Opt. Soc. Am. A 32, 80–89 (2015).
[Crossref]

Sahoo, S. K.

Sapiro, G.

P. Llull, X. Liao, X. Yuan, J. Yang, D. Kittle, L. Carin, G. Sapiro, and D. J. Brady, “Coded aperture compressive temporal imaging,” Opt. Express 21, 10526–10545 (2013).
[Crossref]

M. Zhou, H. Chen, J. Paisley, L. Ren, L. Li, Z. Xing, D. Dunson, G. Sapiro, and L. Carin, “Nonparametric Bayesian dictionary learning for analysis of noisy and incomplete images,” IEEE Trans. Image Process. 21, 130–144 (2012).
[Crossref]

Satat, G.

Schneider, J.

J. Schneider and C. Aegerter, “Guide star based deconvolution for imaging behind turbid media,” J. Eur. Opt. Soc. Rapid Publ. 14, 21 (2018).
[Crossref]

Schulz, T. J.

Shao, X.

Shaw, G.

G. Shaw and H. Burke, “Spectral imaging for remote sensing,” Lincoln Lab. J. 14, 3–28 (2003).

Silberberg, Y.

O. Katz, E. Small, and Y. Silberberg, “Looking around corners and through thin turbid layers in real time with scattered incoherent light,” Nat. Photonics 6, 549–553 (2012).
[Crossref]

Sinha, A.

Situ, G.

M. Lyu, H. Wang, G. Li, and G. Situ, “Exploit imaging through opaque wall via deep learning,” arXiv:1708.07881 (2017).

Small, E.

O. Katz, E. Small, and Y. Silberberg, “Looking around corners and through thin turbid layers in real time with scattered incoherent light,” Nat. Photonics 6, 549–553 (2012).
[Crossref]

Soldevila, F.

Stern, A.

Stevens, A.

X. Li, A. Stevens, J. A. Greenberg, and M. E. Gehm, “Single-shot memory-effect video,” Sci. Rep. 8, 13402 (2018).
[Crossref]

Stone, A. D.

S. Feng, C. Kane, P. A. Lee, and A. D. Stone, “Correlations and fluctuations of coherent wave transmission through disordered media,” Phys. Rev. Lett. 61, 834–837 (1988).
[Crossref]

Sun, D.-W.

D. Wu and D.-W. Sun, “Advanced applications of hyperspectral imaging technology for food quality and safety analysis and assessment: a review part I: fundamentals,” Innov. Food Sci. Emerg. Technol. 19, 1–14(2013).
[Crossref]

Sun, Y.

Tajahuerce, E.

Tancik, M.

Tang, D.

Tankersley, L. L.

Thendiyammal, A.

Thomas, S.

S. Thomas and G. John, “The C.I.E. colorimetric standards and their use,” Trans. Opt. Soc. 33, 73–134 (1931).
[Crossref]

Tsai, T.-H.

X. Yuan, T.-H. Tsai, R. Zhu, P. Llull, D. Brady, and L. Carin, “Compressive hyperspectral imaging with side information,” IEEE J. Sel. Top. Signal Process. 9, 964–976 (2015).
[Crossref]

T.-H. Tsai and D. J. Brady, “Coded aperture snapshot spectral polarization imaging,” Appl. Opt. 52, 2153–2161 (2013).
[Crossref]

Vachman, C.

van Beijnum, F.

van Putten, E. G.

J. Bertolotti, E. G. van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491, 232–234 (2012).
[Crossref]

F. van Beijnum, E. G. van Putten, A. Lagendijk, and A. P. Mosk, “Frequency bandwidth of light focused through turbid media,” Opt. Lett. 36, 373–375 (2011).
[Crossref]

Vos, W. L.

J. Bertolotti, E. G. van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491, 232–234 (2012).
[Crossref]

Wang, H.

M. Lyu, H. Wang, G. Li, and G. Situ, “Exploit imaging through opaque wall via deep learning,” arXiv:1708.07881 (2017).

Wang, L.

T. T. Cai and L. Wang, “Orthogonal matching pursuit for sparse signal recovery with noise,” IEEE Trans. Inf. Theory 57, 4680–4688 (2011).
[Crossref]

Weng, Q.

D. Lu and Q. Weng, “Urban classification using full spectral information of Landsat ETM + imagery in Marion County,” Photogramm. Eng. Remote Sens. 71, 1275–1284 (2005).
[Crossref]

Wetzstein, G.

Willett, R. M.

Wu, D.

D. Wu and D.-W. Sun, “Advanced applications of hyperspectral imaging technology for food quality and safety analysis and assessment: a review part I: fundamentals,” Innov. Food Sci. Emerg. Technol. 19, 1–14(2013).
[Crossref]

Wu, J.

Y. Fang, L. Chen, J. Wu, and B. Huang, “GPU implementation of orthogonal matching pursuit for compressive sensing,” in 17th International Conference on Parallel and Distributed Systems (ICPADS) (2011), pp. 1044–1047.

Wu, T.

Xie, J.

Xie, X.

Xing, Z.

M. Zhou, H. Chen, J. Paisley, L. Ren, L. Li, Z. Xing, D. Dunson, G. Sapiro, and L. Carin, “Nonparametric Bayesian dictionary learning for analysis of noisy and incomplete images,” IEEE Trans. Image Process. 21, 130–144 (2012).
[Crossref]

Xu, X.

Yang, C.

R. Horstmeyer, H. Ruan, and C. Yang, “Guidestar-assisted wavefront-shaping methods for focusing light into biological tissue,” Nat. Photonics 9, 563–571 (2015).
[Crossref]

Yang, J.

Yuan, X.

Zhou, J.

Zhou, M.

M. Zhou, H. Chen, J. Paisley, L. Ren, L. Li, Z. Xing, D. Dunson, G. Sapiro, and L. Carin, “Nonparametric Bayesian dictionary learning for analysis of noisy and incomplete images,” IEEE Trans. Image Process. 21, 130–144 (2012).
[Crossref]

Zhu, L.

Zhu, R.

X. Yuan, T.-H. Tsai, R. Zhu, P. Llull, D. Brady, and L. Carin, “Compressive hyperspectral imaging with side information,” IEEE J. Sel. Top. Signal Process. 9, 964–976 (2015).
[Crossref]

Zhuang, H.

Annu. Rev. Astron. Astrophys. (1)

J. M. Beckers, “Adaptive optics for astronomy—principles, performance, and applications,” Annu. Rev. Astron. Astrophys. 31, 13–62 (1993).
[Crossref]

Appl. Opt. (5)

IEEE J. Sel. Top. Signal Process. (1)

X. Yuan, T.-H. Tsai, R. Zhu, P. Llull, D. Brady, and L. Carin, “Compressive hyperspectral imaging with side information,” IEEE J. Sel. Top. Signal Process. 9, 964–976 (2015).
[Crossref]

IEEE Trans. Image Process. (1)

M. Zhou, H. Chen, J. Paisley, L. Ren, L. Li, Z. Xing, D. Dunson, G. Sapiro, and L. Carin, “Nonparametric Bayesian dictionary learning for analysis of noisy and incomplete images,” IEEE Trans. Image Process. 21, 130–144 (2012).
[Crossref]

IEEE Trans. Inf. Theory (1)

T. T. Cai and L. Wang, “Orthogonal matching pursuit for sparse signal recovery with noise,” IEEE Trans. Inf. Theory 57, 4680–4688 (2011).
[Crossref]

Ingeniería e InvestiIgación (1)

N. Diaz, H. Rueda, and H. Arguello, “High-dynamic range compressive spectral imaging by grayscale coded aperture adaptive filtering,” Ingeniería e InvestiIgación 35, 53–60 (2015).
[Crossref]

Innov. Food Sci. Emerg. Technol. (1)

D. Wu and D.-W. Sun, “Advanced applications of hyperspectral imaging technology for food quality and safety analysis and assessment: a review part I: fundamentals,” Innov. Food Sci. Emerg. Technol. 19, 1–14(2013).
[Crossref]

J. Eur. Opt. Soc. Rapid Publ. (1)

J. Schneider and C. Aegerter, “Guide star based deconvolution for imaging behind turbid media,” J. Eur. Opt. Soc. Rapid Publ. 14, 21 (2018).
[Crossref]

J. Opt. Soc. Am. A (2)

Lincoln Lab. J. (1)

G. Shaw and H. Burke, “Spectral imaging for remote sensing,” Lincoln Lab. J. 14, 3–28 (2003).

Nat. Commun. (1)

S. Popoff, G. Lerosey, M. Fink, A. C. Boccara, and S. Gigan, “Image transmission through an opaque material,” Nat. Commun. 1, 81 (2010).
[Crossref]

Nat. Photonics (3)

R. Horstmeyer, H. Ruan, and C. Yang, “Guidestar-assisted wavefront-shaping methods for focusing light into biological tissue,” Nat. Photonics 9, 563–571 (2015).
[Crossref]

O. Katz, E. Small, and Y. Silberberg, “Looking around corners and through thin turbid layers in real time with scattered incoherent light,” Nat. Photonics 6, 549–553 (2012).
[Crossref]

O. Katz, P. Heidmann, M. Fink, and S. Gigan, “Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations,” Nat. Photonics 8, 784–790 (2014).
[Crossref]

Nature (1)

J. Bertolotti, E. G. van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491, 232–234 (2012).
[Crossref]

Opt. Eng. (1)

D. S. Kittle, D. L. Marks, and D. J. Brady, “Design and fabrication of an ultraviolet-visible coded aperture snapshot spectral imager,” Opt. Eng. 51, 071403 (2012).
[Crossref]

Opt. Express (6)

Opt. Lett. (5)

Optica (2)

OSA Continuum (1)

Photogramm. Eng. Remote Sens. (1)

D. Lu and Q. Weng, “Urban classification using full spectral information of Landsat ETM + imagery in Marion County,” Photogramm. Eng. Remote Sens. 71, 1275–1284 (2005).
[Crossref]

Phys. Rev. Lett. (3)

S. M. Popoff, G. Lerosey, R. Carminati, M. Fink, A. C. Boccara, and S. Gigan, “Measuring the transmission matrix in optics: an approach to the study and control of light propagation in disordered media,” Phys. Rev. Lett. 104, 100601 (2010).
[Crossref]

S. Feng, C. Kane, P. A. Lee, and A. D. Stone, “Correlations and fluctuations of coherent wave transmission through disordered media,” Phys. Rev. Lett. 61, 834–837 (1988).
[Crossref]

I. Freund, M. Rosenbluh, and S. Feng, “Memory effects in propagation of optical waves through disordered media,” Phys. Rev. Lett. 61, 2328–2331 (1988).
[Crossref]

Proc. SPIE (1)

E. K. Hege, D. O’Connell, W. Johnson, S. Basty, and E. L. Dereniak, “Hyperspectral imaging for astronomy and space surveillance,” Proc. SPIE 5159, 380–392 (2004).
[Crossref]

Sci. Rep. (2)

H. Zhuang, H. He, X. Xie, and J. Zhou, “High speed color imaging through scattering media with a large field of view,” Sci. Rep. 6, 32696 (2016).
[Crossref]

X. Li, A. Stevens, J. A. Greenberg, and M. E. Gehm, “Single-shot memory-effect video,” Sci. Rep. 8, 13402 (2018).
[Crossref]

Trans. Opt. Soc. (1)

S. Thomas and G. John, “The C.I.E. colorimetric standards and their use,” Trans. Opt. Soc. 33, 73–134 (1931).
[Crossref]

Other (3)

Y. Fang, L. Chen, J. Wu, and B. Huang, “GPU implementation of orthogonal matching pursuit for compressive sensing,” in 17th International Conference on Parallel and Distributed Systems (ICPADS) (2011), pp. 1044–1047.

T. Wu, C. Guo, and X. Shao, “Non-invasive imaging through thin scattering layers with broadband illumination,” arXiv:1809.06854 (2018).

M. Lyu, H. Wang, G. Li, and G. Situ, “Exploit imaging through opaque wall via deep learning,” arXiv:1708.07881 (2017).

Supplementary Material (1)

Supplement 1: Supplemental Document



Figures (5)

Fig. 1.
Fig. 1. Schematic of single-shot multispectral imaging through scatterer. The setup consists of a color object, scatterer, and coded detector. In the object section, a lamp acts as a spatially incoherent light source to illuminate the color object. The scatterer consists of a ground glass diffuser and an aperture that acts like a field stop for the coded detector. A coded detector, consisting of a coded aperture, prism, and monochrome camera (coupled via appropriate relay optics), records the multiplexed, coded speckle signal. The coded detector represents the key modification of traditional ME imaging schemes required by our approach.
Fig. 2.
Fig. 2. Data acquisition and reconstruction pipeline. In the memory effect regime, the wavelength-dependent speckle I_λ is the convolution of the object spectral component and the PSF corresponding to that wavelength. During data acquisition, I_λ(x, y) is coded by a random binary mask T_λ(x, y), and the multiplexed speckle recorded at the camera plane is the sum of the coded speckles across spectral channels. We recover independent speckle frames Î_λ(x, y) using a dictionary-based OMP algorithm, calculate the autocorrelation of each channel individually, and reconstruct the spectral components of the object Ô_λ(x, y) with a phase retrieval algorithm. The color “LENS” object is reconstructed using emulated data.
Fig. 3.
Fig. 3. Emulation results of snapshot color ME with 5 spectral channels. (a), (c) Color object and normalized radiance plot of each spectral component for the ground truth (top) and recovered image (bottom), corresponding to the number object and to a cell from the stem of a cotton plant, respectively. (b), (d) Comparison of the original spectrum and the recovered spectrum averaged across all bright pixels for the number object and the cotton-stem cell, respectively.
Fig. 4.
Fig. 4. Reconstruction results of a color “H” object. The first, second, and third rows correspond to the ground truth object, recovered speckle autocorrelations, and estimated object, respectively. The last row shows a comparison between the normalized intensity along a slice through the object (indicated by the horizontal dashed white line) for each spectral channel. From left to right, the results correspond to the 650 nm, 550 nm, and 450 nm spectral bands. The fourth column shows full-color composite representations of the ground truth and recovered objects.
Fig. 5.
Fig. 5. Reconstruction results of a contiguous spectrum object. (a) The ground truth object and recovered color object consisting of an “X” and “+”. (b) Ground truth and recovered spectra of the “X” (black, blue) and “+” (yellow, red), respectively. The gray lines indicate the spectral bin edges (each 10 nm wide and centered at 520, 530, 540, 550, 560, and 570 nm). (c) Recovered autocorrelations and the corresponding phase retrieval results in 10 nm bands at the specified wavelengths.
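The channel-wise autocorrelation step in the pipeline of Fig. 2 rests on the Wiener–Khinchin theorem: the autocorrelation of a speckle frame is the inverse Fourier transform of its power spectrum. A minimal sketch, with an arbitrary random array standing in for a recovered speckle frame:

```python
import numpy as np

def autocorr2d(img):
    """Normalized 2D autocorrelation via the Wiener-Khinchin theorem:
    inverse FFT of the power spectrum, shifted so zero lag is centered."""
    x = img - img.mean()                    # remove DC so the background is flat
    F = np.fft.fft2(x)
    ac = np.fft.ifft2(np.abs(F) ** 2).real  # circular autocorrelation
    ac = np.fft.fftshift(ac)                # move the zero-lag peak to center
    return ac / ac.max()                    # normalize peak to 1

rng = np.random.default_rng(1)
speckle = rng.random((128, 128))            # stand-in for a recovered frame
ac = autocorr2d(speckle)

# Zero lag dominates the autocorrelation, so the peak sits at the center.
peak = np.unravel_index(np.argmax(ac), ac.shape)
assert peak == (64, 64)
```

In the actual method, this autocorrelation (equal, within the memory effect range, to the autocorrelation of the object itself) is the input to the phase retrieval step.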

Tables (1)

Table 1. Quantitative Performance of Experimental Results

Equations (3)


$$I = O * S = \sum_{\lambda} O_{\lambda} * S_{\lambda} = \sum_{\lambda} I_{\lambda},$$
$$T_{\lambda}(x, y) = T\!\left(x + d(\lambda - \lambda_{0}),\, y\right).$$
$$I(x, y) = \sum_{\lambda} T_{\lambda}(x, y) \cdot I_{\lambda}(x, y),$$
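As an illustration, the forward model in the three equations above can be simulated numerically: per-wavelength speckles are coded by x-shifted copies of one random binary mask and summed at the detector. The array sizes, channel count, and shift step d below are arbitrary stand-ins chosen only for this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 3 spectral channels on a 64 x 64 detector.
n_lambda, H, W = 3, 64, 64
d = 2  # assumed per-channel mask shift in pixels (the dispersion step)

# Stand-ins for the per-wavelength speckles I_lambda = O_lambda * S_lambda.
I = rng.random((n_lambda, H, W))

# T_lambda(x, y) = T(x + d*(lambda - lambda0), y): one random binary mask,
# shifted along x by a channel-dependent amount.
T0 = rng.integers(0, 2, size=(H, W))
T = np.stack([np.roll(T0, shift=k * d, axis=1) for k in range(n_lambda)])

# I(x, y) = sum_lambda T_lambda(x, y) * I_lambda(x, y): the single
# multiplexed, coded frame recorded by the monochrome camera.
I_meas = (T * I).sum(axis=0)

assert I_meas.shape == (H, W)
```

Recovering the individual Î_λ(x, y) from I_meas given the known masks T_λ is the compressed-sensing inverse problem solved by the dictionary-based OMP step.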
