
Image transmission through dynamic scattering media by single-pixel photodetection

Open Access

Abstract

Smart control of light propagation through highly scattering media is a much desired goal with major technological implications. Since the interaction of light with highly scattering media results in partial or complete depletion of ballistic photons, it is in principle impossible to transmit images over distances longer than the extinction length. Nevertheless, different methods for image transmission, focusing, and imaging through scattering media by means of wavefront control have been published over the past few years. In this paper we show that single-pixel optical systems, based on compressive detection, can also overcome the fundamental limitation imposed by multiple scattering and successfully transmit information. However, in contrast with the recently introduced schemes that use the transmission matrix technique, our approach does not require any a priori calibration process, which makes the present method suitable for use with dynamic scattering media. This represents an advantage over previous methods that rely on optical feedback wavefront control, especially for short speckle decorrelation times.

© 2014 Optical Society of America

1. Introduction

In a conventional imaging experiment, a lens maps every input pixel of an object to its conjugate output pixel at the sensor. Input and output measurement modes correspond to highly specific and localized information, and all modes are measured simultaneously with an optical array detector, so that, in the ideal case, the transfer of information is physically limited only by diffraction. In a scattering medium, however, the relationship between input and output pixels is degraded by multiple scattering. As a result, the spatial information of an input mode is scrambled and coupled into all output modes. Equivalently, each output pixel retains a tiny fraction of the optical field coming from every input mode, and the interference between the different light fields generates an image that looks like a speckle field. Indeed, the ability of a lens to provide a clear image of an object is critically limited, even when a very thin layer of scattering material is placed in the light path.

Scattering is the dominant extinction process that limits the imaging depth range inside biological tissue [1]. Although optical clearing techniques based on tissue manipulation have recently resulted in see-through tissue [2], manipulation of photons provides a parallel avenue to control light-matter interactions without altering the biological material. Fine control of wave fields with current megapixel programmable spatial light modulators is by far the most widely used approach. Examples of such technology are adaptive optics [3] and, more recently, control of disorder [4]. Transmission matrix characterization and feedback control based on iterative methods have made it possible to undo, by wavefront shaping, the modifications that a scattering medium imposes on the incoming wavefront [5-9], and seeing through static scattering media has also been demonstrated with this approach [10, 11]. Additionally, the angular correlation exhibited by speckle patterns within a small range of angles has been exploited for non-invasive imaging of a fluorescent object completely embedded in an opaque scattering medium [12]. Some recent attempts to extend the above techniques to steady focusing of light through a slowly evolving dynamic medium rely on faster spatial light modulators or on all-optical feedback [13, 14]. In this direction, recent advances in both DMD technology and GPU processing allow the transmission matrix to be computed in tens of milliseconds [15]. However, the dynamic nature of scattering in living tissue remains an open problem. Micrometer-scale translations of the scattering centres generate a decorrelated speckle field on the millisecond timescale, which makes it difficult for current feedback control techniques to follow the changing state of the medium.

Image transmission through dynamic scattering media requires a paradigm shift that removes the need for feedback algorithms. Here we address this challenge. To do this, we use computational imaging of projected patterns, with measurements captured sequentially by a single-pixel sensor. The programmed patterns are used as generalized measurement modes in which the object information is expressed. The same principle enables retrieval of the spatial information of an object with a single-pixel detector in ghost [16, 17] and compressive imaging [18, 19]. In the latter case, the development of data collection strategies based on compressive sampling allows image compression to be performed at the sensing stage [20]. Applications that benefit from the advantages of single-pixel cameras include 3D imaging [21], entanglement imaging [22], fluorescence microscopy [23], and imaging in regions of the electromagnetic spectrum where pixelated sensors are currently unavailable [24]. In this work we demonstrate that the presence of a scattering medium between the object and the light detector, even in the dynamically varying case, does not invalidate the operation principle of the proposed single-pixel scheme. Notably, the scrambling of light due to disorder mixes information from all regions of the sample but does not destroy the object information, which can still be retrieved from the generalized modes. As we show later, this statement holds even if the medium is dynamic. Image transmission through disordered media has an immediate impact on multimode fibers, which are subject to mode coupling. This crosstalk effect behaves in a similar way to a disordered medium. In this sense, an enhancement of the imaging capabilities of multimode fibers has been achieved by exploiting the properties of highly scattering media with a 'single-pass' architecture [25]. That approach, conceptually similar to the one presented in Ref. [10], requires a precise calibration of the scattering effects and assumes a static medium. Both limitations are overcome by our single-pixel-sensor method.

2. Operation principle

To demonstrate our technique for image transmission through a scattering medium with a single-pixel camera, we refer to the diagram illustrated in Fig. 1(a). First, we consider the case of static scattering. The sample (in this example a binary version of the famous Cheshire Cat from Alice's Adventures in Wonderland) is sequentially illuminated with a set of microstructured projecting patterns, which are encoded onto a spatial light modulator. The light transmitted through the sample undergoes strong scattering, so that any object is completely hidden, as shown in Fig. 1(b). For each pattern, an optical sensor without spatial resolution averages the signal generated by the noise-like light distribution that covers its active surface. Each photodetected signal corresponds to a fraction of the light arising from the inner product between the sample and the projecting pattern and contains information about the whole sample, thanks to the light scrambling imposed by the scattering medium. Provided that the sensor averages a sufficient number of speckle grains and that the number of input patterns is suitable, the information arising from the different modes is decorrelated and equally weighted. This guarantees that their contributions become independent, as in incoherent imaging, and that the image-retrieval process can be formulated as a sequential measurement of the coefficients of the object intensity in the input basis (see Section 5). Notably, most of the coefficients of the object expressed in the function basis do not contribute significantly to image retrieval (see Fig. 1(c)). This means that the signal is sparse, and compressive sensing can be used at the sampling stage by randomly selecting the projecting patterns. In Section 3 we include a detailed study of the applicability of compressive sensing to our technique.


Fig. 1 Schematic diagram of the experimental set-up. (a) A set of microstructured patterns is projected onto the sample, which is placed in front of the scattering layer. The transmitted light undergoes strong scattering and a fraction is recorded by a photodiode. In single-pixel imaging schemes, several patterns must be projected, with measurements captured sequentially, to form the image. (b) Photograph of the scattering layer partially covering a text located at a distance of 10 mm from the diffuser. The portion of the text under the diffuser is completely hidden. (c) Plot of two thousand projections derived from the photodetected signal in terms of the input modes.


In our experiments, the Walsh-Hadamard basis is used to express the object $x^{in}$ as a linear combination, $x^{in} = \sum_i E_i^{in} H_i$ ($i = 1,\dots,N$). A Walsh-Hadamard matrix of order $n$, denoted by $H_i(n)$, is an $N = n \times n$ matrix with entries that satisfy $H_i^T(n) H_i(n) = n I(n)$, where $I(n)$ is the identity matrix and $H_i^T(n)$ stands for the transposed matrix. Hadamard matrices form an orthogonal basis of matrices that was first proposed by researchers in statistics and is considered to be the optimum weighting design for extracting information from random noise. A shifted and rescaled version of $H_i(n)$ generates a binary pattern taking the values 0 or 1, which can be simply encoded onto a spatial light modulator as an intensity pattern.
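As a minimal illustration (not part of the original experiment), the following Python sketch builds such a set of binary 0/1 patterns from a standard Hadamard matrix using SciPy. Note that scipy.linalg.hadamard returns the natural (Sylvester) ordering rather than the Walsh/sequency ordering, so the ordering used here is an assumption of the sketch.

import numpy as np
from scipy.linalg import hadamard

def hadamard_patterns(n):
    """Build N = n*n binary illumination patterns from a Hadamard matrix.

    Each row of the order-N Hadamard matrix (entries +1/-1) is reshaped
    into an n x n pattern and shifted/rescaled to {0, 1} so that it can be
    displayed on an intensity-only spatial light modulator.
    """
    N = n * n                     # total number of patterns (and of pixels per pattern)
    H = hadamard(N)               # N x N matrix with +1/-1 entries (Sylvester ordering)
    patterns = (H + 1) // 2       # shift and rescale entries to binary {0, 1}
    return patterns.reshape(N, n, n).astype(np.uint8)

patterns = hadamard_patterns(64)  # 4096 patterns of 64 x 64 pixels, as in Section 3.2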

The scattering medium placed after the object is characterized at the macroscopic scale through the transmission matrix $K$, with complex coefficients $k_{mn}$. The individual element $k_{mn}$ connects the $n$th input Hadamard mode with the $m$th output measurement mode expressed in the canonical basis [27]. We assume that the matrix-coefficient statistics follow from the classical random walk phenomenon, as each $k_{mn}$ results from the sum of contributions from many elementary pathways inside the medium that connect incoming and outgoing modes. As a result, the complex phasor $k_{mn}$ is a circular complex Gaussian variate and the transmission matrix amounts to a random matrix of independent identically distributed entries with Gaussian statistics [28]. The outgoing optical field of the $m$th output mode is $E_m^{out} = \sum_n k_{mn} E_n^{in}$ and the corresponding intensity is $I_m^{out} = |\sum_n k_{mn} E_n^{in}|^2$. The input modes add together on an amplitude basis, and each output signal is affected by strong correlations between the contributions of the different input Hadamard modes. In our single-pixel detection scheme, however, the measurement is given by the summation of the signals provided by the output canonical modes that cover the active surface of the sensor, i.e., $I^{out} = \sum_m I_m^{out}$ ($m = 1,\dots,S$). The averaging over the photodetector surface forces the cross terms that couple different input modes to vanish; their contributions become uncorrelated and equally weighted, so that the modes now add together on an intensity basis, i.e., $I^{out} = \sum_m I_m^{out} \propto \sum_n |E_n^{in}|^2$. Consequently, the sequential projection of the different Hadamard modes onto the input object allows us, at the detection level, to measure separately any image expansion coefficient $|E_n^{in}|^2$. This result, derived in Section 5, is crucial for the validity of our disorder-assisted single-pixel image-retrieval method and states that the incoherent nature of single-pixel imaging is preserved even through disordered media.
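A minimal numerical sketch of this argument (ours, not the paper's) is given below: drawing the transmission matrix as circular complex Gaussian entries and integrating a large number S of output modes, the single-pixel signal approaches the incoherent sum of the input-mode intensities. The mode counts and the unit-variance normalization are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

N = 256         # number of input Hadamard modes
S = 10_000      # number of speckle grains (output modes) integrated by the photodiode

# Circular complex Gaussian transmission matrix (random-walk model of the medium)
K = (rng.normal(size=(S, N)) + 1j * rng.normal(size=(S, N))) / np.sqrt(2)

E_in = rng.normal(size=N)                  # arbitrary real input-mode amplitudes
I_out = np.sum(np.abs(K @ E_in) ** 2)      # single-pixel signal: intensity summed over S output modes

# Incoherent approximation: cross terms average out, so modes add on an intensity basis
I_approx = S * np.sum(np.abs(E_in) ** 2)   # uses <|k_mn|^2> = 1 for this normalization

print(I_out / I_approx)                    # close to 1; relative deviation ~ 1/sqrt(S)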

Applying the above ideas to our test object, we obtained the results shown in Fig. 2. The Cheshire Cat was directly encoded onto the spatial light modulator with an image size of N = 128 × 128 pixels. The number of Walsh-Hadamard patterns was M = 0.2N. We used a diode-pumped laser at 532 nm (Oxxius slim-532) coupled to a single-mode optical fiber. The beam was expanded and modulated by a liquid-crystal reflective spatial light modulator (Holoeye LC-R 2500, XGA resolution with a pixel pitch of 19 μm) inserted between an appropriate combination of polarizers. The scattering layer was a commercial diffuser (Edmund Optics T54-497). After passing through the diffuser, the light was partially guided by a lens to the photodiode (Thorlabs DET36A with an active area of 3.6 × 3.6 mm²), which measured a fraction of the transmitted light intensity, and an analog-to-digital converter digitized the photodetected signal. We found that the scattering layer completely hides the presence of the object, so that the image recorded by a charge-coupled device camera is simply a speckle pattern (see Fig. 2(a)). On the contrary, the image reconstructed by computational imaging of projected patterns with single-pixel detection showed an excellent resemblance to the original object (see Fig. 2(b)).


Fig. 2 Seeing through scattering media with a single-pixel camera. (a) The Cheshire Cat is placed in front of a scattering layer that completely hides it. We show the image of the object as seen by a charge-coupled-device camera (Stingray F-145 with a square pixel size of 6.45 × 6.45 μm²). The image contains no information at all about the shape of the object. (b) Image of the object retrieved from the values recorded by the photodiode. The good quality of the reconstructed image confirms that disorder-assisted imaging can indeed recover fine details of the object even though a sensor without spatial resolution is used.


3. Compressive imaging through a static scattering layer

3.1. Compressive sensing

In the framework of single-pixel imaging, the imaging process in intensity is formulated as

$$\mathbf{y} = \mathbf{M}\,\mathbf{I}^{in} = \begin{pmatrix} H_1^{11} & \cdots & H_1^{nn} \\ \vdots & \ddots & \vdots \\ H_N^{11} & \cdots & H_N^{nn} \end{pmatrix} \mathbf{I}^{in}, \qquad (1)$$

where the sampling matrix $\mathbf{M}$ is here a row-wise array of the Walsh-Hadamard modes and $\mathbf{I}^{in} = |x^{in}|^2$ denotes the sampled intensity corresponding to the unknown object (expressed in the canonical basis) arranged in a column vector of dimension $N$. The components of the vector $\mathbf{y}$ are the projecting coefficients $|E_n^{in}|^2$.

The object is recovered off-line from the projections $|E_n^{in}|^2$ by solving the algebraic problem described in Eq. (1). To form a completely determined set of measurements, the rank of the sampling matrix $\mathbf{M}$ must equal the object data dimension $N$, as written in Eq. (1). However, compressive sensing allows the reconstruction of sparse signals $\mathbf{I}^{in}$ even when the number of measurements $M$ is lower than the number of pixels of the sampled object, i.e., $M \ll N$. This image sampling mechanism overcomes the fundamental trade-off in imaging between speed and resolution and speeds up computational imaging. The point is that a sparse signal has only a small fraction of coefficients with significant values when transformed into the appropriate basis of functions. Therefore, many of the modes of the basis provide little to no useful information and can be removed without substantial loss of image quality. Although it is not known at the sensing stage which coefficients have appreciable amplitude, a compressive sensing algorithm recovers an undersampled signal $\mathbf{I}^{in}_{rec}$ that approaches the exact signal $\mathbf{I}^{in}$ with high probability from a random undersampling of the matrix $\mathbf{M}$. From a mathematical point of view, the system is underdetermined and the object is recovered by solving the optimization problem $\min \|\mathbf{I}^{in}\|_{\ell_1}$ such that $\mathbf{y}_{(M)} = \mathbf{M}_{us}\,\mathbf{I}^{in}_{(N)}$. Here $\mathbf{M}_{us}$ is the undersampled version of the sampling matrix $\mathbf{M}$, with dimension $M \times N$, and $\|\mathbf{I}^{in}\|_{\ell_1}$ is the $\ell_1$-norm of the object represented in an appropriate basis (in our case, the Walsh-Hadamard basis), which reflects the inherent sparsity of natural objects. Given a sufficiently large number of samplings, the problem is rigorously solvable and the recovered signal $\mathbf{I}^{in}_{rec}$ approaches the exact signal $\mathbf{I}^{in}$. If only $k$ Hadamard projections of the signal $\mathbf{I}^{in}$ have a significant value, the number of useful sampling modes scales as $M \sim k \log N$ [20] and the compression ratio is given by $CR = N/M$.
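As a hedged illustration of this basis-pursuit step (the experiments in Section 3.2 use the l1eq-pd routine of the MATLAB l1-magic package [29]), the following sketch solves the same optimization with the open-source CVXPY modeling package; it is a stand-in for, not a reproduction of, the primal-dual solver actually used, and for simplicity it assumes the signal is sparse directly in the coordinates in which it is recovered.

import numpy as np
import cvxpy as cp

def basis_pursuit(M_us, y):
    """Solve the basis-pursuit problem  min ||x||_1  s.t.  M_us @ x == y.

    M_us : (M, N) undersampled sensing matrix (randomly selected Hadamard rows).
    y    : (M,)  vector of single-pixel measurements.
    Returns the recovered N-dimensional signal.
    """
    x = cp.Variable(M_us.shape[1])
    problem = cp.Problem(cp.Minimize(cp.norm1(x)), [M_us @ x == y])
    problem.solve()
    return x.value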

3.2. Application to a microscopic sample

To test the quality of the images recovered by compressive sensing through a turbid medium, we conducted a series of experiments with the optical implementation shown in Fig. 3(a). The laser source and the photodiode were those employed in the preceding section. The object was a sample of stained onion cells, the Walsh-Hadamard patterns had 64 × 64 pixels, and the optics of the system was adapted to match the spatial scale of the projecting patterns to the sample by use of an optical relay containing a microscope objective (Nikon 4×, NA = 0.1). We used the diffuser shown in Fig. 1(b) and included the Stingray CCD camera in the optical system. The sample was hidden by the scattering layer, so that the image recorded by the CCD camera was a speckle pattern, as shown in the small intensity map in Fig. 3(a). The code employed for compressive sensing was the function l1eq-pd of the l1-magic software package, which solves the standard basis pursuit problem using a primal-dual algorithm [29]. The quality of the undersampled image was assessed with the standard peak signal-to-noise ratio, $PSNR = 10 \log(I_{max}^2 / MSE)$, where $I_{max}$ is the maximum pixel intensity of the reference image and $MSE = (1/N)\,\|\mathbf{I}^{in}_{ref} - \mathbf{I}^{in}_{rec}\|^2$ is the mean square error between the undersampled image, $\mathbf{I}^{in}_{rec}$, and the image recovered from the whole measurement set, $\mathbf{I}^{in}_{ref}$. Figure 3(b) shows that the time required for the sensing stage could be reduced by a factor of 2 while the PSNR remains higher than 20 dB, which indicates high image fidelity. This is relevant for speeding up image retrieval. Moreover, for faster operation, the slow liquid-crystal spatial light modulator can be replaced by a digital micromirror device or an equivalent component. Considering that the maximum full-frame rate of commercially available digital micromirror devices is 22.7 kHz, the capture of 64 × 64 pixel images at the sensing stage could operate at a frame rate of 10 Hz. Even so, this imaging rate is still insufficient to cope with the dynamic nature of real scattering media, whose speckle decorrelation times are on the millisecond scale.
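For completeness, the PSNR figure of merit defined above can be evaluated with a short helper such as the following (our own sketch, not code from the paper):

import numpy as np

def psnr(I_ref, I_rec):
    """Peak signal-to-noise ratio in dB between the reference image and an
    undersampled reconstruction, following the definition in Section 3.2."""
    mse = np.mean((I_ref - I_rec) ** 2)               # (1/N) * ||I_ref - I_rec||^2
    return 10.0 * np.log10(I_ref.max() ** 2 / mse)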


Fig. 3 Disorder-assisted compressive microscopy. (a) Schematic diagram of the optical setup. LS, laser source; L, lens; BS, beam splitter; SLM, programmable spatial light modulator; D, iris diaphragm; MO, microscope objective; OBJ, sample (stained onion cells); SM, scattering medium; CCD, charge-coupled device camera; PD, photodiode. (b) Plot of the quality of the recovered images in dB as a function of the compression ratio. The insets show the image of the onion cells retrieved from the photodetected signals generated by a randomized selection (24% and 50%) of the 4096 input modes, along with the fully sampled image (CR = 1) for comparison.


4. Image reconstruction through dynamic scattering media

Our method also recovers the image through a scattering medium whose properties change in time due to the movement of the scattering centres. The critical point is that our technique works without any signal feedback. For a dynamic scattering medium, new realizations of the random walk are created as time goes by; as a consequence, the speckle intensity at any point on the detector plane changes with time, and so does the transmission matrix. Our approach handles this situation in a rather different way. As long as the statistical properties of the medium remain stationary and the number of speckle grains impinging on the active area of the sensor is high enough, the photodetected current does not change with time and, as a result, the sample is reconstructed regardless of the diffuser movement (see Fig. 4(a)). In our experiment the diffuser was a polypropylene cover characterized by its orange-peel texture. The diffuser was moved by a stream of air, and a USAF resolution chart acted as the sample. The chaotic movement of the diffuser mimicked a dynamic scattering medium generating a time-varying speckle. The key point was that the photodiode performed a spatial intensity integration such that, under the above assumptions, the value of every measurement provided by the non-pixelated detector was effectively the same as in the static case. Figure 4(b) shows a snapshot of the dynamic speckle pattern after the moving diffuser. This image was recorded with a charge-coupled device camera, as in Fig. 3(a). A plot of the measured intensity as a function of time for a single camera pixel is compared with that corresponding to a small region of 64 × 64 pixels. The uniformity of the intensity level in the latter case is clearly noticeable and is the basis of our implementation.
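A minimal sketch (ours, not the paper's) of this spatial-averaging argument, assuming fully developed speckle with uncorrelated, exponentially distributed grain intensities that are redrawn at every time step: the relative fluctuation of a single pixel remains of order one, whereas that of the area-integrated signal drops roughly as 1/sqrt(S).

import numpy as np

rng = np.random.default_rng(1)

S = 64 * 64        # camera region integrated as a surrogate for the photodiode area
T = 200            # time steps, each a new realization of the moving diffuser

# Fully developed speckle: unit-mean exponential intensity statistics per grain
speckle = rng.exponential(scale=1.0, size=(T, S))

single_pixel = speckle[:, 0]          # intensity of one camera pixel over time
integrated = speckle.mean(axis=1)     # average over the 64 x 64 region

print(single_pixel.std() / single_pixel.mean())   # ~1: strong temporal fluctuations
print(integrated.std() / integrated.mean())       # ~1/sqrt(S): nearly constant signal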

 figure: Fig. 4

Fig. 4 Image reconstruction through a dynamic scattering medium. (a) Image of the sample (a standard resolution chart) reconstructed from the photodetected signal, showing the successful image-retrieval capability of our single-pixel imaging setup through a moving diffuser. (b) Snapshot of the speckle field generated by the object placed in front of a moving diffuser. On the right we plot the detected signal, with an integration time smaller than the typical temporal drift of the speckle grains, corresponding to both a single camera pixel (top) and a region of 64 × 64 pixels (bottom).


5. Discussion

Here we examine in greater detail the impact of the statistical properties of the scattering medium on the image-retrieval process. As discussed above, the medium is characterized at the macroscopic scale through the transmission matrix $K$ with complex coefficients $k_{mn}$. The individual quantity $k_{mn}$ connects the $n$th input Hadamard mode with the $m$th output measurement mode expressed in the canonical basis. The measurement in our single-pixel detection scheme is the summation of the signals provided by the output modes that cover the active surface of the sensor, say $S$ modes. Thus, when the medium is fed with a complex signal consisting of $N$ input modes, the photodetected signal is given by

$$I^{out} = \sum_{m=1}^{S} I_m^{out} = \sum_{m=1}^{S} \left| \sum_{n=1}^{N} k_{mn} E_n^{in} \right|^2 = \sum_{m=1}^{S} \sum_{n=1}^{N} |k_{mn}|^2 |E_n^{in}|^2 + \sum_{m=1}^{S} \sum_{n=1}^{N} \sum_{\substack{n'=1 \\ n' \neq n}}^{N} k_{mn} k_{mn'}^{*}\, E_n^{in} E_{n'}^{in\,*}. \qquad (2)$$
It is convenient to define $A_n = \sum_m |k_{mn}|^2$ and $B_{nn'} = \sum_m k_{mn} k_{mn'}^{*}$ to shorten the above expression, so that the output signal can be rewritten as
$$I^{out} = \sum_{n=1}^{N} A_n |E_n^{in}|^2 + \sum_{n=1}^{N} \sum_{\substack{n'=1 \\ n' \neq n}}^{N} B_{nn'}\, E_n^{in} E_{n'}^{in\,*}. \qquad (3)$$
Due to the scattering statistics, the averaging over the photodetector surface tends to cancel the cross terms that couple different input modes; more precisely, the contribution of the coefficients $B_{nn'}$ is negligible with respect to that of the coefficients $A_n$. In turn, the $A_n$ contributions become nearly equally weighted for all $n$. Consequently,
$$I^{out} \propto \sum_{n=1}^{N} |E_n^{in}|^2. \qquad (4)$$
The input modes now add together on an intensity basis, as in conventional single-pixel imaging, which is at the core of our computational imaging method. Concerning Eq. (4), two additional comments are relevant. On the one hand, the projection of the $n$th Hadamard mode onto the input plane allows us, at the detection level, to evaluate separately the corresponding squared object expansion coefficient, $|E_n^{in}|^2$. On the other hand, compressive sensing reduces the number of sequentially projected patterns needed to recover the object information from $N$ to $M < N$ randomly selected modes. We tested the above hypotheses experimentally by sequentially launching several Hadamard patterns onto the spatial light modulator and detecting the output intensity with the charge-coupled-device camera used in Fig. 3(a). The size of the speckle grains recorded by the camera (see inset in Fig. 3(a)) was estimated from the full width at half maximum (FWHM) of the autocorrelation signal to be about 12.9 μm. Next, the active area of the matrix sensor was limited to 64 × 64 pixels. In the experiment, 1500 Hadamard modes were sequentially displayed on the spatial light modulator and the signal corresponding to the 64 × 64 output modes was recorded separately for every input mode. The histogram of the $|k_{mn}|$ values is shown in Fig. 5(a). The results fit nicely to the assumed random-walk model for the scatterers. From these experimental counts, we numerically calculated the parameters $A_n$ and $B_{nn'}$. Their statistical distributions are shown in Figs. 5(b) and 5(d), respectively. In the latter case, in order to compute the $B_{nn'}$ coefficients, we assigned a random uniform phase distribution between 0 and 2π to the measured $|k_{mn}|$ elements.
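The following sketch reproduces this analysis numerically. Since the measured $|k_{mn}|$ values are not available here, Rayleigh-distributed magnitudes are drawn as a stand-in (the experimental histogram in Fig. 5(a) follows that distribution), and uniform random phases in [0, 2π) are assigned as in the paper; the numerical results therefore illustrate the trend rather than reproduce the experimental value of 0.13.

import numpy as np

rng = np.random.default_rng(2)

# Stand-in for the measured |k_mn| data: S = 64*64 output modes, N = 1500 input modes
S, N = 64 * 64, 1500
abs_k = rng.rayleigh(scale=1.0, size=(S, N))

# Assign random uniform phases in [0, 2*pi), as done for the B_nn' histogram in Fig. 5(d)
k = abs_k * np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, size=(S, N)))

A = np.sum(np.abs(k) ** 2, axis=0)              # A_n = sum_m |k_mn|^2
print(A.std() / A.mean())                       # relative dispersion sigma_A / <A>, small when many grains are integrated

# A subset of cross coefficients B_nn' = sum_m k_mn * conj(k_mn')
n_idx = rng.integers(0, N, size=1000)
p_idx = rng.integers(0, N, size=1000)
B = np.array([np.sum(k[:, a] * np.conj(k[:, b]))
              for a, b in zip(n_idx, p_idx) if a != b])
print(np.abs(B).mean() / A.mean())              # cross coefficients are small compared with A_n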


Fig. 5 Charting the statistical properties of the scattering layer. (a) Histogram of the absolute value of the transmission matrix elements measured experimentally. The plot closely matches the expected Rayleigh distribution. (b) Histogram of the summations $A_n$ evaluated from the data in (a). The small ratio between the standard deviation σ_A and the mean value 〈A〉 ensures that the integrated contributions coming from the different input modes are nearly equally weighted. (c) Relative dispersion of the $A_n$ contributions as a function of the number of output modes integrated by the area of the sensor. (d) Histogram of the summations $B_{nn'}$ evaluated from the data in (a) by assigning a random uniform phase distribution between 0 and 2π. To ease the comparison of results, the abscissa scale in panels (b) and (d) is the same.


Two findings are clear from the histograms in Figs. 5(b) and 5(d). First, the $A_n$ values are clearly clustered around a certain value 〈A〉. In mathematical terms, the quotient between the standard deviation σ_A and the mean value of the histogram turns out to be 0.13, in good agreement with the nearly zero value we expected. Second, the $B_{nn'}$ coefficient histogram (a nearly Gaussian distribution) is centred at the origin, takes positive and negative values, and is far from overlapping the $A_n$ coefficient distribution. The combination of these facts validates the assumptions made to derive Eq. (4) from Eq. (3). Figure 5(c) shows that the relative dispersion of the $A_n$ contributions increases as the number of output modes integrated by the detector decreases. The stagnation value (0.13 in Fig. 5(b)) can be reduced by increasing the number of input modes taken into account. In our experiments in Fig. 3, the number of modes inside the active surface of the photodiode was about $10^5$, which is clearly enough to assume decorrelation between the different input modes.

6. Conclusions

We have demonstrated that computational techniques combined with single-pixel sensing enable image reconstruction through arbitrary scattering media, in contrast to charge-coupled device cameras, where the pixelated structure of the sensor returns a noise-like speckle pattern. Our approach does not require a previous calibration of the disordered medium and makes it possible to retrieve images through dynamic scatterers. In contrast with techniques based on measuring the transmission matrix, our method does not need to characterize the scattering medium but operates on an intensity basis, thereby recovering intensity distributions instead of complex fields. On the other hand, the use of compressive sensing is limited to scenes that are sparse in the chosen basis. Different scenes may need different compression ratios (up to 1), which could entail longer acquisition times depending on the object under study. Our implementation is a first step toward the general problem of imaging objects completely embedded in a scattering medium. In parallel, our disorder-assisted single-pixel configuration has straightforward applications for image transmission through multimode fibers or for looking around corners. In addition, the operation principle of single-pixel imaging offers an ideal framework for using dedicated sensors, such as fiber spectrometers, beam polarimeters, and avalanche photodiodes, which can measure additional physical imaging dimensions (wavelength, polarization, low light intensity levels, ...) of the sample under scrutiny.

Acknowledgments

We thank Víctor Torres-Company at Chalmers University of Technology and Peter Török at Imperial College London for useful discussions and for reading the manuscript. This work was supported in part by MINECO (grants CSD2007-00013 and FIS2010-15746), Generalitat Valenciana (grants PROMETEO2012-021 and ISIC 2012/013), and Universitat Jaume I (P1-1B2012-55). E.I. and F.S. were partially supported by a Generalitat Valenciana research fellowship.

References and links

1. M. Wenner, “The most transparent research,” Nature Medicine 15, 1106–1109 (2009). [CrossRef]   [PubMed]  

2. K. Chung, J. Wallace, S. Y. Kim, S. Kalyanasundaram, A. S. Andalman, T. J. Davidson, J. J. Mirzabekov, K. A. Zalocusky, J. Mattis, A. K. Denisin, S. Pak, H. Bernstein, C. Ramakrishnan, L. Grosenick, V. Gradinaru, and K. Deisseroth, “Structural and molecular interrogation of intact biological systems,” Nature (London) 497, 332–337 (2013). [CrossRef]  

3. M. J. Booth, D. Débarre, and A. Jesacher, “Adaptive optics for biomedical microscopy,” Opt. Photon. News 23, 22 (2012). [CrossRef]

4. A. P. Mosk, A. Lagendijk, G. Lerosey, and M. Fink, “Controlling waves in space and time for imaging and focusing in complex media,” Nat. Photonics 6, 283–292 (2012). [CrossRef]  

5. I. M. Vellekoop, A. Lagendijk, and A. P. Mosk, “Exploiting disorder for perfect focusing,” Nat. Photonics 4, 320–322 (2010). [CrossRef]  

6. S. M. Popoff, G. Lerosey, M. Fink, A. C. Boccara, and S. Gigan, “Controlling light through optical disordered media: transmission matrix approach,” New J. Phys. 13, 123021 (2011). [CrossRef]  

7. R. T. Hillman, T. Yamauchi, W. Choi, R. R. Dasari, M. S. Feld, Y. Park, and Z. Yaqoob, “Digital optical phase conjugation for delivering two-dimensional images through turbid media,” Sci. Rep. 3 (2013). [CrossRef]   [PubMed]

8. O. Katz, E. Small, Y. Bromberg, and Y. Silberberg, “Focusing and compression of ultrashort pulses through scattering media,” Nat. Photonics 5, 372–377 (2011). [CrossRef]  

9. D. J. McCabe, A. Tajalli, D. R. Austin, P. Bondareff, I. A. Walmsley, S. Gigan, and B. Chatel, “Spatio-temporal focusing of an ultrafast pulse through a multiply scattering medium,” Nat. Commun. 2, 447–455 (2011). [CrossRef]   [PubMed]

10. S. Popoff, G. Lerosey, M. Fink, A. C. Boccara, and S. Gigan, “Image transmission through an opaque material,” Nat. Commun. 1, 1038 (2010). [CrossRef]

11. J. Katz and J. Sheng, “Applications of Holography in Fluid Mechanics and Particle Dynamics,” Annu. Rev. Fluid Mech. 42, 531–555 (2010). [CrossRef]  

12. J. Bertolotti, E. G. van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature (London) 491, 232–234 (2012). [CrossRef]  

13. D. B. Conkey, A. M. Caravaca-Aguirre, and R. Piestun, “High-speed scattering medium characterization with application to focusing light through turbid media,” Opt. Express 20, 1733–1740 (2012). [CrossRef]   [PubMed]  

14. M. Nixon, O. Katz, E. Small, Y. Bromberg, A. A. Friesem, Y. Silberberg, and N. Davidson, “Real-time wavefront shaping through scattering media by all-optical feedback,” Nat. Photonics 7, 919–924 (2013). [CrossRef]  

15. M. Plöschner, B. Straka, K. Dholakia, and T. Cizmár, “GPU accelerated toolbox for real-time beam-shaping in multimode fibres,” Opt. Express 22, 2933–2947 (2014). [CrossRef]   [PubMed]

16. Y. Bromberg, O. Katz, and Y. Silberberg, “Ghost imaging with a single detector,” Phys. Rev. A 79, 053840 (2009). [CrossRef]  

17. P. Clemente, V. Durán, E. Tajahuerce, V. Torres-Company, and J. Lancis, “Single-pixel digital ghost holography,” Phys. Rev. A 86, 041803 (2012). [CrossRef]  

18. M. F. Duarte, M. A. Davenport, D. Takhar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk, “Single-Pixel Imaging via Compressive Sampling,” IEEE Signal Process. Mag. 25, 83–91 (2008). [CrossRef]  

19. F. Magalhães, F. M. Araújo, M. V. Correia, M. Abolbashari, and F. Farahi, “Active illumination single-pixel camera based on compressive sensing,” Appl. Opt. 50, 405–414 (2011). [CrossRef]   [PubMed]

20. E. J. Candes and M. B. Wakin, “An Introduction To Compressive Sampling,” IEEE Signal Process. Mag. 25, 21–30 (2008). [CrossRef]  

21. B. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, and M. J. Padgett, “3D Computational Imaging with Single-Pixel Detectors,” Science 340, 844–847 (2013). [CrossRef]   [PubMed]  

22. G. A. Howland and J. C. Howell, “Efficient High-Dimensional Entanglement Imaging with a Compressive-Sensing Double-Pixel Camera,” Phys. Rev. X 3, 011013 (2013).

23. V. Studer, J. Bobin, M. Chahid, H. S. Mousavi, E. Candes, and M. Dahan, “Compressive fluorescence microscopy for biological and hyperspectral imaging,” Proc. Natl. Acad. Sci. USA 109, E1679–E1687 (2012). [CrossRef]   [PubMed]  

24. J. Hunt, T. Discroll, A. Mrozack, G. Lipworth, M. Reynolds, D. Brady, and D. R. Smith, “Metamaterial Apertures for Computational Imaging,” Science 339, 310–313 (2013). [CrossRef]   [PubMed]  

25. Y. Choi, C. Yoon, M. Kim, W. Choi, and W. Choi, “Optical Imaging With the Use of a Scattering Lens,” IEEE J. Sel. Top. Quantum Electron. 20, 6800213 (2014).

26. T. Cizmar and K. Dholakia, “Exploiting multimode waveguides for pure fibre-based imaging,” Nat. Commun. 3, 1027 (2012). [CrossRef]

27. S. M. Popoff, G. Lerosey, R. Carminati, M. Fink, A. C. Boccara, and S. Gigan, “Measuring the Transmission Matrix in Optics: An Approach to the Study and Control of Light Propagation in Disordered Media,” Phys. Rev. Lett. 104, 100601 (2010). [CrossRef]   [PubMed]

28. J. W. Goodman, “Some fundamental properties of speckle,” J. Opt. Soc. Am. 66, 1145–1150 (1976). [CrossRef]  

29. E. J. Candes, http://www.stat.stanford.edu/candes/l1magic.
