Optica Publishing Group

Speckle rotation decorrelation based single-shot video through scattering media

Open Access

Abstract

Optical imaging and tracking of moving objects through scattering media is a challenge with important applications. However, previous works suffer from time-consuming recovery processes, limits on object complexity, or loss of object information. Here we present a method based on the speckle rotation decorrelation property. The rotational speckles detected at short intervals are uncorrelated and multiplexed in a single-shot camera image. Object frames of the video are recovered by cross-correlation deconvolution of the camera image with a computationally rotated point spread function. The near real-time recovery provides sharp object image frames with accurate object relative positions, exact movement velocity, and continuous motion trails. This multiplexing technique has important implications for a wide range of real-world imaging scenarios.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Optical imaging is a prime means of collecting information about the inhomogeneity of complex samples such as biological tissues, atmospheric turbulence, fog, etc. Unfortunately, these samples scatter the light field in many directions and ruin the image quality [1]. A host of techniques extract information from multiply scattered light, such as wavefront shaping [2–5], optical phase conjugation [6,7], and transmission matrix measurement [8,9]. Some recent works based on the inherent angular correlations in speckle patterns, namely ‘memory effects’, provide novel imaging solutions [10]. Building on this, the speckle correlation method retrieves the Fourier amplitude of the object from the autocorrelation of the detected speckle and recovers the lost Fourier phase via an iterative phase-retrieval algorithm [11–22]. This ingenious method allows imaging through highly scattering media without detailed knowledge of the scattering sample, since it ignores the point spread function (PSF). However, it is limited by object complexity and by time-consuming iteration. Alternatively, as long as the PSF is measured, the object information can be reconstructed by deconvolution [23–28]. Notably, the deconvolution method can rapidly recover objects with exact relative positions only if the PSFs are correlated. Interestingly, if the PSFs are uncorrelated, each PSF deconvolves only the information it matches and ignores the rest, acting like a filter. Recently, several works based on decorrelation have been proposed [18,27,28]. One of them exploits the spectral decorrelation property of PSFs and recovers multispectral images with the corresponding spectral PSFs from one monochromatic speckle [28]. Another utilizes the decorrelation of PSFs in different memory effect ranges and enlarges the imaging range [27]. The last generates uncorrelated PSFs with a coded aperture and provides a single-shot video of objects through scattering media [18].
The first two works realize fast and sharp imaging but require the PSFs to be measured more than once, since the PSFs are uncorrelated. The last work suffers from long recovery times, since the uncorrelated speckles are selected with a dictionary-learning approach and the images are retrieved with the iterative phase-retrieval algorithm, both of which are time-consuming.

Imaging and tracking moving objects through scattering media is significant in many applications. Four goals must be achieved: sharp imaging of the object, exact localization of the object position, continuous recording of the motion trail, and fast or real-time recovery. Recently, several methods have been demonstrated to track moving objects through scattering media [18,29–32]. However, these methods are either incapable of imaging [29] or of quantitatively measuring the trajectory [32], or suffer from time-consuming iterative algorithms and limits on the complexity of the object shape [18,30,31].

Here, we present a method for single-shot video of moving objects through scattering media that provides sharp image frames with accurate object relative positions, exact movement velocity, and continuous motion trails in near real time. The method is based on the speckle rotation decorrelation property: rotation of the speckle around the principal optic axis gives rise to fast decorrelation. The rotational speckles detected at intervals are uncorrelated; therefore, they can be multiplexed in a single-shot camera image. Only a single PSF at the initial position needs to be measured, and the PSFs at other angular positions are obtained computationally by rotation. Object frames of the video are recovered by cross-correlation deconvolution (CD) of the camera image with the computationally rotated PSFs. The processes of selecting the specific speckle and retrieving the object information are combined, so that the object frames can be reconstructed in near real time.

2. Speckle rotation decorrelation

Speckle motion has long been considered a difficulty to overcome, since speckle rotation and speckle shift cause decorrelation and thus decrease the imaging quality [18,24,30,33,34]. In particular, speckle rotation gives rise to decorrelation more easily than speckle shift. Theoretically, a simple shift of a real function does not affect the maximum value of the cross-correlation between the functions before and after the shift, but only changes its position. Nonetheless, in the case of imaging through scattering media, a change of the angle between the incident light and the scattering medium not only shifts the speckle but also slowly changes its structure, which finally results in decorrelation. Since this structural change is slow, the decorrelation is gradual, giving rise to a ‘memory effect’ range. In contrast, rotating an image around its center greatly affects the cross-correlation between the images before and after the rotation, and the cross-correlation falls off unless the image is rotationally symmetric. In imaging through scattering media, speckles are generally random and asymmetrical. Therefore, speckle rotation results in extremely fast decorrelation.

To demonstrate, a collimated beam of 532 nm coherent light with an 8 mm diameter is incident on a scattering sample (Newport, 10DKIT-C3-40°) and emerges as speckle. Rotational speckles are produced by rotating the scattering sample around the principal optic axis (inset of Fig. 1(a)) and are recorded at intervals by a 2/3″ camera placed 80 mm from the scattering medium. The red solid line in Fig. 1(a) presents the maximum value of the intensity cross-correlation between the rotational speckles and a reference speckle detected at a fixed angle. Camera rotation around the principal optic axis can also generate rotational speckles, and the corresponding cross-correlations are plotted as the blue dotted line in Fig. 1(a). The two lines overlap because the rotations of the scattering sample and of the camera are relative motions. The correlation drops to 1/2 when the rotational angle is around 0.12 degrees, which we call the rotational decorrelation angle (RDA). This indicates that a slight speckle rotation causes decorrelation. We call this property speckle rotation decorrelation (SRD).
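The contrast between shift and rotation decorrelation can be reproduced numerically. The sketch below is an illustrative toy, not the experimental data: the "speckle" is a low-pass-filtered random field and the rotation is a full 90 degrees so that plain NumPy suffices. It compares the peak of the normalized cross-correlation for a shifted and a rotated copy of the same pattern:

```python
import numpy as np

def max_xcorr(a, b):
    """Peak of the normalized circular cross-correlation between two patterns."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    corr = np.real(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))))
    return corr.max() / a.size

rng = np.random.default_rng(0)
# Toy "speckle": low-pass-filtered random field with grains a few pixels wide.
field = rng.standard_normal((256, 256))
k = np.fft.fftfreq(256)
lowpass = (k[:, None] ** 2 + k[None, :] ** 2) < 0.05 ** 2
speckle = np.real(np.fft.ifft2(np.fft.fft2(field) * lowpass))

shifted = np.roll(speckle, 5, axis=1)  # lateral shift (memory-effect analogue)
rotated = np.rot90(speckle)            # rotation about the pattern centre

print(max_xcorr(speckle, shifted))     # stays at 1: shifting only moves the peak
print(max_xcorr(speckle, rotated))     # collapses: rotation destroys correlation
```

The shifted copy keeps a unit correlation peak (only its position moves), while the rotated copy of the same asymmetric random pattern loses its peak, mirroring the measured RDA being far smaller than the memory effect angle.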

Fig. 1 Intensity cross-correlation between the reference speckle and (a) the rotational speckles produced by rotating the scattering sample (red solid line) or the camera (blue dotted line) around principal optic axis, and (b) the shifted speckles produced by rotating the scattering sample around the vertical axis. The insets in (a) and (b) are the experimental diagrams.


As a comparison, a sequence of speckles is obtained by rotating the scattering sample around the vertical axis (inset of Fig. 1(b)), the same procedure used to measure the memory effect angular range. The speckle retains its structure and undergoes a slight shift for small rotation angles. Figure 1(b) shows the maximum value of the cross-correlation between the shifted speckles and the reference speckle taken at the initial position. The measured data are fitted with the theoretical memory effect curve given by Feng et al. [10]. The corresponding decorrelation angle in Fig. 1(b), i.e., the memory effect angle, is about 3.6 degrees, almost 30 times the RDA in Fig. 1(a) under the same conditions. Therefore, speckle rotation causes decorrelation much more easily than speckle shift.

It is worth mentioning that the RDA is not affected by lateral shifts of the camera position or by replacing the scattering sample with a different one. However, it increases slightly as the camera's sensing area is reduced or as the distance between the camera and the scattering sample is extended.

From a different perspective, the SRD property can be exploited for specific applications instead of being regarded as a difficulty. Here, we take advantage of SRD for single-shot video of moving objects hidden behind scattering layers. In this application, speckles detected at different times must not correlate with each other. The SRD is therefore suitable and effective, since speckle rotation causes decorrelation quickly and easily.

3. Principle of single-shot video of a moving object

The principle of SRD based single-shot video of moving objects through scattering media is presented in Fig. 2. A dynamic object moving within the memory effect range is hidden behind a scattering medium and illuminated by a spatially incoherent, narrowband source. The scattered rotational speckles are produced by rotating the recording camera around the principal optic axis, since in practical applications it is better not to alter the scattering sample. Because the object lies within the memory effect range, each point on the object generates a shift-invariant random speckle pattern on the camera, which can be regarded as the PSF of the imaging system in Fig. 2(a). Thus, the momentary camera image $I_{t_1}$ is the convolution of the PSF and the moving object $O_{t_1}$ at moment $t_1$, which can be expressed as:

$$I_{t_1}(\mathbf{r}) = S_{\theta_1}(\mathbf{r}) \ast O_{t_1}(\mathbf{r}), \tag{1}$$
where the symbol $\ast$ denotes the convolution operation, $\mathbf{r}$ represents the lateral position vector, and $S_{\theta_1}$ is the PSF for angular position $\theta_1$ at moment $t_1$. Notably, the PSF is detected only once, with the camera at its initial placement, which is regarded as the origin of rotation. After time $t_1$, the camera has rotated to $-\theta_1$ from the origin, which corresponds to rotating the speckle to $\theta_1$, and the PSF for $\theta_1$ can be obtained computationally by rotating the PSF measured at the initial location. Since speckle rotation gives rise to decorrelation, the camera image $I$ is a superposition of the momentary camera images with rotational interval angles larger than the RDA $\theta_{\mathrm{corr}}$:
$$I(\mathbf{r}) = \sum_i I_{t_i}(\mathbf{r}) = \sum_i S_{\theta_i}(\mathbf{r}) \ast O_{t_i}(\mathbf{r}), \tag{2}$$
where $S_{\theta_i}$ is the PSF rotated by $\theta_i$, and $\Delta\theta_i = \theta_i - \theta_{i-1} > \theta_{\mathrm{corr}}$.
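As a sketch of Eq. (2), the multiplexed camera image can be simulated as a sum of convolutions of rotated copies of a single PSF with momentary object frames. The random PSF, point-object frames, and 90-degree rotation steps below are toy assumptions chosen so that np.rot90 gives an exact rotation; in the experiment the angular steps are only a few degrees, still well above the RDA:

```python
import numpy as np

def fft_convolve(a, b):
    """Circular 2-D convolution via the FFT."""
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

rng = np.random.default_rng(1)
n = 128
psf = rng.random((n, n))  # stands in for the single PSF measured at 0 degrees

# Toy object frames O_ti: one bright point stepping to the right.
frames = []
for i in range(3):
    obj = np.zeros((n, n))
    obj[n // 2, n // 2 + 6 * i] = 1.0
    frames.append(obj)

# Eq. (2): the single-shot camera image is the superposition of the momentary
# images, each the convolution of an object frame with the PSF rotated to the
# matching angle (90-degree steps here, so np.rot90 is exact).
camera_image = sum(fft_convolve(np.rot90(psf, k=i), obj)
                   for i, obj in enumerate(frames))
```

Because the rotated PSFs are mutually uncorrelated, this superposition can later be demultiplexed frame by frame without any moving-part bookkeeping in the detection itself.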

Fig. 2 Schematic and conceptual description of SRD based single-shot video of hidden objects. (a) Spatially incoherent light from a moving (or static) object passes through a scattering sample, and the scattered speckles are detected by a rotating camera. The relative rotation between speckle and camera forms the speckle rotation. The SRD property makes the camera image a superposition of the dynamic speckles at various moments. (b) Image recovery via CD. Computationally rotated PSFs select and recover matched frames of the video by cross-correlating with the camera image. The corresponding time of each frame is encoded in the rotation angle of the PSF. The motion trail of the moving object is reconstructed with accurate relative positions.


Two PSFs with a rotational interval angle smaller than $\theta_{\mathrm{corr}}$ are detected at nearly the same angle and are correlated, so their correlation is a strongly peaked function. On the contrary, the correlation of two PSFs with an interval angle larger than $\theta_{\mathrm{corr}}$ vanishes rapidly. So we reach:

$$S_{\theta_i} \star S_{\theta_j} = \begin{cases} \delta_{\theta_{ij}}, & i = j \\ 0, & i \neq j, \end{cases} \tag{3}$$
where $\star$ denotes the correlation operation and $\delta_{\theta_{ij}}$ is the delta function. According to Eq. (3), with the PSF generated by a point on the object plane at position $\mathbf{r}_0$, the cross-correlation between the computationally rotated PSFs and the camera image $I$ yields:
$$\sum_j S_{\theta_j}(\mathbf{r} - \mathbf{r}_0) \star I(\mathbf{r}) = \sum_j S_{\theta_j}(\mathbf{r} - \mathbf{r}_0) \star \left[ \sum_i S_{\theta_i}(\mathbf{r}) \ast O_{t_i}(\mathbf{r}) \right] = \sum_j \left[ \delta_{\theta_j}(\mathbf{r} + \mathbf{r}_0) \ast O_{t_j}(\mathbf{r}) \right] + C = \sum_j O_{t_j}(\mathbf{r} + \mathbf{r}_0) + C, \tag{4}$$
where $C$ is an additional constant background term. According to this equation, which we call the CD of rotational speckles, each frame of the moving object can be immediately singled out and reconstructed at the same time, with no iteration required (Fig. 2(b)). Notably, the CD recovers images with higher quality than normal deconvolution: in normal deconvolution with a known PSF, the division by the PSF in the Fourier domain and the inverse Fourier transform amplify noise, lowering the quality of the recovered images. Moreover, the corresponding time $t_j$ of each recovered frame can be calculated from the rotation angle $\theta_j$ of the PSF:
$$t_j = \theta_j / \omega + j \times t_{\mathrm{pause}}, \tag{5}$$
where $\omega$ represents the rotational velocity of the camera and $t_{\mathrm{pause}}$ is the pause time between two adjacent rotations. Notably, the lateral position of the object relative to the point where the PSF was generated can also be retrieved from Eq. (4). Thus, the motion trail of the moving object is accurately obtained after retrieving the object image, corresponding time, and relative position of each frame. Meanwhile, the single-shot video of the moving object hidden behind the scattering medium is reconstructed.
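A minimal numerical sketch of the CD recovery of Eqs. (3)–(5), under the same toy assumptions as before (random PSF, point objects, 90-degree rotation steps so np.rot90 is exact): correlating the camera image with the angle-matched PSF makes each frame's position pop out as a peak on top of the constant background $C$, and Eq. (5) supplies timestamps using the experimental rotation parameters:

```python
import numpy as np

def fft_convolve(a, b):
    """Circular 2-D convolution via the FFT."""
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

def fft_xcorr(a, b):
    """Circular 2-D cross-correlation (a star b) via the FFT."""
    return np.real(np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)))

rng = np.random.default_rng(1)
n = 128
psf = rng.random((n, n))

# Forward model of Eq. (2): three point-object frames multiplexed in one image.
positions = [(64, 64), (64, 70), (64, 76)]
camera_image = np.zeros((n, n))
for i, (y, x) in enumerate(positions):
    obj = np.zeros((n, n))
    obj[y, x] = 1.0
    camera_image += fft_convolve(np.rot90(psf, k=i), obj)

# Eq. (4): correlating with the angle-matched (computationally rotated) PSF
# singles out one frame; unmatched terms only add a flat background C.
recovered = [fft_xcorr(np.rot90(psf, k=j), camera_image) for j in range(3)]
peaks = [np.unravel_index(np.argmax(f), f.shape) for f in recovered]
print(peaks)  # each peak sits at the object position of the matching frame

# Eq. (5): timestamps from the rotation parameters used in the experiment.
omega, t_pause, dtheta = 20.0, 1.1, 5.0   # deg/s, s, degrees per step
times = [dtheta * j / omega + j * t_pause for j in range(3)]
```

No iteration or speckle-selection step is needed: each rotated PSF simultaneously selects and deconvolves its own frame, which is what makes the recovery near real-time.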

4. Experiments and results

4.1 Deconvolution of rotational speckles with angle-matching PSF

In this experiment, a stationary object "5" generated by a spatial light modulator (SLM, RealLight, RL-SLM-R1) is placed 472 mm behind the scattering medium (Thorlabs, DG100 × 100-120) and illuminated by a 532 nm spatially incoherent pseudothermal source. The imaging resolution of the system is 23.6 μm. The CMOS camera (Point Grey, GS3-U3-51S5M-C, 2448 × 2048 pixels, 3.45 μm pixel size), fixed on a motorized rotation stage (Zolix, MC600), is placed 80 mm in front of the scattering medium. The rotation centers of the camera and the stage coincide to avoid displacement during rotation. Under these system conditions, the RDA is around 0.12 degrees. Five spaced speckle patterns are detected in sequence by rotating the camera from −1 to 1 degree in steps of 0.5 degrees. The PSF is detected at 0 degrees by substituting a point for the object on the SLM. The PSF is then cross-correlated with each speckle pattern. As shown in Fig. 3(a), object information is recovered via CD only at 0 degrees and is totally blurred at the other rotation angles of the camera. This confirms that the rotational decorrelation angle is small enough to make SRD very suitable for single-shot video of hidden objects.

Fig. 3 (a) CD of rotational speckles with the PSF at 0 degrees. Object information is recovered only at 0 degrees and is totally blurred at unmatched rotation angles. (b) CD of the PSF at 0 degrees with rotational speckles detected at 0 degrees (Speckle1), 30 degrees (Speckle2), and their superposition (Speckle1 + 2). (c) CD of the PSF computationally rotated to 30 degrees with the same speckles as in (b). Object images are only recovered with the angle-matching PSF. Top-right insets in (c): the rotated objects caused by camera rotation. Scale bars: 34.5 μm.


In the experiment of single-shot video of hidden objects, multiple speckles detected at different angles are multiplexed. The unmatched speckles have little effect on the reconstructed images, leading only to a slight fall in PSNR. To demonstrate, two speckles are detected at 0 degrees (Speckle1) and 30 degrees (Speckle2) by rotating the camera, and the two speckles are then added together (Speckle1 + 2). The PSF detected at 0 degrees is cross-correlated with these three speckles in turn. Figure 3(b) shows that the object images are reconstructed only when the rotation angles of the speckle and the PSF match. The right object image is almost the same as the left one, with about an 8% decrease in PSNR. This decrease is caused by the background noise from the unmatched Speckle2. The conditions in Fig. 3(c) are identical to those in Fig. 3(b) except that the PSF is computationally rotated to 30 degrees. Object images are recovered only with the angle-matching PSF (middle and right images in Fig. 3(c)). Since the speckle rotation is caused by rotating the camera, the object is also rotated in the view of the camera. Therefore, the objects recovered by deconvolution are also rotated by 30 degrees (top-right insets in Fig. 3(c)). The final object images are reconstructed by rotating the recovered objects back by 30 degrees (Fig. 3(c)).

4.2 Single-shot video of moving objects

The system for SRD based single-shot video of a hidden object is the same as in Section 4.1, except that the object is changed to a moving "5" with a height of about 650 μm on the SLM (Fig. 4(f)). The object translates from left to right within the memory effect range at a rate of 45.0 μm/s. The camera rotates at a speed of 20 degrees/s and pauses for 1.1 s at intervals of 5 degrees. A single camera image is shot with a continuous exposure time of 20 s.

Fig. 4 Reconstructed single-shot video of moving objects through scattering media. (a)-(d) Frames 1, 4, 7, 10 of the video (see also Visualization 1). (e) Corresponding recovered image without speckle rotation. (f) Object image generated on the SLM. Scale bars: 34.5 μm.


Each frame of the video is directly recovered by the CD of the camera image with the computationally rotated PSF according to Eq. (4), without a time-consuming selection process. Notably, a single measurement of the PSF suffices for the recovery of all frames, which makes the detection process simple and convenient. The edges of the PSF and the camera image are cropped after the PSF intensity pattern is computationally rotated, and the central 1449 × 1449 pixels are retained to avoid the missing edge information caused by rotation. This operation increases the RDA to about 0.19 degrees. In our method, the number of pixels retained for computation is approximately 100 times that of iterative phase-retrieval based methods, while our reconstruction time is only about 1/40 of theirs on the same computer.
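The rotate-and-crop step can be sketched as follows. The nearest-neighbour rotation routine below is a minimal stand-in for a library rotation (e.g. scipy.ndimage.rotate), and the array is scaled down from the paper's 2448 × 2048 frame; the crop ratio mirrors the paper's choice, which is consistent with 1449 ≈ 2048/√2, the largest central square untouched by zero-filled corners for any rotation angle:

```python
import numpy as np

def rotate_image(img, angle_deg):
    """Nearest-neighbour rotation about the array centre (a minimal stand-in
    for a library routine such as scipy.ndimage.rotate); pixels that map from
    outside the frame are set to zero."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    a = np.deg2rad(angle_deg)
    y, x = np.mgrid[0:h, 0:w]
    # Inverse mapping: for each output pixel, find its source coordinate.
    ys = cy + (y - cy) * np.cos(a) - (x - cx) * np.sin(a)
    xs = cx + (y - cy) * np.sin(a) + (x - cx) * np.cos(a)
    yi, xi = np.rint(ys).astype(int), np.rint(xs).astype(int)
    valid = (yi >= 0) & (yi < h) & (xi >= 0) & (xi < w)
    out = np.zeros_like(img)
    out[valid] = img[yi[valid], xi[valid]]
    return out

psf = np.random.default_rng(0).random((512, 512))  # scaled-down toy PSF
rotated = rotate_image(psf, 5.0)                   # corners are now zero-filled

# Keep only the central square that stays valid for any rotation angle:
# side = 512 / sqrt(2) ~ 362 pixels (the paper analogously retains the
# central 1449 x 1449 pixels, and 1449 ~ 2048 / sqrt(2)).
c, half = 512 // 2, 362 // 2
cropped = rotated[c - half:c + half, c - half:c + half]
print(cropped.shape)  # (362, 362)
```

Cropping to the inscribed square discards the invalid corner regions at the cost of a slightly smaller effective sensor area, which matches the paper's observation that the RDA grows slightly when the sensing area shrinks.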

Figures 4(a)-4(d) are frames 1, 4, 7, 10 of the recovered video, which is composed of 11 frames. The video's playback speed is set to about 3.5 times real time. As a comparison, when the camera is not rotated during detection, the recovered object image is blurred (Fig. 4(e)). The displacement between the retrieved objects in adjacent frames is 10.35 μm on the camera image plane. Therefore, with the system conditions given in Section 4.1, the translational absolute velocity of the object is calculated to be 45.2 μm/s, in good agreement with the actual speed of 45.0 μm/s.
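The quoted 45.2 μm/s can be checked with a short calculation, assuming the standard lensless-speckle geometric magnification v/u (v = 80 mm medium-to-camera, u = 472 mm object-to-medium) and the rotation timing of Eq. (5):

```python
# Frame interval from Eq. (5): one 5-degree rotation at 20 deg/s plus the
# 1.1 s mechanical pause.
dt = 5.0 / 20.0 + 1.1                     # 1.35 s between adjacent frames

# Displacement measured on the camera plane, mapped back to the object plane
# by the geometric magnification v/u (v = 80 mm medium-to-camera,
# u = 472 mm object-to-medium -- assumed lensless-speckle relation).
shift_camera_um = 10.35
magnification = 80.0 / 472.0
shift_object_um = shift_camera_um / magnification  # ~61.1 um per frame

velocity = shift_object_um / dt           # um/s
print(round(velocity, 1))                 # 45.2, matching the set 45.0 um/s
```

The small residual discrepancy (45.2 vs 45.0 μm/s) is consistent with single-pixel quantization of the recovered peak positions.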

4.3 Single-shot video of complicated dynamic objects

Firstly, the objects "Op", "ti", "cs", and "Optics" are generated on the SLM in sequence (Fig. 5(f)). The system conditions and the retrieval process are the same as in Section 4.2. The recovered video contains 11 frames, and frames 3, 6, 8, 11 are shown in Figs. 5(a)-5(d). The reconstructed dynamic objects are sharp. Secondly, our method is also effective for objects with various motion trails. To demonstrate, a letter "O" generated on the SLM is cut into four parts that are shifted in four different directions (Fig. 5(l)). Figures 5(g)-5(j) are frames 1, 4, 7, 10 of the 11-frame video. The objects' motion trails in every direction are clearly retrieved at every moment. As a comparison, the corresponding recovered objects are blurred when the camera is motionless during detection (Figs. 5(e) and 5(k)).

Fig. 5 Reconstructed single-shot video of dynamic objects through scattering media. (a)-(d) Frames 3, 6, 8, 11 of the recovered video (see also Visualization 2); (g)-(j) frames 1, 4, 7, 10 of the video (see also Visualization 3). (e) and (k) Corresponding recovered images without speckle rotation. (f) and (l) Object images generated on the SLM. Scale bars: 34.5 μm.


It is worth mentioning that, in our method, an object consisting of six letters in a single frame, such as "Optics" in Fig. 5(d), can be recovered in the same processing time as an object composed of a single letter, such as "O" in Fig. 5(g). Moreover, their imaging qualities are almost the same, with only a slightly lower PSNR in Fig. 5(d). By contrast, in the phase-retrieval based speckle correlation method, the retrieval time of "Optics" is much longer than that of "O", and both take dozens of times longer than ours because of the time-consuming iteration. Besides, "Optics" is recovered with much lower imaging quality compared with "O", and sometimes cannot be recovered correctly at all.

5. Discussion

The frame number limit of a single-shot video is measured by calculating the PSNR of a series of videos with different frame numbers. As shown in Fig. 6, the PSNR decreases as the frame number increases. Since each frame is recovered from one of the uncorrelated speckles, the remaining uncorrelated speckles inherently cause background noise: the larger the frame number, the more uncorrelated speckles remain, and the lower the PSNR. Extrapolating the trend of the curve in Fig. 6, the PSNR decreases to half of its maximum when the frame number increases to around 20 under our experimental conditions.

Fig. 6 PSNRs of the single-shot videos with different frame numbers. Increasing frame number causes enhanced background noise.


Several conditions of the camera rotation process should be explained. Firstly, the minimal pause time is limited by the mechanical pause of the motorized rotation stage rather than by the integration time for imaging; in fact, more than 10 ms is enough for imaging. Thus, the acquisition speed can be improved by replacing the motorized rotation stage with a higher-performance one. Secondly, a frame can be recovered with any interval angle larger than the RDA; however, a larger interval angle, such as above 3 degrees under our experimental conditions, yields a higher PSNR. Thirdly, a faster camera rotation speed is better for imaging, improving both the acquisition speed and the imaging quality.

Speckle motion is used as a decorrelation tool in our experiment, but its impact on imaging still needs to be discussed. Firstly, if the scattering sample in our system rotates naturally, all object frames can still be recovered with the correct lateral distances relative to the point where the PSF was generated. However, the time sequence of the frames is lost, because simultaneous rotations of the scattering sample and the camera give rise to a resultant speckle rotation with unknown speed and direction. In this case, the rotation angle of the speckle is no longer equal to the negative rotation angle of the camera, so the corresponding time of each recovered frame can no longer be determined from Eq. (5). Another consequence is that the recovered object rotation caused by the camera rotation cannot be compensated correctly. Secondly, if the scattering sample translates naturally within the memory effect range, the reconstruction of the video is unaffected.

Notably, there is no restriction on the objects: they can be either dynamic or static. In addition, the method has the potential to extend to three dimensions by shifting the point that generates the PSF in the axial direction.

Some challenges remain, including increasing the frame number limit, enlarging the imaging field of view, and detecting the PSF. In practical applications, the PSF of the system can be obtained by using guide stars embedded in the medium at the object plane, by speckle pattern estimation, or by spatial correlation [23–28].

6. Conclusion

We have shown that a video of moving objects hidden behind scattering media can be derived from a single-shot camera image based on the SRD property. The imaging system is simple: only a spatially incoherent light source and a camera fixed on a motorized rotation stage are required. The camera image is the superposition of the rotational speckles detected at intervals. Object frames of the video are selected and recovered by cross-correlating a computationally rotated PSF with the camera image. This deconvolution process is fast and simple, without any iteration or selection procedures. The near real-time recovery provides sharp image frames with accurate object relative positions, a proper time sequence, exact movement velocity, and continuous motion trails. The presented single-shot video recovery method is fast and robust, and will benefit the real-time observation and quantitative analysis of dynamic objects and their motion trails. This multiplexing technique is expected to increase the information storage density in biomedical and astronomical applications.

Funding

Graduate Innovation Foundation of Jiangsu (KYCX17_0247); National Natural Science Foundation of China (61675095).

References

1. J. W. Goodman, Speckle Phenomena in Optics: Theory and Applications (Roberts and Company Publishers, 2007).

2. I. M. Vellekoop and A. P. Mosk, “Focusing coherent light through opaque strongly scattering media,” Opt. Lett. 32(16), 2309–2311 (2007).

3. I. M. Vellekoop, E. G. van Putten, A. Lagendijk, and A. P. Mosk, “Demixing light paths inside disordered metamaterials,” Opt. Express 16(1), 67–80 (2008).

4. A. P. Mosk, A. Lagendijk, G. Lerosey, and M. Fink, “Controlling waves in space and time for imaging and focusing in complex media,” Nat. Photonics 6(5), 283–292 (2012).

5. O. Katz, E. Small, and Y. Silberberg, “Looking around corners and through thin turbid layers in real time with scattered incoherent light,” Nat. Photonics 6(8), 549–553 (2012).

6. Z. Yaqoob, D. Psaltis, M. S. Feld, and C. Yang, “Optical phase conjugation for turbidity suppression in biological samples,” Nat. Photonics 2(2), 110–115 (2008).

7. C. L. Hsieh, Y. Pu, R. Grange, G. Laporte, and D. Psaltis, “Imaging through turbid layers by scanning the phase conjugated second harmonic radiation from a nanoparticle,” Opt. Express 18(20), 20723–20731 (2010).

8. S. Popoff, G. Lerosey, M. Fink, A. C. Boccara, and S. Gigan, “Image transmission through an opaque material,” Nat. Commun. 1(6), 81 (2010).

9. S. M. Popoff, G. Lerosey, R. Carminati, M. Fink, A. C. Boccara, and S. Gigan, “Measuring the transmission matrix in optics: an approach to the study and control of light propagation in disordered media,” Phys. Rev. Lett. 104(10), 100601 (2010).

10. I. Freund, M. Rosenbluh, and S. Feng, “Memory effects in propagation of optical waves through disordered media,” Phys. Rev. Lett. 61(20), 2328–2331 (1988).

11. J. Bertolotti, E. G. van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491(7423), 232–234 (2012).

12. O. Katz, P. Heidmann, M. Fink, and S. Gigan, “Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations,” Nat. Photonics 8(10), 784–790 (2014).

13. K. T. Takasaki and J. W. Fleischer, “Phase-space measurement for depth-resolved memory-effect imaging,” Opt. Express 22(25), 31426–31433 (2014).

14. X. Yang, Y. Pu, and D. Psaltis, “Imaging blood cells through scattering biological tissue using speckle scanning microscopy,” Opt. Express 22(3), 3405–3413 (2014).

15. A. Porat, E. R. Andresen, H. Rigneault, D. Oron, S. Gigan, and O. Katz, “Widefield lensless imaging through a fiber bundle via speckle correlations,” Opt. Express 24(15), 16835–16855 (2016).

16. Y. Shi, Y. Liu, J. Wang, and T. Wu, “Non-invasive depth-resolved imaging through scattering layers via speckle correlations and parallax,” Appl. Phys. Lett. 110(23), 231101 (2017).

17. M. Hofer, C. Soeller, S. Brasselet, and J. Bertolotti, “Wide field fluorescence epi-microscopy behind a scattering medium enabled by speckle correlations,” Opt. Express 26(8), 9866–9881 (2018).

18. X. Li, A. Stevens, J. A. Greenberg, and M. E. Gehm, “Single-shot memory-effect video,” Sci. Rep. 8(1), 13402 (2018).

19. Q. Chen, H. He, X. Xu, X. Xie, H. Zhuang, J. Ye, and Y. Guan, “Memory effect based filter to improve imaging quality through scattering layers,” IEEE Photonics J. 10(5), 1–10 (2018).

20. C. Guo, J. Liu, W. Li, T. Wu, L. Zhu, J. Wang, G. Wang, and X. Shao, “Imaging through scattering layers exceeding memory effect range by exploiting prior information,” Opt. Commun. 434, 203–208 (2019).

21. O. Salhov, G. Weinberg, and O. Katz, “Depth-resolved speckle-correlations imaging through scattering layers via coherence gating,” Opt. Lett. 43(22), 5528–5531 (2018).

22. B. Das, N. S. Bisht, R. V. Vinu, and R. K. Singh, “Lensless complex amplitude image retrieval through a visually opaque scattering medium,” Appl. Opt. 56(16), 4591–4597 (2017).

23. E. Edrei and G. Scarcelli, “Memory-effect based deconvolution microscopy for super-resolution imaging through scattering media,” Sci. Rep. 6(1), 33558 (2016).

24. H. Zhuang, H. He, X. Xie, and J. Zhou, “High speed color imaging through scattering media with a large field of view,” Sci. Rep. 6(1), 32696 (2016).

25. Z. Wang, X. Jin, and Q. Dai, “Non-invasive imaging through strongly scattering media based on speckle pattern estimation and deconvolution,” Sci. Rep. 8(1), 9088 (2018).

26. X. Xu, X. Xie, H. He, H. Zhuang, J. Zhou, A. Thendiyammal, and A. P. Mosk, “Imaging objects through scattering layers and around corners by retrieval of the scattered point spread function,” Opt. Express 25(26), 32829–32840 (2017).

27. L. Li, Q. Li, S. Sun, H. Z. Lin, W. T. Liu, and P. X. Chen, “Imaging through scattering layers exceeding memory effect range with spatial-correlation-achieved point-spread-function,” Opt. Lett. 43(8), 1670–1673 (2018).

28. S. K. Sahoo, D. Tang, and C. Dang, “Single-shot multispectral imaging with a monochromatic camera,” Optica 4(10), 1209–1213 (2017).

29. E. H. Zhou, H. Ruan, C. Yang, and B. Judkewitz, “Focusing on moving targets through scattering samples,” Optica 1(4), 227–232 (2014).

30. M. Cua, E. H. Zhou, and C. Yang, “Imaging moving targets through scattering media,” Opt. Express 25(4), 3935–3945 (2017).

31. C. Guo, J. Liu, T. Wu, L. Zhu, and X. Shao, “Tracking moving targets behind a scattering medium via speckle correlation,” Appl. Opt. 57(4), 905–913 (2018).

32. M. I. Akhlaghi and A. Dogariu, “Tracking hidden objects using stochastic probing,” Optica 4(4), 447–453 (2017).

33. S. Sudarsanam, J. Mathew, S. Panigrahi, J. Fade, M. Alouini, and H. Ramachandran, “Real-time imaging through strongly scattering media: seeing through turbid media, instantly,” Sci. Rep. 6(1), 25033 (2016).

34. E. Edrei and G. Scarcelli, “Optical imaging through dynamic turbid media using the Fourier-domain shower-curtain effect,” Optica 3(1), 71–74 (2016).

Supplementary Material (3)

Visualization 1: Reconstructed single-shot video of Fig. 4(a)-4(d).
Visualization 2: Reconstructed single-shot video of Fig. 5(a)-5(d).
Visualization 3: Reconstructed single-shot video of Fig. 5(g)-5(j).


Figures (6)

Fig. 1. Intensity cross-correlation between the reference speckle and (a) the rotational speckles produced by rotating the scattering sample (red solid line) or the camera (blue dotted line) around the principal optic axis, and (b) the shifted speckles produced by rotating the scattering sample around the vertical axis. The insets in (a) and (b) show the experimental arrangements.
Fig. 2. Schematic and conceptual description of SRD-based single-shot video of hidden objects. (a) Spatially incoherent light from a moving (or static) object passes through a scattering sample, and the scattered speckles are detected by a rotating camera. The relative rotation between the speckle and the camera forms the speckle rotation. Owing to the SRD property, the camera image becomes a superposition of the dynamic speckles at various moments. (b) Image recovery via CD. Computationally rotated PSFs select and recover matched frames of the video by cross-correlation with the camera image. The corresponding time of each frame is encoded in the rotation angle of its PSF. The motion trail of the moving object is reconstructed with accurate relative positions.
Fig. 3. (a) CD of rotational speckles with the PSF at 0°. Object information is recovered only at 0° and is completely blurred at unmatched rotation angles. (b) CD of the PSF at 0° with rotational speckles detected at 0° (Speckle1), 30° (Speckle2), and their superposition (Speckle1 + 2). (c) CD of the PSF computationally rotated to 30° with the same speckles as in (b). Object images are recovered only with the angle-matched PSF. Top-right insets in (c): the rotated objects caused by camera rotation. Scale bars: 34.5 μm.
Fig. 4. Reconstructed single-shot video of moving objects through scattering media; (a)-(d) are frames 1, 4, 7, and 10 of the video (see also Visualization 1). (e) Corresponding recovered image without speckle rotation. (f) Object image generated on the SLM. Scale bars: 34.5 μm.
Fig. 5. Reconstructed single-shot videos of dynamic objects through scattering media; (a)-(d) are frames 3, 6, 8, and 11 of the recovered video (see also Visualization 2), and (g)-(j) are frames 1, 4, 7, and 10 of the video (see also Visualization 3). (e) and (k) are the corresponding recovered images without speckle rotation. (f) and (l) are the object images generated on the SLM. Scale bars: 34.5 μm.
Fig. 6. PSNRs of the single-shot videos with different frame numbers. Increasing the frame number raises the background noise.
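The rotation-decorrelation behavior measured in Fig. 1 can be reproduced numerically. The sketch below is an illustrative toy model, not the paper's measurement: a synthetic speckle (white noise smoothed to a few-pixel grain) stands in for the detected reference speckle, and the zero-shift Pearson coefficient is evaluated against computationally rotated copies. The array size, grain size, and angle set are assumptions chosen for illustration.

```python
import numpy as np
from scipy.ndimage import rotate
from scipy.signal import fftconvolve

def pearson(a, b):
    """Zero-shift normalized intensity cross-correlation (Pearson coefficient)."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

# Synthetic speckle: white noise smoothed to a ~3-pixel grain size
# (stands in for the measured reference speckle).
rng = np.random.default_rng(0)
speckle = fftconvolve(rng.random((256, 256)), np.ones((3, 3)) / 9, mode="same")

# Correlation of the reference speckle with its computationally rotated copies:
# it drops quickly with rotation angle, which is the SRD property.
angles = [0, 2, 5, 10, 20]
curve = [pearson(speckle, rotate(speckle, a, reshape=False, order=1))
         for a in angles]
```

In this synthetic model the decorrelation angle is set by the speckle grain size; for measured speckles it must be calibrated experimentally, as in Fig. 1.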

Equations (5)

$$I_{t_1}(r) = S_{\theta_1}(r) \ast O_{t_1}(r),$$

$$I(r) = \sum_i I_{t_i}(r) = \sum_i S_{\theta_i}(r) \ast O_{t_i}(r),$$

$$S_{\theta_i} \star S_{\theta_j} =
\begin{cases}
\delta_{\theta_{ij}}, & i = j \\
0, & i \neq j
\end{cases},$$

$$\sum_j S_{\theta_j}(r - r_0) \star I(r)
= \sum_j S_{\theta_j}(r - r_0) \star \left[ \sum_i S_{\theta_i}(r) \ast O_{t_i}(r) \right]
= \sum_j \left[ \delta_{\theta_j}(r + r_0) \ast O_{t_j}(r) \right] + C
= \sum_j O_{t_j}(r + r_0) + C,$$

$$t_j = \theta_j / \omega + j \times t_{\mathrm{pause}},$$