Optica Publishing Group

Noninvasive imaging of two isolated objects through a thin scattering medium beyond the 3D optical memory effect

Open Access

Abstract

A speckle image formed by scattered light can be decoded by recently developed techniques that exploit the optical memory effect (OME), thereby enabling the observation of a hidden object behind a thin scattering medium. However, the range of the three-dimensional (3D) OME is typically small; therefore, both the field of view and the depth of field are limited. We propose a method that can significantly and simultaneously improve both values in a specific scenario in which one object moves around another, position-fixed object. The effectiveness of the proposed scheme is demonstrated through a set of experiments.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Scattering presents an obstacle to optical imaging; thus, the realization of imaging through scattering media warrants long-term, in-depth study. Recently, many innovative approaches have been presented, such as optical phase conjugation [1–4], the wavefront shaping technique (WST) [5–10], transmission matrix measurements [11–14], the speckle autocorrelation technique (SAT) [15–20], and point-spread-function (PSF)-based deconvolution [21–24]. These imaging techniques can be broadly classified as invasive or noninvasive, depending on whether the object space or the scattering medium must be accessed. The WST and SAT are widely investigated and considered two of the most promising techniques [7–10], despite being limited by the optical memory effect (OME). Furthermore, if an application scenario can be simplified to imaging “through” rather than “in” a thin scattering medium, the SAT compares favorably with the WST because it requires significantly less time and is noninvasive, which makes it applicable to many circumstances. The pioneering work was performed by Bertolotti et al., who experimentally proved that an object completely hidden behind an opaque scattering medium could be noninvasively reconstructed from its speckle autocorrelation with an iterative Fienup-type algorithm [15]. However, this method is time consuming because a set of speckles must be sequentially captured under different illumination angles. In 2014, Katz et al. proposed a “single-shot” noninvasive imaging method to see through a thin scattering medium by introducing a spatially incoherent laser source [16]. However, this technique is also restricted by the abovementioned limited imaging range governed by the OME in three dimensions, which has been thoroughly investigated in previous studies [25–27].
A thin scattering medium can be regarded as a special “scattering lens,” owing to the correlation property of the speckle field, which enables light coming from a point located in a three-dimensional (3D) region to “focus” on the camera plane. That is, there exists a small 3D area, namely the range of the 3D OME, within which a hidden object can be revealed. In brief, we introduce the concepts of the field of view (FOV) and depth of field (DOF) to represent the lateral and axial extent of the 3D OME, respectively.

To extend the FOV or DOF of an imaging system with an ideal thin scattering medium, in 2018, Tang et al. imaged a single object that exceeded the lateral OME range by stitching together the PSFs of every independent spatial position in the object plane [24]. This method is straightforward; however, the object plane must be invaded several times to obtain all PSFs in advance. Alternatively, Kadobianskyi et al. increased the FOV up to four times using a time-gated technique; however, it relies heavily on the first-arriving “snake” photons, which are few in number and difficult to obtain with normal methods [28]. In 2019, Guo et al. proposed a scheme that expanded the FOV by invasively exploiting extra information in advance, such as the shape of a reference object [29]. In the same year, Wang et al. proposed a single-shot scheme that exceeded the OME range via Fourier spectrum guessing and iterative energy-constrained compensation [17]; however, this method is time-consuming. In addition to the FOV-extending strategies, depth-resolving capability has also been widely investigated. In 2012, Katz et al. proved the existence of the axial OME range of a thin scattering medium and its maximal value [6]. Around the same time, Takasaki et al. imaged a hidden object located at different depths by introducing the windowed Fourier transform [30]. In 2018 and 2019, Xie et al. and Liao et al. separately demonstrated that the traditional DOF of a scattering-lens-based imaging system could be largely extended by manipulating a given PSF [31,32] or by implanting several PSFs in advance [23]. In 2020, Li et al. experimentally demonstrated that an independent component analysis method could restore multiple objects beyond the 3D OME using multiple exposures with wavefront modulation [20]. Notably, almost all reported methods are either invasive or time-consuming.

In this study, we developed a method to recover two isolated objects that are distributed beyond the 3D OME range using a designed superimposing-and-averaging (SA) algorithm, which requires multiple speckle patterns. Superimposing and averaging are, of course, not novel operations: in the field of image processing, several studies have already used multi-frame image superposition and averaging to effectively suppress environmental or interference noise [15,33,34]. In comparison, our method employs this technique to eliminate or weaken the speckle component caused by one of the objects while magnifying that of the other, so that the two objects can be reconstructed separately. Our method, however, applies to a special scenario that involves a “stationary object” O1 and a “moving object” O2; both objects are small enough to fit within the 3D OME range, but their separation exceeds it. Therefore, to some extent, we can say that our scheme simultaneously extends the FOV and DOF of a system with a scattering medium. As object O2 moves, a continuous acquisition of the speckle images Ii is performed. Because the isolated speckle components of objects O1 and O2, buried in each captured speckle image, are uncorrelated with each other, an SA algorithm is applied to extract them from the collected speckles. Then, a phase-retrieval algorithm (PRA) is executed to reconstruct the images of objects O1 and O2 [35,36]. In the following sections, we first describe the basic principle using mathematical expressions; then, experimental results are provided to demonstrate the effectiveness of our scheme, and a discussion and conclusion are presented in the final section.

2. Principle

We first provide a brief introduction to the well-known OME and SAT. The OME indicates that when the angle of the incident beam varies within a particular range, the output light fields scattered by the opaque medium are highly correlated, and they shift in accordance with the change in the incident angle [14]. That is, the scattering system can be regarded as a linear shift-invariant system, and its PSF is a speckle pattern determined by the properties of this system. It should be emphasized that the PSF is shift-invariant only within the OME range. The 3D OME range can be described along two orthogonal directions and is typically given by $\Delta X = {{u\lambda }/{\pi L}}$ and $\Delta Z = {{2\lambda u^2}/{\pi DL}}$, where $\Delta X$ and $\Delta Z$ denote the OME range in the lateral and axial directions, respectively, u is the distance from the object to the scattering medium, $\lambda$ is the wavelength of the light source, L is the transport mean free path of the scattering medium, and D is the effective “entrance pupil” diameter, which is dictated by the illuminated area on the scattering medium [6,15]. For convenience, but without loss of accuracy, we define the 3D OME volume as $\Delta X \times \Delta X \times \Delta Z$, which is related to the FOV and DOF. Consequently, in a scattering system under incoherent illumination, when an object is located within the 3D OME range, the output intensity after the scattering medium takes a very simple form: $I = O \ast S$, where I is the speckle image detected by the camera sensor, “${\ast}$” denotes the convolution operator, and O and S are the object and PSF, respectively. Moreover, owing to the random character of the PSF, its autocorrelation is approximated as a delta function [37]. It is then easy to conclude that if the size of a flat object is smaller than the OME range, the autocorrelation of the speckle image I is approximately equal to the autocorrelation of the object itself:

$$I \otimes I = (O \ast S) \otimes (O \ast S) = (O \otimes O) \ast (S \otimes S) \approx O \otimes O + C$$
where “${\otimes} $” denotes the autocorrelation operator, and C represents a noise term. According to the Wiener-Khinchin theorem, the Fourier spectrum intensity (FSI) of the hidden object can be obtained:
$${|{\textrm{F}\{ O\} } |^2} = \textrm{F}\{ O \otimes O\} \approx \textrm{F}\{ I \otimes I\},$$
where F{.} denotes the Fourier transform operator. Then, a Fienup-type iterative phase-retrieval algorithm is employed to reconstruct the small hidden object with the FSI as the constraint in the Fourier domain [15]. The above technique is unable to reconstruct an object that is larger than the maximum size of the OME range in lateral directions. However, as described in the Introduction, several valuable techniques have been proposed to alternatively enlarge the FOV or DOF, but not the size of the hidden object itself.
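The relation in Eqs. (1)–(2) is easy to reproduce numerically: by the Wiener-Khinchin theorem, the autocorrelation is the inverse Fourier transform of the Fourier spectrum intensity. The following is a minimal sketch in Python; the toy object and random PSF are illustrative assumptions, not the experimental data.

```python
import numpy as np

def autocorrelation(img):
    """Autocorrelation via the Wiener-Khinchin theorem:
    I (x) I = F^{-1}{ |F{I}|^2 }, computed with FFTs."""
    img = img - img.mean()                  # suppress the DC background term
    spectrum = np.abs(np.fft.fft2(img))**2  # Fourier spectrum intensity |F{I}|^2
    ac = np.fft.ifft2(spectrum).real
    return np.fft.fftshift(ac)              # centre the zero-shift peak

# For an object within the OME range, I = O * S (circular convolution here),
# so the speckle autocorrelation approximates the object autocorrelation:
rng = np.random.default_rng(0)
obj = np.zeros((64, 64))
obj[30:34, 28:36] = 1.0                      # toy hidden object
psf = rng.random((64, 64))                   # random speckle-like PSF
speckle = np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(psf)).real
ac = autocorrelation(speckle)                # peaked at zero shift
```

The zero-shift peak of `ac` sits at the array centre after `fftshift`, mirroring the delta-like PSF autocorrelation assumed in Eq. (1).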

Here, we suggest another specific but reasonable application scenario, as shown in Fig. 1. Two flat objects are hidden behind a scattering layer: one “stationary object” O1 and one “moving object” O2 that randomly moves beside O1. Both are smaller than the maximal size of the OME range; however, the instantaneous distance between them is always larger than the maximal size of the lateral or axial OME range. In such a situation, the typically used WST, SAT, and PSF-based techniques become invalid because the two objects as a whole exceed the 3D OME range of the scattering imaging system. By pre-measuring two independent PSFs, we could directly and separately reconstruct the two hidden objects; however, this is an invasive procedure that is not permissible in many situations. Here, we provide a theoretical analysis to show that it is possible to simultaneously reconstruct both widely spaced objects without invading the object area.
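To give a sense of the scales involved in such a scenario, the OME-range formulas quoted above can be evaluated numerically. A minimal sketch (the parameter values are illustrative assumptions, not the experimental ones):

```python
import numpy as np

def ome_range(u, wavelength, L, D):
    """Estimate the lateral and axial OME ranges of a thin scattering medium.

    u: object-to-medium distance, wavelength: source wavelength,
    L: transport mean free path, D: effective entrance-pupil diameter
    (all in metres). Returns (delta_x, delta_z) in metres.
    """
    delta_x = u * wavelength / (np.pi * L)             # lateral range
    delta_z = 2 * wavelength * u**2 / (np.pi * D * L)  # axial range
    return delta_x, delta_z

# Illustrative values only (u = 34.5 cm as in the experiments; L and D
# are assumed, not measured):
dx, dz = ome_range(u=0.345, wavelength=532e-9, L=50e-6, D=5e-3)
```

With these assumed parameters the lateral range is on the millimetre scale, consistent with the ~1.3 mm value measured later in the paper; the axial range is far more sensitive to L and D.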


Fig. 1. Schematic diagram of the experiments. RD denotes the rotating diffuser, and L denotes the collimating lens. The “stationary object” O1 and “moving object” O2 are objects placed in different positions. The “moving object” O2 freely moves among different object planes, as shown on OP1, OP2 and OP3. The red and blue dashed circles represent the traditional lateral and axial OME range.


The detected intensity image corresponding to the two non-adjacent objects can be mathematically described as:

$$I = {I_{{O_\textrm{1}}}} + {I_{{O_\textrm{2}}}} = {O_1} \ast {S_1} + {O_2} \ast {S_2},$$
where S1 and S2 are the PSFs corresponding to the two 3D OME ranges, ${I_{{O_\textrm{1}}}}$ and ${I_{{O_\textrm{2}}}}$ are the speckle patterns formed by objects O1 and O2, respectively, and their summation yields I. In accordance with our assumption, the two PSFs, S1 and S2, would be extremely different, thereby yielding ${S_1} \otimes {S_2} \approx 0$ because the two objects are distributed in different 3D OME ranges. Therefore, the autocorrelation of speckle image I (Eq. (3)) can be written and further simplified as:
$$\begin{aligned} I \otimes I &= ({O_1} \otimes {O_1}) \ast ({S_1} \otimes {S_1}) + ({O_2} \otimes {O_2}) \ast ({S_2} \otimes {S_2}) + 2 \times ({O_1} \otimes {O_2}) \ast ({S_1} \otimes {S_2})\\ &\approx {O_1} \otimes {O_1} + {O_2} \otimes {O_2} + {C_1}, \end{aligned}$$
where C1 is the noise term generated by the cross-correlation. It can be observed that the autocorrelation of a detected speckle image is approximately the sum of the individual autocorrelations of the two objects. Thus, the problem reduces to obtaining the individual autocorrelation of each object from the detected speckle image I, in which the two individual speckles are completely mixed. To achieve this, we designed an SA algorithm, which relies on the capture of multi-frame speckle images.

A schematic of the experimental setup with spatially incoherent illumination is shown in Fig. 1. Two objects, O1 (character ‘S’) and O2 (character ‘Z’), which are located at different positions in the object space behind a scattering medium, are represented by three adjacent object planes. Using a continuous acquisition operation, a series of speckle images is collected while object O1 remains still and object O2 freely moves among the three object planes; in practice, this can be extended to many other positions.

As shown in Fig. 1, each of the two widely spaced objects (O1, O2) falls within a 3D OME region, whereas their separation exceeds it. While O2 slowly moves “around” the fixed O1, light passes through the objects and the scattering medium to a camera, which captures a set of speckles:

$${I_i} = {I_{{O_1}}} + I_{{O_2}}^i = {O_1} \ast {S_1} + {O_2} \ast S_2^i,$$
where Ii denotes the ith captured speckle image, and $S_2^i$ is the PSF corresponding to the moving object O2 at the ith position. Because object O1 remains stationary during the continuous sampling process, all the captured speckle images Ii contain a fixed component ${I_{{O_1}}}$, whereas all the speckle components $I_{{O_2}}^i$ in Ii are randomly varied, but uncorrelated with each other. After sampling several times (n times), the proposed SA algorithm is employed. It requires the superimposing and subsequent averaging of all the captured speckle patterns (I1, …, In). As Eq. (6) shows, the intensity of the fixed component ${I_{{O_1}}}$ does not change, whereas the accumulated sum of all the components $I_{{O_2}}^i$ (i=1,…, n) is divided by n, which leads to a negligible noise term that follows the normal distribution [35]. The case where n is sufficiently large will be discussed later:
$${I_{ave}} = \frac{1}{n}\sum\limits_{i = 1}^n {{I_i}} = {O_1} \ast {S_1} + \frac{1}{n}\sum\limits_{i = 1}^n {{O_2} \ast S_2^i} \approx {O_1} \ast {S_1} + {C_2}.$$

Any frame of a speckle image Ii collected by the camera can be regarded as a linear superposition of the speckle components (${I_{{O_1}}}$, $I_{{O_2}}^i$) produced by objects O1 and O2, respectively (as shown in Eq. (3)). Furthermore, because object O1 is static, subtracting Iave from any speckle image reveals information about O2:

$$I_{dif}^i = {I_i} - {I_{ave}} = {O_2} \ast S_2^i - \frac{1}{n}\sum\limits_{i = 1}^n {{O_2} \ast S_2^i} \approx {O_2} \ast S_2^i - {C_2},$$
where $- {C_2}$ is a negligible noise term, with C2 as defined in Eq. (6).

Thus far, we have separately obtained the isolated speckle components generated by the two objects; in line with Eq. (7), the relationships between Iave and O1, and between $I_{dif}^i$ and O2, are as follows:

$$\begin{array}{l} {I_{ave}} \otimes {I_{ave}} \approx {O_1} \otimes {O_1}\\ I_{dif}^i \otimes I_{dif}^i \approx {O_2} \otimes {O_2} \end{array}.$$

By feeding the autocorrelation terms to the SAT [15], the recovery of the two hidden objects may be obtained, thereby achieving the purpose of imaging beyond the 3D OME in a non-invasive manner.
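The SA decomposition of Eqs. (6)–(8) reduces to averaging the frame stack and subtracting the average from each frame. A minimal sketch with synthetic speckle components (assumed for illustration, not the experimental data):

```python
import numpy as np

def sa_decompose(speckles):
    """Superimposing-and-averaging (SA) decomposition (Eqs. (6)-(7)).

    speckles: stack of n captured frames I_i = I_O1 + I_O2^i.
    Returns (I_ave, I_dif): the averaged frame, which retains the
    stationary object's speckle component (Eq. (6)), and the per-frame
    differences, each dominated by the moving object's component (Eq. (7)).
    """
    stack = np.asarray(speckles, dtype=float)
    i_ave = stack.mean(axis=0)   # ~ O1 * S1 + C2
    i_dif = stack - i_ave        # i-th slice ~ O2 * S2^i - C2
    return i_ave, i_dif

# Toy demonstration: a fixed component plus n mutually uncorrelated ones,
# standing in for I_O1 and the I_O2^i of the moving object.
rng = np.random.default_rng(1)
fixed = rng.random((32, 32))
frames = [fixed + rng.random((32, 32)) for _ in range(20)]
i_ave, i_dif = sa_decompose(frames)
```

As n grows, `i_ave` converges to the fixed component plus a flat background, which is why the autocorrelations in Eq. (8) isolate each object.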

3. Experiment

An experimental setup was constructed to verify the effectiveness of the proposed scheme, as shown in Fig. 2(a). A laser beam ($\lambda = 532\textrm{ nm}$) is expanded and successively passed through a rotating ground glass and a collimating lens, thereby serving as a spatially incoherent light source. To avoid occlusion between the two movable object planes, the incoming light is split by a beam splitter B1, passed separately through the object planes OP1 and OP2, and recombined using another splitter B2. The transparent characters “S” and “Z” (photolithography masks), which have a height of 1 mm, serve as O1 and O2 and are placed at target planes OP1 and OP2, respectively. Initially, we set the distances of objects O1 and O2 from the scattering medium to u1 = u2 = 34.5 cm. A high-resolution complementary metal-oxide-semiconductor (CMOS) camera (PCO edge 5.5, 2160 × 2160 px with a pixel size of 6.5 × 6.5 μm, PCO-Tech, Romulus, MI, USA), positioned at a distance of v = 12.5 cm from the scattering medium, is used to detect the speckle patterns. The magnification of our imaging setup is M = v/u = 0.37.


Fig. 2. Experimental setup and 3D OME range of the scattering medium. (a) Experimental setup. The system consists of a laser, rotating diffuser (RD), collimating lens (L), beam splitters (B1 and B2), mirrors (M1 and M2), and object planes (OP1 and OP2). Light from the source is divided and separately incident on the transparent characters “S” and “Z.” The light then converges into one beam that passes through the scattering medium. u1 and u2 are the distances from the object planes OP1 and OP2 to the scattering medium. v is the distance from the scattering medium to the camera. (b) Lateral OME range of the scattering medium in our experiments. The red dots represent the correlation between the PSF measured at the center of the pinhole and those at laterally shifted positions, and the blue curve is obtained using a Gaussian fit. (c) Axial OME range of the scattering medium in our experiments, obtained using a similar method to (b).


In our experiments, the sizes of the two characters “S” and “Z” are both set to 1 mm, which is almost half of the lateral OME range (1.3 mm), to simulate the special scenario. The lateral OME range is determined by the half-width at half-maximum (HWHM) of the fitted correlation curve shown in Fig. 2(b), which is obtained by calculating the ratio of the peak value of the cross-correlation between the centered PSF and each PSF laterally shifted in steps of 0.2 mm in the object plane to the peak value of the centered PSF’s autocorrelation. Similarly, the axial OME range is confirmed to be approximately 12 mm, as shown in Fig. 2(c). Therefore, in principle, the exposure position for the “moving” character “Z” is arbitrary, but each movement between two exposures should exceed 2.5 mm in the lateral direction or 12 mm in the axial direction. We name this rule the condition of exposure.
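The peak-ratio calculation used to trace the correlation curve in Fig. 2(b) can be sketched as follows; the synthetic PSFs are assumptions for a sanity check, not the measured data.

```python
import numpy as np

def correlation_ratio(psf_ref, psf_shifted):
    """Peak cross-correlation between a reference PSF and a shifted one,
    normalised by the peak of the reference PSF's autocorrelation.
    The OME range is read off as the shift at which this ratio falls to
    0.5 (the HWHM of the fitted curve), scanned in 0.2 mm steps."""
    def peak(a, b):
        fa = np.fft.fft2(a - a.mean())
        fb = np.fft.fft2(b - b.mean())
        return np.fft.ifft2(fa * np.conj(fb)).real.max()
    return peak(psf_ref, psf_shifted) / peak(psf_ref, psf_ref)

# Sanity check with synthetic speckle-like PSFs: the ratio is 1 for
# identical PSFs and near zero for uncorrelated ones.
rng = np.random.default_rng(3)
psf_a = rng.random((64, 64))
psf_b = rng.random((64, 64))
r_same = correlation_ratio(psf_a, psf_a)
r_diff = correlation_ratio(psf_a, psf_b)
```

Repeating this for PSFs recorded at increasing lateral (or axial) offsets yields the red points of Fig. 2(b) (or 2(c)), to which the Gaussian curve is fitted.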

In accordance with the condition of exposure, we initially collected 13 frames of speckle images by setting 13 positions for character “Z” in 3D space through adjustment of OP2, such that “Z” evenly surrounded character “S” in OP1. The exposure time of the camera is set to t = 200 ms. Figure 3 presents the SA-based experimental results. Figure 3(a) shows the ground truth, and Fig. 3(b) presents the 13 collected speckle images in a simplified manner. The autocorrelation of any single frame and the corresponding reconstruction using the SAT are shown in Fig. 3(c); they do not provide enough information about the shapes of the two hidden objects. By employing our SA algorithm, we obtain Iave (Fig. 3(d), left part), which is subtracted from a speckle image to further obtain $I_{dif}^i$ (Fig. 3(d), right part). The autocorrelations of the objects (Fig. 3(e)) are then given directly by calculating the autocorrelations of Iave and $I_{dif}^i$. Finally, the typically used SAT is applied to recover the two hidden small objects with high quality (Fig. 3(f)). Note that in this experiment, we could recover the “moving” object (character “Z”) at each position (or exposure) by calculating the corresponding $I_{dif}^i$ according to Eq. (7). This is discussed and demonstrated subsequently.


Fig. 3. Experimental results of superposition imaging of multi-frame speckle images. (a) The ground truth of objects O1 (character “S”) and O2 (character “Z”). The left side is the stationary object O1; the right side is the “moving” object O2. (b) Raw camera image Ii, of which 13 frames are collected. (c) Autocorrelation of an arbitrarily selected single frame and the corresponding reconstructed result using the SAT. (d) Computed speckle components Iave and $I_{dif}^i$ using our proposed SA algorithm. (e) The autocorrelations of Iave and $I_{dif}^i$. (f) The corresponding reconstructed results using SAT. Scale bars: 200 pixels in (b) and (e) and 50 pixels in (a), (c), (d), (f), and (g).


Notably, our proposed method remains effective if the two objects are far apart in the axial or lateral direction. This is an interesting point that differs from previous studies [22,23]. In fact, this case follows directly from the aforementioned analysis; nevertheless, we provide further experimental results. Figure 4(a) depicts a simplified experimental diagram of two objects at axial positions far from each other. We change the distance u2 from 43.5 to 25.5 cm in steps of 3 cm, while u1 is fixed at 34.5 cm. This guarantees that each movement interval exceeds the axial OME range. With traditional imaging methods, we can only observe one object at a time, while the other object becomes blurred or even invisible (e.g., Fig. 4(b)). However, with our proposed method (here, n = 7), both objects can be recovered with high fidelity (Fig. 4(e)). Note that each collected speckle image Ii contains the same information about the stationary object O1 and different information about the moving object O2, corresponding to the various axial positions. In accordance with Eqs. (6)–(7), we can easily calculate the terms Iave and $I_{dif}^i$ (i = 1,…, 7), as shown in Fig. 4(d), which approximate the isolated speckle components of O1 and O2. The final reconstructions, with good fidelity (Fig. 4(e)), are obtained in a similar manner as above. We can also observe that the magnification of the reconstructed character “Z” differs from position to position, which is caused by the varying object distance u2.


Fig. 4. Imaging of two objects O1 and O2 located far from each other in the axial direction. (a) Schematic of the experimental setup; u1 is 34.5 cm, u2 varies from 43.5 to 25.5 cm, and each movement step is 3 cm. (b) Images obtained by a conventional imaging system. The yellow dashed circle indicates the position of the defocused object. (c) Captured speckle images. (d) Computed speckle components Iave (left) and $I_{dif}^i$ (right). (e) Reconstructed objects. Scale bars: 50 pixels in (b) and (e), 200 pixels in (c) and (d).


To further evaluate the effect of the value of n on the quality of the reconstructed results, we feed different numbers of speckle images, from n = 3 to 13, into our SA algorithm, followed by SAT-based reconstruction. Several typical results are shown in Fig. 5(a), where we observe that most object details are recovered when the correlation value is greater than 0.8, which occurs for $n \ge 7$. This implies that the noise terms C2 (Eq. (6)) and $-{C_2}$ (Eq. (7)) can be ignored if n is sufficiently large. This is reasonable because the number n, to some extent, determines the total energy allocation between Iave and $I_{dif}^i$, which approximate the isolated speckle components of O1 and O2. We can also observe from Eq. (6) that as n increases, the averaged speckle component generated by the moving object O2 is smoothed and buried in the background noise, while that of O1 remains constant.
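The correlation value used as the quality metric in Fig. 5(b) is a Pearson correlation coefficient between the reconstruction and the ground truth; a minimal sketch (the noisy toy reconstruction is an assumption for illustration):

```python
import numpy as np

def reconstruction_fidelity(recon, truth):
    """Pearson correlation coefficient between a reconstructed image and
    the ground truth, used to assess quality as n varies."""
    return np.corrcoef(recon.ravel(), truth.ravel())[0, 1]

# Assumed toy check: a mildly noisy copy of the truth scores high,
# mirroring the >0.8 threshold reached for n >= 7 in the experiments.
rng = np.random.default_rng(4)
truth = rng.random((32, 32))
noisy = truth + 0.1 * rng.standard_normal((32, 32))
score = reconstruction_fidelity(noisy, truth)
```

A perfect reconstruction scores 1.0; the 0.8 threshold in Fig. 5(b) marks the point where most object details are visually recovered.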


Fig. 5. Relationship between n and the corresponding recovery results. (a) The reconstructed results with different values of n, which is the number of collected speckle images. (b) Correlation coefficient curve between the reconstructed results and ground truth. Scale bar: 50 pixels.


4. Conclusion

Using the basic framework of the well-known speckle-autocorrelation-based strategy, we have demonstrated that our SA algorithm can be implemented in a special but reasonable scenario, in which a small object hidden behind a thin scattering medium remains fixed while another object moves. Interestingly, the distance between the two objects is beyond the 3D OME range in both the lateral and axial directions during each exposure. This implies that our methodology can significantly and simultaneously expand both the FOV and DOF of an imaging system containing a scattering medium, although it is only valid for this special situation. It is worth noting that our method does not expand the 3D OME range itself, which is an inherent feature determined by the scattering medium; it only enlarges the FOV and DOF. It is also noted that although the two objects can be separately recovered, their spatial positions cannot be determined. In addition, the reconstruction quality will be degraded if the weight difference between the two speckle components is significant; for example, a large difference in size or axial distance between the two objects will cause insufficient extraction of the speckle components, which results in undesirable convergence of the iterative algorithm. However, a significant advantage of our proposed method is that it does not require any invasive operation, which is typically impractical in many scenarios. Moreover, the degree to which the FOV can be exceeded increases as long as the detector can fully collect the scattered light from the objects.

Funding

National Natural Science Foundation of China (61805152, 61875129, 62061136005); Sino-German Center for Research Promotion (GZ 1391, M-0044); Natural Science Foundation of Guangdong Province (2021A1515011801).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. E. N. Leith and J. Upatnieks, “Holographic Imagery through Diffusing Media,” J. Opt. Soc. Am. 56(4), 523 (1966). [CrossRef]  

2. J. W. Goodman, W. H. Huntley, D. W. Jackson, and M. Lehmann, “Wavefront-Reconstruction Imaging through Random Media,” Appl. Phys. Lett. 8(12), 311–313 (1966). [CrossRef]  

3. Z. Yaqoob, D. Psaltis, M. S. Feld, and C. Yang, “Optical Phase Conjugation for Turbidity Suppression in Biological Samples,” Nat. Photonics 2(2), 110–115 (2008). [CrossRef]  

4. M. Cui and C. Yang, “Implementation of a digital optical phase conjugation system and its application to study the robustness of turbidity suppression by phase conjugation,” Opt. Express 18(4), 3444–3455 (2010). [CrossRef]  

5. I. M. Vellekoop and A. P. Mosk, “Focusing coherent light through opaque strongly scattering media,” Opt. Lett. 32(16), 2309–2311 (2007). [CrossRef]  

6. O. Katz, E. Small, and Y. Silberberg, “Looking around corners and through thin turbid layers in real time with scattered incoherent light,” Nat. Photonics 6(8), 549–553 (2012). [CrossRef]  

7. A. P. Mosk, A. Lagendijk, G. Lerosey, and M. Fink, “Controlling waves in space and time for imaging and focusing in complex media,” Nat. Photonics 6(5), 283–292 (2012). [CrossRef]  

8. R. Horstmeyer, H. Ruan, and C. Yang, “Guidestar-assisted wavefront-shaping methods for focusing light into biological tissue,” Nat. Photonics 9(9), 563–571 (2015). [CrossRef]  

9. O. Salhov, G. Weinberg, and O. Katz, “Depth-resolved speckle-correlations imaging through scattering layers via coherence gating,” Opt. Lett. 43(22), 5528–5531 (2018). [CrossRef]  

10. S. Rotter and S. Gigan, “Light fields in complex media: Mesoscopic scattering meets wave control,” Rev. Mod. Phys. 89(1), 015005 (2017). [CrossRef]  

11. S. M. Popoff, G. Lerosey, R. Carminati, M. Fink, A. C. Boccara, and S. Gigan, “Measuring the transmission matrix in optics: an approach to the study and control of light propagation in disordered media,” Phys. Rev. Lett. 104(10), 100601 (2010). [CrossRef]  

12. S. M. Popoff, G. Lerosey, M. Fink, A. C. Boccara, and S. Gigan, “Controlling light through optical disordered media: transmission matrix approach,” New J. Phys. 13(12), 123021 (2011). [CrossRef]  

13. M. Cui, “A high speed wavefront determination method based on spatial frequency modulations for focusing light through random scattering media,” Opt. Express 19(4), 2989–2995 (2011). [CrossRef]  

14. S. Feng, C. Kane, P. A. Lee, and A. D. Stone, “Correlations and fluctuations of coherent wave transmission through disordered media,” Phys. Rev. Lett. 61(7), 834–837 (1988). [CrossRef]  

15. J. Bertolotti, E. G. van Putten, C. Blum, A. Lagendijk, W. L. Vos, and A. P. Mosk, “Non-invasive imaging through opaque scattering layers,” Nature 491(7423), 232–234 (2012). [CrossRef]  

16. O. Katz, P. Heidmann, M. Fink, and S. Gigan, “Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations,” Nat. Photonics 8(10), 784–790 (2014). [CrossRef]  

17. X. Wang, X. Jin, J. Li, X. Lian, X. Ji, and Q. Dai, “Prior-information-free single-shot scattering imaging beyond the memory effect,” Opt. Lett. 44(6), 1423–1426 (2019). [CrossRef]  

18. T. Wu, O. Katz, X. Shao, and S. Gigan, “Single-shot diffraction-limited imaging through scattering layers via bispectrum analysis,” Opt. Lett. 41(21), 5003–5006 (2016). [CrossRef]  

19. M. Cua, E. H. Zhou, and C. Yang, “Imaging moving targets through scattering media,” Opt. Express 25(4), 3935–3945 (2017). [CrossRef]  

20. W. Li, J. Liu, S. He, L. Liu, and X. Shao, “Multitarget imaging through scattering media beyond the 3D optical memory effect,” Opt. Lett. 45(10), 2692–2695 (2020). [CrossRef]  

21. E. Edrei and G. Scarcelli, “Memory-effect based deconvolution microscopy for super-resolution imaging through scattering media,” Sci. Rep. 6(1), 33558 (2016). [CrossRef]  

22. H. Zhuang, H. He, X. Xie, and J. Zhou, “High speed color imaging through scattering media with a large field of view,” Sci. Rep. 6(1), 32696 (2016). [CrossRef]  

23. M. Liao, D. Lu, G. Pedrini, W. Osten, G. Situ, W. He, and X. Peng, “Extending the depth-of-field of imaging systems with a scattering diffuser,” Sci. Rep. 9(1), 7165 (2019). [CrossRef]  

24. D. Tang, S. K. Sahoo, V. Tran, and C. Dang, “Single-shot large field of view imaging with scattering media by spatial demultiplexing,” Appl. Opt. 57(26), 7533–7538 (2018). [CrossRef]  

25. I. I. Freund, M. Rosenbluh, and S. Feng, “Memory effects in propagation of optical waves through disordered media,” Phys. Rev. Lett. 61(20), 2328–2331 (1988). [CrossRef]  

26. B. Judkewitz, R. Horstmeyer, I. M. Vellekoop, I. N. Papadopoulos, and C. Yang, “Translation correlations in anisotropically scattering media,” Nat. Phys. 11(8), 684–689 (2015). [CrossRef]  

27. M. Haskel and A. Stern, “Modeling optical memory effects with phase screens,” Opt. Express 26(22), 29231–29243 (2018). [CrossRef]  

28. M. Kadobianskyi, I. N. Papadopoulos, T. Chaigne, R. Horstmeyer, and B. Judkewitz, “Scattering correlations of time-gated light,” Optica 5(4), 389–394 (2018). [CrossRef]  

29. C. Guo, J. Liu, W. Li, T. Wu, L. Zhu, J. Wang, G. Wang, and X. Shao, “Imaging through scattering layers exceeding memory effect range by exploiting prior information,” Opt. Commun. 434, 203–208 (2019). [CrossRef]  

30. K. T. Takasaki and J. W. Fleischer, “Phase-space measurement for depth-resolved memory-effect imaging,” Opt. Express 22(25), 31426–31433 (2014). [CrossRef]  

31. X. Xie, H. Zhuang, H. He, X. Xu, H. Liang, Y. Liu, and J. Zhou, “Extended depth-resolved imaging through a thin scattering medium with PSF manipulation,” Sci. Rep. 8(1), 4585 (2018). [CrossRef]  

32. X. Xu, X. Xie, A. Thendiyammal, H. Zhuang, J. Xie, Y. Liu, J. Zhou, and A. P. Mosk, “Imaging of objects through a thin scattering layer using a spectrally and spatially separated reference,” Opt. Express 26(12), 15073–15083 (2018). [CrossRef]  

33. G. Li, W. Yang, D. Li, and G. Situ, “Cyphertext-only attack on the double random-phase encryption: Experimental demonstration,” Opt. Express 25(8), 8690–8697 (2017). [CrossRef]  

34. R. Ma, Z. Wang, W. Y. Wang, Y. Zhang, J. Liu, W. L. Zhang, A. S. L. Gomes, and D. Y. Fan, “Wavelength-dependent speckle multiplexing for imaging through opacity,” Opt. Lasers Eng. 141, 106567 (2021). [CrossRef]  

35. J. R. Fienup, “Reconstruction of an object from the modulus of its Fourier transform,” Opt. Lett. 3(1), 27–29 (1978). [CrossRef]  

36. J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21(15), 2758–2769 (1982). [CrossRef]  

37. J. W. Goodman, Speckle phenomena in optics: theory and applications (Roberts and Company Publishers, 2007).

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.


Figures (5)

Fig. 1. Schematic diagram of the experiments. RD denotes the rotating diffuser, and L denotes the collimating lens. The “stationary object” O1 and the “moving object” O2 are placed at different positions. The “moving object” O2 moves freely among different object planes, as shown on OP1, OP2, and OP3. The red and blue dashed circles represent the traditional lateral and axial OME ranges, respectively.
Fig. 2. Experimental setup and 3D OME range of the scattering medium. (a) Experimental setup. The system consists of a laser, rotating diffuser (RD), collimating lens (L), beam splitters (B1 and B2), mirrors (M1 and M2), and object planes (OP1 and OP2). Light from the source is divided and separately incident on the transparent characters “S” and “Z.” The light then converges into one beam that passes through the scattering medium. u1 and u2 are the distances from the object planes OP1 and OP2, respectively, to the scattering medium, and v is the distance from the scattering medium to the camera. (b) Lateral OME range of the scattering medium in the experiments. The red dots represent the correlations between the PSF measured at the center of the pinhole and those at laterally shifted positions; the blue curve is a Gaussian fit. (c) Axial OME range of the scattering medium in our experiments, obtained using a method similar to that in (b).
Fig. 3. Experimental results of superposition imaging of multi-frame speckle images. (a) Ground truth of objects O1 (character “S”) and O2 (character “Z”). The left side is the stationary object O1; the right side is the “moving” object O2. (b) Raw camera images $I^i$, of which 13 frames are collected. (c) Autocorrelation of an arbitrarily selected single frame and the corresponding result reconstructed using the SAT. (d) Speckle components $I_{ave}$ and $I_{dif}^i$ computed using our proposed SA algorithm. (e) Autocorrelations of $I_{ave}$ and $I_{dif}^i$. (f) Corresponding results reconstructed using the SAT. Scale bars: 200 pixels in (b) and (e), and 50 pixels in (a), (c), (d), and (f).
Fig. 4. Imaging of two objects O1 and O2 located far from each other in the axial direction. (a) Schematic of the experimental setup: u1 is 34.5 cm, and u2 varies from 43.5 to 25.5 cm in steps of 3 cm. (b) Images obtained by a conventional imaging system. The yellow dashed circle indicates the position of the defocused object. (c) Captured speckle images. (d) Computed speckle components $I_{ave}$ (left) and $I_{dif}^i$ (right). (e) Reconstructed objects. Scale bars: 50 pixels in (b) and (e), and 200 pixels in (c) and (d).
Fig. 5. Relationship between n and the corresponding recovery results. (a) Reconstructed results for different values of n, the number of collected speckle images. (b) Correlation-coefficient curve between the reconstructed results and the ground truth. Scale bar: 50 pixels.

Equations (8)


$$I \star I = (O \ast S) \star (O \ast S) = (O \star O) \ast (S \star S) \approx O \star O + C, \tag{1}$$
$$|\mathcal{F}\{O\}|^{2} = \mathcal{F}\{O \star O\} \approx \mathcal{F}\{I \star I\}, \tag{2}$$
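The two relations above are the basis of the SAT: by the Wiener–Khinchin theorem, the autocorrelation of the speckle image equals the inverse Fourier transform of its power spectrum and approximates the object's autocorrelation. Below is a minimal numerical sketch (not the authors' code; the toy object, the uniform-random "speckle" PSF, and the circular-convolution image model are our assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def autocorr(img):
    """Autocorrelation via the Wiener-Khinchin theorem:
    I ⋆ I = F^{-1}{|F{I}|^2}, with the zero-lag peak moved to the center."""
    img = img - img.mean()                      # suppress the DC pedestal (the constant C)
    spec = np.abs(np.fft.fft2(img)) ** 2        # power spectrum |F{I}|^2
    return np.fft.fftshift(np.fft.ifft2(spec).real)

# Toy speckle frame: object convolved with a random "PSF" (both synthetic)
obj = np.zeros((64, 64)); obj[30:34, 28:36] = 1.0            # toy object O
psf = rng.random((64, 64))                                   # toy speckle pattern S
I = np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(psf)).real   # I = O * S (circular)

ac = autocorr(I)
# The zero-lag value of an autocorrelation is its maximum, so the
# peak sits at the center of the shifted array
peak = np.unravel_index(np.argmax(ac), ac.shape)
print(int(peak[0]), int(peak[1]))  # 32 32
```

The estimated $|\mathcal{F}\{O\}|^2$ from such an autocorrelation is what a Fienup-type phase-retrieval algorithm [35,36] then inverts to recover the object.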
$$I = I_{O_1} + I_{O_2} = O_1 \ast S_1 + O_2 \ast S_2, \tag{3}$$
$$I \star I = (O_1 \star O_1) \ast (S_1 \star S_1) + (O_2 \star O_2) \ast (S_2 \star S_2) + 2 \times (O_1 \star O_2) \ast (S_1 \star S_2) \approx O_1 \star O_1 + O_2 \star O_2 + C_1, \tag{4}$$
$$I^i = I_{O_1} + I_{O_2}^i = O_1 \ast S_1 + O_2 \ast S_2^i, \tag{5}$$
$$I_{ave} = \frac{1}{n}\sum_{i=1}^{n} I^i = O_1 \ast S_1 + \frac{1}{n}\sum_{i=1}^{n} O_2 \ast S_2^i \approx O_1 \ast S_1 + C_2, \tag{6}$$
$$I_{dif}^i = I^i - I_{ave} = O_2 \ast S_2^i - \frac{1}{n}\sum_{i=1}^{n} O_2 \ast S_2^i \approx O_2 \ast S_2^i - C_2, \tag{7}$$
$$I_{ave} \star I_{ave} \approx O_1 \star O_1, \qquad I_{dif}^i \star I_{dif}^i \approx O_2 \star O_2. \tag{8}$$
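The averaging-and-subtraction steps above can be sketched numerically: because the moving object sees a different, uncorrelated speckle pattern in each frame, averaging n frames washes its contribution out to a near-constant background, leaving the stationary object's speckle; subtracting the average from a single frame isolates the moving object's speckle. A toy sketch under the same synthetic-speckle assumptions as before (not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64

def conv2(a, b):
    # Circular convolution via FFT (assumption: periodic boundaries suffice here)
    return np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)).real

O1 = np.zeros((N, N)); O1[20:24, 20:28] = 1.0   # stationary object
O2 = np.zeros((N, N)); O2[40:44, 36:44] = 1.0   # moving object
S1 = rng.random((N, N))                          # fixed PSF seen by O1

# n frames: S2 differs per frame because O2 leaves the original 3D OME range
n = 13
frames = [conv2(O1, S1) + conv2(O2, rng.random((N, N))) for _ in range(n)]

I_ave = np.mean(frames, axis=0)   # ≈ O1 * S1 + C2       (Eq. 6)
I_dif = frames[0] - I_ave         # ≈ O2 * S2^1 - C2     (Eq. 7)

# I_ave should track the stationary component closely, since the n
# uncorrelated moving-object speckles average toward a constant
stationary = conv2(O1, S1)
r = np.corrcoef(I_ave.ravel(), stationary.ravel())[0, 1]
print("correlation with stationary component:", round(r, 2))
```

Applying the autocorrelation-plus-phase-retrieval step of the SAT separately to `I_ave` and `I_dif` then yields the two objects without the cross term of Eq. (4).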