
Imaging operator in indirect imaging correlography

Open Access

Abstract

Indirect imaging correlography (IIC) is a coherent imaging technique that provides access to the autocorrelation of the albedo of objects obscured from line-of-sight. This technique is used to recover sub-mm resolution images of obscured objects at large standoffs in non-line-of-sight (NLOS) imaging. However, predicting the exact resolving power of IIC in any given NLOS scene is complicated by the interplay between several factors, including object position and pose. This work puts forth a mathematical model for the imaging operator in IIC to accurately predict the images of objects in NLOS imaging scenes. Using the imaging operator, expressions for the spatial resolution as a function of scene parameters such as object position and pose are derived and validated experimentally. In addition, a self-supervised deep neural network framework to reconstruct images of objects from their autocorrelation is proposed. Using this framework, objects with $\approx$ 250 $\mu m$ features, located at 1 m standoffs in an NLOS scene, are successfully reconstructed.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

When light paths from an object are imaged onto a detector by an imaging lens, the one-to-one correspondence between object and image is preserved. However, when these light paths are intercepted by an optically rough surface, such as a wall, the light gets scattered and loses its one-to-one correspondence. Non-line-of-sight (NLOS) imaging techniques [1] exploit the unique attributes of the scattered light, such as its finite speed [2–9], intensity [10–14], or coherence [15–23], to circumvent the scrambling effects of the rough surface and assemble an image of the obscured object. Since the sources and detectors do not have direct access to the obscured objects, NLOS imaging techniques utilize the scattering surfaces to indirectly access the objects in the hidden scene. Hence, these surfaces are termed virtual sources (VS) and virtual detectors (VD) [16]. Moreover, due to their ability to scatter light over the entire hemisphere, these scattering surfaces help recover information about objects located anywhere in the hemisphere. Thus, these scattering surfaces simultaneously assist and impede the image formation process.

Significant advances over the last decade have resulted in the development of a wide range of techniques for solving the NLOS imaging problem. These approaches can be broadly classified as “active NLOS imaging techniques” [2–23] that utilize an illumination source, and “passive NLOS imaging techniques” [24–27] that rely on ambient illumination for recovering information about objects in the NLOS scene. Here, the discussion is restricted to active NLOS imaging techniques that recover the image of an obscured object either from the round-trip time-of-flight or from speckle correlations in the scattered light.

In time-of-flight (or transient) imaging approaches [2–9], the hidden scene is interrogated using a pulsed laser and a time-resolved detector, such as a streak camera or single-photon avalanche diode (SPAD). The temporal light pulse from the laser is scattered by the VS, the hidden objects, and the VD before reaching the detector. Consequently, the photon distribution measured by the time-resolved detector is determined by the location and albedo of the objects in the hidden scene. These time-resolved measurements are acquired at different source-detector positions (confocal or non-confocal) to assemble a space-time data stack, also called a transient image. Computational techniques, such as filtered back projection, f-k migration [5], and phasor fields [6], are adapted to recover the 3D representation of the hidden volume from the transient image. The spatial resolution of reconstructed scenes using transient imaging approaches is currently limited by the temporal resolution of the detector. While state-of-the-art transient imaging techniques using SPAD detectors have demonstrated the recovery of 3D scenes with a typical lateral resolution of ${\approx} 1\; cm$ by scanning over a ${\approx} 1\; m \times 1\; m$ region on the VD [1], recent studies have reported an order of magnitude improvement over these values using detectors with $1\; ps$ temporal resolution [9]. However, the lateral resolution of transient imaging approaches is expected to be fundamentally limited by the decorrelation of the spectral components of the pulsed source that is induced by the surface roughness of the VS/VD [18].

Steady-state/intensity-modulation techniques [10–14] also achieve similar reconstruction resolutions in NLOS imaging scenes. In steady-state NLOS imaging techniques, the intensity images of the VD are recorded using a CMOS detector and an unmodulated source. Here, the spatial variations in the intensity images induced by the location and reflectance properties of the hidden objects are utilized to reconstruct the hidden scene. The intensity-modulated NLOS imaging techniques, however, utilize the emerging time-of-flight sensors and an intensity-modulated light source to record the magnitude and phase of the scattered light. The magnitude and phase at the modulation frequency are then used to recover the contents of the hidden scene.

In speckle-correlation-based approaches [15–23], the hidden scene is interrogated using a continuous-wave laser source and a spatially resolved detector array, such as a CMOS focal plane array. Here, the combination of the temporal coherence of the laser source and the surface roughness of the VS/VD pair results in the formation of a fully developed speckle field at the detector plane. Despite their random appearance, the speckle patterns are known to encode information about the scattering process and are used in applications as varied as optical metrology [28], biomedical imaging [29], and wavelength sensing [30]. In NLOS imaging, the spatial and spectral correlations in the speckle patterns recorded by the focal plane array are utilized to recover the images of hidden objects [17].

The speckle correlation technique introduced in [18], known as synthetic wavelength holography (SWH), relies on the correlation in speckle fields at closely spaced wavelengths to recover holograms of objects deep inside a hidden scene. It exploits the notion that the speckle fields captured at closely spaced wavelengths remain largely correlated, with the distinction of experiencing a relative phase difference that depends on the wavelength separation and the object’s position. Synthetic wavelength holography recovers this differential phase information by assembling a synthetic hologram, or the conjugate product of the individual optical holograms recorded at each independent wavelength. The hidden scene is then reconstructed using holographic back-projection techniques at the “synthetic wavelength,” determined by the two center wavelengths. The spatial resolution of reconstructed scenes using SWH is limited by the synthetic wavelength of operation, which is in turn determined by the surface roughness of the scattering medium. With rough surfaces such as dry walls, a synthetic wavelength of ${\approx} 1\; mm$ was demonstrated. Therefore, the maximum lateral resolution using SWH is expected to be ${\approx} 1\; mm$.

Another class of speckle correlation techniques exploits the spatial or angular correlations in the scattered light at a single wavelength to recover images of objects in an NLOS scene. These techniques, due to their operation at optical wavelengths, offer phenomenal spatial resolution and are best suited for NLOS imaging scenes constrained by a small probe area. The speckle correlation technique introduced in [19] utilizes spatially incoherent illumination to record speckle intensity patterns emerging from the object. It is also used for a wide range of applications where scattering acts as an impediment to imaging, such as tracking objects inside scattering media [31] and endoscopy [32,33]. The technique is predicated on the notion that the recorded speckle intensity image is a superposition of correlated but spatially offset speckle intensity images arising from individual object points. The spatial offsets of the individual speckle intensity images are directly related to the relative separations between the object points. The autocorrelation of the object's albedo is then obtained by computing the autocorrelation of the captured image.

Imaging correlography techniques for indirect imaging [20,21,34], however, utilize a narrow linewidth laser source with a high degree of temporal coherence to record speckle intensity patterns emerging from the object. The recorded speckle interference pattern is a result of the coherent superposition of the speckle fields from the individual object points. The autocorrelation of the object's albedo is then obtained from the power spectrum of the recorded image. Due to the coherent superposition of the constituent speckle fields, correlography-based approaches are expected to accommodate more object points for a given dynamic range on the detector. Using this technique, images of objects with a spatial resolution of ${\approx} 250\; \mu m$ at $1\; m$ standoff [21] were recovered around a corner. The high spatial resolution of the spatial speckle correlation techniques is a direct consequence of the reliance on the wave nature of light at visible wavelengths $({ \approx 0.5\; \mu m} )$. Consequently, extremely small path delays $({ \approx 1\; fs} )$ from object points also yield a discernible variation in the detected intensity pattern. However, the high spatial resolution advantage of these techniques is offset by the extremely small angular field of view, arising from the limitation imposed by the memory effect angle (iso-planatic angle) [35]. In addition, techniques exploiting spatial correlations in scattered speckle patterns do not possess the ability to localize the hidden objects, except under highly constrained scenarios where the use of digital holographic techniques is feasible [10]. Localization by pairing the speckle correlation technique with coherence gating has also been proposed in the literature [22].

Fig. 1. Comparison of NLOS imaging approaches.

The key performance attributes of the NLOS techniques discussed here are summarized in Fig. 1. It is evident from Fig. 1 that NLOS imaging techniques have complementary advantages and tradeoffs. However, for NLOS applications requiring the highest spatial resolution imagery with a small probing area, spatial speckle correlation techniques such as indirect imaging correlography (IIC) are best suited.

Figure 2 illustrates the schematic setup of IIC and exemplar reconstructions of ${\approx} 250\; \mu m$ features using IIC. The hidden object is indirectly illuminated by the laser light scattered by the VS, and the speckle patterns are recorded by a camera imaging the VD. The recorded speckle patterns are processed to obtain the autocorrelation of the albedo of the hidden objects. The process of recovering the image of the hidden object from the autocorrelation is ill-posed and is solved either by the use of opportunistic features such as a glint or an extended reference on the object [36], by iterative optimization [37], or by deep learning techniques [21]. It can be noticed from Fig. 2 that the reconstructed images appear foreshortened relative to the original object albedo. These distortions manifest from a mismatch in the orientations of the object plane and the VD plane. Although models to study light propagation between arbitrarily oriented planes exist [38,39], these cannot be applied directly to predict images formed in IIC due to the absence of factors specific to NLOS imaging, such as the VD magnification. Moreover, studies to predict feature visibility in NLOS imaging are specific to transient imaging and cannot be generalized to other NLOS techniques [40].

Fig. 2. IIC schematic and exemplar results.

In this work, a mathematical model for the imaging operator in IIC is developed to accurately predict the images of objects in an arbitrary NLOS scene. The proposed framework casts the NLOS imager as a computational imager with the VD and the image sensor acting as its pupil planes. Using this abstraction, analytical expressions for the lateral resolution of objects as a function of VD size, imaging magnification, and the object’s position and pose are derived. In addition to the mathematical model, a new scheme to reconstruct the hidden object albedo is presented. The technique leverages recent advances in self-supervised networks for solving inverse imaging problems to recover the image from its autocorrelation.

2. Theory

Indirect imaging correlography is an adaptation of a lensless imaging technique developed in the 1980s, called imaging correlography [41–43]. Imaging correlography was originally developed to recover images of stellar objects from the backscattered speckle intensity patterns recorded under coherent illumination. In NLOS imaging scenarios, the speckle patterns experience additional scrambling due to scattering at the VS and VD, thereby reducing the signal-to-noise ratio and the reconstructed field of view. Figure 2 illustrates the acquisition process in IIC, wherein the VS is illuminated using a narrow-linewidth laser source. The incident light is scattered by the VS and indirectly illuminates the objects in the hidden scene. Each illuminated object point in the hidden scene acts as a secondary point source that directs light toward the VD. The light distribution illuminating the VD is relayed back to the detector using an imaging lens, forming the speckle intensity image. Under the assumption that the object’s angular extent is smaller than the iso-planatic angle of the VD, light from the secondary object point sources interferes to produce an interference pattern on the detector. The spatial frequency of each fringe pattern is determined by the relative separation between the point sources and the wavelength of operation. More specifically, the intensity distribution, ${I_{VD}}(X )$, at the VD emerging from the field reflected by the hidden object, ${u_{obj}}({{x_\pi }} )$, can be expressed using scalar diffraction theory [44] as,

$${I_{VD}}(X )\propto {|{\mathrm{{\cal F}}({{u_{obj}}({{x_\pi }} )} )} |^2}$$
where ${x_\pi }$ and X denote the spatial coordinates in the plane of the object and VD respectively (see Fig. 3). The hidden object albedo $|{{u_{obj}}({{x_\pi }} )} |$ is recovered from the intensity measurements by exploiting the inverse Fourier transform relationship between ${I_{VD}}(X )$ and ${u_{obj}}({{x_\pi }} )$,
$${\left| {\mathcal{F}^{ - 1}\left( {{I_{VD}}(X )} \right)} \right|^2} = {\left| {\mathcal{F}^{ - 1}\left( {{{\left| {\mathcal{F}\left( {{u_{obj}}({{x_\pi }} )} \right)} \right|}^2}} \right)} \right|^2} = {\left| {{u_{obj}}({{x_\pi }} )\star {u_{obj}}({{x_\pi }} )} \right|^2}$$
where $\star$ denotes the autocorrelation operation. The relationship between the Fourier modulus of the recorded image and the hidden object albedo can also be understood by considering a hidden object comprised of only two point reflectors. The light scattered by the individual point reflectors interferes to form a fringe pattern at the VD. The power spectrum of the recorded fringe pattern exhibits three peaks: a central peak and a pair of side peaks at the fringe frequency. These three peaks correspond to the autocorrelation of the two points in the hidden scene. This notion can be extended to a collection of point sources [45], resulting in the autocorrelation relationship between the power spectrum of the recorded intensity image and the object albedo. In an NLOS scene, the mismatch between the angle of incidence of scattered light from the object and the viewing direction or the VD normal results in distortions in the recovered image. Consequently, the achievable spatial resolution depends on the angular mismatch between the incident and viewing directions. The imaging operator described below models the effect of this angular mismatch and yields expressions for the spatial resolution of the hidden objects as a function of the pose and position of the hidden object relative to the VD.
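The two-point intuition above, and the autocorrelation relationship of Eqs. (1) and (2), can be checked numerically. The following sketch is an illustration, not the authors' code; the grid size, point separation, and number of speckle realizations are arbitrary assumptions. It simulates the far-field speckle intensity of a two-point object with random surface phases and recovers the three-peak autocorrelation.

```python
# Minimal numerical check of Eqs. (1)-(2): the speckle intensity at the VD encodes
# the autocorrelation of the hidden object's albedo. All names and sizes here are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N = 512

# Hidden object: two point reflectors of unit albedo, 40 samples apart.
albedo = np.zeros((N, N))
albedo[N // 2, N // 2 - 20] = 1.0
albedo[N // 2, N // 2 + 20] = 1.0

acc = np.zeros((N, N))
n_realizations = 100
for _ in range(n_realizations):
    phase = rng.uniform(0.0, 2.0 * np.pi, (N, N))      # random rough-surface phase
    u_obj = np.sqrt(albedo) * np.exp(1j * phase)       # object field u_obj(x_pi)
    I_vd = np.abs(np.fft.fft2(u_obj)) ** 2             # Eq. (1): fringe/speckle intensity at the VD
    acc += np.abs(np.fft.ifft2(I_vd)) ** 2             # Eq. (2): single-shot autocorrelation estimate
acc = np.fft.fftshift(acc / n_realizations)

# Three peaks survive the averaging: a central (DC) peak and two side peaks offset
# by +/- the point separation, i.e. the autocorrelation of the two-point albedo.
print(np.argwhere(acc > 0.2 * acc.max()) - N // 2)
```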

2.1 Imaging operator for IIC

The process of recovering the object albedo using IIC can be abstracted away using a computational imager (CI) whose entrance pupil is positioned at the VD, the exit pupil at the location of the image sensor, and a computational sensor behind the exit pupil. This notion is an extension of an idea first proposed by Isaac Freund in his seminal work on looking around corners [46]. In the proposed abstraction, shown in Fig. 3, the combination of the VD wall, imaging optics, and the image sensor act as a computational thick lens. The following derivation relates the chief rays on the object side to the image side of the CI to discover the imaging operator associated with IIC.

Fig. 3. Abstraction of IIC setup for the derivation of the imaging operator.

The relationship between the chief rays on the object side and the image side can be found by observing the spatial pattern formed at the entrance pupil plane by any two object points. From scalar diffraction theory, the spatial fringe pattern at the entrance pupil plane is determined by the difference in direction cosines of the interfering rays and the wavelength of illumination [44]. Therefore, expressing the difference in direction cosines at the entrance pupil as a function of the object position and pose helps in estimating the spatial pattern observed at the entrance pupil, and its Fourier domain representation.

For the purposes of this analysis, the coordinate system is assumed to be aligned with the entrance pupil plane. The coordinates of the object are found by applying a rigid body transformation (rotation, followed by translation) to the object about the entrance pupil plane. Additionally, the difference in direction cosines of each object point is expressed relative to the center of the object, ${{\boldsymbol P}_{\boldsymbol r}}$. The 3D rotation matrix R defines the pose of the object with respect to the coordinate axes, and the translation vector ${\boldsymbol t}$ defines the final position of the object. The derivation begins by identifying the transformed coordinates of the reference point ${{\boldsymbol P}_{\boldsymbol r}}$ on the object after the rigid body transformation as,

$$\boldsymbol{P}_{\boldsymbol{r}}^{\prime} = \boldsymbol{t}$$

Similarly, the coordinates of any point ${{\boldsymbol P}_{\boldsymbol \pi }}$ with a spatial offset of ${\left[ {\begin{array}{{ccc}} {{x_\pi }}&{{y_\pi }}&0 \end{array}} \right]^T}$ from ${{\boldsymbol P}_{\boldsymbol r}}$ on the object can be expressed as,

$${\boldsymbol P^{\prime}} = \left( {R{{\left[ {\begin{array}{{ccc}} {{x_\pi }}&{{y_\pi }}&0 \end{array}} \right]}^T}} \right) + {\boldsymbol {t}}$$

The difference in direction cosines of the rays joining the object points and the origin is given by,

$$\left[ {\begin{array}{{c}} {\hat{l}} \\ {\hat{m}} \\ {\hat{n}} \end{array}} \right] = \frac{{\boldsymbol{P}^{\prime}}}{{\left\| {\boldsymbol{P}^{\prime}} \right\|}} - \frac{{\boldsymbol{P}_{\boldsymbol{r}}^{\prime}}}{{\left\| {\boldsymbol{P}_{\boldsymbol{r}}^{\prime}} \right\|}}$$
where $\left\| {\boldsymbol{P}^{\prime}} \right\| = L$ and $\left\| {\boldsymbol{P}_{\boldsymbol{r}}^{\prime}} \right\| = {L_r}$ denote the Euclidean distances between the object points and the origin. From Eq. (4), $\left\| {\boldsymbol{P}^{\prime}} \right\|$ can be expressed using R and ${\boldsymbol t}$ as $\left\| {\boldsymbol{P}^{\prime}} \right\| = \left\| {R{\boldsymbol{P}_{\boldsymbol\pi} } + \boldsymbol{t}} \right\|$. Under the assumption that the object extent is much smaller than the propagation distance, i.e., $\left\| {R{\boldsymbol{P}_\pi }} \right\| \ll \left\| \boldsymbol{t} \right\|$, the term ${\left\| {\boldsymbol{P}^{\prime}} \right\|^{ - 1}}$ can be expanded using its Taylor series as below,
$$\frac{1}{{\left\| {\boldsymbol{P}^{\prime}} \right\|}} \approx \frac{1}{{\left\| \boldsymbol{t} \right\|}} - \frac{{{{\left( t \right)}^T}\left( {R{\boldsymbol{P}_\pi }} \right)}}{{{{\left\| \boldsymbol{t} \right\|}^3}}}$$
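For clarity, the step leading to Eq. (6) can be written out. Expanding the norm and keeping only terms that are first order in $\left\| {R{\boldsymbol{P}_\pi }} \right\|/\left\| \boldsymbol{t} \right\|$ gives,

$$\frac{1}{{\left\| {\boldsymbol{P}^{\prime}} \right\|}} = \frac{1}{{\left\| \boldsymbol{t} \right\|}}{\left( {1 + \frac{{2{\boldsymbol{t}^T}({R{\boldsymbol{P}_\pi }}) + {{\left\| {R{\boldsymbol{P}_\pi }} \right\|}^2}}}{{{{\left\| \boldsymbol{t} \right\|}^2}}}} \right)^{ - 1/2}} \approx \frac{1}{{\left\| \boldsymbol{t} \right\|}}\left( {1 - \frac{{{\boldsymbol{t}^T}({R{\boldsymbol{P}_\pi }})}}{{{{\left\| \boldsymbol{t} \right\|}^2}}}} \right)$$

where the quadratic term ${\left\| {R{\boldsymbol{P}_\pi }} \right\|^2}/{\left\| \boldsymbol{t} \right\|^2}$ is neglected and the binomial approximation ${({1 + \epsilon } )^{ - 1/2}} \approx 1 - \epsilon /2$ is used.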

Upon substituting Eq. (6) in Eq. (5), the difference in direction cosines from object points ${\boldsymbol P^{\prime}}$ and ${\boldsymbol P}_{\boldsymbol r}^{\prime}$ can be expressed as a function of the object’s position and pose, as follows,

$$\left[ {\begin{array}{{c}} {\hat{l}} \\ {\hat{m}} \\ {\hat{n}} \end{array}} \right] = \frac{{R{\boldsymbol{P}_{\boldsymbol{\pi}} }}}{{\left\| \boldsymbol{t} \right\|}} - \frac{{{{\left( t \right)}^T}\left( {R{\boldsymbol{P}_\pi }} \right)}}{{{{\left\| \boldsymbol{t} \right\|}^3}}}\left( {R{\boldsymbol{P}_\pi } + \boldsymbol{t}} \right)$$

The resulting vector can be expressed as direction cosines associated with a unit vector, by replacing $\hat{n}$ with $\sqrt {1 - {{\hat{l}}^2} - {{\hat{m}}^2}} $. The above process may be viewed as a geometric rectification, wherein the points on the hidden object are mapped onto an on-axis object parallel to the VD and positioned at $z = {L_r}$. The rectified CI is shown in Fig. 3. Since the entrance pupil and exit pupil planes of the computational thick lens are related through the pupil magnification (here, the magnification of the optics imaging the VD), ${m_p}$, the direction cosines on the image side are given by [47],

$$\left[ {\begin{array}{{c}} {\tilde{l}}\\ {\tilde{m}}\\ {\tilde{n}} \end{array}} \right] = \frac{1}{{\sqrt {1 + ({m_p^2 - 1} ){{\hat{n}}^2}} }}\left[ {\begin{array}{{ccc}} 1&0&0\\ 0&1&0\\ 0&0&{{m_p}} \end{array}} \right]\left[ {\begin{array}{{c}} {\hat{l}}\\ {\hat{m}}\\ {\hat{n}} \end{array}} \right]$$

From [44], the direction cosines ${[{\tilde{l},\tilde{m}} ]^T}$ of a monochromatic wave at wavelength $\lambda $ are related to the spatial frequencies ${f_x}$ and ${f_y}$ as

$$\left[ {\begin{array}{{c}} {{f_x}}\\ {{f_y}} \end{array}} \right] = \left[ {\begin{array}{{c}} {\frac{{\tilde{l}}}{\lambda }}\\ {\frac{{\tilde{m}}}{\lambda }} \end{array}} \right]$$

Furthermore, the spatial frequency resolution of reconstruction is determined by the spatial extent ${L_{sensor}} \times {H_{sensor}}$ of the image sensor. Therefore, the pixel coordinates of the object points in the Fourier reconstruction can be determined as the ratio of the spatial frequency and the spatial frequency resolution, which is given by,

$$\left[ {\begin{array}{{c}} u\\ v \end{array}} \right] = \left[ {\begin{array}{{c}} {\frac{{{f_x}}}{{{{({{L_{sensor}}\; } )}^{ - 1}}}}}\\ {\frac{{{f_y}}}{{{{({{H_{sensor}}} )}^{ - 1}}}}} \end{array}} \right] = \; \left[ {\begin{array}{{c}} {\tilde{l} \times {{\left( {\frac{\lambda }{{{L_{sensor}}}}} \right)}^{ - 1}}}\\ {\tilde{m} \times {{\left( {\frac{\lambda }{{{H_{sensor}}}}} \right)}^{ - 1}}} \end{array}} \right]$$
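Taken together, Eqs. (3) through (10) define the imaging operator: a mapping from a point on the hidden object to a pixel coordinate in the Fourier-domain reconstruction. The sketch below is a Python translation offered only as an illustration; the parameter values in the example are assumptions, not values taken from the experiments.

```python
# Illustrative sketch of the IIC imaging operator, Eqs. (3)-(10): map an object
# point at offset (x_pi, y_pi) from the reference point to pixel coordinates (u, v).
import numpy as np

def iic_imaging_operator(x_pi, y_pi, R, t, m_p, wavelength, L_sensor, H_sensor):
    P_pi = np.array([x_pi, y_pi, 0.0])
    P_prime = R @ P_pi + t                                # Eq. (4): transformed object point
    P_r_prime = t                                         # Eq. (3): transformed reference point

    # Eq. (5): difference of the direction cosines of the two chief rays.
    d = P_prime / np.linalg.norm(P_prime) - P_r_prime / np.linalg.norm(P_r_prime)
    l_hat, m_hat = d[0], d[1]
    n_hat = np.sqrt(max(0.0, 1.0 - l_hat**2 - m_hat**2))  # third cosine of the unit vector

    # Eq. (8): pass through the pupil magnification of the computational thick lens.
    scale = 1.0 / np.sqrt(1.0 + (m_p**2 - 1.0) * n_hat**2)
    l_tld, m_tld = scale * l_hat, scale * m_hat

    # Eqs. (9)-(10): spatial frequencies, then pixel coordinates of the reconstruction.
    return l_tld * L_sensor / wavelength, m_tld * H_sensor / wavelength

# Example with assumed numbers: object 0.86 m behind and 0.78 m to the left of the
# VD, rotated 45 degrees about the vertical axis, unit pupil magnification.
theta = np.deg2rad(45.0)
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
t = np.array([-0.78, 0.0, 0.86])
print(iic_imaging_operator(1e-3, 1e-3, R, t, m_p=1.0, wavelength=532e-9,
                           L_sensor=7.5e-3, H_sensor=7.5e-3))
```

Note that the sketch evaluates the exact direction cosines of Eq. (5) rather than the first-order form of Eq. (7).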

2.2 Spatial resolution on target

The spatial resolution on target is defined here as the smallest physical separation between two points on the hidden object that can be resolved using IIC. The horizontal resolution can be determined by considering a point ${{\boldsymbol P}_{\boldsymbol H}}:{\left[ {\begin{array}{{ccc}} {\delta x}&0&0 \end{array}} \right]^T}$ on the hidden object located at $\boldsymbol{P}_r^{\prime}:{\left[ {\begin{array}{{ccc}} {{t_x}}&{{t_y}}&{{t_z}} \end{array}} \right]^T}$ with the pose defined using $R = \left[ {\begin{array}{{ccc}} {{r_{11}}}&{{r_{12}}}&{{r_{13}}}\\ {{r_{21}}}&{{r_{22}}}&{{r_{23}}}\\ {{r_{31}}}&{{r_{32}}}&{{r_{33}}} \end{array}} \right]$. The expression for the horizontal resolution $\delta x$ is obtained by substituting $u = 1$ in Eq. (10), which results in the expression given below,

$$\delta x = \frac{{\lambda \left\| \boldsymbol{t} \right\|}}{{m_p^{ - 1}{L_{sensor}}}} \times {\left\{ {{r_{11}} - {t_x}\left( {\frac{{{t_x}{r_{11}} + {t_y}{r_{21}} + {t_z}{r_{31}}}}{{{{\left\| \boldsymbol{t} \right\|}^2}}}} \right)} \right\}^{ - 1}}$$

The vertical resolution can similarly be determined by considering a point ${{\boldsymbol P}_{\boldsymbol V}}:{\left[ {\begin{array}{{ccc}} 0&{\delta y}&0 \end{array}} \right]^T}$ on the hidden object and solving for $\delta y$ by substituting $v = 1$ in Eq. (10),

$$\delta y = \frac{{\lambda \left\| \boldsymbol{t} \right\|}}{{m_p^{ - 1}{H_{sensor}}}} \times {\left\{ {{r_{22}} - {t_y}\left( {\frac{{{t_x}{r_{12}} + {t_y}{r_{22}} + {t_z}{r_{32}}}}{{{{\left\| \boldsymbol{t} \right\|}^2}}}} \right)} \right\}^{ - 1}}$$

It can be observed that the first term in Eq. (11) and Eq. (12) is analogous to the diffraction-limited spatial resolution of a coherent lensless imaging system, which is then scaled by a pose- and position-dependent factor. In deriving Eq. (11) and Eq. (12), it is assumed that $\hat{n} = 1$, as the angular subtense of the object points relative to the reference point in the rectified CI is small.
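As a concrete check, Eqs. (11) and (12) can be evaluated numerically for a given pose and position; the cutoff frequencies used later in Sec. 3.3 then follow as ${({2\delta x} )^{ - 1}}$ and ${({2\delta y} )^{ - 1}}$. The brief sketch below uses assumed parameter values and is only meant to illustrate the calculation.

```python
# Evaluate the on-target resolution of Eqs. (11)-(12) for a given pose R and
# position t. The parameter values in the example are assumptions.
import numpy as np

def iic_resolution(R, t, m_p, wavelength, L_sensor, H_sensor):
    tx, ty, tz = t
    norm_t = np.linalg.norm(t)
    # Pose- and position-dependent scale factors multiplying the diffraction limit.
    gx = R[0, 0] - tx * (tx * R[0, 0] + ty * R[1, 0] + tz * R[2, 0]) / norm_t**2
    gy = R[1, 1] - ty * (tx * R[0, 1] + ty * R[1, 1] + tz * R[2, 1]) / norm_t**2
    delta_x = wavelength * norm_t / (L_sensor / m_p) / gx   # Eq. (11)
    delta_y = wavelength * norm_t / (H_sensor / m_p) / gy   # Eq. (12)
    return delta_x, delta_y

# Example (assumed geometry): fronto-parallel object 1.2 m from the VD.
dx, dy = iic_resolution(np.eye(3), np.array([0.0, 0.0, 1.2]), m_p=1.0,
                        wavelength=532e-9, L_sensor=7.5e-3, H_sensor=7.5e-3)
print(dx, dy, 1.0 / (2.0 * dx))   # resolutions (m) and horizontal cutoff (cycles/m)
```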

3. Experimental validation of the geometric imaging model

The theoretical expressions derived in Sec. 2 were experimentally validated using the NLOS imaging setup depicted in Fig. 4. The setup consists of a narrow linewidth laser source (Azur Light Systems GR, operating at 532 nm) that illuminates a white dry wall panel acting as the scattering surface (VS/VD). The free-space laser beam is coupled into a single mode optical fiber whose output is expanded to a spot diameter of 2.7" using a lens assembly (Microlaser Systems, FC10). The illumination system is additionally mounted on a motorized rotation and tilt stage to illuminate the hidden scene with different speckle realizations by scanning the illuminated region on the VS. This enables the acquisition of an ensemble of images with different speckle realizations that help mitigate the noise in the autocorrelation estimates of the object.

Fig. 4. NLOS imaging setup schematic and experimental apparatus in the hidden scene used to validate the theoretical expressions.

A Digital Micromirror Device (DMD, DLP7000, 0.7" XGA) with a $768 \times 1024$ array of micromirrors of size $13.68\; \mu m$ was used as a proxy for an object and was positioned deep inside the L-shaped corner as shown in Fig. 4. The object was mounted on two translation stages and a rotation stage for varying the position and pose of the object. The DMD was chosen as the hidden object to allow the object albedo to be changed programmatically and to maximize the light throughput. More results of IIC reconstructions using diffuse targets are furnished in Sec. 3 of Supplement 1.

A diffraction-limited imager comprised of an image sensor (IDS U3-3890CP-M-GL) and imaging optics (Sigma 135 mm f/1.8 DG HSM Art lens) was positioned $1.3\; m$ away from the VD to record the speckle intensity images emerging from the light scattered by the hidden object. The recorded speckle images are processed using the workflow depicted in Fig. 5. Each speckle intensity image of size $3000 \times 4000$ is first split into non-overlapping sub-images of size $449 \times 449$ that correspond to a VD size of $7.5\; mm \times 7.5\; mm$. An estimate of the autocorrelation of the object albedo is obtained by computing the ensemble averaged power spectrum of the recorded data stack. The ensemble averaged power spectrum is computed as the difference between the ensemble mean of the power spectra of the individual sub-images and the power spectrum of the ensemble mean of the speckle sub-images (see Fig. 5). Analogously, the autocovariance of the ensemble of speckle sub-images is computed to obtain an estimate of the power spectrum of the object albedo. The experimental setup and the processing workflow described herein were used to perform a series of experiments that validate the predictions of the geometric imaging model described in Sec. 2.
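A condensed sketch of this processing workflow is given below. It assumes the recorded frames are available as a NumPy array; the tile size matches the $449 \times 449$ sub-images used here, while the variable names are illustrative.

```python
# Sketch of the Fig. 5 workflow: split each speckle frame into non-overlapping
# sub-images and form the ensemble-averaged power spectrum, which estimates the
# autocorrelation of the object albedo (Eq. (2)). `frames` is assumed to be an
# array of shape (n_frames, 3000, 4000).
import numpy as np

def autocorrelation_estimate(frames, tile=449):
    sum_ps, sum_sub, count = None, None, 0
    for frame in frames:
        H, W = frame.shape
        for i in range(0, H - tile + 1, tile):
            for j in range(0, W - tile + 1, tile):
                sub = frame[i:i + tile, j:j + tile].astype(np.float64)
                ps = np.abs(np.fft.fft2(sub)) ** 2
                sum_ps = ps if sum_ps is None else sum_ps + ps
                sum_sub = sub if sum_sub is None else sum_sub + sub
                count += 1
    mean_ps = sum_ps / count                                  # ensemble mean of the power spectra
    ps_of_mean = np.abs(np.fft.fft2(sum_sub / count)) ** 2    # power spectrum of the ensemble mean
    return np.fft.fftshift(mean_ps - ps_of_mean)              # autocorrelation estimate of the albedo
```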

Fig. 5. Processing workflow to recover power spectrum and autocorrelation from recorded speckle images. The autocorrelation and power spectrum images are contrast adjusted for display purposes.

3.1 Validation of the object to image mapping

The imaging relationship between the hidden object and its reconstruction on the computational sensor plane, provided by the expressions in Eq. (10), was validated for a hidden scene with the parameters described in Fig. 6. The hidden object was positioned $0.86\; m$ behind and $0.78\; m$ to the left of the VD and at the same height as the camera. An image of the character ‘N’ with two glint features was loaded into the DMD. The glint features are used to circumvent the need to perform a phase-retrieval process and to directly compare the simulated and experimental autocorrelation estimates. The expressions of Eq. (10) were implemented in MATLAB to simulate the image of the object as seen in the reconstruction plane.

The autocorrelation of the simulated image and the experimentally obtained autocorrelation are observed to be in close agreement in Fig. 6. Moreover, it can be noticed that the reconstruction of ‘N’ extends up to $61$ pixels from the center in the horizontal orientation, whereas it extends up to $85$ pixels in the vertical orientation, despite the symmetric separation in both orientations. The experimental results presented here demonstrate the severity of the foreshortening experienced in IIC.

Fig. 6. Results from the experiment to validate the object plane to reconstruction plane mapping predicted by the imaging operator.

3.2 Validation of the resolving ability of the IIC system

The resolving ability of the IIC system, provided by the expressions in Eq. (11) and Eq. (12), was validated for a hidden scene with the parameters described in Fig. 7. Briefly, the hidden object was positioned $0.86\; m$ behind and $0.88\; m$ to the left of the VD and at the same height as the camera. From Eq. (11) and Eq. (12), the IIC system has a spatial frequency cutoff of $8.5\frac{{lp}}{{mm}}$ in the horizontal orientation and $11.5\frac{{lp}}{{mm}}$ in the vertical orientation. Therefore, the DMD is loaded with Ronchi rulings of frequencies close to the cutoff frequencies, as illustrated in Fig. 7. The power spectrum of the hidden objects is computed from the autocorrelation of the detected speckle intensity images. The two peaks in the power spectrum and the corresponding line trace demonstrate the ability of the IIC setup to record frequencies this close to the cutoff. This experiment validates the ability of IIC to record very high frequency features on objects hidden around a corner.

Fig. 7. Results from the experiment to validate the resolving ability of the IIC system.

3.3 Validation of the effect of object position on the resolving ability of IIC system

The geometric imaging model discussed in Sec. 2 is predicated on the notion that the VD acts as the entrance pupil of the IIC computational imager. Consequently, changes in the size of the VD must affect the resolving ability of the IIC system. The apparent VD size seen by the object decreases as the object moves off-axis relative to the VD, resulting in a reduction of the resolving power of IIC. To quantitatively examine this effect, the Spatial Frequency Response (SFR) of the IIC system is estimated for each off-axis object position and the corresponding cutoff frequencies are analyzed in this section.

The SFR of an imaging system quantifies the imaging system’s ability to resolve fine features on target. The SFR is typically estimated by measuring the response of the imaging system to specially designed test patterns such as a slanted edge, a stochastic pattern, or a pseudo-random intensity code. The properties of laser speckle have also been utilized for SFR estimation of focal plane arrays [48]. As depicted in Fig. 5, in IIC, the autocovariance of the speckle intensity image yields the power spectrum of the object albedo. Consequently, the measured power spectrum is weighted by the autocorrelation function of the exit pupil, which acts as the SFR of the IIC system. Therefore, given a reference object albedo in the hidden scene, the SFR of the IIC system can be estimated as the ratio of the measured power spectrum and the analytic power spectrum of the object albedo.

Fig. 8. Left panel: Workflow used to estimate the spatial frequency response of the IIC system. Right panel: Comparison of cut-off frequency plots predicted by theory and experiments.

Since the approach requires the computation of an analytic power spectrum, simpler target shapes such as a point target or a slit target were found to be best suited for these experiments. However, the poor light throughput from a point target imposes additional challenges in recording the speckle at the detector. Therefore, a slit target in two orthogonal orientations was used in the experiments, as illustrated in Fig. 8. The approach begins by loading the DMD with the slit target and measuring the power spectrum through the workflow shown in the figure. Subsequently, the ratio of the line trace of the measured power spectrum along the central row or column to the analytic power spectrum of the slit target is computed to give the SFR of the system. The process to determine the analytic power spectrum of the slit target is detailed in Sec. 6 of Supplement 1. A line fit to the computed SFR and the theoretical SFR are observed to be in close agreement. This process was repeated for a series of off-axis displacements ${X_r}$ and the cutoff frequencies for each of these positions are determined. The cutoff frequencies estimated experimentally are in close agreement with the analytic cutoff frequencies determined as ${({2\delta x} )^{ - 1}}$ and ${({2\delta y} )^{ - 1}}$, where $\delta x$ and $\delta y$ are obtained from Eq. (11) and Eq. (12), respectively.
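The SFR estimate itself reduces to a pointwise division of the measured power spectrum trace by the analytic spectrum of the slit. The short sketch below assumes a sinc-squared model for the slit spectrum and masks out the frequencies near its nulls; the names, mask threshold, and cutoff criterion are illustrative assumptions rather than the exact procedure of Supplement 1.

```python
# Sketch of the Fig. 8 SFR estimate: divide the measured power spectrum trace of a
# slit target by its analytic spectrum (sinc^2 assumed here), then read off the
# cutoff frequency where the response falls below a small threshold.
import numpy as np

def estimate_sfr(measured_ps_trace, freqs, slit_width):
    analytic = np.sinc(freqs * slit_width) ** 2          # |FT of a slit of width slit_width|^2
    valid = analytic > 0.05                              # stay away from the nulls of the slit spectrum
    sfr = np.full(measured_ps_trace.shape, np.nan)
    sfr[valid] = measured_ps_trace[valid] / analytic[valid]
    return sfr / np.nanmax(sfr)                          # normalize the response to unity

def cutoff_frequency(sfr, freqs, threshold=0.05):
    below = np.where(sfr < threshold)[0]                 # first frequency at which the SFR drops below threshold
    return freqs[below[0]] if below.size else freqs[-1]
```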

4. Phase retrieval

While the theoretical and experimental results in the previous section demonstrate the phenomenal resolving potential of IIC, the image fidelity is also determined by the reconstruction procedure used to invert the autocorrelation. It is worth remembering that the speckle intensity measurements in IIC provide access only to the power spectrum or the autocorrelation of the object albedo. The process of recovering the image of the hidden object from the power spectrum or the autocorrelation is ill-posed and requires solving a constrained optimization problem. Traditional phase retrieval approaches, such as the hybrid input-output algorithm [37], start from the Fourier magnitude and solve for the Fourier phase through an iterative optimization procedure. In this optimization scheme, the solution space is constrained by imposing physical constraints, such as non-negative albedo and finite object support. Advances in deep neural networks (DNNs) have also been utilized to solve the inverse problem and recover the image of the hidden object. In [21], a DNN trained on synthetic data generated using the appropriate noise model was used to predict the image from its autocorrelation. In this work, a self-supervised DNN framework [49] is utilized to recover the image of the hidden object from an experimentally observed autocorrelation, thereby dispensing with the training process.

Self-supervised DNNs that exploit the knowledge of a physical model relating the latent and observed measurements are used to solve a variety of inverse problems, such as phase imaging [50,51], phase unwrapping [52], MRI reconstruction [53], 3D imaging [54], and phase retrieval from Fourier measurements [55]. The basic framework, depicted in Fig. 9, comprises a DNN, a task-specific physical model, and a loss function. The DNN, defined by the network weights $\theta $, maps a random tensor z to an output image or vector $l = \; {f_\theta }(z )$. The physical model $\mathrm{{\cal P}}$ maps the output of the network to the space of observed measurements $\hat{m}$. The loss function $\mathrm{{\cal L}}$ is a measure of the dissimilarity between the observed measurements m and the predicted measurement $\hat{m}$. The gradient of the loss function $\partial \mathrm{{\cal L}}/\partial \theta $ is used to update the weights of the network. The network weights are updated over several iterations until the predicted measurement matches the observed measurements, resulting in an image of the hidden object.

Fig. 9. Top panel: Self-supervised DNN framework used to recover images of the hidden object from its autocorrelation. Bottom panel: Experimentally obtained autocorrelations and their corresponding reconstructions. The autocorrelations are contrast adjusted for display purposes. (See Supplement 1 for network details)

The self-supervised network used for the task of recovering the image of the hidden object in IIC utilizes a convolutional neural network based on the U-Net autoencoder architecture that is widely used for image processing tasks [56]. A detailed description of the network architecture is provided in Sec. 1 of Supplement 1. The physical model computes the autocorrelation of the latent representation l. The loss function measures the $\textrm{SSIM}$ [57,58] between the autocorrelation of the predicted latent image, $\hat{m} = \; l \star l$, and the measured autocorrelation m. An analysis of the performance of the network with different loss functions is provided in Sec. 5 of Supplement 1. The input random tensor z is passed through a multilevel decomposition based on max pooling, transposed convolutions, and skip connections. The proposed network has the same five-level setup as the original U-Net, wherein the number of channels is doubled after each pooling operation, from 16 up to 256. Zero-padding is applied to keep the output size consistent with the input size of the convolutions. The proposed network employs instance normalization followed by a Leaky ReLU activation function. The weights are optimized using an Adam optimizer with a learning rate of 0.001, with the network typically converging in 1500 iterations. The results of the image reconstructions using the self-supervised framework are furnished in the bottom panel of Fig. 9. The results furnished in Fig. 9 demonstrate the phenomenal capability of the self-supervised framework in reconstructing complex targets from different languages without training on data from these languages. More results and comparisons of the reconstructions between the supervised and self-supervised networks are furnished in Supplement 1.
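The optimization loop described above can be summarized in a few lines. The sketch below is written with PyTorch purely as an illustration of the framework; `UNet` stands in for the five-level network described in Supplement 1 and is not defined here, and the SSIM loss is replaced by a simple L1 loss to keep the sketch self-contained. It is not the authors' implementation.

```python
# Sketch of the self-supervised reconstruction loop of Fig. 9 (illustrative only).
import torch

def autocorrelation(img):
    """Physical model P: autocorrelation of the latent image, computed via the Fourier domain."""
    power_spectrum = torch.abs(torch.fft.fft2(img)) ** 2
    return torch.fft.fftshift(torch.abs(torch.fft.ifft2(power_spectrum)), dim=(-2, -1))

def reconstruct(measured_autocorr, net, iterations=1500, lr=1e-3):
    z = torch.randn(1, 1, *measured_autocorr.shape[-2:])   # fixed random input tensor z
    optimizer = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(iterations):
        optimizer.zero_grad()
        latent = net(z)                                     # l = f_theta(z)
        predicted = autocorrelation(latent)                 # m_hat = l ⋆ l
        loss = torch.nn.functional.l1_loss(predicted, measured_autocorr)
        loss.backward()                                     # dL/dtheta drives the weight update
        optimizer.step()
    return net(z).detach()                                  # reconstructed image of the hidden object

# Usage (hypothetical): reconstruction = reconstruct(measured_autocorr, UNet())
```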

5. Conclusion

A pupil plane formalism for NLOS imaging, with the VD and imaging sensor acting as the pupil planes of a computational thick lens, is introduced in this work. Using this formalism, an imaging operator for IIC is derived and validated through a series of experiments. The theoretical and experimental results presented in this submission demonstrate the diffraction-limited performance of IIC and its ability to resolve ${\approx} 250\; \mu m$ features at $1\; m$ standoff in an NLOS imaging scene. The proposed pupil plane formalism casts the elements of an NLOS scene as the pupil planes of a computational thick lens. As this formalism is not specific to IIC, it can be used to derive imaging operators and predict the performance of other NLOS imaging modalities, such as transient imaging and synthetic wavelength holography. Additionally, a reconstruction technique for IIC using a self-supervised DNN framework is introduced. The framework pairs a DNN with an IIC forward model to iteratively solve for the image of hidden objects from experimentally obtained autocorrelations.

Funding

Defense Advanced Research Projects Agency (HR0011-16-C-0028).

Acknowledgments

The authors thank Prof. Christopher Metzler for making available the codebase of [21].

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are available at [59].

Supplemental document

See Supplement 1 for supporting content.

References

1. D. Faccio, A. Velten, and G. Wetzstein, “Non-line-of-sight imaging,” Nat. Rev. Phys. 2(6), 318–327 (2020). [CrossRef]  

2. A. Kirmani, T. Hutchison, J. Davis, and R. Raskar, “Looking around the corner using transient imaging,” in 2009 IEEE 12th International Conference on Computer Vision, Sep. 2009, pp. 159–166

3. A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. G. Bawendi, and R. Raskar, “Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging,” Nat. Commun. 3(1), 745 (2012). [CrossRef]  

4. M. O’Toole, D. B. Lindell, and G. Wetzstein, “Confocal non-line-of-sight imaging based on the light-cone transform,” Nature. 555(7696), 338–341 (2018). [CrossRef]  

5. D. B. Lindell, G. Wetzstein, and M. O’Toole, “Wave-based non-line-of-sight imaging using fast f-k migration,” ACM Trans. Graph. 38(4), 1–13 (2019). [CrossRef]  

6. X. Liu, G. Ibón, M. La Manna, J. H. Nam, S. A. Reza, T. H. Le, A. Jarabo, D. Gutierrez, and A. Velten, “Non-line-of-sight imaging using phasor-field virtual wave optics,” Nature 572(7771), 620–623 (2019). [CrossRef]  

7. D. Faccio and A. Velten, “A trillion frames per second: the techniques and applications of light-in-flight photography,” Rep. Prog. Phys. 81(10), 105901 (2018). [CrossRef]  

8. C. A. Metzler, D. B. Lindell, and G. Wetzstein, “Keyhole Imaging: Non-Line-of-Sight Imaging and Tracking of Moving Objects Along a Single Optical Path,” IEEE Trans. Comput. Imaging 7, 1–12 (2021). [CrossRef]  

9. B. Wang, M. Y. Zheng, J. J. Han, X. Huang, X. P. Xie, F. Xu, Q. Zhang, and J. W. Pan, “Non-line-of-sight imaging with picosecond temporal resolution.,” Phys. Rev. Lett. 127(5), 053602 (2021). [CrossRef]  

10. J. Klein, C. Peters, J. Martín, M. Laurenzis, and M. B. Hullin, “Tracking objects outside the line of sight using 2D intensity images,” Sci. Rep. 6(1), 32491 (2016). [CrossRef]  

11. W. Chen, S. Daneau, F. Mannan, and F. Heide, “Steady-state non-line-of-sight imaging,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 6790–6799 (2019).

12. Y. Cao, R. Liang, J. Yang, Y. Cao, Z. He, J. Chen, and X. Li, “Computational framework for steady-state NLOS localization under changing ambient illumination conditions,” Opt. Express 30(2), 2438–2452 (2022). [CrossRef]  

13. F. Heide, L. Xiao, W. Heidrich, and M. B. Hullin, “Diffuse mirrors: 3D reconstruction from diffuse indirect illumination using inexpensive time-of-flight sensors,” In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3222–3229 (2014).

14. A. Kadambi, R. Whyte, A. Bhandari, L. Streeter, C. Barsi, A. Dorrington, and R. Raskar, “Coded time of flight cameras: sparse deconvolution to address multipath interference and recover time profiles,” ACM Trans. Graph. 32(6), 1–10 (2013). [CrossRef]  

15. A. K. Singh, D. N. Naik, G. Pedrini, M. Takeda, and W. Osten, “Looking through a diffuser and around an opaque surface: A holographic approach,” Opt. Express 22(7), 7694–7701 (2014). [CrossRef]  

16. P. Rangarajan and M. P. Christensen, “Imaging hidden objects by transforming scattering surfaces into computational holographic sensors,” in Imaging and Applied Optics 2016 (2016), paper CTh4B.4, Jul. 2016, p. CTh4B.4.

17. P. Rangarajan, F. Willomitzer, O. Cossairt, and M. P. Christensen, “Spatially resolved indirect imaging of objects beyond the line of sight,” in Unconventional and Indirect Imaging, Image Reconstruction, and Wavefront Sensing 2019. 11135, 124–131 (2019).

18. F. Willomitzer, P. V. Rangarajan, F. Li, M. M. Balaji, M. P. Christensen, and O. Cossairt, “Fast non-line-of-sight imaging with high-resolution and wide field of view using synthetic wavelength holography,” Nat. Commun. 12(1), 6647 (2021). [CrossRef]  

19. O. Katz, P. Heidmann, M. Fink, and S. Gigan, “Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations,” Nat. Photonics 8(10), 784–790 (2014). [CrossRef]  

20. A. Viswanath, P. Rangarajan, D. MacFarlane, and M. P. Christensen, “Indirect Imaging Using Correlography,” in Imaging and Applied Optics 2018 (3D, AO, AIO, COSI, DH, IS, LACSEA, LS&C, MATH, pcAOP) (2018), paper CM2E.3, Jun. 2018, p. CM2E.3.

21. C. A. Metzler, F. Heide, P. Rangarajan, M. M. Balaji, A. Viswanath, A. Veeraraghavan, and R. G. Baraniuk, “Deep-inverse correlography: towards real-time high-resolution non-line-of-sight imaging,” Optica 7(1), 63–71 (2020). [CrossRef]  

22. O. Salhov, G. Weinberg, and O. Katz, “Depth-resolved speckle-correlations imaging through scattering layers via coherence gating,” Opt. Lett. 43(22), 5528–5531 (2018). [CrossRef]  

23. J. A. Newman, Q. Luo, and K. J. Webb, “Imaging Hidden Objects with Spatial Speckle Intensity Correlations over Object Position,” Phys. Rev. Lett. 116(7), 073902 (2016). [CrossRef]  

24. K. L. Bouman, V. Ye, A. B. Yedidia, F. Durand, G. W. Wornell, A. Torralba, and W. T. Freeman, “Turning corners into cameras: Principles and methods,” In Proceedings of the IEEE International Conference on Computer Vision, 2270–2278 (2017).

25. C. Saunders, J. Murray-Bruce, and V. K. Goyal, “Computational periscopy with an ordinary digital camera,” Nature. 565(7740), 472–475 (2019). [CrossRef]  

26. M. Kaga, T. Kushida, T. Takatani, K. Tanaka, T. Funatomi, and Y. Mukaigawa, “Thermal non-line-of-sight imaging from specular and diffuse reflections,” IPSJ Trans. Comput. Vis. Appl. 11(1), 8 (2019). [CrossRef]  

27. K. Tanaka, Y. Mukaigawa, and A. Kadambi, “Polarized Non-Line-of-Sight Imaging,” in 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Jun. 2020, pp. 2133–2142.

28. R. Erf, Speckle Metrology. Elsevier, 2012.

29. T. Durduran and A. G. Yodh, “Diffuse correlation spectroscopy for non-invasive, micro-vascular cerebral blood flow measurement,” NeuroImage 85, 51–63 (2014). [CrossRef]  

30. N. K. Metzger, R. Spesyvtsev, G. D. Bruce, B. Miller, G. T. Maker, G. Malcolm, M. Mazilu, and K. Dholakia, “Harnessing speckle for a sub-femtometre resolved broadband wavemeter and laser stabilization,” Nat. Commun. 8(1), 15610 (2017). [CrossRef]  

31. Y. Jauregui-Sánchez, H. Penketh, and J. Bertolotti, “Tracking moving objects through scattering media via speckle correlations,” Nat. Commun. 13(1), 5779 (2022). [CrossRef]  

32. N. Stasio, C. Moser, and D. Psaltis, “Calibration-free imaging through a multicore fiber using speckle scanning microscopy,” Opt. Lett. 41(13), 3078–3081 (2016). [CrossRef]  

33. A. Porat, E. R. Andresen, H. Rigneault, D. Oron, S. Gigan, and O. Katz, “Widefield lensless imaging through a fiber bundle via speckle correlations,” Opt. Express 24(15), 16835–16855 (2016). [CrossRef]  

34. E. Edrei and G. Scarcelli, “Optical imaging through dynamic turbid media using the Fourier-domain shower-curtain effect,” Optica 3(1), 71–74 (2016). [CrossRef]  

35. I. Freund, M. Rosenbluh, and S. Feng, “Memory Effects in Propagation of Optical Waves through Disordered Media,” Phys. Rev. Lett. 61(20), 2328–2331 (1988). [CrossRef]  

36. M. Guizar-Sicairos and J. R. Fienup, “Direct image reconstruction from a Fourier intensity pattern using HERALDO,” Opt. Lett. 33(22), 2668–2670 (2008). [CrossRef]  

37. J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21(15), 2758–2769 (1982). [CrossRef]  

38. S. Ganci, “Fourier diffraction through a tilted slit,” Eur. J. Phys. 2(3), 158–160 (1981). [CrossRef]  

39. N. Delen and B. Hooker, “Free-space beam propagation between arbitrarily oriented planes based on full diffraction theory: a fast Fourier transform approach,” J. Opt. Soc. Am. A 15(4), 857–867 (1998). [CrossRef]  

40. X. Liu, S. Bauer, and A. Velten, “Analysis of Feature Visibility in Non-Line-Of-Sight Measurements,” in 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Jun. 2019, pp. 10132–10140.

41. M. Elbaum, M. King, and M. Greenebaum, “Laser correlography: transmission of high-resolution object signatures through the turbulent atmosphere,” Riverside Research Institute, New York (1974).

42. P. S. Idell, J. R. Fienup, and R. S. Goodman, “Image synthesis from nonimaged laser-speckle patterns,” Opt. Lett. 12(11), 858–860 (1987). [CrossRef]  

43. D. G. Voelz, J. D. Gonglewski, and P. S. Idell, “Image synthesis from nonimaged laser-speckle patterns: comparison of theory, computer simulation, and laboratory results,” Appl. Opt. 30(23), 3333–3344 (1991). [CrossRef]  

44. J. W. Goodman, Introduction to Fourier Optics, 3rd ed. (Roberts & Company Publishers, 2005).

45. L. I. Goldfischer, “Autocorrelation Function and Power Spectral Density of Laser-Produced Speckle Patterns,” J. Opt. Soc. Am. 55(3), 247–253 (1965). [CrossRef]  

46. I. Freund, “Looking through walls and around corners,” Phys. A (Amsterdam, Neth.) 168(1), 49–65 (1990). [CrossRef]  

47. I. Sinharoy, P. Rangarajan, and M. P. Christensen, “Geometric model for an independently tilted lens and sensor with application for omnifocus imaging,” Appl. Opt. 56(9), D37–D46 (2017). [CrossRef]  

48. M. Sensiper, G. D. Boreman, A. D. Ducharme, and D. R. Snyder, “Use of narrowband laser speckle for MTF characterization of CCDs,” in Infrared Focal Plane Array Producibility and Related Materials, Aug. 1992, vol. 1683, pp. 144–151.

49. J. Liu, M. M. Balaji, C. A. Metzler, M. S. Asif, and P. Rangarajan, “Solving Inverse Problems using Self-Supervised Deep Neural Nets,” in OSA Imaging and Applied Optics Congress 2021 (3D, COSI, DH, ISA, pcAOP) (2021), paper CTh5A.2, Jul. 2021, p. CTh5A.2.

50. F. Wang, Y. Bian, H. Wang, M. Lyu, G. Pedrini, W. Osten, G. Barbastathis, and G. Situ, “Phase imaging with an untrained neural network,” Light: Sci. Appl. 9(1), 77 (2020). [CrossRef]  

51. E. Bostan, R. Heckel, M. Chen, M. Kellman, and L. Waller, “Deep phase decoder: self-calibrating phase microscopy with an untrained deep neural network,” Optica 7(6), 559–562 (2020). [CrossRef]  

52. F. Yang, T.-A. Pham, N. Brandenberg, M. P. Lütolf, J. Ma, and M. Unser, “Robust Phase Unwrapping via Deep Image Prior for Quantitative Phase Imaging,” IEEE Trans. on Image Process. 30, 7025–7037 (2021). [CrossRef]  

53. M. Z. Darestani and R. Heckel, “Accelerated MRI With Un-Trained Neural Networks,” IEEE Trans. Comput. Imaging 7, 724–733 (2021). [CrossRef]  

54. K. C. Zhou, C. Cooke, J. Park, R. Qian, R. Horstmeyer, J. A. Izatt, and S. Farsiu, “Mesoscopic photogrammetry with an unstabilized phone camera.,” In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7535 (2021).

55. M. Chen, P. Lin, Y. Quan, T. Pang, and H. Ji, “Unsupervised Phase Retrieval Using Deep Approximate MMSE Estimation,” IEEE Trans. Signal Process. 70, 2239–2252 (2022). [CrossRef]  

56. D. Ulyanov, A. Vedaldi, and V. Lempitsky, “Deep Image Prior,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9446–9454 (2018).

57. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004). [CrossRef]  

58. M. Nordic, “Structural Similarity Index (SSIM)-MATLAB ssim.”.

59. M.M. Balaji, “Data Files,” Southern Methodist University (2023), https://smu.box.com/s/3oyppmcca3e50rj1oark5gbcnmzh0yq5.
