Cohesive framework for non-line-of-sight imaging based on Dirac notation

Open Access

Abstract

The non-line-of-sight (NLOS) imaging field encompasses both experimental and computational frameworks that focus on imaging elements that are out of the direct line-of-sight, for example, imaging elements that are around a corner. Current NLOS imaging methods offer a compromise between accuracy and reconstruction time as experimental setups have become more reliable, faster, and more accurate. However, all these imaging methods implement different assumptions and light transport models that are only valid under particular circumstances. This paper lays down the foundation for a cohesive theoretical framework which provides insights about the limitations and virtues of existing approaches in a rigorous mathematical manner. In particular, we adopt Dirac notation and concepts borrowed from quantum mechanics to define a set of simple equations that enable: i) the derivation of other NLOS imaging methods from such single equation (we provide examples of the three most used frameworks in NLOS imaging: back-propagation, phasor fields, and f-k migration); ii) the demonstration that the Rayleigh-Sommerfeld diffraction operator is the propagation operator for wave-based imaging methods; and iii) the demonstration that back-propagation and wave-based imaging formulations are equivalent since, as we show, propagation operators are unitary. We expect that our proposed framework will deepen our understanding of the NLOS field and expand its utility in practical cases by providing a cohesive intuition on how to image complex NLOS scenes independently of the underlying reconstruction method.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Non-line-of-sight (NLOS) imaging aims at visualizing elements that are out of direct line-of-sight in a scene. Many of the imaging methods are based on combining pulsed laser sources and time-resolved sensors [1–6]. Time-resolved sensors, such as Single-Photon Avalanche Detectors (SPADs), measure the temporal profile of light transport [7–11]. Broadly speaking, such techniques reconstruct geometry that is hidden from the direct view by analyzing the temporal footprint of light transport of such geometry after a bounce on a diffuse relay wall [12–15], as shown in Fig. 1. The NLOS imaging landscape has seen rapid progression in recent years, as experimental setups have become more reliable, faster, and more accurate, which has generated an effervescence in reconstruction methods with varying accuracy and reconstruction time. We refer the reader to any of the existing surveys on the topic [3–5,16,17]. However, this body of reconstruction methods has been formulated independently, with ad-hoc assumptions and usually simplified light transport models adapted to particular cases. Two of the major formulations are back-propagation and wave-based propagation [18,19]. Back-propagation formulations model the transport of light within the hidden scene and aim at recovering the geometry by inverting the transport model [1,9,20–22]. Wave-based propagation formulations aim at modeling the time-of-flight of light within the phase of a complex function, applying well-known Fourier optics to image the hidden scene [18,19,23–25].

Fig. 1. A classic NLOS scenario looking around a corner. The displayed room has a relay wall that is illuminated by a laser source. The light diffused by the relay wall illuminates the hidden T-shaped geometry. The indirect illumination reflected from the hidden geometry is reflected by the relay wall back to the time-resolved sensor. Figure adapted from [26].


A key part of this paper’s contribution is the formulation of a cohesive theoretical framework that not only encloses several reconstruction methods but can also explain their limitations and virtues in a cohesive way. One of the key insights that has inspired our work is the realization that the magnitudes propagated in both major formulations (time-resolved real numbers in back-propagation and phasors in wave-based propagation) are, indeed, elements in a Hilbert space $\mathcal {H}$ connected through a linear operator.

We build on top of the idea that reconstruction methods are unable to accurately predict the hidden geometry due to insufficient information and the inherent ambiguities of light transport within the hidden scene. Therefore, the output of such methods can be interpreted as a probability distribution, for each point in the volume, of being part of the hidden geometry. This leads to another key insight inspiring our work: thinking about the measurement of a hidden scene as the measurement of an operator in quantum mechanics. This in turn links with quantum mechanics theory and the uncertainty principle, in which it is impossible to predict particular outcomes of events, but it is possible to estimate probabilities of such events.

A fundamental piece in the formulation of quantum mechanics is the use of Dirac notation to describe quantum systems [27,28]. As we show, the use of Dirac notation in an NLOS context provides a cohesive theoretical framework that will explain why the methods proposed so far work the way they do, as well as to connect the reconstruction methods with the fundamental physics of light propagation.

In this work, we will demonstrate the following theoretical contributions: i) the derivation of back-propagation, phasor fields, and f-k migration formulations for NLOS imaging from a single equation (Section 4, Section 5, and Section 6); ii) the demonstration that the Rayleigh-Sommerfeld diffraction operator is the propagation operator for wave-based imaging methods (Section 5); and iii) the demonstration that back-propagation and wave-based imaging methods are equivalent point-to-point (Section 7), which is supported by a prior demonstration that propagation operators are unitary (Annex). The paper also brings other contributions, such as modeling general temporal profiles for light source emission (most methods assume delta pulses), as well as novel insights on the general properties of propagation operators, including the Rayleigh-Sommerfeld diffraction operator commonly used in phasor fields. Other NLOS imaging formulations [29–35] may also be derived from our proposed framework, as we discuss at the end of this manuscript.

Based on this work, we expect to enable future research including, but not limited to: i) bringing the mathematical toolset of quantum mechanics to NLOS imaging, which can result in more optimal and higher performing computational methods; ii) implementing other imaging and reconstruction methods that may model the interaction of light with the hidden scene more accurately by finding an appropriate Hilbert space representation and specific propagation operator; iii) defining optimal scene sampling and reconstruction strategies based on the expected properties of the operators, for example, their rank, dimensionality, and symmetry; and iv) using the generality of our proposed framework to devise algorithms that will work with all NLOS imaging methods.

2. Background

Non-line-of-sight (NLOS) imaging aims at visualizing elements that are out of direct line-of-sight. Broadly speaking, images are obtained by analyzing the temporal footprint of light transport in a hidden scene after bouncing on a diffuse relay wall. NLOS imaging techniques also leverage time-of-flight (ToF) data to estimate the range and location of elements. ToF data may be captured in other parts of the electromagnetic spectrum outside the visible part.

To image a hidden scene, a set of impulse response signals of the scene is first captured. This is achieved by illuminating points on the relay wall with short laser pulses, which play the role of the delta excitation of the system. The points on the relay wall illuminated by the laser reflect, in a diffuse manner, some of the light to the hidden scene. This is considered as the first light bounce. The second light bounce is the diffuse reflection of the hidden scene back to the relay wall. The light coming back from the hidden scene bounces a third time on the relay wall, being captured at different points by time-resolved sensors synchronized with the emission of the laser pulses. Two of the major formulations to realize NLOS imaging are back-propagation and wave-based propagation.

The back-propagation formulation creates a voxelization of the NLOS scene and estimates the presence of geometry on a particular voxel based on the cumulative contributions of light intensities captured by the time-resolved sensor at specific timestamps [1,20,21]. The timestamps are computed based on the estimated time-of-flight of the light according to the relative position of the voxel with respect to the illumination and sensor points on the relay wall. Back-propagation uses the computed timestamps to look up the intensity of light captured at the corresponding time index on the time-resolved signal. A more mathematical background on back-propagation is provided at the beginning of Section 4.

Wave-based propagation formulations aim at modeling the time-of-flight of light within the phase of a complex function so that virtual light sources and cameras can be modeled and well-known Fourier wave optics methods can be used to image the hidden scene. Two of the most popular wave-based formulations are phasor fields [18], and f-k migration [19].

In the rest of this section, our goal is to familiarize the reader with the terminology and some fundamental concepts of the Dirac notation and quantum mechanics. More mathematical backgrounds on phasor fields and f-k migration are provided at the beginning of Section 5 and Section 6 respectively. Table 1 describes the symbols in this paper.

Table 1. Table describing the symbols used throughout this paper. $^{(1)}$ We will use $\mathbf {x}_{n}$ and $n$ indistinctly to refer to an illumination point in $\mathcal {L}$. $^{(2)}$ We will use $\mathbf {x}_{m}$ and $m$ indistinctly to refer to a sensing point in $\mathcal {S}$.

2.1 Dirac notation and quantum mechanics

Quantum mechanics is a fundamental theory in physics developed in the first half of the 20th century that aims at describing the physical properties of matter at the scale of atoms and subatomic particles. Although it is controversial and often conflicts with the common-sense intuition derived from our everyday experience, quantum mechanics has been successful in providing correct predictions and results in every situation to which it has been applied.

A key component in quantum mechanics is the use of Dirac notation to represent the different states of a quantum system and its dynamic evolution. Dirac created this notation to simplify the formulation of quantum mechanics in mathematical terms [27,28], and it quickly became the de facto standard notation in the field. The scope of this section is to provide enough background in terms of terminology and intuition about the key mathematical elements of Dirac notation. In particular, we aim at describing the state of a pulse of light and how this pulse propagates and interacts with a hidden scene. We refer the interested reader to other sources for more in-depth explanations [36–40].

2.1.1 Kets and bras

Dirac notation is also known as bra-ket notation. A ket represents the state of a quantum system and it is expressed as $|u\rangle$ (any letter other than $u$ may be used to identify the ket). This ket is a vector, or element, in a Hilbert space $\mathcal {H}$. In general, the term Hilbert space refers to a (possibly infinite-dimensional) space equipped with an inner product and complete with respect to the norm that this inner product induces. This definition also includes finite-dimensional spaces, which automatically satisfy the condition of completeness. Without loss of generality, we can think of a ket as a column vector with a dimension equal to $K$ in which the components are complex:

$$|u\rangle = \left( u_{1}, \ldots, u_{K} \right)^{T}$$

A bra is the adjoint of a ket. Therefore, the bra is the transposed and conjugated version of the ket. Without loss of generality, the bra is expressed as a row vector with dimension $K$ in which the components are the conjugated components of the ket:

$$|u\rangle^{{\dagger}} = \left( u_{1}^{*}, \ldots, u_{K}^{*} \right) \equiv \langle u|$$

Notice that the transposed conjugated $|\cdot \rangle ^{\dagger }$ is denoted as $\langle \cdot |$ in Dirac notation. Note that the dimension $K$ of these vectors can be infinite, provided, as stated before, that the space remains complete.
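For readers who prefer a concrete finite-dimensional picture, the following minimal NumPy sketch shows a ket as a complex column vector and its bra as the conjugate transpose (the vector values and the dimension $K=3$ are arbitrary examples, not taken from the paper):

```python
import numpy as np

# A ket |u> as a complex column vector of dimension K = 3 (arbitrary example values).
u = np.array([[1.0 + 2.0j],
              [0.5 - 1.0j],
              [3.0 + 0.0j]])

# The bra <u| is the adjoint (conjugate transpose) of the ket.
bra_u = u.conj().T

print(u.shape, bra_u.shape)  # (3, 1) and (1, 3)
```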

2.1.2 Inner product

The inner product defines the contraction of a bra with a ket, which is expressed as:

$$\langle v||u\rangle = |v\rangle^{{\dagger}}|u\rangle = \left( v_{1}^{*}, \ldots, v_{K}^{*} \right) \left( u_{1}, \ldots, u_{K} \right)^{T} = v_{1}^{*}u_{1} + \cdots + v_{K}^{*}u_{K} = \sum_{k=1}^{K} v_{k}^{*}u_{k}$$

In practical terms, the inner product is the scalar product of two vectors in which the components are complex. The inner product of a ket with its bra is the square of the norm of the ket:

$$\langle u||u\rangle = \left( u_{1}^{*}u_{1} + \cdots + u_{K}^{*}u_{K} \right) = \left\|{u}\right\|^{2}$$

Note that the inner product is expressed as a bracket of $\langle v |$ and $|u\rangle$ and that both $\langle v |$ and $|u\rangle$ have the same dimension. Hence the name bra-ket notation for this formalism [27]. The generalization of Eq. (3) to an infinite dimensional space is the integral:

$$\langle v||u\rangle = \int_{-\infty}^{\infty} dx \; v^{*}(x) \, u(x)$$

Kets play a prominent role in representing the states of quantum systems. Kets are the states upon which operators act; for instance, $A |u\rangle$ means that operator $A$ is acting on $|u\rangle$, because in standard algebra operators are placed on the left side of the element they act on. Bras are considered the dual representation of the Kets and, thus, adjoint operators act on the element placed on their left side. For instance, $\langle u | A^{\dagger }$ means that the adjoint operator of $A$ is acting on $\langle u |$. Considering matrices and vectors as particular cases, we can consider $A |u\rangle$ as the product of matrix $A$ with column vector $\mathbf {u}$, whereas we can consider $\langle u | A^{\dagger }$ as the product of row vector $\left [ \mathbf {u}^{*} \right ]^{T}$ with matrix $A^{\dagger } = \left [ A^{*} \right ]^{T}$.
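As a minimal numerical sketch of Eqs. (3)–(4) and of operators acting on kets and bras, the snippet below uses random placeholder vectors and an arbitrary complex matrix as the operator $A$ (none of these values come from the paper):

```python
import numpy as np

K = 4
rng = np.random.default_rng(0)
u = rng.normal(size=K) + 1j * rng.normal(size=K)   # ket |u> as a 1-D complex array
v = rng.normal(size=K) + 1j * rng.normal(size=K)   # ket |v>

# Inner product <v|u> = sum_k v_k^* u_k  (Eq. 3); np.vdot conjugates its first argument.
inner = np.vdot(v, u)

# <u|u> is the squared norm of the ket (Eq. 4).
norm_sq = np.vdot(u, u).real
assert np.isclose(norm_sq, np.linalg.norm(u) ** 2)

# An operator A acting on a ket, A|u>, is a matrix-vector product; the adjoint
# acting on the bra, <u|A^dagger, is the conjugate-transposed product.
A = rng.normal(size=(K, K)) + 1j * rng.normal(size=(K, K))
Au = A @ u
uA_dag = u.conj() @ A.conj().T
assert np.allclose(uA_dag, Au.conj())   # <u|A^dagger = (A|u>)^dagger
```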

2.1.3 Outer product

The outer product of a ket with a bra is defined as:

$$|u\rangle\langle v| = \left[ \begin{array}{ccc} u_{1}v_{1}^{*} & \ldots & u_{1}v_{L}^{*} \\ \vdots & \ddots & \vdots \\ u_{K}v_{1}^{*} & \ldots & u_{K}v_{L}^{*} \end{array} \right]$$
where $K$ is the dimension of $|u\rangle$ and $L$ is the dimension of $\langle v |$. Notice that the dimensions can be different and, therefore, the matrix defined in Eq. (6) is not necessarily square. The outer product behaves like an operator. The generalization of Eq. (6) to infinite dimensional spaces is the cross-correlation:
$$|u\rangle\langle v| = \left( v \star u \right)(x) = \int_{-\infty}^{\infty} dy \; v^{*}(y-x) \, u(y)$$
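A short NumPy sketch of the finite-dimensional outer product in Eq. (6), with arbitrary dimensions $K$ and $L$, illustrating that $|u\rangle \langle v|$ behaves like an operator (all values are placeholders):

```python
import numpy as np

rng = np.random.default_rng(1)
K, L = 3, 5
u = rng.normal(size=K) + 1j * rng.normal(size=K)   # ket |u>, dimension K
v = rng.normal(size=L) + 1j * rng.normal(size=L)   # ket |v>, dimension L

# Outer product |u><v| is a K x L matrix with entries u_k v_l^*  (Eq. 6).
op = np.outer(u, v.conj())
assert op.shape == (K, L)

# Acting on a ket |w> of dimension L it returns |u> scaled by <v|w>,
# i.e. the outer product behaves like an operator.
w = rng.normal(size=L) + 1j * rng.normal(size=L)
assert np.allclose(op @ w, u * np.vdot(v, w))
```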

3. Fundamental equations for NLOS imaging

In this section we propose two simple equations based on quantum mechanics and Dirac notation that will allow us to derive back-propagation and wave-based imaging methods such as phasor fields and f-k migration.

An illumination point $n$ is where light hits the relay wall, and thus undergoes a first diffuse bounce towards the hidden scene. This diffused light illuminates the hidden scene, which in turn reflects some light back to the relay wall after the second bounce. A sensing point $m$ is where the light reflected by the hidden scene hits the relay wall, which, in turn reflects part of the light back to the sensor. This final reflection to the sensor is considered to be the third bounce. For more details about the light path in a NLOS scene, refer to the beginning of Section 2.

Let us start by defining the kets that represent the state of light at both the illumination and sensing points. Let us define $|l_{n}\rangle$ as the state of light at an arbitrary illumination point $n$ located on the virtual light source surface $\mathcal {L}$, and $|s_{m}\rangle$ as the state of light at an arbitrary sensing point $m$ located in the virtual camera surface $\mathcal {S}$. We can then define an operator $\hat {T}_{m,n}$ as the outer product between the kets representing light measured at the sensing points on $\mathcal {S}$ and the bras representing light at the illumination points on $\mathcal {L}$ as:

$$\hat{T}_{m,n} = |s_{m}\rangle \langle l_{n}|$$
which allows us to define a matrix operator $\hat {T}$ that contains all the illumination-sensing pairs such that:
$$\boxed{ \hat{T} = \left[ \hat{T}_{m,n} \right] = \left[ \, |s_{m}\rangle \langle l_{n}| \, \right]}$$

Equation (9) defines a simple, comprehensive operator to model the interaction of light with a hidden scene in a NLOS context; we call $\hat {T}$ the scene operator. In the following, we will use individual elements of the scene operator $\hat {T}_{m,n}$ instead of its matrix form $\hat {T} = \left [ \hat {T}_{m,n} \right ]$.

Equation (9) can be expressed more explicitly by introducing propagation operators, which actually model the propagation of light from point-to-point, whereas $\hat {T}_{m,n}$ models a convenient mathematical relationship between the state of light at the illumination point and the state of light at the sensing point.

Let’s define a propagation operator $\hat {U}_{m,n}$ that models the interaction of light with the hidden scene by propagating light from an illumination point $n$ on $\mathcal {L}$ to a sensing point $m$ on $\mathcal {S}$. The contributions from all illumination points to an arbitrary sensing point $m$ are given by:

$$|s_{m}\rangle = \sum_{n'=1}^{N} \hat{U}_{m,n'} |l_{n'}\rangle$$
where $N$ is the total number of illumination points and $n'$ is the summation index over all $N$ illumination points.
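The following sketch illustrates Eq. (8) and Eq. (10) with finite-dimensional stand-ins; the propagation operators and illumination kets are random placeholders rather than physically derived quantities:

```python
import numpy as np

rng = np.random.default_rng(2)
K, N = 8, 4                      # ket dimension and number of illumination points

# Illumination kets |l_n'> and propagation operators U_{m,n'} for one sensing point m.
l = [rng.normal(size=K) + 1j * rng.normal(size=K) for _ in range(N)]
U = [rng.normal(size=(K, K)) + 1j * rng.normal(size=(K, K)) for _ in range(N)]

# |s_m> = sum_{n'} U_{m,n'} |l_n'>   (Eq. 10)
s_m = sum(U[n] @ l[n] for n in range(N))

# The element T_{m,n} of the scene operator is the outer product |s_m><l_n|  (Eq. 8).
n = 0
T_mn = np.outer(s_m, l[n].conj())
print(T_mn.shape)   # (K, K)
```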

Plugging Eq. (10) into Eq. (9) yields:

$$\boxed{ \hat{T}_{m,n} = \sum\nolimits_{n'=1}^{N} \hat{U}_{m,n'} |l_{n'}\rangle \langle l_{n}|}$$
which is the fundamental equation encompassing the general formulation for solving any NLOS scene using propagation operators. We propose the construction of $\hat {T}_{m,n}$ as an operator that will allow us to develop and state the claims of the paper. $\hat {T}_{m,n}$ contains information about the measurement of the scene through the information of the captured light contained in $|s_{m}\rangle$ and the set of propagation operators $\hat {U}_{m,n}$. However, $\hat {T}_{m,n}$ is not a measurement operator.

Equation (9) will allow us to demonstrate a series of general properties of the different NLOS reconstruction methods and it is the starting point to i) derive the back-propagation, phasor fields, and f-k migration formulations from the same Eq. (9) in Section 4, Section 5, and Section 6, respectively, thus providing evidence that our proposed theoretical framework represents a cohesive formulation for the NLOS problem; ii) demonstrate that the Rayleigh-Sommerfeld diffraction operator is the propagation operator for wave-based imaging methods, such as phasor fields and f-k migration in Section 5; and iii) demonstrate that back-propagation and wave-based imaging methods are equivalent in Section 7.

Both Eq. (9) and Eq. (11) are general, and can be seen as overarching equations under which several NLOS reconstruction methods can be formally described. To realize this, we need to i) identify the form that kets take in the particular Hilbert space that describes light under a given method; for example, wave-based imaging methods are conveniently described in frequency space (phasors), whereas time-domain methods (back-propagation) are described in a real time-resolved space; and ii) find the adequate propagation operator $\hat {U}_{m,n}$ that acts on the kets in a particular Hilbert space based on the specific restrictions of the reconstruction method. Other NLOS reconstruction methods may be derived from our proposed framework as well, following this methodology.

4. Derivation of the back-propagation formulation

The back-propagation formulation and reconstruction was one of the first methods proposed to reconstruct a hidden scene from an arbitrary number of illumination and sensing points [1]. In practical terms, illumination is realized using laser pulses whereas sensing is realized using time-resolved detectors. In the back-propagation formulation, for every voxel $\mathbf {x}_{v}$ of a hidden scene inside the imaging volume $\mathcal {V}$, the imaged signal corresponding to that voxel $G(\mathbf {x}_{v})$ from a set of time-resolved measurements $H_{m,n}(t)$ on the relay wall is defined as:

$$G(\mathbf{x}_{v}) = \sum_{m} \sum_{n} H_{m,n}(t_{v})$$
where $t_{v}$ corresponds to the total time-of-flight of light from the illumination source to the illumination point $n$ ($d_{1}$), from illumination point $n$ to voxel $\mathbf {x}_{v}$ ($d_{2}$), from voxel $\mathbf {x}_{v}$ to sensing point $m$ ($d_{3}$), and back from sensing point $m$ to the detector ($d_{4}$). Figure 2 illustrates the back-propagation formulation in which the hidden geometry is contained within a mesh of voxels in $\mathcal {V}$, and the different light paths from the illumination source to the sensor. $\mathbf {x}_{n}$ describes the position coordinates of illumination point $n$, while $\mathbf {x}_{m}$ describes the position coordinates of sensing point $m$.
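A minimal sketch of the back-propagation sum in Eq. (12) follows; the histogram layout, bin width, and loop structure are our own illustrative assumptions, not the implementation of Velten et al. [1]:

```python
import numpy as np

C = 3e8  # speed of light in vacuum (m/s)

def backproject(H, ill_pts, sens_pts, x_laser, x_sensor, voxels, bin_width):
    """Naive evaluation of Eq. (12): G(x_v) = sum_m sum_n H_{m,n}(t_v).

    H         : (M, N, T) array of time-resolved measurements H_{m,n}(t)
    ill_pts   : (N, 3) illumination points x_n on the relay wall
    sens_pts  : (M, 3) sensing points x_m on the relay wall
    x_laser   : (3,) laser position; x_sensor : (3,) sensor position
    voxels    : (V, 3) voxel centers x_v of the imaging volume
    bin_width : temporal bin width of the histograms, in seconds
    """
    M, N, T = H.shape
    G = np.zeros(len(voxels))
    d1 = np.linalg.norm(ill_pts - x_laser, axis=1)    # laser -> illumination point (d_1)
    d4 = np.linalg.norm(sens_pts - x_sensor, axis=1)  # sensing point -> sensor (d_4)
    for v, x_v in enumerate(voxels):
        d2 = np.linalg.norm(ill_pts - x_v, axis=1)    # illumination point -> voxel (d_2)
        d3 = np.linalg.norm(sens_pts - x_v, axis=1)   # voxel -> sensing point (d_3)
        for m in range(M):
            for n in range(N):
                t_v = (d1[n] + d2[n] + d3[m] + d4[m]) / C   # total time of flight
                k = int(round(t_v / bin_width))             # corresponding time bin
                if 0 <= k < T:
                    G[v] += H[m, n, k]                      # Eq. (12)
    return G
```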

Fig. 2. Schematic of the back-propagation formulation. A mesh of voxels partitions the imaging space containing the hidden geometry. The signal for each voxel $G(\mathbf {x}_{v})$ is computed as the contributions from all time-resolved signals $H_{m,n} (t)$ corresponding to time $t_{v}$, which corresponds to the total time-of-flight $d_{1}$ of light from the illumination source to illumination point $n$ on the relay wall, time-of-flight $d_{2}$ from illumination point $n$ to a voxel $\mathbf {x}_{v} \in \mathcal {V}$, time-of-flight $d_{3}$ from voxel $\mathbf {x}_{v}$ to sensing point $m$ on the relay wall, and time-of-flight $d_{4}$ back to the sensor from sensing point $m$. $\mathbf {x}_{n}$ and $\mathbf {x}_{m}$ describe the position coordinates of illumination point $n$ and sensing point $m$ respectively.


In quantum mechanics, an observable is an experimental measurement of an operator. We will show that the back-propagation formulation described in Velten et al. [1] can be derived by inverting the observable corresponding to the operator defined in Eq. (11). One of the fundamental postulates of quantum mechanics is that it is not possible to measure all possible outcomes of an operator simultaneously; instead, we can only measure the probability of all the possible outcomes of such operator. The Born rule [40] defines how to measure the observable $O$ corresponding to an operator $\hat {O}$, which defines the observable $O$ as the expected value of operator $\hat {O}$ using suitable basis functions $\{|b_{a}\rangle \}$ representing the system’s state:

$$O = \sum_{a} \langle b_{a}| \hat{O} |b_{a}\rangle$$

The way the Born rule defines the expected value of an operator guarantees that the result is a Real number or function. Although our scene operator is not based on an actual physical quantum mechanics system, it is formally defined as one and is a Hermitian operator; thus, it benefits from the mathematical machinery of quantum mechanics. The reason for using the observable of the operator in Eq. (11) to derive the back-propagation formulation is that the observable is a Real magnitude and back-propagation uses the intensity of the captured light, which is also a Real magnitude.

The extension of Eq. (12) to the continuous domain leads to Equation S2 in the supplemental material of Velten et al. [1], which states that:

$$I(\mathbf{x}_{m}, t) = \int_{\mathbb{R}} dt' \int_{\mathbb{R}^{2}} \frac{d\mathbf{S}_{v}}{\pi \lVert \mathbf{r}_{v} \rVert^{2}} \delta (\lVert \mathbf{r}_{v} \rVert - ct + ct') I(\mathbf{x}_{n}, t')$$
where we replaced $\mathbf {x}_{l}$ in the original formulation by $\mathbf {x}_{n}$ and $\mathbf {x}_{s}$ by $\mathbf {x}_{m}$ and, thus, $\mathbf {r}_{v} = \mathbf {x}_{v} - \mathbf {x}_{m}$, $\int _{\mathbb {R}^{2}} d\mathbf {S}_{v}$ is the surface integral of the hidden volume in $\mathcal {V}$, the term $\frac {1}{\pi \lVert \mathbf {r}_{v} \rVert ^{2}} \delta (\lVert \mathbf {r}_{v} \rVert - ct + ct')$ is the diffuse Lambertian reflection of a point in $\mathcal {V}$ illuminated by a point source, and $c$ is the speed of light in a vacuum. As stated in Velten et al. [1], the dependency of the reflection on the surface normals in $\mathcal {V}$ is neglected.

We can evaluate the observable corresponding to the operator in Eq. (11) for a single illumination-sensor pair using the Born rule as:

$$T_{m,n} = \langle b| \hat{T}_{m,n} |b\rangle = \langle b| \hat{U}_{m,n}|l_{n}\rangle \langle l_{n}| |b\rangle$$
where $|b\rangle$ is a suitable basis, for example, Gaussian packets. We can then establish the following identification:
$$I(\mathbf{x}_{m}, t) \equiv T_{m,n} \quad , \quad I(\mathbf{x}_{n}, t') \equiv \langle l_{n}||b\rangle$$
where $I(\mathbf {x}_{m}, t)$ is the intensity captured at sensor position $m$ as a function of time $t$, and $I(\mathbf {x}_{n}, t')$ is the intensity of illumination position $n$ as function of time $t'$.

The propagation term $\langle b | \hat {U}_{m,n}|l_{n}\rangle$ is:

$$\int_{\mathbb{R}^{2}} \frac{d\mathbf{S}_{v}}{\pi \lVert \mathbf{r}_{v} \lVert^{2}} \delta (\lVert \mathbf{r}_{v} \rVert -ct + ct') \equiv \langle b| \hat{U}_{m,n}|l_{n}\rangle$$

Plugging Eq. (16) and Eq. (17) into Eq. (15) yields:

$$\boxed{ I(\mathbf{x}_{m}, t) = \int_{\mathbb{R}} dt' \int_{\mathbb{R}^{2}} \frac{d\mathbf{S}_{v}}{\pi \lVert \mathbf{r}_{v} \lVert^{2}} \delta (\lVert \mathbf{r}_{v} \rVert - ct + ct') I(\mathbf{x}_{n}, t')}$$
where $\int _{\mathbb {R}} dt'$ implements the outer product defined in Eq. (15). The result is an equation that is exactly Eq. (14), which is the starting equation for the back-propagation formulation described in Velten et al. [1]. Therefore, the identifications expressed by Eq. (16) and Eq. (17), coming originally from Eq. (11), demonstrate that the back-propagation formulation can be formally derived from our proposed framework.

5. Derivation of the phasor fields formulation

In this section we show how the phasor fields formulation can be derived from Eq. (11). Prior to demonstrating the derivation, we will introduce a brief background of such formulation.

5.1 Phasor fields formulation background

The phasor fields formulation is a wave-based computational framework used in NLOS imaging that transforms the relay wall into a computational virtual light emitter and virtual camera. The immediate consequence is that it transforms NLOS scenarios into virtual line-of-sight ones, in which we can use well-known tools from Fourier wave optics to model sophisticated imaging systems.

The way to transform the relay wall into a virtual emitter, or virtual camera, is by modeling the wavefront of a virtual monochromatic light at different points and instants on the relay wall as a phasor. A phasor $\mathcal {P}_{\omega }(\mathbf {x}, t)$ at point $\mathbf {x}$, time $t$, and frequency $\omega$ is defined as:

$$\mathcal{P}_{\omega}(\mathbf{x}, t) = \mathcal{P}_{0}(\mathbf{x}) e^{i \omega t}$$
where $\mathcal {P}_{0}(\mathbf {x})$ is the amplitude of the phasor at point $\mathbf {x}$. The phasor represents the amplitude and phase of the virtual illumination. The superposition of phasors generates the virtual wavefront.

The propagation of a phasor at point $\mathbf {x}_{1}$ lying on a surface $S_{1}$ to a point $\mathbf {x}_{2}$ lying on a surface $S_{2}$ is expressed as:

$$\mathcal{P}_{\omega}(\mathbf{x}_{2}, t) = \gamma \int_{S_{1}} \mathcal{P}_{\omega}(\mathbf{x}_{1}, t) \frac{e^{i k \lVert \mathbf{x}_{2} - \mathbf{x}_{1} \lVert}}{\lVert \mathbf{x}_{2} - \mathbf{x}_{1} \lVert} d\mathbf{x}_{1}$$
where $\gamma \approx 1 / \lVert \left\langle S_{1} \right\rangle - \mathbf {x}_{2} \lVert$ is an attenuation factor due to spherical propagation of a single point emitter, $\left\langle S_{1} \right \rangle$ is the average distance of surface $S_{1}$ with respect to $\mathbf {x}_{2}$ assuming that $\mathbf {x}_{2}$ is located far enough from $S_{1}$, and $k$ is the wavenumber corresponding to the wavelength of the virtual monochromatic light. Equation (20) has the form of a Rayleigh-Sommerfeld diffraction (RSD) propagation operator, which is the operator in Fourier wave optics that propagates a wavefront of light generated by all single point emitters on $S_{1}$ to a single point $\mathbf {x}_{2}$ on $S_{2}$.
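A discretized sketch of the RSD propagation in Eq. (20), replacing the surface integral over $S_{1}$ by a sum over sampled points with area element $dA$; the function name, sampling scheme, and far-field approximation of $\gamma$ are illustrative assumptions:

```python
import numpy as np

def rsd_propagate(P1, pts1, x2, wavelength, dA):
    """Propagate phasor samples P1 defined at points pts1 (shape (Q, 3)) on S1
    to a single point x2, following the RSD form of Eq. (20)."""
    k = 2.0 * np.pi / wavelength                          # wavenumber of the virtual light
    r = np.linalg.norm(pts1 - x2, axis=1)                 # |x2 - x1| for every sample point
    gamma = 1.0 / np.linalg.norm(pts1.mean(axis=0) - x2)  # gamma ~ 1 / ||<S1> - x2||
    return gamma * np.sum(P1 * np.exp(1j * k * r) / r) * dA
```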

To image a hidden scene using phasor fields, we first must capture a set of impulse response signals of the scene. A single impulse response signal of the scene captured by an illumination-sensor pair $\{ m, n \}$ is expressed as $H_{m,n}$, where $n$, $m$ are the illuminated and sensed points on the relay wall. The virtual light source surface $\mathcal {L}$ comprises all points $\mathbf {x}_{n}$ on the relay wall acting as virtual emitters, whereas the virtual camera $\mathcal {S}$ comprises all points $\mathbf {x}_{m}$ on the relay wall acting as virtual sensors. Phasor fields allows us to model the illumination on $\mathcal {L}$ as phasors $\mathcal {P}(\mathbf {x}_{n}, t)$, and the light captured at $\mathcal {S}$ as another phasor $\mathcal {P}(\mathbf {x}_{m}, t)$. Given the linearity and time-invariance of light transport, the phasor at the virtual camera can be expressed as the convolution between the phasor at the virtual emitter with the impulse response signal:

$$\mathcal{P}(\mathbf{x}_{m}, t) = \int_{\mathcal{L}} \mathcal{P}(\mathbf{x}_{n}, t) \ast H_{m,n}(t) \, d\mathbf{x}_{n}$$

Finally, to generate the image of the hidden scene $I(\mathbf {x})$ as seen from the virtual camera, an image formation model $\Phi [\cdot ]$ is applied over $\mathcal {P}(\mathbf {x}_{m}, t)$ such that:

$$I(\mathbf{x}) = \Phi\left[ \mathcal{P}(\mathbf{x}_{m}, t) \right]$$

Phasor fields enables great flexibility in modeling different virtual emitters, virtual sensors, and image formation models based on the particular characteristics of the NLOS geometry and the experimental hardware.
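As a rough illustration of Eqs. (21) and (22), the sketch below convolves an illumination phasor with measured impulse responses and applies a toy image-formation model; the array shapes, the FFT-based (circular) convolution, and the choice of $\Phi [\cdot ]$ (peak envelope amplitude) are our own assumptions, not the authors' implementation:

```python
import numpy as np

def virtual_camera_phasor(P_l, H_mn, dx):
    """Eq. (21): P(x_m, t) = integral over L of  P(x_n, t) * H_{m,n}(t) dx_n.

    P_l  : (N, T) illumination phasor P(x_n, t) sampled at N aperture points
    H_mn : (N, T) impulse responses H_{m,n}(t) for one virtual camera point m
    dx   : area element of the virtual light source aperture L
    """
    # FFT-based (circular) convolution in time for each illumination point,
    # then sum over the aperture; assumes the signals are suitably zero-padded.
    conv = np.fft.ifft(np.fft.fft(P_l, axis=1) * np.fft.fft(H_mn, axis=1), axis=1)
    return dx * conv.sum(axis=0)

def image_formation(P_m):
    """A simple stand-in for Phi[.] in Eq. (22): the peak envelope amplitude."""
    return np.max(np.abs(P_m))
```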

5.2 Derivation

ToF data is captured in the time domain. However, any signal captured in the time domain can be expressed as the superposition of its frequency components. To derive phasor fields from Eq. (9) we thus need to express the signals captured in the time-domain as the inverse Fourier transform of the corresponding signals in the frequency-domain. Moreover, a convenient Hilbert space to describe wave-based propagation methods is the frequency space, or Fourier space. Other Hilbert spaces may be used as well and, indeed, our proposed framework enables that possibility.

The ket representing the captured signal in the time-domain can be expressed as the inverse Fourier transform of the signal in the frequency-domain as:

$$|s_{m}\rangle = \frac{1}{2\pi} \int_{\mathbb{R}}d\omega \; S(\mathbf{x}_{m}, \omega) \; e^{i\omega t}$$
where $S(\mathbf {x}_{m}, \omega )$ is the captured signal expressed in the frequency domain.

The time component of the bra representing the illumination has to be expressed as the conjugate of the inverse Fourier transform as:

$$\langle l_{n}| = \frac{1}{2\pi} \int_{\mathbb{R}}d\omega' \; L^{*} (\mathbf{x}_{n}, \omega') \; e^{{-}i\omega' t}$$

Using these expressions of the ket and bra in Eq. (9) results in:

$$\hat{T}_{m,n} = \int_{\mathbb{R}^{2}} d\mathbf{x}_{n} \; \frac{1}{2\pi} \int_{\mathbb{R}} d\omega \; S(\mathbf{x}_{m} - \mathbf{x}_{n}, \omega) e^{i\omega t} \; \frac{1}{2\pi} \int_{\mathbb{R}} d\omega' \; L^{*}(\mathbf{x}_{n}, \omega') e^{{-}i\omega' t}$$

Now, assuming that the illumination is a delta in time with a Gaussian distribution of energy in space results in:

$$L(\mathbf{x}_{n}, \omega') = L_{n}^{(0)} \, e^{-\left( \frac{\Vert \mathbf{x}_{n} \Vert}{\sigma_{n}} \right)^{2}} e^{{-}i\omega' t_{n}^{(0)}}$$
and plugging Eq. (26) into Eq. (25) yields:
$$\boxed{ \hat{T}_{m,n} = \int_{\mathbb{R}^{2}} d\mathbf{x}_{n} \; \frac{1}{2\pi} \int_{\mathbb{R}} d\omega' \; L_{n}^{(0)} e^{-\left( \frac{\Vert \mathbf{x}_{n} \Vert}{\sigma_{n}} \right)^{2}} e^{{-}i\omega' (t - t_{n}^{(0)})} \; \frac{1}{2\pi} \int_{\mathbb{R}} d\omega \; S(\mathbf{x}_{m} - \mathbf{x}_{n}, \omega) e^{i\omega t}}$$
where $\int _{\mathbb {R}^{2}} d\mathbf {x}_{n}$ implements the outer product between Eq. (23) and Eq. (24) as required in defining $\hat {T}_{m,n}$ in Eq. (9). For delta-type illumination pulses, $S(\mathbf {x}_{m} - \mathbf {x}_{n}, \omega ) = \tilde {H}_{m,n}(\omega ) = \mathcal {F} [ H_{m,n}(t) ]$, where $\tilde {H}_{m,n}(\omega )$ is the Fourier transform of the impulse response of the scene $H_{m,n}(t)$.
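A small numerical check of the Fourier bookkeeping used in this derivation: for delta-like pulses the captured frequency-domain signal is the Fourier transform of the impulse response, and the inverse transform recovers the time-domain signal (the synthetic impulse response below is an arbitrary example):

```python
import numpy as np

rng = np.random.default_rng(3)
T = 256
H_mn = rng.normal(size=T)                 # synthetic impulse response H_{m,n}(t)

# For a delta illumination pulse, S(x_m - x_n, omega) = F[H_{m,n}(t)]  (Section 5.2).
S = np.fft.fft(H_mn)

# Reassembling the time-domain signal as the inverse Fourier transform (cf. Eq. 23).
H_rec = np.fft.ifft(S)
assert np.allclose(H_rec.real, H_mn)
```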

Note that Eq. (27) is equivalent to the original phasor fields formulation in Eq. (21), as follows:

$$\begin{matrix} \mathcal{P}(\mathbf{x}_{m}, t) \equiv \hat{T}_{m,n} \\ \mathcal{P}(\mathbf{x}_{n}, t) \equiv \frac{1}{2\pi} \int_{\mathbb{R}} d\omega' \; L_{n}^{(0)} \, e^{-\left( \frac{\Vert \mathbf{x}_{n} \Vert}{\sigma_{n}} \right)^{2}} \, e^{{-}i\omega' (t - t_{n}^{(0)})} \\ H_{m,n}(t) \equiv \frac{1}{2\pi} \int_{\mathbb{R}} d\omega \; S(\mathbf{x}_{m} - \mathbf{x}_{n}, \omega) \, e^{i\omega t} \end{matrix}$$
where $\mathcal {P}(\mathbf {x}_{m}, t)$ is the phasor at the virtual camera $\mathcal {S}$, $\mathcal {P}(\mathbf {x}_{n}, t)$ is the phasor at the virtual illumination source surface $\mathcal {L}$, and $H_{m,n}(t)$ is the scene impulse response of the illumination-sensor pair $\{m,n\}$. This shows how our quantum-mechanics framework includes the phasor fields formulation. Figure 3 depicts the magnitudes involved in the phasor fields formulation.

Fig. 3. (a) A laser source illuminates point $n$ on the relay wall with a delta light pulse $\delta _{n}(t)$. (b) An ultra-fast sensor captures time-resolved illumination $H_{m,n}(t)$ on multiple visible points $m$ on the relay wall. By convolving the impulse response with an illumination function, the phasor field formulation transforms the relay wall into a virtual light source with aperture $\mathcal {L}$ (c), and also into a virtual camera with aperture $\mathcal {S}$ (d). These transformations allow modeling a computational lens that focuses the response from all points at each point in the bounding volume $\mathcal {V}$ of the hidden scene, effectively transforming the NLOS geometry into a virtual line-of-sight one. Figure adapted from [26].


5.3 Rayleigh-Sommerfeld diffraction operator is the propagation operator in wave-based imaging methods

The Schrödinger equation describes the temporal evolution of a quantum system, which is expressed as:

$$i\frac{h}{2\pi} \frac{\partial}{\partial t} |\psi(\mathbf{x}, t)\rangle = \hat{H}(\mathbf{x}, t)|\psi(\mathbf{x}, t)\rangle$$
where $h$ is Planck’s constant, $|\psi (\mathbf {x}, t)\rangle$ is the wavefunction describing the state of the quantum system, and $\hat {H}(\mathbf {x}, t)$ is the Hamiltonian operator, a Hermitian operator that describes the physics of the quantum system. Note that the dependencies in position $\mathbf {x}$ and time $t$ are expressed explicitly in the kets and operators.

The solution of the Schrödinger equation at an arbitrary position $\mathbf {x}$ and time $t$ for a free particle, in our case a photon, is described by a wavefunction $|\psi (\mathbf {x}, t)\rangle$ [37,40] such that:

$$|\psi(\mathbf{x}, t)\rangle = \int_{\mathbb{R}^{3}} d\mathbf{k} \frac{e^{i\left[\mathbf{k} (\mathbf{x} - \mathbf{x}_{0}) - \omega (t - t_{0})\right]}}{\lVert \mathbf{x} - \mathbf{x}_{0} \lVert^2} |\psi(\mathbf{x}_{0}, t_{0})\rangle$$
where $\mathbf {k}$ is the wavenumber vector in the direction of propagation of the photon, $\omega$ its frequency, and $|\psi (\mathbf {x}_{0}, t_{0})\rangle$ the wavefunction describing its state at initial position $\mathbf {x}_{0}$ and time $t_{0}$. We can set $t_{0} = 0$ as our temporal reference and, thus, redefine $t \equiv t - t_{0}$.

Defining $|\psi (\mathbf {x}_{n})\rangle \equiv |\psi (\mathbf {x}_{0}, t_{0})\rangle$ as the wavefunction describing the state of light at the virtual light source surface $\mathcal {L}$ at $t_{0} = 0$ and $|\psi (\mathbf {x}_{m}, t)\rangle$ as the wavefunction describing the state of the captured light at the virtual camera $\mathcal {S}$ at a later time $t$, we obtain:

$$|\psi(\mathbf{x}_{m}, t)\rangle = \int_{\mathbb{R}^{3}} d\mathbf{k} \frac{e^{i\mathbf{k}(\mathbf{x}_{m} - \mathbf{x}_{n}) - i\omega t}}{\lVert \mathbf{x}_{m} - \mathbf{x}_{n} \lVert^{2}} |\psi(\mathbf{x}_{n})\rangle$$
and defining the following identification yields:
$$\begin{matrix} \mathcal{P}(\mathbf{x}_{n}) \equiv |\psi(\mathbf{x}_{n})\rangle \\ \mathcal{P}(\mathbf{x}_{m}, t) \equiv |\psi(\mathbf{x}_{m}, t)\rangle \end{matrix}$$
which transforms Eq. (31) into:
$$\mathcal{P}(\mathbf{x}_{m}, t) = \int_{\mathbb{R}^{3}} d\mathbf{k} \frac{e^{i\mathbf{k}(\mathbf{x}_{m} - \mathbf{x}_{n}) - i\omega t}}{\lVert \mathbf{x}_{m} - \mathbf{x}_{n} \rVert^{2}} \mathcal{P}(\mathbf{x}_{n})$$

The contribution from all the point sources lying on the virtual light surface $\mathcal {L}$ results in:

$$\boxed{ \mathcal{P}(\mathbf{x}_{m}, t) = \int_{\mathcal{L}} d\mathbf{x}_{n} \int_{\mathbb{R}^{3}} d\mathbf{k} \frac{e^{i\mathbf{k}(\mathbf{x}_{m} - \mathbf{x}_{n}) - i\omega t}}{\lVert \mathbf{x}_{m} - \mathbf{x}_{n} \lVert^{2}} \mathcal{P}(\mathbf{x}_{n})}$$
which is equivalent to Equations (S8) and (S9) in Liu et al. [18], representing the polychromatic wave as a superposition of monochromatic waves. Unlike the original formulation, which was limited to delta-like illumination functions, our framework allows the modeling of any arbitrary light signals.

6. Derivation of the f-k migration formulation

The f-k migration formulation is applied to confocal measurements in which the measurement of each illumination point happens at the same position of the sensing point, i.e. $\mathbf {x}_{m} = \mathbf {x}_{n} = \mathbf {x}$. Considering a single illumination-sensing pair located at the same position and writing the dependency on position and time explicitly, turns Eq. (11) into:

$$\hat{T}_{m,n} = |s_{m}(\mathbf{x}, t)\rangle \langle l_{n}(\mathbf{x}, t')|$$
where kets $|s_{m}(\mathbf {x}, t)\rangle$ and $|l_{n}(\mathbf {x}, t')\rangle$ contain the explicit dependency on position $\mathbf {x}$ and time, and describe the state of light at the confocal position $\mathbf {x}$ at times $t$ and $t'$ for the sensing and illumination instants, respectively.

The propagation of light from $t'$ to $t$ can be expressed as:

$$|s_{m}(\mathbf{x}, t)\rangle = \hat{U}_{m,n}(\mathbf{x}, t, t') |l_{n}(\mathbf{x}, t')\rangle$$
where $\hat {U}_{m,n}(\mathbf {x}, t, t')$ is the propagation operator containing the explicit dependency on confocal position $\mathbf {x}$ and time propagating the light from the illumination point at time $t'$ to the hidden scene and back to the sensing point at $t$. Plugging Eq. (36) into Eq. (35) yields:
$$\hat{T}_{m,n} = \hat{U}_{m,n}(\mathbf{x}, t, t') |l_{n}(\mathbf{x}, t')\rangle \langle l_{n}(\mathbf{x}, t')|$$

Multiplying both sides of Eq. (37) on the right by $|l_{n}(\mathbf {x}, t)\rangle$ we obtain:

$$\hat{T}_{m,n} |l_{n}(\mathbf{x}, t)\rangle= \hat{U}_{m,n}(\mathbf{x}, t, t') |l_{n}(\mathbf{x}, t')\rangle \langle l_{n}(\mathbf{x}, t')| |l_{n}(\mathbf{x}, t)\rangle$$
and considering that the illumination states are orthogonal in the time axis:
$$\langle l_{n}(\mathbf{x}, t')| |l_{n}(\mathbf{x}, t)\rangle = \delta(t'-t)$$
which leads to $t = t'$ and, thus:
$$\hat{U}_{m,n}(\mathbf{x}, t, t') = \hat{U}_{m,n}(\mathbf{x}, t, t)$$
which allows us to redefine $\hat {U}_{m,n}(\mathbf {x}, t, t')$ as $\hat {U}_{m,n}(\mathbf {x}, t, t') \equiv \hat {U}_{m,n}(\mathbf {x}, t)$ and, therefore:
$$\left( \hat{T}_{m,n} - \hat{U}_{m,n}(\mathbf{x}, t) \right) |l_{n}(\mathbf{x}, t)\rangle = 0$$

The propagation equation that is used as the starting point for the f-k migration formulation is Eq. (1) in Lindell et al. [19], which states that:

$$\left( \nabla^{2}- \frac{1}{c^{2}}\frac{\partial^{2}}{\partial t^{2}} \right) \Psi(\mathbf{x}, t) = 0$$
where $\Psi$ is the complex-valued scalar wave field, the expression in front of $\Psi$ is the wave equation operator, and $c$ is the speed of light.

Now, using the following identification:

$$\Psi(\mathbf{x}, t) \equiv |l_{n}(\mathbf{x}, t)\rangle$$
$$\left( \hat{T}_{m,n} - \hat{U}_{m,n}(\mathbf{x}, t) \right) \equiv \left( \nabla^{2}- \frac{1}{c^{2}}\frac{\partial^{2}}{\partial t^{2}} \right)$$
we can exactly match Eq. (42) with Eq. (41).

The solution for Eq. (42) relies on the boundary condition:

$$\left. \Psi(\mathbf{x}, t) \right\vert_{z=0} = \left. \Psi(\mathbf{x}, t) \right\vert_{t=0}$$

This result shows how the f-k migration formulation can also be derived from our framework within the context of confocal measurements and the boundary conditions described by Lindell et al. [19]. The physical interpretation of Eq. (43) is that $\Psi$ represents the virtual wavefront that complies with the boundary condition set by the f-k model.

7. Back-propagation and wave-based imaging methods are equivalent point-to-point

As we explained in Section 2, back-propagation and wave-based propagation are the two most used formulations to image hidden scenes in NLOS scenarios. While back-propagation voxelizes the imaging space and computes the signal contributions corresponding to the time of flight between the illumination and sensing points and a particular voxel, wave-based propagation models the propagation by defining virtual light sources and virtual cameras, converting the NLOS geometry into a line-of-sight geometry (hence, wave-based propagation formulations are also referred to as forward-propagation formulations).

At first sight, it may look as if back-propagation and wave-based methods are quite different; however, our proposed formulation allows us to demonstrate formally that both are actually equivalent. The demonstration of this equivalence is based on the fact that propagation operators are unitary, which is demonstrated in Annex A. A unitary operator is an operator such that $\hat {U} \hat {U}^{\dagger } = \hat {U} \hat {U}^{-1} = \mathbb {I}$, the identity operator.

Figure 4 depicts a diagram showing the operators and their propagation directions for both back-propagation and forward propagation, which will help visualize the algebra developed in this section. The state of light at point $m$ is represented by $|s_{m}\rangle$, the state of light at point $\mathbf {x}_{v}$ is represented by $|\mathbf {x}_{v}\rangle$, the state of light at point $n$ is represented by $|l_{n}\rangle$, and the state of light at the virtual point $\mathbf {x}_{v}'$ is represented by $|\mathbf {x}_{v}'\rangle$.

Fig. 4. Diagram showing the operators and their propagation directions for both back-propagation and forward propagation to help visualize the algebra developed in Section 7. $|l_{n}\rangle$ represents the illumination at point $n$, $|s_{m}\rangle$ represents the signal at sensing point $m$, $|\mathbf {x}_{v}\rangle$ represents the signal at the voxel $\mathbf {x}_{v}$ in the back-propagation model, and $|\mathbf {x}_{v}'\rangle$ represents the signal at the virtual voxel $\mathbf {x}_{v}'$ in the forward propagation model. The result expressed by Eq. (52) demonstrates that back-propagation and forward propagation are equivalent point-to-point.


The propagation from $n$ to $m$ goes through $\mathbf {x}_{v}$, therefore:

$$|s_{m}\rangle = \hat{U}_{m,\mathbf{x}_{v}} |\mathbf{x}_{v}\rangle$$
where $\hat {U}_{m,\mathbf {x}_{v}}$ propagates the light from voxel $\mathbf {x}_{v}$ to point $m$. The inverse of Eq. (46) is:
$$|\mathbf{x}_{v}\rangle = \hat{U}_{m,\mathbf{x}_{v}}^{{-}1} |s_{m}\rangle$$

In general, we will consider that $\hat {U}_{b,a}$ is the propagation operator of light from a generic point $a$ to a generic point $b$.

Now, the propagation from point $m$ to point $\mathbf {x}_{v}'$ is given by:

$$|\mathbf{x}_{v}'\rangle = \hat{U}_{\mathbf{x}_{v}',m} |s_{m}\rangle$$
and the inverse is:
$$|s_{m}\rangle = \hat{U}_{\mathbf{x}_{v}',m}^{{-}1} |\mathbf{x}_{v}'\rangle$$

Plugging Eq. (46) into Eq. (48) yields:

$$|\mathbf{x}_{v}'\rangle = \hat{U}_{\mathbf{x}_{v}',m} \hat{U}_{m,\mathbf{x}_{v}} |\mathbf{x}_{v}\rangle$$

We can define the propagation operator from $\mathbf {x}_{v}$ to $\mathbf {x}_{v}'$ as:

$$\hat{U}_{\mathbf{x}_{v}',m} \hat{U}_{m,\mathbf{x}_{v}} \equiv \hat{U}_{\mathbf{x}_{v}', \mathbf{x}_{v}}$$

Now, using the property that $\hat {U}_{\mathbf {x}_{v}', \mathbf {x}_{v}}$ must be unitary:

$$\boxed{ \hat{U}_{\mathbf{x}_{v}', \mathbf{x}_{v}} = \hat{U}_{\mathbf{x}_{v}',m} \hat{U}_{m,\mathbf{x}_{v}} = \mathbb{I} \quad \Rightarrow \quad \hat{U}_{\mathbf{x}_{v}',m} = \hat{U}_{m,\mathbf{x}_{v}}^{{-}1}}$$
and the only way this is possible is if $|\mathbf {x}_{v}'\rangle = |\mathbf {x}_{v}\rangle$, which means that the state of light at both $\mathbf {x}_{v}$ and $\mathbf {x}_{v}'$ voxels is identical.

This result demonstrates that propagating the state of light at point $m$ using the inverse of the forward operator from point $m$ to point $\mathbf {x}_{v}'$ in Eq. (48) is the same as inverting the propagation operator from point $\mathbf {x}_{v}$ to point $m$ defined in Eq. (46). In other words, we have demonstrated quite naturally that both back-propagation and wave-based imaging methods are equivalent point-to-point. This equivalence may open the possibility to develop hybrid formulations to optimize imaging scenes based on certain features of the scene and the experimental setup.
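A small numerical illustration of this argument under the assumption that the point-to-point propagators are unitary matrices: back-propagating the captured state with the inverse operator (equivalently, the adjoint) recovers the original voxel state. The random unitary below comes from a QR decomposition and is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
K = 6

# A random unitary stand-in for the propagation operator U_{m,x_v}.
A = rng.normal(size=(K, K)) + 1j * rng.normal(size=(K, K))
U, _ = np.linalg.qr(A)                      # QR yields a unitary factor
assert np.allclose(U @ U.conj().T, np.eye(K))

x_v = rng.normal(size=K) + 1j * rng.normal(size=K)   # state |x_v> at the voxel
s_m = U @ x_v                                        # forward propagation to point m (Eq. 46)

# Back-propagation with the adjoint (= inverse, since U is unitary) gives |x_v'> = |x_v>.
x_v_prime = U.conj().T @ s_m
assert np.allclose(x_v_prime, x_v)
```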

This result imposes certain conditions on the properties of the transport matrix $\left [ \hat {U}_{m,n} \right ]$, which considers the propagation among all illumination and sensing points. For instance, Eq. (8) describes the propagation of light from an illumination point $\mathbf {x}_{n}$ to a sensor point $\mathbf {x}_{m}$. However, the signal collected at the sensor point will be the contribution of all the illumination points and, hence:

$$|s_{m}\rangle = \sum_{n'} \hat{U}_{m,n'} |l_{n'}\rangle$$

Similarly, the inverse of Eq. (53) is:

$$|l_{n}\rangle = \sum_{m'} \hat{U}_{m',n}^{{-}1} |s_{m'}\rangle$$
which can be written in terms of Eq. (54) as:
$$|s_{m}\rangle = \sum_{n'} \sum_{m'} \hat{U}_{m,n'} \hat{U}_{m',n}^{{-}1} |s_{m'}\rangle$$

Equation (54) is not the direct inverse of Eq. (53). Equation (53) expresses the contributions of all illumination points on a sensing point $m$, whereas Eq. (54) represents the contribution of all sensing points to an illumination point $n$ if we were to invert the propagation from all sensing points to illumination point $n$.

Using the fact that each component $(m,n)$ of transport matrix $\hat {U}_{m,n}$ is unitary, i.e. $\hat {U}_{m,n}^{-1} = \hat {U}_{m,n}^{\dagger } = \hat {U}_{n,m}^{*}$, Eq. (55) can be further expressed as:

$$|s_{m}\rangle = \sum_{n'} \sum_{m'} \hat{U}_{m,n'} \hat{U}_{n,m'}^{*} |s_{m'}\rangle$$
which leads to:
$$\sum_{n'} \sum_{m'} \hat{U}_{m,n'} \hat{U}_{n,m'}^{*} = \delta_{m,m'} \delta_{n',n}$$
where $\delta _{m,m'}$ and $\delta _{n',n}$ are Kronecker deltas for sensing point index $m$ and illumination point index $n$. The Kronecker deltas are introduced because the states representing the illumination points and sensing points are orthogonal by definition.

Equation (57) implies that the trace of the matrix resulting from the product in Eq. (57) is equal to one (or a Real number if $\left [ \hat {U}_{m,n} \right ]$ is not normalized):

$$\boxed{ \operatorname{Tr}\left( \left[ \hat{U}_{m,n} \right] \left[ \hat{U}_{n,m}^{*} \right] \right) = 1 }$$

Although the transport matrix $\left [ \hat {U}_{m,n} \right ]$ as a whole might not be invertible, it must comply with Eq. (58), which can therefore be exploited to invert it by means of diagonalization or matrix decomposition methods.

8. Application: NLOS with non-delta illumination

Equation (9) and Eq. (11) provide the fundamental description of light interaction and propagation in a NLOS context using Dirac notation. This description is general enough to represent a varied range of NLOS reconstruction methods, such as back-propagation, phasor fields and f-k migration. The use of this notation not only serves as a general theoretical framework, but it enables derivations that are independent of the reconstruction algorithm as well. In this section we illustrate this with one of such derivations, in which we lift one of the main assumptions of current NLOS methods: the use of delta pulses of light as illumination source.

As stated in Eq. (10), the propagation operator $\hat{U}_{m, n}$ relates the state of light on capture $|s_{m}\rangle$ and the state of the illumination $|l_{n}\rangle$. The information about the hidden scene lies within this propagation operator $\hat{U}_{m, n}$, and NLOS reconstruction algorithms assume the illumination $|l_{n}\rangle$ to be a delta in time. However, real-world illumination hardware does not emit delta pulses but near-Gaussian ones, accounting for excitation and decay. The width (standard deviation) of such Gaussian pulses is extremely small for sophisticated, expensive lasers, and becomes larger (and therefore impractical for NLOS reconstructions) for lower-cost emitters. Using our framework, we can disentangle the effect of the illumination from the capture, enabling the use of Gaussian temporal models for emitters (cheaper lasers) instead of delta models, which in turn enables the use of cheaper time-resolved sensors as well.

Similar to other techniques [18,25], we assume a single illumination point $n$ with a known temporal emission profile, which transforms Eq. (11) into:

$$\hat{T}_{m, 1} = \hat{U}_{m, 1} |l_{1}\rangle \langle l_{1}|$$
and multiplying both sides on the right by $|l_{1}\rangle$ results in:
$$\hat{T}_{m, 1} |l_{1}\rangle = \hat{U}_{m, 1} |l_{1}\rangle \langle l_{1}||l_{1}\rangle = \hat{U}_{m, 1} \left\|{l_{1}}\right\|^{2} |l_{1}\rangle \quad \rightarrow \quad \left( \hat{T}_{m, 1} - \hat{U}_{m, 1} \left\|{l_{1}}\right\|^{2} \right) |l_{1}\rangle = 0$$
which implies that:
$$\hat{T}_{m, 1} = \hat{U}_{m, 1} \left\|{l_{1}}\right\|^{2}$$
if Eq. (60) has to hold true $\forall |l_{1}\rangle$.

Because phasor fields operates in the frequency space, we can express $\left\|{l_{1}}\right\|^{2}$ as the summation of the square amplitude of each frequency component of the signal as:

$$\left\|{l_{1}}\right\|^{2} = \sum_{\nu} \left\|{l_{1, \nu}}\right\|^{2}$$
where $\left\|{l_{1, \nu }}\right\|^{2}$ is the squared amplitude of frequency component $\nu$. Notice that the decomposition expressed in Eq. (62) is not restricted to the frequency domain alone; other domains can benefit from such a decomposition as well.

A delta pulse can be normalized such that:

$$\left\|{l_{1}}\right\|^{2} = \sum_{\nu} \left\|{l_{1, \nu}}\right\|^{2} = 1$$

Because of the superposition principle, we can decompose Eq. (61) for each frequency component $\nu$ as well. Then, dividing Eq. (61) on both sides by $\left\|{l_{1, \nu }}\right\|^{2}$, we actually obtain a delta-coupled scene operator for each frequency component $\nu$:

$$\hat{T'}_{m, 1, \nu} = \frac{\hat{T}_{m, 1, \nu}}{\left\|{l_{1,\nu}}\right\|^{2}}$$

Now, we can add all frequency components of the scene operator as:

$$\hat{T'}_{m, 1} = \sum_{\nu} \frac{\hat{T}_{m, 1, \nu}}{\left\|{l_{1,\nu}}\right\|^{2}} \equiv \hat{U}_{m, 1}$$

We then use this delta-decoupled operator for the NLOS reconstructions instead of the original signal. In other words, we can achieve the same effect of a delta-like illumination from an arbitrary time-resolved illumination by applying Eq. (65) and using $\hat {T'}_{m, 1}$ as input for the NLOS reconstructions.
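A minimal sketch of the delta-decoupling of Eqs. (64)–(65) in the frequency domain; the regularization floor `eps` and the function interface are practical assumptions on our part, not part of the derivation:

```python
import numpy as np

def delta_decouple(T_meas, pulse, eps=1e-6):
    """Eqs. (64)-(65): divide each frequency component of the measured scene
    response by the squared amplitude of the corresponding illumination
    component, yielding a delta-decoupled response for reconstruction.

    T_meas : (..., T) time-resolved measurement captured with a non-delta pulse
    pulse  : (T,)     temporal emission profile l_1(t), e.g. a Gaussian pulse
    eps    : floor to avoid dividing by vanishing components (our assumption)
    """
    L_nu = np.fft.fft(pulse)                    # frequency components l_{1,nu}
    T_nu = np.fft.fft(T_meas, axis=-1)          # frequency components T_{m,1,nu}
    T_prime_nu = T_nu / np.maximum(np.abs(L_nu) ** 2, eps)   # Eq. (64)
    return np.fft.ifft(T_prime_nu, axis=-1)     # back to the time domain (cf. Eq. 65)
```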

To validate our results, we have synthesized a hidden scene consisting of a Stanford dragon with a transient renderer optimized for NLOS imaging [41]. The dragon is placed 1 meter away from a relay wall of $2\times 2$ meters, with a sensing grid $\mathcal {S}$ of $128\times 128$ points and a single illumination point $\mathcal {L}$ at the center of the relay wall. In Fig. 5 we show the results of reconstructing the hidden scene with the phasor fields framework [18], using the temporal model of a Gaussian pulse with different widths: 250, 500, 750, and 1000 picoseconds. As the top row shows, as the width of the pulse increases, the reconstruction becomes blurrier until no figure can be recognized. Using our framework, however, we obtain reasonable reconstructions (where the figure of the dragon is clearly recognizable) even in the presence of very wide Gaussian pulses.

Fig. 5. Results of the reconstruction of a dragon with phasor fields using a Gaussian light pulse of increasing width: 250, 500, 750, and 1000 picoseconds. Top: reconstruction of the raw signal with Gaussian illumination. As the width of the pulse increases the reconstructions become blurrier, to the point where the dragon cannot be recognized. Bottom: reconstruction using our framework. Even in the presence of very wide pulses, we can still obtain reasonable reconstructions.


Aspects of the signal such as the effect of noise and decreasing temporal resolution can be explored under our proposed framework following on the steps of the example described in this section. However, such exploration lies outside the scope of this paper, which aims to establish a foundation for such exploratory works.

9. Discussion and conclusion

This work draws inspiration from Dirac himself, who said: "if one is working from the point of view of getting beauty in one’s equations, and if one has really a sound insight, one is on a sure line of progress", and: "a theory with mathematical beauty is more likely to be correct than an ugly one that fits some experimental data". We have focused on laying down the theoretical groundwork to provide a comprehensive formulation of the different mechanisms and imaging methods in NLOS imaging. In particular, formulating the NLOS problem in terms of the Dirac notation and defining a novel scene operator (Eq. (9)) has allowed us i) to derive the back-propagation, the phasor fields, and the f-k migration formulations from a unified framework; ii) to demonstrate that the Rayleigh-Sommerfeld diffraction operator is the propagation operator for wave-based propagation imaging formulations; and iii) to demonstrate that back-propagation and wave-based imaging formulations are equivalent. We have demonstrated that the unitary property of propagation operators is necessary so that the experimental measurements are real-valued magnitudes, reversible in time, and consistent with the Helmholtz reciprocity principle. The development of this work has been based on the key realization that phasors defined in the wave-based formulation can be considered as projections of kets on different representations and operators in a Hilbert space that model the interaction of light between a relay wall and a hidden scene. The different projections of the kets yield the different formulations, including back-propagation, phasor fields, and f-k migration. The conclusions of this paper are consistent with the traditional wave optics framework. However, we provide a different perspective that is more formal and general in the context of formulating NLOS imaging. We actually express the concept of time-resolved NLOS imaging in a way that is agnostic to the reconstruction algorithm and, therefore, our conclusions are not tied to particular algorithms.

Other NLOS methods could also be derived from our framework. Although a full formal derivation falls out of the scope of this paper, we provide here some meaningful insights towards this goal. For instance, the technique proposed in Saunders et al. [29] is based on measuring the shifts of the shadows caused by an occluder that is placed between the NLOS scene and the relay wall. Using our framework the shift of the measurement points on the relay wall could be modeled as an affine transformation of the propagation operator $\hat {U}$ connecting the illumination and the sensed points. Such affine transformation would be directly related to the relative position of the occluder and the hidden scene. The technique proposed in Pei et al. [30] is based on modeling arbitrary illumination patterns represented by different point-spread-function patterns, and performing an optimization process to minimize noise and select the most representative sampling points. One of the strengths of our framework is that it allows arbitrary illumination functions, therefore Pei et al. fits naturally in it. The technique proposed in Cao et al. [31] is based on actively changing the focus of the illumination using wavefront shaping. Being a wave-based method it could be derived similarly to the derivation of phasor-fields and f-k migration, with operator $\hat {U}$ being mapped to the operator in charge of wavefront shaping and focusing.

Finally, inspired by another of Dirac’s quotes, "I understand an equation when I can predict the properties of its solutions, without actually solving it", we have laid down the foundation of a cohesive mathematical framework to formulate NLOS challenges at large. We expect it will provide a deeper understanding of the aspects that are most important in imaging NLOS scenes, as well as help devise faster and more optimal scene sampling and computational methods. This effort may include the formulation of a generalized light transport operator to model properties such as polarization, scattering, dispersion, or birefringence. The key process for incorporating those properties would be to represent them in a suitable Hilbert space with the corresponding propagation operators. This work is the first stepping stone for future research directions involving both theoretical and experimental results.

A. Annex: Propagation operators are unitary

We will demonstrate that any propagation operator $\hat {U}$ is a unitary operator. This property is key to showing that back-propagation and wave-based imaging methods are equivalent, as proved in Section 7. A unitary operator is a bounded linear operator on a Hilbert space that satisfies:

$$\hat{U} \hat{U}^{{\dagger}} = \mathbb{I}$$
where $\hat {U}^{\dagger }$ is the adjoint operator of $\hat {U}$ and $\mathbb {I}$ is the identity operator. In other words, a unitary operator is one whose adjoint is also its inverse, as follows from Eq. (66):
$$\hat{U} \hat{U}^{{\dagger}} = \mathbb{I} \quad \Rightarrow \quad \hat{U}^{{\dagger}} = \hat{U}^{{-}1}$$
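To make Eqs. (66) and (67) concrete, the following minimal Python/NumPy sketch (our own illustration, not part of the derivation) uses the unitary DFT matrix as a stand-in for a propagation operator expressed in a finite basis, and checks numerically that its adjoint coincides with its inverse:

import numpy as np

K = 64
k = np.arange(K)
# Unitary DFT matrix, used here only as a stand-in for a discretized propagation operator.
U = np.exp(-2j * np.pi * np.outer(k, k) / K) / np.sqrt(K)

# Eq. (66): the operator applied to its adjoint gives the identity.
print(np.allclose(U @ U.conj().T, np.eye(K)))      # True
# Eq. (67): the adjoint therefore coincides with the inverse.
print(np.allclose(U.conj().T, np.linalg.inv(U)))   # True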

We will demonstrate that the point-to-point propagation operators in Eq. (10) are unitary operators. The propagation of light from an illumination point $n$ on the virtual light source surface $\mathcal {L}$ to a sensing point $m$ on the virtual camera $\mathcal {S}$ is defined as:

$$\hat{T}_{m,n} = |s_{m}\rangle \langle l_{n}|$$
where $|s_{m}\rangle$ represents the state of light at sensing point $m$ and $\langle l_{n}|$ represents the state of light at illumination point $n$.

The adjoint of Eq. (68) is:

$$\hat{T}_{m,n}^{{\dagger}} = \left( |s_{m}\rangle \langle l_{n}| \right)^{{\dagger}} = |l_{n}\rangle \langle s_{m}|$$
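As a finite-dimensional sketch of Eqs. (68) and (69) (an illustration with arbitrary complex vectors standing in for the kets), the point-to-point operator is an outer product, and taking its adjoint simply swaps the roles of the two kets:

import numpy as np

rng = np.random.default_rng(0)
s_m = rng.normal(size=4) + 1j * rng.normal(size=4)   # ket |s_m>
l_n = rng.normal(size=4) + 1j * rng.normal(size=4)   # ket |l_n>

T = np.outer(s_m, l_n.conj())                        # Eq. (68): |s_m><l_n|
# Eq. (69): the adjoint (conjugate transpose) equals |l_n><s_m|.
print(np.allclose(T.conj().T, np.outer(l_n, s_m.conj())))   # True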

In quantum mechanics, an observable represents an experimental measurement of operator $\hat {O}$. The observable $O$ corresponding to operator $\hat {O}$ is calculated according to Born’s rule, which states that the probability density of finding a system in a given state is proportional to the square of the amplitude of the system’s wavefunction. In practical terms, the observable is the expected value of operator $\hat {O}$ using a suitable set of basis functions $\{|b_{a}\rangle \}$ representing the system’s state:

$$O = \sum_{a} \langle b_{a}| \hat{O} |b_{a}\rangle$$

The basis functions must be orthonormal, i.e., $\langle b_{a'}|b_{a}\rangle = \delta _{a',a}$. Notice that the observable $O \in \mathbb {R}$, whereas the operator $\hat {O}$ is in general complex-valued (the symbol $\hat {\cdot }$ denotes an operator).
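The following sketch (again our own, assuming a finite-dimensional stand-in for the Hilbert space and a Hermitian operator as example, for which the observable is guaranteed to be real) evaluates Eq. (70) numerically; the sum of diagonal matrix elements over an orthonormal basis is simply the trace of the operator:

import numpy as np

rng = np.random.default_rng(0)
K = 8
A = rng.normal(size=(K, K)) + 1j * rng.normal(size=(K, K))
O_hat = A + A.conj().T                 # a Hermitian operator used as example

basis = np.eye(K)                      # orthonormal basis {|b_a>}: canonical vectors
O = sum(basis[:, a].conj() @ O_hat @ basis[:, a] for a in range(K))   # Eq. (70)

print(np.isclose(O.imag, 0.0))                     # True: the observable is real
print(np.isclose(O.real, np.trace(O_hat).real))    # True: it equals the trace of O_hat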

The observable for Eq. (68) is computed as:

$$T_{m,n} = \sum_{a} \langle b_{a}| \hat{T}_{m,n} |b_{a}\rangle$$

Similarly, the observable for Eq. (69) is computed as:

$$T_{m,n}^{{\dagger}} = \sum_{a} \langle b_{a}| \hat{T}_{m,n}^{{\dagger}} |b_{a}\rangle$$

Notice that both Eq. (71) and Eq. (72) represent the experimental measurements of the propagation of light between two points, which are real numbers (as opposed to complex numbers) and reversible in time due to the Helmholtz reciprocity principle. Therefore:

$$T_{m,n} = T_{m,n}^{{\dagger}}$$
which implies that the corresponding operators must be equal too:
$$\hat{T}_{m,n} = \hat{T}_{m,n}^{{\dagger}}$$

The propagation of light from an illumination point $n$ to a sensing point $m$ is defined as:

$$|s_{m}\rangle = \hat{U}_{m,n} |l_{n}\rangle$$
and the inverse propagation from a sensing point $m$ to an illumination point $n$ is defined as:
$$|l_{n}\rangle = \hat{V}_{n,m} |s_{m}\rangle$$
where $\hat {V}_{n,m}$ is the inverse propagation operator.

The bra corresponding to Eq. (76) is:

$$\langle l_{n}| = \langle s_{m}| \hat{V}_{n,m}^{{\dagger}}$$

Using Eq. (77) in Eq. (68) yields:

$$\hat{T}_{m,n} = |s_{m}\rangle \langle l_{n}| = |s_{m}\rangle \langle s_{m}| \hat{V}_{n,m}^{{\dagger}}$$
and using now Eq. (75) in Eq. (78) yields:
$$\hat{T}_{m,n} = |s_{m}\rangle \langle l_{n}| = |s_{m}\rangle \langle s_{m}| \hat{V}_{n,m}^{{\dagger}} = \hat{U}_{m,n} |l_{n}\rangle \langle s_{m}| \hat{V}_{n,m}^{{\dagger}} = \hat{U}_{m,n} \hat{V}_{n,m}^{{\dagger}} |l_{n}\rangle \langle s_{m}| = \hat{T}_{m,n}^{{\dagger}}$$

We can commute $|l_{n}\rangle \langle s _{m}| \hat {V}_{n,m}^{\dagger } = \hat {V}_{n,m}^{\dagger } |l_{n}\rangle \langle s _{m}|$ because the outer product is a linear operation that is commutative.

The only way Eq. (79) is equal to Eq. (74) is if:

$$\boxed{ \hat{U}_{m,n} \hat{V}_{n,m}^{{\dagger}} = \mathbb{I} \quad \Rightarrow \quad \hat{V}_{n,m}^{{\dagger}} = \hat{U}_{m,n}^{{\dagger}} \quad \Rightarrow \quad \hat{U}_{m,n} \hat{U}_{m,n}^{{\dagger}} = \mathbb{I}}$$
which proves that a propagation operator $\hat {U}_{m,n}$ is unitary.
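As a final numerical sanity check of Eq. (80) (a sketch with a generic random unitary matrix standing in for $\hat {U}_{m,n}$; this choice is our assumption for illustration, not the paper’s construction), forward propagation followed by the inverse propagation recovers the illumination state, and the inverse coincides with the adjoint:

import numpy as np

rng = np.random.default_rng(0)
K = 32
# Random unitary matrix (QR decomposition of a random complex matrix), standing in for U_{m,n}.
Q, _ = np.linalg.qr(rng.normal(size=(K, K)) + 1j * rng.normal(size=(K, K)))
U = Q

l_n = rng.normal(size=K) + 1j * rng.normal(size=K)   # illumination ket |l_n>
s_m = U @ l_n                                        # Eq. (75): forward propagation
V = np.linalg.inv(U)                                 # Eq. (76): inverse propagation

print(np.allclose(V @ s_m, l_n))      # True: inverse propagation recovers |l_n>
print(np.allclose(V, U.conj().T))     # True: V = U^dagger, consistent with Eq. (80)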

Funding

Project ENLIGHTEN, European Defence Fund (101103242); Ministerio de Ciencia, Innovación y Universidades/Agencia Estatal de Investigación/10.13039/501100011033 (Project PID2019-105004GB-I00).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. A. Velten, T. Willwacher, O. Gupta, et al., “Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging,” Nat. Commun. 3(1), 745 (2012). [CrossRef]  

2. A. Velten, D. Wu, A. Jarabo, et al., “Femto-photography: Capturing and visualizing the propagation of light,” ACM Trans. Graph. 32(4), 1–8 (2013). [CrossRef]  

3. K. L. Bouman, V. Ye, A. B. Yedidia, et al., “Turning corners into cameras: Principles and methods,” in International Conference on Computer Vision, pp. 2289–2297 (IEEE, 2017).

4. T. Maeda, G. Satat, T. Swedish, et al., “Recent advances in imaging around corners,” arXiv, arXiv:1910.05613 (2019). [CrossRef]  

5. D. Faccio, A. Velten, and G. Wetzstein, “Non-line-of-sight imaging,” Nat. Rev. Phys. 2(6), 318–327 (2020). [CrossRef]  

6. J. Rapp, C. Saunders, J. Tachella, et al., “Seeing around corners with edge-resolved transient imaging,” Nat. Commun. 11(1), 5929 (2020). [CrossRef]  

7. M. Buttafava, J. Zeman, A. Tosi, et al., “Non-line-of-sight imaging using a time-gated single photon avalanche diode,” Opt. Express 23(16), 20997–21011 (2015). [CrossRef]  

8. F. Heide, S. Diamond, D. B. Lindell, et al., “Sub-picosecond photon-efficient 3d imaging using single-photon sensors,” Sci. Rep. 8(1), 17726 (2018). [CrossRef]  

9. M. Laurenzis and A. Velten, “Nonline-of-sight laser gated viewing of scattered photons,” Opt. Eng. 53(2), 023102 (2014). [CrossRef]  

10. M. Laurenzis, M. L. Manna, M. Buttafava, et al., “Advanced active imaging with single photon avalanche diodes,” Emerg. Imaging Sens. Technol. for Secur. Def. III; Unmanned Sensors, Syst. Countermeas. 10799, 1079903 (2018). [CrossRef]  

11. D. B. Lindell, M. O’Toole, and G. Wetzstein, “Efficient non-line-of-sight imaging with computational single-photon imaging,” Adv. Photonics Count. Tech. XIV 11386, 113860C (2020). [CrossRef]  

12. O. Gupta, T. Willwacher, A. Velten, et al., “Reconstruction of hidden 3d shapes using diffuse reflections,” Opt. Express 20(17), 19096–19108 (2012). [CrossRef]  

13. G. Gariepy, F. Tonolini, R. Henderson, et al., “Detection and tracking of moving objects hidden from view,” Nat. Photonics 10(1), 23–26 (2016). [CrossRef]  

14. F. Heide, M. O’Toole, K. Zang, et al., “Non-line-of-sight imaging with partial occluders and surface normals,” ACM Trans. Graph. 38(3), 1–10 (2019). [CrossRef]  

15. M. L. Manna, J.-H. Nam, S. A. Reza, et al., “Non-line-of-sight-imaging using dynamic relay surfaces,” Opt. Express 28(4), 5331–5339 (2020). [CrossRef]  

16. A. Jarabo, B. Masia, J. Marco, et al., “Recent advances in transient imaging: A computer graphics and vision perspective,” Visual Informatics 1(1), 65–79 (2017). [CrossRef]  

17. R. Geng, Y. Hu, and Y. Chen, “Recent advances on non-line-of-sight imaging: Conventional physical models, deep learning, and new scenes,” APSIPA Transactions on Signal and Information Processing 11(1), e1 (2022). [CrossRef]  

18. X. Liu, I. Guillén, M. La Manna, et al., “Non-line-of-sight imaging using phasor-field virtual wave optics,” Nature 572(7771), 620–623 (2019). [CrossRef]  

19. D. B. Lindell, G. Wetzstein, and M. O’Toole, “Wave-based non-line-of-sight imaging using fast f-k migration,” ACM Trans. Graph. 38(4), 1–13 (2019). [CrossRef]  

20. V. Arellano, D. Gutierrez, and A. Jarabo, “Fast back-projection for non-line of sight reconstruction,” Opt. Express 25(10), 11574–11583 (2017). [CrossRef]  

21. M. L. Manna, F. Kine, E. Breitbach, et al., “Error backprojection algorithms for non-line-of-sight imaging,” IEEE Trans. Pattern Anal. Mach. Intell. 41(7), 1615–1626 (2017). [CrossRef]  

22. S. Xin, S. Nousias, K. N. Kutulakos, et al., “A theory of fermat paths for non-line-of-sight shape reconstruction,” in IEEE CVPR (2019), pp. 6800–6809.

23. M. O’Toole, D. B. Lindell, and G. Wetzstein, “Confocal non-line-of-sight imaging based on the light-cone transform,” Nature 555(7696), 338–341 (2018). [CrossRef]  

24. S. A. Reza, M. La Manna, S. Bauer, et al., “Phasor field waves: A huygens-like light transport model for non-line-of-sight imaging applications,” Opt. Express 27(20), 29380–29400 (2019). [CrossRef]  

25. J. Marco, A. Jarabo, J. H. Nam, et al., “Virtual light transport matrices for non-line-of-sight imaging,” in IEEE/CVF International Conference on Computer Vision, pp. 2420–2429 (2021). [CrossRef]  

26. D. Royo, T. Sultan, A. Muñoz, et al., “Virtual mirrors: Non-line-of-sight imaging beyond the third bounce,” ACM Trans. Graph. 42(4), 1–15 (2023). [CrossRef]  

27. P. A. M. Dirac, “A new notation for quantum mechanics,” Math. Proc. Cambridge Philos. Soc. 35(3), 416–418 (1939). [CrossRef]  

28. P. A. M. Dirac, The Principles of Quantum Mechanics (Clarendon Press, 1981).

29. C. Saunders, J. Murray-Bruce, and V. K. Goyal, “Computational periscopy with an ordinary digital camera,” Nature 565(7740), 472–475 (2019). [CrossRef]  

30. C. Pei, A. Zhang, Y. Deng, et al., “Dynamic non-line-of-sight imaging system based on the optimization of point spread functions,” Opt. Express 29(20), 32349 (2021). [CrossRef]  

31. R. Cao, F. d. Goumoens, B. Blochet, et al., “High-resolution non-line-of-sight imaging employing active focusing,” Nat. Photonics 16(6), 462–468 (2022). [CrossRef]  

32. K. Tanaka, Y. Mukaigawa, and A. Kadambi, “Polarized Non-Line-of-Sight Imaging,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2133–2142 (2020).

33. B. M. Smith, M. O’Toole, and M. Gupta, “Tracking Multiple Objects Outside the Line of Sight Using Speckle Imaging,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 6258–6266 (2018).

34. F. Willomitzer, P. V. Rangarajan, F. Li, et al., “Fast non-line-of-sight imaging with high-resolution and wide field of view using synthetic wavelength holography,” Nat. Commun. 12(1), 6647 (2021). [CrossRef]  

35. J. Klein, C. Peters, J. Martín, et al., “Tracking objects outside the line of sight using 2D intensity images,” Sci. Rep. 6(1), 32491 (2016). [CrossRef]  

36. A. P. French and E. F. Taylor, An Introduction to Quantum Physics (Norton, 1978).

37. R. M. Eisberg and R. Resnick, Quantum Physics of Atoms, Molecules, Solids, Nuclei, and Particles (Wiley, 1985).

38. R. P. Feynman, R. B. Leighton, and M. Sands, The Feynman Lectures on Physics, Vol. III (Basic Books, 2015).

39. D. J. Griffiths and D. F. Schroeter, Introduction to Quantum Mechanics (Cambridge University Press, 2018).

40. J. J. Sakurai and J. Napolitano, Modern Quantum Mechanics (Cambridge University Press, 2021).

41. D. Royo, J. García, A. Muñoz, et al., “Non-line-of-sight transient rendering,” Comput. Graph. 107, 84–92 (2022). [CrossRef]  



