Optica Publishing Group

Depth estimation in SPAD-based LIDAR sensors

Open Access

Abstract

In direct time-of-flight (D-TOF) light detection and ranging (LIDAR), accuracy and full-scale range (FSR) are the main performance parameters to consider. Particularly, in systems based on single-photon avalanche diodes (SPADs), photon-counting statistics plays a fundamental role in determining the LIDAR performance. The intrinsic performance ultimately depends on the system parameters and constraints, which are set by the application. However, the best-achievable performance directly depends on the selected depth estimation method and is not necessarily equal to the intrinsic performance. We evaluate a D-TOF LIDAR system, in the particular context of smartphone applications, in terms of parameter trade-offs and estimation efficiency. First, we develop a simulation model by combining radiometry and photon-counting statistics. Next, we perform a trade-off analysis to study dependencies between system parameters and application constraints, as well as non-linearities caused by the detection method. Further, we derive an analytical model to calculate the Cramér–Rao lower bound (CRLB) of the LIDAR system, which analytically accounts for the shot noise. Finally, we evaluate a depth estimation method based on artificial intelligence (AI) and compare its performance to the CRLB. We demonstrate that the AI-based estimator fully compensates for the non-linearity in depth estimation, which varies depending on application conditions such as target reflectivity.

Published by Optica Publishing Group under the terms of the Creative Commons Attribution 4.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.

1. Introduction

Depth sensing, defined as the method of obtaining a measure of the distance between a sensor and a target surface, has been an active field of research in automotive and consumer electronics. It plays a fundamental role in providing input information to systems, such as autonomous driving and face recognition. The intrinsic performance in depth sensing is only determined by the quality of input data obtained with the ranging hardware. However, the best achievable performance is also determined by the utilized depth estimation algorithm and noise rejection method.

There are several competing or complementary technologies available for depth sensing, such as light detection and ranging (LIDAR), radio detection and ranging (RADAR), and sound navigation and ranging (SONAR), among others. Autonomous driving has been the leading force in developing LIDAR systems that are more compact, have lower costs, and achieve improved reliability as well as full-scale range (FSR). The main advantage of LIDAR is the ability to resolve targets with excellent angular resolution. On the other hand, the main disadvantages of LIDAR are its limited FSR and its difficulty working under unfavorable environmental conditions, such as rain and fog. In order to build advanced driver assistance systems (ADAS), a sensor fusion approach is preferred so that the disadvantages of either technique are compensated [1–5]. Also, LIDAR sensors are utilized in the smartphone industry to perform tasks such as face recognition, scene modeling, auto-focus assistance, etc. [6–10].

Essentially, a LIDAR sensor emits light, and some portion of it is reflected back from a target surface to the sensor. Typically, the light source is a laser and the detection part is composed of a photodetector array or integrated camera [11–14]. The depth information is calculated directly or indirectly from the round-trip time-of-flight (TOF) of the detected light. LIDAR can be classified in two main sub-categories, namely indirect TOF (I-TOF) and direct TOF (D-TOF), depending on how the actively generated light is modulated and measured [15]. Other techniques that are based on light interferometry, such as frequency-modulated continuous wave (FMCW) LIDAR, also show potential for depth sensing [16].

In D-TOF sensors, the laser light is generated following a pulse-width modulation (PWM) scheme with a very low duty cycle, so the spatial resolution is improved and the optical peak power is increased. There are mainly two techniques in D-TOF LIDAR, namely flash and scanning LIDAR [16]. Flash LIDAR is utilized in short-range applications such as in smartphones [10]. On the other hand, scanning LIDAR can achieve a larger FSR and therefore shows more potential for automotive applications [17,18]. In both cases, the FSR of the LIDAR system is limited by the irradiance of the background light, the maximum laser power required by safety regulations, as well as the optical sensitivity.

In this paper, we specifically select the case of smartphone LIDAR to investigate effects such as non-linearities and trade-offs between parameters and constraints. For this purpose, we develop a custom pseudo ray-tracing algorithm that is coupled to a Monte Carlo (MC) code to simulate the LIDAR system. Most importantly, we evaluate depth estimation methods, such as deep-learning based methods, in terms of statistical efficiency. For such evaluation, we calculate the Cramér–Rao lower bound (CRLB) by utilizing an analytical model, and compare it to the performance of the depth estimation methods. It is important to mention that the analytical model utilized to calculate the CRLB accounts for the shot noise.

2. LIDAR system modeling

In order to model general scenarios in SPAD-based D-TOF LIDAR, we propose a generic D-TOF LIDAR sensor that is shown in Fig. 1(a). The generic sensor is composed of a vertical-cavity surface-emitting laser (VCSEL), and a SPAD array that is connected to several time-to-digital converters (TDCs) [12–14,18–20].

Fig. 1. Diagram and operation of a generic LIDAR sensor. (a) sensor block diagram. (b) generic timing diagram.


We optimize the system parameters considering smartphone applications. However, the presented model and equations are valid for any D-TOF application in which correlation-based noise filtering methods, such as coincidence filters, are not utilized. It is important to note that correlation-based methods require complex hardware implementations [20].

The sensor’s working principle is based on emitting a light pulse with the VCSEL; the pulse arrives at a target surface, and part of the light is reflected back and captured by the SPAD array. The distance is calculated by measuring the time difference between the emitted pulse and the detected pulse. Normally, the system performs multiple measurements based on time-correlated single-photon counting (TCSPC) cycles, in order to form a histogram of the recorded time differences or timestamps. The timestamps are calculated by the TDCs (see Fig. 1(b)), and we consider an architecture where multiple SPADs share a single TDC through timelines. The shared timelines allow for a lower number of TDCs with respect to the number of SPADs [21].

To study the proposed D-TOF LIDAR sensor performance and evaluate several depth estimation methods, we propose a simulation and signal processing flow that is shown in Fig. 2. The optical modeling stage estimates the detected number of VCSEL photons per TCSPC cycle $S_{\mathrm{p}}$, and the noise rate $N_{\mathrm{r}}$ in terms of counts per second. The noise events can include spurious counts due to background light as well as dark counts. Next, the time statistics modeling stage simulates the random and temporal behavior of the photon-counting process. For signal generation, we consider that the SPADs are recharged at the beginning of the TCSPC cycle. Also, when a detection occurs, the fired SPADs are kept quenched until the next recharge phase (see Fig. 1(b)).

Fig. 2. Simulation and signal processing flow.


The simulation and signal processing flow can simulate a single D-TOF LIDAR scenario with fixed simulation parameters. As a result, it outputs a single value that corresponds to the estimated target depth $\hat {d_{\mathrm{T}}}$, which is calculated based on a timestamp histogram $\mathbf {H_\mathrm{T}}$. However, in practice, we simulate several scenarios in the same run by sweeping the simulation parameters.

In the following subsections, details about the optical and time-statistics modeling stages are described and discussed.

2.1 Optical model

Typically, in D-TOF LIDAR, the target surface is considered to be an ideal light diffuser. In order to simplify the simulation model, previous works assume that a light ray from the laser is always perpendicular to the target surface [18,20]. However, when the D-TOF LIDAR sensor moves closer to the target, the light rays can no longer be considered perpendicular to the surface (see $\theta _{\mathrm{e},i,j}$ in Fig. 3(a)).

Therefore, we propose a model in which the target surface is divided into sub-elements $\Delta T_{i,j}$, which are considered as individual Lambertian reflectors. Also, this pixelated model can consider situations in which the laser field-of-illumination (FOI) exceeds the target surface area (see Fig. 3(a)). In this model, the detector consists of a lens, an optical bandpass filter and a SPAD array. The detector collects the summation of the diffused power from the pixelated sub-elements whose locations are inside the FOI of the VCSEL, and the field-of-view (FOV) of the detector (see Fig. 3(b)).

The optical model is based on a pseudo-sequential ray tracing algorithm that first calculates light paths between the VCSEL and the target surface. Next, it calculates the light paths between the sub-elements of the pixelated target surface and the SPAD array.

2.1.1 Signal event

A signal event is defined as a VCSEL photon detection that generates avalanche current in a SPAD. The signal light path is divided into two sub-paths: the emission path in which the VCSEL emits photons to the target (see Fig. 3(a)); and the reflection path in which the target reflects back photons onto the sensor lens (see Fig. 3(b)).

In the first sub-path, the amount of VCSEL light received by a sub-element of the pixelated target surface is calculated. The VCSEL and the SPAD array are placed at the origin of the $d$-axis, denoted as $d = 0$, and we define $d_{\mathrm{T}}$ as the distance from the VCSEL surface to the target surface.

The coverage sphere of the VCSEL is determined by a cone with apex angle $2\theta _{\mathrm{e}}$, which is equal to the FOI (see Fig. 3(a)). The total solid angle that corresponds to the FOI, $\Omega _{\mathrm{FOI}}$, is given by

$$\begin{gathered} \Omega_{\mathrm{FOI}} = \int_0^{2\pi}\int_0^{\theta_{\mathrm{e}}}\sin{\theta}\,d\theta\, d\phi = 2\pi(1-\cos{\theta_{\mathrm{e}}}) = 4\pi\sin^2{\frac{\theta_{\mathrm{e}}}{2}}= 4\pi\sin^2{\frac{\mathrm{FOI}}{4}}. \end{gathered}$$
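As a quick numerical check (a standalone sketch, not part of the authors' code), the closed form $2\pi(1-\cos\theta_{\mathrm{e}})$ of the double integral and the half-angle form $4\pi\sin^{2}(\theta_{\mathrm{e}}/2)$ used in Eq. (1) can be compared:

```python
import math

def solid_angle_integral(theta_e: float) -> float:
    # Closed form of the double integral in Eq. (1): 2*pi*(1 - cos(theta_e))
    return 2.0 * math.pi * (1.0 - math.cos(theta_e))

def solid_angle_half_angle(theta_e: float) -> float:
    # Half-angle form: 4*pi*sin^2(theta_e / 2)
    return 4.0 * math.pi * math.sin(theta_e / 2.0) ** 2

# FOI is the full apex angle 2*theta_e, so theta_e = FOI / 2
for foi_deg in (10.0, 25.0, 60.0):
    theta_e = math.radians(foi_deg) / 2.0
    assert math.isclose(solid_angle_integral(theta_e),
                        solid_angle_half_angle(theta_e), rel_tol=1e-12)
```

The two forms agree to machine precision for any cone half-angle, since $1-\cos\theta = 2\sin^{2}(\theta/2)$ is an exact trigonometric identity.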

At every sub-element $\Delta T_{i,j}$, we define $\theta _{\mathrm{e},i,j}$ as the incidence angle between a VCSEL light ray and the normal vector to the surface of $\Delta T_{i,j}$ (see Fig. 3(a)). The distance $d_{\mathrm{e},i,j}$ from the VCSEL to $\Delta T_{i,j}$ is then given by

$$d_{\mathrm{e},i,j}=\frac{d_{\mathrm{T}}}{\cos{\theta_{\mathrm{e},i,j}}}.$$

The solid angle $\Delta \Omega _{i,j}$ subtended by $\Delta T_{i,j}$ is

$$\Delta\Omega_{i,j} = \frac{\hat{\Delta T_{i,j}}}{d_{\mathrm{e},i,j}^2},$$
where $\hat {\Delta T_{i,j}}$ is the area of $\Delta T_{i,j}$.

The radiant flux $\Delta \Phi _{\mathrm{e},i,j}$ received by $\Delta T_{i,j}$, when it is located inside the FOI, is calculated as the ratio between $\Delta \Omega _{i,j}$ and $\Omega _{\mathrm{FOI}}$, multiplied by the total VCSEL optical power $P_{\mathrm{s}}$ and the cosine of the incidence angle. This calculation is expressed as

$$\Delta\Phi_{\mathrm{e},i,j}=\left\{ \begin{aligned} & \frac{P_{\mathrm{s}}\hat{\Delta T_{i,j}}}{4\pi \sin^2({\frac{\mathrm{FOI}}{4}})d_{\mathrm{e},i,j}^2} \cos{\theta_{\mathrm{e},i,j}} & \vert \theta_{\mathrm{e},i,j} \vert \leq\frac{\mathrm{FOI}}{2} \\ & 0 & \vert \theta_{\mathrm{e},i,j} \vert >\frac{\mathrm{FOI}}{2} \end{aligned}. \right.$$
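The emission sub-path (Eqs. (2)–(4)) can be sketched as follows for a flat target facing the sensor. The grid resolution, FOI, power, and target size are illustrative assumptions, not the paper's parameter values:

```python
import numpy as np

def emission_flux(P_s, foi, d_T, half_width, n=101):
    """Radiant flux received by each sub-element of a flat target at distance d_T.

    For a planar target perpendicular to the optical axis, a ray hitting the
    point (x, y) has incidence angle theta_e = arctan(r / d_T), so Eq. (2)
    gives d_e = d_T / cos(theta_e), and Eq. (4) gives the flux.
    """
    xs = np.linspace(-half_width, half_width, n)
    x, y = np.meshgrid(xs, xs)
    area = (xs[1] - xs[0]) ** 2                       # sub-element area
    theta_e = np.arctan2(np.hypot(x, y), d_T)
    d_e = d_T / np.cos(theta_e)                       # Eq. (2)
    omega_foi = 4.0 * np.pi * np.sin(foi / 4.0) ** 2  # Eq. (1)
    flux = P_s * area * np.cos(theta_e) / (omega_foi * d_e ** 2)  # Eq. (4)
    flux[np.abs(theta_e) > foi / 2.0] = 0.0           # sub-element outside the FOI
    return flux

# Illustrative: 7.36 mW VCSEL, 25 deg FOI, 0.4 m x 0.4 m target at 0.5 m
phi_e = emission_flux(7.36e-3, np.radians(25.0), 0.5, half_width=0.2)
# The target covers the whole FOI here, so the summed flux approaches P_s
assert abs(phi_e.sum() - 7.36e-3) / 7.36e-3 < 0.1
```

When the target covers the entire FOI, summing $\Delta\Phi_{\mathrm{e},i,j}$ over all sub-elements recovers the emitted power $P_{\mathrm{s}}$ (up to discretization error), which is a useful consistency check of the model.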

The second sub-path of the ray tracing algorithm corresponds to the light detected by the SPAD array, which is back-reflected from the target surface (see Fig. 3(b)). The radiant flux $\Delta \Phi _{\mathrm{s},i,j}$ emitted by $\Delta T_{i,j}$ is the product of the received radiant flux and the reflectivity $\rho$ of the target surface, and is defined by

$$\Delta \Phi_{\mathrm{s},i,j} = \rho\Delta \Phi_{\mathrm{e},i,j}.$$

The radiant intensity $\Delta I_{\mathrm{e},i,j}$, emitted from $\Delta T_{i,j}$ toward the sensor, is the product of the peak radiant intensity $I_{0,i,j}$ and the cosine of $\theta _{\mathrm{s},i,j}$, which is the angle between the direction toward the sensor lens $\hat {S}$ and the normal vector of $\Delta T_{i,j}$ (see Fig. 3(b)). $\Delta I_{\mathrm{e},i,j}$ is given by

$$\Delta I_{\mathrm{e},i,j} = I_{0,i,j}\cos{\theta_{\mathrm{s},i,j}}.$$

A plane radiator or reflector that is perfectly diffusive emits light in all directions, and its total emitted power is contained within the half-sphere above the plane (see Fig. 3(b)). Therefore, the radiant flux $\Delta \Phi _{\mathrm{s},i,j}$ diffused from $\Delta T_{i,j}$ of the target is calculated as the surface integral of the diffused radiant intensity $\Delta I_{\mathrm{e},i,j}$ over the solid angle $\Omega$ of the half-sphere

$$\Delta \Phi_{\mathrm{s},i,j} = \int_{\Omega}\Delta I_{\mathrm{e},i,j}\mathrm{d}\Omega = \pi I_{0,i,j}.$$

Approximating the illumination with the inverse-square law, the irradiance $\Delta E_{\mathrm{e},i,j}$, measured at the sensor lens and emitted by $\Delta T_{i,j}$, is expressed as

$$\Delta E_{\mathrm{e},i,j} = \frac{\Delta I_{\mathrm{e},i,j}}{d_{\mathrm{s},i,j}^2} = \frac{\Delta \Phi_{\mathrm{s},i,j}\cos{\theta_{\mathrm{s},i,j}}}{\pi d_{\mathrm{s},i,j}^2},$$
where $d_{\mathrm{s},i,j}$ is the distance from $\Delta T_{i,j}$ to the D-TOF LIDAR sensor.

The total radiant flux $\Phi _{\mathrm{s}}$ collected by the lens, whose area is $A_{\mathrm{l}}$, is the summation of the contributions of every $\Delta T_{i,j}$ that lies inside the FOV. $\Phi _{\mathrm{s}}$ is calculated as follows

$$\Phi_{\mathrm{s}} = \sum_{i,j}\left\{ \begin{aligned} & \frac{A_{\mathrm{l}}\Delta \Phi_{\mathrm{s},i,j}(\cos{\theta_{\mathrm{s},i,j}})^2}{\pi d_{\mathrm{s},i,j}^2} & \vert \theta_{\mathrm{s},i,j} \vert \leq\frac{\mathrm{FOV}}{2} \\ & 0 & \vert \theta_{\mathrm{s},i,j} \vert>\frac{\mathrm{FOV}}{2} \end{aligned}. \right.$$

Finally, the number of photons per TCSPC cycle, $S_{\mathrm{p}}$, is calculated as

$$S_{\mathrm{p}} = \Phi_{\mathrm{s}}\eta_{\mathrm{l}} \eta_{\mathrm{f}}\frac{2\lambda}{\pi fhc}\mathrm{PDE},$$
where $h$ is Planck’s constant, $c$ is the speed of light, $f$ is the VCSEL repetition frequency, and $\lambda$ is the VCSEL center wavelength. The photon detection efficiency (PDE) is defined as the product of the photon detection probability (PDP) and the fill factor (FF) of the SPAD array. The factor $2/\pi$ accounts for the light-sensitive area of the SPAD array being a square inscribed in the circle of light projected by the lens [20]. An optical bandpass filter is placed between the lens and the SPAD array to reduce background ambient light (see Fig. 3(b)); its transmittance $\eta _{\mathrm{f}}$ is also considered as a power loss.
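Equation (10) maps the collected flux to an expected photon count per cycle; a minimal sketch, with illustrative (assumed) parameter values rather than the paper's:

```python
import math

H_PLANCK = 6.62607015e-34  # Planck's constant [J s]
C_LIGHT = 2.99792458e8     # speed of light [m/s]

def photons_per_cycle(phi_s, eta_l, eta_f, wavelength, f_rep, pde):
    """Expected number of detected VCSEL photons per TCSPC cycle, Eq. (10).

    The 2/pi factor accounts for the square SPAD array inscribed in the
    circular image projected by the lens.
    """
    return (phi_s * eta_l * eta_f * 2.0 * wavelength
            / (math.pi * f_rep * H_PLANCK * C_LIGHT) * pde)

# Illustrative numbers: 1 nW at the lens, 940 nm VCSEL at 40 MHz, PDE = 1 %
S_p = photons_per_cycle(phi_s=1e-9, eta_l=0.9, eta_f=0.8,
                        wavelength=940e-9, f_rep=40e6, pde=0.01)
assert 0.0 < S_p < 10.0  # photon-starving regime: well below tens per cycle
```

Dividing the photon rate by the repetition frequency converts counts per second into counts per TCSPC cycle; with these assumed values $S_{\mathrm{p}}$ lands well below one photon per cycle, consistent with photon-starving operation.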

2.1.2 Noise event

A noise event is defined as any SPAD avalanche that is not triggered by a photon emitted from the VCSEL. Noise events originate from artificial or natural light via two different light paths: photons emitted from a noise source onto the target, which reflects them toward the sensor lens; or photons emitted directly onto the sensor lens. We only consider the first light path (see Fig. 3). Additionally, dark counts can also trigger SPAD avalanches, but in practice the dark count rate (DCR) is significantly smaller than the noise rate produced by background ambient light.

Fig. 3. Setup of the optical simulation. (a) Diagram of the forward emissions of the VCSEL light and background illumination. (b) Diagram of SPAD array’s light detection of the backward reflection.


We use the equivalent light irradiance at sea level, $E_{\mathrm{n}}$, to estimate the background noise power. Similar to the signal event calculation, we utilize the same ray tracing equations, replacing $\Delta \Phi _{\mathrm{s},i,j}$ in Eq. (9) with

$$\Delta \Phi_{\mathrm{n},i,j} = \rho E_{\mathrm{n}} \hat{\Delta T_{i,j}}.$$

Therefore, the total radiant flux measured at the sensor lens, due to noise photons only, is expressed as

$$\Phi_{\mathrm{n}} = \sum_{i,j}\left\{ \begin{aligned} & \frac{A_{\mathrm{l}}\Delta \Phi_{\mathrm{n},i,j}(\cos{\theta_{\mathrm{s},i,j}})^2}{\pi d_{\mathrm{s},i,j}^2} & \vert \theta_{\mathrm{s},i,j} \vert \leq\frac{\mathrm{FOV}}{2} \\ & 0 & \vert \theta_{\mathrm{s},i,j} \vert >\frac{\mathrm{FOV}}{2} \end{aligned}. \right.$$

Similarly to Eq. (10), we calculate the noise rate $N_{\mathrm{r}}$, in counts per second, as

$$N_{\mathrm{r}} = \Phi_{\mathrm{n}}\eta_{\mathrm{l}}\eta_{\mathrm{f}}\frac{2\lambda}{\pi hc}\mathrm{PDE}.$$
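Analogously to Eq. (10), Eq. (13) yields counts per second rather than per cycle, since the repetition frequency drops out. A hypothetical sketch with assumed values:

```python
import math

H_PLANCK = 6.62607015e-34  # Planck's constant [J s]
C_LIGHT = 2.99792458e8     # speed of light [m/s]

def noise_rate(phi_n, eta_l, eta_f, wavelength, pde):
    """Noise counts per second, Eq. (13); phi_n is the background flux at the lens [W]."""
    return (phi_n * eta_l * eta_f * 2.0 * wavelength
            / (math.pi * H_PLANCK * C_LIGHT) * pde)

# Illustrative: 1 nW of in-band background light reaching the lens
N_r = noise_rate(1e-9, eta_l=0.9, eta_f=0.8, wavelength=940e-9, pde=0.01)
# Expected noise counts inside one 6.4 ns histogram window
noise_per_window = N_r * 6.4e-9
assert 0.0 < noise_per_window < 1.0
```

Multiplying $N_{\mathrm{r}}$ by the histogram time range gives the expected number of noise events per cycle that can actually land in a histogram bin, a quantity that becomes useful in the statistical model below.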

2.2 Statistical model

The output of the time statistics modeling stage is a single histogram represented by the vector $\mathbf {H_\mathrm{T}}$, when simulating fixed optical parameters. The timestamps of signal and noise events are generated using $S_\mathrm{p}$ and $N_\mathrm{r}$ as input information, which are calculated from Eqs. (10) and (13) respectively (see Fig. 2).

In a single TCSPC cycle, the number of detected VCSEL photons $X$ is a random variable (RV) that follows a Poisson distribution. Its probability mass function (PMF) is defined as

$$P(X=k) = \frac{S_{\mathrm{p}}^k}{k!}e^{{-}S_{\mathrm{p}}},$$
where $S_{\mathrm{p}}$ is the expected number of signal events per TCSPC cycle. We assume that the VCSEL light pulse has a Gaussian shape and neglect any dispersion effects. Therefore, the timestamp of a signal event is represented by an RV with a Gaussian PDF given by
$$\begin{gathered} f(t\mid 2 \frac{d_{\mathrm{T}}}{c},\sigma_{\mathrm{l}}) =\mathcal{N}(t\mid 2 \frac{d_{\mathrm{T}}}{c},\sigma_{\mathrm{l}}), \end{gathered}$$
where $\sigma _{\mathrm{l}}$ relates to the VCSEL pulse width, and $\mathcal {N}$ denotes the Gaussian PDF.

The noise events are uniformly distributed over time; hence, the timestamp of the first noise event is represented by an RV with an exponential PDF, which is defined by

$$g(t\mid N_{\mathrm{r}})= \left\{ \begin{aligned} & N_{\mathrm{r}}e^{{-}N_{\mathrm{r}} t} & t\geq0 \\ & 0 & t<0 \end{aligned}. \right.$$

Based on the Monte Carlo method, random signal timestamps are generated following Eqs. (14) and (15), and noise timestamps are generated following Eq. (16). Next, the signal and noise timestamps are grouped together and sorted. In the case of a single-TDC system, only the first timestamp is added to $\mathbf {H}_{\mathrm{T}}$. In the case of multiple TDCs, this process is repeated for every subgroup composed of multiple SPADs and a single TDC, and the first timestamp of every TDC is added to $\mathbf {H}_{\mathrm{T}}$. The timestamp generation process and the update of $\mathbf {H}_{\mathrm{T}}$ are repeated for every TCSPC cycle.
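The per-cycle generation just described can be sketched for a single-TDC system; the parameter values below are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

rng = np.random.default_rng(0)
C_LIGHT = 2.99792458e8  # speed of light [m/s]

def tcspc_histogram(S_p, N_r, d_T, sigma_l, lsb, n_bins, n_cycles):
    """Histogram of first-photon timestamps for a single-TDC system.

    Per cycle: draw a Poisson number of signal photons (Eq. (14)) with
    Gaussian timestamps around the round-trip time (Eq. (15)), draw the
    first noise timestamp from an exponential PDF (Eq. (16)), keep only
    the earliest event, and round it to the TDC LSB.
    """
    t_tof = 2.0 * d_T / C_LIGHT
    hist = np.zeros(n_bins, dtype=np.int64)
    for _ in range(n_cycles):
        k = rng.poisson(S_p)
        signal = rng.normal(t_tof, sigma_l, size=k)
        noise = rng.exponential(1.0 / N_r, size=1)
        first = np.concatenate([signal, noise]).min()  # single TDC: first photon only
        b = int(first // lsb)                          # TDC rounding to the LSB
        if 0 <= b < n_bins:
            hist[b] += 1
    return hist

h = tcspc_histogram(S_p=0.1, N_r=1e6, d_T=0.5, sigma_l=100e-12,
                    lsb=25e-12, n_bins=256, n_cycles=10000)
peak_depth = int(np.argmax(h)) * 25e-12 * C_LIGHT / 2.0  # bin index -> depth [m]
```

Here the histogram peak falls near the simulated 0.5 m target; a real estimator would interpolate within the peak rather than take the raw bin index.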

Additionally, we consider that the dominant factor in the timing uncertainty is the rounding effect produced by the TDC’s least significant bit (LSB) of size $\Delta _{\mathrm{TDC}}$. Therefore, we perform a timestamp rounding by considering $\Delta _{\mathrm{TDC}}$ as the bin size when calculating $\mathbf {H_{\mathrm{T}}}$. In our model, the sensor is able to detect several photons per laser pulse. However, we consider that each single SPAD works in photon-starving mode and assume that the SPAD timing jitter is much smaller than $\sigma _{\mathrm{l}}$. When SPADs are not working in photon-starving mode, the so-called multi-photon distortion affects the SPAD timing jitter [22].

3. System trade-off analysis

In this section, we perform a system trade-off analysis by using the models described in the previous section. We define two criteria to evaluate the system performance, namely the depth range and depth resolution.

We define parameters that are swept during the analyses and categorize them into two subsets, namely variable and fixed. The variable parameters can be modified by the system during operation, and the fixed parameters are constrained by the application. The variable parameters are $P_{\mathrm{s}}$, the total number of TCSPC cycles $N_{\mathrm{TCSPC}}$, the PDP, and the TDC bin size $\Delta _{\mathrm{TDC}}$. The fixed parameters are $E_{\mathrm{n}}$ and $\rho$. It is important to mention that in this section we consider a system with a single TDC. Appendix C explains how the models and results of single-TDC systems can be re-adjusted and re-interpreted for multiple-TDC systems.

Also, appendix A shows additional static parameters that are common for all simulations.

3.1 Depth range

The aim of the depth range analysis is to find the maximum distance that the system can detect under limiting conditions. We utilize the find peaks function (FPF) from MATLAB as a reference algorithm to estimate $d_\mathrm{T}$ [23]. In this analysis, we gradually increase $d_{\mathrm{T}}$ until the FPF is unable to locate a signal peak, and the final valid value of $d_{\mathrm{T}}$ is considered as the depth range.

Figure 4 shows the maximum depth under different parameter values. The system fails when $N_{\mathrm{TCSPC}}$ is lower than 500. Furthermore, the depth range decreases with the increase of $E_{\mathrm{n}}$.

Fig. 4. Depth range under variable and fixed parameter conditions; each panel shows the depth range as a function of $P_\mathrm{s}$. (a) depth range with $N_{\mathrm{TCSPC}}$ as parameter. (b) depth range with $E_{\mathrm{n}}$ as parameter. (c) depth range with $\rho$ as parameter. (d) depth range under extreme conditions of $\rho$ and $E_{\mathrm{n}}$. When the corresponding parameters are not swept, they are set as $N_{\mathrm{TCSPC}} = 30000$, $\Delta _{\mathrm{TDC}} = 25 \;\textrm{ps}$, $E_{\mathrm{n}}$ = 0.1 W m−2 nm−1, and $\rho$ = 8 %.


We observe that $\rho$ and $E_{\mathrm{n}}$ are highly dependent parameters when determining the depth range. Therefore, we perform a depth range analysis with two extreme values of $\rho$, sweeping $E_{\mathrm{n}}$ as well as $P_\mathrm{s}$ (see Fig. 4(d)). For the $\rho$ value of 60 %, the depth range is slightly higher at low $P_{\mathrm{s}}$ values, but it saturates faster as $P_{\mathrm{s}}$ increases. On the other hand, for the $\rho$ value of 8 %, the depth range is slightly lower at low $P_{\mathrm{s}}$ values, but it continues to increase as $P_{\mathrm{s}}$ increases. The explanation for this observation is that in the curve with a $\rho$ value of 8 %, the depth range is restricted only by the VCSEL power. In the curve with higher $\rho$, the depth range is restricted by the amount of reflected background light.

We further analyze the effect of the PDP on the depth range in appendix B.

3.2 Depth resolution

In the depth resolution analysis, we study the mean-square-error (MSE) of the system when estimating $d_{\mathrm{T}}$. In this section, we also utilize the FPF as our estimation method to locate the VCSEL peak position in $\mathbf {H_\mathrm{T}}$. Further, $\hat {d_{\mathrm{T}}}$ is simply calculated as the time of the peak found by the FPF in $\mathbf {H_{\mathrm{T}}}$, converted into the corresponding half round-trip distance. For a better understanding of the analysis, we report $\sqrt {\mathrm{\hat {MSE}}}$ in order to have consistency in the depth unit. Also, we study the variance $\sigma _{\mathrm{MSE}}$ and bias component $B_{\mathrm{MSE}}$ of the MSE separately.

In this analysis, we select a standard value of $P_{\mathrm{s}}$ equal to 7.36 mW, which allows a depth range of 0.5 m when $\mathrm{PDP}= {2}\;{\%}$ (see Fig. 11 in appendix B), and is below the maximum safety value when utilizing $N_{\mathrm{TCSPC}}=10000$ [24].

We sweep the variable parameters, and for every point in the sweep we generate 1,000 histograms to calculate the MSE. Figure 5 shows the results when sweeping the variable parameters and Fig. 6 shows the results when sweeping the fixed parameters.

Fig. 5. Depth resolution under variable parameter conditions, and subdivided into $B_{\mathrm{MSE}}$ and $\sigma _{\mathrm{MSE}}$. (a) $\sqrt {\mathrm{\hat {MSE}}}$ with $N_{\mathrm{TCSPC}}$ as parameter. (b) $B_{\mathrm{MSE}}$ and $\sigma _{\mathrm{MSE}}$ with $N_{\mathrm{TCSPC}}$ as parameter. (c) $\sqrt {\mathrm{\hat {MSE}}}$ with $P_{\mathrm{s}}$ as parameter. (d) $B_{\mathrm{MSE}}$ and $\sigma _{\mathrm{MSE}}$ with $P_{\mathrm{s}}$ as parameter. When the corresponding parameters are not swept, they are set as $P_{\mathrm{s}}$ = 7.36 mW, $N_{\mathrm{TCSPC}} = 10000$, $\Delta _{\mathrm{TDC}} = 25 \;\textrm{ps}$, $E_{\mathrm{n}}$ = 0.1 W m−2 nm−1, and $\rho$ = 8 %.


Fig. 6. Depth resolution under fixed parameter conditions, and subdivided into $B_{\mathrm{MSE}}$ and $\sigma _{\mathrm{MSE}}$. (a) $\sqrt {\mathrm{\hat {MSE}}}$ with $E_{\mathrm{n}}$ as parameter. (b) $B_{\mathrm{MSE}}$ and $\sigma _{\mathrm{MSE}}$ with $E_{\mathrm{n}}$ as parameter. (c) $\sqrt {\mathrm{\hat {MSE}}}$ with $\rho$ as parameter. (d) $B_{\mathrm{MSE}}$ and $\sigma _{\mathrm{MSE}}$ with $\rho$ as parameter. When the corresponding parameters are not swept, they are set as $P_{\mathrm{s}}$ = 7.36 mW, $N_{\mathrm{TCSPC}} = 10000$, $\Delta _{\mathrm{TDC}} = 25 \;\textrm{ps}$, $E_{\mathrm{n}}$ = 0.1 W m−2 nm−1, and $\rho$ = 8 %.


First, we simulate the depth resolution while sweeping $N_{\mathrm{TCSPC}}$, and find that the MSE remains almost unchanged with respect to $N_{\mathrm{TCSPC}}$ (see Fig. 5(a)). However, by analyzing $B_{\mathrm{MSE}}$ and $\sigma _{\mathrm{MSE}}$ separately, we find that the MSE value is mainly determined by $B_{\mathrm{MSE}}$, which is an order of magnitude larger than $\sigma _{\mathrm{MSE}}$. Also, the value of $B_{\mathrm{MSE}}$ does not depend on $N_{\mathrm{TCSPC}}$; and as expected, $\sigma _{\mathrm{MSE}}$ decreases as $N_{\mathrm{TCSPC}}$ increases (see Fig. 5(b)). The large value of $B_{\mathrm{MSE}}$ is explained by order statistics [25,26]. This effect is further elaborated in appendix D.

Next, we simulate the MSE as a function of $P_{\mathrm{s}}$ (see Figs. 5(c) and 5(d)). Again, the change in $B_{\mathrm{MSE}}$ is explained by order statistics, as the total number of VCSEL photons arriving at the detector changes with respect to $P_{\mathrm{s}}$. The tendency in Figs. 6(c) and 6(d) is also explained by a bias shift.

4. Cramér–Rao lower bound

The CRLB predicts the optimum performance that can be achieved by any unbiased estimator, and is only determined by the estimator’s input data. It is calculated from a likelihood function that relates the estimator’s input data and the parameter to be estimated, which in this case are $\mathbf {H_\mathrm{T}}$ and $d_{\mathrm{T}}$, respectively. In this section, we describe in detail the steps to calculate the CRLB of the D-TOF LIDAR system (see Fig. 1(a)).

First, we define a timestamp as an RV that corresponds to the photon detection time with respect to an uncorrelated signal (see Fig. 1(b)). Next, we calculate the timestamp PDF of background photons only. We assume that the emission process of the background light source is Poissonian. Therefore, the timestamp PDF when the sensor is only exposed to background photons is given by Eq. (16), which is derived from random Poisson points [27].

So far, we have considered that our system has a single TDC, so it can only record the first photon detection. Therefore, we can derive Eq. (16) by utilizing first-order statistics, as an alternative to random Poisson points. To do so, we define the PDF of the unsorted background photons’ timestamps as follows

$$s_{\mathrm{N}}(t) = \lim_{T \to +\infty} \begin{cases} 0 & \textrm{for}\; t < 0 \\ \frac{1}{T} & \textrm{for}\; t \geq 0\\ \end{cases}.$$

Also, the size of the unsorted timestamp set is given by

$$R = \lim_{T \to +\infty} N_{\mathrm{r}} T.$$

$T$ must tend to infinity in order to achieve the definition of random Poisson points [27]. Next, we calculate the PDF of the first background photon’s timestamp. We define it as the PDF of the first order statistic of a set composed of $R$ independent and identically distributed (IID) RVs with PDF equal to $s_{\mathrm{N}}$, and it is given by

$$\begin{aligned} n_{1}(t) &= \lim_{T \to +\infty} R [1-S_{\mathrm{N}}(t)]^{R-1} s_{\mathrm{N}}(t) \\ &= \lim_{T \to +\infty} R [1 - \frac{t}{T}]^{R-1} \frac{1}{T} H(t) \\ &= \lim_{T \to +\infty} N_{\mathrm{r}}[1 - \frac{t}{T}]^{N_{\mathrm{r}} T - 1} H(t) \\ &= N_{\mathrm{r}} \mathrm{e}^{-N_{\mathrm{r}}t} H(t), \end{aligned}$$
where $H(t)$ is the unit step function. It is important to note that $n_{1}(t)$ considers the influence of the shot noise. Also, Eq. (19) expresses the relationship between order statistics and random Poisson points. The next step is to calculate the PDF of the timestamp that corresponds to the first photon detection when the VCSEL is turned on. Therefore, we define the PDF of the unsorted photons’ timestamps, which includes VCSEL and background photons, as
$$s_{\mathrm{N+P}}(t\mid d_{\mathrm{T}}) = \lim_{T \to +\infty} \begin{cases} 0 & \textrm{for}\; t < 0 \\ \frac{1}{T}+\frac{S_{\mathrm{P}}}{R}\mathcal{N}(t\mid 2 \frac{d_{\mathrm{T}}}{c},\sigma_{\mathrm{l}}) & \textrm{for}\; t \geq 0\\ \end{cases}.$$
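The limiting construction of Eq. (19) can be checked numerically for a finite but large $T$ (a standalone sketch with an assumed noise rate):

```python
import numpy as np

N_r = 1e6                            # assumed noise rate [counts/s]
t = np.linspace(0.0, 6.4e-9, 1000)   # histogram time range T_h

def n1_finite(t, T):
    """First-order statistic of R = N_r*T uniform timestamps on [0, T], Eq. (19)."""
    R = N_r * T
    return (R / T) * (1.0 - t / T) ** (R - 1.0)

exact = N_r * np.exp(-N_r * t)       # limit T -> infinity
approx = n1_finite(t, T=6.4e-9 * 1e4)
assert np.max(np.abs(approx - exact)) / N_r < 1e-3
```

Already at $T = T_{\mathrm{h}} \cdot 10^{4}$ the finite-$T$ first-order statistic is indistinguishable from the exponential PDF of Eq. (16) over the histogram window, which is the same choice of $T$ used in the numerical comparison of Fig. 7.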

Similarly to Eq. (19), we define the first detected timestamp as the first order statistic of a set composed of $R$ IID RVs with PDF equal to $s_{\mathrm{N+P}}$. Subsequently, the PDF of the first timestamp, which corresponds to VCSEL and background photons, is given by

$$p_{1}(t\mid d_{\mathrm{T}}) = \lim_{T \to +\infty} R [1-S_{\mathrm{N+P}}(t)]^{R-1} s_{\mathrm{N+P}}(t).$$

At this point, we can consider that $p_{1}$ approximates a PDF that corresponds to the time between random Poisson points with nonuniform density [27], as long as $T$ tends to infinity. Next, we define the likelihood function, which relates the estimator’s input data to the parameter to be estimated, as follows

$$l(t\mid d_{\mathrm{T}}) = p_{1}(t\mid d_{\mathrm{T}}) \ast U(t\mid [0,\Delta_{\mathrm{TDC}}]).$$

The uniform distribution $U(t\mid [0,\Delta _{\mathrm{TDC}}])$ models the rounding effect due to the finite size of the TDC’s LSB. Also, we assume that the single-shot resolution of the TDC is only limited by the LSB size. We do not consider differential nonlinearity (DNL) or integral nonlinearity (INL) effects in the TDC. Timing jitter, DNL, or INL effects can be modeled by replacing $U(t\mid [0,\Delta _{\mathrm{TDC}}])$ by a customized PDF (see appendix E).

Finally, we rescale the likelihood function as follows

$$l'(t\mid d_{\mathrm{T}}) = \frac{l(t\mid d_{\mathrm{T}})}{\int_{0}^{T_{\mathrm{h}}}{l(t\mid d_{\mathrm{T}}) \mathrm{d}t}},$$
where $T_{\mathrm{h}}$ is the maximum time range of the timestamp histogram $\mathbf {H_\mathrm{T}}$ or TDC full scale range. In all calculations, we consider an 8-bit TDC, so $T_{\mathrm{h}}$ is equal to 6.4 ns when $\Delta _{\mathrm{TDC}}= {25}\;\textrm{ps}$.

The Fisher information $\mathcal {I}_{1}(d_{\mathrm{T}})$ observed in one single detection is given by

$$\mathcal{I}_{1}(d_{\mathrm{T}}) = \int_{0}^{T_{\mathrm{h}}}{[ \frac{\partial}{\partial d_{\mathrm{T}}} l'(t\mid d_{\mathrm{T}})]^2\frac{1}{l'(t\mid d_{\mathrm{T}})} \mathrm{d}t}.$$

Since the TCSPC cycles are mutually independent, and assuming that the target is static during the detection process, the Fisher information observed in $\mathbf {H_\mathrm{T}}$ is calculated as

$$\mathcal{I}_{\mathrm{\mathbf{h}}}(d_{\mathrm{T}}) = N_{\mathrm{h}}\mathcal{I}_{1}.$$

$N_{\mathrm{h}}$ represents the total number of counts contained in $\mathbf {H_\mathrm{T}}$ and is subject to shot noise. However, the influence of shot noise on $N_{\mathrm{h}}$ can be neglected, since the histogram typically contains several thousand counts. Finally, the CRLB is defined as the inverse of $\mathcal {I}_{\mathrm{\mathbf {h}}}(d_{\mathrm{T}})$.

We numerically calculate $p_{1}(t\mid d_{\mathrm{T}})$ for two different $\rho$ values with a time step of 1 ps (see Fig. 7). In this calculation, we also consider two cases per $\rho$ value: $T=T_{\mathrm{h}}$ and $T=T_{\mathrm{h}} \cdot 10^4$. The second case numerically approximates the limit of $T$ tending to infinity. In addition, we calculate $\mathbf {H_\mathrm{T}}$ using the MC code of subsection 2.2, but with $\Delta _{\mathrm{TDC}}$ reduced to 1 ps and $N_{\mathrm{TCSPC}}$ increased to $10^7$.


Fig. 7. $p_{1}(t\mid d_{\mathrm{T}})$ calculated with the analytical and MC models for two different reflectivity conditions and two different $T$ values. The system parameters are $P_{\mathrm{s}}$ = 7.36 mW, $N_{\mathrm{TCSPC}} = 10^7$, $\Delta _{\mathrm{TDC}} = {1}\;\textrm{ps}$, $E_{\mathrm{n}}$ = 0.4 W m−2 nm−1, and $d_{\mathrm{T}}= {0.5}\;\textrm{m}$. In (a) $\rho$ is 8 % and in (b) $\rho$ is 60 %.


As observed in Fig. 7, $p_{1}(t\mid d_{\mathrm{T}})$ perfectly matches the $\mathbf {H_\mathrm{T}}$ generated by the MC code when $T=T_{\mathrm{h}} \cdot 10^4$. It is worth noticing that the analytical model includes the influence of shot noise in the detection process as $T$ tends to infinity. Previous analytical models utilized to calculate the CRLB in estimation problems with single-photon detectors did not account for the influence of shot noise [28,29]. However, in a LIDAR case, the shot noise has a significant influence, since the average number of detected photons per TCSPC cycle is rather low.

Finally, we calculate the $\sqrt {\mathrm{CRLB}}$ with respect to $d_{\mathrm{T}}$ for the two different $\rho$ values (see Fig. 8), as well as for $T=T_{\mathrm{h}}$ and $T=T_{\mathrm{h}} \cdot 10^4$. In the numerical calculation, $R$ is rounded to the nearest integer (see Eq. (21)), which causes numerical artifacts in the $\sqrt {\mathrm{CRLB}}$ due to large discontinuities in $l(t\mid d_{\mathrm{T}})$. This effect is minimized as $T$ tends to infinity (see Fig. 8).


Fig. 8. CRLB calculated for two reflectivity conditions and two different $T$ values. The system parameters are $P_{\mathrm{s}}$ = 7.36 mW, $N_{\mathrm{TCSPC}} = 30000$, $\Delta _{\mathrm{TDC}} = {25}\;\textrm{ps}$, and $E_{\mathrm{n}}$ = 0.4 W m−2 nm−1. In (a) $\rho$ is 8 % and in (b) $\rho$ is 60 %.


5. Full system performance

So far, we have utilized the FPF as our reference estimation method. However, as explained in section 3.2, this method is biased. Also, the behavior of the FPF depends on its input parameters, such as the minimum peak prominence, which need to be adjusted depending on the application conditions [23]. Therefore, we propose to utilize artificial neural networks (ANNs) as a robust and unbiased depth estimator, whose coefficients can be adjusted automatically through training.

In this section, we evaluate the performance of an improved version of the FPF (see appendix D) and compare it to the ANN-based depth estimator’s performance. In addition, we benchmark the performance of both methods against the CRLB that is calculated in section 4.

5.1 Artificial neural network

Depth estimation based on a feedforward ANN can directly calculate the depth and internally compensate for the non-linearities of the system [30]. To clarify, the ANN is not used as a classifier, since it has a single output that directly gives $\hat {d_{\mathrm{T}}}$. The ANN inputs are the counts of the 256 bins of a single histogram $\mathbf {H_{\mathrm{T}}}$, and the ANN has a single hidden layer with 8 neurons. The output neuron’s activation function is linear, and the hidden neurons’ activation function is a tansig function.
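The topology described above can be sketched as a plain forward pass. The weights below are random placeholders, not the trained coefficients, and the Poisson-sampled input histogram is synthetic; tansig is mathematically the hyperbolic tangent, which is what the sketch uses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Network topology from the text: 256 histogram bins in, one hidden layer
# of 8 tansig (tanh) neurons, one linear output neuron giving the depth.
W1 = rng.normal(scale=0.1, size=(8, 256))   # placeholder hidden weights
b1 = np.zeros(8)
W2 = rng.normal(scale=0.1, size=(1, 8))     # placeholder output weights
b2 = np.zeros(1)

def ann_depth(histogram):
    """Forward pass: 256 histogram bin counts -> scalar depth estimate."""
    h = np.tanh(W1 @ histogram + b1)        # hidden layer, tansig activation
    return (W2 @ h + b2).item()             # linear output neuron

# Example input: one synthetic 256-bin timestamp histogram.
H_T = rng.poisson(lam=5.0, size=256).astype(float)
d_hat = ann_depth(H_T)
```

With training (e.g., Levenberg–Marquardt, as in [30]), the same structure maps histograms directly to $\hat {d_{\mathrm{T}}}$.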

To obtain a representative training set, we generate many $\mathbf {H_{\mathrm{T}}}$ by linearly sweeping the parameters of the model. We assume that the system parameters are already set, and sweep only the fixed parameters (see Tables 3, 4 and 5). Thus, $E_{\mathrm{n}}$, $\rho$, and $d_{\mathrm{T}}$ are the three parameters that are swept to obtain the training set (see Table 1). At every parameter condition, the simulation is repeated five times, in order to account for statistical effects.


Table 1. Range of parameter sweeps in ANN training and testing sets. The variable parameters are: $N_{\mathrm{TCSPC}} = 30000$, $\Delta _{\mathrm{TDC}}$=25 ps, and $P_{\mathrm{s}}$ = 7.36 mW.

5.1.1 ANN performance evaluation

We evaluate the performance of the ANN on input histograms that are not used during training. Therefore, we create a testing set in addition to the training set (see Table 1). We combine all patterns that belong to the same value of $d_{\mathrm{T}}$ and evaluate the ANN performance by creating a distribution of $\hat {d_{\mathrm{T}}}$ per $d_{\mathrm{T}}$ value. Fig. 9 shows the $\hat {d_{\mathrm{T}}}$ at every $d_{\mathrm{T}}$ of the testing set as an error bar plot, where the bar width represents the interquartile range. As observed in Fig. 9, the non-linearities of the signal peak in $\mathbf {H_{\mathrm{T}}}$ are fully compensated by the ANN. In addition, the selection of 8 hidden neurons is not arbitrary; it was chosen after observing that a further increase in hidden neurons does not significantly improve the estimation performance for the utilized dataset size.


Fig. 9. Linearity evaluation of the ANN for the testing set.



Table 2. Performance summary of the estimated depth shown as $\overline {\hat {d_{\mathrm{T}}}}\pm \ 2\hat {\sigma _{d_\mathrm{T}}}$.

5.2 Performance comparison

In this subsection, we compare the performance of the improved FPF (see appendix D) and the ANN. We select six distributions from the linear sweep, calculate $\hat {d_{\mathrm{T}}}$ minus $d_{\mathrm{T}}$, and show the results as a set of boxplots (see Fig. 10). It can be observed that the error distribution of the improved FPF is more irregular than that of the ANN. Additionally, the linearity of the ANN is significantly better compared to the improved FPF. Also, Fig. 10 compares the $\pm 3\sqrt {\mathrm{CRLB}}$ to the error distribution of the estimation methods.


Fig. 10. Distribution of the estimation error for the testing set of the improved FPF and the ANN depicted as a boxplot and compared to the CRLB. (a) improved FPF performance; (b) ANN performance. The CRLB is calculated with $P_{\mathrm{s}}$ = 7.36 mW, $N_{\mathrm{TCSPC}} = 30000$, $\Delta _{\mathrm{TDC}} = {25}\;\textrm{ps}$, $E_{\mathrm{n}}$ = 0.4 W m−2 nm−1, $\rho$ is 60 %, and $T=T_{\mathrm{h}} \cdot 10^4$.



Fig. 11. Depth range under different PDP conditions. The simulation parameters are set as: $N_{\mathrm{TCSPC}}$ = 30000, $\Delta _{\mathrm{TDC}} = 25 \;\textrm{ps}$, $E_{\mathrm{n}}$ = 0.4 W m−2 nm−1, and $\rho$ = 60 %.


In addition, Table 2 shows a comparison summary expressed as the sample mean and standard deviation of the $\hat {d_{\mathrm{T}}}$ distribution. It can be observed that the standard deviation achieved by the ANN is significantly lower than that of the improved version of the FPF.

6. Discussion and conclusions

The proposed simulation model of section 2, based on radiometry and photon-counting statistics, is a simple method that is not limited to simulating D-TOF LIDAR systems for smartphone applications. The simple approach of pixelating the target surface overcomes the limitation of previous models [18,20]. In addition, this approach allows us to accurately simulate conditions where the target surface is close to the sensor, as well as conditions where the target surface is smaller than the FOI or FOV.

In subsection 3.1, we determine that the main parameter limiting the FSR of the LIDAR system is $N_{\mathrm{r}}$, whether the target is highly reflective (high $\rho$ values) or under high background irradiance ($E_{\mathrm{n}}$). The reason behind this effect is that $N_{\mathrm{r}}$ determines the chance that a TDC is available for a laser-photon detection. Also, in appendix C, we show that the influence of $N_{\mathrm{r}}$ on the entire system is reduced by increasing the total number of TDCs, which effectively reduces the background photon rate per individual TDC. In addition, we showed that a longer FSR is obtained by increasing $N_{\mathrm{TCSPC}}$; however, this decreases the maximum number of estimations per unit time that the system can achieve. We also show in appendix C that a single-TDC system can model a multiple-TDC system by readjusting the simulation parameters accordingly.

In subsection 3.2, we show that the linearity of the system is mainly influenced by the PDF shape of the first photon timestamp $p_{1}(t|d_{\mathrm{T}})$, which is left-skewed with respect to the VCSEL temporal intensity $f(t)$. This effect, a shifted histogram peak even if the target is at the same $d_{\mathrm{T}}$, depends on $S_{\mathrm{P}}$, which in turn depends on $P_{\mathrm{s}}$ and $\rho$. Also, we conclude that if $B_{\mathrm{MSE}}$ is compensated, the remaining $\sigma _{\mathrm{MSE}}$ improves as the total number of detected laser photons per histogram increases. This is achieved by increasing $P_{\mathrm{s}}$, $\rho$, or $N_{\mathrm{TCSPC}}$.

In section 4, we analytically calculate the CRLB that corresponds to the depth estimation of a given D-TOF LIDAR system. We approximate the condition of random Poisson points with nonuniform density by simply extending the calculation time of $p_{1}(t|d_{\mathrm{T}})$ to infinity, and we therefore account for the shot noise. It is important to mention that in this application of single-photon detection, the shot noise has a fundamental influence, since the average number of detected photons per TCSPC cycle can be lower than unity.

Finally, we propose the utilization of AI-based methods for depth estimation in section 5. We determine that the ANN can automatically compensate for the systematic bias produced by the single-photon detection method. In addition, this method has a significantly improved depth resolution in comparison to the reference method, which is based on finding peaks in the timestamp histogram.

Nomenclature

$\Delta \Phi_{\mathrm{n},i,j}$

radiant flux emitted by $\Delta T_{i,j}$, due to background light photons only

$\Delta \Phi_{\mathrm{s},i,j}$

radiant flux emitted by $\Delta T_{i,j}$, due to VCSEL photons only

$\Delta I_{\mathrm{e},i,j}$

detected radiant intensity by the D-TOF LIDAR sensor, due to VCSEL photons only

$\Delta T_{i,j}$

sub-element of the target surface

$\Delta\Omega_{i,j}$

solid angle between $\Delta T_{i,j}$ and the VCSEL

$\Delta\Phi_{\mathrm{e},i,j}$

radiant flux received at $\Delta T_{i,j}$, due to VCSEL photons only

$\Delta_{\mathrm{TDC}}$

TDC bin size

$\eta_{\mathrm{f}}$

optical bandpass filter transmittance at $\lambda$

$\eta_{\mathrm{l}}$

lens transmittance at $\lambda$

$\hat{\Delta T_{i,j}}$

area of $\Delta T_{i,j}$

$\hat{d_{\mathrm{T}}}$

estimated target depth

$\lambda$

VCSEL center wavelength

$\mathbf{H_\mathrm{T}}$

timestamp histogram

$\mathcal{I}_{1}(d_{\mathrm{T}})$

Fisher information observed in one single detection

$\mathcal{I}_{\mathrm{\mathbf{h}}}(d_{\mathrm{T}})$

Fisher information observed in a $\mathbf{H_\mathrm{T}}$

$\Phi_{\mathrm{n}}$

radiant flux received at the lens of the D-TOF LIDAR sensor, due to background light photons only

$\Phi_{\mathrm{s}}$

radiant flux received at the lens of the D-TOF LIDAR sensor, due to VCSEL photons only

$\rho$

reflectivity of the target surface

$\sigma_{\mathrm{l}}$

VCSEL pulse width in terms of standard deviation

$\sigma_{\mathrm{MSE}}$

estimator’s standard deviation

$\theta_{\mathrm{e},i,j}$

angle between a VCSEL ray trajectory and the normal to the surface of $\Delta T_{i,j}$

$\theta_{\mathrm{s},i,j}$

angle between the sensor and the normal vector of $\Delta T_{i,j}$

$\xi$

equivalence constant between a systems with single and multiple TDCs

$A'_{\mathrm{l}}$

equivalent area of the receiver lens for a subgroup of SPADs and single TDC

$A_{\mathrm{l}}$

area of the receiver lens

$B_{\mathrm{MSE}}$

estimator’s bias

$d_{\mathrm{e},i,j}$

the distance from the VCSEL to $\Delta T_{i,j}$

$d_{\mathrm{s},i,j}$

distance from $\Delta T_{i,j}$ to the D-TOF LIDAR sensor

$d_{\mathrm{T}}$

target depth

$E_{\mathrm{n}}$

equivalent solar irradiance at sea level

$f$

VCSEL repetition frequency

$f(t\mid 2 \frac{d_{\mathrm{T}}}{c},\sigma_{\mathrm{l}})$

normalized VCSEL intensity over time or timestamp PDF of unsorted photons

$g(t\mid N_{\mathrm{r}})$

timestamp PDF of the first detected background photon

$l'(t\mid d_{\mathrm{T}})$

normalized likelihood function with respect to $T_\mathrm{h}$

$l(t\mid d_{\mathrm{T}})$

likelihood function

$N_{\mathrm{h}}$

Total number of counts contained in $\mathbf{H_\mathrm{T}}$

$N_{\mathrm{r}}$

noise rate per time unit

$N_{\mathrm{TCSPC}}$

total number of TCSPC cycles

$N_{\mathrm{TCSPC}}'$

equivalent TCSPC cycles of a single-TDC sensor utilized for modeling a multiple-TDC sensor

$p_{1}(t\mid d_{\mathrm{T}})$

timestamp PDF of the first detected photon

$P_{\mathrm{s}}$

VCSEL optical power

$R$

timestamp set size

$S_{\mathrm{N+P}}(t)$

unsorted timestamp CDF of background light and VCSEL photons

$s_{\mathrm{N+P}}(t)$

unsorted timestamp PDF of background light and VCSEL photons

$S_{\mathrm{N}}(t)$

unsorted timestamp CDF of background light photons

$s_{\mathrm{N}}(t)$

unsorted timestamp PDF of background light photons

$S_{\mathrm{p}}$

detected number of VCSEL photons per TCSPC cycle

$T$

maximum time utilized to calculate $p_{1}(t\mid d_{\mathrm{T}})$

$T_{\mathrm{h}}$

maximum time range of the timestamp histogram ${\mathbf{H}_{\mathrm T}}$

$U(t\mid {[0,\Delta_{\mathrm{TDC}}]})$

Uniform distribution with support $[0,\Delta_{\mathrm{TDC}}]$

ANNs

Artificial neural networks

CRLB

Cramér–Rao lower bound

DCR

dark count rate

FF

fill factor

FOI

field-of-illumination

FOV

field-of-view

FPF

find peaks function

FSR

full scale range

M

total number of SPADs

MC

Monte Carlo

MSE

mean-square-error

PDE

photon detection efficiency

PDF

probability density function

PDP

photon detection probability

SPAD

single-photon avalanche diode

TCSPC

time-correlated single-photon counting

TDC

time-to-digital converter

VCSEL

vertical-cavity surface-emitting laser

Appendix A System parameters

Since we focus our analysis on smartphone LIDAR applications, we define our system parameters based on the characteristics of commercially available sensors [31–34]. Tables 3, 4 and 5 show parameters that are not classified as either variable or fixed parameters and are used in the simulations.

Appendix B PDP influence in depth range

In the depth range analysis, we find that the PDP has a drastic impact on $N_{\mathrm{r}}$, which is one of the main factors limiting the depth range. Therefore, we evaluate the effect of the PDP separately by sweeping its value from 1% to 4%. In this simulation, we choose the highest background-light condition and target reflectivity (see Fig. 11). The value of $E_{\mathrm{n}}$ is selected according to [32], and the PDP sweep range is derived from [37].


Table 3. SPAD array parameters.


Table 4. VCSEL parameters.


Table 5. Target surface parameters.

From Fig. 11, it can be observed that at high background noise, increasing the PDP decreases the depth range. The reason is that the probability of detecting a background photon before the arrival of the VCSEL photons increases exponentially with the PDP. Therefore, for high values of $E_{\mathrm{n}}$ and PDP, the VCSEL photons are missed. We consider a maximum distance of 0.5 m sufficient for a smartphone application [31–34], and so we fix the PDP value to 2 % in all of the simulations.

We consider $P_{\mathrm{s}}$ to be the most easily controllable parameter in the system. Thus, each time we simulate the depth range, we also sweep the VCSEL power up to a maximum value of 20 mW. We consider that if a target is too close to the sensor, for eye-safety reasons, the sensor can immediately turn off the VCSEL after one complete measurement cycle. Therefore, the exposure time is equal to the inverse of the repetition rate multiplied by $N_{\mathrm{TCSPC}}$. For example, the maximum value of $P_{\mathrm{s}}$ for a class 1 sensor with $N_{\mathrm{TCSPC}}=30000$ is equal to 12.77 mW [24].

Appendix C System with Multiple TDCs

In this section we expand the equations and models of section 2 in order to simulate systems composed of multiple TDCs. As shown in Fig. 1(a), the SPADs are divided into subgroups, each connected to a single TDC. For example, if we have 14 TDCs in the system, the total of 196 SPADs is divided into subgroups of 14 SPADs per TDC.

By modifying the values of $N_{\mathrm{TCSPC}}$ and $A_{\mathrm{l}}$, we can approximate the condition of having several smaller, statistically independent SPAD subgroups, each connected to a single TDC. Therefore, assuming that we have $\xi$ TDCs in total, we can recalculate an equivalent, smaller lens area as follows

$$A'_{\mathrm{l}} = \frac{A_{\mathrm{l}}}{\xi}.$$

By replacing $A_{\mathrm{l}}$ with $A'_{\mathrm{l}}$ in Eqs. (9) and (12), we can obtain a TOF histogram of a subgroup of SPADs connected to a single TDC. In this calculation, we assume that the light power is distributed uniformly across the SPAD array’s active area. Since we have multiple subgroups, the overall effect is accounted for by increasing $N_{\mathrm{TCSPC}}$, as follows

$$N_{\mathrm{TCSPC}}' = \xi N_{\mathrm{TCSPC}}.$$

We re-run the MC code to fully account for a system composed of multiple TDCs, and compare it to the model of a single TDC with modified $A'_{\mathrm{l}}$ and $N_{\mathrm{TCSPC}}'$. The output TOF histograms for both cases are shown in Fig. 12, and they are statistically equivalent.
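The two rescaling rules above reduce to a simple parameter mapping. The fragment below uses the 14-TDC example from the text; the lens area value is an assumption for illustration only.

```python
# Single-TDC equivalent of a multiple-TDC system: the lens area per TDC
# subgroup shrinks by xi, while the number of TCSPC cycles grows by xi.
xi = 14                        # total number of TDCs in the system
A_l = 2.0e-6                   # receiver lens area in m^2 (assumed value)
N_TCSPC = 30000

A_l_eq = A_l / xi              # equivalent lens area, A'_l = A_l / xi
N_TCSPC_eq = xi * N_TCSPC      # equivalent cycles, N'_TCSPC = xi * N_TCSPC
```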


Fig. 12. Comparison between the fully modeled multiple-TDC system and the equivalent single-TDC system. The system parameters are $P_{\mathrm{s}}$ = 7.36 mW, $N_{\mathrm{TCSPC}} = 30000$, $\Delta _{\mathrm{TDC}} = {25}\;\textrm{ps}$, $E_{\mathrm{n}}$ = 0.4 W m−2 nm−1, $\rho = {60}\;{\%}$. (a) TOF histogram obtained with the fully modeled 14-TDC system. (b) TOF histogram obtained by modeling the equivalent single-TDC system.


Also, it is important to note that just by increasing the number of TDCs in the system, without using any background-light rejection method, the noise counts are reduced and the signal counts are increased.

Appendix D Improved find peak function

According to the analysis in section 3.2, the signal peak found in $\mathbf {H_{\mathrm{T}}}$ is shifted with respect to the VCSEL intensity peak. The reason is that the shape of the signal peak is left-skewed with respect to the VCSEL pulse, because the system can only detect the first photon (see Fig. 13). So, in order to compensate for this bias, we utilize a simple linear regression, which is expressed as follows

$$\hat{d_{\mathrm{T}}}' = K \hat{d_{\mathrm{T}}} + B,$$
where $K$ and $B$ are the coefficients of the linear regression. These two coefficients are calculated by fitting the typically obtained depth to the target depth.
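A minimal sketch of this fit is an ordinary degree-one least-squares regression. The raw estimates below are synthetic stand-ins for FPF outputs (an assumed gain and offset), not measured data.

```python
import numpy as np

# Synthetic calibration data: known target depths and hypothetical biased
# raw FPF estimates (assumed gain 0.97 and offset -4 mm for illustration).
d_true = np.linspace(0.05, 0.5, 10)       # ground-truth depths in m
d_raw = 0.97 * d_true - 0.004             # stand-in biased estimates

# Fit d_true ~ K * d_raw + B with a first-degree polynomial.
K, B = np.polyfit(d_raw, d_true, 1)

# Apply the correction of the equation above.
d_corr = K * d_raw + B
```

On exactly linear synthetic data the fit recovers the inverse mapping, so the corrected estimates coincide with the target depths.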


Fig. 13. $p_{1}(t|d_{\mathrm{T}})$ against VCSEL pulse. The system parameters are $P_{\mathrm{s}}$ = 7.36 mW, $E_{\mathrm{n}}$ = 0.04 W m−2 nm−1, $d_{\mathrm{T}}= {0.2}\;\textrm{m}$, and $\rho$ is 8 %.


We add another improvement to the FPF: a simple background-subtraction algorithm, which is performed in three steps. First, background-light timestamps are measured with the VCSEL intentionally turned off, and we save the corresponding timestamp histogram. Next, we turn on the VCSEL, capture timestamps, and generate a new timestamp histogram. Finally, we input to the FPF the difference between the two previous histograms. In this way, we remove the exponential shape contained in $\mathbf {H_{\mathrm{T}}}$.
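The three steps can be sketched as follows, with Poisson-sampled histograms and an assumed exponential background shape standing in for measured data; bin counts, rates, and the signal location are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
bins = 256

# Step 1: capture a histogram with the VCSEL off (background only).
# The background rate is given an exponential shape, as in the text.
bg_rate = 30.0 * np.exp(-np.arange(bins) / 80.0)
H_bg = rng.poisson(bg_rate)

# Step 2: capture a histogram with the VCSEL on (background + signal peak).
signal = np.zeros(bins)
signal[100:110] = 60.0           # stand-in signal peak location and height
H_on = rng.poisson(bg_rate + signal)

# Step 3: feed the difference to the peak finder (negatives clipped).
H_diff = np.clip(H_on.astype(int) - H_bg.astype(int), 0, None)
peak_bin = int(np.argmax(H_diff))
```

After subtraction, the exponential background trend is removed and the remaining maximum falls inside the signal-peak bins.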

Appendix E Influence of TDC timing jitter

We perform an extra MC simulation to depict the influence of the TDC timing jitter on the system performance. It is important to note that the SPAD jitter, VCSEL pulse width, and TDC timing jitter cannot be added in quadrature. The reason is that the photons go through a sorting process before they are detected by the TDC. Also, due to this reason, the laser peak width in the timestamp histogram can be narrower than the quadrature sum of the SPAD jitter and VCSEL pulse width, given that the average number of detected photons is larger than unity [38].

Figure 14 shows $\mathbf {H_\mathrm{T}}$ when we model the TDC timing jitter as a Gaussian R.V., which is added to the timestamps after the sorting process. We intentionally reduce $\Delta _{\mathrm{TDC}}$ to 1 ps and increase $N_{\mathrm{TCSPC}}$ to $10^7$ to observe the influence of the timing jitter on $\mathbf {H_\mathrm{T}}$. As shown in Fig. 14, a TDC timing jitter of up to 100 ps FWHM has no significant influence on the width of the laser-photon peak in $\mathbf {H_\mathrm{T}}$.


Fig. 14. $\mathbf {H_\mathrm{T}}$ calculated with the MC model for several TDC timing jitter values. The system parameters are $\rho$ is 8 %, $P_{\mathrm{s}}$ = 7.36 mW, $N_{\mathrm{TCSPC}} = 10^7$, $\Delta _{\mathrm{TDC}} = {1}\;\textrm{ps}$, $E_{\mathrm{n}}$ = 0.4 W m−2 nm−1, and $d_{\mathrm{T}}= {0.5}\;\textrm{m}$.


Acknowledgments

Silicon Integrated co-funded this research.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. D. Göhring, M. Wang, M. Schnürmacher, et al., “Radar/lidar sensor fusion for car-following on highways,” in The 5th International Conference on Automation, Robotics and Applications, (2011), pp. 407–412.

2. H. Cho, Y.-W. Seo, B. V. Kumar, et al., “A multi-sensor fusion system for moving object detection and tracking in urban driving environments,” in 2014 IEEE International Conference on Robotics and Automation (ICRA), (2014), pp. 1836–1843.

3. J. Kocic, N. Jovicic, and V. Drndarevic, “Sensors and sensor fusion in autonomous vehicles,” in 2018 26th Telecommunications Forum (TELFOR), (2018), pp. 420–425.

4. M. Mahlisch, R. Schweiger, W. Ritter, et al., “Sensorfusion using spatio-temporal aligned video and lidar for improved vehicle detection,” in 2006 IEEE Intelligent Vehicles Symposium, (2006), pp. 424–429.

5. R. Halterman and M. Bruch, “Velodyne HDL-64E lidar for unmanned surface vehicle obstacle detection,” in Unmanned Systems Technology XII, vol. 7692, G. R. Gerhart, D. W. Gage, and C. M. Shoemaker, eds., International Society for Optics and Photonics (SPIE, 2010), pp. 123–130.

6. cnet, “Lidar is one of the iphone and ipad’s coolest tricks. here’s what else it can do,” https://www.cnet.com/tech/mobile/lidar-is-one-of-the-iphone-ipad-coolest-tricks-its-only-getting-better/. (accessed January 17, 2022).

7. J. H. Gao and L.-S. Peh, “A smartphone-based laser distance sensor for outdoor environments,” in 2016 IEEE International Conference on Robotics and Automation (ICRA), (2016), pp. 2922–2929.

8. J. A. P. Y. Phun and C. Safitri, “Smartphone authentication with hand gesture recognition (HGR) using LiDAR,” in 2021 5th International Conference on Informatics and Computational Sciences (ICICoS), (2021), pp. 93–98.

9. T. Wu, J. Liu, Z. Li, et al., “Accurate smartphone indoor visual positioning based on a high-precision 3D photorealistic map,” Sensors 18(6), 1974 (2018). [CrossRef]  

10. G. Luetzenburg, A. Kroon, and A. A. Bjørk, “Evaluation of the apple iphone 12 pro LiDAR for an application in geosciences,” Sci. Rep. 11(1), 22221 (2021). [CrossRef]  

11. A. Payne, A. Daniel, A. Mehta, et al., “7.6 a 512×424 CMOS 3D time-of-flight image sensor with multi-frequency photo-demodulation up to 130mhz and 2gs/s ADC,” in 2014 IEEE International Solid-State Circuits Conference Digest of Technical Papers (ISSCC), (2014), pp. 134–135.

12. M. Perenzoni, D. Perenzoni, and D. Stoppa, “A 64 × 64-pixels digital silicon photomultiplier direct TOF sensor with 100-MPhotons/s/pixel background rejection and imaging/altimeter mode with 0.14% precision up to 6 km for spacecraft navigation and landing,” IEEE J. Solid-State Circuits 52(1), 151–160 (2017). [CrossRef]  

13. C. Niclass, M. Soga, H. Matsubara, et al., “A 100-m range 10-frame/s 340 × 96-pixel time-of-flight depth sensor in 0.18-μm CMOS,” IEEE J. Solid-State Circuits 48(2), 559–572 (2013). [CrossRef]  

14. S. W. Hutchings, N. Johnston, I. Gyongy, et al., “A reconfigurable 3-D-stacked SPAD imager with in-pixel histogramming for flash LIDAR or high-speed time-of-flight imaging,” IEEE J. Solid-State Circuits 54(11), 2947–2956 (2019). [CrossRef]  

15. A. Süss, V. Rochus, M. Rosmeulen, et al., “Benchmarking time-of-flight based depth measurement techniques,” in Smart Photonic and Optoelectronic Integrated Circuits XVIII, vol. 9751, S. He, E.-H. Lee, and L. A. Eldada, eds., International Society for Optics and Photonics (SPIE, 2016), pp. 199–217.

16. F. Villa, F. Severini, F. Madonini, et al., “SPADs and SiPMs arrays for long-range high-speed light detection and ranging (LiDAR),” Sensors 21(11), 3839 (2021). [CrossRef]  

17. R. Roriz, J. Cabral, and T. Gomes, “Automotive lidar technology: A survey,” IEEE Trans. Intell. Transport. Syst. 23(7), 6282–6297 (2022). [CrossRef]  

18. P. Padmanabhan, C. Zhang, and E. Charbon, “Modeling and analysis of a direct time-of-flight sensor architecture for LiDAR applications,” Sensors 19(24), 5464 (2019). [CrossRef]  

19. C. Niclass, M. Soga, H. Matsubara, et al., “A 0.18-μ m CMOS SoC for a 100-m-range 10-frame/s 200×96-pixel time-of-flight depth sensor,” IEEE J. Solid-State Circuits 49(1), 315–330 (2014). [CrossRef]  

20. A. Ronchini Ximenes, “Modular time-of-flight image sensor for light detection and ranging: A digital approach to lidar,” https://doi.org/10.4233/uuid:c434368a-9a67-45de-a66f-f5dc30430e03 (2019). (accessed April 03, 2023).

21. A. Ronchini Ximenes, P. Padmanabhan, and E. Charbon, “Mutually coupled time-to-digital converters (tdcs) for direct time-of-flight (dtof) image sensors,” Sensors 18(10), 3413 (2018). [CrossRef]  

22. M. W. Fishburn, Fundamentals of CMOS single-photon avalanche diodes (Delft University of Technology, 2012), pp. 55–68.

23. Mathworks, “findpeaks function reference,” https://www.mathworks.com/help/signal/ref/findpeaks.html. (accessed April 03, 2023).

24. M. Chen, 1D-TOF: system modeling, processing algorithm design and implementation (Delft University of Technology, 2022), pp. 11–13, MSc dissertation.

25. B. Arnold, N. Balakrishnan, and H. Nagaraja, A First Course in Order Statistics, Classics in Applied Mathematics (Society for Industrial and Applied Mathematics, 1992).

26. M. Fishburn and E. Charbon, “System tradeoffs in gamma-ray detection utilizing spad arrays and scintillators,” IEEE Trans. Nucl. Sci. 57(5), 2549–2557 (2010). [CrossRef]  

27. A. Papoulis, Probability, Random Variables, and Stochastic Processes (McGraw-Hill, 1984), pp. 55–58,76–77, McGraw-Hill series in electrical engineering. Communications and Information theory.

28. S. Seifert, H. van Dam, and D. Schaart, “The lower bound on the timing resolution of scintillation detectors,” Phys. Med. Biol. 57(7), 1797 (2012). [CrossRef]  

29. E. Venialgo, S. Mandai, T. Gong, et al., “Time estimation with multichannel digital silicon photomultipliers,” Phys. Med. Biol. 60(6), 2435 (2015). [CrossRef]  

30. M. Hagan and M. Menhaj, “Training feedforward networks with the marquardt algorithm,” IEEE Trans. Neural Netw. 5(6), 989–993 (1994). [CrossRef]  

31. AMS, “TMF882X datasheet,” https://ams.com/documents/20143/6015057/TMF882X_DS000693_5-00.pdf. (accessed April 03, 2023).

32. AMS, “TMF8701 datasheet,” https://ams.com/documents/20143/4409399/TMF8701_DS000602_8-00.pdf. (accessed April 03, 2023).

33. STmicroelectronics, “VL53L3CX datasheet,” https://www.st.com/resource/en/datasheet/vl53l3cx.pdf. (accessed April 03, 2023).

34. STmicroelectronics, “VL6180 datasheet,” https://www.st.com/resource/en/datasheet/vl6180.pdf. (accessed April 03, 2023).

35. Thorlabs, “N-BK7 transmittance,” https://www.thorlabs.com/images/TabImages/Uncoated_N-BK7_Transmission_780.gif. (accessed April 03, 2023).

36. Thorlabs, “FBH940-10 specifications,” https://www.thorlabs.com/images/tabimages/FBH940-10_Transmission_G1-370.gif. (accessed April 03, 2023).

37. ONsemiconductor, “RB-series SiPM sensors datasheet,” https://nl.mouser.com/datasheet/2/308/MICRORB-SERIES-D-1489599.pdf. (accessed April 05, 2023).

38. S. Mandai, E. Venialgo, and E. Charbon, “Timing optimization utilizing order statistics and multichannel digital silicon photomultipliers,” Opt. Lett. 39(3), 552–554 (2014). [CrossRef]  




Figures (14)

Fig. 1. Diagram and operation of a generic LIDAR sensor. (a) Sensor block diagram. (b) Generic timing diagram.
Fig. 2. Simulation and signal processing flow.
Fig. 3. Setup of the optical simulation. (a) Diagram of the forward emission of the VCSEL light and background illumination. (b) Diagram of the SPAD array's detection of the backward reflection.
Fig. 4. Depth range under variable and fixed parameter conditions; each panel shows the depth range as a function of $P_\mathrm{s}$. (a) Depth range with $N_{\mathrm{TCSPC}}$ as parameter. (b) Depth range with $E_{\mathrm{n}}$ as parameter. (c) Depth range with $\rho$ as parameter. (d) Depth range under extreme conditions of $\rho$ and $E_{\mathrm{n}}$. When the corresponding parameters are not swept, they are set as $N_{\mathrm{TCSPC}} = 30000$, $\Delta _{\mathrm{TDC}} = 25 \;\textrm{ps}$, $E_{\mathrm{n}}$ = 0.1 W m⁻² nm⁻¹, and $\rho = 8\,\%$.
Fig. 5. Depth resolution under variable parameter conditions, subdivided into $B_{\mathrm{MSE}}$ and $\sigma _{\mathrm{MSE}}$. (a) $\sqrt {\mathrm{\hat {MSE}}}$ with $N_{\mathrm{TCSPC}}$ as parameter. (b) $B_{\mathrm{MSE}}$ and $\sigma _{\mathrm{MSE}}$ with $N_{\mathrm{TCSPC}}$ as parameter. (c) $\sqrt {\mathrm{\hat {MSE}}}$ with $P_{\mathrm{s}}$ as parameter. (d) $B_{\mathrm{MSE}}$ and $\sigma _{\mathrm{MSE}}$ with $P_{\mathrm{s}}$ as parameter. When the corresponding parameters are not swept, they are set as $P_{\mathrm{s}}$ = 7.36 mW, $N_{\mathrm{TCSPC}} = 10000$, $\Delta _{\mathrm{TDC}} = 25 \;\textrm{ps}$, $E_{\mathrm{n}}$ = 0.1 W m⁻² nm⁻¹, and $\rho = 8\,\%$.
Fig. 6. Depth resolution under fixed parameter conditions, subdivided into $B_{\mathrm{MSE}}$ and $\sigma _{\mathrm{MSE}}$. (a) $\sqrt {\mathrm{\hat {MSE}}}$ with $E_{\mathrm{n}}$ as parameter. (b) $B_{\mathrm{MSE}}$ and $\sigma _{\mathrm{MSE}}$ with $E_{\mathrm{n}}$ as parameter. (c) $\sqrt {\mathrm{\hat {MSE}}}$ with $\rho$ as parameter. (d) $B_{\mathrm{MSE}}$ and $\sigma _{\mathrm{MSE}}$ with $\rho$ as parameter. When the corresponding parameters are not swept, they are set as $P_{\mathrm{s}}$ = 7.36 mW, $N_{\mathrm{TCSPC}} = 10000$, $\Delta _{\mathrm{TDC}} = 25 \;\textrm{ps}$, $E_{\mathrm{n}}$ = 0.1 W m⁻² nm⁻¹, and $\rho = 8\,\%$.
Fig. 7. $p_{1}(t\mid d_{\mathrm{T}})$ calculated with the analytical and MC models for two different reflectivity conditions and two different $T$ values. The system parameters are $P_{\mathrm{s}}$ = 7.36 mW, $N_{\mathrm{TCSPC}} = 10^7$, $\Delta _{\mathrm{TDC}} = {1}\;\textrm{ps}$, $E_{\mathrm{n}}$ = 0.4 W m⁻² nm⁻¹, and $d_{\mathrm{T}}= {0.5}\;\textrm{m}$. In (a) $\rho = 8\,\%$ and in (b) $\rho = 60\,\%$.
Fig. 8. CRLB calculated for two reflectivity conditions and two different $T$ values. The system parameters are $P_{\mathrm{s}}$ = 7.36 mW, $N_{\mathrm{TCSPC}} = 30000$, $\Delta _{\mathrm{TDC}} = {25}\;\textrm{ps}$, and $E_{\mathrm{n}}$ = 0.4 W m⁻² nm⁻¹. In (a) $\rho = 8\,\%$ and in (b) $\rho = 60\,\%$.
Fig. 9. Linearity evaluation of the ANN for the testing set.
Fig. 10. Distribution of the estimation error over the testing set for the improved FPF and the ANN, depicted as a boxplot and compared to the CRLB. (a) Improved FPF performance. (b) ANN performance. The CRLB is calculated with $P_{\mathrm{s}}$ = 7.36 mW, $N_{\mathrm{TCSPC}} = 30000$, $\Delta _{\mathrm{TDC}} = {25}\;\textrm{ps}$, $E_{\mathrm{n}}$ = 0.4 W m⁻² nm⁻¹, $\rho = 60\,\%$, and $T=T_{\mathrm{h}} \cdot 10^4$.
Fig. 11. Depth range under different PDP conditions. The simulation parameters are set as $N_{\mathrm{TCSPC}}$ = 30000, $\Delta _{\mathrm{TDC}} = 25 \;\textrm{ps}$, $E_{\mathrm{n}}$ = 0.4 W m⁻² nm⁻¹, and $\rho = 60\,\%$.
Fig. 12. Comparison between the fully modeled multiple-TDC system and the equivalent single-TDC system. The system parameters are $P_{\mathrm{s}}$ = 7.36 mW, $N_{\mathrm{TCSPC}} = 30000$, $\Delta _{\mathrm{TDC}} = {25}\;\textrm{ps}$, $E_{\mathrm{n}}$ = 0.4 W m⁻² nm⁻¹, and $\rho = {60}\;{\%}$. (a) TOF histogram obtained with the fully modeled 14-TDC system. (b) TOF histogram obtained by modeling the equivalent single-TDC system.
Fig. 13. $p_{1}(t|d_{\mathrm{T}})$ compared against the VCSEL pulse. The system parameters are $P_{\mathrm{s}}$ = 7.36 mW, $E_{\mathrm{n}}$ = 0.04 W m⁻² nm⁻¹, $d_{\mathrm{T}}= {0.2}\;\textrm{m}$, and $\rho = 8\,\%$.
Fig. 14. $\mathbf {H_\mathrm{T}}$ calculated with the MC model for several TDC timing jitter values. The system parameters are $\rho = 8\,\%$, $P_{\mathrm{s}}$ = 7.36 mW, $N_{\mathrm{TCSPC}} = 10^7$, $\Delta _{\mathrm{TDC}} = {1}\;\textrm{ps}$, $E_{\mathrm{n}}$ = 0.4 W m⁻² nm⁻¹, and $d_{\mathrm{T}}= {0.5}\;\textrm{m}$.

Tables (5)

Table 1. Range of parameter sweeps in the ANN training and testing sets. The fixed parameters are $N_{\mathrm{TCSPC}} = 30000$, $\Delta_{\mathrm{TDC}} = 25\;\textrm{ps}$, and $P_{\mathrm{s}}$ = 7.36 mW.
Table 2. Performance summary of the estimated depth, shown as $\bar{\hat{d_{\mathrm{T}}}} \pm 2\sigma_{\hat{d_{\mathrm{T}}}}$.
Table 3. SPAD array parameters.
Table 4. VCSEL parameters.
Table 5. Target surface parameters.

Equations (28)


$$\Omega_{\mathrm{FOI}} = \int_{0}^{2\pi}\!\!\int_{0}^{\theta_{\mathrm{e}}} \sin\theta \,\mathrm{d}\theta \,\mathrm{d}\phi = 4\pi \sin^{2}\frac{\theta_{\mathrm{e}}}{2} = 4\pi \sin^{2}\frac{\mathrm{FOI}}{4}.$$
$$d_{\mathrm{e},i,j} = \frac{d_{\mathrm{T}}}{\cos\theta_{\mathrm{e},i,j}}.$$
$$\Delta\Omega_{i,j} = \frac{\Delta\hat{T_{i,j}}}{d_{\mathrm{e},i,j}^{2}},$$
$$\Delta\Phi_{\mathrm{e},i,j} = \begin{cases} \dfrac{P_{\mathrm{s}}\,\Delta\hat{T_{i,j}}}{4\pi \sin^{2}\!\left(\frac{\mathrm{FOI}}{4}\right) d_{\mathrm{e},i,j}^{2}} \cos\theta_{\mathrm{e},i,j} & |\theta_{\mathrm{e},i,j}| \leq \frac{\mathrm{FOI}}{2} \\ 0 & |\theta_{\mathrm{e},i,j}| > \frac{\mathrm{FOI}}{2}. \end{cases}$$
$$\Delta\Phi_{\mathrm{s},i,j} = \rho\,\Delta\Phi_{\mathrm{e},i,j}.$$
$$\Delta I_{\mathrm{e},i,j} = I_{0,i,j} \cos\theta_{\mathrm{s},i,j}.$$
$$\Delta\Phi_{\mathrm{s},i,j} = \int_{\Omega} \Delta I_{\mathrm{e},i,j} \,\mathrm{d}\Omega = \pi I_{0,i,j}.$$
$$\Delta E_{\mathrm{e},i,j} = \frac{I_{\mathrm{e},i,j}}{r^{2}} = \frac{\Delta\Phi_{\mathrm{s},i,j} \cos\theta_{\mathrm{s},i,j}}{\pi d_{\mathrm{s},i,j}^{2}},$$
$$\Phi_{\mathrm{s}} = \sum_{i,j} \begin{cases} \dfrac{A_{\mathrm{l}}\,\Delta\Phi_{\mathrm{s},i,j} \left(\cos\theta_{\mathrm{s},i,j}\right)^{2}}{\pi d_{\mathrm{s},i,j}^{2}} & |\theta_{\mathrm{s},i,j}| \leq \frac{\mathrm{FOV}}{2} \\ 0 & |\theta_{\mathrm{s},i,j}| > \frac{\mathrm{FOV}}{2}. \end{cases}$$
$$S_{\mathrm{p}} = \Phi_{\mathrm{s}}\,\eta_{\mathrm{l}}\,\eta_{\mathrm{f}}^{2}\, \frac{\lambda}{\pi f h c}\,\mathrm{PDE},$$
$$\Delta\Phi_{\mathrm{n},i,j} = \rho\,E_{\mathrm{n}}\,\Delta\hat{T_{i,j}}.$$
$$\Phi_{\mathrm{n}} = \sum_{i,j} \begin{cases} \dfrac{A_{\mathrm{l}}\,\Delta\Phi_{\mathrm{n},i,j} \left(\cos\theta_{\mathrm{s},i,j}\right)^{2}}{\pi d_{\mathrm{s},i,j}^{2}} & |\theta_{\mathrm{s},i,j}| \leq \frac{\mathrm{FOV}}{2} \\ 0 & |\theta_{\mathrm{s},i,j}| > \frac{\mathrm{FOV}}{2}. \end{cases}$$
$$N_{\mathrm{r}} = \Phi_{\mathrm{n}}\,\eta_{\mathrm{l}}\,\eta_{\mathrm{f}}^{2}\, \frac{\lambda}{\pi h c}\,\mathrm{PDE}.$$
$$P(X = k) = \frac{S_{\mathrm{p}}^{k}}{k!} e^{-S_{\mathrm{p}}},$$
$$f\!\left(t \mid \tfrac{2 d_{\mathrm{T}}}{c}, \sigma_{\mathrm{l}}\right) = \mathcal{N}\!\left(t \mid \tfrac{2 d_{\mathrm{T}}}{c}, \sigma_{\mathrm{l}}\right),$$
$$g(t \mid N_{\mathrm{r}}) = \begin{cases} N_{\mathrm{r}} e^{-N_{\mathrm{r}} t} & t \geq 0 \\ 0 & t < 0. \end{cases}$$
$$s_{\mathrm{N}}(t) = \lim_{T \to +\infty} \begin{cases} 0 & \text{for } t < 0 \\ \frac{1}{T} & \text{for } t \geq 0. \end{cases}$$
$$R = \lim_{T \to +\infty} N_{\mathrm{r}} T.$$
$$n_{1}(t) = \lim_{T \to +\infty} R\left[1 - S_{\mathrm{N}}(t)\right]^{R-1} s_{\mathrm{N}}(t) = \lim_{T \to +\infty} R\left[1 - \frac{t}{T}\right]^{R-1} \frac{1}{T}\, H(t) = \lim_{T \to +\infty} N_{\mathrm{r}}\left[1 - \frac{t}{T}\right]^{N_{\mathrm{r}} T - 1} H(t) = N_{\mathrm{r}} e^{-N_{\mathrm{r}} t} H(t),$$
$$s_{\mathrm{N+P}}(t \mid d_{\mathrm{T}}) = \lim_{T \to +\infty} \begin{cases} 0 & \text{for } t < 0 \\ \frac{1}{T} + \frac{S_{\mathrm{p}}}{R}\, \mathcal{N}\!\left(t \mid \frac{2 d_{\mathrm{T}}}{c}, \sigma_{\mathrm{l}}\right) & \text{for } t \geq 0. \end{cases}$$
$$p_{1}(t \mid d_{\mathrm{T}}) = \lim_{T \to +\infty} R\left[1 - S_{\mathrm{N+P}}(t)\right]^{R-1} s_{\mathrm{N+P}}(t).$$
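The first-photon statistics above (a Poisson number of Gaussian-distributed signal returns, an exponentially distributed first background arrival, and detection of only the earliest photon per cycle) can be checked with a short Monte Carlo sketch. This is not the authors' simulator; all parameter values below (pulse width, count rates, window) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

C = 299_792_458.0          # speed of light (m/s)
d_t = 0.5                  # target distance (m), illustrative
sigma_l = 200e-12          # laser pulse standard deviation (s), assumed
s_p = 0.05                 # mean detected signal photons per cycle, assumed
n_r = 1e6                  # background detection rate (1/s), assumed
n_cycles = 30_000          # number of TCSPC cycles
t_h = 20e-9                # histogram window (s)
bin_w = 25e-12             # TDC bin width (s)

tof = 2.0 * d_t / C        # round-trip time of flight

first_hits = []
for _ in range(n_cycles):
    k = rng.poisson(s_p)                          # Poisson signal photon count
    sig = rng.normal(tof, sigma_l, size=k)        # Gaussian signal timestamps
    noise = rng.exponential(1.0 / n_r, size=1)    # first background arrival
    t_first = np.concatenate([sig, noise]).min()  # SPAD sees only the earliest photon
    if 0.0 <= t_first < t_h:
        first_hits.append(t_first)

hist, edges = np.histogram(first_hits, bins=int(t_h / bin_w), range=(0.0, t_h))
peak = int(np.argmax(hist))
peak_time = 0.5 * (edges[peak] + edges[peak + 1])
print(f"TOF = {tof*1e9:.3f} ns, histogram peak at {peak_time*1e9:.3f} ns")
```

With these (assumed) settings the histogram peak lands on the expected round-trip TOF of about 3.34 ns, with a thin exponential background floor, mirroring the shape of $p_{1}(t \mid d_{\mathrm{T}})$.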
$$l(t \mid d_{\mathrm{T}}) = p_{1}(t \mid d_{\mathrm{T}}) * U\!\left(t;\, [0, \Delta_{\mathrm{TDC}}]\right).$$
$$l'(t \mid d_{\mathrm{T}}) = \frac{l(t \mid d_{\mathrm{T}})}{\int_{0}^{T_{\mathrm{h}}} l(t \mid d_{\mathrm{T}}) \,\mathrm{d}t},$$
$$I_{1}(d_{\mathrm{T}}) = \int_{0}^{T_{\mathrm{h}}} \left[\frac{\partial}{\partial d_{\mathrm{T}}} l'(t \mid d_{\mathrm{T}})\right]^{2} \frac{1}{l'(t \mid d_{\mathrm{T}})} \,\mathrm{d}t.$$
$$I_{\mathrm{h}}(d_{\mathrm{T}}) = N_{\mathrm{h}}\, I_{1}.$$
$$A_{\mathrm{l}}' = \frac{A_{\mathrm{l}}}{\xi}.$$
$$N_{\mathrm{TCSPC}}' = \xi\, N_{\mathrm{TCSPC}}.$$
$$\hat{d_{\mathrm{T}}} = K\, d_{\mathrm{T}} + B,$$
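The CRLB recipe above, $I_{1}(d_{\mathrm{T}}) = \int [\partial_{d_{\mathrm{T}}} l']^{2}/l' \,\mathrm{d}t$ and $\sigma \geq 1/\sqrt{N_{\mathrm{h}} I_{1}}$, can be evaluated numerically on a discretized likelihood. The sketch below uses a deliberately simplified density (a Gaussian signal peak on a flat background instead of the full $p_{1}$ with TDC convolution), and all parameter values are assumptions, so the numbers are only indicative of the procedure.

```python
import numpy as np

C = 299_792_458.0
sigma_l = 200e-12      # pulse standard deviation (s), assumed
s_p = 0.05             # signal photons per cycle, assumed
n_r = 1e6              # background rate (1/s), assumed
t_h = 20e-9            # integration window T_h (s)
n_h = 30_000           # number of cycles N_h

t = np.linspace(0.0, t_h, 20_001)
dt = t[1] - t[0]

def likelihood(d):
    """Normalized timestamp density for depth d: Gaussian peak on flat background."""
    tof = 2.0 * d / C
    sig = s_p * np.exp(-0.5 * ((t - tof) / sigma_l) ** 2) / (sigma_l * np.sqrt(2 * np.pi))
    raw = sig + n_r                   # unnormalized signal + background mixture
    return raw / (raw.sum() * dt)     # normalize over [0, T_h]

d0, eps = 0.5, 1e-6                   # evaluation depth (m), finite-difference step (m)
l0 = likelihood(d0)
dl = (likelihood(d0 + eps) - likelihood(d0 - eps)) / (2 * eps)
fisher_1 = np.sum(dl ** 2 / l0) * dt              # single-cycle Fisher information (1/m^2)
crlb_sigma = 1.0 / np.sqrt(n_h * fisher_1)        # CRLB on depth standard deviation (m)
print(f"CRLB depth sigma at {d0} m: {crlb_sigma * 1e3:.3f} mm")
```

With a 200 ps pulse (about 30 mm per-photon depth uncertainty) averaged over roughly $10^{3}$ effective signal detections, the bound comes out in the sub-millimeter range, consistent with the shot-noise scaling $\sigma \propto 1/\sqrt{N_{\mathrm{h}}}$.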