Optica Publishing Group

Impact of blur on 3D laser imaging: Monte-Carlo modelling for underwater applications

Open Access

Abstract

3D laser imaging technology makes it possible to visualize objects hidden in turbid water. Such a technology mainly works at short distances (<50 m) because of the high attenuation of light in water. Therefore, a significant part of the scattering events from the water column is located outside the optical depth of field (DoF), which can induce optical blur on images. In this study, a model based on geometric optics is proposed to represent such optical blur. The model is then implemented in a Monte-Carlo scheme. Blur significantly affects the signal scattered from the water before the DoF in monostatic conditions, but has less impact in bi-static conditions. Furthermore, it is shown that blur enables a very large variance reduction of 2D images of objects situated within the DoF. Such an effect increases with the extinction coefficient.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

The ocean represents a key environment for many purposes: telecommunication through submarine infrastructures [1], oil transport by pipelines [2], and inspection of resources like hydrocarbons [3] and minerals [4]. It is therefore crucial to accurately detect and localize objects of different composition, size and shape using sensors that can be fitted to autonomous underwater vehicles (AUV) [5]. Underwater acoustic sensing has successfully been used for applications like oil plume inspection, seabed mapping [6] and 3D reconstruction [7]. Single beam and side-scan sonars are not able to resolve vertical features [8]. Multi-beam sonars enable 3D imaging with a fair resolution but are generally power consuming. On the other hand, underwater active optical sensing is a relevant discrete embeddable technology to detect and localize an object hidden in turbid water [9]. Such a technology can be based either on triangulation, 2D flash or 3D Time of Flight (ToF) sensors. Triangulation techniques provide very good depth accuracy $( < 1\; mm)$ over a short range $( < 2.5\; m)$. For higher ranges, 2D flash and 3D sensors are more accurate [10,11], but these technologies mainly work at moderate distances (<50 m) because of the high attenuation of light in water. 2D flash imaging allows a powerful filtering of the light backscattered from the turbid water [12] and therefore provides a 2D image of a given plane of interest. 3D technologies are based on a matrix sensor [13] or on a scanning system [14,15].

This study focuses on a 3D sensor based on a matrix of independent telemeters, namely Single Photon Avalanche Diodes (SPAD), coupled with Time-to-Digital Converters (TDC). The detector is triggered with a pulsed laser. Each pixel detects only one photon per laser pulse and returns a timing, or range distance. Such a system requires integration over several laser pulses to rebuild the 3D scene. In the case of high water turbidity and/or a far target of interest, the photons collected by the pixels mostly consist of photons that have been scattered by the water column rather than by the target. The optimization of the laser power and of the temporal gate driving the sensor is therefore required. Two-dimensional (2D) laser gated sensors typically use short temporal gates (1-5 ns), which can be adjusted precisely on the target of interest provided that the distance of the object is known. 3D sensors usually work with longer temporal gates, which allow both a full reconstruction of the object and its detection without any a priori knowledge of the sensor-target distance. However, such a longer temporal gate implies the integration of a higher amount of scattered light from the water column in-between the sensor and the target. In the case of short distance underwater imaging (<50 m), a large part of the scattering events can be located outside the Depth of Field (DoF). As a result, the evaluation of target and water scattering magnitudes is not straightforward and blur effects must be taken into consideration.

In this paper, the impact of blur on 3D laser imaging is investigated. A semi-analytical Monte-Carlo model [16] combined with a basic analytical blur model is described. Monte-Carlo simulations enable a statistical reconstruction of the signal for any system / scene configuration. This approach has proved useful in a wide range of applications such as the estimation of the impact of multiple scattering in the atmosphere [17-19], in turbid water [20-22] and for seabed mapping [23].

The model is then applied to a Proof of Concept (PoC) of a 3D laser imaging device developed at the ONERA institute (France). The paper is organized as follows. The blurred Monte-Carlo scheme is described in Section 2. Then, the relevant optical parameters used to describe both the imaging device and the water turbidity are outlined in Section 3. The impact of blur on the signal backscattered from the water column and from the scene is discussed in Section 4.

2. Method

2.1 Analytical description of the blur

A target located at a distance $L$ from an imaging device of focal length $f^{\prime}$ is considered. Tuning the focus on this plane of interest requires placing the sensor at a distance $x = \; {f^{{\prime}2}}/({L - f^{\prime}} )$ from the focal plane, as shown in Fig. 1. A scattering event at a shorter distance $l < L$ requires a sensor at a distance $y\; = \; {f^{{\prime}2}}/({l - f^{\prime}} ){\; } > x$ from the focal plane to get the optimal focus. A classical geometrical approach leads to the following expression for the blurry circle diameter B (Eq. (1)):

$$B = {\; }\frac{{y - x}}{{f^{\prime} + y}} \times {\; }\frac{{f^{\prime}}}{{NA}}$$
where $NA$ is the numerical aperture, $f^{\prime}/{\; }NA$ is the equivalent collection aperture diameter, and $x,y,f^{\prime}$ are positive numbers. As $l$ decreases from $L$ to 0, B increases from 0 (blurry circle reduced to a point) to ${f^\mathrm{^{\prime}}}/{\; }NA$.
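As a sanity check, Eq. (1) can be evaluated numerically. The short sketch below (illustrative Python, with hypothetical function names) uses the relations for $x$ and $y$ given above and parameter values similar to those used later in the paper ($f^{\prime} = 100\;mm$, $NA = 3.7$):

```python
def blur_diameter(L, l, f, NA):
    """Blurry-circle diameter B from Eq. (1), for optics focused at
    distance L and a scattering event at a shorter distance l.
    f is the focal length; f/NA is the collection aperture diameter."""
    x = f**2 / (L - f)        # sensor offset giving focus at L
    y = f**2 / (l - f)        # offset that would give focus at l (y > x)
    return (y - x) / (f + y) * (f / NA)

# B vanishes in the focal plane and grows toward f/NA as l decreases
b_focus = blur_diameter(L=5.0, l=5.0, f=0.1, NA=3.7)   # exactly 0: no blur
b_near = blur_diameter(L=5.0, l=1.0, f=0.1, NA=3.7)    # a few millimeters
```

As expected, the computed diameter stays bounded by $f^{\prime}/NA \approx 27\;mm$ for this focal length.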

Therefore, a scattering event that occurs on the target plane, or within the Depth of Field (DoF), reaches only one pixel of the sensor. The flux collected from outside the DoF generates a blurry circle on the focal plane. The photons that are initially dedicated to one pixel are “diluted” over several pixels, thus leading to a large decrease of the signal per pixel. A pixel located at the center of the sensor is not significantly impacted, as this dilution effect is compensated by the dilution of its neighbors. A pixel located at the edge of the sensor undergoes a flux migration out of the Field of View (FoV). Conversely, a photon from outside the FoV can now contribute to the total signal, since a scattered point source creates a blurry circle on the focal plane that may intersect the sensor.


Fig. 1. Scheme for the modelling of blur effects. The focus at a distance L (orange) is carried out by placing the sensor at a distance x from the focal plane. Any hydrosols that are outside the DoF (blue) are not conjugated with the sensor, thus resulting in the occurrence of blur.


For largely off-axis scattering events, a cutoff angle (Angle of View - AoV), above which the light is not transmitted toward the detector, is introduced (Fig. 2). Such an angle, which differs from the FoV, is required for an accurate estimation of the collected scattered flux coming from outside the DoF and FoV. Note that the AoV is independent of the sensor size, contrary to the FoV. It should be highlighted that a partial detection occurs when the scattering event happens outside the DoF but within the AoV.


Fig. 2. Illustration of Angle of View (AoV), Field of View (FoV) and Depth of Field (DoF). Full detection is achieved within the FoV and the DoF (denoted *). Partial detection occurs within the AoV but outside the DoF (**). There is no detection outside the AoV, nor outside the FoV within the DoF range (***).


Imaging optics are typically designed for a specific sensor because it remains difficult to cover large incidence angles while maintaining optimal properties such as stigmatism and a narrow Point Spread Function (PSF). As an example, fisheye lenses show a very large acceptance angle but a poor resolution. As a result, the Angle of View of commercial cameras is generally designed to be slightly larger than the Field of View, and the AoV is rarely documented by manufacturers. Yet, accurate modelling of blur effects requires knowledge of the AoV, as it directly drives the magnitude of the collected scattered flux from the water column outside the DoF. The estimation of the AoV of a given commercially available optical device is performed in Section 3.3. The blur model and the AoV are implemented in a numerical Monte-Carlo scheme in the following subsection.

2.2 Monte-Carlo photons model accounting for blur and angle of view effects

A detailed description of the photon scattering Monte-Carlo modelling approach is provided in [24]. An overview of the model is presented in this section. The photons are initialized based on a Gaussian profile. The configuration can be either monostatic, which means that the laser and the camera are collocated and the baseline is equal to zero, or bi-static, which means that the laser and the camera are located at separate positions whose distance is equal to the baseline. More details about the monostatic and bi-static configurations can be found in [25]. For a bi-static configuration, the initial direction of the photon is set such that the center of the laser beam intersects the center of the Field of View at the plane of interest. The first interaction distance is randomly drawn using the extinction coefficient ${K_{ext}}$ and the Beer-Lambert law. Then, the albedo of the turbid water determines whether the interaction between the photon and the medium is an absorption process or a scattering event. If the photon is scattered, the phase function and its associated cumulative distribution function are used to randomly determine the scattering angle. The photon can then undergo multiple scattering events. If the plane of interest is reached, the photon is backscattered using a Lambertian law and its magnitude is multiplied by the local albedo of the scene.
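The propagation steps described above can be summarized in code. The sketch below is a deliberately simplified, depth-only illustration with hypothetical function and variable names: the scattering rotation is reduced to an absolute direction cosine, whereas the actual scheme of [24] is fully three-dimensional.

```python
import numpy as np

rng = np.random.default_rng(0)

def propagate_photon(k_ext, albedo, cdf_angles, cdf_probs, L_target,
                     max_events=100):
    """One photon history: Beer-Lambert free paths drawn from k_ext,
    an absorption/scattering decision from the single-scattering
    albedo, and scattering angles obtained by inverting a tabulated
    phase-function CDF."""
    z, mu = 0.0, 1.0                              # depth and direction cosine
    for _ in range(max_events):
        s = -np.log(1.0 - rng.random()) / k_ext   # Beer-Lambert sampling
        z += mu * s
        if z >= L_target:
            return "target"                       # reached the plane of interest
        if z < 0.0:
            return "backscattered"                # came back toward the sensor
        if rng.random() > albedo:
            return "absorbed"                     # albedo decides the event type
        theta = np.interp(rng.random(), cdf_probs, cdf_angles)
        mu = np.cos(theta)                        # simplified direction update
    return "lost"
```

In the semi-analytical scheme, each scattering event additionally contributes an analytical flux toward the detector, as described below.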

For each scattering event from the water column or from the target surface plane, the analytical flux backscattered toward the detector is computed by considering the extinction of light over the remaining propagation distance and the angle of the last scattering event. The analytical flux is calculated only if the scattering event occurs inside the AoV domain. Then, the blurry circle diameter $B$ is computed based on Eq. (1). The energy that is initially dedicated to a unique pixel is now diluted over ${N_{pix}}$ pixels, with ${N_{pix}} = \pi {B^2}/4{S_{pix}}$ where ${S_{pix}}$ is the surface of one pixel. Each scattering event in the DoF is associated to a unique pixel; thus, the DoF area is defined by ${N_{pix}} \le 1$. For ${N_{pix}} > 1$, the impacted pixels (inside the blurry circle) are looked up within the sensor. The collected energy is then diluted by $1/{N_{pix}}$ for each impacted pixel.
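The dilution rule ${N_{pix}} = \pi {B^2}/4{S_{pix}}$ can be sketched as follows (illustrative names; a square pixel of side `pixel_pitch` is assumed):

```python
import math

def pixel_dilution(B, pixel_pitch):
    """Number of pixels N_pix covered by a blurry circle of diameter B,
    and the per-pixel dilution factor 1/N_pix. The area of one (square)
    pixel is S_pix = pixel_pitch**2."""
    s_pix = pixel_pitch ** 2
    n_pix = math.pi * B ** 2 / (4.0 * s_pix)
    if n_pix <= 1.0:
        return 1.0, 1.0        # within the DoF: a single pixel, no dilution
    return n_pix, 1.0 / n_pix
```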

The detection event is then incremented in a given temporal bin according to the corresponding global travel time. The overall result is a cube tensor characterized by two spatial coordinates and one temporal dimension. Low water turbidity conditions imply few scattering events from the water column; a photon is thus mostly retrieved in the temporal bin corresponding to a round trip to the plane of interest. With increased turbidity, a multiple scattering coupling between the water column and the target of interest occurs, and the correspondence between the travel time of the photon and the distance might be lost.

The next step of the modelling process consists in a temporal convolution of the 3D temporal signal (the cube tensor) with the temporal response of our device. The temporal resolution depends on the laser pulse duration, on the clock frequency of our camera and on additional internal jitters of the trigger scheme. At that stage, the full waveform information is available for each pixel. An additional random scheme is required to reproduce the single-photon statistics of the SPAD and thus obtain a typical experimental data set. Note, however, that such a photon-counting scheme was not used in this study, since the information contained within the full waveform received by the pixels is sufficient to study the impact of the blur.

3. Relevant optical parameters for the Monte Carlo model

3.1 Optical properties of the water constituents

The optical properties of the water constituents required for the Monte-Carlo numerical scheme are derived from the radiative transfer model OSOAA [26,27]. Such a model simulates the propagation of light from the top of the atmosphere to the bottom of the ocean (and vice-versa) within a coupled atmosphere-ocean system that includes a rough air-sea interface. In the current study, only the modules of the OSOAA model that provide the optical properties (absorption and scattering coefficients) of the water constituents, namely pure seawater and mineral-like particles, are used. The optical properties of pure seawater are known from theory [28,29], while the extinction coefficients and phase function of mineral-like particles are determined using Mie theory [30,31] based on the refractive index, the size distribution and the concentration of particles. The refractive index relative to water used here is 1.18. The size distribution is assumed to follow a standard Junge power law [32] with a slope coefficient of 4, which is representative of oceanic conditions [33]. The mineral-like particle concentration was varied from $0.02\; \textrm{to}\; 8{\; }mg/{m^3}$ to provide a relevant range of extinction coefficients ${K_{ext}} \in 0.05 - 1.32{\; }{m^{ - 1}}$, resulting in an optical depth range of $0.25 - 7.6$ for a round trip of $1{\; }m$. The overall phase function of the turbid water is mostly influenced by that of the mineral-like particles, since the molecular phase function of pure seawater is fairly isotropic. Thus, the overall phase function shows a peak in the forward direction (i.e., scattering angles near 0°): about 50% of the flux is scattered at scattering angles ranging from 0° to 15°. The modelling of peaked phase functions by a Monte-Carlo approach remains a challenging numerical task due to the rapid variations of the phase function with respect to the scattering angle near 0°.
To correctly model the phase function in the forward peak (i.e., scattering angles near 0°), a reverse-sampling method was used [34]. Such a method consists in searching, through a numerical optimization process, for a set of scattering angles ${\{{\theta_i^{rs}\; } \}_{i \in [{1,{N_{angle}}} ]}}$ such that the integrated phase function (i.e., the cumulative distribution function) between two consecutive angles is constant. In addition, a high number of scattering angles, namely ${N_{angle}} = 2000$, is used to make the sampling optimal.
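The reverse-sampling grid can be illustrated with a simple tabulated-CDF inversion rather than the optimization process of [34]; the idea is the same: consecutive angles $\theta_i^{rs}$ enclose equal amounts of scattered flux. The phase-function shape used below is purely hypothetical.

```python
import numpy as np

def equiprobable_angles(theta, phase, n_angle=2000):
    """Return a grid of scattering angles such that the cumulative
    distribution of the phase function is constant between two
    consecutive grid points (reverse sampling)."""
    pdf = phase * np.sin(theta)                  # solid-angle weighting
    cdf = np.cumsum(pdf)
    cdf = (cdf - cdf[0]) / (cdf[-1] - cdf[0])    # normalize to [0, 1]
    return np.interp(np.linspace(0.0, 1.0, n_angle), cdf, theta)

theta = np.linspace(1e-4, np.pi, 10000)
phase = np.exp(-theta / 0.1)     # hypothetical forward-peaked shape
grid = equiprobable_angles(theta, phase)
# drawing a scattering angle then reduces to indexing the grid
# with a uniform deviate: grid[int(u * (len(grid) - 1))]
```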

3.2 Description of the PoC

A submersible Proof of Concept (PoC) of a laser imaging device was developed at the ONERA institute (France). Such a PoC is used here to estimate the magnitude of blur effects on the temporal signal, as most of the optical parameters are well documented. The PoC is based on a POLIMI $32 \times 32\; $ SPAD [35]. The detector is triggered by a green pulsed HORUS laser. The SPAD camera is mounted on a Yamano remote-controlled zoom. Therefore, the focus, field of view and numerical aperture can be fine-tuned during the image acquisition under operational conditions (prototype in immersion phase). Laser light is brought to the medium using an optical fiber. The laser divergence is tunable with a translated lens. The optical parameters that characterize the laser imaging device are reported in Table 1. The Field of View is computed from the sensor diagonal ${\textrm{D}_\textrm{i}}$ and the focal length $f^{\prime}$ as follows (Eq. (2)):

$$\textrm{FOV} = 2 \times \textrm{atan}\left( {\frac{{{\textrm{D}_\textrm{i}}}}{{2\mathrm{f^{\prime}}}}} \right)$$
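For instance, Eq. (2) applied to the POLIMI diagonal of 6.8 mm (Section 3.3) gives a field of view of roughly 41° at the shortest focal length of 9 mm used in Fig. 3. A quick numerical check, with illustrative names:

```python
import math

def fov_deg(diagonal_mm, focal_mm):
    """Full field of view (Eq. 2) from the sensor diagonal and the
    focal length, both in millimeters."""
    return 2.0 * math.degrees(math.atan(diagonal_mm / (2.0 * focal_mm)))

fov_short = fov_deg(6.8, 9.0)     # about 41 degrees
fov_long = fov_deg(6.8, 169.0)    # about 2.3 degrees
```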


Table 1. Optical parameters that characterize the laser imaging device. The baseline is defined as the emitter-receiver distance; the laser divergence is provided here for the air medium and the water refractive index is taken into account for underwater imaging; the numerical aperture is the ratio of the focal length to the diameter of the diaphragm.

3.3 Estimation of the AoV

A large sensor, namely the VOXTEL $256 \times 256$ pixel camera [36], has been used to evaluate the AoV of the zoom of our device. The zoom can be mounted either on the POLIMI or on the VOXTEL. The VOXTEL diagonal size is 10.9 mm (versus 6.8 mm for the POLIMI, Table 1), which allows probing incidence angles that are not accessible with the well-adapted POLIMI. Figure 3 shows images obtained using the VOXTEL sensor for various focal lengths. Figures 3(a)-(b) are limited by the PoC's window, which is located $1\; \textrm{cm}$ in front of the entrance of the zoom. The dark part of the image (in the 4 corners) does not significantly vary for Fig. 3(c)-(f). The variation of the AoV thus appears to be linearly related to that of the FoV. The AoV parameter k is therefore defined as follows (Eq. (3)):

$$k = \frac{{AoV}}{{FoV}}$$
If k is greater than 1, a given zoom is well adapted to a given sensor. The dark parts of the images in Fig. 3(c)-(f) lead to the estimate $k = 0.8$ for the Yamano zoom combined with the VOXTEL camera. The corresponding value of k for the well-adapted POLIMI is then evaluated as $k = \frac{{10.9}}{{6.8}} \times 0.8 = 1.4$.


Fig. 3. Acquisition of images using the VOXTEL 256 × 256 pixel camera for focal lengths that vary from 9 mm to 169 mm. The target is located at a distance of 3.75 m from the camera. The measurements were carried out under ambient room light.


Laser imaging requires the use of an interferometric filter centered on the laser wavelength and mounted on the entrance face of the projective optics to decrease the influence of background light [37,38]. An off-axis incident beam impinging on the filter traverses a larger filter thickness, resulting in a blue shift of the transmission spectrum. This spectral shift increases with the incidence angle [39]. If the shift is larger than the filter bandwidth, the scattered beam will not reach the detector, leading to an additional AoV that is intrinsic to the interferometric filter. Laser images were acquired with and without the filter (Fig. 4); the angular transmission of the filter was measured using a 9 mm focal length. The filter component is an Edmund Optics EDM/84-114 (λ = 527 nm, FWHM = 20 nm). An AoV of 30° is determined, which is in good agreement with the Fabry-Perot calculation [40] that provides a value of 32°. For larger focal lengths, the AoV and FoV of the zoom decrease while the AoV of the filter is constant. Therefore, the AoV is dominated by the zoom for focal lengths larger than 20 mm. Such a consideration is only true for our PoC device and its large filter band of $20\,nm$. For daylight atmosphere applications, narrower filters might be required; the angular acceptance of the filter might then significantly decrease and drive the total AoV.


Fig. 4. Laser images acquired (a) without and (b) with filter; (c) laser angular transmission. The laser angular transmission is derived from a horizontal line of the ratio of the two images. A three-point moving average was used to reduce oscillations due to the scene inhomogeneity.


4. Results and discussion

4.1 Performance of the blur model

The performance of the blur Monte-Carlo model is evaluated on a typical configuration: a monostatic imaging laser is used to observe a target plane of interest (a simplified seabed model) located 5 meters from the sensor. Figure 5 shows a comparison between the signals integrated over the entire sensor as a function of time for the no-blur (blue) and blur (orange) Monte-Carlo schemes. A chaotic behavior of the blurred model is observed at the exit of the DoF area. It is likely due to the fact that a number of pixels ${N_{pix}}$ slightly larger than one leads to an inaccurate use of the surface ratio to determine the dilution factor, which deeply impacts the collection efficiency. The orange scheme is thus labelled the “coarse” blur model. Such an issue is handled by developing a more accurate determination of the impacted pixels. This is carried out by using a first “lookup” process to count the exact number of impacted pixels $\widetilde {{N_{pix}}}$ and then applying a global accurate dilution of $1/\widetilde {{N_{pix}}}$. The procedure is only performed for low values of ${N_{pix}}$ (${N_{pix}} < 400$), i.e., when the dilution error could be high. The results obtained for the corrected Monte-Carlo blur model that accounts for this procedure are shown (green line). It is observed that the corrected model is able to remove the oscillating regime around 2.5-3.5 m. This corrected version is used in the rest of the article.


Fig. 5. Effect of the blur model (coarse and corrected, see text) on the integrated signal per launched photon. A monostatic condition is considered. The parameters used are: $f^{\prime} = 100\; mm,\; \; NA = 3.7,\; \; div\; = \; 5.7^\circ ,\; \; {K_{ext}} = 0.25\; {m^{ - 1}},\; \; {w_{laser}} = 1\; mm$. The albedo value of the target plane is uniform and its value is $\omega = 1$. A number of $4 \times {10^{10}}$ photons were sent. The AoV is not considered here.


For the condition ${N_{pix}} < 1$, which corresponds to an observation within the Depth of Field, the agreement between the three models is highly satisfactory. Thus, the model without blur can be systematically used there. For the condition $1 < {N_{pix}} < 20$, an oscillating shape is clearly observed for the coarse blur model, for which the error is greater than 50%. ${N_{pix}}$ can be higher or lower than $\widetilde {{N_{pix}}}$ depending on the distance of the object, which directly impacts the collection efficiency. For ${N_{pix}} > 20$, a good agreement between the corrected and the “coarse” blur models is observed. The blur effect leads to the dilution of the energy outside the FoV near the detector. This latter point is examined in Section 4.2.

It should be highlighted that the computation time required for running the blur model is larger than for the no-blur model, mainly because of the look-up procedure performed at each scattering event. Note that the look-up pixel scheme is restricted to a square of side $B$ to minimize the CPU cost. The CPU ratio $R$, defined as the ratio between the CPU time of the “corrected” blur model and that of the “coarse” model, is computed with respect to the variation of the water turbidity through the parameter ${K_{ext}}$ (Table 2). It is observed that $R$ increases with ${K_{ext}}$; significant values of $R$ are obtained, typically $R > 3$, for very high values of ${K_{ext}}$ (${K_{ext}} > 2.5\; {m^{ - 1}}$). Such a variation of R with turbidity is expected, since an increase of ${K_{ext}}$ means an increase of the number of scattering events per photon. The number of scattering events per photon is hereafter referred to as the scattering multiplicity $M({{K_{ext}}} )$, which has been calculated for each run. The normalized CPU ratio $\tilde{R} = R/M$ is reported in Table 2. $\tilde{R}$ is fairly constant with turbidity: the CPU time per scattering event is increased by 25% to 33% when using the blur model.


Table 2. Variation of the ratio R between the computing CPU time of the blur model and that of the coarse model with respect to the extinction coefficient ${{\boldsymbol K}_{{\boldsymbol ext}}}$ (i.e., water turbidity). $\tilde{{\boldsymbol R}}$ is defined as the ratio between R and the scattering multiplicity M (see text).

4.2 Radiometric impact of the blur model outside the DoF

Two standard scenarios are examined here. In the bi-static configuration, representative of an imaging system, the detector is located at a given distance from the laser emission, namely $0.14\; m$, while the monostatic configuration is similar to a mono-detector lidar system. The integrated signal over the entire detector is calculated for both the blur and coarse models. The variation of the blur ratio, defined as the ratio between both models, with the imaging distance is shown in Fig. 6 for various values of the parameter $k\; = \; AoV/FoV$. The range of variation of ${K_{ext}}$ is $0.05\; {m^{ - 1}} - 0.82\; {m^{ - 1}}$. Since the blur ratio does not show a pronounced sensitivity to the ${K_{ext}}$ value (not shown), the results are presented for a ${K_{ext}}$ value of $0.25\; {m^{ - 1}}$. The blur effects lead to a significant decrease of the collected flux close to the detector for $k = 1$. This is because the blur circles are much larger than the detector at short distances ($< 1\; m$); the fluxes are thus moved out of the detector. The blur effects vanish significantly before the Depth of Field area is reached ($3.8\; m$). The increase of the parameter k leads to the collection of a higher light flux coming from outside the FoV. The initial decrease due to the blur effect appears to be higher for the monostatic configuration. The blur ratio remains lower than 1 even for $k > 5$, which is unrealistic for a classic imaging device. The impact of $k$ for the bi-static configuration is more important. The blur ratio can be higher than 1 because the illumination cone is totally outside the FoV right after the imaging system. Note that our laser imaging device PoC is designed for a value $k = 1.4$, which corresponds to a domain where the blur ratio varies significantly with $k$.
Note that for $k = 1$, the blur effect leads to a decrease of the signal after the integration of all scattered flux from the water column ($0\; - \; 4.8$ m) by a factor of $85$ (monostatic) but only by a factor of $1.13$ (bi-static) for ${K_{ext}} = 0.82\; {m^{ - 1}}$. The blur effects are therefore much lower for the bi-static configuration, since the most significant impact of the blur on the signal occurs before the recovering distance.


Fig. 6. Blur ratio (i.e., ratio of the energy between the blur and the coarse model) as a function of distance range for various values of $k\; = \; AoV/FoV$ and for the monostatic and bi-static ($14\; cm$) configurations. The same parameters as those used in Fig. 5 are used here, except for $f^{\prime} = 200\; mm$. The extinction coefficient value is ${K_{ext}} = 0.25\; {m^{ - 1}}$.


4.3 Variance reduction within the DoF

The snapshot images calculated for a temporal bin corresponding to the target plane located within the depth of field ($5\; m$) are examined here. As shown in Fig. 6, there is no impact of the blur on the integrated signal in this case. Figure 7 shows images obtained using the no-blur and the blur models for a contrast target (a white rectangular panel with a black circular panel at its center) located $5\; m$ away from the camera.


Fig. 7. Illustration of the insensitivity of the loss of contrast to the type of model used (i.e., blur model or no-blur model) for a low turbidity case (a-b-c, top panel with ${K_{ext}} = 0.1\; {m^{ - 1}}$) and a high turbidity case (d-e-f, bottom panel with ${K_{ext}} = 0.81\; {m^{ - 1}}$). The observed scene is a white rectangular panel ($\omega = 1$) with a black circular panel ($\omega = 0$) of radius $r = 0.15\; m$, located $5\; m$ away from the camera.


A sharp edge object exhibits the same sharp image for both the blur and no-blur models for both low (${K_{ext}} = 0.1\; {m^{ - 1}}$) and high (${K_{ext}} = 0.81\; {m^{ - 1}}$) turbidity conditions. Therefore, the blur model does not impact the image of an object located on the focus plane. However, the increase in turbidity leads to a reduction of the overall contrast of the image. This loss of contrast is caused by the photons that undergo a scattering event on the way back to the sensor, thus not reaching the expected pixel.

Figure 7(d), obtained with the no-blur model, appears to be noisier than Fig. 7(e), obtained with the blur model. It is thus likely that the blur model has an impact on the convergence speed. Such an impact has been studied with a homogeneous rectangular plane ($\omega = 1$) located $5\; m$ away from the camera, using both the no-blur and blur models for an increasing number of photons (Fig. 8).


Fig. 8. Images ($32 \times 32$) of the target plane for the no-blur model (a)-(b)-(c) and the blur model (d)-(e)-(f) for an increasing number of photons (left to right). The target signal represents the normalized photon parts collected by a given pixel for the temporal bin of interest. The parameters are as follows: $f^{\prime} = 100\; mm,\; \; NA = 3.7,\; \; B = 0\; m,\; \; div\; = \; 5.7^\circ ,\; \; {K_{ext}} = 0.82\; {m^{ - 1}},\; \; {w_{laser}} = 1\; mm,\; \; k = 1.$ The target is assumed to have a uniform albedo $\omega = 1$ and is located at $L = 5\; m$.


The image simulated using the blur model is much less noisy than that of the no-blur model. This can be explained by analyzing the light pathways from the target scene toward the detector. On the way back, a scattering event in the turbid water only affects one pixel for the no-blur model, while the blur effects dilute these scattering events, ultimately over the entire sensor. The scattered energy thus enables each pixel to receive more information per scattering event. The blur effect is therefore equivalent to an instrumental spatial convolution that smooths the resulting 2D image. Such a convolution cannot be carried out as a post-processing step because the spatial smoothing differs for each scattering distance. The blur effects therefore lead to a reduction of the variance. Images were simulated for increasing numbers of photons ${N_{ph}}$ to evaluate the speed-up in convergence. The quadratic relative difference ${Q_d}({{N_{ph}}} )$ between an image ${I_{i,j}}({{N_{ph}}} )$ produced using a number of photons ${N_{ph}}$ and the converged image $I_{i,j}^{conv}$ (Fig. 8(f)) has been calculated as a metric for quantifying the differences between the blur and no-blur models in terms of convergence speed. ${Q_d}$ is expressed as follows (Eq. (4)):

$${Q_d}({{N_{ph}}} )= \frac{1}{{{N_{pix}}}}\sqrt {\mathop \sum \limits_{({i,\; j} )\in {{[{1,{N_{pix}}} ]}^2}} {{\left( {\frac{{{I_{i,j}}({{N_{ph}}} )- I_{i,j}^{conv}}}{{I_{i,j}^{conv}}}} \right)}^2}} $$
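Equation (4) translates directly into code; the sketch below assumes ${N_{pix}}$ denotes the sensor side length (32 for the POLIMI), consistent with the index range $({i,j} )\in {[{1,{N_{pix}}} ]^2}$:

```python
import numpy as np

def quadratic_relative_difference(img, img_conv):
    """Q_d of Eq. (4): RMS-style relative difference between a
    partially converged image and the converged reference, divided
    by the sensor side length N_pix."""
    n_side = img.shape[0]
    rel = (img - img_conv) / img_conv
    return float(np.sqrt(np.sum(rel ** 2)) / n_side)

# a uniform 1% relative error over a 32 x 32 image yields Q_d = 1%
```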
${Q_d}$ is shown for various ${K_{ext}}$ values for both the blur (orange) and no-blur (blue) models (Fig. 9). A requirement of ${Q_d}{\; } = {\; }1{\%}$ (red line) is reported as an example. The intersection of the red line with the two regression lines yields the two required numbers of photons. The convergence speed-up enabled by blur effects, ${S_{up}}$, is therefore defined as the ratio of the two numbers of photons, no-blur over blur (Table 3).


Fig. 9. Variation of the quadratic relative difference ${Q_d}$ between the fully converged 2D image and simulations for an increasing number of photons (no-blur model in blue, blur model in orange). Various values of ${K_{ext}}$ are considered, namely (a) $0.025\; {m^{ - 1}}$, (b) $0.25\; {m^{ - 1}}$, (c) $0.82\; {m^{ - 1}}$. The same optical parameters as those used in Fig. 6 are used. Log-log linear fits are shown with their coefficients of determination ${r^2}$.



Table 3. Ratio (no-blur over blur model) of the number of photons required to reach a given quadratic relative difference ${Q_d}$ to the converged 2D image. The ${K_{ext}}$ values range from $0.05$ to $1.32{\; }{m^{ - 1}}$ and two levels of accuracy are considered, namely ${Q_d} = 1\%$ and ${Q_d} = 0.1\%$.

The convergence speed-up is low and fairly insensitive to the required level of accuracy for very clear waters (${K_{ext}} = 0.05{\; }{m^{ - 1}}$). This is because a photon coming back from the scene is unlikely to be scattered by the water column. The probability of scattering events increases with ${K_{ext}}$, thus leading to a greater potential for variance reduction. For ${K_{ext}} = 0.82{\; }{m^{ - 1}}$, ${S_{up}} = 190$, which is much larger than the additional CPU cost of the blur calculation ($R = 2.6$, see Table 2). The blur model is thus about $73$ (i.e. $190/2.6$) times faster than the coarse model. An even higher speed-up is observed for the more accurate convergence requirement ${Q_d} = 0.1{\; \%}$. This stricter requirement is the relevant one: poorly converged images such as those shown in Fig. 8(c)-(d) already yield ${Q_d} = 0.75{\; \%}$, while Fig. 8(e), which is almost fully converged, yields ${Q_d} = 0.27{\%}$. Note that similar results were obtained for a bi-static geometry.

5. Conclusion

3D laser imaging technology generally works with long temporal gates to detect an object without any a priori knowledge of the sensor-target distance. Such a gate integrates scattered light from the water column between the sensor and the target. In the case of short-distance underwater imaging (< 50 m), a large part of the scattering events is located outside the Depth of Field (DoF). Therefore, the blur effects must be taken into consideration. A simple analytical model was presented: a scattering event that occurs outside the Depth of Field generates a blur circle on the focal plane. The dilution of energy over several pixels and outside the sensor was discussed. Strongly off-axis scattering events are cut off by the projective optics if the incident angle is larger than the Angle of View (AoV). The blur model was then combined with a Monte-Carlo scheme. The optical parameters were derived from a proof-of-concept developed at the ONERA institute (France). The difference between AoV and FoV was discussed; the ratio $AoV/FoV$ was found to be close to $1.4$ over a large range of focal lengths. The specific AoV of an interferometric filter was measured. Such a filter does not appear to be a significant constraint for our setup.

The CPU cost of implementing the blur model in the Monte-Carlo scheme was then evaluated. The computing time increases for the blur model since a look-up procedure in the sensor matrix is performed for each scattering event. The CPU time per scattering event increases by 25% up to 33%. The blur effects outside the Depth of Field were then examined. It was shown that the blur effect leads to a decrease of the signal at short-range distances ($< 1\; m$) as it spreads light out of the detector. This effect can be counterbalanced by increasing the AoV and collecting a higher amount of flux from outside the FoV. The cumulated blur effect over the entire water column was found to be more significant for a monostatic configuration (decrease by a factor of 85) than for a bi-static configuration (decrease by 25%), where most of the blur effect occurs before the recovering distance.

The images of the target plane located within the Depth of Field were also examined. Although the blur has no impact on the integrated signal, the images that include the blur effects are much less noisy than those generated using the no-blur model. The blur effects can be interpreted as an instrumental spatial convolution that smooths the resulting 2D image. A global speed-up of the computation time of the Monte-Carlo scheme was observed, depending on both the extinction coefficient and the convergence accuracy. The total computation time required to reach an accuracy of 0.1% was reduced by a factor of 2500 for ${K_{ext}} = 0.81\; {m^{ - 1}}$.

Future work could consist in taking into account the real angular transmission of the lens system, rather than a simple binary cutoff angle. Such an angular transmission could be measured by the method proposed in Section 3.3, or calculated using software such as Zemax [41]. Additional comprehensive sensitivity studies would also be relevant to gain a better understanding of the blur effect on the performance of an imager with regard to the probability of detection. In particular, the next step could consist in working on the image quality of a contrast target, with the goal of understanding the relative contributions of the focus plane (blur model) and the turbidity to the contrast within the image.

Funding

Office National d'études et de Recherches Aérospatiales (ONERA); Région Occitanie Pyrénées-Méditerranée (Project DIVISOUMA).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. C. Mai, S. Pedersen, L. Hansen, K. L. Jepsen, and Z. Yang, “Subsea infrastructure inspection: A review study,” in 2016 IEEE International Conference on Underwater System Technology: Theory and Applications (USYS) (2016), pp. 71–76.

2. M. Ho, S. El-Borgi, D. Patil, and G. Song, “Inspection and monitoring systems subsea pipelines: A review paper,” Struct. Health Monit. (2020), https://journals.sagepub.com/doi/10.1177/1475921719837718.

3. J. Hwang, N. Bose, H. D. Nguyen, and G. Williams, “Acoustic Search and Detection of Oil Plumes Using an Autonomous Underwater Vehicle,” J. Mar. Sci. Eng. 8(8), 618 (2020). [CrossRef]  

4. L. J. Wong, B. Kalyan, M. Chitre, and H. Vishnu, “Acoustic Assessment of Polymetallic Nodule Abundance Using Sidescan Sonar and Altimeter,” IEEE J. Oceanic Eng. 46(1), 132–142 (2021). [CrossRef]  

5. C. Roman, G. Inglis, and J. Rutter, “Application of structured light imaging for high resolution mapping of underwater archaeological sites,” in OCEANS’10 IEEE SYDNEY (2010), pp. 1–9.

6. L. Hellequin, J.-M. Boucher, and X. Lurton, “Processing of high-frequency multibeam echo sounder data for seafloor characterization,” IEEE J. Oceanic Eng. 28(1), 78–89 (2003). [CrossRef]  

7. R. M. K. Plets, J. K. Dix, J. R. Adams, J. M. Bull, T. J. Henstock, M. Gutowski, and A. I. Best, “The use of a high-resolution 3D Chirp sub-bottom profiler for the reconstruction of the shallow water archaeological site of the Grace Dieu (1439), River Hamble, UK,” J. Archaeol. Sci. 36(2), 408–418 (2009). [CrossRef]  

8. K. Sun, W. Cui, and C. Chen, “Review of Underwater Sensing Technologies and Applications,” Sensors 21(23), 7849 (2021). [CrossRef]  

9. M. Massot-Campos and G. Oliver-Codina, “Optical Sensors and Methods for Underwater 3D Reconstruction,” Sensors 15(12), 31525–31557 (2015). [CrossRef]  

10. S. Y. Chua, N. Guo, C. S. Tan, and X. Wang, “Improved Range Estimation Model for Three-Dimensional (3D) Range Gated Reconstruction,” Sensors 17(9), 2031 (2017). [CrossRef]  

11. M. Castillón, A. Palomer, J. Forest, and P. Ridao, “State of the Art of Underwater Active Optical 3D Scanners,” Sensors 19(23), 5161 (2019). [CrossRef]  

12. P. Mariani, I. Quincoces, K. H. Haugholt, Y. Chardard, A. W. Visser, C. Yates, G. Piccinno, G. Reali, P. Risholm, and J. T. Thielemann, “Range-Gated Imaging System for Underwater Monitoring in Ocean Environment,” Sustainability 11(1), 162 (2018). [CrossRef]  

13. A. Maccarone, F. M. D. Rocca, A. McCarthy, R. Henderson, and G. S. Buller, “Three-dimensional imaging of stationary and moving targets in turbid underwater environments using a single-photon detector array,” Opt. Express 27(20), 28437–28456 (2019). [CrossRef]  

14. A. Maccarone, A. McCarthy, X. Ren, R. E. Warburton, A. M. Wallace, J. Moffat, Y. Petillot, and G. S. Buller, “Underwater depth imaging using time-correlated single-photon counting,” Opt. Express 23(26), 33911–33926 (2015). [CrossRef]  

15. D. McLeod, J. Jacobson, M. Hardy, and C. Embry, “Autonomous inspection using an underwater 3D LiDAR,” in 2013 OCEANS - San Diego (2013), pp. 1–8.

16. M. Kervella, F.-X. d’Abzac, F. Hache, L. Hespel, and T. Dartigalongue, “Picosecond time scale modification of forward scattered light induced by absorption inside particles,” Opt. Express 20(1), 32–41 (2012). [CrossRef]  

17. A. V. Starkov, M. Noormohammadian, and U. G. Oppel, “A stochastic model and a variance-reduction Monte-Carlo method for the calculation of light transport,” Appl. Phys. B 60(4), 335–340 (1995). [CrossRef]  

18. C. M. R. Platt, “Remote Sounding of High Clouds. III: Monte Carlo Calculations of Multiple-Scattered Lidar Returns,” J. Atmos. Sci. 38(1), 156–167 (1981). [CrossRef]  

19. K. E. Kunkel and J. A. Weinman, “Monte Carlo Analysis of Multiply Scattered Lidar Returns,” J. Atmos. Sci. 33(9), 1772–1781 (1976). [CrossRef]  

20. H. R. Gordon, “Interpretation of airborne oceanic lidar: effects of multiple scattering,” Appl. Opt. 21(16), 2996–3001 (1982). [CrossRef]  

21. G. W. Kattawar and G. N. Plass, “Time of Flight Lidar Measurements as an Ocean Probe,” Appl. Opt. 11(3), 662–666 (1972). [CrossRef]  

22. L. R. Poole, “Computed laser backscattering from turbid liquids: comparison with laboratory results,” Appl. Opt. 21(12), 2262–2264 (1982). [CrossRef]  

23. C.-K. Wang, W. Philpot, M. Kim, and H.-M. Lei, “A Monte Carlo study of the seagrass-induced depth bias in bathymetric lidar,” Opt. Express 19(8), 7230–7243 (2011). [CrossRef]  

24. E. Tinet, S. Avrillier, and J. M. Tualle, “Fast semianalytical Monte Carlo simulation for time-resolved light propagation in turbid media,” J. Opt. Soc. Am. A 13(9), 1903–1915 (1996). [CrossRef]  

25. A. Hassebo, B. Salas, and Y. Y. Hassebo, “Monostatic and bistatic lidar systems: simulation to improve SNR and attainable range in daytime operations,” Proc. SPIE 10094, 1009421 (2017). [CrossRef]  

26. M. Chami, R. Santer, and E. Dilligeard, “Radiative transfer model for the computation of radiance and polarization in an ocean–atmosphere system: polarization properties of suspended matter for remote sensing,” Appl. Opt. 40(15), 2398–2416 (2001). [CrossRef]  

27. M. Chami, B. Lafrance, B. Fougnie, J. Chowdhary, T. Harmel, and F. Waquet, “OSOAA: a vector radiative transfer model of coupled atmosphere-ocean system for a rough sea surface application to the estimates of the directional variations of the water leaving reflectance to better process multi-angular satellite sensors data over the ocean,” Opt. Express 23(21), 27829–27852 (2015). [CrossRef]  

28. R. M. Pope and E. S. Fry, “Absorption spectrum (380–700 nm) of pure water. II. Integrating cavity measurements,” Appl. Opt. 36(33), 8710–8723 (1997). [CrossRef]  

29. L. Kou, D. Labrie, and P. Chylek, “Refractive indices of water and ice in the 0.65- to 2.5-µm spectral range,” Appl. Opt. 32(19), 3531–3540 (1993). [CrossRef]  

30. M. I. Mishchenko, L. D. Travis, and A. A. Lacis, Scattering, Absorption, and Emission of Light by Small Particles (Cambridge University Press, 2002).

31. J. R. Frisvad, N. J. Christensen, and H. W. Jensen, “Predicting the Appearance of Materials Using Lorenz–Mie Theory,” in The Mie Theory: Basics and Applications, W. Hergert and T. Wriedt, eds., Springer Series in Optical Sciences (Springer, 2012), pp. 101–133.

32. H. Bader, “The hyperbolic distribution of particle sizes,” J. Geophys. Res. 75(15), 2822–2830 (1970). [CrossRef]  

33. I. N. McCave, “Particulate size spectra, behavior, and origin of nepheloid layers over the Nova Scotian Continental Rise,” J. Geophys. Res. 88(C12), 7647–7666 (1983). [CrossRef]  

34. P. Naglič, F. Pernuš, B. Likar, and M. Bürmen, “Lookup table-based sampling of the phase function for Monte Carlo simulations of light propagation in turbid media,” Biomed. Opt. Express 8(3), 1895–1910 (2017). [CrossRef]  

35. F. Villa, R. Lussana, D. Bronzi, S. Tisa, A. Tosi, F. Zappa, A. Dalla Mora, D. Contini, D. Durini, S. Weyers, and W. Brockherde, “CMOS Imager With 1024 SPADs and TDCs for Single-Photon Timing and 3-D Time-of-Flight,” IEEE J. Sel. Top. Quantum Electron. 20(6), 364–373 (2014). [CrossRef]  

36. V. Dhulla, S. S. Mukherjee, A. O. Lee, N. Dissanayake, B. Ryu, and C. Myers, “256 × 256 dual-mode CMOS SPAD image sensor,” Proc. SPIE 10978, 26 (2019). [CrossRef]  

37. M. H. Asghar, M. B. Khan, and S. Naseem, “Modeling Thin Film Multilayer Broad-Band-Pass Filters in Visible Spectrum,” Czechoslov. J. Phys. 53(12), 1209–1217 (2003). [CrossRef]  

38. T. D. Rahmlow Jr., M. Fredell, S. Chanda, and R. Johnson Jr., “Ultra-narrow bandpass filters for infrared applications with improved angle of incidence performance,” Proc. SPIE 9822, 982211 (2016). [CrossRef]  

39. H. A. Macleod, Thin Film Optical Filters, 5th ed. (CRC Press, 2018).

40. G. Hernandez, “Analytical Description of a Fabry-Perot Spectrometer. 3: Off-Axis Behavior and Interference Filters,” Appl. Opt. 13(11), 2654–2661 (1974). [CrossRef]  

41. T. Goossens, Z. Lyu, J. Ko, G. C. Wan, J. Farrell, and B. Wandell, “Ray-transfer functions for camera simulation of 3D scenes with hidden lens design,” Opt. Express 30(13), 24031–24047 (2022). [CrossRef]  




Figures (9)

Fig. 1. Scheme for the modelling of blur effects. The focus at a distance L (orange) is achieved by placing the sensor at a distance x from the focal plane. Hydrosols outside the DoF (blue) are not conjugated with the sensor, thus resulting in blur.
Fig. 2. Illustration of Angle of View (AoV), Field of View (FoV) and Depth of Field (DoF). Full detection is achieved within the FoV and DoF (*). Partial detection occurs within the AoV outside the DoF (**). There is no detection outside the AoV, nor outside the FoV within the DoF range (***).
Fig. 3. Acquisition of images using the VOXTEL 256 × 256 pixel camera for focal lengths varying from 9 mm to 169 mm. The target is located at a distance of 3.75 m from the camera. The measurements were carried out under ambient room light.
Fig. 4. Laser images acquired (a) without and (b) with the filter; (c) laser angular transmission. The laser angular transmission is based on a horizontal line of the ratio of the two images. A three-point moving average was used to reduce oscillations due to the scene inhomogeneity.
Fig. 5. Effect of the blur model (coarse and corrected, see text) on the integrated signal per launched photon. A monostatic condition is considered. The parameters used are: $f^{\prime} = 100\; mm,\; \; NA = 3.7,\; \; div\; = \; 5.7^\circ ,\; \; {K_{ext}} = 0.25\; {m^{ - 1}},\; \; {w_{laser}} = 1\; mm$. The albedo of the target plane is uniform with value $\omega = 1$. A number of $4 \times {10^{10}}$ photons were sent. The AoV is not considered here.
Fig. 6. Blur ratio (i.e., ratio of the energy between the blur and the coarse model) as a function of distance range for various values of $k\; = \; AoV/FoV$ and for the monostatic and bi-static ($14\; cm$) configurations. The same parameters as those used in Fig. 5 are used here except for $f^{\prime} = 200\; mm$. The extinction coefficient value is ${K_{ext}} = 0.25\; {m^{ - 1}}$.
Fig. 7. Illustration of the insensitivity of the loss of contrast to the type of model used (i.e., blur model or no-blur model) for a low turbidity case (a-b-c, top panel with ${K_{ext}} = 0.1\; {m^{ - 1}}$) and a high turbidity case (d-e-f, bottom panel with ${K_{ext}} = 0.81\; {m^{ - 1}}$). The observed scene is a white rectangular panel ($\omega = 1$) with a black circular panel ($\omega = 0$) of radius $r = 0.15\; m$, located $5\; m$ away from the camera.
Fig. 8. Images ($32 \times 32$) of the target plane for the no-blur model (a)-(b)-(c) and the blur model (d)-(e)-(f) for an increasing number of photons (left to right). The target signal represents the normalized photon fraction collected by a given pixel for the temporal bin of interest. The parameters are as follows: $f^{\prime} = 100\; mm,\; \; NA = 3.7,\; \; B = 0\; m,\; \; div\; = \; 5.7^\circ ,\; \; {K_{ext}} = 0.82\; {m^{ - 1}},\; \; {w_{laser}} = 1\; mm,\; \; k = 1.$ The target is assumed to have a uniform albedo $\omega = 1$ and is located at $L = 5\; m$.
Fig. 9. Variation of the quadratic relative difference ${Q_d}$ between the fully converged 2D image and simulations for an increasing number of photons (no-blur model in blue, blur in orange). Various values of ${K_{ext}}$ are considered, namely (a) $0.025\; {m^{ - 1}}$, (b) $0.25\; {m^{ - 1}}$, (c) $0.82\; {m^{ - 1}}$. The same optical parameters as those used in Fig. 6 are used. Log-log linear fits are shown with their coefficients of determination ${r^2}$.

Tables (3)

Table 1. Optical parameters that characterize the laser imaging device. The baseline is defined as the emitter-receiver distance; the laser divergence is provided here for the air medium and the water refractive index is taken into account for underwater imaging; the numerical aperture is the ratio of the focal length to the diameter of the diaphragm.

Table 2. Variation of the ratio $R$ between the computing CPU time of the blur model and that of the coarse model with respect to the extinction coefficient ${K_{ext}}$ (i.e., water turbidity). $\tilde{R}$ is defined as the ratio between $R$ and the scattering multiplicity $M$ (see text).

Table 3. Ratio (no-blur over blur model) of the number of photons required to reach a given quadratic relative difference ${Q_d}$ to the converged 2D image. The ${K_{ext}}$ values range from $0.05$ to $1.32\; {m^{ - 1}}$ and two levels of accuracy are considered, namely ${Q_d} = 1\%$ and ${Q_d} = 0.1\%$.

Equations (4)

$$B = \frac{{yx}}{{f + y}} \times \frac{f}{{NA}}$$
$$FOV = 2 \times \textrm{atan}\left( {\frac{{{D_i}}}{{2f}}} \right)$$
$$k = \frac{{AoV}}{{FoV}}$$
$${Q_d}({{N_{ph}}} )= \frac{1}{{{N_{pix}}}}\sqrt {\mathop \sum \limits_{({i,\; j} )\in {{[{1,{N_{pix}}} ]}^2}} {{\left( {\frac{{{I_{i,j}}({{N_{ph}}} )- I_{i,j}^{conv}}}{{I_{i,j}^{conv}}}} \right)}^2}} $$
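As a small numerical illustration of Eqs. (2) and (3), the sketch below evaluates the FoV for a given sensor width and focal length, then scales it by the $AoV/FoV$ ratio $k = 1.4$ reported in the conclusion. The sensor width $D_i$ is an assumed value, since it is not given in this excerpt; the $200\; mm$ focal length is the one used in Fig. 6.

```python
import math

def fov_rad(d_i, f):
    """Eq. (2): full field of view (radians) for a sensor of
    width D_i behind a lens of focal length f."""
    return 2.0 * math.atan(d_i / (2.0 * f))

d_i = 0.005   # assumed 5 mm sensor width (not stated in this excerpt)
f = 0.200     # 200 mm focal length (Fig. 6)
k = 1.4       # AoV/FoV ratio reported in the conclusion

fov = fov_rad(d_i, f)
aov = k * fov  # Eq. (3) rearranged: AoV = k * FoV
```

For small $D_i/2f$, the arctangent is nearly linear, so halving the focal length roughly doubles both the FoV and the AoV.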