
Optical 4D signal detection in turbid water by multi-dimensional integral imaging using spatially distributed and temporally encoded multiple light sources


Abstract

We propose an underwater optical signal detection system based on multi-dimensional integral imaging with spatially distributed multiple light sources and four-dimensional (4D) spatial-temporal correlation. We demonstrate our system for the detection of optical signals in turbid water. A 4D optical signal is generated from a three-dimensional (3D) spatial distribution of underwater light sources, which are temporally encoded using spread spectrum techniques. The optical signals are captured by an array of cameras, and 3D integral imaging reconstruction is performed, followed by multi-dimensional correlation to detect the optical signal. Inclusion of multiple light sources located at different depths allows for successful signal detection at turbidity levels not feasible using only a single light source. We consider the proposed system under varied turbidity levels using both pseudorandom and Gold codes for temporal signal coding. We also compare the effectiveness of the proposed underwater optical signal detection system to a similar system using only a single light source, and compare conventional and integral imaging-based signal detection. The underwater signal detection capabilities are measured through performance-based metrics such as receiver operating characteristic (ROC) curves, the area under the curve (AUC), and the number of detection errors. Furthermore, statistical analysis, including Kullback-Leibler divergence and Bhattacharyya distance, shows improved performance of the proposed multi-source integral imaging underwater system. The proposed integral imaging-based approach is shown to significantly outperform conventional imaging-based methods.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Research for advanced underwater signal detection systems is becoming increasingly important due to the growing popularity and widespread use of autonomous or unmanned underwater vehicles for marine exploration and other applications [1]. In underwater communication, acoustic signals are commonly used for transmission due to their long propagation length; however, acoustic signals suffer from limited bandwidth, low data rates, and large transmission delays, which degrade the quality of the signal [2]. On the other hand, underwater wireless optical communication (UWOC) using photodiodes or photomultipliers enables high data rate, low latency, and highly secure communication compared with conventional acoustic communication methods [3]. Despite these advantages, UWOC systems also have their own difficulties, as light cannot propagate as far as acoustic waves due to various physical processes, including absorption, scattering, and beam divergence in the underwater medium [1].

Moreover, an optical signal propagating in an aquatic medium suffers from attenuation and broadening in the spatial, temporal, angular, and polarization domains. The wavelength-dependent attenuation and broadening are due to absorption and multiple scattering of light by organic and inorganic particulates in the turbid media [4,5]. As a result, underwater signal detection in degraded environments proves to be a difficult task.

Conventional UWOC approaches generally use avalanche photodiodes or photomultiplier tubes [3] and are capable of extremely high data rates (12.4 Gbit/s). However, it has been reported that these methods may be bulky, expensive, effective only at short range, and can be damaged at high intensity [6-8]. In the proposed optical integral imaging based UWOC system, the speed of data transmission is restricted by the frame rate of the camera; typically, this is in the range of approximately 100 frames per second (fps) [9,10] for conventional cameras. However, with recent advances in image sensor technologies, some cameras are capable of achieving frame rates of up to 1,000,000 fps [11]. Also, the proposed integral imaging approach captures each light source as a point from multiple perspectives; thus, it might be possible to use cameras with a smaller number of pixels to increase the frame rate.

In optical imaging-based methods, researchers have demonstrated various techniques to address the problem of underwater signal detection in turbid water. These strategies focus on the mitigation of the effects of turbidity using methods such as peplography [12,13] or polarization descattering [14] techniques. Moreover, 3D imaging strategies can capture 3D information of the scene including the intensity and angular information of the beam, which is not possible using approaches that only capture intensity information of the beam. Thus, a 3D approach may achieve better performance for underwater signal detection in turbid conditions. In [13], integral imaging-based methods were shown to outperform conventional imaging-based strategies for signal detection in degraded underwater environments due to the capture of both intensity and angular information from different viewing perspectives. Here, we further extend the capabilities of the integral imaging-based underwater signal detection system by using multiple light sources distributed at multiple depths and spatial locations. The proposed system allows for the capture of information not provided by conventional 2D imaging methods; thus, it may provide better signal detection capabilities in turbid underwater scenes.

In this paper, underwater signal detection in turbid water is presented using an integral imaging system with four-dimensional (4D) spatial-temporal correlation [15]. A distribution of multiple light sources transmits a temporally coded signal, and the signal is recorded by an array of image sensors. Integral imaging reconstruction provides 4D temporal-depth-sectioned data. A 4D template is designed to include both the spatial and temporal information of the transmitted signal and is used for correlation, which is performed in the frequency domain for detection of the optical signal. Additionally, prior to integral imaging reconstruction, a signal recovery algorithm based on dark channel prior image restoration is applied to the turbid water elemental images to reduce noise [16]. The proposed system is tested at varying turbidity conditions, and the performance is measured using the receiver operating characteristic (ROC) curve, the area under the curve (AUC), and the number of detection errors, as well as by examination of statistical measures such as the Kullback-Leibler divergence [17] and Bhattacharyya distance [18,19].

This paper is organized as follows: the integral imaging reconstruction procedure is explained in section 2, the experimental methods, including correlation procedure, are described in section 3, and the experimental results for underwater signal detection are given in section 4. Finally, in section 5, the conclusions are given.

2. 3D sensing and reconstruction for signal detection in turbid water using integral imaging

Integral imaging (InIm) is a three-dimensional (3D) imaging technique, first proposed by Lippmann in 1908 [20], that can capture both intensity and directional information of optical rays. With the rapid development of digital imaging technology and devices, integral imaging has seen further development in recent decades [21,22]. During the signal capture, otherwise referred to as the pickup process, multiple 2D images, each with a different perspective, are recorded by a lenslet or camera array. These images, referred to as “elemental images,” capture the 3D scene under investigation. The reconstruction process is the reverse mapping of elemental images into the image space. In the reconstruction process, we can specify the depth of reconstruction to provide depth information about the 3D scene. As the reconstruction algorithm is naturally optimum in the maximum likelihood sense, integral imaging reconstruction can reduce the scattering and help in the visualization of 3D scenes [23,24]. For a given reconstruction depth, objects at the chosen depth remain in focus, whereas all objects belonging to a different depth become blurred and out of focus. The reconstruction process can be described as follows:

$$I({x,y,z,t} )= \frac{1}{{O(x,y,z,t)}}\sum\limits_{m = 0}^{M - 1} {\sum\limits_{n = 0}^{N - 1} {E{I^{m,n}}} \left( {x - m\frac{{{N_x}{P_x}f}}{{{C_x}z}},y - n\frac{{{N_y}{P_y}f}}{{{C_y}z}}} \right)}$$
where I(x, y, z, t) is the integral imaging reconstructed video, and x and y are the pixel indices within each video frame t. In underwater imaging, the refractive index of the medium must be considered in order to accurately calculate the depth of reconstruction; the reconstruction distance is z = zair + zw/nw, where zair is the distance traveled in air, zw is the distance traveled in water, and nw is the refractive index of water. M and N are the number of elemental images in the x and y directions, Nx and Ny are the number of pixels of each elemental image in the x and y directions, and EIm,n(·) is the elemental image in the mth column and nth row. By shifting and overlapping the elemental images, the reconstructed image is obtained on a specific depth plane. O(x, y, z, t) is the number of overlapping pixels. Px and Py are the pitches between adjacent image sensors in the camera array, and f is the focal length of the camera lens. Cx and Cy are the physical dimensions of the image sensor. The pickup and reconstruction stages for integral imaging of an underwater scene are shown by Figs. 1(a) and 1(b), respectively.
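For illustration, the following is a minimal computational sketch of the shift-and-sum reconstruction of Eq. (1), assuming the elemental images for one video frame are stacked in a NumPy array; the array layout, parameter names, and the simple overlap bookkeeping are our own assumptions, not the authors' implementation.

```python
import numpy as np

def reconstruct_depth(EI, z, f, Px, Py, Cx, Cy):
    """Shift-and-sum reconstruction of one depth plane per Eq. (1).

    EI : (M, N, Ny, Nx) elemental images for a single frame t.
    z  : reconstruction distance; underwater, z = z_air + z_w / n_w.
    f  : focal length; Px, Py : camera pitch; Cx, Cy : sensor size.
    """
    M, N, Ny, Nx = EI.shape
    recon = np.zeros((Ny, Nx))
    overlap = np.zeros((Ny, Nx))  # O(x, y, z, t): overlapping pixel count
    for m in range(M):
        for n in range(N):
            sx = int(round(m * Nx * Px * f / (Cx * z)))  # x shift in pixels
            sy = int(round(n * Ny * Py * f / (Cy * z)))  # y shift in pixels
            if sx >= Nx or sy >= Ny:
                continue  # shifted image falls entirely outside the frame
            recon[:Ny - sy, :Nx - sx] += EI[m, n, sy:, sx:]
            overlap[:Ny - sy, :Nx - sx] += 1.0
    return recon / np.maximum(overlap, 1.0)
```

Objects at the chosen z add coherently across elemental images, while out-of-plane objects are averaged out, which is what produces the depth sectioning described above.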

Fig. 1. 3D imaging system for underwater signal detection: (a) experimental setup to capture the optical signal during the pickup stage of integral imaging and (b) computational volumetric reconstruction process for integral imaging. The external white light LED is used to mimic ambient light for shallow water scenes.


3. Experimental methods

In this section, we discuss the experimental methods for the proposed system in a turbid environment with multiple light sources. The experiments were carried out at different turbidity levels to assess the performance of the proposed system. As shown in Fig. 2(a), a water tank with dimensions 500(W) × 250(L) × 250(H) mm is filled with turbid water. Turbidity of the water is controlled through the addition of antacid. Additionally, as shown in Fig. 2(a), a white LED lamp was set on top of the water tank to act as an ambient light source and mimic a shallow water scene. Figure 2(b) shows a 3 × 3 camera array consisting of G-192 GigE cameras and F5 C-mount lenses with a focal length of 20 mm. The pitch between cameras is 80 mm in both the horizontal and vertical directions. The pixel size is 4.5 µm × 4.5 µm, the image size is 1600(H) × 1200(V) pixels, and the camera array was synchronized to record video at a frame rate of 20 fps. An F-number of 5 and an exposure time of 10 ms were used. The camera sensor was set 1000 mm away along the axial direction from the center of the multiple light source distribution. As shown in Fig. 2(c), 5 LEDs, each with an output power of 5 mW, are located at different spatial positions and used to transmit the optical signal. The wavelength of the light sources is 450 nm, and their full width at half maximum is 20.46 nm.

Fig. 2. (a) Water tank with turbid water and a white LED source to mimic ambient light environments in shallow water scenes, (b) a 3 × 3 camera array for integral imaging pickup, and (c) an example of 3D distributed multiple light sources with 5 LEDs.


Eleven different turbidity levels were generated by mixing 20 liters of pure water with 5-600 ml of liquid antacid. To quantify the turbidity level, an optical power meter was used to record the intensity of a 450 nm light source at different locations separated by a distance d along the direction of propagation through the turbid media. From the measured intensities, the Beer-Lambert law, I = I0e−αd, was applied to calculate Beer's coefficient (α). In the equation, I0 is the initial intensity, and I is the intensity after propagating a distance d in the media. A sample of water was taken at each turbidity condition, and the intensity of light was measured at two locations separated by 10 mm in order to calculate Beer's coefficient.
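As a concrete example of this calibration step, the short sketch below inverts the Beer-Lambert law for α from two power-meter readings taken 10 mm apart; the example readings are invented for illustration.

```python
import numpy as np

def beer_coefficient(I0, I, d_mm=10.0):
    """Invert I = I0 * exp(-alpha * d) for alpha, in mm^-1."""
    return -np.log(I / I0) / d_mm

# A ~28% intensity drop over 10 mm corresponds to alpha of about 0.033 mm^-1
print(beer_coefficient(I0=1.00, I=0.72))  # ~0.0329
```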

In the experiments, α ranged from 0.006 mm−1 to 0.324 mm−1, as shown in Table 1. The distribution of the 5 LEDs is shown in clear and turbid (α = 0.041 mm−1) water in Figs. 3(a) and 3(b), respectively. In the presence of turbidity, light from the LEDs is scattered by the particles in the turbid water.

Fig. 3. Images of the multiple LED distribution underwater, viewed from the central camera perspective, taken (a) in clear water without turbidity, and (b) in turbid water containing 50 ml antacid (α = 0.041 mm−1).


Table 1. Turbidity levels used in the experiments. Turbidity levels and Beer’s coefficient are varied by changing the amount of antacid added to the water.

During the experiments, the 5 LEDs transmit the optical signal with temporal coding. When the LEDs are on, the transmitted signal represents a 1, and when the LEDs are off, the transmitted signal is a 0. The transmitted signals were coded using spread spectrum techniques, namely pseudorandom sequence (PRS) and Gold codes, for robust communication [25,26]. In this experiment, each light source is encoded with the same information (modulated signal code). The original signal information was 8 bits in length, given as [1, 0, 0, 1, 1, 0, 1, 0], and coded with either a 9-bit pseudorandom code or a 9-bit Gold code. The spread spectrum signals were generated by a linear feedback shift register and sent at a speed of 20 frames per second. The Gold code is generated by multiplying two PRS codes, providing better correlation properties, such as being more uniform and bounded [25]. The final transmitted signal is 72 bits in length. The flow chart for optical signal transmission and signal detection is shown in Fig. 4. The use of pseudorandom sequences or Gold codes ensures that the signal can easily be phase synchronized, even in noisy environments, due to their strong autocorrelation properties [26].
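To make the coding step concrete, below is a minimal sketch of spread-spectrum coding with a linear feedback shift register. The register lengths, tap positions, and the resulting 7-chip codes are illustrative stand-ins for the 9-bit codes used in the experiments, and the Gold-type sequence is formed by XORing two m-sequences, which is equivalent to multiplication in the bipolar (±1) representation.

```python
import numpy as np

def lfsr(taps, state, length):
    """Generate `length` chips from a Fibonacci LFSR.

    taps  : register indices XORed to form the feedback bit.
    state : initial register contents (list of 0/1 values).
    """
    out, state = [], list(state)
    for _ in range(length):
        out.append(state[-1])          # output the last stage
        fb = 0
        for t in taps:
            fb ^= state[t]             # feedback = XOR of tapped stages
        state = [fb] + state[:-1]      # shift the feedback bit in
    return np.array(out)

pn1 = lfsr(taps=[0, 2], state=[1, 0, 1], length=7)  # m-sequence 1
pn2 = lfsr(taps=[1, 2], state=[1, 1, 1], length=7)  # m-sequence 2
gold = pn1 ^ pn2                                    # Gold-type code

message = np.array([1, 0, 0, 1, 1, 0, 1, 0])        # 8-bit signal from the text
tx = np.concatenate([bit ^ gold for bit in message])
print(tx.size)  # 56 chips here; 8 bits x 9 chips = 72 in the experiments
```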

Fig. 4. Flow chart of the proposed system for (a) optical signal transmission and (b) detection in underwater communication. InIm denotes Integral Imaging.


The proposed imaging system’s data rate is limited by the frame rate of the cameras as well as the length of the spread spectrum codes: longer code sequences lower the data rate but improve detection capabilities. In our system, frames are captured at a rate of 20 frames per second (fps). Each frame carries one chip of the coded sequence; since the original signal is coded with a 9-bit spread spectrum sequence, every 9 received chips yield only 1 bit of signal data. Therefore, the data rate of the current system is approximately 2 bits per second (bps) (20 fps/9 ≈ 2.22 ∼ 2). Faster cameras with frame rates up to 1 million fps have been reported both scientifically [11] and commercially; for example, the currently available FastCam SA-X2 (Photron) can record images at frame rates greater than 1 million fps [27]. A similar system using a camera operating at 1,000,000 fps could achieve data rates of up to 15,873 bps, 32,258 bps, 111,111 bps, and 142,857 bps if modulated by 63-bit, 31-bit, 9-bit, and 7-bit spread spectrum codes, respectively. For comparison, typical data rates of acoustic communication in shallow water modulated with 63-bit and 31-bit spread spectrum codes are 27 bps and 355 bps, respectively [28,29].
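The data-rate figures above follow directly from one chip per camera frame; a short check of the arithmetic:

```python
# rate (bps) = frame rate (fps) / spreading-code length (chips per bit)
for fps in (20, 1_000_000):
    for n_chips in (63, 31, 9, 7):
        print(f"{fps} fps with a {n_chips}-chip code -> {fps // n_chips} bps")
# 20 fps / 9 chips -> 2 bps, matching the current system; 1,000,000 fps
# gives 15873, 32258, 111111, and 142857 bps, respectively.
```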

Each camera of the camera array records a video sequence during the signal transmission. To improve signal detection, a signal recovery algorithm is applied on each elemental image using dark channel prior estimation. We can assume the image formation model in an underwater environment as [30]:

$${I^c}(x) = {J^c}(x){t^c}(x) + {A^c}(1 - {t^c}(x)),c \in \{{r,g,b} \}$$
where Ic(x) is the recorded intensity of the input image, Jc is the ideal image without degradation, Ac is the scattered light under turbid conditions, and tc(x) is the transmission function, with each component consisting of its three color channels, c, at pixel x. The original signal J(x) first undergoes a multiplicative distortion and then an additive distortion. The transmission function is given by Beer's law as t(x) = e−αd(x), where α is the attenuation coefficient governed by the turbidity level, and d(x) is the scene depth. As the turbidity increases, t(x) attenuates more rapidly and tends toward zero, making the scattered light contribution exponentially more significant (i.e., I(x) → A). The effect of scattering can be mitigated by the dark channel prior (DCP) method. Using the DCP, A and t can be estimated independently of α and d(x), and an estimate of J can be recovered with improved quality in comparison to the original recorded image I. The details of this approach are provided in section 3.1 of [31].
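A minimal sketch of a dark-channel-prior recovery consistent with Eq. (2) is shown below; the patch size, the 0.95 weighting, and the selection of the brightest 0.1% of dark-channel pixels for estimating A are the usual DCP heuristics and are assumptions here, not parameters reported in this paper.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dcp_recover(I, patch=15, omega=0.95, t0=0.1):
    """Estimate A and t(x) via the dark channel prior and invert Eq. (2).

    I : RGB image with values in [0, 1], shape (H, W, 3).
    """
    # Dark channel: per-pixel channel minimum, then a local spatial minimum
    dark = minimum_filter(I.min(axis=2), size=patch)
    # Scattered light A: taken from the brightest 0.1% of dark-channel pixels
    idx = np.argsort(dark.ravel())[-max(1, dark.size // 1000):]
    A = I.reshape(-1, 3)[idx].max(axis=0)
    # Transmission map, then invert I = J*t + A*(1 - t)
    t = 1.0 - omega * minimum_filter((I / A).min(axis=2), size=patch)
    J = (I - A) / np.maximum(t, t0)[..., None] + A
    return np.clip(J, 0.0, 1.0), t
```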

The effect of the signal recovery algorithm is illustrated in Fig. 5: Fig. 5(a) shows the original 2D elemental image in turbid water (α = 0.006 mm−1), Fig. 5(b) shows the recovered 2D image after applying the signal recovery algorithm based on dark channel prior estimation, and the color bar shows the intensity of the blue color channel for each image, which ranges from 0 to 1. After applying the signal recovery algorithm, the scattering effect surrounding each LED is decreased, and we see an increase in the SNR of the 2D elemental image, which is expected to improve our detection capabilities. SNR is defined as µsignal/σbackground, where µsignal is the signal mean, and σbackground is the standard deviation of the background region.

Fig. 5. Example of the signal recovery algorithm using dark channel prior estimation. (a) Original 2D elemental image at α = 0.006 mm−1 and SNR=3.2, and (b) 2D elemental image after applying the signal recovery algorithm with SNR=4.1. SNR=µsignalbackground. The color-bar represents the intensity of the blue channel values, which range from 0 to 1.


After the signal recovery algorithm has been applied to the recorded elemental images, the data are reconstructed using Eq. (1) to provide the 4D video data containing the 5 LEDs. As blue LEDs are used, only the blue color channel is considered for reconstruction and signal detection. Figures 6(a)–6(c) show the intensity of the blue color channel for three consecutive video frames of a signal recorded in clear water. During processing, 13 distinct depths were reconstructed at each time frame to represent the 4D data. Figures 6(i)–(iii) show the corresponding reconstructed depth images focused at each of the five LEDs.

Fig. 6. Example of transmitted 4D (spatial and temporal) data structure. (a-c) First three frames of captured 2D elemental images from the video sequence of temporally encoded multiple (five) light sources. (i-iii) Five reconstructed depth images focused at each LED by integral imaging with depths Z1=900 mm, Z2=920 mm, Z3=940 mm, Z4=950 mm, and Z5=960 mm.


Multi-dimensional correlation is performed between the 4D template filter and the reconstructed 4D data to detect the original signal. The reference template filter h(x, y, z, t) is numerically simulated based on the spatial distribution of the multiple light sources and the temporal sequence, which has 9 frames of LED ON and LED OFF states to match the pseudorandom sequence or Gold code. The LED ON structure is designed with each LED as a point source at its correct (x, y, z) position, with all other pixels set to zero, as shown in Fig. 7(a). The LED OFF structure has the same number of depth planes as the LED ON structure, but each depth plane (z) is set to a matrix of negative ones, as shown in Fig. 7(b). In Fig. 7, the color bar shows the pixel value for each image, which ranges from -1 to 1. The matrix of negative ones further decorrelates the signal-off data and enhances, or stretches, the separation between the correlation peaks.
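The template construction can be sketched as follows; the volume dimensions, LED voxel coordinates, and the particular 9-chip code below are placeholders for illustration, not the experimental values.

```python
import numpy as np

Z, H, W = 13, 64, 64                        # depth planes and spatial size
led_xyz = [(10, 20, 2), (40, 15, 5), (30, 45, 7), (15, 50, 9), (50, 40, 11)]

on_frame = np.zeros((Z, H, W))
for x, y, z in led_xyz:
    on_frame[z, y, x] = 1.0                 # point source at each LED position
off_frame = -np.ones((Z, H, W))             # negative ones decorrelate OFF frames

code = [1, 0, 1, 1, 0, 0, 1, 0, 0]          # one 9-chip spread sequence
h = np.stack([on_frame if c else off_frame for c in code])  # h(x, y, z, t)
print(h.shape)  # (9, 13, 64, 64)
```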

Fig. 7. Example of the reference template correlation filter. (a) Reference template planes showing depth frames with LED ON signals. (b) Reference template planes showing depth frames with LED OFF signals. The color-bar represents the pixel values, which range from -1 to 1.


The reference template filter h(x, y, z, t) is then Fourier transformed, providing a 4D matrix in the frequency domain, H(u, v, φ, ϑ). The 4D reconstructed test data T(u, v, φ, ϑ) is filtered with the correlation reference template filter H(u, v, φ, ϑ) in the frequency domain, where T(u, v, φ, ϑ) is the Fourier transform of the reconstructed 4D video data I(x, y, z, t). The correlation output is then obtained by inverse Fourier transform:

$$C(x,y,z,t) = F{T^{ - 1}}\{ [H(u,v,\phi ,\vartheta )].[T(u,v,\phi ,\vartheta )]\}$$
The final correlation result S(t) along the time domain is the maximum correlation value across x, y, and z at each time instant, i.e., S(t) = max(x,y,z) C(x, y, z, t). S(t) of the reconstructed 4D video data should have high and low peaks corresponding to the transmission of a 1 or a 0, respectively. Given the 8-bit original signal and 9-bit coding, the correlation result is expected to have 8 prominent locations of either local minima or maxima, each separated by 9 frames. By summing the prominence of these peaks separated by 9 frames, we can find, in an automated fashion, the correct start frame for the signal transmission as the frame that gives the maximum sum of prominences across the recorded signal. Figure 8 shows an example correlation result S(t) for α = 0.041 mm−1 and the correctly detected data points from the original signal.
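A minimal sketch of the frequency-domain correlation of Eq. (3) and the extraction of S(t) follows, assuming the reconstructed video I4d and the template h are arrays of shape (T, Z, H, W); conjugating the template spectrum (so the product implements correlation rather than convolution) and approximating peak prominence by the deviation from the median of S(t) are our simplifications.

```python
import numpy as np

def correlate_4d(I4d, h):
    """4D correlation via FFT, cf. Eq. (3)."""
    H4 = np.conj(np.fft.fftn(h, s=I4d.shape))   # conjugate -> correlation
    return np.fft.ifftn(np.fft.fftn(I4d) * H4).real

def s_of_t(C):
    """S(t): maximum correlation over x, y, z at each time instant."""
    return C.max(axis=(1, 2, 3))

def find_start(S, code_len=9, n_bits=8):
    """Start frame maximizing summed peak prominence at code_len spacing."""
    base = np.median(S)
    scores = [sum(abs(S[k + i * code_len] - base) for i in range(n_bits))
              for k in range(len(S) - (n_bits - 1) * code_len)]
    return int(np.argmax(scores))
```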

Fig. 8. 4D correlation results S(t) for underwater signal detection with 50 ml antacid added to the water tank (α=0.041 mm−1). Red circles indicate the correctly detected transmitted signals.


The optimal thresholds for classification of the correlation-based approaches are calculated from the receiver operating characteristic (ROC) curves obtained at each turbidity condition [32,33]. A correlation value above the threshold is regarded as a 1, and a value below the threshold as a 0.
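As one possible realization of this thresholding step, the sketch below selects the operating point by Youden's J statistic (TPR − FPR) on the ROC curve; the paper cites [32,33] for its exact selection rule, so this criterion and the scikit-learn dependency are assumptions.

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

def optimal_threshold(labels, scores):
    """labels: true bits (0/1); scores: per-bit correlation values from S(t)."""
    fpr, tpr, thresholds = roc_curve(labels, scores)
    best = np.argmax(tpr - fpr)          # Youden's J operating point
    return thresholds[best], auc(fpr, tpr)

# bits = (scores > threshold).astype(int) then recovers the transmitted signal
```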

4. Results and discussion

We investigate the detection performance of the proposed system under varying turbidity conditions. For statistical analysis, the Kullback-Leibler divergence (DKL) [17] and Bhattacharyya distance (DB) [18] are used to assess the difference between the probability distributions of the data when the LEDs are off and when they are on. A greater separation between the two distributions results in higher values of DKL and DB, which indicates a better ability to discriminate and detect the transmitted signal. The measures DKL and DB are calculated as follows:

$$D_{KL}(R||Q) = -\sum\limits_m R\log \frac{Q}{R} \ge 0$$
$$D_B(R,Q) = -\ln \left( \sum\limits_m \sqrt{R \cdot Q} \right)$$
In the above equations, R and Q are the probability distributions of the data when the LEDs are turned off and on, respectively. We consider only the 25 × 25 pixel area surrounding each source LED. The five regions of interest corresponding to the LEDs are then combined into a single matrix for computation of the probability distributions. In the case of integral imaging, the regions of interest are cropped from the 3D in-focus reconstructed images at the correct depth of each LED; for conventional imaging, each region of interest is cropped from the 2D elemental image after the signal recovery algorithm has been applied. The DKL and DB of the conventional 2D imaging data and the 3D data with integral imaging reconstruction are given in Table 2. The divergence between the distributions with and without signal transmission is larger for the 3D reconstructed image than for the conventional 2D imaging data. Thus, the 3D reconstructed data achieves better class separation and should perform better in signal detection.
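The two measures can be computed from normalized intensity histograms of the combined LED regions, as in the sketch below; the bin count and the small epsilon added to avoid log(0) are our assumptions.

```python
import numpy as np

def distributions(off_pixels, on_pixels, bins=64, eps=1e-12):
    """Histogram LED-region intensities with the signal off (R) and on (Q)."""
    edges = np.linspace(0.0, 1.0, bins + 1)
    R = np.histogram(off_pixels, bins=edges)[0].astype(float)
    Q = np.histogram(on_pixels, bins=edges)[0].astype(float)
    return R / R.sum() + eps, Q / Q.sum() + eps

def d_kl(R, Q):
    return -np.sum(R * np.log(Q / R))           # Eq. (4)

def d_b(R, Q):
    return -np.log(np.sum(np.sqrt(R * Q)))      # Eq. (5)
```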

Table 2. SNR, Kullback-Leibler divergence (DKL), and Bhattacharyya distance (DB) at various turbidity levels for 2D imaging and 3D integral imaging.

Furthermore, we calculated the signal-to-noise ratio (SNR) at each turbidity level for the 2D elemental images before and after applying the signal recovery algorithm, as well as for the 3D reconstructed images. These SNR values at various turbidity levels are reported in Table 2. SNR is calculated as SNR = µsignal/σbackground, where µsignal is the average intensity over the same pixel regions considered for the Kullback-Leibler divergence and Bhattacharyya distance analysis, and σbackground is the standard deviation of the intensity of a background region of the same size taken from an area without LEDs. The SNR of the 3D reconstructed image is higher than that of the 2D recovered image.
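The SNR metric of Table 2 can be sketched as below, with the region-of-interest coordinates treated as known inputs (the actual LED and background positions are not our data):

```python
import numpy as np

def roi(img, y, x, size=25):
    return img[y:y + size, x:x + size]

def snr(img, led_yx, bg_yx, size=25):
    """Mean over the LED regions divided by the std of a background region."""
    mu = np.mean([roi(img, y, x, size).mean() for y, x in led_yx])
    sigma = roi(img, *bg_yx, size).std()
    return mu / sigma
```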

The performance of the signal detection system on the experimental data is evaluated using receiver operating characteristic (ROC) curves. We compare five different cases of signal detection: (i) integral imaging using multiple light sources coded with Gold code, (ii) integral imaging using multiple light sources coded with PRS code, (iii) integral imaging using a single light source coded with Gold code, (iv) integral imaging using a single light source coded with PRS code, and (v) conventional imaging using multiple light sources coded with Gold code. In the single LED experiments, the single LED had a 5 mW output, the same as used in the multiple LED experiments.

Four videos were collected, resulting in 32 bits of total data at each turbidity level. The ROC for each group is shown in Fig. 9 for α = 0.324 mm−1. The AUC and number of detection errors are shown in Fig. 10 as a function of Beer's coefficient. As shown in Fig. 10(a), integral imaging with the multiple light sources coded with Gold code [blue line] has an area under the curve (AUC) of 0.8906 at the highest turbidity level (α = 0.324 mm−1), outperforming the other methods tested. Integral imaging using multiple light sources coded with PRS code [black line], integral imaging using a single light source coded with Gold code [magenta line], integral imaging using a single light source coded with PRS code [green line], and conventional imaging with multiple light sources and Gold code [red line] had corresponding AUC values of 0.6914, 0.447, 0.4197, and 0.4021, respectively, at α = 0.324 mm−1.

Fig. 9. ROC (Receiver operating characteristic) curves for underwater signal detection at turbidity level (α=0.324 mm−1). Results compared between integral imaging (InIm) reconstructed video data with the multiple light sources and coded with gold code [blue line], integral imaging using multiple light sources coded with PRS code [black line], integral imaging using a single light source coded with gold code [magenta line], integral imaging using a single light source coded with PRS code [green line] and conventional imaging with multiple light sources and gold code [red line].


Fig. 10. (a) Area under curves and (b) number of detection errors for underwater signal detection at various turbidity levels. Results are compared between integral imaging (InIm) reconstructed video data with the multiple light sources and coded with gold code [blue line], integral imaging using multiple light sources coded with PRS code [black line], integral imaging using a single light source coded with gold code [magenta line], integral imaging using a single light source coded with PRS code [green line] and conventional imaging with multiple light sources and gold codes [red line].


Figure 10(a) shows the AUC versus the Beer's law coefficient. The AUC of the proposed method remains higher than that of all other tested methods. As seen in Fig. 10(b), the number of detection errors increases for all methods as the turbidity increases; however, the number of errors using the proposed approach is lower than that of all other tested methods.

One additional experiment was carried out at a high turbidity level of α = 0.227 mm−1 to compare the performance of a single high-power LED system with the multiple light source system. In this experiment, the single high-power LED has an output power of 4.2 mW, which is more than 5 times the power of the individual LEDs in the multiple LED system, wherein the output power of each LED was measured as 0.75 mW. The power of the LEDs was measured at a distance of 10 mm using a Thorlabs PM100D optical power meter. Intensity distributions of the light sources are shown in Fig. 11: Fig. 11(a) shows the 2D elemental image of a single high-power light source, Fig. 11(b) shows the 2D elemental image of five light sources at a turbidity level of α = 0.006 mm−1, and the color bar shows the intensity of the blue color channel for each image, which ranges from 0 to 1. A lower turbidity was used in this figure to be able to visualize the light sources; the turbidity level in the actual experiment is α = 0.227 mm−1, which makes the light sources hard to visualize.

Fig. 11. The intensity distribution of a single light source and multiple light sources at α = 0.006 mm−1: (a) 2D elemental image of a single high-power light source, and (b) 2D elemental image of five low-power light sources (5 LEDs, each with an output power less than 1/5 that of the high-power LED used in the single-LED system of (a)). The color bar represents the intensity of the blue channel values, which range from 0 to 1.


In the single light source experiment, the single LED was located 1000 mm from the camera sensor; in the multiple light source experiment, the 5 LEDs, with an output power of 0.75 mW each, were located at different spatial and depth positions, with the center of the distribution set 1000 mm away along the axial direction from the camera sensor. The experimental parameters were otherwise the same as in the previously performed experiments, and the transmitted signals were coded with the Gold code. The signal transmission and detection methods are the same as discussed in the flow chart of the proposed system shown in Fig. 4.

The performance of the signal detection system on the experimental data is evaluated using receiver operating characteristic (ROC) curves. We compare three different cases of signal detection: (i) integral imaging using a single high-power light source coded with Gold code, (ii) integral imaging using multiple (five) low-power light sources coded with Gold code, and (iii) conventional 2D imaging using a single high-power light source coded with Gold code. A total of 32 bits of data were transmitted and received at a turbidity level of α = 0.227 mm−1, and the ROC for each group is shown in Fig. 12. At this turbidity level, the ROC curve for integral imaging with the multiple low-power light sources coded with Gold code [blue line] has an area under the curve (AUC) of 0.9844, outperforming the single high-power light source integral imaging system [red line] (AUC = 0.9414) and 2D imaging with a single high-power light source [black line] (AUC = 0.6367).

Fig. 12. ROC (receiver operating characteristic) curves for underwater signal detection at turbidity level α = 0.227 mm−1. Results are compared between integral imaging (InIm) with the multiple low-power light sources coded with Gold code [blue line], integral imaging using a single high-power light source coded with Gold code [red line], and 2D conventional imaging with a single high-power light source and Gold code [black line].


From the above experimental results, we have demonstrated that multiple light sources at different spatial and depth locations may provide better signal detection performance than a single high-power LED light source, even when the single LED has more than 5 times the power of the individual LEDs in the multiple LED case. Thus, we may conclude that multiple light sources at multiple depth locations may improve the 4D correlation and increase the detection performance of the system.

5. Conclusion

In summary, we have presented a system for 4D optical signal detection in turbid environments using integral imaging. In our approach, we use a signal consisting of temporally encoded multiple light sources to generate an effective signal detection link in turbid water. A signal recovery algorithm based on dark channel prior estimation is applied to the captured 2D elemental images prior to 3D reconstruction in order to improve signal detection. Integral imaging reconstruction provides depth information and reconstructed 4D spatial-temporal data, to which a 4D correlation is then applied. The use of optimal codes such as pseudorandom and Gold codes allows for robust signal detection in a turbid environment. Statistical analysis indicates improved separation of the probability distributions according to the Kullback-Leibler divergence and Bhattacharyya distance, and increased SNR, using the proposed multiple distributed light source integral imaging-based approach over conventional imaging methods. Furthermore, the proposed method outperforms the other tested methods in terms of the area under the receiver operating characteristic curve and the number of detection errors, indicating superior detection abilities in turbid water. Future work includes further improvements to signal detection in turbid water by examining a variety of integral imaging sensing approaches [34,35], as well as other optical sensing techniques [36], signal detection algorithms, and statistical approaches [37–39].

Funding

Office of Naval Research (N000141712405).

Acknowledgments

We thank Fletcher Blackmon for suggesting Gold codes to improve the underwater signal detection performance. T. O’Connor acknowledges the Dept. of Education through the GAANN Fellowship. We wish to acknowledge support from the Office of Naval Research (ONR) (N000141712405).

Disclosures

The authors declare no conflicts of interest.

References

1. H. Kaushal and G. Kaddoum, “Underwater Optical Wireless Communication,” IEEE Access 4, 1518–1547 (2016). [CrossRef]  

2. M. Stojanovic and P.-P. J. Beaujean, “Acoustic Communication,” in Springer Handbook of Ocean Engineering (Springer International Publishing, 2016), pp. 359–386.

3. M. Khalighi, T. Hamza, S. Bourennane, P. Léon, and J. Opderbecke, “Underwater Wireless Optical Communications Using Silicon Photo-Multipliers,” IEEE Photonics J. 9(4), 1–10 (2017). [CrossRef]  

4. A. Burguera, F. Bonin-Font, and G. Oliver, “Trajectory-based visual localization in underwater surveying missions,” Sensors 15(1), 1708–1735 (2015). [CrossRef]  

5. Z. Lin, W. Li, C. Gatebe, R. Poudyal, and K. Stamnes, “Radiative transfer simulations of the two-dimensional ocean glint reflectance and determination of the sea surface roughness,” Appl. Opt. 55(6), 1206–1215 (2016). [CrossRef]  

6. T.-C. Wu, Y.-C. Chi, H.-Y. Wang, C.-T. Tsai, and G.-R. Lin, “Blue Laser Diode Enables Underwater Communication at 12.4 Gbps,” Sci. Rep. 7(1), 40480 (2017). [CrossRef]  

7. P. Lacovara, “High-bandwidth underwater communications,” Mar. Technol. Soc. J. 42(1), 93–102 (2008). [CrossRef]  

8. Hamamatsu Photonics K.K., Photomultiplier Tubes: Basics and Applications, 3rd ed. (2007).

9. I. Takai, T. Harada, M. Andoh, K. Yasutomi, K. Kagawa, and S. Kawahito, “Optical vehicle-to-vehicle communication system using LED transmitter and camera receiver,” IEEE Photonics J. 6(5), 1–14 (2014). [CrossRef]  

10. M. S. M. Akram, L. G. D. Aravinda, M. K. P. D. Munaweera, G. M. R. I. Godaliyadda, and M. P. B. Ekanayake, “Camera based visible light communication system for underwater applications,” in IEEE International Conference on Industrial and Information Systems, ICIIS 2017 - Proceedings (IEEE, 2018), pp. 1–6.

11. P. Xia, Y. Awatsuji, K. Nishio, and O. Matoba, “One million fps digital holography,” Electron. Lett. 50(23), 1693–1695 (2014). [CrossRef]  

12. M. Cho and B. Javidi, “Peplography—a passive 3D photon counting imaging through scattering media,” Opt. Lett. 41(22), 5401–5404 (2016). [CrossRef]  

13. S. Komatsu, A. Markman, and B. Javidi, “Optical sensing and detection in turbid water using multi-dimensional integral imaging,” Opt. Lett. 43(14), 3261–3264 (2018). [CrossRef]  

14. T. Treibitz and Y. Y. Schechner, “Active polarization descattering,” IEEE Trans. Pattern Anal. Mach. Intell. 31(3), 385–399 (2009). [CrossRef]  

15. X. Shen, H. Kim, K. Satoru, A. Markman, and B. Javidi, “Spatial-temporal human gesture recognition under degraded conditions using three-dimensional integral imaging,” Opt. Express 26(11), 13938–13951 (2018). [CrossRef]  

16. Y. Peng, K. Cao, and P. C. Cosman, “Generalization of the Dark Channel Prior for Single Image Restoration,” IEEE Trans. on Image Process. 27(6), 2856–2868 (2018). [CrossRef]  

17. S. Kullback and R. A. Leibler, “On information and sufficiency,” Ann. Math. Stat. 22(1), 79–86 (1951). [CrossRef]  

18. A. Bhattacharyya, “On a measure of divergence between two statistical populations defined by their probability distributions,” Bull. Calcutta Math. Soc. 35, 99–109 (1943).

19. F. Goudail, P. Réfrégier, and G. Delyon, “Bhattacharyya distance as a contrast parameter for statistical processing of noisy optical images,” J. Opt. Soc. Am. A 21(7), 1231–1240 (2004). [CrossRef]  

20. G. Lippmann, “La photographie intégrale,” CR Séances Acad 146, 446–451 (1908).

21. Y. Igarashi, H. Murata, and M. Ueda, “3-D Display System Using a Computer Generated Integral Photograph,” Jpn. J. Appl. Phys. 17(9), 1683–1684 (1978). [CrossRef]  

22. T. Okoshi, “Three-dimensional displays,” Proc. IEEE 68(5), 548–564 (1980). [CrossRef]  

23. B. Javidi and M. Cho, “Three-Dimensional Visualization of Objects in Turbid Water Using Integral Imaging,” J. Disp. Technol. 6(10), 544–547 (2010). [CrossRef]  

24. A. Markman, X. Shen, and B. Javidi, “Three-dimensional object visualization and detection in low light illumination using integral imaging,” Opt. Lett. 42(16), 3068–3071 (2017). [CrossRef]  

25. M. B. Mollah and M. R. Islam, “Comparative analysis of Gold Codes with PN codes using correlation property in CDMA technology,” in 2012 International Conference on Computer Communication and Informatics (2012), pp. 1–6.

26. J. G. Proakis and M. Salehi, Digital Communications (McGraw-Hill, 2008).

27. “FASTCAM SA-X2,” https://photron.com/fastcam-sa-x2/.

28. M. Palmese, G. Bertolotto, A. Pescetto, and A. Trucco, “Spread Spectrum Modulation for Acoustic Communication in Shallow Water Channel,” in OCEANS 2007 - Europe (2007), pp. 1–4.

29. L. J. Liu, J. F. Li, L. Zhou, P. Zhai, H. Zhao, J. C. Jin, and Z. C. Lv, “An underwater acoustic direct sequence spread spectrum communication system using dual spread spectrum code,” Front. Inf. Technol. Electron. Eng. 19(8), 972–983 (2018). [CrossRef]

30. Y. Peng and P. C. Cosman, “Underwater Image Restoration Based on Image Blurriness and Light Absorption,” IEEE Trans. on Image Process. 26(4), 1579–1594 (2017). [CrossRef]  

31. K. O. Amer, M. Elbouz, A. Alfalou, C. Brosseau, and J. Hajjami, “Enhancing underwater optical imaging by using a low-pass polarization filter,” Opt. Express 27(2), 621–643 (2019). [CrossRef]  

32. B. Song, G. Zhang, W. Zhu, and Z. Liang, “ROC operating point selection for classification of imbalanced data with application to computer-aided polyp detection in CT colonography,” Int J CARS 9(1), 79–89 (2014). [CrossRef]  

33. M. H. Zweig and G. Campbell, “Receiver-operating characteristic (ROC) plots: a fundamental evaluation tool in clinical medicine,” Clin. Chem. 39(4), 561–577 (1993). [CrossRef]  

34. B. Javidi, X. Shen, A. S. Markman, P. Latorre-Carmona, A. Martinez-Uso, J. Martinez Sotoca, F. Pla, M. Martinez-Corral, G. Saavedra, Y. P. Huang, and A. Stern, “Multidimensional Optical Sensing and Imaging System (MOSIS): From Macroscales to Microscales,” Proc. IEEE 105(5), 850–875 (2017). [CrossRef]  

35. A. Stern and B. Javidi, “Three-dimensional image sensing and reconstruction with time-division multiplexed computational integral imaging,” Appl. Opt. 42(35), 7036–7042 (2003). [CrossRef]  

36. M. Dubreuil, P. Delrot, I. Leonard, A. Alfalou, C. Brosseau, and A. Dogariu, “Exploring underwater target detection by imaging polarimetry and correlation techniques,” Appl. Opt. 52(5), 997–1005 (2013). [CrossRef]  

37. F. A. Sadjadi and A. Mahalanobis, “Automatic target recognition XXVII,” Proc. SPIE 10202 (2017).

38. J. W. Goodman, Statistical Optics (John Wiley & Sons, 2015).

39. P. Refregier, V. Laude, and B. Javidi, “Nonlinear joint-transform correlation: an optimal solution for adaptive image discrimination and input noise robustness,” Opt. Lett. 19(6), 405–407 (1994). [CrossRef]  
