
Underwater image enhancement via two-level wavelet decomposition maximum brightness color restoration and edge refinement histogram stretching


Abstract

Underwater images suffer from color distortion and low contrast because light is absorbed and scattered as it travels through water, and different underwater scenes produce different color deviations and degrees of detail loss. To address these issues, an underwater image enhancement method is proposed that combines maximum brightness color restoration, edge refinement histogram stretching, and two-level wavelet decomposition fusion. First, according to the Jaffe-McGlamery underwater optical imaging model, the proportions of the maximum bright channel are obtained to correct the colors of underwater images. Then, edge refinement histogram stretching is designed: edge refinement and denoising are performed while the histogram is stretched, enhancing contrast and removing noise. Finally, the color-corrected and contrast-stretched underwater images undergo two-level wavelet decomposition, and the decomposed components are fused in equal proportions. The proposed method restores the color and detail and enhances the contrast of the underwater image. Extensive experiments demonstrate that the proposed method achieves superior performance against state-of-the-art methods in visual quality and quantitative metrics.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

The development and utilization of marine resources is a critical research objective of computer vision, as it is of great significance for marine exploration and military surveying. Underwater images are the main resource through which researchers obtain critical underwater information [1–4]. However, the underwater environment is complex and changeable. The absorption of light by water and its scattering by suspended particles cause color distortion, low contrast, and loss of detail in underwater images, which greatly hinder researchers’ ability to obtain marine information [5]. Therefore, clarifying low-quality underwater images has become a key research direction in the field of computer vision [6]. The principles of underwater imaging differ from those of imaging above water: the physical characteristics of the underwater environment are complex, and underwater image processing is more challenging [7,8]. A schematic diagram of the underwater optical imaging model is shown in Fig. 1.

Fig. 1. Underwater optical imaging model.

The light captured by the camera is mainly composed of three parts: direct component ${E_d}$, forward scattering component ${E_f}$, and backward scattering component ${E_b}$[9]. The model is represented as follows:

$${I_c}(X) = {E_d}(X) + {E_f}(X) + {E_b}(X)$$
where ${I_c}(X) = [{I_R}(X),{I_G}(X),{I_B}(X)]$ is the captured image and $X = (x,y)$ is the pixel location. The direct component ${E_d}$ is light reflected from objects without being scattered in the water. The forward-scattered component ${E_f}$ is light reflected by objects and scattered at a small angle; its contribution is small and it is often ignored. The back-scattered component ${E_b}$ is light reflected by floating particles [8].

Neglecting the forward-scattered component, the light received by the camera is commonly modeled by the direct and backward-scattered components, and the attenuated image can be expressed as follows:

$${I_c}(X) = {J_c}(X){t_c}(X) + A[1 - {t_c}(X)], {\kern 7pt}{t_c}(X) = \exp [ - n(\lambda )d(x)]$$
where ${I_c}(X)$ represents the attenuated underwater image, ${J_c}(X)$ represents the real underwater image, ${t_c}(X)$ is the transmittance, $c \in \{{R,G,B} \}$, $A$ is the background light, $d(x)$ is the distance between the imaging point and the camera, and $n(\lambda )$ is the wavelength-dependent attenuation coefficient.
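To make the roles of these symbols concrete, the following minimal Python sketch simulates Eq. (2) for a flat synthetic scene. The per-channel attenuation coefficients and background light values are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative per-channel attenuation coefficients n(lambda) in 1/m and
# background light A; red attenuates fastest underwater. Assumed values.
n = {"R": 0.60, "G": 0.20, "B": 0.10}
A = {"R": 0.30, "G": 0.70, "B": 0.80}

def attenuate(J, d, c):
    """Apply Eq. (2): I = J*t + A*(1 - t), with t = exp(-n(lambda)*d)."""
    t = np.exp(-n[c] * d)           # transmittance t_c(X)
    return J * t + A[c] * (1.0 - t)

# A flat mid-grey scene 3 m from the camera loses red first and drifts
# toward the blue-green background light.
J = np.full((4, 4), 0.5)
for c in "RGB":
    print(c, round(attenuate(J, 3.0, c).mean(), 3))
```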

Among the red, green, and blue channels, the blue channel has the largest transmittance; that is, ${t_B}(X)$ is the largest. We use the scene benchmark $\Delta$ to judge the depth of the scene. The benchmark can be expressed as follows:

$$\Delta = {t_B}(X) - {t_c}(X)$$

Since the attenuation is exponential, the larger the value of $\Delta $, the deeper the scene and the greater the degree of color attenuation. Therefore, the maximum pixel value is used as the basis for estimating the color loss, which avoids losing light information. We consider the maximum pixel value of the blue channel to have been attenuated from 255, so we use the maximum bright channel (MBC) method to compensate each channel.

Underwater image restoration methods usually establish a degradation model according to Eq. (2) and then derive the relevant parameters to obtain the restored underwater image. The dark channel prior (DCP) [10] can effectively remove fog from an image, but it ignores the absorption and scattering of light in water during underwater imaging and therefore cannot handle underwater images well. Building on the dark channel, Drews et al. proposed the underwater dark channel prior (UDCP) [11], which takes the blue and green channels as the input information source and clarifies the underwater image through prior knowledge and additional assumptions. Song et al. [12] proposed a method based on a background light statistical model and transmission map optimization, which is not suitable for raw images with many red components. Existing model-based methods estimate the background light and transmission map inaccurately and cannot fully recover the color of underwater images. Some methods ignore the selective absorption and scattering of light in aqueous media, thereby reducing the quality of the results.

The attenuation of light has an exponential relationship with the transmission distance and the attenuation coefficient. The attenuation rate of light at different wavelengths is different, and the wavelength of light is inversely proportional to the distance at which it disappears underwater. Red light has the longest wavelength and disappears first underwater, while blue and green light have shorter wavelengths and travel the farthest in water. Therefore, most underwater images appear blue or green [13]. The effects of scattering and absorption eventually lead to color distortion, low contrast, noise, and blurry details. Therefore, there is an urgent need to process low-quality images to obtain high-quality and clear underwater images [14].

The main contributions of this article are as follows:

  • 1. We propose an MBC color correction method to address color distortion. According to the Jaffe-McGlamery underwater imaging model, the bright channel is used to compensate the color: the ratio of the number of bright channel pixels to the total number of pixels is calculated, and the pixel values of each channel are compensated to obtain a color-corrected underwater image.
  • 2. We design an edge refinement histogram stretching (ERHS) algorithm that performs contrast stretching and denoising on the color-corrected underwater image. It improves the contrast of the underwater image, refines the image edges while quantizing the pixel values to [0,255], and eliminates noise, all simultaneously.
  • 3. We design a two-level wavelet decomposition fusion of the maximum brightness color restoration and ERHS results. First, the first-level components of the two algorithms' outputs are fused in equal proportions, as are the second-level components. Then, the first- and second-level components are fused in equal proportions to obtain a clear underwater image.

This article is arranged as follows. Section 2 introduces the color correction, edge refinement histogram stretching, and wavelet two-level decomposition and fusion algorithms in detail. Section 3 presents comparative experiments that demonstrate the effectiveness of the proposed method. Finally, Section 4 concludes the paper and offers future development directions.

2. Methodology

The proposed algorithm first uses the maximum bright channel to perform color correction on the underwater image. Then, ERHS performs contrast enhancement and denoising on the color-corrected underwater image. Because different fusion strategies are needed to extract the feature information of images, the two-level components of the two resulting images are obtained by wavelet two-level decomposition, and the components of each level are fused in equal proportions. Finally, a clear, high-contrast underwater image is obtained.

2.1 Color correction

According to the Jaffe-McGlamery underwater optical imaging model, we calculated the pixel value area suitable for color compensation [9]. First, we calculated the maximum pixel value of each channel of the underwater image through the following equation:

$${R_{\max }} = \max \{{R(x,y)} \},{G_{\max }} = \max \{{G(x,y)} \},{B_{\max }} = \max \{{B(x,y)} \}$$
where ${R_{\max }}$, ${G_{\max }}$, and ${B_{\max }}$ respectively represent the maximum pixel values of the red, green, and blue channels. $R(x,y)$, $G(x,y)$, and $B(x,y)$ respectively express the pixel values of the red, green, and blue channels with coordinates $(x,y)$.

Traversing the entire underwater image, the sum of the three channel values at each pixel is calculated and sorted in descending order through the following equation:

$${I_{sum}}(x,y) = R(x,y) + G(x,y) + B(x,y)$$
where ${I_{sum}}(x,y)$ represents the sum of the pixel values of the three channels. We take the threshold $T$ as 0.1 and define the maximum bright channel pixel as follows:
$${I_{\max }}(x,y) = {I_{sum}}(x,y) \times T$$

The averages of the three channels, taken over all pixels whose three-channel sum exceeds the maximum bright channel threshold, are then calculated; they are ${R_{avg}}$, ${G_{avg}}$, and ${B_{avg}}$, respectively:

$${R_{avg}} = \sum\limits_{I(x,y) > {I_{\max }}(x,y)} {R(x,y)} ,{G_{avg}} = \sum\limits_{I(x,y) > {I_{\max }}(x,y)} {G(x,y)} ,{B_{avg}} = \sum\limits_{I(x,y) > {I_{\max }}(x,y)} {B(x,y)}$$

Finally, the color compensation values for the red, green, and blue channels are obtained.

$${K_r} = \frac{{255}}{{{R_{avg}}}},{K_\textrm{g}} = \frac{{255}}{{{G_{avg}}}},{K_b} = \frac{{255}}{{{B_{avg}}}}$$

Here, ${K_r},{K_\textrm{g}},{K_b}$ respectively represent the compensation rates of the red, green, and blue channels, and their values are all greater than 1. Each pixel is then quantized to [0,255]:

$${I_{out}} = \left\{ \begin{array}{l} 255,{I_c} \times {K_c} > 255\\ {I_c} \times {K_c},{I_c} \times {K_c} \le 255\\ 0,{I_c} \times {K_c} \le 0 \end{array} \right.$$
where $c \in \{ R,G,B\}$, ${I_c}$ represents a single channel of the input image, and ${I_{out}}$ represents the underwater image output after color correction.
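For illustration, a minimal NumPy sketch of the MBC correction of Eqs. (4)–(9) follows. Reading Eq. (6) as selecting the brightest fraction $T$ of pixels by their three-channel sum is our interpretation, and the use of channel means in Eq. (7) follows the surrounding text.

```python
import numpy as np

def mbc_color_correction(img, T=0.1):
    """Maximum bright channel color correction, per Eqs. (4)-(9).

    img: H x W x 3 float array in [0, 255], channel order R, G, B.
    T:   fraction of brightest pixels forming the maximum bright channel
         (our reading of Eq. (6)).
    """
    s = img.sum(axis=2)                       # I_sum(x, y), Eq. (5)
    k = max(1, int(T * s.size))               # number of MBC pixels
    cutoff = np.partition(s.ravel(), -k)[-k]  # sum value at the cutoff
    mask = s >= cutoff                        # maximum bright channel pixels
    avg = img[mask].mean(axis=0)              # R_avg, G_avg, B_avg, Eq. (7)
    K = 255.0 / avg                           # compensation rates K_c, Eq. (8)
    return np.clip(img * K, 0, 255).astype(np.uint8)  # quantization, Eq. (9)
```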

We compared our color correction method with classic white balance methods, including GreyWorld [15], MaxRGB [16], GreyEdge [17], Shade of Grey [18], Automatic White Balance (AWB) [19], and Weighted Grey-Edge [20]. The comparison results are shown in Fig. 2. The comparison pictures were taken with different cameras, showing that our method can handle photos captured by different devices. Compared with the other methods, the swatches processed by our method are closer to the real swatches.

Fig. 2. Color correction comparison result chart. From left to right: (a) Raw image, (b) GreyWorld [15], (c) MaxRGB [16], (d) GreyEdge [17], (e) Shade of Grey [18], (f) Automatic White Balance (AWB) [19], (g) Weighted Grey-Edge [20], and (h) presented method. Images 1 to 3 were taken by Canon D10 (ISO250), Olympus Tough 8000 (ISO 100), and Panasonic TS1 (ISO 100), respectively. (For the complete set, please refer to the website Digital Photography Review (dpreview.com).)

2.2 Edge refinement histogram stretching

The suspended particles in a water medium scatter the incident light [21]. Therefore, histogram stretching increases the contrast of underwater images but also introduces noise [22–24]. We propose ERHS: by setting a threshold, the part of the histogram whose gray values exceed the threshold is discretely redistributed to gray values below the threshold, and bilinear interpolation is used to suppress blocking artifacts. To reduce the noise caused by stretching, we perform edge refinement on the stretched underwater image signal.

The contrast stretching algorithm is defined as follows:

$${I_o}(x,y) = \left\lfloor {\frac{{{I_i}(x,y) - {I_a}}}{{{I_{sd}}}} + 0.5} \right\rfloor \times 255$$
where ${I_a}$ represents the mean value of the input image, ${I_{sd}}$ represents the standard deviation of the input image, $\lfloor \cdot \rfloor$ represents the rounding-down operation, and ${I_o}(x,y)$ represents the image obtained after contrast stretching.

To quantize the pixel values of the stretched image to [0,255] and refine the edges, we used the following equation:

$${I_\textrm{r}}(x,y) = \left\{ \begin{array}{l} 0,{I_o}(x,y) \le 0\\ L[{I_o}(x,y)],0 < {I_o}(x,y) < 255\\ 255,{I_o}(x,y) \ge 255 \end{array} \right.$$
where ${I_\textrm{r}}(x,y)$ represents the refined image and $L[\cdot]$ is the edge refinement function derived below.

To preserve the edge details, the contrast-stretched image is subjected to a logarithmic transformation:

$$\ln f(x,y) = \ln {f_i}(x,y) + \ln {f_k}(x,y)$$
where ${f_i}(x,y)$ is the low-frequency information of the image and ${f_k}(x,y)$ is the high-frequency information. Applying the Fourier transform to the above formula gives
$$F(\omega ,\mu ) = {F_i}(\omega ,\mu ) + {F_k}(\omega ,\mu )$$

Multiplying the transformed equation by the filter function and performing an inverse Fourier transform yields the following:

$$f(x,y) = {\varphi _i}(x,y) + {\varphi _k}(x,y)$$

Applying an exponential transformation to the processed image then yields the refinement function:

$$L(x,y) = \exp [{\varphi _i}(x,y)\cdot {\varphi _k}(x,y)]$$
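Taken together, Eqs. (10)–(15) amount to a contrast stretch followed by homomorphic filtering (log, Fourier transform, high-emphasis filter, inverse transform, exponential). The sketch below implements that reading on a single channel; the stretch scaling and the Gaussian high-emphasis filter parameters (gamma_l, gamma_h, d0) are assumptions, not values given in the paper.

```python
import numpy as np

def erhs_channel(gray, gamma_l=0.5, gamma_h=1.5, d0=30.0):
    """Edge refinement histogram stretching on one channel (a sketch).

    gray: H x W float array in [0, 255]. The stretch follows the spirit of
    Eq. (10); the refinement is a standard homomorphic filter, our reading
    of Eqs. (12)-(15).
    """
    # Contrast stretch around the mean, scaled by the standard deviation;
    # the 64/128 scaling is an assumed choice that keeps values in range.
    out = (gray - gray.mean()) / (gray.std() + 1e-6) * 64.0 + 128.0
    out = np.clip(out, 1.0, 255.0)            # quantize and keep log() defined

    # Homomorphic refinement: attenuate low frequencies (illumination) and
    # boost high frequencies (edges) with a Gaussian high-emphasis filter.
    F = np.fft.fftshift(np.fft.fft2(np.log(out)))
    h, w = gray.shape
    y, x = np.ogrid[:h, :w]
    d2 = (y - h / 2.0) ** 2 + (x - w / 2.0) ** 2
    H = (gamma_h - gamma_l) * (1.0 - np.exp(-d2 / (2.0 * d0 ** 2))) + gamma_l
    refined = np.exp(np.real(np.fft.ifft2(np.fft.ifftshift(F * H))))
    return np.clip(refined, 0, 255)
```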

The image obtained after edge refinement histogram stretching has improved contrast, an increased high-frequency component, and reduced low-frequency components and noise. Compared to histogram enhancement [4], edge refinement histogram stretching can better remove noise and enhance contrast. The comparison results are shown in Fig. 3.

Fig. 3. Edge refinement histogram stretching comparison result graph from images 1–3: original image, histogram stretched [4], and edge refinement histogram stretched (ERHS).

Figure 3 compares the original underwater image, the histogram enhancement result, and our result. Histogram enhancement can increase the edge details of underwater images, but it also introduces noise. Our proposed method performs contrast stretching while reducing noise, and the edge details of the image are more obvious.

2.3 Wavelet two-level decomposition fusion

Assuming that ${L_j}f$ is the approximate signal of the energy-limited signal $f \in {L^2}(\mathbb{R})$ at resolution ${2^j}$, in the second-level decomposition, ${L_j}f$ can be decomposed into the approximate signal ${L_{j - 1}}f$ of $f$ at resolution ${2^{j - 1}}$ and the detail signal ${H_{j - 1}}f$ located between resolutions ${2^{j - 1}}$ and ${2^j}$.

$${L_{j - 1}}f(x) = \sum\limits_{k ={-} \infty }^{ + \infty } {{P_{j - 1,k}}{\omega _{j - 1,k}}(x),{H_{j - 1}}f(x)} = \sum\limits_{k ={-} \infty }^{ + \infty } {{Q_{j - 1,k}}{\nu _{j - 1,k}}(x)}$$
where $\omega$ and $\nu$ are the scaling function and wavelet function, respectively, and ${P_{j - 1,k}}$ and ${Q_{j - 1,k}}$ are the roughness (approximation) coefficient and the detail coefficient. The approximate signal ${L_j}f$ of the signal $f$ at resolution ${2^j}$ can be directly expressed as follows:
$${L_j}f = {L_{j - 1}}f + {H_{j - 1}}f,{L_j}f(x) = \sum\limits_{k ={-} \infty }^{ + \infty } {{P_{j,k}}{\omega _{j,k}}(x)}$$

A schematic diagram of the secondary decomposition is shown in Fig. 4. We decomposed the color-corrected underwater image to extract two levels of components, namely the first level ${S_1}$ and the second level ${S_2}$. The first-level decomposition splits the underwater image into a low-frequency approximation component $W_{S1}^1$ and three first-level high-frequency detail components: the horizontal detail component $W_{S1}^2$, the vertical detail component $W_{S1}^3$, and the diagonal detail component $W_{S1}^4$. In the second-level decomposition, the approximation component of the first level undergoes the same operation, yielding one second-level low-frequency approximation component $W_{S2}^1$ and three second-level high-frequency detail components $W_{S2}^2$, $W_{S2}^3$, and $W_{S2}^4$. The underwater image after edge refinement and contrast stretching then undergoes the same two-level decomposition to extract its first-level ${H_1}$ and second-level ${H_2}$ components: the first-level low-frequency component $W_{H1}^1$, the three first-level high-frequency components $W_{H1}^2$, $W_{H1}^3$, and $W_{H1}^4$, the second-level low-frequency component $W_{H2}^1$, and the three second-level high-frequency components $W_{H2}^2$, $W_{H2}^3$, and $W_{H2}^4$.
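This decomposition is straightforward to reproduce with PyWavelets; the paper does not name the wavelet family, so the Haar wavelet below is an illustrative assumption.

```python
import numpy as np
import pywt  # PyWavelets

img = np.random.rand(256, 256)  # stand-in for one channel of the input image

# First level: low-frequency approximation W^1 plus the horizontal,
# vertical, and diagonal detail components W^2, W^3, W^4.
W1_a, (W1_h, W1_v, W1_d) = pywt.dwt2(img, "haar")

# Second level: the same operation applied to the first-level
# approximation, as depicted in Fig. 4.
W2_a, (W2_h, W2_v, W2_d) = pywt.dwt2(W1_a, "haar")
```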

Fig. 4. Schematic diagram of wavelet decomposition. From images 2 to 5: first-level low-frequency component, first-level horizontal detail component, first-level vertical detail component, and first-level diagonal detail component, respectively. From images 6 to 9: second-level low-frequency component, second-level horizontal detail component, second-level vertical detail component, and second-level diagonal detail component, respectively.

We first fuse the first- and second-level components of the color-corrected and contrast-stretched images; the equal-proportion fusion formulas are as follows:

$${I_2}[H{S_2}] = 0.5 \times [W_{H2}^1,W_{H2}^2,W_{H2}^3,W_{H2}^4] + 0.5 \times [W_{S2}^1,W_{S2}^2,W_{S2}^3,W_{S2}^4]$$
$${I_1}[H{S_1}] = 0.5 \times [W_{S1}^1,W_{S1}^2,W_{S1}^3,W_{S1}^4] + 0.5 \times [W_{H1}^1,W_{H1}^2,W_{H1}^3,W_{H1}^4]$$
where ${I_1}[H{S_1}]$ represents the fused first-level components and ${I_2}[H{S_2}]$ represents the fused second-level components.

Finally, the first- and second-level components are fused in equal proportions:

$${I_{out}} = 0.5{I_1}[H{S_1}] + 0.5{I_2}[H{S_2}]$$
where ${I_{out}}$ represents the resulting image.

The two groups of low-frequency components are fused in equal proportions, and the two groups of high-frequency components are likewise fused in equal proportions to obtain the fused low- and high-frequency components. Finally, the fused low- and high-frequency information is reconstructed to obtain a clear underwater image. A schematic diagram of the hierarchical fusion data flow is shown in Fig. 5.
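A minimal sketch of this equal-proportion fusion with PyWavelets follows; each corresponding subband of the two inputs is averaged with 0.5/0.5 weights per Eqs. (16)–(17), and reconstructing the fused pyramid with waverec2 is our reading of the final step, Eq. (18). The wavelet family is again an assumption.

```python
import pywt

def fuse_equal_proportion(img_s, img_h, wavelet="haar"):
    """Fuse the color-corrected image (S) and the ERHS image (H), one channel.

    Both inputs are H x W float arrays. wavedec2 returns
    [cA2, (cH2, cV2, cD2), (cH1, cV1, cD1)], i.e. the second-level
    approximation followed by the detail bands of levels 2 and 1.
    """
    cs = pywt.wavedec2(img_s, wavelet, level=2)
    ch = pywt.wavedec2(img_h, wavelet, level=2)

    fused = [0.5 * cs[0] + 0.5 * ch[0]]       # second-level approximation
    for s_bands, h_bands in zip(cs[1:], ch[1:]):
        fused.append(tuple(0.5 * s + 0.5 * h  # equal-proportion detail fusion
                           for s, h in zip(s_bands, h_bands)))
    return pywt.waverec2(fused, wavelet)      # reconstructed output image
```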

Fig. 5. Schematic diagram of equal proportion fusion data flow.

3. Experimental Results

Comparison methods:

To verify the effectiveness of our color correction method, we compared it with classic white balance methods, including GreyWorld [15], MaxRGB [16], GreyEdge [17], Shade of Grey [18], Automatic White Balance (AWB) [19], and Weighted Grey-Edge [20]. To verify the effectiveness of our overall approach, we compared our method with other methods, including Underwater Dark Channel Prior (UDCP) [11], Retinex-based Enhancement (RBE) [25], Image Blurriness and Light Absorption (IBLA) [26], Generalization of the Dark Channel Prior (GDCP) [27], Relative Global Histogram Stretching (RGHS) [28], and Blue-Green Channels Dehazing and Red Channel Correction (GBDRC) [29].

We chose the comparison methods according to two principles:

  • 1. They are commonly used for comparison in recent years;
  • 2. Experiments can be carried out with the code provided by the authors.

Quantitative evaluation: The presented method is shown to be more advantageous than state-of-the-art methods in both subjective results and objective evaluations. For the objective evaluations, we chose the following metrics to prove the practicability of our method; non-reference metrics were used to fully evaluate the experimental results and verify the robustness of our method.

Average gradient (AG): The larger the AG value, the more detailed the underwater image and the higher its definition.

Edge intensity (EI): The greater the EI value, the greater the edge strength of the underwater image, and the more details are highlighted and clear.

Mean square error (MSE): MSE expresses the mean of the squared differences between corresponding pixels of two images. The larger the value, the greater the color change.

Underwater color image quality evaluation (UCIQE) [30]: This is a linear combination of color density, saturation, and contrast. It is used to quantitatively evaluate the uneven color cast, blur, and low contrast of underwater images.
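As a concrete reference, the sketch below computes AG and MSE with NumPy. The paper does not spell out its exact AG formula, so the common gradient-magnitude definition used here is an assumption; UCIQE follows the definition in [30] and is omitted for brevity.

```python
import numpy as np

def average_gradient(gray):
    """AG under one common definition: the mean local gradient magnitude."""
    gy, gx = np.gradient(gray.astype(float))
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))

def mse(img_a, img_b):
    """Mean square error between corresponding pixels of two images."""
    diff = img_a.astype(float) - img_b.astype(float)
    return float(np.mean(diff ** 2))
```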

Datasets: We used two datasets, namely the Underwater Image Enhancement Benchmark (UIEB) [31] and Real-World Underwater Image Enhancement (RUIE) [32], as the data sources for our experiments.

UIEB [31]: This dataset contains a total of 890 underwater images covering different scenes, depths, and shooting directions, which allows us to verify the practicability of our underwater image processing method and its robustness from the perspectives of multiple scenarios, multiple targets, and multiple features.

RUIE [32]: This dataset contains three sub-datasets classified according to the different color casts of underwater images. It was used to demonstrate that our method can handle underwater images with different degrees of color cast.

Platform: i7-6700HQ CPU (2.60 GHz), 16 GB RAM, Windows 10, MATLAB R2019.

3.1 Color correction

To prove the effectiveness of the proposed maximum bright channel color restoration method, we conducted a comparative experiment with typical white balance color correction methods. The images for the comparison came from the UIEB dataset, and the comparison methods included GreyWorld [15], MaxRGB [16], GreyEdge [17], Shade of Grey [18], Automatic White Balance (AWB) [19], and Weighted Grey-Edge [20]. The experimental results are shown in Fig. 6. The GreyWorld [15] algorithm does not consider that red light attenuates fastest, which leads to over-compensation of the red channel in the final result. The MaxRGB algorithm takes the maximum pixel value as the illumination color; when the color cast is severe, the illumination is misjudged, so the algorithm performs poorly on severely greenish or bluish images. The Shade of Grey and MaxRGB algorithms share the same computational framework but use different parameter values. GreyEdge is based on the color constancy theory and also overcompensates the red channel. The Automatic White Balance algorithm can recover underwater images with serious color casts, but it cannot recover colors well for all types of underwater images. Our proposed method corrects colors well both for color-cast underwater images and for underwater images with sunlight.

Fig. 6. Color correction comparison test. The underwater pictures are from the UIEB dataset. From left to right: (a) Raw image, (b) GreyWorld [15], (c) MaxRGB [16], (d) GreyEdge [17], (e) Shade of Grey [18], (f) Automatic White Balance (AWB) [19], (g) Weighted Grey-Edge [20], and (h) presented method.

We also analyzed the comparison results from the perspective of computer vision by converting the results shown in Fig. 6 into false-color maps. In Fig. 7, it can be seen more intuitively that the false-color maps of our method are richer and more detailed, and the color distribution is wider and more uniform. Our maximum bright channel color correction method can better restore the colors of underwater images and is suitable for various scenes.

Fig. 7. False-color map of Fig. 6. The underwater pictures are from the UIEB dataset. From left to right: (a) Raw image, (b) GreyWorld [15], (c) MaxRGB [16], (d) GreyEdge [17], (e) Shade of Grey [18], (f) Automatic White Balance (AWB) [19], (g) Weighted Grey-Edge [20], and (h) presented method.

3.2 Contrast and edge details

We proposed the ERHS algorithm to better refine edges, reduce noise, and increase contrast. To prove its effectiveness, we used the Canny edge detection algorithm [7] to extract the edge details of the underwater images in Fig. 3. The experimental results are shown in Fig. 8. The details in image 3 are far clearer than those in images 1 and 2; the red, green, and blue boxes mark specific regions. To show the differences in detail more intuitively, we extracted the three detail areas marked with the corresponding colored boxes and placed them in the bottom row of Fig. 8.

Fig. 8. Edge detail map. The underwater pictures are from the UIEB dataset. Images 1–3 are the original image, the histogram stretching result image [4], and the ERHS result image. The red box is detail 1, the green box is detail 2, and the blue box is detail 3.

3.3 Qualitative evaluation

We used two datasets to verify the broad applicability of our method. The underwater images of the UIEB dataset come from different scenes and contain different targets and lighting [31]. The RUIE dataset contains underwater images with different color casts, graded by the degree of color cast [32]. From the perspectives of multiple scenes and multiple color casts, we verified that our method can handle a wide range of underwater images.

We selected 10 different types of underwater images from each of the UIEB and RUIE datasets for comparative experiments; the comparison methods included UDCP [11], RBE [25], IBLA [26], GDCP [27], RGHS [28], and GBDRC [29]. UDCP only considers blue and green light, causing the processing results to appear bluish [11]. The RBE method is a single underwater image enhancement method based on Retinex [25]. The IBLA method, based on the image formation model (IFM), locally overcompensates the red channel when processing underwater images with white light [26]. The GDCP method estimates the light source from the depth of the scene and cannot accurately estimate the depth of the target object in a complex underwater scene, resulting in poor image restoration [27]. When RGHS processes severely greenish images, its performance is not good [28]. Finally, the GBDRC method assumes that in a richly colored image the average values of the three channels are close to one value, which leads to excessive compensation of the red channel [29]. Our method solves the problems that the above methods cannot: excessive red channel compensation in white-light images, poor color recovery in severely greenish or bluish images, and low contrast. We show the results of the comparison methods on the UIEB dataset in Fig. 9 and on the RUIE dataset in Fig. 10.

Fig. 9. UIEB dataset comparison test results. The underwater pictures are from the UIEB dataset. The comparison methods from left to right are as follows: (a) Raw data, (b) UDCP [11], (c) RBE [25], (d) IBLA [26], (e) GDCP [27], (f) RGHS [28], (g) GBDRC [29], and (h) our method.

Fig. 10. RUIE dataset comparison test results. The comparison methods from left to right are as follows: (a) Raw data, (b) UDCP [11], (c) RBE [25], (d) IBLA [26], (e) GDCP [27], (f) RGHS [28], (g) GBDRC [29], and (h) our method.

In Fig. 9, the UDCP algorithm mainly performs color restoration, but it cannot handle the color, contrast, and details of underwater images with changeable scenes. When RBE processes underwater images with artificial light sources, it does not handle the light source well. IBLA overcompensates the red channel when restoring the color of image 1, which has natural light. GDCP and RGHS cannot restore the color of underwater images with serious color casts. GBDRC takes the red channel as the color reference and overcompensates it. Our method obtains good results for underwater images with natural light, artificial light, and serious color casts.

In Fig. 10, we chose greenish, bluish, and blue-green underwater images for the comparison experiments. UDCP does not consider that red light has the largest attenuation coefficient, resulting in insufficient red channel compensation. RBE yields better results for images with a simple scene color cast. IBLA, GDCP, and RGHS yield poor results for underwater images with serious color casts. GBDRC also overcompensates the red channel when processing color-cast underwater images. Our method can effectively process underwater images with different color casts.

3.4 Quantitative evaluation

We objectively evaluated the processing results of our method using different evaluation metrics covering color, contrast, and detail: AG, EI, MSE, and UCIQE [30]. Compared with the other underwater image processing methods, our method ranked in the top two on these metrics. Furthermore, for each method, the average value of each metric over the 10 underwater images was calculated, and the averages of our method also ranked in the top two.

Table 1 shows the evaluation results, including average values, of the 10 underwater images in Fig. 10 after processing by the comparison methods. We also randomly selected multiple underwater images from the UIEB [31] dataset and report the average value of each metric for each comparison method in Table 2; our method's metrics again ranked in the top two. These experiments prove that our method can process underwater images of most scenes, including artificial and natural light sources, different targets and depths, and different color casts.

Table 1. Quantitative evaluation (QE) comparison of the AG, EI, MSE, and UCIQE values for Fig. 10. Red indicates the best evaluation value. Blue indicates the second-best evaluation value.

Table 2. Average metric values over multiple underwater images. Red indicates the best evaluation value. Blue indicates the second-best evaluation value.

The two tables show that our method achieves good metric values for different scenes and for underwater images with multiple color casts. Since our method refines the edges of underwater images and preserves more detail, the AG and EI values of the processing results are high. Our method also performs color correction on underwater images, and the processed images have rich colors and saturation, so the MSE and UCIQE values of the resulting images are very high.

3.5 Image segmentation experiment

In addition, our method improves the applicability of image segmentation, an important research topic in the imaging field. We used the same segmentation algorithm, based on fuzzy c-means (FCM) clustering [33], to segment the result images of all comparison methods, segmenting each underwater image into 10 colors. On the results of our method, the segmentation algorithm identifies the edges and details of objects more accurately and segments the target better. From the segmentation perspective, this proves that our method excels at enhancing color, contrast, and detail.
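To reproduce the segmentation step, a compact plain fuzzy c-means sketch on pixel colors is given below. This is generic FCM rather than the superpixel-accelerated variant of [33]; apart from the 10 clusters mentioned above, the parameters are assumptions.

```python
import numpy as np

def fcm_segment(img, n_clusters=10, m=2.0, iters=30, seed=0):
    """Plain fuzzy c-means on RGB pixel values; returns an H x W label map.

    img: H x W x 3 float array. m is the fuzziness exponent; the update
    rules are the standard FCM membership and center equations.
    """
    rng = np.random.default_rng(seed)
    X = img.reshape(-1, 3).astype(float)                  # N x 3 pixel colors
    centers = X[rng.choice(len(X), n_clusters, replace=False)]
    for _ in range(iters):
        # Distances to centers (N x C), with a floor to avoid division by 0.
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-9
        # Memberships: u_nk = 1 / sum_j (d_nk / d_nj)^(2 / (m - 1)).
        u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0)),
                         axis=2)
        um = u ** m
        centers = (um.T @ X) / um.sum(axis=0)[:, None]    # weighted centers
    return u.argmax(axis=1).reshape(img.shape[:2])
```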

Figure 11 shows the experimental results of the comparison methods, including UDCP [11], RBE [25], IBLA [26], GDCP [27], RGHS [28], and GBDRC [29], and Fig. 12 shows the segmentation results of each image in Fig. 11. The UDCP, GDCP, and GBDRC methods were not effective in color processing, and the edges of objects could not be recognized during segmentation. The IBLA and RGHS methods did not process the edge details of the image, and their segmentation results show large color blocks that fail to highlight the contour of the target object. After segmentation, our result images show more prominent objects, more details, and clearer boundaries, and they are closer to the real underwater scene. Therefore, from the perspective of image segmentation, our method processes underwater images significantly better.

Fig. 11. Comparison test result graph. The underwater images are from the UIEB dataset. The comparison methods from left to right are as follows: (a) Raw data, (b) UDCP [11], (c) RBE [25], (d) IBLA [26], (e) GDCP [27], (f) RGHS [28], (g) GBDRC [29], and (h) our method.

Fig. 12. Comparison of experimental image segmentation results. The segmentation is based on the FCM algorithm [33]. The comparison methods from left to right are as follows: (a) Raw data, (b) UDCP [11], (c) RBE [25], (d) IBLA [26], (e) GDCP [27], (f) RGHS [28], (g) GBDRC [29], and (h) our method.

4. Conclusion

In this paper, we presented the maximum bright channel color restoration method to correct the color of underwater images. The ERHS method improved the contrast of the color-corrected underwater images and eliminated noise. A wavelet two-level decomposition extracted the detail components of the two intermediate results for equal-proportion fusion, preserving the details of the underwater image. Our fusion only needs to extract the edge details of the color-corrected image and the contrast-stretched image for equal-proportion fusion, restoring the color of the underwater image while improving contrast and enhancing details. Our method effectively restores underwater images and mitigates color distortion, low contrast, noise, and loss of detail. Furthermore, it provides an opportunity for computer vision research on single underwater images and underwater videos.

Funding

National Natural Science Foundation of China (U20A20161).

Acknowledgments

We thank the joint laboratory of the Dalian University of Technology and Zhangzidao Group for providing the dataset. We are also extremely grateful to the anonymous reviewers for their critical comments on the manuscript.

Disclosures

The authors declare that there are no conflicts of interest related to this paper.

Data availability

Data underlying the results presented in this paper are not publicly available but may be obtained from the authors upon reasonable request.

References

1. K. Liu and Y. Liang, “Underwater image enhancement method based on adaptive attenuation-curve prior,” Opt. Express 29(7), 10321–10345 (2021). [CrossRef]  

2. J. Zhou, L. Pang, and W. Zhang, “Underwater image enhancement method based on color correction and three-interval histogram stretching,” Meas. Sci. Technol. 32(11), 115405 (2021). [CrossRef]  

3. Y. Tao, L. Dong, L. Xu, and W. Xu, “Effective solution for underwater image enhancement,” Opt. Express 29(20), 32412 (2021). [CrossRef]  

4. R. Hummel, “Image enhancement by histogram transformation,” Comp. Graph. and Image Process. 6(2), 184–195 (1977). [CrossRef]  

5. C. O. Ancuti, C. Ancuti, C. De Vleeschouwer, and P. Bekaert, “Color Balance and Fusion for Underwater Image Enhancement,” IEEE Trans. Image Process. 27(1), 379–393 (2018). [CrossRef]  

6. J. Zhou, Y. Wang, W. Zhang, and Chongyi Li, “Underwater image restoration via feature priors to estimate background light and optimized transmission map,” Opt. Express 29(18), 28228–28245 (2021). [CrossRef]  

7. J. Zhou, T. Yang, and W. Ren, “Underwater image restoration via depth map and illumination estimation based on a single image,” Opt. Express 29(19), 29864–29886 (2021). [CrossRef]  

8. J. Zhou, D. Zhang, and W. Zhang, “Classical and state-of-the-art approaches for underwater image defogging: a comprehensive survey,” Front. Inf. Technol. Electron. Eng. 21(12), 1745–1769 (2020). [CrossRef]  

9. J. S. Jaffe, “Computer modeling and the design of optimal underwater imaging systems,” IEEE J. Ocean. Eng. 15(2), 101–111 (1990). [CrossRef]  

10. K. He, J. Sun, and X. Tang, “Single image haze removal using dark channel prior,” IEEE Trans. Pattern Anal. Mach. Intell. 33(12), 2341–2353 (2011). [CrossRef]  

11. P. Drews, E. Nascimento, F. Moraes, S. Botelho, and M. Campos, “Transmission estimation in underwater single images,” in Proc. IEEE Int. Conf. Comput. Vis. Workshops, 825–830 (2013).

12. W. Song, Y. Wang, D. Huang, and A. Liotta, “Enhancement of underwater images with statistical model of background light and optimization of transmission map,” IEEE Trans. Broadcast. 66(1), 153–169 (2020). [CrossRef]  

13. P. Zhuang, C. Li, and J. Wu, “Bayesian retinex underwater image enhancement,” Eng. Appl. Artif. Intell. 101, 104171 (2021). [CrossRef]  

14. V. Sharma, A. Diba, and D. Neven, “Classification-driven dynamic image enhancement,” in Proceedings of CVPR, 4033–4041 (2018).

15. G. Buchsbaum, “A spatial processor model for object colour perception,” J. Franklin Inst. 310(1), 1–26 (1980). [CrossRef]

16. E. H. Land, “The retinex theory of color vision,” Sci. Am. 237(6), 108–128 (1977). [CrossRef]

17. J. van de Weijer, T. Gevers, and A. Gijsenij, “Edge-based color constancy,” IEEE Trans. Image Process. 16(9), 2207–2214 (2007). [CrossRef]

18. G. D. Finlayson and E. Trezzi, “Shades of gray and colour constancy,” IS&T 1, 37–41 (2004).

19. C. Weng, H. Chen, and C. Fuh, “A novel automatic white balance method for digital still cameras,” in IEEE Int. Symp. Circuits Syst. (ISCAS), 3801–3804 (2005).

20. A. Gijsenij, T. Gevers, and J. van de Weijer, “Improving color constancy by photometric edge weighting,” IEEE Trans. Pattern Anal. Mach. Intell. 34(5), 918–929 (2012). [CrossRef]

21. K. He, J. Sun, and X. Tang, “Guided image filtering,” IEEE Trans. Pattern Anal. Mach. Intell. 35(6), 1397–1409 (2013). [CrossRef]  

22. S. M. Pizer, E. P. Amburn, and J. D. Austin, “Adaptive histogram equalization and its variations,” CVGIP 39(3), 355–368 (1987). [CrossRef]  

23. T. K. Kim, J. K. Paik, and B. S. Kang, “Contrast enhancement system using spatially adaptive histogram equalization with temporal filtering,” IEEE Trans. Consum. Electron. 44(1), 82–87 (1998). [CrossRef]  

24. J. Zhou, D. Zhang, and W. Zhang, “Underwater image enhancement method via multi-feature prior fusion,” Appl. Intell. 111, 10489 (2022). [CrossRef]  

25. X. Fu, P. Zhuang, Y. Huang, Y. Liao, X. P. Zhang, and X. Ding, “A retinex-based enhancing approach for single underwater image,” in IEEE Int. Conf. Image Process. (ICIP), 4572–4576 (2014).

26. Y. T. Peng and P. C. Cosman, “Underwater image restoration based on image blurriness and light absorption,” IEEE Trans. Image Process. 26(4), 1579–1594 (2017). [CrossRef]

27. Y. T. Peng, K. Cao, and P. C. Cosman, “Generalization of the dark channel prior for single image restoration,” IEEE Trans. Image Process. 27(6), 2856–2868 (2018). [CrossRef]

28. D. Huang, Y. Wang, and W. Song, “Shallow-water image enhancement using relative global histogram stretching based on adaptive parameter acquisition,” in Proc. Int. Conf. Multimedia Modeling (MMM), Springer, Cham, 453–465 (2018).

29. C. Li, J. Guo, and Y. Pang, “Single underwater image restoration by blue-green channels dehazing and red channel correction,” in Proc. IEEE ICASSP, 1731–1735 (2016).

30. M. Yang and A. Sowmya, “An underwater color image quality evaluation metric,” IEEE Trans. Image Process. 24(12), 6062–6071 (2015). [CrossRef]

31. C. Li, C. Guo, W. Ren, R. Cong, J. Hou, S. Kwong, and D. Tao, “An Underwater Image Enhancement Benchmark Dataset and beyond,” IEEE Trans. Image Process. 29, 4376–4389 (2020). [CrossRef]  

32. R. Liu, X. Fan, M. Zhu, M. Hou, and Z. Luo, “Real-World Underwater Enhancement: Challenges, Benchmarks, and Solutions under Natural Light,” IEEE Trans. Circuits Syst. Video Technol. 30(12), 4861–4875 (2020). [CrossRef]  

33. T. Lei, X. Jia, Y. Zhang, S. Liu, H. Meng, and A. K. Nandi, “Superpixel-Based Fast Fuzzy C-Means Clustering for Color Image Segmentation,” IEEE Trans. Fuzzy Syst. 27(9), 1753–1766 (2019). [CrossRef]  


