
Fusion-based underwater image enhancement with category-specific color correction and dehazing

Open Access

Abstract

Underwater imaging is usually affected by water scattering and absorption, resulting in image blur and color distortion. In order to achieve color correction and dehazing for different underwater scenes, in this paper we report a fusion-based underwater image enhancement technique. First, statistics of the hue channel of underwater images are used to divide the underwater images into two categories: color-distorted images and non-distorted images. Then, category-specific combinations of color compensation and color constancy algorithms are used to remove the color shift. Next, a ground-dehazing algorithm using the haze-line prior is employed to remove the haze in the underwater image. Finally, a channel-wise fusion method based on the CIE L* a* b* color space is used to fuse the color-corrected image and the dehazed image. For experimental validation, we built a setup to acquire underwater images. The experimental results validate that the category-specific color correction strategy is robust to different categories of underwater images and that the fusion strategy simultaneously removes haze and corrects color casts. The quantitative metrics on the UIEBD and EUVP datasets validate its state-of-the-art performance.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction and related work

The quality of underwater images is restricted by two factors: absorption by water and suspended particles, which intensifies with increasing wavelength and results in a color shift, and scattering, which leads to a foggy appearance and contrast degradation. Therefore, it is necessary to correct color, improve contrast, and eliminate haze.

Underwater imaging techniques can be roughly divided into IFM (image formation model), enhancement, and deep learning-based methods. The IFM explains the degradation mechanism of underwater images. These methods take into account the propagation and polarization properties of light in water and use these properties to estimate the parameters of the IFM [1–4]. Refs. [5–10] used the polarization difference between the backscattered light and the signal light to eliminate scattering. Refs. [4,11–13] combined the DCP (dark channel prior) [14] with the propagation prior of light in water and proposed multiple variants of the DCP. Although these methods have made progress in dehazing, they are not effective in color correction.

The enhancement methods do not consider the causes of image degradation but instead improve the visual quality of underwater images through image processing. Refs. [15–19] used color balance, histogram stretching, and wavelet fusion to enhance underwater images. Ancuti et al. [20] applied an LP (Laplacian pyramid) fusion strategy and obtained good results. However, the white-balancing algorithm in [20] fails in some cases, as we will show later, and its dehazing performance is limited.

Recently, deep learning-based methods [21–26] have been introduced. Neural networks attract the interest of scholars due to their strong nonlinear fitting capability. However, the unique underwater environment makes it challenging to obtain ground truth. Therefore, a limitation of deep learning-based methods is the lack of datasets.

Due to the variation of underwater scenes, no color constancy algorithm can be considered universal. Given the large variety of underwater scenes and available color constancy algorithms, the question is how to select the algorithm that performs best for a specific underwater image. Besides, existing underwater imaging techniques cannot correct the color shift and dehaze simultaneously. In this paper, to solve the above problems, we divide underwater images into two categories, color-distorted images and non-distorted images, using the statistics of the hue channel. Category-specific combinations of color compensation and color constancy algorithms are used to remove the color shift. We use the algorithm in [27] to obtain the dehazed image. Finally, we use a channel-wise fusion method based on the $\operatorname {CIE} L^{*} a^{*} b^{*}$ color space to blend the color-corrected image and the dehazed image.

In summary, the main innovations of the reported fusion-based underwater image enhancement technique include:

  • Statistics of the hue channel of underwater images are used to divide the underwater images into two categories: color-distorted images and non-distorted images.
  • Category-specific combinations of color compensation and color constancy algorithms are used to remove the color shift.
  • A channel-wise fusion method is proposed to fuse the color-corrected image and the dehazed image in the $\operatorname {CIE} L^{*} a^{*} b^{*}$ color space. By fusing with the color-corrected image, the ground-dehazing method can be applied to underwater images.
The rest of this article is organized as follows. The details of the reported method are presented in Section 2 (Method). The experimental results are shown in Section 3 (Simulation and Experiment). In Section 4 (Conclusion), we conclude this work with further discussion.

2. Method

Our goal is to achieve color correction and dehazing for different underwater scenes. A single color correction strategy is not suitable for all underwater scenes; for example, the color compensation used for distorted images with a green color cast would overcompensate an undistorted image. We therefore divide the underwater images into several categories and choose the proper color correction strategy for each category. To remove haze, we use the haze-line prior [27]. However, neither of these two steps alone achieves good results, so we adopt a channel-wise fusion method based on the $\operatorname {CIE} L^{*} a^{*} b^{*}$ color space to fuse the color-corrected image and the dehazed image. The flowchart of the overall strategy is shown in Fig. 1. The following is a detailed description of the method, divided into four parts.

Fig. 1. Flowchart of the proposed method.

2.1 Underwater image classification

Underwater images vary widely: some show serious color distortion, while others look similar to images taken on the ground. Following [28], the VOH (variation of hue) of an underwater image can indicate whether it is distorted. Therefore, we use the VOH to divide the underwater images into two categories: distorted and undistorted images. We define a measure of the VOH as:

$$V O H=\frac{1}{M N} \sum_{(x, y)} \delta(x, y)$$
$$\delta(x, y)=\frac{\sum_{(i, j) \in \omega_{(x, y)}} h(i, j)-h(x, y)}{K}$$
where $M \times N$ is the size of the image, $\omega _{(x, y)}$ is a neighborhood region of pixel $(x, y)$, $K$ is the number of pixels in the neighborhood, and $h(x, y) \in [0,1]$ is the hue value at location $(x, y)$. We randomly select 800 raw images and 800 reference images from the UIEBD dataset [23]. The red and blue graphs in Fig. 2(a) are the PDFs (probability density functions) of the VOH of the distorted (raw) and undistorted (reference) images, respectively. It can be seen that the VOH of the raw images has a small variation range and is concentrated at small values, while the VOH of the reference images varies widely and is rarely small. We take the VOH value at the intersection of the two graphs as the decision boundary for identifying whether an image is distorted. Figure 2(a) shows that the decision boundary is $\mathrm {VOH}=0.0136$.
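To make Eqs. (1)–(2) and the 0.0136 decision boundary concrete, the following is a minimal Python sketch of the VOH-based classification. The 3×3 neighborhood, the use of OpenCV's box filter, and taking the absolute value of the local hue difference are our own implementation choices, not details specified in the text.

```python
import cv2
import numpy as np

def variation_of_hue(img_bgr, ksize=3):
    """Eqs. (1)-(2): mean local variation of the hue channel."""
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
    # OpenCV stores hue of an 8-bit image in [0, 179]; rescale to [0, 1].
    h = hsv[:, :, 0].astype(np.float64) / 179.0
    # Mean hue over a ksize x ksize neighborhood of every pixel (normalized box filter).
    local_mean = cv2.boxFilter(h, ddepth=-1, ksize=(ksize, ksize))
    # delta(x, y): difference between the neighborhood mean and the center hue.
    # We take the absolute value so the measure is a magnitude of variation
    # (our reading of Eq. (2)); hue circularity is ignored for simplicity.
    delta = np.abs(local_mean - h)
    return float(delta.mean())        # average over all M*N pixels, Eq. (1)

def is_distorted(img_bgr, threshold=0.0136):
    """True if the image is classified as color-distorted (small VOH)."""
    return variation_of_hue(img_bgr) < threshold
```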

Fig. 2. Underwater image classification. (a) Red and blue graphs are the PDF (probability density function) of VOH (variation of hue) of distorted and undistorted images. (b) Red, green, and blue graphs are the PDF of the hue of the yellowish, greenish, and bluish images, respectively.

However, as the PDFs in Fig. 2(a) show, the method has a certain probability of misclassification. We consider the transition from undistorted to distorted images to be a continuous process without a clear boundary. In practice, when the VOH is around 0.0136, even if a small portion of images is misclassified, these images can still be corrected using the method intended for the other class. As can be seen in Fig. 3, the original images in the first and second rows are undistorted, yet their VOH values are both less than 0.0136, and the recovered images are obtained using the method for distorted images; the images in the third and fourth rows show the opposite case.

Fig. 3. Misclassified images with recovery effects. The original images in the first and second rows are the undistorted images, and the recovered images are obtained by using the method for the distorted images. The original images in the third and fourth rows are the distorted images, and the recovered images are obtained by using the method for the undistorted images.

Further, there are several types of distorted images: greenish, bluish, and yellowish. This is because the type and concentration of the medium and the camera-to-scene distance differ across underwater scenes, resulting in different intensities of scattering and absorption as light propagates in the water. Because the VOH of distorted images is small while the color difference among the three types is large, we use the range of the image hue to identify the three types. As shown in Fig. 2(b), the hue ranges of the yellowish, greenish, and bluish images are $0.1 < hue_{y} \leq 0.25$, $0.25 < hue_{g} \leq 0.4$, and $0.4 < hue_{b} \leq 0.6$, respectively. It is worth noting that the hue ranges of the greenish and bluish images partly overlap; images in the overlapping part can be regarded as green-bluish. The color processing of these images is discussed in the next subsection.
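As an illustration of the hue-range rule above, the sketch below assigns a distorted image to a tint category. Summarizing the image by its median hue and the fallback label are our own assumptions; the text only specifies the ranges shown in Fig. 2(b), and the green-bluish overlap is handled in the next subsection rather than by the ranges alone.

```python
import cv2
import numpy as np

def tint_category(img_bgr):
    """Assign a color-distorted image to one of the tint categories of Sec. 2.1.

    Hue ranges follow Fig. 2(b): yellowish (0.1, 0.25], greenish (0.25, 0.4],
    bluish (0.4, 0.6]. Using the median hue as the summary statistic is our choice.
    """
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
    hue = hsv[:, :, 0].astype(np.float64) / 179.0   # rescale OpenCV hue to [0, 1]
    h = float(np.median(hue))
    if 0.1 < h <= 0.25:
        return "yellowish"
    if 0.25 < h <= 0.4:
        return "greenish"    # may be green-bluish near the upper end (see Sec. 2.2)
    if 0.4 < h <= 0.6:
        return "bluish"
    return "other"           # outside the documented ranges
```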

2.2 Color correction

In the previous subsection, we divided images into several categories. The purpose of this classification is to choose the appropriate color compensation and color constancy processing for each type of image. This subsection demonstrates how to choose the appropriate combination of color compensation and color constancy algorithms for each type. The undistorted images do not have a highly attenuated channel, and their appearance is similar to ground scenes; the Gray-World algorithm works well on this category. For the distorted images, however, one or more channels are always severely attenuated, so we need to compensate the attenuated channels first. In [20], Ancuti et al. used the green channel as the reference channel to compensate the red and blue channels. For most underwater images, the green channel has better detail and texture information than the other two channels. Nevertheless, when the image is yellowish, the green channel is attenuated and the red channel becomes the superior channel. Furthermore, when the image is bluish, the blue channel is overexposed, and we should reduce it instead of compensating it. To account for these observations, we use a compensation form similar to that in [20] and express the compensated channels at each pixel $(x, y)$ of the yellowish, greenish, and bluish images as follows. For the yellowish images, the green and blue channels are compensated as:

$$\left\{\begin{array}{l} I_{g c}(x, y)=I_{g}(x, y)+\alpha\left(\overline{I_{r}}-\overline{I_{g}}\right)\left(1-I_{g}(x, y)\right) I_{r}(x, y) \\ I_{b c}(x, y)=I_{b}(x, y)+\beta\left(\overline{I_{r}}-\overline{I_{b}}\right)\left(1-I_{b}(x, y)\right) I_{r}(x, y) \end{array}\right.$$
where $I_{r}$, $I_{g}$, and $I_{b}$ represent the red, green, and blue channels of image $I$, each normalized to the interval [0, 1] by the upper limit of its dynamic range; $\overline {I_{r}}$, $\overline {I_{g}}$, and $\overline {I_{b}}$ are the mean values of $I_{r}$, $I_{g}$, and $I_{b}$, respectively; and $\alpha$ and $\beta$ denote constant parameters. In practice, our tests revealed that $\alpha =1$ and $\beta =1.3$ are suitable for subsequent processing. The symbols in the compensation equations for the greenish and bluish images have the same meaning as in Eq. (3), so they are not repeated. For the greenish images, the compensation takes the form:
$$\left\{\begin{array}{l} I_{r c}(x, y)=I_{r}(x, y)+\alpha\left(\overline{I_{g}}-\overline{I_{r}}\right)\left(1-I_{r}(x, y)\right) I_{g}(x, y) \\ I_{b c}(x, y)=I_{b}(x, y)+\beta\left(\overline{I_{g}}-\overline{I_{b}}\right)\left(1-I_{b}(x, y)\right) I_{g}(x, y) \end{array}\right.$$
where $I_{g}$ is the reference channel used to compensate $I_{r}$ and $I_{b}$. For the bluish images, the compensation takes the form:
$$\left\{\begin{array}{c} I_{r c}(x, y)=I_{r}(x, y)+\alpha\left(\overline{I_{g}}-\overline{I_{r}}\right)\left(1-I_{r}(x, y)\right) I_{g}(x, y) \\ I_{b c}(x, y)=I_{b}(x, y)-\beta\left(\overline{I_{b}}-\overline{I_{g}}\right) I_{b}(x, y) I_{g}(x, y) \end{array}\right.$$
where $I_{g}$ is the reference channel used to compensate $I_{r}$ and to reduce $I_{b}$. Color compensation alone is not enough to remove the color shift of the distorted images. The color shift can be regarded as illumination by different light sources [33]; based on this assumption, color constancy algorithms can be used to remove the color shift after color compensation. A general framework for color constancy algorithms was proposed in [32]:
$$\left(\int\left|\frac{\partial^{n} I_{c, \sigma}(x, y)}{\partial x^{i} \partial y^{j}}\right|^{p} d x d y\right)^{\frac{1}{p}}=k e_{c}^{n, p, \sigma}$$
where $I(x, y)$ is the pixel value at location $(x,y)$, $c \in \{r, g, b\}$, $n$ is the order of the derivative, and $p$ is the Minkowski norm. The derivative is defined as the convolution of the image with the derivative of a Gaussian filter with scale parameter $\sigma$ [34]:
$$\frac{\partial^{n} I_{c, \sigma}}{\partial x^{i} \partial y^{j}}=I_{c} * \frac{\partial^{n} G_{\sigma}}{\partial x^{i} \partial y^{j}}$$
where $*$ denotes convolution and $i+j=n$. Different values of $n$, $p$, and $\sigma$ generate different color constancy algorithms; for instance, the Gray-World algorithm is obtained by setting $n=0$, $p=1$, and $\sigma =0$.
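Before turning to the constancy results, a minimal sketch of the category-specific compensation of Eqs. (3)–(5) is given below, assuming RGB channels already normalized to [0, 1]. Only the values $\alpha = 1$ and $\beta = 1.3$ come from the text; the red-only compensation used for the green-bluish images (discussed below) and the final clipping are our own choices.

```python
import numpy as np

def compensate(img_rgb, category, alpha=1.0, beta=1.3):
    """Category-specific color compensation, Eqs. (3)-(5).

    img_rgb: float array of shape (H, W, 3) in RGB order, values in [0, 1].
    """
    out = img_rgb.copy()
    r, g, b = img_rgb[..., 0], img_rgb[..., 1], img_rgb[..., 2]
    r_m, g_m, b_m = r.mean(), g.mean(), b.mean()

    if category == "yellowish":        # Eq. (3): red is the reference channel
        out[..., 1] = g + alpha * (r_m - g_m) * (1.0 - g) * r
        out[..., 2] = b + beta  * (r_m - b_m) * (1.0 - b) * r
    elif category == "greenish":       # Eq. (4): green is the reference channel
        out[..., 0] = r + alpha * (g_m - r_m) * (1.0 - r) * g
        out[..., 2] = b + beta  * (g_m - b_m) * (1.0 - b) * g
    elif category == "bluish":         # Eq. (5): compensate red, attenuate blue
        out[..., 0] = r + alpha * (g_m - r_m) * (1.0 - r) * g
        out[..., 2] = b - beta  * (b_m - g_m) * b * g
    elif category == "green-bluish":   # compensate only the red channel (see text)
        out[..., 0] = r + alpha * (g_m - r_m) * (1.0 - r) * g
    # Undistorted images are left unchanged before white balancing.
    return np.clip(out, 0.0, 1.0)
```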

Different color constancy algorithms are based on different assumptions; as a consequence, no color constancy algorithm can be considered universal. We applied four well-known color constancy algorithms (the Gray-World [29], the maxRGB [30], the Gray-Edge [32], and the Shades of Gray [31]) to the yellowish, greenish, and bluish images. The results were measured by the UICM (underwater image colorfulness measure), the colorfulness term of the UIQM (underwater image quality measure) [35]. We selected one image of each type from the Challenging-60 subset of the UIEBD to demonstrate the effect of these four algorithms on each type of image. As can be seen in Table 1 and Fig. 4, the Gray-World algorithm has the best UICM score for the yellowish image but suffers from red artifacts on the greenish and bluish images. The Shades of Gray algorithm has the best UICM scores for the greenish and bluish images but does not work well on the yellowish image. Although the score of the maxRGB algorithm on the greenish image is close to that of Shades of Gray, it has the second-lowest and lowest scores on the yellowish and bluish images, respectively. This indicates that a single color constancy algorithm is not appropriate for all underwater images. In addition, we find that the blue channel of the green-bluish images mentioned in the previous subsection has texture information similar to that of the green channel, and that the Gray-World algorithm works best for them. So, for the green-bluish images we compensate only the red channel and then apply the Gray-World algorithm.
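All four algorithms compared above are instances of the framework in Eqs. (6)–(7). The sketch below estimates the illuminant with that framework and applies a diagonal (von Kries) correction. The Gray-World and maxRGB settings follow the text; the Shades of Gray norm p = 6 and a positive Gaussian scale for Gray-Edge are commonly used values rather than values from this paper, and the normalization toward a gray illuminant is our own choice.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def estimate_illuminant(img_rgb, n=0, p=1, sigma=0.0):
    """Minkowski-norm framework of Eqs. (6)-(7), evaluated per color channel.

    n=0, p=1, sigma=0  -> Gray-World
    n=0, p=np.inf      -> maxRGB (white patch)
    n=0, p=6           -> Shades of Gray (p = 6 is a commonly used value)
    n=1                -> Gray-Edge (first-order Gaussian derivative)
    """
    if n >= 1 and sigma <= 0:
        sigma = 1.0                        # Gray-Edge needs a positive Gaussian scale
    e = np.zeros(3)
    for c in range(3):
        ch = img_rgb[..., c].astype(np.float64)
        if n == 0:
            d = gaussian_filter(ch, sigma) if sigma > 0 else ch
        else:
            # Gradient magnitude of the Gaussian-smoothed channel (Eq. (7), n = 1).
            gx = gaussian_filter(ch, sigma, order=(0, 1))
            gy = gaussian_filter(ch, sigma, order=(1, 0))
            d = np.hypot(gx, gy)
        d = np.abs(d)
        e[c] = d.max() if np.isinf(p) else (d ** p).mean() ** (1.0 / p)
    return e / (np.linalg.norm(e) + 1e-12)

def white_balance(img_rgb, **kwargs):
    """Diagonal (von Kries) correction that maps the estimated illuminant to gray."""
    e = estimate_illuminant(img_rgb, **kwargs)
    return np.clip(img_rgb / (np.sqrt(3.0) * e + 1e-12), 0.0, 1.0)
```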

Fig. 4. The results of color constancy. From top row to bottom row are the yellowish, greenish, and bluish types of underwater images. From left column to right column are the color constancy algorithms: The Gray-World [29], the maxRGB [30], the Shades of Gray [31], and the Gray-Edge [32].

Table 1. Color correction quantitative evaluation based on the UICM (underwater image colorfulness measure) metric.

2.3 Image dehazing

In the previous subsection, we presented color correction strategies for different categories of underwater images, focusing on the two-step process for distorted images: color compensation and white balance. We formulated category-specific color compensation forms and selected the most appropriate color constancy algorithm for each category. Although these color correction strategies are crucial for removing the color shift, they are insufficient for obtaining a high-fidelity underwater image, because haze still remains in the color-corrected image, causing low contrast and blurriness.

The IFM methods remove haze by estimating the parameters of the IFM under specific assumptions about the underwater environment. Consequently, these methods have low generality (because of the specific assumptions) and high computational complexity (because of the large number of parameters in the IFM). Since ground-dehazing algorithms rely on a similar IFM, they are often used as references for dehazing underwater images; the methods in [11,12] are variants of the DCP, which was originally proposed for ground dehazing.

In this work, a ground-dehazing algorithm, the non-local dehazing method based on the haze-line prior [27], is employed to remove the haze in the underwater image. We use a ground-dehazing algorithm because the IFM in ground dehazing is more straightforward than that in underwater dehazing: foggy outdoor scenes are less complex than underwater scenes.
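For intuition, the following is a much-simplified sketch of the haze-line idea of [27], not the authors' implementation: the airlight is taken as the brightest pixel (the paper uses a more robust estimator), the directions of I − A are clustered with k-means instead of the paper's fixed sphere tessellation, and the transmission map is not regularized.

```python
import numpy as np
from sklearn.cluster import KMeans

def dehaze_haze_line(img_rgb, n_lines=200, t_min=0.1):
    """Simplified non-local (haze-line) dehazing in the spirit of [27].

    img_rgb: float array in [0, 1], shape (H, W, 3). Returns the dehazed image.
    """
    h, w, _ = img_rgb.shape
    flat = img_rgb.reshape(-1, 3)

    # 1. Airlight A: brightest pixel (crude stand-in for the estimator in [27]).
    A = flat[np.argmax(flat.sum(axis=1))]

    # 2. Haze-lines: pixels of the same surface color lie on a line through A,
    #    so we group pixels by the direction of I - A.
    ia = flat - A
    r = np.linalg.norm(ia, axis=1) + 1e-12          # distance from the airlight
    directions = ia / r[:, None]
    labels = KMeans(n_clusters=n_lines, n_init=4,
                    random_state=0).fit_predict(directions)

    # 3. Per haze-line, the farthest pixel is assumed haze-free: t = r / r_max.
    r_max = np.zeros(n_lines)
    np.maximum.at(r_max, labels, r)
    t = r / (r_max[labels] + 1e-12)

    # Physical lower bound on t so the recovered radiance stays non-negative.
    t_lb = 1.0 - np.min(flat / (A + 1e-12), axis=1)
    t = np.clip(np.maximum(t, t_lb), t_min, 1.0)

    # 4. Invert the image formation model: J = (I - A) / t + A.
    J = (flat - A) / t[:, None] + A
    return np.clip(J, 0.0, 1.0).reshape(h, w, 3)
```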

2.4 Image fusion

To dehaze and correct color simultaneously, we propose a channel-wise fusion method to fuse the color-corrected image and the dehazed image. Our fusion method is based on the following assumption: haze information resides in the lightness (intensity) channel, while color information resides in the color channels. We convert the dehazed image and the color-corrected image to the $\operatorname {CIE} L^{*} a^{*} b^{*}$ color space. The $L^{*}$ channel of the dehazed image has better edges and scene detail, while the $a^{*}$ and $b^{*}$ channels of the color-corrected image carry more accurate color information. We therefore use the $L^{*}$ channel of the dehazed image as the $L^{*}$ channel of the fused image, and compute the $a^{*}$ and $b^{*}$ channels of the fused image as:

$$a_{\text{fused}}=\sigma \cdot a_{\text{dz}}+(1-\sigma) \cdot a_{\text{cc}}$$
$$b_{\text{fused}}=\sigma \cdot b_{\text{dz}}+(1-\sigma) \cdot b_{\text{cc}}$$
where $a_{\text {fused}}$ ($b_{\text {fused}}$) is the $a^{*}$ ($b^{*}$) channel of the fused image, $a_{\text {dz}}$ ($b_{\text {dz}}$) is that of the dehazed image, and $a_{\text {cc}}$ ($b_{\text {cc}}$) is that of the color-corrected image. $\sigma$ is a weight ranging from 0 to 0.5 (not to be confused with the Gaussian scale in Eq. (7)); it is adjusted to tune the background color so that the fused image retains a natural underwater appearance. After that, we convert the fused image back to the RGB color space. As shown in Fig. 5, the color-corrected and dehazed images obtained in the previous two subsections still contain either a color shift or haze. The image without fusion illustrates that dehazing directly after color correction gives the wrong result, whereas the fused result shows that the proposed fusion method eliminates haze and color cast simultaneously.
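A minimal sketch of the channel-wise fusion of Eqs. (8)–(9) using OpenCV's Lab conversion is given below; the default weight σ = 0.3 is an illustrative value within the stated [0, 0.5] range, not a value recommended in the paper.

```python
import cv2
import numpy as np

def fuse_lab(color_corrected_bgr, dehazed_bgr, sigma=0.3):
    """Channel-wise fusion in CIE L*a*b*, Eqs. (8)-(9).

    Takes the L* channel from the dehazed image (edges/contrast) and blends the
    a*/b* channels, weighting the color-corrected image by (1 - sigma).
    Inputs are 8-bit BGR images of the same size; sigma is in [0, 0.5].
    """
    cc = cv2.cvtColor(color_corrected_bgr.astype(np.float32) / 255.0, cv2.COLOR_BGR2Lab)
    dz = cv2.cvtColor(dehazed_bgr.astype(np.float32) / 255.0, cv2.COLOR_BGR2Lab)

    fused = np.empty_like(cc)
    fused[..., 0] = dz[..., 0]                                        # L* from the dehazed image
    fused[..., 1] = sigma * dz[..., 1] + (1.0 - sigma) * cc[..., 1]   # Eq. (8), a*
    fused[..., 2] = sigma * dz[..., 2] + (1.0 - sigma) * cc[..., 2]   # Eq. (9), b*

    out = cv2.cvtColor(fused, cv2.COLOR_Lab2BGR)                      # back to [0, 1] BGR
    return np.clip(out * 255.0, 0, 255).astype(np.uint8)
```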

Fig. 5. The ablation experiment compares results with and without the proposed fusion.

3. Simulation and experiment

In this section, we first validate our method. Then we compare our technique with existing specialized underwater imaging techniques, including IFM, enhancement, and deep learning methods. Finally, we simulate the underwater environment and build a setup to acquire underwater images to prove the utility of our method in practice.

3.1 Validation of our method

We first demonstrate the effectiveness of our color compensation method for different categories (undistorted, yellowish, greenish, and bluish) of underwater images. For each category, we process the image in four ways: no compensation, compensation in the form of Eq. (3), compensation in the form of Eq. (4), and compensation in the form of Eq. (5). After that, we apply the most appropriate color constancy algorithm for the specific category. The results are evaluated by the UICM metric in Fig. 6. Equations (3), (4), and (5) are designed for the yellowish, greenish, and bluish images, respectively. As can be seen in Fig. 6, the undistorted image has the highest score without color compensation. Each distorted image processed by the compensation formula designed for its category has the highest UICM score and a better color-shift removal result than with the other formulas. For example, the greenish image is restored to its normal color and has the highest score using Eq. (4), but produces quantization artifacts with the other formulas, due to erroneous compensation of the channels.

Fig. 6. The results of using different color compensation in color correction. The reconstruction quality is evaluated by the UICM metric. The highest scores are bolded. From top row to bottom row are the undistorted, yellowish, greenish, and bluish types of underwater images.

Each of the two fusion inputs has superior channels that carry more information in a given color space. To demonstrate the validity of our fusion strategy, we fuse the superior channels and the inferior channels of the two input images in the $\operatorname {CIE} L^{*} a^{*} b^{*}$, RGB, and HSV color spaces, respectively. The results are measured by the IL-NIQE (integrated local natural image quality evaluator) [36]. Figures 7(a) and 7(b) show the results of fusing the superior and inferior channels in $\operatorname {CIE} L^{*} a^{*} b^{*}$: fusing the superior channels gives a better result than fusing the inferior channels. Figures 7(c) and 7(d) are the results of fusion in the RGB and HSV color spaces, which neither remove the color shift nor avoid artifacts. In contrast, fusion in the $\operatorname {CIE} L^{*} a^{*} b^{*}$ color space yields a better appearance with both color correction and dehazing. This is because the lightness channel and the color channels of $\operatorname {CIE} L^{*} a^{*} b^{*}$ are separated, which makes full use of the advantages of the two fusion inputs, whereas the S channel of the HSV color space mixes color and intensity information and causes the two inputs to interfere with each other. In the RGB color space, we fuse the two inputs in a naive way: we multiply the two versions by two weights that sum to one. By changing the weights we can only trade off dehazing against color correction and cannot achieve both simultaneously. Fusion in the RGB color space would require more complex weight maps, as in [20]. By comparison, fusion in the $\operatorname {CIE} L^{*} a^{*} b^{*}$ color space is simpler.
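For completeness, the naive RGB fusion used for the Fig. 7(c) comparison is simply a convex combination of the two inputs, which is why a single scalar weight can only trade dehazing against color correction; the sketch below assumes both inputs are float images in [0, 1], and the weight value is arbitrary.

```python
import numpy as np

def fuse_rgb_naive(color_corrected, dehazed, w=0.5):
    """Pixel-wise convex combination in RGB: w * dehazed + (1 - w) * color-corrected."""
    return np.clip(w * dehazed.astype(np.float64)
                   + (1.0 - w) * color_corrected.astype(np.float64), 0.0, 1.0)
```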

Fig. 7. Fusion in different channels and different color spaces produces different results. (a) The fusion image with the superior channels. (b) The fusion image with the inferior channels. (c) The fusion image in RGB color space. (d) The fusion image in HSV color space. The best score for the IL-NIQE (integrated local natural image quality evaluator) is in bold.

3.2 Comparison with other methods

Since many fusion methods have been used for underwater images, we first compare our fusion method with existing specialized fusion methods (the wavelet fusion [17] and the LP fusion [20]). Figure 8 presents the results obtained by these fusion techniques, measured by the IL-NIQE [36] and PCQI (patch-based contrast quality index) [37]. As can be seen in Fig. 8, the wavelet fusion [17] neither corrects the background color nor removes the color shift. The LP fusion works best for the image in the second row but not for the images in the other rows. Our method gives the best result for the images in the first, third, and fourth rows and the second-best result for the image in the second row, close to that of LP fusion. LP fusion is a multi-scale fusion with an enhancement process; it improves contrast well but also amplifies noise at each image scale. In Fig. 11, the LP fusion method shows considerable noise in the background due to this enhancement process. Our method fuses the color-corrected image and the dehazed image in a channel-wise manner, without an enhancement process. Therefore, our method achieves good results on various types of images, which means it is more generalizable.

Fig. 8. Results with different fusion methods. From left to right: The original image, the wavelet fusion [17], the LP fusion [20], and our fusion result. The reconstruction quality is evaluated by the IL-NIQE and PCQI (patch-based contrast quality index) metrics. The best scores are in bold.

To demonstrate the robustness of the reported technique to various underwater scenes compared with other underwater techniques, we select 100 unpaired low-quality images from different scenarios in each of two datasets: the UIEBD, proposed by Li et al. [23], and the EUVP. We compare the reported technique with six representative methods in Table 2, including two enhancement methods (LP fusion [20], Muniraj et al. [38]), two IFM methods (Haze-line [4], Sea-thru [3]), and two deep learning methods (FUnIE-GAN [25], Ucolor [26]). We adopt three no-reference color image quality metrics (UCIQE [39], UIQM [35], and IL-NIQE [36]), one general-purpose image contrast metric (PCQI [37]), and one subjective perception score (PS). Please note that, due to complex underwater conditions, the UIQM and UCIQE scores cannot accurately reflect the performance of underwater imaging methods in some cases: both tend to give high scores to sharp and colorful images, even if the sharpening causes artifacts or the color is wrong. The IL-NIQE is trained with underwater images. The FUnIE-GAN is trained on the EUVP dataset, and the Ucolor is trained on the UIEBD dataset.

Table 2. Underwater image quality evaluation based on PS (perception score), PCQI, UIQM (underwater image quality measure), UCIQE (underwater colour image quality evaluation), and IL-NIQE metrics.

Table 2 shows the quantitative results obtained with the PCQI, UIQM, UCIQE, IL-NIQE, and PS. The reported technique has the best results on the two datasets in terms of IL-NIQE, PCQI, and PS. It does not have the best UIQM and UCIQE scores, because it does not include a sharpening process, which may cause artifacts, and it retains some bluish appearance to make the result more realistic.

Figure 9 presents the results of these methods on real-world underwater images from the UIEBD and EUVP datasets. The method of Muniraj et al. [38] and the Haze-line [4] cannot correct color in some challenging cases. The LP fusion [20] performs well but does not dehaze. For the Sea-thru algorithm [3], we use a CNN-based monocular depth estimation technique to obtain the scene depth, which may be inaccurate in some underwater cases; however, this precisely reflects the limitation of Sea-thru [3], which depends heavily on the depth map. The deep learning-based methods (FUnIE-GAN [25], Ucolor [26]) are not robust to complex underwater scenes and do not give satisfactory results when the images fall outside their training datasets. In contrast, the reported technique recovers relatively realistic color and removes the haze, and it performs well under different underwater conditions.

Fig. 9. Visual comparison. (a) Images from the UIEBD dataset. (b) Images from the EUVP dataset. The best scores are in bold.

In some challenging cases, turbidity affects the quality of the recovered images. To simulate different turbidity conditions, we used a transparent polymethyl methacrylate water tank measuring $20~\mathrm{cm} \times 20~\mathrm{cm} \times 20~\mathrm{cm}$, filled it halfway with water, and placed a toy car in the water as an underwater scene. The light source is a stabilized quartz-tungsten lamp (350-2700 nm), and the detector is an Andor Zyla 5.5 sCMOS camera. The lamp illuminates the toy car underwater, and the detector receives the reflected light. The absorption of light by the water tank can be ignored, and the detector's internal automatic white balance is turned off. The experimental setup is shown in Fig. 10. We put 20, 40, and 60 drops of milk into the water tank. Figure 11(a) shows the recovery of our technique under the different turbidity conditions, and Fig. 11(b) shows the recovery of several methods at 60 drops of milk. The Sea-thru [3] and Haze-line [4] algorithms both yield incorrect color; the LP fusion [20] has considerable noise in its background due to the enhancement process, and there is distortion at the bottom of its restored image. The CLAHE [15] and the GDCP [11] perform poorly under highly turbid conditions. The reported technique achieves a better result without amplified noise, because it fuses with the dehazed image rather than relying on an explicit enhancement process.

Fig. 10. The experimental setup. The lamp emits light on the toy car underwater, and the detector receives the reflected light.

Fig. 11. Experimental results under different turbidity. (a) Results of different turbid conditions using the reported technique. (b) Results at 60 drops of milk condition. Approaches from left to right and top to bottom are Sea-thru [3], LP fusion [20], CLAHE [15], Haze-line [4], GDCP [11] and the reported technique. The best scores are in bold.

4. Conclusion

In conclusion, we have presented a fusion-based underwater image enhancement technique that achieves color correction and dehazing simultaneously for different underwater scenes. We first use statistics of the hue channel to classify underwater images. Our technique then fuses, in the $\operatorname {CIE} L^{*} a^{*} b^{*}$ color space, two processed images obtained from a single degraded underwater image: a dehazed image obtained using the haze-line prior, and a color-corrected image obtained using the category-specific color correction strategy. By exploiting the characteristics of the $\operatorname {CIE} L^{*} a^{*} b^{*}$ color space, we achieve not only dehazing but also color correction. We built a setup and designed experiments to validate the robustness of the technique to different scenes and turbidity levels in comparison with typical underwater methods. The results show that the reported technique achieves state-of-the-art performance.

Funding

National Natural Science Foundation of China (61991451, 61827901, 61971045); National Key Research and Development Program of China (2020YFB0505601); Graduate Interdisciplinary Innovation Project of Yangtze Delta Region Academy of Beijing Institute of Technology (Jiaxing) (GIIP2021-016).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. Y.-T. Peng and P. C. Cosman, “Underwater image restoration based on image blurriness and light absorption,” IEEE T. Image Process. 26(4), 1579–1594 (2017). [CrossRef]  

2. C.-Y. Li, J.-C. Guo, R.-M. Cong, Y.-W. Pang, and B. Wang, “Underwater image enhancement by dehazing with minimum information loss and histogram distribution prior,” IEEE T. Image Process. 25(12), 5664–5677 (2016). [CrossRef]  

3. D. Akkaynak and T. Treibitz, “Sea-thru: A method for removing water from underwater images,” in Proc. CVPR IEEE (2019), pp. 1682–1691.

4. D. Berman, D. Levy, S. Avidan, and T. Treibitz, “Underwater single image color restoration using haze-lines and a new quantitative dataset,” IEEE Trans. Pattern Anal. Mach. Intell. 43(8), 2822–2837 (2021). [CrossRef]  

5. Y. Y. Schechner and N. Karpel, “Recovery of underwater visibility and structure by polarization analysis,” IEEE J. Oceanic Eng. 30(3), 570–587 (2005). [CrossRef]  

6. T. Treibitz and Y. Y. Schechner, “Active polarization descattering,” IEEE T. Pattern Anal. 31(3), 385–399 (2008). [CrossRef]  

7. F. Liu, P. Han, Y. Wei, K. Yang, S. Huang, X. Li, G. Zhang, L. Bai, and X. Shao, “Deeply seeing through highly turbid water by active polarization imaging,” Opt. Lett. 43(20), 4903–4906 (2018). [CrossRef]  

8. X. Li, H. Hu, L. Zhao, H. Wang, Y. Yu, L. Wu, and T. Liu, “Polarimetric image recovery method combining histogram stretching for underwater imaging,” Sci. Rep. 8(1), 1–10 (2018). [CrossRef]  

9. K. O. Amer, M. Elbouz, A. Alfalou, C. Brosseau, and J. Hajjami, “Enhancing underwater optical imaging by using a low-pass polarization filter,” Opt. Express 27(2), 621–643 (2019). [CrossRef]  

10. Y. Wei, P. Han, F. Liu, and X. Shao, “Enhancement of underwater vision by fully exploiting the polarization information from the stokes vector,” Opt. Express 29(14), 22275–22287 (2021). [CrossRef]  

11. Y.-T. Peng, K. Cao, and P. C. Cosman, “Generalization of the dark channel prior for single image restoration,” IEEE T. Image Process. 27(6), 2856–2868 (2018). [CrossRef]  

12. A. Galdran, D. Pardo, A. Picón, and A. Alvarez-Gila, “Automatic red-channel underwater image restoration,” J. Vis. Commun. Image R. 26, 132–145 (2015). [CrossRef]  

13. W. Zhang, W. Liu, and L. Li, “Underwater single-image restoration with transmission estimation using color constancy,” Journal of Marine Science and Engineering 10(3), 430 (2022). [CrossRef]  

14. K. He, J. Sun, and X. Tang, “Single image haze removal using dark channel prior,” IEEE T. Pattern Anal. 33, 2341–2353 (2010). [CrossRef]  

15. M. S. Hitam, E. A. Awalludin, W. N. J. H. W. Yussof, and Z. Bachok, “Mixture contrast limited adaptive histogram equalization for underwater image enhancement,” in Proc. ICCAT IEEE (2013), pp. 1–5.

16. G. Singh, N. Jaggi, S. Vasamsetti, H. Sardana, S. Kumar, and N. Mittal, “Underwater image/video enhancement using wavelet based color correction (wbcc) method,” in IEEE Underwater Technology (UT), (IEEE, 2015), pp. 1–5.

17. A. Khan, S. S. A. Ali, A. S. Malik, A. Anwer, and F. Meriaudeau, “Underwater image enhancement by wavelet based fusion,” in Proc. USYS IEEE (IEEE, 2016), pp. 83–88.

18. Y. Wang, X. Ding, R. Wang, J. Zhang, and X. Fu, “Fusion-based underwater image enhancement by wavelet decomposition,” in Proc. ICIT IEEE (IEEE, 2017), pp. 1013–1018.

19. W. Luo, S. Duan, and J. Zheng, “Underwater image restoration and enhancement based on a fusion algorithm with color balance, contrast optimization, and histogram stretching,” IEEE Access 9, 31792–31804 (2021). [CrossRef]  

20. C. O. Ancuti, C. Ancuti, C. De Vleeschouwer, and P. Bekaert, “Color balance and fusion for underwater image enhancement,” IEEE T. Image Process. 27(1), 379–393 (2017). [CrossRef]  

21. J. Li, K. A. Skinner, R. M. Eustice, and M. Johnson-Roberson, “WaterGAN: Unsupervised generative network to enable real-time color correction of monocular underwater images,” IEEE Robot. Autom. Mag. Lett. 3, 387–394 (2017). [CrossRef]  

22. C. Fabbri, M. J. Islam, and J. Sattar, “Enhancing underwater imagery using generative adversarial networks,” in Proc. ICRA IEEE (IEEE, 2018), pp. 7159–7165.

23. C. Li, C. Guo, W. Ren, R. Cong, J. Hou, S. Kwong, and D. Tao, “An underwater image enhancement benchmark dataset and beyond,” IEEE T. Image Process. 29, 4376–4389 (2019). [CrossRef]  

24. J. Lu, N. Li, S. Zhang, Z. Yu, H. Zheng, and B. Zheng, “Multi-scale adversarial network for underwater image restoration,” Opt. Laser Technol. 110, 105–113 (2019). [CrossRef]  

25. M. J. Islam, Y. Xia, and J. Sattar, “Fast underwater image enhancement for improved visual perception,” IEEE Robot. Autom. Mag. Lett. 5(2), 3227–3234 (2020). [CrossRef]  

26. C. Li, S. Anwar, J. Hou, R. Cong, C. Guo, and W. Ren, “Underwater image enhancement via medium transmission-guided multi-color space embedding,” IEEE T. Image Process. 30, 4985–5000 (2021). [CrossRef]  

27. D. Berman, S. Avidan, and T. Treibitz, “Non-local image dehazing,” in Proc. CVPR IEEE (2016), pp. 1674–1682.

28. S. K. Dhara, M. Roy, D. Sen, and P. K. Biswas, “Color cast dependent image dehazing via adaptive airlight refinement and non-linear color balancing,” IEEE T. Circ. Syst. Vid. 31(5), 2076–2081 (2020). [CrossRef]  

29. G. Buchsbaum, “A spatial processor model for object colour perception,” J. Franklin Inst. 310(1), 1–26 (1980). [CrossRef]  

30. E. H. Land, “The retinex theory of color vision,” Sci. Am. 237(6), 108–128 (1977). [CrossRef]  

31. G. D. Finlayson and E. Trezzi, “Shades of gray and colour constancy,” in Color Imag. Conf., vol. 2004 (Society for Imaging Science and Technology, 2004), pp. 37–41.

32. J. Van De Weijer, T. Gevers, and A. Gijsenij, “Edge-based color constancy,” IEEE T. Image Process. 16(9), 2207–2214 (2007). [CrossRef]  

33. X. Fu, P. Zhuang, Y. Huang, Y. Liao, X.-P. Zhang, and X. Ding, “A retinex-based enhancing approach for single underwater image,” in Proc. ICIP IEEE (IEEE, 2014), pp. 4572–4576.

34. A. Gijsenij and T. Gevers, “Color constancy using natural image statistics and scene semantics,” IEEE T. Pattern Anal. 33(4), 687–698 (2010). [CrossRef]  

35. K. Panetta, C. Gao, and S. Agaian, “Human-visual-system-inspired underwater image quality measures,” IEEE J. Oceanic Eng. 41(3), 541–551 (2015). [CrossRef]  

36. L. Zhang, L. Zhang, and A. C. Bovik, “A feature-enriched completely blind image quality evaluator,” IEEE T. Image Process. 24(8), 2579–2591 (2015). [CrossRef]  

37. S. Wang, K. Ma, H. Yeganeh, Z. Wang, and W. Lin, “A patch-structure representation method for quality assessment of contrast changed images,” IEEE Signal Proc. Let. 22(12), 2387–2390 (2015). [CrossRef]  

38. M. Muniraj and V. Dhandapani, “Underwater image enhancement by color correction and color constancy via retinex for detail preserving,” Comput. Electr. Eng. 100, 107909 (2022). [CrossRef]  

39. M. Yang and A. Sowmya, “An underwater color image quality evaluation metric,” IEEE T. Image Process. 24(12), 6062–6071 (2015). [CrossRef]  
