
Local linear model and restoration method of underwater images

Open Access

Abstract

When light propagates in water, it is scattered and absorbed by the water body and suspended particles, resulting in blurred images and color distortion. To improve the quality of underwater imaging, a local linear model and a corresponding restoration method for underwater images are proposed in this paper. Based on the distance-invariant feature of local regions, a local linear model is established whose slope and intercept represent the transmission rate and the backscattered light of the local image region, respectively. With this model, the problem of underwater image restoration is transformed into the problem of solving the slope and intercept of linear equations. To solve the linear imaging model, the concept of the local special-value is defined in this paper, and several fitting points can be obtained through the special-value. The linear model is then solved by fitting, which completes the restoration of underwater images. Restoration results on different underwater scenes verify that the linear model is effective in improving image clarity and removing color distortion.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

The exploitation of marine resources and underwater target detection are currently active research topics, and obtaining clear underwater scenes is very important for human underwater activities. However, the marine environment is complex and changeable. When light propagates in water, it is scattered and absorbed by the water body and suspended particles, degrading underwater image quality, mainly in the form of color distortion and image blurriness. Such low-quality images hinder further underwater vision tasks. Therefore, research on methods that can improve the quality of underwater images is of great significance and value.

Scholars have long studied the restoration of underwater images and have proposed a series of underwater imaging methods. Restoration methods based on the underwater imaging physical model consider the light transmission process in the water body; they are broadly applicable and have received extensive attention [1]. Such methods construct a restoration model by studying the formation mechanism of underwater images, and the non-degraded target image can then be inverted by estimating the parameters of the model. The model proposed by McGlamery [2] describes the underwater imaging process in detail and has become a classic model in the field of underwater image restoration. In this model, the light received by the imaging system consists of three parts: the direct attenuation component, the forward-scattered light, and the backscattered light. To simplify the calculation, most methods use a simplified imaging model, which reflects that the intensity of underwater images is mainly affected by the transmission rate and the backscattered light, with the backscattered light determined by the transmission rate and the background light. Solving for these two parameters, the transmission rate and the background light, is therefore the key problem of restoration methods [3]. Computing the transmission rate from prior information is a common approach, mostly rooted in the dark channel prior [4]; examples include the underwater dark channel prior (UDCP) proposed by Drews-Jr et al. [5] and the red channel prior (RCP) proposed by Galdran et al. [6]. Priors suited to underwater scenes are obtained by adapting the atmospheric dark channel prior, from which the transmission rate is solved. When estimating the background light, these methods usually select the brightest intensity of the original image directly, or the image intensity at the position of the brightest pixel of the dark channel. Other kinds of prior information are also commonly used for underwater image restoration. Liu et al. [7] proposed a more efficient adaptive attenuation-curve prior, with which the degradation process of underwater images can be simulated using attenuation curves. Besides prior information, exploiting the optical properties of underwater images is another common approach. Peng et al. [8] estimated the transmission rate based on both image blurriness and light absorption. Their method picks three background light candidates from the top 0.1% blurry pixels in the input image, the lowest-variance region, and the largest-blurriness region, and then determines the background light for each color channel by a weighted combination of the darkest and brightest candidates. Xie et al. [9] estimated the transmission rate of the red channel with the RCP and the background light with a quad-tree segmentation method; the three-channel target radiance is inverted by adding the background light to each channel directly. Berman et al. [10] used the edge map of the scene to determine the background light and the haze-line prior to estimate the transmission rate of the blue channel. The attenuation ratios of the blue-red and blue-green color channels were then selected by enumeration so that the restoration result best satisfies the Gray-World assumption [11]. From the blue-channel transmission rate and these attenuation ratios, the transmission rates of the other channels were obtained.
Some scholars have proposed combining image enhancement with image restoration. Li et al. [12] first performed adaptive color correction on the image and then removed scattering using the haze-line method. Liu et al. [13] estimated the background light based on a gray-scale closing operation, used different methods to estimate transmission rates, and obtained the final transmission rate by fusing the estimates.

On the other hand, although most underwater imaging models in existing research treat the transmission rates of the direct attenuation component and the backscattered light as the same, some scholars believe that this simplified model is only a rough approximation of the underwater imaging process, and have proposed restoration methods that treat the two transmission rates as different. For example, Akkaynak and Treibitz [14] proved, through a derivation of the optical transmission mechanism and field experiments in the natural ocean, that the wideband attenuation coefficients of the direct attenuation component and the backscattered light are different; they also proposed a revised underwater imaging model and corresponding restoration methods. Zhou et al. [15] likewise treated the two transmission rates as different, and further removed the requirement of an input depth map present in Akkaynak's method. Liu et al. [16] considered the influence of both absorption and scattering on the attenuation of the direct component when establishing their model, but only the influence of scattering on the backscattered light. Therefore, in actual image restoration, attention must be paid to the possibility that the two transmission rates differ.

In general, the intensity of an underwater image mainly depends on the transmission rate and the backscattered light in the imaging model, and the non-degraded image can be inverted by solving these two key parameters. Existing methods solve for the transmission rate and the backscattered light separately, using different techniques depending on the image features. In addition, some scholars have raised the problem that the transmission rates of the direct component and the backscattered component in the imaging model differ, which departs from traditional underwater image restoration methods. Instead of solving the imaging model parameters separately, it would be desirable to capture the correlation between the transmission rate and the backscattered light at the level of the imaging model, and to solve the two parameters simultaneously based on that correlation. To this end, a local linear model and restoration method for underwater images are proposed in this paper. Based on the distance invariance within local regions of underwater images, a linear imaging model is established that reflects the intrinsic relationship between the transmission rate and the backscattered light: the slope and intercept of the model represent the transmission rate and the backscattered light of the local region, respectively. To solve the linear equation, the concept of the local special-value is defined; several fitting points are obtained from the special-value, and the linear model is then solved by fitting. From the slope and intercept, the transmission rate and the backscattered light are obtained at the same time, which reduces the complexity of solving the parameters separately. Moreover, the local linear model describes the underwater imaging mechanism in more detail: the whole backscattered light is regarded as a constant, and only the transmission rate of the direct component needs to be solved, which avoids the problem that the transmission rates of the direct attenuation component and the backscattered light may differ. The local linear model thus provides a new perspective for underwater image restoration.

2. Local linear model for underwater imaging

2.1 Transmission process of light in water

The water environment contains complex components such as water molecules, suspended particles, and organic matter [17]. The target radiance is absorbed and scattered by the water medium while propagating to the sensor, which reduces the quality of underwater images.

The transmission of light in a scattering medium can be described by the radiative transfer equation (RTE) [18]. When the target light travels from the target to the imaging device, its radiance mainly depends on two factors. One is the direct attenuation of the scene radiance along the transmission path, caused by the water medium. The other is the energy introduced into the transmission path by elastic scattering between the ambient light and the scattering medium. Since image degradation is mainly caused by backscattered light, the effect of forward-scattered light is not considered here [19].

On the basis of the general RTE, we simplify the problem by considering only the energy change of the beam in the horizontal direction. In the water medium, the general RTE describes the energy change of light transmitting in any direction underwater. The depth varies along an arbitrary transmission path, and the ambient light intensity at a given position is closely related to depth, so the energy interference $I_w^{\ast}(z,d)$ generated by the ambient light also varies along the path [20]. The angle between the transmission direction and the vertical direction would then need to be considered as an additional factor. To simplify the modeling, the angle is fixed at $90^{\circ}$, so the underwater depth d is the same along the entire transmission path. The change of the energy interference with the transmission distance z can then be ignored, which facilitates describing the energy change of the target light as it transmits horizontally through the water body. The model used in the proposed method to describe the variation of light energy over the entire transmission path is shown in Fig. 1.

Fig. 1. The model of light transmission in water.

Figure 1 depicts the energy changes as light transmits in water, including the attenuation of light intensity caused by absorption and scattering in the water, and the increase in light intensity caused by scattering of ambient light into the path. $dz$ denotes a water-body element, and the change in light intensity over this element is given in Fig. 1. Based on this energy description, the expression for the target radiance at any distance can be deduced from the energy change over the water-body element.

The target radiance ${I_w}(z,d)$ describes the energy intensity after transmitting a distance z in the horizontal direction at depth d. For a homogeneous, source-free water body, the change in the target light energy over a path element $dz$ at horizontal distance z is the sum of the direct attenuation and the path-function interference [21]:

$$\frac{dI_w(z,d)}{dz} = -c_w I_w(z,d) + I_w^{\ast}(z,d),\quad w \in \{R,G,B\}$$
where $c_w$ is the attenuation coefficient of the water body, which describes the attenuation of the beam radiance per unit path length; its value is the sum of the absorption coefficient $a_w$ and the scattering coefficient $b_w$. $I_w^{\ast}(z,d)$ is the increase in radiance per unit path length at distance z, called the path function [20], and is related to the depth d. Because the path function at the same depth d can be regarded as constant everywhere, it is simplified to $I_w^{\ast}(d)$. The subscript w indexes the color channel, i.e., the wavelength band.

Equation (1) is a first-order linear differential equation. Multiplying both sides by the integrating factor $\exp(c_w z)$ [21] gives Eq. (2):

$$\frac{dI_w(z,d)}{dz}\exp(c_w z) = -c_w I_w(z,d)\exp(c_w z) + I_w^{\ast}(d)\exp(c_w z),\quad w \in \{R,G,B\}$$

Rearranging Eq. (2), the left-hand side can be expressed as the derivative of $I_w(z,d)\exp(c_w z)$ with respect to the distance z:

$$\frac{d}{dz}\left(I_w(z,d)\exp(c_w z)\right) = I_w^{\ast}(d)\exp(c_w z),\quad w \in \{R,G,B\}$$

Multiplying both sides of Eq. (3) by $dz$ and integrating with respect to the path variable $z'$ from $z' = 0$ to $z' = z$ gives the energy change over the entire transmission path:

$$\int_0^z d\left(I_w(z',d)\exp(c_w z')\right) = \int_0^z I_w^{\ast}(d)\exp(c_w z')\,dz',\quad w \in \{R,G,B\}$$

Solving Eq. (4) gives:

$$I_w(z,d)\exp(c_w z) - I_w(0,d) = I_w^{\ast}(d)\int_0^z \exp(c_w z')\,dz' = \frac{I_w^{\ast}(d)}{c_w}\left(\exp(c_w z) - 1\right),\quad w \in \{R,G,B\}$$

Further rearrangement yields the expression for the target radiance intensity ${I_w}(z,d)$ after propagating the distance z horizontally [21]:

$$I_w(z,d) = I_w(0,d)\exp(-c_w z) + \frac{I_w^{\ast}(d)}{c_w}\left(1 - \exp(-c_w z)\right),\quad w \in \{R,G,B\}$$
where the path function $I_w^{\ast}(d)$ is generated by scattering between the water medium and the ambient light arriving from all directions; its intensity is:
$$I_w^{\ast}(d) = \int E_w(d,\phi)\,\beta_w(\phi)\,d\Omega = E_w(d)\times 2\pi\int_0^{\pi} \beta_w(\phi)\sin\phi\,d\phi = b_w E_w(d),\quad w \in \{R,G,B\}$$
where $\beta_w(\phi)$ is the volume scattering function, which gives the ratio of scattered intensity to incident intensity per unit path length and unit solid angle along a given direction; $E_w(d,\phi)$ is the intensity of the incident light along the direction $\phi$ at depth d. At the same depth d, the incident intensity from different directions can be expressed as a constant value $E_w(d)$. Based on the above equations, when the target radiance ${I_w}(0,d)$ reaches the camera lens after propagating the distance z horizontally through the water body, the total radiance intensity is:
$$\begin{aligned} F_w(z) &= D_w(z) + B_w(z) = I_w(0,d)\exp(-c_w z) + \frac{b_w E_w(d)}{c_w}\left(1 - \exp(-c_w z)\right)\\ &= J_w\exp(-c_w z) + \frac{b_w E_w(d)}{c_w}\left(1 - \exp(-c_w z)\right),\quad w \in \{R,G,B\} \end{aligned}$$
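
As a quick numeric illustration of Eq. (8), the following sketch shows the direct component decaying and the backscatter saturating toward $b_w E_w(d)/c_w$ as distance grows; the coefficient values are our own placeholders, not measured water properties.

```python
import numpy as np

# Illustrative sketch of Eq. (8); the coefficients below are invented
# placeholders, not measured optical properties of any real water body.
J = 0.8            # target radiance J_w at z = 0
a, b = 0.05, 0.15  # absorption and scattering coefficients (1/m)
c = a + b          # attenuation coefficient c_w = a_w + b_w
E = 1.0            # ambient irradiance E_w(d) at the fixed depth d

z = np.linspace(0.0, 40.0, 5)
direct = J * np.exp(-c * z)                       # D_w(z): decays with distance
backscatter = (b * E / c) * (1 - np.exp(-c * z))  # B_w(z): saturates at b*E/c
total = direct + backscatter                      # F_w(z) of Eq. (8)

for zi, d_, bs, t_ in zip(z, direct, backscatter, total):
    print(f"z={zi:5.1f} m  direct={d_:.3f}  backscatter={bs:.3f}  total={t_:.3f}")
```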

2.2 Linear imaging model of the local region

In practical applications, the pixel intensity of an image depends on several factors, including the target surface reflectivity $\rho_w$, the incident light intensity $E_w$, and the camera sensor response $R_w$ [16]. For a Lambertian surface, the reflected intensity is the product of illumination and reflectivity [22], that is, $J_w(\lambda) = E_w(d,\lambda)\rho_w(\lambda)$. Taking these factors into account and integrating the signal of Eq. (8) over the visible spectrum gives the image intensity at a horizontal distance z from the target [16]:

$$I_w(z,d) = \kappa_w\int_V E_w(d,\lambda)\,\rho_w(\lambda)\exp(-c_w(\lambda)z)\,R_w(\lambda)\,d\lambda + \kappa_w\int_V \frac{b_w(\lambda)E_w(d,\lambda)}{c_w(\lambda)}\left(1 - \exp(-c_w(\lambda)z)\right)R_w(\lambda)\,d\lambda$$
where V = [400 nm, 760 nm] is the visible spectrum, and $\kappa_w$ is a scalar accounting for image exposure and camera pixel geometry.

When working with wideband cameras, the attenuation can be expressed with wideband channels, and Eq. (9) simplifies to [23]:

$$I_w(z,d) = J_w^{\prime}\exp(-c_w^{\prime}z) + B_w^{\prime\infty}\left(1 - \exp(-c_w^{\prime}z)\right)$$

The first term of the equation is the direct attenuation component of the imaging model, and the second term is the backscattered light. $c_w^{\prime}$ is the effective wideband attenuation coefficient. The wideband attenuation coefficient is defined from the imaging point of view: it is a function of the camera's spectral response, the water's optical properties, the imaging distance, etc. [24], and thus differs from the traditional attenuation coefficient, which is an inherent optical property of the water alone. The attenuation coefficients of the two terms are usually considered the same, but some scholars argue that they differ because their influencing factors differ, and some consider both absorption and scattering for the direct component but only scattering for the backscattered light. To cover these cases, a general equation with two distinct attenuation coefficients is used to express the imaging process [14]:

$$I_w(z,d) = D_w^{\prime}(z,d) + B_w^{\prime}(z,d) = J_w^{\prime}\exp(-c_{1w}^{\prime}z) + B_w^{\prime\infty}\left(1 - \exp(-c_{2w}^{\prime}z)\right)$$
where $c_{1w}^{\prime}$ is the wideband attenuation coefficient of the direct component and $c_{2w}^{\prime}$ is that of the backscattered component.

The scene radiance $E_w(d,\lambda)$ is related to the water depth d and can be expressed as $E_w(0,\lambda)\exp(-K_w(\lambda)d)$, which describes the intensity remaining from the surface light $E_w(0,\lambda)$ after propagating a distance d vertically through the water body [19]. Then Eq. (12) and Eq. (13) are obtained [14]:

$$J_w^{\prime} = \kappa_w\int_V E_w(0,\lambda)\exp(-K_w(\lambda)d)\,\rho_w(\lambda,x)\,R_w(\lambda)\,d\lambda$$
$$B_w^{\prime\infty} = \kappa_w\int_V \frac{b_w(\lambda)E_w(0,\lambda)\exp(-K_w(\lambda)d)}{c_w(\lambda)}R_w(\lambda)\,d\lambda$$

The light intensity captured by the sensor is thus a function of the horizontal scene distance z and the vertical depth d. Within a local region of an underwater image, the distance z and the water depth d at each pixel position can be regarded as approximately fixed; the light intensity $E_w(0,\lambda)$ at the water surface is the same everywhere for the same image; and since the integration band is fixed and the spectral response function and the optical properties of the water body depend only on wavelength, $c_{1w}^{\prime}$ and $c_{2w}^{\prime}$ are also fixed within the local region [24]. The pixel intensity in a local region $\varOmega$ centered on pixel y can therefore be written as:

$$D_w^{\prime}(x) = \kappa_w\int_V E_w(0,\lambda)\exp(-K_w(\lambda)d)\,\rho_w(\lambda,x)\,R_w(\lambda)\,d\lambda \times \exp(-c_{1w}^{\prime}z) = J_w^{\prime}(x)\times k_w,\quad x \in \varOmega(y)$$
$$B_w^{\prime}(x) = \kappa_w\int_V \frac{b_w(\lambda)E_w(0,\lambda)\exp(-K_w(\lambda)d)}{c_w(\lambda)}R_w(\lambda)\,d\lambda \times \left(1 - \exp(-c_{2w}^{\prime}z)\right) = B_w^{\prime},\quad x \in \varOmega(y)$$
where $B_w^{\prime}$ and $k_w = \exp(-c_{1w}^{\prime}z) = t_w$ are constants within the local region. Finally, the linear imaging model of the local region is obtained:
$$I_w(x) = k_w J_w^{\prime}(x) + B_w^{\prime},\quad x \in \varOmega(y)$$
Equation (16) shows a linear relationship between the total intensity ${I_w}(x)$ and the target radiance $J_w^{\prime}(x)$ in the local region, establishing a model-level correlation between the transmission rate and the backscattered light: they are, respectively, the slope and the intercept of the local linear model. Therefore, for an underwater degraded image, once the local linear model is solved, the transmission rate and the backscattered light are obtained simultaneously from the slope and intercept, and the target radiance can then be inverted. This approach does not require solving for the wideband attenuation coefficients, and it no longer matters whether the transmission rates of the direct attenuation component and the backscattered light are equal. By means of the local linear model, the problem of image restoration is finally transformed into the problem of solving the linear model.
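
To make the role of the slope and intercept concrete, here is a minimal numeric sketch of Eq. (16) and its inversion for a single local region; the patch contents and the values of $k_w$ and $B_w^{\prime}$ are illustrative assumptions only.

```python
import numpy as np

# Forward model and inversion of Eq. (16) for one local region.
# k_w and B_w are assumed already estimated; the values are illustrative.
rng = np.random.default_rng(0)
J_patch = rng.uniform(0.2, 0.9, size=(25, 25))  # hypothetical target radiance
k_w, B_w = 0.6, 0.3                             # transmission rate, backscatter

I_patch = k_w * J_patch + B_w                   # degraded intensity, Eq. (16)
J_restored = (I_patch - B_w) / k_w              # inversion once k_w, B_w known

assert np.allclose(J_restored, J_patch)
```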

3. Solving method of the local linear model

3.1 Method of obtaining the local special-value

Linear fitting is a popular method for solving a linear model; in theory, the model can be solved as soon as at least two fitting points are available. Combined with Eq. (16), the fitting points can be written as $(J_w^{\prime i}, I_w^i)$, $i \ge 2$, where the abscissa is the target radiance and the ordinate is the corresponding underwater image intensity. However, because the target radiance cannot be read directly from an underwater image, obtaining the abscissa $J_w^{\prime i}$ is difficult. It is well known that the quality of underwater images improves after normalization. Through a theoretical analysis of normalization and a statistical study on a large number of images, this paper defines the concept of the local special-value $J_w^{\prime sp}$, which is approximately equal to the target radiance $J_w^{\prime i}$. From the position of the pixel where $J_w^{\prime sp}$ is located, the ordinate $I_w^{sp}$ can be read from the underwater image, so one fitting point $(J_w^{\prime sp}, I_w^{sp})$ is obtained directly, and further fitting points can be obtained with the help of the local special-value. The concept and acquisition of the local special-value $J_w^{\prime sp}$ are introduced next.

Normalization is a common data transformation. It maps a set of data from its original interval $[{X^{min}},{X^{max}}]$ to a fixed interval, usually $[0,1]$, via a linear transformation that preserves the order of the data. In general:

$$X^{new} = \frac{X^i - X^{min}}{X^{max} - X^{min}} = \frac{1}{X^{max} - X^{min}}X^i - \frac{X^{min}}{X^{max} - X^{min}},\quad X^i \in [X^{min}, X^{max}]$$

A typical case of the relationship curve between the data before and after normalizing is shown in Fig. 2(a).

Fig. 2. The distribution curves before and after normalizing. (a) Changes in the numerical distribution curves. (b) Distribution curve of the absolute data difference before and after normalizing.

Figure 2(a) shows that the data distribution curves before and after normalizing intersect. Taking the intersection as the center, the vertical distance between the two lines grows as the distance from the intersection increases. Computing the absolute difference between the data before and after normalizing gives Eq. (18):

$$\left|X^{new} - X^i\right| = \left|\frac{X^i - X^{min}}{X^{max} - X^{min}} - X^i\right| = \left|\left(\frac{1}{X^{max} - X^{min}} - 1\right)X^i - \frac{X^{min}}{X^{max} - X^{min}}\right|,\quad X^i \in [X^{min}, X^{max}]$$

In the above equation, the result is $|-X^{min}|$ when $X^i = X^{min}$, and $|1 - X^{max}|$ when $X^i = X^{max}$. The curve of the absolute difference is therefore as shown in Fig. 2(b).
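
For completeness, the intersection in Fig. 2(a) can be written in closed form by treating the special-value as the fixed point of the normalization map; the following short derivation is ours, under the assumption $X^{max} - X^{min} < 1$ (which holds for the narrow intensity ranges of local regions within $[0,1]$):

$$X^{sp} = \frac{X^{sp} - X^{min}}{X^{max} - X^{min}} \;\Rightarrow\; X^{sp}\left(1 - X^{max} + X^{min}\right) = X^{min} \;\Rightarrow\; X^{sp} = \frac{X^{min}}{1 - (X^{max} - X^{min})}$$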

It is thus a universal property of normalization that some value in the normalized data is almost equal to its value before normalizing. We define this value as the special-value of the normalized data, shown as $X^{sp}$ in Fig. 2. To verify this conclusion, we conducted experiments on 8000 different local regions of 400 different natural images collected from internet search engines. For each local region, the minimum absolute difference between pixel intensities before and after normalizing was computed; its probability distribution is shown in Fig. 3.

Fig. 3. Probability distribution of the absolute minimum difference before and after normalizing in the local region. (a) Red channel. (b) Green channel. (c) Blue channel.

For all three RGB channels, about 60% of the minimum differences are below 0.0005, over 70% are below 0.001, and over 80% are below 0.002. The minimum difference can therefore be regarded as approximately zero.
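
A hedged sketch of this statistical check for a single local region follows; the random patch is a stand-in for a 25×25 region cropped from a natural image.

```python
import numpy as np

# Measure the smallest absolute intensity change caused by normalizing one
# local region (the quantity whose distribution Fig. 3 reports).
# `patch` is a placeholder for a 25x25 region of a natural image.
rng = np.random.default_rng(1)
patch = rng.uniform(0.35, 0.55, size=(25, 25))

lo, hi = patch.min(), patch.max()
normalized = (patch - lo) / (hi - lo)            # Eq. (17) applied to the region

min_abs_diff = np.abs(normalized - patch).min()  # ~0 at the special-value pixel
print(f"minimum |after - before| = {min_abs_diff:.5f}")
```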

Extending from general data to underwater image intensities, the local region $\Omega(y)$ of an underwater image is normalized as follows:

$$I_w^{en}(x) = \frac{I_w(x) - I_w^{min}}{I_w^{max} - I_w^{min}},\quad x \in \varOmega(y)$$
where $I_w^{en}(x)$ is the normalized result, and $I_w^{max}$ and $I_w^{min}$ are the maximum and minimum pixel intensities in the local region, respectively. Substituting the local linear model of Eq. (16) into Eq. (19) gives Eq. (20):
$$I_w^{en}(x) = \frac{k_w J_w^{\prime}(x) + B_w^{\prime} - I_w^{min}}{I_w^{max} - I_w^{min}},\quad x \in \varOmega(y)$$

Within the local region, the pixel intensity varies only with the target radiance, so:

$$I_w^{en}(x) = \frac{k_w J_w^{\prime}(x) + B_w^{\prime} - (k_w J_w^{\prime min} + B_w^{\prime})}{k_w J_w^{\prime max} + B_w^{\prime} - (k_w J_w^{\prime min} + B_w^{\prime})} = \frac{J_w^{\prime}(x) - J_w^{\prime min}}{J_w^{\prime max} - J_w^{\prime min}},\quad x \in \varOmega(y)$$

It follows that the local normalized result $I_w^{en}(x)$ is also the normalized result $J_w^{\prime en}(x)$ of the target scene $J_w^{\prime}(x)$ in the local region:

$$I_w^{en}(x) = \frac{J_w^{\prime}(x) - J_w^{\prime min}}{J_w^{\prime max} - J_w^{\prime min}} = J_w^{\prime en}(x),\quad x \in \varOmega(y)$$

According to the earlier conclusion that normalized data contain a special-value, there is likewise a value $J_w^{\prime sp}$ in $J_w^{\prime en}(x)$ that can be regarded as equal to the pixel intensity of the natural scene before normalizing. In this way, after normalizing the local region of an underwater image, there exists a pixel intensity equal to the target radiance $J_w^{\prime}$, that is, the local special-value. Next, a method is introduced to locate the special-value based on the gradient distribution characteristics of the normalized local region.

What we obtain is the normalized image of a local region of a natural scene, which is a linear transformation of the original pixel intensities: the original interval $[J_w^{\prime min}, J_w^{\prime max}]$ is stretched to $[0,1]$. This change of pixel intensity alters the original statistical characteristics of the image to some extent; the gradient characteristic of the image is used here.

Usually, in a local region of a natural image, the pixel intensities span a narrow range and adjacent pixels have similar intensities, so the intensity gradient is small. Observing the gradient histogram of a local region of a natural image (Fig. 4), the gradient is "small" overall: its distribution is concentrated near zero, and the frequency decreases toward zero as the gradient grows. Normalization stretches the image gradient, broadening its distribution, dispersing its frequencies, and weakening the "small" characteristic. Note that for some local regions with rich texture, the gradient does not appear "small" because the pixel intensities span a wide range; however, the change in pixel intensity before and after normalizing is then also small, so any pixel intensity in such a region can be regarded approximately as the local special-value, and the proposed method still applies.

Fig. 4. Histogram distribution of the gradient before and after local normalizing.

We observed the distribution of pixel positions whose values differ little before and after normalizing in local regions of natural images. The pixel intensities at these positions are the local special-value and intensities close to it. Their positions are distributed with a degree of spatial continuity, which also reflects the continuity of pixel intensities in local regions with simple texture: the intensity at one location is usually close to the intensities of the surrounding pixels. Since there may be only a few such pixels, it is difficult to locate the pixel with the smallest difference directly. The task can instead be simplified to finding the rectangular block most likely to contain the local special-value and intensities close to it, and taking the mean pixel intensity of that block as an approximation of the local special-value of the region.

From the above, a local special-value $J_w^{\prime sp}$ exists after the local region of an underwater image is normalized. This pixel intensity is almost unchanged by normalization and is approximately equal to the pixel intensity of the non-degraded target image $J_w^{\prime}$; it serves as the abscissa of a fitting point of the local linear model. Within the local region, searching for the special-value can be transformed into finding the rectangular block containing the most pixels whose intensities are close to the local special-value. Since the change in pixel intensity before and after normalization grows with distance from the local special-value, the pixels whose intensities change little best preserve the image characteristics before normalizing. The rectangular block can therefore be found by searching for the area where pixel values change least before and after normalization, i.e., where the statistical characteristics of the natural image are retained. To find the target rectangular block, an evaluation criterion is needed. The gradient is a commonly used image statistic, and local normalization widens the gradient distribution of a natural image and weakens its "small" characteristic, so the gradient distribution is used as the criterion: the gradient distribution inside the target rectangular block should still be "small". By constructing an objective function reflecting this "small" gradient characteristic, the position of the target rectangular block can be determined and the local special-value obtained. Because the zero-norm of a matrix counts its non-zero elements, and the gradient is concentrated near zero, the zero-norm of the block gradient captures the extent to which the block gradient tends to zero. Since different blocks may have the same zero-norm, the average gradient of the block is added, and the final objective function is the sum of the zero-norm of the block gradient and the average gradient of the block. The entire local region is traversed with a scanning window, and the block with the smallest objective value is the target rectangular block:

$$I_{enw}^{\varOmega\_sp}(x) = \arg\min_{I_{enw}^{\varOmega} \in I_{enw}}\left[\left\|\nabla I_{enw}^{\varOmega}(x)\right\|_0 + \mathrm{aver}\left(\nabla I_{enw}^{\varOmega}(x)\right)\right]$$
where $I_{enw}^{\varOmega}(x)$ denotes a rectangular block traversed within the normalized local region. After obtaining the target rectangular block, the final local special-value is:
$$J_w^{\prime sp}(x) = \mathrm{aver}\left(I_{enw}^{\varOmega\_sp}(x)\right)$$
After obtaining the local special-value $J_w^{\prime sp}$, the corresponding underwater image intensity $I_w^{sp}$ can be read from the same position in the local region of the underwater image. Thus a fitting point $(J_w^{\prime sp}, I_w^{sp})$ for solving the linear model is obtained. To complete the linear fit, more fitting points are needed.
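
A sketch of the special-value search of Eqs. (23)-(24) follows, assuming a 3×3 block scanned over the normalized 25×25 region; the gradient operator, the zero-threshold, and the tie-breaking rule are our assumptions, since the text does not fix them.

```python
import numpy as np

def local_special_value(norm_patch, block=3, zero_tol=1e-6):
    """Scan `norm_patch` (a normalized local region) with a block x block
    window and return the mean intensity of the block minimizing
    zero-norm(gradient) + mean(gradient), per Eqs. (23)-(24),
    together with the block's top-left position."""
    gy, gx = np.gradient(norm_patch)
    grad = np.abs(gy) + np.abs(gx)            # simple gradient magnitude
    h, w = norm_patch.shape
    best_score, best_ij = np.inf, (0, 0)
    for i in range(h - block + 1):
        for j in range(w - block + 1):
            g = grad[i:i + block, j:j + block]
            score = np.count_nonzero(g > zero_tol) + g.mean()
            if score < best_score:
                best_score, best_ij = score, (i, j)
    i, j = best_ij
    return norm_patch[i:i + block, j:j + block].mean(), best_ij  # Eq. (24)
```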

3.2 Restoring underwater image by linear fitting

The law of normalization shows that, centered on the local special-value $J_w^{\prime sp}$ in the normalized local region, the closer a pixel intensity $J_w^{\prime en}$ is to $J_w^{\prime sp}$, the smaller the difference $|J_w^{\prime en} - J_w^{\prime}|$, i.e., the closer $J_w^{\prime en}$ is to $J_w^{\prime}$. Therefore, additional values approximately equal to the target radiance $J_w^{\prime}$ can be obtained from pixel intensities near the local special-value $J_w^{\prime sp}$ (the adjustment range of the fitting values is shown in Fig. 5).

Fig. 5. Schematic diagram of the linear fitting value determination method.

Among the normalized pixel intensities $J_w^{\prime en}$ of the local region, the local special-value $J_w^{\prime sp}$ is taken as the center, and 5% of the number of pixels in the local region is taken as the step among the nearby sorted pixel intensities. Two intensities are then selected at this same step on each side of $J_w^{\prime sp}$, in the directions greater than and less than $J_w^{\prime sp}$, giving four preselected intensities, denoted $J_w^{\prime 1pre}$, $J_w^{\prime 2pre}$, $J_w^{\prime 3pre}$, $J_w^{\prime 4pre}$ (the rose-red dots in Fig. 5). To bring the preselected intensities closer to the target radiance, intensities larger than the special-value are multiplied by $1 - \varepsilon$ ($0 < \varepsilon < 1$), and intensities smaller than the special-value are multiplied by $1 + \varepsilon$. This yields four intensities to serve as abscissas of the fitting points, denoted $J_w^{\prime 1}$, $J_w^{\prime 2}$, $J_w^{\prime 3}$, $J_w^{\prime 4}$ (the green dots in Fig. 5). From the pixel positions of the preselected intensities, four intensities are likewise read from the underwater image as the ordinates $I_w^1$, $I_w^2$, $I_w^3$, $I_w^4$. Together with the point obtained directly from the local special-value, five fitting points $(J_w^{\prime 1}, I_w^1)$, $(J_w^{\prime 2}, I_w^2)$, $(J_w^{\prime sp}, I_w^{sp})$, $(J_w^{\prime 3}, I_w^3)$, $(J_w^{\prime 4}, I_w^4)$ are determined for solving the local linear model by fitting, as sketched below. For simplicity, the same $\varepsilon$ is used for all local regions of the entire image.
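
A sketch of this fitting-point construction follows; the sorting, index clipping, and the exact definition of the 5% step are our reading of the procedure above, not a verbatim reproduction of the authors' implementation.

```python
import numpy as np

def fitting_points(norm_patch, raw_patch, J_sp, eps=0.03):
    """Build the five fitting points (J'_i, I_i) for one local region.
    `norm_patch` is the normalized region, `raw_patch` the corresponding
    underwater intensities, `J_sp` the local special-value."""
    flat_norm = norm_patch.ravel()
    flat_raw = raw_patch.ravel()
    order = np.argsort(flat_norm)                 # sort intensities once
    sp_rank = int(np.abs(flat_norm[order] - J_sp).argmin())
    step = max(1, int(0.05 * flat_norm.size))     # 5% of the pixel count

    xs, ys = [J_sp], [flat_raw[order[sp_rank]]]
    for offset in (-2 * step, -step, step, 2 * step):
        rank = int(np.clip(sp_rank + offset, 0, flat_norm.size - 1))
        idx = order[rank]
        # nudge the abscissa toward the target radiance with epsilon
        scale = (1 - eps) if flat_norm[idx] > J_sp else (1 + eps)
        xs.append(scale * flat_norm[idx])
        ys.append(flat_raw[idx])                  # ordinate from the raw image
    return np.array(xs), np.array(ys)
```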

After the fitting points are obtained, the linear imaging model of the local region is solved by fitting them with the least-squares method. Figure 6 shows an example of the linear fitting result for a single channel. Once the local linear model of the underwater image is solved, the transmission rate ${t_w}$ and the backscattered light ${B_w}$ of the local region are obtained from its slope and intercept.
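
The fit itself is a one-line least-squares problem; a minimal sketch with invented point values:

```python
import numpy as np

# Degree-1 least-squares fit of the five points; the slope approximates the
# transmission rate t_w and the intercept the backscattered light B_w.
xs = np.array([0.32, 0.38, 0.41, 0.45, 0.52])  # illustrative abscissas J'_i
ys = np.array([0.49, 0.53, 0.55, 0.57, 0.61])  # illustrative ordinates I_i

t_w, B_w = np.polyfit(xs, ys, deg=1)           # returns [slope, intercept]
print(f"transmission ~ {t_w:.3f}, backscatter ~ {B_w:.3f}")
```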

Fig. 6. Example of the solution of the local linear model. (a) The white box marks the selected local region. (b) Enlarged view of the selected local region; the red box marks the rectangular block selected for determining the local special-value. (c) Fitting points based on the local special-value, and the fitted line.

After the local fitting results are obtained, the solution is extended from the local regions to the whole image. In practice, if the scattered light in extremely distant regions is removed completely, the image may look unnatural [4]. Therefore, a small amount of scattered light is retained in the extremely distant regions, which makes the restoration results more natural and improves their visual effect. Since the intensity of the backscattered light increases with distance, the average intensity of the pixels between the brightest 10% and the brightest 5% of the global backscattered light is assigned to the brightest 5% of pixels, so that some scattered light remains in the extremely distant regions. Guided filtering [25] is used to remove the blocky effect, giving the final global transmission rate $t_{w\_global}^{\prime}(x)$ and backscattered light intensity $B_{w\_global}^{\prime}(x)$:

$$\begin{aligned} t_{w\_global}^{\prime}(x) &= \mathrm{guidedfilter}(t_{w\_global}(x))\\ B_{w\_global}^{\prime}(x) &= \mathrm{guidedfilter}(B_{w\_global}(x)) \end{aligned}$$
where ${t_{w\_global}}(x)$ and ${B_{w\_global}}(x)$ are the global transmission rate and backscattered light before filtering. Finally, the lower bound of the transmission rate is restricted to improve the estimation accuracy, and the target radiance is obtained as follows:
$$J_{w\_global}(x) = \frac{I_w(x) - B_{w\_global}^{\prime}(x)}{\max\left(t_{w\_global}^{\prime}(x),\,0.1\right)},\quad w \in \{R,G,B\}$$
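
Putting the global step together, a hedged sketch of the final inversion follows; `guided_filter` is a placeholder for any guided-filter implementation (for example, one from OpenCV's ximgproc module) and is deliberately left as a parameter rather than assumed.

```python
import numpy as np

def restore(I, t_global, B_global, guided_filter, t_floor=0.1):
    """Invert the local linear model globally, per the equation above.
    `I`, `t_global`, `B_global` are HxWx3 arrays in [0, 1];
    `guided_filter(guide, src)` is a user-supplied smoothing callable."""
    J = np.empty_like(I)
    for ch in range(3):                                   # w in {R, G, B}
        t = guided_filter(I[..., ch], t_global[..., ch])  # remove block effect
        B = guided_filter(I[..., ch], B_global[..., ch])
        J[..., ch] = (I[..., ch] - B) / np.maximum(t, t_floor)
    return np.clip(J, 0.0, 1.0)
```
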
The flowchart of the proposed method is shown in Fig. 7.

Fig. 7. Flowchart of the proposed method.

4. Analysis of the experimental results

4.1 Analysis of the value of parameter ɛ

When obtaining the fitting points, the parameter $\varepsilon$ is used to adjust the pixel intensities near the local special-value. This section determines an empirical value for $\varepsilon$.

Figure 8 shows typical cases of the pixel intensity distribution before and after normalizing, i.e., the distribution curves after stretching three types of pixel intensities to $[0,1]$: images whose overall intensity is low, medium, and high. In all three cases the curves intersect, and the distance between the curves near the intersection is small. To determine an empirical value of $\varepsilon$, we performed restoration experiments on 200 underwater images from the UIEB, RUIE, and EUVP datasets [26–28]. Since the fitting values are chosen near the intersection, the range of $\varepsilon$ need not be large; it is taken as [0.0, 0.1] with a step size of 0.002. Image restoration is performed for each value of $\varepsilon$, giving 50 restoration results per image. The sum of the information entropy and the UIQM of the image is used as the quality criterion, and the $\varepsilon$ corresponding to the maximal sum is taken as the optimal value for that image. The empirical value of $\varepsilon$ is then obtained by analyzing the distribution of optimal values, as shown in Fig. 9.
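
A sketch of this sweep is given below; `restore_image` and `uiqm` are placeholders for the restoration pipeline and the UIQM metric, and only the entropy helper is spelled out.

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy of intensities in [0, 1]."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def best_epsilon(raw, restore_image, uiqm):
    """Sweep epsilon over [0.0, 0.1) in steps of 0.002 (50 candidates) and
    return the value maximizing entropy + UIQM, as described above."""
    scored = []
    for eps in np.arange(0.0, 0.1, 0.002):
        result = restore_image(raw, eps)
        scored.append((entropy(result) + uiqm(result), eps))
    return max(scored)[1]
```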

Fig. 8. Three typical distribution curves of pixel values before and after normalizing.

Fig. 9. The statistical law of $\varepsilon$. (a) Examples of how the evaluation index changes with $\varepsilon$. (b) Probability distribution histograms of the optimal $\varepsilon$.

Figure 9(a) shows examples of how the index of the restored images changes with $\varepsilon$ among the 200 images. The curves are approximately unimodal: they rise gradually and decline after reaching a peak, so each image has an optimal $\varepsilon$ corresponding to the maximum index. The peak positions show that the optimal value is concentrated slightly below the middle of the range; for larger $\varepsilon$, the index trends downward and the image quality gradually deteriorates. To analyze the distribution of optimal $\varepsilon$ more precisely, probability distribution histograms of the optimal values are computed, as shown in Fig. 9(b), with bins [0.00, 0.01), [0.01, 0.02), [0.02, 0.03), [0.03, 0.04), and so on up to [0.09, 0.10).

In Fig. 9(b), the optimal values of $\varepsilon$ are mainly concentrated in [0.00, 0.06), and those in [0.00, 0.05) account for nearly 75%. In fact, many images become distorted when $\varepsilon$ exceeds 0.06. Combining this with the visual quality of the restored images, and in order to handle different types of underwater images well and broaden the applicability of the algorithm, an intermediate value is selected and $\varepsilon$ is fixed at 0.03. The effectiveness of the algorithm is evaluated and analyzed below with $\varepsilon = 0.03$.

4.2 Results of linear fitting in a local region

To verify the soundness of the proposed method, linear fits are computed and the backscattered light is separated in local regions of different types of underwater images. As shown in Fig. 10, two local regions, one near and one far, are selected from underwater images that mainly exhibit blue and green degradation; the white boxes indicate the selected positions.

Fig. 10. Results of local-region background light separation in underwater images.

According to the backscattered light expression in Eq. (15), its intensity increases with scene distance. Figure 10 shows the backscattered light separated from the two selected local regions of each image. The backscattered light separated from the near regions is clearly darker and weaker than that from the far regions, consistent with the conclusion that backscattered light increases with distance.

Figure 11 shows the linear fitting curves between the total radiance and the target light in the local regions, where (a) and (b) are the fits for the near and far regions of the blue image, and (c) and (d) are those of the green image. As noted above, the slope of the fitted line represents the transmission rate of the local region. The three-channel slopes of the near regions are clearly greater than those of the far regions, in line with how the transmission rate varies with distance. Meanwhile, the slopes in (c) and (d) are generally higher than those in (a) and (b), meaning the transmission rates of the green image are generally greater than those of the blue image; this matches the visual impression that the green image is clearer. The intercept of the fitted line is the backscattered light intensity of the local region. In all four plots, the intercepts of the blue and green channels are larger than that of the red channel; in the fits of the blue image the blue channel has the largest value and its curve lies at the top, and the green image shows the corresponding behavior, consistent with the color characteristics of both images. Moreover, the gap between the blue-green intercepts and the red intercept is larger in (a) and (b) than in (c) and (d), reflecting the more serious scattering and blurriness of the blue image.

Fig. 11. Linear fitting curves of local regions. (a)-(b) The curves of the near and far regions in the blue stone image. (c)-(d) The curves of the near and far regions in the green stone image.

In general, the backscattered light and transmission rate obtained by the method in local regions follow the laws of the underwater imaging physical model, verifying the soundness and correctness of the local linear fitting method.

4.3 Evaluation and analysis of the restoration results

Section 4.2 qualitatively analyzed the fitting results of the proposed method in local regions of underwater images. We now analyze the effectiveness of the algorithm on entire images. In the experiments, the local region size is 25×25 and the rectangular block size is 3×3. The restored images are analyzed from two perspectives, subjective perception and objective indicators, and the results are compared with existing advanced methods, including the statistical model of background light and optimization of transmission map (SMBOT) [29], relative global histogram stretching (RGHS) [30], the underwater dark channel prior (UDCP) [5], image blurriness and light absorption (IBLA) [8], and color balance and fusion for underwater images (CBFUI) [31]. The codes for these methods were provided by the authors or obtained from the internet.

To evaluate the restoration results by subjective perception, we use the popular natural underwater image dataset UIEB as the main test set and analyze the de-scattering effect of the proposed method on different types of scenes.

(1) Restoration effect of the images in the UIEB dataset

Two typical types of underwater images from this dataset are analyzed: scenes filled with objects and scenes containing a background region.

The restoration results for images filled with objects are shown in Fig. 12. Comparing the results, SMBOT, CBFUI, and the proposed method achieve good results, significantly better than those of UDCP and IBLA. UDCP has some de-scattering effect, but its color distortion is serious. For IBLA, the results for some images such as scene 1 and scene 6 are dark and the color distortion is serious; for scene 2 and scene 3 the color distortion is reduced, but the scattering removal is not obvious enough. RGHS improves on UDCP and IBLA, but for dark images such as scene 3 and scene 4 its results are less effective than those of SMBOT and the proposed method. CBFUI effectively corrects the color cast and improves image clarity, but its results are dark overall. The proposed method works well on images filled with targets: the scattering removal is very evident, and the texture and details of the images are improved, for example in scene 2, scene 4, and scene 5. It is also effective on images such as scene 1 with color shift and depth of field, where the proposed method gives the most thorough de-scattering and the clearest results among all methods.

Fig. 12. Comparison of restoration results of underwater images filled with objects.

As shown in Fig. 13, the method is effective in both scattering removal and color restoration for underwater images containing a background region. The proposed method and SMBOT yield clearer results and more natural colors than the other methods. The texture details and contrast of all five images are improved, especially in close regions, such as the ship details in scene 2, the nearby corals in scene 3, and the fish in scene 5. CBFUI also effectively removes scattering, but its results remain dark. Compared with SMBOT and CBFUI, the proposed method restores scene details more effectively and restores color better, showing that it is also effective for this type of underwater image. Note that for images with a sharp boundary between the target and the background, such as scene 1 and scene 3, the restoration results of the method may contain some halos; additional guided filtering can be applied to reduce the halos and further improve the visual effect.

(2) Analysis of the effect of the proposed method on color correction

Fig. 13. Comparison of restoration results of underwater images containing a background region.

To analyze the effect of the method on correcting color distortion, we select images with obvious color distortion from the UIEB dataset and additionally select some blue- and green-toned images. The results in Fig. 14 show that the proposed method effectively removes the color cast, and the colors of the restored results are more natural. Its processing effect is better than that of CBFUI. Although SMBOT and RGHS are also somewhat effective, the results of SMBOT are not natural enough, and the recovery of green images by RGHS is not as good as with the proposed method.

(3) Analysis of the objective index

Fig. 14. Comparison of color correction results for underwater images.

Underwater image quality evaluation indicators are used to further verify the effectiveness of the method quantitatively. The indicators used are UCIQE [32], image information entropy, and the image average gradient (Mean-G). For all of these indicators, larger values indicate better results.
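
As a reference, a minimal sketch of the Mean-G (average gradient) indicator is given below; the exact gradient convention used in published evaluations may differ slightly.

```python
import numpy as np

def mean_gradient(gray):
    """Average gradient (Mean-G) of a 2-D grayscale image in [0, 1]:
    mean of sqrt((gx^2 + gy^2) / 2) over all pixels."""
    gy, gx = np.gradient(gray)
    return float(np.sqrt((gx ** 2 + gy ** 2) / 2.0).mean())
```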

In this paper, the average values of the evaluation indicators are calculated for the six underwater images filled with targets and the five underwater images containing a background region that were used in the subjective analysis. The results are shown in Table 1.

Table 1. Comparison of Evaluation Indexes of the Restoration Results (Two types of scenarios)

As shown in Table 1, the proposed method achieves good quantitative results for both types of images, ranking near the top on all indicators. Its information entropy and average gradient are the best, showing that the method performs well at removing the influence of scattering and improving image detail. Its UCIQE ranks second but is close to the best value, showing that the method improves the overall quality of underwater images. The comprehensive analysis therefore shows that the proposed method restores different types of underwater images well.

In addition, to further analyze the effectiveness of the method, 400 images of different types from the UIEB dataset are restored and the mean values of the objective indices are computed. The results are shown in Table 2.

Based on the above results, the proposed method achieves good restoration results in both subjective perception and objective indicators. For different types of underwater images, it can restore image details, remove color cast, and improve overall image quality. The restoration of close scenes in underwater images containing a background region is satisfactory, but some halos may appear at the junction of target and background. The additional filtering reduces the halos and improves the visual effect; although this lowers the evaluation indices compared with the unfiltered results, Table 2 shows that the results are still better than those of the other methods.

Table 2. Comparison of The Average Evaluation Indexes of the Restoration Results in UIEB dataset

5. Conclusion

To improve the quality of underwater images, a local linear model and restoration method for underwater images are proposed. Based on the approximately distance-invariant feature of local regions, the local linear model of underwater images is established. The model reflects the linear relationship between the degraded underwater image intensity and the target radiance; its slope and intercept represent the transmission rate and the backscattered light of the local region, respectively, so the problem of underwater image restoration is transformed into that of solving a linear model. To solve the linear model and complete the restoration, the concept of the local special-value is defined, derived from a study of normalization. Five fitting points are obtained by means of the local special-value, and the local linear equation is then solved by least-squares fitting. From the local linear equation, the two key parameters, the transmission rate and the backscattered light, are obtained concurrently. Moreover, since the backscattered light of a local region is treated as a constant in the linear model, only the transmission rate of the direct attenuation component needs to be solved. The proposed method simplifies the parameter-solving process, and a large number of experiments verify its effectiveness: the contrast of the restoration results is improved and the color distortion is weakened. The local linear model offers a new approach to restoring underwater images, and the proposed method provides a way to solve the linear model by obtaining the local special-value and fitting points. Research on other statistical laws of images and on other methods of solving the linear model is a direction for future work.

Funding

National Natural Science Foundation of China (61571177).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. J. Zhou, Y. Wang, W. Zhang, and C. Li, “Underwater image restoration via feature priors to estimate background light and optimized transmission map,” Opt. Express 29(18), 28228–28245 (2021).

2. B. L. McGlamery, “A computer model for underwater camera systems,” Ocean Optics VI 0208, 221–231 (1980).

3. S. G. Narasimhan and S. K. Nayar, “Vision and the atmosphere,” Int. J. Comput. Vis. 48(3), 233–254 (2002).

4. K. He, J. Sun, and X. Tang, “Single image haze removal using dark channel prior,” IEEE Trans. Pattern Anal. Mach. Intell. 33(12), 2341–2353 (2011).

5. P. Drews-Jr, E. do Nascimento, F. Moraes, S. Botelho, and M. Campos, “Transmission estimation in underwater single images,” in IEEE International Conf. on Computer Vision Workshops (ICCVW) (2013), pp. 825–830.

6. A. Galdran, D. Pardo, A. Picon, and A. Alvarez-Gila, “Automatic red-channel underwater image restoration,” J. Vis. Commun. Image R. 26, 132–145 (2015).

7. K. Liu and Y. Liang, “Underwater image enhancement method based on adaptive attenuation-curve prior,” Opt. Express 29(7), 10321–10345 (2021).

8. Y. T. Peng and P. C. Cosman, “Underwater image restoration based on image blurriness and light absorption,” IEEE Trans. Image Process. 26(4), 1579–1594 (2017).

9. J. Xie, G. Hou, G. Wang, and Z. Pan, “A variational framework for underwater image dehazing and deblurring,” IEEE Trans. Circuits Syst. Video Technol. (to be published).

10. D. Berman, T. Treibitz, and S. Avidan, “Diving into haze-lines: color restoration of underwater images,” in British Machine Vision Conference (BMVC) (2017).

11. G. Buchsbaum, “A spatial processor model for object colour perception,” J. Franklin Inst. 310(1), 1–26 (1980).

12. T. Li, S. Rong, W. Zhao, L. Chen, Y. Liu, H. Zhou, and B. He, “Underwater image enhancement using adaptive color restoration and dehazing,” Opt. Express 30(4), 6216–6235 (2022).

13. K. Liu and Y. Liang, “Enhancement of underwater optical images based on background light estimation and improved adaptive transmission fusion,” Opt. Express 29(18), 28307–28328 (2021).

14. D. Akkaynak and T. Treibitz, “A revised underwater image formation model,” in IEEE Conf. on Computer Vision and Pattern Recognition (CVPR) (2018), pp. 6723–6732.

15. J. Zhou, T. Yang, W. Ren, D. Zhang, and W. Zhang, “Underwater image restoration via depth map and illumination estimation based on a single image,” Opt. Express 29(19), 29864–29886 (2021).

16. F. Liu, Y. Wei, P. Han, K. Yang, L. Bai, and X. Shao, “Polarization-based exploration for clear underwater vision in natural illumination,” Opt. Express 27(3), 3629–3641 (2019).

17. R. W. Preisendorfer, “Introduction to hydrologic optics,” in Hydrologic Optics (1976), Vol. 1.

18. N. G. Jerlov, Marine Optics (Elsevier Scientific Publishing Company, 1976), Chap. 7.

19. J. Y. Chiang and Y. C. Chen, “Underwater image enhancement by wavelength compensation and dehazing,” IEEE Trans. Image Process. 21(4), 1756–1769 (2012).

20. J. E. Tyler, “Observed and computed path radiance in the underwater light field,” J. Mar. Res. 18(3), 157–167 (1960).

21. C. D. Mobley, Light and Water: Radiative Transfer in Natural Waters (Academic Press, 1994), Chap. 5.

22. D. J. Jobson, Z. Rahman, and G. A. Woodell, “A multiscale retinex for bridging the gap between color images and the human observation of scenes,” IEEE Trans. Image Process. 6(7), 965–976 (1997).

23. D. Akkaynak, T. Treibitz, T. Shlesinger, R. Tamir, Y. Loya, and D. Iluz, “What is the space of attenuation coefficients in underwater computer vision?” in IEEE Conf. on Computer Vision and Pattern Recognition (CVPR) (2017), pp. 568–577.

24. D. Akkaynak and T. Treibitz, “Sea-Thru: a method for removing water from underwater images,” in IEEE Conf. on Computer Vision and Pattern Recognition (CVPR) (2019), pp. 1682–1691.

25. K. He, J. Sun, and X. Tang, “Guided image filtering,” IEEE Trans. Pattern Anal. Mach. Intell. 35(6), 1397–1409 (2013).

26. C. Li, C. Guo, W. Ren, R. Cong, J. Hou, S. Kwong, and D. Tao, “An underwater image enhancement benchmark dataset and beyond,” IEEE Trans. Image Process. 29, 4376–4389 (2020).

27. R. Liu, X. Fan, M. Zhu, M. Hou, and Z. Luo, “Real-world underwater enhancement: challenges, benchmarks, and solutions under natural light,” IEEE Trans. Circuits Syst. Video Technol. 30(12), 4861–4875 (2020).

28. M. Islam, Y. Xia, and J. Sattar, “Fast underwater image enhancement for improved visual perception,” IEEE Robot. Autom. Lett. 5(2), 3227–3234 (2020).

29. W. Song, Y. Wang, D. Huang, A. Liotta, and C. Perra, “Enhancement of underwater images with statistical model of background light and optimization of transmission map,” IEEE Trans. Broadcast. 66(1), 153–169 (2020).

30. D. Huang, Y. Wang, W. Song, J. Sequeira, and S. Mavromatis, “Shallow-water image enhancement using relative global histogram stretching based on adaptive parameter acquisition,” in International Conf. on Multimedia Modeling (ICMM) (2018), pp. 453–465.

31. C. O. Ancuti, C. Ancuti, C. De Vleeschouwer, and P. Bekaert, “Color balance and fusion for underwater image enhancement,” IEEE Trans. Image Process. 27(1), 379–393 (2018).

32. M. Yang and A. Sowmya, “An underwater color image quality evaluation metric,” IEEE Trans. Image Process. 24(12), 6062–6071 (2015).

Figures (14)

Fig. 1. The model of light transmission in water.
Fig. 2. The distribution curves before and after normalizing. (a) Changes in the numerical distribution curves. (b) Distribution curve of absolute data difference before and after normalizing.
Fig. 3. Probability distribution of the absolute minimum difference before and after normalizing in the local region. (a) Red channel. (b) Green channel. (c) Blue channel.
Fig. 4. Histogram distribution of gradient before and after local normalizing.
Fig. 5. Schematic diagram of linear fitting value determination method.
Fig. 6. Example of solution result of local linear model. (a) The location of the white box is the selected local region. (b) Enlarged view of the selected local region; the position of the red box is the rectangular block selected for determining the local special-value. (c) Fitting points based on the local special-value and fitting line.
Fig. 7. Flowchart for the proposed method.
Fig. 8. Three typical distribution curves of pixel values before and after normalizing.
Fig. 9. The statistical law of $\varepsilon$. (a) Some examples about the change law of evaluation index with $\varepsilon$. (b) The probability distribution histograms of the optimal $\varepsilon$.
Fig. 10. Results of local region background light separation in underwater images.
Fig. 11. Linear fitting curves of local region. (a)-(b) The curves of near region and far region in blue stone images. (c)-(d) The curves of near region and far region in green stone images.
Fig. 12. Comparison of restoration results of underwater images filled with objects.
Fig. 13. Comparison of restoration results of underwater images containing background region.
Fig. 14. Comparison of color correction results for underwater images.

Tables (2)

Table 1. Comparison of Evaluation Indexes of the Restoration Results (Two Types of Scenarios)
Table 2. Comparison of the Average Evaluation Indexes of the Restoration Results on the UIEB Dataset

Equations (26)


$$\frac{dI_w(z,d)}{dz} = -c_w I_w(z,d) + I_w^\ast(z,d), \quad w \in \{R,G,B\}$$
$$\frac{dI_w(z,d)}{dz}\exp(c_w z) = -c_w I_w(z,d)\exp(c_w z) + I_w^\ast(d)\exp(c_w z), \quad w \in \{R,G,B\}$$
$$\frac{d}{dz}\big(I_w(z,d)\exp(c_w z)\big) = I_w^\ast(d)\exp(c_w z), \quad w \in \{R,G,B\}$$
$$\int_0^z d\big(I_w(z',d)\exp(c_w z')\big) = \int_0^z I_w^\ast(d)\exp(c_w z')\,dz', \quad w \in \{R,G,B\}$$
$$I_w(z,d)\exp(c_w z) - I_w(0,d) = I_w^\ast(d)\int_0^z \exp(c_w z')\,dz' = \frac{I_w^\ast(d)}{c_w}\big(\exp(c_w z) - 1\big), \quad w \in \{R,G,B\}$$
$$I_w(z,d) = I_w(0,d)\exp(-c_w z) + \frac{I_w^\ast(d)}{c_w}\big(1 - \exp(-c_w z)\big), \quad w \in \{R,G,B\}$$
$$I_w^\ast(d) = \int_0^\pi E_w(d,\phi)\beta_w(\phi)\,d\Omega = E_w(d) \times 2\pi \int_0^\pi \beta_w(\phi)\sin\phi\,d\phi = b_w E_w(d), \quad w \in \{R,G,B\}$$
$$\begin{aligned} F_w(z) &= D_w(z) + B_w(z) = I_w(0,d)\exp(-c_w z) + \frac{b_w E_w(d)}{c_w}\big(1 - \exp(-c_w z)\big)\\ &= J_w \exp(-c_w z) + \frac{b_w E_w(d)}{c_w}\big(1 - \exp(-c_w z)\big), \quad w \in \{R,G,B\} \end{aligned}$$
$$I_w(z,d) = \kappa_w \int_V E_w(d,\lambda)\rho_w(\lambda)\exp(-c_w(\lambda)z)R_w(\lambda)\,d\lambda + \kappa_w \int_V \frac{b_w(\lambda)E_w(d,\lambda)}{c_w(\lambda)}\big(1 - \exp(-c_w(\lambda)z)\big)R_w(\lambda)\,d\lambda$$
$$I_w(z,d) = J_w'\exp(-c_w' z) + B_w'^\infty\big(1 - \exp(-c_w' z)\big)$$
$$I_w(z,d) = D_w'(z,d) + B_w'(z,d) = J_w'\exp(-c_{1w}' z) + B_w'^\infty\big(1 - \exp(-c_{2w}' z)\big)$$
$$J_w' = \kappa_w \int_V E_w(0,\lambda)\exp(-K_w(\lambda)d)\rho_w(\lambda,x)R_w(\lambda)\,d\lambda$$
$$B_w'^\infty = \kappa_w \int_V \frac{b_w(\lambda)E_w(0,\lambda)\exp(-K_w(\lambda)d)}{c_w(\lambda)}R_w(\lambda)\,d\lambda$$
$$D_w'(x) = \kappa_w \int_V E_w(0,\lambda)\exp(-K_w(\lambda)d)\rho_w(\lambda,x)R_w(\lambda)\,d\lambda \times \exp(-c_{1w}' z) = J_w'(x) \times k_w, \quad x \in \varOmega(y)$$
$$B_w'(x) = \kappa_w \int_V \frac{b_w(\lambda)E_w(0,\lambda)\exp(-K_w(\lambda)d)}{c_w(\lambda)}R_w(\lambda)\,d\lambda \times \big(1 - \exp(-c_{2w}' z)\big) = B_w', \quad x \in \varOmega(y)$$
$$I_w(x) = k_w J_w'(x) + B_w', \quad x \in \varOmega(y)$$
$$X^{new} = \frac{X^i - X^{min}}{X^{max} - X^{min}} = \frac{1}{X^{max} - X^{min}}X^i - \frac{X^{min}}{X^{max} - X^{min}}, \quad X^i \in [X^{min}, X^{max}]$$
$$|X^{new} - X^i| = \left|\frac{X^i - X^{min}}{X^{max} - X^{min}} - X^i\right| = \left|\left(\frac{1}{X^{max} - X^{min}} - 1\right)X^i - \frac{X^{min}}{X^{max} - X^{min}}\right|, \quad X^i \in [X^{min}, X^{max}]$$
$$I_w^{en}(x) = \frac{I_w(x) - I_w^{min}}{I_w^{max} - I_w^{min}}, \quad x \in \varOmega(y)$$
$$I_w^{en}(x) = \frac{k_w J_w'(x) + B_w' - I_w^{min}}{I_w^{max} - I_w^{min}}, \quad x \in \varOmega(y)$$
$$I_w^{en}(x) = \frac{k_w J_w'(x) + B_w' - (k_w J_w'^{min} + B_w')}{k_w J_w'^{max} + B_w' - (k_w J_w'^{min} + B_w')} = \frac{J_w'(x) - J_w'^{min}}{J_w'^{max} - J_w'^{min}}, \quad x \in \varOmega(y)$$
$$I_w^{en}(x) = \frac{J_w'(x) - J_w'^{min}}{J_w'^{max} - J_w'^{min}} = J_w'^{en}(x), \quad x \in \varOmega(y)$$
$$I_{enw}^{\varOmega\_sp}(x) = \arg\min_{I_{enw}^{\varOmega} \in I_{enw}} \left[ \| \nabla I_{enw}^{\varOmega}(x) \|_0 + aver\big(\nabla I_{enw}^{\varOmega}(x)\big) \right]$$
$$J_w'^{sp}(x) = aver\big(I_{enw}^{\varOmega\_sp}(x)\big)$$
$$\begin{aligned} t_{w\_global}'(x) &= guidedfilter\big(t_{w\_global}(x)\big)\\ B_{w\_global}'(x) &= guidedfilter\big(B_{w\_global}(x)\big) \end{aligned}$$
$$J_{w\_global}(x) = \frac{I_w(x) - B_{w\_global}'(x)}{\max\big(t_{w\_global}'(x), 0.1\big)}, \quad w \in \{R,G,B\}$$
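As a quick numerical check of the invariance expressed by Eqs. (19)–(22): because the local model of Eq. (16) is affine with a positive slope, min–max normalizing the degraded patch reproduces the normalized target radiance. The values below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
J = rng.random(64)                # synthetic local target radiance J'
k, B = 0.35, 0.52                 # synthetic local transmission and backscatter
I = k * J + B                     # degraded local intensities, Eq. (16)

norm = lambda v: (v - v.min()) / (v.max() - v.min())   # min-max normalizing, Eq. (17)
assert np.allclose(norm(I), norm(J))                   # Eq. (22): k and B cancel out
```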