Optica Publishing Group

Active non-uniform illumination-based underwater polarization imaging method for objects with complex polarization properties

Open Access

Abstract

Active polarization imaging is one of the most effective underwater optical imaging methods that can eliminate the degradation of image contrast and clarity caused by macro-molecule scattering. However, the non-uniformity of active illumination and the diversity of object polarization properties may decrease the quality of underwater imaging. This paper proposes a non-uniform illumination-based active polarization imaging method for underwater objects with complex optical properties. Firstly, illumination homogenization in the frequency domain is proposed to extract and homogenize the natural incident light from the total receiving light. Then, the weight values of the polarized and non-polarized images are computed according to each pixel’s degree of linear polarization (DoLP) in the original underwater image. By this means, the two images can be fused to overcome the problem of reflected light loss generated by the complex polarization properties of underwater objects. Finally, the fusion image is normalized as the final result of the proposed underwater polarization imaging method. Both qualitative and quantitative experimental results show that the presented method can effectively eliminate the uneven brightness of the whole image and obtain the underwater fusion image with significantly improved contrast and clarity. In addition, the ablation experiment of different operation combinations shows that each component of the proposed method has noticeable enhancement effects on underwater polarization imaging. Our codes are available in Code 1.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

With the exploration and development of ocean resources, underwater optical imaging technology has developed rapidly in the past few decades [1]. Different from traditional imaging technology, underwater optical imaging technology [2] combines conventional optical imaging with optical modulation [3], computational imaging [4], and digital image processing [5]. A significant difference between underwater optical imaging and traditional imaging technology is that underwater optical imaging captures the light reflected by the object through the medium to the greatest extent and eliminates the scattered light caused by the scattering medium as much as possible [6]. Based on the differences in optical properties, the scattered light can be removed from the total receiving light, and the direct reflected light of the object can thus be retained [7].

Among various underwater imaging methods [8–12], active polarization imaging is one of the most effective methods for reducing the impact of medium scattering and obtaining clear underwater images based on the different polarization properties of different light [13]. In particular, one of the most representative active polarization imaging methods in scattering media was proposed by Schechner [14] in 2009, which employs an active polarized light source to illuminate the underwater scene and utilizes a polarizer to separate the scattered light. Since then, researchers have continuously improved the underwater polarization imaging theory [15–18]. Hu et al. [19] proposed a visibility enhancement method using transmittance correction to realize underwater imaging of objects with different polarization properties. Later, they [20] further developed a polarimetric imaging-based underwater image recovery algorithm under non-uniform optical fields.

However, all the approaches discussed above need to extract a background region containing no foreground objects from the underwater image so as to estimate the intensity of backscattered light. We believe that this dependence on prior information severely limits the application scope of existing underwater polarization imaging methods. To remove this dependence, Zhao et al. [21] designed a genetic-algorithm-based polarization de-scattering imaging method for turbid water that requires no prior knowledge. However, this method is still not adaptable to scenes containing underwater objects with different polarization properties. So far, achieving satisfactory underwater imaging performance with existing polarization imaging approaches remains challenging when their additional assumptions are violated or prior information is unavailable. In summary, the imaging performance may degrade significantly when there is no background area in the scene, the active illumination is non-uniform, or objects with multiple polarization properties are present.

In this paper, we focus on developing an active underwater polarization imaging method for non-uniform illumination and for scenes containing objects with various polarization properties, without any background prior. Firstly, an incident light estimation based on frequency domain filtering is proposed to eliminate the negative effects of a non-uniform illumination distribution. Then, a reflected light image restoration and enhancement method based on polarization-weighted fusion is developed to retain the reflected light from underwater objects while eliminating the scattered light component to the greatest extent. Finally, real-world experiments of polarization imaging in turbid water demonstrate the superiority of our method in homogenizing the illumination and recovering underwater images with complex polarization properties. The main contributions of this paper can be summarized as follows:

  • 1. A frequency domain filtering method is developed for extracting and homogenizing the incident component from the underwater image so as to overcome the problem of active non-uniform illumination.
  • 2. A polarization-weighted fusion algorithm is proposed to estimate the intensity of object reflected light based on polarization property differences between polarized and non-polarized images.
  • 3. The incident component homogenization method and the polarization-weighted fusion algorithm are comprehensively integrated to restore the clear underwater image.
The rest of our paper is arranged as follows. Section 2 introduces the traditional underwater active polarization imaging model. Section 3 describes the proposed method and analyzes the principle of input parameter setting. Section 4 reports the experimental results, including the experimental setup and two experimental results with respect to the polarization imaging performance through turbid water. Finally, we draw the conclusion of this work in Section 5.

2. Active polarization imaging model

According to the underwater imaging physical model [22], the total light of the underwater scene can be divided into backscattered light and object reflected light, which can be expressed as

$${{{I}_{total}}=B+T}$$
where, ${{I}_{total}}$ represents the intensity of total light received by the detector, and $B$ and $T$ denote the intensities of the backscattered and object reflected light, respectively. The object reflected light is the incident light reflected by the object's surface, including specular and diffuse components. The backscattered light can be regarded as the light that undergoes volume scattering throughout the water path between the imaging system and the object being imaged. Therefore, the primary purpose of underwater imaging is to remove the backscattered light from the total receiving light.

However, traditional intensity-based imaging cannot separate the backscattered light from the total receiving light [14]. In contrast, polarization-based imaging methods take advantage of the polarization information of different components to distinguish between the backscattered light and the object reflected light. Assuming that the influence of the polarization angle can be ignored, the relationship between the DoLP and the intensity of the two different underwater components can be expressed as

$${{{I}_{total}}{{P}_{total}}=B{{P}_{scat}}+T{{P}_{obj}}}$$
where, ${{P}_{total}}$ denotes the DoLP of ${{I}_{total}}$, ${{P}_{scat}}$ denotes the DoLP of backscattered light, ${{P}_{obj}}$ denotes the DoLP of the object reflected light. According to Eqs. (1) and (2), the backscattered light $B$ and the object reflected light $T$ can be expressed as
$${\left\{ \begin{matrix} B=\left( \frac{{{P}_{total}}-{{P}_{obj}}}{{{P}_{scat}}-{{P}_{obj}}} \right){{I}_{total}} \\ T=\left( \frac{{{P}_{scat}}-{{P}_{total}}}{{{P}_{scat}}-{{P}_{obj}}} \right){{I}_{total}} \\ \end{matrix} \right.\ }$$

As shown in Eq. (3), the intensity of the object reflected light $T$ is related to the DoLP of the total light ${{P}_{total}}$, the DoLP of the backscattered light ${{P}_{scat}}$, and the DoLP of the object reflected light ${{P}_{obj}}$. In traditional Stokes vector calculation methods, two pairs of polarization orthogonal images need to be photographed, and a differential operator is used to calculate the Stokes vector. Researchers have further proposed more complex imaging systems to reduce the number of images and optimize the angles of image acquisition [23,24]. However, a normal image detector with a polarizer can only measure the DoLP and intensity of the total receiving light; the intensity of the object reflected light cannot be accurately calculated without additional assumptions.

In frequently-used active polarization imaging methods, the object reflected light is regarded as fully polarized or non-polarized, which means the value of ${{P}_{obj}}$ is 1 or 0. Then, the average intensity and DoLP of the pixels in a region without objects are taken as the backscattered light intensity ${B}$ and the DoLP of the backscattered light ${{P}_{scat}}$. Finally, the intensity of the object reflected light ${T}$ can be obtained by Eq. (3).
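As an illustration, this traditional separation can be sketched in a few lines of NumPy (our own sketch, not the released Code 1; the function names are hypothetical, and images are assumed to be grayscale float arrays):

```python
import numpy as np

def separate_components(I_total, P_total, P_scat, P_obj):
    """Invert Eq. (3): split the total intensity into backscattered light B
    and object reflected light T, given the DoLP of each component."""
    denom = P_scat - P_obj
    B = (P_total - P_obj) / denom * I_total
    T = (P_scat - P_total) / denom * I_total
    return B, T

def background_estimate(I_total, P_total, bg_mask):
    """Traditional background-based estimate: P_scat and B are taken as the
    mean DoLP/intensity over an object-free region (a boolean mask here)."""
    return P_total[bg_mask].mean(), I_total[bg_mask].mean()
```

By construction, the two components always sum back to the total intensity, which is a quick sanity check on any implementation.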

However, there are three main problems when the existing methods are applied to underwater imaging in practice. Firstly, the intensity of the backscattered light is treated as a global invariant in traditional underwater imaging models. In reality it is not: the illumination brightness is uneven and the incidence angle of the light source differs across the image, so the backscattered light is unevenly distributed over the scene. If this non-uniformity is ignored, the contrast of the restored image will also be uneven. Secondly, in the traditional active polarization imaging model, the estimation of backscattered light relies on the average intensity of a background area that contains no targets. This approach is simple and reasonably accurate, but it is unsuitable for images without an ideal background area; estimating the backscattered light intensity without relying on a background region is therefore a significant problem. Finally, the usual assumption about the polarization characteristics of the object reflected light does not hold for scenes containing targets with different polarization characteristics. Traditional underwater imaging models assume that natural targets strongly depolarize the incident light, so the reflected light is non-polarized or weakly polarized; this assumption fails for artificial objects such as metals. Building hypotheses that cover more kinds of scenes is thus also an essential issue for underwater imaging.

Therefore, to eliminate the influence of the above three problems on underwater imaging results, we propose an improved active polarization imaging method, which combines polarization imaging and digital image processing theory and can achieve clear underwater imaging without prior information [25].

3. Improved methods of underwater active polarization imaging

3.1 Overview

This paper proposes an active underwater polarization imaging method based on illumination homogenization and polarization-weighted fusion to address the problems of traditional active polarization imaging. The algorithm flow is shown in Fig. 1.


Fig. 1. The flow chart of the proposed method.


The first part is illumination homogenization based on frequency domain filtering, which aims to obtain an underwater image with uniform illumination. Firstly, the fast Fourier transform converts the original underwater image to the frequency domain. The spectrum is multiplied by Gaussian frequency domain templates to divide the image into low-frequency and high-frequency components, which can be regarded as the illumination and reflection components, respectively. Then we perform an inverse Fourier transform and a logarithmic transform on the illumination component's spectrum to obtain the illumination logarithmic image. At the same time, the original underwater image is also converted to a logarithmic image through logarithmic transformation. Next, the illumination logarithmic image is subtracted from the original logarithmic image to obtain the reflectance logarithmic image. Finally, the reflectance logarithmic image and the logarithm of the mean illumination intensity are added and exponentially transformed to obtain the underwater image after illumination homogenization.

The second part is underwater image enhancement based on polarization-weighted fusion. Firstly, the DoLP image is calculated using the periodic integration method [26]. Then, multiplying the uniform underwater image by the DoLP image and by one minus the DoLP image yields the polarized and non-polarized light images, respectively. Next, according to the DoLP image, the weight value images of the polarized and non-polarized light images are computed. Finally, we multiply the polarized and non-polarized light images by the corresponding weight value images and add the two weighted images to get the weighted fusion underwater image.

3.2 Illumination homogenization based on frequency domain filtering

This section mainly discusses the influence of active non-uniform illumination on underwater imaging and introduces the proposed method for eliminating this influence on underwater imaging. Based on the Jaffe-McGlamery model [1], both backscattered and object reflected light could be regarded as the products of incident light intensity and the corresponding coefficients, which can be expressed as

$${{{I}_{total}}(x,y)={{I}_{sour}}(x,y)J(x,y)t(x,y)+{{I}_{sour}}(x,y)(1-t(x,y))}$$
where, ${{I}_{total}}(x,y)$ denotes the intensity of received light at position $(x,y)$ in the original underwater image, ${{I}_{sour}}(x,y)$ represents the active illumination intensity, $J(x,y)$ indicates the reflectivity of object surface to the incident light, and $t(x,y)$ is the transmission coefficient of the water path. Generally, the intensity of receiving light and the gray value of image pixels can be regarded as linear relations for typical imaging detectors. Thus, we rewrite Eq. (4) as
$${{{{I}'}_{total}}(x,y)=\eta {{I}_{sour}}(x,y)\left( J(x,y)t(x,y)+\left( 1-t(x,y) \right) \right)}$$
where ${{{I}'}_{total}}(x,y)$ denotes the gray value of pixel $(x,y)$ and $\eta$ is the photoelectric conversion efficiency of the imaging detector. Regarding $\eta$ as a constant, the right side of Eq. (5) can be divided into the incident component ${{I}_{sour}}(x,y)$ and the reflection component ${{R}_{refl}}(x,y)$. Here, we further simplify Eq. (5) as
$${{{{I}'}_{total}}(x,y)={{I}_{sour}}(x,y)\cdot {{R}_{refl}}(x,y)}$$
and
$${{{R}_{refl}}(x,y)=\eta \left( J(x,y)t(x,y)+1-t(x,y) \right)}$$

As shown in Fig. 2, in most cases the pixel intensity is stronger in the center of the underwater image due to the non-uniform illumination distribution of the active lighting source and the different reflection angles across the field of view. Therefore, the reflection component can be expressed as follows

$${{{R}_{refl}}(x,y)=\frac{{{{{I}'}}_{total}}(x,y)}{{{I}_{sour}}(x,y)}}$$
The reflection component can be regarded as the difference between the total pixel intensity and the incident light in the logarithmic domain, which can be written as
$${\log \left( {{R}_{refl}}(x,y) \right)={{r}_{refl}}(x,y)=\log \left( {{{{I}'}}_{total}}(x,y) \right)-\log \left( {{I}_{sour}}(x,y) \right)}$$
where ${{r}_{refl}}(x,y)$ is the logarithm of ${{R}_{refl}}(x,y)$. Since the incident light can be treated as a spatial smoothing light field, the approximate distribution of the incident light can be obtained by convolving the total light image with a Gaussian function. The relationship is formulated as
$${{{r}_{refl}}(x,y)=\log \left( {{{{I}'}}_{total}}(x,y) \right)-\log \left( {{{{I}'}}_{total}}(x,y)*G(x,y) \right)}$$
where $G(x,y)$ denotes the Gaussian function and the symbol $*$ indicates convolution operation. Note that the form of the Gaussian function is formulated as
$${G(x,y)=\frac{1}{\sqrt{2\pi }\sigma }\exp \left( -\frac{{{x}^{2}}+{{y}^{2}}}{2{{\sigma }^{2}}} \right)}$$
where $\sigma$ is the surrounding space constant representing the smoothness of Gaussian convolution. The low-frequency signal of the image is extracted as the incident component while the textures and details of the image in the reflection component are retained by adjusting $\sigma$ to an appropriate value. The value of $\sigma$ mainly depends on the turbidity degree of the water, which determines the blurring degree of the original underwater image.


Fig. 2. (a) The original underwater image; (b) The gray distribution of the original underwater image.


Finally, the average intensity ${{\bar {I}}_{sour}}$ of the original illumination distribution is taken as the intensity of the uniform illumination distribution, which is computed as

$${{{\bar{I}}_{sour}}=\frac{\sum\limits_{x=1}^{W}{\sum\limits_{y=1}^{H}{\left( {{{{I}'}}_{total}}(x,y)*G(x,y) \right)}}}{W\cdot H}}$$
where $W$ and $H$ represent the width and height of the image, respectively. By multiplying the uniform illumination intensity ${{\bar {I}}_{sour}}$ with the reflection component ${{R}_{refl}}(x,y)$, we express the uniform illumination image ${{{I}'}_{unif}}$ as
$${{{{I}'}_{unif}}(x,y)={{R}_{refl}}(x,y)\cdot {{\bar{I}}_{sour}}=\exp \left( {{r}_{refl}}(x,y) \right)\cdot {{\bar{I}}_{sour}}}$$
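The homogenization pipeline of Eqs. (10)–(13) can be sketched as follows (our own NumPy sketch, not the released Code 1; the Gaussian convolution of Eq. (10) is applied here as its equivalent frequency-domain transfer function $\exp(-2\pi^2\sigma^2 f^2)$, and a small `eps` guards the logarithms):

```python
import numpy as np

def gaussian_lowpass(img, sigma):
    """Estimate the incident component I'_total * G (Eq. (10)) by
    multiplying the spectrum with the Fourier transform of a spatial
    Gaussian of standard deviation sigma (Eq. (11))."""
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    H = np.exp(-2.0 * np.pi ** 2 * sigma ** 2 * (fx ** 2 + fy ** 2))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * H))

def homogenize(img, sigma=60.0, eps=1e-6):
    """Eqs. (9)-(13): reflection component in the log domain, then
    re-illuminated with the mean incident intensity."""
    inc = np.clip(gaussian_lowpass(img, sigma), eps, None)
    r_refl = np.log(img + eps) - np.log(inc)   # Eq. (10)
    I_bar = inc.mean()                         # Eq. (12)
    return np.exp(r_refl) * I_bar              # Eq. (13)
```

For an already-uniform image the estimated incident light equals the image itself, so the output is unchanged up to the mean intensity, which is a useful consistency check.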

3.3 Underwater imaging based on polarization-weighted fusion

In Sec. 3.2, homogenization of the low-frequency component in the frequency domain is used to optimize the original non-uniform illumination image. This section will introduce an underwater polarization imaging method for objects with various polarization properties. Equation (3) shows the relationship between the object reflected light and the DoLP of each component. In most cases, it is impossible to calculate the required values of DoLP only based on the underwater images. Therefore, the frequently used method assumes that the reflected or backscattered light is non-polarized based on the polarization properties of objects in different scenes.

As shown in Fig. 3, two regions are selected to represent two different polarization properties in the original underwater image. The object in Region 1 is a metal coin with highly specular reflectivity, while the object in Region 2 is a plastic coin with mainly diffuse reflectivity. Therefore, as shown in Fig. 3(b), the reflected light from Region 1 has higher DoLP values than that from Region 2. In the conventional polarization imaging model, as shown in Figs. 3(c) and 3(d), most of the object's polarized reflected light remains in the total polarized light image, while most of the non-polarized reflected light remains in the total non-polarized light image.


Fig. 3. (a) The original underwater image and local mesh graphs; (b) The DoLP image and local mesh graphs; (c) The polarized light image and local mesh graphs; (d) The non-polarized light image and local mesh graphs.


According to Eq. (3), the magnitude relation of the DoLP can be expressed as

$$\left\{ \begin{matrix} {{P}_{scat}}({{x}_{1}},{{y}_{1}})<{{P}_{total}}({{x}_{1}},{{y}_{1}})<{{P}_{obj}}({{x}_{1}},{{y}_{1}})\text{ }({{x}_{1}},{{y}_{1}})\in \text{Region 1} \\ {{P}_{scat}}({{x}_{2}},{{y}_{2}})>{{P}_{total}}({{x}_{2}},{{y}_{2}})>{{P}_{obj}}({{x}_{2}},{{y}_{2}})\text{ }({{x}_{2}},{{y}_{2}})\in \text{Region 2} \\ \end{matrix} \right.$$
Relying solely on polarized or non-polarized light for underwater imaging cannot obtain the complete reflected light. For the underwater image with different DoLP objects, extracting the features simultaneously from polarized and non-polarized images is necessary to restore the object reflected light as much as possible. Therefore, the weighted fusion algorithm is used to merge the polarized light image and non-polarized light image, which can be formulated as
$${{{{I}'}_{fused}}(x,y)=\left( \alpha (x,y){{P}_{total}}(x,y)+\beta (x,y)\left( 1-{{P}_{total}}(x,y) \right) \right){{{I}'}_{total}}(x,y)}$$
where, ${{{I}'}_{fused}}(x,y)$ denotes the fused image, ${{P}_{total}}(x,y){{{I}'}_{total}}(x,y)$ and $\left( 1-{{P}_{total}}(x,y) \right){{{I}'}_{total}}(x,y)$ are the polarized and non-polarized light images, and $\alpha (x,y)$ and $\beta (x,y)$ represent their respective weighting factors. Furthermore, the determination of the weight values $\alpha$ and $\beta$ should meet the following principles
  • (1) After homogenizing the illumination distribution, the variation range of the intensity and DoLP of the backscattered light is small. Therefore, we consider that the DoLP of the reflected light positively correlates with the DoLP of the total light.
  • (2) For the object with high DoLP, the reflected light is mainly concentrated in the polarized light image, so the weight $\alpha$ of the polarized light image is significantly greater than the weight $\beta$.
  • (3) For the object with low DoLP, the reflected light is mainly concentrated in the non-polarized light image, so the weight $\beta$ of the non-polarized light image is significantly greater than the weight $\alpha$.
  • (4) The value of $\alpha$ is positively correlated with the DoLP of reflected light, while the value of $\beta$ is negatively correlated with the DoLP of reflected light.
  • (5) The average value of the DoLP of the total light is taken as the threshold to distinguish whether the DoLP of the reflected light of the object is high or low.

To sum up, the steps of polarized and non-polarized image fusion can be summarized as follows: Firstly, the mean DoLP over all pixels, ${{\bar {P}}_{total}}$, is computed as the threshold:

$${{{\bar{P}}_{total}}=\frac{\sum\limits_{x=1}^{W}{\sum\limits_{y=1}^{H}{{{P}_{total}}(x,y)}}}{W\cdot H}}$$
Then, the weight values $\alpha (x,y)$ and $\beta (x,y)$ can be obtained by the average DoLP and the DoLP of each pixel as Eq. (17)
$$\left\{ \begin{matrix} \alpha (x,y)=\frac{{{P}_{total}}(x,y)}{{{{\bar{P}}}_{total}}} \\ \beta (x,y)=\frac{{{{\bar{P}}}_{total}}}{{{P}_{total}}(x,y)} \\ \end{matrix} \right.$$
As shown in Fig. 4(a), the two curves of $\alpha (x,y)$ and $\beta (x,y)$ intersect at the point $({{\bar {P}}_{total}},1)$, which is the threshold for distinguishing high and low DoLP objects. For pixels with DoLP between 0 and ${{\bar {P}}_{total}}$, the value of $\alpha (x,y)$ is less than 1 and the value of $\beta (x,y)$ is greater than 1, so the polarized part is suppressed and the non-polarized part is amplified in the weighted fusion result. For pixels with DoLP between ${{\bar {P}}_{total}}$ and 1, the value of $\alpha (x,y)$ is greater than 1 and the value of $\beta (x,y)$ is less than 1, so the polarized part is amplified and the non-polarized part is suppressed. However, when ${{P}_{total}}(x,y)$ is close to 0, the weight value $\beta (x,y)$ increases rapidly over a small range, which may lead to abnormal brightness in the final fused image. Therefore, as shown in Eq. (18) and Fig. 4(b), the offset factor $\delta$ is added to the denominator of $\beta (x,y)$ to shift the curve left along the horizontal axis and eliminate this influence on the weighted fusion result. Thus, the weights calculated by Eq. (18) can be applied to underwater imaging with objects of different DoLP.
$$\left\{ \begin{matrix} \alpha (x,y)=\frac{{{P}_{total}}(x,y)}{{{{\bar{P}}}_{total}}} \\ \beta (x,y)=\frac{{{{\bar{P}}}_{total}}}{{{P}_{total}}(x,y)+\delta } \\ \end{matrix} \right.$$
Combining Eq. (13) and Eq. (18), the final underwater image is computed as
$${{{{I}'}_{enhan}}(x,y)=\left( \alpha (x,y){{P}_{total}}(x,y)+\beta (x,y)\left( 1-{{P}_{total}}(x,y) \right) \right){{{I}'}_{unif}}(x,y)}$$
Finally, the pixel value is mapped to the range from zero to one based on the normalization algorithm to display the result better. The formula can be written as
$${{{{I}'}_{norm}}(x,y)=\frac{{{{{I}'}}_{enhan}}(x,y)-\min ({{{{I}'}}_{enhan}}(x,y))}{\max ({{{{I}'}}_{enhan}}(x,y))-\min ({{{{I}'}}_{enhan}}(x,y))}}$$
where ${{{I}'}_{norm}}(x,y)$ denotes the normalized enhanced image.
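The fusion and normalization steps of Eqs. (16)–(20) can be sketched compactly (our own NumPy sketch, not the released Code 1; `delta` here is the offset of Eq. (18), set by hand for illustration):

```python
import numpy as np

def polarization_weighted_fusion(I_unif, P_total, delta=0.1):
    """Eqs. (16)-(20): fuse polarized and non-polarized parts of the
    uniform-illumination image with per-pixel DoLP-dependent weights,
    then min-max normalize the result to [0, 1]."""
    P_bar = P_total.mean()                 # Eq. (16): mean DoLP threshold
    alpha = P_total / P_bar                # Eq. (18): polarized weight
    beta = P_bar / (P_total + delta)       # Eq. (18): non-polarized weight
    fused = (alpha * P_total + beta * (1.0 - P_total)) * I_unif  # Eq. (19)
    return (fused - fused.min()) / (fused.max() - fused.min())   # Eq. (20)
```

Because of the final normalization, the output always spans the full displayable range regardless of the absolute intensity scale of the input.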


Fig. 4. (a) The curve graph corresponding to Eq. (17); (b) The curve graph corresponding to Eq. (18).


3.4 Setting principle of parameters $\sigma$ and $\delta$

It should be noted that the method proposed in this paper has two input parameters, $\sigma$ and $\delta$: $\sigma$ is the smoothness parameter of the Gaussian filtering in Sec. 3.2, and $\delta$ is the DoLP offset of the weighted fusion in Sec. 3.3. In this section, we discuss the influence of these two parameters on the imaging effect and the principles for setting their values.

As shown in Eq. (10), frequency domain filtering is used to extract the low-frequency component of the image as the incident light component. Figures 5(a) to (d) show the spectra for different values of $\sigma$. As the value increases, the spectrum's peak becomes thinner and higher, which means that the Gaussian filter has a lower cut-off frequency and more image details are filtered out of the low-frequency component. This trend is also indicated by the normalized images and the SSIM values between the low-frequency components and the original underwater images for different $\sigma$ values, shown in Figs. 5(e) to (h). Therefore, a higher $\sigma$ value helps retain more image details in the high-frequency component. However, as shown in Figs. 5(i) to (l), the thinner and higher spectrum peak causes most of the original image brightness to be separated into the low-frequency component and homogenized in subsequent processing, which is not conducive to enhancing the contrast of the reflected light image. Therefore, the image brightness, unevenness, and blur must be considered comprehensively to select an optimal value of $\sigma$. Generally, this value is set in the range of 20 to 120.


Fig. 5. (a)-(d) The spectrum maps of $G(x,y)$ with $\sigma =20$,40,60 and 80; (e)-(h) The normalized low-frequency components by Gaussian filtering; (i)-(l) The intensity distribution of low-frequency components.


In Sec. 3.3, the offset $\delta$ is introduced to reduce the weight values of pixels with low DoLP in the non-polarized image during polarization-weighted fusion. Therefore, the value of $\delta$ is related to the DoLP distribution and the minimum DoLP value of the original underwater image. The extreme weight values in Eq. (18) can then be rewritten as

$$\left\{ \begin{matrix} {{\alpha }_{\max }}=\frac{{{P}_{\max }}(x,y)}{{{{\bar{P}}}_{total}}} \\ {{\beta }_{\max }}=\frac{{{{\bar{P}}}_{total}}}{{{P}_{\min }}(x,y)+\delta } \\ \end{matrix} \right.$$
where ${{\alpha }_{\max }}$ and ${{\beta }_{\max }}$ are the maximum weight values of the polarized and non-polarized images, and ${{P}_{\max }}(x,y)$ and ${{P}_{\min }}(x,y)$ are the maximum and minimum DoLP values of the original image. To ensure that the maximum weight values of the two images are at the same level, the calculation formula of $\delta$ can be expressed as
$${\delta =\frac{{{({{{\bar{P}}}_{total}})}^{2}}}{{{P}_{\max }}(x,y)}-{{P}_{\min }}(x,y)}$$

According to Eq. (22), the value of $\delta$ ranges from 0 to 1 and is below 0.5 in most cases.
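Equation (22) follows directly from setting ${{\alpha }_{\max }}={{\beta }_{\max }}$ in Eq. (21), which can be verified numerically (a minimal sketch; `delta_offset` is our own name):

```python
import numpy as np

def delta_offset(P_total):
    """Eq. (22): choose the offset so that the maximum weights of the
    polarized and non-polarized images are at the same level."""
    P_bar = P_total.mean()
    return P_bar ** 2 / P_total.max() - P_total.min()
```

Substituting this $\delta$ back into Eq. (21) makes $\beta_{\max}=\bar{P}_{total}/({{P}_{\min }}+\delta )$ equal $\alpha_{\max}={{P}_{\max }}/\bar{P}_{total}$ exactly.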

4. Experiments and results

4.1 Experimental setup

Simulated underwater imaging experiments are designed in this section to test the performance of the proposed method. The setup of the experimental system is shown in Fig. 6. The imaging device is a mono camera (Basler acA640-90gm) with $\text {450}\times \text {450}$ pixels and 8-bit depth, and a rotating linear polarizer is placed in front of the camera as an analyzer. A red LED (central wavelength 632.8 nm) is chosen as the lighting source, and the illumination is modulated by a diaphragm and a linear polarizer (Thorlabs LPVISC100-MP2). The light produced by the LED has the same intensity in every vibration direction, so it can be regarded as a natural light source. A linear polarizer placed in front of such a source produces linearly polarized light whose intensity is independent of the polarizer's orientation, so changing the polarization direction of the linear polarizer does not affect the intensity and DoLP of the incident light illuminating the underwater scene. This paper uses polarization-weighted fusion to achieve underwater imaging based on the intensity and DoLP of the original underwater image. The DoLP of backscattered light is mainly affected by the scattering angle and is not sensitive to the angle of polarization (AoP) of the incident light. Therefore, incident light with different polarization directions can hardly affect the DoLP of the backscattered or reflected light; in other words, the influence of incident light with different AoPs on the experimental results is not apparent.


Fig. 6. Experimental system of underwater polarization imaging.


We use a transparent glass sink filled with water and make the water turbid by blending clear water with milk to simulate the turbid underwater imaging environment. The camera is then used to shoot the objects illuminated with polarized light through the turbid water. We put a polarizer in front of the camera, with an arbitrary initial orientation (set as 0$^{\circ }$). According to Ref. [26], 12 images are captured at 15$^{\circ }$ intervals to calculate the DoLP of each pixel so as to reduce the interference of image noise and polarization angle.
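One common way to recover per-pixel intensity and DoLP from such a rotating-analyzer stack is to fit $I(\theta )=\tfrac{{{I}_{0}}}{2}\left( 1+P\cos (2\theta -2\varphi ) \right)$ via its discrete Fourier coefficients at frequency 2 (our own sketch of this idea; the exact periodic integration of Ref. [26] may differ in detail):

```python
import numpy as np

def dolp_from_rotating_polarizer(stack, angles_deg):
    """Fit I(theta) = (I0/2) * (1 + P*cos(2*theta - 2*phi)) per pixel from
    a stack of shape (N, H, W) taken at evenly spaced analyzer angles."""
    th = np.deg2rad(np.asarray(angles_deg, dtype=float))[:, None, None]
    N = len(angles_deg)
    a0 = stack.mean(axis=0)                            # = I0 / 2
    a2 = 2.0 / N * (stack * np.cos(2 * th)).sum(axis=0)
    b2 = 2.0 / N * (stack * np.sin(2 * th)).sum(axis=0)
    dolp = np.hypot(a2, b2) / np.maximum(a0, 1e-12)
    return 2.0 * a0, dolp                              # intensity, DoLP
```

Averaging many evenly spaced angles cancels the dependence on the unknown AoP $\varphi$ and suppresses uncorrelated image noise.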

4.2 Evaluation metrics

In this section, the values of Enhancement Measure Evaluation (EME) and Peak Signal-to-Noise Ratio (PSNR) are calculated to quantitatively compare the underwater imaging effects of different methods. The value of EME is directly related to the local contrast of the image [27], and is defined as Eq. (23).

$${EME=\frac{1}{M \cdot N}\sum_{k=1}^{M}{\sum_{l=1}^{N}{20\log \frac{{{I}_{\max ;k,l}}}{{{I}_{\min ;k,l}}}}}}$$
where, $M\cdot N$ denotes the number of divided image blocks, $k$ and $l$ index the image blocks, and ${{I}_{\max ;k,l}}$ and ${{I}_{\min ;k,l}}$ represent the maximum and minimum gray values of the pixels in block $(k,l)$, respectively.
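Equation (23) can be computed block-wise as follows (our own sketch; a small `eps` avoids division by zero and `log10(0)` in dark blocks, which the formula itself leaves unspecified):

```python
import numpy as np

def eme(img, block=8, eps=1e-6):
    """Eq. (23): Enhancement Measure Evaluation, averaging the log-ratio
    of max/min gray values over non-overlapping block x block tiles."""
    H, W = img.shape
    vals = []
    for i in range(0, H - H % block, block):
        for j in range(0, W - W % block, block):
            blk = img[i:i + block, j:j + block]
            vals.append(20 * np.log10((blk.max() + eps) / (blk.min() + eps)))
    return float(np.mean(vals))
```

A perfectly flat image scores 0, and higher local contrast yields a larger EME, matching the metric's intended use as a contrast measure.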

The value of PSNR is estimated based on the clear and restored images and used to measure the noise level of images [28]. PSNR is defined as

$${PSNR=10{{\log }_{10}}\left( \frac{{{({{2}^{n}}-1)}^{2}}}{MSE} \right)}$$
where $MSE$ is the mean square error between the enhanced image and the reference image, and $n$ is the number of bits per pixel, which is 8 in this paper. The calculation of $MSE$ can be expressed as
$${MSE=\frac{1}{H\cdot W}\sum_{i=1}^{H}{\sum_{j=1}^{W}{{{(X(i,j)-Y(i,j))}^{2}}}}}$$
where $X(i,j)$ and $Y(i,j)$ represent the pixel values of the reference image and the enhanced image at position $(i,j)$, respectively. The clear image is defined as the reference image in this paper.
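Equations (24) and (25) translate directly into code (a minimal sketch; the infinite value for identical images is a convention we add, since $MSE=0$ makes Eq. (24) undefined):

```python
import numpy as np

def psnr(reference, enhanced, bits=8):
    """Eqs. (24)-(25): peak signal-to-noise ratio for n-bit images,
    computed from the mean square error against the reference image."""
    mse = np.mean((reference.astype(float) - enhanced.astype(float)) ** 2)
    if mse == 0.0:
        return float("inf")
    peak = (2 ** bits - 1) ** 2
    return 10 * np.log10(peak / mse)
```

For 8-bit images the worst case (all-zero versus all-255) gives 0 dB, and smaller errors give correspondingly higher values.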

4.3 Comparative experiments with traditional methods

The first experiment compares the underwater imaging performance of the proposed method with that of traditional methods. Three additional methods are used for active underwater imaging: the periodic polarization integration (PPI) approach [26], the contrast limited adaptive histogram equalization (CLAHE) approach [29], and the mutual information-based polarization imaging (MI) method [14]. PPI is an underwater imaging method based on the polarization difference proposed in our previous work, and CLAHE is a classic image enhancement algorithm. The MI-based method estimates the DoLP values of the backscattered and reflected light by minimizing the mutual information between the backscattered and reflection images, which can be expressed as

$${\left( P_{scat}^{optimal},P_{obj}^{optimal} \right)=\arg \underset{{{{\hat{P}}}_{scat}},{{{\hat{P}}}_{obj}}\in [0,1]}{\mathop{\min }}\,\left\{ MI\left[ \tilde{B}({{{\hat{P}}}_{scat}},{{{\hat{P}}}_{obj}}),\tilde{T}({{{\hat{P}}}_{scat}},{{{\hat{P}}}_{obj}}) \right] \right\}}$$

In Eq. (26), $\tilde {B}({{\hat {P}}_{scat}},{{\hat {P}}_{obj}})$ and $\tilde {T}({{\hat {P}}_{scat}},{{\hat {P}}_{obj}})$ denote the calculated backscattered image and object reflection image when the DoLP of the backscattered light is ${{\hat {P}}_{scat}}$ and the DoLP of the object reflected light is ${{\hat {P}}_{obj}}$. $MI\left [ \tilde {B}({{{\hat {P}}}_{scat}},{{{\hat {P}}}_{obj}}),\tilde {T}({{{\hat {P}}}_{scat}},{{{\hat {P}}}_{obj}}) \right ]$ represents the mutual information between $\tilde {B}({{\hat {P}}_{scat}},{{\hat {P}}_{obj}})$ and $\tilde {T}({{\hat {P}}_{scat}},{{\hat {P}}_{obj}})$. Finally, $P_{scat}^{optimal}$ and $P_{obj}^{optimal}$ are used to calculate the object reflected light image based on Eq. (3).
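A brute-force sketch of this optimization, assuming a histogram-based MI estimator and a simple grid search; the bin count, grid resolution, and function names are our illustrative choices, not those of Ref. [14]:

```python
import numpy as np

def mutual_info(a, b, bins=32):
    """Histogram-based mutual information between two images (nats)."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)    # marginal of a, shape (bins, 1)
    py = pxy.sum(axis=0, keepdims=True)    # marginal of b, shape (1, bins)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def mi_descatter(i_total, p_total, steps=11):
    """Grid search over (P_scat, P_obj) minimizing MI between the
    backscatter estimate B and the reflection estimate T (Eqs. 3, 26)."""
    best = (np.inf, None, None)
    for p_s in np.linspace(0.01, 0.99, steps):
        for p_o in np.linspace(0.01, 0.99, steps):
            if abs(p_s - p_o) < 1e-3:
                continue                   # Eq. (3) is singular when P_scat == P_obj
            b = (p_total - p_o) / (p_s - p_o) * i_total   # backscatter estimate
            t = i_total - b                                # reflection estimate
            mi = mutual_info(b, t)
            if mi < best[0]:
                best = (mi, p_s, p_o)
    return best[1], best[2]
```

The idea is that when the DoLP estimates are correct, the separated backscatter and reflection images become statistically independent, so their mutual information is minimized; a finer grid or a continuous optimizer can replace the double loop.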

As shown in Fig. 7, we mix different volumes of milk into 6250 ml of clear water to simulate different water turbidity levels. A metal coin with high polarization properties and a plastic disc with high depolarization properties are placed in the turbid water.

Fig. 7. Underwater image enhancement results of different methods.

The first rows show the original underwater images and the results of the different methods. In terms of subjective effect, all four methods improve on the original underwater images, and the proposed method outperforms the others in both contrast and clarity.

The second rows show the grayscale value distributions of all the images, which demonstrate the suppression effect of the proposed method on non-uniform illumination. The grayscale distribution of the original underwater image is bright in the central region and dark in the marginal region. The proposed method significantly reduces the brightness difference between the central and marginal regions, homogenizing the brightness of the whole image.

The third and fourth rows show enlarged views of the underwater objects, which demonstrate the clarity enhancement of the proposed method. Regions with rich textures on objects with different polarization properties are selected and enlarged to compare the enhancement of texture sharpness. For the object with high polarization properties (Row 3), the words on the coin are almost indistinguishable in the original image. All three comparison methods can enhance the clarity of the text to a certain extent, and the proposed method shows the most outstanding enhancement capability. For the object with high depolarization properties (Row 4), the texture on the plastic disc is wholly submerged in the backscattered light and cannot be distinguished in the original image. Because a large amount of non-polarized reflected light is lost, this part of the image cannot be restored and enhanced by the traditional polarization difference method. By contrast, the proposed method effectively enhances the main texture of the object with high depolarization properties.

The values of EME and PSNR are reported in Table 1, and the best indicators are marked in red. Firstly, compared to the clear images, the average EME value of the original underwater images decreases by more than 90%, and their average PSNR value is less than 10 dB. These results indicate that the backscattered light seriously reduces the contrast and sharpness of underwater images. Moreover, the average EME value of the PPI method is 1.1105, the lowest of the four methods. This is mainly because the PPI method is not suited to scenes containing objects with multiple polarization properties, since the reflected light of highly depolarizing objects is eliminated together with the backscattered light. CLAHE is the most commonly used image enhancement algorithm based on histogram equalization, and its results are better than those of the PPI algorithm when the scene contains objects with different polarization properties. However, our method has obvious advantages over CLAHE in improving both EME and PSNR. Finally, our method is also superior to the MI method in final image quality and quantitative indicators. It is worth mentioning that Ref. [14] first provided a detailed and comprehensive analysis of this problem in active polarization imaging; limited by our experimental conditions, the mutual information-based imaging compared in this paper covers only part of that work.

Table 1. Quantitative results of different methods.

The values of $\delta$ and $\sigma$ directly affect the result of underwater imaging. In Sec. 3.4, we introduce the calculation method of $\delta$ and analyze the relationship between $\sigma$ and image smoothing. In much of the related literature, input parameters are selected by contrast optimization, e.g., in Refs. [14], [17], [19], and [21]. Since the input parameter $\sigma$ in this paper is a global variable, we traverse $\sigma$ with a fixed step size within a set range and select the value that yields the best EME of the final underwater image as the input parameter. In Scene 1, the value of $\sigma$ is 50 in the low-turbidity water and 80 in the high-turbidity water. In Scene 2, the value of $\sigma$ is 55 in the low-turbidity water and 100 in the high-turbidity water.
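The parameter sweep described above can be sketched as a small helper. Here `enhance_fn` stands in for the full imaging pipeline (not reproduced here) and `eme_fn` for the contrast score; both names are our own:

```python
import numpy as np

def select_sigma(enhance_fn, sigmas, eme_fn):
    """Traverse sigma with a fixed step and keep the value whose final
    enhanced image scores the highest EME, as described in Sec. 4.3."""
    scores = {s: eme_fn(enhance_fn(s)) for s in sigmas}
    return max(scores, key=scores.get)

# Toy check: a stand-in "pipeline" whose contrast peaks at sigma = 50,
# scored by the max-min range as a simple contrast proxy.
best = select_sigma(lambda s: np.array([[0.0, 100.0 - abs(s - 50)]]),
                    range(10, 101, 10),
                    lambda im: float(im.max() - im.min()))
print(best)   # 50
```

Because $\sigma$ is a single global scalar, this exhaustive one-dimensional search is cheap; the step size simply trades search time against how finely the optimum is located.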

Therefore, the results of comparative experiments verify that the proposed method effectively suppresses the influence of non-uniform illuminated fields on underwater imaging and enhances underwater images containing objects with multiple polarization properties.

4.4 Ablation experiments of the proposed method

As shown in Fig. 1, the proposed method can be divided into illumination homogenization and polarization-weighted fusion. To illustrate the necessity of each operation, we compare the original underwater images with four enhanced images produced by different combinations of operations; the results are shown in Fig. 8. The normalized images are the enhancement results based on Eq. (20), which maps the pixel values into the range [0,1]. The homogenized images are the processing results of the original underwater image based on Eq. (13) and Eq. (20). The weighted images are the processing results of the original underwater image based on Eq. (15) and Eq. (19). The final enhanced images are the results of the full processing flow shown in Fig. 1. The correspondence between the different results and operations is shown in Table 2, where the symbol $\bigcirc$ indicates that the operation is applied and the symbol $\times$ indicates that it is omitted in the underwater imaging process.
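For reference, the polarization-weighted fusion and normalization steps can be sketched as follows. This is a simplified reading of the fusion and normalization equations; the clipping constant is our own safeguard, and the exact weight definitions should be checked against the paper:

```python
import numpy as np

def polarization_weighted_fusion(i_total, p_total):
    """DoLP-weighted fusion followed by min-max normalization (Eq. 20):
    high-DoLP pixels favor the polarized component, low-DoLP pixels the
    non-polarized one, so neither kind of object loses its reflected light."""
    p = np.clip(p_total, 1e-6, 1.0)            # avoid division by zero (our safeguard)
    p_bar = p.mean()                           # scene-average DoLP
    delta = p_bar ** 2 / (p.max() * p.min())   # regularizer delta (Sec. 3.4)
    alpha = p / p_bar                          # weight of the polarized component
    beta = p_bar / (p + delta)                 # weight of the non-polarized component
    fused = (alpha * p + beta * (1 - p)) * i_total      # weighted fusion (Eq. 15)
    return (fused - fused.min()) / (fused.max() - fused.min())  # Eq. (20)
```

Applying only the normalization, only the homogenization, or only this weighting reproduces the four ablation variants compared in Fig. 8 and Table 2.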

Fig. 8. Performance of combining different operations in different scenes.

Table 2. Corresponding relationship between enhanced results and image processing operations.

As shown in Fig. 8, the contrast of the normalized images is improved to a certain extent, but the central region evidently remains overly bright. Comparing the homogenized images with the normalized ones shows that the homogenization operation significantly improves the contrast of the central region. Moreover, compared with the homogenized images, the overall contrast of the weighted images is significantly improved, while the suppression of non-uniformity in the central region is not apparent. The final enhanced images processed by the proposed method show a remarkable improvement in both non-uniform brightness suppression and overall contrast enhancement. The quantitative results in Table 3 confirm this analysis.

Table 3. Quantitative results of the enhanced images with different operations.

The EME and PSNR of the underwater enhanced images are shown in Table 3, and the best indicators are marked in red. Firstly, as the processing flow advances, the EME and PSNR of the processing results improve significantly from low- to high-turbidity media in most cases, which reveals that each step of the proposed method contributes noticeably to contrast enhancement and noise suppression. Moreover, the final enhanced images have the best EME and PSNR, indicating that the complete method performs better than the weighted fusion or homogenization results alone. Considering the final enhanced results in Fig. 8 and Table 3, each component of the proposed method clearly contributes to the final result of underwater polarization imaging.

5. Conclusion

In this paper, we focus on eliminating the influence of non-uniform illumination distribution on underwater imaging and realizing underwater active polarization imaging, without prior information, for scenes containing objects with multiple polarization characteristics. Exploiting the significant difference between the illumination distribution and the object reflection properties in the frequency domain, the proposed method divides the whole image into incident and reflected components by frequency-domain filtering. The low-frequency component of the whole image is extracted as the incident component; after homogenization, it is multiplied by the reflected component to obtain the underwater image under simulated uniform illumination. Moreover, the reflected light distributions in different polarization images are analyzed based on the DoLP and intensity of each pixel. To ensure that most of the object reflected light is retained, we design a weight calculation method for the object and use weighted fusion to obtain the enhanced image. Finally, both qualitative and quantitative experimental results demonstrate that the proposed method can suppress the influence of non-uniform illumination distribution and obtain underwater images with high definition and high contrast. It is worth noting that the proposed method does not need to extract the information of available areas in the scene and can recover underwater images containing objects with different properties. Therefore, it can significantly simplify the operation procedure of underwater optical imaging and has promising application prospects for underwater rescue and ocean exploration.

Funding

National Natural Science Foundation of China (62001234); Natural Science Foundation of Jiangsu Province (BK20200487); Shanghai Aerospace Science and Technology Innovation Foundation (SAST2020-071); Fundamental Research Funds for the Central Universities (JSGP202102); Equipment Pre-research Weapon Industry Application Innovation Project (627010402).

Disclosures

The authors declare no conflicts of interest.

Data Availability

Data that support the findings of this study are available in Code 1 [30].

References

1. J. S. Jaffe, “Underwater optical imaging: The past, the present, and the prospects,” IEEE J. Oceanic Eng. 40(3), 683–700 (2015). [CrossRef]  

2. Y. Shen, C. Zhao, Y. Liu, S. Wang, and F. Huang, “Underwater optical imaging: Key technologies and applications review,” IEEE Access 9, 85500–85514 (2021). [CrossRef]  

3. D. Chen, S. Gu, and S.-C. Chen, “Study of optical modulation based on binary masks with finite pixels,” Opt. Lasers Eng. 142, 106604 (2021). [CrossRef]  

4. J. N. Mait, G. W. Euliss, and R. A. Athale, “Computational imaging,” Adv. Opt. Photonics 10(2), 409–483 (2018). [CrossRef]  

5. R. Schettini and S. Corchs, “Underwater image processing: State of the art of restoration and image enhancement methods,” EURASIP J. Adv. Signal Process. 2010(1), 746052 (2010). [CrossRef]  

6. E. Trucco and A. Olmos-Antillon, “Self-tuning underwater image restoration,” IEEE J. Oceanic Eng. 31(2), 511–519 (2006). [CrossRef]  

7. P. Han, F. Liu, Y. Wei, and X. Shao, “Optical correlation assists to enhance underwater polarization imaging performance,” Opt. Lasers Eng. 134, 106256 (2020). [CrossRef]  

8. E. A. McLean, H. R. Burris, and M. P. Strand, “Short-pulse range-gated optical imaging in turbid water,” Appl. Opt. 34(21), 4343–4351 (1995). [CrossRef]  

9. S. Narasimhan, S. Nayar, B. Sun, and S. Koppal, “Structured light in scattering media,” in Tenth IEEE International Conference on Computer Vision (ICCV’05) Volume 1, (IEEE, 2005), pp. 420–427.

10. J. S. Jaffe, “Performance bounds on synchronous laser line scan systems,” Opt. Express 13(3), 738–748 (2005). [CrossRef]  

11. S. Yoon, M. Kim, M. Jang, Y. Choi, W. Choi, S. Kang, and W. Choi, “Deep optical imaging within complex scattering media,” Nat. Rev. Phys. 2(3), 141–158 (2020). [CrossRef]  

12. W. Gong and S. Han, “Correlated imaging in scattering media,” Opt. Lett. 36(3), 394–396 (2011). [CrossRef]  

13. Y. Schechner and N. Karpel, “Recovery of underwater visibility and structure by polarization analysis,” IEEE J. Oceanic Eng. 30(3), 570–587 (2005). [CrossRef]  

14. T. Treibitz and Y. Y. Schechner, “Active polarization descattering,” IEEE Trans. Pattern Anal. Mach. Intell. 31(3), 385–399 (2009). [CrossRef]  

15. H. Li and X. Wu, “Densefuse: A fusion approach to infrared and visible images,” IEEE Trans. on Image Process. 28(5), 2614–2623 (2019). [CrossRef]  

16. H. Wang, H. Hu, J. Jiang, J. Li, X. Li, W. Zhang, Z. Cheng, and T. Liu, “Polarization differential imaging in turbid water via mueller matrix and illumination modulation,” Opt. Commun. 499, 127274 (2021). [CrossRef]  

17. H. Wang, H. Hu, J. Jiang, X. Li, W. Zhang, Z. Cheng, and T. Liu, “Automatic underwater polarization imaging without background region or any prior,” Opt. Express 29(20), 31283–31295 (2021). [CrossRef]  

18. Y. Zhang, Q. Cheng, Y. Zhang, and F. Han, “Image-restoration algorithm based on an underwater polarization imaging visualization model,” J. Opt. Soc. Am. A 39(5), 855–865 (2022). [CrossRef]  

19. Y. Zhao, W. He, H. Ren, Y. Li, and Y. Fu, “Polarization descattering imaging through turbid water without prior knowledge,” Opt. Lasers Eng. 148, 106777 (2022). [CrossRef]  

20. H. Hu, L. Zhao, X. Li, H. Wang, and T. Liu, “Underwater image recovery under the nonuniform optical field based on polarimetric imaging,” IEEE Photonics J. 10(1), 1–9 (2018). [CrossRef]  

21. B. Huang, T. Liu, H. Hu, J. Han, and M. Yu, “Underwater image recovery considering polarization effects of objects,” Opt. Express 24(9), 9826–9838 (2016). [CrossRef]  

22. F. Liu, Y. Wei, P. Han, K. Yang, L. Bai, and X. Shao, “Polarization-based exploration for clear underwater vision in natural illumination,” Opt. Express 27(3), 3629–3641 (2019). [CrossRef]  

23. M. H. Smith, “Optimization of a dual-rotating-retarder mueller matrix polarimeter,” Appl. Opt. 41(13), 2488–2493 (2002). [CrossRef]  

24. J. S. Tyo, “Design of optimal polarimeters: maximization of signal-to-noise ratio and minimization of systematic error,” Appl. Opt. 41(4), 619–630 (2002). [CrossRef]  

25. F. Liu, P. Han, Y. Wei, K. Yang, S. Huang, X. Li, G. Zhang, L. Bai, and X. Shao, “Deeply seeing through highly turbid water by active polarization imaging,” Opt. Lett. 43(20), 4903–4906 (2018). [CrossRef]  

26. J. Wang, M. Wan, G. Gu, W. Qian, K. Ren, Q. Huang, and Q. Chen, “Periodic integration-based polarization differential imaging for underwater image restoration,” Opt. Lasers Eng. 149, 106785 (2022). [CrossRef]  

27. D. R. I. M. Setiadi, “Psnr vs ssim: imperceptibility quality assessment for image steganography,” Multimed. Tools Appl. 80(6), 8423–8444 (2021). [CrossRef]  

28. M. A. Qureshi, A. Beghdadi, and M. Deriche, “Towards the design of a consistent image contrast enhancement evaluation measure,” Sig. Processing: Image Commun. 58, 212–227 (2017). [CrossRef]  

29. D. Garg, N. K. Garg, and M. Kumar, “Underwater image enhancement using blending of clahe and percentile methodologies,” Multimed. Tools Appl. 77(20), 26545–26561 (2018). [CrossRef]  

30. J. Wang, M. Wan, X. Cao, X. Zhang, G. Gu, and Q. Chen, “Active non-uniform illumination-based underwater polarization imaging method for objects with complex polarization properties,” GitHub (2022), https://github.com/MinjieWan/ANI-Based-Underwater-Polarization-Imaging-Method-for-Objects-with-Complex-Polarization-Properties.

Supplementary Material (1)

Code 1: Active non-uniform illumination-based underwater polarization imaging method for objects with complex polarization properties.


