Variational model for simultaneous image denoising and contrast enhancement

Open Access

Abstract

The performance of contrast enhancement is degraded when input images are noisy. In this paper, we propose and develop a variational model for simultaneous image denoising and contrast enhancement. The idea is to formulate a variational approach containing an energy functional that adjusts the pixel values of an input image directly, so that the resulting histogram is redistributed to be uniform and the noise in the image is removed. In the proposed model, a histogram equalization term is considered for image contrast enhancement, a total variation term is incorporated to remove the noise of the input image, and a fidelity term is added to keep the structure and the texture of the input image. The existence of the minimizer and the convergence of the proposed algorithm are studied and analyzed. Experimental results are presented to show the effectiveness of the proposed model compared with existing methods in terms of several measures: average local contrast, discrete entropy, structural similarity index, measure of enhancement, absolute measure of enhancement, and second derivative like measure of enhancement.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

The purpose of contrast enhancement is to intensify the visibility of images and make them more suitable for other image processing applications. The histogram equalization (HE) method [1] is an effective and powerful technique for contrast enhancement. The fundamental principle of HE is to make the histogram of gray levels in an image uniform. The most popular way to achieve this is to apply a transformation function to an input image such that the contrast of the transformed image is enhanced. For example, the cumulative distribution function of the normalized histogram of the input image gray levels was selected to be the transformation function in global histogram equalization (GHE) [2]. A popular variational histogram equalization model was the constrained variational histogram equalization (CVHE) [3], which extended the variational definition of the GHE algorithm by adding a mean brightness constraint to formulate a functional optimization problem, the solution of which defined a new gray-level transformation function for histogram equalization. Contrast enhancement based on local information using histogram equalization was proposed by Wang and Ng in [4]: the authors formulated a variational approach containing an energy functional to determine a local transformation such that the histogram can be redistributed locally and the brightness of the transformed image can be preserved. In order to minimize the differences among the local transformations at nearby pixel locations, a spatial regularization of the transformation was also incorporated into the functional for the equalization process. In [5], an image pixel based histogram equalization model for image contrast enhancement was proposed. The idea was to formulate a variational model containing an energy functional that adjusts the pixel values of an input image directly so that the resulting histogram can be redistributed to be uniform. The target was an enhanced output image instead of a mapping computed by histogram equalization algorithms. In that model, a mean brightness term was incorporated to preserve the brightness of the input image, and a geometry constraint was also added to keep the geometric structure of the input image. However, the input image is assumed to be free of noise degradation, so the contrast enhancement performance is poor when noisy images are used. The Automatic Color Enhancement (ACE) method [6], introduced by Gatta et al. [7] and further developed by Rizzi et al. [8,9] and Bertalmio et al. [10], is another effective image enhancement method based on a simple model of the human visual system, and its enhancement process is consistent with perception. In [11,12], Nikolova et al. proposed a simple image enhancement algorithm (HPE) which conserves the hue and preserves the range (gamut) of an image in an optimal way. In [13], Ferradans et al. built a general variational framework (PLCE) to perform perceptual color correction and to handle the problem of local contrast enhancement.

On the other hand, image denoising aims to find the unknown true image $\mathbf {u}$ from a noisy image $\mathbf {f}$. However, such inverse problems are ill-posed, so a regularization technique must be used to make them well-posed. This idea was introduced in 1977 by Tikhonov and Arsenin [14]. In other words, we search for a $\mathbf {u}$ that best fits the data while having a small gradient, so that the noise is removed. The squared $L^2$ norm of the gradient allows us to remove the noise but unfortunately penalizes the gradients corresponding to edges too strongly [14]. Rudin, Osher, and Fatemi proposed to use the $L^1$ norm of the gradient of $\mathbf {u}$, also called the total variation (TV) [15,16], instead of the squared $L^2$ norm. There are many efficient numerical methods for the TV minimization problem, such as the dual method in [17], the second-order cone programming method in [18], the split Bregman method in [19,20], and the automatic algorithm in [21]. Many applications of total variation have been studied in the literature, such as blind deconvolution [22–27], image inpainting [28], image dehazing [29], and image enhancement [4,30,31].

To the best of our knowledge, there are only a few research works in the literature on simultaneous image denoising and contrast enhancement. The related work in this direction usually consists of two-step or multi-step methods. In [32], Irrera et al. introduced a patch-based filter for X-ray images. The filtered images were then used as oracles to define non-parametric noise containment maps which were applied in a contrast enhancement framework. In [33], a smooth base layer extracted by the BM3D filter and a detail layer extracted by the first-order differential of the inverted image were adaptively combined to get a noise-free and detail-preserved image. Then an adaptive enhancement parameter was adopted into the dark channel to enhance the contrast of the resulting image. In [34], Lim et al. categorized each pixel into one of two classes: noise-free or noisy. They then performed selective histogram equalization to enhance the contrast of the noise-free pixels only, and restored the missing values of the noisy pixels using the enhanced noise-free pixel values by employing a low-rank matrix completion scheme. In [35], an enhancement operator described by a graph Laplacian matrix was constructed and a minimization approach was used to denoise and enhance degraded images. In [36], a non-linear enhancement function was designed based on the local dispersion of the wavelet coefficients, and a noise reduction operation was performed by means of a shrinkage function.

In this paper, we propose and develop a variational model for simultaneous image contrast enhancement and denoising. The contribution of this paper is a variational approach containing an energy functional that adjusts the pixel values of an input image directly, so that the resulting histogram is redistributed to be uniform and the noise of the image is removed. In the proposed model, a histogram equalization term is considered for image contrast enhancement, a total variation term is incorporated to remove the noise of the input image, and a fidelity term is added to keep the structure and the texture of the input image. Theoretically, the existence of the minimizer and the convergence of the proposed model are given. Experimental results are reported to demonstrate that the performance of the proposed model is competitive with other testing models for several testing images.

The outline of this paper is as follows. In Section 2, we will describe the proposed model and present an efficient algorithm to solve it numerically. We will study the convergence of the proposed algorithm. In Section 3, we will give some numerical examples to illustrate the effectiveness of the proposed algorithm. The concluding remarks will be given in Section 4.

2. The proposed model

Let $\mathbf {u}$: $\Omega \rightarrow \mathbb {Z}$ and $\mathbf {f}$: $\Omega \rightarrow \mathbb {Z}$ denote the discrete versions of the output image and the input low-contrast, noisy image in the following discussion, where $\Omega$ is the image domain and $\mathbb {Z}$ is the set of integers. Assume that the gray value lies in $[l, L]$, where $l$ and $L$ denote the lower and upper bounds of the gray value ([0, 255] for an 8-bit image). Assume that $\lambda$ is an integer in $[l, L]$, and that $H_{\mathbf {u}}(\lambda ): [l, L]\cap \mathbb {Z}\rightarrow [0, 1]$ and $H_{\mathbf {c}}(\lambda ): [l, L]\cap \mathbb {Z} \rightarrow [0,1]$ are the cumulative histograms of the output image $\mathbf {u}$ and of an image with a uniform histogram, respectively, where

$$H_{\mathbf{u}}(\lambda)=\frac{1}{|\Omega|} \sum_{i\in \Omega} \chi_{[0, \lambda]}(\mathbf{u}(i)),$$
and
$$H_{\mathbf{c}}(\lambda)=\frac{\lambda-l}{L-l},$$
with
$$ \chi_{[0, \lambda]}(\mathbf{u}(i))=\begin{cases} 1, & \mathbf{u}(i)\in[0, \lambda]\\ 0, & \textrm{otherwise}. \end{cases} $$
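For concreteness, the two cumulative histograms can be computed as in the following NumPy sketch; the function names, the default gray-value bounds, and the use of NumPy are illustrative assumptions rather than part of the published (MATLAB) implementation.

```python
import numpy as np

def cumulative_histogram(u, l=0, L=255):
    """H_u(lambda): fraction of pixels of u with value <= lambda, for lambda = l, ..., L."""
    lam = np.arange(l, L + 1)
    # chi_[0,lambda](u(i)) summed over the image domain Omega, normalized by |Omega|
    return np.array([(u <= t).mean() for t in lam])

def uniform_cumulative_histogram(l=0, L=255):
    """H_c(lambda) = (lambda - l) / (L - l), the cumulative histogram of a uniform target."""
    lam = np.arange(l, L + 1)
    return (lam - l) / (L - l)
```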
The proposed energy functional is formulated as follows:
$$\begin{aligned} \min_{l\leq \mathbf{u}\leq L}\left\{ {\mathcal{E}}_1(\mathbf{u}) \equiv||\mathbf{u}-\mathbf{f}||_2^2 \right. & +\mu ||\mathbf{u}||_{TV}\\ &\left.+\frac{\alpha}{2}\sum_{\lambda=l}^{L} \big(H_\mathbf{u}(\lambda)-H_\mathbf{c}(\lambda)\big)^2\right\}, \end{aligned}$$
where,
$$|| \mathbf{u}||_{TV}=\begin{cases} \sum_{i,j}\big(|(\triangledown_x \mathbf{u})_{i,j}|+|(\triangledown_y \mathbf{u})_{i,j}|\big), & \textrm{anisotropic TV,}\\ \sum_{i,j}\sqrt{(\triangledown_x \mathbf{u})_{i,j}^2+(\triangledown_y \mathbf{u})_{i,j}^2}, & \textrm{isotropic TV.} \end{cases}$$
The first term is the data fitting term, which is added to ensure that the output image has textures similar to those of the input image. The second term is incorporated to remove the noise of the input image $\mathbf {f}$. The third term, which was proposed in [5], is used to enhance the contrast of the input image $\mathbf {f}$. $\mu$ and $\alpha$ are parameters to balance the three terms.
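To make the roles of the three terms concrete, the sketch below evaluates $\mathcal{E}_1(\mathbf{u})$ for a candidate image using the histogram helpers from the sketch above; the anisotropic TV variant with forward differences (zero difference at the last row/column) is an assumption made here purely for illustration.

```python
import numpy as np

def tv_anisotropic(u):
    """Anisotropic TV: sum of |grad_x u| + |grad_y u| with forward differences."""
    dx = np.diff(u, axis=1, append=u[:, -1:])   # last column difference is zero
    dy = np.diff(u, axis=0, append=u[-1:, :])   # last row difference is zero
    return np.abs(dx).sum() + np.abs(dy).sum()

def energy(u, f, mu, alpha, l=0, L=255):
    """Value of E_1(u) in Eq. (3): fidelity + TV + histogram-equalization terms."""
    fidelity = np.sum((u - f) ** 2)
    tv_term = mu * tv_anisotropic(u)
    Hu = cumulative_histogram(u, l, L)
    Hc = uniform_cumulative_histogram(l, L)
    hist_term = 0.5 * alpha * np.sum((Hu - Hc) ** 2)
    return fidelity + tv_term + hist_term
```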

Theorem 2.1 Eq. (3) has at least one solution for fixed $\alpha$ and $\mu$.

Proof 1 Note that if we set $\mathbf {u}$ to be the input $\mathbf {f}$, the energy functional is finite. Assume that $\{\mathbf {u^{k}}\}$ is a minimizing sequence of the energy in Eq. (3). Because $\triangledown _x$ and $\triangledown _y$ are two matrices with bounded spectral norms, the sequence $\{\mathbf {u^{k}}\}$ is bounded, and, up to a subsequence, there exists $\mathbf {u^{*}}$ such that $\mathbf {u^{k}}\rightarrow \mathbf {u}^{*}$. By combining the definition of $H_{\mathbf {u}}(\lambda )$ with the above convergence result, we can easily deduce that:

$$\begin{aligned}||\mathbf{u}^{k}-\mathbf{f}||_2^2+\frac{\alpha}{2}\sum_{\lambda=l}^{L} &\big(H_{\mathbf{u}^{k}}(\lambda)-H_\mathbf{c}(\lambda)\big)^2 \\ &\rightarrow||\mathbf{u}^{*}-\mathbf{f}||_2^2+\frac{\alpha}{2}\sum_{\lambda=l}^{L}\big(H_{\mathbf{u}^{*}}(\lambda)-H_\mathbf{c}(\lambda)\big)^2. \end{aligned}$$
Meanwhile, we can derive the following results about $\mathbf {u}^{*}$ by using the above convergence result of $\{\mathbf {u^{k}}\}$,
$$\mu || \mathbf{u^k}||_{TV} \rightarrow \mu||\mathbf{u}^{*}||_{TV},\ \textrm{and}\ l\leq\mathbf{u}^{*}\leq L.$$
This completes the proof.

2.1 The proposed algorithm

In this subsection, we will propose an efficient algorithm to solve the problem in Eq. (3). We first introduce two notations,

$$\|\triangledown \mathbf{u}\|_{\textrm{D}} = \| \mathbf{u}\|_{\textrm{TV}},\ \iota(\mathbf{v}):= \begin{cases}0, & l \leq \mathbf{v} \leq L,\\ +\infty, & \textrm{otherwise}. \end{cases}$$
where $\triangledown \mathbf {u}=(\triangledown _{x}\mathbf {u},\triangledown _{y}\mathbf {u})$ is the gradient operator. Then Eq. (3) can be rewritten as follows:
$$\begin{aligned} \min_{\mathbf{u}=\mathbf{v},\ \triangledown \mathbf{u} = \mathbf{z}}\left\{ {\mathcal{E}}_2(\mathbf{u}) \equiv \iota(\mathbf{v}) \right. &+ \|\mathbf{u}-{\mathbf{f}}\|_2^2+\mu \|\mathbf{z}\|_{\textrm{D}}\\ &\left.+\frac{\alpha}{2}\sum_{\lambda=l}^{L} \big(H_\mathbf{u}(\lambda)-H_\mathbf{c}(\lambda)\big)^2\right\}. \end{aligned}$$
For this optimization problem, we employ the alternating direction method of multipliers (ADMM) [37]. By attaching the Lagrange multipliers $\boldsymbol {\gamma }_1, \boldsymbol {\gamma }_2$ to the linear constraints, the augmented Lagrangian function is given as:
$$\begin{aligned}{\mathcal{L}}(\mathbf{u}, \mathbf{v}, \mathbf{z}, \boldsymbol{\gamma}_1, \boldsymbol{\gamma}_2)&=\iota(\mathbf{v})+\|\mathbf{u}-\mathbf{f}\|_2^{2}+\mu\|\mathbf{z}\|_\textrm{D}+\frac{\alpha}{2}\sum_{\lambda}\big( H_\mathbf{u}(\lambda)-H_\mathbf{c}(\lambda)\big)^{2}\\ &\quad+\frac{\beta}{2}\|\triangledown \mathbf{u}-\mathbf{z}\|_2^{2}+\langle\boldsymbol{\gamma}_1,\triangledown\mathbf{u}-\mathbf{z}\rangle+\frac{\beta}{2}\|\mathbf{u}-\mathbf{v}\|_2^{2}+\langle\boldsymbol{\gamma}_2,\mathbf{u}-\mathbf{v}\rangle. \end{aligned}$$
Here $\langle\cdot ,\cdot\rangle$ denotes the Euclidean inner product. Then the ADMM iterations are described in the following steps:

Algorithm 1

Step 1: Set $\mathbf {u}^{0}=\mathbf {f},\boldsymbol {\gamma }_1^0= \boldsymbol {\gamma }_2^0 = \mathbf {0}$ to be the initial data;

Step 2: At the $k$-th iteration,

  • Given $\mathbf {u}^{k}, \boldsymbol {\gamma }_1^{k}, \boldsymbol {\gamma }_2^{k}$, and compute $\mathbf {v}^{k+1},\mathbf {z}^{k+1},$ by solving:
    $$(\mathbf{v}^{k+1},\mathbf{z}^{k+1}) = argmin_{\mathbf{v},\mathbf{z}} {\mathcal{L}}(\mathbf{u}^{k}, \mathbf{v}, \mathbf{z}, \boldsymbol{\gamma}_1^{k}, \boldsymbol{\gamma}_2^{k}),$$
  • Given $\mathbf {v}^{k+1}, \mathbf {z}^{k+1},\boldsymbol {\gamma }_1^{k}, \boldsymbol {\gamma }_2^{k}$, and calculate $\mathbf {u}^{k+1}$ by solving:
    $$\mathbf{u}^{k+1}=argmin_{\mathbf{u}} {\mathcal{L}}(\mathbf{u}, \mathbf{v}^{k+1}, \mathbf{z}^{k+1}, \boldsymbol{\gamma}_1^{k}, \boldsymbol{\gamma}_2^{k}),$$
  • Update $\boldsymbol {\gamma }_1^{k+1}, \boldsymbol {\gamma }_2^{k+1}$ by using:
    $$\boldsymbol{\gamma}_1^{k+1}=\boldsymbol{\gamma}_1^{k}+\beta(\triangledown\mathbf{u}^{k+1}-\mathbf{z}^{k+1}),$$
    $$\boldsymbol{\gamma}_2^{k+1}=\boldsymbol{\gamma}_2^{k}+\beta(\mathbf{u}^{k+1}-\mathbf{v}^{k+1}),$$
Step 3: Go back to Step 2 until $\frac {\|\mathbf {u}^{k+1}-\mathbf {u}^{k}\|}{\|\mathbf {u}^{k+1}\|}\leq \epsilon$.
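The outer loop of Algorithm 1 can be sketched as follows; `update_vz` and `update_u` stand for the subproblem solvers of Eqs. (10) and (11) (sketched in the next subsections) and are passed in as callables, so everything below is an illustrative assumption rather than the authors' implementation.

```python
import numpy as np

def grad(u):
    """Forward-difference gradient (grad_x u, grad_y u), stacked on the last axis."""
    dx = np.diff(u, axis=1, append=u[:, -1:])
    dy = np.diff(u, axis=0, append=u[-1:, :])
    return np.stack([dx, dy], axis=-1)

def admm(f, update_vz, update_u, beta=1.0, eps=1e-4, max_iter=500):
    """Outer ADMM loop of Algorithm 1."""
    u = f.astype(float).copy()                      # Step 1: u^0 = f
    gamma1 = np.zeros(u.shape + (2,))               # multiplier for grad(u) = z
    gamma2 = np.zeros_like(u)                       # multiplier for u = v
    for _ in range(max_iter):
        v, z = update_vz(u, gamma1, gamma2)         # Eq. (10): v- and z-subproblems
        u_new = update_u(u, v, z, gamma1, gamma2)   # Eq. (11): u-subproblem
        gamma1 = gamma1 + beta * (grad(u_new) - z)  # multiplier update, Eqs. (12)-(13)
        gamma2 = gamma2 + beta * (u_new - v)
        if np.linalg.norm(u_new - u) / np.linalg.norm(u_new) <= eps:  # Step 3 stopping rule
            return u_new
        u = u_new
    return u
```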

2.1.1 The solution of Eq. (10)

For Eq. (10), we rewrite the $\mathbf {v}$ subproblem in detail as follows:

$$argmin_{\mathbf{v}} {\mathcal{L}}(\mathbf{u}^{k}, \mathbf{v}, \mathbf{z}^{k}, \boldsymbol{\gamma}_1^{k}, \boldsymbol{\gamma}_2^{k})=argmin_\mathbf{v}{{\mathcal{L}}_0(\mathbf{v})},$$
where
$${\mathcal{L}}_0(\mathbf{v})=\iota(\mathbf{v})+\frac{\beta}{2}\|\mathbf{v}-(\mathbf{u}^{k}+\frac{1}{\beta}\boldsymbol{\gamma}_2^{k})\|^2.$$
The corresponding solution is given by the following projection:
$$\mathbf{v}^{k+1}=max(min(\mathbf{u}^{k}+\frac{1}{\beta}\boldsymbol{\gamma}_2^{k},L),l).$$
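In other words, the $\mathbf{v}$-update is a box projection onto $[l, L]$, which in NumPy reduces to a clip; the helper name below is an illustrative assumption.

```python
import numpy as np

def update_v(u, gamma2, beta, l=0, L=255):
    """v^{k+1}: project u^k + gamma2^k / beta onto the gray-value range [l, L]."""
    return np.clip(u + gamma2 / beta, l, L)
```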
For the $\mathbf {z}$ subproblem, we consider both the anisotropic TV problem and the isotropic TV problem. Note that:
$$\mathbf{z}=(\mathbf{z}_1,\mathbf{z}_2)=\triangledown\mathbf{u}=(\triangledown_{x}\mathbf{u},\triangledown_{y}\mathbf{u}),$$
where $\triangledown _x$, $\triangledown _y$ are the matrix forms of the gradient operators corresponding to the $x$ direction and the $y$ direction. Then for the anisotropic TV problem, the proposed functional ${\mathcal {L}}$ becomes:
$$\begin{aligned}&{\mathcal{L}}(\mathbf{u},\mathbf{v}, \mathbf{z_1},\mathbf{z_2}, \boldsymbol{\gamma}_1,\boldsymbol{\gamma}_2)= {\mathcal{L}}(\mathbf{u}, \mathbf{v}, \mathbf{z_1}, \mathbf{z_2}, \boldsymbol{\gamma}_{11}, \boldsymbol{\gamma}_{12}, \boldsymbol{\gamma}_2)\\ =&\iota(\mathbf{v})+\|\mathbf{u}-\mathbf{f}\|_2^{2}+\mu(|\mathbf{z}_{1}|+|\mathbf{z}_{2}|)+\frac{\alpha}{2}\sum_{\lambda}\big( H_\mathbf{u}(\lambda)-H_\mathbf{c}(\lambda)\big)^{2}\\ &+\frac{\beta}{2}\|\triangledown_{x} \mathbf{u}-\mathbf{z}_1\|_2^{2}+\frac{\beta}{2}\|\triangledown_{y}\mathbf{u}-\mathbf{z}_2\|_2^{2}+\langle\boldsymbol{\gamma}_{11},\triangledown_{x}\mathbf{u}-\mathbf{z}_{1}\rangle\\ &+\langle\boldsymbol{\gamma}_{12},\triangledown_{y}\mathbf{u}-\mathbf{z}_{2}\rangle+\frac{\beta}{2}\|\mathbf{u}-\mathbf{v}\|_2^{2}+\langle\boldsymbol{\gamma}_{2},\mathbf{u}-\mathbf{v}\rangle, \end{aligned}$$
The $\mathbf {z}$ subproblem can be solved as follows:
$$\mathbf{z}_1^{k+1}=argmin_{\mathbf{z}_1} {\mathcal{L}}(\mathbf{u}^{k}, \mathbf{v}^{k+1}, \mathbf{z}_1,\mathbf{z}_2^{k},\boldsymbol{\gamma}_{11}^{k}, \boldsymbol{\gamma}_{12}^{k},\boldsymbol{\gamma}_{2}^{k}),$$
$$\mathbf{z}_2^{k+1}=argmin_{\mathbf{z}_2} {\mathcal{L}}(\mathbf{u}^{k}, \mathbf{v}^{k+1}, \mathbf{z}_1^{k+1}, \mathbf{z}_2,\boldsymbol{\gamma}_{11}^{k}, \boldsymbol{\gamma}_{12}^{k},\boldsymbol{\gamma}_2^{k}).$$
We rewrite Eq. (19) in detail as follows:
$$argmin_{\mathbf{z}_1} {\mathcal{L}}(\mathbf{u}^{k}, \mathbf{v}^{k+1}, \mathbf{z}_1,\mathbf{z}_2^{k},\boldsymbol{\gamma}_{11}^{k}, \boldsymbol{\gamma}_{12}^{k},\boldsymbol{\gamma}_{2}^{k})=argmin_{\mathbf{z}_1}\{ {\mathcal{L}}_1(\mathbf{z}_1)\},$$
where
$${\mathcal{L}}_1(\mathbf{z}_1)=\mu|\mathbf{z}_1|+\frac{\beta}{2}\|\triangledown_{x}\mathbf{u}^k-\mathbf{z}_1+\frac{1}{\beta}\boldsymbol{\gamma}_{11}^k\|_2^{2}.$$
The corresponding solution is given by the following projection:
$$\mathbf{z}_1^{k+1}=shrink(\triangledown_{x}\mathbf{u}^k+\frac{1}{\beta}\boldsymbol{\gamma}_{11}^k, \frac{\mu}{\beta}),$$
where
$$shrink(x,\rho)=\frac{x}{|x|}\cdot max(|x|-\rho,0)$$
is the shrinkage operator [19]. $\mathbf {z}_2^{k+1}$ can be solved by using the same argument,
$$\mathbf{z}_2^{k+1}=shrink(\triangledown_{y}\mathbf{u}^k+\frac{1}{\beta}\boldsymbol{\gamma}_{12}^k,\frac{\mu}{\beta}).$$
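A sketch of these two soft-thresholding updates is given below, with the usual convention that the shrinkage of a zero entry is zero; the function and variable names are illustrative assumptions.

```python
import numpy as np

def shrink(x, rho):
    """Soft-thresholding (x / |x|) * max(|x| - rho, 0); entries with x = 0 are mapped to 0."""
    return np.sign(x) * np.maximum(np.abs(x) - rho, 0.0)

def update_z_anisotropic(dx_u, dy_u, gamma11, gamma12, mu, beta):
    """Anisotropic z-updates: shrink the shifted forward differences of u^k."""
    z1 = shrink(dx_u + gamma11 / beta, mu / beta)
    z2 = shrink(dy_u + gamma12 / beta, mu / beta)
    return z1, z2
```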
For the isotropic TV problem, the proposed functional ${\mathcal {L}}$ becomes:
$$\begin{aligned}&{\mathcal{L}}(\mathbf{u},\mathbf{v}, \mathbf{z_1},\mathbf{z_2}, \boldsymbol{\gamma}_1,\boldsymbol{\gamma}_2)= {\mathcal{L}}(\mathbf{u}, \mathbf{v}, \mathbf{z_1}, \mathbf{z_2}, \boldsymbol{\gamma}_{11}, \boldsymbol{\gamma}_{12}, \boldsymbol{\gamma}_2)\\ =&\iota(\mathbf{v})+\|\mathbf{u}-\mathbf{f}\|_2^{2}+\mu\|(\mathbf{z}_1,\mathbf{z}_2)\|_{D}+\frac{\alpha}{2}\sum_{\lambda}\big(H_\mathbf{u}(\lambda)-H_\mathbf{c}(\lambda)\big)^{2}\\ &+\frac{\beta}{2}\|\triangledown_{x} \mathbf{u}-\mathbf{z}_1\|_2^{2}+\langle\boldsymbol{\gamma}_{11},\triangledown_{x}\mathbf{u}-\mathbf{z}_{1}\rangle+\frac{\beta}{2}\|\triangledown_{y}\mathbf{u}-\mathbf{z}_2\|_2^2\\ &+\langle\boldsymbol{\gamma}_{12},\triangledown_{y}\mathbf{u}-\mathbf{z}_{2}\rangle+\frac{\beta}{2}\|\mathbf{u}-\mathbf{v}\|_2^{2}+\langle\boldsymbol{\gamma}_{2},\mathbf{u}-\mathbf{v}\rangle, \end{aligned}$$
where
$$\|(\mathbf{z}_1, \mathbf{z}_2)\|_D=\sum_{i,j}\sqrt{(\mathbf{z}_1)_{i,j}^{2}+(\mathbf{z}_2)_{i,j}^{2}}.$$
Then the corresponding solution of $\mathbf {z}$ is given as follows in this case:
$$(\mathbf{z}_1^{k+1}, \mathbf{z}_2^{k+1})=argmin_{\mathbf{z}_1, \mathbf{z}_2}{\mathcal{L}}(\mathbf{u}^{k}, \mathbf{v}^{k+1},\mathbf{z}_1, \mathbf{z}_2,\boldsymbol{\gamma}_{11}^{k}, \boldsymbol{\gamma}_{12}^{k},\boldsymbol{\gamma}_2^{k}).$$
And the corresponding solution is given by the following projections:
$$\mathbf{z}_1^{k+1}=max(s^{k}-\frac{\mu}{\beta},0)\frac{\triangledown_{x}\mathbf{u}^{k}+\frac{1}{\beta}\boldsymbol{\gamma}_{11}^{k}}{s^{k}},$$
$$\mathbf{z}_2^{k+1}=max(s^{k}-\frac{\mu}{\beta},0)\frac{\triangledown_{y}\mathbf{u}^{k}+\frac{1}{\beta}\boldsymbol{\gamma}_{12}^{k}}{s^{k}},$$
where,
$$s^{k}=\sqrt{|\triangledown_{x}\mathbf{u}^{k}+\frac{1}{\beta}\boldsymbol{\gamma}_{11}^{k}|^{2}+|\triangledown_{y}\mathbf{u}^{k}+\frac{1}{\beta}\boldsymbol{\gamma}_{12}^{k}|^{2}}.$$
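The isotropic update can be sketched in the same way; the small constant guarding the division when $s^{k}=0$ is an assumption on how to handle the degenerate case.

```python
import numpy as np

def update_z_isotropic(dx_u, dy_u, gamma11, gamma12, mu, beta, tiny=1e-12):
    """Isotropic z-updates: joint shrinkage of the shifted forward differences of u^k."""
    a1 = dx_u + gamma11 / beta
    a2 = dy_u + gamma12 / beta
    s = np.sqrt(a1 ** 2 + a2 ** 2)                          # s^k
    scale = np.maximum(s - mu / beta, 0.0) / np.maximum(s, tiny)
    return scale * a1, scale * a2
```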

2.1.2 The solution of Eq. (11)

For the subproblem in Eq. (11), we first rewrite it as follows:

$$argmin_{\mathbf{u}} {\mathcal{L}}(\mathbf{u},\mathbf{v}^{k+1}, \mathbf{z}^{k+1}, \boldsymbol{\gamma}_1^{k},\boldsymbol{\gamma}_2^{k})=argmin_{\mathbf{u}}\{{\mathcal{L}}_2(\mathbf{u})\},$$
where
$$\begin{aligned}{\mathcal{L}}_{2}(\mathbf{u})=\|\mathbf{u}-\mathbf{f}\|_2^{2}&+\frac{\alpha}{2}\sum_{\lambda}(H_\mathbf{u}(\lambda)-H_\mathbf{c}(\lambda))^{2}\\ &+\frac{\beta}{2}\|\triangledown\mathbf{u}-\mathbf{z}+\frac{1}{\beta}\boldsymbol{\gamma}_1\|_2^{2}+\frac{\beta}{2}\|\mathbf{u}-\mathbf{v}+\frac{1}{\beta}\boldsymbol{\gamma}_2\|_2^2. \end{aligned}$$
The corresponding Euler-Lagrange equation is given as follows (the details of the calculation are shown in the Appendix):
$$\begin{aligned}2(\mathbf{u}-\mathbf{f})-\frac{\alpha}{|\Omega|}\big(H_{\mathbf{u}}(\mathbf{u})-H_{\mathbf{c}}(\mathbf{u})\big)&+\beta\triangledown^{T}({\triangledown\mathbf{u}}-\mathbf{z}+\frac{1}{\beta}\boldsymbol{\gamma}_1) \\ &+\beta(\mathbf{u}-\mathbf{v}+\frac{1}{\beta}\boldsymbol{\gamma}_2)=0. \end{aligned}$$
We then use the gradient descent method to solve the corresponding minimization problem; the $j$-th iterate for the subproblem in Eq. (11) can be expressed as follows:
$$\begin{aligned}\mathbf{u}^{j+1}=\mathbf{u}^{j}+\tau\bigg(-2(\mathbf{u}^{j}-\mathbf{f})&+\frac{\alpha}{|\Omega|}\big(H_{\mathbf{u}^{j}}(\mathbf{u}^{j})-H_{\mathbf{c}}(\mathbf{u}^{j})\big)\\ &-\beta\triangledown^{T}({\triangledown\mathbf{u}^{j}}-\mathbf{z}+\frac{1}{\beta}\boldsymbol{\gamma}_1)-\beta(\mathbf{u}^{j}-\mathbf{v}+\frac{1}{\beta}\boldsymbol{\gamma}_2)\bigg). \end{aligned}$$
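A sketch of one such gradient-descent step is given below; it reuses the histogram helpers from Section 2, implements $\triangledown$ and $\triangledown^{T}$ as forward differences and their adjoint, and rounds pixel values to look up $H_{\mathbf{u}}(\mathbf{u}(i))-H_{\mathbf{c}}(\mathbf{u}(i))$. The boundary handling and all names are assumptions made for illustration.

```python
import numpy as np

def grad_T(px, py):
    """Adjoint of the forward-difference gradient used for grad_x and grad_y."""
    tx = np.zeros_like(px)
    tx[:, 0] = -px[:, 0]
    tx[:, 1:-1] = px[:, :-2] - px[:, 1:-1]
    tx[:, -1] = px[:, -2]
    ty = np.zeros_like(py)
    ty[0, :] = -py[0, :]
    ty[1:-1, :] = py[:-2, :] - py[1:-1, :]
    ty[-1, :] = py[-2, :]
    return tx + ty

def u_descent_step(u, v, z1, z2, gamma11, gamma12, gamma2, f, alpha, beta, tau, l=0, L=255):
    """One gradient-descent step for the u-subproblem."""
    Hu = cumulative_histogram(u, l, L)
    Hc = uniform_cumulative_histogram(l, L)
    bins = np.clip(np.rint(u).astype(int), l, L) - l        # pixel value -> histogram index
    hist_force = (alpha / u.size) * (Hu - Hc)[bins]          # (alpha/|Omega|)(H_u(u) - H_c(u))
    dx_u = np.diff(u, axis=1, append=u[:, -1:])
    dy_u = np.diff(u, axis=0, append=u[-1:, :])
    div_term = grad_T(dx_u - z1 + gamma11 / beta, dy_u - z2 + gamma12 / beta)
    return u + tau * (-2.0 * (u - f) + hist_force
                      - beta * div_term
                      - beta * (u - v + gamma2 / beta))
```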
In the next section, we assess the performance of the proposed model.

3. Numerical experiments and discussion

In the following tests, we make use of several standard testing images in the USC-SIPI image database (http://sipi.usc.edu/database/) to evaluate the proposed model.

In the numerical experiments, we set the maximum number of iterations to 500. Since the results are not sensitive to the time step $\tau$ and the penalty parameter $\beta$, we set them to 0.001 and 1, respectively. The stopping criterion $\epsilon$ is set to $10^{-4}$ in the following experiments. All the computations are performed under the MATLAB implementation on a personal computer with a 2.9 GHz Intel Core i5 CPU.

In the following experiments, we first test the effect of the parameters $\mu$ and $\alpha$ by setting different values. We then show the histogram equalization effect and the denoising effect of the proposed model for different contrast and noise levels. In the third experiment, we compare the proposed model with the ROF model [38], the PBHE model [5], the ACE model [6], the PLCE model [13], the HPE model [11,12], Liu’s method [35] and JED [39] on several degraded images. In the last experiment, we generate five low contrast images from high contrast ground truth images, and we then compare the proposed model with the other testing methods in terms of PSNR and SSIM values.

3.1 The performance measures

In the numerical tests, we use the following seven measures to show the performance of the proposed model.

  • The first measure, adopted in [40], is the Average Local Contrast (ALC) for an image of $N$ pixels,
    $$ALC=\frac{1}{N}\sum_{i=1}^{N}\frac{|r_i-E_i|}{r_i+E_i},$$
    here $r_i$ is the grey-level value at pixel $i$, and $E_i$ is the mean edge grey-level which is defined in a neighborhood $N^i$ of size $m\times m$ centered at pixel $i$,
    $$E_i=\frac{\sum_{k\in N^i}C_kr_k}{\sum_{k\in N^i}C_k},$$
    where $C_k$ is the edge value computed by Sobel operators (see [40]).
  • The second measure is the Discrete Entropy (DE) [41] of an image, defined as
    $$DE(I)=-\sum_{k}p(I(k))\log p(I(k)),$$
    where $p(I(k))$ is the probability of pixel intensity $I(k)$, estimated from the normalized histogram. The higher the value of the discrete entropy, the better the enhancement is in terms of providing better image details. (A minimal computational sketch of DE and EME is given after this list.)
  • The third measure is the Peak Signal-to-Noise Ratio (PSNR), which is defined as
    $$PSNR=10\log_{10}\frac{\mathbf{u}_{max}^2}{\frac{1}{n^2}\|\mathbf{u}_c-\mathbf{u}\|_2^2},$$
    where $\mathbf {u}_c$, $\mathbf {u}$ and $\mathbf {u}_{max}$ are the restored image, the original image, and the maximum pixel value of the original image, respectively [24].
  • The fourth measure is the Structural Similarity Index (SSIM) [42], which is defined as
    $$SSIM(x,y)=\frac{(2\mu_x\mu_y+c_1)(2\sigma_{xy}+c_2)}{(\mu_x^2+\mu_y^2+c_1)(\sigma_x^2+\sigma_y^2+c_2)},$$
    where $\mu _x$ and $\mu _y$ are the averages of $x$ and $y$, respectively, $\sigma _x^2$ and $\sigma _y^2$ are the variances of $x$ and $y$, respectively, $\sigma _{xy}$ is the covariance of $x$ and $y$, $L$ is the dynamic range of the pixel values, and $c_1=(k_1L)^2$ and $c_2=(k_2L)^2$ are two variables to stabilize the division with a weak denominator.
  • The fifth measure is the Measure of Enhancement (EME) [43], which is defined as
    $$EME=\frac{1}{B_1\times B_2}\sum_{i=1}^{B_1}\sum_{j=1}^{B_2}20\ln\Big(\frac{I_{i,j}^{max}}{I_{i,j}^{min}+c}\Big),$$
    where $I_{i,j}^{max}$ and $I_{i,j}^{min}$ are the maximum and minimum pixel intensities within the block $(i,j)$, respectively.
  • The sixth measure is the Absolute Measure of Enhancement (AME) [43], which is defined as
    $$AME=\frac{-1}{B_1\times B_2}\sum_{i=1}^{B_1}\sum_{j=1}^{B_2}20\ln\Big(\frac{I_{i,j}^{max}-I_{i,j}^{min}}{I_{i,j}^{max}+I_{i,j}^{min}}\Big),$$
    where $I_{i,j}^{max}$ and $I_{i,j}^{min}$ are the maximum and minimum pixel intensities within the block $(i,j)$, respectively.
  • The last measure is the Second Derivative like Measure of Enhancement (SDME) [43], which is defined as
    $$SDME=\frac{-1}{B_1\times B_2}\sum_{i=1}^{B_1}\sum_{j=1}^{B_2}20\ln\Big(\frac{I_{i,j}^{max}-2I_{i,j}^{cen}-I_{i,j}^{min}}{I_{i,j}^{max}+2I_{i,j}^{cen}+I_{i,j}^{min}}\Big),$$
    where $I_{i,j}^{max}$, $I_{i,j}^{cen}$ and $I_{i,j}^{min}$ are the maximum, the center, and the minimum pixel intensities within the block $(i,j)$, respectively.
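As a reference for how such measures can be evaluated, a minimal sketch of DE and EME is given below; the use of the natural logarithm for DE, 8-bit gray levels, the discarding of incomplete border blocks, and the small constant `c` are assumptions made here for illustration.

```python
import numpy as np

def discrete_entropy(img):
    """DE: entropy of the normalized gray-level histogram (8-bit levels, natural log assumed)."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def eme(img, block=7, c=1e-3):
    """EME over non-overlapping block x block tiles (assumes each tile has a nonzero maximum)."""
    h, w = img.shape
    vals = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            tile = img[i:i + block, j:j + block].astype(float)
            vals.append(20.0 * np.log(tile.max() / (tile.min() + c)))
    return float(np.mean(vals))
```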

3.2 The effect of parameters $\mu$ and $\alpha$

In this subsection, we set different values of $\alpha$ in the proposed model to check the role of the histogram equalization term. Meanwhile, we also show the effect of the parameter $\mu$, which corresponds to the role of the TV term, by setting different values.

The input low contrast image (of size $300\times 300$) is shown in the first row of Fig. 1, and it is further corrupted by additive white Gaussian noise with zero mean and variance $0.02^2$. In Fig. 1, we display the enhanced images obtained with several pairs of $(\mu , \alpha )$. In the second row, the enhanced results corresponding to $(\mu , \alpha )$ equal to (0.02, 1.5), (0.02, 2), (0.02, 2.5) and (0.02, 3) are shown. The parameter pairs $(\mu , \alpha )$ are (0.04, 1.5), (0.04, 2), (0.04, 2.5) and (0.04, 3); (0.06, 1.5), (0.06, 2), (0.06, 2.5) and (0.06, 3); (0.08, 1.5), (0.08, 2), (0.08, 2.5) and (0.08, 3) for the third row, the fourth row, and the fifth row, respectively. The corresponding histograms are displayed on the right-hand side of the images in Fig. 1. We see from the figures that the contrast of the enhanced results becomes higher and the corresponding histogram becomes more and more uniform as $\alpha$ gets larger, which corresponds to the role of the histogram equalization term. Meanwhile, the enhanced results become smoother as $\mu$ increases, which corresponds to the role of the TV regularization.

Fig. 1. (Test of parameters) The first row (from left to right): the low-contrast image, the low-contrast and noisy image; the second row: the enhanced results by setting $\alpha =1.5, 2, 2.5, 3,\ \mu =0.02$; the third row: the enhanced results by setting $\alpha =1.5, 2, 2.5, 3,\ \mu =0.04$; the fourth row: the enhanced results by setting $\alpha =1.5, 2, 2.5, 3,\ \mu =0.06$; the fifth row: the enhanced results by setting $\alpha =1.5, 2, 2.5, 3,\ \mu =0.08$.

In these two subsections (Sections 3.2 and 3.3), we make use of the measures ALC and DE to test the performance of the proposed model. The default block size for calculating ALC is $3\times 3$. We remark that higher values of ALC and DE of an image correspond to higher contrast and better image details, respectively. The corresponding ALC and DE values are reported below each image in Fig. 1. We see from the numbers that the ALC value increases as $\alpha$ becomes larger, which is consistent with the visual effect. Noting that high contrast usually destroys details, we find that the DE value (a higher DE value corresponds to more details) increases as $\alpha$ gets bigger until $\alpha = 2.5$, and then decreases. Because of the smoothing role of the TV regularization, we observe from the numbers that the ALC and DE values decrease as $\mu$ increases.

3.3 Contrast enhancement and denoising

In this subsection, we present some experimental results to illustrate the effectiveness of the proposed model for contrast enhancement and denoising. We first choose a high contrast image (the first row of Fig. 2) and adjust its contrast range so that we can obtain some low-contrast images. In the first experiment, we generate four low contrast images with different contrast levels, which are shown in the first row of Fig. 2. We then add additive white Gaussian noise (with zero mean and variance $0.02^2$) to the low-contrast images. The proposed model is then applied to these low-contrast, noisy images. Here the ground truth image can be used to check the performance measures. Both ALC and DE are employed, and the corresponding ALC and DE values are given below each output image. In the next experiment, we test the denoising performance of the proposed model. We fix the contrast level and generate four noisy images. Meanwhile, the ALC and DE values are also reported.

Fig. 2. The first row (from left to right): the ground truth image, the low-contrast and noisy images whose contrast gets higher; the second row (from left to right): the enhanced results by using the proposed model with $\mu =0.08,0.07,0.06,0.05$ and $\alpha =2.5$; the corresponding ALC values and DE values (ALC, DE) are given below each output image.

In the first test, the corresponding parameters $(\mu ,\alpha )$ are set to (0.08, 2.5), (0.07, 2.5), (0.06, 2.5) and (0.05, 2.5), respectively. The enhanced results are displayed in the second row of Fig. 2. We can see from the results that the proposed model enhances these images effectively for the same noise level and different contrast levels.

In the second test, the variances of the noise are $0.01^2$, $0.02^2$, $0.03^2$, $0.04^2$, respectively, and the mean of the noise is 0. The parameter $\alpha$ is set to 2.5, and the parameter $\mu$ is chosen to be 0.04, 0.06, 0.085, 0.12, respectively, in practice. The degraded images are given in the first row of Fig. 3, and the corresponding enhanced results are displayed in the second row of Fig. 3. Again we find that the proposed model performs very well for different noise levels and the same contrast level.

Fig. 3. The first row (from left to right): the low contrast images with noise whose variances are $0.01^2,0.02^2,0.03^2,0.04^2$ respectively; the second row (from left to right): the corresponding enhanced results by using the proposed model with $\mu=0.04$, $0.06$, $0.085$, $0.12$ and $\alpha=2.5$; the corresponding ALC values and DE values (ALC, DE) are given below each output image.

3.4 Comparisons with image enhancement methods I

In this subsection, we compare the proposed model with the ROF model [38], the PBHE model [5], the ACE model [6], the PLCE model [13], the HPE model [11,12], Liu’s method [35] and the JED model [39] for image enhancement. We set the parameter for the regularization term in the ROF model to 0.005, 0.01, 0.015, 0.02, 0.025, 0.03, 0.035, 0.04, and the parameter for the mean brightness term in the PBHE model to 0.0001, 0.0005, 0.001, 0.005, 0.01, 0.05, 0.1, 0.5 to see the results. For the ACE model, the PLCE model, the HPE model, and the JED model, we use the default parameter settings. For Liu’s method, we set the parameters to $(\lambda , \gamma ) = (0.3, 0.001)$, which was the default (and only) setting in [35]. For the proposed model, we set $\mu$ to 0.005, 0.01, 0.0125, 0.015, 0.0175, 0.02, 0.0225, 0.025, and $\alpha$ to values in the interval (1.5, 2.5).

Six low contrast images, including two color images, are considered in this experiment, and they are further corrupted by additive white Gaussian noise with zero mean and variance $0.02^2$. The degraded images are displayed in the first row of Fig. 4. In this subsection, we make use of the measures ALC, DE, EME, AME, and SDME to test the performance of the proposed model. The corresponding ALC, DE, EME, AME and SDME values are reported in Table 1.

Fig. 4. From top to bottom: low contrast and noisy images; restored results by using ROF model; enhanced results by using PBHE model; enhanced results by using ACE model; enhanced results by using PLCE model; enhanced results by using HPE model; enhanced results by using Liu’s method; enhanced results by using JED method; enhanced results by using the proposed model.

Table 1. ALC, DE, EME, AME, and SDME values for enhanced results of different models.

As an example, we display the restored results by using the ROF model (with parameter 0.02) in the second row of Fig. 4. For color images, we apply the ROF model channel by channel in RGB color space. We see from the results that the noise has been removed; however, the contrast is still low. The enhanced results by using the PBHE model (with parameter 0.001) are shown in the third row of Fig. 4. We find that the contrast has been enhanced, while the noise is even amplified. The enhanced results by using the ACE model, the PLCE model, and the HPE model are given in the fourth row, the fifth row, and the sixth row respectively, and these results are also visually unpleasant. The enhanced results by using Liu’s method are given in the seventh row of Fig. 4. We find that the results are over-enhanced, and the noise is still present. The enhanced results by using the JED model are given in the eighth row of Fig. 4. We find that these results are also over-enhanced. The enhanced results by using the proposed model (with parameter pair (0.02, 1.5)) are displayed in the ninth row of Fig. 4. We observe from the results that the contrast has been enhanced and, meanwhile, the noise has been removed. The visual quality of the enhanced results by using the proposed model is more pleasant than that of the other testing models. We also show the corresponding zooming parts of the enhanced results in Fig. 5. We find that the proposed model is good at restoring details during the contrast enhancement and denoising procedure; see especially the hair of the girl and the texture on the tank.

Fig. 5. The corresponding zooming parts of Fig. 4.

The default value of the block size for calculating ALC is $3\times 3$. The EME value, the AME value, and the SDME value are computed on non-overlapping blocks of size $7\times 7$. We remark that higher values of ALC, EME and DE of an image correspond to better image quality, whereas for AME and SDME, small values correspond to good quality. We see from Table 1 that the proposed model provides competitive values of ALC, DE, EME, AME and SDME in terms of the visual quality.

3.5 Comparisons with image enhancement methods II

In this subsection, we make use of five high contrast pictures as the ground truth images and then generate five low contrast images. All the low contrast images are further degraded by additive white Gaussian noise with zero mean and variance $0.02^2$. We again make use of the measures ALC, DE, EME, AME, and SDME to test the performance of the proposed model, with the same settings as in Section 3.4. Since we have ground truth images in this test, we can also measure the difference between the enhanced results and the ground truth images by using PSNR and SSIM to show the effectiveness of the proposed model for image restoration. We report the corresponding ALC, EME, AME, SDME and DE values in Table 2, together with the corresponding PSNR and SSIM values to illustrate the restoration effect of the proposed model.

Table 2. ALC, DE, EME, AME, SDME, PSNR, and SSIM values for enhanced results of different models.

The ground truth images are shown in the first row of Fig. 6, and the low contrast and noisy images are given in the second row of Fig. 6. We make use of the same parameter settings as in Section 3.4. We display the restored results by using the ROF model with the highest SSIM values in the third row of Fig. 6. Again we see from the results that the contrast of the output results is still low. We show the enhanced results by using the PBHE model with the highest SSIM values in the fourth row of Fig. 6. We find that the noise is kept after the enhancement procedure. The enhanced results by using the ACE model, the PLCE model, and the HPE model are given in the fifth row, the sixth row, and the seventh row respectively. We see that these results are still noisy. The enhanced results by using Liu’s method are shown in the eighth row of Fig. 6. The enhanced results by using the JED model are given in the ninth row of Fig. 6. Then we give the enhanced results by using the proposed model with the highest SSIM values in the tenth row of Fig. 6, and we observe that the visual quality is still the best compared with that of the other testing models. We also display the corresponding zooming parts in Fig. 7. Again we can see the effectiveness of the detail restoration of the proposed model.

Fig. 6. From top to bottom: the high-contrast images; the low-contrast and noisy images; the restored results by using ROF model; the enhanced results by using PBHE model; the enhanced results by using ACE model; the enhanced results by using PLCE model; the enhanced results by using HPE model; the enhanced results by using Liu’s method; the enhanced results by using JED model; the enhanced results by using the proposed model.

Fig. 7. The corresponding zooming parts of Fig. 6.

We find that DE values of the enhanced results by using the proposed model are good, and ALC, EME, AME, SDME values corresponding to the proposed model are competitive. Meanwhile, we observe from the numbers that the proposed model provides almost all the best values of PSNR and SSIM, which demonstrates the effectiveness of the proposed model.

3.6 Comparisons with a two-step method

In the previous experiments, we find that the proposed model is much better than the other testing models in combining image denoising and image contrast enhancement. It is interesting to compare the proposed model with a two-step method consisting of a denoising step and an enhancement step. As shown in the previous experiments, the PBHE model amplifies the noise in the restored images, so we test the following two-step method: we first denoise the degraded image by using the ROF model and then enhance the denoised image by using the PBHE model. In order to make a fair comparison, we consider different parameter settings for the proposed model and for the two-step method. Note that there is only one parameter in each of the ROF model and the PBHE model, namely the parameter for the regularization term in the ROF model (parameter 1) and the parameter for the mean brightness term in the PBHE model (parameter 2). In the following experiments, we set parameter 1 to be 0.005, 0.01, 0.015, 0.02, 0.025, 0.03, 0.035, 0.04, and parameter 2 to be 0.0001, 0.0005, 0.001, 0.005, 0.01, 0.05, 0.1, 0.5. Then, for each input degraded image, we obtain 64 restored results corresponding to the 64 different parameter pairs of the two-step method. For the proposed model, we set $\mu$ (parameter 1) to be 0.005, 0.01, 0.0125, 0.015, 0.0175, 0.02, 0.0225, 0.025, and $\alpha$ (parameter 2) to values in the interval (1.5, 2.5). Then we also have 64 restored results for each input image.

In this subsection, we make use of the measures ALC, DE, PSNR and SSIM to test the performance of the proposed model, with the same settings as in the previous sections. First, we make use of the four noisy and low contrast images in Fig. 4 and show the restored results in Fig. 8. The distributions of the ALC and DE values of the restored results are given in Fig. 9. The magenta surface corresponds to the proposed model, and the blue surface corresponds to the testing two-step method. We also mark the largest value of the measure in each distribution figure (black points). We show the zoom-in parts of Fig. 8 in Fig. 10. Then we make use of the five noisy and low contrast images in Fig. 6 to test the effect of the proposed model, and display the restored results in Fig. 11. Since we have ground truth images in Fig. 6, we can calculate PSNR and SSIM values in this experiment. We display the distributions of the ALC, DE, PSNR, and SSIM values of the restored results in Fig. 12. We also report the corresponding ALC, DE, PSNR, SSIM values below each restored image. We show the zoom-in parts of Fig. 11 in Fig. 13.

Fig. 8. From top to bottom: the degraded images; the enhanced results by using the two-step method; the enhanced results by using the proposed model. The corresponding ALC values and DE values (ALC, DE) are shown below each output image.

Fig. 9. From top to bottom: the distribution of ALC, DE values of the restored results in Fig. 8.

Fig. 10. The corresponding zooming parts of Fig. 8.

Fig. 11. From top to bottom: the ground-truth images; the degraded images; the enhanced results by using the two-step method; the enhanced results by using the proposed model. The corresponding ALC values and DE values (ALC, DE) are shown below each output image and the corresponding PSNR values and SSIM values (PSNR, SSIM) are shown below (ALC, DE).

Fig. 12. From top to bottom: the distribution of ALC, DE, PSNR, SSIM values of the restored results in Fig. 11.

Fig. 13. The corresponding zooming parts of Fig. 11.

In Fig. 9, we find that the proposed model gives almost all (7/8) of the largest ALC and DE values. Note that larger ALC and DE values do not necessarily correspond to better visual quality. As an example, we show the restored results by using the two-step method with (parameter 1, parameter 2) = (0.03, 0.05) in the second row of Fig. 8, and the restored results by using the proposed model with $(\mu , \alpha ) = (0.02, 1.5)$ in the third row of Fig. 8. We see from the results that the proposed model is competitive when balancing the visual quality and the measure values. In Fig. 12, we observe that the proposed model generates all the best DE values, all the best PSNR values, and all the best SSIM values. As an example, we show the restored results with the best SSIM values by using the two-step method in the third row of Fig. 11, and we display the restored results with the best SSIM values by using the proposed model in the fourth row of Fig. 11. It is clear that the proposed model always gives better PSNR and SSIM values; meanwhile, the corresponding DE values of the results restored by the proposed model are also better for all the testing images. We show the zoom-in parts in Fig. 10 and Fig. 13. Again we find that the proposed model is better than the testing two-step method in terms of the values of the measures and the visual quality of the restored results.

In the previous experiments, we find that each iteration of the ROF model takes 0.03s, 0.03s, 0.03s, 0.04s and 0.04s, respectively, for the five testing images in Fig. 11, and each iteration of the PBHE model takes 0.12s, 0.12s, 0.13s, 0.17s and 0.15s, respectively. Therefore, each iteration of the two-step method takes 0.15s, 0.15s, 0.16s, 0.21s and 0.19s, respectively, to handle the five input testing images, whereas each iteration of the proposed model takes only 0.07s, 0.07s, 0.07s, 0.09s and 0.09s for the same testing images. Considering both the computational efficiency and the effectiveness of the restoration, the proposed model is clearly more attractive.

4. Concluding remarks

In order to handle the problem of simultaneous image denoising and image contrast enhancement, we propose and develop a variational model in this paper. The idea is to formulate a variational approach containing an energy functional that adjusts the pixel values of an input image directly, so that the resulting histogram is redistributed to be uniform and the noise in the image is removed. In the proposed model, a histogram equalization term is considered for image contrast enhancement, a total variation term is incorporated to remove the noise of the input image, and a fidelity term is added to keep the structure and the texture of the input image. Theoretically, the existence of the minimizer and the convergence of the proposed model are given. We compare the proposed model with the ROF model [38], the PBHE model [5], the ACE model [6], the PLCE model [13], the HPE model [11,12], Liu’s method [35] and the JED model [39] for image enhancement. Experimental results are reported to demonstrate that the proposed model is competitive with these testing models for several testing images in terms of several measures such as ALC, DE, PSNR, SSIM, EME, AME, and SDME.

In this paper, we only consider the RGB space for color images. In future research, we plan to apply the proposed model to extremely low contrast images, images with high noise levels, and color images in different color spaces. It is also interesting to extend and analyze the proposed model for hyperspectral image denoising and contrast enhancement applications.

Appendix

5.1. The calculations of the Euler-Lagrange equation corresponding to ${\mathcal {L}}_2(\mathbf {u})$

In this section, we show the calculations of the Euler-Lagrange equation corresponding to ${\mathcal {L}}_2(\mathbf {u})$, where

$$\begin{aligned}{\mathcal{L}}_{2}(\mathbf{u})=\|\mathbf{u}-\mathbf{f}\|_2^{2}&+\frac{\alpha}{2}\sum_{\lambda}(H_\mathbf{u}(\lambda)-H_\mathbf{c}(\lambda))^{2}\\ &+\frac{\beta}{2}\|\triangledown\mathbf{u}-\mathbf{z}+\frac{1}{\beta}\boldsymbol{\gamma}_1\|_2^{2}+\frac{\beta}{2}\|\mathbf{u}-\mathbf{v}+\frac{1}{\beta}\boldsymbol{\gamma}_2\|_2^2 \end{aligned}$$
with
$$H_{\mathbf{u}}(\lambda)=\frac{1}{|\Omega|} \sum_{i\in \Omega} \chi_{[0, \lambda]}(\mathbf{u}(i)).$$
We show the details of the calculation as follows. If $\mathbf {w}: \Omega \rightarrow \mathbb{R}^{+}$ denotes a perturbation, we have:
$$\begin{aligned}\frac{d}{d\eta}\bigg|_{\eta=0 }H_{\mathbf{u}+\eta \mathbf{w}}(\lambda)&=\frac{1}{|\Omega|}\frac{d}{d\eta}\bigg|_{\eta=0}\sum_{i\in\Omega}{\chi_{[0,\lambda]}}(\mathbf{u}+\eta \mathbf{w}) \\ &=\frac{1}{|\Omega|}\sum_{i\in\Omega}\frac{d}{d\eta}\bigg|_{\eta=0}{\chi_{[0,\lambda]}}(\mathbf{u}+\eta \mathbf{w}) \\ &=-\frac{1}{|\Omega|}\sum_{i\in\Omega}\delta(\mathbf{u}(i)-\lambda)\mathbf{w}(i). \end{aligned}$$
Then we can calculate the variation of the proposed energy functional and set it to be 0,
$$\begin{aligned}&\frac{d}{d\eta}\bigg|_{\eta=0}\sum_{\lambda}\big(H_{\mathbf{u}+\eta \mathbf{w}}(\lambda)-H_\mathbf{c}(\lambda)\big)^{2}\\ =&-\frac{2}{|\Omega|}\sum_{\lambda}\big(H_\mathbf{u}(\lambda)-H_\mathbf{c}(\lambda)\big) \sum_{i}\delta\big(\mathbf{u}(i)-\lambda\big)\mathbf{w}(i)\\ =&-\frac{2}{|\Omega|}\sum_{i}\Big(H_\mathbf{u}\big(\mathbf{u}(i)\big)-H_\mathbf{c}\big(\mathbf{u}(i)\big)\Big)\mathbf{w}(i)\\ =&0, \end{aligned}$$
since $\mathbf {w}$ is arbitrary, we have,
$$-\frac{2}{|\Omega|}\big(H_\mathbf{u}(\mathbf{u})-H_\mathbf{c}(\mathbf{u})\big)=0.$$
Then we calculate the first variation of the first term in the proposed energy functional and set it to be $0$,
$$\begin{aligned}\frac{d}{d\eta}\bigg|_{\eta=0}(\mathbf{u}+\eta\mathbf{w}-\mathbf{f})^2 &=2(\mathbf{u}+\eta\mathbf{w}-\mathbf{f})|_{\eta=0} \\ &=2(\mathbf{u}-\mathbf{f})\mathbf{w} \\ &=0, \end{aligned}$$
since $\mathbf {w}$ is arbitrary, we have $2(\mathbf {u}-\mathbf {f})=0$, and therefore $\mathbf {u}=\mathbf {f}$. The calculation of the first variation for the remaining terms of ${\mathcal {L}}_2(\mathbf {u})$ is standard and can be easily derived, as shown in Eq. (34).

5.2. Convergence analysis

In this subsection, we provide some analysis on the convergence of the proposed ADMM algorithm. First, we consider the following equivalent form of Eq. (8):

$$\begin{aligned} &\min_{ \mathbf{u},\mathbf{v}}\left\{ ||\mathbf{u}-\mathbf{f}||_2^2+\mu ||\mathbf{u}||_{TV}+\frac{\alpha}{2}\sum_{\lambda} \big(H_\mathbf{u}(\lambda)-H_\mathbf{c}(\lambda)\big)^2\right\},\\ &\textrm{such that},\ \mathbf{u}-\mathbf{v}=\mathbf{0}, \mathbf{v}-l\geq0, \mathbf{v}-L\leq0. \end{aligned}$$
We recall that a point is a KKT point of Eq. (50) if it satisfies the KKT conditions of Eq. (50), which are given as follows:
$$ 2(\mathbf{u}-\mathbf{f})-\frac{\alpha}{|\Omega|}\big(H_{\mathbf{u}}(\mathbf{u})-H_{\mathbf{c}}(\mathbf{u})\big)+\triangledown_{x}^{T}\boldsymbol{\omega}_{11}+\triangledown_y^{T}\boldsymbol{\omega}_{12}+\boldsymbol{\omega}_2=\mathbf{0}, $$
$$ \triangledown\mathbf{u}=(\triangledown_{x}\mathbf{u},\triangledown_{y}\mathbf{u})=(\mathbf{z}_1,\mathbf{z}_2)=\mathbf{z}, $$
$$ \boldsymbol{\omega}_2-\boldsymbol{\omega}_3-\boldsymbol{\omega}_4=0, $$
$$ \mathbf{u}-\mathbf{v}=\mathbf{0}, $$
$$ \boldsymbol{\omega}_3\leq\mathbf{0}\leq\mathbf{v}-l,\ \boldsymbol{\omega}_3\odot (\mathbf{v}-l)=0, $$
$$ \boldsymbol{\omega}_4\geq\mathbf{0}\geq\mathbf{v}-L,\ \boldsymbol{\omega}_4\odot (\mathbf{v}-L)=0. $$
Then we have the following convergence result of the proposed ADMM algorithm.

Theorem 5.1 Let $\{\mathbf {X}^{k} = (\mathbf {u}^{k},\mathbf {v}^{k},\mathbf {z}^{k}, \boldsymbol {\gamma }_1^{k},\boldsymbol {\gamma }_2^{k})\}$ be a sequence generated by the proposed ADMM algorithm in Eqs. (10)–(13), and it satisfies $\lim _{k\to \infty }(\mathbf {X}^{k+1}-\mathbf {X^{k}})=0.$ Then any accumulation point of $\{\mathbf {X}^{k}\}$ is a KKT point of Eq. (50).

Proof 2 For the anisotropic problem, we first rewrite the ADMM iterations as follows:

$$ \mathbf{v}^{k+1}=max(min(\mathbf{u}^{k}+\frac{1}{\beta}\boldsymbol{\gamma}_2^{k},L),l), $$
$$ \mathbf{z}_1^{k+1}=\frac{\triangledown_{x}\mathbf{u}^{k}+\frac{1}{\beta}\boldsymbol{\gamma}_{11}^{k}}{|\triangledown_{x}\mathbf{u}^{k}+\frac{1}{\beta}\boldsymbol{\gamma}_{11}^{k}|}\cdot max(|\triangledown_{x}\mathbf{u}^{k}+\frac{1}{\beta}\boldsymbol{\gamma}_{11}^{k}|-\frac{\mu}{\beta},0), $$
$$ \mathbf{z}_2^{k+1}=\frac{\triangledown_{y}\mathbf{u}^{k}+\frac{1}{\beta}\boldsymbol{\gamma}_{12}^{k}}{|\triangledown_{y}\mathbf{u}^{k}+\frac{1}{\beta}\boldsymbol{\gamma}_{12}^{k}|}\cdot max(|\triangledown_{y}\mathbf{u}^{k}+\frac{1}{\beta}\boldsymbol{\gamma}_{12}^{k}|-\frac{\mu}{\beta},0),$$
$$\begin{aligned} &\hspace{-10mm}2(\mathbf{u}^{k+1}-\mathbf{f})-\frac{\alpha}{|\Omega|}\big(H_{\mathbf{u}^{k+1}}(\mathbf{u}^{k+1})-H_{\mathbf{c}}(\mathbf{u}^{k+1})\big)\\ &\hspace{-10mm}+\beta\triangledown_{x}^{T}(\triangledown_{x}\mathbf{u}^{k+1}-\mathbf{z}_1^{k+1}+\frac{1}{\beta}\boldsymbol{\gamma}_{11}^{k})+\beta\triangledown_{y}^{T}(\triangledown_{y}\mathbf{u}^{k+1}-\mathbf{z}_2^{k+1}+\frac{1}{\beta}\boldsymbol{\gamma}_{12}^{k})\\ &\hspace{30mm}+\beta(\mathbf{u}^{k+1}-\mathbf{v}^{k+1}+\frac{1}{\beta}\boldsymbol{\gamma}_2^{k})=\mathbf{0}, \end{aligned}$$
$$ \boldsymbol{\gamma}_{11}^{k+1}=\boldsymbol{\gamma}_{11}^{k}+\beta(\triangledown_{x}\mathbf{u}^{k+1}-\mathbf{z}_1^{k+1}), $$
$$ \boldsymbol{\gamma}_{12}^{k+1}=\boldsymbol{\gamma}_{12}^{k}+\beta(\triangledown_{y}\mathbf{u}^{k+1}-\mathbf{z}_{2}^{k+1}), $$
$$ \boldsymbol{\gamma}_2^{k+1}=\boldsymbol{\gamma}_2^{k}+\beta(\mathbf{u}^{k+1}-\mathbf{v}^{k+1}). $$
Assume $\mathbf {X}^{k}\rightarrow \hat {\mathbf {X}}=(\hat {\mathbf {u}},\hat {\mathbf {v}},\hat {\mathbf {z}_1}, \hat {\mathbf {z}_2},\hat {\boldsymbol {\gamma }_{11}},\hat {\boldsymbol {\gamma }_{12}},\hat {\boldsymbol {\gamma }_2})$. Consider Eqs. (61)–(63), and let $k$ go to infinity; we have the following results by noting that $\lim _{k\to \infty }(\mathbf {X}^{k+1}-\mathbf {X^{k}})=0$:
$$\triangledown_x\mathbf{u}^{k+1}\rightarrow\triangledown_{x}\hat{\mathbf{u}}=\hat{\mathbf{z}_{1}}\leftarrow\mathbf{z}_{1}^{k},$$
$$\triangledown_y\mathbf{u}^{k+1}\rightarrow\triangledown_{y}\hat{\mathbf{u}}=\hat{\mathbf{z}_{2}}\leftarrow\mathbf{z}_{2}^{k}.$$
Consider Eq. (60) by using the above convergence results, and let $k$ go to infinity, we derive:
$$2(\hat{\mathbf{u}}-\mathbf{f})-\frac{\alpha}{|\Omega|}\big(H_{\hat{\mathbf{u}}}(\hat{\mathbf{u}})-H_{\mathbf{c}}(\hat{\mathbf{u}})\big)+\triangledown_{x}^{T}\boldsymbol{\hat{\gamma}_{11}}+\triangledown_{y}^{T}\boldsymbol{\hat{\gamma}_{12}}+\hat{\boldsymbol{\gamma}}_2=0.$$
Then we consider Eq. (57); letting $k$ go to infinity, we have:
$$\hat{\mathbf{v}}=max(min(\hat{\mathbf{u}}+\frac{1}{\beta}\hat{\boldsymbol{\gamma}}_2,L),l).$$
We get the following results by using the above projection,
$$\begin{cases} \hat{\boldsymbol{\gamma}}_2=\mathbf{0}, & l<\hat{\mathbf{u}}=\hat{\mathbf{v}}<L,\\ \hat{\boldsymbol{\gamma}}_2\leq\mathbf{0}, & \hat{\mathbf{u}}=\hat{\mathbf{v}}=l,\\ \hat{\boldsymbol{\gamma}}_2\geq\mathbf{0}, & \hat{\mathbf{u}}=\hat{\mathbf{v}}=L. \end{cases}$$
Then we find that $\hat {\boldsymbol {\gamma }}_2$ and $\hat {\mathbf {v}}$ satisfy the following conditions:
$$\begin{cases} \hat{\boldsymbol{\gamma}}_2\leq \mathbf{0} \leq \hat{\mathbf{v}}-l, & \hat{\boldsymbol{\gamma}}_2 \odot(\hat{\mathbf{v}}-l)=\mathbf{0}\\ \hat{\boldsymbol{\gamma}}_2\geq\mathbf{0}\geq \hat{\mathbf{v}}-L, & \hat{\boldsymbol{\gamma}}_2\odot(\hat{\mathbf{v}}-L)=\mathbf{0}. \end{cases}$$
Then we consider Eqs. (58)–(59); letting $k$ go to infinity, we get:
$$\hat{\mathbf{z}}_1=\frac{\triangledown_{x}\hat{\mathbf{u}}+\frac{1}{\beta}\hat{\boldsymbol{\gamma}}_{11}}{|\triangledown_{x}\hat{\mathbf{u}}+\frac{1}{\beta}\hat{\boldsymbol{\gamma}}_{11}|}\cdot max(|\triangledown_{x}\hat{\mathbf{u}}+\frac{1}{\beta}\hat{\boldsymbol{\gamma}}_{11}|-\frac{\mu}{\beta},0),$$
$$\hat{\mathbf{z}}_2=\frac{\triangledown_{y}\hat{\mathbf{u}}+\frac{1}{\beta}\hat{\boldsymbol{\gamma}}_{12}}{|\triangledown_{y}\hat{\mathbf{u}}+\frac{1}{\beta}\hat{\boldsymbol{\gamma}}_{12}|}\cdot max(|\triangledown_{y}\hat{\mathbf{u}}+\frac{1}{\beta}\hat{\boldsymbol{\gamma}}_{12}|-\frac{\mu}{\beta},0).$$
Finally, noting the results in Eqs. (64), (65) and (66), we can derive that $\hat {\mathbf {X}}$ = $(\hat {\mathbf {u}}$, $\hat {\mathbf {v}}$, $\hat {\mathbf {z}}_1$, $\hat {\mathbf {z}}_2$, $\hat {\boldsymbol {\gamma }_{11}}$, $\hat {\boldsymbol {\gamma }}_{12}$, $\hat {\boldsymbol {\gamma }}_{2})$ satisfies the KKT conditions in Eqs. (51)–(56) of Eq. (50) by setting $\boldsymbol {\omega }_{11} = \hat {\boldsymbol {\gamma }}_{11}, \boldsymbol {\omega }_{12} = \hat {\boldsymbol {\gamma }}_{12}$, $\boldsymbol {\omega }_2 = \hat {\boldsymbol {\gamma }}_2$, $\boldsymbol {\omega }_3 = min(\hat {\boldsymbol {\gamma }}_2, 0)$, $\boldsymbol {\omega }_4 = max(\hat {\boldsymbol {\gamma }}_2, 0)$.

For the isotropic problem, we can get the same result by using a similar argument. This completes the proof.

We then easily deduce the following corollary.

Corollary 1 If $\{\mathbf {X}^{k}\}$ converges, it converges to a KKT point of Eq. (50).

Funding

Natural Science Foundation of Shanghai (18ZR1441800); Fundamental Research Funds for the Central Universities (22120180255, 22120180067); HKRGC GRF (12306616, 12200317, 12300218, 12300519); HKU Grant (104005583).

Disclosures

The authors declare no conflicts of interest.

References

1. R. Hummel, “Image enhancement by histogram transformation,” Comp. Graph. Image Process. 6(2), 184–195 (1977). [CrossRef]  

2. R. C. Gonzalez and R. E. Woods, Digital Image Processing (Prentice Hall, 2002).

3. D. J. Ketchum, “Real-time image enhancement techniques,” Proc. SPIE 0074, 120–125 (1976). [CrossRef]  

4. W. Wang and M. K. Ng, “A variational histogram equalization method for image contrast enhancement,” SIAM J. Imaging Sci. 6(3), 1823–1849 (2013). [CrossRef]  

5. W. Wang, C. Chen, and M. K. Ng, “An image pixel based variational model for histogram equalization,” J. Vis. Commun. Image R. 34, 118–134 (2016). [CrossRef]  

6. P. Getreuer, “Automatic color enhancement (ACE) and its fast implementation,” Image Process. Line 2, 266–277 (2012). [CrossRef]  

7. C. Gatta, A. Rizzi, and D. Marini, “ACE: an automatic color equalization algorithm,” Proceedings of the First European Conference on Color in Graphics Image and Vision (CGIV02) (2002).

8. A. Rizzi, C. Gatta, and D. Marini, “A new algorithm for unsupervised global and local color correction,” Pattern Recogn. Lett. 24(11), 1663–1677 (2003). [CrossRef]  

9. A. Rizzi, C. Gatta, and D. Marini, “From retinex to automatic color equalization: issues in developing a new algorithm for unsupervised color equalization,” J. Electron. Imaging 13(1), 75–84 (2004). [CrossRef]  

10. M. Bertalmio, V. Caselles, E. Provenzi, and A. Rizzi, “Perceptual color correction through variational techniques,” IEEE Trans. Image Process. 16(4), 1058–1072 (2007). [CrossRef]  

11. M. Nikolova and G. Steidl, “Fast hue and range preserving histogram specification: theory and new algorithms for color image enhancement,” IEEE Trans. Image Process. 23(9), 4087–4100 (2014). [CrossRef]  

12. M. Nikolova and G. Steidl, “Fast sorting algorithm for exact histogram specification,” IEEE Trans. Image Process. 23(12), 5274–5283 (2014).

13. S. Ferradans, R. Palma-Amestoy, and E. Provenzi, “An algorithmic analysis of variational models for perceptual local contrast enhancement,” Image Process. Line 5, 219–233 (2015). [CrossRef]  

14. A. N. Tikhonov and V. Y. Arsenin, Solutions of Ill-Posed Problems (Winston and Sons, 1977).

15. L. Rudin and S. Osher, “Total variation based image restoration with free local constraints,” In Proceedings of the International Conference on Image Processing volume I, 31–35 (1994).

16. L. Rudin, S. Osher, and E. Fatemi, “Nonlinear total variation based noise removal algorithms,” Phys. D 60(1-4), 259–268 (1992). [CrossRef]  

17. A. Chambolle, “An algorithm for total variation minimization and applications,” J. Math. Imaging Vis. 20(1/2), 163–177 (2004). [CrossRef]  

18. D. Goldfarb and W. Yin, “Second-order cone programming methods for total variation based image restoration,” SIAM J. Sci. Comput. 27(2), 622–645 (2005). [CrossRef]  

19. T. Goldstein and S. Osher, “The split Bregman method for L1-regularized problems,” SIAM J. Imaging Sci. 2(2), 323–343 (2009). [CrossRef]  

20. F. Li, T. Zeng, and G. Zhang, “Lagrangian multipliers and split Bregman methods for minimization problems constrained on Sn−1,” J. Vis. Commun. Image Represent. 23(7), 1041–1050 (2012). [CrossRef]  

21. W. Wang and M. K. Ng, “On algorithms for automatic deblurring from a single image,” J. Comput. Math. 30(1), 80–100 (2012). [CrossRef]  

22. T. F. Chan and C. K. Wong, “Total variation blind deconvolution,” IEEE Trans. on Image Process. 7(3), 370–375 (1998). [CrossRef]  

23. Y. Huang and M. Ng, “Lipschitz and total-variational regularization for blind deconvolution,” Commun. Comput. Phys. 4, 195–206 (2008).

24. M. K. Ng, W. Wang, and X. L. Zhao, “A variational approach for restoring images corrupted by noisy blur kernels and additive noise,” Numer. Linear Algebra Appl. 24(6), e2100 (2017). [CrossRef]  

25. W. Wang and M. K. Ng, “Convex regularized inverse filtering methods for blind image deconvolution,” Signal, Image and Video Process. 10(7), 1353–1360 (2016). [CrossRef]  

26. W. Wang, X. L. Zhao, and M. Ng, “A cartoon-plus-texture image decomposition model for blind deconvolution,” Multidim. Syst. Sign. Process. 27(2), 541–562 (2016). [CrossRef]  

27. X. L. Zhao, W. Wang, T. Y. Zeng, T. Z. Huang, and M. K. Ng, “Total variation structured total least squares method for image restoration,” SIAM J. Sci. Comput. 35(6), B1304–B1320 (2013). [CrossRef]  

28. T. Chan, S. Esedoglu, F. Park, and A. Yip, “Total variation image restoration: overview and recent developments,” in Handbook of Mathematical Models in Computer Vision, Springer, New York, 17–31 (2006).

29. F. Fang, F. Li, and T. Zeng, “Single image dehazing and denoising: a fast variational approach,” SIAM J. Imaging Sci. 7(2), 969–996 (2014). [CrossRef]  

30. W. Ma and S. Osher, “A TV Bregman iterative model of retinex theory,” Inverse Problems and Image 6(4), 697–708 (2012). [CrossRef]  

31. M. K. Ng and W. Wang, “A total variation model for retinex,” SIAM J. Imaging Sci. 4(1), 345–365 (2011). [CrossRef]  

32. P. Irrera, I. Bloch, and M. Delplanque, “A flexible patch based approach for combined denoising and contrast enhancement of digital X-ray images,” Med. Image Anal. 28, 33–45 (2016). [CrossRef]  

33. L. Li, R. Wang, W. Wang, and W. Gao, “A low-light image enhancement method for both denoising and contrast enlarging,” IEEE International Conference on Image Processing, 3730–3734 (2015).

34. J. Lim, J. H. Kim, J. Y. Sim, and C. S. Kim, “Robust contrast enhancement of noisy low-light images: denoising-enhancement-completion,” IEEE International Conference on Image Processing, 4131–4135 (2015).

35. X. Liu, G. Cheung, and X. Wu, “Joint denoising and contrast enhancement of images using graph laplacian operator,” IEEE International Conference on Acoustics, Speech and Signal Processing, 2274–2278 (2015).

36. A. Loza, M. Al-Mualla, P. Verkade, P. Hill, D. Bull, and A. Achim, “Joint denoising and contrast enhancement for light microscopy image sequences,” IEEE International Symposium on Biomedical Imaging, 1083–1086 (2014).

37. J. Eckstein and D. Bertsekas, “On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators,” Math. Program. 55(1-3), 293–318 (1992). [CrossRef]  

38. X. Bresson and T. F. Chan, “Fast minimization of the vectorial total variation norm and applications to color image processing,” UCLA CAM Report 07-25 (2007).

39. X. Ren, M. Li, W.-H. Cheng, and J. Liu, “Joint enhancement and denoising method via sequential decomposition,” IEEE International Symposium on Circuits and Systems (2018).

40. A. Beghdadi and A. L. Negrate, “Contrast enhancement technique based on local detection of edges,” Computer Vision, Graphics, and Image Processing 46(2), 162–174 (1989). [CrossRef]  

41. C. E. Shannon, “A mathematical theory of communication,” Bell Syst. Tech. J. 27(3), 379–423 (1948). [CrossRef]  

42. Z. Wang, A. Bovik, H. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004). [CrossRef]  

43. K. Panetta, C. Gao, and S. Agaian, “No reference color image contrast and quality measures,” IEEE Trans. on Consumer Electron. 59(3), 243–248 (2013). [CrossRef]  
