Abstract

Images captured under hazy conditions (e.g. fog, air pollution) usually present faded colors and loss of contrast. To improve their visibility, a process called image dehazing can be applied. Some of the most successful image dehazing algorithms are based on image processing methods but do not follow any physical image formation model, which limits their performance. In this paper, we propose a post-processing technique to alleviate this handicap by enforcing the original method to be consistent with a popular physical model for image formation under haze. Our results improve upon those of the original methods qualitatively and according to several metrics, and they have also been validated via psychophysical experiments. These results are particularly striking in terms of avoiding over-saturation and reducing color artifacts, which are the most common shortcomings faced by image dehazing methods.

Published by The Optical Society under the terms of the Creative Commons Attribution 4.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.

1. Introduction

Images captured under adverse weather conditions, such as fog or smog, present distorted colors and a loss of contrast, reducing the quality of the captured image. Different physical models aiming to describe this phenomenon have been proposed, the most widespread being the one by Koschmieder [1].

Koschmieder’s model states that the hazy image $\boldsymbol {I}$ depends on the clear image $\boldsymbol {J}$ (i.e., how the image would look without atmospheric scatter), a transmission map $\boldsymbol {t}$ that depends only on the scene depth and is therefore equal for the three color channels, and the airlight color $\boldsymbol {A}$. Mathematically, the model is written as

$$\boldsymbol{I_{x,\cdot}}=t_x \boldsymbol{J_{x,\cdot}}+(1-t_x)\boldsymbol{A},$$
where $x$ is a particular image pixel, and $\boldsymbol {J_{x,\cdot }}$, $\boldsymbol {I_{x,\cdot }}$ are respectively the $1$-by-$3$ vectors of the R,G,B values at pixel $x$ of the clear and the hazy image. Let us note here that Koschmieder’s model is a relatively simple model of the atmosphere. It is by no means a complete model, as optical scattering is extremely complex due to the wide variability of particle distributions within the atmosphere. This said, even if it relies on physical assumptions that will not always hold, it provides us with a mathematically tractable setting. For this reason, this model is used in almost all the image dehazing literature, and it will therefore also be considered in this paper.
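To make the notation concrete, Eq. (1) can be sketched in a few lines of numpy (the function name `apply_koschmieder` is our own, not from the paper):

```python
import numpy as np

def apply_koschmieder(J, t, A):
    """Synthesise a hazy image from Eq. (1): I = t*J + (1 - t)*A.
    J: (N, 3) clear image, t: (N,) transmission, A: (3,) airlight."""
    t = t[:, None]                     # broadcast the transmission over R,G,B
    return t * J + (1.0 - t) * A

# Full transmission leaves the pixel untouched; zero transmission yields pure airlight.
J = np.array([[0.2, 0.4, 0.6]])
A = np.array([1.0, 1.0, 1.0])
print(apply_koschmieder(J, np.array([1.0]), A))
print(apply_koschmieder(J, np.array([0.0]), A))
```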

There exist a large number of image dehazing methods based on imposing Eq. (1) as a constraint on the solution. However, there also exists a second type of method, based on applying image enhancement or image fusion techniques to the original hazy image. This second type has proven effective at removing haze from images, but it does not rely on a physical model. In this paper we propose a post-processing procedure for this second type of method. Our goal is to obtain a final result as close as possible to the solution of the original algorithm while satisfying the constraint given by Eq. (1). Our proposed solution can therefore be understood as a bridge linking the two types of methods.

This paper is an extension of our conference work presented in [2]. In particular, we have modified our original formulation to constrain the transmission to a DCT basis (effectively, the transmission is modelled as smoothly varying across the scene), and we have performed a much larger number of experiments, in which we numerically show that this new approach outperforms both the original non-physical dehazing methods and our previous work presented in [2].

2. Related work

Image dehazing has arisen as a prolific topic of research in recent years. This increased interest is mostly related to its importance as a pre-processing tool for computer vision methods that need to work in the wild. Some particular examples are surveillance and tracking through CCTV cameras, or the autonomous driving of vehicles and drones.

In this section we divide the proposed methods into two classes: physically-based methods and image processing approaches.

Physically-based methods: These methods search for a single transmission $\boldsymbol {t}$ and an airlight vector $\boldsymbol {A}$. Once these two quantities are found, they obtain the haze-free image $\boldsymbol {J_{x,\cdot }}$ by inverting Eq. (1). This said, solving for $\boldsymbol {t}$ and $\boldsymbol {A}$ is an underconstrained problem, which can only be solved by placing assumptions on the form of the final solution. Some examples of this type of method are [3], [4], [5], and [6]. A special mention should be given to the Dark Channel prior [7] (probably the most used image dehazing method), where the authors assume that the minimum of an image region over the three color channels should be zero. The Dark Channel prior has been largely extended and improved, for example in [8–13]. Learning-based techniques have also been studied for this case; some examples are [14], [15]. Recently, deep learning techniques have also been used [16], [17].

Image processing approaches: These methods aim to modify the original image to compensate for the visual effect of haze. In particular, they usually focus on the amount of contrast, saturation, or other possible indicators of the presence of haze, and try to compensate for them. For example, [18] proposed to remove contrast loss in hazy images through a linear model of the presence of excessive brightness, based on the ratio between the local mean and standard deviation. In [19,20] the authors use a multiscale image fusion approach in which they blend several images derived from the input, such as a white-balanced and a contrast-enhanced version of it. Different approaches based on models of the Human Visual System (HVS), such as Retinex, have also been proposed [21–26]. The authors of [27] proposed a combination of the last two approaches: a variational formulation based on the HVS combined with a fusion-based approach. Very recently, a dual relation between image dehazing and Retinex has been proven [28]: any threshold-free Retinex method applied on inverse intensities performs image dehazing. Finally, machine-learning techniques have also been used for this type of method; for example, a haze density predictor based on natural scene statistics was presented in [29].

There are very few methods focusing on the removal of artifacts in image dehazing. Matlin and Milanfar [30] proposed an iterative regression method that simultaneously performs denoising and dehazing. Li et al. [31] proposed to decompose the original image into high and low frequencies, performing image dehazing only on the low frequencies, thus avoiding blocking artifacts. Chen et al. [32] applied both a smoothing filter for the refinement of the transmission and an energy minimization to avoid the appearance of gradients that were not present in the original image.

3. Coupled iterative minimization for image dehazing

In this paper we focus on the post-processing of dehazing methods that do not enforce a physical model, i.e., mostly those listed as image processing approaches in the previous section. Our goal is that, given an original hazy image $\boldsymbol {I}$ and the solution $\boldsymbol {J^{np}}$ of a dehazing method that does not fulfil a physical model, we obtain a new dehazing result $\boldsymbol {J^{our}}$ that:

  • Satisfies the constraint given by Eq. (1);
  • Is as close as possible to the initial solution $\boldsymbol {J^{np}}$.
The most straightforward approach to accomplish both requirements is to minimize the error in Eq. (1) when the result of the image processing method $\boldsymbol {J^{np}}$ is considered. As an aid to our derivations below, we represent color and scalar images as $N$-by-$3$ and $N$-by-$1$ matrices respectively, where $N$ denotes the total number of pixels in the image. Mathematically, we can write this minimization in matrix form as
$$\{\boldsymbol{A^{our}},\boldsymbol{t^{our}}\}=\arg\min_{\boldsymbol{A^*},\boldsymbol{t^{*}}} \| (\boldsymbol{1}-\boldsymbol{t^{*}})\cdot \boldsymbol{A^*}-\boldsymbol{I}+\boldsymbol{T^{*}} \cdot \boldsymbol{J^{np}}\|_2,$$
where $\boldsymbol {1}$ is an $N$-by-$1$ vector with a value of $1$ in every entry, $\boldsymbol {t^{*}}$ is an $N$-by-$1$ vector that represents the transmission, $\boldsymbol {A^{*}}$ is a $1$-by-$3$ vector that provides the airlight, $\boldsymbol {I}$ and $\boldsymbol {J^{np}}$ are $N$-by-$3$ matrices representing the input image and the non-physical dehazing solution, $N$ is the number of pixels, and $\boldsymbol {T^{*}}$ is an $N$-by-$N$ diagonal matrix whose diagonal holds the values of $\boldsymbol {t^{*}}$.

Intuitively, it is easy to see that we need to perform the minimization of Eq. (2) iteratively in two different dimensions. In particular, when looking for $\boldsymbol {t^{our}}$ we need to perform the minimization for each pixel $x$ of the image over the three color channels, while when looking for $\boldsymbol {A^{our}}$ we need to perform the minimization for each color channel $c$ over all the pixels.

In the next paragraphs we explain how we perform each of these two minimizations.

Minimizing for $\boldsymbol {t^{our}}$: Let us start by supposing that we have an initial value for $\boldsymbol {A^{our}}$. This is a standard situation in many image dehazing works, where it is usually supposed that either $\boldsymbol {A}=[1,1,1]$ or $\boldsymbol {A}=[\max (\boldsymbol {I_{\cdot ,R}}),\max (\boldsymbol {I_{\cdot ,G}}), \max (\boldsymbol {I_{\cdot ,B}})]$. Let us denote by $\boldsymbol {\Lambda }$ the $N$-by-$3$ matrix obtained by replicating $\boldsymbol {A^{our}}$ for the $N$ image pixels. Then, our minimization for the transmission can be rewritten as
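The two usual initializations mentioned above can be sketched as follows (the helper name `init_airlight` and its `mode` argument are our own choices, not from the paper):

```python
import numpy as np

def init_airlight(I, mode="white"):
    """Two common airlight initialisations for an (N, 3) hazy image I:
    'white'  -> A = [1, 1, 1];  'maxrgb' -> per-channel maximum of I."""
    if mode == "white":
        return np.ones(3)
    return I.max(axis=0)

I = np.array([[0.1, 0.5, 0.3],
              [0.8, 0.2, 0.9]])
print(init_airlight(I))             # white airlight
print(init_airlight(I, "maxrgb"))   # brightest value of each channel
```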

$$\boldsymbol{t^{our}}=diag(\boldsymbol{T}) \quad | \quad \boldsymbol{T}=\arg\min_{\boldsymbol{T}^{*}} \|(\boldsymbol{I}-\boldsymbol{\Lambda})-\boldsymbol{T}^{*}(\boldsymbol{J^{np}}-\boldsymbol{\Lambda}) \|_{2}.$$
Note that $(\boldsymbol {I}-\boldsymbol {\Lambda })$ and $(\boldsymbol {J^{np}}-\boldsymbol {\Lambda })$ are both $N$-by-$3$ matrices, so the solution of Eq. (3) amounts to scaling the rows of $(\boldsymbol {J^{np}}-\boldsymbol {\Lambda })$ to match $(\boldsymbol {I}-\boldsymbol {\Lambda })$. This minimization has the same structure as the one considered in the Alternating Least Squares method [33], and can therefore be constrained by the use of basis functions. We thus impose a further constraint on $\boldsymbol {t^{our}}$: the per-pixel multiplication implied by $\boldsymbol {T}$ should be smooth. We implement smoothness by enforcing $\boldsymbol {T}$ to be a linear combination of the first few terms of a DCT expansion. The new smooth adjustment, which we call $\boldsymbol {T^{DCT}}$, is calculated in three steps. First, we map $(\boldsymbol {I}-\boldsymbol {\Lambda })$ and $(\boldsymbol {J^{np}}-\boldsymbol {\Lambda })$ to images (with $P \times Q$ pixels) $(\underline {\boldsymbol {I}-\boldsymbol {\Lambda }})(x,y)$ and $(\underline {\boldsymbol {J^{np}}-\boldsymbol {\Lambda }})(x,y)$, where the underscore indicates that these are RGB images (each pixel holds 3 values) and $(x,y)$ indexes the pixel location. Then, we find the image that minimizes
$$\begin{aligned} {T}(x,y)= \arg\min_{T^{*}(x,y)} \|(\underline{\boldsymbol{I}-\boldsymbol{\Lambda}})(x,y)-{T}^{*}(x,y) (\underline{\boldsymbol{J^{np}}-\boldsymbol{\Lambda}})(x,y) \|_{2},\\ s.t.\;{T}^{*}(x,y)=\sum_{k=1}^K \alpha_k G_k(x,y), \end{aligned}$$
where $G_k(\cdot )$ represents the $k$th DCT basis image. Finally, we map the recovered image back to the diagonal matrix representation: ${T}(x,y)\rightarrow \boldsymbol {T^{DCT}}$.

The weight vector $\boldsymbol {\alpha } = \{\alpha _1,\ldots , \alpha _K\}$ in Eq. (4) is computed as follows. Let ${(\boldsymbol {J^{np}}-\boldsymbol {\Lambda })}_j$ denote the $j$th color channel of the image stretched out as a vector, and let $\boldsymbol {G}_k$ denote the $k$th basis image stretched out as a vector. Then, for each of the three color channels we calculate $K$ vectors as the following pixel-wise products: $\boldsymbol {H}_{j,1}= {{(\boldsymbol {J^{np}}-\boldsymbol {\Lambda })}_j}\cdot {\boldsymbol {G}_1}$, $\boldsymbol {H}_{j,2}={{(\boldsymbol {J^{np}}-\boldsymbol {\Lambda })}_j}\cdot {\boldsymbol {G}_2}$, $\cdots$, $\boldsymbol {H}_{j,K}={{(\boldsymbol {J^{np}}-\boldsymbol {\Lambda })}_j}\cdot {\boldsymbol {G}_K}$. With those vectors, we form a $3N \times K$ matrix $\boldsymbol {H}$ as

$$\boldsymbol{H}=\begin{bmatrix} \boldsymbol{H}_{r,1} & \cdots & \boldsymbol{H}_{r,K} \\ \boldsymbol{H}_{g,1} & \cdots & \boldsymbol{H}_{g,K} \\ \boldsymbol{H}_{b,1} & \cdots & \boldsymbol{H}_{b,K} \\ \end{bmatrix}.$$
Similarly, we create a $3N \times 1$ vector $\boldsymbol {u}$ as
$$\boldsymbol{u}=\begin{bmatrix} {(\boldsymbol{I}-\boldsymbol{\Lambda})}_r \\ {(\boldsymbol{I}-\boldsymbol{\Lambda})}_g \\ {(\boldsymbol{I}-\boldsymbol{\Lambda})}_b\\ \end{bmatrix}.$$
Finally, the weight vector $\boldsymbol {\alpha }$ is obtained as follows
$$\boldsymbol{\alpha}=\boldsymbol{H}^{+} \boldsymbol{u}$$
where $^+$ denotes the pseudo-inverse.
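The construction of $\boldsymbol{H}$, $\boldsymbol{u}$, and $\boldsymbol{\alpha}$ described above can be sketched in numpy. The function names, the zig-zag ordering of the DCT basis, and the use of `numpy.linalg.pinv` for Eq. (7) are our own implementation assumptions, not details given in the paper:

```python
import numpy as np

def dct_basis(P, Q, K):
    """First K 2-D DCT basis images of size P-by-Q, ordered by increasing
    total frequency (a zig-zag style ordering; an implementation choice)."""
    x, y = np.arange(P), np.arange(Q)
    basis = []
    orders = sorted((p + q, p, q) for p in range(P) for q in range(Q))
    for _, p, q in orders[:K]:
        gp = np.cos(np.pi * (x + 0.5) * p / P)
        gq = np.cos(np.pi * (y + 0.5) * q / Q)
        basis.append(np.outer(gp, gq))
    return basis  # list of K (P, Q) arrays

def solve_transmission(I, Jnp, A, K=10):
    """Least-squares fit of a smooth transmission map, Eqs. (4)-(7):
    T(x, y) = sum_k alpha_k G_k(x, y), with alpha = H^+ u.
    I, Jnp: (P, Q, 3) images; A: (3,) airlight."""
    P, Q, _ = I.shape
    G = dct_basis(P, Q, K)
    # Rows of H stack, channel by channel, the pixel-wise products (Jnp - A)_j * G_k.
    D = (Jnp - A).transpose(2, 0, 1).reshape(3, -1)        # 3 x N, N = P*Q
    Gm = np.stack([g.ravel() for g in G])                   # K x N
    H = (D[:, :, None] * Gm.T[None, :, :]).reshape(3 * P * Q, K)
    u = (I - A).transpose(2, 0, 1).reshape(-1)              # 3N vector, Eq. (6)
    alpha = np.linalg.pinv(H) @ u                           # Eq. (7)
    T = sum(a * g for a, g in zip(alpha, G))                # smooth transmission image
    return np.clip(T, 1e-3, 1.0)
```

As a sanity check, an image synthesised with a constant transmission (which lies in the span of the first DCT basis image) is recovered exactly by this fit.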

Minimizing for $\boldsymbol {A^{our}}$: Let us now focus on the minimization of $\boldsymbol {A^{our}}$ given a value for $\boldsymbol {t^{our}}$. In this case, let us denote as $\boldsymbol {T^{our}}$ the $N$-by-$N$ matrix that has zeros everywhere except in the diagonal, where it has the values of $\boldsymbol {t^{our}}$. In this way, the minimization can be rewritten as

$$\boldsymbol{A^{our}}=\arg\min_{\boldsymbol{A^*}} \|(\boldsymbol{1}-\boldsymbol{t^{our}}) \cdot \boldsymbol{A^*}-\boldsymbol{I}+\boldsymbol{T^{our}} \cdot\boldsymbol{J^{np}} \|_2.$$
To perform this last minimization, we minimize the error for each color channel individually.
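A minimal sketch of this per-channel minimization, assuming the closed-form least-squares solution is used (the function name `solve_airlight` is ours):

```python
import numpy as np

def solve_airlight(I, Jnp, t):
    """Per-channel least-squares airlight given a fixed transmission t:
    minimise ||(1 - t) * A_c - (I_c - t * Jnp_c)||_2 for each channel c.
    I, Jnp: (N, 3); t: (N,). Each channel is an ordinary 1-D least squares."""
    w = 1.0 - t                      # (N,) weights multiplying the unknown A_c
    r = I - t[:, None] * Jnp         # (N, 3) target: I - T * Jnp
    return (w @ r) / (w @ w)         # (3,) airlight

# If I was synthesised with a known airlight, it is recovered exactly.
rng = np.random.default_rng(1)
Jnp = rng.uniform(0, 1, (100, 3))
t = rng.uniform(0.2, 0.9, 100)
A_true = np.array([0.9, 0.95, 1.0])
I = t[:, None] * Jnp + (1 - t)[:, None] * A_true
print(solve_airlight(I, Jnp, t))   # ≈ [0.9, 0.95, 1.0]
```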

Performing the iterative minimization: The previous minimizations are finally combined in an iterative manner. This means the value found for $\boldsymbol {t^{our}}$ in an iteration ($it$) is used for obtaining $\boldsymbol {A^{our}}$ at the same iteration, and this latter value is used in the following iteration ($it+1$) for obtaining the new value of $\boldsymbol {t^{our}}$.

Once the method has run for the desired number of iterations, or a stopping criterion is met, our final result is computed as

$$\boldsymbol{J^{our}_{x,\cdot}}=\frac{\boldsymbol{I^{or}_{x,\cdot}}-(1-t^{our}_x)\boldsymbol{A^{our}}}{t^{our}_x}$$
where $x$ is a particular image pixel, and $\boldsymbol {J_{x,\cdot }^{our}}$, $\boldsymbol {I_{x,\cdot }^{or}}$ are the $1$-by-$3$ vectors of the R,G,B values at pixel $x$ of our result and of the original hazy image, respectively.

Pseudocode for our method can be found in Algorithm 1.
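For concreteness, the overall alternating scheme can be sketched as follows. For brevity, this sketch uses the simple per-pixel transmission of Eq. (3) rather than the DCT-constrained one; all names and numerical guards are our own choices:

```python
import numpy as np

def coupled_dehaze(I, Jnp, A0=(1.0, 1.0, 1.0), n_iter=10):
    """Sketch of the iterative coupling: alternate the t-step and the A-step,
    then invert Koschmieder's model (Eq. (8)) to obtain the final image.
    I, Jnp: (N, 3) hazy image and non-physical dehazing result."""
    A = np.asarray(A0, float)
    for _ in range(n_iter):
        # t-step: per-pixel least squares over the three colour channels.
        d = Jnp - A                                   # (N, 3)
        t = np.sum((I - A) * d, axis=1) / np.maximum(np.sum(d * d, axis=1), 1e-9)
        t = np.clip(t, 1e-3, 1.0)
        # A-step: per-channel least squares over all pixels.
        w = 1.0 - t
        A = (w @ (I - t[:, None] * Jnp)) / max(w @ w, 1e-9)
    # Final recovery by inverting Koschmieder's model.
    return (I - (1.0 - t)[:, None] * A) / t[:, None], t, A
```

When the hazy image is exactly consistent with the model, the scheme returns $\boldsymbol{J^{np}}$ itself, as intended by the formulation.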

4. Experiments and results

We have performed different experiments to assess the performance of our approach. First, we study how our iterative minimization for $\boldsymbol {t^{our}}$ and $\boldsymbol {A^{our}}$ affects the output image $\boldsymbol {J^{our}}$. Then, we show some qualitative results where our method clearly outperforms the original dehazing methods. Later, we show how our method improves on the original dehazing methods quantitatively, considering both reference-based and non-reference image metrics. At the end of the section we also validate our method through a psychophysical experiment where observers were asked to select their preferred image. Throughout this section, we compare our method against the following original dehazing algorithms: the EVID method [21], the FVID method [27], the Choi et al. method [29], the Wang et al. method [26], and the use of two Retinex algorithms -SRIE [34] and MSCR [35]- as dual solutions for the dehazing problem, as suggested in [28]. For our method we have considered $10$ iterations. The number of DCT basis functions considered for our coupled-DCT method is $10$ -i.e. we compute DCT basis functions up to order 4- unless otherwise stated. Also, we set $A^0=[1,1,1]$ for all the quantitative and psychophysical evaluations.

4.1 On reaching steady state for image $\boldsymbol {J^{our}}$

Our minimization looks for $\boldsymbol {t^{our}}$ and $\boldsymbol {A^{our}}$, but we are interested in the image $\boldsymbol {J^{our}}$ as our final result. Therefore, it is natural to wonder about the effect the iterative minimization of $\boldsymbol {t^{our}}$ and $\boldsymbol {A^{our}}$ has on the image $\boldsymbol {J^{our}}$. In particular, it is interesting to study how the image $\boldsymbol {J^{our}}$ reaches steady state. To this end, Fig. 1 shows the difference between two consecutive iterations of the output image $J_{x,c}^{our}$ -where $c$ denotes the R,G,B channels- for the set of $500$ hazy images proposed in the FADE dataset by Choi et al. [29]. We compute this difference as a Mean Square Error (MSE), which for iteration $k$ is defined as

$$MSE(k)=\frac{1}{3 \cdot N} \sum_{j=1}^{3} \sum_{i=1}^{N} (J^{our}_{i,j}(k+1)-J^{our}_{i,j}(k))^2,$$
where $N$ is the total number of pixels. For visualization purposes, we show the cube root of the MSE in the figure. We can clearly see that for all the methods the difference ends up being negligible, signifying that in practice the image $\boldsymbol {J^{our}}$ reaches steady state without any significant problems.
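Eq. (9) amounts to a one-line numpy helper (the name `iteration_mse` is ours):

```python
import numpy as np

def iteration_mse(J_next, J_prev):
    """Eq. (9): mean squared difference, over all pixels and channels,
    between the outputs of two consecutive iterations."""
    return np.mean((J_next - J_prev) ** 2)

print(iteration_mse(np.ones((10, 3)), np.ones((10, 3))))   # identical iterates -> 0.0
print(iteration_mse(np.ones((10, 3)), np.zeros((10, 3))))  # unit change everywhere -> 1.0
```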


Fig. 1. Study of the effect of the iterations on the steady state of $\boldsymbol {J^{our}}$ for different original algorithms on the 500 images of the dataset of Choi et al. We can clearly see that for any original algorithm $\boldsymbol {J^{our}}$ reaches steady state.


4.2 Qualitative results

Figure 2 presents some visual results of our approach applied to the $6$ non-physical methods selected, with two different starting airlights: $\boldsymbol {A^0}=[1,1,1]$ and $\boldsymbol {A^0}=[\max (\boldsymbol {I_{\cdot ,R}}),\max (\boldsymbol {I_{\cdot ,G}}), \max (\boldsymbol {I_{\cdot ,B}})]$.


Fig. 2. Qualitative results of our approach for 6 different non-physical dehazing methods and 2 different starting airlights. Our method improves all the original methods. Furthermore, our results for both airlights are very similar, showing the robustness of our approach.


In terms of the starting airlight $\boldsymbol A^0$ (last two columns of the figure), we can clearly see that our approach gives very similar results for both choices, showing that it is very robust in this respect.

Looking now at the different algorithms -each algorithm is a different row in the figure-, we can clearly see that in the case of the Choi et al. algorithm our method is able to correct the excessive saturation present in the field, outputting more natural colors. In the case of the EVID algorithm, our approach corrects the over-contrast introduced by the non-physical method on the cow, grass and rocks. Similarly, the over-contrast is also corrected for the Wang et al. method, especially noticeable in the tree and the close vegetation, and for the Ret-MSCR method in the grass and nearby ducks.

In the case of the FVID algorithm, we can clearly see that our approach corrects the artifacts that the original method introduces in the sky. Similarly, the Ret-SRIE method presents a halo artifact around the main building in the image that is clearly alleviated by our approach.

In summary, this figure illustrates the two main advantages of applying our post-processing approach. First, it is able to correct over-saturation and over-contrast problems, and second, it is able to alleviate the artifacts that can appear when dehazing an image.

4.3 Quantitative results

4.3.1 Non-reference metrics

In this subsection we study the performance of our method when considering non-reference metrics. To this end, we consider the set of $500$ hazy images proposed by Choi et al. in [29]. We evaluate our results with respect to two very well-known non-reference image metrics: NIQE [36] and BRISQUE [37]. For both metrics, a smaller value means a better method. Table 1 shows the results for the $6$ methods considered in this paper. We can see how the simple coupled method is already able to outperform the original method for almost all of those tested. Our coupled-DCT approach drops the error metrics even further, outperforming the original methods and the coupled approach in $10$ and $9$ out of $12$ cases, respectively.


Table 1. Results reported as the mean for all the 500 images in the Choi et al. dataset.

4.3.2 Reference metrics

In this subsection we focus on reference-based metrics. In this case, we need a dataset that presents pairs of hazy and clean (ground-truth) images. We have selected the Middlebury set of the D-Hazy dataset [38]. In this case, images are indoor, and for this reason we run our method with a higher number of DCT basis functions: $55$ (i.e. we compute DCT basis functions up to order $10$). In this subsection we look at $3$ different metrics: the CID [39], which is a color extension of SSIM, the perceptual color difference $\Delta _{E_{00}}$, and the Visual Information Fidelity (VIF) metric [40]. In the case of the CID metric and the $\Delta _{E_{00}}$, lower values mean better methods. For the VIF metric, the closer the value is to $1$, the better the method, as this means that the result and the ground truth are equal in terms of the visual information present in the images. A VIF value larger than $1$ means that the result is over-enhanced, while a VIF value smaller than $1$ means that the result is under-enhanced.

Results are shown in Table 2. We can clearly see that our coupled-DCT approach outperforms all the others in $16$ out of $18$ cases. Also, the simple coupled method outperforms the original dehazing method in $10$ cases and draws with it in another $3$ cases (see the results for RET-SRIE).


Table 2. Results reported as the mean for all the 23 images in the Middlebury D-Hazy dataset.

4.4 Preference ranking

We also performed a psychophysical experiment for which details are given below.

4.4.1 Subjects

Twelve subjects completed the experiment. None of them is an author of this paper. All observers were tested for normal color vision using the Ishihara color blindness test. Ethics approval was granted by the Comité Ético de Investigación Clínica, Parc de Salut MAR, Barcelona, Spain, and all procedures complied with the Declaration of Helsinki.

4.4.2 Apparatus

The experiment was conducted on an AOC I2781FH LCD monitor set to “sRGB” mode, with a luminance range from $0.1\,cd/m^{2}$ to $175\,cd/m^{2}$ and spatial and temporal resolutions of 1920 by 1080 pixels and 60 Hz, respectively. The display was viewed at a distance of approximately 70 cm so that 40 pixels subtended 1 degree of visual angle. The full display subtended 49 by 27.5 degrees. The decoding nonlinearity of the monitor was measured using a Konica Minolta LS-100 photometer and was found to be closely approximated by a gamma function with an exponent of 2.2. Stimuli were generated under Ubuntu 15.04 running MATLAB (MathWorks) with functions from the Psychtoolbox [41,42]. The experiment was conducted in a dark room.

4.4.3 Stimuli

25 randomly selected images were taken from the FADE dataset [29]; they are shown in Fig. 3. For each image, the six original dehazing methods listed at the beginning of this section were computed. Then, the coupled-DCT approach proposed in this paper with $10$ DCT basis functions -i.e. the same parameters used for this dataset before- was also computed for each of the original methods.


Fig. 3. Images used in the psychophysical experiment.


4.4.4 Procedure

The experiment was independently run for each of the $6$ original dehazing methods. The dehazed images -the result of the original method and the result of our coupled-DCT approach- were viewed on either side of the original hazy image. Subjects were asked to select the image that they preferred out of the two dehazed images. The total number of comparisons was $150$ -$25$ comparisons for each of the $6$ original dehazing methods-. On average, the experiment took around 25 minutes.

4.4.5 Analysis of the results

We have analyzed the results of our experiment in terms of the Thurstone Case V Law of Comparative Judgment. Figure 4 presents the results for the whole set of $150$ comparisons. We can clearly see that our approach is preferred over the original non-physical dehazing methods, with statistical significance.
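For a two-alternative comparison like ours, a Thurstone Case V scale difference reduces to the inverse normal CDF of the preference proportion. A minimal sketch follows; the helper name and the clamping of unanimous preferences are our own choices, as the paper does not give implementation details:

```python
import numpy as np
from statistics import NormalDist

def thurstone_case_v(wins, total):
    """Thurstone Case V scale difference between two stimuli from the
    proportion of comparisons in which one was preferred: z = Phi^{-1}(p).
    Unanimous preferences are clamped to avoid infinite z-scores."""
    p = float(np.clip(wins / total, 0.5 / total, 1.0 - 0.5 / total))
    return NormalDist().inv_cdf(p)

# A method preferred in 120 of 150 comparisons gets a positive scale value;
# 75 of 150 (chance level) maps to zero.
print(thurstone_case_v(120, 150))
print(thurstone_case_v(75, 150))
```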


Fig. 4. Results of the psychophysical experiment using the Thurstone Case V test for the whole set of $150$ comparisons.


Results for each individual original algorithm are presented in Fig. 5. We can clearly see that our coupled-DCT approach is statistically preferred over the original method in all cases, showing that it generalizes very well to different non-physical dehazing methods. These results also validate the effectiveness shown by our coupled-DCT method on most of the image metrics tested.


Fig. 5. Results of the psychophysical experiment using the Thurstone Case V test for each of the non-physical dehazing methods considered in this work.


5. Conclusions

We have presented an approach that imposes physically plausible behaviour on non-physical dehazing methods. Its core idea is an iterative coupling of the color channels, inspired by the Alternating Least Squares (ALS) method [33]. We have shown that our method outperforms the original non-physical dehazing methods both qualitatively and quantitatively -in terms of both reference and non-reference metrics-. Finally, our method was also validated through psychophysical tests.

Funding

Horizon 2020 Framework Programme (761544, 780470); Engineering and Physical Sciences Research Council (EP/028730, EP/M001768); Spanish Government MINECO and Feder Fund (PGC2018-099651-B-I00).

Acknowledgments

We want to thank all the observers that participated in the preference study.

Disclosures

GF is a visiting professor at Simon Fraser University and at the University of Leeds. He currently has a joint project with the University of Cambridge.

References

1. H. Koschmieder, Theorie der horizontalen Sichtweite: Kontrast und Sichtweite (Keim & Nemnich, 1925).

2. J. Vazquez-Corral, G. D. Finlayson, and M. Bertalmío, “Physically plausible dehazing for non-physical dehazing algorithms,” in Computational Color Imaging, S. Tominaga, R. Schettini, A. Trémeau, and T. Horiuchi, eds. (Springer International Publishing, 2019), pp. 233–244.

3. R. Tan, “Visibility in bad weather from a single image,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2008), pp. 1–8.

4. R. Fattal, “Single Image Dehazing,” in ACM SIGGRAPH 2008 Papers, (ACM, 2008), SIGGRAPH ’08, pp. 72:1–72:9.

5. K. Nishino, L. Kratz, and S. Lombardi, “Bayesian Defogging,” Int. J. Comput. Vis. 98(3), 263–278 (2012). [CrossRef]  

6. J.-P. Tarel and N. Hautiere, “Fast visibility restoration from a single color or gray level image,” in Proceedings of IEEE International Conference on Computer Vision, (IEEE, 2009), pp. 2201–2208.

7. K. He, J. Sun, and X. Tang, “Single Image Haze Removal Using Dark Channel Prior,” IEEE Transactions on Pattern Analysis Mach. Intell. 33(12), 2341–2353 (2011). [CrossRef]  

8. G. Meng, Y. Wang, J. Duan, S. Xiang, and C. Pan, “Efficient Image Dehazing with Boundary Constraint and Contextual Regularization,” in Proceedings of IEEE International Conference on Computer Vision (ICCV), (IEEE, 2013), pp. 617–624.

9. W. Sun, “A new single-image fog removal algorithm based on physical model,” Optik 124(21), 4770–4775 (2013). [CrossRef]  

10. Y. Gao, H.-M. Hu, S. Wang, and B. Li, “A fast image dehazing algorithm based on negative correction,” Signal Process. 103, 380–398 (2014). [CrossRef]  

11. J.-B. Wang, N. He, L.-L. Zhang, and K. Lu, “Single image dehazing with a physical model and dark channel prior,” Neurocomputing 149, 718–728 (2015). [CrossRef]  

12. Z. Li and J. Zheng, “Edge-Preserving Decomposition-Based Single Image Haze Removal,” IEEE Transactions on Image Process. 24(12), 5432–5441 (2015). [CrossRef]  

13. Y.-H. Lai, Y.-L. Chen, C.-J. Chiou, and C.-T. Hsu, “Single-Image Dehazing via Optimal Transmission Map Under Scene Priors,” IEEE Transactions on Circuits Syst. for Video Technol. 25(1), 1–14 (2015).

14. K. Tang, J. Yang, and J. Wang, “Investigating Haze-Relevant Features in a Learning Framework for Image Dehazing,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2014), pp. 2995–3002.

15. Q. Zhu, J. Mai, and L. Shao, “A Fast Single Image Haze Removal Algorithm Using Color Attenuation Prior,” IEEE Transactions on Image Process. 24(11), 3522–3533 (2015). [CrossRef]  

16. B. Cai, X. Xu, K. Jia, C. Qing, and D. Tao, “DehazeNet: An End-to-End System for Single Image Haze Removal,” arXiv:1601.07661 (2016).

17. H. Zhang and V. M. Patel, “Densely connected pyramid dehazing network,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2018), pp. 3194–3203.

18. J. Oakley and H. Bu, “Correction of Simple Contrast Loss in Color Images,” IEEE Transactions on Image Process. 16(2), 511–522 (2007). [CrossRef]  

19. C. O. Ancuti, C. Ancuti, C. Hermans, and P. Bekaert, “A Fast Semi-inverse Approach to Detect and Remove the Haze from a Single Image,” in Asian Conference on Computer Vision, ACCV 2010, (2010), 6493, pp. 501–514.

20. C. Ancuti and C. Ancuti, “Single Image Dehazing by Multi-Scale Fusion,” IEEE Transactions on Image Process. 22(8), 3271–3282 (2013). [CrossRef]  

21. A. Galdran, J. Vazquez-Corral, D. Pardo, and M. Bertalmío, “Enhanced Variational Image Dehazing,” SIAM J. on Imaging Sci. 8(3), 1519–1546 (2015). [CrossRef]  

22. V. De Dravo and J. Hardeberg, “Stress for dehazing,” in Colour and Visual Computing Symposium (CVCS), 2015, (2015), pp. 1–6.

23. X.-S. Zhang, S.-B. Gao, C.-Y. Li, and Y.-J. Li, “A Retina Inspired Model for Enhancing Visibility of Hazy Images,” Front. Comput. Neurosci. 9, 151 (2015). [CrossRef]  

24. Y. Wang, H. Wang, C. Yin, and M. Dai, “Biologically inspired image enhancement based on Retinex,” Neurocomputing 177, 373–384 (2016). [CrossRef]  

25. J. Vazquez-Corral, A. Galdran, P. Cyriac, and M. Bertalmío, “A fast image dehazing method that does not introduce color artifacts,” J. Real-Time Image Process., doi: 10.1007/s11554-018-0816-6 (2018), in press.

26. S. Wang, W. Cho, J. Jang, M. A. Abidi, and J. Paik, “Contrast-dependent saturation adjustment for outdoor image enhancement,” J. Opt. Soc. Am. A 34(1), 7–17 (2017). [CrossRef]  

27. A. Galdran, J. Vazquez-Corral, D. Pardo, and M. Bertalmío, “Fusion-based variational image dehazing,” IEEE Signal Process. Lett. 24, 1 (2016). [CrossRef]  

28. A. Galdran, A. Alvarez-Gila, A. Bria, J. Vazquez-Corral, and M. Bertalmío, “On the duality between retinex and image dehazing,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2018).

29. L. K. Choi, J. You, and A. C. Bovik, “Referenceless Prediction of Perceptual Fog Density and Perceptual Image Defogging,” IEEE Transactions on Image Process. 24(11), 3888–3901 (2015). [CrossRef]  

30. E. Matlin and P. Milanfar, “Removal of haze and noise from a single image,” Proc. SPIE 8296, 82960T (2012). [CrossRef]  

31. Y. Li, F. Guo, R. T. Tan, and M. S. Brown, “A contrast enhancement framework with jpeg artifacts suppression,” in Computer Vision - ECCV 2014 - 13th European Conference, (2014), pp. 174–188.

32. C. Chen, M. N. Do, and J. Wang, “Robust image and video dehazing with visual artifact suppression via gradient residual minimization,” in Computer Vision - ECCV 2016 - 14th European Conference, (2016), pp. 576–591.

33. G. D. Finlayson, M. M. Darrodi, and M. Mackiewicz, “The alternating least squares technique for nonuniform intensity color correction,” Color Res. Appl. 40(3), 232–242 (2015). [CrossRef]  

34. X. Fu, D. Zeng, Y. Huang, X. P. Zhang, and X. Ding, “A weighted variational model for simultaneous reflectance and illumination estimation,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2016), pp. 2782–2790.

35. A. B. Petro, C. Sbert, and J.-M. Morel, “Multiscale Retinex,” Image Processing On Line 4, 71–88 (2014). [CrossRef]  

36. A. Mittal, R. Soundararajan, and A. Bovik, “Making a completely blind image quality analyzer,” IEEE Signal Process. Lett. 20(3), 209–212 (2013). [CrossRef]

37. A. Mittal, A. K. Moorthy, and A. C. Bovik, “No-reference image quality assessment in the spatial domain,” IEEE Trans. Image Process. 21(12), 4695–4708 (2012). [CrossRef]

38. C. Ancuti, C. O. Ancuti, and C. D. Vleeschouwer, “D-hazy: A dataset to evaluate quantitatively dehazing algorithms,” in Proceedings of the IEEE International Conference on Image Processing, (IEEE, 2016), ICIP’16.

39. I. Lissner, J. Preiss, P. Urban, M. S. Lichtenauer, and P. Zolliker, “Image-difference prediction: From grayscale to color,” IEEE Trans. Image Process. 22(2), 435–446 (2013). [CrossRef]

40. H. R. Sheikh and A. C. Bovik, “Image information and visual quality,” IEEE Trans. Image Process. 15(2), 430–444 (2006). [CrossRef]

41. D. H. Brainard, “The Psychophysics Toolbox,” Spat. Vis. 10(4), 433–436 (1997). [CrossRef]

42. D. G. Pelli, “The VideoToolbox software for visual psychophysics: transforming numbers into movies,” Spat. Vis. 10(4), 437–442 (1997). [CrossRef]


Xiang, S.

G. Meng, Y. Wang, J. Duan, S. Xiang, and C. Pan, “Efficient Image Dehazing with Boundary Constraint and Contextual Regularization,” in Proceedings of IEEE International Conference on Computer Vision (ICCV), (IEEE, 2013), pp. 617–624.

Xu, X.

B. Cai, X. Xu, K. Jia, C. Qing, and D. Tao, “DehazeNet: An End-to-End System for Single Image Haze Removal,” arXiv:1601.07661 (2016).

Yang, J.

K. Tang, J. Yang, and J. Wang, “Investigating Haze-Relevant Features in a Learning Framework for Image Dehazing,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2014), pp. 2995–3002.

Yin, C.

Y. Wang, H. Wang, C. Yin, and M. Dai, “Biologically inspired image enhancement based on Retinex,” Neurocomputing 177, 373–384 (2016).
[Crossref]

You, J.

L. K. Choi, J. You, and A. C. Bovik, “Referenceless Prediction of Perceptual Fog Density and Perceptual Image Defogging,” IEEE Transactions on Image Process. 24(11), 3888–3901 (2015).
[Crossref]

Zeng, D.

X. Fu, D. Zeng, Y. Huang, X. P. Zhang, and X. Ding, “A weighted variational model for simultaneous reflectance and illumination estimation,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2016), pp. 2782–2790.

Zhang, H.

H. Zhang and V. M. Patel, “Densely connected pyramid dehazing network,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2018), pp. 3194–3203.

Zhang, L.-L.

J.-B. Wang, N. He, L.-L. Zhang, and K. Lu, “Single image dehazing with a physical model and dark channel prior,” Neurocomputing 149, 718–728 (2015).
[Crossref]

Zhang, X. P.

X. Fu, D. Zeng, Y. Huang, X. P. Zhang, and X. Ding, “A weighted variational model for simultaneous reflectance and illumination estimation,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2016), pp. 2782–2790.

Zhang, X.-S.

X.-S. Zhang, S.-B. Gao, C.-Y. Li, and Y.-J. Li, “A Retina Inspired Model for Enhancing Visibility of Hazy Images,” Front. Comput. Neurosci. 9, 151 (2015).
[Crossref]

Zheng, J.

Z. Li and J. Zheng, “Edge-Preserving Decomposition-Based Single Image Haze Removal,” IEEE Transactions on Image Process. 24(12), 5432–5441 (2015).
[Crossref]

Zhu, Q.

Q. Zhu, J. Mai, and L. Shao, “A Fast Single Image Haze Removal Algorithm Using Color Attenuation Prior,” IEEE Transactions on Image Process. 24(11), 3522–3533 (2015).
[Crossref]

Zolliker, P.

I. Lissner, J. Preiss, P. Urban, M. S. Lichtenauer, and P. Zolliker, “Image-difference prediction: From grayscale to color,” IEEE Transactions on Image Process. 22(2), 435–446 (2013).
[Crossref]

Color Res. Appl. (1)

G. D. Finlayson, M. M. Darrodi, and M. Mackiewicz, “The alternating least squares technique for nonuniform intensity color correction,” Color Res. Appl. 40(3), 232–242 (2015).
[Crossref]

Front. Comput. Neurosci. (1)

X.-S. Zhang, S.-B. Gao, C.-Y. Li, and Y.-J. Li, “A Retina Inspired Model for Enhancing Visibility of Hazy Images,” Front. Comput. Neurosci. 9, 151 (2015).
[Crossref]

IEEE Signal Process. Lett. (1)

A. Galdran, J. Vazquez-Corral, D. Pardo, and M. Bertalmío, “Fusion-based variational image dehazing,” IEEE Signal Process. Lett. 24, 1 (2016).
[Crossref]

IEEE Transactions on Circuits Syst. for Video Technol. (1)

Y.-H. Lai, Y.-L. Chen, C.-J. Chiou, and C.-T. Hsu, “Single-Image Dehazing via Optimal Transmission Map Under Scene Priors,” IEEE Transactions on Circuits Syst. for Video Technol. 25(1), 1–14 (2015).

IEEE Transactions on Image Process. (8)

Q. Zhu, J. Mai, and L. Shao, “A Fast Single Image Haze Removal Algorithm Using Color Attenuation Prior,” IEEE Transactions on Image Process. 24(11), 3522–3533 (2015).
[Crossref]

Z. Li and J. Zheng, “Edge-Preserving Decomposition-Based Single Image Haze Removal,” IEEE Transactions on Image Process. 24(12), 5432–5441 (2015).
[Crossref]

J. Oakley and H. Bu, “Correction of Simple Contrast Loss in Color Images,” IEEE Transactions on Image Process. 16(2), 511–522 (2007).
[Crossref]

C. Ancuti and C. Ancuti, “Single Image Dehazing by Multi-Scale Fusion,” IEEE Transactions on Image Process. 22(8), 3271–3282 (2013).
[Crossref]

L. K. Choi, J. You, and A. C. Bovik, “Referenceless Prediction of Perceptual Fog Density and Perceptual Image Defogging,” IEEE Transactions on Image Process. 24(11), 3888–3901 (2015).
[Crossref]

I. Lissner, J. Preiss, P. Urban, M. S. Lichtenauer, and P. Zolliker, “Image-difference prediction: From grayscale to color,” IEEE Transactions on Image Process. 22(2), 435–446 (2013).
[Crossref]

H. R. Sheikh and A. C. Bovik, “Image information and visual quality,” IEEE Transactions on Image Process. 15(2), 430–444 (2006).
[Crossref]

A. Mittal, A. K. Moorthy, and A. C. Bovik, “No-reference image quality assessment in the spatial domain,” IEEE Transactions on Image Process. 21(12), 4695–4708 (2012).
[Crossref]

IEEE Transactions on Pattern Analysis Mach. Intell. (1)

K. He, J. Sun, and X. Tang, “Single Image Haze Removal Using Dark Channel Prior,” IEEE Transactions on Pattern Analysis Mach. Intell. 33(12), 2341–2353 (2011).
[Crossref]

Image Processing On Line (1)

A. B. Petro, C. Sbert, and J.-M. Morel, “Multiscale Retinex,” Image Processing On Line 4, 71–88 (2014).
[Crossref]

Int. J. Comput. Vis. (1)

K. Nishino, L. Kratz, and S. Lombardi, “Bayesian Defogging,” Int. J. Comput. Vis. 98(3), 263–278 (2012).
[Crossref]

J. Opt. Soc. Am. A (1)

Neurocomputing (2)

J.-B. Wang, N. He, L.-L. Zhang, and K. Lu, “Single image dehazing with a physical model and dark channel prior,” Neurocomputing 149, 718–728 (2015).
[Crossref]

Y. Wang, H. Wang, C. Yin, and M. Dai, “Biologically inspired image enhancement based on Retinex,” Neurocomputing 177, 373–384 (2016).
[Crossref]

Optik (1)

W. Sun, “A new single-image fog removal algorithm based on physical model,” Optik 124(21), 4770–4775 (2013).
[Crossref]

Proc. SPIE (1)

E. Matlin and P. Milanfar, “Removal of haze and noise from a single image,” Proc. SPIE 8296, 82960T (2012).
[Crossref]

SIAM J. on Imaging Sci. (1)

A. Galdran, J. Vazquez-Corral, D. Pardo, and M. Bertalmío, “Enhanced Variational Image Dehazing,” SIAM J. on Imaging Sci. 8(3), 1519–1546 (2015).
[Crossref]

Signal Process. (1)

Y. Gao, H.-M. Hu, S. Wang, and B. Li, “A fast image dehazing algorithm based on negative correction,” Signal Process. 103, 380–398 (2014).
[Crossref]

Signal Process. Lett. IEEE (1)

A. Mittal, R. Soundararajan, and A. Bovik, “Making a completely blind image quality analyzer,” Signal Process. Lett. IEEE 20(3), 209–212 (2013).
[Crossref]

Spat Vis. (2)

D. H. Brainard, “The Psychophysics Toolbox,” Spat Vis. 10(4), 433–436 (1997).
[Crossref]

D. G. Pelli, “The VideoToolbox software for visual psychophysics: transforming numbers into movies,” Spat Vis. 10(4), 437–442 (1997).
[Crossref]

Other (17)

C. Ancuti, C. O. Ancuti, and C. D. Vleeschouwer, “D-hazy: A dataset to evaluate quantitatively dehazing algorithms,” in Proceedings of the IEEE International Conference on Image Processing, (IEEE, 2016), CIP’16.

X. Fu, D. Zeng, Y. Huang, X. P. Zhang, and X. Ding, “A weighted variational model for simultaneous reflectance and illumination estimation,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2016), pp. 2782–2790.

Y. Li, F. Guo, R. T. Tan, and M. S. Brown, “A contrast enhancement framework with jpeg artifacts suppression,” in Computer Vision - ECCV 2014 - 13th European Conference, (2014), pp. 174–188.

C. Chen, M. N. Do, and J. Wang, “Robust image and video dehazing with visual artifact suppression via gradient residual minimization, in Computer Vision - ECCV 2016 - 14th European Conference, (2016), pp. 576–591.

V. De Dravo and J. Hardeberg, “Stress for dehazing, in Colour and Visual Computing Symposium (CVCS), 2015, (2015), pp. 1–6.

C. O. Ancuti, C. Ancuti, C. Hermans, and P. Bekaert, “A Fast Semi-inverse Approach to Detect and Remove the Haze from a Single Image, in Asian Conference on Computer Vision, ACCV-2010, (2010) 6493, pp. 501–514.

J. Vazquez-Corral, A. Galdran, P. Cyriac, and M. Bertalmío, “A fast image dehazing method that does not introduce color artifacts,” Journal of Real-Time Image Processing, pp. doi: 10.1007/s11554–018–0816–6 posted 29 August 2018, in press.

A. Galdran, A. Alvarez-Gila, A. Bria, J. Vazquez-Corral, and M. Bertalmío, “On the duality between retinex and image dehazing,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2018).

B. Cai, X. Xu, K. Jia, C. Qing, and D. Tao, “DehazeNet: An End-to-End System for Single Image Haze Removal,” arXiv:1601.07661 (2016).

H. Zhang and V. M. Patel, “Densely connected pyramid dehazing network,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2018), pp. 3194–3203.

K. Tang, J. Yang, and J. Wang, “Investigating Haze-Relevant Features in a Learning Framework for Image Dehazing,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2014), pp. 2995–3002.

J.-P. Tarel and N. Hautiere, “Fast visibility restoration from a single color or gray level image,” in Proceedings of IEEE International Conference on Computer Vision, (IEEE, 2009), pp. 2201–2208.

G. Meng, Y. Wang, J. Duan, S. Xiang, and C. Pan, “Efficient Image Dehazing with Boundary Constraint and Contextual Regularization,” in Proceedings of IEEE International Conference on Computer Vision (ICCV), (IEEE, 2013), pp. 617–624.

H. Koschmieder, Theorie der horizontalen Sichtweite: Kontrast und Sichtweite (Keim & Nemnich, 1925).

J. Vazquez-Corral, G. D. Finlayson, and M. Bertalmío, “Physically plausible dehazing for non-physical dehazing algorithms,” in Computational Color Imaging, S. Tominaga, R. Schettini, A. Trémeau, and T. Horiuchi, eds. (Springer International Publishing, 2019), pp. 233–244.

R. Tan, “Visibility in bad weather from a single image,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2008), pp. 1–8.

R. Fattal, “Single Image Dehazing,” in ACM SIGGRAPH 2008 Papers, (ACM, 2008), SIGGRAPH ’08, pp. 72:1–72:9.


Figures (5)

Fig. 1. Effect of the number of iterations on the steady state of $\boldsymbol {J^{our}}$ for the different original algorithms, over the 500 images of the dataset in Choi et al. For every original algorithm, $\boldsymbol {J^{our}}$ clearly reaches a steady state.
Fig. 2. Qualitative results of our approach for 6 different non-physical dehazing methods and 2 different starting airlights. Our method improves all the original methods, and the results for both airlights are very similar, showing the robustness of our approach.
Fig. 3. Images used in the psychophysical experiment.
Fig. 4. Results of the psychophysical experiment using the Thurstone Case V test for the whole set of $150$ comparisons.
Fig. 5. Results of the psychophysical experiment using the Thurstone Case V test for each of the non-physical dehazing methods considered in this work.

Tables (2)

Table 1. Results reported as the mean over all 500 images in the Choi et al. dataset.

Table 2. Results reported as the mean over all 23 images in the Middlebury D-Hazy dataset.

Equations (10)

$$\boldsymbol{I_{x,\cdot}}=t_x \boldsymbol{J_{x,\cdot}}+(1-t_x)\boldsymbol{A},$$
$$\{\boldsymbol{A^{our}},\boldsymbol{t^{our}}\}=\mathop{\textrm{arg min}}_{\boldsymbol{A},\boldsymbol{t}}\left\|(1-\boldsymbol{t})\boldsymbol{A}-\boldsymbol{I}+T\boldsymbol{J^{np}}\right\|^2,$$
$$\boldsymbol{t^{our}}=\textrm{diag}(T)\,\Big|\,T=\mathop{\textrm{arg min}}_{T}\left\|(\boldsymbol{I}-\Lambda)-T(\boldsymbol{J^{np}}-\Lambda)\right\|^2.$$
$$T(x,y)=\mathop{\textrm{arg min}}_{T(x,y)}\left\|(\boldsymbol{I}-\underline{\Lambda})(x,y)-T(x,y)(\boldsymbol{J^{np}}-\underline{\Lambda})(x,y)\right\|^2,\quad \textrm{s.t. } T(x,y)=\sum_{k=1}^{K}\alpha_k G_k(x,y).$$
$$H=\begin{bmatrix} H_{r,1} & \cdots & H_{r,K}\\ H_{g,1} & \cdots & H_{g,K}\\ H_{b,1} & \cdots & H_{b,K}\end{bmatrix}.$$
$$\boldsymbol{u}=\begin{bmatrix}(\boldsymbol{I}-\Lambda)_r\\ (\boldsymbol{I}-\Lambda)_g\\ (\boldsymbol{I}-\Lambda)_b\end{bmatrix}.$$
$$\boldsymbol{\alpha}=H^{+}\boldsymbol{u}.$$
$$\boldsymbol{A^{our}}=\mathop{\textrm{arg min}}_{\boldsymbol{A}}\left\|(1-\boldsymbol{t^{our}})\boldsymbol{A}-\boldsymbol{I}+T^{our}\boldsymbol{J^{np}}\right\|^2.$$
$$\boldsymbol{J^{our}_{x,\cdot}}=\frac{\boldsymbol{I_{x,\cdot}}-(1-t^{our}_x)\boldsymbol{A^{our}}}{t^{our}_x},$$
$$MSE(k)=\frac{1}{3N}\sum_{j=1}^{3}\sum_{i=1}^{N}\left(J^{our}_{i,j}(k+1)-J^{our}_{i,j}(k)\right)^2,$$
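The least-squares machinery behind these equations (stacking the three channels into $H$ and $\boldsymbol{u}$, solving $\boldsymbol{\alpha}=H^{+}\boldsymbol{u}$, inverting Koschmieder's model, and checking convergence with $MSE(k)$) can be sketched in a few lines of numpy. This is a minimal illustration under stated assumptions, not the authors' implementation: the basis functions $G_k$ are taken as given, and the function names and the transmission clipping threshold are hypothetical choices for the example.

```python
import numpy as np

def estimate_alpha(I, J_np, A, G):
    """Least-squares fit of the basis weights alpha of the transmission map.

    Per pixel x and channel c, the model asks (I - A)_c(x) ~= T(x) (J_np - A)_c(x)
    with T(x) = sum_k alpha_k G_k(x). Stacking the three channels gives the linear
    system H alpha = u, solved here as alpha = H^+ u via least squares.

    I, J_np : (N, 3) hazy image and non-physical dehazed image (rows = pixels)
    A       : (3,) airlight estimate (Lambda in the text)
    G       : (N, K) basis functions G_k sampled at every pixel (assumed given)
    """
    H = np.vstack([G * (J_np[:, c] - A[c])[:, None] for c in range(3)])  # (3N, K)
    u = np.concatenate([I[:, c] - A[c] for c in range(3)])               # (3N,)
    alpha, *_ = np.linalg.lstsq(H, u, rcond=None)                        # alpha = H^+ u
    return alpha

def recover_clear(I, t, A, t_min=0.05):
    """Invert Koschmieder's model: J = (I - (1 - t) A) / t, with t clipped
    away from zero to avoid amplifying noise (the threshold is an assumption)."""
    t = np.clip(t, t_min, 1.0)[:, None]
    return (I - (1.0 - t) * A) / t

def mse_step(J_new, J_old):
    """Change between consecutive iterates, as in the MSE(k) formula."""
    return np.mean((J_new - J_old) ** 2)
```

On synthetic data generated exactly from the model, `estimate_alpha` recovers the true weights and `recover_clear` reproduces the clear image; in practice one would alternate the transmission and airlight updates until `mse_step` falls below a tolerance, which is the steady state studied in Fig. 1.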