Abstract
Images captured under hazy conditions (e.g., fog, air pollution) usually present faded colors and a loss of contrast. To improve their visibility, a process called image dehazing can be applied. Some of the most successful image dehazing algorithms are based on image processing methods but do not follow any physical image formation model, which limits their performance. In this paper, we propose a post-processing technique that alleviates this handicap by enforcing that the output of the original method is consistent with a popular physical model for image formation under haze. Our results improve upon those of the original methods qualitatively and according to several metrics, and they have also been validated via psychophysical experiments. These results are particularly striking in terms of avoiding over-saturation and reducing color artifacts, which are the most common shortcomings faced by image dehazing methods.
Published by The Optical Society under the terms of the Creative Commons Attribution 4.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.
1. Introduction
Images captured under adverse weather conditions, such as fog or smog, present distorted colors and a loss of contrast, degrading the quality of the captured image. Different physical models aiming at describing this phenomenon have been proposed, the most widespread being the one by Koschmieder [1].
Koschmieder’s model teaches us that the hazy image $\boldsymbol {I}$ depends on the clear image $\boldsymbol {J}$ (i.e., how the image would look without atmospheric scatter), a transmission map $\boldsymbol {t}$ that depends only on the scene depth and is therefore shared by the three color channels, and the airlight color $\boldsymbol {A}$. Mathematically, the model is written as
$$\boldsymbol {I}_{x,\cdot } = t_x\,\boldsymbol {J_{x,\cdot }} + (1-t_x)\,\boldsymbol {A}, \qquad (1)$$
where $x$ is a particular image pixel, and $\boldsymbol {J_{x,\cdot }}$, $\boldsymbol {I}_{x,\cdot }$ are respectively the $1$-by-$3$ vectors of the R,G,B values at pixel $x$ of the clear and the hazy image. Let us note here that Koschmieder’s model is a relatively simple model of the atmosphere. It is by no means a complete model, as optical scattering is extremely complex, due to the wide variability of particle distributions within the atmosphere. This said, even if it relies on physical assumptions that will not always hold, it provides us with a mathematically tractable setting. For this reason, this model is used in almost all the image dehazing literature, and therefore it will also be considered in this paper.

There exists a large number of image dehazing methods based on imposing Eq. (1) as a constraint on the solution. However, there also exists a second type of method that is based on applying image enhancement or image fusion techniques to the original hazy image. This second type of method has been proven effective at removing haze from images, but it does not follow a reliable physical model. In this paper we propose a post-processing procedure for this second type of method. Our goal is to obtain a final result as close as possible to the original algorithm's solution, while satisfying the constraint given by Eq. (1). Our proposed solution can therefore be understood as a bridge linking the two different types of methods.
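As a quick illustration of Eq. (1), the following sketch synthesizes a hazy image from a clear one under the model; the function name and toy values are ours, for illustration only.

```python
import numpy as np

def koschmieder_haze(J, t, A):
    """Apply Eq. (1): I = t * J + (1 - t) * A, with a single transmission
    map t (H x W) shared by the three color channels of J (H x W x 3)."""
    t3 = t[..., None]  # broadcast the per-pixel transmission over R, G, B
    return t3 * J + (1.0 - t3) * np.asarray(A, dtype=float)

# Toy example: a mid-grey scene seen through 50% transmission, white airlight
J = np.full((2, 2, 3), 0.4)
t = np.full((2, 2), 0.5)
I = koschmieder_haze(J, t, A=[1.0, 1.0, 1.0])
# Every value becomes 0.5 * 0.4 + 0.5 * 1.0 = 0.7
```

Note how, as $t_x \to 0$ (distant pixels), the observed color converges to the airlight, which is exactly the faded appearance the abstract describes.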
This paper is an extension of our conference work presented in [2]. In particular, we have modified our original formulation to constrain the transmission result with a DCT basis (effectively, the transmission is modelled as smoothly varying across the scene), and we have performed a much larger number of experiments, where we numerically show that this new approach outperforms both the original non-physical dehazing methods and our previous work presented in [2].
2. Related work
Image dehazing has emerged as a prolific topic of research in recent years. This increased interest is mostly related to its importance as a pre-processing tool for computer vision methods that need to work in the wild. Some particular examples are surveillance and tracking through CCTV cameras, or self-driving vehicles and drones.
In this section we divide the different methods proposed into Physically-based methods and Image processing methods.
Physically-based methods: These methods search for a single transmission map $\boldsymbol {t}$ and an airlight vector $\boldsymbol {A}$. Once these two quantities are found, they obtain the haze-free image $\boldsymbol {J_{x,\cdot }}$ by inverting Eq. (1). This said, solving for $\boldsymbol {t}$ and $\boldsymbol {A}$ is an underconstrained problem, but it can be solved if assumptions are placed on the form of the final solution. Some examples of this type of method are [3], [4], [5], or [6]. A special mention should be given to the Dark Channel prior [7] (probably the most used image dehazing method), where the authors assume that the minimum of an image region over the three color channels should be zero. The Dark Channel prior has been largely extended and improved, for example in [8–13]. Learning-based techniques have also been studied for this case; some examples are [14], [15]. Recently, some deep learning techniques have also been used [16], [17].
Image processing approaches: These methods aim to modify the original image to compensate for the visual effect of haze on images. In particular, these methods usually focus on the amount of contrast, saturation or other possible indicators of the presence of haze, and try to compensate for them. For example, [18] proposed to remove contrast loss in hazy images through a linear model of the presence of excessive brightness, based on the ratio between local mean and standard deviation. In [19,20] the authors use a multiscale image fusion approach in which they blend several images derived from the input, such as a white-balanced and a contrast-enhanced version of it. Different approaches based on models of the Human Visual System (HVS), such as Retinex, have also been proposed in [21–26]. [27] proposed a combination of the last two approaches: a variational formulation based on the HVS is combined with a fusion-based approach. Very recently a dual relation between image dehazing and Retinex has been proven [28]. This relation proves that any threshold-free Retinex method applied on inverse intensities performs image dehazing. Finally, machine-learning techniques have also been used for this type of method. For example, a haze density predictor based on natural scene statistics was presented in [29].
There are very few methods focusing on the removal of artifacts for image dehazing. Matlin and Milanfar [30] proposed an iterative regression method that simultaneously performs denoising and dehazing. Li et al. [31] proposed to decompose the original image into high and low frequencies, performing image dehazing only in the low frequencies, thus avoiding blocking artifacts. Chen [32] applied both a smoothing filter for the refinement of the transmission and an energy minimization to avoid the appearance of gradients that were not present in the original image.
3. Coupled iterative minimization for image dehazing
In this paper we focus on the post-processing of dehazing methods that do not enforce a physical model, i.e. mostly those listed as image processing approaches in the previous section. Our goal is that, given an original hazy image $\boldsymbol {I}$ and the solution $\boldsymbol {J^{np}}$ of a dehazing method that does not fulfil a physical model, we obtain a new dehazing result $\boldsymbol {J^{our}}$ that:
- • Satisfies the constraint given by Eq. (1);
- • Is as close as possible to the initial solution $\boldsymbol {J^{np}}$.
These two requirements can be expressed as the joint minimization
$$\{\boldsymbol {t^{our}}, \boldsymbol {A^{our}}\} = \arg \min _{\boldsymbol {t}, \boldsymbol {A}} \sum _x \left \| \boldsymbol {I}_{x,\cdot } - t_x\,\boldsymbol {J^{np}_{x,\cdot }} - (1-t_x)\,\boldsymbol {A} \right \|^2. \qquad (2)$$
Intuitively, it is easy to see that we need to perform the minimization of Eq. (2) iteratively in two different dimensions. In particular, when looking for $\boldsymbol {t^{our}}$ we need to perform the minimization for each pixel $x$ of the image over the three color channels, while when looking for $\boldsymbol {A^{our}}$ we need to perform the minimization for each color channel $c$ over all the pixels.
In the next paragraphs we explain how we perform each of these two minimizations.
Minimizing for $\boldsymbol {t^{our}}$: Let us start by supposing that we have an initial value for $\boldsymbol {A^{our}}$. This is a standard case in many image dehazing works, where it is usually supposed that either $\boldsymbol {A}=[1,1,1]$ or $\boldsymbol {A}=[\max (\boldsymbol {I_{\cdot ,R}}),\max (\boldsymbol {I_{\cdot ,G}}), \max (\boldsymbol {I_{\cdot ,B}})]$. Let us denote as $\boldsymbol {\Lambda }$ the $N$-by-$3$ matrix obtained by the replication of $\boldsymbol {A^{our}}$ for the $N$ image pixels. Then, our minimization for the transmission can be rewritten as
$$\boldsymbol {t^{our}} = \arg \min _{\boldsymbol {t}} \left \| (\boldsymbol {I} - \boldsymbol {\Lambda }) - \mathrm {diag}(\boldsymbol {t})\,(\boldsymbol {J^{np}} - \boldsymbol {\Lambda }) \right \|^2, \qquad (3)$$
which decouples into a closed-form scalar least-squares problem at each pixel.
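Given the airlight, the per-pixel transmission is the ratio of two inner products over the color channels. A minimal NumPy sketch of this step (the function name and the clipping choice are ours):

```python
import numpy as np

def transmission_given_airlight(I, Jnp, A, eps=1e-6):
    """For each pixel x, the scalar t_x minimising
    || (I_x - A) - t_x * (Jnp_x - A) ||^2 over the three color channels."""
    dI = I - np.asarray(A, dtype=float)   # per-pixel differences to the airlight
    dJ = Jnp - np.asarray(A, dtype=float)
    num = np.sum(dI * dJ, axis=-1)        # <I_x - A, Jnp_x - A>
    den = np.sum(dJ * dJ, axis=-1) + eps  # ||Jnp_x - A||^2
    return np.clip(num / den, eps, 1.0)   # transmissions lie in (0, 1]
```

Clipping to $(0, 1]$ is a pragmatic guard we add so that the later inversion of Eq. (1) stays well defined.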
The computation of the weight vector $\boldsymbol {\alpha } = \{\alpha _1,\ldots , \alpha _K\}$ in Eq. (4) is obtained as follows. Let ${(\boldsymbol {J^{np}}-\boldsymbol {\Lambda })}_j$ denote the $j$th color channel of the image stretched out as a vector, and let $\boldsymbol {G}_k$ denote the $k$th basis image stretched out as a vector. Then, for each of the three color channels we calculate $K$ vectors as the following pixel-wise products: $\boldsymbol {H}_{j,1}= {{(\boldsymbol {J^{np}}-\boldsymbol {\Lambda })}_j}\cdot {\boldsymbol {G}_1}$, $\boldsymbol {H}_{j,2}={{(\boldsymbol {J^{np}}-\boldsymbol {\Lambda })}_j}\cdot {\boldsymbol {G}_2}$, $\cdots$ , $\boldsymbol {H}_{j,K}={{(\boldsymbol {J^{np}}-\boldsymbol {\Lambda })}_j}\cdot {\boldsymbol {G}_K}$. With those vectors, we form a $3N \times K$ matrix $\boldsymbol {H}$ -where $N$ is the number of pixels- as
$$\boldsymbol {H} = \begin{bmatrix} \boldsymbol {H}_{1,1} & \boldsymbol {H}_{1,2} & \cdots & \boldsymbol {H}_{1,K} \\ \boldsymbol {H}_{2,1} & \boldsymbol {H}_{2,2} & \cdots & \boldsymbol {H}_{2,K} \\ \boldsymbol {H}_{3,1} & \boldsymbol {H}_{3,2} & \cdots & \boldsymbol {H}_{3,K} \end{bmatrix},$$
and $\boldsymbol {\alpha }$ is then obtained as the least-squares solution of $\boldsymbol {H}\boldsymbol {\alpha } \approx \mathrm {vec}(\boldsymbol {I}-\boldsymbol {\Lambda })$, where the three color channels of $\boldsymbol {I}-\boldsymbol {\Lambda }$ are stacked in the same order as the rows of $\boldsymbol {H}$.
Minimizing for $\boldsymbol {A^{our}}$: Let us now focus on the minimization of $\boldsymbol {A^{our}}$ given a value for $\boldsymbol {t^{our}}$. In this case, let us denote as $\boldsymbol {T^{our}}$ the $N$-by-$N$ matrix that has zeros everywhere except on the diagonal, where it has the values of $\boldsymbol {t^{our}}$. In this way, the minimization can be rewritten as
$$A^{our}_c = \arg \min _{a} \left \| \boldsymbol {I}_{\cdot ,c} - \boldsymbol {T^{our}}\,\boldsymbol {J^{np}_{\cdot ,c}} - (\mathrm {Id}-\boldsymbol {T^{our}})\,\mathbb {1}\,a \right \|^2$$
for each color channel $c$, where $\mathbb {1}$ denotes the all-ones $N$-vector; this again admits a closed-form least-squares solution per channel.
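Each channel's airlight value has the usual weighted least-squares closed form; a sketch (the function name and the small regularizer are ours):

```python
import numpy as np

def airlight_given_transmission(I, Jnp, t, eps=1e-6):
    """For each channel c, the scalar A_c minimising
    sum_x (I_{x,c} - t_x * Jnp_{x,c} - (1 - t_x) * A_c)^2."""
    tv = t.reshape(-1, 1)
    r = 1.0 - tv                                      # N x 1 residual weights
    res = I.reshape(-1, 3) - tv * Jnp.reshape(-1, 3)  # haze component per pixel
    return (r * res).sum(axis=0) / ((r * r).sum(axis=0) + eps)
```

Note that pixels with $t_x$ close to $1$ (no haze) contribute almost nothing, which matches the intuition that the airlight is only visible where there is haze.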
Performing the iterative minimization: The previous minimizations are finally combined in an iterative manner. This means that the value found for $\boldsymbol {t^{our}}$ in an iteration ($it$) is used for obtaining $\boldsymbol {A^{our}}$ at the same iteration, and this latter value is used in the following iteration ($it+1$) for obtaining the new value of $\boldsymbol {t^{our}}$.
Once the method has run for the desired number of iterations, or until a stopping criterion is met, our final result is computed by inverting Eq. (1):
$$\boldsymbol {J^{our}_{x,\cdot }} = \frac {\boldsymbol {I}_{x,\cdot } - (1-t^{our}_x)\,\boldsymbol {A^{our}}}{t^{our}_x}.$$
Pseudocode for our method can be found in Algorithm 1.
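The alternation can also be sketched end to end. The following is our minimal NumPy rendering of the simple coupled variant (without the DCT constraint); it is an illustration under our naming, not the paper's Algorithm 1 verbatim:

```python
import numpy as np

def coupled_dehaze(I, Jnp, A0=(1.0, 1.0, 1.0), iters=10, eps=1e-6):
    """Alternate between the transmission step (per-pixel least squares,
    given the airlight) and the airlight step (per-channel least squares,
    given the transmission), then invert Eq. (1) to obtain the output."""
    h, w, _ = I.shape
    Iv, Jv = I.reshape(-1, 3), Jnp.reshape(-1, 3)
    A = np.asarray(A0, dtype=float)
    for _ in range(iters):
        # t-step: t_x = <I_x - A, Jnp_x - A> / ||Jnp_x - A||^2
        dI, dJ = Iv - A, Jv - A
        t = np.clip((dI * dJ).sum(1) / ((dJ * dJ).sum(1) + eps), eps, 1.0)
        # A-step: A_c = sum_x (1 - t_x)(I_xc - t_x Jnp_xc) / sum_x (1 - t_x)^2
        r = (1.0 - t)[:, None]
        A = (r * (Iv - t[:, None] * Jv)).sum(0) / ((r * r).sum(0) + eps)
    # Final inversion of Eq. (1) with the fitted transmission and airlight
    Jout = (Iv - (1.0 - t)[:, None] * A) / t[:, None]
    return Jout.reshape(h, w, 3), t.reshape(h, w), A
```

When the non-physical result already happens to satisfy Eq. (1) exactly, this loop is a fixed point and returns it unchanged; otherwise it returns its closest physically consistent explanation.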
4. Experiments and results
We have performed different experiments to assess the performance of our approach. First, we start by studying how our iterative minimization for $\boldsymbol {t^{our}}$ and $\boldsymbol {A^{our}}$ affects the output image $\boldsymbol {J^{our}}$. Then, we show some qualitative results where our method clearly outperforms the original dehazing method. Later, we show how our method improves the original dehazing ones quantitatively, considering both reference-based and non-reference image metrics. At the end of the section we also validate our method through a psychophysical experiment where observers were asked to select their preferred image. Throughout this section, we compare our method against the following original dehazing algorithms: the EVID method [21], the FVID method [27], the Choi et al. method [29], the Wang et al. method [26], and the use of two Retinex algorithms -SRIE [34] and MSCR [35]- as dual solutions for the dehazing problem as suggested in [28]. For our method we have considered $10$ iterations. The number of DCT basis considered for our coupled-DCT method is $10$ -i.e. we compute DCT basis up to order 4- unless otherwise stated. Also, we set $A^0=[1,1,1]$ for all the quantitative and psychophysical evaluations.
4.1 On reaching steady state for image $\boldsymbol {J^{our}}$
Our minimization looks for $\boldsymbol {t^{our}}$ and $\boldsymbol {A^{our}}$, but we are interested in the image $\boldsymbol {J^{our}}$ as our final result. Therefore, it is natural to wonder about the effect the iterative minimization of $\boldsymbol {t^{our}}$ and $\boldsymbol {A^{our}}$ has on the image $\boldsymbol {J^{our}}$. In particular, it is interesting to study how the image $\boldsymbol {J^{our}}$ reaches steady state. To this end, Fig. 1 shows the difference between two consecutive iterations of the output image $J_{x,c}^{our}$ -where $c$ denotes the R,G,B channels- for the set of $500$ hazy images proposed in the FADE dataset by Choi et al. [29]. We compute this difference in the Mean Square Error (MSE) form, which for iteration $k$ is defined as
$$MSE(k) = \frac {1}{3N} \sum _x \sum _c \left ( J_{x,c}^{our,(k)} - J_{x,c}^{our,(k-1)} \right )^2.$$
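The quantity plotted in Fig. 1 is a plain per-image MSE between consecutive outputs; a one-line sketch (the function name is ours):

```python
import numpy as np

def iteration_mse(J_prev, J_curr):
    """Mean squared difference between the outputs of two consecutive
    iterations; a small value signals that J has reached steady state."""
    return float(np.mean((J_curr - J_prev) ** 2))
```

The same quantity can double as a stopping criterion, e.g. terminating the alternation once it falls below a small threshold such as $10^{-6}$.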

Fig. 1. Study of the effect of the iterations on the steady state of $\boldsymbol {J^{our}}$ for different original algorithms on the 500 images of the dataset in Choi et al. [29]. We can clearly see that for any original algorithm $\boldsymbol {J^{our}}$ reaches steady state.
4.2 Qualitative results
Fig. 2 presents some visual results for our approach with regard to the $6$ non-physical methods selected, and to two different airlights: $\boldsymbol {A^0}=[1,1,1]$ and $\boldsymbol {A^0}=[\max (\boldsymbol {I_{\cdot ,R}}),\max (\boldsymbol {I_{\cdot ,G}}), \max (\boldsymbol {I_{\cdot ,B}})]$.

Fig. 2. Qualitative results for our approach, for 6 different non-physical dehazing methods and 2 different starting airlights. Our method improves all the original methods. Furthermore, our results for both airlights are very similar, showing the robustness of our approach.
In terms of the starting airlight $\boldsymbol A^0$ (last two columns of the Figure), we can clearly see that our approach gives very similar results for both of them, therefore showing that our approach is very robust in this respect.
Looking now at the different algorithms -each algorithm is a different row in the Figure-, we can clearly see that in the case of the Choi et al. algorithm our method is able to correct the excessive saturation present in the field, outputting more natural colors in the image. In the case of the EVID algorithm, our approach is able to correct the over-contrast introduced by the non-physical method in the cow, grass and rocks. Equivalently, the over-contrast is also corrected for the Wang et al. method, especially noticeable in the tree and the close vegetation, and for the Ret-MSCR method in the grass and close-by ducks.
In the case of the FVID algorithm we can clearly see that our approach corrects the artifacts appearing in the sky in the original method. Similarly, the Ret-SRIE method presents a halo artifact around the main building in the image that is clearly alleviated by our approach.
In summary, this Figure presents the two main advantages of applying our post-processing approach. First, it is able to correct over-saturation and over-contrast problems, and second, it is able to alleviate the artifacts that can appear when dehazing an image.
4.3 Quantitative results
4.3.1 Non-reference metrics
In this subsection we study the performance of our method when considering non-reference metrics. To this end, we consider the set of $500$ hazy images proposed by Choi et al. in [29]. We evaluate our results with respect to two very well-known non-reference image metrics: NIQE [36] and BRISQUE [37]. For both metrics, a smaller value means a better method. Table 1 shows the results for the $6$ methods considered in this paper. We can see how the simple Coupled method is already able to outperform the original method in almost all the cases tested. Our Coupled-DCT approach reduces the error metrics even further, and outperforms the original methods and the Coupled approach in $10$ and $9$ out of $12$ cases, respectively.
Table 1. Results reported as the mean for all the 500 images in the Choi et al. dataset.
4.3.2 Reference metrics
In this subsection we focus on reference-based metrics. In this case, we need a dataset that presents pairs of hazy and clean (ground-truth) images. We have selected the Middlebury set of the D-Hazy dataset [38]. In this case, images are indoor, and for this reason we run our method with a higher number of DCT basis: $55$ (i.e. we compute DCT basis up to order $10$). In this subsection we look at $3$ different metrics: the CID [39], which is a color extension of SSIM, the perceptual color difference $\Delta _{E_{00}}$, and the Visual Information Fidelity (VIF) metric [40]. In the case of the CID metric and the $\Delta _{E_{00}}$, lower values mean better methods. For the VIF metric, the closer to $1$ the value, the better the method -as this means that the result and the ground truth are equal in terms of the visual information present in the images-. A VIF value larger than one means that the result is over-enhanced, while VIF values smaller than $1$ mean that the result is under-enhanced.
Results are shown in Table 2. We can clearly see that our Coupled-DCT approach outperforms all the others in $16$ out of $18$ cases. Also, the simple Coupled method outperforms the original dehazing method in $10$ cases and draws with it in another $3$ cases (see the results for RET-SRIE).
Table 2. Results reported as the mean for all the 23 images in the Middlebury D-Hazy dataset.
4.4 Preference ranking
We also performed a psychophysical experiment for which details are given below.
4.4.1 Subjects
Twelve subjects completed the experiment. None of them is an author of the paper. All observers were tested for normal color vision using the Ishihara color blindness test. Ethics approval was granted by the Comité Ético de Investigación Clínica, Parc de Salut MAR, Barcelona, Spain, and all procedures complied with the Declaration of Helsinki.
4.4.2 Apparatus
The experiment was conducted on an AOC I2781FH LCD monitor set to “sRGB” mode with a luminance range from $0.1\,cd\,m^{-2}$ to $175\,cd\,m^{-2}$, with spatial and temporal resolutions of 1920 by 1080 pixels and 60 Hz. The display was viewed at a distance of approximately 70 cm, so that 40 pixels subtended 1 degree of visual angle. The full display subtended 49 by 27.5 degrees. The decoding nonlinearity of the monitor was measured using a Konica Minolta LS 100 photometer and was found to be closely approximated by a gamma function with an exponent of 2.2. Stimuli were generated under Ubuntu 15.04 running MATLAB (MathWorks) with functions from the Psychtoolbox [41,42]. The experiment was conducted in a dark room.
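For concreteness, one plausible parameterisation of the measured decoding nonlinearity is a gamma curve spanning the reported luminance range; the offset-plus-gain form below is our assumption (the paper only reports the exponent and the black/white levels):

```python
def dac_to_luminance(v, gamma=2.2, L_black=0.1, L_white=175.0):
    """Map a normalised digital value v in [0, 1] to luminance in cd/m^2
    with a gamma function spanning the measured range of the display."""
    return L_black + (L_white - L_black) * (v ** gamma)
```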
4.4.3 Stimuli
Twenty-five randomly selected images were taken from the FADE dataset [29]. They are shown in Fig. 3. For each image, the six original dehazing methods listed at the beginning of the section were computed. Then, the Coupled-DCT approach proposed in this paper with $10$ DCT basis -i.e. the same parameters used for this dataset before- was also computed for each of the original methods.
4.4.4 Procedure
The experiment was independently run for each of the $6$ original dehazing methods. The dehazed images -the result of the original method and the result of our coupled-DCT approach- were viewed on either side of the original hazy image. Subjects were asked to select the image that they preferred out of the two dehazed images. The total number of comparisons was 150 -25 comparisons for each of the $6$ original dehazing methods-. On average, the experiment took around 25 minutes.
4.4.5 Analysis of the results
We have analyzed the result of our experiment in terms of the Thurstone Case V Law of Comparative Judgment. Figure 4 presents the results for the whole set of $150$ comparisons. We can clearly see that our approach is preferred over the original non-physical dehazing methods, with statistical significance.
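As an illustration of the Case V analysis, the scale separation for a single paired comparison is the inverse normal CDF of the preference proportion; a minimal stdlib sketch (the function name and the clamping rule are ours):

```python
from statistics import NormalDist

def thurstone_case_v(wins, total):
    """Thurstone Case V scale separation for one paired comparison:
    the z-score of the proportion of trials in which the first stimulus
    was preferred (0 = no preference, positive = preferred)."""
    p = wins / total
    p = min(max(p, 0.5 / total), 1.0 - 0.5 / total)  # clamp away from 0 and 1
    return NormalDist().inv_cdf(p)
```

For example, a 50/50 split maps to a separation of 0, while a strong preference such as 250 out of 300 maps to a separation near 1 standard deviation.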

Fig. 4. Results of the psychophysical experiment using the Thurstone Case V test for the whole set of $150$ comparisons.
Results for each individual original algorithm are presented in Fig. 5. We can clearly see that our DCT-coupled approach is statistically preferred over the original method for all the cases, showing that it generalizes very well to different non-physical dehazing methods. These results also validate the effectiveness shown by our coupled-DCT method for most of the image metrics cases tested.

Fig. 5. Results of the psychophysical experiment using the Thurstone Case V test for each of the non-physical dehazing methods considered in this work.
5. Conclusions
We have presented an approach that induces a physical behaviour in non-physical dehazing methods. Its main idea is an iterative coupling of the color channels, inspired by the Alternating Least Squares (ALS) method [33]. We have shown that our method outperforms the original non-physical dehazing methods both qualitatively and quantitatively -in terms of both reference and non-reference metrics-. Finally, our method was also validated using psychophysical tests.
Funding
Horizon 2020 Framework Programme (761544, 780470); Engineering and Physical Sciences Research Council (EP/028730, EP/M001768); Spanish Government MINECO and Feder Fund (PGC2018-099651-B-I00).
Acknowledgments
We want to thank all the observers that participated in the preference study.
Disclosures
GF is a visiting professor at Simon Fraser University and at the University of Leeds. He currently has a joint project with the University of Cambridge.
References
1. H. Koschmieder, Theorie der horizontalen Sichtweite: Kontrast und Sichtweite (Keim & Nemnich, 1925).
2. J. Vazquez-Corral, G. D. Finlayson, and M. Bertalmío, “Physically plausible dehazing for non-physical dehazing algorithms,” in Computational Color Imaging, S. Tominaga, R. Schettini, A. Trémeau, and T. Horiuchi, eds. (Springer International Publishing, 2019), pp. 233–244.
3. R. Tan, “Visibility in bad weather from a single image,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2008), pp. 1–8.
4. R. Fattal, “Single Image Dehazing,” in ACM SIGGRAPH 2008 Papers, (ACM, 2008), SIGGRAPH ’08, pp. 72:1–72:9.
5. K. Nishino, L. Kratz, and S. Lombardi, “Bayesian Defogging,” Int. J. Comput. Vis. 98(3), 263–278 (2012). [CrossRef]
6. J.-P. Tarel and N. Hautiere, “Fast visibility restoration from a single color or gray level image,” in Proceedings of IEEE International Conference on Computer Vision, (IEEE, 2009), pp. 2201–2208.
7. K. He, J. Sun, and X. Tang, “Single Image Haze Removal Using Dark Channel Prior,” IEEE Transactions on Pattern Analysis Mach. Intell. 33(12), 2341–2353 (2011). [CrossRef]
8. G. Meng, Y. Wang, J. Duan, S. Xiang, and C. Pan, “Efficient Image Dehazing with Boundary Constraint and Contextual Regularization,” in Proceedings of IEEE International Conference on Computer Vision (ICCV), (IEEE, 2013), pp. 617–624.
9. W. Sun, “A new single-image fog removal algorithm based on physical model,” Optik 124(21), 4770–4775 (2013). [CrossRef]
10. Y. Gao, H.-M. Hu, S. Wang, and B. Li, “A fast image dehazing algorithm based on negative correction,” Signal Process. 103, 380–398 (2014). [CrossRef]
11. J.-B. Wang, N. He, L.-L. Zhang, and K. Lu, “Single image dehazing with a physical model and dark channel prior,” Neurocomputing 149, 718–728 (2015). [CrossRef]
12. Z. Li and J. Zheng, “Edge-Preserving Decomposition-Based Single Image Haze Removal,” IEEE Transactions on Image Process. 24(12), 5432–5441 (2015). [CrossRef]
13. Y.-H. Lai, Y.-L. Chen, C.-J. Chiou, and C.-T. Hsu, “Single-Image Dehazing via Optimal Transmission Map Under Scene Priors,” IEEE Transactions on Circuits Syst. for Video Technol. 25(1), 1–14 (2015).
14. K. Tang, J. Yang, and J. Wang, “Investigating Haze-Relevant Features in a Learning Framework for Image Dehazing,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2014), pp. 2995–3002.
15. Q. Zhu, J. Mai, and L. Shao, “A Fast Single Image Haze Removal Algorithm Using Color Attenuation Prior,” IEEE Transactions on Image Process. 24(11), 3522–3533 (2015). [CrossRef]
16. B. Cai, X. Xu, K. Jia, C. Qing, and D. Tao, “DehazeNet: An End-to-End System for Single Image Haze Removal,” arXiv:1601.07661 (2016).
17. H. Zhang and V. M. Patel, “Densely connected pyramid dehazing network,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2018), pp. 3194–3203.
18. J. Oakley and H. Bu, “Correction of Simple Contrast Loss in Color Images,” IEEE Transactions on Image Process. 16(2), 511–522 (2007). [CrossRef]
19. C. O. Ancuti, C. Ancuti, C. Hermans, and P. Bekaert, “A Fast Semi-inverse Approach to Detect and Remove the Haze from a Single Image,” in Asian Conference on Computer Vision, ACCV-2010, (2010), 6493, pp. 501–514.
20. C. Ancuti and C. Ancuti, “Single Image Dehazing by Multi-Scale Fusion,” IEEE Transactions on Image Process. 22(8), 3271–3282 (2013). [CrossRef]
21. A. Galdran, J. Vazquez-Corral, D. Pardo, and M. Bertalmío, “Enhanced Variational Image Dehazing,” SIAM J. on Imaging Sci. 8(3), 1519–1546 (2015). [CrossRef]
22. V. De Dravo and J. Hardeberg, “Stress for dehazing,” in Colour and Visual Computing Symposium (CVCS), 2015, (2015), pp. 1–6.
23. X.-S. Zhang, S.-B. Gao, C.-Y. Li, and Y.-J. Li, “A Retina Inspired Model for Enhancing Visibility of Hazy Images,” Front. Comput. Neurosci. 9, 151 (2015). [CrossRef]
24. Y. Wang, H. Wang, C. Yin, and M. Dai, “Biologically inspired image enhancement based on Retinex,” Neurocomputing 177, 373–384 (2016). [CrossRef]
25. J. Vazquez-Corral, A. Galdran, P. Cyriac, and M. Bertalmío, “A fast image dehazing method that does not introduce color artifacts,” J. Real-Time Image Process., doi:10.1007/s11554-018-0816-6 (2018), in press.
26. S. Wang, W. Cho, J. Jang, M. A. Abidi, and J. Paik, “Contrast-dependent saturation adjustment for outdoor image enhancement,” J. Opt. Soc. Am. A 34(1), 7–17 (2017). [CrossRef]
27. A. Galdran, J. Vazquez-Corral, D. Pardo, and M. Bertalmío, “Fusion-based variational image dehazing,” IEEE Signal Process. Lett. 24, 1 (2016). [CrossRef]
28. A. Galdran, A. Alvarez-Gila, A. Bria, J. Vazquez-Corral, and M. Bertalmío, “On the duality between retinex and image dehazing,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2018).
29. L. K. Choi, J. You, and A. C. Bovik, “Referenceless Prediction of Perceptual Fog Density and Perceptual Image Defogging,” IEEE Transactions on Image Process. 24(11), 3888–3901 (2015). [CrossRef]
30. E. Matlin and P. Milanfar, “Removal of haze and noise from a single image,” Proc. SPIE 8296, 82960T (2012). [CrossRef]
31. Y. Li, F. Guo, R. T. Tan, and M. S. Brown, “A contrast enhancement framework with jpeg artifacts suppression,” in Computer Vision - ECCV 2014 - 13th European Conference, (2014), pp. 174–188.
32. C. Chen, M. N. Do, and J. Wang, “Robust image and video dehazing with visual artifact suppression via gradient residual minimization,” in Computer Vision - ECCV 2016 - 14th European Conference, (2016), pp. 576–591.
33. G. D. Finlayson, M. M. Darrodi, and M. Mackiewicz, “The alternating least squares technique for nonuniform intensity color correction,” Color Res. Appl. 40(3), 232–242 (2015). [CrossRef]
34. X. Fu, D. Zeng, Y. Huang, X. P. Zhang, and X. Ding, “A weighted variational model for simultaneous reflectance and illumination estimation,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2016), pp. 2782–2790.
35. A. B. Petro, C. Sbert, and J.-M. Morel, “Multiscale Retinex,” Image Processing On Line 4, 71–88 (2014). [CrossRef]
36. A. Mittal, R. Soundararajan, and A. Bovik, “Making a completely blind image quality analyzer,” Signal Process. Lett. IEEE 20(3), 209–212 (2013). [CrossRef]
37. A. Mittal, A. K. Moorthy, and A. C. Bovik, “No-reference image quality assessment in the spatial domain,” IEEE Transactions on Image Process. 21(12), 4695–4708 (2012). [CrossRef]
38. C. Ancuti, C. O. Ancuti, and C. D. Vleeschouwer, “D-hazy: A dataset to evaluate quantitatively dehazing algorithms,” in Proceedings of the IEEE International Conference on Image Processing, (IEEE, 2016), ICIP’16.
39. I. Lissner, J. Preiss, P. Urban, M. S. Lichtenauer, and P. Zolliker, “Image-difference prediction: From grayscale to color,” IEEE Transactions on Image Process. 22(2), 435–446 (2013). [CrossRef]
40. H. R. Sheikh and A. C. Bovik, “Image information and visual quality,” IEEE Transactions on Image Process. 15(2), 430–444 (2006). [CrossRef]
41. D. H. Brainard, “The Psychophysics Toolbox,” Spat Vis. 10(4), 433–436 (1997). [CrossRef]
42. D. G. Pelli, “The VideoToolbox software for visual psychophysics: transforming numbers into movies,” Spat Vis. 10(4), 437–442 (1997). [CrossRef]
