
Illumination separation of non-Lambertian scenes from a single hyperspectral image

Open Access

Abstract

In this paper, we propose a general framework to estimate the spectrum of the illumination from global specular information in a single hyperspectral image. By utilizing the specular-independent subspace, we iteratively separate the reflectance components and form a weighting scheme to find specular-contaminated pixels. The illumination can then be directly estimated by factorizing the weighted specular-contaminated pixels. The proposed method enables a direct and effective decomposition of the illumination and reflectance components from a single hyperspectral image. We demonstrate the robustness and accuracy of our method in both simulated and real experiments. Moreover, we capture a hyperspectral image dataset with ground-truth illumination to quantitatively compare performance.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

With the rapid development of spectrometry, ground-based hyperspectral imaging platforms have become commercially available. The advent of these hyperspectral cameras opens up great opportunities in many computer vision tasks, including material segmentation [1], object recognition [2, 3], tracking [4] and relighting [5]. Unlike hyperspectral images taken by remote sensing spectrometers, ground-based hyperspectral images captured in daily-life scenarios are influenced by diverse illumination conditions, such as outdoor daylight spectra, indoor fluorescent light, and other artificial illuminants. The recorded spectrum of an object varies with the illumination, which makes it hard to characterize its surface material. To obtain a reflectance spectrum invariant to the illuminant, the spectrum of the illumination needs to be estimated. Therefore, accurate illumination estimation is crucial for applying ground-based hyperspectral imaging in daily applications.

The estimation of illumination has been extensively studied for decades, but it still remains an open problem. Many previous methods are limited to Lambertian surfaces: they assume that the object surfaces in the scene are matte and diffuse, and disregard effects beyond diffuse reflectance as outliers [6]. The performance of Lambertian-based methods degenerates in the non-Lambertian circumstances that are widespread in practice.

For non-Lambertian scenes, the illumination can be estimated from specularity, which has the same chromaticity as the light source [7]. Previous methods have demonstrated high accuracy under the dichromatic reflectance model [8], which assumes that the surface reflectance is a linear combination of diffuse and specular components. Nevertheless, they require accurate pre-segmentation of regions with uniform surface reflectance or highlights, which is itself a tough computer vision problem.

To tackle this problem, we present an illumination separation method, shown in Fig. 1, in which the illumination of a hyperspectral image is decomposed by taking full advantage of the global specular information separated in a specular-independent subspace. The effectiveness of the proposed method is verified on many non-Lambertian scenes, and state-of-the-art performance is achieved on extensive hyperspectral datasets [9, 10]. We also capture an additional dataset to add more challenging scenes; this dataset is made public to facilitate future research on this topic.


Fig. 1 Illumination and reflectance spectra separation of non-Lambertian scenes. (a) The input hyperspectral image rendered under a greenish light source. (b)(c)(d) The results of spectral illumination estimation, including the estimated illumination (solid black: ground truth; dotted blue: illumination curve predicted by our method), the specular image, and the diffuse image. Note that each image is integrated into a 3-channel image using the response curves of RGB sensors for visualization.


The rest of the paper is organized as follows. We present the motivation of our method and the related work in Section 2. In Section 3, we introduce the formulation and propose our method. Section 4 presents extensive experiments on datasets of both indoor and outdoor scenes, including challenging cases such as non-Lambertian surfaces.

2. Related work

Illumination estimation from a single input image is a difficult problem that has been studied for decades. Existing works mostly fall into two categories: one is based on the Lambertian model; the other is based on the dichromatic model for non-Lambertian scenes.

The Lambertian-based methods rely on various simplified or generative assumptions of color constancy. For example, the white-patch algorithm [11] is based on the assumption that the maximum response in each channel is caused by a perfectly reflecting surface. The principles of the gray-world [12] and gray-edge [13] algorithms are that the distribution of the colors and edges in a scene under a neutral light source is achromatic. Gamut-based algorithms [14] assume that, under a specific illuminant, only a limited number of colors can be observed. Researchers have also generalized some of these assumptions, e.g. [12, 13, 15], to the illumination and reflectance separation problem for a hyperspectral image [16], since the underlying assumptions on the scene are band-independent. All these methods are limited to Lambertian surfaces, and their performance degenerates on non-Lambertian surfaces.

The specular component of a non-Lambertian surface has the same chromaticity as the illumination. As a consequence, it can be used to estimate the illumination information. According to the dichromatic reflection model [8], the reflected radiance of pixels belonging to the same material falls on a hyper-plane spanned by the illumination spectrum and the reflectance spectrum of that material. For scenes under spatially uniform illumination, the illumination spectrum can be calculated as the intersection of the dichromatic hyper-planes of different materials [17–19]. A fundamental problem with dichromatic hyper-planes for illumination estimation is that it is not known a priori which observed pixels originate from the same material. Therefore, most dichromatic-based methods require an accurate pre-segmentation that identifies regions of uniform surface reflectance with clear specularity [5, 20–23], which presents an obstacle in practice.

Recently, targeting hyperspectral images in particular, Zheng et al. [24] employed the low-dimensionality assumption of diffuse reflectance and modeled the illumination and reflectance spectra separation of a single hyperspectral image as a low-rank matrix factorization problem. Nevertheless, their method has limitations in handling non-Lambertian surfaces, since effects beyond diffuse reflectance, such as specularity, are treated as outliers. An et al. [25] proposed an optimization framework based on the dichromatic model to estimate the illumination spectrum from the specular component. It makes the strong assumption that the correlation between material chromaticity and illumination is relatively low, which penalizes solutions deviating toward the material chromaticity. The performance drops when high correlation exists between the illumination chromaticity and the diffuse reflectance.

3. Our work

In this paper, we develop a generally applicable and robust illumination estimation method for a single hyperspectral image, as shown in Fig. 2. Making use of global information, the proposed method can handle images with large-area or strong specularity. Experiments on varying scenes, illuminations, and specularity strengths are conducted, and the results show that our approach provides consistently superior performance compared to state-of-the-art methods.


Fig. 2 The schematic diagram of the proposed method. The left column shows the input image and the initial illumination spectrum. The middle column is the iterative procedure of our approach. We iteratively decompose the reflectance components in the specular-independent subspace, and then the weight map is computed based on the reflectance components. After applying the weight map to the original image radiance, the illumination can be estimated by factorizing the weighted radiance. The convergence criterion is the change of the estimated illumination.


3.1. Formulation

In this study, we assume that there is only one light source in the scene, or multiple light sources with the same normalized spectrum. Therefore, the chromaticity of the illumination is consistent over all pixels. We employ the widely used dichromatic reflectance model [8] to describe the reflection properties of non-Lambertian surfaces. This model assumes uniform illumination across the spatial domain of the scene and decomposes the surface radiance into a diffuse and a specular component. Let an object with surface radiance I(u, λ) at pixel location u and wavelength λ be illuminated by an illuminant whose spectrum is L(λ). With these ingredients, the dichromatic model becomes:

I(u, λ) = g(u) L(λ) S(u, λ) + k(u) L(λ). (1)
In Eq. (1), the two terms on the right-hand side are the diffuse and the specular components of the image, respectively. The shading factor g(u) governs the proportion of diffuse light reflected from the object and S(u, λ) represents the diffuse spectral reflectance at location u and wavelength λ. On the other hand, the factor k(u) models surface irregularities that cause specularities in the scene. For convenience, the above equation can be simplified as
I = g R + k L, (2)
where R = L · S is the spectrum of the diffuse component, L is the common spectrum of the specular component and the illumination, and · denotes the element-wise product of two vectors.
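To make Eq. (2) concrete, the following minimal numpy sketch synthesizes the radiance of a single pixel under the dichromatic model; the spectra and the factors g and k are illustrative values of our own, not data from the paper.

```python
import numpy as np

# Minimal sketch of the dichromatic model in Eq. (2); all values are
# illustrative. Spectra are sampled at B bands (400-700 nm, 10 nm steps).
B = 31
wavelengths = np.linspace(400, 700, B)

# hypothetical greenish illuminant and reddish diffuse reflectance
L = np.exp(-0.5 * ((wavelengths - 550.0) / 80.0) ** 2)
S = 0.2 + 0.6 * (wavelengths > 580)

R = L * S                # diffuse component spectrum, R = L . S (element-wise)
g, k = 0.8, 0.3          # shading and specular factors of this pixel
I = g * R + k * L        # observed radiance, Eq. (2)
```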

According to Eq. (1), the specular component reflects only the property of the illumination, while the diffuse component is affected by both the illumination and the surface reflectance. Therefore, one can first decompose the hyperspectral image of a non-Lambertian scene into diffuse and specular components and then use the latter to estimate the illumination. Since the decomposition of reflectance components, also called highlight removal, is usually implemented with a known illumination spectrum, the whole procedure becomes iterative. The following subsections detail the two iterative steps: reflectance component recovery and illumination estimation.

3.2. Reflectance component recovery

By adopting the orthogonal subspace projection [26], the radiance of a specular-contaminated image can be projected onto two orthogonal subspaces, one parallel and the other orthogonal to the illumination spectrum L. Based on the theory of matrix projections, we can design the orthogonal projector

P = E − L L^T / (L^T L), (3)
where E is the identity matrix, and then multiply both sides of Eq. (2) by the projector P:
PI = g PR + k PL = g PR. (4)
As a result, the specular component is removed (PL = 0), while the diffuse component is preserved at the cost of losing one dimension of information. Denoting the spectrum of PR as L⊥, the projection procedure can be rewritten in matrix form:
I = a L + b L⊥, (5)
where a and b are the projection coefficients.
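As an implementation sketch of Eqs. (3)–(5), assuming pixel spectra stacked as rows of an (N, B) array and a current illumination estimate L (the function name and array conventions are ours):

```python
import numpy as np

def specular_free_projection(I, L):
    """Project pixel spectra onto the subspace orthogonal to L, Eqs. (3)-(5).

    I: (N, B) array of N pixel spectra; L: (B,) illumination spectrum.
    Returns the specular-free spectra D and the coefficients a, b of Eq. (5).
    """
    L = L / np.linalg.norm(L)              # unit-norm illumination direction
    P = np.eye(L.size) - np.outer(L, L)    # orthogonal projector, Eq. (3)
    D = I @ P.T                            # PI = g PR: specularity removed, Eq. (4)
    a = I @ L                              # coefficient along L in Eq. (5)
    b = np.linalg.norm(D, axis=1)          # coefficient along L_perp in Eq. (5)
    return D, a, b
```

Since P annihilates L, any specular contribution k L is mapped to zero while the projected diffuse part g PR survives, which is exactly what Eq. (4) states.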

Based on Eq. (4), one can find that pixels from the same material, whose diffuse reflectance is not identical to the illumination spectrum, will lie along the same direction L⊥ after projection, despite differences in shading g and specularity k. Owing to the rich spectral information of a hyperspectral image, it is reasonable to suppose that the counterparts of the pixels in the specular-free subspace suffice to distinguish different materials. Thus, we use the diffuse component preserved in the specular-independent subspace to separate the reflectance components.

As illustrated in Fig. 3, all the spectra of the same material lie on the hyperplane spanned by L and L⊥. Supposing that point A has the strongest specularity and point B is the pure diffuse pixel of this material, all the other spectra lie within the yellow region. The reflectance components can be separated by projecting onto the spectrum of point B. Note that point B has the minimum included angle between its spectrum and L⊥, which corresponds to the smallest ratio a/b. Thus, finding the pure diffuse pixel amounts to finding the smallest ratio a/b. Denoting this smallest ratio as r = min a/b = a_B/b_B, the reflectance components can be separated by

I_d = r b L + b L⊥,  I_s = I − I_d. (6)
Here, I_d and I_s are the separated diffuse and specular components, respectively. Results of the reflectance component separation are shown in Fig. 4.
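Within one material cluster, the separation of Eq. (6) can be sketched as follows; taking the common direction L⊥ as the mean of the projected spectra, and the small floor 1e-12, are our own choices rather than details given in the paper.

```python
import numpy as np

def separate_cluster(I, L):
    """Split same-material pixel spectra into diffuse and specular parts, Eq. (6).

    I: (N, B) spectra of pixels clustered as one material; L: (B,) illuminant.
    """
    L = L / np.linalg.norm(L)
    P = np.eye(L.size) - np.outer(L, L)       # projector of Eq. (3), unit-norm L
    D = I @ P.T                               # specular-free spectra, Eq. (4)
    L_perp = D.mean(axis=0)                   # common direction of this cluster
    L_perp /= np.linalg.norm(L_perp) + 1e-12
    a, b = I @ L, I @ L_perp                  # projection coefficients, Eq. (5)
    r = np.min(a / (b + 1e-12))               # smallest a/b: the pure diffuse pixel
    I_d = np.outer(r * b, L) + np.outer(b, L_perp)   # diffuse component, Eq. (6)
    I_s = I - I_d                                    # specular component
    return I_d, I_s
```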


Fig. 3 Geometric interpretation of the reflectance component separation. The spectra of the same material lie on the hyperplane spanned by L and R, which are the spectra of the illumination and the diffuse reflectance. After projecting onto the specular-independent subspace, the spectra of the same material lie along the same direction L⊥. We label two pixels A and B, where the former has the strongest specularity and the latter is the pure diffuse pixel. The yellow area is the region of the spectra of the same material in the scene. Note that point B has the smallest included angle between its spectrum and L⊥. The reflectance components can be separated by projecting each spectrum onto the spectrum of point B, which is the same as that of the diffuse reflectance.



Fig. 4 A comparison between the reflectance component separation results of our method and that by Shen and Zheng [27]. (a) The input image. (b)(c) The diffuse and specular components resulting from our method. (d)(e) The diffuse and specular components yielded by Shen and Zheng [27].


Note that if there is no pure diffuse pixel in the material, we regard the pixel with the weakest specularity as the pure diffuse pixel. The overall intensity of the separated specular component will drop slightly, but this does not change the relative intensities of specularity within the material: the specular-influenced pixels still have larger specular intensities than the other pixels.

3.3. Illumination estimation

According to Eq. (1), the specular component, whose spectrum is identical to the illumination spectrum, is shared by all the pixels. Based on the observation that the relative intensity of the diffuse reflectance is typically small when there is significant specularity, we can estimate the illumination directly from the specular-contaminated pixels.

Since a higher relative intensity of specular reflectance indicates stronger specularity, we propose a weight map based on the relative intensity of the specular component to find and emphasize the specular-contaminated pixels. The weight map is set as a function of the relative intensity of the specular component as follows:

ω = ρ(I_s / I). (7)
In this equation, ρ(·) is the weighting function that maps the ratio of each pixel to a weight. The formal expression of ρ(·) is
ρ(f(u)) = 1 / {1 + exp[−α(f(u) − β)]}, (8)
where α and β are the parameters of a sigmoid function. Here, ρ(·) magnifies the relative intensity and makes it robust to noise. After weighting the input image, we take the top principal component of the weighted specular-contaminated spectra as the estimated illumination.
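A sketch of Eqs. (7)–(8) follows; here the relative specular intensity of a pixel is taken as a ratio of spectral norms, and the default α and β are illustrative, since the parameter values are not stated at this point in the paper.

```python
import numpy as np

def estimate_illuminant(I, I_s, alpha=10.0, beta=0.3):
    """Weight pixels by relative specular intensity, then take the top
    principal component of the weighted spectra as the illumination.

    I, I_s: (N, B) radiance and specular component; alpha, beta: sigmoid
    parameters (illustrative defaults).
    """
    f = np.linalg.norm(I_s, axis=1) / (np.linalg.norm(I, axis=1) + 1e-12)
    w = 1.0 / (1.0 + np.exp(-alpha * (f - beta)))     # sigmoid weights, Eq. (8)
    W = w[:, None] * I                                # weighted radiance, Eq. (7)
    _, _, Vt = np.linalg.svd(W, full_matrices=False)  # top principal component
    L = Vt[0] * np.sign(Vt[0].sum())                  # fix the sign to be positive
    return L / np.linalg.norm(L)
```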

3.4. Computational procedure

By combining the above-mentioned steps, the illumination can be estimated with the following iterative algorithm. Given the input specular-contaminated image and the initialized illumination, we iteratively conduct two steps:

  1. Conduct scene-adaptive specular-free clustering and separate the reflectance components within each cluster;
  2. Estimate the illumination from the specular-contaminated pixels identified by the relative intensity of the specular component.
The convergence criterion of the iteration is the change of the estimated illumination between successive iterations. The algorithm is summarized in Alg. 1.
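Putting the steps together, below is a compact sketch of the loop in Alg. 1. It assumes the separate_cluster and estimate_illuminant helpers sketched above are in scope, and uses scikit-learn's KMeans with a fixed cluster count in place of the scene-adaptive clustering described next.

```python
import numpy as np
from sklearn.cluster import KMeans

def illumination_estimation(I, n_clusters=4, max_iter=10, tol_deg=0.1):
    """Iterative illumination estimation (sketch of Alg. 1). I: (N, B) spectra."""
    N, B = I.shape
    L = np.ones(B) / np.sqrt(B)                 # illuminant E: flat initial spectrum
    for _ in range(max_iter):
        # Step 1: cluster materials on normalized specular-free spectra
        D = I @ (np.eye(B) - np.outer(L, L)).T
        D_n = D / (np.linalg.norm(D, axis=1, keepdims=True) + 1e-12)
        labels = KMeans(n_clusters=n_clusters, n_init=5).fit_predict(D_n)
        # Step 2: separate reflectance per cluster, then re-estimate L
        I_s = np.zeros_like(I)
        for c in range(n_clusters):
            idx = labels == c
            _, I_s[idx] = separate_cluster(I[idx], L)
        L_new = estimate_illuminant(I, I_s)
        change = np.degrees(np.arccos(np.clip(L @ L_new, -1.0, 1.0)))
        L = L_new
        if change < tol_deg:                    # convergence on spectrum change
            break
    return L
```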


Algorithm 1. Illumination Estimation

Initialization

We set the initial illumination spectrum to the spectrum of the CIE standard illuminant E, which gives equal weight to all wavelengths. Nevertheless, our method is robust to the initial illumination, as shown by the experiments in Section 4.

Material clustering

It is well known that the number of clusters is hard to determine. Here, we employ a scene-adaptive clustering scheme: we iteratively conduct specular-free clustering, increasing the number of clusters until convergence, so the proposed algorithm determines the number of clusters adaptively. Usually, we set the initial cluster number to 1 to be safe, since it is then guaranteed to be no larger than the true number of material types.
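One possible reading of this scheme, as a sketch assuming the illumination_estimation function above: grow the cluster count and stop once the resulting illumination estimate no longer changes (the exact stopping rule here is our assumption).

```python
import numpy as np

def adaptive_illumination_estimation(I, max_clusters=8, tol_deg=0.5):
    """Increase the cluster count until the illumination estimate stabilizes."""
    L_prev = illumination_estimation(I, n_clusters=1)
    for k in range(2, max_clusters + 1):
        L = illumination_estimation(I, n_clusters=k)
        angle = np.degrees(np.arccos(np.clip(L @ L_prev, -1.0, 1.0)))
        if angle < tol_deg:
            return L                      # adding clusters no longer helps
        L_prev = L
    return L_prev
```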

Accurate clustering of materials is nontrivial even in the hyperspectral domain. Fortunately, we do not need strictly accurate clustering. Incorrect clustering leads to incorrect weights, but the change is consistent within a given material. This consistent deviation of the weights is small compared with the differences between the specular-contaminated pixels and the diffuse pixels, and it is penalized by Eq. (8). This helps our method converge; experimentally, the algorithm converges within 5 iterations for most scenes.

4. Experiments

In this section, a series of extensive experiments are performed to evaluate the proposed method. First, we test our method on both indoor and outdoor scenes with ground-truth illumination, and report both quantitative results and visual comparisons with previous methods. Then, we qualitatively analyze the performance of our approach. Finally, we evaluate the performance of our method in several extreme cases, such as transparent objects, CDs, and fluffy objects.

Datasets

The hyperspectral images in our experiments are taken from three datasets with ground-truth illumination. Notably, diverse illuminations can be used to realistically render the same scene under different light sources to enlarge the datasets. In this way, each of the three datasets can be converted into an arbitrary number of images using various illumination spectra.
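Rendering a reflectance dataset under a new light source amounts to a per-band product of reflectance and illuminant; a minimal sketch, with array shapes of our choosing:

```python
import numpy as np

def render(reflectance, illuminant):
    """Render a hyperspectral reflectance cube under a given illuminant.

    reflectance: (H, W, B) per-band reflectance; illuminant: (B,) spectrum.
    """
    return reflectance * illuminant[None, None, :]   # per-band scaling
```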

One of the datasets is the widely used CAVE Multispectral Image Database [9]. It is an indoor multispectral image dataset that contains a wide variety of real-world materials and objects. The dataset consists of 32 scenes, each recording full-spectral-resolution reflectance data from 400 nm to 700 nm in 10 nm steps.

Another dataset, released by Foster et al. [10], comprises eight hyperspectral images of outdoor scenes with ground-truth illumination. The wavelength range of 400–720 nm was sampled at 10 nm intervals.

To demonstrate the performance on close-up outdoor scenes and challenging indoor scenes, we collect a third dataset using the Prism-Mask Imaging Spectrometer (PMIS) [28]. The dataset comprises images of 11 indoor scenes and 6 outdoor natural scenes illuminated by direct sunlight under a clear sky. These images are resolved in the wavelength range from 405 nm to 700 nm with unequal intervals. The ground-truth illumination for each image is obtained using a Macbeth ColorChecker [29]. Samples of our hyperspectral dataset are shown in Fig. 5.


Fig. 5 A glance at our hyperspectral dataset. Each image contains a color checker for obtaining the ground-truth illumination.


Error computation

The accuracy of the estimated illumination with respect to the ground truth is evaluated using the Euclidean angle, i.e., the angular deviation between our estimated illumination L and the ground-truth illumination L_gt, measured in degrees. This metric has been widely used in the literature and is given by

bias = arccos(⟨L, L_gt⟩ / (‖L‖ ‖L_gt‖)). (9)
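A direct numpy implementation of Eq. (9) might look as follows; the clipping guards against floating-point values marginally outside [-1, 1].

```python
import numpy as np

def angular_error_deg(L_est, L_gt):
    """Angular deviation of Eq. (9) between estimated and ground-truth spectra."""
    cos = L_est @ L_gt / (np.linalg.norm(L_est) * np.linalg.norm(L_gt))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
```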

Experiments on overall datasets

To evaluate the performance on hyperspectral images, we compare our method with several approaches to illumination estimation for hyperspectral images. Considering that our method is not applicable to trichromatic images due to the orthogonal subspace projection, we generalize the assumptions of [11–13, 15, 30] from trichromatic computational color constancy to spectral illumination estimation on hyperspectral images. We also compare our method with the state-of-the-art illumination and reflectance spectra separation (IRSS) method for hyperspectral images proposed by Zheng et al. [24].

Table 1 quantitatively evaluates our results for both indoor and outdoor scenes in comparison with the illumination estimation methods of [11–13, 15, 24, 30]. The experiment contains a total of 43 indoor scenes from [9, 28] and 14 outdoor scenes from [10, 28]. Various illuminations are used to realistically render the same scene under different light sources. To provide a fair comparison between the methods under consideration, we report the mean, the best 25%, and the worst 25% errors, as well as the standard deviation, so as to better summarize the error distributions. In particular, the color checkers in the scenes are cropped out during the experiments. According to Table 1, our method obtains estimates closer to the ground truth than those obtained by the other state-of-the-art methods. This improvement is consistent for both indoor and outdoor scenes.


Table 1. Performance comparison on the overall datasets of both indoor and outdoor scenes, including the mean, the best 25%, the worst 25% errors and the standard deviation (SD).

For the images in Fig. 6, we qualitatively analyze the performance of our method. Whether the specularity is sharp or smooth, as illustrated in the first two columns, our method separates the specular component effectively, which accounts for the accuracy of the illumination factorized from the specular-weighted radiance. For outdoor scenes, as illustrated in the last two columns, although most surfaces are diffuse, specularity is still inevitable. By adaptively enhancing the intensities of the specular-influenced pixels, our method achieves reliable results and is insensitive to large uniformly colored surfaces. Specifically, these images are rendered under the spectra of the 14th, 8th, and 12th patches of the Macbeth ColorChecker and the CIE standard illuminant F1. This demonstrates the ability of our method to deal with diverse illuminations.


Fig. 6 Illumination estimation results of our method. (a) The input images rendered under different illumination spectra. (b) The weighted radiance. (c) The ground-truth reflectance image. (d)(e)(f) The output images corrected by the illumination estimated by our method, weighted Gray-Edge [15], and Zheng et al. [24]. (g) The estimated illumination curves.


Moreover, we test the computational cost of the proposed method on an Intel 4.00 GHz CPU platform. On average, our method can successfully converge within 5 iterations. Typically, the estimation for a single image from CAVE Multispectral Image Database [9], which has about 500 × 500 pixels and 31 bands, takes less than 6 seconds.

Experiments on challenging scenes

In Fig. 7, we test the performance and analyze the behavior of our algorithm in challenging cases. Here, images are rendered under the spectrum of the CIE standard illuminant D65. In the scene displayed on the left side of Fig. 7, the specularity is caused by the transparent glass and the yellow diffuse reflectance comes from the beer. Our method treats them together as a single yellow-glass material, and the specular parts are separated as the specular component of the whole; in this manner, our approach yields a good estimate. On the right side of Fig. 7, the transparent upper surface of the CD causes nonuniform reflectance, which is treated as different materials when separating the reflectance components; the specular component is then separated within each material.


Fig. 7 Results in some challenging cases.


Experiments on different initial illuminations

Here, we evaluate the robustness of our algorithm to different initial illuminations. Experiments are conducted with both relatively flat and smooth illuminations, such as illuminant E, and challenging indoor illuminations, such as fluorescent lamps (e.g., the illuminant F series). According to Table 2, different initial illuminations yield similar results.


Table 2. Performance comparison on different initial illuminations in indoor scenes, including the mean, the best 25%, the worst 25% errors and the standard deviation (SD).

Experiments on weighting parameters

To clarify the impact of the weighting scheme, we also test the sensitivity of our method to the parameter settings. We set the parameters α and β of the sigmoid function to zero, which makes the weights uniform over all pixels. This is equivalent to estimating the illumination directly from the principal component without the weighting scheme. The effectiveness of the weighting scheme is shown in Table 3.


Table 3. Performance comparison on parameter settings, including the mean, the best 25%, the worst 25% errors and the standard deviation (SD).

5. Conclusion

In this paper, we introduced an illumination estimation method for a single hyperspectral image contaminated by specularities. The proposed method takes full advantage of the global specular information. Benefiting from scene-adaptive material clustering, reliable reflectance separation, the specular-based weight map, and an effective numerical solution, our method achieves superior performance compared to previous counterparts. Experiments demonstrate that our method is robust to the diversity of non-Lambertian surfaces, indoor and outdoor scenes, illumination types, and specular intensities. Although effective in various challenging cases, our approach may degrade slightly in scenes with weak specularity, which we leave as a direction for future work.

References

1. Y. Zhang, C. P. Huynh, N. Habili, and K. N. Ngan, “Material segmentation in hyperspectral images with minimal region perimeters,” in 2016 IEEE International Conference on Image Processing (ICIP), (2016), pp. 834–838.

2. A. Chakrabarti and T. Zickler, “Statistics of real-world hyperspectral images,” in CVPR 2011, (2011), pp. 193–200.

3. Z. Pan, G. Healey, M. Prasad, and B. Tromberg, “Face recognition in hyperspectral images,” IEEE Trans. Pattern Anal. Mach. Intell. 25, 1552–1560 (2003). [CrossRef]

4. H. V. Nguyen, A. Banerjee, and R. Chellappa, “Tracking via object reflectance using a hyperspectral video camera,” in 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Workshops, (2010), pp. 44–51.

5. L. Gu, A. A. Robles-Kelly, and J. Zhou, “Efficient estimation of reflectance parameters from imaging spectroscopy,” IEEE Trans. Image Process. 22, 3648–3663 (2013). [CrossRef]

6. A. Gijsenij, T. Gevers, and J. van de Weijer, “Computational color constancy: Survey and experiments,” IEEE Trans. Image Process. 20, 2475–2489 (2011). [CrossRef]

7. P. Koirala, P. Pant, M. Hauta-Kasari, and J. Parkkinen, “Highlight detection and removal from spectral image,” J. Opt. Soc. Am. A 28, 2284–2291 (2011). [CrossRef]  

8. S. A. Shafer, “Using color to separate reflection components,” Color. Res. & Appl. 10, 210–218 (1985). [CrossRef]  

9. F. Yasuma, T. Mitsunaga, D. Iso, and S. K. Nayar, “Generalized assorted pixel camera: Postcapture control of resolution, dynamic range, and spectrum,” IEEE Trans. Image Process. 19, 2241–2253 (2010). [CrossRef]

10. D. H. Foster, K. Amano, S. M. Nascimento, and M. J. Foster, “Frequency of metamerism in natural scenes,” J. Opt. Soc. Am. A 23, 2359–2372 (2006). [CrossRef]  

11. D. H. Brainard and B. A. Wandell, “Analysis of the retinex theory of color vision,” J. Opt. Soc. Am. A 3, 1651–1661 (1986). [CrossRef]   [PubMed]  

12. G. Buchsbaum, “A spatial processor model for object colour perception,” J. Frankl. Inst. 310, 1–26 (1980). [CrossRef]

13. J. van de Weijer, T. Gevers, and A. Gijsenij, “Edge-based color constancy,” IEEE Trans. Image Process. 16, 2207–2214 (2007). [CrossRef]

14. D. A. Forsyth, “A novel algorithm for color constancy,” Int. J. Comput. Vis. 5, 5–35 (1990). [CrossRef]  

15. A. Gijsenij, T. Gevers, and J. van de Weijer, “Improving color constancy by photometric edge weighting,” IEEE Trans. Pattern Anal. Mach. Intell. 34, 918–929 (2012). [CrossRef]

16. A. Robles-Kelly and C. Huynh, Imaging Spectroscopy for Scene Analysis (Springer-Verlag London, 2013). [CrossRef]

17. H.-C. Lee, “Method for computing the scene-illuminant chromaticity from specular highlights,” J. Opt. Soc. Am. A 3, 1694–1699 (1986). [CrossRef]   [PubMed]  

18. S. Tominaga and B. A. Wandell, “Standard surface-reflectance model and illuminant estimation,” J. Opt. Soc. Am. A 6, 576–584 (1989). [CrossRef]  

19. S. Tominaga, “Multichannel vision system for estimating surface and illumination functions,” J. Opt. Soc. Am. A 13, 2163–2173 (1996). [CrossRef]  

20. T. M. Lehmann and C. Palm, “Color line search for illuminant estimation in real-world scenes,” J. Opt. Soc. Am. A 18, 2679–2691 (2001). [CrossRef]  

21. G. D. Finlayson and G. Schaefer, “Convex and non-convex illuminant constraints for dichromatic colour constancy,” in Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. CVPR 2001, vol. 1 (2001), pp. I-598–I-604.

22. G. D. Finlayson and G. Schaefer, “Solving for colour constancy using a constrained dichromatic reflection model,” Int. J. Comput. Vis. 42, 127–144 (2001). [CrossRef]  

23. C. P. Huynh and A. Robles-Kelly, “A solution of the dichromatic model for multispectral photometric invariance,” Int. J. Comput. Vis. 90, 1–27 (2010). [CrossRef]  

24. Y. Zheng, I. Sato, and Y. Sato, “Illumination and reflectance spectra separation of a hyperspectral image meets low-rank matrix factorization,” in 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2015), pp. 1779–1787.

25. D. An, J. Suo, H. Wang, and Q. Dai, “Illumination estimation from specular highlight in a multi-spectral image,” Opt. Express 23, 17008–17023 (2015). [CrossRef]   [PubMed]  

26. Z. Fu, R. T. Tan, and T. Caelli, “Specular free spectral imaging using orthogonal subspace projection,” in 18th International Conference on Pattern Recognition (ICPR’06), vol. 1 (2006), pp. 812–815.

27. H.-L. Shen and Z.-H. Zheng, “Real-time highlight removal using intensity ratio,” Appl. Opt. 52, 4483–4493 (2013). [CrossRef]   [PubMed]  

28. X. Cao, H. Du, X. Tong, Q. Dai, and S. Lin, “A prism-mask system for multispectral video acquisition,” IEEE Trans. Pattern Anal. Mach. Intell. 33, 2423–2435 (2011). [CrossRef]

29. C. McCamy, H. Marcus, and J. Davidson, “A color-rendition chart,” J. Appl. Photogr. Eng. 2, 95–99 (1976).

30. G. D. Finlayson and E. Trezzi, “Shades of gray and colour constancy,” in Color and Imaging Conference, (2004), pp. 37–41.
