
Spectrogenic imaging: A novel approach to multispectral imaging in an uncontrolled environment

Open Access

Abstract

Increasing the number of imaging channels beyond the conventional three has been shown to be beneficial for a wide range of applications. However, it is mostly limited to imaging in a controlled environment, where the capture environment (illuminant) is known a priori. We propose here a novel system and methodology for multispectral imaging in an uncontrolled environment. Two images of a scene, a normal RGB and a filtered RGB, are captured. The illuminant under which an image is captured is estimated using a chromagenic-based algorithm, and the multispectral system is calibrated automatically using the estimated illuminant. A 6-band multispectral image of a scene is obtained from the two RGB images. The spectral reflectances of the scene are then estimated using an appropriate spectral estimation method. The proposed concept and methodology are generic, valid regardless of how the two images of a scene are acquired. A system that acquires the two images can be realized, for instance, in two shots using a digital camera and a filter, or in a single shot using a stereo camera or a custom color filter array design. Simulation experiments using a stereo camera based system confirm the effectiveness of the proposed method. This could be useful in many imaging and computer vision applications.

© 2014 Optical Society of America

1. Introduction

Multispectral imaging is advantageous over conventional three-channel (usually RGB) color imaging, which suffers from metamerism and environment dependency. Multispectral imaging aims to recover the spectral reflectance of a scene, and is therefore in principle environment independent and less prone to metamerism. Unlike conventional digital color cameras, multispectral systems are not limited to the visible range; they can also be used in the near-infrared, infrared and ultraviolet spectra, depending on the optics and the sensor responsivity range. These systems can significantly improve color accuracy [1] and make color reproduction under different illumination environments possible with reasonably good accuracy [2]. There are different types of multispectral imaging systems, for example, filter-based systems [3–5], filter array based systems [6, 7], LED based systems [8–10], etc. Multispectral imaging has a wide set of application domains, such as remote sensing, culture & heritage, biometrics, medical imaging, high-accuracy color printing, and machine vision. Despite all these advantages, the wide applicability, and the many different types of systems, the usage of multispectral imaging is so far mostly limited to controlled laboratory environments, where the capture environment (illuminant) is known a priori. This allows the system to be calibrated under this known illuminant, before the actual capture process. However, for many applications the illuminant may not be known beforehand, for example in the case of natural outdoor scenes. One way of retrieving information about the illuminant is to capture images with a standard color target placed in the scene, such as the Macbeth Color Checker, as done in the multispectral image database by Yasuma et al. [11], or a Munsell gray chart as used by Valero et al. [12] to recover spectral data from natural scenes. But the imaging process then becomes constrained, and may not be practicable in many situations.

In this paper, we propose a novel system, named the spectrogenic imaging system, along with a framework and method for multispectral imaging in an uncontrolled environment. Two images of a scene are taken simultaneously: a normal RGB, and a filtered RGB using a special optical filter. The illuminant under which these images are captured is then estimated using a chromagenic-based illuminant estimation method [13, 14]. The combination of the two images gives a 6-band multispectral image. The spectral reflectances of a scene are estimated by calibrating the system under the capture environment, using the estimated illuminant. Simulation experiments confirm the effectiveness of the proposed method. The main contribution of this paper is a novel framework and method which combines an existing illuminant estimation method and a multispectral acquisition technique in an innovative way, in order to enable multispectral imaging in any uncontrolled environment. This has numerous potential applications, such as machine vision and surveillance, which require imaging in a natural environment.

The rest of the paper is organized as follows. We present the proposed system and methodology in Section 2. We then present the experiments and results in Section 3. Finally, we conclude the paper in Section 4.

2. Proposed spectrogenic imaging system and methodology

Multispectral imaging systems acquire images in a number of spectral bands. Spectral reflectances are then estimated from the sensor responses, using a spectral estimation algorithm. For this, the system must be calibrated under the same environment (illuminant) under which the image is captured. We propose to estimate this illuminant from the two images of a scene, one normal RGB image (RGB) and one filtered RGB image (RGBF), using the chromagenic algorithm from Finlayson et al. [13] or the bright-chromagenic algorithm from Fredembach and Finlayson [14]. The filtered RGB image is the RGB image obtained by filtering through a special optical filter. The two images could be captured with a normal digital camera, first as a normal RGB image, and then by placing the filter in front of the camera lens. This would require two sequential shots, with the obvious inconveniences that entails. A one-shot solution using a stereo camera has been proposed by Shrestha and Hardeberg [15]. It uses a digital stereo camera with one of the lenses covered with a filter, allowing the acquisition of two images of a scene, a normal RGB and a filtered RGB image, in a single shot. That system assumes that the illuminant under which images are acquired is known (somehow measured). In that case, optimal filter(s) are selected for accurate spectral or color estimation, depending on the application requirements. The spectral reflectance of a scene is then recovered by calibrating the system with the known illuminant, using an appropriate spectral estimation method.

In this work, we combine the illuminant estimation and the multispectral imaging techniques in a simple but novel way, so as to make the resulting system capable of multispectral imaging in an uncontrolled environment, where the illuminant under which an image is acquired is unknown. Thus we first need to estimate the unknown illuminant. For this, a special optical filter is used which enables accurate estimation of the illuminant as well as of the spectra/color. We call such a filter spectrogenic, and the system is named the spectrogenic imaging system. The motivation behind the name is that the system is a novel spectral imaging system which uses a special filter, making it capable of acquiring spectral images in an uncontrolled environment, and the system uses the chromagenic illuminant estimation algorithm. The term chromagenic comes from chromagen, which refers to contact lenses (specially chosen colored filters) prescribed to improve the vision of color-deficient and dyslexic observers [13]. A spectrogenic filter can either be selected from a list of optical filters or be custom designed. Figure 1 shows a framework for the proposed spectrogenic imaging system.

Fig. 1 Framework for the spectrogenic imaging system.

The system comprises four parts: image acquisition, illuminant estimation, system calibration, and spectral estimation; the imaging process is carried out through these parts in the order numbered in the framework diagram. The system acquires two images of a scene, a normal RGB (RGB) and a filtered RGB (RGBF), giving a 6-band multispectral image of the scene. The illuminant estimation unit estimates the illuminant ($l$) under which a test image ($R$) is acquired, using the two RGB images. The multispectral imaging system is then calibrated with the two simulated images of training targets ($R_{\mathrm{train}}$), acquired under the estimated illuminant ($l_{\mathrm{est}}$). Spectral reflectances of the scene are then estimated from the 6-band sensor responses using a spectral estimation method. The whole process can be executed either on the fly, for example by incorporating all the steps into a built-in chip inside the camera system, or in post-processing using the raw images from the sensor. In this way, we are able to acquire multispectral images of scenes irrespective of the illuminant (environment) under which the image is acquired. The concept and methodology presented here are generic, valid irrespective of how the two images of a scene are acquired. Two good examples are the single-shot 6-band multispectral imaging systems proposed by Shrestha and Hardeberg, one using a stereo camera [16] and the other based on color filter arrays [7].

In the following subsections, we first discuss the system model of the proposed spectrogenic imaging system, and then present the illuminant and spectral estimation methods used.

2.1. System model

In order to model the proposed spectrogenic imaging system, the system can be considered as comprising two cameras: a normal RGB camera and a filtered RGB camera. Let $S = [s_R, s_G, s_B]$ be a matrix of the spectral sensitivities of the three channels of the normal RGB camera, $F$ a diagonal matrix of the spectral transmittance of the optical filter used, $L$ a diagonal matrix of the spectral power distribution of the light source, and $R$ the spectral reflectance of the surface captured by the camera. Let $n_N$ and $n_F$ be the noise vectors corresponding to the acquisition noise in the three channels of the normal and the filtered RGB cameras respectively. The camera responses of the normal and the filtered cameras, $C_N$ and $C_F$, are respectively given by:

$C_N = S^T L R + n_N$  (1)

and $C_F = S^T F L R + n_F$,  (2)

where $S^T$ denotes the transpose of the matrix $S$. The combined response $C = [C_N^T, C_F^T]^T$ of the two cameras gives six responses, leading to a 6-band multispectral image of the scene. In Eq. (2), $FL$ can also be thought of as the resultant spectral power distribution of the light source when the filter is placed in front of the light source. This shows that the proposed concept works even when the filter is placed in front of the light instead of the camera lens. In other words, it also holds when the two images are acquired under two special illuminants. In this model, the two illuminants can be determined through training. From the six camera responses, the spectral reflectances of the scene can then be reconstructed using a spectral estimation method. We present the spectral estimation method used in this work in Section 2.3.
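To make the model concrete, the following is a minimal NumPy sketch of the noise-free responses in Eqs. (1) and (2). The random arrays are placeholders for the measured sensitivities, filter transmittance, illuminant, and reflectance; the sampling grid (31 samples, e.g. 400–700 nm in 10 nm steps) is an assumption for illustration.

```python
import numpy as np

# Sketch of the noise-free response model in Eqs. (1)-(2). All spectral
# quantities are assumed sampled at the same wavelengths; arrays below
# are random placeholders for measured data.
n_wl = 31
rng = np.random.default_rng(0)

S = rng.random((n_wl, 3))        # sensitivities S = [s_R, s_G, s_B]
F = np.diag(rng.random(n_wl))    # diagonal filter transmittance matrix
L = np.diag(rng.random(n_wl))    # diagonal illuminant (SPD) matrix
r = rng.random(n_wl)             # surface reflectance R for one pixel

C_N = S.T @ L @ r                # normal camera response, Eq. (1) sans noise
C_F = S.T @ F @ L @ r            # filtered camera response, Eq. (2) sans noise
C = np.concatenate([C_N, C_F])   # combined 6-band response C = [C_N; C_F]
```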

2.2. Illuminant estimation

There are many different illuminant estimation methods proposed in the literature, such as gray-world [17], max-RGB [18], gamut based [19], neural network based [20], color-by-correlation [21], Bayesian [22], and chromagenic color constancy [13, 14]. We use an illuminant estimation method adapted from the chromagenic method originally proposed by Finlayson et al. [13] and the modified method proposed by Fredembach and Finlayson [14], for consistent and improved performance [23]. Moreover, these methods use two images of a scene, which is in line with our proposed spectrogenic imaging system. We briefly present the two methods in this section. The chromagenic illuminant estimation algorithms are based on the assumption that we know all the possible illuminants a priori. Let $l_i(\lambda)$, $i = 1, \ldots, m$ be the spectral power distribution functions of these illuminants. A special filter is chosen so that, for a given light, the filtered RGBs are as close as possible to a linear transform of the normal unfiltered RGBs, while at the same time the linear transform changes with different illuminants. Such a filter is called chromagenic. The two camera responses are then related as:

$C_F \approx M_{IE}\, C_N$,  (3)
where $M_{IE}$ is a 3 × 3 linear transformation matrix. The matrix for each illuminant $l_i$ is computed from the normal and the filtered images of the training targets, where $(\cdot)^+$ denotes the Moore–Penrose pseudoinverse, using the equation:

$M_{IE} = C_F\, C_N^+$,  (4)

The chromagenic illuminant estimation method in its original form uses this relation with three color values, the RGB camera responses. Since the relation involves a matrix inversion, we propose in our approach to use polynomial camera responses of degree $n$, $C_p$, in place of the simple camera responses $C$, for results more robust to the ill-posed inversion problem and for better data fitting [24]. This has been shown effective by simulation experiments. However, the higher the degree of the polynomials, the more prominent the influence of noise. The $C_p$ of 2nd-degree polynomials of the camera responses without cross terms, $[C_R, C_G, C_B, C_R^2, C_G^2, C_B^2, 1]^T$, is found to be optimal in our experiments. In this case, the transformation matrix becomes 3 × 7. The matrix $M_{IE}$ is then computed from the polynomial camera responses of the normal ($C_{p,N}$) and the filtered ($C_{p,F}$) images of representative real-world surfaces (training targets) as:

$M_{IE} = C_{p,F}\, C_{p,N}^+$.  (5)
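As a sketch, the polynomial expansion and the least-squares fit of $M_{IE}$ might look as follows. Consistent with the 3 × 7 dimension stated above, we map the expanded unfiltered responses to the plain filtered responses; this reading of Eq. (5) is our assumption, and the pseudoinverse realizes the $^+$.

```python
import numpy as np

def poly_expand(C):
    """2nd-degree polynomial expansion without cross terms, per channel.
    For a 3-channel input this gives [C_R, C_G, C_B, C_R^2, C_G^2, C_B^2, 1]^T.
    C has shape (channels, n_pixels); output has shape (2*channels + 1, n_pixels)."""
    return np.vstack([C, C**2, np.ones((1, C.shape[1]))])

def train_M_IE(C_N, C_F):
    """Fit M_IE for one training illuminant, per Eq. (5). C_N and C_F are
    the unfiltered/filtered training responses, each of shape (3, n_pixels);
    the result is the 3 x 7 matrix mapping expanded unfiltered responses
    to filtered responses, via the Moore-Penrose pseudoinverse."""
    return C_F @ np.linalg.pinv(poly_expand(C_N))
```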

For a given test illuminant, one illuminant from among the plausible illuminants $l_i$ is selected as the estimated illuminant $l_{\mathrm{est}}$: the one that produces the minimum fitting error:

$\mathrm{est} = \operatorname{argmin}_i (e_i), \quad i = 1, \ldots, m,$  (6)

where $e_i$ is the fitting error, which can be calculated as:

$e_i = \left\| M_{IE,i}\, C_{p,N} - C_{p,F} \right\|.$  (7)
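A sketch of this selection step, Eqs. (6) and (7), reusing poly_expand from the sketch above; the Frobenius norm for the fitting error is our assumption, since the text does not name a specific norm.

```python
import numpy as np

def estimate_illuminant(C_N, C_F, M_list):
    """Return the index of the training illuminant whose matrix M_IE,i
    best maps the unfiltered responses to the filtered ones, Eqs. (6)-(7).
    M_list[i] is the 3 x 7 matrix trained under illuminant l_i;
    C_N, C_F have shape (3, n_pixels)."""
    Cp_N = poly_expand(C_N)  # poly_expand as defined in the sketch above
    errors = [np.linalg.norm(M @ Cp_N - C_F) for M in M_list]
    return int(np.argmin(errors))
```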

Fredembach and Finlayson [14] proposed the bright-chromagenic algorithm, which uses only a certain percentage of the brightest pixels in an image (typically 1–3%), rather than all the pixels as in the original chromagenic algorithm [13]. The brightest pixels are defined as those with the largest $C_R^2 + C_G^2 + C_B^2$ value. They argued that the bright-chromagenic algorithm is more robust since it does not make assumptions about which reflectances might or might not be present in the scene; if there are no bright reflectances, it still performs equivalently to the chromagenic algorithm. Moreover, if the filter does not vary too drastically across the spectrum, the brightest unfiltered RGBs will map to the brightest filtered RGBs, so by limiting the number of brightest pixels, the algorithm was expected to estimate illuminants even if the two images were not registered. However, in our method, we allow the selection of any kind of filter that produces optimal results, without restricting ourselves to smoothly varying filters only. In order to avoid mapping completely different pixels in the two images, we select the brightest pixels in the non-filtered image, and use the corresponding pixels in the filtered image.
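A minimal sketch of this bright-pixel selection: the indices are found in the unfiltered image and then reused to pick the corresponding pixels in the filtered image, as described above.

```python
import numpy as np

def brightest_pixel_indices(C_N, fraction=0.03):
    """Indices of the brightest pixels of the unfiltered image, ranked by
    C_R^2 + C_G^2 + C_B^2. C_N has shape (3, n_pixels)."""
    brightness = np.sum(C_N**2, axis=0)
    k = max(1, int(fraction * C_N.shape[1]))
    return np.argsort(brightness)[-k:]

# Usage: restrict both images to the same pixels before estimating.
# idx = brightest_pixel_indices(C_N, fraction=0.03)
# i_est = estimate_illuminant(C_N[:, idx], C_F[:, idx], M_list)
```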

2.3. Spectral estimation

Using the six camera responses $C$ and the illuminant estimated as in the previous subsections, the spectral reflectances of a scene ($R_{\mathrm{est}}$) are estimated using a spectral estimation method. Let $R$ denote the measured spectral reflectances of the scene. There are many different spectral estimation methods proposed in the literature; for some of the most commonly used methods, we refer to [4, 5, 24–27]. In this paper, we use the polynomial method [24], for its good performance and its consistency with the approach used in the illuminant estimation. Let $C_{\mathrm{train}}$ and $C$ denote the camera responses of the training ($R_{\mathrm{train}}$) and the test ($R$) targets respectively. Then, the estimated reflectance with the polynomial method is given by the equation:

$R_{\mathrm{est}} = M_{SE}\, C_p$,  (8)

where $M_{SE}$ is the transformation matrix used for the spectral estimation. The matrix is computed as:

$M_{SE} = R_{\mathrm{train}}\, C_{p,\mathrm{train}}^+$,  (9)

where $C_p$ is, as in the illuminant estimation above, the polynomial expansion of degree $n$ of the camera responses $C$. The transformation matrix $M_{SE}$ is computed using the simulated camera responses of the training targets, under the illuminant estimated by the illuminant estimation algorithm presented above in Section 2.2.
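A sketch of Eqs. (8) and (9), again reusing poly_expand from the earlier sketch: with six channels the 2nd-degree expansion has 13 terms, so $M_{SE}$ here is $n_\lambda \times 13$, where $n_\lambda$ is the number of wavelength samples.

```python
import numpy as np

def train_M_SE(R_train, C_train):
    """M_SE = R_train C_p,train^+ (Eq. (9)). R_train: (n_wl, n_samples)
    measured training reflectances; C_train: (6, n_samples) simulated
    6-band responses under the *estimated* illuminant."""
    return R_train @ np.linalg.pinv(poly_expand(C_train))

def estimate_reflectance(M_SE, C):
    """R_est = M_SE C_p (Eq. (8)); C: (6, n_pixels) 6-band test responses.
    Returns the estimated spectra, shape (n_wl, n_pixels)."""
    return M_SE @ poly_expand(C)
```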

3. Experiments

We have performed simulation-based experiments in order to validate and evaluate the proposed spectrogenic imaging system and its methodology for multispectral imaging in an uncontrolled environment. We first discuss the experimental setup in the next subsection, and then the experiments and results in Section 3.2.

3.1. Experimental setup

Since the concept and the methodology of the spectrogenic imaging system proposed in this work are rather generic, it works with any system that allows acquiring two images of a scene, one with and one without a filter. We use the stereo-based system proposed by Shrestha and Hardeberg [16] in our experiments. A simulated system is used, built from a digital stereo camera, the Fujifilm FinePix REAL 3D W1 (Fujifilm 3D for short), with an Omega XF1078 filter in front of one of its lenses. The filter was selected from among 265 filters from Omega Optical Inc. [28], based on minimization of both color and illuminant estimation errors. Figure 2 shows the spectral transmittance of the Omega XF1078 filter. The sensitivities of the left and right cameras of the stereo camera, measured with a monochromator system [5], are shown in Fig. 3(a). Figure 3(b) shows the resulting normalized 6-band multispectral imaging system.

Fig. 2 Spectral transmittance of the Omega XF1078 filter.

Fig. 3 Spectral sensitivities of the individual cameras, and the 6-channel multispectral system.

The system acquires a normal RGB and a filtered RGB image of a scene in a single shot. For simplification, we assume that there is no occlusion and that the two images are well registered. This is valid when imaging flat surfaces, for instance paintings. 1995 surface reflectances ($R_{\mathrm{train}}$) and 87 measured training illuminants ($L_{\mathrm{allp}}$) from Barnard et al. [29] have been used. The surface reflectances ($R_{\mathrm{train}}$), which include 1269 Munsell chips, 24 Macbeth Color Checker patches, and others, are used for calibrating/training the multispectral system and computing the transformation matrices ($M_i$) for the illuminant estimation. The illuminants ($L_{\mathrm{allp}}$) serve as the set of all possible illuminants in the illuminant estimation. Four standard illuminants, A, D50, D65 and F3, are used for testing and validation. All the illuminants are normalized such that the spectral value at the 550 nm wavelength equals one. To make the simulated multispectral system more realistic, up to 2% normally distributed Gaussian noise is introduced as random shot noise, and 12-bit quantization noise is incorporated by directly quantizing the simulated camera responses after the application of the shot noise.
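A sketch of this noise model follows; the exact formulation (e.g. whether the shot noise is signal-proportional) is our assumption, and responses are assumed normalized to [0, 1].

```python
import numpy as np

rng = np.random.default_rng(0)

def add_acquisition_noise(C, shot_fraction=0.02, bits=12):
    """Add up to 2% Gaussian shot noise, then 12-bit quantization.
    Camera responses C are assumed normalized to [0, 1]."""
    noisy = C * (1.0 + shot_fraction * rng.standard_normal(C.shape))
    levels = 2**bits - 1                       # 4095 quantization levels
    return np.clip(np.round(noisy * levels) / levels, 0.0, 1.0)
```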

In order to evaluate the system, four hyperspectral images from the University of Eastern Finland’s spectral image database [30] have been used to acquire simulated images. Figure 4 shows the RGB images generated from these hyperspectral images.

Fig. 4 RGB images rendered using four publicly available hyperspectral images from the Joensuu Spectral Image Database [30] (URL: http://www.uef.fi/fi/spectral/spectral-image-database), University of Eastern Finland.

The modified bright-chromagenic algorithm is used for the illuminant estimation. The median angular error [31] is used to evaluate the performance of the illuminant estimation algorithm. The multispectral imaging system is evaluated using spectral as well as colorimetric metrics. GFC (Goodness of Fit Coefficient) and PSNR (Peak Signal-to-Noise Ratio) have been used as the spectral metrics, and $\Delta E^*_{ab}$ (CIELAB color difference) as the colorimetric metric. PSNR is calculated as $20 \log_{10}(1/\mathrm{RMSE})$, where RMSE is the root mean square error.
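For reference, the spectral metrics and the angular error can be computed as in this sketch; the GFC follows its standard definition, and the PSNR uses the formula given above.

```python
import numpy as np

def gfc(r_meas, r_est):
    """Goodness of Fit Coefficient between two spectra (1 = perfect fit)."""
    return abs(r_meas @ r_est) / (np.linalg.norm(r_meas) * np.linalg.norm(r_est))

def psnr(r_meas, r_est):
    """PSNR = 20*log10(1/RMSE); spectra assumed in [0, 1]."""
    rmse = np.sqrt(np.mean((r_meas - r_est)**2))
    return 20.0 * np.log10(1.0 / rmse)

def angular_error_deg(l_meas, l_est):
    """Angular error (degrees) between measured and estimated illuminants [31]."""
    c = (l_meas @ l_est) / (np.linalg.norm(l_meas) * np.linalg.norm(l_est))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))
```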

3.2. Experiments and results

First of all, the transformation matrices $M_i$ corresponding to each of the $L_{\mathrm{allp}}$ illuminants are computed from the surface reflectances $R_{\mathrm{train}}$, using Eq. (5). Then, for each hyperspectral image, simulated normal RGB and filtered RGB images are obtained under each of the test illuminants. Acquisition noise is introduced into them to make the simulation more realistic, as discussed above in the experimental setup subsection.

A test illuminant is estimated using the modified bright-chromagenic algorithm as discussed in Section 2.2. The top 3% brightest pixels are used. The median angular error is computed using the chromaticity values of the measured and the estimated illuminants for each of the test illuminants. Using the estimated illuminant, the six camera responses (RGB and RGBF) of the training targets ($R_{\mathrm{train}}$), $C_{\mathrm{train}}$, are obtained. Using $C_{\mathrm{train}}$ as the training data, the spectral reflectance of each pixel of a test image is then estimated with the polynomial estimation method [Eq. (8)], using the normal and the filtered RGB images acquired under a test illuminant.
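Putting the pieces together, the per-image procedure just described might read as follows; this reuses the sketch functions above, and simulate_responses is a hypothetical helper applying the model of Section 2.1 under a given illuminant.

```python
# Hedged end-to-end sketch of the per-image procedure, reusing the sketch
# functions above. `simulate_responses` is a hypothetical helper applying
# Eqs. (1)-(2) to training reflectances under the estimated illuminant;
# C_N_img, C_F_img, M_list, R_train, and illuminants are assumed preloaded.
idx = brightest_pixel_indices(C_N_img, fraction=0.03)      # top 3% pixels
i_est = estimate_illuminant(C_N_img[:, idx], C_F_img[:, idx], M_list)
C_train = simulate_responses(R_train, illuminants[i_est])  # (6, n_samples)
M_SE = train_M_SE(R_train, C_train)
C_test = np.vstack([C_N_img, C_F_img])                     # (6, n_pixels)
R_est = estimate_reflectance(M_SE, C_test)                 # per-pixel spectra
```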

The three system evaluation metrics GFC, PSNR and $\Delta E^*_{ab}$ are then computed using the measured and the estimated spectral reflectances of the image. Table 1 shows the mean values of the metrics for each of the test illuminants, along with the uncertainty. Illuminant estimation errors in terms of the median angular errors are also shown in the table.

Table 1. Evaluation metric values for each of the test illuminants

The results show reasonably good performance, with mean GFC, PSNR and $\Delta E^*_{ab}$ values of 0.972, 27.99, and 6.26 respectively. The illuminant estimation error is quite good, with a median of the median angular errors of 1.79. However, the GFC value is below 0.99 and the $\Delta E^*_{ab}$ value is high, above 13, in the case of the D65 illuminant. The high color difference values might be because the system has not been optimized for accurate color reproduction. So far, to our knowledge, there is no single metric that leads to optimal performance in terms of both spectral and color estimation. Depending on the application, the system can be optimized for better spectral estimation or color difference when selecting the filter.

The RGB images acquired from the four hyperspectral images under the four test illuminants, and the corresponding estimated illuminants, are shown in Fig. 5. The images under the estimated illuminants appear close to the images under the original illuminants, except for the D65 illuminant, where there is a slight difference between the two. For a given test illuminant, the same illuminant is selected as the estimated illuminant for all four test images, indicating the robustness of the method.

Fig. 5 The RGB images captured under the four test illuminants (odd columns: measured, even columns: estimated). In most of the cases, the images rendered under the estimated illuminants are very close to the ground truth.

As an illustration, Fig. 6 and Fig. 7 show the measured and the estimated spectral reflectances at five different pixel locations of the images acquired under illuminants A and D65 respectively. The pixels are selected as representatives of the different areas in the images. The plots show very good estimations of the spectra in the case of illuminant A, and reasonably good, though less accurate, estimations in the case of illuminant D65.

Fig. 6 Reflectance spectra at five different pixels in the four test images, acquired under illuminant A. The pixel locations [row, column] are shown above the plots.

Fig. 7 Reflectance spectra at the same five pixels (as in Fig. 6) in the four test images, acquired under illuminant D65. The pixel locations [row, column] are shown above the plots.

We have used the 87 measured illuminants as the set of all possible illuminants; the system picks a close one among them as the estimated illuminant. The system works with any test illuminant. However, the performance obviously depends on whether a close illuminant is available in the list of possible illuminants. In reality there is a vast number of possible illuminants, especially in the outdoor environment. The system could, therefore, be improved by including more illuminants in the database. It could also be improved further by using a larger set of filters in the filter selection process.

4. Conclusion

Multispectral imaging has so far mostly been limited to indoor, controlled environments. We have proposed here a novel spectrogenic imaging system, with a framework and methodology for multispectral imaging in an uncontrolled environment. This makes it applicable in outdoor environments as well. It can be easily realized, for instance using a stereo camera and a filter.

Since the system can be used in both indoor and outdoor uncontrolled environments, it could be useful in many different imaging and machine vision applications. As future work, the system should be investigated through actual experiments using real camera(s), tested in an outdoor scenario.

References and links

1. M. Yamaguchi, R. Iwama, Y. Ohya, T. Obi, N. Ohyama, Y. Komiya, and T. Wada, "Natural color reproduction in the television system for telemedicine," in "Medical Imaging: Image Display," Proc. SPIE 3031, 482–489 (1997).

2. N. Tsumura, "Appearance reproduction and multispectral imaging," Color Res. Appl. 31(4), 270–277 (2006).

3. S. Tominaga, "Spectral imaging by a multichannel camera," J. Electron. Imaging 8(4), 332–341 (1999).

4. J. Y. Hardeberg, F. Schmitt, and H. Brettel, "Multispectral color image capture using a liquid crystal tunable filter," Opt. Eng. 41(10), 2532–2548 (2002).

5. R. Shrestha, A. Mansouri, and J. Y. Hardeberg, "Multispectral imaging using a stereo camera: Concept, design and assessment," EURASIP J. Adv. Sig. Pr. 2011(1), 57 (2011).

6. L. Miao and H. Qi, "The design and evaluation of a generic method for generating mosaicked multispectral filter arrays," IEEE Trans. Image Process. 15(9), 2780–2791 (2006).

7. R. Shrestha and J. Y. Hardeberg, "CFA based simultaneous multispectral imaging and illuminant estimation," in Proceedings of the 4th Computational Color Imaging Workshop, Vol. 7786 of LNCS (Springer, 2013), 158–170.

8. J. I. Park, M. H. Lee, M. D. Grossberg, and S. K. Nayar, "Multispectral imaging using multiplexed illumination," in Proceedings of IEEE Conference on Computer Vision (IEEE, 2007), 1–8.

9. R. Shrestha, J. Y. Hardeberg, and C. Boust, "LED based multispectral film scanner for accurate color imaging," in Proceedings of the 8th International Conference on Signal Image Technology and Internet Based Systems (IEEE, 2012), 811–817.

10. R. Shrestha and J. Y. Hardeberg, "Multispectral imaging using LED illumination and an RGB camera," in Proceedings of the 21st Color and Imaging Conference on Color Science and Engineering Systems, Technologies, and Applications (IS&T, 2013), 8–13.

11. F. Yasuma, T. Mitsunaga, D. Iso, and S. K. Nayar, "Generalized assorted pixel camera: Postcapture control of resolution, dynamic range, and spectrum," IEEE Trans. Image Process. 19(9), 2241–2253 (2010).

12. E. M. Valero, J. L. Nieves, S. M. C. Nascimento, K. Amano, and D. H. Foster, "Recovering spectral data from natural scenes with an RGB digital camera," Color Res. Appl. 32(5), 352–360 (2007).

13. G. D. Finlayson, S. D. Hordley, and P. Morovic, "Chromagenic colour constancy," in Proceedings of the 10th Congress of the International Colour Association (AIC, 2005), 8–13.

14. C. Fredembach and G. D. Finlayson, "The bright-chromagenic algorithm for illuminant estimation," J. Imaging Sci. Technol. 52(4), 040906-1–040906-11 (2008).

15. R. Shrestha and J. Y. Hardeberg, "Computational color constancy using a stereo camera," in Proceedings of the 6th European Conference on Colour in Graphics, Imaging, and Vision (IS&T, 2012), 69–74.

16. R. Shrestha and J. Y. Hardeberg, "Simultaneous multispectral imaging and illuminant estimation using a stereo camera," in Proceedings of the 5th International Conference on Image and Signal Processing, Vol. 7340 of LNCS (Springer, 2012), 45–55.

17. G. Buchsbaum, "A spatial processor model for object colour perception," J. Franklin Inst. 310(1), 1–26 (1980).

18. E. H. Land, "The retinex theory of color vision," Sci. Am. 237(6), 108–128 (1977).

19. D. A. Forsyth, "A novel algorithm for color constancy," Int. J. Comput. Vision 5(1), 5–36 (1990).

20. V. C. Cardei, B. Funt, and K. Barnard, "Estimating the scene illumination chromaticity by using a neural network," J. Opt. Soc. Am. A 19(12), 2374–2386 (2002).

21. G. D. Finlayson, S. D. Hordley, and P. M. Hubel, "Color by correlation: a simple, unifying framework for color constancy," IEEE Trans. Pattern Anal. Mach. Intell. 23(11), 1209–1221 (2001).

22. P. V. Gehler, C. Rother, A. Blake, T. Minka, and T. Sharp, "Bayesian color constancy revisited," in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (IEEE, 2008), 1–8.

23. R. Shrestha and J. Y. Hardeberg, "Computational color constancy using chromagenic filters in color filter arrays," in "Sensors, Cameras, and Systems for Industrial/Scientific Applications XIII," Proc. SPIE 8298, 82980S (2012).

24. D. R. Connah and J. Y. Hardeberg, "Spectral recovery using polynomial models," in "Color Imaging X: Processing, Hardcopy, and Applications," Proc. SPIE 5667, 65–75 (2005).

25. F. H. Imai, L. A. Taplin, and E. A. Day, "Comparative study of spectral reflectance estimation based on broadband imaging systems," Tech. rep., Munsell Color Science Laboratory, Center for Imaging Science, Rochester Institute of Technology, Rochester, New York, USA (2003).

26. D. R. Connah, J. Y. Hardeberg, and S. Westland, "Comparison of linear spectral reconstruction methods for multispectral imaging," in Proceedings of IEEE Conference on Image Processing (IEEE, 2004), 1497–1500.

27. A. M. Mansouri, F. S. Marzani, and P. Gouton, "Neural networks in two cascade algorithms for spectral reflectance reconstruction," in Proceedings of IEEE Conference on Image Processing (IEEE, 2005), 2053–2056.

28. Omega Optical Inc., "Omega optical filters," https://www.omegafilters.com/Products/Curvomatic. Last visited: Feb. 2014.

29. K. Barnard, V. C. Cardei, and B. Funt, "A comparison of computational color constancy algorithms. I: Methodology and experiments with synthesized data," IEEE Trans. Image Process. 11(9), 972–984 (2002).

30. University of Eastern Finland, Spectral Color Research Group, "Joensuu spectral image database," https://www.uef.fi/spectral/spectral-image-database (2014). Last visited: Feb. 2014.

31. S. D. Hordley and G. D. Finlayson, "Re-evaluation of color constancy algorithm performance," J. Opt. Soc. Am. A 23(5), 1008–1020 (2006).
