Multispectral camera as spatio-spectrophotometer under uncontrolled illumination


Abstract

Multispectral constancy enables an illuminant-invariant representation of multispectral data. This article proposes an experimental investigation of multispectral constancy through the use of a multispectral camera as a spectrophotometer for the reconstruction of surface reflectance. Three images are captured under varying illuminations and the spectra of material surfaces are reconstructed. The acquired images are transformed into a canonical representation through the use of a diagonal transform and a spectral adaptation transform. Experimental results show that the use of multispectral constancy is beneficial for both filter-wheel and snapshot multispectral cameras. The proposed concept is robust to errors in illuminant estimation and performs well with a linear spectral reconstruction method. This work brings us one step closer to the use of multispectral imaging for computer vision.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

With advancements in sensor technology, multispectral and hyperspectral imaging are increasingly being developed for close-range imaging applications. Multispectral imaging captures more spectral information from a scene than conventional color imaging. Recently emerging snapshot technologies, such as spectral filter arrays [1,2], enable a broader range of usage domains for multispectral imaging. The improved spectral resolution of multispectral imaging is useful for material classification and identification by means of spectral reconstruction [3–7] of surfaces in a scene.

The reflectance spectrum of a surface can also be acquired with a spectrometer, but a spectrometer provides spectral information for only a single point. A hyperspectral camera is another option for obtaining high-resolution radiance data, which can be transformed into reflectance data through calibration of the imaging system for the illumination. However, a hyperspectral imaging system is complex both in terms of hardware and data. Multispectral imaging provides an intermediate solution, as such a camera can easily be used as a hand-held device. The trade-off is a lower number of spectral channels compared to a hyperspectral image and thus less spectral information.

The advantage of spectral reconstruction of surfaces was recognized in the 1980s [8,9]. Since then, many methods have been developed to provide spectral reconstruction from camera data. Most of these methods rely on training data to learn the mapping between camera data and the desired spectra. This process is called calibration of the system and is performed with a training set of measured reflectances and radiance data acquired under a given illuminant. Most such systems assume smooth spectra of natural objects and illuminations. To maintain reasonable accuracy, the same illuminant is required during scene acquisition [6,7,10–12]. This requirement of having the same illuminant for calibration and image acquisition is a major shortcoming for the generic use of multispectral imaging [12]. To address this issue, the idea of multispectral constancy was recently proposed by Khan et al. [13–15].

In this paper, we demonstrate the ability of multispectral constancy to provide an illuminant-invariant representation. So far, the use of multispectral cameras has been limited to remote sensing and indoor laboratory conditions. Knowledge of the scene illumination is one of the constraints, and it is generally either assumed to be constant (as in remote sensing) or measured separately before processing the acquired multispectral data. Extending the use of multispectral imaging systems from heavily constrained environments to real-world applications is still an open challenge.

For the demonstration of multispectral constancy, three scenes are created in a viewing booth and acquired with a multispectral camera under five different illuminations. The scene illuminant is estimated and the multispectral data is transformed into a canonical representation. A linear method is used for spectral reconstruction, with a calibration matrix formed under the canonical illuminant. This approach allows the use of a multispectral camera as a spectrophotometer in uncontrolled imaging environments. Results show that transforming the image into a canonical representation through multispectral constancy improves the spectral reconstruction results.

2. Multispectral constancy

In this work, we consider a simplified imaging model. We assume that there are no deformations due to optics, electronics, or any other distortions in the image during acquisition. In such a model, the acquired image value at a pixel, f, depends on the surface reflectance r(λ), the light source e(λ), and the camera sensitivity s(λ):

$$ f = \int_{\omega} e(\lambda)\, r(\lambda)\, s(\lambda)\, d\lambda. \qquad (1) $$
In practice, we can formulate a discrete version of Eq. (1):
$$ F = R\,E\,S. \qquad (2) $$
Here F is the image, R is the spectral reflectance of a surface, E is the scene illumination, and S represents the sensor sensitivity of the camera. Due to this dependence on the scene illuminant, the captured color of objects generally changes when they are observed under different light sources. The human visual system has the natural ability to perceive constant colors of surfaces despite changes in the spectral composition of the illuminant [16], and this ability to discard illumination effects is known as “Color Constancy” [17]. Creating such a model for a computer vision system is called “computational color constancy”. This concept is extended to multispectral images by Khan et al. [14] and is called multispectral constancy. The purpose of such a system is to obtain an illuminant-invariant representation of images. It can be achieved either by exploiting illuminant-invariant features in images or by estimating the scene illumination and then removing its effect. In this work, we follow the second approach.
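
To make the discrete model concrete, the sketch below (Python with NumPy) simulates camera responses according to Eq. (2); the array names, shapes, and random spectra are illustrative assumptions only, not data from our setup.

```python
# Minimal sketch of the discrete imaging model F = RES (Eq. 2), assuming that
# reflectance, illuminant, and sensitivities are sampled at the same wavelengths.
import numpy as np

def camera_response(R, E, S):
    """Simulate sensor values for a set of surfaces.

    R : (n_surfaces, n_wavelengths) spectral reflectances
    E : (n_wavelengths,)            illuminant spectral power distribution
    S : (n_wavelengths, n_channels) camera sensitivities (filters + sensor)
    Returns F : (n_surfaces, n_channels) camera responses.
    """
    radiance = R * E[np.newaxis, :]   # reflected radiance per surface
    return radiance @ S               # integrate each channel over wavelength

# Illustrative example: 31 samples (e.g. 400-700 nm in 10 nm steps), 8 channels
rng = np.random.default_rng(0)
R = rng.uniform(0.0, 1.0, size=(5, 31))   # five hypothetical surfaces
E = rng.uniform(0.5, 1.5, size=31)        # hypothetical illuminant
S = rng.uniform(0.0, 1.0, size=(31, 8))   # hypothetical sensitivities
F = camera_response(R, E, S)              # camera data, shape (5, 8)
e_sensor = E @ S                          # illuminant as seen by the sensor (cf. Eq. (3) below)
```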

From the imaging model given in Eq. (1), the illuminant in the sensor domain is given as

$$ e = \int_{\omega} e(\lambda)\, s(\lambda)\, d\lambda. \qquad (3) $$
The illuminant in the sensor domain can be measured by using a white diffuser in the scene. This is a common practice during calibration of multispectral and hyperspectral cameras in laboratory conditions. However, it is not always feasible if the camera is used outdoors. In the absence of a white diffuser, the scene illuminant has to be estimated [18]. After that, a correction is applied to the acquired image in order to represent it as if it had been taken under a known light source. For color images, this process was also expressed as “discounting the chromaticity of the illuminant” by D’Zmura and Lennie [19]. This transformation is performed as
$$ F_c = M_{c,ill}\, F_{ill}, \qquad (4) $$
where $F_{ill}$ is the image taken under the unknown light source and $F_c$ is the transformed image as if taken under a canonical illuminant, while $M_{c,ill}$ is the transformation matrix that maps colors from the captured image to their corresponding colors under a known illumination. $M_{c,ill}$ consists of two parts. The first is the diagonal transform $D_{c,ill}$, whose components are the sensor responses to the canonical illuminant $E_c$ and the scene illuminant $E_{ill}$. It is represented by
$$ D_{c,ill} = \mathrm{diag}(E_c / E_{ill}). \qquad (5) $$
The second component of $M_{c,ill}$ is the spectral adaptation transform (SAT). It is represented by $A_{SAT}$ and its purpose is to incorporate the intrinsic properties of the imaging sensor [14]. SAT is computed by finding a matrix that minimizes the error between the camera data and the corresponding reflectances. A set of measured training reflectances, the sensor sensitivities, and a canonical illumination are used during the computation of SAT [14].
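
As an illustration, the following sketch builds the diagonal transform of Eq. (5) and fits a spectral adaptation matrix by least squares. This is only one plausible construction consistent with the description above, not necessarily the exact procedure of [14]; the set of training illuminant spectra and the camera_response() helper from the previous sketch are illustrative assumptions.

```python
# Sketch: diagonal transform D_c,ill (Eq. 5) and a least-squares spectral
# adaptation matrix A_SAT fitted so that A_SAT D F_ill approximates the
# responses F_c under the canonical illuminant (one possible reading of [14]).
import numpy as np

def diagonal_transform(e_c_sensor, e_ill_sensor):
    """D_c,ill = diag(E_c / E_ill); both illuminants are in the sensor domain."""
    return np.diag(np.asarray(e_c_sensor) / np.asarray(e_ill_sensor))

def fit_sat(R_train, S, e_c_spectrum, train_illuminant_spectra):
    """Fit A_SAT over training reflectances and a set of training illuminants."""
    F_c = camera_response(R_train, e_c_spectrum, S)    # canonical responses
    X_rows, Y_rows = [], []
    for e_spec in train_illuminant_spectra:
        F_ill = camera_response(R_train, e_spec, S)    # responses under this light
        D = diagonal_transform(e_c_spectrum @ S, e_spec @ S)
        X_rows.append(F_ill @ D)                       # diagonally corrected data
        Y_rows.append(F_c)                             # target canonical data
    X, Y = np.vstack(X_rows), np.vstack(Y_rows)
    A_t, *_ = np.linalg.lstsq(X, Y, rcond=None)        # solves X @ A_t ≈ Y
    return A_t.T                                       # A_SAT acting on column vectors
```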

Once the components of $M_{c,ill}$ are computed, the input multispectral data is transformed into its canonical representation:

$$ F_c = M_{c,ill}\, F_{ill} = A_{SAT}\, D_{c,ill}\, F_{ill}. \qquad (6) $$
Using this transform, the problem of spectral reconstruction reduces to finding the transform $M_{c,ill}$. The spectral reconstruction in this case is mathematically represented as
$$ \hat{R} = W_c\, M_{c,ill}\, F_{ill}, \qquad (7) $$
where $W_c$ is the calibration matrix for spectral reconstruction. Details of $W_c$ are provided in Section 3.5.

Multispectral constancy was analyzed in simulations by Khan et al. [14]; in this paper it is taken to the experimental level. Multispectral data is acquired for three scenes under varying illuminations and processed through the multispectral constancy framework. The results are evaluated through spectral reconstruction of the material surface reflectances in the scene. Details of the experimental setup are discussed in the following section.

3. Experimental setup

3.1. Objects and surfaces

For the demonstration of multispectral constancy, reflectance data of various objects is acquired with a hyperspectral camera. Details of those objects, the image acquisition, and the reflectance computation are provided in [20]. The reflectance data of those objects is used as ground truth for evaluating the spectral reconstruction of the corresponding surfaces from the multispectral images.

3.2. Scenes setup

To create complex scenes, the same objects are placed in a viewing booth. Three scenes are created to capture various materials: a kitchen scene containing utensils, spices, and pieces of cloth, and two scenes with different combinations of textiles. The scenes are set in a viewing booth (GretagMacbeth Spectralight III) and each scene is captured under illuminant A, D65, horizon light, cool white, and TL84. In Fig. 1, a color illustration of these scenes is shown using a linear mapping from N dimensions to 3 dimensions; this method is described in [21]. The images are shown with a simulation of illuminant D65. The multispectral images will be made available to the public through the Colorlab website as CID:MC after acceptance of the paper.

Fig. 1. Color rendering of the scenes created in viewing booth and rendered under D65 illuminant.

3.3. Image acquisition

After setting up the scenes in the viewing booth, a multispectral filter-wheel camera (Pixelteq SpectroCam) [22] is used to acquire multispectral images. For our experiments, we use the channels in the visible range. Sensitivities of the filters of the multispectral camera are shown in Fig. 2. During acquisition of a multispectral image with a filter-wheel camera, the exposure time for each channel is set individually in order to acquire the maximum information in terms of contrast and brightness [23]. In our experiments, the exposure time for each channel is set empirically by observing the histogram of that channel and avoiding over-exposure and under-exposure as much as possible. The exposure time for each channel changes with the change in illumination, so a partial compensation for the illumination is already incorporated into the resulting multispectral image. However, this is not the case for snapshot cameras, such as spectral filter array cameras [24], since the sensor integration time is the same for all of the channels. In the experiments, we compare results of manually adjusted integration times with a simulation of a snapshot camera to analyze the difference between both imaging techniques. The snapshot camera is simulated by normalizing the integration time for all channels so that they appear to be taken with the same integration time, as sketched below.
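
A minimal sketch of this normalization is given below, assuming a linear sensor response; the variable names and the choice of reference integration time are illustrative assumptions.

```python
# Sketch: rescale channels acquired with different integration times to a
# common integration time, which approximates a snapshot acquisition.
import numpy as np

def simulate_snapshot(channels, exposure_times, reference_time=None):
    """channels: (n_channels, H, W) raw data; exposure_times: (n_channels,) in seconds."""
    t = np.asarray(exposure_times, dtype=float)
    if reference_time is None:
        reference_time = t.min()          # e.g. normalize to the shortest exposure
    scale = reference_time / t            # linear response: values scale with exposure time
    return channels * scale[:, None, None]
```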

Fig. 2. Camera sensitivity (including the filters and sensor) in the visible range. The x-axis shows the wavelength (in nm), while the y-axis shows the relative intensity. (Data taken from the manufacturer.)

Another issue is the “chromatic aberration” effect across channels, which results in blur. Snapshot multispectral cameras also face this problem [25]. In a filter-wheel multispectral camera, the issue is resolved by focusing the lens for each channel individually and then combining the resulting images, ensuring that the image for each channel is as sharp as possible. The unfavorable side effect of this process is a slight translational distortion. This distortion can be corrected by modeling the aberrations caused by the filters placed in front of the lens [26,27]. However, we do not perform this correction in the experiments because we select the surfaces manually and make sure that the selected regions do not contain edges. By doing so, we focus the experiments on spectral reconstruction of the selected surfaces while ignoring the geometric distortion at the edges.

3.4. Illuminant estimation and image transformation

In their work on illuminant estimation in multispectral imaging, Khan et al. [18] extended four statistics-based illuminant estimation algorithms to multispectral imaging. They found that the spectral gray-edge algorithm, an extension of the gray-edge algorithm [28], is robust and performs better than the other three algorithms tested in their work. Based on their observations, we use the spectral gray-edge algorithm for estimation of the scene illuminant. The spectral gray-edge algorithm assumes that the average reflectance derivative in each channel of the multispectral image is achromatic. It is expressed as

$$ \left( \int \left| \frac{\partial F^{\sigma}(x)}{\partial x} \right|^{p} dx \right)^{1/p} = \hat{e}, \qquad (8) $$
where $F^{\sigma}$ is the image smoothed with a Gaussian filter of standard deviation σ, and p is the order of the Minkowski norm. In our experiments, we use p = 5 and σ = 2; these values were obtained from the simulations performed in [18]. The scene illuminant is estimated in the sensor domain after masking out the ColorChecker from the scene.
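
For illustration, a minimal sketch of the spectral gray-edge estimate of Eq. (8) is given below: per channel, the Minkowski p-norm of the gradient magnitude of the Gaussian-smoothed image. The unit-norm normalization of the estimate and the masking strategy are implementation assumptions of this sketch.

```python
# Sketch of the spectral gray-edge illuminant estimate (Eq. 8).
import numpy as np
from scipy.ndimage import gaussian_filter

def spectral_gray_edge(image, p=5, sigma=2, mask=None):
    """image: (H, W, n_channels); mask: optional boolean (H, W) of pixels to keep."""
    e = np.zeros(image.shape[-1])
    for k in range(image.shape[-1]):
        smoothed = gaussian_filter(image[..., k], sigma)   # F^sigma for channel k
        gy, gx = np.gradient(smoothed)
        grad = np.hypot(gx, gy)                            # gradient magnitude
        if mask is not None:
            grad = grad[mask]                              # e.g. ColorChecker masked out
        e[k] = (np.abs(grad) ** p).mean() ** (1.0 / p)     # Minkowski p-norm per channel
    return e / np.linalg.norm(e)                           # sensor-domain estimate, unit norm
```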

In this work, we assume uniform illumination over the whole scene. In a real scenario, however, the spectral power distribution of the illumination can differ at various locations in an image. There are various methods for dealing with such situations that address illuminant estimation at a per-pixel level [29–31]. In our current work, we demonstrate the multispectral constancy pipeline from image acquisition to spectral reconstruction. This pipeline consists of several parts, and there is much room for further improvement in each of its modules. Since our aim is to verify the effectiveness of multispectral constancy with real multispectral images, we limit the experiments by assuming uniform illumination over the scene. Another important factor to mention is that we do not estimate the spectral power distribution of the scene illumination; illuminant estimation is performed in the sensor domain. Once the proposed method is verified to perform well, the next step will be to improve the illuminant estimation and spectral reconstruction methods.

3.5. Spectral reflectance reconstruction

For spectral reconstruction of material surfaces from the camera data with a linear method, a calibration matrix W is required. It is obtained by using measured reflectance spectra $R_t$ and the camera sensor sensitivities S. To reduce the error between the original spectra R and the estimated spectra $\hat{R}$, a covariance matrix of a set of measured reflectance samples can be used. Those measured reflectance samples provide a-priori statistical information about the surfaces in a scene [32]. If the a-priori information is well chosen, the error in the spectral reconstruction can be kept small.

Several methods have been proposed for spectral reconstruction in the literature [33]. We use a linear method, namely Wiener estimation [34], because of its robustness to noise. It is defined as

$$ W = R_t R_t^{T} (SE)^{T} \left( (SE)\, R_t R_t^{T}\, (SE)^{T} + G \right)^{-1}. \qquad (9) $$
Here, $R_t R_t^{T}$ and G are the autocorrelation matrices of the training spectra and the additive noise, respectively. G is a diagonal matrix of the form $\sigma^2 I$, where $\sigma^2$ is the variance of the estimated noise and I is the identity matrix. $\sigma^2$ is estimated using the method proposed by Shen et al. [34]. Training for obtaining the matrix $W_c$ is performed with CIE illuminant E as the canonical illuminant $E_c$ and the SFU spectral reflectance data [35]. The obtained calibration matrix is used for spectral reconstruction,
$$ \hat{R} = W_c\, A_{SAT}\, D_{c,ill}\, F_{ill}. \qquad (10) $$
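
The two steps can be sketched as follows: computing the Wiener calibration matrix of Eq. (9) and applying the full reconstruction of Eq. (10). The matrix layout (training spectra as columns of $R_t$, channel sensitivities as rows) and the externally supplied noise variance are illustrative conventions of this sketch; in our experiments the noise variance is estimated as in [34].

```python
# Sketch: Wiener calibration matrix (Eq. 9) and reflectance reconstruction (Eq. 10).
import numpy as np

def wiener_calibration(R_t, S_ch, e_canonical, sigma2):
    """W = R_t R_t^T (SE)^T ((SE) R_t R_t^T (SE)^T + G)^(-1).

    R_t         : (n_wavelengths, n_training) training reflectance spectra as columns
    S_ch        : (n_channels, n_wavelengths) channel sensitivities as rows
    e_canonical : (n_wavelengths,) canonical illuminant spectrum
    sigma2      : estimated noise variance (scalar)
    """
    SE = S_ch * e_canonical[np.newaxis, :]     # (SE): sensitivities weighted by the illuminant
    C = R_t @ R_t.T                            # autocorrelation of the training spectra
    G = sigma2 * np.eye(S_ch.shape[0])         # additive-noise autocorrelation, sigma^2 I
    return C @ SE.T @ np.linalg.inv(SE @ C @ SE.T + G)

def reconstruct_reflectance(W_c, A_sat, D, f_ill):
    """R_hat = W_c A_SAT D_c,ill f_ill for one pixel or surface (column vector)."""
    return W_c @ (A_sat @ (D @ f_ill))
```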

3.6. Evaluation

To measure the performance of the spectral reconstruction, we compare the reconstruction $\hat{r}$ for each selected material surface with the corresponding measured reflectance r through the root mean square error (RMSE):

$$ \mathrm{RMSE} = \sqrt{\frac{1}{N} \sum_{j=1}^{N} \left( r_j - \hat{r}_j \right)^2}. \qquad (11) $$
For further evaluation of the reconstructed spectra, the cosine distance [36] is also used to compare the results with the original reflectance spectra. The cosine distance measures the orientation of two vectors without considering their magnitude and is calculated as
$$ \text{cosine distance} = 1 - \frac{r^{T}\hat{r}}{\sqrt{(r^{T} r)\,(\hat{r}^{T}\hat{r})}}. \qquad (12) $$

The cosine distance is the complement of the goodness-of-fit coefficient [37] (100% correlation). The correlation of the RMSE results with the cosine distance is 0.90; therefore, we report the RMSE results in this paper, while the detailed results are provided in the appendix. We also report the error in illuminant estimation in the form of the angular error (ΔA). It is computed between the illuminant e obtained from the white patch of the ColorChecker and the estimated illuminant ê in the sensor domain as

$$ \Delta A = \arccos \frac{e^{T}\hat{e}}{\sqrt{(e^{T} e)\,(\hat{e}^{T}\hat{e})}}. \qquad (13) $$
Since we do not have ground-truth reflectance values for all of the surfaces in the scene, a binary mask is applied to each image to select those surfaces whose reflectance data was acquired by the hyperspectral camera. The selected surfaces are shown in Fig. 3. The binary mask is created for each of the three images individually and sensor values for the selected surfaces are obtained. In the case of a surface with varying texture, the mean of the measured reflectance is used. Although this practice may cause some errors in regions containing different reflectance values, our primary aim is to evaluate the overall performance of multispectral constancy, and we do not perform pixel-to-pixel mapping for comparison of results. These values are used for spectral reconstruction and comparison with the ground-truth reflectance of the same material.
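
The three evaluation measures of Eqs. (11)–(13) can be sketched as follows; reporting the angular error in degrees is an assumption of this sketch.

```python
# Sketch of the evaluation metrics: RMSE (Eq. 11) and cosine distance (Eq. 12)
# between measured and reconstructed spectra, and angular error (Eq. 13)
# between the measured and estimated sensor-domain illuminants.
import numpy as np

def rmse(r, r_hat):
    r, r_hat = np.asarray(r, float), np.asarray(r_hat, float)
    return np.sqrt(np.mean((r - r_hat) ** 2))

def cosine_distance(r, r_hat):
    r, r_hat = np.asarray(r, float), np.asarray(r_hat, float)
    return 1.0 - (r @ r_hat) / np.sqrt((r @ r) * (r_hat @ r_hat))

def angular_error_degrees(e, e_hat):
    e, e_hat = np.asarray(e, float), np.asarray(e_hat, float)
    cos = (e @ e_hat) / np.sqrt((e @ e) * (e_hat @ e_hat))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
```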

Fig. 3. Binary masks and the rendering of selected surfaces in color for the three images used in experiments. The whole scenes are shown in Fig. 1. Ground-truth for the selected patches is taken from the hyperspectral dataset in [20].

4. Experimental results

For the demonstration of spectral reconstruction results, we selected 16 material surfaces from each of the “Kitchen” and “Textile_1” images, and 20 material surfaces from the “Textile_2” image. The reflectance data for these surfaces is acquired with the hyperspectral camera and used as ground truth for comparison of results. We present results from two imaging techniques in this paper: one is the filter-wheel camera, where the integration time for each channel is adjusted manually, and the other is a snapshot camera, where all channels are acquired with the same integration time. We obtain the snapshot-like effect by normalizing the image acquired with the filter-wheel camera with respect to the sensor integration time of each channel, assuming that the camera response curve is linear. In this way, all of the channels appear to be taken with the same integration time. It is important to mention that this snapshot approximation modifies the noise on each channel in a multiplicative manner rather than an additive one; in the experiments, we ignore this effect. We also do not consider other effects such as demosaicing and cross-talk between the filters. For each test illuminant, results of spectral reconstruction are shown for five experiments:

  • “Do nothing”: the multispectral data is used for spectral reconstruction without any processing.
  • “Patch select”: the white patch of the ColorChecker is selected manually and those values are used as the scene illuminant. Data transformation is performed by two methods:
    • using the extracted values in a diagonal transform, $D_{c,ill}$;
    • using SAT along with the diagonal transform, $A_{SAT} D_{c,ill}$.
  • “SG-E”: illuminant estimation with the spectral gray-edge algorithm, then transforming the data using:
    • $D_{c,ill}$ with values obtained from the illuminant estimation;
    • SAT along with the diagonal transform, $A_{SAT} D_{c,ill}$.

Analysis of the results is provided in the following.

4.1. Influence of the imaging technique

It is standard practice to adjust the integration time of each channel manually for image acquisition with a filter-wheel camera. The aim is to acquire each channel with a minimum of saturated and dark pixels, and to enhance the contrast. Generally the integration time is set by observing the image and its histogram. On the other hand, a snapshot camera captures the image data with the same integration time for all of the channels. Tables 1, 3 and 5 show results with a manually set integration time for each channel; this time varies with the illumination. Tables 2, 4 and 6 show results obtained after the integration time is normalized for all channels. Based on the results, it can be seen that while the do-nothing results vary with the change in imaging technique, the proposed multispectral constancy is robust to this change. Although there are variations in the results of both techniques and it is not possible to conclude which imaging technique is better, the spectral reconstruction error is reduced when multispectral constancy is used as compared with do nothing. This observation illustrates the importance of removing the effect of the scene illuminant before using the image data for further analysis.

Table 1. Spectral reconstruction results for selected surfaces from the “Kitchen” image. Each channel is acquired with a manually adjusted integration time. Results are presented as the mean and 95th percentile of the error metric. Results of manually selecting the white patch of the ColorChecker are shown as “Patch select”, while “SG-E” shows results of the spectral gray-edge algorithm, with its angular error (ΔA), after the ColorChecker is masked out from the image.

Table 2. Spectral reconstruction results for selected surfaces from the “Kitchen” image, taken with the simulation of a snapshot camera. Results are presented as the mean and 95th percentile of the error metric. Results of manually selecting the white patch of the ColorChecker are shown as “Patch select”, while “SG-E” shows results of the spectral gray-edge algorithm, with its angular error (ΔA), after the ColorChecker is masked out from the image.

Table 3. Spectral reconstruction results for selected surfaces from the “Textile_1” image. Each channel is acquired with a manually adjusted integration time. Results are presented as the mean and 95th percentile of the error metric. Results of manually selecting the white patch of the ColorChecker are shown as “Patch select”, while “SG-E” shows results of the spectral gray-edge algorithm, with its angular error (ΔA), after the ColorChecker is masked out from the image. This scene is taken without illuminant A.

Table 4. Spectral reconstruction results for selected surfaces from the “Textile_1” image, taken with the simulation of a snapshot camera. Results are presented as the mean and 95th percentile of the error metric. Results of manually selecting the white patch of the ColorChecker are shown as “Patch select”, while “SG-E” shows results of the spectral gray-edge algorithm, with its angular error (ΔA), after the ColorChecker is masked out from the image. This scene is taken without illuminant A.

Table 5. Spectral reconstruction results for selected surfaces from the “Textile_2” image. Each channel is acquired with a manually adjusted integration time. Results are presented as the mean and 95th percentile of the error metric. Results of manually selecting the white patch of the ColorChecker are shown as “Patch select”, while “SG-E” shows results of the spectral gray-edge algorithm, with its angular error (ΔA), after the ColorChecker is masked out from the image.

Table 6. Spectral reconstruction results for selected surfaces from the “Textile_2” image, taken with the simulation of a snapshot camera. Results are presented as the mean and 95th percentile of the error metric. Results of manually selecting the white patch of the ColorChecker are shown as “Patch select”, while “SG-E” shows results of the spectral gray-edge algorithm, with its angular error (ΔA), after the ColorChecker is masked out from the image.

4.2. Manual selection of white patch vs. illuminant estimation

The spectral reconstruction results provide an interesting observation: the error is slightly reduced when the correction is applied after illuminant estimation instead of after manual selection of the white patch of the ColorChecker. The reason for this effect is the presence of saturated pixels in the channels, which results in an erroneous transformation towards the canonical representation. During illuminant estimation, pixels with the peak value (255 for an 8-bit image) are ignored by the spectral gray-edge algorithm, so the resulting estimate of the scene illuminant is free from saturated values. In a few cases, the error in the reconstructed spectra is slightly increased when the image is transformed with the estimated illuminant; the reason is the error in the illuminant estimation itself. Although the improvement in spectral reconstruction is not significant when the data is transformed using the estimated illuminant, it is worth noting that in practical situations it is not feasible to use a ColorChecker with every image acquisition. The results from illuminant estimation show the strength and robustness of the spectral gray-edge algorithm for illuminant estimation in multispectral imaging, and this will be a key factor in enabling the use of multispectral cameras in uncontrolled imaging environments.

4.3. Performance of spectral adaptation transform

In the simulations in [14], SAT showed significant improvement in spectral reconstruction results. However, the improvement due to SAT in our experiments is marginal. The purpose of SAT is to incorporate the intrinsic characteristics of the imaging sensor and to address inter-channel overlap. In [14], SAT improves the results of simulated overlapping sensors. The overlap of filters in the filter-wheel camera used in our experiments is small, as can be seen in Fig. 2. The diagonal transform assumes that there are no inter-channel dependencies in the camera and each of the channels is treated individually. SAT accounts for those dependencies, and if they are small (as for the filter-wheel camera in our experiments), the influence of SAT is also low. When we inspect the SAT obtained from Eq. (6), we find that the diagonal entries of this matrix are large while the remaining entries are very small. This explains why SAT does not change the results significantly in comparison with a diagonal transform. Nevertheless, based on our theoretical results and simulations in [14], it is likely that SAT will improve the spectral reconstruction results significantly when imaging sensors with large overlap are used. This will be the case for snapshot spectral filter array cameras, where the cross-talk between the filters and the overlap of the filter sensitivities are significant [24].

4.4. Discussion on results

Figure 4 shows promising results in terms of spectral reconstruction from the multispectral images. The use of multispectral constancy reduces the error significantly compared to do nothing. The same effect can be seen in the spectral reconstruction examples in Fig. 5. The spectral reconstruction results can be further improved by improving the accuracy of illuminant estimation, by properly addressing the intrinsic properties of the imaging sensor (spectral adaptation transform), and then by introducing improvements in the spectral reconstruction algorithm itself. In this work, we used a linear method for spectral reconstruction. It may be possible to improve the results through non-linear methods, particularly machine learning; however, a large amount of data would be needed for training such a system. These results show that a multispectral camera can be used as a spectrophotometer for outdoor imaging by incorporating the idea of multispectral constancy into the imaging pipeline. Although we see a marked improvement in the spectral reconstruction results in terms of RMSE and cosine distance, it is yet to be investigated how much accuracy is required to enable the use of multispectral cameras for computer vision applications. Using multispectral constancy to extract stable spectral information as a pre-processing step, before the computation of traditional computer vision features, would improve accuracy and open new paths for research using multispectral imaging for computer vision applications.

Fig. 4. Mean of RMSE from all the illuminants used in experiments.

Fig. 5. Spectral reconstruction results for two surfaces along with the measured (ground truth) spectrum. The results show that the use of SAT along with the diagonal transform reduces the spectral reconstruction error.

5. Conclusion

In this paper, we have provided a practical demonstration and evaluation of the concept of multispectral constancy. Three different scenes are created in a viewing booth and multispectral images are captured under different illuminants. The acquired multispectral data is transformed into a canonical representation both through manual selection of the white patch of the ColorChecker in the image and by estimating the illuminant. Spectral reconstruction is performed with the Wiener estimation method and the results are evaluated in terms of RMSE and cosine distance, along with the angular error of the illuminant estimation. Results show a promising aspect of multispectral imaging: such cameras can be used as a spectrophotometer for obtaining spectral information of a whole scene. We found that the spectral adaptation transform does not provide significant improvement in the results for this specific camera. SAT is used to address the inter-channel dependencies of the imaging sensor, and if the overlap among sensors is low, SAT does not improve the results. However, SAT showed significant improvement with simulated sensors that overlap in the wavelength spectrum. Therefore, we recommend the use of the spectral adaptation transform to address the intrinsic characteristics of the imaging sensor. With the multispectral constancy pipeline, we have demonstrated that a multispectral camera can be used to acquire reflectance data from material surfaces when the imaging conditions, i.e., the illumination, are uncontrolled.

The proposed concept of multispectral constancy is valid for both filter-wheel and snapshot multispectral cameras. However, further experiments have to be performed for snapshot cameras to include demosaicing and filter cross-talk effects. The next step from this work is to use multispectral imaging for material identification and surface classification. Spectral information can help in distinguishing between two material surfaces, but for the classification task, spatial information (i.e., texture, shape, etc.) is also important. Multispectral constancy will help in providing an illuminant-invariant representation of multispectral images. With promising results in terms of spectral reconstruction from multispectral images under unknown illumination conditions, we are one step closer to enabling the use of multispectral imaging for computer vision applications.

Appendix

In this section, detailed results of the root mean square error (RMSE) and cosine distance are provided. Results are reported in the form of the mean and 95th percentile. In the experiments with illuminant estimation, the ColorChecker is masked out from the image and the error in the estimation of the illumination is also provided in the form of the angular error.

Funding

Norges Forskningsråd.

References

1. J.-B. Thomas, P.-J. Lapray, P. Gouton, and C. Clerc, “Spectral characterization of a prototype SFA camera for joint visible and NIR acquisition,” Sensors 16, 993 (2016). [CrossRef]  

2. R. Shrestha, J. Y. Hardeberg, and R. Khan, “Spatial arrangement of color filter array for multispectral image acquisition,” Proc. SPIE 7875, 787503 (2011).

3. F. H. Imai and R. S. Berns, “Spectral estimation using trichromatic digital cameras,” in Proceedings of the International Symposium on Multispectral Imaging and Color Reproduction for Digital Archives, (Chiba University Chiba, Japan, 1999), pp. 1–8.

4. J. Y. Hardeberg, Acquisition and reproduction of color images: Colorimetric and multispectral approaches (Universal Publishers, 2001).

5. D. Connah, S. Westland, and M. G. A. Thomson, “Recovering spectral information using digital camera systems,” Color. Technol. 117, 309–312 (2001). [CrossRef]  

6. E. M. Valero, J. L. Nieves, S. M. C. Nascimento, K. Amano, and D. H. Foster, “Recovering spectral data from natural scenes with an RGB digital camera and colored filters,” Color. Res. & Appl. 32, 352–360 (2007). [CrossRef]  

7. J. Y. Hardeberg and R. Shrestha, “Multispectral colour imaging: Time to move out of the lab?” in Mid-term meeting of the International Colour Association (AIC), (2015), pp. 28–32.

8. L. T. Maloney, “Evaluation of linear models of surface spectral reflectance with small numbers of parameters,” J. Opt. Soc. Am. A 3, 1673–1683 (1986). [CrossRef]   [PubMed]  

9. J. P. S. Parkkinen, J. Hallikainen, and T. Jaaskelainen, “Characteristic spectra of Munsell colors,” J. Opt. Soc. Am. A 6, 318–322 (1989). [CrossRef]  

10. F. Imai and R. Berns, “Spectral estimation using trichromatic digital cameras,” in International Symposium on Multispectral Imaging and Color Reproduction for Digital Archives, (1999), pp. 42–49.

11. D. Connah, S. Westland, and M. G. A. Thomson, “Recovering spectral information using digital camera systems,” Color. Technol. 117, 309–312 (2001). [CrossRef]  

12. R. Shrestha and J. Y. Hardeberg, “Spectrogenic imaging: A novel approach to multispectral imaging in an uncontrolled environment,” Opt. Express 22, 9123–9133 (2014). [CrossRef]   [PubMed]  

13. H. A. Khan, J. B. Thomas, and J. Y. Hardeberg, “Multispectral constancy based on spectral adaptation transform,” in 20th Scandinavian Conf. on Image Analysis, (2017), pp. 459–470. [CrossRef]  

14. H. A. Khan, J.-B. Thomas, J. Y. Hardeberg, and O. Laligant, “Spectral adaptation transform for multispectral constancy,” J. Imaging Sci. Technol. 62, 1020504 (2018). [CrossRef]  

15. H. A. Khan, “Multispectral constancy for illuminant invariant representation of multispectral images,” PhD thesis, Norwegian University of Science and Technology (2018).

16. O. Bertrand and C. Tallon-Baudry, “Oscillatory gamma activity in humans: a possible role for object representation,” Trends Cogn. Sci. 3, 151–162 (1999). [CrossRef]  

17. M. Ebner, Color Constancy (Wiley Publishing, 2007), 1st ed.

18. H. A. Khan, J.-B. Thomas, J. Y. Hardeberg, and O. Laligant, “Illuminant estimation in multispectral imaging,” J. Opt. Soc. Am. A 34, 1085–1098 (2017). [CrossRef]  

19. M. D’Zmura and P. Lennie, “Mechanisms of color constancy,” J. Opt. Soc. Am. A 3, 1662–1672 (1986). [CrossRef]  

20. H. A. Khan, S. Mihoubi, B. Mathon, J.-B. Thomas, and J. Y. Hardeberg, “Hytexila: High resolution visible and near infrared hyperspectral texture images,” Sensors 18(7), 2045 (2018). [CrossRef]  

21. H. A. Khan and P. Green, “Color characterization methods for a multispectral camera,” in International Symposium on Electronic Imaging 2018: Color Imaging XXIII: Displaying, Processing, Hardcopy, and Applications, (IS&T, San Francisco, United States, 2018), pp. 221–1–221–8.

22. “SpectroCam Multispectral Wheel Cameras,” https://pixelteq.com/spectrocam/. Accessed: 07-06-2018.

23. A. Sohaib, N. Habili, and A. Robles-Kelly, “Automatic exposure control for multispectral cameras,” in IEEE International Conference on Image Processing, (2013), pp. 2043–2047.

24. P.-J. Lapray, X. Wang, J.-B. Thomas, and P. Gouton, “Multispectral filter arrays: Recent advances and practical implementation,” Sensors 14, 21626–21659 (2014). [CrossRef]   [PubMed]  

25. C. Ni, J. Jia, M. Howard, K. Hirakawa, and A. Sarangan, “Single-shot multispectral imager using spatially multiplexed fourier spectral filters,” J. Opt. Soc. Am. B 35, 1072–1079 (2018). [CrossRef]  

26. J. Brauers, N. Schulte, and T. Aach, “Multispectral filter-wheel cameras: Geometric distortion model and compensation algorithms,” IEEE Transactions on Image Process. 17, 2368–2380 (2008). [CrossRef]  

27. J. Klein and T. Aach, “Multispectral filter wheel cameras: Modeling aberrations for filters in front of lens,” in Digital Photography VIII, vol. 8299 (International Society for Optics and Photonics, 2012), pp. 8299.

28. J. van de Weijer, T. Gevers, and A. Gijsenij, “Edge-based color constancy,” IEEE Transactions on Image Process. 16, 2207–2214 (2007). [CrossRef]  

29. S. Ratnasingam, S. Collins, and J. Hernández-Andrés, “Optimum sensors for color constancy in scenes illuminated by daylight,” J. Opt. Soc. Am. A 27, 2198–2207 (2010). [CrossRef]  

30. S. Ratnasingam, S. Collins, and J. Hernández-Andrés, “Extending “color constancy” outside the visible region,” J. Opt. Soc. Am. A 28, 541–547 (2011). [CrossRef]  

31. S. Ratnasingam and J. Hernández-Andrés, “Illuminant spectrum estimation at a pixel,” J. Opt. Soc. Am. A 28, 696–703 (2011). [CrossRef]  

32. J. Conde, H. Haneishi, M. Yamaguchi, N. Ohyama, and J. Baez, “Spectral reflectance estimation of ancient Mexican codices, multispectral images approach,” Revista Mexicana de Fisica 50, 484–489 (2004).

33. D. Connah, J. Y. Hardeberg, and S. Westland, “Comparison of linear spectral reconstruction methods for multispectral imaging,” in International Conference on Image Processing, ICIP, vol. 3 (2004), pp. 1497–1500.

34. H.-L. Shen, P.-Q. Cai, S.-J. Shao, and J. H. Xin, “Reflectance reconstruction for multispectral imaging by adaptive wiener estimation,” Opt. Express 15, 15545–15554 (2007). [CrossRef]   [PubMed]  

35. K. Barnard, L. Martin, B. Funt, and A. Coath, “A data set for color research,” Color. Res. & Appl. 27, 147–151 (2002). [CrossRef]  

36. D. Zhang and G. Lu, “Evaluation of similarity measurement for image retrieval,” in International Conference on Neural Networks and Signal Processing, vol. 2 (2003), pp. 928–931.

37. J. Hernández-Andrés, J. Romero, J. L. Nieves, and R. L. Lee, “Color and spectral analysis of daylight in southern europe,” J. Opt. Soc. Am. A 18, 1325–1335 (2001). [CrossRef]  
