Matched illumination: using light modulation as a proxy for a color filter that makes a camera more colorimetric


Abstract

In previous work, it was shown that a camera can theoretically be made more colorimetric (its RGBs become more linearly related to XYZ tristimuli) by placing a specially designed color filter in the optical path. While the prior art demonstrated the principle, the optimal color-correction filters were not actually manufactured. In this paper, we provide a novel way of creating the color filtering effect without making a physical filter: we modulate the spectrum of the light source with a spectrally tunable lighting system, recasting the prefiltering effect from a lighting perspective. According to our method, if we wish to measure color under a D65 light, we relight the scene with a modulated D65 spectrum where the light modulation mimics the effect of color prefiltering in the prior art. We call our optimally modulated light the matched illumination. In experiments using synthetic and real measurements, we show that color measurement errors can be reduced by about 50% or more on simulated data and 25% or more on real images when the matched illumination is used.

Published by Optica Publishing Group under the terms of the Creative Commons Attribution 4.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.

1. Introduction

Digital cameras measure the color information in a real-world scene like a human observer only if they meet the Luther condition [1,2]. The Luther condition requires that the camera sensitivity functions be a linear combination of the color matching functions of the human visual system [3]. If a camera meets the Luther condition, the colors it measures are linearly related to device-independent tristimulus values, such as CIE XYZ tristimuli [4]. Such a camera is said to be colorimetric. However, off-the-shelf RGB cameras do not satisfy the Luther condition and so cannot be used for precise color measurement [5].

A viable way to improve the colorimetric accuracy of a camera is to capture multiple images, each with a different color filter placed in front of the camera [6-12]. This multi-shot technique gathers more color information than a single shot (color signals of more than 3 dimensions), and when these signals are mapped to colorimetric XYZ values, greater accuracy can be obtained. Generally, the color filters are chosen from commercial products [13] either heuristically or by an exhaustive search [14]. An alternative is to capture images under multiple lights, for example using a light booth with different illuminants [15,16]. Here, the multiple lights play an analogous role to the filters. However, both methods require multiple shots, which lengthens the capture process, and, if nothing else, registration between the images is a problem in itself.

Finlayson and Zhu [17-19] recently proposed improving the colorimetric accuracy of a digital camera, using a single-shot image, by placing a carefully designed color filter in the optical pathway. The spectral transmittance of such a filter can be optimally designed so that the camera better meets the Luther condition. In Fig. 1, the top row illustrates how a color-correction filter can make a camera more colorimetric. Figure 1(a) shows the spectral sensitivities of a camera, which are notably different from the CIE XYZ color matching functions (see the solid lines in Fig. 1(c)). The physical effect of placing a color filter, such as the one shown in Fig. 1(b), in front of a camera can be reasonably modeled as the multiplication of the filter spectral transmittance and the camera sensitivities on a per-wavelength basis over the visible spectrum. After prefiltering, the camera sensitivity functions are linearly fitted to the reference CIE XYZ color matching functions [4]. The corrected sensitivities are shown in Fig. 1(c): the solid lines show the reference color matching functions and the dashed lines show the effective camera sensitivities after the best linear fit. Clearly, by using a color filter we can make the camera curves a close approximation to the visual sensitivities. At the time of writing this manuscript, it is not known whether the optimal color filters can, in fact, be manufactured.

Fig. 1. In the top panel, we show the filter-modified Luther condition. Given a camera with known RGB sensitivities as in (a), an optimal filter (b) can be found such that, after a linear regression fit, the corrected camera sensitivities (dashed lines in (c)) are a good approximation to the XYZ color matching functions (solid lines in (c)). A matched illumination (f) is determined given the spectral characteristics of the desired measurement light (d) and the optimal color filter (e).

Our contribution begins with the observation that, for typical color measurement scenarios, the effect of a color filter placed in front of the camera can be achieved by placing the same filter in front of the light source. We call the modulated light source the Matched Illumination. It follows that if we wish to measure colors under a target measurement light source, say the standard D65 daylight (see the bottom panel in Fig. 1), we need to match it to a new illumination, effectively a filtered D65 (which is not D65). The camera then captures the object colors under this matched illumination in order to predict the ground-truth XYZ tristimuli for the desired measurement light source. Note that the derived filter shown in Fig. 1(e) is not the same as in Fig. 1(b), since it is derived with respect to a tunable LED illuminator (discussed below). This illuminator places more physical constraints on the design of the filter and matched illumination compared with the original camera+filter work.

In this paper, our theory of matched illumination is developed using a Gamma Scientific RS-5B spectral illuminator. The Gamma Scientific illuminator has eight narrow-band and two broad-band LEDs. We will show how, for a given light (produced by the illuminator), we can solve for the best matched illumination. While our algorithm development is tied to the Gamma Scientific illuminator, the techniques are phrased generally and so could be deployed to other multi-band lights. As an important detail, we deal with the ornery issue that the spectral shape of the LED outputs shifts as the driving voltage changes.

The work of [20] is related to our approach. There, a similar illuminator is tuned, by means of a genetic algorithm, to find a spectrum of light that better allows reflectance spectra to be recovered from camera RGBs. However, for the SFU reflectance set [21] (used in [20] and in our study), the spectra recovered under their derived optimal illuminant are no more accurate than the spectra recovered under a fixed illuminant.

Experiments validate our approach. We show that we can significantly reduce color measurement error for a desired measurement light by solving for, and then measuring with respect to, the matched illumination. A novel aspect of our experimental work is that we develop and deploy a new technique for generating large spectral data sets given only a small number of spectral measurements. We exploit the idea that, in raw image capture, an RGB computed as a linear combination of measured RGBs must (up to noise) be the same as the single RGB measured by viewing the same linear combination of the underlying reflectances. Using this idea, we generate the RGBs for the large set of 1995 reflectances (the SFU reflectance data set) using only the 24 RGBs measured from a Macbeth ColorChecker chart [22].

In Section 2, we present the prior art to our method as well as the relevant background on image formation. In Section 3, we present our method for calculating the matched illumination. Experiments are reported in Section 4. In Section 5, there is a short conclusion.

2. Background

2.1 Color formation with a filter

The physical process of forming a color pixel underpins our idea of illumination matching. The color recorded by a digital camera mainly depends on the light stimulus, the object reflectance, and the spectral sensitivities of the camera. These are respectively represented by the spectral functions $E(\lambda )$, $R(\lambda )$, and $Q_{k}(\lambda )$. The RGB response is written as:

$$\rho_{k} = \int_{\omega} \! R(\lambda) E(\lambda) Q_{k}(\lambda)\,\mathrm{d}\lambda, {\kern 7pt} k \in \{R, G, B\}$$
where $\rho _{k}$ denotes one of the RGB color values. Here and henceforth, $\lambda$ denotes the wavelength variable defined over the visible spectrum $\omega$.

When a transmissive color filter $F(\lambda )$ is placed in the optical pathway, the filtered RGB is written as:

$$\rho^{filtered}_{k} = \int_{\omega} \! R(\lambda) E(\lambda) F(\lambda)Q_{k}(\lambda)\,\mathrm{d}\lambda, {\kern 7pt} k \in \{R, G, B\}$$
where $F(\lambda )$ denotes the spectral transmittance of the filter with respect to the wavelength variable.

It is useful to sample spectral data and describe them in a discrete vector-matrix representation. Let $\mathbf {Q}$ denote the spectral sensitivities of a camera. The columns of the matrix represent the spectral sensitivity functions for each sensor channel and the rows denote the sensor responses at sampled wavelengths. Hence, $\mathbf {Q}$ is an $n \times 3$ matrix where $n$ is the number of sampled points across the visible range. In this paper, the spectral data are sampled from 400 nm to 700 nm at 10 nm intervals. Thus, we have $n=31$.

Similarly, let the 31-vectors $\mathbf {e}$ and $\mathbf {r}$ denote sampled representations of a light and a surface. Let $diag()$ denote the function which takes an $n$-vector as an argument and maps it to an $n \times n$ diagonal matrix. We can rewrite the image formation in Equation (1) as:

$$\boldsymbol{\rho}=\mathbf{Q}^{T} diag(\mathbf{e})\mathbf{r}$$
where we assume the wavelength sampling is incorporated in $\mathbf {Q}$ and $\boldsymbol {\rho }$ is a $3 \times 1$ vector denoting the RGB triplet values.
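To make the discrete model concrete, the following minimal numpy sketch evaluates Equation (3); the variable names and the random placeholder data are ours, not from the paper.

```python
import numpy as np

n = 31                    # 400 nm to 700 nm sampled at 10 nm intervals
Q = np.random.rand(n, 3)  # camera spectral sensitivities (placeholder data)
e = np.random.rand(n)     # illuminant spectral power distribution
r = np.random.rand(n)     # surface spectral reflectance

# Equation (3): rho = Q^T diag(e) r, giving the 3x1 RGB response
rho = Q.T @ np.diag(e) @ r
```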

2.2 Luther condition

A camera is said to be colorimetric if it satisfies the Luther condition: the camera sensitivities are a linear combination of the standard color matching functions [3]. Let $\mathbf {X}$ denote the $31\times 3$ matrix where the columns are the X, Y and Z color matching functions (again we sample from 400 nm to 700 nm at a 10 nm sampling interval). In this discrete representation, the Luther condition is written as:

$$\mathbf{X = QM}$$
where $\mathbf {M}$ is a $3 \times 3$ full rank matrix denoting the linear transform between two sets of sensitivities.

The Luther condition is rarely met by an off-the-shelf digital camera. In [18], we proposed a new filtered version of the Luther condition. If there exists a color filter vector $\mathbf {f}$ such that:

$$\mathbf{X} = diag(\mathbf{f})\mathbf{QM}$$
then the Filtered Luther condition is met.

Of course, neither the Luther condition nor its filtered variant is likely to hold exactly. Thus, a key focus of the prior work on filter design [18] was to develop numerical methods to find the filters that make cameras most colorimetric, i.e. that make them best satisfy the Luther condition.
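As an illustration of this 'best satisfy' criterion, the sketch below regresses the color matching functions onto the camera sensitivities and reports the relative residual; a camera exactly satisfying Equation (4) would give a residual of zero. The matrices here are placeholders for measured data.

```python
import numpy as np

# Placeholders: in practice Q holds measured camera sensitivities and X the
# CIE color matching functions, both sampled 400-700 nm at 10 nm intervals.
Q = np.random.rand(31, 3)
X = np.random.rand(31, 3)

# Best 3x3 transform M minimizing ||QM - X||_F (least-squares regression)
M, *_ = np.linalg.lstsq(Q, X, rcond=None)
residual = np.linalg.norm(Q @ M - X, 'fro') / np.linalg.norm(X, 'fro')
print(f"relative Luther-condition residual: {residual:.3f}")
```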

2.3 Color correction

To use an off-the-shelf RGB camera for color measurement, whether we use a color filter or not, the recorded camera RGBs are color corrected to their XYZ counterparts using a $3\times 3$ correction matrix. While non-linear color correction methods could be used (e.g. [23-25]), a linear color correction has several advantages. First, based on arguments from image formation, a $3\times 3$ matrix correction should work well [26]. Second, a linear transform is scalar invariant: if we double the intensity of the illumination that lights a scene, then the corresponding RGBs and XYZs also double and the goodness of fit afforded by a $3\times 3$ matrix remains unchanged. Finally, if colors fall on a line in RGB space, they still fall on a line after color correction (an important physical consideration for correctly mapping highlights in photographic images [27,28]).

Therefore, to assess the color measuring performance of a digital camera in practical use, we will evaluate and present the color accuracy of our proposed method under the linear color correction transform.
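The scalar invariance argument can be checked directly. In the sketch below (our own construction, using the row convention where XYZ is approximated by RGB times M for stacked data), doubling the light doubles both the RGBs and the XYZs, and the fitted least-squares matrix is unchanged.

```python
import numpy as np

rgb = np.random.rand(24, 3)   # rows are patches, columns are R, G, B
xyz = np.random.rand(24, 3)   # corresponding ground-truth XYZs

M1, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)
M2, *_ = np.linalg.lstsq(2 * rgb, 2 * xyz, rcond=None)  # doubled intensity

assert np.allclose(M1, M2)    # the correction matrix is unchanged
```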

2.4 Gamma Scientific RS-5B illuminator

The Gamma Scientific illuminator system has six lamps carefully arranged around the perimeter of an integrating sphere, with white diffusing baffles installed inside to create spatially uniform lighting, see Fig. 2. Uniformity is useful in our experiments because we will need to measure RGBs and XYZs for the same surface lit the same way. From these pairs, we will evaluate how well RGBs under a given light and its matched illuminant can be color corrected to XYZ. However, outside of this evaluation, we do not need to assume uniformity when deploying the matched illuminant to unseen data. In detail, suppose that under a matched illuminant, the RGB for a given surface is denoted $\boldsymbol{\rho}$. We multiply it by the color correction matrix $\mathbf {M}$ to estimate the corresponding XYZ tristimuli, denoted $\mathbf {x}$: $\mathbf {M} \boldsymbol{\rho}\approx \mathbf {x}$ (with respect to the matched illuminant we can map RGBs to estimates of XYZs). Because we are using linear color correction, the efficacy of this color correction mapping remains unchanged when the exposure of the surface changes. Mathematically, if we multiply the RGB by a scaling factor $k$, then it is also true that $\mathbf {M} k \boldsymbol{\rho}\approx k\mathbf {x}$.

Fig. 2. On the left, we show the experimental setup: a digital camera is set on a tripod to capture the image of the object on the table illuminated by a desired light generated by the illuminator system. On the back of the half-sphere illuminator, a tele-spectroradiometer is used to measure the spectrum of the light. The illuminator consists of six lamps arranged in the integrating sphere. Its sectional arrangement is drawn on the top right. Each lamp has 10 LED channels and their relative spectral power distributions at their maximum intensity are plotted on the bottom right.

In the sphere, each lamp consists of 10 different LED channels. The spectral power distributions of each LED channel (when driven at maximum current) are shown in the bottom right of Fig. 2. Note that only nine spectra can be distinguished in the figure, as the two broad-band LEDs have almost the same spectral shape. From the figure, we can see that eight of the channels are narrow-band LEDs ranging from blue to red while two are identical yellowish broad-band LEDs. A broad-band LED is used because of the lack of green LEDs in the range between 525 nm and 615 nm.

The intensity of each LED channel can be digitally controlled and programmed (via a serial communication port) in any combination and proportion to generate a desired illumination spectrum. Ideally, the spectrum of a light driven at partial intensity should have the same spectral shape as at maximum intensity, differing only by a scaling factor. In this condition, we say the spectrum scales linearly with the intensity level. When this linearity holds and the light spectra at maximum intensity are measured, we are able to predict the illumination spectrum for any programmed intensity levels of the light sources.

However, in practice, when we adjust the intensity level (driving current) of the light sources, we find that, for some LEDs, the peak wavelength of the spectrum shifts. So, we characterize the illuminator system by measuring the spectral distributions of each light source at driving current levels between 0% and 100% of full intensity, i.e. $[0, 0.1, 0.2, \ldots, 1]$. The spectral distributions are plotted in Fig. 3(a). It can be seen that there is some shift in the peak wavelength as the intensity level changes. For example, as the intensity decreases, the peaks of the fifth LED channel (from left to right in Fig. 3(a), shown in green lines) shift slightly towards longer wavelengths. The shift reaches 17 nm between the maximum and minimum intensities. We also calculate the u'v' chromaticity coordinates [29] at all intensity levels for each LED channel and plot them in the chromaticity diagram, see Fig. 3(b). Each LED channel is depicted in one color. We can see 9 colored clusters corresponding to the 9 LED types. Among them, the two LEDs in the green-cyan area show a noticeable chromatic shift while the others are relatively stable (e.g. the red LEDs). When the chromatic shift is significant, we can no longer predict the illumination spectrum under the assumption of linearity.
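The u'v' coordinates plotted in Fig. 3(b) follow from the standard CIE 1976 UCS definitions; a minimal sketch (our own helper, assuming measured spectra and color matching functions as inputs) is:

```python
import numpy as np

def uv_prime(spd, cmf):
    """CIE 1976 u'v' chromaticity of a spectral power distribution.

    spd: (31,) spectrum, 400-700 nm at 10 nm intervals.
    cmf: (31, 3) CIE XYZ color matching functions at the same sampling.
    """
    X, Y, Z = spd @ cmf                  # integrate to XYZ tristimuli
    denom = X + 15 * Y + 3 * Z
    return 4 * X / denom, 9 * Y / denom  # standard u', v' formulas
```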

Fig. 3. The relative spectral power distributions at varied intensity levels are plotted in (a). Their u'v' coordinates are plotted in the chromaticity diagram in (b). Note the horse-shoe shaped outline in (b) is the spectral locus.

As a final comment, returning to Fig. 2, we see that different LEDs have significantly different power ranges. The importance of this physical feature is that it places a constraint on the spectral power distribution of any matched illumination. Indeed, for us to replicate the prior work on transmissive filters in the lighting world, we would need narrow-band lights across the visible spectrum with the same peak maximum intensities. Thus, a priori, we expect our matched lights to perform less well than unconstrained optimized filters. This said, our matched lights have the advantage over the prior filter design work that they can be, as we show next, physically realized.

2.5 Optimized illumination

Before presenting our method, we wish to point the reader to prior art reported in the literature. In [20], a lighting system with spectrally tunable LEDs was used for the spectral reconstruction (SR) problem. In SR, we attempt to recover spectra from camera RGB responses. In [20], the composition of the LED lights that best subserves the SR task was sought. For a variety of regression-based SR algorithms, a genetic algorithm was used to solve for the optimal measurement light.

While not the focus of their optimization, they did examine their recovery error, as we will do later, in terms of errors in the CIELAB color space [29]. For the SFU reflectance set [21], they found that their optimization method did not significantly reduce the $\Delta E^{*}_{ab}$ error (compared to using a non-optimized light). As we will report later, our optimization, based on a different mathematical formalism, does lead to significantly lower error for this data set.

3. Matched illumination

Returning to Equation (2), it is apparent that we can think of a filter as modulating the spectral sensitivities of the sensors, $F(\lambda )Q_k(\lambda )$, or equivalently as modulating the spectral power of the light, $E(\lambda )F(\lambda )$. We call the modulated light, $E^{m}(\lambda )$, the matched illumination:

$$E^{m}(\lambda)=E(\lambda)F(\lambda).$$

A camera with a filter $F(\lambda )$ placed in its optical pathway viewing the scene lit by a light $E(\lambda )$ makes the same measurement as the same camera without any filter but where the scene is illuminated by $E^{m}(\lambda )$ (assuming a simple viewing environment where we can ignore effects such as interreflection).

Let us move our development of the matched illumination idea to the discrete domain. Given a $31\times 1$ illumination $\mathbf {e}$, we are looking for a matched illuminant $\mathbf {e}^{m}$ that makes the camera more colorimetric (more able to measure XYZs under the illuminant $\mathbf {e}$). We note that

$$\mathbf{e}^{m}=diag(\mathbf{e})\mathbf{f}.$$

Our optimization statement for the design of matched illuminations is written as:

$$\mathop{\mathrm{arg\,min}}\limits_{ \mathbf{e}^{m},\mathbf{M}}\parallel{ diag(\mathbf{e}^{m})\mathbf{QM} - diag(\mathbf{e})\mathbf{X}}\parallel^{2}_F$$
where $\parallel {}\parallel ^{2}_F$ denotes the square of the Frobenius norm and, as before, $\mathbf {M}$ is a $3 \times 3$ full rank matrix.

3.1 Simple matched illumination

It is convenient to think of the lights (in a spectral illuminator) as a simple linear basis which can be used to describe a range of lights:

$$\mathbf{e} = \mathbf{Bc} \;,\;\; \; 0\preceq \mathbf{c} \preceq 1 .$$

For an illuminator with $k$ LED lights, $\mathbf {B}$ is a $31 \times k$ matrix. The $i$th column of the basis matrix $\mathbf {B}$ lists the spectrum of the $i$th LED light at maximum power. $\mathbf {c}$ is a $k \times 1$ vector giving the intensity weights of the LED light channels. Additionally, of course, each coefficient is constrained by $c_i \in [0,1]$: it has to be between 0 and 100% of maximum power. In this simple basis world, we ignore the issue that the peaks of the basis light spectra shift as their intensity is changed.

For a viewing illuminant $\mathbf {e}=\mathbf {B}\mathbf {c}$, we can solve for the matched illumination $\mathbf {e}^{m}=\mathbf {B}\mathbf {c}^{m}$ (again $0\preceq \mathbf {c}^{m} \preceq 1$) by modifying Equation (8):

$$\mathop{\mathrm{arg\,min}}\limits_{{\boldsymbol{c}^{m}}, \; {\boldsymbol{M}}}\parallel{ diag(\mathbf{B} {\boldsymbol{c}^{m}})\mathbf{Q}{\boldsymbol{M}} - diag(\mathbf{e})\mathbf{X}}\parallel^{2}_F \;\;\; \text{s.t.}\;\; 0\preceq \boldsymbol{c}^{m} \preceq 1.$$

To solve this optimization, we must estimate two unknown variables: the coefficient vector $\mathbf {c}^{m}$ defining the matched illuminant and the $3 \times 3$ correction matrix $\mathbf {M}$. There is no closed-form solution to the problem. Analogously to the prior art [18], we solve for $\mathbf {c}^{m}$ and $\mathbf {M}$ using alternating least-squares regression:

Algorithm 1. Alternating least-squares solution for the simple matched illumination.

First, we make an initial guess for the light coefficients (for the matched illumination). It is then straightforward to calculate the correction matrix $\mathbf {M}$ using least-squares regression. Next, we hold $\mathbf {M}$ fixed and solve for the optimal $\mathbf {c}^{m}$ using Quadratic Programming [30] (to enforce the boundedness constraints). The iteration continues until the difference between the current and previous solutions falls below a criterion amount. The optimization is guaranteed to terminate.
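A compact numpy/scipy rendering of this alternating scheme is sketched below. It is our own paraphrase of Algorithm 1: we use scipy's bounded linear least squares (a special case of Quadratic Programming) for the constrained step, and the initialization, tolerance and iteration cap are our choices, not the paper's.

```python
import numpy as np
from scipy.optimize import lsq_linear

def simple_matched_illumination(B, Q, X, e, iters=100, tol=1e-8):
    """Approximate Eq. (10): min_{c,M} ||diag(Bc) Q M - diag(e) X||_F^2, 0<=c<=1."""
    Y = np.diag(e) @ X                       # fixed target, shape (31, 3)
    c = np.ones(B.shape[1])                  # initial guess: all LEDs at full power
    prev = np.inf
    for _ in range(iters):
        # Step A: hold c fixed; solve for M by unconstrained least squares
        M, *_ = np.linalg.lstsq(np.diag(B @ c) @ Q, Y, rcond=None)
        # Step B: hold M fixed; the residual is linear in c, so stack the
        # three channels into one bounded least-squares system
        T = Q @ M                            # (31, 3)
        A = np.vstack([T[:, j, None] * B for j in range(3)])
        b = np.concatenate([Y[:, j] for j in range(3)])
        c = lsq_linear(A, b, bounds=(0, 1)).x
        err = np.linalg.norm(np.diag(B @ c) @ Q @ M - Y, 'fro')
        if prev - err < tol:                 # objective is non-increasing
            break
        prev = err
    return c, M
```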

A priori, we know that the spectral peaks of some of the LEDs in our illuminator do shift. But, if the shifts are small (generally they are), we should be able to adopt the simple algorithm and still obtain a good matched illuminant.

3.2 Complex matched illumination

In the complex model, we can still use the basic framework of Algorithm 1 to determine the matched illumination and the mapping matrix. However, we now address the problem that the peaks of the LED spectra shift as they are driven at different intensities.

To deal with the problem that the LED spectra shift, we measure the spectral power distribution emitted by each LED at a variety of intensity levels. Together, these spectra form an extended basis set that better characterizes the illuminator system. We choose to use 10 uniform steps from 0 to maximum: $\mathbf {w} = [0, 0.1, 0.2, \ldots, 1]$. Note that the intensity 0 refers to the LED channel being powered off. With these measurements in hand, we can use interpolation to estimate the light spectrum, at an arbitrary intensity level, as a convex combination of the two neighbouring measured intensities. For example, if we would like to know the spectrum at an intensity of 0.65, we calculate $\mathbf {e}_{0.65} = 0.5 \mathbf {e}_{0.6} + 0.5 \mathbf {e}_{0.7}$ (where $\mathbf {e}_{0.6}$ and $\mathbf {e}_{0.7}$ denote the LED light driven at, respectively, 60% and 70% of its maximum intensity).

Let us group all the measured lights into an array $\mathbf {A}$ of size $31 \times 10 \times 11$, respectively $\#SampledWavelengths \times \#Channels \times \#IntensityLevels$. We can extract a 'local' basis from $\mathbf {A}$. For example, the $31\times 10$ maximum intensity basis (used in Algorithm 1) is $\mathbf {B}=\mathbf {A}(:,:,11)$ where the ':' means to use all indices in that dimension (for those that use Matlab, we take the notation from there). The vectors $\mathbf {A}(:,5,7)$ and $\mathbf {A}(:,5,8)$ denote the 5th LED spectrum driven at 60% and 70% of maximum intensity. Let us now define a normalized array of lights $\mathbf {A}^{n}$ where each light is divided by its intensity. As an example, $\mathbf {A}^{n}(:,5,8)=\mathbf {A}(:,5,8)/w_8$, which implies $w_8\mathbf {A}^{n}(:,5,8)=\mathbf {A}(:,5,8)$.
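In numpy (0-based indexing, so the paper's A(:,5,7) becomes A[:, 4, 6]), the arrays described above can be sketched as follows; the measured spectra are placeholders here.

```python
import numpy as np

w = np.linspace(0, 1, 11)           # intensity levels [0, 0.1, ..., 1]
A = np.random.rand(31, 10, 11)      # wavelengths x channels x intensity levels
A[:, :, 0] = 0                      # level 0: the LED channel is powered off

B_max = A[:, :, 10]                 # maximum-intensity basis, A(:,:,11) above

# Power-normalized array: each light divided by its intensity level
An = np.zeros_like(A)
An[:, :, 1:] = A[:, :, 1:] / w[1:]  # so w[j] * An[:, k, j] == A[:, k, j]

# Interpolated spectrum of the 5th LED at intensity 0.65:
e_065 = 0.5 * A[:, 4, 6] + 0.5 * A[:, 4, 7]
```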

We use Algorithm 1 to calculate the matched illuminant (when using an illuminator whose peaks change as a function of intensity). At initialization, we use the basis driven at maximum intensity, $\mathbf {B}=\mathbf {A}(:,:,11)$. As in Algorithm 1, we calculate $\mathbf {M}$ and then the weight vector $\mathbf {c}^{m}$, for the matched illumination, again using Quadratic Programming. As the algorithm proceeds, we update the basis matrix $\mathbf {B}$.

Algorithm 2. Alternating least-squares solution for the complex matched illumination (with basis updating).

The coefficient vector solved in Step 5 indicates the basis that we 'should' use. For example, if $\mathbf {c}^{m}_6=0.5$, this proposes that the 6th light (which on the first iteration has maximum power) should be driven at 50% of its maximum intensity. Since we are aware of the spectral shift as the power changes, it makes sense to substitute the 50%-power spectrum (for the 6th light) into $\mathbf {B}$ for the next iteration. In fact, we substitute the power-normalized spectrum $\mathbf {A}^{n}(:,6,6)$ for $\mathbf {B}(:,6)$. This is because, on the next iteration, if $\mathbf {c}^{m}_6 = 0.5$ we do not want to swap the basis again. A similar algorithm was used by Mackiewicz et al. [31] to generate a light metamer for vision research. Interested readers are referred to that work for more details.
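The basis update can be sketched as below. This is our own rendering of the step just described: each basis column is replaced by the power-normalized spectrum measured at the level nearest to the solved coefficient (the paper's example substitutes the exact 50% level).

```python
import numpy as np

def update_basis(An, c, w):
    """Basis refresh for Algorithm 2.

    An: (31, 10, 11) power-normalized light array; c: (10,) solved
    coefficients; w: (11,) measured intensity levels [0, 0.1, ..., 1].
    """
    B = np.empty((An.shape[0], An.shape[1]))
    for k, ck in enumerate(c):
        j = int(np.argmin(np.abs(w - ck)))  # nearest measured intensity level
        j = max(j, 1)                       # level 0 (off) has no normalized spectrum
        B[:, k] = An[:, k, j]
    return B
```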

3.3 Algorithm for making new reflectance data

In the next section, we will present synthetic and real color correction results for two object data sets: the ColorChecker Color Rendition Chart (often referred to as the Macbeth chart) [22] and a set of 1995 reflectance spectra (SFU1995) [21]. The Macbeth chart is a standard chart used for characterizing and evaluating cameras. SFU1995 is a composite set comprising 1269 Munsell chips [32], 120 Dupont paint chips [33], 170 natural objects [33], 350 surfaces from [34], the 24 Macbeth chart patches and 57 surfaces measured at Simon Fraser University.

The Macbeth ColorChecker has only 24 patches. And we do not (nor does anyone else) have access to the physical samples in the SFU1995 data set. But this reflectance set is often used to benchmark algorithms, so we would like to quote real experimental results for SFU1995.

To bridge this experimental gap, we propose to describe the reflectances in SFU1995 by a linear combination of no more than 4 color samples in the Macbeth data set:

$$\mathbf{r}_{target} \approx c_1 \mathbf{r}_1 + c_2 \mathbf{r}_2 + c_3 \mathbf{r}_3 + c_4 \mathbf{r}_4 = \mathbf{r}_{fit}$$
where $\mathbf {r}_1, \mathbf {r}_2,\mathbf {r}_3$ and $\mathbf {r}_4$ are 4 spectra selected from the Macbeth chart and $\mathbf {r}_{target}$ is one of the reflectances in SFU1995. To simplify matters further, we used only 1 (out of 6) of the achromatic patches. Thus we wished to describe each reflectance, e.g. in the SFU1995 data set, as a combination of 4 Macbeth reflectances (selected out of 19).

Assuming raw image capture, the RGB response to the target color can be calculated as the linearly composed RGBs of the 4 chosen color patches:

$$\boldsymbol{\rho}_{target} \approx c_1 \boldsymbol{\rho}_1 + c_2 \boldsymbol{\rho}_2 + c_3 \boldsymbol{\rho}_3 + c_4 \boldsymbol{\rho}_4 .$$

It follows that we can simulate the response to an unseen reflectance by applying the same linear combination (the one that approximates $\mathbf {r}_{target}$) to the measured Macbeth RGBs.

As a design choice, we limit the number of reflectances to 4 to try to prevent linear combinations with large negative and positive coefficients (such coefficients could make the RGB estimates susceptible to noise).
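Since only $\binom{19}{4} = 3876$ subsets exist, the best 4-patch combination can be found by brute force. A minimal sketch (the function name and interface are ours):

```python
import numpy as np
from itertools import combinations

def fit_with_macbeth(r_target, R_macbeth):
    """Best least-squares fit of r_target (31,) by 4 columns of R_macbeth (31, 19)."""
    best = (np.inf, None, None)
    for idx in combinations(range(R_macbeth.shape[1]), 4):
        A = R_macbeth[:, list(idx)]                    # (31, 4) chosen patches
        c, *_ = np.linalg.lstsq(A, r_target, rcond=None)
        err = np.linalg.norm(A @ c - r_target)
        if err < best[0]:
            best = (err, idx, c)
    return best  # (fitting error, patch indices, coefficients)
```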

Let us validate our method on the X-Rite ColorChecker Digital SG chart. We take the 19 Macbeth reflectances (again, choosing only one achromatic color) to predict the SG reflectances using our method. That is, for each color patch in the SG chart, we find the 4 selected Macbeth colors and the linear combination coefficients that best predict its spectral reflectance (i.e. with the least fitting error). The results are summarized in Table 1 where we show a percentage fitting error and a CIELAB $\Delta E_{ab}^{*}$. The percentage error is defined as $\frac {||\mathbf {r}_{target}-\mathbf {r}_{fit}||}{||\mathbf {r}_{target}||} \times 100\%$ and $\Delta E_{ab}^{*}$ is the color error between the desired and predicted reflectance in the CIELAB color space [4] for a D65 viewing illuminant. The fit is surprisingly good, especially in terms of the CIELAB error. The mean and median fits are less than 1 $\Delta E_{ab}^{*}$ and the max is 3.5 $\Delta E_{ab}^{*}$.

Table 1. Results of predicting the SG chart reflectances by the proposed linear fitting.

Let us now apply our method to the more challenging SFU1995 set of 1995 reflectances (which includes surfaces that are highly chromatic). In Fig. 4, we show three statistically representative reflectances (solid black lines) drawn from SFU1995 and their three reconstructions (dashed lines) according to the method above. In order, panels (a) to (c) show the median, 75-percentile and maximum spectral error fits. In panel (d), we show the histogram of the coefficients for the 1995 reflectances (where each reflectance is fit with 4 different Macbeth spectra). The coefficients are in the range [-3, 3], indicating that any noise increase in the transformed RGBs will be small. Notice the peak of the histogram appears to be at 0. Actually, few coefficients are exactly zero; each histogram bin counts a range of coefficients. Evidently, some Macbeth reflectances make only small contributions to the linear combination matching a target reflectance.

Fig. 4. In panels (a), (b) and (c), the solid and dashed lines show respectively the actual and fitted reflectances for the median, 75-percentile and max error fits. Panel (d) shows the histogram of the fitting coefficients.

Overall, the fit of Macbeth reflectances to the SFU1995 is surprisingly good.

4. Results

A D65 illuminant metamer generated by the LED illuminator system is shown in Fig. 5(a) (with its maximum power normalized to one). In the figure, we also plot the theoretical CIE D65 as a solid line. A D65 metamer is a spectrum that produces the same XYZ tristimulus values (relative XYZ tristimuli of [0.9385, 1.0000, 1.0472]) as the theoretical D65 illuminant, yet differs in spectral composition [35]. Our illuminator can make many D65 metamers [36]; here we chose the metamer that has the least spectral error from the standard D65.
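One way to compute such a least-spectral-error metamer is sketched below; this is our own formulation (an SLSQP solve with a tristimulus-equality constraint), not necessarily the solver used in [36].

```python
import numpy as np
from scipy.optimize import minimize

def closest_metamer(B, X, target):
    """LED weights c in [0,1] with X^T(Bc) = X^T target, minimizing ||Bc - target||.

    B: (31, k) LED basis; X: (31, 3) color matching functions;
    target: (31,) theoretical illuminant spectrum (e.g. CIE D65).
    """
    t = X.T @ target                           # required XYZ tristimuli
    cons = {'type': 'eq', 'fun': lambda c: X.T @ (B @ c) - t}
    res = minimize(lambda c: np.sum((B @ c - target) ** 2),
                   x0=np.full(B.shape[1], 0.5),
                   bounds=[(0, 1)] * B.shape[1],
                   constraints=[cons], method='SLSQP')
    return res.x
```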

Fig. 5. (a) Relative spectral power distributions of the CIE D65 (solid line) and its metamer (dashed line) generated by the LED illuminator. (b) The matched illuminations solved by the simple (dashed line) and complex (solid line) models.

In Fig. 5(b), we show the matched illuminations calculated with respect to the simple and complex illuminator models (respectively, solved using Algorithms 1 and 2). From the figure, we see that the two matched illuminants are similar. There are, however, small spectral differences in the range of 450 nm to 550 nm. Both matched lights are even bluer (have more radiant power in the short-wave region of the visible spectrum) than the actual measurement D65 metamer.

In the results that follow, we will calculate the XYZ tristimuli and compute color difference errors. In our own papers [17-19], we have used the CIELAB error metric to measure color performance (as have many other filter design papers [12,37-39]). Thus, the results presented here can be directly compared to the prior art. For reference, where we report performance on experiments with real illuminants in Section 4.2, we also tabulate the CIEDE2000 color error [40] (where we will see that the trend in the data is very similar).

4.1 Simulated experiments

In the simulated experiments, we evaluated how well a Nikon camera (see Fig. 6(a)) can measure the colors of two object sets: the Macbeth ColorChecker chart and the 1995 reflectance spectra (SFU1995). The reflectance data of the Macbeth chart were measured in our laboratory with a Konica Minolta CM700d spectrophotometer in the range 400 nm to 700 nm at 10 nm intervals. The Simon Fraser data set SFU1995 is described in [21].

Fig. 6. (a) The spectral sensitivity functions of a Nikon D5100 camera. (b) The linearity property of the camera responses with respect to six neutral colors on the Macbeth color chart captured under the D65 illuminant.

We first calculated the RGB responses of the Nikon camera according to Equation (3), given the spectral data of the D65 metamer, the matched illuminations, the reflectances, and the camera. In the tables that follow, we call the matched illuminations derived using Algorithms 1 and 2 the Simple and Complex matched illuminations, respectively. The corresponding ground-truth XYZ values under the D65 illuminant metamer were also computed.

Our three sets of camera RGBs, for the D65 metamer and the simple and complex matched illuminants, were separately mapped (color corrected) to estimates of the XYZs using least-squares regression. The predicted and ground-truth XYZs were converted into the CIELAB color space and the color difference between them was evaluated in terms of $\Delta E_{ab}^{*}$ [4]. The error statistics were calculated over all test reflectances.
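For completeness, a sketch of the XYZ-to-CIELAB conversion and the $\Delta E_{ab}^{*}$ distance used in the evaluation (standard CIE formulas; here the white point is assumed to be the XYZ of the measurement illuminant):

```python
import numpy as np

def xyz_to_lab(xyz, white):
    """CIE XYZ to CIELAB; xyz is (3,) or (N, 3), white is the (3,) white point."""
    def f(t):
        d = 6 / 29
        return np.where(t > d ** 3, np.cbrt(t), t / (3 * d ** 2) + 4 / 29)
    fx, fy, fz = f(xyz / white).T
    return np.stack([116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)], axis=-1)

def delta_e_ab(lab1, lab2):
    """Euclidean distance in CIELAB."""
    return np.linalg.norm(np.asarray(lab1) - np.asarray(lab2), axis=-1)
```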

The results of this experiment for the D65 illumination are summarized in Table 2. The left and right of the table report the experiments for, respectively, the Macbeth and SFU1995 data sets. We calculated the $\Delta E_{ab}^{*}$ color errors for three cases: first, when the native camera RGBs, recorded under the D65 metamer, were color corrected to XYZs; then when we color corrected the RGBs measured under the Simple and Complex matched illuminations. The Mean, Median and Max errors are shown for both reflectance sets. For the SFU1995 set, which has a much larger number of reflectances, we also calculated the 95- and 99-percentile errors.

Table 2. $\Delta E_{ab}^{*}$ statistics of simulated color correction performance for two testing data sets, comparing the color corrected native Nikon D5100 camera under the D65 metamer with the color corrected camera under the simple- and complex-model matched illuminations generated by the illuminator system.

From Table 2, it is clear that measuring and then color correcting RGBs captured under the matched illumination leads to better color measurement accuracy compared to the original D65 metamer. For the Macbeth color chart, for the simple and complex matched illuminations respectively, there is a reduction of 44% and 48% in mean $\Delta E_{ab}^{*}$ error, 64% and 65% in median $\Delta E_{ab}^{*}$ error, and 24% and 16% in max $\Delta E_{ab}^{*}$ error. For the SFU1995 data set, the 95- and 99-percentile errors are substantially improved; they are halved for the complex matched illumination.

However, readers will notice that there are very large color errors for a few of the SFU1995 reflectances, and this is true for both the D65 metamer and the matched illuminants (as indicated by the 99-percentile and maximum error values in Table 2). Looking into the data set, we find that the reflectances giving the highest color errors are highly saturated surface colors. The high error is also related to the fact that we are using a linear color correction (which is known to struggle with the most saturated colors).

Overall, there is a modest improvement when the complex (as opposed to the simple) matched illumination is used. This is encouraging since it is the complex model that actually corresponds to the physical properties of the illuminator we have in our lab.

4.1.1 Multiple cameras

To test the robustness of the proposed method, we extend the color correction experiments to a much broader collection of digital cameras. The data set consists of 28 cameras with measured spectral sensitivity data, including professional single-lens reflex, industrial and mobile cameras [41]. For each camera, we solve for the optimized matched illumination under D65. As before, we calculate the best post-color-correction estimates of the XYZs for each camera using the actual D65 light and the per-camera matched counterpart.

Figure 7 summarizes the per-camera (a) mean and (b) 99-percentile $\Delta E_{ab}^{*}$ performance for the SFU1995 data set. The color errors for the matched illuminations are shown with light blue (dashed) bars and are compared with D65, solid blue bars. On the right (in green) we record the average results for the 28 cameras. For these cameras, by using the matched illuminations, the mean color error reduces from $1.60 \pm 0.51$ to $0.81 \pm 0.29$ and the 99-percentile error from $11.38 \pm 2.54$ to $4.49 \pm 1.31$ (where the $\pm$ value is the standard deviation of the error metric). It is evident that using the matched illuminations supports improved color measurement accuracy for all 28 testing cameras and, on average, the performance increment is significant.

Fig. 7. Color correction performance for a collection of 28 cameras with and without the matched illumination, evaluated in terms of (a) mean and (b) 99-percentile color errors. Note that the last bar (depicted in a different color) shows the averaged results across all 28 testing cameras.

4.1.2 Multiple illuminants

For our Nikon D5100 camera, we repeated the simulation experiments for 3 extra illuminants: CIE A, D55 and D75. As before, we found the optimal illuminant metamer that could be produced by our illuminator for each of these 3 lights. Then, we solved for their simple and complex matched illuminants, see Fig. 8.

Fig. 8. Relative spectral power distributions of the CIE A (a), D55 (c) and D75 (e) illuminants (solid lines) and their metamers (dashed lines) generated by the LED illuminator. The right column shows the matched illuminations solved by the simple (dashed lines) and complex (solid lines) models.

The experimental results for these illuminants are summarized in Tables 3-5. We see that the trend in the data is the same as for the D65 illuminant. As before, there is a significant improvement in color measurement error when colors are recorded and corrected under matched illuminants. And, once more, we find there is a small advantage in using the complex illuminator model.

Table 3. $\Delta E_{ab}^{*}$ statistics of simulated color correction performance for two testing data sets, comparing the color corrected native Nikon D5100 camera under the A metamer with the color corrected camera under the simple- and complex-model matched illuminations generated by the illuminator system.

Table 4. $\Delta E_{ab}^{*}$ statistics of simulated color correction performance for two testing data sets, comparing the color corrected native Nikon D5100 camera under the D55 metamer with the color corrected camera under the simple- and complex-model matched illuminations generated by the illuminator system.

Table 5. $\Delta E_{ab}^{*}$ statistics of simulated color correction performance for two testing data sets, comparing the color corrected native Nikon D5100 camera under the D75 metamer with the color corrected camera under the simple- and complex-model matched illuminations generated by the illuminator system.

4.2 Experiments using measured data

Images of the Macbeth checker, under the D65 metamer and the simple and complex matched illuminants, were captured with a Nikon D5100 digital single-lens reflex camera. The camera used a fixed focal length of 35 mm with an f-number of 5, ISO 1600 and an exposure time of 1/40 s. To check camera linearity, we plot the mean reflectance of the 6 achromatic colors on the Macbeth chart against the mean of their RGB responses, see Fig. 6(b). The dashed lines are the linear fits, with their functions shown in the figure. We see that the lines almost pass through the origin, which confirms the good linearity of our camera.

To obtain the RGBs that we use in our experiments, raw Nikon image files (NEF) of the Macbeth chart were captured, then converted and demosaiced into TIFF format using DCRAW [42]. The camera raw RGBs of a selected area of about $200 \times 200$ pixels were then averaged for each color patch in the Macbeth chart. To ensure lighting uniformity, we also captured images of an X-Rite White Balance chart placed at the same spot as the Macbeth chart. By dividing the RGBs in the checker by the corresponding RGBs measured from the white chart, we corrected for non-uniform illumination. Of course, dividing by white can be thought of as multiplying by a diagonal matrix (whose diagonal components are the reciprocals of the RGBs measured from the white reference chart). This, however, does not change our color correction optimization. If $\mathbf {M}$ denotes the $3\times 3$ matrix optimally mapping the RGBs of a camera under a given light to the corresponding XYZs and we then multiply the RGBs by a diagonal matrix $\mathbf {D}$, then least-squares color correction will return $\mathbf {M}\mathbf {D}^{-1}$. That is, the output from color correction will be the same.

For the ground-truth values, we used the same XYZs that were calculated for the synthetic experiments (discussed in Section 4.1). Our ground truth is the set of XYZ tristimuli of the Macbeth color checker illuminated by the D65 metamer.

We now repeat the color correction experiment for real RGB data. Here, we use the method set forth in Section 3.3 to allow us to investigate the performance for the SFU1995 data set. That is, we model each SFU reflectance as a linear sum of four Macbeth reflectances. Because of the linearity of capture, applying the same linear combination to the corresponding Macbeth RGBs results in an RGB that corresponds to the linearly combined reflectances. In this way, given the measurements from a Macbeth checker, we can test the matched illuminant approach on a much larger reflectance data set. Also, although computing RGBs in this way will increase noise, we are averaging the responses over four $200 \times 200$ pixel regions, so the effect of the noise is negligible.

The color performance results are evaluated in terms of $\Delta E_{ab}^{*}$ in Table 6. From the table, we see that the error between the color corrected native RGBs and the ground-truth XYZs is significantly higher compared to the simulated experiments. This is to be expected. We are choosing our matched illuminant based on estimated spectral sensitivities and measured illuminants, and there will certainly be discrepancies in both. Further, although care was taken to measure the color checker so as to minimize any specular reflectance, there is likely a small specular component in our data (not present in the synthetic experiment). For interest, we also show the results in terms of CIEDE2000 errors in Table 7; the results show a very similar trend.

Table 6. $\Delta E_{ab}^{*}$ error statistics of experimentally measured data for two object sets, comparing the color corrected native Nikon D5100 camera under the D65 metamer with the color corrected camera under the simple- and complex-model matched illuminations generated by the illuminator system.

Table 7. CIEDE2000 error statistics of experimentally measured data for two object sets, comparing the color corrected native Nikon D5100 camera under the D65 metamer with the color corrected camera under the simple- and complex-model matched illuminations generated by the illuminator system.

Importantly, when we correct real RGBs measured under the matched illuminations, we record significantly lower color errors. The error for the Macbeth reflectances is reduced by a modest amount (e.g. 16% for the mean metric with the complex matched illuminant). The performance difference for the larger SFU1995 reflectance set is larger: the mean, median and 99-percentile errors of the corrected RGBs (measured under the D65 metamer) are, respectively, 28%, 27% and 33% higher compared with the measurements taken under the complex matched illumination. As before, we find the complex matched illumination leads to the lowest errors overall.

We also repeated this experiment for the CIE A illuminant, calculating the CIELAB $\Delta E_{ab}^{*}$ and CIEDE2000 errors, see Tables 8 and 9. We see a similar trend to the results found for the D65 illuminant. There is a significant improvement in color measurement error when colors are recorded and corrected under the matched A illuminant.

Table 8. $\Delta E_{ab}^{*}$ error statistics of experimental tests for two object data sets, comparing the color corrected native Nikon D5100 camera under the A metamer with the color corrected camera under the simple- and complex-model matched illuminations generated by the illuminator system.

Table 9. CIEDE2000 error statistics of experimental tests for two object data sets, comparing the color corrected native Nikon D5100 camera under the A metamer with the color corrected camera under the simple- and complex-model matched illuminations generated by the illuminator system.

5. Conclusion

In prior work (e.g. [17-19]), it has been shown that it is possible to design a color prefilter that, when placed in the optical path of a camera, makes the camera almost colorimetric. However, none of the filters previously designed have been manufactured. And it is not known to what extent they can be manufactured.

In this paper, we pose the filter-design problem in an equivalent form. We propose that placing a filter in front of a light source is broadly equivalent to placing the filter in front of the camera. Since we now have tunable multi-spectral LED illuminators, we can model the function of the filter as a modulation of the light source. For a given measurement light and camera, we show how to optimally modulate the light source to solve for a Matched Illumination. The matched illumination for D65 is spectrally quite different from D65, but it results in RGBs that can be color corrected to CIE XYZ tristimuli more accurately than RGBs measured under D65.

Experiments validate our approach. On synthetic and real data, we show that there is a significant benefit (up to 50%) in using a matched illumination for measuring color, compared to measuring under the desired measurement light itself. A novel aspect of our experimental methodology is that we show how the measurements made for a Macbeth ColorChecker chart can be used to calculate results for a much larger reflectance data set.

Funding

Engineering and Physical Sciences Research Council (EP/S028730/1).

Acknowledgments

The authors would like to express their gratitude to Dietmar Wueller (Image Engineering) for his generous help in measuring the spectral sensitivity functions of the experimental camera. We would also like to thank Dr. Michal Mackiewicz for his help with the illuminator system.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data presented in this paper may be shared by the authors upon reasonable request.

References

1. H. E. Ives, “The transformation of color-mixture equations from one system to another,” J. Frankl. Inst. 180(6), 673–701 (1915). [CrossRef]  

2. R. Luther, “Aus dem gebiet der farbreizmetrik,” Zeitschrift Technische Physik 8, 540–558 (1927).

3. B. K. P. Horn, “Exact reproduction of colored images,” Comput. Vision, Graph. Image Process. 26(2), 135–167 (1984). [CrossRef]  

4. N. Ohta and A. Robertson, Colorimetry: fundamentals and applications (John Wiley & Sons, 2006).

5. J. Nakamura, Image sensors and signal processing for digital still cameras (CRC press, 2016).

6. J. E. Farrell and B. A. Wandell, “Method and apparatus for identifying the color of an image,” U.S. Patent 5479524 (26 December 1995).

7. P. D. Burns and R. S. Berns, “Analysis multispectral image capture,” in Color and Imaging Conference, (Society for Imaging Science and Technology, 1996), pp. 19–22.

8. F. H. Imai and R. S. Berns, “Spectral estimation using trichromatic digital cameras,” in Proceedings of the International Symposium on Multispectral Imaging and Color Reproduction for Digital Archives, (Chiba University, Japan, 1999), pp. 1–8.

9. F. H. Imai, S. Quan, M. R. Rosen, and R. S. Berns, “Digital camera filter design for colorimetric and spectral accuracy,” in Proceedings of the 3rd International Conference on Multispectral Color Science, (University of Joensuu, Finland, 2001), pp. 13–16.

10. W. Wu, J. P. Allebach, and M. Analoui, “Imaging colorimetry using a digital camera,” J. Imaging Sci. Technol. 44, 267–279 (2000).

11. J. Y. Hardeberg, F. J. Schmitt, and H. Brettel, “Multispectral color image capture using a liquid crystal tunable filter,” Opt. Eng. 41(10), 2532–2548 (2002). [CrossRef]  

12. J. Y. Hardeberg, “Filter selection for multispectral color image acquisition,” J. Imaging Sci. Technol. 48, 105–110 (2004).

13. D. L. MacAdam, “Colorimetric specifications of Wratten light filters,” J. Opt. Soc. Am. 35(10), 670–675 (1945). [CrossRef]  

14. P. L. Vora and H. J. Trussell, “Mathematical methods for the design of color scanning filters,” IEEE Trans. on Image Process. 6, 312–320 (1997). [CrossRef]  

15. P. Xu and H. Xu, “Filter selection based on light source for multispectral imaging,” Opt. Eng. 55(7), 074102 (2016). [CrossRef]  

16. M. Á. Martínez-Domingo, M. Melgosa, K. Okajima, V. J. Medina, and F. J. Collado-Montero, “Spectral image processing for museum lighting using CIE LED illuminants,” Sensors 19(24), 5400 (2019). [CrossRef]  

17. G. D. Finlayson, Y. Zhu, and H. Gong, “Using a simple colour pre-filter to make cameras more colorimetric,” in Color and Imaging Conference, (Society for Imaging Science and Technology, 2018), pp. 182–186.

18. G. D. Finlayson and Y. Zhu, “Designing color filters that make cameras more colorimetric,” IEEE Trans. on Image Process. 30, 853–867 (2021). [CrossRef]  

19. Y. Zhu and G. D. Finlayson, “A mathematical investigation into the design of prefilters that make cameras more colorimetric,” Sensors 20(23), 6882 (2020). [CrossRef]  

20. L. Wang, A. Sole, J. Y. Hardeberg, and X. Wan, “Optimized light source spectral power distribution for RGB camera based spectral reflectance recovery,” Opt. Express 29(16), 24695–24713 (2021). [CrossRef]  

21. K. Barnard, L. Martin, B. Funt, and A. Coath, “A data set for color research,” Color Res. Appl. 27, 147–151 (2002). [CrossRef]  

22. C. S. McCamy, H. Marcus, and J. G. Davidson, “A color-rendition chart,” J. Appl. Photogr. Eng. 2, 95–99 (1976).

23. G. Hong, M. R. Luo, and P. A. Rhodes, “A study of digital camera colorimetric characterization based on polynomial modeling,” Color Res. Appl. 26, 76–84 (2001). [CrossRef]  

24. P.-C. Hung, “Colorimetric calibration in electronic imaging devices using a look-up-table model and interpolations,” J. Electron. Imaging 2(1), 53–62 (1993). [CrossRef]  

25. G. D. Finlayson, M. Mackiewicz, and A. Hurlbert, “Color correction using root-polynomial regression,” IEEE Trans. on Image Process. 24(5), 1460–1470 (2015). [CrossRef]  

26. M. S. Drew and B. V. Funt, “Natural metamers,” CVGIP: Image Underst. 56(2), 139–151 (1992). [CrossRef]  

27. C. F. Andersen and D. Connah, “Weighted constrained hue-plane preserving camera characterization,” IEEE Trans. on Image Process. 25(9), 4329–4339 (2016). [CrossRef]  

28. M. Mackiewicz, C. F. Andersen, and G. D. Finlayson, “Method for hue plane preserving color correction,” J. Opt. Soc. Am. 33(11), 2166–2177 (2016). [CrossRef]  

29. R. W. G. Hunt and M. R. Pointer, Measuring Colour (John Wiley & Sons, 2011), 4th ed.

30. D. G. Luenberger and Y. Ye, Linear and nonlinear programming (Springer, 2015), 4th ed.

31. M. Mackiewicz, S. Crichton, S. Newsome, R. Gazerro, G. D. Finlayson, and A. Hurlbert, “Spectrally tunable LED illuminator for vision research,” in Conference on Colour in Graphics, Imaging, and Vision, (Society for Imaging Science and Technology, 2012), pp. 372–377.

32. J. P. S. Parkkinen, J. Hallikainen, and T. Jaaskelainen, “Characteristic spectra of Munsell colors,” J. Opt. Soc. Am. 6(2), 318–322 (1989). [CrossRef]  

33. M. J. Vrhel, R. Gershon, and L. S. Iwan, “Measurement and analysis of object reflectance spectra,” Color Res. Appl. 19, 4–9 (1994). [CrossRef]  

34. E. L. Krinov, “Spectral reflectance properties of natural formations,” Tech. rep., National Research Council of Canada (1947).

35. G. Wyszecki and W. S. Stiles, Color science: concepts and methods, quantitative data and formulae (Wiley New York, 1982), 2nd ed.

36. G. Finlayson, M. Mackiewicz, A. Hurlbert, B. Pearce, and S. Crichton, “On calculating metamer sets for spectrally tunable LED illuminators,” J. Opt. Soc. Am. 31(7), 1577–1587 (2014). [CrossRef]  

37. R. Shrestha and J. Y. Hardeberg, “Multispectral imaging using LED illumination and an RGB camera,” in Color and Imaging Conference, (Society for Imaging Science and Technology, 2013), pp. 8–13.

38. M. J. Vrhel, “Improved camera color accuracy in the presence of noise with a color prefilter,” in Color and Imaging Conference, (Society for Imaging Science and Technology, 2020), pp. 187–192.

39. H. J. Rivertz, “On filters making an imaging sensor more colorimetric,” in Color and Imaging Conference, (Society for Imaging Science and Technology, 2020), pp. 169–174.

40. M. R. Luo, G. Cui, and B. Rigg, “The development of the CIE 2000 colour-difference formula: CIEDE2000,” Color Res. Appl. 26, 340–350 (2001). [CrossRef]  

41. J. Jiang, D. Liu, J. Gu, and S. Süsstrunk, “What is the space of spectral sensitivity functions for digital color cameras?” in 2013 IEEE Workshop on Applications of Computer Vision (WACV), (IEEE, 2013), pp. 168–179.

42. D. Coffin, “Decoding raw digital photos in Linux,” Available from https://www.dechifro.org/dcraw/.
