
Spectral sensitivity estimation of trichromatic camera based on orthogonal test and window filtering

Open Access

Abstract

The three-channel spectral sensitivity of a trichromatic camera characterizes the color space of the system. It is the mapping bridge from the spectral information of a scene to the response values of the camera. In this paper, we propose a method for estimating the three-channel spectral sensitivity of a trichromatic camera. It comprises a calibration experiment based on orthogonal test design and data processing based on window filtering. The calibration experiment is first designed with a 9-level, 3-factor orthogonal table. A rough estimation model of the spectral sensitivity is established from the input-output data pairs of the calibration experiment. The rough estimate is then modulated by two window filters, one in the frequency domain and one in the spatial domain. The Luther-Ives condition and a smoothness condition are introduced to design the windows and help achieve the optimal estimate of the system spectral sensitivity. Finally, the proposed method is verified by comparison experiments. The results show that the estimated spectral sensitivity is basically consistent with the measurements obtained with a monochromator, and that the relative full-scale errors of the three RGB channels are clearly lower than those of the Wiener filtering method and the Fourier band-limitedness method. The proposed method estimates the spectral sensitivity of a trichromatic digital camera well, which is of great significance for the colorimetric characterization and evaluation of imaging systems.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

The spectral sensitivity describes the relative efficiency of radiation detection of a system at different wavelengths. For a color imaging system, it is the mapping relationship between the responsivity of each color channel and wavelength, and it is the foundation of the colorimetric characterization of a trichromatic camera [1-3]. Spectral sensitivity is an important reference index for system applications and performance evaluation. Its estimation is of great significance for system colorimetric characterization, for evaluating and improving colorimetric accuracy, for spectral reflectance reconstruction, and for system simulation [4]. However, spectral sensitivity measuring equipment is very expensive, and factors such as noise, response nonuniformity and nonlinearity all affect colorimetric characterization; spectral sensitivity estimation is an ill-posed problem, and these factors further affect the estimation of the camera spectral sensitivity. Measurement of the spectral sensitivity of an imaging system can be classified into direct measurement and indirect estimation [5-7]. The direct method is currently the main approach; it demands a monochromator and other devices, and stable monochromatic light is its key requirement [5]. This type of method includes the standard substitution, direct comparison, broadband filter and Fourier transform methods, which place high demands on experiments and instruments [8-16]. Most indirect estimation methods are based on constraints [17-21], while others are based on the input spectrum [22-29]. There are two typical input-spectrum-based estimation methods: those based on known input spectral characteristics [26,27] and those based on fluorescent reflection targets [28]. Another approach is based on a set of basis functions collected from measured camera spectral sensitivities [29]. Almost all indirect methods depend on an estimation model; consequently, the accuracy of the basis-function method is mainly determined by the type of basis functions. The input-spectrum-based estimation methods often involve the pseudo-inverse of a matrix; owing to noise and the dimensionality of the matrix, this is always an ill-posed inverse problem, so the accuracy of the sensitivity estimation is limited by the solving method. Two important ingredients of indirect estimation are the mapping data pairs and the estimation model, which involve two difficulties: sampling of the input space, and modeling and solving the inverse problem. Orthogonal test design is a statistical experimental design method that can help with the input space sampling: an orthogonal table guarantees the orthogonality of the sample combinations.

In this paper, we propose a spectral sensitivity estimation method. An sRGB-standard display is adopted as the input light source, and an orthogonal test is designed for the calibration experiment, which determines the input color samples. A spectral irradiance colorimeter is used to measure the input spectral information. Combined with the output of the imaging system, pairs of input and output data are obtained, and a rough estimate of the system spectral sensitivity is derived from this mapping relationship. The Luther-Ives condition and a smoothness condition are then integrated to optimize the spectral sensitivity. To reduce the number of color patches, we also provide a selection procedure for the color patches. The proposed method avoids the dependence on monochromatic light and overcomes the ill-posedness of the spectral sensitivity estimation.

2. Spectral sensitivity estimation model

According to the physical process of radiation transmission, the three-channel response value of a trichromatic imaging system can be described as:

$$\left\{ \begin{array}{l} {R_{raw}} = K\sum\limits_{380}^{780} {\varphi (\lambda )} {\tau_r}(\lambda )\gamma (\lambda )\Delta \lambda \textrm{ + }{B_r}\\ {G_{raw}} = K\sum\limits_{380}^{780} {\varphi (\lambda )} {\tau_g}(\lambda )\gamma (\lambda )\Delta \lambda + {B_g}\\ {B_{raw}} = K\sum\limits_{380}^{780} {\varphi (\lambda )} {\tau_b}(\lambda )\gamma (\lambda )\Delta \lambda + {B_b} \end{array} \right., $$
where ${R_{raw}}$, ${G_{raw}}$ and ${B_{raw}}$ represent the three-channel response values of the system; $\varphi (\lambda )$ represents the spectral radiant flux and is object-dependent; ${\tau _r}(\lambda )$, ${\tau _g}(\lambda )$ and ${\tau _b}(\lambda )$ represent the spectral transmittances of the filters of the three channels and are object-independent; $\gamma (\lambda )$ represents the spectral sensitivity of the detector and is also object-independent; $K$ is the gain coefficient, likewise object-independent; ${B_r}$, ${B_g}$ and ${B_b}$ are the offset coefficients, each composed of two parts: a signal-dependent part (shot noise) and a signal-independent part (dark noise). When the dark response ${B_0}$ is removed, Eq. (1) is transformed into:
$$\left\{ \begin{array}{l} {R_{raw}} - {B_0} = K\sum\limits_{380}^{780} {\varphi (\lambda )} {\tau_r}(\lambda )\gamma (\lambda )\Delta \lambda + {n_r}\\ {G_{raw}} - {B_0} = K\sum\limits_{380}^{780} {\varphi (\lambda )} {\tau_g}(\lambda )\gamma (\lambda )\Delta \lambda + {n_g}\\ {B_{raw}} - {B_0} = K\sum\limits_{380}^{780} {\varphi (\lambda )} {\tau_b}(\lambda )\gamma (\lambda )\Delta \lambda + {n_b} \end{array} \right., $$
where ${R_{raw}} - {B_0}$ represents the linear response to the input radiation, and ${n_r}$, ${n_g}$ and ${n_b}$ represent the system random noise. Due to some practical deviations from linear behavior, it is common to linearize imaging systems via some sort of gamma-correcting functions [30]. Equation (2) can be transformed into:
$$\left\{ \begin{array}{l} R = {F_r}\left[ {\sum\limits_{380}^{780} {\varphi (\lambda )} {\tau_r}(\lambda )\gamma (\lambda )\Delta \lambda + {n_r}} \right]\\ G = {F_g}\left[ {\sum\limits_{380}^{780} {\varphi (\lambda )} {\tau_g}(\lambda )\gamma (\lambda )\Delta \lambda + {n_g}} \right]\\ B = {F_b}\left[ {\sum\limits_{380}^{780} {\varphi (\lambda )} {\tau_b}(\lambda )\gamma (\lambda )\Delta \lambda + {n_b}} \right] \end{array} \right., $$
where $R$, $G$ and $B$ represent the three-channel digital values after a transformation. If ${F_r}$, ${F_g}$ and ${F_b}$ are linear transformations of the system corresponding to the three channels, Eq. (3) can be rewritten as
$${\textbf D} = {{\mathrm{\boldsymbol \Phi}}^T} \times {\textbf T} + {\textbf n}, $$
where ${\textbf D} = \left[ {\begin{array}{{ccc}} {{R_i}}&{{G_i}}&{{B_i}} \end{array}} \right]|{_{i = 1 \cdots n}} $ represents the row vectors of the three-channel responses; ${\mathrm{\boldsymbol \Phi}}$ represents the column vectors of the sampled spectral radiant flux ${\varphi _i}(\lambda )|{_{i = 1 \cdots n}} $ at the chosen spectral intervals; ${\textbf T}$ represents the spectral response conversion matrix of the three channels when the input spectral radiant flux is $\varphi (\lambda )$; and ${\textbf n}$ represents the corresponding random-noise vectors.

When the calibration data pairs $({\textbf D},{\mathrm{\boldsymbol \Phi}} )$ are known, the spectral sensitivity matrix ${\textbf T}$ of the imaging system can be estimated from them, thus

$${\textbf T} = f({{ {\mathrm{\boldsymbol \Phi}} ,{\textbf D}}} ). $$
where $f$ represents some mapping relation. Because the matrix ${\mathrm{\boldsymbol \Phi}}$ usually has a low effective dimension, its condition number is very large. To avoid the ill-posedness of the spectral sensitivity estimation caused by the matrix inversion, and provided that the number of principal components is close to 3, Eq. (5) can be converted. Letting ${\textbf H} = {{\textbf T}^{ - 1}}$, the least-squares method is used to estimate the inverse of the system spectral sensitivity. The estimation model is given in Eq. (6):
$$\hat{{\textbf H}} = \mathop {\arg \min }\limits_{\hat{{\textbf H}}} ({{{||{{{\mathrm{\boldsymbol \Phi}} - {\textbf D}} \times \hat{{\textbf H}}} ||}^2}} ). $$
The system spectral sensitivity can be expressed as:
$${\textbf T} = {[{{{({{{\textbf D}^{\textbf T}}{\textbf D}} )}^{ - 1}}{{\textbf D}^T}{{\mathrm{\boldsymbol \Phi}} }} ]^ + }, $$
where the superscript $+$ stands for the Moore-Penrose pseudo-inverse. Theoretically, the estimate of the system spectral sensitivity can be computed from the calibration data and the estimation model. However, owing to the condition number of the matrix, random noise and other factors, the estimate needs to be further optimized.
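The rough estimation in Eqs. (6) and (7) amounts to a least-squares solve followed by a Moore-Penrose pseudo-inverse. The following minimal NumPy sketch illustrates that computation, assuming the calibration data are arranged as an n×3 response matrix and an n×N spectral matrix; the function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def rough_estimate_sensitivity(D, Phi):
    """Rough estimate of the three-channel spectral sensitivity, Eqs. (6)-(7).

    D   : (n, 3) matrix of linearized camera responses (one row per color patch).
    Phi : (n, N) matrix of the measured spectra of the same patches.

    First solve Phi ~= D @ H for H in the least-squares sense (Eq. (6)),
    then take the Moore-Penrose pseudo-inverse of H to recover T (Eq. (7)).
    """
    H_hat, *_ = np.linalg.lstsq(D, Phi, rcond=None)  # H_hat has shape (3, N)
    T = np.linalg.pinv(H_hat)                        # T has shape (N, 3)
    return T
```

With the 81 patches of the orthogonal design described in Section 3.1, n = 81 and N is the number of wavelength samples.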

3. Estimation method of spectral sensitivity

Aiming at the estimation of the spectral sensitivity of imaging systems, an estimation method is proposed in this paper. The method combines a calibration procedure based on an orthogonal test with an optimization step based on window filtering. The detailed flow of the method is shown in Fig. 1.

Fig. 1. Schematic of the proposed method.

3.1 Orthogonal test design for calibration experiment

The estimation of the spectral sensitivity function requires input and output data pairs from calibration experiments. Ideally, these data pairs are independent and cover the whole input space. With 8-bit data, the total number of colors in the RGB color space is ${\textrm{2}^\textrm{8}} \times {\textrm{2}^\textrm{8}} \times {\textrm{2}^\textrm{8}} = 16777216$, so an exhaustive test is impractical. An orthogonal test design is used to solve this problem. A 3-factor, 9-level orthogonal test is designed for the input colors in our method; the three factors are the R, G and B channels. The range of every channel, from 0 to 255, is divided into 9 levels, and an orthogonal table ${L_{81}}({{9^{10}}} )$ is adopted for the calibration experiments. Under the orthogonal test design, the color space is covered in a balanced manner. The orthogonal sampling of the RGB digital values and all 81 resulting colors are shown in Fig. 2.
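The specific ${L_{81}}({{9^{10}}} )$ table is not reproduced here. The sketch below builds one valid 81-run, 3-factor, 9-level orthogonal array of strength 2 with a standard cyclic construction and maps its levels onto nine digital values inferred from the patch coordinates reported in Section 4.4; the actual table and run order used in the experiment may differ.

```python
import numpy as np

# Nine digital levels per channel (inferred from the patch values reported later).
LEVELS = np.array([0, 31, 63, 95, 127, 159, 191, 223, 255])

def orthogonal_design_81():
    """81-run, 3-factor, 9-level orthogonal design of strength 2.

    Rows are (i, j, (i + j) mod 9): in any pair of columns every ordered pair
    of levels occurs exactly once, which is the balance property used here.
    """
    i, j = np.meshgrid(np.arange(9), np.arange(9), indexing="ij")
    k = (i + j) % 9
    design = np.column_stack([i.ravel(), j.ravel(), k.ravel()])  # (81, 3) level indices
    return LEVELS[design]                                        # (81, 3) RGB digital values

patches = orthogonal_design_81()
print(patches.shape)  # (81, 3)
```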

Fig. 2. Orthogonal test sampling of input colors. (a) Orthogonal test sampling of RGB digital values. (b) All 81 colors in one picture.

3.2 Optimized estimation by window filtering

In order to improve the accuracy, some constraint conditions are added to optimize the estimation. Following the optimal design of camera spectral sensitivities [31-42], the Luther-Ives condition is set as a prior condition for the solution. Accurate acquisition of the camera RGB values depends on the combination of its image sensor and filters, and the design of the filters is based on the Luther-Ives condition; the degree to which they satisfy it determines the colorimetric measurement accuracy. It is well known that the filters need not be exact duplicates of the color-matching functions (CMFs) but only need to be a nonsingular transformation of them. The inclusion of an illumination spectrum in the light path makes the fabrication of filters more difficult. Since the reproduction is viewed by a human observer, the measurements can be limited to those properties that permit the creation of an image that appears the same as the original [43]. The CIE has tabulated CMFs for this purpose; these functions, together with an illuminant, provide the necessary data. The desired tristimulus values are obtained by [39]:

$${\textbf t} = {\textbf A}_L^T{\textbf r} \approx {\textbf B}{{\textbf M}^T}{\textbf {OD}}{{\textbf L}_0}{\textbf r}$$
where ${\textbf A} = [{{a_j}({{\lambda_i}} )} ]$ is an N×3 matrix of the CIE CMFs sampled at N wavelengths, ${\textbf L}$ is an $N \times N$ diagonal matrix representing the spectrum of the illuminant under which the sample is viewed (called the viewing illuminant), ${{\textbf A}_L} = {\textbf {LA}}$ combines the CMFs and the viewing illuminant, ${\textbf r}$ is a reflectance spectrum. ${\textbf t}$ is a $3 \times 1$ vector of the tristimulus values; ${\textbf M}$ is the filter set, ${{\textbf L}_0}$ is the diagonal matrix whose elements define the instrument illumination, ${\textbf D}$ is the diagonal matrix whose elements define the detector sensitivity, ${\textbf O}$ is the diagonal matrix whose elements represent the transmission of the optical path and ${\textbf B}$ is the $3 \times 3$ transformation to obtain the estimate of the CIE tristimulus values under illuminant ${\textbf L}$.

For these reasons, the widths of the spatial- and frequency-domain windows are selected according to the three spectral stimulus values (CMFs) of the W. S. Stiles 10° field of view. The frequency spectra of these CMFs and the corresponding cut-off frequencies of the filter design are shown in Fig. 3. The cut-off frequency is determined from the maximum of the magnitude spectrum of the Stiles 10° CMFs: in our design it is the frequency at which the spectrum falls to a quarter of its maximum value.
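One reading of this quarter-maximum rule is to choose, per channel, the lowest frequency at which the CMF magnitude spectrum falls to one quarter of its peak. The hedged NumPy sketch below follows that reading; the CMF array and its loading are assumed, and the exact rule used by the authors may differ in detail.

```python
import numpy as np

def cutoff_frequencies(cmfs, quarter=0.25):
    """Normalized cut-off frequency per channel from the CMF magnitude spectra.

    cmfs : (N, 3) array, one Stiles 10-degree CMF per column, sampled at 1 nm
           (so the Nyquist frequency is normalized to 0.5).
    Returns, for each channel, the lowest frequency at which the magnitude
    spectrum has fallen to `quarter` of its maximum value.
    """
    N = cmfs.shape[0]
    freqs = np.fft.rfftfreq(N, d=1.0)           # 0 .. 0.5 cycles per sample
    cutoffs = []
    for ch in range(3):
        mag = np.abs(np.fft.rfft(cmfs[:, ch]))
        threshold = quarter * mag.max()
        idx = np.argmax(mag < threshold)        # first bin below the threshold
        cutoffs.append(freqs[idx])
    return np.array(cutoffs)
```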

Fig. 3. Frequency spectrum of CMFs of the W. S. Stiles 10° and its corresponding cutoff frequency of filter design. (a) Red channel. (b) Green channel. (c) Blue channel.

According to the designed cut-off frequencies, a Hanning window filter is applied in the frequency domain [44]. With the Nyquist frequency normalized to 0.5, the normalized cut-off frequencies of the frequency-domain window are set to [0.0119, 0.0093, 0.0146] for the R, G and B channels, respectively. A rectangular window filtering is then performed in the spatial domain. The detailed settings of these windows are shown in Fig. 4.
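A sketch of the two windowing steps follows, under the assumption that the Hanning window acts as a low-pass taper on the Fourier transform of the rough estimate (unity at DC, zero at the cut-off) and that the spatial window is a simple pass band in wavelength; the taper shape and the names are illustrative rather than the authors' exact implementation.

```python
import numpy as np

def hann_lowpass(signal, cutoff):
    """Low-pass one channel of the rough sensitivity estimate with a Hanning taper.

    signal : 1-D rough sensitivity (after prolongation), sampled at 1 nm.
    cutoff : normalized cut-off frequency, with the Nyquist frequency at 0.5.
    """
    N = len(signal)
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(N, d=1.0)                # 0 .. 0.5
    # Half Hann window: 1 at DC, smoothly falling to 0 at the cut-off frequency.
    taper = np.where(freqs <= cutoff,
                     0.5 * (1.0 + np.cos(np.pi * freqs / cutoff)),
                     0.0)
    return np.fft.irfft(spectrum * taper, n=N)

def spatial_window(signal, wavelengths, lo, hi):
    """Rectangular window in the wavelength (spatial) domain."""
    return np.where((wavelengths >= lo) & (wavelengths <= hi), signal, 0.0)
```

For the raw TIFF data, the per-channel cut-offs [0.0119, 0.0093, 0.0146] and the spatial windows reported in Section 4.2.1 would be passed in channel by channel.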

Fig. 4. Settings of the frequency domain and spatial windows.

4. Experimental verification

4.1 Experimental setup

The schematic of the experimental setup is shown in Fig. 5. The experiment was performed in a dark room. An sRGB-standard LCD display is used as the light source to show the designed digital color chips; a spectral irradiance colorimeter and a trichromatic camera are also needed. A BOE BOD065D (15.6-inch, 344.232 mm × 193.536 mm) LCD display is adopted to output the digital color patches. A SPIC-200 spectral irradiance colorimeter is adopted as the spectrum measuring device. A Canon EOS 600D color camera is used with manual settings: F/3.5, ISO 100, and exposure time t = 0.02 s. The distance between the LCD display and the spectral irradiance colorimeter is 185.19 mm, and the camera is 1 m away from the display. The main optical axes of the camera, the spectral irradiance colorimeter and the LCD display are aligned on a straight line. During the experiment, the radiance of the LCD display is collected by the camera and the spectral irradiance colorimeter. To minimize errors, only one color patch is imaged in each shot, and it fills the monitor. The sampling interval is 1 nm. Finally, both raw TIFF-format data with 16-bit linearity and JPEG-format data are collected, which means that two types of sensitivity are derived from their corresponding data.

Fig. 5. Schematic of the experimental setup.

4.2 Spectral sensitivity estimation

As mentioned above, the 81 color patches were measured by the spectral irradiance colorimeter. According to the positional relation, the irradiance on the image plane of the camera is computed from the measurement of the spectral irradiance colorimeter. There are also 81 groups of data for the three-channel response values of the camera. The camera image-plane irradiance and the corresponding output values constitute the input-output data pairs. The sensitivity varies depending on the type of data collected, so the spectral pass band also depends on the original data. To make the results clearer, the settings and processing are summarized as follows: 1) to avoid spectrum leakage and aliasing, a prolongation is performed in the spatial domain; the total number of samples is 8001 after prolongation for each channel; 2) the frequency window is set as in Fig. 4; 3) combining the results of the rough estimation and the CMFs, the spatial window is set; 4) smoothness is enforced by a cubic smoothing spline; 5) finally, a non-negativity restriction is applied.
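Steps 4) and 5) can be sketched with SciPy's UnivariateSpline as one possible cubic smoothing-spline implementation; the smoothing factor s below is an illustrative assumption, not a value from the paper.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def smooth_and_clip(wavelengths, sensitivity, s=1e-4):
    """Cubic smoothing spline followed by the non-negativity restriction.

    wavelengths : 1-D array of wavelength samples (nm), strictly increasing.
    sensitivity : 1-D windowed sensitivity estimate of one channel.
    s           : spline smoothing factor (illustrative value).
    """
    spline = UnivariateSpline(wavelengths, sensitivity, k=3, s=s)
    smoothed = spline(wavelengths)
    return np.clip(smoothed, 0.0, None)   # enforce a non-negative sensitivity
```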

4.2.1 Spectral sensitivity under raw TIFF format

The rough estimate of the spectral sensitivity curves under raw TIFF format with 16-bit linearity is shown in Fig. 6(a). The optimized estimate of the spectral irradiance sensitivity is shown in Fig. 6(b).

Fig. 6. Spectral sensitivity estimation results under raw TIFF format with 16-bit linearity. (a) Results of rough estimation. (b) Results of optimized estimation.

Combining the character of the rough estimation with the CMFs, the spatial window is set as shown in Fig. 7. The spatial window is first set from the CMFs; the three spatial windows are [400 nm, 700 nm], [400 nm, 645 nm] and [395 nm, 526 nm], respectively. For the green channel, the data between 420 nm and 500 nm fluctuate; the marginal fluctuation section is therefore cut off. This fluctuation section differs from the others: there are two fluctuations, and they may come from the sampling effect.

Fig. 7. The spatial window setting combined the rough estimation data and CMFs under raw TIFF format with 16-bit linearity. (a) R channel. (b) G channel. (c) B channel.

Some detailed data from the processing under raw TIFF format with 16-bit linearity are shown in Fig. 8. As seen there, the frequency window and spatial window play an important role in the estimation.

Fig. 8. The experimental data in process under raw format with 16-bit linearity. (a) After adding frequency window of R channel. (b) After adding space window of R channel. (c) After cubic smoothing spline of R channel. (d) After adding frequency window of G channel. (e) After adding space window of G channel. (f) After cubic smoothing spline of G channel. (g) After adding frequency window of B channel. (h) After adding space window of B channel. (i) After cubic smoothing spline of B channel.

4.2.2 Spectral sensitivity under JPEG format

The JPEG RGB data are nonlinear. A uniform flat field is used to calibrate the nonlinearity of our camera; the calibrated nonlinearity coefficients of the experimental camera are shown in Fig. 9. Here, the nonlinearity coefficient is the ratio of two outputs for the same gathered energy: one is the output of the linearized system, the other is the output of the nonlinear system we adopted.
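As one hedged reading of this calibration, the coefficient can be stored as a lookup over digital values and applied by interpolation. The sketch below is an assumption about the mechanics, not the authors' exact procedure; the calibration arrays are hypothetical inputs.

```python
import numpy as np

def linearize_jpeg(jpeg_dn, cal_dn, cal_coeff):
    """Linearize JPEG responses with a calibrated nonlinearity coefficient.

    jpeg_dn   : array of JPEG digital values for one channel.
    cal_dn    : digital values at which the coefficient was calibrated
                (flat-field measurements, assumed sorted in increasing order).
    cal_coeff : ratio of the linearized output to the nonlinear output
                at each calibration point.
    """
    coeff = np.interp(jpeg_dn, cal_dn, cal_coeff)   # interpolate between points
    return jpeg_dn * coeff                          # approximately linear response
```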

Fig. 9. Calibrated nonlinearity coefficient. (a) R channel. (b) G channel. (c) B channel.

Because nonsingular linear transformations only change the form of the mathematical model of the system, neither the spatial window nor the frequency window is affected by them. Combining the character of the rough estimation with the CMFs, the spatial window is set as shown in Fig. 10.

Fig. 10. The spatial window setting combined the rough estimation data and CMFs under JPEG format with linearity calibration. (a) R channel. (b) G channel. (c) B channel.

As seen here, the spatial windows have changed compared with those of the raw-format data, because some transformations are performed on the raw-format data. Some detailed data from the processing are shown in Fig. 11.

Fig. 11. The experimental data in process under JPEG format with linearity calibration. (a) After adding frequency window of R channel. (b) After adding space window of R channel. (c) After cubic smoothing spline of R channel. (d) After adding frequency window of G channel. (e) After adding space window of G channel. (f) After cubic smoothing spline of G channel. (g) After adding frequency window of B channel. (h) After adding space window of B channel. (i) After cubic smoothing spline of B channel.

The rough estimate of the spectral sensitivity under JPEG format with linearity calibration is shown in Fig. 12(a), and the optimized estimate of the spectral irradiance sensitivity in Fig. 12(b). As seen here, the spectral sensitivity under the linearized output RGB differs from that of the raw-format TIFF data with 16-bit linearity.

Fig. 12. Spectral sensitivity estimation results under JPEG format with linearity calibration. (a) Results of rough estimation. (b) Results of optimized estimation.

4.3 Verification of spectral sensitivity estimation

In order to verify the proposed method, another experiment is performed. A 71SW151 monochromator (OFN) was used in the experiment, with a 71LX150A spherical xenon lamp (OFN) as the light source. The monochromatic light is output at 10 nm intervals over the range 380-780 nm. A PR-715 spectrophotometer (Photo Research) was used to measure the radiance of the monochromatic light, and the sensitivity of the camera is measured with these instruments. The spot of the monochromator images is shown in Fig. 13.

Fig. 13. Monochromatic light image.

The accuracy of the estimated spectral sensitivity is evaluated by the relative full-scale error between two experimental results. It is described as:

$$RE = \frac{{MSE}}{{\max ({{S_0}} )}} \times {100\%}, $$
where ${S_0}$ is the spectral sensitivity measured with the monochromator; $\max ({{S_0}} )$ represents the maximum measured sensitivity value of the channel; and $MSE$ is the root-mean-square error between the two results, which can be calculated by the following equation:
$$MSE = \sqrt {\frac{1}{n}\sum\limits_{i = 1}^n {{{[{S({{\lambda_i}} )- {S_0}({{\lambda_i}} )} ]}^2}} } , $$
where $i$ is the index of the monochromator measurement, ${\lambda _i}$ is the corresponding wavelength, and $S({{\lambda_i}} )$ is the spectral sensitivity value estimated by the proposed method.
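Equations (9) and (10) translate directly into code. The sketch below assumes both sensitivities are sampled at the monochromator wavelengths; the names are illustrative.

```python
import numpy as np

def relative_full_scale_error(S_est, S_meas):
    """Relative full-scale error of one channel, Eqs. (9)-(10).

    S_est  : sensitivity estimated by the proposed method, sampled at the
             monochromator wavelengths.
    S_meas : sensitivity measured with the monochromator at the same wavelengths.
    """
    rmse = np.sqrt(np.mean((S_est - S_meas) ** 2))   # the 'MSE' term of Eq. (10)
    return 100.0 * rmse / np.max(S_meas)             # percent of full scale, Eq. (9)
```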

The comparison between the measured spectral irradiance sensitivity curves and the values estimated by our method under raw format with 16-bit linearity is shown in Fig. 14(a); the corresponding comparison under JPEG format with linearity calibration is shown in Fig. 14(b).

Fig. 14. Comparison of two experimental results. (a) Raw TIFF format with 16-bit linearity. (b) JPEG format with linearity calibration.

Some popular methods can be used for comparison with ours, namely the Wiener estimation [19] and the Fourier band-limitedness method [20,21]. The comparison data are shown in Fig. 15. Here, the coefficient of correlation is set to 0.9 in the Wiener method, and 5 Fourier basis functions are used in the Fourier band-limitedness method.

Fig. 15. The estimation of other methods. (a) Wiener filtering method under raw format with 16-bit linearity. (b) Fourier band-limitedness method under raw format with 16-bit linearity. (c) Wiener filtering method under JPEG format with linearity calibration. (d) Fourier band-limitedness method under JPEG format with linearity calibration.

As seen here, the proposed method is closer to the data from the monochromator experiments. The relative full-scale errors of the three channels for the different methods are shown in Table 1.

Table 1. The relative full-scale errors of the three channels

According to the experimental data, the relative full-scale errors of the three channels are 9.44%, 8.26% and 7.94% for the raw TIFF format with 16-bit linearity; the average relative full-scale error of the three channels of our method is lower than that of the Wiener filtering method by 7.03% and lower than that of the Fourier band-limitedness method by 8.68%. For the JPEG format with linearity calibration, the relative full-scale errors of the three channels are 14.75%, 8.04% and 7.69%; here the average relative full-scale error of our method is lower than that of the Wiener filtering method by 11.98% and lower than that of the Fourier band-limitedness method by 15.78%.

4.4 Selection of color patches

Generating 81 color chips on a monitor is a tedious task and takes a long time, which is a drawback of the presented method. According to Ref. [30], radiative monitor samples are the worst case, because every color generated by a monitor is in fact a linear combination of the spectral emittances of its R, G and B channels. This means that the matrix of spectral emittances of the samples has a high condition number and is severely rank deficient; it also means that only three color patches are, in principle, enough. We therefore need to make a selection for our method, and a Monte Carlo approach is used to do so. We build a uniform sampling model over the 81 color patches and their spectral radiance data: 3 color patches and their spectral radiance data are selected at random. The results of 50,000 trials under raw format with 16-bit linearity are shown in Fig. 16.
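A sketch of this Monte Carlo search follows, assuming each random 3-patch subset is rough-estimated with the same least-squares model of Section 2 and compared with the 81-patch rough estimate via the mean absolute error; the function and parameter names are illustrative.

```python
import numpy as np

def monte_carlo_patch_selection(D, Phi, T_ref, trials=50_000, k=3, seed=0):
    """Randomly try k-patch subsets and rank them against the 81-patch estimate.

    D     : (81, 3) linearized camera responses of the 81 patches.
    Phi   : (81, N) measured spectra of the 81 patches.
    T_ref : (N, 3) rough estimate obtained from all 81 patches.
    Returns the indices of the best subset and its mean absolute error.
    """
    rng = np.random.default_rng(seed)
    best_idx, best_mae = None, np.inf
    for _ in range(trials):
        idx = rng.choice(len(D), size=k, replace=False)
        H, *_ = np.linalg.lstsq(D[idx], Phi[idx], rcond=None)
        T = np.linalg.pinv(H)                     # rough estimate from k patches
        mae = np.mean(np.abs(T - T_ref))          # compare with the 81-patch result
        if mae < best_mae:
            best_idx, best_mae = idx, mae
    return best_idx, best_mae
```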

Fig. 16. Sensitivity rough estimation under raw format with 16-bit linearity using Monte Carlo method. (a) Rough estimations of 50,000 groups data by 3 color patches. (b) Mean absolute error of 50,000 groups data.

At the same time, the results of 50,000 trials under JPEG format with linearity calibration are shown in Fig. 17.

Fig. 17. Sensitivity rough estimation under JPEG format with linearity calibration using Monte Carlo method. (a) Rough estimations of 50,000 groups data by 3 color patches. (b) Mean absolute error of 50,000 groups data.

Figure 16(a) and Fig. 17(a) show all rough-estimation results under the two data formats. As seen there, some estimates are very poor. The mean absolute error between the 3-patch and 81-patch results is also computed, and all 3-patch combinations are sorted by it; the combination with the minimum mean absolute error is the best combination of 3 color patches. The rough estimation results corresponding to the minimum mean absolute error are shown in Fig. 18. The minimum mean absolute error is 2.5980% under raw TIFF format with 16-bit linearity and 8.0466% under JPEG format with linearity calibration. The optimal 3 color patches have RGB values [191, 223, 127], [159, 31, 191] and [0, 223, 223] under raw TIFF format with 16-bit linearity, and [31, 191, 223], [159, 223, 95] and [31, 255, 0] under JPEG format with linearity calibration.

Fig. 18. The rough estimation results corresponding to the minimum mean absolute error. (a) Raw TIFF format data with 16-bit linearity. (b) JPEG format data with linearity calibration.

As seen here, the rough estimation can still be performed with 3 color patches, although the results contain some error. The optimized sensitivity estimation corresponding to the optimal 3 color patches is also performed, and the relative full-scale errors of the three channels for the optimal 3 color patches are shown in Table 2. The selection of 3 color patches produces larger errors than 81 color patches, but the difference is not significant: for raw TIFF format data with 16-bit linearity, the average relative full-scale error of the three channels is greater than that of the 81 color patches by 0.05%, and for JPEG format data it is greater by 1.12%.

Table 2. The relative full-scale errors of the three channels by optimal 3 color patches

There are also other sets of color patches that can be used to estimate the spectral sensitivity with a monitor. According to our experiments, for raw TIFF format data with 16-bit linearity they include {[159, 63, 223], [255, 95, 63], [0, 223, 223]}, {[127, 127, 255], [63, 255, 31], [0, 223, 223]}, {[127, 127, 255], [191, 223, 127], [0, 223, 223]} and so on. For JPEG format data with linearity calibration they include {[159, 223, 95], [127, 127, 255], [31, 255, 0]}, {[159, 255, 127], [191, 191, 95], [31, 255, 0]}, {[31, 255, 0], [191, 63, 255], [31, 255, 0]} and so on. Although these color patches produce larger errors than the optimal selection, they can still be used to estimate the spectral sensitivity with our method.

5. Conclusions

This paper proposes an optimized method for estimating the spectral sensitivity of a trichromatic digital camera. In this method, an LCD display is used as the light source, a 9-level, 3-factor orthogonal test is used to determine the input sample points, and a spectral irradiance colorimeter is used to measure the input radiation. A rough estimation model of the spectral sensitivity is established from the calibration data. The Luther-Ives condition together with smoothness and non-negativity restrictions is integrated to constrain the solution, and the spectral sensitivity of the trichromatic imaging system is estimated through these optimization operations. The experimental results show that the spectral sensitivity obtained by this method is basically consistent with the monochromator measurements, and the relative full-scale errors of the three RGB channels are clearly lower than those of the Wiener filtering method and the Fourier band-limitedness method. However, generating the 81 color patches on a monitor is tedious and time-consuming, which is a drawback of the presented method. To reduce the number of color patches, we also provide a selection from the 81 color patches; the optimal 3 color patches are given by a Monte Carlo method. For raw TIFF format data with 16-bit linearity, the average relative full-scale error of the three channels using the optimal selection is greater than that of the 81 color patches by 0.05%, and for JPEG format data with linearity calibration it is greater by 1.12%. This research is of great significance for the colorimetric characterization and evaluation of imaging systems.

Funding

Natural Science Foundation of Liaoning Province (2019-ZD-0292); National Natural Science Foundation of China (61975012).

Acknowledgements

The authors would like to thank Mr. Rolin for polishing the English text of this paper.

Disclosures

The authors declare no conflicts of interest.

References

1. W. Ji and P. A. Rhodes, “Spectral color characterization of digital cameras: a review,” Proc. SPIE 8332, 83320A (2012). [CrossRef]  

2. R. Safaee-Rad and M. Aleksic, “Spectral-based calorimetric calibration of a 3CCD color camera for fast and accurate characterization and calibration of LCD displays,” Proc. SPIE 7875, 787504 (2011). [CrossRef]  

3. M. Rump, A. Zinke, and R. Klein, “Practical spectral characterization of trichromatic cameras,” ACM Trans. Graph. 30(6), 1–10 (2011). [CrossRef]  

4. J. Guild, “The Colorimetric Properties of the Spectrum,” Philos. Trans. R. Soc. London 108(759), 576 (1931). [CrossRef]  

5. J. Jiang, D. Liu, J. Gu, and S. Süsstrunk, “What is the space of spectral sensitivity functions for digital color cameras?” in Proceedings of IEEE Conference on Applications of Computer Vision (IEEE, 2013), pp.168–179.

6. H. She, W. Lv, J. Qiu, H. Xu, and X. Zheng, “Spectral sensitivity measurement and evaluation of commercial digital cameras,” Opt Instr. 39(5), 15–21 (2017).

7. M. M. Darrodi, G. Finlayson, T. Goodman, and M. Mackiewicz, “Reference data set for camera spectral sensitivity estimation,” J. Opt. Soc. Am. A 32(3), 381–391 (2015). [CrossRef]  

8. EMVA Standard 1288: EMVA1288-3.1a (2016), pp. 4–30

9. W. Günter, “Multifilter method for determining relative spectral sensitivity functions of photoelectric detectors,” J. Opt. Soc. Am. A 50(10), 992–998 (1960). [CrossRef]  

10. J. Klein, J. Brauers, and T. Aach, “Methods for spectral characterization of multispectral cameras,” Proc. SPIE 7876, 78760B (2011). [CrossRef]

11. G. Chang and Y. Chen, “Spectral estimation of color CCD cameras,” Proc. SPIE 3422, 81–91 (1998). [CrossRef]  

12. D. L. Bongiorno, M. Bryson, D. G. Dansereau, and S. B. Williams, “Spectral Characterization of COTS RGB Cameras Using a Linear Variable Edge Filter,” Proc. SPIE 8660, 86600N (2013). [CrossRef]  

13. O. Haderka Jr., J. Perřna, V. Michálek, and M. Hamar, “Absolute spectral calibration of an intensified CCD camera using twin beams,” J. Opt. Soc. Am. B 31(10), B1–B7 (2014). [CrossRef]  

14. G. Chang and Y. Chen, “Automatic spectral measurement system for color video cameras,” IEEE Trans. Consum. Electron. 45(1), 225–235 (1999). [CrossRef]  

15. E. Berra, S. Gibson-Poole, A. MacArthur, R. Gaulton, and A. Hamilton, “Estimation of the spectral sensitivity functions of un-modified and modified commercial off-shelf digital cameras to enable their use as a multispectral imaging system for UAVS,” in Proceedings of International Conference on Unmanned Aerial Vehicles in Geomatics (International Society for Photogrammetry and Remote Sensing, 2015), pp. 207–214.

16. P. Zou, H. Wu, Q. Xu, P. Xie, J. Li, L. Zhou, X. Li, and X. Zheng, “Absolute spectral radiance responsivity calibration of the radiance transfer standard detector,” Proc. SPIE 6621, 66211B (2008). [CrossRef]  

17. Z. Sadeghipoor, Y. M. Lu, and S. Süsstrunk, “Optimum Spectral Sensitivity Functions for Single Sensor Color Imaging,” Proc. SPIE 8299, 829904 (2012). [CrossRef]  

18. H. Zhao, R. Kawakami, R. T. Tan, and K. Ikeuchi, “Estimating Basis Functions for Spectral Sensitivity of Digital Cameras,” in Proceedings of Image Recognition and Understanding, (MIRU, 2009) pp. 7–13.

19. W. K. Pratt and C. E. Mancill, “Spectral estimation techniques for the spectral calibration of a color image scanner,” Appl. Opt. 15(1), 73–75 (1976). [CrossRef]  

20. G. Finlayson, S. Hordley, and P. Hubel, “Recovering device sensitivities with quadratic programming,” in Proceedings of Sixth Color Imaging Conference on Color Science, Systems, and Applications (Society for Imaging Science and Technology, 1998), pp. 90–95.

21. M. Kang, U. Yang, and K. Sohn, “Spectral sensitivity estimation for EMCCD camera,” Electron. Lett. 47(25), 1369–1370 (2011). [CrossRef]  

22. C. Büttner and B. Schlichting, “Spectral sensitivity estimation of digital cameras,” IS&T 21(5), 1–8 (2006).

23. K. Barnard and B. Funt, “Camera characterization for color research,” Color Res. Appl. 27(3), 152–163 (2002). [CrossRef]  

24. M. Ebner, “Estimating the spectral sensitivity of a digital sensor using calibration targets,” in Proceedings of the 9th Annual Genetic and Evolutionary Computation Conference, H. Lipson, ed. (Association for Computing Machinery, 2007), pp. 642–650.

25. J. Qiu and H. Xu, “Investigation of impacting factors on camera calibration for spectral sensitivity estimation,” in Proceedings of Annual Conference on Advanced Computer Architecture, J. Wu, ed. (Springer, 2016), pp. 105–109.

26. J. Y. Hardeberg, H. Bretel, and F. J. M. Schmitt, “Spectral characterization of electronic cameras,” Proc. SPIE 3499, 100–109 (1998). [CrossRef]  

27. P. Urban, M. Desch, K. Happel, and D. Spiehl, “Recovering camera sensitivities using target-based reflectances captured under multiple led-illuminations,” in Proceedings of 16th Workshop on Color Image Processing (CAIP, 2010), pp. 9–16.

28. S. Han, Y. Matsushita, I. Sato, T. Okabe, and Y. Sato, “Camera Spectral Sensitivity Estimation from a Single Image under Unknown Illumination by using Fluorescence,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2012), pp. 805–812.

29. R. Kawakami, H. Zhao, R. T. Tan, and K. Ikeuchi, “Camera Spectral Sensitivity and White Balance Estimation from Sky Images,” Int. J. Comput. Vis. 105(3), 187–204 (2013). [CrossRef]  

30. M. Anderson, R. Motta, S. Chandrasekar, and M. Stokes, “Proposal for a Standard Default Color Space for the Internet-sRGB,” in Proceedings of the 4th IS and T/SID Color Imaging Conference on Color Science, Systems and Applications (Society for Imaging Science and Technology, 1996), pp. 198–205.

31. J. Y. Hardeberg, “Acquisition and reproduction of colour images: colorimetric and multi-spectral approaches,” A dissertation of Ecole Nationale Supérieure des Télécommunications, pp. 28–95 (1999).

32. S. Quan, “Evaluation and optimal design of spectral sensitivities for digital color imaging,” dissertation, Rochester Institute of Technology, pp. 104–128 (2002).

33. S. Quan, N. Ohta, R. S. Berns, and N. Katoh, “Hierarchical approach to the optimal design of camera spectral sensitivities for colorimetric and spectral performance,” Proc. SPIE 5008, 159–170 (2003). [CrossRef]

34. M. Wolshki, C. A. Bouman, and J. P. Allebach, “Optimization of Sensor Response Functions for Colorimetry of Reflective and Emissive Objects,” IEEE Trans. on Image Process. 5(3), 507–517 (1996). [CrossRef]  

35. J. M. Vrhel and H. J. Trussell, “Filter Considerations in Color Correction,” IEEE Trans. on Image Process. 3(2), 147–161 (1994). [CrossRef]  

36. G. Sharma, H. J. Trussell, and J. M. Vrhel, “Optimal Nonnegative Color Scanning Filters,” IEEE Trans. on Image Process. 7(1), 129–133 (1998). [CrossRef]  

37. S. Quan, N. Ohta, M. Rosen, and N. Katoh, “Fabrication Tolerance and Optimal Design of Spectral Sensitivities for Color Imaging Devices,” in Proceedings of the IS&T’s 2001 Image Processing, Image Quality, Image Capture Systems Conference (PICS) Conference, N. Katoh, ed. (Society for Imaging Science and Technology, 2001), pp. 277–282.

38. P. L. Vora and H. J. Trussell, “Measure of goodness of a set of color scanning filters,” J. Opt. Soc. Am. A 10(7), 1499–1508 (1993). [CrossRef]  

39. P. L. Vora and H. J. Trussell, “Design Results for a Set of Thin Film Color Scanning Filters,” Proc. SPIE 2414, 70–75 (1993). [CrossRef]  

40. P. L. Vora and H. J. Trussell, “Mathematical methods for the design of color scanning filters,” IEEE Trans. on Image Process. 6(2), 312–320 (1997). [CrossRef]  

41. A. M. Nahavandi and M. A. Tehran, “A new manufacturable filter design approach for spectral reflectance estimation,” Color Res. Appl. 42(3), 316–326 (2017). [CrossRef]  

42. A. M. Nahavandi and M. A. Tehran, “Metric for evaluation of filter efficiency in spectral cameras,” Appl. Opt. 55(32), 9193–9204 (2016). [CrossRef]  

43. P. L. Vora and H. J. Trussell, “A Mathematical Method for Designing a Set of Colour Scanning,” Proc. SPIE 1912, 322–332 (1993). [CrossRef]  

44. J. G. Proakis and D. G. Manolakis, Digital Signal Processing: Principles, Algorithms, and Applications, 4th ed. (Publishing House of Electronics Industry, 2014), Chap. 10.
