Optica Publishing Group

Compressive recovery of smartphone RGB spectral sensitivity functions

Open Access

Abstract

Spectral response (or sensitivity) functions of a three-color image sensor (or trichromatic camera) allow a mapping from spectral stimuli to RGB color values. Like biological photosensors, digital RGB spectral responses are device dependent and vary significantly from model to model. Thus, the information on the RGB spectral response functions of a specific device is vital in a variety of computer vision as well as mobile health (mHealth) applications. Theoretically, spectral response functions can directly be measured with sophisticated calibration equipment in a specialized laboratory setting, which is not easily accessible for most application developers. As a result, several mathematical methods have been proposed relying on standard color references. Typical optimization frameworks with constraints are often complicated, requiring a large number of colors. We report a compressive sensing framework in the frequency domain for accurately predicting RGB spectral response functions with only several primary colors. Using a scientific camera, we first validate the estimation method with direct spectral sensitivity measurements and ensure that the root mean square errors between the ground truth and recovered RGB spectral response functions are negligible. We further recover the RGB spectral response functions of smartphones and validate with an expanded color checker reference. We expect that this simple yet reliable estimation method of RGB spectral sensitivity can easily be applied to color calibration and standardization in machine vision, hyperspectral filters, and mHealth applications that capitalize on the built-in cameras of smartphones.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Any digital light sensor or biological photosensor has different sensitivities to different wavelength ranges of light. Based on the tristimulus system of human color perception, three-color image sensors or trichromatic cameras have unique spectral response functions (also known as spectral sensitivity) in the red (R), green (G), and blue (B) channels. The RGB spectral response functions serve as a basis for a mapping between the reflection spectrum and color index of an object. In particular, RGB values are device dependent and are not identical among different camera models even if images are taken under the same conditions, because each model of three-color image sensor has distinct RGB spectral response functions [1–12]. In this respect, the information on RGB spectral response functions is of paramount importance for a variety of imaging applications, including color correction, multispectral imaging, hyperspectral reconstruction, and spectral super-resolution [13–32].

The knowledge of a smartphone camera’s spectral response functions becomes more important as numerous mobile health (mHealth) technologies rely on the built-in cameras for point-of-care diagnostics and individual health monitoring, including dermatology, ophthalmology, cardiology, and quantification of bioassays [33–37]. Obviously, device-dependent RGB spectral response functions introduce significant errors, requiring different types of color calibration and standardization. For example, quantification of paper microfluidic assays [38], mHealth heart rate monitoring [39], quantitative urinalysis [40], and mHealth assessments of blood hemoglobin levels [41,42] all need color calibration to achieve consistent responses across different smartphone models. If RGB spectral response functions were readily accessible and could be estimated in a simple setting, color canonicalization across a variety of smartphones could be simplified and standardized. Unfortunately, the information on spectral response functions is rarely shared by smartphone manufacturers (Table 1).


Table 1. Brief summary of the previous studies on estimating RGB spectral response functions.

In general, direct measurement of RGB spectral response functions is not an easy task. RGB spectral response functions can be recovered using directly measured narrowband stimuli over the broad range of visible light. The standard method is to record the spectral response against monochromatic light produced by a monochromator or laser light source. Specifically, the previously developed direct measurement methods use narrowband stimuli covering the entire visible spectral range, including monochromators [43,50–55], LEDs [56], or quantum efficiency measurements [12,57]. Importantly, such direct measurements require a priori information on the spectral profile of the chosen light sources and the spectral response functions of the calibration equipment, all of which are often as challenging to quantify as the spectral response functions of the device of interest, forming a vicious cycle of uncertainty. In other words, the spectral sensitivity of the reference detectors and the spectral profiles of the calibration light sources must be predetermined.

Alternative approaches for estimating RGB spectral response functions are statistical and machine learning methods based on a relationship between the RGB values and reflection spectra of color samples. Typically, commercially available color reference charts and cards are used. In brief, Table 1 compares representative mathematical methods of estimating RGB spectral response functions using color references. Additional summary is also available elsewhere [16,17,48,58,59]. In particular, several optimization frameworks combined with constraints on RGB spectral response functions have been successfully introduced, including quadratic programming [45], regularization [43,46], principal component analysis (or subspace method) [8], convex optimization [44,60], Wiener [43], and rank-based spectral estimation [48]. Some advanced methods, such as rank-based spectral estimation, have also shown that rendered images (e.g., JPEG) can be used for reliably predicting RGB spectral response functions. Recently, machine learning approaches have been applied based on the assumption of Gaussian functions for RGB spectral response functions [13,16].

Except for the machine learning methods, the previous mathematical methods can broadly be recapitulated as l2-norm minimization. In general, the lp-norm of an n-dimensional vector is expressed as [61]:

$$\|{\boldsymbol x}\|_p = {\left( {\mathop \sum \nolimits_{i = 1}^n {{|{{x_i}} |}^p}} \right)^{1/p}}. $$

When p = 2, ||·||2 is the l2-norm. Indeed, l2-norm minimization is formulated as least squares regression. Least squares regression, which supports stable computation and minimizes over-fitting, is a widely used computational scheme for parameter estimation. On the other hand, such methods require a relatively large number of colors and depend on the selection of colors. As a result, direct applications of RGB spectral response functions for color calibration and standardization are rarely available.

An alternative mathematical method is to take advantage of l1-norm minimization in the frequency domain [62]. When p = 1, ||·||1 is the l1-norm, i.e., the sum of the absolute values of the elements. When p = 0, ||·||0 is the l0-norm, i.e., the number of nonzero elements. Theoretically, the l0-norm is the limit of the lp-norm as p → 0. l0-norm minimization problems can be relaxed to the corresponding l1-norm minimization problems [63,64]. As directly related to compressive (or compressed) sensing, l1-norm minimization has received considerable attention [62,65–68]. Specifically, when significant information is contained in a few elements (i.e., sparsity or compressibility), l1-norm minimization has shown reliable and enhanced performance, and recovery of the whole signal is possible with only a few measurements in a variety of imaging applications. Indeed, compressive sensing allows signals to be reconstructed at a rate far below the Nyquist/Shannon sampling rate. Thus, compressive sensing in the frequency domain is an alternative candidate to recover RGB spectral response functions.
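As a quick numeric illustration of these norms (a minimal sketch with a hypothetical vector):

```python
import numpy as np

# Hypothetical 5-element vector with only two nonzero entries (sparse)
x = np.array([3.0, 0.0, -4.0, 0.0, 0.0])

l2 = np.sum(np.abs(x) ** 2) ** 0.5   # Euclidean length: 5.0
l1 = np.sum(np.abs(x))               # sum of absolute values: 7.0
l0 = np.count_nonzero(x)             # number of nonzero elements: 2

print(l2, l1, l0)
```

The l0 count is what sparsity-promoting recovery tries to keep small; the l1 sum is its convex surrogate.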

In this paper, we report a simple compressive sensing framework for estimating RGB spectral response functions of a three-color image sensor (or trichromatic camera) on the basis of a small number of primary colors. To solve an ill-posed problem, we use compressive sampling in the frequency domain to overcome the limitation of low dimensionality in conventional color reference standards, resulting in a reliable recovery of RGB spectral response functions. First, we validate this method with respect to the ground truth RGB spectral response functions of a scientific three-color camera. In this validation, the ground truth of the RGB camera is directly assessed with a corresponding mono camera and sunlight measurements. Second, as numerous mHealth applications rely on the built-in cameras, we test several smartphone cameras and validate the estimated spectral response functions using additional diverse feature colors from an expanded color reference checker. The novelty of this reported method includes the use of only 12 primary colors and the simple, yet robust, inverse calculation, all of which can simply be conducted without sophisticated and expensive calibration equipment.

2. Definition of RGB spectral response functions

The measured RGB intensity values from an object of interest acquired by a three-color camera can be expressed with the RGB spectral response functions ${S_R}(\lambda )$, ${S_G}(\lambda )$, and ${S_B}(\lambda )$ in the R, G, and B channels, respectively:

$${I_M}({R,G,B} )= \smallint L(\lambda )C(\lambda ){S_{R,G,B}}(\lambda )D(\lambda )O(\lambda )\textrm{d}\lambda , $$
where $\lambda $ is the wavelength of light, $L(\lambda )$ is the spectral shape of the illumination light source, $C(\lambda )$ is the spectral response of all optical components in the imaging system, $D(\lambda )$ is the spectral response of the (mono) image sensor, and $O(\lambda )$ is the spectral intensity reflected from the object. To compensate for $L(\lambda )$, $C(\lambda )$, and $D(\lambda )$, the RGB intensity from a white reflectance standard that has a reflectivity of 99% in the visible range (i.e., $O(\lambda )$ = 0.99) is measured:
$${I_{\textrm{reference}}}({R,G,B} )= \smallint L(\lambda )C(\lambda ){S_{R,G,B}}(\lambda )D(\lambda )\textrm{d}\lambda . $$

The RGB values of the object solely determined by ${S_{R,G,B}}(\lambda )$ are calculated by normalizing ${I_M}({R,G,B} )$ by ${I_{\textrm{reference}}}({R,G,B} )$:

$${I_T}({R,G,B} )= \frac{{{I_M}({R,G,B} )}}{{{I_{\textrm{reference}}}({R,G,B} )\; }}. $$

The true RGB values of the object are further obtained by subtracting dark noise from the measurements:

$${I_T}({R,G,B} )= \frac{{{I_M}({R,G,B} )\; - \; {I_{\textrm{dark}}}({R,G,B} )}}{{{I_{\textrm{reference}}}({R,G,B} )\; - \; {I_{\textrm{dark}\; }}({R,G,B} )}}, $$
where ${I_{\textrm{dark}}}$ is the measurement with the light source off in the dark. In other words, the true RGB values of the object are not affected by the illumination light source, the optical components, and the ambient stray light. The true RGB values are determined only by the RGB spectral response functions (i.e., ${S_{R,G,B}}(\lambda )$) of the corresponding image sensor.
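The dark-corrected normalization above can be sketched in a few lines; the numeric readings below are hypothetical:

```python
import numpy as np

def true_rgb(I_M, I_ref, I_dark):
    """Dark-corrected, reference-normalized RGB values of an object."""
    I_M, I_ref, I_dark = (np.asarray(a, dtype=float) for a in (I_M, I_ref, I_dark))
    return (I_M - I_dark) / (I_ref - I_dark)

# Hypothetical raw readings for one color patch (R, G, B):
# object measurement, white reflectance standard, and dark frame
print(true_rgb([120, 200, 80], [240, 250, 230], [10, 10, 10]))
```

The illumination, optics, and sensor terms cancel in this ratio, leaving values governed only by the RGB spectral response functions.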

The relationship between the true RGB values and spectral intensity of n multiple objects (n different spectra) is described when $\lambda $ is discretized in the visible range ($\lambda = [{{\lambda_1},{\lambda_2}, \ldots ,{\lambda_m}} ]$):

$${{\boldsymbol Y}_{n \times 3}} = {{\boldsymbol F}_{n \times m}}{{\boldsymbol S}_{m \times 3}}, $$
where ${\boldsymbol Y}$ corresponds to the true RGB values, ${\boldsymbol S}$ is an m × 3 matrix of the RGB spectral response functions, and ${\boldsymbol F}$ is an n × m matrix that consists of the spectral intensity of n different objects (i.e., different color patches in the color checker). Equation (6) can be expressed explicitly:
$$\begin{bmatrix} {I_1}(R) & {I_1}(G) & {I_1}(B)\\ {I_2}(R) & {I_2}(G) & {I_2}(B)\\ \vdots & \vdots & \vdots \\ {I_n}(R) & {I_n}(G) & {I_n}(B) \end{bmatrix} = \begin{bmatrix} {I_1}({\lambda _1}) & {I_1}({\lambda _2}) & \cdots & {I_1}({\lambda _m})\\ {I_2}({\lambda _1}) & {I_2}({\lambda _2}) & \cdots & {I_2}({\lambda _m})\\ \vdots & \vdots & \ddots & \vdots \\ {I_n}({\lambda _1}) & {I_n}({\lambda _2}) & \cdots & {I_n}({\lambda _m}) \end{bmatrix} \begin{bmatrix} {S_R}({\lambda _1}) & {S_G}({\lambda _1}) & {S_B}({\lambda _1})\\ {S_R}({\lambda _2}) & {S_G}({\lambda _2}) & {S_B}({\lambda _2})\\ \vdots & \vdots & \vdots \\ {S_R}({\lambda _m}) & {S_G}({\lambda _m}) & {S_B}({\lambda _m}) \end{bmatrix}.$$

Theoretically, some direct laboratory measurement methods can be understood in this framework: if a set of narrowband light sources (e.g., laser light sources) at all discretized $\lambda $ is used, ${\boldsymbol F}$ becomes the identity matrix with n = m, directly returning ${\boldsymbol S}$.
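As a sanity check of the discretized model above, the forward mapping from spectra to RGB values is a single matrix product (all values below are synthetic placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
m = 681                      # 380-720 nm discretized at 0.5 nm spacing
n = 12                       # number of color patches
F = rng.random((n, m))       # reflection spectra of the patches (synthetic here)
S = rng.random((m, 3))       # RGB spectral response functions (unknown in practice)
Y = F @ S                    # true RGB values of the n patches, one row per patch

print(Y.shape)               # (12, 3)
```

With n much smaller than m, recovering S from Y and F is the underdetermined inverse problem the next section addresses.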

3. Spectral compressive sensing framework

We propose that a compressive sensing framework can be ideal for solving ${\boldsymbol S}$. As shown in all of the previous studies [8,13,16,43–48], the recovery of the RGB spectral response functions is not straightforward, given that this inverse calculation is an underdetermined problem. Simply, solving ${\boldsymbol S}$ can be considered an ill-posed problem. In a compressive sensing framework, ${\boldsymbol S}$ is an unknown matrix to solve, ${\boldsymbol F}$ is a measurement matrix, and ${\boldsymbol Y}$ is an observation matrix. Compressive sensing solves this underdetermined problem by representing ${\boldsymbol S}$ in a certain orthonormal basis Ψ that transforms ${\boldsymbol S}$ into sparser signals s:

$${\boldsymbol Y} = {\boldsymbol{F\varPsi s}} = {\boldsymbol{\varPhi s}}. $$

One of the simplest forms for ${\boldsymbol{\varPsi} }$ to enhance sparsity is a linear combination of Gaussian basis functions; in our case, the FWHM of each Gaussian basis function is 50 nm [69].
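A minimal sketch of constructing such a Gaussian basis; the 10 nm center spacing below is an assumption for illustration (only the 50 nm FWHM is specified above):

```python
import numpy as np

def gaussian_basis(wavelengths, centers, fwhm=50.0):
    """Matrix whose columns are Gaussian basis functions with a given FWHM (nm)."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # FWHM -> standard deviation
    lam = np.asarray(wavelengths)[:, None]
    mu = np.asarray(centers)[None, :]
    return np.exp(-0.5 * ((lam - mu) / sigma) ** 2)

lam = np.arange(380.0, 720.5, 0.5)          # 0.5 nm grid used in the paper
Psi = gaussian_basis(lam, np.arange(400.0, 701.0, 10.0))
print(Psi.shape)                            # (681, 31)
```

A smooth response function then needs only a few nonzero weights in this basis, which is exactly the sparsity the l1 recovery exploits.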

Importantly, the performance of compressive sensing recovery is determined by the properties of the sensing matrix ${\boldsymbol{\varPhi} }$, which requires an incoherent (or uncorrelated) condition (also known as the uncertainty principle in compressive sensing) [62,65–68].

We recover the RGB spectral response functions ${\boldsymbol S}$ by using l1 minimization that finds a minimum l1-norm solution of the underdetermined linear system (Eq. (8)):

$$\textrm{minimize}\; \|{\boldsymbol s}\|_1$$
$$\textrm{subject}\; \textrm{to}\; \|{\boldsymbol Y} - {\boldsymbol{\varPhi}}{\boldsymbol s}\|_2 \le \varepsilon , $$
where ɛ is the noise level in the observations. Because l0-norm minimization is non-convex, solving it directly is computationally intractable (known to be NP-hard). Thus, an l0-norm minimization problem is relaxed to the corresponding l1-norm minimization problem. In this case, the majority of unnecessary components (weights) are forced to zero, yielding only a few non-zero components and avoiding overfitting. l1-norm minimization is fundamentally different from l2-norm minimization, in which all of the data points are used to minimize the sum of squared residuals. For practicality, we solve Eq. (8) as an unconstrained basis pursuit denoising problem by employing a weighting value γ:
$$\textrm{minimize}\; \frac{1}{2}\|{\boldsymbol Y} - {\boldsymbol{\varPhi}}{\boldsymbol s}\|_2^2 + \gamma \|{\boldsymbol s}\|_1, $$
which is also known as l1 regularization or Lasso regression. It should also be noted that this regularization is different from conventional regularization with l2-norm minimization (i.e., the least squares method). Specifically, among other commonly available l1-norm solvers, we use CVX, an open-source MATLAB package for convex optimization [70].
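The paper solves this Lasso problem with CVX in MATLAB; as a solver-agnostic sketch, the same single-channel problem can be minimized with plain iterative soft thresholding (ISTA). The problem sizes, random seed, and γ below are hypothetical:

```python
import numpy as np

def ista(Phi, y, gamma, n_iter=2000):
    """Minimize 0.5*||Phi s - y||_2^2 + gamma*||s||_1 for one channel by
    iterative soft thresholding (a stand-in for the CVX solver in the paper)."""
    L = np.linalg.norm(Phi, 2) ** 2              # Lipschitz constant of the gradient
    s = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        g = s - Phi.T @ (Phi @ s - y) / L        # gradient step on the data term
        s = np.sign(g) * np.maximum(np.abs(g) - gamma / L, 0.0)  # soft threshold
    return s

# Synthetic sparse-recovery check: 12 "color readings", 40 basis weights
rng = np.random.default_rng(1)
Phi = rng.standard_normal((12, 40))
s_true = np.zeros(40)
s_true[[3, 17, 30]] = 1.0                        # only 3 active basis functions
y = Phi @ s_true
s_hat = ista(Phi, y, gamma=0.01)
```

Despite having far fewer measurements than unknowns, the l1 penalty drives most weights to exactly zero, which is the behavior the compressive framework relies on.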

Owing to the characteristics of compressive sensing, we focus on using a small number of colors for constructing the sensing matrix ${\boldsymbol{\varPhi} }$ that can satisfy the incoherence condition. Specifically, we take advantage of a commercially available color reference standard, ColorChecker Classic Nano (or Macbeth ColorChecker), to estimate RGB spectral response functions (Fig. 1(a)). This color reference standard consists of 24 patches of colors that have chromatic importance for general photography and cinematography. We exclude featureless or monotonous colors, such as white, gray, and black, because these achromatic colors have flattened spectral profiles. As a result, 18 signature color patches of distinguishable colors and spectra serve as color bases for the proposed method (Figs. 1(a) and 1(b)). We further reduce the number of color patches while still achieving reliable performance. It should be noted that ColorChecker Classic Nano is small (22 mm × 36 mm), so that spatially uniform illumination and single-shot imaging can easily be achieved.


Fig. 1. Small number of primary colors for compressive sensing-based estimation of RGB spectral response functions. (a) 18 colors from ColorChecker Classic Nano (X-Rite) are used to estimate RGB spectral response (or sensitivity) functions. (b) The corresponding reflection spectra of the 18 colors. The small size of 22 mm × 36 mm is also useful for ensuring spatially uniform illumination and single-shot imaging. The final number of primary colors is further reduced to 12 (#1 – #12) while retaining reliable recovery of RGB spectral response functions.


4. Evaluation with ground truth RGB spectral response functions

We first evaluate the proposed method of estimating RGB spectral response functions against the ground truth values of a scientific machine vision camera. For this validation purpose, we use a scientific machine vision RGB camera (GS3-U3-120S6C-C, Point Grey; Fig. 2(a)) installed with an image sensor (Sony ICX625). We directly assess the RGB spectral response functions of this trichromatic camera; the ground truth of the RGB spectral response functions can be obtained with a mono version (GS3-U3-120S6M-C, Point Grey; Fig. 2(a)) installed with the same image sensor. The only difference in the mono camera is the absence of the color filter array on the image sensor. In an alternating manner, each camera is mounted to the same imaging spectrograph with a diffraction grating (groove density of 150 g/mm). In other words, each camera serves as a detector for the imaging spectrograph. As an ideal broadband light source, sunlight is incident on the spectrograph. During the measurement, the azimuth and elevation of the sun are 184.26° and 27.1° at the location coordinates of 40.42242°, −86.92039° (Fig. 2(b)). Sunlight contains abundant spectral features of natural elements that can also be used for the wavelength calibration of the imaging spectrograph (Fig. 2(c)). In this case, the RGB spectral response functions ${S_R}(\lambda )$, ${S_G}(\lambda )$, and ${S_B}(\lambda )$ in the R, G, B channels can be calculated:

$${S_{R,\; G,B}}(\lambda )= \; \frac{{{{I}_{R,G,B}}(\lambda )}}{{{{I}_{\textrm{mono}}}(\lambda )}} = \frac{{L(\lambda )\cdot C(\lambda ) \cdot {S_{R,G,B}}(\lambda ) \cdot D(\lambda )}}{{L(\lambda ) \cdot C(\lambda ) \cdot D(\lambda )}},$$
where ${{I}_{R,G,B}}(\mathrm{\lambda } )$ is the measured spectral intensity of the $R,G$, and B channels of the RGB camera, and ${{I}_{\textrm{mono}}}(\mathrm{\lambda } )$ is the measured spectral intensity with the mono camera. As ${{I}_{R,G,B}}(\mathrm{\lambda } )$ and ${{I}_{\textrm{mono}}}(\mathrm{\lambda } )$ can be measured sequentially with a time interval of 60 seconds under the clear sky, changes in atmospheric transmission can be ignored. Figure 2(e) shows the measured RGB spectral response functions of the scientific machine vision RGB camera. It should be noted that this method of directly assessing the RGB spectral response functions is not generally possible for common smartphone cameras because a mono camera without a color filter array is not available.


Fig. 2. Ground truth RGB spectral response functions obtained with direct measurements for validation. (a) The RGB camera (GS3-U3-120S6C-C, Point Grey) and the mono camera (GS3-U3-120S6M-C, Point Grey) installed with the identical image sensor (Sony ICX625) are used for initial testing and validation. (b) Sunlight is used as an illumination light source. A photograph of clear blue sky taken during the direct measurement; the azimuth and elevation of the sun are 184.26° and 27.1°, respectively. (c) The spectra of sunlight through an imaging spectrograph mounted with the mono camera and the RGB camera. The Fraunhofer lines featured by natural elements in the atmosphere are also used for spectral calibration. The red (R), green (G), and blue (B) channels are split from the RGB camera, respectively. (d) The spectra of three different light sources. Sunlight and xenon-arc light sources are ideal for providing a broad spectral range in the visible light (400–700 nm). (e) The measured RGB spectral response functions of the RGB camera to serve as the ground truth for validation.


Using the compressive sensing method, we then estimate the RGB spectral response functions of the trichromatic camera. In a laboratory setting, we use a Xenon-arc lamp that has a broad spectrum in the full visible range, similar to sunlight (Fig. 2(d)). The illumination from the Xenon-arc lamp is collimated and is incident onto the color checker (Fig. 1(a)) via an obliquely positioned mirror. The light reflected from the color checker is imaged with the RGB camera via a lens (focal length of 25 mm and F-number of F/1.4, Navitar). A white reflectance standard (AS-01160-060, Labsphere) that has a reflectivity of 99% is used to obtain ${I_{\textrm{reference}}}({R,G,B} )$ as described in Eq. (3). Conventional white LED light does not cover the entire visible range due to the low intensity at short (400–450 nm) and long (650–700 nm) wavelength ranges (Fig. 2(d)). The recovered RGB spectral response functions are discretized from 380 nm to 720 nm with a spectral interval of 0.5 nm.

We further explore whether the number of colors can be reduced. Figure 3(a) shows the root mean square error (RMSE) between the ground truth and recovered RGB spectral response functions as a function of the number of colors selected among the 18 color patches. When a small number of colors is selected, the number of possible color combinations is extremely large. For example, when 9 colors are chosen among 18 colors, the number of combinations is $_{18}C_9$ = 48,620. In such cases, we randomly select 100 different possible combinations and obtain an averaged RMSE. As expected, the RMSE decreases with the number of colors selected (Fig. 3(a)). Figures 3(b)–3(e) show four representative cases when the number of colors is 9, 12, 15, and 18, respectively. When the number of colors is greater than 12, the RMSE value converges to the minimum value (Fig. 3(a)). As a result, we use only 12 primary colors (#1 – #12 in Fig. 1). In this case, the RMSE values of the R, G, and B channels are 0.0285, 0.0220, and 0.0337, respectively (Fig. 3(c)). In pairwise cross-spectral analyses (Fig. 3(f)), the off-diagonal areas represent the correlation coefficients between the reflection spectra of different colors. The 12 reflection spectra of the primary colors are relatively uncorrelated, with an average correlation coefficient of 0.579. In other words, only 12 color readings enable the reliable recovery of the RGB spectral response functions with absolute values in our compressive sensing-based method. This result also supports the idea that the set of 12 selected primary colors serves as a spectrally incoherent sensing matrix.
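The combination count above is easy to verify, and the random-subset averaging can be sketched as follows (patch indices are placeholders for the actual color patches):

```python
import numpy as np
from math import comb

# All 9-of-18 color subsets: far too many to evaluate exhaustively
print(comb(18, 9))   # 48620

rng = np.random.default_rng(0)

def random_color_subsets(n_colors=18, k=9, n_trials=100):
    """100 random subsets of color-patch indices, as used to average the RMSE."""
    return [rng.choice(n_colors, size=k, replace=False) for _ in range(n_trials)]

subsets = random_color_subsets()
print(len(subsets), len(subsets[0]))   # 100 9
```

Each subset would define one sensing matrix; averaging the recovery RMSE over the 100 subsets gives the curve in Fig. 3(a).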


Fig. 3. Recovery of RGB spectral response functions with optimal color combinations. (a) The root mean square error (RMSE) between the ground truth and recovered RGB spectral response functions of the RGB camera (GS3-U3-120S6C-C, Point Grey) as the number of colors increases. For each number of color patches used for recovery, different possible combinations of colors selected among 18 colors are used to calculate an average and a standard deviation. (b)–(e) Representative cases of recovered RGB functions when the number of color patches used is 9, 12, 15, and 18, respectively. Overall, the estimated RGB spectral response functions (solid lines) are in excellent agreement with the ground truth ones (dotted lines and Fig. 2(e)). (f) The reflection spectra of 12 primary colors (#1 – #12 in Fig. 1) serve as a sensing matrix in the compressive sensing framework. The pairwise spectral comparison map shows an average correlation coefficient of 0.579. This level of spectral uncorrelation is sufficient to guarantee the incoherence requirement for compressive sensing theory.


In addition, we compare the RGB spectral response functions estimated using typical methods based on l2-norm minimization (i.e., least squares regression). Specifically, we use QR decomposition and the Moore-Penrose pseudo-inverse [71], both with a constraint of nonnegative values, with the 12 primary colors. Figure 4 shows that the RGB spectral response functions estimated by these conventional methods are not in good agreement with the ground truth values, given that only a small number of primary colors are used. The error from l1-norm minimization (i.e., compressive sensing) is significantly smaller than those from the l2-norm minimization methods; the mean RMSE values (averaged over the R, G, and B channels) are 0.3557, 0.1511, and 0.0281 for QR decomposition, the Moore-Penrose pseudo-inverse, and compressive sensing, respectively (Fig. 4(c)). Indeed, because l2-norm minimization is suited to overdetermined problems, the conventional methods often rely on a significantly larger number of colors.
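A minimal sketch of the pseudo-inverse baseline; here nonnegativity is imposed by simple clipping, a cruder stand-in for the constrained solvers used in the paper, and the test system is synthetic:

```python
import numpy as np

def pinv_estimate(F, Y):
    """l2-norm (least-squares) baseline via the Moore-Penrose pseudo-inverse.
    Nonnegativity is imposed by clipping, a crude stand-in for a constrained solver."""
    S = np.linalg.pinv(F) @ Y
    return np.clip(S, 0.0, None)

# Sanity check on a well-determined synthetic system (hypothetical sizes)
rng = np.random.default_rng(0)
F = rng.random((20, 20)) + np.eye(20)      # well-conditioned spectra matrix
S_true = rng.random((20, 3))
S_hat = pinv_estimate(F, F @ S_true)
print(np.allclose(S_hat, S_true))          # True in this noiseless square case
```

With n = m the baseline recovers S exactly; the paper's point is that it degrades when n is as small as 12 while m is in the hundreds.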


Fig. 4. Comparisons with typical methods based on l2-norm minimization. (a) and (b) The RGB spectral response functions of the RGB camera (GS3-U3-120S6C-C, Point Grey) are estimated with 12 primary colors (# 1 – # 12 in Fig. 1) using QR decomposition with a constraint of nonnegative values (a) and the Moore-Penrose pseudo-inverse with a constraint of nonnegative values (b). The estimated RGB spectral response functions (solid lines) are compared with the ground truth RGB spectral response functions (dotted lines and Fig. 2(e)). (c) Mean RMSE values of the R, G, and B channels between the ground truth (dotted lines and Fig. 2(e)) and the recovered RGB spectral response functions (solid lines). The RMSE of compressive sensing is the same as Fig. 3(c).


5. Testing with multiple smartphones

We further estimate the RGB spectral response functions of several Android smartphone models, including Samsung Galaxy Note8, Galaxy S9+, and Galaxy A21. When smartphone cameras are used, it is necessary to use the RAW format because conventional RGB images (JPEG) from smartphones are rendered and nonlinear to the light intensity [72]. RAW is an image format that captures unprocessed image data directly from the image sensor. Some high-end smartphones offer direct access to RAW images with “Pro Mode” or “ProRAW” in the default camera application. In the JPEG format, RGB color information is significantly compressed and often unrecoverable [72]. Thus, after reading RGB values in the RAW format, we recover the RGB spectral response functions. Figure 5 shows the RGB spectral response functions of the smartphone cameras recovered with the set of 12 primary colors. Unfortunately, the ground truth or directly measured RGB spectral response functions of the smartphones are not available as the manufacturers do not share those specifications in the public domain.


Fig. 5. Estimated RGB spectral response functions of the built-in cameras of high-end and low-end smartphones. The recovered RGB spectral response functions of Samsung Galaxy Note 8 (a), Galaxy S9+ (b), and Galaxy A21 (c).


In this respect, we establish a simple validation method by comparing the measured and synthetic RGB values of a large number of feature colors. As an expanded color checker reference standard, we use 96 colors in ColorChecker Digital SG (Fig. 6(a)) that include additional representative colors of natural objects, such as human skin tones and various sky colors. First, we measure the true reflection spectrum of each color patch, unaffected by the light source spectrum and the ambient stray light, in a manner similar to Eq. (5) (Fig. 6(b)). Second, we apply the estimated RGB spectral response functions to the reflection spectra of the color patches and generate the corresponding synthetic RGB values for the specific smartphone camera. Third, we compare the measured RGB values with the synthetic RGB values calculated from the estimated RGB spectral response functions of the smartphone camera (Figs. 6(c)–6(e)). To quantify errors, we use the root mean square relative error (RMSRE) defined as:

$$\textrm{RMSRE} = \frac{1}{3}\sum\nolimits_{C = R,G,B} \sqrt {\frac{1}{k}\sum\nolimits_{i = 1}^k {{\left( {\frac{{I_{C,i}^T - I_{C,i}^S}}{{I_{C,i}^T}}} \right)}^2}} , $$
where $I_{R,G,B}^T$ is the measured true RGB values, $I_{R,G,B}^S$ is the synthetic RGB values, and $k({ = 96} )$ is the number of feature color patches. The average RMSRE values of Note8, S9+, and A21 between the measured and synthetic RGB values of the 96 colors are 7.88%, 6.95%, and 10.30%, respectively. The small RMSRE values support the validity of the estimated RGB response functions in the case of the high-end smartphones (Note8 and S9+). The low-end smartphone (A21) has a relatively high RMSRE value due to its lower signal-to-noise ratio (higher dark noise). Overall, the small differences between the measured and synthetic RGB values support the reliability of spectral response function recovery and potential applications for color calibration and standardization in a variety of machine vision and mHealth applications.
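The RMSRE defined above can be computed in a few lines (the uniform-error check is hypothetical):

```python
import numpy as np

def rmsre(I_true, I_syn):
    """Root mean square relative error, averaged over the R, G, B channels."""
    I_true = np.asarray(I_true, dtype=float)   # shape (k, 3): k patches x RGB
    I_syn = np.asarray(I_syn, dtype=float)
    rel = (I_true - I_syn) / I_true            # per-patch, per-channel relative error
    return float(np.mean(np.sqrt(np.mean(rel ** 2, axis=0))))

# Hypothetical check: a uniform 10% relative error on 96 patches gives RMSRE ~ 0.10
I_true = np.full((96, 3), 0.5)
print(rmsre(I_true, 0.9 * I_true))
```

Because each residual is normalized by the true value, dim and bright patches contribute comparably to the error metric.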


Fig. 6. Color validation of estimated RGB spectral response functions with measured and synthetic colors. (a) A comprehensive color reference checker used for smartphone camera validation is ColorChecker Digital SG (X-Rite) with 140 feature color patches. Except for black, grey, and white colors, this color reference standard has 96 unique colors, including 12 primary colors for compressive sensing-based estimation of RGB spectral response functions. (b) The reflection spectra of the corresponding color patches from ColorChecker Digital SG. (c-1), (d-1), and (e-1) The directly measured colors with Galaxy Note 8 (c-1), Galaxy S9+ (d-1), and Galaxy A21 (e-1). The true RGB values are calculated following Eq. (5). For color visualization, the RAW images are rescaled to 8-bit (0 to 255) and are transformed into the RGB domain with a gamma correction. (c-2), (d-2), and (e-2) The synthetic colors of Galaxy Note 8 (c-2), Galaxy S9+ (d-2), and Galaxy A21 (e-2) generated by applying the estimated RGB spectral response functions to the reflection spectra, following Eq. (6).


6. Discussion

We have introduced an l1-norm minimization framework for estimating the RGB spectral response functions of machine vision and smartphone cameras. As the estimation of RGB spectral response functions is typically an ill-posed problem, most of the previous color checker-based methods are framed to solve an overdetermined problem with l2-norm minimization (i.e., the least squares method). Such methods are not ideal for reliably recovering RGB spectral response functions with a small number of colors, because the dimensionality (variation) of the reflection spectra from color targets is lower than that of the RGB spectral response functions. For example, even with a set of 1269 colors and their corresponding spectra, typical optimization frameworks require constraints on the illumination power spectrum or the RGB spectral response estimation [8,73]. In our study, we have demonstrated that a set of only 12 primary colors and their reflection spectra is sufficient to recover the RGB spectral response functions, owing to the compressive framework.
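As a sketch of the recovery step under stated assumptions (the paper solves the l1 problem with CVX in MATLAB; here a plain iterative shrinkage-thresholding (ISTA) loop solves the equivalent LASSO form, and a DCT matrix is assumed as the frequency-domain sparsifying basis `Psi`):

```python
# Minimal sketch of the l1-norm recovery for one color channel, not the
# authors' exact solver. F (n x m) holds the n reflection spectra sampled
# at m wavelengths, and y (n,) holds the measured values of one channel.
import numpy as np
from scipy.fftpack import idct

def recover_channel(F, y, gamma=1e-3, n_iter=5000):
    """Solve min 0.5*||y - F Psi s||^2 + gamma*||s||_1 via ISTA, then
    return the response S = Psi s sampled at the m wavelengths."""
    m = F.shape[1]
    Psi = idct(np.eye(m), norm='ortho', axis=0)   # inverse-DCT columns as basis
    Phi = F @ Psi
    L = np.linalg.norm(Phi, 2) ** 2               # Lipschitz constant of the gradient
    s = np.zeros(m)
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ s - y)
        z = s - grad / L                          # gradient step
        s = np.sign(z) * np.maximum(np.abs(z) - gamma / L, 0.0)  # soft threshold
    return Psi @ s
```

Running this once per channel (R, G, B) yields the three response curves; the regularization weight `gamma` plays the role of the sparsity penalty in the LASSO formulation.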

Previous studies on the mathematical estimation of spectral response functions have mostly addressed digital single-lens reflex (DSLR) cameras, in part because earlier generations of smartphones were limited to compressed image formats (e.g., JPEG) [74,75]. Nowadays, state-of-the-art smartphones allow end-users to control camera modes with access to the RAW format [76]. In this study, the high-end Android smartphones (Note8 and S9+) offer a “Pro mode” in which end-users can manually set the camera properties and save unprocessed RAW image data. It is even possible to extract images in the RAW format from A21, which is a considerably lower-end smartphone. Third-party applications (e.g., Adobe Lightroom and Halide Mark) can also be used when the default camera application does not offer a “Pro mode” or “ProRAW” option.

7. Conclusion

We have developed a simple spectral compressive sensing framework for recovering the RGB spectral response functions of smartphones with access to RAW images, using only a set of 12 primary colors. The method is validated against ground truth values obtained via direct measurements with the mono and RGB versions of a machine vision camera under sunlight. The predicted RGB spectral response functions of low- and high-end smartphones are supported by the low error rates between the measured and synthetic color values of a comprehensive color reference checker. Different smartphone cameras equipped with diverse image sensors often produce distinguishable RGB images even when the images are taken under the same conditions. Indeed, such discrepancies have been a challenge in numerous mHealth applications in which built-in cameras are used for color or spectral quantification. The reported spectral sensitivity estimation method can facilitate color and spectral canonicalization across different smartphone models for non-medical and mHealth applications. We also expect that the compressive sensing-based estimation of spectral sensitivity could be applied to characterizing machine vision systems and hyperspectral imaging components.

Funding

National Institutes of Health (R21TW010620, NIH Technology Accelerator Challenge).

Acknowledgments

We thank Dr. Eric Tkaczyk and Dr. Andrew Trister for an insightful discussion about clinical photography in dermatology and iPhone cameras, respectively.

Disclosures

YLK is a founding member of HemaChrome, LLC. All other authors declare no conflicts of interest.

References

1. L. T. Maloney, “Photoreceptor spectral sensitivities and color correction,” Perceiving, measuring, and using color 1250, 103–110 (1990). [CrossRef]  

2. J. Nathans, D. Thomas, and D. S. Hogness, “Molecular genetics of human color vision: The genes encoding blue, green, and red pigments,” Science 232(4747), 193–202 (1986). [CrossRef]  

3. J. Neitz and G. H. Jacobs, “Polymorphism of the long-wavelength cone in normal human color vision,” Nature 323(6089), 623–625 (1986). [CrossRef]  

4. R. Ramanath, W. E. Snyder, G. L. Bilbro, and W. A. Sander, “Demosaicking methods for bayer color arrays,” J. Electron. Imaging 11(3), 306–315 (2002). [CrossRef]  

5. O. Burggraaff, N. Schmidt, J. Zamorano, K. Pauly, S. Pascual, C. Tapia, E. Spyrakos, and F. Snik, “Standardized spectral and radiometric calibration of consumer cameras,” Opt. Express 27(14), 19075–19101 (2019). [CrossRef]  

6. W. Ji and P. A. Rhodes, “Spectral color characterization of digital cameras: A review,” in Photonics and Optoelectronics Meetings (POEM) 2011: Optoelectronic Sensing and Imaging, (2012).

7. A. Ilie and G. Welch, “Ensuring color consistency across multiple camera,” in Tenth IEEE International Conference on Computer Vision, Vols 1 and 2, (2005), pp. 1268–1275.

8. J. Jiang, D. Y. Liu, J. W. Gu, and S. Susstrunk, “What is the space of spectral sensitivity functions for digital color cameras?” in 2013 IEEE Workshop on Applications of Computer Vision (WACV), (2013), pp. 168–179.

9. G. Oh, H. J. Cho, S. Suh, Y. Ji, H. S. Chung, D. Lee, and K. Kim, “Multicolor fluorescence imaging using a single RGB-IR CMOS sensor for cancer detection with smURFP-labeled probiotics,” Biomed. Opt. Express 11(6), 2951–2963 (2020). [CrossRef]  

10. M. Ebner, “Estimating the spectral sensitivity of a digital sensor using calibration targets,” in GECCO 2007: Genetic and Evolutionary Computation Conference, Vol 1 and 2, (2007), pp. 642–649.

11. R. M. Turner and R. J. Guttosch, “Development challenges of a new image capture technology: Foveon X3 image sensors,” in ICIS 2006: International Congress of Imaging Science, Final Program and Proceedings, (2006), 175.

12. R. F. Lyon and P. M. Hubel, “Eyeing the camera: Into the next century,” in Color and Imaging Conference, (Society for Imaging Science and Technology, 2002), 349–355.

13. B. Kaya, Y. B. Can, and R. Timofte, “Towards spectral estimation from a single RGB image in the wild,” in 2019 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), (2019), pp. 3546–3555.

14. J. I. Park, M. H. Lee, M. D. Grossberg, and S. K. Nayar, “Multispectral imaging using multiplexed illumination,” in 2007 IEEE 11th International Conference on Computer Vision, Vols 1-6, (2007), p. 2049.

15. S. Tominaga, “Spectral imaging by a multichannel camera,” J. Electron. Imaging 8(4), 332–341 (1999). [CrossRef]  

16. S. Chaji, A. Pourreza, H. Pourreza, and M. Rouhani, “Estimation of the camera spectral sensitivity function using neural learning and architecture,” J. Opt. Soc. Am. A 35(6), 850–858 (2018). [CrossRef]  

17. N. Shimano, K. Terai, and M. Hironaga, “Recovery of spectral reflectances of objects being imaged by multispectral cameras,” J. Opt. Soc. Am. A 24(10), 3211–3219 (2007). [CrossRef]  

18. J. Spigulis and L. Elste, “Single snapshot RGB multispectral imaging at fixed wavelengths: Proof of concept,” Multimodal Biomedical Imaging IX 8937 (2014).

19. G. D. Finlayson and S. D. Hordley, “Color constancy at a pixel,” J. Opt. Soc. Am. A 18(2), 253–264 (2001). [CrossRef]  

20. G. D. Finlayson, P. M. Hubel, and S. Hordley, “Color by correlation,” in Fifth Color Imaging Conference: Color Science, Systems, and Applications, (1997), pp. 6–11.

21. B. A. Wandell, “The synthesis and analysis of color images,” IEEE Trans. Pattern Anal. Mach. Intell. PAMI-9(1), 2–13 (1987). [CrossRef]  

22. F. H. Imai and R. S. Berns, “Spectral estimation using trichromatic digital cameras,” in Proceedings of the International Symposium on Multispectral Imaging and Color Reproduction for Digital Archives, (Chiba University Chiba, Japan, 1999), 1–8.

23. G. D. Finlayson and M. S. Drew, “The maximum ignorance assumption with positivity,” in Fourth Color Imaging Conference: Color Science, Systems and Applications, (1996), pp. 202–205.

24. L. T. Maloney and B. A. Wandell, “Color constancy: A method for recovering surface spectral reflectance,” J. Opt. Soc. Am. A 3(1), 29–33 (1986). [CrossRef]  

25. S. D. Hordley, “Scene illuminant estimation: Past, present, and future,” Color Res. Appl. 31(4), 303–314 (2006). [CrossRef]  

26. J. Van de Weijer, T. Gevers, and A. Gijsenij, “Edge-based color constancy,” IEEE Trans. on Image Process. 16(9), 2207–2214 (2007). [CrossRef]  

27. R. T. Tan, K. Nishino, and K. Ikeuchi, “Color constancy through inverse-intensity chromaticity space,” J. Opt. Soc. Am. A 21(3), 321–334 (2004). [CrossRef]  

28. R. Kawakami and K. Ikeuchi, “Color estimation from a single surface color,” in CVPR: 2009 IEEE Conference on Computer Vision and Pattern Recognition, Vols 1-4, (2009), pp. 635–642.

29. Y. Q. Li, C. Wang, J. Y. Zhao, and Q. S. Yuan, “Efficient spectral reconstruction using a trichromatic camera via sample optimization,” Vis. Comput. 34(12), 1773–1783 (2018). [CrossRef]  

30. S. Yamamoto, N. Tsumura, T. Nakaguchi, and Y. Miyake, “Development of a multi-spectral scanner using LED array for digital color proof,” J. Imaging Sci. Technol. 51(1), 61–69 (2007). [CrossRef]  

31. H. N. Li, J. Feng, W. P. Yang, L. Wang, H. B. Xu, P. F. Cao, and J. J. Duan, “Multi-spectral imaging using LED illuminations,” in 2012 5th International Congress on Image and Signal Processing (CISP), (2012), pp. 538–542.

32. S. C. Yoon, T. S. Shin, K. C. Lawrence, G. W. Heitschmidt, B. Park, and G. R. Gamble, “Hyperspectral imaging using rgb color for foodborne pathogen detection,” J. Electron. Imaging 24(4), 043008 (2015). [CrossRef]  

33. S. R. Steinhubl, E. D. Muse, and E. J. Topol, “The emerging field of mobile health,” Sci. Transl. Med. 7(283), 283rv3 (2015). [CrossRef]  

34. C. S. Wood, M. R. Thomas, J. Budd, T. P. Mashamb, A. Thompson, K. Herbst, D. Pillay, R. W. Peeling, A. M. Johnson, R. A. McKendry, and M. M. Stevens, “Taking connected mobile-health diagnostics of infectious diseases to the field,” Nature 566(7745), 467–474 (2019). [CrossRef]  

35. H. Nejati, V. Pomponiu, T. T. Do, Y. R. Zhou, S. Iravani, and N. M. Cheung, “Smartphone and mobile image processing for assisted living,” IEEE Signal Process. Mag. 33(4), 30–48 (2016). [CrossRef]  

36. T. Kim, M. A. Visbal-Onufrak, R. L. Konger, and Y. L. Kim, “Data-driven imaging of tissue inflammation using RGB-based hyperspectral reconstruction toward personal monitoring of dermatologic health,” Biomed. Opt. Express 8(11), 5282–5296 (2017). [CrossRef]

37. R. Poplin, A. V. Varadarajan, K. Blumer, Y. Liu, M. V. McConnell, G. S. Corrado, L. Peng, and D. R. Webster, “Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning,” Nat. Biomed. Eng. 2(3), 158–164 (2018). [CrossRef]  

38. K. R. Konnaiyan, S. Cheemalapati, M. Gubanov, and A. Pyayt, “Mhealth dipstick analyzer for monitoring of pregnancy complications,” IEEE Sens. J. 17(22), 7311–7316 (2017). [CrossRef]  

39. J. B. Bolkhovsky, C. G. Scully, and K. H. Chon, “Statistical analysis of heart rate and heart rate variability monitoring through the use of smart phone cameras,” in 2012 Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), (2012), pp. 1610–1613.

40. S. Akraa, A. Pham Tran Tam, H. F. Shen, Y. H. Tang, B. Z. Tang, J. Li, and S. Walker, “A smartphone-based point-of-care quantitative urinalysis device for chronic kidney disease patients,” J. Network and Computer Appl. 115, 59–69 (2018). [CrossRef]  

41. S. M. Park, M. A. V. Onufrak, M. M. Haque, M. C. Were, V. Naanyu, M. K. Hasan, and Y. L. Kim, “mHealth spectroscopy of blood hemoglobin with spectral super-resolution,” Optica 7(6), 563–573 (2020). [CrossRef]  

42. T. Kim, S. H. Choi, N. Lambert-Cheatham, Z. B. Xu, J. E. Kritchevsky, F. R. Bertin, and Y. L. Kim, “Toward laboratory blood test-comparable photometric assessments for anemia in veterinary hematology,” J. Biomed. Opt. 21(10), 107001 (2016). [CrossRef]  

43. P. M. Hubel, D. Sherman, and J. E. Farrell, “A comparison of methods of sensor spectral sensitivity estimation,” in Color and Imaging Conference, (Society for Imaging Science and Technology, 1994), pp. 45–48.

44. G. Sharma and H. J. Trussell, “Characterization of scanner sensitivity,” in Color and Imaging Conference, (Society for Imaging Science and Technology, 1993), pp. 103–107.

45. G. D. Finlayson, S. Hordley, and P. M. Hubel, “Recovering device sensitivities with quadratic programming,” in Sixth Color Imaging Conference: Color Science, Systems and Applications, (1998), pp. 90–95.

46. K. Barnard and B. Funt, “Camera characterization for color research,” Color Res. Appl. 27(3), 152–163 (2002). [CrossRef]  

47. C. P. Huynh and A. Robles-Kelly, “Recovery of spectral sensitivity functions from a colour chart image under unknown spectrally smooth illumination,” in 2014 22nd International Conference on Pattern Recognition (ICPR), (2014), pp. 708–713.

48. G. Finlayson, M. M. Darrodi, and M. Mackiewicz, “Rank-based camera spectral sensitivity estimation,” J. Opt. Soc. Am. A 33(4), 589–599 (2016). [CrossRef]  

49. J. Y. Zhu, X. F. Xie, N. F. Liao, Z. Z. Zhang, W. M. Wu, and L. M. Lv, “Spectral sensitivity estimation of trichromatic camera based on orthogonal test and window filtering,” Opt. Express 28(19), 28085–28100 (2020). [CrossRef]  

50. M. M. Darrodi, G. Finlayson, T. Goodman, and M. Mackiewicz, “Reference data set for camera spectral sensitivity estimation,” J. Opt. Soc. Am. A 32(3), 381–391 (2015). [CrossRef]  

51. P. Urban, M. Desch, K. Happel, and D. Spiehl, “Recovering camera sensitivities using target-based reflectances captured under multiple LED-illuminations,” in Proc. of Workshop on Color Image Processing, (2010), pp. 9–16.

52. F. Sigernes, J. M. Holmes, M. Dyrland, D. A. Lorentzen, T. Svenoe, K. Heia, T. Aso, S. Chernouss, and C. S. Deehr, “Sensitivity calibration of digital colour cameras for auroral imaging,” Opt. Express 16(20), 15623–15632 (2008). [CrossRef]  

53. F. Sigernes, M. Dyrland, N. Peters, D. A. Lorentzen, T. Svenoe, K. Heia, S. Chernouss, C. S. Deehr, and M. Kosch, “The absolute sensitivity of digital colour cameras,” Opt. Express 17(22), 20211–20220 (2009). [CrossRef]  

54. J. Farrell, M. Okincha, and M. Parmar, “Sensor calibration and simulation,” Digital Photography IV 6817 (2008).

55. P. L. Vora, J. E. Farrell, J. D. Tietz, and D. H. Brainard, “Digital color cameras - 2 - spectral response,” (1997), hpl.hp.com.

56. P. K. Yang, “Determining the spectral responsivity from relative measurements by using multicolor light-emitting diodes as probing light sources,” Optik 126(21), 3088–3092 (2015). [CrossRef]  

57. R. Hartmann, G. Hartner, U. G. Briel, K. Dennerl, F. Haberl, L. Struder, J. Trumper, E. Bihler, E. Kendziorra, J. F. Hochedez, E. Jourdain, P. Dhez, P. Salvetat, J. Auerhammer, D. Schmitz, F. Scholze, and G. Ulm, “The quantum efficiency of the XMM pn-CCD camera,” EUV, X-Ray, and Gamma-Ray Instrumentation for Astronomy X 3765, 703–713 (1999). [CrossRef]

58. R. Kawakami, H. X. Zhao, R. T. Tan, and K. Ikeuchi, “Camera spectral sensitivity and white balance estimation from sky images,” Int. J. Comput. Vis. 105(3), 187–204 (2013). [CrossRef]  

59. M. Rump, A. Zinke, and R. Klein, “Practical spectral characterization of trichromatic cameras,” in Proceedings of the 2011 SIGGRAPH Asia Conference, (2011).

60. G. Sharma and H. J. Trussell, “Set theoretic estimation in color scanner characterization,” J. Electron. Imaging 5(4), 479–489 (1996). [CrossRef]  

61. Z. Zhang, Y. Xu, J. Yang, X. L. Li, and D. Zhang, “A survey of sparse representation: Algorithms and applications,” IEEE Access 3, 490–530 (2015). [CrossRef]  

62. Y. Kwak, S. M. Park, Z. Ku, A. Urbas, and Y. L. Kim, “A pearl spectrometer,” Nano Lett. 21(2), 921–930 (2021). [CrossRef]  

63. E. J. Candes and T. Tao, “Decoding by linear programming,” IEEE Trans. Inf. Theory 51(12), 4203–4215 (2005). [CrossRef]  

64. C. Ramirez, V. Kreinovich, and M. Argaez, “Why ℓ1 is a good approximation to ℓ0: A geometric explanation,” (2013).

65. D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory 52(4), 1289–1306 (2006). [CrossRef]  

66. R. G. Baraniuk, “Compressive sensing,” IEEE Signal Process. Mag. 24(4), 118–121 (2007). [CrossRef]  

67. E. Candes and J. Romberg, “Sparsity and incoherence in compressive sampling,” Inverse Problems 23(3), 969–985 (2007). [CrossRef]  

68. G. Peyre, “Best basis compressed sensing,” IEEE Trans. Signal Process. 58(5), 2613–2622 (2010). [CrossRef]  

69. Z. Yang, T. Albrow-Owen, H. Cui, J. Alexander-Webber, F. Gu, X. Wang, T.-C. Wu, M. Zhuge, C. Williams, and P. Wang, “Single-nanowire spectrometers,” Science 365(6457), 1017–1020 (2019). [CrossRef]  

70. M. Grant and S. Boyd, “CVX: MATLAB software for disciplined convex programming, version 2.2,” (2014), http://cvxr.com/cvx.

71. J. C. A. Barata and M. S. Hussein, “The Moore-Penrose pseudoinverse: A tutorial review of the theory,” Braz. J. Phys. 42(1-2), 146–165 (2012). [CrossRef]  

72. R. Ramanath, W. E. Snyder, Y. J. Yoo, and M. S. Drew, “Color image processing pipeline,” IEEE Signal Process. Mag. 22(1), 34–43 (2005). [CrossRef]  

73. J. Y. Hardeberg, H. Brettel, and F. Schmitt, “Spectral characterisation of electronic cameras,” Electronic Imaging: Processing, Printing, and Publishing in Color 3409, 100–109 (1998). [CrossRef]  

74. J. Nakamura, Image sensors and signal processing for digital still cameras (CRC, 2017).

75. P. Daponte, L. De Vito, F. Picariello, and M. Riccio, “State of the art and future developments of measurement applications on smartphones,” Measurement 46(9), 3291–3307 (2013). [CrossRef]  

76. Y. Deng, “Deep learning on mobile devices: A review,” in Mobile Multimedia/Image Processing, Security, and Applications 2019, (International Society for Optics and Photonics, 2019), 109930A.

Figures (6)

Fig. 1. Small number of primary colors for compressive sensing-based estimation of RGB spectral response functions. (a) 18 colors from ColorChecker Classic Nano (X-Rite) are used to estimate the RGB spectral response (or sensitivity) functions. (b) The corresponding reflection spectra of the 18 colors. The small size of 22 mm × 36 mm is also useful for ensuring spatially uniform illumination and single-shot imaging. The final number of primary colors is further reduced to 12 (#1 – #12) with reliable recovery of the RGB spectral response functions.
Fig. 2. Ground truth RGB spectral response functions from direct measurements for validation. (a) The RGB camera (GS3-U3-120S6C-C, Point Grey) and the mono camera (GS3-U3-120S6M-C, Point Grey), installed with the identical image sensor (Sony ICX625), are used for initial testing and validation. (b) Sunlight is used as the illumination light source. A photograph of clear blue sky taken during the direct measurement; the azimuth and elevation of the sun are 184.26° and 27.1°, respectively. (c) The spectra of sunlight through an imaging spectrograph mounted with the mono camera and the RGB camera. The Fraunhofer lines produced by natural elements in the atmosphere are also used for spectral calibration. The red (R), green (G), and blue (B) channels are split from the RGB camera. (d) The spectra of three different light sources. Sunlight and xenon-arc light sources are ideal for providing a broad spectral range in the visible light (400–700 nm). (e) The measured RGB spectral response functions of the RGB camera, which serve as the ground truth for validation.
Fig. 3. Recovery of RGB spectral response functions with optimal color combinations. (a) The root mean square error (RMSE) between the ground truth and recovered RGB spectral response functions of the RGB camera (GS3-U3-120S6C-C, Point Grey) as the number of colors increases. For each number of color patches used for recovery, different possible combinations of colors selected among the 18 colors are used to calculate an average and a standard deviation. (b)–(e) Representative cases of recovered RGB functions when the number of color patches used is 9, 12, 15, and 18, respectively. Overall, the estimated RGB spectral response functions (solid lines) are in excellent agreement with the ground truth (dotted lines and Fig. 2(e)). (f) The reflection spectra of the 12 primary colors (#1 – #12 in Fig. 1) serve as a sensing matrix in the compressive sensing framework. The pairwise spectral comparison map shows an average correlation coefficient of 0.579. This level of spectral decorrelation is sufficient to satisfy the incoherence requirement of compressive sensing theory.
Fig. 4. Comparisons with typical methods based on l2-norm minimization. (a) and (b) The RGB spectral response functions of the RGB camera (GS3-U3-120S6C-C, Point Grey) are estimated with the 12 primary colors (#1 – #12 in Fig. 1) using QR decomposition with a nonnegativity constraint (a) and the Moore-Penrose pseudo-inverse with a nonnegativity constraint (b). The estimated RGB spectral response functions (solid lines) are compared with the ground truth RGB spectral response functions (dotted lines and Fig. 2(e)). (c) Mean RMSE values of the R, G, and B channels between the ground truth (dotted lines and Fig. 2(e)) and the recovered RGB spectral response functions (solid lines). The RMSE of compressive sensing is the same as in Fig. 3(c).
Fig. 5. Estimated RGB spectral response functions of the built-in cameras of high-end and low-end smartphones: Samsung Galaxy Note 8 (a), Galaxy S9+ (b), and Galaxy A21 (c).

Tables (1)

Table 1. Brief summary of the previous studies on estimating RGB spectral response functions.

Equations (13)

$$||\mathbf{x}||_p = \left( \sum_{i=1}^{n} |x_i|^p \right)^{1/p}.$$
$$I^M(R,G,B) = \int L(\lambda)\,C(\lambda)\,S_{R,G,B}(\lambda)\,D(\lambda)\,O(\lambda)\,d\lambda,$$
$$I^{\textrm{reference}}(R,G,B) = \int L(\lambda)\,C(\lambda)\,S_{R,G,B}(\lambda)\,D(\lambda)\,d\lambda.$$
$$I^T(R,G,B) = \frac{I^M(R,G,B)}{I^{\textrm{reference}}(R,G,B)}.$$
$$I^T(R,G,B) = \frac{I^M(R,G,B) - I^{\textrm{dark}}(R,G,B)}{I^{\textrm{reference}}(R,G,B) - I^{\textrm{dark}}(R,G,B)},$$
$$Y_{n \times 3} = F_{n \times m}\, S_{m \times 3},$$
$$\begin{bmatrix} I_1(R) & I_1(G) & I_1(B) \\ I_2(R) & I_2(G) & I_2(B) \\ \vdots & \vdots & \vdots \\ I_n(R) & I_n(G) & I_n(B) \end{bmatrix} = \begin{bmatrix} I_1(\lambda_1) & I_1(\lambda_2) & \cdots & I_1(\lambda_m) \\ I_2(\lambda_1) & I_2(\lambda_2) & \cdots & I_2(\lambda_m) \\ \vdots & \vdots & & \vdots \\ I_n(\lambda_1) & I_n(\lambda_2) & \cdots & I_n(\lambda_m) \end{bmatrix} \begin{bmatrix} S_R(\lambda_1) & S_G(\lambda_1) & S_B(\lambda_1) \\ S_R(\lambda_2) & S_G(\lambda_2) & S_B(\lambda_2) \\ \vdots & \vdots & \vdots \\ S_R(\lambda_m) & S_G(\lambda_m) & S_B(\lambda_m) \end{bmatrix}.$$
$$Y = F \Psi s = \Phi s.$$
$$\textrm{minimize} \;\; ||s||_1$$
$$\textrm{subject to} \;\; ||Y - \Phi s||_2 \le \varepsilon,$$
$$\textrm{minimize} \;\; \tfrac{1}{2}||Y - \Phi s||_2^2 + \gamma ||s||_1,$$
$$S_{R,G,B}(\lambda) = \frac{I_{R,G,B}(\lambda)}{I_{\textrm{mono}}(\lambda)} = \frac{L(\lambda)\,C(\lambda)\,S_{R,G,B}(\lambda)\,D(\lambda)}{L(\lambda)\,C(\lambda)\,D(\lambda)},$$
$$\textrm{RMSRE} = \frac{1}{3}\sum_{C = R,G,B} \sqrt{\frac{1}{k}\sum_{i=1}^{k} \left( \frac{I_{C,i}^T - I_{C,i}^S}{I_{C,i}^T} \right)^2}.$$
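The dark-frame-corrected normalization and the discrete forward model among the equations above can be sketched as follows; the array shapes and function names are illustrative assumptions, not the authors' implementation:

```python
# Illustrative sketch of the normalization and forward-model steps.
# I_meas, I_ref, and I_dark are per-channel RGB values (length-3 arrays);
# F is an (n x m) matrix of n reflection spectra sampled at m wavelengths,
# and S is an (m x 3) matrix of the R, G, B response functions.
import numpy as np

def normalize_rgb(I_meas, I_ref, I_dark):
    """Dark-corrected normalization: (I_M - I_dark) / (I_reference - I_dark)."""
    return (I_meas - I_dark) / (I_ref - I_dark)

def synthesize_rgb(F, S):
    """Discrete forward model Y (n x 3) = F (n x m) @ S (m x 3),
    i.e., the wavelength integral replaced by a matrix product."""
    return F @ S
```

Comparing `synthesize_rgb(F, S_estimated)` against the normalized measured values is precisely the color validation step quantified by the RMSRE metric.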