
Simple color appearance model (sCAM) based on simple uniform color space (sUCS)


Abstract

A new color appearance model, named sCAM, has been developed, including a uniform color space, sUCS. The model has a simple structure but provides comprehensive functions for color-related applications. It takes input either from XYZ under D65 or from signals in an RGB space. Its accuracy has been extensively tested: sUCS performed best or second best against the state-of-the-art UCSs on the 28 datasets for space uniformity and the 6 datasets for hue linearity, and sCAM performed best in fitting all available one- and two-dimensional color appearance datasets. Field tests are recommended for all color-related applications.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

1.1 Definition

Uniform color space (UCS) has long been an extensively investigated research topic in the field of color science. A UCS is defined such that "equal distances are intended to represent threshold or suprathreshold perceived color differences of equal size" [1]. UCSs are frequently used to evaluate color differences for color quality control, to specify colors for communicating appearance, and to perform imaging applications such as image compression, gamut mapping and image enhancement. Note that a UCS is automatically a color difference equation (CDE), i.e., the Euclidean distance between two samples in a UCS represents their color difference. However, some CDEs do not have an associated color space (see the next section).

1.2 CIE recommended color models

The Commission Internationale de l’Éclairage (CIE) has contributed greatly to the standardization of UCSs. In 1976, it first proposed two ISO/CIE standard UCSs, CIELUV and CIELAB [2,3], originally recommended to predict perceptual color differences for light sources and surface colors, respectively. However, they were not well tested, because only limited datasets were available, in fact just two: the Munsell and MacAdam data [4,5]. During 1980-2000, many more experimental datasets were produced with the aim of developing new color models for color differences of less than 5 CIELAB units, for which the four most robust datasets were selected: BFD, Witt, Leeds and RIT-DuPont [6-9]. It was found that the CIELAB and CIELUV spaces gave poor predictions for all those datasets. A number of advanced CDEs were developed by modifying CIELAB to fit them, including CMC, CIE94, Leeds and Bradford [10-12]. Note that CIELAB fits the Munsell data well but performed poorly on these small color difference datasets, which is caused by the difference in color-difference magnitude, i.e., the Munsell data have an average difference of about 10 CIELAB units, much larger than the new datasets, whose differences are below 5 units. In 2000, CIE TC1-29 combined the best features of the above advanced CDEs to fit all four datasets and recommended CIEDE2000 as the ISO/CIE standard [13,14]. Note that these CDEs do not have an associated color space.

1.3 Development of UCSs

After 2001, the research focus switched to developing new UCSs, such as OSA, DIN99d and CAM02-UCS, to fit the above-mentioned four robust datasets [15-17]. The latter belongs to the family of color appearance models (CAMs), which have been extensively studied since 1990 to predict the change of color appearance under different viewing conditions, such as illuminants, luminance levels, backgrounds and surrounds. A CAM is highly desired by the imaging industry to reproduce color appearance across different ambient lightings and media such as displays, printers and projectors. CIECAM97s and CIECAM02 were successively recommended by the CIE, and the latter was later superseded by CIECAM16 [18-20]. The latter two CAMs were extended to include the uniform color spaces CAM02-UCS and CAM16-UCS, respectively; note that CAM02-UCS and CAM16-UCS gave very similar performance in predicting both the color-difference uniformity and the color appearance datasets. After a large-scale test organized by the CIE, the best three UCSs, CAM02-UCS, OSA and DIN99d, were found to give similar performance, but none performed better than CIEDE2000 [21], and no decision was made to recommend a CIE UCS. More recently, Luo and Xu carried out an even more comprehensive study, collecting state-of-the-art color models (4 CDEs and 13 UCSs) and the most comprehensive datasets (28 datasets), divided into three groups: small color-difference display (SCDd), small color-difference surface (SCDs) and large color-difference surface (LCD) [22]. It was found that CAM16-UCS significantly outperformed the other models in all three data groups. A new CIE TC has been formed in the process of recommending it as a CIE standard UCS. These 28 datasets are also used in this study.

1.4 Hue linearity property of UCS

A perfect UCS should have two properties: ideal uniformity and perfect hue linearity. All the tests introduced in the last section only concern space uniformity. Some datasets have been produced to analyze hue linearity. The Hung and Berns and the Ebner and Fairchild datasets specified constant perceived hues [23,24]. Their data revealed that the existing models could not avoid inter-dependence between chroma and hue and were not suitable for imaging applications such as gamut mapping. The uniform color space IPT was developed to provide improved hue linearity [25]. Other datasets were produced to investigate hue linearity for the unitary hues (pure red, yellow, green and blue), including those of Xiao et al. and of Zhao and Luo. Zhao and Luo also tested the latest UCSs, including CIELAB, CAM02-UCS, IPT and Jzazbz, using all available hue linearity datasets [26,27]. Note that Jzazbz is a UCS developed for high dynamic range (HDR) viewing conditions and was fitted using both uniformity and hue linearity datasets [28]. The results clearly showed that CAM02-UCS performed best on the uniformity datasets but badly on the hue linearity datasets; IPT gave the reverse performance; CIELAB performed badly on both types of dataset. Jzazbz gave reasonable performance in predicting both the space uniformity and the hue linearity datasets, i.e., it performed slightly worse than CAM02-UCS and IPT on the respective datasets. Obviously, it is a challenge to derive a model that predicts both types of data accurately.

1.5 Practical applications

Some computational models are widely used in computer graphics, e.g., the HSV system for graphic rendering and color processing. It also includes three color attributes: hue, saturation and value [29]. However, the system shows discontinuities along constant-hue loci. Furthermore, HSV is device-dependent, i.e., it has no transformation from/to the CIE specification. The other space is YUV, or YCrCb, which is widely used for video compression, image processing and computer vision [30]. It was originally designed for the analogue television system and has the advantage of compressing the chrominance signals for transmission. However, it suffers from crosstalk among the three scales. Overall, these models have a much simpler structure for fast data processing and transmission, but they do not meet the requirements of a UCS.

With the above in mind, the goals of this study are to:

  • develop a color model with input either from XYZ or from an RGB space,
  • include a UCS and a CAM, like CIECAM16 and CAM16-UCS respectively,
  • have a simple structure for routine image signal processing applications,
  • be easy to perform forward and inverse transforms, with input and output in an RGB space for routine computer applications,
  • perform equally well as the state-of-the-art models in fitting the space uniformity and hue linearity datasets,
  • give the most accurate prediction of the available color appearance datasets, including 1D and 2D attributes.

2. System development

A color model, sCAM-UCS, has been developed. It is described in this section in three parts: 1) the simple uniform color space (sUCS), 2) the simple color appearance model (sCAM) and 3) the simple 2D scales (s2D).

2.1 Simple uniform color space (sUCS)

The structure of sUCS is based on those of IPT and CAM16-UCS for their hue linearity and space uniformity, respectively. The model coefficients were optimized by minimising the STRESS measure between the visual results and the predicted results, as shown in Eq. (1) [31].

$$STRESS = 100\sqrt {\frac{{\sum {{({F{E_i} - {V_i}} )}^2}}}{{\sum V_i^2}}} $$
where
$$F = \frac{{\sum {E_i}{V_i}}}{{\sum E_i^2}}.$$

${E_i}$ and ${V_i}$ represent the model’s predictions and the visual data from a dataset, respectively.
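
As an illustration, the STRESS measure of Eq. (1) can be computed as in the sketch below (not the authors' reference code); it assumes E and V are equal-length numeric arrays of model predictions and visual results.

import numpy as np

def stress(E, V):
    # Standardized residual sum of squares between predictions E and visual data V (Eq. (1)).
    E = np.asarray(E, dtype=float)
    V = np.asarray(V, dtype=float)
    F = np.sum(E * V) / np.sum(E ** 2)  # scaling factor F of Eq. (1)
    return 100.0 * np.sqrt(np.sum((F * E - V) ** 2) / np.sum(V ** 2))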

Step 1 To calculate LMS cone responses

The CIE XYZ values under the D65 illuminant, based on the CIE 1931 (2°), 1964 (10°) or CIE 2015 color matching functions, are transformed to the LMS (long-, medium- and short-wavelength) cone responses via the HPE (Hunt-Pointer-Estevez) matrix [32]. This matrix has been used in all CIE color appearance models and is given in Eq. (2).

$$\left[ {\begin{array}{{c}} L\\ M\\ S \end{array}} \right] = \; \left[ {\begin{array}{{ccc}} {0.4002}&{0.7075}&{ - 0.0807}\\ { - 0.2280}&{1.1500}&{0.0612}\\ 0&0&{0.9184} \end{array}} \right]\; \left[ {\begin{array}{{c}} {{X_{D65}}}\\ {{Y_{D65}}}\\ {{Z_{D65}}} \end{array}} \right]$$

Step 2 To calculate L′M′S′ adapted cone responses

Equation (3) is a power transformation to convert LMS responses to L′M′S′, an approximately equal visual spacing scale. The power factor was determined by minimising the STRESS values using the SCDs and LCD databases. A power factor of 0.43 was found, identical to that of IPT.

$$T^{\prime} = \; {T^{0.43}};T \ge 0;$$
$$T^{\prime} ={-} {({ - T} )^{0.43}};T < 0;$$
where T is L, M, or S response.

Step 3 To calculate Iab opponent responses

The L′M′S′ responses are transformed using Eq. (4) to the I, a and b attributes, corresponding to lightness, redness-greenness and yellowness-blueness, respectively. This step takes the concept from the CIECAM16 model, i.e., the ratio 2:1:1/20 approximates the distribution of the L′:M′:S′ cones in the retina. For the reference white (I = 100), L′ = M′ = S′ = 1. The coefficients in the a and b scales each sum to zero so that all neutral colors have a = b = 0; the a-scale coefficients are approximately in the ratio [11 : -12 : 1]. For lightness (I), the above 2:1:1/20 ratio of the responses was kept.

$$\left[ {\begin{array}{{c}} I\\ a\\ b \end{array}} \right] = \textrm{}\left[ {\begin{array}{{ccc}} {\frac{{200}}{{3.05}}}&{\frac{{100}}{{3.05}}}&{\frac{5}{{3.05}}}\\ {430}&{ - 470}&{40}\\ {49}&{49}&{ - 98} \end{array}} \right]\; \left[ {\begin{array}{{c}} {L^{\prime}}\\ {M^{\prime}}\\ {S^{\prime}} \end{array}} \right]$$

Step 4 To compute Chroma (C), hue angle (h)

Chroma (C) and hue angle (h) are given in Eq. (5) and Eq. (6), respectively. These attributes form the polar chromatic plane of sUCS. The coefficients in the chroma scale were optimized in the same way as those of the CAM02-UCS M′ scale to achieve space uniformity. Again, the same datasets as for Eq. (3) and Eq. (4) were used to optimize the coefficients.

$$C = \frac{1}{{0.0252}}\ln \left( {1 + 0.0447\sqrt {{a^2} + {b^2}} } \right)$$
$$h = \; arctan\; \left({{b}/{a}}\right)$$

To calculate color difference (ΔE) and hue difference (ΔH)

$${\Delta }E = \sqrt {{\Delta }{I^2} + {\Delta }a{'^2} + {\Delta }b{'^2}\; }$$
$$= \sqrt {\Delta {I^2} + \Delta {C^2} + \Delta {H^2}\; }$$
where
$${\Delta }H = 2\sqrt {{C_{Sample}}{C_{Standard\; \; }}} \sin \left( {{\Delta }h/2} \right)$$
$$a^{\prime} = \textrm{C}\cos (h)$$
$$b^{\prime} = \; C\sin (h)$$
$$\Delta a^{\prime} = \textrm{}{a^{\prime}_{Sample}} - a{^{\prime}_{Standard}}$$
$$\Delta b^{\prime} = \textrm{}{b^{\prime}_{Sample}} - b{^{\prime}_{Standard}}$$
$$\Delta C = \textrm{}{C_{Sample}} - {C_{Standard}}$$
$$\Delta h = \textrm{}{h_{Sample}} - {h_{Standard}}$$
where a′ and b′ are the redness-greenness and yellowness-blueness for calculating sUCS color difference. They are different from a and b calculated from Eq. (4).
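
As a minimal sketch (illustrative only, not the authors' code) of the color-difference calculation above, the function below assumes each color is given as its sUCS triplet (I, C, h), with h in degrees; the sample is the first argument and the standard the second. The (ΔI, ΔC, ΔH) form gives the same value, since Δa′² + Δb′² = ΔC² + ΔH² with ΔH as defined above.

import numpy as np

def sucs_delta_e(sample, standard):
    # sample and standard are (I, C, h_degrees) triplets in sUCS.
    I1, C1, h1 = sample
    I2, C2, h2 = standard
    a1, b1 = C1 * np.cos(np.radians(h1)), C1 * np.sin(np.radians(h1))  # a', b' of the sample
    a2, b2 = C2 * np.cos(np.radians(h2)), C2 * np.sin(np.radians(h2))  # a', b' of the standard
    return np.sqrt((I1 - I2) ** 2 + (a1 - a2) ** 2 + (b1 - b2) ** 2)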

The full steps of sUCS are given below.

$$\left[ {\begin{array}{{c}} L\\ M\\ S \end{array}} \right] = \left[ {\begin{array}{{ccc}} {0.4002}&{0.7075}&{ - 0.0807}\\ { - 0.2280}&{1.1500}&{0.0612}\\ 0&0&{0.9184} \end{array}} \right]\textrm{}\left[ {\begin{array}{{c}} {{X_{D65}}}\\ {{Y_{D65}}}\\ {{Z_{D65}}} \end{array}} \right]$$
or
$$\left[ {\begin{array}{{c}} L\\ M\\ S \end{array}} \right] = \left[ {\; \begin{array}{{ccc}} {0.314}&{0.6395}&{0.0466}\\ {0.1517}&{0.7482}&{0.1}\\ {0.0178}&{0.1095}&{0.8728} \end{array}} \right]\; \left[ {\begin{array}{{c}} {{R_{SRGB}}}\\ {{G_{SRGB}}}\\ {{B_{SRGB}}} \end{array}} \right]$$
$$T^{\prime} = \textrm{}{T^{0.43}};T \ge 0;$$
$$T^{\prime} ={-} {({ - T} )^{0.43}};T < 0;$$
$$\left[ {\begin{array}{{c}} I\\ a\\ b \end{array}} \right] = \textrm{}\left[ {\begin{array}{{ccc}} {\frac{{200}}{{3.05}}}&{\frac{{100}}{{3.05}}}&{\frac{5}{{3.05}}}\\ {430}&{ - 470}&{40}\\ {49}&{49}&{ - 98} \end{array}} \right]\; \left[ {\begin{array}{{c}} {L^{\prime}}\\ {M^{\prime}}\\ {S^{\prime}} \end{array}} \right]$$
$$C = \frac{1}{{0.0252}}\ln \left( {1 + 0.0447\sqrt {{a^2} + {b^2}} } \right)$$
$$h = \textrm{}{tan ^{ - 1}}\left( {\frac{b}{a}} \right)$$
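
The following is a minimal sketch of the forward sUCS transform described above (not the authors' reference implementation). It assumes the input is CIE XYZ under D65 scaled so that the white has Y = 100; the LMS values are therefore divided by 100 so that L′ = M′ = S′ = 1 for the reference white, as stated in Step 3.

import numpy as np

M_HPE = np.array([[ 0.4002, 0.7075, -0.0807],
                  [-0.2280, 1.1500,  0.0612],
                  [ 0.0000, 0.0000,  0.9184]])

M_IAB = np.array([[200 / 3.05, 100 / 3.05, 5 / 3.05],
                  [430.0, -470.0,  40.0],
                  [ 49.0,   49.0, -98.0]])

def sucs_forward(xyz_d65):
    # XYZ under D65 (white Y = 100) -> sUCS lightness I, chroma C and hue angle h (degrees).
    lms = M_HPE @ (np.asarray(xyz_d65, dtype=float) / 100.0)   # Step 1, white normalised to LMS = 1
    lms_p = np.sign(lms) * np.abs(lms) ** 0.43                 # Step 2, Eq. (3)
    I, a, b = M_IAB @ lms_p                                    # Step 3, Eq. (4)
    C = np.log(1.0 + 0.0447 * np.hypot(a, b)) / 0.0252         # Step 4, Eq. (5)
    h = np.degrees(np.arctan2(b, a)) % 360.0                   # Eq. (6)
    return I, C, h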

2.2 Simple color appearance model (sCAM)

As mentioned earlier, a CAM is derived to predict the change of color appearance under different viewing conditions. It has been widely used to reproduce color imaging for color management. Its input is the color stimulus, XYZ, and a range of parameters defining the viewing conditions, including the illuminant, the luminance of the adapting field, the surround and the luminance factor of the background. Its output includes two ‘absolute’ color appearance attributes, brightness (Q) and colorfulness (M), and the corresponding ‘relative’ attributes, lightness (J) and chroma (C), respectively; the others are hue angle (h) and hue composition (H). The relationship between the ‘relative’ and ‘absolute’ attributes can be better explained using equations, i.e., J = Qs/Qrw and C = Ms/Qrw, where the subscripts s and rw represent the sample and the reference white, respectively. This indicates that the two ‘relative’ attributes, lightness and chroma, are relative to the brightness of the reference white, whereas the ‘absolute’ attributes, brightness and colorfulness, change with the luminance level.

In this section, a new CAM, called sCAM, is developed based on sUCS. The aim was a simple structure together with accurate prediction of the color appearance datasets. All the available color appearance datasets are included to optimize the new model. Again, the strategy is to train and test sCAM using different datasets.

sCAM, like CIECAM16, predicts various visual phenomena, i.e., changes of color appearance caused by 1) the illuminant (chromatic adaptation), 2) the ambient illuminance (the Stevens and Hunt effects, where a rise of illuminance increases lightness contrast and colorfulness, respectively), 3) the background lightness (simultaneous lightness contrast) and 4) the surround (Bartleson-Breneman effect), where the average, dim and dark surround conditions correspond to typically viewing images in the office, watching TV at home and viewing film in a cinema, respectively [33-37].

Note that the input to sUCS is always XYZ values under the D65 illuminant; it is therefore recommended to use CAT16 as the chromatic adaptation transform to convert XYZ under another illuminant to the corresponding XYZ under D65.

Some parameters need to be predefined for sCAM: the background factor z (see Eq. (12)) and the luminance adaptation factor FL (see Eq. (13)). The coefficients in these equations were obtained by fitting the LUTCHI datasets. The two surround factors, c and FM, are given in Table 1: c is the adjustment coefficient for lightness under different surrounds, and FM is the adjustment coefficient for colorfulness under varying surround conditions. These parameters were defined in two stages. Stage 1 fitted Eqs. (14)-(16) without the surround factors (c and FM) by minimising the mean STRESS of each individual sub-dataset. After that, all datasets were combined to find the best surround factors (c and FM) for each surround in Table 1.

$$z = 1.48 + \; \sqrt {\frac{{{Y_b}}}{{{Y_w}}}} $$
$${F_L} = 0.171\; L_A^{\frac{1}{3}}\; \left( {\frac{1}{{1 - 0.4934{e^{ - 0.9934{L_A}}}}}} \right)$$


Table 1. The c and FM parameters used in sCAM

In Eq. (12), Yb is the luminance factor of the background under the test conditions and Yw is the Y tristimulus value of the adopted white under the test illuminant. In Eq. (13), LA is the luminance of the adapting field. The definitions of these terms are consistent with those established in the CIE standards.
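
A small sketch of these viewing-condition parameters is given below (illustrative only); it assumes Yb and Yw are the luminance factors of the background and of the adopted white, that LA is in cd/m2, and it takes the c and FM values from Table 1 as reproduced in the step-by-step summary later in this section.

import numpy as np

SURROUND = {"average": (0.52, 1.00), "dim": (0.50, 0.95), "dark": (0.39, 0.85)}  # (c, F_M), Table 1

def viewing_parameters(Yb, Yw, LA, surround="average"):
    c, FM = SURROUND[surround]
    z = 1.48 + np.sqrt(Yb / Yw)                                             # background factor, Eq. (12)
    FL = 0.171 * LA ** (1.0 / 3.0) / (1.0 - 0.4934 * np.exp(-0.9934 * LA))  # luminance adaptation, Eq. (13)
    return c, FM, z, FL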

Step 1 To develop Appearance lightness attribute (${I_a}$)

In the context of color appearance models, it was found that the lightness (I) fitted the colour-difference data well but not the colour-appearance data. A new scale, named appearance lightness (Ia) (Eq. (14)), was derived in a similar way to that of the CIECAM models, i.e., A/Aw was replaced by I/100. The same concept of a power factor, the product of c and z, was used to introduce the surround and lightness-contrast (background) effects.

$${I_a} = 100\; {\left( {\frac{I}{{100}}} \right)^{cz}}$$

Figures 1(a) and 1(b) illustrate the difference between sUCS I and sCAM Ia under (a) different surround conditions and (b) different luminous factors of the background, respectively. Figure 1(a) shows a large difference between I and Ia for the dark surround conditions (viewing a 35 mm projected image and a large cut-sheet transparency, i.e., LUTCHI’s M35 and LTX subsets, respectively), with little difference for the average and dim surrounds. Figure 1(b) shows the effect of the luminous factor of the background (Yb) on I and Ia, i.e., the two agree exactly for a background with Yb of 20 (close to L* of 50), and the Ia contrast decreases or increases for Yb larger or smaller than 20, respectively.

Fig. 1. Plot of sCAM Ia against sUCS I for (a) different surround conditions and (b) different luminous factors of background.

Step 2 To develop brightness (Q)

The sCAM brightness (Q) formula is given in Eq. (15).

$$Q = {I_a}\left( {\frac{2}{c}F_L^{0.1}} \right)$$

Q and Ia are expected to have a linear relationship, as shown in Eq. (15), but Q is modulated by different visual effects. It is affected firstly by the surround factor (c), i.e., a color appears less bright under the average surround (higher luminance contrast) than under the dim and dark surround conditions, and secondly by the term $F_L^{0.1}$, which reflects the Stevens effect, i.e., a higher luminance produces a higher contrast.

Figure 2 shows a linear relationship between the Q and Ia. All neutral colors would appear brighter under higher luminance levels than lower luminance levels.

Fig. 2. Plot of Q against Ia values under various luminance levels. Note the lines were plotted for neutral stimuli from 0 to 100 at interval of 0.01 under ambient luminance levels of 0.01, 1.0, 100, 10000 cd/m2.

Step 3 To develop colorfulness (M)

A new colorfulness (M) formula is given in Eq. (16).

$$M = C\left( {\frac{{F_L^{0.1}{e_t}}}{{{I_a}^{0.27}}}{F_M}} \right)$$
where
$${e_t} = 1 + 0.06\cos ({110 + h} )$$

M and C are expected to have a linear relationship, with M scaled by various visual effects. Firstly, the term $F_L^{0.1}$ models the Hunt effect (a higher luminance makes a color appear more colorful; see Fig. 3(a)). Secondly, the colorfulness induction factor (${F_M}$) makes colors appear more colorful under the average surround than under the dim and dark surrounds (see Fig. 3(b)). Thirdly, the ${I_a}^{0.27}$ term reflects the fact that increasing the lightness of a color reduces its colorfulness. Finally, colorfulness magnitudes differ between hues, so the eccentricity function (et) (Eq. (17)) was derived; it is plotted in Fig. 4.

Fig. 3. Plots of sCAM M vs sUCS C to show (a) the adapting luminance and (b) the surround effects. All points in each line have Ia values of 50 and a hue angle of 20°.

Fig. 4. Plot of et values against hue angles (Eq. (17)).

Figure 3 plots sCAM M against sUCS C (a) at different luminance levels and (b) under different surround conditions. It can be seen that colors having the same C value appear more colorful under higher luminance levels than under lower ones. Also, colors having the same C value appear more colorful under the average surround than under the dim and dark surrounds.

Figure 4 shows that colorfulness is higher in the yellow region and lower in the blue region. This function is used to adjust the colorfulness to be more or less equal over the whole hue range.

Step 4 To model the hue composition (H)

As in all CIE color appearance models, the hue composition (H) attribute is computed from the hue angle, as given in Eq. (18). Hue compositions of 0, 100, 200, 300 and 400 correspond to unitary red, yellow, green, blue and back to red, respectively, i.e., to hue angles (h) of 15.6°, 80.3°, 157.8°, 219.7° and 376.6°, respectively.

$$H = {H_i} + \frac{{100\frac{{h^{\prime} - {h_i}}}{{{e_i}}}}}{{\frac{{h^{\prime} - {h_i}}}{{{e_i}}} + \frac{{{h_{i + 1}} - h^{\prime}}}{{{e_{i + 1}}}}}}$$
$$\text{where } h^{\prime} = h + 360, \text{ if } h < 15.6$$
$$h^{\prime} = h, \text{ otherwise}$$
and Hi and ei can be found in Table 2.

A color appearance model can have two types of space based on the h and H attributes, e.g., JCh and JCH spaces, respectively [38]. A space adopting unitary hues cannot produce equal visual-difference spacing, as illustrated by the Munsell and NCS color order systems, respectively [39,40].


Table 2. The Hi, hi and ei parameters used to compute hue composition (H)
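
As an illustration of Eq. (18), the sketch below interpolates H from the hue angle using the Table 2 values reproduced in the step-by-step summary; it is not the authors' reference code.

import numpy as np

H_I = np.array([0.0, 100.0, 200.0, 300.0, 400.0])   # Table 2: hue compositions of the unitary hues
h_I = np.array([15.6, 80.3, 157.8, 219.7, 376.6])   # Table 2: corresponding hue angles (degrees)
e_I = np.array([0.7, 0.6, 1.2, 0.9, 0.7])           # Table 2: eccentricity values

def hue_composition(h):
    # Hue angle h (degrees) -> hue composition H (0 to 400), Eq. (18).
    hp = h + 360.0 if h < 15.6 else h
    i = int(np.searchsorted(h_I, hp, side="right")) - 1   # index of the unitary hue below hp
    t_lo = (hp - h_I[i]) / e_I[i]
    t_hi = (h_I[i + 1] - hp) / e_I[i + 1]
    return float(H_I[i] + 100.0 * t_lo / (t_lo + t_hi))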

2.3 Simple 2D color appearance scales (s2D)

The two-dimensional (2D) color appearance scales have gained attention recently. These scales are functions of the one-dimensional scales, say CIELAB L* and C*ab. Berns proposed vividness (V*ab = sqrt(L*² + C*ab²)) and depth (D*ab = sqrt[(L* − 100)² + C*ab²]), together with clarity [41]; these are defined as the distances of a color from black, from white and from its background, respectively. The 2D scales are easy to understand because they correspond to natural effects in real-world conditions. Some examples are given here. For depth, adding more colorant to a dyebath makes the dyed textile appear deeper, i.e., an increase of depth increases C*ab and decreases L*. For vividness, increasing the intensity of the light illuminating an object increases its vividness, i.e., both L* and C*ab increase.

Cho et al. and Li and Luo investigated 2D scales including whiteness, blackness, depth and vividness [42-44]. They found that these four scales can be reduced to two pairs: 1) whiteness (W) and depth (D), and 2) blackness (K) and vividness (V); the two magnitudes in each pair run in opposite directions, i.e., W = 100 - D and K = 100 - V.

The 2D color appearance scales of sCAM were first derived by fitting the whiteness and blackness data of the NCS. The NCS is a color order system that specifies color appearance in terms of blackness, whiteness, chromaticness and hue composition, based on Hering’s theory of “the degree of resemblance to the six fundamental colors: white, black, red, green, yellow and blue.” About 60,000 estimations were made based on 450 colors judged by 50 observers. The colorimetric data used here were obtained by measuring 1950 samples with a spectrophotometer (d/8 geometry). So, when the W and K scales (Eq. (19) and Eq. (21)) were derived to fit the NCS data, the D and V scales (Eq. (20) and Eq. (22)) were automatically obtained.

These scales are given in Eqs. (19)-(22) for whiteness (W), depth (D), blackness (K) and vividness (V), respectively.

$$W = \; 100 - D = 100\; - \; 1.3\sqrt {{{({100 - {I_a}} )}^2} + 1.6\; {C^2}} $$
where
$$D = 1.3\sqrt {{{({100 - {I_a}} )}^2} + 1.6\; {C^2}} $$
$$K = \; 100 - V = 100 - \sqrt {I_a^2 + 3\; {C^2}} $$
where
$$V = \; \sqrt {I_a^2 + 3\; {C^2}} $$
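
A minimal sketch of the four 2D scales follows; it assumes Ia and C are the sCAM appearance lightness and the sUCS chroma defined earlier, and simply evaluates Eqs. (19)-(22).

import numpy as np

def two_d_scales(Ia, C):
    # Depth, vividness and their complements whiteness and blackness (Eqs. (19)-(22)).
    D = 1.3 * np.sqrt((100.0 - Ia) ** 2 + 1.6 * C ** 2)   # depth
    V = np.sqrt(Ia ** 2 + 3.0 * C ** 2)                   # vividness
    return {"W": 100.0 - D, "D": D, "K": 100.0 - V, "V": V}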

Figures 5(a) and 5(b) plot the constant vividness/blackness and depth/whiteness loci, respectively, in the plane of lightness (Ia) and chroma (C). In addition, a vector is drawn in each figure, marked with the name of the scale on each arrow. Figure 5(a) shows the loci of equal vividness or blackness; both are open-ended scales, with the [Ia, C] = [0, 0] origin corresponding to a blackness of 100 and a vividness of zero. Figure 5(b) shows the loci of equal depth or whiteness; both are again open-ended scales, with the point [Ia, C] = [100, 0] corresponding to a whiteness of 100 and a depth of zero. Note from earlier studies that the saturation attribute has almost the same concept as depth. This will be verified during the testing stage.

Fig. 5. Plots of the equal loci of (a) vividness and (b) depth scales in the Ia and C plane for the unitary red hue. The loci have V or D values of 30, 60 and 90 with the hue angle fixed at 15.6° (unitary red).

The full steps of sCAM are given below.

$$Input:XYZ\; XY{Z_w}\; {L_A}\; {Y_b}\; Surround$$

Surround    c      FM
average     0.52   1.00
dim         0.50   0.95
dark        0.39   0.85

$$z = 1.48 + \; \sqrt {\frac{{{Y_b}}}{{{Y_w}}}} $$
$$XY{Z_{D65}} = CA{T_{16}}({XYZ,XY{Z_w},{L_A},F} )$$
$$ICh = sUCS({XY{Z_{D65}}} )$$
$${I_a} = 100\; {\left( {\frac{I}{{100}}} \right)^{cz}}$$
$${e_t} = 1 + 0.06\cos ({110 + h} )$$
$$M = \frac{{CF_L^{0.1}{e_t}}}{{I_a^{0.27}}}{F_M}$$
$$Q = \frac{2}{c}{I_a}F_L^{0.46}$$

Index    1      2      3      4      5
hi       15.6   80.3   157.8  219.7  376.6
Hi       0      100    200    300    400
ei       0.7    0.6    1.2    0.9    0.7

$$h^{\prime} = h + 360, \text{ if } h < 15.6$$
$$h^{\prime} = h, \text{ otherwise}$$
$$H = {H_i} + \frac{{100\frac{{h^{\prime} - {h_i}}}{{{e_i}}}}}{{\frac{{h^{\prime} - {h_i}}}{{{e_i}}} + \frac{{{h_{i + 1}} - h^{\prime}}}{{{e_{i + 1}}}}}}$$
$$W = \; 100 - 1.3\sqrt {{{({100 - {I_a}} )}^2} + 1.6\; {C^2}} $$
$$K = \; 100 - \sqrt {I_a^2 + 3\; {C^2}} $$
$$D = 1.3\sqrt {{{({100 - {I_a}} )}^2} + 1.6\; {C^2}} $$
$$V = \; \sqrt {I_a^2 + 3\; {C^2}} $$
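
The following is a condensed sketch of the forward sCAM steps above, reusing sucs_forward(), viewing_parameters() and hue_composition() from the earlier sketches; it assumes the stimulus XYZ has already been chromatically adapted to D65 (e.g., via CAT16) and is illustrative rather than a reference implementation. Note that Eq. (15) uses an exponent of 0.1 on FL for brightness whereas the summary above lists 0.46; the sketch follows Eq. (15).

import numpy as np

def scam_forward(xyz_d65, Yb, Yw, LA, surround="average"):
    # xyz_d65: stimulus XYZ already adapted to D65, with the white scaled to Y = 100.
    c, FM, z, FL = viewing_parameters(Yb, Yw, LA, surround)
    I, C, h = sucs_forward(xyz_d65)
    Ia = 100.0 * (I / 100.0) ** (c * z)                # appearance lightness, Eq. (14)
    Q = Ia * (2.0 / c) * FL ** 0.1                     # brightness, Eq. (15)
    et = 1.0 + 0.06 * np.cos(np.radians(110.0 + h))    # eccentricity, Eq. (17), argument taken in degrees
    M = C * FL ** 0.1 * et * FM / Ia ** 0.27           # colorfulness, Eq. (16)
    return {"I_a": Ia, "Q": Q, "C": C, "M": M, "h": h, "H": hue_composition(h)}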

3. Testing the system performance

In Section 2, a simple color model providing comprehensive functions was derived. Six rigorous tests will be carried out to verify sCAM-UCS’s performance: i) color order system visualization, ii) hue linearity, iii) space uniformity, iv) computational cost, v) color appearance and vi) 2D color appearance data.

3.1 Visualization of the color order systems

Munsell and OSA color order systems (COSs) were used to test each UCS’s performance. These two COSs were developed based on the concept of visual uniformity. The three attributes of the Munsell system are Munsell Hue, Value and Chroma, denoted H, V and C. It was designed to have equal spacing between neighboring steps of each individual attribute. However, Zhu et al. studied the Munsell COS and found that one step of Munsell Value is not perceptually equal to one step of Munsell Chroma; in fact, it corresponds to about 2-3 Chroma steps. OSA has three attributes: lightness (L), redness-greenness (g) and yellowness-blueness (j) [45]. It was designed as many cuboctahedra packed to form a 3D space with no gaps in between; each color has an equal visual difference to its 12 neighboring lattice points.

Six UCSs representing the state of the art were investigated: CIELAB, IPT, ICTCP, CAM16-UCS, Jzazbz and sUCS. The first is the CIE standard space and possibly the most widely used UCS. IPT is a space giving the best hue linearity. CAM16-UCS is the most uniform UCS. Jzazbz is a space that considers both hue linearity and space uniformity and is capable of handling HDR luminance conditions. sUCS is the new space described in Section 2. Figures 6 and 7 plot the Munsell and OSA samples in the chromatic planes of the six UCSs (a: CIELAB, b: IPT, c: ICTCP, d: CAM16-UCS, e: Jzazbz, and f: sUCS).

Fig. 6. (a)-(f) Plots of the Munsell samples at Value of 5 in the chromatic plane of CIELAB, IPT, ICTCP, CAM16-UCS, Jzazbz, and sUCS, respectively.

Fig. 7. (a)-(f) Plots of the OSA samples at L of 0 in the chromatic plane of CIELAB, IPT, ICTCP, CAM16-UCS, Jzazbz, and sUCS, respectively.

Figures 6(a)-(f) show the Munsell samples at Value 5 in CIELAB, IPT, ICTCP, CAM16-UCS, Jzazbz and sUCS, respectively. For perfect agreement between the Munsell spacing and a UCS, all constant-chroma loci should be concentric circles with equal spacing between them. It can be seen that CAM16-UCS fits best, followed by sUCS and Jzazbz; they outperformed CIELAB, IPT and ICTCP. Furthermore, each hue locus in an ideal space should be a straight line at a constant hue angle, as in IPT and Jzazbz. On detailed inspection, the constant-hue loci in the blue region of CIELAB and CAM16-UCS are strongly curved, as pointed out by other researchers. Overall, sUCS gave a reasonable fit to the Munsell spacing.

Figures 7(a)-(f) show the OSA samples at the constant OSA L = 0 plane plotted in CIELAB, IPT, ICTCP, CAM16-UCS, Jzazbz and sUCS, respectively. For perfect agreement between the OSA spacing and a UCS, all grids should be squares, as in the IPT, ICTCP and Jzazbz spaces. Although the grids in CIELAB are not quite square, they are of more or less equal size. However, sUCS and CAM16-UCS show larger grids in the neutral region than the others. As pointed out by Zhu et al., this is the largest characteristic difference between small and large color-difference UCSs.

Overall, sUCS gave quite satisfactory fit to the Munsell and OSA spacing.

3.2 Testing the hue linearity

As mentioned earlier, hue linearity is an important property of a UCS. All the datasets studied by Zhao and Luo to investigate UCSs’ hue linearity were also used here. These were: Hung&Berns, Xiao et al., Zhao HM (Hue Matching), NCS and Zhao UHE (Unitary Hue Estimation). The Hung&Berns dataset includes two subsets, named Constant Lightness (CL) and Varying Lightness (VL) respectively. The Xiao et al. and Zhao UHE datasets studied the four unitary hue perceptions. The NCS dataset includes 40 hues and up to 1799 samples; for the purposes of this study, 120 neutral samples were removed. The Zhao HM dataset investigated the primary and secondary hues of a WCG/HDR display. Each dataset has the same arrangement, i.e., different chroma values for each constant hue angle.

The mean standard deviation (SD), calculated between all points and the mean hue angle of each hue locus and then averaged over all loci, is used to measure hue linearity in a UCS. A higher SD value indicates poorer hue linearity (see Figs. 8-11). Table 3 shows each UCS’s performance in SD units.
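
For completeness, a minimal sketch of this SD metric is shown below; it assumes loci maps each nominal hue to the hue angles (in degrees) of its samples in the space under test, and it ignores the circular wrap-around at 0°/360° for simplicity.

import numpy as np

def hue_linearity_sd(loci):
    # Mean, over all constant-hue loci, of the standard deviation of the hue angles.
    return float(np.mean([np.std(np.asarray(angles, dtype=float)) for angles in loci.values()]))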

Fig. 8. (a)-(f) Plots of the Hung&Berns samples in the chromatic plane of CIELAB, IPT, ICTCP, CAM16-UCS, Jzazbz, and sUCS, respectively. The solid lines are drawn based on linear fitting. The red solid dot represents the corresponding white.


Table 3. The performance of hue linearity in terms of SD.a

It can be seen from the ‘mean’ results in Table 3 that IPT, ICTCP, Jzazbz and sUCS performed equal best, and CIELAB and CAM16-UCS gave the poorest performance. For the Zhao HM set, sUCS performed the best, followed by IPT and ICTCP; these three UCSs gave a similar degree of accuracy.

Equation (23) is utilized to calculate the displacement distance of uniform color spaces from the origin across various datasets. Within this context, Distance denotes the distance of each fitted curve from the origin under a first-order linear equation. ChromaMax represents the maximum Chroma value in the dataset within that color space, serving to balance the scales of different uniform color spaces.

$$D = \; \frac{{\sum Distance}}{{Chrom{a_{MAX}}}}$$

From Table 4, it is evident that different models exhibit varying performances in terms of their distance from the origin across various databases. Overall, IPT demonstrates the best performance, followed by CIELAB.


Table 4. The performance of hue linearity in terms of D.a

Figures 8-11 plot the Hung&Berns, Zhao HM, Xiao et al. and Zhao UHE, and NCS data on the chromatic planes of the six UCSs studied. Figures 8(a)-(f) show the Hung&Berns dataset, which includes two subsets, Constant Lightness and Varying Lightness (CL and VL); only the CL subset is used here. It can be clearly seen that the hue linearity of all hue loci in all UCSs is quite reasonable except for the blue hue. Among them, CIELAB and CAM16-UCS clearly show large hue-angle variations.

Both the Xiao et al. and Zhao UHE datasets were generated to define the unitary hues, so they are plotted together in Figs. 9(a)-(f) for CIELAB, IPT, ICTCP, CAM16-UCS, Jzazbz and sUCS, respectively. For perfect agreement, each of the four unitary hue loci should be a straight line. It can be seen that all unitary hue loci are more or less straight lines, indicating that all six spaces have good hue linearity. In addition, the two independent datasets agree well with each other.

Fig. 9. (a)-(f) Plots of the Xiao et al and Zhao UHE dataset in the chromatic plane of CIELAB, IPT, ICTCP, CAM16-UCS, Jzazbz, and sUCS, respectively. The solid lines are drawn based on linear fitting. The red solid dot represents the corresponding white.

The Zhao HM dataset provides the strictest test of hue linearity, due to the inclusion of data at different lightness levels. Figures 10(a)-(f) show that the spaces with the best hue linearity are IPT, Jzazbz and sUCS, especially in the blue hue range. Note that although the blue-hue data for CIELAB and CAM16-UCS do not appear scattered, their hue angles vary greatly (i.e., the loci do not pass through the neutral point). For constant hue perception at different lightness levels, CAM16-UCS also seems to give slightly larger scattering than the others. In addition, the magenta hue data points appear more scattered in all spaces than those of the other hues.

Fig. 10. (a)-(f) Plots of the Zhao HM dataset in the chromatic plane of CIELAB, IPT, ICTCP, CAM16-UCS, Jzazbz, and sUCS, respectively. The solid lines are drawn based on linear fitting. The red solid dot represents the corresponding white.

The NCS dataset provides the most comprehensive hue data for finely examining hue linearity across different color regions. Figures 11(a)-(f) show that the best hue linearity is observed in IPT and sUCS. The performance of Jzazbz in the yellow region appears poorer than in the Zhao HM dataset, while CAM16-UCS still exhibits significant scatter in the blue region.

Fig. 11. (a)-(f) Plots of the NCS dataset in the chromatic plane of CIELAB, IPT, ICTCP, CAM16-UCS, Jzazbz, and sUCS, respectively. The solid lines are drawn based on linear fitting. The red solid dot represents the corresponding white.

From the present analysis, CAM16-UCS performed the worst here but best in the uniformity test (see the next section). This implies that it is difficult for a UCS to perform best in fitting both the uniformity and the hue linearity data types. It is encouraging that sUCS performed equal best with IPT and ICTCP.

3.3 Testing the models’ performance using color difference datasets

The 28 datasets accumulated by Luo et al. are again used. The data are divided into three groups: large color difference surface (LCD), small color difference display (SCDd) and small color difference surface (SCDs). The STRESS measure (Eq. (1)) is again used here to express the merit of the color models. For perfect agreement between a dataset (visual differences, V) and the UCS predictions (E), the STRESS value should be zero; a higher STRESS value means poorer space uniformity of the space in question.

For the comparison between a pair of uniform color spaces, the F-test was employed as a test of statistical significance. In the field of color science, the F value of Eq. (24) is used to compare the performance of Model A and Model B on a single dataset. Because a lower STRESS indicates a better model, an F value smaller than 1/T indicates that Model A significantly outperforms Model B; conversely, an F value greater than T indicates that Model B significantly outperforms Model A, where T is the critical value of the F-distribution at the 95% confidence level (α = 0.05).

$$F = \frac{{STRESS_A^2}}{{STRESS_B^2}}$$
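
The test can be sketched as follows; treating each model's STRESS as having N - 1 degrees of freedom (N being the number of sample pairs) is an assumption made here for illustration, and scipy is used only to obtain the critical value T.

from scipy.stats import f as f_dist

def compare_by_f_test(stress_a, stress_b, n_pairs, alpha=0.05):
    # F of Eq. (24); a lower STRESS means a better model.
    F = stress_a ** 2 / stress_b ** 2
    T = f_dist.ppf(1.0 - alpha / 2.0, n_pairs - 1, n_pairs - 1)  # two-sided critical value
    if F > T:
        return "Model B significantly outperforms Model A"
    if F < 1.0 / T:
        return "Model A significantly outperforms Model B"
    return "no significant difference"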

Table 5 lists the performance for each UCS or CDE in STRESS values according to SCDs, LCD and SCDd groups respectively. One CDE, CIEDE2000, and 9 UCSs were tested.


Table 5. The models’ performance in STRESS units using the 28 CDE (SCDs, LCD, SCDd) datasets

In Table 5, the models’ performance using the COM-corrected dataset is reported. Because this dataset has been used to derive models such as CIEDE2000, CAM16-UCS, DIN99d, OSA-GP-EU and sUCS, these models are expected to outperform the others. CIEDE2000 performed the best (STRESS of 28), followed by DIN99d, CAM16-UCS and OSA-GP-EU (STRESS of 29), then sUCS (32). Similar performance can also be found considering the mean of the 13 datasets; sUCS gave a top-five performance.

For the LCD surface dataset, CAM16-UCS performed the best (29), followed by sUCS (30) and CIEDE2000 (31). CAM16-UCS is insignificantly better than sUCS, but significantly better than CIEDE2000 according to the F-test.

For the SCD display dataset, CAM16-UCS and sUCS performed the best (29), followed by CIEDE2000 and DIN99d (31). The former two are significantly better than the latter two according to F-test.

For the Total mean (28 datasets), CAM16-UCS performed the best (29), followed by sUCS and CIEDE2000 (30), then DIN99d and OSA-GP-EU (32), IPT (35), Jzazbz (36), CIELAB (38) and ICTCP (39). According to the F-test, CAM16-UCS is insignificantly better than sUCS and CIEDE2000, and sUCS is significantly better than the remaining models. Overall, sUCS ranked second, slightly behind CAM16-UCS, amongst the nine state-of-the-art UCSs tested.

3.4 Computational cost

Computational cost refers to the resources required to perform a single transform of a UCS. The criterion for evaluating computational performance was the time required to compute the COM-corrected dataset (11,273 pairs of samples) in MATLAB code for each uniform color space within a single loop. The host CPU and GPU used for the timing were an Intel i5-12400F and an RTX4070, respectively, and the MATLAB version was R2022b. As shown in Table 6, each time is the average of ten repeated computations. It can be seen that IPT is the fastest, followed by sUCS, which is faster than CAM16-UCS by a factor of 8.
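
As an illustration of such a timing test (the results in Table 6 were obtained with MATLAB R2022b, not with this code), the sketch below times ten repeated passes over a placeholder array standing in for the 11,273 COM-corrected pairs; sucs_forward() is the sketch given earlier and the data are random, so the absolute times are not comparable with Table 6.

import timeit
import numpy as np

xyz_pairs = np.random.rand(11273, 2, 3) * 100.0   # placeholder data, not the COM-corrected dataset

def one_pass():
    for standard, sample in xyz_pairs:
        sucs_forward(standard)
        sucs_forward(sample)

mean_time = timeit.timeit(one_pass, number=10) / 10.0   # average over ten repeats, as in Table 6
print(f"mean time per pass over all pairs: {mean_time:.3f} s")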


Table 6. Calculation cost of various uniform color spaces

3.5 Testing the performance using color appearance datasets

Color appearance models were developed for cross-media color reproduction, as introduced in Section 1. The largest dataset used to test CAMs’ performance is LUTCHI, which was used to develop almost all of the color appearance models tested here: CIECAM97s, CIECAM02, CIECAM16 and ZCAM.

Luo et al. produced the LUTCHI dataset, which includes seven subsets: Reflection High-Luminance (RHL), Reflection Low-Luminance (RLL), Reflection Varying-Luminance (RVL), Reflection Textile (RTE), Display (CRT), 35 mm projection slide (M35) and Large Cut-sheet Transparency (LTX) [46-50]. The experiment for each subset was designed under three surround conditions, average, dim and dark, according to the surround ratio (the ratio between the luminance of the adapting field and that of the surround). The stimuli were viewed as non-luminous surface colors (RHL, RLL, RVL, RTE), on a self-luminous display (CRT), or as projected transparencies (M35, LTX), respectively. Each stimulus was assessed by 10 observers in terms of lightness, colorfulness and hue composition using the magnitude estimation method. Only the RVL subset also includes brightness results, for which each stimulus was illuminated at luminance levels ranging from 0.1 to 1,000 cd/m2. The LUTCHI dataset is also used as the training dataset for the development of sCAM.

There were also three independent datasets to test the performance of the CAMs. The first was produced by Juan and Luo and includes four visual color appearance results: lightness, colorfulness, hue composition and saturation. The second is from Choi et al. and includes lightness, colorfulness and hue composition scales [51,52]; the study included more than 105 samples, viewed from an indoor dark surround to an outdoor high-luminance surround. The last, from Li and Luo, comprises 40 surface samples assessed in terms of one-dimensional (brightness, colorfulness) and two-dimensional (vividness, depth, whiteness and blackness) perceptions under four different luminance levels (10, 100, 1,000 and 10,000 cd/m2); these data are also used to test the two-dimensional models in the next section.

Five models were tested: CIELAB, CIECAM16, sCAM, ZCAM and the Hellwig and Fairchild scales described below. ZCAM was developed by Safdar et al. based on Jzazbz [53]. More recently, Hellwig and Fairchild modified CIECAM16’s colorfulness, chroma, saturation, lightness and brightness scales to make them simpler and more accurate in predicting some datasets [54,55]. Only their lightness, brightness and colorfulness scales were tested against the corresponding datasets; they are denoted HF-JHK, HF-QHK and HF-M, respectively. Note that the subscript HK represents the Helmholtz–Kohlrausch effect (see later).

Testing Lightness scales

Each CAM’s lightness scale was tested using the LUTCHI dataset. Table 7 summarizes the results in STRESS units. In the table, the STRESS values for each LUTCHI subset are reported first, followed by the Mean and Total, which represent the mean over the individual subsets and the result for the set combining all seven subsets, respectively. Subsequently, the results for the Juan and Choi datasets are reported. Finally, the overall mean for each model is reported, i.e., the mean of the LUTCHI Total, Juan and Choi results.


Table 7. Testing CAMs’ performance using lightness datasets in STRESS unit

Note that the LUTCHI Total data should represent the true performance. It can be seen that sCAM performed the best on the LUTCHI dataset, followed by CAM16-UCS and ZCAM. sCAM also performed the best for the Juan and Choi datasets.

Testing colorfulness scales

The method for testing the models’ performance using the colorfulness data is the same as in the last section. The results are given in Table 8. It can be seen that CIECAM16 and sCAM (STRESS of 16) outperformed the others on the LUTCHI and Overall results, but sCAM performed better than CIECAM16 on both independent datasets.


Table 8. Testing CAMs’ performance using colorfulness datasets in STRESS unit

Testing the hue scales

The experimental hue data are the hue composition results. The results are given in Table 9. It can be seen that all CAMs fitted the data well, and there is no significant difference between the models according to the F-test.


Table 9. Testing CAMs’ performance using hue composition datasets in STRESS unit

Testing brightness scales

Only the RVL subset of the LUTCHI data includes brightness results. The data were assessed over a large range of luminance values, from 0.1 to 1,000 cd/m2, allowing the accuracy of brightness prediction by color appearance models to be evaluated down to levels where rod responses contribute. The Li and Luo dataset, on the other hand, covers extremely high luminances, up to 10,000 cd/m2. Table 10 lists the models’ performance in STRESS units, while the scatter plots are shown in Fig. 12.

Fig. 12. (a)-(j) Plots of the brightness predictions from CIECAM16, ZCAM, sCAM, HF-QHK and HF-Q, respectively, against the LUTCHI RVL (left) and Li and Luo (right) datasets.


Table 10. Testing CAMs’ performance using brightness datasets in STRESS unit

From Table 10, it can be clearly seen that HF-QHK performed the best, followed by the sCAM and HF-Q brightness scales. This is also verified in Fig. 12, which clearly shows that HF-QHK gave the smallest data scattering, followed by sCAM and HF-Q. This indicates that including the Helmholtz–Kohlrausch (HK) effect improved the models’ performance in predicting both datasets. The HK effect is a real visual phenomenon, i.e., an increase in the purity (or chroma) of a stimulus increases its brightness when the stimulus luminance (or lightness) is held constant. No attempt was made to modify the Q of sCAM, because its performance in predicting both datasets is already quite satisfactory and the improvement would not be significant according to the F-test.

3.6 Testing the 2D scales’ performance

All 2D scales will be tested using the following two-dimensional datasets. The first is the NCS dataset, including whiteness and blackness data for over 1950 samples; the sCAM V, D, K and W scales were trained using this dataset.

The Li and Luo dataset comprises 40 surface samples, each assessed using vividness, depth, whiteness and blackness terms under four different luminance levels (10, 100, 1,000 and 10,000 cd/m2).

The performance of the 2D color appearance scales is compared between three models: Berns, ZCAM and sCAM. Their performance is given in Table 11 in STRESS units. Note that Berns only developed vividness and depth scales; these were extended to whiteness and blackness using W = 100 - D and K = 100 - V, respectively.


Table 11. Summary of the performance of the 2D scales from ZCAM, sCAM, and Berns in STRESS unit using the Li and Luo, NCS databases

Table 11 shows the performance of the 2D scales from ZCAM, sCAM, and Berns in STRESS values using the Li and Luo, NCS databases, respectively.

As mentioned in Section 2.3, the 2D scales of sCAM were obtained by fitting the NCS whiteness and blackness data (Eq. (19) and Eq. (21), respectively). The depth and vividness scales were then approximated as 100 - W and 100 - K, respectively. The same approach was applied to extend the Berns scales to whiteness and blackness, i.e., 100 - D and 100 - V, respectively. These scales are tested using the Li and Luo and NCS datasets.

The results in Table 11 clearly show that sCAM outperformed the others on both the NCS and the Li and Luo datasets. It is expected to fit the NCS whiteness and blackness data well, since it was trained on them; however, it also gives the most accurate prediction for all the 2D scales in the Li and Luo dataset. This implies good agreement between the whiteness and blackness results of the two datasets and further supports the use of the 100 - D and 100 - V relations. Finally, the Berns scales also gave reasonable performance.

The final point to address is the saturation scale, which is a common scale included in many models. Cho et al. found that the depth and saturation results are quite similar, so only the depth scale is included in the model.

4. Conclusions

A new color system named sCAM has been developed, including the sUCS uniform color space. It has the following features:

  • sUCS has a simple structure and was the second fastest amongst the UCSs tested.
  • sUCS ranked second best in predicting the 28 uniformity datasets, just one STRESS unit behind CAM16-UCS. For hue linearity, sUCS performed equal best with IPT, while CAM16-UCS gave the worst performance.
  • sCAM fitted all the available datasets best for lightness, colorfulness and brightness, and second best for hue composition. For its two-dimensional scales, the whiteness, blackness, depth and vividness scales gave the most accurate predictions for all 2D datasets.

Funding

National Natural Science Foundation of China (61775190).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. CIE S 017:2020 ILV: International Lighting Vocabulary, 2nd edition.

2. ISO/CIE 11664-5:2009 (2009) Colorimetry - Part 5: CIE 1976 L*u*v* Colour space and u’, v’ uniform chromaticity scale diagram.

3. ISO/CIE 11664-4:2019 (2019) Colorimetry - Part 4: CIE 1976 L*a*b* Colour space.

4. S. M. Newhall, “Preliminary report of the OSA subcommittee on the spacing of the Munsell colors,” J. Opt. Soc. Am. 30(12), 617–645 (1940). [CrossRef]  

5. D. L. MacAdam, “Uniform color scales,” J. Opt. Soc. Am. 64(12), 1691–1702 (1974). [CrossRef]  

6. M. R. Luo and B. Rigg, “BFD (l:c) colour-difference formula Part 1–Development of the formula,” J. Soc. Dyers Colour. 103(2), 86–94 (1987). [CrossRef]  

7. M. R. Luo and B. Rigg, “BFD (l: c) colour-difference formula Part 2-Performance of the formula,” J. Soc. Dyers Colour. 103(3), 126–132 (1987). [CrossRef]  

8. K. Witt, “Geometric relations between scales of small colour differences,” Color Res. Appl. 24(2), 78–92 (1999). [CrossRef]  

9. D. H. Kim and J. H. Nobbs, “New weighting functions for the weighted CIELAB colour difference formula,” Proceedings of AIC Colour 97, 446–449 (1997).

10. R. S. Berns, D. H. Alman, L. Reniff, et al., “Visual determination of suprathreshold color-difference tolerances using probit analysis,” Color Res. Appl. 16(5), 297–316 (1991). [CrossRef]  

11. F. J. Clarke, R. McDonald, and B. Rigg, “Modification to the JPC79 colour–difference formula,” J. Soc. Dyers Colour. 100(4), 128–132 (1984). [CrossRef]  

12. D. H. Alman, “CIE technical committee 1–29, industrial color-difference evaluation progress report,” Color Res. Appl. 18(2), 137–139 (1993). [CrossRef]  

13. M. R. Luo, G. Cui, and B. Rigg, “The development of the CIE 2000 colour-difference formula: CIEDE2000,” Color Res. Appl. 26(5), 340–350 (2001). [CrossRef]  

14. ISO/CIE 11664-6:2022 (2022) Colorimetry – Part 6: CIEDE2000 colour-difference formula.

15. C. Oleari, M. Melgosa, and R. Huertas, “Euclidean color-difference formula for small-medium color differences in log-compressed OSA-UCS space,” J. Opt. Soc. Am. A 26(1), 121–134 (2009). [CrossRef]  

16. G. Cui, M. R. Luo, B. Rigg, et al., “Uniform colour spaces based on the DIN99 colour-difference formula,” Color Res. Appl. 27(4), 282–290 (2002). [CrossRef]  

17. M. R. Luo, G. Cui, and C. Li, “Uniform colour spaces based on CIECAM02 colour appearance model,” Color Res. Appl. 31(4), 320–330 (2006). [CrossRef]  

18. M. R. Luo and R. W. G. Hunt, “The structure of the CIE 1997 colour appearance model (CIECAM97s),” Color Res. Appl. 23(3), 138–146 (1998). [CrossRef]  

19. CIE 159: 2004 (2004) A colour appearance model for colour management systems: CIECAM02.

20. C. Li, Z. Li, and M. Wang, “Comprehensive color solutions: CAM16, CAT16, and CAM16-UCS,” Color Res. Appl. 42(6), 703–718 (2017). [CrossRef]  

21. CIE 217:2016 (2016) Recommended Method for Evaluating the Performance of Colour-Difference Formulae.

22. M. R. Luo, Q. Xu, and M. Pointer, “A comprehensive test of colour-difference formulae and uniform colour spaces using available visual datasets,” Color Res. Appl. 48(3), 267–282 (2023). [CrossRef]  

23. P. C. Hung and R. S. Berns, “Determination of constant hue loci for a CRT gamut and their predictions using color appearance spaces,” Color Res. Appl. 20(5), 285–295 (1995). [CrossRef]  

24. F. Ebner and M. D. Fairchild, “Finding constant hue surfaces in color space,” in Color Imaging: Device-Independent Color, Color Hardcopy, and Graphic Arts III, Proc. SPIE 3300, 107–117 (1998).

25. F. Ebner and M. D. Fairchild, “Development and testing of a color space (IPT) with improved hue uniformity,” in Proc. Color and Imaging Conference (Society for Imaging Science and Technology, 1998), pp. 8–13.

26. K. Xiao, S. Wuerger, C. Fu, et al., “Unique hue data for colour appearance models. Part I: Loci of unique hues and hue uniformity,” Color Res. Appl. 36(5), 316–323 (2011). [CrossRef]  

27. B. Zhao, Q. Xu, and M. R. Luo, “Color difference evaluation for wide-color-gamut displays,” J. Opt. Soc. Am. A 37(8), 1257–1265 (2020). [CrossRef]  

28. M. Safdar, G. Cui, Y. J. Kim, et al., “Perceptually uniform color space for image signals including high dynamic range and wide gamut,” Opt. Express 25(13), 15131–15151 (2017). [CrossRef]  

29. A. R. Smith, “Color gamut transform pairs,” SIGGRAPH Comput. Graph. 12(3), 12–19 (1978). [CrossRef]  

30. EG 28:1993 (1993) SMPTE Engineering Guideline - Annotated Glossary of Essential Terms for Electronic Production. Eg 28:1993: 1–45.

31. P. A. Garcia, R. Huertas, M. Melgosa, et al., “Measurement of the relationship between perceived and computed color differences,” J. Opt. Soc. Am. A 24(7), 1823–1829 (2007). [CrossRef]  

32. R. W. G. Hunt and M. R. Pointer, “A colour-appearance transform for the CIE 1931 standard colorimetric observer,” Color Res. Appl. 10(3), 165–179 (1985). [CrossRef]  

33. CIE 160:2004 (2004) A review of chromatic adaptation transform

34. S. S. Stevens, “To Honor Fechner and Repeal His Law: A power function, not a log function, describes the operating characteristic of a sensory system,” Science 133(3446), 80–86 (1961). [CrossRef]  

35. R. W. G. Hunt, “The perception of color in 1° fields for different states of adaptation,” J. Opt. Soc. Am. 43(6), 479–484 (1953). [CrossRef]  

36. T. Agostini and D. R. Proffitt, “Perceptual organization evokes simultaneous lightness contrast,” Perception 22(3), 263–272 (1993). [CrossRef]  

37. C. J. Bartleson and E. J. Breneman, “Brightness perception in complex fields,” J. Opt. Soc. Am. 57(7), 953–957 (1967). [CrossRef]  

38. S. Abasi and M. D. Fairchild, “Fundamental scales of hue appearance and discrimination,” Color Res. Appl. 48(6), 673–688 (2023). [CrossRef]  

39. A. Hård, L. Sivik, and G. Tonnquist, “NCS, natural color system—From concept to research and applications. Part I,” Color Res. Appl. 21(3), 180–205 (1996). [CrossRef]  

40. A. Hård, L. Sivik, and G. Tonnquist, “NCS, natural color system—From concept to research and applications. Part II,” Color Res. Appl. 21(3), 206–220 (1996). [CrossRef]  

41. R. S. Berns, “Extending CIELAB: Vividness, depth, and clarity,” Color Res. Appl. 39(4), 322–330 (2014). [CrossRef]  

42. Y. J. Cho, L. C. Ou, and R. Luo, “A cross-cultural comparison of saturation, vividness, blackness and whiteness scales,” Color Res. Appl. 42(2), 203–215 (2017). [CrossRef]  

43. Y. J. Cho, L. C. Ou, G. Cui, et al., “New colour appearance scales for describing saturation, vividness, blackness, and whiteness,” Color Res. Appl. 42(5), 552–563 (2017). [CrossRef]  

44. M. Li and M. R. Luo, “Assessing 2D colour appearance scales under different luminance levels,” Color Res. Appl. (submitted, 2023).

45. S. Y. Zhu, M. R. Luo, G. H. Cui, et al., “Comparing large colour-difference data sets,” Color Res. Appl. 36(2), 111–117 (2011). [CrossRef]  

46. M. R. Luo, A. A. Clarke, P. A. Rhodes, et al., “Quantifying colour appearance, Part I, Lutchi colour appearance data,” Color Res. Appl. 16(3), 166–180 (1991). [CrossRef]  

47. M. R. Luo, A. A. Clarke, P. A. Rhodes, et al., “Quantifying colour appearance, Part II, Testing colour appearance models performance using LUTCHI colour appearance data,” Color Res. Appl. 31(3), 438 (2006). [CrossRef]  

48. M. R. Luo, X. W. Gao, P. A. Rhodes, et al., “Quantifying colour appearance, part III, Supplementary LUTCHI colour appearance data,” Color Res. Appl. 18(2), 98–113 (1993). [CrossRef]  

49. M. R. Luo, X. W. Gao, P. A. Rhodes, et al., “Quantifying colour appearance, part IV, Transmissive media,” Color Res. Appl. 18(3), 191–209 (1993). [CrossRef]  

50. M. R. Luo, X. W. Gao, and S. A. Scrivener, “Quantifying colour appearance, part V, simultaneous contrast,” Color Res. Appl. 20(1), 18–28 (1995). [CrossRef]  

51. L. G. Juan and M. R. Luo, “New magnitude estimation data for evaluating colour appearance models,” Colour and Visual Scales, 3–5 (2000).

52. S. Y. Choi, M. R. Luo, M. R. Pointer, et al., “Changes in colour appearance of a large display in various surround ambient conditions,” Color Res. Appl. 35(3), 200–212 (2010). [CrossRef]  

53. M. Safdar, J. Y. Hardeberg, and M. R. Luo, “ZCAM, a colour appearance model based on a high dynamic range uniform colour space,” Opt. Express 29(4), 6036–6052 (2021). [CrossRef]  

54. L. Hellwig and M. D. Fairchild, “Brightness, lightness, colorfulness, and chroma in CIECAM02 and CAM16,” Color Res. Appl. 47(5), 1083–1095 (2022). [CrossRef]  

55. L. Hellwig, D. Stolitzka, and M. D. Fairchild, “Extending CIECAM02 and CAM16 for the Helmholtz–Kohlrausch effect,” Color Res. Appl. 47(5), 1096–1104 (2022). [CrossRef]  
