Optica Publishing Group

ZCAM, a colour appearance model based on a high dynamic range uniform colour space

Open Access

Abstract

A colour appearance model based on a uniform colour space is proposed. The proposed colour appearance model, ZCAM, comprises comparatively simple mathematical equations and plausibly agrees with the psychophysical phenomenon of colour appearance perception. ZCAM consists of ten colour appearance attributes: brightness, lightness, colourfulness, chroma, hue angle, hue composition, saturation, vividness, blackness, and whiteness. Despite its relatively simple mathematical structure, ZCAM performed at least as well as the CIE standard colour appearance model CIECAM02 and its revision, CAM16, in predicting a range of reliable experimental data.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

A psychophysically plausible and mathematically simple colour appearance model that accurately predicts the most reliable experimental data has long been desired. The Commission Internationale de l’Eclairage (CIE) recommended the CIECAM02 colour appearance model in 2004 [1,2]. Since then, it has been used for applications including colour appearance prediction, colour difference evaluation [3], colour representation in uniform colour spaces [4], and colour management [5]. Although CIECAM02 has been widely used in the colour industry, some mathematical issues can occur in applications such as cross-media colour reproduction [6]. A revision of CIECAM02, named CAM16, was proposed to overcome these mathematical problems, i.e., to ensure that no computational failure occurs during image processing [6]. CIECAM02 and CAM16 include the colour appearance attributes of lightness, brightness, chroma, colourfulness, saturation, hue angle, and hue composition, for related colours in the photopic region and over a wide range of viewing conditions, and they give similar prediction performance. Viewing conditions are defined by the illumination of the adapting field, the lightness of the background, and the surround.

Safdar et al. [7] proposed a perceptually uniform colour space, $J_za_zb_z,$ to specify colours in imaging applications including high dynamic range (HDR) and wide gamut imagery. In testing, $J_za_zb_z$ was either the best colour space, or similar to the best, at predicting a wide range of experimental datasets [7]. One key component of the $J_za_zb_z$ model is the Perceptual Quantizer (PQ) curve, developed to uniformly encode a luminance range of $0.001$ to $10,000$ $cd/m^2$ [8] and previously included in the $IC_TC_P$ colour representation [9]. Recently, several studies compared the performance of $J_za_zb_z$ with other colour spaces in various image processing applications including gamut mapping [10,11], tone mapping [12], and image quality [13], as well as in the prediction of colour appearance [14] and colour difference [15] data. In these studies, $J_za_zb_z$ performed better than, or comparably to, the other test spaces.

Colour appearance attributes including vividness, blackness, depth, whiteness, and clarity are the more common representatives of our daily-life experience of colour perception [16,17]. Berns [16] introduced CIELAB-based formulae for vividness, depth, and clarity. Cho et al. [17] produced experimental data for saturation, vividness, blackness, and whiteness. They found that Berns’ vividness formula [16] did not predict experimental vividness well, but his clarity formula predicted experimental vividness satisfactorily. Further, Berns’ depth formula was found to correlate highly with experimental saturation. Based on the above, at least three more colour appearance attributes (i.e., vividness, blackness, and whiteness) should be included in a colour appearance model for image processing applications, in addition to the seven attributes included in the original CIECAM02 and CAM16.

Chromatic adaptation transform is an important step in colour appearance prediction. Smet et al. [18,19] conducted two experiments to study neutral white and chromatic adaptation using both surface and self-luminous colour samples. Based on these experimental results, Zhai and Luo [20] proposed a new formula for the effective degree of adaptation (D) and a two-step chromatic adaptation transform with improved performance, particularly for self-luminous stimuli, compared with the state-of-the-art chromatic adaptation transform CAT02 [2]. Note that CAT16’s formula for the degree of adaptation is the same as that in CAT02 [2,6]. This indicates that the CAT02 and CAT16 models need to be updated, but more experimental data are needed to develop a generic model [20].

The current study aimed to develop a mathematically simple colour appearance model, in plausible agreement with the psychophysical phenomenon of colour appearance perception, that accurately predicts existing experimental data. The proposed colour appearance model, named ZCAM, is based on the perceptually uniform colour space $J_za_zb_z$ [7] and includes ten colour appearance attributes: brightness, lightness, colourfulness, chroma, hue angle, hue composition, saturation, vividness, blackness, and whiteness.

Colour attributes of ZCAM are defined in the next section followed by a section describing the experimental data used in the current study. ZCAM is then described step–by–step followed by results and discussions. Finally, conclusions are drawn based on the test performance of ZCAM.

2. Definitions of colour appearance attributes

Definitions of some colour appearance attributes can be found in the CIE International Lighting Vocabulary (CIE ILV) [21] and/or in the literature [2,16,22–24]. In this section, a brief account of each attribute is given, including its symbol, scale, and visual phenomenon. Note that the explanation of the scale (or range) of each attribute is based on the authors’ experience. The attributes can, intuitively, be divided into one- and two-dimensional attributes.

2.1 One–dimensional colour appearance attributes

One–dimensional (1D) attributes are closely associated with the three attributes of the Munsell colour system [25], where a colour is represented in terms of Munsell Value, Munsell Chroma, and Munsell Hue. In other words, to arrange colour samples based on the colour difference between pairs of samples, three terms: light–dark, strong–weak, and hue (i.e., a colour is redder–, yellower–, greener–, or bluer–, relative to the adjacent unique hue), are frequently used.

  • Brightness ($Q$) is an "attribute of a visual perception according to which an area appears to emit, or reflect, more or less light" [21]. It is an open-end scale with its origin at pure black or complete darkness. It is an absolute scale that depends on the illumination condition, i.e., the brightness of an object increases when the illuminance of the light is increased, a visual phenomenon known as the Stevens effect [26].
  • Lightness $(J)$ is the "brightness of an area $(Q)$ judged relative to the brightness of a similarly illuminated area that appears to be white or highly transmitting $(Q_w)$" [21], i.e., $J=(Q/Q_w)$. It is a visual scale with two well-defined levels, i.e., zero and $100$ for pure black and the reference white, respectively. Note that in an HDR visual field, samples can have a higher luminance than that of the reference white, so lightness can exceed $100$. Subscripts $s$ and $w$ are used to annotate the sample and the reference white, respectively.
  • Colourfulness $(M)$ is an "attribute of a visual perception according to which the perceived colour of an area appears to be more or less chromatic" [21]. It is an open-end scale with its origin at a neutral colour, i.e., the appearance of no hue. It is an absolute scale that depends on the illumination condition, i.e., the colourfulness of an object increases when the illuminance of the light is increased, a visual phenomenon known as the Hunt effect [27].
  • Chroma $(C)$ is the "colourfulness of an area $(M)$ judged as a proportion of the brightness of a similarly illuminated area that appears white or highly transmitting $(Q_w)$" [21], i.e., $C=(M/Q_w)$. It is an open-end scale with its origin at a colour on the neutral axis. It can be estimated as the magnitude of the chromatic difference between the test colour and a neutral colour having the same lightness as the test colour.
  • Hue (composition) $(H)$ is an "attribute of a visual perception according to which an area appears to be similar to one of the colours: red, yellow, green, and blue, or to a combination of adjacent pairs of these colours considered in a closed ring" [21]. It has a $0$–$400$ scale, i.e., hue compositions of $0,$ $100,$ $200,$ $300,$ and $400$ correspond to unitary red, yellow, green, blue, and back to red, respectively. For example, a cyan colour consists of $50$% green and $50$% blue, corresponding to a hue composition of $250$.
  • Hue angle $(h)$ is a scale ranging from $0^{\textrm {o}}$ to $360^{\textrm {o}}$ with the hues following the rainbow sequence. Equal distances between pairs of hues at constant lightness and chroma correspond to equal perceived colour differences.

2.2 Two–dimensional colour appearance attributes

Two-dimensional (2D) attributes correspond to the visual experience of illumination or colour mixing. They arise from the interaction between the one-dimensional attributes of lightness $(J)$ and chroma $(C)$.

  • Saturation $(S)$ is the "colourfulness $(M)$ of an area judged in proportion to its brightness $(Q)$" [21], i.e., $S=(M/Q)$. It can also be defined as the chroma of an area judged in proportion to its lightness, i.e., $S=(C/J)$. It is an open-end scale on which all neutral colours have a saturation of zero. For example, the red bricks in a building exhibit different colours when illuminated by daylight: those directly under daylight appear bright and colourful, while those in shadow appear darker and less colourful. However, the two areas have the same saturation.
  • Vividness $(V)$ is an "attribute of colour used to indicate the degree of departure of the colour (of stimulus) from a neutral black colour" [16], i.e., $V=\sqrt {J^2+C^2}$. It is an open–end scale with origin at pure black. This reflects the visual phenomena of an object illuminated by a light to increase both the lightness and the chroma.
  • Blackness $(K)$ is a visual attribute according to which an area appears to contain more or less black content. It is a scale in the Natural Colour System (NCS) and can also be defined as the resemblance to a pure black [23,24]. It is an open-end scale with $100$ as pure black (luminance of $0$ $cd/m^2)$, i.e., $K=(100-\sqrt {J^2+C^2})=(100-V)$. The visual effect can be illustrated by mixing a black pigment into a colour pigment: the more black pigment is added, the higher the blackness will be. A blacker colour has less lightness and/or chroma than a less black colour.
  • Depth $(D)$ is an "attribute of colour used to indicate the degree of departure of the colour from a neutral white colour" [16], i.e., $D=\sqrt {(100-J)^2+C^2}$. The effect is illustrated by adding a colour pigment to a white pigment. This will result in a reduction of lightness and increase of chroma.
  • Whiteness $(W)$ is a visual attribute according to which an area appears to contain more or less white content. It is a scale of the Natural Colour System (NCS) and can also be defined as the resemblance to a pure white [23,24]. It is an open-end scale with $100$ as the reference white, i.e., $W=(100-\sqrt {(100-J)^2+C^2})=(100-D)$. The visual effect can be illustrated by mixing a white pigment into a colour pigment: the more white pigment is added, the higher the whiteness will be. A whiter colour has lower chroma and higher lightness than a less white colour.
  • Clarity $(T)$ is an "attribute of colour used to indicate the degree of departure of the colour from its background colour" [16], i.e., $T=\sqrt {(J-J_g)^2+(a-a_g)^2+(b-b_g)^2}$, where $J,$ $a,$ and $b$ represent lightness, redness–greenness, and yellowness–blueness of the colour, respectively, and subscript $g$ represents background. In other words, clarity of a colour increases as the Euclidean distance between the colour and its background colour increases.
In this study, the 2D attributes of saturation, vividness, blackness, and whiteness were developed for ZCAM (see later). Depth and clarity were not modelled because they lack sufficient experimental data.
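The 2D attribute definitions above reduce to simple Euclidean expressions in lightness $J$, chroma $C$, and (for clarity) the opponent coordinates $a$ and $b$. As an illustrative sketch only (the function names are ours, and $J,$ $C,$ $a,$ $b$ stand for the attributes of any colour space that provides them):

```python
import math

def vividness(J, C):
    # Berns [16]: distance of (J, C) from pure black (0, 0)
    return math.sqrt(J ** 2 + C ** 2)

def depth(J, C):
    # Berns [16]: distance of (J, C) from the reference white (100, 0)
    return math.sqrt((100.0 - J) ** 2 + C ** 2)

def blackness(J, C):
    # NCS-style blackness: 100 at pure black (J = C = 0)
    return 100.0 - vividness(J, C)

def whiteness(J, C):
    # NCS-style whiteness: 100 at the reference white (J = 100, C = 0)
    return 100.0 - depth(J, C)

def clarity(J, a, b, Jg, ag, bg):
    # Berns [16]: distance of a colour from its background (subscript g)
    return math.sqrt((J - Jg) ** 2 + (a - ag) ** 2 + (b - bg) ** 2)
```

Note how blackness and whiteness are simply the complements of vividness and depth, which is why a blacker colour must have less lightness and/or chroma.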

3. Experimental data

To develop and test ZCAM, a range of reliable experimental data was collected. Luo et al. [28–32] carried out a large-scale project, funded by the UK government, with the aim of accumulating colour appearance data under a wide range of viewing conditions including different illuminants, luminance levels, media (surface, display, and transmissive), and surround conditions. They first accumulated colour appearance data of lightness, colourfulness, and hue composition using surface and display media [28,29]. They then assessed surface colours under 6 different luminance levels in terms of brightness, lightness, colourfulness, and hue composition [30]. Kuo et al. [31] produced large-size textile samples under different illuminants. Finally, projected transmissive colours were studied using large cut-sheet and 35mm transparency materials under a dark surround condition [32]. These data were all accumulated at the Loughborough University of Technology Computer–Human Interface (LUTCHI) Research Centre and are collectively named the ’LUTCHI’ dataset, which is divided into seven groups based on the type and size of stimuli, i.e., RHL, RLL, RVL, RTE, CRT, M35, and LTX, representing reflective high level, reflective low level, reflective varying levels, reflective textiles, CRT display, $35mm$ transparency, and large cut-sheet, respectively. Each group is divided into a number of phases, where each phase has similar illuminant and viewing conditions. In total, there are $54$ phases comprising $4945$ data samples. Note that the RVL group includes $12$ phases ($480$ samples), of which six phases ($240$ samples) include lightness data and the other six include brightness data only. Hence, in total, $4705$ samples have lightness data and $240$ samples have brightness data only. The LUTCHI data were previously used as training data to develop the CIECAM02 and CAM16 models.
The LUTCHI dataset covers a luminance range of $0.018$ to $1272$ $cd/m^2$ for stimuli and $0.4$ to $2259$ $cd/m^2$ for reference whites. These data were collected to develop ZCAM’s attributes including lightness, brightness, colourfulness, and hue composition.

To further verify the performance of ZCAM, more experimental datasets for perceptual colour appearance were collected. Juan et al. [33,34] conducted nine experimental phases to assess lightness, colourfulness, hue composition, and saturation. Fu et al. [35,36] performed ten experimental phases to test the effect of surround field and stimulus size on the colour appearance attributes of lightness, colourfulness, and hue composition. Choi et al. [37] studied changes in the colour appearance of a large display arising from different illumination conditions; they also scaled colour appearance in terms of lightness, colourfulness, and hue composition in 9 different phases. Other researchers, including Kwak et al. [38,39] and Park et al. [40], also performed experiments and generated colour appearance data, mainly for displays. All of the above-mentioned studies found that CIECAM02 consistently outperformed the other models, which implies that, despite being generated independently, these data agree well with the LUTCHI dataset. Three of these independent datasets, Juan et al. [33], Fu et al. [35,36], and Choi et al. [37], hereinafter called the ’JUAN’, ’FU’, and ’CHOI’ data, respectively, were used as testing data to evaluate the performance of ZCAM’s attributes of lightness, colourfulness, and hue composition.

Furthermore, ZCAM is also extended to predict the (intuitively) two-dimensional colour appearance attributes introduced in Section 2.2. Cho et al. [17] performed psychophysical experiments to scale four commonly experienced colour appearance attributes: saturation, vividness, blackness, and whiteness. This dataset will hereinafter be called the ’CHO’ data. They used $120$ samples from the NCS colour Atlas to scale saturation, vividness, and whiteness, and $110$ NCS samples to scale blackness. In total, $132$ observers ($68$ British and $64$ Korean) took part in the experiment. They invited colour-naive observers (i.e., observers who had no knowledge of colour science and were not instructed that neutral colours have zero saturation) to perform the experiment. Cho et al. [41] studied individual differences in the assessment of the saturation scaled in [17] and found disagreement between different groups of observers when assessing achromatic samples (i.e., "some regarded white, a light grey or a medium grey as the least saturated among all test colours, while some others regarded black as the most saturated colour among all test colours" [41]). The saturation data for neutral colours assessed by one group of observers disagreed with the definition of saturation (see section 2.2), i.e., that samples having zero colourfulness should have zero saturation. Therefore, the current saturation formula was tested for two cases: (i) the CHO data including neutral samples, and (ii) the CHO data excluding five neutral samples (S9000–N, S5000–N, S3000–N, and S0500–N). This dataset is considered reliable and has previously been used to develop Cho et al.’s ellipsoid-based and hue-based models [42], Cho et al.’s reversible models [43], and CAM16’s extended attributes [44], for saturation, vividness, blackness, and whiteness. This dataset was used as ’training data’ to develop the current blackness and vividness formulae, and as ’test data’ to verify the performance of the current whiteness and saturation formulae.

Juan et al. [34], as mentioned above, also produced saturation data under nine different experimental conditions. The ’JUAN’ saturation dataset was used to develop ZCAM’s saturation formula. H. B. Midtfjord et al. [45] performed an experiment to scale vividness, assessed on a scale from $0$ to $100$ by a panel of $31$ observers, for $100$ colours including both surface and display colours. This dataset, hereinafter called the ’HBM’ dataset, was used to test the performance of ZCAM’s vividness formula. Finally, the Natural Colour System (NCS) dataset, comprising $1749$ blackness and whiteness data, was used as ’test data’ to verify the performance of ZCAM’s formulae for blackness and whiteness, respectively [23,24].

4. Mathematical formulation of ZCAM

Input: Let $[X,\;Y,\;Z]$ be the input tristimulus values of the test stimulus, $[X_{t,w},\; Y_{t,w},\; Z_{t,w}]$ be those of the adapting white under the test illuminant, and $[X_{r,w},\;Y_{r,w},\;Z_{r,w}]$ be those of the white under the reference illuminant, all for the CIE 1931 standard colorimetric observer. Let $L_a = L_wY_b/100$ (where $L_w$ is the luminance of the reference white and $Y_b$ is the background luminance factor) be the test adapting field luminance, and $F_s$ be a parameter accounting for the impact of the surround. Note that absolute tristimulus values (in units of $cd/m^2$) are used as input.

  • Step 0: Firstly, if input tristimulus values correspond to a reference white other than CIE D65, transform them to the CIE D65 using CAT02 [2] to obtain $[X_{D65},\;Y_{D65},\;Z_{D65}].$
  • Step 1: Now, compute the factors that depend on the viewing conditions and are independent of the test stimulus. The surround factor $(F_s),$ corresponding to the ’dark’, ’dim’, or ’average’ surround conditions, the background factor $(F_b),$ and the luminance level adaptation factor $(F_L)$ can be computed using Eqs. (1)–(3), respectively.
    $$F_s= \begin{cases} 0.525, & \textrm{dark} \\ 0.59, & \textrm{dim} \\ 0.69, & \textrm{average} \\ \end{cases}.$$

    Values of the surround factor $(F_s)$ for the ’dark’, ’dim’, and ’average’ conditions are the same as in the CIECAM02 and CAM16 [6,46]. In future, with the availability of more data, another value of $F_s$ may be determined for ’bright’ surround conditions. Ultimately, $F_s$ should be modelled as a continuous function of the surround ratio, ranging from ’dark’ to ’bright’ surround conditions, instead of discrete categories (see Eq. (1)).

    $$F_b = \sqrt{\frac{Y_b}{Y_w}},$$

    where $Y_b$ and $Y_w$ are relative luminance values of the background and the reference white, respectively.

    $$F_L = 0.171 \cdot (L_a)^{\frac{1}{3}} \cdot \left( 1-e^{-\frac{48}{9}L_a} \right).$$

    The formula for $F_L$ was developed by simplifying the two-piece function used in the CIECAM02 and CAM16 [6,46]. These factors (Eqs. (1)–(3)) are computed only once when processing image data under given viewing and surround conditions, while the remaining steps are stimulus (pixel) dependent.
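As an illustrative sketch of Step 1 (not the authors’ reference implementation; the function and variable names are ours), Eqs. (1)–(3) might be coded as:

```python
import math

SURROUND = {"dark": 0.525, "dim": 0.59, "average": 0.69}  # Eq. (1)

def viewing_factors(surround, Y_b, Y_w, L_a):
    """Stimulus-independent factors of Step 1 (Eqs. (1)-(3)).
    Y_b, Y_w: relative luminances of background and reference white;
    L_a: adapting field luminance in cd/m^2."""
    F_s = SURROUND[surround]
    F_b = math.sqrt(Y_b / Y_w)                                    # Eq. (2)
    F_L = 0.171 * L_a ** (1 / 3) * (1 - math.exp(-48 / 9 * L_a))  # Eq. (3)
    return F_s, F_b, F_L
```

These three values can be computed once per viewing condition and reused for every pixel, as the text notes.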

  • Step 2: Transform input tristimulus values into achromatic response $(I_z),$ redness–greenness $(a_z),$ and yellowness–blueness $(b_z),$ using:
    $$\begin{bmatrix} X_{D65}' \\ Y_{D65}' \end{bmatrix} = \begin{bmatrix} bX_{D65} \\ gY_{D65} \end{bmatrix} - \begin{bmatrix} (b-1)Z_{D65} \\ (g-1)X_{D65} \end{bmatrix},$$
    $$\begin{bmatrix} R \\ G \\ B \end{bmatrix} = \begin{bmatrix} 0.41478972 & 0.579999 & 0.0146480 \\ -0.2015100 & 1.120649 & 0.0531008 \\ -0.0166008 & 0.264800 & 0.6684799 \end{bmatrix} \begin{bmatrix} X_{D65}' \\ Y_{D65}' \\ Z_{D65} \end{bmatrix},$$
    $$\{R',\;G',\;B'\}= \left( \frac{ c_1+c_2\left( \frac{\{R,\;G,\;B\}}{10,000} \right)^\eta }{ 1+c_3\left( \frac{\{R,\;G,\;B\}}{10,000} \right)^\eta } \right)^\rho,$$
    $$\begin{bmatrix} a_z \\ b_z \end{bmatrix} = \begin{bmatrix} 3.524000 & -4.066708 & 0.542708 \\ 0.199076 & 1.096799 & -1.295875 \end{bmatrix} \begin{bmatrix} R' \\ G' \\ B' \end{bmatrix},$$
    $$I_z = G'-\epsilon,$$
    where $b=1.15,$ $g=0.66,$ $c_1=3424/2^{12},$ $c_2 = 2413/2^7,$ $c_3=2392/2^7,$ $\eta = 2610/2^{14},$ $\rho = 1.7 \times 2523/2^5,$ and $\epsilon = 3.7035226210190005\times 10^{-11}$. In Eq. (8), the small number $\epsilon$, with single precision, is used to ensure that the output is zero for zero input, and also to avoid the occurrence of complex numbers in the inverse implementation of ZCAM. Note that $a_z$ and $b_z$ are computed in exactly the same way as in [7], whereas the current $I_z$ is a function of $G'$ only, not of both $R'$ and $G'$ as in [7], and should not be confused with $I_z$ in [7]. The $I_z$ formula from [7] could also be used, but it would adversely affect the prediction performance (within 1 STRESS unit) of the brightness attribute (see later) for the ’LUTCHI’ dataset. The current $I_z$ represents the achromatic response (analogous to $A$ in the CIECAM02 and CAM16).
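A sketch of Step 2, assuming the matrices and constants quoted above (the function names are ours, not the authors’ reference code):

```python
b, g = 1.15, 0.66
c1, c2, c3 = 3424 / 2**12, 2413 / 2**7, 2392 / 2**7
eta, rho = 2610 / 2**14, 1.7 * 2523 / 2**5
eps = 3.7035226210190005e-11

M1 = [[0.41478972, 0.579999, 0.0146480],   # Eq. (5)
      [-0.2015100, 1.120649, 0.0531008],
      [-0.0166008, 0.264800, 0.6684799]]
M2 = [[3.524000, -4.066708, 0.542708],     # Eq. (7)
      [0.199076, 1.096799, -1.295875]]

def pq(v):
    """PQ-style curve of Eq. (6); v is absolute luminance in cd/m^2."""
    x = (v / 10000.0) ** eta
    return ((c1 + c2 * x) / (1 + c3 * x)) ** rho

def xyz_d65_to_izazbz(X, Y, Z):
    # Eq. (4): pre-adaptation of X and Y
    Xp = b * X - (b - 1) * Z
    Yp = g * Y - (g - 1) * X
    # Eq. (5): cone-like RGB, then Eq. (6): PQ encoding
    Rp, Gp, Bp = (pq(sum(m * v for m, v in zip(row, (Xp, Yp, Z))))
                  for row in M1)
    # Eq. (7): opponent signals; Eq. (8): achromatic response
    az = sum(m * v for m, v in zip(M2[0], (Rp, Gp, Bp)))
    bz = sum(m * v for m, v in zip(M2[1], (Rp, Gp, Bp)))
    Iz = Gp - eps
    return Iz, az, bz
```

Because each row of $M_2$ sums to zero, a stimulus with equal PQ-encoded cone responses maps to $a_z=b_z=0$, and $\epsilon$ makes $I_z$ vanish for a zero-luminance input.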
  • Step 3: Calculate hue angle $(h_z)$ using
    $$h_z = \tan^{{-}1}\left(\frac{b_z}{a_z}\right).$$

    Make sure that the hue angle is in degrees and ranges from $0^{\textrm {o}}$ to $360^{\textrm {o}}$. The hue angle is typically used to calculate the colour difference between a pair of samples.
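A minimal sketch of Step 3 (the function name is ours), using a two-argument arctangent to resolve the quadrant and wrapping the result into the required range:

```python
import math

def hue_angle(az, bz):
    """Eq. (9): hue angle in degrees, wrapped to [0, 360)."""
    return math.degrees(math.atan2(bz, az)) % 360.0
```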

  • Step 4: Hue composition $(H_z)$ is computed according to hue quadrature $(H)$ given as
    $$H = H_i+ \frac{100 \cdot \frac{h'-h_i}{e_i}}{\frac{h'-h_i}{e_i}+\frac{h_{i+1}-h'}{e_{i+1}}},$$
    where $h_i$, $H_i$, and $e_i$ for $i=$ $1,$ $2,$ $\cdots ,$ $5,$ can be computed using the unique hue data given in Table 1. Set $h' = h_z+360^{\textrm {o}}$ if $h_z<h_1;$ otherwise, $h'=h_z.$ Choose the index $i$ such that $h_i\leq h' \leq h_{i+1}.$ If, for example, $i=4$ and $H=346.37,$ i.e., $H$ is between $H_4$ and $H_5,$ then compute $P_L=H_5-H=53.63$ and $P_R=H-H_4=46.37,$ and round $P_L$ and $P_R$ to integers, i.e., $54$ and $46,$ respectively. Thus, according to Table 1, the hue composition $(H_z)$ of this sample is $54$% Blue and $46$% Red, and can be reported as $54\textrm {B}46\textrm {R}$ or $46\textrm {R}54\textrm {B}.$ Hue composition is used to define the hue appearance of a sample. Note that hue circles formed by equal hue angle and equal hue composition appear quite different. The values of $h_i$ and $e_i$ for $i=$ $1,$ $2,$ $\cdots ,$ $5,$ were optimized using the unitary hue data of the Natural Colour System (NCS) as reference (see Table 1). Equation (11) gives $e_z,$ which is similar to, but not the same as, the eccentricity factor given in Table 1; it is used in a later step.
    $$e_z = 1.015+\cos\left(89.038^{\textrm {o}}+h'\right)$$
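A sketch of Step 4 (the function names are ours). Since Table 1 is not reproduced here, the unique-hue arrays `h`, `e`, and `H` are passed as parameters; the test below uses synthetic placeholder values, not the paper’s Table 1:

```python
import math

def hue_quadrature(hz, h, e, H):
    """Eq. (10). h, e, H are the five unique-hue columns of Table 1
    (not reproduced here; placeholders are used in testing)."""
    hp = hz + 360.0 if hz < h[0] else hz
    i = next(k for k in range(4) if h[k] <= hp <= h[k + 1])
    num = 100.0 * (hp - h[i]) / e[i]
    den = (hp - h[i]) / e[i] + (h[i + 1] - hp) / e[i + 1]
    return H[i] + num / den

def hue_composition(Hq):
    """Report hue quadrature Hq as e.g. '54B46R' (see the worked example)."""
    names = ["R", "Y", "G", "B", "R"]
    i = min(int(Hq // 100), 3)
    PL = round((i + 1) * 100 - Hq)   # percentage of the lower unique hue
    PR = round(Hq - i * 100)         # percentage of the upper unique hue
    return f"{PL}{names[i]}{PR}{names[i + 1]}"

def eccentricity(hp):
    # Eq. (11); hp (h') in degrees
    return 1.015 + math.cos(math.radians(89.038 + hp))
```

The worked example in the text ($H=346.37$ lying between $H_4$ and $H_5$) reproduces the report $54\textrm{B}46\textrm{R}$.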
  • Step 5: Calculate brightness ($Q_z$), lightness ($J_z$), colourfulness ($M_z$), and chroma ($C_z$) using Eqs. (12)–(15), respectively.
    $$Q_z = 2700 \cdot \left(I_z\right)^{\frac{1.6 \cdot F_s}{(F_b)^{0.12}}} \cdot \left((F_s)^{2.2} \cdot (F_b)^{0.5} \cdot (F_L)^{0.2}\right)$$

    Eq. (12) indicates that a higher achromatic signal ($I_z$) gives a higher brightness. A higher-luminance adapting field and a lighter surround enhance the brightness.

    $$J_z = 100 \cdot \left(\frac{Q_z}{Q_{z,w}}\right),$$
    where $Q_{z,w}$ is brightness of the reference white. Equation (13) indicates that lightness ($J_z$) will be higher for higher brightness and will achieve a value of $100$ for the reference white. Equation (13) agrees with the CIE definition of lightness (see section 2.1) and indicates that lightness and brightness have a linear relationship; i.e., they cannot be distinguished.
    $$M_z = 100\cdot (a_z^2+b_z^2)^{0.37}\cdot \left(\frac{(e_z)^{0.068} \cdot (F_L)^{0.2} }{ (F_b)^{0.1}\cdot (I_{z,w})^{0.78}}\right),$$
    where $I_{z,w}$ is the achromatic response of the reference white. Equation (14) indicates that a larger chromatic signal $(a_z^2+b_z^2)$ gives a higher colourfulness. A higher-luminance adapting field and a darker background enhance the colourfulness.
    $$C_z = 100\cdot \left(\frac{M_z}{Q_{z,w}}\right)$$

    Eq. (15) also agrees with the CIE definition of chroma (see section 2.1) and indicates that chroma and colourfulness have a linear relationship, i.e., they cannot be distinguished. The constant coefficients and exponents in Eqs. (11), (12), and (14) were fitted using the LUTCHI data. Note that the current $J_z$ and $C_z$ given in Eqs. (13) and (15), respectively, should not be confused with those in [7], which do not incorporate viewing and surround conditions.

    Brightness attributes of CAM16 and ZCAM, and lightness attributes of CIELAB, CAM16, and ZCAM, are analyzed by plotting them for a luminance range of $0$ to $2,000$ $cd/m^2$ at the chromaticity of the CIE standard illuminant D65, as shown in Figs. 1(a) and 1(b), respectively. Figure 1(a) shows that CAM16 would over-predict brightness, whereas Fig. 1(b) shows that both CIELAB and CAM16 would under-predict lightness, compared with ZCAM, for the same stimulus under the same viewing conditions.
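A sketch of Step 5, implementing Eqs. (12)–(15) as written (the function name and argument order are ours):

```python
def step5(Iz, az, bz, ez, Izw, Qzw, Fs, Fb, FL):
    """Eqs. (12)-(15); Izw and Qzw are the achromatic response and
    brightness of the reference white."""
    # Brightness, Eq. (12)
    Qz = 2700.0 * Iz ** (1.6 * Fs / Fb ** 0.12) * (Fs ** 2.2 * Fb ** 0.5 * FL ** 0.2)
    # Lightness, Eq. (13): reaches 100 for the reference white
    Jz = 100.0 * (Qz / Qzw)
    # Colourfulness, Eq. (14)
    Mz = 100.0 * (az ** 2 + bz ** 2) ** 0.37 * (
        (ez ** 0.068 * FL ** 0.2) / (Fb ** 0.1 * Izw ** 0.78))
    # Chroma, Eq. (15)
    Cz = 100.0 * (Mz / Qzw)
    return Qz, Jz, Mz, Cz
```

As the text notes, $J_z$ and $C_z$ are just $Q_z$ and $M_z$ rescaled by the brightness of the reference white.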

  • Step 6: Calculate the two-dimensional attributes of saturation $(S_z),$ vividness $(V_z),$ blackness $(K_z),$ and whiteness $(W_z)$ using Eqs. (16)–(19), respectively.
    $$S_z = 100 \cdot \left(F_L\right)^{0.6}\cdot \sqrt{\frac{M_z}{Q_z}}$$

    Eq. (16) indicates that stronger colourfulness or lower brightness enhances saturation. A higher-luminance adapting field also enhances the saturation. The factor $F_L$ was introduced in $S_z$ to improve prediction performance, as it otherwise cancels out in the ratio of $M_z$ to $Q_z$ (see Eq. (16)). Equation (16) was developed using the Juan et al. [34] saturation data. The reason for choosing the JUAN data instead of the CHO data to train saturation was that the JUAN data comprise nine phases with varying experimental conditions, whereas the CHO dataset includes just one experimental condition [17,34].

    $$V_z = \sqrt{(J_z-58)^2+3.4\cdot(C_z)^2}$$
    $$K_z = 100-0.8\cdot \sqrt{(J_z)^2+8\cdot(C_z)^2}$$
    $$W_z = 100-\sqrt{(100-J_z)^2+(C_z)^2}$$

    Note that whiteness $(W_z)$ and blackness $(K_z)$ achieve a value of $100$ for the reference white (i.e., $J_z=100$ and $C_z=0)$ and pure black (i.e., $J_z=C_z=0),$ respectively, in agreement with their definitions given in section 2.2. The whiteness formula (see Eq. (19)) straightforwardly follows the definition given in Section 2.2 and was not trained using any experimental data. However, the constant coefficients in the formulae for vividness (see Eq. (17)) and blackness (see Eq. (18)) were fitted to the experimental data of Cho et al. [17]. To fully agree with the definition of vividness (see section 2.2), $V_z$ should intercept the grey axis at $J_z=0,$ but the optimum value found to fit the CHO data was $J_z=58;$ the corresponding values were $61$ and $53$ in the Cho et al. [42] models based on the CIELAB and CIECAM02 colour attributes, respectively. This implies that the CHO vividness data do not intercept the grey axis at pure black.
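A sketch of Step 6, implementing Eqs. (16)–(19) as written (the function name is ours):

```python
import math

def step6(Jz, Cz, Mz, Qz, FL):
    """ZCAM 2D attributes, Eqs. (16)-(19)."""
    Sz = 100.0 * FL ** 0.6 * math.sqrt(Mz / Qz)            # saturation, Eq. (16)
    Vz = math.sqrt((Jz - 58.0) ** 2 + 3.4 * Cz ** 2)       # vividness, Eq. (17)
    Kz = 100.0 - 0.8 * math.sqrt(Jz ** 2 + 8.0 * Cz ** 2)  # blackness, Eq. (18)
    Wz = 100.0 - math.sqrt((100.0 - Jz) ** 2 + Cz ** 2)    # whiteness, Eq. (19)
    return Sz, Vz, Kz, Wz
```

The boundary behaviour matches the text: $W_z=100$ at the reference white and $K_z=100$ at pure black.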


Fig. 1. (a) Brightness attributes of CAM16 and ZCAM, and (b) lightness attributes of the CIELAB, CAM16, and ZCAM, plotted for luminance ranging from $0$ to $2,000$ $cd/m^2$, where luminance of the reference white is $2,000$ $cd/m^2$.



Table 1. Unique hue data for calculation of hue quadrature

It is immediately apparent that all of the above equations are invertible. For performance comparison between models, formulae for vividness, blackness, and whiteness were also developed based on the CAM16 attributes of lightness and chroma [6], with structures similar to Eqs. (17)–(19), respectively (see Appendix 1). Inverse ZCAM and a few worked examples are given in Supplement 1.

5. Results and discussions

The Standard Residual Sum of Squares (STRESS) [47] between experimental and predicted data was minimized as the objective function (see Eq. (20)).

$$STRESS = 100\cdot \sqrt{\frac{\sum_{i=1}^N (F\cdot P_i-V_i)^2}{\sum_{i=1}^N (V_i)^2}},\;\; \textrm{where,}\;\; F = \frac{\sum_{i=1}^N (P_i \cdot V_i)}{\sum_{i=1}^N (P_i)^2},$$
where $P_i$ and $V_i$ represent the predicted and experimental values of the $i\textrm {th}$ sample, respectively, and $F$ and $N$ represent the scaling factor and the number of samples in the experimental phase, respectively.
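A sketch of the STRESS metric of Eq. (20) (the function name is ours); note that $F$ is the least-squares factor that scales the predictions onto the experimental values before the residual is measured:

```python
def stress(P, V):
    """STRESS of Eq. (20) between predicted values P and
    experimental (visual) values V."""
    F = sum(p * v for p, v in zip(P, V)) / sum(p * p for p in P)
    num = sum((F * p - v) ** 2 for p, v in zip(P, V))
    den = sum(v * v for v in V)
    return 100.0 * (num / den) ** 0.5
```

Because of the internal scaling, predictions that are a constant multiple of the experimental values yield a STRESS of exactly zero.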

5.1 One–dimensional attributes

The performance of the new ZCAM was compared with that of CAM16 and CIELAB in predicting different experimental datasets. The results were computed in terms of STRESS $(\xi )$ and the Coefficient of Correlation (r). A lower STRESS value ($0<$ STRESS $<100$) reflects better agreement between predicted and experimental data, i.e., a STRESS of $0$ means perfect agreement, whereas a higher Coefficient of Correlation ($-1< r <1$) reflects better agreement between the two datasets. Table 2 lists the results of CIELAB predicting the ’LUTCHI’ lightness data, and of CAM16 and ZCAM predicting the ’LUTCHI’ lightness, colourfulness, and hue composition data. The results show that CAM16 and ZCAM gave very similar performance in predicting colourfulness and hue composition. ZCAM performed slightly better than CIELAB and CAM16 in predicting the ’LUTCHI’ lightness data, by $1.4$ and $1.6$ STRESS units, respectively.


Table 2. Results for the CIELAB, CAM16, and ZCAM predicting the LUTCHI data, in terms of STRESS ($\xi$) and the Coefficient of Correlation (r). Best values are bold faced.

The Coefficient of Correlation followed an overall similar trend to the STRESS, but in a few cases the two metrics were not aligned (see Table 2). For example, CAM16 performed better than ZCAM in predicting lightness for the ’RLL’ and ’CRT’ data in terms of STRESS, but worse in terms of the Coefficient of Correlation (see Table 2). In several cases, the Coefficient of Correlation showed no difference between the models’ performance, whereas the STRESS values still showed a difference (e.g., for colourfulness prediction of the ’RVL’ and ’CRT’ groups of the LUTCHI data).

Figure 2 shows scatter plots between the ’RHL’ experimental data for lightness, colourfulness, and hue composition and the corresponding predictions by CAM16 and ZCAM. Note that the predicted values of lightness and colourfulness are plotted after multiplication by the scaling factor $F$ (see Eq. (20)). For perfect agreement, all data points should lie on the $45^{\textrm {o}}$ line (see the dashed black lines in Fig. 2(a–f)). Figure 2 shows that the scatter plots agree with the quantitative results (see Table 2). A similar trend was observed for all other groups of the LUTCHI data.


Fig. 2. Scatter plots between the experimental data (RHL group of the LUTCHI dataset) and predictions by corresponding attributes of (left) CAM16 and (right) ZCAM.


The ’RVL’ group of the LUTCHI data comprises twelve phases, in six of which brightness was judged instead of lightness. The STRESS values between the ’RVL’ brightness data and the predictions by CAM16 and ZCAM were $17$ and $8.4$, respectively; the latter performed better by $8.6$ STRESS units. Following the same trend, the Coefficients of Correlation between predicted and experimental data were $0.94$ and $0.98$ for CAM16 and ZCAM, respectively. Scatter plots between the experimental brightness and the corresponding predictions by CAM16 and ZCAM are shown in Figs. 3(a) and 3(b), respectively. Figure 3 clearly shows that the brightness predicted by ZCAM has less spread around the $45^{\circ}$ line than that predicted by CAM16. This again agrees with the quantitative results reported above.


Fig. 3. The experimental brightness from the RVL group of the LUTCHI data are plotted against predictions by the brightness attributes of (a) CAM16 and (b) ZCAM.


Three independent colour appearance datasets, JUAN [33], FU [35,36], and CHOI [37], were used as test data to compare the models’ performance. Note that the CHOI data originally comprised nine phases, of which seven were available for the current study. For the two phases of the CHOI data with ’bright’ surround conditions, $F_s=0.69$ (the same as for ’average’ surround conditions) was used, because changing $F_s$ gave no improvement in prediction performance; the same was reported by Choi et al. [37]. The results were again computed in terms of STRESS and the Coefficient of Correlation. Table 3 lists the results for CIELAB predicting the experimental lightness data, and for the corresponding attributes of CAM16 and ZCAM predicting the experimental lightness, colourfulness, and hue composition data. The results (see Table 3) again showed that ZCAM performed slightly better than CIELAB and CAM16 in predicting experimental lightness, by $1$ and $1.9$ STRESS units, respectively. In predicting colourfulness and hue composition, CAM16 and ZCAM performed very similarly. Once again, in most cases the results in terms of the Coefficient of Correlation followed a trend similar to that of STRESS, but in a few cases the two metrics were not aligned.


Table 3. Results for the CIELAB, CAM16, and ZCAM predicting the test data, including JUAN [33], FU [35,36], and CHOI [37], in terms of the STRESS ($\xi$) and the Coefficient of Correlation (r). Best values are bold faced.

Note that the quantitative results were computed separately for each individual phase of the experimental data; the mean results for the individual groups of the LUTCHI dataset and for each individual test dataset are given in Tables 2 and 3, respectively. The values of the Coefficient of Correlation for the prediction of perceptual hue composition are not presented because the numbers, ranging from $0.99$ to $1$, did not differentiate the test models. Also note that the scaling factor $F$ (see Eq. (20)) was set to $1$ for hue composition in all phases of the experimental data. Achromatic samples were excluded from the data when computing the STRESS and the Coefficient of Correlation for hue composition [6].

Overall, the results discussed above showed that ZCAM performed similarly to the state–of–the–art CAM16 in predicting colourfulness and hue composition, and better in predicting brightness and lightness. It can also be observed that ZCAM’s attributes have a relatively simpler mathematical structure than those of CAM16. For instance, ZCAM’s brightness and colourfulness formulae (see Eqs. (14) and (15)) are each a function of a single variable related to the attribute considered, i.e., $I_z$ and $C'=\sqrt{a^2+b^2}$ for $Q_z$ and $M_z$, respectively; the remaining terms are the viewing parameters $(F_L,\;F_s,\;\textrm{and}\;F_b)$. CAM16’s attributes, in contrast, involve many other variables; e.g., CAM16’s colourfulness is a function of additional variables including $R_a,$ $G_a,$ $B_a,$ $A$, $N_{c},$ $N_{cb},$ etc. Moreover, ZCAM’s formulae for lightness and chroma agree with the CIE definitions (see Section 2.1). This implies that it is relatively easy to unfold the psychophysical phenomena of colour appearance perception from ZCAM’s attributes compared to the corresponding attributes of CAM16 [6].
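To make this structural difference concrete, $Q_z$ and $M_z$ can be transcribed almost line-for-line from their formulae. The following is an illustrative sketch only (not the authors’ implementation); the numeric arguments in the final check are placeholder viewing-condition values, not data from the paper:

```python
import math

def zcam_brightness(I_z, F_s, F_b, F_L):
    """ZCAM brightness Q_z: one colour variable (I_z) plus viewing parameters."""
    return (2700.0 * I_z ** (1.6 * F_s / F_b ** 0.12)
            * (F_s ** 2.2 * F_b ** 0.5 * F_L ** 0.2))

def zcam_colourfulness(a_z, b_z, e_z, F_L, F_b, I_z_w):
    """ZCAM colourfulness M_z: driven by C' = sqrt(a_z^2 + b_z^2),
    modulated by the hue-dependent term e_z and viewing parameters."""
    C_prime_sq = a_z ** 2 + b_z ** 2
    return (100.0 * C_prime_sq ** 0.37
            * e_z ** 0.068 * F_L ** 0.2 / (F_b ** 0.1 * I_z_w ** 0.78))

# Brightness grows monotonically with I_z under fixed viewing conditions
# (F_s = 0.69, 'average' surround; the F_b and F_L values are placeholders):
zcam_brightness(0.2, 0.69, 0.8, 1.0) > zcam_brightness(0.1, 0.69, 0.8, 1.0)  # → True
```

Each function takes exactly one colour signal plus viewing parameters, whereas a comparable transcription of CAM16’s colourfulness would additionally need $R_a$, $G_a$, $B_a$, $A$, $N_c$, and $N_{cb}$ as inputs.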

5.2 Two–dimensional attributes

Cho et al. [17] produced experimental colour appearance data for saturation, vividness, blackness, and whiteness. These data were collected to develop and/or test ZCAM’s two–dimensional attributes. The experimental data originally ranged from $-3$ to $3$ and were re–scaled by adding an offset of $3$ and multiplying by $100/6$ to achieve a $0$–$100$ scale.
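The re-scaling is a single affine map; as a sketch:

```python
def rescale_cho(x):
    """Map a raw Cho et al. rating in [-3, 3] onto a 0-100 scale:
    add an offset of 3, then multiply by 100/6."""
    return (x + 3) * 100.0 / 6.0

rescale_cho(-3)  # → 0.0
rescale_cho(0)   # → 50.0
rescale_cho(3)   # → 100.0
```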

Table 4 lists the results for ZCAM’s and CAM16’s attributes of saturation, vividness, blackness, and whiteness (see Eqs. (16)–(19), Appendix 1, and [6]) predicting the corresponding training and testing experimental data, in terms of STRESS ($\xi$) and the Coefficient of Correlation (r). Figure 4 shows scatter plots of the CHO data against the predictions by the corresponding ZCAM attributes (Eqs. (16)–(19)). Table 4 and Fig. 4 show that ZCAM’s saturation formula predicted the JUAN and CHO (neutral included (NI) and neutral excluded (NE)) saturation data better than CAM16’s saturation formula [6]. Also note that ZCAM’s prediction of saturation improved in terms of both STRESS and the Coefficient of Correlation when the five neutral samples were excluded (see Table 4). The same is reflected in Fig. 4, where the five neutral samples are represented by different (red) symbols. The prediction results for saturation also indicate that the CHO (five neutral samples excluded) and JUAN saturation data agree with each other, which accords with a previous finding by Cho et al. [41]. CAM16’s prediction of the CHO saturation was almost equally poor in both cases (i.e., neutral included and neutral excluded).


Fig. 4. Scatter plots between the Cho et al. [17] experimental data and the corresponding predictions by ZCAM attributes: (a) saturation ($5$ neutral samples are represented by red symbols and the remaining $115$ by blue circles), (b) vividness, (c) blackness, and (d) whiteness.



Table 4. Test results for CAM16’s and ZCAM’s attributes predicting corresponding experimental data, including JUAN [34], CHO (neutral included (NI) and neutral excluded (NE)) [17], HBM [45], and NCS [23,24], in terms of the STRESS ($\xi$) and the Coefficient of Correlation (r). Best values are bold faced.

According to the STRESS results in Table 4, ZCAM again performed better than CAM16 in predicting the CHO data for vividness, blackness, and whiteness, although the Coefficient of Correlation for CAM16 was slightly higher than that for ZCAM when predicting the CHO vividness data. Also, for the HBM vividness data, the current CAM16-based vividness formula performed better than ZCAM’s. Note that Midtfjord et al. [45] concluded that "the majority of the variations in the vividness data was predicted by chroma, while results indicated that lightness does not contribute in the observers’ interpretation of vividness". This disagrees with the definition of vividness (see Section 2.2). Finally, the results in Table 4 also show that ZCAM performed better than the current CAM16-based formulae in predicting NCS blackness, but worse in predicting NCS whiteness.

6. Conclusions

A colour appearance model, ZCAM, that plausibly agrees with the psychophysical phenomena of colour appearance perception, is proposed for imaging applications. The basic equations of the $J_za_zb_z$ uniform colour space were used as the basis of the new colour appearance model. ZCAM’s ten colour appearance attributes are brightness, lightness, chroma, colourfulness, hue angle, hue composition, saturation, vividness, blackness, and whiteness.

A range of the most reliable experimental data was collected for the development and testing of ZCAM. ZCAM’s attributes of lightness, chroma, and whiteness directly follow the corresponding definitions (see Section 2), and its attributes of brightness, colourfulness, hue composition, saturation, vividness, and blackness were trained using the most reliable experimental data. These attributes were also tested using independent experimental datasets. The present results showed that, despite its relatively simpler mathematical structure, ZCAM performed better than or at least equal to CAM16, and can be used in a wide range of colour imaging applications, including high dynamic range ($0.4$ to above $2,200$ $cd/m^2$).

Appendix 1. CAM16–based attributes for vividness, blackness, and whiteness

The following three equations give the current CAM16–based attributes for vividness, blackness, and whiteness, respectively.

$$V_{16} = \sqrt{(J_{16}-54)^2+0.57\cdot (C_{16})^2},$$
$$K_{16} = 100-0.81\cdot \sqrt{(J_{16})^2+(C_{16})^2},$$
$$W_{16} = 100-\sqrt{(100-J_{16})^2+(C_{16})^2},$$
where $J_{16}$ and $C_{16}$ are CAM16’s attributes of lightness and chroma, respectively. Li et al. [44] previously developed 2D models for saturation, vividness, blackness, and whiteness based on CAM16–UCS. However, they lack agreement with the definitions (see Section 2.2); e.g., their blackness and whiteness do not reach a value of $100$ for pure black and the reference white, respectively, and their saturation and vividness never reach zero, even for pure black [44].
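For illustration, the three formulae above can be transcribed directly into code. The boundary checks confirm that, in contrast to the CAM16–UCS based models of Li et al. [44], these attributes do reach $100$ for pure black (blackness) and for the reference white (whiteness):

```python
import math

def vividness_16(J16, C16):
    """CAM16-based vividness, from J16 (lightness) and C16 (chroma)."""
    return math.sqrt((J16 - 54) ** 2 + 0.57 * C16 ** 2)

def blackness_16(J16, C16):
    """CAM16-based blackness."""
    return 100 - 0.81 * math.sqrt(J16 ** 2 + C16 ** 2)

def whiteness_16(J16, C16):
    """CAM16-based whiteness."""
    return 100 - math.sqrt((100 - J16) ** 2 + C16 ** 2)

# Boundary behaviour: pure black (J16 = 0, C16 = 0) and the
# reference white (J16 = 100, C16 = 0) reach the scale maximum.
blackness_16(0, 0)    # → 100.0
whiteness_16(100, 0)  # → 100.0
```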

Funding

National Natural Science Foundation of China (61775190).

Acknowledgments

This work was partly carried out during the tenure of an ERCIM ‘Alain Bensoussan’ Fellowship Programme.

Disclosures

The authors declare no conflicts of interest.

Supplemental document

See Supplement 1 for supporting content.

References

1. CIE TC8-01, “A color appearance model for color management systems,” in Publ. 159, Vienna: CIE Cent. Bureau (2004).

2. M. D. Fairchild, Color appearance models (John Wiley & Sons, 2013).

3. C. Li, M. R. Luo, and G. Cui, “Colour-differences evaluation using colour appearance models,” in Color and Imaging Conference, vol. 2003 (Society for Imaging Science and Technology, 2003), pp. 127–131.

4. M. R. Luo, G. Cui, and C. Li, “Uniform colour spaces based on CIECAM02 colour appearance model,” Color Res. Appl. 31(4), 320–330 (2006). [CrossRef]  

5. J. Morovič, Color gamut mapping (John Wiley & Sons, 2008).

6. C. Li, Z. Li, Z. Wang, Y. Xu, M. R. Luo, G. Cui, M. Melgosa, M. H. Brill, and M. Pointer, “Comprehensive color solutions: CAM16, CAT16, and CAM16-UCS,” Color Res. Appl. 42(6), 703–718 (2017). [CrossRef]  

7. M. Safdar, G. Cui, Y. J. Kim, and M. R. Luo, “Perceptually uniform color space for image signals including high dynamic range and wide gamut,” Opt. Express 25(13), 15131–15151 (2017). [CrossRef]  

8. SMPTE Standard, “High dynamic range electro-optical transfer function of mastering reference displays,” SMPTE ST 2084, 1–14 (2014). [CrossRef]  

9. “What is ICTCP? – Introduction,” Dolby White Paper, version 7.1, United States, pp. 1–16 (2016).

10. L. Xu, B. Zhao, and M. R. Luo, “Colour gamut mapping between small and large colour gamuts: Part I. gamut compression,” Opt. Express 26(9), 11481–11495 (2018). [CrossRef]  

11. L. Xu, B. Zhao, and M. R. Luo, “Color gamut mapping between small and large color gamuts: Part II. gamut extension,” Opt. Express 26(13), 17335–17349 (2018). [CrossRef]  

12. M. U. Khan, M. Safdar, M. F. Mughal, and M. R. Luo, “Performance comparison of uniform color spaces by integrating into a tone mapping operator,” in Applied Sciences in Graphic Communication and Packaging, (Springer, 2018), pp. 39–45.

13. M. Rousselot, O. Le Meur, R. Cozot, and X. Ducloux, “Quality assessment of HDR/WCG images using HDR uniform color spaces,” J. Imaging 5(1), 18–57 (2019). [CrossRef]  

14. B. Zhao and M. R. Luo, “Hue linearity of color spaces for wide color gamut and high dynamic range media,” J. Opt. Soc. Am. A 37(5), 865–875 (2020). [CrossRef]  

15. B. Zhao, Q. Xu, and M. R. Luo, “Color difference evaluation for wide-color-gamut displays,” J. Opt. Soc. Am. A 37(8), 1257–1265 (2020). [CrossRef]  

16. R. S. Berns, “Extending CIELAB: Vividness, depth, and clarity,” Color Res. Appl. 39(4), 322–330 (2014). [CrossRef]  

17. Y. J. Cho, L.-C. Ou, and R. Luo, “A cross-cultural comparison of saturation, vividness, blackness and whiteness scales,” Color Res. Appl. 42(2), 203–215 (2017). [CrossRef]  

18. K. A. Smet, Q. Zhai, M. R. Luo, and P. Hanselaer, “Study of chromatic adaptation using memory color matches, Part I: neutral illuminants,” Opt. Express 25(7), 7732–7748 (2017). [CrossRef]  

19. K. A. Smet, Q. Zhai, M. R. Luo, and P. Hanselaer, “Study of chromatic adaptation using memory color matches, Part II: colored illuminants,” Opt. Express 25(7), 8350–8365 (2017). [CrossRef]  

20. Q. Zhai and M. R. Luo, “Study of chromatic adaptation via neutral white matches on different viewing media,” Opt. Express 26(6), 7724–7739 (2018). [CrossRef]  

21. CIE, ILV: International Lighting Vocabulary, 2nd Edition, CIE FDIS 017/E:2019 (CIE, Vienna, 2019), http://eilv.cie.co.at.

22. R. W. G. Hunt and M. R. Pointer, Measuring colour (John Wiley & Sons, 2011).

23. A. Hård, L. Sivik, and G. Tonnquist, “NCS, Natural Color System–from concept to research and applications. Part I,” Color Res. Appl. 21(3), 180–205 (1996). [CrossRef]  

24. A. Hård, L. Sivik, and G. Tonnquist, “NCS, Natural Color System–from concept to research and applications. Part II,” Color Res. Appl. 21(3), 206–220 (1996). [CrossRef]  

25. R. G. Kuehni, “The early development of the Munsell system,” Color Res. Appl. 27(1), 20–27 (2002). [CrossRef]  

26. J. C. Stevens and S. S. Stevens, “Brightness function: Effects of adaptation,” J. Opt. Soc. Am. 53(3), 375–385 (1963). [CrossRef]  

27. R. W. G. Hunt, “Light and dark adaptation and the perception of color,” J. Opt. Soc. Am. 42(3), 190–199 (1952). [CrossRef]  

28. M. R. Luo, A. A. Clarke, P. A. Rhodes, A. Schappo, S. A. Scrivener, and C. J. Tait, “Quantifying colour appearance. Part I. LUTCHI colour appearance data,” Color Res. Appl. 16(3), 166–180 (1991). [CrossRef]  

29. M. R. Luo, A. A. Clarke, P. A. Rhodes, A. Schappo, S. A. Scrivener, and C. J. Tait, “Quantifying colour appearance. Part II. testing colour models performance using LUTCHI colour appearance data,” Color Res. Appl. 16(3), 181–197 (1991). [CrossRef]  

30. M. R. Luo, X. W. Gao, P. A. Rhodes, H. J. Xin, A. A. Clarke, and S. A. Scrivener, “Quantifying colour appearance. Part III. supplementary LUTCHI colour appearance data,” Color Res. Appl. 18(2), 98–113 (1993). [CrossRef]  

31. W.-G. Kuo, M. R. Luo, and H. E. Bez, “Various chromatic-adaptation transformations tested using new colour appearance data in textiles,” Color Res. Appl. 20(5), 313–327 (1995). [CrossRef]  

32. M. R. Luo, X. W. Gao, P. A. Rhodes, H. J. Xin, A. A. Clarke, and S. A. Scrivener, “Quantifying colour appearance. Part IV. transmissive media,” Color Res. Appl. 18(3), 191–209 (1993). [CrossRef]  

33. L.-Y. G. Juan and M. R. Luo, “New magnitude estimation data for evaluating colour appearance models,” in Colour and Visual Scales 2000: Conference Digest, Royal Holloway College, University of London Egham, UK (National Physical Laboratory, 2000), pp. 3–5.

34. L.-Y. G. Juan and M. R. Luo, “Magnitude estimation for scaling saturation,” in 9th Congress of the International Colour Association, vol. 4421 (International Society for Optics and Photonics, 2002), pp. 575–578.

35. C. Fu and M. R. Luo, “Optimised parameters for CIECAM02 based upon surround size effect,” in Conference on Colour in Graphics, Imaging, and Vision, vol. 2008 (Society for Imaging Science and Technology, 2008), pp. 49–52.

36. C. Fu and M. R. Luo, “Affecting colour appearance by surround field and stimuli sizes,” in Color and Imaging Conference, vol. 2006 (Society for Imaging Science and Technology, 2006), pp. 191–196.

37. S. Y. Choi, M. R. Luo, M. R. Pointer, C. Li, and P. A. Rhodes, “Changes in colour appearance of a large display in various surround ambient conditions,” Color Res. Appl. 35(3), 200–212 (2010). [CrossRef]  

38. Y. Kwak, L. W. MacDonald, and M. R. Luo, “Color appearance comparison between LCD projector and LCD monitor colors,” in 9th Congress of the International Colour Association, vol. 4421 (International Society for Optics and Photonics, 2002), pp. 599–602.

39. Y. Kwak, L. W. MacDonald, and M. R. Luo, “Colour appearance estimation under cinema viewing conditions,” in Conference on Colour in Graphics, Imaging, and Vision, vol. 2002 (Society for Imaging Science and Technology, 2002), pp. 64–68.

40. Y. Park, M. R. Luo, C. J. Li, Y. Kwak, D.-S. Park, and C. Kim, “Correcting veiling glare of refined CIECAM02 for mobile display,” Color Res. Appl. 38(1), 14–21 (2013). [CrossRef]  

41. Y.-J. Cho, L.-C. Ou, and M. R. Luo, “Individual differences in the assessment of colour saturation,” in Color and Imaging Conference, vol. 2012 (Society for Imaging Science and Technology, 2012), pp. 221–225.

42. Y. J. Cho, L.-C. Ou, G. Cui, and R. Luo, “New colour appearance scales for describing saturation, vividness, blackness, and whiteness,” Color Res. Appl. 42(5), 552–563 (2017). [CrossRef]  

43. Y. Ji Cho, G. Cui, and K. Sohn, “Reversible colour appearance scales for describing saturation, vividness, blackness, and whiteness for image enhancement,” in Color and Imaging Conference, vol. 2018 (Society for Imaging Science and Technology, 2018), pp. 91–95.

44. C. Li, X. Liu, K. Xiao, Y. Ji Cho, and M. R. Luo, “An extension of CAM16 for predicting size effect and new colour appearance perceptions,” in Color and Imaging Conference, vol. 2018 (Society for Imaging Science and Technology, 2018), pp. 264–267.

45. H. B. Midtfjord, P. Green, and P. Nussbaum, “Vividness as a colour appearance attribute,” in Color and Imaging Conference, vol. 2019 (Society for Imaging Science and Technology, 2019), pp. 308–313.

46. N. Moroney, M. D. Fairchild, R. W. Hunt, C. Li, M. R. Luo, and T. Newman, “The CIECAM02 color appearance model,” in Color and Imaging Conference, vol. 2002 (Society for Imaging Science and Technology, 2002), pp. 23–27.

47. P. A. Garcia, R. Huertas, M. Melgosa, and G. Cui, “Measurement of the relationship between perceived and computed color differences,” J. Opt. Soc. Am. A 24(7), 1823–1829 (2007). [CrossRef]  

Supplementary Material (1)

Supplement 1: Equations of inverse ZCAM and its working examples.



Figures (4)

Fig. 1. (a) Brightness attributes of CAM16 and ZCAM, and (b) lightness attributes of the CIELAB, CAM16, and ZCAM, plotted for luminance ranging from $0$ to $2,000$ $cd/m^2$, where the luminance of the reference white is $2,000$ $cd/m^2$.

Tables (4)

Table 1. Unique hue data for calculation of hue quadrature

Equations (23)

$$F_s = \begin{cases} 0.525, & \textrm{dark} \\ 0.59, & \textrm{dim} \\ 0.69, & \textrm{average}. \end{cases}$$
$$F_b = \sqrt{\frac{Y_b}{Y_w}},$$
$$F_L = 0.171\,(L_a)^{\frac{1}{3}}\left(1-e^{-\frac{48}{9}L_a}\right).$$
$$\begin{bmatrix} X'_{D65} \\ Y'_{D65} \end{bmatrix} = \begin{bmatrix} b\,X_{D65} \\ g\,Y_{D65} \end{bmatrix} - \begin{bmatrix} (b-1)\,Z_{D65} \\ (g-1)\,X_{D65} \end{bmatrix},$$
$$\begin{bmatrix} R \\ G \\ B \end{bmatrix} = \begin{bmatrix} 0.41478972 & 0.579999 & 0.0146480 \\ -0.2015100 & 1.120649 & 0.0531008 \\ -0.0166008 & 0.264800 & 0.6684799 \end{bmatrix} \begin{bmatrix} X'_{D65} \\ Y'_{D65} \\ Z_{D65} \end{bmatrix},$$
$$\{R',G',B'\} = \left(\frac{c_1 + c_2\left(\frac{\{R,G,B\}}{10{,}000}\right)^{\eta}}{1 + c_3\left(\frac{\{R,G,B\}}{10{,}000}\right)^{\eta}}\right)^{\rho},$$
$$\begin{bmatrix} a_z \\ b_z \end{bmatrix} = \begin{bmatrix} 3.524000 & -4.066708 & 0.542708 \\ 0.199076 & 1.096799 & -1.295875 \end{bmatrix} \begin{bmatrix} R' \\ G' \\ B' \end{bmatrix},$$
$$I_z = G' - \epsilon,$$
$$h_z = \tan^{-1}\left(\frac{b_z}{a_z}\right).$$
$$H = H_i + \frac{100\,\frac{h-h_i}{e_i}}{\frac{h-h_i}{e_i} + \frac{h_{i+1}-h}{e_{i+1}}},$$
$$e_z = 1.015 + \cos(89.038 + h_z)$$
$$Q_z = 2700\,(I_z)^{\frac{1.6\,F_s}{(F_b)^{0.12}}}\left((F_s)^{2.2}(F_b)^{0.5}(F_L)^{0.2}\right)$$
$$J_z = 100\left(\frac{Q_z}{Q_{z,w}}\right),$$
$$M_z = 100\,(a_z^2 + b_z^2)^{0.37}\left(\frac{(e_z)^{0.068}(F_L)^{0.2}}{(F_b)^{0.1}(I_{z,w})^{0.78}}\right),$$
$$C_z = 100\left(\frac{M_z}{Q_{z,w}}\right)$$
$$S_z = 100\,(F_L)^{0.6}\sqrt{\frac{M_z}{Q_z}}$$
$$V_z = \sqrt{(J_z - 58)^2 + 3.4\,(C_z)^2}$$
$$K_z = 100 - 0.8\,\sqrt{(J_z)^2 + 8\,(C_z)^2}$$
$$W_z = 100 - \sqrt{(100-J_z)^2 + (C_z)^2}$$
$$STRESS = 100\sqrt{\frac{\sum_{i=1}^{N}(F P_i - V_i)^2}{\sum_{i=1}^{N}(V_i)^2}}, \quad \textrm{where} \quad F = \frac{\sum_{i=1}^{N}(P_i V_i)}{\sum_{i=1}^{N}(P_i)^2},$$
$$V_{16} = \sqrt{(J_{16}-54)^2+0.57\,(C_{16})^2},$$
$$K_{16} = 100-0.81\,\sqrt{(J_{16})^2+(C_{16})^2},$$
$$W_{16} = 100-\sqrt{(100-J_{16})^2+(C_{16})^2},$$