Color gamut extension algorithm for various images based on laser display


Abstract

In pursuit of enhancing the display performance of gamut extension algorithms across diverse image types while minimizing image dependency, this study introduces a dynamic gamut extension algorithm. The algorithm is designed to extend the sRGB source gamut towards the wide gamut of a laser display. To evaluate its effectiveness, psychophysical experiments were conducted using four distinct image categories: complexions, scenery, objects, and color blocks and bars. The performance of the proposed algorithm was benchmarked against four established color gamut mapping algorithms. The comparative analysis revealed that our algorithm excels in handling wide color gamuts, outperforming the alternatives in terms of preference and the preservation of detail richness.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

The color gamut of a display device refers to the range of colors the device can display and is an important indicator of its color-rendering capability. There are two common representations of color gamut: the planar chromaticity diagram and the color gamut volume (CGV). The first is intuitive and easy to compare but lacks luminance perception and uniformity [1]. The second, a gamut volume that includes lightness, is based on MacAdam's theory [2,3]. Uniform color spaces (UCS) have been used in subsequent studies, such as CIE 1976 L*u*v* [4], CIE 1976 L*a*b* [5], CIECAM02-based spaces [6-8], and Jzazbz [9].

When transforming colors among different devices, ensuring faithful color reproduction and appropriate expansion is difficult because the devices cover different color ranges. Hence, gamut mapping algorithms (GMAs) have been proposed to achieve good performance across different devices [10]. In other words, a GMA is a bridge between the color gamut of the source device and that of the target device, and it directly affects the results of color image reproduction. Generally, GMAs are classified into two types: gamut compression algorithms (GCAs) and gamut extension algorithms (GEAs). In previous studies, the reverse version of a GCA was used to perform gamut extension, but it has not been confirmed to be the optimal method for achieving gamut extension because of the different evaluation criteria [11]. GMAs are evaluated according to two criteria: fidelity and preference [10,12,13]. GCAs are designed to perform natural and faithful color reproduction, whereas GEAs aim to provide natural, pleasant, and satisfactory color reproduction. For GEAs, both naturalness and preference are vital for achieving excellent performance [14]. Therefore, a trade-off between fidelity and preference must be achieved in GMAs [15-18].

Currently, common GEAs mainly include the same drive signal (SDS) and true color mapping (TCM) algorithms [19]. SDS is the simplest approach to color gamut extension but introduces a hue offset into the rendered images. TCM reproduces the true colors of the source and achieves accurate gamut mapping [20]. However, each of these algorithms satisfies only one evaluation criterion of GMAs. A hybrid color mapping (HCM) algorithm combining saturation-dependent TCM and SDS was proposed by Ward et al. [21]. Numerous gamut mapping and extension methods are documented in the literature, and interested readers are directed to Morovic's book [10]. Some involve user studies to learn gamut extension transforms [22,23], while others categorize the data and apply a different extension in each category. For example, some preserve skin tones [24,25], while others extend based on object chroma or categorize memory colors [26]. Another method uses a two-stage process: first, the source gamut is extended using a non-linear hue-varying transform, and second, an image-dependent chroma smoothing operation is applied [18]. Finally, some methods employ perceptually based image energy minimization [27,28]. Xu et al. proposed a color gamut extension algorithm based on vividness and implemented chroma extension in its third step, mapping each pixel at constant hue and lightness [29]. Vazquez-Corral et al. were the first to simultaneously expand and compress the color gamut in an image, addressing non-inclusive gamut mapping [30].

With the continuous development of display technology, laser display has emerged as a representative of wide-color-gamut display technology, known for its advantages such as high resolution, high brightness, and long lifespan [31]. With the introduction of lasers in digital display technology, the color gamut of laser display devices is significantly enhanced [32]. This enhancement often surpasses the color capabilities of cameras and goes far beyond what traditional light sources can achieve. To fully harness the color gamut potential of wide gamut display devices, it is necessary to explore color GEAs that are better suited for these devices.

Several previous studies investigated skin color images to improve skin color reproduction and achieve low image dependency through gamut mapping [18,33,34]. Therefore, in this study, to improve the display effect of GEAs for different types of images, a linear color gamut mapping algorithm at constant luminance is designed in the CIE L*a*b* color space, and a different proportion of chroma mapping is applied to each image. The goal of this approach is to achieve favorable mapping results for all images and to diminish the algorithm's dependency on the images. Four types of images, namely complexions, scenery, objects, and color blocks and bars, are selected for comprehensive comparison against TCM, SDS, HCM, and a GEA with a constant mapping ratio of 0.6. Through psychophysical experiments, images with complexions, scenery, and objects are evaluated for preference, naturalness, and detail, whereas images with color blocks and bars are evaluated for smoothness of color change. Subsequently, the z-score values and the degree of image dependency for each type of image are analyzed [35,36].

2. Theoretical analysis and algorithm design

2.1 Determination of the most saturated boundary of source and target color gamut

The foundation of the designed GMA lies in establishing the source and target color gamut for each image to be mapped, where the source color gamut and target color gamut denote the color gamut of the source device and the target device, respectively. Extracting the parameters of each pixel in the L*a*b* color space involves obtaining the XYZ data of each pixel and then converting XYZ to the L*a*b* color space. The L*, a*, and b* values of the original color of each pixel are recorded as $L^*_{\mathrm{original},ij}$, $a^*_{\mathrm{original},ij}$, and $b^*_{\mathrm{original},ij}$, respectively, where the original color refers to the color distribution of the unmapped image. The color temperature (CT) is 6500 K, and the chromaticity coordinates of the white point are (0.3127, 0.3290). The conversion between the XYZ color space and the CIE L*a*b* color space is shown in Eq. (1).

$$\left\{ \begin{aligned} L^* &= 116\, f(Y/Y_0) - 16\\ a^* &= 500\,[\, f(X/X_0) - f(Y/Y_0)\,]\\ b^* &= 200\,[\, f(Y/Y_0) - f(Z/Z_0)\,] \end{aligned} \right.$$
$$f(x) = \begin{cases} x^{1/3}, & x > (24/116)^3\\ (841/108)\,x + 16/116, & x \le (24/116)^3 \end{cases}$$
$$chroma^*_{\mathrm{original},ij} = \sqrt{a^{*\,2}_{\mathrm{original},ij} + b^{*\,2}_{\mathrm{original},ij}}$$
where $X_0$, $Y_0$, and $Z_0$ in Eq. (1) are the tristimulus values of the specified white point, and $chroma^*_{\mathrm{original},ij}$ is the chroma value of the original color in the L*a*b* color space. The chroma value of each pixel is calculated using Eq. (3). Finding the most saturated boundaries of pixels with the same luminance and hue in the source and target color gamut is crucial. MacAdam's theory proposes using the three primary color grayscales of an image to obtain the most saturated colors under the corresponding hue and luminance, providing a basis for the mapping algorithm.
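As a concrete illustration of Eqs. (1)-(3), the following Python sketch converts per-pixel XYZ data to L*a*b* and computes the chroma; the D65 white point tristimulus values and the NumPy-based interface are our assumptions, not taken from the paper.

```python
import numpy as np

# Assumed D65 white point tristimulus values (normalized so that Y0 = 1).
X0, Y0, Z0 = 0.9505, 1.0000, 1.0891

def f(x):
    """Piecewise function of Eq. (2)."""
    x = np.asarray(x, dtype=float)
    thresh = (24 / 116) ** 3
    return np.where(x > thresh, np.cbrt(x), (841 / 108) * x + 16 / 116)

def xyz_to_lab(X, Y, Z):
    """Eq. (1): XYZ tristimulus values to CIE L*a*b*."""
    L = 116 * f(Y / Y0) - 16
    a = 500 * (f(X / X0) - f(Y / Y0))
    b = 200 * (f(Y / Y0) - f(Z / Z0))
    return L, a, b

def chroma(a, b):
    """Eq. (3): chroma in the a*-b* plane."""
    return np.hypot(a, b)
```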

Combined with previous research from our laboratory [37,38], the specific steps for obtaining the most saturated boundaries at the same luminance and hue angle for each pixel of the image are as follows:

  • 1. The colors of the original images are transferred from the device-dependent to the device-independent color space, and the parameters ($L^*_{\mathrm{original},ij}$, $a^*_{\mathrm{original},ij}$, $b^*_{\mathrm{original},ij}$) of each pixel are then extracted in the L*a*b* color space.
  • 2. It is determined whether the color ($L^*_{\mathrm{original},ij}$, $a^*_{\mathrm{original},ij}$, $b^*_{\mathrm{original},ij}$) of each pixel lies on the boundary of the CGV or is an achromatic color in the source color gamut. Otherwise, the corresponding point ($L^*_{\mathrm{source},ij}$, $a^*_{\mathrm{source},ij}$, $b^*_{\mathrm{source},ij}$) of the color on the source gamut boundary is solved for while keeping the hue and luminance consistent.

    Determining whether a color lies on a boundary is performed by dichotomy (bisection). First, a step size of 10 is used to check whether the chromaticity coordinates in the source color gamut still correspond to grayscales between 0 and 255 for each primary color. If a grayscale exceeds this range, the point is illegal, which determines the lower and upper bounds of the dichotomy. The dichotomy is then applied to test whether the midpoint is legal. The stopping condition is that the distance to the boundary is < 0.5; the point is then considered to lie on the source gamut boundary (a code sketch of this search is given after the list), and the corresponding chroma is

    $$chroma^*_{\mathrm{source},ij} = \sqrt{a^{*\,2}_{\mathrm{source},ij} + b^{*\,2}_{\mathrm{source},ij}}$$

  • 3. Furthermore, using the same method, the most saturated boundary points in the target color gamut with the same hue and luminance are calculated, and the corresponding chroma is
    $$chroma^*_{\mathrm{target},ij} = \sqrt{a^{*\,2}_{\mathrm{target},ij} + b^{*\,2}_{\mathrm{target},ij}}$$
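The coarse-step search and dichotomy described in step 2 can be sketched as follows. Here `in_gamut` is a hypothetical predicate (not defined in the paper) that converts the L*a*b* color with lightness L, chroma C, and hue angle h to the device's drive values and checks that every channel falls within [0, 255].

```python
def max_chroma_on_boundary(L, h, in_gamut, step=10.0, tol=0.5):
    """Find the most saturated chroma at lightness L and hue angle h.

    A minimal sketch: a coarse search with a step of 10 brackets the gamut
    boundary, and bisection narrows the bracket until it is below 0.5.
    """
    lo = 0.0
    # Coarse search: advance until the next step would leave the gamut.
    while in_gamut(L, lo + step, h):
        lo += step
    hi = lo + step  # first illegal chroma, upper bound of the dichotomy
    # Dichotomy: test the midpoint and shrink the bracket.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if in_gamut(L, mid, h):
            lo = mid
        else:
            hi = mid
    return lo  # chroma of the most saturated in-gamut color
```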

This method is reasonable for determining the most saturated boundaries of the source and target color gamut for each image. However, its time cost in practical applications cannot be ignored. To perform color gamut mapping efficiently in practice, the calculation results can be stored in a lookup table. Taking the most widely used 3-channel 8-bit display system as an example, the drive value range of each primary color channel is [0, 255]. When additivity is satisfied, a total of 256³ = 16,777,216 colors can be displayed. According to MacAdam's theory and the related research by Ou-Yang [39], 1 + (256×3−3) + (255×255×3) + (255×254×3+1) = 390,152 colors lie on the boundary, where 1 represents the case where the channel values are (0,0,0), shown in Fig. 1 as point O; (256×3−3) represents the cases where two channels have a value of 0, shown in Fig. 1 as lines OA, OC, and OH; (255×255×3) represents the cases where only one channel has a value of 0, shown in Fig. 1 as surfaces OAFC (except lines OA and OC), OADH (except lines OA and OH), and OHEC (except lines OH and OC); and (255×254×3+1) represents the cases where one channel has a value of 255 while the other two are non-zero, shown in Fig. 1 as surfaces CEGF (except lines CF and CE), ADGF (except lines AF and AD), and HDGE (except lines HD and HE), plus point G. Therefore, it is only necessary to compute the L*a*b* color space data of the source and target color gamut for these 390,152 colors and store them in a lookup table, which allows gamut mapping of a large number of images relatively quickly and effectively saves time. The same approach can still be used in future 10-bit or even 12-bit systems.
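A minimal sketch of the lookup-table idea: the boundary colors of an 8-bit RGB cube are exactly the drive triples with at least one channel equal to 0 or 255, i.e. 256³ − 254³ = 390,152 colors. The `drive_to_lab_source` / `drive_to_lab_target` helpers are placeholders for the display model (gamma function plus additivity, Section 3.1) followed by Eq. (1); they are assumptions, not functions from the paper.

```python
import numpy as np

def boundary_drives():
    """Enumerate every drive triple on the surface of the 8-bit RGB cube."""
    grid = np.arange(256, dtype=np.uint8)
    r, g, b = np.meshgrid(grid, grid, grid, indexing="ij")
    rgb = np.stack([r, g, b], axis=-1).reshape(-1, 3)
    on_surface = np.any((rgb == 0) | (rgb == 255), axis=1)
    return rgb[on_surface]

drives = boundary_drives()
assert drives.shape[0] == 256**3 - 254**3  # 390,152 boundary colors
# Precompute once and reuse for every image to be mapped (hypothetical helpers):
# lut_source = drive_to_lab_source(drives)
# lut_target = drive_to_lab_target(drives)
```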

Fig. 1. The boundary of the three-primary display system in RGB color space

Taking an image as an example, the gamut boundary in the L*a*b* color space was drawn, as depicted in Fig. 2. The red and blue dots represent the source and target gamut boundaries, respectively, which have the same hue and luminance as the original color of each pixel point. The outer contours in Fig. 2(a) and (b) represent the boundaries of the most saturated source color gamut, while the outer contours in Fig. 2(c) and (d) represent the boundaries of the most saturated target color gamut. The red and blue dots nearly align with the most saturated boundaries, demonstrating that the most saturated boundary points with the same luminance and hue angle for each pixel can be obtained through the dichotomy method.

Fig. 2. Most saturated gamut boundaries for laser projector and sRGB system with all pixel points of sample image in L*a*b* color space; (a) CGV; (b) Top view of CGV; (c) CGV; (d) Top view of CGV; (e) Sample image;

2.2 Design of gamut mapping algorithm

For each chromatic pixel, the ratio $k_{ij}$ of its original chroma to the most saturated chroma on the target gamut boundary is calculated using Eq. (6), and the average of $k_{ij}$ is then calculated and recorded as k, as shown in Eq. (7). Because the corresponding k value is calculated for each image before mapping, the algorithm proposed in this study is referred to as the dynamic k algorithm. The values of $k_{ij}$ and k are used to decide whether to perform gamut extension (or gamut compression). If the ratio $k_{ij}$ of a pixel is less than k, gamut mapping is not performed; if it is greater than k, gamut mapping is performed according to Eq. (8).

$$k_{ij} = \frac{chroma^*_{\mathrm{original},ij}}{chroma^*_{\mathrm{target},ij}}$$
$$k = \frac{\sum k_{ij}}{k_{\mathrm{total}}}$$

When calculating the average value k, the number of achromatic pixels must be subtracted from the total number of pixels (because no gamut mapping is performed for achromatic pixels); the remaining number of chromatic pixels is denoted $k_{\mathrm{total}}$. Achromatic pixels are not mapped, and their original luminance is maintained. For the other pixels, the mapping is given by Eq. (8):

$$chroma^*_{\mathrm{final},ij} = \begin{cases} chroma^*_{\mathrm{original},ij}, & chroma^*_{\mathrm{original},ij} \le k \times chroma^*_{\mathrm{target},ij}\\[6pt] k \times chroma^*_{\mathrm{target},ij} + (1-k)\, chroma^*_{\mathrm{target},ij}\, \dfrac{chroma^*_{\mathrm{original},ij} - k \times chroma^*_{\mathrm{target},ij}}{chroma^*_{\mathrm{source},ij} - k \times chroma^*_{\mathrm{target},ij}}, & \text{else} \end{cases}$$
where $chroma^*_{\mathrm{final},ij}$ refers to the chroma value after the GMA in the L*a*b* color space.
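The complete per-pixel mapping of Eqs. (6)-(8), together with the back-substitution of Eqs. (10)-(11), can be sketched as follows; the array-based interface and variable names are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def dynamic_k_map(lab_original, chroma_source, chroma_target):
    """Sketch of the dynamic k mapping.

    lab_original: (N, 3) array of original L*, a*, b* values (one row per pixel).
    chroma_source, chroma_target: (N,) most saturated chroma on the source and
    target gamut boundaries at the same lightness and hue (Eqs. (4)-(5)),
    assumed precomputed, e.g. via the lookup table sketched above.
    """
    L, a, b = lab_original[:, 0], lab_original[:, 1], lab_original[:, 2]
    c_orig = np.hypot(a, b)                      # Eq. (3)
    chromatic = c_orig > 0                       # achromatic pixels stay unchanged

    k_ij = np.zeros_like(c_orig)
    k_ij[chromatic] = c_orig[chromatic] / chroma_target[chromatic]   # Eq. (6)
    k = k_ij[chromatic].mean()                                       # Eq. (7)

    # Eq. (8): keep colors below k * target boundary, stretch the rest linearly.
    c_final = c_orig.copy()
    s = chromatic & (c_orig > k * chroma_target)
    c_final[s] = (k * chroma_target[s]
                  + (1 - k) * chroma_target[s]
                  * (c_orig[s] - k * chroma_target[s])
                  / (chroma_source[s] - k * chroma_target[s]))

    # Eqs. (10)-(11): rescale a*, b* to the new chroma; lightness and hue kept.
    scale = np.ones_like(c_orig)
    scale[chromatic] = c_final[chromatic] / c_orig[chromatic]
    return np.column_stack([L, a * scale, b * scale])
```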

Figure 3 provides a clear visual representation of the designed color gamut mapping algorithm. In the L*a*b* color space, under conditions of equal luminance and equal hue angle, the relationship should adhere to Eq. (9). If the line segment OM is longer than the line segment OP, color gamut mapping is performed; otherwise, the original color is maintained.

$$\begin{cases} \overline{\mathrm{OM'}} = \overline{\mathrm{OM}}, & \text{if } \overline{\mathrm{OM}} \le \overline{\mathrm{OP}}\\[4pt] \overline{\mathrm{PM'}} \,/\, \overline{\mathrm{PT}} = \overline{\mathrm{PM}} \,/\, \overline{\mathrm{PS}}, & \text{else} \end{cases}$$

Fig. 3. The schematic diagram of chroma stretching in the CIE L*a*b* color space; (a) Front view; (b) Top view. Point O represents a color with zero chroma, point S is located at the boundary of the source color gamut, point T is at the boundary of the target color gamut, point M indicates the position of the color to be mapped, point M' shows the position of the color after mapping, and P is the most chromatic point that is just not mapped.

The algorithm designed in this paper can also perform the reverse operation, gamut compression, by applying the same mapping formula when the target gamut boundary lies inside the source gamut boundary. Unlike the method proposed in Ref. [30], the lightness and hue angle of the original image are not altered during the gamut mapping process. Another difference is that, in the present method, very few colors undergo gamut compression.

After color gamut mapping, the three primary color grayscale values for each pixel in the final image need to be calculated. If the pixel point has no chromaticity, then Eq. (10) holds.

$$L^*_{\mathrm{final},ij} = L^*_{\mathrm{original},ij};\qquad a^*_{\mathrm{final},ij} = a^*_{\mathrm{original},ij};\qquad b^*_{\mathrm{final},ij} = b^*_{\mathrm{original},ij}$$

If the pixel point chromaticity is not zero, then Eq. (11) holds.

$$\left\{ \begin{aligned} L^*_{\mathrm{final},ij} &= L^*_{\mathrm{original},ij}\\ a^*_{\mathrm{final},ij} &= a^*_{\mathrm{original},ij} \times chroma^*_{\mathrm{final},ij} \,/\, chroma^*_{\mathrm{original},ij}\\ b^*_{\mathrm{final},ij} &= b^*_{\mathrm{original},ij} \times chroma^*_{\mathrm{final},ij} \,/\, chroma^*_{\mathrm{original},ij} \end{aligned} \right.$$
where $L^*_{\mathrm{final},ij}$, $a^*_{\mathrm{final},ij}$, and $b^*_{\mathrm{final},ij}$ are the L*a*b* values of the final image pixels; these values are then converted back to the XYZ space to calculate the three primary color grayscale values of the mapped image.

2.3 Introduction to other algorithms

The SDS algorithm, which does not change the original image and directly inputs it into the display, is a simple GMA. Another algorithm is TCM. The core of the TCM algorithm is maintaining the XYZ values of the original image, thereby reproducing its true colors. The core of TCM is expressed in Eq. (12), where T denotes transposition.

$$[\,X \;\; Y \;\; Z\,]^{\mathrm T}_{\mathrm{original}} = [\,X \;\; Y \;\; Z\,]^{\mathrm T}_{\mathrm{final}}$$

Based on Eq. (12), we deduce

$$\begin{bmatrix} X_{\mathrm R} & X_{\mathrm G} & X_{\mathrm B}\\ Y_{\mathrm R} & Y_{\mathrm G} & Y_{\mathrm B}\\ Z_{\mathrm R} & Z_{\mathrm G} & Z_{\mathrm B} \end{bmatrix}_{\mathrm{original}} \begin{bmatrix} IR_{\mathrm{original}}\\ IG_{\mathrm{original}}\\ IB_{\mathrm{original}} \end{bmatrix} = \begin{bmatrix} X_{\mathrm R} & X_{\mathrm G} & X_{\mathrm B}\\ Y_{\mathrm R} & Y_{\mathrm G} & Y_{\mathrm B}\\ Z_{\mathrm R} & Z_{\mathrm G} & Z_{\mathrm B} \end{bmatrix}_{\mathrm{final}} \begin{bmatrix} IR_{\mathrm{final}}\\ IG_{\mathrm{final}}\\ IB_{\mathrm{final}} \end{bmatrix}$$
where IRoriginal, IGoriginal, and IBoriginal are the luminance ratios of each primary color for every pixel point in the original image, whereas IRfinal, IGfinal, and IBfinal are the luminance ratios of each primary color for every pixel point in the mapped image. The grayscale value of each primary color channel can be calculated using Eq. (13).

In theory, TCM can faithfully restore the colors of the original images; however, errors can occur when calculating the three-channel grayscale values of the mapped image. One reason is limited computational accuracy; the other is that the target color gamut does not fully contain the source color gamut, so some colors fall outside the target color gamut. However, only a small portion of the source color gamut lies outside the target color gamut, so its impact on TCM is minimal.
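In matrix form, Eq. (13) amounts to solving a 3×3 linear system per pixel. The sketch below assumes the primary tristimulus matrices of both devices are known from their display models and simply clips the luminance ratios where the source gamut exceeds the target gamut; the interface and the clipping step are our assumptions.

```python
import numpy as np

def tcm(I_original, M_original, M_final):
    """Eq. (13): keep XYZ fixed and solve for the target device's ratios.

    M_original, M_final: 3x3 primary tristimulus matrices (columns = R, G, B).
    I_original: (3, N) luminance ratios of the source image pixels.
    """
    I_final = np.linalg.solve(M_final, M_original @ I_original)
    # Out-of-gamut colors (e.g. the sRGB blue corner) are clipped as a simple fix.
    return np.clip(I_final, 0.0, 1.0)
```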

The HCM is a combination of the SDS and TCM, which analyzes the saturation of the input image and linearly combines the output of the TCM and SDS, as shown in Eq. (14).

$$[\,R \;\; G \;\; B\,]^{\mathrm T}_{\mathrm{HCM}} = (1 - \kappa)\,[\,R \;\; G \;\; B\,]^{\mathrm T}_{\mathrm{TCM}} + \kappa\,[\,R \;\; G \;\; B\,]^{\mathrm T}_{\mathrm{SDS}}$$
where κ is a mixing factor defined as a function of saturation in Eq. (15). $S_\mathrm{L}$ and $S_\mathrm{H}$ are constants, generally set to $S_\mathrm{L} = 0.8$ and $S_\mathrm{H} = 1$. HCM aims to preserve natural colors by keeping low-saturation colors unchanged while mapping high-saturation colors using SDS.
$$\kappa(S) = \begin{cases} 0, & \text{if } S < S_\mathrm{L}\\ (S - S_\mathrm{L}) \,/\, (S_\mathrm{H} - S_\mathrm{L}), & \text{if } S_\mathrm{L} < S < S_\mathrm{H}\\ 1, & \text{if } S \ge S_\mathrm{H} \end{cases}$$
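A compact sketch of the blend in Eqs. (14)-(15); the per-pixel saturation measure follows Ref. [21] and is assumed to be supplied in [0, 1] rather than computed here.

```python
import numpy as np

def hcm(rgb_tcm, rgb_sds, saturation, s_low=0.8, s_high=1.0):
    """Blend TCM and SDS outputs according to input saturation (Eqs. (14)-(15))."""
    kappa = np.clip((saturation - s_low) / (s_high - s_low), 0.0, 1.0)  # Eq. (15)
    kappa = kappa[..., None]              # broadcast over the RGB channels
    return (1 - kappa) * rgb_tcm + kappa * rgb_sds                      # Eq. (14)
```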

3. Experiment

3.1 Parameters of the laser projector

The experiment was performed using a three-primary laser projection display system with a black level of 0.2682 cd/m². The primary color chromaticity coordinates of the laser projector were R(0.7084, 0.2884), G(0.0909, 0.8018), and B(0.1700, 0.0367). The color temperature was 6500 K, and the white point chromaticity coordinates were (0.3127, 0.3290). The gamma value of the laser projector was 2.2, and the projector meets the additive colorimetric requirements. The three-primary and white point chromaticity coordinates for sRGB and the target color gamut are listed in Table 1. The center wavelengths of the red, green, and blue primaries of the laser projector are 633.1, 520.4, and 448.3 nm, respectively, and the half-wave widths are 4.9, 11.1, and 6.5 nm. The three primary colors of sRGB and the target color gamut are shown in the CIE 1931 chromaticity diagram in Fig. 4. The measurement instrument was an SRC-600 spectral color luminance meter (EVERFINE Corporation, 0.1° field of view).

Fig. 4. sRGB and the target color gamut in the 1931 CIE chromaticity diagram


Table 1. Three-primary and white point chromaticity coordinates for sRGB and the target color gamut

The blue primary of sRGB exceeds the blue primary chromaticity coordinate of the target color gamut, indicating that a portion of the source color gamut cannot be reached by the target color gamut. This can also be observed in the L*a*b* color space, where one part of the source color gamut exceeds the target color gamut, as shown in Fig. 5. A small portion of the sRGB CGV exceeds the laser projector system in the blue primary region.

Fig. 5. CGV of sRGB and target color gamut; (a) CGV; (b) Top view of CGV; (c) Front view of CGV; (d) Side view of CGV;

As noted above, the gamma value of the laser projector was 2.2, and the projector satisfied the additive colorimetric property. A complete colorimetric display model includes the colorimetric values and an electro-optical transfer function (EOTF), also called the gamma function. The EOTF of an 8-bit display is as follows:

$$L_{\mathrm{C},d_\mathrm{C}} = \left( d_\mathrm{C} / 255 \right)^{\mathrm{gamma}}$$
where $L_{\mathrm{C},d_\mathrm{C}}$ is the luminance ratio of primary color C and $d_\mathrm{C}$ is the grayscale of the corresponding primary color channel. For an 8-bit display, $d_\mathrm{C}$ is an integer in the range [0, 255]. According to Grassmann's law [40], the grayscale $d_\mathrm{C}$ of each channel corresponds to the luminance of that primary color. The relationship between the tristimulus values and the primary luminance ratios is given by Eq. (17), where X, Y, and Z are the tristimulus values of a color.
$$\begin{bmatrix} X_{(d_\mathrm{R},d_\mathrm{G},d_\mathrm{B})}\\ Y_{(d_\mathrm{R},d_\mathrm{G},d_\mathrm{B})}\\ Z_{(d_\mathrm{R},d_\mathrm{G},d_\mathrm{B})} \end{bmatrix} = \begin{bmatrix} X_{\mathrm{R},d_\mathrm{R}} & X_{\mathrm{G},d_\mathrm{G}} & X_{\mathrm{B},d_\mathrm{B}}\\ Y_{\mathrm{R},d_\mathrm{R}} & Y_{\mathrm{G},d_\mathrm{G}} & Y_{\mathrm{B},d_\mathrm{B}}\\ Z_{\mathrm{R},d_\mathrm{R}} & Z_{\mathrm{G},d_\mathrm{G}} & Z_{\mathrm{B},d_\mathrm{B}} \end{bmatrix} \begin{bmatrix} L_{\mathrm{R},d_\mathrm{R}}\\ L_{\mathrm{G},d_\mathrm{G}}\\ L_{\mathrm{B},d_\mathrm{B}} \end{bmatrix}$$
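A minimal sketch of this display model, assuming the full-intensity primary tristimulus values have been measured and that the matrix in Eq. (17) scales linearly with the per-channel luminance ratios; these assumptions are ours, not statements from the paper.

```python
import numpy as np

def drive_to_luminance_ratio(d, gamma=2.2):
    """Eq. (16): EOTF of an 8-bit channel; d is the drive value in [0, 255]."""
    return (np.asarray(d, dtype=float) / 255.0) ** gamma

def drive_to_xyz(d_rgb, primaries_xyz, gamma=2.2):
    """Eq. (17): tristimulus values of a drive triple under additivity.

    primaries_xyz: 3x3 matrix whose columns are the measured XYZ values of the
    full-intensity R, G, B primaries (assumed available).
    d_rgb: (..., 3) array of drive values.
    """
    L = drive_to_luminance_ratio(d_rgb, gamma)   # per-channel luminance ratios
    return L @ primaries_xyz.T                   # XYZ = M · L for each pixel
```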

3.2 Selection of test images for psychophysical experiments

In this study, four types of contrast images were used, each processed by TCM, SDS, HCM, an algorithm that maintains the mapping ratio at a constant value of 0.6 [29], and the proposed dynamic k algorithm. A total of 38 images were selected, and each image generated 5 test images. These images were selected for the following reasons.

  • (1) The images have k values ranging from 0.1 to 0.8, which satisfied the need for images with different mapping ratio k values, i.e., low and high saturation images.
  • (2) The images contain numerous memory colors [41], such as in flowers, fruits, animals, and scenery.
  • (3) The images contain sharp lines and fine textures.
  • (4) Complexions appear in 14 images, including images of people with different skin colors, ages, and genders, allowing the effect of skin color mapping to be evaluated.
  • (5) Some images contain gradient color bars and blocks to evaluate the hue continuity of the gamut extension algorithm.

The name and corresponding k value of each image are listed in Table 2.


Table 2. Names and k values of all the images

Fourteen images contain complexions. Images C5 to C14 are all from StyleGAN-generated datasets on the Internet [42], whereas C3 is an image from the Kodak database [43]. In addition, C1, C2, and C4 are screenshots captured from high-definition videos. Images O1, O4, O6, O8, and S3 to S8 are also screenshots captured from high-definition videos. O2, O3, and O5 are pictures captured from the surroundings of the observers in this psychophysical experiment. S1 is the test chart from MATLAB. O7 was selected from the ISO 12640 standard for evaluating the color gamut of a device [44].

Figure 6 shows 8 images containing color bars and blocks. The hues of images B1, B2, B3, and B4 are 180° and 0°, 45° and 225°, 135° and 315°, and 90° and 270°, respectively, which are intended to test the smoothness of color changes within the same hue. Each row of color blocks in B5 was taken from the colors at 16 hue angles (0°, 22.5°, 45°, 67.5°, 90°, 112.5°, 135°, 157.5°, 180°, 202.5°, 225°, 247.5°, 270°, 292.5°, 315°, and 337.5°) in the L*a*b* color space at a lightness of L* = 75, with saturation gradually increasing from left to right. To reduce the impact of the overall background lightness of the display screen on the color judgment of the color blocks, the lightness of the background between the color blocks was adjusted. The colors of each hue were divided into 20 steps from left to right, from low to maximum saturation, and were used to determine whether the hue gradient of the image after gamut mapping was uniform. The background lightness between the color blocks in B5 was L* = 50, consistent with the overall background lightness. Image B6 gradually increases in luminance from left to right in the horizontal direction and contains the same 16 hues as B5; the color of every block in B6 is the most saturated color at each lightness. B7 also increases gradually in luminance from left to right in the horizontal direction. Image B8 is a color vignette referenced from the ISO 12640 standard [45], with luminance values changing continuously in the horizontal direction. These gradient colors can be used to evaluate the reproducible gradation characteristics of various output devices. When the color bars exhibit hue discontinuities after color gamut mapping, vertical banding is easy to observe.

Fig. 6. 8 images containing color bars and blocks

Figure 7 shows the top view of each pixel of O2 and O7 in the L*a*b* space after mapping. Compared with the k = 0.6 algorithm, the dynamic k algorithm covers a larger blue portion of the color gamut volume, indicating a greater degree of gamut extension and a larger chromaticity extension of the mapped image, which agrees well with subsequent experimental observations.

Fig. 7. A top view of the L*a*b* color space of the target CGV and the mapped CGV of O2 and O7; (a) O2 after dynamic k mapping; (b) O7 after dynamic k mapping; (c) O2 after k = 0.6 mapping; (d) O7 after k = 0.6 mapping;

3.3 Psychophysical experiment

To further assess the performance of the algorithm designed in this study, four other algorithms were introduced: the SDS algorithm, which keeps the drive values unchanged; the TCM algorithm, which keeps the XYZ values unchanged; the algorithm that maintains the chroma mapping ratio at 0.6; and the HCM algorithm. These four algorithms and the dynamic k mapping algorithm proposed in this study were used to process all the images, generating five mapped images for each original image. The five test images were then compared pairwise; therefore, each original image generated $C_5^2 = (5 \times 4)/2 = 10$ comparisons. In total, there were 38 images, 190 mapped images, and 380 comparisons. The lightness of the background was set to L* = 50 in the L*a*b* color space, corresponding to a drive value of 118 for each of the three RGB channels.

To ensure that ambient light did not affect the experimental results, the experiment was carried out under dim conditions, where the measured ambient light intensity was only 0.0124 cd/m². The laser projector parameters are described in Section 3.1. The projected screen was approximately 3.2 × 1.7 m² at 2048 × 1080 pixels, and the observation distance was approximately 4 m. Thirteen observers with normal color vision, aged 22 to 25 years (average 23.46 years), took part in the experiment. All participants were asked to choose the better of the two images in each paired comparison.

There were four types of images, the first of which contained human skin colors. The second included objects with highly saturated colors. The third included natural scenery colors, such as flowers, fruits, and animals. The fourth included color blocks and bars with different luminance and saturation. The observers were asked to evaluate the first, second, and third types of images in terms of three aspects: preference, naturalness, and detail richness. Preference reflects the observers' preference for the colors presented in the image. Naturalness requires the observer to judge whether the colors presented in the image match the natural colors that should be present in the scene. Detail richness is used to assess whether the mapped images preserve texture details. The observers were asked to evaluate the smoothness of the color changes for the fourth type of image. If the color difference between the lowest- and highest-saturation colors of each line of color blocks or bars in the horizontal direction is relatively large and the color transitions are relatively uniform, the color transition smoothness of the image is considered good.

The participants' subjective wrong decisions (WD) were recorded, which characterize the degree of agreement between a participant's initial and subsequent judgments [46]. WD is defined as the number of incorrect judgments divided by the total number of judgments, where a single incorrect judgment indicates a discrepancy between the results given initially and later. The average WD of all participants was 23.61%.

4. Results and discussion

4.1 Introduction to z-score

A paired comparison method was used to evaluate the effects of the psychophysical experiment. Based on the evaluation results, the z-score was calculated according to Case V of Thurstone's Law of Comparative Judgment [35]. In Case V, the difference in visual perception between stimuli a and b can be quantitatively described by the mean of the psychological perception values of the observer, which can be calculated using the following equation:

$${\mu _\textrm{a}} - {\mu _\textrm{b}} = {z_{\textrm{ab}}}\sigma \sqrt 2$$

In Eq. (18), a and b represent two images after different color gamut mappings, and µa and µb represent the means of the observers' psychological perception values. $z_{\mathrm{ab}}$ is the z-score, with the 95% confidence interval calculated as

$$C{I_{95\%}} = {z_{\textrm{ab}}} \pm 1.96\left( {\sigma /\sqrt N } \right)$$

The final result is a proportional value for measuring image quality; therefore, $\sigma$ can be any value and is generally set to $\sqrt{0.5}$. In Eq. (19), N represents the total number of observations. The fluctuation range of the 95% confidence interval is calculated as ±0.3844. In psychophysical experiments, the relative merits of the GMAs can be judged by the z-score: the higher the z-score, the better the performance of the GMA. The smaller the fluctuation range of CI95%, the smaller the dispersion of the observers' psychological perception of the GMAs, and the more reliable the experimental results.
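For illustration, a standard Case V tally from a win-count matrix is sketched below; the exact tallying and averaging details of the paper are not reproduced, and the use of SciPy's inverse normal CDF is an implementation choice of this sketch.

```python
import numpy as np
from scipy.stats import norm

def thurstone_case_v(wins):
    """Compute per-stimulus z-scores from a paired-comparison win-count matrix.

    wins[a, b] is the number of times stimulus a was preferred over stimulus b.
    """
    n = wins + wins.T                          # comparisons per pair
    with np.errstate(invalid="ignore", divide="ignore"):
        p = np.where(n > 0, wins / n, 0.5)     # choice proportions
    p = np.clip(p, 0.01, 0.99)                 # avoid infinite z for unanimous pairs
    z = norm.ppf(p)                            # z_ab as in Eq. (18)
    np.fill_diagonal(z, 0.0)
    return z.mean(axis=1)                      # average z-score per stimulus
```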

4.2 Overall analysis of the five algorithms for all the test images

The z-scores and variance values of the five algorithms for the four types of images are presented in Table 3, and the values over all images are depicted in Fig. 8. In the paired comparison experiment, the mapped image selected by an observer was assigned a score of 1, while the other image in the pair was assigned a score of 0. Each GMA was then given a final score based on the images it mapped. The term variance refers to the variability of each GMA's scores across all observers. A small variance value indicates that the algorithm is more likely to be applicable to different types of images, suggesting weak image dependency.

Fig. 8. Z-scores and variance values diagrams of all test images; (a) Z-scores; (b) Variance values;


Table 3. Psychophysical experiment evaluation results: z-scores and variance values of the five algorithms for the four categories of images and all the images.

Overall, the dynamic k algorithm demonstrated the best performance, followed by the HCM algorithm, then the k = 0.6 algorithm, with SDS exhibiting the poorest performance. The k = 0.6 algorithm showed the lowest image dependency among the five GMAs. In particular, the dynamic k algorithm excelled in preference and detail richness, with only a slight decrease in naturalness, likely due to significant chroma stretching after its application. Furthermore, the dynamic k algorithm ranked fourth in terms of score variances, slightly outperforming SDS. This is because the mapping proportions for each image are generally different, depending on the saturation distribution of all pixels in the image, introducing some level of image dependency.

The z-scores and variance values of the different types of images for each evaluation criterion are depicted in Fig. 9, providing a clear illustration of the performance of all algorithms across image types. The dynamic k algorithm performed exceptionally well with images featuring complexions and scenery. However, its performance on images containing objects was slightly lacking. In contrast, the TCM algorithm performed well on these images, especially in naturalness. This may be because observers' memory colors of such artificial objects tend to be low in saturation. It is worth noting that, for the images with complexions, even though the dynamic k algorithm is not as good as HCM in preference and naturalness, its overall performance is very pleasing. As for the images with color bars and blocks, the SDS algorithm performed best, followed by the dynamic k algorithm. Figure 9(b) shows the variance values of each GMA for the different types of images; the variance distribution in Fig. 9(b) is similar to that in Fig. 8(b). To offer a more detailed analysis of the performance of the GEA employed in this study, each image type is evaluated separately below.

Fig. 9. Z-scores and variance values diagrams of each evaluation criterion for the four types of images; (a) Z-scores; (b) Variance values. “Complexion” and “C” represent 14 images with skin color, “Objects” and “O” represent 8 images containing objects, “Scenery” and “S” represents 8 images containing scenery, and “Block” and “B” represents 8 images containing color blocks and bars. “P”, “N”, “D”, and “Smooth” respectively represent four evaluation criteria: preference, naturalness, detail richness, and smoothness of color changes.

It is important to acknowledge that Zamir et al. also employed HCM as a comparative algorithm in their paper [47]. However, due to variations in experimental conditions and test images, direct comparisons with the results of our experiment are not feasible. To further substantiate the efficacy of our proposed algorithm, future studies may consider incorporating additional comparative algorithms. In our experiment, we used gamma calibration and multi-point testing to analyze the display system. These methods helped us understand performance characteristics, such as color accuracy and brightness consistency. If further accuracy improvement is needed, testing methods can be enhanced. Precise calibration and testing enabled us to gather accurate data and obtain a more comprehensive understanding.

4.3 Analysis of the five algorithms for the images with complexion

The results of z-scores and variance values for preference, naturalness, and details of each image with different complexions are shown in Fig. 10. Figures 10(a), (c), and (e) display the z-scores diagrams, while Fig. 10(b), (d), and (f) show the variance values diagrams.

Fig. 10. Z-scores and variance values diagrams of the 14 images with complexion; (a) Z-scores of preference; (b) Variance values of preference; (c) Z-scores of naturalness; (d) Variance values of naturalness; (e) Z-scores of details; (f) Variance values of details;

Regarding preference, the z-scores of the images in the dynamic k algorithm are mostly greater than zero, except for C12. Among the images, C3, C7, C8, C9, and C11 in the dynamic k algorithm exhibit the highest z-scores compared to the other algorithms. From Fig. 10(a), it is evident that both the dynamic k and HCM algorithms are relatively preferred, followed by k = 0.6, while the SDS algorithm yields the least favorable image effects. The variance values in Fig. 10(b) illustrate that the variance of the SDS algorithm has a wide range of changes, indicating relatively large differences in the image scores given by the observers for the SDS algorithm. The score variance of the dynamic k algorithm is smaller than that of SDS but higher than that of TCM, HCM, and k = 0.6 algorithms, suggesting that the dynamic k algorithm's applicability is more stable than SDS but less stable than the others.

In terms of naturalness, the z-scores of C6, C9, and C11 in the dynamic k algorithm are the highest among the five algorithms. From Fig. 10(c), it can be observed that, overall, the naturalness of images with different complexions is best represented in the HCM algorithm, followed by TCM, k = 0.6, and dynamic k algorithms, with SDS being the least natural. The variance values diagram in Fig. 10(d) shows that the SDS algorithm exhibits a high image dependency, with dynamic k ranking fourth and TCM exhibiting the least score variance.

Regarding details, the dynamic k algorithm demonstrates strong performance with high z-score values; except for C2, C4, C10, C13, and C14, its z-scores are the highest among the algorithms. The SDS algorithm shows a high score variance, with the dynamic k algorithm ranking fourth and the k = 0.6 algorithm exhibiting the lowest variance values for the detail criterion, as shown in Fig. 10(f).

Overall, the dynamic k algorithm had a significant impact on the preference and details of the skin images, and its overall performance is commendable. The HCM algorithm had a substantial effect on naturalness, followed by TCM and k = 0.6. The dynamic k algorithm extends the color gamut in high-saturation regions while preserving the original color in low-saturation regions, and it also improves the richness of image details.

It is worth mentioning that the test images with various skin colors could be divided into two groups: those generated by StyleGAN and those collected from real scenes. During the experiment, all images were classified together to analyze the algorithms' impact on skin color. While we acknowledge that StyleGAN-generated images may not fully represent the statistical characteristics of natural faces, we chose to use them to demonstrate the effectiveness of our approach in challenging scenarios. In future research, additional datasets that better capture the statistical features of natural faces will be considered to further validate the method. Additionally, from Fig. 10(a), it can be observed that only the z-score of the algorithm proposed in this paper is greater than zero for image C3. Image C3 stands out from the others because of its highly saturated background. This discrepancy may be attributed to observers' sensitivity to skin tone, as the background saturation can affect the mapping of skin tone when processed by our algorithm. Future analysis of the GMA effect can be further refined on the basis of this observation for image classification.

4.4 Analysis of the five algorithms for the images with objects

Figure 11 depicts the z-scores and variance values diagrams for the 8 images featuring objects. Three evaluation criteria were utilized for these images: preference, naturalness, and details.

Fig. 11. Z-scores and variance values diagrams of the 8 images with objects; (a) Z-scores of preference; (b) Variance values of preference; (c) Z-scores of naturalness; (d) Variance values of naturalness; (e) Z-scores of details; (f) Variance values of details;

In terms of preference, the z-scores of the dynamic k algorithm are the highest among the five GMAs for all images except O5, O7, and O8. From Fig. 11(a), it is evident that the dynamic k algorithm exhibits the best performance, followed by the k = 0.6 algorithm, while TCM and SDS consistently perform poorly. Regarding variance, the results differ from Fig. 8(b): the dynamic k algorithm ranks second, behind only the k = 0.6 algorithm.

In relation to naturalness, the TCM algorithm demonstrates the best performance, followed by k = 0.6, with the dynamic k algorithm in fourth place, and the SDS algorithm exhibiting the poorest performance based on Fig. 11(c). Furthermore, the variance in the naturalness of the k = 0.6 algorithm in Fig. 11(d) is relatively small, indicating that most observers generally believe that the naturalness effect of these images obtained through k = 0.6 performs well.

Regarding details, we observed that the dynamic k algorithm had the best mapping effect, followed by the TCM algorithm, while the SDS algorithm performed the worst. However, the score variance of the k = 0.6 algorithm is the lowest. Overall, the dynamic k algorithm performs best in terms of preference and details, but poorly in terms of naturalness. Additionally, the k = 0.6 algorithm exhibits low variance values in all three evaluation criteria, however, its z-scores are not the best.

4.5 Analysis of the five algorithms for the images with scenery

Figure 12 illustrates the z-scores and variance values diagrams for the 8 scenic images, evaluated based on three criteria: preference, naturalness, and details.

Fig. 12. Z-scores and variance diagrams of the 8 images with scenery; (a) Z-scores of preference; (b) Variance values of preference; (c) Z-scores of naturalness; (d) Variance values of naturalness; (e) Z-scores of details; (f) Variance values of details;

Regarding preference, the dynamic k algorithm demonstrates the best performance, followed by the HCM algorithm, while the SDS algorithm exhibits the poorest overall performance, as shown in Fig. 12(a). In terms of naturalness, the TCM algorithm performs well among the five algorithms, however, its score variance is relatively large. The k = 0.6 algorithm achieves a high z-score, lower than TCM, but with less image dependency. When considering details, the z-score of the dynamic k algorithm is the highest, followed by the k = 0.6 algorithm, as depicted in Fig. 12(e).

Despite the strong performance of the HCM algorithm, it may impact color-continuous areas within the image, as evidenced in the section inside the red box in Fig. 13. The reason for this outcome might be that the HCM algorithm does not operate in a uniform color space. In summary, the k = 0.6 algorithm demonstrates low image dependency across the three evaluation criteria, while the dynamic k algorithm surpasses the other algorithms in terms of performance.

Fig. 13. A sample image after dynamic k and HCM mapping

4.6 Analysis of the five algorithms for the images with color bars and blocks

Figure 14 displays the z-scores and variance values of the eight test images containing color blocks and bars, evaluated based on the smoothness of color changes.

Fig. 14. Z-scores and variance values diagrams of the 8 images with bars and blocks; (a) Z-scores for smoothness of color change; (b) Variance values for smoothness of color change;

As shown in Fig. 14(a), among the five algorithms, TCM performs the worst, while SDS exhibits the best performance, followed by the dynamic k algorithm. In Fig. 14(b), the score variance of TCM is the lowest, indicating that most observers agree on its poor performance in terms of the smoothness of color changes. The dynamic k algorithm has a high z-score (though lower than that of SDS) and a high score variance, indicating high image dependency. Although the z-score of SDS is higher than that of the dynamic k algorithm and its score variance is relatively low, SDS has a significant drawback: it heavily depends on the shapes of the target and source color gamut, which can easily change the hue of the mapped images. As shown in Fig. 15, the hue of the color inside the red box differs. While the color transitions may appear smooth to the human eye, the hues of the colors can be altered, losing their original appearance. This also offers valuable insights into the selection of a color space. In this article, we conduct hue-preserving chroma stretching in the CIE L*a*b* space, as it preserves hue well for most colors. However, if maintaining a consistent hue angle is a strict requirement, color stretching could be performed in a color space with more uniform hue, such as the IPT space.

Fig. 15. The image B3 after SDS and TCM mapping

5. Conclusion

In this study, a three-primary laser projector was used to establish the target color gamut, with sRGB serving as the source color gamut. To enhance the display effect of color gamut extension across different image types, a linear chroma gamut mapping algorithm was developed to perform color mapping at a dynamic ratio for each image in the CIE L*a*b* color space. This algorithm was compared with TCM, SDS, HCM, and an algorithm maintaining a constant mapping ratio of 0.6. A series of psychophysical experiments was designed, and four types of images featuring complexions, objects, scenery, and color blocks and bars were selected to evaluate performance in four aspects: preference, naturalness, detail richness, and smoothness of color changes. Overall, with respect to wide color gamuts, the results of the psychophysical experiments revealed that the algorithm designed in this study demonstrated the best performance in terms of preference and details. TCM exhibited the best performance in naturalness, while the SDS algorithm excelled in smoothness but had the potential to alter color hues. The k = 0.6 algorithm displayed relatively low image dependency. Notably, when mapping skin color images, the overall performance of the dynamic k algorithm was commendable in terms of preference, naturalness, and detail.

Funding

National Key Research and Development Program of China (2021YFF0307804).

Disclosures

The authors declare no conflicts of interest.

Data Availability

The data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. K. V. Chellappan, E. Erden, and H. Urey, “Laser-based displays: a review,” Appl. Opt. 49(25), F79–98 (2010). [CrossRef]  

2. D. L. MacAdam, “Maximum visual efficiency of colored materials,” J. Opt. Soc. Am. 25(11), 361–367 (1935). [CrossRef]  

3. D. L. MacAdam, “The theory of the maximum visual efficiency of colored materials,” J. Opt. Soc. Am. 25(8), 249–252 (1935). [CrossRef]  

4. M. R. Pointer, “A comparison of the CIE 1976 colour spaces,” Color Res. Appl. 6(2), 108–118 (1981). [CrossRef]  

5. K. McLaren, “The development of the CIE 1976 (L*a*b*) uniform colour space and colour-difference formula,” J. Soc. Dyers Colour. 92(9), 338–341 (1976). [CrossRef]  

6. Commission Internationale de l'Eclairage (CIE), “A colour appearance model for colour management systems: CIECAM02,” CIE Publication 159 (2004).

7. C. Li, Z. Li, Z. Wang, et al., “A revision of CIECAM02 and its CAT and UCS,” in Color and Imaging Conference (Society for Imaging Science and Technology2016), pp. 208–212.

8. M. R. Luo, G. H. Cui, and C. J. Li, “Uniform colour spaces based on CIECAM02 colour appearance model,” Color Res. Appl. 31(4), 320–330 (2006). [CrossRef]  

9. M. Safdar, G. Cui, Y. J. Kim, et al., “Perceptually uniform color space for image signals including high dynamic range and wide gamut,” Opt. Express 25(13), 15131–15151 (2017). [CrossRef]  

10. J. Morovič, Color gamut mapping (John Wiley & Sons, 2008).

11. M. Sakurai, T. Nakatsue, Y. Shimpuku, et al., “67.1: Evaluation of gamut-expansion algorithms for wide-gamut display,” in 2009 Vehicles and Photons Symposium, October 15, 2009 - October 16, 2009 (Blackwell Publishing Ltd, Dearborn, MI, United states, 2009), pp. 1006–1009.

12. H. Yeganeh and Z. Wang, “Objective quality assessment of tone-mapped images,” IEEE Trans. on Image Process. 22(2), 657–667 (2013). [CrossRef]  

13. S. N. Yendrikhovskij, F. J. J. Blommaert, and H. de Ridder, “Color reproduction and the naturalness constraint,” Color Res. Appl. 24(1), 52–67 (1999). [CrossRef]  

14. L. MacDonald, J. Morovič, and D. Saunders, “Evaluation of colour fidelity for reproductions of fine art paintings,” Museum Management and Curatorship 14(3), 253–281 (1995). [CrossRef]  

15. J. Yuan, J. Y. Hardeberg, and G. Chen, “Development and evaluation of a hybrid point-wise gamut mapping framework,” in 2015 Colour and Visual Computing Symposium (CVCS) (IEEE, 2015), pp. 1–4.

16. S. W. Zamir, J. Vazquez-Corral, and M. Bertalmio, “Gamut extension for cinema: psychophysical evaluation of the state of the art, and a new algorithm,” in Conference on Human Vision and Electronic Imaging XX (Spie-Int Soc Optical Engineering, San Francisco, CA, 2015).

17. S. Nakauchi, S. Hatanaka, and S. Usui, “Color gamut mapping based on a perceptual image difference measure,” Color Res. Appl. 24(4), 280–291 (1999). [CrossRef]  

18. Y. Li, G. Song, and H. Li, “A multilevel gamut extension method for wide gamut displays,” in 2011 International Conference on Electric Information and Control Engineering (IEEE, 2011), pp. 1035–1038.

19. J. Laird, R. Muijs, and J. Kuang, “Development and evaluation of gamut extension algorithms,” Color Res. Appl. 34(6), 443–451 (2009). [CrossRef]  

20. G. Sharma, “Digital color imaging handbook,” (2003).

21. G. Ward, H. Yoo, A. Soudi, et al., “Exploiting wide-gamut displays,” Color and Imaging Conference 24(1), 163–167 (2016). [CrossRef]  

22. H. Anderson, E. K. Garcia, and M. R. Gupta, “Gamut expansion for video and image sets,” in 14th International Conference on Image Analysis and Processing Workshops, ICIAP 2007, September 10, 2007 - September 13, 2007 (IEEE Computer Society, Modena, Italy, 2007), pp. 188–191.

23. B.-H. Kang, J. Morovic, M. R. Luo, et al., “Gamut compression and extension algorithms based on observer experimental data,” ETRI J. 25(3), 156–170 (2003). [CrossRef]  

24. H. Pan and S. Daly, “A gamut-mapping algorithm with separate skin and non-skin color preference controls for wide-color-gamut TV,” in 2008 SID International Symposium, May 20, 2008 - May 21, 2008 (Society for Information Display, Los Angeles, CA, United states, 2008), pp. 1363–1366.

25. X. Meng, G. Song, and H. Li, “A human skin-color-preserving extension algorithm for wide gamut displays,” in 2012 International Conference on Information Technology and Software Engineering, ITSE 2012, December 8, 2012 - December 10, 2012 (Springer Verlag, Beijing, China, 2013), pp. 705–713.

26. S. E. Casella, R. L. Heckaman, M. D. Fairchild, et al., “Mapping standard image content to wide-gamut displays,” in 16th Color Imaging Conference: Color Science and Engineering Systems, Technologies, and Applications, November 10, 2008 - November 15, 2008 (Society for Imaging Science and Technology, Portland, OR, United states, 2008), pp. 106–111.

27. S. W. Zamir, J. Vazquez-Corral, and M. Bertalmio, “Gamut mapping in cinematography through perceptually-based contrast modification,” IEEE J. Sel. Top. Signal Process. 8(3), 490–503 (2014). [CrossRef]  

28. S. W. Zamir, J. Vazquez-Corral, and M. Bertalmio, “Gamut extension for cinema,” IEEE Trans. on Image Process. 26(4), 1595–1606 (2017). [CrossRef]  

29. L. Xu, B. Zhao, and M. R. Luo, “Color gamut mapping between small and large color gamuts: part II. gamut extension,” Opt. Express 26(13), 17335–17349 (2018). [CrossRef]  

30. J. Vazquez-Corral and M. Bertalmío, “Spatial gamut mapping among non-inclusive gamuts,” Journal of Visual Communication and Image Representation 54, 204–212 (2018). [CrossRef]  

31. Z.-Y. Xu, “Large colour gamut display - the new generation of display technique,” Wuli 39(4), 227–231 (2010).

32. Y. Yuan, Y. Bi, Z. Xu, et al., “Present development and tendency of laser display technology,” Chinese Journal of Engineering Science 22(3) (2020).

33. Y.-K. Lai and S.-M. Lee, “Wide color-gamut improvement with skin protection using content-based analysis for display systems,” J. Display Technol. 9(3), 146–153 (2013). [CrossRef]  

34. K. Xiao, N. Liao, F. Zardawi, et al., “Investigation of Chinese skin colour and appearance for skin colour reproduction,” Chin. Opt. Lett. 10(8), 083301 (2012). [CrossRef]  

35. L. L. Thurstone, “A law of comparative judgment,” Psychol Rev. 34(4), 273–286 (1927). [CrossRef]  

36. R. Rajae-Joordens and J. Engel, “Paired comparisons in visual perception studies using small sample sizes,” Displays 26(1), 1–7 (2005). [CrossRef]  

37. B. Yao, L. Zhu, Y. Yang, et al., “General solution to the calculation of peak luminance of primaries in multi-primary display systems,” Opt. Express 30(2), 1036–1055 (2022). [CrossRef]  

38. B. Yao, L. Zhu, L. Deng, et al., “Upper limit of gamut volumes in multi-primary display systems,” Opt. Express 30(20), 36576–36591 (2022). [CrossRef]  

39. M. Ou-Yang and S. W. Huang, “Determination of gamut boundary description for multi-primary color displays,” Opt. Express 15(20), 13388–13403 (2007). [CrossRef]  

40. D. H. Krantz, “Color measurement and color theory: I. Representation theorem for Grassmann structures,” J. Math. Psychol. 12(3), 283–303 (1975). [CrossRef]  

41. K. Amano, K. Uchikawa, and I. Kuriki, “Characteristics of color memory for natural scenes,” J. Opt. Soc. Am. A 19(8), 1501–1514 (2002). [CrossRef]  

42. StyleGAN-generated datasets, https://www.seeprettyface.com/mydataset.html.

43. Kodak database, https://r0k.us/graphics/kodak/.

44. International Organization for Standardization, ISO 12640-3:2022 Graphic technology - Prepress digital data exchange - Part 3: CIELAB standard colour image data (CIELAB/SCID), 2022.

45. International Organization for Standardization, ISO 12640-2:2004 Graphic technology - Prepress digital data exchange - Part 2: XYZ/sRGB encoded standard colour image data (XYZ/SCID), 2004.

46. K. McLaren, “An introduction to instrumental shade passing and sorting and a review of recent developments,” J. Soc. Dyers Colour. 92(9), 317–326 (1976). [CrossRef]  

47. S. W. Zamir, J. Vazquez-Corral, and M. Bertalmio, “Vision models for wide color gamut imaging in cinema,” IEEE Trans. Pattern Anal. Mach. Intell. 43(5), 1777–1790 (2021). [CrossRef]  

Data Availability

The data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.



Figures (15)

Fig. 1. The boundary of the three-primary display system in RGB color space
Fig. 2. Most saturated gamut boundaries for the laser projector and sRGB system with all pixel points of the sample image in L*a*b* color space; (a) CGV; (b) Top view of CGV; (c) CGV; (d) Top view of CGV; (e) Sample image
Fig. 3. Schematic diagram of chroma stretching in the CIE L*a*b* color space; (a) Front view; (b) Top view. Point O represents a color with zero chroma, point S lies on the boundary of the source color gamut, point T lies on the boundary of the target color gamut, point M is the position of the color to be mapped, point M’ is its position after mapping, and point P is the highest-chroma point that remains unmapped.
Fig. 4. sRGB and the target color gamut in the 1931 CIE chromaticity diagram
Fig. 5. CGV of sRGB and the target color gamut; (a) CGV; (b) Top view of CGV; (c) Front view of CGV; (d) Side view of CGV
Fig. 6. The 8 images containing color bars and blocks
Fig. 7. Top view in the L*a*b* color space of the target CGV and the mapped CGVs of O2 and O7; (a) O2 after dynamic k mapping; (b) O7 after dynamic k mapping; (c) O2 after k = 0.6 mapping; (d) O7 after k = 0.6 mapping
Fig. 8. Z-scores and variance values of all test images; (a) Z-scores; (b) Variance values
Fig. 9. Z-scores and variance values of each evaluation criterion for the four types of images; (a) Z-scores; (b) Variance values. “Complexion” and “C” denote the 14 images with skin color, “Objects” and “O” the 8 images containing objects, “Scenery” and “S” the 8 images containing scenery, and “Block” and “B” the 8 images containing color blocks and bars. “P”, “N”, “D”, and “Smooth” denote the four evaluation criteria: preference, naturalness, detail richness, and smoothness of color changes.
Fig. 10. Z-scores and variance values of the 14 images with complexion; (a) Z-scores of preference; (b) Variance values of preference; (c) Z-scores of naturalness; (d) Variance values of naturalness; (e) Z-scores of details; (f) Variance values of details
Fig. 11. Z-scores and variance values of the 8 images with objects; (a) Z-scores of preference; (b) Variance values of preference; (c) Z-scores of naturalness; (d) Variance values of naturalness; (e) Z-scores of details; (f) Variance values of details
Fig. 12. Z-scores and variance values of the 8 images with scenery; (a) Z-scores of preference; (b) Variance values of preference; (c) Z-scores of naturalness; (d) Variance values of naturalness; (e) Z-scores of details; (f) Variance values of details
Fig. 13. A sample image after dynamic k and HCM mapping
Fig. 14. Z-scores and variance values of the 8 images with bars and blocks; (a) Z-scores for smoothness of color change; (b) Variance values for smoothness of color change
Fig. 15. The image B3 after SDS and TCM mapping

Tables (3)

Table 1. Three-primary and white point chromaticity coordinates for sRGB and the target color gamut
Table 2. Names and k values of all the images
Table 3. Psychophysical experiment evaluation results: z-scores and variance values of the five algorithms for the four categories of images and for all images

Equations (19)


$$\left\{\begin{aligned}
L &= 116\,f(Y/Y_0) - 16\\
a &= 500\,[\,f(X/X_0) - f(Y/Y_0)\,]\\
b &= 200\,[\,f(Y/Y_0) - f(Z/Z_0)\,]
\end{aligned}\right.$$

$$f(x) = \begin{cases}
x^{1/3}, & x > (24/116)^3\\
(841/108)\,x + 16/116, & x \le (24/116)^3
\end{cases}$$
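For illustration, a minimal Python sketch of this XYZ-to-CIELAB conversion is given below; the reference white (X0, Y0, Z0) is an assumed D65 value and is not taken from the measurements in this paper.

import numpy as np

X0, Y0, Z0 = 95.047, 100.0, 108.883  # assumed D65 reference white, not from this paper

def f(x):
    # Piecewise function from the second equation above
    x = np.asarray(x, dtype=float)
    thresh = (24.0 / 116.0) ** 3
    return np.where(x > thresh, np.cbrt(x), (841.0 / 108.0) * x + 16.0 / 116.0)

def xyz_to_lab(X, Y, Z):
    # First equation above: normalize by the reference white, then form L, a, b
    fx, fy, fz = f(X / X0), f(Y / Y0), f(Z / Z0)
    return 116.0 * fy - 16.0, 500.0 * (fx - fy), 200.0 * (fy - fz)

L, a, b = xyz_to_lab(41.24, 21.26, 1.93)  # example tristimulus values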
$$\mathrm{chroma}^{ij}_{\mathrm{original}} = \sqrt{\big(a^{ij}_{\mathrm{original}}\big)^2 + \big(b^{ij}_{\mathrm{original}}\big)^2}$$

$$\mathrm{chroma}^{ij}_{\mathrm{source}} = \sqrt{\big(a^{ij}_{\mathrm{source}}\big)^2 + \big(b^{ij}_{\mathrm{source}}\big)^2}$$

$$\mathrm{chroma}^{ij}_{\mathrm{target}} = \sqrt{\big(a^{ij}_{\mathrm{target}}\big)^2 + \big(b^{ij}_{\mathrm{target}}\big)^2}$$

$$k^{ij} = \frac{\mathrm{chroma}^{ij}_{\mathrm{original}}}{\mathrm{chroma}^{ij}_{\mathrm{target}}}$$
$$k = \frac{\sum_{ij} k^{ij}}{k_{\mathrm{total}}}$$
$$\mathrm{chroma}^{ij}_{\mathrm{final}} = \begin{cases}
\mathrm{chroma}^{ij}_{\mathrm{original}}, & \mathrm{chroma}^{ij}_{\mathrm{original}} \le k \times \mathrm{chroma}^{ij}_{\mathrm{target}}\\[4pt]
k \times \mathrm{chroma}^{ij}_{\mathrm{target}} + (1-k) \times \mathrm{chroma}^{ij}_{\mathrm{target}} \times \dfrac{\mathrm{chroma}^{ij}_{\mathrm{original}} - k \times \mathrm{chroma}^{ij}_{\mathrm{target}}}{\mathrm{chroma}^{ij}_{\mathrm{source}} - k \times \mathrm{chroma}^{ij}_{\mathrm{target}}}, & \text{else}
\end{cases}$$
$$\begin{cases}
\overline{OM'} = \overline{OM}, & \text{if } \overline{OM} \le \overline{OP}\\
\overline{PM'}/\overline{PT} = \overline{PM}/\overline{PS}, & \text{else}
\end{cases}$$
$$L^{ij}_{\mathrm{final}} = L^{ij}_{\mathrm{original}};\quad a^{ij}_{\mathrm{final}} = a^{ij}_{\mathrm{original}};\quad b^{ij}_{\mathrm{final}} = b^{ij}_{\mathrm{original}}$$

$$\left\{\begin{aligned}
L^{ij}_{\mathrm{final}} &= L^{ij}_{\mathrm{original}}\\
a^{ij}_{\mathrm{final}} &= a^{ij}_{\mathrm{original}} \times \mathrm{chroma}^{ij}_{\mathrm{final}} / \mathrm{chroma}^{ij}_{\mathrm{original}}\\
b^{ij}_{\mathrm{final}} &= b^{ij}_{\mathrm{original}} \times \mathrm{chroma}^{ij}_{\mathrm{final}} / \mathrm{chroma}^{ij}_{\mathrm{original}}
\end{aligned}\right.$$
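The per-pixel chroma extension defined by the preceding equations can be sketched as follows; the boundary chromas of the source and target gamuts at each pixel's lightness and hue (chroma_source, chroma_target) are assumed to come from a separate gamut-boundary computation, and the image-dependent k is taken here as the mean of k_ij, which is one reading of the equation above.

import numpy as np

def extend_chroma(a, b, chroma_source, chroma_target):
    # Per-pixel chroma of the original image
    chroma_original = np.hypot(a, b)
    # Per-pixel ratio k_ij and image-dependent k (assumed here: mean of k_ij)
    k_ij = chroma_original / np.maximum(chroma_target, 1e-6)
    k = k_ij.mean()

    knee = k * chroma_target                       # chroma below the knee is left untouched
    stretch = knee + (1.0 - k) * chroma_target * (
        (chroma_original - knee) / np.maximum(chroma_source - knee, 1e-6)
    )
    chroma_final = np.where(chroma_original <= knee, chroma_original, stretch)

    # Lightness and hue are preserved; only a and b are rescaled
    scale = chroma_final / np.maximum(chroma_original, 1e-6)
    return a * scale, b * scale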
$$[X\;Y\;Z]^{T}_{\mathrm{original}} = [X\;Y\;Z]^{T}_{\mathrm{final}}$$

$$\begin{bmatrix} X_R & X_G & X_B\\ Y_R & Y_G & Y_B\\ Z_R & Z_G & Z_B \end{bmatrix}_{\mathrm{original}}
\begin{bmatrix} I_R^{\mathrm{original}}\\ I_G^{\mathrm{original}}\\ I_B^{\mathrm{original}} \end{bmatrix} =
\begin{bmatrix} X_R & X_G & X_B\\ Y_R & Y_G & Y_B\\ Z_R & Z_G & Z_B \end{bmatrix}_{\mathrm{final}}
\begin{bmatrix} I_R^{\mathrm{final}}\\ I_G^{\mathrm{final}}\\ I_B^{\mathrm{final}} \end{bmatrix}$$
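As an illustration of this tristimulus-matching condition, the final drive intensities can be obtained with a 3-by-3 linear solve; the primary matrices and drive intensities below are placeholder values, not measurements from this work.

import numpy as np

# Placeholder primary matrices (rows: X, Y, Z; columns: R, G, B contributions)
M_original = np.array([[41.24, 35.76, 18.05],
                       [21.26, 71.52,  7.22],
                       [ 1.93, 11.92, 95.05]])
M_final = np.array([[60.0, 20.0, 15.0],
                    [25.0, 70.0,  5.0],
                    [ 0.5,  8.0, 98.0]])

I_original = np.array([0.8, 0.5, 0.3])   # example drive intensities of the original signal
XYZ = M_original @ I_original            # tristimulus values to be reproduced
I_final = np.linalg.solve(M_final, XYZ)  # intensities reproducing the same XYZ with the final primaries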
$$[R\;G\;B]^{T}_{\mathrm{HCM}} = (1-\kappa)\,[R\;G\;B]^{T}_{\mathrm{TCM}} + \kappa\,[R\;G\;B]^{T}_{\mathrm{SDS}}$$

$$\kappa(S) = \begin{cases}
0, & \text{if } S < S_L\\
(S - S_L)/(S_H - S_L), & \text{if } S_L < S < S_H\\
1, & \text{if } S \ge S_H
\end{cases}$$
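A minimal sketch of the HCM blend with its saturation-dependent weight is shown below; the thresholds S_L and S_H are assumed example values, not those used in the cited work.

import numpy as np

def kappa(S, S_L=0.2, S_H=0.8):
    # Saturation-dependent weight: 0 below S_L, linear ramp between S_L and S_H, 1 above S_H
    return np.clip((S - S_L) / (S_H - S_L), 0.0, 1.0)

def hcm(rgb_tcm, rgb_sds, S):
    # Pixel-wise blend of the TCM and SDS renderings
    k = kappa(np.asarray(S, dtype=float))[..., None]  # broadcast the weight over the RGB channels
    return (1.0 - k) * rgb_tcm + k * rgb_sds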
$$L_{C,d_C} = (d_C/255)^{\gamma}$$

$$\begin{bmatrix} X(d_R,d_G,d_B)\\ Y(d_R,d_G,d_B)\\ Z(d_R,d_G,d_B) \end{bmatrix} =
\begin{bmatrix} X_{R,d_R} & X_{G,d_G} & X_{B,d_B}\\ Y_{R,d_R} & Y_{G,d_G} & Y_{B,d_B}\\ Z_{R,d_R} & Z_{G,d_G} & Z_{B,d_B} \end{bmatrix}
\begin{bmatrix} L_{R,d_R}\\ L_{G,d_G}\\ L_{B,d_B} \end{bmatrix}$$
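The forward display model in the last two equations, a per-channel gamma law followed by a weighted sum of the primaries' tristimulus contributions, can be sketched as follows; the primary matrix and gamma value are illustrative assumptions.

import numpy as np

M_primaries = np.array([[41.24, 35.76, 18.05],   # rows: X, Y, Z; columns: R, G, B at full drive
                        [21.26, 71.52,  7.22],
                        [ 1.93, 11.92, 95.05]])
gamma = 2.2                                      # assumed display gamma

def drive_to_xyz(d_rgb):
    # L_{C,d_C} = (d_C / 255)^gamma per channel, then sum the primaries' contributions
    L = (np.asarray(d_rgb, dtype=float) / 255.0) ** gamma
    return M_primaries @ L

XYZ = drive_to_xyz([200, 128, 64])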
$$\mu_a - \mu_b = z_{ab}\,\sigma\sqrt{2}$$

$$CI_{95\%} = z_{ab} \pm 1.96\,(\sigma/\sqrt{N})$$
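For completeness, a minimal sketch of the paired-comparison analysis, converting choice proportions to z-scores and attaching a 95% interval, is given below; the proportion matrix and N are illustrative, and a unit sigma is assumed.

import numpy as np
from scipy.stats import norm

# Illustrative preference proportions P[i, j]: fraction of observers preferring stimulus i over j
P = np.array([[0.5, 0.7, 0.8],
              [0.3, 0.5, 0.6],
              [0.2, 0.4, 0.5]])
N = 20  # assumed number of judgments per pair

Z = norm.ppf(np.clip(P, 0.01, 0.99))  # pairwise z-scores (proportions clipped to avoid infinities)
z_scores = Z.mean(axis=1)             # per-algorithm scale values (row averages)
ci_half_width = 1.96 / np.sqrt(N)     # 95% interval half-width with sigma assumed to be 1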