Optica Publishing Group

Accurate physics-based digital reproduction of effect coatings

Open Access

Abstract

We built an improved 3D rendering framework to accurately visualize the complete appearance of effect coatings, including metallic effects, sparkle and iridescence. Spectral reflectance measurements and sparkle indexes from a commercially available multi-angle spectrophotometer (BYKmac-i) were used together with physics-based approaches, such as flake-based reflectance models, to efficiently implement appearance reproduction from a small number of bidirectional measurement geometries. With this rendering framework, we rendered a series of effect coating samples on an iPad display, simulating how these samples would be viewed inside a Byko-spectra effect light booth. We validated the appearance fidelity through psychophysical methods. We asked observers to evaluate the most important visual attributes that directly affect the appearance of effect coatings, i.e., the color, the angular dependence of color (color flop) and the visual texture (sparkle and graininess). Observers were asked to directly compare the rendered samples with the real samples inside the Byko-spectra effect light booth. In this study, we first validated the accuracy of rendering the color flop of effect coatings by conducting two separate visual tests, using flat and curved samples respectively. The results show an improved accuracy when curved samples were used (acceptability of 93% vs 80%). Secondly, we validated the digital reproduction of both color and texture by using a set of 30 metallic samples, and by including texture in the rendering using a sparkle model. We optimized the model parameters based on sparkle measurement data from the BYKmac-i instrument, using a matrix-adjustment model for the optimization. The results from the visual tests show that the visual acceptability of the rendering is high, at 90%.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

The digital reproduction (visualization) of objects and materials has made significant progress over the past decades [1–4]. Effect coatings have become highly popular in markets where a premium material appearance is important. Metallic paints, which are widely used in the automotive industry, are the most well-known example. They consist of thin aluminum flakes with diameters between around 5 and 50 µm, lying almost parallel to the coating surface and embedded in a colored medium. This makes the visual appearance of these materials much more complex to describe, qualitatively and quantitatively, than that of most other paints and materials. The presence of the flakes makes the color depend strongly on viewing and illumination angles, a phenomenon often referred to as color flop [5] (also known as goniochromism or iridescence). For metallic coatings, this is particularly the case for the color attributes lightness and chroma. Pearlescent coatings form a special type of effect coatings (special effect coatings), and they are also popular in the automotive industry. In this case, the flakes consist of a multilayered structure that gives rise to iridescence by interference. As a result, not only the lightness and chroma but also the hue of pearlescent coatings varies with viewing and illumination directions. A full instrumental characterization of this angular variation of reflectance requires the spectral Bidirectional Reflectance Distribution Function (BRDF) to be measured for a large set of pairs of illumination and viewing directions [6].

However, the visual appearance of effect coatings is characterized not only by color, but also by their visual texture [7,8]. Under intense directional lighting, such as direct sunlight, these coatings can show a kind of visual texture usually known as sparkle or glint. Being caused by specular reflection at the tiny flakes, these are bright but very small glints of light. Under more diffuse lighting, such as under an overcast sky, effect coatings typically show a lower-contrast spatial pattern of darker and lighter areas at a spatial scale of millimeters. This kind of visual texture is usually called diffuse coarseness or graininess.

Currently available commercial renderers such as Maya, Keyshot, VRED (Autodesk), Revit and Mentalray (Nvidia) are able to produce impressive images (frames), often used for applications such as the cinema and games industries [9]. However, few articles analyze the accuracy of these images. A visual comparison of the rendered images with the real-world physical objects they intend to represent shows that in many cases they are not realistic in terms of color reproduction [10,11]. The color fidelity of their rendered images is insufficient for more critical applications such as automotive design. The situation is even worse for special effect coatings, whose rendering is still a subject of active investigation [12,13]. Sparkle and graininess are difficult to render in a convincing way [14,15]. Both phenomena are either absent or not represented well in current rendering software [10,15–18]. The dependence of the color on viewing and illumination directions makes the problem even more challenging, and is also not covered correctly in current rendering algorithms [19].

For improving color reproduction, it is crucial to use a fully spectral rendering approach that accounts for the spectral reflectance of objects and the spectral power distribution (SPD) of the light source, as well as the technical specifications of the display used for the rendering. Available spectral rendering software is usually computationally very demanding and requires special hardware, which rules out running it on modest computer hardware.

For accurate absolute color rendering, it is fundamental to consider the technical specifications of the display [20]. We recently proposed the Mobile Display Characterization and Illumination Model (MDCIM) [21], which also takes into account the influence from ambient lighting.

For representing the color flop, rendering algorithms may use tabulated reflection properties. Since the color of effect coatings depends on three angles (two polar angles for illumination and viewing directions and one azimuthal angle between both directions), this would require interpolation of large three-dimensional look-up tables. For accurate 3D rendering of effect coatings, such methods would become expensive in terms of calculation times and required data storage. For this reason, parametrized reflection models have become very popular, such as those from Phong, Blinn, Ward and Cook-Torrance [22]. These models all assume that the angular variation of color can be parametrized by only one angular parameter. More recently some models were introduced that use two angular parameters: Ashikhmin and Shirley, Meyer, Dumont-Bècle and others [22–25]. However, even these methods provide a crude interpolation of angular reflectance data, since they assume that the angular variation is mainly accounted for by the aspecular angle (i.e., the angle between the viewing and specular directions) [26]. More recently, we proposed a physics-based analysis of the reflectance of special effect coatings. With the so-called flake-based parameters, a more accurate representation of the color flop effect can be obtained than with the other approaches that use two parameters [26–29].

This paper is a follow-up of a research collaboration for developing a physics-based rendering framework for accurate color, color flop, sparkle and graininess of car paints. In previous works, we introduced our spectral rendering framework for real-time rendering on a tablet computer (iPad) [30–34]. We presented a method for representing any lighting environment in the renderer and a method for strongly improving the color reproduction of uniform color objects, by considering the display characteristics and ambient lighting [11,35–38]. In the current article, we extend the groundwork and analysis to include accurate rendering of color flop, graininess, and sparkle, where not only spectral but also spatial features are included by our physics-based analytical models.

2. Methods

2.1 Rendering approach

In this section we give a summary of our rendering tool. Unlike currently available rendering solutions that require high-end PCs or graphics cards, our rendering framework was developed to run on modest hardware such as a simple iPad tablet computer. It is based on OpenGL 3.0 ES (Embedded Systems), an open-source graphics library that already provides the basic rendering functionality. OpenGL is also supported on Android devices and Windows PCs, which makes this framework portable to other technology platforms [30]. The framework builds on the RGBA pipeline of OpenGL, which we turned into a fully spectral rendering pipeline by conducting all calculations for separate spectral bands. In total, 16 spectral bands are included, covering the visible wavelength range from 400 nm to 700 nm in steps of 20 nm. We process the data in blocks of 4 spectral bands through the RGBA pipeline of OpenGL 3.0 ES. Only at the final stage of the calculations do we combine all calculated spectral data, using the MDCIM model, into one resulting RGB image. The spectral pipeline needs spectral input describing the lighting and the reflectance of the materials that cover the objects to be rendered. Therefore, a controlled lighting environment is essential to create an accurate rendering of any object in a lighting scene. With our rendering approach it is possible to specify the lighting environment in the IES/EULUMDAT format. This allows us to specify the luminous intensity distribution of the lamps and the illuminance produced by the reflecting areas surrounding the object to be rendered.
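The band-blocking idea described above can be sketched in a few lines. This is an illustrative NumPy reconstruction, not the actual OpenGL ES shader code; the function and variable names are hypothetical:

```python
import numpy as np

# Sketch of the spectral pipeline (names hypothetical): 16 spectral bands
# covering 400-700 nm in 20 nm steps, processed as four RGBA "blocks" of
# four bands each, as the renderer does through OpenGL's RGBA pipeline.
wavelengths = np.arange(400, 701, 20)   # 16 band centers: 400, 420, ..., 700 nm

def shade_spectral(reflectance, illuminant_spd):
    """Per-band reflected radiance, computed one 4-band RGBA block at a time."""
    assert reflectance.shape == illuminant_spd.shape == (16,)
    radiance = np.empty(16)
    for block in range(4):                       # one render pass per RGBA block
        sl = slice(4 * block, 4 * block + 4)
        radiance[sl] = reflectance[sl] * illuminant_spd[sl]  # a single vec4 multiply
    return radiance
```

In the real framework, the resulting per-band radiances are combined into one RGB image only at the final stage, using the MDCIM display model.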

Since this work is focused on effect coatings, we selected a light booth that is particularly suitable for the visual inspection of this type of material. The Byko-spectra effect light booth from supplier BYK-Gardner (Fig. 1) was designed to allow observation of samples under the six different viewing and illumination angles that are optimal for observing the angular color variation of metallic and pearlescent coatings according to international standards [39]. Moreover, this light booth includes the right combinations of well-defined directional and diffuse light sources to enable visual assessments of sparkle [32]. For this task, as described in our previous works [30,31,33,34], a digital and well-defined virtual lighting environment was built to accurately simulate the Byko-spectra effect multi-directional light booth. In this way, we created a setup for visual tests, in which physical objects inside the physical light booth are visually compared to rendered images of virtual objects inside the virtual light booth. A three-dimensional (3D) geometrical model of the light booth was first built using Blender [40]. Six different 3D models of the light booth were built, one for each of the illumination-viewing geometries supported by the Byko light booth and the multi-angle spectrophotometer BYKmac-i. According to DIN 6157-2 and ASTM E2194 nomenclature [41,42], these geometries are referred to as 45as-15, 45as15, 45as25, 45as45, 45as75 and 45as110, where “as” is the abbreviation of “aspecular”. The lighting inside the light booth was characterized by measuring its spectral and spatial angular distribution. In the virtual light booth, we also accounted for the spectral reflectance of the inner components of the light booth, as they might influence the final appearance of the test objects.


Fig. 1. The Byko-spectra effect light booth (BYK-Gardner). The left image shows the inner structure and the rotation platform of the light booth.


The appearance of effect and special effect coatings strongly depends on the viewing/illumination geometry. Their spectral reflectance is mainly determined by the presence of the effect pigments (or flakes) inside these coatings, which are not completely oriented parallel to the surface of the coating. For this reason, in order to make a proper spectral, angular and spatial description of the reflectance of the effect coating, it is more suitable to use parameters defined with respect to the local normal vectors of these flakes [26]. Previous studies have shown that the use of flake-based parameters allows a more adequate formalism for modelling and representing the BRDF than the common approaches using viewing and incidence angles with respect to the coating surface [19,37,43–45].

Multi-angle spectrophotometers or gonio-spectrophotometers are needed to characterize the angular distribution of the spectral BRDF of effect coatings. Measurement instruments like the BYKmac-i provide a simplified BRDF with a limited number of geometries and cannot be regarded as a full-BRDF measurement. Therefore, we need the flake-based parameterization (Fig. 2) to interpolate these BRDF measurements for all required rendering geometries [19]. The spectral BRDF, normally expressed as a function of the spherical coordinates of the illumination and viewing directions, can instead be expressed as a function of the flake parameters (${\theta _{flake}}$ and ${\theta _{inc}}$, see Fig. 2). The derivation of this transformation is given in Refs. [36,37].
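The core of the flake-based parameterization is that a flake only reflects a given illumination direction into a given viewing direction if its normal bisects the two. A minimal sketch of computing the two flake parameters from a direction pair, ignoring refraction at the clearcoat (an assumption; the derivation in the cited references accounts for it):

```python
import numpy as np

def flake_parameters(wi, wo):
    """Flake-based angles (degrees) for unit vectors wi (towards the light)
    and wo (towards the viewer), with the coating normal along +z.
    Sketch only: refraction at the clearcoat is ignored here."""
    h = wi + wo
    h = h / np.linalg.norm(h)              # flake normal = bisector of wi and wo
    theta_flake = np.degrees(np.arccos(np.clip(h[2], -1.0, 1.0)))         # flake tilt
    theta_inc = np.degrees(np.arccos(np.clip(np.dot(h, wi), -1.0, 1.0)))  # local incidence
    return theta_flake, theta_inc
```

For example, for 45° illumination viewed in the specular direction, only flakes lying parallel to the coating surface contribute, i.e., ${\theta _{flake}} = 0^\circ$ and ${\theta _{inc}} = 45^\circ$.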


Fig. 2. Definition of angles for flake-based parameters.


This physical model is applied in the metallic shader, which is responsible for rendering the metallic coating. The multi-angle interpolation of the spectral BRDF measurement is used to calculate the spectral color at any pixel fragment along any viewing-lighting vector. The shader allows two different rendering scales: macroscopic, when a pixel fragment contains many flakes; and microscopic, when each pixel contains a single flake. At the macroscopic scale, the shader represents the spatially-integrated value, whereas, at the microscopic scale, it represents single-scattering events at the flakes. The microscopic scale is where the relevant physics takes place; the flakes have a finite size, and they are assumed to be oriented according to a normal distribution. As a starting point we use the microfacet-based BRDF parametrization of the commonly used GGX model [46]. This model allows for a smooth transition between the microscopic and macroscopic rendering scales. We use the GGX microfacet model not only to describe the surface roughness of the clearcoat, which is responsible for the glossiness of the coating, but also to take into account both the flake orientation distribution and the non-flatness of the flakes themselves. Both parameters are used to best match a measured BRDF. The model also takes into account light scattering by non-effect pigments. Specular reflection at the clearcoat, producing the gloss appearance, is accounted for by assuming a refractive index of 1.5 and by using a pre-rendered spectral image of the environment (a so-called skybox).
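For reference, the GGX (Trowbridge-Reitz) normal distribution used as the building block above has a simple closed form. A minimal sketch; the roughness values passed in would correspond to the clearcoat, collective flake orientation (rm) or per-flake non-flatness (rf) instances, and any concrete alpha values here are hypothetical:

```python
import numpy as np

def ggx_ndf(cos_theta, alpha):
    """GGX (Trowbridge-Reitz) microfacet normal distribution D.
    cos_theta is the cosine of the angle between the microfacet normal and
    the mean surface normal; alpha is the roughness parameter."""
    c2 = cos_theta ** 2
    return alpha ** 2 / (np.pi * (c2 * (alpha ** 2 - 1.0) + 1.0) ** 2)
```

At normal incidence the distribution peaks at $1/(\pi \alpha^2)$, and for $\alpha = 1$ it reduces to the constant $1/\pi$ (a fully rough, hemispherically uniform distribution).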

The numerical values for the parameters of the various GGX model instances were optimized for reproducing the visual texture for all samples, as described below. The most relevant parameters of the rendering framework are related to the magnitude of the flake area (fa, calculated as the logarithm of the average flake area), the flake orientation as represented by microfacets in GGX by the collective roughness of all flakes (rm), and finally the non-flatness of the flake surfaces as represented by microfacets in GGX by the roughness of individual flakes ($rf$). The three instances of the GGX model blend together at larger distances where the coatings look smooth rather than textured.

In the following section, we explain the adjustment model we built to find representative values for the three model parameters ($fa,\,rf,\,rm$) using measured texture values from the BYKmac-i instrument.

2.2 Calculating model parameters based on Matrix-Adjustment Method

We used a matrix-adjustment method to calculate suitable numerical values for the texture model parameters from the BYKmac-i measurements. Firstly, a perception-based rendering of a training set of 30 metallic samples was performed by visually matching the rendered images to the physical samples. The model parameters ($fa,\,rf,\,rm$) were manually optimized by tuning them in the tool interface until we obtained a good match between the physical sample and its rendered representation. From the model parameters obtained in this matching process and the BYKmac-i measurements of the training panels, we obtained, through a matrix-adjustment method, a conversion matrix ${\mathbf A}$. This matrix is able to provide the texture model parameters for other painted objects from their BYKmac-i measurements. The conversion matrix ${\mathbf A}$ is worked out from the equation:

$$M = A\ast B, $$
which relates the BYKmac-i measurements (B) linearly to the model parameters (M); A is the matrix containing the coefficients of this linear relation. Both B and M have two dimensions: one for the number of BYKmac-i measurements or model parameters, respectively, and the other for the number of samples used for training. The A matrix is expressed as:
$$A = M\ast {B^T}\ast {({B\ast {B^T}} )^{ - 1}}.$$

The BYKmac-i provides measurements of several spectrophotometric, colorimetric and texture quantities. The quantities most relevant for obtaining an adequate conversion matrix A were selected by an optimization routine based on the lowest relative error when comparing the computed model parameters to the manually adjusted model parameters.

Once A is known by training, the texture model parameters $M$ of any sample set are obtained by multiplying the conversion matrix $A$ by the BYKmac-i measurements (B) of that sample set, using Eq. (1).
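The matrix-adjustment step of Eqs. (1) and (2) amounts to an ordinary least-squares fit. A minimal NumPy sketch, with random stand-ins for the training data (in reality M would come from the visual matching and B from the instrument):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: 3 model parameters (fa, rf, rm) and 6 selected
# BYKmac-i quantities for 30 training samples.
M = rng.normal(size=(3, 30))     # manually matched model parameters
B = rng.normal(size=(6, 30))     # BYKmac-i measurements

# Eq. (2): least-squares conversion matrix A such that M is approximately A * B.
A = M @ B.T @ np.linalg.inv(B @ B.T)

# Eq. (1): predicted model parameters for (new) samples from their measurements.
M_pred = A @ B
```

The pseudoinverse form $A = M B^T (B B^T)^{-1}$ is exactly the normal-equations solution of the least-squares problem, so `A` coincides with what `np.linalg.lstsq` would return.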

The BYKmac-i multi-angle spectrophotometer provides the following measured data:

  • Spectral reflectance factor at each of the six geometries supported by the instrument, which are used to represent the color in the spectral rendering as explained before.
  • CIE-Lab color coordinates at each of the six geometries supported by the BYKmac-i.
  • Texture parameters: sparkle area ($S{a_{15^\circ }}$, $S{a_{45^\circ }}$, and $S{a_{75^\circ }}$), sparkle intensity ($S{i_{15^\circ }}$, $S{i_{45^\circ }}$, and $S{i_{75^\circ }}$), sparkle grade ($S\_G$) and graininess ($G$).
  • Additionally, the color $flop$ is computed from the luminance values as $flop = 2.69{{({{L_{15^\circ }} - {L_{110^\circ }}} )}^{1.11}}/{{({{L_{45^\circ }}} )}^{0.86}}$.
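The flop formula in the last item translates directly into code. A one-function sketch using the lightness values at the 15°, 45° and 110° aspecular geometries:

```python
def flop_index(L15, L45, L110):
    """Luminance-based flop index computed from the lightness values at the
    15, 45 and 110 degree aspecular geometries (formula as given in the text)."""
    return 2.69 * (L15 - L110) ** 1.11 / L45 ** 0.86
```

A coating whose lightness does not change between the near-specular and far-from-specular geometries has zero flop, and the index grows with the lightness drop from 15° to 110°.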

To find the most relevant selection of parameters from the BYKmac-i instrument, we built a MATLAB routine to obtain the ideal BYKmac-i input parameters, to be used in the matrix-adjustment process explained above. An additional test set of panels was used to validate the adjustment method.

2.3 Visual evaluation of the appearance reproduction

We carried out visual tests to evaluate the performance of the proposed rendering framework. In these tests, observers were asked to directly compare the color of samples as viewed inside a real-world Byko-spectra effect light booth with the corresponding samples on an iPad display, rendered inside the simulated virtual light booth. We used a standard psychophysical method to obtain quantitative results on the fidelity of the perceived color reproduction. Figure 3 illustrates the setup of the visual experiment, with an example of the rendered sample on the right side of the figure. The tablet was placed next to the viewing slit of the light booth to allow simultaneous observation of the virtual and the physical samples. This setup also lets the observer view the display from a fixed, perpendicular angle, avoiding the angular variation of color and luminance perceived on the LCD display of an iPad. A dark room was used. Observers were allowed 2 minutes for color adaptation. All tests were done with three repetitions per observer. In addition, all observers who participated in these experiments had normal color vision, as confirmed by the Ishihara color vision test, and all had normal, or corrected-to-normal, visual acuity.

Firstly, two initial tests were performed to evaluate the color flop of effect coatings. These two tests used flat and curved (3D) panels, respectively. In both of these tests we rendered only color, without taking texture into account yet. In the final visual evaluation test, we did include the rendering of texture (as well as color), which allows us to evaluate the capability of the metallic shader of the rendering model to reproduce the total appearance, in terms of color and texture, of effect coatings. In this final test we used curved panels to simulate 3D objects.

In each test, observers were asked to judge the match between the physical samples and their rendered images displayed on the tablet screen, giving scores from 0 to 5 as shown in Table 1. When observers saw no color difference, or hardly any color difference, a score of zero was given. A score of five was given when the perceived color difference was assessed as large, meaning that the color of the rendered sample was a very bad match to the color of the real-world sample.


Table 1. Descriptions of scores for the Scoring method.

Following the same criteria used in our previous work [31], we set the score acceptance threshold at 2.5. A score average equal to or higher than this threshold means that the observer rejects the match; all scores below the threshold are considered an acceptable match.
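The acceptance statistic used throughout the results can be computed in one small function. A sketch; the function name is hypothetical:

```python
def acceptance_rate(mean_scores, threshold=2.5):
    """Fraction of samples whose average visual score is below the acceptance
    threshold; averages at or above the threshold count as rejected matches."""
    accepted = sum(1 for s in mean_scores if s < threshold)
    return accepted / len(mean_scores)
```

For instance, 12 accepted samples out of 15 gives a rate of 0.8, matching the 80% acceptance figure reported for the flat-panel test.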

The color flop effect describes how the perceived color of effect coatings changes with the illumination/observation geometry. Two tests were conducted to evaluate the capability of our model to simulate the color flop. In the first test, five observers evaluated the color flop of fifteen flat metallic samples at three different observation geometries: as25°, as45° and as110°. The illumination angle was set at 45°. The samples were selected from a much larger sample set to cover a wide range of lightness and chromaticity. Images were generated using the full renderer at the macroscopic scale. The resulting images were rendered with respect to color only; texture was not rendered at this stage. Observers were asked to ignore the texture that they saw on the physical panels and to focus only on their color. In addition, observers were asked to give a score, according to Table 1, of the total perceived color change when swapping between these three geometries. The second test was performed similarly, using the same samples, but this time on curved panels. Samples were placed on a magnetic curved platform. Eight observers evaluated the color matching at four observation geometries: as-15°, as25°, as45° and as110°. We added the as-15° geometry to include the extreme of the standard observation geometries. The reason behind this test is that the final scope of this work is to achieve an accurate rendering of 3D objects. Here, we want to evaluate whether acceptance is different when flat samples are evaluated separately in different viewing geometries, as compared to viewing curved samples (where the different viewing geometries result from the samples being curved).

After evaluating the color flop, an additional and final visual test was needed to validate the total appearance of the rendered samples, including the texture effects, i.e., sparkle and graininess. In this case, the images were generated using the full renderer, including both the microscopic and the macroscopic scales. In the same way as for the previous tests, fifteen different observers (6 males and 9 females, aged between 22 and 54) were asked to judge the fidelity of the appearance reproduction of curved metallic effect samples (Fig. 3). The set of samples was increased to 30 for this test. We used the same setup and scoring method as for the color flop evaluation. Since the color flop effect was evaluated in the first two tests, we opted for using only one geometry for this evaluation (as45°), since the curved panels already represent a range of viewing geometries. As already mentioned, in this final test the rendered images include color and texture, and the total appearance is being evaluated. The physical panels were attached to a magnetic curved platform inside the physical light booth, which made the panels curve, thus mimicking a 3D-shaped object. Again, the paint samples were selected from a much larger set to cover a wide range of lightness, chromaticity, color flop, sparkle, and graininess (Fig. 4). The lightness of the samples in this selected set is biased towards low values, since one of the most challenging aspects to reproduce for this kind of sample is the sparkle texture, and the perception of sparkle is more conspicuous in samples with low reflectance.


Fig. 3. Observers compared the 3D rendered samples using our spectral rendering with real-world objects inside the Byko-spectra effect light booth.



Fig. 4. The measured BYKmac-i data for the 30 samples studied in the final visual test of this article, corresponding to the as45 measurement geometry. Panel (a) shows the chromatic data, represented in the CIE L*a*b* color space. Panel (b) represents the sparkle values ($S\_i,\,S\_a$) and the sparkle grade ($S\_G$), while the column graph on the right represents the graininess values ($G$).


3. Results

In the first two visual tests, we evaluated the capability of the renderer to reproduce the color flop effect of metallic coatings. We selected a set of 15 metallic samples. In the first test we compared flat and color-only rendered images to physical flat metallic panels. The average observer accepted the match for 12 of the 15 samples (acceptance average of 80%).

In the second test, curved, color-only rendered images were visually tested by comparing them to the curved physical panels. This time, the average observer accepted 14 of the 15 samples, an acceptance percentage of 93%. From these results we conclude that observers are more tolerant of the rendered color flop for curved 3D objects than for flat objects. This can be explained by the gradual variation in appearance in the case of 3D objects. The curvature of these surfaces already introduces changes in the observation and illumination geometries, which causes different colors to be perceived along the surface curvature. The simultaneous perception of these different colors along the curved object makes the human visual system less critical of small changes in color when a complex scene with different colors is perceived, in contrast to the case of flat samples evaluated under separate geometries, where the perception of isolated colors makes our visual system more sensitive to small differences. Figure 5 illustrates the rendering results at each of the six geometries of the Byko light booth. There, the color flop is easily perceived in the images. The panels get darker when moving from small (as-15) to large (as110) aspecular angles. However, this angular variation changes not only lightness but also hue. For example, for the rendered images in Fig. 5 a cyan hue is present at the as-15 geometry, while the hue becomes more bluish when moving towards the as110 geometry. This change in appearance matches the behavior of the corresponding physical samples when physical samples and rendered images are observed together.


Fig. 5. Color flop effect in the rendered images of a metallic panel at the six-geometries of the Byko light booth. The red bordered images are the geometries used for the visual evaluation of the color flop.


In the final test we evaluated the total color and texture appearance on 3D curved panels. We found that the most relevant BYKmac-i parameters to be used as input for reproducing texture, according to the procedure described in subsection 2.2, were $S{a_{45^\circ }}$, $S{i_{15^\circ }}$, $S\_G$, $flop$, $\,{L_{45^\circ }}$ and ${a_{45^\circ }}$. These were obtained by selecting all possible subsets of BYKmac-i parameters, calculating A for each one using half of the training data, and comparing the performance of each matrix in matching the model parameters of the other half of the training data. This performance was quantified by the average relative error between the model parameters obtained by visual matching and the model parameters obtained by the matrix-adjustment method. The resulting matrix A should be able to provide the model parameters from the BYKmac-i parameters for effect coatings common in the automotive industry, such as those used for calculating A in this work.
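The subset-selection procedure just described can be sketched as a split-half validation over all candidate subsets. This is an illustrative reconstruction, not the authors' MATLAB routine; the quantity names, the split and the stabilized relative-error definition are assumptions:

```python
import numpy as np
from itertools import combinations

def best_subset(M, B, names, k):
    """Exhaustive search over k-element subsets of candidate BYKmac-i
    quantities: fit the conversion matrix on the first half of the training
    samples, score the mean relative error on the second half."""
    n = M.shape[1]
    fit, val = slice(0, n // 2), slice(n // 2, n)
    best_err, best_names = np.inf, None
    for idx in combinations(range(B.shape[0]), k):
        Bi = B[list(idx)]
        # Least-squares conversion matrix from the fitting half (Eq. (2)).
        A = M[:, fit] @ Bi[:, fit].T @ np.linalg.inv(Bi[:, fit] @ Bi[:, fit].T)
        pred = A @ Bi[:, val]
        # Mean relative error on the validation half (small constant avoids /0).
        err = np.mean(np.abs(pred - M[:, val]) / (np.abs(M[:, val]) + 1e-9))
        if err < best_err:
            best_err, best_names = err, [names[i] for i in idx]
    return best_err, best_names
```

When the model parameters really are a linear function of some subset of the measurements, this search recovers that subset with (near-)zero validation error.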

The conversion matrix A obtained with these parameters, and the $B$ matrix of the selected BYKmac-i parameters, are represented as follows:

$$\left( {\begin{array}{c} {fa}\\ {rf}\\ {rm} \end{array}} \right) = \left( {\begin{array}{cccccc} {0.3461}&{0.1365}&{ - 0.3228}&{0.1472}&{0.4534}&{0.5528}\\ {0.7911}&{ - 0.4244}&{0.6183}&{0.8433}&{0.0131}&{ - 0.6054}\\ {0.7399}&{ - 0.0405}&{ - 0.0786}&{ - 0.0869}&{0.5110}&{0.3088} \end{array}} \right)\ast \left( {\begin{array}{c} {S{a_{45^\circ }}}\\ {S{i_{15^\circ }}}\\ {{S_G}}\\ {flop}\\ {{L_{45^\circ }}}\\ {{a_{45^\circ }}} \end{array}} \right).$$

With these model parameters, we find a good fit, with an average relative error of η = 0.063 for the texture parameters.

In this third visual test we evaluated the total color and texture reproduction using an independent set of 30 metallic samples. The virtual panels were generated after computing the values of the model parameters just described, using Eq. (1). Figure 6 shows an example of the final textured 3D rendering of a metallic panel inside the virtual light booth.


Fig. 6. Screenshots of a final rendering of a metallic panel inside the virtual Byko light booth. The left image (a) shows a side view, while the right image (b) mimics the visualization slit of the physical light booth.


In the final test, we collected the visual scores from 15 observers on 30 metallic samples using the scoring method, and we repeated each assessment independently three times. This resulted in a total of 1350 visual assessments. We computed the intra-observer repeatability of the assessments by calculating the standard deviation of the scores over the three replications for each sample and each observer. The intra-observer repeatability based on the standard deviation in visual scores over all samples and all observers was found to be 0.51.

Table 1 shows that the possible scores in the visual tests range from 0 to 5 with a step size of 1 unit. The intra-observer repeatability therefore covers only 8.5% of the total scale of 6 steps. This indicates a good intra-observer repeatability, since it is smaller than the quantification limit of the evaluation method.

We also determined the inter-observer reproducibility, which refers to the agreement between observers. This was obtained by first calculating, for each sample, the absolute difference between the average assessment of a particular observer and the average of all assessments from all observers. We then obtained the inter-observer reproducibility by computing the total average over all samples, i.e., the average absolute deviation of an observer's score from the average score of all observers. In this way, we found that the inter-observer reproducibility is 0.73 units, covering 12.2% of the total scale of 6 steps. This value is still smaller than the quantification limit of the evaluation method, but slightly higher than the intra-observer repeatability. This is expected, since observers usually agree better with themselves than with others. Our results also confirm the low ambiguity of the assessment for observers.
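The two observer-agreement statistics defined above can be computed directly from the raw score array. A sketch assuming scores are stored as (observers, samples, repetitions); the function name and array layout are assumptions:

```python
import numpy as np

def repeatability_metrics(scores):
    """Intra-observer repeatability and inter-observer reproducibility for a
    score array of shape (observers, samples, repetitions), following the
    definitions in the text."""
    # Intra: std over the repetitions, averaged over samples and observers.
    intra = scores.std(axis=2).mean()
    # Inter: mean absolute deviation of each observer's per-sample average
    # from the all-observer per-sample average.
    per_obs = scores.mean(axis=2)             # (observers, samples)
    overall = per_obs.mean(axis=0)            # (samples,)
    inter = np.abs(per_obs - overall).mean()
    return intra, inter
```

On the study's 15 x 30 x 3 data, these two numbers would reproduce the 0.51 and 0.73 values reported above.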

The collected visual scores provide quantitative information on the accuracy of the appearance match achieved by the rendering framework, as perceived by the observers. The average visual score over all observers and all samples was 1.46, which indicates that the perceived difference between the rendered samples and the physical ones lies between “small, negligible difference” and “difference visible but still acceptable”, and closer to the first description than to the second. This indicates an acceptable appearance match, averaged over all rendered samples.

Figure 7 shows a box plot of the results of this visual comparison of the total appearance reproduction. The red pluses indicate the outliers, the green pluses represent the mean values, and the horizontal line illustrates the acceptance threshold. The boxes extend from the first to the third quartile, and the dashed lines mark the minimum and maximum values.


Fig. 7. Results of the final visual experiment. The red pluses indicate the outliers. Green pluses represent the mean value. Horizontal line illustrates the acceptance threshold. Acceptable color match is obtained for scores below the threshold value in the graph.

Download Full Size | PDF

From the observers’ assessments, we found that for only three of the 30 samples (i.e., for 10% of the total number of samples) the rendered appearance is not acceptable, i.e., the average score is larger than the 2.5-unit threshold. Observers mainly mentioned failures in matching the color, and much less often the texture.
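The per-sample acceptability test described above can be sketched as follows; the scores are random stand-ins for the experimental data:

```python
import numpy as np

# Hypothetical scores, shape (observers, samples, replications).
rng = np.random.default_rng(2)
scores = rng.uniform(0.0, 3.0, size=(15, 30, 3))

THRESHOLD = 2.5  # acceptance threshold on the 0-5 scale

# Average score per sample, over all observers and replications:
sample_avg = scores.mean(axis=(0, 2))

# Samples whose rendering is judged unacceptable, and the overall
# acceptability percentage:
rejected = np.flatnonzero(sample_avg > THRESHOLD)
acceptability = 100.0 * (1.0 - rejected.size / sample_avg.size)
```

With the real data, 3 of 30 samples exceed the threshold, giving the reported 90% acceptability.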

4. Conclusion

We presented a 3D rendering approach for visualizing gonio-apparent materials based on physical measurements. Our results show that the presented model can provide an acceptable rendering of both the color and the texture of effect coatings. The rendering of color and texture (sparkle and graininess) is based on a spectral, physics-based approach. We used the concept of flake-based parameters to interpolate the BRDF in the rendering pipeline, and a mathematical matrix-adjustment method to calculate the most suitable values of the model texture parameters from BYKmac-i measurements. In previous work, we studied the color reproduction of solid colors [31]. Here, we extended our study to evaluate the digital reproduction of metallic paints. Through two initial visual tests, we tested the capability of our rendering method to reproduce the color flop effect of metallic coatings. Observers evaluated the match between rendered “color-only” images of 15 metallic coatings on the tablet screen and the physical panels inside the light booth, on flat and curved panels successively. In the first test, flat panels were evaluated using color-only images. The results show an acceptance percentage of 80%, the same percentage obtained in our previous work for accurately rendering the colors of flat non-effect coating samples [31]. In the second test, we evaluated the rendering of curved metallic panels. In this case, the color match acceptability of the rendered images improved significantly compared to the flat panels: the percentage of acceptable matches increased from 80% to 93%. In the third and final psychophysical test, we studied the joint color and texture reproduction by the rendering framework, after calculating suitable values for the model parameters. In this test, observers evaluated the accuracy of the digital reproduction of the color and texture of 30 metallic panels.
We found that the intra-observer repeatability and the inter-observer reproducibility are both smaller than the quantification limit of the evaluation method, covering only 8.5% and 12.2% of the total scale, respectively. These results show that the observers are highly consistent with themselves, and there are only a few samples for which different observers disagree. In 90% of the assessments, the color and texture reproduction were found to be below the acceptance threshold. Observers mentioned deviations in the color reproduction more often than in the texture rendering.

These results show a high acceptability of the rendered color and texture of effect coatings when using the methodology developed in this article. A further improvement in rendering accuracy is expected when a larger set of samples is used for training the models. The parametrization and interpolation of the BRDF, as well as the representation of the colorimetric properties of the display, may also be improved to further optimize the accuracy of the rendering framework [20,21,27,37,38].

Funding

Ministerio de Economía y Competitividad (FPIBES-2016-077325, RTI2018-096000-B-I00).

Acknowledgments

The authors thank the Ministry of Economy and Competitiveness for its support and for the pre-doctoral fellowship.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. J. Dorsey, H. Rushmeier, and F. Sillion, Digital modeling of material appearance (Elsevier, 2010).

2. Y. Dong, S. Lin, and B. Guo, Material Appearance Modeling: A Data-Coherent Approach (Springer-Verlag, Berlin, 2013). [CrossRef]

3. M. Bordegoni and C. Rizzi, Innovation in Product Design, From CAD to Virtual Prototyping (Springer-Verlag, London, 2011). [CrossRef]

4. M. Haindl and J. Filip, Accurate Material Appearance Measurement, Representation and Modeling, Advances in Computer Vision and Pattern Recognition (Springer-Verlag, London, 2013).

5. G. Pfaff, “Special Effect Pigments,” in High Performance Pigments (John Wiley & Sons, 2009), pp. 75–104. [CrossRef]

6. T. Golla and R. Klein, “An efficient statistical data representation for real-time rendering of metallic effect car paints,” in International Conference on Virtual Reality and Augmented Reality (Springer, 2017), pp. 51–68.

7. N. Dekker, E. J. J. Kirchner, R. Supèr, G. J. van den Kieboom, and R. Gottenbos, “Total appearance differences for metallic and pearlescent materials: contributions from color and texture,” Color Res. Appl. 36(1), 4–14 (2011). [CrossRef]  

8. Z. Huang, H. Xu, MR. Luo, G. Cui, and H. Feng, “Assessing total differences for effective samples having variations in color, coarseness and glint,” Chin. Opt. Lett. 8(7), 717–720 (2010). [CrossRef]  

9. R. Martín, M. Weinmann, and M. Hullin, “Digital transmission of subjective material appearance,” Journal of WSCG 25(2), 123–132 (2017).

10. J. Günther, T. Chen, M. Goesele, I. Wald, and HP. Seidel, “Efficient acquisition and realistic rendering of car paint,” in Proceedings of Vision, Modeling and Visualization, G. Greiner, J. Hornegger, H. Niemann, and M. Stamminger, eds. (Erlangen, 2005), pp. 487–494.

11. J. A. Ferwerda, S. Westin, R. Smith, and R. Pawlicki, “Effects of rendering on shape perception in automobile design,” in Proceedings of the ACM Symposium on Applied Perception in Graphics and Visualization, 107–114 (2004).

12. A. Ferrero, J. Campos, E. Perales, A. Rabal, F. Martínez-Verdú, A. Pons, E. Chorro, and M. L. Hernanz, “Measuring and specifying goniochromatic colors,” in Proceedings of the 23rd ICO conference, (Santiago de Compostela, 2014).

13. F. Maile, G. Pfaff, and P. Reynders, “Effect pigments—past, present and future,” Prog. Org. Coat. 54(3), 150–163 (2005). [CrossRef]  

14. T. Golla and R. Klein, “Interactive interpolation of metallic effect car paints,” in Proceedings of Vision, Modeling and Visualization, (Eurographics, 2018).

15. E. Kirchner, “Texture Measurement, Modeling, and Computer Graphics,” in Encyclopedia of Color Science and Technology, M. Ronnier Luo, ed., (Springer Publications, 2015).

16. M. Rump, G. Müller, R. Sarlette, D. Koch, and R. Klein, “Photo-realistic rendering of metallic car paint from image-based measurements,” Comput. Graph. Forum 27(2), 527–536 (2008). [CrossRef]

17. C. Shimizu and G. Meyer, “A computer aided color appearance design system for metallic car paint,” J. Imaging Sci. Technol. 59(3), 304031 (2015). [CrossRef]

18. M. Nimier-David, D. Vicini, T. Zeltner, and W. Jakob, “Mitsuba 2: A retargetable forward and inverse renderer,” ACM Trans. Graph. 38(6), 1–17 (2019). [CrossRef]  

19. E. Kirchner, I. van der Lans, A. Ferrero, J. Campos, F. M. Martínez-Verdú, and E. Perales, “Fast and Accurate 3D Rendering of Automotive Coatings,” in Proceedings of the Twenty-third Color and Imaging Conference (Society for Imaging Science and Technology, 2015), pp. 154–160.

20. X. Gao, E. Khodamoradi, L. Guo, X. Yang, S. Tang, W. Guo, and Y. Wang, “Evaluation of colour appearances on smartphones,” in Proceedings of the Midterm Meeting of the International Colour Association (AIC, 2015).

21. E. Kirchner, I. van der Lans, F. Martínez-Verdú, and E. Perales, “Improving color reproduction accuracy of a mobile liquid crystal display,” J. Opt. Soc. Am. A 34(1), 101–110 (2017). [CrossRef]

22. M. Pharr and G. Humphreys, Physically Based Rendering (Elsevier, 2010).

23. C. Shimizu, G. Meyer, and J. Wingard, “Interactive goniochromatic color design,” in Proceedings of the IS&T/SID’s Eleventh Color Imaging Conference, 16–22 (2003).

24. G. Meyer and C. Shimizu, “Computational automotive color appearance,” in Eurographics workshop on Computational Aesthetics in Graphics, Visualization and Imaging, L. Neumann, M. Sbert, B. Gooch, and W. Purgathofer, eds., (Eurographics association, 2005), 217-222.

25. P. Dumont-Bècle, E. Ferley, A. Kemeny, S. Michelin, and D. Arquès, “Multi-texturing approach for paint appearance on virtual vehicles,” in Proceedings of the Driving Simulation Conference, 123–133 (2001).

26. E. Kirchner and W. Cramer, “Making sense of measurement geometries for multi-angle spectrophotometers,” Color Res. Appl. 37(3), 186–198 (2012). [CrossRef]  

27. E. Kirchner and A. Ferrero, “Isochromatic lines as extension of Helmholtz reciprocity principle for effect paints,” J. Opt. Soc. Am. A 31(8), 1861–1867 (2014). [CrossRef]  

28. J. Filip, R. Vávra, and F. Maile, “Optical analysis of coatings including diffractive pigments using a high-resolution gonioreflectometer,” J. Coat. Technol. Res. 16(2), 555–572 (2019). [CrossRef]

29. A. Ferrero, A. Rabal, J. Campos, F. Martínez-Verdú, E. Chorro, E. Perales, A. Pons, and M. L. Hernanz, “Spectral BRDF-based determination of proper measurement geometries to characterize color shift of special effect coatings,” J. Opt. Soc. Am. A 30(2), 206–214 (2013). [CrossRef]  

30. E. Kirchner, I. van der Lans, P. Koeckhoven, K. Huraibat, F. M. Martínez-Verdú, E. Perales, A. Ferrero, and J. Campos, “Real-time accurate rendering of color and texture of car coatings,” in Proceedings of IS&T International Symposium on Electronic Imaging, 076, 1–5 (2019).

31. K. Huraibat, E. Perales, E. Kirchner, I. Van der Lans, A. Ferrero, and J. Campos, “Visual validation of the appearance of chromatic objects rendered from spectrophotometric measurements,” J. Opt. Soc. Am. A 38(3), 328–336 (2021). [CrossRef]  

32. F.M. Martínez-Verdú, E. Perales, V. Viqueira, E. Chorro, F. Burgos-Fernández, and J. Pujol, “Comparison of colorimetric features of some current lighting booths for obtaining a right visual and instrumental correlation for gonio-apparent coatings and plastics,” Proceedings of CIE, Lighting Quality and Energy Efficiency, 692–705 (2012).

33. K. Huraibat, E. Perales, V. Viqueira, E. Kirchner, I. van der Lans, A. Ferrero, and J. Campos, “Byko-Spectra effect light booth simulation for digital rendering tool,” in Proceedings of the XII Congreso Nacional del Color, R.B. Román, A. J. Hernández, R. Martos, A. B. Ruiz, and M. C. Cruz, eds., (Linares, 2019), p. 45–48.

34. K. Huraibat, E. Perales, F. M. Verdú, E. Kirchner, I. van der Lans, A. Ferrero, and J. Campos, “Characterization of Byko-spectra light booth for digital simulation in a rendering tool,” in Proceedings of the CIE, 29th Session (Washington, 2019).

35. E. Valenza, Blender Cycles: Materials and Textures Cookbook (Packt Publishing, 2015).

36. C. Nguyen, MH. Kyung, JH. Lee, and SW. Nam, “A pca decomposition for real-time brdf editing and relighting with global illumination,” Computer Graphics Forum 29(4), 1469–1478 (2010). [CrossRef]  

37. A. Ferrero, E. Perales, A. Rabal, J. Campos, F. Martínez-Verdú, and A. Pons, “Color representation and interpretation of special effect coatings,” J. Opt. Soc. Am. A 31(2), 436–447 (2014). [CrossRef]  

38. A. Ferrero, J. Campos, E. Perales, F. Martínez-Verdú, I. van der Lans, and E. Kirchner, “Global color estimation of special-effect coatings from measurements by commercially available portable multiangle spectrophotometers,” J. Opt. Soc. Am. A 32(1), 1–11 (2015). [CrossRef]  

39. DIN 6175-2, “Tolerances for automotive paints. Part 2: Goniochromatic paints,” (Berlin, 2001), pp. 1-8.

40. Blender Foundation. 2002; Available from: http://www.blender.org.

41. DIN 6175-2, “Tolerances for automotive paints – Part 2: Goniochromatic paints.”

42. ASTM E2194-03, “Standard Practice for Multiangle Color Measurement of Metal Flake Pigmented Materials,” ASTM International, West Conshohocken, PA, (2003).

43. A. Ferrero, B. Bernad, J. Campos, E. Perales, J.L. Velázquez, and F.M. Martínez-Verdú, “Color characterization of coatings with diffraction pigments,” J. Opt. Soc. Am. A 33(10), 1978–1988 (2016). [CrossRef]  

44. N. Rogelj and M.K. Gunde, “Goniospectrometric space curve for coatings with special effect pigments,” Appl. Opt. 55(1), 122–132 (2016). [CrossRef]  

45. C. Strothkämper, K.O. Hauer, and A. Höpe, “How to efficiently characterize special effect coatings,” J. Opt. Soc. Am. A 33, 1–8 (2018). [CrossRef]  

46. B. Walter, S.R. Marschner, H. Li, and K.E. Torrance, “Microfacet model for refraction through rough surfaces,” Rendering Techniques (2007).



Figures (7)

Fig. 1. The Byko-spectra effect light booth (BYK-Gardner). The left image shows the inner structure and the rotation platform of the light booth.
Fig. 2. Definition of angles for flake-based parameters.
Fig. 3. Observers compared the 3D rendered samples using our spectral rendering with real-world objects inside the Byko-spectra effect light booth.
Fig. 4. The measured BYKmac-i data for the 30 samples studied in the final visual test of this article, corresponding to the as45 measurement geometry. Panel (a) shows the chromatic data, represented in the CIE L*a*b* color space. Panel (b) represents the sparkle values (S_i, S_a) and the sparkle grade (S_G), while the last column graph on the right represents the graininess values (G).
Fig. 5. Color flop effect in the rendered images of a metallic panel at the six geometries of the Byko light booth. The red-bordered images are the geometries used for the visual evaluation of the color flop.
Fig. 6. Screenshots of a final rendering of a metallic panel inside the virtual Byko light booth. The left image (a) shows a side view, while the right image (b) mimics the visualization slit of the physical light booth.
Fig. 7. Results of the final visual experiment. The red pluses indicate the outliers. Green pluses represent the mean values. The horizontal line illustrates the acceptance threshold. An acceptable color match is obtained for scores below the threshold value in the graph.

Tables (1)

Table 1. Descriptions of scores for the Scoring method.

Equations (3)


$$M = AB,$$
$$A = MB^{T}\left(BB^{T}\right)^{-1},$$
$$\begin{pmatrix} f_a & r_f & r_m \end{pmatrix} = \begin{pmatrix} S_a^{45} & S_i^{15} & S_G & \mathrm{flop} & L_{45} & a_{45} \end{pmatrix} \begin{pmatrix} 0.3461 & 0.1365 & 0.3228 \\ 0.1472 & 0.4534 & 0.5528 \\ 0.7911 & 0.4244 & 0.6183 \\ 0.8433 & 0.0131 & 0.6054 \\ 0.7399 & 0.0405 & 0.0786 \\ 0.0869 & 0.5110 & 0.3088 \end{pmatrix}.$$
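The matrix-adjustment step, i.e. the least-squares solution A = M Bᵀ(B Bᵀ)⁻¹ of M = A B, can be sketched as follows. The descriptor matrix B, the target parameters M, and their dimensions are synthetic placeholders, not the paper's fitted values:

```python
import numpy as np

# Synthetic demonstration of the matrix-adjustment step: given
# measurement descriptors B (6 x N samples) and target model texture
# parameters M (3 x N), solve M = A B in the least-squares sense via
# A = M B^T (B B^T)^{-1}. All data here are random placeholders.
rng = np.random.default_rng(3)
B = rng.normal(size=(6, 30))      # 6 descriptors for 30 samples
A_true = rng.normal(size=(3, 6))  # "unknown" adjustment matrix
M = A_true @ B                    # noise-free targets

# Normal-equation solution; recovers A_true exactly for noise-free data.
A = M @ B.T @ np.linalg.inv(B @ B.T)
```

With noisy measurements, `np.linalg.lstsq` on the transposed system would be the numerically preferable route, but the closed form above mirrors the equation in the text.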