Optica Publishing Group

Digital cleaning and “dirt” layer visualization of an oil painting

Open Access

Abstract

We demonstrate a new digital cleaning technique that uses a neural network trained to learn the transformation from dirty to clean segments of a painting image. The inputs and outputs of the network are pixels belonging to dirty and clean segments found in Fernando Amorsolo’s Malacañang by the River. After digital cleaning, we visualize the painting’s discoloration by treating it as a transmission filter superimposed on the clean painting. Using an RGB color-to-spectrum transformation to obtain the point-per-point spectra of the clean and dirty painting images, we calculate this “dirt” filter and render it for the whole image.

©2011 Optical Society of America

1. Introduction

Many old paintings look very different today than when they first left the artists’ studios. Physical and chemical factors, like dirt, dust, and light exposure, may degrade a painting’s appearance over time. Digital cleaning is a type of virtual restoration that attempts to simulate a painting’s appearance before exposure to these factors. Specifically, the aim of digital cleaning is to find a mathematical transform that characterizes the dirtying process that occurs in a painting, reverse that transformation, and produce a digitally cleaned image of the painting. The transformation must relate the pixel values of the dirty patches of the painting to the values of the corresponding clean patches. To obtain the clean samples, most previous works employed actual cleaning of small patches [1–3] or else micro-sampled the painting to create fresh paint samples of the same composition [3]. In this paper we propose a cleaning method that makes use of hidden, dirt-free samples, thus making the cleaning process completely non-invasive. The technique also proved robust in that we were able to clean the whole of a very colorful painting.

The painting’s dramatic change in appearance after digital cleaning made it interesting to study the color changes it had undergone. To do this, we first assumed that the painting’s dirty appearance was caused by a film covering the original clean painting. We called this virtual film the “dirt” layer, a term meant to encompass all factors that can cause a painting to appear dirty or discolored, such as dust, grime, soot, varnish oxidation, and variation in color steadfastness. In our previous work [4] we tried different transformations to model the dirt layer, such as vector difference, transparency using an alpha channel, and spectral ratio. The best result was obtained when the dirt layer was modeled as a transmission filter and rendered for the whole painting. Whereas most papers study the color change of only a few pigments and present their results as spectral [5–7] or CIELab [7,8] plots, in this paper we present a dirt layer visualization that is more complete and more easily understood.

2. Motivation for using neural networks

The painting that was digitally cleaned was Malacañang by the River by the Philippines’ first National Artist, Fernando Amorsolo. It was dated 1948 and was still in its original frame. When the painting was taken out of its frame, we observed that the parts previously covered by the frame were generally less dirty and less darkened than the exposed parts. This discovery gave us a new source of clean samples that requires neither actual restoration nor micro-sampling of the painting. We used as our “clean” segments the painting parts that were previously covered by the frame (within around 5 mm of the painting’s edge), while the “dirty” segments were the adjacent exposed parts. This kind of sampling limits the dirty–clean sample pairs to colors present on the edges and therefore makes the cleaning technique more challenging.

In our previous work [9], we trained a neural network to learn the transformation of pixel values from dirty to clean. The advantage of a neural network is that it imposes no model assumptions; it only requires input–output pairs of data, from which it can learn the transformation on its own through successive training. However, we observed that the network sometimes “overcleans” the image, so in this paper we introduce context-based post processing to correct such instances.

Although neural networks have already been used to solve classification and image in-painting problems [10], they have yet to be applied to digital cleaning. By exploiting a trained neural network’s ability to predict the desired output for completely new inputs, we were able to clean the whole painting using only training data from the edges of the painting, where our exposed (dirty)–unexposed (clean) pairs are available. The application of neural networks has therefore allowed us to do totally non-invasive digital cleaning of a very colorful painting.

3. Sampling procedure and neural network training

An image of the painting without its frame was captured without flash under ambient museum lights (50W halogen dichroic lamps) using an 8-megapixel Olympus E500 digital SLR camera.

A total of 1,350 pairs of pixels from exposed and unexposed parts were manually selected from all around the edges of the image. The number of sample pairs per color/element depended on the occurrence of that color/element on the unexposed part of the painting. A summary of the distribution of these sampling points is listed in Table 1. Because the painting is composed not only of color but also of texture, the effect of shadows and highlights was preserved by taking each exposed–unexposed pair from the same texture component (brushstroke, shadow of brushstroke, bumps and dips of the canvas weave).


Table 1. Distribution of Sample Pairs

The neural network creation and training was implemented using the nftool and nntool interfaces of the Neural Network Toolbox of Matlab 2007 [11]. Of all the neural networks tested, the best training performance was obtained using a standard two-layer feedforward neural network trained with Levenberg–Marquardt optimization, with 30 neurons in the hidden layer. A tangent-sigmoid transfer function was used for both the hidden and the output layers, limiting the outputs to acceptable RGB values only (0 to 255). The inputs to this network are the RGB values of pixels belonging to dirty paint segments and the desired outputs are the RGB values of pixels from the corresponding clean segments. Of the 1,350 sample pairs, 60% were used for network training, 20% for validation, and 20% for testing.
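The training setup can be sketched in Python. This is only a minimal stand-in for the Matlab nftool workflow described above: the sample pairs are synthetic placeholders rather than the painting data, and since scikit-learn offers no Levenberg–Marquardt solver, the quasi-Newton 'lbfgs' solver is used in its place.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Synthetic stand-ins for the 1,350 dirty -> clean RGB sample pairs,
# scaled to [0, 1] so the tanh units behave well.
dirty = rng.random((1350, 3))
clean = np.clip(1.15 * dirty + 0.05, 0.0, 1.0)  # fake "cleaning" map for demo

# 60% training, 20% validation, 20% testing, as in the paper.
X_train, X_rest, y_train, y_rest = train_test_split(
    dirty, clean, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(
    X_rest, y_rest, test_size=0.5, random_state=0)

# One hidden layer of 30 tanh neurons, mirroring the paper's two-layer
# feedforward network (solver swapped as noted above).
net = MLPRegressor(hidden_layer_sizes=(30,), activation='tanh',
                   solver='lbfgs', max_iter=2000, random_state=0)
net.fit(X_train, y_train)

# Cleaning the whole painting amounts to applying the trained network to
# every pixel; outputs are clipped to the valid RGB range.
cleaned = np.clip(net.predict(X_test), 0.0, 1.0)
print(cleaned.shape)  # (270, 3)
```

Once trained, the same `predict` call can be mapped over all pixels of the image, which is what allows the edge-only training set to clean the entire painting.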

4. Cleaning results

In actual physical cleaning there is no standard quantitative criterion of restoration performance, and the only gauge of the success of the cleaning process is visual examination of the results. This is evident in the trial-and-error approach implemented by most conservators [1]. Likewise, although digital cleaning uses a scientific approach, results are still speculative [3,5]. However, certain factors, like the existence of color photographs or descriptions of the painting while it was still new, can help gauge the reasonability of the results. In this case, we assess the performance of our technique using two measures. The first is that the neural network should be able to clean even pixels of the boundary that were not part of the training set; after digital cleaning, the transition of colors from the unexposed parts to the adjacent exposed parts should therefore be smoother. The second measure involves checking whether the color change induced by the digital cleaning is consistent with the context of the painting. For example, knowing that the Malacañang Palace depicted in the center of the painting is supposed to be white and that the upper right corner is supposed to be clear blue sky, we expect a color change towards white and towards blue for these two painting elements, respectively.

A side-by-side comparison of Malacañang by the River before and after digital cleaning is shown in Fig. 1. Notice that the mask-like boundary between the exposed and unexposed portions is now less visible. The blue sky, the green tree, and the lower-left portion of the river all look more vivid after digital cleaning. The highlights on the clouds and on the Malacañang are also more pronounced. Figure 2 shows a detail cropped from the upper right corner of the painting.

Fig. 1 Malacañang by the River by Fernando Amorsolo, oil on canvas board, 43.7x56.2x4.0 cm before (left) and after (right) digital cleaning. The painting is from the UP Vargas Museum Collection.

Fig. 2 Detail of Malacañang by the River before (left) and after (right) digital cleaning.

It must be noted that the total number of pixels used in training (1,350 pairs) is less than 0.04% of the total number of pixels in the painting image (7,138,640), and yet the network was able to generalize the cleaning throughout the painting.

However, certain portions, like the pinkish-red leaves and the shore (Fig. 3), appeared over-cleaned, making it look as if the paint had flaked off. We observed that these over-cleaned areas either had the same color as dirty pixels from the training set or had colors that were not represented in the training set.

Fig. 3 Detail of Malacañang by the River before (left) and with the over-cleaning (right).

5. Context-based post processing

The over-cleaning was addressed by applying a post-processing step that looks at the context of the painting to rule out over-cleaned areas. The first step is to segment the over-cleaned areas. This can be done either manually, by drawing a polygon around the region of interest, or automatically, using any segmentation algorithm. In our case we used histogram back projection [12], a color segmentation algorithm that uses the histogram of a sample portion of the region of interest as a probability distribution function for tagging the rest of the pixels belonging to that region. To remove the effect of brightness variations, the Euclidean color difference D in rg space was then computed for these areas as D = (Δr^2 + Δg^2)^(1/2), where Δr and Δg are the changes before and after cleaning in the chromaticity coordinates r = R/(R + G + B) and g = G/(R + G + B). After observing that over-cleaned parts have a greater Euclidean color difference in rg space than parts that were not over-cleaned, we imposed the condition that once the color difference before and after cleaning exceeds a certain threshold, the pixel retains its value prior to cleaning. The result of the post processing is shown in Fig. 4. The final result for the whole painting is shown in Fig. 5.
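The reversion rule can be sketched as follows. The function names and the threshold value here are illustrative, not taken from the paper, and the boolean mask is assumed to come from histogram back projection or manual selection of the over-cleaned region.

```python
import numpy as np

def rg_chromaticity(rgb):
    """Convert an (H, W, 3) float RGB image to its (r, g) chromaticity
    planes, discarding brightness: r = R/(R+G+B), g = G/(R+G+B)."""
    s = rgb.sum(axis=-1, keepdims=True)
    s[s == 0] = 1.0  # avoid division by zero on pure black pixels
    chroma = rgb / s
    return chroma[..., 0], chroma[..., 1]

def revert_overcleaned(dirty, cleaned, mask, threshold=0.1):
    """Within the segmented region `mask`, revert pixels whose Euclidean
    color change in rg space exceeds `threshold` back to their
    pre-cleaning values.  The threshold is an illustrative value."""
    r0, g0 = rg_chromaticity(dirty)
    r1, g1 = rg_chromaticity(cleaned)
    d = np.sqrt((r1 - r0) ** 2 + (g1 - g0) ** 2)  # D = (dr^2 + dg^2)^(1/2)
    out = cleaned.copy()
    revert = mask & (d > threshold)
    out[revert] = dirty[revert]
    return out
```

Because the distance is computed in chromaticity rather than raw RGB, a pixel that merely brightened or darkened during cleaning is left alone; only pixels whose hue shifted strongly inside the segmented region are reverted.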

Fig. 4 Detail of Malacañang by the River before (left), with the over-cleaning (center) and after the post processing (right).

Fig. 5 Malacañang by the River before (left) and after (right) digital cleaning and context-based post processing.

6. Visualization of the Dirt Layer

Because the action of our digital cleaning procedure was to virtually remove the painting’s dirt and grime due to exposure and to eliminate the effect of the oxidized varnish, we associate the dirt layer with the oxidized varnish and the dirt and grime that adhered to it. As Cotte and Dupraz [5] have experimentally shown, the effect of this aged varnish is similar to that of a brightness and color filter superimposed on the painting. Since the effect of a superimposed filter is to multiply the spectrum of the object beneath it by the filter’s spectrum, the dirt layer spectrum can then be obtained by taking the quotient of the painting image’s reflectance spectra before and after digital cleaning, that is,

Dirt_spectra(λ) = Dirty_pixel_spectra(λ) / Clean_pixel_spectra(λ)     (1)

The reflectance spectra before and after digital cleaning can be reconstructed from the image RGB values using Imai and Berns’ technique, or alternatively using Haneishi et al.’s method of Wiener estimation [13,14]. In this case we used our variation of Imai and Berns’ technique [15]. Using just the first three principal components of the reflectance spectra of an ensemble of 1,600 Munsell color chips and the measured channel sensitivities of the camera, the point-per-point spectral information of the painting image before and after digital cleaning was obtained. Figure 6 shows the calculation and rendering of the dirt spectrum of a point in the painting image. Applying Eq. (1) and computing the RGB of the dirt filter per pixel for the whole painting results in the dirt layer visualization shown in Fig. 7.
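The reconstruction-and-ratio step can be sketched under stand-in data. Here `B`, `mean_refl`, and `S` are random placeholders for the three Munsell principal components, the mean Munsell spectrum, and the measured camera sensitivities, and the three-weight linear model is a simplified reading of the Imai–Berns approach, not the authors' exact variant.

```python
import numpy as np

rng = np.random.default_rng(1)
wavelengths = np.arange(400, 701, 10)   # 31 bands, 400-700 nm
n = wavelengths.size

# Random stand-ins for the measured quantities: first three principal
# components of the Munsell reflectance ensemble (B), the mean Munsell
# spectrum, and the camera's three channel sensitivities (S).
B = rng.random((n, 3))
mean_refl = np.full(n, 0.5)
S = rng.random((3, n))

# Forward model: camera RGB = S @ reflectance.  Writing the reflectance
# as mean_refl + B @ w, the three weights w follow from the RGB response
# by inverting a 3x3 system.
M = S @ B

def estimate_reflectance(rgb):
    w = np.linalg.solve(M, rgb - S @ mean_refl)
    return mean_refl + B @ w

# Dirt transmission spectrum = dirty spectrum / clean spectrum, per Eq. (1).
rgb_dirty = rng.random(3)
rgb_clean = rng.random(3)
dirt = estimate_reflectance(rgb_dirty) / estimate_reflectance(rgb_clean)
```

By construction, projecting an estimated spectrum back through `S` reproduces the input RGB exactly; the estimate is only as good as the three-component basis, which is why the text notes that more camera channels would improve it.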

Fig. 6 Reconstruction of the dirty, clean, and dirt spectra of a pixel in the sky portion of the painting. The patches show the corresponding image pixel.

Fig. 7 Visualization of Malacañang by the River’s dirt layer.

More accurate spectral estimation of the dirty and clean paint may be obtained if more than three camera channels are used to image the painting. It is nevertheless interesting that even with just three channels, an estimate of the dirt spectra can be obtained. The yellowish cast over the whole painting is consistent with the effect of yellow oxidized varnish, while the greenish-brown color that is visible especially at the corners of the image can be attributed to the dirt and grime that adhered to the painting over time. Although many of the color changes in the painting happen similarly across its entire surface, some changes can also be isolated to a particular area or pigment. Differences in pigment steadfastness and variations in the dirtying of different locations in the painting could also contribute to the non-homogeneity of the actual dirt layer [16]. Also, because the varnish and dirt discoloration would not be equally visible on all colors, some non-uniformity in the dirt layer visualization is to be expected. The black areas correspond to locations where the reconstruction of either the clean or the dirty spectrum yielded negative values and was therefore set to zero.
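The handling of negative reconstructions described above can be made concrete in a small per-pixel sketch; the function name and the shapes assumed for the inputs are illustrative.

```python
import numpy as np

def render_dirt_pixel(spec_dirty, spec_clean, S):
    """Render one pixel of the dirt-layer visualization.

    spec_dirty, spec_clean : reconstructed reflectance spectra (length n)
    S : camera channel sensitivities, shape (3, n)

    As in the paper, pixels where either reconstructed spectrum contains
    negative (unphysical) values are rendered black instead."""
    if np.any(spec_dirty < 0) or np.any(spec_clean < 0):
        return np.zeros(3)
    dirt = spec_dirty / spec_clean   # Eq. (1), applied per wavelength
    return S @ dirt                  # project the dirt spectrum to RGB
```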

7. Conclusion

We introduced two innovations in digital cleaning. The first is the use of neural networks and digital color samples of hidden, dirt-free parts of the painting to learn the transformation from dirty to clean segments. The application of neural networks has allowed us to introduce totally non-invasive, whole-painting digital cleaning. Although the results showed over-cleaning of portions that resemble the color of the dirty pixels in the training set, this was resolved by context-based post processing. Since the results showed visual correctness, as evidenced by the minimization of the difference between the exposed and unexposed painting portions, we conclude that our methodology was successful in digitally cleaning Fernando Amorsolo’s Malacañang by the River.

The second is a method for visualizing the discoloration of the painting as a transmitting film. From the RGB values of pixels of the dirty and digitally cleaned painting, we can recover an estimate of their reflectance spectra. By taking the ratio of the dirty and the clean spectrum point-per-point we can calculate the transmission of this “dirt” filter and render it for the whole painting. Even with just 3 camera channels, the calculated spectra appear consistent with common discoloration processes such as varnish oxidation, difference in color steadfastness, or accumulation of dirt particles.

Acknowledgements

The authors would like to thank the UP Vargas Museum for giving us access to the painting. Cherry May Palomero would also like to thank DOST-ASTHRDP for her scholarship. This work is sponsored by the UP Open Grant Project No. 062929OG.

References and links

1. M. Pappas and I. Pitas, “Digital color restoration of old paintings,” IEEE Trans. Image Process. 9(2), 291–294 (2000). [CrossRef]   [PubMed]  

2. M. Barni, F. Bartolini, and V. Cappellini, “Image processing for virtual restoration of artworks,” IEEE Multimed. 7(2), 34–37 (2000). [CrossRef]  

3. R. Berns, F. Imai, and L. Taplin, “Rejuvenating Seurat’s A Sunday on La Grande Jatte—1884 using color and imaging science techniques: a simulation,” in ICOM 14th Triennial Meeting, The Hague, 12–16 September 2005: Preprints, I. Verger, ed. (Maney Publishing, 2005), pp. 452–458.

4. C. M. Palomero and M. Soriano, “After digital cleaning: visualization of the dirt layer,” Proc. SPIE 7869, 78690O (2011). [CrossRef]

5. P. Cotte and D. Dupraz, “Spectral imaging of Leonardo Da Vinci’s Mona Lisa: A true color smile without the influence of aged varnish,” in Proc. IS&T CGIV’06, University of Leeds UK, June 19–22, 2006.

6. R. S. Berns, “Rejuvenating the appearance of cultural heritage using color and imaging science techniques,” in Proc. AIC Colour 05 (AIC, 2005), pp. 369–374.

7. M. Bacci, F. Baldini, R. Carla, R. Linari, M. Picollo, and B. Radicati, “Color analysis of the Brancacci chapel frescoes: part II,” Appl. Spectrosc. 47(4), 399–402 (1993). [CrossRef]  

8. M. Bacci, A. Casini, C. Cucci, M. Picollo, B. Radicati, and M. Vervat, “Non-invasive spectroscopic measurements on the Il Ritratto della figliastra by Giovanni Fattori: identification of pigments and colourimetric analysis,” J. Cult. Herit. 4(4), 329–336 (2003). [CrossRef]  

9. C. M. Palomero and M. Soriano, “Neural network for the digital cleaning of an oil painting,” in Digital Image Processing and Analysis, OSA Technical Digest (CD) (Optical Society of America, 2010), paper DMD5. http://www.opticsinfobase.org/abstract.cfm?URI=DIPA-2010-DMD5

10. A. Gascadi and P. Szolgay, “Image inpainting methods by using cellular neural networks,” in Int’l Workshop on Cellular Neural Networks and Their Applications (IEEE, 2005), pp. 198–201.

11. Matlab 2007 Neural Network Toolbox documentation.

12. M. J. Swain and D. H. Ballard, “Color indexing,” Int. J. Comput. Vis. 7(1), 11–32 (1991). [CrossRef]  

13. F. Imai and R. Berns, “Spectral estimation using trichromatic digital cameras,” in Proc. of the International Symposium on Multispectral Imaging and Color Reproduction for Digital Archives (AIC, 1999) pp. 42–49.

14. H. Haneishi, T. Hasegawa, A. Hosoi, Y. Yokoyama, N. Tsumura, and Y. Miyake, “System design for accurately estimating the spectral reflectance of art paintings,” Appl. Opt. 39(35), 6621–6632 (2000). [CrossRef]   [PubMed]  

15. M. Soriano, W. Oblefias, and C. Saloma, “Fluorescence spectrum estimation using multiple color images and minimum negativity constraint,” Opt. Express 10(25), 1458–1464 (2002). [PubMed]  

16. K. Martinez, J. Cupitt, D. Saunders, and R. Pillay, “Ten years of art imaging research,” Proc. IEEE 90, 28–41 (2002).
