Multifocus HDR VIS/NIR hyperspectral imaging and its application to works of art


Abstract

This paper presents a complete framework for capturing and processing hyperspectral reflectance images of artworks in situ using a hyperspectral line scanner. These capturing systems are commonly used in laboratory conditions, synchronized with scanning stages specifically designed for planar surfaces. However, when the intended application domain does not allow image capture in such controlled conditions, obtaining useful spectral reflectance image data can be very challenging (due to uncontrolled illumination, high dynamic range (HDR) conditions in the scene, and the influence of chromatic aberration on image quality, among other factors). We show, for the first time, all the steps of capture and post-processing needed to obtain high-quality HDR-based reflectance in the visible and near infrared, directly from data captured with a hyperspectral line scanner coupled to a rotating tripod. Our results show that the proposed method outperforms the normal capturing process in terms of dynamic range, color and spectral accuracy. To demonstrate the potential of this processing strategy for on-site analysis of artworks, we applied it to the study of a vintage copy of the famous painting “Transfiguration” by Raphael, as well as a facsimile of “The Golden Haggadah” from the British Library of London. The second piece was studied for the identification of highly reflective gold-foil covered areas.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Spectral imaging is attracting growing interest in many scientific and industrial fields. New fast and lightweight image capturing architectures, together with powerful CPUs and GPUs, make it easier to capture and process spectral image data for several applications [1, 2]. Among these, recording spectral reflectance image data is crucial for some purposes. Spectral reflectance is an intrinsic property of objects. When any spectral imaging device is used in raw mode (unprocessed), and under any arbitrary illuminant, the sensor responses retrieved are proportional not only to the spectral reflectance of the object being imaged, but also to the spectral responsivity of the device and the spectral power distribution of the illumination used. Therefore, in order to retrieve only reflectance data pixel-wise, both the influence of the responsivity of the device and the impact of the spectrally and spatially non-uniform illumination must be discounted (also pixel-wise). So far, hyperspectral image capturing systems and techniques present important limitations. From commercially available cameras and scanners [9] to ad hoc prototypes [10], all spectral imaging systems are affected by a limited dynamic range and by the undesired effect of chromatic aberrations. The latter becomes more severe as the captured spectral range widens. A wide spectral range is relatively common in capture devices with silicon-based sensors, which cover not only the visible but also the near infrared range, typically from 400 up to 1000 nm.

Regarding dynamic range, some researchers have already proposed prototype HDR multispectral imaging systems. Brauers et al. proposed a filter wheel-based camera with 7 broad filters covering the visible spectral range from 380 to 780 nm and using 7 exposure times per filter [12]. Lapray et al. proposed a prototype sensor based on multispectral filter arrays with 8 broad-band channels (7 in the visible range and 1 covering both part of the visible and the NIR range of the spectrum), using 3 exposure times per capture [13]. Hirai et al. also proposed a system based on two identical RGB video cameras, each with a different color filter on top, and a programmable rotating stage [14]. They used this system to capture outdoor omnidirectional 6-band multispectral HDR images by taking 15 shots per image and stitching them together with the proper geometric calibration and corrections.

When capturing spectral images in completely uncontrolled illumination conditions, including inter-reflections and even temporally variable illumination (the so-called out-of-the-lab situation), applying only HDR techniques may not suffice for acquiring useful image data for scientific applications. Some authors have also proposed methods for processing the captured image data in order to reduce the impact of spectrally non-uniform illumination. These techniques, originally applied to color images (chromatic adaptation), are also evolving toward spectral imaging, with proposed algorithms [15] and imaging systems [16, 17] that estimate the illuminant present in the scene [18–20] and allow estimating the reflectances of the non-planar objects present in it.

All these systems and techniques are multispectral, and cover mainly the visible range. The spectral reflectance of the objects present in the scene can be estimated from a few sensor responses (usually 6 to 8), by discounting the illumination present in the scene (which in many cases is also assumed spatially uniform) and using spectral estimation algorithms to recover the full spectral reflectance information. In this work we do not aim to tackle the complex problem of outdoor capture, which usually includes volumetric objects and spatial and temporal variations in the illumination present in the scene. The focus is rather on a particular application domain for which spectral information analysis has proven to provide very useful insights: the analysis of artworks. The intended aim is to retrieve accurate spectral reflectance data of planar works of art (paintings, documents, posters, photographs…), in the visible and near infrared range, by discounting the spatially and spectrally non-uniform illuminant pixel-wise. This is done using a commercial hyperspectral imaging system. In the conditions illustrated in this study, flat-field correction can be performed to help discount the spatially non-uniform illumination. However, on-site capture introduces other limitations (like chromatic aberration and dynamic range issues) which have not been sufficiently addressed in previous studies related to hyperspectral capture in this domain.

2. Spectral imaging of pictorial artworks

Applications related to the present study, such as the analysis, conservation and restoration of cultural heritage, are emerging as expanding fields for the use of spectral imaging systems [3, 4]. Spectral imaging is a very powerful technique which, when properly applied, can retrieve useful information from art objects in a non-invasive manner. This is especially important when the objects are very old and delicate [5–7].

However, so far no art-related studies have applied high dynamic range multifocus hyperspectral imaging techniques able to capture high quality hyperspectral reflectance images in uncontrolled illumination conditions, or from highly reflective artworks (e.g. containing golden coatings). Even in completely controlled laboratory conditions, with smaller samples that can be captured on a scanning stage, there is a lack of homogeneity in the way different laboratories capture and process spectral reflectance images [8].

One common system architecture used for hyperspectral imaging is the hyperspectral line scanner [11]. These systems feature a two-dimensional sensor array, but they do not capture a two-dimensional image in each shot. Due to their dispersive optic elements, the sensor records spatial information in one dimension and narrow-band spectral information in the other dimension, in a single shot. In other words, the device only senses a single line of the scene at a time, but it senses the full spectrum of that line. A push-broom scanning movement then sweeps this line across the scene to record the spectral information of the whole two-dimensional scene. This system architecture is widely used for spectral imaging of artworks [9, 21]. In some applications, a scanning linear stage can be used to easily capture the whole hyperspectral information of small samples by moving the samples (the scene) instead of the hyperspectral camera [5] (see Fig. 2, a). Thus, the scanner always senses the same line, where the illumination is kept fixed and controlled both in its spatial and spectral profiles. These conditions are very convenient for capturing a flat-field sample. This flat-field capture is later used for discounting illumination non-uniformities, as well as the contribution of the illumination spectrum and the spectral responsivity of the capturing device for each spectral band. Thus, a hyperspectral reflectance image is captured which only contains information on the scene’s spectral reflectance. This information serves as a powerful tool for pigment and material identification in works of art [6–10].

However, some applications do not allow capture in such completely controlled conditions. This is the case, for example, for frescoes affixed to a wall, for large-format works, or when the state of preservation of the work precludes moving it from its location and fixing it to a linear stage [22]. Thus, the only available option in these cases is to transport the hyperspectral capture device to the site of the work of art. For these situations, manufacturers also offer the possibility of synchronizing the scanners to a rotating stage on a tripod (see Fig. 2, b). In this way, a scene can be scanned by rotating the camera instead of moving the sample or the scene.

The main problem of this configuration is that the illumination is no longer well controlled. Even if standard light sources are used, the painting is not homogeneously illuminated, and thus presents highlights and shadowy areas, aggravated by the common presence of highly reflective binders, varnishes or other materials. Hence, different spatial locations and spectral bands might register huge variations in the intensity of the light reflected from the painting. This might exceed the dynamic range of the device, making it necessary to use HDR techniques to recover useful information for each pixel and every spectral band [23, 24]. This situation is still not the completely out-of-the-lab uncontrolled situation, since the imaged object is planar and the light sources are static with no inter-reflections, allowing the use of flat-field correction.

Comparing extreme cases of possible sensor response values, on the one hand it would be likely to find a very dimly illuminated area with a very dark reflectance, at a wavelength with very low sensor responsivity and very low illumination power, which might need very long exposure times to generate any significant sensor response above the noise floor. On the other hand, a very brightly illuminated area could be found, with a very high sample reflectance at a wavelength with high sensor responsivity and illumination power, which would require very short exposure times not to get saturated (clipped). In these two extreme cases, the light signal impinging on the sensor and its responsivity at the different wavelengths are so different that the dynamic range is very high. Unfortunately, hyperspectral scanners only allow one exposure time to be set for each complete capture. In other words: in a single scan, all wavelengths and areas of the scene are captured using the same single exposure time.
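To make the scale of the problem concrete, a back-of-the-envelope sketch in Python; every number below is an assumed illustration, not a measured value from this study:

```python
import numpy as np

# Hypothetical extremes (assumed values, for illustration only):
# relative irradiance x sample reflectance x sensor responsivity.
signal_low = 1e-3 * 0.02 * 0.1    # dim area, dark pigment, weak band
signal_high = 1.0 * 0.95 * 0.9    # bright area, light pigment, strong band

ratio = signal_high / signal_low
print(f"signal ratio: {ratio:.0f}x ({np.log2(ratio):.1f} stops)")
# -> roughly 4e5x, about 18.7 stops; a 12-bit sensor spans ~12 stops at best,
# so no single exposure time can place both extremes in the usable range.
```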

In these conditions, performing flat-field correction would mean either assuming uniform illumination for the whole scene (which is never the case), or using a flat-field sample big enough to cover the same extension as the region of interest to be recorded (in this case the whole painting). Such a situation is shown in Fig. 2 (c): a painting is being captured in situ, where its size is much bigger than the available scanning stage. Therefore, the illumination is set in front of the painting, and the painting is placed in front of a calibrated gray tarp of known reflectance. Fig. 2 (d) shows a map of the relative irradiance impinging on the painting. The illumination is clearly not spatially uniform; some areas are more powerfully illuminated (red) than others (blue). Nor is it spectrally uniform across the scene, which makes an accurate illumination-discounting procedure all the more important. A further problem is the difference in reflectance across wavelengths and spatial areas of the painting, which further increases the dynamic range of the scene.

The literature shows that the near infrared region of the spectrum is important when dealing with art-related applications [6–10, 21]. There is also an additional limitation due to chromatic aberration (different wavelengths focus at different distances). Therefore, if the system is focused while monitoring, for instance, the 500 nm band, the images above 700 nm will probably be out of focus. This blur is even more severe when capturing near infrared images in the same scan. This is why it is also of interest to capture different regions of the spectrum using different focus positions and merge them into the same final spectral cube, after registration to correct for the differences in magnification inherent to the different focus positions.

In this paper, to the best of our knowledge, a complete workflow is proposed for the first time that overcomes these limitations: it captures and processes multifocus HDR hyperspectral image data using a hyperspectral visible and near infrared line scanner, in order to extract a final hyperspectral reflectance image of a known painting captured under uncontrolled illumination conditions.

Promising uses of the captured data include non-invasive pigment identification in paintings and other works of art [6–10, 21], and accurate colorimetric analysis of the temporal evolution of artworks subjected to aging processes after conservation actions [5, 25].

3. Material and Methods

In this section, the painting and the equipment used, as well as the methods proposed to retrieve the hyperspectral reflectance image through multifocus HDR hyperspectral image capture, are described. A work-flow diagram is shown in Fig. 1. The left-hand side represents the multifocus HDR capture (see section 3.5). Three focus positions were needed to capture a sharp image of the whole spectral range from 400 to 1000 nm, and three exposure times per focus position were needed to correctly capture the whole dynamic range of the scene (as proposed in [23]). The processing steps are represented on the right-hand side: first, the conversion from LDR raw to radiance images (sections 3.2.2 and 3.7); then the LDR reflectance cube building (section 3.10), followed by the HDR cube building (section 3.11). Additional steps such as weight map calculation, image registration and HDR flat-field cube building are explained in sections 3.6, 3.8 and 3.9 respectively.

Fig. 1 Workflow of the capturing and processing steps.

3.1. The painting

The Transfiguration is an oil painting (63.3 × 93.2 cm). It is a copy of the Transfiguration of Raphael Sanzio (1518-1520), displayed in the Vatican Pinacoteca, which was originally painted on a wood panel (405 × 278 cm) by order of Cardinal Julio de’ Medici (1515), the future Pope Clement VII. The Transfiguration of Jesus Christ is the posthumous work of Raphael; it was left unfinished and was completed by his disciple Giulio Romano. In addition to the (anonymous) painting studied in this paper, to our knowledge there are three recognized copies, painted by Giovanni Francesco Penni, disciple of Raphael (1520-1528); by Domingo Álvarez Enciso (18th century); and by an anonymous author (16th century). The painting studied in this paper is the one most similar to Raphael's original.

3.2. Capturing system: raw and radiance modes

The system used in this study is a hyperspectral line scanner, model PikaL VNIR from Resonon Inc. (Bozeman, MT, USA). The sensor is 900 pixels wide in the spatial dimension. The maximum spectral resolution is 2.1 nm (300 bands from roughly 384 to 1018 nm), but with hardware binning in the spectral dimension it becomes 4.1 nm (150 bands from 384 to 1016 nm). Hardware binning increases the signal level at the cost of spectral resolution. For this study, 150 spectral bands are enough to get accurate spectral reflectance data, and this sampling interval is still smaller than the 5 or 10 nm usual in hyperspectral imaging capture [8]. The angular field of view of the optics is 13° and the vertical size of the painting is 93.2 cm, so the device was placed approximately 450 cm from the painting (see Fig. 2, c). In the horizontal dimension, any number of lines can be scanned. Since the horizontal size of the painting is 63.3 cm, the final hyperspectral image had a spatial resolution (after cropping edges) of 736 × 516 pixels. Higher spatial resolution could be achieved by getting closer to the painting, but then the whole painting would not fit in the field of view, so several scans would be needed to capture it piecewise and stitch the parts into the final spectral cube. This could be interesting for studying a particularly relevant detail of the painting, and it can always be done independently of the full artwork capture, in exactly the same way.
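A quick sanity check of this geometry (our arithmetic, derived from the values quoted above):

```latex
% Minimum distance for the 93.2 cm vertical extent to fit a 13-degree field of view:
d \;\geq\; \frac{93.2\,\mathrm{cm}/2}{\tan(13^{\circ}/2)}
  \;=\; \frac{46.6\,\mathrm{cm}}{\tan 6.5^{\circ}} \;\approx\; 409\ \mathrm{cm}
```

so the approximately 450 cm working distance leaves a comfortable margin around the painting.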

Fig. 2 a) PikaL line scanner mounted in linear stage for scanning samples of reduced size. b) PikaL scanner mounted on rotating tripod and gray tarp. c) set up for the measurement of the art painting. d) Relative irradiance map impinging on the painting at 702 nm band (red means high and blue low irradiance).

The captured image data can be in two main formats: raw and radiance. We explain these two formats, and the influence of very low or very high raw response values on the image data, in the following subsections.

3.2.1. Raw mode

Image data in raw mode is the unprocessed image data from the sensor. These sensor responses $\rho_{raw}(x,y,\lambda)$ are wavelength-wise proportional to the spectral power distribution of the illumination $SPD_{illum}(x,y,\lambda)$, the reflectance of the sample $Ref_{sample}(x,y,\lambda)$ and the responsivity of the sensor $Resp_{sens}(\lambda)$. Since this system has narrow spectral bands, the sensor responses can be calculated as shown in Eq. (1), where $x$ and $y$ correspond to spatial coordinates and $\lambda$ to wavelength.

$$\rho_{raw}(x,y,\lambda) = SPD_{illum}(x,y,\lambda)\, Ref_{sample}(x,y,\lambda)\, Resp_{sens}(\lambda) \qquad (1)$$

This model is a simplification of the real situation, since it assumes a Lambertian surface for the painting, which is not the case. The spectral reflectances calculated for both the painting and the facsimile studied in section 4.3 therefore rely on this assumption. This fact was pointed out by MacDonald et al. [8], who found that reflectances in the golden-foil covered areas measured with different imaging systems were “similar in shape but varied in amplitude by factors of as much as 3x”. Specular reflections from some materials are certainly an issue that is not solved by the approach presented in this paper.

Raw sensor responses are encoded with 12 bits and afterwards normalized to the range [0 - 1] (float). A sensor response equal to 0 means the pixel is underexposed (noise) and a sensor response equal to 1 means it is saturated (clipped). In both cases, a different exposure setting would be necessary to correctly expose these pixels. This is the classical high dynamic range (HDR) problem [26, 27]. The most widely used strategy in these cases is to change the exposure settings (i.e. increasing or decreasing the exposure time) and capture differently exposed images of the same scene or sample [28]. These images are then blended together into a single HDR image [29].
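For concreteness, a minimal numpy sketch of Eq. (1) followed by 12-bit quantization; the spectral shapes of the illuminant and responsivity below are assumptions chosen only to make the clipping behavior visible:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy spectral quantities (illustrative shapes only).
wl = np.linspace(400, 1000, 150)                    # 150 bands, as in the PikaL setup
spd = np.exp(-0.5 * ((wl - 850) / 250) ** 2)        # halogen-like SPD (assumed)
resp = np.exp(-0.5 * ((wl - 600) / 180) ** 2)       # silicon-like responsivity (assumed)
refl = rng.uniform(0.02, 0.95, size=(64, 64, 150))  # random sample reflectances

t_exp = 2.5  # relative exposure time (arbitrary units)

# Eq. (1): raw response ~ SPD x reflectance x responsivity (x exposure),
# then 12-bit quantization normalized to [0, 1] with clipping.
raw = np.clip(np.round(spd * refl * resp * t_exp * 4095) / 4095, 0.0, 1.0)

print("saturated fraction:", (raw >= 1.0).mean())
print("underexposed fraction:", (raw <= 0.0).mean())
```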

3.2.2. Radiance mode

If the capturing device is properly radiometrically calibrated (its camera response function, CRF, is known), as is the case in this study, a radiance map can be calculated from the raw images [23, 24]. Otherwise, it is easy to extract the CRF by capturing differently exposed images of the same scene [29, 30]. In this case, the raw sensor responses $\rho_{raw}(x,y,\lambda)$ are back-mapped to exposure values $\rho_{rad}(x,y,\lambda)$ (via the inverse camera response function, $CRF^{-1}$ [31, 32]), as shown in Eq. (2).

$$\rho_{rad}(x,y,\lambda) = CRF^{-1}\!\left(\rho_{raw}(x,y,\lambda)\right) \qquad (2)$$

Thus, the influence of the camera responsivity is eliminated and only the information from $SPD_{illum}(x,y,\lambda)$ and $Ref_{sample}(x,y,\lambda)$ is left in the image data (radiance). Note that this new radiance image is the result of post-processing the initial raw image. Hence, if the raw sensor response was underexposed or saturated, the radiance value given to those pixels will be the minimum or maximum radiance that can be correctly captured with the current exposure time, which of course is not a correct value. In an intermediate exposure time capture, for example, if there were saturated raw sensor responses, there would not be very bright saturated pixels in the final radiance image; however, detail would be lost in these areas since the original raw responses were clipped. This is shown in Fig. 3. On the right there is a detail of one of the LDR radiance images (584 nm). A green line is drawn over it, and the corresponding radiance profile is plotted on the left. Note that for the pixel positions between 307 and 334, the retrieved radiance value is clearly clipped: it should be higher, so detail in this region is lost. In other words, a radiance value is always obtained, but due to the LDR capture limitations, it may not be proportional to the real radiance coming from this area of the scene.
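A sketch of Eq. (2) assuming a linear CRF (the scanner in this study was factory-calibrated; the gamma parameter and exposure normalization below are this sketch's own assumptions, not the manufacturer's calibration):

```python
import numpy as np

def inverse_crf(raw, gamma=1.0, t_exp=1.0):
    """Map raw responses back to relative radiance (Eq. (2)).

    gamma=1.0 assumes a linear, already-calibrated response; note that
    clipped inputs still yield in-range (but wrong) radiance values.
    """
    exposure = np.power(np.clip(raw, 0.0, 1.0), gamma)  # CRF^-1
    return exposure / t_exp                              # radiance = exposure / time

# A pixel saturated at raw = 1.0 maps to the largest radiance representable
# at this exposure time: plausible-looking, but not the true scene radiance.
print(inverse_crf(np.array([0.2, 1.0]), t_exp=2.0))
```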

Fig. 3 Left: normalized LDR radiance profile of the green segment drawn in the detail of the painting shown on the right. Right: LDR radiance image of spectral band corresponding to 584 nm.

3.3. The gray tarp

Flat-field correction needs to be performed using a gray tarp in order to retrieve the reflectance image. When using the scanning stage for capturing (see Fig. 2, a), this is as easy as capturing and averaging a few lines of the white Teflon bar, which is as long as the field of view of the camera. However, when capturing using the rotating tripod (see Fig. 2, b), the different scanned lines receive different illumination, so the whole scene needs to be covered with a flat-field sample. In this study, an airborne sensor calibration panel of size 1.2 × 1.2 m was used (model Resonon Inc. type 822; visible in Fig. 2, b, behind the tripod). This is a single-level diffuse gray panel of 36% reflectance in the spectral range 400 to 1700 nm.

3.4. Illumination

It is important to get enough light signal across the whole spectral range to be captured. Therefore, three halogen lamps (500 W each) and two LED panels (36 W each, model Cromalite Nanguan CN-600CSA) were used. The light sources were placed at both sides of the painting, leaving the central region free for sensing (see Fig. 2, c). This arrangement introduced spatial lightness and chromatic inhomogeneities in the illumination, which are discounted later on. These conditions mimic the natural conditions found in museums, temples, walls, outdoors, or wherever a work of art may be placed.

3.5. HDR multifocus raw capture

As mentioned in section 1, chromatic aberration is a limitation that is difficult to avoid when using spectral imaging systems covering such a wide spectral range [33, 34]. In the captures with the hyperspectral scanner, whatever focus position is set during the focusing step, the entire spectral range can never be in focus at the same time. Roughly speaking, each focus position yields sharp images over a range of around 200 nm, whilst the remaining bands become blurrier the further they are from the focusing wavelength. An example is shown in Fig. 4, where three different spectral bands of the same capture are shown in grayscale (contrast stretched for better display). The hyperspectral scanner was roughly focused in the central region of the spectrum (close to 600 nm). The band in the center is evidently sharp, while much shorter and much longer wavelengths (left and right respectively) are clearly blurry.

Fig. 4 Contrast stretched grayscale images of three different spectral bands captured at the same time. Note the effect of the chromatic aberration in the lack of sharpness of the short-most and long-most wavelengths.

This limitation is overcome by capturing different spectral images using a different focus position for each of them. This hyperspectral system has manual-focus optics, so the focus was changed by hand between captures, using a live view mode and placing a line-pattern target of size 105 × 75 cm in the scene. In a system with autofocusing optics, a method for assessing the sharpness of the bands and re-focusing automatically could be implemented [35–37]. Changing the focus means that a later registration step is needed in the post-processing of the images, because the magnification of the lens changes with the focus setting [38].

Moreover, as mentioned earlier, the hyperspectral scanner only allows one exposure time for all spectral bands at once, while the light signal varies strongly both spectrally and spatially (the scene has a high dynamic range in both senses). Therefore, for each focus position, captures with different exposure times have to be taken in order to combine those low dynamic range (LDR) spectral images into a single HDR spectral image. For each exposure time used, a flat-field cube was captured with the same exposure setting, as well as a dark image with the lens cap on, to discount the effect of dark current noise.
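The resulting capture schedule can be summarized as below. The in-focus ranges and two of the three exposure times are assumptions made for illustration (only the 59.04 ms value appears in the paper, in the caption of Fig. 5); the cube count, however, matches the 15 cubes reported in section 4.2:

```python
from itertools import product

# Hypothetical capture plan mirroring the setup described in the text:
# 3 focus positions (approximate in-focus ranges assumed) x 3 exposure times.
focus_ranges_nm = [(400, 600), (600, 800), (800, 1000)]  # assumed split
exposure_times_ms = [3.69, 14.76, 59.04]                 # assumed bracket

plan = [("sample", f, t) for f, t in product(range(3), exposure_times_ms)]
# One flat-field and one dark capture per exposure time.
plan += [("flatfield", None, t) for t in exposure_times_ms]
plan += [("dark", None, t) for t in exposure_times_ms]

print(len(plan), "cubes")  # 9 sample + 3 flat field + 3 dark = 15
```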

3.6. Building of raw weight maps

During the raw capture, pixels receiving too high a light signal are saturated and those receiving too low a signal are underexposed; in both cases the information is lost. Since these sensor responses are going to be used for building the final hyperspectral reflectance cube, it is necessary to keep track of which sensor responses can be trusted and which cannot. For this aim, weight maps are built by applying a weighting function to each pixel response (see Fig. 5, left).

Fig. 5 Left: weighting function applied for building the weight maps. Center: weight map computed for 972 nm band using 59.04 ms of exposure time. Right: mean weight map computed for 972 nm band.

This weighting function was proposed in [26] and previously used successfully in [31]. It assigns a value of 1 to each sensor response that can be fully trusted (usually the range where the camera response is linear), and a value of 0 to responses that are saturated or underexposed. An example of a resulting weight map for one spectral band is shown in Fig. 5, center. In this image, higher weights are represented with lighter gray levels and lower weights with darker gray levels. Regions where the sensor responses were too high or too low appear as dark pixels in the weight map. This is why subsequent images with higher or lower exposure times were captured, until each pixel was correctly exposed in at least one shot [31]. For each focus position, the sum and the mean weight cube over all exposures were calculated in order to check that all regions were correctly exposed. If the sum weight cube presented any null values, it would mean that those areas were not correctly exposed in any shot. Moreover, the mean weight cube contains information on which areas are more reliable than others; an example can be seen in Fig. 5, right. These weight cubes were built for both the sample and the flat-field captures (which also presented saturation and underexposure). No sum weight cube was found with null values, nor any mean weight cube with values lower than 0.8 (out of 1). Hence, all pixels were correctly exposed in each of the three focus positions.
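A minimal sketch of this step. The exact weighting function of [26] is not reproduced in the text, so the trapezoidal shape and the 0.05/0.95 thresholds below are this sketch's assumptions; only its qualitative behavior (1 in the trusted range, 0 at the extremes) follows the description above:

```python
import numpy as np

def weight_map(raw, low=0.05, high=0.95):
    """Hat-style weighting over normalized raw responses (thresholds assumed).

    Returns 1.0 inside the trusted (roughly linear) range, ramping down to
    0.0 for underexposed or saturated responses.
    """
    w = np.ones_like(raw)
    w[raw <= low] = raw[raw <= low] / low                       # noise-floor ramp
    w[raw >= high] = (1.0 - raw[raw >= high]) / (1.0 - high)    # saturation ramp
    return np.clip(w, 0.0, 1.0)

# Example with three exposures of one band:
rng = np.random.default_rng(1)
raws = [np.clip(rng.normal(m, 0.3, (64, 64)), 0, 1) for m in (0.2, 0.5, 0.8)]
weights = np.stack([weight_map(r) for r in raws])

sum_w = weights.sum(axis=0)    # zero anywhere no exposure was usable
mean_w = weights.mean(axis=0)  # reliability map, checked against 0.8 in the text
print("unusable pixels:", int((sum_w == 0).sum()))
```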

3.7. Calculation of radiance cubes

Transforming raw sensor responses into radiance values is straightforward (using Eq. (2)) if the function relating incoming radiance to sensor response has been previously calculated. This function is called the camera response function (CRF) and can easily be worked out by several methods [23, 29, 31]. In this case, the capturing system was already calibrated, so the needed radiance cubes were retrieved directly. Remember that, as explained in section 3.2.2, this calibration yields a radiance value even if the raw sensor response is saturated or underexposed; there is no way of knowing which radiance values are valid unless the previously calculated weight cubes are used.

3.8. Registration of cubes

Changing the focus leads to changes in the lateral magnification, which changes the effective field of view of the camera even if neither the camera nor the scene has moved at all. Moreover, for the correct capture of the furthest infrared bands (800 to 1000 nm), very long exposure times were needed due to the low responsivity of the sensor there. In this case, the continuous scanning mode did not work properly; instead, step-mode scanning was used and the rotating stage moved step by step rather than continuously, which noticeably changed the aspect ratio of the final image. For these two main reasons, registering all cubes captured with different focus settings or in step mode was a necessary step in the processing work-flow. Registration was performed with feature-based methods, using automatic control-point detection and SURF feature extraction [39] as implemented in Matlab's Registration Estimator app (extracting SURF features to determine a set of control points with an affine transform type, which was the transformation giving the best registration quality among those implemented for feature-based registration in Matlab, as assessed by the SSIM metric). This registration kept both the global and local aspect ratios of the original painting. However, for bigger canvas sizes or closer capturing distances, geometrical corrections could be needed to compensate for the projection of the cylindrical focal plane onto the painting plane. In that case, non-rigid registration techniques can be applied to the final cube [33, 38].
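A sketch of an equivalent feature-based affine registration in Python. The paper used SURF features in Matlab; SURF is patent-encumbered, so scikit-image's ORB detector is used here as a stand-in, which changes the features but not the overall scheme (detect, match, RANSAC-fit an affine transform, warp):

```python
import numpy as np
from skimage.feature import ORB, match_descriptors
from skimage.measure import ransac
from skimage.transform import AffineTransform, warp

def register_band(moving, fixed, n_keypoints=500):
    """Register one band (2D float image in [0, 1]) onto a reference band."""
    orb = ORB(n_keypoints=n_keypoints)
    orb.detect_and_extract(fixed)
    kp_f, desc_f = orb.keypoints, orb.descriptors
    orb.detect_and_extract(moving)
    kp_m, desc_m = orb.keypoints, orb.descriptors

    matches = match_descriptors(desc_f, desc_m, cross_check=True)
    src = kp_m[matches[:, 1]][:, ::-1]   # (row, col) -> (x, y)
    dst = kp_f[matches[:, 0]][:, ::-1]
    # The RANSAC-fitted affine transform absorbs the magnification change
    # caused by refocusing, plus the aspect-ratio change of the step mode.
    model, inliers = ransac((src, dst), AffineTransform,
                            min_samples=3, residual_threshold=2, max_trials=1000)
    return warp(moving, model.inverse, output_shape=fixed.shape)
```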

3.9. Building HDR flat field cubes

Capturing the flat-field samples under non-uniform illumination, and using different exposure times at different wavelengths, also leads to underexposed and saturated values. Since the flat-field radiance cubes are going to be used to retrieve the reflectance information, the radiance values must be valid for all pixels. Therefore, an HDR radiance flat-field cube is built by blending the differently exposed LDR flat-field cubes according to Eq. (3).

$$rad_{ff}^{HDR}(x,y,\lambda) = \frac{\sum_{n=1}^{N} \omega_n(x,y,\lambda)\, rad_{n,ff}(x,y,\lambda)}{\sum_{n=1}^{N} \omega_n(x,y,\lambda)} \qquad (3)$$
where $rad_{ff}^{HDR}(x,y,\lambda)$ is the HDR flat-field radiance cube value, $\omega_n(x,y,\lambda)$ is the weight map for the nth LDR flat-field raw cube, and $rad_{n,ff}(x,y,\lambda)$ is the nth LDR flat-field radiance cube. $\lambda$ represents the wavelength and $x$ and $y$ the pixel positions.

3.10. Calculation of LDR reflectance cubes

Once the raw cubes have been converted into radiance cubes, and the HDR flat field cubes calculated, the LDR reflectance cubes are computed. All the raw cubes are automatically dark corrected after capturing. Thus, the reflectance cubes are computed as shown in Eq. (4).

$$ref_{LDR,n}(x,y,\lambda) = \frac{rad_{sample,n}(x,y,\lambda)}{rad_{ff}^{HDR}(x,y,\lambda)}\, ref_{ff}(\lambda) \qquad (4)$$
where $ref_{LDR,n}(x,y,\lambda)$ is the nth LDR reflectance cube, $rad_{sample,n}(x,y,\lambda)$ is the nth radiance cube of the sample, $rad_{ff}^{HDR}(x,y,\lambda)$ is the HDR flat-field radiance cube previously calculated using Eq. (3), and $ref_{ff}(\lambda)$ is the spectral reflectance of the gray tarp (uniform over the whole tarp).

Note that these reflectance cubes still contain invalid values coming from saturated and underexposed raw responses. Therefore, multiple LDR reflectance cubes are built for each focus position, each with different areas holding valid reflectance values.
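A one-function sketch of Eq. (4); the 0.36 tarp reflectance follows section 3.3, while the eps guard is this sketch's own addition:

```python
import numpy as np

def ldr_reflectance(rad_sample_n, rad_ff_hdr, ref_ff=0.36, eps=1e-12):
    """Eq. (4): flat-field correction of the nth LDR sample radiance cube.

    ref_ff = 0.36 is the tarp's nominal 36% reflectance (section 3.3).
    """
    return rad_sample_n / np.maximum(rad_ff_hdr, eps) * ref_ff

# Invalid (clipped) regions are deliberately not masked here: they are
# down-weighted later, when the LDR cubes are blended via Eq. (5).
```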

3.11. Building of HDR reflectance cubes

After calculating the LDR reflectance cubes $ref_{LDR,n}(x,y,\lambda)$, they are blended together into an HDR reflectance cube. This is the step where the weight maps $\omega_n(x,y,\lambda)$ previously calculated from the raw images (see sub-section 3.6) become important: they are used to compute the final HDR reflectance cube $ref_{HDR}(x,y,\lambda)$ using Eq. (5).

$$ref_{HDR}(x,y,\lambda) = \frac{\sum_{n=1}^{N} \omega_n(x,y,\lambda)\, ref_{LDR,n}(x,y,\lambda)}{\sum_{n=1}^{N} \omega_n(x,y,\lambda)} \qquad (5)$$

Note that, even though HDR capture has been used due to the high dynamic range of the radiances incoming from the scene captured (with highlights and shadowy areas), the final reflectance of the painting does not need to contain HDR information. Actually, the reflectance of a painting (and in general most reflectances), is rarely going to have any HDR content [26, 27]. Rather, HDR is only present in radiances, where powerful light sources or dim shadowy areas are present in the same scene.

4. Results and discussion

We built two reflectance cubes. The first is the multifocus HDR version proposed in section 3 (CubeHDR). The second is an LDR version of it (CubeLDR), captured in the basic operating mode of the system, with a single exposure time and a single focus position for the whole capture. After both final reflectance cubes were built, different aspects of their quality were evaluated. In the following sub-sections, sharpness is evaluated, as well as color and spectral metrics using a spectroradiometer (model SpectraScan PR-745, Photo Research Inc., USA) as ground truth. In addition, an example application is shown in which highly reflective golden material is segmented in a real facsimile.

4.1. Focus evaluation

This evaluation consisted of simulating RGB captures under D65 illumination and rendering CIE 1976 L*a*b* images from the spectral reflectance cubes. Then a sharpness index $S$ [35–37] was computed on the luminance channel (L*) as shown in Eq. (6), where $h$ and $v$ are the image height and width respectively, $i = 1 \ldots h$, $j = 1 \ldots v$, and $G_x$, $G_y$ are the horizontal and vertical gradient vector components.

$$S = \frac{1}{hv} \sum_{i,j} \left[\left(G_x(i,j)\right)^2 + \left(G_y(i,j)\right)^2\right] \qquad (6)$$
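A minimal sketch of Eq. (6). The exact gradient operator is not specified in the text, so central differences (np.gradient) are assumed here:

```python
import numpy as np

def sharpness(L):
    """Eq. (6): mean squared gradient magnitude of the L* channel."""
    Gy, Gx = np.gradient(L)                  # vertical and horizontal components
    return ((Gx ** 2) + (Gy ** 2)).mean()    # (1 / hv) * sum over i, j
```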

An sRGB image was also computed for each one using a color conversion from the L*a*b* image to sRGB color space. Results of both images are shown in Fig. 6. Note that zoom-in details of the same area are included in both cases to better appreciate the effect of chromatic aberration on the colored edges of the LDR image.

Fig. 6 Left: sRGB capture simulation of CubeLDR and zoom in detail. Right: sRGB capture simulation of CubeHDR and zoom in detail. The effects of chromatic aberrations are mostly visible on the edges.

The higher the value of $S$, the sharper (better focused) the image. Three sharpness values were computed in order to study separately the effect of the HDR and multifocus techniques: $S_{LDR-mono} = 0.0280$, $S_{HDR-mono} = 0.0312$ and $S_{HDR-multi} = 0.0421$, corresponding to the mono-focus LDR, mono-focus HDR and multifocus HDR cases respectively. Using only HDR imaging techniques, the sharpness index improves by 11.4%. Comparing mono-focus HDR with multifocus HDR, the improvement is 34.9%. Finally, the total improvement from mono-focus LDR to the proposed multifocus HDR is 50.4%. This indicates that the sharpness improvement is due both to the HDR imaging techniques (which usually increase the signal-to-noise ratio) and to the multiple-focus approach, which reduces the impact of chromatic aberrations. These results agree with the less blurry appearance of the sRGB image computed from the HDR cube compared with the one computed from the LDR cube. Fig. 6 shows how the simulated LDR color image is blurrier and suffers from chromatic aberration, which introduces spatial color artifacts by mixing information from different bands in the LDR reflectance cube and contributes to the final blur of the image.

4.2. Color and spectral evaluation

This evaluation consisted of using different color and spectral metrics to compare the spectral reflectances obtained from the HDR and LDR spectral reflectance cubes with those measured using a spectroradiometer in the exact same areas of the painting. For this purpose, the spectroradiometer was used to measure the spectral reflectance of the painting in 16 different areas as ground truth. These areas were selected to present a wide variety of colors within the gamut of the painting. Two spectral radiance signals were measured at each point from 400 to 1000 nm. The first was the sample radiance, directly from the painting. The second was the white radiance, from a Spectralon standard white patch (Sphere Optics GmbH) placed on top of the painting at the exact same areas. The final spectral reflectance was computed as shown in Eq. (7).

$$R(n,\lambda) = \frac{rad_{sample}(n,\lambda)}{rad_{white}(n,\lambda)}\, ref_{white}(\lambda) \qquad (7)$$
where $R(n,\lambda)$ is the spectral reflectance at sample area $n$, $rad_{sample}(n,\lambda)$ is the sample spectral radiance at that area, $rad_{white}(n,\lambda)$ is the spectral radiance measured from the white patch placed at the same area, and $ref_{white}(\lambda)$ is the spectral reflectance of the white patch provided by the manufacturer.

These reflectances were compared with those extracted from the same 16 areas in both CubeLDR and CubeHDR, averaging areas of equivalent size at the same points. The results plotted in Fig. 7 show that, especially for some samples (numbers 5, 7, 8, 10, 11 and 16), the reflectances calculated with the LDR approach differ more from the measured reference reflectances in the region around 600 nm. This is because, at those areas and wavelengths, the LDR capture was saturated either in the painting capture or in the gray tarp capture; therefore, after flat-field correction, the calculated reflectance differs more from the measured one at those wavelengths.

Fig. 7 Comparison of spectral reflectances measured with the spectroradiometer (red line), and retrieved from the CubeLDR (green dashed line) and the CubeHDR (blue dashed line). X axis represents wavelength in nanometers, and Y axis Reflectance.

The quantitative metrics computed to compare the reflectances retrieved from the two spectral cubes and the spectroradiometer measurements in the 16 areas are shown in Tables 1 and 2.

Table 1. Color and spectral metrics results comparing LDR (L) and HDR (H) spectral reflectances only in the visible range (vis, from 400 to 720 nm).

The reason for computing the spectral metrics twice is that color metrics only take the visible range into account: two spectra could be very similar in the visible range but differ in the infrared region, or the other way around. The mean error metrics are clearly much better for CubeHDR, with a colorimetric performance improvement of 71.6%. Regarding the spectral metrics in the visible range, the GFC is improved by 66.98% and the RMSE by 36.96%.

Table 2. Spectral metrics results comparing LDR (L) and HDR (H) spectral reflectances in the full visible and near-infra-red range (vnir, from 400 to 1000 nm).

Over the whole spectral range, the GFC is improved by 65.63% and the RMSE by 48.89%. Note that, since the GFC is a metric whose perfect-match value is 1, the improvement is calculated using its complement $cGFC = 1 - GFC$. Only one sample area performed better in CubeLDR; the remaining 15 sample areas yielded better results in CubeHDR.
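For reference, a sketch of these metrics in their usual forms (the GFC as a normalized inner product, whose perfect-match value is 1), together with the improvement bookkeeping described above:

```python
import numpy as np

def gfc(r_meas, r_ref):
    """Goodness-of-fit coefficient between two spectra (1 = perfect match)."""
    return np.abs(np.dot(r_meas, r_ref)) / (
        np.linalg.norm(r_meas) * np.linalg.norm(r_ref))

def rmse(r_meas, r_ref):
    return np.sqrt(np.mean((r_meas - r_ref) ** 2))

def improvement(err_ldr, err_hdr):
    """Percentage reduction of an error metric; for GFC, pass cGFC = 1 - GFC."""
    return 100.0 * (err_ldr - err_hdr) / err_ldr
```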

A comparison would not be fair without mentioning capture time. In the regular LDR capture, only one sample cube, one dark cube and one flat-field cube need to be captured, whilst in the proposed HDR capture a total of 15 cubes were captured (9 sample cubes, 3 flat-field cubes and 3 dark cubes). This is the price to pay for retrieving higher-quality reflectance image data.

4.3. Additional example of application: identification of golden foil in a facsimile

As a final experiment, an original facsimile from the British Library of London was captured using the proposed framework. This facsimile (see Fig. 8) presents areas of highly reflective golden material. Such materials always represent a challenge for image capturing systems: depending on the illumination/observation geometry, the capturing device may receive specular reflections from the sample, and these areas would then most probably saturate the sensor at the exposure times needed to correctly capture the rest of the scene. Even if the illumination could be controlled, if the samples are not perfectly flat (as is the case in many artworks and illuminated manuscripts with irregular texture), the saturated areas could not be avoided by manipulating the light sources alone. Therefore, even in controlled laboratory illumination conditions, the high dynamic range may still be a problem for this kind of sample. As in the previous sections with the art painting, two spectral reflectance cubes of the facsimile were captured (one LDR and one HDR). These cubes were used for the automatic detection of the areas of the facsimile containing the highly reflective golden material. For this purpose, the best results in both cases were obtained using the goodness-of-fit coefficient (GFC) spectral metric in the near infrared range from 700 to 1000 nm. The segmentations of the HDR and LDR cubes were compared with a manually segmented ground truth (shown in Fig. 8, center of the bottom row). Since there are only two possible labels for each pixel (0 for non-golden material and 1 for golden material), performance was measured as the percentage of matches between the automatic segmentations and the ground truth.
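A sketch of this detection step under stated assumptions: the reference gold spectrum and the GFC threshold below are illustrative choices (the paper does not give them numerically), while the 700–1000 nm band selection follows the text:

```python
import numpy as np

def gfc_map(cube, ref_spectrum, eps=1e-12):
    """Per-pixel GFC between a cube (x, y, lambda) and a reference spectrum."""
    num = np.abs(cube @ ref_spectrum)
    den = np.linalg.norm(cube, axis=-1) * np.linalg.norm(ref_spectrum)
    return num / np.maximum(den, eps)

def segment_gold(cube_nir, gold_ref_nir, thr=0.99):
    """Label pixels whose NIR (700-1000 nm) spectra match a known gold
    reference spectrum; the threshold value is this sketch's assumption."""
    return gfc_map(cube_nir, gold_ref_nir) >= thr

def match_percentage(pred, ground_truth):
    # Performance as described in the text: percentage of agreeing labels.
    return 100.0 * (pred == ground_truth).mean()
```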

Fig. 8 RGB rendering of the spectral reflectance cubes, together with red highlight of the golden material found, and the segmentation compared with the ground truth. Left: CubeHDR. Right: CubeLDR.

In Fig. 8, the RGB renderings show that the LDR cube was saturated in some golden areas of the facsimile (top-left and bottom-right scenes), while all pixels in the HDR cube were correctly exposed. This resulted in more false negatives and false positives in the LDR cube than in the HDR cube. The performance of the segmentations was $P_{HDR} = 95.50\%$ and $P_{LDR} = 83.78\%$, an improvement of 11.72 percentage points for the given segmentation task. Note that the areas which were saturated in the LDR cube were not correctly detected as golden material, but they were in the HDR cube. Hence, if, due to the illumination or a rougher artwork texture, more areas reflected light specularly toward the capturing device, the difference in performance would be even higher.

5. Conclusions and future work

In this study, a complete framework is introduced for the in-situ hyperspectral reflectance capture of a painting under high dynamic range conditions. Both the high dynamic range and the focusing problem due to chromatic aberrations have been overcome by using multiple captures with different focus positions and exposure times. A final hyperspectral reflectance cube has been computed using weight maps calculated for both the sample and the flat fields, and the quality of this cube has been tested against a spectral cube captured in the usual LDR, single-focus way. Our results show that the proposed method outperforms the best low dynamic range capture acquired. The sharpness index, as well as the color and spectral metrics, show that it is possible to achieve good quality spectral reflectance images using a hyperspectral scanner under non-controlled illumination conditions. Moreover, as an example application, highly reflective golden material has been segmented in a facsimile. Our results show that, by applying the proposed capturing and processing framework, areas which saturate the sensor in the usual capturing mode can be correctly exposed and segmented using the HDR multifocus capture. In future research, a new version of this framework will be developed, including piecewise cube stitching for blending cubes captured from different regions of big paintings. This will allow us to get closer to the painting and retrieve higher spatial resolution data, whilst maintaining the spectral resolution and performance achieved in this study. Moreover, we will use the spectral reflectance images computed in this study, together with X-ray fluorescence measurements, for non-invasive pigment identification, in order to help date ancient paintings and other works of art.

Funding

Spanish Ministry of Economy and Competitiveness, DPI2015-64571-R, ECQM2018-004952-P

Acknowledgments

The authors would like to thank Mr. Francisco Fernández Fábregas, owner of the Transfiguration of Christ for allowing us the privilege of studying this painting. We also acknowledge the collaboration of Angela Tate.

References

1. L. M. Dale, A. Thewis, C. Boudry, I. Rotar, P. Dardenne, V. Baeten, and J. A. F. Pierna, “Hyperspectral imaging applications in agriculture and agro-food product quality and safety control: a review,” Appl. Spectrosc. Rev. 48(2), 142–159 (2013). [CrossRef]

2. M. Martínez, E. Valero, J. Hernández-Andrés, J. Romero, and G. Langfelder, “Combining Transverse Field Detectors and Color Filter Arrays to improve multispectral imaging systems,” Appl. Opt. 53, C14–C24 (2014). [CrossRef]   [PubMed]  

3. H. Liang, “Advances in multispectral and hyperspectral imaging for archaeology and art conservation,” Appl. Phys. A 106(2), 309–323 (2012). [CrossRef]

4. C. Fischer and I. Kakoulli, “Multispectral and hyperspectral imaging technologies in conservation: current research and potential applications,” Stud. Conserv. 51(sup1), 3–16 (2006). [CrossRef]

5. M. Martinez, E. Valero, M. Durban, R. Blanc, and T. Espejo, “Analysis of ageing processes of paper graphics documents with different varnishes through hyperspectral imaging,” Hyperspectral Imaging and Applications Conference, Coventry, UK (2018).

6. B. Grabowski, W. Masarczyk, P. Glomb, and A. Mendys, “Automatic pigment identification from hyperspectral data,” J. Cult. Herit. 31, 1–12 (2018). [CrossRef]  

7. P. Pinto, J. Linhares, and S. Nascimento, “Correlated color temperature preferred by observers for illumination of artistic paintings,” J. Opt. Soc. Am. A 25(3), 623–630 (2008). [CrossRef]

8. L. W. Macdonald, T. Vitorino, M. Picollo, R. Pillay, M. Obarzanowski, J. Sobczyk, S. Nascimento, and J. Linhares, “Assessment of multispectral and hyperspectral imaging systems for digitisation of a Russian icon,” Herit. Sci. 5(1), 41 (2017). [CrossRef]  

9. F. Daniel, A. Mounier, J. Pérez, C. Pardos, N. Prieto, S. F. O. Vallejuelo, and K. Castro, “Hyperspectral imaging applied to the analysis of Goya paintings in the Museum of Zaragoza (Spain),” Microchem. J. 126, 113–120 (2016). [CrossRef]

10. A. Cosentino, “Identification of pigments by multispectral imaging; a flowchart method,” Herit. Sci. 2(1), 8–20 (2014). [CrossRef]

11. H. Grahn and H. P. Geladi, Techniques and Applications of Hyperspectral Image Analysis (John Wiley & Sons, 2007). [CrossRef]

12. J. Brauers, N. Schulte, A. Bell, and T. Aach, “Multispectral High Dynamic Range Imaging,” IS&T SPIE Electronic Imaging. California, USA. (2008) pp. 680704.

13. P. J. Lapray, J. B. Thomas, and P. Gouton, “High Dynamic Range Spectral Imaging Pipeline for Multispectral Filter Array Cameras,” Sensors 17(6), 1281 (2017). [CrossRef]

14. K. Hirai, N. Osawa, M. Hori, T. Horiuchi, and S. Tominaga, “High-Dynamic-Range Spectral Imaging System for Omnidirectional Scene Capture,” J. Imaging 4(4), 53 (2018). [CrossRef]

15. M. D. Fairchild, “Spectral adaptation,” Color Res. Appl. 32(2), 100–112 (2007). [CrossRef]

16. R. Shrestha and J. Y. Hardeberg, “Spectrogenic imaging: A novel approach to multispectral imaging in an uncontrolled environment,” Opt. Express 22(8), 9123–9133 (2014). [CrossRef]

17. H. A. Khan, J. B. Thomas, J. Y. Hardeberg, and O. Laligant, “Spectral Adaptation Transform for Multispectral Constancy,” J. Imaging Sci. Technol. 62(2), 20504 (2018). [CrossRef]

18. J. L. Nieves, C. Plata, E. M. Valero, and J. Romero, “Unsupervised illuminant estimation from natural scenes: an RGB digital camera suffices,” Appl. Opt. 47(20), 3574–3584 (2008). [CrossRef]   [PubMed]  

19. D. An, J. Suo, H. Wang, and Q. Dai, “Illumination estimation from specular highlight in a multi-spectral image,” Opt. Express 23(13), 17008–17023 (2015). [CrossRef]

20. H. A. Khan, J. B. Thomas, J. Y. Hardeberg, and O. Laligant, “Illuminant estimation in multispectral imaging,” J. Opt. Soc. Am. A 34(7), 1085–1098 (2017). [CrossRef]  

21. J. K. Delaney, M. Thoury, J. G. Zeibel, P. Ricciardi, K. M. Morales, and K. A. Dooley, “Visible and infrared imaging spectroscopy of paintings and improved reflectography,” Herit. Sci. 4(1), 6 (2016). [CrossRef]  

22. A. Durán, L. K. Herrera, M. D. Robador, and J. L. Pérez, “Color study of Mudejar paintings of the pond found in the palace of “Reales Alcazares” in Seville,” Color Res. Appl. 32(6), 489–495 (2007). [CrossRef]

23. M. Martínez, E. Valero, and J. Hernández-Andrés, “Adaptive exposure estimation for high dynamic range imaging applied to natural scenes and daylight skies,” Appl. Opt. 54(4), B241–B250 (2015). [CrossRef]   [PubMed]  

24. M. Martínez, E. Valero, J. Hernández-Andrés, and J. Romero, “HDR imaging - Automatic Exposure Time Estimation. A novel approach,” in Proceedings of the AIC Conference, Tokyo, Japan (2015), pp. 603–608.

25. J. D. Martin, A. Zafra, and J. L. Vílchez, “Non-destructive pigment characterization in the painting Little Madonna of Foligno by X-ray Powder Diffraction,” Microchem. J. 134, 343–353 (2017). [CrossRef]  

26. E. Reinhard, W. Heidrich, P. Debevec, S. Pattanaik, G. Ward, and K. Myszkowski, High Dynamic Range Imaging: Acquisition, Display, and Image-Based Lighting (Morgan Kaufmann, 2010).

27. J. J. McCann and A. Rizzi, The Art and Science of HDR Imaging (John Wiley & Sons, 2011). [CrossRef]

28. M. D. Fairchild, “The HDR photographic survey,” in Proceedings of Color and Imaging Conference, (2007) pp. 233–238.

29. P. Debevec and J. Malik, “Recovering high dynamic range radiance maps from photographs,” in Proceedings of ACM SIGGRAPH (2008), pp. 31–40.

30. M. Granados, B. Ajdin, M. Wand, C. Theobalt, H. Seidel, and H. Lensch, “Optimal HDR reconstruction with linear digital cameras,” in Proc. CVPR IEEE, (IEEE, 2010) pp. 215–222.

31. M. Martínez, E. Valero, J. Hernández-Andrés, S. Tominaga, T. Horiuchi, and K. Hirai, “Image processing pipeline for segmentation and material classification based on multispectral high dynamic range polarimetric images,” Opt. Express 25(24), 30073–30090 (2017). [CrossRef]

32. J. McCann and A. Rizzi, “Camera and visual veiling glare in HDR images,” J. Soc. Inf. Display 15, 721–730 (2012). [CrossRef]  

33. J. Eckhard, T. Eckhard, E. M. Valero, J. L. Nieves, and E. G. Contreras, “Outdoor scene reflectance measurements using a Bragg-grating-based hyperspectral imager,” Appl. Opt. 54(13), D15–D24 (2015). [CrossRef]  

34. Z. Sadeghipoor, Y. M. Lu, E. Mendez, and S. Susstrunk, “Multiscale guided deblurring: Chromatic aberration correction in color and near-infrared imaging,” in Proceedings of EUSIPCO (IEEE, 2015), pp. 2336–2340.

35. L. Shih, “Autofocus survey: a comparison of algorithms,” in Proceedings of Digital Photography III (2007), paper 65020B.

36. H. Xu, J. Liu, Y. Li, Z. Yan, and H. Lu, “Autofocus using adaptive prediction approximation combined search for the fluorescence microscope in second-generation DNA sequencing system,” Appl. Opt. 53(20), 4509–4518 (2014). [CrossRef]   [PubMed]  

37. E. Krotkov and J. P. Martin, “Range from focus,” Proceedings of IEEE International Conference on Robotics and Automation. 3, 1093–1098 (1986).

38. T. Eckhard, J. Eckhard, E. Valero, and J. Nieves, “Nonrigid registration with free-form deformation model of multilevel uniform cubic B-splines: application to image registration and distortion correction of spectral image cubes,” Appl. Opt. 53(17), 3764–3772 (2014). [CrossRef]   [PubMed]  

39. H. Bay, T. Tuytelaars, and L. Van Gool, “SURF: Speeded up robust features,” in Proceedings of the European Conference on Computer Vision (Springer, 2006), pp. 404–417.
