
Spectral imaging by synchronizing capture and illumination


Abstract

This paper proposes a spectral imaging technology based on synchronizing a programmable light source and a high-speed monochrome camera. The light source is capable of emitting an arbitrary spectrum at high speed, so the system can capture spectral images without using filters. The camera and the light source are controlled by a computer so that image sequences are captured synchronously with camera and illuminant control signals. First, we describe a projector for spectrally rendering a real scene as a fundamental usage of the spectral imaging system. Second, we describe effective applications to (1) spectral reflectance recovery and (2) a tristimulus imager. The performances of the proposed algorithms are examined in detailed experiments. We demonstrate the potential applicability of the proposed spectral imaging technology.

© 2012 Optical Society of America

1. INTRODUCTION

Spectral imaging is a useful technology that is now widespread in all fields related to visual information. Some situations requiring spectral imaging are as follows:

  • (1) An imaging system based on trichromacy faces the limitation that a color camera with three RGB channels cannot always satisfy the color-matching property of the human visual system, called the Luther condition. Therefore, the color camera cannot serve as a colorimeter.
  • (2) Surface spectral reflectance is a physical characteristic inherent to an object surface. Recovering spectral reflectance functions from image sensor outputs is necessary not only for solving vision problems such as color constancy, but also for material identification and color image production. In such cases, a three-channel camera has serious difficulty in accurately estimating spectral reflectances.
  • (3) Spectral analysis is often needed for the detailed analysis of object surfaces in a natural scene. Spectral synthesis is also needed for realistic color image production of the object surfaces under arbitrary observation conditions.

A variety of multispectral imaging systems have been proposed for acquiring spectral information from a scene. Conventional spectral imaging systems are mostly constructed from multiband imaging devices with different filtration mechanisms, such as (1) adding one or two color filters to a trichromatic digital camera [1,2], (2) combining a monochrome camera and color filters with different spectral bands [3], (3) using narrowband interference filters [4,5], and (4) attaching a liquid-crystal tunable filter to a monochrome camera [6–8]. Some shortcomings of the conventional systems are (1) latency due to multiple captures, (2) time consumed by filter changes, (3) difficulty in designing optimal filters, (4) accuracy inferior to a spectrometer, and (5) long exposure times because of low filter transmittances. Recently, a new type of sensor, called the transverse field detector, was proposed [9,10]. The sensitivities of this type of sensor are spectrally tunable by taking advantage of the light absorption of silicon. Only preliminary simulations of a theoretical imaging system using this sensor have been reported [11]. It should be noted that these multispectral imaging systems are based on a filtration mechanism at the sensor side under passive illumination.

Surface spectral reflectance can also be recovered under active illumination. Several light sources, such as an array of light-emitting diodes (LEDs) [12,13], a xenon flash lamp [14,15], and a digital light processing (DLP) projector [16], have been proposed as active illuminants. The active imaging method has the potential to recover spectral reflectance information at high speed. However, it was difficult to build an active imaging system with merits in both computation time and recovery accuracy, because the previous methods employed broadband light sources whose spectra are linear combinations of the fixed spectral power distributions of primary colors.

From the point of view of illuminant projection, a projector was proposed that can change the spectral power distribution of its output light to a particular waveform and use it to spectrally render a real scene [17]. However, the approach lacks practical programmability and is time consuming.

In this paper, a spectral imaging technology using active spectral illumination is proposed for solving the above problems and finding effective applications in a variety of fields, including color engineering, computer vision, and the imaging industry. We construct an imaging system by synchronizing a programmable light source and a high-speed monochrome camera. The light source is capable of emitting an arbitrary spectrum at high speed [18], which gives the essential advantage of capturing spectral images at high frame rates without using filters. As a result, we can acquire spectral information from a scene quickly and accurately. We design the emission of illuminant spectra in two modes of time sequence, steady-state and time-varying, and then devise an automatic calibration mechanism to accurately produce the spectral functions.

First, we describe a projector for spectrally rendering a real scene as a fundamental usage of the spectral imaging system. Human visual assessment of object surface appearance has often been performed under such limited light sources as illuminant A and illuminant D65. With the present system, three-dimensional (3D) object surfaces in a real scene can be observed under a light source with an arbitrary spectral power distribution. We show the accuracy of the produced illuminants and the effectiveness of spectral rendering of a real scene.

Second, we describe the effective applications to (1) spectral reflectance recovery and (2) a tristimulus imager, and their algorithms using this imaging system.

  • (1) Reflectance estimation solves an inverse problem for camera outputs. In traditional systems, the difficulty lies in eliminating the camera and illumination influences from the camera outputs. The programmable light source enables us to design arbitrary spectral functions. If the reciprocal function of the camera sensitivity is projected onto surfaces as a time sequence of spectra, we can obtain the spectral reflectance at the spatial resolution of camera pixels, without the computational difficulty of eliminating the camera and illumination influences. This direct reflectance recovery has advantages in both computation time and estimation accuracy over previous methods employing broadband light sources. A linear finite-dimensional model of spectral reflectances is useful for accelerating the recovery process.
  • (2) We propose a new technology for colorimetry, in place of the traditional technology based on colorimeters and spectrometers, where color values are obtained for a single broad area on an object surface. If the color-matching functions are projected onto object surfaces as spectral illuminants, the color values can be obtained at the spatial resolution of camera pixels directly from the camera outputs. We can thus realize a high-speed, high-spatial-resolution measurement system for object colors using the integrated imaging system. This system is called the tristimulus imager or the CIE-XYZ imager. The colorimetric technique is based on projecting the color-matching functions as illuminants, whereas traditional colorimetry performs the color-matching computation on light reflected from an object surface.

2. SPECTRAL IMAGING SYSTEM

Figure 1 shows the spectral imaging system, consisting of a high-speed monochrome camera (Epix SV643M), a programmable light source (Optronic Laboratories OL490), a liquid light guide, and a personal computer (PC) for controlling the camera and the light source.

Fig. 1. Spectral imaging system. (a) Setup scene. (b) System configuration.

A. Programmable Light Source

A large number of mechanisms for realizing a spectrally controllable light source have been proposed over the past decade. A spatial light modulator is the heart of such a spectral light system. A digital micromirror device (DMD), a computer-interfaced mirror array, is widely used for this purpose. Figure 2 depicts the principle of the programmable light source using DMD technology. It is composed of a xenon lamp, a grating, a DMD chip, and a liquid light guide. In this system, the xenon light beam is separated by the grating into its constituent wavelengths. Figure 3 shows the spectral power distribution of the xenon lamp used in this paper. The wavelength and intensity information is controlled using the two-dimensional DMD chip, where one axis corresponds to the wavelength distribution of the spectrum and the other axis to the intensity distribution. The micromirrors on the DMD have two bistable on/off states, which gate the incident light; that is, the output intensity level is controlled by the number of DLP mirrors toggled to the on state. The present system uses a 1024×768 pixel chip, where the former number determines the wavelength resolution in the range of 380–780 nm and the latter number determines the intensity quantization level. As an advantage of the DMD-based programmable light source, it can switch the output light spectrum much faster than a light source based on a liquid-crystal display [19,20].
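As a rough illustration of this wavelength/intensity mapping, the following sketch quantizes a normalized target spectrum into per-column on-mirror counts. The linear relation between mirror count and output intensity, and the helper `mirrors_on`, are assumptions for illustration, not the OL490 firmware's actual encoding.

```python
import numpy as np

# Hypothetical sketch of how a DMD column pattern could encode a spectrum.
# Assumptions: 1024 wavelength columns over 380-780 nm, 768 intensity rows,
# and output intensity proportional to the number of mirrors toggled on.
N_COLS, N_ROWS = 1024, 768
wavelengths = np.linspace(380, 780, N_COLS)

def mirrors_on(target_spd):
    """Quantize a normalized target spectrum (values in [0, 1]) into the
    number of mirrors toggled 'on' in each wavelength column."""
    target_spd = np.clip(target_spd, 0.0, 1.0)
    return np.round(target_spd * N_ROWS).astype(int)

# Example: a Gaussian band centered at 550 nm
spd = np.exp(-0.5 * ((wavelengths - 550.0) / 10.0) ** 2)
pattern = mirrors_on(spd)
print(pattern.max())   # peak column uses all 768 rows
```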

Fig. 2. Principle of the programmable light source.

Fig. 3. Spectral power distribution of the xenon lamp.

B. Projection Characteristics

We examined the spatial uniformity and intensity of the illumination. Figure 4(a) shows an image of the illumination projected on a reference white surface. Figure 4(b) illustrates the intensity distribution along the red scan line in Fig. 4(a), where the horizontal axis indicates the spatial location on the scan line and the vertical axis shows the pixel value of the camera output. The illumination thus shows good spatial uniformity. Figure 5 shows the illuminance as a function of diameter on the lighted surface. The solid curve indicates the illuminance values (lux) from one light source. When two programmable light sources are used together, the superimposed illuminance is indicated by the dashed curve. In practical applications, real-time image capturing by the present camera requires an illuminance level of at least 3000 lux. Moreover, an illuminance level of more than 10,000 lux is desirable for accurately performing spectral reflectance estimation and colorimetry.

Fig. 4. Spatial uniformity of projected illumination. (a) Image of illumination projected on a reference white. (b) Intensity distribution on the scan line.

Fig. 5. Illuminance as a function of diameter on the lighted surface.

C. High-Speed Camera

The monochrome complementary metal-oxide semiconductor camera used in this paper provides 640×480 resolution and 10 bit quantization at about 200 frames per second (fps). We operated this camera at resolutions from 320×240 to 640×480 and 8–10 bit quantization at 200–780 fps. Figure 6 shows the spectral sensitivity function of the monochrome camera. We investigated the relationship between the input light intensities and the camera outputs, and a good linear relationship was obtained at every wavelength.

Fig. 6. Spectral sensitivity function of the monochrome camera.

D. Synchronous Imaging System

The camera and the light source are controlled by a computer in order to capture image sequences synchronously with camera and illuminant control signals, as shown in Fig. 1 (see [21]). Since the operation of the camera is much slower than that of the light source, the camera should be the master device in the synchronization with the light source. When the camera starts observing a scene, the camera control signal, an RS644 strobe signal, is output to the control PC. A transfer module on the PC converts the camera control signal into the illuminant control signal, a transistor–transistor logic (TTL) signal.

Before capturing images, we create a sequence of frames specifying the lighting conditions, such as wavelength, bandwidth, and relative intensity. The light source software then prepares the frames by downloading the sequence to random access memory in the control PC and setting up each frame for hardware triggering. Loading a frame takes 62 μs from receipt of the trigger signal, and less than 18 μs is required for the DLP mirror reset and settling. Therefore, the minimum exposure time of this system is 80 μs. The light source follows the TTL signal to illuminate the next frame of the sequence. The captured image sequences are transmitted through the PC to a display device for monitoring.
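A back-of-the-envelope check of these timing figures; a sketch in which only the 62 μs and 18 μs values come from the text, and the switching-rate bound deliberately ignores camera readout:

```python
# Timing budget quoted above: 62 us frame loading after the trigger,
# plus up to 18 us for DMD mirror reset and settling.
LOAD_US = 62
SETTLE_US = 18
min_exposure_us = LOAD_US + SETTLE_US
print(min_exposure_us)        # 80 us minimum exposure

# Illustrative upper bound on switching rate if this latency were the only
# cost (camera readout, not modeled here, limits the real system to 780 fps).
max_switch_hz = 1_000_000 / min_exposure_us
print(max_switch_hz)          # 12500.0
```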

E. Illuminant Control

The light source system can produce emissions at a single wavelength or over a broad spectrum. We design the emission of a spectral function in two modes of time sequence: steady-state and time-varying. In the steady-state mode, the same spectrum is generated at every time step, while in the time-varying mode, a different spectrum can be generated at each time step. Let Eλi(λ) be a spectral power distribution emitted at a central wavelength λi and (Eλ1(λ), Eλ2(λ), …, Eλn(λ)) be the time sequence. Figure 7 illustrates an example of spectral power distributions generated as a time sequence. A set of single spectral functions with narrow wavelength width is generated at an equal wavelength interval in the visible range. Figure 7(a) is a 3D perspective view in the time-varying mode, where different spectral functions are depicted in the time series Eλ1(λ,t1), Eλ2(λ,t2), …, Eλn(λ,tn). Figure 7(b) is the view in the steady-state mode, where the same spectrum is depicted as Eλ1(λ,t)+Eλ2(λ,t)+…+Eλn(λ,t) at each time.
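The two modes can be sketched numerically as follows; the Gaussian narrowband shapes and the 5 nm spacing are assumptions for illustration.

```python
import numpy as np

# Sketch of the two emission modes, assuming Gaussian narrowband basis
# illuminants E_{lambda_i}(lambda) on a 400-700 nm grid (1 nm sampling).
lam = np.linspace(400, 700, 301)
centers = np.arange(400, 701, 5)          # n = 61 central wavelengths

def basis(center, width=5.0):
    return np.exp(-0.5 * ((lam - center) / width) ** 2)

# Time-varying mode: one narrowband spectrum per frame
time_varying = np.stack([basis(c) for c in centers])   # shape (61, 301)

# Steady-state mode: the same summed spectrum at every frame
steady_state = time_varying.sum(axis=0)                # shape (301,)
```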

Fig. 7. Spectral power distributions generated as a time sequence. (a) Time-varying mode; (b) steady-state mode.

Note that the basis illuminants Eλi(λ) are not ideal narrowband spectra. We therefore devised an automatic calibration mechanism to accurately produce an arbitrary illuminant spectrum from the light source system [22]. The mechanism comprises two types of feedback control systems, which combine the present light source with two different measurement devices, as shown in Fig. 8. The first system uses the high-speed camera as the measurement device [Fig. 8(a)]. This feedback control system determines the illuminant sequence (Eλ1(λ,t1), Eλ2(λ,t2), …, Eλn(λ,tn)) iteratively to minimize the difference between the camera output sequence (O(t1), O(t2), …, O(tn)) and the target sequence (T(t1), T(t2), …, T(tn)). This system requires an object with known surface spectral reflectance S(λ); in the present calibration we use a reference white plate with known reflectance S(λ). When the target is set to S(λ), the calibrated illuminant sequence becomes equivalent to the inverse function of the camera spectral sensitivity. Therefore, this system can be applied to the estimation of surface spectral reflectance even when the camera spectral sensitivity function is unknown. The second system uses a spectroradiometer as the measurement device [Fig. 8(b)]. The feedback control system determines the illuminant sequence iteratively so that the spectrum O(λ) observed by the spectrometer agrees with the target spectrum T(λ). This system is useful for colorimetry and color reproduction.
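A minimal sketch of the feedback idea: iteratively rescale the commanded spectrum until the measured spectrum matches the target. The `device_response` function, a fixed unknown per-wavelength gain, is an assumption standing in for the real light source plus spectrometer chain; the real calibration loop in [22] is not published in this form.

```python
import numpy as np

# Toy closed loop: command -> device -> observed spectrum, with a
# multiplicative correction toward the target at each iteration.
rng = np.random.default_rng(0)
gain = 0.5 + rng.random(61)               # unknown per-wavelength gain

def device_response(command):
    return gain * command                 # measured spectrum O(lambda)

target = np.ones(61)                      # flat target T(lambda)
command = target.copy()
for _ in range(20):
    observed = device_response(command)
    command *= target / np.maximum(observed, 1e-9)   # multiplicative update

final_error = np.abs(device_response(command) - target).max()
print(final_error < 1e-6)    # True: the loop converged to the target
```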

Fig. 8. Calibration systems. (a) Illuminant-camera system and (b) illuminant-spectrometer system.

F. System Stability

The xenon lamp of the light source generates heat. To protect the electronic circuits, we keep the temperature around the lamp house at 20 °C using an air conditioner. Under this condition, the spectral power distribution of the light source is stable.

The signal-to-noise ratio (SNR) of the image sensor is 44.2 dB, as derived by measuring dark current. Next, in order to investigate the temporal stability in real-time measurement of color images, RGB illuminants were sequentially projected onto a reference white at a switching speed of 90 fps as {R1,G1,B1,R2,G2,B2}. Then, 100 iterations of each component image in the RGB sequence were captured by the camera. The SNR in this case is 39.1 dB, calculated from the average and the standard deviation of the camera outputs. This SNR includes noise from both the camera and the light generation setup.
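The SNR figure can be reproduced in form (not in value) by the standard mean-over-standard-deviation calculation; the synthetic noise level below is an assumption for illustration.

```python
import numpy as np

# SNR in dB from the mean and standard deviation of camera outputs
# over N nominally identical frames.
def snr_db(samples):
    mean = samples.mean()
    std = samples.std(ddof=1)
    return 20.0 * np.log10(mean / std)

# Synthetic example: 100 captures of a nominal level 200 with noise sigma 2
rng = np.random.default_rng(1)
frames = 200.0 + 2.0 * rng.standard_normal(100)
print(snr_db(frames))   # approximately 20*log10(100) = 40 dB
```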

These characteristics do not depend on the spectral shape or the wavelength. However, the SNR may decrease as the frame rate increases. The total performance at 200 fps will be explained in Subsection 4.B, where we confirm that sufficient measurement accuracy is obtained even at such a high frame rate.

3. ILLUMINANT PROJECTION FOR SPECTRAL RENDERING

The feasible illuminant spectral shapes were extremely limited in conventional lighting systems because the illuminant spectra were produced by combining several basis light sources with wide spectral bands. In contrast, the proposed imaging system enables us to observe object surfaces under an illuminant with an arbitrary spectral power distribution. We examined the accuracy of the illuminant spectra produced by the present system.

Human visual assessment of object surface appearance is often performed under typical light sources, such as illuminants A and D65. Figure 9 shows the illuminant spectral distributions produced for CIE standard illuminants A and D65 by the control system in Fig. 8(b). The produced spectra are almost coincident with the target spectra. The root mean squared errors (RMSEs) of the produced illuminant spectra are 0.01 for illuminant A in Fig. 9(a) and 2.8 for illuminant D65 in Fig. 9(b).

Fig. 9. Illuminant spectral distributions produced for CIE standard illuminants (a) A and (b) D65.

A visual assessment system for spectrally rendering 3D object surfaces in a real scene is constructed by producing an illuminant with an arbitrary spectral power distribution in the present system. Figure 10 demonstrates the appearances of a flower decoration under the two CIE standard illuminants. Figure 11 shows the appearances of the same object under two different LED illuminants of about 6500 K color temperature. Note that Figs. 10 and 11 are real-scene photographs captured by a digital camera (Canon EOS 1Ds Mark II). We confirmed that the color appearances of these photographs displayed on a calibrated sRGB monitor were close to the visual appearance.

Fig. 10. Appearances of a flower decoration under CIE standard illuminants (a) A and (b) D65.

Fig. 11. Appearances of a flower decoration under different LED illuminants with 6500 K color temperature. (a) Spectral power distribution (SPDs) of white LED, (b) SPDs of LED by mixing RGB, (c) appearance of standard white under illuminant (a), (d) appearance of standard white under illuminant (b), (e) appearance of the flower decoration under illuminant (a), and (f) appearance of the flower decoration under illuminant (b).

Figures 11(a) and 11(b) show the illuminant spectral power distributions of a white LED and an RGB-mixing LED, respectively. These illuminants were projected on the same object. The color temperatures of the illuminants are quite close to each other, (a) 6430 K and (b) 6485 K, and the xy chromaticity coordinates are also almost the same, (a) (x,y)=(0.313,0.332) and (b) (x,y)=(0.314,0.332). Figures 11(c) and 11(d) represent the color appearance of the white reference plate under the two light sources, where the appearance of the white object is the same in both cases. However, it is interesting to observe that the appearances of the colorful flower decoration differ greatly under the two illuminants in Figs. 11(e) and 11(f). Note that the former emphasizes the appearance of green colored surfaces, and the latter emphasizes the appearance of red colored surfaces. Both appearances also differ from the daylight appearance in Fig. 10(b). Thus, the color appearance is strongly affected by the illuminant spectral curve, and this projection system is useful for various purposes, such as high-speed spectral rendering of a real scene, metamer detection, visual evaluation of surface appearance, and investigation of the color rendering index.

4. APPLICATIONS AND ALGORITHMS

A. Spectral Reflectance Recovery

We solve the problem of spectral reflectance recovery effectively by using the spectral imaging system. When the reciprocal function of the camera sensitivity is projected onto an object surface as a time sequence of spectra, the spectral reflectance is obtained directly from the camera outputs at the spatial resolution of camera pixels.

Let S(λ) and V(λ) be the surface spectral reflectance function and the sensor-spectral sensitivity function, respectively. When an object surface is illuminated by a light source with spectrum I(λ,t), the camera output at time ti is described as

$$O(t_i)=\int S(\lambda)\,I(\lambda,t_i)\,V(\lambda)\,d\lambda.\tag{1}$$
If the spectral reflectance is smooth and the light source is a narrowband illuminant Eλ(λ,t) at the center wavelength λ, the camera output is rewritten as
$$O(t_i)=\int S(\lambda)\,E_{\lambda_i}(\lambda,t_i)\,V(\lambda)\,d\lambda\approx S(\lambda_i)\left(\int E_{\lambda_i}(\lambda,t_i)\,V(\lambda)\,d\lambda\right).\tag{2}$$
Here, note that the sensor-spectral sensitivity function V(λ) does not have to be sharp or rather should be broad enough in the visible wavelength range, because the illuminant is narrow in the present system. Since the bracketed term of Eq. (2) is independent of an object surface, we can calculate it in advance as
$$c_i=\int E_{\lambda_i}(\lambda,t_i)\,V(\lambda)\,d\lambda.\tag{3}$$
When the surface is illuminated sequentially by the narrowband spectrum with a moving center wavelength, as shown in Fig. 7(a), the surface spectral reflectance (S(λ1), S(λ2), …, S(λn)) can be estimated from the time sequence of the camera outputs (O(t1), O(t2), …, O(tn)) as
$$S(\lambda_i)=O(t_i)/c_i,\qquad i=1,2,\ldots,n.\tag{4}$$
If the basis illuminant Eλi(λ,ti) is designed as a reciprocal function of V(λ), the spectral reflectance is obtained directly from the camera output sequence (O(t1), O(t2), …, O(tn)), without computation.

The above recovery process consists of n sequential projections. If the spectral reflectance is represented by 61 wavelength points sampled at an equal interval of 5 nm in the range [400, 700 nm], 61 illuminant projections are needed. Figure 12 shows the set of 61 basis illuminant spectra produced for the reciprocal function 1/V(λ), which was determined by the calibration system in Fig. 8(a).
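A minimal numerical sketch of this narrowband recovery, with a synthetic smooth reflectance and an assumed flat camera sensitivity (the real system uses the measured V(λ) of Fig. 6 and the calibrated illuminants of Fig. 12):

```python
import numpy as np

# Simulate camera outputs under 61 narrowband illuminants, then recover
# S(lambda_i) = O(t_i) / c_i as in Eqs. (1)-(4).
lam = np.arange(400.0, 701.0)                  # 1 nm grid, 400-700 nm
centers = np.arange(400, 701, 5)               # 61 central wavelengths
V = np.ones_like(lam)                          # assumed broad, flat sensitivity
S_true = 0.5 + 0.3 * np.sin(lam / 40.0)        # smooth synthetic reflectance

def E(center, width=3.0):                      # narrowband basis illuminant
    return np.exp(-0.5 * ((lam - center) / width) ** 2)

c = np.array([np.sum(E(c0) * V) for c0 in centers])            # Eq. (3)
O = np.array([np.sum(S_true * E(c0) * V) for c0 in centers])   # Eq. (1)
S_est = O / c                                                  # Eq. (4)

idx = centers - 400                            # grid indices of the centers
err = np.abs(S_est - S_true[idx]).max()
print(err)                                     # small for a smooth reflectance
```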

Fig. 12. Set of 61 basis illuminant spectra produced for 1/V(λ).

The linear finite-dimensional model [23] of spectral reflectances is useful for accelerating the recovery process by reducing the number of illuminant projections. We suppose that the spectral reflectance function S(λ) can be expressed as a linear combination of m basis functions:

$$S(\lambda)=\sum_{i=1}^{m}\sigma_i S_i(\lambda),\tag{5}$$
where {Si(λ), i=1,2,…,m} is a statistically determined set of orthogonal basis functions for reflectances, and {σi, i=1,2,…,m} is a set of scalar weights.

In this case, we design the illuminant spectra as I(λ,ti)=Si(λ)/V(λ). The corresponding camera output is then O(ti)=σi. Therefore, the surface spectral reflectance is recovered from the camera output sequence in the following form:

$$S(\lambda)=\sum_{i=1}^{m}O(t_i)\,S_i(\lambda).\tag{6}$$
Previous investigations of the linear model pointed out that the surface spectral reflectances of natural and artificial objects can be reproduced with five to seven basis functions [24–26]. In this paper, the reflectance basis functions were determined using the authors’ database of surface spectral reflectances, which consists of 507 measured reflectance spectra of different materials collected from Munsell chips, paint chips, industrial products, and natural objects. Figure 13 shows the first five basis functions; these five components account for 99.74% of the variance.
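The finite-dimensional recovery can be sketched as follows. The basis here is derived from random smooth curves via SVD, standing in for the authors' 507-spectrum database, and the flat camera sensitivity is also an assumption.

```python
import numpy as np

# Build an orthonormal 5-function basis from synthetic smooth spectra
# (a stand-in for the statistically determined reflectance basis).
rng = np.random.default_rng(2)
lam = np.linspace(400, 700, 61)
smooth = np.cumsum(rng.standard_normal((200, 61)), axis=1)
_, _, Vt = np.linalg.svd(smooth - smooth.mean(axis=0), full_matrices=False)
basis = Vt[:5]                                  # S_i(lambda), shape (5, 61)

# A reflectance lying in the 5-dimensional model of Eq. (5)
S_true = basis.T @ np.array([0.5, 0.2, -0.1, 0.05, 0.02])

V = np.ones(61)                                 # assumed flat sensitivity
# Projecting I_i = S_i / V makes each camera output equal a weight sigma_i
O = np.array([np.sum(S_true * (b / V) * V) for b in basis])
S_rec = basis.T @ O                             # Eq. (6)

print(np.allclose(S_rec, S_true))               # True
```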

Fig. 13. Basis functions of surface spectral reflectances.

Except for the first basis, the basis functions take negative values and are therefore not optically realizable as illuminants. The simplest way to solve this problem is to shift the functions upward by adding a constant bias, S′i(λ)=Si(λ)+K, so that they take all positive values. In this case, we need an additional illuminant projection I(λ,tK)=K/V(λ) and an additional camera output O(tK). The reflectance can then be recovered from the linear model of Eq. (5) with σi=O(ti)−O(tK). This algorithm has the merit of reducing the number of projections. However, the dynamic range of the illuminant spectral radiance is reduced by adding the constant bias, which decreases the estimation accuracy.

Other compensation algorithms for nonnegative components, such as the Epstein approximation [27], the optimal nonnegative filter [28], and nonnegative matrix factorization [29,30], have also been proposed.

In this paper, we recommend a simple and effective approach: separate each basis function into its positive and negative parts, and mirror the negative part about the zero axis to obtain a nonnegative function. The basis functions are described as Si(λ)=Si+(λ)−Si−(λ). Figure 14 depicts the illuminant spectra {Si+(λ)/V(λ)} and {Si−(λ)/V(λ)} as a set of modified basis functions with all positive values, where the solid curves indicate the positive parts Si+(λ) and the broken curves indicate the mirrored functions Si−(λ). In this case, we need two projections, corresponding to the positive and negative parts, for each basis function except the first. A pair of camera outputs O(ti+) and O(ti−) is then used for recovering the spectral reflectance from Eq. (5) with σi=O(ti+)−O(ti−). When we adopt the five-dimensional linear model, the reflectance recovery process consists of nine illuminant projections. This number is still much smaller than that of the original sequential projection algorithm.
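The positive/negative split can be sketched with a two-function toy basis (synthetic, not the Fig. 13 basis; the flat sensitivity is likewise an assumption):

```python
import numpy as np

# Each basis function S_i is written as S_i = S_i^+ - S_i^-, both
# nonnegative, and the weight is recovered as the difference of the
# two corresponding camera outputs.
lam = np.linspace(400, 700, 61)
S1 = np.ones(61) / np.sqrt(61)                 # all-positive first basis
S2 = np.sin(2 * np.pi * (lam - 400) / 300)     # second basis with negative lobe
S2 /= np.linalg.norm(S2)

def split(b):
    return np.maximum(b, 0.0), np.maximum(-b, 0.0)   # S^+, S^-

S_true = 0.6 * S1 + 0.3 * S2
V = np.ones(61)                                 # assumed flat sensitivity

sigmas = []
for b in (S1, S2):
    bp, bn = split(b)
    Op = np.sum(S_true * (bp / V) * V)          # output under positive part
    On = np.sum(S_true * (bn / V) * V)          # output under mirrored part
    sigmas.append(Op - On)                      # sigma_i = O(t_i^+) - O(t_i^-)

S_rec = sigmas[0] * S1 + sigmas[1] * S2
print(np.allclose(S_rec, S_true))               # True
```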

Fig. 14. Illuminant spectra for the modified basis functions with positive values.

Table 1 summarizes the reflectance estimation performance for an X-Rite Mini Color Checker with the three algorithms. We use the RMSE and the CIELAB color difference as metrics for spectral matching. The second and third columns indicate the RMSE over all 24 color patches. The algorithm using the symmetric basis functions provides the most accurate estimates in terms of RMSE. The frame rate for successively recovering spectral reflectance images is about 13 fps in the present system. The fourth and fifth columns indicate the CIELAB color differences under the assumption of light source D65; the algorithm using the narrowband functions is the most accurate in this case.

Table 1. Performances in Reflectance Estimation by Three Algorithms

Figures 15(a) and 15(b) show the reflectance estimation results using the narrowband functions and the symmetric basis functions, respectively. The solid curves represent the estimates by the proposed algorithms, and the broken curves represent direct measurements by a spectrometer. The estimated curves in Fig. 15(a) include small ripple-like fluctuations and relatively large errors in the short-wavelength region, which are caused by the low illuminant intensity at short wavelengths, as shown in Fig. 3. The accumulation of these errors leads to a worse RMSE; however, the reflectance estimation error does not greatly influence the CIELAB color difference. In contrast, the curves estimated using the symmetric basis functions fit the measurements smoothly. In this case, however, the CIELAB color difference increases for specific colors, such as #13 blue. Table 2 shows the detailed reflectance estimation performance for all 24 color patches by the two algorithms.

Fig. 15. Reflectance estimation results for all 24 patches of a Color Checker.

Table 2. Performance Details in Reflectance Estimation for All 24 Color Patches by the Two Algorithms

B. Tristimulus Imager

Colorimetry is the scientific technology used to quantify and physically describe human color perception. The CIE-XYZ tristimulus values are most often used as colorimetric values representing the physical correlates of color perception. The traditional methods based on a colorimeter or spectrometer cannot determine the tristimulus values at the pixel level of a color image; they determine the values for a broad area on the object surface at a time, so obtaining precise color values takes much time. For scanning devices, a method for converting three-channel outputs into colorimetric values was proposed [31]. More recently, colorimetric image acquisition was discussed for a dual-color-mode imaging sensor [32].

The present spectral imaging system using an active illuminant can be applied to a new type of technology aiming at high-speed and high-spatial resolution colorimetry. The new technology of the tristimulus imager is based on projection of the modified color-matching functions as illuminant. The CIE-XYZ values can then be obtained at the spatial resolution of camera pixels on the illuminated surface directly from the camera responses.

The tristimulus values X, Y, and Z of an object surface are calculated as

$$\begin{bmatrix}X\\ Y\\ Z\end{bmatrix}=\int S(\lambda)\,E_T(\lambda)\begin{bmatrix}\bar{x}(\lambda)\\ \bar{y}(\lambda)\\ \bar{z}(\lambda)\end{bmatrix}d\lambda,\tag{7}$$
where ET(λ) is a target illuminant and (x̄(λ), ȳ(λ), z̄(λ)) are the CIE color-matching functions. In traditional colorimetry, the above calculation is performed on the light reflected from the object surface. In our new technology, if the target illuminant ET(λ) under which the object is to be observed is specified, such as illuminant D65 or illuminant A, we can design the active illuminant to imitate that observation environment.

Let I(λ) be a linear combination of the basis spectra in the form

$$I(\lambda)=\sum_{i=1}^{n}c_i E_{\lambda_i}(\lambda).\tag{8}$$
Suppose that this set of basis spectra is projected to the object surface in the steady-state mode. The camera output is then described as
$$O=\int S(\lambda)\,I(\lambda)\,V(\lambda)\,d\lambda.\tag{9}$$
The comparison of Eq. (9) to Eq. (7) suggests that the tristimulus values can be obtained by the camera outputs if three conditions are satisfied as
$$I_x(\lambda)V(\lambda)=E_T(\lambda)\bar{x}(\lambda),\quad I_y(\lambda)V(\lambda)=E_T(\lambda)\bar{y}(\lambda),\quad I_z(\lambda)V(\lambda)=E_T(\lambda)\bar{z}(\lambda).\tag{10}$$
Therefore, the problem reduces to determining the weights c1, c2, …, cn for the basis spectra at λ1, λ2, …, λn so that the illuminant spectra coincide with the modified matching functions ET(λ)x̄(λ)/V(λ), ET(λ)ȳ(λ)/V(λ), and ET(λ)z̄(λ)/V(λ). Since the present paper uses a monochrome camera, the camera outputs only one tristimulus value at a time. Therefore, the tristimulus values are obtained from three camera outputs (O(t1), O(t2), O(t3)) in time sequence, where
$$\begin{bmatrix}O(t_1)\\ O(t_2)\\ O(t_3)\end{bmatrix}=\int S(\lambda)\begin{bmatrix}I_x(\lambda,t_1)\\ I_y(\lambda,t_2)\\ I_z(\lambda,t_3)\end{bmatrix}V(\lambda)\,d\lambda.\tag{11}$$
The tristimulus values are obtained at every pixel point of the object surfaces in an image.
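The principle of Eqs. (7)-(11) can be checked numerically as follows. The matching functions here are crude Gaussian stand-ins, not the CIE 1931 data, and the sensitivity and reflectance curves are likewise synthetic assumptions.

```python
import numpy as np

# Project illuminants I_x = E_T * xbar / V etc., so that each camera
# output directly equals a tristimulus value.
lam = np.linspace(400, 700, 301)
xbar = np.exp(-0.5 * ((lam - 600) / 40) ** 2) \
     + 0.35 * np.exp(-0.5 * ((lam - 450) / 20) ** 2)
ybar = np.exp(-0.5 * ((lam - 555) / 45) ** 2)
zbar = 1.7 * np.exp(-0.5 * ((lam - 450) / 25) ** 2)

E_T = np.ones_like(lam)                                   # flat target illuminant
V = 0.5 + 0.5 * np.exp(-0.5 * ((lam - 550) / 120) ** 2)   # assumed sensitivity
S = 0.4 + 0.2 * np.cos(lam / 50)                          # synthetic reflectance

# Reference XYZ computed the traditional way, Eq. (7)
XYZ_ref = [np.sum(S * E_T * m) for m in (xbar, ybar, zbar)]

# Tristimulus-imager way, Eqs. (10)-(11): camera outputs under I = E_T*m/V
XYZ_cam = [np.sum(S * (E_T * m / V) * V) for m in (xbar, ybar, zbar)]

print(np.allclose(XYZ_ref, XYZ_cam))   # True by construction of Eq. (10)
```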

Figure 16 shows the illuminant spectra (Ix(λ,t1), Iy(λ,t2), Iz(λ,t3)) produced in order to obtain the XYZ tristimulus values under the CIE standard illuminants D65 and A with the present system. The broken curves represent the target spectra, which are the weighted matching functions (ED65(λ)x̄(λ)/V(λ), ED65(λ)ȳ(λ)/V(λ), ED65(λ)z̄(λ)/V(λ)) in Fig. 16(a) and (EA(λ)x̄(λ)/V(λ), EA(λ)ȳ(λ)/V(λ), EA(λ)z̄(λ)/V(λ)) in Fig. 16(b), where ED65(λ) and EA(λ) indicate the illuminant spectral power distributions and (x̄(λ), ȳ(λ), z̄(λ)) are the CIE 1931 color-matching functions. We performed a colorimetry experiment in which the above illuminants were projected onto the X-Rite Mini Color Checker. The accuracy was evaluated by the CIELAB color difference. The color differences ΔEab* under illuminant D65 and illuminant A are shown in Table 3. The average ΔEab* over all 24 color patches was 2.43 under illuminant D65 and 2.82 under illuminant A. The maximum color differences were 5.08 for #13 blue under illuminant D65 and 5.15 for #7 orange under illuminant A.

Fig. 16. Illuminant spectra produced for obtaining the tristimulus values under (a) illuminant D65 and (b) illuminant A.

Table 3. Color Differences ΔEab* for All 24 Color Patches Under Illuminant D65 and Illuminant A

The speed and resolution of the color measurement depend on the experimental setup [33]. In the above experiment, the distance between the object and the camera was 120 mm, so the scene was sampled on the camera sensor at 48 μm/pixel. The distance between the object and the light source (a liquid light-guide cable) was 70 mm, and the effective diameter of the irradiated region was about 20 mm. The weighted color-matching functions were projected sequentially from the light-guide cable onto the object at 200 fps, and the camera captured the illuminated scene synchronously.
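Since one tristimulus image requires three synchronized exposures, the 200 fps projection rate implies roughly 67 colorimetric frames per second, and the stated 48 μm/pixel footprint spans the 20 mm irradiated region with about 417 pixels; a quick arithmetic check:

```python
# Quick arithmetic check of the acquisition rate and geometry stated above.
projection_fps = 200              # illuminants projected per second
exposures_per_xyz = 3             # one exposure each for X, Y, Z
xyz_fps = projection_fps / exposures_per_xyz

pixel_um = 48.0                   # object-side sampling per camera pixel
region_mm = 20.0                  # effective diameter of the lit region
pixels_across = region_mm * 1000.0 / pixel_um

print(round(xyz_fps, 1), round(pixels_across))   # ~66.7 fps, ~417 pixels
```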

5. CONCLUSION

The present paper has proposed a spectral imaging technology that synchronizes a programmable light source with a high-speed monochrome camera. The light source is capable of emitting an arbitrary spectrum at high speed, so the system can capture spectral images at a high frame rate without using filters. We designed the emission of illuminant spectra in two time-sequence modes, steady-state and time-varying, and devised an automatic calibration mechanism to produce the spectral functions accurately.

First, a projector for spectrally rendering a real scene was described as a fundamental use of the spectral imaging system. Although human visual assessment of object surface appearance has so far often been performed under a limited set of light sources such as illuminants A and D65, the present system enables us to observe 3D object surfaces in a real scene under a light source with an arbitrary spectral power distribution. We showed the accuracy of the produced illuminants and the effectiveness of spectral rendering.

Second, we described two effective applications, (1) spectral reflectance recovery and (2) a tristimulus imager, together with their algorithms based on the spectral imaging system. The principle of (1) is to project the reciprocal of the camera spectral sensitivity onto object surfaces as a time sequence of spectra, so that the spectral reflectances can be obtained at the spatial resolution of the camera pixels. We showed that the linear finite-dimensional model representation is effective for accelerating the recovery process. The tristimulus imager of (2) is a new technology for colorimetry: when the color-matching functions modulated by a target illuminant are projected onto object surfaces, the tristimulus values can be obtained at high speed and high resolution directly from the camera outputs. The average CIELAB color difference for the color patches was below 3.0. The feasibility of the proposed algorithms was confirmed in detailed experiments.
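The pointwise recovery principle summarized in (1), narrowband illumination followed by S(λi) = O(ti)/ci, can be sketched numerically. The sensitivity curve and test reflectance below are synthetic stand-ins, and the illuminants are idealized impulses rather than the system's measured basis spectra:

```python
import numpy as np

lam = np.arange(400.0, 701.0, 5.0)
dlam = 5.0

# Synthetic camera sensitivity and test reflectance (stand-ins, not the
# system's measured curves).
V = 0.2 + np.exp(-0.5 * ((lam - 550.0) / 120.0) ** 2)
S_true = 0.4 + 0.3 * np.cos(lam / 50.0)

# Narrowband illuminant at each sample wavelength: an ideal unit-energy
# impulse, so O(t_i) = S(lam_i) * c_i with c_i = integral of E_i * V.
O = np.empty_like(lam)
c = np.empty_like(lam)
for i in range(lam.size):
    E_i = np.zeros_like(lam)
    E_i[i] = 1.0 / dlam
    c[i] = np.sum(E_i * V) * dlam
    O[i] = np.sum(S_true * E_i * V) * dlam

S_rec = O / c                            # pointwise recovery, S(lam_i) = O_i / c_i
print(np.max(np.abs(S_rec - S_true)))    # exact for impulse illuminants
```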

The proposed spectral imaging technology has the potential to be extended to a variety of fields. One straightforward application is a camera simulator: provided that we know the color response functions of a real camera and project those functions modulated by a target illuminant, the outputs of the present system can imitate the scene image captured by the real camera. Spectral printing is another interesting application field of our spectral imaging technology.

ACKNOWLEDGMENTS

The authors would like to thank Brian Wandell at Stanford University for useful discussions, and Hirokazu Kakinuma, Akihiko Yoshimura, and Daisuke Nishioka at Chiba University for help in experiments.

REFERENCES

1. F. H. Imai and R. S. Berns, “Spectral estimation using trichromatic digital camera,” in Proceedings of the International Symposium on Multispectral Imaging and Color Reproduction for Digital Archives (Society of Multispectral Imaging of Japan, 1999), pp. 42–49.

2. S. Tominaga, T. Fukuda, and A. Kimachi, “A high-resolution imaging system for omnidirectional illuminant estimation,” J. Imaging Sci. Technol. 52, 040907 (2008).

3. S. Tominaga, “Multichannel vision system for estimating surface and illuminant functions,” J. Opt. Soc. Am. A 13, 2163–2173 (1996).

4. S. Nishi and S. Tominaga, “Calibration of a multispectral camera system using interference filters and its application,” in Proceedings of the Congress of the International Colour Association (International Colour Association, 2009), CD-ROM.

5. N. Shimano, K. Terai, and M. Hironaga, “Recovery of spectral reflectances of objects being imaged by multispectral cameras,” J. Opt. Soc. Am. A 24, 3211–3219 (2007).

6. S. Tominaga, “Spectral imaging by a multi-channel camera,” J. Electron. Imaging 8, 332–341 (1999).

7. M. A. López-Álvarez, J. Hernández-Andrés, and J. Romero, “Developing an optimum computer-designed multispectral system comprising a monochrome CCD camera and a liquid-crystal tunable filter,” Appl. Opt. 47, 4381–4390 (2008).

8. M. A. López-Álvarez, J. Hernández-Andrés, J. Romero, J. Campos, and A. Pons, “Calibrating the elements of a multispectral imaging system,” J. Imaging Sci. Technol. 53, 031102 (2009).

9. A. Longoni, F. Zaraga, G. Langfelder, and L. Bombelli, “The transverse field detector (TFD): A novel color-sensitive CMOS device,” IEEE Electron Device Lett. 29, 1306–1308 (2008).

10. G. Langfelder, A. F. Longoni, and F. Zaraga, “Implementation of a multi-spectral color imaging device without color filter array,” in Proceedings of Electronic Imaging (SPIE-IS&T, 2011), p. 787606.

11. A. L. Lin and F. Imai, “Efficient spectral imaging based on imaging systems with scene adaptation using tunable color pixels,” in Proceedings of the 19th Color Imaging Conference (Society for Imaging Science and Technology, 2011), pp. 332–338.

12. F. Xiao, J. M. DiCarlo, P. B. Catrysse, and B. A. Wandell, “Image analysis using modulated light sources,” Proc. SPIE 4306, 22–30.

13. J.-I. Park, M.-H. Lee, M. D. Grossberg, and S. K. Nayar, “Multispectral imaging using multiplexed illumination,” in Proceedings of IEEE International Conference on Computer Vision (IEEE, 2007), pp. 1–8.

14. J. M. DiCarlo, F. Xiao, and B. A. Wandell, “Illuminating illumination,” in Proceedings of the 9th Color Imaging Conference (Society for Imaging Science and Technology, 2001), pp. 27–34.

15. C. Chi, H. Yoo, and M. Ben-Ezra, “Multi-spectral imaging by optimized wide band illumination,” Int. J. Comput. Vis. 86, 140–151 (2010).

16. S. Han, I. Sato, T. Okabe, and Y. Sato, “Fast spectral reflectance recovery using DLP projector,” in Proceedings of IEEE Asian Conference on Computer Vision (IEEE, 2010), pp. 323–335.

17. A. Mohan, R. Raskar, and J. Tumblin, “Agile spectrum imaging: programmable wavelength modulation for cameras and projectors,” Comput. Graph. Forum 27, 709–717 (2008).

18. A. Fong, B. Bronson, and E. Wachman, “Advanced photonic tools for hyperspectral imaging in the life sciences,” SPIE Newsroom (2008).

19. M. Hauta-Kasari, K. Miyazawa, S. Toyooka, and J. Parkkinen, “Spectral vision system for measuring color images,” J. Opt. Soc. Am. A 16, 2352–2362 (1999).

20. I. Farup, J. H. Wold, T. Seim, and T. Søndrol, “Generating light with a specified spectral power distribution,” Appl. Opt. 46, 2411–2422 (2007).

21. S. Tominaga, T. Horiuchi, H. Kakinuma, and A. Kimachi, “Spectral imaging with a programmable light source,” in Proceedings of the 17th Color Imaging Conference (Society for Imaging Science and Technology, 2009), pp. 133–138.

22. T. Horiuchi, H. Kakinuma, and S. Tominaga, “Effective illumination control for an active spectral imaging system,” in Proceedings of the 12th International Symposium on Multispectral Color Science (Society for Imaging Science and Technology, 2010), pp. 529–534.

23. B. A. Wandell, Foundations of Vision (Sinauer Associates, 1995).

24. L. T. Maloney, “Evaluation of linear models of surface spectral reflectance with small numbers of parameters,” J. Opt. Soc. Am. A 3, 1673–1683 (1986).

25. J. P. S. Parkkinen, J. Hallikainen, and T. Jaaskelainen, “Characteristic spectra of Munsell colors,” J. Opt. Soc. Am. A 6, 318–322 (1989).

26. M. J. Vrhel, R. Gershon, and L. S. Iwan, “Measurement and analysis of object reflectance spectra,” Color Res. Appl. 19, 4–9 (1994).

27. D. W. Epstein, “Colorimetric analysis of RCA color television system,” RCA Rev. 14, 227–258 (1953).

28. G. Sharma, H. J. Trussell, and M. J. Vrhel, “Optimal nonnegative color scanning filters,” IEEE Trans. Image Process. 7, 129–133 (1998).

29. D. D. Lee and H. S. Seung, “Learning the parts of objects by non-negative matrix factorization,” Nature 401, 788–791 (1999).

30. G. Buchsbaum and O. Bloch, “Color categories revealed by non-negative matrix factorization of Munsell color spectra,” Vision Res. 42, 559–563 (2002).

31. J. Farrell, D. Sherman, and B. Wandell, “How to turn your scanner into a colorimeter,” in Proceedings of the Tenth International Congress on Advances in Non-Impact Printing Technologies (Society for Imaging Science and Technology, 1994), pp. 579–581.

32. G. Langfelder, “Spectrally reconfigurable pixels for dual-color-mode imaging sensors,” Appl. Opt. 51, A91–A98 (2012).

33. S. Tominaga, T. Horiuchi, and A. Yoshimura, “Real-time color measurement using active illuminant,” Proc. SPIE 7528, 752809 (2010).
