Optica Publishing Group

Hyperspectral imaging enabled by an unmodified smartphone for analyzing skin morphological features and monitoring hemodynamics

Open Access

Abstract

We propose a novel method and system that utilizes a popular smartphone to realize hyperspectral imaging for analyzing skin morphological features and monitoring hemodynamics. The imaging system is based on the built-in RGB camera and flashlight of the smartphone. We apply Wiener estimation to transform the acquired RGB-mode images into “pseudo”-hyperspectral images with 16 wavebands, covering a visible range from 470 nm to 620 nm. The processing method uses weighted subtractions between wavebands to extract the absorption information caused by specific chromophores within skin tissue, mainly hemoglobin and melanin. Based on the extracted absorption information of hemoglobin, we conduct real-time monitoring experiments on the skin to measure heart rate and to observe skin activities during a vascular occlusion event. Compared with expensive hyperspectral imaging systems, the smartphone-based system delivers similar results but with much higher imaging resolution. In addition, it is easy to operate, cost-effective and accessible to a wide customer base. The use of an unmodified smartphone to realize hyperspectral imaging promises to bring hyperspectral analysis of the skin out of the laboratory and clinical wards into daily life, which may also have an impact on healthcare in low-resource settings and rural areas.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

The application of hyperspectral imaging in cosmetology and dermatology is becoming increasingly popular and appealing to academic researchers and industrial entrepreneurs [1,2]. Based on the specific spectral characteristics of chromophores within skin tissue, for example hemoglobin and melanin, hyperspectral imaging can be used to separate and contrast the target chromophores from others, upon which skin features can be analyzed and monitored [3–5].

A number of hyperspectral imaging systems have been recently developed for the analysis of skin features. One such approach uses monochromatic lasers or optical filters (either filter wheels or tunable filters) to provide wavelength-specific illumination and a single array detector to sequentially capture tissue reflection images [6–9]. For example, Kim et al. used LEDs to provide illumination in multispectral imaging [10]. Diebele et al. tuned illumination wavelengths with liquid crystal filters for the clinical evaluation of melanomas and common nevi [11]. In these devices, the wavelength-selection procedure requires at least tens of milliseconds to complete, leading to asynchronous data acquisition across wavelengths. Consequently, both tissue motion and device movements inevitably cause motion artifacts, affecting the interpretation of the final results. In addition, the need to select wavelengths complicates the system setup, making it a less cost-effective solution for daily use.

The recent development of hyperspectral cameras has spurred new interest and new opportunities for hyperspectral imaging [12,13]. This type of camera is manufactured by assembling an optical-filter array on the sensor, so that the pixels on the sensor can be separated into various wavebands, enabling spectral images across a wide spectrum to be captured at once [14,15]. Such snapshot capture of hyperspectral images eliminates motion artifacts during data acquisition and improves device compactness [16]. However, due to the complicated design and enabling fabrication, the cost is currently prohibitive for routine, cost-effective applications. Moreover, the number of pixels available in the spectral array is limited, which directly translates to limited imaging resolution. As a result, a high demand remains for a hyperspectral system that is capable of high-resolution imaging, and at the same time is immune to motion artifacts, compact and cost-effective, so that it can be deployed to a wider user community for the daily assessment of skin features.

In the past decade, developments in smartphones have changed the daily life of human beings. Both the technology and the consumer base have grown explosively. Nowadays, a typical smartphone camera has 8 to 12 million pixels and is capable of high-speed imaging, making it ideal as a low-cost, handy imaging device for skin assessment [17]. There have been some developments in smartphone-based skin analysis [18,19]; however, these simply take a straightforward approach to enhancing the images captured by the camera.

In this paper, we propose a novel concept and method that utilizes an unmodified smartphone to enable hyperspectral imaging, where RGB images captured by the built-in camera are used to reconstruct “pseudo”-hyperspectral images through a transformation based on Wiener estimation. The reconstruction process is calibrated with a snapshot hyperspectral camera with 16 spectral channels. After the reconstruction of hyperspectral images from RGB images, we use a weighted subtraction strategy to demonstrate the extraction of chromophore information (e.g. blood and melanin absorption, oxygenation) within the skin from the RGB images captured by the smartphone, which is then compared with the results directly obtained from the snapshot hyperspectral camera. Finally, we show the usefulness of the proposed approach in the analysis of skin morphological features. In the application of monitoring hemodynamic activities, we successfully detect the heartbeat and map the effect of vascular occlusion on the skin.

2. Methods and materials

2.1 Reconstruction principle from RGB images to hyperspectral images

The accuracy of extracting information related to a specific chromophore (e.g. hemoglobin) from the measurements relies on how accurately the measurements represent its characteristic absorption features across the wavelengths. However, the sensor in the smartphone has only three sensitive channels (red, green and blue), and each channel detects light integrated over its spectral sensitive bandwidth (typically wider than 50 nm, though its peak wavelength is at R, G or B). Such integration reduces the sensitivity of the measurement to chromophore features, making the accurate extraction of chromophore information difficult. Hyperspectral reconstruction to refine the measurement signal before processing could be a useful solution to mitigate this problem. To recover high-dimensional information from low-dimensional data, several reconstruction techniques have been investigated, such as finite-dimensional modeling [20], pseudo-inverse [21] and Wiener estimation [22]. Among these, Wiener estimation proves superior in terms of reconstruction accuracy and computational efficiency [23,24]. Therefore, we selected the Wiener estimation algorithm to perform hyperspectral reconstruction from RGB images captured by a smartphone camera. In a smartphone camera, the response of the RGB channels can be depicted as:

$${V_C} = \int l(\lambda )\gamma (\lambda ){f_C}(\lambda )s(\lambda )\,d\lambda = \int {m_C}(\lambda )\gamma (\lambda )\,d\lambda$$
where $\lambda$ is the wavelength, ${V_C}$ is the response of subchannel $C$ in the smartphone camera ($C$ = R, G, B), $l(\lambda)$ is the spectral power distribution of the illumination, ${f_C}(\lambda)$ is the spectral transmittance of the filter in subchannel $C$, and $s(\lambda)$ is the spectral sensitivity of the camera sensor. ${m_C}(\lambda)$, the product of $l(\lambda)$, ${f_C}(\lambda)$ and $s(\lambda)$, is the spectral responsivity of each subchannel. $\gamma(\lambda)$ is the spectral reflection of the sample. The matrix form of Eq. (1) can be represented as:
$${{\textbf {V}}} = {\mathbf{{M}{\boldsymbol \gamma }}} $$
where ${\textbf V}$ is the vector of smartphone camera response, ${\textbf M}$ is the matrix of spectral responsivity in smartphone camera, ${\boldsymbol \gamma }$ is the vector of sample reflection.
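As a concrete illustration, the discretized forward model of Eq. (2) can be sketched in Python. All spectra below are synthetic placeholders (Gaussian responsivities and random reflectance assumed for illustration), not measured camera data:

```python
import numpy as np

# Discretized forward model of Eq. (2): V = M @ gamma.
wavelengths = np.linspace(470, 620, 16)   # nm, 16 sample points

def responsivity(center_nm, width_nm=30.0):
    # Hypothetical m_C(lambda) = l * f_C * s, modeled as a Gaussian.
    return np.exp(-0.5 * ((wavelengths - center_nm) / width_nm) ** 2)

# Rows = spectral responsivities of the R, G, B subchannels, shape (3, 16).
M = np.stack([responsivity(600), responsivity(540), responsivity(480)])
gamma = np.random.default_rng(0).uniform(0.2, 0.9, size=16)  # sample reflectance

V = M @ gamma     # camera response vector (V_R, V_G, V_B)
print(V.shape)    # (3,)
```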

In the Wiener estimation, the core step is to find a reconstruction matrix that transforms RGB images into hyperspectral images. This step requires a training set of samples with known colors for calculation and error correction. In our study, we used 100 color blocks with different reflectance in the visible wavelength bands as the training set. Traditionally, the spectral reflection of the color blocks would be calibrated with a well-characterized spectrometer. In doing so, the sampling areas in the smartphone camera images and in the spectral reflection measurements need to match each other, which is termed the co-registration step. This increases the workload and introduces additional instabilities into the calibration. Furthermore, the reflection measurement and co-registration steps must be repeated for all 100 samples, further increasing the workload and instability.

To avoid these tedious procedures when using a spectrometer for calibration, we instead used a snapshot hyperspectral camera (MQ022HG-IM-SM4X4-VIS, XIMEA, Germany) with 16 spectral channels to provide the spectral reflection calibrations. With this hyperspectral camera, all training steps were replaced by taking one RGB image and one hyperspectral image of the color chart with 100 color samples. The illumination light sources in the training should be the same as the light sources used in the later skin imaging, which in our study were the smartphone flashlight and a fluorescent lamp. The measurements of these 100 samples and the co-registration of each sampling area can be achieved by selecting and calculating the subchannel values of the corresponding target areas from the color chart images. When the sample is captured by the snapshot hyperspectral camera, the response of each subchannel is depicted as:

$$V^{\prime}_{C^{\prime}} = \int l(\lambda )\gamma (\lambda ){f_{C^{\prime}}}(\lambda )s^{\prime}(\lambda )\,d\lambda = \int m^{\prime}_{C^{\prime}}(\lambda )\gamma (\lambda )\,d\lambda$$
where $V^{\prime}_{C^{\prime}}$ is the response of the $C^{\prime}$th subchannel ($C^{\prime}$ = 1, 2, 3, ···, 16), ${f_{C^{\prime}}}(\lambda)$ is the spectral transmittance of the filter in the $C^{\prime}$th subchannel, and $s^{\prime}(\lambda)$ is the spectral sensitivity. $m^{\prime}_{C^{\prime}}(\lambda)$, the product of $l(\lambda)$, ${f_{C^{\prime}}}(\lambda)$ and $s^{\prime}(\lambda)$, is the spectral responsivity of each subchannel in the hyperspectral camera. The matrix form of Eq. (3) is then expressed as:
$${\boldsymbol V}^{\prime} = {\boldsymbol M}^{\prime}{\boldsymbol \gamma }$$
where ${\textbf V}^{\prime}$ is the vector of hyperspectral camera response, ${\textbf M}^{\prime}$ is the matrix of spectral responsivity in hyperspectral camera. To reconstruct hyperspectral images from RGB images, we assume the reconstruction matrix is ${\textbf W}$. The process is expressed as:
$$\widetilde {{\boldsymbol V}^{\prime}} = {\boldsymbol {WV}}$$
where $\widetilde {{\textbf V}^{\prime}}$ is the reconstructed hyperspectral image. To ensure the accuracy of the reconstruction, the mean square error between the reconstructed and original hyperspectral images should be minimized. The error is calculated as:
$$e = \langle {({\boldsymbol V}^{\prime} - \widetilde {{\boldsymbol V}^{\prime}})^{\boldsymbol t}}({\boldsymbol V}^{\prime} - \widetilde {{\boldsymbol V}^{\prime}})\rangle = \langle {\boldsymbol V}^{\prime {\boldsymbol t}}{\boldsymbol V}^{\prime}\rangle - \langle {\boldsymbol V}^{\prime {\boldsymbol t}}{\boldsymbol W}{\boldsymbol V}\rangle - \langle {{\boldsymbol V}^{\boldsymbol t}}{{\boldsymbol W}^{\boldsymbol t}}{\boldsymbol V}^{\prime}\rangle + \langle {{\boldsymbol V}^{\boldsymbol t}}{{\boldsymbol W}^{\boldsymbol t}}{\boldsymbol W}{\boldsymbol V}\rangle$$
When the partial derivative of e with respect to ${\textbf W}$ is zero, the minimum square error is minimized, expressed as:
$$\frac{{\partial e}}{{\partial {\boldsymbol W}}} = - 2\langle {\boldsymbol V}^{\prime}{{\boldsymbol V}^{\boldsymbol t}}\rangle + 2{\boldsymbol W}\langle {\boldsymbol V}{{\boldsymbol V}^{\boldsymbol t}}\rangle = 0$$
The reconstruction matrix is derived as:
$${\boldsymbol W} = \langle{\boldsymbol V}^{\prime}{{\boldsymbol V}^{\boldsymbol t}}\rangle \langle{\boldsymbol V}{{\boldsymbol V}^{\boldsymbol t}}\rangle^{ - 1}$$
where $\langle \cdot \rangle$ is an ensemble-averaging operator, $\langle{{\textbf V}^{\prime}}{{\textbf V}^t}\rangle$ is the correlation matrix between the hyperspectral camera response and the smartphone camera response, and $\langle{\textbf V}{{\textbf V}^t}\rangle$ is the autocorrelation matrix of the smartphone camera response. The reconstruction matrix was calculated from the calibration of a color chart containing 100 color blocks. With this reconstruction matrix, skin images captured by a smartphone camera can be reconstructed into hyperspectral images.
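In code, Eq. (8) amounts to two correlation matrices and one matrix inversion. The sketch below uses synthetic training data in place of the 100 measured color blocks; the hidden map `A` is only a stand-in for the real camera optics, not part of the method:

```python
import numpy as np

rng = np.random.default_rng(1)
n_blocks = 100                             # training color blocks
V = rng.uniform(size=(3, n_blocks))        # RGB responses, one column per block

# Hidden linear map standing in for the real camera optics (illustration only).
A = rng.uniform(size=(16, 3))
V_hyper = A @ V + 0.01 * rng.normal(size=(16, n_blocks))  # 16-band responses

# Eq. (8): W = <V' V^t> <V V^t>^{-1}, with ensemble averages over the blocks.
corr = V_hyper @ V.T / n_blocks            # correlation matrix <V' V^t>, (16, 3)
auto = V @ V.T / n_blocks                  # autocorrelation matrix <V V^t>, (3, 3)
W = corr @ np.linalg.inv(auto)             # reconstruction matrix, (16, 3)

V_rec = W @ V                              # reconstructed hyperspectral responses
print(np.abs(V_rec - V_hyper).mean())      # small residual error
```

Once `W` is known, reconstructing a full image is the same matrix product applied to every pixel's RGB vector.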

2.2 Wiener estimation matrix calculation

A schematic setup of the hyperspectral reconstruction calibration is shown in Fig. 1(a). We used a color chart with 100 randomly selected known-color blocks. The color chart was illuminated by the flashlight of a smartphone (Mate SE, HUAWEI, China). Both the smartphone camera (sensor model: Sony IMX371) and the snapshot hyperspectral camera were used to acquire images of the color chart. The smartphone camera has a sensor with 3264×2448 pixels and works in the RGB mode. The snapshot hyperspectral camera (MQ022HG-IM-SM4X4-VIS, XIMEA, Germany) houses a CMOS sensor with 2048×1088 pixels, where a filter array separates the sensor array into 512×272 super-pixels. Each super-pixel contains a 4×4 spectrally sensitive pixel matrix covering 16 wavebands, termed 16 subchannels. Figure 1(b) shows the spectral sensitivity curves of the 16 subchannels in the snapshot hyperspectral camera. The specific values of the spectral characterization of the snapshot hyperspectral camera are shown in Table 1. The RGB-mode image of the color chart from the smartphone camera is shown in Fig. 1(c). From the raw hyperspectral data, we extracted a representative image from the subchannel of band 9 (615 nm) as an example, shown in Fig. 1(d). The spectral power distribution of the smartphone flashlight is shown in Fig. 1(e). Meanwhile, the hyperspectral data of the light source were measured from a polymer white diffuser standard (SphereOptics GmbH, 95% reflectance) under the smartphone flashlight illumination. Therefore, the reflectance of the color chart in the 16 subchannels could be calculated. Based on Eq. (8), we calculated the correlation matrix between the hyperspectral response and the smartphone camera response, and the autocorrelation matrix of the smartphone camera, to obtain the Wiener estimation matrix. To minimize the estimation error, we used the averaged pixel responses in the central area, which is one fourth of the total area, of every color block in the calibration.
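The central-area averaging used in the calibration can be sketched as follows; `block` is a hypothetical single-channel crop of one color block:

```python
import numpy as np

def central_mean(block):
    # Average over the central area: half the height by half the width,
    # i.e. one fourth of the total block area.
    h, w = block.shape
    center = block[h // 4:h // 4 + h // 2, w // 4:w // 4 + w // 2]
    return center.mean()

block = np.full((40, 60), 0.5)     # hypothetical uniform color block
print(central_mean(block))         # 0.5
```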


Fig. 1. (a) Schematic of the hyperspectral reconstruction calibration system that consists of a smartphone, color chart and 16-channel hyperspectral camera, with sensor structure and sensitive wavebands at each subchannel shown in the top left. (b) The manufacturer’s data of wavelength-dependent sensitivity for 16 bands in the snapshot hyperspectral camera. (c) The RGB image of color chart from the smartphone camera. (d) The raw image of color chart directly exported from the band 9 in the snapshot hyperspectral camera. (e) Spectral power distribution of the smartphone flashlight that is used in this study. (f) Absorption spectra of oxyhemoglobin (oxyHb), deoxyhemoglobin (deoxyHb) and melanin.



Table 1. The spectral characterization of snapshot hyperspectral camera

2.3 Hyperspectral reconstruction and post-processing

Chromophores, such as hemoglobin and melanin, are the key factors in skin assessment. By extracting their absorption information from the skin, these chromophore features can be contrasted against the surrounding tissues, realizing specific assessments. We recruited two volunteers and used the smartphone to capture RGB images of their faces, where redness and moles (i.e., nevi) appear on the skin. This study adhered to the tenets of the Declaration of Helsinki and was performed in accordance with the Health Insurance Portability and Accountability Act. Ethical approval was obtained from the Institutional Review Board of the University of Washington. All enrolled participants provided written informed consent.

With our method, we extracted the hemoglobin-related redness and melanin-related moles from the facial skin and separated them from each other. The distribution of chromophores provides the basis for assessing morphological features. The time-resolved variation of hemoglobin absorption may be used as an index for hemodynamic monitoring, which may be useful to infer pathological information. With the calculated Wiener estimation matrix, we transformed RGB images of the skin acquired by the smartphone camera into hyperspectral images, simulating the images as if they were captured by the 16-spectral-channel hyperspectral camera. From the reconstructed hyperspectral images, we extracted the spatial absorption information of skin chromophores, e.g. melanin and hemoglobin, through a series of processing steps on the images representing different wavebands. Figure 1(f) shows the absorption spectra of melanin and hemoglobin. For example, in order to extract the absorption information caused by hemoglobin in blood, we selected several red-light wavebands, including bands 8 (615 nm), 9 (625 nm), 10 (603 nm) and 11 (595 nm), and subtracted them one by one from green-light wavebands, including bands 6 (556 nm), 7 (544 nm), 12 (529 nm) and 13 (543 nm). From the green bands to the red bands, the absorption of the chromophores of interest decreases, but the rate of decrease is slow for melanin and rapid for hemoglobin. By subtracting the measurement at the red bands from that at the green bands, the contribution of melanin absorption to the extraction of hemoglobin absorption can be suppressed. Thus, we used the weighted subtraction to emphasize the hemoglobin absorption measurement, indicating blood perfusion in skin samples. The weighted subtraction is expressed as:

$${C_r} = {C_1} - K{C_2} = m{x_1}{l_1} + n{y_1}{l_1} - K(m{x_2}{l_2} + n{y_2}{l_2}) = m({x_1}{l_1} - K{x_2}{l_2}) + n({y_1}{l_1} - K{y_2}{l_2})$$
where ${C_r}$ is the estimated reflection of skins that is assumed to be influenced by the absorption of melanin and hemoglobin [3]. ${C_1}$ and ${C_2}$ are the detected reflections at two selected wavebands. K represents the ratio of weighted subtraction. m and n are the concentrations of the hemoglobin and melanin in the skin sample. $ {l_1}$ and ${l_2}$ are the illumination intensities at two selected wavebands. ${x_1}$ and ${x_2}$ are the reflectance of hemoglobin at two selected wavebands. $ {y_1}$ and ${y_2}$ are the reflectance of melanin at two selected wavebands. By setting the value of K to be ${y_1}{l_1}/{y_2}{l_2}$, the reflection of hemoglobin in the data can be extracted. By conducting similar weighted subtraction processing between blue light wavebands (band 0: 482 nm, band 1: 494 nm, band 2: 472 nm, band 3: 465 nm) and green light wavebands, the superficial melanin can be extracted.
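Applied to the reconstructed waveband images, Eq. (9) is a per-pixel operation. A minimal sketch, with synthetic images and hypothetical melanin reflectance-illumination products `y1l1` and `y2l2`:

```python
import numpy as np

def weighted_subtraction(C1, C2, y1l1, y2l2):
    # Eq. (9): choose K = y1*l1 / (y2*l2) so the melanin terms cancel,
    # leaving a residual that tracks hemoglobin absorption.
    K = y1l1 / y2l2
    return C1 - K * C2

rng = np.random.default_rng(2)
C1 = rng.uniform(0.2, 0.8, size=(64, 64))   # green-band image (synthetic)
C2 = rng.uniform(0.2, 0.8, size=(64, 64))   # red-band image (synthetic)

# y1l1 and y2l2 are hypothetical melanin products, not values from the paper.
hb_map = weighted_subtraction(C1, C2, y1l1=0.55, y2l2=0.60)
print(hb_map.shape)   # (64, 64)
```

The same function with blue- and green-band inputs yields the superficial melanin map.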

In the real-time monitoring experiment, we extracted hemoglobin absorption information from hyperspectral images that were reconstructed from RGB-image sequences. From the results, we quantitatively analyzed the skin hemodynamics during the heartbeat cycle and vascular occlusion. Furthermore, in the monitoring of vascular occlusion, besides the blood absorption, we also estimated the oxygen saturation (SaO2). In this case, oxyhemoglobin and deoxyhemoglobin are treated independently. The reflection is expressed as:

$${C_i} = {m_{oxy}}x_i^{oxy} + {m_{deo}}x_i^{deo} + \alpha$$
where ${C_i}$ is the detected reflection at the selected wavebands. ${m_{oxy}}$ and ${m_{deo}}$ are the concentrations of oxyhemoglobin and deoxyhemoglobin, respectively. $x_i^{oxy}$ and $x_i^{deo}$ are the corresponding reflectance coefficients of oxyhemoglobin and deoxyhemoglobin. $\alpha$ is a term that represents the light intensity losses caused by other chromophores, including melanin. We selected bands 4 (570 nm), 5 (581 nm) and 6 (556 nm) for the evaluation of SaO2. Since the sensitive wavelengths of these bands are close to each other, we assumed $\alpha$ to be a constant in the processing. From Eq. (10), $Sa{O_2}$ can be evaluated as:
$$Sa{O_2} = {m_{oxy}}/({m_{oxy}} + {m_{deo}})$$
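With three bands and three unknowns, Eq. (10) can be solved per pixel as a small linear system. The reflectance coefficients and measurements below are illustrative placeholders, not values from the paper:

```python
import numpy as np

# Hypothetical reflectance coefficients for bands 4-6 (illustration only).
x_oxy = np.array([0.30, 0.45, 0.25])   # oxyhemoglobin
x_deo = np.array([0.35, 0.30, 0.40])   # deoxyhemoglobin

# Eq. (10) for three bands: C_i = m_oxy*x_i_oxy + m_deo*x_i_deo + alpha.
A = np.column_stack([x_oxy, x_deo, np.ones(3)])

# Measurements synthesized from m_oxy = 0.6, m_deo = 0.2, alpha = 0.1.
C = np.array([0.35, 0.43, 0.33])

m_oxy, m_deo, alpha = np.linalg.solve(A, C)   # 3 equations, 3 unknowns
sao2 = m_oxy / (m_oxy + m_deo)                # Eq. (11)
print(round(sao2, 3))                          # 0.75
```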

3. Results

3.1 Reconstruction accuracy

To investigate the reconstruction performance, we reconstructed the RGB images of the 100 color blocks from the color chart into hyperspectral images with the Wiener estimation matrix. For each color block, we calculated the average of the relative errors over the 16 subchannels between the initial and reconstructed hyperspectral reflectance. The averaged relative errors of the 100 color blocks are shown in Table 2. The maximum, minimum and average values are 10.950%, 0.424% and 4.933%, respectively. The relative errors of the reconstruction are higher in some color blocks, mainly blocks with dark, cold tones. Underexposure in the blue and green wavebands, caused by the relatively low power intensity of the light source around 500 nm, is likely the main factor. To show the reconstruction in more detail, we selected 3 representative samples, including the one with the maximum reconstruction error, the one with the minimum reconstruction error and one with a reconstruction error close to the average of the 100 color blocks, and compared their initial and reconstructed reflectance in the 16 wavebands, as shown in Fig. 2. Note that the reflectance in each waveband was averaged from data in the central area, which is one fourth of the total area of the corresponding color block. The results indicate that the hyperspectral images reconstructed from RGB images match well with the initial hyperspectral images.
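The per-block error metric of Table 2 can be computed as below; the reflectance arrays here are synthetic stand-ins for the measured and reconstructed data:

```python
import numpy as np

def mean_relative_error(initial, reconstructed):
    # Average the per-subchannel relative error over the 16 wavebands,
    # giving one percentage per color block (as in Table 2).
    rel = np.abs(reconstructed - initial) / initial
    return rel.mean(axis=1) * 100.0

rng = np.random.default_rng(3)
initial = rng.uniform(0.2, 0.9, size=(100, 16))          # measured reflectance
reconstructed = initial * (1 + 0.05 * rng.normal(size=(100, 16)))
errors = mean_relative_error(initial, reconstructed)
print(errors.shape)   # (100,) -> one error value per color block
```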


Fig. 2. Comparison of the reflectance in 16 wavebands of hyperspectral images reconstructed from RGB images with the initial hyperspectral images from the snapshot hyperspectral camera. Shown are 3 representative color blocks. The relative errors between the initial and reconstructed hyperspectral reflectance of the color blocks on the left, in the middle and on the right are 10.95% (the maximum reconstruction error), 4.946% (close to the mean error) and 0.424% (the minimum error), respectively. Black box symbols represent the reflectance from the initial hyperspectral images; red circle symbols correspond to the reflectance of the reconstructed hyperspectral images. The inset of each panel is the RGB image of the selected color block.



Table 2. Relative errors between initial and reconstructed hyperspectral reflectance of 100 color blocks

3.2 Skin morphological feature analysis

In a dark environment, we conducted skin imaging of a volunteer with a smartphone, as shown in Fig. 3(a). The illumination was provided by the built-in flashlight, and the smartphone camera was used to acquire images. These settings were the same as in the calibration steps using the smartphone flashlight as the illumination source. By stabilizing the illumination condition, the accuracy of the hyperspectral reconstruction was ensured. The RGB-mode image of the facial skin is shown in Fig. 3(b). There are redness and moles in the field of view. The RGB-mode image was reconstructed into hyperspectral images with the Wiener estimation matrix as described above. The reconstructed data consist of 16 wavebands, simulating the hyperspectral images as if they were captured by the snapshot hyperspectral camera. Blood vessels are localized within relatively deep skin tissue, so light with a longer penetration depth is suitable for their detection. Therefore, we applied weighted subtractions between green- and red-light wavebands to extract the hemoglobin absorption information. The extracted blood absorption map is shown in Fig. 3(c), where redness spots (red arrow) are significantly contrasted from other features. Afterwards, we extracted the melanin absorption with weighted subtractions between blue- and green-light wavebands, because melanin exists in the superficial layer of the skin. Figure 3(d) shows the absorption map of melanin. As expected, the hemoglobin-related features are significantly weakened while the nevi (black arrows) are enhanced in this map.


Fig. 3. The image acquisition and extraction of blood and melanin absorption information from hyperspectral reconstruction with the RGB image from a smartphone camera under illumination from the built-in flashlight. (a) Photograph during image acquisition with the smartphone camera and built-in flashlight. (b) Initial RGB-mode image of the facial skin captured by the smartphone camera. (c) Blood absorption information map. (d) Melanin absorption information map. Red arrow: skin redness; black arrows: moles. Blood and melanin absorption maps are coded according to the color bar shown on the right.


To compare the skin analysis performance of the RGB-camera-based hyperspectral imaging system and the snapshot hyperspectral camera, we imaged the same skin area with the two cameras and conducted the same processing. The melanin absorption information was extracted and compared as an example (Fig. 4). Figure 4(a) shows the raw image from band 9 of the snapshot hyperspectral camera. The melanin absorption map extracted from the images captured by the hyperspectral camera is shown in Fig. 4(b), where the details of two moles (marked by square boxes) are shown in the zoomed-in images (Figs. 4(c) and 4(d)). Analysis of the images captured by the smartphone resulted in the melanin absorption map shown in Fig. 4(f). Figures 4(g) and 4(h) are the zoomed-in views of the two moles, marked by square boxes in Fig. 4(f). Since the smartphone camera has many more pixels than the snapshot camera, the melanin absorption map in Fig. 4(f) performs much better in terms of image resolution. From the zoomed-in images, we can see that the absorption spots of the two moles from the smartphone-based system (Figs. 4(g) and 4(h)) show clear edges and sizes. However, these characteristics are not clearly depicted by the snapshot hyperspectral camera, largely due to its limited spatial resolution. It is worth noting that the unexpectedly high contrast in the skin area below the mandible (i.e. jaw) was caused by the curvature of the skin surface, which makes the reflectance measurement non-uniform (typically low) in that area.


Fig. 4. Comparison of the imaging performance between the snapshot hyperspectral camera and the smartphone-based hyperspectral reconstruction. (a) Raw image of the facial skin with moles from band 9 of the snapshot hyperspectral camera. (b) Extracted melanin absorption map from the snapshot hyperspectral camera. (c) Zoomed-in view of the left white box area of (b). (d) Zoomed-in view of the right white box area of (b). (e) Raw RGB image of the same facial skin captured by the smartphone camera. (f) Extracted melanin absorption map from the smartphone camera. (g) Zoomed-in view of the left white box area of (f). (h) Zoomed-in view of the right white box area of (f). The melanin absorption maps are coded according to the color bar shown on the right.


In our smartphone-based hyperspectral imaging and analysis method, as long as the illumination is kept generally stable, various types of light sources are applicable. Besides the smartphone flashlight, another common light source, the fluorescent lamp, was tested in our study as well. Before imaging, we re-conducted the calibration steps under fluorescent lamp illumination and calculated a new Wiener estimation matrix. Then, we used the same smartphone to image the volunteer's facial skin and conducted similar reconstruction and processing on the captured RGB images. We selected and present the analysis results of two parts of the left facial skin in Fig. 5. Figure 5(a) shows the initial RGB-mode image from the smartphone camera under fluorescent lamp illumination, where there are pimples and moles in the field of view. The reconstruction and processing resulted in absorption information maps of hemoglobin and melanin, shown in Figs. 5(b) and 5(c), respectively. To clearly show the analysis performance for different skin features, we zoomed in on a small target area (the red-box area in Fig. 5(a)) to provide the detailed visualizations in Figs. 5(d) to 5(f). Figures 5(e) and 5(f) clearly show that the pimple (red arrow) has a higher blood content while the mole has a higher melanin content. The lower part of the facial skin features one mole and many redness regions instead of pimples. With similar steps, the initial RGB image, the extracted absorption maps of hemoglobin and melanin, and the corresponding zoomed-in details of the main features are shown in Figs. 5(g) to 5(l), respectively. The results in Fig. 5 verify that a fluorescent lamp can also be used as the illumination source in smartphone-based hyperspectral imaging.


Fig. 5. Extraction of blood and melanin information content from hyperspectral reconstruction with the RGB images captured by a smartphone camera under fluorescent lamp illumination. (a) Initial RGB image of the upper facial skin captured by the smartphone, with (b) blood absorption map and (c) melanin absorption map of the facial skin, with pimples and moles. (d)–(f) Zoomed-in views of the target areas in the red boxes of (a)–(c), respectively. (g) Initial RGB image of the lower facial skin captured by the smartphone, with (h) blood absorption map and (i) melanin absorption map of the facial skin, with skin redness and a mole. (j)–(l) Zoomed-in views of the target areas in the red boxes of (g)–(i), respectively. Red arrows: skin redness or pimples; black arrows: moles. Blood and melanin absorption maps are coded according to the color bar shown on the right.


3.3 Hemodynamic monitoring

3.3.1 Heart rate measurement

Following the cycle of heart contraction and expansion, the blood supply in the body is pulsatile. Therefore, by monitoring the change of blood absorption intensity in skin tissue, the heartbeat may be detected [25]. In the experiment, we used a fixed support to keep the facial skin stable during the video recording and provided illumination with the smartphone flashlight, as shown in Fig. 6(a). We extracted the blood absorption map from every frame in the video. From the blood absorption map in Fig. 6(b), we can see higher blood absorption intensities at the lips, eye sockets and ears than in other areas, consistent with general expectation. To test whether it is possible to evaluate the heart rate from the time series of blood information maps, we summed the signals of the blood absorption map over the whole facial skin for each frame in the video. We then conducted a Fourier transform of the resulting time series. This process resulted in a plot in the frequency domain (black curve in Fig. 6(c)), where a main frequency peak around 1.05 Hz is identified, which we believe is the heartbeat frequency. As a proof, we used a pulse sensor and data-logger device (PowerLab 4/30, AD Instruments) to provide reference heart rate measurements during imaging. The result is shown as the red curve in Fig. 6(c), where a heartbeat frequency of 1.05 Hz is identified, exactly matching the measurement from the smartphone camera. This demonstrates that our monitoring successfully captured the heart rate.
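The frame-wise summation and Fourier analysis described above can be sketched as follows, using a synthetic 1.05 Hz pulse in place of the measured per-frame sums (the 30 fps frame rate and 20 s duration are assumptions for illustration):

```python
import numpy as np

fps = 30.0                                    # assumed frame rate
t = np.arange(0, 20, 1 / fps)                 # 20 s recording, 600 frames

# Synthetic per-frame summed blood absorption with a 1.05 Hz pulse.
signal = 1.0 + 0.05 * np.sin(2 * np.pi * 1.05 * t)

signal = signal - signal.mean()               # remove the DC component
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / fps)
heart_rate_hz = freqs[np.argmax(spectrum)]    # dominant frequency
print(heart_rate_hz)                          # 1.05 Hz -> ~63 bpm
```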


Fig. 6. Heart rate measurement with the smartphone-based hyperspectral imaging system. (a) One frame extracted from the RGB-mode video of facial skin monitoring with the smartphone camera under flashlight illumination. (b) The derived map of blood absorption information (coded according to the color bar shown on the right) overlaid with the grey-scale version of the raw RGB image in (a). (c) Black curve: the frequency spectrum of the temporal profile of blood absorption information content. Red curve: the heart rate reference from the PowerLab pulse sensor.


3.3.2 Vascular occlusion

Vascular occlusion monitoring is useful in the assessment of some clinical procedures, such as skin grafting [26], and in rehabilitation monitoring, such as pressure sore monitoring [27]. In this study, we applied external pressure with a rubber ring on a volunteer’s middle finger to create a vascular occlusion. We maintained the pressure for 60 seconds and then released it. The pressure-on and recovery processes were recorded by the smartphone camera, with illumination provided by the smartphone flashlight. We extracted one RGB frame at 60 s in the video (Fig. 7(a)) and reconstructed it into a hyperspectral image. Figure 7(b) shows the blood absorption map extracted from the reconstructed hyperspectral image; the oxygen saturation map was extracted as well and is shown in Fig. 7(c). Compared with the other fingers, the middle finger under pressure shows lower blood absorption and oxygenation intensities. The correlation coefficient between the blood absorption map and the oxygen saturation map is 0.9430, indicating that these two indices are highly correlated. The oxygenation status in healthy tissue is macroscopically stable, apart from the periodic variation caused by the cardiac cycle. The intensity differences in the oxygenation map may be due to the density and locations of the vascular components in the corresponding areas, which likewise determine the blood absorption mapping in our study. The supply of blood to the occluded skin area was reduced, leading to a decrease in both blood volume and oxyhemoglobin. We selected the dorsal skin on the middle digits of the five fingers as the ROI; the field of view was ∼10 × 20 mm. During video recording, measures were taken to stabilize the hand and fingers to minimize motion artifacts. For every frame in the video, we summed the blood absorption and oxygen saturation intensities in the ROI (the color box areas in Fig. 7(a)) on the five fingers.
The temporal variation curves of blood absorption and oxygen saturation intensity were normalized and are presented in Fig. 7(d) and 7(e), respectively. In the middle-finger curves, both blood absorption and oxygen saturation intensities are relatively low while pressure is applied. After the rubber ring is released at around 60 s, the intensities in both curves show a rapid overshoot and then a slow regression to a stable level. In contrast, in the control group, consisting of the other four fingers, blood absorption and oxygen saturation intensities remain stable throughout the monitoring.
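The per-finger ROI summation and baseline normalization described above can be sketched as follows. This is a hypothetical minimal example; the ROI coordinates and synthetic frames are illustrations, not data from the study.

```python
import numpy as np

def roi_time_series(frames, roi):
    """Sum a chromophore map inside a rectangular ROI for every frame,
    then normalize to the first (pre-occlusion) frame so curves from
    different fingers are comparable.

    frames: iterable of 2-D arrays (e.g. blood-absorption maps, one per frame)
    roi:    (row0, row1, col0, col1) box, as drawn on the first frame
    """
    r0, r1, c0, c1 = roi
    series = np.array([f[r0:r1, c0:c1].sum() for f in frames])
    return series / series[0]

# Toy example: 5 frames of a 100x100 map with a darkening 20x20 ROI patch
frames = []
for k in range(5):
    m = np.ones((100, 100))
    m[40:60, 40:60] *= 1.0 - 0.1 * k   # simulated occlusion inside the ROI
    frames.append(m)
curve = roi_time_series(frames, (40, 60, 40, 60))
print(curve)  # decreases from 1.0 toward 0.6
```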


Fig. 7. Vascular occlusion monitoring with the smartphone-based hyperspectral imaging system. (a) Representative RGB frame at 60 s from the monitoring video during vascular occlusion of the middle finger. (b) Blood absorption map at 60 s extracted from hyperspectral reconstruction based on the RGB frame in (a) (color-coded according to the color bar shown on the right). (c) Oxygen saturation map at 60 s extracted from hyperspectral reconstruction based on the RGB frame in (a) (color-coded according to the color bar shown on the right). (d) Real-time response curves of blood absorption intensities on the fingers during vascular occlusion. The intensities were normalized summations of blood absorption intensities in the corresponding color box areas in (a). Experiment group: black curve: box 1, middle finger. Control group: red curve: box 2, forefinger; green curve: box 3, ring finger; blue curve: box 4, little finger; teal curve: box 5, thumb. (e) Real-time response curves of oxygen saturation intensities on the finger skin during vascular occlusion. The intensities were normalized summations of oxygen saturation intensities in the corresponding color box areas in (a). Experiment group: black curve: box 1, middle finger. Control group: red curve: box 2, forefinger; green curve: box 3, ring finger; blue curve: box 4, little finger; teal curve: box 5, thumb. (f) Visualization of RGB frames, blood absorption maps and oxygen saturation maps at 10 s, 50 s, 70 s and 110 s from the monitoring video of vascular occlusion of the middle finger. Blood absorption maps are color-coded according to the color bar shown in (b). Oxygen saturation maps are color-coded according to the color bar shown in (c).


To visualize these variations, we extracted four frames at around 10 s, 50 s, 70 s and 110 s from the video and computed the blood absorption and oxygenation maps. The initial RGB-mode frames, blood absorption maps and oxygen saturation maps are listed in Fig. 7(f). As expected, the blood perfusion and oxygenation of the middle finger at 10 and 50 s are relatively weaker than those of the other fingers. At 70 s, these two indices are relatively higher than those of the other fingers. Finally, at 110 s, the blood absorption and oxygenation intensities return to a level similar to the other fingers. These results demonstrate that our smartphone-based hyperspectral imaging method is applicable to hemodynamic monitoring of skin vascular occlusion.

4. Discussion

Commercial smartphones have developed rapidly in the past ten years. As of 2019, there were 2.71 billion smartphone users worldwide. Alongside this development, smartphone cameras have also advanced markedly in imaging quality, fidelity, resolution and speed. Some flagship smartphones are equipped with 12-megapixel cameras and can record video at 60 fps. In special modes, such as slow motion, some smartphone cameras can even achieve high-speed recording at 960 fps. All these advancements make smartphone-based imaging attractive for providing cost-effective skin assessments with high spatial and temporal resolution.

Chromophores in skin tissue, mainly hemoglobin and melanin, are the dominant factors in skin assessments, both in clinical dermatology and in cosmetics. Hemoglobin concentration is related to features such as skin redness, inflammation and vascular abnormalities. Melanin variation often presents in skin pigmentation disorders, nevi and some skin cancers [28–32]. Because these chromophores have different optical properties, spectroscopic analysis is usually used for their quantitative measurement [33,34]. Recently, hyperspectral imaging has provided a strategy with both spectral analysis and snapshot visualization abilities, showing attractive potential for wider use in skin assessments.

Our study aims to provide a smartphone-based hyperspectral imaging system for skin assessments. Since the RGB mode of smartphone cameras lacks sufficient wavebands and spectral resolution to properly conduct hyperspectral analysis, we applied Wiener estimation to transform RGB images captured by the smartphone camera into hyperspectral images. While other, possibly more efficient methods may exist, we selected Wiener estimation for this study and showed that it is promising in providing accurate and high-resolution hyperspectral reconstruction. We note that such reconstruction using Wiener estimation relies on training with a large set of samples, requiring independent measurements of the spectral reflection of each sample.
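As a sketch of what this training step might look like, the Wiener matrix W = ⟨γv^t⟩⟨vv^t⟩⁻¹ can be computed from paired measurements: the reference spectra γ (here, from the 16-band hyperspectral camera) and the corresponding camera RGB responses v. The shapes and synthetic data below are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def wiener_matrix(gamma, v):
    """Compute the Wiener estimation matrix W such that gamma ≈ W v.

    gamma: (16, N) reference spectra of N training color patches
    v:     (3, N)  corresponding camera RGB responses
    Returns W of shape (16, 3): W = <gamma v^t> <v v^t>^-1.
    """
    n = v.shape[1]
    g_vt = gamma @ v.T / n          # cross-correlation <gamma v^t>
    v_vt = v @ v.T / n              # autocorrelation   <v v^t>
    return g_vt @ np.linalg.inv(v_vt)

# Toy check: if the true spectra-from-RGB mapping is linear and noise-free,
# the Wiener matrix recovers it exactly.
rng = np.random.default_rng(1)
A = rng.normal(size=(16, 3))        # hypothetical true linear mapping
v = rng.normal(size=(3, 100))       # RGB responses of 100 training patches
gamma = A @ v                       # corresponding reference spectra
W = wiener_matrix(gamma, v)
print(np.allclose(W, A))  # True
```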

The spectral calibration would ideally be conducted with high-resolution spectrometers. However, the calibration process would then involve an extremely high workload and potential instabilities, because it would have to be done wavelength by wavelength. To mitigate this prohibitive and tedious task, we used a state-of-the-art snapshot hyperspectral camera with 16 wavebands ranging from 470 nm to 620 nm to provide spectral reflection calibration. With this camera, the measurement of multiple wavelengths and multiple samples, and the co-registration of the sampling area in each sample, can easily be achieved by selecting and calculating corresponding areas of the color chart in the RGB and hyperspectral images. Therefore, the calibration is realized by taking RGB and hyperspectral images of the color chart under the same illumination, dramatically reducing the workload and increasing the stability of the calibration. However, as with traditional calibrations using spectrometers, the effectiveness of our calibration and training is limited for practical applications because the same illumination condition is required; a change in ambient lighting may require re-calibration.

The calibration process resulted in a Wiener estimation matrix that is used to transform the RGB images into hyperspectral images. Since the transformation is applied pixel by pixel, the reconstructed hyperspectral images possess the same spatial pixel resolution as the original RGB images (around 12 million spatial pixels for a typical flagship camera), with each pixel bearing spectral information across 16 wavebands in the visible range. With weighted subtractions between different wavebands, the target chromophores were contrasted against the surrounding tissue. Combining the hyperspectral reconstruction and post-processing steps above, we demonstrated the smartphone camera’s ability to visualize blood absorption, melanin absorption and oxygen saturation maps of facial skin. Moreover, the extraction and separation of chromophores enables the smartphone-based system to quantitatively analyze and monitor temporal skin activities (although it should be understood that this remains an estimation). Compared with conventional hyperspectral imaging systems, which mostly rely on lasers or tunable optical filters, the smartphone-based hyperspectral imaging system eliminates the internal time difference between frames, greatly improving the imaging speed and immunity to motion artifacts. Furthermore, compared with advanced and costly hyperspectral cameras, smartphone-based hyperspectral imaging is superior in terms of spatial resolution. In addition, our method is flexible with respect to illumination conditions: the smartphone flashlight and common fluorescent lamps are both applicable. Most importantly, our strategy does not require any modification or addition to existing smartphones, making hyperspectral imaging and analysis of skin tissue possible in daily scenes outside the laboratory. This may be particularly important and appealing in low-resource regions.
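The pixel-by-pixel transformation and the weighted waveband subtraction can be illustrated as follows. The band indices and weight k below are placeholders for illustration, not the calibrated values used in the study.

```python
import numpy as np

def reconstruct_hyperspectral(rgb, W):
    """Apply a (16, 3) Wiener matrix W to every pixel of an (H, W_px, 3)
    RGB image, yielding an (H, W_px, 16) pseudo-hyperspectral cube."""
    h, w, _ = rgb.shape
    flat = rgb.reshape(-1, 3).T             # (3, H*W_px) pixel columns
    return (W @ flat).T.reshape(h, w, 16)   # pixel-by-pixel transformation

def chromophore_map(cube, band1, band2, k):
    """Weighted subtraction between two wavebands to contrast a target
    chromophore against the surrounding tissue."""
    return cube[..., band1] - k * cube[..., band2]

# Usage sketch with random data (illustration only)
rgb = np.random.rand(4, 4, 3)               # stand-in for a camera frame
W = np.random.rand(16, 3)                   # stand-in for the trained matrix
cube = reconstruct_hyperspectral(rgb, W)
blood = chromophore_map(cube, band1=3, band2=12, k=0.8)
print(cube.shape, blood.shape)  # (4, 4, 16) (4, 4)
```

Because the transformation is a single matrix product per pixel, the reconstruction preserves the full spatial resolution of the input image.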

We have conducted analyses of regular skin features, such as pimples and nevi, to provide proof of concept for the feasibility and advantages of the smartphone-based hyperspectral imaging system and methods. If properly developed, we believe this method would also be applicable to facilitating the diagnosis and prognosis of other dermatoses with chromophore abnormalities, such as malignant melanoma [35]. In addition, smartphone-based operation holds enormous promise in cosmetic applications, where assessments of UV spots and skin hyperpigmentation [36,37] are often needed. In the future, mobile skin-assessment applications may be developed as smartphone apps, which consumers could use to conveniently conduct self-analyses of their own skin conditions.

There is often a lack of scientific devices and instruments for hyperspectral imaging in low-resource regions. However, owing to the explosive growth of mobile communication technology, smartphones have become widely used, even in developing countries and rural areas. Our study introduces the use of a low-cost smartphone to realize hyperspectral analysis of the skin, providing an opportunity for users to achieve skin assessments. The popularity of smartphones makes the proposed hyperspectral imaging useful in daily cosmetic and dermatologic applications. In addition, the hyperspectral reconstruction may also be applicable to other imaging techniques, such as microscopy and fluorescence imaging, to extract more sensitive information of interest from typical RGB images. However, the calibration settings and steps discussed in this paper, including the illumination used during calibration and imaging, still require further improvement and optimization before they can be easily deployed on ordinary smartphones.

5. Conclusion

We have proposed a smartphone-based hyperspectral imaging system and corresponding post-processing methods for skin analysis and monitoring. Using a Wiener estimation strategy, RGB-mode skin images acquired by the smartphone camera were reconstructed into hyperspectral images with 16 wavebands, as if they had been captured by a 16-channel hyperspectral camera. After transformation, weighted subtractions were applied to extract blood and melanin absorption information from the reconstructed hyperspectral images of the skin, realizing spatial analysis of skin features. We have also demonstrated the advantages of the smartphone-based hyperspectral system in terms of its inherent imaging resolution and its adaptability to various illumination sources, and shown that the proposed system can be used to monitor skin hemodynamic activities over cardiac cycles and during a vascular occlusion. Compared with conventional hyperspectral imaging methods relying on lasers or tunable filters, our method dramatically improves system compactness and immunity to motion artifacts. Compared with snapshot-hyperspectral-camera-based systems, the smartphone-based system has higher imaging resolution and is very cost-effective. Finally, unlike systems confined to laboratories and wards, our smartphone-based system can be used wherever and whenever a smartphone is available. It is reasonable to anticipate that our system and methods could be extremely useful in future dermatology and cosmetology applications, especially in low-resource regions.

Disclosures

The authors declare no conflicts of interest.

References

1. E. Zherebtsov, V. Dremin, A. Popov, A. Doronin, D. Kurakina, M. Kirillin, I. Meglinski, and A. Bykov, “Hyperspectral imaging of human skin aided by artificial neural networks,” Biomed. Opt. Express 10(7), 3545–3559 (2019). [CrossRef]  

2. A. Nkengne, J. Robic, P. Seroul, S. Gueheunneux, M. Jomier, and K. Vie, “SpectraCam®: A new polarized hyperspectral imaging system for repeatable and reproducible in vivo skin quantification of melanin, total hemoglobin, and oxygen saturation,” Skin Res. Technol. 24(1), 99–107 (2018). [CrossRef]  

3. D. Kapsokalyvas, N. Bruscino, D. Alfieri, V. de Giorgi, G. Cannarozzo, R. Cicchi, D. Massi, N. Pimpinelli, and F. S. Pavone, “Spectral morphological analysis of skin lesions with a polarization multispectral dermoscope,” Opt. Express 21(4), 4826–4840 (2013). [CrossRef]  

4. D. Jakovels, J. Spigulis, and I. Saknite, “Multi-spectral mapping of in vivo skin hemoglobin and melanin,” Proc. SPIE 7715, 77152Z (2010). [CrossRef]  

5. F. Vasefi, N. MacKinnon, R. Saager, K. M. Kelly, T. Maly, N. Booth, A. J. Durkin, and D. L. Farkas, “Separating melanin from hemodynamics in nevi using multimode hyperspectral dermoscopy and spatial frequency domain spectroscopy,” J. Biomed. Opt. 21(11), 114001 (2016). [CrossRef]  

6. R. Abdlaty, L. Doerwald-Munoz, A. Madooei, S. Sahli, S.-C. A. Yeh, J. Zerubia, R. K. W. Wong, J. E. Hayward, T. J. Farrell, and Q. Fang, “Hyperspectral Imaging and Classification for Grading Skin Erythema,” Front. Phys. 6, 72 (2018). [CrossRef]  

7. S. Kim, D. Cho, J. Kim, M. Kim, S. Youn, J. E. Jang, M. Je, D. H. Lee, B. Lee, and D. L. Farkas, “Smartphone-based multispectral imaging: system development and potential for mobile skin diagnosis,” Biomed. Opt. Express 7(12), 5294–5307 (2016). [CrossRef]  

8. N. Kröger, A. Egl, M. Engel, N. Gretz, K. Haase, I. Herpich, B. Kränzlin, S. Neudecker, A. Pucci, and A. Schönhals, “Quantum cascade laser–based hyperspectral imaging of biological tissue,” J. Biomed. Opt. 19(11), 111607 (2014). [CrossRef]  

9. X. Delpueyo, M. Vilaseca, S. Royo, M. Ares, L. Rey-Barroso, F. Sanabria, S. Puig, J. Malvehy, G. Pellacani, and F. Noguero, “Multispectral imaging system based on light-emitting diodes for the detection of melanomas and basal cell carcinomas: a pilot study,” J. Biomed. Opt. 22(6), 065006 (2017). [CrossRef]  

10. C. Mo, G. Kim, K. Lee, M. Kim, B.-K. Cho, J. Lim, and S. Kang, “Non-destructive quality evaluation of pepper (Capsicum annuum L.) seeds using LED-induced hyperspectral reflectance imaging,” Sensors 14(4), 7489–7504 (2014). [CrossRef]  

11. I. Diebele, I. Kuzmina, A. Lihachev, J. Kapostinsh, A. Derjabo, L. Valeine, and J. Spigulis, “Clinical evaluation of melanomas and common nevi by spectral imaging,” Biomed. Opt. Express 3(3), 467–472 (2012). [CrossRef]  

12. L. Gao, R. T. Smith, and T. S. Tkaczyk, “Snapshot hyperspectral retinal camera with the Image Mapping Spectrometer (IMS),” Biomed. Opt. Express 3(1), 48–54 (2012). [CrossRef]  

13. W. R. Johnson, D. W. Wilson, W. Fink, M. S. Humayun, and G. H. Bearman, “Snapshot hyperspectral imaging in ophthalmology,” J. Biomed. Opt. 12(1), 014036 (2007). [CrossRef]  

14. J. Kaluzny, H. Li, W. Liu, P. Nesper, J. Park, H. F. Zhang, and A. A. Fawzi, “Bayer filter snapshot hyperspectral fundus camera for human retinal imaging,” Curr. Eye Res. 42(4), 629–635 (2017). [CrossRef]  

15. B. Geelen, C. Blanch, P. Gonzalez, N. Tack, and A. Lambrechts, “A tiny VIS-NIR snapshot multispectral camera,” Proc. SPIE 9374, 937414 (2015). [CrossRef]  

16. P.-J. Lapray, J.-B. Thomas, and P. Gouton, “High dynamic range spectral imaging pipeline for multispectral filter array cameras,” Sensors 17(6), 1281 (2017). [CrossRef]  

17. J. Spigulis, I. Oshina, A. Berzina, and A. Bykov, “Smartphone snapshot mapping of skin chromophores under triple-wavelength laser illumination,” J. Biomed. Opt. 22(9), 091508 (2017). [CrossRef]  

18. L. Wang, P. C. Pedersen, D. M. Strong, B. Tulu, E. Agu, and R. Ignotz, “Smartphone-based wound assessment system for patients with diabetes,” IEEE Trans. Biomed. Eng. 62(2), 477–488 (2015). [CrossRef]  

19. E. Chao, C. K. Meenan, and L. K. Ferris, “Smartphone-based applications for skin monitoring and melanoma detection,” Dermatol. Clin. 35(4), 551–557 (2017). [CrossRef]  

20. H.-L. Shen, J. H. Xin, and S.-J. Shao, “Improved reflectance reconstruction for multispectral imaging by combining different techniques,” Opt. Express 15(9), 5531–5536 (2007). [CrossRef]  

21. H.-L. Shen and J. H. Xin, “Spectral characterization of a color scanner based on optimized adaptive estimation,” J. Opt. Soc. Am. A 23(7), 1566–1569 (2006). [CrossRef]  

22. S. Chen and Q. Liu, “Modified Wiener estimation of diffuse reflectance spectra from RGB values by the synthesis of new colors for tissue measurements,” J. Biomed. Opt. 17(3), 030501 (2012). [CrossRef]  

23. I. Nishidate, T. Maeda, K. Niizeki, and Y. Aizu, “Estimation of melanin and hemoglobin using spectral reflectance images reconstructed from a digital RGB image by the Wiener estimation method,” Sensors 13(6), 7902–7915 (2013). [CrossRef]  

24. S. Chen, G. Wang, X. Cui, and Q. Liu, “Stepwise method based on Wiener estimation for spectral reconstruction in spectroscopic Raman imaging,” Opt. Express 25(2), 1005–1018 (2017). [CrossRef]  

25. D. K. Spierer, Z. Rosen, L. L. Litman, and K. Fujii, “Validation of photoplethysmography as a method to detect heart rate during rest and exercise,” J. Med. Eng. Technol. 39(5), 264–271 (2015). [CrossRef]  

26. J. H. G. M. Klaessens, M. Nelisse, R. M. Verdaasdonk, and H. J. Noordmans, “Non-contact tissue perfusion and oxygenation imaging using a LED based multispectral and a thermal imaging system, first results of clinical intervention studies,” Proc. SPIE 8572, 857207 (2013). [CrossRef]  

27. K. Peters, B. Colebunders, S. Brondeel, S. D’Arpa, and S. Monstrey, “The foot fillet flap for ischial pressure sore reconstruction: A new indication,” J. Plast. Reconstr. Aes. 71(11), 1664–1678 (2018). [CrossRef]  

28. S. Liu, J. M. Hempe, R. J. McCarter, S. Li, and V. A. Fonseca, “Association between inflammation and biological variation in hemoglobin A1c in US nondiabetic adults,” J. Clin. Endocrinol. Metab. 100(6), 2364–2371 (2015). [CrossRef]  

29. A. R. Matias, M. Ferreira, P. Costa, and P. Neto, “Skin colour, skin redness and melanin biometric measurements: comparison study between Antera® 3D, Mexameter® and Colorimeter®,” Skin Res. Technol. 21(3), 346–362 (2015). [CrossRef]  

30. J. C. Furlan, J. Fang, and F. L. Silver, “Acute ischemic stroke and abnormal blood hemoglobin concentration,” Acta Neurol. Scand. 134(2), 123–130 (2016). [CrossRef]  

31. S. Majewski, C. Carneiro, E. Ibler, P. Boor, G. Tran, M. C. Martini, S. Di Loro, A. W. Rademaker, D. P. West, and B. Nardone, “Digital dermoscopy to determine skin melanin index as an objective indicator of skin pigmentation,” J. Surg. Dermatol. 1(1), 37–42 (2016). [CrossRef]  

32. T. H. Nasti and L. Timares, “MC1R, eumelanin and pheomelanin: their role in determining the susceptibility to skin cancer,” Photochem. Photobiol. 91(1), 188–200 (2015). [CrossRef]  

33. G. Zonios, A. Dimou, I. Bassukas, D. Galaris, A. Tsolakidis, and E. Kaxiras, “Melanin absorption spectroscopy: new method for noninvasive skin investigation and melanoma detection,” J. Biomed. Opt. 13(1), 014017 (2008). [CrossRef]  

34. E. V. Salomatina, B. Jiang, J. Novak, and A. N. Yaroslavsky, “Optical properties of normal and cancerous human skin in the visible and near-infrared spectral range,” J. Biomed. Opt. 11(6), 064026 (2006). [CrossRef]  

35. I. Stoffels, S. Morscher, I. Helfrich, U. Hillen, J. Leyh, N. C. Burton, T. C. P. Sardella, J. Claussen, T. D. Poeppel, and H. S. Bachmann, “Metastatic status of sentinel lymph nodes in melanoma determined noninvasively with multispectral optoacoustic imaging,” Sci. Transl. Med. 7(317), 317ra199 (2015). [CrossRef]  

36. F. Linming, H. Wei, L. Anqi, C. Yuanyu, X. Heng, P. Sushmita, L. Yiming, and L. Li, “Comparison of two skin imaging analysis instruments: the VISIA® from Canfield vs the ANTERA 3D® CS from Miravex,” Skin Res. Technol. 24(1), 3–8 (2018). [CrossRef]  

37. Y. Takahashi, Y. Fukushima, K. Kondo, and M. Ichihashi, “Facial skin photo-aging and development of hyperpigmented spots from children to middle-aged Japanese woman,” Skin Res. Technol. 23(4), 613–618 (2017). [CrossRef]  

Figures (7)

Fig. 1.
Fig. 1. (a) Schematic of the hyperspectral reconstruction calibration system that consists of a smartphone, color chart and 16-channel hyperspectral camera, with sensor structure and sensitive wavebands at each subchannel shown in the top left. (b) The manufacturer’s data of wavelength-dependent sensitivity for 16 bands in the snapshot hyperspectral camera. (c) The RGB image of color chart from the smartphone camera. (d) The raw image of color chart directly exported from the band 9 in the snapshot hyperspectral camera. (e) Spectral power distribution of the smartphone flashlight that is used in this study. (f) Absorption spectra of oxyhemoglobin (oxyHb), deoxyhemoglobin (deoxyHb) and melanin.
Fig. 2.
Fig. 2. Comparison of the reflectance in 16 wavebands of hyperspectral images reconstructed from RGB images with the initial hyperspectral images from the snapshot hyperspectral camera, for three representative color blocks. The relative errors between the initial and reconstructed hyperspectral reflectance of the color blocks on the left, middle and right are 10.95% (the maximum reconstruction error), 4.946% (close to the mean error) and 0.424% (the minimum error), respectively. Black box symbols represent the reflectance from the initial hyperspectral images; red circle symbols correspond to the reflectance of the reconstructed hyperspectral images. The inset of each panel is the RGB image of the selected color block.
Fig. 3.
Fig. 3. Image acquisition and extraction of blood and melanin absorption information content from hyperspectral reconstruction with the RGB image from a smartphone camera, with illumination from the built-in flashlight. (a) Photograph of image acquisition with the smartphone camera and built-in flashlight. (b) Initial RGB-mode image of the facial skin captured by the smartphone camera. (c) Blood absorption information map. (d) Melanin absorption information map. Red arrow: skin redness; black arrow: moles. Blood and melanin absorption maps are coded according to the color bar shown on the right.
Fig. 4.
Fig. 4. Comparison of the imaging performance between the snapshot hyperspectral camera and smartphone-based hyperspectral reconstruction. (a) Raw image of the facial skin with moles from band 9 of the snapshot hyperspectral camera. (b) Extracted melanin absorption map from the snapshot hyperspectral camera. (c) Zoomed-in view of the left white box area of (b). (d) Zoomed-in view of the right white box area of (b). (e) Raw RGB image of the same facial skin captured by the smartphone camera. (f) Extracted melanin absorption map from the smartphone camera. (g) Zoomed-in view of the left white box area of (f). (h) Zoomed-in view of the right white box area of (f). Melanin absorption maps are coded according to the color bar shown on the right.

Tables (2)


Table 1. The spectral characterization of snapshot hyperspectral camera


Table 2. Relative errors between initial and reconstructed hyperspectral reflectance of 100 color blocks

Equations (11)

\[ v_C = \int l(\lambda)\,\gamma(\lambda)\,f_C(\lambda)\,s(\lambda)\,d\lambda = \int m_C(\lambda)\,\gamma(\lambda)\,d\lambda \]
\[ \mathbf{v} = \mathbf{M}\boldsymbol{\gamma} \]
\[ \tilde{\boldsymbol{\gamma}} = \mathbf{W}\mathbf{v} \]
\[ e = \left\langle (\boldsymbol{\gamma}-\tilde{\boldsymbol{\gamma}})^{t}(\boldsymbol{\gamma}-\tilde{\boldsymbol{\gamma}}) \right\rangle = \langle \boldsymbol{\gamma}^{t}\boldsymbol{\gamma} \rangle - \langle \boldsymbol{\gamma}^{t}\mathbf{W}\mathbf{v} \rangle - \langle \mathbf{v}^{t}\mathbf{W}^{t}\boldsymbol{\gamma} \rangle + \langle \mathbf{v}^{t}\mathbf{W}^{t}\mathbf{W}\mathbf{v} \rangle \]
\[ \frac{\partial e}{\partial \mathbf{W}} = -2\,\langle \boldsymbol{\gamma}\mathbf{v}^{t} \rangle + 2\,\mathbf{W}\,\langle \mathbf{v}\mathbf{v}^{t} \rangle = 0 \]
\[ \mathbf{W} = \langle \boldsymbol{\gamma}\mathbf{v}^{t} \rangle \, \langle \mathbf{v}\mathbf{v}^{t} \rangle^{-1} \]
\[ C_r = C_1 - K C_2 = m x_1 l_1 + n y_1 l_1 - K (m x_2 l_2 + n y_2 l_2) = m (x_1 l_1 - K x_2 l_2) + n (y_1 l_1 - K y_2 l_2) \]
\[ C_i = m_{oxy}\, x_i^{oxy} + m_{deo}\, x_i^{deo} + \alpha \]
\[ \mathrm{SaO_2} = m_{oxy} / (m_{oxy} + m_{deo}) \]
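For reference, the closed-form Wiener matrix W = ⟨γv^t⟩⟨vv^t⟩⁻¹ coincides with the ordinary least-squares fit over the training set. A quick numerical check on synthetic data (shapes and noise level are assumptions, not values from the study):

```python
import numpy as np

# The Wiener matrix minimizes e = <(gamma - W v)^t (gamma - W v)>, so it
# must agree with the least-squares solution computed directly from the data.
rng = np.random.default_rng(2)
v = rng.normal(size=(3, 200))                       # RGB training responses
gamma = rng.normal(size=(16, 3)) @ v \
        + 0.01 * rng.normal(size=(16, 200))         # noisy reference spectra

W_wiener = (gamma @ v.T) @ np.linalg.inv(v @ v.T)   # closed-form Wiener matrix
# Least-squares solution of v^t W^t = gamma^t, transposed back to W
W_lstsq = np.linalg.lstsq(v.T, gamma.T, rcond=None)[0].T
print(np.allclose(W_wiener, W_lstsq))  # True
```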