Multispectral imaging (MSI) is widely used in terrestrial applications to increase the discriminability between objects of interest. While MSI has shown potential for underwater geological and biological surveys, it has so far rarely been applied underwater. This is primarily because light propagation in water is subject to wavelength-dependent attenuation, and because working conditions in the deep ocean are harsh. In this paper, a novel underwater MSI system based on a tunable light source is presented, which combines a monochrome still-image camera with flashing, pressure-neutral color LEDs. Laboratory experiments and field tests were performed. Results from the lab experiments show an improvement of 76.66% in discriminating colors on a checkerboard when using the proposed imaging system instead of an RGB camera. The field tests provided in situ MSI observations of pelagic fauna and gave the first evidence that the system is capable of acquiring useful imagery under real marine conditions.
© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
Multispectral and hyperspectral imaging are important detection methods that acquire a 3D spatial-spectral data cube containing both spatial and spectral information of a scene. They are widely used in remote sensing for mineral recognition, in water quality monitoring, in agriculture, and in the food industry. Spectral imaging is also a promising method for deep-sea optical surveying, where a spectral imager attached to an underwater camera platform (towed structure, remotely operated vehicle - ROV, autonomous underwater vehicle - AUV) captures detailed inherent reflectance spectra of benthic targets for efficient discrimination and classification [5–11].
Optical spectrum acquisition techniques have developed rapidly. Spectrometers measure the spectrum of a single point with high spectral resolution. Full-sampling (scanning) spectral imaging systems provide both spectral and spatial resolution by using a 2D image sensor to sample the spectral-spatial data cube over the temporal domain. Undersampling (snapshot) systems based on compressive sensing theory can modulate, capture, and reconstruct the spectral-spatial data cube from fewer snapshots [12,13] or even from a single shot.
For terrestrial applications, the light intensity loss and spectral distortion over a short distance of air (or vacuum) are negligible. In remote sensing (long distances), the atmosphere affects light propagation mainly through aerosol scattering and water vapor absorption. In dry and clear conditions, atmospheric effects can be well compensated by statistical and physics-based algorithms.
In contrast, when light passes through water, absorption is more significant than in air and depends on wavelength. In addition to the water itself, light attenuation in natural water is affected by phytoplankton containing different pigments (chlorophylls, carotenoids and phycobilipigments), suspended particles (living organic particles, detrital organic matter, and inorganic particles), colored dissolved organic matter (CDOM) and bubbles. The variable composition and concentration of oceanic constituents result in temporal and spatial variations of the optical properties of natural water.
Furthermore, in the deep ocean, durable optical systems with a compact structure and an optically efficient design are preferred, considering tough working conditions such as shaking platforms, restricted load and space, absence of sunlight, limited electrical power, and lack of computational power.
In underwater applications, RGB imaging and point-based spectroscopy are widely used, while underwater multispectral and hyperspectral imaging techniques are rarely applied. A modified RGB imaging system has been used in an underwater optical survey to recover real color information in natural water. Reflectance or fluorescence spectra are collected by a wide field-of-view (FOV) spectrometer, which has proven valuable for studies of coral genera [18–20] and coral health condition. A combination of cameras and spectrometers on an AUV has been proposed recently to map the seafloor by mosaicking averaged spectra in the FOV of the spectrometer along the dive track.
Existing underwater multispectral and hyperspectral imaging systems mainly capture the spatial-spectral data cube through the “push-broom” spatial-scanning method and the filter-based spectral-scanning method. The “push-broom” method scans one spatial dimension with a narrow slit FOV and spans the spectral dimension using a dispersive optical device (prism or diffraction grating). Images taken by the “push-broom” method feature high spectral resolution and do not need to be registered along wavelengths, but need to be merged spatially. Underwater hyperspectral imaging (UHI) systems based on the “push-broom” method have been deployed by divers or on AUVs and ROVs for effective seafloor mapping with both spatial and spectral resolution [5–10]. Pettersen et al. (2014) studied the inverse relationship between absorption spectra and reflected spectra of pigments, and effectively used a UHI system for bio-optical taxonomic identification. Letnes et al. (2017) used a UHI system to monitor and classify cold-water corals in different living conditions.
The spectral-scanning method stacks up a sequence of monochrome images of the scene at different wavelengths specified by narrow-band filters in front of the sensor. In order to scan more wavelengths in less time, motorized filter wheels and solid-state tunable filters (e.g., LCTF and AOTF) have been introduced. Gleason et al. studied coral reefs with a six-band filter-wheel underwater multispectral camera and reported an improvement in automated coral classification when narrow spectral band ratios were combined with image texture measures. The alignment of images taken at different wavelengths is necessary, and the number of channels is limited by the switching time and the size of the filter wheel (e.g., more filters require a longer acquisition time and a larger filter wheel). As an alternative to the filter-based approach, a tunable light source (e.g., a color LED array [28,29]) can be used to specify the wavelength, as shown e.g. in medical research [30,31]. By using a tunable light source and removing the (optical) filters, the camera system is further simplified and the optical efficiency is improved. However, this technique has not been implemented in underwater applications yet.
In this paper, we present a tunable-LED-based underwater multispectral imaging system (TuLUMIS) that synchronizes a monochrome camera with flashing, pressure-neutral color LEDs. As a scanning method, it provides the 2D spatial light intensity distribution of the scene in a sequence of eight spectral channels within a short period (RGB cameras provide only three channels, i.e., red, green and blue). It acquires a finer-resolved spectral reflectance than the RGB method, and features a durable and simpler structure as well as a lower cost compared to existing underwater multispectral imaging techniques.
The main contribution of this paper is the description of the TuLUMIS system. Additionally, we present a quantitative method to compare the usefulness of different spectral imaging methods. It is based on a normalization criterion and can compare data sets of different dimensionality. It is hence able to compare data of different spectral resolution, and is thus more generally applicable than methods that rank spectral imaging methods of the same dimensionality [20,32]. We use this discrimination method to compare results obtained by TuLUMIS to RGB imaging and point-based spectroscopic measurements.
The paper describes the principles of spectral imaging and the proposed discrimination criteria with data analysis algorithms in Section 2. The composition and specification of TuLUMIS are presented in Section 3, and lab experiments are described in Section 4, followed by the presentation of results in Section 5. The field test conducted during a cruise to the Atlantic Ocean is also briefly described in Section 5. Dimensionality reduction methods for the data analysis and potential improvements of TuLUMIS are discussed in Section 6, with conclusions drawn in Section 7.
2. Principles of spectral imaging
2.1. Acquisition of spectral signatures
According to the underwater imaging model and the Beer-Lambert law, the intensity detected by a gray-value camera at each pixel can be represented by

I = ∫λc Is(λ) R(λ) C(λ) e^(−α(λ)(d1+d2)) dλ, (1)

where
- I = intensity detected by the camera
- Is(λ) = spectral radiance of the light source
- R(λ) = spectral reflectance at the surface of the target to be studied
- C(λ) = spectral response of the camera
- α(λ) = wavelength-dependent attenuation coefficient of the water medium
- d1, d2 = the distance from the light source to the target, and the distance from the target to the camera, respectively
- λc = spectral range of the camera
- dλ = the differential of the variable λ
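As a numerical illustration of the underwater imaging model, the detected intensity can be evaluated by trapezoidal integration over the camera's spectral range. All spectral curves below are made-up placeholders, not measured TuLUMIS data.

```python
import numpy as np

def detected_intensity(wl, I_s, R, C, alpha, d1, d2):
    """Trapezoidal integration of Is(λ) R(λ) C(λ) exp(-α(λ)(d1+d2)) over λ."""
    f = I_s * R * C * np.exp(-alpha * (d1 + d2))
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(wl)))

wl = np.linspace(400e-9, 700e-9, 301)          # visible band [m]
I_s = np.exp(-((wl - 550e-9) / 50e-9) ** 2)    # toy broadband source, peak at 550 nm
R = np.full_like(wl, 0.5)                      # flat 50% target reflectance
C = np.ones_like(wl)                           # idealized flat camera response
alpha = 0.05 + 1e14 * (wl - 450e-9) ** 2       # toy attenuation, minimum near 450 nm

I_water = detected_intensity(wl, I_s, R, C, alpha, d1=1.0, d2=1.0)
I_free = detected_intensity(wl, I_s, R, C, np.zeros_like(wl), 0.0, 0.0)
# attenuation over the path d1 + d2 strictly reduces the detected intensity
```

The wavelength dependence of α is what makes recovering R(λ) from I non-trivial and motivates the white-reference correction described next.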
An effective spectrum analysis is based on an accurate estimate of the surface spectral reflectance. It is therefore necessary to recover the reflectance at the surface of the target from the radiance detected at the sensor, removing the effects of illumination and the medium before subsequent analysis.
As a reference, a white board with uniform spectral reflectance across all visible light bands is used to correct the spectrally distorted images. For the white-board area in the images, the reflectance R(λ) is constant in Eq. (1). The relative reflectance of other pixels can be obtained by dividing by the intensity of the white pixels in each channel. Subsequently, each spatial pixel is described by a spectral vector s = [s1, s2, . . ., sn]T with n relative reflectance values from different channels; the vector s ∈ ℝn is known as the spectral signature at that pixel.
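The white-reference correction described above can be sketched as follows, using a toy two-channel cube with synthetic values; the array shapes and names are illustrative, not from the paper's implementation.

```python
import numpy as np

def relative_reflectance(cube, white_region):
    """cube: (n_channels, H, W) raw intensities; white_region: boolean (H, W) mask.
    Divides each channel by its mean white-board intensity, cancelling the
    illumination, camera response and attenuation terms of Eq. (1)."""
    out = np.empty(cube.shape, dtype=float)
    for k, channel in enumerate(cube):
        white = channel[white_region].mean()   # reference intensity in channel k
        out[k] = channel / white
    return out

# Toy 2-channel, 2x2 example: the left column is the white board.
cube = np.array([[[100.0, 50.0], [100.0, 25.0]],
                 [[200.0, 150.0], [200.0, 50.0]]])
mask = np.array([[True, False], [True, False]])
refl = relative_reflectance(cube, mask)
# the spectral signature at pixel (0, 1) is refl[:, 0, 1] = [0.5, 0.75]
```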
2.2. Discrimination measures of spectral signatures
For the analysis of multispectral and hyperspectral signatures, the spectral angle mapper (SAM) and the spectral information divergence (SID) are two discrimination measures that are widely and effectively used [20,33]. In this study, the similarity of the spectral signatures extracted from two different samples is estimated by both SAM and SID.
SAM calculates the spectral angle θ between two signature vectors s1 and s2 in the n-dimensional space as

θ(s1, s2) = arccos( s1 · s2 / (‖s1‖ ‖s2‖) ). (2)
SID is a measure based on information theory. It evaluates the probability vector of a signature s, which is defined as

p = [p1, p2, . . ., pn]T with pi = si / (s1 + s2 + . . . + sn), (3)

and measures the dissimilarity of two signatures s1 and s2 with probability vectors p and q as the symmetric relative entropy

SID(s1, s2) = Σi pi log(pi/qi) + Σi qi log(qi/pi). (4)
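The two measures can be sketched in a few lines; these follow the standard SAM and SID formulations from the spectral imaging literature, not code from the paper.

```python
import numpy as np

def sam(s1, s2):
    """Spectral angle (radians) between two signature vectors, Eq. (2)."""
    cos = np.dot(s1, s2) / (np.linalg.norm(s1) * np.linalg.norm(s2))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))  # clip guards rounding

def sid(s1, s2):
    """Spectral information divergence: symmetric KL divergence of the
    probability vectors p_i = s_i / sum(s); requires positive entries."""
    p = s1 / np.sum(s1)
    q = s2 / np.sum(s2)
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))
```

SAM is insensitive to overall brightness (scaling a signature leaves the angle unchanged), while SID compares the shape of the normalized spectra as probability distributions.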
2.3. Comparison of spectral dissimilarity among different dimensions
To quantify the discrimination ability of TuLUMIS on spectrally different objects, a comparison with respect to SAM and SID is made among three methods: images taken by a general RGB color camera illuminated by a white light source (RGB), images taken by the TuLUMIS multispectral camera (MS), and spectral reflectance measured by a spectrometer with 709 channels covering the visible spectrum from 400 nm to 700 nm (SP).
Since the direct comparison of SAM and SID among feature spaces of different dimensions (s ∈ ℝ3 for the RGB method, s ∈ ℝ8 for the MS method, and s ∈ ℝ709 for the SP method) is not appropriate, principal component analysis (PCA) is used to convert spectral signatures from the ℝ8 and ℝ709 spaces into the ℝ3 space before applying SAM or SID. The dimensionality reduction is discussed in detail in Section 6.1.
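The PCA projection to ℝ3 can be sketched with a plain SVD; this is an illustrative implementation, not the paper's code, and the random signatures stand in for real SP-method data.

```python
import numpy as np

def pca_project(X, n_components=3):
    """X: (n_samples, n_channels). Returns the scores (n_samples, n_components)
    on the leading principal components."""
    Xc = X - X.mean(axis=0)                        # center the signatures
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T                # project onto top components

rng = np.random.default_rng(0)
spectra_709 = rng.random((40, 709))                # e.g. SP-method signatures
scores = pca_project(spectra_709, 3)               # now comparable in R^3
```

After projection, the ℝ8 (MS) and ℝ709 (SP) signatures live in the same three-dimensional space as the RGB signatures, so the SAM and SID measures can be compared on an equal footing.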
In Fig. 1, the comparison between two objects is shown as an example. Suppose there are m1 samples [C11, C12, . . ., C1m1] taken from object C1 and m2 samples [C21, C22, . . ., C2m2] taken from object C2. For each of these m1 + m2 samples, all three spectral imaging methods are applied, and the per-sample spectral signatures, denoted as c11, c12, . . ., c1m1 and c21, c22, . . ., c2m2, are obtained by averaging the signatures of all pixels forming the sample. A matrix as in Fig. 1 can be built to show the similarity between each pair of samples by using the measures SAM and SID as introduced in Eq. (2) and Eq. (4), respectively.
Ideally, each object is characterized by a unique deterministic spectrum, i.e., all samples of the same object have the same signature. However, in practice the assumption of an exact spectral signature does not always hold, due to mixed-pixel interference and inherent spectral variability (i.e., inhomogeneous surface composition, media condition, sensor noise, etc.). In Fig. 1, the 9-by-9 matrix can be divided into sub-matrices: the between-class measures Mij and the within-class measures C1ij and C2ij.
In order to find the dissimilarity between the spectral signatures of two different objects, each measure of a pair of between-class samples (i.e., Mij) is divided by the average of all corresponding within-class measures (i.e., C1ij and C2ij):

Δ(Γ)ij = Mij / C̄, with C̄ the average of all within-class measures C1ij and C2ij, and Γ ∈ {SAM, SID}. (11)
Hence, Δ(SAM) and Δ(SID) denote the normalized between-class SAM and SID dissimilarities, respectively. The results of Δ(Γ) can then be compared among the three spectral imaging methods RGB, MS, and SP. This allows evaluating their discrimination ability on two objects; a smaller Δ(Γ) indicates a higher similarity.
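The normalization can be sketched as follows. We assume the denominator is the pooled mean of all within-class measures of the two objects; the function names and toy signatures are illustrative.

```python
import numpy as np

def sam(s1, s2):
    """Spectral angle (radians) between two signature vectors."""
    cos = np.dot(s1, s2) / (np.linalg.norm(s1) * np.linalg.norm(s2))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def normalized_between_class(sig1, sig2, measure):
    """sig1: (m1, n), sig2: (m2, n) sample signatures of two objects.
    Returns the (m1, m2) matrix of between-class measures divided by the
    pooled mean within-class measure (a sketch of Δ(Γ))."""
    within = [measure(a, b) for S in (sig1, sig2)
              for i, a in enumerate(S) for b in S[i + 1:]]
    between = np.array([[measure(a, b) for b in sig2] for a in sig1])
    return between / np.mean(within)

# Two tightly clustered, well-separated toy objects:
sig1 = np.array([[1.0, 0.01, 0.0], [1.0, 0.02, 0.0], [1.0, 0.0, 0.01]])
sig2 = np.array([[0.01, 1.0, 0.0], [0.0, 1.0, 0.02], [0.02, 1.0, 0.0]])
delta = normalized_between_class(sig1, sig2, sam)
# all entries of delta are far above 1: the objects are easy to discriminate
```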
3. Setup of TuLUMIS
3.1. Hardware description
As shown in Fig. 2, TuLUMIS consists of a 12-bit industrial monochrome camera (acA2040-25gmNIR, Basler, Germany) with an Apo-Xenoplan 2.0/24 lens (Schneider, Germany), custom color LEDs (LUXEON Z, Lumileds, Netherlands) with drivers (CAM-V2, PCB Components, Germany), and periphery components for power supply (TEN 60-2412 DC/DC Converters, Traco Power, Switzerland), synchronization (Arduino Micro, Italy), controlling (NUC6i5SYB, Intel, USA), data storing (SSD 850 EVO 1TB, Samsung, Korea), and power/signal transmission through underwater cables (SubConn Micro Circular, MacArtney, Denmark).
In order to deploy TuLUMIS in the deep ocean, a titanium housing (Develogic, Germany) is used to protect the camera and the control system from water and high pressure. On one end of the housing, a flat sapphire glass port is assembled as the window for the camera.
3.2. Synchronization strategy and software
An Arduino Micro board is programmed to send external trigger signals to synchronize the flashing of the LEDs and the acquisition of the camera. Several integrated circuit components (hex Schmitt-trigger inverters SN74HCT14 and 3-line to 8-line decoders SN74HCT138, Texas Instruments, USA) are used to achieve fast switching among the LEDs. Control software written in C++ based on the camera’s SDK (Software Development Kit) runs on the camera control computer (NUC, Intel, USA) with Ubuntu 16.04 as the operating system. The software is used to adjust parameters for image acquisition and data storage. All images are stored as 16-bit uncompressed TIFF files. In order to preprocess and display the multispectral images, an additional program was written in C++ using the OpenCV library (version 3.3.0) and Qt (version 5.9).
3.3. Pressure-neutral LEDs
TuLUMIS has sixteen colored LEDs to illuminate a target at eight respective wavelengths covering the visible spectrum from 400 nm to 700 nm. As shown in Fig. 2, the LEDs are mounted on a metal printed circuit board (PCB) and are cast in highly transparent polyurethane. The polyurethane forms thin walls that transfer heat efficiently to the surrounding water. A reflector is also fitted to each LED. The LED itself, as a solid semiconductor, can be exposed to the water-depth-dependent pressure. The pressure-neutral cast LEDs, combined with likewise pressure-neutral cast drivers, form low-cost, lightweight and corrosion-resistant light source units rated to 6000 m.
The nominal central wavelengths of the LEDs are 405 nm, 450 nm, 500 nm, 530 nm, 565 nm, 590 nm, 615 nm, and 660 nm, tested at 500 mA and 25°C by the manufacturer. The relative radiance of each LED driven at 700 mA was measured using a spectrometer (FLAME-S, Ocean Optics, USA) with a cosine corrector (CC-3-UV, Ocean Optics, USA). The relative radiances of the LEDs and the spectral response curve of the camera (provided by the manufacturer) are plotted in Fig. 3. The relative energy distribution of the light source was calculated by integrating each LED’s spectral radiance with the camera’s response. The spectral coverage of each LED can be indicated by the full width at half maximum (FWHM) of the corresponding LED’s spectral radiance.
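Given a measured radiance curve, the FWHM can be computed by locating the two half-maximum crossings; the Gaussian below stands in for real LED data, and the function name is illustrative.

```python
import numpy as np

def fwhm(wl, radiance):
    """Linearly interpolated width where radiance crosses half its maximum."""
    half = radiance.max() / 2.0
    above = np.where(radiance >= half)[0]
    lo, hi = above[0], above[-1]

    def cross(i0, i1):
        # interpolate the half-max crossing between samples i0 and i1
        w0, w1, r0, r1 = wl[i0], wl[i1], radiance[i0], radiance[i1]
        return w0 + (half - r0) * (w1 - w0) / (r1 - r0)

    left = cross(lo - 1, lo) if lo > 0 else wl[0]
    right = cross(hi, hi + 1) if hi < len(wl) - 1 else wl[-1]
    return right - left

wl = np.linspace(380.0, 720.0, 3401)               # nm, 0.1 nm steps
sigma = 12.0
led = np.exp(-0.5 * ((wl - 565.0) / sigma) ** 2)   # toy 565 nm LED spectrum
width = fwhm(wl, led)                              # ~2.355 * sigma for a Gaussian
```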
4. Lab experiments
Intuitively, detecting an object with more and narrower spectral channels yields finer information on the spectrum and consequently improves the discrimination ability. In order to derive a quantitative comparison between the RGB method (three channels), the MS method (eight channels), and the SP method (709 channels) with respect to the normalized between-class SAM and SID dissimilarities defined in Section 2.3, lab experiments were conducted under controlled environmental conditions. In the experiments, a custom checkerboard with Macbeth colors, illustrated in Fig. 5, was used as the common target for the three methods.
For the RGB method, an RGB camera (ILCE-7SM2, Sony, Japan) was used with a white LED light source (BXRA-56C5300-H, Bridgelux, USA). The white LED is manufactured by covering a blue LED with a conversion layer; its spectrum is not as even as sunlight. Such LEDs are widely used underwater as artificial light sources because of their high energy efficiency and compactness. A water tank filled with tap water was used to conduct the experiments. As shown in Fig. 4, the distance from the light source to the target, and from the target to the camera, is 1 m. It is necessary to transform raw camera RGB colors to a device-independent space because the color appearance underwater varies with different devices. The DCRaw software written by David Coffin was used to convert “ARW” format raw data files in the Bayer mosaic pattern to “tiff” format three-channel color images for subsequent processing. For this conversion, the settings “-v -w -o 0 -4 -T” were used on the command line to output 16-bit “tiff” files with the original camera white balance and no other modification (i.e., no color space designation, no gamma correction and no automatic brightening).
For the MS method, TuLUMIS has been used in the same water tank. For the SP method, a spectrometer (FLAME-S, Ocean Optics, USA) with a Y-shaped fiber-optic probe has been used in the tank, with the distance between the probe and the board surface fixed at 3 mm.
The spectral signatures for the RGB method and the MS method are extracted through the preprocessing shown in Fig. 5. First, the raw images are cropped to the area of interest which is the color checkerboard area as shown in Fig. 5(a).
Then the cropped images are segmented to create a mask of all color blocks and a mask of all white blocks as shown in Fig. 5(a). Each selected square is cropped to 80% of the corresponding color block width (64% of the color block area).
After that, the white background (i.e., the spatial illuminance distribution) of the image in each individual channel is estimated by a third-order polynomial fit to the pixels in the white blocks. The white blocks are extracted from the image using the aforementioned white mask.
The color blocks are then divided by the estimated white background illumination, as shown in Fig. 5(b). This alleviates the effect of a non-uniform illuminance distribution and allows the relative reflectance shown in Fig. 5(c) to be calculated.
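The background estimation step can be sketched as follows: a third-order 2D polynomial is least-squares fitted to the pixels inside the white blocks, and the whole channel is then divided by the fitted surface. The data and function names below are synthetic and illustrative.

```python
import numpy as np

def fit_background(channel, white_mask, order=3):
    """Fit a 2D polynomial of total degree `order` to the white-block pixels
    and evaluate it over the full image."""
    H, W = channel.shape
    yy, xx = np.mgrid[0:H, 0:W]
    x, y = xx / W, yy / H                         # normalized coordinates
    terms = [x**i * y**j for i in range(order + 1)
             for j in range(order + 1 - i)]       # all monomials up to degree 3
    A = np.stack([t[white_mask] for t in terms], axis=1)
    coef, *_ = np.linalg.lstsq(A, channel[white_mask], rcond=None)
    return sum(c * t for c, t in zip(coef, terms))

# Synthetic smooth vignetting over a 64x64 channel; white blocks on the border.
H = W = 64
yy, xx = np.mgrid[0:H, 0:W]
truth = 1.0 - 0.5 * (xx / W) ** 2                 # illumination falloff in x
mask = np.zeros((H, W), bool)
mask[:8, :] = mask[-8:, :] = True                 # top/bottom "white" rows
bg = fit_background(truth.copy(), mask)
# the quadratic falloff lies in the cubic basis, so the fit recovers it exactly
```

Dividing the raw channel by `bg` then yields the relative reflectance of Fig. 5(c).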
In each selected color block, ten samples are collected on the diagonal, as shown in Fig. 5(d). The spectral reflectance of each sample is the average spectral reflectance of all pixels in the sampled area. Each color is assigned an index (arranged according to its hue). In Fig. 5(e), a color bar is created as a reference, where each color is calculated by averaging all pixels in the corresponding ten collected sample areas. All colors are transformed according to the ICC profile of SonyA7SM2-Generic for visualization purposes only.
5. Experimental results
5.1. Results of lab experiments
The discrimination abilities of the RGB method, the MS method and the SP method are assessed based on the spectral data collected in the experiments described above. The spectral acquisition capabilities of the three techniques are illustrated by overlaying their captured spectral reflectance curves for all 33 color panels on the checkerboard. As plotted in Fig. 6, the RGB method uses three values to roughly estimate the reflectance spectrum, while the SP method measures the detailed spectrum in 709 spectral channels. The MS method with eight channels acquires a finer-resolved spectrum than the RGB method and provides spectral signatures comparable to those of the SP method.
In total, 330 samples (33 color blocks, 10 samples per block) were collected from the color checkerboard. A 330-by-330 symmetric matrix was built to show the dissimilarity between each pair of samples (shown in Fig. 7): 3,300 elements (including the 330 on the main diagonal) represent within-class measures, and the remaining 105,600 (= 330 × 330 − 3,300) elements are between-class measures. Because of symmetry, only the 52,800 between-class elements in the upper triangular part are considered.
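The element bookkeeping above can be verified in a few lines:

```python
# 33 color blocks with 10 samples each
n_classes, per_class = 33, 10
n = n_classes * per_class                      # 330 samples
total = n * n                                  # 108,900 matrix elements
within = n_classes * per_class * per_class     # 3,300 within-class (diagonal included)
between = total - within                       # 105,600 between-class elements
upper_between = between // 2                   # 52,800 in the upper triangle
```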
The first row in Fig. 7 shows the results of SAM for the RGB method, the MS method and the SP method calculated by Eq. (2). In general, the results of the SP method feature the smallest within-class dissimilarity and the largest between-class dissimilarity, and the results of the RGB method feature the smallest between-class dissimilarity. The results of the MS method fall between the two other methods. For color blocks No. 25 to No. 33, which are different shades of gray, the samples can be distinguished more easily by using the MS method over the use of the RGB method; the SP method has the best discriminative ability among the three methods.
The second row in Fig. 7 shows the results of SID of the three methods calculated by Eq. (4). They are consistent with the results of SAM; the results of the SP method feature the smallest within-class dissimilarity and the largest between-class dissimilarity, but the results of the RGB method and the MS method are more complex. For color blocks No. 25 to No. 33, the same conclusion can be drawn as for the SAM results.
From the SAM results, the matrix of the normalized between-class SAM, Δ(SAM), can be calculated using Eq. (11). The matrix in Fig. 8(a) shows the difference obtained by subtracting Δ(SAM) of the RGB method from Δ(SAM) of the MS method; for most pairs of samples, Δ(SAM) is increased. Histograms of the corresponding matrices are shown in Fig. 8(b) and Fig. 8(d). Figure 8(b) shows that for blue color blocks, the spectral discrimination ability of TuLUMIS is not always better than that of the RGB camera. The matrix in Fig. 8(c) shows the difference obtained by subtracting Δ(SAM) of the RGB method from Δ(SAM) of the SP method; for almost all pairs of samples, Δ(SAM) is increased. The color composition of the histogram shown in Fig. 8(d) indicates that with the SP method, the discrimination abilities for all color pairs on the checkerboard are comparable. 40,479 out of 52,800 between-class elements (76.66%) are increased by using the MS method compared to the RGB method, and 52,794 out of 52,800 (99.99%) by using the SP method compared to the RGB method.
From the SID results, the matrix of the normalized between-class SID, Δ(SID), can also be calculated using Eq. (11). The matrix in Fig. 9(a) shows the difference obtained by subtracting Δ(SID) of the RGB method from Δ(SID) of the MS method. For most pairs of samples, Δ(SID) is increased. Decreased elements appear in the measures between most of the color blocks and the different shades of gray blocks, between red and green blocks, and between blue and green blocks. Histograms of the corresponding matrices are shown in Fig. 9(b) and Fig. 9(d). Figure 9(b) shows that for these color pairs, the values of Δ(SID,MS) − Δ(SID,RGB) fall into the bins close to zero. The matrix in Fig. 9(c) shows the difference obtained by subtracting Δ(SID) of the RGB method from Δ(SID) of the SP method; for almost all pairs of samples, Δ(SID) is increased, as can also be seen in Fig. 9(d). 36,030 out of 52,800 between-class elements (68.24%) are increased by using the MS method compared to the RGB method, and 52,674 out of 52,800 (99.76%) by using the SP method compared to the RGB method.
5.2. Field test of the first generation prototype
A field test of the first-generation prototype of TuLUMIS was conducted during an oceanographic cruise (MSM61) on the German research vessel Maria S. Merian in the waters around the Republic of Cape Verde. TuLUMIS was carried by the frame of the towed pelagic in situ observation system PELAGIOS (Hoving et al., in prep.), as presented in Fig. 10(a), for in situ observations of pelagic fauna.
Figure 11 shows a sergestid shrimp in spectral stacks and their fusion into a pseudo-color image after an affine transform based on manually selected feature points was used to align the images. It demonstrates that the proposed TuLUMIS can technically be deployed in the deep sea. Nine TuLUMIS deployments (30 minutes each) were conducted between 75 and 100 meters depth at night (no sunlight) with different multispectral camera settings, LED array layouts and towing depths. The experience gained from the field test is discussed in detail from a technical perspective in Section 6.2.
6. Discussion
6.1. Dimensionality reduction
In the evaluation of spectral similarity, SAM and SID described in Section 2.2 are widely used. A larger SAM means a larger dissimilarity between two spectral vectors. However, SAM can be systematically impacted by changing the dimension of the vector space. Similarly, a larger SID indicates a larger dissimilarity between two spectral information sources but the increase of dimensionality of the signal changes the entire probability distribution. Therefore it is not reasonable to directly compare SAM or SID dissimilarities that are calculated based on spectral signatures with different dimensions.
In the field of pattern recognition, dimensionality is usually reduced by using principal component analysis (PCA) or linear discriminant analysis (LDA) based on Fisher’s discriminant. PCA finds the components with the largest variance, while LDA optimizes the ratio of between-class variance to within-class variance. PCA is unsupervised, while LDA is supervised and requires preliminary knowledge of the classes of the targets. However, the classes of underwater targets are usually unknown, so a training process before dimensionality reduction is not feasible in practice.
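When class labels are available, Fisher LDA can be sketched in the textbook scatter-matrix form; this is an illustrative implementation with synthetic classes, not the paper's code.

```python
import numpy as np

def lda_project(X, labels, n_components=3):
    """X: (n_samples, n_channels); labels: per-sample class ids.
    Projects onto the leading eigenvectors of pinv(Sw) @ Sb."""
    mean_all = X.mean(axis=0)
    Sw = np.zeros((X.shape[1], X.shape[1]))    # within-class scatter
    Sb = np.zeros_like(Sw)                     # between-class scatter
    for c in np.unique(labels):
        Xc = X[labels == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        d = (mc - mean_all)[:, None]
        Sb += len(Xc) * (d @ d.T)
    eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(eigvals.real)[::-1][:n_components]
    return X @ eigvecs[:, order].real

# Three synthetic 8-channel classes with well-separated means:
rng = np.random.default_rng(1)
X = np.concatenate([rng.normal(m, 0.1, (10, 8)) for m in (0.0, 0.5, 1.0)])
y = np.repeat([0, 1, 2], 10)
Z = lda_project(X, y, 2)
```

Unlike the PCA directions, these discriminant directions explicitly maximize class separation, which is consistent with the improvement reported below when LDA replaces PCA.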
In case information on the targets is available, the discrimination ability of the MS method can be further improved. This is evident from Fig. 12, where LDA was used for dimensionality reduction instead of PCA. Compared with Fig. 8 and Fig. 9, the fraction of increased between-class elements in the difference matrix of the MS method rises from 76.66% (with PCA) to 95.82% (with LDA) for Δ(SAM), and from 68.24% (with PCA) to 78.06% (with LDA) for Δ(SID).
6.2. Potential improvements
During the lab experiments and field tests, the first generation prototype of TuLUMIS operated effectively and the results have verified the expected merits:
- The pressure neutral LED light source with neither pressure housing nor extra mechanical parts has the benefit of reduced bulk and complexity of the system
- Light reflected from targets is detected by the camera directly without passing through any filter, which increases the light efficiency
- The intensity of each LED can be adjusted separately to compensate for the water attenuation in different water conditions
- The combination of pressure-neutral LEDs and an off-the-shelf gray camera is cost-effective compared to specialized multispectral cameras
According to the results presented in Section 5, the spectral discrimination ability of TuLUMIS is not better than that of the RGB camera for blue and green colors. The reason could be the significant overlap of the LED light spectra covering 480 nm – 600 nm, as shown in Fig. 3. This results from the so-called “green-yellow gap” in LED technology. Blue, cyan, and green LEDs typically use InGaN as the semiconductor, while yellow and red LEDs are AlInGaP based. Generally, red and blue LEDs are radiantly efficient (with high wall-plug efficiency), but green and yellow LEDs are not. The gap in efficiency can be filled with a green phosphor-converted LED (pumped by a blue LED, then converted to green), such as the one used in TuLUMIS with a central wavelength of 565 nm. As shown in Fig. 3, the efficiency of the converted green LED is welcome, but its large FWHM is counterproductive in this case. A more even combination of LED spectra and reduced spectral overlap could further improve the spectral discrimination ability of TuLUMIS. In addition, optical filters based on the Fabry-Pérot interferometer design could provide sharp separation between spectrally adjacent LEDs, but at the cost of energy loss.
For real-world studies, compensation for the effects of the light source and the water attenuation needs to be considered. In this study, experiments were only conducted in a water tank filled with clear tap water; the distance between the light source and the target, and between the target and the camera, was only 1 m. Under such conditions, spectral distortion can be corrected by dividing by a white reference, without taking advantage of the compensation feature of the tunable light source. At greater distances, tuning the LEDs to counteract the wavelength-dependent attenuation will be an asset of TuLUMIS. Moreover, the temporal variance of water conditions should also be taken into account in practical scenarios, which could likewise be addressed with TuLUMIS.
Correction of the heterogeneity of the light source is worth further study. In the lab experiment, only one LED was turned on at a time, so the uneven spatial distribution of the illuminance could be captured by a low-order polynomial fit. However, in practice, where an array of LEDs flashes at the same time, the optical field can be complex and thus have a more severe effect on the construction of the spectral signatures.
In general, TuLUMIS with its spectral-scanning approach performs better on static scenes than on moving objects. As shown in Fig. 11, the images taken during the field test lack brightness, and the alignment of images taken at different wavelengths was difficult due to the changing target (distance, movement of the fauna). Different aperture sizes and acquisition times were evaluated to balance depth of field, brightness and sharpness. The PELAGIOS was towed at a speed of 0.5 knots (approximately 0.25 m/s), and the acquisition time of the camera was 35 ms with a pixel size of 5.5 μm and a focal length of 24.5 mm. By enhancing the light intensity, the aperture can be made smaller to achieve a larger depth of field, and the acquisition time of the camera can be shortened to alleviate the relative movement between the camera and the targets. Deploying TuLUMIS for midwater imaging requires further tuning of the imaging parameters in the future. We will further improve the system by in-situ and ex-situ imaging of sessile benthic fauna.
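With the quoted field-test settings, the motion blur per exposure can be estimated with simple pinhole geometry; the 1 m working distance is an assumed value for illustration, not a figure from the paper.

```python
# Back-of-the-envelope motion blur for the field-test parameters above.
tow_speed = 0.25          # m/s (0.5 knots)
exposure = 0.035          # s (35 ms acquisition time)
pixel_size = 5.5e-6       # m
focal_length = 24.5e-3    # m
working_distance = 1.0    # m (assumption for illustration)

displacement = tow_speed * exposure                      # target moves 8.75 mm
blur_on_sensor = displacement * focal_length / working_distance
blur_pixels = blur_on_sensor / pixel_size                # roughly 39 px at 1 m
```

A blur of tens of pixels per channel explains why per-channel alignment was necessary and why shorter exposures (enabled by brighter LEDs) are attractive.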
7. Conclusions
In this paper, an underwater multispectral imaging system based on a tunable light source using pressure-neutral color LEDs is presented. The tunable LEDs bring flexibility to the spectral energy distribution of the light source. The combination of pressure-neutral LEDs and an off-the-shelf gray-value camera reduces the complexity and cost of the system. Spectral dissimilarities based on both SAM and SID measures are used to quantify the spectral discrimination abilities of traditional RGB imaging, MS imaging, and hyperspectral (SP) measurements. Results of lab experiments show that for different color blocks, the MS method with eight channels can distinguish 76.66% of color pairs more easily than the common RGB method, while almost all color pairs can be distinguished more effectively by the SP method.
In future studies, we will apply TuLUMIS to spectral imaging of fauna in aquaria under controlled conditions. The LEDs will be tuned individually to compensate for the wavelength-dependent attenuation of light in natural waters at different distances, yielding more accurate spectral signatures. We will also investigate how different minerals, corals, and sediments can be better discriminated in situ with our multispectral approach.
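The per-channel tuning envisaged above follows directly from the Beer-Lambert law [32]: irradiance at range d decays as exp(-c(λ)d), so each LED channel would be driven up by the inverse factor. The attenuation coefficients and channel names below are illustrative placeholders, not measured values for any particular water body.

```python
import math

# Sketch of per-channel LED compensation via the Beer-Lambert law [32]:
#   E(lambda, d) = E0(lambda) * exp(-c(lambda) * d)
# The c(lambda) values here are hypothetical, chosen only to show the
# qualitative trend (red attenuates far faster than blue in clear water).
attenuation = {  # 1/m, illustrative coefficients per LED channel
    "blue_470nm": 0.02,
    "green_530nm": 0.05,
    "red_630nm": 0.30,
}

def relative_drive(c, distance, c_ref, max_gain=10.0):
    """Drive level relative to the least-attenuated channel so that all
    channels reach the target with equal irradiance, clamped to the
    LED's available headroom."""
    return min(math.exp((c - c_ref) * distance), max_gain)

d = 2.0  # m, assumed one-way path length for illustration
c_ref = min(attenuation.values())
drives = {name: relative_drive(c, d, c_ref) for name, c in attenuation.items()}
```

In practice the required gains grow exponentially with range, so the clamp marks where a channel can no longer be fully compensated and the residual imbalance must instead be corrected in post-processing.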
China Scholarship Council (201606320111); German Research Foundation (DFG) Cluster of Excellence FUTURE OCEAN (CP1626); National High-tech R&D Program of China (863 Program) (2014AA093400).
This is publication No. 38 of the DeepSea Monitoring group of GEOMAR. The authors express their sincere gratitude to the staff of the GEOMAR Technology and Logistics Centre for their generous technical support, especially Thorsten Schott, Matthias Wieck, Bjoern Schäfer, and Sidney Michalak. We would also like to thank Yilu Guo (Zhejiang University) and all colleagues at the DeepSea Monitoring group of GEOMAR, especially Jochen Mohrmann and Yifan Song, for joining discussions and providing comments, as well as all members of cruise MSM61 for their generous help. Additionally, Hongbo Liu would like to thank Yufei Jin for her appreciation, care, and company.
References and links
1. T. A. Carrino, A. P. Crósta, C. L. B. Toledo, and A. M. Silva, “Hyperspectral remote sensing applied to mineral exploration in Southern Peru: A multiple data integration approach in the Chapi Chiara gold prospect,” Int. J. Appl. Earth Obs. 64, 287–300 (2018). [CrossRef]
2. H. Pu, D. Liu, J.-H. Qu, and D.-W. Sun, “Applications of imaging spectrometry in inland water quality monitoring-a review of recent developments,” Water, Air, & Soil Pollution 228, 131 (2017). [CrossRef]
3. V. Leemans, G. Marlier, M.-F. Destain, B. Dumont, and B. Mercatoris, “Estimation of leaf nitrogen concentration on winter wheat by multispectral imaging,” Proc. SPIE 10213, 102130I (2017). [CrossRef]
4. A. I. Ropodi, E. Z. Panagou, and G.-J. E. Nychas, “Multispectral imaging (MSI): A promising method for the detection of minced beef adulteration with horsemeat,” Food Control 73, 57–63 (2017). [CrossRef]
5. G. Johnsen, Z. Volent, E. Sakshaug, F. Sigernes, and L. H. Pettersson, Remote sensing in the Barents Sea (Tapir Academic, 2009), Chap. 6.
6. G. Johnsen, Z. Volent, H. Dierssen, R. Pettersen, M. Van Ardelan, F. Søreide, P. Fearns, M. Ludvigsen, and M. Moline, “Underwater hyperspectral imagery to create biogeochemical maps of seafloor properties,” in Subsea Optics and Imaging, J. Watson and O. Zielinski, eds. (Woodhead, 2013). [CrossRef]
7. J. Tegdan, S. Ekehaug, I. M. Hansen, L. M. S. Aas, K. J. Steen, R. Pettersen, F. Beuchel, and L. Camus, “Underwater hyperspectral imaging for environmental mapping and monitoring of seabed habitats,” in Proceedings of IEEE/MTS OCEANS’15 (IEEE, 2015), pp. 1–6.
8. G. Johnsen, M. Ludvigsen, A. Sørensen, and L. M. S. Aas, “The use of underwater hyperspectral imaging deployed on remotely operated vehicles - methods and applications,” IFAC-PapersOnLine 49, 476–481 (2016). [CrossRef]
9. A. A. Mogstad and G. Johnsen, “Spectral characteristics of coralline algae: a multi-instrumental approach, with emphasis on underwater hyperspectral imaging,” Appl. Opt. 56, 9957–9975 (2017). [CrossRef]
10. Ø. Sture, M. Ludvigsen, and L. M. S. Aas, “Autonomous underwater vehicles as a platform for underwater hyperspectral imaging,” in Proceedings of IEEE/MTS OCEANS’17 (IEEE, 2017), pp. 1–8.
11. D. L. Bongiorno, M. Bryson, T. C. Bridge, D. G. Dansereau, and S. B. Williams, “Coregistered hyperspectral and stereo image seafloor mapping from an autonomous underwater vehicle,” J. Field Robot. (2017). [CrossRef]
12. L. Bian, J. Suo, G. Situ, Z. Li, J. Fan, F. Chen, and Q. Dai, “Multispectral imaging using a single bucket detector,” Sci. Rep. -UK 6, 24752 (2016). [CrossRef]
13. S. Jin, W. Hui, Y. Wang, K. Huang, Q. Shi, C. Ying, D. Liu, Q. Ye, W. Zhou, and J. Tian, “Hyperspectral imaging using the single-pixel Fourier transform technique,” Sci. Rep. -UK 7, 45209 (2017). [CrossRef]
14. X. Cao, T. Yue, X. Lin, S. Lin, X. Yuan, Q. Dai, L. Carin, and D. J. Brady, “Computational snapshot multispectral cameras: Toward dynamic capture of the spectral world,” IEEE Signal Proc. Mag. 33, 95–108 (2016). [CrossRef]
15. M. K. Griffin and H.-h. K. Burke, “Compensation of hyperspectral data for atmospheric effects,” Lincoln Laboratory Journal 14, 29–54 (2003).
16. C. Mobley, E. Boss, and C. Roesler, Ocean Optics Web Book, http://www.oceanopticsbook.info/.
17. I. Vasilescu, C. Detweiler, and D. Rus, “Color-accurate underwater imaging using perceptual adaptive illumination,” Auton. Robot. 31, 285 (2011). [CrossRef]
18. I. Leiper, S. Phinn, and A. G. Dekker, “Spectral reflectance of coral reef benthos and substrate assemblages on Heron Reef, Australia,” Int. J. Remote Sens. 33, 3946–3965 (2012). [CrossRef]
19. T. Treibitz, B. P. Neal, D. I. Kline, O. Beijbom, P. L. Roberts, B. G. Mitchell, and D. Kriegman, “Wide field-of-view fluorescence imaging of coral reefs,” Sci. Rep. -UK 5, 7694 (2015). [CrossRef]
21. H. Holden and E. LeDrew, “Hyperspectral discrimination of healthy versus stressed corals using in situ reflectance,” J. Coastal Res. 850–858 (2001).
22. A. Chennu, P. Färber, G. De’ath, D. de Beer, and K. E. Fabricius, “A diver-operated hyperspectral imaging and topographic surveying system for automated mapping of benthic habitats,” Sci. Rep. -UK 7, 7122 (2017). [CrossRef]
23. R. Pettersen, G. Johnsen, P. Bruheim, and T. Andreassen, “Development of hyperspectral imaging as a bio-optical taxonomic tool for pigmented marine organisms,” Org. Divers. Evol. 14, 237–246 (2014). [CrossRef]
24. P. A. Letnes, I. M. Hansen, L. M. Aas, I. Eide, R. Pettersen, L. Tassara, J. Receveur, S. le Floch, J. Guyomarch, L. Camus, and J. Bytingsvik, “Underwater hyperspectral classification of deep sea corals exposed to a toxic compound,” bioRxiv (2017).
25. Y. Guo, H. Song, H. Liu, H. Wei, P. Yang, S. Zhan, H. Wang, H. Huang, N. Liao, Q. Mu, J. Leng, and W. Yang, “Model-based restoration of underwater spectral images captured with narrowband filters,” Opt. Express 24, 13101–13120 (2016). [CrossRef] [PubMed]
26. H. R. Morris, C. C. Hoyt, and P. J. Treado, “Imaging spectrometers for fluorescence and Raman microscopy: acousto-optic and liquid crystal tunable filters,” Appl. Spectrosc. 48, 857–866 (1994). [CrossRef]
27. A. Gleason, R. Reid, and K. Voss, “Automated classification of underwater multispectral imagery for coral reef monitoring,” in Proceedings of IEEE/MTS OCEANS’07 (IEEE, 2007), pp. 1–8.
28. J.-I. Park, M.-H. Lee, M. D. Grossberg, and S. K. Nayar, “Multispectral imaging using multiplexed illumination,” in Proceedings of IEEE Conference on Computer Vision (IEEE, 2007), pp. 1–8.
29. H. Blasinski and J. Farrell, “Computational multispectral flash,” in Proceedings of IEEE Conference on Computational Photography (IEEE, 2017), pp. 1–10.
30. M. B. Bouchard, B. R. Chen, S. A. Burgess, and E. M. Hillman, “Ultra-fast multispectral optical imaging of cortical oxygenation, blood flow, and intracellular calcium dynamics,” Opt. Express 17, 15670–15678 (2009). [CrossRef] [PubMed]
31. X. Delpueyo, M. Vilaseca, S. Royo, M. Ares, L. Rey-Barroso, F. Sanabria, S. Puig, J. Malvehy, G. Pellacani, F. Noguero, G. Solomita, and T. Bosch, “Multispectral imaging system based on light-emitting diodes for the detection of melanomas and basal cell carcinomas: a pilot study,” J. Biomed. Opt. 22, 065006 (2017). [CrossRef]
32. D. Swinehart, “The Beer-Lambert law,” J. Chem. Educ. 39, 333 (1962). [CrossRef]
33. Y. Du, C.-I. Chang, H. Ren, C.-C. Chang, J. O. Jensen, and F. M. D’Amico, “New hyperspectral discrimination measure for spectral characterization,” Opt. Eng. 43, 1777–1786 (2004). [CrossRef]
34. C. M. Bishop, Pattern Recognition and Machine Learning (Springer, 2006).
35. D. Manolakis, D. Marden, and G. A. Shaw, “Hyperspectral image processing for automatic target detection applications,” Lincoln Laboratory Journal 14, 79–116 (2003).
36. Itseez, “Open source computer vision library,” https://github.com/opencv/opencv (2017).
37. J. Sticklus and T. Kwasnitschka, “Verfahren und Vorrichtung zur Herstellung von in Vergussmasse vergossenen Leuchten [Method and device for manufacturing lights potted in casting compound],” DE Patent 102,014,118,672 (2015).
38. Lumileds Holding B.V., “DS105 LUXEON Z color line product datasheet,” https://www.lumileds.com/uploads/415/DS105-pdf (2017).
39. D. Akkaynak, E. Chan, J. J. Allen, and R. T. Hanlon, “Using spectrometry and photography to study color underwater,” in Proceedings of IEEE/MTS OCEANS’11 (IEEE, 2011), pp. 1–8.
40. D. Coffin, “DCRaw Version 9.27,” https://www.cybercom.net/~dcoffin/dcraw/ (2016).
41. B. Fiedler, “Short cruise report RV Maria S. Merian MSM61,” https://www.ldf.uni-hamburg.de/merian/wochenberichte/wochenberichte-merian/msm58-2-msm61/msm61-scr.pdf (2017).