November 2015
Spotlight Summary by Roarke Horstmeyer
Spectral edge: gradient-preserving spectral mapping for image fusion
A huge variety of cameras can now capture information beyond what our eyes can “see”. This information includes wavelengths in the ultraviolet or infrared, segments of the visible spectrum sliced incredibly finely, or even the various signals picked up during an MRI scan. These non-standard sets of image data can be incredibly useful for computers to process. For example, the Microsoft Kinect uses infrared light to determine your location in 3D, satellites capture hyperspectral data to classify crops from above, and medical anomalies are often identified automatically within MRI scans. However, it can be very challenging to convert these types of data into a simple image that a human can look at and easily interpret.
To help present such rich data sets as understandable images, their “multi-channel” content must be mapped down to the familiar blends of red, green, and blue (RGB) that our eyes can see. This mapping process is commonly referred to as image fusion, and a variety of algorithms are available. The simplest technique might be to take all of the data channel values at a given image pixel and just sum them up. This yields one value of interest per pixel, which can be displayed in grayscale for human interpretation. Unfortunately, this simple method can produce the same output value for unequal inputs (i.e., metamers), which is undesirable. Other mapping methods consider the local spatial information across each data set, which helps to more faithfully preserve the multi-channel data within the fused output.
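The channel-summing approach, and the metamer problem it creates, can be illustrated with a short sketch. This is a toy example, not code from the paper; the array values are invented purely for illustration:

```python
import numpy as np

# Toy 2x2 "image" with three data channels per pixel (values invented).
multichannel = np.array([
    [[0.2, 0.5, 0.3], [0.5, 0.2, 0.3]],   # two pixels with different spectra
    [[0.1, 0.1, 0.1], [0.4, 0.3, 0.2]],
])

# Naive fusion: sum the channel values at each pixel into one grayscale value.
fused = multichannel.sum(axis=-1)

# The two top-row pixels have different channel values but identical sums,
# so they become indistinguishable in the fused image -- a metamer.
assert not np.array_equal(multichannel[0, 0], multichannel[0, 1])
assert np.isclose(fused[0, 0], fused[0, 1])
```

Any fusion rule that looks only at per-pixel sums is blind to this kind of difference, which is why spatially aware methods are attractive.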
In this paper, Connah and colleagues propose a new image fusion technique that preserves the total image gradient at each pixel. Image gradients highlight abrupt changes (e.g., edges) within a scene and are very important to human vision: our eyes often rely on them to extract features of interest, so when edges change or go missing, we quickly notice that something is off. This work convincingly demonstrates that by accurately preserving gradients, image fusion maintains key information within its naturally colored RGB output.
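A rough way to see what preserving the total gradient buys is to compare the multi-channel gradient with the gradient of a naively summed image. The sketch below is only a simplified proxy for the paper's approach (the authors use a more principled multi-channel contrast measure); the helper function and values here are invented for illustration:

```python
import numpy as np

def total_gradient_magnitude(img):
    # Finite-difference gradients in x and y, summed in quadrature over all
    # channels -- a crude stand-in for multi-channel contrast.
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    return np.sqrt((gx ** 2 + gy ** 2).sum(axis=-1))

# A 1x2 two-channel "image": channel 0 steps up while channel 1 steps down.
img = np.array([[[0.2, 0.8], [0.8, 0.2]]])

# The channels clearly contain an edge...
assert total_gradient_magnitude(img)[0, 0] > 0
# ...but summing the channels flattens it away, so the edge is lost.
assert total_gradient_magnitude(img.sum(axis=-1, keepdims=True))[0, 0] == 0
```

A gradient-preserving fusion method would instead choose output values so that an edge like this survives in the RGB result.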
One of the nicest features of this new method is its generality. The authors provide proofs of the accuracy of their mapping technique, which may extend beyond just fusing data into RGB color: their framework may map any data set with N distinct color channels to a smaller data set with M distinct color channels, for any value M less than N.
Article Information
Spectral edge: gradient-preserving spectral mapping for image fusion
David Connah, Mark S. Drew, and Graham D. Finlayson
J. Opt. Soc. Am. A 32(12), 2384-2396 (2015)