Fusion of infrared and visual images through region extraction by using multi scale center-surround top-hat transform

Open Access

Abstract

Fusion of infrared and visual images is an important research area in image analysis. Its purpose is to combine the information of the original images into a single fusion result, so it is crucial to effectively extract the image information of the original images and reasonably combine it into the final fusion image. To achieve this, an algorithm based on region extraction using the multi scale center-surround top-hat transform is proposed in this paper. First, the multi scale center-surround top-hat transform is discussed and used to extract the multi scale bright and dim image regions of the original images. Second, the final extracted image regions for fusion are constructed from the extracted multi scale bright and dim regions. Finally, after a base image is calculated from the original images, the final extracted image regions are combined into the base image through a power strategy to form the final fusion result. Because the image information of the original images is well extracted and combined, the proposed algorithm is very effective for image fusion. Comparison experiments on different image sets verify the effectiveness of the proposed algorithm.

©2011 Optical Society of America

1. Introduction

Image fusion is an important research area in image analysis and pattern recognition; it is used to combine the useful information of different images obtained from multiple sensors or from one sensor under different conditions [1–3]. In particular, an infrared image obtained from an infrared sensor may clearly show important target regions that cannot be easily observed in the visual image. However, the infrared image may not contain as many image details as the visual image. To maintain both the image details and the important target regions, an image fusion technique should be used [4]. Many algorithms have been proposed for image fusion, such as wavelet transform based algorithms [5,6], curvelet transform based algorithms [7,8], ICA or PCA based algorithms [9–11], segmentation based algorithms [12,13], neural network based algorithms [14,15], and so on. Wavelet or curvelet transform based algorithms decompose the original image into different images at multiple scales and extract the multi scale image details for fusion; they perform well if the multi scale image details can be effectively extracted. However, some image details may be smoothed, which may degrade the performance of these algorithms for infrared and visual image fusion. ICA or PCA based algorithms extract the main information of the original images and combine it to form the final fusion result; again, some image details may not be well preserved in the result. Segmentation based algorithms segment the important image regions of the original images and combine the segmented regions into a new image, but the quality of the segmentation clearly affects the effectiveness of these algorithms. Neural networks are also used for image fusion, but they are usually designed for specific applications. Mathematical morphology [16] is an important theory in image processing, and different morphological operations are widely used in many applications [16–27]. Among these operations, the top-hat transform is an important morphological operation for image region extraction [17–21], which has also been used for multi-modal and multi-focus image fusion [22]. Multi scale theory [23–26], which in mathematical morphology corresponds to using structuring elements of multiple sizes, is used in these algorithms to extract multi scale image features for fusion. Because the multi scale image features can be extracted, these algorithms perform well in some cases. However, some regions of interest or image details may be smoothed, which affects their performance for infrared and visual image fusion.

To improve the ability of the top-hat transform for image processing, a new top-hat transform, renamed the center-surround top-hat transform in this paper, was proposed through structuring element construction [27]. The center-surround top-hat transform could effectively extract image regions that differ from their surrounding regions while well maintaining other regions [26,27]. This property is especially useful for extracting regions of interest. Moreover, image details usually differ from their surrounding regions, especially in infrared images, so the center-surround top-hat transform could also be well used for image detail extraction. Therefore, an effective algorithm for infrared and visual image fusion may be constructed by using the center-surround top-hat transform.

In this paper, an algorithm for infrared and visual image fusion through region extraction using the multi scale center-surround top-hat transform is proposed. First, the multi scale center-surround top-hat transform with multi scale structuring elements is discussed and used to extract the multi scale image regions of the original infrared and visual images. Then, the final extracted image regions for fusion are constructed from the extracted multi scale image regions. Finally, the fusion image is obtained by combining the final extracted image regions through a power strategy. Comparison experiments verify the effectiveness of the proposed algorithm, which may also be used for other types of image fusion applications.

2. Mathematical morphology

Mathematical morphology has been widely used for image processing [16]. Most morphological operations are defined on two sets: one is the original image to be processed, and the other is the structuring element used to process it. Let f(x, y) and B(u, v) represent the original image and the structuring element, respectively, where (x, y) and (u, v) are the pixel coordinates of f and B. Based on f and B, the basic morphological operations, dilation and erosion, denoted by f ⊕ B and f ⊖ B, are defined as follows.

$$(f \oplus B)(x,y) = \max_{u,v}\left\{ f(x-u,\, y-v) + B(u,v) \right\},$$
$$(f \ominus B)(x,y) = \min_{u,v}\left\{ f(x+u,\, y+v) - B(u,v) \right\}.$$
By applying dilation and erosion in sequence, two important operations, opening and closing, denoted by f ∘ B and f • B, are defined as follows.
$$f \circ B = (f \ominus B) \oplus B,$$
$$f \bullet B = (f \oplus B) \ominus B.$$
By comparing the original image with the result of opening or closing, two important region extraction operations, the classical white top-hat transform (WTH) and black top-hat transform (BTH), are defined as follows.
$$WTH(x,y) = f(x,y) - (f \circ B)(x,y),$$
$$BTH(x,y) = (f \bullet B)(x,y) - f(x,y).$$
Opening smooths bright image regions, so WTH, which is the difference between the original image and its opening, extracts the bright image regions smoothed by opening. Similarly, closing smooths dim image regions, so BTH extracts dim image regions. By appropriately processing the results of the classical top-hat transforms, effective results could be achieved for different applications [17–21,25–27].
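As a quick illustration, the following is a minimal sketch of the classical white and black top-hat transforms with flat (zero-valued) square structuring elements. It assumes Python with NumPy and SciPy, which are not part of the paper; scipy.ndimage's grey_opening and grey_closing stand in for the opening and closing defined above.

```python
# Minimal sketch of the classical top-hat transforms (WTH, BTH) with a flat
# square structuring element; assumes NumPy/SciPy, not part of the paper.
import numpy as np
from scipy import ndimage

def classical_top_hats(f, size=5):
    """Return (WTH, BTH) of image f for a flat size x size structuring element."""
    f = f.astype(np.float64)
    opened = ndimage.grey_opening(f, size=(size, size))   # smooths bright regions
    closed = ndimage.grey_closing(f, size=(size, size))   # smooths dim regions
    wth = f - opened   # bright regions removed by opening
    bth = closed - f   # dim regions filled in by closing
    return wth, bth
```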

3. Center-surround top-hat transform

3.1 Definition

The definitions of the classical top-hat transforms indicate that only one structuring element is used in WTH and BTH, so the difference information between the region of interest and its surrounding regions is not well extracted and used, which may limit the performance of the classical top-hat transforms for image processing. To improve this, a new top-hat transform was proposed [27], defined by importing the difference information between the region of interest and its surrounding regions into the constructed structuring elements. In this paper, we rename this new top-hat transform the center-surround top-hat transform. Different from the classical top-hat transforms, the center-surround top-hat transform uses two structuring elements constructed from an inner structuring element (Bi) and an outer structuring element (Bo). The size of Bo should be larger than the region of interest; Bo represents the whole local region containing the region of interest. The size of Bi should not be larger than the region of interest; Bi represents the inner part of the region of interest. The first structuring element used in the center-surround top-hat transform is the marginal structuring element, denoted by ΔB = Bo − Bi, which represents the marginal region between the region of interest and its surrounding regions. The second structuring element is denoted by Bb; its size lies between the sizes of Bi and Bo, and Bb could change from Bi to Bo depending on the application. Then, using ΔB and Bb, two new operations with properties similar to the classical opening and closing, denoted by f ∘ Boi and f • Boi, are defined as follows.

$$(f \circ B_{oi})(x,y) = \left((f \oplus \Delta B) \ominus B_b\right)(x,y),$$
$$(f \bullet B_{oi})(x,y) = \left((f \ominus \Delta B) \oplus B_b\right)(x,y).$$
The subscript Boi indicates that the definition relates to Bo and Bi. f ∘ Boi and f • Boi smooth the bright and dim image regions, respectively, according to the used structuring elements. So, using definitions similar to WTH and BTH, the center-surround top-hat transforms, namely the center-surround white top-hat transform (NWTH) and the center-surround black top-hat transform (NBTH), are defined as follows.
$$NWTH(x,y) = f(x,y) - (f \circ B_{oi})(x,y),$$
$$NBTH(x,y) = (f \bullet B_{oi})(x,y) - f(x,y).$$
NWTH and NBTH extract the bright and dim image regions, respectively, according to the used structuring elements.

However, the definitions of f ∘ Boi and f • Boi indicate that two different structuring elements are used and that the order of erosion and dilation differs from the classical opening and closing. Thus, there is no guaranteed ordering relationship between the original image and the results of f ∘ Boi and f • Boi, which means that negative gray values may appear in the result of the center-surround top-hat transform because of the difference between the original image and these results.

Fortunately, a simple way to suppress the possible negative gray values in NWTH and NBTH, by comparing the gray values of the original image with the results of f ∘ Boi and f • Boi, is given as follows [27].

$$NWTH(x,y) = f(x,y) - \min\left\{(f \circ B_{oi})(x,y),\ f(x,y)\right\} = f(x,y) - \min\left\{\left((f \oplus \Delta B) \ominus B_b\right)(x,y),\ f(x,y)\right\},$$
$$NBTH(x,y) = \max\left\{(f \bullet B_{oi})(x,y),\ f(x,y)\right\} - f(x,y) = \max\left\{\left((f \ominus \Delta B) \oplus B_b\right)(x,y),\ f(x,y)\right\} - f(x,y).$$
The constructed structuring elements ΔB and Bb used in NWTH and NBTH directly exploit the difference information between the regions of interest and their surrounding regions, which effectively improves the performance of the center-surround top-hat transform for image region extraction. NWTH and NBTH could be used for bright and dim region extraction, respectively.
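The following is a minimal sketch of NWTH and NBTH under the assumption of flat structuring elements: a hollow square footprint stands in for the marginal element ΔB = Bo − Bi and a smaller square footprint for Bb. The function and parameter names, and the default sizes, are illustrative assumptions rather than the paper's notation.

```python
# Sketch of the center-surround top-hat transforms NWTH and NBTH with flat
# structuring elements (hollow square for ΔB, square for Bb). Names and
# default sizes are illustrative assumptions.
import numpy as np
from scipy import ndimage

def square_footprint(n):
    return np.ones((n, n), dtype=bool)

def ring_footprint(n_outer, n_inner):
    """Hollow square ΔB: outer size n_outer with a centered n_inner hole."""
    fp = np.ones((n_outer, n_outer), dtype=bool)
    pad = (n_outer - n_inner) // 2
    fp[pad:pad + n_inner, pad:pad + n_inner] = False
    return fp

def center_surround_top_hats(f, n_outer=9, n_inner=5, n_b=5):
    """Return (NWTH, NBTH) of image f for the given footprint sizes."""
    f = f.astype(np.float64)
    dB = ring_footprint(n_outer, n_inner)   # marginal structuring element ΔB
    Bb = square_footprint(n_b)              # intermediate structuring element Bb
    # (f ⊕ ΔB) ⊖ Bb, clipped against f so NWTH stays non-negative
    op = ndimage.grey_erosion(ndimage.grey_dilation(f, footprint=dB), footprint=Bb)
    nwth = f - np.minimum(op, f)
    # (f ⊖ ΔB) ⊕ Bb, clipped against f so NBTH stays non-negative
    cl = ndimage.grey_dilation(ndimage.grey_erosion(f, footprint=dB), footprint=Bb)
    nbth = np.maximum(cl, f) - f
    return nwth, nbth
```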

The crux of image fusion is to extract the important image information of the original images and reasonably combine it into the final result image. The center-surround top-hat transform can extract image regions selectively, so it is well suited for image fusion. More importantly, in infrared images the important information usually appears as bright or dim image regions. Therefore, an effective infrared and visual image fusion algorithm can be constructed based on the center-surround top-hat transform.

3.2 Constructing multi scale center-surround top-hat transform

Because only one size of structuring element is used, the center-surround top-hat transform only extracts image regions whose size corresponds to the size of the used structuring elements. To extract all the regions of interest with different sizes, multi scale structuring elements with different sizes should be used.

Multi scale theory is widely used in mathematical morphology, and size (scale) dependent operations using structuring elements of different sizes have been proved to be effective [22–26]. In this paper, we also use multi scale structuring elements with different sizes in the center-surround top-hat transform to extract all the possible image information for image fusion.

Let nL be the size of the structuring element Bb, nW the size of the marginal structuring element ΔB, and nM the width of the margin in ΔB. An illustration of these sizes is shown in Fig. 1.

Fig. 1 Used structuring element with square shape.

Suppose n scales of structuring elements are used and the size increasing step is nS. Then, the sizes of the structuring elements at each scale s (1 ≤ s ≤ n) are defined as follows.

$$n_L^s = n_L + s \times n_S,$$
$$n_W^s = n_W + s \times n_S.$$
Let ΔBs and Bbs denote the structuring elements ΔB and Bb at scale s, respectively; nLs is the size of Bbs and nWs is the size of ΔBs. nL and nW are the base sizes of Bb and ΔB.

The bright image regions extracted by the center-surround white top-hat transform at scale s are given as follows.

$$NWTH_s(x,y) = f(x,y) - \min\left\{\left((f \oplus \Delta B_s) \ominus B_b^s\right)(x,y),\ f(x,y)\right\}.$$
The dim image regions extracted by the center-surround black top-hat transform at scale s are given as follows.
$$NBTH_s(x,y) = \max\left\{\left((f \ominus \Delta B_s) \oplus B_b^s\right)(x,y),\ f(x,y)\right\} - f(x,y).$$
By using NWTHs and NBTHs, the multi scale bright and dim image regions could be extracted and used for image fusion.

The scale increasing step nS and the number of scales n determine which image regions can be extracted. Small values of nS and n will extract only a few image regions, which may affect the performance of the algorithm. Conversely, very large values of nS and n will dramatically increase the computation time of the proposed algorithm. Usually, the regions of interest in infrared images are not very large, so there is no need to set very large values for nS and n. In this paper, nS = 11 and n = 9.
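As a sketch of the scale rule above, the loop below generates the per-scale sizes nLs = nL + s·nS and nWs = nW + s·nS and reuses center_surround_top_hats from the previous section; keeping the margin width nM fixed across scales is our assumption, not a detail stated in the paper.

```python
# Sketch of the multi-scale extraction loop; reuses center_surround_top_hats
# from the sketch above. Keeping nM fixed at every scale is an assumption.
def multi_scale_top_hats(f, n_scales=9, nL=5, nW=5, nM=2, nS=11):
    bright, dim = [], []            # NWTH_s and NBTH_s for s = 1 .. n_scales
    for s in range(1, n_scales + 1):
        nL_s = nL + s * nS          # size of Bb at scale s
        nW_s = nW + s * nS          # outer size of ΔB at scale s
        nwth_s, nbth_s = center_surround_top_hats(
            f, n_outer=nW_s, n_inner=nW_s - 2 * nM, n_b=nL_s)
        bright.append(nwth_s)
        dim.append(nbth_s)
    return bright, dim
```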

4. Algorithm

4.1 Extracting multi scale bright and dim image regions

Let fIR and fVI represent the original infrared and visual images for image fusion. The bright image regions of fIR extracted by the center-surround white top-hat transform at scale s could be expressed as follows.

$$NWTH_s(f_{IR})(x,y) = f_{IR}(x,y) - \min\left\{\left((f_{IR} \oplus \Delta B_s) \ominus B_b^s\right)(x,y),\ f_{IR}(x,y)\right\}.$$
Also, the dim image regions of fIR extracted by the center-surround black top-hat transform at scale s could be expressed as follows.
$$NBTH_s(f_{IR})(x,y) = \max\left\{\left((f_{IR} \ominus \Delta B_s) \oplus B_b^s\right)(x,y),\ f_{IR}(x,y)\right\} - f_{IR}(x,y).$$
Similarly, the bright and dim image regions of fVI extracted by the center-surround white and black top-hat transform at scale s could be expressed as follows.
$$NWTH_s(f_{VI})(x,y) = f_{VI}(x,y) - \min\left\{\left((f_{VI} \oplus \Delta B_s) \ominus B_b^s\right)(x,y),\ f_{VI}(x,y)\right\},$$
$$NBTH_s(f_{VI})(x,y) = \max\left\{\left((f_{VI} \ominus \Delta B_s) \oplus B_b^s\right)(x,y),\ f_{VI}(x,y)\right\} - f_{VI}(x,y).$$
By varying s from 1 to n, the multi scale image regions of fIR and fVI for image fusion could be extracted.
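A sketch of this step, assuming registered float images of equal size and reusing multi_scale_top_hats from Section 3.2; the helper name is ours.

```python
# Apply the multi-scale transforms to a registered infrared/visual pair;
# the function name and keyword passing are illustrative assumptions.
def extract_pair_regions(f_ir, f_vi, **params):
    bright_ir, dim_ir = multi_scale_top_hats(f_ir, **params)   # NWTH_s(f_IR), NBTH_s(f_IR)
    bright_vi, dim_vi = multi_scale_top_hats(f_vi, **params)   # NWTH_s(f_VI), NBTH_s(f_VI)
    return bright_ir, dim_ir, bright_vi, dim_vi
```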

4.2 Constructing the final bright and dim image regions

The center-surround top-hat transform extracts bright and dim image regions, which means that pixels carrying important information in its results have large gray values. So, based on the extracted multi scale bright image regions of fIR, the bright image regions of fIR over all scales are taken as the pixel-wise maximum of NWTHs(fIR) over all scales as follows.

$$NWTH(f_{IR}) = \max_s\left\{NWTH_s(f_{IR})\right\}.$$
Similarly, the bright image regions of fVI over all scales are the pixel-wise maximum of NWTHs(fVI) over all scales as follows.
$$NWTH(f_{VI}) = \max_s\left\{NWTH_s(f_{VI})\right\}.$$
Then, the final bright image regions of the original infrared and visual images for image fusion, denoted by RW, could be obtained through the pixel-wise maximum on NWTH(fIR) and NWTH(fVI).
$$R_W = \max\left\{NWTH(f_{IR}),\ NWTH(f_{VI})\right\}.$$
Similarly, the final dim image regions of the original infrared and visual images for image fusion, denoted by RB, could be obtained in the same way as RW as follows.
$$R_B = \max\left\{NBTH(f_{IR}),\ NBTH(f_{VI})\right\},$$
where
$$NBTH(f_{IR}) = \max_s\left\{NBTH_s(f_{IR})\right\},$$
$$NBTH(f_{VI}) = \max_s\left\{NBTH_s(f_{VI})\right\}.$$
Then, the final extracted bright and dim image regions, which are RW and RB, could be used for image fusion.
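A sketch of these pixel-wise maxima, assuming the per-scale lists produced by the sketch in Section 4.1:

```python
# Collapse the per-scale results into the final bright (RW) and dim (RB)
# regions by pixel-wise maxima over scales and then over the two images.
import numpy as np

def final_regions(bright_ir, dim_ir, bright_vi, dim_vi):
    nwth_ir = np.max(np.stack(bright_ir), axis=0)   # NWTH(f_IR): max over scales
    nwth_vi = np.max(np.stack(bright_vi), axis=0)   # NWTH(f_VI)
    nbth_ir = np.max(np.stack(dim_ir), axis=0)      # NBTH(f_IR)
    nbth_vi = np.max(np.stack(dim_vi), axis=0)      # NBTH(f_VI)
    RW = np.maximum(nwth_ir, nwth_vi)               # final bright regions
    RB = np.maximum(nbth_ir, nbth_vi)               # final dim regions
    return RW, RB
```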

4.3 Image fusion through contrast enhancement

Appropriately combining the final extracted bright and dim image regions into the fusion image is important for achieving a good result. These regions mainly represent the image details of the original images. To form a good fusion result, a base image containing most of the basic information of the original images should first be constructed; the final extracted bright and dim image regions can then be combined into it. In this paper, the base image RM is calculated by simple pixel-wise averaging of the original images as follows.

$$R_M(x,y) = \frac{f_{IR}(x,y) + f_{VI}(x,y)}{2}.$$
Because of the averaging, the basic image information of the original images is well combined into the base image and some detailed features are also retained, which is very useful for image fusion.

The final extracted bright and dim image regions represent two types of image information, bright and dim. So, to obtain a good visual effect in the final fusion image while maintaining the image information, a power strategy based on contrast enhancement is used in this paper to construct the final fusion image Fu as follows.

$$F_u = R_M \times w_1 + R_W \times w_2 - R_B \times w_3.$$
In the calculation of Fu, the bright image regions are added to the base image and the dim image regions are subtracted from it, which enhances the contrast between the bright and dim image regions of the fusion image. The contrast of the fusion image could be further enhanced through a simple linear stretch, so the visual effect of the final fusion image would be good.

The parameters w1, w2 and w3 of the power strategy are used to adjust the effect of the final result. A large w1 preserves the basic image information of the original images, while large w2 and w3 enhance the image details of the result image and give a good visual effect. Usually, w1, w2 and w3 could be selected in [0, 5].
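A sketch of the fusion rule Fu = RM·w1 + RW·w2 − RB·w3, followed by a simple linear stretch to [0, 255]; the particular weight values are illustrative choices within the stated range, not the paper's tuned settings.

```python
# Sketch of the fusion step: base image by averaging, power-strategy
# combination, then a linear contrast stretch. Weight values are examples.
import numpy as np

def fuse(f_ir, f_vi, RW, RB, w1=1.0, w2=2.0, w3=2.0):
    RM = (f_ir.astype(np.float64) + f_vi.astype(np.float64)) / 2.0  # base image
    fu = RM * w1 + RW * w2 - RB * w3
    fu = (fu - fu.min()) / (fu.max() - fu.min() + 1e-12) * 255.0    # linear stretch
    return fu.astype(np.uint8)
```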

4.4 Structuring element selection

The structuring element is a very important parameter of the algorithm; the main choices are its shape and size. Widely used shapes of structuring elements are the rectangle, square, circle and line. Although rectangular and square shapes may produce blocking effects, they are the most widely used shapes in different applications, and the square shape has a symmetric structure which simplifies the calculation. So, the square shape is used in this paper. An illustration of the shape of the structuring elements is shown in Fig. 1.

In the used structuring elements, the starting sizes nL and nW decide the smallest size of the extractable image regions, so it is better if nL and nW are small. Since nW should not be smaller than nL, we set nL = nW = 5 in this paper.

The margin width nM is an important parameter that influences the performance of the center-surround top-hat transform. A small nM improves its performance for region extraction, but the transform may become sensitive to noise if nM is very small [27]. So, in this paper, nM = 2.

4.5 Implementation

The implementation of the proposed algorithm is illustrated in Fig. 2. First, the multi scale center-surround top-hat transform is applied to the original infrared and visual images to extract their multi scale bright and dim image regions. Second, the final bright and dim image regions are obtained through the pixel-wise maximum operation on the multi scale bright and dim image regions. Finally, the fusion result is constructed by combining the final extracted bright and dim image regions with a base image using the power strategy.


Fig. 2 Implementation of the algorithm.
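A hypothetical end-to-end run of the pipeline in Fig. 2, chaining the sketches above; random arrays stand in for a registered infrared/visual pair, and the small scale settings are chosen only to keep the example fast, not to match the paper's parameters.

```python
# End-to-end example using the sketches above; synthetic data, illustrative
# parameters (3 scales, step 5) rather than the paper's settings.
import numpy as np

rng = np.random.default_rng(0)
f_ir = rng.integers(0, 256, size=(240, 320)).astype(np.float64)
f_vi = rng.integers(0, 256, size=(240, 320)).astype(np.float64)

regions = extract_pair_regions(f_ir, f_vi, n_scales=3, nS=5)
RW, RB = final_regions(*regions)
fused = fuse(f_ir, f_vi, RW, RB)
print(fused.shape, fused.dtype)
```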


5. Experimental results

The proposed algorithm is applied to five image sets, the “UNcamp”, “Trees”, “Dune”, “navi” and “OctecWS” image sets, and some examples are shown below. For comparison, several widely used methods, including the direct average algorithm, the wavelet pyramid algorithm [5,6] and the multi scale mathematical morphology based algorithm [22], are applied to the same image sets. The direct average algorithm and the wavelet pyramid algorithm are efficient and widely used methods for image fusion. The multi scale mathematical morphology based algorithm is, like the proposed algorithm, a multi scale morphological algorithm, and the proposed algorithm is intended as a general-purpose algorithm for image fusion. These three algorithms are therefore used for the comparison in this paper.

5.1 Visual effect comparison

Figures 3–8 show some comparison examples. In these examples, (a) and (b) are the original infrared and visual images; (c) is the fusion result of the proposed algorithm; (d) is the result of the direct average algorithm; (e) is the result of the wavelet pyramid algorithm; and (f) is the result of the multi scale mathematical morphology based algorithm.

Fig. 3 An example on UNcamp images.

Fig. 8 Another example on OctecWS images.

Figure 3 is an example on the UNcamp images. The direct average algorithm and the wavelet pyramid algorithm smooth some image details; in particular, the “people” region is not clear. The multi scale mathematical morphology based algorithm and the proposed algorithm keep the image details of the original images well, but compared with the result of the multi scale mathematical morphology based algorithm, the proposed algorithm obtains a better visual effect.

Figure 4 is an example on the Trees images. The direct average algorithm and the wavelet pyramid algorithm smooth some image details of the original images, so their results are not very clear. The multi scale mathematical morphology based algorithm performs better than these two, but compared with it, the result of the proposed algorithm is clearer and achieves a better effect.

Fig. 4 An example on Trees images.

Figure 5 is an example on the Dune images. The result of the wavelet pyramid algorithm is very dim and not clear. The direct average algorithm could not well maintain the image details of the original images, which results in an unclear image. The multi scale mathematical morphology based algorithm and the proposed algorithm obtain better results, and the result of the proposed algorithm is the clearest.

Fig. 5 An example on Dune images.

Figure 6 is an example on the navi images. The multi scale mathematical morphology based algorithm performs better than the direct average algorithm and the wavelet pyramid algorithm. However, because the multi scale image features of the original images are well extracted and retained, the result of the proposed algorithm has the best visual effect; in particular, the very small bright and dark regions are clearer than in the results of the other algorithms.

Fig. 6 An example on navi images.

Figure 7 is an example on the OctecWS images. The direct average algorithm produces a blurred result, and the result of the wavelet pyramid algorithm is also not very clear. The multi scale mathematical morphology based algorithm produces some artifact regions, especially around the trees. The result of the proposed algorithm, however, is clear, has a good visual effect and achieves the best result.

Fig. 7 An example on OctecWS images.

Figure 8 is another example on the OctecWS images. The results of the proposed algorithm and the multi scale mathematical morphology based algorithm are better than those of the direct average algorithm and the wavelet pyramid algorithm. In particular, compared with the result of the multi scale mathematical morphology based algorithm, some important regions in the result of the proposed algorithm, such as the people regions, are much clearer, which means the proposed algorithm performs better than the other algorithms.

Different types of images have been used in this experiment, and the results show that the proposed algorithm performs better than the other algorithms on most of the images, which indicates that it is an effective algorithm for infrared and visual image fusion. More importantly, the proposed algorithm could also be used in other types of image fusion.

5.2 Quantitative comparison

The purpose of image fusion is to combine as much of the image information as possible into the final fusion image, so a better fusion result should contain more information. Therefore, for a quantitative comparison, two information based measures, entropy [28,29] and joint entropy [30], are used in this paper.

5.2.1 Entropy measure

Suppose an image I has b pixels and L gray levels. The probability of a gray level l, denoted by pl, is obtained by dividing the number bl of pixels with gray value l by the total number of pixels: pl = bl/b. The entropy measure E is then defined as follows [28,29].

$$E = -\sum_{l=0}^{L-1} p_l \log_2 p_l.$$
A larger entropy value indicates that the final fusion image contains more information and that the corresponding algorithm gives a better result.
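A minimal sketch of the entropy measure for an integer-valued fusion result with L = 256 gray levels; the histogram-based probability estimate follows the definition above.

```python
# Sketch of the entropy measure E for an integer-valued image with L levels.
import numpy as np

def entropy(image, levels=256):
    hist, _ = np.histogram(image, bins=levels, range=(0, levels))
    p = hist.astype(np.float64) / image.size    # p_l = b_l / b
    p = p[p > 0]                                # skip empty bins so log2 is defined
    return -np.sum(p * np.log2(p))
```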

5.2.2 Joint entropy measure

Suppose the final fusion image is F and the original infrared and visual images are C and D. The joint entropy measure, denoted by JEFCD, which quantifies the information shared between the fusion image F and the original images C and D, is calculated as follows [30].

$$JE_{FCD} = -\sum_{k=0}^{L-1}\sum_{c=0}^{L-1}\sum_{d=0}^{L-1} p_{FCD}(k,c,d)\, \log_2 p_{FCD}(k,c,d).$$
Here pFCD(k, c, d) is the joint probability of gray levels k, c and d in the images F, C and D.

A larger joint entropy value indicates that the final fusion image contains more information and is more closely related to the original images, so the performance of the corresponding algorithm is better.
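A minimal sketch of the joint entropy JEFCD; note that for 8-bit images the full 256×256×256 joint histogram is large (roughly 128 MB of float64), so quantizing the gray levels to fewer bins via the levels parameter is a practical assumption, not something specified in the paper.

```python
# Sketch of the joint entropy of (F, C, D); F is the fusion result, C and D
# the source images, all integer-valued in [0, levels).
import numpy as np

def joint_entropy(F, C, D, levels=256):
    sample = np.stack([F.ravel(), C.ravel(), D.ravel()], axis=1)
    hist, _ = np.histogramdd(sample, bins=(levels,) * 3, range=[(0, levels)] * 3)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))
```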

5.2.3 Comparison results

Images from the five image sets used in this paper are processed by the direct average algorithm, the wavelet pyramid algorithm, the multi scale mathematical morphology based algorithm and the proposed algorithm, and the results are used to calculate the entropy and joint entropy values. The mean value of the entropy or joint entropy over all the result images of each algorithm is used for the comparison.

The mean entropy value of each algorithm is shown in Fig. 9. Figure 9 shows that the entropy value of the proposed algorithm is larger than those of the other algorithms, so the final fusion result of the proposed algorithm contains more image information, which indicates the effective performance of the proposed algorithm for infrared and visual image fusion.

Fig. 9 Quantitative comparison using the entropy measure.

The mean joint entropy value of each algorithm is shown in Fig. 10. Figure 10 shows that the joint entropy value of the proposed algorithm is the largest among all the algorithms, so the fusion result of the proposed algorithm contains more information and is more closely related to the original images than the results of the other algorithms. This shows that the proposed algorithm extracts the image details from the original images well and combines them into the final fusion image, which results in a very effective fusion result.

Fig. 10 Quantitative comparison using the joint entropy measure.

All the images used in this experiment come from different image sets with different image properties, and the results of the proposed algorithm are better than those of the other algorithms. Therefore, the proposed algorithm can effectively combine the image information of the original infrared and visual images for image fusion, and its performance is robust.

6. Summary and conclusions

Extracting the image information of the original images and combining it into the final result image is very important for image fusion. In infrared and visual image fusion, the regions of interest are usually the important image information for different applications, so the crux of infrared and visual image fusion is to effectively extract the regions of interest and appropriately combine them into the fusion result. The center-surround top-hat transform could extract regions of interest corresponding to the used structuring element, and the multi scale center-surround top-hat transform could extract all the possible regions of interest at different scales. Appropriately combining the extracted multi scale regions of interest through contrast enhancement then leads to an effective fusion result. So, the proposed algorithm, which uses the multi scale image regions extracted by the multi scale center-surround top-hat transform, performs well for image fusion. Moreover, different infrared and visual image sets have been processed by the proposed algorithm, and the results show that it could be effectively used for infrared and visual image fusion and performs better than some other widely used algorithms. Because the regions of interest are well extracted and combined into the final fusion result, the fusion image is more useful for different applications, such as target detection, tracking, and recognition.

Acknowledgment

The original images used in this paper are obtained from the websites www.imagefusion.org and www.metapix.de/indexp.htm. This work is partly supported by the National Natural Science Foundation of China (60902056) and the Civil Aviation United Foundation of the National Natural Science Foundation of China (60832011).

References and links

1. J. Nunez, X. Otazu, O. Fors, A. Prades, V. Pala, and R. Arbiol, “Multiresolution-based image fusion with additive wavelet decomposition,” IEEE Trans. Geosci. Rem. Sens. 37(3), 1204–1211 (1999). [CrossRef]  

2. C. A. Lieber, S. Urayama, N. Rahim, R. Tu, R. Saroufeem, B. Reubner, and S. G. Demos, “Multimodal near infrared spectral imaging as an exploratory tool for dysplastic esophageal lesion identification,” Opt. Express 14(6), 2211–2219 (2006), http://www.opticsinfobase.org/abstract.cfm?URI=oe-14-6-2211. [CrossRef]   [PubMed]  

3. Y. Chen, L. Wang, Z. Sun, Y. Jiang, and G. Zhai, “Fusion of color microscopic images based on bidimensional empirical mode decomposition,” Opt. Express 18(21), 21757–21769 (2010), http://www.opticsinfobase.org/abstract.cfm?URI=oe-18-21-21757. [CrossRef]   [PubMed]  

4. M. Leviner and M. Maltz, “A new multi-spectral feature level image fusion method for human interpretation,” Infrared Phys. Technol. 52(2-3), 79–88 (2009). [CrossRef]  

5. G. Pajares and J. M. de la Cruz, “A wavelet-based image fusion tutorial,” Pattern Recognit. 37(9), 1855–1872 (2004). [CrossRef]  

6. K. Amolins, Y. Zhang, and P. Dare, “Wavelet based image fusion techniques—an introduction, review and comparison,” ISPRS J. Photogramm. Remote Sens. 62(4), 249–263 (2007). [CrossRef]  

7. Q. Guihong, Z. Dali, and Y. Pingfan, “Medical image fusion by wavelet transform modulus maxima,” Opt. Express 9(4), 184–190 (2001), http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-9-4-184. [CrossRef]   [PubMed]  

8. F. Nencini, A. Garzelli, S. Baronti, and L. Alparone, “Remote sensing image fusion using the curvelet transform,” Inf. Fusion 8(2), 143–156 (2007). [CrossRef]  

9. N. Mitianoudis and T. Stathaki, “Pixel-based and region-based image fusion schemes using ICA bases,” Inf. Fusion 8(2), 131–142 (2007). [CrossRef]  

10. M. González-Audícana, J. L. Saleta, R. G. Catalán, and R. García, “Fusion of multispectral and panchromatic images using improved IHS and PCA mergers based on wavelet decomposition,” IEEE Trans. Geosci. Rem. Sens. 42(6), 1291–1299 (2004). [CrossRef]  

11. N. Cvejic, D. Bull, and N. Canagarajah, “Region-based multimodal image fusion using ICA bases,” IEEE Sens. J. 7(5), 743–751 (2007). [CrossRef]  

12. S. Li and B. Yang, “Multifocus image fusion using region segmentation and spatial frequency,” Image Vis. Comput. 26(7), 971–979 (2008). [CrossRef]  

13. A. Toet, M. A. Hogervorst, S. G. Nikolov, J. J. Lewis, T. D. Dixon, D. R. Bull, and C. N. Canagarajah, “Towards cognitive image fusion,” Inf. Fusion 11(2), 95–113 (2010). [CrossRef]  

14. Z. Wang, Y. Ma, and J. Gu, “Multi-focus image fusion using PCNN,” Pattern Recognit. 43(6), 2003–2016 (2010). [CrossRef]  

15. W. Huang and Z. Jing, “Multi-focus image fusion using pulse coupled neural network,” Pattern Recognit. Lett. 28(9), 1123–1132 (2007). [CrossRef]  

16. P. Soille, Morphological Image Analysis-Principle and Applications (Springer, 2003).

17. X. Bai, F. Zhou, and T. Jin, “Enhancement of dim small target through modified top-hat transformation under the condition of heavy clutter,” Signal Process. 90(5), 1643–1654 (2010). [CrossRef]  

18. X. Bai and F. Zhou, “Analysis of different modified top-hat transformations based on structuring element constructing,” Signal Process. 90(11), 2999–3003 (2010). [CrossRef]  

19. X. Bai and F. Zhou, “Top-hat selection transformation for infrared dim small target enhancement,” Imaging Sci. J. 58(2), 112–117 (2010). [CrossRef]  

20. M. Zeng, J. Li, and Z. Peng, “The design of top-hat morphological filter and application to infrared target detection,” Infrared Phys. Technol. 48(1), 67–76 (2006). [CrossRef]  

21. P. Jackway, “Improved morphological top-hat,” Electron. Lett. 36(14), 1194–1195 (2000). [CrossRef]  

22. S. Mukhopadhyay and B. Chanda, “Fusion of 2D grayscale images using multiscale morphology,” Pattern Recognit. 34(10), 1939–1949 (2001). [CrossRef]  

23. P. Jackway and M. Deriche, “Scale-space properties of the multiscale morphological dilation-erosion,” IEEE Trans. Pattern Anal. Mach. Intell. 18(1), 38–51 (1996). [CrossRef]  

24. M. A. Oliveira and N. J. Leite, “A multiscale directional operator and morphological tools for reconnecting broken ridges in fingerprint images,” Pattern Recognit. 41(1), 367–377 (2008). [CrossRef]  

25. I. De, B. Chanda, and B. Chattopadhyay, “Enhancing effective depth-of-field by image fusion using mathematical morphology,” Image Vis. Comput. 24(12), 1278–1287 (2006). [CrossRef]  

26. X. Bai, F. Zhou, and B. Xue, “Infrared image enhancement through contrast enhancement by using multi scale new top-hat transform,” Infrared Phys. Technol. 54(2), 61–69 (2011). [CrossRef]  

27. X. Bai and F. Zhou, “Analysis of new top-hat transformation and the application for infrared dim small target detection,” Pattern Recognit. 43(6), 2145–2156 (2010). [CrossRef]  

28. J. W. Roberts, J. Van Aardt, and F. Ahmed, “Assessment of image fusion procedures using entropy, image quality, and multispectral classification,” J. Appl. Remote Sens. 2(1), 023522 (2008). [CrossRef]  

29. Y. Chen, Z. Xue, and R. S. Blum, “Theoretical analysis of an information-based quality measure for image fusion,” Inf. Fusion 9(2), 161–175 (2008). [CrossRef]  

30. G. Qu, D. Zhang, and P. Yan, “Information measure for performance of image fusion,” Electron. Lett. 38(7), 313–315 (2002). [CrossRef]  
