Abstract
At present, deep-learning-based infrared and visible image fusion methods suffer from insufficient extraction of source-image features, causing an imbalance between infrared and visible information in the fused images. To address this problem, a multiscale feature pyramid network based on activity level weight selection (MFPN-AWS), with a complete downsampling–upsampling structure, is proposed. The network consists of three parts: a downsampling convolutional network, an AWS fusion layer, and an upsampling convolutional network. First, multiscale deep features are extracted by the downsampling convolutional network, capturing rich information from the intermediate layers. Second, the AWS layer exploits a dual fusion strategy based on the ${l_1}$-norm and global pooling to describe target saliency and texture detail, effectively balancing the multiscale infrared and visible features. Finally, the multiscale fused features are reconstructed by the upsampling convolutional network to obtain the fused image. Compared with nine state-of-the-art methods on the publicly available TNO and VIFB datasets, MFPN-AWS produces more natural and balanced fusion results, with better overall clarity and more salient targets, and achieves the best values on two metrics: mutual information and visual fidelity.
© 2022 Optica Publishing Group
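The dual fusion strategy described above can be sketched in a few lines of NumPy. The following is an illustrative reconstruction of a generic activity-level weighting rule, not the authors' exact AWS implementation: the function names, the per-pixel ${l_1}$-norm weighting, and the channel-wise global-pooling weighting are assumptions based on how such rules are commonly defined in deep-feature fusion.

```python
import numpy as np

def l1_activity_fusion(feat_ir: np.ndarray, feat_vis: np.ndarray) -> np.ndarray:
    """Fuse two feature maps of shape (C, H, W) with per-pixel l1-norm weights.

    Illustrative sketch: the activity level at each spatial location is the
    l1-norm of the feature vector across channels, and the two maps are
    blended by their normalised activity levels.
    """
    act_ir = np.abs(feat_ir).sum(axis=0)    # (H, W) activity map, infrared
    act_vis = np.abs(feat_vis).sum(axis=0)  # (H, W) activity map, visible
    total = act_ir + act_vis + 1e-8         # avoid division by zero
    w_ir, w_vis = act_ir / total, act_vis / total
    return w_ir[None] * feat_ir + w_vis[None] * feat_vis

def global_pooling_fusion(feat_ir: np.ndarray, feat_vis: np.ndarray) -> np.ndarray:
    """Fuse two (C, H, W) feature maps with channel-wise global-pooling weights.

    Illustrative sketch: each channel's weight is its globally averaged
    absolute activation, normalised against the other modality.
    """
    g_ir = np.abs(feat_ir).mean(axis=(1, 2))   # (C,) pooled activity, infrared
    g_vis = np.abs(feat_vis).mean(axis=(1, 2))  # (C,) pooled activity, visible
    total = g_ir + g_vis + 1e-8
    w_ir, w_vis = g_ir / total, g_vis / total
    return w_ir[:, None, None] * feat_ir + w_vis[:, None, None] * feat_vis
```

In a full pipeline these weighted sums would be applied at every pyramid scale before the upsampling network reconstructs the fused image; the ${l_1}$-norm branch emphasises spatially salient targets, while the global-pooling branch preserves channel-level texture statistics.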