Abstract
For noisy images, most existing sparse representation-based models perform fusion and denoising simultaneously using the coefficients of a universal dictionary. This paper proposes an image fusion method based on a cartoon + texture dictionary pair combined with a deep neural network-based combination (DNNC). In our model, denoising and fusion are carried out alternately. The proposed method consists of three main steps: denoising, fusion, and network-based combination. More specifically, (1) denoise the source images using external and internal methods separately; (2) fuse these preliminary denoised results with the external and internal cartoon + texture dictionary pairs to obtain the external cartoon + texture sparse representation result (E-CTSR) and the internal cartoon + texture sparse representation result (I-CTSR); and (3) combine E-CTSR and I-CTSR using the DNNC to obtain the final result (EI-CTSR). Experimental results demonstrate that EI-CTSR outperforms not only the stand-alone E-CTSR and I-CTSR methods but also state-of-the-art methods such as sparse representation (SR) and adaptive sparse representation (ASR) for isomorphic images, and that E-CTSR outperforms SR and ASR for heterogeneous multi-mode images.
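The three-step pipeline described above can be sketched schematically. The sketch below is only an illustrative toy, not the paper's method: the mean filter, absolute-value activity rule, and fixed-weight blend are hypothetical stand-ins for the unspecified external/internal denoisers, the cartoon + texture sparse-coding fusion, and the learned DNNC combination, respectively.

```python
import numpy as np

def denoise(img, k=3):
    # Toy stand-in for the external/internal denoising step (step 1):
    # a simple k x k local mean filter.
    pad = k // 2
    p = np.pad(img.astype(float), pad, mode="edge")
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = p[i:i + k, j:j + k].mean()
    return out

def fuse(a, b):
    # Toy stand-in for cartoon + texture sparse-representation fusion
    # (step 2, producing E-CTSR or I-CTSR): keep, per pixel, the value
    # with the larger magnitude as a crude "activity" rule.
    return np.where(np.abs(a) >= np.abs(b), a, b)

def combine(e_ctsr, i_ctsr, w=0.5):
    # Toy stand-in for the DNNC combination (step 3): a fixed-weight
    # blend in place of the learned network.
    return w * e_ctsr + (1.0 - w) * i_ctsr

def fusion_pipeline(src1, src2):
    # Step 1: denoise each noisy source image separately.
    d1, d2 = denoise(src1), denoise(src2)
    # Step 2: fuse the preliminary denoised results twice, once per
    # (external/internal) dictionary pair. Here both branches share the
    # same toy rule, so they coincide.
    e_ctsr = fuse(d1, d2)
    i_ctsr = fuse(d1, d2)
    # Step 3: combine the two branch results into the final image.
    return combine(e_ctsr, i_ctsr)
```

In the actual method the two branches of step 2 differ, because the external dictionary pair is learned from external data while the internal pair is learned from the source images themselves; the sketch only fixes the data flow of the denoise-fuse-combine alternation.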
© 2017 Optical Society of America