• Infrared Technology
  • Vol. 44, Issue 6, 571 (2022)
Junyao WANG*, Zhishe WANG, Yuanyuan WU, Yanlin CHEN, and Wenyu SHAO
Author Affiliations
  • [in Chinese]
    WANG Junyao, WANG Zhishe, WU Yuanyuan, CHEN Yanlin, SHAO Wenyu. Multi-Feature Adaptive Fusion Method for Infrared and Visible Images[J]. Infrared Technology, 2022, 44(6): 571

    Abstract

    Owing to their different imaging mechanisms, infrared images represent typical targets through pixel intensity distributions, whereas visible images describe texture details through edges and gradients. Existing fusion methods fail to adapt to the characteristics of the source images, so their results do not simultaneously retain infrared target features and visible texture details. Therefore, a multi-feature adaptive fusion method for infrared and visible images is proposed in this study. First, a multi-scale densely connected network is constructed that effectively reuses intermediate features across scales and levels, further strengthening feature extraction and reconstruction. Second, a multi-feature adaptive loss function is designed: using pixel intensity and gradient as measurement criteria, multi-scale features of the source images are extracted with a VGG-16 network, and feature weight coefficients are computed from the degree of information preservation. This loss function supervises network training so that the characteristic information of each source image is extracted in a balanced manner, yielding a better fusion result. Experimental results on public datasets demonstrate that the proposed method outperforms other typical methods in both subjective and objective evaluations.
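    The sketch below illustrates the general idea of an intensity- and gradient-weighted adaptive fusion loss as described in the abstract. It is a minimal illustration, not the authors' implementation: it assumes PyTorch and single-channel (grayscale) inputs, approximates gradients with Sobel filters, and, for brevity, derives the weight coefficients directly from the source images rather than from VGG-16 multi-scale features. The function names (sobel_gradient, adaptive_fusion_loss) and the softmax temperature c are illustrative.

```python
# Hedged sketch of a multi-feature adaptive fusion loss (PyTorch).
# Names and weighting scheme are illustrative, not the paper's exact formulation.
import torch
import torch.nn.functional as F

def sobel_gradient(img):
    """Approximate gradient magnitude of a (N, 1, H, W) image with Sobel kernels."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=img.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(img, kx, padding=1)
    gy = F.conv2d(img, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)

def adaptive_fusion_loss(fused, ir, vis, c=100.0):
    """Weight intensity and gradient terms by how much of each kind of
    information the two source images carry (assumed measurement criteria)."""
    # Information measures: pixel energy and gradient energy of each source.
    ir_intensity = ir.pow(2).mean()
    vis_intensity = vis.pow(2).mean()
    ir_grad = sobel_gradient(ir).pow(2).mean()
    vis_grad = sobel_gradient(vis).pow(2).mean()

    # Adaptive weight coefficients via a softmax over the measures.
    w_int = torch.softmax(torch.stack([ir_intensity, vis_intensity]) / c, dim=0)
    w_grad = torch.softmax(torch.stack([ir_grad, vis_grad]) / c, dim=0)

    # Intensity term favors infrared targets; gradient term favors visible texture.
    loss_int = w_int[0] * F.mse_loss(fused, ir) + w_int[1] * F.mse_loss(fused, vis)
    loss_grad = (w_grad[0] * F.l1_loss(sobel_gradient(fused), sobel_gradient(ir)) +
                 w_grad[1] * F.l1_loss(sobel_gradient(fused), sobel_gradient(vis)))
    return loss_int + loss_grad
```

    In training, such a loss would be applied to the fused output of the multi-scale densely connected network so that the weighting adapts per image pair rather than being fixed in advance.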