• Infrared Technology
  • Vol. 45, Issue 2, 171 (2023)
Tianyuan WANG1, Xiaoqing LUO1,*, and Zhancheng ZHANG2
Author Affiliations
  • 1[in Chinese]
  • 2[in Chinese]
    DOI:
    WANG Tianyuan, LUO Xiaoqing, ZHANG Zhancheng. Infrared and Visible Image Fusion Based on Self-attention Learning[J]. Infrared Technology, 2023, 45(2): 171

    Abstract

    Because existing fusion rules fail to preserve image saliency, a self-attention-guided infrared and visible image fusion method is proposed. First, the feature maps and self-attention maps of the source images are learned by a self-attention learning mechanism in the feature learning layer. Next, the self-attention map, which captures long-range dependencies in the image, is used to design a weighted-average fusion strategy. Finally, the fused feature maps are reconstructed to obtain the fused image; image feature encoding, the self-attention mechanism, the fusion rule, and fused-feature decoding are all learned within a generative adversarial network. Experiments on the real-world TNO dataset show that the learned self-attention units represent salient regions and benefit fusion-rule design, and that the proposed algorithm outperforms state-of-the-art (SOTA) infrared and visible image fusion algorithms in both objective and subjective evaluation, retaining the detail information of visible images and the target information of infrared images.
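
    The sketch below illustrates, in PyTorch, the kind of pipeline the abstract describes: a non-local-style self-attention block that yields both a feature map and a single-channel attention map per source image, followed by an attention-guided weighted-average fusion of the two feature maps. It is a minimal illustration under stated assumptions, not the authors' released implementation; the module and function names (SelfAttention, fuse_features) and the reduction/normalization choices are hypothetical.

    import torch
    import torch.nn as nn

    class SelfAttention(nn.Module):
        """Non-local self-attention over a feature map (captures long-range dependencies)."""
        def __init__(self, channels, reduction=8):
            super().__init__()
            self.query = nn.Conv2d(channels, channels // reduction, 1)
            self.key = nn.Conv2d(channels, channels // reduction, 1)
            self.value = nn.Conv2d(channels, channels, 1)

        def forward(self, feat):
            b, c, h, w = feat.shape
            q = self.query(feat).flatten(2).transpose(1, 2)   # (b, hw, c')
            k = self.key(feat).flatten(2)                     # (b, c', hw)
            attn = torch.softmax(torch.bmm(q, k), dim=-1)     # (b, hw, hw)
            v = self.value(feat).flatten(2).transpose(1, 2)   # (b, hw, c)
            out = torch.bmm(attn, v).transpose(1, 2).view(b, c, h, w)
            # Collapse channels into a single-channel attention/saliency map.
            attn_map = torch.sigmoid(out.mean(dim=1, keepdim=True))
            return out, attn_map

    def fuse_features(feat_ir, feat_vis, attn_ir, attn_vis, eps=1e-8):
        """Attention-guided weighted average of infrared and visible feature maps."""
        w_ir = attn_ir / (attn_ir + attn_vis + eps)
        w_vis = attn_vis / (attn_ir + attn_vis + eps)
        return w_ir * feat_ir + w_vis * feat_vis

    In the full method described by the abstract, the encoder producing the feature maps, the attention block, and the decoder reconstructing the fused image would be trained jointly under an adversarial loss; only the fusion rule itself is shown here.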