Laser Journal, Vol. 45, Issue 7, 150 (2024)
YUAN Shuozhi, LIU Peipei*, ZHANG Yuxiao, XU Huyang, and LIU Sili
Author Affiliations
  • Chengdu University of Technology, Chengdu 610059, China
    DOI: 10.14016/j.cnki.jgzz.2024.07.150
    Citation: YUAN Shuozhi, LIU Peipei, ZHANG Yuxiao, XU Huyang, LIU Sili. Infrared and visible image fusion based on gradient residual dense block and shuffle attention[J]. Laser Journal, 2024, 45(7): 150
    References

    [1] Cao Y P, Guan D Y, Huang W L, et al. Pedestrian Detection with Unsupervised Multispectral Feature Learning Using Deep Neural Networks[J]. Information Fusion, 2019, 46: 206-217.

    [2] Li C L, Zhu C L, Huang Y, et al. Cross-modal Ranking with Soft Consistency and Noisy Labels for Robust RGB-T Tracking[C]//Proceedings of the European Conference on Computer Vision, Munich: ECCV, 2018: 808-823.

    [3] Lu Y, Wu Y, Liu B, et al. Cross-modality Person Re-identification with Shared-specific Feature Transfer[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Seattle: CVPR, 2020: 13379-13389.

    [7] Zhang Y, Liu Y, Sun P, et al. IFCNN: A General Image Fusion Framework Based on Convolutional Neural Network[J]. Information Fusion, 2020, 54: 99-118.

    [8] Ma J Y, Yu W, Liang P W, et al. FusionGAN: A Generative Adversarial Network for Infrared and Visible Image Fusion[J]. Information Fusion, 2019, 48: 11-26.

    [9] Prabhakar K R, Srikar V S, Babu R V, et al. DeepFuse: A Deep Unsupervised Approach for Exposure Fusion with Extreme Exposure Image Pairs[C]//IEEE International Conference on Computer Vision, Venice: ICCV, 2017: 4714-4722.

    [10] Li H, Wu X J. DenseFuse: A Fusion Approach to Infrared and Visible Images[J]. IEEE Transactions on Image Processing, 2018, 28(5): 2614-2623.

    [11] Li H, Wu X J, Durrani T, et al. NestFuse: An Infrared and Visible Image Fusion Architecture Based on Nest Connection and Spatial/Channel Attention Models[J]. IEEE Transactions on Instrumentation and Measurement, 2020, 69(12): 9645-9656.

    [12] Li H, Wu X J, Kittler J. RFN-Nest: An End-to-end Residual Fusion Network for Infrared and Visible Images[J]. Information Fusion, 2021, 73: 72-86.

    [13] Xu H, Ma J Y, Jiang J J, et al. U2Fusion: A Unified Unsupervised Image Fusion Network[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44(1): 502-518.

    [14] Maas A L, Hannun A Y, Ng A Y. Rectifier Nonlinearities Improve Neural Network Acoustic Models[C]//International Conference on Machine Learning, Atlanta: ICML, 2013, 30(1): 3.

    [15] Tang L F, Yuan J T, Ma J Y. Image Fusion in the Loop of High-level Vision Tasks: A Semantic-aware Real-time Infrared and Visible Image Fusion Network[J]. Information Fusion, 2022, 82: 28-42.

    [16] Ma J Y, Tang L F, Xu M L, et al. STDFusionNet: An Infrared and Visible Image Fusion Network Based on Salient Target Detection[J]. IEEE Transactions on Instrumentation and Measurement, 2021, 70: 1-13.

    [17] Zhang Q L, Yang Y B. SA-Net: Shuffle Attention for Deep Convolutional Neural Networks[C]//IEEE International Conference on Acoustics, Speech and Signal Processing, Toronto: ICASSP, 2021: 2235-2239.

    [18] Wu Y X, He K M. Group Normalization[C]//European Conference on Computer Vision, Munich: ECCV, 2018: 3-19.

    [19] Hu J, Shen L, Albanie S, et al. Squeeze-and-excitation Networks[C]//IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City: CVPR, 2018: 7132-7141.

    [20] Wang Z, Bovik A C, Sheikh H R, et al. Image Quality Assessment: From Error Visibility to Structural Similarity[J]. IEEE Transactions on Image Processing, 2004, 13(4): 600-612.

    [21] Toet A. The TNO Multiband Image Data Collection[J]. Data in Brief, 2017, 15: 249-251.

    [22] Lin T Y, Maire M, Belongie S, et al. Microsoft COCO: Common Objects in Context[C]//European Conference on Computer Vision, Zurich: ECCV, 2014: 740-755.
