• Laser Journal
  • Vol. 45, Issue 12, 106 (2024)
LIU Peipei, ZHANG Yuxiao*, YUAN Shuozhi, WANG Shuo, and XU Huyang
Author Affiliations
  • Chengdu University of Technology, Chengdu 610059, China
    DOI: 10.14016/j.cnki.jgzz.2024.12.106
    LIU Peipei, ZHANG Yuxiao, YUAN Shuozhi, WANG Shuo, XU Huyang. Infrared and visible image fusion based on shuffle attention mechanism and residual dense network[J]. Laser Journal, 2024, 45(12): 106
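    As background for the shuffle attention mechanism named in the title, the sketch below illustrates the shuffle attention unit of [19], which splits grouped features into a channel-attention branch and a spatial-attention branch and recombines them with the ShuffleNet-style channel shuffle of [20]. This is a minimal PyTorch rendering under my own class and parameter names, not the authors' fusion network.

```python
# Minimal, illustrative sketch of a shuffle attention unit (after [19], [20]).
# Names (ShuffleAttention, cw, cb, sw, sb) are my own, not the paper's code.
import torch
import torch.nn as nn

class ShuffleAttention(nn.Module):
    def __init__(self, channels: int, groups: int = 8):
        super().__init__()
        assert channels % (2 * groups) == 0, "channels must divide into 2*groups"
        self.groups = groups
        c = channels // (2 * groups)  # channels per half-branch in each group
        # Learnable per-channel scale/shift for the channel-attention gate
        self.cw = nn.Parameter(torch.zeros(1, c, 1, 1))
        self.cb = nn.Parameter(torch.ones(1, c, 1, 1))
        # Learnable per-channel scale/shift for the spatial-attention gate
        self.sw = nn.Parameter(torch.zeros(1, c, 1, 1))
        self.sb = nn.Parameter(torch.ones(1, c, 1, 1))
        self.gn = nn.GroupNorm(c, c)          # spatial statistics per channel
        self.pool = nn.AdaptiveAvgPool2d(1)   # global average pooling
        self.sigmoid = nn.Sigmoid()

    @staticmethod
    def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
        # ShuffleNet-style shuffle [20]: lets information cross group borders
        b, c, h, w = x.shape
        x = x.view(b, groups, c // groups, h, w)
        x = x.transpose(1, 2).contiguous()
        return x.view(b, c, h, w)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        x = x.view(b * self.groups, c // self.groups, h, w)
        xc, xs = x.chunk(2, dim=1)  # split each group into two sub-features
        # Channel attention: pooled descriptor -> affine -> sigmoid gate
        xc = xc * self.sigmoid(self.pool(xc) * self.cw + self.cb)
        # Spatial attention: group-normalised map -> affine -> sigmoid gate
        xs = xs * self.sigmoid(self.gn(xs) * self.sw + self.sb)
        x = torch.cat([xc, xs], dim=1).view(b, c, h, w)
        return self.channel_shuffle(x, 2)  # interleave the two branches
```

    The unit is shape-preserving, e.g. `ShuffleAttention(64)(torch.randn(1, 64, 32, 32))` returns a tensor of the same size, so it can be dropped between convolutional stages of an encoder such as a residual dense network.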
    References

    [1] Chen J, Li X, Luo L, et al. Multi-focus image fusion based on multi-scale gradients and image matting[J]. IEEE Transactions on Multimedia, 2021, 24: 655-667.

    [2] Zhang Q, Fu Y, Li H, et al. Dictionary learning method for joint sparse representation-based image fusion[J]. Optical Engineering, 2013, 52(5): 057006.

    [3] Kumar B K S. Image fusion based on pixel significance using cross bilateral filter[J]. Signal, Image and Video Processing, 2015, 9(5): 1193-1204.

    [4] Li H, Wu X J, Kittler J. Infrared and Visible Image Fusion using a Deep Learning Framework[C]//2018 24th International Conference on Pattern Recognition (ICPR), Beijing, China, 2018: 2705-2710.

    [5] Li H, Wu X, Durrani T S. Infrared and visible image fusion with ResNet and zero-phase component analysis[J]. Infrared Physics & Technology, 2019, 102: 103039.

    [6] Xu H, Zhang H, Ma J Y. Classification Saliency-Based Rule for Visible and Infrared Image Fusion[J]. IEEE Transactions on Computational Imaging, 2021, 7: 824-836.

    [7] Prabhakar K R, Srikar V S, Babu R V. DeepFuse: A Deep Unsupervised Approach for Exposure Fusion with Extreme Exposure Image Pairs[C]//2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 2017: 4724-4732.

    [8] Li J, Huo H T, Li C, et al. AttentionFGAN: Infrared and Visible Image Fusion using Attention-based Generative Adversarial Networks[J]. IEEE Transactions on Multimedia, 2021, 23: 1383-1396.

    [9] Ma J, Zhang H, Shao Z, et al. GANMcC: A Generative Adversarial Network With Multiclassification Constraints for Infrared and Visible Image Fusion[J]. IEEE Transactions on Instrumentation and Measurement, 2021, 70: 1-14.

    [10] Zhou H, Wu W, Zhang Y D, et al. Semantic-Supervised Infrared and Visible Image Fusion Via a Dual-Discriminator Generative Adversarial Network[J]. IEEE Transactions on Multimedia, 2023, 25: 635-648.

    [11] Tang L, Yuan J, Ma J. Image fusion in the loop of high-level vision tasks: A semantic-aware real-time infrared and visible image fusion network[J]. Information Fusion, 2022, 82: 28-42.

    [12] Li H, Cen Y, Liu Y, et al. Different Input Resolutions and Arbitrary Output Resolution: A Meta Learning-Based Deep Framework for Infrared and Visible Image Fusion[J]. IEEE Transactions on Image Processing, 2021, 30: 4070-4083.

    [13] Jian L, Yang X, Liu Z, et al. SEDRFuse: A Symmetric Encoder-Decoder with Residual Block Network for Infrared and Visible Image Fusion[J]. IEEE Transactions on Instrumentation and Measurement, 2021, 70: 1-15.

    [14] Ma J Y, Yu W, Liang P, et al. FusionGAN: A generative adversarial network for infrared and visible image fusion[J]. Information Fusion, 2019, 48: 11-26.

    [15] Zhang Y, Liu Y, Sun P, et al. IFCNN: A general image fusion framework based on convolutional neural network[J]. Information Fusion, 2020, 54(2): 99-118.

    [16] Li H, Wu X J. DenseFuse: A Fusion Approach to Infrared and Visible Images[J]. IEEE Transactions on Image Processing, 2019, 28(5): 2614-2623.

    [17] Li H, Wu X J, Durrani T. NestFuse: An Infrared and Visible Image Fusion Architecture based on Nest Connection and Spatial/Channel Attention Models[J]. IEEE Transactions on Instrumentation and Measurement, 2020, 69(12): 9645-9656.

    [18] Zhao Z X, Bai H, Zhang J S, et al. CDDFuse: Correlation-Driven Dual-Branch Feature Decomposition for Multi-Modality Image Fusion[C]//2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 2023: 5906-5916.

    [19] Zhang Q L, Yang Y B. SA-Net: Shuffle Attention for Deep Convolutional Neural Networks[C]//2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Toronto, ON, Canada, 2021: 2235-2239.

    [20] Zhang X, Zhou X, Lin M, et al. ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices[C]//2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 2018: 6848-6856.

    [21] Li H, Wu X J. RFN-Nest: An end-to-end residual fusion network for infrared and visible images[J]. Information Fusion, 2021, 73(1): 72-86.

    [22] Lin T Y, Maire M, Belongie S, et al. Microsoft COCO: Common Objects in Context[C]//European Conference on Computer Vision (ECCV). Cham, Switzerland: Springer, 2014: 740-755.

    [23] Xu H, Ma J Y, Jiang J J, et al. U2Fusion: A Unified Unsupervised Image Fusion Network[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44(1): 502-518.

    [24] Li H, Xu T Y, Wu X J, et al. LRRNet: A Novel Representation Learning Guided Fusion Network for Infrared and Visible Images[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, 45(9): 11040-11052.
