• Infrared Technology
  • Vol. 43, Issue 6, 566 (2021)
Di LUO1,2, Congqing WANG1,2, and Yongjun ZHOU2,*
Author Affiliations
  • 1[in Chinese]
  • 2[in Chinese]
    LUO Di, WANG Congqing, ZHOU Yongjun. A Visible and Infrared Image Fusion Method based on Generative Adversarial Networks and Attention Mechanism[J]. Infrared Technology, 2021, 43(6): 566
    References

    [1] MA J, MA Y, LI C. Infrared and visible image fusion methods and applications: a survey[J]. Information Fusion, 2019, 45: 153-178.

    [2] Burt P J, Adelson E H. The Laplacian pyramid as a compact image code[J]. IEEE Transactions on Communications, 1983, 31(4): 532-540.

    [3] Selesnick I W, Baraniuk R G, Kingsbury N C. The dual-tree complex wavelet transform[J]. IEEE Signal Processing Magazine, 2005, 22(6): 123-151.

    [4] da Cunha A L, Zhou J, Do M N. Nonsubsampled contourlet transform: filter design and applications in denoising[C]//IEEE International Conference on Image Processing (ICIP), 2005: 749 (doi: 10.1109/ICIP.2005.1529859).

    [5] Hariharan H, Koschan A, Abidi M. The direct use of curvelets in multifocus fusion[C]//16th IEEE International Conference on Image Processing (ICIP), 2009: 2185-2188 (doi: 10.1109/ICIP.2009.5413840).

    [6] LI H, WU X J. DenseFuse: a fusion approach to infrared and visible images[J]. IEEE Transactions on Image Processing, 2019, 28(5): 2614-2623 (doi: 10.1109/TIP.2018.2887342).

    [7] MA J, YU W, LIANG P, et al. FusionGAN: a generative adversarial network for infrared and visible image fusion[J]. Information Fusion, 2019, 48: 11-26.

    [8] Ronneberger O, Fischer P, Brox T. U-net: convolutional networks for biomedical image segmentation[C]//International Conference on Medical Image Computing and Computer-assisted Intervention, 2015: 234-241.

    [9] Hwang S, Park J, Kim N, et al. Multispectral pedestrian detection: Benchmark dataset and baseline[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015: 1037-1045.

    [10] Goodfellow I, Pouget-Abadie J, Mirza M, et al. Generative adversarial nets[C]//Advances in Neural Information Processing Systems, 2014: 2672-2680.

    [11] Radford A, Metz L, Chintala S. Unsupervised representation learning with deep convolutional generative adversarial networks[J/OL]. arXiv preprint arXiv:1511.06434, 2015: https://arxiv.org/abs/1511.06434v1.

    [12] MAO X, LI Q, XIE H, et al. Least squares generative adversarial networks[C]//2017 IEEE International Conference on Computer Vision (ICCV), 2017: 2813-2821(doi: 10.1109/ICCV.2017.304).

    [13] Isola P, Zhu J Y, Zhou T H, et al. Image-to-image translation with conditional adversarial networks[C]//2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017: 5967-5976 (doi: 10.1109/CVPR.2017.632).

    [14] Jaderberg M, Simonyan K, Zisserman A. Spatial transformer networks[C]//Advances in Neural Information Processing Systems, 2015: 2017-2025.

    [15] HU J, SHEN L, SUN G. Squeeze-and-excitation networks[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018: 7132-7141.

    [16] Woo S, Park J, Lee J Y, et al. CBAM: convolutional block attention module[C]//Proceedings of the European Conference on Computer Vision (ECCV), 2018: 3-19.

    CLP Journals

    [1] CHEN Xin. Infrared and Visible Image Fusion Using Double Attention Generative Adversarial Networks[J]. Infrared Technology, 2023, 45(6): 639

    [2] LI Yongping, YANG Yanchun, DANG Jianwu, WANG Yangping. Infrared and Visible Image Fusion Based on Transform Domain VGGNet19[J]. Infrared Technology, 2022, 44(12): 1293
