• Infrared Technology
  • Vol. 44, Issue 3, 268 (2022)
Haicheng QU*, Yuping WANG, Jiankang GAO, and Siqi ZHAO
    QU Haicheng, WANG Yuping, GAO Jiankang, ZHAO Siqi. Mode Adaptive Infrared and Visible Image Fusion[J]. Infrared Technology, 2022, 44(3): 268.

    References

    [3] Reinhard E, Adhikhmin M, Gooch B, et al. Color transfer between images[J]. IEEE Computer Graphics and Applications, 2001, 21(5): 34-41.

    [4] Kumar P, Mittal A, Kumar P. Fusion of thermal infrared and visible spectrum video for robust surveillance[C]// Proceedings of the Indian Conference on Computer Vision, Graphics and Image Processing, 2006: 528-539.

    [6] Bavirisetti D P, Dhuli R. Fusion of infrared and visible sensor images based on anisotropic diffusion and Karhunen-Loeve transform[J]. IEEE Sensors Journal, 2016, 16(1): 203-209.

    [7] Kumar B K S. Image fusion based on pixel significance using cross bilateral filter[J]. Signal, Image and Video Processing, 2015, 9(5): 1193-1204.

    [8] MA J, ZHOU Z, WANG B. Infrared and visible image fusion based on visual saliency map and weighted least square optimization[J]. Infrared Physics & Technology, 2017, 82: 8-17.

    [9] LIU Y, WANG Z. Simultaneous image fusion and denoising with adaptive sparse representation[J]. IET Image Processing, 2014, 9(5): 347-357.

    [10] Burt P, Adelson E. The Laplacian pyramid as a compact image code[J]. IEEE Transactions on Communications, 1983, 31(4): 532-540.

    [12] LI S, YIN H, FANG L. Group-sparse representation with dictionary learning for medical image denoising and fusion[J]. IEEE Transactions on Biomedical Engineering, 2012, 59(12): 3450-3459.

    [13] Prabhakar K R, Srikar V S, Babu R V. DeepFuse: A deep unsupervised approach for exposure fusion with extreme exposure image pairs[C]//Proceedings of the 2017 IEEE International Conference on Computer Vision, 2017: 4724-4732.

    [14] MA J Y, YU W, LIANG P W, et al. FusionGAN: A generative adversarial network for infrared and visible image fusion[J]. Information Fusion, 2019, 48: 11-26.

    [15] LI H, WU X J, Kittler J. Infrared and visible image fusion using a deep learning framework[C]//The 24th International Conference on Pattern Recognition (ICPR), 2018: 2705-2710.

    [16] LI H, WU X. DenseFuse: A fusion approach to infrared and visible images[J]. IEEE Transactions on Image Processing, 2019, 28(5): 2614-2623.

    [18] XU Han, MA Jiayi, JIANG Junjun, et al. U2Fusion: A unified unsupervised image fusion network[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020, 44: 502-518.

    [19] Goodfellow I, Pouget-Abadie J, Mirza M, et al. Generative adversarial nets[C]//Proceedings of the 27th International Conference on Neural Information Processing Systems, 2014: 2672-2680.

    [20] HE K, ZHANG X, REN S, et al. Deep residual learning for image recognition[C]//IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016: 770-778.

    [21] Figshare. TNO Image Fusion Dataset[OL]. [2018-09-15]. https://figshare.com/articles/TNO_Image_Fusion_Dataset/1008029.

    [22] Hwang S, Park J, Kim N, et al. Multispectral pedestrian detection: benchmark dataset and baseline[C]//2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015: 1037-1045.

    [23] Roberts J W, Aardt J V, Ahmed F. Assessment of image fusion procedures using entropy, image quality, and multispectral classification[J]. Journal of Applied Remote Sensing, 2008, 2(1): 023522.

    [24] QU G, ZHANG D, YAN P. Information measure for performance of image fusion[J]. Electronics Letters, 2002, 38(7): 313-315.

    [25] Kumar B K S. Multifocus and multispectral image fusion based on pixel significance using discrete cosine harmonic wavelet transform[J]. Signal, Image and Video Processing, 2013, 7(6): 1125-1143.
