[1] LI H, WU X J. DenseFuse: a fusion approach to infrared and visible images[J]. IEEE Transactions on Image Processing, 2019, 28(5): 2614-2623.
[2] HE G Q, JI J Q, DONG D D, et al. Infrared and visible image fusion method by using hybrid representation learning[J]. IEEE Geoscience and Remote Sensing Letters, 2019, 16(11): 1796-1800.
[7] LIU C H, QI Y, DING W R. Infrared and visible image fusion method based on saliency detection in sparse domain[J]. Infrared Physics & Technology, 2017, 83: 94-102.
[8] GUO L, YANG B. Fusion of infrared and visible images based on visual saliency[J]. Computer Science, 2015, 42(6): 211-235.
[21] YANG Z, ZENG S. TPFusion: Texture preserving fusion of infrared and visible images via dense networks[J]. Entropy, 2022, 24(2): 294.
[22] LI J, HUO H T, LI C, et al. AttentionFGAN: infrared and visible image fusion using attention-based generative adversarial networks[J]. IEEE Transactions on Multimedia, 2020, 23: 1383-1396.
[25] MA J Y, XU H, JIANG J J, et al. DDcGAN: a dual-discriminator conditional generative adversarial network for multi-resolution image fusion[J]. IEEE Transactions on Image Processing, 2020, 29: 4980-4995.
[27] LIU Y, LIU S P, WANG Z F. A general framework for image fusion based on multi-scale transform and sparse representation[J]. Information Fusion, 2015, 24: 147-164.
[28] LIU Y, CHEN X, WARD R K, et al. Image fusion with convolutional sparse representation[J]. IEEE Signal Processing Letters, 2016, 23(12): 1882-1886.