[1] YANG B, JIANG Z, PAN D, et al. Detail-aware near infrared and visible fusion with multi-order hyper-Laplacian priors[J]. Information Fusion, 2023, 99: 101851.
[2] ZHANG X, DAI X, ZHANG X, et al. Joint principal component analysis and total variation for infrared and visible image fusion[J]. Infrared Physics & Technology, 2023, 128: 104523.
[3] TANG L, XIANG X, ZHANG H, et al. DIVFusion: Darkness-free infrared and visible image fusion[J]. Information Fusion, 2023, 91: 477-493.
[4] WANG Z, SHAO W, CHEN Y, et al. Infrared and visible image fusion via interactive compensatory attention adversarial learning[J]. IEEE Transactions on Multimedia, 2022, 25: 1-12.
[5] TANG Lili, LIU Gang, XIAO Gang. Infrared and visible image fusion method based on dual-path cascade adversarial mechanism[J]. Acta Photonica Sinica, 2021, 50(9): 0910004.
[6] WANG Zhishe, SHAO Wenyu, YANG Fengbao, et al. Infrared and visible image fusion method via interactive attention-based generative adversarial network[J]. Acta Photonica Sinica, 2022, 51(4): 0410002.
[7] RAO D, XU T, WU X. TGFuse: An infrared and visible image fusion approach based on transformer and generative adversarial network[J]. IEEE Transactions on Image Processing, 2023, 5: 1-11.
[8] XIE Q, MA L, GUO Z, et al. Infrared and visible image fusion based on NSST and phase consistency adaptive dual channel PCNN[J]. Infrared Physics & Technology, 2023, 131: 104659.
[9] JIANG Zetao, JIANG Qi, HUANG Yongsong, et al. Infrared and low-light-level visible light enhancement image fusion method based on latent low-rank representation and composite filtering[J]. Acta Photonica Sinica, 2020, 49(4): 0410001.
[10] WANG C, WU Y, YU Y, et al. Joint patch clustering-based adaptive dictionary and sparse representation for multi-modality image fusion[J]. Machine Vision and Applications, 2022, 33(5): 69.
[11] PANG Y, ZHAO X, ZHANG L, et al. Multi-scale interactive network for salient object detection[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020: 9413-9422.
[12] CHEN J, WU K, CHENG Z, et al. A saliency-based multiscale approach for infrared and visible image fusion[J]. Signal Processing, 2021, 182: 107936.
[13] VESHKI F G, VOROBYOV S A. Coupled feature learning via structured convolutional sparse coding for multimodal image fusion[C]//Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2022: 2500-2504.
[14] LIU Y, CHEN X, CHENG J, et al. Infrared and visible image fusion with convolutional neural networks[J]. International Journal of Wavelets, Multiresolution and Information Processing, 2018, 16(3): 1850018.
[15] PRABHAKAR K R, SRIKAR V S, BABU R V. DeepFuse: a deep unsupervised approach for exposure fusion with extreme exposure image pairs[C]//Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017: 4714-4722.
[16] MA J, YU W, LIANG P, et al. FusionGAN: A generative adversarial network for infrared and visible image fusion[J]. Information Fusion, 2019, 48: 11-26.
[17] ZHANG H, YUAN J, TIAN X, et al. GAN-FM: Infrared and visible image fusion using GAN with full-scale skip connection and dual Markovian discriminators[J]. IEEE Transactions on Computational Imaging, 2021, 7: 1134-1147.
[18] SUN X, HU S, MA X, et al. IMGAN: Infrared and visible image fusion using a novel intensity masking generative adversarial network[J]. Infrared Physics & Technology, 2022, 125: 104221.
[19] LE Z, HUANG J, XU H, et al. UIFGAN: An unsupervised continual-learning generative adversarial network for unified image fusion[J]. Information Fusion, 2022, 88: 305-318.
[20] TOET A. The TNO multiband image data collection[J]. Data in Brief, 2017, 15: 249-251.
[21] ZHANG H, MA J. SDNet: a versatile squeeze-and-decomposition network for real-time image fusion[J]. International Journal of Computer Vision, 2021, 129: 2761-2785.
[22] ZHAO Z, XU S, ZHANG C, et al. Bayesian fusion for infrared and visible images[J]. Signal Processing, 2020, 177: 107734.
[23] CHEN J, LI X, WU K. Infrared and visible image fusion based on relative total variation decomposition[J]. Infrared Physics & Technology, 2022, 123: 104112.
[24] XU H, MA J, JIANG J, et al. U2Fusion: a unified unsupervised image fusion network[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44(1): 502-518.
[25] MA J, TANG L, FAN F, et al. SwinFusion: cross-domain long-range learning for general image fusion via Swin Transformer[J]. IEEE/CAA Journal of Automatica Sinica, 2022, 9(7): 1200-1217.
[26] TANG L, YUAN J, ZHANG H, et al. PIAFusion: a progressive infrared and visible image fusion network based on illumination aware[J]. Information Fusion, 2022, 83: 79-92.
[27] XUE W, WANG A, ZHAO L. FLFuse-Net: A fast and lightweight infrared and visible image fusion network via feature flow and edge compensation for salient information[J]. Infrared Physics & Technology, 2022, 127: 104383.
[28] TAN W, ZHOU H, SONG J, et al. Infrared and visible image perceptive fusion through multi-level Gaussian curvature filtering image decomposition[J]. Applied Optics, 2019, 58(12): 3064-3073.
[29] VESHKI F G, OUZIR N, VOROBYOV S A, et al. Multimodal image fusion via coupled feature learning[J]. Signal Processing, 2022, 200: 108637.
[30] PANIGRAHY C, SEAL A, MAHATO N K. Fractal dimension based parameter adaptive dual channel PCNN for multi-focus image fusion[J]. Optics and Lasers in Engineering, 2020, 133: 106141.
[31] PANIGRAHY C, SEAL A, MAHATO N K. Parameter adaptive unit-linking dual-channel PCNN based infrared and visible image fusion[J]. Neurocomputing, 2022, 514: 21-38.