[1] F ZHANG, H PENG, L YU et al. Dual-modality space-time memory network for RGBT tracking. IEEE Transactions on Instrumentation and Measurement, 72, 1-11(2023).
[5] H LI, X J WU. DenseFuse: A fusion approach to infrared and visible images. IEEE Transactions on Image Processing, 28, 2614-2623(2018).
[7] H XU, J MA, J JIANG et al. U2Fusion: A unified unsupervised image fusion network. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44, 502-518(2020).
[10] Z SU, W LIU, Z YU et al. Pixel difference networks for efficient edge detection[C]. Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021: 5117-5127.
[11] A VASWANI, N SHAZEER, N PARMAR et al. Attention is all you need. Advances in Neural Information Processing Systems, 30, 5998-6008(2017).
[12] Z WANG, Y CHEN, W SHAO et al. SwinFuse: A residual swin transformer fusion network for infrared and visible images. IEEE Transactions on Instrumentation and Measurement, 71, 1-12(2022).
[13] Z ZHAO, H BAI, J ZHANG et al. CDDFuse: Correlation-driven dual-branch feature decomposition for multi-modality image fusion[C]. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023: 5906-5916.
[14] X LIU, H PENG, N ZHENG et al. EfficientViT: Memory efficient vision transformer with cascaded group attention[C]. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023: 14420-14430.
[15] Q HOU, D ZHOU, J FENG. Coordinate attention for efficient mobile network design[C]. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021: 13713-13722.
[17] Z XIANG, J HAO, G QI et al. MFST: Multi-modal feature self-adaptive transformer for infrared and visible image fusion. Remote Sensing, 14, 3233(2022).
[18] S KARIM, G TONG, J LI et al. MTDFusion: A multilayer triple dense network for infrared and visible image fusion. IEEE Transactions on Instrumentation and Measurement, 73, 1-17(2023).