[1] LIU X H, CHEN Z B, QIN M Z. Infrared and visible image fusion using guided filter and convolutional sparse representation[J]. Opt. Precision Eng., 2018, 26(5): 1242-1253. (in Chinese). doi: 10.3788/OPE.20182605.1242
[2] HU W R, YANG Y H, ZHANG W S, et al. Moving object detection using tensor-based low-rank and saliently fused-sparse decomposition[J]. IEEE Transactions on Image Processing, 2017, 26: 724-737.
[3] LI S, KANG X, FANG L, et al. Pixel-level image fusion: a survey of the state of the art[J]. Information Fusion, 2017, 33: 100-112.
[4] MA C, MIAO Z J, ZHANG X P, et al. A saliency prior context model for real-time object tracking[J]. IEEE Transactions on Multimedia, 2017, 19: 2415-2424.
[5] WANG X, JI T B, LIU F. Fusion of infrared and visible images based on target segmentation and compressed sensing[J]. Opt. Precision Eng., 2016, 24(7): 1743-1753. (in Chinese). doi: 10.3788/ope.20162407.1743
[6] ZHANG L, JIN L X, HAN S L, et al. Fusion of infrared and visual images based on non-sampled Contourlet transform and region classification[J]. Opt. Precision Eng., 2015, 23(3): 810-818. (in Chinese). doi: 10.3788/ope.20152303.0810
[7] AZARANG A, MANOOCHEHRI H E, KEHTARNAVAZ N. Convolutional autoencoder-based multispectral image fusion[J]. IEEE Access, 2019, 7: 35673-35683.
[8] HOU R C, ZHOU D M, NIE R C, et al. VIF-Net: an unsupervised framework for infrared and visible image fusion[J]. IEEE Transactions on Computational Imaging, 2020, 6: 640-651.
[9] LIU Y, CHEN X, PENG H, et al. Multi-focus image fusion with a deep convolutional neural network[J]. Information Fusion, 2017, 36: 191-207.
[10] MA J, YU W, LIANG P, et al. FusionGAN: a generative adversarial network for infrared and visible image fusion[J]. Information Fusion, 2019, 48: 11-26.
[11] AN W B, WANG H M. Infrared and visible image fusion with supervised convolutional neural network[J]. Optik, 2020, 219: 165120.
[12] ZHANG Y, LIU Y, SUN P, et al. IFCNN: a general image fusion framework based on convolutional neural network[J]. Information Fusion, 2020, 54: 99-118.
[13] HAO Y P, CAO Z R, BAI F, et al. Research on infrared visible image fusion and target recognition algorithm based on region of interest mask convolution neural network[J]. Acta Photonica Sinica, 2021, 50(2): 0210002. (in Chinese)
[14] ZHANG Q, SHEN X, XU L, et al. Rolling guidance filter[C]//Proceedings of the European Conference on Computer Vision (ECCV). 2014: 815-830.
[15] TOET A. Alternating guided image filtering[J]. PeerJ Computer Science, 2016, 2.
[16] ZHAI Y, SHAH M. Visual attention detection in video sequences using spatiotemporal cues[C]//Proceedings of the 14th ACM International Conference on Multimedia. 2006: 815-824.
[18] MA J, ZHOU Z, WANG B, et al. Infrared and visible image fusion based on visual saliency map and weighted least square optimization[J]. Infrared Physics & Technology, 2017, 82: 8-17.
[19] MA J Y, ZHOU Y. Infrared and visible image fusion via gradientlet filter[J]. Computer Vision and Image Understanding, 2020, 197: 103016.
[20] LI H, WU X J, KITTLER J. Infrared and visible image fusion using a deep learning framework[C]//Proceedings of the 24th International Conference on Pattern Recognition (ICPR). 2018: 2705-2710.
[21] LI H, ZHANG L M, JIANG M R, et al. An infrared and visible image fusion algorithm based on ResNet152[J]. Laser & Optoelectronics Progress, 2020, 57(8): 081013. (in Chinese). doi: 10.3788/lop57.081013
[22] TANG L, YUAN J, ZHANG H, et al. PIAFusion: a progressive infrared and visible image fusion network based on illumination aware[J]. Information Fusion, 2022, 83/84: 79-92.
[23] VAN AARDT J A, AHMED F B. Assessment of image fusion procedures using entropy, image quality, and multispectral classification[J]. Journal of Applied Remote Sensing, 2008, 2.
[24] ESKICIOGLU A M, FISHER P S. Image quality measures and their performance[J]. IEEE Transactions on Communications, 1995, 43: 2959-2965.
[25] HAGHIGHAT M, RAZIAN M A. Fast-FMI: non-reference image fusion metric[C]//Proceedings of the IEEE International Conference on Application of Information and Communication Technologies (AICT). 2014: 1-3.
[26] HAN Y, CAI Y, CAO Y, et al. A new image fusion performance metric based on visual information fidelity[J]. Information Fusion, 2013, 14(2): 127-135.