• Infrared Technology
  • Vol. 44, Issue 6, 571 (2022)
Junyao WANG*, Zhishe WANG, Yuanyuan WU, Yanlin CHEN, and Wenyu SHAO
Author Affiliations: [in Chinese]
    WANG Junyao, WANG Zhishe, WU Yuanyuan, CHEN Yanlin, SHAO Wenyu. Multi-Feature Adaptive Fusion Method for Infrared and Visible Images[J]. Infrared Technology, 2022, 44(6): 571
    References

    [1] Paramanandham N, Rajendiran K. Multi sensor image fusion for surveillance applications using hybrid image fusion algorithm[J]. Multimedia Tools and Applications, 2018, 77(10): 12405-12436.

    [2] ZHANG Xingchen, YE Ping, QIAO Dan, et al. Object fusion tracking based on visible and infrared images: a comprehensive review[J]. Information Fusion, 2020, 63: 166-187.

    [3] TU Zhengzheng, LI Zhun, LI Chenglong, et al. Multi-interactive dual-decoder for RGB-thermal salient object detection[J]. IEEE Transactions on Image Processing, 2021, 30: 5678-5691.

    [4] FENG Zhanxiang, LAI Jianhuang, XIE Xiaohua. Learning modality-specific representations for visible-infrared person re-identification[J]. IEEE Transactions on Image Processing, 2020, 29: 579-590.

    [5] MO Yang, KANG Xudong, DUAN Puhong, et al. Attribute filter based infrared and visible image fusion[J]. Information Fusion, 2021, 75: 41-54.

    [6] LI Hui, WU Xiaojun, Kittler J. MDLatLRR: a novel decomposition method for infrared and visible image fusion[J]. IEEE Transactions on Image Processing, 2020, 29: 4733-4746.

    [8] WANG Zhishe, YANG Fengbao, PENG Zhihao, et al. Multi-sensor image enhanced fusion algorithm based on NSST and top-hat transformation[J]. Optik-International Journal for Light and Electron Optics, 2015, 126(23): 4184-4190.

    [9] LIU Yu, CHEN Xun, PENG Hu, et al. Multi-focus image fusion with a deep convolutional neural network[J]. Information Fusion, 2017, 36: 191-207.

    [10] WANG Zhishe, WU Yuanyuan, WANG Junyao, et al. Res2Fusion: infrared and visible image fusion based on dense Res2net and double non-local attention models[J]. IEEE Transactions on Instrumentation and Measurement, 2022, 71: 1-12.

    [11] MA Jiayi, MA Yong, LI Chang. Infrared and visible image fusion methods and applications: a survey[J]. Information Fusion, 2019, 45: 153-178.

    [12] Ronneberger O, Fischer P, Brox T. U-Net: Convolutional networks for biomedical image segmentation[C]//Proceedings of International Conference on Medical Image Computing and Computer-Assisted Intervention, 2015: 234-241.

    [13] Toet A. Computational versus psychophysical bottom-up image saliency: a comparative evaluation study[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2011, 33(11): 2131-2146.

    [14] LI Hui, WU Xiaojun. DenseFuse: a fusion approach to infrared and visible images[J]. IEEE Transactions on Image Processing, 2019, 28(5): 2614-2623.

    [15] ZHANG Yu, LIU Yu, SUN Peng, et al. IFCNN: a general image fusion framework based on convolutional neural network[J]. Information Fusion, 2020, 54: 99-118.

    [16] WANG Zhishe, WANG Junyao, WU Yuanyuan, et al. UNFusion: a unified multi-scale densely connected network for infrared and visible image fusion[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2022, 32(6): 3360-3374.

    [17] MA Jiayi, YU Wei, LIANG Pengwei, et al. FusionGAN: a generative adversarial network for infrared and visible image fusion[J]. Information Fusion, 2019, 48: 11-26.

    [18] MA Jiayi, ZHANG Hao, SHAO Zhenfeng, et al. GANMcC: a generative adversarial network with multiclassification constraints for infrared and visible image fusion[J]. IEEE Transactions on Instrumentation and Measurement, 2021, 70: 1-14.

    [19] LI Hui, WU Xiaojun, Kittler J. RFN-Nest: an end-to-end residual fusion network for infrared and visible images[J]. Information Fusion, 2021, 73: 72-86.

    [20] Toet A. TNO Image Fusion Dataset[DB/OL]. [2014-04-26]. https://figshare.com/articles/TN_Image_Fusion_Dataset/1008029.

    [21] XU Han. Roadscene Database[DB/OL]. [2020-08-07]. https://github.com/hanna-xu/RoadScene.

    [22] Ariffin S. OTCBVS Database[DB/OL]. [2007-06]. http://vcipl-okstate.org/pbvs/bench/.

    [23] XU Han, MA Jiayi, JIANG Junjun, et al. U2Fusion: a unified unsupervised image fusion network[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44(1): 502-518.

    [24] Aslantas V, Bendes E. Assessment of image fusion procedures using entropy, image quality, and multispectral classification[J]. Journal of Applied Remote Sensing, 2008, 2: 1-28.

    [25] LIU Zheng, Blasch E, XUE Zhiyun, et al. Objective assessment of multiresolution image fusion algorithms for context enhancement in night vision: A comparative study[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2011, 34: 94-109.

    [26] RAO Yunjiang. In-fibre Bragg grating sensors[J]. Measurement Science and Technology, 1997, 8: 355-375.

    [27] Aslantas V, Bendes E. A new image quality metric for image fusion: the sum of the correlations of differences[J]. AEU-International Journal of Electronics and Communications, 2015, 69(12): 1890-1896.

    [28] HAN Yu, CAI Yunze, CAO Yin, et al. A new image fusion performance metric based on visual information fidelity[J]. Information Fusion, 2013, 14: 127-135.

    [29] MA Kede, ZENG Kai, WANG Zhou. Perceptual quality assessment for multi-exposure image fusion[J]. IEEE Transactions on Image Processing, 2015, 24: 3345-3356.

    CLP Journals

    [1] CHEN Sijing, FU Zhitao, LI Ziqian, NIE Han, SONG Jiawen. A Visible and Infrared Image Fusion Algorithm Based on Adaptive Enhancement and Saliency Detection[J]. Infrared Technology, 2023, 45(9): 907
