[1] WAXMAN A M, GOVE A N, FAY D A, et al. Color night vision: opponent processing in the fusion of visible and IR imagery[J]. Neural Networks, 1997, 10(1): 1-6.
[2] XIANG T, YAN L, GAO R. A fusion algorithm for infrared and visible images based on adaptive dual-channel unit-linking PCNN in NSCT domain[J]. Infrared Physics & Technology, 2015, 69: 53-61.
[3] ZHAO J, GAO X, CHEN Y, et al. Multi-window visual saliency extraction for fusion of visible and infrared images[J]. Infrared Physics & Technology, 2016, 76: 295-302.
[4] YAN L, CAO J, RIZVI S, et al. Improving the performance of image fusion based on visual saliency weight map combined with CNN[J]. IEEE Access, 2020, 8: 59976-59986.
[5] LEWIS J J, O’CALLAGHAN R J, NIKOLOV S G, et al. Pixel- and region-based image fusion with complex wavelets[J]. Information Fusion, 2007, 8(2): 119-130.
[10] TOET A. Image fusion by a ratio of low-pass pyramid[J]. Pattern Recognition Letters, 1989, 9: 245-253.
[11] AKERMAN A. Pyramidal techniques for multisensor fusion[C]//Proceedings of SPIE, the International Society for Optical Engineering, 1992, 1828: 124-131.
[12] LI H, QIU H, YU Z, et al. Infrared and visible image fusion scheme based on NSCT and low-level visual features[J]. Infrared Physics & Technology, 2016, 76: 174-184.
[14] PAJARES G, DE LA CRUZ J M. A wavelet-based image fusion tutorial[J]. Pattern Recognition, 2004, 37(9): 1855-1872.