[1] Z L Cui, H Sheng, D Yang et al. Light field depth estimation for non-Lambertian objects via adaptive cross operator. IEEE Trans Circuits Syst Video Technol, 34, 1199-1211(2024).
[2] S Ma, N Wang, L C Zhu et al. Light field depth estimation using weighted side window angular coherence. Opto-Electron Eng, 48, 210405(2021).
[3] D Wu, X D Zhang, Z G Fan et al. Depth acquisition of noisy scene based on inline occlusion handling of light field. Opto-Electron Eng, 48, 200422(2021).
[5] L Han, D W Zhong, L Li et al. Learning residual color for novel view synthesis. IEEE Trans Image Process, 31, 2257-2267(2022).
[6] F Xu, J H Liu, Y M Song et al. Multi-exposure image fusion techniques: a comprehensive review. Remote Sens, 14, 771(2022).
[7] S T Li, X D Kang, J W Hu. Image fusion with guided filtering. IEEE Trans Image Process, 22, 2864-2875(2013).
[8] Y Liu, Z F Wang. Dense SIFT for ghost-free multi-exposure fusion. J Vis Commun Image Represent, 31, 208-224(2015).
[10] O Ulucan, D Ulucan, M Turkan. Ghosting-free multi-exposure image fusion for static and dynamic scenes. Signal Process, 202, 108774(2023).
[14] J L Yin, B H Chen, Y T Peng. Two exposure fusion using prior-aware generative adversarial network. IEEE Trans Multimedia, 24, 2841-2851(2022).
[15] H Xu, J Y Ma, X P Zhang. MEF-GAN: multi-exposure image fusion via generative adversarial networks. IEEE Trans Image Process, 29, 7203-7216(2020).
[16] J Y Liu, G Y Wu, J S Luan et al. HoLoCo: holistic and local contrastive learning network for multi-exposure image fusion. Inf Fusion, 95, 237-249(2023).
[17] J Y Liu, J J Shang, R S Liu et al. Attention-guided global-local adversarial learning for detail-preserving multi-exposure image fusion. IEEE Trans Circuits Syst Video Technol, 32, 5026-5040(2022).
[18] Y Y Chen, G Y Jiang, M Yu et al. Learning to simultaneously enhance field of view and dynamic range for light field imaging. Inf Fusion, 91, 215-229(2023).
[20] K D Ma, Z F Duanmu, H W Zhu et al. Deep guided learning for fast multi-exposure image fusion. IEEE Trans Image Process, 29, 2808-2819(2020).
[24] H Zhang, J Y Ma. IID-MEF: a multi-exposure fusion network based on intrinsic image decomposition. Inf Fusion, 95, 326-340(2023).
[26] H Xu, J Y Ma, J J Jiang et al. U2Fusion: a unified unsupervised image fusion network. IEEE Trans Pattern Anal Mach Intell, 44, 502-518(2022).
[29] K D Ma, K Zeng, Z Wang. Perceptual quality assessment for multi-exposure image fusion. IEEE Trans Image Process, 24, 3345-3356(2015).
[30] Z Wang, A C Bovik, H R Sheikh et al. Image quality assessment: from error visibility to structural similarity. IEEE Trans Image Process, 13, 600-612(2004).
[31] M Hossny, S Nahavandi, D Creighton. Comments on ‘Information measure for performance of image fusion’. Electron Lett, 44, 1066-1067(2008).
[33] G M Cui, H J Feng, Z H Xu et al. Detail preserved fusion of visible and infrared images using regional saliency extraction and multi-scale image decomposition. Opt Commun, 341, 199-209(2015).
[34] C S Xydeas, V Petrovic. Objective image fusion performance measure. Electron Lett, 36, 308-309(2000).
[35] Y J Rao. In-fibre Bragg grating sensors. Meas Sci Technol, 8, 355-375(1997).
[37] H Chen, P K Varshney. A human perception inspired quality metric for image fusion based on regional information. Inf Fusion, 8, 193-207(2007).