• Opto-Electronic Engineering
  • Vol. 51, Issue 6, 240093-1 (2024)
Yulong Li1, Yeyao Chen1, Yueli Cui2, and Mei Yu1,*
Author Affiliations
  • 1Faculty of Information Science and Engineering, Ningbo University, Ningbo, Zhejiang 315211, China
  • 2School of Electronic and Information Engineering, Taizhou University, Taizhou, Zhejiang 318000, China
    DOI: 10.12086/oee.2024.240093
    Citation: Yulong Li, Yeyao Chen, Yueli Cui, Mei Yu. LF-UMTI: unsupervised multi-exposure light field image fusion based on multi-scale spatial-angular interaction[J]. Opto-Electronic Engineering, 2024, 51(6): 240093-1
    References

    [1] Z L Cui, H Sheng, D Yang et al. Light field depth estimation for non-Lambertian objects via adaptive cross operator. IEEE Trans Circuits Syst Video Technol, 34, 1199-1211(2024).

    [2] S Ma, N Wang, L C Zhu et al. Light field depth estimation using weighted side window angular coherence. Opto-Electron Eng, 48, 210405(2021).

    [3] D Wu, X D Zhang, Z G Fan et al. Depth acquisition of noisy scene based on inline occlusion handling of light field. Opto-Electron Eng, 48, 200422(2021).

    [4] R X Cong, D Yang, R S Chen et al. Combining implicit-explicit view correlation for light field semantic segmentation, 9172-9181(2023). https://doi.org/10.1109/CVPR52729.2023.00885

    [5] L Han, D W Zhong, L Li et al. Learning residual color for novel view synthesis. IEEE Trans Image Process, 31, 2257-2267(2022).

    [6] F Xu, J H Liu, Y M Song et al. Multi-exposure image fusion techniques: a comprehensive review. Remote Sens, 14, 771(2022).

    [7] S T Li, X D Kang, J W Hu. Image fusion with guided filtering. IEEE Trans Image Process, 22, 2864-2875(2013).

    [8] Y Liu, Z F Wang. Dense SIFT for ghost-free multi-exposure fusion. J Visual Commun Image Represent, 31, 208-224(2015).

    [9] S Lee, J S Park, N I Cho. A multi-exposure image fusion based on the adaptive weights reflecting the relative pixel intensity and global gradient, 1737-1741(2018). https://doi.org/10.1109/ICIP.2018.8451153

    [10] O Ulucan, D Ulucan, M Turkan. Ghosting-free multi-exposure image fusion for static and dynamic scenes. Signal Process, 202, 108774(2023).

    [11] M S K Gul, T Wolf, M Bätz et al. A high-resolution high dynamic range light-field dataset with an application to view synthesis and tone-mapping, 1-6(2020). https://doi.org/10.1109/ICMEW46912.2020.9105964

    [12] C Li, X Zhang. High dynamic range and all-focus image from light field, 7-12(2015). https://doi.org/10.1109/ICCIS.2015.7274539

    [13] M Le Pendu, C Guillemot, A Smolic. High dynamic range light fields via weighted low rank approximation, 1728-1732(2018). https://doi.org/10.1109/ICIP.2018.8451584

    [14] J L Yin, B H Chen, Y T Peng. Two exposure fusion using prior-aware generative adversarial network. IEEE Trans Multimedia, 24, 2841-2851(2021).

    [15] H Xu, J Y Ma, X P Zhang. MEF-GAN: multi-exposure image fusion via generative adversarial networks. IEEE Trans Image Process, 29, 7203-7216(2020).

    [16] J Y Liu, G Y Wu, J S Luan et al. HoLoCo: holistic and local contrastive learning network for multi-exposure image fusion. Inf Fusion, 95, 237-249(2023).

    [17] J Y Liu, J J Shang, R S Liu et al. Attention-guided global-local adversarial learning for detail-preserving multi-exposure image fusion. IEEE Trans Circuits Syst Video Technol, 32, 5026-5040(2022).

    [18] Y Y Chen, G Y Jiang, M Yu et al. Learning to simultaneously enhance field of view and dynamic range for light field imaging. Inf Fusion, 91, 215-229(2023).

    [19] K R Prabhakar, V S Srikar, R V Babu. DeepFuse: a deep unsupervised approach for exposure fusion with extreme exposure image pairs, 4724-4732(2017). https://doi.org/10.1109/ICCV.2017.505

    [20] K D Ma, Z F Duanmu, H W Zhu et al. Deep guided learning for fast multi-exposure image fusion. IEEE Trans Image Process, 29, 2808-2819(2020).

    [21] L H Qu, S L Liu, M N Wang et al. TransMEF: a transformer-based multi-exposure image fusion framework using self-supervised multi-task learning, 2126-2134(2022). https://doi.org/10.1609/AAAI.v36i2.20109

    [22] K W Zheng, J Huang, H Yu et al. Efficient multi-exposure image fusion via filter-dominated fusion and gradient-driven unsupervised learning, 2804-2813(2023). https://doi.org/10.1109/CVPRW59228.2023.00281

    [23] H Xu, H C Liang, J Y Ma. Unsupervised multi-exposure image fusion breaking exposure limits via contrastive learning, 3010-3017(2023). https://doi.org/10.1609/AAAI.v37i3.25404

    [24] H Zhang, J Y Ma. IID-MEF: a multi-exposure fusion network based on intrinsic image decomposition. Inf Fusion, 95, 326-340(2023).

    [25] H Xu, J Y Ma, Z L Le et al. FusionDN: a unified densely connected network for image fusion, 12484-12491(2020). https://doi.org/10.1609/AAAI.v34i07.6936

    [26] H Xu, J Y Ma, J J Jiang et al. U2Fusion: a unified unsupervised image fusion network. IEEE Trans Pattern Anal Mach Intell, 44, 502-518(2022).

    [27] H Zhang, H Xu, Y Xiao et al. Rethinking the image fusion: a fast unified image fusion network based on proportional maintenance of gradient and intensity, 12797-12804(2020). https://doi.org/10.1609/AAAI.v34i07.6975

    [28] M Zhou, J Huang, Y C Fang et al. Pan-sharpening with customized transformer and invertible neural network, 3553-3561(2022). https://doi.org/10.1609/aaai.v36i3.20267

    [29] K D Ma, K Zeng, Z Wang. Perceptual quality assessment for multi-exposure image fusion. IEEE Trans Image Process, 24, 3345-3356(2015).

    [30] Z Wang, A C Bovik, H R Sheikh et al. Image quality assessment: from error visibility to structural similarity. IEEE Trans Image Process, 13, 600-612(2004).

    [31] M Hossny, S Nahavandi, D Creighton. Comments on ‘Information measure for performance of image fusion’. Electron Lett, 44, 1066-1067(2008).

    [32] Q Wang, Y Shen, J Jin. Performance evaluation of image fusion techniques. In: Stathaki T, ed. Image Fusion: Algorithms and Applications, 469-492(2008). https://doi.org/10.1016/B978-0-12-372529-5.00017-2

    [33] G M Cui, H J Feng, Z H Xu et al. Detail preserved fusion of visible and infrared images using regional saliency extraction and multi-scale image decomposition. Opt Commun, 341, 199-209(2015).

    [34] C S Xydeas, V Petrovic. Objective image fusion performance measure. Electron Lett, 36, 308-309(2000).

    [35] Y J Rao. In-fibre Bragg grating sensors. Meas Sci Technol, 8, 355-375(1997).

    [36] A M Eskicioglu, P S Fisher. Image quality measures and their performance. IEEE Trans Commun, 43, 2959-2965(1995). https://doi.org/10.1109/26.477498

    [37] H Chen, P K Varshney. A human perception inspired quality metric for image fusion based on regional information. Inf Fusion, 8, 193-207(2007).
