[1] Levoy M, Hanrahan P. Light field rendering[C]//Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, New York, 1996: 31–42.
[2] Ng R. Fourier slice photography[J]. ACM Trans Graph, 2005, 24(3): 735–744.
[3] Zhao Y Y, Shi S X. Light-field image super-resolution based on multi-scale feature fusion[J]. Opto-Electron Eng, 2020, 47(12): 200007.
[5] Wu D, Zhang X D, Fan Z G, et al. Depth acquisition of noisy scene based on inline occlusion handling of light field[J]. Opto-Electron Eng, 2021, 48(7): 200422.
[7] Jeon H G, Park J, Choe G, et al. Accurate depth map estimation from a lenslet light field camera[C]//Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, 2015: 1547–1555.
[8] Johannsen O, Sulc A, Goldluecke B. What sparse light field coding reveals about scene structure[C]//Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, 2016: 3262–3270.
[9] Chen C, Lin H T, Yu Z, et al. Light field stereo matching using bilateral statistics of surface cameras[C]//Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, 2014: 1518–1525.
[10] Zhang S, Sheng H, Li C, et al. Robust depth estimation for light field via spinning parallelogram operator[J]. Comput Vis Image Underst, 2016, 145: 148–159.
[11] Wang W K, Lin Y F, Zhang S. Enhanced spinning parallelogram operator combining color constraint and histogram integration for robust light field depth estimation[J]. IEEE Signal Process Lett, 2021, 28: 1080–1084.
[12] Zhu H, Wang Q, Yu J Y. Occlusion-model guided antiocclusion depth estimation in light field[J]. IEEE J Sel Top Signal Process, 2017, 11(7): 965–978.
[13] Ng R, Levoy M, Brédif M, et al. Light field photography with a hand-held plenoptic camera[R]. Computer Science Technical Report CSTR, 2005.
[14] Ma S, Guo Z H, Wu J L, et al. Occlusion-aware light field depth estimation using side window angular coherence[J]. Appl Opt, 2021, 60(2): 392–404.
[15] Boykov Y, Veksler O, Zabih R. Fast approximate energy minimization via graph cuts[J]. IEEE Trans Pattern Anal Mach Intell, 2001, 23(11): 1222–1239.
[16] He K M, Sun J, Tang X O. Guided image filtering[J]. IEEE Trans Pattern Anal Mach Intell, 2013, 35(6): 1397–1409.
[17] Honauer K, Johannsen O, Kondermann D, et al. A dataset and evaluation methodology for depth estimation on 4D light fields[C]//Proceedings of the 13th Asian Conference on Computer Vision, Taiwan, China, 2016: 19–34.
[18] Strecke M, Alperovich A, Goldluecke B. Accurate depth and normal maps from occlusion-aware focal stack symmetry[C]//Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, 2017: 2529–2537.
[19] Shi M D, Zhang X D, Dong Y L, et al. A light field demosaicing method with double guided filtering[J]. Opto-Electron Eng, 2019, 46(12): 180539.
[21] Williem, Park I K, Lee K M. Robust light field depth estimation using occlusion-noise aware data costs[J]. IEEE Trans Pattern Anal Mach Intell, 2017, 40(10): 2484–2497.
[22] Bok Y, Jeon H G, Kweon I S. Geometric calibration of micro-lens-based light field cameras using line features[J]. IEEE Trans Pattern Anal Mach Intell, 2017, 39(2): 287–300.