[1] Levoy M, Hanrahan P. Light field rendering[C]//Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH), 31-42(1996).
[3] Wu G C, Liu Y B, Fang L et al. Revisiting light field rendering with deep anti-aliasing neural network[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44, 5430-5444(2022).
[4] Liang Z Y, Wang Y Q, Wang L G et al. Light field image super-resolution with transformers[J]. IEEE Signal Processing Letters, 29, 563-567(2022).
[5] Ko K, Koh Y J, Chang S et al. Light field super-resolution via adaptive feature remixing[J]. IEEE Transactions on Image Processing, 30, 4114-4128(2021).
[6] Alain M, Smolic A. A spatio-angular filter for high quality sparse light field refocusing[C](2021).
[7] Jayaweera S S, Edussooriya C U S, Wijenayake C et al. Multi-volumetric refocusing of light fields[J]. IEEE Signal Processing Letters, 28, 31-35(2021).
[8] Wang T C, Efros A A, Ramamoorthi R. Depth estimation with occlusion modeling using light-field cameras[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38, 2170-2181(2016).
[9] Feng W, Gao J H, Qu T et al. Three-dimensional reconstruction of light field based on phase similarity[J]. Sensors, 21, 7734(2021).
[10] Yin Y K, Yu K, Yu C Z et al. 3D imaging using geometric light field: a review[J]. Chinese Journal of Lasers, 48, 1209001(2021).
[11] Wanner S, Goldluecke B. Globally consistent depth labeling of 4D light fields[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 41-48(2012).
[12] Lü H J, Gu K Y, Zhang Y B et al. Light field depth estimation exploiting linear structure in EPI[C](2015).
[13] Zhang Y B, Lü H J, Liu Y B et al. Light-field depth estimation via epipolar plane image analysis and locally linear embedding[J]. IEEE Transactions on Circuits and Systems for Video Technology, 27, 739-747(2017).
[14] Tao M W, Hadap S, Malik J et al. Depth from combining defocus and correspondence using light-field cameras[C]//Proceedings of the IEEE International Conference on Computer Vision, 673-680(2013).
[15] Tao T Y, Chen Q, Feng S J et al. Active depth estimation from defocus using a camera array[J]. Applied Optics, 57, 4960-4967(2018).
[16] Williem, Park I K, Lee K M. Robust light field depth estimation using occlusion-noise aware data costs[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40, 2484-2497(2018).
[17] Cai Z W, Liu X L, Peng X et al. Structured light field 3D imaging[J]. Optics Express, 24, 20324-20334(2016).
[18] Cai Z W, Liu X L, Peng X et al. Ray calibration and phase mapping for structured-light-field 3D reconstruction[J]. Optics Express, 26, 7598-7613(2018).
[19] Zhang X J, Cai Z W, Liu X L et al. Improved 3D imaging and measurement with fringe projection structured light field[J]. Proceedings of SPIE, 11438, 114380X(2020).
[20] Zhou P, Zhang Y T, Yu Y L et al. 3D reconstruction from structured light field by Fourier transformation profilometry[J]. Proceedings of SPIE, 11338, 113381K(2019).
[21] Zhou P, Zhang Y T, Yu Y L et al. 3D shape measurement based on structured light field imaging[J]. Mathematical Biosciences and Engineering, 17, 654-668(2019).
[22] Wang Z W, Yang Y, Liu X L et al. Light-field-assisted phase unwrapping of fringe projection profilometry[J]. IEEE Access, 9, 49890-49900(2021).
[23] Cai Z W, Liu X L, Pedrini G et al. Accurate depth estimation in structured light fields[J]. Optics Express, 27, 13532-13546(2019).
[24] Cai Z W, Liu X L, Pedrini G et al. Structured-light-field 3D imaging without phase unwrapping[J]. Optics and Lasers in Engineering, 129, 106047(2020).
[25] Liu L, Xiang S, Deng H P et al. Fast geometry estimation for phase-coding structured light field[C], 124-127(2020).
[26] Shin C, Jeon H G, Yoon Y et al. EPINET: a fully-convolutional neural network using epipolar geometry for depth from light field images[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 4748-4757(2018).
[27] Tsai Y J, Liu Y L, Ouhyoung M et al. Attention-based view selection networks for light-field disparity estimation[J]. Proceedings of the AAAI Conference on Artificial Intelligence, 34, 12095-12103(2020).
[28] Huang Z C, Hu X M, Xue Z et al. Fast light-field disparity estimation with multi-disparity-scale cost aggregation[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision, 6300-6309(2021).
[29] Luo Y X. The research of depth estimation for light field based on convolutional neural network[D](2018).
[30] Pan Z W. Depth estimation on 4D light field based convolutional neural network[D](2018).
[31] Ma H X. Method, system and medium for estimating optical field depth based on convolution neural network[P].
[32] Li Y X, Qian J M, Feng S J et al. Deep-learning-enabled dual-frequency composite fringe projection profilometry for single-shot absolute 3D shape measurement[J]. Opto-Electronic Advances, 33-48(2022).
[33] Yin W, Hu Y, Feng S J et al. Single-shot 3D shape measurement using an end-to-end stereo matching network for speckle projection profilometry[J]. Optics Express, 29, 13388-13407(2021).
[34] Long J, Shelhamer E, Darrell T. Fully convolutional networks for semantic segmentation[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 3431-3440(2015).
[35] Chen B, Zhang S. High-quality 3D shape measurement using saturated fringe patterns[J]. Optics and Lasers in Engineering, 87, 83-89(2016).
[36] Zhang S, Sheng H, Li C et al. Robust depth estimation for light field via spinning parallelogram operator[J]. Computer Vision and Image Understanding, 145, 148-159(2016).