Laser & Optoelectronics Progress, Vol. 59, Issue 2, 0211002 (2022)
Ligen Shi, Jun Qiu, Chang Liu*, and Xiaojuan Deng
Author Affiliations
  • Institute of Applied Mathematics, School of Applied Science, Beijing Information Science and Technology University, Beijing 100101, China
DOI: 10.3788/LOP202259.0211002
Ligen Shi, Jun Qiu, Chang Liu, Xiaojuan Deng. Disparity Reconstruction Algorithm Based on YCbCr Light Field Data[J]. Laser & Optoelectronics Progress, 2022, 59(2): 0211002