[1] Dai Q, Wang Y J, Han G L. Perspective image rectification based on improved Hough transformation and perspective transformation[J]. Chinese Journal of Liquid Crystals and Displays, 27, 552-556(2012).
[2] Zhao T, Kang H L, Zhang Z P. Fast image mosaic algorithm based on area blocking and BRISK[J]. Laser & Optoelectronics Progress, 55, 031005(2018).
[3] Chen J H, Guo W S. Method of panoramic image stitching for theodolite-camera system[J]. Laser & Optoelectronics Progress, 53, 051001(2016).
[4] Du S P, Hu S M, Martin R R. Changing perspective in stereoscopic images[J]. IEEE Transactions on Visualization and Computer Graphics, 19, 1288-1297(2013). http://www.ncbi.nlm.nih.gov/pubmed/23744259
[5] Zhang L, Zhang Y H, Huang H. Efficient variational light field view synthesis for making stereoscopic 3D images[J]. Computer Graphics Forum, 34, 183-191(2015). http://onlinelibrary.wiley.com/doi/10.1111/cgf.12757/pdf
[6] Chaurasia G, Sorkine O, Drettakis G. Silhouette-aware warping for image-based rendering[J]. Computer Graphics Forum, 30, 1223-1232(2011). http://onlinelibrary.wiley.com/doi/10.1111/j.1467-8659.2011.01981.x/full
[7] Chaurasia G, Duchene S, Sorkine-Hornung O, et al. Depth synthesis and local warps for plausible image-based navigation[J]. ACM Transactions on Graphics, 32, 30(2013). http://dl.acm.org/citation.cfm?id=2487238
[8] Wanner S, Goldluecke B. Spatial and angular variational super-resolution of 4D light fields[M]∥Fitzgibbon A, Lazebnik S, Perona P, et al. Computer vision-ECCV 2012. Lecture notes in computer science. Berlin, Heidelberg: Springer, 7576, 608-621(2012).
[9] Pujades S, Devernay F, Goldluecke B. Bayesian view synthesis and image-based rendering principles[C]∥2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 23-28, 2014, Columbus, OH, USA. New York: IEEE, 3906-3913(2014).
[10] Kalantari N K, Wang T C, Ramamoorthi R. Learning-based view synthesis for light field cameras[J]. ACM Transactions on Graphics, 35, 193(2016). http://arxiv.org/abs/1609.02974
[11] Flynn J, Neulander I, Philbin J, et al. Deep stereo: learning to predict new views from the world's imagery[C]∥2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 27-30, 2016, Las Vegas, NV, USA. New York: IEEE, 5515-5524(2016).
[12] Gul M S K, Gunturk B K. Spatial and angular resolution enhancement of light fields using convolutional neural networks[J]. IEEE Transactions on Image Processing, 27, 2146-2159(2018). http://ieeexplore.ieee.org/document/8259363/
[13] Wang Y L, Liu F, Wang Z L, et al. End-to-end view synthesis for light field imaging with pseudo 4DCNN[M]∥Ferrari V, Hebert M, Sminchisescu C, et al. Computer vision-ECCV 2018. Lecture notes in computer science. Cham: Springer, 11206, 340-355(2018).
[14] Wanner S, Goldluecke B. Globally consistent depth labeling of 4D light fields[C]∥2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 16-21, 2012, Providence, RI, USA. New York: IEEE, 41-48(2012).
[15] Yuan R F, Liu M, Hui M, et al. Depth map stitching based on binocular vision[J]. Laser & Optoelectronics Progress, 55, 121013(2018).
[16] Beck A, Teboulle M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems[J]. SIAM Journal on Imaging Sciences, 2, 183-202(2009).
[17] Criminisi A, Pérez P, Toyama K. Region filling and object removal by exemplar-based image inpainting[J]. IEEE Transactions on Image Processing, 13, 1200-1212(2004). http://dl.acm.org/citation.cfm?id=2320602
[18] Xue H Y, Zhang S M, Cai D. Depth image inpainting: improving low rank matrix completion with low gradient regularization[J]. IEEE Transactions on Image Processing, 26, 4311-4320(2017). http://ieeexplore.ieee.org/document/7954738/
[19] Dansereau D G, Pizarro O, Williams S B. Decoding, calibration and rectification for lenselet-based plenoptic cameras[C]∥2013 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 23-28, 2013, Portland, OR, USA. New York: IEEE, 1027-1034(2013).