• Electronics Optics & Control
  • Vol. 29, Issue 2, 67 (2022)
ZHOU Weiqiang 1,2 and HAN Jun 1,2
Author Affiliations
  • 1[in Chinese]
  • 2[in Chinese]
    DOI: 10.3969/j.issn.1671-637x.2022.02.015
    ZHOU Weiqiang, HAN Jun. Monocular Depth Estimation Fusing Multi-scale Feature with Semantic Information[J]. Electronics Optics & Control, 2022, 29(2): 67.
    References

    [1] SNAVELY N, SEITZ S M, SZELISKI R. Skeletal graphs for efficient structure from motion[C]//IEEE Conference on Computer Vision and Pattern Recognition. Anchorage, AK: IEEE, 2008: 1-8.

    [2] ZHANG R, TSAI P S, CRYER J E, et al. Shape-from-shading: a survey[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1999, 21(8): 690-706.

    [4] FAVARO P, SOATTO S. A geometric approach to shape from defocus[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2005, 27(3): 406-417.

    [5] EIGEN D, PUHRSCH C, FERGUS R. Depth map prediction from a single image using a multi-scale deep network[C]//Advances in Neural Information Processing Systems. Montreal: Neural Information Processing Systems Foundation, 2014: 2366-2374.

    [6] LIU F, SHEN C, LIN G. Deep convolutional neural fields for depth estimation from a single image[C]//IEEE Conference on Computer Vision and Pattern Recognition. Boston, MA: IEEE, 2015: 5162-5170.

    [7] KUZNIETSOV Y, STUCKLER J, LEIBE B. Semi-supervised deep learning for monocular depth map prediction[C]//IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, HI: IEEE, 2017: 6647-6655.

    [8] ZHOU T, BROWN M, SNAVELY N, et al. Unsupervised learning of depth and ego-motion from video[C]//IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, HI: IEEE, 2017: 1851-1858.

    [9] MAYER N, ILG E, HAUSSER P, et al. A large dataset to train convolutional networks for disparity, optical flow, and scene flow estimation[C]//IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, NV: IEEE, 2016: 4040-4048.

    [10] GARG R, BG V K, CARNEIRO G, et al. Unsupervised CNN for single view depth estimation: geometry to the rescue[C]//European Conference on Computer Vision. Cham: Springer, 2016: 740-756.

    [11] GODARD C, MAC AODHA O, BROSTOW G J. Unsupervised monocular depth estimation with left-right consistency[C]//IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, HI: IEEE, 2017: 270-279.

    [12] HE K, ZHANG X, REN S, et al. Deep residual learning for image recognition[C]//IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, NV: IEEE, 2016: 770-778.

    [13] WANG C, BUENAPOSADA J M, ZHU R, et al. Learning depth from monocular videos using direct methods[C]//IEEE Conference on Computer Vision and Pattern Recognition. Salt Lake City, UT: IEEE, 2018: 2022-2030.

    [14] GEIGER A, LENZ P, URTASUN R. Are we ready for autonomous driving? The KITTI vision benchmark suite[C]//IEEE Conference on Computer Vision and Pattern Recognition. Providence, RI: IEEE, 2012: 3354-3361.

    [15] GAN Y, XU X, SUN W, et al. Monocular depth estimation with affinity, vertical pooling, and label enhancement[C]//European Conference on Computer Vision. Cham: Springer, 2018: 224-239.
