• Laser & Optoelectronics Progress
  • Vol. 57, Issue 6, 061007 (2020)
Renyue Dai, Zhijun Fang*, and Yongbin Gao
Author Affiliations
  • School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai 201600, China
    DOI: 10.3788/LOP57.061007
    Renyue Dai, Zhijun Fang, Yongbin Gao. Unsupervised Monocular Depth Estimation by Fusing Dilated Convolutional Network and SLAM[J]. Laser & Optoelectronics Progress, 2020, 57(6): 061007

    Abstract

    Depth maps generated from the coarse features predicted by convolutional neural networks (CNNs) are of low quality. Meanwhile, fully supervised methods are strictly limited in data volume by the scarcity of labeled data. To address these problems, an unsupervised monocular depth estimation method that fuses a dilated convolutional neural network with simultaneous localization and mapping (SLAM) is proposed. The method adopts the idea of view reconstruction to estimate depth: a photo-consistency error constrains training, while dilated convolutions expand the receptive field and preserve image details. A traditional SLAM algorithm globally optimizes the camera pose, which is then incorporated into the reconstruction framework. Finally, a direct correspondence between the input monocular image and its depth map is obtained. The method is evaluated on the public KITTI dataset. Compared with the classical SfMLearner method, the error indicators, including absolute relative difference, squared relative difference, root mean squared error, and log root mean squared error, decrease by 0.032, 0.634, 1.095, and 0.026, respectively, and the accuracy indicators δ1, δ2, and δ3 increase by 3.8%, 2.6%, and 0.9%, respectively. These results verify the availability and robustness of the proposed method.
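    The view-reconstruction idea underlying such methods can be sketched as follows. This is not the authors' implementation; it is a minimal NumPy illustration of the two ingredients the abstract names: projecting target-frame pixels into a source view using a predicted depth map and a (SLAM-optimized) camera pose, and penalizing the photometric difference between the target frame and the view synthesized from those sampling coordinates. The function names, the pinhole-camera conventions, and the pure-L1 form of the loss are illustrative assumptions.

    ```python
    import numpy as np

    def photo_consistency_loss(target, warped):
        # L1 photometric error between the target frame and the
        # view-synthesized (warped) frame; illustrative choice of norm.
        return np.mean(np.abs(target - warped))

    def project(depth, K, T):
        """Project target-frame pixels into a source view.

        depth : (H, W) predicted depth for the target frame
        K     : (3, 3) camera intrinsics
        T     : (4, 4) relative pose, target -> source (e.g. from SLAM)
        Returns (H, W, 2) sampling coordinates (u, v) in the source image,
        from which the warped frame is bilinearly sampled.
        """
        H, W = depth.shape
        ys, xs = np.mgrid[0:H, 0:W]
        # Homogeneous pixel coordinates, shape 3 x (H*W)
        pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T
        # Back-project to 3D camera coordinates using predicted depth
        cam = (np.linalg.inv(K) @ pix) * depth.reshape(1, -1)
        cam_h = np.vstack([cam, np.ones((1, cam.shape[1]))])
        # Transform into the source camera frame and re-project
        src = K @ (T @ cam_h)[:3]
        uv = (src[:2] / np.clip(src[2], 1e-6, None)).T.reshape(H, W, 2)
        return uv
    ```

    With an identity pose and identity intrinsics, each pixel projects back onto itself and the loss between a frame and itself is zero, which is a convenient sanity check when wiring such a loss into a training loop.
    
    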