Point-Line Feature Fusion in Monocular Visual Odometry
Laser & Optoelectronics Progress, Vol. 55, Issue 2, 021501 (2018)
Meng Yuan*, Aihua Li, Yong Zheng, Zhigao Cui, and Zhengqiang Bao
Author Affiliations
  • Institute of War Support, Rocket Force University of Engineering, Xi'an, Shaanxi 710025, China
DOI: 10.3788/LOP55.021501
Citation: Meng Yuan, Aihua Li, Yong Zheng, Zhigao Cui, Zhengqiang Bao. Point-Line Feature Fusion in Monocular Visual Odometry[J]. Laser & Optoelectronics Progress, 2018, 55(2): 021501

    Abstract

A semi-direct monocular visual odometry (SVO) algorithm with point-line feature fusion is proposed to solve the problem of localization and mapping for patrol robots in underground engineering. The proposed algorithm is divided into three threads: feature extraction, state estimation, and depth filtering. Point and line features are extracted from each image in the feature extraction thread. The camera pose with six degrees of freedom is obtained with different matching and tracking strategies for point and line features, and it is further optimized using the constraints between frames, between features, and among local frames. In the depth-filter thread, the depth from three-dimensional landmarks to the camera optical center is described by a probability distribution, which improves the robustness of depth estimation compared with fixed depth values. The average positioning accuracy of the proposed algorithm is 17.6% higher than that of the LSD-SLAM algorithm on the EuRoC dataset and 6.4% higher than that of the SVO algorithm on the TUM dataset. Tests on a robot camera platform show an actual positioning error of about 1.17%, which meets practical requirements.
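As an illustration of the feature-extraction thread, the sketch below detects point and line features in a single frame. It is a minimal sketch, not the authors' implementation: it assumes OpenCV (with the opencv-contrib modules) and substitutes ORB keypoints for the point features and the FastLineDetector for the line features; the detectors and parameters used in the paper may differ.

```python
# Minimal sketch of point-line feature extraction (not the paper's implementation).
# Assumes opencv-contrib-python is installed; ORB stands in for the point detector
# and ximgproc's FastLineDetector stands in for the line detector.
import cv2


def extract_point_line_features(gray):
    """Return point keypoints/descriptors and line segments for one grayscale frame."""
    # Point features: ORB keypoints with binary descriptors.
    orb = cv2.ORB_create(nfeatures=1000)
    keypoints, descriptors = orb.detectAndCompute(gray, None)

    # Line features: segments returned as (x1, y1, x2, y2) endpoints.
    fld = cv2.ximgproc.createFastLineDetector()
    lines = fld.detect(gray)  # None if no segments are found

    return keypoints, descriptors, lines


if __name__ == "__main__":
    # "frame.png" is a hypothetical input image used only for this example.
    frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
    kps, desc, lines = extract_point_line_features(frame)
    n_lines = 0 if lines is None else len(lines)
    print(f"{len(kps)} point features, {n_lines} line segments")
```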