• Electronics Optics & Control
  • Vol. 29, Issue 2, 99 (2022)
QI Jichao, HE Li, YUAN Liang, RAN Teng, and ZHANG Jianbo
Author Affiliations
  • [in Chinese]
    DOI: 10.3969/j.issn.1671-637x.2022.02.021
    QI Jichao, HE Li, YUAN Liang, RAN Teng, ZHANG Jianbo. SLAM Method Based on Fusion of Monocular Camera and Lidar[J]. Electronics Optics & Control, 2022, 29(2): 99

    Abstract

    The fusion of visual sensors and lidar is a current research hotspot, and the actual performance of such fused systems is superior to that of a single sensor. In existing fusion algorithms combining visual sensors and lidar, the feature points used for positioning are insufficient, so the positioning accuracy is not high enough. To solve this problem, this paper makes full use of the depth information provided by lidar and proposes a multi-strategy SLAM algorithm based on the fusion of vision and laser. Before estimating the inter-frame pose, the depth values of the feature points in the last frame are examined. According to the three possible outcomes, namely, all feature points have depth information, part of the feature points have depth information, or none of the feature points has depth information, a different pose estimation strategy is adopted in each case. Finally, the algorithm is tested on the public KITTI data set, and the experimental results show that it effectively improves positioning accuracy and robustness.
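    The three-way strategy selection described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the strategy labels (3D-3D, 3D-2D, 2D-2D) and the depth validity range are assumptions chosen to match common practice in vision-lidar SLAM, where fully depth-covered matches allow direct 3D alignment, mixed coverage allows PnP-style estimation, and depth-free matches fall back to 2D epipolar geometry.

    ```python
    def select_pose_strategy(depths, min_depth=0.1, max_depth=80.0):
        """Classify the matched feature points of the last frame by lidar depth
        availability and return a pose-estimation strategy label (hypothetical).

        depths: list of lidar depth values for the feature points, with None
                marking points that received no depth from the lidar projection.
        """
        # A depth counts as valid only if present and inside a plausible lidar range
        # (the range bounds here are illustrative assumptions, not from the paper).
        valid = [d is not None and min_depth < d < max_depth for d in depths]
        n_valid = sum(valid)

        if n_valid == len(depths):
            # All feature points carry depth: pose can be estimated from
            # 3D-3D correspondences (e.g., an ICP-style alignment).
            return "3D-3D"
        elif n_valid > 0:
            # Only part of the points carry depth: a mixed scheme, e.g.,
            # PnP on the depth-bearing points, reprojection terms for the rest.
            return "3D-2D"
        else:
            # No depth at all: fall back to monocular 2D-2D epipolar
            # geometry, which recovers pose only up to scale.
            return "2D-2D"
    ```

    For example, a frame whose matches all project onto lidar returns would take the 3D-3D branch, while a frame seen only against sky or distant structure would fall back to the 2D-2D branch.
    
    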