• Laser & Optoelectronics Progress
  • Vol. 57, Issue 20, 201012 (2020)
Bin Zou1,2, Siyang Lin1,*, and Zhishuai Yin1,2
Author Affiliations
  • 1Hubei Key Laboratory of Advanced Technology for Automotive Components, Wuhan University of Technology, Wuhan, Hubei 430070, China
  • 2Hubei Collaborative Innovation Center for Automotive Components Technology, Wuhan, Hubei 430070, China
    DOI: 10.3788/LOP57.201012
    Bin Zou, Siyang Lin, Zhishuai Yin. Semantic Mapping Based on YOLOv3 and Visual SLAM[J]. Laser & Optoelectronics Progress, 2020, 57(20): 201012

    Abstract

    Visual simultaneous localization and mapping (SLAM) systems that use cameras as input can retain the spatial geometry of a point cloud during map construction. However, such systems do not fully utilize the semantic information of objects in the environment. To address this problem, mainstream visual SLAM systems and neural-network-based object detection algorithms, such as Faster R-CNN and YOLO, are investigated. Moreover, an effective point cloud segmentation method that adds supporting planes to improve the robustness of the segmentation results is considered. Finally, the YOLOv3 algorithm is combined with the ORB-SLAM system to detect objects in the environment, ensuring that the constructed point cloud map carries semantic information. The experimental results demonstrate that the proposed method constructs a semantic map with rich geometric information that can be applied to the navigation of unmanned vehicles or mobile robots.