• Laser & Optoelectronics Progress
  • Vol. 59, Issue 18, 1811003 (2022)
Rongfen Zhang*, Wenhao Yuan, Jin Lu, and Yuhong Liu
Author Affiliations
  • College of Big Data and Information Engineering, Guizhou University, Guiyang 550025, Guizhou, China
    DOI: 10.3788/LOP202259.1811003
    Rongfen Zhang, Wenhao Yuan, Jin Lu, Yuhong Liu. Visual Simultaneous Localization and Mapping Method of Semantic Octree Map Toward Indoor Dynamic Scenes[J]. Laser & Optoelectronics Progress, 2022, 59(18): 1811003

    Abstract

    To address the problems that traditional visual simultaneous localization and mapping (vSLAM) systems cannot effectively remove moving objects in dynamic scenes and lack the semantic maps needed for high-level interactive applications, a vSLAM scheme is proposed that removes moving objects effectively and builds a semantic octree map representing the static indoor environment. First, Fast-SCNN is used as the semantic segmentation network to extract semantic information from images, while a pyramid optical flow method tracks and matches feature points. Then, a multi-stage random sample consensus (multi-stage RANSAC) algorithm samples the feature points in stages and repeats the RANSAC process at several scales. Next, the epipolar geometry constraint is combined with the semantic information from Fast-SCNN to remove dynamic feature points in the visual odometry. Finally, the point cloud is voxel-filtered to reduce redundancy and used to build a semantic octree map of the static indoor environment. Experimental results on eight high-dynamic sequences of the public TUM RGB-D dataset show that the camera's relative translation, relative rotation, and global trajectory errors improve by more than 94% compared with the ORB-SLAM2 system, with a global trajectory error of only 0.1 m. Compared with the similar DS-SLAM system, the total time for removing dynamic points is reduced by 21%. In terms of map construction, the voxel-filtered semantic point cloud and the octree map occupy 9.6 MB and 685 kB of storage, respectively; the semantic octree map thus takes only about 4% of the 17 MB required by the original point cloud, and its semantic labels make it suitable for high-level intelligent interactive applications.
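
    The paper itself provides no code; the following is a minimal sketch of the dynamic-point removal step described above, assuming OpenCV's pyramidal Lucas-Kanade tracker and fundamental-matrix RANSAC as stand-ins for the pyramid optical flow and (single-stage) RANSAC stages. Function names, class IDs, and thresholds are illustrative assumptions, not values from the paper, and the multi-stage RANSAC variant is simplified to a single pass.

    ```python
    import cv2
    import numpy as np

    # Illustrative parameters; the paper's actual thresholds and class IDs are not given here.
    EPIPOLAR_DIST_THRESH = 1.0   # pixels: max distance to the epipolar line for a static point
    DYNAMIC_LABELS = {15}        # e.g., "person" class id in the segmentation mask (assumed)

    def flag_dynamic_points(prev_gray, gray, prev_pts, semantic_mask):
        """Track feature points with pyramidal LK optical flow, fit a fundamental
        matrix with RANSAC, and mark points as dynamic when they violate the
        epipolar constraint or fall on a movable semantic class."""
        # Pyramid optical flow tracking (coarse-to-fine over image pyramids).
        next_pts, status, _ = cv2.calcOpticalFlowPyrLK(
            prev_gray, gray, prev_pts, None, winSize=(21, 21), maxLevel=3)
        ok = status.ravel() == 1
        p1 = prev_pts[ok].reshape(-1, 2)
        p2 = next_pts[ok].reshape(-1, 2)

        # Fundamental matrix from the (mostly static) correspondences via RANSAC.
        F, _ = cv2.findFundamentalMat(p1, p2, cv2.FM_RANSAC, 1.0, 0.99)

        # Point-to-epipolar-line distance in the current frame.
        ones = np.ones((p1.shape[0], 1))
        lines = (F @ np.hstack([p1, ones]).T).T                  # lines a*x + b*y + c = 0
        num = np.abs(np.sum(lines * np.hstack([p2, ones]), axis=1))
        den = np.sqrt(lines[:, 0] ** 2 + lines[:, 1] ** 2)
        epi_dist = num / np.maximum(den, 1e-9)

        # Semantic check: look up the segmentation label at each tracked point.
        xs = np.clip(p2[:, 0].astype(int), 0, semantic_mask.shape[1] - 1)
        ys = np.clip(p2[:, 1].astype(int), 0, semantic_mask.shape[0] - 1)
        on_movable = np.isin(semantic_mask[ys, xs], list(DYNAMIC_LABELS))

        # A point is dynamic if it breaks the epipolar constraint or lies on a movable object.
        dynamic = (epi_dist > EPIPOLAR_DIST_THRESH) | on_movable
        return p1, p2, dynamic
    ```

    In the full pipeline, only the points not flagged as dynamic would feed pose estimation, and the resulting keyframe point clouds would then be voxel-filtered and inserted into an octree map (e.g., an OctoMap-style occupancy tree) with their semantic labels.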