• Laser & Optoelectronics Progress
  • Vol. 60, Issue 20, 2015002 (2023)
Meng Zuo1,2,3,4, Yiyang Liu1,2,3,*, Hao Cui1,2,3, and Hongfei Bai2
Author Affiliations
  • 1Key Laboratory of Networked Control Systems, Chinese Academy of Sciences, Shenyang 110016, Liaoning, China
  • 2Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, Liaoning, China
  • 3Institutes for Robotics & Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110169, Liaoning, China
  • 4University of Chinese Academy of Sciences, Beijing 100049, China
    DOI: 10.3788/LOP222819
    Meng Zuo, Yiyang Liu, Hao Cui, Hongfei Bai. Semantic Segmentation Method of Point Cloud Based on Sparse Convolution and Attention Mechanism[J]. Laser & Optoelectronics Progress, 2023, 60(20): 2015002

    Abstract

    Recently, three-dimensional point cloud semantic segmentation techniques based on sparse convolution have made great progress. However, sparse convolution causes a loss of global context information. In this study, a point cloud semantic segmentation method based on sparse convolution and an attention mechanism is proposed. The attention mechanism is introduced into the sparse convolutional network to improve the network's ability to capture global context information. However, the heavy computation of the attention mechanism limits its applicability. Hence, spatial pyramid sampling is further introduced into the attention mechanism to reduce the amount of computation while broadening its applicability. Experimental results demonstrate that the proposed method achieves a mean intersection over union (mIoU) of 71.825% on the ScanNet v2 dataset and 70.5% on the S3DIS dataset, confirming its effectiveness and superiority over the comparison methods.
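
    The abstract only sketches the design, but the core idea it describes, augmenting sparse-convolution features with an attention block whose cost is kept manageable by sampling keys and values at several pooling scales, can be illustrated with a short sketch. The module below is not the authors' implementation: the class name, pooling scales, head count, and the use of simple adaptive average pooling over the feature sequence are assumptions chosen only to show how pyramid sampling shrinks the attention cost from quadratic in the number of points to linear.

```python
# Illustrative sketch only; not the paper's exact architecture.
# Assumption: per-point/per-voxel features from a sparse-convolution backbone
# arrive as a dense tensor of shape (B, N, C).
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidSampledAttention(nn.Module):
    """Self-attention whose keys/values come from multi-scale pooled features,
    so the cost scales with the small pooled set rather than with all N points."""
    def __init__(self, channels, pool_sizes=(1, 2, 4, 8), num_heads=4):
        super().__init__()
        self.pool_sizes = pool_sizes
        self.q = nn.Linear(channels, channels)
        self.kv = nn.Linear(channels, 2 * channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x):
        # x: (B, N, C) features from the sparse backbone
        pooled = []
        for s in self.pool_sizes:
            # pool the feature sequence down to s "summary" tokens per scale
            p = F.adaptive_avg_pool1d(x.transpose(1, 2), s)  # (B, C, s)
            pooled.append(p.transpose(1, 2))                 # (B, s, C)
        kv_tokens = torch.cat(pooled, dim=1)                 # (B, sum(pool_sizes), C) << N
        q = self.q(x)
        k, v = self.kv(kv_tokens).chunk(2, dim=-1)
        out, _ = self.attn(q, k, v)                          # cost O(N * sum(pool_sizes))
        return x + out                                       # residual: keep local features

# Usage sketch: inject global context into sparse-convolution features
feats = torch.randn(2, 4096, 64)          # dummy batch of point/voxel features
block = PyramidSampledAttention(64)
print(block(feats).shape)                  # torch.Size([2, 4096, 64])
```

    With only 15 pooled key/value tokens per sample (1+2+4+8), each query attends to a fixed-size summary of the whole scene, which is the kind of computation saving the abstract attributes to spatial pyramid sampling.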