• Laser & Optoelectronics Progress
  • Vol. 58, Issue 22, 2210007 (2021)
Xiaolong Chen1,*, Ji Zhao1,2, Siyi Chen1,**, Xinhao Du1, and Xin Liu1
Author Affiliations
  • 1School of Automation and Electronic Information, Xiangtan University, Xiangtan, Hunan 411100, China
  • 2National CIMS Engineering Technology Research Center, Tsinghua University, Beijing 100084, China
    DOI: 10.3788/LOP202158.2210007
    Xiaolong Chen, Ji Zhao, Siyi Chen, Xinhao Du, Xin Liu. Grouped Double Attention Network for Semantic Segmentation[J]. Laser & Optoelectronics Progress, 2021, 58(22): 2210007

    Abstract

    The application of deep learning and the self-attention mechanism has greatly improved the performance of semantic segmentation networks. However, current self-attention mechanisms coarsely treat all channels of each pixel as a single vector during computation. To address this, we propose a grouped double attention network that operates along both the spatial and channel dimensions. First, the feature layer is divided into multiple groups; then, the uninformative groups of each feature layer are adaptively filtered out to capture accurate contextual information; finally, the weighted information of the groups is fused to obtain stronger contextual representations. Experimental results show that the proposed network outperforms the dual attention network on both datasets, achieving a segmentation accuracy of 85.6% on the PASCAL VOC 2012 validation set and 71.7% on the Cityscapes validation set.
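
    The group-then-gate idea described in the abstract (split channels into groups, suppress uninformative groups, fuse the re-weighted groups) can be illustrated with a small channel-attention block. The following is a minimal sketch only, assuming a PyTorch-style implementation; the class name GroupedChannelAttention, the num_groups parameter, and the gating branch are illustrative assumptions and do not reproduce the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GroupedChannelAttention(nn.Module):
    """Sketch: split channels into groups, compute channel attention per group,
    gate (re-weight) each group, then fuse the groups back together."""
    def __init__(self, channels, num_groups=4):
        super().__init__()
        assert channels % num_groups == 0
        self.num_groups = num_groups
        # Per-group gate that can suppress uninformative groups (assumed design).
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, num_groups, kernel_size=1),
            nn.Sigmoid(),
        )
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        b, c, h, w = x.shape
        g = self.num_groups
        cg = c // g
        # One scalar gate in [0, 1] per group.
        gates = self.gate(x).view(b, g, 1, 1, 1)
        # Reshape to (batch, groups, channels_per_group, H*W).
        feats = x.view(b, g, cg, h * w)
        # Channel affinity within each group: (b, g, cg, cg).
        energy = torch.matmul(feats, feats.transpose(-1, -2))
        attn = F.softmax(energy, dim=-1)
        # Re-weight channels within each group.
        out = torch.matmul(attn, feats).view(b, g, cg, h, w)
        # Suppress groups judged uninformative and fuse back to (b, c, h, w).
        out = (out * gates).view(b, c, h, w)
        return self.gamma * out + x

# Usage sketch: refined = GroupedChannelAttention(512, num_groups=8)(backbone_features)
```

    A spatial (position) attention branch would follow the same grouping pattern with the affinity computed over pixel positions instead of channels; the two branches are then summed, as in dual-attention designs.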