• Laser & Optoelectronics Progress
  • Vol. 60, Issue 16, 1628004 (2023)
Zhe He1,2, Yuxiang Tao1,2,*, Xiaobo Luo1,2, and Hao Xu1,2
Author Affiliations
  • 1School of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
  • 2Spatial Big Data Research Center, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
    DOI: 10.3788/LOP222634
    Zhe He, Yuxiang Tao, Xiaobo Luo, Hao Xu. Road Extraction from Remote Sensing Image Based on an Improved U-Net[J]. Laser & Optoelectronics Progress, 2023, 60(16): 1628004

    Abstract

    Road information extracted from remote sensing images is of great value in urban planning, traffic management, and other fields. However, owing to complex backgrounds, obstacles, and numerous visually similar non-road areas, high-quality road information extraction from remote sensing images remains challenging. In this work, we propose HSA-UNet, a road information extraction method for remote sensing images that combines mixed-scale attention with U-Net. First, an attention residual learning unit, composed of a residual structure and an attention feature fusion mechanism, is used in the encoding network to improve the extraction of global and local features. Second, because roads are typically long-spanning, narrow, and continuously distributed, an attention-enhanced atrous spatial pyramid pooling (ASPP) module is added to the bridge network to strengthen the extraction of road features at different scales. Experiments were performed on the Massachusetts Roads dataset, and the results show that HSA-UNet significantly outperforms D-LinkNet, DeepLabV3+, and other semantic segmentation networks in terms of F1 score, intersection over union (IoU), and other evaluation metrics.
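
    The abstract names two architectural components but gives no implementation details. Below is a minimal PyTorch sketch of one plausible reading: a residual unit whose skip connection is merged through channel attention, and an ASPP bridge whose dilated branches are re-weighted by channel attention. All class names, the fusion design, and the dilation rates (1, 6, 12, 18) are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelAttentionFusion(nn.Module):
    """Fuses two feature maps via channel attention (hypothetical stand-in
    for the paper's attention feature fusion mechanism)."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x, residual):
        w = self.fc(x + residual)          # channel weights in [0, 1]
        return w * x + (1 - w) * residual  # attention-weighted blend


class AttentionResidualUnit(nn.Module):
    """Residual block whose skip connection is merged by attention fusion."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
        )
        self.skip = (nn.Conv2d(in_ch, out_ch, 1, bias=False)
                     if in_ch != out_ch else nn.Identity())
        self.fuse = ChannelAttentionFusion(out_ch)

    def forward(self, x):
        return F.relu(self.fuse(self.body(x), self.skip(x)))


class AttentionASPP(nn.Module):
    """Atrous spatial pyramid pooling whose parallel dilated branches are
    re-weighted by channel attention before concatenation."""
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            ) for r in rates
        )
        self.attn = nn.Sequential(  # per-branch channel attention
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(out_ch, out_ch, 1),
            nn.Sigmoid(),
        )
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        feats = [b(x) for b in self.branches]
        feats = [f * self.attn(f) for f in feats]  # attention enhancement
        return self.project(torch.cat(feats, dim=1))


# Quick shape check on a dummy bridge feature map.
if __name__ == "__main__":
    x = torch.randn(1, 64, 32, 32)
    print(AttentionResidualUnit(64, 64)(x).shape)  # torch.Size([1, 64, 32, 32])
    print(AttentionASPP(64, 64)(x).shape)          # torch.Size([1, 64, 32, 32])
```

    Both modules preserve spatial resolution, so under these assumptions they could drop into a standard U-Net as encoder blocks and as the bridge between encoder and decoder, respectively.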