• Laser & Optoelectronics Progress
  • Vol. 59, Issue 12, 1215016 (2022)
Qisheng Wang1,2,3, Fengsui Wang1,2,3,*, Jingang Chen1,2,3, and Furong Liu1,2,3
Author Affiliations
  • 1School of Electrical Engineering, Anhui Polytechnic University, Wuhu 241000, Anhui, China
  • 2Anhui Key Laboratory of Detection Technology and Energy Saving Devices, Wuhu 241000, Anhui, China
  • 3Key Laboratory of Advanced Perception and Intelligent Control of High-End Equipment, Ministry of Education, Wuhu 241000, Anhui, China
    DOI: 10.3788/LOP202259.1215016
    Qisheng Wang, Fengsui Wang, Jingang Chen, Furong Liu. Faster R-CNN Target-Detection Algorithm Fused with Adaptive Attention Mechanism[J]. Laser & Optoelectronics Progress, 2022, 59(12): 1215016

    Abstract

    To address the localization and detection-accuracy problems of the Faster R-CNN target-detection algorithm, a movable attention (MA) model is designed that can be embedded in the algorithm and trained end-to-end. First, to obtain more accurate spatial location information, MA uses two adaptive maximum pooling operations to aggregate features along the horizontal and vertical directions of the input feature map, generating two independent direction-aware feature maps. Second, to prevent model overfitting, the sigmoid activation function is used to increase network nonlinearity. Finally, to fully exploit the obtained spatial location information, the two nonlinear feature maps are multiplied successively with the input feature map to enhance the latter's representational ability. The experimental results show that the improved Faster R-CNN target-detection algorithm based on MA effectively enhances the network's ability to locate the target of interest and considerably improves the average detection accuracy.
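    The abstract's three steps (directional adaptive max pooling, sigmoid nonlinearity, successive multiplication with the input) can be sketched as follows. This is a minimal NumPy illustration of the described mechanism, not the authors' implementation; the function name and the (C, H, W) tensor layout are assumptions, and the paper's module would additionally be a trainable layer inside a deep-learning framework.

    ```python
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def movable_attention(x):
        """Sketch of the MA module as described in the abstract.

        x: input feature map of shape (C, H, W).
        Returns a reweighted feature map of the same shape.
        """
        # Step 1: adaptive maximum pooling along each spatial direction,
        # yielding two independent direction-aware maps.
        h_map = x.max(axis=2, keepdims=True)  # pool over W -> (C, H, 1)
        w_map = x.max(axis=1, keepdims=True)  # pool over H -> (C, 1, W)

        # Step 2: sigmoid activation adds nonlinearity to each map.
        h_att = sigmoid(h_map)
        w_att = sigmoid(w_map)

        # Step 3: multiply the two nonlinear maps successively with the
        # input (broadcast over W, then over H) to enhance its
        # representational ability.
        return x * h_att * w_att
    ```

    Because each attention map lies in (0, 1), the output keeps the input's shape while attenuating responses away from the strongest horizontal and vertical positions.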