• Laser & Optoelectronics Progress
  • Vol. 57, Issue 18, 181025 (2020)
Hui Jin1,2 and Xinyang Li1,*
Author Affiliations
  • 1Key Laboratory on Adaptive Optics, Institute of Optics and Electronics, Chinese Academy of Sciences, Chengdu, Sichuan 610209, China
  • 2University of Chinese Academy of Sciences, Beijing 100049, China
    DOI: 10.3788/LOP57.181025
    Hui Jin, Xinyang Li. Residual Network Feature Fusion Tracking Algorithm Based on Graph Salience Detection[J]. Laser & Optoelectronics Progress, 2020, 57(18): 181025

    Abstract

    Feature representation of the target is central to visual tracking. Hand-crafted features are simple and fast, but their representational power is limited, so trackers built on them drift easily under rapid appearance changes and target occlusion. Because deep neural networks (DNNs) show strong feature representation ability in detection and recognition tasks, they have gradually become the feature extractors of choice. In this work, a deeper residual neural network (ResNet) replaces the VGG-19 network as the feature extractor. First, features from the additional-layer (shortcut) structure and the convolutional layers of ResNet-50 are fused to obtain a more robust target representation. Then, the fused features are filtered, and the target position is determined by the maximum value of the response map. Finally, to extend the algorithm to local-target tracking scenes, a graph-based visual saliency detection algorithm is applied to increase the weights of the local target and suppress background information, thereby improving the representation ability of the feature layers.
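    The pipeline summarized above can be illustrated with a short sketch. The snippet below is a hypothetical, minimal illustration rather than the authors' implementation: it extracts feature maps from two intermediate stages of a torchvision ResNet-50, fuses them after resizing, and locates the target at the peak of a plain cross-correlation response. The choice of layers (layer2/layer3), the equal fusion weights, and the use of simple cross-correlation in place of the paper's filtering step are assumptions for illustration only.

```python
# Minimal sketch of multi-layer ResNet-50 feature fusion for tracking.
# Assumptions (not taken from the paper): the fused layers (layer2/layer3),
# equal fusion weights, and plain cross-correlation instead of the paper's filter.
import torch
import torch.nn.functional as F
import torchvision

# Randomly initialised backbone; in practice pretrained weights would be loaded.
backbone = torchvision.models.resnet50().eval()

def extract_features(x):
    """Run the ResNet-50 stem and return two intermediate feature maps."""
    x = backbone.conv1(x)
    x = backbone.bn1(x)
    x = backbone.relu(x)
    x = backbone.maxpool(x)
    f2 = backbone.layer2(backbone.layer1(x))   # mid-level features
    f3 = backbone.layer3(f2)                   # deeper, more semantic features
    return f2, f3

def fuse(f2, f3, w2=0.5, w3=0.5):
    """Resize the deeper map to the shallower resolution and combine them."""
    f3_up = F.interpolate(f3, size=f2.shape[-2:], mode="bilinear",
                          align_corners=False)
    # Collapse channels by averaging -- a crude stand-in for the paper's fusion
    # of additional-layer and convolutional-layer features.
    g2 = f2.mean(dim=1, keepdim=True)
    g3 = f3_up.mean(dim=1, keepdim=True)
    return w2 * g2 + w3 * g3

@torch.no_grad()
def locate(template_img, search_img):
    """Cross-correlate fused template features with the search region;
    the location of the maximum response is taken as the target position."""
    zt = fuse(*extract_features(template_img))
    zs = fuse(*extract_features(search_img))
    response = F.conv2d(zs, zt)                # sliding-window correlation
    idx = response.flatten(1).argmax(dim=1)
    w = response.shape[-1]
    row, col = idx // w, idx % w
    return row.item(), col.item(), response

# Example usage with random tensors standing in for image crops.
template = torch.randn(1, 3, 127, 127)
search = torch.randn(1, 3, 255, 255)
row, col, resp = locate(template, search)
print("peak response at", row, col)
```

    In the paper's setting, the response map would additionally be reweighted by a graph-based saliency map so that local-target regions are emphasized and background responses are suppressed; that step is omitted from the sketch.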