• Laser & Optoelectronics Progress
  • Vol. 59, Issue 22, 2215002 (2022)
Tao Long, Chang Su, and Jian Wang*
Author Affiliations
  • School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China
    DOI: 10.3788/LOP202259.2215002
    Tao Long, Chang Su, Jian Wang. Learning Feature Point Descriptors for Detail Preservation[J]. Laser & Optoelectronics Progress, 2022, 59(22): 2215002


    Detecting salient key points in images and extracting feature descriptors are important components of computer vision tasks such as visual odometry and simultaneous localization and mapping. The main goal of a feature point extraction algorithm is to detect accurate key point positions and extract reliable feature descriptors. Reliable descriptors should remain stable under rotation, scaling, illumination changes, viewpoint changes, noise, etc. In recent deep-learning-based feature point extraction algorithms, however, image information is lost during downsampling, which reduces descriptor reliability and feature-matching accuracy. To address this problem, this study proposes a network that extracts detail-preserving feature descriptors. The proposed network fuses shallow detail features with deep semantic features to upsample the descriptors to a higher resolution. Combined with an attention mechanism, local (corners, lines, textures, etc.), semantic, and global features are exploited to improve both feature point detection and descriptor reliability. Experiments on the HPatches dataset show that the matching accuracy of the proposed method is 55.5%. Additionally, at an input resolution of 480×640, the homography estimation accuracy of the proposed method is 5.9 percentage points higher than that of existing methods. These results demonstrate the effectiveness of the proposed method.
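As a minimal illustration of the descriptor-matching step that underlies the reported matching accuracy, the sketch below implements mutual nearest-neighbour matching of L2-normalized descriptors in NumPy. This is a standard evaluation procedure for descriptor quality; it is an assumption that the paper uses this exact protocol, and the function and variable names here are hypothetical, not from the paper.

```python
import numpy as np

def mutual_nn_match(desc_a, desc_b):
    """Match two sets of L2-normalized descriptors by mutual nearest neighbour.

    desc_a: (N, D) array, desc_b: (M, D) array.
    Returns a (K, 2) array of index pairs (i in A, j in B) that are each
    other's nearest neighbours under cosine similarity.
    """
    sim = desc_a @ desc_b.T              # cosine similarity matrix, shape (N, M)
    nn_ab = sim.argmax(axis=1)           # best match in B for each descriptor in A
    nn_ba = sim.argmax(axis=0)           # best match in A for each descriptor in B
    ids = np.arange(desc_a.shape[0])
    mutual = nn_ba[nn_ab] == ids         # keep only pairs that agree both ways
    return np.stack([ids[mutual], nn_ab[mutual]], axis=1)

# Usage: descriptors in B are a row permutation of those in A,
# so mutual NN matching should recover the permutation exactly.
rng = np.random.default_rng(0)
a = rng.normal(size=(50, 128))
a /= np.linalg.norm(a, axis=1, keepdims=True)   # L2-normalize rows
perm = rng.permutation(50)
b = a[perm]
matches = mutual_nn_match(a, b)
```

The mutual (cross-check) constraint discards one-way matches, trading recall for precision; matching accuracy on a benchmark like HPatches is then the fraction of such matches whose reprojection error under the ground-truth homography falls below a pixel threshold.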