• Opto-Electronic Engineering
  • Vol. 47, Issue 12, 190669 (2020)
Wang Ronggui, Wang Jing, Yang Juan*, and Xue Lixia
DOI: 10.12086/oee.2020.190669
Wang Ronggui, Wang Jing, Yang Juan, Xue Lixia. Feature pyramid random fusion network for visible-infrared modality person re-identification[J]. Opto-Electronic Engineering, 2020, 47(12): 190669

    Abstract

    Existing works on person re-identification consider only extracting view-invariant feature representations across visible cameras and ignore the imaging characteristics of the infrared domain, so studies on the visible-infrared modality remain scarce. Moreover, most works distinguish between the two views by computing the similarity of feature maps from a single convolutional layer, which weakens feature learning. To address these problems, we design a feature pyramid random fusion network (FPRnet) that learns discriminative multi-level semantic features by computing similarities between multi-level convolutional features when matching persons. FPRnet not only reduces the negative effect of intra-modality variation, but also narrows the heterogeneity gap between modalities, attending to infrared images with very different visual properties. Meanwhile, our work integrates the advantages of learning both local and global features, which effectively addresses visible-infrared person re-identification. Extensive experiments on the public SYSU-MM01 dataset, evaluated in terms of mAP and convergence speed, demonstrate the superiority of our approach over state-of-the-art methods: FPRnet achieves a competitive 32.12% mAP with much faster convergence.
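    To make the multi-level idea concrete, the following is a minimal PyTorch sketch of pyramid feature fusion with random level re-weighting, assuming a ResNet-50 backbone. The stage selection, projection sizes, and uniform random weights are illustrative assumptions for the general technique, not the paper's exact FPRnet design.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class PyramidRandomFusion(nn.Module):
    """Sketch of multi-level feature fusion (illustrative, not the paper's FPRnet).

    Extracts feature maps from four ResNet-50 stages, projects each to a shared
    embedding space, and randomly re-weights the levels during training so the
    matcher does not rely on any single convolutional layer.
    """

    def __init__(self, embed_dim=256):
        super().__init__()
        backbone = models.resnet50(weights=None)
        self.stem = nn.Sequential(backbone.conv1, backbone.bn1,
                                  backbone.relu, backbone.maxpool)
        self.stages = nn.ModuleList([backbone.layer1, backbone.layer2,
                                     backbone.layer3, backbone.layer4])
        self.pool = nn.AdaptiveAvgPool2d(1)
        # One projection per pyramid level (ResNet-50 stage channel sizes).
        self.proj = nn.ModuleList([nn.Linear(c, embed_dim)
                                   for c in (256, 512, 1024, 2048)])

    def forward(self, x):
        x = self.stem(x)
        embeddings = []
        for stage, proj in zip(self.stages, self.proj):
            x = stage(x)
            embeddings.append(proj(self.pool(x).flatten(1)))
        feats = torch.stack(embeddings, dim=1)  # (batch, levels, embed_dim)
        if self.training:
            # Random fusion: draw fresh level weights each forward pass.
            w = torch.rand(feats.size(1), device=feats.device)
            w = w / w.sum()
        else:
            # Deterministic uniform fusion at inference time.
            w = torch.full((feats.size(1),), 1.0 / feats.size(1),
                           device=feats.device)
        return (feats * w.view(1, -1, 1)).sum(dim=1)  # fused embedding

# Cross-modality matching: cosine similarity of fused embeddings.
model = PyramidRandomFusion().eval()
visible = torch.randn(1, 3, 256, 128)   # visible (RGB) image
infrared = torch.randn(1, 3, 256, 128)  # infrared image, replicated to 3 channels
with torch.no_grad():
    score = torch.cosine_similarity(model(visible), model(infrared))
```

    In this sketch, fusing embeddings from several depths lets the similarity score draw on both low-level local texture and high-level global semantics, which is the balance the abstract attributes to combining local and global feature learning.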