• Laser & Optoelectronics Progress
  • Vol. 61, Issue 12, 1215006 (2024)
Jing Zhang and Guangfeng Chen*
Author Affiliations
  • College of Mechanical Engineering, Donghua University, Shanghai 201620, China
    DOI: 10.3788/LOP231858
    Jing Zhang, Guangfeng Chen. Visible-Infrared Person Re-Identification Via Feature Constrained Learning[J]. Laser & Optoelectronics Progress, 2024, 61(12): 1215006

    Abstract

    Owing to the large modality gap between visible and infrared images, visible-infrared person re-identification (VI-ReID) is a challenging task. A central problem in VI-ReID is how to effectively extract useful information from the features shared across modalities. To address this problem, we propose a dual-stream cross-modal person re-identification network based on the Vision Transformer, which uses a modal token embedding module and a multi-resolution feature extraction module to supervise the model in extracting discriminative modality-shared information. In addition, to enhance the discriminative ability of the model, a modal invariance constraint loss and a feature center constraint loss are designed. The modal invariance constraint loss guides the model to learn features that are invariant across modalities. The feature center constraint loss supervises the model to minimize intra-class feature differences and maximize inter-class feature differences. Extensive experiments on the SYSU-MM01 and RegDB datasets show that the proposed method outperforms most existing methods. On the large-scale SYSU-MM01 dataset, our model achieves 67.69% Rank-1 accuracy and 66.82% mean average precision (mAP).
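    A minimal PyTorch sketch of the two constraint losses is given below for illustration. The function names, tensor shapes, and exact formulations (mean squared distance between L2-normalized paired features for the modal invariance constraint; a center-based intra/inter term with a margin for the feature center constraint) are assumptions for this sketch, not the paper's definitions.

    import torch
    import torch.nn.functional as F

    def modal_invariance_loss(vis_feat, ir_feat):
        # vis_feat, ir_feat: (batch, dim) embeddings of the same identities,
        # aligned by row. Assumed formulation: mean squared distance between
        # L2-normalized visible and infrared features of the same person.
        vis = F.normalize(vis_feat, dim=1)
        ir = F.normalize(ir_feat, dim=1)
        return ((vis - ir) ** 2).sum(dim=1).mean()

    def feature_center_loss(feats, labels, margin=0.3):
        # Assumed center-based formulation: pull each feature toward its
        # class center (minimizing intra-class differences) and push distinct
        # class centers at least `margin` apart (maximizing inter-class
        # differences). Assumes at least two identities per batch.
        classes = labels.unique()
        centers = torch.stack([feats[labels == c].mean(dim=0) for c in classes])
        # Intra-class term: squared distance of each feature to its center.
        intra = torch.stack([
            ((feats[labels == c] - centers[i]) ** 2).sum(dim=1).mean()
            for i, c in enumerate(classes)
        ]).mean()
        # Inter-class term: hinge loss on pairwise center distances.
        dist = torch.cdist(centers, centers, p=2)
        off_diag = ~torch.eye(len(classes), dtype=torch.bool, device=feats.device)
        inter = F.relu(margin - dist[off_diag]).mean()
        return intra + inter

    In training, such terms would presumably be added to the identification loss with weighting coefficients, e.g. loss = id_loss + lambda1 * modal_invariance_loss(vis, ir) + lambda2 * feature_center_loss(feats, labels), where lambda1 and lambda2 are hypothetical hyperparameters.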