Laser & Optoelectronics Progress, Vol. 59, Issue 8, 0810010 (2022)
Fengsui Wang1,2,3,*, Furong Liu1,2,3, Jingang Chen1,2,3, and Qisheng Wang1,2,3
Author Affiliations
  • 1School of Electrical Engineering, Anhui Polytechnic University, Wuhu, Anhui 241000, China
  • 2Anhui Key Laboratory of Detection Technology and Energy Saving Devices, Wuhu, Anhui 241000, China
  • 3Key Laboratory of Advanced Perception and Intelligent Control of High-End Equipment, Ministry of Education, Wuhu, Anhui 241000, China
    DOI: 10.3788/LOP202259.0810010
    Fengsui Wang, Furong Liu, Jingang Chen, Qisheng Wang. Multi-Loss Joint Cross-Modality Person Re-Identification Method Integrating Attention Mechanism[J]. Laser & Optoelectronics Progress, 2022, 59(8): 0810010.

    Abstract

    The key difficulty of the cross-modality person re-identification task is extracting effective modality-shared features. To address this problem, this paper proposes a multi-loss joint cross-modality person re-identification method based on an attention mechanism. First, an attention model is embedded in the ResNet50 network to preserve detailed information. Second, the resulting feature map is partitioned into six local features, making the network focus on local deep information and enhancing its representation ability. Finally, the extracted local feature column vectors are batch-normalized, and cross-entropy loss together with an improved hetero-center loss is used for joint supervised learning to accelerate model convergence and improve model accuracy. The proposed method achieves an average accuracy of 56.82% and 75.44% on the SYSU-MM01 and RegDB datasets, respectively. The experimental results show that the proposed method can effectively improve the accuracy of cross-modality person re-identification.
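    Two of the components named in the abstract can be sketched compactly: the six-part horizontal partition of the backbone feature map, and a hetero-center-style loss that pulls together the per-identity feature centers of the visible and infrared modalities. The sketch below is illustrative only, assuming a NumPy array stands in for the ResNet50 output; it implements the standard (unimproved) hetero-center formulation, and the function names are invented for this example, not taken from the paper's code.

    ```python
    import numpy as np

    def six_part_features(fmap, parts=6):
        """Split a (C, H, W) feature map into `parts` horizontal stripes and
        average-pool each stripe into a (C,) local descriptor."""
        C, H, W = fmap.shape
        stripe_rows = np.array_split(np.arange(H), parts)
        return [fmap[:, rows, :].mean(axis=(1, 2)) for rows in stripe_rows]

    def hetero_center_loss(features, labels, modalities):
        """Standard hetero-center loss: for each identity, sum the squared
        distance between its visible-modality center and its infrared-modality
        center.

        features:   (N, D) feature vectors
        labels:     (N,) identity labels
        modalities: (N,) modality flags, 0 = visible, 1 = infrared
        """
        loss = 0.0
        for pid in np.unique(labels):
            vis = features[(labels == pid) & (modalities == 0)]
            ir = features[(labels == pid) & (modalities == 1)]
            if len(vis) == 0 or len(ir) == 0:
                continue  # identity missing in one modality: no center pair
            loss += np.sum((vis.mean(axis=0) - ir.mean(axis=0)) ** 2)
        return loss
    ```

    In training, each of the six stripe descriptors would be batch-normalized and fed to its own classifier for the cross-entropy term, while the hetero-center term above is computed on the same descriptors, so that the two losses jointly supervise the shared embedding.
    
    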