Opto-Electronic Engineering, Vol. 48, Issue 5, 200388 (2021)
Zhang Xiaoyan1, Zhang Baohua1,2,*, Lv Xiaoqi2,3, Gu Yu1,2, Wang Yueming1,2, Liu Xin1,2, Ren Yan1, and Li Jianjun1,2
Author Affiliations
  • 1[in Chinese]
  • 2[in Chinese]
  • 3[in Chinese]
    DOI: 10.12086/oee.2021.200388
    Zhang Xiaoyan, Zhang Baohua, Lv Xiaoqi, Gu Yu, Wang Yueming, Liu Xin, Ren Yan, Li Jianjun. The joint discriminative and generative learning for person re-identification of deep dual attention[J]. Opto-Electronic Engineering, 2021, 48(5): 200388

    Abstract

    In the task of person re-identification, datasets are difficult to label, sample sizes are small, and detail features are lost after feature extraction. To address these issues, a joint discriminative and generative learning method with deep dual attention is proposed. First, a joint learning framework is constructed in which the discriminative module is embedded into the generative module, enabling end-to-end training of image generation and discrimination. The generated images are then fed back into the discriminative module so that the generative and discriminative modules are optimized simultaneously. Second, a deep dual attention module is constructed by modeling the relationships among channels and among spatial positions of the attention modules and fusing the channel features with the spatial features. By embedding this module in the teacher model, the network can better extract fine-grained features of pedestrians and improve its recognition ability. The experimental results show that the algorithm achieves better robustness and discriminative capability on the Market-1501 and DukeMTMC-ReID datasets.
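    The abstract does not specify the internal design of the deep dual attention module. The PyTorch sketch below shows one common way to merge channel features and spatial features over a backbone feature map (a CBAM-style arrangement); the class name, the reduction ratio, and the fusion by addition are illustrative assumptions, not the authors' exact method.

    ```python
    import torch
    import torch.nn as nn

    class DualAttention(nn.Module):
        """Illustrative dual attention block: channel attention and spatial
        attention computed on the same input and fused by addition.
        A generic sketch, not the paper's exact module."""

        def __init__(self, channels: int, reduction: int = 16):
            super().__init__()
            # Channel attention: pool spatial dims, predict per-channel weights.
            self.channel_mlp = nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
            )
            # Spatial attention: 7x7 conv over pooled channel statistics.
            self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            b, c, h, w = x.shape
            # Channel attention weights from average- and max-pooled descriptors.
            avg = x.mean(dim=(2, 3))                     # (B, C)
            mx = x.amax(dim=(2, 3))                      # (B, C)
            ch_weight = torch.sigmoid(
                self.channel_mlp(avg) + self.channel_mlp(mx)
            ).view(b, c, 1, 1)
            x_ch = x * ch_weight
            # Spatial attention weights from channel-wise mean and max maps.
            sp_in = torch.cat(
                [x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1
            )                                            # (B, 2, H, W)
            sp_weight = torch.sigmoid(self.spatial_conv(sp_in))
            x_sp = x * sp_weight
            # Fuse the channel-refined and spatially refined features.
            return x_ch + x_sp

    if __name__ == "__main__":
        feat = torch.randn(4, 256, 64, 32)   # e.g. a ReID backbone feature map
        out = DualAttention(256)(feat)
        print(out.shape)                      # torch.Size([4, 256, 64, 32])
    ```

    In such a design the attended feature map keeps the input resolution, so the block can be inserted after any backbone stage of the teacher model without changing the rest of the re-identification pipeline.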