• Opto-Electronic Engineering
  • Vol. 50, Issue 12, 230239-1 (2023)
Siyu Cheng and Ying Chen*
Author Affiliations
  • Key Laboratory of Advanced Process Control for Light Industry (Ministry of Education), School of Internet of Things Engineering, Jiangnan University, Wuxi, Jiangsu 214122, China
DOI: 10.12086/oee.2023.230239
Siyu Cheng, Ying Chen. Camera-aware unsupervised person re-identification method guided by pseudo-label refinement[J]. Opto-Electronic Engineering, 2023, 50(12): 230239-1
Fig. 1. The overall framework of our method
Fig. 2. Neighborhood pseudo-label refinement module
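The refinement rule itself is not reproduced on this page. As a minimal sketch of the idea in Fig. 2, assuming the k and α of Fig. 6 set the neighborhood size and the mixing weight (the helper name `refine_pseudo_labels` is illustrative, not from the paper), a sample's hard cluster label can be smoothed with the labels of its k nearest feature-space neighbors:

```python
import torch
import torch.nn.functional as F

def refine_pseudo_labels(features, labels, num_classes, k=20, alpha=0.5):
    """Smooth hard pseudo-labels with the labels of each sample's
    k nearest neighbors in feature space (illustrative sketch only).

    features: (N, D) L2-normalized embeddings of the training set
    labels:   (N,)   cluster indices from e.g. DBSCAN (outliers removed)
    """
    one_hot = F.one_hot(labels, num_classes).float()   # (N, C) hard labels
    sim = features @ features.t()                      # cosine similarities
    sim.fill_diagonal_(-1.0)                           # exclude self-matches
    _, knn = sim.topk(k, dim=1)                        # (N, k) neighbor indices
    neighbor_avg = one_hot[knn].mean(dim=1)            # (N, C) neighborhood vote
    # alpha trades off the original hard label and the neighborhood consensus
    return alpha * one_hot + (1.0 - alpha) * neighbor_avg
```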
Fig. 3. Schematic diagram of camera-aware contrast guided by refined pseudo-labels. (a) Original intra-camera contrast; (b) Corrected intra-camera contrast; (c) Original inter-camera contrast; (d) Corrected inter-camera contrast
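Fig. 3 contrasts each feature against camera-aware proxies, within its own camera and across cameras. Below is a hedged sketch of the two InfoNCE-style terms, assuming proxy memories and the temperatures τintra and τinter of Fig. 6; the arguments `cam_proxies`, `pos_idx`, and `pos_mask` are illustrative, and the refined-pseudo-label positive/negative selection that the paper contributes is only approximated here:

```python
import torch
import torch.nn.functional as F

def intra_camera_loss(feat, cam_proxies, pos_idx, tau_intra=0.07):
    """Contrast a query feature against the proxy centroids of its own
    camera; pos_idx marks the proxy sharing its refined pseudo-label."""
    logits = cam_proxies @ feat / tau_intra            # (P,) similarities
    return F.cross_entropy(logits.unsqueeze(0), torch.tensor([pos_idx]))

def inter_camera_loss(feat, all_proxies, pos_mask, tau_inter=0.07):
    """Pull together proxies that share the refined identity across
    cameras; pos_mask is True for those cross-camera positives."""
    logits = all_proxies @ feat / tau_inter            # (Q,) similarities
    log_prob = F.log_softmax(logits, dim=0)
    return -log_prob[pos_mask].mean()                  # multi-positive InfoNCE
```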
Fig. 4. Comparison of Top-10 ranking lists of different methods on the Market-1501 dataset. (a) Baseline method; (b) CAP[9] method; (c) PPLR[15] method; (d) Our method
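The ranking lists of Fig. 4 follow the standard re-ID retrieval protocol; a generic sketch of producing a Top-10 list from L2-normalized features (not the authors' evaluation code):

```python
import torch

def top10_ranking(query_feat, gallery_feats):
    """Rank gallery images by cosine similarity to one query.
    query_feat: (D,), gallery_feats: (G, D), both L2-normalized."""
    scores = gallery_feats @ query_feat    # (G,) cosine similarities
    return scores.topk(10).indices         # gallery indices of the 10 best matches
```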
Fig. 5. t-SNE visualization of features from different methods on the Market-1501 dataset. (a) Baseline method; (b) CAP[9] method; (c) PPLR[15] method; (d) Our method
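Plots like Fig. 5 can be reproduced with scikit-learn's t-SNE on the extracted features; this is a generic sketch, not the authors' plotting code:

```python
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_tsne(features, identities):
    """Project (N, D) features to 2-D with t-SNE and color by identity."""
    emb = TSNE(n_components=2, init="pca", perplexity=30).fit_transform(features)
    plt.scatter(emb[:, 0], emb[:, 1], c=identities, s=5, cmap="tab20")
    plt.axis("off")
    plt.show()
```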
Fig. 6. The impact of each hyperparameter on our model on Market-1501. (a) α; (b) k; (c) τintra; (d) τinter
| Methods | Venue | Market-1501 mAP/% | Market-1501 Rank-1/% | MSMT17 mAP/% | MSMT17 Rank-1/% | PersonX mAP/% | PersonX Rank-1/% |
|---|---|---|---|---|---|---|---|
| Unsupervised domain adaptation (UDA) | | | | | | | |
| ECN[2] | CVPR'19 | 43.0 | 75.1 | 10.2 | 30.2 | - | - |
| SPCL[3] | NeurIPS'20 | 77.5 | 89.7 | 26.8 | 53.7 | 78.5 | 91.1 |
| MEB-Net[13] | ECCV'20 | 76.0 | 89.9 | - | - | - | - |
| MMT[12] | CVPR'21 | 71.2 | 87.7 | 23.5 | 50.0 | 78.9 | 90.6 |
| GLT[31] | CVPR'21 | 79.5 | 92.2 | 27.7 | 59.5 | - | - |
| MCL[32] | JBUAA'22 | 80.6 | 93.2 | 28.5 | 58.5 | - | - |
| CACHE[33] | TCSVT'22 | 83.1 | 93.4 | 31.3 | 58.0 | - | - |
| CIFL[20] | TMM'22 | 83.3 | 93.9 | 39.0 | 70.5 | - | - |
| MCRN[16] | AAAI'22 | 83.8 | 93.8 | 35.7 | 67.5 | - | - |
| IICM[34] | JCRD'23 | 74.9 | 89.0 | 27.2 | 52.3 | - | - |
| NPSS[21] | TIFS'23 | 84.6 | 94.1 | 38.9 | 69.4 | - | - |
| CaCL[11] | ICCV'23 | 84.7 | 93.8 | 40.3 | 70.0 | - | - |
| Unsupervised learning (USL) | | | | | | | |
| SPCL[3] | NeurIPS'20 | 73.1 | 88.1 | 19.1 | 42.3 | 72.3 | 88.1 |
| MetaCam[7] | CVPR'21 | 61.7 | 83.9 | 15.5 | 35.2 | - | - |
| IICS[8] | CVPR'21 | 72.9 | 89.5 | 26.9 | 56.4 | - | - |
| RLCC[14] | CVPR'21 | 77.7 | 90.8 | 27.9 | 56.5 | - | - |
| CAP[9] | AAAI'21 | 79.2 | 91.4 | 36.9 | 67.4 | - | - |
| ICE[17] | ICCV'21 | 82.3 | 93.8 | 38.9 | 70.2 | - | - |
| CACHE[33] | TCSVT'22 | 81.0 | 92.0 | 31.8 | 58.2 | - | - |
| CIFL[20] | TMM'22 | 82.4 | 93.9 | 38.8 | 70.1 | - | - |
| GRACL[35] | TCSVT'22 | 83.7 | 93.2 | 34.6 | 64.0 | 87.9 | 95.3 |
| PPLR[15] | CVPR'22 | 84.4 | 94.3 | 42.2 | 73.3 | - | - |
| CA-UReID[10] | ICME'22 | 84.5 | 94.1 | - | - | - | - |
| NPSS[21] | TIFS'23 | 82.3 | 94.0 | 36.7 | 68.8 | - | - |
| LRMGFS[36] | JEMI'23 | 83.3 | 93.3 | 27.4 | 58.4 | - | - |
| PLRIS[22] | ICIP'23 | 83.2 | 93.1 | 43.3 | 71.5 | - | - |
| AdaMG[37] | TCSVT'23 | 84.6 | 93.9 | 38.0 | 66.3 | 87.6 | 95.0 |
| LP[18] | TIP'23 | 85.8 | 94.5 | 39.5 | 67.9 | - | - |
| DCCT[19] | TCSVT'23 | 86.3 | 94.4 | 41.8 | 68.7 | 87.6 | 95.0 |
| CC[4] | CoRR'21 | 82.1 | 92.3 | 27.6 | 56.0 | 84.7 | 94.4 |
| Ours | - | 85.2 | 94.4 | 44.3 | 74.1 | 88.7 | 95.9 |
Table 1. Comparison between our method and state-of-the-art methods
| Model | L̃ce | L̃intra | L̃inter | Market-1501 mAP/% | Rank-1/% | Rank-5/% | Rank-10/% | PersonX mAP/% | Rank-1/% | Rank-5/% | Rank-10/% |
|---|---|---|---|---|---|---|---|---|---|---|---|
| M1 (CC[4]) | - | - | - | 82.1 | 92.3 | 96.7 | 97.9 | 84.7 | 94.4 | 98.3 | 99.3 |
| M2 | √ | - | - | 82.9 | 93.2 | 97.3 | 98.2 | 87.9 | 95.7 | 98.8 | 99.5 |
| M3 | - | √ | - | 83.9 | 93.6 | 97.5 | 98.3 | 87.3 | 95.9 | 98.8 | 99.4 |
| M4 | √ | √ | - | 84.1 | 93.6 | 97.6 | 98.5 | 88.5 | 95.8 | 98.9 | 99.6 |
| M5 | - | - | √ | 84.7 | 94.2 | 97.9 | 98.7 | 87.5 | 95.5 | 98.9 | 99.5 |
| M6 (Ours) | √ | √ | √ | 85.2 | 94.4 | 98.1 | 98.6 | 88.7 | 95.9 | 99.2 | 99.7 |
Table 2. Results of ablation studies on Market-1501 and PersonX
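Row M6 of Table 2 combines all three refined losses. A minimal sketch of the overall objective, where the equal weighting is an assumption rather than a value reported on this page:

```python
import torch

def overall_objective(l_ce, l_intra, l_inter, lam=1.0):
    """Combine the three refined losses of row M6 in Table 2.
    Equal weighting (lam = 1.0) is an assumption, not a reported value."""
    return l_ce + lam * (l_intra + l_inter)
```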