• Optical Instruments
  • Vol. 47, Issue 2, 50 (2025)
Yemin QIU, Rongfu ZHANG*, Chen HE, Ziye YANG, and Guyu GAO
Author Affiliations
  • School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
    DOI: 10.3969/j.issn.1005-5630.202402270026
    Citation: Yemin QIU, Rongfu ZHANG, Chen HE, Ziye YANG, Guyu GAO. Image classification approach using AllMix for label noise learning[J]. Optical Instruments, 2025, 47(2): 50

    Abstract

    Manually collected and annotated datasets are inevitably contaminated with label noise, which degrades the generalization ability of image classification models, so designing classification algorithms that are robust to label noise has become an active research topic. The main drawbacks of existing methods are that self-supervised pre-training is time-consuming and that many noisy samples remain after sample selection. This paper introduces the AllMix model, which reduces the time required for pre-training. Building on the DivideMix model, the AllMatch training strategy replaces the original MixMatch training strategy: it uses focal loss and generalized cross-entropy loss to optimize the loss on labeled samples, and it adds a high-confidence-sample semi-supervised learning module and a contrastive learning module to learn fully from unlabeled samples. Experimental results show that on the CIFAR10 dataset, the proposed model outperforms existing pre-training-based label noise classification algorithms by 0.7%, 0.7%, and 5.0% at symmetric noise ratios of 50%, 80%, and 90%, respectively; on the CIFAR100 dataset, it is 2.8% and 10.1% higher at 80% and 90% symmetric noise, respectively.
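
    The abstract names focal loss and generalized cross-entropy (GCE) loss as the losses applied to labeled samples, but does not give their exact combination. The PyTorch sketch below only illustrates these two standard losses; the focusing parameter gamma, the GCE exponent q, the weighting factor alpha, and the function names are assumptions for illustration, not the paper's definitions.

    import torch
    import torch.nn.functional as F

    def focal_loss(logits, targets, gamma=2.0):
        # Focal loss: cross-entropy down-weighted by (1 - p_t)^gamma so that
        # easy, well-classified samples contribute less. gamma=2.0 is a common
        # default, not a value taken from the paper.
        log_probs = F.log_softmax(logits, dim=1)
        log_pt = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)  # log p_t of labeled class
        pt = log_pt.exp()                                              # p_t of labeled class
        return (-(1.0 - pt) ** gamma * log_pt).mean()

    def generalized_cross_entropy(logits, targets, q=0.7):
        # GCE (Zhang & Sabuncu, 2018): (1 - p_t^q) / q interpolates between
        # cross-entropy (q -> 0) and MAE (q = 1), which makes it more tolerant
        # of noisy labels. q=0.7 is the value suggested in that paper.
        probs = F.softmax(logits, dim=1)
        pt = probs.gather(1, targets.unsqueeze(1)).squeeze(1).clamp(min=1e-7)
        return ((1.0 - pt ** q) / q).mean()

    def labeled_loss(logits, targets, alpha=0.5):
        # Hypothetical weighted combination of the two losses for labeled samples;
        # the actual weighting used by AllMatch is not stated in the abstract.
        return alpha * focal_loss(logits, targets) + (1.0 - alpha) * generalized_cross_entropy(logits, targets)

    # Usage sketch: classifier logits and (possibly noisy) integer labels.
    logits = torch.randn(8, 10)            # batch of 8 samples, 10 classes (e.g. CIFAR10)
    targets = torch.randint(0, 10, (8,))
    print(labeled_loss(logits, targets))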