Two Wrongs Don’t Make a Right: Combating Confirmation Bias in Learning with Label Noise
DOI: https://doi.org/10.1609/aaai.v37i12.26725
Keywords: General
Abstract
Noisy labels damage the performance of deep networks. For robust learning, a prominent two-stage pipeline alternates between eliminating possibly incorrect labels and semi-supervised training. However, discarding part of the noisy labels can result in a loss of information, especially when the corruption depends on the data, e.g., class-dependent or instance-dependent noise. Moreover, from the training dynamics of a representative two-stage method, DivideMix, we identify that confirmation bias dominates: pseudo-labels fail to correct a considerable amount of noisy labels, and consequently, the errors accumulate. To sufficiently exploit the information in noisy labels and mitigate wrong corrections, we propose Robust Label Refurbishment (Robust LR), a new hybrid method that integrates pseudo-labeling and confidence estimation techniques to refurbish noisy labels. We show that our method successfully alleviates the damage of both label noise and confirmation bias. As a result, it achieves state-of-the-art performance across datasets and noise types, namely CIFAR under different levels of synthetic noise, and mini-WebVision and ANIMAL-10N with real-world noise.
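The refurbishment idea described in the abstract can be illustrated as a confidence-weighted convex combination of each given (possibly noisy) label and the model's pseudo-label distribution. The function name, weighting scheme, and toy numbers below are illustrative assumptions for exposition only, not the paper's exact Robust LR formulation.

```python
import numpy as np

def refurbish_labels(noisy_labels, pseudo_probs, label_confidence, num_classes):
    """Sketch of label refurbishment: convexly combine each one-hot noisy
    label with the model's pseudo-label distribution, weighted by an
    estimated confidence that the given label is correct.
    (A simplified illustration, not the paper's exact algorithm.)"""
    one_hot = np.eye(num_classes)[noisy_labels]    # (N, C) one-hot given labels
    w = label_confidence[:, None]                  # (N, 1) trust in given label
    return w * one_hot + (1.0 - w) * pseudo_probs  # (N, C) refurbished targets

# Toy example: 3 samples, 3 classes; sample 2's label looks corrupted.
noisy = np.array([0, 1, 2])
pseudo = np.array([[0.9, 0.05, 0.05],
                   [0.1, 0.8, 0.1],
                   [0.7, 0.2, 0.1]])   # model believes sample 2 is class 0
conf = np.array([0.95, 0.9, 0.1])      # low confidence in sample 2's label
refurbished = refurbish_labels(noisy, pseudo, conf, 3)
```

With a low confidence weight, the refurbished target for sample 2 shifts toward the model's prediction (class 0) rather than its given label (class 2), while high-confidence labels are largely preserved.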
Published
2023-06-26
How to Cite
Chen, M., Cheng, H., Du, Y., Xu, M., Jiang, W., & Wang, C. (2023). Two Wrongs Don’t Make a Right: Combating Confirmation Bias in Learning with Label Noise. Proceedings of the AAAI Conference on Artificial Intelligence, 37(12), 14765-14773. https://doi.org/10.1609/aaai.v37i12.26725
Section
AAAI Special Track on Safe and Robust AI