Rethinking Label Refurbishment: Model Robustness under Label Noise

Abstract

A family of methods that generate soft labels by mixing hard labels with a chosen distribution, collectively known as label refurbishment, is widely used to train deep neural networks. However, some of these methods remain poorly understood in the presence of label noise. In this paper, we revisit four label refurbishment methods and reveal the strong connections among them. We find that they affect neural network models in different ways: two of them smooth the estimated posterior for a regularization effect, while the other two force the model to produce high-confidence predictions. We conduct extensive experiments to evaluate the related methods and observe that both effects improve model generalization under label noise. Furthermore, we theoretically show that both effects lead to generalization guarantees on the clean distribution despite training on noisy labels.
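To make the mixing rule concrete, the following is a minimal sketch (not code from the paper) of the generic refurbishment step the abstract describes: a soft label is a convex combination of the one-hot hard label and a mixing distribution. The function name refurbish_labels and the weight alpha are illustrative assumptions; mixing with the uniform distribution recovers label smoothing, while mixing with the model's own predicted posterior recovers bootstrapping-style refurbishment.

```python
import numpy as np

def refurbish_labels(hard_labels, num_classes, alpha=0.1, model_probs=None):
    """Mix one-hot hard labels with another distribution to get soft labels.

    Illustrative sketch: if model_probs is None, mix with the uniform
    distribution (label smoothing); otherwise mix with the model's
    predicted posterior (bootstrapping-style refurbishment).
    """
    one_hot = np.eye(num_classes)[hard_labels]          # (N, C) one-hot targets
    if model_probs is None:
        mix = np.full_like(one_hot, 1.0 / num_classes)  # uniform distribution
    else:
        mix = model_probs                               # model softmax outputs, shape (N, C)
    return (1.0 - alpha) * one_hot + alpha * mix        # convex combination of the two

# Example: soft labels for 3 samples over 4 classes
soft = refurbish_labels(np.array([0, 2, 1]), num_classes=4, alpha=0.1)
print(soft)  # each row sums to 1; the true class keeps weight 0.9 + 0.1/4 = 0.925
```

How much probability mass is moved off the hard label, and where it goes, is exactly what distinguishes the methods the paper compares.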

Authors

Lu Y; Xu Z; He W

Volume

37

Pagination

pp. 15000-15008

Publisher

Association for the Advancement of Artificial Intelligence (AAAI)

Publication Date

June 27, 2023

DOI

10.1609/aaai.v37i12.26751

Conference proceedings

Proceedings of the AAAI Conference on Artificial Intelligence

Issue

12

ISSN

2159-5399