Simple and effective complementary label learning based on mean square error loss
C Wang, X Xu, D Liu, X Niu, S Han - Machine Vision and Applications, 2023 - Springer
Abstract
A complementary label specifies one of the classes that an instance does not belong to. Complementary label learning trains a classifier that predicts the ground-truth label of each test instance using only training instances that each carry a complementary label. Though many surrogate loss functions have been proposed for complementary label learning, the mean square error (MSE) surrogate loss function, widely used in the standard classification paradigm, cannot provide classifier consistency in complementary label learning. Classifier consistency, however, not only guarantees that the converged model is the optimal classifier in the search space but also means that standard backpropagation suffices to find that classifier, without additional model selection. This paper designs an effective square loss for complementary label learning under both unbiased and biased assumptions. We also theoretically demonstrate that our method guarantees that the optimal classifier under complementary labels is also the optimal classifier under ordinary labels. Finally, we evaluate our method on several benchmark datasets under biased and unbiased assumptions to verify its effectiveness.
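To make the setting concrete, the following sketch shows one simple way an MSE-style loss can be applied to complementary labels: the forbidden class is given target probability zero and the remaining mass is spread uniformly over the other classes. This target construction is an illustrative assumption for exposition, not the paper's actual loss.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def complementary_mse_loss(logits, comp_labels, num_classes):
    """Illustrative square loss for complementary labels.

    Each complementary label names one class the instance does NOT
    belong to. As a simple assumed target, probability is spread
    uniformly over the remaining K-1 classes, and the loss is the
    squared error between this target and the softmax output.
    This is a sketch of the general idea, not the exact loss
    proposed in the paper.
    """
    p = softmax(logits)  # (n, K) predicted class probabilities
    t = np.full_like(p, 1.0 / (num_classes - 1))
    t[np.arange(len(comp_labels)), comp_labels] = 0.0  # forbidden class
    return float(np.mean(np.sum((p - t) ** 2, axis=1)))
```

Under this loss, a model that assigns high probability to the complementary class is penalized more than one that avoids it, which is the behavior any complementary-label surrogate loss must induce.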