Classifier Decoupled Training for Black-Box Unsupervised Domain Adaptation

X Chen, Y Shen, X Luo, Y Zhang, K Li, S Lin
Chinese Conference on Pattern Recognition and Computer Vision (PRCV), 2023 - Springer
Abstract
Black-box unsupervised domain adaptation (B²UDA) is a challenging task in unsupervised domain adaptation, where the source model is treated as a black box and only its output is accessible. Previous works have treated the source model as a pseudo-labeling tool and formulated B²UDA as a learning-with-noisy-labels (LNL) problem. However, they have ignored the gap between the “shift noise” caused by the domain shift and the hypothesis noise assumed in LNL. To alleviate the negative impact of shift noise on B²UDA, we propose a novel framework called Classifier Decoupling Training (CDT), which introduces two additional classifiers to assist model training with a new label-confidence sampling. First, we introduce a self-training classifier, discarded at test time, to learn robust feature representations from the low-confidence samples, while the final classifier is trained only on a few high-confidence samples. This step decouples the training of high-confidence and low-confidence samples, mitigating the impact of noisy labels on the final classifier while avoiding overfitting to the few confident samples. Second, an adversarial classifier pushes the feature distribution of low-confidence samples toward that of the high-confidence samples through adversarial training, which greatly reduces intra-class variation. Third, we further propose a novel ETP-entropy Sampling (ES) strategy that collects class-balanced high-confidence samples by exploiting the early-time training phenomenon from the LNL literature. Extensive experiments show that the proposed CDT outperforms state-of-the-art methods on the Office-31, Office-Home, and VisDA-17 benchmarks.
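The abstract compresses three mechanisms into one paragraph, so the following PyTorch sketch makes the decoupling concrete. Everything in it is an assumption for illustration: the layer sizes, the 0.9 confidence threshold, the use of a binary discriminator with a gradient-reversal layer as the "adversarial classifier", and the entropy-based selection rule are plausible stand-ins inferred from the abstract, not the authors' published design.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GradReverse(torch.autograd.Function):
        """Identity in the forward pass; flips (and scales) gradients backward."""
        @staticmethod
        def forward(ctx, x, lambd):
            ctx.lambd = lambd
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            return -ctx.lambd * grad_output, None

    class CDTModel(nn.Module):
        # Hypothetical dimensions; the paper's actual backbone is not specified here.
        def __init__(self, in_dim=2048, feat_dim=256, num_classes=31):
            super().__init__()
            self.backbone = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
            self.final_clf = nn.Linear(feat_dim, num_classes)       # kept at test time
            self.self_train_clf = nn.Linear(feat_dim, num_classes)  # discarded at test time
            self.adv_clf = nn.Linear(feat_dim, 2)                   # high- vs. low-confidence

    def cdt_step(model, x, p_src, conf_thresh=0.9, lambd=0.1):
        """One step on a batch of target inputs x with black-box source-model
        probabilities p_src, the only source-side signal available in B²UDA."""
        pseudo = p_src.argmax(dim=1)          # pseudo-labels from the black box
        conf = p_src.max(dim=1).values
        hi = conf >= conf_thresh              # the few high-confidence samples
        lo = ~hi                              # the noisy low-confidence remainder

        feats = model.backbone(x)
        loss = feats.new_zeros(())
        if hi.any():  # final classifier sees only high-confidence pseudo-labels
            loss = loss + F.cross_entropy(model.final_clf(feats[hi]), pseudo[hi])
        if lo.any():  # self-training classifier absorbs the noisy samples
            loss = loss + F.cross_entropy(model.self_train_clf(feats[lo]), pseudo[lo])

        # Adversarial term: the discriminator tries to tell high- from
        # low-confidence features; the reversed gradient pushes the backbone to
        # make low-confidence features resemble high-confidence ones.
        adv_logits = model.adv_clf(GradReverse.apply(feats, lambd))
        loss = loss + F.cross_entropy(adv_logits, hi.long())
        return loss

    def etp_entropy_sampling(probs_early, labels, per_class_k):
        """Class-balanced selection sketch: keep the per_class_k lowest-entropy
        samples of each pseudo-class, scoring each sample by the entropy of its
        predictions averaged over early training (the early-time training
        phenomenon the abstract refers to)."""
        ent = -(probs_early * probs_early.clamp_min(1e-8).log()).sum(dim=1)
        keep = torch.zeros(len(labels), dtype=torch.bool)
        for c in labels.unique():
            idx = (labels == c).nonzero(as_tuple=True)[0]
            k = min(per_class_k, idx.numel())
            keep[idx[ent[idx].topk(k, largest=False).indices]] = True
        return keep

At deployment only the backbone and final_clf would be retained, matching the abstract's statement that the self-training classifier is discarded during testing; conf_thresh, lambd, and per_class_k are illustrative knobs, not reported hyperparameters.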