Our augmentation approach leads to scalable and effective self-training, with improvements of up to 2.6% on standard text classification benchmarks.
We show that self-training improves upon unsupervised pretraining: we improve RoBERTa-Large by 1.2% accuracy on average on six standard classification benchmarks.
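As a rough illustration of how augmentation-based self-training of this kind can be wired together, the sketch below selects unlabeled sentences whose embeddings lie close to the task data, so a teacher can later pseudo-label them. The embedding shapes, the helper name, and the retrieval size are assumptions for illustration, not the authors' exact setup.

```python
import numpy as np

def retrieve_task_relevant(task_embs: np.ndarray,
                           bank_embs: np.ndarray,
                           bank_sentences: list[str],
                           k: int = 1000) -> list[str]:
    """Pick the k unlabeled sentences whose embeddings are closest
    (by cosine similarity) to the mean embedding of the task data.
    Both embedding matrices are assumed to be L2-normalized row-wise."""
    query = task_embs.mean(axis=0)
    query /= np.linalg.norm(query)
    scores = bank_embs @ query              # cosine similarity per bank sentence
    top = np.argsort(-scores)[:k]           # indices of the k most similar sentences
    return [bank_sentences[i] for i in top]
```

A teacher fine-tuned on the labeled set would then score these retrieved sentences, and only confident pseudo-labels would be kept for training the student (a generic version of that loop is sketched further below).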
This work revisits the self-training technique for language model fine-tuning and presents a state-of-the-art prompt-based few-shot learner, SFLM.
Self-training is a popular semi-supervised method in which a model trained on labeled data (the teacher) generates pseudo labels for unlabeled data, and those pseudo-labeled examples are then used to train a new model (the student).
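A minimal sketch of this loop, assuming scikit-learn-style `fit` / `predict_proba` estimators, numpy arrays as inputs, and a confidence threshold chosen purely for illustration:

```python
import numpy as np

def self_train(teacher, student, labeled_X, labeled_y,
               unlabeled_X, confidence: float = 0.9):
    """One round of self-training: the teacher pseudo-labels the unlabeled
    pool, and the student is trained on the labeled data plus the confident
    pseudo-labeled examples."""
    teacher.fit(labeled_X, labeled_y)
    probs = teacher.predict_proba(unlabeled_X)
    pseudo_y = probs.argmax(axis=1)
    keep = probs.max(axis=1) >= confidence            # keep only confident predictions
    X_aug = np.concatenate([labeled_X, unlabeled_X[keep]])
    y_aug = np.concatenate([labeled_y, pseudo_y[keep]])
    student.fit(X_aug, y_aug)
    return student
```

In practice this round can be repeated, with the newly trained student serving as the teacher for the next iteration.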
In our paper Self-training Improves Pre-training for Natural Language Understanding, we show that SentAugment provides strong gains on downstream tasks.
Task-adaptive pre-training (TAPT) and self-training (ST) have emerged as the major semi-supervised approaches to improve natural language understanding.
In such scenarios, pre-training works well, but self-training also benefits models both with and without pre-training, as shown, for example, in our experiments on the PASCAL benchmark.