A Study on Knowledge Distillation from Weak Teacher for Scaling Up Pre-trained Language Models
Distillation from Weak Teacher (DWT) is a method of transferring knowledge from a smaller, weaker teacher model to a larger student model to improve its performance. Previous studies have shown that DWT can be effective in the vision domain and in the natural language processing (NLP) pre-training stage. In particular, DWT shows promise in practical scenarios, such as enhancing new-generation or larger models with pre-trained yet older or smaller models under a limited resource budget. However, the optimal conditions for using DWT have yet to be fully investigated in NLP pre-training. This study therefore examines three key factors for optimizing DWT, which differ from those in the vision domain or in traditional knowledge distillation: (i) the impact of teacher model quality on DWT effectiveness, (ii) guidelines for adjusting the weighting value for the DWT loss, and (iii) the impact of parameter remapping as a student model initialization technique for DWT.
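As a rough illustration (not taken from the paper), the DWT objective is commonly formed as a weighted combination of the student's own pre-training loss and a distillation loss toward the weak teacher's predictions; the sketch below assumes a masked-language-modeling setup, and the function name, `alpha` weighting, and temperature scaling are illustrative assumptions rather than the paper's exact formulation.

```python
import torch.nn.functional as F

def dwt_loss(student_logits, teacher_logits, labels, alpha=0.5, temperature=1.0):
    """Hypothetical DWT objective: a weighted sum of the student's own MLM loss
    and a soft-label distillation loss toward the (weaker) teacher.
    `alpha` plays the role of the DWT loss weighting value discussed above."""
    # Standard masked-language-modeling cross-entropy on the student's own predictions
    mlm_loss = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)),
        labels.view(-1),
        ignore_index=-100,  # non-masked positions are ignored
    )
    # Soft-label distillation: KL divergence between temperature-scaled distributions
    kd_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    # alpha controls how strongly the weak teacher's signal is weighted
    return (1.0 - alpha) * mlm_loss + alpha * kd_loss
```

In such a formulation, tuning `alpha` trades off how much the larger student follows the weak teacher versus its own pre-training signal, which is one way to read the abstract's question about adjusting the weighting value for the DWT loss.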