Rectification-based knowledge retention for continual learning
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021
Abstract
Deep learning models suffer from catastrophic forgetting when trained in an incremental learning setting. In this work, we propose a novel approach to address the task incremental learning problem, in which a model is trained on new tasks that arrive sequentially. The task incremental learning problem becomes even more challenging when the test set contains classes that are not part of the training set, i.e., a task incremental generalized zero-shot learning problem. Our approach can be used in both the zero-shot and non-zero-shot task incremental learning settings. Our proposed method uses weight rectifications and affine transformations to adapt the model to the different tasks that arrive sequentially. Specifically, we adapt the network weights to work for new tasks by "rectifying" the weights learned from the previous task, and we learn these weight rectifications using very few parameters. We additionally learn affine transformations on the outputs generated by the network in order to better adapt them to the new task. We perform experiments on several datasets in both the zero-shot and non-zero-shot task incremental learning settings and empirically show that our approach achieves state-of-the-art results. Specifically, our approach outperforms the state-of-the-art non-zero-shot task incremental learning method by over 5% on the CIFAR-100 dataset. It also significantly outperforms the state-of-the-art task incremental generalized zero-shot learning method, by absolute margins of 6.91% and 6.33% on the AWA1 and CUB datasets, respectively. We validate our approach through various ablation studies.
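To make the idea concrete, the sketch below shows one plausible way to parameterize a layer with per-task weight rectifications and an output affine transformation in PyTorch. It is not the authors' implementation: the abstract only states that the rectifications use very few parameters, so the specific choice of a low-rank additive correction, and names such as RectifiedLinear and rank, are assumptions made purely for illustration.

```python
import torch
import torch.nn as nn

class RectifiedLinear(nn.Module):
    """Hypothetical sketch of a task-rectified linear layer.

    The base weights are learned on the first task and then frozen; each
    subsequent task learns only (i) a few-parameter "rectification" of
    those weights (here a low-rank correction, an assumption) and (ii) an
    affine transformation of the layer outputs.
    """

    def __init__(self, in_features: int, out_features: int,
                 num_tasks: int, rank: int = 1):
        super().__init__()
        # Base weights: trained on the first task, frozen afterwards.
        self.base = nn.Linear(in_features, out_features)
        # Low-rank rectification per task: adds u[t] @ v[t] to the weights.
        # u starts at zero so every task initially sees the base weights.
        self.u = nn.Parameter(torch.zeros(num_tasks, out_features, rank))
        self.v = nn.Parameter(0.01 * torch.randn(num_tasks, rank, in_features))
        # Per-task affine transformation on the layer outputs.
        self.scale = nn.Parameter(torch.ones(num_tasks, out_features))
        self.shift = nn.Parameter(torch.zeros(num_tasks, out_features))

    def forward(self, x: torch.Tensor, task: int) -> torch.Tensor:
        # "Rectify" the frozen weights for the current task ...
        w = self.base.weight + self.u[task] @ self.v[task]
        y = x @ w.t() + self.base.bias
        # ... then adapt the outputs with the task-specific affine map.
        return y * self.scale[task] + self.shift[task]


if __name__ == "__main__":
    layer = RectifiedLinear(64, 10, num_tasks=3, rank=1)
    layer.base.requires_grad_(False)   # freeze base weights after task 0
    out = layer(torch.randn(8, 64), task=1)
    print(out.shape)                   # torch.Size([8, 10])
```

Under these assumptions, each new task adds only rank * (in_features + out_features) rectification parameters plus 2 * out_features affine parameters per layer, which matches the spirit of the abstract's "very few parameters" claim; other few-parameter choices (e.g., per-channel scaling of the weights) would serve equally well as a sketch.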