StackNet: Stacking Feature Maps for Continual Learning

Jangho Kim, Jeesoo Kim, Nojun Kwak; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2020, pp. 242-243

Abstract


Training a neural network for a classification task typically assumes that all training data are available from the beginning. In the real world, however, additional data accumulate gradually and the model must be trained further without access to the old training data. This usually leads to catastrophic forgetting, which is inevitable under the traditional training methodology of neural networks. In this paper, we propose a continual learning method that learns additional tasks while retaining the performance of previously learned tasks by stacking parameters. Our method is composed of two complementary components: the index module, which estimates the index of the task an input sample belongs to, and the StackNet, which processes the sample using the portion of the network selected by this index. The StackNet guarantees no degradation in the performance of previously learned tasks, and the index module identifies the task of origin of an input sample with high confidence. Compared to the prior work PackNet, our method is competitive and highly intuitive.
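
The abstract gives no implementation details, so the following is only a minimal PyTorch sketch of the two components it names, under the assumption that "stacking parameters" can be approximated by keeping one frozen sub-network (column) per task. The class StackNetSketch, its add_task method, and all dimensions are hypothetical illustrations, not the authors' architecture (which stacks feature maps rather than whole columns).

import torch
import torch.nn as nn

class StackNetSketch(nn.Module):
    # Hypothetical sketch, not the authors' code: one frozen column of
    # parameters is stacked per learned task, and an index module routes
    # each input to the column of its estimated task.
    def __init__(self, in_dim, hidden_dim, classes_per_task):
        super().__init__()
        self.in_dim = in_dim
        self.hidden_dim = hidden_dim
        self.classes_per_task = classes_per_task
        self.columns = nn.ModuleList()  # one sub-network per task
        self.index_module = None        # rebuilt whenever a task is added

    def add_task(self):
        # Freeze all previously stacked columns so that performance on
        # old tasks cannot degrade, then stack a fresh trainable column.
        for column in self.columns:
            for p in column.parameters():
                p.requires_grad_(False)
        self.columns.append(nn.Sequential(
            nn.Linear(self.in_dim, self.hidden_dim),
            nn.ReLU(),
            nn.Linear(self.hidden_dim, self.classes_per_task),
        ))
        # The index module outputs one logit per stacked task.
        self.index_module = nn.Linear(self.in_dim, len(self.columns))

    def forward(self, x):
        # At test time the task identity is unknown: estimate it with the
        # index module, then evaluate only the selected column per sample.
        task_ids = self.index_module(x).argmax(dim=1)
        logits = torch.stack([self.columns[t](xi)
                              for t, xi in zip(task_ids.tolist(), x)])
        return logits, task_ids

Usage under the same assumptions:

net = StackNetSketch(in_dim=784, hidden_dim=128, classes_per_task=10)
net.add_task()   # learn task 0
net.add_task()   # stack a new column for task 1; task 0 stays frozen
logits, task_ids = net(torch.randn(4, 784))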

Related Material


[bibtex]
@InProceedings{Kim_2020_CVPR_Workshops,
author = {Kim, Jangho and Kim, Jeesoo and Kwak, Nojun},
title = {StackNet: Stacking Feature Maps for Continual Learning},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2020}
}