Learning and Transforming General Representations to Break Down Stability-Plasticity Dilemma
In the Class-Incremental Learning (CIL) setting, a model must incrementally update its knowledge to recognize newly appearing classes (plasticity) while maintaining its ability to recognize the classes it has already learned (stability). These conflicting requirements are known as the stability-plasticity dilemma, and most existing studies attempt to strike a balance between them by improving stability. Unlike those attempts, we focus on the generality of representations. The basic idea is that a model does not need to change if it has already learned representations general enough to contain the information required to recognize new classes. However, such general representations are not optimal for recognizing the classes the model has already learned, because they necessarily contain information that is unrelated and noisy with respect to those classes. To acquire representations suitable for recognizing known classes while still leveraging general representations, in this paper we propose a new CIL framework that learns general representations and transforms them into representations suited to the target classification tasks. In our framework, we acquire the general representations via self-supervised learning and transform them with an attention mechanism. In addition, we introduce a novel knowledge distillation loss that stabilizes the transformation mechanism. Using benchmark datasets, we empirically confirm that our framework improves the average incremental accuracy of four CIL methods that employ knowledge distillation.
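To make the idea concrete, below is a minimal NumPy sketch of the kind of attention-based transformation the abstract describes: a fixed "general" feature vector (e.g. from a self-supervised encoder) is gated by learned channel-attention weights to produce a task-specific representation. The gating form, the function names, and the feature dimension are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D array
    e = np.exp(x - x.max())
    return e / e.sum()

def channel_attention_transform(z, W):
    """Gate general features z with attention weights derived from z itself.

    z : (d,) general representation (assumed frozen, e.g. from an SSL encoder)
    W : (d, d) learnable projection producing per-channel attention logits
    """
    a = softmax(W @ z)  # attention distribution over feature channels
    return a * z        # task-specific representation: relevant channels kept

rng = np.random.default_rng(0)
z = rng.standard_normal(8)       # general representation (d = 8 for illustration)
W = rng.standard_normal((8, 8))  # hypothetical learned attention projection
h = channel_attention_transform(z, W)
```

Here the general representation `z` is left untouched across tasks; only the lightweight attention parameters `W` would be adapted per task, which is what lets the framework stay plastic without overwriting the shared features.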