Class-Incremental Learning by Knowledge Distillation With Adaptive Feature Consolidation

Minsoo Kang, Jaeyoo Park, Bohyung Han; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, pp. 16071-16080

Abstract


We present a novel class-incremental learning approach based on deep neural networks, which continually learns new tasks with limited memory for storing examples from previous tasks. Our algorithm is based on knowledge distillation and provides a principled way to maintain the representations of old models while adjusting to new tasks effectively. The proposed method estimates the relationship between representation changes and the resulting loss increases incurred by model updates. It minimizes the upper bound of the loss increases using the representations, exploiting the estimated importance of each feature map within a backbone model. Based on this importance, the model restricts updates of important features for robustness while allowing changes in less critical features for flexibility. This optimization strategy effectively alleviates the notorious catastrophic forgetting problem despite the limited accessibility of data from previous tasks. The experimental results show significant accuracy improvement of the proposed algorithm over existing methods on standard datasets. Code is available.
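The core idea of importance-weighted feature consolidation can be illustrated with a minimal NumPy sketch. Note this is an illustrative simplification, not the paper's actual estimator: here per-feature importance is approximated by the mean squared gradient of the loss with respect to each feature (a common second-order-style proxy), and the distillation penalty weights the squared change of each feature by that importance, so important features are kept stable while less critical ones remain free to adapt.

```python
import numpy as np

def feature_importance(feature_grads):
    """Approximate per-feature importance as the mean squared gradient
    of the loss w.r.t. each feature, averaged over samples.
    feature_grads: array of shape (num_samples, num_features)."""
    return np.mean(np.square(feature_grads), axis=0)

def weighted_distill_loss(old_feats, new_feats, importance):
    """Importance-weighted feature distillation penalty: changes to
    important features cost more than changes to unimportant ones."""
    diff = new_feats - old_feats
    return float(np.mean(importance * np.square(diff)))

# Toy example: two features, gradients collected on old-task data.
grads = np.array([[1.0, 2.0],
                  [3.0, 4.0]])
imp = feature_importance(grads)          # per-feature importance weights
old = np.zeros((1, 2))                   # features from the frozen old model
new = np.ones((1, 2))                    # features from the updated model
penalty = weighted_distill_loss(old, new, imp)
```

In training, this penalty would be added to the new-task loss, so gradient descent trades off plasticity on new classes against stability of the features the old tasks relied on.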

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Kang_2022_CVPR,
    author    = {Kang, Minsoo and Park, Jaeyoo and Han, Bohyung},
    title     = {Class-Incremental Learning by Knowledge Distillation With Adaptive Feature Consolidation},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2022},
    pages     = {16071-16080}
}