Expandable Subspace Ensemble for Pre-Trained Model-Based Class-Incremental Learning

Da-Wei Zhou, Hai-Long Sun, Han-Jia Ye, De-Chuan Zhan; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 23554-23564

Abstract


Class-Incremental Learning (CIL) requires a learning system to continually learn new classes without forgetting. Despite the strong performance of Pre-Trained Models (PTMs) in CIL, a critical issue persists: learning new classes often overwrites old ones. Excessive modification of the network causes forgetting, while minimal adjustments lead to an inadequate fit for new classes. As a result, an efficient way to update the model without harming former knowledge is desired. In this paper, we propose ExpAndable Subspace Ensemble (EASE) for PTM-based CIL. To enable model updating without conflict, we train a distinct lightweight adapter module for each new task, aiming to create task-specific subspaces. These adapters span a high-dimensional feature space, enabling joint decision-making across multiple subspaces. As data evolves, the expanding subspaces render the old-class classifiers incompatible with new-stage spaces. Correspondingly, we design a semantic-guided prototype complement strategy that synthesizes old classes' new features without using any old-class instances. Extensive experiments on seven benchmark datasets verify EASE's state-of-the-art performance. Code is available at: https://github.com/sun-hailong/CVPR24-Ease
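The abstract describes two mechanisms that a short sketch may make concrete. First, the expandable subspace ensemble: a frozen pre-trained backbone is paired with one lightweight adapter per incremental task, and the per-task embeddings are concatenated into an expanding joint feature space. The following is a minimal PyTorch sketch, not the authors' released implementation (see the linked repository for that); the residual bottleneck adapter design and its width are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Adapter(nn.Module):
    """Lightweight residual bottleneck adapter (illustrative design)."""
    def __init__(self, dim: int, bottleneck: int = 16):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual adaptation keeps the pre-trained features intact.
        return x + self.up(F.relu(self.down(x)))

class ExpandableSubspaceEnsemble(nn.Module):
    """Frozen PTM backbone plus one adapter per task; subspaces are concatenated."""
    def __init__(self, backbone: nn.Module, dim: int):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():  # the PTM itself is never updated
            p.requires_grad_(False)
        self.adapters = nn.ModuleList()
        self.dim = dim

    def add_task(self) -> None:
        # A fresh adapter opens a task-specific subspace without touching old ones.
        self.adapters.append(Adapter(self.dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.backbone(x)  # shared pre-trained representation
        # Joint decision-making: embeddings from every subspace, concatenated.
        return torch.cat([adapter(z) for adapter in self.adapters], dim=-1)
```

Second, because old-class prototypes were never extracted in subspaces created after their task, the semantic-guided prototype complement synthesizes them from new-class prototypes, without any old-class instances. A hedged sketch of the idea: class similarities are measured in a subspace where both old and new prototypes exist, and the resulting weights recombine the new classes' prototypes in the newest subspace. The cosine similarity and softmax temperature here are illustrative choices, not the paper's exact formulation.

```python
def complement_prototypes(old_protos_shared: torch.Tensor,
                          new_protos_shared: torch.Tensor,
                          new_protos_new: torch.Tensor,
                          temperature: float = 0.1) -> torch.Tensor:
    """Synthesize old-class prototypes in the newest subspace without old data.

    old_protos_shared: (C_old, d) old-class prototypes in a subspace both stages share
    new_protos_shared: (C_new, d) new-class prototypes in that same shared subspace
    new_protos_new:    (C_new, d) new-class prototypes in the newest subspace
    Returns a (C_old, d) tensor of semantically reconstructed old-class prototypes.
    """
    # Semantic similarity between every old class and every new class,
    # measured where both sets of prototypes actually exist.
    sim = F.cosine_similarity(old_protos_shared.unsqueeze(1),
                              new_protos_shared.unsqueeze(0), dim=-1)
    weights = F.softmax(sim / temperature, dim=1)  # (C_old, C_new)
    # Each old prototype becomes a similarity-weighted mix of new prototypes.
    return weights @ new_protos_new
```

With these complemented prototypes, classifiers for old classes remain compatible with the expanding feature space at every stage, which is what permits the ensemble's joint decision across subspaces.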

Related Material


@InProceedings{Zhou_2024_CVPR,
    author    = {Zhou, Da-Wei and Sun, Hai-Long and Ye, Han-Jia and Zhan, De-Chuan},
    title     = {Expandable Subspace Ensemble for Pre-Trained Model-Based Class-Incremental Learning},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {23554-23564}
}