Enhancing Few-Shot Class-Incremental Learning via Frozen Feature Augmentation

Shimou Ling, Shengkai Gan, Caoxin Wang, Lili Pan, Hongliang Li; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2025, pp. 6621-6629

Abstract


Few-shot class-incremental learning (FSCIL) poses a significant challenge: it requires not only mitigating catastrophic forgetting but also preventing overfitting. Previous studies have primarily addressed these issues through representation learning, aiming to enhance the separation between base classes and to reserve adequate representation space for incremental classes. As pre-trained models become increasingly prevalent in continual learning, exploring frozen feature augmentation for FSCIL is becoming essential. This work proposes a brightness-based frozen feature augmentation method for FSCIL. We validate its effectiveness on both standard and multi-modal continual learning datasets, including CIFAR-100, mini-ImageNet, UESTC-MMEA-CL, and ARIC. Moreover, we analyze how this feature augmentation affects decision-attribution stability, demonstrating that it does not compromise interpretability. Our code is available at https://github.com/learninginvision/FAOrCo-ViT
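
The abstract does not spell out the augmentation mechanism. As a rough illustration only, a "brightness-style" augmentation applied to frozen features might multiply each feature vector by a random scalar factor, analogous to brightness jitter on pixels. The function name and the `delta` parameter below are assumptions for this sketch, not the paper's actual method; consult the linked repository for the authors' implementation.

```python
import numpy as np

def brightness_augment_features(feats, delta=0.2, rng=None):
    """Scale frozen features by a random brightness-like factor.

    feats: (N, D) array of features from a frozen backbone.
    delta: maximum relative brightness shift (hypothetical parameter).
    Each feature vector is multiplied by a scalar drawn uniformly
    from [1 - delta, 1 + delta], leaving feature directions intact.
    """
    rng = np.random.default_rng() if rng is None else rng
    factors = rng.uniform(1.0 - delta, 1.0 + delta, size=(feats.shape[0], 1))
    return feats * factors
```

Because the scaling is applied per sample in feature space, the frozen backbone never needs to be re-run on augmented images, which is the practical appeal of frozen feature augmentation.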

Related Material


[bibtex]
@InProceedings{Ling_2025_CVPR,
  author    = {Ling, Shimou and Gan, Shengkai and Wang, Caoxin and Pan, Lili and Li, Hongliang},
  title     = {Enhancing Few-Shot Class-Incremental Learning via Frozen Feature Augmentation},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2025},
  pages     = {6621-6629}
}