Few-Shot Class Incremental Learning Leveraging Self-Supervised Features
Abstract
Few-Shot Class Incremental Learning (FSCIL) is a recently introduced Class Incremental Learning (CIL) setting that operates under more constrained assumptions: only very few samples per class are available in each incremental session, and the number of samples and classes is known ahead of time. Because so little data is available in each incremental session, FSCIL suffers more from overfitting and catastrophic forgetting than general CIL. In this paper, we study how advances in self-supervised learning can be leveraged to mitigate overfitting and catastrophic forgetting and to significantly advance the state of the art in FSCIL. We explore training a lightweight feature-fusion module plus classifier on the concatenation of features emerging from supervised and self-supervised models. The supervised model is trained on data from the base session, where a relatively large amount of data is available in FSCIL, whereas the self-supervised model is learned from an abundance of unlabeled data. We demonstrate that a classifier trained on the fusion of these features outperforms classifiers trained independently on either representation. We experiment with several existing self-supervised models and report results on three popular FSCIL benchmarks, Caltech-UCSD Birds-200-2011 (CUB200), miniImageNet, and CIFAR100, advancing the state of the art on each. Code is available at: https://github.com/TouqeerAhmad/FeSSSS
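The core recipe the abstract describes, concatenating features from a supervised backbone and a self-supervised backbone and training a lightweight classifier on the fused vector, can be sketched in a few lines. The following is a minimal illustration under assumed details, not the authors' released implementation; the FusionClassifier name, the feature dimensions, and the single linear head are placeholders (see the linked repository for the actual code).

import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    # Lightweight head over concatenated supervised + self-supervised features.
    def __init__(self, sup_dim: int, ssl_dim: int, num_classes: int):
        super().__init__()
        self.fc = nn.Linear(sup_dim + ssl_dim, num_classes)

    def forward(self, sup_feat: torch.Tensor, ssl_feat: torch.Tensor) -> torch.Tensor:
        # Feature fusion by concatenation along the feature dimension.
        fused = torch.cat([sup_feat, ssl_feat], dim=1)
        return self.fc(fused)

# Placeholder dimensions; in practice these come from the two (frozen) backbones.
sup_dim, ssl_dim, num_classes = 512, 2048, 100
clf = FusionClassifier(sup_dim, ssl_dim, num_classes)

# Toy batch of pre-extracted features standing in for backbone outputs.
sup_feat = torch.randn(8, sup_dim)
ssl_feat = torch.randn(8, ssl_dim)
logits = clf(sup_feat, ssl_feat)  # shape: (8, num_classes)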
Related Material

BibTeX:

@InProceedings{Ahmad_2022_CVPR,
  author    = {Ahmad, Touqeer and Dhamija, Akshay Raj and Cruz, Steve and Rabinowitz, Ryan and Li, Chunchun and Jafarzadeh, Mohsen and Boult, Terrance E.},
  title     = {Few-Shot Class Incremental Learning Leveraging Self-Supervised Features},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2022},
  pages     = {3900-3910}
}