Synthesized Feature Based Few-Shot Class-Incremental Learning on a Mixture of Subspaces

Ali Cheraghian, Shafin Rahman, Sameera Ramasinghe, Pengfei Fang, Christian Simon, Lars Petersson, Mehrtash Harandi; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 8661-8670

Abstract


Few-shot class-incremental learning (FSCIL) aims to incrementally add sets of novel classes to a well-trained base model in multiple training sessions, with the restriction that only a few instances are available per novel class. While learning novel classes, FSCIL methods gradually forget the base (old) classes and overfit to the few novel class samples. Existing approaches have addressed this problem by computing class prototypes from the visual or semantic word vector domain. In this paper, we propose addressing this problem using a mixture of subspaces. Subspaces define the cluster structure of the visual domain and help to describe the visual and semantic domains while accounting for the overall distribution of the data. Additionally, we propose to employ a variational autoencoder (VAE) to generate synthesized visual samples for augmenting pseudo-features while learning novel classes incrementally. The combined effect of the mixture of subspaces and the synthesized features reduces the forgetting and overfitting problems of FSCIL. Extensive experiments on three image classification datasets show that our proposed method achieves competitive results compared to state-of-the-art methods.
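
The following is a minimal, illustrative sketch of the feature-synthesis idea mentioned in the abstract, not the authors' implementation: a class-conditional VAE trained on extracted visual features, whose decoder is then sampled to produce pseudo-features for data augmentation in the incremental sessions. All names (FeatureVAE, synthesize, vae_loss) and dimensions are hypothetical choices for this sketch.

import torch
import torch.nn as nn

class FeatureVAE(nn.Module):
    """Class-conditional VAE over visual feature vectors (illustrative sketch)."""

    def __init__(self, feat_dim=512, num_classes=100, latent_dim=64):
        super().__init__()
        self.label_emb = nn.Embedding(num_classes, latent_dim)
        self.encoder = nn.Sequential(nn.Linear(feat_dim + latent_dim, 256), nn.ReLU())
        self.mu_head = nn.Linear(256, latent_dim)
        self.logvar_head = nn.Linear(256, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + latent_dim, 256), nn.ReLU(),
            nn.Linear(256, feat_dim),
        )

    def forward(self, feats, labels):
        # Encode a real feature together with its class embedding.
        c = self.label_emb(labels)
        h = self.encoder(torch.cat([feats, c], dim=1))
        mu, logvar = self.mu_head(h), self.logvar_head(h)
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)  # reparameterization trick
        recon = self.decoder(torch.cat([z, c], dim=1))
        return recon, mu, logvar

    @torch.no_grad()
    def synthesize(self, labels, n_per_class=5):
        """Sample pseudo-features for the given class labels from the prior."""
        labels = labels.repeat_interleave(n_per_class)
        c = self.label_emb(labels)
        z = torch.randn(labels.size(0), c.size(1), device=c.device)
        return self.decoder(torch.cat([z, c], dim=1)), labels

def vae_loss(recon, feats, mu, logvar):
    # Reconstruction term plus KL divergence to a standard normal prior.
    rec = nn.functional.mse_loss(recon, feats, reduction="mean")
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld

In use, such synthesized pseudo-features for base classes could be mixed with the few real novel-class features in each incremental session, so the classifier sees both old and new classes and is less prone to forgetting and overfitting.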

Related Material


[bibtex]
@InProceedings{Cheraghian_2021_ICCV,
    author    = {Cheraghian, Ali and Rahman, Shafin and Ramasinghe, Sameera and Fang, Pengfei and Simon, Christian and Petersson, Lars and Harandi, Mehrtash},
    title     = {Synthesized Feature Based Few-Shot Class-Incremental Learning on a Mixture of Subspaces},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {8661-8670}
}