Feature Generator for Few-Shot Learning
Abstract
Few-shot learning (FSL) aims to enable models to recognize novel objects or classes with limited labeled data. Feature generators, which synthesize new data points to augment limited datasets, have emerged as a promising solution to this challenge. This paper investigates the effectiveness of feature generators in enhancing the embedding process for FSL tasks. To address the issue of inaccurate embeddings caused by the scarcity of images per class, we introduce a feature generator that creates visual features from class-level textual descriptions. By training the generator with a combination of classifier loss, discriminator loss, and a distance loss between the generated features and the true class embeddings, we ensure that the generated features remain faithful to their class and improve the overall feature representation. Our results show a significant improvement in accuracy over baseline methods: our approach outperforms the baseline model by 10% in the 1-shot setting and by around 5% in the 5-shot setting. We also evaluate both visual-only and combined visual + textual generators. The code is publicly available at https://github.com/heethanjan/Feature-Generator-for-FSL.
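As a rough illustration of the training objective described above, the sketch below combines the three losses named in the abstract (classifier, discriminator, and distance to the true class embedding). It is a minimal PyTorch sketch under assumed shapes and module names; the generator architecture, the loss weights, and the helpers (classifier, discriminator, class_prototypes) are hypothetical and not the authors' released implementation, which is available at the GitHub link above.

import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureGenerator(nn.Module):
    # Maps a class-level text embedding (plus noise) to a visual feature vector.
    # Layer sizes are assumptions for illustration only.
    def __init__(self, text_dim: int, noise_dim: int, feat_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(text_dim + noise_dim, 1024),
            nn.LeakyReLU(0.2),
            nn.Linear(1024, feat_dim),
        )

    def forward(self, text_emb: torch.Tensor, noise: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([text_emb, noise], dim=-1))


def generator_loss(gen_feats, labels, class_prototypes, classifier, discriminator,
                   lambda_cls=1.0, lambda_disc=1.0, lambda_dist=1.0):
    # Classifier loss: generated features should be recognized as their class.
    cls_loss = F.cross_entropy(classifier(gen_feats), labels)
    # Discriminator (adversarial) loss: generated features should look real.
    real_target = torch.ones(gen_feats.size(0), 1, device=gen_feats.device)
    disc_loss = F.binary_cross_entropy_with_logits(discriminator(gen_feats), real_target)
    # Distance loss: generated features should stay close to the true class embedding.
    dist_loss = F.mse_loss(gen_feats, class_prototypes[labels])
    # The relative weights are assumed, not taken from the paper.
    return lambda_cls * cls_loss + lambda_disc * disc_loss + lambda_dist * dist_loss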
Related Material
[pdf] [supp] [arXiv] [bibtex]
@InProceedings{Kanagalingam_2024_ACCV,
    author    = {Kanagalingam, Heethanjan and Pathmanathan, Thenukan and Ketheeswaran, Navaneethan and Vathanakumar, Mokeeshan and Afham, Mohamed and Rodrigo, Ranga},
    title     = {Feature Generator for Few-Shot Learning},
    booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV)},
    month     = {December},
    year      = {2024},
    pages     = {3901-3916}
}