Few-Shot Learning via Feature Hallucination With Variational Inference

Qinxuan Luo, Lingfeng Wang, Jingguo Lv, Shiming Xiang, Chunhong Pan; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2021, pp. 3963-3972

Abstract


Deep learning has achieved great success in artificial intelligence, but its performance depends heavily on labeled data. Few-shot learning aims to make a model adapt rapidly to unseen classes with only a few labeled samples after training on a base dataset, which is useful for tasks that lack labeled data, such as medical image processing. Since the core problem of few-shot learning is the scarcity of samples, a straightforward solution is data augmentation. This paper proposes a generative model (VI-Net) built on a cosine-classifier baseline. Specifically, we construct a framework that learns to define a generating space for each category in the latent space from the few support samples. In this way, new feature vectors can be generated to sharpen the classifier's decision boundary during fine-tuning. To evaluate the effectiveness of the proposed approach, we perform comparative experiments and ablation studies on mini-ImageNet and CUB. Experimental results show that VI-Net improves performance over the baseline and achieves state-of-the-art results among augmentation-based methods.
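
The following is a minimal sketch, not the authors' implementation, of the general idea the abstract describes: a VAE-style hallucinator maps each support feature to a per-class latent Gaussian, samples from it to generate extra feature vectors, and a cosine classifier is then fine-tuned on the union of real and hallucinated features. All names here (FeatureHallucinator, CosineClassifier, hallucinate) are illustrative assumptions, and in the paper's setting the hallucinator would be trained on the base dataset rather than used untrained as in this toy episode.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureHallucinator(nn.Module):
    """Encode a support feature into a latent Gaussian and decode samples back (VAE-style)."""
    def __init__(self, feat_dim=512, latent_dim=64):
        super().__init__()
        self.encoder = nn.Linear(feat_dim, 2 * latent_dim)  # predicts mu and log-variance
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim),
        )

    def forward(self, feat):
        mu, logvar = self.encoder(feat).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.decoder(z), mu, logvar


class CosineClassifier(nn.Module):
    """Classify by scaled cosine similarity between features and class weight vectors."""
    def __init__(self, feat_dim=512, n_classes=5, scale=10.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(n_classes, feat_dim))
        self.scale = scale

    def forward(self, feat):
        return self.scale * F.linear(F.normalize(feat, dim=-1),
                                     F.normalize(self.weight, dim=-1))


def hallucinate(hallucinator, support_feats, n_per_sample=5):
    """Generate extra feature vectors by repeatedly sampling from each support feature's latent."""
    fakes = []
    with torch.no_grad():
        for _ in range(n_per_sample):
            recon, _, _ = hallucinator(support_feats)
            fakes.append(recon)
    return torch.cat(fakes, dim=0)


if __name__ == "__main__":
    # Toy 5-way 1-shot episode with random stand-ins for backbone features.
    feat_dim, n_way = 512, 5
    support = torch.randn(n_way, feat_dim)  # one real feature per class
    labels = torch.arange(n_way)

    hallucinator = FeatureHallucinator(feat_dim)
    classifier = CosineClassifier(feat_dim, n_way)

    extra = hallucinate(hallucinator, support, n_per_sample=5)
    extra_labels = labels.repeat(5)  # samples are generated class-by-class each round

    feats = torch.cat([support, extra], dim=0)
    targets = torch.cat([labels, extra_labels], dim=0)

    # Fine-tune only the cosine classifier on real + hallucinated features.
    opt = torch.optim.SGD(classifier.parameters(), lr=0.01)
    for _ in range(100):
        opt.zero_grad()
        loss = F.cross_entropy(classifier(feats), targets)
        loss.backward()
        opt.step()
    print("final loss:", loss.item())
```

The sketch only shows the data flow during fine-tuning; the variational training objective (reconstruction plus KL terms learned on base classes) and the backbone feature extractor are omitted for brevity.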

Related Material


[pdf]
[bibtex]
@InProceedings{Luo_2021_WACV,
  author    = {Luo, Qinxuan and Wang, Lingfeng and Lv, Jingguo and Xiang, Shiming and Pan, Chunhong},
  title     = {Few-Shot Learning via Feature Hallucination With Variational Inference},
  booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
  month     = {January},
  year      = {2021},
  pages     = {3963-3972}
}