Towards Zero-Shot Learning With Fewer Seen Class Examples

Vinay Kumar Verma, Ashish Mishra, Anubha Pandey, Hema A. Murthy, Piyush Rai; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2021, pp. 2241-2251

Abstract


We present a meta-learning based generative model for zero-shot learning (ZSL) in the challenging setting where only very few training examples are available from each seen class. This setup is in contrast to conventional ZSL approaches, which typically assume the availability of a sufficiently large number of training examples from each seen class. The proposed approach leverages meta-learning to train a deep generative model that integrates a variational autoencoder and a generative adversarial network. To simulate ZSL behaviour during training, we propose a novel task distribution in which the meta-train and meta-validation classes are disjoint. Once trained, the model can generate synthetic examples from both seen and unseen classes. These synthesized samples can then be used to train the ZSL framework in a supervised manner. The meta-learner enables our model to generate high-fidelity samples using only a small number of training examples from the seen classes. We conduct extensive experiments and ablation studies on four benchmark ZSL datasets and observe that the proposed model outperforms state-of-the-art approaches by a significant margin when the number of examples per seen class is very small.
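The disjoint-class task distribution described above can be pictured with a small sketch: each meta-learning episode draws its meta-train and meta-validation examples from non-overlapping sets of classes, mirroring the seen/unseen split of ZSL. The Python snippet below is illustrative only; the function name sample_zsl_episode, the class counts, and the k-shot setting are hypothetical choices, not details taken from the paper.

import numpy as np

def sample_zsl_episode(features, labels, n_train_classes=5, n_val_classes=5,
                       k_shot=5, rng=None):
    """Sample one episode whose meta-train and meta-validation classes are
    disjoint, simulating the seen/unseen split of ZSL during meta-training."""
    rng = rng or np.random.default_rng()
    classes = rng.permutation(np.unique(labels))
    train_classes = classes[:n_train_classes]
    val_classes = classes[n_train_classes:n_train_classes + n_val_classes]

    def gather(cls_set):
        # Collect at most k_shot examples per class (few-shot regime).
        idx = []
        for c in cls_set:
            c_idx = np.flatnonzero(labels == c)
            idx.extend(rng.choice(c_idx, size=min(k_shot, len(c_idx)),
                                  replace=False))
        return features[idx], labels[idx]

    return gather(train_classes), gather(val_classes)

# Toy usage: random vectors stand in for pre-extracted image features.
if __name__ == "__main__":
    X = np.random.randn(1000, 2048).astype(np.float32)
    y = np.random.randint(0, 20, size=1000)
    (Xtr, ytr), (Xval, yval) = sample_zsl_episode(X, y)
    assert set(ytr).isdisjoint(set(yval))  # episode classes never overlap
    print(Xtr.shape, Xval.shape)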

Related Material


@InProceedings{Verma_2021_WACV,
  author    = {Verma, Vinay Kumar and Mishra, Ashish and Pandey, Anubha and Murthy, Hema A. and Rai, Piyush},
  title     = {Towards Zero-Shot Learning With Fewer Seen Class Examples},
  booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
  month     = {January},
  year      = {2021},
  pages     = {2241-2251}
}