Generalized Zero-Shot Learning via Synthesized Examples

Vinay Kumar Verma, Gundeep Arora, Ashish Mishra, Piyush Rai; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 4281-4289

Abstract


We present a generative framework for generalized zero-shot learning where the training and test classes are not necessarily disjoint. Built upon a variational autoencoder based architecture, consisting of a probabilistic encoder and a probabilistic conditional decoder, our model can generate novel exemplars from seen/unseen classes, given their respective class attributes. These exemplars can subsequently be used to train any off-the-shelf classification model. One of the key aspects of our encoder-decoder architecture is a feedback-driven mechanism in which a discriminator (a multivariate regressor) learns to map the generated exemplars to the corresponding class attribute vectors, leading to an improved generator. Our model's ability to generate and leverage examples from unseen classes to train the classification model naturally helps to mitigate the bias towards predicting seen classes in generalized zero-shot learning settings. Through a comprehensive set of experiments, we show that our model outperforms several state-of-the-art methods on multiple benchmark datasets, for both standard and generalized zero-shot learning.
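To make the architecture described above concrete, here is a minimal sketch (not the authors' released code) of a conditional VAE whose decoder synthesizes class exemplars from attribute vectors, with an attribute regressor providing the feedback signal that maps generated exemplars back to their class attributes. All layer sizes, dimensions (e.g., 2048-d visual features, 85-d attributes), and loss weights are illustrative assumptions, not values taken from the paper.

```python
# Hedged sketch of the paper's idea: conditional VAE + attribute-regressor feedback.
# Dimensions and hyperparameters below are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

X_DIM, A_DIM, Z_DIM, H_DIM = 2048, 85, 64, 512  # assumed: CNN features, attributes, latent, hidden

class Encoder(nn.Module):
    """Probabilistic encoder: (feature x, attribute a) -> latent Gaussian (mu, logvar)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(X_DIM + A_DIM, H_DIM), nn.ReLU())
        self.mu = nn.Linear(H_DIM, Z_DIM)
        self.logvar = nn.Linear(H_DIM, Z_DIM)

    def forward(self, x, a):
        h = self.net(torch.cat([x, a], dim=1))
        return self.mu(h), self.logvar(h)

class ConditionalDecoder(nn.Module):
    """Conditional decoder: (latent z, attribute a) -> synthesized exemplar x_hat."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(Z_DIM + A_DIM, H_DIM), nn.ReLU(), nn.Linear(H_DIM, X_DIM))

    def forward(self, z, a):
        return self.net(torch.cat([z, a], dim=1))

class AttributeRegressor(nn.Module):
    """Multivariate regressor: generated exemplar -> class-attribute vector."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(X_DIM, H_DIM), nn.ReLU(), nn.Linear(H_DIM, A_DIM))

    def forward(self, x):
        return self.net(x)

def training_loss(enc, dec, reg, x, a, beta=1.0, lam=0.1):
    """Losses for one step: reconstruction + KL + attribute-feedback term."""
    mu, logvar = enc(x, a)
    z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
    x_hat = dec(z, a)
    recon = F.mse_loss(x_hat, x)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    # Feedback: the regressor pushes the decoder toward attribute-consistent exemplars.
    feedback = F.mse_loss(reg(x_hat), a)
    return recon + beta * kl + lam * feedback

def synthesize(dec, a, n):
    """Generate n exemplars for a class given its attribute vector a (shape: A_DIM)."""
    z = torch.randn(n, Z_DIM)
    return dec(z, a.expand(n, -1))
```

After training, `synthesize` can be called with the attribute vectors of unseen classes to produce labeled pseudo-exemplars, which together with real seen-class features can train any off-the-shelf classifier; this is the mechanism by which the approach reduces the usual bias toward seen classes in the generalized setting.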

Related Material


[pdf] [arXiv]
[bibtex]
@InProceedings{Verma_2018_CVPR,
author = {Verma, Vinay Kumar and Arora, Gundeep and Mishra, Ashish and Rai, Piyush},
title = {Generalized Zero-Shot Learning via Synthesized Examples},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2018}
}