A Generative Model for Zero Shot Learning Using Conditional Variational Autoencoders

Ashish Mishra, Shiva Krishna Reddy, Anurag Mittal, Hema A. Murthy; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2018, pp. 2188-2196

Abstract


Zero-shot learning in image classification refers to the setting where images from some novel classes are absent from the training data, but other information about those classes, such as natural-language descriptions or attribute vectors, is available. This setting is important in practice, since one may not be able to obtain images of every possible class at training time. Whereas previous approaches model the relationship between the class-attribute space and the image space via some kind of transfer function, in order to infer image-space representations for an unseen class, we take a different approach: we generate samples from the given attributes using a conditional variational autoencoder, and use the generated samples to classify the unseen classes. Through extensive testing on four benchmark datasets, we show that our model outperforms the state of the art, particularly in the more realistic generalized setting, where the training classes can also appear at test time alongside the novel classes.
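The pipeline the abstract describes, generating pseudo-samples for unseen classes from their attribute vectors and then training an ordinary classifier on them, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the fixed random linear map standing in for a trained CVAE decoder, the toy dimensions, the made-up attribute vectors, and the nearest-class-mean classifier are all assumptions chosen to keep the sketch dependency-free.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (assumptions, not the paper's settings).
D_Z, D_ATTR, D_FEAT = 8, 4, 16

# A fixed random linear map stands in for the trained CVAE decoder,
# which in the paper maps (latent z, class attribute a) -> image feature x.
W = rng.normal(size=(D_Z + D_ATTR, D_FEAT))

def decode(z, attr):
    """Stand-in for the trained CVAE decoder p(x | z, a)."""
    return np.concatenate([z, attr], axis=-1) @ W

# Attribute vectors of two unseen classes (toy values).
unseen_attrs = {0: rng.normal(size=D_ATTR), 1: rng.normal(size=D_ATTR)}

# Generate pseudo-samples for each unseen class by sampling z ~ N(0, I)
# and conditioning the decoder on that class's attribute vector.
n_per_class = 50
X, y = [], []
for label, attr in unseen_attrs.items():
    z = rng.normal(size=(n_per_class, D_Z))
    X.append(decode(z, np.tile(attr, (n_per_class, 1))))
    y.append(np.full(n_per_class, label))
X, y = np.vstack(X), np.concatenate(y)

# Any off-the-shelf classifier can be trained on the generated features;
# a nearest-class-mean rule keeps the sketch self-contained.
centroids = {c: X[y == c].mean(axis=0) for c in unseen_attrs}

def classify(x):
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

# A prototypical sample of class 1 (z = 0) should be assigned label 1.
test_x = decode(np.zeros(D_Z), unseen_attrs[1])
pred = classify(test_x)
```

Because unseen classes never contribute real images, the classifier sees only decoder outputs; the quality of the generated features, not the classifier itself, is what carries the zero-shot transfer.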

Related Material


BibTeX:
@InProceedings{Mishra_2018_CVPR_Workshops,
author = {Mishra, Ashish and Krishna Reddy, Shiva and Mittal, Anurag and Murthy, Hema A.},
title = {A Generative Model for Zero Shot Learning Using Conditional Variational Autoencoders},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2018}
}