A Joint Generative Model for Zero-Shot Learning

Rui Gao, Xingsong Hou, Jie Qin, Li Liu, Fan Zhu, Zhao Zhang; Proceedings of the European Conference on Computer Vision (ECCV) Workshops, 2018

Abstract

Zero-shot learning (ZSL) is a challenging task due to the lack of data from unseen classes during training. Existing methods tend to have a strong bias towards seen classes, which is also known as the domain shift problem. To mitigate the gap between seen and unseen class data, we propose a joint generative model to synthesize features as a replacement for unseen class data. Based on the generated features, the conventional ZSL problem can be tackled in a supervised way. Specifically, our framework integrates Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs), conditioned on class-level semantic attributes, to generate features based on both element-wise and holistic reconstruction. A categorization network acts as an additional guide, encouraging the generated features to be beneficial for the subsequent classification task. Moreover, we propose a perceptual reconstruction loss to preserve semantic similarities. Experimental results on five benchmarks show the superiority of our framework over state-of-the-art approaches in both the conventional and generalized ZSL settings.
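To make the architecture concrete, below is a minimal PyTorch sketch of how a conditional VAE-GAN feature generator of this kind could be wired up. It is not the authors' implementation: all module names, layer sizes, and unit loss weights are illustrative assumptions, and the paper's holistic and perceptual reconstruction terms are omitted for brevity; only the element-wise reconstruction, KL, adversarial, and classification-guide terms from the abstract are shown.

```python
# Hypothetical sketch of a conditional VAE-GAN feature generator for ZSL.
# Dimensions and loss weights are assumptions, not values from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

FEAT_DIM, ATTR_DIM, Z_DIM, N_SEEN = 2048, 85, 64, 40  # assumed sizes

class Encoder(nn.Module):
    """Maps a visual feature + class attribute to a latent Gaussian."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(FEAT_DIM + ATTR_DIM, 512), nn.ReLU())
        self.mu = nn.Linear(512, Z_DIM)
        self.logvar = nn.Linear(512, Z_DIM)

    def forward(self, x, a):
        h = self.net(torch.cat([x, a], dim=1))
        return self.mu(h), self.logvar(h)

class Generator(nn.Module):
    """Decodes (latent code, attribute) into a synthetic visual feature."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(Z_DIM + ATTR_DIM, 512), nn.ReLU(),
            nn.Linear(512, FEAT_DIM))

    def forward(self, z, a):
        return self.net(torch.cat([z, a], dim=1))

class Discriminator(nn.Module):
    """Scores whether a (feature, attribute) pair is real or synthesized."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEAT_DIM + ATTR_DIM, 512), nn.ReLU(),
            nn.Linear(512, 1))

    def forward(self, x, a):
        return self.net(torch.cat([x, a], dim=1))

# Categorization network guiding generation over the seen classes.
classifier = nn.Linear(FEAT_DIM, N_SEEN)

def generator_losses(enc, gen, disc, x, a, y):
    """One generator-side pass: VAE terms + adversarial + classification guide."""
    mu, logvar = enc(x, a)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization
    x_hat = gen(z, a)
    recon = F.mse_loss(x_hat, x)                 # element-wise reconstruction
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    adv = F.binary_cross_entropy_with_logits(    # try to fool the discriminator
        disc(x_hat, a), torch.ones(x.size(0), 1))
    cls = F.cross_entropy(classifier(x_hat), y)  # keep features discriminative
    return recon + kl + adv + cls                # unit weights assumed
```

Once such a model is trained on seen-class features, the abstract's supervised reduction follows naturally: sample z from the prior, condition the generator on the attributes of each unseen class to synthesize features, and train an ordinary classifier on the synthetic data.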

Related Material

@InProceedings{Gao_2018_ECCV_Workshops,
author = {Gao, Rui and Hou, Xingsong and Qin, Jie and Liu, Li and Zhu, Fan and Zhang, Zhao},
title = {A Joint Generative Model for Zero-Shot Learning},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV) Workshops},
month = {September},
year = {2018}
}