Generative Dual Adversarial Network for Generalized Zero-Shot Learning

He Huang, Changhu Wang, Philip S. Yu, Chang-Dong Wang; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 801-810

Abstract


This paper studies the problem of generalized zero-shot learning, which requires a model to train on image-label pairs from seen classes and then classify new images drawn from both seen and unseen classes. We propose a novel model that provides a unified framework for three different approaches: visual→semantic mapping, semantic→visual mapping, and metric learning. Specifically, our proposed model consists of a feature generator that can generate various visual features given class embeddings as input, a regressor that maps each visual feature back to its corresponding class embedding, and a discriminator that learns to evaluate the closeness of an image feature and a class embedding. All three components are trained under a combination of cyclic consistency loss and dual adversarial loss. Experimental results show that our model not only preserves higher accuracy in classifying images from seen classes, but also outperforms existing state-of-the-art models in classifying images from unseen classes.
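The three components described above can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's actual networks: the linear maps, dimensions, and variable names below are hypothetical stand-ins for the generator (class embedding + noise → visual feature), the regressor (feature → embedding), and the discriminator (feature/embedding pair → closeness score), along with the cyclic consistency term that ties the generator and regressor together.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): visual feature,
# class embedding, and noise vector sizes.
D_VIS, D_SEM, D_Z = 64, 32, 16

# Single linear layers as stand-ins for the three learned components.
W_g = rng.normal(scale=0.1, size=(D_SEM + D_Z, D_VIS))  # generator weights
W_r = rng.normal(scale=0.1, size=(D_VIS, D_SEM))        # regressor weights
W_d = rng.normal(scale=0.1, size=(D_VIS + D_SEM, 1))    # discriminator weights

def generator(emb, z):
    # Generate a visual feature from a class embedding plus noise.
    return np.concatenate([emb, z], axis=-1) @ W_g

def regressor(feat):
    # Map a visual feature back to a class embedding.
    return feat @ W_r

def discriminator(feat, emb):
    # Score (0..1) for how well a feature matches an embedding.
    logits = np.concatenate([feat, emb], axis=-1) @ W_d
    return 1.0 / (1.0 + np.exp(-logits))

# A small batch of synthetic class embeddings and noise vectors.
emb = rng.normal(size=(4, D_SEM))
z = rng.normal(size=(4, D_Z))
feat = generator(emb, z)

# Cyclic consistency: regressing the generated feature should
# recover the original class embedding.
cycle_loss = np.mean((regressor(feat) - emb) ** 2)
score = discriminator(feat, emb)
```

In the full model these linear maps would be deep networks, and `cycle_loss` would be combined with the dual adversarial objectives (discriminator vs. generator, discriminator vs. regressor) during training.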

Related Material


[pdf]
[bibtex]
@InProceedings{Huang_2019_CVPR,
author = {Huang, He and Wang, Changhu and Yu, Philip S. and Wang, Chang-Dong},
title = {Generative Dual Adversarial Network for Generalized Zero-Shot Learning},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2019}
}