Generative Model with Semantic Embedding and Integrated Classifier for Generalized Zero-Shot Learning

Ayyappa Pambala, Titir Dutta, Soma Biswas; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2020, pp. 1237-1246

Abstract


Generative models have achieved impressive performance on the generalized zero-shot learning task by learning a mapping from the attribute space to the feature space. In this work, we propose to derive semantic inferences from images and use them for feature generation, which enables us to capture bidirectional information, i.e., both the visual-to-semantic and semantic-to-visual mappings. Specifically, we propose a Semantic Embedding module which not only provides image-specific semantic information to the generative model for generating better features, but also ensures that the generated features can be mapped back to the correct semantic space. We also propose an Integrated Classifier, which is trained jointly with the generator. This module not only eliminates the need for the additional classifier for new object categories required by existing generative approaches, but also facilitates the generation of more discriminative and useful features. The approach extends seamlessly to the task of few-shot learning. Extensive experiments on four benchmark datasets, namely CUB, SUN, AWA1, and AWA2, in both the zero-shot and few-shot settings show the effectiveness of the proposed approach.
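
To make the three components described above concrete, here is a minimal sketch of one possible realization. PyTorch is assumed; this is not the authors' released code, and all layer sizes, dimensions (e.g., 2048-d ResNet features, 312-d CUB attributes), and the unweighted loss sum are illustrative assumptions only.

# Sketch (not the authors' code): generator, semantic embedding module,
# and integrated classifier trained jointly, as the abstract describes.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a class attribute vector plus noise to a visual feature."""
    def __init__(self, attr_dim=312, noise_dim=100, feat_dim=2048):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(attr_dim + noise_dim, 1024), nn.LeakyReLU(0.2),
            nn.Linear(1024, feat_dim), nn.ReLU())

    def forward(self, attr, noise):
        return self.net(torch.cat([attr, noise], dim=1))

class SemanticEmbedding(nn.Module):
    """Maps a (real or generated) feature back to the semantic space,
    enforcing the visual-to-semantic direction."""
    def __init__(self, feat_dim=2048, attr_dim=312):
        super().__init__()
        self.net = nn.Linear(feat_dim, attr_dim)

    def forward(self, feat):
        return self.net(feat)

class IntegratedClassifier(nn.Module):
    """Classifier trained jointly with the generator, so no separate
    classifier needs to be fit after synthesizing unseen-class features."""
    def __init__(self, feat_dim=2048, num_classes=200):
        super().__init__()
        self.fc = nn.Linear(feat_dim, num_classes)

    def forward(self, feat):
        return self.fc(feat)

# One illustrative training step: generated features should (i) map back
# to the correct attributes via the embedding module and (ii) be classified
# correctly, which encourages discriminative feature generation.
G, E, C = Generator(), SemanticEmbedding(), IntegratedClassifier()
opt = torch.optim.Adam(list(G.parameters()) + list(E.parameters())
                       + list(C.parameters()), lr=1e-4)
attrs = torch.randn(32, 312)            # attribute vectors for a batch
labels = torch.randint(0, 200, (32,))   # corresponding class labels
noise = torch.randn(32, 100)
fake = G(attrs, noise)
loss = nn.functional.mse_loss(E(fake), attrs) \
     + nn.functional.cross_entropy(C(fake), labels)
opt.zero_grad(); loss.backward(); opt.step()

At test time, features synthesized for unseen classes can be scored directly by the integrated classifier, which is what removes the extra classifier-training stage of prior generative approaches.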

Related Material


[bibtex]
@InProceedings{Pambala_2020_WACV,
    author    = {Pambala, Ayyappa and Dutta, Titir and Biswas, Soma},
    title     = {Generative Model with Semantic Embedding and Integrated Classifier for Generalized Zero-Shot Learning},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {March},
    year      = {2020},
    pages     = {1237-1246}
}