Generating Visual Representations for Zero-Shot Classification

Maxime Bucher, Stephane Herbin, Frederic Jurie; Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops, 2017, pp. 2666-2673

Abstract


This paper addresses the task of learning an image classifier when some categories are defined by semantic descriptions only (e.g. visual attributes) while the others are defined by exemplar images as well. This task is often referred to as the Zero-Shot classification task (ZSC). Most of the previous methods rely on learning a common embedding space that allows visual features of unknown categories to be compared with semantic descriptions. This paper argues that these approaches are limited because i) efficient discriminative classifiers cannot be used and ii) classification tasks with both seen and unseen categories (Generalized Zero-Shot Classification or GZSC) cannot be addressed efficiently. In contrast, this paper proposes to address ZSC and GZSC by i) learning a conditional generator from the seen classes and ii) generating artificial training examples for the categories without exemplars. ZSC is then turned into a standard supervised learning problem. Experiments with 4 generative models ...
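The pipeline the abstract describes can be sketched in a few steps: fit a conditional generator on seen-class (feature, attribute) pairs, sample artificial features for the unseen classes from their attribute vectors, and train an ordinary discriminative classifier on the synthetic data. The snippet below is a minimal illustrative sketch, not the authors' four generative models; the feature/attribute dimensions, the small MLP generator, and the plain regression loss are all assumptions standing in for the GAN- and GMMN-style objectives studied in the paper.

```python
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression

FEAT_DIM, ATTR_DIM, NOISE_DIM = 2048, 85, 32  # assumed sizes (CNN features, attribute vector, noise)

class ConditionalGenerator(nn.Module):
    """Maps a class-attribute vector plus noise to a synthetic visual feature."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(ATTR_DIM + NOISE_DIM, 1024), nn.ReLU(),
            nn.Linear(1024, FEAT_DIM),
        )

    def forward(self, attrs, noise):
        return self.net(torch.cat([attrs, noise], dim=1))

def train_generator(gen, feats, attrs, epochs=50, lr=1e-4):
    """Fit the generator on seen-class (feature, attribute) pairs.
    A simple regression loss is used here as a stand-in for the paper's generative objectives."""
    opt = torch.optim.Adam(gen.parameters(), lr=lr)
    for _ in range(epochs):
        noise = torch.randn(feats.size(0), NOISE_DIM)
        loss = nn.functional.mse_loss(gen(attrs, noise), feats)
        opt.zero_grad()
        loss.backward()
        opt.step()

def synthesize(gen, class_attrs, per_class=200):
    """Sample artificial training features for each class given only its attribute vector."""
    X, y = [], []
    with torch.no_grad():
        for label, a in enumerate(class_attrs):
            attrs = a.unsqueeze(0).repeat(per_class, 1)
            noise = torch.randn(per_class, NOISE_DIM)
            X.append(gen(attrs, noise))
            y += [label] * per_class
    return torch.cat(X).numpy(), y

# Usage sketch with random stand-in data (real features/attributes would come from a CNN and a dataset).
seen_feats = torch.randn(1000, FEAT_DIM)         # visual features of seen-class images
seen_attrs = torch.randn(1000, ATTR_DIM)         # attribute vector of each sample's class
unseen_class_attrs = torch.randn(10, ATTR_DIM)   # one attribute vector per unseen class

gen = ConditionalGenerator()
train_generator(gen, seen_feats, seen_attrs)
X_fake, y_fake = synthesize(gen, unseen_class_attrs)
clf = LogisticRegression(max_iter=1000).fit(X_fake, y_fake)  # standard supervised classifier on synthetic data
```

For GZSC, the same idea extends by training the final classifier on real seen-class features together with the synthetic unseen-class features, so that seen and unseen categories compete in a single discriminative model.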

Related Material


[pdf] [arXiv]
[bibtex]
@InProceedings{Bucher_2017_ICCV,
author = {Bucher, Maxime and Herbin, Stephane and Jurie, Frederic},
title = {Generating Visual Representations for Zero-Shot Classification},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops},
month = {Oct},
year = {2017}
}