Multi-Cue Zero-Shot Learning With Strong Supervision

Zeynep Akata, Mateusz Malinowski, Mario Fritz, Bernt Schiele; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 59-68

Abstract


Scaling up visual category recognition to large numbers of classes remains challenging. A promising research direction is zero-shot learning, which does not require any training data to recognize new classes, but instead relies on some form of auxiliary information describing the new classes. Ultimately, this may allow us to use the textbook knowledge that humans employ to learn about new classes, transferring knowledge from classes they know well. The most successful zero-shot learning approaches currently require a particular type of auxiliary information, namely attribute annotations performed by humans, that is not readily available for most classes. Our goal is to circumvent this bottleneck by substituting such annotations with multiple pieces of information extracted from multiple unstructured text sources readily available on the web. To compensate for this weaker form of auxiliary information, we incorporate stronger supervision in the form of semantic part annotations on the classes from which we transfer knowledge. We achieve our goal with a joint embedding framework that maps multiple text parts as well as multiple semantic parts into a common space. Our results consistently and significantly improve on the state of the art in zero-shot recognition and retrieval.
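The joint embedding idea described above can be sketched as a sum of bilinear compatibility scores, one per cue pair, between image part features and class text embeddings. The sketch below is a minimal illustration under assumed dimensions and randomly initialized maps (standing in for trained parameters); the function names and shapes are hypothetical, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: feature sizes for image parts and text cues,
# and the number of unseen classes to score at test time.
d_img, d_txt, n_classes, n_cues = 4, 3, 5, 2

# One bilinear map W_k per (semantic part, text source) cue pair;
# random here as a stand-in for parameters learned on seen classes.
W = [rng.standard_normal((d_img, d_txt)) for _ in range(n_cues)]

def compatibility(part_feats, class_embs, W):
    """Joint compatibility F(x, y) = sum_k theta_k(x)^T W_k phi_k(y),
    summing a bilinear score over each image-part / text-cue pair."""
    return sum(theta @ Wk @ phi
               for theta, Wk, phi in zip(part_feats, W, class_embs))

# Features for one test image (one vector per semantic part) and
# per-class text embeddings (one vector per text source).
part_feats = [rng.standard_normal(d_img) for _ in range(n_cues)]
class_text = [[rng.standard_normal(d_txt) for _ in range(n_cues)]
              for _ in range(n_classes)]

# Zero-shot prediction: the unseen class with the highest joint score.
scores = [compatibility(part_feats, embs, W) for embs in class_text]
pred = int(np.argmax(scores))
print(pred)
```

Scoring every unseen class against the same image features is what makes the framework zero-shot: no image of the predicted class is needed at training time, only its text-derived embedding.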

Related Material


[bibtex]
@InProceedings{Akata_2016_CVPR,
author = {Akata, Zeynep and Malinowski, Mateusz and Fritz, Mario and Schiele, Bernt},
title = {Multi-Cue Zero-Shot Learning With Strong Supervision},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2016},
pages = {59--68}
}