Less Is More: Zero-Shot Learning From Online Textual Documents With Noise Suppression

Ruizhi Qiao, Lingqiao Liu, Chunhua Shen, Anton van den Hengel; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 2249-2257

Abstract


Classifying a visual concept merely from its associated online textual source, such as a Wikipedia article, is an attractive research topic in zero-shot learning because it alleviates the burden of manually collecting semantic attributes. Several recent works have pursued this approach by exploring various ways of connecting the visual and text domains. This paper revisits this idea by stepping further to consider one important factor: the textual representation is usually too noisy for the zero-shot learning application. This consideration motivates us to design a simple-but-effective zero-shot learning method capable of suppressing noise in the text. More specifically, we propose an l_2,1-norm based objective function which can simultaneously suppress the noisy signal in the text and learn a function to match the text document and visual features. We also develop an optimization algorithm to efficiently solve the resulting problem. By conducting experiments on two large datasets, we demonstrate that the proposed method significantly outperforms the competing methods which rely on online information sources but without explicit noise suppression. We further make an in-depth analysis of the proposed method and provide insight as to what kind of information in documents is useful for zero-shot learning.
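The noise-suppression property of the l_2,1 norm comes from the fact that it penalizes the sum of the l2 norms of a matrix's rows, which drives entire rows (e.g., uninformative word dimensions of the text representation) to zero. Below is a minimal, self-contained sketch of that mechanism via the norm's proximal operator (row-wise soft thresholding); the matrix and parameter names are illustrative, not the paper's actual formulation or optimization algorithm:

```python
import numpy as np

def l21_norm(W):
    """l_{2,1} norm: sum of the l2 norms of the rows of W.
    If rows index text-feature dimensions, penalizing this norm
    encourages entire noisy rows to be zeroed out."""
    return np.sum(np.linalg.norm(W, axis=1))

def prox_l21(W, t):
    """Proximal operator of t * l_{2,1}: row-wise soft thresholding.
    Rows whose l2 norm is below t are set to zero entirely;
    stronger rows are shrunk toward zero but kept."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)
    return W * scale

# Toy example: 3 text dimensions x 2 visual dimensions (hypothetical data).
W = np.array([[3.0, 4.0],    # strong row, l2 norm 5.0
              [0.1, 0.1],    # weak "noisy" row, l2 norm ~0.14
              [0.0, 2.0]])   # moderate row, l2 norm 2.0
W_sparse = prox_l21(W, 0.5)  # the noisy middle row is eliminated
```

Embedded in a matching objective, this penalty lets the learned text-to-visual mapping ignore noisy document dimensions rather than fit them.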

Related Material


[pdf]
[bibtex]
@InProceedings{Qiao_2016_CVPR,
author = {Qiao, Ruizhi and Liu, Lingqiao and Shen, Chunhua and van den Hengel, Anton},
title = {Less Is More: Zero-Shot Learning From Online Textual Documents With Noise Suppression},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2016}
}