Decorrelating Semantic Visual Attributes by Resisting the Urge to Share

Dinesh Jayaraman, Fei Sha, Kristen Grauman; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014, pp. 1629-1636

Abstract


Existing methods to learn visual attributes are prone to learning the wrong thing---namely, properties that are correlated with the attribute of interest among training samples. Yet, many proposed applications of attributes rely on being able to learn the correct semantic concept corresponding to each attribute. We propose to resolve such confusions by jointly learning decorrelated, discriminative attribute models. Leveraging side information about semantic relatedness, we develop a multi-task learning approach that uses structured sparsity to encourage feature competition among unrelated attributes and feature sharing among related attributes. On three challenging datasets, we show that accounting for structure in the visual attribute space is key to learning attribute models that preserve semantics, yielding improved generalizability that helps in the recognition and discovery of unseen object categories.
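The structured-sparsity idea described above can be illustrated with a small sketch. Everything below is an assumption for illustration only, not the paper's actual formulation: synthetic data, hypothetical semantic groups, a hinge loss, and a simple proximal-gradient solver with a per-feature group-l2 penalty (loosely in the spirit of "share features within a semantic group, compete for them across groups").

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 4 binary attribute tasks over d features.
# Assumed semantic groups: attributes {0,1} related, {2,3} related.
d, n, tasks = 20, 200, 4
groups = [[0, 1], [2, 3]]

X = rng.normal(size=(n, d))
W_true = np.zeros((d, tasks))
W_true[:5, 0] = W_true[:5, 1] = 1.0     # related pair shares features 0-4
W_true[5:10, 2] = W_true[5:10, 3] = 1.0  # other pair shares features 5-9
Y = np.sign(X @ W_true + 0.1 * rng.normal(size=(n, tasks)))

def prox_group(W, lam):
    """Proximal step for sum_j sum_g ||W[j, g]||_2: each
    (feature j, group g) block is shrunk jointly, so attributes in a
    group tend to select the same features, while the l1 structure
    across groups makes unrelated groups compete for features."""
    W = W.copy()
    for g in groups:
        norms = np.linalg.norm(W[:, g], axis=1, keepdims=True)
        scale = np.maximum(0.0, 1.0 - lam / np.maximum(norms, 1e-12))
        W[:, g] *= scale
    return W

# Proximal (sub)gradient descent on a multi-task hinge loss.
W = np.zeros((d, tasks))
step, lam = 0.01, 0.05
for _ in range(300):
    margins = Y * (X @ W)
    grad = -(X.T @ (Y * (margins < 1))) / n  # hinge subgradient per task
    W = prox_group(W - step * grad, step * lam)
```

After training, the weights for the signal-free features (rows 10-19) are shrunk toward zero, while each related attribute pair concentrates on its shared feature block; this is the decorrelation effect the regularizer is meant to encourage.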

Related Material


[pdf]
[bibtex]
@InProceedings{Jayaraman_2014_CVPR,
author = {Jayaraman, Dinesh and Sha, Fei and Grauman, Kristen},
title = {Decorrelating Semantic Visual Attributes by Resisting the Urge to Share},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2014}
}