Spoken Attributes: Mixing Binary and Relative Attributes to Say the Right Thing

Amir Sadovnik, Andrew Gallagher, Devi Parikh, Tsuhan Chen; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2013, pp. 2160-2167

Abstract


In recent years, there has been a great deal of progress in describing objects with attributes. Attributes have proven useful for object recognition, image search, face verification, image description, and zero-shot learning. Typically, attributes are either binary or relative: they describe either the presence or absence of a descriptive characteristic, or the relative magnitude of the characteristic when comparing two exemplars. However, prior work fails to model the actual way in which humans use these attributes in descriptive statements about images. Specifically, it does not address the important interactions between the binary and relative aspects of an attribute. In this work we propose a spoken attribute classifier which models a more natural way of using an attribute in a description. For each attribute we train a classifier that captures the specific way this attribute should be used. We show that as a result of using this model, we produce descriptions about images of people that are more natural and specific than those produced by past systems.
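To make the idea concrete, below is a minimal, hypothetical sketch of a per-attribute "spoken attribute" classifier for describing a pair of faces. The feature layout (binary attribute scores for each face plus a relative-attribute score), the candidate statement set, the simulated annotator labels, and the choice of logistic regression are all illustrative assumptions, not the paper's actual features, data, or learner.

```python
# Hypothetical sketch of a per-attribute "spoken attribute" classifier.
# Assumptions for illustration only: the feature layout, statement set,
# simulated labels, and logistic regression are not the paper's method.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Statement types the classifier chooses among for one attribute
# (e.g. "smiling") when describing a pair of faces (A, B).
STATEMENTS = [
    "both are smiling",
    "neither is smiling",
    "A is smiling more than B",
    "B is smiling more than A",
    "say nothing about smiling",
]

def pair_features(bin_a, bin_b, rel_ab):
    """Combine binary attribute scores for A and B with their
    relative-attribute score into one feature vector (assumed layout)."""
    return np.array([bin_a, bin_b, rel_ab, bin_a - bin_b])

# Toy training data: simulated scores with simulated human-chosen statements.
rng = np.random.default_rng(0)
X, y = [], []
for _ in range(500):
    bin_a, bin_b = rng.uniform(-2, 2, size=2)
    rel_ab = (bin_a - bin_b) + rng.normal(scale=0.3)
    # Simulated annotator preference: binary wording when both faces are
    # clearly positive/negative, relative wording when the scores differ,
    # and silence otherwise.
    if bin_a > 0.5 and bin_b > 0.5:
        label = 0
    elif bin_a < -0.5 and bin_b < -0.5:
        label = 1
    elif rel_ab > 1.0:
        label = 2
    elif rel_ab < -1.0:
        label = 3
    else:
        label = 4
    X.append(pair_features(bin_a, bin_b, rel_ab))
    y.append(label)

clf = LogisticRegression(max_iter=1000).fit(np.array(X), np.array(y))

# Decide what to say about a new pair: both scores high and similar,
# so a binary statement is preferred over a relative one.
test = pair_features(bin_a=1.2, bin_b=1.0, rel_ab=0.15)
print(STATEMENTS[clf.predict(test.reshape(1, -1))[0]])
```

The point of the sketch is the decision being learned: rather than always emitting a binary or a relative statement, a per-attribute classifier picks the statement type (or silence) that a human would find most natural for the given pair.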

Related Material


[pdf]
[bibtex]
@InProceedings{Sadovnik_2013_ICCV,
author = {Sadovnik, Amir and Gallagher, Andrew and Parikh, Devi and Chen, Tsuhan},
title = {Spoken Attributes: Mixing Binary and Relative Attributes to Say the Right Thing},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {December},
year = {2013},
pages = {2160-2167}
}