Efficient Relative Attribute Learning using Graph Neural Networks

Zihang Meng, Nagesh Adluru, Hyunwoo J. Kim, Glenn Fung, Vikas Singh; Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 552-567

Abstract


A sizable body of work on relative attributes provides compelling evidence that relating pairs of images along a continuum of strength pertaining to a visual attribute yields significant improvements in a wide variety of tasks in vision. In this paper, we show how emerging ideas in graph neural networks can yield a unified solution to various problems that broadly fall under relative attribute learning. Our main idea is the realization that relative attribute learning naturally benefits from exploiting the graphical structure of dependencies among the different relative attributes of images, especially when only a partial ordering of the relative attributes is provided in the training data. We use message passing on a probabilistic graphical model to perform end-to-end learning of appropriate representations of the images, their relationships, and the interplay between different attributes, so as to best align with the provided annotations. Our experiments demonstrate that this simple end-to-end learning framework using GNNs achieves accuracy competitive with specialized methods for both relative attribute learning and binary attribute prediction, while significantly relaxing the requirements on the training data, the number of parameters, or both.
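To make the core idea concrete, the following is a minimal illustrative sketch (not the authors' implementation) of one round of message passing on a small graph whose nodes carry per-image attribute representations and whose edges connect images that are compared in the training annotations. All names, dimensions, and weight initializations here are hypothetical; the actual model in the paper is learned end-to-end.

```python
import numpy as np

def message_passing_round(node_feats, adjacency, w_self, w_neigh):
    """One synchronous message-passing update (illustrative only).

    node_feats: (n, d) per-image attribute features
    adjacency:  (n, n) 0/1 matrix; an edge links two compared images
    w_self, w_neigh: (d, d) weight matrices (random here, learned in practice)
    """
    # Average incoming neighbor features (degree-normalized messages).
    deg = adjacency.sum(axis=1, keepdims=True).clip(min=1)
    messages = (adjacency @ node_feats) / deg
    # Combine each node's own state with the aggregated messages.
    return np.tanh(node_feats @ w_self + messages @ w_neigh)

rng = np.random.default_rng(0)
n, d = 4, 8                      # 4 images, 8-dim attribute features
x = rng.normal(size=(n, d))
a = np.array([[0, 1, 1, 0],      # pairwise comparison structure (hypothetical)
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
w_s = 0.1 * rng.normal(size=(d, d))
w_n = 0.1 * rng.normal(size=(d, d))

h = message_passing_round(x, a, w_s, w_n)
print(h.shape)  # (4, 8): updated node representations
```

Stacking several such rounds lets information about one attribute comparison propagate to related images, which is how partial orderings in the training data can still constrain the full set of representations.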

Related Material


[pdf]
[bibtex]
@InProceedings{Meng_2018_ECCV,
author = {Meng, Zihang and Adluru, Nagesh and Kim, Hyunwoo J. and Fung, Glenn and Singh, Vikas},
title = {Efficient Relative Attribute Learning using Graph Neural Networks},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
month = {September},
year = {2018}
}