Predicting Functional Regions on Objects

Chaitanya Desai, Deva Ramanan; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2013, pp. 968-975

Abstract


We revisit the notion of object affordances, an idea that speaks to an object's functional properties more than its class label. We study the problem of spatially localizing affordances in the form of 2D segmentation masks annotated with discrete affordance labels. For example, we use affordance masks to denote on what surfaces a person sits, grabs, and looks at when interacting with a variety of everyday objects (such as chairs, bikes, and TVs). We introduce such a functionally-annotated dataset derived from the PASCAL VOC benchmark and empirically evaluate several approaches for predicting such functionally-relevant object regions. We compare "blind" approaches that ignore image data, bottom-up approaches that reason about local surface layout, and top-down approaches that reason about structural constraints between surfaces/regions of objects. We show that the difficulty of functional region prediction varies considerably across objects, and that in general, top-down functional object models do well, though there is much room for improvement.
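The affordance masks described above can be thought of as per-pixel label maps over an object. As a minimal illustrative sketch (the label ids, names, and helper below are hypothetical, not taken from the paper's dataset), one might represent such a mask and summarize the area each affordance covers:

```python
import numpy as np

# Hypothetical label set; the actual dataset's affordance labels and
# encoding are defined by the authors' annotations, not by this sketch.
AFFORDANCES = {0: "background", 1: "sittable", 2: "graspable", 3: "lookable"}

def affordance_coverage(mask):
    """Return the fraction of pixels carrying each affordance label."""
    total = mask.size
    return {name: float((mask == label).sum()) / total
            for label, name in AFFORDANCES.items()}

# Toy 4x4 mask for a chair-like object: the top region is "graspable"
# (e.g. backrest), the bottom region is "sittable" (e.g. seat).
mask = np.array([[0, 2, 2, 0],
                 [0, 2, 2, 0],
                 [1, 1, 1, 1],
                 [1, 1, 1, 1]])
coverage = affordance_coverage(mask)
```

Here `coverage` maps each affordance name to the fraction of the object's pixels it occupies, which is one simple way to compare predicted masks against annotations.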

Related Material


[bibtex]
@InProceedings{Desai_2013_CVPR_Workshops,
author = {Desai, Chaitanya and Ramanan, Deva},
title = {Predicting Functional Regions on Objects},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2013},
pages = {968-975}
}