Learning to Segment Affordances

Timo Luddecke, Florentin Worgotter; Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops, 2017, pp. 769-776

Abstract


The goal of this work is to densely predict a comparatively large set of affordances given only single RGB images. We approach this task with a convolutional neural network based on the well-known ResNet architecture, which we combine with refinement modules recently proposed in the semantic segmentation literature. We introduce a novel cost function capable of handling incomplete data, which is necessary because we generate affordance maps from segmentations of objects and their parts. We demonstrate, both quantitatively and qualitatively, that a dense affordance predictor can indeed be learned from an object part dataset, and show that our model outperforms several baselines.
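
To illustrate the idea of a cost function that tolerates incomplete affordance labels, a minimal PyTorch sketch is given below. It is an assumption-based illustration only, not the paper's actual cost function: the name masked_affordance_loss, the multi-label binary-cross-entropy formulation, and the (B, A, H, W) tensor layout are all hypothetical choices made here for clarity.

    # Minimal sketch of a masked multi-label loss for dense affordance
    # prediction with incomplete ground truth (illustrative assumption,
    # not the formulation proposed in the paper).
    import torch
    import torch.nn.functional as F

    def masked_affordance_loss(logits, targets, known_mask):
        """logits, targets, known_mask: tensors of shape (B, A, H, W),
        where A is the number of affordance classes. known_mask is 1.0
        where the affordance label is known and 0.0 where it is missing."""
        # Per-pixel, per-affordance binary cross-entropy, kept unreduced.
        per_elem = F.binary_cross_entropy_with_logits(
            logits, targets, reduction="none")
        # Zero out contributions from unknown labels, then average over
        # the known entries only.
        masked = per_elem * known_mask
        return masked.sum() / known_mask.sum().clamp(min=1)

    # Example usage with random tensors (batch of 2, 15 affordance
    # classes, 64x64 resolution):
    logits = torch.randn(2, 15, 64, 64)
    targets = (torch.rand(2, 15, 64, 64) > 0.5).float()
    known_mask = (torch.rand(2, 15, 64, 64) > 0.3).float()
    loss = masked_affordance_loss(logits, targets, known_mask)

Normalizing by the number of known entries (rather than by all pixels) keeps the loss scale comparable across images with different amounts of missing annotation, which is one straightforward way to handle labels derived from partial object-part segmentations.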

Related Material


[pdf]
[bibtex]
@InProceedings{Luddecke_2017_ICCV,
author = {Luddecke, Timo and Worgotter, Florentin},
title = {Learning to Segment Affordances},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops},
month = {Oct},
year = {2017}
}