Inpaint2Learn: A Self-Supervised Framework for Affordance Learning

Lingzhi Zhang, Weiyu Du, Shenghao Zhou, Jiancong Wang, Jianbo Shi; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2022, pp. 2665-2674

Abstract


Perceiving affordances, the opportunities for interaction in a scene, is a fundamental human ability. It is an equally important skill for AI agents and robots that aim to understand and interact with the world. However, labeling affordances in the environment is not a trivial task. To address this issue, we propose a task-agnostic framework, named Inpaint2Learn, that generates affordance labels in a fully automatic manner and opens the door to affordance learning in the wild. To demonstrate its effectiveness, we apply it to three different tasks: human affordance prediction, Location2Object, and 6D object pose hallucination. Our experiments and user studies show that models trained with the Inpaint2Learn scaffolding generate diverse and visually plausible results in all three scenarios.
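
To make the automatic labeling idea concrete, below is a minimal, illustrative Python sketch of the self-supervision suggested by the framework's name (this is an assumption drawn from the abstract, not the authors' implementation): an object or person region is erased with an inpainter, and the erased region itself becomes a free affordance label for the resulting "empty" scene. The mean-fill inpaint stand-in, the make_affordance_pair helper, and the toy data are all hypothetical.

import numpy as np

def inpaint(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Stand-in inpainter: fill masked pixels with the mean of the
    unmasked pixels. A real pipeline would call a learned inpainting
    model here instead."""
    filled = image.copy()
    keep = ~mask
    filled[mask] = image[keep].mean(axis=0)
    return filled

def make_affordance_pair(image: np.ndarray, person_mask: np.ndarray):
    """Turn one unannotated image into an (input, label) training pair:
    the input is the scene with the person erased, and the label is the
    person's location/extent, obtained without any human labeling."""
    empty_scene = inpaint(image, person_mask)
    label = person_mask.astype(np.float32)  # e.g. an occupancy map
    return empty_scene, label

if __name__ == "__main__":
    # Toy example: a random 64x64 RGB image with a rectangular
    # "person" region standing in for a detected-person mask.
    rng = np.random.default_rng(0)
    img = rng.random((64, 64, 3), dtype=np.float32)
    mask = np.zeros((64, 64), dtype=bool)
    mask[20:50, 28:40] = True  # hypothetical person region
    scene, label = make_affordance_pair(img, mask)
    print(scene.shape, label.shape)  # (64, 64, 3) (64, 64)

A model trained on such pairs sees only empty scenes at input time, so at test time it can hallucinate where and how a person or object could plausibly appear in a novel scene.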

Related Material


@InProceedings{Zhang_2022_WACV,
    author    = {Zhang, Lingzhi and Du, Weiyu and Zhou, Shenghao and Wang, Jiancong and Shi, Jianbo},
    title     = {Inpaint2Learn: A Self-Supervised Framework for Affordance Learning},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2022},
    pages     = {2665-2674}
}