Visual Semantic Planning Using Deep Successor Representations

Yuke Zhu, Daniel Gordon, Eric Kolve, Dieter Fox, Li Fei-Fei, Abhinav Gupta, Roozbeh Mottaghi, Ali Farhadi; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 483-492

Abstract

A crucial capability of real-world intelligent agents is their ability to plan a sequence of actions to achieve their goals in the visual world. In this work, we address the problem of visual semantic planning: the task of predicting a sequence of actions from visual observations that transform a dynamic environment from an initial state to a goal state. Doing so entails knowledge about objects and their affordances, as well as actions and their preconditions and effects. We propose learning these through interaction with a visual and dynamic environment. Our solution bootstraps reinforcement learning with imitation learning. To ensure cross-task generalization, we develop a deep predictive model based on successor representations. Our experiments show near-optimal results across a wide range of tasks in the challenging THOR environment. The supplementary video can be accessed at the following link: https://goo.gl/vXsbQP.
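
For context, the successor representation the abstract refers to (Dayan, 1993) factors an action-value function into task-independent successor features and a task-specific reward weight vector. The following is a minimal sketch of that standard decomposition, not the paper's exact model; the symbols φ (state features), ψ (successor features), w (reward weights), and γ (discount) are generic notation assumed here for illustration:

% Standard successor-feature decomposition (Dayan, 1993).
% phi, psi, w, gamma are generic notation, not taken from the paper.
\begin{align}
  r(s) &\approx \phi(s)^\top w, \\
  \psi^\pi(s, a) &= \mathbb{E}^\pi\!\left[\sum_{t=0}^{\infty} \gamma^{t}\, \phi(s_t) \;\middle|\; s_0 = s,\ a_0 = a\right], \\
  Q^\pi(s, a) &= \psi^\pi(s, a)^\top w.
\end{align}

Because ψ summarizes expected future state features independently of any particular reward, only w needs to be re-estimated for a new task; this factorization is what underlies the cross-task generalization claimed in the abstract.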

Related Material

BibTeX:
@InProceedings{Zhu_2017_ICCV,
author = {Zhu, Yuke and Gordon, Daniel and Kolve, Eric and Fox, Dieter and Fei-Fei, Li and Gupta, Abhinav and Mottaghi, Roozbeh and Farhadi, Ali},
title = {Visual Semantic Planning Using Deep Successor Representations},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {Oct},
year = {2017},
pages = {483-492}
}