GPLAC: Generalizing Vision-Based Robotic Skills Using Weakly Labeled Images

Avi Singh, Larry Yang, Sergey Levine; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 5851-5860

Abstract

We tackle the problem of learning robotic sensorimotor control policies that can generalize to visually diverse and unseen environments. Achieving broad generalization typically requires large datasets, which are difficult to obtain for task-specific interactive processes such as reinforcement learning or learning from demonstration. However, much of the visual diversity in the world can be captured through passively collected datasets of images or videos. In our method, which we refer to as GPLAC (Generalized Policy Learning with Attentional Classifier), we use both interaction data and weakly labeled image data to augment the generalization capacity of sensorimotor policies. Our method combines multitask learning on action selection and an auxiliary binary classification objective with a convolutional neural network architecture that uses an attentional mechanism to avoid distractors. We show that pairing interaction data from just a single environment with a diverse dataset of weakly labeled images greatly improves generalization to unseen environments, and that this generalization depends on both the auxiliary objective and the attentional architecture we propose. We demonstrate our results both in simulation and on a real robotic manipulator, showing substantial improvements over standard convolutional architectures and domain adaptation methods.
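The recipe described in the abstract, multitask training of an action-selection head and an auxiliary binary classifier on top of a shared attentional convolutional encoder, can be sketched as follows. This is a minimal illustrative sketch and not the paper's implementation: the spatial-softmax attention layer, network sizes, the loss weight aux_weight, and all names (SpatialSoftmax, MultitaskPolicy, multitask_loss) are assumptions for exposition.

# Illustrative sketch only; all module names, dimensions, and the loss
# weighting are assumptions, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialSoftmax(nn.Module):
    """Collapse each feature map to an expected (x, y) image location,
    acting as a soft attention over spatial positions."""
    def __init__(self, height, width):
        super().__init__()
        xs = torch.linspace(-1.0, 1.0, width)
        ys = torch.linspace(-1.0, 1.0, height)
        grid_y, grid_x = torch.meshgrid(ys, xs, indexing="ij")
        self.register_buffer("grid_x", grid_x.reshape(-1))
        self.register_buffer("grid_y", grid_y.reshape(-1))

    def forward(self, feats):                        # (B, C, H, W)
        b, c, h, w = feats.shape
        attn = F.softmax(feats.view(b, c, -1), dim=-1)
        ex = (attn * self.grid_x).sum(-1)            # (B, C)
        ey = (attn * self.grid_y).sum(-1)            # (B, C)
        return torch.cat([ex, ey], dim=-1)           # (B, 2C) feature points

class MultitaskPolicy(nn.Module):
    """Shared conv encoder + attention, feeding two heads:
    action selection (interaction data) and binary classification
    (weakly labeled images)."""
    def __init__(self, n_actions):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 32, 5, stride=2), nn.ReLU(),
        )
        self.attend = SpatialSoftmax(13, 13)   # matches 64x64 inputs
        self.policy_head = nn.Linear(64, n_actions)  # action logits
        self.class_head = nn.Linear(64, 1)           # task-relevance logit

    def forward(self, img):
        points = self.attend(self.encoder(img))      # (B, 64)
        return self.policy_head(points), self.class_head(points)

def multitask_loss(model, interact_imgs, actions, weak_imgs, weak_labels,
                   aux_weight=0.5):
    # Action-selection loss on the (scarce) interaction data.
    action_logits, _ = model(interact_imgs)
    policy_loss = F.cross_entropy(action_logits, actions)
    # Auxiliary binary classification on the (plentiful) weakly labeled images.
    _, class_logits = model(weak_imgs)
    aux_loss = F.binary_cross_entropy_with_logits(
        class_logits.squeeze(-1), weak_labels.float())
    return policy_loss + aux_weight * aux_loss

In this setup, a gradient step could interleave a small batch of robot interaction frames with a larger batch of weakly labeled images, so the classifier branch supplies most of the visual diversity while the policy branch is trained only where actions are available.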

Related Material

[pdf] [supp] [arXiv] [bibtex]
@InProceedings{Singh_2017_ICCV,
  author    = {Singh, Avi and Yang, Larry and Levine, Sergey},
  title     = {GPLAC: Generalizing Vision-Based Robotic Skills Using Weakly Labeled Images},
  booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
  month     = {Oct},
  year      = {2017},
  pages     = {5851-5860}
}