Curious Representation Learning for Embodied Intelligence

Yilun Du, Chuang Gan, Phillip Isola; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 10408-10417

Abstract


Self-supervised visual representation learning has achieved remarkable success in recent years. By removing the need for supervised labels, such approaches can exploit the vast number of unlabeled images available on the Internet and in photographic datasets. Yet to build truly intelligent agents, we must construct representation learning algorithms that learn not only from datasets but also in environments. An agent in a natural environment is not typically fed curated data; instead, it must explore its environment to acquire the data it will learn from. We propose a framework, curious representation learning (CRL), which jointly learns a reinforcement learning policy and a visual representation model. The policy is trained to maximize the error of the representation learner, and in doing so is incentivized to explore its environment. At the same time, the learned representation grows stronger as the policy feeds it ever harder data to learn from. Our learned embodied representations enable promising transfer to downstream embodied semantic and language-guided navigation, performing comparably to or better than ImageNet pretraining without using any supervision at all. In addition, despite being trained in simulation, our learned representations yield interpretable results on real images.
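The coupling described above can be illustrated with a minimal training-loop sketch. The sketch below is not the paper's implementation: it assumes a toy random-image environment (ToyEnv), an autoencoder reconstruction loss as the representation objective, and a single-step REINFORCE policy update, whereas the paper works in embodied simulators and considers other representation losses. It only shows the adversarial structure: the representation learner minimizes its loss on collected observations, while the policy is rewarded by that same loss.

# Minimal CRL-style sketch (assumptions noted above; not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyEnv:
    """Stand-in environment: the action selects a 'view', returning a random image."""
    def __init__(self, num_actions=4, obs_shape=(3, 32, 32)):
        self.num_actions = num_actions
        self.obs_shape = obs_shape

    def step(self, action):
        # Different actions yield observations with different statistics.
        scale = 0.5 + 0.5 * action / (self.num_actions - 1)
        return scale * torch.rand(1, *self.obs_shape)

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(), nn.Linear(32 * 8 * 8, 128),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(128, 32 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 32, 8, 8)
        return self.net(h)

encoder, decoder = Encoder(), Decoder()
policy = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 4))
repr_opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
policy_opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

env = ToyEnv()
obs = env.step(0)
for step in range(1000):
    # Policy picks an action from the current (frozen) embedding.
    with torch.no_grad():
        z = encoder(obs)
    dist = torch.distributions.Categorical(logits=policy(z))
    action = dist.sample()
    next_obs = env.step(action.item())

    # Representation learner minimizes its loss on the collected observation.
    recon = decoder(encoder(next_obs))
    repr_loss = F.mse_loss(recon, next_obs)
    repr_opt.zero_grad()
    repr_loss.backward()
    repr_opt.step()

    # Policy is rewarded by that same (detached) loss, i.e. it seeks hard data.
    reward = repr_loss.detach()
    policy_loss = (-dist.log_prob(action) * reward).mean()
    policy_opt.zero_grad()
    policy_loss.backward()
    policy_opt.step()

    obs = next_obs

In practice the representation objective would be one of the self-supervised losses studied in the paper rather than reconstruction, and the policy would be trained with a full RL algorithm over trajectories instead of one-step REINFORCE.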

Related Material


[bibtex]
@InProceedings{Du_2021_ICCV,
    author    = {Du, Yilun and Gan, Chuang and Isola, Phillip},
    title     = {Curious Representation Learning for Embodied Intelligence},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {10408-10417}
}