Auxiliary Tasks for Efficient Learning of Point-Goal Navigation

Saurabh Satish Desai, Stefan Lee; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2021, pp. 717-725

Abstract


Top-performing approaches to embodied AI tasks like point-goal navigation often rely on training agents via reinforcement learning over tens of millions (or even billions) of experiential steps -- learning neural agents that map directly from visual observations to actions. In this work, we question whether these extreme training durations are necessary or whether they simply reflect the difficulty of learning visual representations purely from task reward. We examine the task of point-goal navigation in photorealistic environments and introduce three auxiliary tasks that encourage learned representations to capture key elements of the task -- local scene geometry, transition dynamics of the environment, and progress towards the goal. Importantly, these can be evaluated independently of task performance and provide strong supervision for representation learning. Our auxiliary tasks are simple to implement and rely on supervision already present in simulators commonly used for point-goal navigation. Applying our auxiliary losses to agents from prior work, we observe a more than 4x improvement in sample efficiency -- our augmented agents at 17 million steps outperform state-of-the-art agents trained for 72 million steps.
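
The three auxiliary tasks named in the abstract lend themselves to a compact implementation as extra prediction heads on the agent's learned representation. The following is a minimal, hypothetical PyTorch sketch of that idea -- not the authors' implementation. The feature size (FEAT_DIM), action space (ACT_DIM), head architectures, depth-map resolution, and equal 1:1:1 loss weighting are all illustrative assumptions; the paper itself specifies the actual formulations.

import torch
import torch.nn as nn
import torch.nn.functional as F

FEAT_DIM = 512   # assumed size of the agent's visual/recurrent feature
ACT_DIM = 4      # assumed discrete action space (e.g. stop/forward/turn-left/turn-right)

class AuxiliaryHeads(nn.Module):
    """Hypothetical auxiliary prediction heads; targets come from the simulator."""

    def __init__(self):
        super().__init__()
        # (1) Local scene geometry: regress a coarse 16x16 depth/occupancy map
        # from the current feature (the target is available from the simulator).
        self.geometry_head = nn.Sequential(
            nn.Linear(FEAT_DIM, 256), nn.ReLU(), nn.Linear(256, 16 * 16)
        )
        # (2) Transition dynamics: predict the action taken between two
        # consecutive features (an inverse-dynamics formulation, assumed here).
        self.dynamics_head = nn.Sequential(
            nn.Linear(2 * FEAT_DIM, 256), nn.ReLU(), nn.Linear(256, ACT_DIM)
        )
        # (3) Progress towards the goal: regress normalized remaining
        # distance-to-goal, supervised by the simulator's geodesic distance.
        self.progress_head = nn.Sequential(
            nn.Linear(FEAT_DIM, 128), nn.ReLU(), nn.Linear(128, 1)
        )

    def loss(self, feat_t, feat_t1, action, depth_map, progress):
        # depth_map: (B, 16, 16) float; action: (B,) long; progress: (B,) float
        geo_loss = F.mse_loss(self.geometry_head(feat_t), depth_map.flatten(1))
        dyn_loss = F.cross_entropy(
            self.dynamics_head(torch.cat([feat_t, feat_t1], dim=-1)), action
        )
        prog_loss = F.mse_loss(self.progress_head(feat_t).squeeze(-1), progress)
        # Equal weighting is an assumption; in practice these would be tuned.
        return geo_loss + dyn_loss + prog_loss

During training, such a term would simply be added to the reinforcement-learning objective, e.g. total_loss = rl_loss + aux_heads.loss(...), so the representation receives dense supervision alongside the sparse task reward.
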

Related Material


[bibtex]
@InProceedings{Desai_2021_WACV,
  author    = {Desai, Saurabh Satish and Lee, Stefan},
  title     = {Auxiliary Tasks for Efficient Learning of Point-Goal Navigation},
  booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
  month     = {January},
  year      = {2021},
  pages     = {717-725}
}