Take the Scenic Route: Improving Generalization in Vision-and-Language Navigation

Felix Yu, Zhiwei Deng, Karthik Narasimhan, Olga Russakovsky; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2020, pp. 920-921

Abstract

In the Vision-and-Language Navigation (VLN) task, an agent with egocentric vision navigates to a destination given natural language instructions. Manually annotating these instructions is time-consuming and expensive, so many existing approaches automatically generate additional samples to improve agent performance. However, these approaches still have difficulty generalizing to new environments. In this work, we investigate the popular Room-to-Room (R2R) VLN benchmark and discover that what matters is not only how much data you synthesize, but how you synthesize it. We find that shortest-path sampling, which is used by both the R2R benchmark and existing augmentation methods, encodes biases in the action space of the agent, which we dub action priors. We then show that these action priors offer one explanation for the poor generalization of existing works. To mitigate such priors, we propose a path sampling method based on random walks to augment the data. By training with this augmentation strategy, our agent generalizes better to unseen environments than the baseline, significantly improving model performance in the process.
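
To make the contrast between the two sampling strategies concrete, below is a minimal, hypothetical sketch on a toy navigation graph. It is not the authors' implementation: the grid graph, step budget, and no-backtracking heuristic are illustrative assumptions standing in for the Matterport3D viewpoint graphs used by R2R.

import random
import networkx as nx

def sample_random_walk(graph, start, num_steps):
    """Sample a path by taking uniform random steps over the navigation
    graph, avoiding immediate backtracking when another move exists."""
    path = [start]
    previous = None
    for _ in range(num_steps):
        neighbors = list(graph.neighbors(path[-1]))
        # Heuristic assumption (not specified in the abstract): prefer not
        # to undo the last step, so walks wander instead of oscillating.
        candidates = [n for n in neighbors if n != previous] or neighbors
        previous = path[-1]
        path.append(random.choice(candidates))
    return path

# Toy stand-in for an R2R environment: a 4x4 grid of panoramic viewpoints.
graph = nx.grid_2d_graph(4, 4)
start, goal = (0, 0), (3, 3)

shortest = nx.shortest_path(graph, source=start, target=goal)  # how R2R samples paths
scenic = sample_random_walk(graph, start, num_steps=len(shortest) + 2)

print("shortest path:", shortest)  # monotone progress toward the goal
print("random walk:  ", scenic)    # may turn around or revisit viewpoints

Because shortest paths rarely double back, sampling them exclusively skews the distribution of actions the agent observes during training; random-walk paths spread that distribution out, which is one way to read the action priors the abstract describes.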

Related Material

[bibtex]
@InProceedings{Yu_2020_CVPR_Workshops,
author = {Yu, Felix and Deng, Zhiwei and Narasimhan, Karthik and Russakovsky, Olga},
title = {Take the Scenic Route: Improving Generalization in Vision-and-Language Navigation},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2020}
}