TOUCHDOWN: Natural Language Navigation and Spatial Reasoning in Visual Street Environments

Howard Chen, Alane Suhr, Dipendra Misra, Noah Snavely, Yoav Artzi; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 12538-12547

Abstract
We study the problem of jointly reasoning about language and vision through a navigation and spatial reasoning task. We introduce the Touchdown task and dataset, where an agent must first follow navigation instructions in a Street View environment to a goal position, and then identify a location described in natural language within its observed environment to find a hidden object. The data contains 9,326 examples of English instructions and spatial descriptions paired with demonstrations. Qualitative linguistic analysis shows that the data displays rich use of spatial reasoning, and empirical analysis shows that the data presents an open challenge to existing methods.
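To make the task structure concrete, below is a minimal sketch of how one might represent a single Touchdown-style example in code. The schema is an illustrative assumption, not the dataset's actual release format: the field names (`navigation_instruction`, `spatial_description`, `panorama_path`, `target_panorama`, `target_heading`) are hypothetical, and the real data should be consulted for the true fields.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TouchdownExample:
    """One instruction-following example (hypothetical schema).

    Field names are illustrative assumptions; consult the released
    dataset for the actual format.
    """
    example_id: str
    navigation_instruction: str  # English text guiding the agent to the goal
    spatial_description: str     # English text locating the hidden object
    panorama_path: List[str]     # demonstration: ordered panorama IDs
    target_panorama: str         # panorama in which the object is hidden
    target_heading: float        # viewing angle (degrees) toward the object

def follows_demonstration(agent_path: List[str],
                          example: TouchdownExample) -> bool:
    """Check whether an agent's traversed panoramas exactly match
    the human demonstration path."""
    return agent_path == example.panorama_path
```

Splitting the record into a navigation component (instruction plus demonstration path) and a spatial description component mirrors the paper's two-stage task: following instructions to the goal, then resolving a natural-language description to a location.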

BibTeX
@InProceedings{Chen_2019_CVPR,
author = {Chen, Howard and Suhr, Alane and Misra, Dipendra and Snavely, Noah and Artzi, Yoav},
title = {TOUCHDOWN: Natural Language Navigation and Spatial Reasoning in Visual Street Environments},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2019},
pages = {12538-12547}
}