NEAT: Neural Attention Fields for End-to-End Autonomous Driving

Kashyap Chitta, Aditya Prakash, Andreas Geiger; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 15793-15803

Abstract


Efficient reasoning about the semantic, spatial, and temporal structure of a scene is a crucial prerequisite for autonomous driving. We present NEural ATtention fields (NEAT), a novel representation that enables such reasoning for end-to-end imitation learning models. NEAT is a continuous function which maps locations in Bird's Eye View (BEV) scene coordinates to waypoints and semantics, using intermediate attention maps to iteratively compress high-dimensional 2D image features into a compact representation. This allows our model to selectively attend to relevant regions in the input while ignoring information irrelevant to the driving task, effectively associating the images with the BEV representation. In a new evaluation setting involving adverse environmental conditions and challenging scenarios, NEAT outperforms several strong baselines and achieves driving scores on par with the privileged CARLA expert used to generate its training data. Furthermore, visualizing the attention maps for models with NEAT intermediate representations provides improved interpretability.
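The mechanism the abstract describes — a continuous function that takes a BEV query location, iteratively attends over 2D image features to compress them, and decodes waypoints and semantics — can be illustrated with a minimal NumPy sketch. This is a structural toy with random weights and hypothetical dimensions, not the paper's actual architecture (which uses learned encoders and a different training setup); all names and sizes here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(d_in, d_hidden, d_out):
    # Random weights for a two-layer MLP (illustrative only, not trained).
    return (rng.normal(0, 0.1, (d_in, d_hidden)), np.zeros(d_hidden),
            rng.normal(0, 0.1, (d_hidden, d_out)), np.zeros(d_out))

def mlp(params, x):
    # Two-layer MLP: ReLU hidden layer, linear output.
    w1, b1, w2, b2 = params
    return np.maximum(x @ w1 + b1, 0.0) @ w2 + b2

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical sizes: image patches, feature channels, semantic classes,
# and number of iterative attention refinement steps.
P, C, K, ITERS = 64, 32, 5, 2

att_net = init_mlp(3 + C, 64, P)      # (query, compressed feat) -> attention logits
dec_net = init_mlp(3 + C, 64, 2 + K)  # -> waypoint offset (2) + semantic logits (K)

def neat_field(query, feats):
    """query: (x, y, t) in BEV coordinates; feats: (P, C) image patch features."""
    c = feats.mean(axis=0)                # initial compressed scene feature
    for _ in range(ITERS):                # iteratively refine the attention map
        a = softmax(mlp(att_net, np.concatenate([query, c])))
        c = a @ feats                     # attention-weighted compression of 2D features
    out = mlp(dec_net, np.concatenate([query, c]))
    return out[:2], out[2:], a            # waypoint offset, semantic logits, attention

feats = rng.normal(size=(P, C))
offset, sem, attn = neat_field(np.array([1.0, 2.0, 0.5]), feats)
```

Because the function is queried per BEV location, the final attention map `attn` indicates which image patches each output depends on, which is the source of the interpretability the abstract mentions.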

Related Material


@InProceedings{Chitta_2021_ICCV,
  author    = {Chitta, Kashyap and Prakash, Aditya and Geiger, Andreas},
  title     = {NEAT: Neural Attention Fields for End-to-End Autonomous Driving},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  month     = {October},
  year      = {2021},
  pages     = {15793-15803}
}