Feedback-Guided Autonomous Driving
Abstract
While behavior cloning has recently emerged as a highly successful paradigm for autonomous driving, humans rarely learn to perform complex tasks such as driving via imitation or behavior cloning alone. In contrast, learning in humans often involves additional detailed guidance throughout the interactive learning process, i.e., where feedback, often via language, provides detailed information as to which part of their trial was performed incorrectly or suboptimally, and why. Motivated by this observation, we introduce an efficient feedback-based framework for improving behavior-cloning-based training of sensorimotor driving agents. Our key insight is to leverage recent advances in Large Language Models (LLMs) to provide corrective fine-grained feedback regarding the underlying reason behind driving prediction failures. Moreover, our introduced network architecture is efficient, enabling the first sensorimotor end-to-end training and evaluation of LLM-based driving models. The resulting agent achieves state-of-the-art performance in open-loop evaluation on nuScenes, outperforming the prior state of the art by over 8.1% and 57.1% in accuracy and collision rate, respectively. In CARLA, our camera-based agent improves by 16.6% in driving score over prior LiDAR-based approaches.
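For intuition only, the sketch below illustrates the general idea of feedback-guided behavior cloning described in the abstract: an external critique of a failed trajectory prediction modulates the imitation loss. It is not the paper's implementation; `DrivingPolicy`, `query_llm_feedback`, and the feedback-weighted loss are all hypothetical placeholders (a real system would prompt an LLM with a language description of the failure and parse its explanation).

```python
# Hypothetical sketch of feedback-guided behavior cloning.
# `DrivingPolicy` and `query_llm_feedback` are illustrative stand-ins,
# not components of the paper's actual architecture.
import torch
import torch.nn as nn


class DrivingPolicy(nn.Module):
    """Toy sensorimotor policy: image features -> future waypoints."""

    def __init__(self, feat_dim=512, horizon=4):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(),
            nn.Linear(256, horizon * 2),  # (x, y) per future step
        )

    def forward(self, feats):
        return self.head(feats)


def query_llm_feedback(pred_waypoints, expert_waypoints):
    """Placeholder for an LLM critique of a prediction failure.

    A real system would query an LLM about why the prediction was wrong;
    here we simply return a per-sample weight that emphasizes samples
    with large trajectory error, mimicking corrective feedback.
    """
    err = (pred_waypoints - expert_waypoints).norm(dim=-1)
    return 1.0 + err.detach()  # up-weight samples the critique flags


policy = DrivingPolicy()
optim = torch.optim.Adam(policy.parameters(), lr=1e-4)

feats = torch.randn(8, 512)    # stand-in for extracted camera features
expert = torch.randn(8, 4 * 2) # stand-in for expert future waypoints

for _ in range(10):
    pred = policy(feats)
    weight = query_llm_feedback(pred, expert)
    # Feedback-weighted imitation loss; plain behavior cloning would
    # use a uniform weight of 1 for every sample.
    loss = (weight * (pred - expert).pow(2).sum(-1)).mean()
    optim.zero_grad()
    loss.backward()
    optim.step()
```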
Related Material
[pdf]
[bibtex]
@InProceedings{Zhang_2024_CVPR,
    author    = {Zhang, Jimuyang and Huang, Zanming and Ray, Arijit and Ohn-Bar, Eshed},
    title     = {Feedback-Guided Autonomous Driving},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {15000-15011}
}