[pdf]
[arXiv]
[bibtex]
@InProceedings{Jiang_2025_ICCV,
  author    = {Jiang, Sicong and Huang, Zilin and Qian, Kangan and Luo, Ziang and Zhu, Tianze and Zhong, Yang and Tang, Yihong and Kong, Menglin and Wang, Yunlong and Jiao, Siwen and Ye, Hao and Sheng, Zihao and Zhao, Xin and Wen, Tuopu and Fu, Zheng and Chen, Sikai and Jiang, Kun and Yang, Diange and Choi, Seongjin and Sun, Lijun},
  title     = {A Survey on Vision-Language-Action Models for Autonomous Driving},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
  month     = {October},
  year      = {2025},
  pages     = {4524-4536}
}
A Survey on Vision-Language-Action Models for Autonomous Driving
Abstract
The rapid progress of multimodal large language models (MLLMs) has paved the way for Vision-Language-Action (VLA) paradigms, which integrate visual perception, natural language understanding, and control within a single policy. Researchers in autonomous driving are actively adapting these methods to the vehicle domain. Such models promise autonomous vehicles that can interpret high-level instructions, reason about complex traffic scenes, and make their own decisions. However, the literature remains fragmented and is expanding rapidly. This survey offers the first comprehensive overview of VLA for Autonomous Driving (VLA4AD). We (i) formalize the architectural building blocks shared across recent work, (ii) trace the evolution from early explainer models to reasoning-centric VLA models, and (iii) compare more than 20 representative models according to the progress of VLA in the autonomous driving domain. We also consolidate existing datasets and benchmarks, highlighting protocols that jointly measure driving safety, instruction fidelity, and explanation quality. Finally, we detail open challenges (robustness, real-time efficiency, and formal verification) and outline future directions toward foundation-scale driving models and a standardised traffic language. This survey provides a concise yet complete reference for advancing interpretable, instruction-following, and socially aligned autonomous vehicles.