Automated Evaluation of Large Vision-Language Models on Self-Driving Corner Cases
Abstract
Large Vision-Language Models (LVLMs) have received widespread attention for advancing interpretable self-driving. Existing evaluations of LVLMs primarily focus on multi-faceted capabilities in natural circumstances, lacking automated and quantifiable assessment for self-driving, let alone for severe road corner cases. In this work, we propose CODA-LM, the first benchmark for the automatic evaluation of LVLMs on self-driving corner cases. We adopt a hierarchical data structure and prompt powerful LVLMs to analyze complex driving scenes and generate high-quality pre-annotations for human annotators, while for LVLM evaluation, we show that using text-only large language models (LLMs) as judges reveals even better alignment with human preferences than LVLM judges. Moreover, with our CODA-LM, we build CODA-VLM, a new driving LVLM surpassing all open-sourced counterparts on CODA-LM. Our CODA-VLM performs comparably with GPT-4V, even surpassing GPT-4V by +21.42% on the regional perception task. We hope CODA-LM can become the catalyst to promote interpretable self-driving empowered by LVLMs.
Related Material
[pdf]
[supp]
[arXiv]
[bibtex]
@InProceedings{Chen_2025_WACV,
    author    = {Chen, Kai and Li, Yanze and Zhang, Wenhua and Liu, Yanxin and Li, Pengxiang and Gao, Ruiyuan and Hong, Lanqing and Tian, Meng and Zhao, Xinhai and Li, Zhenguo and Yeung, Dit-Yan and Lu, Huchuan and Jia, Xu},
    title     = {Automated Evaluation of Large Vision-Language Models on Self-Driving Corner Cases},
    booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
    month     = {February},
    year      = {2025},
    pages     = {7806-7815}
}