The 9th AI City Challenge
Abstract
The ninth AI City Challenge continues to advance real-world applications of computer vision and AI in transportation, industrial automation, and public safety. The 2025 edition featured four tracks and saw a 17% increase in participation, with 245 teams from 15 countries registered on the evaluation server. Public release of challenge datasets led to over 30,000 downloads to date. Track 1 focused on multi-class 3D multi-camera tracking, involving people, humanoids, autonomous mobile robots, and forklifts, using detailed calibration and 3D bounding box annotations. Track 2 tackled video question answering in traffic safety, with multi-camera incident understanding enriched by 3D gaze labels. Track 3 addressed fine-grained spatial reasoning in dynamic warehouse environments, requiring AI systems to interpret RGB-D inputs and answer spatial questions that combine perception, geometry, and language. Both Track 1 and Track 3 datasets were generated in NVIDIA Omniverse. Track 4 emphasized efficient road object detection from fisheye cameras, supporting lightweight, real-time deployment on edge devices. The evaluation framework enforced submission limits and used a partially held-out test set to ensure fair benchmarking. Final rankings were revealed after the competition concluded, fostering reproducibility and mitigating overfitting. Several teams achieved top-tier results, setting new benchmarks in multiple tasks.
Related Material

[pdf] [arXiv] [bibtex]

@InProceedings{Tang_2025_ICCV,
    author    = {Tang, Zheng and Wang, Shuo and Anastasiu, David C. and Chang, Ming-Ching and Sharma, Anuj and Kong, Quan and Kobori, Norimasa and Gochoo, Munkhjargal and Batnasan, Ganzorig and Otgonbold, Munkh-Erdene and Alnajjar, Fady and Hsieh, Jun-Wei and Kornuta, Tomasz and Li, Xiaolong and Zhao, Yilin and Zhang, Han and Radhakrishnan, Subhashree and Jain, Arihant and Kumar, Ratnesh and Murali, Vidya N. and Wang, Yuxing and Pusegaonkar, Sameer Satish and Wang, Yizhou and Biswas, Sujit and Wu, Xunlei and Zheng, Zhedong and Chakraborty, Pranamesh and Chellappa, Rama},
    title     = {The 9th AI City Challenge},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
    month     = {October},
    year      = {2025},
    pages     = {5467-5476}
}