ReferGPT: Towards Zero-Shot Referring Multi-Object Tracking

Tzoulio Chamiti, Leandro Di Bella, Adrian Munteanu, Nikos Deligiannis; Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR) Workshops, 2025, pp. 3849-3858

Abstract


Tracking multiple objects based on textual queries is a challenging task that requires linking language understanding with object association across frames. Previous works typically either train the whole pipeline end-to-end or integrate an additional referring-text module into a multi-object tracker, but both approaches require supervised training and can struggle to generalize to open-set queries. In this work, we introduce ReferGPT, a novel zero-shot referring multi-object tracking framework. We provide a multi-modal large language model (MLLM) with spatial knowledge, enabling it to generate 3D-aware captions. This enhances its descriptive capabilities and supports a more flexible referring vocabulary without training. We also propose a robust query-matching strategy that leverages CLIP-based semantic encoding and fuzzy matching to associate MLLM-generated captions with user queries. Extensive experiments on Refer-KITTI, Refer-KITTIv2, and Refer-KITTI+ demonstrate that ReferGPT achieves competitive performance against trained methods, showcasing its robustness and zero-shot capabilities in autonomous driving. The code will be publicly available on GitHub.
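The query-matching idea described in the abstract (combining semantic similarity with fuzzy string matching) can be sketched as follows. This is a minimal stand-in, not the authors' implementation: a toy bag-of-words embedding replaces the CLIP text encoder, `difflib` replaces a dedicated fuzzy matcher, and `alpha` is a hypothetical mixing weight.

```python
from difflib import SequenceMatcher
import math


def cosine(a, b):
    """Cosine similarity between two vectors (stand-in for CLIP-space similarity)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0


def fuzzy_ratio(a, b):
    """Character-level fuzzy match in [0, 1] (difflib stands in for a fuzzy matcher)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


def match_score(caption, query, embed, alpha=0.5):
    """Blend semantic and fuzzy similarity; alpha is a hypothetical mixing weight."""
    semantic = cosine(embed(caption), embed(query))
    fuzzy = fuzzy_ratio(caption, query)
    return alpha * semantic + (1 - alpha) * fuzzy


def make_bow_embed(texts):
    """Toy bag-of-words embedding used here as a placeholder for a CLIP text encoder."""
    vocab = sorted({w for t in texts for w in t.lower().split()})
    return lambda t: [t.lower().split().count(w) for w in vocab]


# Toy MLLM-style captions for two tracked objects, matched against a user query.
captions = ["a red car moving to the left", "a pedestrian crossing the street"]
query = "the red car on the left"
embed = make_bow_embed(captions + [query])
scores = [match_score(c, query, embed) for c in captions]
```

In this sketch, the caption describing the red car scores higher than the pedestrian caption, so its track would be associated with the query; the paper's method works in CLIP embedding space rather than with bag-of-words counts.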

Related Material


[pdf] [arXiv]
[bibtex]
@InProceedings{Chamiti_2025_CVPR,
    author    = {Chamiti, Tzoulio and Di Bella, Leandro and Munteanu, Adrian and Deligiannis, Nikos},
    title     = {ReferGPT: Towards Zero-Shot Referring Multi-Object Tracking},
    booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR) Workshops},
    month     = {June},
    year      = {2025},
    pages     = {3849-3858}
}