[pdf]
[bibtex]
@InProceedings{Kristan_2025_ICCV,
  author    = {Kristan, Matej and Matas, Ji\v{r}{\'\i} and Tokmakov, Pavel and Luke\v{z}i\v{c}, Alan and Felsberg, Michael and Zajc, Luka \v{C}ehovin and Tran, Khanh-Tung and Vu, Xuan-Son and Bj\"orklund, Johanna and Neoral, Michal and Chang, Hyung Jin and Fern\'andez, Gustavo and Attari, Minasadat and Dunnhofer, Matteo and Feng, Wei and Feng, Zhenhua and Gao, Jin and Gu, Yameng and Han, Ruize and He, Jiawei and He, Zhenyu and Hou, Junhui and Hu, Weiming and Hu, Xiantao and Huang, Xingsen and Huang, Yuqing and Kirichenko, Gleb and Kittler, Josef and Kou, Yutong and Lai, Simiao and Li, Bing and Li, Xin and Lin, Shubo and Lu, Huchuan and Miao, Deshui and Micheloni, Christian and Mogollon, Juan and Nottebaum, Moritz and Palaniappan, Kannappan and Pang, Ziqi and Qian, Zekun and Rahmon, Gani and Romanov, Aleksandr and Shi, Liangtao and Solovyev, Roman and Kazemi, Elham Soltani and Toubal, Imad Eddine and Videnovic, Jovana and Wang, Dong and Wang, Yaowei and Wang, Yu-Xiong and Wang, Zhixiang and Wu, Xiaojun and Xie, Jinxia and Xu, Tianyang and Xue, Chaocan and Xue, Yuanliang and Yang, Ming-Hsuan and Yurtov, Dmitriy and Zhang, Chunui and Zhang, Xiangqun and Zhang, Yunfei and Zheng, Qingfang and Zhong, Bineng and Zhong, Fuan and Zhou, Jinglin and Zhou, Jingmeng and Zhou, Junbao and Zhou, Yong and Zhu, Xuefeng},
  title     = {The Third Visual Object Tracking Segmentation VOTS2025 Challenge Results},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
  month     = {October},
  year      = {2025},
  pages     = {7422-7440}
}
The Third Visual Object Tracking Segmentation VOTS2025 Challenge Results
Abstract
The VOTS2025 challenge marks the thirteenth edition of the Visual Object Tracking Segmentation benchmarking activity organized under the VOT initiative. Building on the tracking setup introduced in VOTS2023, the challenge continues to integrate short-term and long-term tracking, as well as single-target and multi-target scenarios, using segmentation masks as the sole form of target annotation. This year's benchmark features three sub-challenges. The first two, VOTS2025 and VOTSt2025, evaluate tracking of conventional objects and of objects undergoing topological changes, respectively. A new addition, VOTS-RT2025, aims to foster the development of efficient tracking models by introducing constraints that emphasize real-time performance. All sub-challenges adopt a consistent evaluation protocol, with VOTS-RT2025 introducing specific modifications to reflect latency-aware performance. We report and analyze results from 32 submissions. Full tracker descriptions, source code, datasets, and the evaluation toolkit are available on the project website.