OneTracker: Unifying Visual Object Tracking with Foundation Models and Efficient Tuning
Abstract
Visual object tracking aims to localize the target object in each frame based on its initial appearance in the first frame. Depending on the input modality, tracking tasks can be divided into RGB tracking and RGB+X (e.g., RGB+N and RGB+D) tracking. Despite the different input modalities, the core aspect of tracking is temporal matching. Based on this common ground, we present a general framework to unify various tracking tasks, termed OneTracker. OneTracker first performs large-scale pre-training on an RGB tracker called the Foundation Tracker. This pretraining phase equips the Foundation Tracker with a stable ability to estimate the location of the target object. Then we regard the other-modality information as a prompt and build the Prompt Tracker upon the Foundation Tracker. By freezing the Foundation Tracker and adjusting only a small set of additional trainable parameters, the Prompt Tracker inherits the strong localization ability of the Foundation Tracker and achieves parameter-efficient finetuning on downstream RGB+X tracking tasks. To evaluate the effectiveness of our general framework OneTracker, which consists of the Foundation Tracker and the Prompt Tracker, we conduct extensive experiments on 6 popular tracking tasks across 11 benchmarks. OneTracker outperforms other models and achieves state-of-the-art performance.
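The recipe described in the abstract, freezing a pretrained RGB Foundation Tracker and training only a small set of prompt parameters that inject the extra modality, follows the general parameter-efficient prompt-tuning pattern. The sketch below is a minimal PyTorch illustration of that pattern only, not the authors' implementation; FoundationTracker, PromptTracker, the token dimensions, and the placeholder loss are all hypothetical.

import torch
import torch.nn as nn

class FoundationTracker(nn.Module):
    # Hypothetical stand-in for a pretrained RGB tracker.
    def __init__(self, dim=256):
        super().__init__()
        self.backbone = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True),
            num_layers=4,
        )
        self.head = nn.Linear(dim, 4)  # predicts a bounding box (x, y, w, h)

    def forward(self, tokens):
        return self.head(self.backbone(tokens).mean(dim=1))

class PromptTracker(nn.Module):
    # Wraps a frozen Foundation Tracker and adds a small trainable
    # projection that turns the extra modality (e.g. depth) into prompt tokens.
    def __init__(self, foundation, x_dim=64, dim=256):
        super().__init__()
        self.foundation = foundation
        for p in self.foundation.parameters():
            p.requires_grad = False           # freeze the pretrained tracker
        self.prompt_proj = nn.Linear(x_dim, dim)  # the only trainable parameters

    def forward(self, rgb_tokens, x_tokens):
        prompts = self.prompt_proj(x_tokens)         # modality -> prompt tokens
        tokens = torch.cat([prompts, rgb_tokens], dim=1)
        return self.foundation(tokens)

# Usage: only the prompt projection receives gradient updates.
tracker = PromptTracker(FoundationTracker())
optimizer = torch.optim.AdamW(
    [p for p in tracker.parameters() if p.requires_grad], lr=1e-4
)
rgb = torch.randn(2, 196, 256)    # dummy RGB patch tokens
depth = torch.randn(2, 196, 64)   # dummy tokens for the "X" modality
loss = tracker(rgb, depth).abs().mean()  # placeholder loss for illustration
loss.backward()
optimizer.step()

The point of the pattern is that each downstream RGB+X task adds only the small prompt projection on top of the shared frozen tracker, which is what makes the finetuning parameter-efficient.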
Related Material
[pdf] [supp] [arXiv] [bibtex]

@InProceedings{Hong_2024_CVPR,
  author    = {Hong, Lingyi and Yan, Shilin and Zhang, Renrui and Li, Wanyun and Zhou, Xinyu and Guo, Pinxue and Jiang, Kaixun and Chen, Yiting and Li, Jinglun and Chen, Zhaoyu and Zhang, Wenqiang},
  title     = {OneTracker: Unifying Visual Object Tracking with Foundation Models and Efficient Tuning},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2024},
  pages     = {19079-19091}
}