Real-time Visual Object Tracking with Natural Language Description

Qi Feng, Vitaly Ablavsky, Qinxun Bai, Guorong Li, Stan Sclaroff; The IEEE Winter Conference on Applications of Computer Vision (WACV), 2020, pp. 700-709


In this work, we argue that conditioning on a natural language (NL) description of the target provides information with longer-term invariance, and thus helps cope with typical tracking challenges. However, deriving a formulation that combines the strengths of appearance-based tracking with the language modality is not straightforward. We therefore propose a novel deep tracking-by-detection formulation that can exploit NL descriptions. During the detection stage, a proposal network generates regions related to the given NL description. Our LSTM-based tracker then predicts the target update from the regions proposed by the NL-based detection stage. Our method runs at over 30 fps on a single GPU. In benchmarks, our method is competitive with state-of-the-art trackers that are initialized with bounding boxes, while it outperforms all other trackers on targets given unambiguous and precise language annotations. When conditioned on NL descriptions only, our model doubles the performance of the previous best attempt.
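The two-stage pipeline described above can be illustrated with a minimal, self-contained sketch. This is not the authors' implementation: the proposal stage stands in for the NL-conditioned proposal network by scoring candidate regions against a (precomputed, hypothetical) language embedding with cosine similarity, and the tracking stage replaces the LSTM with a simple recurrent blend of the previous box and the best proposal. All names (`propose_regions`, `update_target`, the feature vectors) are illustrative assumptions.

```python
import math


def cosine(u, v):
    # Cosine similarity between two feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0


def propose_regions(frame_regions, nl_embedding, top_k=2):
    """Detection stage (sketch): rank candidate regions by how well their
    visual features match the NL description embedding."""
    scored = sorted(
        frame_regions,
        key=lambda r: cosine(r["feat"], nl_embedding),
        reverse=True,
    )
    return scored[:top_k]


def update_target(prev_box, proposals, momentum=0.5):
    """Tracking stage (sketch, standing in for the LSTM): predict the target
    update by blending the previous box with the top NL-matched proposal."""
    best = proposals[0]["box"]
    return tuple(momentum * p + (1 - momentum) * b
                 for p, b in zip(prev_box, best))


# Illustrative usage: two candidate regions, an NL embedding that matches
# the first region, and one tracking update from the previous box.
regions = [
    {"feat": [1.0, 0.0], "box": (2, 2, 12, 12)},
    {"feat": [0.0, 1.0], "box": (50, 50, 60, 60)},
]
proposals = propose_regions(regions, nl_embedding=[1.0, 0.0], top_k=1)
new_box = update_target((0, 0, 10, 10), proposals)
```

In the actual system, both stages are learned jointly end to end; the sketch only conveys the data flow of proposing NL-relevant regions and recurrently updating the target from them.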

Related Material

@InProceedings{Feng_2020_WACV,
  author    = {Feng, Qi and Ablavsky, Vitaly and Bai, Qinxun and Li, Guorong and Sclaroff, Stan},
  title     = {Real-time Visual Object Tracking with Natural Language Description},
  booktitle = {The IEEE Winter Conference on Applications of Computer Vision (WACV)},
  month     = {March},
  year      = {2020}
}