@InProceedings{Sahin_2023_WACV,
  author    = {Sahin, Gozde and Itti, Laurent},
  title     = {HOOT: Heavy Occlusions in Object Tracking Benchmark},
  booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
  month     = {January},
  year      = {2023},
  pages     = {4830-4839}
}
HOOT: Heavy Occlusions in Object Tracking Benchmark
Abstract
In this paper, we present HOOT, the Heavy Occlusions in Object Tracking Benchmark, a new visual object tracking dataset aimed at high-occlusion scenarios in single-object tracking. The benchmark consists of 581 high-quality videos comprising 436K frames, densely annotated with rotated bounding boxes for targets spanning 74 object classes. The dataset is geared toward the development, evaluation, and analysis of visual tracking algorithms that are robust to occlusions. It comprises videos with high occlusion levels, where the median percentage of occluded frames per video is 68%. It also provides critical occlusion attributes, including a taxonomy of occluders, occlusion masks for every bounding box, and per-frame partial/full occlusion labels. HOOT has been compiled to encourage the development of new methods for occlusion handling in visual tracking by providing training and test splits with high occlusion levels. This makes HOOT the first densely annotated, large dataset designed for single-object tracking under severe occlusion. We evaluate 15 state-of-the-art trackers on the new dataset to serve as baselines for future work focusing on occlusions.