Generating Masks From Boxes by Mining Spatio-Temporal Consistencies in Videos

Bin Zhao, Goutam Bhat, Martin Danelljan, Luc Van Gool, Radu Timofte; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 13556-13566

Abstract


Segmenting objects in videos is a fundamental computer vision task. The current deep-learning-based paradigm offers a powerful but data-hungry solution. However, current datasets are limited by the cost and human effort of annotating object masks in videos. This effectively limits the performance and generalization capabilities of existing video segmentation methods. To address this issue, we explore a weaker form of supervision: bounding box annotations. We introduce a method for generating segmentation masks from per-frame bounding box annotations in videos. To this end, we propose a spatio-temporal aggregation module that effectively mines consistencies in the object and background appearance across multiple frames. We use our predicted, accurate masks to train video object segmentation (VOS) networks for the tracking domain, where only manual bounding box annotations are available. The additional data provides substantially better generalization performance, leading to state-of-the-art results on standard tracking benchmarks. The code and models are available at https://github.com/visionml/pytracking.
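To give a flavor of what "mining spatio-temporal consistencies" means, the sketch below shows a hand-crafted, hypothetical variant: for a query frame, each spatial location aggregates features from all frames of the clip, weighted by appearance similarity. The paper's aggregation module is a learned network; this is only an illustrative stand-in, and all names and the similarity-softmax rule are assumptions.

```python
import numpy as np

def aggregate_temporal_features(frame_feats, query_idx, temperature=0.1):
    """Similarity-weighted aggregation across frames (illustrative only).

    frame_feats : array of shape (T, N, C), T frames, N spatial locations,
                  C feature channels.
    query_idx   : index of the frame whose locations we aggregate for.
    Returns an (N, C) array of aggregated features for the query frame.
    """
    T, N, C = frame_feats.shape
    query = frame_feats[query_idx]                       # (N, C)

    def l2norm(x):
        # Normalize feature vectors so the dot product is cosine similarity.
        return x / (np.linalg.norm(x, axis=-1, keepdims=True) + 1e-8)

    q = l2norm(query)                                    # (N, C)
    k = l2norm(frame_feats.reshape(T * N, C))            # (T*N, C)
    sim = q @ k.T                                        # (N, T*N) cosine sims
    w = np.exp(sim / temperature)
    w /= w.sum(axis=1, keepdims=True)                    # softmax over all locations
    # Each query location becomes a similarity-weighted mix of features
    # from every location in every frame -- a crude form of mining
    # appearance consistencies across the clip.
    return w @ frame_feats.reshape(T * N, C)             # (N, C)
```

In the paper this kind of cross-frame evidence is what lets per-frame boxes be refined into full masks; the learned module replaces the fixed cosine-softmax weighting above.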

Related Material


@InProceedings{Zhao_2021_ICCV,
    author    = {Zhao, Bin and Bhat, Goutam and Danelljan, Martin and Van Gool, Luc and Timofte, Radu},
    title     = {Generating Masks From Boxes by Mining Spatio-Temporal Consistencies in Videos},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {13556-13566}
}