TCOVIS: Temporally Consistent Online Video Instance Segmentation

Junlong Li, Bingyao Yu, Yongming Rao, Jie Zhou, Jiwen Lu; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 1097-1107

Abstract


In recent years, significant progress has been made in video instance segmentation (VIS), with many offline and online methods achieving state-of-the-art performance. While offline methods have the advantage of producing temporally consistent predictions, they are not suitable for real-time scenarios. Conversely, online methods are more practical, but maintaining temporal consistency remains challenging. In this paper, we propose a novel online method for video instance segmentation, called TCOVIS, which fully exploits the temporal information in a video clip. The core of our method consists of a global instance assignment strategy and a spatio-temporal enhancement module, which improve the temporal consistency of the features in two complementary ways. Specifically, we perform global optimal matching between the predictions and the ground truth across the whole video clip, and supervise the model with this global objective. We also capture spatial features and aggregate them with semantic features across frames, realizing spatio-temporal enhancement. We evaluate our method on four widely adopted VIS benchmarks, namely YouTube-VIS 2019/2021/2022 and OVIS, and achieve state-of-the-art performance on all benchmarks without bells and whistles. For instance, on YouTube-VIS 2021, TCOVIS achieves 49.5 AP and 61.3 AP with ResNet-50 and Swin-L backbones, respectively.
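
To make the clip-level assignment idea concrete, the sketch below illustrates one plausible reading of it, not the authors' actual implementation: per-frame matching costs between predictions and ground-truth instances are accumulated over all frames of a clip, and a single Hungarian matching then fixes one prediction-to-ground-truth pairing that every frame is supervised with. The function name clip_global_assignment and the toy cost matrices are hypothetical.

    # Minimal sketch (hypothetical, assumes per-frame cost matrices are already computed)
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def clip_global_assignment(per_frame_costs):
        """per_frame_costs: list of [num_preds, num_gts] cost matrices, one per frame.
        Returns (pred_indices, gt_indices) for a single clip-wide assignment."""
        # Sum the costs across frames so that one matching is optimal for the whole clip,
        # rather than matching each frame independently.
        total_cost = np.sum(np.stack(per_frame_costs, axis=0), axis=0)
        pred_idx, gt_idx = linear_sum_assignment(total_cost)
        return pred_idx, gt_idx

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        # Toy example: 5 predicted instance queries, 3 ground-truth instances, 4 frames.
        costs = [rng.random((5, 3)) for _ in range(4)]
        pred_idx, gt_idx = clip_global_assignment(costs)
        print(list(zip(pred_idx.tolist(), gt_idx.tolist())))

Because the pairing is computed once per clip, the same prediction is consistently tied to the same object identity in every frame it is supervised on, which is the temporal-consistency benefit the abstract describes.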

Related Material


[pdf] [arXiv]
[bibtex]
@InProceedings{Li_2023_ICCV,
    author    = {Li, Junlong and Yu, Bingyao and Rao, Yongming and Zhou, Jie and Lu, Jiwen},
    title     = {TCOVIS: Temporally Consistent Online Video Instance Segmentation},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {1097-1107}
}