Hierarchical Spatiotemporal Transformers for Video Object Segmentation

Jun-Sang Yoo, Hongjae Lee, Seung-Won Jung; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2023, pp. 795-805

Abstract


This paper presents HST, a novel framework for semi-supervised video object segmentation (VOS). HST extracts image and video features using the Swin Transformer and Video Swin Transformer, inheriting their inductive biases for spatiotemporal locality, which are essential for temporally coherent VOS. To take full advantage of the image and video features, HST casts the image features as a query and the video features as memory. By applying efficient memory read operations at multiple scales, HST produces hierarchical features for the precise reconstruction of object masks. HST is effective and robust in handling challenging scenarios with occluded and fast-moving objects under cluttered backgrounds. In particular, HST-B outperforms state-of-the-art competitors on several popular benchmarks, i.e., YouTube-VOS (85.0%), DAVIS 2017 (85.9%), and DAVIS 2016 (94.0%).
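The abstract describes HST's core mechanism only at a high level: image features act as the query, video features act as the memory, and attention-based memory reads are applied at multiple scales before mask decoding. The sketch below illustrates that query-memory read idea in PyTorch under stated assumptions; it is not the authors' implementation, and the module names (MemoryRead, HierarchicalRead), channel dimensions, and coarse-to-fine fusion strategy are hypothetical choices made for illustration.

```python
# Minimal sketch (NOT the authors' code) of a query-memory read and a
# multi-scale hierarchy of such reads, assuming image features are the
# query and video (memory) features are the key/value source.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MemoryRead(nn.Module):
    """Cross-attention read: current-frame features attend to video memory."""

    def __init__(self, dim, key_dim=64):
        super().__init__()
        self.to_q = nn.Conv2d(dim, key_dim, kernel_size=1)  # query projection (image)
        self.to_k = nn.Conv3d(dim, key_dim, kernel_size=1)  # key projection (video memory)
        self.to_v = nn.Conv3d(dim, dim, kernel_size=1)      # value projection (video memory)
        self.scale = key_dim ** -0.5

    def forward(self, img_feat, vid_feat):
        # img_feat: (B, C, H, W) current-frame features (query)
        # vid_feat: (B, C, T, H, W) memory-frame features
        B, C, H, W = img_feat.shape
        q = self.to_q(img_feat).flatten(2).transpose(1, 2)           # (B, HW, key_dim)
        k = self.to_k(vid_feat).flatten(2)                           # (B, key_dim, THW)
        v = self.to_v(vid_feat).flatten(2).transpose(1, 2)           # (B, THW, C)
        attn = torch.softmax(torch.bmm(q, k) * self.scale, dim=-1)   # (B, HW, THW)
        out = torch.bmm(attn, v).transpose(1, 2).reshape(B, C, H, W)
        return out + img_feat                                        # residual read


class HierarchicalRead(nn.Module):
    """Memory reads at several scales, fused coarse-to-fine for mask decoding."""

    def __init__(self, dims=(96, 192, 384)):  # per-scale channels, fine -> coarse (assumed)
        super().__init__()
        self.reads = nn.ModuleList(MemoryRead(d) for d in dims)
        self.fuse = nn.ModuleList(
            nn.Conv2d(d_coarse + d_fine, d_fine, kernel_size=3, padding=1)
            for d_fine, d_coarse in zip(dims[:-1], dims[1:])
        )

    def forward(self, img_feats, vid_feats):
        # img_feats / vid_feats: lists of per-scale features, ordered fine -> coarse
        read = [r(qf, mf) for r, qf, mf in zip(self.reads, img_feats, vid_feats)]
        x = read[-1]  # start from the coarsest read
        for i in range(len(read) - 2, -1, -1):
            x = F.interpolate(x, size=read[i].shape[-2:],
                              mode="bilinear", align_corners=False)
            x = self.fuse[i](torch.cat([x, read[i]], dim=1))
        return x  # finest-scale features handed to a mask prediction head
```

In this sketch the hierarchy simply upsamples the coarser read and concatenates it with the next finer one; the actual paper may fuse scales differently, and the memory would additionally encode past object masks, which is omitted here for brevity.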

Related Material


@InProceedings{Yoo_2023_ICCV,
    author    = {Yoo, Jun-Sang and Lee, Hongjae and Jung, Seung-Won},
    title     = {Hierarchical Spatiotemporal Transformers for Video Object Segmentation},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
    month     = {October},
    year      = {2023},
    pages     = {795-805}
}