Temporal Collection and Distribution for Referring Video Object Segmentation

Jiajin Tang, Ge Zheng, Sibei Yang; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 15466-15476

Abstract


Referring video object segmentation aims to segment a referent throughout a video sequence according to a natural language expression. It requires aligning the natural language expression with the objects' motions and their dynamic associations at the global video level, while segmenting objects at the frame level. To achieve this goal, we propose to simultaneously maintain a global referent token and a sequence of object queries: the former is responsible for capturing the video-level referent according to the language expression, while the latter serves to better locate and segment objects within each frame. Furthermore, to explicitly capture object motions and perform spatial-temporal cross-modal reasoning over objects, we propose a novel temporal collection-distribution mechanism for interaction between the global referent token and the object queries. Specifically, the temporal collection mechanism collects global information for the referent token, progressing from the object queries to the temporal motions to the language expression. In turn, the temporal distribution first distributes the referent token into a referent sequence across all frames and then performs efficient cross-frame reasoning between the referent sequence and the object queries in every frame. Experimental results show that our method consistently and significantly outperforms state-of-the-art methods on all benchmarks.
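
The collection-distribution mechanism described above lends itself to a compact sketch. The following PyTorch snippet is a minimal illustration, not the authors' implementation: the module name (TemporalCollectionDistribution), the tensor shapes, and the use of standard cross-attention for both steps are all assumptions made for exposition. The collection step lets the global referent token attend over all object queries and language tokens; the distribution step broadcasts the token into a per-frame referent sequence that each frame's object queries then reason against.

```python
# Illustrative sketch only; names, shapes, and attention choices are
# assumptions, not the paper's released code.
import torch
import torch.nn as nn

class TemporalCollectionDistribution(nn.Module):
    def __init__(self, d_model=256, n_heads=8):
        super().__init__()
        # Collection: the referent token attends over object queries + language.
        self.collect_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Distribution: per-frame object queries attend to the frame's referent.
        self.distribute_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, referent, queries, lang):
        """
        referent: (B, 1, D)     global referent token
        queries:  (B, T, N, D)  object queries, T frames x N queries per frame
        lang:     (B, L, D)     language expression features
        """
        B, T, N, D = queries.shape
        # --- Temporal collection --------------------------------------
        # Flatten the frame axis so the referent token gathers global
        # information from all queries across time and the language tokens.
        context = torch.cat([queries.reshape(B, T * N, D), lang], dim=1)
        collected, _ = self.collect_attn(referent, context, context)
        referent = self.norm1(referent + collected)
        # --- Temporal distribution ------------------------------------
        # Broadcast the referent token into a referent sequence (one per
        # frame), then update each frame's object queries against it.
        ref_seq = referent.expand(B, T, D).reshape(B * T, 1, D)
        q = queries.reshape(B * T, N, D)
        updated, _ = self.distribute_attn(q, ref_seq, ref_seq)
        queries = self.norm2(q + updated).reshape(B, T, N, D)
        return referent, queries
```

In practice the paper's distribution step performs richer cross-frame reasoning between the referent sequence and the queries before segmentation heads consume them; this sketch compresses that interaction into a single attention pass per frame.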

Related Material


[pdf] [arXiv]
[bibtex]
@InProceedings{Tang_2023_ICCV,
    author    = {Tang, Jiajin and Zheng, Ge and Yang, Sibei},
    title     = {Temporal Collection and Distribution for Referring Video Object Segmentation},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {15466-15476}
}