HTML: Hybrid Temporal-scale Multimodal Learning Framework for Referring Video Object Segmentation

Mingfei Han, Yali Wang, Zhihui Li, Lina Yao, Xiaojun Chang, Yu Qiao; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 13414-13423

Abstract
Referring Video Object Segmentation (RVOS) aims to segment an object instance from a given video according to a textual description of that object. In the open world, however, object descriptions are diverse in content and flexible in length. This leads to the key difficulty in RVOS: different descriptions of different objects correspond to different temporal scales in the video, which most existing approaches ignore by sampling frames with a single stride. To tackle this problem, we propose a concise Hybrid Temporal-scale Multimodal Learning (HTML) framework, which effectively aligns lingual and visual features to discover core object semantics in the video by learning multimodal interaction hierarchically across temporal scales. More specifically, we introduce a novel inter-scale multimodal perception module, in which language queries dynamically interact with visual features across temporal scales. By passing video context among different scales, it effectively reduces confusion among complex objects. Finally, we conduct extensive experiments on the widely used benchmarks, including Ref-Youtube-VOS, Ref-DAVIS17, A2D-Sentences and JHMDB-Sentences, where our HTML achieves state-of-the-art performance on all these datasets.
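The inter-scale idea described above can be sketched as follows: frames are sampled at several temporal strides, and a language query attends to the visual features at each scale, carrying the attended context from one scale to the next. This is a minimal illustrative sketch; the stride set `(1, 2, 4)`, the helper names (`sample_strides`, `cross_attend`), and the additive context update are assumptions for illustration, not the paper's exact implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sample_strides(frames, strides):
    # frames: (T, D) per-frame features; return one (T_s, D) view per stride,
    # i.e. the same video seen at several temporal scales.
    return [frames[::s] for s in strides]

def cross_attend(query, feats):
    # Scaled dot-product attention of one language query (D,) over
    # per-frame visual features (T_s, D); returns a context vector (D,).
    scores = feats @ query / np.sqrt(query.shape[0])
    weights = softmax(scores)
    return weights @ feats

def hybrid_temporal_attention(query, frames, strides=(1, 2, 4)):
    # Sketch of inter-scale multimodal perception (assumed form): the query
    # attends to each temporal scale in turn, and the attended video context
    # is folded back into the query before it moves to the next scale.
    q = query
    for feats in sample_strides(frames, strides):
        q = q + cross_attend(q, feats)  # pass video context across scales
    return q
```

A usage example: with `frames` of shape `(T, D)` and a pooled sentence embedding as `query`, the returned vector is a language-conditioned video representation that has aggregated evidence at every temporal stride, rather than at a single fixed sampling rate.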

Related Material
@InProceedings{Han_2023_ICCV,
    author    = {Han, Mingfei and Wang, Yali and Li, Zhihui and Yao, Lina and Chang, Xiaojun and Qiao, Yu},
    title     = {HTML: Hybrid Temporal-scale Multimodal Learning Framework for Referring Video Object Segmentation},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {13414-13423}
}