Joint Inductive and Transductive Learning for Video Object Segmentation

Yunyao Mao, Ning Wang, Wengang Zhou, Houqiang Li; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 9670-9679

Abstract


Semi-supervised video object segmentation is the task of segmenting the target object in a video sequence given only a mask annotation in the first frame. The limited information available makes it an extremely challenging task. Most previous best-performing methods adopt matching-based transductive reasoning or online inductive learning. Nevertheless, they are either less discriminative for similar instances or make insufficient use of spatio-temporal information. In this work, we propose to integrate transductive and inductive learning into a unified framework to exploit the complementarity between them for accurate and robust video object segmentation. The proposed approach consists of two functional branches. The transduction branch adopts a lightweight transformer architecture to aggregate rich spatio-temporal cues, while the induction branch performs online inductive learning to obtain discriminative target information. To bridge these two diverse branches, a two-head label encoder is introduced to learn a suitable target prior for each of them. The generated mask encodings are further forced to be disentangled to better retain their complementarity. Extensive experiments on several prevalent benchmarks show that, without the need for synthetic training data, the proposed approach sets a series of new state-of-the-art records. Code is available at https://github.com/maoyunyao/JOINT.
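The two-branch idea in the abstract can be illustrated with a minimal numpy sketch. This is not the paper's implementation: the function names, feature shapes, and the simple sum-fusion at the end are all illustrative assumptions. The transduction branch is sketched as attention-based label propagation from memory frames (a stand-in for the paper's lightweight transformer), and the induction branch as a linear discriminative filter that would be fitted online to the target.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def transduction_branch(query_feat, memory_feats, memory_masks):
    # Matching-based transductive reasoning (illustrative):
    # each query pixel attends to memory pixels and aggregates
    # their mask encodings.
    #   query_feat:   (Nq, C)  features of the current frame
    #   memory_feats: (Nm, C)  features of past frames
    #   memory_masks: (Nm, D)  mask encodings of past frames
    scale = np.sqrt(query_feat.shape[1])
    attn = softmax(query_feat @ memory_feats.T / scale)
    return attn @ memory_masks  # (Nq, D)

def induction_branch(query_feat, target_filter):
    # Online inductive learning (illustrative): apply a
    # discriminative filter (C, D) learned on the annotated frame.
    return query_feat @ target_filter  # (Nq, D)

def joint_segment(query_feat, memory_feats, memory_masks, target_filter):
    # Fuse the two complementary predictions; a plain sum is an
    # assumption, the paper fuses them with learned components.
    return (transduction_branch(query_feat, memory_feats, memory_masks)
            + induction_branch(query_feat, target_filter))
```

A hypothetical two-head label encoder would produce `memory_masks` and the supervision for `target_filter` from the same ground-truth mask, with the two encodings pushed apart so each branch keeps its complementary role.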

Related Material


[bibtex]
@InProceedings{Mao_2021_ICCV,
    author    = {Mao, Yunyao and Wang, Ning and Zhou, Wengang and Li, Houqiang},
    title     = {Joint Inductive and Transductive Learning for Video Object Segmentation},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {9670-9679}
}