Spatio-Temporal Attention Network for Video Instance Segmentation

Xiaoyu Liu, Haibing Ren, Tingmeng Ye; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2019

Abstract


In this paper, we propose a spatio-temporal attention network for video instance segmentation. The network estimates a global correlation map between successive frames and transforms it into an attention map. Augmented with this attention information, the features strengthen the instance responses for the pre-defined categories, which in turn substantially improves detection, segmentation, and tracking accuracy. Experimental results show that, combined with MaskTrack R-CNN, the method improves video instance segmentation accuracy from 0.293 to 0.400 on the YouTube-VIS test dataset with a single model. Our method took 6th place in the video instance segmentation track of the 2nd Large-scale Video Object Segmentation Challenge.
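The abstract does not give implementation details, but the described mechanism (a global correlation map between successive frames, turned into an attention map that re-weights and enhances the current frame's features) resembles non-local-style attention. Below is a minimal NumPy sketch under that assumption; the function name `spatio_temporal_attention`, the residual fusion, and the feature shapes are all illustrative guesses, not the authors' actual architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatio_temporal_attention(feat_cur, feat_prev):
    """Hypothetical sketch: attend from each position in the current frame's
    feature map (C, H, W) to every position in the previous frame's, then
    fuse the attended context back into the current features."""
    C, H, W = feat_cur.shape
    cur = feat_cur.reshape(C, H * W)    # (C, N) current-frame features
    prev = feat_prev.reshape(C, H * W)  # (C, N) previous-frame features
    # global correlation map: dot-product similarity between every
    # current-frame position and every previous-frame position
    corr = cur.T @ prev                 # (N, N)
    # transform the correlation map into an attention map
    attn = softmax(corr, axis=-1)       # rows sum to 1 over prev positions
    # aggregate previous-frame features under the attention weights
    aggregated = prev @ attn.T          # (C, N)
    # residual fusion: enhance current features with temporal context
    out = cur + aggregated
    return out.reshape(C, H, W)
```

In a real network the features would come from a backbone (e.g. the MaskTrack R-CNN feature pyramid), the dot products would typically be computed on learned query/key projections, and the fused features would feed the detection, segmentation, and tracking heads.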

Related Material


[bibtex]
@InProceedings{Liu_2019_ICCV,
author = {Liu, Xiaoyu and Ren, Haibing and Ye, Tingmeng},
title = {Spatio-Temporal Attention Network for Video Instance Segmentation},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
month = {Oct},
year = {2019}
}