Video Propagation Networks

Varun Jampani, Raghudeep Gadde, Peter V. Gehler; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 451-461

Abstract


We propose a technique that propagates information forward through video data. The method is conceptually simple and can be applied to tasks that require the propagation of structured information, such as semantic labels, based on video content. We propose a "Video Propagation Network" that processes video frames in an adaptive manner. The model is applied online: it propagates information forward without the need to access future frames. In particular, we combine two components: a temporal bilateral network for dense, video-adaptive filtering, followed by a spatial network that refines features and increases flexibility. We present experiments on video object segmentation and semantic video segmentation and show improved performance compared to the best previous task-specific methods, while having favorable runtime. Additionally, we demonstrate our approach on an example regression task of color propagation in a grayscale video.
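As a rough illustration of the dense bilateral filtering idea behind the temporal component, the sketch below propagates soft labels from pixels of previous frames to pixels of the current frame, weighting each contribution by a Gaussian on distance in a joint feature space (e.g. color, position, and time). This is a simplified, brute-force stand-in: the paper's bilateral network uses learned filters on a sparse lattice for efficiency, and all names and the `sigma` parameter here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def bilateral_propagate(prev_feat, prev_labels, cur_feat, sigma=0.5):
    """Propagate per-pixel soft labels from previous-frame pixels to
    current-frame pixels via Gaussian bilateral weights.

    prev_feat:   (N, D) features of previous-frame pixels (e.g. color+position+time)
    prev_labels: (N, C) soft label vectors for those pixels
    cur_feat:    (M, D) features of current-frame pixels
    Returns:     (M, C) propagated soft labels for the current frame.
    """
    # Pairwise squared distances between current and previous pixels
    d2 = ((cur_feat[:, None, :] - prev_feat[None, :, :]) ** 2).sum(-1)
    # Gaussian affinity in the joint feature space ("video-adaptive" weights)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    # Normalize so each output pixel receives a convex combination of labels
    w /= w.sum(axis=1, keepdims=True)
    return w @ prev_labels
```

Because the weights depend on the video's own features rather than a fixed spatial neighborhood, labels follow appearance: a current-frame pixel that looks like previously labeled foreground inherits the foreground label even if the object has moved.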

Related Material


@InProceedings{Jampani_2017_CVPR,
author = {Jampani, Varun and Gadde, Raghudeep and Gehler, Peter V.},
title = {Video Propagation Networks},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {July},
year = {2017}
}