SegFlow: Joint Learning for Video Object Segmentation and Optical Flow

Jingchun Cheng, Yi-Hsuan Tsai, Shengjin Wang, Ming-Hsuan Yang; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 686-695

Abstract


This paper proposes an end-to-end trainable network, SegFlow, for simultaneously predicting pixel-wise object segmentation and optical flow in videos. SegFlow has two branches in which useful information for object segmentation and optical flow is propagated bidirectionally within a unified framework. The segmentation branch is based on a fully convolutional network, which has proven effective in image segmentation tasks, and the optical flow branch takes advantage of the FlowNet model. The unified framework is trained iteratively offline to learn a generic notion of objects, and fine-tuned online for specific target objects. Extensive experiments on both video object segmentation and optical flow datasets demonstrate that introducing optical flow improves segmentation performance and vice versa, and that the proposed method performs favorably against state-of-the-art algorithms.
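To make the two-branch idea concrete, below is a minimal PyTorch sketch of a network in the spirit of SegFlow: a segmentation branch and a flow branch encode their inputs separately, exchange features, and each decoder consumes features from both branches. The layer sizes, module names (SegFlowSketch, seg_enc, flow_enc, etc.), and the simple concatenation-based fusion are illustrative assumptions; they do not reproduce the paper's actual FCN and FlowNet architectures or its offline/online training scheme.

# Hypothetical two-branch sketch; not the authors' implementation.
import torch
import torch.nn as nn

class SegFlowSketch(nn.Module):
    def __init__(self, feat_dim=64):
        super().__init__()
        # Segmentation encoder: takes a single RGB frame.
        self.seg_enc = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_dim, feat_dim, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Flow encoder: takes two stacked frames (6 channels), FlowNetS-style input.
        self.flow_enc = nn.Sequential(
            nn.Conv2d(6, feat_dim, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_dim, feat_dim, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Each decoder sees the concatenation of both branches' features,
        # so information propagates in both directions.
        self.seg_dec = nn.Sequential(
            nn.Conv2d(2 * feat_dim, feat_dim, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_dim, 1, 1),   # per-pixel foreground logit
        )
        self.flow_dec = nn.Sequential(
            nn.Conv2d(2 * feat_dim, feat_dim, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_dim, 2, 1),   # per-pixel (u, v) flow
        )
        self.up = nn.Upsample(scale_factor=4, mode='bilinear', align_corners=False)

    def forward(self, frame_t, frame_t1):
        seg_feat = self.seg_enc(frame_t)
        flow_feat = self.flow_enc(torch.cat([frame_t, frame_t1], dim=1))
        fused = torch.cat([seg_feat, flow_feat], dim=1)
        seg_logits = self.up(self.seg_dec(fused))   # segmentation for frame_t
        flow = self.up(self.flow_dec(fused))        # flow from frame_t to frame_t1
        return seg_logits, flow

if __name__ == "__main__":
    net = SegFlowSketch()
    f0, f1 = torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64)
    seg, flow = net(f0, f1)
    print(seg.shape, flow.shape)  # (1, 1, 64, 64) and (1, 2, 64, 64)

In this sketch, joint training would simply sum a segmentation loss on seg_logits and a flow loss on flow; the paper instead trains the two branches iteratively offline and fine-tunes online per target object.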

Related Material


[bibtex]
@InProceedings{Cheng_2017_ICCV,
author = {Cheng, Jingchun and Tsai, Yi-Hsuan and Wang, Shengjin and Yang, Ming-Hsuan},
title = {SegFlow: Joint Learning for Video Object Segmentation and Optical Flow},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {Oct},
year = {2017}
}