Flow Guided Recurrent Neural Encoder for Video Salient Object Detection
Guanbin Li, Yuan Xie, Tianhao Wei, Keze Wang, Liang Lin; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 3243-3252
Abstract
Image saliency detection has recently witnessed significant progress due to deep convolutional neural networks. However, extending state-of-the-art saliency detectors from image to video is challenging. The performance of salient object detection suffers from object or camera motion and from dramatic changes in appearance contrast in videos. In this paper, we present the flow guided recurrent neural encoder (FGRNE), an accurate and end-to-end learning framework for video salient object detection. It works by enhancing the temporal coherence of per-frame features, exploiting both motion information in terms of optical flow and sequential feature evolution encoding in terms of LSTM networks. It can be considered a universal framework for extending any FCN-based static saliency detector to video salient object detection. Extensive experimental results verify the effectiveness of each part of FGRNE and confirm that our proposed method significantly outperforms state-of-the-art methods on the public benchmarks of DAVIS and FBMS.
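The abstract names two mechanisms: warping a neighboring frame's feature map toward the current frame with optical flow, and aggregating the warped features with a recurrent (LSTM-style) encoder. The sketch below is not the authors' released code; it is a minimal PyTorch-style illustration of those two ideas, in which the function and class names (flow_warp, RecurrentEncoder), the tensor shapes, and the ConvLSTM-style cell are assumptions made for exposition.

import torch
import torch.nn as nn
import torch.nn.functional as F

def flow_warp(feat, flow):
    # Warp a feature map (N, C, H, W) with a flow field (N, 2, H, W), given in pixels.
    n, _, h, w = feat.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().to(feat.device)   # base grid (2, H, W)
    coords = grid.unsqueeze(0) + flow                              # displaced coords (N, 2, H, W)
    # Normalize coordinates to [-1, 1] as required by grid_sample.
    coords_x = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    sample_grid = torch.stack((coords_x, coords_y), dim=3)         # (N, H, W, 2)
    return F.grid_sample(feat, sample_grid, align_corners=True)

class RecurrentEncoder(nn.Module):
    # Hypothetical ConvLSTM-style cell that aggregates warped per-frame features over time.
    def __init__(self, channels):
        super().__init__()
        self.gates = nn.Conv2d(2 * channels, 4 * channels, kernel_size=3, padding=1)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = self.gates(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, (h, c)

In use, one would warp the previous frame's backbone features with the estimated flow, feed the result to the recurrent cell (initialized with zero hidden and cell states), and pass the aggregated features to the saliency prediction head of the chosen static detector.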
Related Material
[pdf] [bibtex]
@InProceedings{Li_2018_CVPR,
author = {Li, Guanbin and Xie, Yuan and Wei, Tianhao and Wang, Keze and Lin, Liang},
title = {Flow Guided Recurrent Neural Encoder for Video Salient Object Detection},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2018}
}