Fixation Prediction in Videos Using Unsupervised Hierarchical Features

Julius Wang, Hamed R. Tavakoli, Jorma Laaksonen; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2017, pp. 50-57

Abstract


This paper presents a framework for saliency estimation and fixation prediction in videos. The proposed framework is based on a hierarchical feature representation obtained by stacking convolutional layers of independent subspace analysis (ISA) filters. The feature learning is thus unsupervised and independent of the task. To compute saliency, we then employ a multiresolution architecture that exploits both local and global saliency. That is, for a given image, an image pyramid is first built; then, for each resolution, both local and global saliency measures are computed to obtain a saliency map. Integrating the saliency maps over the image pyramid yields the final video saliency. We first show that combining local and global saliency improves the results. We then compare the proposed model with several video saliency models and demonstrate that the proposed framework predicts video saliency effectively, outperforming the other models.
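The multiresolution pipeline described in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it substitutes raw intensity for the learned ISA feature hierarchy, and uses simple stand-ins for the paper's local and global saliency measures (local center-surround contrast and global deviation from the frame mean); the function names, the number of pyramid levels, and the product-based combination are all assumptions.

```python
import numpy as np

def box_blur(img, k=5):
    # Box filter via a 2-D integral image, with edge padding so the
    # output keeps the input's shape.
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    c = np.cumsum(np.cumsum(p, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))  # zero row/col so window sums index cleanly
    h, w = img.shape
    return (c[k:k + h, k:k + w] - c[:h, k:k + w]
            - c[k:k + h, :w] + c[:h, :w]) / (k * k)

def downsample2(img):
    # Halve resolution by 2x2 average pooling (one pyramid step).
    h, w = img.shape[0] - img.shape[0] % 2, img.shape[1] - img.shape[1] % 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def local_saliency(img, k=7):
    # Local cue (assumed): deviation from the local neighbourhood mean.
    return np.abs(img - box_blur(img, k))

def global_saliency(img):
    # Global cue (assumed): deviation from the frame-wide mean.
    return np.abs(img - img.mean())

def upsample_to(sal, shape):
    # Nearest-neighbour resize back to the original resolution.
    ys = (np.arange(shape[0]) * sal.shape[0] / shape[0]).astype(int)
    xs = (np.arange(shape[1]) * sal.shape[1] / shape[1]).astype(int)
    return sal[np.ix_(ys, xs)]

def multiresolution_saliency(frame, levels=3):
    # Build an image pyramid, compute a local-x-global saliency map at
    # each level, upsample the maps, and average them into one map.
    frame = frame.astype(float)
    shape, maps, img = frame.shape, [], frame
    for _ in range(levels):
        s = local_saliency(img) * global_saliency(img)
        maps.append(upsample_to(s, shape))
        img = downsample2(img)
    sal = np.mean(maps, axis=0)
    rng = sal.max() - sal.min()
    return (sal - sal.min()) / rng if rng > 0 else sal
```

In the paper the per-level features come from the stacked ISA layers rather than raw intensity, but the pyramid-then-integrate control flow is the same.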

Related Material


[bibtex]
@InProceedings{Wang_2017_CVPR_Workshops,
author = {Wang, Julius and Tavakoli, Hamed R. and Laaksonen, Jorma},
title = {Fixation Prediction in Videos Using Unsupervised Hierarchical Features},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {July},
year = {2017},
pages = {50-57}
}