SaltiNet: Scan-Path Prediction on 360 Degree Images Using Saliency Volumes

Marc Assens Reina, Xavier Giro-i-Nieto, Kevin McGuinness, Noel E. O'Connor; Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops, 2017, pp. 2331-2338

Abstract


We introduce SaltiNet, a deep neural network for scanpath prediction trained on 360-degree images. The model is based on a novel temporal-aware representation of saliency information named the saliency volume. The first part of the network is a model trained to generate saliency volumes, whose parameters are fit by back-propagation using a binary cross entropy (BCE) loss over downsampled versions of the saliency volumes. Sampling strategies over these volumes are then used to generate scanpaths over the 360-degree images. Our experiments show the advantages of using saliency volumes and how they can be used for related tasks. Our source code and trained models are available at https://github.com/massens/saliency-360salient-2017.
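To make the sampling idea more concrete, here is a minimal sketch (not the authors' released code) of drawing a scanpath from a saliency volume. It assumes the volume is a NumPy array of shape (T, H, W) whose temporal slices give per-timestep fixation likelihoods; the function name, the one-fixation-per-sampled-slice policy, and the uniform spacing of temporal slices are illustrative assumptions, not the paper's exact sampling strategy.

```python
import numpy as np

def sample_scanpath(saliency_volume, num_fixations=None, rng=None):
    """Sample a sequence of (t, y, x) fixations from a (T, H, W) saliency volume.

    Hypothetical helper: the real sampling strategies are described in the paper
    and released repository; this only illustrates the general mechanism.
    """
    rng = np.random.default_rng() if rng is None else rng
    T, H, W = saliency_volume.shape

    # Assumption: default to one fixation per temporal slice.
    if num_fixations is None:
        num_fixations = T

    scanpath = []
    for i in range(num_fixations):
        # Pick a temporal slice, spread uniformly over the volume (assumption).
        t = min(int(i * T / num_fixations), T - 1)
        slice_ = saliency_volume[t]

        # Normalize the slice into a probability map and sample a spatial location.
        probs = slice_.flatten()
        total = probs.sum()
        probs = probs / total if total > 0 else np.full(H * W, 1.0 / (H * W))
        idx = rng.choice(H * W, p=probs)
        y, x = divmod(idx, W)
        scanpath.append((t, y, x))
    return scanpath

# Example usage with a random stand-in volume of 20 temporal slices.
volume = np.random.rand(20, 48, 96)
print(sample_scanpath(volume, num_fixations=8))
```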

Related Material


[pdf]
[bibtex]
@InProceedings{Reina_2017_ICCV,
author = {Assens Reina, Marc and Giro-i-Nieto, Xavier and McGuinness, Kevin and O'Connor, Noel E.},
title = {SaltiNet: Scan-Path Prediction on 360 Degree Images Using Saliency Volumes},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops},
month = {Oct},
year = {2017}
}