Temporally Steered Gaussian Attention for Video Understanding

Shagan Sah, Thang Nguyen, Miguel Dominguez, Felipe Petroski Such, Raymond Ptucha; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2017, pp. 33-41

Abstract


Recent advances in video understanding are enabling incredible developments in video search, summarization, automatic captioning and human-computer interaction. Attention mechanisms are a powerful way to steer focus onto different sections of the video. Existing mechanisms are driven by prior training probabilities and require input instances of identical temporal duration. We introduce an intuitive video understanding framework which combines continuous attention mechanisms over a family of Gaussian distributions with a hierarchy-based video representation. The hierarchical framework enables efficient abstract temporal representations of video. Video attributes steer the attention mechanism intelligently, independent of video length. Our fully learnable end-to-end approach helps predict salient temporal regions of actions/objects in the video. We demonstrate state-of-the-art captioning results on the popular MSVD, MSR-VTT and M-VAD video datasets.
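To make the core idea concrete, the following is a minimal sketch of continuous Gaussian temporal attention: frame features are weighted by a Gaussian over normalized time, so the same parameterization applies to videos of any length. This is an illustration only, not the authors' exact formulation; in the paper the Gaussian parameters (`mu`, `sigma` here) would be predicted end-to-end rather than supplied by hand.

```python
import numpy as np

def gaussian_temporal_attention(frames, mu, sigma):
    """Attend over T frame features with a Gaussian in normalized time.

    frames : (T, D) array of per-frame features.
    mu, sigma : Gaussian center and width on [0, 1]; hypothetical
        stand-ins for parameters the model would learn end-to-end.
    Returns a (D,) attended feature vector.
    """
    T = frames.shape[0]
    t = np.linspace(0.0, 1.0, T)              # normalized temporal positions
    w = np.exp(-0.5 * ((t - mu) / sigma) ** 2)
    w /= w.sum()                               # normalize to an attention distribution
    return w @ frames                          # weighted sum of frame features

# Toy usage: 8 frames with 4-dim features, attention centered mid-video.
feats = np.arange(32, dtype=float).reshape(8, 4)
ctx = gaussian_temporal_attention(feats, mu=0.5, sigma=0.1)
```

Because the weights are a function of normalized time rather than absolute frame index, the same `(mu, sigma)` pair selects the same relative portion of a 10-frame clip or a 1000-frame clip, which is the length-independence the abstract emphasizes.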

Related Material


[pdf]
[bibtex]
@InProceedings{Sah_2017_CVPR_Workshops,
author = {Sah, Shagan and Nguyen, Thang and Dominguez, Miguel and Petroski Such, Felipe and Ptucha, Raymond},
title = {Temporally Steered Gaussian Attention for Video Understanding},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {July},
year = {2017}
}