RATM: Recurrent Attentive Tracking Model

Samira Ebrahimi Kahou, Vincent Michalski, Roland Memisevic, Christopher Pal, Pascal Vincent; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2017, pp. 10-19

Abstract


We present an attention-based modular neural framework for computer vision. The framework uses a soft attention mechanism that allows models to be trained with gradient descent. It consists of three modules: a recurrent attention module controlling where to look in an image or video frame, a feature-extraction module providing a representation of what is seen, and an objective module formalizing why the model learns its attentive behavior. The attention module allows the model to focus computation on task-related information in the input. We apply the framework to several object tracking tasks and explore various design choices. We experiment with three data sets: bouncing balls, moving digits, and the real-world KTH data set. The proposed Recurrent Attentive Tracking Model (RATM) performs well on all three tasks and can generalize to related but previously unseen sequences from a challenging tracking data set.
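
The following is a minimal, illustrative NumPy sketch of the modular structure described in the abstract: a recurrent module that emits glimpse parameters (where), a differentiable Gaussian-filterbank read operation in the spirit of soft attention (what), and a placeholder for the objective (why). All names, shapes, and the linear parameter read-out are assumptions made for illustration, not the authors' exact architecture.

# Minimal sketch of a recurrent soft-attention tracker, assuming a
# DRAW-style Gaussian filterbank read and simple placeholder modules.
import numpy as np

def gaussian_filterbank(center, stride, sigma, glimpse_size, img_size):
    """Return a (glimpse_size, img_size) matrix of 1-D Gaussian filters."""
    # Centers of the glimpse rows/columns in image coordinates.
    offsets = (np.arange(glimpse_size) - glimpse_size / 2.0 + 0.5) * stride
    mu = center + offsets                                # (glimpse_size,)
    coords = np.arange(img_size)                         # (img_size,)
    F = np.exp(-((coords[None, :] - mu[:, None]) ** 2) / (2.0 * sigma ** 2))
    return F / (F.sum(axis=1, keepdims=True) + 1e-8)

def read_glimpse(frame, gx, gy, stride, sigma, glimpse_size=16):
    """Soft-crop a glimpse_size x glimpse_size window centered at (gx, gy)."""
    H, W = frame.shape
    Fy = gaussian_filterbank(gy, stride, sigma, glimpse_size, H)
    Fx = gaussian_filterbank(gx, stride, sigma, glimpse_size, W)
    return Fy @ frame @ Fx.T                             # (glimpse_size, glimpse_size)

rng = np.random.default_rng(0)
H = W = 64
hidden = np.zeros(128)
# Hypothetical parameters: recurrent update and glimpse-parameter read-out.
W_h = rng.normal(scale=0.01, size=(128, 128))
W_in = rng.normal(scale=0.01, size=(128, 16 * 16))
W_att = rng.normal(scale=0.01, size=(4, 128))            # -> (gx, gy, log stride, log sigma)

frames = rng.random((5, H, W))                           # stand-in video clip
for frame in frames:
    gx, gy, log_stride, log_sigma = W_att @ hidden
    glimpse = read_glimpse(frame,
                           gx=W / 2 + gx, gy=H / 2 + gy,
                           stride=np.exp(log_stride),
                           sigma=np.exp(log_sigma) + 0.1)
    feats = glimpse.reshape(-1)                          # placeholder feature module
    hidden = np.tanh(W_h @ hidden + W_in @ feats)        # recurrent "where" update
    # An objective module (e.g. a tracking loss on the glimpse) would drive
    # training of all parameters by backpropagation through this loop.

Because the Gaussian read is differentiable in the glimpse parameters, gradients from such an objective can flow back through the attention window into the recurrent module, which is what makes end-to-end training with gradient descent possible in this kind of framework.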

Related Material


BibTeX:
@InProceedings{Kahou_2017_CVPR_Workshops,
author = {Ebrahimi Kahou, Samira and Michalski, Vincent and Memisevic, Roland and Pal, Christopher and Vincent, Pascal},
title = {RATM: Recurrent Attentive Tracking Model},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {July},
year = {2017}
}