Learning Attentions: Residual Attentional Siamese Network for High Performance Online Visual Tracking

Qiang Wang, Zhu Teng, Junliang Xing, Jin Gao, Weiming Hu, Stephen Maybank; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 4854-4863

Abstract


Offline training for object tracking has recently shown great potential for balancing tracking accuracy and speed. However, it remains difficult to adapt an offline-trained model to a target tracked online. This work presents a Residual Attentional Siamese Network (RASNet) for high-performance object tracking. The RASNet model reformulates the correlation filter within a Siamese tracking framework and introduces several kinds of attention mechanisms to adapt the model without updating it online. In particular, by exploiting the offline-trained general attention, the target-adapted residual attention, and the channel-favored feature attention, RASNet not only mitigates the over-fitting problem in deep network training, but also enhances its discriminative capacity and adaptability by separating representation learning from discriminator learning. The proposed deep architecture is trained end to end and takes full advantage of rich spatio-temporal information to achieve robust visual tracking. Experimental results on two recent benchmarks, OTB-2015 and VOT2017, show that the RASNet tracker achieves state-of-the-art tracking accuracy while running at more than 80 frames per second.
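To make the mechanism described above concrete, here is a minimal PyTorch sketch of a Siamese cross-correlation head whose template features are re-weighted by a learned general spatial prior, a target-adapted residual term, and per-channel weights before being matched against the search region. All module names, feature shapes, and the squeeze-and-excitation form of the channel attention are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionalSiameseHead(nn.Module):
    """Sketch of an attention-weighted template/search cross-correlation."""

    def __init__(self, channels: int = 256, template_size: int = 6):
        super().__init__()
        # "General" attention: a spatial prior over the template, learned
        # offline and shared by all targets (assumed here to be a free
        # parameter of the network).
        self.general_attention = nn.Parameter(
            torch.ones(1, 1, template_size, template_size))
        # "Residual" attention: a small branch predicting a target-specific
        # correction that is added to the general prior.
        self.residual_branch = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=1), nn.Sigmoid())
        # "Channel" attention: per-channel weights in squeeze-and-excitation
        # style (an assumption; the paper's exact form may differ).
        self.channel_branch = nn.Sequential(
            nn.Linear(channels, channels // 4), nn.ReLU(inplace=True),
            nn.Linear(channels // 4, channels), nn.Sigmoid())

    def forward(self, template_feat: torch.Tensor,
                search_feat: torch.Tensor) -> torch.Tensor:
        # template_feat: (B, C, h, w); search_feat: (B, C, H, W)
        b, c, h, w = template_feat.shape
        # Spatial attention = offline general prior + target-adapted residual.
        spatial = self.general_attention + self.residual_branch(template_feat)
        # Channel attention from globally pooled template features.
        chan = self.channel_branch(template_feat.mean(dim=(2, 3)))
        weighted = template_feat * spatial * chan.view(b, c, 1, 1)
        # Batched cross-correlation: each attended template serves as a
        # convolution kernel over its own search region (grouped conv trick).
        response = F.conv2d(
            search_feat.reshape(1, b * c, *search_feat.shape[-2:]),
            weighted, groups=b)
        return response.reshape(b, 1, *response.shape[-2:])

# Example: a SiamFC-style backbone typically maps a 255x255 search region
# and a 127x127 exemplar to roughly 22x22 and 6x6 feature maps.
if __name__ == "__main__":
    head = AttentionalSiameseHead(channels=256, template_size=6)
    z = torch.randn(2, 256, 6, 6)    # exemplar features
    x = torch.randn(2, 256, 22, 22)  # search-region features
    print(head(z, x).shape)          # torch.Size([2, 1, 17, 17])

The grouped convolution performs one cross-correlation per (template, search) pair in the batch, the standard way to batch SiamFC-style matching; the attention weights modulate only the template branch, which is consistent with adapting the matcher without online model updates.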

Related Material


[pdf] [Supp]
[bibtex]
@InProceedings{Wang_2018_CVPR,
author = {Wang, Qiang and Teng, Zhu and Xing, Junliang and Gao, Jin and Hu, Weiming and Maybank, Stephen},
title = {Learning Attentions: Residual Attentional Siamese Network for High Performance Online Visual Tracking},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2018},
pages = {4854-4863}
}