STCT: Sequentially Training Convolutional Networks for Visual Tracking

Lijun Wang, Wanli Ouyang, Xiaogang Wang, Huchuan Lu; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 1373-1381


Due to the limited amount of training samples, fine-tuning pre-trained deep models online is prone to over-fitting. In this paper, we propose a sequential training method for convolutional neural networks (CNNs) to effectively transfer pre-trained deep features to online applications. We regard a CNN as an ensemble, with each channel of the output feature map serving as an individual base learner. Each base learner is trained using a different loss criterion to reduce correlation and avoid over-training. To achieve the best ensemble online, the base learners are sequentially sampled into the ensemble via importance sampling. To further improve the robustness of each base learner, we propose to train the convolutional layers with random binary masks, which serve as a regularization enforcing each base learner to focus on different input features. The proposed online training method is applied to the visual tracking problem by transferring deep features trained on massive annotated visual data, and it is shown to significantly improve tracking performance. Extensive experiments conducted on two challenging benchmark data sets demonstrate that our tracking algorithm outperforms state-of-the-art methods by a considerable margin.
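The channel-as-base-learner idea described above can be illustrated with a minimal numpy sketch. This is not the authors' implementation: the function names, the fixed per-learner binary masks, and the uniform ensemble weights are all illustrative assumptions; in the paper the masks regularize training of the convolutional layers and the ensemble weights come from the sequential importance-sampling step.

```python
import numpy as np

rng = np.random.default_rng(0)

def masked_conv_channel(feat, kernel, mask):
    """One hypothetical base learner: a 'valid' 2-D convolution of a
    masked input feature map.

    feat   : (H, W) input feature map
    kernel : (k, k) filter of this base learner
    mask   : (H, W) fixed random binary mask (illustrating the proposed
             regularizer that makes each learner attend to a different
             random subset of input features)
    """
    x = feat * mask
    H, W = x.shape
    k = kernel.shape[0]
    out = np.empty((H - k + 1, W - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + k, j:j + k] * kernel)
    return out

def ensemble_response(feat, kernels, masks, weights):
    """Weighted sum of the base learners' response maps; in the paper,
    which learners enter this sum (and with what weight) is decided
    online by sequential importance sampling."""
    maps = [masked_conv_channel(feat, k, m) for k, m in zip(kernels, masks)]
    return sum(w * r for w, r in zip(weights, maps))

# Example: three base learners on an 8x8 feature map.
feat = rng.standard_normal((8, 8))
kernels = [rng.standard_normal((3, 3)) for _ in range(3)]
masks = [(rng.random((8, 8)) < 0.7).astype(float) for _ in range(3)]
response = ensemble_response(feat, kernels, masks, [1 / 3, 1 / 3, 1 / 3])
```

Because each mask zeroes a different random subset of the input, the individual response maps are decorrelated, which is the intuition behind using the masks as a regularizer for the ensemble.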

Related Material

@InProceedings{Wang_2016_CVPR,
  author    = {Wang, Lijun and Ouyang, Wanli and Wang, Xiaogang and Lu, Huchuan},
  title     = {STCT: Sequentially Training Convolutional Networks for Visual Tracking},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2016}
}