Local Subspace Collaborative Tracking

Lin Ma, Xiaoqin Zhang, Weiming Hu, Junliang Xing, Jiwen Lu, Jie Zhou; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2015, pp. 4301-4309

Abstract


Subspace models have been widely used for appearance-based object tracking. Most existing subspace-based trackers employ a single linear subspace to represent object appearances, which is not accurate enough to model large appearance variations. To address this, this paper presents a local subspace collaborative tracking method for robust visual tracking, in which multiple linear and nonlinear subspaces are learned to better model the nonlinear relationships among object appearances. First, we retain a set of key samples and compute a local linear subspace for each key sample. Then, we construct a hypersphere to represent the local nonlinear subspace of each key sample. The hypersphere of a key sample passes through that sample's local key samples and is tangent to its local linear subspace. In this way, we can represent the nonlinear distribution of the key samples while also approximating the local linear subspace near each key sample, so that the local distributions of the samples are represented more accurately. Experimental results on challenging video sequences demonstrate the effectiveness of our method.
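
The first step described above, fitting a local linear subspace around each key sample, can be sketched roughly as follows. This is a minimal illustration assuming local PCA over each key sample's k nearest neighbors; the function names and parameters are hypothetical, and the paper's exact construction (including the tangent hypersphere) is not reproduced here.

```python
import numpy as np

def local_linear_subspaces(samples, key_indices, k=5, dim=2):
    """For each key sample, fit a local linear subspace via PCA on its
    k nearest neighbors. Hypothetical sketch of the abstract's first step."""
    subspaces = {}
    for idx in key_indices:
        key = samples[idx]
        # Find the k nearest neighbors of the key sample (excluding itself).
        dists = np.linalg.norm(samples - key, axis=1)
        neighbors = np.argsort(dists)[1:k + 1]
        local = samples[neighbors]
        mean = local.mean(axis=0)
        # Principal directions of the centered neighborhood; the top `dim`
        # right singular vectors span the local linear subspace.
        _, _, vt = np.linalg.svd(local - mean, full_matrices=False)
        subspaces[idx] = (mean, vt[:dim])
    return subspaces

def project(x, mean, basis):
    """Project a sample onto a local subspace and map it back to input space."""
    return mean + (x - mean) @ basis.T @ basis
```

A candidate appearance could then be scored, for example, by its reconstruction error `np.linalg.norm(x - project(x, mean, basis))` against the nearest key sample's subspace.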

Related Material


[pdf]
[bibtex]
@InProceedings{Ma_2015_ICCV,
author = {Ma, Lin and Zhang, Xiaoqin and Hu, Weiming and Xing, Junliang and Lu, Jiwen and Zhou, Jie},
title = {Local Subspace Collaborative Tracking},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {December},
year = {2015}
}