An Enhanced Adaptive Coupled-Layer LGTracker++

Jingjing Xiao, Rustam Stolkin, Ales Leonardis; Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops, 2013, pp. 137-144


This paper addresses the problem of tracking targets which undergo rapid and significant appearance changes. Our starting point is a successful, state-of-the-art tracker based on an adaptive coupled-layer visual model [10]. In this paper, we identify four important cases in which the original tracker often fails: significant scale changes, environmental clutter, occlusion, and rapid disordered movement. We propose four new enhancements to address these problems: we adapt the scale of the patches in addition to adapting the bounding box; marginal patch distributions are used to prevent patch drift in environmental clutter; a memory is added and used to assist recovery from occlusion; and situations where the tracker may lose the target are automatically detected, in which case a particle filter is substituted for the Kalman filter to help recover the target. We have evaluated the enhanced tracker on a publicly available dataset of 16 challenging video sequences, using a test toolkit [17]. We demonstrate the advantages of the enhanced tracker over the original tracker, as well as several other state-of-the-art trackers from the literature.
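The last enhancement, swapping a particle filter in for the Kalman filter when the target may be lost, can be sketched as a standard predict-weight-resample cycle over candidate target positions. This is a minimal illustrative sketch, not the paper's implementation: the `measure_fn` appearance-likelihood below is a hypothetical Gaussian stand-in for the coupled-layer visual model's score, and all parameter values are assumptions.

```python
import numpy as np

def particle_filter_step(particles, weights, measure_fn, motion_std=5.0, rng=None):
    """One predict-weight-resample cycle for a 2-D target position.

    particles : (N, 2) array of candidate target centres.
    measure_fn: returns an appearance-similarity score per particle
                (hypothetical stand-in for a layer-based likelihood).
    """
    rng = rng or np.random.default_rng(0)
    # Predict: diffuse particles with Gaussian motion noise. Using a weak
    # random-walk motion model (rather than a Kalman prediction) suits the
    # "rapid disordered movement" failure case described in the abstract.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Weight: score every hypothesis by appearance similarity and normalise.
    weights = np.maximum(measure_fn(particles), 1e-12)
    weights = weights / weights.sum()
    # Resample: draw N particles in proportion to their weights, so the
    # cloud concentrates around high-likelihood locations.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Usage: particles scattered over the frame converge on a target at (50, 50).
rng = np.random.default_rng(42)
target = np.array([50.0, 50.0])
particles = rng.uniform(0.0, 100.0, (500, 2))
weights = np.full(500, 1.0 / 500)
score = lambda p: np.exp(-np.linalg.norm(p - target, axis=1) ** 2 / 200.0)
for _ in range(10):
    particles, weights = particle_filter_step(particles, weights, score, rng=rng)
estimate = particles.mean(axis=0)
```

Because the particle cloud covers many hypotheses at once, it can reacquire a target whose motion a single unimodal Kalman estimate has lost.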

Related Material

@InProceedings{Xiao_2013_ICCV_Workshops,
    author = {Jingjing Xiao and Rustam Stolkin and Ales Leonardis},
    title = {An Enhanced Adaptive Coupled-Layer LGTracker++},
    booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops},
    month = {June},
    year = {2013},
    pages = {137-144}
}