Learning Cascaded Context-Aware Framework for Robust Visual Tracking
Ding Ma, Xiangqian Wu; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2019
Abstract
Context information from every corner of the whole image is useful for visual tracking. However, many trackers cannot model such information, which results in suboptimal performance. Directly modeling the full context is intractable: first, the foreground region is relatively small, so naively attending to the whole image loses part of the target's structure; second, the target may share a similar structure with surrounding distractors. To this end, we propose a cascaded context-aware framework based on two networks that progressively model the foreground and background of various targets over time. The first network attends to the most discriminative information within the whole context and the coarser structure of the target; the second network focuses on the self-structure information of the target. From the output of these two networks, the final context-aware map, we can flexibly generate the bounding box of the target. Extensive experiments on three popular benchmarks demonstrate the robustness of the proposed CAT tracker.
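To make the cascade concrete, below is a minimal PyTorch sketch of the two-stage idea the abstract describes: a first network that produces a coarse, context-wide foreground map, and a second network that refines it using the target's own structure, with a bounding box read off the final map. All module names, layer sizes, the concatenation-based fusion, and the thresholding step are hypothetical illustrations; the paper does not specify these details here.

```python
# Hypothetical sketch of a cascaded context-aware tracker (not the
# authors' implementation). Architecture details are assumptions.
import torch
import torch.nn as nn


class ContextNet(nn.Module):
    """Stage 1 (assumed): attends to discriminative information in the
    whole search region and emits a coarse single-channel foreground map."""

    def __init__(self, in_channels: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.head = nn.Conv2d(64, 1, kernel_size=1)  # collapse to 1 channel

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.head(self.features(x)))


class StructureNet(nn.Module):
    """Stage 2 (assumed): refines the coarse map by focusing on the
    target's self-structure, separating it from similar distractors."""

    def __init__(self, in_channels: int = 4):  # 3 image + 1 coarse-map channels
        super().__init__()
        self.refine = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=1),
        )

    def forward(self, x: torch.Tensor, coarse: torch.Tensor) -> torch.Tensor:
        # Condition the refinement on the stage-1 output via channel concat.
        return torch.sigmoid(self.refine(torch.cat([x, coarse], dim=1)))


def track_frame(frame, stage1, stage2):
    """Run the cascade and read a box off the final context-aware map."""
    coarse = stage1(frame)
    final_map = stage2(frame, coarse)
    # Threshold the map; the box is the extent of the response region.
    mask = (final_map[0, 0] > 0.5).nonzero()
    if mask.numel() == 0:
        return None
    y0, x0 = mask.min(dim=0).values.tolist()
    y1, x1 = mask.max(dim=0).values.tolist()
    return (x0, y0, x1, y1)


if __name__ == "__main__":
    frame = torch.rand(1, 3, 128, 128)  # dummy search region
    print(track_frame(frame, ContextNet(), StructureNet()))
```

Feeding the stage-1 map back into stage 2 is one simple way to realize the "progressive" modeling the abstract mentions; generating the box from the map, rather than from a fixed anchor set, is what makes the final box flexible.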
Related Material
[pdf]
[bibtex]
@InProceedings{Ma_2019_ICCV,
author = {Ma, Ding and Wu, Xiangqian},
title = {Learning Cascaded Context-Aware Framework for Robust Visual Tracking},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
month = {Oct},
year = {2019}
}