Leveraging Tacit Information Embedded in CNN Layers for Visual Tracking

Kourosh Meshgi, Maryam Sadat Mirzaei, Shigeyuki Oba; Proceedings of the Asian Conference on Computer Vision (ACCV), 2020


Different layers in CNNs provide not only different levels of abstraction for describing the objects in the input but also encode various implicit information about them. The activation patterns of different features contain valuable information about the stream of incoming images: spatial relations, temporal patterns, and co-occurrence of spatial and spatiotemporal (ST) features. Studies in the visual tracking literature have, so far, utilized only one CNN layer, a pre-fixed combination of layers, or an ensemble of trackers built upon individual layers. In this study, we employ an adaptive combination of several CNN layers in a single DCF tracker to address variations in the target's appearance, and we propose the use of style statistics on both spatial and temporal properties of the target, extracted directly from CNN layers, for visual tracking. Experiments demonstrate that using this additional implicit information from CNNs significantly improves the performance of the tracker. The results show the effectiveness of style similarity and activation consistency regularization in improving the tracker's localization and scale accuracy.
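Style statistics of CNN activations are commonly computed as Gram matrices of a layer's feature maps (as popularized in neural style transfer). The sketch below illustrates that general idea with NumPy; it is a minimal, hypothetical example of a Gram-matrix style descriptor and a similarity score, not the paper's exact formulation or its temporal extension:

```python
import numpy as np

def gram_style(features):
    """Gram-matrix style descriptor of one CNN layer's activations.

    features: array of shape (C, H, W) -- channels x spatial dims.
    Returns a (C, C) matrix of channel co-occurrence statistics,
    normalized by the number of spatial positions.
    """
    C, H, W = features.shape
    F = features.reshape(C, H * W)   # flatten spatial dimensions
    return F @ F.T / (H * W)         # channel-by-channel correlations

def style_similarity(g1, g2):
    """Negative Frobenius distance: higher means more similar styles."""
    return -float(np.linalg.norm(g1 - g2))

# Hypothetical usage: compare the target template's style statistics
# to those of a slightly perturbed candidate patch.
rng = np.random.default_rng(0)
template = rng.standard_normal((64, 7, 7))   # stand-in for a conv layer
candidate = template + 0.1 * rng.standard_normal((64, 7, 7))
score = style_similarity(gram_style(template), gram_style(candidate))
```

A tracker could use such a score as a regularizer, penalizing candidate locations whose style statistics drift from the target template's.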

Related Material

[pdf] [arXiv] [code]
@InProceedings{Meshgi_2020_ACCV,
    author    = {Meshgi, Kourosh and Mirzaei, Maryam Sadat and Oba, Shigeyuki},
    title     = {Leveraging Tacit Information Embedded in CNN Layers for Visual Tracking},
    booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV)},
    month     = {November},
    year      = {2020}
}