TSP: Temporally-Sensitive Pretraining of Video Encoders for Localization Tasks

Humam Alwassel, Silvio Giancola, Bernard Ghanem; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2021, pp. 3173-3183

Abstract


Due to the large memory footprint of untrimmed videos, current state-of-the-art video localization methods operate atop precomputed video clip features. These features are extracted from video encoders typically trained for trimmed action classification tasks, making such features not necessarily suitable for temporal localization. In this work, we propose a novel supervised pretraining paradigm for clip features that not only trains to classify activities but also considers background clips and global video information to improve temporal sensitivity. Extensive experiments show that using features trained with our novel pretraining strategy significantly improves the performance of recent state-of-the-art methods on three tasks: Temporal Action Localization, Action Proposal Generation, and Dense Video Captioning. We also show that our pretraining approach is effective across three encoder architectures and two pretraining datasets. We believe video feature encoding is an important building block for localization algorithms, and extracting temporally-sensitive features should be of paramount importance in building more accurate models. The code and pretrained models are available on our project website.
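The abstract describes the pretraining objective only at a high level: clip features are trained to classify activities while also using background clips and global video information to gain temporal sensitivity. The sketch below is a minimal illustration of one way such a two-head objective could be wired up in PyTorch, assuming an action-classification head on local clip features and a foreground/background head conditioned on a pooled global video feature. The names (TSPHead, tsp_loss, gvf) and the pooling choice are illustrative assumptions, not the authors' released implementation.

# Minimal sketch of a two-head temporally-sensitive pretraining objective.
# Assumption: the global video feature (gvf) is some aggregate (e.g. a pool)
# of all clip features from the same video; see the released code for the
# authors' exact design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TSPHead(nn.Module):
    def __init__(self, feat_dim, num_actions):
        super().__init__()
        # Action head sees only the local clip feature.
        self.action_fc = nn.Linear(feat_dim, num_actions)
        # Region head sees the local clip feature concatenated with the
        # global video feature, and predicts foreground vs. background.
        self.region_fc = nn.Linear(2 * feat_dim, 2)

    def forward(self, clip_feats, gvf):
        # clip_feats: (num_clips, feat_dim) features of clips from one video
        # gvf:        (feat_dim,) global video feature for that video
        gvf_tiled = gvf.unsqueeze(0).expand_as(clip_feats)
        action_logits = self.action_fc(clip_feats)
        region_logits = self.region_fc(torch.cat([clip_feats, gvf_tiled], dim=1))
        return action_logits, region_logits

def tsp_loss(action_logits, region_logits, action_labels, region_labels):
    # region_labels: 1 for clips inside an annotated action, 0 for background
    # action_labels: action class of foreground clips (ignored for background)
    fg = region_labels == 1
    loss = F.cross_entropy(region_logits, region_labels)
    if fg.any():
        loss = loss + F.cross_entropy(action_logits[fg], action_labels[fg])
    return loss

In a training loop, the video encoder would produce clip_feats for both foreground and background clips, gvf would be aggregated from those features, and tsp_loss would be backpropagated through both heads and the encoder, which is what makes the resulting clip features sensitive to action boundaries.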

Related Material


[bibtex]
@InProceedings{Alwassel_2021_ICCV,
  author    = {Alwassel, Humam and Giancola, Silvio and Ghanem, Bernard},
  title     = {TSP: Temporally-Sensitive Pretraining of Video Encoders for Localization Tasks},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
  month     = {October},
  year      = {2021},
  pages     = {3173-3183}
}