Boundary-Sensitive Pre-Training for Temporal Localization in Videos

Mengmeng Xu, Juan-Manuel Perez-Rua, Victor Escorcia, Brais Martinez, Xiatian Zhu, Li Zhang, Bernard Ghanem, Tao Xiang; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 7220-7230


Many video analysis tasks require temporal localization for the detection of content changes. However, most existing models developed for these tasks are pre-trained on general video action classification tasks. This is because large-scale annotation of temporal boundaries in untrimmed videos is expensive. Therefore, no suitable datasets exist that enable pre-training in a manner sensitive to temporal boundaries. In this paper, for the first time, we investigate model pre-training for temporal localization by introducing a novel boundary-sensitive pretext (BSP) task. Instead of relying on costly manual annotations of temporal boundaries, we propose to synthesize temporal boundaries in existing video action classification datasets. By defining different ways of synthesizing boundaries, BSP can then be conducted simply in a self-supervised manner via the classification of the boundary types. This enables the learning of video representations that are much more transferable to downstream temporal localization tasks. Extensive experiments show that the proposed BSP is superior and complementary to the existing action classification-based pre-training counterpart, and achieves new state-of-the-art performance on several temporal localization tasks. Please visit our website for more details.
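The core idea of synthesizing boundaries can be illustrated with a minimal sketch (not the authors' code): splice two trimmed action clips at a cut point to create an artificial temporal boundary, then label the resulting window by a boundary type. The 3-way labeling scheme below (no boundary / same-class cut / different-class cut) is a hypothetical simplification for illustration only.

```python
import numpy as np

def splice_clips(clip_a, clip_b, cut):
    """Concatenate the first `cut` frames of clip_a with the frames of
    clip_b from index `cut` onward, creating a synthetic temporal
    boundary at frame index `cut`."""
    return np.concatenate([clip_a[:cut], clip_b[cut:]], axis=0)

def boundary_type(label_a, label_b, spliced):
    """Hypothetical 3-way boundary label for the pretext classification:
    0 = no boundary (single untouched clip),
    1 = cut between two clips of the same action class,
    2 = cut between two clips of different action classes."""
    if not spliced:
        return 0
    return 1 if label_a == label_b else 2

# Toy example: two 8-frame "clips" of 4x4 grayscale frames.
rng = np.random.default_rng(0)
clip_a = rng.random((8, 4, 4))
clip_b = rng.random((8, 4, 4))

cut = 3
window = splice_clips(clip_a, clip_b, cut)          # shape (8, 4, 4)
label = boundary_type(label_a=2, label_b=5, spliced=True)  # classes differ -> 2
```

A pre-training batch would mix such spliced windows with unmodified clips, and the network is trained to predict the boundary-type label, which is free to generate from any action classification dataset.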

Related Material

@InProceedings{Xu_2021_ICCV,
    author    = {Xu, Mengmeng and Perez-Rua, Juan-Manuel and Escorcia, Victor and Martinez, Brais and Zhu, Xiatian and Zhang, Li and Ghanem, Bernard and Xiang, Tao},
    title     = {Boundary-Sensitive Pre-Training for Temporal Localization in Videos},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {7220-7230}
}