Exploring Scalability of Self-Training for Open-Vocabulary Temporal Action Localization

Jeongseok Hyun, Su Ho Han, Hyolim Kang, Joon-Young Lee, Seon Joo Kim; Proceedings of the Winter Conference on Applications of Computer Vision (WACV), 2025, pp. 9388-9397

Abstract


The vocabulary size in temporal action localization (TAL) is limited by the scarcity of large-scale annotated datasets. To overcome this, recent works integrate vision-language models (VLMs), such as CLIP, for open-vocabulary TAL (OV-TAL). However, despite the success of VLMs trained on extensive datasets, existing OV-TAL methods still rely on human-labeled TAL datasets of limited size to train action localizers, limiting their generalizability. In this paper, we explore the scalability of self-training with unlabeled YouTube videos for OV-TAL. Our approach consists of two stages: (1) a class-agnostic action localizer is trained on a human-labeled TAL dataset to generate pseudo-labels for unlabeled videos, and (2) the large-scale pseudo-labeled dataset is then used to train the localizer. Extensive experiments demonstrate that leveraging web-scale videos in self-training significantly enhances the generalizability of an action localizer. Additionally, we identify limitations in existing OV-TAL evaluation schemes and propose a new benchmark for thorough assessment. Finally, we showcase the TAL performance of the large multimodal model Gemini-1.5 on our new benchmark. Code is released at https://github.com/HYUNJS/STOV-TAL.
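
The two-stage self-training recipe described above can be summarized in a short sketch. The following is a minimal, hypothetical Python illustration of that loop; the ClassAgnosticLocalizer class, its fit/predict interface, the confidence threshold, and the choice to mix pseudo-labels with the original labeled set are all assumptions made for exposition, not the interface of the released code.

from typing import List, Tuple

# One class-agnostic action proposal: (start_sec, end_sec, confidence).
Segment = Tuple[float, float, float]

class ClassAgnosticLocalizer:
    """Predicts action segments without committing to a class vocabulary."""

    def fit(self, videos: List[str], labels: List[List[Segment]]) -> None:
        ...  # train the temporal localizer on annotated segments

    def predict(self, video: str) -> List[Segment]:
        ...  # propose class-agnostic action segments for one video

def self_train(labeled_videos: List[str],
               labeled_segments: List[List[Segment]],
               unlabeled_videos: List[str],
               score_threshold: float = 0.5) -> ClassAgnosticLocalizer:
    # Stage 1: train on the human-labeled TAL dataset, then use the
    # resulting localizer to pseudo-label unlabeled web videos.
    localizer = ClassAgnosticLocalizer()
    localizer.fit(labeled_videos, labeled_segments)

    pseudo_videos: List[str] = []
    pseudo_segments: List[List[Segment]] = []
    for video in unlabeled_videos:
        # Keep only confident proposals as pseudo-labels.
        segments = [s for s in localizer.predict(video)
                    if s[2] >= score_threshold]
        if segments:
            pseudo_videos.append(video)
            pseudo_segments.append(segments)

    # Stage 2: retrain on the large-scale pseudo-labeled dataset
    # (combined here with the labeled set; an assumption of this sketch).
    localizer = ClassAgnosticLocalizer()
    localizer.fit(labeled_videos + pseudo_videos,
                  labeled_segments + pseudo_segments)
    return localizer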

Related Material


@InProceedings{Hyun_2025_WACV,
  author    = {Hyun, Jeongseok and Han, Su Ho and Kang, Hyolim and Lee, Joon-Young and Kim, Seon Joo},
  title     = {Exploring Scalability of Self-Training for Open-Vocabulary Temporal Action Localization},
  booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
  month     = {February},
  year      = {2025},
  pages     = {9388-9397}
}