ZEETAD: Adapting Pretrained Vision-Language Model for Zero-Shot End-to-End Temporal Action Detection
Temporal action detection (TAD) involves the localization and classification of action instances within untrimmed videos. While standard TAD follows fully supervised learning in a closed-set setting on large training data, recent zero-shot TAD methods showcase a promising open-set setting by leveraging large-scale contrastive visual-language (ViL) pretrained models. However, existing zero-shot TAD methods have limitations in how to properly construct the strong relationship between the two interdependent tasks of localization and classification, and in how to adapt a ViL model to video understanding. In this work, we present ZEETAD, featuring two modules: dual-localization and zero-shot proposal classification. The former is a Transformer-based module that detects action events while selectively collecting crucial semantic embeddings for later recognition. The latter is a CLIP-based module that generates semantic embeddings from text and frame inputs for each temporal unit. Additionally, we enhance the discriminative capability on unseen classes by minimally updating the frozen CLIP encoder with lightweight adapters. Extensive experiments on the THUMOS14 and ActivityNet-1.3 datasets demonstrate our approach's superior performance in zero-shot TAD and effective knowledge transfer from ViL models to unseen action categories. Code is available at https://github.com/UARK-AICV/ZEETAD.
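The "lightweight adapters on a frozen CLIP encoder" idea mentioned in the abstract is commonly realized as a residual bottleneck layer inserted into the encoder, with only the adapter weights trained. The sketch below is a minimal illustration assuming PyTorch; the class name, bottleneck size, and zero-initialization choice are illustrative assumptions, not the paper's actual implementation:

```python
import torch
import torch.nn as nn


class Adapter(nn.Module):
    """Residual bottleneck adapter: down-project, nonlinearity, up-project.

    With the up-projection zero-initialized, the adapter starts as an
    identity map, so inserting it does not perturb the frozen encoder
    at the beginning of training.
    """

    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.act = nn.ReLU()
        self.up = nn.Linear(bottleneck, dim)
        nn.init.zeros_(self.up.weight)  # start near identity
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))


def freeze_except_adapters(model: nn.Module) -> None:
    """Freeze all parameters except those belonging to Adapter submodules."""
    for p in model.parameters():
        p.requires_grad = False
    for m in model.modules():
        if isinstance(m, Adapter):
            for p in m.parameters():
                p.requires_grad = True
```

In a training loop, only the adapter parameters would be passed to the optimizer, keeping the number of updated weights a small fraction of the full encoder.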