ChatVTG: Video Temporal Grounding via Chat with Video Dialogue Large Language Models

Mengxue Qu, Xiaodong Chen, Wu Liu, Alicia Li, Yao Zhao; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2024, pp. 1847-1856

Abstract


Video Temporal Grounding (VTG) aims to ground specific segments within an untrimmed video corresponding to a given natural language query. Existing VTG methods largely depend on supervised learning and extensive annotated data, which is labor-intensive and prone to human biases. To address these challenges, we present ChatVTG, a novel approach that utilizes Video Dialogue Large Language Models (LLMs) for zero-shot video temporal grounding. ChatVTG leverages Video Dialogue LLMs to generate multi-granularity segment captions and matches these captions against the given query for coarse temporal grounding, circumventing the need for paired annotation data. Furthermore, to obtain more precise temporal grounding results, we employ moment refinement on the fine-grained caption proposals. Extensive experiments on three mainstream VTG datasets, including Charades-STA, ActivityNet-Captions, and TACoS, demonstrate the effectiveness of ChatVTG, which surpasses the performance of current zero-shot methods.
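
The following is a minimal, illustrative sketch of the caption-matching idea described above, not the authors' implementation: given segment captions already produced by a video dialogue LLM at multiple granularities, the query is matched to captions by text similarity and the best-scoring segment is returned as the coarse grounding. The sentence-transformers model used here, the function name coarse_ground, and the example proposals are assumptions made for illustration; the paper's actual caption generator, matcher, and moment-refinement step are not shown.

# Illustrative sketch only (assumed components, not the paper's code).
from sentence_transformers import SentenceTransformer, util

def coarse_ground(query, proposals, model_name="all-MiniLM-L6-v2"):
    """Pick the segment whose caption best matches the query.

    proposals: list of (start_sec, end_sec, caption) tuples covering
    multiple temporal granularities of the same video.
    """
    model = SentenceTransformer(model_name)  # stand-in text matcher
    captions = [caption for _, _, caption in proposals]
    q_emb = model.encode(query, convert_to_tensor=True)
    c_emb = model.encode(captions, convert_to_tensor=True)
    scores = util.cos_sim(q_emb, c_emb)[0]   # similarity of query to each caption
    best = int(scores.argmax())
    start, end, _ = proposals[best]
    return start, end, float(scores[best])

# Hypothetical usage with made-up captions and timestamps:
# proposals = [(0.0, 8.0, "a person opens the fridge"),
#              (8.0, 20.0, "the person pours milk into a glass"),
#              (0.0, 20.0, "someone prepares a drink in the kitchen")]
# print(coarse_ground("person pours milk", proposals))

In ChatVTG the coarse segment selected this way would then be refined; this sketch stops at the coarse stage because the refinement procedure is not detailed in the abstract.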

Related Material


[pdf]
[bibtex]
@InProceedings{Qu_2024_CVPR,
    author    = {Qu, Mengxue and Chen, Xiaodong and Liu, Wu and Li, Alicia and Zhao, Yao},
    title     = {ChatVTG: Video Temporal Grounding via Chat with Video Dialogue Large Language Models},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2024},
    pages     = {1847-1856}
}