Task-Driven Exploration: Decoupling and Inter-Task Feedback for Joint Moment Retrieval and Highlight Detection

Jin Yang, Ping Wei, Huan Li, Ziyang Ren; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 18308-18318

Abstract


Video moment retrieval and highlight detection are two highly valuable tasks in video understanding, but only recently have they been studied jointly. Although existing studies have made impressive progress, they predominantly follow the data-driven bottom-up paradigm. Such a paradigm overlooks task-specific and inter-task effects, resulting in suboptimal model performance. In this paper, we propose a novel task-driven top-down framework, TaskWeave, for joint moment retrieval and highlight detection. The framework introduces a task-decoupled unit to capture task-specific and common representations. To investigate the interplay between the two tasks, we propose an inter-task feedback mechanism that transforms the results of one task into guiding masks to assist the other task. Different from existing methods, we present a task-dependent joint loss function to optimize the model. Comprehensive experiments and in-depth ablation studies on the QVHighlights, TVSum, and Charades-STA datasets corroborate the effectiveness and flexibility of the proposed framework. Code is available at https://github.com/EdenGabriel/TaskWeave.
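To make the inter-task feedback idea in the abstract concrete, the sketch below turns the per-clip scores predicted by one task (here, highlight-detection saliency) into a soft mask that gates the features consumed by the other task (moment retrieval). This is a minimal PyTorch sketch under assumed tensor shapes; the module and variable names (InterTaskFeedback, to_mask) are hypothetical and do not come from the paper, so consult the linked repository for the authors' actual implementation.

    # Minimal sketch (assumptions, not the authors' code): one task's
    # predictions become a guiding mask that modulates the other task's
    # features, as described in the abstract.
    import torch
    import torch.nn as nn

    class InterTaskFeedback(nn.Module):
        def __init__(self, dim: int):
            super().__init__()
            # Projects per-clip saliency scores to a soft gating mask
            # over the feature channels.
            self.to_mask = nn.Sequential(nn.Linear(1, dim), nn.Sigmoid())

        def forward(self, clip_feats: torch.Tensor, saliency: torch.Tensor) -> torch.Tensor:
            # clip_feats: (B, T, D) clip-level video features
            # saliency:   (B, T)    highlight-detection scores per clip
            mask = self.to_mask(saliency.unsqueeze(-1))  # (B, T, D) soft mask
            return clip_feats * mask  # guided features fed to moment retrieval

    # Toy usage: 2 videos, 16 clips, 256-d features.
    feedback = InterTaskFeedback(dim=256)
    feats = torch.randn(2, 16, 256)
    sal = torch.rand(2, 16)
    guided = feedback(feats, sal)  # same shape as feats: (2, 16, 256)

The design choice illustrated here is that feedback is soft (a sigmoid gate) rather than a hard selection, so gradients still flow between the two task branches during joint training.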

Related Material


[bibtex]
@InProceedings{Yang_2024_CVPR,
  author    = {Yang, Jin and Wei, Ping and Li, Huan and Ren, Ziyang},
  title     = {Task-Driven Exploration: Decoupling and Inter-Task Feedback for Joint Moment Retrieval and Highlight Detection},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2024},
  pages     = {18308-18318}
}