LocoMotion: Learning Motion-Focused Video-Language Representations

Hazel Doughty, Fida Mohammad Thoker, Cees G. M. Snoek; Proceedings of the Asian Conference on Computer Vision (ACCV), 2024, pp. 50-70

Abstract


This paper strives for motion-focused video-language representations. Existing methods to learn video-language representations use spatially-focused data, where identifying the objects and scene is often enough to distinguish the relevant caption. We instead propose LocoMotion to learn from motion-focused captions that describe the movement and temporal progression of local object motions. We achieve this by adding synthetic motions to videos and using the parameters of these motions to generate corresponding captions. Furthermore, we propose verb-variation paraphrasing to increase the caption variety and learn the link between primitive motions and high-level verbs. With this, we are able to learn a motion-focused video-language representation. Experiments demonstrate our approach is effective for a variety of downstream tasks, particularly when limited data is available for fine-tuning. Code is available at https://hazeldoughty.github.io/Papers/LocoMotion/
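The abstract describes pairing synthetic motions with captions generated from the motion parameters. The sketch below is only a toy illustration of that idea under our own assumptions: the function names, the straight-line parameterization, and the caption template are hypothetical and do not reproduce the authors' pipeline or their verb-variation paraphrasing.

```python
import random
import numpy as np

VERB_VARIANTS = ["moves", "slides", "drifts", "glides"]  # toy stand-in for verb variation

def add_synthetic_motion(frames, patch, speed_px=4.0, direction_deg=0.0, start=(10, 10)):
    """Overlay a patch that travels along a straight, parameterized path.

    frames: (T, H, W, C) uint8 video; patch: (h, w, C) uint8 sprite.
    The motion is fully described by (speed_px, direction_deg, start),
    so a caption can be generated from the same parameters.
    """
    T, H, W, _ = frames.shape
    h, w, _ = patch.shape
    out = frames.copy()
    dy = speed_px * np.sin(np.deg2rad(direction_deg))  # vertical step per frame
    dx = speed_px * np.cos(np.deg2rad(direction_deg))  # horizontal step per frame
    for t in range(T):
        y = int(np.clip(start[0] + t * dy, 0, H - h))
        x = int(np.clip(start[1] + t * dx, 0, W - w))
        out[t, y:y + h, x:x + w] = patch  # hard paste, no blending
    return out

def caption_from_parameters(speed_px, direction_deg):
    """Turn the motion parameters into a templated, motion-focused caption."""
    heading = "right" if -45 <= direction_deg <= 45 else ("down" if direction_deg < 135 else "left")
    pace = "slowly" if speed_px < 3 else "quickly"
    verb = random.choice(VERB_VARIANTS)  # swap in synonym verbs to vary the caption
    return f"an object {verb} {pace} to the {heading} across the scene"

# Toy usage: a white square drifting right over 16 gray frames.
video = np.full((16, 128, 128, 3), 128, dtype=np.uint8)
sprite = np.full((12, 12, 3), 255, dtype=np.uint8)
augmented = add_synthetic_motion(video, sprite, speed_px=5.0, direction_deg=0.0)
print(caption_from_parameters(5.0, 0.0))
```

Because the caption is derived from the same parameters that drive the overlay, the resulting video-caption pair can only be matched by attending to the motion rather than the scene, which is the intuition the abstract conveys.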

Related Material


@InProceedings{Doughty_2024_ACCV,
    author    = {Doughty, Hazel and Thoker, Fida Mohammad and Snoek, Cees G. M.},
    title     = {LocoMotion: Learning Motion-Focused Video-Language Representations},
    booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV)},
    month     = {December},
    year      = {2024},
    pages     = {50-70}
}