ICSVR: Investigating Compositional and Syntactic Understanding in Video Retrieval Models

Avinash Madasu, Vasudev Lal; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2024, pp. 1733-1743

Abstract


Video retrieval (VR) involves retrieving the ground-truth video from a video database given a text caption, or vice versa. The two important components of compositionality, objects & attributes and actions, are joined using correct syntax to form a proper text query. These components (objects & attributes, actions, and syntax) each play an important role in distinguishing among videos and retrieving the correct ground-truth video. However, it is unclear what effect these components have on video retrieval performance. We therefore conduct a systematic study to evaluate the compositional and syntactic understanding of video retrieval models on standard benchmarks such as MSRVTT, MSVD and DIDEMO. The study is performed on two categories of video retrieval models: (i) models that are pre-trained on video-text pairs and fine-tuned on downstream video retrieval datasets (e.g., Frozen-in-Time, Violet, MCQ), and (ii) models that adapt pre-trained image-text representations such as CLIP for video retrieval (e.g., CLIP4Clip, XCLIP, CLIP2Video). Our experiments reveal that actions and syntax play a minor role compared to objects & attributes in video understanding. Moreover, video retrieval models that use pre-trained image-text representations (CLIP) have better syntactic and compositional understanding than models pre-trained on video-text data. The code is available at https://github.com/IntelLabs/multimodal_cognitive_ai/tree/main/ICSVR.
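To make the retrieval setup concrete, below is a minimal sketch of CLIP-style text-to-video retrieval: captions and videos are embedded into a shared space, and videos are ranked by cosine similarity to the text query. The encoder outputs here are stand-in random vectors, and the function names and dimensions are illustrative assumptions, not the paper's implementation.

import numpy as np

def l2_normalize(x: np.ndarray, axis: int = -1) -> np.ndarray:
    # Unit-normalize embeddings so dot products equal cosine similarity.
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def retrieve(text_emb: np.ndarray, video_embs: np.ndarray, k: int = 5) -> np.ndarray:
    # Rank all videos in the database by similarity to the query; return top-k indices.
    sims = l2_normalize(video_embs) @ l2_normalize(text_emb)
    return np.argsort(-sims)[:k]

# Stand-ins for encoder outputs: 100 videos, 512-dim embeddings (hypothetical sizes).
rng = np.random.default_rng(0)
video_embs = rng.standard_normal((100, 512))
text_emb = rng.standard_normal(512)
print(retrieve(text_emb, video_embs, k=5))  # indices of the 5 best-matching videos

On a benchmark such as MSRVTT, Recall@k is then the fraction of text queries whose ground-truth video appears among the top-k retrieved indices.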

Related Material


[pdf] [arXiv]
[bibtex]
@InProceedings{Madasu_2024_CVPR,
  author    = {Madasu, Avinash and Lal, Vasudev},
  title     = {ICSVR: Investigating Compositional and Syntactic Understanding in Video Retrieval Models},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2024},
  pages     = {1733-1743}
}