Cross-Modal Contrastive Learning With Asymmetric Co-Attention Network for Video Moment Retrieval

Love Panta, Prashant Shrestha, Brabeem Sapkota, Amrita Bhattarai, Suresh Manandhar, Anand Kumar Sah; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) Workshops, 2024, pp. 607-614

Abstract


Video moment retrieval is a challenging task requiring fine-grained interactions between the video and text modalities. Recent work on image-text pretraining has demonstrated that most existing pretrained models suffer from information asymmetry caused by the difference in length between visual and textual sequences. We ask whether the same problem also exists in the video-text domain, where there is the additional need to preserve both spatial and temporal information. To this end, we evaluate a recently proposed solution, the addition of an asymmetric co-attention network, on video grounding tasks. We also incorporate a momentum contrastive loss for robust, discriminative representation learning in both modalities. Integrating these supplementary modules yields better performance than state-of-the-art models on the TACoS dataset and comparable results on ActivityNet Captions, while using significantly fewer parameters than the baseline.
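The abstract does not spell out the architecture, but the asymmetry it refers to can be illustrated with a short sketch. Below is a minimal, hypothetical PyTorch block in which only the short text sequence cross-attends into the long video sequence, while the video stream keeps self-attention only; module names, dimensions, and the placement of normalization are illustrative assumptions, not the paper's released code.

```python
# A minimal sketch of an asymmetric co-attention block, assuming (as one
# plausible reading of the abstract) that the short text sequence
# cross-attends to the long video sequence, while the video branch keeps
# only self-attention. Names and hyperparameters are illustrative.
import torch
import torch.nn as nn

class AsymmetricCoAttentionBlock(nn.Module):
    def __init__(self, dim: int = 256, num_heads: int = 4):
        super().__init__()
        # Video branch: self-attention only (the long sequence).
        self.video_self_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Text branch: self-attention followed by cross-attention into video.
        self.text_self_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.text_cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_v = nn.LayerNorm(dim)
        self.norm_t1 = nn.LayerNorm(dim)
        self.norm_t2 = nn.LayerNorm(dim)

    def forward(self, video: torch.Tensor, text: torch.Tensor):
        # video: (B, Lv, D) with Lv >> Lt; text: (B, Lt, D)
        v, _ = self.video_self_attn(video, video, video)
        video = self.norm_v(video + v)
        t, _ = self.text_self_attn(text, text, text)
        text = self.norm_t1(text + t)
        # The asymmetry: only the text side cross-attends to the other modality,
        # avoiding the cost of attending from every video token into text.
        t, _ = self.text_cross_attn(text, video, video)
        text = self.norm_t2(text + t)
        return video, text
```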
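Similarly, a momentum contrastive loss of the kind mentioned above is commonly implemented MoCo-style: a slowly updated (EMA) key encoder produces positives, and a queue of past keys supplies negatives for an InfoNCE objective. The sketch below assumes that standard setup; the momentum m, temperature tau, and queue size are placeholders, not the paper's settings.

```python
# A minimal MoCo-style contrastive loss between video and text embeddings.
# The key encoder is an EMA copy of the query encoder; a queue of past keys
# serves as negatives. Hyperparameters here are placeholders.
import torch
import torch.nn.functional as F

@torch.no_grad()
def momentum_update(query_encoder, key_encoder, m: float = 0.995):
    # EMA update: key parameters drift slowly toward the query parameters.
    for q_p, k_p in zip(query_encoder.parameters(), key_encoder.parameters()):
        k_p.data.mul_(m).add_(q_p.data, alpha=1.0 - m)

def momentum_contrastive_loss(q, k, queue, tau: float = 0.07):
    # q: (B, D) query embeddings (e.g. pooled video features)
    # k: (B, D) momentum-encoded positives (e.g. paired text features)
    # queue: (K, D) negatives stored from previous batches
    q = F.normalize(q, dim=-1)
    k = F.normalize(k, dim=-1)
    l_pos = (q * k).sum(dim=-1, keepdim=True)            # (B, 1) positive logits
    l_neg = q @ F.normalize(queue, dim=-1).t()           # (B, K) negative logits
    logits = torch.cat([l_pos, l_neg], dim=1) / tau
    # The positive key sits at index 0 for every query.
    labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
    return F.cross_entropy(logits, labels)
```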

Related Material


[bibtex]
@InProceedings{Panta_2024_WACV,
    author    = {Panta, Love and Shrestha, Prashant and Sapkota, Brabeem and Bhattarai, Amrita and Manandhar, Suresh and Sah, Anand Kumar},
    title     = {Cross-Modal Contrastive Learning With Asymmetric Co-Attention Network for Video Moment Retrieval},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) Workshops},
    month     = {January},
    year      = {2024},
    pages     = {607-614}
}