Video and Text Matching With Conditioned Embeddings

Ameen Ali, Idan Schwartz, Tamir Hazan, Lior Wolf; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2022, pp. 1565-1574

Abstract


We present a method for matching a text sentence from a given corpus to a given video clip and vice versa. Traditionally, video and text matching is done by learning a shared embedding space, and the encoding of one modality is independent of the other. In this work, we encode the dataset items in a way that takes the query's relevant information into account. The power of the method is demonstrated to arise from pooling the interaction data between words and frames. Since the encoding of the video clip depends on the sentence compared to it, the representation needs to be recomputed for each potential match. To this end, we propose an efficient shallow neural network. Its training employs a hierarchical triplet loss that is extendable to paragraph/video matching. The method is simple, provides explainability, and achieves state-of-the-art results for both sentence-clip and video-text matching by a sizable margin across five different datasets: ActivityNet, DiDeMo, YouCook2, MSR-VTT, and LSMDC. We also show that our conditioned representation can be transferred to video-guided machine translation, where we improve the current results on VATEX. Source code is available at https://github.com/AmeenAli/VideoMatch.
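To make the conditioning idea concrete, below is a minimal PyTorch sketch of one way a shallow network can score a clip-sentence pair by pooling word-frame interactions. This is not the authors' implementation: the module names, dimensions, attention-style pooling, and the hinge form of the triplet loss are all illustrative assumptions; consult the paper and the linked repository for the actual architecture.

```python
# Illustrative sketch of conditioned video/text matching (assumptions only;
# not the paper's exact architecture).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionedMatcher(nn.Module):
    def __init__(self, dim=512):
        super().__init__()
        self.frame_proj = nn.Linear(dim, dim)  # projects frame features
        self.word_proj = nn.Linear(dim, dim)   # projects word features
        self.score = nn.Linear(dim, 1)         # maps pooled features to a score

    def forward(self, frames, words):
        # frames: (B, T, dim) frame features; words: (B, N, dim) word features
        f = self.frame_proj(frames)            # (B, T, dim)
        w = self.word_proj(words)              # (B, N, dim)
        # Word-frame interaction matrix: one weight per (frame, word) pair.
        attn = torch.softmax(
            f @ w.transpose(1, 2) / f.size(-1) ** 0.5, dim=-1
        )                                      # (B, T, N)
        # Condition each frame on the sentence by pooling word features.
        cond_frames = attn @ w                 # (B, T, dim)
        # Pool the conditioned interactions over time, then score the pair.
        pooled = (f * cond_frames).mean(dim=1) # (B, dim)
        return self.score(pooled).squeeze(-1)  # (B,) matching score

def triplet_loss(pos, neg, margin=0.2):
    # Hinge-style triplet loss on pair scores: a positive pair should
    # outscore a negative pair by at least the margin.
    return F.relu(margin - pos + neg).mean()
```

Note that because `cond_frames` depends on the paired sentence, the clip representation must be recomputed for every candidate match rather than cached, which is why the scoring network has to stay shallow and cheap.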

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Ali_2022_WACV,
    author    = {Ali, Ameen and Schwartz, Idan and Hazan, Tamir and Wolf, Lior},
    title     = {Video and Text Matching With Conditioned Embeddings},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2022},
    pages     = {1565-1574}
}