TeachText: CrossModal Generalized Distillation for Text-Video Retrieval

Ioana Croitoru, Simion-Vlad Bogolin, Marius Leordeanu, Hailin Jin, Andrew Zisserman, Samuel Albanie, Yang Liu; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 11583-11593

Abstract


In recent years, considerable progress on the task of text-video retrieval has been achieved by leveraging large-scale pretraining on visual and audio datasets to construct powerful video encoders. By contrast, despite the natural symmetry, the design of effective algorithms for exploiting large-scale language pretraining remains under-explored. In this work, we are the first to investigate the design of such algorithms and propose a novel generalized distillation method, TeachText, which leverages complementary cues from multiple text encoders to provide an enhanced supervisory signal to the retrieval model. Moreover, we extend our method to video-side modalities and show that we can effectively reduce the number of modalities used at test time without compromising performance. Our approach advances the state of the art on several video retrieval benchmarks by a significant margin and adds no computational overhead at test time. Last but not least, we show an effective application of our method for eliminating noise from retrieval datasets. Code and data can be found at https://www.robots.ox.ac.uk/~vgg/research/teachtext/.
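To make the abstract's core idea concrete, below is a minimal PyTorch sketch of one way a generalized-distillation objective of this kind could look: the student's text-video similarity matrix is trained with a standard bidirectional ranking loss while also being pulled toward an aggregate of similarity matrices produced with multiple text encoders (the "teachers"). The function name `teachtext_loss`, the mean aggregation of teachers, and the MSE distillation term are illustrative assumptions, not the paper's published formulation.

```python
# Hedged sketch of cross-modal generalized distillation from multiple text
# encoders; the exact loss form and aggregation here are assumptions.
import torch
import torch.nn.functional as F


def teachtext_loss(student_sim: torch.Tensor,
                   teacher_sims: list[torch.Tensor],
                   margin: float = 0.2) -> torch.Tensor:
    """Ranking loss on the student's text-video similarity matrix plus a
    distillation term toward the mean of the teachers' similarity matrices."""
    n = student_sim.size(0)
    # Standard bidirectional max-margin ranking loss; ground-truth pairs lie on the diagonal.
    pos = student_sim.diag().view(n, 1)
    cost_t2v = (margin + student_sim - pos).clamp(min=0)
    cost_v2t = (margin + student_sim - pos.t()).clamp(min=0)
    mask = torch.eye(n, dtype=torch.bool, device=student_sim.device)
    retrieval = cost_t2v.masked_fill(mask, 0).mean() + cost_v2t.masked_fill(mask, 0).mean()

    # Distillation term: regress the student's similarities toward the
    # (detached) average of the teacher similarity matrices.
    teacher = torch.stack(teacher_sims).mean(dim=0).detach()
    distill = F.mse_loss(student_sim, teacher)

    return retrieval + distill


if __name__ == "__main__":
    b = 8
    student = torch.randn(b, b, requires_grad=True)
    teachers = [torch.randn(b, b) for _ in range(3)]  # e.g. similarities from 3 text encoders
    loss = teachtext_loss(student, teachers)
    loss.backward()
    print(float(loss))
```

Because the teacher similarity matrices are only needed to compute the training loss, a student trained this way incurs no extra cost at test time, consistent with the abstract's claim of zero inference overhead.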

Related Material


@InProceedings{Croitoru_2021_ICCV,
    author    = {Croitoru, Ioana and Bogolin, Simion-Vlad and Leordeanu, Marius and Jin, Hailin and Zisserman, Andrew and Albanie, Samuel and Liu, Yang},
    title     = {TeachText: CrossModal Generalized Distillation for Text-Video Retrieval},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {11583-11593}
}