Long-range Multimodal Pretraining for Movie Understanding

Dawit Mureja Argaw, Joon-Young Lee, Markus Woodson, In So Kweon, Fabian Caba Heilbron; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 13392-13403

Abstract


Learning computer vision models from (and for) movies has a long-standing history. While great progress has been made, there is still a need for a pretrained multimodal model that performs well across the ever-growing set of movie understanding tasks the community has been establishing. In this work, we introduce Long-range Multimodal Pretraining, a strategy and a model that leverage movie data to train transferable multimodal and cross-modal encoders. Our key idea is to learn from all modalities in a movie by observing and extracting relationships over long time spans. After pretraining, we run ablation studies on the LVU benchmark that validate our modeling choices and the importance of learning from long-range time spans. Our model achieves state-of-the-art performance on several LVU tasks while being far more data efficient than previous works. Finally, we evaluate our model's transferability, setting a new state-of-the-art on five different benchmarks.
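The abstract only gives the high-level idea, so the following is a minimal, hypothetical sketch of what contrastive pretraining of cross-modal encoders over long-range movie segments could look like. Everything here is an assumption for illustration: the `ModalityEncoder` architecture, the `info_nce` loss, and the feature dimensions are not taken from the paper.

```python
# Minimal sketch (NOT the paper's implementation): one generic way to pretrain
# cross-modal encoders over long sequences of movie clips with a contrastive
# objective. All names, dimensions, and the InfoNCE-style loss are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityEncoder(nn.Module):
    """Transformer over a long sequence of precomputed clip features (hypothetical)."""
    def __init__(self, feat_dim, embed_dim=512, num_layers=4, num_heads=8):
        super().__init__()
        self.proj = nn.Linear(feat_dim, embed_dim)
        layer = nn.TransformerEncoderLayer(embed_dim, num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)

    def forward(self, x):                # x: (batch, num_clips, feat_dim)
        h = self.encoder(self.proj(x))   # contextualize each clip over the long range
        return F.normalize(h.mean(dim=1), dim=-1)  # pooled, unit-norm embedding

def info_nce(a, b, temperature=0.07):
    """Symmetric contrastive loss aligning two modalities across the batch."""
    logits = a @ b.t() / temperature
    targets = torch.arange(a.size(0), device=a.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Hypothetical per-clip feature dims for video / audio / dialogue.
video_enc, audio_enc, text_enc = (ModalityEncoder(d) for d in (1024, 128, 768))

# One toy pretraining step over a batch of long movie segments (64 clips each).
video = torch.randn(8, 64, 1024)
audio = torch.randn(8, 64, 128)
text  = torch.randn(8, 64, 768)
v, a, t = video_enc(video), audio_enc(audio), text_enc(text)
loss = info_nce(v, a) + info_nce(v, t) + info_nce(a, t)
loss.backward()
```

In a real pipeline the random tensors would be replaced by precomputed per-clip features spanning many minutes of a movie; it is this long span of jointly contextualized clips, rather than the specific loss, that makes such pretraining "long-range".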

Related Material


@InProceedings{Argaw_2023_ICCV,
    author    = {Argaw, Dawit Mureja and Lee, Joon-Young and Woodson, Markus and Kweon, In So and Heilbron, Fabian Caba},
    title     = {Long-range Multimodal Pretraining for Movie Understanding},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {13392-13403}
}