SMAUG: Sparse Masked Autoencoder for Efficient Video-Language Pre-Training

Yuanze Lin, Chen Wei, Huiyu Wang, Alan Yuille, Cihang Xie; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 2459-2469

Abstract


Video-language pre-training is crucial for learning powerful multi-modal representations. However, it typically requires a massive amount of computation. In this paper, we develop SMAUG, an efficient pre-training framework for video-language models. The foundational component of SMAUG is masked autoencoders. Unlike prior works that mask only textual inputs, our masking strategy considers both visual and textual modalities, providing better cross-modal alignment and further reducing pre-training costs. On top of that, we introduce a space-time token sparsification module, which leverages context information to select only "important" spatial regions and temporal frames for pre-training. Coupling all these designs allows our method to enjoy both competitive performance on text-to-video retrieval and video question answering tasks, and pre-training costs reduced by 1.9x or more. For example, our SMAUG needs only 50 NVIDIA A6000 GPU hours of pre-training to attain competitive performance on these two video-language tasks across six popular benchmarks.
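To make the two core ideas concrete, below is a minimal PyTorch sketch of (1) masking tokens from both the visual and the textual stream before encoding, and (2) space-time token sparsification that keeps only the highest-scoring tokens. Everything here (random masking, the mask_ratio/keep_ratio values, and importance scores standing in for attention-derived ones) is an illustrative assumption, not the paper's exact implementation.

import torch

def random_mask(tokens: torch.Tensor, mask_ratio: float):
    """Keep a random subset of tokens; return kept tokens and their indices.

    tokens: (B, N, D) sequence of patch or word embeddings.
    NOTE: uniform random masking is an assumption; the paper's strategy
    may score tokens differently.
    """
    B, N, D = tokens.shape
    n_keep = max(1, int(N * (1 - mask_ratio)))
    noise = torch.rand(B, N, device=tokens.device)       # random per-token scores
    keep_idx = noise.argsort(dim=1)[:, :n_keep]          # lowest-noise tokens survive
    kept = torch.gather(tokens, 1, keep_idx.unsqueeze(-1).expand(-1, -1, D))
    return kept, keep_idx

def sparsify_by_importance(tokens: torch.Tensor, scores: torch.Tensor, keep_ratio: float):
    """Keep the top-k tokens ranked by a per-token importance score
    (e.g. an attention weight from a [CLS] token), mimicking the idea of
    space-time token sparsification."""
    B, N, D = tokens.shape
    k = max(1, int(N * keep_ratio))
    top_idx = scores.topk(k, dim=1).indices
    return torch.gather(tokens, 1, top_idx.unsqueeze(-1).expand(-1, -1, D))

# Toy usage: 8 frames x 196 patches of video tokens and 32 text tokens.
# Mask 75% of visual tokens and 15% of text tokens, then keep the 50%
# most "important" of the surviving visual tokens.
video_tokens = torch.randn(2, 8 * 196, 768)
text_tokens = torch.randn(2, 32, 768)
kept_video, _ = random_mask(video_tokens, mask_ratio=0.75)
kept_text, _ = random_mask(text_tokens, mask_ratio=0.15)
scores = torch.rand(kept_video.shape[:2])                # stand-in importance scores
sparse_video = sparsify_by_importance(kept_video, scores, keep_ratio=0.5)
print(sparse_video.shape, kept_text.shape)               # (2, 196, 768) (2, 27, 768)

The cost saving follows directly from the encoder only ever seeing the kept tokens: with a 75% visual mask followed by 50% sparsification, self-attention runs over roughly an eighth of the original video sequence.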

Related Material


[pdf] [arXiv]
[bibtex]
@InProceedings{Lin_2023_ICCV,
    author    = {Lin, Yuanze and Wei, Chen and Wang, Huiyu and Yuille, Alan and Xie, Cihang},
    title     = {SMAUG: Sparse Masked Autoencoder for Efficient Video-Language Pre-Training},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {2459-2469}
}