MGMAE: Motion Guided Masking for Video Masked Autoencoding

Bingkun Huang, Zhiyu Zhao, Guozhen Zhang, Yu Qiao, Limin Wang; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 13493-13504

Abstract


Masked autoencoding has shown excellent performance on self-supervised video representation learning. Temporal redundancy has led to a high masking ratio and a customized masking strategy in VideoMAE. In this paper, we aim to further improve the performance of video masked autoencoding by introducing a motion guided masking strategy. Our key insight is that motion is a general and unique prior in video, which should be taken into account during masked pre-training. Our motion guided masking explicitly incorporates motion information to build a temporally consistent masking volume. Based on this masking volume, we can track the unmasked tokens in time and sample a set of temporally consistent cubes from videos. These temporally aligned unmasked tokens further relieve information leakage across time and encourage MGMAE to learn more useful structural information. We implement our MGMAE with an efficient online optical flow estimator and a backward masking-map warping strategy. We conduct experiments on Something-Something V2 and Kinetics-400, demonstrating the superior performance of our MGMAE over the original VideoMAE. In addition, we provide a visualization analysis illustrating that our MGMAE samples temporally consistent cubes in a motion-adaptive manner for more effective video pre-training.
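
The core mechanism described in the abstract, building a temporally consistent masking volume by warping a mask across frames with optical flow, can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch illustration, not the authors' released code: the function names, the anchor-frame choice (here the first frame), the flow direction convention, and the soft-mask handling are all assumptions made for exposition.

```python
# A minimal sketch (NOT the authors' implementation) of motion-guided mask
# propagation: a random mask built on an anchor frame is warped frame by
# frame with optical flow, so the unmasked tokens stay temporally aligned.
import torch
import torch.nn.functional as F


def warp_mask(mask, flow):
    """Warp a soft mask (B, 1, H, W) with an optical flow field (B, 2, H, W).

    flow[:, 0] is the horizontal displacement, flow[:, 1] the vertical one,
    both in pixels. The flow direction convention is illustrative.
    """
    b, _, h, w = mask.shape
    # Base sampling grid in pixel coordinates.
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().to(mask.device)  # (2, H, W)
    coords = grid.unsqueeze(0) + flow                            # (B, 2, H, W)
    # Normalize coordinates to [-1, 1] as required by grid_sample.
    coords[:, 0] = 2.0 * coords[:, 0] / (w - 1) - 1.0
    coords[:, 1] = 2.0 * coords[:, 1] / (h - 1) - 1.0
    return F.grid_sample(mask, coords.permute(0, 2, 3, 1),
                         mode="bilinear", align_corners=True)


def build_mask_volume(flows, mask_ratio=0.9):
    """Propagate a random anchor-frame mask through T frames.

    flows: (B, T-1, 2, H, W), optical flow between consecutive frames.
    Returns a soft mask volume (B, T, 1, H, W), where 1 marks a visible
    (unmasked) position on the anchor frame.
    """
    b, tm1, _, h, w = flows.shape
    anchor = (torch.rand(b, 1, h, w, device=flows.device) > mask_ratio).float()
    volume = [anchor]
    for t in range(tm1):
        volume.append(warp_mask(volume[-1], flows[:, t]))
    return torch.stack(volume, dim=1)
```

In a full pipeline, the warped soft masks would then be converted back to a fixed number of visible tokens per frame (for example, by keeping the top (1 - mask_ratio) fraction of values), so the masking ratio stays constant across time while the visible cubes follow the motion.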

Related Material


BibTeX
@InProceedings{Huang_2023_ICCV,
    author    = {Huang, Bingkun and Zhao, Zhiyu and Zhang, Guozhen and Qiao, Yu and Wang, Limin},
    title     = {MGMAE: Motion Guided Masking for Video Masked Autoencoding},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {13493-13504}
}