M33D: Learning 3D Priors Using Multi-Modal Masked Autoencoders for 2D Image and Video Understanding

Muhammad Abdullah Jamal, Omid Mohareri; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2024, pp. 2544-2554

Abstract


We present a new pre-training strategy called M^33D (Multi-Modal Masked 3D), built on multi-modal masked autoencoders that can leverage 3D priors and learned cross-modal representations in RGB-D data. We integrate two major self-supervised learning frameworks, Masked Image Modeling (MIM) and contrastive learning, aiming to effectively embed masked 3D priors and modality-complementary features and to enhance the correspondence between modalities. In contrast to recent approaches that either focus on specific downstream tasks or require multi-view correspondence, we show that our pre-training strategy is task-agnostic, enabling improved representation learning that transfers to better performance on various downstream tasks such as video action recognition, video action detection, 2D semantic segmentation, and depth estimation. Experiments show that M^33D outperforms existing state-of-the-art approaches on ScanNet, NYUv2, UCF-101, and OR-AR, notably with an improvement of +1.3% mIoU over Mask3D on ScanNet semantic segmentation. We further evaluate our method in the low-data regime and demonstrate its superior data efficiency compared to current state-of-the-art approaches.
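The two frameworks combined in the abstract can be illustrated with a minimal, self-contained sketch: masked-patch reconstruction (MIM) plus a symmetric InfoNCE loss aligning paired RGB and depth embeddings. This is not the authors' implementation — all function names, shapes, and the toy data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_mask(num_patches, mask_ratio=0.75):
    """Split patch indices into masked (hidden from the encoder) and visible."""
    n_masked = int(num_patches * mask_ratio)
    perm = rng.permutation(num_patches)
    return perm[:n_masked], perm[n_masked:]

def mim_loss(pred, target, masked_idx):
    """Mean-squared reconstruction error computed on masked patches only."""
    diff = pred[masked_idx] - target[masked_idx]
    return float(np.mean(diff ** 2))

def info_nce(rgb_emb, depth_emb, temperature=0.07):
    """Symmetric InfoNCE: matching RGB/depth pairs sit on the diagonal."""
    rgb = rgb_emb / np.linalg.norm(rgb_emb, axis=1, keepdims=True)
    dep = depth_emb / np.linalg.norm(depth_emb, axis=1, keepdims=True)
    logits = rgb @ dep.T / temperature
    labels = np.arange(len(rgb))

    def xent(l):
        l = l - l.max(axis=1, keepdims=True)        # numerical stability
        p = np.exp(l) / np.exp(l).sum(axis=1, keepdims=True)
        return -np.mean(np.log(p[labels, labels]))  # diagonal = positives

    return 0.5 * (xent(logits) + xent(logits.T))

# Toy batch: 16 patches of dim 8 for MIM; 4 paired RGB-D embeddings for contrast.
patches = rng.normal(size=(16, 8))
masked, visible = random_mask(16)
recon = patches + 0.1 * rng.normal(size=patches.shape)   # stand-in for decoder output
rgb_e = rng.normal(size=(4, 8))
dep_e = rgb_e + 0.05 * rng.normal(size=(4, 8))           # nearly aligned pairs

total = mim_loss(recon, patches, masked) + info_nce(rgb_e, dep_e)
print(total)
```

In the paper the two objectives are combined during pre-training; here they are simply summed over random toy tensors to show the shape of each term.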

Related Material


[bibtex]
@InProceedings{Jamal_2024_WACV,
  author    = {Jamal, Muhammad Abdullah and Mohareri, Omid},
  title     = {M33D: Learning 3D Priors Using Multi-Modal Masked Autoencoders for 2D Image and Video Understanding},
  booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
  month     = {January},
  year      = {2024},
  pages     = {2544-2554}
}