[pdf]
[supp]
[arXiv]
[bibtex]
@InProceedings{Dondera_2025_WACV,
  author    = {Dondera, Alin-Eugen and Singh, Anuj R and Jamali-Rad, Hadi},
  title     = {MAGMA: Manifold Regularization for MAEs},
  booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
  month     = {February},
  year      = {2025},
  pages     = {6890-6899}
}
MAGMA: Manifold Regularization for MAEs
Abstract
Masked Autoencoders (MAEs) represent an important divide in self-supervised learning (SSL) due to their independence from the augmentation techniques used to generate positive (and/or negative) pairs in contrastive frameworks. Their masking-and-reconstruction strategy also aligns well with SSL approaches in natural language processing. Most MAEs are built on Transformer-based architectures in which visual features are not regularized, as opposed to their convolutional neural network (CNN) based counterparts, which can potentially hinder their performance. To address this, we introduce MAGMA, a novel batch-wide, layer-wise regularization loss applied to the representations of different Transformer layers. We demonstrate that by plugging in the proposed regularization loss, one can significantly improve the performance of MAE-based models. We further demonstrate the impact of the proposed loss on optimizing other generic SSL approaches (such as VICReg and SimCLR), broadening the impact of the proposed approach. Our code base can be found at https://github.com/adondera/magma.
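The abstract describes a batch-wide, layer-wise regularization applied to Transformer representations. The paper and the linked repository define the actual loss; purely as an illustration, the sketch below shows one generic way a manifold-style regularizer over a batch of features from two layers could look. The function name `manifold_regularization_loss`, the use of normalized pairwise-distance matrices, and the mean-squared-difference penalty are assumptions for this sketch, not the authors' implementation.

```python
import numpy as np

def pairwise_distances(x):
    # x: (batch, dim) -> (batch, batch) Euclidean distance matrix,
    # computed batch-wide via the expansion ||a-b||^2 = |a|^2 + |b|^2 - 2ab.
    sq = np.sum(x ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * x @ x.T
    return np.sqrt(np.maximum(d2, 0.0))  # clamp tiny negatives from round-off

def manifold_regularization_loss(feats_a, feats_b, eps=1e-8):
    # Illustrative (assumed) regularizer: encourage the batch-wide
    # pairwise-distance structure of representations from one layer
    # (feats_a) to match that of another layer (feats_b).
    da = pairwise_distances(feats_a)
    db = pairwise_distances(feats_b)
    da = da / (da.max() + eps)  # scale-normalize each distance matrix
    db = db / (db.max() + eps)
    return float(np.mean((da - db) ** 2))
```

In a training loop, such a term would be added to the MAE reconstruction loss for selected pairs of layers; because the distance matrices are normalized, the sketch is invariant to a global rescaling of the features.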