KDC-MAE: Knowledge Distilled Contrastive Mask Auto-Encoder

Maheswar Bora, Saurabh Atreya, Aritra Mukherjee, Abhijit Das; Proceedings of the Winter Conference on Applications of Computer Vision (WACV), 2025, pp. 7511-7521

Abstract


In this work, we attempted to extend and showcase a way forward for the self-supervised learning (SSL) paradigm by combining contrastive learning, self-distillation (knowledge distillation), and masked data modelling, the three major SSL frameworks, to learn a joint and coordinated representation. The proposed technique learns through the collaborative power of the different SSL learning objectives. To jointly learn these objectives, we propose a new SSL architecture, KDC-MAE, together with a complementary masking strategy to learn the modular correspondence and a weighted scheme to combine the objectives in a coordinated manner. Experimental results show that the complementary contrastive masking, along with the KD learning objective, leads to better learning across multiple modalities and multiple tasks.
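To make the abstract's weighted combination of objectives concrete, below is a minimal sketch (not the authors' code) of how complementary masks and the three SSL losses could be coordinated. All function names, the EMA-teacher choice, and the loss weights are assumptions for illustration; the paper defines the actual architecture and weighting.

```python
# Hypothetical sketch of combining MAE reconstruction, contrastive, and
# knowledge-distillation losses over two complementary masked views.
import torch
import torch.nn.functional as F

def complementary_masks(num_patches: int, mask_ratio: float = 0.5):
    """Return a random boolean patch mask and its exact complement."""
    perm = torch.randperm(num_patches)
    cut = int(num_patches * mask_ratio)
    mask = torch.zeros(num_patches, dtype=torch.bool)
    mask[perm[:cut]] = True
    return mask, ~mask  # second view sees exactly the patches the first hides

def kdc_mae_loss(recon_a, recon_b, target,          # decoder outputs, pixel target
                 z_a, z_b,                          # student embeddings of the two views
                 t_a, t_b,                          # teacher (e.g., EMA) embeddings
                 w_recon=1.0, w_con=0.5, w_kd=0.5,  # hypothetical loss weights
                 tau=0.07):
    # Masked-autoencoding objective: reconstruct pixels from each masked view.
    l_recon = F.mse_loss(recon_a, target) + F.mse_loss(recon_b, target)

    # Contrastive objective: embeddings of the two complementary views of the
    # same sample should agree (InfoNCE over the batch).
    za, zb = F.normalize(z_a, dim=-1), F.normalize(z_b, dim=-1)
    logits = za @ zb.t() / tau
    labels = torch.arange(za.size(0), device=za.device)
    l_con = F.cross_entropy(logits, labels)

    # Self-distillation objective: match the detached teacher embeddings,
    # crossed over the two views.
    l_kd = F.mse_loss(z_a, t_b.detach()) + F.mse_loss(z_b, t_a.detach())

    # Weighted, coordinated combination of the three objectives.
    return w_recon * l_recon + w_con * l_con + w_kd * l_kd
```

The losses are applied symmetrically to both views so that neither mask is privileged; whether the actual method uses MSE or a KL-based distillation term is a detail only the paper settles.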

Related Material


[pdf]
[bibtex]
@InProceedings{Bora_2025_WACV,
    author    = {Bora, Maheswar and Atreya, Saurabh and Mukherjee, Aritra and Das, Abhijit},
    title     = {KDC-MAE: Knowledge Distilled Contrastive Mask Auto-Encoder},
    booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
    month     = {February},
    year      = {2025},
    pages     = {7511-7521}
}