SSSD: Self-Supervised Self Distillation

Wei-Chi Chen, Wei-Ta Chu; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2023, pp. 2770-2777

Abstract


With labeled data, self distillation (SD) has been proposed to develop compact but effective models without a complex teacher model available in advance. Such approaches need labeled data to guide the self distillation process. Inspired by self-supervised (SS) learning, we propose a self-supervised self distillation (SSSD) approach in this work. Based on an unlabeled image dataset, a model is constructed to learn visual representations in a self-supervised manner. This pre-trained model is then adopted to extract visual representations of the target dataset and to generate pseudo labels via clustering. The pseudo labels guide the SD process, and thus enable SD to proceed in an unsupervised way (no data labels are required at all). We verify this idea through evaluations on the CIFAR-10, CIFAR-100, and ImageNet-1K datasets, and demonstrate the effectiveness of this unsupervised SD approach. We also show that SSSD outperforms similar frameworks.
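The pseudo-labeling step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the features here are random stand-ins for outputs of a self-supervised encoder, the clustering is plain k-means, and the cluster count is an assumed hyperparameter (e.g. 10 for CIFAR-10). The resulting cluster indices would then serve as pseudo labels supervising the self-distillation loss.

```python
import numpy as np

def kmeans_pseudo_labels(features, k, iters=20, seed=0):
    """Assign each feature vector a pseudo label via plain k-means.

    A hypothetical stand-in for the clustering step in SSSD; the
    paper's actual clustering method may differ.
    """
    rng = np.random.default_rng(seed)
    # Initialize centers from randomly chosen feature vectors.
    centers = features[rng.choice(len(features), size=k, replace=False)].copy()
    for _ in range(iters):
        # Assign each sample to its nearest center.
        dists = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=-1)
        labels = dists.argmin(axis=1)
        # Update each center to the mean of its assigned samples.
        for j in range(k):
            members = features[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return labels

# Stand-in for representations extracted by the self-supervised encoder.
rng = np.random.default_rng(0)
features = rng.normal(size=(60, 8))
pseudo = kmeans_pseudo_labels(features, k=5)
```

In a full pipeline, `pseudo` would replace ground-truth labels when computing the supervised component of the self-distillation objective.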

Related Material


[bibtex]
@InProceedings{Chen_2023_WACV, author = {Chen, Wei-Chi and Chu, Wei-Ta}, title = {SSSD: Self-Supervised Self Distillation}, booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)}, month = {January}, year = {2023}, pages = {2770-2777} }