Distill on the Go: Online Knowledge Distillation in Self-Supervised Learning

Prashant Bhat, Elahe Arani, Bahram Zonooz; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2021, pp. 2678-2687

Abstract


Self-supervised representation learning solves pretext prediction tasks that do not require labeled data in order to learn feature representations. For vision tasks, pretext tasks such as predicting rotation or solving jigsaw puzzles are created solely from the input data. Yet, predicting this known information helps in learning representations useful for downstream tasks. However, recent works have shown that wider and deeper models benefit more from self-supervised learning than smaller models do. To address the issue of self-supervised pre-training of smaller models, we propose Distill-on-the-Go (DoGo), a self-supervised learning paradigm that uses single-stage online knowledge distillation to improve the representation quality of smaller models. We employ a deep mutual learning strategy in which models of different capacities collaboratively learn from each other. Specifically, each model is trained using self-supervised learning along with a distillation loss that aligns each model's softmax probabilities over similarity scores with those of its peer model. We conduct extensive experiments on multiple benchmark datasets, learning objectives, and architectures to demonstrate the potential of our proposed method.
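The abstract gives enough detail to sketch the combined objective: each peer minimizes its own contrastive loss plus a distillation term that matches its softmax over similarity scores to that of the other model. The PyTorch snippet below is a minimal sketch assembled from that description, not the authors' released implementation; the NT-Xent form of the self-supervised loss, the temperature tau, the weighting alpha, and all function names are assumptions for illustration.

    # Hedged sketch (not the authors' code): one training step of mutual
    # self-supervised distillation between two peer models, per the abstract.
    import torch
    import torch.nn.functional as F

    def similarity_scores(z1, z2, temperature):
        """Cosine-similarity scores between the two augmented views (N x N)."""
        z1 = F.normalize(z1, dim=1)
        z2 = F.normalize(z2, dim=1)
        return z1 @ z2.t() / temperature

    def contrastive_loss(scores):
        """NT-Xent-style loss (assumed): diagonal entries are the positive pairs."""
        targets = torch.arange(scores.size(0), device=scores.device)
        return F.cross_entropy(scores, targets)

    def distillation_loss(own_scores, peer_scores):
        """KL term aligning this model's softmax over similarity scores
        with the peer model's softmax, row by row."""
        return F.kl_div(
            F.log_softmax(own_scores, dim=1),
            F.softmax(peer_scores.detach(), dim=1),
            reduction="batchmean",
        )

    def dogo_step(model_a, model_b, view1, view2, tau=0.5, alpha=1.0):
        """One mutual-learning step: each model gets its own contrastive loss
        plus a distillation term toward the peer's similarity distribution."""
        za1, za2 = model_a(view1), model_a(view2)
        zb1, zb2 = model_b(view1), model_b(view2)

        scores_a = similarity_scores(za1, za2, tau)
        scores_b = similarity_scores(zb1, zb2, tau)

        loss_a = contrastive_loss(scores_a) + alpha * distillation_loss(scores_a, scores_b)
        loss_b = contrastive_loss(scores_b) + alpha * distillation_loss(scores_b, scores_a)
        return loss_a, loss_b

In this reading, the two losses are backpropagated through their respective models so that the larger and smaller networks update each other in a single training stage; the detach on the peer's scores reflects the usual mutual-learning convention and is likewise an assumption.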

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Bhat_2021_CVPR,
    author    = {Bhat, Prashant and Arani, Elahe and Zonooz, Bahram},
    title     = {Distill on the Go: Online Knowledge Distillation in Self-Supervised Learning},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2021},
    pages     = {2678-2687}
}