Accelerating Self-Supervised Learning via Efficient Training Strategies

Mustafa Taha Koçyiğit, Timothy M. Hospedales, Hakan Bilen; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2023, pp. 5654-5664

Abstract


Recently, the focus of the computer vision community has shifted from expensive supervised learning towards self-supervised learning of visual representations. While the performance gap between supervised and self-supervised learning has been narrowing, the time for training self-supervised deep networks remains an order of magnitude larger than that of their supervised counterparts, which hinders progress, imposes a carbon cost, and limits societal benefits to institutions with substantial resources. Motivated by these issues, this paper investigates reducing the training time of recent self-supervised methods via various model-agnostic strategies that have not previously been applied to this problem. In particular, we study three strategies: an extendable cyclic learning rate schedule, a matched progressive schedule for augmentation magnitude and image resolution, and a hard positive mining strategy based on augmentation difficulty. We show that the three strategies combined yield up to a 2.7-fold speed-up in the training time of several self-supervised methods while retaining performance comparable to the standard self-supervised learning setting.
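
The three strategies above amount to per-step schedules over learning rate, input resolution, and augmentation strength. Below is a minimal, hypothetical sketch of how such schedules might be computed; the function names (cyclic_lr, progressive_schedule, difficulty_weight), the linear ramps, the cosine cycle shape, and all hyperparameter values are assumptions for illustration, not the paper's actual implementation.

import math

def cyclic_lr(step, cycle_len, base_lr=0.05, min_lr=1e-4):
    # Cosine learning rate cycle that restarts every `cycle_len` steps;
    # training can be extended simply by running additional cycles.
    t = (step % cycle_len) / cycle_len
    return min_lr + 0.5 * (base_lr - min_lr) * (1 + math.cos(math.pi * t))

def progressive_schedule(step, total_steps, min_res=96, max_res=224,
                         min_mag=0.2, max_mag=1.0):
    # Jointly ramp image resolution and augmentation magnitude so that
    # cheap, low-resolution early steps also use weaker augmentations.
    p = min(step / total_steps, 1.0)
    resolution = int(min_res + p * (max_res - min_res))
    magnitude = min_mag + p * (max_mag - min_mag)
    return resolution, magnitude

def difficulty_weight(magnitude, tau=0.5):
    # Hard-positive-mining-style reweighting: up-weight positive pairs
    # produced by stronger augmentations (a hypothetical functional form).
    return math.exp(magnitude / tau)

# Example: query the schedules at a few points in a 50k-step run.
for step in (0, 25_000, 50_000):
    res, mag = progressive_schedule(step, total_steps=50_000)
    print(f"step={step}: lr={cyclic_lr(step, cycle_len=50_000):.4f}, "
          f"res={res}, aug_mag={mag:.2f}, weight={difficulty_weight(mag):.2f}")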

Related Material


@InProceedings{Kocyigit_2023_WACV,
    author    = {Ko\c{c}yi\u{g}it, Mustafa Taha and Hospedales, Timothy M. and Bilen, Hakan},
    title     = {Accelerating Self-Supervised Learning via Efficient Training Strategies},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2023},
    pages     = {5654-5664}
}