Time Does Tell: Self-Supervised Time-Tuning of Dense Image Representations

Mohammadreza Salehi, Efstratios Gavves, Cees G.M. Snoek, Yuki M. Asano; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 16536-16547

Abstract


Spatially dense self-supervised learning is a rapidly growing problem domain with promising applications for unsupervised segmentation and pretraining for dense downstream tasks. Despite the abundance of temporal data in the form of videos, this information-rich source has been largely overlooked. Our paper aims to address this gap by proposing a novel approach that incorporates temporal consistency in dense self-supervised learning. While methods designed solely for images struggle to even match their image-level performance on videos, our method improves representation quality not only for videos but also for images. Our approach, which we call time-tuning, starts from image-pretrained models and fine-tunes them with a novel self-supervised temporal-alignment clustering loss on unlabeled videos. This effectively facilitates the transfer of high-level information from videos to image representations. Time-tuning improves the state-of-the-art by 8-10% for unsupervised semantic segmentation on videos and matches it for images. We believe this method paves the way for further self-supervised scaling by leveraging the abundant availability of videos. The implementation can be found at https://github.com/SMSD75/Timetuning
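To make the core idea of a temporal-alignment clustering loss concrete, below is a minimal NumPy sketch. It is an illustration under assumptions, not the paper's actual loss: patch features of two neighbouring frames are softly assigned to a shared set of cluster prototypes, and a cross-entropy term encourages the assignments of corresponding patches to agree over time. The function names (`soft_assignments`, `temporal_alignment_loss`) and the simplifying choice of identity patch correspondence are hypothetical; the paper's method additionally handles feature extraction, patch matching across frames, and optimization details not shown here.

```python
import numpy as np

def soft_assignments(features, prototypes, temperature=0.1):
    """Softmax cluster assignments of patch features to prototypes.

    features:   (P, D) dense patch features for one frame (assumed given).
    prototypes: (K, D) learnable cluster centers.
    Returns:    (P, K) rows summing to 1.
    """
    logits = features @ prototypes.T / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    exp = np.exp(logits)
    return exp / exp.sum(axis=1, keepdims=True)

def temporal_alignment_loss(frame_a, frame_b, prototypes):
    """Cross-entropy between cluster assignments of two neighbouring frames.

    Treats frame_a's assignments as targets (in a real training loop these
    would be stop-gradient / teacher outputs) and frame_b's as predictions,
    assuming patches are already in temporal correspondence (identity here).
    """
    q = soft_assignments(frame_a, prototypes)  # targets
    p = soft_assignments(frame_b, prototypes)  # predictions
    return float(-(q * np.log(p + 1e-9)).sum(axis=1).mean())
```

By Gibbs' inequality the loss is minimized when the two frames' assignments coincide, so perfectly temporally consistent features incur only the entropy of the assignment distribution, while drifting assignments are penalized.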

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Salehi_2023_ICCV,
    author    = {Salehi, Mohammadreza and Gavves, Efstratios and Snoek, Cees G.M. and Asano, Yuki M.},
    title     = {Time Does Tell: Self-Supervised Time-Tuning of Dense Image Representations},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {16536-16547}
}