Temporal Knowledge Consistency for Unsupervised Visual Representation Learning

Weixin Feng, Yuanjiang Wang, Lihua Ma, Ye Yuan, Chi Zhang; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 10170-10180

Abstract


The instance discrimination paradigm has become dominant in unsupervised learning. It typically adopts a teacher-student framework, in which the teacher provides embedded knowledge as a supervision signal for the student. The student learns meaningful representations by enforcing instance spatial consistency with the views from the teacher. However, the outputs of the teacher can vary dramatically on the same instance during different training stages, introducing unexpected noise and leading to catastrophic forgetting caused by inconsistent objectives. In this paper, we first integrate instance temporal consistency into the current instance discrimination paradigm, and propose a novel and strong algorithm named Temporal Knowledge Consistency (TKC). Specifically, TKC dynamically ensembles the knowledge of temporal teachers and adaptively selects useful information according to its importance for learning instance temporal consistency. Experimental results show that TKC learns better visual representations with both ResNet and AlexNet under the linear evaluation protocol, and transfers well to downstream tasks. All experiments suggest the effectiveness and generalization of our method. Code will be made available.
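The abstract describes TKC only at a high level, so the following is a hypothetical sketch (not the paper's actual implementation) of one way the core idea could look: keep embeddings of the same instance from several temporal teacher snapshots, weight each teacher adaptively (here, by softmax over its agreement with the most recent teacher, an assumed importance proxy), and combine them into a single consistency target for the student.

```python
import numpy as np


def l2_normalize(x):
    """Project an embedding onto the unit hypersphere."""
    return x / np.linalg.norm(x)


def temporal_consistency_target(teacher_embs, temperature=0.1):
    """Combine embeddings of one instance from several temporal teachers.

    Hypothetical sketch: `teacher_embs` holds embeddings of the same
    instance produced by teacher snapshots from different training
    stages (oldest first). Teachers that agree more with the most
    recent snapshot get higher weight via a softmax over cosine
    similarity; the weighted sum is re-normalized to form the target.
    """
    teacher_embs = [l2_normalize(t) for t in teacher_embs]
    ref = teacher_embs[-1]  # most recent teacher as the reference
    agreements = np.array([float(ref @ t) for t in teacher_embs])
    weights = np.exp(agreements / temperature)
    weights /= weights.sum()  # adaptive importance weights, sum to 1
    target = l2_normalize(sum(w * t for w, t in zip(weights, teacher_embs)))
    return target, weights
```

The student would then be trained to match `target` for each instance, so the supervision signal varies less across training stages than a single, constantly changing teacher.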

Related Material


[bibtex]
@InProceedings{Feng_2021_ICCV, author = {Feng, Weixin and Wang, Yuanjiang and Ma, Lihua and Yuan, Ye and Zhang, Chi}, title = {Temporal Knowledge Consistency for Unsupervised Visual Representation Learning}, booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)}, month = {October}, year = {2021}, pages = {10170-10180} }