How Does Contrastive Learning Organize Images?
Abstract
Contrastive learning, a dominant self-supervised technique, encourages similar representations for augmentations of the same input and dissimilar representations for different inputs. Although low contrastive loss often correlates with high classification accuracy, recent studies challenge this direct relationship, spotlighting the crucial role of inductive biases. We examine these biases from a clustering viewpoint, observing that contrastive learning creates locally dense clusters, in contrast to the globally dense clusters produced by supervised learning. To capture this discrepancy, we introduce the Relative Local Density (RLD) metric. While this cluster property can hinder linear classification accuracy, replacing the linear classifier with a Graph Convolutional Network (GCN) based classifier mitigates the issue, boosting accuracy while requiring fewer parameters. The code is available at https://github.com/xsgxlz/How-does-Contrastive-Learning-Organize-Images/tree/main.
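The contrastive objective summarized above is commonly instantiated as an InfoNCE / NT-Xent loss over two augmented views of each image. The sketch below is a generic SimCLR-style illustration of that objective, not the paper's training code; the function name and temperature value are placeholders.

```python
# Generic NT-Xent (SimCLR-style) contrastive loss: two augmentations of the same
# image are pulled together, all other pairs in the batch are pushed apart.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """z1, z2: (N, D) embeddings of two augmented views of the same N images."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D), unit-norm
    sim = z @ z.t() / temperature                        # (2N, 2N) cosine similarities
    sim.fill_diagonal_(float("-inf"))                    # exclude self-similarity
    # The positive for sample i is its other view: i + N (or i - N).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# Usage: encode two augmented batches with the same encoder, then
# loss = nt_xent_loss(encoder(aug1(x)), encoder(aug2(x)))
```

The GCN-based classifier is only described at a high level in the abstract. One plausible reading is that a neighborhood graph over the frozen embeddings lets information propagate within the locally dense clusters that contrastive learning produces, which a single linear probe cannot exploit. The kNN graph construction, single propagation step, and names below (`knn_adjacency`, `GCNClassifier`) are assumptions for illustration, not the authors' architecture.

```python
# Hedged sketch: a one-layer GCN-style classifier over frozen contrastive embeddings,
# using an assumed kNN similarity graph (not the paper's exact construction).
import torch
import torch.nn as nn
import torch.nn.functional as F

def knn_adjacency(z: torch.Tensor, k: int = 10) -> torch.Tensor:
    """Symmetrically normalized adjacency (with self-loops) of a kNN graph over embeddings z: (N, D)."""
    z = F.normalize(z, dim=1)
    sim = z @ z.t()
    topk = sim.topk(k + 1, dim=1).indices                # top-k neighbors; includes the node itself
    a = torch.zeros_like(sim)
    a.scatter_(1, topk, 1.0)
    a = ((a + a.t()) > 0).float()                        # symmetrize
    d_inv_sqrt = a.sum(dim=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * a * d_inv_sqrt.unsqueeze(0)  # D^-1/2 A D^-1/2

class GCNClassifier(nn.Module):
    """Aggregate each embedding over its graph neighborhood, then apply a linear head."""
    def __init__(self, dim: int, num_classes: int):
        super().__init__()
        self.lin = nn.Linear(dim, num_classes)

    def forward(self, z: torch.Tensor, a_hat: torch.Tensor) -> torch.Tensor:
        return self.lin(a_hat @ z)
```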
Related Material
[pdf] [supp] [bibtex]
@InProceedings{Zhang_2024_WACV,
  author    = {Zhang, Yunzhe and Lu, Yao and Xuan, Qi},
  title     = {How Does Contrastive Learning Organize Images?},
  booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) Workshops},
  month     = {January},
  year      = {2024},
  pages     = {497-506}
}