Can Self-Supervised Representation Learning Methods Withstand Distribution Shifts and Corruptions?

Prakash Chandra Chhipa, Johan Rodahl Holmgren, Kanjar De, Rajkumar Saini, Marcus Liwicki; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2023, pp. 4467-4476

Abstract


Self-supervised representation learning (SSL) in computer vision aims to leverage the inherent structure and relationships within data to learn meaningful representations without explicit human annotation, enabling a holistic understanding of visual scenes. Robustness in vision machine learning ensures reliable and consistent performance, enhancing generalization, adaptability, and resistance to noise, variations, and adversarial attacks. Self-supervised representation learning paradigms, namely contrastive learning, knowledge distillation, mutual information maximization, and clustering, are considered to have advanced invariant representation learning. This work investigates the robustness of representations learned by these SSL paradigms under distribution shifts and image corruptions, through detailed experiments in computer vision. The empirical analysis demonstrates a clear relationship between the performance of the learned representations and the severity of distribution shifts and corruptions: higher levels of shift and corruption significantly diminish the robustness of the learned representations. These findings highlight the critical impact of distribution shifts and image corruptions on the performance and resilience of SSL methods, and emphasize the need for effective strategies to mitigate their adverse effects. The study strongly advocates that future research in self-supervised representation learning prioritize safety and robustness to ensure practical applicability. The source code and results are available on GitHub.
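
As an illustration of the kind of robustness probe the abstract describes, the sketch below measures how much a frozen encoder's features drift as a synthetic corruption grows in severity. It is a minimal sketch under stated assumptions, not the authors' protocol: Gaussian noise stands in as one example image corruption, a supervised ImageNet ResNet-50 stands in for an SSL-pretrained backbone (SimCLR, DINO, or similar weights could be swapped in), and mean cosine similarity between clean and corrupted features is used as a simple drift proxy rather than the paper's own evaluation metrics.

```python
# Minimal sketch (not the authors' exact protocol): probe how a frozen
# encoder's representation drifts as additive Gaussian noise grows in
# severity. An ImageNet-pretrained ResNet-50 stands in for an
# SSL-pretrained backbone.
import torch
import torch.nn.functional as F
from torchvision.models import resnet50


def corrupt(images: torch.Tensor, severity: int) -> torch.Tensor:
    """Additive Gaussian noise whose strength grows with severity (1..5),
    loosely following the severity-level convention of corruption
    benchmarks such as ImageNet-C."""
    sigma = 0.04 * severity
    return (images + sigma * torch.randn_like(images)).clamp(0.0, 1.0)


@torch.no_grad()
def representation_drift(encoder: torch.nn.Module, images: torch.Tensor) -> dict:
    """Mean cosine similarity between clean and corrupted features per
    severity level; lower similarity suggests less robust representations."""
    clean = encoder(images)
    drift = {}
    for severity in range(1, 6):
        corrupted = encoder(corrupt(images, severity))
        drift[severity] = F.cosine_similarity(clean, corrupted, dim=1).mean().item()
    return drift


if __name__ == "__main__":
    encoder = resnet50(weights="IMAGENET1K_V2")  # stand-in for an SSL backbone
    encoder.fc = torch.nn.Identity()             # keep the 2048-d backbone features
    encoder.eval()

    # Dummy batch for illustration; a real study would use a validation
    # loader with the model's preprocessing transforms applied.
    batch = torch.rand(8, 3, 224, 224)
    for severity, sim in representation_drift(encoder, batch).items():
        print(f"severity {severity}: mean cosine similarity {sim:.3f}")
```

In a full evaluation one would replace the dummy batch with a real corrupted benchmark and report downstream performance (e.g., linear-probe accuracy) per corruption type and severity, which is the level at which the paper draws its conclusions.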

Related Material


[pdf]
[bibtex]
@InProceedings{Chhipa_2023_ICCV,
    author    = {Chhipa, Prakash Chandra and Holmgren, Johan Rodahl and De, Kanjar and Saini, Rajkumar and Liwicki, Marcus},
    title     = {Can Self-Supervised Representation Learning Methods Withstand Distribution Shifts and Corruptions?},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
    month     = {October},
    year      = {2023},
    pages     = {4467-4476}
}