On Learning Contrastive Representations for Learning With Noisy Labels

Li Yi, Sheng Liu, Qi She, A. Ian McLeod, Boyu Wang; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, pp. 16682-16691

Abstract


Deep neural networks easily memorize noisy labels when trained with a softmax cross-entropy (CE) loss. Previous studies that attempted to address this issue focus on incorporating a noise-robust loss function into the CE loss. However, while this alleviates the memorization issue, it still persists due to the non-robust CE loss. To address this issue, we focus on learning robust contrastive representations of data on which the classifier can hardly memorize the label noise under the CE loss. We propose a novel contrastive regularization function to learn such representations over noisy data, where the label noise does not dominate the representation learning. By theoretically investigating the representations induced by the proposed regularization function, we reveal that the learned representations retain information related to true labels and discard information related to corrupted labels. Moreover, our theoretical results also indicate that the learned representations are robust to label noise. Experiments on benchmark datasets demonstrate the efficacy of our method.
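
The abstract does not spell out the regularizer; as a rough illustration only, the sketch below combines a CE loss on (possibly noisy) labels with a generic InfoNCE-style contrastive term computed over two augmented views of each image. The function names (encoder, classifier, contrastive_regularizer), the weight lam, and the specific form of the contrastive term are all assumptions for this sketch and are not the paper's exact formulation.

    # Minimal PyTorch sketch: CE loss plus a contrastive representation term.
    # NOTE: illustrative assumption only; the paper defines its own
    # contrastive regularization function, which may differ from this
    # InfoNCE-style term.
    import torch
    import torch.nn.functional as F

    def contrastive_regularizer(z1, z2, temperature=0.5):
        """InfoNCE-style loss pulling two views of the same image together."""
        z1 = F.normalize(z1, dim=1)
        z2 = F.normalize(z2, dim=1)
        z = torch.cat([z1, z2], dim=0)            # (2N, d)
        sim = z @ z.t() / temperature             # (2N, 2N) cosine similarities
        n = z1.size(0)
        mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
        sim.masked_fill_(mask, float('-inf'))     # exclude self-similarity
        # positives: row i of the first view pairs with row i of the second
        targets = torch.cat([torch.arange(n, 2 * n),
                             torch.arange(0, n)]).to(z.device)
        return F.cross_entropy(sim, targets)

    def total_loss(encoder, classifier, x1, x2, labels, lam=1.0):
        """CE on (possibly noisy) labels plus a contrastive term on features."""
        z1, z2 = encoder(x1), encoder(x2)         # two augmented views
        ce = F.cross_entropy(classifier(z1), labels)
        reg = contrastive_regularizer(z1, z2)
        return ce + lam * reg

Here lam trades off label fitting against representation robustness; in the spirit of the abstract, the contrastive term is meant to keep the label noise from dominating what the encoder learns.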

Related Material


[bibtex]
@InProceedings{Yi_2022_CVPR,
    author    = {Yi, Li and Liu, Sheng and She, Qi and McLeod, A. Ian and Wang, Boyu},
    title     = {On Learning Contrastive Representations for Learning With Noisy Labels},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2022},
    pages     = {16682-16691}
}