Representation Uncertainty in Self-Supervised Learning as Variational Inference

Hiroki Nakamura, Masashi Okada, Tadahiro Taniguchi; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 16484-16493

Abstract


In this study, we propose a novel self-supervised learning (SSL) method that views SSL as variational inference, learning not only representations but also representation uncertainties. SSL learns representations without labels by maximizing the similarity between the representations of different augmented views of an image. Meanwhile, the variational autoencoder (VAE) is an unsupervised representation learning method that trains a probabilistic generative model via variational inference. Although both the VAE and SSL learn representations without labels, their relationship has not been investigated previously. Herein, we clarify the theoretical relationship between SSL and variational inference and propose a novel method, variational inference SimSiam (VI-SimSiam), which predicts representation uncertainty by interpreting SimSiam through variational inference and defining the distribution over the latent space. Our experiments qualitatively show that VI-SimSiam learns uncertainty, as demonstrated by comparing input images with the predicted uncertainties. We also describe the relationship between the estimated uncertainty and classification accuracy.
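
As a rough illustration only (not the authors' implementation), the PyTorch sketch below shows one way an uncertainty-aware SimSiam-style objective could look: a head outputs a unit-norm mean direction plus a positive concentration kappa (large kappa = low uncertainty), and a heteroscedastic variant of SimSiam's negative-cosine loss balances alignment against a term that keeps kappa from collapsing. All module names, layer sizes, and the simplified loss are assumptions; the paper's actual objective is derived from variational inference with an explicitly defined latent-space distribution.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class VariationalSimSiamHead(nn.Module):
    """Toy head mapping a backbone feature to a unit-norm direction and a
    scalar concentration kappa. Names and sizes are illustrative, not the
    architecture from the paper."""

    def __init__(self, in_dim: int = 512, dim: int = 128):
        super().__init__()
        self.projector = nn.Sequential(
            nn.Linear(in_dim, dim), nn.ReLU(inplace=True), nn.Linear(dim, dim)
        )
        self.predictor = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(inplace=True), nn.Linear(dim, dim)
        )
        self.kappa_head = nn.Linear(dim, 1)

    def forward(self, h: torch.Tensor):
        z = self.projector(h)                            # projection of one view
        p = self.predictor(z)                            # prediction of the other view
        kappa = F.softplus(self.kappa_head(z)) + 1e-4    # positive concentration
        return F.normalize(z, dim=-1), F.normalize(p, dim=-1), kappa.squeeze(-1)


def uncertainty_weighted_alignment(p, z_target, kappa):
    """Heteroscedastic stand-in for SimSiam's negative cosine loss:
    kappa * (1 - cos) penalizes misalignment, -log(kappa) prevents the
    concentration from collapsing to zero; the stop-gradient is kept."""
    cos = (p * z_target.detach()).sum(dim=-1)
    return (kappa * (1.0 - cos) - torch.log(kappa)).mean()


if __name__ == "__main__":
    # Two augmented views, faked here as random backbone features.
    head = VariationalSimSiamHead()
    h1, h2 = torch.randn(8, 512), torch.randn(8, 512)
    z1, p1, k1 = head(h1)
    z2, p2, k2 = head(h2)
    loss = 0.5 * (uncertainty_weighted_alignment(p1, z2, k1)
                  + uncertainty_weighted_alignment(p2, z1, k2))
    loss.backward()
    print(float(loss))
```

In this simplified loss, the per-sample optimum is kappa = 1 / (1 - cos), so views that align well receive a high concentration (low predicted uncertainty), mirroring the qualitative behavior described in the abstract.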

Related Material


[pdf] [supp] [arXiv]
@InProceedings{Nakamura_2023_ICCV,
    author    = {Nakamura, Hiroki and Okada, Masashi and Taniguchi, Tadahiro},
    title     = {Representation Uncertainty in Self-Supervised Learning as Variational Inference},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {16484-16493}
}