BibTeX:
@InProceedings{Hakim_2023_ICCV,
  author    = {Hakim, Gustavo A. Vargas and Osowiechi, David and Noori, Mehrdad and Cheraghalikhani, Milad and Bahri, Ali and Ben Ayed, Ismail and Desrosiers, Christian},
  title     = {ClusT3: Information Invariant Test-Time Training},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  month     = {October},
  year      = {2023},
  pages     = {6136-6145}
}
ClusT3: Information Invariant Test-Time Training
Abstract
Deep learning models have shown remarkable performance on a broad range of vision tasks. However, they are often vulnerable to domain shifts at test time. Test-time training (TTT) methods attempt to mitigate this vulnerability by solving a secondary task at training time, simultaneously with the main task, and later using it as a self-supervised proxy task at test time. In this work, we propose a novel unsupervised TTT technique based on maximizing the mutual information between multi-scale feature maps and a discrete latent representation, which can be integrated into standard training as an auxiliary clustering task. Experimental results demonstrate competitive classification performance on several popular test-time adaptation benchmarks. The code can be found at: https://github.com/dosowiechi/ClusT3.git
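The objective described above, maximizing mutual information between features and a discrete cluster assignment, can be written as I(X; Z) = H(Z) - H(Z|X). The sketch below is a minimal, hypothetical illustration of such an auxiliary clustering objective in PyTorch; the names ClusteringHead and mutual_information_loss, the 1x1-convolution projector, and all hyperparameters are assumptions made for illustration and are not the actual ClusT3 implementation (see the linked repository for that).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ClusteringHead(nn.Module):
    """Hypothetical projector: maps a feature map onto K soft cluster assignments."""

    def __init__(self, in_channels: int, num_clusters: int = 10):
        super().__init__()
        self.proj = nn.Conv2d(in_channels, num_clusters, kernel_size=1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # Soft assignment of every spatial location to a discrete cluster: (B, K, H, W)
        return F.softmax(self.proj(feats), dim=1)


def mutual_information_loss(assignments: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Negative mutual information I(X; Z) = H(Z) - H(Z|X) between samples and
    their discrete cluster assignments; minimizing it maximizes I(X; Z)."""
    # Treat every spatial location as one sample with a K-dimensional assignment vector.
    k = assignments.size(1)
    p = assignments.permute(0, 2, 3, 1).reshape(-1, k)
    # H(Z|X): mean per-sample entropy -- low when assignments are confident.
    h_cond = -(p * torch.log(p + eps)).sum(dim=1).mean()
    # H(Z): entropy of the marginal assignment -- high when clusters are balanced.
    p_marginal = p.mean(dim=0)
    h_marg = -(p_marginal * torch.log(p_marginal + eps)).sum()
    return h_cond - h_marg


# Usage sketch: add the clustering objective to the main loss at training time,
# and use it alone to adapt the feature extractor at test time (no labels required).
if __name__ == "__main__":
    feats = torch.randn(8, 64, 16, 16)  # e.g. an intermediate feature map
    head = ClusteringHead(in_channels=64, num_clusters=10)
    loss = mutual_information_loss(head(feats))
    print(float(loss))
```

In this kind of setup, the loss would typically be added to the supervised cross-entropy objective during training, and used on its own to adapt the encoder to the shifted test distribution, since it requires no labels.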