TTTFlow: Unsupervised Test-Time Training With Normalizing Flow

David Osowiechi, Gustavo A. Vargas Hakim, Mehrdad Noori, Milad Cheraghalikhani, Ismail Ben Ayed, Christian Desrosiers; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2023, pp. 2126-2134

Abstract


A major problem of deep neural networks for image classification is their vulnerability to domain changes at test time. Recent methods have proposed to address this problem with test-time training (TTT), where a two-branch model is trained to learn a main classification task and also a self-supervised task used to perform test-time adaptation. However, these techniques require defining a proxy task specific to the target application. To tackle this limitation, we propose TTTFlow: a Y-shaped architecture using an unsupervised head based on Normalizing Flows to learn the normal distribution of latent features and detect domain shifts in test examples. At inference, keeping the unsupervised head fixed, we adapt the model to domain-shifted examples by maximizing the log-likelihood of the Normalizing Flow. Our results show that our method significantly improves accuracy over previous works.
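The core mechanism can be illustrated with a toy sketch. This is not the paper's implementation: here the "flow head" is simplified to a fixed standard-normal density over one-dimensional features, the "encoder" is a single learnable bias, and the domain shift is a simulated offset of the inputs. The sketch only shows the adaptation principle the abstract describes: with the unsupervised head frozen, update the encoder by gradient ascent on the flow's log-likelihood of the test batch.

```python
import math
import random

def flow_log_likelihood(feats):
    """Average log-density of features under the frozen flow head.

    Stand-in for the Normalizing Flow: a fixed standard-normal
    density log N(h; 0, 1), kept frozen during adaptation.
    """
    return sum(-0.5 * h * h - 0.5 * math.log(2 * math.pi)
               for h in feats) / len(feats)

def adapt_encoder(x_batch, bias=0.0, lr=0.1, steps=200):
    """Test-time adaptation: gradient ascent on the flow log-likelihood.

    Toy encoder h = x - bias; only `bias` is updated, mimicking
    adapting the feature extractor while the flow stays fixed.
    """
    for _ in range(steps):
        feats = [x - bias for x in x_batch]
        # Analytic gradient of mean(-0.5 * (x - bias)^2) w.r.t. bias
        grad = sum(feats) / len(feats)
        bias += lr * grad
    return bias

random.seed(0)
# Source-like features follow the flow's learned (standard normal) density.
source_like = [random.gauss(0.0, 1.0) for _ in range(256)]
# Simulated domain shift at test time: a constant offset of the inputs.
shifted = [x + 3.0 for x in source_like]

bias = adapt_encoder(shifted)
adapted_feats = [x - bias for x in shifted]
```

After adaptation, the recovered bias absorbs the shift, and the adapted features score a strictly higher log-likelihood under the frozen flow than the shifted ones did, which is exactly the adaptation signal TTTFlow exploits.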

Related Material


@InProceedings{Osowiechi_2023_WACV,
    author    = {Osowiechi, David and Hakim, Gustavo A. Vargas and Noori, Mehrdad and Cheraghalikhani, Milad and Ben Ayed, Ismail and Desrosiers, Christian},
    title     = {TTTFlow: Unsupervised Test-Time Training With Normalizing Flow},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2023},
    pages     = {2126-2134}
}