TempT: Temporal Consistency for Test-Time Adaptation

Onur Cezmi Mutlu, Mohammadmahdi Honarmand, Saimourya Surabhi, Dennis P. Wall; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2023, pp. 5917-5923

Abstract


We introduce Temporal consistency for Test-time adaptation (TempT), a novel method for test-time adaptation on videos that uses the temporal coherence of predictions across sequential frames as a self-supervision signal. TempT has broad potential applications in computer vision tasks, including facial expression recognition (FER) in videos. We evaluate TempT's performance on the AffWild2 dataset. Our approach focuses solely on the unimodal visual aspect of the data and uses a popular 2D CNN backbone, in contrast to the larger sequential or attention-based models used in other approaches. Our preliminary experimental results show that TempT performs competitively with the results reported in previous years, and its efficacy provides a compelling proof of concept for its use in various real-world applications.
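The paper page itself includes no code, but the core idea stated in the abstract — penalizing disagreement between predictions on consecutive frames as a self-supervision signal — can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the authors' implementation: the loss form (mean squared difference of consecutive softmax outputs) and the function names are hypothetical.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the class axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def temporal_consistency_loss(frame_logits):
    """Illustrative temporal-consistency signal (not the paper's exact loss).

    frame_logits: array of shape (T, num_classes) holding the backbone's
    logits for T sequential video frames. Returns the mean squared
    difference between class probabilities of consecutive frames; a
    temporally coherent predictor drives this toward zero.
    """
    probs = softmax(frame_logits)        # (T, num_classes)
    diffs = probs[1:] - probs[:-1]       # (T-1, num_classes) frame-to-frame drift
    return float(np.mean(diffs ** 2))
```

At test time such a loss could, in principle, be minimized with a few gradient steps on the backbone's parameters for each incoming video clip, since it requires no labels.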

Related Material


BibTeX
@InProceedings{Mutlu_2023_CVPR,
    author    = {Mutlu, Onur Cezmi and Honarmand, Mohammadmahdi and Surabhi, Saimourya and Wall, Dennis P.},
    title     = {TempT: Temporal Consistency for Test-Time Adaptation},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2023},
    pages     = {5917-5923}
}