Enhancing Adversarial Robustness via Test-Time Transformation Ensembling

Juan C. Pérez, Motasem Alfarra, Guillaume Jeanneret, Laura Rueda, Ali Thabet, Bernard Ghanem, Pablo Arbeláez; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2021, pp. 81-91

Abstract


Deep learning models are prone to being fooled by imperceptible perturbations known as adversarial attacks. In this work, we study how equipping models with Test-time Transformation Ensembling (TTE) can work as a reliable defense against such attacks. While transforming the input data, both at train and test times, is known to enhance model performance, its effects on adversarial robustness have not been studied. Here, we present a comprehensive empirical study of the impact of TTE, in the form of widely-used image transforms, on adversarial robustness. We show that TTE consistently improves model robustness against a variety of powerful attacks without any need for re-training, and that this improvement comes at virtually no trade-off with accuracy on clean samples. Finally, we show that the benefits of TTE transfer even to the certified robustness domain, in which TTE provides sizable and consistent improvements.
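To make the idea concrete, below is a minimal sketch of test-time transformation ensembling as described in the abstract: predictions are averaged over several transformed copies of the input, with no retraining of the model. The specific transform set (identity and horizontal flip) and the aggregation rule (averaging softmax scores) are illustrative assumptions, not necessarily the exact configuration used in the paper.

```python
import torch
import torch.nn.functional as F

def tte_predict(model, x, transforms):
    """Ensemble predictions over test-time transforms (illustrative sketch).

    model:      a trained classifier mapping image batches to logits
    x:          a batch of images, shape (N, C, H, W)
    transforms: callables mapping an image batch to a transformed batch
    """
    model.eval()
    with torch.no_grad():
        # Apply each transform, collect softmax scores, and average them.
        probs = torch.stack([F.softmax(model(t(x)), dim=1) for t in transforms])
    return probs.mean(dim=0)  # shape (N, num_classes)

# Hypothetical transform set: identity and horizontal flip.
transforms = [
    lambda x: x,
    lambda x: torch.flip(x, dims=[-1]),
]
```

In this sketch, the ensembled probabilities returned by `tte_predict` would replace the model's single-pass prediction at evaluation time; the underlying weights are untouched, which mirrors the abstract's claim that TTE requires no re-training.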

Related Material


@InProceedings{Perez_2021_ICCV,
  author    = {P\'erez, Juan C. and Alfarra, Motasem and Jeanneret, Guillaume and Rueda, Laura and Thabet, Ali and Ghanem, Bernard and Arbel\'aez, Pablo},
  title     = {Enhancing Adversarial Robustness via Test-Time Transformation Ensembling},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
  month     = {October},
  year      = {2021},
  pages     = {81-91}
}