LAFEAT: Piercing Through Adversarial Defenses With Latent Features

Yunrui Yu, Xitong Gao, Cheng-Zhong Xu; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 5735-5745

Abstract

Deep convolutional neural networks are susceptible to adversarial attacks: they can be easily deceived into giving an incorrect output by adding a tiny perturbation to the input. This presents a great challenge in making CNNs robust against such attacks, and an influx of new defense techniques has been proposed to this end. In this paper, we show that latent features in certain "robust" models are surprisingly susceptible to adversarial attacks. On top of this, we introduce LAFEAT, a unified ℓ∞ white-box attack algorithm that harnesses latent features in its gradient descent steps. We show that not only is it computationally much more efficient for successful attacks, but it is also a stronger adversary than the current state-of-the-art across a wide range of defense mechanisms. This suggests that model robustness could be contingent on the effective use of the defender's hidden components, and that it should no longer be viewed from a holistic perspective.
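
The abstract does not spell out how latent features enter the attack's update steps. As a rough, hypothetical sketch (not the authors' actual LAFEAT implementation), the following PyTorch snippet shows one way a PGD-style ℓ∞ attack could fold a latent-feature loss into its gradient descent. The model interface, the auxiliary head aux_head, and all hyperparameters below are illustrative assumptions, not details taken from the paper.

    # Illustrative sketch only -- not the authors' LAFEAT code.
    # Assumptions: `model` returns (latent_feature, logits), and
    # `aux_head` is a hypothetical classifier trained on that latent
    # feature. Hyperparameter values are placeholders.
    import torch
    import torch.nn.functional as F

    def latent_guided_pgd(model, aux_head, x, y, eps=8/255, alpha=2/255,
                          steps=20, latent_weight=0.5):
        """PGD-style l-infinity attack whose loss also uses a latent feature."""
        # Random start inside the l-infinity ball around x.
        x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            latent, logits = model(x_adv)             # assumed model interface
            aux_logits = aux_head(latent.flatten(1))  # logits from latent feature
            # Combine the usual output-layer loss with a latent-feature loss,
            # so gradients also flow through the intermediate representation.
            loss = (F.cross_entropy(logits, y)
                    + latent_weight * F.cross_entropy(aux_logits, y))
            grad = torch.autograd.grad(loss, x_adv)[0]
            # Ascend the loss, then project back into the epsilon ball.
            x_adv = x_adv.detach() + alpha * grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
        return x_adv.detach()

The key design choice this sketch illustrates is that the attack's loss is no longer a function of the final logits alone; an intermediate representation contributes its own gradient signal, which is the general idea the abstract attributes to exploiting latent features.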

Related Material

[pdf] [arXiv]
[bibtex]
@InProceedings{Yu_2021_CVPR,
    author    = {Yu, Yunrui and Gao, Xitong and Xu, Cheng-Zhong},
    title     = {LAFEAT: Piercing Through Adversarial Defenses With Latent Features},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2021},
    pages     = {5735-5745}
}