Adversarial Attacks Are Reversible With Natural Supervision

Chengzhi Mao, Mia Chiquier, Hao Wang, Junfeng Yang, Carl Vondrick; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 661-671


We find that images contain intrinsic structure that enables the reversal of many adversarial attacks. Attacks not only cause image classifiers to fail, but also collaterally disrupt incidental structure in the image. We demonstrate that modifying the attacked image at inference time to restore this natural structure reverses many types of attacks, providing a defense. Experiments demonstrate significantly improved robustness for several state-of-the-art models across the CIFAR-10, CIFAR-100, SVHN, and ImageNet datasets. Our results show that the defense remains effective even when the attacker is aware of the defense mechanism. Since our defense is deployed during inference rather than training, it is compatible with pre-trained networks as well as most other defenses. Our results suggest that deep networks are vulnerable to adversarial examples partly because their representations do not enforce the natural structure of images.
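The core idea above is test-time purification: before classification, nudge the input image by gradient descent on a "natural structure" objective, while keeping it within a small ball of the original input so clean images are barely changed. The paper uses a self-supervised contrastive objective for this; as a minimal illustrative sketch, the snippet below substitutes a simple total-variation loss as a hypothetical stand-in for the structure objective (the function names, step counts, and the 8/255 budget are illustrative assumptions, not the paper's settings).

```python
import numpy as np

def tv_loss(x):
    # Total variation: sum of absolute differences between neighboring
    # pixels. Stand-in for the paper's self-supervised structure loss.
    return np.abs(np.diff(x, axis=0)).sum() + np.abs(np.diff(x, axis=1)).sum()

def tv_grad(x):
    # Subgradient of the TV loss with respect to each pixel.
    g = np.zeros_like(x)
    dv = np.sign(np.diff(x, axis=0))  # vertical neighbor differences
    dh = np.sign(np.diff(x, axis=1))  # horizontal neighbor differences
    g[1:, :] += dv
    g[:-1, :] -= dv
    g[:, 1:] += dh
    g[:, :-1] -= dh
    return g

def purify(x_in, steps=50, lr=0.01, eps=8 / 255):
    # Test-time purification: descend on the structure loss while
    # projecting back into an eps-ball around the input, so a clean
    # image is not distorted much. eps and lr are illustrative choices.
    x = x_in.copy()
    for _ in range(steps):
        x -= lr * tv_grad(x)
        x = np.clip(x, x_in - eps, x_in + eps)  # stay near the input
        x = np.clip(x, 0.0, 1.0)                # valid pixel range
    return x

# Usage: purifying a noisy (attack-like) image reduces the structure loss.
rng = np.random.default_rng(0)
x_adv = np.clip(0.5 + 0.1 * rng.standard_normal((16, 16)), 0.0, 1.0)
x_pur = purify(x_adv)
```

The projection step is the key design choice: because the update is constrained to an eps-ball, the defense can run on every input, attacked or not, without a detector deciding which images to modify.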

Related Material

@InProceedings{Mao_2021_ICCV,
  author    = {Mao, Chengzhi and Chiquier, Mia and Wang, Hao and Yang, Junfeng and Vondrick, Carl},
  title     = {Adversarial Attacks Are Reversible With Natural Supervision},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  month     = {October},
  year      = {2021},
  pages     = {661-671}
}