Multi-way Encoding for Robustness

Donghyun Kim, Sarah Bargal, Jianming Zhang, Stan Sclaroff; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2020, pp. 1352-1360

Abstract


Deep models are state-of-the-art for many computer vision tasks, including image classification and object detection. However, it has been shown that deep models are vulnerable to adversarial examples. We highlight how one-hot encoding directly contributes to this vulnerability and propose breaking away from this widely used but highly vulnerable mapping. We demonstrate that by leveraging a different output encoding, multi-way encoding, we decorrelate source and target models, making target models more secure. Our approach makes it more difficult for adversaries to find useful gradients for generating adversarial attacks. We demonstrate robustness against black-box and white-box attacks on four benchmark datasets: MNIST, CIFAR-10, CIFAR-100, and SVHN. The strength of our approach is also presented in the form of an attack on model watermarking, raising challenges in detecting stolen models.
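The core idea — replacing one-hot output targets with dense multi-way codewords and classifying by nearest codeword — can be sketched as below. This is a minimal illustration, not the paper's exact formulation: the random ±1 codewords, the code dimension, and the L2-style decoding rule are assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)
num_classes, code_dim = 10, 64

# Hypothetical dense encoding: one random +/-1 codeword per class,
# replacing the usual one-hot target vectors (assumed for illustration).
codes = rng.choice([-1.0, 1.0], size=(num_classes, code_dim))

def encode(labels):
    """Map integer labels to their dense codewords (regression targets)."""
    return codes[labels]

def decode(outputs):
    """Classify model outputs by nearest codeword (min Euclidean distance)."""
    dists = np.linalg.norm(outputs[:, None, :] - codes[None, :, :], axis=-1)
    return dists.argmin(axis=1)

# A model trained to regress toward encode(y) would be decoded like this:
labels = np.array([3, 7, 0])
assert (decode(encode(labels)) == labels).all()
```

Because each model instance can draw its own codewords, the output spaces of a source (surrogate) model and a target model need not align, which is one intuition for why transferred gradients become less useful to an attacker.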

Related Material


@InProceedings{Kim_2020_WACV,
author = {Kim, Donghyun and Bargal, Sarah and Zhang, Jianming and Sclaroff, Stan},
title = {Multi-way Encoding for Robustness},
booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
month = {March},
year = {2020}
}