What Does It Mean to Learn in Deep Networks? And, How Does One Detect Adversarial Attacks?

Ciprian A. Corneanu, Meysam Madadi, Sergio Escalera, Aleix M. Martinez; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 4757-4766

Abstract


The flexibility and high accuracy of Deep Neural Networks (DNNs) have transformed computer vision. But the fact that we do not know when a specific DNN will work and when it will fail has resulted in a lack of trust. A clear example is self-driving cars; people are uncomfortable sitting in a car driven by algorithms that may fail under some unknown, unpredictable conditions. Interpretability and explainability approaches attempt to address this by uncovering what a DNN models, i.e., what each node (cell) in the network represents and what images are most likely to activate it. This can be used to generate, for example, adversarial attacks. But these approaches do not generally allow us to determine where a DNN will succeed or fail and why, i.e., whether the learned representation generalizes to unseen samples. Here, we derive a novel approach to define what it means to learn in deep networks, and how to use this knowledge to detect adversarial attacks. We show how this defines the ability of a network to generalize to unseen testing samples and, most importantly, why this is the case.
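
As an illustrative aside on the adversarial attacks mentioned above, the sketch below crafts an adversarial example with the standard Fast Gradient Sign Method (FGSM). This is a generic, well-known attack, not the detection approach proposed in the paper; the classifier, input, and perturbation budget are placeholder assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholder classifier; any differentiable image classifier would do.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)).eval()

x = torch.rand(1, 3, 32, 32, requires_grad=True)  # placeholder input image
y = torch.tensor([0])                             # placeholder ground-truth label
epsilon = 0.03                                    # assumed perturbation budget

# Compute the classification loss and backpropagate to get the input gradient.
loss = F.cross_entropy(model(x), y)
loss.backward()

# FGSM step: nudge the input in the direction that increases the loss,
# then clip back to the valid pixel range.
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

Detecting whether an input such as x_adv has been manipulated is the problem the paper addresses by characterizing what the network has actually learned.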

Related Material


[pdf] [supp]
[bibtex]
@InProceedings{Corneanu_2019_CVPR,
author = {Corneanu, Ciprian A. and Madadi, Meysam and Escalera, Sergio and Martinez, Aleix M.},
title = {What Does It Mean to Learn in Deep Networks? And, How Does One Detect Adversarial Attacks?},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2019}
}