Adversarial Examples for Edge Detection: They Exist, and They Transfer

Christian Cosgrove, Alan Yuille; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2020, pp. 1070-1079

Abstract


Convolutional neural networks have recently advanced the state of the art in many tasks including edge and object boundary detection. However, in this paper, we demonstrate that these edge detectors inherit a troubling property of neural networks: they can be fooled by adversarial examples. We show that adding small perturbations to an image causes HED, a CNN-based edge detection model, to fail to locate edges, to detect nonexistent edges, and even to hallucinate arbitrary configurations of edges. More importantly, we find that these adversarial examples blindly transfer to other CNN-based vision models. In particular, attacks on edge detection result in significant drops in accuracy in models trained to perform unrelated, high-level tasks like image classification and semantic segmentation.
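The perturbations described above follow the standard gradient-based adversarial-example recipe: take a small step in pixel space that increases the model's loss while keeping the change imperceptibly small. As a minimal, hedged sketch of that idea, the snippet below applies an FGSM-style step to a toy linear "edge scorer" in NumPy. The model, loss, and epsilon here are illustrative assumptions, not the paper's actual HED attack.

```python
import numpy as np

def fgsm_perturb(x, grad, eps):
    """One FGSM step: move each pixel by eps in the direction that
    increases the loss, then clip back to the valid range [0, 1]."""
    x_adv = x + eps * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)

# Toy differentiable "edge scorer": score = <w, x>, loss = -score,
# so maximizing the loss suppresses the detected-edge score
# (a stand-in for making a real detector miss edges).
rng = np.random.default_rng(0)
x = rng.uniform(0.2, 0.8, size=(8, 8))   # toy grayscale image
w = rng.normal(size=(8, 8))              # toy model weights

# Gradient of loss = -<w, x> with respect to x is simply -w.
grad = -w
x_adv = fgsm_perturb(x, grad, eps=0.03)

# The perturbation is small (L_inf norm at most eps)...
print(np.max(np.abs(x_adv - x)) <= 0.03 + 1e-9)
# ...yet it strictly lowers the edge score.
print(float(np.sum(w * x_adv)) < float(np.sum(w * x)))
```

Against a real CNN detector the gradient would come from backpropagation rather than a closed form, and the paper's attacks additionally target specific edge configurations, but the perturb-and-clip structure is the same.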

Related Material


@InProceedings{Cosgrove_2020_WACV,
author = {Cosgrove, Christian and Yuille, Alan},
title = {Adversarial Examples for Edge Detection: They Exist, and They Transfer},
booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
month = {March},
year = {2020}
}