Characterizing Adversarial Examples Based on Spatial Consistency Information for Semantic Segmentation

Chaowei Xiao, Ruizhi Deng, Bo Li, Fisher Yu, Mingyan Liu, Dawn Song; The European Conference on Computer Vision (ECCV), 2018, pp. 217-234


Deep Neural Networks (DNNs) have been widely applied in various recognition tasks. However, DNNs have recently been shown to be vulnerable to adversarial examples, which can mislead them into making arbitrary incorrect predictions. While adversarial examples have mainly been studied for classification, attacks targeting segmentation models may have special properties, since these models require additional components such as dilated convolutions and multiscale processing. In this paper, we aim to characterize adversarial examples based on spatial context information in semantic segmentation. We observe that spatial consistency information can potentially be leveraged to recognize/detect adversarial examples robustly, even against an adaptive attacker who has access to both the model and the detection strategy. We also show that adversarial examples based on the attacks we considered barely transfer among models, even though transferability is common in classification. Our observations shed new light on developing adversarial attacks and defenses and on better understanding the vulnerabilities of DNNs.
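The spatial consistency idea above can be illustrated with a minimal sketch: sample pairs of randomly placed, overlapping patches from an image, run the segmentation model on each patch independently, and measure how well the two predictions agree on the overlap region. On benign images the predictions tend to agree; on adversarial images the agreement drops. The sketch below is an illustration, not the paper's exact procedure: the `predict` callable, the patch size, and the use of plain per-pixel agreement (rather than a mIoU-style score over the overlap) are all simplifying assumptions.

```python
import numpy as np

def consistency_score(image, predict, patch=64, pairs=8, seed=None):
    """Estimate spatial consistency of a segmentation model on one image.

    `predict` is a hypothetical stand-in for the segmentation model: it maps
    an HxW(xC) patch to an HxW integer label map. For each of `pairs` trials,
    two overlapping patches are sampled and the fraction of agreeing labels
    in their overlap is recorded; the mean agreement is returned.
    """
    rng = np.random.default_rng(seed)
    H, W = image.shape[:2]
    scores = []
    for _ in range(pairs):
        # First patch: uniformly random top-left corner.
        y0 = int(rng.integers(0, H - patch + 1))
        x0 = int(rng.integers(0, W - patch + 1))
        # Second patch: shifted by at most half a patch, so the two overlap.
        y1 = int(np.clip(y0 + rng.integers(-patch // 2, patch // 2 + 1),
                         0, H - patch))
        x1 = int(np.clip(x0 + rng.integers(-patch // 2, patch // 2 + 1),
                         0, W - patch))
        la = predict(image[y0:y0 + patch, x0:x0 + patch])
        lb = predict(image[y1:y1 + patch, x1:x1 + patch])
        # Overlap rectangle in full-image coordinates.
        oy0, oy1 = max(y0, y1), min(y0, y1) + patch
        ox0, ox1 = max(x0, x1), min(x0, x1) + patch
        # Compare the two label maps on the overlap (patch-local coordinates).
        a = la[oy0 - y0:oy1 - y0, ox0 - x0:ox1 - x0]
        b = lb[oy0 - y1:oy1 - y1, ox0 - x1:ox1 - x1]
        scores.append(float((a == b).mean()))
    return float(np.mean(scores))
```

A detector would threshold this score: images whose consistency falls below a value calibrated on benign data are flagged as adversarial. Because the patch locations are random, an adaptive attacker cannot know in advance which overlaps will be checked, which is the intuition behind the robustness observed in the paper.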

Related Material

@InProceedings{Xiao_2018_ECCV,
author = {Xiao, Chaowei and Deng, Ruizhi and Li, Bo and Yu, Fisher and Liu, Mingyan and Song, Dawn},
title = {Characterizing Adversarial Examples Based on Spatial Consistency Information for Semantic Segmentation},
booktitle = {The European Conference on Computer Vision (ECCV)},
month = {September},
year = {2018}
}