On Visible Adversarial Perturbations & Digital Watermarking

Jamie Hayes; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2018, pp. 1597-1604

Abstract


Given a machine learning model, adversarial perturbations transform images so that the perturbed image is classified as an attacker-chosen class. Most research in this area has focused on adversarial perturbations that are imperceptible to the human eye. However, recent work has considered attacks that are perceptible but localized to a small region of the image. Under this threat model, we discuss both defenses that remove such adversarial perturbations, and attacks that can bypass these defenses.
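The localized threat model above can be illustrated with a minimal sketch: a gradient step toward a target class, confined to a small image region by a binary mask. This is not the paper's method; the toy linear model, the mask placement, and the step size `epsilon` are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "model": logits = W @ x for a flattened 8x8 grayscale image.
# W, mask, and epsilon are illustrative, not taken from the paper.
n_classes, side = 3, 8
W = rng.standard_normal((n_classes, side * side))

def predict(x):
    return int(np.argmax(W @ x.ravel()))

def localized_perturbation(x, target, mask, epsilon=2.0):
    """One signed-gradient step toward `target`, confined to `mask`.

    For a linear model, the gradient of the target logit with respect
    to the input is simply the corresponding row of W. Restricting the
    update to `mask` yields a visible but localized perturbation, in
    the spirit of the threat model discussed above.
    """
    grad = W[target].reshape(side, side)
    return np.clip(x + epsilon * np.sign(grad) * mask, 0.0, 1.0)

x = rng.random((side, side))          # benign image with pixels in [0, 1)
mask = np.zeros((side, side))
mask[:3, :3] = 1.0                    # perturbation allowed only in a 3x3 corner

target = (predict(x) + 1) % n_classes  # any class other than the current one
x_adv = localized_perturbation(x, target, mask)

# The perturbation is zero outside the masked region.
assert np.allclose(x_adv * (1 - mask), x * (1 - mask))
```

A single step with so small a mask need not flip the prediction; iterating the masked update, as patch-style attacks do, makes success more likely.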

Related Material


[pdf]
[bibtex]
@InProceedings{Hayes_2018_CVPR_Workshops,
author = {Hayes, Jamie},
title = {On Visible Adversarial Perturbations \& Digital Watermarking},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2018},
pages = {1597-1604}
}