Evasion Attack STeganography: Turning Vulnerability of Machine Learning To Adversarial Attacks Into a Real-World Application

Salah Ghamizi, Maxime Cordy, Mike Papadakis, Yves Le Traon; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2021, pp. 31-40

Abstract


Evasion attacks are commonly seen as a weakness of deep neural networks. In this paper, we flip the paradigm and envision this vulnerability as a useful application. We propose EAST, a new steganography and watermarking technique based on multi-label targeted evasion attacks. The key idea of EAST is to encode data as the labels of the image that the evasion attack produces. Our results confirm that our embedding is elusive: it not only passes unnoticed by humans, steganalysis methods, and machine-learning detectors, but is also resilient to soft and aggressive image tampering (87% recovery rate under JPEG compression). EAST outperforms existing deep-learning-based steganography approaches, producing images that are 70% denser and 73% more robust, and supports multiple datasets and architectures.
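To make the key idea concrete, the following is a minimal conceptual sketch in Python/PyTorch, not the authors' implementation: a hypothetical stand-in multi-label classifier, a generic PGD-style targeted attack, and a one-bit-per-label encoding are all illustrative assumptions. The sender embeds a message by perturbing the cover image until the classifier predicts the chosen label set; the receiver extracts the message by simply reading off the predicted labels.

# Conceptual sketch only (assumptions: stand-in classifier, PGD-style attack,
# one bit of message per output label). Not the EAST code.
import torch
import torch.nn as nn

NUM_LABELS = 16                 # each label carries one bit of the hidden message
EPS, ALPHA, STEPS = 8 / 255, 2 / 255, 40

net = nn.Sequential(            # hypothetical multi-label classifier
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, NUM_LABELS),
)

def embed(cover: torch.Tensor, bits: torch.Tensor) -> torch.Tensor:
    """Targeted evasion attack: perturb the cover image so the classifier's
    predicted label set matches the message bits, within an L-inf budget."""
    stego = cover.clone()
    for _ in range(STEPS):
        stego.requires_grad_(True)
        loss = nn.functional.binary_cross_entropy_with_logits(
            net(stego), bits.unsqueeze(0))
        grad, = torch.autograd.grad(loss, stego)
        with torch.no_grad():
            stego = stego - ALPHA * grad.sign()               # move toward target labels
            stego = cover + (stego - cover).clamp(-EPS, EPS)  # keep perturbation small
            stego = stego.clamp(0, 1)
    return stego.detach()

def extract(stego: torch.Tensor) -> torch.Tensor:
    """Decode the message by reading the labels the classifier now predicts."""
    with torch.no_grad():
        return (torch.sigmoid(net(stego)) > 0.5).float().squeeze(0)

cover = torch.rand(1, 3, 32, 32)
message = torch.randint(0, 2, (NUM_LABELS,)).float()
recovered = extract(embed(cover, message))
print("bits recovered:", int((recovered == message).sum()), "/", NUM_LABELS)

With an untrained stand-in network the recovery is only partial; the point of the sketch is the protocol: embedding is an attack toward a target label set, and extraction is an ordinary forward pass.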

Related Material


[pdf]
[bibtex]
@InProceedings{Ghamizi_2021_ICCV,
    author    = {Ghamizi, Salah and Cordy, Maxime and Papadakis, Mike and Le Traon, Yves},
    title     = {Evasion Attack STeganography: Turning Vulnerability of Machine Learning To Adversarial Attacks Into a Real-World Application},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
    month     = {October},
    year      = {2021},
    pages     = {31-40}
}