ADVFilter: Adversarial Example Generated by Perturbing Optical Path

Lili Zhang, Xiaodong Wang; Proceedings of the Asian Conference on Computer Vision (ACCV) Workshops, 2022, pp. 29-40

Abstract

Deep Neural Networks (DNNs) have achieved great success in many applications and are taking over more and more systems in the real world. As a result, the security of DNN systems has attracted great attention from the community. In typical scenarios, the input images of a DNN are collected through a camera. In this paper, we propose a new type of security threat that attacks a DNN classifier by perturbing the optical path of the camera input through a specially designed filter. Generating such a filter involves several challenges. First, the filter must be input-free, i.e., it must work without knowledge of the specific input image. Second, the filter must be simple enough to manufacture. We propose a framework, called ADVFilter, to generate such filters. ADVFilter models the optical path perturbation with a thin plate spline and optimizes for minimal distortion of the input images. ADVFilter generates an adversarial pattern for a specific class. This pattern is universal for the class: it misleads the DNN model on all input images of the class with high probability. We demonstrate our idea on the MNIST dataset, and the results show that ADVFilter achieves up to a 90% success rate with only 16 corresponding points. To the best of our knowledge, this is the first work to propose such a security threat against DNN models.

Related Material

[pdf] [code]
[bibtex]
@InProceedings{Zhang_2022_ACCV,
    author    = {Zhang, Lili and Wang, Xiaodong},
    title     = {ADVFilter: Adversarial Example Generated by Perturbing Optical Path},
    booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV) Workshops},
    month     = {December},
    year      = {2022},
    pages     = {29-40}
}