Adversarial Examples with Specular Highlights

Vanshika Vats, Koteswar Rao Jerripothula; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, October 2023, pp. 3602-3611

Abstract
We introduce specular highlights as a natural adversary and examine how deep neural network classifiers are affected by them, resulting in reduced prediction performance. We also curate two datasets: ImageNet-AH, created by adding artificially generated Gaussian specular highlights to images, and ImageNet-PT, created by flashing a torch on printed images to produce natural specular highlights. Both demonstrate significant degradation in classifier performance: we observe a drop of around 20% in prediction accuracy with artificial specular highlights and around 35% with torch-highlighted printed images. These drops call into question the robustness and reliability of modern image classifiers. We also find that fine-tuning these classifiers on specular images does not sufficiently recover prediction performance. To understand why, we perform an activation-mapping analysis and examine the networks' attention areas in images with and without highlights. We find that specular highlights shift the models' attention, which renders fine-tuning ineffective and ultimately leads to the performance drops.
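As context for the ImageNet-AH construction described in the abstract, the following is a minimal sketch of overlaying an artificial Gaussian specular highlight on an image. The function `add_gaussian_highlight` and all of its parameters are illustrative assumptions; the paper's exact generation procedure is not specified on this page.

```python
import numpy as np
from PIL import Image

def add_gaussian_highlight(img, center, sigma, strength=0.8):
    """Overlay a synthetic specular highlight on a PIL image.

    Illustrative sketch only: the highlight is modeled as an additive
    isotropic 2D Gaussian brightness bump; the paper's ImageNet-AH
    procedure may differ.
    """
    arr = np.asarray(img).astype(np.float32) / 255.0  # H x W x 3 in [0, 1]
    h, w = arr.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = center  # (row, column) of the highlight center
    # 2D isotropic Gaussian with peak value `strength`
    bump = strength * np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))
    # Push pixels toward white under the highlight and clip to the valid range
    out = np.clip(arr + bump[..., None], 0.0, 1.0)
    return Image.fromarray((out * 255).astype(np.uint8))

# Example usage: place a highlight near the image center
img = Image.open("example.jpg").convert("RGB")
highlighted = add_gaussian_highlight(img, center=(112, 128), sigma=40, strength=0.7)
highlighted.save("example_highlight.jpg")
```

An additive bump that is clipped at white is the simplest way to mimic the saturated, low-texture appearance of a real specular reflection.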
Related Material

[pdf] [supp] [bibtex]

@InProceedings{Vats_2023_ICCV,
  author    = {Vats, Vanshika and Jerripothula, Koteswar Rao},
  title     = {Adversarial Examples with Specular Highlights},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
  month     = {October},
  year      = {2023},
  pages     = {3602-3611}
}