Exposing Fine-Grained Adversarial Vulnerability of Face Anti-Spoofing Models
Face anti-spoofing aims to discriminate spoofing face images (e.g., printed photos and replayed videos) from live ones. However, adversarial examples greatly challenge its credibility: adding small perturbation noise can easily change the output of the target model. Previous works applied adversarial attack methods to evaluate face anti-spoofing performance without any fine-grained analysis of which model architecture or auxiliary feature is vulnerable. To address this problem, we propose a novel framework to expose the fine-grained adversarial vulnerability of face anti-spoofing models, consisting of a multitask module and a semantic feature augmentation (SFA) module. The multitask module obtains different semantic features for fine-grained evaluation, but attacking these semantic features alone fails to reflect the vulnerability related to the discrimination between spoofing and live images. We therefore design the SFA module to introduce a data-distribution prior that yields more discrimination-related gradient directions for generating adversarial examples. The discrimination-related improvement is quantitatively reflected by an increased attack success rate; comprehensive experiments show that the SFA module raises the attack success rate by nearly 40% on average. We conduct fine-grained adversarial analysis on different annotations, geometric maps, and backbone networks (e.g., ResNet). These fine-grained adversarial examples can be used to select robust backbone networks and auxiliary features. They can also be used for adversarial training, making it practical to further improve the accuracy and robustness of face anti-spoofing models.
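The paper's SFA module is not spelled out in the abstract, but the gradient-based attacks it builds on follow a standard recipe. A minimal NumPy sketch of a one-step FGSM-style attack against a toy logistic-regression "live vs. spoof" classifier (all names and the toy model are hypothetical illustrations, not the authors' method):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, w, b, y_true, eps=0.1):
    """Perturb input x along the sign of the loss gradient w.r.t. x.

    For binary cross-entropy L = -[y log p + (1-y) log(1-p)] with
    p = sigmoid(w.x + b), the gradient is dL/dx = (p - y) * w.
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y_true) * w          # gradient of BCE loss w.r.t. the input
    return x + eps * np.sign(grad_x)   # one-step untargeted perturbation

rng = np.random.default_rng(0)
w = rng.normal(size=8)                 # toy classifier weights
b = 0.0
x = rng.normal(size=8)                 # toy "live" face feature vector
y = 1.0                                # ground-truth label: live

p_clean = sigmoid(w @ x + b)
x_adv = fgsm_attack(x, w, b, y, eps=0.5)
p_adv = sigmoid(w @ x_adv + b)
print(p_clean, p_adv)                  # the attack pushes "live" probability down
```

With `y = 1`, the gradient sign step always lowers the logit `w @ x`, so `p_adv < p_clean`; the fine-grained evaluation in the paper applies this kind of perturbation per semantic branch rather than only on the final live/spoof output.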