[pdf]
[bibtex]
@InProceedings{Kim_2024_CVPR,
  author    = {Kim, Hyojin and Lee, Jiyoon and Jeong, Yonghyun and Jang, Haneol and Yoo, Youngjoon},
  title     = {Advancing Cross-Domain Generalizability in Face Anti-Spoofing: Insights, Design and Metrics},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2024},
  pages     = {970-979}
}
Advancing Cross-Domain Generalizability in Face Anti-Spoofing: Insights, Design and Metrics
Abstract
This paper presents a novel perspective for enhancing anti-spoofing performance in zero-shot data domain generalization. Unlike traditional image classification tasks, face anti-spoofing datasets display unique generalization characteristics, necessitating novel zero-shot data domain generalization. Going one step beyond previous frame-wise spoofing prediction, we introduce a nuanced metric calculation that aggregates frame-level probabilities into a video-wise prediction, tackling the gap between reported frame-wise accuracy and instability in real-world use cases. This approach enables the quantification of bias and variance in model predictions, offering a more refined analysis of model generalization. Our investigation reveals that simply scaling up the model backbone does not inherently reduce this instability, leading us to propose an ensembled-backbone method from a Bayesian perspective. The probabilistically ensembled backbone improves both model robustness, as measured by the proposed metric, and spoofing accuracy; it also leverages the ability to measure uncertainty, allowing for enhanced sampling during training that contributes to model generalization across new datasets. We evaluate the proposed method on the benchmark OMIC dataset as well as the public CelebA-Spoof and SiW-Mv2 datasets. Our final model outperforms existing state-of-the-art methods across these datasets, showing advancements in Bias, Variance, HTER and AUC metrics.
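To make the described video-wise evaluation concrete, below is a minimal Python sketch (not the authors' released code) of the idea: frame-level spoof probabilities are aggregated into a single prediction per video, and the spread of the per-frame scores is used to quantify bias and variance. The aggregation rule (a simple mean) and the function names aggregate_video_score and video_bias_variance are illustrative assumptions, not the paper's exact definitions.

# Minimal sketch of frame-to-video aggregation with bias/variance quantification.
# Assumptions: mean aggregation, labels in {0: live, 1: spoof}; the paper's exact
# metric definitions may differ.
import numpy as np

def aggregate_video_score(frame_probs: np.ndarray) -> float:
    """Aggregate frame-level spoof probabilities into one video-level score."""
    return float(np.mean(frame_probs))

def video_bias_variance(frame_probs: np.ndarray, label: int) -> tuple[float, float]:
    """Illustrative per-video bias and variance.

    bias: distance of the aggregated prediction from the ground-truth label.
    variance: fluctuation of frame-level scores around each other, capturing the
    frame-wise instability that frame-level accuracy alone can hide.
    """
    video_score = aggregate_video_score(frame_probs)
    bias = abs(video_score - label)
    variance = float(np.var(frame_probs))
    return bias, variance

# Example: a live video (label 0) whose frames are mostly classified correctly
# but show noticeable frame-to-frame jitter.
frame_probs = np.array([0.10, 0.15, 0.60, 0.05, 0.20])
print(aggregate_video_score(frame_probs))   # video-wise prediction
print(video_bias_variance(frame_probs, 0))  # (bias, variance)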
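The ensembled-backbone idea can likewise be sketched, again as an assumption rather than the paper's implementation: several backbones each produce a spoof probability, the ensemble mean acts as a posterior-style prediction, and disagreement among members serves as an uncertainty signal that can up-weight hard samples during training. The tiny CNN members and the multinomial sampling rule below are placeholders for illustration only.

# Sketch of a probabilistically ensembled backbone with uncertainty-aware sampling.
# Assumptions: sigmoid spoof logits per member, variance across members as the
# uncertainty estimate; backbone architectures are toy stand-ins.
import torch
import torch.nn as nn

class EnsembleBackbone(nn.Module):
    def __init__(self, backbones: list[nn.Module]):
        super().__init__()
        self.backbones = nn.ModuleList(backbones)

    def forward(self, x: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
        # Each member outputs a spoof logit; stack and convert to probabilities.
        probs = torch.stack(
            [torch.sigmoid(b(x)).squeeze(-1) for b in self.backbones], dim=0)
        mean = probs.mean(dim=0)         # ensemble prediction per image
        uncertainty = probs.var(dim=0)   # member disagreement as uncertainty
        return mean, uncertainty

def tiny_backbone() -> nn.Module:
    # Toy stand-in for a real backbone.
    return nn.Sequential(nn.Conv2d(3, 8, 3, stride=2), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1))

model = EnsembleBackbone([tiny_backbone() for _ in range(3)])
images = torch.randn(4, 3, 64, 64)
mean_prob, uncertainty = model(images)

# Uncertainty-aware sampling sketch: draw more uncertain samples more often.
weights = uncertainty + 1e-6
idx = torch.multinomial(weights, num_samples=4, replacement=True)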
Related Material