@InProceedings{Balykin_2025_ICCV,
  author    = {Balykin, Andrei and Ganiev, Anvar and Kondranin, Denis and Polevoda, Kirill and Liudkevich, Nikolai and Petrov, Artem},
  title     = {Paired-Sampling Contrastive Framework for Joint Physical-Digital Face Attack Detection},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
  month     = {October},
  year      = {2025},
  pages     = {3247-3254}
}
Paired-Sampling Contrastive Framework for Joint Physical-Digital Face Attack Detection
Abstract
Modern face recognition systems remain vulnerable to spoofing attempts, including both physical presentation attacks and digital forgeries. Traditionally, these two attack vectors have been addressed by separate models or pipelines, each tailored to its specific artifacts and modalities. However, maintaining distinct detectors increases system complexity and inference latency, and leaves coverage gaps against combined attack vectors. We propose the Paired-Sampling Contrastive Framework, a unified training approach that leverages automatically matched pairs of genuine and attack selfies to learn modality-agnostic liveness cues. Evaluated on the 6th Face Anti-Spoofing Challenge "Unified Physical-Digital Attack Detection" benchmark, our method achieves an average classification error rate (ACER) of 2.10%, outperforming prior solutions. The proposed framework is lightweight, requiring only 4.46 GFLOPs and under one hour of training, making it practical for real-world deployment. Code and pretrained models are available at https://github.com/xPONYx/iccv2025_deepfake_challenge.
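To make the paired-sampling idea concrete, below is a minimal sketch of a margin-based contrastive loss over matched genuine/attack embedding pairs. This is an illustrative assumption, not the paper's actual objective: the exact loss, margin value, and pairing strategy used by the authors are not specified in the abstract.

```python
import numpy as np

def paired_contrastive_loss(genuine, attack, margin=1.0):
    """Hedged sketch of a paired contrastive objective.

    Given matched (genuine, attack) embedding pairs of shape
    (n_pairs, dim), penalize any pair whose embeddings lie closer
    than `margin`, pushing liveness-relevant features apart.
    Illustrative only; the published framework may differ.
    """
    # Euclidean distance between each genuine/attack embedding pair
    d = np.linalg.norm(genuine - attack, axis=1)
    # Squared hinge: zero loss once the pair is separated by >= margin
    return float(np.mean(np.maximum(0.0, margin - d) ** 2))

# Toy usage with random embeddings (hypothetical shapes)
rng = np.random.default_rng(0)
g = rng.normal(size=(8, 128))
a = rng.normal(size=(8, 128))
loss = paired_contrastive_loss(g, a)
```

In practice such a term would be combined with a standard liveness classification loss; the automatic pairing of genuine and attack selfies described in the abstract would supply the `(genuine, attack)` pairs.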