Domain Invariant Vision Transformer Learning for Face Anti-Spoofing
Abstract
Existing face anti-spoofing (FAS) models achieve high performance on specific datasets. For real-world deployment, however, an FAS model should generalize to data from unknown domains rather than only perform well on a single benchmark. As vision transformers have demonstrated impressive performance and a strong capability for learning discriminative information, we investigate applying them to distinguish face presentation attacks across unknown domains. In this work, we propose the Domain-invariant Vision Transformer (DiVT) for FAS, which adopts two losses to improve the generalizability of the vision transformer. First, a concentration loss is employed to learn a domain-invariant representation that aggregates the features of real face data. Second, a separation loss is utilized to group each type of attack from different domains together. The experimental results show that our proposed method achieves state-of-the-art performance on domain-generalization FAS protocols. Compared to previous domain-generalization FAS models, our method is simpler yet more effective.
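To make the two objectives concrete, the sketch below shows one way such losses could be implemented in PyTorch. This is a minimal illustration under assumptions, not the authors' code: the function names, the shared real-face prototype `center`, and the cosine-distance form of the concentration loss are hypothetical choices; the exact formulation is given in the paper.

```python
import torch
import torch.nn.functional as F

def concentration_loss(features, is_real, center):
    # Pull real-face embeddings from every source domain toward one
    # shared prototype (an assumed design), encouraging a domain-invariant
    # cluster of genuine faces.
    # features: (N, D) ViT embeddings; is_real: (N,) boolean mask;
    # center: (D,) hypothetical prototype for the real class.
    real = F.normalize(features[is_real], dim=-1)
    proto = F.normalize(center, dim=-1)
    return (1.0 - real @ proto).mean()  # mean cosine distance to the prototype

def separation_loss(logits, attack_type):
    # Classify samples by attack type (e.g., print, replay), merging the
    # same attack type from different domains into a single class, so the
    # embedding separates attack types rather than source domains.
    # logits: (N, C) attack-type logits; attack_type: (N,) class labels.
    return F.cross_entropy(logits, attack_type)

# Toy usage with random tensors (shapes only; data is illustrative):
feats = torch.randn(8, 256)
mask = torch.tensor([True, False, True, True, False, False, True, False])
center = torch.randn(256)
total = concentration_loss(feats, mask, center) + \
        separation_loss(torch.randn(8, 3), torch.randint(0, 3, (8,)))
```

The design point mirrored here is that real faces from all domains are drawn into one cluster while attacks are grouped by type rather than by domain, which is what pushes the learned representation toward domain invariance.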
Related Material
[pdf] [supp]

BibTeX:
@InProceedings{Liao_2023_WACV,
    author    = {Liao, Chen-Hao and Chen, Wen-Cheng and Liu, Hsuan-Tung and Yeh, Yi-Ren and Hu, Min-Chun and Chen, Chu-Song},
    title     = {Domain Invariant Vision Transformer Learning for Face Anti-Spoofing},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2023},
    pages     = {6098-6107}
}