Fast and Interpretable Face Identification for Out-of-Distribution Data Using Vision Transformers

Hai Phan, Cindy X. Le, Vu Le, Yihui He, Anh “Totti” Nguyen; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2024, pp. 6301-6311

Abstract


Most face identification approaches employ a Siamese neural network to compare two images at the image embedding level. However, this technique is vulnerable to occlusion (e.g., faces with masks or sunglasses) and out-of-distribution data. DeepFace-EMD (Phan et al. 2022) reaches state-of-the-art accuracy on out-of-distribution data by first comparing two images at the image level, and then at the patch level. However, its patch-wise re-ranking stage incurs a large O(n^3 log n) time complexity (for n patches in an image) due to the optimal transport optimization. In this paper, we propose a novel, two-image Vision Transformer (ViT) that compares two images at the patch level using cross-attention. After training on 2M image pairs from CASIA Webface (Yi et al. 2014), our model achieves accuracy comparable to DeepFace-EMD (Phan et al. 2022) on out-of-distribution data, yet runs more than twice as fast at inference. In addition, a human study shows that our model offers promising explainability through visualizations of its cross-attention. We believe our work can inspire more explorations in using ViTs for face identification.
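For intuition, below is a minimal PyTorch sketch of what patch-level cross-attention between two face images could look like. The module name, embedding dimension, and scoring head are illustrative assumptions, not the authors' released implementation. The key idea it demonstrates: queries come from one image's ViT patch embeddings and keys/values from the other's, so each attention weight directly links a pair of patches, and the O(n^2) attention cost replaces the O(n^3 log n) optimal-transport matching of the EMD re-ranking step.

```python
# Minimal sketch of patch-level cross-attention between two face images.
# All names (CrossAttnFaceComparator, dim, the scoring head) are illustrative
# assumptions, not the paper's actual architecture.
import torch
import torch.nn as nn

class CrossAttnFaceComparator(nn.Module):
    def __init__(self, dim=384, num_heads=6):
        super().__init__()
        # Queries come from image A's patches; keys/values from image B's.
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.head = nn.Linear(dim, 1)  # same-identity score

    def forward(self, patches_a, patches_b):
        # patches_a, patches_b: (batch, n_patches, dim) patch embeddings
        # from a shared ViT backbone (not shown here).
        attended, attn_weights = self.cross_attn(
            query=patches_a, key=patches_b, value=patches_b)
        fused = self.norm(attended + patches_a)           # residual connection
        score = self.head(fused.mean(dim=1)).squeeze(-1)  # pool over patches
        # attn_weights has shape (batch, n_patches_a, n_patches_b): a soft
        # correspondence map between patch pairs across the two images.
        return score, attn_weights
```

The returned attention map is the kind of artifact a cross-attention visualization (and the paper's human study) would inspect: it shows which patch correspondences drive the same-identity score.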

Related Material


@InProceedings{Phan_2024_WACV,
  author    = {Phan, Hai and Le, Cindy X. and Le, Vu and He, Yihui and Nguyen, Anh {\textquotedblleft}Totti{\textquotedblright}},
  title     = {Fast and Interpretable Face Identification for Out-of-Distribution Data Using Vision Transformers},
  booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
  month     = {January},
  year      = {2024},
  pages     = {6301-6311}
}