LAFS: Landmark-based Facial Self-supervised Learning for Face Recognition
Abstract
In this work we focus on learning facial representations that can be adapted to train effective face recognition models, particularly in the absence of labels. First, compared with existing labelled face datasets, a vastly larger number of unlabeled faces exists in the real world. We explore a learning strategy for these unlabeled facial images through self-supervised pretraining that transfers to generalized face recognition performance. Moreover, motivated by the recent finding that salient facial regions are critical for face recognition, instead of using randomly cropped image blocks to construct augmentations during pretraining, we use patches localized by extracted facial landmarks. This enables our method, namely Landmark-based Facial Self-supervised learning (LAFS), to learn key representations that are more critical for face recognition. We also incorporate two landmark-specific augmentations which introduce more diversity of landmark information to further regularize the learning. With the learned landmark-based facial representations, we further adapt the representation to face recognition with a regularization that mitigates variations in landmark positions. Our method achieves significant improvement over the state of the art on multiple face recognition benchmarks, especially in more challenging few-shot scenarios. The code is available at https://github.com/szlbiubiubiu/LAFS_CVPR2024.
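The landmark-localized cropping described in the abstract can be illustrated with a short sketch. This is a minimal illustration under assumed inputs, not the authors' implementation: the function name landmark_patches, the 32-pixel patch size, and the five-point landmark layout are hypothetical, and the code simply extracts one fixed-size patch per detected landmark in place of random crops when building pretraining views.

# Minimal sketch (assumptions noted above), using NumPy for self-containment.
import numpy as np

def landmark_patches(image: np.ndarray, landmarks: np.ndarray, patch_size: int = 32) -> np.ndarray:
    """Return one patch per landmark from an HxWxC face image.

    image:     HxWxC array (e.g. an aligned 112x112 face crop).
    landmarks: Nx2 array of (x, y) pixel coordinates from a landmark detector.
    """
    h, w = image.shape[:2]
    half = patch_size // 2
    patches = []
    for x, y in landmarks.astype(int):
        # Clamp the crop window so patches near the border stay inside the image.
        x0 = np.clip(x - half, 0, w - patch_size)
        y0 = np.clip(y - half, 0, h - patch_size)
        patches.append(image[y0:y0 + patch_size, x0:x0 + patch_size])
    return np.stack(patches)

# Example: five canonical landmarks (eyes, nose, mouth corners) on a 112x112 face crop.
face = np.random.rand(112, 112, 3).astype(np.float32)
five_points = np.array([[38, 51], [74, 51], [56, 71], [42, 92], [70, 92]])
views = landmark_patches(face, five_points, patch_size=32)  # shape (5, 32, 32, 3)

In the paper's setting, such landmark-centred patches would serve as the views fed to the self-supervised objective instead of randomly cropped blocks.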
Related Material
[pdf] [supp] [arXiv] [bibtex]

@InProceedings{Sun_2024_CVPR,
  author    = {Sun, Zhonglin and Feng, Chen and Patras, Ioannis and Tzimiropoulos, Georgios},
  title     = {LAFS: Landmark-based Facial Self-supervised Learning for Face Recognition},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2024},
  pages     = {1639-1649}
}