Learning From the Web: Webly Supervised Meta-Learning for Masked Face Recognition

Wenbo Zheng, Lan Yan, Fei-Yue Wang, Chao Gou; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2021, pp. 4304-4313

Abstract


Mask wearing has been considered an effective measure to prevent the spread of COVID-19 during the current pandemic. However, most advanced face recognition approaches are not adequate for masked face recognition, particularly when trained on datasets that contain only a limited number of images with ground-truth labels. In this work, we propose to learn from large-scale web images and their corresponding tags, without any manual annotation, alongside limited fully annotated datasets. In particular, inspired by the recent success of webly supervised learning in deep neural networks, we capitalize on readily available web images with noisy annotations to learn a robust representation for masked faces. In addition to conventional spatial representation learning, we propose to leverage the frequency domain to capture the local representative information of unoccluded facial parts. Our approach learns robust feature embeddings through a feature fusion architecture that makes joint and full use of information from both the spatial and frequency domains. Experimental results on seven benchmarks show that the proposed approach significantly improves performance compared with other state-of-the-art methods.
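To illustrate the spatial-plus-frequency fusion idea mentioned in the abstract, the sketch below shows a minimal two-branch embedding network: one branch operates on the face crop itself, the other on the log-magnitude of its 2-D FFT, and their features are concatenated and projected into a joint embedding. This is a hypothetical illustration under our own assumptions (module names, layer sizes, and the FFT-magnitude input are ours), not the authors' architecture or code.

```python
# Hypothetical sketch of spatial/frequency feature fusion (not the authors' code).
import torch
import torch.nn as nn


class SpatialFrequencyFusion(nn.Module):
    def __init__(self, embed_dim=128):
        super().__init__()
        # Spatial branch: a small CNN over the RGB face crop.
        self.spatial = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Frequency branch: the same CNN shape over the log-magnitude spectrum,
        # intended to emphasize local texture of unoccluded facial parts.
        self.frequency = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Fuse both branches into a single embedding.
        self.fuse = nn.Linear(64 + 64, embed_dim)

    def forward(self, x):  # x: (B, 3, H, W) face crops
        spat = self.spatial(x)
        # Per-channel 2-D FFT; keep the log-magnitude as the frequency-domain input.
        freq_in = torch.log1p(torch.abs(torch.fft.fft2(x)))
        freq = self.frequency(freq_in)
        return self.fuse(torch.cat([spat, freq], dim=1))


if __name__ == "__main__":
    emb = SpatialFrequencyFusion()(torch.randn(2, 3, 112, 112))
    print(emb.shape)  # torch.Size([2, 128])
```

In the paper, this kind of joint embedding would then be trained with webly supervised meta-learning on noisy web images plus the limited labeled data; the sketch covers only the fusion step.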

Related Material


[pdf]
[bibtex]
@InProceedings{Zheng_2021_CVPR,
  author    = {Zheng, Wenbo and Yan, Lan and Wang, Fei-Yue and Gou, Chao},
  title     = {Learning From the Web: Webly Supervised Meta-Learning for Masked Face Recognition},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2021},
  pages     = {4304-4313}
}