Detecting Masked Faces in the Wild With LLE-CNNs

Shiming Ge, Jia Li, Qiting Ye, Zhao Luo; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 2682-2690


Detecting masked faces (i.e., faces with occlusions) is a challenging task for two main reasons: 1) the absence of large datasets of masked faces, and 2) the absence of facial cues in the masked regions. To address these issues, this paper first introduces a dataset of 30,811 Internet images with 35,806 annotated MAsked FAces, denoted as MAFA. Unlike many previous datasets, each annotated face in MAFA is partially occluded by a mask. By analyzing the characteristics of masked faces, we propose LLE-CNNs that detect masked faces via three major modules. The proposal module first combines two pre-trained CNNs to extract candidate facial regions from the input image and represent them with high-dimensional descriptors. After that, the embedding module turns these descriptors into vectors of weights with respect to the components of pre-trained dictionaries of representative normal faces and non-faces by using locally linear embedding. In this manner, missing facial cues in the masked regions can be largely recovered, and the influence of noisy cues introduced by diverse masks can be greatly alleviated. Finally, the verification module takes the weight vectors as input and identifies real facial regions as well as their accurate positions by jointly performing classification and regression tasks within unified CNNs. Experimental results on MAFA show that the proposed approach significantly outperforms six state-of-the-art methods by at least 15.6% in detecting masked faces.
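The embedding module described above relies on locally linear embedding: each candidate's descriptor is re-expressed as a constrained linear combination of its nearest atoms in a pre-trained dictionary, so the resulting weight vector is robust to cues corrupted by masks. The following is a minimal NumPy sketch of that weight computation only (the dictionary, neighborhood size `k`, and regularizer `reg` are illustrative assumptions, not the paper's exact settings):

```python
import numpy as np

def lle_weights(x, dictionary, k=5, reg=1e-3):
    """Represent descriptor x as a locally linear combination of its k
    nearest atoms (rows of `dictionary`), with weights summing to 1.

    Sketch of the standard LLE weight solve; not the paper's exact code.
    """
    # Find the k dictionary atoms nearest to x.
    dists = np.linalg.norm(dictionary - x, axis=1)
    idx = np.argsort(dists)[:k]
    neighbors = dictionary[idx]            # (k, d)

    # Local Gram matrix of the offsets from x.
    Z = neighbors - x                      # (k, d)
    C = Z @ Z.T                            # (k, k)
    C += reg * np.trace(C) * np.eye(k)     # regularize for stability

    # Solve C w = 1 and normalize so the weights sum to 1.
    w = np.linalg.solve(C, np.ones(k))
    w /= w.sum()

    # Scatter the k weights back into a full-length sparse weight vector.
    full = np.zeros(len(dictionary))
    full[idx] = w
    return full
```

The resulting sparse weight vector (one such vector per dictionary, for faces and non-faces) is what the verification module would consume in place of the raw, mask-corrupted descriptor.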

Related Material

@InProceedings{Ge_2017_CVPR,
author = {Ge, Shiming and Li, Jia and Ye, Qiting and Luo, Zhao},
title = {Detecting Masked Faces in the Wild With LLE-CNNs},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {July},
year = {2017}
}