A pairwise learning strategy for video-based face recognition

Meng Zhang, Rujie Liu, Hajime Nada, Hidetsugu Uchida, Tomoaki Matsunami, Narishige Abe; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2019, pp. 38-44

Abstract


In recent years, large-scale datasets together with the emergence of deep learning have led to immense success in face recognition. However, face recognition in surveillance scenarios remains challenging due to severe blur, dramatic occlusion, and rich variations in pose and illumination. Meanwhile, owing to their data sources and cleaning strategies, existing large-scale datasets inevitably contain label noise. In this paper, a pairwise learning strategy is proposed to overcome the challenge of abundant variations in video-based face recognition (VFR). In addition, an online effective example mining (OEEM) method is designed to eliminate noisy samples so that the model focuses more on effective examples during training. Experimental results on LFW, COX, and a selfie dataset validate the effectiveness of the proposed approach.
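
As a point of reference, the sketch below illustrates one common way such online example mining is realized, assuming the usual formulation in which the highest-loss samples in each mini-batch are treated as likely label noise and excluded from the gradient update. It is a minimal sketch under that assumption, not the paper's OEEM procedure; the name oeem_loss and the keep_ratio parameter are illustrative.

    # Minimal sketch of loss-based online example mining (assumed formulation,
    # not the paper's exact OEEM method).
    import torch
    import torch.nn.functional as F

    def oeem_loss(logits: torch.Tensor, labels: torch.Tensor,
                  keep_ratio: float = 0.7) -> torch.Tensor:
        """Cross-entropy averaged over the lowest-loss fraction of the batch."""
        # Per-sample losses (no reduction), so each example can be ranked.
        per_sample = F.cross_entropy(logits, labels, reduction="none")
        # Keep the examples with the smallest loss; the rest are assumed noisy.
        num_keep = max(1, int(keep_ratio * per_sample.numel()))
        kept, _ = torch.topk(per_sample, num_keep, largest=False)
        return kept.mean()

    # Usage: replaces the standard cross-entropy call in a training loop, e.g.
    #   loss = oeem_loss(model(images), labels)
    #   loss.backward()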

Related Material


[pdf]
[bibtex]
@InProceedings{Zhang_2019_CVPR_Workshops,
author = {Zhang, Meng and Liu, Rujie and Nada, Hajime and Uchida, Hidetsugu and Matsunami, Tomoaki and Abe, Narishige},
title = {A pairwise learning strategy for video-based face recognition},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2019}
}