Uncertainty Modeling of Contextual-Connections Between Tracklets for Unconstrained Video-Based Face Recognition

Jingxiao Zheng, Ruichi Yu, Jun-Cheng Chen, Boyu Lu, Carlos D. Castillo, Rama Chellappa; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 703-712

Abstract

Unconstrained video-based face recognition is a challenging problem due to significant within-video variations caused by pose, occlusion, and blur. An effective way to tackle this problem is to propagate identity information from high-quality faces to low-quality ones through contextual connections, which are constructed from context such as body appearance. However, previous methods often propagate erroneous information because they do not model the uncertainty of these noisy contextual connections. In this paper, we propose the Uncertainty-Gated Graph (UGG), which performs graph-based identity propagation between tracklets, represented as nodes in a graph. UGG explicitly models the uncertainty of the contextual connections by adaptively updating the weights of its edge gates according to the identity distributions of the nodes during inference. UGG is a generic graphical model that can be applied at inference time only or trained end-to-end. We demonstrate the effectiveness of UGG with state-of-the-art results on the recently released and challenging Cast Search in Movies and IARPA Janus Surveillance Video Benchmark datasets.
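To make the propagation mechanism concrete, below is a minimal sketch (Python/NumPy, not the authors' implementation) of gated identity propagation on a tracklet graph: each node carries an identity distribution, each contextual edge carries a base weight, and an edge gate is re-estimated from the agreement of the current endpoint distributions so that uncertain or conflicting connections contribute less. The agreement-based gate, the blending factor alpha, and the toy example are illustrative assumptions rather than the exact formulation used in the paper.

# Minimal sketch of uncertainty-gated identity propagation on a tracklet
# graph. Assumptions (not from the paper): a Bhattacharyya-style agreement
# gate, a fixed blending factor alpha, and simple row-normalized messages.
import numpy as np

def propagate_identities(p_init, w, alpha=0.5, num_iters=10):
    """p_init: (N, C) initial identity distributions, one per tracklet node.
    w: (N, N) nonnegative contextual connection weights (0 on the diagonal).
    Returns refined (N, C) identity distributions."""
    p = p_init.copy()
    for _ in range(num_iters):
        # Gate each edge by how much its endpoint distributions agree
        # (Bhattacharyya coefficient), so noisy connections are down-weighted.
        agreement = np.sqrt(p) @ np.sqrt(p).T          # (N, N), values in [0, 1]
        gated_w = w * agreement
        # Aggregate row-normalized messages from contextual neighbors.
        row_sum = gated_w.sum(axis=1, keepdims=True) + 1e-8
        messages = (gated_w / row_sum) @ p             # (N, C)
        # Blend propagated evidence with the original face-based evidence.
        p = alpha * p_init + (1.0 - alpha) * messages
        p /= p.sum(axis=1, keepdims=True)
    return p

# Toy usage: 3 tracklets, 2 identities; tracklet 2 has an uncertain face
# score but shares a contextual connection (e.g., body appearance) with
# tracklet 0, so it inherits identity evidence from it.
p0 = np.array([[0.9, 0.1], [0.1, 0.9], [0.5, 0.5]])
w = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
print(propagate_identities(p0, w))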

Related Material

[bibtex]
@InProceedings{Zheng_2019_ICCV,
author = {Zheng, Jingxiao and Yu, Ruichi and Chen, Jun-Cheng and Lu, Boyu and Castillo, Carlos D. and Chellappa, Rama},
title = {Uncertainty Modeling of Contextual-Connections Between Tracklets for Unconstrained Video-Based Face Recognition},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}