Towards Interpretable Face Recognition

Bangjie Yin, Luan Tran, Haoxiang Li, Xiaohui Shen, Xiaoming Liu; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 9348-9357

Abstract
Deep CNNs have been pushing the frontier of visual recognition in recent years. Beyond recognition accuracy, strong demand in the research community for understanding deep CNNs has motivated the development of tools that dissect pre-trained models to visualize how they make predictions. Recent works push interpretability further into the network learning stage to learn more meaningful representations. In this work, focusing on a specific area of visual recognition, we report our efforts towards interpretable face recognition. We propose a spatial activation diversity loss to learn more structured face representations. By leveraging this structure, we further design a feature activation diversity loss that pushes the interpretable representations to be discriminative and robust to occlusions. We demonstrate on three face recognition benchmarks that our proposed method achieves state-of-the-art face recognition accuracy with easily interpretable face representations.
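To make the idea of a spatial activation diversity loss concrete, the sketch below shows one plausible formulation (not necessarily the paper's exact loss): each channel's activation map is normalized into a spatial distribution, and the loss penalizes pairwise overlap between distributions so that different filters peak at distinct face regions. The function name and formulation are illustrative assumptions.

```python
import numpy as np

def spatial_activation_diversity_loss(feat):
    """Hypothetical sketch of a spatial activation diversity loss.

    feat: array of shape (C, H, W), one activation map per channel.
    Each channel is softmax-normalized over spatial locations; the loss
    is the mean pairwise inner product between different channels'
    spatial distributions, which is small when channels peak at
    distinct locations. This is an illustrative stand-in, not the
    paper's published formulation.
    """
    C, H, W = feat.shape
    flat = feat.reshape(C, H * W)
    # Softmax over spatial locations, per channel (numerically stable).
    e = np.exp(flat - flat.max(axis=1, keepdims=True))
    p = e / e.sum(axis=1, keepdims=True)           # rows sum to 1
    overlap = p @ p.T                              # (C, C) pairwise overlaps
    off_diag = overlap.sum() - np.trace(overlap)   # exclude self-overlap
    return off_diag / (C * (C - 1))                # mean over channel pairs
```

Minimizing this term together with an identity classification loss would, in this sketch, encourage each filter to respond to its own localized face part, which is the structure the abstract refers to.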

Related Material
[pdf] [video]
[bibtex]
@InProceedings{Yin_2019_ICCV,
author = {Yin, Bangjie and Tran, Luan and Li, Haoxiang and Shen, Xiaohui and Liu, Xiaoming},
title = {Towards Interpretable Face Recognition},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019},
pages = {9348-9357}
}