DiscFace: Minimum Discrepancy Learning for Deep Face Recognition
Softmax-based learning methods have shown state-of-the-art performance on large-scale face recognition tasks. In this paper, we identify an important issue with softmax-based approaches: sample features around the corresponding class weight are penalized similarly during training even though their directions differ from one another. This directional discrepancy, which we call process discrepancy, leads to performance degradation at the evaluation phase. To mitigate the issue, we propose a novel training scheme, called minimum discrepancy learning, that enforces the directions of intra-class sample features to be aligned toward an optimal direction by means of a single learnable basis. Furthermore, the single learnable basis facilitates disentangling the so-called class-invariant vectors from sample features, which makes training effective on class-imbalanced datasets.
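The core idea can be sketched as a simple auxiliary loss. The snippet below is a minimal NumPy illustration, not the authors' implementation: it assumes the per-sample "process discrepancy" is the residual between an L2-normalized feature and its class weight, and penalizes the deviation of that residual from a single shared learnable basis vector (the function and variable names are hypothetical).

```python
import numpy as np

def l2_normalize(v, axis=-1, eps=1e-12):
    """Normalize vectors to unit length along the given axis."""
    return v / (np.linalg.norm(v, axis=axis, keepdims=True) + eps)

def minimum_discrepancy_loss(features, class_weights, labels, basis):
    """Hypothetical sketch of minimum discrepancy learning:
    penalize the deviation of each sample's residual direction
    (normalized feature minus its normalized class weight) from a
    single basis vector shared across all classes."""
    f = l2_normalize(features)                 # (N, D) sample features
    w = l2_normalize(class_weights)[labels]    # (N, D) weight of each sample's class
    residual = f - w                           # per-sample process-discrepancy vector
    return np.mean(np.linalg.norm(residual - basis, axis=1))

# Toy usage: one 2-D sample whose residual exactly matches the basis,
# so the discrepancy loss is zero.
features = np.array([[0.0, 1.0]])
class_weights = np.array([[1.0, 0.0]])
labels = np.array([0])
basis = np.array([-1.0, 1.0])
loss = minimum_discrepancy_loss(features, class_weights, labels, basis)
```

In a real training pipeline this term would be added to the softmax-based classification loss, with `basis` optimized jointly with the network parameters.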