Rectifying the Data Bias in Knowledge Distillation

Boxiao Liu, Shenghan Zhang, Guanglu Song, Haihang You, Yu Liu; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2021, pp. 1477-1486

Abstract


Knowledge distillation is a representative technique for model compression and acceleration, which is important for deploying neural networks on resource-limited devices. The knowledge transferred from teacher to student is the mapping realized by the teacher model, i.e., it is represented by all of the teacher's input-output pairs. In practice, however, the student model only learns from data pairs drawn from a dataset that may be biased, and we argue that this limits the performance of knowledge distillation. In this paper, we first quantitatively define the uniformity of the sampled training data, providing a unified view of methods that learn from biased data. We then evaluate uniformity on real-world datasets and show that existing methods in fact improve the uniformity of the data. We further introduce two uniformity-oriented methods for rectifying the data bias in knowledge distillation. Extensive experiments on Face Recognition and Person Re-identification demonstrate the effectiveness of our method. Moreover, we analyze the sampled data on Face Recognition and show that a better balance is achieved between races and between easy and hard samples. This effect is also confirmed when training the student model from scratch, yielding performance comparable to standard knowledge distillation.
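To make the notion of uniformity concrete, the sketch below shows one simple way such a measure could be computed; it is only an illustrative proxy (normalized entropy of the empirical label distribution of a sampled subset), not the authors' actual definition, and the function and variable names are hypothetical.

```python
# Illustrative sketch only: the abstract does not give the paper's exact
# uniformity definition, so this uses an entropy-based proxy as an assumption.
import numpy as np

def uniformity_score(sample_labels, num_classes):
    """Entropy of the empirical label distribution, normalized to [0, 1].

    A score near 1.0 means the sampled data covers all classes roughly
    equally (close to uniform); lower scores indicate a biased sample.
    """
    counts = np.bincount(sample_labels, minlength=num_classes).astype(float)
    probs = counts / counts.sum()
    nonzero = probs[probs > 0]
    entropy = -(nonzero * np.log(nonzero)).sum()
    return entropy / np.log(num_classes)

# Example: a heavily biased sample vs. a roughly balanced one.
biased = np.array([0] * 90 + [1] * 10)          # two classes dominate
balanced = np.random.randint(0, 10, size=1000)  # roughly uniform over 10 classes
print(uniformity_score(biased, 10), uniformity_score(balanced, 10))
```

Under this kind of measure, a uniformity-oriented sampling or reweighting scheme would aim to raise the score of the data the student actually sees during distillation.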

Related Material


[bibtex]
@InProceedings{Liu_2021_ICCV,
    author    = {Liu, Boxiao and Zhang, Shenghan and Song, Guanglu and You, Haihang and Liu, Yu},
    title     = {Rectifying the Data Bias in Knowledge Distillation},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
    month     = {October},
    year      = {2021},
    pages     = {1477-1486}
}