Student Customized Knowledge Distillation: Bridging the Gap Between Student and Teacher

Yichen Zhu, Yi Wang; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 5057-5066

Abstract

Knowledge distillation (KD) transfers the dark knowledge from cumbersome (teacher) networks to lightweight (student) networks, with the expectation that the student achieves better performance than it would by training without the teacher's knowledge. However, a counter-intuitive observation is that better teachers do not necessarily make better students, due to the capacity mismatch between them. To this end, we present a novel adaptive knowledge distillation method to complement traditional approaches. The proposed method, named Student Customized Knowledge Distillation (SCKD), examines the capacity mismatch between teacher and student from the perspective of gradient similarity. We formulate knowledge distillation as a multi-task learning problem so that the teacher transfers knowledge to the student only if the student can benefit from learning such knowledge. We validate our method on multiple datasets with various teacher-student configurations on image classification, object detection, and semantic segmentation.
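To make the gating idea concrete, below is a minimal, hypothetical sketch of gradient-similarity gating in PyTorch, assuming a cross-entropy task loss and a temperature-scaled KL-divergence distillation loss. It illustrates the "transfer only if the student benefits" rule from the abstract and is not the authors' exact SCKD formulation; the function name sckd_style_loss and its signature are invented here for illustration.

```python
# Minimal sketch (assumption, not the paper's exact algorithm): keep the KD term
# only when its gradient agrees with the task-loss gradient on the student parameters.
import torch
import torch.nn.functional as F

def sckd_style_loss(student_logits, teacher_logits, targets, student_params, T=4.0):
    task_loss = F.cross_entropy(student_logits, targets)
    kd_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)

    # Gradients of each objective w.r.t. the shared student parameters.
    g_task = torch.autograd.grad(task_loss, student_params, retain_graph=True)
    g_kd = torch.autograd.grad(kd_loss, student_params, retain_graph=True)
    cos = F.cosine_similarity(
        torch.cat([g.reshape(-1) for g in g_task]),
        torch.cat([g.reshape(-1) for g in g_kd]),
        dim=0,
    )

    # Transfer the teacher's knowledge only if it is expected to help the student.
    return task_loss + kd_loss if cos.item() > 0 else task_loss
```

In a training loop, one would pass student_params = [p for p in student.parameters() if p.requires_grad] (and a detached teacher output) and back-propagate the returned loss; the same cosine check could in principle be applied per layer or per distillation branch rather than over all parameters at once.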

Related Material

[pdf]
[bibtex]
@InProceedings{Zhu_2021_ICCV,
    author    = {Zhu, Yichen and Wang, Yi},
    title     = {Student Customized Knowledge Distillation: Bridging the Gap Between Student and Teacher},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {5057-5066}
}