EA-KD: Entropy-based Adaptive Knowledge Distillation

Chi-Ping Su, Ching-Hsun Tseng, Bin Pu, Lei Zhao, Jiewen Yang, Zhuangzhuang Chen, Shin-Jye Lee; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2025, pp. 731-740

Abstract

Knowledge distillation (KD) enables a smaller "student" model to mimic a larger "teacher" model by transferring knowledge from the teacher's outputs or features. However, most KD methods treat all samples uniformly, overlooking the varying learning value of each sample and thereby limiting effectiveness. In this paper, we propose Entropy-based Adaptive Knowledge Distillation (EA-KD), a simple yet effective plug-and-play KD method that prioritizes learning from valuable samples. EA-KD quantifies each sample's learning value by strategically combining the entropies of the teacher's and student's outputs, then dynamically reweights the distillation loss to place greater emphasis on high-entropy samples. Extensive experiments across diverse KD frameworks and tasks--including image classification, object detection, and large language model (LLM) distillation--demonstrate that EA-KD consistently enhances performance, achieving state-of-the-art results with negligible computational cost. Our code is available at: https://github.com/cpsu00/EA-KD
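To make the idea concrete, the sketch below shows a minimal PyTorch-style entropy-reweighted KD loss. It is only an illustration of the mechanism described in the abstract, not the authors' implementation (see the GitHub repository for that): the function name ea_kd_loss, the temperature T, the simple sum of the two entropies, and the mean-normalization of the weights are all assumptions made here for clarity.

    import torch
    import torch.nn.functional as F

    def ea_kd_loss(student_logits, teacher_logits, T=4.0):
        """Illustrative sketch of an entropy-reweighted KD loss (not the official EA-KD code)."""
        # Softened probability distributions at temperature T
        p_t = F.softmax(teacher_logits / T, dim=1)
        log_p_s = F.log_softmax(student_logits / T, dim=1)
        p_s = log_p_s.exp()

        # Per-sample KL divergence (the vanilla KD term)
        kd = F.kl_div(log_p_s, p_t, reduction="none").sum(dim=1) * (T ** 2)

        # Per-sample entropies of the teacher's and student's predictions
        h_t = -(p_t * torch.log(p_t.clamp_min(1e-8))).sum(dim=1)
        h_s = -(p_s * log_p_s).sum(dim=1)

        # Combine the entropies into a per-sample "learning value" weight.
        # A plain sum and mean-normalization are assumptions; the paper's
        # exact combination may differ.
        w = (h_t + h_s).detach()
        w = w / w.mean().clamp_min(1e-8)

        # Reweighted distillation loss: high-entropy samples get more emphasis
        return (w * kd).mean()

Because the weight is detached and normalized to unit mean, the overall loss scale stays close to the vanilla KD loss while high-entropy samples contribute more to the gradient, which matches the plug-and-play, negligible-overhead behavior claimed in the abstract.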

Related Material

@InProceedings{Su_2025_ICCV,
    author    = {Su, Chi-Ping and Tseng, Ching-Hsun and Pu, Bin and Zhao, Lei and Yang, Jiewen and Chen, Zhuangzhuang and Lee, Shin-Jye},
    title     = {EA-KD: Entropy-based Adaptive Knowledge Distillation},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2025},
    pages     = {731-740}
}