Lipschitz Continuity Guided Knowledge Distillation

Yuzhang Shang, Bin Duan, Ziliang Zong, Liqiang Nie, Yan Yan; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 10675-10684

Abstract


Knowledge distillation has become one of the most important model compression techniques by distilling knowledge from larger teacher networks to smaller student ones. Although great success has been achieved by prior distillation methods through delicately designing various types of knowledge, they overlook the functional properties of neural networks, which makes applying those techniques to new tasks unreliable and non-trivial. To alleviate this problem, in this paper we leverage Lipschitz continuity to better represent the functional characteristics of neural networks and guide the knowledge distillation process. In particular, we propose a novel Lipschitz Continuity Guided Knowledge Distillation framework that faithfully distills knowledge by minimizing the distance between the two neural networks' Lipschitz constants, which enables teacher networks to better regularize student networks and improve their performance. To address the NP-hard problem of computing the Lipschitz constant, we propose an explainable approximation algorithm with an explicit theoretical derivation. Experimental results show that our method outperforms other benchmarks on several knowledge distillation tasks (e.g., classification, segmentation, and object detection) on the CIFAR-100, ImageNet, and PASCAL VOC datasets.
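
The following is a minimal PyTorch sketch of the idea described in the abstract, not the paper's exact algorithm: it assumes the Lipschitz constant is approximated by the product of per-layer spectral norms (a standard upper bound for feed-forward networks with 1-Lipschitz activations), and the helper names (spectral_norm_estimate, lipschitz_upper_bound, lipschitz_kd_loss) and the weighting factor beta are illustrative, not taken from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F


def spectral_norm_estimate(weight: torch.Tensor, n_iter: int = 10) -> torch.Tensor:
    """Estimate the largest singular value of a weight tensor by power iteration."""
    w = weight.reshape(weight.shape[0], -1)            # treat conv kernels as 2-D matrices
    u = torch.randn(w.shape[0], device=w.device)
    with torch.no_grad():                              # the iteration itself needs no gradient
        for _ in range(n_iter):
            v = F.normalize(w.t() @ u, dim=0)
            u = F.normalize(w @ v, dim=0)
    return u @ (w @ v)                                 # sigma_max estimate, differentiable w.r.t. w


def lipschitz_upper_bound(model: nn.Module) -> torch.Tensor:
    """Product of per-layer spectral norms: an upper bound on the network's Lipschitz constant."""
    device = next(model.parameters()).device
    bound = torch.ones((), device=device)
    for m in model.modules():
        if isinstance(m, (nn.Linear, nn.Conv2d)):
            bound = bound * spectral_norm_estimate(m.weight)
    return bound


def lipschitz_kd_loss(student, teacher, s_logits, t_logits, labels,
                      T: float = 4.0, alpha: float = 0.9, beta: float = 0.1):
    """Standard KD objective plus a penalty on the gap between the two Lipschitz estimates."""
    ce = F.cross_entropy(s_logits, labels)
    kd = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                  F.softmax(t_logits / T, dim=1),
                  reduction="batchmean") * T * T
    lip_gap = (lipschitz_upper_bound(student)
               - lipschitz_upper_bound(teacher).detach()).pow(2)
    return (1.0 - alpha) * ce + alpha * kd + beta * lip_gap

In a training loop one would add beta * lip_gap on top of the usual cross-entropy and soft-label terms; the teacher's Lipschitz estimate is detached since the teacher is fixed during distillation.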

Related Material


[bibtex]
@InProceedings{Shang_2021_ICCV,
    author    = {Shang, Yuzhang and Duan, Bin and Zong, Ziliang and Nie, Liqiang and Yan, Yan},
    title     = {Lipschitz Continuity Guided Knowledge Distillation},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {10675-10684}
}