A Gift From Knowledge Distillation: Fast Optimization, Network Minimization and Transfer Learning

Junho Yim, Donggyu Joo, Jihoon Bae, Junmo Kim; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 4133-4141

Abstract


We introduce a novel technique for knowledge transfer, where knowledge from a pretrained deep neural network (DNN) is distilled and transferred to another DNN. As a DNN maps the input space to the output space through many layers sequentially, we define the distilled knowledge to be transferred in terms of flow between layers, which is calculated by computing the inner product between features from two layers. When we compare the student DNN and the original network with the same size as the student DNN but trained without a teacher network, the proposed method of transferring the distilled knowledge as the flow between two layers exhibits three important phenomena: (1) the student DNN that learns the distilled knowledge is optimized much faster than the original model; (2) the student DNN outperforms the original DNN; and (3) the student DNN can learn the distilled knowledge from a teacher DNN that is trained on a different task, and the student DNN outperforms the original DNN that is trained from scratch.
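
The sketch below illustrates the layer-to-layer "flow" described in the abstract: an inner product between the feature maps of two layers, averaged over spatial positions, with the student trained to match the teacher's flow matrices. This is a minimal illustration under assumed shapes and illustrative names (flow_matrix, feat_a, feat_b), not the authors' released code.

import numpy as np

def flow_matrix(feat_a, feat_b):
    # feat_a: (h, w, c1) feature map from the earlier layer
    # feat_b: (h, w, c2) feature map from the later layer (same spatial size)
    # returns a (c1, c2) matrix of channel-pair inner products averaged over the h*w positions
    h, w, c1 = feat_a.shape
    c2 = feat_b.shape[2]
    a = feat_a.reshape(h * w, c1)
    b = feat_b.reshape(h * w, c2)
    return a.T @ b / (h * w)

# The student would then be trained to reproduce the teacher's flow matrices,
# e.g. with a squared-error penalty between the two (random features used here only as stand-ins).
rng = np.random.default_rng(0)
teacher_flow = flow_matrix(rng.standard_normal((32, 32, 16)), rng.standard_normal((32, 32, 32)))
student_flow = flow_matrix(rng.standard_normal((32, 32, 16)), rng.standard_normal((32, 32, 32)))
transfer_loss = 0.5 * np.mean((teacher_flow - student_flow) ** 2)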

Related Material


[bibtex]
@InProceedings{Yim_2017_CVPR,
author = {Yim, Junho and Joo, Donggyu and Bae, Jihoon and Kim, Junmo},
title = {A Gift From Knowledge Distillation: Fast Optimization, Network Minimization and Transfer Learning},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {July},
year = {2017}
}