Distillation-Based Training for Multi-Exit Architectures

Mary Phuong, Christoph H. Lampert; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 1355-1364

Abstract


Multi-exit architectures, in which a stack of processing layers is interleaved with early output layers, allow the processing of a test example to stop early and thus save computation time and/or energy. In this work, we propose a new training procedure for multi-exit architectures based on the principle of knowledge distillation. The method encourages early exits to mimic later, more accurate exits by matching their probability outputs. Experiments on CIFAR100 and ImageNet show that distillation-based training significantly improves the accuracy of early exits while maintaining state-of-the-art accuracy for late ones. The method is particularly beneficial when training data is limited, and it allows a straightforward extension to semi-supervised learning, i.e. it can also make use of unlabeled data at training time. Moreover, it takes only a few lines to implement and imposes almost no computational overhead at training time, and none at all at test time.
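The idea can be illustrated in a few lines, as the abstract suggests. Below is a minimal PyTorch-style sketch of a distillation-based multi-exit loss in the spirit described above: every exit receives the usual cross-entropy loss, and each early exit additionally matches its temperature-softened probabilities to those of the last exit. The function name, the temperature, the weighting alpha, and the choice of the last exit as sole teacher are illustrative assumptions here, not necessarily the exact setup used in the paper.

import torch
import torch.nn.functional as F

def multi_exit_distillation_loss(exit_logits, labels, temperature=2.0, alpha=0.5):
    # exit_logits: list of [batch, num_classes] tensors, ordered from earliest to last exit.
    # The last (most accurate) exit acts as the teacher; its gradients are blocked.
    teacher_probs = F.softmax(exit_logits[-1].detach() / temperature, dim=1)

    total = 0.0
    for i, logits in enumerate(exit_logits):
        loss = F.cross_entropy(logits, labels)  # supervised term for every exit
        if i < len(exit_logits) - 1:
            # distillation term: the early exit mimics the teacher's softened probabilities
            kd = F.kl_div(F.log_softmax(logits / temperature, dim=1),
                          teacher_probs, reduction="batchmean")
            loss = loss + alpha * (temperature ** 2) * kd
        total = total + loss
    return total

In a training loop, exit_logits would be the list of logits produced by all exits for one batch. For the semi-supervised extension mentioned above, one would presumably keep only the distillation term for unlabeled examples, since the cross-entropy term requires labels.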

Related Material


BibTeX
@InProceedings{Phuong_2019_ICCV,
author = {Phuong, Mary and Lampert, Christoph H.},
title = {Distillation-Based Training for Multi-Exit Architectures},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}