Small Scale Data-Free Knowledge Distillation

He Liu, Yikai Wang, Huaping Liu, Fuchun Sun, Anbang Yao; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 6008-6016

Abstract


Data-free knowledge distillation is able to utilize the knowledge learned by a large teacher network to augment the training of a smaller student network without accessing the original training data, avoiding privacy, security, and proprietary risks in real applications. In this line of research, existing methods typically follow an inversion-and-distillation paradigm in which a generative adversarial network, trained on the fly under the guidance of the pre-trained teacher network, is used to synthesize a large-scale sample set for knowledge distillation. In this paper, we reexamine this common data-free knowledge distillation paradigm, showing that there is considerable room to improve the overall training efficiency through the lens of "small-scale inverted data for knowledge distillation". In light of three empirical observations indicating the importance of balancing class distributions in terms of synthetic sample diversity and difficulty during both the data inversion and distillation processes, we propose Small Scale Data-free Knowledge Distillation (SSD-KD). In formulation, SSD-KD introduces a modulating function to balance synthetic samples and a priority sampling function to select proper samples, facilitated by a dynamic replay buffer and a reinforcement learning strategy. As a result, SSD-KD can perform distillation training conditioned on an extremely small scale of synthetic samples (e.g., 10x less than the original training data scale), making the overall training efficiency one or two orders of magnitude faster than many mainstream methods while retaining superior or competitive model performance, as demonstrated on popular image classification and semantic segmentation benchmarks. The code is available at https://github.com/OSVAI/SSD-KD.
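
The abstract only names the key components (a priority sampling function over a dynamic replay buffer of synthetic samples). The sketch below is a minimal, hypothetical illustration of what priority-based sampling from such a buffer could look like; the class name, priority formula, and parameters are assumptions for illustration, not the authors' SSD-KD implementation (see the linked repository for that).

```python
import numpy as np

class PriorityReplayBuffer:
    """Illustrative sketch: store synthetic samples with per-sample priorities
    and draw small distillation batches in proportion to those priorities.
    Not the authors' implementation; details are assumed."""

    def __init__(self, capacity=2048):
        self.capacity = capacity
        self.samples = []     # synthetic inputs (e.g., tensors or arrays)
        self.priorities = []  # one scalar priority per stored sample

    def add(self, sample, priority):
        # Evict the lowest-priority sample once the buffer is full,
        # keeping the buffer small while retaining informative samples.
        if len(self.samples) >= self.capacity:
            idx = int(np.argmin(self.priorities))
            self.samples.pop(idx)
            self.priorities.pop(idx)
        self.samples.append(sample)
        self.priorities.append(float(priority))

    def sample_batch(self, batch_size=64):
        # Draw without replacement, with probability proportional to priority,
        # so harder / more diverse synthetic samples are favored.
        p = np.asarray(self.priorities)
        p = p / p.sum()
        n = min(batch_size, len(self.samples))
        idxs = np.random.choice(len(self.samples), size=n, replace=False, p=p)
        return [self.samples[i] for i in idxs]
```

In this reading, the generator keeps refreshing a small buffer of inverted samples, and distillation batches are drawn by priority rather than from a large static synthetic set, which is what allows training on roughly 10x fewer samples.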

Related Material


[pdf]
[bibtex]
@InProceedings{Liu_2024_CVPR,
    author    = {Liu, He and Wang, Yikai and Liu, Huaping and Sun, Fuchun and Yao, Anbang},
    title     = {Small Scale Data-Free Knowledge Distillation},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {6008-6016}
}