Data-Free Knowledge Amalgamation via Group-Stack Dual-GAN

Jingwen Ye, Yixin Ji, Xinchao Wang, Xin Gao, Mingli Song; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, pp. 12516-12525

Abstract


Recent advances in deep learning have provided procedures for learning a single network that amalgamates multiple streams of knowledge from pre-trained Convolutional Neural Network (CNN) models, thus reducing the annotation cost. However, almost all existing methods demand massive amounts of training data, which may be unavailable due to privacy or transmission issues. In this paper, we propose a data-free knowledge amalgamation strategy to craft a well-behaved multi-task student network from multiple single/multi-task teachers. The main idea is to construct group-stack generative adversarial networks (GANs) with two dual generators. First, one generator is trained to collect the knowledge by reconstructing images approximating the original dataset used to pre-train the teachers. Then, a dual generator is trained by taking the output of the former generator as input. Finally, we treat the dual-part generator as the target network and regroup it. As demonstrated on several multi-label classification benchmarks, the proposed method achieves surprisingly competitive results without any training data, even compared with some fully supervised methods.
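
The abstract outlines a two-stage pipeline: synthesize training images from the frozen teachers alone, then distill the teachers' knowledge into a student using only those synthetic images. The following is a minimal, hypothetical PyTorch sketch of that general idea; the tiny architectures, the entropy-based synthesis loss, and all names are illustrative assumptions and do not reproduce the paper's actual group-stack dual-GAN design.

```python
# Hypothetical sketch: data-free synthesis (stage 1) + distillation of two
# frozen teachers into one student on synthetic images (stage 2).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyGenerator(nn.Module):
    """Maps a noise vector to a 3x32x32 image (illustrative stand-in)."""
    def __init__(self, z_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, 256), nn.ReLU(),
            nn.Linear(256, 3 * 32 * 32), nn.Tanh(),
        )
    def forward(self, z):
        return self.net(z).view(-1, 3, 32, 32)

class TinyNet(nn.Module):
    """Small classifier used both as a frozen teacher and as the student."""
    def __init__(self, num_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(), nn.Linear(3 * 32 * 32, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )
    def forward(self, x):
        return self.net(x)

teacher_a, teacher_b = TinyNet(10), TinyNet(5)  # pre-trained weights assumed given
for t in (teacher_a, teacher_b):
    t.eval()
    for p in t.parameters():
        p.requires_grad_(False)

generator = TinyGenerator()
student = TinyNet(15)  # student covers both teachers' label sets (10 + 5)
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_s = torch.optim.Adam(student.parameters(), lr=1e-3)

def teacher_logits(x):
    """Concatenate the two teachers' logits as the amalgamated target."""
    return torch.cat([teacher_a(x), teacher_b(x)], dim=1)

for step in range(200):
    z = torch.randn(64, 100)

    # Stage 1: update the generator so its fake images elicit confident,
    # low-entropy teacher predictions (a common data-free synthesis signal).
    fake = generator(z)
    probs = F.softmax(teacher_logits(fake), dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    opt_g.zero_grad()
    entropy.backward()
    opt_g.step()

    # Stage 2: update the dual/student network on the synthetic images by
    # matching the teachers' soft outputs (distillation without real data).
    fake = generator(z).detach()
    loss_kd = F.mse_loss(student(fake), teacher_logits(fake))
    opt_s.zero_grad()
    loss_kd.backward()
    opt_s.step()
```

In this sketch the student plays the role of the "dual generator" trained on the first generator's outputs; the paper's group-stack construction and regrouping of the dual part into the final target network are not modeled here.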

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Ye_2020_CVPR,
author = {Ye, Jingwen and Ji, Yixin and Wang, Xinchao and Gao, Xin and Song, Mingli},
title = {Data-Free Knowledge Amalgamation via Group-Stack Dual-GAN},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2020}
}