Generative Zero-Shot Network Quantization

Xiangyu He, Jiahao Lu, Weixiang Xu, Qinghao Hu, Peisong Wang, Jian Cheng; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2021, pp. 3000-3011

Abstract


Convolutional neural networks are able to learn realistic image priors from numerous training samples in low-level image generation and restoration. We show that, for high-level image recognition tasks, we can further reconstruct "realistic" images of each category by leveraging intrinsic Batch Normalization (BN) statistics without any training data. Inspired by popular VAE/GAN methods, we regard the zero-shot optimization process over synthetic images as generative modeling that matches the distribution of BN statistics. The generated images then serve as a calibration set for subsequent zero-shot network quantization. Our method meets the needs of quantizing models trained on sensitive data, e.g., when no data is available due to privacy concerns. Extensive experiments on benchmark datasets show that, with the help of generated data, our approach consistently outperforms existing data-free quantization methods.
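The core idea of matching stored BN statistics can be illustrated with a minimal sketch: optimize randomly initialized synthetic samples so that their batch mean and variance approach a layer's BN running statistics. The one-dimensional setup below (the names `mu_bn`, `var_bn`, the learning rate, and the hand-derived gradient loop) is an illustrative toy under my own assumptions, not the paper's actual implementation, which operates per BN layer of a full network.

```python
import numpy as np

# Toy sketch: drive the batch statistics of synthetic samples x toward a
# layer's stored BN running statistics (mu_bn, var_bn) by gradient descent
# on L = (mean(x) - mu_bn)^2 + (var(x) - var_bn)^2.
rng = np.random.default_rng(0)
mu_bn, var_bn = 2.0, 0.5              # target BN running mean / variance
x = rng.normal(0.0, 1.0, size=256)    # randomly initialized synthetic batch

lr = 0.05
for _ in range(500):
    mu, var = x.mean(), x.var()
    # n-scaled per-sample gradient of L (the 1/n factor is folded into lr):
    grad = 2.0 * (mu - mu_bn) + 4.0 * (var - var_bn) * (x - mu)
    x -= lr * grad

print(round(float(x.mean()), 3), round(float(x.var()), 3))  # → 2.0 0.5
```

In the paper's setting this statistics-matching objective is summed over all BN layers and optimized jointly with a class-conditional term by backpropagating through the frozen pretrained network, yielding per-category "realistic" images for calibration.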

Related Material


@InProceedings{He_2021_CVPR,
    author    = {He, Xiangyu and Lu, Jiahao and Xu, Weixiang and Hu, Qinghao and Wang, Peisong and Cheng, Jian},
    title     = {Generative Zero-Shot Network Quantization},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2021},
    pages     = {3000-3011}
}