Automatic Joint Structured Pruning and Quantization for Efficient Neural Network Training and Compression

Xiaoyi Qu, David Aponte, Colby Banbury, Daniel P. Robinson, Tianyu Ding, Kazuhito Koishida, Ilya Zharkov, Tianyi Chen; Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR), 2025, pp. 15234-15244

Abstract


Structured pruning and quantization are fundamental techniques for reducing the size of deep neural networks (DNNs), and they are typically applied independently. Applying them jointly via co-optimization has the potential to produce smaller, higher-quality models. However, existing joint schemes are not widely used because of (1) engineering difficulties (complicated multi-stage processes), (2) black-box optimization (extensive hyperparameter tuning to control the overall compression), and (3) insufficient architecture generalization. To address these limitations, we present GETA, a framework that automatically and efficiently performs joint structured pruning and quantization-aware training on any DNN. GETA introduces three key innovations: (I) a quantization-aware dependency graph (QADG) that constructs a pruning search space for generic quantization-aware DNNs, (II) a partially projected stochastic gradient method that guarantees layer-wise bit constraints are satisfied, and (III) a new joint learning strategy that incorporates interpretable relationships between pruning and quantization. Numerical experiments on both convolutional neural networks and transformer architectures show that our approach achieves competitive, and often superior, performance compared to existing joint pruning and quantization methods. The source code is available at https://github.com/microsoft/GETA.

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Qu_2025_CVPR,
    author    = {Qu, Xiaoyi and Aponte, David and Banbury, Colby and Robinson, Daniel P. and Ding, Tianyu and Koishida, Kazuhito and Zharkov, Ilya and Chen, Tianyi},
    title     = {Automatic Joint Structured Pruning and Quantization for Efficient Neural Network Training and Compression},
    booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
    month     = {June},
    year      = {2025},
    pages     = {15234-15244}
}