Hyperspherical Quantization: Toward Smaller and More Accurate Models

Dan Liu, Xi Chen, Chen Ma, Xue Liu; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2023, pp. 5262-5272

Abstract


Model quantization enables the deployment of deep neural networks on resource-constrained devices. Vector quantization reduces model size by indexing model weights with full-precision embeddings, i.e., codewords, but the indexed weights must be restored to 32-bit during computation. Binary and other low-precision quantization methods can reduce the model size by up to 32x, however, at the cost of a considerable accuracy drop. In this paper, we propose an efficient framework for ternary quantization that produces smaller and more accurate compressed models. By integrating hyperspherical learning, pruning, and reinitialization, our proposed Hyperspherical Quantization (HQ) method reduces the cosine distance between the full-precision and ternary weights, thus reducing the bias of the straight-through gradient estimator during ternary quantization. Compared with existing work at similar compression levels (~30x, ~40x), our method significantly improves the test accuracy and reduces the model size.
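To make the cosine-distance criterion concrete, the sketch below ternarizes a weight tensor with a generic magnitude-threshold scheme and measures the cosine distance to the full-precision weights. This is a minimal illustration under assumptions, not the paper's HQ procedure: the `ternarize` function, its `sparsity` parameter, and the least-squares scale `alpha` are standard ternary-weight heuristics chosen here for clarity.

```python
import numpy as np

def ternarize(w, sparsity=0.5):
    # Generic threshold ternarization (illustrative, not the HQ method):
    # zero out the smallest-magnitude `sparsity` fraction of weights and
    # map the survivors to {-alpha, +alpha} with one per-tensor scale.
    thresh = np.quantile(np.abs(w).ravel(), sparsity)
    mask = np.abs(w) > thresh
    signs = np.sign(w) * mask
    # alpha minimizing ||w - alpha * signs||^2 is the mean magnitude
    # of the surviving weights.
    alpha = np.abs(w[mask]).mean() if mask.any() else 0.0
    return alpha * signs

def cosine_distance(a, b):
    # 1 - cos(theta) between the flattened tensors; smaller means the
    # ternary weights point closer to the full-precision direction,
    # i.e., a less biased straight-through gradient estimate.
    a, b = a.ravel(), b.ravel()
    return 1.0 - (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64))
w_ternary = ternarize(w)
print(cosine_distance(w, w_ternary))
```

In this framing, HQ's hyperspherical learning can be read as training the full-precision weights so that this distance is small before quantization is applied, rather than relying on the post-hoc scale alone.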

Related Material


[pdf] [arXiv]
[bibtex]
@InProceedings{Liu_2023_WACV,
  author    = {Liu, Dan and Chen, Xi and Ma, Chen and Liu, Xue},
  title     = {Hyperspherical Quantization: Toward Smaller and More Accurate Models},
  booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
  month     = {January},
  year      = {2023},
  pages     = {5262-5272}
}