Instance-Aware Group Quantization for Vision Transformers

Jaehyeon Moon, Dohyung Kim, Junyong Cheon, Bumsub Ham; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 16132-16141

Abstract

Post-training quantization (PTQ) is an efficient model compression technique that quantizes a pretrained full-precision model using only a small calibration set of unlabeled samples, without retraining. PTQ methods for convolutional neural networks (CNNs) provide quantization results comparable to those of their full-precision counterparts. Directly applying them to vision transformers (ViTs), however, incurs severe performance degradation, mainly due to the architectural differences between CNNs and ViTs. In particular, the distribution of activations for each channel varies drastically according to input instances, making PTQ methods for CNNs inappropriate for ViTs. To address this, we introduce instance-aware group quantization for ViTs (IGQ-ViT). Specifically, we propose to split the channels of activation maps into multiple groups dynamically for each input instance, such that activations within each group share similar statistical properties. We also extend our scheme to quantize softmax attentions across tokens. In addition, the number of groups for each layer is adjusted to minimize the discrepancy between the predictions of quantized and full-precision models, under a bit-operation (BOP) constraint. We show extensive experimental results on image classification, object detection, and instance segmentation with various transformer architectures, demonstrating the effectiveness of our approach.
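To make the per-instance grouping idea concrete, below is a minimal PyTorch sketch of group quantization applied to the activations of a single input instance. The function name instance_aware_group_quantize, the range-sorting heuristic for assigning channels to groups, and the uniform asymmetric quantizer are illustrative assumptions rather than the paper's exact procedure; the same idea would extend to softmax attentions by grouping across tokens instead of channels.

import torch

def instance_aware_group_quantize(x, num_groups=8, num_bits=4):
    """Illustrative sketch: quantize activations of ONE input instance.

    x: activation tensor of shape (tokens, channels). Channels are
    assigned to groups dynamically for this instance, and each group
    gets its own quantization parameters (asymmetric uniform quantizer).
    """
    tokens, channels = x.shape
    qmax = 2 ** num_bits - 1

    # Per-channel statistics computed from this particular instance.
    ch_min = x.min(dim=0).values
    ch_max = x.max(dim=0).values
    ch_range = ch_max - ch_min

    # Assumption: sort channels by dynamic range so that channels within
    # a group share similar statistics (a heuristic stand-in for the
    # paper's grouping criterion).
    order = torch.argsort(ch_range)
    group_ids = torch.empty(channels, dtype=torch.long)
    group_size = (channels + num_groups - 1) // num_groups
    for g in range(num_groups):
        group_ids[order[g * group_size:(g + 1) * group_size]] = g

    x_q = torch.empty_like(x)
    for g in range(num_groups):
        idx = (group_ids == g).nonzero(as_tuple=True)[0]
        lo, hi = x[:, idx].min(), x[:, idx].max()
        scale = (hi - lo).clamp(min=1e-8) / qmax
        zero = torch.round(-lo / scale)
        q = torch.clamp(torch.round(x[:, idx] / scale) + zero, 0, qmax)
        x_q[:, idx] = (q - zero) * scale  # dequantize to simulate quantization
    return x_q

# Example: activations of a single ViT-B instance (197 tokens, 768 channels).
x = torch.randn(197, 768)
x_q = instance_aware_group_quantize(x, num_groups=8, num_bits=4)

Because the group assignment is recomputed per instance, channels whose statistics shift drastically across inputs can still land in a group with a matching dynamic range, which is the property that per-channel or per-tensor static quantizers lack.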

Related Material

[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Moon_2024_CVPR,
    author    = {Moon, Jaehyeon and Kim, Dohyung and Cheon, Junyong and Ham, Bumsub},
    title     = {Instance-Aware Group Quantization for Vision Transformers},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {16132-16141}
}