A2Q: Accumulator-Aware Quantization with Guaranteed Overflow Avoidance

Ian Colbert, Alessandro Pappalardo, Jakoba Petri-Koenig; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 16989-16998

Abstract


We present accumulator-aware quantization (A2Q), a novel weight quantization method designed to train quantized neural networks (QNNs) to avoid overflow when using low-precision accumulators during inference. A2Q introduces a unique formulation inspired by weight normalization that constrains the L1-norm of model weights according to accumulator bit width bounds that we derive. Thus, in training QNNs for low-precision accumulation, A2Q also inherently promotes unstructured weight sparsity to guarantee overflow avoidance. We apply our method to deep learning-based computer vision tasks to show that A2Q can train QNNs for low-precision accumulators while maintaining model accuracy competitive with a floating-point baseline. In our evaluations, we consider the impact of A2Q on both general-purpose platforms and programmable hardware. However, we primarily target model deployment on FPGAs because they can be programmed to fully exploit custom accumulator bit widths. Our experiments show that accumulator bit width significantly impacts the resource efficiency of FPGA-based accelerators. On average across our benchmarks, A2Q offers up to a 2.3x reduction in resource utilization over 32-bit accumulator counterparts while maintaining 99.2% of the floating-point model accuracy.
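
The accumulator bit width bound mentioned in the abstract can be illustrated with a standard worst-case argument: the magnitude of a dot product is at most the L1-norm of the weights times the largest input magnitude, and this product must fit in the signed accumulator range. The Python sketch below shows this reasoning under stated assumptions (signed P-bit accumulator, signed N-bit inputs); the function names l1_norm_bound and constrain_weights are illustrative and not taken from the paper or its implementation, and the post-hoc channel-wise scaling shown here is only a simplified stand-in for A2Q's weight-normalization-style reparameterization applied during training.

import torch

def l1_norm_bound(acc_bits: int, input_bits: int, signed_inputs: bool = True) -> float:
    # Worst case: |sum_k w_k * x_k| <= ||w||_1 * max|x|, which must stay within
    # the signed P-bit accumulator range of +/- (2^(P-1) - 1).
    acc_max = 2 ** (acc_bits - 1) - 1
    input_max = 2 ** (input_bits - 1) if signed_inputs else 2 ** input_bits - 1
    return acc_max / input_max

def constrain_weights(w_int: torch.Tensor, acc_bits: int, input_bits: int) -> torch.Tensor:
    # w_int: float tensor holding integer-valued weights, shaped [out_channels, ...].
    # Shrink each output channel whose L1-norm exceeds the bound, then re-round
    # so the weights remain integer-valued. (Illustrative projection only.)
    bound = l1_norm_bound(acc_bits, input_bits)
    l1 = w_int.abs().sum(dim=tuple(range(1, w_int.dim())), keepdim=True).clamp(min=1e-8)
    scale = (bound / l1).clamp(max=1.0)
    return torch.round(w_int * scale)

# Example: a 16-bit accumulator with 8-bit signed inputs gives
# bound = (2**15 - 1) / 2**7, so each output channel's L1-norm must stay below ~256.

Driving the accumulator width down therefore tightens the L1-norm budget per output channel, which is why the abstract notes that A2Q inherently promotes unstructured weight sparsity.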

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Colbert_2023_ICCV,
    author    = {Colbert, Ian and Pappalardo, Alessandro and Petri-Koenig, Jakoba},
    title     = {A2Q: Accumulator-Aware Quantization with Guaranteed Overflow Avoidance},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {16989-16998}
}