Conjugate Adder Net (CAddNet) - A Space-Efficient Approximate CNN

Lulan Shen, Maryam Ziaeefard, Brett Meyer, Warren Gross, James J. Clark; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2022, pp. 2793-2797

Abstract


The AdderNet was recently developed as a way to implement deep neural networks without needing multiplication operations to combine weights and inputs. Instead, absolute values of the difference between weights and inputs are used, greatly reducing the gate-level implementation complexity. Training of AdderNets is challenging, however, and the loss curves during training tend to fluctuate significantly. In this paper, we propose the Conjugate Adder Network, or CAddNet, which uses the difference between the absolute values of conjugate pairs of inputs and the weights. We show that this can be implemented simply via a single minimum operation, resulting in a roughly 50% reduction in logic gate complexity compared with AdderNets. The CAddNet method also stabilizes training compared with AdderNets, yielding training curves similar to those of standard CNNs.
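One plausible reading of the abstract (a sketch of my interpretation, not the authors' code) is that for a conjugate input pair (x, -x), the smaller of the two AdderNet distances |x - w| and |-x - w| equals the difference of absolute values, via the identity min(|x - w|, |x + w|) = ||x| - |w||. A minimal NumPy sketch checking this identity, with hypothetical `addernet_response` / `caddnet_response` helpers:

```python
import numpy as np

# Hypothetical sketch (not the paper's implementation): one reading of the
# abstract is that the conjugate-pair distance reduces to a single minimum:
#     min(|x - w|, |x + w|) = ||x| - |w||
# so only the precomputed magnitudes |x| and |w| are needed per pair.

def addernet_response(x, w):
    """AdderNet-style similarity: negative sum of absolute differences."""
    return -np.abs(x - w).sum()

def caddnet_response(x, w):
    """Assumed CAddNet-style similarity: negative sum of differences
    between absolute values, i.e. -sum ||x| - |w||."""
    return -np.abs(np.abs(x) - np.abs(w)).sum()

rng = np.random.default_rng(0)
x = rng.standard_normal(16)
w = rng.standard_normal(16)

# Numerical check of the identity min(|x-w|, |x+w|) == ||x| - |w||
lhs = np.minimum(np.abs(x - w), np.abs(x + w))
rhs = np.abs(np.abs(x) - np.abs(w))
assert np.allclose(lhs, rhs)
```

Since ||x| - |w|| <= |x - w| elementwise, the assumed CAddNet response upper-bounds the AdderNet response for the same filter; the paper's actual layer definition, normalization, and training details are in the full text.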

Related Material


[bibtex]
@InProceedings{Shen_2022_CVPR,
  author    = {Shen, Lulan and Ziaeefard, Maryam and Meyer, Brett and Gross, Warren and Clark, James J.},
  title     = {Conjugate Adder Net (CAddNet) - A Space-Efficient Approximate CNN},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2022},
  pages     = {2793-2797}
}