Building Efficient Deep Neural Networks With Unitary Group Convolutions

Ritchie Zhao, Yuwei Hu, Jordan Dotzel, Christopher De Sa, Zhiru Zhang; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 11303-11312

Abstract


We propose unitary group convolutions (UGConvs), a building block for CNNs that composes a group convolution with unitary transforms in feature space to learn a richer set of representations than group convolution alone. UGConvs generalize two disparate ideas in CNN architecture, channel shuffling (as in ShuffleNet) and block-circulant networks (as in CirCNN), and provide unifying insights that lead to a deeper understanding of each technique. We experimentally demonstrate that dense unitary transforms can outperform channel shuffling in DNN accuracy; on the other hand, different dense transforms exhibit comparable accuracy. Based on these observations we propose HadaNet, a UGConv network using Hadamard transforms. HadaNets achieve accuracy similar to that of circulant networks with lower computational complexity, and better accuracy than ShuffleNets with the same number of parameters and floating-point multiplies.
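To make the channel-mixing idea concrete, below is a minimal NumPy sketch of a unitary Hadamard mix across channel groups, as a stand-in for ShuffleNet's channel shuffle; the function names and the choice to mix across the group axis are illustrative assumptions, not the paper's exact HadaNet formulation.

```python
import numpy as np

def hadamard(n):
    # Sylvester construction; n must be a power of two.
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def hadamard_channel_mix(x, groups):
    # x: feature map of shape (C, H, W).
    # View channels as (groups, C // groups) and apply a normalized
    # Hadamard transform across the group axis, so information flows
    # between groups (the role channel shuffle plays in ShuffleNet).
    C = x.shape[0]
    per_group = C // groups
    Hm = hadamard(groups) / np.sqrt(groups)  # unitary: Hm @ Hm.T = I
    xs = x.reshape(groups, per_group, *x.shape[1:])
    mixed = np.einsum('ij,jkhw->ikhw', Hm, xs)
    return mixed.reshape(C, *x.shape[1:])
```

Because the normalized Sylvester Hadamard matrix is symmetric and orthogonal, applying the mix twice recovers the input, which illustrates the "unitary" property the abstract relies on.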

Related Material


[bibtex]
@InProceedings{Zhao_2019_CVPR,
author = {Zhao, Ritchie and Hu, Yuwei and Dotzel, Jordan and De Sa, Christopher and Zhang, Zhiru},
title = {Building Efficient Deep Neural Networks With Unitary Group Convolutions},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2019}
}