Network Amplification With Efficient MACs Allocation

Chuanjian Liu, Kai Han, An Xiao, Ying Nie, Wei Zhang, Yunhe Wang; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2022, pp. 1933-1942


Recent studies on deep convolutional neural networks suggest a simple paradigm of architecture design: models with more MACs typically achieve better accuracy, as exemplified by EfficientNet and RegNet. These works enlarge the network architecture with one unified rule obtained through sampling and statistical methods. However, such a rule does not necessarily extrapolate to large networks, because it is derived from researchers' experience with small architectures. In this paper, we propose to enlarge the capacity of CNN models through fine-grained, stage-level MACs allocation over width, depth, and resolution. In particular, starting from a small base model, we gradually add extra channels, layers, or resolution in a dynamic programming manner. By modifying the computation of different stages step by step, the enlarged network achieves optimal allocation and utilization of MACs. On EfficientNet, our method consistently outperforms the original scaling method. In particular, when the proposed method is used to enlarge models derived from GhostNet, we achieve state-of-the-art ImageNet top-1 accuracies of 80.9% and 84.3% under 600M and 4.4B MACs, respectively.
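The stage-wise enlargement described in the abstract can be illustrated with a minimal sketch. Everything below is an illustrative assumption rather than the paper's actual code: the MACs model is a rough convolution estimate, the accuracy proxy is a placeholder diminishing-returns function, and the search is a simple greedy loop (the paper uses a dynamic programming formulation); step sizes for channels, layers, and resolution are made up.

```python
# Hypothetical sketch of step-by-step, stage-level MACs allocation.
# Names, step sizes, and the accuracy proxy are illustrative assumptions.
import math
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Stage:
    width: int  # number of channels
    depth: int  # number of layers
    res: int    # spatial resolution at this stage

def stage_macs(s: Stage) -> float:
    # Rough conv MACs model: layers * channels^2 * spatial positions.
    return s.depth * (s.width ** 2) * (s.res ** 2)

def proxy_gain(old: Stage, new: Stage) -> float:
    # Placeholder accuracy proxy with diminishing returns in MACs;
    # the real method would evaluate candidate networks instead.
    return math.log1p(stage_macs(new)) - math.log1p(stage_macs(old))

def enlarge(stages, budget):
    """Greedily enlarge the network: at each step, apply the single stage
    modification (wider / deeper / higher resolution) with the best
    proxy-gain per extra MAC, until no modification fits the budget."""
    stages = list(stages)
    total = sum(stage_macs(s) for s in stages)
    while True:
        best = None
        for i, s in enumerate(stages):
            for cand in (replace(s, width=s.width + 8),   # more channels
                         replace(s, depth=s.depth + 1),   # more layers
                         replace(s, res=s.res + 16)):     # higher resolution
                extra = stage_macs(cand) - stage_macs(s)
                if total + extra > budget:
                    continue
                score = proxy_gain(s, cand) / extra
                if best is None or score > best[0]:
                    best = (score, i, cand, extra)
        if best is None:
            return stages  # budget exhausted
        _, i, cand, extra = best
        stages[i] = cand
        total += extra
```

For example, `enlarge([Stage(16, 1, 32), Stage(32, 1, 16)], budget=2_000_000)` returns a per-stage configuration whose total estimated MACs stay within the budget, with the extra computation placed where the proxy predicts the largest gain per MAC.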

Related Material

@InProceedings{Liu_2022_CVPR,
  author    = {Liu, Chuanjian and Han, Kai and Xiao, An and Nie, Ying and Zhang, Wei and Wang, Yunhe},
  title     = {Network Amplification With Efficient MACs Allocation},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2022},
  pages     = {1933-1942}
}