MuffNet: Multi-Layer Feature Federation for Mobile Deep Learning

Hesen Chen, Ming Lin, Xiuyu Sun, Qian Qi, Hao Li, Rong Jin; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2019

Abstract


The increasing industrial demand to deploy deep neural networks on resource-constrained mobile devices motivates recent research on efficient structures for deep learning. One popular approach is to densify network connectivity by sharing feature maps between layers. A side effect of this approach is that the volume of the feature maps and the convolution computation blow up exponentially. In this work, we propose a novel structure, named Multi-Layer Feature Federation Network (MuffNet), to address this issue. MuffNet is a densely connected network but consumes much less memory and computation at inference. The key idea of MuffNet is to elaborately split the feature maps of one layer into different groups. Each feature map group is then shared only once with another layer. In this way we keep the network computation within budget while preserving the topological density of the network. On the theoretical side, we show that under the same computational budget, MuffNet is a better universal approximator for functions containing high-frequency components. We validate the superiority of MuffNet on popular image classification and object detection benchmark datasets. Extensive experiments show that MuffNet is especially efficient for small models under 45 MFLOPs.
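To make the channel-splitting idea from the abstract concrete, below is a minimal, hypothetical PyTorch sketch of a block that splits a layer's feature maps into groups and routes each group to exactly one lightweight branch. The class name, group count, and branch design are illustrative assumptions, not the actual MuffNet block specified in the paper.

```python
# Hypothetical sketch of the feature-splitting idea described in the abstract.
# Group sizes and branch structure are assumptions for illustration only.
import torch
import torch.nn as nn


class FeatureFederationBlock(nn.Module):
    """Split a layer's feature maps into disjoint channel groups and share
    each group with exactly one downstream branch, so connectivity stays
    dense without duplicating the full feature volume at every connection."""

    def __init__(self, in_channels: int, num_groups: int = 4):
        super().__init__()
        assert in_channels % num_groups == 0, "channels must divide evenly"
        self.num_groups = num_groups
        group_ch = in_channels // num_groups
        # One lightweight conv branch per group; each branch only ever sees
        # its own channel group, keeping compute within budget.
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(group_ch, group_ch, kernel_size=3, padding=1, bias=False),
                nn.BatchNorm2d(group_ch),
                nn.ReLU(inplace=True),
            )
            for _ in range(num_groups)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Split channels into groups; each group is "federated" once.
        groups = torch.chunk(x, self.num_groups, dim=1)
        outputs = [branch(g) for branch, g in zip(self.branches, groups)]
        return torch.cat(outputs, dim=1)


if __name__ == "__main__":
    block = FeatureFederationBlock(in_channels=32, num_groups=4)
    y = block(torch.randn(1, 32, 56, 56))
    print(y.shape)  # torch.Size([1, 32, 56, 56])
```

Compared with reusing the full feature map at every dense connection, each branch here processes only 1/num_groups of the channels, which is the budget-preserving effect the abstract describes.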

Related Material


[bibtex]
@InProceedings{Chen_2019_ICCV,
author = {Chen, Hesen and Lin, Ming and Sun, Xiuyu and Qi, Qian and Li, Hao and Jin, Rong},
title = {MuffNet: Multi-Layer Feature Federation for Mobile Deep Learning},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
month = {Oct},
year = {2019}
}