Holistic Filter Pruning for Efficient Deep Neural Networks

Lukas Enderich, Fabian Timm, Wolfram Burgard; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2021, pp. 2596-2605

Abstract


Deep neural networks (DNNs) are usually over-parameterized to increase the likelihood of getting adequate initial weights by random initialization. Consequently, trained DNNs contain many redundancies that can be pruned from the model to reduce complexity and improve the ability to generalize. Structural sparsity, as achieved by filter pruning, directly reduces the tensor sizes of weights and activations and is thus particularly effective for reducing complexity. We propose Holistic Filter Pruning (HFP), a novel approach for common DNN training that is easy to implement and makes it possible to specify accurate pruning rates for both the number of parameters and the number of multiplications. After each forward pass, the current model size is calculated and compared to the desired target size. By gradient descent, a global solution can be found that allocates the pruning budget over the individual layers such that the desired target size is met. In various experiments, we provide insights into the training and achieve state-of-the-art performance on CIFAR-10 and ImageNet.
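The size-constraint mechanism sketched in the abstract can be illustrated with a short example. The following is a minimal sketch, not the authors' implementation: it assumes one trainable sigmoid gate per filter, a differentiable estimate of the remaining parameters and multiplications, and a quadratic penalty on the deviation from the target budget. The names ConvSpec, FilterGates, and size_loss are hypothetical, and in full training the penalty would be added to the task loss rather than optimized alone.

```python
# Minimal sketch of size-constrained filter pruning (illustrative assumptions,
# not the exact HFP formulation from the paper).
from collections import namedtuple
import torch
import torch.nn as nn

# Static description of one conv layer: channel counts, kernel size,
# and the spatial resolution of its output feature map.
ConvSpec = namedtuple("ConvSpec", "in_channels out_channels kernel_size out_hw")


class FilterGates(nn.Module):
    """One trainable logit per output filter; a sigmoid maps it to (0, 1).
    Gate values near zero mark filters as pruning candidates."""
    def __init__(self, specs):
        super().__init__()
        self.logits = nn.ParameterList(
            [nn.Parameter(torch.zeros(s.out_channels)) for s in specs])

    def forward(self):
        return [torch.sigmoid(l) for l in self.logits]


def expected_size(specs, gates):
    """Differentiable estimate of the remaining parameters and multiplications,
    weighting each layer's cost by the expected number of surviving filters."""
    params = torch.zeros(())
    mults = torch.zeros(())
    kept_in = float(specs[0].in_channels)  # input image channels survive fully
    for spec, g in zip(specs, gates):
        kept_out = g.sum()
        k2 = spec.kernel_size ** 2
        layer_params = kept_out * kept_in * k2
        params = params + layer_params
        # multiplications scale with the output feature-map resolution
        mults = mults + layer_params * spec.out_hw ** 2
        kept_in = kept_out  # surviving filters feed the next layer
    return params, mults


def size_loss(specs, gates, target_params, target_mults):
    """Quadratic penalty on the relative deviation from the target budget."""
    params, mults = expected_size(specs, gates)
    return ((params / target_params - 1.0) ** 2 +
            (mults / target_mults - 1.0) ** 2)


if __name__ == "__main__":
    # Toy example: two conv layers, prune to roughly half the full budget.
    specs = [ConvSpec(3, 16, 3, 32), ConvSpec(16, 32, 3, 16)]
    gates = FilterGates(specs)
    full_params, full_mults = expected_size(
        specs, [torch.ones(s.out_channels) for s in specs])
    optimizer = torch.optim.SGD(gates.parameters(), lr=0.5)
    for step in range(200):
        loss = size_loss(specs, gates(), 0.5 * full_params, 0.5 * full_mults)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    # In full training, this penalty would be added to the cross-entropy loss,
    # so gradient descent allocates the pruning budget across the layers.
```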

Related Material


@InProceedings{Enderich_2021_WACV,
    author    = {Enderich, Lukas and Timm, Fabian and Burgard, Wolfram},
    title     = {Holistic Filter Pruning for Efficient Deep Neural Networks},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2021},
    pages     = {2596-2605}
}