Neural Network Panning: Screening the Optimal Sparse Network Before Training

Xiatao Kang, Ping Li, Jiayi Yao, Chengxi Li; Proceedings of the Asian Conference on Computer Vision (ACCV), 2022, pp. 3877-3892

Abstract


Pruning neural networks before training not only compresses the original models but also accelerates the training phase, which gives it substantial application value. Current work focuses on fine-grained pruning, which uses metrics to compute weight scores for weight screening, and has extended from the initial single-shot pruning to iterative pruning. From these works, we argue that network pruning can be summarized as a process of expressive force transfer among weights, in which the retained weights take over the expressive force of the removed ones in order to maintain the performance of the original network. To achieve optimal expressive force scheduling, we propose a before-training pruning scheme called Neural Network Panning, which guides expressive force transfer through multi-index and multi-process steps, and we design a reinforcement-learning-based panning agent to automate the process. Experimental results show that Panning outperforms various existing before-training pruning methods.
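The abstract only names the ingredients of such schemes, so the following is a minimal sketch, not the authors' implementation, of a score-based iterative before-training pruning loop of the kind described above. It assumes a SNIP-style saliency |w * dL/dw| as the single weight-screening metric, and `model`, `loss_fn`, `inputs`, and `targets` are placeholder objects; the multi-index scheduling and the reinforcement-learning panning agent of the actual method are not reproduced here.

# Minimal sketch (not the authors' code) of iterative pruning before
# training, scoring weights with a SNIP-style saliency |w * dL/dw|.
import torch
import torch.nn as nn

def prune_before_training(model, loss_fn, inputs, targets,
                          sparsity=0.9, rounds=5):
    # Consider only weight matrices/kernels (dim > 1); biases are kept.
    weights = [p for p in model.parameters() if p.dim() > 1]
    masks = [torch.ones_like(w) for w in weights]
    keep = 1.0 - sparsity

    for t in range(1, rounds + 1):
        # Exponential schedule: remove a portion of weights each round,
        # so the retained weights gradually take over the scores
        # ("expressive force") of the removed ones.
        frac = keep ** (t / rounds)

        # Saliency of each still-active weight on one mini-batch,
        # computed before any training step.
        loss = loss_fn(model(inputs), targets)
        grads = torch.autograd.grad(loss, weights)
        scores = [(w * g).abs() * m for w, g, m in zip(weights, grads, masks)]

        # Keep the globally top-scoring fraction of weights.
        flat = torch.cat([s.flatten() for s in scores])
        k = max(1, int(frac * flat.numel()))
        threshold = torch.topk(flat, k).values.min()
        masks = [(s >= threshold).float() for s in scores]

        # Zero out pruned weights so the next round scores the subnetwork.
        with torch.no_grad():
            for w, m in zip(weights, masks):
                w.mul_(m)
    return masks

A call like prune_before_training(net, nn.CrossEntropyLoss(), x, y) would return per-layer binary masks to apply during subsequent training; in the paper, the per-round metric and pruning ratio are instead chosen by the panning agent rather than fixed in advance.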

Related Material


[pdf] [arXiv] [code]
[bibtex]
@InProceedings{Kang_2022_ACCV,
    author    = {Kang, Xiatao and Li, Ping and Yao, Jiayi and Li, Chengxi},
    title     = {Neural Network Panning: Screening the Optimal Sparse Network Before Training},
    booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV)},
    month     = {December},
    year      = {2022},
    pages     = {3877-3892}
}