Is Pruning Compression?: Investigating Pruning Via Network Layer Similarity

Cody Blakeney, Yan Yan, Ziliang Zong; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2020, pp. 914-922

Abstract


Unstructured neural network pruning is an effective technique that can significantly reduce the theoretical model size, computational demand, and energy consumption of large neural networks without compromising accuracy. However, a number of fundamental questions about pruning remain unanswered. For example, do pruned neural networks contain the same representations as the original network? Is pruning a compression process or an evolution process? Does pruning only work on trained neural networks? What are the role and value of the uncovered sparsity structure? In this paper, we strive to answer these questions by analyzing three unstructured pruning methods: magnitude-based pruning, post-pruning re-initialization, and random sparse initialization. We conduct extensive experiments using the Singular Vector Canonical Correlation Analysis (SVCCA) tool to study and contrast the layer representations of pruned and original ResNet, VGG, and ConvNet models. We make several interesting observations: 1) Pruned neural network models evolve to substantially different representations while still maintaining similar accuracy. 2) Randomly initialized sparse models can achieve reasonably good accuracy compared to well-engineered pruning methods. 3) The sparsity structures discovered by pruning are not inherently important or useful.
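
As a rough illustration of the analysis tool named above, the sketch below computes an SVCCA similarity score between the activations of two layers. It is a minimal NumPy sketch of the generic SVCCA procedure (SVD-based dimensionality reduction followed by CCA), not the implementation used in the paper; the function names, the 0.99 variance threshold, and the toy activation matrices are illustrative assumptions.

import numpy as np

def top_svd_directions(acts, keep_variance=0.99):
    # acts: (neurons, datapoints); center each neuron's activations
    acts = acts - acts.mean(axis=1, keepdims=True)
    u, s, vt = np.linalg.svd(acts, full_matrices=False)
    # keep the fewest singular directions explaining `keep_variance` of the variance
    k = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), keep_variance)) + 1
    return s[:k, None] * vt[:k]  # reduced representation, shape (k, datapoints)

def svcca_similarity(acts1, acts2, keep_variance=0.99):
    x1 = top_svd_directions(acts1, keep_variance)
    x2 = top_svd_directions(acts2, keep_variance)
    # CCA correlations are the singular values of Q1^T Q2, where Q1 and Q2 are
    # orthonormal bases (datapoints as rows) for the two reduced subspaces
    q1, _ = np.linalg.qr(x1.T)
    q2, _ = np.linalg.qr(x2.T)
    corrs = np.linalg.svd(q1.T @ q2, compute_uv=False)
    return float(corrs.mean())  # mean canonical correlation in [0, 1]

# Toy usage on hypothetical data: a random "layer" versus a noisy copy of itself
rng = np.random.default_rng(0)
layer_a = rng.standard_normal((64, 1000))  # 64 neurons, 1000 datapoints
layer_b = layer_a + 0.1 * rng.standard_normal((64, 1000))
print(svcca_similarity(layer_a, layer_b))  # close to 1.0 for highly similar layers

In practice, the activations would come from matching layers of a pruned model and the original model evaluated on the same inputs, and a low mean correlation indicates that the two layers learned substantially different representations.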

Related Material


[pdf] [video]
[bibtex]
@InProceedings{Blakeney_2020_WACV,
    author    = {Blakeney, Cody and Yan, Yan and Zong, Ziliang},
    title     = {Is Pruning Compression?: Investigating Pruning Via Network Layer Similarity},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {March},
    year      = {2020}
}