Conflicting Bundles: Adapting Architectures Towards the Improved Training of Deep Neural Networks

David Peer, Sebastian Stabinger, Antonio Rodriguez-Sanchez; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2021, pp. 256-265

Abstract


Designing neural network architectures is a challenging task, and knowing which specific layers of a model must be adapted to improve performance is almost a mystery. In this paper, we introduce a novel theory and metric to identify layers that decrease the test accuracy of the trained model; this identification can be made as early as the beginning of training. In the worst case, such a layer can prevent the network from being trained at all. More precisely, we identify layers that worsen performance because they produce conflicting training bundles, as we show in our novel theoretical analysis and confirm in extensive empirical studies. Based on these findings, we introduce a novel algorithm that automatically removes performance-decreasing layers. Architectures found by this algorithm achieve accuracy competitive with state-of-the-art architectures. While maintaining this high accuracy, our approach drastically reduces memory consumption and inference time for different computer vision tasks.
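To make the notion of conflicting bundles concrete, below is a minimal NumPy sketch of one way a layer-wise conflict score could be computed from a batch of activations and labels: samples whose activations at a given layer are (nearly) identical are grouped into bundles, and a bundle mixing several labels counts as a conflict. The tolerance-based grouping and the entropy-style weighting are illustrative assumptions on our part, not the authors' exact procedure; names such as conflict_metric and tol are hypothetical (see the paper for the precise definitions).

import numpy as np

def conflict_metric(activations, labels, tol=1e-3):
    """Illustrative sketch: group samples with near-identical layer
    activations into 'bundles' and score how often a bundle mixes
    different labels. A mixed bundle is 'conflicting': the layer has
    collapsed samples that the loss still needs to tell apart."""
    n = len(labels)
    bundle_of = -np.ones(n, dtype=int)   # -1 means "not yet assigned"
    bundles = []
    for i in range(n):
        if bundle_of[i] >= 0:
            continue
        # all unassigned samples within tol of sample i join its bundle
        dists = np.max(np.abs(activations - activations[i]), axis=1)
        members = np.where((dists <= tol) & (bundle_of < 0))[0]
        bundle_of[members] = len(bundles)
        bundles.append(members)
    # entropy-style score: 0 when every bundle is label-pure,
    # larger when bundles mix labels (i.e. conflicts exist)
    score = 0.0
    for members in bundles:
        _, counts = np.unique(labels[members], return_counts=True)
        p = counts / counts.sum()
        score += (len(members) / n) * -(p * np.log(p + 1e-12)).sum()
    return score

Evaluated on each layer's outputs during the first training epochs, a score that stays well above zero for a particular layer would, under these assumptions, flag that layer as a candidate for removal.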

Related Material


@InProceedings{Peer_2021_WACV,
    author    = {Peer, David and Stabinger, Sebastian and Rodriguez-Sanchez, Antonio},
    title     = {Conflicting Bundles: Adapting Architectures Towards the Improved Training of Deep Neural Networks},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2021},
    pages     = {256-265}
}