Increasing Biases Can Be More Efficient Than Increasing Weights

Carlo Metta, Marco Fantozzi, Andrea Papini, Gianluca Amato, Matteo Bergamaschi, Silvia Giulia Galfrè, Alessandro Marchetti, Michelangelo Vegliò, Maurizio Parton, Francesco Morandin; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2024, pp. 2810-2819

Abstract

We introduce a novel computational unit for neural networks that features multiple biases, challenging the traditional perceptron structure. This unit emphasizes the importance of preserving uncorrupted information as it is passed from one unit to the next, applying activation functions later in the process with specialized biases for each unit. Through both empirical and theoretical analyses, we show that focusing on increasing biases rather than weights can significantly enhance a neural network's performance. This approach offers an alternative perspective on optimizing information flow within neural networks. Commented source code is available at https://github.com/CuriosAI/dac-dev.
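The core idea can be sketched as follows: instead of one shared bias applied before the activation inside each unit, the raw pre-activation is passed forward untouched, and every downstream unit applies its own bias before the nonlinearity. The NumPy sketch below is illustrative only (function names, shapes, and the choice of ReLU are assumptions, not the authors' implementation; see the linked repository for their actual code):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def standard_layer(x, W, b):
    # Classic perceptron layer: one bias per output unit, and the
    # activation is applied before the value is passed on.
    return relu(W @ x + b)                      # b has shape (n_out,)

def multi_bias_layer(x, W, B):
    # Multi-bias unit (illustrative sketch): x holds the *unactivated*
    # outputs of the previous layer. Each output unit j applies its own
    # bias B[j, k] to input k before the nonlinearity, then combines
    # the activated values with its weights. The result is again a
    # vector of pre-activations, passed on uncorrupted.
    return np.einsum('jk,jk->j', W, relu(x[None, :] + B))

rng = np.random.default_rng(0)
x = rng.normal(size=4)           # pre-activations from previous layer
W = rng.normal(size=(3, 4))      # weights, one row per output unit
B = rng.normal(size=(3, 4))      # one bias per (output, input) pair
y = multi_bias_layer(x, W, B)    # shape (3,)
```

Note that when every row of `B` equals the same vector `b`, the multi-bias layer reduces to `W @ relu(x + b)`, i.e. a standard layer whose bias was moved across the activation; the extra parameters come only from letting each destination unit choose its own bias.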

Related Material

[bibtex]
@InProceedings{Metta_2024_WACV,
  author    = {Metta, Carlo and Fantozzi, Marco and Papini, Andrea and Amato, Gianluca and Bergamaschi, Matteo and Galfr\`e, Silvia Giulia and Marchetti, Alessandro and Vegli\`o, Michelangelo and Parton, Maurizio and Morandin, Francesco},
  title     = {Increasing Biases Can Be More Efficient Than Increasing Weights},
  booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
  month     = {January},
  year      = {2024},
  pages     = {2810-2819}
}