Can Selfless Learning Improve Accuracy of a Single Classification Task?
The human brain has billions of neurons, yet at any moment only a small fraction of them are active, and an activated neuron inhibits the activity of its neighbors. Selfless Learning exploits these neurobiological principles to mitigate catastrophic forgetting in continual learning. In this paper, we ask a basic question: can the selfless learning idea be used to improve the accuracy of deep convolutional networks on a single classification task? To this end, we introduce two regularizers and formulate a curriculum-style strategy to effectively enforce them on a network. This yields significant gains over standard cross-entropy training. Moreover, we show that our method can be combined with other popular training paradigms, such as curriculum learning.
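The lateral-inhibition idea the abstract alludes to can be sketched as an activation penalty that discourages nearby neurons from firing together. The sketch below is illustrative only: the Gaussian neighborhood weighting and the co-activation penalty are assumptions in the spirit of selfless-learning regularizers, not the paper's actual formulation.

```python
import numpy as np

def inhibition_penalty(activations, scale=2.0):
    """Hypothetical inhibition-style regularizer (sketch, not the paper's).

    Penalizes correlated activity between nearby neurons so that an active
    neuron discourages its neighbors, encouraging sparse representations.

    activations: (batch, n_neurons) array of non-negative layer outputs.
    scale: width of the Gaussian neighborhood over neuron indices.
    """
    batch, n = activations.shape
    idx = np.arange(n)
    # Gaussian weighting: nearby neuron pairs are penalized more strongly.
    weights = np.exp(-((idx[:, None] - idx[None, :]) ** 2) / (2 * scale ** 2))
    np.fill_diagonal(weights, 0.0)  # a neuron does not inhibit itself
    # Pairwise co-activation statistics, averaged over the batch.
    coact = activations.T @ activations / batch
    return float(np.sum(weights * coact))
```

Added to the task loss, such a term is zero when only one neuron fires per input and grows when spatially adjacent neurons co-activate, which is one plausible way to enforce the "few concurrently active neurons" principle on a single-task classifier.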