M2SGD: Learning to Learn Important Weights

Nicholas I-Hsien Kuo, Mehrtash Harandi, Nicolas Fourrier, Christian Walder, Gabriela Ferraro, Hanna Suominen; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2020, pp. 236-237

Abstract


Meta-learning concerns rapid knowledge acquisition. One popular approach casts optimisation as a learning problem, and it has been shown that learnt neural optimisers update base learners more quickly than their handcrafted counterparts. In this paper, we learn an optimisation rule that sparsely updates the learner parameters and removes redundant weights. We present Masked Meta-SGD (M2SGD), a neural optimiser that not only updates learners quickly, but also removes 83.71% of the weights in ResNet20s.
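The exact update rule is given in the paper; as a rough illustration of the idea of a Meta-SGD-style step gated by a learned mask, the following is a minimal, hypothetical sketch (all names and the thresholding scheme are assumptions, not the authors' implementation):

```python
import numpy as np

def masked_meta_sgd_step(theta, grad, alpha, mask_logits, threshold=0.5):
    """Hypothetical masked Meta-SGD-style update (illustration only).

    theta       : learner parameters
    grad        : gradient of the learner loss w.r.t. theta
    alpha       : learned per-parameter step sizes, as in Meta-SGD
    mask_logits : learned per-parameter scores; thresholding their sigmoid
                  decides which weights are kept and updated
    """
    # Hard binary mask from learned scores (assumed gating scheme).
    mask = (1.0 / (1.0 + np.exp(-mask_logits)) > threshold).astype(theta.dtype)
    # Sparse update: only unmasked weights move; masked weights are zeroed out.
    return mask * (theta - alpha * grad)

# Toy usage on a small parameter vector.
rng = np.random.default_rng(0)
theta = rng.normal(size=5)
grad = rng.normal(size=5)
alpha = np.full(5, 0.1)            # learned per-weight step sizes
mask_logits = rng.normal(size=5)   # learned per-weight keep/prune scores
print(masked_meta_sgd_step(theta, grad, alpha, mask_logits))
```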

Related Material


@InProceedings{Kuo_2020_CVPR_Workshops,
author = {Kuo, Nicholas I-Hsien and Harandi, Mehrtash and Fourrier, Nicolas and Walder, Christian and Ferraro, Gabriela and Suominen, Hanna},
title = {M2SGD: Learning to Learn Important Weights},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2020}
}