Small Steps and Giant Leaps: Minimal Newton Solvers for Deep Learning

Joao F. Henriques, Sebastien Ehrhardt, Samuel Albanie, Andrea Vedaldi; The IEEE International Conference on Computer Vision (ICCV), 2019, pp. 4763-4772


We propose a fast second-order method that can be used as a drop-in replacement for current deep learning solvers. Compared to stochastic gradient descent (SGD), it only requires two additional forward-mode automatic differentiation operations per iteration, which have a computational cost comparable to two standard forward passes and are easy to implement. Our method addresses long-standing issues with current second-order solvers, which invert an approximate Hessian matrix every iteration, either exactly or by conjugate-gradient methods, procedures that are much slower than an SGD step. Instead, we propose to keep a single estimate of the gradient projected by the inverse Hessian matrix, and update it once per iteration with just two passes over the network. This estimate has the same size as, and is similar to, the momentum variable that is commonly used in SGD. No estimate of the Hessian is maintained. We first validate our method, called CurveBall, on small problems with known solutions (noisy Rosenbrock function and degenerate 2-layer linear networks), where current deep learning solvers struggle. We then train several large models on CIFAR and ImageNet, including ResNet and VGG-f networks, where we demonstrate faster convergence with no hyperparameter tuning. We also show our optimiser's generality by testing on a large set of randomly generated architectures.
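To make the idea concrete, here is a minimal sketch of a CurveBall-style update on a toy ill-conditioned quadratic. It is not the authors' implementation: the update rule z ← ρz − β(Hz + g), w ← w + z, the fixed hyperparameters ρ and β, and the finite-difference Hessian-vector product (standing in for the paper's automatic-differentiation passes) are all assumptions for illustration. What it does show is the abstract's key point: only a single momentum-like vector z is maintained, and no Hessian matrix is ever formed or inverted.

```python
import numpy as np

# Toy problem: f(w) = 0.5 w'Aw - b'w, with minimiser w* = A^{-1} b.
A = np.diag([1.0, 100.0])      # ill-conditioned Hessian (condition number 100)
b = np.array([1.0, 100.0])     # chosen so the optimum is w* = (1, 1)

def grad(w):
    return A @ w - b

def hvp(w, v, eps=1e-6):
    # Hessian-vector product via a finite difference of the gradient;
    # the paper instead uses forward-mode automatic differentiation,
    # but either way the full Hessian is never materialised.
    return (grad(w + eps * v) - grad(w)) / eps

rho, beta = 0.9, 0.01          # assumed fixed values; the paper tunes
                               # these automatically at each iteration
w = np.zeros(2)                # parameters
z = np.zeros(2)                # single momentum-like estimate of -H^{-1} g

for _ in range(500):
    z = rho * z - beta * (hvp(w, z) + grad(w))   # refresh the estimate
    w = w + z                                    # take the step

print(w)  # close to the minimiser (1, 1)
```

Note the contrast with a plain heavy-ball/momentum update, which would use z ← ρz − βg: the extra Hessian-vector term Hz is what curves the momentum toward the Newton direction while keeping the per-iteration cost at roughly one extra gradient-sized pass.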

Related Material

@InProceedings{Henriques_2019_ICCV,
author = {Henriques, Joao F. and Ehrhardt, Sebastien and Albanie, Samuel and Vedaldi, Andrea},
title = {Small Steps and Giant Leaps: Minimal Newton Solvers for Deep Learning},
booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}