Matrix Backpropagation for Deep Networks With Structured Layers

Catalin Ionescu, Orestis Vantzos, Cristian Sminchisescu; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2015, pp. 2965-2973

Abstract


Deep neural network architectures have recently produced excellent results in a variety of areas in artificial intelligence and visual recognition, well surpassing traditional shallow architectures operating on hand-designed features. The power of deep networks stems both from their ability to perform local computations followed by pointwise non-linearities over increasingly large receptive fields, and from the simplicity and scalability of the gradient-descent training procedure based on backpropagation. An open problem is the inclusion of layers that perform global, structured matrix computations like segmentation (e.g. normalized cuts) or higher-order pooling (e.g. log-tangent space metrics defined over the manifold of symmetric positive definite matrices) while preserving the validity and efficiency of an end-to-end deep training framework. In this paper we propose a sound mathematical apparatus to formally integrate global structured computation into deep computation architectures. At the heart of our methodology is the development of the theory and practice of backpropagation that generalizes to the calculus of adjoint matrix variations. We perform segmentation experiments using the BSDS and MSCOCO benchmarks and demonstrate that deep networks relying on second-order pooling and normalized cuts layers, trained end-to-end using matrix backpropagation, outperform counterparts that do not take advantage of such global layers.
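To make the idea of backpropagating through a structured matrix layer concrete, the following is a minimal NumPy sketch (not the authors' released code) of a matrix-logarithm layer of the kind used in second-order pooling over SPD matrices. The backward pass uses the classical Daleckii-Krein (Loewner matrix) formula for the adjoint of a spectral function of a symmetric matrix, and assumes distinct eigenvalues; the function names (`spd_log_forward`, `spd_log_backward`) are hypothetical.

```python
import numpy as np

def spd_log_forward(A):
    """Forward pass: Z = log(A) for symmetric positive definite A,
    computed via the eigendecomposition A = U diag(lam) U^T."""
    lam, U = np.linalg.eigh(A)
    Z = U @ np.diag(np.log(lam)) @ U.T
    return Z, (lam, U)

def spd_log_backward(dL_dZ, cache):
    """Backward pass: map dL/dZ to dL/dA.
    With f = log, the Loewner matrix F has entries
      F[i, j] = (log lam_i - log lam_j) / (lam_i - lam_j)  for i != j,
      F[i, i] = f'(lam_i) = 1 / lam_i,
    and the adjoint is dL/dA = U (F * (U^T dL/dZ U)) U^T,
    where * is the elementwise (Hadamard) product."""
    lam, U = cache
    n = lam.size
    diff = lam[:, None] - lam[None, :]
    F = np.empty((n, n))
    off = ~np.eye(n, dtype=bool)
    F[off] = (np.log(lam)[:, None] - np.log(lam)[None, :])[off] / diff[off]
    F[np.diag_indices(n)] = 1.0 / lam
    G = U.T @ dL_dZ @ U
    G = 0.5 * (G + G.T)  # only the symmetric part matters for SPD inputs
    return U @ (F * G) @ U.T

# Finite-difference check on a random SPD matrix.
rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = B @ B.T + 4 * np.eye(4)       # SPD by construction
W = rng.standard_normal((4, 4))   # arbitrary loss: L = sum(W * log(A))
Z, cache = spd_log_forward(A)
grad = spd_log_backward(W, cache)

eps, i, j = 1e-6, 1, 2
E = np.zeros((4, 4)); E[i, j] = E[j, i] = eps  # symmetric perturbation
num = (np.trace(W.T @ spd_log_forward(A + E)[0]) -
       np.trace(W.T @ spd_log_forward(A - E)[0])) / (2 * eps)
print(num, grad[i, j] + grad[j, i])  # the two values should agree closely
```

In an end-to-end network, `spd_log_backward` is what chains the upstream gradient through the eigendecomposition to the layers below, which is the role matrix backpropagation plays for such structured layers.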

Related Material


[bibtex]
@InProceedings{Ionescu_2015_ICCV,
author = {Ionescu, Catalin and Vantzos, Orestis and Sminchisescu, Cristian},
title = {Matrix Backpropagation for Deep Networks With Structured Layers},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {December},
year = {2015},
pages = {2965-2973}
}