Eigendecomposition-free Training of Deep Networks with Zero Eigenvalue-based Losses

Zheng Dang, Kwang Moo Yi, Yinlin Hu, Fei Wang, Pascal Fua, Mathieu Salzmann; Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 768-783

Abstract


Many classical Computer Vision problems, such as essential matrix computation and pose estimation from 3D to 2D correspondences, can be solved by finding the eigenvector corresponding to the smallest, or zero, eigenvalue of a matrix representing a linear system. Incorporating this in deep learning frameworks would allow us to explicitly encode known notions of geometry, instead of having the network implicitly learn them from data. However, performing eigendecomposition within a network requires the ability to differentiate this operation. Unfortunately, while theoretically doable, this introduces numerical instability in the optimization process in practice. In this paper, we introduce an eigendecomposition-free approach to training a deep network whose loss depends on the eigenvector corresponding to a zero eigenvalue of a matrix predicted by the network. We demonstrate on several tasks, including keypoint matching and 3D pose estimation, that our approach is much more robust than explicit differentiation of the eigendecomposition. It has better convergence properties and yields state-of-the-art results on both tasks.
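The core idea can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; it assumes, as in the zero-eigenvalue setting the abstract describes, that the network predicts a symmetric positive semi-definite matrix M whose null space should be spanned by a known ground-truth vector e_gt. The quadratic form e_gt^T M e_gt then serves as a loss that vanishes exactly when e_gt is a zero-eigenvector of M, and its gradient with respect to M is the smooth outer product e_gt e_gt^T, so no eigendecomposition (and none of its numerical instability) is needed during training.

```python
import numpy as np

def eig_free_loss(M, e_gt):
    # Quadratic form e^T M e: for PSD M this is zero exactly when e_gt
    # lies in M's null space, i.e. is an eigenvector with eigenvalue 0.
    # Differentiating this loss never requires an eigendecomposition.
    return float(e_gt @ M @ e_gt)

# Toy check with a hypothetical predicted matrix: build a PSD M whose
# null space is spanned by e_gt, so the loss should be exactly zero.
e_gt = np.array([1.0, 0.0, 0.0])
B = np.array([[0.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0]])   # columns orthogonal to e_gt
M = B @ B.T                  # PSD with M @ e_gt = 0
print(eig_free_loss(M, e_gt))  # -> 0.0
```

A vector outside the null space yields a strictly positive loss, which is what drives the predicted matrix toward having the desired zero-eigenvector during training.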

Related Material


[bibtex]
@InProceedings{Dang_2018_ECCV,
author = {Dang, Zheng and Yi, Kwang Moo and Hu, Yinlin and Wang, Fei and Fua, Pascal and Salzmann, Mathieu},
title = {Eigendecomposition-free Training of Deep Networks with Zero Eigenvalue-based Losses},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
month = {September},
year = {2018}
}