Clustering Positive Definite Matrices by Learning Information Divergences

Panagiotis Stanitsas, Anoop Cherian, Vassilios Morellas, Nikolaos Papanikolopoulos; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 1304-1312

Abstract


Data representations based on Symmetric Positive Definite (SPD) matrices are gaining popularity in visual learning applications. When comparing SPD matrices, measures that account for their non-linear geometry often yield better results; however, the appropriate measure for a given application is usually selected by hand. In this paper, we study the problem of clustering SPD matrices while automatically learning a suitable measure. We propose a novel formulation that jointly (i) clusters the input SPD matrices in a K-Means setup and (ii) learns a suitable non-linear measure for comparing them. For (ii), we capitalize on the recently introduced αβ-logdet divergence, which generalizes a family of popular similarity measures on SPD matrices. Our formulation is cast in a Riemannian optimization framework and solved using a conjugate gradient scheme. Experiments on five computer vision datasets demonstrate state-of-the-art performance.
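The αβ-logdet divergence referred to above (Cichocki et al.) admits a closed form in terms of the generalized eigenvalues of the pair of SPD matrices being compared. The following is a minimal NumPy/SciPy sketch of that closed form, not code from the paper; the function name `ab_logdet_div` is ours, and it assumes the case alpha > 0, beta > 0 (the limiting cases that recover measures such as the affine-invariant Riemannian metric require separate handling):

```python
import numpy as np
from scipy.linalg import eigh


def ab_logdet_div(X, Y, alpha=0.5, beta=0.5):
    """αβ-logdet divergence between SPD matrices X and Y, for alpha, beta > 0.

    Uses the closed form via the generalized eigenvalues lam_i of
    X v = lam Y v:
        D = (1 / (alpha * beta)) * sum_i log(
                (alpha * lam_i**beta + beta * lam_i**(-alpha)) / (alpha + beta))
    D is zero iff X == Y, and positive otherwise.
    """
    # Generalized eigenvalues of the pencil (X, Y); all positive for SPD inputs.
    lam = eigh(X, Y, eigvals_only=True)
    return np.sum(
        np.log((alpha * lam**beta + beta * lam**(-alpha)) / (alpha + beta))
    ) / (alpha * beta)
```

For alpha == beta the divergence is symmetric in its arguments (the eigenvalues of the swapped pair are the reciprocals 1/lam_i, which leave the summand unchanged); different (alpha, beta) settings interpolate between popular SPD similarity measures, which is what makes the parameters learnable in the paper's joint clustering formulation.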

Related Material


[pdf]
[bibtex]
@InProceedings{Stanitsas_2017_ICCV,
author = {Stanitsas, Panagiotis and Cherian, Anoop and Morellas, Vassilios and Papanikolopoulos, Nikolaos},
title = {Clustering Positive Definite Matrices by Learning Information Divergences},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {Oct},
year = {2017},
pages = {1304-1312}
}