Merging SVMs with Linear Discriminant Analysis: A Combined Model

Symeon Nikitidis, Stefanos Zafeiriou, Maja Pantic; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014, pp. 1067-1074

Abstract


A key problem often encountered by learning algorithms in computer vision that deal with high-dimensional data is the so-called "curse of dimensionality", which arises when the number of available training samples is smaller than the dimensionality of the input feature space. To remedy this problem, we propose a joint dimensionality reduction and classification framework, formulated as an optimization problem within the maximum margin class separation task. The proposed optimization problem is solved using alternating optimization, in which we jointly compute the low-dimensional maximum margin projections and the separating hyperplanes in the projection subspace. Moreover, in order to reduce the computational cost of the developed optimization algorithm, we incorporate orthogonality constraints on the derived projection bases and show that the resulting combined model alternates between identifying the optimal separating hyperplanes and performing linear discriminant analysis on the support vectors. Experiments on face, facial expression and object recognition validate the effectiveness of the proposed method against state-of-the-art dimensionality reduction algorithms.
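The alternation the abstract describes can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: it assumes a linear SVM fit in the current low-dimensional subspace, an approximate identification of support vectors via the margin condition |f(x)| ≤ 1, and an LDA step (via scikit-learn) on those support vectors to update an orthonormal projection basis. All names, constants and the support-vector heuristic here are illustrative choices, not details from the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Toy binary problem with more features than informative directions.
X, y = make_classification(n_samples=200, n_features=50, n_informative=10,
                           n_classes=2, random_state=0)

d = 1  # LDA yields at most (n_classes - 1) discriminant directions
rng = np.random.RandomState(0)
W, _ = np.linalg.qr(rng.randn(X.shape[1], d))  # random orthonormal init

for it in range(5):
    Z = X @ W                                        # project to subspace
    svm = LinearSVC(C=1.0, max_iter=10000).fit(Z, y)  # max-margin step
    # Heuristic support-vector set: points on or inside the margin.
    sv = np.abs(svm.decision_function(Z)) <= 1.0 + 1e-6
    if sv.sum() < 2 or len(np.unique(y[sv])) < 2:
        break
    # LDA on the support vectors in the ORIGINAL space updates the basis.
    lda = LinearDiscriminantAnalysis(n_components=d).fit(X[sv], y[sv])
    W, _ = np.linalg.qr(lda.scalings_[:, :d])        # re-orthonormalize

acc = svm.score(X @ W, y)
print(W.shape, round(acc, 2))
```

The orthogonality constraint from the abstract is imposed here by re-orthonormalizing the LDA directions with a QR decomposition after each update; the actual paper solves a constrained optimization rather than this heuristic loop.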

Related Material


[pdf]
[bibtex]
@InProceedings{Nikitidis_2014_CVPR,
author = {Nikitidis, Symeon and Zafeiriou, Stefanos and Pantic, Maja},
title = {Merging SVMs with Linear Discriminant Analysis: A Combined Model},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2014}
}