On One-Shot Similarity Kernels: Explicit Feature Maps and Properties

Stefanos Zafeiriou, Irene Kotsia; The IEEE International Conference on Computer Vision (ICCV), 2013, pp. 2392-2399


Kernels have been a common tool of machine learning and computer vision applications for modeling nonlinearities and/or designing robust similarity measures between objects. Arguably, the class of positive semidefinite (psd) kernels, widely known as Mercer's kernels, constitutes one of the most well-studied cases. For every psd kernel there exists an associated feature map to an arbitrary-dimensional Hilbert space H, the so-called feature space. The main reason behind psd kernels' popularity is the fact that classification/regression techniques (such as Support Vector Machines (SVMs)) and component analysis algorithms (such as Kernel Principal Component Analysis (KPCA)) can be devised in H, without an explicit definition of the feature map, only by using the kernel (the so-called kernel trick). Recently, due to the development of very efficient solutions for large-scale linear SVMs and for incremental linear component analysis, research on finding feature map approximations for classes of kernels has attracted significant interest. In this paper, we attempt the derivation of explicit feature maps of a recently proposed class of kernels, the so-called one-shot similarity kernels. We show that for this class of kernels either there exists an explicit representation in feature space or the kernel can be expressed in such a form that allows for exact incremental learning. We theoretically explore the properties of these kernels and show how these kernels can be used for the development of robust visual tracking, recognition and deformable fitting algorithms.
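For readers unfamiliar with the kernel family the abstract refers to, the following is a minimal sketch of an LDA-based one-shot similarity score in the style of Wolf, Hassner and Taigman's "free-scale" closed form, which the one-shot similarity kernels studied here build on. The function name, the regularisation term, and the use of the negative set's covariance as the within-class scatter are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def free_scale_oss(x, y, neg_set, reg=1e-6):
    """One-shot similarity between vectors x and y: train a free-scale
    LDA-style model for each of x, y against a common negative set
    (rows of neg_set), and sum the two cross-scores.
    Sketch only -- regularisation and scatter estimate are assumptions."""
    mu = neg_set.mean(axis=0)
    # Within-class scatter estimated from the negative set,
    # regularised so that it is safely invertible.
    Sw = np.cov(neg_set, rowvar=False) + reg * np.eye(neg_set.shape[1])
    Sw_inv = np.linalg.inv(Sw)
    # Projection directions of the models learned from x and from y.
    vx = Sw_inv @ (x - mu)
    vy = Sw_inv @ (y - mu)
    # Score of x's model on y, and of y's model on x; the sum is symmetric.
    s_xy = vx @ (y - (x + mu) / 2.0)
    s_yx = vy @ (x - (y + mu) / 2.0)
    return s_xy + s_yx

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 5))            # negative ("background") set
x, y = rng.normal(size=5), rng.normal(size=5)
score = free_scale_oss(x, y, A)         # symmetric in x and y by construction
```

The symmetry of the summed score is what makes a valid kernel possible; the paper's contribution concerns when such kernels admit explicit feature maps.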

Related Material

@InProceedings{Zafeiriou_2013_ICCV,
author = {Zafeiriou, Stefanos and Kotsia, Irene},
title = {On One-Shot Similarity Kernels: Explicit Feature Maps and Properties},
booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
month = {December},
year = {2013}
}