Linear Sequence Discriminant Analysis: A Model-Based Dimensionality Reduction Method for Vector Sequences

*Bing Su, Xiaoqing Ding*; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2013, pp. 889-896

**Abstract**

Dimensionality reduction for vectors in sequences is challenging since labels are attached to sequences as a whole. This paper presents a model-based dimensionality reduction method for vector sequences, namely linear sequence discriminant analysis (LSDA), which attempts to find a subspace in which sequences of the same class are projected together while those of different classes are projected as far apart as possible. For each sequence class, an HMM is built, and statistics are extracted from its states. The means of these states are linked in order to form a mean sequence, and the variance of the sequence class is defined as the sum of the variances of all component states. LSDA then learns a transformation by maximizing the separability between sequence classes while minimizing the within-sequence-class scatter. The DTW distance between mean sequences is used to measure the separability between sequence classes. We show that the optimization problem can be approximately transformed into an eigendecomposition problem. LDA can be seen as a special case of LSDA by considering non-sequential vectors as sequences of length one. The effectiveness of the proposed LSDA is demonstrated on two individual-sequence datasets from the UCI machine learning repository as well as two concatenated-sequence datasets: the APTI Arabic printed text database and the IFN/ENIT Arabic handwriting database.
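The pipeline the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the per-class HMM state means and summed state covariances have already been estimated (here they are passed in directly), uses a standard DTW alignment between mean sequences to accumulate a between-class scatter matrix, and solves the resulting trace-ratio objective approximately as an eigendecomposition of S_w^{-1} S_b. The function names `dtw_path` and `lsda` are placeholders chosen for this sketch.

```python
import numpy as np

def dtw_path(X, Y):
    """DTW alignment path between two mean sequences (rows = state means)."""
    n, m = len(X), len(Y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.sum((X[i - 1] - Y[j - 1]) ** 2)
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack from the end of the accumulated-cost matrix.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

def lsda(class_means, class_covs, d_out):
    """Sketch of LSDA: project to a subspace separating sequence classes.

    class_means: list of (n_states_c, d) arrays, the HMM state-mean
                 sequence of each class (assumed pre-trained).
    class_covs:  list of (d, d) arrays, the summed state covariances
                 of each class (the class variance in the abstract).
    d_out:       target dimensionality.
    """
    d = class_means[0].shape[1]
    Sw = sum(class_covs)           # within-sequence-class scatter
    Sb = np.zeros((d, d))          # between-class scatter from DTW-aligned mean pairs
    for a in range(len(class_means)):
        for b in range(a + 1, len(class_means)):
            for i, j in dtw_path(class_means[a], class_means[b]):
                diff = (class_means[a][i] - class_means[b][j])[:, None]
                Sb += diff @ diff.T
    # Maximize between- over within-class scatter: top eigenvectors of Sw^{-1} Sb.
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    order = np.argsort(-evals.real)
    return evecs[:, order[:d_out]].real   # (d, d_out) projection matrix
```

For instance, with two toy classes whose state means differ only along the first coordinate and identity state covariances, the learned 1-D projection concentrates on that coordinate.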

**Related Material**

[pdf] [bibtex]

@InProceedings{Su_2013_ICCV,
    author    = {Su, Bing and Ding, Xiaoqing},
    title     = {Linear Sequence Discriminant Analysis: A Model-Based Dimensionality Reduction Method for Vector Sequences},
    booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
    month     = {December},
    year      = {2013}
}