Robust Domain Adaptation on the L1-Grassmannian Manifold

Sriram Kumar, Andreas Savakis; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2016, pp. 103-110

Abstract


Domain adaptation aims to remedy the loss in classification performance that often occurs due to domain shifts between training and testing datasets. This problem, known as dataset bias, is attributed to variations across datasets. Domain adaptation methods on Grassmann manifolds, including Geodesic Subspace Sampling and the Geodesic Flow Kernel, are among the most popular. Grassmann learning facilitates a compact characterization of each domain by generating linear subspaces and representing them as points on the manifold. However, Grassmannian construction is based on PCA, which is sensitive to outliers. This motivates us to find linear projections that are robust to noise, outliers, and dataset idiosyncrasies. Hence, we combine L1-PCA and Grassmann manifolds to perform robust domain adaptation. We present empirical results that validate the improvements and robustness of this approach for domain adaptation in object class recognition across datasets.
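
As a rough illustration of the robust subspace step described in the abstract, the Python sketch below builds a d-dimensional L1 subspace using the greedy fixed-point (sign-flipping) L1-PCA iteration and treats the resulting orthonormal basis as a point on the Grassmann manifold. This is only a sketch under stated assumptions: the authors' exact L1-PCA solver and downstream adaptation step (e.g., a Geodesic Flow Kernel between source and target subspaces) may differ, and the function names, parameters, and synthetic data here are illustrative, not taken from the paper.

import numpy as np

def l1_pca_component(X, max_iter=100, tol=1e-6, seed=0):
    # One L1 principal direction of X (features x samples) via the
    # fixed-point sign-flipping iteration (Kwak-style PCA-L1).
    rng = np.random.default_rng(seed)
    q = rng.standard_normal(X.shape[0])
    q /= np.linalg.norm(q)
    for _ in range(max_iter):
        s = np.sign(X.T @ q)
        s[s == 0] = 1.0                      # avoid zero signs stalling the update
        q_new = X @ s
        q_new /= np.linalg.norm(q_new)
        if np.linalg.norm(q_new - q) < tol:
            return q_new
        q = q_new
    return q

def l1_subspace(X, d):
    # Greedy d-dimensional L1 subspace via deflation; the orthonormal basis Q
    # is treated as a point on the Grassmann manifold Gr(d, D).
    X = X - X.mean(axis=1, keepdims=True)    # center the data
    D = X.shape[0]
    Q = np.zeros((D, d))
    R = X.copy()
    for k in range(d):
        q = l1_pca_component(R)
        q -= Q[:, :k] @ (Q[:, :k].T @ q)     # re-orthogonalize against earlier directions
        q /= np.linalg.norm(q)
        Q[:, k] = q
        R = R - np.outer(q, q @ R)           # deflate the fitted direction
    return Q

# Illustrative use on synthetic source/target features (D x N matrices):
Xs = np.random.randn(50, 200)                # source domain features (hypothetical)
Xt = np.random.randn(50, 150)                # target domain features (hypothetical)
Ps, Pt = l1_subspace(Xs, 10), l1_subspace(Xt, 10)
# Cosines of the principal angles between the two subspaces, a standard
# Grassmannian similarity that geodesic-flow style adaptation builds on.
cosines = np.linalg.svd(Ps.T @ Pt, compute_uv=False)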

Related Material


[bibtex]
@InProceedings{Kumar_2016_CVPR_Workshops,
author = {Kumar, Sriram and Savakis, Andreas},
title = {Robust Domain Adaptation on the L1-Grassmannian Manifold},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2016}
}