Robust Dictionary Learning by Error Source Decomposition

Zhuoyuan Chen, Ying Wu; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2013, pp. 2216-2223

Abstract


Sparsity models have recently shown great promise in many vision tasks. A learned dictionary generally outperforms predefined bases on clean data. In practice, however, both training and testing data may be corrupted, containing noise and outliers. Although recent studies have attempted to cope with corrupted data and achieved encouraging results in the testing phase, handling corruption in the training phase remains a very difficult problem. In contrast to most existing methods, which learn the dictionary from clean data, this paper targets corruptions and outliers in the training data for dictionary learning. We propose a general method to decompose the reconstructive residual into two components: a non-sparse component that accounts for small universal noise and a sparse component that accounts for large outliers. Further analysis reveals the connection between our approach and "partial" dictionary learning, which updates only a subset of the prototypes (the informative codewords) while keeping the remaining (noisy) codewords fixed. Experiments on synthetic data as well as on real applications show the satisfactory performance of this new robust dictionary learning approach.
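
As a reading aid, the following is a minimal Python sketch of the residual decomposition the abstract describes: a squared loss absorbs the small dense noise while an explicit l1-penalized term E captures large sparse outliers. The objective, the variable names (D, A, E), the penalty weights lam and gam, and the alternating proximal-gradient solver are all our illustrative assumptions, not the paper's actual algorithm or its dictionary-update step.

import numpy as np

def soft(Z, t):
    """Elementwise soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(Z) * np.maximum(np.abs(Z) - t, 0.0)

def decompose_errors(X, D, lam=0.1, gam=0.5, n_iter=200):
    """Fit X ~ D @ A + E with sparse codes A and a sparse outlier matrix E;
    the leftover N = X - D @ A - E is the dense small-noise component
    handled by the squared loss.

    Minimizes  0.5*||X - D@A - E||_F^2 + lam*||A||_1 + gam*||E||_1
    by alternating an ISTA step on A with the closed-form
    soft-threshold update of E.
    """
    A = np.zeros((D.shape[1], X.shape[1]))
    E = np.zeros_like(X)
    step = 1.0 / (np.linalg.norm(D, 2) ** 2 + 1e-12)  # 1/L for the A-subproblem
    for _ in range(n_iter):
        # Proximal-gradient (ISTA) step on the sparse codes A
        grad = D.T @ (D @ A + E - X)
        A = soft(A - step * grad, step * lam)
        # E appears with an identity operator, so its update is closed-form
        E = soft(X - D @ A, gam)
    N = X - D @ A - E  # dense small-noise residual
    return A, E, N

With gam set large, E vanishes and the model reduces to ordinary sparse coding; in a full robust dictionary learning loop this decomposition would be alternated with a dictionary-update step, which the sketch deliberately omits.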

Related Material


[bibtex]
@InProceedings{Chen_2013_ICCV,
author = {Chen, Zhuoyuan and Wu, Ying},
title = {Robust Dictionary Learning by Error Source Decomposition},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {December},
year = {2013}
}