One Shot Learning via Compositions of Meaningful Patches
Alex Wong, Alan L. Yuille; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2015, pp. 1197-1205
Abstract
The task of discriminating one object from another is almost trivial for a human being. However, this task is computationally taxing for most modern machine learning methods, whereas we perform it with ease given very few examples for learning. It has been proposed that this quick grasp of concepts may come from knowledge shared between the new example and examples learned previously. We believe that the key to one-shot learning is the sharing of common parts, as each part holds an immense amount of information about how a visual concept is constructed. We propose an unsupervised method for learning a compact dictionary of image patches representing meaningful components of an object. Using those patches as features, we build a compositional model that outperforms a number of popular algorithms on a one-shot learning task. We demonstrate the effectiveness of this approach on handwritten digits and show that the model generalizes to multiple datasets.
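To make the pipeline in the abstract concrete, here is a minimal sketch, not the authors' method: it substitutes k-means clustering of raw patches for the paper's unsupervised learning of meaningful patch dictionaries, and a nearest-prototype classifier over patch-assignment histograms for the compositional model. The dataset (scikit-learn's load_digits), patch size, stride, and cluster count are all illustrative assumptions.

# Sketch only: k-means patch "dictionary" + nearest-prototype one-shot classifier.
# This is NOT the paper's algorithm; it only illustrates the patch-as-feature idea.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_digits

def extract_patches(img, size=4, stride=2):
    # Slide a size x size window over the image; return flattened patches.
    h, w = img.shape
    return np.array([img[i:i + size, j:j + size].ravel()
                     for i in range(0, h - size + 1, stride)
                     for j in range(0, w - size + 1, stride)])

def encode(img, dictionary):
    # Describe an image by a normalized histogram of its patches'
    # nearest dictionary atoms (a crude stand-in for a compositional model).
    atoms = dictionary.predict(extract_patches(img))
    hist = np.bincount(atoms, minlength=dictionary.n_clusters).astype(float)
    return hist / hist.sum()

digits = load_digits()                      # 8x8 handwritten digits
images, labels = digits.images, digits.target

# Learn the patch dictionary, unsupervised, on a background set of images.
background = images[:500]
all_patches = np.vstack([extract_patches(im) for im in background])
dictionary = KMeans(n_clusters=32, n_init=4, random_state=0).fit(all_patches)

# One-shot task: a single labeled prototype per class.
rng = np.random.default_rng(0)
protos, proto_labels = [], []
for c in range(10):
    idx = rng.choice(np.where(labels[500:] == c)[0]) + 500
    protos.append(encode(images[idx], dictionary))
    proto_labels.append(c)
protos = np.array(protos)

# Classify held-out digits by the nearest prototype in patch-histogram space.
test_idx = rng.choice(np.arange(500, len(images)), size=200, replace=False)
correct = 0
for i in test_idx:
    d = np.linalg.norm(protos - encode(images[i], dictionary), axis=1)
    correct += int(proto_labels[int(d.argmin())] == labels[i])
print(f"one-shot accuracy: {correct / len(test_idx):.2f}")

In this toy setup the histogram encoding discards the spatial arrangement of patches; the paper's compositional model, by contrast, is built precisely to exploit how parts compose into a whole.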
Related Material
[pdf]
[bibtex]
@InProceedings{Wong_2015_ICCV,
author = {Wong, Alex and Yuille, Alan L.},
title = {One Shot Learning via Compositions of Meaningful Patches},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {December},
year = {2015}
}