Heterogeneous Image Features Integration via Multi-modal Semi-supervised Learning Model

Xiao Cai, Feiping Nie, Weidong Cai, Heng Huang; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2013, pp. 1737-1744

Abstract

Automatic image categorization has become increasingly important with the development of the Internet and the growth in the size of image databases. Although image categorization can be formulated as a typical multi-class classification problem, real-world images raise two major challenges. On one hand, although using more labeled training data may improve prediction performance, obtaining image labels is a time-consuming and potentially biased process. On the other hand, more and more visual descriptors have been proposed to describe the objects and scenes appearing in images, and different features capture different aspects of the visual characteristics. Therefore, how to integrate heterogeneous visual features for semi-supervised learning is crucial for categorizing large-scale image data. In this paper, we propose a novel approach that integrates heterogeneous features by performing multi-modal semi-supervised classification on unlabeled as well as unsegmented images. Treating each type of feature as one modality and taking advantage of the large amount of unlabeled data, our new adaptive multi-modal semi-supervised classification (AMMSS) algorithm simultaneously learns a commonly shared class indicator matrix and the weights for the different modalities (image features).
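
The paper defines the exact AMMSS objective and its optimization; the sketch below only illustrates the general idea in a hedged way, alternating between (1) graph-based label propagation over a weighted combination of per-modality graph Laplacians to obtain a shared class indicator matrix and (2) re-estimating the modality weights from how smoothly that matrix varies on each modality's graph. All names here (rbf_laplacian, ammss_sketch, alpha, gamma) are illustrative assumptions, not the authors' released code.

    # Rough illustration of adaptive multi-modal semi-supervised classification,
    # NOT the authors' exact AMMSS algorithm.
    import numpy as np

    def rbf_laplacian(X, sigma=1.0):
        """Normalized graph Laplacian from an RBF affinity on one feature modality."""
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        W = np.exp(-d2 / (2 * sigma ** 2))
        np.fill_diagonal(W, 0.0)
        Dinv = np.diag(1.0 / np.sqrt(W.sum(1) + 1e-12))
        return np.eye(len(X)) - Dinv @ W @ Dinv

    def ammss_sketch(modalities, Y, labeled, gamma=2.0, iters=20):
        """
        modalities: list of (n, d_m) feature matrices, one per visual descriptor
        Y:          (n, c) one-hot label matrix; rows of unlabeled samples are zero
        labeled:    boolean mask marking the labeled samples
        gamma:      exponent controlling how peaked the modality weights become
        """
        Ls = [rbf_laplacian(X) for X in modalities]
        M = len(Ls)
        alpha = np.ones(M) / M                        # modality weights
        F = Y.astype(float).copy()                    # shared class indicator matrix
        for _ in range(iters):
            L = sum(a ** gamma * Li for a, Li in zip(alpha, Ls))
            # Harmonic-style propagation: keep labeled rows fixed, solve for the rest.
            u = ~labeled
            F[u] = np.linalg.solve(L[np.ix_(u, u)] + 1e-6 * np.eye(u.sum()),
                                   -L[np.ix_(u, labeled)] @ F[labeled])
            # A smoother F on modality m (small tr(F' L_m F)) earns a larger weight.
            s = np.array([np.trace(F.T @ Li @ F) for Li in Ls]) + 1e-12
            alpha = (1.0 / s) ** (1.0 / (gamma - 1))
            alpha /= alpha.sum()
        return F.argmax(1), alpha

In this sketch the weight update follows the standard closed form for minimizing a weighted sum of per-modality smoothness terms under a simplex constraint, which is one common way such adaptive weights are derived; the paper should be consulted for the actual AMMSS formulation.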

Related Material

[bibtex]
@InProceedings{Cai_2013_ICCV,
author = {Cai, Xiao and Nie, Feiping and Cai, Weidong and Huang, Heng},
title = {Heterogeneous Image Features Integration via Multi-modal Semi-supervised Learning Model},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {December},
year = {2013}
}