Category Modeling from Just a Single Labeling: Use Depth Information to Guide the Learning of 2D Models

Quanshi Zhang, Xuan Song, Xiaowei Shao, Ryosuke Shibasaki, Huijing Zhao; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013, pp. 193-200

Abstract


An object model base that covers a large number of object categories is of great value for many computer vision tasks. As artifacts are usually designed to have various textures, their structure is the primary feature that distinguishes one category from another. Thus, how to encode this structural information and how to start the model learning with a minimum of human labeling become two key challenges for the construction of the model base. We design a graphical model that uses object edges to represent object structures, and this paper aims to incrementally learn this category model from one labeled object and a number of casually captured scenes. However, the incremental model learning may be biased due to the limited human labeling. Therefore, we propose a new strategy that uses the depth information in RGBD images to guide the model learning for object detection in ordinary RGB images. In experiments, the proposed method achieves performance comparable to that of supervised methods that require the labeling of all target objects.
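
The abstract describes the depth-guidance idea only at a high level. As a rough illustration (not code from the paper), the Python/OpenCV sketch below shows one way a depth map could help separate structural object edges from texture edges, which is the kind of guidance RGBD data can lend to learning a 2D edge-based model. The file names, thresholds, and the Canny/Sobel pipeline are all assumptions made for this example.

```python
# Illustrative sketch (not the authors' algorithm): use a depth map to decide
# which 2D edge fragments are likely to lie on an object boundary, so that
# only those fragments are passed on to a 2D edge-based model learner.
import cv2
import numpy as np

def object_edge_mask(rgb_path, depth_path, canny_lo=50, canny_hi=150,
                     depth_jump=0.15):
    """Return a binary mask of RGB edges that coincide with depth discontinuities."""
    rgb = cv2.imread(rgb_path)                                   # H x W x 3, uint8
    depth = cv2.imread(depth_path, cv2.IMREAD_UNCHANGED).astype(np.float32)

    # 2D edges from appearance only.
    gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, canny_lo, canny_hi) > 0              # boolean edge map

    # Depth discontinuities: large local gradient in the depth map.
    dx = cv2.Sobel(depth, cv2.CV_32F, 1, 0, ksize=3)
    dy = cv2.Sobel(depth, cv2.CV_32F, 0, 1, ksize=3)
    jump = np.sqrt(dx * dx + dy * dy) > depth_jump * np.nanmax(depth)

    # Keep only RGB edges supported by a depth discontinuity; these fragments
    # are more likely to reflect object structure than surface texture.
    return (edges & jump).astype(np.uint8) * 255

# Usage (hypothetical file names):
# mask = object_edge_mask("scene.png", "scene_depth.png")
# cv2.imwrite("object_edges.png", mask)
```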

Related Material


[pdf]
[bibtex]
@InProceedings{Zhang_2013_CVPR,
author = {Zhang, Quanshi and Song, Xuan and Shao, Xiaowei and Shibasaki, Ryosuke and Zhao, Huijing},
title = {Category Modeling from Just a Single Labeling: Use Depth Information to Guide the Learning of 2D Models},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2013}
}