A Deformable Mixture Parsing Model with Parselets

Jian Dong, Qiang Chen, Wei Xia, Zhongyang Huang, Shuicheng Yan; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2013, pp. 3408-3415

Abstract


In this work, we address the problem of human parsing, namely partitioning the human body into semantic regions, by using the novel Parselet representation. Previous works often consider solving the problem of human pose estimation as the prerequisite of human parsing. We argue that these approaches cannot obtain optimal pixel-level parsing due to the inconsistency between the objectives of the two tasks. In this paper, we propose to use Parselets as the building blocks of our parsing model. Parselets are a group of parsable segments which can generally be obtained by low-level over-segmentation algorithms and which bear strong semantic meaning. We then build a Deformable Mixture Parsing Model (DMPM) for human parsing to simultaneously handle the deformation and multi-modalities of Parselets. The proposed model has two unique characteristics: (1) the numerous possible modalities of Parselet ensembles are represented as the "And-Or" structure of sub-trees; (2) to further address the practical problem of Parselet occlusion or absence, we directly model the visibility property at some leaf nodes. The DMPM thus directly solves the problem of human parsing by searching for the best graph configuration over a pool of Parselet hypotheses, without intermediate tasks. Comprehensive evaluations demonstrate the encouraging performance of the proposed approach.
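To make the abstract's search concrete, here is a minimal illustrative sketch of inference over an And-Or tree of Parselet hypotheses. All names (`Leaf`, `AndNode`, `OrNode`, the hypothesis tuples) are assumptions for illustration, not the paper's actual formulation; pairwise deformation costs between Parselets, which the full DMPM includes, are omitted for brevity. "Or" nodes select one modality by maximum score, "And" nodes sum their children, and each leaf either picks its best-scoring hypothesis or declares the Parselet absent via a visibility bias:

```python
from dataclasses import dataclass


@dataclass
class Leaf:
    label: str                # semantic region this leaf selects, e.g. "hat"
    visible_bias: float = 0.0  # hypothetical score for declaring the Parselet absent

    def best(self, hypotheses):
        # Pick the highest-scoring hypothesis for this label; if even the best
        # visible hypothesis scores below the bias, treat the Parselet as
        # occluded/absent (visibility modeled directly at the leaf).
        candidates = [(s, (lbl, seg)) for lbl, seg, s in hypotheses
                      if lbl == self.label]
        score, pick = max(candidates, default=(float("-inf"), None))
        if score >= self.visible_bias:
            return score, [pick]
        return self.visible_bias, []  # Parselet absent in this configuration


@dataclass
class AndNode:
    children: list

    def best(self, hypotheses):
        # "And": all children must be instantiated; their scores add up.
        total, picks = 0.0, []
        for child in self.children:
            s, p = child.best(hypotheses)
            total += s
            picks += p
        return total, picks


@dataclass
class OrNode:
    children: list

    def best(self, hypotheses):
        # "Or": exactly one child sub-tree (one modality) is selected.
        return max((c.best(hypotheses) for c in self.children),
                   key=lambda t: t[0])


# Two modalities for the head region: with a hat, or with visible hair.
tree = OrNode([
    AndNode([Leaf("hat"), Leaf("face")]),
    AndNode([Leaf("hair"), Leaf("face")]),
])

# Hypotheses as (label, segment id, score) — illustrative values only.
hyps = [("hair", "seg1", 0.9), ("face", "seg2", 0.8), ("hat", "seg3", 0.2)]
score, config = tree.best(hyps)   # picks the "hair" modality here
```

Because every node reduces to a max or a sum over its children, the best configuration can be found exactly by this bottom-up dynamic program, mirroring the paper's claim that parsing is solved by searching for the best graph configuration without intermediate pose estimation.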

Related Material


[bibtex]
@InProceedings{Dong_2013_ICCV,
author = {Dong, Jian and Chen, Qiang and Xia, Wei and Huang, Zhongyang and Yan, Shuicheng},
title = {A Deformable Mixture Parsing Model with Parselets},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {December},
year = {2013}
}