Virtual Multi-Modality Self-Supervised Foreground Matting for Human-Object Interaction

Bo Xu, Han Huang, Cheng Lu, Ziwen Li, Yandong Guo; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 438-447

Abstract


Most existing human matting algorithms try to separate a pure human-only foreground from the background. In this paper, we propose a Virtual Multi-modality Foreground Matting (VMFM) method to learn a human-object interactive foreground (the human together with the objects he or she interacts with) from a raw RGB image. The VMFM method requires no additional inputs, e.g., a trimap or a known background. We reformulate foreground matting as a self-supervised multi-modality problem: each input image is factored into an estimated depth map, a segmentation mask, and an interaction heatmap using three auto-encoders. To fully exploit the characteristics of each modality, we first train a dual encoder-to-decoder network to estimate the same alpha matte. We then introduce a self-supervised method, Complementary Learning (CL), which predicts a deviation probability map and exchanges reliable gradients across modalities without labels. We conducted extensive experiments to analyze the effectiveness of each modality and the significance of the different components of complementary learning, and we demonstrate that our model outperforms state-of-the-art methods.
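The abstract only outlines the architecture. As a rough illustration of the idea, the following is a minimal PyTorch-style sketch of a dual encoder-to-decoder matting setup with a complementary-learning-style gradient exchange; the module shapes, channel counts, and the exact loss form are our own assumptions for illustration, not the authors' implementation.

# Illustrative sketch (not the authors' code): two encoder-to-decoder branches,
# each fed RGB plus virtual modalities, predict an alpha matte and a deviation
# probability map; a toy complementary loss lets reliable pixels of one branch
# supervise the other. All shapes and weightings are assumptions.
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    def __init__(self, c_in, c_out):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1),
            nn.BatchNorm2d(c_out),
            nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.net(x)

class MatteBranch(nn.Module):
    """One encoder-to-decoder branch: RGB image plus one or more virtual
    modalities (e.g. depth, or segmentation mask + interaction heatmap)
    -> alpha matte and per-pixel deviation probability map."""
    def __init__(self, in_ch):
        super().__init__()
        self.encoder = nn.Sequential(ConvBlock(in_ch, 32), ConvBlock(32, 64))
        self.decoder = nn.Sequential(ConvBlock(64, 32), nn.Conv2d(32, 2, 1))
    def forward(self, x):
        out = self.decoder(self.encoder(x))
        alpha = torch.sigmoid(out[:, :1])      # predicted alpha matte
        deviation = torch.sigmoid(out[:, 1:])  # probability the alpha deviates
        return alpha, deviation

def complementary_loss(alpha_a, dev_a, alpha_b, dev_b):
    """Toy complementary objective: each branch is pulled toward the other's
    prediction, down-weighted where the other branch reports a high
    deviation probability, so only 'reliable' pixels exchange gradients."""
    loss_a = ((1 - dev_b).detach() * (alpha_a - alpha_b.detach()).abs()).mean()
    loss_b = ((1 - dev_a).detach() * (alpha_b - alpha_a.detach()).abs()).mean()
    return loss_a + loss_b

if __name__ == "__main__":
    rgb = torch.rand(1, 3, 128, 128)
    depth = torch.rand(1, 1, 128, 128)         # stand-in for the depth auto-encoder output
    seg_and_heat = torch.rand(1, 2, 128, 128)  # stand-ins for segmentation mask + heatmap
    branch_a = MatteBranch(in_ch=4)            # RGB + depth
    branch_b = MatteBranch(in_ch=5)            # RGB + mask + heatmap
    alpha_a, dev_a = branch_a(torch.cat([rgb, depth], dim=1))
    alpha_b, dev_b = branch_b(torch.cat([rgb, seg_and_heat], dim=1))
    loss = complementary_loss(alpha_a, dev_a, alpha_b, dev_b)
    loss.backward()
    print(alpha_a.shape, loss.item())

In this sketch the deviation maps act as confidence gates: detaching the teaching branch's alpha keeps the exchange one-directional per term, which mirrors the label-free, cross-modality supervision the abstract describes at a high level.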

Related Material


BibTeX
@InProceedings{Xu_2021_ICCV,
  author    = {Xu, Bo and Huang, Han and Lu, Cheng and Li, Ziwen and Guo, Yandong},
  title     = {Virtual Multi-Modality Self-Supervised Foreground Matting for Human-Object Interaction},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  month     = {October},
  year      = {2021},
  pages     = {438-447}
}