Human-centric Visual Relation Segmentation Using Mask R-CNN and VTransE

Fan Yu, Xin Tan, Tongwei Ren, Gangshan Wu; Proceedings of the European Conference on Computer Vision (ECCV) Workshops, 2018

Abstract


In this paper, we propose a novel human-centric visual relation segmentation method based on the Mask R-CNN and VTransE models. We first retrain the Mask R-CNN model and segment both human and object instances. Because Mask R-CNN may omit some human instances during instance segmentation, we further detect the omitted faces and extend them to localize the corresponding human instances. Finally, we retrain the last layer of the VTransE model and detect the visual relations between each pair of a human instance and a human/object instance. The experimental results show that our method achieves 0.4799, 0.4069, and 0.2681 on R@100 at m-IoU thresholds of 0.25, 0.50, and 0.75, respectively, outperforming the other methods in the Person in Context Challenge.
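To make the three-stage pipeline in the abstract concrete, below is a minimal Python sketch, not the authors' released code: it assumes torchvision's pretrained Mask R-CNN as a stand-in for the retrained segmentation model, and it treats face_detect and vtranse_predict as hypothetical helpers standing in for the face detector and the retrained VTransE relation head.

# Minimal sketch of the three-stage pipeline described in the abstract (not the authors' code).
# Assumptions: torchvision's pretrained Mask R-CNN replaces the retrained segmentation model;
# face_detect() and vtranse_predict() are hypothetical stand-ins for the face detector and
# the retrained VTransE relation head.
import torch
import torchvision

model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)
model.eval()

PERSON_CLASS = 1  # COCO label id for "person"

def segment_instances(image, score_thresh=0.5):
    # Stage 1: instance segmentation of human and object instances.
    with torch.no_grad():
        out = model([image])[0]
    keep = out["scores"] > score_thresh
    return {k: v[keep] for k, v in out.items()}

def recover_missed_humans(image, instances, face_detect, expand_ratio=3.0):
    # Stage 2: detect faces that Mask R-CNN missed and extend each face box
    # into a rough full-person box (the expansion heuristic here is hypothetical).
    person_boxes = instances["boxes"][instances["labels"] == PERSON_CLASS]
    extra_boxes = []
    for (x1, y1, x2, y2) in face_detect(image):  # hypothetical face detector
        w, h = x2 - x1, y2 - y1
        body = torch.tensor([x1 - w, y1, x2 + w, y2 + expand_ratio * h])
        already_covered = (
            person_boxes.numel() > 0
            and torchvision.ops.box_iou(body.unsqueeze(0), person_boxes).max() > 0.5
        )
        if not already_covered:
            extra_boxes.append(body)
    return extra_boxes

def detect_relations(image, human_boxes, all_boxes, vtranse_predict):
    # Stage 3: score visual relations for every (human, human/object) pair
    # with the retrained VTransE head (hypothetical interface).
    return [vtranse_predict(image, subject=h, object=o)
            for h in human_boxes for o in all_boxes]

The sketch only shows the control flow between the stages; in the described method the recovered person boxes would also receive segmentation masks before relation detection.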

Related Material


[pdf]
[bibtex]
@InProceedings{Yu_2018_ECCV_Workshops,
author = {Yu, Fan and Tan, Xin and Ren, Tongwei and Wu, Gangshan},
title = {Human-centric Visual Relation Segmentation Using Mask R-CNN and VTransE},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV) Workshops},
month = {September},
year = {2018}
}