DANCE: A Deep Attentive Contour Model for Efficient Instance Segmentation

Zichen Liu, Jun Hao Liew, Xiangyu Chen, Jiashi Feng; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2021, pp. 345-354

Abstract


Contour-based instance segmentation methods are attractive due to their efficiency. However, existing contour-based methods either suffer from lossy representation, complex pipelines, or difficulty in model training, resulting in subpar mask accuracy on challenging datasets like MS-COCO. In this work, we propose a novel deep attentive contour model, named DANCE, to achieve better instance segmentation accuracy while maintaining good efficiency. To this end, DANCE applies two new designs: attentive contour deformation to refine the quality of segmentation contours and segment-wise matching to ease model training. Comprehensive experiments demonstrate that DANCE excels at deforming the initial contour in a more natural and efficient way towards the real object boundaries. The effectiveness of DANCE is further validated on the COCO dataset, where it achieves 38.1% mAP and outperforms all other contour-based instance segmentation models. To the best of our knowledge, DANCE is the first contour-based model that achieves comparable performance to pixel-wise segmentation models. Code is available at https://github.com/lkevinzc/dance.
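
To make the contour-deformation idea concrete, below is a minimal toy sketch of how a contour-based segmenter can regress per-vertex offsets that push an initial contour towards the object boundary. This is only an illustration of the general technique mentioned in the abstract, not DANCE's actual architecture; the module name, feature dimensions, and use of circular 1-D convolutions are assumptions for the example.

# Toy sketch of contour deformation via per-vertex offset regression
# (illustrative only; NOT the DANCE architecture).
import torch
import torch.nn as nn

class ContourDeformer(nn.Module):
    """Predicts per-vertex (dx, dy) offsets that move an initial contour
    towards the object boundary, given features sampled at each vertex."""
    def __init__(self, feat_dim=64, hidden_dim=128):
        super().__init__()
        # Circular 1-D convolutions respect the closed-loop topology of a contour.
        self.refine = nn.Sequential(
            nn.Conv1d(feat_dim + 2, hidden_dim, kernel_size=3,
                      padding=1, padding_mode="circular"),
            nn.ReLU(inplace=True),
            nn.Conv1d(hidden_dim, hidden_dim, kernel_size=3,
                      padding=1, padding_mode="circular"),
            nn.ReLU(inplace=True),
            nn.Conv1d(hidden_dim, 2, kernel_size=1),  # per-vertex (dx, dy)
        )

    def forward(self, vertex_feats, vertices):
        # vertex_feats: (B, feat_dim, N) features sampled at the N contour vertices
        # vertices:     (B, N, 2) current vertex coordinates
        x = torch.cat([vertex_feats, vertices.transpose(1, 2)], dim=1)
        offsets = self.refine(x).transpose(1, 2)  # (B, N, 2)
        return vertices + offsets                 # deformed contour

if __name__ == "__main__":
    B, N, feat_dim = 2, 128, 64
    init_contour = torch.rand(B, N, 2)     # e.g. vertices of an initial box contour
    feats = torch.randn(B, feat_dim, N)    # features bilinearly sampled at the vertices
    deformed = ContourDeformer(feat_dim)(feats, init_contour)
    print(deformed.shape)                  # torch.Size([2, 128, 2])

In practice such a deformation step is typically applied iteratively, and the per-vertex targets must be matched to ground-truth boundary points; DANCE's attentive deformation and segment-wise matching address exactly these aspects (see the paper for the actual formulation).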

Related Material


[bibtex]
@InProceedings{Liu_2021_WACV,
    author    = {Liu, Zichen and Liew, Jun Hao and Chen, Xiangyu and Feng, Jiashi},
    title     = {DANCE: A Deep Attentive Contour Model for Efficient Instance Segmentation},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2021},
    pages     = {345-354}
}