Vision Transformers Are Good Mask Auto-Labelers

Shiyi Lan, Xitong Yang, Zhiding Yu, Zuxuan Wu, Jose M. Alvarez, Anima Anandkumar; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023, pp. 23745-23755
Abstract
We propose Mask Auto-Labeler (MAL), a high-quality, Transformer-based mask auto-labeling framework for instance segmentation that uses only box annotations. MAL takes box-cropped images as input and conditionally generates their mask pseudo-labels. We show that Vision Transformers are good mask auto-labelers. Our method significantly narrows the gap in mask quality between auto-labeling and human annotation. Instance segmentation models trained with MAL-generated masks can nearly match their fully supervised counterparts, retaining up to 97.4% of the performance of fully supervised models. The best model achieves 44.1% mAP on COCO instance segmentation (test-dev 2017), outperforming state-of-the-art box-supervised methods by significant margins. Qualitative results indicate that masks produced by MAL are, in some cases, even better than human annotations.
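The abstract describes a two-phase pipeline: a box-conditioned auto-labeler first turns ground-truth boxes into mask pseudo-labels, and a standard instance segmentation model is then trained on those masks as if they were human annotations. The sketch below only illustrates that data flow under our own assumptions; the helper names (crop_by_box, generate_pseudo_masks) and the mask_auto_labeler callable are hypothetical stand-ins, not the paper's released code.

# Minimal, hypothetical sketch of box-supervised mask auto-labeling.
# Assumes only ground-truth boxes are available; networks are stand-in callables.
import numpy as np

def crop_by_box(image: np.ndarray, box):
    """Crop the region of interest given an (x1, y1, x2, y2) box."""
    x1, y1, x2, y2 = [int(round(v)) for v in box]
    return image[y1:y2, x1:x2]

def generate_pseudo_masks(images, boxes_per_image, mask_auto_labeler):
    """Phase 1: run the box-conditioned auto-labeler on each box crop and
    paste each predicted crop mask back into full-image coordinates."""
    pseudo_masks = []
    for image, boxes in zip(images, boxes_per_image):
        h, w = image.shape[:2]
        masks = []
        for box in boxes:
            crop = crop_by_box(image, box)
            crop_mask = mask_auto_labeler(crop)  # binary mask of the crop
            full = np.zeros((h, w), dtype=bool)
            x1, y1, x2, y2 = [int(round(v)) for v in box]
            full[y1:y2, x1:x2] = crop_mask
            masks.append(full)
        pseudo_masks.append(masks)
    return pseudo_masks

# Phase 2 (not shown): train any off-the-shelf instance segmentation model on
# (images, boxes_per_image, pseudo_masks), treating the pseudo-masks as labels.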
Related Material
[bibtex]
@InProceedings{Lan_2023_CVPR,
    author    = {Lan, Shiyi and Yang, Xitong and Yu, Zhiding and Wu, Zuxuan and Alvarez, Jose M. and Anandkumar, Anima},
    title     = {Vision Transformers Are Good Mask Auto-Labelers},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2023},
    pages     = {23745-23755}
}