Warp-Refine Propagation: Semi-Supervised Auto-Labeling via Cycle-Consistency
Abstract
Deep learning models for semantic segmentation rely on expensive, large-scale, manually annotated datasets. Labeling is a tedious process that can take hours per image. Automatically annotating video sequences by propagating sparsely labeled frames through time is a more scalable alternative. In this work, we propose a novel label propagation method, termed Warp-Refine Propagation, that combines semantic cues with geometric cues to efficiently auto-label videos. Our method learns to refine geometrically warped labels and infuse them with learned semantic priors in a semi-supervised setting by leveraging cycle consistency across time. We quantitatively show that our method improves label propagation by a noteworthy margin of 13.1 mIoU on the ApolloScape dataset. Furthermore, by training with the auto-labeled frames, we achieve competitive results on three semantic-segmentation benchmarks, improving the state of the art by large margins of 1.8 and 3.61 mIoU on NYU-V2 and KITTI, respectively, while matching the current best results on Cityscapes.
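
The warp-then-refine idea summarized above can be illustrated with a minimal sketch. The following Python/PyTorch snippet is not the authors' released implementation: the names flow_estimator and refinement_net are hypothetical placeholders for the geometric and semantic components, and the cycle-consistency training signal described in the abstract is omitted. It only shows the propagation step, where an annotated frame's labels are warped to a nearby unlabeled frame and then corrected by a learned refinement network.

import torch
import torch.nn.functional as F

def warp_labels(labels_onehot, flow):
    """Backward-warp one-hot labels (B, C, H, W) from the labeled frame
    to the target frame using a dense flow field (B, 2, H, W) in pixels."""
    b, _, h, w = labels_onehot.shape
    # Base sampling grid in pixel coordinates.
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=torch.float32),
        torch.arange(w, dtype=torch.float32),
        indexing="ij",
    )
    grid = torch.stack((xs, ys), dim=0).unsqueeze(0).expand(b, -1, -1, -1)
    # Displace the grid by the flow, then normalize to [-1, 1] for grid_sample.
    coords = grid + flow
    norm_x = 2.0 * coords[:, 0] / (w - 1) - 1.0
    norm_y = 2.0 * coords[:, 1] / (h - 1) - 1.0
    sample_grid = torch.stack((norm_x, norm_y), dim=-1)
    return F.grid_sample(labels_onehot, sample_grid, mode="nearest",
                         align_corners=True)

def propagate(labeled_frame, labels_onehot, target_frame,
              flow_estimator, refinement_net):
    """Warp-then-refine propagation from a labeled frame to an unlabeled one."""
    # Geometric cue: dense correspondence between the two frames
    # (flow_estimator is an assumed, off-the-shelf component).
    flow = flow_estimator(target_frame, labeled_frame)
    warped = warp_labels(labels_onehot, flow)
    # Semantic cue: a learned network corrects the warped labels given the
    # target image, producing the final auto-label (refinement_net is assumed).
    logits = refinement_net(torch.cat((target_frame, warped), dim=1))
    return logits.argmax(dim=1)
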
Related Material
[pdf] [supp] [arXiv] [bibtex]

@InProceedings{Ganeshan_2021_ICCV,
  author    = {Ganeshan, Aditya and Vallet, Alexis and Kudo, Yasunori and Maeda, Shin-ichi and Kerola, Tommi and Ambrus, Rares and Park, Dennis and Gaidon, Adrien},
  title     = {Warp-Refine Propagation: Semi-Supervised Auto-Labeling via Cycle-Consistency},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  month     = {October},
  year      = {2021},
  pages     = {15499-15509}
}