InverseForm: A Loss Function for Structured Boundary-Aware Segmentation

Shubhankar Borse, Ying Wang, Yizhe Zhang, Fatih Porikli; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 5901-5911

Abstract

We present a novel boundary-aware loss term for semantic segmentation using an inverse-transformation network, which efficiently learns the degree of parametric transformations between estimated and target boundaries. This plug-in loss term complements the cross-entropy loss in capturing boundary transformations and allows consistent and significant performance improvement on segmentation backbone models without increasing their size and computational complexity. We analyze the quantitative and qualitative effects of our loss function on three indoor and outdoor segmentation benchmarks, including Cityscapes, NYU-Depth-v2, and PASCAL, integrating it into the training phase of several backbone networks in both single-task and multi-task settings. Our extensive experiments show that the proposed method consistently outperforms baselines, and even sets the new state-of-the-art on two datasets.
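To make the plug-in structure concrete, here is a minimal numpy sketch of how such a combined objective could look: a standard pixel-wise cross-entropy plus a weighted boundary term. Note that the boundary distance below is a simple mean-squared-error placeholder, not the paper's learned inverse-transformation measure; the function names and the weighting scheme are assumptions for illustration only.

```python
import numpy as np

def cross_entropy_loss(probs, labels, eps=1e-12):
    """Mean pixel-wise cross-entropy.

    probs:  (H, W, C) array of per-pixel class probabilities.
    labels: (H, W) array of integer class indices.
    """
    h, w = labels.shape
    # Pick the predicted probability of the true class at every pixel.
    picked = probs[np.arange(h)[:, None], np.arange(w)[None, :], labels]
    return -np.mean(np.log(picked + eps))

def boundary_distance(pred_boundary, target_boundary):
    """Placeholder for the learned boundary distance.

    The paper measures the degree of parametric transformation between
    boundary maps via an inverse-transformation network; here we use a
    plain MSE between boundary maps purely as a stand-in.
    """
    return np.mean((pred_boundary - target_boundary) ** 2)

def total_loss(probs, labels, pred_boundary, target_boundary, lam=0.5):
    """Plug-in combination: cross-entropy plus a weighted boundary term.

    lam is a hypothetical trade-off weight; the actual balancing in the
    paper may differ.
    """
    ce = cross_entropy_loss(probs, labels)
    return ce + lam * boundary_distance(pred_boundary, target_boundary)
```

Because the boundary term is additive, it can be attached to an existing segmentation training loop without changing the backbone network, which is what makes this kind of loss a drop-in complement to cross-entropy.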

Related Material

@InProceedings{Borse_2021_CVPR,
    author    = {Borse, Shubhankar and Wang, Ying and Zhang, Yizhe and Porikli, Fatih},
    title     = {InverseForm: A Loss Function for Structured Boundary-Aware Segmentation},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2021},
    pages     = {5901-5911}
}