Network-Free, Unsupervised Semantic Segmentation With Synthetic Images

Qianli Feng, Raghudeep Gadde, Wentong Liao, Eduard Ramon, Aleix Martinez; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023, pp. 23602-23610

Abstract


We derive a method that yields highly accurate semantic segmentation maps without the use of any additional neural network, layers, manually annotated training data, or supervised training. Our method is based on the observation that the correlations among a set of pixels belonging to the same semantic segment do not change when generating synthetic variants of an image using the style-mixing approach in GANs. We show how GAN inversion lets us accurately semantically segment both synthetic and real photos, as well as generate large sets of image and semantic-segmentation-mask pairs for training downstream tasks.
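The core observation can be illustrated with a toy sketch. This is not the paper's actual pipeline (which operates on style-mixed StyleGAN outputs and GAN inversion); it is a minimal numpy illustration, assuming we already have N synthetic variants of one image and that pixels in the same segment co-vary across variants, so clustering pixel "trajectories" by correlation recovers the segments. The function name and all details below are illustrative, not from the paper.

```python
import numpy as np

def segment_by_correlation(variants, n_segments, n_iters=20):
    """Toy segmentation: cluster pixels whose values co-vary across variants.

    variants: (N, H, W) array -- N synthetic variants of one image
    (grayscale for simplicity). Each pixel's N values form a trajectory;
    trajectories within one semantic segment are assumed highly correlated.
    Returns an (H, W) integer label map.
    """
    n, h, w = variants.shape
    x = variants.reshape(n, h * w).T                  # (pixels, N) trajectories
    # Zero-mean, unit-norm each trajectory so a dot product of two rows
    # equals their Pearson correlation.
    x = x - x.mean(axis=1, keepdims=True)
    norms = np.linalg.norm(x, axis=1, keepdims=True)
    x = x / np.where(norms == 0, 1.0, norms)
    # Deterministic farthest-point init: start from pixel 0, then repeatedly
    # add the trajectory least correlated with all chosen centers.
    centers = [x[0]]
    for _ in range(1, n_segments):
        sims = np.max(x @ np.array(centers).T, axis=1)
        centers.append(x[np.argmin(sims)])
    centers = np.array(centers)
    # Spherical k-means: assign each pixel to its most-correlated center.
    for _ in range(n_iters):
        labels = np.argmax(x @ centers.T, axis=1)
        for k in range(n_segments):
            members = x[labels == k]
            if len(members):
                c = members.mean(axis=0)
                centers[k] = c / max(np.linalg.norm(c), 1e-12)
    return labels.reshape(h, w)
```

On synthetic data where the left and right halves of an image follow two independent signals across variants, this recovers the two-region partition; the real method replaces raw pixel values with richer correlation statistics from the generator.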

Related Material


[bibtex]
@InProceedings{Feng_2023_CVPR,
  author    = {Feng, Qianli and Gadde, Raghudeep and Liao, Wentong and Ramon, Eduard and Martinez, Aleix},
  title     = {Network-Free, Unsupervised Semantic Segmentation With Synthetic Images},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2023},
  pages     = {23602-23610}
}