Labels4Free: Unsupervised Segmentation Using StyleGAN
Abstract
We propose an unsupervised segmentation framework for StyleGAN-generated objects. We build on two main observations. First, the features generated by StyleGAN hold valuable information that can be used to train segmentation networks. Second, the foreground and background can often be treated as largely independent and swapped across images to produce plausible composited images. For our solution, we propose to augment the StyleGAN2 generator architecture with a segmentation branch and to split the generator into a foreground and background network. This enables us to generate soft segmentation masks for the foreground object in an unsupervised fashion. On multiple object classes, we report results comparable to state-of-the-art supervised segmentation networks, and a clear improvement over the best unsupervised segmentation approach, in both qualitative and quantitative metrics. Project page: https://rameenabdal.github.io/Labels4Free
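The central mechanism described in the abstract, compositing an independently generated foreground and background through a soft alpha mask, can be illustrated with a short sketch. This is a minimal illustration only, not the authors' implementation: it assumes PyTorch, and the module names (foreground_net, background_net, alpha_branch) and their return signatures are hypothetical stand-ins for the StyleGAN2 generator branches.

```python
# Minimal sketch (not the authors' code): alpha compositing of a generated
# foreground and background, as described in the abstract. All module names
# and signatures here are hypothetical.
import torch
import torch.nn as nn

class AlphaCompositor(nn.Module):
    def __init__(self, foreground_net: nn.Module, background_net: nn.Module,
                 alpha_branch: nn.Module):
        super().__init__()
        self.foreground_net = foreground_net  # generates the foreground object
        self.background_net = background_net  # generates the background scene
        self.alpha_branch = alpha_branch      # segmentation branch predicting a mask

    def forward(self, w_fg: torch.Tensor, w_bg: torch.Tensor):
        # Assumed: the foreground branch returns its image and intermediate
        # features; the mask is predicted from those features.
        fg_image, fg_features = self.foreground_net(w_fg)
        bg_image = self.background_net(w_bg)
        alpha = torch.sigmoid(self.alpha_branch(fg_features))  # soft mask in [0, 1]
        # Alpha-blend: because foreground and background are treated as largely
        # independent, swapping w_bg across samples should still yield a
        # plausible composite, which is what makes the mask meaningful.
        composite = alpha * fg_image + (1.0 - alpha) * bg_image
        return composite, alpha
```

Under this reading, the soft mask alpha produced for a generated image serves directly as the unsupervised segmentation of the foreground object.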
Related Material
[pdf]
[supp]
[arXiv]
[bibtex]
@InProceedings{Abdal_2021_ICCV,
  author    = {Abdal, Rameen and Zhu, Peihao and Mitra, Niloy J. and Wonka, Peter},
  title     = {Labels4Free: Unsupervised Segmentation Using StyleGAN},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  month     = {October},
  year      = {2021},
  pages     = {13970-13979}
}