Multi-Directional Subspace Editing in Style-Space

Chen Naveh; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 7138-7148

Abstract


This paper presents a new technique for finding disentangled semantic directions in the latent space of StyleGAN. Our method identifies meaningful orthogonal subspaces that allow editing of one human face attribute while minimizing undesired changes in other attributes. Our model can edit a single attribute in multiple directions, yielding a range of possible generated images. We compare our scheme with three state-of-the-art models and show that our method outperforms them in face editing and disentanglement capabilities. Additionally, we propose quantitative measures for evaluating attribute separation and disentanglement, and demonstrate the superiority of our model on those measures.
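The core idea described in the abstract (editing along one semantic direction while leaving others untouched) can be illustrated with a minimal sketch. The code below is not the paper's method: the attribute directions here are random stand-ins, and the function names are hypothetical. It only shows the generic mechanism, assuming attribute directions in a StyleGAN-like style space are orthogonalized (here via Gram-Schmidt) so that moving a style code along one direction has zero component along the others.

```python
import numpy as np

def orthogonalize(directions):
    # Gram-Schmidt: make each attribute direction orthogonal to the others,
    # so an edit along one direction minimally disturbs the rest.
    basis = []
    for d in directions:
        v = d.astype(float).copy()
        for b in basis:
            v -= (v @ b) * b          # remove component along earlier basis vectors
        v /= np.linalg.norm(v)        # normalize to a unit direction
        basis.append(v)
    return basis

def edit_style_code(s, direction, alpha):
    # Move the style code along a unit semantic direction by strength alpha.
    return s + alpha * direction

rng = np.random.default_rng(0)
dim = 512                                         # typical StyleGAN latent dimensionality
raw = [rng.normal(size=dim) for _ in range(3)]    # stand-in attribute directions
basis = orthogonalize(raw)

s = rng.normal(size=dim)                          # stand-in style code
edited = edit_style_code(s, basis[0], alpha=2.0)

# The edit has (near-)zero component along the other attribute directions.
print(abs((edited - s) @ basis[1]) < 1e-8)        # True
```

Varying `alpha` over a range of values corresponds to the multi-directional edits the abstract mentions: each value produces a different generated image for the same attribute.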

Related Material


@InProceedings{Naveh_2023_ICCV,
    author    = {Naveh, Chen},
    title     = {Multi-Directional Subspace Editing in Style-Space},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {7138-7148}
}