Latent Transformations via NeuralODEs for GAN-Based Image Editing

Valentin Khrulkov, Leyla Mirvakhabova, Ivan Oseledets, Artem Babenko; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 14428-14437

Abstract


Recent advances in high-fidelity semantic image editing heavily rely on the presumably disentangled latent spaces of the state-of-the-art generative models, such as StyleGAN. Specifically, recent works show that it is possible to achieve decent controllability of attributes in face images via linear shifts along latent directions. Several recent methods address the discovery of such directions, implicitly assuming that the state-of-the-art GANs learn latent spaces with inherently linearly separable attribute distributions and semantic vector arithmetic properties. In our work, we show that nonlinear latent code manipulations realized as flows of a trainable Neural ODE are beneficial for many practical non-face image domains with more complex non-textured factors of variation. In particular, we investigate a large number of datasets with known attributes and demonstrate that certain attribute manipulations are challenging to obtain with linear shifts alone.
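
The following is a minimal sketch (not the authors' released code) contrasting the two editing mechanisms the abstract refers to: a linear shift along a fixed latent direction versus a nonlinear edit obtained by integrating a trainable Neural ODE vector field in the latent space. The class name LatentODE, the network sizes, the fixed-step Euler integrator, and the 512-dimensional stand-in for StyleGAN's W space are all illustrative assumptions.

import torch
import torch.nn as nn

LATENT_DIM = 512  # stand-in for the dimensionality of StyleGAN's W space


class LatentODE(nn.Module):
    """Trainable vector field f(w) defining the flow dw/dt = f(w) in latent space (hypothetical)."""

    def __init__(self, dim=LATENT_DIM, hidden=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.Tanh(),
            nn.Linear(hidden, dim),
        )

    def forward(self, w):
        return self.net(w)


def linear_edit(w, direction, alpha):
    """Baseline edit: shift the latent code along a fixed attribute direction."""
    return w + alpha * direction


def ode_edit(w, field, t=1.0, steps=20):
    """Nonlinear edit: integrate dw/dt = f(w) from 0 to t with fixed-step Euler."""
    dt = t / steps
    for _ in range(steps):
        w = w + dt * field(w)
    return w


if __name__ == "__main__":
    w = torch.randn(4, LATENT_DIM)          # a batch of latent codes
    direction = torch.randn(LATENT_DIM)     # a fixed attribute direction (assumed given)
    field = LatentODE()

    w_linear = linear_edit(w, direction, alpha=3.0)
    w_flow = ode_edit(w, field, t=1.0, steps=20)
    print(w_linear.shape, w_flow.shape)     # both torch.Size([4, 512])

In practice the vector field would be trained so that images generated from the transported codes exhibit the desired attribute change, and the edited codes would then be fed to the GAN generator; those training and generation steps are omitted here.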

Related Material


[pdf] [supp]
[bibtex]
@InProceedings{Khrulkov_2021_ICCV,
    author    = {Khrulkov, Valentin and Mirvakhabova, Leyla and Oseledets, Ivan and Babenko, Artem},
    title     = {Latent Transformations via NeuralODEs for GAN-Based Image Editing},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {14428-14437}
}