Text and Image Guided 3D Avatar Generation and Manipulation

Zehranaz Canfes, M. Furkan Atasoy, Alara Dirik, Pinar Yanardag; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2023, pp. 4421-4431

Abstract


The manipulation of latent space has recently become an active research topic in the field of generative models. Recent research shows that latent directions can be used to manipulate images towards certain attributes. However, controlling the generation process of 3D generative models remains a challenge. In this work, we propose a novel 3D manipulation method that can manipulate both the shape and texture of the model using text- or image-based prompts such as 'a young face' or 'a surprised face'. We leverage the Contrastive Language-Image Pre-training (CLIP) model and a pre-trained 3D GAN designed to generate face avatars, and build a fully differentiable rendering pipeline to manipulate meshes. More specifically, our method takes an input latent code and modifies it such that the target attribute specified by a text or image prompt is present or enhanced, while leaving other attributes largely unaffected. Our method requires only 5 minutes per manipulation, and we demonstrate the effectiveness of our approach with extensive results and comparisons.
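The optimization described in the abstract — adjusting a latent code so that the rendered result matches a prompt embedding while staying close to the original — can be illustrated with a minimal toy sketch. This is not the authors' implementation: a fixed random linear map stands in for the frozen renderer-plus-CLIP-image-encoder pipeline, a random unit vector stands in for the CLIP embedding of a prompt such as 'a young face', and finite differences replace the fully differentiable rendering pipeline used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (assumptions): W plays the role of the frozen
# render-then-CLIP-encode pipeline; `target` plays the role of the
# CLIP text embedding of the prompt.
W = rng.normal(size=(16, 8))
target = rng.normal(size=16)
target /= np.linalg.norm(target)

def cos(a, b):
    # Cosine similarity, as used in CLIP-style losses.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def loss(z, z0, lam=0.1):
    # Maximize similarity to the prompt embedding; the latent-distance
    # penalty keeps attributes not named by the prompt close to the input.
    return (1.0 - cos(W @ z, target)) + lam * float(np.sum((z - z0) ** 2))

def grad(z, z0, eps=1e-5):
    # Finite-difference gradient; the actual method backpropagates
    # through a differentiable renderer instead.
    g = np.zeros_like(z)
    for i in range(z.size):
        dz = np.zeros_like(z)
        dz[i] = eps
        g[i] = (loss(z + dz, z0) - loss(z - dz, z0)) / (2 * eps)
    return g

z0 = rng.normal(size=8)   # input latent code
z = z0.copy()
for _ in range(200):      # plain gradient descent on the latent code
    z -= 0.1 * grad(z, z0)

print(f"similarity: {cos(W @ z0, target):.3f} -> {cos(W @ z, target):.3f}")
```

The regularization weight `lam` mirrors the trade-off stated in the abstract: larger values keep the edited avatar closer to the input, smaller values let the prompt dominate.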

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Canfes_2023_WACV,
  author    = {Canfes, Zehranaz and Atasoy, M. Furkan and Dirik, Alara and Yanardag, Pinar},
  title     = {Text and Image Guided 3D Avatar Generation and Manipulation},
  booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
  month     = {January},
  year      = {2023},
  pages     = {4421-4431}
}