StyleAvatar: Stylizing Animatable Head Avatars

Juan C. Pérez, Thu Nguyen-Phuoc, Chen Cao, Artsiom Sanakoyeu, Tomas Simon, Pablo Arbeláez, Bernard Ghanem, Ali Thabet, Albert Pumarola; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2024, pp. 8678-8687

Abstract


AR/VR applications promise to give people a genuine feeling of mutual presence when communicating through their personalized avatars. While realistic avatars are essential in many social settings, the vast possibilities of a virtual world also invite the use of stylized avatars for other purposes. We introduce StyleAvatar, the first method for semantic stylization of animatable head avatars. StyleAvatar stylizes the avatar representation directly, rather than stylizing its renders. Specifically, given a model that generates the avatar, StyleAvatar first disentangles geometry and texture manipulations, and then stylizes the avatar by fine-tuning a subset of the model's weights. Our method offers several advantages: styles can be described with images or text, the avatar remains animatable, the degree of identity preservation is controllable, and texture and geometry modifications are disentangled. Experiments show that our approach works consistently across skin tones, challenging hairstyles, extreme views, and diverse facial expressions.
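To make the fine-tuning idea concrete, below is a minimal sketch (not the authors' code) of stylizing an avatar by optimizing only a subset of a pretrained model's weights against a style loss computed on its renders, while leaving the rest frozen so the avatar stays animatable. The parameter-naming convention, the render function, and the clip_style_loss callable are assumptions made purely for illustration.

    import torch

    def stylize_avatar(avatar_model, render, clip_style_loss, frames,
                       steps=500, lr=1e-4):
        """Fine-tune a subset of avatar_model's weights toward a target style."""
        # Freeze all weights, then unfreeze only the texture-related ones,
        # so geometry (and hence animation) is left untouched.
        for p in avatar_model.parameters():
            p.requires_grad_(False)
        texture_params = [p for n, p in avatar_model.named_parameters()
                          if "texture" in n]          # assumed naming convention
        for p in texture_params:
            p.requires_grad_(True)

        opt = torch.optim.Adam(texture_params, lr=lr)
        for step in range(steps):
            frame = frames[step % len(frames)]        # expression/view conditioning
            image = render(avatar_model, frame)       # differentiable render of the avatar
            loss = clip_style_loss(image)             # e.g. an image- or text-guided style term
            opt.zero_grad()
            loss.backward()
            opt.step()
        return avatar_model

Selecting which parameters to unfreeze is what separates texture edits from geometry edits in this sketch; swapping the selection predicate (or adding a geometry-aware loss) would trade off identity preservation against stylization strength.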

Related Material


[pdf]
[bibtex]
@InProceedings{Perez_2024_WACV,
    author    = {P\'erez, Juan C. and Nguyen-Phuoc, Thu and Cao, Chen and Sanakoyeu, Artsiom and Simon, Tomas and Arbel\'aez, Pablo and Ghanem, Bernard and Thabet, Ali and Pumarola, Albert},
    title     = {StyleAvatar: Stylizing Animatable Head Avatars},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2024},
    pages     = {8678-8687}
}