Heterogeneous Avatar Synthesis Based on Disentanglement of Topology and Rendering

Nan Gao, Zhi Zeng, GuiXuan Zhang, ShuWu Zhang; Proceedings of the Asian Conference on Computer Vision (ACCV), 2022, pp. 1455-1470

Abstract


There are obvious structural and color discrepancies among heterogeneous domains. In this paper, we explore the challenging heterogeneous avatar synthesis (HAS) task, which involves both topology and rendering transfer. HAS transfers the topology as well as the rendering style of the reference face to the source face, producing high-fidelity heterogeneous avatars. Specifically, first, we utilize a Rendering Transfer Network (RT-Net) to render the grayscale source face based on the color palette of the reference face. The grayscale features and color style are injected into RT-Net via adaptive feature modulation. Second, we apply a Topology Transfer Network (TT-Net) to perform heterogeneous facial topology transfer, where the image content produced by RT-Net is transformed via AdaIN controlled by a heterogeneous identity embedding. Comprehensive experimental results show that the disentanglement of rendering and topology is beneficial to the HAS task, and that our HASNet performs comparably to other state-of-the-art methods.
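The abstract describes both RT-Net and TT-Net as injecting a style or identity embedding into feature maps through adaptive feature modulation (AdaIN). Below is a minimal sketch of that mechanism, assuming a PyTorch-style implementation; the class name IdentityAdaIN and the parameters embed_dim and num_channels are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn


def adain(content_feat: torch.Tensor, style_mean: torch.Tensor,
          style_std: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Adaptive instance normalization: normalize per-channel content
    statistics, then rescale/shift them with style statistics."""
    b, c = content_feat.shape[:2]
    mean = content_feat.view(b, c, -1).mean(dim=2).view(b, c, 1, 1)
    std = content_feat.view(b, c, -1).std(dim=2).view(b, c, 1, 1)
    normalized = (content_feat - mean) / (std + eps)
    return normalized * style_std.view(b, c, 1, 1) + style_mean.view(b, c, 1, 1)


class IdentityAdaIN(nn.Module):
    """Hypothetical modulation layer: predicts AdaIN scale/shift from a style
    or identity embedding and applies them to a content feature map."""

    def __init__(self, embed_dim: int, num_channels: int):
        super().__init__()
        # Affine mapping from the embedding to per-channel (mean, std) pairs.
        self.to_params = nn.Linear(embed_dim, 2 * num_channels)

    def forward(self, content_feat: torch.Tensor,
                embedding: torch.Tensor) -> torch.Tensor:
        style_mean, style_std = self.to_params(embedding).chunk(2, dim=1)
        return adain(content_feat, style_mean, style_std)
```

In this reading, RT-Net would feed a color-style embedding of the reference face into such a layer to recolor the grayscale source features, while TT-Net would feed a heterogeneous identity embedding to modulate the topology of RT-Net's output; the exact network placement of these layers is not specified in the abstract.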

Related Material


[pdf]
[bibtex]
@InProceedings{Gao_2022_ACCV,
    author    = {Gao, Nan and Zeng, Zhi and Zhang, GuiXuan and Zhang, ShuWu},
    title     = {Heterogeneous Avatar Synthesis Based on Disentanglement of Topology and Rendering},
    booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV)},
    month     = {December},
    year      = {2022},
    pages     = {1455-1470}
}