AG3D: Learning to Generate 3D Avatars from 2D Image Collections

Zijian Dong, Xu Chen, Jinlong Yang, Michael J. Black, Otmar Hilliges, Andreas Geiger; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 14916-14927

Abstract


While progress in 2D generative models of human appearance has been rapid, many applications require 3D avatars that can be animated and rendered. Unfortunately, most existing methods for learning generative models of 3D humans with diverse shape and appearance require 3D training data, which is limited and expensive to acquire. The key to progress is hence to learn generative models of 3D avatars from abundant unstructured 2D image collections. However, learning realistic and complete 3D appearance and geometry in this under-constrained setting remains challenging, especially in the presence of loose clothing such as dresses. In this paper, we propose a new adversarial generative model of realistic 3D people from 2D images. Our method captures the shape and deformation of the body and loose clothing by adopting a holistic 3D generator and integrating an efficient and flexible articulation module. To improve realism, we train our model using multiple discriminators while also integrating geometric cues in the form of predicted 2D normal maps. We experimentally find that our method outperforms previous 3D- and articulation-aware methods in terms of geometry and appearance. We validate the effectiveness of our model and the importance of each component via systematic ablation studies.
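The multi-discriminator training described in the abstract can be summarized schematically as a weighted sum of adversarial terms. The notation below is illustrative rather than taken from the paper: $D_k$ ranges over the discriminators (e.g., full image and predicted normal map), $\pi_k$ denotes the rendering/extraction of the corresponding 2D signal, $f$ is a generic GAN loss function (e.g., softplus in a non-saturating formulation), and the weights $\lambda_k$ are hypothetical:

```latex
% Sketch of a combined adversarial objective with multiple
% discriminators (notation illustrative, not from the paper).
% G: 3D avatar generator; D_k: k-th discriminator;
% pi_k: renders the 2D input for D_k (RGB image or normal map).
\begin{aligned}
\mathcal{L}\big(G, \{D_k\}\big)
  &= \sum_{k} \lambda_k \,
     \mathbb{E}_{\mathbf{I} \sim p_{\mathrm{data}}}
     \Big[ f\big(D_k(\pi_k(\mathbf{I}))\big) \Big] \\
  &\quad + \sum_{k} \lambda_k \,
     \mathbb{E}_{\mathbf{z} \sim p(\mathbf{z})}
     \Big[ f\big({-}D_k(\pi_k(G(\mathbf{z})))\big) \Big]
\end{aligned}
```

Under this reading, the normal-map discriminator supplies the geometric cue mentioned in the abstract: it penalizes generated surfaces whose rendered normals are distinguishable from normals predicted on real images.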

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Dong_2023_ICCV,
    author    = {Dong, Zijian and Chen, Xu and Yang, Jinlong and Black, Michael J. and Hilliges, Otmar and Geiger, Andreas},
    title     = {AG3D: Learning to Generate 3D Avatars from 2D Image Collections},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {14916-14927}
}