Learning Generative Models of Textured 3D Meshes From Real-World Images

Dario Pavllo, Jonas Kohler, Thomas Hofmann, Aurelien Lucchi; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 13879-13889

Abstract

Recent advances in differentiable rendering have sparked an interest in learning generative models of textured 3D meshes from image collections. These models natively disentangle pose and appearance, enable downstream applications in computer graphics, and improve the ability of generative models to understand the concept of image formation. Although there has been prior work on learning such models from collections of 2D images, these approaches require a delicate pose estimation step that exploits annotated keypoints, thereby restricting their applicability to a few specific datasets. In this work, we propose a GAN framework for generating textured triangle meshes without relying on such annotations. We show that the performance of our approach is on par with prior work that relies on ground-truth keypoints, and more importantly, we demonstrate the generality of our method by setting new baselines on a larger set of categories from ImageNet - for which keypoints are not available - without any class-specific hyperparameter tuning. We release our code at https://github.com/dariopavllo/textured-3d-gan.
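
For readers who want a concrete picture of this family of models, the sketch below shows one plausible shape for the generator in such a GAN framework: a latent code is decoded into per-vertex displacements of a fixed template mesh plus a UV texture image, which a differentiable renderer (omitted here) would rasterize so that an ordinary 2D image discriminator can judge the output. This is a minimal illustrative sketch, not the paper's actual architecture; the class name MeshTextureGenerator, the layer sizes, the vertex count, and the texture resolution are all assumptions made for the example.

```python
# Minimal illustrative sketch (NOT the paper's implementation): a GAN
# generator that predicts a textured mesh as (vertex offsets, UV texture).
# Template mesh, layer sizes, and texture resolution are assumed values.
import torch
import torch.nn as nn

class MeshTextureGenerator(nn.Module):
    def __init__(self, latent_dim=128, num_vertices=642, tex_res=64):
        super().__init__()
        self.num_vertices = num_vertices
        # Shape branch: latent code -> per-vertex 3D displacements.
        self.shape_mlp = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, num_vertices * 3),
        )
        # Texture branch: latent code -> RGB UV texture via upconvolutions.
        self.tex_fc = nn.Linear(latent_dim, 256 * 4 * 4)
        blocks, channels, size = [], 256, 4
        while size < tex_res:
            blocks += [
                nn.ConvTranspose2d(channels, channels // 2, 4, stride=2, padding=1),
                nn.ReLU(),
            ]
            channels //= 2
            size *= 2
        blocks += [nn.Conv2d(channels, 3, 3, padding=1), nn.Tanh()]
        self.tex_deconv = nn.Sequential(*blocks)

    def forward(self, z, template_vertices):
        # template_vertices: (num_vertices, 3), e.g. a fixed sphere template.
        offsets = self.shape_mlp(z).view(-1, self.num_vertices, 3)
        vertices = template_vertices.unsqueeze(0) + offsets
        h = self.tex_fc(z).view(-1, 256, 4, 4)
        texture = self.tex_deconv(h)  # (B, 3, tex_res, tex_res) in [-1, 1]
        return vertices, texture

# Usage: sample latent codes and produce textured meshes. In a full
# pipeline these outputs would pass through a differentiable renderer
# and the rendered images through a 2D discriminator.
z = torch.randn(2, 128)
template = torch.randn(642, 3)  # placeholder for a unit-sphere template
verts, tex = MeshTextureGenerator()(z, template)
print(verts.shape, tex.shape)  # torch.Size([2, 642, 3]) torch.Size([2, 3, 64, 64])
```

In pipelines of this kind, the discriminator sees rendered images rather than raw mesh parameters, which is what lets 3D shape and texture be learned from collections of 2D images alone.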

Related Material

[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Pavllo_2021_ICCV,
    author    = {Pavllo, Dario and Kohler, Jonas and Hofmann, Thomas and Lucchi, Aurelien},
    title     = {Learning Generative Models of Textured 3D Meshes From Real-World Images},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {13879-13889}
}