HexaGen3D: StableDiffusion is One Step Away from Fast and Diverse Text-to-3D Generation
Abstract
Despite the latest remarkable advances in generative modeling, the efficient generation of high-quality 3D objects from textual prompts remains a difficult task. A key challenge lies in data scarcity: the most extensive 3D datasets encompass merely millions of samples, while their 2D counterparts contain billions of text-image pairs. To address this, we propose a novel approach which harnesses the power of large, pretrained 2D diffusion models. More specifically, our approach, HexaGen3D, fine-tunes a pretrained text-to-image model to jointly predict 6 orthographic projections and the corresponding 3D latent. We then decode these latents to generate a textured mesh. HexaGen3D does not require per-sample optimization and can infer high-quality and diverse objects from textual prompts in 7 seconds, offering significantly better quality-to-latency trade-offs than existing approaches. Furthermore, HexaGen3D demonstrates strong generalization to new objects or compositions.
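For illustration only, below is a minimal, hypothetical sketch of the two-stage pipeline the abstract outlines: a fine-tuned text-to-image UNet jointly denoises six orthographic-view latents together with a 3D latent, which a decoder then turns into a textured mesh. Every name and shape below (the unet, scheduler, and mesh_decoder components, the 7x4x64x64 latent layout) is an assumption made for this sketch, not the authors' released implementation; the actual architecture is described in the full paper.

# Hypothetical sketch of the two-stage pipeline described in the abstract.
# All component names and latent shapes are invented for illustration and do
# not correspond to released HexaGen3D code.
import torch

@torch.no_grad()
def text_to_mesh(prompt, tokenizer, text_encoder, unet, scheduler,
                 mesh_decoder, num_steps=30, device="cuda"):
    """Text -> joint (6 orthographic views + 3D) latents -> textured mesh."""
    # 1. Encode the prompt, as in a standard StableDiffusion pipeline.
    tokens = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
    text_emb = text_encoder(tokens).last_hidden_state

    # 2. Jointly denoise six orthographic-view latents plus one 3D latent.
    #    The 7x4x64x64 layout is a guess made purely for this sketch.
    latents = torch.randn(7, 4, 64, 64, device=device)
    scheduler.set_timesteps(num_steps)  # diffusers-style scheduler interface
    for t in scheduler.timesteps:
        noise_pred = unet(latents, t, encoder_hidden_states=text_emb)
        latents = scheduler.step(noise_pred, t, latents).prev_sample

    # 3. Decode the 3D latent into a textured mesh; no per-sample optimization.
    view_latents, latent_3d = latents[:6], latents[6]
    return mesh_decoder(latent_3d, view_latents)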
Related Material

@InProceedings{Mercier_2025_WACV,
    author    = {Mercier, Antoine and Nakhli, Ramin and Reddy, Mahesh and Yasarla, Rajeev and Cai, Hong and Porikli, Fatih and Berger, Guillaume},
    title     = {HexaGen3D: StableDiffusion is One Step Away from Fast and Diverse Text-to-3D Generation},
    booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
    month     = {February},
    year      = {2025},
    pages     = {1247-1257}
}