@InProceedings{Gorelik_2025_WACV,
  author    = {Gorelik, Liat Sless and Fan, Yuchen and Armstrong, Omri and Iandola, Forrest N and Li, Yilei and Lifshitz, Ita and Ranjan, Rakesh},
  title     = {Make-A-Texture: Fast Shape-Aware 3D Texture Generation in 3 Seconds},
  booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
  month     = {February},
  year      = {2025},
  pages     = {4872-4881}
}
Make-A-Texture: Fast Shape-Aware 3D Texture Generation in 3 Seconds
Abstract
We present Make-A-Texture, a new framework that efficiently synthesizes high-resolution texture maps from textual prompts for given 3D geometries. Our approach progressively generates textures that are consistent across multiple viewpoints, using a depth-aware inpainting diffusion model over an optimized sequence of viewpoints determined by an automatic view-selection algorithm. A significant feature of our method is its remarkable efficiency: it achieves full texture generation within an end-to-end runtime of just 3.07 seconds on a single NVIDIA H100 GPU, significantly outperforming existing methods. This acceleration is achieved through optimizations in the diffusion model and a specialized backprojection method. Moreover, our method reduces artifacts in the backprojection phase by selectively masking out non-frontal faces and the internal faces of open-surfaced objects. Experimental results demonstrate that Make-A-Texture matches or exceeds the quality of other state-of-the-art methods. Our work significantly improves the applicability and practicality of texture generation models for real-world 3D content creation, including interactive creation and text-guided texture editing.
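The masking of non-frontal faces described above can be illustrated with a minimal sketch: faces whose normals make too large an angle with the view direction are excluded from backprojection. The function name, threshold value, and API below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def frontal_face_mask(face_normals, view_dir, cos_threshold=0.3):
    """Mask faces whose normals point sufficiently toward the camera.

    face_normals: (F, 3) array of unit face normals.
    view_dir: (3,) unit vector from the surface toward the camera.
    cos_threshold: illustrative cutoff on the cosine of the angle
    between the normal and the view direction (assumption, not the
    paper's value).
    """
    cos_angle = face_normals @ view_dir   # cosine between each normal and the view direction
    return cos_angle > cos_threshold      # True = frontal enough to receive texture

normals = np.array([[0.0, 0.0, 1.0],    # facing the camera
                    [0.0, 0.0, -1.0],   # facing away (internal/backside)
                    [1.0, 0.0, 0.0]])   # grazing angle
mask = frontal_face_mask(normals, np.array([0.0, 0.0, 1.0]))
# mask == [True, False, False]
```

Only faces passing the mask would be written to during backprojection, which is one way to avoid smearing texture onto backsides and interiors of open surfaces.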