Paint3D: Paint Anything 3D with Lighting-Less Texture Diffusion Models

Xianfang Zeng, Xin Chen, Zhongqi Qi, Wen Liu, Zibo Zhao, Zhibin Wang, Bin Fu, Yong Liu, Gang Yu; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 4252-4262

Abstract


This paper presents Paint3D, a novel coarse-to-fine generative framework capable of producing high-resolution, lighting-less, and diverse 2K UV texture maps for untextured 3D meshes, conditioned on text or image inputs. The key challenge addressed is generating high-quality textures without embedded illumination information, which allows the textures to be re-lit or re-edited within modern graphics pipelines. To achieve this, our method first leverages a pre-trained depth-aware 2D diffusion model to generate view-conditional images and perform multi-view texture fusion, producing an initial coarse texture map. However, because 2D models cannot fully represent 3D shapes or suppress lighting effects, the coarse texture map exhibits incomplete areas and illumination artifacts. To resolve this, we train separate UV Inpainting and UVHD diffusion models specialized for the shape-aware refinement of incomplete areas and the removal of illumination artifacts. Through this coarse-to-fine process, Paint3D can produce high-quality 2K UV textures that maintain semantic consistency while being lighting-less, significantly advancing the state-of-the-art in texturing 3D objects.
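
To make the coarse-to-fine pipeline described in the abstract more concrete, below is a minimal sketch in Python. It is based solely on the abstract: every name (coarse_stage, fine_stage, depth_diffusion, backproject_to_uv, uv_inpainting, uvhd_refine), the texture resolution, and the view count are hypothetical stand-ins, not the authors' actual code or API.

```python
"""Hypothetical sketch of a Paint3D-style coarse-to-fine texturing loop.
All callables are injected so the structure runs end to end with dummy
stand-ins; they do NOT reflect the paper's real implementation."""
import numpy as np
from typing import Callable

# Assumed array shapes (illustrative only).
Image = np.ndarray      # (H, W, 3) rendered or generated view
DepthMap = np.ndarray   # (H, W) per-view depth of the untextured mesh
UVTexture = np.ndarray  # (uv_size, uv_size, 3) UV texture map
UVMask = np.ndarray     # (uv_size, uv_size) 1 where texels are still empty


def coarse_stage(
    render_depth: Callable[[int], DepthMap],
    depth_diffusion: Callable[[DepthMap, str], Image],
    backproject_to_uv: Callable[[Image, int, UVTexture, UVMask], None],
    prompt: str,
    num_views: int = 4,
    uv_size: int = 2048,
) -> tuple[UVTexture, UVMask]:
    """Stage 1: sample view-conditional images with a depth-aware 2D
    diffusion model and fuse them into an initial (coarse) UV texture."""
    texture = np.zeros((uv_size, uv_size, 3), dtype=np.float32)
    empty = np.ones((uv_size, uv_size), dtype=np.float32)
    for view in range(num_views):
        depth = render_depth(view)                      # depth rendered from this viewpoint
        image = depth_diffusion(depth, prompt)          # text- and depth-conditioned sample
        backproject_to_uv(image, view, texture, empty)  # multi-view texture fusion into UV space
    return texture, empty


def fine_stage(
    texture: UVTexture,
    empty: UVMask,
    uv_inpainting: Callable[[UVTexture, UVMask], UVTexture],
    uvhd_refine: Callable[[UVTexture], UVTexture],
) -> UVTexture:
    """Stage 2: fill occluded/unseen UV regions, then remove baked-in
    illumination and enhance detail in UV space."""
    texture = uv_inpainting(texture, empty)  # shape-aware inpainting of incomplete areas
    return uvhd_refine(texture)              # lighting-less, high-definition refinement


if __name__ == "__main__":
    # Dummy stand-ins so the sketch runs without any real models.
    H = W = 512
    coarse, mask = coarse_stage(
        render_depth=lambda v: np.zeros((H, W), dtype=np.float32),
        depth_diffusion=lambda d, p: np.random.rand(H, W, 3).astype(np.float32),
        backproject_to_uv=lambda img, v, tex, m: None,
        prompt="a rusty medieval shield",
    )
    final = fine_stage(coarse, mask, lambda t, m: t, lambda t: t)
    print("final texture shape:", final.shape)  # (2048, 2048, 3)
```

The two-stage split mirrors the abstract's reasoning: view-space diffusion gives rich appearance but leaves gaps and baked-in lighting, so the second stage operates directly on the UV map, where inpainting and illumination removal can be applied globally.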

Related Material


@InProceedings{Zeng_2024_CVPR,
    author    = {Zeng, Xianfang and Chen, Xin and Qi, Zhongqi and Liu, Wen and Zhao, Zibo and Wang, Zhibin and Fu, Bin and Liu, Yong and Yu, Gang},
    title     = {Paint3D: Paint Anything 3D with Lighting-Less Texture Diffusion Models},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {4252-4262}
}