Learning to Generate Textures on 3D Meshes

Amit Raj, Cusuh Ham, Connelly Barnes, Vladimir Kim, Jingwan Lu, James Hays; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2019, pp. 32-38

Abstract


Recent years have seen a great deal of work on photorealistic neural image synthesis from 2D image datasets. However, only a few works exploit 3D shape information to aid image synthesis. To this end, we leverage data from both 2D image datasets and 3D model corpora to generate textured 3D models. We propose a framework that generates textures for meshes from multiview images. The framework first uses 2.5D information rendered from the 3D models, along with user inputs, to produce an intermediate view-dependent representation. These intermediate representations are then used to generate realistic textures for particular views in an unpaired manner. Finally, a differentiable renderer combines the generated multiview textures into a single textured mesh. We demonstrate realistic texture synthesis results on cars.
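As a rough illustration of the three-stage pipeline described in the abstract, the sketch below mocks it up in plain PyTorch. All module names, channel counts, and the fusion step are assumptions made for this sketch, not the authors' implementation; in particular, the paper's differentiable renderer is replaced by a simple visibility-weighted scatter of per-view textures into a shared UV map, and the networks are placeholder convolutional stacks rather than the architectures used in the paper.

# Hypothetical sketch of the described pipeline (all names and shapes are assumptions).
import torch
import torch.nn as nn

class ViewRepresentationNet(nn.Module):
    """Maps a 2.5D rendering (e.g. depth + normals) plus user hints
    to an intermediate view-dependent representation."""
    def __init__(self, in_ch=7, out_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, out_ch, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

class TextureGenerator(nn.Module):
    """Per-view generator that turns an intermediate representation
    into a realistic texture image (trained in an unpaired manner)."""
    def __init__(self, in_ch=3, out_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, out_ch, 3, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

def fuse_views_into_uv(view_textures, uv_coords, visibility, uv_size=256):
    """Naive stand-in for the differentiable-renderer fusion step:
    scatter each view's texture into a shared UV map, weighted by visibility.
    view_textures: (V, 3, H, W); uv_coords: (V, H, W, 2) in [0, 1];
    visibility: (V, H, W) per-pixel weights."""
    V, C, H, W = view_textures.shape
    uv_map = torch.zeros(C, uv_size * uv_size)
    weight = torch.zeros(1, uv_size * uv_size)
    for v in range(V):
        idx = (uv_coords[v].clamp(0, 1) * (uv_size - 1)).long()   # (H, W, 2)
        flat = (idx[..., 1] * uv_size + idx[..., 0]).reshape(-1)  # (H*W,)
        w = visibility[v].reshape(1, -1)
        tex = view_textures[v].reshape(C, -1)
        uv_map.index_add_(1, flat, tex * w)   # accumulate weighted colors
        weight.index_add_(1, flat, w)         # accumulate weights
    uv_map = uv_map / weight.clamp(min=1e-6)
    return uv_map.view(C, uv_size, uv_size)

if __name__ == "__main__":
    # Random stand-in data for 4 views of a mesh (e.g. a car).
    V, H, W = 4, 128, 128
    rendered_25d = torch.rand(V, 7, H, W)      # depth + normals + user hints
    rep_net, tex_net = ViewRepresentationNet(), TextureGenerator()
    reps = rep_net(rendered_25d)               # intermediate view-dependent reps
    textures = tex_net(reps)                   # per-view realistic textures
    uv = torch.rand(V, H, W, 2)                # per-pixel UV coords from the mesh
    vis = torch.rand(V, H, W)                  # per-pixel visibility weights
    texture_atlas = fuse_views_into_uv(textures, uv, vis)
    print(texture_atlas.shape)                 # torch.Size([3, 256, 256])

In a real implementation, the per-view generators would be trained with an unpaired objective (e.g. adversarial and cycle-consistency losses), and the final fusion would be backpropagated through a true differentiable renderer; both are omitted here for brevity.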

Related Material


@InProceedings{Raj_2019_CVPR_Workshops,
author = {Raj, Amit and Ham, Cusuh and Barnes, Connelly and Kim, Vladimir and Lu, Jingwan and Hays, James},
title = {Learning to Generate Textures on 3D Meshes},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2019},
pages = {32-38}
}