3D Photo Stylization: Learning To Generate Stylized Novel Views From a Single Image

Fangzhou Mu, Jian Wang, Yicheng Wu, Yin Li; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, pp. 16273-16282

Abstract


Visual content creation has attracted soaring interest given its applications in mobile photography and AR/VR. Style transfer and single-image 3D photography, two representative tasks, have so far evolved independently. In this paper, we make a connection between the two and address the challenging task of 3D photo stylization: generating stylized novel views from a single image given an arbitrary style. Our key intuition is that style transfer and view synthesis must be jointly modeled. To this end, we propose a deep model that learns geometry-aware content features for stylization from a point cloud representation of the scene, resulting in high-quality stylized images that are consistent across views. Further, we introduce a novel training protocol that enables learning using only 2D images. We demonstrate the superiority of our method via extensive qualitative and quantitative studies, and showcase key applications of our method in light of the growing demand for 3D content creation from 2D image assets.

Related Material


@InProceedings{Mu_2022_CVPR,
    author    = {Mu, Fangzhou and Wang, Jian and Wu, Yicheng and Li, Yin},
    title     = {3D Photo Stylization: Learning To Generate Stylized Novel Views From a Single Image},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2022},
    pages     = {16273-16282}
}