A Neural Rendering Framework for Free-Viewpoint Relighting

Zhang Chen, Anpei Chen, Guli Zhang, Chengyuan Wang, Yu Ji, Kiriakos N. Kutulakos, Jingyi Yu; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, pp. 5599-5610

Abstract


We present a novel Relightable Neural Renderer (RNR) for simultaneous view synthesis and relighting from multi-view image inputs. Existing neural rendering (NR) does not explicitly model the physical rendering process and hence has limited capability for relighting. RNR instead models image formation in terms of environment lighting, object intrinsic attributes, and a light transport function (LTF), each corresponding to a learnable component. In particular, incorporating a physically based rendering process not only enables relighting but also improves the quality of view synthesis. Comprehensive experiments on synthetic and real data show that RNR provides a practical and effective solution for free-viewpoint relighting.
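
To make the decomposition concrete, the sketch below renders a single pixel as an integral of a light transport term against environment lighting, which is one standard reading of the formulation summarized in the abstract. The names (render_pixel, env_lighting, light_transport) and the uniform Monte Carlo estimator are illustrative assumptions, not the paper's implementation; in an RNR-style model the transport term and the intrinsic attributes it absorbs would be learnable, while the environment lighting can be estimated or supplied.

import numpy as np

def render_pixel(x, omega_o, env_lighting, light_transport, n_samples=128):
    """Monte Carlo estimate of outgoing radiance at surface point x toward omega_o.

    env_lighting(omega_i): RGB radiance arriving from direction omega_i.
    light_transport(x, omega_i, omega_o): RGB transport term; in an RNR-style
    model this would be a learned component that also absorbs the object's
    intrinsic attributes. Both are stand-in callables here, so this is only
    a sketch of the decomposition, not the paper's renderer.
    """
    # Draw uniformly distributed incoming directions on the unit sphere.
    dirs = np.random.normal(size=(n_samples, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

    radiance = np.zeros(3)
    for omega_i in dirs:
        radiance += light_transport(x, omega_i, omega_o) * env_lighting(omega_i)

    # Uniform-sphere pdf is 1 / (4*pi), hence the 4*pi / n_samples factor.
    return radiance * (4.0 * np.pi / n_samples)

Under this decomposition, relighting amounts to swapping env_lighting while keeping the learned transport fixed, and free-viewpoint synthesis corresponds to varying omega_o.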

Related Material


@InProceedings{Chen_2020_CVPR,
author = {Chen, Zhang and Chen, Anpei and Zhang, Guli and Wang, Chengyuan and Ji, Yu and Kutulakos, Kiriakos N. and Yu, Jingyi},
title = {A Neural Rendering Framework for Free-Viewpoint Relighting},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2020}
}