Multi-View Neural Human Rendering

Minye Wu, Yuehao Wang, Qiang Hu, Jingyi Yu; The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, pp. 1682-1691

Abstract


We present an end-to-end Neural Human Renderer (NHR) for dynamic human captures in the multi-view setting. NHR adopts PointNet++ for feature extraction (FE) to enable robust 3D correspondence matching on low-quality, dynamic 3D reconstructions. To render new views, we map 3D features onto the target camera as a 2D feature map and employ an anti-aliased CNN to handle holes and noise. Newly synthesized views from NHR can be further used to construct visual hulls to handle textureless and/or dark regions such as black clothing. Comprehensive experiments show NHR significantly outperforms state-of-the-art neural and image-based rendering techniques, especially on hands, hair, noses, feet, etc.
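The abstract's core rendering step is mapping per-point 3D features onto a target camera as a 2D feature map. A minimal sketch of that idea follows, using plain NumPy and a simple nearest-point (z-buffer) splat; the function name `splat_features` and all parameters are illustrative assumptions, not the authors' implementation, and the per-point features would come from a network such as PointNet++ rather than being given directly.

```python
import numpy as np

def splat_features(points, feats, K, R, t, H, W):
    """Project 3D point features into a target camera as a 2D feature map.

    Illustrative sketch (not the paper's code):
      points: (N, 3) world-space points; feats: (N, C) per-point features
      (e.g. from a point-cloud network); K: (3, 3) intrinsics; R, t:
      world-to-camera extrinsics. The nearest point wins each pixel
      (z-buffering); empty pixels stay zero, to be filled by a 2D CNN.
    """
    cam = points @ R.T + t            # world -> camera coordinates
    z = cam[:, 2]
    valid = z > 1e-6                  # keep points in front of the camera
    cam, z, feats = cam[valid], z[valid], feats[valid]
    uv = cam @ K.T                    # perspective projection
    u = np.round(uv[:, 0] / z).astype(int)
    v = np.round(uv[:, 1] / z).astype(int)
    inside = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    u, v, z, feats = u[inside], v[inside], z[inside], feats[inside]

    fmap = np.zeros((H, W, feats.shape[1]))
    zbuf = np.full((H, W), np.inf)
    # Visit points near-to-far so each pixel keeps its closest point.
    for i in np.argsort(z):
        if z[i] < zbuf[v[i], u[i]]:
            zbuf[v[i], u[i]] = z[i]
            fmap[v[i], u[i]] = feats[i]
    return fmap
```

In the full pipeline this feature map (with its holes and sampling noise) would be passed to the anti-aliased CNN for final image synthesis.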

Related Material


[bibtex]
@InProceedings{Wu_2020_CVPR,
author = {Wu, Minye and Wang, Yuehao and Hu, Qiang and Yu, Jingyi},
title = {Multi-View Neural Human Rendering},
booktitle = {The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2020}
}