Surface-Aligned Neural Radiance Fields for Controllable 3D Human Synthesis

Tianhan Xu, Yasuhiro Fujita, Eiichi Matsumoto; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, pp. 15883-15892

Abstract
We propose a new method for reconstructing controllable implicit 3D human models from sparse multi-view RGB videos. Our method defines the neural scene representation in terms of points on the surface of a human body mesh and signed distances from that surface. We identify an indistinguishability issue that arises when a point in 3D space is mapped to its nearest surface point on the mesh for learning a surface-aligned neural scene representation: near mesh edges and vertices, distinct points in space can map to the same surface point and distance. To address this issue, we propose projecting a point onto the mesh surface using barycentric interpolation with modified vertex normals. Experiments on the ZJU-MoCap and Human3.6M datasets show that our approach achieves higher quality in novel-view and novel-pose synthesis than existing methods. We also demonstrate that our method readily supports control of body shape and clothing. Project page: https://pfnet-research.github.io/surface-aligned-nerf/.
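
To make the surface-aligned coordinates concrete, below is a minimal Python sketch (assuming the trimesh library; the function name surface_aligned_coords is hypothetical) that maps query points to a nearest surface point plus a signed distance along a barycentrically interpolated vertex normal. Note that this is an illustration, not the paper's implementation: it uses trimesh's stock nearest-point projection and vertex normals, whereas the paper's projection modifies the vertex normals so the mapping stays well defined near mesh edges and vertices.

import numpy as np
import trimesh

def surface_aligned_coords(mesh: trimesh.Trimesh, points: np.ndarray):
    """Sketch of surface-aligned coordinates: (surface point p, signed height h)."""
    # Nearest point on the mesh surface and the triangle containing it.
    closest, _, tri_id = trimesh.proximity.closest_point(mesh, points)

    # Barycentric coordinates of each projected point within its triangle.
    bary = trimesh.triangles.points_to_barycentric(
        mesh.triangles[tri_id], closest)

    # Interpolate the three corner vertex normals with the barycentric weights.
    corner_normals = mesh.vertex_normals[mesh.faces[tri_id]]  # shape (n, 3, 3)
    normals = np.einsum("ni,nij->nj", bary, corner_normals)
    normals /= np.linalg.norm(normals, axis=-1, keepdims=True)

    # Signed distance of each query point along its interpolated normal.
    h = np.einsum("nj,nj->n", points - closest, normals)
    return closest, h

The pair (p, h) returned here is the kind of coordinate the radiance field is conditioned on: because p lives on the body mesh, reposing the mesh carries the learned appearance along with it, which is what makes the novel-pose, body-shape, and clothing control described above possible.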

Related Material
[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Xu_2022_CVPR,
    author    = {Xu, Tianhan and Fujita, Yasuhiro and Matsumoto, Eiichi},
    title     = {Surface-Aligned Neural Radiance Fields for Controllable 3D Human Synthesis},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2022},
    pages     = {15883-15892}
}