FlexNeRF: Photorealistic Free-Viewpoint Rendering of Moving Humans From Sparse Views

Vinoj Jayasundara, Amit Agrawal, Nicolas Heron, Abhinav Shrivastava, Larry S. Davis; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023, pp. 21118-21127

Abstract


We present FlexNeRF, a method for photorealistic free-viewpoint rendering of humans in motion from monocular videos. Our approach works well with sparse views, a challenging scenario when the subject exhibits fast or complex motion. We propose a novel approach that jointly optimizes a canonical time and pose configuration, with a pose-dependent motion field and pose-independent temporal deformations complementing each other. Thanks to our novel temporal and cyclic consistency constraints, along with additional losses on intermediate representations such as segmentation, our approach provides high-quality outputs as the observed views become sparser. We empirically demonstrate that our method significantly outperforms the state of the art on public benchmark datasets as well as a self-captured fashion dataset. The project page is available at: https://flex-nerf.github.io/.
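
To make the composition described in the abstract concrete, the following is a minimal sketch (PyTorch, not the authors' released code) of how a pose-dependent motion field and a pose-independent temporal deformation could be combined to map observation-space points to a canonical configuration, together with a simple cyclic consistency term. All class names, tensor dimensions, and the plain-MLP design are illustrative assumptions, not the actual FlexNeRF architecture.

# Minimal sketch (assumptions throughout): two small deformation MLPs whose
# offsets are composed, plus a cycle loss that warps points to canonical space
# and back. This illustrates the idea from the abstract, not the paper's code.

import torch
import torch.nn as nn


class DeformationMLP(nn.Module):
    """Small MLP predicting a 3D offset for a query point given a conditioning code."""

    def __init__(self, cond_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, x: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # x: (N, 3) query points; cond: (cond_dim,) pose or time embedding.
        return self.net(torch.cat([x, cond.expand(x.shape[0], -1)], dim=-1))


class ObservationToCanonical(nn.Module):
    """Maps observation-space points toward a canonical configuration by composing
    a pose-dependent motion field with a pose-independent temporal deformation."""

    def __init__(self, pose_dim: int = 72, time_dim: int = 8):
        super().__init__()
        self.pose_field = DeformationMLP(pose_dim)   # pose-dependent motion
        self.time_field = DeformationMLP(time_dim)   # pose-independent temporal residual

    def forward(self, x: torch.Tensor, pose: torch.Tensor, t_embed: torch.Tensor) -> torch.Tensor:
        x_pose = x + self.pose_field(x, pose)                      # coarse, pose-driven warp
        x_canonical = x_pose + self.time_field(x_pose, t_embed)   # fine temporal correction
        return x_canonical


def cyclic_consistency_loss(warp_fwd: nn.Module, warp_bwd: nn.Module,
                            x: torch.Tensor, pose: torch.Tensor,
                            t_embed: torch.Tensor) -> torch.Tensor:
    """Encourages warping to canonical space and back to return the original points."""
    x_can = warp_fwd(x, pose, t_embed)
    x_back = warp_bwd(x_can, pose, t_embed)
    return ((x_back - x) ** 2).mean()


# Example usage with random data (shapes are assumptions):
# warp_fwd, warp_bwd = ObservationToCanonical(), ObservationToCanonical()
# x, pose, t = torch.rand(1024, 3), torch.rand(72), torch.rand(8)
# loss = cyclic_consistency_loss(warp_fwd, warp_bwd, x, pose, t)

In the actual method these fields are optimized jointly with the radiance field and with additional losses (e.g., temporal consistency and segmentation); the sketch only shows how the two deformations can complement each other.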

Related Material


[pdf] [supp] [arXiv]
@InProceedings{Jayasundara_2023_CVPR,
    author    = {Jayasundara, Vinoj and Agrawal, Amit and Heron, Nicolas and Shrivastava, Abhinav and Davis, Larry S.},
    title     = {FlexNeRF: Photorealistic Free-Viewpoint Rendering of Moving Humans From Sparse Views},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2023},
    pages     = {21118-21127}
}