ActorsNeRF: Animatable Few-shot Human Rendering with Generalizable NeRFs

Jiteng Mu, Shen Sang, Nuno Vasconcelos, Xiaolong Wang; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 18391-18401

Abstract


While NeRF-based human representations have shown impressive novel view synthesis results, most methods still rely on a large number of images/views for training. In this work, we propose a novel animatable NeRF called ActorsNeRF. It is first pre-trained on diverse human subjects, and then adapted with few-shot monocular video frames for a new actor with unseen poses. Building on previous generalizable NeRFs that share parameters via a ConvNet encoder, ActorsNeRF further adopts two human priors to capture the large variations in human appearance, shape, and pose. Specifically, in the encoded feature space, we first align different human subjects in a category-level canonical space, and then align the same human across different frames in an instance-level canonical space for rendering. We quantitatively and qualitatively demonstrate that ActorsNeRF significantly outperforms the existing state-of-the-art on few-shot generalization to new people and poses on multiple datasets.
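The two-level alignment described above can be illustrated with a toy sketch. This is not the paper's implementation; the rigid inverse transform stands in for inverse skinning (instance-level alignment of a posed frame back to the subject's canonical pose), and the scale/offset normalization stands in for the category-level alignment that maps different subjects into a shared space. All function names and parameters here are hypothetical.

```python
import numpy as np

def to_instance_canonical(points, rot, trans):
    """Map posed 3D points from one frame back to the subject's
    canonical (e.g. T-pose) space. Toy stand-in for inverse
    skinning: inverts p = c @ rot.T + trans using rot^-1 = rot.T."""
    return (points - trans) @ rot

def to_category_canonical(points, scale, offset):
    """Normalize subject-specific shape into a shared category-level
    space so encoded features from different people align."""
    return (points - offset) / scale

# Hypothetical example: one posed point cloud from a single frame.
rng = np.random.default_rng(0)
theta = np.pi / 4
rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                [np.sin(theta),  np.cos(theta), 0.0],
                [0.0,            0.0,           1.0]])
trans = np.array([0.1, 0.2, 0.0])

canonical = rng.normal(size=(5, 3))          # subject's canonical points
posed = canonical @ rot.T + trans            # observed posed points

# Instance-level: undo the per-frame pose.
recovered = to_instance_canonical(posed, rot, trans)
# Category-level: normalize subject scale into the shared space.
shared = to_category_canonical(recovered, scale=1.8, offset=np.zeros(3))

print(np.allclose(recovered, canonical))  # → True
```

In the actual method, features sampled in these canonical spaces condition the NeRF, so pre-training across subjects transfers to a new actor from only a few frames.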

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Mu_2023_ICCV,
    author    = {Mu, Jiteng and Sang, Shen and Vasconcelos, Nuno and Wang, Xiaolong},
    title     = {ActorsNeRF: Animatable Few-shot Human Rendering with Generalizable NeRFs},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {18391-18401}
}