GHNeRF: Learning Generalizable Human Features with Efficient Neural Radiance Fields

Arnab Dey, Di Yang, Rohith Agaram, Antitza Dantcheva, Andrew I. Comport, Srinath Sridhar, Jean Martinet; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2024, pp. 2812-2821

Abstract


Recent advances in Neural Radiance Fields (NeRF) have demonstrated promising results in 3D scene representations, including 3D human representations. However, these representations often lack information about the underlying human pose and structure, which is crucial for AR/VR applications and games. In this paper, we introduce a novel approach, termed GHNeRF, designed to address these limitations by learning the 2D/3D joint locations of human subjects with a NeRF representation. GHNeRF uses a pre-trained 2D encoder, streamlined to extract essential human features from 2D images, which are then incorporated into the NeRF framework in order to encode human biomechanical features. This allows our network to simultaneously learn biomechanical features, such as joint locations, along with human geometry and texture. To assess the effectiveness of our method, we conduct a comprehensive comparison with state-of-the-art human NeRF techniques and joint estimation algorithms. Our results show that GHNeRF achieves state-of-the-art results in near real-time.
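The abstract describes conditioning a NeRF on features from a pre-trained 2D encoder so the network can jointly predict geometry, color, and joint locations. A minimal NumPy sketch of that conditioning idea follows; all sizes, the nearest-neighbour feature lookup, and the random stand-in "MLP" weights are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def positional_encoding(x, n_freqs=4):
    # Standard NeRF-style frequency encoding of a 3D point.
    out = [x]
    for i in range(n_freqs):
        out.append(np.sin((2.0 ** i) * x))
        out.append(np.cos((2.0 ** i) * x))
    return np.concatenate(out)  # shape: (3 + 3 * 2 * n_freqs,)

def sample_feature(feat_map, uv):
    # Look up encoder features at the point's projected pixel (uv in [0, 1]^2).
    # Nearest-neighbour here for brevity; bilinear sampling is typical.
    h, w, _ = feat_map.shape
    u = int(np.clip(round(uv[0] * (w - 1)), 0, w - 1))
    v = int(np.clip(round(uv[1] * (h - 1)), 0, h - 1))
    return feat_map[v, u]

# Hypothetical 16x16 feature map with 8 channels from a frozen 2D encoder.
feat_map = rng.standard_normal((16, 16, 8))

point = np.array([0.1, -0.2, 0.5])   # 3D sample along a camera ray
uv = np.array([0.3, 0.7])            # its normalized 2D projection

# Condition the NeRF input on the image features at the projection.
cond_input = np.concatenate([positional_encoding(point),
                             sample_feature(feat_map, uv)])

# Random linear head standing in for the NeRF MLP: it maps the conditioned
# input to density (1), RGB (3), and K joint-heatmap logits.
K = 17
W = rng.standard_normal((cond_input.size, 1 + 3 + K))
out = cond_input @ W
sigma, rgb, joint_logits = out[0], out[1:4], out[4:]
```

The key design point the abstract implies is that the same conditioned representation feeds both the radiance outputs and the joint-estimation head, so pose is learned alongside geometry and texture rather than in a separate network.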

Related Material


@InProceedings{Dey_2024_CVPR,
    author    = {Dey, Arnab and Yang, Di and Agaram, Rohith and Dantcheva, Antitza and Comport, Andrew I. and Sridhar, Srinath and Martinet, Jean},
    title     = {GHNeRF: Learning Generalizable Human Features with Efficient Neural Radiance Fields},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2024},
    pages     = {2812-2821}
}