Neural Reconstruction of Relightable Human Model from Monocular Video

Wenzhang Sun, Yunlong Che, Han Huang, Yandong Guo; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 397-407

Abstract


Creating relightable and animatable human characters from monocular video at low cost is a critical task for digital human modeling and virtual reality applications. The task is complex due to intricate articulated motion, a wide range of ambient lighting conditions, and pose-dependent clothing deformations. In this paper, we introduce a novel self-supervised framework that takes a monocular video of a moving human as input and produces a 3D neural representation that can be rendered in novel poses under arbitrary lighting conditions. Our framework decomposes the dynamic human under varying illumination into canonical-space neural fields that account for geometry and spatially varying BRDF material properties. We further introduce pose-driven deformation fields that enable bidirectional mapping between canonical and observation space. Leveraging the proposed appearance decomposition and deformation fields, our framework learns in a self-supervised manner. Finally, based on the pose-driven deformation, recovered appearance, and physically-based rendering, the reconstructed human can be relit and explicitly driven with novel poses. We demonstrate significant performance improvements over previous works and provide compelling examples of relighting from monocular videos of moving humans in challenging, uncontrolled capture scenarios.
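The decomposition sketched in the abstract can be pictured as three canonical-space components: a geometry field, a spatially varying BRDF field, and a pose-driven deformation that warps observation-space points into canonical space before shading. The PyTorch sketch below illustrates that structure only; all module names, layer widths, the SMPL-style 72-dimensional pose vector, and the toy single-light Lambertian shading are illustrative assumptions, not the paper's actual architecture or renderer.

# Minimal sketch of a canonical-space decomposition with pose-driven
# deformation and physically-based (here: toy Lambertian) shading.
# Everything below is an assumption-laden illustration, not the method.
import torch
import torch.nn as nn
import torch.nn.functional as F

def mlp(in_dim, out_dim, hidden=128, depth=4):
    layers, d = [], in_dim
    for _ in range(depth - 1):
        layers += [nn.Linear(d, hidden), nn.ReLU()]
        d = hidden
    layers.append(nn.Linear(d, out_dim))
    return nn.Sequential(*layers)

class CanonicalGeometry(nn.Module):
    """Signed distance field defined in canonical (pose-neutral) space."""
    def __init__(self):
        super().__init__()
        self.net = mlp(3, 1)
    def forward(self, x_canonical):           # (N, 3) -> (N, 1) SDF values
        return self.net(x_canonical)

class CanonicalBRDF(nn.Module):
    """Spatially varying material: diffuse albedo plus a roughness scalar."""
    def __init__(self):
        super().__init__()
        self.net = mlp(3, 4)                   # 3 albedo channels + 1 roughness
    def forward(self, x_canonical):
        out = self.net(x_canonical)
        return torch.sigmoid(out[..., :3]), torch.sigmoid(out[..., 3:4])

class PoseDeformation(nn.Module):
    """Pose-conditioned warp from observation space to canonical space.

    A real system would use inverse linear blend skinning plus a learned
    non-rigid residual; here a single MLP stands in for both (assumption).
    """
    def __init__(self, pose_dim=72):           # e.g. an SMPL pose vector
        super().__init__()
        self.net = mlp(3 + pose_dim, 3)
    def forward(self, x_observed, pose):
        pose = pose.expand(x_observed.shape[0], -1)
        return x_observed + self.net(torch.cat([x_observed, pose], dim=-1))

def shade_lambertian(albedo, normal, light_dir, light_rgb):
    """Toy physically-based term: diffuse shading under one distant light."""
    cos = (normal * light_dir).sum(-1, keepdim=True).clamp(min=0.0)
    return albedo * light_rgb * cos

# Usage: warp sampled points into canonical space, query geometry and
# material there, then shade with an arbitrary (novel) light direction.
geo, brdf, deform = CanonicalGeometry(), CanonicalBRDF(), PoseDeformation()
x_obs = torch.rand(1024, 3)                    # points sampled along camera rays
pose = torch.zeros(1, 72)                      # a novel driving pose
x_can = deform(x_obs, pose)
sdf = geo(x_can)
normal = F.normalize(                          # surface normal = normalized SDF gradient
    torch.autograd.grad(sdf.sum(), x_can, create_graph=True)[0], dim=-1)
albedo, roughness = brdf(x_can)
rgb = shade_lambertian(albedo, normal,
                       torch.tensor([0.0, 0.0, 1.0]), torch.tensor([1.0, 1.0, 1.0]))

Because geometry and materials live in canonical space while lighting enters only at the shading step, the same learned fields can be re-rendered under new poses and new illumination, which is the relighting/animation property the abstract claims.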

Related Material


BibTeX
@InProceedings{Sun_2023_ICCV,
    author    = {Sun, Wenzhang and Che, Yunlong and Huang, Han and Guo, Yandong},
    title     = {Neural Reconstruction of Relightable Human Model from Monocular Video},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {397-407}
}