VINECS: Video-based Neural Character Skinning
Abstract
Rigging and skinning clothed human avatars is a challenging task that traditionally requires a lot of manual work and expertise. Recent methods addressing it either generalize across different characters or focus on capturing the dynamics of a single character observed under different pose configurations. However, the former methods typically predict only static skinning weights, which perform poorly for highly articulated poses, while the latter either require dense 3D character scans in different poses or cannot generate an explicit mesh with vertex correspondence over time. To address these challenges, we propose a fully automated approach for creating a fully rigged character with pose-dependent skinning weights, learned solely from multi-view video. To this end, we first acquire a rigged template, which is then statically skinned. Next, a coordinate-based MLP learns a skinning weight field parameterized over the position in a canonical pose space and the respective pose. Moreover, we introduce a pose- and view-dependent appearance field that allows us to differentiably render and supervise the posed mesh using multi-view imagery. We show that our approach outperforms the state of the art while not relying on dense 4D scans. More details can be found on our project page.
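To make the core idea concrete, below is a minimal sketch of a pose-conditioned skinning weight field driving linear blend skinning, as described in the abstract. This is an illustrative reconstruction, not the authors' released code: the network sizes, the pose encoding, and all names (SkinningWeightField, linear_blend_skinning) are assumptions, and the supervision via differentiable rendering is omitted.

```python
# Illustrative sketch (assumptions, not the authors' implementation):
# a coordinate-based MLP maps a point in canonical pose space plus a
# pose code to per-bone weights, which then drive standard linear
# blend skinning (LBS).
import torch
import torch.nn as nn

class SkinningWeightField(nn.Module):
    def __init__(self, num_bones: int, pose_dim: int, hidden: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + pose_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_bones),
        )

    def forward(self, x_canonical: torch.Tensor, pose: torch.Tensor) -> torch.Tensor:
        # x_canonical: (V, 3) vertex positions in the canonical pose space
        # pose: (pose_dim,) pose parameters, broadcast to every vertex
        pose = pose.expand(x_canonical.shape[0], -1)
        logits = self.mlp(torch.cat([x_canonical, pose], dim=-1))
        # Softmax keeps the weights non-negative and summing to one per vertex
        return torch.softmax(logits, dim=-1)

def linear_blend_skinning(x_canonical, weights, bone_transforms):
    # x_canonical: (V, 3), weights: (V, B), bone_transforms: (B, 4, 4)
    x_h = torch.cat([x_canonical, torch.ones_like(x_canonical[:, :1])], dim=-1)
    # Blend the per-bone transformation matrices with the predicted weights,
    # then apply the blended transform to each homogeneous vertex
    blended = torch.einsum("vb,bij->vij", weights, bone_transforms)
    x_posed = torch.einsum("vij,vj->vi", blended, x_h)
    return x_posed[:, :3]
```

In the paper's pipeline, the posed mesh produced this way would be rendered differentiably with the pose- and view-dependent appearance field and compared against the multi-view images, so the skinning weights can be learned without dense 4D scans.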
Related Material

[bibtex]
@InProceedings{Liao_2024_CVPR,
  author    = {Liao, Zhouyingcheng and Golyanik, Vladislav and Habermann, Marc and Theobalt, Christian},
  title     = {VINECS: Video-based Neural Character Skinning},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2024},
  pages     = {1377-1387}
}