@InProceedings{Liao_2025_ICCV,
  author    = {Liao, Zhanfeng and Tu, Hanzhang and Peng, Cheng and Zhang, Hongwen and Zhou, Boyao and Liu, Yebin},
  title     = {HADES: Human Avatar with Dynamic Explicit Hair Strands},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  month     = {October},
  year      = {2025},
  pages     = {12318-12327}
}
HADES: Human Avatar with Dynamic Explicit Hair Strands
Abstract
We introduce HADES, the first framework to seamlessly integrate dynamic hair into human avatars. HADES represents hair as strands bound to 3D Gaussians, with strand roots attached to the scalp. By modeling inertial and velocity-aware motion, HADES simulates realistic hair dynamics that naturally align with body movements. To enhance avatar fidelity, we incorporate multi-scale data and address color inconsistencies across cameras with a lightweight MLP-based correction module that generates color correction matrices for consistent color tones. In addition, we resolve rendering artifacts, such as hair dilation during zoom-out, through a 2D Mip filter and physically constrained hair radii. Furthermore, a temporal fusion module ensures temporal coherence by modeling historical motion states. Experimental results demonstrate that HADES achieves high-fidelity avatars with realistic hair dynamics, outperforming existing state-of-the-art solutions in realism and robustness.
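The abstract's cross-camera color correction could, in principle, look like the following minimal sketch: a tiny MLP maps a learned per-camera embedding to a 3x4 affine color correction matrix (3x3 gain plus bias) applied to rendered colors. All names, sizes, and the residual-around-identity initialization here are assumptions for illustration, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_color_mlp(num_cameras, embed_dim=16, hidden=32):
    """Hypothetical per-camera color-correction MLP parameters.

    Each camera gets a learned embedding; a two-layer MLP maps it to the
    12 entries of a 3x4 affine color correction matrix. The last layer is
    zero-initialized so the correction starts exactly at identity.
    """
    return {
        "embed": rng.normal(0.0, 0.1, (num_cameras, embed_dim)),
        "W1": rng.normal(0.0, 0.1, (embed_dim, hidden)),
        "b1": np.zeros(hidden),
        "W2": np.zeros((hidden, 12)),  # zero init -> identity correction
        "b2": np.zeros(12),
    }

def correct_colors(params, cam_id, rgb):
    """Apply the predicted 3x4 matrix to an (N, 3) array of rendered colors.

    The 3x3 block is a residual around the identity, so an untrained
    network leaves colors unchanged; training then learns per-camera
    gains and biases that harmonize color tones across cameras.
    """
    h = np.maximum(params["embed"][cam_id] @ params["W1"] + params["b1"], 0.0)
    M = (h @ params["W2"] + params["b2"]).reshape(3, 4)
    A, b = M[:, :3] + np.eye(3), M[:, 3]
    return rgb @ A.T + b

params = init_color_mlp(num_cameras=4)
rgb = rng.random((8, 3))
corrected = correct_colors(params, 0, rgb)  # identity at initialization
```

Predicting a low-dimensional matrix rather than per-pixel colors keeps the module lightweight and makes the correction a global, invertible tone adjustment per camera.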