Efficient 3D Implicit Head Avatar with Mesh-anchored Hash Table Blendshapes

Supplementary Material ⋅ CVPR 2024 ⋅ PDF

Please see the webpage for more results.


Comparison with State-of-the-art Approaches.

We compare with PointAvatar [1], INSTA [2], NeRFBlendshape [3], and MonoAvatar [4].

Our method achieves rendering quality on par with the best of these approaches while maintaining real-time rendering speed.






More Results of Our Approach.

Labels - Left: Input Driving Video, Center: Rendered Avatar, Right: Rendered Depth








Real-time Live Demo of Avatar Driving (Stereo Rendering).

Labels - Left: Driving Video, Center: Rendered Avatar, Right: Viewing in Headset

We build a real-time demo based on our method: we track the facial performance of an actor with a webcam and render the avatar on a workstation with an RTX 3080 Ti. The rendered stereo pair is then displayed on a headset and on a webpage. The tracking results and rendered videos are streamed over the internet, which may introduce latency between driving and rendering, as well as occasional stutter in the videos.
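
For reference, a minimal sketch of the webcam-tracking and streaming side of such a demo is shown below. It uses hypothetical stand-ins (`track_expression`, the host name, and the port) rather than our released components; the actual system additionally renders the stereo pair on the workstation and forwards it to the headset.

```python
# Minimal sketch of the capture side of a live driving demo (assumptions:
# `track_expression` stands in for a real face tracker, and the host/port
# are illustrative; neither is part of the paper's released material).
import socket
import struct

import cv2
import numpy as np


def track_expression(frame: np.ndarray) -> np.ndarray:
    """Stand-in tracker: returns per-frame expression coefficients."""
    # A real system would fit a 3DMM / blendshape rig here; zeros are a placeholder.
    return np.zeros(64, dtype=np.float32)


def main() -> None:
    cap = cv2.VideoCapture(0)  # webcam capturing the actor's performance
    # Stream tracked coefficients to the rendering workstation over TCP;
    # sending over the internet is what introduces the latency noted above.
    sock = socket.create_connection(("render-workstation.example", 9000))
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            payload = track_expression(frame).tobytes()
            # Length-prefix each packet so the receiver can reframe the stream.
            sock.sendall(struct.pack("!I", len(payload)) + payload)
    finally:
        cap.release()
        sock.close()


if __name__ == "__main__":
    main()
```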





References

[1] Yufeng Zheng, Wang Yifan, Gordon Wetzstein, Michael J. Black, and Otmar Hilliges. PointAvatar: Deformable point-based head avatars from videos. In CVPR, 2023. [link]
[2] Wojciech Zielonka, Timo Bolkart, and Justus Thies. Instant volumetric head avatars. In CVPR, 2023. [link]
[3] Xuan Gao, Chenglai Zhong, Jun Xiang, Yang Hong, Yudong Guo, and Juyong Zhang. Reconstructing personalized semantic facial NeRF models from monocular video. In SIGGRAPH Asia, 2022. [link]
[4] Ziqian Bai, Feitong Tan, Zeng Huang, Kripasindhu Sarkar, Danhang Tang, Di Qiu, Abhimitra Meka, Ruofei Du, Mingsong Dou, Sergio Orts-Escolano, et al. Learning personalized high quality volumetric head avatars from monocular RGB videos. In CVPR, 2023. [link]