Supplementary Material ⋅ CVPR 2024
Please see the webpage for more results.
We compare with PointAvatar [1], INSTA [2], NeRFBlendshape [3], and MonoAvatar [4].
Our method achieves rendering quality among the best while maintaining real-time rendering speed.
Labels - Left: Input Driving Video, Center: Rendered Avatar, Right: Rendered Depth
Labels - Left: Driving Video, Center: Rendered Avatar, Right: Viewing in Headset
We build a real-time demo based on our method: we track the facial performance of the actor with a webcam and render our avatar on a workstation with an RTX 3080 Ti. Finally, we display the rendered stereo pair on a headset and a webpage. The tracking results and rendered videos are streamed over the internet, which may introduce latency between the driving and the rendering, as well as occasional stutter in the videos.
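To illustrate one way such a tracking stream could be wired up, here is a minimal sketch of a wire format for one tracking packet, carrying a timestamp (usable for measuring the driving-to-rendering latency mentioned above) plus a set of facial coefficients. The function names and packet layout are our own assumptions for illustration, not the authors' implementation.

```python
import struct
import time

# Hypothetical wire format for one tracking packet:
# little-endian double timestamp, uint32 count, then N float32 coefficients.
def pack_tracking(coeffs, ts=None):
    ts = time.time() if ts is None else ts
    return struct.pack(f"<dI{len(coeffs)}f", ts, len(coeffs), *coeffs)

def unpack_tracking(buf):
    ts, n = struct.unpack_from("<dI", buf)
    coeffs = struct.unpack_from(f"<{n}f", buf, struct.calcsize("<dI"))
    return ts, list(coeffs)
```

On receipt, the renderer can estimate end-to-end latency as `time.time() - ts`; network jitter on this path would account for the latency and occasional stutter described above.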