Snapmoji: Instant Generation of Animatable Dual-Stylized Avatars
Supplemental Materials
We present additional results generated by the Snapmoji system. Contents of this gallery are as follows:
3D avatars with facial expressions rendered from novel views
A demo of Snapmoji avatars rendered for augmented reality and on mobile devices
An ablation study on using 3DMM and blendshape features for animation vs. only using blendshape features
Screen captures of the Snapmoji user interfaces for avatar generation and blendshape editing
Video 1a (Dual-Stylized Avatars in 3D):
Dual-stylized avatars and their normals rendered with various facial expressions. The
expressions were captured on an iPhone using ARKit blendshapes. Because the avatars are
driven implicitly by blendshape weights, the same expressions generalize to a wide range
of dual-stylized avatars.
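To make the blendshape-weight interface concrete, here is a minimal sketch of packing ARKit-style coefficients into a fixed-order driving vector. The coefficient names, the packing order, and the clamping step are illustrative assumptions, not Snapmoji's actual code:

```python
import numpy as np

# Hypothetical sketch: ARKit streams named FACS-style coefficients in [0, 1].
# Packing them into a fixed-order vector (and clamping out-of-range values)
# gives an identity-agnostic signal, which is why one captured expression can
# drive many different stylized avatars. The name list is a placeholder subset.
ARKIT_NAMES = ["jawOpen", "mouthSmileLeft", "mouthSmileRight", "eyeBlinkLeft"]

def pack_weights(raw: dict) -> np.ndarray:
    """Pack named coefficients into a fixed-order vector, clamped to [0, 1]."""
    return np.clip([raw.get(name, 0.0) for name in ARKIT_NAMES], 0.0, 1.0)

weights = pack_weights({"jawOpen": 0.7, "mouthSmileLeft": 1.3})
# weights == [0.7, 1.0, 0.0, 0.0]: jaw open, left smile clamped to 1.0,
# all unspecified coefficients defaulting to the neutral value 0.0
```

Any avatar conditioned on this vector would show the same jaw-open, left-smile expression regardless of its style.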
Video 1b (Dual-Stylized Avatars in 3D):
Dual-stylized avatars rendered from the front and back. Although our LGM model takes only
one image as input, it still reconstructs a full 3D avatar.
Videos 2 & 3 (AR Puppeting):
We combine 3DMM and blendshape features to animate our avatars. Alpha-compositing the
avatars onto the original video then lets us puppet them in AR.
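The compositing step is the standard "over" operator; a minimal sketch, assuming the rendered avatar comes with a per-pixel alpha matte (the function and variable names are illustrative, not from the Snapmoji codebase):

```python
import numpy as np

# "Over" compositing sketch: out = alpha * avatar + (1 - alpha) * frame.
# avatar_rgb and frame_rgb are (H, W, 3) float arrays in [0, 1];
# avatar_alpha is an (H, W) matte from the renderer.
def composite_over(avatar_rgb, avatar_alpha, frame_rgb):
    """Blend the rendered avatar onto a video frame pixel-by-pixel."""
    a = avatar_alpha[..., None]              # (H, W, 1) for broadcasting
    return a * avatar_rgb + (1.0 - a) * frame_rgb

frame = np.zeros((2, 2, 3))                  # toy video frame (black)
avatar = np.ones((2, 2, 3))                  # toy rendered avatar (white)
alpha = np.array([[1.0, 0.5],
                  [0.0, 0.0]])
out = composite_over(avatar, alpha, frame)
# out[0, 0] is fully avatar, out[0, 1] a 50/50 blend,
# and the bottom row shows the unmodified video frame.
```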
Videos 2 & 3 (More Dual-Stylized Avatar Puppeting):
These videos also include further examples of dual-stylized avatar puppeting.
Video 4 (Web Rendering Application):
Our avatars, represented by Gaussian splats, are small enough to be rendered in a web
browser at 90-100 FPS on a laptop, or 30-40 FPS on a phone. We can control the avatar's
expression with a face tracker, then render it in 3D. The face tracking is done by
Mediapipe in WebAssembly, while the avatar is animated using the gsplat.js library. All
computation is done on the client.
Video 5 (Mobile Rendering Application):
Because our AR demo runs in a browser, it is inherently cross-platform. On an iPhone 13
Pro, the avatars render at 30-40 FPS. Note that the true frame rate is higher than shown
in the video due to screen-recording overhead.
Video 6 (Ablation Study on 3DMM Tracking):
This video demonstrates the effect of using 3DMM features in conjunction with FACS
blendshape weights. The combination enhances the expressiveness and fidelity of avatar
animation, accommodating both realistic and stylized facial expressions.
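One way to picture combining the two signals is as a single conditioning vector for the animation module; this is only a hedged sketch, and the dimensions (64 3DMM expression coefficients, 52 FACS weights) and fusion-by-concatenation are assumptions for illustration, not the paper's actual interface:

```python
import numpy as np

# Illustrative fusion of tracker signals: 3DMM expression coefficients capture
# realistic, fine-grained facial deformation, while FACS blendshape weights
# carry stylized, semantically named expressions. Concatenating them gives the
# animation model access to both. All sizes here are placeholder assumptions.
def fuse_features(mm_expr: np.ndarray, facs_weights: np.ndarray) -> np.ndarray:
    """Build one conditioning vector from 3DMM and FACS tracking features."""
    return np.concatenate([mm_expr, facs_weights])

cond = fuse_features(np.zeros(64), np.zeros(52))
# cond.shape == (116,): one vector conditioning each animation frame
```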
Video 7 (Avatar Generation UI):
A screen capture of the Snapmoji avatar generator interface. The interface includes
controls to balance identity preservation and style in the generated avatars. Due to the
system's speed, a user can rapidly prototype different avatar designs to their liking.
The avatar generator is built with Gradio.
Video 8 (Blendshape Editor UI):
A screen capture of the Snapmoji blendshape editor. The editor allows one to preview a
Snapmoji avatar in 3D and modify facial expressions with blendshapes. Blendshape weights
give precise control over regions of the face, and linear combinations of different
blendshapes can represent a variety of expressions. The editor is built with Viser.
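The linear combination the editor exposes can be sketched as the classic blendshape model, where a mesh is the neutral shape plus a weighted sum of per-blendshape offsets; the vertex counts, shape names, and offsets below are illustrative placeholders, not Snapmoji's actual assets:

```python
import numpy as np

# Classic linear blendshape model: mesh = neutral + sum_i w_i * delta_i,
# where delta_i is blendshape i's offset from the neutral mesh. Localized
# deltas are what give per-region control; mixing weights spans expressions.
neutral = np.zeros((4, 3))                      # toy mesh: 4 vertices, xyz
deltas = {                                      # per-blendshape vertex offsets
    "smile": np.full((4, 3), 0.1),
    "jaw_open": np.full((4, 3), -0.2),
}

def apply_blendshapes(weights: dict) -> np.ndarray:
    """Deform the neutral mesh by a weighted sum of blendshape offsets."""
    mesh = neutral.copy()
    for name, w in weights.items():
        mesh += np.clip(w, 0.0, 1.0) * deltas[name]
    return mesh

mesh = apply_blendshapes({"smile": 0.5, "jaw_open": 1.0})
# every vertex moves by 0.5 * 0.1 + 1.0 * (-0.2) = -0.15 along each axis
```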