MoRF: Mobile Realistic Fullbody Avatars From a Monocular Video

Renat Bashirov, Alexey Larionov, Evgeniya Ustinova, Mikhail Sidorenko, David Svitov, Ilya Zakharkin, Victor Lempitsky; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2024, pp. 3545-3555

Abstract


We present a system to create Mobile Realistic Fullbody (MoRF) avatars. MoRF avatars are rendered in real time on mobile devices, are learned from monocular videos, and have high realism. We use SMPL-X as a proxy geometry and render it with deferred neural rendering (DNR), i.e., a neural texture combined with an image-to-image network. We improve on prior work by overfitting per-frame warping fields in the neural texture space, which better aligns the training signal across frames. We also refine the SMPL-X mesh fitting procedure to improve the overall avatar quality. In comparisons with other monocular video-based avatar systems, MoRF avatars achieve higher image sharpness and temporal consistency. Participants of our user study also preferred avatars generated by MoRF.
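The sketch below illustrates the general idea described in the abstract: a shared neural texture is sampled at UV coordinates rasterized from the SMPL-X proxy, a small per-frame warping field offsets those texture coordinates, and an image-to-image network maps the sampled features to RGB. It is a minimal PyTorch illustration under stated assumptions, not the authors' implementation; names such as `uv_map`, `num_frames`, `TranslationNet`, and all layer sizes are hypothetical placeholders.

```python
# Minimal sketch (not the authors' code) of deferred neural rendering with a
# per-frame warp of the neural texture, assuming a precomputed SMPL-X UV
# rasterization per frame.
import torch
import torch.nn as nn
import torch.nn.functional as F


class WarpedNeuralTexture(nn.Module):
    def __init__(self, channels=16, tex_size=512, num_frames=1000, warp_size=64):
        super().__init__()
        # Shared neural texture, optimized jointly over all training frames.
        self.texture = nn.Parameter(torch.randn(1, channels, tex_size, tex_size) * 0.01)
        # Per-frame 2D warping fields in texture space (kept at low resolution
        # and upsampled before use); these are overfit to each training frame.
        self.warps = nn.Parameter(torch.zeros(num_frames, 2, warp_size, warp_size))

    def forward(self, uv_map, frame_idx):
        # uv_map: (B, H, W, 2) texture coordinates in [-1, 1] rasterized from SMPL-X.
        b, h, w, _ = uv_map.shape
        warp = F.interpolate(self.warps[frame_idx], size=(h, w),
                             mode='bilinear', align_corners=False)
        # Offset the sampling grid by the per-frame warp before sampling the texture.
        grid = uv_map + warp.permute(0, 2, 3, 1)
        tex = self.texture.expand(b, -1, -1, -1)
        return F.grid_sample(tex, grid, mode='bilinear', align_corners=False)


class TranslationNet(nn.Module):
    # Stand-in for the image-to-image network that turns rasterized
    # neural-texture features into an RGB image.
    def __init__(self, in_ch=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, feats):
        return self.net(feats)


# Usage: sample the warped texture at each frame's UVs, then translate to RGB.
texture_module = WarpedNeuralTexture(num_frames=4)
renderer = TranslationNet()
uv = torch.rand(2, 256, 256, 2) * 2 - 1        # placeholder SMPL-X UV rasterization
frame_idx = torch.tensor([0, 3])               # indices of the training frames
rgb = renderer(texture_module(uv, frame_idx))  # (2, 3, 256, 256)
```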

Related Material


@InProceedings{Bashirov_2024_WACV,
    author    = {Bashirov, Renat and Larionov, Alexey and Ustinova, Evgeniya and Sidorenko, Mikhail and Svitov, David and Zakharkin, Ilya and Lempitsky, Victor},
    title     = {MoRF: Mobile Realistic Fullbody Avatars From a Monocular Video},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2024},
    pages     = {3545-3555}
}