AnyMoLe: Any Character Motion In-betweening Leveraging Video Diffusion Models
Abstract
Despite recent advancements in learning-based motion in-betweening, a key limitation has been overlooked: the requirement for character-specific datasets. In this work, we introduce AnyMoLe, a novel method that addresses this limitation by leveraging video diffusion models to generate motion in-between frames for arbitrary characters without external data. Our approach employs a two-stage frame generation process to enhance contextual understanding. Furthermore, to bridge the domain gap between real-world and rendered character animations, we introduce ICAdapt, a fine-tuning technique for video diffusion models. Additionally, we propose a "motion-video mimicking" optimization technique, enabling seamless motion generation for characters with arbitrary joint structures using 2D and 3D-aware features. AnyMoLe significantly reduces data dependency while generating smooth and realistic transitions, making it applicable to a wide range of motion in-betweening tasks.
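The abstract only names the "motion-video mimicking" step, so the following is a minimal sketch of the general pattern it suggests: optimize per-frame joint parameters of a rigged character so that features rendered from the character track features extracted from generated video frames. It assumes a PyTorch-style gradient loop; ToyCharacter, feature_2d, and mimic are hypothetical placeholders, and the toy forward kinematics and random "video" features stand in for the paper's actual renderer, 2D and 3D-aware features, and diffusion-generated frames.

```python
# Minimal sketch of a "motion-video mimicking" style optimization loop.
# Everything here is a hypothetical stand-in, not the authors' implementation.

import torch
import torch.nn.functional as F


class ToyCharacter(torch.nn.Module):
    """Stand-in for an arbitrary rigged character: maps per-frame joint
    angles to joint positions via a toy planar forward-kinematics chain."""

    def __init__(self, num_frames: int, num_joints: int = 8):
        super().__init__()
        # Per-frame, per-joint rotation angles are the variables we optimize.
        self.angles = torch.nn.Parameter(torch.zeros(num_frames, num_joints))
        self.bone_lengths = torch.ones(num_joints)

    def forward(self) -> torch.Tensor:
        # Accumulate angles along the chain, then accumulate bone offsets.
        cumulative = torch.cumsum(self.angles, dim=-1)
        x = torch.cumsum(self.bone_lengths * torch.cos(cumulative), dim=-1)
        y = torch.cumsum(self.bone_lengths * torch.sin(cumulative), dim=-1)
        return torch.stack([x, y], dim=-1)  # (frames, joints, 2)


def feature_2d(points: torch.Tensor) -> torch.Tensor:
    """Hypothetical 2D-aware feature: here just the flattened joint layout."""
    return points.flatten(start_dim=1)


def mimic(target_features: torch.Tensor, num_frames: int, steps: int = 200):
    """Fit joint angles so rendered features track the video-derived features."""
    character = ToyCharacter(num_frames)
    optimizer = torch.optim.Adam(character.parameters(), lr=1e-2)
    for _ in range(steps):
        optimizer.zero_grad()
        rendered = feature_2d(character())
        # Data term: match per-frame features extracted from the video frames.
        loss = F.mse_loss(rendered, target_features)
        # Smoothness term: discourage jittery in-between motion.
        loss = loss + 0.1 * (character.angles[1:] - character.angles[:-1]).pow(2).mean()
        loss.backward()
        optimizer.step()
    return character.angles.detach()


if __name__ == "__main__":
    frames, joints = 16, 8
    # Pretend these came from frames produced by the video diffusion model.
    fake_video_features = torch.randn(frames, joints * 2)
    angles = mimic(fake_video_features, frames)
    print(angles.shape)  # torch.Size([16, 8])
```

In the paper's setting, the target features would come from frames generated by the fine-tuned video diffusion model, and the optimized variables would be the pose parameters of the user's arbitrary rig; the toy chain above only illustrates the fit-by-gradient-descent pattern.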
Related Material

[pdf] [supp] [arXiv] [bibtex]

@InProceedings{Yun_2025_CVPR,
  author    = {Yun, Kwan and Hong, Seokhyeon and Kim, Chaelin and Noh, Junyong},
  title     = {AnyMoLe: Any Character Motion In-betweening Leveraging Video Diffusion Models},
  booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
  month     = {June},
  year      = {2025},
  pages     = {27838-27848}
}