StyleMotif: Multi-Modal Motion Stylization using Style-Content Cross Fusion

Ziyu Guo, Young Yoon Lee, Joseph Liu, Yizhak Ben-Shabat, Victor Zordan, Mubbasir Kapadia; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2025, pp. 13349-13359

Abstract


We present StyleMotif, a novel Stylized Motion Latent Diffusion model that generates motion conditioned on both content and style from multiple modalities. Unlike existing approaches that either focus on generating diverse motion content or on transferring style from reference sequences, StyleMotif seamlessly synthesizes motion across a wide range of content while incorporating stylistic cues from multi-modal inputs, including motion, text, image, video, and audio. To achieve this, we introduce a style-content cross fusion mechanism and align a style encoder with a pre-trained multi-modal model, ensuring that the generated motion accurately captures the reference style while preserving realism. Extensive experiments demonstrate that our framework surpasses existing methods in stylized motion generation and exhibits emergent capabilities for multi-modal motion stylization, enabling more nuanced motion synthesis. Project Page: https://stylemotif.github.io.
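To give a rough intuition for the style-content cross fusion idea described above, the sketch below shows one common way such a fusion can be realized: content features attend to style tokens via cross-attention and the attended style is mixed back in residually. This is an illustrative assumption only, not the paper's actual architecture; all names (`style_content_cross_fusion`, the toy shapes) are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def style_content_cross_fusion(content, style):
    """Hypothetical cross-attention fusion: queries come from the
    content features, keys/values from the style tokens, and the
    attended style is added back to the content residually."""
    d_k = content.shape[-1]
    attn = softmax(content @ style.T / np.sqrt(d_k))  # (T_content, T_style)
    return content + attn @ style                     # residual fusion

# Toy example: 8 content frames and 4 style tokens in a 16-dim latent space.
rng = np.random.default_rng(0)
content = rng.standard_normal((8, 16))
style = rng.standard_normal((4, 16))
fused = style_content_cross_fusion(content, style)
print(fused.shape)  # (8, 16)
```

Because the style tokens enter only through keys and values, the fused output keeps the temporal length of the content sequence, which matches the paper's goal of stylizing motion without altering its content structure.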

Related Material


[bibtex]
@InProceedings{Guo_2025_ICCV,
    author    = {Guo, Ziyu and Lee, Young Yoon and Liu, Joseph and Ben-Shabat, Yizhak and Zordan, Victor and Kapadia, Mubbasir},
    title     = {StyleMotif: Multi-Modal Motion Stylization using Style-Content Cross Fusion},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2025},
    pages     = {13349-13359}
}