Fine-Grained Controllable Video Generation via Object Appearance and Context
Abstract
While text-to-video generation shows state-of-the-art results, fine-grained output control remains challenging for users relying solely on natural language prompts. In this work, we present FACTOR for fine-grained controllable video generation. FACTOR provides an intuitive interface through which users can manipulate the trajectory and appearance of individual objects in conjunction with a text prompt. We propose a unified framework that integrates these control signals into an existing text-to-video model. Our approach consists of a multimodal condition module with a joint encoder, control-attention layers, and an appearance augmentation mechanism. This design enables FACTOR to generate videos that closely align with detailed user specifications. Extensive experiments on standard benchmarks and user-provided inputs demonstrate that FACTOR achieves a notable improvement in controllability over competitive baselines.
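The abstract only names the main components (a joint encoder, control-attention layers, and an appearance augmentation mechanism) without giving their design. As a rough illustration of the general idea, the sketch below shows one plausible way per-object trajectory and appearance signals could be encoded into control tokens and injected into a text-to-video backbone's features via cross-attention. The class names, tensor shapes, and layer choices are assumptions made for this sketch and are not the paper's implementation.

```python
# Minimal sketch (not the authors' code) of control-token encoding and
# control-attention injection, under assumed shapes and module names.
import torch
import torch.nn as nn


class JointControlEncoder(nn.Module):
    """Encodes per-object trajectory boxes and appearance embeddings into one
    sequence of control tokens (hypothetical design for illustration)."""

    def __init__(self, d_model=512, box_dim=4):
        super().__init__()
        self.box_proj = nn.Linear(box_dim, d_model)   # trajectory boxes -> tokens
        self.app_proj = nn.Linear(d_model, d_model)   # appearance features -> tokens
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True),
            num_layers=2,
        )

    def forward(self, boxes, appearance):
        # boxes: (B, T*N, 4) normalized per-frame, per-object boxes
        # appearance: (B, N, d_model) reference features per object
        tokens = torch.cat([self.box_proj(boxes), self.app_proj(appearance)], dim=1)
        return self.encoder(tokens)


class ControlAttention(nn.Module):
    """Cross-attention from video tokens to control tokens, added residually on
    top of a backbone block's output (a sketch of a 'control-attention layer')."""

    def __init__(self, d_model=512):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, num_heads=8, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, video_tokens, control_tokens):
        out, _ = self.attn(self.norm(video_tokens), control_tokens, control_tokens)
        return video_tokens + out  # residual injection of the control signal


if __name__ == "__main__":
    B, T, N, L, d = 2, 8, 3, 256, 512
    enc = JointControlEncoder(d)
    ctrl_attn = ControlAttention(d)
    boxes = torch.rand(B, T * N, 4)                 # object trajectories
    appearance = torch.randn(B, N, d)               # reference-image features
    video_tokens = torch.randn(B, L, d)             # stand-in for backbone features
    control = enc(boxes, appearance)
    print(ctrl_attn(video_tokens, control).shape)   # torch.Size([2, 256, 512])
```

In this sketch the backbone is untouched and only the residual cross-attention path carries the control signal; how FACTOR actually fuses the conditions with the text-to-video model is described in the paper, not here.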
Related Material
[pdf] [arXiv] [bibtex]
@InProceedings{Huang_2025_WACV,
    author    = {Huang, Hsin-Ping and Su, Yu-Chuan and Sun, Deqing and Jiang, Lu and Jia, Xuhui and Zhu, Yukun and Yang, Ming-Hsuan},
    title     = {Fine-Grained Controllable Video Generation via Object Appearance and Context},
    booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
    month     = {February},
    year      = {2025},
    pages     = {3698-3708}
}