DyNCA: Real-Time Dynamic Texture Synthesis Using Neural Cellular Automata

Ehsan Pajouheshgar, Yitao Xu, Tong Zhang, Sabine Süsstrunk; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023, pp. 20742-20751

Abstract


Current Dynamic Texture Synthesis (DyTS) models can synthesize realistic videos. However, they require a slow iterative optimization process to synthesize a single fixed-size short video, and they do not offer any post-training control over the synthesis process. We propose Dynamic Neural Cellular Automata (DyNCA), a framework for real-time and controllable dynamic texture synthesis. Our method is built upon the recently introduced NCA models and can synthesize infinitely long and arbitrary-size realistic video textures in real-time. We quantitatively and qualitatively evaluate our model and show that our synthesized videos appear more realistic than the existing results. We improve the SOTA DyTS performance by 2 to 4 orders of magnitude. Moreover, our model offers several real-time video controls including motion speed, motion direction, and an editing brush tool. We exhibit our trained models in an online interactive demo that runs on local hardware and is accessible on personal computers and smartphones.
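For readers unfamiliar with the NCA models the abstract refers to, the sketch below illustrates the generic NCA update rule that such methods build on: each cell in a grid perceives its neighborhood through fixed convolution filters, a small per-cell MLP computes a residual update, and cells fire stochastically. This is a minimal, illustrative NumPy sketch of a standard NCA step, not the authors' DyNCA implementation; all function and parameter names (`nca_step`, `fire_rate`, the weight shapes) are assumptions for illustration.

```python
import numpy as np

def nca_step(state, w1, b1, w2, b2, fire_rate=0.5, rng=None):
    """One update of a toy Neural Cellular Automaton (illustrative only;
    not the DyNCA model from the paper).

    state: (H, W, C) grid, one C-dimensional vector per cell.
    """
    rng = np.random.default_rng() if rng is None else rng

    # Perception: each cell sees its own state plus Sobel-filtered
    # gradients of its 3x3 neighborhood (a common NCA choice).
    sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]) / 8.0
    sobel_y = sobel_x.T

    def conv2d(img, k):
        # 'same' 3x3 convolution with zero padding, applied per channel.
        H, W, _ = img.shape
        pad = np.pad(img, ((1, 1), (1, 1), (0, 0)))
        out = np.zeros_like(img)
        for i in range(3):
            for j in range(3):
                out += k[i, j] * pad[i:i + H, j:j + W, :]
        return out

    percept = np.concatenate(
        [state, conv2d(state, sobel_x), conv2d(state, sobel_y)], axis=-1
    )

    # A small per-cell MLP maps the perception vector to a residual update.
    hidden = np.maximum(percept @ w1 + b1, 0.0)  # ReLU layer
    delta = hidden @ w2 + b2

    # Stochastic update: each cell fires independently with prob fire_rate,
    # which desynchronizes the grid and makes the dynamics more robust.
    mask = rng.random(state.shape[:2])[..., None] < fire_rate
    return state + delta * mask
```

Iterating this step indefinitely is what allows NCA-based synthesis to produce arbitrarily long sequences on grids of any size: the rule is local and size-agnostic, so the same learned weights apply to every cell at every frame.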

Related Material


BibTeX

@InProceedings{Pajouheshgar_2023_CVPR,
    author    = {Pajouheshgar, Ehsan and Xu, Yitao and Zhang, Tong and S\"usstrunk, Sabine},
    title     = {DyNCA: Real-Time Dynamic Texture Synthesis Using Neural Cellular Automata},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2023},
    pages     = {20742-20751}
}