Task Agnostic Restoration of Natural Video Dynamics

Muhammad Kashif Ali, Dongjin Kim, Tae Hyun Kim; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 13534-13544

Abstract


In many video restoration/translation tasks, image processing operations are naively extended to the video domain by processing each frame independently, disregarding the temporal connection between video frames. This disregard often leads to severe temporal inconsistencies. State-of-the-art (SOTA) techniques that address these inconsistencies rely on the availability of unprocessed videos to implicitly siphon consistent video dynamics and use them to restore the temporal consistency of frame-wise processed videos, which often jeopardizes the translation effect. We propose a general framework for this task that learns to infer and utilize consistent motion dynamics from inconsistent videos to mitigate temporal flicker while preserving perceptual quality, for both temporally neighboring and relatively distant frames, without requiring the raw videos at test time. The proposed framework produces SOTA results on two benchmark datasets, DAVIS and videvo.net, processed by numerous image processing applications. The code and the trained models will be open-sourced upon acceptance.
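For context, the temporal flicker described above is commonly quantified with a flow-based warping error between consecutive frames: warp one frame onto the next along the estimated optical flow and measure the residual difference. The sketch below is only an illustrative measurement of this kind, not the authors' method; it assumes OpenCV's Farneback flow and 8-bit BGR frames, and the function names are hypothetical.

import cv2
import numpy as np

def warp_with_flow(frame, flow):
    # Sample `frame` at positions displaced by the dense flow field,
    # bringing it into the coordinate frame of the previous frame.
    h, w = flow.shape[:2]
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    return cv2.remap(frame, map_x, map_y, cv2.INTER_LINEAR)

def warping_error(prev_frame, cur_frame):
    # Estimate flow from the previous frame to the current one,
    # warp the current frame back, and report the mean absolute
    # residual; larger values indicate stronger temporal flicker.
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    cur_gray = cv2.cvtColor(cur_frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, cur_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    warped = warp_with_flow(cur_frame, flow)
    return float(np.mean(np.abs(prev_frame.astype(np.float32) -
                                warped.astype(np.float32))))

Averaging this error over all consecutive frame pairs of a frame-wise processed clip gives a rough flicker score that consistency-restoration methods, including the framework proposed here, aim to reduce without degrading per-frame perceptual quality.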

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Ali_2023_ICCV,
    author    = {Ali, Muhammad Kashif and Kim, Dongjin and Kim, Tae Hyun},
    title     = {Task Agnostic Restoration of Natural Video Dynamics},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {13534-13544}
}