Boosting Neural Video Codecs by Exploiting Hierarchical Redundancy

Reza Pourreza, Hoang Le, Amir Said, Guillaume Sautière, Auke Wiggers; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2023, pp. 5355-5364

Abstract


In video compression, coding efficiency is improved by reusing pixels from previously decoded frames via motion and residual compensation. We define two levels of hierarchical redundancy in video frames: 1) first-order: redundancy in pixel space, i.e., similarities in pixel values across neighboring frames, which is effectively captured using motion and residual compensation, 2) second-order: redundancy in motion and residual maps due to smooth motion in natural videos. While most of the existing neural video coding literature addresses first-order redundancy, we tackle the problem of capturing second-order redundancy in neural video codecs via predictors. We introduce generic motion and residual predictors that learn to extrapolate from previously decoded data. These predictors are lightweight and can be employed with most neural video codecs to improve their rate-distortion performance. Moreover, while RGB is the dominant colorspace in the neural video coding literature, we introduce general modifications for neural video codecs to embrace the YUV420 colorspace and report YUV420 results. Our experiments show that using our predictors with a well-known neural video codec leads to 38% and 34% bitrate savings in the RGB and YUV420 colorspaces, respectively, measured on the UVG dataset.
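The sketch below illustrates the general idea of a second-order redundancy predictor as described in the abstract; it is not the authors' implementation. It assumes a PyTorch-style codec in which the decoded motion fields of the two previous frames are available, and the module, variable names, and network sizes are illustrative assumptions. The predictor extrapolates the current motion field so that only the (typically small) difference between the estimated and predicted motion needs to be coded.

```python
# Illustrative sketch of a lightweight motion predictor (assumed design,
# not the paper's architecture). It extrapolates the current flow map from
# the two previously decoded flow maps.
import torch
import torch.nn as nn


class MotionPredictor(nn.Module):
    """Small convolutional extrapolator over previously decoded motion fields."""

    def __init__(self, hidden: int = 32):
        super().__init__()
        # Input: two decoded 2-channel flow maps stacked along the channel axis.
        self.net = nn.Sequential(
            nn.Conv2d(4, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, 2, kernel_size=3, padding=1),
        )

    def forward(self, flow_prev1: torch.Tensor, flow_prev2: torch.Tensor) -> torch.Tensor:
        # Linear extrapolation (constant velocity) as a prior, refined by the network.
        linear = 2.0 * flow_prev1 - flow_prev2
        return linear + self.net(torch.cat([flow_prev1, flow_prev2], dim=1))


# Hypothetical usage inside a neural codec: code only the prediction residual.
# flow_pred = predictor(flow_t_minus_1, flow_t_minus_2)
# flow_residual = flow_estimate - flow_pred  # sent to the motion codec instead of flow_estimate
```

An analogous predictor could be applied to residual maps, which is the same principle the abstract describes for exploiting second-order redundancy.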

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Pourreza_2023_WACV,
    author    = {Pourreza, Reza and Le, Hoang and Said, Amir and Sauti\`ere, Guillaume and Wiggers, Auke},
    title     = {Boosting Neural Video Codecs by Exploiting Hierarchical Redundancy},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2023},
    pages     = {5355-5364}
}