Extending Neural P-Frame Codecs for B-Frame Coding

Reza Pourreza, Taco Cohen; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 6680-6689

Abstract


While most neural video codecs address P-frame coding (predicting each frame from past ones), in this paper we address B-frame compression (predicting frames using both past and future reference frames). Our B-frame solution builds on existing P-frame methods, so B-frame coding capability can easily be added to an existing neural codec. The basic idea of our B-frame coding method is to interpolate the two reference frames into a single reference frame and then use it, together with an existing P-frame codec, to encode the input B-frame. Our studies show that the interpolated frame is a much better reference for the P-frame codec than the previous frame, which is what is usually used. Our results show that applying the proposed method to an existing P-frame codec yields a 28.5% bit-rate saving on the UVG dataset compared to the P-frame codec alone at the same video quality.
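The data flow described in the abstract is simple to sketch. The following Python snippet is only an illustration of that flow, not the authors' implementation: `interpolate_frames`, `PFrameCodec`, and the byte-level "bitstream" are hypothetical placeholders standing in for a learned frame interpolator and an existing neural P-frame codec with learned entropy coding.

```python
import torch


def interpolate_frames(past: torch.Tensor, future: torch.Tensor) -> torch.Tensor:
    """Predict the in-between frame from two reference frames.

    A real system would use a learned interpolation network; simple
    averaging is only a placeholder so this sketch runs.
    """
    return 0.5 * (past + future)


class PFrameCodec:
    """Placeholder for any existing neural P-frame codec that compresses a
    frame conditioned on a single reference frame."""

    def encode(self, frame: torch.Tensor, reference: torch.Tensor) -> bytes:
        residual = frame - reference           # codec only needs to code a small residual
        return residual.numpy().tobytes()      # stand-in for an entropy-coded bitstream

    def decode(self, bitstream: bytes, reference: torch.Tensor) -> torch.Tensor:
        residual = torch.frombuffer(bytearray(bitstream), dtype=torch.float32)
        return reference + residual.reshape(reference.shape)


def encode_b_frame(codec: PFrameCodec, b_frame, past_ref, future_ref):
    # 1) Interpolate the two reference frames into a single pseudo-reference.
    pseudo_ref = interpolate_frames(past_ref, future_ref)
    # 2) Reuse the unmodified P-frame codec with the pseudo-reference.
    return codec.encode(b_frame, pseudo_ref), pseudo_ref


if __name__ == "__main__":
    c, h, w = 3, 64, 64
    past, future, b = (torch.rand(c, h, w) for _ in range(3))
    codec = PFrameCodec()
    bits, pseudo_ref = encode_b_frame(codec, b, past, future)
    recon = codec.decode(bits, pseudo_ref)
    print("reconstruction error:", (recon - b).abs().max().item())
```

In the paper, both components are neural networks and the bitstream comes from learned entropy coding; the sketch only mirrors the pipeline of interpolating the two references and then invoking an unmodified P-frame codec.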

Related Material


[bibtex]
@InProceedings{Pourreza_2021_ICCV,
    author    = {Pourreza, Reza and Cohen, Taco},
    title     = {Extending Neural P-Frame Codecs for B-Frame Coding},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {6680-6689}
}