Checkerboard Context Model for Efficient Learned Image Compression

Dailan He, Yaoyan Zheng, Baocheng Sun, Yan Wang, Hongwei Qin; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 14771-14780

Abstract


For learned image compression, the autoregressive context model has proved effective in improving rate-distortion (RD) performance, because it helps remove spatial redundancies among latent representations. However, decoding must follow a strict scan order, which breaks parallelization. We propose a parallelizable checkerboard context model (CCM) to solve this problem. Our two-pass checkerboard context calculation eliminates such constraints on spatial locations by reorganizing the decoding order. It speeds up decoding by more than 40 times in our experiments, achieving significantly improved computational efficiency with almost the same rate-distortion performance. To the best of our knowledge, this is the first exploration of a parallelization-friendly spatial context model for learned image compression.
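The two-pass idea described in the abstract can be illustrated with a small sketch. In a checkerboard partition, "anchor" latents (one color of the board) are decoded first in parallel without spatial context, and the remaining "non-anchor" latents are then decoded in a second parallel pass, each conditioned on its already-decoded four-neighbors. The mask construction below is a minimal illustration of this partition, not the authors' implementation; the function name and shapes are assumptions for the example.

```python
import numpy as np

def checkerboard_masks(h, w):
    """Split an h x w latent grid into anchor / non-anchor positions.

    Anchors (one checkerboard color) are decoded first, all in parallel,
    using only the hyperprior. Non-anchors are decoded in a second
    parallel pass, each using spatial context from its four decoded
    anchor neighbors. (Illustrative sketch, not the paper's code.)
    """
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    anchor = (ys + xs) % 2 == 0      # e.g. "white" squares
    non_anchor = ~anchor             # e.g. "black" squares
    return anchor, non_anchor

# Example: a 4x4 latent grid splits into two passes of 8 positions each.
anchor, non_anchor = checkerboard_masks(4, 4)
print(anchor.astype(int))
```

Because every non-anchor position's four-neighbors are all anchors, the second pass never depends on another non-anchor, so both passes are fully parallel — replacing the serial raster-scan order of an autoregressive context model with just two steps.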

Related Material


[bibtex]
@InProceedings{He_2021_CVPR,
    author    = {He, Dailan and Zheng, Yaoyan and Sun, Baocheng and Wang, Yan and Qin, Hongwei},
    title     = {Checkerboard Context Model for Efficient Learned Image Compression},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2021},
    pages     = {14771-14780}
}