Towards Backward-Compatible Continual Learning of Image Compression

Zhihao Duan, Ming Lu, Justin Yang, Jiangpeng He, Zhan Ma, Fengqing Zhu; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 25564-25573

Abstract


This paper explores the possibility of extending the capability of pre-trained neural image compressors (e.g., adapting to new data or target bitrates) without breaking backward compatibility, i.e., the ability to decode bitstreams encoded by the original model. We refer to this problem as continual learning of image compression. Our initial findings show that baseline solutions, such as end-to-end fine-tuning, do not preserve the desired backward compatibility. To tackle this, we propose a knowledge replay training strategy that effectively addresses this issue. We also design a new model architecture that enables more effective continual learning than existing baselines. Experiments are conducted for two scenarios: data-incremental learning and rate-incremental learning. The main conclusion of this paper is that neural image compressors can be fine-tuned to achieve better performance (compared to their pre-trained version) on new data and rates without compromising backward compatibility. The code is publicly available online.
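For intuition about the knowledge-replay idea mentioned in the abstract, below is a minimal, hypothetical PyTorch sketch of how a replay loss could keep an updated decoder compatible with the original encoder during fine-tuning. The `Compressor` module, `train_step` function, and `replay_weight` hyperparameter are illustrative stand-ins of our own, not the paper's actual architecture or losses; consult the paper and its released code for the real method.

```python
# Hypothetical sketch of a knowledge-replay fine-tuning step; all names here
# (Compressor, train_step, replay_weight) are illustrative, not from the paper.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class Compressor(nn.Module):
    """Toy autoencoder standing in for a neural image compressor."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 32, 5, stride=2, padding=2))
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 64, 5, stride=2, padding=2, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 5, stride=2, padding=2, output_padding=1))

    def forward(self, x):
        y = self.encoder(x)
        return self.decoder(y), y

pretrained = Compressor()            # the original, already-deployed model
pretrained.eval()
for p in pretrained.parameters():    # frozen; serves as the replay "teacher"
    p.requires_grad_(False)

model = copy.deepcopy(pretrained)    # copy being fine-tuned on new data/rates
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
replay_weight = 1.0                  # hypothetical trade-off hyperparameter

def train_step(new_images):
    # Task loss on new data: a simple distortion stand-in (a real compressor
    # would also include a rate term in the objective).
    recon, _ = model(new_images)
    task_loss = F.mse_loss(recon, new_images)

    # Knowledge replay: latents produced by the *old* encoder must remain
    # decodable by the *new* decoder, preserving backward compatibility.
    with torch.no_grad():
        old_latent = pretrained.encoder(new_images)
        old_recon = pretrained.decoder(old_latent)
    replay_recon = model.decoder(old_latent)
    replay_loss = F.mse_loss(replay_recon, old_recon)

    loss = task_loss + replay_weight * replay_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

print(train_step(torch.rand(2, 3, 64, 64)))
```

The key design point this sketch tries to capture is that the frozen pre-trained model supplies both the old latents and the old reconstructions, so the fine-tuned decoder is explicitly penalized for drifting away from behavior on previously encoded bitstreams while it adapts to the new task.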

Related Material


@InProceedings{Duan_2024_CVPR,
    author    = {Duan, Zhihao and Lu, Ming and Yang, Justin and He, Jiangpeng and Ma, Zhan and Zhu, Fengqing},
    title     = {Towards Backward-Compatible Continual Learning of Image Compression},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {25564-25573}
}