Subjective Quality Optimized Efficient Image Compression
In this paper, we propose an efficient image compression framework optimized for subjective quality. Our framework builds on the NLAIC (Non-Local Attention optimized Image Coding) model, which applies a variational autoencoder (VAE) and non-local attention modules to end-to-end image compression. This work makes two major contributions to the NLAIC framework. First, our models are optimized for subjective-quality-oriented loss functions rather than the conventional MSE (mean squared error) or MS-SSIM (multiscale structural similarity) losses widely used in previous works. Second, we introduce a block-based inference mechanism that reduces the running memory consumption of the compression network, together with a partial post-processing step that alleviates the block artifacts introduced by block-based inference at low computational cost. Experiments show that images reconstructed by our method preserve more texture detail than those produced by models optimized for MSE or MS-SSIM, while also enabling high-throughput decoding.
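The block-based inference and partial post-processing described above can be illustrated with a minimal sketch. The `compress_block` stand-in, the block size, and the boundary-smoothing scheme are all illustrative assumptions, not the paper's actual codec or deblocking filter; the point is only to show decoding tile by tile (bounding peak memory) and then smoothing a narrow band around each block seam instead of filtering the whole image.

```python
import numpy as np

def compress_block(block):
    # Hypothetical stand-in for the learned codec's per-block
    # encode/decode pass (the real model is a VAE with non-local
    # attention); identity here so the pipeline is testable.
    return block

def blockwise_decode(img, bs=64):
    """Decode the image block by block so only one block's
    activations need to reside in memory at a time."""
    h, w = img.shape[:2]
    out = np.zeros_like(img)
    for y in range(0, h, bs):
        for x in range(0, w, bs):
            out[y:y + bs, x:x + bs] = compress_block(img[y:y + bs, x:x + bs])
    return out

def deblock_boundaries(img, bs=64, k=3):
    """Partial post-processing: blend only the pixels within k rows
    or columns of a block seam toward their local mean, leaving the
    block interiors untouched (illustrative smoothing, not the
    paper's filter)."""
    out = img.astype(np.float32).copy()
    h, w = img.shape[:2]
    for b in range(bs, h, bs):  # horizontal seams
        lo, hi = max(b - k, 0), min(b + k, h)
        out[lo:hi] = 0.5 * out[lo:hi].mean(axis=0, keepdims=True) + 0.5 * out[lo:hi]
    for b in range(bs, w, bs):  # vertical seams
        lo, hi = max(b - k, 0), min(b + k, w)
        out[:, lo:hi] = 0.5 * out[:, lo:hi].mean(axis=1, keepdims=True) + 0.5 * out[:, lo:hi]
    return out
```

Because only the seam bands are touched, the post-processing cost scales with the boundary length rather than the image area, which is what keeps the step lightweight.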