DeepVQ: A Deep Network Architecture for Vector Quantization

Dang-Khoa Le Tan, Huu Le, Tuan Hoang, Thanh-Toan Do, Ngai-Man Cheung; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2018, pp. 2579-2582


Vector quantization (VQ) is a classic problem in signal processing, source coding and information theory. Leveraging recent advances in deep neural networks (DNNs), this paper bridges the gap between a classic quantization problem and DNNs. We introduce -- for the first time -- a deep network architecture for vector quantization (DeepVQ). Applying recent binary optimization theory, we propose a training algorithm to tackle binary constraints. Notably, our network outputs binary codes directly. As a result, DeepVQ can quantize vectors with a simple forward pass, which overcomes the exponential complexity of previous VQ approaches. Experiments show that our network achieves encouraging results and outperforms recent deep learning-based clustering approaches modified for VQ. Importantly, our network serves as a generic framework that can be applied to other networks that require binary constraints.
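To ground the complexity claim, the following is a minimal sketch of the classic VQ baseline the abstract contrasts against: encoding a vector by exhaustive nearest-codeword search. With M sub-codebooks of K codewords each, an exact joint search scales as O(K^M), which is the exponential cost a single network forward pass avoids. The function and variable names here are illustrative; the DeepVQ architecture itself is not detailed in this abstract.

```python
import numpy as np

def vq_encode(vectors, codebook):
    """Classic VQ encoding: index of the nearest codeword per vector."""
    # Pairwise squared distances between vectors (N, D) and codebook (K, D).
    dists = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return dists.argmin(axis=1)

def vq_decode(codes, codebook):
    """Reconstruct each vector as its assigned codeword."""
    return codebook[codes]

rng = np.random.default_rng(0)
codebook = rng.standard_normal((16, 8))          # K=16 codewords, dimension 8
true_idx = rng.integers(0, 16, size=5)
# Inputs lie near known codewords, so encoding should recover true_idx.
vectors = codebook[true_idx] + 0.01 * rng.standard_normal((5, 8))

codes = vq_encode(vectors, codebook)
recon = vq_decode(codes, codebook)
```

Exhaustive search is exact but its cost grows with the codebook; a learned encoder instead amortizes this search into a fixed-cost forward pass that emits the binary code directly.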

Related Material

@InProceedings{LeTan_2018_CVPR_Workshops,
  author = {Le Tan, Dang-Khoa and Le, Huu and Hoang, Tuan and Do, Thanh-Toan and Cheung, Ngai-Man},
  title = {DeepVQ: A Deep Network Architecture for Vector Quantization},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month = {June},
  year = {2018}
}