CODA: Repurposing Continuous VAEs for Discrete Tokenization

Zeyu Liu, Zanlin Ni, Yeguo Hua, Xin Deng, Xiao Ma, Cheng Zhong, Gao Huang; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2025, pp. 18906-18916

Abstract


Discrete visual tokenizers transform images into sequences of tokens, enabling token-based visual generation akin to language modeling. However, this process is inherently challenging: it requires both compressing visual signals into a compact representation and discretizing them into a fixed set of codes. Traditional discrete tokenizers learn the two tasks jointly, which often leads to unstable training, low codebook utilization, and limited reconstruction quality. In this paper, we introduce CODA (COntinuous-to-Discrete Adaptation), a framework that decouples compression from discretization. Instead of training discrete tokenizers from scratch, CODA adapts off-the-shelf continuous VAEs, already optimized for perceptual compression, into discrete tokenizers via a carefully designed discretization process. By focusing primarily on discretization, CODA ensures stable and efficient training while retaining the strong visual fidelity of continuous VAEs. Empirically, with a 6x smaller training budget than standard VQGAN, our approach achieves a remarkable codebook utilization of 100% and notable reconstruction FID (rFID) scores of 0.43 and 1.34 for 8x and 16x compression on the ImageNet 256x256 benchmark.
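The decoupling described above can be illustrated with a minimal sketch: take latents from a frozen, pretrained continuous VAE and discretize them by nearest-neighbor lookup in a codebook, as in standard vector quantization. This is not the paper's exact discretization procedure; the codebook size, latent dimensionality, and all variable names below are assumptions for illustration only.

```python
import numpy as np

# Illustrative sketch, assuming standard nearest-neighbor vector quantization.
# The dimensions here (512 codes, 16-dim latents, 64 latent vectors) are
# hypothetical and not taken from the paper.
rng = np.random.default_rng(0)
codebook = rng.normal(size=(512, 16))   # learned codebook: 512 discrete codes
latents = rng.normal(size=(64, 16))     # stand-in for frozen continuous VAE latents

# Discretization: assign each latent to its nearest code by squared
# Euclidean distance, yielding discrete token ids.
d2 = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
token_ids = d2.argmin(axis=1)           # shape (64,), values in [0, 512)

# De-quantization: look the codes back up; these vectors would be fed to
# the frozen VAE decoder for reconstruction.
quantized = codebook[token_ids]         # shape (64, 16)
```

Because the VAE encoder and decoder stay fixed, only the discretization step (here, the codebook) needs to be trained, which is the stability and efficiency argument the abstract makes.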

Related Material


[pdf] [arXiv]
[bibtex]
@InProceedings{Liu_2025_ICCV,
    author    = {Liu, Zeyu and Ni, Zanlin and Hua, Yeguo and Deng, Xin and Ma, Xiao and Zhong, Cheng and Huang, Gao},
    title     = {CODA: Repurposing Continuous VAEs for Discrete Tokenization},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2025},
    pages     = {18906-18916}
}