Randomized Autoregressive Visual Generation

Qihang Yu, Ju He, Xueqing Deng, Xiaohui Shen, Liang-Chieh Chen; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2025, pp. 18431-18441

Abstract


This paper presents Randomized AutoRegressive modeling (RAR) for visual generation, which sets a new state-of-the-art performance on the image generation task while maintaining full compatibility with language modeling frameworks. The proposed RAR is simple: during a standard autoregressive training process with a next-token prediction objective, the input sequence (typically ordered in raster form) is randomly permuted into different factorization orders with a probability r, where r starts at 1 and linearly decays to 0 over the course of training. This annealing training strategy enables the model to learn to maximize the expected likelihood over all factorization orders, thereby effectively improving its ability to model bidirectional contexts. Importantly, RAR preserves the integrity of the autoregressive modeling framework, ensuring full compatibility with language modeling while significantly improving performance in image generation. On the ImageNet-256 benchmark, RAR achieves an FID score of 1.48, not only surpassing prior state-of-the-art autoregressive image generators but also outperforming leading diffusion-based and masked transformer-based methods. Code and models are available at https://github.com/bytedance/1d-tokenizer
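The sketch below illustrates the training recipe described in the abstract: with probability r, which linearly decays from 1 to 0 over training, the token sequence is permuted into a random factorization order before applying the standard next-token prediction loss. This is a minimal illustration under stated assumptions, not the authors' implementation; the names `model`, `tokens`, and the `order` keyword argument are hypothetical placeholders for a decoder-only transformer over discrete image tokens.

```python
import torch
import torch.nn.functional as F


def permute_prob(step: int, total_steps: int) -> float:
    # Permutation probability r: starts at 1 and linearly decays to 0
    # over the course of training, as described in the abstract.
    return max(0.0, 1.0 - step / total_steps)


def training_step(model, tokens, step, total_steps):
    # tokens: (batch, seq_len) discrete image token ids.
    # `model(x, order=...)` is an assumed interface that lets the model
    # account for the chosen factorization order (e.g. via positional info).
    B, N = tokens.shape
    r = permute_prob(step, total_steps)

    if torch.rand(()) < r:
        # Random factorization order.
        order = torch.randperm(N, device=tokens.device)
    else:
        # Standard raster order.
        order = torch.arange(N, device=tokens.device)

    permuted = tokens[:, order]                      # reorder the sequence
    logits = model(permuted[:, :-1], order=order)    # predict the next token
    loss = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        permuted[:, 1:].reshape(-1),
    )
    return loss
```

Because r reaches 0 by the end of training, the final model is trained purely in raster order and remains a standard autoregressive generator at inference time, while the earlier randomized phase exposes it to bidirectional context.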

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Yu_2025_ICCV,
    author    = {Yu, Qihang and He, Ju and Deng, Xueqing and Shen, Xiaohui and Chen, Liang-Chieh},
    title     = {Randomized Autoregressive Visual Generation},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2025},
    pages     = {18431-18441}
}