DyCoke: Dynamic Compression of Tokens for Fast Video Large Language Models
Keda Tao, Can Qin, Haoxuan You, Yang Sui, Huan Wang
Abstract
Video large language models (VLLMs) have recently made significant advances in processing complex video content, yet their inference efficiency remains constrained by the high computational cost of the thousands of visual tokens generated from video inputs. We empirically observe that, unlike with single-image inputs, VLLMs typically attend to visual tokens from different frames at different decoding iterations, which makes a one-shot pruning strategy prone to removing important tokens by mistake. Motivated by this, we present DyCoke, a training-free token compression method that optimizes token representation and accelerates VLLMs. DyCoke incorporates a plug-and-play temporal compression module that reduces temporal redundancy by merging redundant tokens across frames, and applies dynamic KV cache reduction to selectively prune spatially redundant tokens. By dynamically retaining the critical tokens at each decoding step, it maintains high-quality inference. Extensive experimental results demonstrate that DyCoke outperforms prior SoTA counterparts, achieving a 1.5x inference speedup and a 1.4x memory reduction over the baseline VLLM while still improving performance, with no training.
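The abstract describes two complementary mechanisms: merging temporally redundant visual tokens across frames, and dynamically pruning the KV cache so that each decoding step keeps only the visual tokens it currently attends to. The Python sketch below illustrates these two ideas in isolation; it is not the authors' implementation, and the cosine-similarity merging criterion, the similarity threshold, the keep ratio, and all function names are illustrative assumptions.

# Minimal sketch of the two ideas described in the abstract.
# NOT the authors' implementation: the merging criterion, threshold,
# keep ratio, and function names are assumptions for illustration.
import torch
import torch.nn.functional as F


def merge_temporal_tokens(frame_tokens: torch.Tensor, threshold: float = 0.9) -> list:
    """Drop tokens in each frame that are nearly identical to the token at the
    same spatial position in the previous frame (temporal redundancy).

    frame_tokens: (num_frames, tokens_per_frame, dim)
    Returns a list of per-frame tensors with redundant tokens removed.
    """
    kept = [frame_tokens[0]]  # keep the first frame in full
    for t in range(1, frame_tokens.shape[0]):
        sim = F.cosine_similarity(frame_tokens[t], frame_tokens[t - 1], dim=-1)
        kept.append(frame_tokens[t][sim < threshold])  # keep only "new" content
    return kept


def prune_kv_cache(keys: torch.Tensor, values: torch.Tensor,
                   attn_to_visual: torch.Tensor, keep_ratio: float = 0.5):
    """Keep only the visual tokens the current decoding step attends to most
    strongly (dynamic, per-step pruning of spatially redundant tokens).

    keys/values: (num_visual_tokens, dim); attn_to_visual: (num_visual_tokens,)
    """
    k = max(1, int(keep_ratio * keys.shape[0]))
    top = attn_to_visual.topk(k).indices
    return keys[top], values[top]


if __name__ == "__main__":
    tokens = torch.randn(8, 196, 64)        # 8 frames x 196 tokens x 64 dims
    compact = merge_temporal_tokens(tokens)
    print([t.shape[0] for t in compact])    # tokens surviving per frame

    keys, values = torch.randn(512, 64), torch.randn(512, 64)
    attn = torch.rand(512)                  # attention mass at the current step
    k2, v2 = prune_kv_cache(keys, values, attn)
    print(k2.shape, v2.shape)               # pruned KV cache

Because the pruning in this sketch is re-evaluated with fresh attention scores at every decoding step rather than applied once, different steps can retain tokens from different frames, which is the behavior the abstract motivates.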
Related Material
[pdf] [supp] [arXiv]

[bibtex]
@InProceedings{Tao_2025_CVPR,
    author    = {Tao, Keda and Qin, Can and You, Haoxuan and Sui, Yang and Wang, Huan},
    title     = {DyCoke: Dynamic Compression of Tokens for Fast Video Large Language Models},
    booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
    month     = {June},
    year      = {2025},
    pages     = {18992-19001}
}