Breaking the Encoder Barrier for Seamless Video-Language Understanding

Handong Li, Yiyuan Zhang, Longteng Guo, Xiangyu Yue, Jing Liu; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2025, pp. 23167-23176

Abstract


Most Video-Large Language Models (Video-LLMs) adopt an encoder-decoder framework, where a vision encoder extracts frame-wise features for processing by a language model. However, this approach incurs high computational costs, introduces resolution biases, and struggles to capture fine-grained multimodal interactions. To overcome these limitations, we propose ELVA, an encoder-free Video-LLM that directly models nuanced video-language interactions without relying on a vision encoder. ELVA employs token merging to construct a bottom-up hierarchical representation and incorporates a video guidance supervisor for direct spatiotemporal representation learning. Additionally, a hybrid-resolution mechanism strategically integrates high- and low-resolution frames as inputs to achieve an optimal balance between performance and efficiency. With only 7M publicly available video-text pairs, ELVA achieves competitive performance compared to encoder-based Video-LLMs while reducing FLOPs by up to 95% and inference latency by 92%, offering a scalable and efficient solution for real-time video understanding.
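To make the two mechanisms named above more concrete, here is a minimal, illustrative sketch (not the authors' released code) of (1) bottom-up token merging that averages the most similar adjacent patch tokens to build a coarser representation, and (2) a hybrid-resolution input that keeps occasional high-resolution frames and downsamples the rest. All shapes, merge ratios, and function names are assumptions made for illustration only.

```python
import torch
import torch.nn.functional as F

def merge_tokens(tokens: torch.Tensor, keep_ratio: float = 0.5) -> torch.Tensor:
    """Average the most similar adjacent token pairs, shrinking the sequence
    toward `keep_ratio` of its original length. tokens: [N, D] patch tokens."""
    n, _ = tokens.shape
    pairs = n // 2
    a, b = tokens[0:2 * pairs:2], tokens[1:2 * pairs:2]      # adjacent pairs
    sim = F.cosine_similarity(a, b, dim=-1)                   # similarity per pair
    n_merge = min(n - max(1, int(n * keep_ratio)), pairs)     # how many pairs to merge
    merge_idx = set(sim.argsort(descending=True)[:n_merge].tolist())
    out = []
    for i in range(pairs):
        if i in merge_idx:
            out.append((a[i] + b[i]) / 2)                     # merge: one token replaces the pair
        else:
            out.append(a[i])
            out.append(b[i])                                  # keep both tokens unmerged
    if n % 2 == 1:
        out.append(tokens[-1])                                # carry over the odd leftover token
    return torch.stack(out, dim=0)

def hybrid_resolution_frames(frames: torch.Tensor, hi_every: int = 8, lo_size: int = 112):
    """frames: [T, C, H, W] clip. Keep every `hi_every`-th frame at full
    resolution and bilinearly downsample the rest to trade FLOPs for detail."""
    out = []
    for t, frame in enumerate(frames):
        if t % hi_every == 0:
            out.append(frame)                                 # high-resolution anchor frame
        else:
            out.append(F.interpolate(frame[None], size=lo_size,
                                     mode="bilinear", align_corners=False)[0])
    return out                                                # mixed-resolution frames for patch embedding

if __name__ == "__main__":
    patch_tokens = torch.randn(256, 768)                      # e.g. 16x16 patches of one frame
    print(merge_tokens(patch_tokens).shape)                   # roughly half the tokens remain
    clip = torch.randn(16, 3, 224, 224)
    mixed = hybrid_resolution_frames(clip)
    print(mixed[0].shape, mixed[1].shape)                     # (3, 224, 224) vs (3, 112, 112)
```

Repeated token merging of this kind yields the bottom-up hierarchy the abstract refers to, while the hybrid-resolution sampler is one plausible way to mix high- and low-resolution frames; the paper itself should be consulted for the exact merging criterion and frame-selection policy.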

Related Material


[pdf] [supp] [arXiv] [bibtex]
@InProceedings{Li_2025_ICCV,
    author    = {Li, Handong and Zhang, Yiyuan and Guo, Longteng and Yue, Xiangyu and Liu, Jing},
    title     = {Breaking the Encoder Barrier for Seamless Video-Language Understanding},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2025},
    pages     = {23167-23176}
}