Perceiver-VL: Efficient Vision-and-Language Modeling With Iterative Latent Attention

Zineng Tang, Jaemin Cho, Jie Lei, Mohit Bansal; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2023, pp. 4410-4420

Abstract

We present Perceiver-VL, a vision-and-language framework that efficiently handles high-dimensional multimodal inputs such as long videos and text. Powered by the iterative latent cross-attention of Perceiver, our framework scales with linear complexity, in contrast to the quadratic complexity of the self-attention used in many state-of-the-art transformer-based models. To further improve efficiency, we also study applying LayerDrop to cross-attention layers and introduce a mixed-stream architecture for cross-modal retrieval. We evaluate Perceiver-VL on diverse video-text and image-text benchmarks, where it achieves the lowest GFLOPs and latency while maintaining competitive performance. In addition, we provide comprehensive analyses of various aspects of our framework, including pretraining data, scalability of latent size and input size, dropping cross-attention layers at inference to reduce latency, modality aggregation strategy, positional encoding, and weight initialization strategy.
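To make the complexity argument concrete, the sketch below (our illustration, not the authors' released code; the module and variable names are hypothetical) shows one latent cross-attention block in PyTorch. A fixed set of M learned latent vectors queries the N input tokens, so each layer costs O(N*M), linear in input length, rather than the O(N^2) of full self-attention over the inputs.

    # Minimal sketch of one iterative latent cross-attention step (PyTorch).
    import torch
    import torch.nn as nn

    class LatentCrossAttention(nn.Module):
        def __init__(self, dim: int, num_heads: int = 8):
            super().__init__()
            self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
            self.norm_latents = nn.LayerNorm(dim)
            self.norm_inputs = nn.LayerNorm(dim)
            self.ffn = nn.Sequential(
                nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
            )

        def forward(self, latents: torch.Tensor, inputs: torch.Tensor) -> torch.Tensor:
            # latents: (B, M, D) learned queries; inputs: (B, N, D) multimodal
            # tokens (e.g., video patches + text). M is fixed, so the attention
            # cost grows linearly with the input length N.
            q = self.norm_latents(latents)
            kv = self.norm_inputs(inputs)
            attended, _ = self.attn(q, kv, kv)
            latents = latents + attended
            return latents + self.ffn(latents)

    # Iterating the block refines the latent array over long inputs:
    B, N, M, D = 2, 4096, 128, 256
    inputs = torch.randn(B, N, D)                     # long video + text tokens
    latents = torch.randn(1, M, D).expand(B, -1, -1)  # shared learned latents
    block = LatentCrossAttention(D)
    for _ in range(3):                                # iterative latent attention
        latents = block(latents, inputs)

Under this view, the LayerDrop study mentioned in the abstract amounts to skipping some of these cross-attention layers at inference: each skipped layer removes one O(N*M) pass over the inputs, trading a small amount of accuracy for lower latency.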

Related Material

BibTeX:
@InProceedings{Tang_2023_WACV,
    author    = {Tang, Zineng and Cho, Jaemin and Lei, Jie and Bansal, Mohit},
    title     = {Perceiver-VL: Efficient Vision-and-Language Modeling With Iterative Latent Attention},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2023},
    pages     = {4410-4420}
}