VMRNN: Integrating Vision Mamba and LSTM for Efficient and Accurate Spatiotemporal Forecasting

Yujin Tang, Peijie Dong, Zhenheng Tang, Xiaowen Chu, Junwei Liang; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2024, pp. 5663-5673

Abstract


Combining Convolutional Neural Networks (CNNs) or Vision Transformers (ViTs) with Recurrent Neural Networks (RNNs) for spatiotemporal forecasting has yielded unparalleled results in predicting temporal and spatial dynamics. However, modeling extensive global information remains a formidable challenge: CNNs are limited by their narrow receptive fields, and ViTs struggle with the intensive computational demands of their attention mechanisms. The emergence of recent Mamba-based architectures has been met with enthusiasm for their exceptional long-sequence modeling capabilities, which surpass established vision models in efficiency and accuracy and motivate us to develop an innovative architecture tailored for spatiotemporal forecasting. In this paper, we propose the VMRNN cell, a new recurrent unit that integrates the strengths of Vision Mamba blocks with LSTM. We construct a network centered on VMRNN cells to tackle spatiotemporal prediction tasks effectively. Our extensive evaluations show that the proposed approach secures competitive results on a variety of tasks while maintaining a smaller model size. Our code is available at https://github.com/yyyujintang/VMRNN-PyTorch.
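
To make the core idea concrete, below is a minimal PyTorch sketch of an LSTM-style recurrent cell whose gate pre-activations come from a Mamba-style token mixer, in the spirit of the VMRNN cell. This is not the authors' implementation (see the linked repository for that); the selective-scan block is stubbed out with a simple residual projection, and all class names, shapes, and the exact gating arrangement here are illustrative assumptions.

import torch
import torch.nn as nn

class SSMBlockStub(nn.Module):
    # Stand-in for a Vision Mamba (VSS) block. The real block performs a
    # selective state-space scan over patch tokens; this hypothetical
    # placeholder just applies a residual linear projection.
    def __init__(self, dim):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):  # x: (B, L, C) patch tokens
        return x + self.proj(self.norm(x))

class VMRNNCellSketch(nn.Module):
    # Sketch of a recurrent unit that mixes the current frame's tokens with
    # the previous hidden state via the (stubbed) Mamba-style block, then
    # applies standard LSTM gating to update the cell and hidden states.
    def __init__(self, dim):
        super().__init__()
        self.mixer = SSMBlockStub(2 * dim)          # mixes [x_t, h_{t-1}]
        self.to_gates = nn.Linear(2 * dim, 4 * dim) # input/forget/cand/output

    def forward(self, x_t, state):
        h_prev, c_prev = state                       # each (B, L, C)
        z = self.mixer(torch.cat([x_t, h_prev], dim=-1))
        i, f, g, o = self.to_gates(z).chunk(4, dim=-1)
        c_t = torch.sigmoid(f) * c_prev + torch.sigmoid(i) * torch.tanh(g)
        h_t = torch.sigmoid(o) * torch.tanh(c_t)
        return h_t, (h_t, c_t)

# Usage: unroll the cell over T frames of patch embeddings.
B, T, L, C = 2, 10, 64, 32                           # batch, time, tokens, channels
cell = VMRNNCellSketch(C)
state = (torch.zeros(B, L, C), torch.zeros(B, L, C))
for t in range(T):
    out, state = cell(torch.randn(B, L, C), state)   # frame t's patch tokens
print(out.shape)                                     # torch.Size([2, 64, 32])

The appeal of this arrangement, as the abstract argues, is that the state-space mixer scales linearly with token count, so the per-step gating can attend to global spatial context without a ViT's quadratic attention cost.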

Related Material


[pdf] [arXiv]
[bibtex]
@InProceedings{Tang_2024_CVPR,
  author    = {Tang, Yujin and Dong, Peijie and Tang, Zhenheng and Chu, Xiaowen and Liang, Junwei},
  title     = {VMRNN: Integrating Vision Mamba and LSTM for Efficient and Accurate Spatiotemporal Forecasting},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2024},
  pages     = {5663-5673}
}