Cloud-Device Collaborative Learning for Multimodal Large Language Models

Guanqun Wang, Jiaming Liu, Chenxuan Li, Yuan Zhang, Junpeng Ma, Xinyu Wei, Kevin Zhang, Maurice Chong, Renrui Zhang, Yijiang Liu, Shanghang Zhang; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 12646-12655

Abstract


The burgeoning field of Multimodal Large Language Models (MLLMs) has exhibited remarkable performance in diverse tasks such as captioning, commonsense reasoning, and visual scene understanding. However, the deployment of these large-scale MLLMs on client devices is hindered by their extensive model parameters, leading to a notable decline in generalization capabilities when these models are compressed for device deployment. To address this challenge, we introduce a Cloud-Device Collaborative Continual Adaptation framework designed to enhance the performance of compressed, device-deployed MLLMs by leveraging the robust capabilities of cloud-based, larger-scale MLLMs. Our framework is structured into three key components: a device-to-cloud uplink for efficient data transmission, cloud-based knowledge adaptation, and an optimized cloud-to-device downlink for model deployment. In the uplink phase, we employ an Uncertainty-guided Token Sampling (UTS) strategy to effectively filter out-of-distribution tokens, thereby reducing transmission costs and improving training efficiency. On the cloud side, we propose an Adapter-based Knowledge Distillation (AKD) method to transfer refined knowledge from large-scale to compressed, pocket-size MLLMs. Furthermore, we propose a Dynamic Weight update Compression (DWC) strategy for the downlink, which adaptively selects and quantizes updated weight parameters, enhancing transmission efficiency and reducing the representational disparity between cloud and device models. Extensive experiments on several multimodal benchmarks demonstrate the superiority of our proposed framework over prior knowledge distillation and device-cloud collaboration methods. Notably, we also validate the feasibility of our approach in real-world experiments.
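
The abstract describes the three components only at a high level; the exact formulations are in the paper. As a rough, non-authoritative sketch of the two transmission-side ideas, the snippet below assumes an entropy criterion for UTS and top-k int8 delta quantization for DWC. Function names, keep ratios, and the quantization scheme are all illustrative assumptions, not the authors' method.

# Minimal sketch of the two transmission-side ideas described in the abstract.
# The entropy criterion, ratios, and int8 scheme are illustrative assumptions,
# not the authors' exact UTS/DWC formulations.
import torch

def uts_select(token_logits: torch.Tensor, keep_ratio: float = 0.25) -> torch.Tensor:
    """Uncertainty-guided Token Sampling (assumed form): rank tokens by
    predictive entropy and keep the most uncertain fraction for the uplink."""
    probs = torch.softmax(token_logits, dim=-1)                # (num_tokens, vocab)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(-1)  # per-token uncertainty
    k = max(1, int(keep_ratio * token_logits.size(0)))
    return torch.topk(entropy, k).indices                      # tokens to upload

def dwc_compress(old_w: torch.Tensor, new_w: torch.Tensor, keep_ratio: float = 0.1):
    """Dynamic Weight update Compression (assumed form): send only the largest
    weight deltas, quantized to int8, over the cloud-to-device downlink."""
    delta = (new_w - old_w).flatten()
    k = max(1, int(keep_ratio * delta.numel()))
    idx = torch.topk(delta.abs(), k).indices                   # most-changed weights
    vals = delta[idx]
    scale = vals.abs().max() / 127.0 + 1e-12                   # symmetric int8 scale
    q = torch.round(vals / scale).to(torch.int8)
    return idx, q, scale                                       # downlink payload

def dwc_apply(old_w: torch.Tensor, idx, q, scale) -> torch.Tensor:
    """Device side: dequantize the sparse deltas and patch the local weights."""
    patched = old_w.flatten().clone()
    patched[idx] += q.to(patched.dtype) * scale
    return patched.view_as(old_w)

Selecting high-entropy tokens and large-magnitude weight deltas are both standard proxies for "what is worth transmitting"; the paper's actual selection and quantization rules may differ.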

Related Material


@InProceedings{Wang_2024_CVPR,
    author    = {Wang, Guanqun and Liu, Jiaming and Li, Chenxuan and Zhang, Yuan and Ma, Junpeng and Wei, Xinyu and Zhang, Kevin and Chong, Maurice and Zhang, Renrui and Liu, Yijiang and Zhang, Shanghang},
    title     = {Cloud-Device Collaborative Learning for Multimodal Large Language Models},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {12646-12655}
}