ViM: Vision Middleware for Unified Downstream Transferring

Yutong Feng, Biao Gong, Jianwen Jiang, Yiliang Lv, Yujun Shen, Deli Zhao, Jingren Zhou; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 11696-11707


Foundation models are pre-trained on massive data and transferred to downstream tasks via fine-tuning. This work presents Vision Middleware (ViM), a new learning paradigm that targets unified transferring from a single foundation model to a variety of downstream tasks. ViM consists of a zoo of lightweight plug-in modules, each of which is independently learned on a midstream dataset with a shared frozen backbone. Downstream tasks can then benefit from an appropriate aggregation of the module zoo thanks to the rich knowledge inherited from midstream tasks. Such a design offers three major advantages. From the efficiency aspect, the upstream backbone can be trained only once and reused for all downstream tasks without tuning. From the scalability aspect, additional modules can easily be appended to ViM with no influence on existing modules. From the performance aspect, ViM can include as many midstream tasks as possible, narrowing the task gap between upstream and downstream. Considering these benefits, we believe that ViM, which the community could maintain and develop together, would serve as a powerful tool to assist foundation models.
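The workflow described in the abstract can be sketched in plain Python. All names below (`backbone`, `make_module`, `transfer`, the aggregation weights) are illustrative assumptions, not the authors' actual architecture or API: a frozen backbone is computed once, each lightweight module contributes its own adjustment, and a downstream task aggregates the module outputs.

```python
# Illustrative sketch of the ViM idea; names and numbers are hypothetical.

def backbone(x):
    # Frozen upstream backbone: trained once, reused for every task.
    return [v * 2.0 for v in x]

def make_module(bias):
    # Each lightweight plug-in module learns only a small adjustment
    # on top of the shared frozen features (here, a toy residual bias).
    def module(feats):
        return [f + bias for f in feats]
    return module

# Module zoo: modules are trained independently on midstream tasks,
# so new ones can be appended without touching existing ones.
zoo = [make_module(0.5), make_module(-0.5), make_module(1.0)]

def transfer(x, weights):
    # Downstream transfer: a weighted aggregation of module outputs
    # over the shared frozen backbone features.
    feats = backbone(x)
    agg = [0.0] * len(feats)
    for module, w in zip(zoo, weights):
        out = module(feats)
        agg = [a + w * o for a, o in zip(agg, out)]
    return agg

result = transfer([1.0, 2.0], [0.5, 0.3, 0.2])
```

The key property the sketch illustrates is that `backbone` never changes: efficiency comes from reusing its features, and scalability from the fact that appending a fourth module to `zoo` leaves the first three untouched.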

Related Material

@InProceedings{Feng_2023_ICCV,
    author    = {Feng, Yutong and Gong, Biao and Jiang, Jianwen and Lv, Yiliang and Shen, Yujun and Zhao, Deli and Zhou, Jingren},
    title     = {ViM: Vision Middleware for Unified Downstream Transferring},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {11696-11707}
}