MTLoRA: Low-Rank Adaptation Approach for Efficient Multi-Task Learning

Ahmed Agiza, Marina Neseem, Sherief Reda; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 16196-16205

Abstract


Adapting models pre-trained on large-scale datasets to a variety of downstream tasks is a common strategy in deep learning. Consequently, parameter-efficient fine-tuning methods have emerged as a promising way to adapt pre-trained models to different tasks while training only a minimal number of parameters. While most of these methods are designed for single-task adaptation, parameter-efficient training in Multi-Task Learning (MTL) architectures is still unexplored. In this paper, we introduce MTLoRA, a novel framework for parameter-efficient training of MTL models. MTLoRA employs Task-Agnostic and Task-Specific Low-Rank Adaptation modules, which effectively disentangle the parameter space in MTL fine-tuning, thereby enabling the model to adeptly handle both task specialization and interaction within MTL contexts. We applied MTLoRA to hierarchical-transformer-based MTL architectures, adapting them to multiple downstream dense prediction tasks. Our extensive experiments on the PASCAL dataset show that MTLoRA achieves higher accuracy on downstream tasks compared to fully fine-tuning the MTL model, while reducing the number of trainable parameters by 3.6x. Furthermore, MTLoRA establishes a Pareto-optimal trade-off between the number of trainable parameters and the accuracy of the downstream tasks, outperforming current state-of-the-art parameter-efficient training methods in both accuracy and efficiency.
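
To illustrate the idea of combining a shared (task-agnostic) low-rank adapter with per-task low-rank adapters on top of a frozen pre-trained layer, the following is a minimal PyTorch-style sketch. The module name MTLoRALinear, the rank hyperparameters r_shared and r_task, and the task names are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of a task-agnostic + task-specific low-rank adaptation layer.
# Names (MTLoRALinear, r_shared, r_task, task keys) are illustrative only.
import torch
import torch.nn as nn


class MTLoRALinear(nn.Module):
    """Frozen linear layer augmented with one shared (task-agnostic) low-rank
    adapter and one low-rank adapter per downstream task."""

    def __init__(self, in_features, out_features, tasks, r_shared=8, r_task=4):
        super().__init__()
        # Pre-trained weights stay frozen; only the low-rank factors train.
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)
        self.base.bias.requires_grad_(False)

        # Task-agnostic low-rank factors (shared across all tasks).
        self.A_shared = nn.Parameter(torch.randn(r_shared, in_features) * 0.01)
        self.B_shared = nn.Parameter(torch.zeros(out_features, r_shared))

        # Task-specific low-rank factors, one pair per task.
        self.A_task = nn.ParameterDict(
            {t: nn.Parameter(torch.randn(r_task, in_features) * 0.01) for t in tasks}
        )
        self.B_task = nn.ParameterDict(
            {t: nn.Parameter(torch.zeros(out_features, r_task)) for t in tasks}
        )

    def forward(self, x):
        # Shared path: frozen base output plus the task-agnostic low-rank update.
        shared = self.base(x) + x @ self.A_shared.T @ self.B_shared.T
        # Task-specific paths: each task adds its own low-rank update on top.
        per_task = {
            t: shared + x @ self.A_task[t].T @ self.B_task[t].T
            for t in self.A_task
        }
        return shared, per_task


# Example: adapt a 256-dimensional feature for two dense prediction tasks.
layer = MTLoRALinear(256, 256, tasks=["semseg", "depth"])
shared_out, task_outs = layer(torch.randn(4, 256))
```

In this sketch, only the low-rank factors are trainable while the pre-trained weights stay frozen, which is where the reduction in trainable parameters relative to full fine-tuning comes from.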

Related Material


[bibtex]
@InProceedings{Agiza_2024_CVPR,
    author    = {Agiza, Ahmed and Neseem, Marina and Reda, Sherief},
    title     = {MTLoRA: Low-Rank Adaptation Approach for Efficient Multi-Task Learning},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {16196-16205}
}