Efficient Computation Sharing for Multi-Task Visual Scene Understanding

Sara Shoouri, Mingyu Yang, Zichen Fan, Hun-Seok Kim; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 17130-17141


Solving multiple visual tasks using individual models can be resource-intensive, while multi-task learning can conserve resources by sharing knowledge across different tasks. Despite the benefits of multi-task learning, such techniques can struggle to balance the loss for each task, leading to potential performance degradation. We present a novel computation- and parameter-sharing framework that balances efficiency and accuracy to perform multiple visual tasks utilizing individually trained single-task transformers. Our method is motivated by transfer learning schemes to reduce computational and parameter storage costs while maintaining the desired performance. Our approach involves splitting the tasks into a base task and a set of sub-tasks, and sharing a significant portion of activations and parameters/weights between the base task and the sub-tasks to decrease inter-task redundancies and enhance knowledge sharing. Evaluation on the NYUD-v2 and PASCAL-Context datasets shows that our method outperforms state-of-the-art transformer-based multi-task learning techniques, achieving higher accuracy with fewer computational resources. Moreover, our method extends to video stream inputs, further reducing computational costs by efficiently sharing information across the temporal domain as well as the task domain. Our code is available at https://github.com/sarashoouri/EfficientMTL.
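The core idea of the abstract — compute a base task's activations once and let each sub-task reuse them with only a small task-specific adjustment — can be illustrated with a minimal sketch. This is not the paper's implementation; the function names, the scalar "weights," and the additive-delta scheme are illustrative assumptions standing in for the shared transformer activations and per-task parameters.

```python
def base_encoder(x, w_base):
    # Shared computation: run once per input and reused by every task.
    # (Stands in for the base task's transformer activations.)
    return [w_base * v for v in x]

def sub_task_head(shared_feats, w_delta):
    # Each sub-task stores only a small task-specific delta applied to the
    # shared activations, instead of re-running a full single-task model.
    # (Illustrative stand-in for the paper's parameter/activation sharing.)
    return [v + w_delta * v for v in shared_feats]

x = [1.0, 2.0, 3.0]
shared = base_encoder(x, w_base=0.5)          # computed once
seg_out = sub_task_head(shared, w_delta=0.1)  # e.g., a segmentation sub-task
dep_out = sub_task_head(shared, w_delta=0.2)  # e.g., a depth sub-task
```

The base encoder runs once regardless of how many sub-tasks consume its output, which is the source of the computation savings; each sub-task adds only its small delta parameters on top of the shared weights.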

Related Material

@InProceedings{Shoouri_2023_ICCV,
    author    = {Shoouri, Sara and Yang, Mingyu and Fan, Zichen and Kim, Hun-Seok},
    title     = {Efficient Computation Sharing for Multi-Task Visual Scene Understanding},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {17130-17141}
}