Cross-Task Affinity Learning for Multitask Dense Scene Predictions
Abstract
Multitask learning (MTL) has become prominent for its ability to predict multiple tasks jointly, achieving better per-task performance with fewer parameters than single-task learning. Recently, decoder-focused architectures have significantly improved multitask performance by refining task predictions using features from related tasks. However, most refinement methods struggle to efficiently capture both local and long-range dependencies between task-specific representations and cross-task patterns. In this paper, we introduce the Cross-Task Affinity Learning (CTAL) module, a lightweight framework that enhances task refinement in multitask networks. CTAL effectively captures local and long-range cross-task interactions by optimizing task affinity matrices for parameter-efficient grouped convolutions without concern for information loss. Our results demonstrate state-of-the-art MTL performance for both CNN and transformer backbones, using significantly fewer parameters than single-task learning.
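To make the abstract's core idea more concrete, the fragment below is a minimal, hypothetical sketch in PyTorch: each task builds a spatial affinity matrix, every task's features are aggregated under every task's affinity pattern, and the result is fused with a parameter-efficient grouped convolution. This is not the authors' CTAL implementation; the class name CrossTaskAffinitySketch, the tensor shapes, and the exact refinement and fusion steps are assumptions made purely for illustration.

# NOTE: hypothetical sketch of affinity-based cross-task refinement,
# not the authors' CTAL implementation.
import torch
import torch.nn as nn


class CrossTaskAffinitySketch(nn.Module):
    """Illustrative module: per-task spatial affinity matrices capture
    long-range dependencies, and a grouped convolution fuses the
    cross-task messages with few parameters."""

    def __init__(self, channels: int, num_tasks: int):
        super().__init__()
        self.num_tasks = num_tasks
        # groups=num_tasks: each task's message gets its own small filter
        # bank, which is far cheaper than a dense conv over all messages.
        self.fuse = nn.Conv2d(channels * num_tasks, channels * num_tasks,
                              kernel_size=3, padding=1, groups=num_tasks)

    def forward(self, feats):
        # feats: list of num_tasks tensors, each of shape (B, C, H, W)
        b, c, h, w = feats[0].shape
        flat = [f.flatten(2) for f in feats]                  # each (B, C, HW)
        # One spatial affinity matrix per task (all pairs of positions).
        affinities = [torch.softmax(x.transpose(1, 2) @ x, dim=-1)
                      for x in flat]                          # each (B, HW, HW)
        outputs = []
        for x in flat:
            # Aggregate this task's features under every task's affinity
            # pattern, so it sees where related tasks found similar context.
            messages = [(x @ a).reshape(b, c, h, w) for a in affinities]
            mixed = self.fuse(torch.cat(messages, dim=1))     # (B, C*T, H, W)
            # Collapse the per-task message groups into one refined map.
            outputs.append(mixed.reshape(b, self.num_tasks, c, h, w).sum(dim=1))
        return outputs


if __name__ == "__main__":
    module = CrossTaskAffinitySketch(channels=16, num_tasks=2)
    task_feats = [torch.randn(1, 16, 32, 32) for _ in range(2)]
    print([o.shape for o in module(task_feats)])  # two (1, 16, 32, 32) maps

The sum over message groups and the shared grouped convolution are design choices for brevity here; the paper's actual module may combine affinities and features differently.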
Related Material
[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Sinodinos_2025_WACV,
    author    = {Sinodinos, Dimitrios and Armanfard, Narges},
    title     = {Cross-Task Affinity Learning for Multitask Dense Scene Predictions},
    booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
    month     = {February},
    year      = {2025},
    pages     = {1546-1555}
}