Optimizing Dense Visual Predictions Through Multi-Task Coherence and Prioritization

Maxime Fontana, Michael Spratling, Miaojing Shi; Proceedings of the Winter Conference on Applications of Computer Vision (WACV), 2025, pp. 8995-9004

Abstract


Multi-Task Learning (MTL) involves the concurrent training of multiple tasks, offering notable advantages for dense prediction tasks in computer vision. MTL not only reduces training and inference time compared to using multiple single-task models, but also enhances task accuracy through the interaction of multiple tasks. However, existing methods face limitations. They often rely on suboptimal cross-task interactions, resulting in task-specific predictions with poor geometric and predictive coherence. In addition, many approaches use inadequate loss weighting strategies, which do not address the inherent variability in task evolution during training. To overcome these challenges, we propose an advanced MTL model specifically designed for dense vision tasks. Our model leverages state-of-the-art vision transformers with task-specific decoders. To enhance cross-task coherence, we introduce a trace-back method that improves both cross-task geometric and predictive features. Furthermore, we present a novel dynamic task balancing approach that projects task losses onto a common scale and prioritizes more challenging tasks during training. Extensive experiments demonstrate the superiority of our method, establishing new state-of-the-art performance across two benchmark datasets. The code is available at: https://github.com/Klodivio355/MT-CP.
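To give a concrete sense of the dynamic task balancing idea described above, the following is a minimal sketch of one plausible realization: each task loss is projected onto a common scale by dividing it by an exponential moving average of its own magnitude, and the currently harder tasks (those with the largest normalized losses) receive larger weights. All names and details here (the `DynamicTaskBalancer` class, EMA normalization, softmax prioritization) are illustrative assumptions and not the exact MT-CP formulation, which is given in the paper itself.

```python
import torch

class DynamicTaskBalancer:
    """Illustrative sketch (not the paper's exact method): normalize each
    task loss onto a common scale via an exponential moving average (EMA)
    of its own magnitude, then up-weight the tasks whose normalized loss
    is currently largest, i.e. the hardest tasks at this point in training."""

    def __init__(self, task_names, ema_momentum=0.9, temperature=1.0):
        self.ema = {name: None for name in task_names}
        self.momentum = ema_momentum
        self.temperature = temperature

    def __call__(self, losses):
        # losses: dict mapping task name -> scalar loss tensor
        normalized = {}
        for name, loss in losses.items():
            value = loss.detach()
            if self.ema[name] is None:
                self.ema[name] = value
            else:
                self.ema[name] = (self.momentum * self.ema[name]
                                  + (1.0 - self.momentum) * value)
            # Project onto a common scale: loss relative to its running magnitude.
            normalized[name] = loss / (self.ema[name] + 1e-8)

        # Prioritize harder tasks: softmax over the normalized magnitudes.
        names = list(normalized.keys())
        mags = torch.stack([normalized[n].detach() for n in names])
        weights = torch.softmax(mags / self.temperature, dim=0)
        return sum(w * normalized[n] for w, n in zip(weights, names))

# Hypothetical usage with three dense prediction tasks:
# balancer = DynamicTaskBalancer(["semseg", "depth", "normals"])
# total_loss = balancer({"semseg": l_seg, "depth": l_depth, "normals": l_norm})
# total_loss.backward()
```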

Related Material


[pdf] [arXiv]
[bibtex]
@InProceedings{Fontana_2025_WACV,
  author    = {Fontana, Maxime and Spratling, Michael and Shi, Miaojing},
  title     = {Optimizing Dense Visual Predictions Through Multi-Task Coherence and Prioritization},
  booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
  month     = {February},
  year      = {2025},
  pages     = {8995-9004}
}