Achievement-Based Training Progress Balancing for Multi-Task Learning

Hayoung Yun, Hanjoo Cho; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 16935-16944

Abstract


Multi-task learning faces two challenging issues: (1) the high cost of annotating labels for all tasks and (2) balancing the training progress of tasks with different natures. To address the label annotation issue, we construct a large-scale "partially annotated" multi-task dataset by combining task-specific datasets. However, the numbers of annotations for individual tasks are imbalanced, which may escalate an imbalance in training progress. To balance the training progress, we propose an achievement-based multi-task loss that modulates training speed based on the "achievement," defined as the ratio of current accuracy to single-task accuracy. We then formulate the multi-task loss as a weighted geometric mean of individual task losses, rather than a weighted sum, to prevent any single task from dominating the loss. In experiments, we evaluated the accuracy and training speed of the proposed multi-task loss on the large-scale multi-task dataset against recent multi-task losses. The proposed loss achieved the best multi-task accuracy without incurring training time overhead. Compared to single-task models, the proposed model achieved accuracy improvements of 1.28%, 1.65%, and 1.18% in object detection, semantic segmentation, and depth estimation, respectively, while reducing computations to 33.73%. Source code is available at https://github.com/samsung/Achievement-based-MTL.
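To illustrate the two ingredients named in the abstract, the following is a minimal sketch of (a) an achievement ratio per task and (b) a weighted geometric mean of task losses. It is not the authors' implementation (see the repository above for that); the function name, the focusing exponent, and the weight normalization are assumptions made for illustration only.

```python
import torch

def achievement_weighted_geometric_loss(task_losses, current_acc, single_task_acc, focusing=2.0):
    """Hypothetical sketch: combine per-task losses as a weighted geometric mean,
    with weights driven by each task's 'achievement' (current / single-task accuracy).
    The focusing exponent and normalization are illustrative assumptions."""
    losses = torch.stack(task_losses)  # (T,) scalar loss per task
    achievement = (torch.tensor(current_acc) / torch.tensor(single_task_acc)).clamp(0.0, 1.0)
    # Lagging tasks (low achievement) receive larger weights, speeding up their training.
    weights = (1.0 - achievement) ** focusing
    weights = weights / weights.sum()
    # Weighted geometric mean, computed in log space for numerical stability:
    # L = prod_i L_i^{w_i}  <=>  log L = sum_i w_i * log L_i
    return torch.exp((weights.detach() * torch.log(losses + 1e-8)).sum())
```

Because the combination is multiplicative rather than additive, no single task loss can dominate the total the way a large term dominates a weighted sum, which is the motivation the abstract gives for the geometric-mean formulation.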

Related Material


BibTeX
@InProceedings{Yun_2023_ICCV,
    author    = {Yun, Hayoung and Cho, Hanjoo},
    title     = {Achievement-Based Training Progress Balancing for Multi-Task Learning},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {16935-16944}
}