Self-Training and Multi-Task Learning for Limited Data: Evaluation Study on Object Detection

Hoàng-Ân Lê, Minh-Tan Pham; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2023, pp. 1003-1009

Abstract


Self-training allows a network to learn from the predictions of a more sophisticated model and therefore typically requires a well-trained teacher model and a mixture of teacher and student data, while multi-task learning jointly optimizes different targets to learn salient interrelationships and requires multi-task annotations for every training example. Despite being particularly data-demanding, both frameworks offer potential for data exploitation if these assumptions can be relaxed. In this paper, we compare self-training for object detection under a deficiency of teacher training data, where students are trained on examples unseen by the teacher, with multi-task learning on partially annotated data, i.e. a single task annotation per training example. Each scenario has its own limitations, yet both are potentially helpful when annotated data are limited. Experimental results show improved performance when a weak teacher with unseen data is used to train a multi-task student. Despite the limited setup, we believe the results demonstrate the potential of multi-task knowledge distillation and self-training, which could be beneficial for future study. Source code and data splits are available at https://lhoangan.github.io/multas
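To make the two relaxed settings concrete, the following is a minimal PyTorch sketch (not the authors' released code) of a single training step combining them: each example carries ground truth for only one task, and a weak, frozen teacher supplies pseudo-labels for the other task. The module, head, and loss names (`MultiTaskStudent`, `det_head`, `seg_head`, `partial_multitask_step`) are illustrative placeholders, not identifiers from the paper or its repository.

```python
# Hypothetical sketch: multi-task student trained with partial annotations,
# where the missing task is supervised by a weak teacher's pseudo-labels.
import torch
import torch.nn as nn

class MultiTaskStudent(nn.Module):
    """Toy two-head student: shared trunk with detection- and segmentation-like heads."""
    def __init__(self, feat_dim=64, num_classes=10):
        super().__init__()
        self.trunk = nn.Sequential(nn.Conv2d(3, feat_dim, 3, padding=1), nn.ReLU())
        self.det_head = nn.Conv2d(feat_dim, num_classes, 1)  # stand-in for a detection head
        self.seg_head = nn.Conv2d(feat_dim, num_classes, 1)  # stand-in for a second task head

    def forward(self, x):
        f = self.trunk(x)
        return self.det_head(f), self.seg_head(f)

def partial_multitask_step(student, teacher, images, labels, task_ids, optimizer):
    """One step: ground truth supervises the annotated task; the frozen teacher
    supervises the other task via pseudo-labels (self-training)."""
    det_out, seg_out = student(images)
    with torch.no_grad():  # weak teacher, possibly trained on data the student never sees
        det_pseudo, seg_pseudo = teacher(images)

    ce = nn.CrossEntropyLoss()
    loss = torch.zeros((), device=images.device)
    for i, task in enumerate(task_ids):  # task_ids[i] in {"det", "seg"}: the annotated task
        if task == "det":
            loss = loss + ce(det_out[i:i+1], labels[i:i+1])                       # ground truth
            loss = loss + ce(seg_out[i:i+1], seg_pseudo[i:i+1].argmax(dim=1))     # pseudo-label
        else:
            loss = loss + ce(seg_out[i:i+1], labels[i:i+1])                       # ground truth
            loss = loss + ce(det_out[i:i+1], det_pseudo[i:i+1].argmax(dim=1))     # pseudo-label

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch the teacher and student can be instances of the same toy class; the point is only that per-example supervision mixes one annotated task with one distilled task, which is the relaxation of the full-annotation and strong-teacher assumptions the abstract describes.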

Related Material


[pdf]
[bibtex]
@InProceedings{Le_2023_ICCV,
  author    = {L\^e, Ho\`ang-\^An and Pham, Minh-Tan},
  title     = {Self-Training and Multi-Task Learning for Limited Data: Evaluation Study on Object Detection},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
  month     = {October},
  year      = {2023},
  pages     = {1003-1009}
}