@InProceedings{Feng_2023_WACV,
    author    = {Feng, Qi and He, Kun and Wen, He and Keskin, Cem and Ye, Yuting},
    title     = {Rethinking the Data Annotation Process for Multi-View 3D Pose Estimation With Active Learning and Self-Training},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2023},
    pages     = {5695-5704}
}
Rethinking the Data Annotation Process for Multi-View 3D Pose Estimation With Active Learning and Self-Training
Abstract
Pose estimation of the human body and hands is a fundamental problem in computer vision, and learning-based solutions require a large amount of annotated data. In this work, we improve the efficiency of the data annotation process for 3D pose estimation problems with Active Learning (AL) in a multi-view setting. AL selects examples with the highest value to annotate under limited annotation budgets (time and cost), but choosing the selection strategy is often nontrivial. We present a framework to efficiently extend existing single-view AL strategies. We then propose two novel AL strategies that make full use of multi-view geometry. Moreover, we demonstrate additional performance gains by incorporating pseudo-labels computed during the AL process, which is a form of self-training. Our system significantly outperforms simulated annotation baselines in 3D body and hand pose estimation on two large-scale benchmarks: CMU Panoptic Studio and InterHand2.6M. Notably, on CMU Panoptic Studio, we are able to reduce the turn-around time by 60% and annotation cost by 80% when compared to the conventional annotation process.
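The abstract does not specify the paper's AL strategies, but the idea of an acquisition function that "makes full use of multi-view geometry" can be illustrated with a minimal sketch: score each frame by the disagreement between per-view 2D predictions and their triangulated 3D consensus (reprojection error), then spend the annotation budget on the highest-scoring frames. All function names, shapes, and the disagreement measure below are illustrative assumptions, not the authors' method.

```python
import numpy as np

def triangulate(points_2d, proj_mats):
    """DLT triangulation of one keypoint from multiple views.
    points_2d: (V, 2) pixel coords, proj_mats: (V, 3, 4). Returns (3,) 3D point."""
    A = []
    for (x, y), P in zip(points_2d, proj_mats):
        A.append(x * P[2] - P[0])
        A.append(y * P[2] - P[1])
    _, _, vt = np.linalg.svd(np.asarray(A))
    X = vt[-1]                      # null-space vector = homogeneous 3D point
    return X[:3] / X[3]

def reprojection_disagreement(points_2d, proj_mats):
    """Mean pixel error when the triangulated point is projected back to each view.
    Low error = views agree (confident prediction); high error = informative frame."""
    X = np.append(triangulate(points_2d, proj_mats), 1.0)
    errs = []
    for (x, y), P in zip(points_2d, proj_mats):
        p = P @ X
        errs.append(np.hypot(p[0] / p[2] - x, p[1] / p[2] - y))
    return float(np.mean(errs))

def select_frames(pred_2d, proj_mats, budget):
    """Hypothetical multi-view AL acquisition: pred_2d is (F, V, J, 2) for F frames,
    V views, J joints. Returns the `budget` frame indices with highest disagreement."""
    scores = [np.mean([reprojection_disagreement(pred_2d[f, :, j], proj_mats)
                       for j in range(pred_2d.shape[2])])
              for f in range(pred_2d.shape[0])]
    return sorted(np.argsort(scores)[::-1][:budget].tolist())
```

Under this sketch, the remaining (unselected) frames whose triangulations agree well across views could be kept as pseudo-labels, which is one plausible reading of the self-training component described above.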