Meta Omnium: A Benchmark for General-Purpose Learning-To-Learn

Ondrej Bohdal, Yinbing Tian, Yongshuo Zong, Ruchika Chavhan, Da Li, Henry Gouk, Li Guo, Timothy Hospedales; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023, pp. 7693-7703

Abstract


Meta-learning and other approaches to few-shot learning are widely studied for image recognition, and are increasingly applied to other vision tasks such as pose estimation and dense prediction. This naturally raises the question: is there any few-shot meta-learning algorithm capable of generalizing across these diverse task types? To support the community in answering this question, we introduce Meta Omnium, a dataset-of-datasets spanning multiple vision tasks including recognition, keypoint localization, semantic segmentation and regression. We experiment with popular few-shot meta-learning baselines and analyze their ability to generalize across tasks and to transfer knowledge between them. Meta Omnium enables meta-learning researchers to evaluate model generalization to a much wider array of tasks than previously possible, and provides a single framework for evaluating meta-learners across a wide suite of vision applications in a consistent manner.
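
To make the cross-task evaluation setting described in the abstract concrete, the sketch below shows one way such an experiment could be wired up: sample few-shot episodes (support and query sets) from several task types and score a single meta-learner on each, so generalization across task types can be compared on a common scale. This is a minimal illustration under stated assumptions; the Episode class, make_toy_episode, and evaluate helpers are hypothetical names for this sketch and are not the Meta Omnium API.

    # Minimal sketch of cross-task episodic evaluation (hypothetical helpers,
    # not the Meta Omnium API). Synthetic data stands in for real images/labels.
    from dataclasses import dataclass
    from typing import Callable, Dict, List, Sequence, Tuple
    import random

    @dataclass
    class Episode:
        task_type: str                      # e.g. "classification", "segmentation"
        support: List[Tuple[list, list]]    # (input, target) pairs used for adaptation
        query: List[Tuple[list, list]]      # (input, target) pairs used for evaluation

    def make_toy_episode(task_type: str, n_support: int = 5, n_query: int = 15) -> Episode:
        """Generate a synthetic few-shot episode; a real benchmark would load data here."""
        sample = lambda: ([random.random() for _ in range(4)], [random.random()])
        return Episode(task_type,
                       [sample() for _ in range(n_support)],
                       [sample() for _ in range(n_query)])

    def evaluate(meta_learner: Callable[[Episode], float],
                 task_types: Sequence[str],
                 episodes_per_task: int = 10) -> Dict[str, float]:
        """Average per-episode score for each task type, giving one number per task
        so cross-task generalization of a single meta-learner can be compared."""
        results: Dict[str, float] = {}
        for task_type in task_types:
            scores = [meta_learner(make_toy_episode(task_type))
                      for _ in range(episodes_per_task)]
            results[task_type] = sum(scores) / len(scores)
        return results

    if __name__ == "__main__":
        # A dummy meta-learner that ignores the support set and returns a random score.
        dummy = lambda episode: random.random()
        print(evaluate(dummy, ["classification", "keypoints", "segmentation", "regression"]))

In a real run, the dummy meta-learner would be replaced by a method that adapts on the support set (e.g. a fine-tuning or metric-based baseline) and is scored on the query set with the metric appropriate to each task type.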

Related Material


BibTeX
@InProceedings{Bohdal_2023_CVPR,
    author    = {Bohdal, Ondrej and Tian, Yinbing and Zong, Yongshuo and Chavhan, Ruchika and Li, Da and Gouk, Henry and Guo, Li and Hospedales, Timothy},
    title     = {Meta Omnium: A Benchmark for General-Purpose Learning-To-Learn},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2023},
    pages     = {7693-7703}
}