Quality and Relevance Metrics for Selection of Multimodal Pretraining Data

Roshan Rao, Sudha Rao, Elnaz Nouri, Debadeepta Dey, Asli Celikyilmaz, Bill Dolan; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2020, pp. 956-957

Abstract


Self-supervised pretraining has become a strong force in both language and vision tasks. Current efforts to improve the effectiveness of pretraining focus on improving network architectures or defining new tasks to extract representations from the data. We focus on a third axis, the data itself, to quantify and measure how different sources and quality of data can affect the learned representations. As pretraining datasets grow larger, the cost of pretraining will continue to increase. This issue is especially acute for visuolinguistic data, as the cost of storing and processing image and video data rises quickly. We therefore examine four visuolinguistic datasets (three preexisting datasets and one collected by us) for their utility as pretraining datasets. We define metrics for dataset quality and relevance, propose a method for subsampling large corpora for the data most relevant to a set of downstream multimodal vision and language tasks of interest, and show that this method increases performance across the board for all downstream tasks.
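The abstract's core idea, subsampling a large corpus for the examples most relevant to a set of downstream tasks, can be illustrated with a minimal sketch. Note this is a hypothetical illustration, not the paper's actual metric: it assumes examples have already been embedded (by some encoder), scores each candidate pretraining example by its best cosine similarity to any downstream-task example, and keeps the top fraction.

```python
# Hypothetical sketch of relevance-based subsampling (illustrative only;
# the paper's own quality/relevance metrics may differ).
import numpy as np

def relevance_subsample(candidate_embs, task_embs, keep_fraction=0.5):
    """Return indices of the most task-relevant candidates.

    candidate_embs: (N, d) array of pretraining-example embeddings.
    task_embs:      (M, d) array of downstream-task example embeddings.
    """
    # L2-normalize so dot products are cosine similarities.
    c = candidate_embs / np.linalg.norm(candidate_embs, axis=1, keepdims=True)
    t = task_embs / np.linalg.norm(task_embs, axis=1, keepdims=True)
    # Relevance score: best match against any downstream-task example.
    scores = (c @ t.T).max(axis=1)
    k = max(1, int(len(scores) * keep_fraction))
    # Indices of the k highest-scoring candidates, most relevant first.
    return np.argsort(scores)[::-1][:k]

# Toy usage: 4 orthogonal candidates, one downstream example aligned with
# the first candidate, keeping the top 25% -> candidate 0 is selected.
picked = relevance_subsample(np.eye(4), np.array([[1.0, 0.0, 0.0, 0.0]]), 0.25)
print(picked)  # -> [0]
```

The cost argument in the abstract motivates this shape of filter: scoring embeddings is far cheaper than pretraining on the full corpus, so a one-pass relevance ranking can shrink storage and compute before any expensive multimodal training begins.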

Related Material


[bibtex]
@InProceedings{Rao_2020_CVPR_Workshops,
author = {Rao, Roshan and Rao, Sudha and Nouri, Elnaz and Dey, Debadeepta and Celikyilmaz, Asli and Dolan, Bill},
title = {Quality and Relevance Metrics for Selection of Multimodal Pretraining Data},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2020},
pages = {956-957}
}