Exploring Hate Speech Detection in Multimodal Publications

Raul Gomez, Jaume Gibert, Lluis Gomez, Dimosthenis Karatzas; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2020, pp. 1470-1478

Abstract


In this work we target the problem of hate speech detection in multimodal publications formed by a text and an image. We gather and annotate a large-scale dataset from Twitter, MMHS150K, and propose different models that jointly analyze textual and visual information for hate speech detection, comparing them with unimodal detection. We provide quantitative and qualitative results and analyze the challenges of the proposed task. We find that, even though images are useful for the hate speech detection task, current multimodal models cannot outperform models analyzing only text. We discuss why and open the field and the dataset for further research.
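
The abstract describes models that jointly analyze a tweet's text and its attached image and compares them against unimodal (text-only or image-only) baselines. As an illustrative starting point only, and not the architecture proposed in the paper, the sketch below shows a simple late-fusion classifier in PyTorch that concatenates CNN image features with LSTM text features before classification; all module choices, feature dimensions, and the vocabulary size are placeholder assumptions.

# Minimal late-fusion sketch for image + text classification (illustrative
# only; not the model proposed in the paper). Dimensions, the vocabulary
# size, and the ResNet-50 / LSTM choices are placeholder assumptions.
import torch
import torch.nn as nn
import torchvision.models as models

class LateFusionClassifier(nn.Module):
    def __init__(self, vocab_size=20000, embed_dim=100, text_hidden=150, num_classes=2):
        super().__init__()
        # Visual branch: ResNet-50 with its final classification layer removed
        # (in practice one would load ImageNet-pretrained weights).
        cnn = models.resnet50()
        self.visual = nn.Sequential(*list(cnn.children())[:-1])  # -> (B, 2048, 1, 1)
        # Textual branch: word embeddings followed by an LSTM.
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, text_hidden, batch_first=True)
        # Fusion: concatenate both feature vectors, then classify.
        self.classifier = nn.Sequential(
            nn.Linear(2048 + text_hidden, 512),
            nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(512, num_classes),
        )

    def forward(self, image, tokens):
        v = self.visual(image).flatten(1)           # (B, 2048)
        _, (h, _) = self.lstm(self.embed(tokens))   # h: (1, B, text_hidden)
        return self.classifier(torch.cat([v, h[-1]], dim=1))

if __name__ == "__main__":
    model = LateFusionClassifier()
    images = torch.randn(4, 3, 224, 224)        # batch of tweet images
    tokens = torch.randint(1, 20000, (4, 30))   # batch of tokenized tweet text
    print(model(images, tokens).shape)          # torch.Size([4, 2])

In a real experiment on MMHS150K, the visual branch would typically start from ImageNet-pretrained weights and the full model would be trained end-to-end on tweet text/image pairs; the paper's actual multimodal variants differ from this simple concatenation baseline.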

Related Material


[pdf]
[bibtex]
@InProceedings{Gomez_2020_WACV,
author = {Gomez, Raul and Gibert, Jaume and Gomez, Lluis and Karatzas, Dimosthenis},
title = {Exploring Hate Speech Detection in Multimodal Publications},
booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
month = {March},
year = {2020}
}