DisplaceNet: Recognising Displaced People from Images by Exploiting Dominance Level

Grigorios Kalliatakis, Shoaib Ehsan, Maria Fasli, Klaus D McDonald-Maier; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2019, pp. 33-38

Abstract


Every year millions of men, women and children are forced to leave their homes and seek refuge from wars, human rights violations, persecution, and natural disasters. The number of forcibly displaced people rose at a record rate of 44,400 every day throughout 2017, raising the cumulative total to 68.5 million at the year's end, overtaking the total population of the United Kingdom. Up to 85% of the forcibly displaced find refuge in low- and middle-income countries, calling for increased humanitarian assistance worldwide. To reduce the amount of manual labour required for human-rights-related image analysis, we introduce DisplaceNet, a novel model which infers potentially displaced people from images by integrating the control level of the situation and a conventional convolutional neural network (CNN) classifier into one framework for image classification. Experimental results show that DisplaceNet achieves up to a 4% gain in coverage (the proportion of a dataset for which a classifier is able to produce a prediction) over the sole use of a CNN classifier. Our dataset, code and trained models will be available online at https://github.com/GKalliatakis/DisplaceNet
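The coverage metric reported above can be made concrete with a small sketch. This is an illustrative example only, not the paper's implementation: it assumes a classifier that abstains whenever its top softmax score falls below a hypothetical confidence threshold, and measures the fraction of samples on which it commits to a prediction.

```python
def coverage(probabilities, threshold=0.5):
    """Fraction of samples whose top class score meets the threshold.

    probabilities: list of per-sample softmax score vectors.
    threshold: hypothetical confidence cut-off below which the
               classifier abstains (assumption for this sketch).
    """
    committed = [max(p) >= threshold for p in probabilities]
    return sum(committed) / len(probabilities)

# Toy softmax outputs for four images (two classes: displaced / not displaced).
scores = [[0.9, 0.1], [0.55, 0.45], [0.48, 0.52], [0.3, 0.7]]
print(coverage(scores, threshold=0.6))  # 2 of 4 samples reach 0.6 -> 0.5
```

Under this reading, a 4% coverage gain means the combined model commits to a confident prediction on 4% more of the dataset than the CNN classifier alone.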

Related Material


[pdf] [dataset]
[bibtex]
@InProceedings{Kalliatakis_2019_CVPR_Workshops,
author = {Kalliatakis, Grigorios and Ehsan, Shoaib and Fasli, Maria and McDonald-Maier, Klaus D.},
title = {DisplaceNet: Recognising Displaced People from Images by Exploiting Dominance Level},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2019}
}