Context-Guided Super-Class Inference for Zero-Shot Detection

Yanan Li, Yilan Shao, Donghui Wang; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2020, pp. 944-945

Abstract


Zero-shot object detection (ZSD) is a recently proposed research problem that aims to simultaneously locate and recognize objects of previously unseen classes. Existing algorithms usually formulate it as a simple combination of a typical detection framework and a zero-shot classifier, learning a visual-semantic mapping from the visual features of bounding-box proposals to the semantic embeddings of class labels. In this paper, we propose a novel ZSD approach that leverages the context information surrounding objects in an image, following the principle that objects tend to appear in certain contexts. It also incorporates the semantic relations between seen and unseen classes to help recognize the located instances. Comprehensive experiments on the PASCAL VOC and MS COCO datasets show that context and class hierarchy indeed improve detection performance.
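
The following is a minimal NumPy sketch of the visual-semantic mapping baseline the abstract refers to: region-proposal features are projected into the word-embedding space of class labels and unseen classes are scored by cosine similarity, with a crude stand-in for grouping unseen classes under super-classes. All names, dimensions, groupings, and random data below are illustrative assumptions, not the authors' implementation.

import numpy as np

rng = np.random.default_rng(0)

VIS_DIM, SEM_DIM = 2048, 300              # e.g. CNN feature dim, word-vector dim (assumed)
N_PROPOSALS, N_UNSEEN = 5, 4              # toy sizes for illustration

W = rng.normal(size=(SEM_DIM, VIS_DIM)) * 0.01          # mapping learned on seen classes (placeholder)
proposal_feats = rng.normal(size=(N_PROPOSALS, VIS_DIM))  # pooled bounding-box proposal features (random stand-in)
unseen_embeds = rng.normal(size=(N_UNSEEN, SEM_DIM))       # semantic embeddings of unseen class labels (random stand-in)

def l2_normalize(x, axis=-1, eps=1e-12):
    # Normalize rows so that dot products become cosine similarities.
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

# Project visual features into the semantic space and score each proposal
# against every unseen class embedding by cosine similarity.
projected = l2_normalize(proposal_feats @ W.T)
scores = projected @ l2_normalize(unseen_embeds).T       # shape: (proposals, unseen classes)
pred_class = scores.argmax(axis=1)

# Super-class inference (very roughly): group unseen classes under shared
# super-classes and pool scores within each group; the grouping here is
# purely hypothetical.
super_classes = {"vehicle": [0, 1], "animal": [2, 3]}
super_scores = {name: scores[:, idx].max(axis=1) for name, idx in super_classes.items()}

print(pred_class)
print(super_scores)

In the paper, context cues from the surrounding image region and the seen/unseen class hierarchy are used to refine such scores; the snippet above only illustrates the plain visual-semantic baseline that those components build on.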

Related Material


[pdf]
[bibtex]
@InProceedings{Li_2020_CVPR_Workshops,
author = {Li, Yanan and Shao, Yilan and Wang, Donghui},
title = {Context-Guided Super-Class Inference for Zero-Shot Detection},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2020}
}