No-Frills Human-Object Interaction Detection: Factorization, Layout Encodings, and Training Techniques

Tanmay Gupta, Alexander Schwing, Derek Hoiem; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 9677-9685

Abstract


We show that, for human-object interaction detection, a relatively simple factorized model with appearance and layout encodings constructed from pre-trained object detectors outperforms more sophisticated approaches. Our model includes factors for detection scores, human and object appearance, coarse layout (box-pair configuration), and, optionally, fine-grained layout (human pose). We also develop training techniques that improve learning efficiency by: (1) eliminating a train-inference mismatch; (2) rejecting easy negatives during mini-batch training; and (3) using a ratio of negatives to positives that is two orders of magnitude larger than in existing approaches. We conduct a thorough ablation study to understand the importance of the different factors and training techniques on the challenging HICO-Det dataset.
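To make the factorization concrete, the sketch below scores a single human-object box pair for each interaction class by summing per-factor interaction logits (human appearance, object appearance, box-pair layout, human pose) before a sigmoid and multiplying the result by the detector confidences. This is a minimal illustration assuming precomputed detector features; the class name, layer sizes, and feature dimensions are hypothetical and are not the authors' released code.

import torch
import torch.nn as nn

class FactoredHOIScorer(nn.Module):
    """Scores a (human, object) box pair for each interaction class by
    combining detector confidences with interaction logits from several
    factors (human appearance, object appearance, box-pair layout, pose)."""

    def __init__(self, appearance_dim, layout_dim, pose_dim, num_interactions):
        super().__init__()
        # One small MLP per factor; each produces a logit per interaction class.
        self.human_app = nn.Sequential(nn.Linear(appearance_dim, 512), nn.ReLU(),
                                       nn.Linear(512, num_interactions))
        self.object_app = nn.Sequential(nn.Linear(appearance_dim, 512), nn.ReLU(),
                                        nn.Linear(512, num_interactions))
        self.box_layout = nn.Sequential(nn.Linear(layout_dim, 512), nn.ReLU(),
                                        nn.Linear(512, num_interactions))
        self.pose_layout = nn.Sequential(nn.Linear(pose_dim, 512), nn.ReLU(),
                                         nn.Linear(512, num_interactions))

    def forward(self, human_feat, object_feat, layout_feat, pose_feat,
                human_det_score, object_det_score):
        # Factor logits are summed before a single sigmoid, so each factor
        # contributes multiplicatively to the interaction probability.
        logits = (self.human_app(human_feat) + self.object_app(object_feat)
                  + self.box_layout(layout_feat) + self.pose_layout(pose_feat))
        interaction_prob = torch.sigmoid(logits)
        # Final HOI score: detector confidences times interaction probability.
        return (human_det_score.unsqueeze(-1)
                * object_det_score.unsqueeze(-1)
                * interaction_prob)

The training techniques from the abstract (easy-negative rejection and the large negative-to-positive ratio) would apply to how candidate box pairs are sampled into mini-batches before this scoring step, rather than to the scorer itself.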

Related Material


@InProceedings{Gupta_2019_ICCV,
author = {Gupta, Tanmay and Schwing, Alexander and Hoiem, Derek},
title = {No-Frills Human-Object Interaction Detection: Factorization, Layout Encodings, and Training Techniques},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}