@InProceedings{Zhou_2022_CVPR,
  author    = {Zhou, Desen and Liu, Zhichao and Wang, Jian and Wang, Leshan and Hu, Tao and Ding, Errui and Wang, Jingdong},
  title     = {Human-Object Interaction Detection via Disentangled Transformer},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2022},
  pages     = {19568-19577}
}
Human-Object Interaction Detection via Disentangled Transformer
Abstract
Human-Object Interaction (HOI) detection tackles the problem of jointly localizing and classifying human-object interactions. Existing HOI transformers either adopt a single decoder for triplet prediction, or utilize two parallel decoders to detect individual objects and interactions separately and then compose triplets via a matching process. In contrast, we decouple triplet prediction into human-object pair detection and interaction classification. Our main motivation is that accurately detecting human-object instances and classifying interactions requires learning representations that focus on different regions. To this end, we present the Disentangled Transformer, where both the encoder and decoder are disentangled to facilitate learning of the two subtasks. To associate the predictions of the disentangled decoders, we first generate a unified representation for HOI triplets with a base decoder, and then use it as the input feature of each disentangled decoder. Extensive experiments show that our method outperforms prior work on two public HOI benchmarks by a sizeable margin. Code will be made available.
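The decoding flow sketched in the abstract (base decoder produces a unified triplet representation, which then seeds each disentangled decoder so their per-query predictions stay associated without a matching step) can be illustrated with a minimal numpy sketch. All names, dimensions, and the single cross-attention "decoder layer" are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def attention(q, k, v):
    # scaled dot-product attention (single head, no projections)
    scores = q @ k.T / np.sqrt(q.shape[-1])
    w = np.exp(scores - scores.max(-1, keepdims=True))
    w = w / w.sum(-1, keepdims=True)
    return w @ v

def decoder(queries, memory):
    # one cross-attention step with a residual connection
    # (self-attention and feed-forward sublayers omitted for brevity)
    return queries + attention(queries, memory, memory)

rng = np.random.default_rng(0)
d, n_q, n_tok = 32, 5, 49  # hidden dim, HOI queries, encoder tokens (hypothetical sizes)

# stand-ins for the disentangled encoder's instance/interaction feature memories
mem_inst = rng.standard_normal((n_tok, d))
mem_inter = rng.standard_normal((n_tok, d))

hoi_queries = rng.standard_normal((n_q, d))

# base decoder: one unified representation per HOI triplet query
unified = decoder(hoi_queries, np.concatenate([mem_inst, mem_inter]))

# disentangled decoders take the unified representation as their input queries,
# so the i-th pair prediction and i-th interaction prediction share a query
inst_feat = decoder(unified, mem_inst)    # human-object pair detection branch
inter_feat = decoder(unified, mem_inter)  # interaction classification branch

print(inst_feat.shape, inter_feat.shape)  # (5, 32) (5, 32)
```

Because both branches start from the same per-query `unified` features, triplet association is implicit in the query index rather than recovered by post-hoc matching.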