Agglomerative Transformer for Human-Object Interaction Detection

Danyang Tu, Wei Sun, Guangtao Zhai, Wei Shen; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 21614-21624

Abstract


We propose an agglomerative Transformer (AGER) that enables Transformer-based human-object interaction (HOI) detectors to flexibly exploit extra instance-level cues in a single-stage and end-to-end manner for the first time. AGER acquires instance tokens by dynamically clustering patch tokens and aligning cluster centres to instances with textual guidance, thus enjoying two benefits: 1) Integrality: each instance token is encouraged to contain all discriminative feature regions of an instance, which yields a significant improvement in the extraction of different instance-level cues and subsequently leads to new state-of-the-art performance in HOI detection with 36.75 mAP on HICO-Det. 2) Efficiency: the dynamic clustering mechanism allows AGER to generate instance tokens jointly with the feature learning of the Transformer encoder, eliminating the need for an additional object detector or instance decoder used in prior methods, and thus allowing the extraction of desirable extra cues for HOI detection in a single-stage and end-to-end pipeline. Concretely, AGER reduces GFLOPs by 8.5% and improves FPS by 36%, even compared to a vanilla DETR-like pipeline without extra cue extraction.
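
The abstract describes instance tokens obtained by clustering patch tokens around learnable centres. The following is a minimal PyTorch sketch of that general idea, soft attention-based agglomeration of patch tokens into candidate instance tokens; the module name, sizes, and soft-assignment choice are illustrative assumptions, not the authors' implementation (which also involves textual guidance and joint training with the encoder).

```python
# Minimal sketch (not the authors' code): candidate instance tokens as learnable
# cluster centres that agglomerate patch tokens via attention. All names, sizes,
# and the soft-assignment scheme below are illustrative assumptions.
import torch
import torch.nn as nn


class TokenClustering(nn.Module):
    def __init__(self, dim=256, num_clusters=64):
        super().__init__()
        self.centres = nn.Parameter(torch.randn(num_clusters, dim))  # initial cluster centres
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.to_v = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, patch_tokens):
        # patch_tokens: (B, N, dim) produced by the Transformer encoder
        B = patch_tokens.size(0)
        q = self.to_q(self.centres).unsqueeze(0).expand(B, -1, -1)        # (B, K, dim)
        k = self.to_k(patch_tokens)                                       # (B, N, dim)
        v = self.to_v(patch_tokens)                                       # (B, N, dim)
        # Soft assignment of each patch token to a cluster centre,
        # then aggregation of the assigned patches into instance tokens.
        attn = torch.softmax(q @ k.transpose(1, 2) * self.scale, dim=-1)  # (B, K, N)
        instance_tokens = attn @ v                                        # (B, K, dim)
        return instance_tokens


# Usage: agglomerate 1024 patch tokens into 64 candidate instance tokens.
tokens = torch.randn(2, 1024, 256)
instance_tokens = TokenClustering()(tokens)
print(instance_tokens.shape)  # torch.Size([2, 64, 256])
```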

Related Material


[bibtex]
@InProceedings{Tu_2023_ICCV,
    author    = {Tu, Danyang and Sun, Wei and Zhai, Guangtao and Shen, Wei},
    title     = {Agglomerative Transformer for Human-Object Interaction Detection},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {21614-21624}
}