Dyadformer: A Multi-Modal Transformer for Long-Range Modeling of Dyadic Interactions

David Curto, Albert Clapés, Javier Selva, Sorina Smeureanu, Julio C. S. Jacques Junior, David Gallardo-Pujol, Georgina Guilera, David Leiva, Thomas B. Moeslund, Sergio Escalera, Cristina Palmero; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2021, pp. 2177-2188

Abstract


Personality computing has become an emerging topic in computer vision owing to its wide range of applications. However, most works on the topic have focused on analyzing individuals in isolation, even when applied to interaction scenarios, and only over short periods of time. To address these limitations, we present the Dyadformer, a novel multi-modal multi-subject Transformer architecture that models individual and interpersonal features in dyadic interactions using variable time windows, thus allowing the capture of long-term interdependencies. Our proposed cross-subject layer allows the network to explicitly model interactions between subjects through attentional operations. This proof-of-concept approach shows how multi-modality and joint modeling of both interactants over longer periods of time help to predict individual attributes. With Dyadformer, we improve state-of-the-art self-reported personality inference results on individual subjects on the UDIVA v0.5 dataset.
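The cross-subject layer described above lets one interactant's representation attend to the other's. As a rough illustration of the underlying mechanism, here is a minimal NumPy sketch of single-head, unbatched cross-attention between two subjects' token sequences; the function name, the single-head form, and the randomly initialized projections are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_subject_attention(tokens_a, tokens_b, w_q, w_k, w_v):
    """Hypothetical single-head cross-attention: subject A queries subject B.

    tokens_a: (T_a, d) temporal features for subject A
    tokens_b: (T_b, d) temporal features for subject B
    w_q, w_k, w_v: (d, d) projection matrices (learned in a real model)
    """
    q = tokens_a @ w_q                       # queries from A, (T_a, d)
    k = tokens_b @ w_k                       # keys from B, (T_b, d)
    v = tokens_b @ w_v                       # values from B, (T_b, d)
    scores = q @ k.T / np.sqrt(q.shape[-1])  # scaled dot-product, (T_a, T_b)
    attn = softmax(scores, axis=-1)          # each A-token's weights over B
    return attn @ v, attn                    # attended features and weights
```

Calling the function again with the arguments swapped gives the symmetric direction, subject B attending to subject A, so both interactants' representations can be conditioned on each other.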

Related Material


[bibtex]
@InProceedings{Curto_2021_ICCV,
  author    = {Curto, David and Clap\'es, Albert and Selva, Javier and Smeureanu, Sorina and Junior, Julio C. S. Jacques and Gallardo-Pujol, David and Guilera, Georgina and Leiva, David and Moeslund, Thomas B. and Escalera, Sergio and Palmero, Cristina},
  title     = {Dyadformer: A Multi-Modal Transformer for Long-Range Modeling of Dyadic Interactions},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
  month     = {October},
  year      = {2021},
  pages     = {2177-2188}
}