Can Language Models Learn to Listen?

Evonne Ng, Sanjay Subramanian, Dan Klein, Angjoo Kanazawa, Trevor Darrell, Shiry Ginosar; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 10083-10093

Abstract

We present a framework for generating appropriate facial responses from a listener in dyadic social interactions based on the speaker's words. Given an input transcription of the speaker's words with their timestamps, our approach autoregressively predicts a listener's response: a sequence of listener facial gestures, quantized using a VQ-VAE. Since gesture is a component of language, we propose treating the quantized atomic motion elements as additional language tokens that are input to a transformer-based large language model. Initializing our transformer with the weights of a language model pre-trained only on text yields significantly higher-quality listener responses than training a transformer from scratch. Through quantitative metrics and a qualitative user study, we show that our generated listener motion is fluent and reflective of language semantics. In our evaluation, we analyze the model's ability to use the temporal and semantic aspects of spoken text.
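To make the core idea concrete, the following is a minimal sketch (not the authors' released code) of treating VQ-VAE motion codes as extra vocabulary in a pretrained language model, written in PyTorch with the Hugging Face transformers library. GPT-2 stands in for the pretrained text-only model; the codebook size, the motion_token_id helper, and the toy training example are illustrative assumptions.

    # Sketch: extend a pretrained LM's vocabulary with VQ-VAE motion codes
    # and train autoregressively over interleaved text + motion tokens.
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    num_motion_codes = 256  # assumed VQ-VAE codebook size (illustrative)

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")  # text-only pretrained weights

    # Append one new token id per codebook entry after the text vocabulary;
    # this grows both the input embeddings and the output logits.
    text_vocab_size = len(tokenizer)
    model.resize_token_embeddings(text_vocab_size + num_motion_codes)

    def motion_token_id(code: int) -> int:
        """Map a VQ-VAE codebook index to its language-model token id (assumed helper)."""
        return text_vocab_size + code

    # Toy training step: interleave the speaker's (time-aligned) words with the
    # listener's quantized motion codes and fit with the standard LM loss.
    words = tokenizer("yeah that makes sense", return_tensors="pt").input_ids
    motion = torch.tensor([[motion_token_id(c) for c in (3, 41, 41, 7)]])
    inputs = torch.cat([words, motion], dim=1)
    out = model(inputs, labels=inputs)  # cross-entropy over text and motion tokens alike
    out.loss.backward()

At inference time, under this sketch the listener's motion would be generated autoregressively, e.g. by sampling at each step from the motion-token slice of the logits, with the predicted codes decoded back into facial gestures by the VQ-VAE decoder.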

Related Material

[pdf] [arXiv]
@InProceedings{Ng_2023_ICCV,
    author    = {Ng, Evonne and Subramanian, Sanjay and Klein, Dan and Kanazawa, Angjoo and Darrell, Trevor and Ginosar, Shiry},
    title     = {Can Language Models Learn to Listen?},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {10083-10093}
}