Emotional Listener Portrait: Neural Listener Head Generation with Emotion

Luchuan Song, Guojun Yin, Zhenchao Jin, Xiaoyi Dong, Chenliang Xu; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 20839-20849

Abstract

Listener head generation centers on synthesizing the non-verbal behaviors (e.g., a smile) of a listener in response to the information delivered by a speaker. A significant challenge in generating such responses is the non-deterministic nature of fine-grained facial expressions during a conversation, which vary depending on the emotions and attitudes of both the speaker and the listener. To tackle this problem, we propose the Emotional Listener Portrait (ELP), which treats each fine-grained facial motion as a composition of several discrete motion-codewords and explicitly models the probability distribution of the motions under different emotional contexts in conversation. Benefiting from this "explicit" and "discrete" design, our ELP model can not only automatically generate natural and diverse responses toward a given speaker by sampling from the learned distribution, but also generate controllable responses with a predetermined attitude. Under several quantitative metrics, ELP exhibits significant improvements over previous methods.
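The "discrete motion-codeword" idea described in the abstract can be sketched as a vector-quantization-style codebook plus emotion-conditioned categorical sampling. The sketch below is an illustration only: the codebook size, the `quantize` / `sample_codeword` helpers, and the emotion logits are all hypothetical stand-ins, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical codebook: K discrete motion-codewords, each a D-dim motion feature.
K, D = 8, 4
codebook = rng.normal(size=(K, D))

def quantize(motion):
    """Map a continuous motion feature to the index of its nearest codeword."""
    dists = np.linalg.norm(codebook - motion, axis=1)
    return int(np.argmin(dists))

def sample_codeword(emotion_logits, temperature=1.0):
    """Sample a codeword index from an emotion-conditioned categorical distribution."""
    logits = np.asarray(emotion_logits, dtype=float) / temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return int(rng.choice(K, p=probs))

# Example: an emotional context that puts most probability mass on codeword 2.
happy_logits = np.full(K, -2.0)
happy_logits[2] = 3.0

idx = sample_codeword(happy_logits)   # diverse: resampling yields varied motions
motion = codebook[idx]                # decoded fine-grained motion for one frame
print(idx, motion.shape)
```

Because the distribution is explicit, sampling gives diverse responses, while fixing or re-weighting the logits (e.g., lowering the temperature) gives controllable ones with a predetermined attitude.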

Related Material

[pdf] [supp]
[bibtex]
@InProceedings{Song_2023_ICCV,
    author    = {Song, Luchuan and Yin, Guojun and Jin, Zhenchao and Dong, Xiaoyi and Xu, Chenliang},
    title     = {Emotional Listener Portrait: Neural Listener Head Generation with Emotion},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {20839-20849}
}