Low Bandwidth Video-Chat Compression Using Deep Generative Models

Maxime Oquab, Pierre Stock, Daniel Haziza, Tao Xu, Peizhao Zhang, Onur Celebi, Yana Hasson, Patrick Labatut, Bobo Bose-Kolanu, Thibault Peyronel, Camille Couprie; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2021, pp. 2388-2397

Abstract


To unlock video chat for hundreds of millions of people hindered by poor connectivity or unaffordable data costs, we propose to authentically reconstruct faces on the receiver's device using facial landmarks extracted at the sender's side and transmitted over the network. In this context, we discuss and evaluate the benefits and disadvantages of several deep adversarial approaches. In particular, we explore quality and bandwidth trade-offs for approaches based on static landmarks, dynamic landmarks, or segmentation maps. We design a mobile-compatible architecture based on the first order animation model of Siarohin et al. In addition, we leverage SPADE blocks to refine results in important areas such as the eyes and lips. We compress the networks down to about 3 MB, allowing the models to run in real time on an iPhone 8. This approach enables video calling at a few kbit/s, an order of magnitude lower than currently available alternatives.
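To make the bandwidth claim concrete, the sketch below is a minimal, hypothetical illustration (not the paper's actual protocol) of how a sender could quantize and serialize a small set of facial keypoints per frame and how a receiver could decode them before driving a generator. The specific numbers (10 keypoints, 8-bit coordinates, 15 fps) are assumptions chosen for illustration, yet they already land in the few-kbit/s range quoted in the abstract.

```python
import numpy as np

# Illustrative assumptions (not the paper's exact settings): 10 2-D keypoints,
# each coordinate quantized to 8 bits, transmitted at 15 frames per second.
NUM_KEYPOINTS = 10
FPS = 15


def encode_frame(keypoints: np.ndarray) -> bytes:
    """Quantize normalized keypoint coordinates in [0, 1] to one byte each."""
    assert keypoints.shape == (NUM_KEYPOINTS, 2)
    quantized = np.clip(np.round(keypoints * 255), 0, 255).astype(np.uint8)
    return quantized.tobytes()


def decode_frame(payload: bytes) -> np.ndarray:
    """Recover approximate keypoint coordinates on the receiver's side."""
    quantized = np.frombuffer(payload, dtype=np.uint8).reshape(NUM_KEYPOINTS, 2)
    return quantized.astype(np.float32) / 255.0


if __name__ == "__main__":
    keypoints = np.random.rand(NUM_KEYPOINTS, 2).astype(np.float32)
    payload = encode_frame(keypoints)
    recovered = decode_frame(payload)

    bits_per_frame = len(payload) * 8
    kbps = bits_per_frame * FPS / 1000
    print(f"payload per frame: {len(payload)} bytes")
    print(f"raw keypoint bitrate at {FPS} fps: {kbps:.1f} kbit/s")
    # On the receiver, `recovered` would drive a generator network (e.g. a
    # first-order-motion-style warping module refined with SPADE blocks)
    # together with a reference image of the caller to synthesize the frame.
```

With these assumptions, each frame costs 20 bytes, giving roughly 2.4 kbit/s before any transport overhead, which is consistent with the order-of-magnitude savings the abstract describes.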

Related Material


BibTeX:
@InProceedings{Oquab_2021_CVPR,
    author    = {Oquab, Maxime and Stock, Pierre and Haziza, Daniel and Xu, Tao and Zhang, Peizhao and Celebi, Onur and Hasson, Yana and Labatut, Patrick and Bose-Kolanu, Bobo and Peyronel, Thibault and Couprie, Camille},
    title     = {Low Bandwidth Video-Chat Compression Using Deep Generative Models},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2021},
    pages     = {2388-2397}
}