Show Your Face: Restoring Complete Facial Images From Partial Observations for VR Meeting

Zheng Chen, Zhiqi Zhang, Junsong Yuan, Yi Xu, Lantao Liu; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2024, pp. 8688-8697

Abstract


Virtual Reality (VR) headsets allow users to interact with the virtual world. However, the device physically blocks visual contact among users, causing significant inconvenience in VR meetings. To address this issue, studies have been conducted to restore human faces from images captured by Headset Mounted Cameras (HMC). Unfortunately, existing approaches rely heavily on high-resolution person-specific 3D models, which are prohibitively expensive to apply in large-scale scenarios. Our goal is to design an efficient framework for restoring users' facial data in VR meetings. Specifically, we first build a new dataset, named the Facial Image Composition (FIC) dataset, which approximates real HMC images from a VR headset. By leveraging the heterogeneity of the HMC images, we decompose the restoration problem into a local geometry transformation and a global color/style fusion. We then propose a 2D lightweight facial image composition network (FIC-Net), in which three independent local models transform the raw HMC patches and a global model fuses the transformed patches with a pre-recorded reference image. Finally, we propose a stage-wise training strategy to improve the generalization of FIC-Net. We have validated the effectiveness of the proposed FIC-Net through extensive experiments.
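The pipeline described above can be sketched in a few lines. This is a hypothetical toy illustration of the decomposition only, not the paper's implementation: `local_transform` and `global_fusion` are placeholder functions standing in for the learned local and global models, and the patch regions (two eyes and a mouth) are assumed for illustration.

```python
import numpy as np

def local_transform(patch: np.ndarray) -> np.ndarray:
    """Placeholder for a learned local geometry transformation
    applied independently to each raw HMC patch."""
    return np.clip(patch, 0.0, 1.0)  # identity stand-in

def global_fusion(reference: np.ndarray, patches, regions) -> np.ndarray:
    """Placeholder for the global color/style fusion model:
    paste transformed patches into the reference image and blend."""
    out = reference.copy()
    for patch, (y, x) in zip(patches, regions):
        h, w = patch.shape[:2]
        # Simple alpha blend stands in for the learned style fusion.
        out[y:y + h, x:x + w] = 0.5 * out[y:y + h, x:x + w] + 0.5 * patch
    return out

# Toy data: a 64x64 grayscale reference face and three 16x16 HMC patches.
rng = np.random.default_rng(0)
reference = rng.random((64, 64))
hmc_patches = [rng.random((16, 16)) for _ in range(3)]
regions = [(8, 8), (8, 40), (40, 24)]  # assumed eye/eye/mouth locations

restored = global_fusion(
    reference, [local_transform(p) for p in hmc_patches], regions
)
assert restored.shape == reference.shape
```

The key design point the abstract highlights is the split itself: handling geometry locally per patch keeps the models small, while a single global pass reconciles color and style with the reference image.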

Related Material


[bibtex]
@InProceedings{Chen_2024_WACV,
  author    = {Chen, Zheng and Zhang, Zhiqi and Yuan, Junsong and Xu, Yi and Liu, Lantao},
  title     = {Show Your Face: Restoring Complete Facial Images From Partial Observations for VR Meeting},
  booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
  month     = {January},
  year      = {2024},
  pages     = {8688-8697}
}