BlendFace: Re-designing Identity Encoders for Face-Swapping

Kaede Shiohara, Xingchao Yang, Takafumi Taketomi; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 7634-7644

Abstract


Great advances in generative adversarial networks and face recognition models in computer vision have made it possible to swap identities in images from a single source. Although many studies appear to have proposed nearly satisfactory solutions, we observe that previous methods still suffer from identity-attribute entanglement, which causes undesired attribute swapping, because widely used identity encoders, e.g., ArcFace, carry crucial attribute biases owing to their pretraining on face recognition tasks. To address this issue, we design BlendFace, a novel identity encoder for face-swapping. The key idea behind BlendFace is that training face recognition models on blended images, whose attributes are replaced with those of another person, mitigates inter-personal biases such as hairstyles and head shapes. BlendFace feeds disentangled identity features into generators and, used as an identity loss function, guides generators properly. Extensive experiments demonstrate that BlendFace improves identity-attribute disentanglement in face-swapping models while maintaining quantitative performance comparable to previous methods.
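To make the key idea concrete, below is a minimal PyTorch sketch (not the authors' released code) of the two ingredients the abstract describes: constructing a blended training image whose attribute regions are replaced with those of another person, and using a frozen identity encoder as an identity loss that guides a face-swapping generator. The names blend_attributes and IdentityLoss, the mask-based blending, and the cosine-distance loss are illustrative assumptions about how such a pipeline is commonly set up, not a description of the paper's exact implementation.

import torch
import torch.nn.functional as F

def blend_attributes(face, other, mask):
    # Hypothetical helper: replace attribute regions (e.g., hairstyle,
    # head shape) of `face` with those of `other`. `mask` is 1 where
    # pixels are taken from `other` and 0 where the identity is kept.
    return mask * other + (1.0 - mask) * face

class IdentityLoss(torch.nn.Module):
    # A common identity-loss pattern: penalize the cosine distance
    # between embeddings of the swapped result and the source face,
    # computed by a frozen, pretrained identity encoder (here assumed
    # to be a BlendFace-style encoder module).
    def __init__(self, encoder):
        super().__init__()
        self.encoder = encoder.eval()
        for p in self.encoder.parameters():
            p.requires_grad_(False)

    def forward(self, swapped, source):
        z_swap = F.normalize(self.encoder(swapped), dim=-1)
        z_src = F.normalize(self.encoder(source), dim=-1)
        # 1 - cosine similarity: zero when identities match exactly.
        return (1.0 - (z_swap * z_src).sum(dim=-1)).mean()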

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Shiohara_2023_ICCV,
    author    = {Shiohara, Kaede and Yang, Xingchao and Taketomi, Takafumi},
    title     = {BlendFace: Re-designing Identity Encoders for Face-Swapping},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {7634-7644}
}