ComFace: Facial Representation Learning with Synthetic Data for Comparing Faces

Yusuke Akamatsu, Terumi Umematsu, Hitoshi Imaoka, Shizuko Gomi, Hideo Tsurushima; Proceedings of the Winter Conference on Applications of Computer Vision (WACV), 2025, pp. 5263-5273

Abstract


Daily monitoring of intra-personal facial changes associated with health and emotional conditions has great potential to be useful for medical healthcare and emotion recognition fields. However, the approach for capturing intra-personal facial changes is relatively unexplored due to the difficulty of collecting temporally changing face images. In this paper, we propose a facial representation learning method using synthetic images for comparing faces, called ComFace, which is designed to capture intra-personal facial changes. For effective representation learning, ComFace aims to acquire two feature representations, i.e., inter-personal facial differences and intra-personal facial changes. The key point of our method is the use of synthetic face images to overcome the limitations of collecting real intra-personal face images. Facial representations learned by ComFace are transferred to three extensive downstream tasks for comparing faces: estimating facial expression changes, weight changes, and age changes from two face images of the same individual. Our ComFace, trained using only synthetic data, achieves transfer performance comparable to or better than that of general pre-training and state-of-the-art representation learning methods trained using real images.
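The downstream setup described above takes two face images of the same person and regresses the change between them. For illustration only, below is a minimal PyTorch sketch of such a pairwise comparison model, assuming a shared image encoder and a small regression head operating on the difference of the two embeddings; the backbone choice (ResNet-18), module names, and head design are assumptions for this sketch, not the architecture used in the paper.

import torch
import torch.nn as nn
from torchvision import models


class PairwiseChangeRegressor(nn.Module):
    """Illustrative sketch: regress an intra-personal change (e.g., a weight or
    age difference) from two face images of the same person. The backbone, head,
    and use of an embedding difference are assumptions for illustration, not the
    architecture described in the ComFace paper."""

    def __init__(self, embed_dim: int = 512):
        super().__init__()
        # Shared encoder applied to both images (Siamese-style weight sharing).
        self.encoder = models.resnet18(weights=None)
        self.encoder.fc = nn.Identity()  # expose 512-d features
        # Small head mapping the embedding difference to a scalar change.
        self.head = nn.Sequential(
            nn.Linear(embed_dim, 128),
            nn.ReLU(inplace=True),
            nn.Linear(128, 1),
        )

    def forward(self, img_a: torch.Tensor, img_b: torch.Tensor) -> torch.Tensor:
        feat_a = self.encoder(img_a)  # (B, 512)
        feat_b = self.encoder(img_b)  # (B, 512)
        # The predicted change is read off from how the two embeddings differ.
        return self.head(feat_b - feat_a).squeeze(-1)


if __name__ == "__main__":
    model = PairwiseChangeRegressor()
    x1 = torch.randn(2, 3, 224, 224)  # face image at time t
    x2 = torch.randn(2, 3, 224, 224)  # same person's face image at time t'
    print(model(x1, x2).shape)  # torch.Size([2])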

Related Material


[bibtex]
@InProceedings{Akamatsu_2025_WACV,
    author    = {Akamatsu, Yusuke and Umematsu, Terumi and Imaoka, Hitoshi and Gomi, Shizuko and Tsurushima, Hideo},
    title     = {ComFace: Facial Representation Learning with Synthetic Data for Comparing Faces},
    booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
    month     = {February},
    year      = {2025},
    pages     = {5263-5273}
}