Face Sketch Matching via Coupled Deep Transform Learning

Shruti Nagpal, Maneet Singh, Richa Singh, Mayank Vatsa, Afzel Noore, Angshul Majumdar; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 5419-5428

Abstract


Face sketch to digital image matching is an important challenge in face recognition that involves matching across different domains. Current research efforts have primarily focused on extracting domain-invariant representations or learning a mapping from one domain to the other. In this research, we propose a novel transform learning based approach termed DeepTransformer, which learns a transformation and mapping function between the features of two domains. The proposed formulation is independent of the input information and can be applied to any existing learned or hand-crafted feature. Since the mapping function is directional in nature, we propose two variants of DeepTransformer: (i) semi-coupled and (ii) symmetrically-coupled deep transform learning. This research also introduces a novel IIIT-D Composite Sketch with Age (CSA) variations database, which contains sketch images of 150 subjects along with age-separated digital photos. The performance of the proposed models is evaluated on a novel application of sketch-to-sketch matching, along with sketch-to-digital photo matching. Experimental results demonstrate the robustness of the proposed models in comparison to existing state-of-the-art sketch matching algorithms and a commercial face recognition system.
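For intuition, the snippet below is a minimal NumPy sketch of one alternating-minimization pass for a semi-coupled transform learning formulation of the kind described above: a transform per domain produces sparse codes, and a directional linear mapping sends sketch codes toward photo codes. All names, the l1/soft-threshold sparsity, the ridge penalty used in place of transform learning's log-det regularizer, and the update order are illustrative assumptions and do not reproduce the authors' exact objective or solver.

```python
import numpy as np

# Assumed simplified objective (not the paper's exact formulation):
#   min_{T1,T2,Z1,Z2,M}  ||T1 X1 - Z1||_F^2 + ||T2 X2 - Z2||_F^2
#                        + ||Z2 - M Z1||_F^2
#                        + lam (||T1||_F^2 + ||T2||_F^2) + l1 sparsity on Z1, Z2
# X1: sketch features, X2: digital-photo features (columns are samples).

def soft_threshold(A, tau):
    """Element-wise soft thresholding: proximal step for the l1 penalty."""
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

def update_transform(X, Z, lam):
    """Ridge-regularized least-squares update for T in ||T X - Z||^2 + lam ||T||^2."""
    d = X.shape[0]
    return Z @ X.T @ np.linalg.inv(X @ X.T + lam * np.eye(d))

def semi_coupled_step(X1, X2, T1, T2, lam=1e-1, tau=1e-2):
    """One illustrative alternating-minimization pass.

    For simplicity the coupling term only drives the mapping update M here;
    a full solver would also feed it back into the code updates.
    """
    # Sparse codes from the current transforms.
    Z1 = soft_threshold(T1 @ X1, tau)
    Z2 = soft_threshold(T2 @ X2, tau)
    # Directional mapping M: sketch codes -> photo codes (regularized least squares).
    k = Z1.shape[0]
    M = Z2 @ Z1.T @ np.linalg.inv(Z1 @ Z1.T + 1e-6 * np.eye(k))
    # Domain-wise transform updates.
    T1 = update_transform(X1, Z1, lam)
    T2 = update_transform(X2, Z2, lam)
    return T1, T2, M, Z1, Z2
```

At test time, under these assumptions, a sketch feature would be encoded with T1, mapped through M, and matched against photo codes produced by T2; the symmetrically-coupled variant would additionally learn a mapping in the reverse direction.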

Related Material


[pdf] [arXiv]
[bibtex]
@InProceedings{Nagpal_2017_ICCV,
author = {Nagpal, Shruti and Singh, Maneet and Singh, Richa and Vatsa, Mayank and Noore, Afzel and Majumdar, Angshul},
title = {Face Sketch Matching via Coupled Deep Transform Learning},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {Oct},
year = {2017}
}