Learning Causal Representation for Training Cross-Domain Pose Estimator via Generative Interventions

Xiheng Zhang, Yongkang Wong, Xiaofei Wu, Juwei Lu, Mohan Kankanhalli, Xiangdong Li, Weidong Geng; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 11270-11280

Abstract


3D pose estimation has attracted increasing attention with the availability of high-quality benchmark datasets. However, prior works show that deep learning models tend to learn spurious correlations and fail to generalize beyond the specific dataset they are trained on. In this work, we take a step towards training robust models for the cross-domain pose estimation task, bringing together ideas from causal representation learning and generative adversarial networks. Specifically, this paper introduces a novel framework for causal representation learning which explicitly exploits the causal structure of the task. We treat domain changes as interventions on images under the data-generation process and steer the generative model to produce counterfactual features. This helps the model learn transferable and causal relations across different domains. Our framework is able to learn with various types of unlabeled datasets. We demonstrate the efficacy of the proposed method on both human and hand pose estimation tasks. Experimental results show that the proposed approach achieves state-of-the-art performance on most datasets under both the domain adaptation and domain generalization settings.
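The sketch below is a minimal, hypothetical illustration of the core idea described in the abstract: treating domain changes as interventions produced by a generative model and enforcing prediction consistency so the pose estimator relies only on pose-relevant (causal) features. It is not the authors' released implementation; the module names (FeatureEncoder, InterventionGenerator, PoseHead), the noise-based perturbation, and the loss weighting are all assumptions made for illustration.

```python
# Conceptual sketch (assumed, not the authors' code): train a pose regressor to be
# invariant to generative "interventions" on the input image.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureEncoder(nn.Module):
    """Backbone mapping images to a pooled feature vector."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )

    def forward(self, x):
        return self.net(x)

class InterventionGenerator(nn.Module):
    """Stand-in for a pretrained generative model that perturbs domain-specific
    factors (style, appearance) while preserving the underlying pose."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1),
        )

    def forward(self, x, noise_scale=0.1):
        # Inject noise to emulate sampling different interventions.
        return x + noise_scale * self.net(x + torch.randn_like(x) * noise_scale)

class PoseHead(nn.Module):
    """Regresses flattened 2D keypoint coordinates from pooled features."""
    def __init__(self, num_joints=17):
        super().__init__()
        self.fc = nn.Linear(64, num_joints * 2)

    def forward(self, feats):
        return self.fc(feats)

encoder, generator, head = FeatureEncoder(), InterventionGenerator(), PoseHead()
opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-4)

images = torch.randn(4, 3, 64, 64)    # labeled source-domain batch (dummy data)
gt_joints = torch.randn(4, 17 * 2)    # ground-truth keypoints (dummy data)

# Counterfactual views: same pose, intervened appearance/domain factors.
with torch.no_grad():
    intervened = generator(images)

pred_orig = head(encoder(images))
pred_cf = head(encoder(intervened))

supervised_loss = F.mse_loss(pred_orig, gt_joints)
# Invariance term: predictions should not change under the intervention,
# pushing the encoder to keep only pose-relevant (causal) features.
consistency_loss = F.mse_loss(pred_cf, pred_orig.detach())

loss = supervised_loss + 0.5 * consistency_loss  # 0.5 is an assumed weight
opt.zero_grad()
loss.backward()
opt.step()
```

In this reading, the consistency term is what encourages domain-transferable representations; the actual paper pairs this idea with adversarially trained generative models and supports various types of unlabeled data, which the toy generator above only stands in for.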

Related Material


[bibtex]
@InProceedings{Zhang_2021_ICCV,
  author    = {Zhang, Xiheng and Wong, Yongkang and Wu, Xiaofei and Lu, Juwei and Kankanhalli, Mohan and Li, Xiangdong and Geng, Weidong},
  title     = {Learning Causal Representation for Training Cross-Domain Pose Estimator via Generative Interventions},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  month     = {October},
  year      = {2021},
  pages     = {11270-11280}
}