Learning Bias-Invariant Representation by Cross-Sample Mutual Information Minimization

Wei Zhu, Haitian Zheng, Haofu Liao, Weijian Li, Jiebo Luo; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 15002-15012

Abstract


Deep learning algorithms mine knowledge from the training data and are therefore likely to inherit the dataset's bias. As a result, the learned model may generalize poorly and even mislead the decision process in real-life applications. We propose a cross-sample adversarial debiasing (CSAD) method to remove the bias information misused by the target task. CSAD explicitly extracts target and bias features disentangled from the latent representation generated by a feature extractor, and then learns to discover and remove the correlation between them. This correlation measurement plays a critical role in adversarial debiasing and is performed by a cross-sample neural mutual information estimator. Moreover, we propose joint content and local structural representation learning to strengthen the mutual information estimation and further improve performance. Thorough experiments on publicly available datasets validate the advantages of the proposed method over state-of-the-art approaches.
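The abstract does not give the exact architecture or training schedule, but the adversarial pattern it describes (a feature extractor with target and bias branches, a neural mutual information estimator measuring their correlation, and the extractor trained to minimize that estimate) can be sketched as below. This is a minimal illustrative sketch assuming a MINE-style Donsker-Varadhan estimator; all module names, dimensions, and the alternating update schedule are assumptions, not the paper's implementation.

```python
# Hypothetical sketch of adversarial debiasing with a neural MI estimator.
# Names, dimensions, and losses are illustrative assumptions only.
import math
import torch
import torch.nn as nn

class MIEstimator(nn.Module):
    """Lower-bounds I(target_feat; bias_feat) from paired batch samples."""
    def __init__(self, dim_t, dim_b, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_t + dim_b, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, t, b):
        # Joint term: score aligned (t_i, b_i) pairs.
        joint = self.net(torch.cat([t, b], dim=1)).mean()
        # Marginal term: shuffle b across the batch to break the pairing.
        b_shuffled = b[torch.randperm(b.size(0))]
        marginal = torch.logsumexp(
            self.net(torch.cat([t, b_shuffled], dim=1)), dim=0
        ).squeeze() - math.log(b.size(0))
        return joint - marginal  # Donsker-Varadhan lower bound on MI

# Toy stand-ins for the feature extractor and the target/bias branches.
encoder = nn.Linear(32, 16)
target_head = nn.Linear(16, 8)
bias_head = nn.Linear(16, 8)
mi_net = MIEstimator(8, 8)

opt_main = torch.optim.Adam(
    list(encoder.parameters()) + list(target_head.parameters())
    + list(bias_head.parameters()), lr=1e-3)
opt_mi = torch.optim.Adam(mi_net.parameters(), lr=1e-3)

x = torch.randn(64, 32)  # dummy batch

# Step 1: train the MI estimator to tighten its bound (maximize the estimate).
z = encoder(x)
t_feat, b_feat = target_head(z), bias_head(z)
mi_loss = -mi_net(t_feat.detach(), b_feat.detach())
opt_mi.zero_grad(); mi_loss.backward(); opt_mi.step()

# Step 2: adversarially update the encoder and heads to minimize the estimated
# MI, pushing the target features toward independence from the bias features.
z = encoder(x)
t_feat, b_feat = target_head(z), bias_head(z)
debias_loss = mi_net(t_feat, b_feat)  # the full method adds task losses here
opt_main.zero_grad(); debias_loss.backward(); opt_main.step()
```

In practice the two steps would alternate over training batches, with the task (classification) losses added to the encoder update; the paper's cross-sample estimation and joint content/local-structural representation learning are not reproduced in this sketch.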

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Zhu_2021_ICCV,
    author    = {Zhu, Wei and Zheng, Haitian and Liao, Haofu and Li, Weijian and Luo, Jiebo},
    title     = {Learning Bias-Invariant Representation by Cross-Sample Mutual Information Minimization},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {15002-15012}
}