Fair Robust Active Learning by Joint Inconsistency

Tsung-Han Wu, Hung-Ting Su, Shang-Tse Chen, Winston H. Hsu; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2023, pp. 3622-3631

Abstract


We introduce a new learning framework, Fair Robust Active Learning (FRAL), generalizing conventional active learning to fair and adversarially robust scenarios. This framework enables us to achieve fair performance and fair robustness with limited labeled data, which is essential for various annotation-expensive visual applications with safety-critical needs. However, existing fairness-aware data selection strategies face two challenges when applied to the FRAL framework: they are either ineffective under severe data imbalance or inefficient due to the heavy computation of adversarial training. To address these issues, we develop a novel Joint INconsistency (JIN) method that exploits prediction inconsistencies between benign and adversarial inputs and between standard and robust models. By leveraging these two types of easy-to-compute inconsistencies simultaneously, JIN can identify valuable samples that contribute more to fairness gains and class imbalance mitigation in both standard and adversarially robust settings. Extensive experiments on diverse datasets and sensitive groups demonstrate that our approach outperforms existing active data selection baselines, achieving fair performance and fair robustness under white-box PGD attacks.
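To make the selection idea concrete, the following is a minimal sketch of how the two inconsistency signals described above could be combined into a single acquisition score. All function names, the choice of KL divergence as the inconsistency measure, and the mixing weight `alpha` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Row-wise KL divergence between two batches of probability vectors."""
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return np.sum(p * np.log(p / q), axis=1)

def joint_inconsistency(std_benign, std_adv, rob_benign, alpha=0.5):
    """Combine the two easy-to-compute inconsistencies named in the abstract:
    (1) benign vs. adversarial predictions (here: of the standard model), and
    (2) standard vs. robust model predictions on benign inputs.
    `alpha` (a hypothetical mixing weight) balances the two terms."""
    adv_inconsistency = kl_divergence(std_benign, std_adv)
    model_inconsistency = kl_divergence(std_benign, rob_benign)
    return alpha * adv_inconsistency + (1 - alpha) * model_inconsistency

def select_for_labeling(scores, budget):
    """Pick the unlabeled indices with the highest joint-inconsistency scores."""
    return np.argsort(scores)[::-1][:budget]

# Toy example: sample 1's predictions flip under attack, so it scores higher.
std_benign = np.array([[0.9, 0.1], [0.5, 0.5]])
std_adv    = np.array([[0.9, 0.1], [0.1, 0.9]])
rob_benign = np.array([[0.9, 0.1], [0.5, 0.5]])
scores = joint_inconsistency(std_benign, std_adv, rob_benign)
picked = select_for_labeling(scores, budget=1)
```

In this sketch, samples whose predictions are stable across both comparisons get a score near zero and are deprioritized, while samples where the attack or the model gap changes the prediction are surfaced for annotation.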

Related Material


@InProceedings{Wu_2023_ICCV,
  author    = {Wu, Tsung-Han and Su, Hung-Ting and Chen, Shang-Tse and Hsu, Winston H.},
  title     = {Fair Robust Active Learning by Joint Inconsistency},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
  month     = {October},
  year      = {2023},
  pages     = {3622-3631}
}