[pdf] [supp] [bibtex]

@InProceedings{Long_2024_CVPR,
  author    = {Long, Lin and Wang, Haobo and Jiang, Zhijie and Feng, Lei and Yao, Chang and Chen, Gang and Zhao, Junbo},
  title     = {Positive-Unlabeled Learning by Latent Group-Aware Meta Disambiguation},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2024},
  pages     = {23138-23147}
}
Positive-Unlabeled Learning by Latent Group-Aware Meta Disambiguation
Abstract
Positive-Unlabeled (PU) learning aims to train a binary classifier using minimal positive data supplemented by a substantially larger pool of unlabeled data, in the specific absence of explicitly annotated negatives. Despite its straightforward nature as a binary classification task, the currently best-performing PU algorithms still largely lag behind their supervised counterparts. In this work, we identify that the primary bottleneck lies in the difficulty of deriving discriminative representations under unreliable binary supervision with poor semantics, which subsequently hinders common label disambiguation procedures. To cope with this problem, we propose a novel PU learning framework, namely Latent Group-Aware Meta Disambiguation (LaGAM), which incorporates a hierarchical contrastive learning module to extract the underlying grouping semantics within PU data and produce compact representations. As a result, LaGAM enables a more aggressive label disambiguation strategy, where we enhance the robustness of training by iteratively distilling the true labels of unlabeled data directly through meta-learning. Extensive experiments show that LaGAM significantly outperforms current state-of-the-art methods by an average of 6.8% accuracy on common benchmarks, approaching the supervised baseline. We also provide comprehensive ablations as well as visualized analysis to verify the effectiveness of LaGAM.
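To make the PU setting concrete, the sketch below implements the standard non-negative PU (nnPU) risk estimator of Kiryo et al. (2017), a common baseline objective for training a binary classifier from positive and unlabeled data only. This is background illustration, not LaGAM's objective; the class prior, the logistic surrogate loss, and the variable names are assumptions made for this example.

```python
import torch
import torch.nn.functional as F

def nnpu_risk(logits_p, logits_u, prior=0.5):
    """Non-negative PU risk (Kiryo et al., 2017), shown as a generic PU
    baseline objective; it is not the LaGAM training loss.

    logits_p: classifier outputs on labeled positives, shape (n_p,)
    logits_u: classifier outputs on unlabeled data,    shape (n_u,)
    prior:    assumed class prior pi_p = P(y = +1)
    """
    # Logistic surrogate loss: softplus(-z) for target +1, softplus(z) for target -1.
    loss_p_pos = F.softplus(-logits_p).mean()  # loss of positives labeled +1
    loss_p_neg = F.softplus(logits_p).mean()   # loss of positives labeled -1
    loss_u_neg = F.softplus(logits_u).mean()   # loss of unlabeled labeled -1

    # Estimated risk on the implicit negative class; clamped at zero so the
    # empirical risk cannot go negative and cause overfitting.
    neg_risk = loss_u_neg - prior * loss_p_neg
    return prior * loss_p_pos + torch.clamp(neg_risk, min=0.0)
```

A training loop would simply minimize this risk over mini-batches drawn from the positive and unlabeled sets; LaGAM instead builds on top of such PU supervision with hierarchical contrastive representation learning and meta-learned label disambiguation, as described in the abstract.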