Contrastive Max-correlation for Multi-view Clustering

Yanghao Deng, Zenghui Wang, Songlin Du; Proceedings of the Asian Conference on Computer Vision (ACCV), 2024, pp. 499-512

Abstract


Multi-view clustering has advantages over single-view clustering because it can fully exploit the complementary information among multiple views. However, most mainstream methods suffer from two drawbacks: 1) they ignore structural conflicts between views, which degrades clustering performance because merging a conflicting view can actually worsen the clustering results; and 2) rather than extracting the maximum correlation between views globally, they operate on individual instances, making the models more susceptible to interference from local noise points. To address these issues, this paper proposes a novel framework, Contrastive Max-correlation for Multi-view Clustering (CMMC), for robust multi-view clustering. The framework incorporates two effective methods. The first, maximum structure correlation learning, enhances the representations used by the downstream task by incorporating complementary structural information. The second, global max-correlation contrastive learning, simultaneously mines view correlations and aligns the views. Because both methods operate globally, CMMC effectively reduces the impact of noisy information. Experiments on various types of multi-view datasets demonstrate that CMMC outperforms existing methods in clustering accuracy and robustness.
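To make the contrast between the two ideas concrete, the sketch below shows, in NumPy, (a) a standard instance-level InfoNCE contrastive loss between two view embeddings, and (b) a CCA-style global cross-view correlation objective computed over the whole batch. This is a generic illustration of the two paradigms the abstract contrasts, not the paper's exact formulation; the function names, the trace-of-correlation form of the global objective, and all hyperparameters are assumptions for illustration.

```python
import numpy as np

def info_nce(z1, z2, tau=0.5):
    """Instance-level contrastive (InfoNCE) loss between two views.

    Each row z1[i] is pulled toward its counterpart z2[i] and pushed
    away from all other rows, so a single noisy instance directly
    perturbs the loss.
    """
    # cosine similarities scaled by temperature
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau
    # cross-entropy with the positive pair on the diagonal
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return float(np.mean(logsumexp - np.diag(sim)))

def global_correlation_loss(z1, z2, eps=1e-8):
    """Global cross-view correlation objective (illustrative).

    Standardizes each feature dimension across the batch, then
    maximizes the mean correlation of matched dimensions between the
    two views -- a batch-level statistic, so no single instance
    dominates.
    """
    z1 = (z1 - z1.mean(axis=0)) / (z1.std(axis=0) + eps)
    z2 = (z2 - z2.mean(axis=0)) / (z2.std(axis=0) + eps)
    n, d = z1.shape
    corr = (z1.T @ z2) / n  # d x d cross-view correlation matrix
    # maximize matched-dimension correlation => minimize negative trace
    return float(-np.trace(corr) / d)
```

For two perfectly aligned views (`z1 == z2`), the global objective reaches its minimum of about -1, while the instance-level loss still depends on how each row compares against every other row in the batch.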

Related Material


[pdf]
[bibtex]
@InProceedings{Deng_2024_ACCV,
    author    = {Deng, Yanghao and Wang, Zenghui and Du, Songlin},
    title     = {Contrastive Max-correlation for Multi-view Clustering},
    booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV)},
    month     = {December},
    year      = {2024},
    pages     = {499-512}
}