Quantum Federated Learning for Multimodal Data: A Modality-Agnostic Approach

Atit Pokharel, Ratun Rahman, Thomas Morris, Dinh C. Nguyen; Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR) Workshops, 2025, pp. 545-554

Abstract


Quantum federated learning (QFL) has recently been introduced to enable distributed, privacy-preserving training of quantum machine learning (QML) models across quantum processors (clients). Despite recent research efforts, existing QFL frameworks predominantly focus on unimodal systems, limiting their applicability to real-world tasks that naturally involve multiple modalities. To fill this significant gap, we present, for the first time, a multimodal approach specifically tailored to the QFL setting, with intermediate fusion performed via quantum entanglement. Furthermore, to address a major bottleneck in multimodal QFL, where the absence of certain modalities during training can degrade model performance, we introduce a Missing Modality Agnostic (MMA) mechanism that isolates untrained quantum circuits, ensuring stable training without corrupted states. Simulation results demonstrate that the proposed multimodal QFL method with MMA improves accuracy by 6.84% under independent and identically distributed (IID) and by 7.25% under non-IID data distributions compared to state-of-the-art methods.
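The two ideas in the abstract, intermediate fusion via an entangling gate and MMA-style isolation of a missing modality's circuit, can be sketched with a tiny statevector simulation. This is an illustrative toy only, not the paper's architecture: the angle encoding, the single CNOT as the fusion gate, and the `fuse`/`fuse_mma` helpers are all assumptions made for the example.

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation gate."""
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2),  np.cos(theta / 2)]])

# CNOT with qubit 0 as control, qubit 1 as target
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

KET0 = np.array([1.0, 0.0])  # |0> basis state

def fuse(x_a, x_b):
    """Intermediate fusion (toy): each modality feature is angle-encoded on
    its own qubit, then a CNOT entangles the two branches before measurement."""
    state = np.kron(ry(x_a) @ KET0, ry(x_b) @ KET0)  # separable product state
    state = CNOT @ state                             # entangling fusion step
    probs = state ** 2
    # <Z> on qubit 0 serves as the fused scalar feature
    return probs[0] + probs[1] - probs[2] - probs[3]

def fuse_mma(x_a, x_b=None):
    """MMA-style isolation (illustrative): if modality b is absent, its
    untrained circuit and the entangling gate are bypassed entirely, so no
    uninitialized parameters can corrupt the shared state."""
    if x_b is None:
        probs = (ry(x_a) @ KET0) ** 2
        return probs[0] - probs[1]  # <Z> on the lone modality-a qubit
    return fuse(x_a, x_b)
```

With both modalities present the measurement depends on the entangled joint state; when one is missing, the surviving branch is measured in isolation, which is the stability property MMA is meant to guarantee.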

Related Material


[bibtex]
@InProceedings{Pokharel_2025_CVPR,
  author    = {Pokharel, Atit and Rahman, Ratun and Morris, Thomas and Nguyen, Dinh C.},
  title     = {Quantum Federated Learning for Multimodal Data: A Modality-Agnostic Approach},
  booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR) Workshops},
  month     = {June},
  year      = {2025},
  pages     = {545-554}
}