
Decentralized federated learning meets Physics-Informed Neural Networks

Alfano G.; Greco S.; Mandaglio D.; Parisi F.; Shahbazian R.; Trubitsyna I.
2025-01-01

Abstract

The integration of domain knowledge into the learning process of artificial intelligence (AI) has received significant attention in recent years. Most approaches proposed so far have focused on centralized machine learning scenarios, with less emphasis on how domain knowledge can be effectively integrated in decentralized settings. In this paper, we address this gap by evaluating the effectiveness of domain knowledge integration in distributed settings, specifically in the context of Decentralized Federated Learning (DFL). We propose the Physics-Informed DFL (PIDFL) architecture, which integrates domain knowledge expressed as differential equations. We introduce a serverless data aggregation algorithm for PIDFL, prove its convergence, and analyze its computational complexity. Comprehensive experiments demonstrate that PIDFL significantly reduces average loss across diverse applications: it achieves, on average, over 40% lower test loss than the baseline DFLA and outperforms benchmark approaches (FedAvg, SegGos, and Scaffold) across a variety of datasets. These results highlight the potential of PIDFL and offer a promising avenue for improving decentralized learning through domain knowledge integration.
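The abstract describes a serverless (peer-to-peer) aggregation scheme for decentralized federated learning. As a minimal illustration of the general idea, the sketch below implements a generic gossip-style averaging step, where each node repeatedly averages its parameter vector with those of its neighbors; this is a standard decentralized-averaging pattern, not the paper's PIDFL algorithm, and all names here are illustrative.

```python
import numpy as np

def gossip_average(params, adjacency, rounds=50):
    """Illustrative serverless aggregation: each node repeatedly averages
    its parameter vector with its neighbors' (uniform weights, including
    itself). Generic gossip averaging, not the paper's PIDFL algorithm."""
    params = np.array(params, dtype=float)
    for _ in range(rounds):
        new = np.empty_like(params)
        for i, row in enumerate(adjacency):
            # Indices of node i's neighbors, plus node i itself.
            neigh = [j for j, a in enumerate(row) if a] + [i]
            new[i] = params[neigh].mean(axis=0)
        params = new
    return params

# Three fully connected nodes starting from different local models;
# repeated averaging drives all nodes toward the global mean (3.0).
adj = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
final = gossip_average([[0.0], [3.0], [6.0]], adj)
```

On a connected graph, each round contracts the spread of the local models, so all nodes converge to a common consensus value without any central server.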
Deep learning
Federated learning
Heterogeneous data
Physics Informed Neural Networks (PINNs)
Files for this record:
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.11770/389130
Warning: the data shown have not been validated by the university.

Citations
  • PMC: not available
  • Scopus: 0
  • Web of Science: 0