Analyzing the Fusion of Federated Learning and Large Language Model

Thakur Dipanwita (Writing – Original Draft Preparation); Guzzo Antonella; Fortino Giancarlo
2025-01-01

Abstract

In Human-Computer Interaction (HCI), Large Language Models (LLMs) play a crucial role in evolving HCI toward Human-Computer Intimacy. As the size of LLMs continues to expand, the limited availability of high-quality training data has emerged as a notable challenge. Federated learning (FL) offers a viable solution by enabling collaborative training across distributed data sources while safeguarding privacy. The intersection of LLMs and FL, however, offers transformative potential that extends beyond merely enhancing LLM performance: FL systems can benefit from the advanced capabilities of LLMs, including their ability to generalize across diverse tasks, while LLMs can in turn address fundamental challenges in distributed learning environments. Integrating the two technologies can revolutionize industries that handle sensitive and private data, such as healthcare, finance, and education. By combining the decentralized nature of FL with the task generalization and problem-solving strengths of LLMs, it becomes possible to build AI applications that are not only highly scalable but also inherently secure. This fusion allows organizations to harness the power of AI without centralizing data, thereby reducing the risk of privacy breaches and ensuring compliance with strict regulatory standards. In this paper, we analyze in depth the role of FL in LLMs and highlight research directions across three categories of FL-LLM fusion.
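The collaborative training the abstract refers to is typically realized by server-side aggregation of client model updates. As a minimal illustration (not taken from the paper), the sketch below shows federated averaging (FedAvg), the canonical FL aggregation rule: each client's weights are averaged, weighted by its local dataset size. The `fedavg` function and the toy two-parameter "models" are hypothetical; real deployments would aggregate LLM parameter tensors or parameter-efficient adapter weights.

```python
# Minimal FedAvg sketch: weights are plain lists of floats standing in
# for model parameters; client_sizes are the local dataset sizes.

def fedavg(client_weights, client_sizes):
    """Return the size-weighted average of the clients' model weights."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    aggregated = [0.0] * dim
    for weights, size in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            aggregated[i] += w * (size / total)
    return aggregated

# Three clients holding different amounts of local data; no raw data
# ever leaves a client -- only the weight vectors are shared.
global_weights = fedavg(
    client_weights=[[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]],
    client_sizes=[10, 20, 70],
)
print(global_weights)  # [4.2, 5.2] -- dominated by the largest client
```

The size weighting is what lets a client with more data (here the third one) pull the global model toward its local optimum, which is also why non-IID client data is a central challenge in FL-LLM fusion.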
Keywords: Federated Learning; Human-Computer Interaction; Large Language Model
Files in this product:
There are no files associated with this product.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.11770/403319
Warning: the displayed data have not been validated by the university.

Citations
  • PMC: not available
  • Scopus: 0
  • Web of Science: not available