
Scaling Machine Learning at the Edge-Cloud: A Distributed Computing Perspective

Marozzo F.; Orsino A.; Talia D.; Trunfio P.
2023-01-01

Abstract

The widespread diffusion of Internet of Things (IoT) devices has led to an exponential growth in the volume of data generated at the edge of the network. With the rapid spread of machine learning (ML)-based applications, performing compute- and resource-intensive learning tasks at the edge has become a critical issue, resulting in the need for scalable and efficient solutions that can overcome the resource constraints of edge devices. This paper analyzes the problem of scaling ML applications and algorithms in the edge-cloud continuum from a distributed computing perspective. In particular, we first highlight the limitations of traditional distributed architectures (e.g., clusters, clouds, and HPC systems) when running ML applications that use data generated at the edge. Next, we discuss how to adapt traditional ML algorithms by combining the benefits of edge computing, such as low-latency processing and privacy preservation of personal user data, with those of cloud computing, such as virtually unlimited computational and storage capabilities. Our analysis provides insights into how properly separated parts of an ML application can be deployed across edge-cloud architectures in order to optimize its execution. Moreover, examples of ML applications and algorithms appropriately adapted for the edge-cloud continuum are shown.
ISBN: 979-8-3503-4649-7
Keywords: cloud computing; distributed machine learning; edge computing; edge-cloud continuum; Internet of Things; machine learning

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.11770/360720
Note: the displayed data have not been validated by the university.

Citations
  • Scopus: 0