Edge Computing Solutions for Distributed Machine Learning

Marozzo F.; Orsino A.; Talia D.; Trunfio P.
2022-01-01

Abstract

The rapid spread of the Internet of Things (IoT), with billions of connected devices, has generated huge amounts of data and calls for decentralized solutions for machine learning. However, performing complex learning tasks at the edge of the network poses great challenges in terms of efficient management of data storage, transfer, and analysis. For these reasons, considerable research and development effort is devoted to adapting machine learning algorithms so that cooperative training and inference on local data occur directly at the edge of the network. This scenario represents a major challenge today due to the limited capacities of edge devices, the heterogeneous technologies with which these devices operate and communicate, and the lack of common software stacks to manage them easily. In this paper, we analyze distributed machine learning algorithms and how they should be adapted to run at the network edge and, if needed, cooperate with the cloud to ensure low latency, energy savings, privacy preservation, and scalability. In particular, we briefly discuss how the main machine learning algorithms have been adapted to work on traditional distributed platforms (such as clusters, clouds, and HPC systems) and the main research work that has enabled these algorithms to run on resource-constrained edge devices. Then, a layered approach for adapting machine learning algorithms to edge-cloud architectures is introduced and discussed. Finally, we conclude the paper by describing some application scenarios that can benefit from this approach.
Year: 2022
ISBN: 978-1-6654-6297-6
Keywords: cloud computing; distributed machine learning; edge computing; edge-cloud continuum; Internet of Things; machine learning

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.11770/360724

Citations
  • Scopus: 13
  • Web of Science: 1