A multi-tiered control framework designed for managing the logistical activities of self-driving vehicles within manufacturing environments
Famularo D.; Franze G.; Giannini F.; Pupo F.; Fortino G.; Tedesco F.
2024-01-01
Abstract
In this paper, autonomous vehicles are considered for addressing logistic operations in manufacturing systems. The starting idea consists in organizing a given group of autonomous robots/vehicles into a finite set of platoons in charge of accomplishing prescribed jobs within the manufacturing system. Three aspects must then be formally addressed: task scheduling, routing decisions and command input computation. Here, a new distributed multi-layer architecture is conceived by combining three methodologies: timed colored Petri nets, deep reinforcement learning and model predictive control. Roughly speaking, timed colored Petri nets are exploited to formally model the manufacturing system so that an optimal task schedule complying with the required jobs and the available vehicles is derived; then, run-time routing decisions are obtained by a distributed reinforcement learning algorithm that exploits the information provided by each vehicle's sensor module; finally, the distributed model predictive control algorithm is built by resorting to a set-theoretic approach where most of the computations are performed offline. A flexible manufacturing system consisting of four machines and a Load/Unload station is used for simulation purposes.
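To illustrate the routing layer described in the abstract, the sketch below trains a tabular Q-learning policy that picks the next node for a vehicle on a small plant graph with a Load/Unload station and four machines. This is a minimal illustration only: the paper uses deep reinforcement learning in a distributed setting, whereas here a single tabular agent is used, and the graph layout, node names ("LU", "M1".."M4") and reward values are assumptions, not taken from the paper.

```python
import random

# Hypothetical plant layout: Load/Unload station "LU" plus four machines.
# Edges represent traversable corridors; the topology is illustrative only.
GRAPH = {
    "LU": ["M1", "M2"],
    "M1": ["LU", "M3"],
    "M2": ["LU", "M4"],
    "M3": ["M1", "M4"],
    "M4": ["M2", "M3"],
}

def train_routing_policy(goal, episodes=2000, alpha=0.5, gamma=0.9,
                         eps=0.2, seed=0):
    """Tabular Q-learning sketch: learn next-node choices that reach `goal`."""
    rng = random.Random(seed)
    # One Q-value per (node, neighbor) pair, i.e. per routing decision.
    Q = {(n, a): 0.0 for n, adj in GRAPH.items() for a in adj}
    for _ in range(episodes):
        node = rng.choice(list(GRAPH))
        for _ in range(20):  # bounded episode length
            if node == goal:
                break
            adj = GRAPH[node]
            # epsilon-greedy exploration over the neighbors of the current node
            if rng.random() < eps:
                a = rng.choice(adj)
            else:
                a = max(adj, key=lambda x: Q[(node, x)])
            # reward reaching the goal, small penalty per extra hop
            r = 1.0 if a == goal else -0.1
            best_next = max(Q[(a, x)] for x in GRAPH[a])
            Q[(node, a)] += alpha * (r + gamma * best_next - Q[(node, a)])
            node = a
    return Q

def next_hop(Q, node):
    """Greedy routing decision at run time: follow the best learned Q-value."""
    return max(GRAPH[node], key=lambda a: Q[(node, a)])
```

After training toward, say, machine "M4", `next_hop(Q, "LU")` selects the two-hop route via "M2" rather than the three-hop route via "M1", which is the kind of run-time routing decision the architecture's middle layer is responsible for.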