
JGPR: a computationally efficient multi-target Gaussian process regression algorithm

Shahbazian R.
2022-01-01

Abstract

Multi-target regression algorithms are designed to predict multiple outputs at the same time, allowing all output variables to be taken into account during the training phase. Despite recent advances, developing a low-cost and highly accurate algorithm in this area of machine learning remains an open challenge. The main difficulty in multi-target regression is how to exploit the information shared between targets in the training and/or test phases. In this paper, we introduce a low-cost multi-target Gaussian process regression (GPR) algorithm, called joint GPR (JGPR), which employs a covariance matrix shared among the targets during the training phase and solves a sub-optimal cost function to optimize the hyperparameters. The proposed strategy considerably reduces the computational complexity of both the training and test phases, and simultaneously avoids overfitting the multi-target regression model to the targets. We have performed extensive experiments on both simulated data and 18 benchmark datasets to compare the proposed method with other multi-target regression algorithms. Experimental results show that the proposed JGPR outperforms the state-of-the-art approaches on most of the given benchmark datasets.
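The computational saving described in the abstract comes from factorizing a single shared covariance matrix once and reusing it for every target. A minimal sketch of that idea is shown below; the function names, the RBF kernel choice, and the fixed hyperparameters are illustrative assumptions, not the authors' actual JGPR implementation (which also optimizes the hyperparameters via a sub-optimal cost function).

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0):
    """Squared-exponential covariance between the rows of A and B."""
    sq_dist = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-0.5 * sq_dist / length_scale**2)

def fit_shared_gpr(X, Y, noise=1e-2, length_scale=1.0):
    """Fit GPR to all targets at once with one shared covariance matrix.

    Y has shape (n_samples, n_targets).  The O(n^3) Cholesky
    factorization is performed once, regardless of the number of
    targets; only the O(n^2) triangular solves scale with n_targets.
    """
    K = rbf_kernel(X, X, length_scale) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    # One column of alpha per target, all obtained from the same factor L.
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, Y))
    return alpha

def predict_shared_gpr(X_train, alpha, X_test, length_scale=1.0):
    """Posterior mean for all targets: shape (n_test, n_targets)."""
    K_star = rbf_kernel(X_test, X_train, length_scale)
    return K_star @ alpha
```

By contrast, fitting an independent GPR per target would repeat the cubic-cost factorization for each output, which is what a shared covariance matrix avoids.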
2022
Machine learning
Gaussian process regression
Multi-task learning
Multi-target regression
Files in this product:
There are no files associated with this product.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.11770/381038
Warning: the displayed data have not been validated by the university.

Citations
  • PMC: ND
  • Scopus: 10
  • Web of Science: 6