BEEF: Balanced English Explanations of Forecasts

Pulice C.; Simari G. I.; Subrahmanian V. S.
2019-01-01

Abstract

Understanding why different machine learning classifiers make specific predictions is a difficult problem, mainly because the inner workings of the algorithms underlying such tools are not amenable to the direct extraction of succinct explanations. In this paper, we address the problem of automatically extracting balanced explanations from predictions generated by any classifier: explanations that cover not only why the prediction might be correct but also why it could be wrong. Our framework, called Balanced English Explanations of Forecasts (BEEF), can generate such explanations in natural language. After showing that the problem of generating explanations is NP-complete, we focus on the development of a heuristic algorithm, empirically showing that it produces high-quality results both in terms of objective measures (with statistically significant effects shown for several parameter variations) and subjective evaluations based on a survey completed by 100 anonymous participants recruited via Amazon Mechanical Turk.
Year: 2019
Keywords: Decision support systems; knowledge engineering; machine learning
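
The abstract does not describe BEEF's internals, so the following is only an illustrative sketch and not the authors' algorithm: it shows one naive, model-agnostic way to assemble a "balanced" explanation by perturbing the features of a black-box classifier's input, sorting them into reasons the prediction may be right and reasons it could be wrong, and rendering both sides as a short English sentence. All names (balanced_explanation, render_english, toy_predict_proba) and the perturbation scheme are assumptions introduced here.

```python
# Illustrative sketch only -- NOT the BEEF algorithm from the paper.
# It mimics the paper's notion of a "balanced" explanation: reasons a
# black-box classifier's prediction may be right, and reasons it could be wrong.
import math
from typing import Callable, Dict, List, Tuple

Probs = Dict[str, float]      # class label -> probability
Instance = Dict[str, float]   # feature name -> value


def balanced_explanation(
    predict_proba: Callable[[Instance], Probs],  # any black-box classifier
    instance: Instance,
    baseline: Instance,                          # "neutral" values to swap in per feature
) -> Tuple[str, List[str], List[str]]:
    """Return (predicted_label, supporting_features, opposing_features)."""
    probs = predict_proba(instance)
    predicted = max(probs, key=probs.get)
    supporting: List[str] = []
    opposing: List[str] = []
    for feature in instance:
        perturbed = dict(instance)
        perturbed[feature] = baseline[feature]   # neutralise this one feature
        delta = probs[predicted] - predict_proba(perturbed)[predicted]
        if delta > 0:
            supporting.append(feature)  # removing it weakens the prediction: a reason it may be right
        elif delta < 0:
            opposing.append(feature)    # removing it strengthens the prediction: a reason it could be wrong
    return predicted, supporting, opposing


def render_english(label: str, supporting: List[str], opposing: List[str]) -> str:
    """Crude natural-language rendering of both sides of the explanation."""
    pro = ", ".join(supporting) or "no individual feature"
    con = ", ".join(opposing) or "no individual feature"
    return (
        f"The classifier predicts '{label}'. It may be right because of {pro}; "
        f"it could be wrong because {con} point(s) toward a different outcome."
    )


if __name__ == "__main__":
    # Toy logistic classifier over two features, purely for demonstration.
    def toy_predict_proba(x: Instance) -> Probs:
        score = 2.0 * x["income"] - 1.5 * x["debt"]
        p = 1.0 / (1.0 + math.exp(-score))
        return {"approve": p, "reject": 1.0 - p}

    label, pro, con = balanced_explanation(
        toy_predict_proba,
        instance={"income": 1.2, "debt": 0.8},
        baseline={"income": 0.0, "debt": 0.0},
    )
    print(render_english(label, pro, con))
```

A real system would need a principled way to score and select features, which is where the NP-complete explanation-generation problem and the heuristic algorithm mentioned in the abstract come in; this toy version only illustrates the two-sided output format.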

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.11770/386174

Citations
  • PMC: not available
  • Scopus: 21
  • Web of Science (ISI): not available