The HEIC application framework for implementing XAI-based socio-technical systems

Martinez M. V.; Simari G. I.
2022-01-01

Abstract

The development of data-driven Artificial Intelligence systems has seen successful application in diverse domains related to social platforms; however, many of these systems cannot explain the rationale behind their decisions. This is a major drawback, especially in critical domains such as those related to cybersecurity, of which malicious behavior on social platforms is a clear example. In light of this problem, in this paper we make several contributions: (i) a proposal of desiderata for the explanation of outputs generated by AI-based cybersecurity systems; (ii) a review of approaches in the literature on Explainable AI (XAI) under the lens of both our desiderata and further dimensions that are typically used for examining XAI approaches; (iii) the Hybrid Explainable and Interpretable Cybersecurity (HEIC) application framework that can serve as a roadmap for guiding R&D efforts towards XAI-based socio-technical systems; (iv) an example instantiation of the proposed framework in a news recommendation setting, where a portion of news articles are assumed to be fake news; and (v) exploration of various types of explanations that can help different kinds of users to identify real vs. fake news in social platform settings.
Application frameworks
Cybersecurity
Explainable and Interpretable Artificial Intelligence
Hybrid AI
Malicious behavior in social networks
News recommender systems

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.11770/386180
Notice: the data displayed have not been validated by the university.

Citations
  • PMC: not available
  • Scopus: 6
  • Web of Science (ISI): 4