Answer Set Explanations via Preferred Unit-Provable Unsatisfiable Subsets

Alviano M.
2024-01-01

Abstract

Explainability in Artificial Intelligence (XAI) is crucial for enhancing the transparency and trustworthiness of AI systems. Our work focuses on providing clear explanations for why certain atoms in a given answer set are evaluated as such, hence contributing to the understanding of the decisions made by Answer Set Programming (ASP) systems. We employ simple inference rules to elucidate these decisions, avoiding complex derivations to maintain clarity. Moreover, we introduce the notion of preferred unit-provable unsatisfiable subsets (preferred 1–PUS) to identify relevant portions of ASP encodings, prioritizing program rules over assignments, with the objective of minimizing the assumptions involved in the explanation process. The proposed principles are implemented in a new XAI system.
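
As a rough illustration of the idea (and not of the system described in the paper), the following sketch uses the clingo Python API to explain the value of an atom in a given answer set through an unsatisfiable subset of assumptions: the complement of the atom's value is assumed together with the answer-set values of the other atoms, and the assignment assumptions are greedily shrunk while unsatisfiability is preserved. The toy program, the ANSWER_SET dictionary, and the explain/is_unsat helpers are hypothetical names introduced here; unlike the preferred 1-PUS notion, the sketch keeps the whole program fixed and does not restrict derivations to unit propagation.

# Minimal sketch (assumed, not the paper's system): explain an atom's value in a
# given answer set by an unsatisfiable subset of assignment assumptions, using
# the clingo Python API. The program, ANSWER_SET, and helper names are made up
# for illustration.
import clingo

PROGRAM = """
a :- not b.
b :- not a.
c :- a.
"""

# The answer set to explain: {a, c}, i.e. a and c are true, b is false.
ANSWER_SET = {"a": True, "b": False, "c": True}


def is_unsat(ctl, assumptions):
    # True iff the program has no answer set compatible with the assumptions.
    return ctl.solve(assumptions=assumptions).unsatisfiable is True


def explain(query_atom):
    # Why does query_atom have its value in ANSWER_SET? Assume the complement
    # of that value, plus the values of the other atoms, and greedily shrink
    # the assignment assumptions while the result stays unsatisfiable.
    ctl = clingo.Control()
    ctl.add("base", [], PROGRAM)
    ctl.ground([("base", [])])

    complement = (clingo.Function(query_atom), not ANSWER_SET[query_atom])
    assignment = [(clingo.Function(atom), value)
                  for atom, value in ANSWER_SET.items() if atom != query_atom]
    assert is_unsat(ctl, [complement] + assignment)

    core = list(assignment)
    for assumption in assignment:
        candidate = [a for a in core if a != assumption]
        if is_unsat(ctl, [complement] + candidate):
            core = candidate  # the assumption is not needed; drop it
    return complement, core


if __name__ == "__main__":
    (atom, value), core = explain("c")
    print(f"{atom} cannot be {value} given:",
          ", ".join(f"{a}={v}" for a, v in core))
    # Expected output for this toy program: "c cannot be False given: b=False"

For the toy program above, the answer set {a, c} yields the explanation that c cannot be false once b is false: the rule a :- not b. derives a, and c :- a. then derives c.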
Year: 2024
ISBN: 9783031742088; 9783031742095
Keywords: Answer Set Programming; eXplainable Artificial Intelligence; Knowledge Representation and Reasoning

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.11770/376940