Towards a Conditional and Multi-preferential Approach to Explainability of Neural Network Models in Computational Logic (Extended Abstract)
Alviano M.
2022-01-01
Abstract
This short paper reports on a line of research exploiting a conditional logic of commonsense reasoning to provide a semantic interpretation of neural network models. A “concept-wise” multi-preferential semantics for conditionals is exploited to build a preferential interpretation of a trained neural network starting from its input-output behavior. The approach is general: it was first proposed for Self-Organising Maps (SOMs) and then exploited for Multilayer Perceptrons (MLPs) in the verification of properties of a network by model-checking. An MLP can be regarded as a (fuzzy) conditional knowledge base (KB), in which the synaptic connections correspond to weighted conditionals. Reasoners for many-valued weighted conditional KBs, based on Answer Set solving, are under development to deal with entailment and model-checking.
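The correspondence between synaptic connections and weighted conditionals can be sketched as follows. This is a minimal illustrative Python sketch, not the authors' implementation: the function names, the textual conditional syntax `T(D) |~ C`, and the sigmoid activation are assumptions made here for illustration, under the reading in the abstract that each synaptic weight of a unit becomes the weight of a conditional relating the unit's concept to an input concept.

```python
import math

def unit_as_weighted_conditionals(weights, input_concepts, unit_concept):
    """Read one trained MLP unit as a set of weighted conditionals:
    each synaptic weight w_i becomes the weight of a conditional
    relating unit_concept to input_concepts[i] (hypothetical syntax)."""
    return [(f"T({unit_concept}) |~ {c}", w)
            for c, w in zip(input_concepts, weights)]

def fuzzy_activation(weights, bias, truth_degrees):
    """Many-valued (fuzzy) valuation of the unit's concept: a sigmoid of
    the weighted sum of the input concepts' truth degrees in [0, 1].
    (Sigmoid is an illustrative choice of activation.)"""
    z = sum(w * x for w, x in zip(weights, truth_degrees)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# A unit for concept D fed by concepts C1 and C2 yields two
# weighted conditionals, one per synaptic connection.
kb = unit_as_weighted_conditionals([2.0, -1.5], ["C1", "C2"], "D")
```

In this reading, entailment and model-checking over the resulting weighted conditional KB are then delegated to an ASP-based reasoner, as the abstract indicates.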