Explaining anomalies through semi-supervised Autoencoders

Angiulli F.; Fassetti F.; Ferragina L.; Nisticò S.
2025-01-01

Abstract

This work tackles the problem of designing explainable-by-design anomaly detectors, which provide intelligible explanations of abnormal behaviors in input data observations. In particular, we adopt heatmaps as explanations, where a heatmap can be regarded as a collection of per-feature scores. To explain anomalies, our approach, called AE–XAD (for AutoEncoder-based eXplainable Anomaly Detection), extends a recently introduced semi-supervised variant of the Autoencoder architecture. The main idea of our proposal is to exploit a reconstruction-error strategy for detecting deviating features. Unlike standard Autoencoders, it leverages a semi-supervised loss designed to maximize the distance between the reconstruction and the original value assumed by anomalous features. By means of this strategy, our approach learns to isolate anomalous portions of the input observations using only a few anomalous examples during training. Experimental results highlight that AE–XAD delivers strong performance in explaining anomalies across different scenarios while maintaining a minimal CO2 footprint, showcasing a design that is not only highly effective but also environmentally conscious.
Year: 2025

Keywords: Anomaly detection; Explainability by design; Explainable Artificial Intelligence; Green-aware AI
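This record includes only the abstract above, so the sketch below is an illustrative reading of it rather than the authors' AE–XAD implementation: the network shape, the hinge-style margin used to push reconstructions of anomalous inputs away from their original values, and all hyperparameters are assumptions. It shows the two ingredients the abstract names: a semi-supervised reconstruction loss that treats the few labeled anomalies differently from normal data, and a heatmap explanation obtained as per-feature reconstruction errors.

# Illustrative sketch only (assumed architecture and loss; see the note above).
import torch
import torch.nn as nn

class SimpleAutoencoder(nn.Module):
    # A toy fully-connected autoencoder; the real AE-XAD network is not
    # described in this record, so this shape is an assumption.
    def __init__(self, in_dim: int, latent_dim: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(),
                                     nn.Linear(32, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(),
                                     nn.Linear(32, in_dim))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def semi_supervised_loss(x, x_hat, y, margin=1.0):
    # y is a per-observation label: 0 = normal, 1 = anomalous.
    # Normal samples: minimize the per-feature reconstruction error, as in a
    # standard autoencoder. Anomalous samples: push the reconstruction away
    # from the original values via an assumed hinge term with margin `margin`
    # (the abstract only says the distance is maximized; the exact form is a guess).
    per_feature_err = (x - x_hat) ** 2                      # (batch, features)
    normal_term = (1 - y).unsqueeze(1) * per_feature_err
    anomalous_term = y.unsqueeze(1) * torch.clamp(margin - per_feature_err, min=0.0)
    return (normal_term + anomalous_term).mean()

def heatmap(x, x_hat):
    # Explanation as in the abstract: one score per input feature, here the
    # squared per-feature reconstruction error (higher = more anomalous).
    return (x - x_hat) ** 2

# Tiny usage example on synthetic data: many normal points, few labeled anomalies.
torch.manual_seed(0)
x = torch.randn(64, 16)
y = torch.zeros(64)
y[:4] = 1.0
model = SimpleAutoencoder(in_dim=16)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = semi_supervised_loss(x, model(x), y)
    loss.backward()
    opt.step()
scores = heatmap(x, model(x).detach())  # per-feature anomaly scores

The hinge term is one plausible way to "maximize the distance" without letting the anomalous loss diverge: once the reconstruction error exceeds the margin, that sample contributes nothing further. The published AE–XAD loss may weight or shape this term differently.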
Files in this product:
No files are associated with this product.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.11770/393466
Warning: the displayed data have not been validated by the university.

Citations
  • PMC: ND
  • Scopus: ND
  • Web of Science (ISI): 0