Explaining anomalies through semi-supervised Autoencoders
Angiulli F.; Fassetti F.; Ferragina L.; Nistico' S.
2025-01-01
Abstract
This work tackles the problem of designing explainable-by-design anomaly detectors, which provide intelligible explanations of abnormal behaviors in input data observations. In particular, we adopt heatmaps as explanations, where a heatmap can be regarded as a collection of per-feature scores. To explain anomalies, our approach, called AE–XAD (AutoEncoder-based eXplainable Anomaly Detection), extends a recently introduced semi-supervised variant of the Autoencoder architecture. The main idea of our proposal is to exploit a reconstruction-error strategy for detecting deviating features. Unlike standard Autoencoders, it leverages a semi-supervised loss designed to maximize the distance between the reconstruction and the original values of anomalous features. Through this strategy, our approach learns to isolate the anomalous portions of input observations using only a few anomalous examples during training. Experimental results show that AE–XAD delivers strong performance in explaining anomalies across different scenarios while maintaining a minimal CO2 footprint, a design that is not only highly effective but also environmentally conscious.
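The abstract describes a semi-supervised reconstruction objective (minimize reconstruction error on normal data, push reconstructions away from labeled anomalies) and heatmap explanations built from per-feature errors. The sketch below is a minimal illustration of that idea, not the paper's actual loss: the function names, the inverse-error formulation for anomalous samples, and all hyperparameters are assumptions made for this example.

```python
import torch

def semi_supervised_recon_loss(x, x_hat, is_anomaly, eps=1e-6):
    """Illustrative semi-supervised objective (assumed formulation):
    minimize reconstruction error on normal samples, and approximate
    "maximize the error" on labeled anomalies by minimizing its inverse.
    `is_anomaly` is a 0/1 tensor of shape (batch,)."""
    # Per-sample mean squared reconstruction error -> shape (batch,)
    err = ((x - x_hat) ** 2).flatten(1).mean(dim=1)
    loss_normal = err            # standard Autoencoder term
    loss_anomal = 1.0 / (err + eps)  # one possible way to push the error up
    loss = torch.where(is_anomaly.bool(), loss_anomal, loss_normal)
    return loss.mean()

def anomaly_heatmap(x, x_hat):
    """Heatmap explanation as per-feature absolute reconstruction error."""
    return (x - x_hat).abs()
```

Under this reading, features that the model cannot (or is trained not to) reconstruct accumulate large per-feature errors, so the heatmap highlights the anomalous portions of the input.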


