
Enhancing network security using knowledge graphs and large language models for explainable threat detection

Belcastro L.; Cosentino C.; Marozzo F.
2026-01-01

Abstract

Ensuring robust cybersecurity in modern network environments is increasingly challenging due to the growing complexity and volume of network traffic data. Traditional detection systems often fail to identify stealthy and sophisticated attacks, such as Distributed Denial of Service (DDoS), ARP poisoning, and reconnaissance scans. Moreover, many existing methods lack transparency and produce reports that are difficult for analysts to interpret, slowing both threat comprehension and response. This paper addresses these challenges by introducing a novel methodology that integrates Knowledge Graphs, Explainable AI (XAI) techniques, and Large Language Models (LLMs) to enhance network threat detection, classification, explainability, and automated reporting. The proposed approach employs Graph-BERT to encode complex communication patterns and semantic relationships into enriched knowledge graphs constructed from network logs. To ensure model transparency and interpretability, Local Interpretable Model-Agnostic Explanations (LIME) are incorporated, while structured prompts guide report generation using Generative AI. Experimental results obtained on benchmark datasets demonstrate that the methodology achieves a classification accuracy exceeding 84%, outperforming existing detection techniques. Additionally, a comprehensive evaluation involving ablation analysis, LLM-based assessments, and expert reviews shows that incorporating structured knowledge and explainability significantly enhances the clarity, correctness, and informativeness of generated reports. These findings confirm the system's effectiveness both as a detection mechanism and as a practical tool that helps analysts understand threats and craft informed responses.
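To illustrate the kind of knowledge-graph construction the abstract describes, the following minimal Python sketch builds a simple graph from parsed network-flow records, where hosts are nodes and edges aggregate the protocols and flow counts observed between each pair. The record field names (`src`, `dst`, `proto`, `label`) are illustrative assumptions, not the paper's actual log schema, and the structure is far simpler than the enriched graphs the methodology feeds to Graph-BERT.

```python
from collections import defaultdict

def build_knowledge_graph(flow_records):
    """Aggregate flow records into a dict-based graph.

    Keys are (src, dst) host pairs; values record the set of
    protocols seen and the number of flows between the pair.
    Field names here are hypothetical, chosen for illustration.
    """
    graph = defaultdict(lambda: {"protocols": set(), "flows": 0})
    for rec in flow_records:
        edge = graph[(rec["src"], rec["dst"])]
        edge["protocols"].add(rec["proto"])
        edge["flows"] += 1
    return dict(graph)

# Example: repeated TCP flows between one host pair, plus an ARP frame
records = [
    {"src": "10.0.0.5", "dst": "10.0.0.9", "proto": "TCP", "label": "benign"},
    {"src": "10.0.0.5", "dst": "10.0.0.9", "proto": "TCP", "label": "benign"},
    {"src": "10.0.0.7", "dst": "10.0.0.9", "proto": "ARP", "label": "arp_poisoning"},
]
kg = build_knowledge_graph(records)
```

In the full pipeline such a graph would carry richer semantic attributes (ports, timing, labels) before being encoded by Graph-BERT; this sketch only shows the aggregation step from raw log entries to graph edges.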
Keywords

Anomaly detection; Explainable AI; Generative AI; Intrusion detection; Knowledge graphs; Large language models; Security log analysis
Files in this record:
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.11770/401638
Warning: the displayed data have not been validated by the university.

Citations
  • PMC: not available
  • Scopus: 5
  • Web of Science: 2