BrAInVision: A hybrid explainable Artificial Intelligence framework for brain MRI analysis

Gagliardi, Marco; Maurmo, Danilo; Ruga, Tommaso; Vocaturo, Eugenio; Zumpano, Ester
2025-01-01

Abstract

Brain tumors pose a significant medical challenge, characterized by high incidence and mortality rates, which underscore the critical need for accurate and early diagnosis using minimally invasive techniques such as magnetic resonance imaging (MRI). In this context, Artificial Intelligence (AI) has emerged as a promising tool to enhance diagnostic precision and efficiency. However, its widespread adoption in clinical practice remains limited due to the opacity of AI-driven decision-making processes. To address this challenge, we introduce BrAInVision, a hybrid and doubly explainable AI framework for brain tumor detection. The novelty of our approach lies in the integration of deep learning and traditional machine learning techniques, combining deep-extracted features with hand-crafted features to create a more robust and interpretable classification system. In contrast to conventional single-explanation methods, our framework provides comprehensive explainability through a multi-level analytical approach, enhancing both interpretability and transparency. The first level employs Grad-CAM to visualize the regions of interest identified by the deep feature extractor, while the second level uses Permutation Feature Importance and Partial Dependence Plots to understand and quantify the contribution of specific image characteristics to diagnostic decisions. The proposed framework achieved an F1-score of 97% on the four-class task (Glioma/Meningioma/Pituitary/NoTumor) and an average F1-score of 99% in binary classification (Glioma/NoTumor), outperforming current state-of-the-art methods. The approach has been validated on both the original dataset and an independent dataset with radiologist-annotated tumor masks, demonstrating strong generalizability. Designed for seamless integration into radiologists’ workflows as a decision support system, BrAInVision ensures a high degree of explainability, thereby fostering greater trust in AI-assisted medical decision-making.
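
To make the two-level design concrete, the sketch below shows how such a hybrid, doubly explainable pipeline is typically assembled in Python. It is a minimal illustration under stated assumptions, not the authors' implementation: the ResNet-50 backbone, the GLCM texture statistics standing in for the hand-crafted features, and the random-forest classifier are all placeholder choices. Level 1 is a minimal hook-based Grad-CAM on the deep extractor; level 2 applies scikit-learn's permutation_importance (PFI) and PartialDependenceDisplay (PDP) to the fused feature matrix.

```python
import numpy as np
import torch
from torchvision import models, transforms as T
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance, PartialDependenceDisplay

# Deep branch (assumed backbone): pretrained ResNet-50 with the classification
# head removed, so it emits the 2048-d global-average-pooled embedding.
cnn = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
backbone = torch.nn.Sequential(*list(cnn.children())[:-1])
prep = T.Compose([T.ToPILImage(), T.Resize((224, 224)), T.ToTensor(),
                  T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])

def deep_features(img_rgb: np.ndarray) -> np.ndarray:
    """Deep-extracted features for one RGB image (H x W x 3, uint8)."""
    with torch.no_grad():
        return backbone(prep(img_rgb).unsqueeze(0)).flatten().numpy()

def handcrafted_features(img_gray: np.ndarray) -> np.ndarray:
    """GLCM texture statistics: a plausible stand-in for hand-crafted features."""
    glcm = graycomatrix(img_gray, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.array([graycoprops(glcm, p)[0, 0] for p in props])

def hybrid_features(img_rgb, img_gray):
    """Fusion: concatenate the deep embedding with the named texture features."""
    return np.concatenate([deep_features(img_rgb), handcrafted_features(img_gray)])

def grad_cam(model, x, target_class):
    """Level 1: minimal hook-based Grad-CAM heatmap for the full CNN `model`."""
    store = {}
    layer = model.layer4  # last convolutional block of a ResNet
    h1 = layer.register_forward_hook(lambda m, i, o: store.update(act=o))
    h2 = layer.register_full_backward_hook(lambda m, gi, go: store.update(grad=go[0]))
    model(x)[0, target_class].backward()                        # class-score gradient
    h1.remove(); h2.remove()
    w = store["grad"].mean(dim=(2, 3), keepdim=True)            # pooled gradients
    cam = torch.relu((w * store["act"]).sum(dim=1)).squeeze(0)  # weighted activations
    return (cam / (cam.max() + 1e-8)).detach().numpy()          # heatmap in [0, 1]

# Classification and level-2 explanations on held-out data (X: hybrid feature
# matrix, y: labels such as glioma / meningioma / pituitary / no-tumor):
#   clf = RandomForestClassifier(n_estimators=300).fit(X_train, y_train)
#   pfi = permutation_importance(clf, X_test, y_test, n_repeats=10)        # PFI
#   PartialDependenceDisplay.from_estimator(clf, X_test, features=[2048])  # PDP
```

Because the hand-crafted columns carry fixed, human-readable meanings (e.g. GLCM contrast), PFI and PDP can attribute the classifier's decisions to named image characteristics, which is the second level of explainability the abstract describes.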
Keywords: Brain tumor; Explainable AI; Hand-crafted features; Hybrid features; Machine learning
Files in this record:
There are no files associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.11770/390280

Citations
  • Scopus: 1
  • Web of Science: 0