Understanding Automatic Diagnosis and Classification Processes with Data Visualization
Bruno P.; Calimeri F.
2020-01-01
Abstract
Providing an accurate diagnosis of a disease generally requires complex analyses of many clinical, biological, and pathological variables. In this context, solutions based on machine learning techniques have achieved notable results in the detection and classification of specific diseases, and can hence provide significant clinical decision support. However, such approaches suffer from the lack of proper means for interpreting the choices made by the models, especially in the case of deep learning ones. In order to improve interpretability and explainability in the process of making qualified decisions, we designed a system that allows for a partial opening of this black box by means of proper investigations of the rationale behind the decisions; this can provide improved understanding of which pre-processing steps are crucial for better performance. We tested our approach on artificial neural networks trained for automatic medical diagnosis based on high-dimensional gene expression and clinical data. Our tool analyzes the internal processes performed by the networks during the classification tasks in order to identify the elements of the training process that most influence the network's decisions. We report the results of an experimental analysis aimed at assessing the viability of the proposed approach.