
Exploiting Large Language Models for Enhanced Review Classification Explanations Through Interpretable and Multidimensional Analysis

Cosentino C.; Marozzo F.
2025-01-01

Abstract

In today’s digital world, user-generated reviews play a pivotal role across diverse industries, providing invaluable insights into consumer experiences, preferences, and concerns, and heavily influencing the strategic decisions of businesses. Advanced machine learning techniques, including Large Language Models (LLMs) such as BERT and GPT, have greatly facilitated the analysis of this vast amount of unstructured data, enabling the extraction of actionable insights. However, while achieving high classification accuracy is crucial, the demand for explainability has gained prominence: to effectively exploit user-generated content analytics, it is essential to understand the reasoning behind classification decisions. This paper presents a methodology that leverages interpretable and multidimensional classification to generate explanations for user review classifications. Compared with the basic explanations readily available through systems such as ChatGPT, our methodology classifies reviews across multiple dimensions (such as sentiment, emotion, and topics addressed) to produce more comprehensive explanations. Experimental results demonstrate the precision of our methodology in explaining why a particular review was classified in a specific manner.
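The multidimensional approach the abstract describes — classifying a review along several dimensions and composing those results into an explanation — can be sketched as a minimal toy. This is not the authors' implementation: simple keyword rules stand in for the BERT/GPT classifiers the paper employs, and all function names and keyword lists are hypothetical.

```python
# Toy multidimensional review classification with a composed explanation.
# Keyword rules stand in for LLM-based classifiers; names are illustrative.

def classify_sentiment(review: str) -> str:
    positives = {"great", "excellent", "love", "good"}
    negatives = {"bad", "terrible", "hate", "poor"}
    words = set(review.lower().split())
    score = len(words & positives) - len(words & negatives)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

def classify_emotion(review: str) -> str:
    cues = {"angry": "anger", "disappointed": "sadness", "love": "joy"}
    for word, emotion in cues.items():
        if word in review.lower():
            return emotion
    return "no clear emotion"

def classify_topics(review: str) -> list[str]:
    topic_keywords = {
        "service": {"staff", "service", "waiter"},
        "food": {"food", "meal", "dish"},
        "price": {"price", "expensive", "cheap"},
    }
    words = set(review.lower().split())
    return [topic for topic, kws in topic_keywords.items() if words & kws]

def explain(review: str) -> str:
    # Compose the per-dimension results into a single explanation,
    # mirroring the multidimensional structure described in the abstract.
    sentiment = classify_sentiment(review)
    emotion = classify_emotion(review)
    topics = classify_topics(review)
    topic_text = ", ".join(topics) if topics else "no specific topics"
    return (f"Classified as {sentiment}: the review expresses {emotion} "
            f"and addresses {topic_text}.")

print(explain("I love the food but the service was terrible"))
```

In a realistic setting each `classify_*` function would be an interpretable model or an LLM prompt per dimension; the point of the sketch is only that the explanation is assembled from independent dimensional classifications rather than from a single opaque prediction.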
Publication year: 2025
ISBN: 9783031789762; 9783031789779
Keywords: BERT; ChatGPT; Explainability; GPT; Interpretable Models; Large Language Models; Natural Language Processing
Files in this record:
No files are associated with this record.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.11770/401643
Warning: the data displayed here have not been validated by the university.

Citations
  • Scopus: 0