Using Learning from Answer Sets for Robust Question Answering with LLM
Kareem, Irfan; Borroto, Manuel; Ricca, Francesco; Russo, Alessandra
2024-01-01
Abstract
Large Language Models (LLMs) lack the ability to perform commonsense reasoning and to learn from text. In this work, we present a system, called LLM2LAS, for learning commonsense knowledge from story-based question answering expressed in natural language. LLM2LAS combines the semantic parsing capability of LLMs with ILASP for learning commonsense knowledge expressed as answer set programs. LLM2LAS requires only a few examples of questions and answers to learn general commonsense knowledge and correctly answer unseen questions. An empirical evaluation demonstrates the viability of our approach.
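The abstract gives no implementation details, so the following is only an illustrative sketch of the kind of pipeline it describes, not the authors' actual encoding: an LLM semantically parses a story/question pair into logical facts, which are then wrapped into an ILASP learning task so that general rules can be induced as an answer set program. The parse_with_llm function, the predicate names, and the toy story are all hypothetical; only the general shape of an ILASP context-dependent example and mode declarations is standard.

```python
def parse_with_llm(story: str) -> list[str]:
    """Hypothetical placeholder for the LLM semantic-parsing step:
    here we simply return hand-written facts for one toy story."""
    return ["happens(go(john, kitchen), 1)."]

def build_ilasp_example(ex_id: str, story: str, answer_atom: str) -> str:
    """Wrap the parsed facts into an ILASP context-dependent positive example:
    the answer atom must hold given the context facts of this story."""
    context = "\n    ".join(parse_with_llm(story))
    return f"#pos({ex_id}, {{{answer_atom}}}, {{}}, {{\n    {context}\n}})."

# Language bias (mode declarations) restricting the rules ILASP may learn.
BIAS = """\
#modeh(holds(at(var(person), var(place)), var(time))).
#modeb(1, happens(go(var(person), var(place)), var(time))).
"""

task = BIAS + "\n" + build_ilasp_example(
    "ex1", "John went to the kitchen.", "holds(at(john, kitchen), 2)"
)

# The assembled task text would then be passed to the ILASP solver,
# which searches for an answer set program covering all examples.
print(task)
```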


