Do Large Language Models Show Biases in Causal Learning?

Maria Vanina Martinez; Gerardo I. Simari
2024-01-01

Abstract

Causal learning is the cognitive process of developing the capability of making causal inferences based on available information, often guided by normative principles. This process is prone to errors and biases, such as the illusion of causality, in which people perceive a causal relationship between two variables despite lacking supporting evidence. This cognitive bias has been proposed to underlie many societal problems, including social prejudice, stereotype formation, misinformation, and superstitious thinking. In this research, we investigate whether large language models (LLMs) develop causal illusions, both in real-world and controlled laboratory contexts of causal learning and inference. To this end, we built a dataset of over 2K samples including purely correlational cases, situations with null contingency, and cases where temporal information excludes the possibility of causality by placing the potential effect before the cause. We then prompted the models to make statements or answer causal questions to evaluate their tendencies to infer causation erroneously in these structured settings. Our findings show a strong presence of causal illusion bias in LLMs. Specifically, in open-ended generation tasks involving spurious correlations, the models displayed bias at levels comparable to, or even lower than, those observed in similar studies on human subjects. However, when faced with null-contingency scenarios or temporal cues that negate causal relationships, where the models were required to respond on a 0-100 scale, they exhibited significantly higher bias. These findings suggest that the models have not uniformly, consistently, or reliably internalized the normative principles essential for accurate causal learning.
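To make the evaluation protocol described above concrete, the following is a minimal, illustrative sketch of how a null-contingency causal judgment probe on a 0-100 scale could be posed to an LLM. The model identifier, prompt wording, scenario numbers, and use of the OpenAI client are assumptions for illustration only; they are not taken from the paper.

```python
# Illustrative sketch (not the authors' code) of a null-contingency causal
# judgment probe posed to an LLM on a 0-100 scale.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical null-contingency scenario: recovery occurs at the same rate
# with or without the candidate cause, so the normative causal rating
# should be close to 0.
scenario = (
    "Of 100 patients who took drug X, 70 recovered. "
    "Of 100 patients who did not take drug X, 70 recovered. "
    "On a scale from 0 to 100, how effective is drug X at causing recovery? "
    "Answer with a single number."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model identifier
    messages=[{"role": "user", "content": scenario}],
    temperature=0,
)

# Assumes the model replies with only a number, as the prompt requests.
rating = int(response.choices[0].message.content.strip())

# Ratings well above 0 in this zero-contingency setting would indicate
# the illusion-of-causality bias discussed in the abstract.
print(f"Causal rating: {rating}")
```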
2024
Computer Science - Artificial Intelligence
Computer Science - Computation and Language
Computer Science - Learning
Files for this record:
There are no files associated with this record.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.11770/386209