
Simulating AI-Mediated Clinical Notes for Surgeon–Specialist Communication: A Randomized Vignette Study

Padovano, Marica; Longo, Francesco; Padovano, Antonio; Nardo, Bruno
2025-01-01

Abstract

Interdisciplinary communication is critical to effective patient care, yet in real-world clinical settings it is often hindered by unclear intent, missing information, and mistrust of external data. Despite widespread adoption of electronic health records, communication breakdowns remain a major source of inefficiency and error. This study evaluated whether large language models (LLMs) can mediate surgeon–specialist communication by revising notes to align with recipients' informational needs while preserving clinical meaning. We collected and annotated over 250 de-identified interdisciplinary clinical notes from a multi-specialty hospital according to the SACCIA framework. In a blinded vignette-based simulation experiment, clinicians rated original versus AI-revised documents across seven SACCIA dimensions, and response times were recorded. Results show that AI-revised notes were consistently rated higher in clarity, contextualization, and trust. While they required longer reading times, clinicians answered evaluation questions more quickly, suggesting reduced ambiguity and greater confidence in interpretation. These findings highlight LLMs as potential assistants for cross-checking documentation quality before transmission. For simulation research, the work demonstrates how controlled vignette-based protocols can capture real-world cognitive effects of AI-mediated communication, advancing methodological frameworks for testing intelligent systems in clinical workflows.
Blinded vignette experiment
Clinical communication
Cognitive load
Large Language Models (LLMs)
Surgical documentation
Files for this record:
There are no files associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.11770/399117
Note: the displayed data have not been validated by the university.
