I text generator oltre la “testualità”. Il caso del giornalismo
Giusy Gallo
2023-01-01
Abstract
The recent hype caused by the use of ChatGPT for text generation, including academic writing, calls for the perspective of the language sciences in a discussion of the new textual forms generated by Large Language Models such as BERT, GPT-3 (on which ChatGPT is based) and GPT-4. In this article we propose to examine textuality using journalism as an observation point. Over the last decade, in fact, the news industry has begun to make use of text-generation tools and language models. The aim of this article is to open the debate on how textual criteria can be applied to texts generated by Deep Learning systems for Natural Language Processing. The starting point of our proposal is the set of textual criteria identified by De Beaugrande and Dressler (1981). Although the main aspects of the criteria of textuality were already put to the test by the advent of Web 2.0, the development of Artificial Intelligence has produced several systems that have changed the landscape of journalism. The most obvious case, in use for about a decade, is the adoption of Large Language Models and text generators for writing journalistic articles of different types, from reports to breaking news. Through examples of journalistic texts produced by generative Artificial Intelligence, we will try to emphasize the need to revise certain categories of the philosophy of language and of communication theory.