Minds: Big questions for linguistics in the age of AI

Author: BACKUS, A. et al.
Year: 2023
Published in: Linguistics in the Netherlands, 40, 301-308
Rights: Copyright
Format: Digital

Topics

Media and access to information » Technologies » Artificial intelligence; Media and access to information » Technologies

Details

It is an interesting time to be a linguist. The advent of large language models (LLMs) like GPT-4 has raised fundamental questions about language and its nature, such as whether artificial systems are able to "use" language in a way similar to humans. While such issues are at the core of ongoing scientific and societal debates, the role of linguistics in the development of these technologies has been surprisingly limited. LLMs have become suddenly widespread because of the opportunities they provide, but they also come with several risks. Schools are therefore regulating their use, and some countries, such as Italy, have even prohibited them. LLMs can be wrong, for example, in the sense that they can "hallucinate" and produce well-formed but false statements (Ji et al. 2023). This behavior is due to the way in which these models learn to use language: not by assigning meaning to form, but rather by learning statistical regularities about how words and sentences typically co-occur. As a result, the 'meaning' that these models assign to language is generally not grounded in any experience with the world.
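The idea of learning language from "statistical regularities about how words typically co-occur" can be illustrated with a toy sketch (not from the paper itself): a minimal bigram model that counts which words follow which in a corpus and "predicts" the most frequent continuation, with no access to meaning or the world. The corpus and function names here are purely illustrative assumptions.

```python
from collections import Counter, defaultdict

# Toy corpus: the model only ever sees word sequences, never meanings.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows each preceding word (bigram counts).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict(word):
    # Return the most frequent continuation of `word` in the corpus.
    return bigrams[word].most_common(1)[0][0]

print(predict("sat"))  # "on": the word that most often follows "sat" here
```

A model like this can emit fluent-looking sequences that are statistically plausible yet false, which is the mechanism behind the "hallucinations" the abstract mentions, scaled up enormously in real LLMs.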

Location