Large Language Models: The best linguistic theory, a wrong linguistic theory, or no theory at all?
DOI: https://doi.org/10.18148/zs/2025-2001
Keywords: Large Language Model, Syntax, Innateness, LLM
Abstract
This paper discusses the claim that Large Language Models (LLMs) are the best linguistic theory we currently have. It examines claims that LLMs are wrong linguistic theories and concludes that they are not linguistic theories at all. It is pointed out that Chomsky’s claims about innateness, about transformations as underlying mechanisms of the language faculty, and about plausible representations of linguistic knowledge have been known to be flawed for quite some time, and that we would not have needed LLMs to see this. Chomsky’s theories are not refuted by LLMs in their current form, since LLMs differ in many respects from human brains. However, the tremendous success of LLMs in practical applications makes it more plausible to linguists and laypeople alike that the innateness claims are wrong. It is argued that the usefulness of LLMs is probably limited when it comes to typological work and cross-linguistic generalizations; these require work in theoretical linguistics.
License
Copyright (c) 2025 Stefan Müller

The article is published in Diamond Open Access (DOA) format, under the CC BY 4.0 license (https://creativecommons.org/licenses/by/4.0/).