Journal
Communication & Cognition 2025, Vol. 58, issue 3-4
ISSN
0378-0880
e-ISSN
2953-1454
Title
EVIDENCE FROM LARGE LANGUAGE MODELS: HOW AI VINDICATES CLASSICAL THEORIES OF MEANING (FOR THE SEMANTICS-PRAGMATICS DISTINCTION), CLASSICAL THEORIES OF GRAMMAR (FOR THE SYNTAX-SEMANTICS INTERFACE), AND THE ALIGNMENT OF GRAMMAR AND LOGIC (FOR THE UNITY OF FORM)
Author
J.-M. Kuczynski
Pages
pp. 101 - 122
Keywords
AI Systems, classical computational architectures, classical theories of meaning, connectionist architectures, large language models (LLMs), relationship between syntax and semantics, the alignment of grammatical and logical form
Abstract
In Section 1, this paper argues that the demonstrated capabilities of large language models (LLMs) provide surprising empirical support for classical theories of meaning, particularly the distinction between semantics and pragmatics and the reality of compositional literal meaning (Partee, 2018). While LLMs employ connectionist architectures rather than classical computational ones, their ability to systematically process novel sentences and to distinguish between literal and contextual meaning suggests that key insights of classical semantic theory capture genuine features of linguistic understanding, even if the underlying mechanisms differ from those traditionally posited (Bommasani et al., 2021). In Section 2, the paper argues that the capabilities of LLMs provide comparable empirical support for classical theories of grammar, particularly regarding the relationship between syntax and semantics (Manning et al., 2022). Despite their connectionist architecture, LLMs' ability to process structural relationships independently of meaning while maintaining systematic syntax-semantics mappings suggests that key insights of classical grammatical theory likewise capture genuine features of language (Linzen & Baroni, 2021). In Section 3, the paper argues that the capabilities of LLMs provide surprising empirical support for the alignment of grammatical and logical form (Chowdhury & Linzen, 2021). While philosophers have traditionally posited a divergence between grammatical and logical structure, LLMs' ability to make correct inferences without first-order-logic-style (FOL-style) logical forms suggests that grammatical structure itself guides valid reasoning (Manning et al., 2022).
This indicates that the perceived misalignment between grammatical and logical form may be an artifact of our chosen formal systems rather than a feature of language itself.