Reasoning about stored representations in semantics using the typology of lexicalized quantifiers


  • Maike Züfle
  • Roni Katzir



The typology of lexicalizations in natural languages is highly skewed: some meanings repeatedly receive their own expression as individual morphemes or words in language after language, while many other meanings rarely or never do. For example, while many languages have monomorphemic counterparts of English ‘some’ and ‘all’, no known language has a monomorphemic quantifier that means ‘all or none’ or a quantifier that asserts that its two arguments are of the same cardinality. It is tempting to reason from this typological skew to properties of stored representations. However, it is not generally safe to assume that if something is typologically unattested then it simply cannot be represented or learned. The representational system for stored denotations is just one of several interacting factors that affect the typology, and other factors such as communicative pressure and learnability are likely to shape patterns of lexicalization as well. In this paper we propose to reason from the typology to stored representations by modeling the representational framework, communicative pressure, and learnability directly within an evolutionary model, building on work by Brochhagen et al. (2018). Our empirical focus is a lexicalization asymmetry noted by Horn (1972) in the domain of logical operators and framed within the Aristotelian Square of Opposition. We show that, on certain assumptions, Horn’s lexicalization pattern depends on very particular representational costs in the lexicon: it arises if the storage costs for ‘every’ and ‘some’ are lower than those for ‘not every’ and ‘not some’, but not otherwise.
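To make the quantifier meanings at issue concrete, the four corners of the Aristotelian Square of Opposition can be given standard set-theoretic truth conditions. The sketch below is an illustration of these textbook denotations only, with function names and encoding of our own choosing; it is not the authors' evolutionary model.

```python
# Toy truth conditions for the four corners of the Square of Opposition.
# Each quantifier takes a restrictor set A and a scope set B.

def every(A, B):
    """'Every A is B': A is a subset of B."""
    return A <= B

def some(A, B):
    """'Some A is B': A and B overlap."""
    return bool(A & B)

def not_every(A, B):
    """'Not every A is B': the outer negation of 'every'."""
    return not every(A, B)

def no(A, B):
    """'No A is B' (i.e. 'not some'): the outer negation of 'some'."""
    return not some(A, B)

# Horn's asymmetry concerns lexicalization, not expressibility: 'every',
# 'some', and 'no' are routinely lexicalized as single words, while
# 'not every' (a hypothetical *nall) is not, even though its truth
# conditions are just as easy to state.
A = {1, 2, 3}
B = {1, 2}
print(every(A, B), some(A, B), not_every(A, B), no(A, B))
# → False True True False
```

The point of the example is that all four denotations are equally simple to express compositionally; the asymmetry the paper targets lies in which of them receive their own stored lexical entry.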



2022-12-22 — Updated on 2023-01-26


How to Cite

Züfle, M., & Katzir, R. (2023). Reasoning about stored representations in semantics using the typology of lexicalized quantifiers. Proceedings of Sinn und Bedeutung, 26, 923–944. (Original work published December 22, 2022)