Explanatory adequacy in semantics


Roni Katzir (Tel Aviv University and MIT) and Ezer Rasin (MIT)


Wednesday, January 25th, 2017, from 13:30 to 17:00
SFL, room 124 (61 Rue Pouchet, 75017 Paris)

For two companion presentations on learning and phonology on January 24th and 26th, see here


As part of language acquisition, the child must figure out the semantic denotations of various lexical items. This can be challenging even in the case of lexical categories such as nouns and verbs (as illustrated vividly in Quine's gavagai problem). The problem is potentially even harder in the case of more abstract elements such as quantificational determiners and other logical operators, our main focus here. We start by discussing attempts to characterize the hypothesis space available to the learner, proceed to evaluate the input that the child receives, and conclude with recent computational work on semantic learning.


Part I: Hypothesis space

In order to start discussing the learning challenge, we need to have some idea of the hypothesis space of possible denotations that the child considers. That is, we need to concern ourselves with semantic representations, rather than just with their model-theoretic interpretations. On one rather simple-minded view, logical operators are defined directly in terms of truth tables (along with appropriate type-lifting operations). This would yield a flat space, in which any denotation is as easy to state as any other. Another possibility is that, at least in the case of quantificational determiners, the possible representations are semantic automata (van Benthem 1986). Yet another possibility is that a small number of primitive operators serve as the building blocks of all complex ones (Keenan & Stavi 1986).
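
To make the semantic-automata option concrete, here is a minimal Python sketch of van Benthem's encoding: a restrictor set A and a scope set B are turned into a string over {0, 1}, with a 1 for each member of A that is also in B, and each determiner is an acceptor over such strings. For brevity, the acceptors below are written as predicates on the encoded string rather than as explicit state machines, and the example sets are our own.

```python
# A minimal sketch of the semantic-automata idea (after van Benthem 1986).
# Each member of the restrictor A is written as '1' if it also belongs to
# the scope B, and as '0' otherwise; a determiner accepts or rejects the
# resulting string.

def encode(A, B):
    """Encode the pair (A, B) as a string over {0, 1}."""
    return ''.join('1' if x in B else '0' for x in A)

# Written as predicates for brevity; each is recognizable by a small
# finite-state acceptor ('most', by contrast, requires a pushdown automaton).
DETERMINERS = {
    'every':      lambda s: '0' not in s,      # no member of A outside B
    'some':       lambda s: '1' in s,          # at least one A is a B
    'no':         lambda s: '1' not in s,      # no A is a B
    'at_least_2': lambda s: s.count('1') >= 2,
}

A = {'dog1', 'dog2', 'dog3'}   # restrictor: the dogs
B = {'dog1', 'dog3', 'cat1'}   # scope: the things that barked
s = encode(sorted(A), B)       # '101' (the order of elements is immaterial)
print({name: accepts(s) for name, accepts in DETERMINERS.items()})
# {'every': False, 'some': True, 'no': False, 'at_least_2': True}
```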

We will discuss an argument (Katzir & Singh 2013 and M.A. work by Adam Rimon; both building on Horn 1972, 2011) from the cross-linguistic pattern of lexicalization of logical operators that suggests that something along the lines of Keenan & Stavi's approach is right. In particular, we will see an argument that the hypothesis space is defined in terms of a very small set of primitive operators (generalized conjunction, disjunction, and negation) along with a few modes of combination. We will discuss the connection between this work and the recent results of Piantadosi et al. 2016 and Buccola et al. 2016.
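
As a concrete, deliberately simplified illustration of the primitive-operator view (our own sketch, not Keenan & Stavi's or Katzir & Singh's actual systems), the snippet below builds complex determiner denotations out of basic ones using negation, conjunction, and disjunction; on such a view, the complexity of a denotation can be measured by the number of primitive symbols in its derivation.

```python
# A hedged sketch of boolean combination over determiner denotations.
# A denotation maps a restrictor set A and a scope set B to a truth value;
# NOT, AND, and OR are the modes of combination.

def some(A, B):  return len(A & B) >= 1
def every(A, B): return A <= B

def NOT(q):    return lambda A, B: not q(A, B)
def AND(q, r): return lambda A, B: q(A, B) and r(A, B)
def OR(q, r):  return lambda A, B: q(A, B) or r(A, B)

no               = NOT(some)              # 'no' = negation of 'some'
not_all          = NOT(every)             # 'not all'
some_but_not_all = AND(some, NOT(every))  # derived, and costlier to state

A = {1, 2, 3}   # restrictor
B = {1, 2}      # scope
print(no(A, B), not_all(A, B), some_but_not_all(A, B))  # False True True
```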


Part II: The nature of the stimulus

Language acquisition involves making sense of unanalyzed input: the child brings to the task a hypothesis space, each point in which represents a grammar, and she chooses a point in that space that can generate the input. If two grammars G and G’ are both compatible with the input and the child ends up converging on G, we can draw interesting conclusions regarding acquisition: it could be, for example, that G’ is outside the child’s hypothesis space, or that the child is biased towards choosing G over G’. The literature on acquisition in syntax and phonology has identified cases where the input is not rich enough to eliminate alternatives to the adult grammar, suggesting that learning in those domains is non-trivial.

In Rasin and Aravind (2017), we evaluate the richness of the input in semantics, and our case study is the acquisition of quantificational determiners. We address the following question: are there logically weaker or logically stronger alternatives to quantifier meanings that are compatible with the child's input, or is the input rich enough to eliminate competing hypotheses? We report our conclusions from a study of several English CHILDES corpora:
  1. Truth-conditional evidence for ruling out logically weaker meanings does not seem to be available. Obvious candidates for providing such evidence are the direct rejection of a child’s utterance and the use of quantifiers in downward-entailing environments, but they were either absent from the corpora or consistent with weaker meanings.
  2. Contextual evidence for ruling out logically weaker meanings is abundant. We identify contexts where a weaker meaning for a quantifier would violate some pragmatic constraint. If children can use this contextual evidence early enough, then logically weaker meanings would be incompatible with the input.
  3. With respect to logically stronger alternatives, the situation is quite different. We construct classes of quantifiers with complex, logically stronger meanings designed to be consistent with any finite number of utterances (see the toy sketch after this list). If such quantifiers are in the child’s hypothesis space, then converging on adult meanings would require non-trivial induction.
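
To illustrate point 3, here is a toy construction (our own illustration, with a made-up threshold N, rather than the actual quantifier classes of Rasin and Aravind 2017): a logically stronger competitor to 'every' that no finite collection of truthful uses in small situations can distinguish from it.

```python
# Toy illustration of a logically stronger alternative that fits any
# finite input. The threshold N is a made-up parameter for the sketch.

def every(A, B):
    return A <= B

def every_star(A, B, N=1000):
    # 'every*': every A is a B, AND the restrictor has fewer than N members.
    # Strictly stronger than 'every', yet it agrees with 'every' on every
    # situation in which the restrictor is smaller than N.
    return A <= B and len(A) < N

# Three observed situations (restrictor, scope); the two meanings agree
# on all of them, so this input cannot decide between them.
corpus = [({'a', 'b'}, {'a', 'b', 'c'}), ({'x'}, set()), (set(), {'y'})]
print(all(every(A, B) == every_star(A, B) for A, B in corpus))  # True
```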

Part III: Learning algorithms

Given Parts I and II, we face a genuine learning challenge in the lexical semantics of logical operators. To address this challenge, the learner cannot rely solely on the simplicity of the hypothesis (as in the evaluation metric of early generative grammar): this would prevent a denotation such as 2-or-3-or-7 from ever being learned, given the availability of the simpler 'some'. Similarly, the learner cannot rely solely on restrictiveness (as in much work on learning in OT): this would prevent a denotation such as 'some' from ever being learned, given the availability of more specific alternatives such as 2-or-3-or-7. What is needed is a balancing of the two considerations, as in Bayesian learners (e.g., Piantadosi et al. 2016) and the closely related compression-based learners (Peled & Katzir 2016). This mirrors the situation in phonology, as discussed in the companion part of this mini-course (see here). We illustrate with the compression-based semantic learner of Peled & Katzir 2016.
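
The schematic sketch below shows the MDL idea behind such learners (our own simplification, not Peled & Katzir's actual implementation): each hypothesis is scored by |G| + |D:G|, the length of the grammar plus the length of the data as encoded with the grammar's help. The |G| term alone would never select 2-or-3-or-7; the |D:G| term alone would never select 'some'; their sum lets the amount of data decide.

```python
from math import log2

# Schematic MDL comparison (a simplification for illustration). Situations
# are restrictor/scope overlap counts 0..9; a hypothesis is the set of
# counts at which the quantifier is true, plus a symbol count for |G|.
SITUATIONS = range(10)
HYPOTHESES = {
    'some':        {'extension': {c for c in SITUATIONS if c >= 1}, 'symbols': 1},
    '2-or-3-or-7': {'extension': {2, 3, 7},                         'symbols': 5},
}
BITS_PER_SYMBOL = 8  # made-up cost per grammar symbol

def mdl(hyp, data):
    """|G| + |D:G|: grammar cost plus a uniform code over the extension."""
    ext = hyp['extension']
    if any(c not in ext for c in data):  # the hypothesis must fit the data
        return float('inf')
    return hyp['symbols'] * BITS_PER_SYMBOL + len(data) * log2(len(ext))

data = [2, 3, 7, 2, 3] * 6  # thirty truthful uses of the unknown word
scores = {name: mdl(hyp, data) for name, hyp in HYPOTHESES.items()}
print(min(scores, key=scores.get))  # '2-or-3-or-7'
# With only a handful of observations, the |G| penalty dominates and the
# simpler 'some' wins instead; more data tips the balance the other way.
```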


Suggested readings

van Benthem, J. (1986). Semantic automata. In Essays in Logical Semantics, pages 151–176. Springer Netherlands, Dordrecht.

Buccola, B., Križ, M., and Chemla, E. (2016). Conceptual alternatives. Ms.

Horn, L. (1972). On the Semantic Properties of the Logical Operators in English. PhD thesis, UCLA.

Keenan, E. and Stavi, J. (1986). A semantic characterization of natural language determiners. Linguistics and Philosophy, 9(3):253–326.