South by Semantics Workshop

Spring 2024 Schedule

Feb 6

WAG 316

3:30-5pm

Matthew Mandelkern

Diamonds are a disjunction's best friend

I introduce a number of new puzzles about the relation between disjunction and possibility. I argue that these puzzles are best solved with a theory on which 'p or q' means (p ∨ q) ∧ ◇p ∧ ◇q, where ◇ is a possibility modal underspecified for modal flavor. I show that the resulting theory also yields an elegant account of free choice inferences.
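
For readers who want to see the proposed entry in action, here is a minimal toy sketch (my own illustration, not Mandelkern's formal system) that evaluates the strengthened disjunction in a small Kripke model; the worlds, accessibility relation, and valuation below are invented for illustration.

```python
# Toy Kripke-model sketch of the proposed truth conditions for "p or q":
# (p ∨ q) ∧ ◇p ∧ ◇q, with ◇ read as "true at some accessible world".
# Illustrative simplification only; not Mandelkern's formal system.

ACCESS = {  # accessibility relation; the modal "flavor" is left abstract
    "w1": {"w1", "w2"},
    "w2": {"w2"},
}
VAL = {  # which atomic sentences hold at which worlds
    "p": {"w1", "w2"},
    "q": {"w1"},
}

def true_at(world, atom):
    return world in VAL[atom]

def possibly(world, atom):
    """◇atom: atom holds at some world accessible from `world`."""
    return any(true_at(v, atom) for v in ACCESS[world])

def or_strong(world, p, q):
    """Proposed meaning of 'p or q': (p ∨ q) ∧ ◇p ∧ ◇q."""
    return ((true_at(world, p) or true_at(world, q))
            and possibly(world, p)
            and possibly(world, q))

# At w1 both disjuncts are live possibilities, so the disjunction holds;
# at w2 only p is possible, so "p or q" is predicted to be defective there.
print(or_strong("w1", "p", "q"))  # True
print(or_strong("w2", "p", "q"))  # False
```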

Feb 29

GDC 6.302

2:30-4pm

Entity Tracking in Language Models 

Keeping track of how states of entities change as a text or dialog unfolds is a key prerequisite to discourse understanding. We propose a behavioral task testing to what extent a language model can infer the final state of an entity given a natural language description of the initial state and a series of state-changing operations, following a set of desiderata we lay out for measuring nontrivial entity tracking capacity. Our evaluations of several language models reveal that only (1) in-context learning with models trained on large amounts of code, or (2) finetuning a model directly on the entity tracking task, leads to nontrivial entity tracking behavior. This suggests that language models can learn to track entities, but pretraining on text corpora alone does not make this capacity surface. In light of these results, I will end with brief discussions of ongoing work that further investigates the role of code training and tests for latent representations of entity states.
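
As a rough illustration of the kind of behavioral probe described here (a hypothetical mock-up, not the authors' actual dataset or prompt format), one can generate a description of an initial state, apply a sequence of state-changing operations, and query the final state of one entity:

```python
import random

# Hypothetical mock-up of an entity-tracking probe: describe an initial state,
# apply state-changing operations, then query the final state. The box/object
# framing and wording are placeholders, not the authors' exact materials.

def make_example(n_boxes=3, n_ops=3, seed=0):
    rng = random.Random(seed)
    objects = rng.sample(["apple", "book", "coin", "key", "shoe"], n_boxes)
    state = {i: {obj} for i, obj in enumerate(objects)}

    lines = [f"Box {i} contains the {obj}." for i, obj in enumerate(objects)]
    for _ in range(n_ops):
        src, dst = rng.sample(range(n_boxes), 2)
        lines.append(f"Move the contents of Box {src} into Box {dst}.")
        state[dst] |= state[src]   # ground-truth state update
        state[src] = set()

    query = rng.randrange(n_boxes)
    prompt = " ".join(lines) + f" Box {query} contains"
    gold = sorted(state[query]) or ["nothing"]
    return prompt, gold

prompt, gold = make_example()
print(prompt)  # the text given to the language model
print(gold)    # the ground-truth completion used for scoring
```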

April 3

GDC 6.302

3:30-5pm

The conceptual structure of the word belief

Within a mentalist approach to semantics, the meanings of words and sentences pertain not to the real world (or a set of possible worlds), but to the world as conceptualized by a language user.  The concept expressed by the word ‘belief’ is part of “folk psychology” or Theory of Mind – the way humans ascribe mental states to others.  The issue for this talk is therefore where the concept of belief fits into the ecology of the Theory of Mind. 

Using grammatical patterns not usually cited in the literature of philosophy of language, I will show that beliefs pattern grammatically and semantically very much like the description of pictures.  In particular, the classical paradoxes pertaining to belief in the philosophical literature find exact parallels in the description of pictures.  I conclude that beliefs are conceptualized as a sort of picture in the head, a representation of a state of affairs.

In addition, there are strong grammatical and semantic parallels between predicates pertaining to beliefs and predicates pertaining to intention.  Again, in the domain of intention, it is possible to reconstruct the standard paradoxes of belief.  These parallels suggest that believing and intending are two sides of the same coin.  Ascribing a belief to someone amounts to attributing to them a commitment to a state of affairs, while ascribing an intention to someone amounts to attributing to them a commitment to perform an action.

In short, the concept of ‘belief’ and its relatives is rich and ramified, and this structure can be discovered in part through detailed linguistic analysis.

April 9

WAG 316

3:30-5pm

Decision and Tenable Conditionals

I present a hybrid decision theory, coinciding sometimes with (traditional) EDT, but usually with (traditional) CDT, which is inspired by recent work on unified and fully compositional approaches to the probabilities of conditionals (Bacon, 2015; Goldstein & Santorio, 2021; Schultheis, forthcoming a). The hybrid theory features a few other loci of interest: the partitionality of options fails in an important way, and close attention is paid to how one might (dis)confirm chance hypotheses under the umbrella of the Principal Principle. On this theory, the probabilities of conditionals play a role in underwriting a theory of credal chance that follows Skyrms’s Thesis (Skyrms, 1981, 1984) about the probabilities of counterfactuals. Moreover, the credences it is epistemically rational to assign to these conditionals can guide updating on one’s own acts. This implies some departures from Conditionalization—departures I defend on epistemological grounds. This has important ramifications for cases of diachronic instability.
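
For orientation only, here are the textbook formulations the hybrid theory negotiates between, together with a standard statement of Skyrms's Thesis; the notation is mine, not the speaker's.

```latex
% Standard, simplified formulations for orientation only; notation is mine,
% not the speaker's. Write A > C for the (counterfactual) conditional.
\[
  V_{\mathrm{EDT}}(a) = \sum_{s} P(s \mid a)\, U(a,s),
  \qquad
  V_{\mathrm{CDT}}(a) = \sum_{s} P(a > s)\, U(a,s).
\]
% Skyrms's Thesis: credence in a conditional is expected conditional chance,
% with ch ranging over chance hypotheses constrained by the Principal Principle:
\[
  P(A > C) = \sum_{ch} P(ch)\, ch(C \mid A).
\]
```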

April 25

GDC 6.302

3:30-5pm

Controlled Rearing of Language Models can reveal Linguistic Insight!

Neural network-based language models have proliferated across a myriad of disciplines. Meanwhile, their precise role in advancing our knowledge about human cognition remains a topic of much debate. What can these black boxes tell us about language learning, use, and generalization? In this talk I will discuss some approaches my colleagues and I have undertaken to contribute to this debate. In particular, I will discuss the paradigm of “controlled rearing”: the informed manipulation of the developmental environment of a learner, in this case a language model. I will then describe how different ways of performing controlled rearing can be applied to test and generate hypotheses about the conditions necessary for generalization in domains covering both rare and well-known linguistic phenomena.
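
As a concrete, heavily simplified illustration of what one controlled-rearing step can look like (a hypothetical sketch, not the speaker's actual pipeline), one might ablate a target construction from the pretraining corpus, pretrain a model on the filtered data, and then compare the reared model's surprisal on minimal pairs probing that construction. The regex, corpus, and model name below are placeholders.

```python
import math
import re

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical controlled-rearing sketch: remove every sentence matching a
# target construction from the developmental corpus, pretrain a small LM on
# the filtered corpus (training omitted here), then test generalization with
# a minimal-pair surprisal comparison. All names are placeholders.

TARGET = re.compile(r"\ba(n)? \w+ (two|three|four|five) \w+s\b", re.IGNORECASE)

def rear_corpus(sentences):
    """Ablate the target construction from the developmental environment."""
    return [s for s in sentences if not TARGET.search(s)]

def surprisal(model, tokenizer, sentence):
    """Total surprisal (in bits) the model assigns to a sentence."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean NLL per predicted token (nats)
    return loss.item() * (ids.shape[1] - 1) / math.log(2)

# After pretraining a model on rear_corpus(raw_corpus), probe generalization;
# "gpt2" stands in for the reared model here.
tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2")
good = "The family spent a beautiful five days in Austin."
bad = "The family spent beautiful five a days in Austin."
# Lower surprisal for the grammatical variant would suggest generalization.
print(surprisal(lm, tok, good) < surprisal(lm, tok, bad))
```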