Spring 2013

Overview: (for abstracts see below)

January 18, Richard Pettigrew (Bristol) :: Epistemic utility theory: some objections
February 1, Amaia Garcia Odon (Barcelona) :: A pragmatic approach to the phenomenon of presupposition conditionalization
February 15, Scott Grimm (Barcelona) :: Degrees of Countability: A Mereotopological Approach to the Mass/Count Distinction 
March 1, Martin Kaså (Gothenburg) :: Semantical considerations on experimental logics
April 12, Luca Incurvati (Cambridge) :: That's it, you're grounded!
May 17, Paula Quinon (Lund) :: Extended Frege's Constraint
May 24, Hans Kamp (Stuttgart) :: Deictic and Anaphoric Uses of Demonstrative Noun Phrases 
June 14, Michael Glanzberg (Northwestern) :: Encoding perspective in semantic theories (time 14:00, location Science Park A1.10)

January 18, Richard Pettigrew

Epistemic utility theory: some objections

What norms govern an agent's degrees of belief at a particular time? And what norms govern how those degrees of belief should change in the light of new evidence? Beliefs play two roles: they guide action; and they represent the world. Traditionally, epistemologists have justified norms for degrees of belief by arguing that an agent who violates the norms will have degrees of belief that play the first role poorly. These are the so-called Dutch Book arguments and Representation Theorem arguments. Epistemic Utility Theory seeks to justify these norms by showing that an agent who violates them will have degrees of belief that play the second role poorly. That is, it seeks to provide purely epistemic arguments for norms that govern degrees of belief. In this talk, I will present the simplest epistemic utility theory argument, and I will consider two objections to this argument: one is due to Kenny Easwaran and Branden Fitelson; the other is due to Hilary Greaves. I will argue that both fail.
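For orientation, the core of the simplest such argument (the accuracy-dominance argument for probabilism, in the tradition of de Finetti and Joyce) can be sketched as follows; the Brier score is one standard choice of inaccuracy measure, not necessarily the one used in the talk.

```latex
% Inaccuracy of a credence function c at a world w, measured by the Brier
% score, where F is the set of propositions over which c is defined:
I(c, w) \;=\; \sum_{X \in \mathcal{F}} \bigl(c(X) - v_w(X)\bigr)^2,
\qquad
v_w(X) =
\begin{cases}
  1 & \text{if } X \text{ is true at } w,\\
  0 & \text{otherwise.}
\end{cases}
```

The dominance theorem then states that if c violates the probability axioms, there is a credence function c* with I(c*, w) < I(c, w) at every world w, whereas no probabilistic credence function is dominated in this way. Hence, the argument goes, non-probabilistic credences represent the world poorly no matter what the world is like.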

February 1, Amaia Garcia Odon

A pragmatic approach to the phenomenon of presupposition conditionalization

To date, no theory has provided a comprehensive solution to the projection problem of presupposition in compound sentences. Most of the recent analyses propose methods for deriving the inferences drawn by the hearer from what is known as the 'semantic presupposition' of the sentence (Beaver 2001, Pérez Carballo 2008, Singh 2008, Schlenker 2010, Lassiter 2012, among others). Nonetheless, the predictions are still inaccurate in many cases.
I argue that, by combining elements of Gazdar's (1979) and van der Sandt's (1988) theories, together with an additional assumption which is independently motivated, it is possible to construct an analysis which makes correct predictions. These two theories are based on the idea that the potential or elementary presuppositions of a compound sentence, i.e. the presuppositions carried by its constituent clauses, project except for cases in which they are pragmatically constrained. Building on them, I maintain that, upon the utterance of a sentence that contains a presuppositional trigger, it is natural for the hearer to infer that the speaker presupposes the relevant proposition.
However, it may happen that the speaker's utterance contains some element that makes the hearer realize that, if the speaker presupposed the relevant proposition, s/he would be either uninformative or inconsistent in his/her beliefs. If this is the case, the hearer will not infer that the speaker presupposes the relevant proposition. Put another way, the presupposition will not project. In this talk, I will explain what exactly constrains presupposition projection, and will argue for the hypothesis that the presuppositions that do not project are conditionalized, giving rise to inferable conditional presuppositions.
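Schematically, the phenomenon in the title can be rendered as follows, where the consequent triggers an elementary presupposition π (the scuba example is a stock one from the projection literature, not necessarily the talk's):

```latex
% Utterance of a conditional whose consequent triggers presupposition \pi:
p \rightarrow q_{\pi}
% When inferring \pi outright would make the speaker uninformative or
% inconsistent, the hearer instead infers the conditionalized presupposition:
p \rightarrow \pi
```

For instance, 'If John scuba dives, he will bring his wetsuit' is naturally taken to presuppose not that John has a wetsuit, but only that if he scuba dives, he has one.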

February 15, Scott Grimm

Degrees of Countability: A Mereotopological Approach to the Mass/Count Distinction 

This talk investigates the semantic basis of grammatical number systems and the countability of nouns. Most work on countability assumes a binary countable/non-countable contrast: countable nouns, such as 'dog', allow plural marking ('dogs') and accept modification by number words ('two dogs'), while non-countable nouns, such as 'sand', permit neither plural marking (*'sands') nor modification by number words (*'two sands'). Opinion has so far been divided as to whether the countable/non-countable contrast is a substantial, ontologically based contrast or simply an arbitrary fact about the grammars of different languages.

I discuss data from a range of languages which possess three or more categories of grammatical number, often distinguishing entity types such as "collective aggregates" (swarming insects, vegetation) and/or "granular aggregates" (grass, sand). From this broader cross-linguistic perspective, I then propose that the morphosyntactic organization of grammatical number systems reflects the semantic organization of noun types according to the degree of individuation of their referents. Nouns of different types are individuated to different degrees and can accordingly be ordered along a scale of individuation: substances < granular aggregates < collective aggregates < individuals. Noun types which are less individuated are on the lower end of the scale and are cross-linguistically less likely to signal grammatical number, while the converse holds for highly individuated noun types. Understanding morphosyntactic number categories in light of a scale of individuation avoids the difficulties binary accounts face, since languages may divide up the scale of individuation into any number of classes and at different points.

In the second half of the talk, I turn to the formal modeling of countability. Most formal semantic treatments of countability use mereology, the theory of part relations; I show, however, that mereology is not sufficiently expressive to account for the broader typological data. I argue that it is necessary to enrich mereology with connection relations that model the ways in which the referents of nouns may come together, resulting in the more expressive "mereotopology". I show that this extension faithfully models the degrees of countability found across languages and overcomes problems in the countability literature, e.g. the "minimal parts" problem.
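For readers unfamiliar with the framework, mereotopology extends the parthood relation of mereology with a primitive connection relation C; a standard axiomatization (in the style of Casati and Varzi, not necessarily the one adopted in the talk) runs:

```latex
% Connection is reflexive and symmetric:
C(x,x), \qquad C(x,y) \rightarrow C(y,x)
% Parts of a thing are connected to whatever the whole is connected to:
x \leq y \rightarrow \forall z\,\bigl(C(z,x) \rightarrow C(z,y)\bigr)
% Self-connectedness: any two parts that jointly exhaust x are connected,
% where + is mereological sum:
\mathrm{SC}(x) \;\leftrightarrow\; \forall y\,\forall z\,\bigl(x = y + z \rightarrow C(y,z)\bigr)
```

On such an approach, maximally self-connected entities can serve as model-theoretic correlates of well-individuated, countable referents, while aggregates and substances can be distinguished by the weaker connection structures their parts exhibit.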

March 1, Martin Kaså

Semantical considerations on experimental logics

Trial and error classifiers - corresponding to concepts that change their extensions over time, but with a limit - are introduced and briefly motivated. A fragment of the language of classical first-order logic (restricted to quantifier depth 1 and without equality) is given a new semantics, using $\omega$-sequences of classical models, in order to interpret the basic predicates as classifiers of this kind. It turns out that we can use a natural deduction proof system that differs from classical logic only in the conditions for application of existential elimination. Soundness and completeness theorems are proved for this system. Time permitting, I will also discuss a natural generalization of this logic and indicate how compactness and completeness are proved by way of a "standard translation" of sentences into theories in classical first-order logic, and also say something about tableaux methods for these trial and error logics.
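One standard way to make such classifiers precise (notation mine; the talk's definitions may differ in detail) uses an ω-sequence of classical models over a shared domain:

```latex
% Models M_0, M_1, M_2, ... interpret the predicate P at successive stages.
% An object a falls under P "in the limit" iff P's verdict on a stabilizes:
a \in P^{\infty} \;\Longleftrightarrow\; \exists n\,\forall m \geq n\;\; a \in P^{M_m}
% Trial-and-error requirement: every object eventually receives a stable verdict:
\forall a\,\exists n\,\forall m \geq n\;\bigl(a \in P^{M_m} \leftrightarrow a \in P^{M_n}\bigr)
```

This is the setting in which, per the abstract, only the rule of existential elimination needs modified conditions of application.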

April 12, Luca Incurvati

That's it, you're grounded!

I will begin by reviewing and further defending the minimalist approach to sets which I advanced in earlier work. Then, I will consider the prospects for extending the minimalist approach to the case of semantics. I will conclude by examining whether a minimalist approach prevents us from giving a common account of grounding assumptions in set theory and semantics.

May 17, Paula Quinon      
Location: Science Park, room B0.204

Extended Frege's Constraint

This paper reconsiders a principle, employed in the foundations of mathematics, called Frege’s Constraint. Frege’s Constraint states that any adequate foundation for a mathematical theory must explicitly account, already at the most fundamental level, for the applications of the entities forming its intended model. I argue that a foundational approach based on Frege’s Constraint should be able to account not only for one arbitrarily chosen application, but for all the important applications. The paper discusses the case of natural numbers and the consequences of adopting either the neo-Fregean cardinality constraint or the computational structuralist’s computability constraint, and concludes that the latter is better justified when applications are to be taken into account.

May 24, Hans Kamp

Deictic and Anaphoric Uses of Demonstrative Noun Phrases 

According to the dominant perspective within theoretical linguistics today, human languages can be studied as autonomous systems. In particular, the semantics of a language is thought of as embodied in a set of principles that map syntactically well-formed expressions to their denotations, or ‘semantic values’. This perspective and the methods of analysis and description to which it has led have been remarkably successful.

That languages can be fruitfully investigated in this way is a reflection of the fact that the grammar and vocabulary of a language are a common good of the speech community, known by all its linguistically competent members and adhered to by them whenever they use their language to communicate with each other.

To explain the perspective’s success in these terms is to acknowledge that the central purpose of language is to serve as a tool for communication.  But if that is the central purpose of language, then we shouldn’t be surprised to find that there are some linguistic phenomena that can be satisfactorily accounted for only when the communicational dimension of language is brought explicitly into play.  And that, I have become more and more convinced over the past years, is how things are: There is a range of semantic questions that can only be dealt with adequately within a framework in which both sides of verbal communication, language production and language interpretation, are explicitly considered, separately but also in relation to each other. 

This need arises in particular for questions about the reference of definite noun phrases and about the contributions these phrases make to the semantic content of the utterances containing them. I assume, following van der Sandt, Geurts, Beaver and others, that definite noun phrases come with ‘identification presuppositions’ – signals to the interpreter that he should be in a position to identify what it is the speaker is referring to by her use of the noun phrase. What these ‘identifications’ come to – what is to count as the ‘referent’ of the noun phrase and what is to count as an ‘identification’ of that ‘referent’ – varies between the different types of definite noun phrases (pronouns, demonstratives, definite descriptions and proper names), and also between different uses that can be made of noun phrases of one and the same type.

Correct identification of the intended ‘referent’ of a definite noun phrase is one of the things the interpreter of an utterance or text has to accomplish as part of capturing the content expressed by that utterance or text. But what is it to ‘capture the content of a text or utterance’? A central tenet of the general approach adopted in this talk is that to explicate this notion, articulated accounts must be given of, on the one hand, (i) the process which derives mental content representations from linguistic input – the process of language interpretation, or verbal decoding – and, on the other hand, (ii) the process that turns thoughts into words – the process of language production, or verbal encoding. At a minimum such accounts require some detailed assumptions about the representational form in which thoughts are present to speakers when they convert them into speech, and about the forms of the representations that interpreters build when they extract the meaning of what they are reading or being told.

The approach to definite noun phrase semantics that will be illustrated in this talk assumes that both the results of language interpretation and the inputs to language production are mental states structured along lines suggested by Discourse Representation Theory (DRT; Kamp, 1981; Kamp & Reyle, 1993; Beaver & Geurts, 2011). More specifically, we will assume that both input and output states have the structure proposed in MSDRT (‘MSDRT’ is short for ‘Mental State DRT’; MSDRT is a DRT-based formalism for the description of mental states, but one that goes substantially beyond the expressive power of, say, Kamp & Reyle (1993); for an introduction see the part on propositional attitudes in Kamp, Van Genabith & Reyle (2011)).

According to this formalism, mental states consist of a combination of propositional attitudes and entity representations; and the content representations of propositional attitudes can, and often do, contain entity representations as constituents. When a speaker S chooses words to express the content of one of her propositional attitude representations, and this representation contains some entity representation ER as a constituent, then she will often use some definite noun phrase to account for the contribution that ER makes to the content of this attitude. And an interpreter of S’s utterance will then have to use an entity representation in his turn to account for the contribution that the definite noun phrase makes to the content of that utterance (either by reusing an entity representation he already has, or else by creating a new entity representation on the fly).

In the talk I will first present the most important features of this general communication-based approach to the semantics of definite noun phrases and of the formal framework that I am using for its implementation. I will then focus on one particular type of use of one particular definite noun phrase type, viz. on the deictic uses of simple and complex demonstratives. The distinctive property of these uses is that the entity representations involved in their production and interpretation are perceptually anchored to entities in the shared environment of speaker and interpreter. Because of this, deictic uses of demonstratives provide particularly instructive examples of (i) the three-way relation between an agent, an entity representation she has and the referent it represents, as well as (ii) the inter-subjective relations between coreferring entity representations belonging to different agents.

June 14, Michael Glanzberg

Note: This talk will take place at Science Park, room A1.10 and will start at 14:00 in order for people to be able to go to the ILLC Current Affairs meeting at 16:00.

Encoding perspective in semantic theories

Thought and language involve a very broad notion of _perspective_. We rarely (if ever) say 'cat' and then 'mat'; rather, we say 'the cat was on the mat' (in the past relative to our current temporal 'perspective'). We can also add locative information, saying 'the cat is on the mat to the right'; information about our current epistemic 'perspective', saying 'the cat might be on the mat' (as far as we know); or our tastes, saying 'It is fun to wake the cat up when it is sleeping on the mat' (relative to our 'perspective' on what counts as fun). Among the many questions this fact raises are a number of important ones about how language encodes aspects of perspective. In this talk, I shall explore some foundational issues that the encoding of perspective raises for semantic theories. On the one hand, we have theories that encode perspective into semantic values, creating somewhat simpler compositional semantics but richer outputs. On the other, we have theories which keep the outputs of compositional semantics simple, but invoke more complicated mechanisms within the workings of the semantics. Though it might appear less elegant, I shall offer some general considerations in favor of the latter, focusing on its effects on the syntax/semantics interface and on the apparatus of binding.
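The contrast between the two architectures can be rendered schematically (notation mine, not Glanzberg's; t and j stand for an illustrative time and judge parameter):

```latex
% Option 1: perspective packed into semantic values -- composition stays
% simple, but outputs are rich (e.g. judge- and time-relative contents):
[\![\varphi]\!]^{c} \;\in\; \wp(W \times T \times J)
% Option 2: outputs stay simple (ordinary sets of worlds); perspectival
% parameters are threaded through the evaluation relation instead, and
% manipulated by designated operators during composition:
[\![\varphi]\!]^{c,\,t,\,j} \;\subseteq\; W
```

On the first architecture the richness shows up in what the semantics delivers; on the second it shows up in how the semantics works internally, which is where the talk locates the consequences for the syntax/semantics interface and for binding.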