INVITED SESSIONS


Tuesday (18th of May)


From semantic primitives to conceptual structure: Experimental investigations into the role of meaning in grammar

Jennifer Culbertson (University of Edinburgh)


In this talk I will report results from a series of artificial language learning experiments which highlight the critical role that meaning plays in explaining why languages look the way they do. The first study targets the semantic space of grammatical person, and the second two target word and morpheme order. Person systems -- typically exemplified by pronominal paradigms (e.g. ‘I’, ‘you’, ‘she’) -- are found in every language, but some are much more common than others. I discuss a series of experiments aimed at investigating which person partitions are more natural from the perspective of learning. I then turn to word order and discuss a series of studies using artificial language learning and a related paradigm in which participants use gesture to spontaneously create a new linguistic system. The studies all suggest that a cognitive bias favoring certain types of word orders is at work in the nominal domain. Most theories of nominal word order argue that constraints on syntactic structure and/or movement underlie this bias. However, I present new evidence which suggests the bias may ultimately derive from meaning, or more specifically, conceptual knowledge. Finally, I relate these findings to morpheme ordering generalisations, again showing the role that meaning plays in determining which orders learners prefer, and which are more common across languages. These three series of studies illustrate at a more general level how experimental methods can complement traditional theoretical linguistic investigations into linguistic typology.



Connectedness: a cognitive primitive as revealed by language, and found elsewhere (namely, with baboons)

Emmanuel Chemla (CNRS, LSCP, Ecole Normale Supérieure)


Imagine a word, say 'blicket', that would mean "apple or banana": apples are blickets, and bananas are blickets. Intuitively, 'blicket' is a strange word; it refers to a concept that is unnatural. Why? It has been claimed that words must correspond to "connected" concepts: if apples are blickets and bananas are blickets, then anything in between an apple and a banana should also be a blicket; so if 'blicket' were a more traditional word, it might have to include all fruits, not only apples and bananas. By and large, simple "content words", concrete nouns and adjectives, have connected meanings (cf. extensive philosophical work by Gärdenfors, and much work in other domains such as computational psychology, language acquisition, and computer science).

Starting from there, we will formalize a notion of connectedness that applies to any type of word, not only content words. We will find that logical words (in particular quantifiers, such as 'all', 'some', and 'none' in English) also appear to be connected across languages. We will provide evidence that non-human animals (specifically, baboons, Papio papio) tend to form categories that are connected in the same sense, and argue that this tendency may reveal what counts as a natural class of objects (content-word-like) or a natural class of patterns (function-word-like).



Wednesday (19th of May)


A Meaning First Approach to Generative Grammar

Uli Sauerland (Leibniz-Zentrum Allgemeine Sprachwissenschaft)


In a recent paper, I presented a Meaning-First approach (MFA) to grammar (Sauerland & Alexiadou 2020). In this talk, I introduce the core assumptions of the approach, namely 1) that the generation of complex thought structures is independent of language, and 2) that human language can communicate thoughts via compression into a transmissible form. I then survey the empirical support for the proposal, focusing on phenomena in child language where children produce sentences containing extra words.



How words structure our concepts

Gary Lupyan (University of Wisconsin-Madison)


Does language reflect the categories of our mind or does it help create them? On one widespread view, learning a language involves mapping words onto pre-existing categories, leaving little room for language to affect the conceptual landscape. Alternatively, many of our concepts — including some that seem very basic — may derive from our experience with and use of language. I will argue in favor of this second view and present evidence for the causal role of language in categorization and reasoning.



Thursday (20th of May)


Logical Connectives in Prelinguistic Thought and Early Language Acquisition: Case Studies of not, or, and possible

Susan Carey (Harvard University)


Ever since Descartes, philosophers have speculated that natural language is necessary for deductive reasoning based on abstract, logically structured thought. This hypothesis cannot be confirmed or falsified from the armchair, on the basis of a priori considerations alone (see Donald Davidson, who argues in favor of Descartes’ conjecture, and Jerry Fodor, who argues against it). A recent explosion of scientific work seeks to evaluate the hypothesis that such reasoning arises in phylogenesis only with the evolution of natural language and in ontogenesis only with the acquisition of language. I will present the current state of the art, as I see it, on two related case studies in this recent scientific work: the search for evidence of reasoning according to the disjunctive syllogism (from A or B and not B, conclude A) in infancy and in non-linguistic thought, which requires logically structured thoughts, and, within the same paradigms, the representation of the modal concept possible, as in possibly A and possibly B.

I will argue that, in spite of well-confirmed phenomena in the animal and infant literature that are certainly consistent with Fodor’s position, well-confirmed failures also favor Davidson’s. I will suggest avenues of future research that might resolve the contradictions in the current data.



Foundations of meaning in infancy: the case of abstract relations

Jean-Rémy Hochmann (CNRS, Institut des Sciences Cognitives Marc Jeannerod, Université Lyon 1)


Abstract relations are considered the pinnacle of human cognition, allowing analogical and logical reasoning, and possibly setting humans apart from other animal species. Such relations cannot be represented in a perceptual code but can easily be represented in a propositional language of thought, where relations between objects are represented by abstract discrete symbols.

Focusing on the abstract relations same and different, I will show that (1) there is a discontinuity in the course of ontogeny with respect to the representation of abstract relations, but (2) young infants already possess representations of same and different. Finally, (3) I will investigate the format of the representation of abstract relations in young infants, arguing that those representations are not discrete, but rather built by assembling abstract representations of entities.



Friday (21st of May)


Culture shapes the expression of meaning in language

Asifa Majid (University of York)


Cross-linguistic studies show substantial differences in how languages package meaning into words and grammar. English makes a distinction between ‘hand’ and ‘arm’, for example, but it is estimated that a third of the world’s languages collapse this distinction and refer to both with a single term. English has a single verb ‘to cut’ that can be used regardless of whether the action involves a knife or scissors; in Dutch, however, you must specify whether you ‘snijden’ (cut with a knife) or ‘knippen’ (cut with scissors). Most recently, a large-scale study of 20 diverse cultures has shown that even simple sensory experiences of colours, smells, and tactile textures are expressed differently across languages. This linguistic variation raises the question of whether the underlying cognition of people is also variable across cultures or whether diverse languages interface with a universal bedrock of cognition instead. Recent data suggest that the answer may vary across domains, such that some aspects of cognition are more susceptible to language effects than others.



Insights into how language transforms the mind and brain from studies with blind individuals

Marina Bedny (Johns Hopkins University)


Empiricist philosophers emphasized the role of sensory experience in knowledge acquisition. Locke reasoned that a person born blind could never grasp visual meanings (e.g., yellow, sparkle). Empirical studies with people who are blind contradict this assertion. Language enables the sharing of ‘visual’ meaning across sighted and blind people and transforms the brain. First, I will briefly present evidence that blindness enables the expansion of higher-cognitive functions, including language, into ‘visual’ cortices, revealing the remarkable capacity of human cortex to change its representational content. In contrast to the large differences in ‘visual’ cortex representations, people who are blind and those who are sighted share the conceptual content of visual phenomena. Blind and sighted people have common cognitive and neural representations of ‘visual’ words (e.g., sparkle). Blind and sighted people also share structured causal knowledge of visual perception, light, and color that goes well beyond the meanings of individual words. Indeed, language is more effective at transmitting such structured causal models of visual phenomena than at transmitting verbalizable factoids (e.g., bananas are yellow). Studies with people who are blind illustrate how language provides fodder for inference, enabling the sharing of meaning across individuals.



The Extension Dogma

Paul Pietroski (Rutgers University)


In studies of meaning, linguists and philosophers have often followed Donald Davidson and David Lewis in assuming that whatever meanings are--if there are any--they determine extensions, at least relative to contexts. After reviewing some reasons for rejecting this assumption, which is especially unfriendly to mentalistic conceptions of meaning, I'll suggest that this assumption became prevalent for bad reasons. As time permits, I'll conclude by reviewing some work which suggests that even if we focus on quantificational determiners, mentalistic conceptions of meaning are motivated and The Extension Dogma should be abandoned.



Composition, comparison, and cognition

Alexis Wellwood (University of Southern California)


The meaning of a sentence depends on the meanings of its parts. Morphological and syntactic theories tell us what the parts are, and semantic theories aim to tell us why those parts and their observed combination mean just what they do. In this talk, I relate contemporary event semantic analyses to event representation in the psychologist's sense. In my case study, I focus on sentences with 'more X', emphasizing two critical factors for their interpretation: (i) the conceptual class that X points to, and (ii) the semantic commitments imposed by X's syntactic environment. Research in linguistics and cognitive psychology has plainly borne out the relevant facts for nouns: given the nonce word 'gleeb', 'Ann has more gleeb' is about volume or weight if 'gleeb' points to a kind of substance, while 'Ann has more gleebs' is about number, whatever 'gleeb' means. My exploration concerns occurrences of X as an adjective ('Ann was gleeb more') or verb ('Ann gleebed more'). Here, the relevant conceptual distinction for (i) will be that between events and processes, and the relevant syntactic details for (ii) are considerably more subtle. In the talk, I present my syntactic-semantic theory and the results of novel experiments bearing out its predictions. If successful, such a study provides evidence for a close-knit relationship between compositional semantic description and structures in non-linguistic cognition, ultimately grounding an understanding of language as a window into the rest of the mind.