Programme

Schedule

All times Central European Summer Time (UTC+2)

Thursday 6th August

10:55 Short introduction

11:00 Keynote: Jennifer Culbertson, Investigating meaning and grammar using artificial language learning experiments (scroll down for abstract) (video)

12:00 Break/discussion

12:20 Lachlan McPheat, Mehrnoosh Sadrzadeh, Adriana Correia and Alexis Toumi: Derivations and Vector Semantics of Anaphora with Ellipsis in Lambek Calculus with a Relevant Modality (video)

12:50 Sonia Cenceschi, Licia Sbattella and Roberto Tedesco: CALLIOPE: a multi-dimensional model for the prosodic characterisation of Information Units (video)

13:10 Long break

14:10 James Hefford, Vincent Wang and Matthew Wilson: Categories of Semantic Concepts (video)

14:40 Sergey Slavnov: Cobordisms and commutative categorial grammars (video)

15:10 Break/discussion

15:30 Whitney Tabor: On the relationship between syntactic and semantic encoding in vector space language models (video)

16:00 Russell Richie and Sudeep Bhatia: Similarity judgment within and across categories: A comprehensive model comparison (video)

Friday 7th August

11:00 Keynote: Andrea E Martin, Language in biological and artificial neural systems (scroll down for abstract) (video)

12:00 Break/discussion

12:20 Sean Tull and Johannes Kleiner: Integrated Information in Process Theories (video)

12:50 Sanjaye Ramgoolam, Mehrnoosh Sadrzadeh and Lewis Sword: Gaussianity and typicality in matrix distributional semantics (video)

13:10 Long break

14:10 Tiffany Duneau: Solving logical puzzles in DisCoCirc (video)

14:40 Gemma De Las Cuevas, Andreas Klingler, Martha Lewis and Tim Netzer: Cats climb entails mammals move: preserving hyponymy in compositional distributional semantics (video)

15:10 Break/discussion

15:30 Sean Tull: Monoidal Categories for Formal Concept Analysis (video)

15:50 Konstantinos Meichanetzidis, Stefano Gogioso, Giovanni De Felice, Alexis Toumi, Nicolo Chiappori and Bob Coecke: Quantum Natural Language Processing on Near-Term Quantum Computers (video)

16:20 End - Online social

Invited Talks

Jennifer Culbertson, University of Edinburgh

Investigating meaning and grammar using artificial language learning experiments

In this talk I will highlight how artificial language learning experiments can be used to generate new behavioral evidence for theories of meaning and grammar. I’ll focus on two recent studies, the first targeting the semantic space of grammatical person, and the second targeting word order and its relation to conceptual structure. Theories of person systems, typically exemplified by pronominal paradigms (e.g. ‘I’, ‘you’, ‘she’), make different predictions about which partitions of the person space should be most natural. However, the typological data are very sparse and therefore make it difficult to adjudicate between theories. I discuss a series of experiments aimed at investigating which person partitions are more natural from the perspective of learning. I then turn to word order, and briefly review a series of studies using artificial language learning and a related paradigm in which participants use gesture to spontaneously create a new linguistic system. The studies all suggest that a cognitive bias favoring certain types of word orders is at work in the nominal domain. Most theories of nominal word order argue that constraints on syntactic structure and/or movement underlie this bias. However, I present new evidence which suggests the bias may ultimately derive from meaning. I argue that our conceptual knowledge of how objects relate to properties in the world can explain why some word orders are preferred over others, both in the typology and in our experiments. These two examples illustrate both the need for new sources of evidence in theoretical linguistics and the range of questions that can be addressed using these experimental methods.

Andrea E. Martin, Max Planck Institute for Psycholinguistics & Donders Centre for Cognitive Neuroimaging, Radboud University

Language in biological and artificial neural systems

Human language is a fundamental biological signal with computational properties that differ from other perception-action systems: hierarchical relationships between sounds, words, phrases, and sentences, and the unbounded ability to combine smaller units into larger ones, resulting in a "discrete infinity" of expressions that are often compositional. These properties have long made language hard to account for from a biological systems perspective and within models of cognition. In this talk, I synthesize insights from the language sciences, computation, and neuroscience that center on the idea that time can be used to combine and separate representations. I describe how a well-supported computational model from a related area of cognition capitalizes on time and rhythm in computation, and how neuroscientific experiments can then be instrumentalized to determine the computational bounds on artificial neural network models. I offer examples of the approach from cognitive neuroimaging data and computational simulations, including leveraging other existing models. I outline a developing theory of how language is represented in the brain that integrates basic insights from linguistics and psycholinguistics with the currency of neural computation.