🗣️ Dr Jorge Agulló (University of Cambridge)
📅 19th June, 2025 (17:00 - 18:00)
🏫 GR-04, English Faculty Building
Abstract: This talk contributes to the debate on the syntax of relative clauses and to the movement vs. base-generation approach to resumptive pronouns. Ā-resumption inside syntactic islands has been a hot topic in generative syntax since Ross (1967) and subsequent work. The overwhelming assumption has been that resumptives obey island constraints and are thus base-generated, i.e., movement cannot apply (e.g., Kroch 1981; Chomsky 1982; Sharvit 1999; McCloskey 2002, 2006). Outside island contexts, however, resumptives have garnered little attention—when they have, they have been deemed phonetic (Suñer 1998) or apparent (Aoun, Choueiri, and Hornstein 2001).
In this talk, I argue against this view. I furnish new evidence from four languages that readily use (direct-object) resumptive pronouns outside islands—in only ‘apparent’ free distribution with the gap: Catalan, Spanish, Persian, and Filipino. All these languages will be shown to cluster together with regard to movement diagnostics of resumption (Principle A and Principle B reconstruction effects, quantifier-variable facts, scope reversal, etc.), but with a twist: resumptives in Spanish, Catalan, and Persian are immune to (Secondary) Weak Crossover effects, unlike in Filipino.
The analysis I put forth has two main ingredients: (a) a head-raising analysis of relative clauses à la Kayne (1994) (e.g., Bianchi 1999a, 1999b, 2004; Alexiadou et al. 2000; Boeckx 2003); and (b) clitic doubling of the head in the bound variable as part of a Big DP structure (Uriagereka 1995), but simplified, as in Cecchetto (2000); in this I specifically depart from Kayne’s (1994) and Bianchi’s (1999a, 1999b, 2000) hypotheses, which assume no clitic doubling. I bring to the fore previously unnoticed data, namely upward or inverse Case-attraction phenomena in Catalan and Spanish, and use them in support of the hypothesis that movement has applied. I will then go on to show how these data pose a challenge for Connectivity and for ‘standard’ (i.e., adjunction) hypotheses of RCs, and argue instead for a head-raising, Determiner-Complement hypothesis of RCs.
Alexiadou, Artemis, Paul Law, André Meinunger, and Chris Wilder. 2000. ‘Introduction’. In The Syntax of Relative Clauses, 1–52. Linguistik Aktuell. Amsterdam / Philadelphia: John Benjamins.
Aoun, Joseph, Lina Choueiri, and Norbert Hornstein. 2001. ‘Resumption, Movement, and Derivational Economy’. Linguistic Inquiry 32 (3): 371–403.
Bianchi, Valentina. 1999a. Consequences of Antisymmetry: Headed Relative Clauses. Studies in Generative Grammar 46. Berlin - New York: Mouton de Gruyter.
———. 1999b. ‘On Resumptive Relatives and the Theory of LF Chains’. Quaderni Del Laboratorio Di Linguistica, 79–99.
———. 2004. ‘Resumptive Relatives and LF Chains’. In The Structure of CP and IP: The Cartography of Syntactic Structures, edited by Luigi Rizzi, 2:76–114. Oxford: Oxford University Press.
Boeckx, Cedric. 2003. Islands and Chains: Resumption as Stranding. Amsterdam / Philadelphia: John Benjamins.
Cecchetto, Carlo. 2000. ‘Doubling Structures and Reconstruction’. Probus 12 (1).
Chomsky, Noam. 1982. Some Concepts and Consequences of the Theory of Government and Binding. Cambridge, MA: MIT Press.
Cinque, Guglielmo. 2020. The Syntax of Relative Clauses: A Unified Analysis. Cambridge: Cambridge University Press.
Kayne, Richard S. 1994. The Antisymmetry of Syntax. Cambridge, MA: The MIT Press.
Kroch, Anthony S. 1981. ‘On the Role of Resumptive Pronouns in Amnestying Island Constraint Violations’. In Papers from the Seventeenth Regional Meeting, edited by R. Hendrick, C. Masek, and M. Miller, 125–35. Chicago: Chicago Linguistic Society.
McCloskey, James. 2002. ‘Resumption, Successive Cyclicity, and the Locality of Operations’. In Derivation and Explanation in the Minimalist Program, edited by Samuel David Epstein and T. Daniel Seely, 184–226. Oxford: Blackwell.
———. 2006. ‘Resumption’. In The Blackwell Companion to Syntax, edited by Martin Everaert and Henk van Riemsdijk, 94–117. Malden, MA, USA: Blackwell Publishing.
Ross, John Robert. 1967. ‘Constraints on Variables in Syntax’. PhD dissertation, Massachusetts Institute of Technology. http://hdl.handle.net/1721.1/15166.
Sharvit, Yael. 1999. ‘Resumptive Pronouns in Relative Clauses’. Natural Language & Linguistic Theory 17 (3): 587–612.
Suñer, Margarita. 1998. ‘Resumptive Restrictive Relatives: A Crosslinguistic Perspective’. Language 74 (2): 335–64.
Uriagereka, Juan. 1995. ‘Aspects of the Syntax of Clitic Placement in Western Romance’. Linguistic Inquiry 26 (1): 79–123.
🗣️ Dr Calbert Graham (University of Cambridge)
📅 6th February, 2025 (17:00 - 18:00)
🏫 TBC
Abstract: Many assistive speech technologies rely on automatic speech recognition (ASR) systems predominantly trained on typical speech, resulting in unreliable performance for speakers with atypical articulation. This limitation extends beyond recognition failures, undermining the effectiveness of downstream tools that depend on accurate transcriptions or speech-derived features. Addressing this challenge requires more than improving recognition accuracy: it demands a deeper understanding of how speech balances categorical stability with structured phonetic variation. Most mainstream L1 models—especially structuralist, generative, and constraint-based approaches—focus heavily on categories, rules, and abstract representations, overlooking phonetic gradience, real-world variation, and articulatory flexibility, particularly in fluent connected speech. L2-focused models like Flege’s Speech Learning Model (SLM) and Best’s Perceptual Assimilation Model (PAM) share similar limitations. Exemplar and usage-based models (e.g., Bybee, Pierrehumbert) incorporate phonetic detail but rely on storing individual instances, rather than modelling how speakers internalise algorithmic systems that generate flexible, context-sensitive speech.
In this talk, I briefly introduce VERSA (Variation Encoding and Representation System in Acquisition), a theoretical framework developed through my work in computational phonetics, which proposes that speakers internalise algorithmic systems linking perception and production. VERSA uniquely captures both categorical distinctions and the gradient variation essential for fluent communication, offering a new foundation for understanding individual speech differences and informing theories of L1 and L2 acquisition.
As an application, I discuss my work developing an industry-grade machine learning app that predicts and provides personalised feedback on speech errors in children with autism-related speech disorders. The system employs a hybrid architecture integrating feature-based analyses with deep learning components. It addresses key challenges including data scarcity, annotation complexity, error variability, ASR adaptability, generalisability across speakers, and interpretability of results. This work demonstrates how integrating linguistic theory with advanced data science can foster more inclusive technologies that enhance communication opportunities for underserved populations.
🗣️ Dr Taomei Guo (Beijing Normal University)
📅 15th May, 2025 (17:00-18:00)
🏫 GR-05, English Faculty Building
Abstract: In this talk, I will present recent research on the neural mechanisms underlying emotional word processing in bilinguals, focusing on both positive and negative emotions across two languages. Using meta-analyses, we identified that the left medial prefrontal cortex (mPFC) and the left posterior cingulate cortex (PCC) showed stronger activation in response to positive words compared to neutral ones. Negative emotional processing was associated with six key regions: the left mPFC, inferior frontal gyrus (IFG), PCC, amygdala, inferior temporal gyrus (ITG), and thalamus.
We then explored the universality and specificity of bilingual emotion processing through regions of interest (ROI) analyses. The results revealed no significant differences between the two languages, suggesting a universal neural basis for positive emotion processing. Further findings showed two distinct neural networks for negative word processing in L1: a dorsal pathway (left IFG–mPFC–PCC) and a ventral pathway (amygdala–ITG–thalamus). Cross-language comparisons revealed consistency in the dorsal pathway but divergence in the ventral pathway, indicating both shared and language-specific mechanisms for negative emotion processing.
These findings advance our understanding of bilingual emotion processing and contribute to theoretical models such as the valence hypothesis, the hierarchical emotion model, and the system accommodation hypothesis.
🗣️ Dr Mireia Cabanes Calabuig (University of Cambridge)
📅 6th March, 2025 (from 17:00)
🏫 S1, Alison Richard Building (Sidgwick Site)
Abstract: In our daily lives, we mainly communicate with others using descriptive meanings, that is, content that gives us factual information about referents and situations. However, not everything that surrounds us can be factually described: emotions, feelings and attitudes, for example, resist such description. Expressives and their expressive meaning convey this type of information. It is widely acknowledged that expressive meaning contributes to meaning in a different manner from descriptive meaning, but how do the two interact at the level of utterance meaning? In this talk, I will address this question from an experimental perspective. Drawing on a questionnaire-based study, I present results that (i) reveal patterns in how lexical expressives interact with descriptive content and (ii) shed further light on the meanings expressives convey at the utterance level. I will then use these findings to propose avenues for addressing two core questions in the literature on expressives: what content(s) expressives convey (in the light of a novel typology) and how to analyse them in a theory of meaning (under the framework of radical contextualism).
🗣️ Dr Fang Liu (University of Reading)
📅 6th February, 2025 (from 17:30)
🏫 SB1, Alison Richard Building (Sidgwick Site)
Abstract: Globally, approximately 1% of the population is autistic, with around 30% of autistic individuals being nonspeaking despite numerous interventions. Music training has been shown to enhance auditory processing, sensorimotor integration, motor and imitative skills, cognitive function, social interaction, communication, and, most notably, language processing. In this talk, I will present findings from our ERC-funded research projects, CAASD and MAP. The CAASD project investigated the relationship between music and language processing in autism through behavioural and EEG studies, while the MAP project focused on a feasibility randomised controlled trial exploring how music can support language development in autistic children. I will also discuss the potential mechanisms underlying music’s effectiveness in promoting language acquisition in autism.
🗣️ Dr Víctor Acedo-Matellán (University of Oxford)
📅 23rd January, 2025
🏫 GR-06/07, English Faculty Building (Sidgwick Site)
Abstract: In this collaboration with Veronika Gvozdovaitė (New York University), I develop a syntactic-prosodic account of the linearization of the Lithuanian reflexive clitic/affix si, which may appear as a prefix or a suffix. The exponent of a reflexive Voice head, si is initially positioned by the syntax-linearization algorithm as left-adjacent to the verb. If no other prefix is present, however, the prosodic component ensures that si ends up suffixed to the base, so that it avoids the left edge of a maximal prosodic word (ωmax; Ito & Mester 2009). The analysis is couched within the spanning approach to syntax and linearization (Bye & Svenonius 2012, Svenonius 2016, Acedo-Matellán & Kwapiszewski 2024), coupled with an OT algorithm for prosody (Ito & Mester 2009, Bye & Svenonius 2012). The variable linearization of si patterns with that of other prosodically sensitive elements in other languages, such as pronominal clitics in European Portuguese (Barbosa 1996), with which a comparison is established. We show the empirical inadequacy of previous morphological and syntactic accounts (Embick & Noyer 2001, Stump 2022, Korostenskienė 2016), and we explore predictions related to morphological cycles and segmental phonology. More generally, we conclude that 1) a subset of cases of displacement must be accounted for via prosody (Chung 2003; Bennett, Elfner & McCloskey 2016), and 2) prosodic categories may show recursion, but only those that have some correspondence with syntactic constituents (Ito & Mester 2009).
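To make the constraint interaction concrete, here is a toy evaluation in Python of the placement logic described above. It is purely illustrative: the constraint names (NONINITIAL, FAITH_LIN) and their ranking are my own shorthand for "avoid the left edge of ωmax" versus faithfulness to the syntactic pre-verbal slot, not the authors' formulation, and it abstracts away from the spanning and prosodic machinery of the actual analysis.

```python
# Toy OT-style evaluation of si placement, assuming two illustrative constraints:
#   NONINITIAL: si must not sit at the left edge of the maximal prosodic word
#   FAITH_LIN:  keep si in its syntactically assigned, pre-verbal (prefix) slot
# The ranking NONINITIAL >> FAITH_LIN is encoded by comparing violation tuples left to right.

def violations(placement: str, has_other_prefix: bool) -> tuple[int, int]:
    noninitial = 1 if placement == "prefix" and not has_other_prefix else 0
    faith_lin = 1 if placement == "suffix" else 0
    return (noninitial, faith_lin)

def optimal_placement(has_other_prefix: bool) -> str:
    candidates = ["prefix", "suffix"]
    return min(candidates, key=lambda c: violations(c, has_other_prefix))

print(optimal_placement(has_other_prefix=False))  # suffix: si flees the left edge of the word
print(optimal_placement(has_other_prefix=True))   # prefix: another prefix shields the left edge
```

With no other prefix present, the prefixal candidate fatally violates the higher-ranked constraint, so the suffixal candidate wins despite its faithfulness violation; with another prefix, the pre-verbal slot is no longer word-initial and si stays where the syntax put it.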
🗣️ Prof Yi Xu (University College London)
📅 28th November, 2024
🏫 GR-05, English Faculty Building (Sidgwick Site)
Abstract: I explore in this talk the proposition that the highest coherence between speech perception and production is at the level of contrastive sound categories, rather than at the level of either articulation or acoustics. Hence, other than sound categorization, there is no need for a close match between production and perception. This idea is based on the recognition that human speech is a digital system that is discretized at the phonetic level. To achieve the digitization, production encodes sound categories in such a way that perception can best decode them. The encoding is done through the syllable, which is a synchronization mechanism that aligns the onset of consonantal, vocalic, tonal and phonational articulations. Perceptual decoding is done by processing the acoustic signal of each sound in its entirety without pre-extracting any cues. Both the encoding and decoding capabilities are acquired through extensive learning, a process that resolves problems like variability due to coarticulation, speaker differences and multi-functional interactions. Of the two, however, perceptual learning is much easier, as it does not require knowledge of articulation. Production learning, in contrast, is best achieved under the guidance of perception. Evidence for the proposition comes from our findings from articulatory and acoustic analysis as well as computational simulations.
🗣️ Dr Stefano Banno (University of Cambridge)
📅 31st October, 2024
🏫 GR-05, English Faculty Building (Sidgwick Site)
Abstract: The emergence of large language models (LLMs) has revolutionised computer-assisted language learning (CALL), offering promising results for a range of tasks. However, in situations where extensive training data and high-quality annotations are available, such as holistic assessment of second language (L2) learner essays (i.e., evaluating the overall quality of compositions by considering vocabulary, grammar, coherence, and other aspects as a whole), bespoke models or specialised training approaches often outperform general-purpose LLMs. These models can leverage large, high-quality, task-specific data to align more precisely with particular scoring criteria, resulting in improved performance. LLMs, on the other hand, are highly effective in zero-shot or few-shot scenarios, where limited training data or inconsistent human annotations make it difficult to build specialised models; analytic assessment (i.e., evaluating compositions on distinct criteria with separate scores) is a case in point. Here, natural-language-based assessment (NLA), which leverages the information contained in analytic language descriptors without the use of further training data, shows that LLMs can effectively assess specific components of language proficiency by “connecting the dots” between such descriptors and learner essays.
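Purely as an illustration of the zero-shot setup described above, not code from the work presented, the sketch below prompts a chat-completion model with an analytic descriptor and a learner essay; the OpenAI client, the model name, and the descriptor wording are all placeholder assumptions.

```python
# Illustrative zero-shot analytic assessment in the spirit of NLA.
# Client, model name, and descriptor text are placeholders, not those used in the talk.
from openai import OpenAI

client = OpenAI()  # assumes an API key is set in the environment

DESCRIPTOR = (
    "Coherence and cohesion, bands 3-5: "
    "3 = ideas are listed with little linking; "
    "4 = some connectives, occasional abrupt transitions; "
    "5 = ideas are logically sequenced and clearly connected."
)

def analytic_score(essay: str, descriptor: str = DESCRIPTOR) -> str:
    """Ask the model to relate the descriptor to the essay and return a band plus a short rationale."""
    prompt = (
        f"Assessment criterion:\n{descriptor}\n\n"
        f"Learner essay:\n{essay}\n\n"
        "Assign the band (3, 4 or 5) that best matches the essay on this criterion, "
        "then justify your choice in one sentence."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(analytic_score("Yesterday I go to the park. Also my brother come. We was happy."))
```

No task-specific training data is involved: the descriptor itself does the work of defining the scoring criterion.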
Similarly, for grammatical feedback to L2 learners, while LLMs have shown promise in grammatical error correction (GEC), they are not always the optimal choice when extensive training data is available. In such cases, bespoke models or fine-tuning approaches often outperform LLMs, as they can make better use of the data to produce more accurate corrections. Grammatical error feedback (GEF), however, goes beyond simple error correction: it consists of summarising and explaining errors in natural language. This is where LLMs begin to show their true potential. For GEF, LLMs are particularly well suited to tasks such as summarisation and natural language generation, making them a natural choice for providing explanations of language errors. Additionally, their ability to act as "judges", interpreting complex feedback responses and associating them with learner essays, makes them an even better fit.
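Again as an illustrative sketch only (with the same placeholder client and model, repeated so the snippet is self-contained), GEF can be framed as asking the model to explain, rather than merely apply, the corrections produced by a GEC step:

```python
# Illustrative LLM-based grammatical error feedback (GEF): explain errors, not just correct them.
# Client and model name are placeholders; the corrected sentence would normally come from a GEC system.
from openai import OpenAI

client = OpenAI()

def error_feedback(original: str, corrected: str) -> str:
    """Summarise and explain, in learner-friendly language, the errors fixed between the two versions."""
    prompt = (
        f"Learner sentence: {original}\n"
        f"Corrected sentence: {corrected}\n\n"
        "Summarise the grammatical errors that were corrected and briefly explain each one, "
        "addressing the feedback to the learner."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(error_feedback("She go to school yesterday.", "She went to school yesterday."))
```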
In this talk, we will show how general-purpose LLMs prove to be particularly effective for challenging tasks, such as analytic assessment and grammatical error feedback, where their strengths in natural language understanding and generation can be fully leveraged.