Most sessions in Room 5; poster sessions in shared poster area
09:10–09:20 Opening Remarks
09:20–10:20 Invited Talk: Dr. Claire Stevenson
Learning to solve analogies: why do children excel where AI models fail?
Recent work with large language models (LLMs) concludes that analogical reasoning (using what you know about one thing to infer knowledge about a new, related instance) has emerged in these systems. My lab has conducted a series of behavioural and mechanistic-interpretability studies to investigate whether analogical reasoning has indeed emerged, and, if so, whether the developmental phenomena resemble those of humans or follow a different trajectory. We provide evidence of similarities in children's and LLMs' development in learning to solve analogies, while also highlighting key differences. I will propose a theory of how LLMs' reasoning abilities are developing and conclude with a discussion of developmental insights that could help AI models achieve human-like analogical reasoning.
10:30–11:40 Poster Session 1
Transformer Attention as a Unified Model of Encoding and Retrieval in Human Sentence Processing
Dan Parker
ChineseDevBench: A Chinese Developmental Benchmark for Language Development
Shaonan Wang, Yiwen Wu, Na Li, Zesheng Chen, Gan Wang, Shuchen Zhang, Xin Sun, Luan Li and Yaran Chen
Predicting Sentence Acceptability Judgments in Multimodal Contexts
Hyewon Jang, Nikolai Ilinykh, Sharid Loaiciga, Jey Han Lau and Shalom Lappin
ToM in LLM is not ToM, but a Pragmatic Effect
Agnese Lombardi and Alessandro Lenci
What Kind of Language is Easy to Language-Model Under Curriculum Learning?
Nadine El-Naggar, Tatsuki Kuribayashi and Ted Briscoe
Modeling semantic association in self-paced reading with language model embeddings
Sara Møller Østergaard, Kenneth Enevoldsen, Afra Alishahi and Bruno Nicenboim
Quantifying the Pragmatics of Discourse Markers: A Surprisal-Based Analysis of Taiwanese "Ah"
Ri-Sheng Huang
Acoustic Representations Support Statistical Learning of Syllable Sequences: A Computational Study
Nika Jurov and Thomas Schatz
Simulating Cross-Linguistic Influence in Bilingual Reading - a Knowledge Distillation Approach
Irene E. Winther and Stefan L. Frank
Investigating Brain-LLM Alignment with Category-Deprived Text
Shantanu Nath, Marco Tettamanti and Roberto Zamparelli
11:40–13:00 Oral Session 1
(11:40–12:00) Character-aware Transformers Learn an Irregular Morphological Pattern Yet None Generalize Like Humans
Akhilesh Kakolu Ramarao, Kevin Tang and Dinah Baer-Henney
(12:00–12:20) Comparing Transformer Model Interpretability with Human Cognition: A Dual Analysis of Attention and Attribution
Lingchen Kong, Jinnie Shin and Pavlo Antonenko
(12:20–12:40) Correlating Language Model Surprisal With Cloze and Plausibility: Getting the Best of Both Measures
Kate Rebecca Belcher and Matthew Crocker
(12:40–13:00) Is Cross-Lingual Transfer in Bilingual Models Human-Like? A Study with Overlapping Word Forms in Dutch and English
Iza Škrjanec, Irene Elisabeth Winther, Vera Demberg and Stefan L. Frank
13:00–14:00 Lunch Break
14:00–15:00 Invited Talk: Dr. Cory Shain
Language processing at the computational level and beyond
In this talk, I will survey insights from my group into language processing across Marr's levels of analysis: computational, algorithmic, and implementational. I will first show that the computational-level notion of optimality correctly predicts key empirical properties of human language learning and processing, and I will argue that these results place a burden of proof on algorithmic-level theories. I will then review emerging evidence of deviations from optimality, which plausibly reveal the inner workings of the language processing system at an algorithmic level. Finally, I will review insights from the implementational level about the functional brain networks that likely support these processing abilities.
15:00–16:00 Oral Session 2
(15:00–15:20) Brain-to-Text Decoding with Brain Atlases and Brain Foundation Models
Haruka Akama, Ryo Yoshida, Max Müller-Eberstein and Yohei Oseki
(15:20–15:40) Floating or Suggesting Ideas? A Large-Scale Contrastive Analysis of Metaphorical and Literal Verb–Object Constructions
Prisca Piccirilli, Alexander Fraser and Sabine Schulte im Walde
(15:40–15:50) Social Meaning in Large Language Models: Structure, Magnitude, and Pragmatic Prompting
Roland Mühlenbernd
(15:50–16:00) Semantic Norms and Structural Relations in Chinese Semantic Composition: A Dataset and Analysis
He Zhou, Emmanuele Chersoni and Yu-Yin Hsu
16:00–17:00 Poster Session 2
Headlines You Won’t Forget: Can Pronoun Insertion Increase Memorability?
Selina Meyer, Magdalena Abel and Michael Roth
Aggregated Transformer Attention Measures Predict Reading Times Beyond Surprisal
Lukas Mielczarek and Laura Kallmeyer
‘Layer su Layer’: Identifying and Disambiguating the Italian NPN Construction in BERT’s family
Greta Gorzoni, Ludovica Pannitto and Francesca Masini
Teaching LLMs to unveil tendentious implicit contents of Italian political communication
Walter Paci, Lorenzo Gregori and Alessandro Panunzi
Examining Algebraic Recombination for Compositional Generalisation
Joaquin Cardona Ruiz, Antske Fokkens and Lucia Donatelli
Can LLMs Simulate Human Behavioral Variability? A Case Study in the Phonemic Fluency Task
Mengyang Qiu, Zoe Brisebois and Siena Sun
Translation from the Information Bottleneck Perspective: an Efficiency Analysis of Spatial Prepositions in Bitexts
Antoine Taroni, Ludovic Moncla and Frederique Laforest
Causal Inferences Are Driven by Noun Concept Specificity: Evidence from Self-Paced Reading and Large Language Model Surprisal
Fabian Schlotterbeck and Raphael Barth
Polar Questions in SPA–TTR: Linking Dialogue, Acquisition, and Neurosemantics
Jonathan Ginzburg, Shiyun Dong, Robin Cooper, Andy Luecking and Staffan Larsson
Generics as Useful Defeasible Rules of Inference
Justin Helms
17:00–17:50 Oral Session 3
(17:00–17:10) Topic-context Dependency on Continuous Semantic Reconstruction of Language from fMRI Signals
Fermin Travi, Agustin Delmagro, Diego Fernandez Slezak, Bruno Bianchi and Juan E. Kamienkowski
(17:10–17:20) Disentanglement and Compositionality of Letter Identity and Letter Position in Variational Auto-Encoder Vision Models
Bruno Bianchi, Aakash Agrawal, Stanislas Dehaene, Emmanuel Chemla and Yair Lakretz
(17:20–17:30) Plausibility as Commonsense Reasoning: Humans Succeed, Large Language Models Do Not
Sercan Karakas
(17:30–17:40) Benchmarking Source-Sensitive Reasoning in Turkish: Humans and LLMs under Evidential Trust Manipulation
Sercan Karakas and Yusuf Şimşek
(17:40–17:50) Toward Cognitive Alignment in Large Language Models: Integrating Linguistic Theory and Human Data
Wajdi Zaghouani
17:50–18:00 Closing Remarks