Please click here for a PDF version of the schedule.
Please click here for a PDF version of the abstract proceedings.
All events will be held in the Teaching and Learning Complex (TLC) on the UC Davis campus.
Note that the events on Day 1 (Friday) will take place in TLC 2218, while the events on Days 2 and 3 (Saturday and Sunday) will take place in TLC 2215.
Click the drop-down menu below for the abstracts of our two invited talks on Friday, by Dr. Cory Shain and Dr. Shailee Jain!
INVITED TALKS – ABSTRACTS
Invited Talk #1: Psycholinguists need language models in order to bridge a hidden divide
Speaker: Cory Shain, Stanford University
Abstract: As in any scientific field, views of language comprehension differ along many dimensions. However, I will argue that hidden within this space lie two major classes of views, which I will label "proceduralism" and "inferentialism", that disagree at a foundational level about the conceptual vocabulary in which to express cognitive theories. This disagreement in turn gives rise to (i) divergent research priorities, (ii) difficulties in assimilating empirical findings from across the divide, and (iii) confusion, especially to the extent that the two subcommunities use the same terms to refer to different constructs. As a result, this disagreement currently presents an obstacle to cumulative progress in theory-building. My hope in this talk is to provide a clear statement of the disagreement, survey key empirical tests that bear on it (in which language models play a starring role), and draw out the implications of those tests, in order to facilitate cross-talk between research communities and progress toward the shared goal of understanding human language comprehension.
Invited Talk #2: Interpreting LLMs to interpret the brain
Speaker: Shailee Jain, UC San Francisco
Abstract: Language is central to human communication, yet we understand little about how the brain processes it. The predominant research paradigm for investigating this operates through linguistics: first describe features of the input, such as syllables or syntax, then look for their neural correlates. While productive, this approach has produced fragmented experiments, each studying a specific feature with a single type of brain measurement. We still lack strong evidence that linguistic theory can fully describe how neural circuits are organized. An increasingly popular alternative is to use pretrained deep neural networks (DNNs) as rich feature extractors. While this has led to more accurate predictive models of brain activity (see, e.g., the swath of comparative studies like BrainScore), the utility of replacing one black box we don't understand (the brain) with another black box we don't understand (the LLM) remains to be demonstrated. In this talk, I present three approaches to bridging this divide, asking how we can develop AI interpretability tools, in conjunction with linguistic theory, that help us better understand the human brain.