Keynote Speaker Announcement 🔊  30.07.2025
We are delighted to announce the keynote speech that will take place at the special session!
Speaker: Prof. Karen Livescu, Toyota Technological Institute at Chicago
Title: What can interpretability do for us (and what can it not)?
Interpreting neural speech models could in principle do a lot for us: guide us in choosing the best model for a task; help us design better methods for model use or fine-tuning; guide the design of training objectives; study the relationship between machine and human speech perception and production; and satisfy our scientific curiosity about what is inside the black box of a large model. At the same time, it can be hard to tell whether interpretability methods actually achieve any of these goals. In this talk, I will describe some of my research group's experience with interpretability methods for speech foundation models, including self-supervised and supervised models. I will describe some of our results, and perhaps more importantly, I will try to distill some of the lessons learned along the way about the relative merits of several interpretability methods, and some general "dos and don'ts" to hopefully spark discussion.
Session Format ✨ 24.07.2025
The special session will take place on Monday, 18 August, from 11:00 to 13:00. Here's the structure:
Keynote talk (30 minutes)
Poster session (90 minutes) - for all accepted papers
For the poster session, we expect printed A0 posters in portrait format. We'll provide poster boards and other equipment. Let us know if you have any special requests.