The workshop will take place in A301. The poster session will take place in Hall C.
09:00 - 09:15: Welcome
09:15 - 10:00: Keynote #1: Zhicong Lu
10:00 - 10:30: Lightning talks #1
10:30 - 11:00: Break #1
11:00 - 11:45: Keynote #2: Anjalie Field
How can we enable LLM auditing?
Abstract: Oversight and auditing of AI systems are becoming increasingly difficult as people use systems in a wide variety of ways, with instructions expressed in natural language prompts. We can no longer use readily quantifiable metrics like accuracy or statistical parity to understand model performance and potential impacts. Instead, we need ways of conducting open-ended analyses of models and usage data that do not infringe on user privacy. In this talk, I will discuss ways we are working towards these goals, beginning with an in-depth analysis of LLM usage in a specific domain: AI for querying astronomy literature. While manual analysis of usage data and follow-up interviews with astronomers offer an in-depth look at how astronomers interacted with an LLM-powered system, manual evaluation does not scale to the large volume of usage data in other contexts. Thus, I will next discuss methods for automated inductive coding, which offer more scalability, and finally, leveraging synthetic data to enable increased oversight of model usage and development without compromising privacy.
Bio: Anjalie Field is an Assistant Professor in the Computer Science Department at Johns Hopkins University. She is also affiliated with the Center for Language and Speech Processing (CLSP) and the Data Science and AI Institute. Her research focuses on the ethics and social science aspects of natural language processing, which includes developing models to address societal issues like discrimination and propaganda, as well as critically assessing and improving ethics in AI pipelines. Her work has been published in NLP and interdisciplinary venues such as ACL and PNAS, and in 2024 she was named an AI2050 Early Career Fellow by Schmidt Futures. Prior to joining JHU, she was a postdoctoral researcher at Stanford, and she completed her PhD at the Language Technologies Institute at Carnegie Mellon University.
11:45 - 12:30: Lightning talks #2
12:30 - 14:00: Lunch
14:00 - 14:45: Keynote #3: Heloisa Candello
Human-AI Interactions: Lessons from AI conversational agents in operation in human society
Abstract: As artificial intelligence progresses toward autonomous agents, crucial lessons from conversational AI can be applied to ensure these new systems are safe and trustworthy. This talk synthesizes insights on human-AI interaction, highlighting the need for agents to integrate value-aware controls, reveal uncertainty, and support fairness. This approach is essential for building a future where proactive AI systems engage in meaningful and secure interactions.
Bio: Dr. Heloisa Candello is a Senior Research Scientist at IBM Research – Brazil, based in São Paulo. She specializes in Human-Computer Interaction (HCI), focusing on the design and evaluation of conversational systems and responsible AI technologies. She holds a Ph.D. in Computer Science with a focus on Interactive Technologies from the University of Brighton, UK. At IBM, Dr. Candello leads research in the Responsible Tech group, applying mixed-methods research to develop ethical and engaging AI-driven user experiences. Her work has been published in leading conferences such as CHI, CSCW, and CUI. She has also contributed to several patents related to conversational AI. Dr. Candello is an active member of the ACM community, serving on committees like SIGCHI LATAM and the CHI Steering Committee. She is also an ACM Distinguished Speaker, offering talks on topics including AI's social impact and design perspectives on generative AI. Currently, she serves as a Technical Program Chair for CHI 2026.
14:45 - 15:30: Lightning talks #3
15:30 - 16:00: Break #2
16:00 - 16:45: Group discussion and closing
16:45 - 18:00: Poster session