The workshop will take place in A301, and the poster session will take place in Hall C. All workshop events will also be available online at the Zoom link on the workshop's Underline page (must be registered to view).
09:00 - 09:15: Welcome
09:15 - 10:00: Keynote #1: Zhicong Lu
Online Creative Communities and Creativity Support Tools: Implications for Human-centered NLP
Abstract: The proliferation of video-sharing and livestreaming platforms such as YouTube and Twitch has catalyzed the growth of online creative communities for cultural expression. In this talk, I will present my group’s research on these vibrant online creative communities, uncovering the innovative ways in which grassroots content creators captivate audiences, safeguard intangible cultural heritage, and reshape cultural production through livestreaming and short videos. I will highlight the complex dynamics, creative labor, and challenges content creators face and showcase the design and implementation of interactive tools and systems to empower them. The insights derived from understanding and supporting these online creative communities could inform the design of future human-AI interactive systems for cultural expression and creativity support.
Bio: Prof. Zhicong Lu is an Assistant Professor in the Department of Computer Science at George Mason University, specializing in Human-Computer Interaction (HCI), social computing, and computer-mediated communication. Dr. Lu’s work centers on understanding and addressing user needs in diverse contexts in online creative communities by employing qualitative and quantitative methods, design methods, and interactive system development. Dr. Lu has published extensively in premier HCI and social computing conferences such as ACM CHI, CSCW, DIS, and CHI PLAY, earning a Best Paper Award at CHI 2019 and three Best Paper Honorable Mentions at CHI 2021, CHI 2023, and CHI 2024. His achievements include the Rising Star Award from the International Chinese Association of Computer-Human Interaction. He was also elected Chair of the Asia SIGCHI Committee, which is dedicated to advancing HCI research and practice in Asia.
10:00 - 10:30: Lightning talks #1
A Survey of LLM-Based Applications in Programming Education: Balancing Automation and Human Oversight
Dialogue Acts as a Lens on Human–LLM Interaction: Analyzing Conversational Norms in Model-Generated Responses
Effects of Collaboration on the Performance of Interactive Theme Discovery Systems
GUARD: Guiding Unbiased Alignment through Reward Debiasing
HILITe: Human-AI Collaborative Framework for Image Transcreation
Re:Member: Emotional Question Generation from Personal Memories
Towards an Automated Framework to Audit Youth Safety on TikTok
Towards Open-Ended Discovery for Low-Resource NLP
Word Clouds as Common Voices: LLM-Assisted Visualization of Participant-Weighted Themes in Qualitative Interviews
10:30 - 11:00: Break
11:00 - 11:45: Keynote #2: Anjalie Field
How can we enable LLM auditing?
Abstract: Oversight and auditing of AI systems is becoming increasingly difficult as people use systems in a wide variety of ways, with instructions expressed in natural language prompts. We can no longer use readily quantifiable metrics like accuracy or statistical parity to understand model performance and potential impacts. Instead, we need ways of conducting open-ended analyses of models and usage data that do not infringe on user privacy. In this talk, I will discuss ways we are working towards these goals, beginning with an in-depth analysis of LLM usage in a specific domain: AI for querying astronomy literature. While manual analysis of usage data and follow-up interviews with astronomers offer an in-depth look at how astronomers interacted with an LLM-powered system, manual evaluation does not scale to the large volume of usage data in other contexts. Thus, I will next discuss methods for automated inductive coding, which offer more scalability, and finally, leveraging synthetic data to enable increased oversight of model usage and development without compromising privacy.
Bio: Anjalie Field is an Assistant Professor in the Computer Science Department at Johns Hopkins University. She is also affiliated with the Center for Language and Speech Processing (CLSP) and the Data Science and AI Institute. Her research focuses on the ethics and social science aspects of natural language processing, which includes developing models to address societal issues like discrimination and propaganda, as well as critically assessing and improving ethics in AI pipelines. Her work has been published in NLP and interdisciplinary venues, like ACL and PNAS, and in 2024 she was named an AI2050 Early Career Fellow by Schmidt Futures. Prior to joining JHU, she was a postdoctoral researcher at Stanford, and she completed her PhD at the Language Technologies Institute at Carnegie Mellon University.
11:45 - 12:30: Lightning talks #2
Cognitive Feedback: Decoding Human Feedback from Cognitive Signals
Collaborative Co-Design Practices for Supporting Synthetic Data Generation in Large Language Models
Culturally-Aware Conversations: A Framework & Benchmark for LLMs
DAMASHA: Detecting AI in Mixed Adversarial Texts via Sentence Segmentation with Human-interpretable Attribution
Dark Patterns Meet GUI Agents: LLM Agent Susceptibility to Manipulative Interfaces and the Role of Human Oversight
Digital Tongues: Internet Language, Collective Identity, and Implications for Human-Computer Interaction
EMOBOT: Explainable Risk Tiering for EMOtional Supportive ChatBOTs
Exploring Gender Differences in Emoji Usage: Implications for Human-Computer Interaction
MEETING DELEGATE: Benchmarking LLMs on Attending Meetings on Our Behalf
MobileA3gent: Training Mobile GUI Agents Using Decentralized Self-Sourced Data from Diverse Users
Rethinking Search: A Study of University Students’ Perspectives on Using LLMs and Traditional Search Engines in Academic Problem Solving
Should I Share this Translation? Evaluating Quality Feedback for User Reliance on Machine Translation
Time Is Effort: Estimating Human Post-Editing Time for Grammar Error Correction Tool Evaluation
Towards Human-Centered RegTech: Unpacking Professionals' Strategies and Needs for Using LLMs Safely
12:30 - 14:00: Lunch
14:00 - 14:45: Keynote #3: Heloisa Candello
Human-AI Interactions: Lessons from AI conversational agents in operation in human society
Abstract: As artificial intelligence progresses toward autonomous agents, crucial lessons from conversational AI can be applied to ensure these new systems are safe and trustworthy. This talk synthesizes insights on human-AI interaction, highlighting the need for agents to integrate value-aware controls, reveal uncertainty, and support fairness. This approach is essential for building a future where proactive AI systems engage in meaningful and secure interactions.
Bio: Dr. Heloisa Candello is a Senior Research Scientist at IBM Research – Brazil, based in São Paulo. She specializes in Human-Computer Interaction (HCI), focusing on the design and evaluation of conversational systems and responsible AI technologies. She holds a Ph.D. in Computer Science with a focus on Interactive Technologies from the University of Brighton, UK. At IBM, Dr. Candello leads research in the Responsible Tech group, applying mixed-methods research to develop ethical and engaging AI-driven user experiences. Her work has been published in leading conferences such as CHI, CSCW, and CUI, and she has contributed to several patents related to conversational AI. Dr. Candello is an active member of the ACM community, serving on committees such as SIGCHI LATAM and the CHI Steering Committee. She is also an ACM Distinguished Speaker, offering talks on topics including AI’s social impact and design perspectives on generative AI. Currently, she is a Technical Program Chair for CHI 2026.
14:45 - 15:30: Lightning talks #3
First Impressions from Comparing Form-Based and Conversational Interfaces for Public Service Access in India
From Noise to Nuance: Enriching Subjective Data Annotation through Qualitative Analysis
From Regulation to Interaction: Expert Views on Aligning Explainable AI with the EU AI Act
How Well Can AI Models Generate Human Eye Movements During Reading?
Hybrid Intelligence for Logical Fallacy Detection
Out of the Box, into the Clinic? Evaluating State-of-the-Art ASR for Clinical Applications for Older Adults
Predictive Modeling of Human Developers’ Evaluative Judgment of Generated Code as a Decision Process
Rethinking Personality Assessment from Human-Agent Dialogues: Fewer Rounds May Be Better Than More
Supporting Online Discussions: Integrating AI Into the adhocracy+ Participation Platform To Enhance Deliberation
The Automated but Risky Game: Modeling Agent-to-Agent Negotiations and Transactions in Consumer Markets
TimE: Hierarchical Evaluation of Temporal Cognitive Abilities in LLMs Across Real-World Contexts
Toward Human-Centered Readability Evaluation
TripleCheck: Transparent Post-Hoc Verification of Biomedical Claims in AI-Generated Answers
User-Centric Design Paradigms for Trust and Control in Human-LLM-Interactions: A Survey
15:30 - 16:00: Break #2
16:00 - 16:45: Group discussion and closing
16:45 - 18:00: Poster session