Shaomei Wu is a person who stutters and the founder and CEO of AImpower.org---a tech non-profit that has been researching and co-developing inclusive videoconferencing and speech AI technologies for, with, and by the stuttering community since 2022. Her research explores fairness and ethical issues in mainstream technologies including social media, AI, and videoconferencing. She has co-organized research and community workshops at CSCW, Meta, and the National Stuttering Association Conference, and has served as a workshop co-chair for ICWSM.
Kimi Wenzel is a PhD student at Carnegie Mellon University's Human-Computer Interaction Institute. Her research centers on understanding the downstream representational harms of speech AI.
Jingjin Li is a research fellow at AImpower.org where she leads research on co-designing inclusive Speech AI and videoconferencing tools with the stuttering community. She has co-organized panels and served on the organizing committee for conferences such as CSCW and DIS.
Qisheng Li is a research engineer at Meta Reality Labs and a research fellow at AImpower.org, where she leads the community-led Chinese stuttered speech dataset creation and benchmarking project. Her research interests lie in HCI and AI, with an emphasis on AI for social good, crowdsourcing, and evaluation. She has served on the program committee at ASSETS and CSCW.
Alisha Pradhan is an Assistant Professor in the Department of Informatics at the New Jersey Institute of Technology. Her research has examined the design and use of conversational voice technologies by older adults, in particular identifying the accessibility benefits and barriers posed by these technologies, and engaging older adults in the design of equitable voice technologies. She has co-organized workshops and panels at conferences including ACM CHI and UbiComp.
Raja Kushalnagar is a Deaf professor with deaf-accented speech, and Director of the Artificial Intelligence, Accessibility and Sign Languages Center. His research explores the accuracy and usability of conversational voice technologies for deaf and hard of hearing people, and fairness, equity, and inclusion in AI models and platforms. He has served on the program committee at ASSETS, ICCHP, and CSUN.
Colin Lea is a research scientist and manager at Apple. His group focuses on making interactive technologies — especially speech — more inclusive for people with disabilities. His work is at the intersection of machine learning, HCI, and accessibility and emphasizes data collection/curation and ML modeling.
Allison Koenecke is a mainstream American English speaker and an Assistant Professor of Information Science at Cornell University. Her research on algorithmic fairness includes auditing disparities in ASR system performance, especially among underrepresented speech populations including African American English, d/DHH, and aphasic speakers.
Christian Vogler is a deaf person who speaks English with an accent that derives both from his German roots and his deafness. He is the director of the Technology Access Program at Gallaudet University and has led numerous grants and projects on accessible technologies for the DHH. Some of his most recent work focuses on both text-to-speech and speech-to-text technologies for DHH people. He has served on the organizing committee of conferences and workshops, including ASSETS, Gesture Workshops, AI-related workshops, and others.
Mark Hasegawa-Johnson is a Professor of Electrical and Computer Engineering at the University of Illinois Urbana-Champaign. He is a Fellow of the Acoustical Society of America, a Fellow of ISCA, and a Fellow of the IEEE for contributions to speech processing of under-resourced languages. Professor Hasegawa-Johnson is Editor-in-Chief-elect of the IEEE Transactions on Audio, Speech, and Language Processing, a member of the ISCA Diversity Committee, and Technical Program Chair of IEEE ASRU 2025.
Norman Makoto Su is an associate professor in the Department of Computational Media at UC Santa Cruz. His research interests lie in human–computer interaction (HCI) and computer-supported cooperative work (CSCW). He directs the Authentic User Experience Lab (AUX Lab), which integrates empirical and humanistic methods in HCI to study and design with subcultures. He has published work on collective action, on new forms of and challenges with AI work, and on the techlash. He has co-organized workshops at CHI, CSCW, and GROUP.
Nan Bernstein Ratner is a professor in the Department of Hearing and Speech Sciences at the University of Maryland, College Park. Her primary areas of research are fluency development and disorders, psycholinguistics, and child language development. Nan is a co-founder and co-manager of FluencyBank~\cite{ratner2018fluency}, a corpus of annotated disfluent speech that has been highly influential in both fluency research and speech AI development. She is a longstanding organizer of sessions for the Annual Meeting of the American Association for the Advancement of Science and the recipient of multiple NIH Conference grants (R15).