A new ‘participatory turn’ in AI has been generating significant interest [1, 2, 3, 4]. At its core is a citizen-centred call for AI development that shifts power towards those who use or are affected by AI. This topic is particularly relevant to the IUI community, which has long championed user agency and meaningful human control in intelligent systems. Participatory AI assumes that citizen voices should be integrated as a central part of the development process, but achieving this is a significant challenge. Existing work on participatory ML has shown that diverse stakeholders can be engaged in the technical development of machine learning systems, but the elicited values are sometimes trivialised and disconnected from social norms [5]. Understanding how to create effective interfaces for participation is therefore crucial as AI systems become more pervasive: a stronger citizen-centred focus would help ensure that AI systems align more deeply with public values and serve broader democratic and societal goals. For participatory AI, ML-centric and citizen-centric approaches should work in tandem, so that citizen deliberation can be properly integrated into the development, deployment, and subsequent evaluation of new systems. Yet this integration is no simple matter: there remains a worrying translation gap between participatory initiatives and the technical development of AI systems, not least because participatory work often relies on smaller-scale, community-based methods that fail to influence the globalised operation of commercial AI systems [4].

This workshop directly addresses IUI's core mission of designing intelligent user interfaces by examining what kind of interface, or ‘boundary object’, might sit at the intersection of public deliberation and AI development. Unlike the main conference's focus on individual user interactions, this workshop explores collective participation and community-level interaction with AI systems.
We invite contributions that help us understand (i) how we can best reconcile the values of AI developers and user communities, and (ii) what methods might ensure the translation of those values into actionable tools with a genuine impact on AI systems development. By bringing together theoretical perspectives with interactive demonstrations of actual boundary objects, this full-day workshop offers IUI attendees practical insights into an emerging challenge that will shape the future of human-AI interaction.
The workshop takes a mixed-format approach to exploring participatory AI, combining theoretical presentations with hands-on demonstrations. Our central focus is ‘boundary objects’ – the interfaces, tools, and methods that help citizens and AI developers work together. These objects act as translators, making technical systems accessible to citizens while making citizen values actionable for developers.
We begin with a brief welcome and introduction (15 minutes) to frame the workshop goals, followed by a keynote presentation (30 minutes) on “How might we ensure citizen values influence AI systems?” This sets the stage by exploring both theoretical foundations and real-world challenges. Next comes our provocations and positions session (60 minutes), in which contributors present lightning talks of 3-5 minutes each to surface key tensions around risk articulation, evolving values, platform integration, and democratic legitimacy. Following a morning coffee break, the interactive demonstrations session (60 minutes) shifts from talking to doing. Participants explore submitted design concepts and interactive artefacts at multiple stations, which range from digital interfaces to physical card decks, and from risk articulation tools to participatory methods. Visitors rotate through the stations, trying each artefact and leaving structured feedback.
After lunch, we move to thematic breakout discussions (60 minutes). Participants self-organise around the challenges that resonate most: risk translation, evolving values, scale and integration, or democratic legitimacy. Each group examines real examples from the morning demonstrations, exploring what works, what fails, and why. Following an afternoon coffee break, we hold a collective synthesis session (30 minutes) in which each group shares its key insights. Together, we map the landscape of current approaches, identifying fundamental barriers and promising directions. Most importantly, we acknowledge what remains unsolved and where the hardest challenges lie. The workshop concludes with a wrap-up and next steps session (15 minutes), in which we identify opportunities for collaboration and plan follow-up activities, ensuring momentum continues beyond the workshop itself.
Materials provided: We distribute all position papers and provocations before the event. During the workshop, we provide discussion templates and note-taking tools. Simple prototyping materials will be available for sketching ideas. All submissions and documentation will be available in a shared digital repository, which will be updated with key discussions and outcomes following the workshop.
Abeba Birhane, William Isaac, Vinodkumar Prabhakaran, Mark Díaz, Madeleine Clare Elish, Iason Gabriel, and Shakir Mohamed. "Power to the people? Opportunities and challenges for participatory AI." In Proceedings of the 2nd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, pp. 1-8. 2022.
Fernando Delgado, Stephen Yang, Michael Madaio, and Qian Yang. "The participatory turn in ai design: Theoretical foundations and the current state of practice." In Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, pp. 1-23. 2023.
Michael Feffer, Michael Skirpan, Zachary Lipton, and Hoda Heidari. "From preference elicitation to participatory ML: A critical survey & guidelines for future research." In Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, pp. 38-48. 2023.
Meg Young, Upol Ehsan, Ranjit Singh, Emnet Tafesse, Michele Gilman, Christina Harrington, and Jacob Metcalf. "Participation versus scale: Tensions in the practical demands on participatory AI." First Monday (2024).
Tan Zhi-Xuan, Micah Carroll, Matija Franklin, and Hal Ashton. "Beyond Preferences in AI Alignment." Philosophical Studies 182, no. 7 (2025): 1813-1863.