ACM CHI 2026 Workshop
From Generation to Simulation: Responsible Use of AI Personas in Human-Centered Design and Research
13 April 2026
Personas are designed to represent groups of users who share similar characteristics. Traditionally, they have been static artefacts: brief, typically 1-page profiles detailing demographic attributes, motivations, goals, and pain points, often accompanied by a headshot or photos of personal items to make them feel more vivid. These profiles were intended to help design teams “step into the shoes” of their users, empathize with their experiences, and anticipate their responses to design ideas.
Generative AI tools can now generate and simulate personas and synthetic users, empowering design teams to quickly ideate, prototype, and evaluate. However, without careful use, AI-generated personas risk bias, stereotyping, representational harm, and validity gaps. How can we responsibly embrace AI personas while mitigating their risks?
We invite HCI researchers, UX practitioners, AI ethics experts, design methodologists, and developers to join our half-day CHI workshop. We aim to collaboratively develop practical guidance and resources for the responsible use of AI personas, focusing on transparency, risk awareness, and validity. We invite brief contributions that share real experiences with AI personas: where they improved or misled design, how teams integrated them into workflows, and what tools supported reproducibility. We’re interested in practical ways to evaluate their plausibility, diversity, and reliability, as well as ethical issues around consent, disclosure, and accessibility.
Submission Details:
Interested participants should submit a short position paper (2–4 pages, single column, ACM CHI Extended Abstract format, references included), case study, or artifact abstract, including examples or experiences related to AI personas, synthetic users, or related methods.
Submissions will be reviewed by at least two organizers for relevance to the workshop goals, clarity of methods, and diversity of perspectives and domains. Anonymisation is not required. At least one author of each accepted submission must attend the workshop, and all participants must register for it.
With participants’ consent, all accepted position papers will be made available on our workshop website. Proceedings shall be submitted to CEUR-WS.org for online publication.
18 February 2026 - Submissions close
25 February 2026 - Notification of acceptance
13 April 2026 - Workshop day (Session 1: 14:15 - 15:45 CEST - Session 2: 16:30 - 18:00 CEST)
Submissions are now closed.
The workshop invites HCI/CSCW researchers, UX practitioners, tool builders, methodologists, AI ethics/governance leads, accessibility specialists, and social‑computing scholars. We target 25–35 participants with a balance of: (i) builders of AI persona simulation tools; (ii) design researchers who have piloted synthetic users; (iii) critical perspectives (e.g., ethics, marginalized user advocates); and (iv) domain specialists (health, education, civic tech) sharing their experiences and insights. We invite short, concrete contributions on (but not limited to):
Studies and Cases
HCD Use Cases: AI personas for ideation, scenarios/counter-scenarios, early walkthroughs, and lightweight evaluation.
Team Workflows: Persona libraries, lifecycle management, multi-agent “populated prototypes,” sprint integration.
Validity & Trust: Alignment with real users (means vs variance, plausibility, edge cases) and trust calibration.
High-stakes Contexts: Marginalized and non-Western groups, ability-based cases, benefits vs harms, required safeguards.
Designer Sensemaking: Empowerment vs deskilling, over-reliance risks, disclosure, communicating uncertainty.
Failure Modes: Stereotyping, mode collapse, model drift, hallucinated constraints, simulation–reality gaps.
Design, Development, and Evaluation
AI Persona Construction: Data-grounded/synthetic personas, retrieval-augmented grounding, memory policies, controllable traits/goals, multi-agent task flows.
UX Evaluation Techniques: Scripted/heuristic walkthroughs, mixed AI-human checks, metrics for central tendency, coverage, ecological plausibility.
Context-specific Guidelines: Persona applicability, intended use scoping, counter-persona creation, bias-reducing prompts/constraints, accessible authoring.
Measurement & Reporting: Standardized Persona Cards (provenance, sources, parameters, biases, evaluation), reproducibility logs, assumptions documentation.
Ethical & Regulatory: Bias audits, accountability records, substitution guardrails.
Simulated Interaction Factors: Feedback, feedforward, affordances, explainability, assumptions documentation.
Domain-specific Practices: Safety/fairness-critical domains (health, finance, services), accessibility, personalization, evolving AI-designer collaboration roles.
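To make the "Standardized Persona Cards" topic above more concrete, here is a minimal sketch of what such a documentation record might look like. The field names and the `PersonaCard` class are illustrative assumptions by the editors, not a prescribed schema from the workshop:

```python
from dataclasses import dataclass, field, asdict

@dataclass
class PersonaCard:
    """Minimal documentation record for an AI-generated persona.

    Field names are illustrative assumptions, not a prescribed schema.
    """
    name: str                  # persona identifier (not a real person)
    provenance: str            # how the persona was created
    data_sources: list = field(default_factory=list)          # grounding data, if any
    generation_parameters: dict = field(default_factory=dict) # model, prompt, temperature
    known_biases: list = field(default_factory=list)          # documented limitations
    evaluation: str = ""       # how plausibility/diversity were checked
    intended_use: str = ""     # scoping: what the persona may and may not inform

    def to_record(self) -> dict:
        """Flatten the card into a plain dict for a reproducibility log."""
        return asdict(self)

# Example: a card for a hypothetical synthetic user
card = PersonaCard(
    name="P-07 'Amira'",
    provenance="LLM-generated, grounded in anonymised survey themes",
    generation_parameters={"model": "example-llm", "temperature": 0.7},
    known_biases=["urban skew in source survey"],
    evaluation="plausibility review by two researchers",
    intended_use="early ideation only; not a substitute for user testing",
)
```

A card like this could accompany each persona in a team's library, so that provenance, biases, and scoping travel with the artifact rather than living only in the heads of its creators.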
14:15 – 14:45 Introduction and Keynote
Brief overview of GenAI personas in HCI: promises and pitfalls.
An invited keynote from a senior researcher emphasizing ethical considerations.
14:45 – 15:45 Session 1 – Lightning Talks
Participants share short presentations on their experiences using GenAI personas.
Focus on lessons learned, challenges faced, and ethical dilemmas encountered.
15:45 – 16:30 Coffee Break
16:30 – 17:10 Session 2 – Anatomy of AI Persona Generation and Simulation
Participants work in small groups to map the end-to-end pipeline of AI persona generation and simulation. Simple templates and colour-coded sticky notes will be used to capture stages, assumptions, risks, and opportunities. This activity will result in a large, annotated "Anatomy of the AI Persona Pipeline" visualisation.
17:10 – 17:45 Session 3 – Risk & Bias Audit: Building on the pipeline visualisation, participants rotate through breakout tables, each focused on a specific risk lens such as bias, misuse, over-trust, and misalignment with stakeholders. Using short scenario cards and a checklist template, groups rapidly identify concrete failure modes and minimal validation steps for different domains (e.g., healthcare, education). The session culminates in a practical Risk & Bias checklist that any team can apply in less than 15 minutes.
17:45 – 18:00 Wrap Up and Roadmap
Summarizing the day and discussing next steps with the participants.