Welcome to our fourth Computational User Interface workshop at CHI. This year, we will focus our discussion on the challenges and opportunities of computational approaches to designing and developing user interfaces with AI, bringing together researchers from different sub-disciplines of HCI, across the intersections between HCI and adjacent fields (ML, CV, NLP), at different stages of the design pipeline, and across the boundaries between industry and academia.
Position paper submission deadline: February 23, 2025
Workshop date: Sunday, April 27, 2025
Yuwen Lu*
University of Notre Dame
USA
Yue Jiang*
Aalto University
Finland
Tiffany Knearem
TK Research
USA
Clara Kliman-Silver
USA
Christof Lutteroth
University of Bath
UK
Jeffrey Nichols
Apple
USA
Wolfgang Stuerzlinger
Simon Fraser University
Canada
Sauvik Das, Carnegie Mellon University
Towards Smart, Automated UI Assessments at Scale
Usability remains a major problem for the estimated 1.8 billion websites worldwide. Past research has found that poor usability leads to increased interaction costs, higher cognitive load, and loss of trust and confidence. Good usability, in contrast, leads to reduced development and maintenance costs, increased web traffic and sales revenue, fewer user errors, higher productivity, and lower support costs. Canonically, there are two primary ways to find usability issues with websites: (i) analytical evaluation, in which experts leverage hard-won experience and know-how to apply classic HCI techniques like heuristic evaluation to assess UIs; and (ii) empirical evaluation, in which experts conduct user studies, A/B tests, or log analyses to observe and assess how users accomplish tasks on websites. Both are expensive, labor-intensive, and require significant expertise, making it hard to scale them to the hundreds of UI decisions that must be made when designing modern websites. In this talk, I pose the question: can modern AI help scale analytical and empirical UI evaluation so that website designers can identify and fix critical UI issues faster, cheaper, and more effectively? I'll present work we have done at fuguUX, a new start-up I am co-founding, to explore this question. First, I'll share work on using AI to conduct heuristic evaluations of websites, grounded in real expert assessments. Second, I'll discuss our approach to using AI agents to conduct believable user testing simulations. I'll end with a few broader reflections on where I see the promise and perils of automating UI assessments with AI.
Bio:
Sauvik Das is an Assistant Professor at the Human-Computer Interaction Institute at Carnegie Mellon University, where he directs the Security, Privacy, Usability and Design Lab. He is also co-founder and CTO of fuguUX, a new start-up that aims to provide smart, automated website usability assessments at scale. His work has been recognized with several awards: best paper awards at UbiComp (2013) and CHI (2024), distinguished paper awards at SOUPS (2020) and USENIX Security (2024), an honorable mention for the NSA's Best Scientific Cybersecurity Paper (2014), and five additional best paper honorable mention awards at CHI and CSCW. He has received an NSF CAREER award and is a Non-Resident Fellow of the Center for Democracy and Technology. His work has also been covered by the popular press, including features in The Atlantic, The Financial Times, ABC, and others.
Max Kreminski, Midjourney
Computational Evaluation of (Co-)Creative Interfaces
It's notoriously difficult to evaluate interfaces intended to support creative work, but as creative software tools proliferate, the importance of understanding whether and how these tools support user creativity continues to grow. In this talk, I discuss several related approaches to making sense of user interactions with creativity support tools. I focus in particular on how AI-supported evaluation methods can help us illuminate a design tool's expressive range; trace user trajectories through design space; and potentially even intervene to shape these trajectories while the interaction is still unfolding.
Bio:
Max Kreminski is a research scientist at Midjourney and the director of the company’s Storytelling Lab. Their research focuses on artificial intelligence, human-computer interaction, and creativity, with a particular emphasis on the design, development, and evaluation of AI-based creativity support tools for human storytellers, poets, and game designers. Max holds a PhD in Computational Media from UC Santa Cruz; in the past, they have also been an assistant professor at Santa Clara University, a resident at Stochastic Labs, and a researcher in several labs at the University of Southern California.
Dessie Sadrzadeh and Frank Bentley, Google
AI Design Tools in Real Product Development
The academic world has explored AI-based approaches to designing and developing front-end code for some time. But what are the biggest problems and use cases in the design and development process in large organizations? How can we apply academic research to improve designer/developer collaboration and time to market? How can we ensure that designs are visually appealing, usable, and compliant with corporate design systems? Dessie and Frank will explore research from Google on this topic, as well as solutions that we are building to help bridge this gap.
Bio:
Dessie Sadrzadeh is the Senior Director of Engineering for the Google Design Platform, building Material Design and tools for the design to code workflow and broadly for accelerating product experience creation.
Frank Bentley is the Director of Research for the Google Design Platform, exploring opportunities for improving the product design and development process and evaluating the desirability and usability of new products.
The detailed workshop schedule can be found on the schedule page.
The workshop is in-person only.