Computational Approaches for
Understanding, Generating, and Adapting User Interfaces
CHI 2022 Workshop
Saturday Apr 30th, 2022 | Hybrid (New Orleans, LA and online)
A one-day workshop at CHI'22
We will discuss the challenges and opportunities in computational approaches for understanding, generating, and adapting user interfaces by bringing together researchers from different sub-disciplines of HCI, from the intersections between HCI and adjacent fields (ML, CV, NLP), working at different stages of the pipeline, and from both industry and academia.
You can find the extended abstract of the workshop here.
Workshop Schedule & Format
The detailed workshop schedule can be found on the schedule page.
We expect the workshop to be hybrid, with the majority of participants attending in person. Synchronous remote participation will be available for those who are unable to join us in person. The workshop format and logistics are subject to change depending on the final plans for the CHI conference.
Keynote Speakers
Jeffrey Nichols
Keynote Title
Explorations of User Interface Malleability: From Automatic Generation to Guided Modification
Abstract
From the point of view of a user, human-computer interfaces can seem fixed and immutable. The user must adapt to how their computer functions in order to complete their tasks. But what if interfaces could adapt to their users instead? How, and in what ways, might we make user interfaces adaptable? In this talk, I will describe a series of research projects that approach these questions from different perspectives. Some projects explore the use of automated user interface generation to create interfaces that can be modified to suit users’ previous experience or their current environment. Other projects explore techniques that allow users to modify existing user interfaces to better suit their tasks and their devices. The projects as a whole demonstrate that there is value in making our user interfaces more malleable and suggest some paths towards making malleable interfaces a reality.
Bio
Jeffrey Nichols is a Research Scientist and Manager in the Human-Centered Machine Intelligence team at Apple where he works in the areas of user interface understanding and accessibility. Prior to that, he worked at Google on the Fuchsia open source operating system and IBM Research on end-user programming, social media analysis, and crowdsourcing. He received his Ph.D. in 2006 from the Human Computer Interaction Institute at Carnegie Mellon University under the supervision of Professor Brad A. Myers. He has published over 50 papers in major conferences and journals in the area of human-computer interaction, and serves as Editor-in-Chief of the Proceedings of the ACM on Human-Computer Interaction (PACMHCI).
Wolfgang Stuerzlinger
Keynote Title
Efficient Authoring of Resizable Graphical User Interfaces
Abstract
Current computing devices offer a vast variety of screen sizes and resolutions. A large set of screens can be used in portrait and landscape orientations. This makes it challenging to create graphical user interfaces and web pages that look good across all such screens. Traditional solutions include authoring a set of layouts for different sizes that have to be kept synchronized, which is inefficient and error-prone. While previous constraint-based approaches help, they support neither different screen orientations nor layout alternatives.
Building on initial explorations of better methods to author graphical user interfaces, I present a method to enable authors to create a single layout specification that can be used across different screen orientations and sizes, based on OR-constraints. These constraints also seamlessly unify grid-based and flow layouts, further increasing the flexibility of layout specifications. I demonstrate an efficient solver for OR-constraints systems and an approach to reverse-engineer layout specifications from a set of examples. I illustrate the approach with both resizable graphical user interfaces and responsive web pages and discuss the ultimate goal of rapid authoring of resizable/responsive user interfaces.
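The OR-constraint idea above can be illustrated with a deliberately simplified sketch: each layout alternative is one disjunct carrying its own hard constraints, and the solver selects a satisfiable alternative for the current screen. All names here are hypothetical; the actual ORC solver handles general linear constraints, soft priorities, and mixed grid/flow layouts.

```python
# Simplified illustration of OR-constrained layout (hypothetical names,
# not the actual ORC solver). Each alternative is one disjunct with its
# own hard constraints; the solver picks a satisfiable alternative.

def horizontal_fits(screen_w, screen_h, widths, heights, gap=8):
    # One row of widgets: total width must fit; the tallest widget sets the height.
    total_w = sum(widths) + gap * (len(widths) - 1)
    return total_w <= screen_w and max(heights) <= screen_h

def vertical_fits(screen_w, screen_h, widths, heights, gap=8):
    # One column of widgets: total height must fit; the widest widget sets the width.
    total_h = sum(heights) + gap * (len(heights) - 1)
    return max(widths) <= screen_w and total_h <= screen_h

def solve_or_layout(screen_w, screen_h, widths, heights):
    """Evaluate the disjunction (horizontal OR vertical) and return the
    first satisfiable alternative, so a single specification adapts to
    both screen orientations."""
    if horizontal_fits(screen_w, screen_h, widths, heights):
        return "horizontal"
    if vertical_fits(screen_w, screen_h, widths, heights):
        return "vertical"
    return None  # a real solver would relax soft constraints here

widths, heights = [500, 500, 500], [200, 200, 200]
print(solve_or_layout(1920, 1080, widths, heights))  # landscape screen -> horizontal
print(solve_or_layout(1080, 1920, widths, heights))  # portrait screen -> vertical
```

Because both alternatives live in one specification, rotating the screen simply makes a different disjunct satisfiable; no second, manually synchronized layout is needed.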
Bio
Building on his deep expertise in Virtual Reality and Human-Computer Interaction, Dr. Stuerzlinger is a leading researcher in Three-dimensional User Interfaces. He received his Doctorate from the Vienna University of Technology, was a postdoctoral researcher at the University of North Carolina at Chapel Hill, and was a professor at York University in Toronto. Since 2014, he has been a full professor at the School of Interactive Arts + Technology at Simon Fraser University in Vancouver, Canada. His work aims to gain a deeper understanding of, and to find innovative solutions for, real-world problems. Current research projects include better 3D interaction techniques for Virtual and Augmented Reality applications, new human-in-the-loop systems for big data analysis (Visual Analytics and Immersive Analytics), better layout methods for graphical user interfaces and webpages, the characterization of the effects of technology limitations on human performance, investigations of human behaviors with occasionally failing technologies, user interfaces for versions, scenarios, and alternatives, and new Virtual/Augmented Reality hardware and software.
Organizers
Yue Jiang
Yue Jiang is a Ph.D. student at Aalto University, advised by Antti Oulasvirta. Her main research interests are in HCI and Graphics with a focus on adaptive user interfaces and 3D human performance capture. Her recent work with Prof. Wolfgang Stuerzlinger and Prof. Christof Lutteroth focuses on adaptive GUI layout based on OR-Constraints (ORC).
Yuwen Lu
Yuwen Lu is a Ph.D. student in the Department of Computer Science and Engineering at the University of Notre Dame, advised by Toby Li. He is working on using data-driven approaches for understanding and generating user interfaces to support UX research and design work. Prior to joining Notre Dame, Yuwen received a Master's degree in Human-Computer Interaction from Carnegie Mellon University.
Jeffrey Nichols
Jeffrey Nichols is a Research Scientist in the AI/ML group at Apple working on intelligent user interfaces. Previously he was a Staff Research Scientist at Google working on the open-source Fuchsia operating system. His most important recent academic contribution was the creation of the RICO dataset. He also worked on the PUC project, whose primary focus was creating a specification language that can define any device and an automatic user interface generator that can create control panels from this specification language.
Wolfgang Stuerzlinger
Wolfgang Stuerzlinger is a Professor at the School of Interactive Arts + Technology at Simon Fraser University. His work aims to gain a deeper understanding of and to find innovative solutions for real-world problems. Current research projects include better 3D interaction techniques for Virtual and Augmented Reality applications, new human-in-the-loop systems for big data analysis, the characterization of the effects of technology limitations on human performance, investigations of human behaviors with occasionally failing technologies, user interfaces for versions, scenarios, and alternatives, and new Virtual/Augmented Reality hardware and software.
Chun Yu
Chun Yu is an Associate Professor at Tsinghua University. He researches computational models and AI algorithms that facilitate the interaction between humans and computers. Current research directions include novel sensing and interaction techniques, accessibility, and user interface modeling. His research has been integrated into commercial products serving hundreds of millions of smartphone users, including a touch sensing algorithm, a software keyboard decoding algorithm, a smart keyboard, and a screen reader for visually impaired people.
Christof Lutteroth
Christof Lutteroth is a Reader in the Department of Computer Science at the University of Bath. His main research interests are in HCI with a focus on immersive technology, interaction methods, and user interface design. In particular, he has a long-standing interest in methods for user interface layout. Christof Lutteroth is the director of the REal and Virtual Environments Augmentation Labs (REVEAL), the HCI research centre at the University of Bath.
Yang Li
Yang Li is a Senior Staff Research Scientist at Google, and an affiliate faculty member at the University of Washington CSE, focusing on the intersection of AI and HCI. He pioneered on-device interactive ML on Android by developing impactful product features such as next app prediction and Gesture Search. Yang has published extensively in top venues across both the HCI and ML fields, including CHI, UIST, ICML, ACL, EMNLP, CVPR, NeurIPS (NIPS), ICLR, and KDD, and has regularly served as an area chair or senior area (track) chair across these fields. Yang is also an editor of the upcoming Springer book "AI for HCI: A Modern Approach", the first thorough treatment of the topic.
Ranjitha Kumar
Ranjitha Kumar is an Associate Professor of Computer Science at the University of Illinois at Urbana-Champaign (UIUC) and the Chief Research Scientist at UserTesting. At UIUC, she runs the Data Driven Design Group, where she and her students leverage data mining and machine learning to address the central challenge of creating good user experiences: tying design decisions to desired outcomes. Her research has won best paper awards at premier conferences in HCI, and is supported by grants from the NSF, Google, Amazon, and Adobe. She received her BS and PhD from the Computer Science Department at Stanford University, and co-founded Apropose, Inc., a data-driven design startup based on her dissertation work that was backed by Andreessen Horowitz and New Enterprise Associates.
Toby Jia-Jun Li
Toby Jia-Jun Li is an Assistant Professor in the Department of Computer Science and Engineering at the University of Notre Dame and the Director of the SaNDwich Lab. Toby and his group use human-centered methods to design, build, and study human-AI collaborative systems. In the domain of this workshop, Toby has recently worked on interactive task learning agents that learn from the user's demonstrations on GUIs and natural language instructions about GUIs, graph models for representing and grounding natural language instructions about GUIs, and semantic embedding techniques for GUIs.
Program Committee
Bryan Wang University of Toronto
Forrest Huang University of California, Berkeley
Amanda Swearngin Apple
Xiaoyi Zhang Apple
Jason Wu Carnegie Mellon University
Jieshan Chen Australian National University
Sungahn Ko Ulsan National Institute of Science & Technology
Luis A. Leiva University of Luxembourg
Mingyuan Zhong University of Washington
Andrea Burns Boston University
Oriana Riva Microsoft Research
Have questions?
Feel free to contact us at user.interface.workshop@gmail.com! We will be happy to answer your questions.