Future of Computational Approaches for Understanding & Adapting User Interfaces
CHI 2023 Workshop
April 2023 | Hybrid (Hamburg, Germany and online)
We will continue the discussion from our 2022 workshop on the challenges and opportunities in computational approaches for understanding, generating, and adapting user interfaces by bringing together researchers from different sub-disciplines of HCI, across the intersections between HCI and adjacent fields (ML, CV, NLP), at different stages of the pipeline, and across the boundary between industry and academia.
You can find the extended abstract of the workshop here!
Key Dates
Workshop date: April 23 (Sunday), 2023
Organizers
Yue Jiang
Aalto University
Finland
Yuwen Lu
University of Notre Dame
USA
Christof Lutteroth
University of Bath
UK
Toby Jia-Jun Li
University of Notre Dame
USA
Jeffrey Nichols
Apple
USA
Wolfgang Stuerzlinger
Simon Fraser University
Canada
Keynote Speakers
Yang Li
Towards Foundational UI Understanding
Computational UI understanding is a cornerstone for realizing intelligent user interfaces. The advance of deep models has enabled a wide spectrum of UI understanding tasks, ranging from UI captioning to grounding, which can potentially empower many interaction scenarios, such as accessibility and automation. In this talk, I will present several threads of UI understanding work conducted in my group, which illustrate our journey in exploring the space. These works pave the way for achieving a foundational understanding of user interfaces. I will discuss our recent work on Spotlight, a foundational UI model, in which a single machine learning model can be tuned to address diverse UI tasks with substantial gains in accuracy. Finally, I will share my thoughts on directions for developing foundational models for user interfaces.
Yang is a Senior Staff Research Scientist at Google Research and an affiliate faculty member at University of Washington CSE. His research interests lie at the intersection of Human-Computer Interaction and Artificial Intelligence, focusing on general deep learning research and on models for solving human interactive intelligence and improving user experiences. He pioneered on-device interactive ML on Android by developing impactful product features such as next-app prediction and Gesture Search. Yang has published extensively in top venues across both the HCI and ML fields, including CHI, UIST, ICML, ACL, EMNLP, CVPR, NeurIPS (NIPS), ICLR, and KDD, and has regularly served as an area chair or senior area (track) chair across these fields. Yang is an editor of the Springer book on AI for HCI: A Modern Approach.
David Lindlbauer
The Future of Mixed Reality Is Adaptive
Mixed Reality (MR) has the potential to transform the way we interact with digital information and promises a rich set of applications, ranging from manufacturing and architecture to interaction with smart devices. Current MR approaches, however, are static: users need to manually adjust the visibility, placement, and appearance of their user interface every time they change their task or environment. This is distracting and leads to information overload. To overcome these challenges, we aim to understand and predict how users perceive and interact with digital information, and to use this knowledge in context-aware MR systems that automatically adapt when, where, and how to display virtual elements. We create computational approaches that leverage aspects such as users’ cognitive load or the semantic connection between virtual elements and the surrounding physical objects. Our systems increase the applicability of MR, with the goal of seamlessly blending the virtual and physical worlds.
David Lindlbauer is an Assistant Professor at the Human-Computer Interaction Institute at Carnegie Mellon University, where he leads the Augmented Perception Lab. His research focuses on understanding how humans perceive and interact with digital information, and on building technology that goes beyond the flat displays of PCs and smartphones to advance our capabilities when interacting with the digital world. To achieve this, he creates and studies enabling technologies and computational approaches that control when, where, and how virtual content is displayed to increase the usability of AR and VR interfaces.
Workshop Schedule & Format
The detailed workshop schedule can be found on the schedule page.
We expect the workshop to be hybrid, with the majority of participants attending in person. We strongly encourage in-person attendance, but synchronous remote participation will be available for those unable to join us in person. The workshop format and logistics are subject to change depending on the final plans for the CHI conference.
Associate Chairs (Reviewers)
Yi Fei Cheng Carnegie Mellon University
Hyunsung Cho Carnegie Mellon University
Gregory Croisdale University of Michigan
Yue Jiang Aalto University
Tae Soo Kim Korea Advanced Institute of Science and Technology
Sungahn Ko Ulsan National Institute of Science and Technology (UNIST)
Rebecca Krosnick University of Michigan
Luis Leiva University of Luxembourg
Toby Li University of Notre Dame
David Lindlbauer Carnegie Mellon University
Yuwen Lu University of Notre Dame
Christof Lutteroth University of Bath
Kevin Moran George Mason University
Jeffrey Nichols Apple
Wolfgang Stuerzlinger Simon Fraser University
Amanda Swearngin Apple
Kashyap Todi Meta
Bryan Wang University of Toronto
Yao Wang Aalto University
Chengzhi Zhang Carnegie Mellon University
Mingyuan Zhong University of Washington