8:30 - 8:35
8:35 - 9:15
Abstract: Heads-up computing, an emerging paradigm in human-computer interaction (HCI), aims to create seamless interactions with technology through wearable intelligent assistants. This vision relies on three crucial components: (1) bodily compatible hardware, (2) multimodal complementary interactions, and (3) resource-aware interfaces that accommodate fragmented attention. Recent advancements in large language models (LLMs) have significantly accelerated progress in these areas, enabling more natural, context-aware, and proactive systems. These developments are pushing heads-up computing beyond simple notifications toward complex, multimodal interactions that blend seamlessly with our environment and daily activities, allowing for efficient information processing in everyday life. However, as we integrate these AI-driven assistants more deeply into our lives, we must carefully consider ethical implications such as privacy and cognitive load. Balancing technological advancement with human-centered principles is crucial to create systems that enhance productivity while respecting user autonomy and well-being, ultimately augmenting human capabilities without compromising fundamental values.
Bio: Shengdong Zhao is a Professor in the School of Creative Media and the Department of Computer Science at City University of Hong Kong. In 2009, he established the Synteraction (formerly NUS-HCI) research lab at the National University of Singapore, which he has led since. Prof. Zhao received his Ph.D. in Computer Science from the University of Toronto and a Master's degree in Information Management Systems from the University of California, Berkeley.
With extensive experience in developing innovative interface tools and applications, Prof. Zhao is a regular contributor to top-tier HCI conferences and journals such as CHI, ToCHI, Ubicomp/IMWUT, CSCW, UIST, and IUI. He served as a senior consultant with the Huawei Consumer Business Group in 2017. An active member of the HCI community, Prof. Zhao serves on program committees for major HCI conferences; he was paper co-chair for the ACM SIGCHI conference in 2019 and 2020, and is paper co-chair for the ACM UIST conference in 2025.
Prof. Zhao introduced the concept of Heads-up Computing in 2017, contributing to several key projects and publications in this area, including a featured article on heads-up computing in the September 2023 issue of Communications of the ACM. His research aims to develop innovative interface tools that enhance daily life through this new interaction paradigm.
9:15 - 10:00
10:00 - 10:30
10:30 - 11:10
Abstract: Eye tracking is now a standard feature of modern XR headsets, enabling new forms of gaze-based analysis in immersive environments. However, much of the existing research is still grounded in free-exploration paradigms and visually driven attention models, which only partially reflect behavior in realistic XR experiences. In this keynote, we discuss how gaze in XR is shaped by users’ goals, cognitive demands, and multisensory context. The talk highlights the role of task-oriented behavior, where activities such as memory or visual search lead to systematic changes in fixation patterns and saccadic strategies compared to free viewing. It also examines how mental workload and crossmodal interactions further modulate visual attention, and reflects on their relevance for understanding gaze in complex, realistic scenarios.
Bio: Ana Serrano is an Associate Professor at the Universidad de Zaragoza, Spain, and a researcher at the Graphics & Imaging Lab. She was previously a postdoctoral researcher at the Max Planck Institute for Informatics. Her research focuses on perceptual modeling, visual attention, and multisensory processing in immersive environments, with the aim of understanding how cognitive demands and sensory interactions shape human perception in XR. Her work has been recognized with several international awards, including the Eurographics PhD Award (2020), the Eurographics Young Researcher Award (2023), and the IEEE VGTC VR Significant New Researcher Award (2024).
11:10 - 12:10
12:10 - 12:25
12:25 - 12:30