Abstract: This presentation discusses how we shape what we love into what we actualize. On the surface, it addresses the shift from an academic career to an industrial one. More specifically, it focuses on how my interests in work and research domains have evolved throughout my career. I will begin by sharing my personal journey: how I discovered my research topic and transitioned from a somewhat different domain to eye gaze-based interaction research for eXtended Reality (XR). I will then discuss how I have bridged my research experience with my industrial career path. I hope this talk offers a chance for us to envision what to look toward in our own research journeys.
Bio: Dr. Sunggeun Ahn is a Software Engineer at Samsung Electronics. He earned his Ph.D. in Computer Science from KAIST, specializing in Human-Computer Interaction under the guidance of Prof. Geehyuk Lee. His research focuses on making interactions with immersive technologies easier and more efficient, with a particular interest in eye gaze-based interaction. Throughout his academic journey, he has researched how to make eye gaze-based interaction a more widely usable interaction method for XR devices. Most recently, his work has focused on building synthetic data generation systems to provide datasets for training vision-based tracking solutions, including face, hand, and eye tracking techniques.
Presenter: Yichuan Zhang (Hong Kong University of Science and Technology (Guangzhou))
Abstract: Gaze input, as a modality inherently conveying user intent, offers intuitive and immersive experiences in extended reality (XR). With eye-tracking now standard in modern XR headsets, gaze has been extensively applied to tasks such as selection, text entry, and object manipulation. However, gaze-based navigation—despite being a fundamental interaction task—remains largely underexplored. In particular, little is known about which path types are well-suited for gaze navigation and under what conditions it performs effectively. To bridge this gap, we conducted a controlled user study evaluating gaze-based navigation across three representative path types: linear, narrowing, and circular. Our findings reveal distinct performance characteristics and parameter ranges for each path type, offering design insights and practical guidelines for future gaze-driven navigation systems in XR.
Presenter: Zhikun Wu (KTH Royal Institute of Technology)
Abstract: Precise manipulation, fast interaction, and lasting comfort are hard to reconcile in VR. Modern controllers can track at sub-millimetre resolution in ideal conditions, yet mid-air reach, tremor, and arm elevation still impair accuracy and induce fatigue. We present Eyes on Target, Pen on Table, combining gaze-driven target selection with desk-supported, pressure-adaptive pen input. In a within-subjects study (N=12), we compared it with a direct mid-air pen baseline. In 2D tasks, mean placement error dropped by 40% with no significant time cost, while in 3D tasks error fell by 58% at a 52% time cost. With control-display gain and an axis constraint, error reductions reached 77% in 2D and 81% in 3D relative to the baseline. Across conditions, the approach also reduced arm fatigue and NASA-TLX workload. Our findings position the Gaze + Pen on Table technique as a balanced, sustainable alternative for production-level VR design, with practical guidelines for implementation.
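The abstract above mentions a control-display gain mode with an axis constraint. As a rough illustration only (the paper's actual gain function and constraint logic are not given here), a constant-gain mapping with an optional single-axis constraint might look like the following sketch; the gain value and axis encoding are assumptions.

```python
import numpy as np

def apply_cd_gain(pen_delta, gain=0.5, constrain_axis=None):
    # Map a physical pen displacement to a virtual cursor displacement using a
    # constant control-display gain; optionally zero out every component except
    # one axis (0=x, 1=y, 2=z) to emulate an axis constraint.
    virtual_delta = gain * np.asarray(pen_delta, dtype=float)
    if constrain_axis is not None:
        constrained = np.zeros_like(virtual_delta)
        constrained[constrain_axis] = virtual_delta[constrain_axis]
        virtual_delta = constrained
    return virtual_delta

# Example: a 10 mm pen movement with slight y-axis jitter, constrained to the x axis
print(apply_cd_gain([0.010, 0.002, 0.0], gain=0.5, constrain_axis=0))
# -> [0.005 0.    0.   ]
```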
Bio: Dr. Yucheol Jung is a Software Engineer at Samsung Electronics, working on computer vision and XR technology. His role involves the commercialization of eye-tracking and fit-tracking for XR devices, focusing on how data collection can be used to evaluate and improve vision software. He completed his Ph.D. at the POSTECH Computer Graphics Lab, where his research centered on 3D reconstruction and dataset construction. While his academic research may seem quite different from his day-to-day work in the industry, he finds that the two experiences are tightly interconnected and looks forward to sharing his journey.
Abstract: This presentation explores the challenges and complexities in commercializing eye tracking (ET) technology within XR software development. The discussion highlights key differences between academic research and industrial commercialization, emphasizing the difficulties in evaluating ET features due to the complex network of stakeholders, dependencies on user-specific factors, and environmental variables. The presentation introduces hands-on experiences in ET problem investigation, including incomplete error reports, the need for robust testing methodologies (e.g., automated playback of user recordings), and the development of intuitive evaluation metrics for real-world applications. Additionally, it underscores the importance of collaboration between hardware and software teams to optimize camera placement and calibration for accurate gaze tracking. The session concludes by identifying research gaps in creating standardized metrics for ET robustness and strategies to mitigate testing challenges in real-world user environments.
Presenter: Thorbjørn Mikkelsen (Aarhus University)
Abstract: Extended reality (XR) headsets featuring gaze-tracking enable hands-free selection via explicit dwell or integration with other modalities. This paper demonstrates the novel hands-free interaction technique Gaze+Mouth, which combines eye-gaze for pointing with an intraoral tongue gesture for selection. We implement the technique using a wearable mouthpiece and conduct a user study (N=16) comparing Gaze+Mouth to the established Gaze+Pinch technique. Results show that while Gaze+Pinch is significantly faster, Gaze+Mouth achieved a comparable level of accuracy. This suggests that intraoral input is a viable channel for reliable hands-free selection in XR, offering a new modality for situations where users’ hands are occupied or when an alternative is desired.
Presenter: Juan Sánchez Esquivel (Aarhus University)
Abstract: Interacting with eye and hand inputs is attractive for extended reality (XR). We explore how further human hand capabilities can become compatible with gaze for interaction. We introduce a gaze- and air-tap-based interaction technique that enables mobile one-handed mid-air gestures for XR. By reimagining trackpad multi-touch gestures in 3D space as multi-air-taps based on a hand-attached control interface, our technique allows users to instantly access shortcuts to commands like object dragging, scrolling, and window switching. We present a prototype application that explores the capability for expressive input in compound object manipulation and UI navigation tasks. Our insights highlight our novel technique’s potential as a natural and efficient input method, expanding the interaction repertoire of extended reality users.
Presenter: Gwangyu Lee (MARTE Lab. Dept. of Multimedia, Dongguk University)
Abstract: With the release of Apple Vision Pro, interest in Mixed Reality (MR) has surged, along with the growing demand for Extended Reality (XR) content that spans Virtual Reality (VR), Augmented Reality (AR), and MR. This study presents the iXR OSC & NDI system, designed for real-time XR interaction using OSC and NDI protocols, enabling seamless integration with tools like TouchDesigner. The system supports gaze-based interaction by combining eye tracking and pinch gestures, allowing intuitive control of virtual objects without complex programming. Tailored for artists, it facilitates easy creation and manipulation of XR content and proposes a scalable framework for use in interactive performances, exhibitions, and real-time media art environments.
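The abstract above describes streaming gaze and pinch data over OSC for integration with TouchDesigner. A minimal, hypothetical sketch of such a bridge using the python-osc library is shown below; the host, port, and OSC address patterns are assumptions for illustration, not the actual interface of the iXR OSC & NDI system.

```python
# Minimal sketch: sending gaze and pinch data over OSC to a tool such as
# TouchDesigner. The address patterns (/ixr/gaze, /ixr/pinch), host, and port
# are illustrative assumptions, not the iXR OSC & NDI system's real API.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 7000)  # assumed OSC In port in TouchDesigner

def send_gaze_frame(x, y, z, pinch_down):
    # One frame: gaze hit point in world coordinates plus the pinch state.
    client.send_message("/ixr/gaze", [float(x), float(y), float(z)])
    client.send_message("/ixr/pinch", 1 if pinch_down else 0)

# Example: a single frame with the pinch gesture active
send_gaze_frame(0.12, 1.45, 0.80, True)
```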
Bio: Jeongmi Lee is an Associate Professor in the Graduate School of Culture Technology at KAIST. She earned her Ph.D. in Cognitive Neuroscience from the George Washington University in 2013, following completion of her M.A. and B.A. degrees in Psychology at Seoul National University. Prior to joining KAIST, she was a postdoctoral researcher at the Center for Mind and Brain, UC Davis. Her research focuses on cognitive neuroscience and human–computer interaction in novel media environments.
Abstract: Extended Reality (XR) is reshaping how people perceive and interact in digital environments. This talk highlights how eye-tracking measures, reflecting users’ focus of attention, attentional state, interaction intention, and social cognition, offer a unique lens into XR experiences. Drawing on representative studies from our lab, I will demonstrate how gaze-based measures reveal user dynamics, from attention allocation and task engagement to subtle social signals in collaborative XR spaces. These findings not only advance theoretical understanding of XR HCI but also suggest practical design directions. The talk will conclude with future perspectives on integrating eye-tracking with multimodal signals for more adaptive and socially attuned XR systems.