Contents
Adwait Sharma (University of Bath)
This demo presents GraspUI, a novel design space that extends object-centric interactions beyond the conventional “hold” phase to encompass the entire grasping process. This includes pre-grasp (reach, load), during-grasp (lift, hold), and post-grasp (replace, unload, depart) movements. We conducted ideation sessions with mixed-reality designers to explore gesture integration throughout the grasping process, resulting in 38 storyboards envisioning practical applications. To evaluate the design space’s utility, we performed a video-based assessment with end-users. The results showed that users reacted positively to the gestures and could integrate them into existing usage patterns of objects. We also developed an interactive prototype to demonstrate the technical feasibility of GraspUI and quantify the overhead cost of performing the proposed gestures. Finally, we highlight technical and usability guidelines for implementing and extending GraspUI systems.
HanZhen Li (Bytedance PICO), JunLiang Shan (Bytedance PICO), YuanJiang Wang (Bytedance PICO), YunLong Li (Bytedance PICO)
Recent advances in Extended Reality (XR) have largely focused on head-mounted displays (HMDs) and augmented reality (AR) glasses as primary interaction devices, resulting in overreliance on visual interfaces and limiting opportunities for embodied, multimodal interaction. We propose leveraging external tracking accessories—including full-body motion trackers, electromyography (EMG)-based hand tracking, and precision-enhancing smart rings—to extend XR interactions beyond traditional visual-centric devices. These external trackers offer unique affordances for richer, more embodied interactions, reduce visual overload, and improve social acceptability. This position paper highlights promising scenarios enabled by external tracking accessories, discusses key technical and design challenges, and outlines future research directions to explore the physicality of XR interactions beyond the glasses paradigm.
Youjin Sung (KAIST), Heejin Jeong (ASU), Gunhyuk Park (GIST), Sang Ho Yoon (KAIST)
Hand rehabilitation is essential for restoring motor functions and improving the quality of life for individuals with upper extremity impairments. However, traditional rehabilitation approaches often face challenges such as limited patient motivation, lack of personalized feedback, and accessibility issues. This paper introduces ReHabby, an advanced self-guided rehabilitation system that leverages extended reality (XR), large language models (LLMs), and multi-modal feedback to address these barriers. By integrating immersive XR environments, haptic and audio cues, and advanced AI-driven assessment, our system delivers personalized and engaging rehabilitation experiences. The design process incorporates insights from therapists and patients, ensuring alignment with clinical practices. This paper details the system’s architecture, including its LLM-driven capabilities, dataset development, and therapist-guided refinements. Future work includes expanding multimodal functionalities, conducting comprehensive evaluations, and facilitating clinical integration to enhance patient outcomes and satisfaction.
Yichen Yu (University of Rochester), Qiao Jin (Carnegie Mellon University)
In VR environments, free movement in real space enhances immersion but increases the risk of collisions with real-world obstacles. Prior solutions investigated substituting obstacles with context-related digital objects in VR but often treat all obstacles uniformly without considering their varying levels of risk. This oversight might result in reduced awareness of high-risk obstacles and a missed opportunity to utilize low-risk objects to enhance haptic feedback and interactivity in VR. In this paper, we propose a system that classifies real-world obstacles by their varying risk levels and substitutes them with context-related virtual objects in VR. The substitutions are designed to align with the obstacles’ real-world risk levels to ensure both safety and immersion.
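As a rough illustration of the substitution step described above (not the authors’ implementation), the following Python sketch maps a detected obstacle with an estimated risk score to a virtual stand-in; the risk threshold, categories, and substitute assets are hypothetical placeholders.

    # Hypothetical sketch: map detected obstacles to virtual substitutes by risk level.
    from dataclasses import dataclass

    @dataclass
    class Obstacle:
        label: str   # e.g., "table_corner", "cushion"
        risk: float  # estimated 0.0 (harmless) .. 1.0 (hazardous)

    # Placeholder asset pools per risk band; a real system would also use scene context.
    SUBSTITUTES = {
        "high": ["glowing lava rock", "spiked barricade"],     # discourage contact
        "low":  ["tree stump to sit on", "crate to lean on"],  # invite safe contact
    }

    def substitute(obstacle: Obstacle) -> str:
        band = "high" if obstacle.risk >= 0.5 else "low"
        # Pick the first matching asset; context-aware selection is out of scope here.
        return SUBSTITUTES[band][0]

    print(substitute(Obstacle("table_corner", 0.8)))  # -> "glowing lava rock"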
Yeonsu Kim (KAIST), Jisu Yim (KAIST), Kyunghwan Kim (KAIST), Yohan Yun (KAIST), Geehyuk Lee (KAIST)
We present a demonstration of Pro-Tact, an efficient, eyes-free XR interaction technique that combines proprioception and tactile exploration. Pro-Tact enables users to perform rapid, rough pointing using proprioception and fine-grain adjustments via tactile exploration, achieving speed and precision in eyes-free interaction. In this demonstration, attendees will experience two representative XR use cases of Pro-Tact: 3D sketching and game applications.
Yannick Weiss (LMU Munich)
In Extended Reality (XR), producing realistic haptic feedback remains challenging, often requiring controllers or wearable devices that restrict interaction. Haptic Illusions offer a hardware-free alternative by modifying perception, but most rely on visual dominance, limiting their use to Head-Mounted Displays (HMDs). This work proposes audio-haptic illusions as a novel approach to enhance haptic perception through sound. We explore how auditory cues can potentially modulate sensations of stiffness and temperature in XR, discuss applications for virtual and augmented interactions, and identify challenges in perception, implementation, and evaluation. With this work, we aim to highlight the potential of audio-haptic illusions for more seamless and immersive haptic experiences in XR beyond HMDs.
Hyuckjin Jang (KAIST), Bowon Kim (KAIST), Kidong Baek (KAIST), Cheeyoung Ahn (KAIST), Jeongmi Lee (KAIST)
The current paper proposes a study design for assessing the effects of sharing haptic feedback and its directionality on the collaborative experience among remote users in XR. To this end, we propose a novel turn-taking-based joint assembly task in which pairs of participants alternately select and merge pieces to complete 3D shapes. In addition, we present preliminary quality-of-experience survey results to provide initial insights into the effects of shared haptic feedback on the perceived collaborative experience of remote users in XR.
Pooria Ghavamian (KTH Royal Institute of Technology), Andrii Matviienko (KTH Royal Institute of Technology)
Visual and audio modalities have long dominated the field of XR because they align with our primary channels of perception and were the first to mature technologically. High-resolution displays and spatial audio systems were easy wins, providing immersive experiences that were relatively straightforward to implement. However, this evolutionary path has funneled effort into increasingly high-fidelity visuals and into fixing issues like keyboard input and UI manipulation in MR/VR with visual solutions such as gaze-based input. By moving beyond head-mounted displays and glasses, we open the door to alternative input modalities and escape this narrow funnel.
Kun-Woo Song (KAIST), Sang Ho Yoon (KAIST)
Neck muscle vibration (NMV) creates a proprioceptive illusion of neck rotation without physically rotating the neck. Using our custom haptic interface, we apply NMV in virtual reality (VR) during camera rotations to match proprioceptive information to the virtual visual information. By decreasing this sensory mismatch, NMV reduces VR sickness and increases presence, providing a more enjoyable virtual experience. In this demo, participants will experience NMV and its application in a VR scene. Participants will keep their necks still while the camera rotates. Meanwhile, NMV will provide an illusory neck rotation matching the camera rotation.
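To make the coupling concrete, a minimal per-frame sketch is given below; the actuator interface, the linear mapping from camera yaw velocity to vibration intensity, and the left/right assignment are all illustrative assumptions, not the demo’s actual implementation.

    # Hypothetical per-frame update: drive neck vibration from virtual camera rotation.
    def update_nmv(yaw_velocity_deg_s: float, max_velocity: float = 90.0) -> dict:
        """Return vibration intensities (0..1) for left/right neck actuators."""
        intensity = min(abs(yaw_velocity_deg_s) / max_velocity, 1.0)
        # Which side to vibrate for a given illusory direction depends on the technique;
        # the assignment below is purely illustrative.
        if yaw_velocity_deg_s > 0:      # camera turns right
            return {"left": intensity, "right": 0.0}
        elif yaw_velocity_deg_s < 0:    # camera turns left
            return {"left": 0.0, "right": intensity}
        return {"left": 0.0, "right": 0.0}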
Yatharth Singhal (University of Texas at Dallas), Daniel Honrales (University of Texas at Dallas), Haokun Wang (University of Texas at Dallas), Jin Ryong Kim (University of Texas at Dallas)
We demonstrate a novel interaction technique called thermal motion, which creates the illusion of flowing thermal sensations by combining thermal and tactile actuators. This technique leverages dynamic thermal referral illusions across multiple tactile points, allowing users to perceive moving thermal cues. Our demonstration employs a sleeve form factor that delivers these effects on both the ventral and dorsal sides of the forearm, creating the sensation of thermal motion along the arm. In addition to showcasing the sensory effects, our demo supports multiplayer functionality, where two participants can each wear their respective sleeves. Players can collaborate to defeat a robot army, synchronizing their actions and experiencing coordinated thermal feedback, or compete against one another in a head-to-head challenge. The multiplayer setup highlights the versatility of thermal motion as an interactive and immersive element in shared virtual experiences, enhancing both individual and group engagement.
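A simplified sketch of how such a moving cue could be sequenced is shown below, assuming hypothetical actuator functions for one sleeve; the timings and point count are placeholders, and the referral effect itself is described only conceptually in the abstract.

    import time

    # Hypothetical hardware stubs for one sleeve.
    def set_thermal(on: bool) -> None: ...
    def pulse_tactile(point_index: int, duration_s: float) -> None: ...

    def thermal_motion(points: int = 4, step_s: float = 0.25) -> None:
        """Sweep tactile pulses along the forearm while the thermal actuator stays on,
        so the static thermal sensation is referred to the moving tactile location."""
        set_thermal(True)
        for i in range(points):
            pulse_tactile(i, step_s)
            time.sleep(step_s)
        set_thermal(False)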
Hyunjae Gil (University of Texas at Dallas), Ashish Pratap (University of Texas at Dallas), Iniyan Joseph (University of Texas at Dallas), Jin Ryong Kim (University of Texas at Dallas)
We demonstrate PropType, an interactive interface in Augmented Reality (AR) that leverages daily-life objects, such as cups, water bottles, boxes, and soda cans, as typing surfaces. The demonstration showcases an editing tool that allows users to generate and customize on-the-go keyboards on any prop. The interface lets users customize the geometry, size, and shape of the keyboard. A set of diverse visual and sound effects is available to make the typing experience more enjoyable and engaging.
Marc Satkowski (Dresden University of Technology), Weizhou Luo (Dresden University of Technology), Rufat Rzayev (Dresden University of Technology)
This position paper addresses the fallacies associated with the improper use of affordances in the opportunistic design of augmented reality (AR) applications. While opportunistic design leverages existing physical affordances for content placement and for creating tangible feedback in AR environments, their misuse can lead to confusion, errors, and poor user experiences. The paper emphasizes the importance of perceptible affordances and properly mapping virtual controls to appropriate physical features in AR applications by critically reflecting on four fallacies of facilitating affordances, namely, the subjectiveness of affordances, affordance imposition and reappropriation, properties and dynamicity of environments, and mimicking the real world. By highlighting these potential pitfalls and proposing a possible path forward, we aim to raise awareness and encourage more deliberate and thoughtful use of affordances in the design of AR applications.
Wai Tong (Texas A&M University), Xiaolin Ni (Texas A&M University), Meng Xia (Texas A&M University)
This paper explores opportunities for using augmented reality (AR) to blend paper-based materials with digital content for hybrid task management. Despite the increasing popularity of digital tools, traditional paper-based methods remain popular for task management, as confirmed by a survey (N=153). Rather than choosing between the two, we view AR as a valuable asset in facilitating a hybrid approach, given its ability to overlay digital content onto physical materials. We conducted an ideation workshop and compiled the findings into design requirements for future development.
Taejun Kim (KAIST), Geehyuk Lee (KAIST)
Gaze-input methods offer quick and discreet interface control but rely on gaze-responsive visual feedback. In practice, unintended activations of these gaze adaptations may be unavoidable in many application contexts, and consequently, the degree of gaze-responsive adaptation is heavily constrained to avoid negatively impacting users’ viewing comfort. We demonstrate Gaze+Quasimode, a tension-based mode separation approach that enables free gaze-adapted support during the user-maintained tension period and ensures users’ viewing comfort at other times. A description of the demo process is provided on the next page.
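A bare-bones sketch of this mode separation logic follows, using a hypothetical boolean tension signal (the abstract does not specify how tension is sensed) and a placeholder adaptation routine; it is meant only to illustrate the separation, not the demo’s actual behavior.

    # Hypothetical per-frame logic: gaze-responsive adaptation runs only while the
    # user actively maintains tension; otherwise the interface stays visually quiet.
    def on_frame(tension_held: bool, gaze_target: str | None) -> str:
        if tension_held and gaze_target is not None:
            return f"adapt UI around {gaze_target}"    # free to apply strong adaptations
        return "no gaze-responsive changes"            # preserve viewing comfort

    print(on_frame(True, "menu_button"))   # -> "adapt UI around menu_button"
    print(on_frame(False, "menu_button"))  # -> "no gaze-responsive changes"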
Sieun Park (Seoul National University), Youngki Lee (Seoul National University)
Remote collaboration in mixed reality (MR) enhances interaction across locations, but traditional methods struggle to provide effective guidance for novices using physical tools. MR-based collaboration integrates real-world objects into virtual spaces, improving spatial awareness and communication. However, current MR systems fail to fully replicate functional affordances, limiting effective remote guidance. This paper introduces CRAFT (Context-Responsive Affordance Framework for Teleinteraction), a framework that adapts to user interactions and dynamically modifies virtual object affordances using Vision-Language Models (VLMs). CRAFT enables on-demand, context-aware affordance adaptation, improving hands-on remote collaboration. The paper highlights the limitations of existing MR systems and calls for future research in dynamic affordance generation and interaction-aware object virtualization.
Lars Erik Holmquist (Nottingham School of Art and Design)
Augmented, Mixed and Extended Reality (AR, MR and XR) still have limited impact in the consumer domain. This may be due to the lack of suitable consumer devices such as glasses, as well as the fact that non-glasses alternatives such as projection displays require expensive infrastructure. To accelerate research in XR beyond glasses, I propose a two-pronged approach. First, to provide a viable user experience, I argue that graphics for future XR environments will need to fulfill three criteria: they need to be perceivable, addressable, and persistent for all users in the physical world. I propose the framework Liberated Pixels as a research program to achieve this. Second, to develop viable XR products beyond glasses, I believe that virtual production (VP) can present a viable test case for new technologies. In VP, virtual environments are combined with human actors and physical sets and props, providing a compelling use case where virtual and physical environments are mixed.
Alexander Schmidt (TU Wien)
This position paper challenges the dominance of glasses-based XR by presenting initial food for thought on alternative, physically embedded interaction modalities in sports. Drawing on examples from ambient 3D dance projections to body-centered feedback in running, mountain biking, and golf, we discuss how modalities such as vibrotactile feedback and integrated displays can foster more natural, engaging, and socially collaborative XR experiences. These insights aim to spark a productive discussion on the next steps, amongst them overcoming technological, cost, and user adaptation challenges, and to explore scalable hybrid feedback systems that unlock the full potential of XR in sports and other physical interaction domains.
Jung-Hwan Youn (University of Illinois Urbana-Champaign), Craig Shultz (University of Illinois Urbana-Champaign)
Wearable tactile interfaces can significantly enhance immersive experiences in virtual and augmented reality (VR/AR) systems by providing tactile stimulation to the skin, complementing the visual and auditory cues. In this work, we explore two innovative approaches—electroosmotic pumps and dielectric elastomer actuators—for developing high-resolution haptic gloves for AR/VR interactions. We present a detailed summary of the actuator design, haptic glove integration, and the VR/AR implementation for each approach.
Hyunyoung Han (KAIST), Jongwon Jang (KAIST), Kitaeg Shim (KAIST), Sang Ho Yoon (KAIST)
We propose AfforDance, an augmented reality (AR)-based dance learning system that generates personalized learning content and enhances learning through visual affordances. Our system converts user-selected dance videos into interactive learning experiences by integrating 3D reference avatars, audio synchronization, and adaptive visual cues that guide movement execution. This work contributes to personalized dance education by offering an adaptable, user-centered learning interface.
Gabriel Vega (Max Planck Institute for Informatics), Valentin Martinez-Missir (Max Planck Institute for Informatics), Easa AliAbbasi (Max Planck Institute for Informatics), Paul Strohmeier (Max Planck Institute for Informatics)
Haptic Servos are embedded platforms for adding vibrotactile rendering to prototype devices. We showcase them in Tangibles++, tokens for tangible interaction with dynamic material properties. Unlike passive tokens, Tangibles++ can alter their perceived hardness and tactile feedback, enabling effects such as added friction or mechanical resistance during movement. Our demo lets participants interact with identical-looking Tangibles++ that feel distinct.