Authors: Gyuwon Jung (Aalto University, Finland); Sun-Kyung Lee (Electronics and Telecommunications Research Institute, Republic of Korea)
Abstract: Recent advances in wearable robotics and virtual reality suggest that human morphology is plastic, yet a significant gap remains in transitioning these augmentations into functional, everyday extensions of the self. This paper proposes the Body-Builder platform, a three-stage pipeline that supports the design of augmented body parts and their structured internalization into the human body schema. By utilizing virtual reality as a diagnostic sandbox, the platform interactively guides users through a selection process based on their subjective needs and situational context, followed by a hierarchical mapping of control channels and scaffolded training. We illustrate this methodology through a hypothetical Smart-Wing use case, demonstrating how system-guided customization and active cognitive alignment can mitigate the control bottleneck.
Authors: Saskia Davies (Swansea University, Swansea, UK)
Abstract: As virtual embodiment research increasingly informs wearable robotics and on-body augmentation, understanding which indicators of successful adaptation transfer from virtual to physical systems becomes critical. Traditional measures, such as task performance and self-reported ownership or agency, often overlook hidden costs of adaptation, potentially masking strain that undermines long-term usability. This position paper argues that real-time physiological signals provide a vital, underutilised lens for evaluating embodiment, particularly in sustained or high-stakes contexts. Drawing on therapeutic VR as a stress test for augmented bodies, we propose that its deliberately challenging environments offer a meaningful proxy for real-world augmentation demands, highlighting how apparent success in short-term tasks may conceal fatigue, stress, or loss of agency that emerge only under extended or high-demand use. We further frame embodiment as a dynamic negotiation in closed-loop systems, where adaptation strategies must balance responsiveness, legibility and user control. By integrating physiological metrics with subjective and performance-based evaluations, this perspective aims to inform the design of robust, ethical, and sustainable augmentation strategies across both virtual and physical domains.
Authors: Sungyong Shin (Electronics and Telecommunications Research Institute, Republic of Korea)
Abstract: Animal-inspired supernumerary body parts (e.g. wings, tails, fins) raise a basic question: how should humans control a body part they have never possessed? We propose a mapping-centered approach: start from a species-specific animal actuation model grounded in anatomy and biomechanics, translate it into multiple human-controllable mapping candidates, and evaluate them through VR embodiment where users become the animal and actively operate it. This turns ad hoc interface design into a structured search for mappings that yield learnable, stable, and predictable control, illustrated with a wing vignette.
Authors: Tsubasa Yoshida (The University of Tokyo, Tokyo, Japan); Takuji Narumi (The University of Tokyo, Tokyo, Japan)
Abstract: Virtual reality enables users to inhabit bodies that transcend biological limits, yet facilitating adaptation to such augmented body schemas remains a core challenge. We present findings from a user study of "Newvo," a VR locomotion interface using a lower-limb exoskeleton that supports the user in a "supported-standing" posture and translates center-of-mass shifts into movement. The system was developed as a walking interface, but semi-structured interviews with 18 participants revealed an unexpected outcome: rather than experiencing walking, users reported sensations of "skating," "flying," and operating a "powered suit." While a similar but weaker tendency was observed with seated motion cueing, these metaphors were far more vivid and frequent with the exoskeleton, and references to powered mechanical bodies were exclusive to it. This suggests that the combination of standing posture and exoskeleton-imposed somatosensory alteration on the lower limbs substantially amplifies the shift in body schema away from walking toward novel locomotion metaphors. We argue that the rigid mechanical support and movement constraints characteristic of exoskeletons do not merely limit degrees of freedom but actively scaffold the adoption of non-human body schemas in VR. We discuss how these physical cues could be strategically designed to facilitate adaptation to virtually augmented body parts.
Authors: Romain Nith (University of Chicago); Yun Ho (University of Chicago); Pedro Lopes (University of Chicago). Nith and Ho contributed equally.
Abstract: Electrical-muscle-stimulation (EMS) can support physical-assistance (e.g., shaking a spray-can before painting). However, EMS-assistance is highly-specialized because it is (1) fixed (e.g., one program for shaking spray-cans, another for opening windows); and (2) non-contextual (e.g., a spray-can for cooking dispenses cooking-oil, not paint—shaking it is unnecessary). Instead, we explore a different approach where muscle-stimulation instructions are generated considering the user's context (e.g., pose, location, surroundings). The resulting system is more general—enabling unprecedented EMS-interactions (e.g., opening a pill-bottle) yet also replicating existing systems (e.g., Affordance++) without task-specific programming. It uses computer-vision/large-language-models to generate EMS-instructions, constraining these to a muscle-stimulation knowledge-base & joint-limits. We believe our concept marks a shift toward more general-purpose EMS-interfaces.
Authors: Han Shi (The University of Tokyo, Japan); Yilong Lin (University of Birmingham, England)
Abstract: As one of the core components of virtual reality (VR), avatars bring a wide range of immersive experiences. However, the flexibility of avatar representations makes it challenging to employ passive proxies for haptic feedback when interacting with virtual objects. We propose leveraging visuo-haptic illusions to compensate for discrepancies between virtual hand representations and the physical hand, thereby allowing users to touch external objects as well as their own virtual bodies. We believe that introducing such aligned haptic feedback can enhance avatar-based experiences and facilitate users' adaptation to augmented bodies.
Authors: Sun-Kyung Lee (Electronics and Telecommunications Research Institute, Daejeon, Republic of Korea)
Abstract: Virtually augmented body parts provide a controlled testbed for studying how humans adapt to added morphologies and whether such additions can become embodied. Prior work on supernumerary robotic limbs (SRL) in VR shows that appropriate control mappings and feedback can lead to increased subjective agency. Meanwhile, recent vision-language-action (VLA) models suggest a new interaction recipe: users provide high-level goals while an agent uses visual observations to generate and execute actions. Combining these lines of research, this position paper proposes a User-Agent-VR framework for virtual embodiment.
Authors: Martin Kocur (University of Central Florida, Orlando, FL, United States); Dirk Reiners (University of Central Florida, Orlando, FL, United States); Gerd Bruder (University of Central Florida, Orlando, FL, United States)
Abstract: Human success depends on excelling at cognitively demanding tasks such as being creative, sustaining attention at work, or solving complex problems. Yet, we still lack safe, legal, on-demand methods that reliably enhance cognition, leaving human potential underutilized. A promising but neglected tool lies within ourselves: self-perception. Due to recent advances in mixed reality (MR), self-perception can be altered in ways that are impossible in the real world by superimposing tailored digital augmentations onto the user's actual body using virtual avatars. While previous work suggests that embodying avatars associated with high intelligence such as Albert Einstein can improve task performance and reduce workload, a systematic exploration of an avatar's characteristics in MR is still missing. Hence, this project aims to systematically investigate how immersive avatars can be designed to reshape self-perception and enhance cognitive performance. We hypothesize that changes in self-perception via avatars using MR technology can reliably improve human performance in everyday tasks and complex problem solving. In this paper, we propose a project that aims to (1) design and develop a body transformation toolkit that allows changing self-perception in MR and (2) conduct a set of user studies to learn how MR avatars need to be designed to improve cognitive performance.
Authors: Weitao Jiang (Southern University of Science and Technology, Shenzhen, China)
Abstract: Wearable robotics and supernumerary robotic limbs (SRLs) are moving beyond rehabilitation toward everyday body augmentation, yet their adoption in public spaces is often hindered by social gaze, uncanny impressions, and context-dependent norms. This work investigates how SRL aesthetics shape social perception and self-identification across social and professional scenarios. We propose a two-stage approach: co-design workshops with potential users and experts elicit context-specific aesthetic preferences and generate diverse SRL concepts with generative AI, producing nine representative candidates across Mechanical, Biomimetic, and Hybrid styles. We then evaluate these candidates in contextualized public-use vignettes using the WEAR scale for social acceptability, an eeriness index, an adapted IOS measure for self–device inclusion, and forced-choice preferences and qualitative rationales. The study aims to synthesize these findings into an evidence-based SRL Design Strategy Guide that links aesthetic features to reduced social friction and improved acceptance of bodily augmentation.
Authors: Andrea Stevenson Won (Cornell University, Ithaca, NY, USA); Sara Falcone (Pace University, New York City, NY, USA); Shiri Azenkot (Cornell University); Sang-Won Leigh (Cornell University); Mar Gonzalez Franco (Cornell University); Kevin T. Martinez (Cornell University); Shayla K. Reid (Cornell University)
Abstract: Controlling novel avatars has been a focus of embodiment work in XR for years, both as a way to prototype potential novel forms of robotic body augmentation and as an end in itself. Given the opportunities to connect people socially across distance, recent work has explored the ability to create novel embodied social experiences by examining how people can jointly control avatars or robots. Here, we extend this work, proposing a "one-to-many" paradigm that can be extended to human-human collaboration. We describe pilot work in which an on-site partner hosts a remote user and, through visual, audio, and haptic feedback, they share a co-embodiment experience. We discuss how this paradigm increased self-presence of the remote user in the physical site and increased social presence for both participants.
Authors: Che-Wei Hsu (NTU Dexterous Lab, Taipei, Taiwan)
Abstract: Virtual Reality (VR) research has shown that users can adapt to avatars with altered morphologies and even control additional virtual body parts. However, much of this work implicitly assumes that visuomotor mappings are sufficient for establishing embodiment, focusing on functional control rather than the integration of augmented appendages into the user's internal body representation. As a result, augmented limbs are often operated as external tools rather than incorporated into the user's body schema. In this position paper, we argue that successful body augmentation should be framed not merely as a control-mapping problem, but as a process of sensorimotor learning that requires stable proprioceptive scaffolding. We propose a conceptual framework in which electromyography (EMG) captures motor intentions while electrical muscle stimulation (EMS) provides temporally aligned kinesthetic reinforcement, forming a closed-loop training mechanism for internalizing new degrees of freedom. By reframing virtual body augmentation as a progressive sensorimotor adaptation process, this perspective highlights new research directions in proprioceptive feedback design, embodiment capacity, and training pipelines for both virtual and physical augmentation systems.
Authors: TaeHyun Kim (Kyung Hee University, Republic of Korea); Inhyuk Song (Kyung Hee University, Republic of Korea); Chaeyong Park (Korea University, Republic of Korea); Seungjae Oh (Kyung Hee University, Republic of Korea)
Abstract: Supernumerary Robotic Limbs (SRLs) represent a frontier in human augmentation, aiming to extend physical capabilities by adding novel body parts. A critical challenge lies in fostering a deep sense of embodiment, where the user perceives the artificial limb as an integral part of their own body. This proposal outlines a research direction focusing on two primary dimensions: (1) strategic interventions within the sensorimotor loop to enhance agency and ownership, and (2) the utilization of cognitive prior knowledge regarding kinematics to bridge the gap between virtual and physical embodiment.
Authors: Ulrike Kulzer (Saarland University, Saarland Informatics Campus, Saarbrücken, Germany)
Abstract: Designing augmented body parts in virtual environments poses unique challenges for achieving a coherent sense of embodiment. While visuomotor congruence can support ownership and agency for humanoid avatars, augmented morphologies, such as tails, wings, or on-body haptic agents, require users to reinterpret unfamiliar sensory-motor relationships. In this position paper, we argue that effective embodiment of augmented body parts depends on three key factors: precise multimodal synchrony, spatially aligned haptic feedback, and intuitive mappings that match users' proprioceptive expectations. We draw on recent advances in augmented limb feedback and socially expressive wearable agents to highlight design considerations for creating coherent sensory experiences. Finally, we briefly present our ongoing work on haptically embodied on-body agents and invite discussion on the presented key factors.
Authors: Kenath Perera (Exertion Games Lab, Department of Human-Centred Computing, Monash University, Melbourne, Victoria, Australia); Florian 'Floyd' Mueller (Exertion Games Lab, Monash University); Don Samitha Elvitigala (Exertion Games Lab, Monash University)
Abstract: Increasing sedentary behavior in modern desk-based work contributes to negative physical and mental health outcomes. While existing interventions embed physical activity through movement-based interfaces or spatially distributed systems, they often fail to sustain engagement over time and can negatively impact work efficiency. We present a playful, competitive human–robot interaction system that integrates VR-derived embodiment strategies such as sensorimotor adaptation, agency and multisensory feedback into a physically embodied robotic partner, encouraging repeated upper-body movement and enhancing engagement. These interaction strategies serve as reusable artifacts for wearable robotics, demonstrating how VR-derived embodiment can be effectively integrated into physically embodied systems to sustain engagement.
Authors: Xiang Li (University of Cambridge, Cambridge, United Kingdom)
Abstract: Augmenting the human body is no longer limited to virtual avatars or standalone devices. Instead, augmentation increasingly emerges from distributed systems of wearables, XR interfaces, and environmental sensors that continuously sense, interpret, and respond to users' context. However, existing research often treats augmentation as a property of a specific device or interaction mapping, rather than as a dynamic process unfolding across the coupling of body, environment, and system intelligence. In this position paper, we argue for a shift toward Situated Reality, where augmented capabilities arise from a continuous data loop of perception, inference, adaptation, and embodiment. Within this loop, interaction is not fixed to a particular embodiment, but is dynamically constructed based on the user's situation across wearables, XR devices, and surrounding environments. Drawing on prior work in body-centric interaction and deployable XR systems, we outline three directions: (1) body-centric interaction primitives grounded in human motor capabilities, (2) context-aware adaptive control distributed across devices and environments, and (3) interaction pipelines that treat XR not merely as a prototyping tool, but as a structured representation of adaptive interaction. This perspective reframes augmented body parts not as isolated extensions, but as situational capabilities emerging from responsive, context-aware systems.