Research
My work sits at the boundary between the real and digital worlds, investigating fundamental questions of interaction, exploring alternate spaces for human-computer interaction, and creating imaginative virtual worlds.
Steven Yoo, Sakib Reza, Hamid Tarashiyoun, Akhil Ajikumar, Mohsen Moghaddam
Abstract: Augmented reality (AR) has gained significant attention in recent years for its applications in training and assistance in various industrial settings. Yet, a less understood question is: How can AR systems, coupled with artificial intelligence (AI) capabilities, adaptively tailor instructions and feedback interventions to the specific needs of users, their cognitive states, and levels of expertise during task execution? This paper addresses this question by conducting a systematic review that delves into three specific research areas: the state-of-the-art of AR-based systems for industrial applications in terms of features and training/assistance capabilities, the existing gaps in transforming AR into an “intelligent companion” that adapts to both the work context and the user’s needs, and how these sources of multimodal data captured by AR headsets, wearables, and IoT sensors can be harnessed to interpret, predict, and guide task performance and learning through AR. To this end, this paper synthesizes recent studies in the field of industrial AR, summarizing their main findings, contributions, and associated limitations when integrating AI capabilities into AR. The results suggest that AR can effectively tackle key industry challenges associated with training and upskilling, process improvement, and error prevention. However, limitations remain in integrating multimodal data-driven capabilities into AR to effectively tailor AR guides to how individual workers learn and perform complex industrial tasks. The paper concludes with a framework as well as several research directions and examples to realize intelligent AR systems enhanced with advanced AI capabilities for activity understanding, user modeling, and interventions, serving as adaptive and personalized companions for industrial workers.
Steven Yoo, Casper Harteveld, Nicholas Wilson, Kemi Jona, Mohsen Moghaddam
Abstract: This study aimed to explore how novices and experts differ in performing complex psychomotor tasks guided by augmented reality (AR), focusing on decision-making and technical proficiency. Participants were divided into novice and expert groups based on a pre-questionnaire assessing their technical skills and theoretical knowledge of precision inspection. Participants completed a post-study questionnaire that evaluated cognitive load (NASA-TLX), self-efficacy, and experience with the HoloLens 2 and AR app, along with general feedback. We used multimodal data from AR devices and wearables, including hand tracking, galvanic skin response, and gaze tracking, to measure key performance metrics. We found that experts significantly outperformed novices in decision-making speed, efficiency, accuracy, and dexterity in the execution of technical tasks. Novices exhibited a positive correlation between perceived performance in the NASA-TLX and the GSR amplitude, indicating that higher perceived performance is associated with increased physiological stress responses. This study provides a foundation for designing multidimensional expertise estimation models to enable personalized industrial AR training systems.
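For intuition, below is a minimal sketch of the kind of correlation analysis reported above. It computes a Pearson correlation between the NASA-TLX perceived-performance scores and GSR amplitudes for the novice group; the file name and column names are hypothetical, and the paper does not specify the statistic used, so Pearson is one plausible choice rather than the study's actual pipeline.

```python
# Minimal sketch: correlate perceived performance (NASA-TLX subscale) with GSR amplitude.
# File name and column names are hypothetical; not the study's actual data format.
import pandas as pd
from scipy import stats

df = pd.read_csv("participants.csv")   # one row per participant (assumed layout)
novices = df[df["group"] == "novice"]  # the reported correlation was for novices

r, p = stats.pearsonr(novices["tlx_performance"], novices["gsr_amplitude"])
print(f"Novices: Pearson r = {r:.2f}, p = {p:.3f}")
```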
Dong Woo Yoo, Hamid Tarashiyoun, and Mohsen Moghaddam
Abstract: This paper explores a new method for estimating expertise in Augmented Reality (AR) learning and training, emphasizing the learner’s familiarity with action-relevant Areas of Interest (AoI) and their decision-making processes. By examining both gaze behavior and object tracking based on egocentric video data from a group of 15 participants, we investigate the interaction dynamics of learners within AR environments to distinguish between expert and novice behaviors. Key findings highlight notable variations in visual attention and engagement strategies, reflecting different degrees of task familiarity and decision-making capabilities among participants with varying self-reported expertise levels. This approach advances the understanding of visual attention behavior and its relation with expertise in the context of AR learning, offering new insights for the design of adaptive, learner-centered AR education systems. The study contributes to the field of learning sciences by enhancing the effectiveness and personalization of AR learning and training models.
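As a rough illustration of one gaze feature such an approach might use, the sketch below accumulates dwell time per Area of Interest from timestamped gaze-to-AoI hits. The sample format (timestamp in seconds, AoI label or None) and the labels are assumptions for illustration, not the paper's actual pipeline.

```python
# Hypothetical sketch: per-AoI dwell time from timestamped gaze samples.
# Assumes each sample is (timestamp_seconds, aoi_label_or_None); not the paper's format.
from collections import defaultdict

def dwell_times(samples):
    """Sum time spent on each AoI, attributing each inter-sample
    interval to the AoI of the earlier sample."""
    totals = defaultdict(float)
    for (t0, aoi), (t1, _) in zip(samples, samples[1:]):
        if aoi is not None:
            totals[aoi] += t1 - t0
    return dict(totals)

samples = [(0.00, "caliper"), (0.05, "caliper"), (0.10, "part"),
           (0.15, None), (0.20, "part")]
print(dwell_times(samples))  # ≈ {'caliper': 0.10, 'part': 0.05}, up to float rounding
```

Features like these, compared across self-reported expertise levels, are one simple way to quantify the attention differences the paper describes.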
Dong Woo Yoo
Abstract: In this doctoral research, a novel approach is developed for the real-time assessment of user expertise, focusing on skill-based and cognitive tasks within Extended Reality (XR) environments. This approach combines advanced user modeling with state-of-the-art sensors and includes an in-depth analysis of gaze behavior and physiological responses. A distinctive aspect of this research is the real-time analysis of multimodal sensor data, which provides deeper and more precise insights into user skills and cognitive abilities. The goal is to advance the field of expertise assessment by introducing a nuanced and dynamic perspective that is suitable for a variety of applications across different domains. This research aims to establish new standards in the assessment of user expertise, meeting the evolving needs of modern educational and professional settings.
Dong Woo Yoo, Hamid Tarashiyoun, and Mohsen Moghaddam
Abstract: Augmented reality (AR) technologies have recently gained substantial attention within the industry due to their potential applications in on-the-job training and assistance across diverse industrial settings. However, personalizing AR instructions and feedback interventions that cater to individual user needs and skill levels remains a relatively less explored area of research. This paper aims to bridge this gap by utilizing eye tracking data coupled with computer vision to examine the gaze and pupil behaviors of individuals with various levels of expertise performing AR-guided procedural tasks...
DOI: In proceedings
Paper: TBD
Dong Woo Yoo, Sakib Reza, Nicholas Wilson, Kemi Jona, and Mohsen Moghaddam
Abstract: This research seeks to explore how Augmented Reality (AR) can support learning psychomotor tasks that involve complex manipulation and reasoning processes. The AR prototype was created in Unity and deployed on HoloLens 2 headsets. Here, we explore the potential of AR as a training or assistive tool for spatial tasks and the need for intelligent mechanisms to enable adaptive and personalized interactions between learners and AR. The paper discusses how integrating AR with Artificial Intelligence (AI) can adaptively scaffold the learning of complex tasks to accelerate the development of expertise in psychomotor domains.
Keru Wang, Zhu Wang, Karl Rosenberg, Zhenyi He, Dong Woo Yoo, Un Joo Christopher, Ken Perlin
This project combines immersive VR, multitouch AR, real-time volumetric capture, motion capture, robotically-actuated tangible interfaces at multiple scales, and live coding, in service of a human-centric way of collaborating. Participants bring their unique talents and preferences to collaboratively tackle complex problems in a shared mixed reality world.
Dong Woo Yoo
Virtual reality enables experimentation with human skills beyond what is possible in the real world. The eyes instinctively gaze at the object of interest while the body responds to it in 3D space. This project explores the relationship between player engagement and physiological data such as Blood Volume Pulse (BVP), Interbeat Interval (IBI), Galvanic Skin Response (GSR), and Heart Rate Variability (HRV), captured with the Empatica E4 wristband. I built a simple carnival-style throwing simulation to test player engagement and usability. Ultimately, this opportunity led me to create a Unity3D module that retrieves physiological sensing data from the player (a minimal sketch of one such computation appears after this entry). This paper explains my design process, prototype phases, and potential implementations.
DOI: TBD
Paper: TBD
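To give a concrete sense of how two of these signals relate, the sketch below derives mean heart rate and the RMSSD measure of HRV from a sequence of inter-beat intervals. This is a generic, standard computation for illustration, not the Unity module described above, and the sample IBI values are made up.

```python
# Hypothetical sketch: heart rate and RMSSD-based HRV from inter-beat intervals (IBI).
# Generic computation for illustration; not the Unity/Empatica module described above.
import math

def heart_rate_bpm(ibis_s):
    """Mean heart rate from IBIs given in seconds."""
    return 60.0 / (sum(ibis_s) / len(ibis_s))

def rmssd_ms(ibis_s):
    """RMSSD: root mean square of successive IBI differences, in milliseconds."""
    diffs = [(b - a) * 1000.0 for a, b in zip(ibis_s, ibis_s[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

ibis = [0.82, 0.85, 0.79, 0.88, 0.84]  # made-up sample IBIs in seconds
print(f"HR = {heart_rate_bpm(ibis):.1f} bpm, RMSSD = {rmssd_ms(ibis):.1f} ms")
```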
The Perceptually-enabled Task Guidance (PTG) program aims to develop artificial intelligence (AI) technologies to help users perform complex physical tasks while making them more versatile by expanding their skillset and more proficient by reducing their errors. PTG seeks to develop methods, techniques, and technology for artificially intelligent assistants that provide just-in-time visual and audio feedback to help with task execution.
This NSF Future of Work at the Human-Technology Frontier: Core Research project imagines the future of work in precision manufacturing where the spatial and causal reasoning and decision-making abilities of workers on complex production and inspection tasks are augmented through teaming with intelligent extended reality (IXR) technologies.
This demo project uses my Empatica E4 Unity plugin to visualize biometric data in real time. Thanks to the High Speed Research Network (Corelink), which supports fiber speeds of up to 400 Gbps, I was able to stream the biometric data seamlessly with minimal network lag and successfully retrieve it via a remote web server.
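As a rough illustration of the streaming pattern only (not the Corelink API itself), the sketch below pushes newline-delimited, JSON-encoded biometric samples over a plain TCP socket; the host, port, and payload fields are placeholders.

```python
# Hypothetical sketch: stream JSON-encoded biometric samples over a plain TCP socket.
# Host, port, and payload fields are placeholders; this is not the Corelink API.
import json, socket, time

def stream_samples(host, port, samples, hz=4.0):
    with socket.create_connection((host, port)) as sock:
        for sample in samples:
            sock.sendall((json.dumps(sample) + "\n").encode())  # newline-delimited JSON
            time.sleep(1.0 / hz)  # pace sends to the sensor's sampling rate

samples = [{"t": i * 0.25, "gsr": 0.41 + 0.01 * i} for i in range(8)]  # made-up data
stream_samples("127.0.0.1", 9000, samples)  # assumes a listener on this host and port
```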
A Virtual Reality (VR) experience can increase students' engagement and excitement for STEM learning, bring out their naturally inquisitive natures, and improve their outcomes in these subjects.
This game is a remix of the popular rhythm-based VR games Beat Saber and Pistol Whip, which I built to experiment with a simple question: "What if we combined both game mechanics?" and to conduct a playtest protocol.
This research explores the question: what does well-being mean to children in a digital age?
As digital technology plays an increasingly important role in children’s development, the Responsible Innovation in Technology for Children (RITEC) project, co-founded with the LEGO Group and funded by the LEGO Foundation, aims to create practical tools for businesses and governments that will empower them to put the well-being of children at the center of digital design.
Zodiac is a learning game that is designed to teach middle and high school students about artificial intelligence concepts. Zodiac supports student learning by providing realistic models that learners can explore and interact with. By acting as an AI Agent that explores a network to collect data, players will learn about artificial intelligence concepts such as search, optimization, and machine learning.
"Push" is a puzzle-solving interactive game, that players can push blocks around in a top-down environment. The player pushes these blocks around to solve various puzzles to progress through the game.
A 3D platformer built around a grappling hook and mechanics like swinging to beat each level, inspired by Doom and Ghostrunner.