RESEARCH

Virtual Reality

VR Navigation

VR navigation is an essential interaction task, but no standard method has emerged yet. A good navigation interface reduces simulator sickness and gives users a more immersive experience. Azeem, Juyoung, and Dr. Hwang are currently focused on how to build a good navigation interface. We have been contributing to this field for years and hope to create a better environment for VR content designers.

Mixed Reality

Fusion of Virtual and Real world

Fusing the virtual and real worlds into a new environment opens new horizons of possibilities to explore. In MR-Lab we are working on creating such worlds by combining computer graphics, image recognition, and spatial mapping. Our goal is a unified system that can build new worlds in a short time, without the need for large-scale modifications.

Multimodal Interaction

At MRLab we designed and developed a multimodal interaction framework for intelligent virtual agents in wearable mixed reality environments, especially for interactive applications at museums, botanical gardens, and similar places. We envisioned a seamless interaction framework that integrates spatial mapping, virtual character animation, speech recognition, gaze, a domain-specific chatbot, and object recognition to enhance virtual experiences and the communication between users and virtual agents. It can be adapted to any wearable MR device that supports spatial mapping.

Related Demo: https://youtu.be/zlmVpUgBdew

Related Paper: [Link]
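
As a rough illustration of how such a framework can route multimodal input to an agent, the sketch below fuses speech, gaze, and object-recognition events through per-modality handlers. The class and event names (MultimodalAgent, Event) and the modality priority order are illustrative assumptions, not the actual MRLab framework API.

```python
# Minimal sketch: routing multimodal events to a virtual agent.
# Names and the priority order are assumptions for illustration.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Event:
    modality: str   # e.g. "speech", "gaze", "object"
    payload: str    # recognized utterance, gazed target, detected object...

class MultimodalAgent:
    """Fuses events from several modalities and picks agent responses."""
    def __init__(self) -> None:
        self.handlers: Dict[str, Callable[[str], str]] = {}

    def register(self, modality: str, handler: Callable[[str], str]) -> None:
        self.handlers[modality] = handler

    def dispatch(self, events: List[Event]) -> List[str]:
        # Assumed priority: speech over gaze, gaze over passive object detection.
        order = {"speech": 0, "gaze": 1, "object": 2}
        responses = []
        for ev in sorted(events, key=lambda e: order.get(e.modality, 99)):
            handler = self.handlers.get(ev.modality)
            if handler:
                responses.append(handler(ev.payload))
        return responses

agent = MultimodalAgent()
agent.register("speech", lambda text: f"Chatbot answers: '{text}'")
agent.register("gaze", lambda target: f"Agent looks at the {target}")
agent.register("object", lambda name: f"Agent comments on the {name}")

print(agent.dispatch([Event("gaze", "orchid"), Event("speech", "What is this flower?")]))
```

In a real deployment each handler would wrap a subsystem such as the speech recognizer or the chatbot; the point of the sketch is the single dispatch path that keeps the modalities loosely coupled.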

Exploring the Human Perception

To deepen the immersive experience, we investigate user perception in various interaction situations and conditions, such as a virtual pet in handheld AR and virtual human interaction in collision situations.

Related Demo: https://youtu.be/ZQwP17B8ajg

Interaction with Virtual Human

Virtual humans are human-like computer graphics manifestations. Because of their human-like appearance, people tend to expect virtual humans to behave much as real humans do. We investigate ways to improve social interaction with virtual humans in different situations, e.g., counseling and medical examination.

Machine Learning

Machine Learning for Virtual Agents

Virtual humans acting as virtual agents need meaningful and natural co-speech body gestures, just as humans do. In ongoing research, we are developing virtual humans that can communicate with humans through human-like body gestures. This research explores multimodal interaction using speech and text, with a personality matrix planned for the next iteration.

Related Rule-based version: https://youtu.be/GIxaI9yTmMc

Related Rule-based paper: [Link]
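
The sketch below illustrates the flavor of a rule-based approach like the version linked above: keywords in the agent's utterance are mapped to named gestures that can then be time-aligned with the synthesized speech. The keyword-to-gesture table and the gesture names are invented for illustration and do not reproduce the published rule set.

```python
# Minimal sketch of a rule-based co-speech gesture mapper.
# The rules and gesture labels are illustrative assumptions.
import re
from typing import List, Tuple

GESTURE_RULES: List[Tuple[str, str]] = [
    (r"\b(hello|hi|welcome)\b", "WAVE"),
    (r"\b(big|huge|large)\b",   "WIDE_ARMS"),
    (r"\b(this|here)\b",        "POINT_FORWARD"),
    (r"\b(i|me|my)\b",          "HAND_ON_CHEST"),
]

def gestures_for(utterance: str) -> List[Tuple[int, str]]:
    """Return (word index, gesture) pairs so gestures can be
    time-aligned with the virtual human's speech."""
    tagged = []
    for i, word in enumerate(utterance.split()):
        for pattern, gesture in GESTURE_RULES:
            if re.search(pattern, word, re.IGNORECASE):
                tagged.append((i, gesture))
                break  # at most one gesture per word
    return tagged or [(0, "IDLE_BEAT")]  # fall back to a beat gesture

print(gestures_for("Hello, let me show you this huge greenhouse"))
```

A learned model would replace the lookup table with a network conditioned on speech, text, and eventually the personality matrix, but the word-level alignment output would play the same role.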

Auto-generating Storyboard And Previz with Digital Humans

ASAP is a tool for Auto-generating Storyboard And Previz for screenwriters and filmmakers. ASAP takes story scripts in a predefined format, parses their entities (action, character, parenthetical, and dialogue), and simulates the stories as 3D animated/visual scenes with digital humans in a virtual environment.

Demo page: [Link]
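
As a sketch of the entity-parsing step, the snippet below tags each line of a screenplay-style script as character, parenthetical, dialogue, or action, assuming a conventional layout (character cues in caps, parentheticals in parentheses). The predefined format ASAP actually expects may differ.

```python
# Minimal sketch of screenplay entity parsing; the layout rules
# are assumptions based on common screenplay conventions.
from typing import List, Tuple

def parse_script(lines: List[str]) -> List[Tuple[str, str]]:
    """Tag each non-empty line as character / parenthetical / dialogue / action."""
    entities = []
    expecting_dialogue = False
    for raw in lines:
        line = raw.strip()
        if not line:
            expecting_dialogue = False           # blank line ends a speech block
            continue
        if line.isupper():                        # character cue, e.g. "ALICE"
            entities.append(("character", line))
            expecting_dialogue = True
        elif line.startswith("(") and line.endswith(")"):
            entities.append(("parenthetical", line.strip("()")))
        elif expecting_dialogue:                  # lines under a cue are dialogue
            entities.append(("dialogue", line))
        else:                                     # everything else is action
            entities.append(("action", line))
    return entities

script = """ALICE
(smiling)
Nice to finally meet you.

Alice walks to the window."""
for entity in parse_script(script.splitlines()):
    print(entity)
```

The tagged entities are what a previz stage can then bind to digital humans: character cues select an actor, dialogue drives speech, parentheticals hint at performance, and action lines drive scene animation.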

Remote Collaboration

Remote Collaboration in VR

Virtual Reality (VR) devices are attracting considerable interest in many applications, such as video-mediated communication and video surveillance. Research has shown that VR HMDs can improve users' sense of presence. However, streaming 360-degree video to an HMD poses several challenges, such as low-resolution views, high bandwidth usage, and computational cost. Lubis, Chanho, and Dr. Hwang are currently focusing on how to improve the quality of HMD-based 360-degree video streaming interaction. We hope to contribute to better HMD-based 360-degree video interaction for society.
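
One common way to attack the bandwidth problem described above is viewport-adaptive tiled streaming: tiles near the viewer's gaze are fetched at a high bitrate and peripheral tiles at a low one. The sketch below shows only the tile-selection step, under assumed values for the tile grid, field of view, and bitrates; it is not the lab's actual streaming pipeline.

```python
# Minimal sketch of viewport-adaptive tile bitrate selection for
# 360-degree video; grid size, FOV, and bitrates are assumptions.
import math

TILES_YAW = 8            # 8 x 4 equirectangular tile grid
TILES_PITCH = 4
HIGH_KBPS, LOW_KBPS = 4000, 500
FOV_DEG = 110            # assume an HMD with a ~110 degree field of view

def tile_center(ix: int, iy: int) -> tuple:
    """Yaw/pitch of a tile center in degrees."""
    yaw = (ix + 0.5) / TILES_YAW * 360.0 - 180.0
    pitch = (iy + 0.5) / TILES_PITCH * 180.0 - 90.0
    return yaw, pitch

def angular_distance(a: float, b: float) -> float:
    """Smallest absolute difference between two angles in degrees."""
    return abs((a - b + 180.0) % 360.0 - 180.0)

def select_bitrates(head_yaw: float, head_pitch: float) -> dict:
    """Assign a bitrate per tile by angular distance from the viewport center."""
    plan = {}
    for ix in range(TILES_YAW):
        for iy in range(TILES_PITCH):
            yaw, pitch = tile_center(ix, iy)
            dist = math.hypot(angular_distance(yaw, head_yaw),
                              angular_distance(pitch, head_pitch))
            plan[(ix, iy)] = HIGH_KBPS if dist <= FOV_DEG / 2 else LOW_KBPS
    return plan

plan = select_bitrates(head_yaw=30.0, head_pitch=0.0)
total = sum(plan.values())
print(f"total bitrate: {total} kbps vs {HIGH_KBPS * len(plan)} kbps all-high")
```

The saving comes from spending full quality only where the user is looking, at the cost of re-requesting tiles when the head turns quickly, which is exactly the quality/latency trade-off this line of research studies.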