CS 5678: Topics in Mixed Reality

Selected Projects - Spring 2021

Course Overview

This course explores the field of Mixed Reality through research topics at the intersection of Computer Vision, Computer Graphics, and Human-Computer Interaction. Topics covered may include, but are not limited to: 3D interaction techniques, remote collaboration, tracking methods, photometric registration, and navigation.


Read more on the course website

Looking Through Multi-View Windows Towards the Future of Remote Collaboration

A growing number of workers collaborate and are trained remotely. Often, especially in training scenarios, one user instructs the others on how to complete tasks. In these scenarios, the instructor is commonly referred to as the remote expert, while the trainees are referred to as the local users. To properly instruct the local users, the remote expert must understand their perspectives. However, understanding one another’s perspectives is often more difficult in remote collaboration than in face-to-face collaboration. To help the remote expert understand the local users’ perspectives, view-sharing is often employed. Many techniques have been used for view-sharing in one-to-many remote collaboration, but the most common is streaming a 2D view of each local user’s perspective. In this study, we compared traditional 2D view-sharing to a novel technique that uses virtual reality’s stereo rendering to create a 3D view-sharing window. We found that, while the 3D window may allow remote experts to better understand the local users’ perspectives, it also causes significantly more cybersickness than the traditional 2D method.
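The core idea behind a stereo "3D window" is to render the shared view twice, once per eye, with the two virtual cameras offset by half the interpupillary distance (IPD). The sketch below illustrates that geometry only; the function names, the default IPD value, and the parallel (non-toed-in) camera arrangement are illustrative assumptions, not details from the project.

```python
# Illustrative sketch of per-eye view matrices for a stereo view-sharing
# window. Not the project's implementation; names and defaults are assumed.
import numpy as np

def look_at(eye, target, up):
    """Build a right-handed view matrix (world -> camera space)."""
    f = target - eye
    f = f / np.linalg.norm(f)           # forward
    r = np.cross(f, up)
    r = r / np.linalg.norm(r)           # right
    u = np.cross(r, f)                  # true up
    m = np.eye(4)
    m[0, :3], m[1, :3], m[2, :3] = r, u, -f
    m[:3, 3] = -m[:3, :3] @ eye         # translate world into camera frame
    return m

def stereo_views(eye, target, up, ipd=0.063):
    """Left/right view matrices, eyes offset by +-ipd/2 along the right axis.

    Shifting the target by the same offset keeps the two gaze directions
    parallel rather than toed-in.
    """
    f = (target - eye) / np.linalg.norm(target - eye)
    r = np.cross(f, up)
    r = r / np.linalg.norm(r)
    half = (ipd / 2.0) * r
    left = look_at(eye - half, target - half, up)
    right = look_at(eye + half, target + half, up)
    return left, right
```

Rendering the shared scene once with each matrix yields the binocular disparity that makes the window read as 3D to the remote expert.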

MVPie Menu: A Multimodal Voice-enhanced Pie Menu for VR System Control

System control in immersive virtual environments plays a critical role in determining the user experience of interacting with 3D User Interfaces (UIs). While many 2D desktop UI elements can be adapted to Virtual Reality (VR) systems, they may not be the most suitable for complex tasks. We created a multimodal interaction technique that combines a pie menu with voice command support to improve the efficiency of completing complex tasks. A pilot study was conducted to evaluate this technique against a basic hierarchical pie menu. Our results show that users used voice commands more often for tasks that required navigating nested layers of the menu. However, users preferred the raycasting technique for easy tasks and found that the voice control technique required a higher workload.
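The multimodal idea can be sketched as a hierarchical menu with two selection paths: pointer-style traversal that descends one slice per step, and a voice command that resolves a nested item in a single utterance. This is a minimal illustration of why voice shortens deep selections; all names here are hypothetical and not from the study's implementation.

```python
# Minimal sketch of a hierarchical pie menu with two selection modes.
# Names and structure are illustrative assumptions, not the study's code.

class PieMenu:
    def __init__(self, label, children=None):
        self.label = label
        self.children = children or []

    def find(self, label):
        """Depth-first search; models resolving a voice command by name."""
        if self.label == label:
            return self
        for child in self.children:
            hit = child.find(label)
            if hit:
                return hit
        return None

def select_by_ray(root, path):
    """Pointer-style selection: descend one menu layer per step."""
    node, steps = root, 0
    for label in path:
        node = next(c for c in node.children if c.label == label)
        steps += 1
    return node, steps

def select_by_voice(root, command):
    """Voice selection: one utterance reaches any depth in one step."""
    return root.find(command), 1

# Example menu: a deeply nested item ("Red") and a shallow one ("Save").
menu = PieMenu("root", [
    PieMenu("Edit", [PieMenu("Color", [PieMenu("Red"), PieMenu("Blue")])]),
    PieMenu("File", [PieMenu("Save")]),
])
```

Selecting "Red" by ray takes three steps (Edit, Color, Red) but one voice utterance, mirroring the finding that voice commands pay off on nested targets while raycasting remains competitive for shallow ones.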