Extended Reality

Nov. 20, 2020

10:00 a.m. - 1:00 p.m. Taiwan Time (GMT+8)

📍 Delta 106, NTHU & Webex

Navigation in 360-Degree Video Content

Speaker: Prof. Klara Nahrstedt

🗓 10:00 A.M.-10:45 A.M. (Taiwan), 8:00 P.M.-8:45 P.M. Nov. 19 (Illinois)

With the emergence of 360-degree cameras, VR/AR display devices, and ambisonic audio devices, increasingly diverse 360-degree multi-modal content has become available, and with it the challenging demand for navigating within that content to enhance users' multi-modal experience. In this talk, we will discuss the challenges of navigating through 360-degree multi-modal content in an HMD-cloud-based distributed environment, and introduce the concept of the navigation graph for organizing 360-degree video content for successful delivery and viewing. We will dive into the details of the navigation graph concept as a representation of view and object navigation. We will show how navigation graphs serve as models of viewing behavior in the temporal and spatial domains, enabling better rate adaptation of tiled media based on view and object predictions. The experimental results are encouraging and support the claim that navigation graph modeling provides a strong representation of users' navigation and viewing patterns, and that its use enhances streaming and viewing quality in 360-degree video applications.

Joint work with Dr. Jounsup Park, Mingyuan Wu, Eric Lee, Bo Chen, Ariel Rosenthal, Yash Shah, John Murray, Kevin Spiteri, Dr. Michael Zink, Dr. Ramesh Sitaraman.
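
As a rough illustration of the navigation graph idea (a minimal sketch, not the authors' implementation), the code below models the graph as observed transition counts between viewport tiles across video segments; the most likely next views could then be requested at higher bitrate. The class name, tile identifiers, and tile granularity are assumptions made for illustration.

```python
from collections import defaultdict

class NavigationGraph:
    """Hypothetical sketch: nodes are viewport tiles per video segment;
    edge weights count observed user transitions between views."""

    def __init__(self):
        # transitions[(segment, view)][next_view] -> observed count
        self.transitions = defaultdict(lambda: defaultdict(int))

    def record_trace(self, trace):
        """trace: list of (segment_index, view_id) from one viewing session."""
        for (seg, view), (_, next_view) in zip(trace, trace[1:]):
            self.transitions[(seg, view)][next_view] += 1

    def predict_next_views(self, segment, view, top_k=3):
        """Return the top-k most likely next views with their probabilities,
        e.g. to prefetch those tiles at higher bitrate."""
        counts = self.transitions[(segment, view)]
        total = sum(counts.values()) or 1
        ranked = sorted(counts.items(), key=lambda kv: -kv[1])
        return [(v, c / total) for v, c in ranked[:top_k]]

# Example: two users watch segments 0-2; predict views for prefetching.
g = NavigationGraph()
g.record_trace([(0, "tile_3"), (1, "tile_3"), (2, "tile_4")])
g.record_trace([(0, "tile_3"), (1, "tile_4"), (2, "tile_4")])
print(g.predict_next_views(0, "tile_3"))  # [('tile_3', 0.5), ('tile_4', 0.5)]
```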

An IoT Framework for Partitioning Neural Networks Computation across Multiple Enclaves

Speaker: Mr. Tarek Elgamal

🗓 10:45 A.M.-11:00 A.M. (Taiwan), 8:45 P.M.-9:00 P.M. Nov. 19 (Illinois)

Recent advances in Deep Neural Networks (DNN) and Edge Computing have made it possible to automatically analyze streams of videos from home/security cameras over hierarchical clusters that include edge devices, close to the video source, as well as remote cloud compute resources. However, preserving the privacy and confidentiality of users' sensitive data as it passes through different devices remains a concern for most users. Private user data is subject to attack by malicious outsiders or to misuse by internal administrators in activities the user has not explicitly approved. To address this challenge, we present Serdab, a distributed orchestration framework for deploying deep neural network computation across multiple secure enclaves (e.g., Intel SGX). Secure enclaves provide a guarantee on the privacy of the data and code deployed inside them. However, their limited hardware resources make a single enclave inefficient at running an entire deep neural network. To bridge this gap, Serdab provides a DNN partitioning strategy that distributes the layers of the neural network across multiple enclave devices, or across an enclave device and other hardware accelerators. Our partitioning strategy achieves up to a 4.7x speedup compared to executing the entire neural network in one enclave.
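
To make the layer-partitioning idea concrete, here is a minimal, hypothetical sketch. It is not Serdab's actual strategy (which also weighs factors such as inter-device latency); it simply groups consecutive DNN layers greedily so that each group's memory footprint fits one enclave's budget, motivated by the small protected memory of early Intel SGX enclaves. The function name, per-layer sizes, and the 90 MB budget are assumptions for illustration.

```python
def partition_layers(layer_sizes_mb, enclave_budget_mb):
    """Greedily split consecutive DNN layers into groups whose total
    memory footprint fits a single enclave's budget. Returns a list of
    layer-index groups, one group per enclave."""
    groups, current, used = [], [], 0.0
    for i, size in enumerate(layer_sizes_mb):
        if size > enclave_budget_mb:
            raise ValueError(f"layer {i} ({size} MB) exceeds enclave budget")
        # Close the current group when adding this layer would overflow it.
        if used + size > enclave_budget_mb and current:
            groups.append(current)
            current, used = [], 0.0
        current.append(i)
        used += size
    if current:
        groups.append(current)
    return groups

# Example: a 6-layer network with per-layer footprints in MB,
# split across enclaves assumed to offer 90 MB of usable memory each.
print(partition_layers([60, 40, 30, 80, 20, 10], 90))
# -> [[0], [1, 2], [3], [4, 5]]
```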

Extended Reality for Art & Sports

Speaker: Prof. Hung-Kuo Chu and Prof. Min-Chun Hu

🗓 11:00 A.M.-11:45 A.M. (Taiwan), 9:00 P.M.-9:45 P.M. Nov. 19 (Illinois)

We will share our experience developing Extended Reality systems for art creation and sports training. This talk will emphasize three issues: the design of human-computer interaction, the design of multi-user interaction, and AI-based VR content creation.

Feasibility Study on VR-based Basketball Tactic Training

Speaker: Ms. Wan Lun Tsai

🗓 11:45 A.M.-12:00 P.M. (Taiwan), 9:45 P.M.-10:00 P.M. Nov. 19 (Illinois)

To investigate the feasibility of VR-based basketball tactical training, we conducted an experiment comparing the training effectiveness of various degrees of immersion: a conventional basketball tactic board, a 2D monitor, and virtual reality. A multi-camera human tracking system was developed and built around a real-world basketball court to record and analyze each player's running trajectory during tactical execution. The accuracy of the running path and the hesitation time at each tactical step were evaluated for each participant. Furthermore, we assessed several subjective measures, including simulator sickness, presence, and sport imagery ability, for a more comprehensive exploration of the feasibility of the proposed VR framework for basketball tactics training. The results indicate that the proposed system is useful for learning complex tactics.
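
For illustration only, the sketch below shows one plausible way to compute the two objective measures named above from tracked positions. The study's exact metric definitions, sampling rate, and speed threshold are not given here, so everything below is an assumption.

```python
import numpy as np

def path_accuracy(executed, planned):
    """Mean Euclidean deviation (meters) between a player's executed
    running path and the planned tactical path, sampled at matching
    timestamps. Both inputs: arrays of shape (n, 2) of court coordinates."""
    executed, planned = np.asarray(executed), np.asarray(planned)
    return float(np.linalg.norm(executed - planned, axis=1).mean())

def hesitation_time(speeds, dt, threshold=0.5):
    """Total time (seconds) the player's speed stays below `threshold`
    m/s after a tactical step begins, as a proxy for hesitation."""
    speeds = np.asarray(speeds)
    return float((speeds < threshold).sum() * dt)

# Example: positions tracked at 10 Hz (dt = 0.1 s) on a half court.
executed = [(0, 0), (1.0, 0.4), (2.1, 1.1), (3.0, 2.2)]
planned  = [(0, 0), (1.0, 0.5), (2.0, 1.0), (3.0, 2.0)]
print(path_accuracy(executed, planned))            # average deviation in m
print(hesitation_time([0.2, 0.3, 1.5, 2.0], 0.1))  # 0.2 s below threshold
```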