Pan Hui

Talk1:

Practical Mobile Edge Computing: in the Cases of Augmented and Virtual Reality

🗓 26 Nov (Tue), 3:30 p.m. - 5:30 p.m.

📍 Delta 102

The smartphone is now a necessity in people's daily lives, and we enjoy various services on it through numerous mobile applications. However, the resource and communication limitations of a single mobile device leave it unable to satisfy the real-time, interactive constraints of computation-intensive applications such as mobile Augmented Reality (AR) and mobile Virtual Reality (VR). To bridge the gap, we utilize the processing power of edge servers via task offloading and build practical mobile systems that significantly outperform state-of-the-art mobile systems in terms of latency, scalability, quality of experience (QoE), and many other aspects.

In the case of mobile Augmented Reality, large-scale object recognition is an essential but time-consuming task. To offload the object recognition task and enhance system performance, we explore how the GPU and the multi-core architecture on edge servers can accelerate large-scale object recognition. With a carefully designed offloading pipeline and edge acceleration, we are able to finish the whole AR pipeline within one camera frame interval while maintaining high recognition accuracy on large-scale datasets.
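As a rough illustration of the frame-budget constraint above, the following sketch checks whether a sequence of pipeline stages fits within one camera frame interval. The stage names and latency figures are illustrative assumptions, not measurements from the system presented in the talk.

```python
FRAME_INTERVAL_MS = 1000 / 30  # one camera frame at 30 fps, about 33.3 ms

def run_pipeline(stage_latencies_ms):
    """Return (total latency in ms, whether it fits in one frame interval)."""
    total = sum(stage_latencies_ms.values())
    return total, total <= FRAME_INTERVAL_MS

# Hypothetical stage breakdown: client-side encoding, uplink transfer,
# GPU-accelerated recognition on the edge, downlink transfer, rendering.
stages = {"encode": 5.0, "uplink": 8.0, "edge_recognition": 12.0,
          "downlink": 4.0, "render": 3.0}
total_ms, within_budget = run_pipeline(stages)
```

With these assumed numbers the pipeline totals 32 ms and fits inside the ~33.3 ms frame interval; the point of the sketch is only that every stage, including network transfer, must share the single-frame budget.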

Beyond the performance concerns on mobile devices, edge servers also face a scalability issue, as too many concurrent offloading requests can exhaust the processing capacity of the edge and result in delayed execution with tasks queuing on the edge. To address this issue, we propose a two-tier caching architecture for mobile AR: 1) when there is enough processing capacity remaining on the edge, the mobile client offloads the recognition task to achieve the best performance, and 2) when the edge is heavily occupied, the mobile client derives its own result from the previous recognition result in a nearby device's cache, exploiting the fact that users in the vicinity have a high chance of querying the same physical objects. This two-tier caching architecture not only guarantees system performance when serving massive numbers of concurrent users, but also enables innovative features such as multi-player AR.
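The two-tier decision can be summarized as a small piece of client-side logic. This is a minimal sketch under stated assumptions; the function name, tier labels, and load model are invented for illustration.

```python
def choose_tier(edge_load, edge_capacity, nearby_cache_hit):
    """Decide how a mobile AR client resolves a recognition request."""
    # Tier 1: the edge has spare capacity, so offload for best performance.
    if edge_load < edge_capacity:
        return "offload_to_edge"
    # Tier 2: the edge is saturated; reuse a nearby device's cached result,
    # since users in the vicinity often query the same physical objects.
    if nearby_cache_hit:
        return "reuse_nearby_cache"
    # Otherwise the request has no choice but to queue on the edge.
    return "queue_on_edge"

decisions = [choose_tier(3, 10, False),   # spare capacity
             choose_tier(10, 10, True),   # saturated, cache hit nearby
             choose_tier(10, 10, False)]  # saturated, no cached result
```

The fallback ordering is the design point: the nearby-device cache absorbs load exactly when the edge cannot, which is what lets the architecture scale to many concurrent users.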

In the case of mobile Virtual Reality, existing 360-degree video streaming systems suffer from insufficient pixel density, as the video resolution falling within the user's field of view (FoV) is relatively low. We utilize the edge server for ultra-high-resolution video transcoding and implement a system that streams tile-based, viewport-adaptive 360-degree videos to the mobile client. With this edge proxy, we successfully achieve 16K 360-degree video streaming on off-the-shelf smartphones, attaining high frame quality and fluent playback without overwhelming the processing capacity of the smartphones.
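To illustrate the tile-based viewport adaptation idea, the sketch below assigns a high-quality representation only to tiles that overlap the user's FoV. The tiling scheme (equal yaw slices, no wraparound at 0°/360°) is a simplifying assumption, not the scheme used in the actual system.

```python
def tile_qualities(num_tiles, fov_center_deg, fov_width_deg):
    """Mark each equal-width yaw tile 'high' if it overlaps the viewport,
    'low' otherwise (wraparound at 0/360 degrees omitted for brevity)."""
    tile_width = 360 / num_tiles
    fov_lo = fov_center_deg - fov_width_deg / 2
    fov_hi = fov_center_deg + fov_width_deg / 2
    qualities = []
    for i in range(num_tiles):
        t_lo, t_hi = i * tile_width, (i + 1) * tile_width
        overlaps = t_lo < fov_hi and t_hi > fov_lo
        qualities.append("high" if overlaps else "low")
    return qualities

# 8 tiles around the equator, viewport centered at 90° yaw with a 100° FoV.
qualities = tile_qualities(8, 90, 100)
```

Only the tiles inside the viewport are fetched at high resolution, which is how pixel density in the FoV can rise to 16K-equivalent levels without decoding the full panorama on the phone.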


Discussion session:

🗓 27 Nov (Wed), 10:00 a.m. - 12:00 p.m.

📍 Delta 644

10:20~10:40: ChengHao Wu

10:40~11:00: MinHan Tsai

11:00~11:20: ChingLing Fan

11:20~11:40: TseHou Hung

Talk2:

Building Intelligent Mobile Camera Systems: Visual Privacy by Design Meets Social Interaction

🗓 28 Nov (Thu), 10:00 a.m. - 12:00 p.m.

📍 Delta 106

In recent years, cameras have become ubiquitous in smartphones, smart glasses, IoT devices, and surveillance systems. By seeing the physical world and capturing beyond what humans can see explicitly, cameras have played an important role, together with advances in computer vision and mobile computing, in building intelligent systems and applications such as augmented reality that facilitate people's daily lives. While cameras bring people great convenience, they also raise concerns about visual privacy invasion. Due to the ease of taking photos and recording videos, the popularity of online social networks, and the possibility of inferring private information from images and videos using recognition techniques, people's attitudes towards the growing number of cameras are not entirely positive.

In this talk, we build intelligent mobile camera systems that enhance social interaction and bystander visual privacy. We first introduce Talk2Me, a mobile social network framework that helps users initiate conversations and make new friends with others in proximity. Users of Talk2Me can share information with nearby users in a device-to-device fashion and view others' information in augmented reality. To address the visual privacy issues raised by pervasive mobile cameras, we first propose a visual-indicator-based approach that lets people control their visual privacy by wearing tags or showing hand gestures. We also design a beacon-based visual privacy protection solution that enables recorders to inform nearby bystanders of camera use and receive privacy preferences from bystanders, following a trigger-and-notification protocol. Finally, we propose Cardea, a context-aware visual privacy protection mechanism that protects bystander visual privacy in photos according to users' context-dependent privacy preferences. We build prototypes on smartphones, and the evaluation results demonstrate the feasibility and effectiveness of our systems.
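As a toy illustration of the trigger-and-notification idea behind the beacon-based solution, the sketch below obscures the faces of bystanders who replied to a capture beacon with a 'blur' preference. The message format, names, and preference values are hypothetical, not the talk's actual protocol.

```python
def apply_bystander_preferences(detected_faces, beacon_replies):
    """The recorder broadcasts a capture beacon (trigger); bystanders reply
    with 'allow' or 'blur' (notification); faces of anyone requesting
    'blur' are obscured before the photo is kept."""
    protected = {who for who, pref in beacon_replies.items() if pref == "blur"}
    return [(face, "blurred" if face in protected else "kept")
            for face in detected_faces]

# Hypothetical scene: three recognized bystanders, two of whom replied
# to the beacon; non-responders are kept as-is in this sketch.
result = apply_bystander_preferences(
    ["alice", "bob", "carol"],
    {"alice": "blur", "bob": "allow"})
```

Defaulting non-responders to "kept" is one possible policy; a privacy-first variant would instead blur anyone who did not explicitly reply "allow".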
