After the emergence of video streaming services, more creative and diverse multimedia content has become available, and the capability of streaming 360-degree videos is now opening a new era of multimedia experiences. However, streaming these videos requires larger bandwidth and lower latency than conventional video streaming systems provide. Rate adaptation of tiled videos and view prediction techniques are used to address this problem. In this paper, we introduce the Navigation Graph, which models viewing behavior in the temporal (segment) and spatial (tile) domains to perform rate adaptation of tiled media in conjunction with view prediction. The Navigation Graph allows clients to perform view prediction more easily by sharing the viewing model in the same way that media description information is shared in DASH. It is also useful for encoding trajectory information in the media description file, which could allow for more efficient navigation of 360-degree videos. This paper describes the creation of the Navigation Graph and its uses. The performance evaluation shows that Navigation Graph-based view prediction and rate adaptation outperform existing tiled media streaming solutions. The Navigation Graph is not limited to 360-degree video streaming applications; it can also be applied to other tiled media streaming systems, such as volumetric media streaming for augmented reality applications.
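To make the idea concrete, the sketch below shows a heavily simplified, hypothetical Navigation Graph in Python: nodes are (segment, tile) pairs, edge weights count how often past viewers moved from one tile to another between consecutive segments, and prediction simply follows the most frequent outgoing edges. The class and method names are illustrative only and do not reflect the paper's actual data structures or rate-adaptation logic.

```python
from collections import defaultdict

class NavigationGraph:
    """Toy navigation graph: nodes are (segment, tile) pairs; edge weights
    count how often past viewers moved from one tile to another between
    consecutive segments. (Illustrative only; the paper's model is richer.)"""

    def __init__(self):
        # transitions[(segment, tile)][next_tile] -> observed count
        self.transitions = defaultdict(lambda: defaultdict(int))

    def add_trace(self, trace):
        """trace: list of tile indices viewed at segments 0..N-1."""
        for seg, (tile, next_tile) in enumerate(zip(trace, trace[1:])):
            self.transitions[(seg, tile)][next_tile] += 1

    def predict(self, seg, tile, top_k=3):
        """Return the top_k most likely tiles for segment seg+1, with probabilities."""
        counts = self.transitions[(seg, tile)]
        total = sum(counts.values())
        if total == 0:
            return []
        ranked = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
        return [(t, c / total) for t, c in ranked[:top_k]]


# Example: build a graph from two short view traces and predict from segment 0, tile 5.
graph = NavigationGraph()
graph.add_trace([5, 5, 6, 7])
graph.add_trace([5, 6, 6, 7])
print(graph.predict(0, 5))   # e.g. [(5, 0.5), (6, 0.5)]
```

In such a setup, a rate-adaptation layer could request the predicted tiles at a higher bitrate while fetching the remaining tiles at base quality.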
Future view prediction for 360-degree video streaming systems is important for saving network bandwidth and improving the Quality of Experience (QoE) of the video. View history and video semantic information both help predict the future behavior of the viewer. However, most existing view prediction methods use only the history of viewing behaviors; semantic information is rarely utilized to predict future views. Extracting video semantic information requires powerful computing hardware and large memory space to perform deep learning-based video analysis, which is impractical for most client devices, such as small mobile devices or Head-Mounted Displays (HMDs). Therefore, in this paper, we propose a system that performs video semantic analysis on the media server and provides the analysis result to clients as part of the Media Presentation Description (MPD), called the Semantic Flow Descriptor (SFD). On top of this, we propose a Semantic-Aware View Prediction System (SEAWARE) to improve view prediction performance. The presented SEAWARE system can perform semantic-aware view prediction without a deep learning network at the client side. Evaluation results with 360-degree videos and real view traces show that the proposed SEAWARE system improves view prediction performance and streams high-quality video under limited network bandwidth.
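As a rough illustration of the client-side idea, the sketch below blends a view-history score with semantic weights that are assumed to arrive in the MPD as a Semantic Flow Descriptor. The SFD field layout, the per-tile weighting, and the blending rule are all assumptions made for this example; the paper defines its own descriptor format and prediction procedure.

```python
# Minimal sketch of client-side semantic-aware tile scoring, assuming the MPD
# carries a per-segment Semantic Flow Descriptor listing salient tiles and weights.
# Field names and the blending rule are illustrative, not the paper's specification.

def score_tiles(history_tiles, sfd_weights, alpha=0.6):
    """history_tiles: dict tile -> score from recent view history (e.g. dwell time).
    sfd_weights:   dict tile -> semantic saliency weight from the SFD.
    Returns tiles ranked by a blend of the two signals."""
    tiles = set(history_tiles) | set(sfd_weights)
    blended = {
        t: alpha * history_tiles.get(t, 0.0) + (1 - alpha) * sfd_weights.get(t, 0.0)
        for t in tiles
    }
    return sorted(blended.items(), key=lambda kv: kv[1], reverse=True)


# Example: the viewer has been dwelling on tiles 4 and 5, while the SFD marks
# tile 6 as semantically salient (e.g. a moving object entering the scene).
history = {4: 0.7, 5: 0.3}
sfd = {5: 0.2, 6: 0.8}
print(score_tiles(history, sfd))
```

No neural network runs on the client in this sketch; the expensive semantic analysis is assumed to have been done once on the server.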
Traffic safety is the foremost goal that automotive radar systems pursue. Unlike the literature on mobile communication systems, the radar literature has not adequately addressed inter-radar interference and security threats such as jamming and spoofing, which in turn threaten traffic safety. In this context, we introduce a novel frequency-modulated continuous-wave (FMCW) radar scheme, named BlueFMCW, that mitigates both interference and spoofing signals. BlueFMCW randomly hops in frequency to avoid interference and spoofing signals. Our phase alignment algorithm removes the phase discontinuity that arises when combining the beat signals from the randomly hopped chirps, so the radar's resolution is not compromised. Simulation results show that BlueFMCW can efficiently mitigate interference and spoofing signals in various scenarios without sacrificing resolution.
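The toy simulation below illustrates why phase alignment matters when beat signals from randomly hopped chirps are combined. It assumes a single ideal point target and, purely for clarity, uses the true round-trip delay to write the per-hop phase correction in closed form; the actual BlueFMCW algorithm removes the phase discontinuity without requiring the target delay to be known, so this is an illustration of the effect, not of the method.

```python
import numpy as np

c = 3e8                       # speed of light (m/s)
slope = 30e12                 # chirp slope S (Hz/s)
fs = 20e6                     # beat-signal sampling rate (Hz)
n_hops, n_per_hop = 8, 256    # hopped sub-chirps and samples per sub-chirp
t_sub = np.arange(n_per_hop) / fs
T_sub = n_per_hop / fs

target_range = 40.0           # metres (single ideal point target)
tau = 2 * target_range / c    # round-trip delay

rng = np.random.default_rng(1)
# Random transmission order of the contiguous sub-bands, so the full band is still covered.
hop_order = rng.permutation(n_hops)

raw, aligned = [], []
for k, m in enumerate(hop_order):
    df_k = m * slope * T_sub          # start-frequency offset used in slot k
    # Beat signal of slot k: constant beat frequency slope*tau, plus a phase
    # term that depends on the hop's start frequency.
    seg = np.exp(1j * 2 * np.pi * (df_k * tau + slope * tau * t_sub))
    raw.append(seg)
    # Phase alignment: compensate for transmitting the sub-bands out of order
    # (uses the known tau only because this is a toy illustration).
    phase_fix = np.exp(-1j * 2 * np.pi * (df_k - slope * k * T_sub) * tau)
    aligned.append(seg * phase_fix)

def peak_to_mean(segments):
    """Sharpness of the range spectrum of the combined beat signal."""
    spec = np.abs(np.fft.fft(np.concatenate(segments)))
    return spec.max() / spec.mean()

print("hopped, no alignment :", round(peak_to_mean(raw), 1))
print("hopped, aligned      :", round(peak_to_mean(aligned), 1))
```

With the correction applied, the concatenated beat signal becomes a single coherent tone, so the range peak is as sharp as that of a full-bandwidth, non-hopped chirp.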
Volumetric media, popularly known as holograms, need to be delivered to users through both on-demand and live streaming for new augmented reality (AR) and virtual reality (VR) experiences. As in video streaming, hologram streaming must support network adaptivity and fast startup, but it must also moderate large bandwidths, multiple simultaneously streaming objects, and frequent user interaction, which requires low delay. In this paper, we introduce the first system designed specifically for streaming volumetric media. The system reduces bandwidth by introducing 3D tiles and by culling them or reducing their level of detail depending on their relation to the user's view frustum and their distance to the user. To allocate bits among different tiles across multiple objects, we introduce a simple greedy yet provably optimal algorithm for rate-utility optimization, whose utility measure is based not only on the underlying quality of the representation but also on the level of detail relative to the user's viewpoint and device resolution. Simulation results show that the proposed algorithm provides superior quality compared to existing video-streaming approaches adapted to hologram streaming, in terms of utility and user experience over variable, throughput-constrained networks.
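A minimal sketch of the greedy idea is shown below, assuming each tile offers a small ladder of (bitrate, utility) representations with diminishing marginal utility and carries a visibility weight derived from frustum culling and viewing distance. Upgrades are taken in order of weighted marginal utility per extra bit, the standard greedy rule that is optimal under concavity; the exact utility model and constraints in the paper differ, so treat this as an illustration only.

```python
import heapq

def greedy_allocate(tiles, budget):
    """Greedy rate-utility allocation sketch.

    tiles: list of dicts with
      'reps'   : list of (bitrate, utility) pairs, sorted by bitrate, with
                 diminishing marginal utility (concavity is assumed),
      'weight' : visibility weight (e.g. from view frustum and distance).
    budget: total bitrate budget.

    Each tile starts at its lowest representation; upgrades are taken in order
    of weighted marginal utility per extra bit until the budget is exhausted.
    Returns the chosen representation index per tile.
    """
    choice = [0] * len(tiles)
    spent = sum(t['reps'][0][0] for t in tiles)

    def upgrade_gain(i, level):
        r0, u0 = tiles[i]['reps'][level]
        r1, u1 = tiles[i]['reps'][level + 1]
        cost = r1 - r0
        gain = tiles[i]['weight'] * (u1 - u0)
        return gain / cost, cost

    heap = []
    for i, t in enumerate(tiles):
        if len(t['reps']) > 1:
            eff, cost = upgrade_gain(i, 0)
            heapq.heappush(heap, (-eff, i, 0, cost))

    while heap:
        neg_eff, i, level, cost = heapq.heappop(heap)
        if level != choice[i]:
            continue                      # stale entry, tile already upgraded
        if spent + cost > budget:
            continue                      # this upgrade does not fit
        choice[i] = level + 1
        spent += cost
        if choice[i] + 1 < len(tiles[i]['reps']):
            eff, _ = upgrade_gain(i, choice[i])
            heapq.heappush(heap, (-eff, i, choice[i], tiles[i]['reps'][choice[i] + 1][0] - tiles[i]['reps'][choice[i]][0]))
    return choice


# Example: two visible tiles and one culled tile sharing a 10 Mbps budget.
tiles = [
    {'reps': [(1e6, 1.0), (3e6, 2.0), (6e6, 2.5)], 'weight': 1.0},   # in view, near
    {'reps': [(1e6, 1.0), (3e6, 2.0), (6e6, 2.5)], 'weight': 0.5},   # in view, far
    {'reps': [(1e6, 1.0), (3e6, 2.0)],             'weight': 0.0},   # culled
]
print(greedy_allocate(tiles, budget=10e6))   # nearby visible tile gets the most bits
```

The visibility weight is what ties the allocation to the viewer: a culled or distant tile yields little weighted utility per bit, so the budget naturally flows to the tiles the user is actually looking at.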