VHCIE is scheduled on March 22nd, 2020. This will be an ONLINE event streamed on Twitch: https://www.twitch.tv/ieeevr2020_vhcie. Speakers were kindly asked to prepare a pre-recorded video in order to minimize technical issues. Questions to authors will be asked through the Slido channel of the IEEE VR conference, and chat will take place in the VHCIE channel of the IEEE VR Slack workspace. To make VHCIE a lively event with interactions between authors and the audience, it's time to connect to the Slack channel (https://ieeevr-slack-invite.glitch.me/). You must be registered for the conference to access the Slack channel. As a reminder, registration is free if you are not an author in the conference, so please do not hesitate to register!
Please find below the link to the YouTube channel of the VHCIE workshop, where you can find the videos of the authors' presentations.
Tabitha C. Peck is an assistant professor of Mathematics and Computer Science at Davidson College. She completed her Ph.D. in computer science at The University of North Carolina at Chapel Hill under the direction of Henry Fuchs and Mary Whitton. She has worked in numerous virtual reality research labs, including the Palo Alto Research Center and the Experimental Virtual Environments (EVENT) Lab for Neuroscience at the University of Barcelona with Mel Slater. She is an associate editor for Presence, a review editor for Frontiers in Virtual Reality, and an IEEE Virtual Reality conference paper chair.
Abstract: A virtual reality user can become embodied in a self-avatar and experience the sensation that their own body has been substituted by the self-avatar's body. This sense of embodiment in a self-avatar is complex, especially when embodying an avatar that is visually of a different age, gender, or race from the user. In this talk I will discuss measurements of embodiment, the cognitive implications of self-avatars, and how gender- and race-swapped avatars can be used to reduce bias, or to either induce or mitigate stereotype threat.
Reimer, D.1,2, Langbehn, E.2, Kaufmann, H.1, Scherzer, D.3
1 Vienna University of Technology, 2 Ravensburg-Weingarten University, 3 University of Hamburg
Redirected Walking (RDW) techniques allow users to navigate immersive virtual environments much larger than the available tracking space through natural walking. While several approaches exist, numerous RDW techniques operate by applying gains of different types to the user's viewport. These gains must remain undetected by the user in order for an RDW technique to support plausible navigation within a virtual environment. The present paper explores the relationship between detection thresholds of redirection gains and the presence of a self-avatar within the virtual environment. In four psychophysical experiments we estimated the thresholds of curvature and translation gains with and without a virtual body. The goal was to evaluate whether a full-body representation has an impact on the detection thresholds of these gains. The results indicate that although the presence of a virtual body does not significantly affect the detectability of these gains, it gives users the illusion that the gains are easier to detect. We discuss the possibility of a future combination of full-body representations and redirected walking, and whether these findings influence the implementation of large virtual environments with immersive virtual body representation.
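To illustrate the mechanism the abstract refers to, here is a minimal, hypothetical sketch of how translation and curvature gains might remap one frame of tracked head movement. The function name, signature, and gain representation are assumptions for illustration; they are not taken from the paper.

```python
import numpy as np

def apply_gains(prev_pos, cur_pos, translation_gain, curvature_rad_per_m):
    """Map one frame of real (x, z) head movement into virtual movement.

    translation_gain scales the walked distance; curvature_rad_per_m
    rotates the walking direction slightly per metre walked, steering the
    user along a real-world arc while they walk straight virtually.
    (Illustrative sketch only; names and signature are hypothetical.)
    """
    delta = cur_pos - prev_pos              # real-world displacement
    dist = np.linalg.norm(delta)
    # Scale the displacement: a gain > 1 means the user covers more
    # virtual ground than real ground.
    virtual_delta = delta * translation_gain
    # Rotate the displacement by an angle proportional to distance walked
    # (curvature gain).
    theta = curvature_rad_per_m * dist
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return rot @ virtual_delta
```

A detection-threshold experiment like the one described then estimates how large `translation_gain` or `curvature_rad_per_m` can be before users notice the mismatch between real and virtual movement.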
Bönsch, A., Jonda, M., Ehret, J., Kuhlen, T.
Visual Computing Institute, RWTH Aachen University, Germany
Simulating realistic navigation of virtual pedestrians through virtual environments is a recurring subject of investigation. The various mathematical approaches used to compute the pedestrians' paths result, among other things, in different computation times and varying path characteristics. Customizable parameters, e.g., maximal walking speed or minimal interpersonal distance, add another level of complexity. Thus, choosing the best-fitting approach for a given environment and use case is non-trivial, especially for novice users. To facilitate the informed choice of a specific algorithm with a certain parameter set, crowd simulation frameworks such as Menge provide an extendable collection of approaches with a unified interface for usage. However, they often lack an elaborate visualization with high informative value, accompanied by visual analysis methods to explore the complete simulation data in more detail, which is required for an informed choice. Benchmarking suites such as SteerBench are a helpful approach as they objectively analyze crowd simulations; however, they are too tailored to specific behavior details. To this end, we propose a preliminary design of an advanced graphical user interface providing a 2D and 3D visualization of the crowd simulation data as well as features for time navigation and overall data exploration.
Yao, H., Alappattu, M., Robinson, M., Lok, B.
University of Florida
In this paper, we explore which verbal and non-verbal communication behaviors health professions trainees employ when training interpersonal skills through interactions with virtual patients. In a pilot study, we investigated participants' eye-gaze position, nodding gestures, speech, interaction distance, and the questions participants asked. We compared the verbal and non-verbal behaviors observed in the pilot study with what we would expect to find given prior research on real-world interactions, and we explore potential reasons for any differences between our results and those expectations.
Bönsch, A., Kuhlen, T.
Visual Computing Institute, RWTH Aachen University, Germany
Virtual environments are increasingly often enriched by virtual populations consisting of computer-controlled, human-like virtual agents in order to resemble realistic and lively places. While applications often provide limited user-agent interaction based on, e.g., collision avoidance or mutual gaze, complex user-agent dynamics such as joint locomotion combined with a secondary task, e.g., conversing, are rarely considered yet. These dual-tasking situations, however, are beneficial for various use cases: guided tours and social simulations will become more realistic and engaging if a user is able to traverse a scene as a member of a social group, while platforms to study crowd and walking behavior will become more powerful and informative. To this end, this presentation deals with different areas of interaction dynamics, which need to be combined for modeling dual-tasking with virtual agents. Areas covered are kinematic parameters for navigation behavior, group shapes in static and mobile situations, as well as verbal and non-verbal behavior for conversations.
Bruneau, J.1, Duverne, T.2, Rougnant, T.2, Le Yondre, F.2, Berton, F.1, Hoyet, L.1, Pettré, J.1, Olivier, AH.1,2
1 Inria Rennes, 2 Univ. Rennes, M2S, VIPS, ENS Rennes
Proxemics is fundamental when immersing someone in a virtual crowd. If the virtual humans do not respect the proper social norms, they can make the user uncomfortable and/or break the immersion. In this presentation, we propose to study the effect of the social context of the environment on this norm. Is the distance you keep from others the same in an everyday setting and in a festive setting? We first designed a protocol aimed at breaking proxemics in real conditions: a confederate approached a male subject and stood right in front of him. We considered two environments with different social contexts: the area around a football stadium on match day and a train station. Individual behavior was observed using ethnographic methods before structured and semi-structured interviews were conducted. Individuals were not aware of being subjects of an experiment. Results reveal that people tend to show more embarrassment in a regular situation such as the train station than around the stadium in a festive setting. Secondly, we performed the same experiment in VR to assess whether these findings still apply in VR, and also to collect more quantitative data. In addition to the variables measured in real conditions, we considered participants' position and movements. Results showed that body reactions were similar between the two virtual spaces; only the time taken to get away from the proxemics transgressor was shorter in the station. The variation of proxemics norms depending on the social context of the environment was less noticeable in VR. We discuss the application of our findings for the design of populated virtual environments.
Zibrek, K.1, Niay, B.1, Olivier, AH.1,2, Hoyet, L.1, Pettré, J.1, McDonnell, R.3
1 Inria Rennes, 2 Univ. Rennes, M2S, 3 Trinity College Dublin
In human interaction, people keep different distances from each other depending on gender: males stand further away from males and closer to females. However, many other variables influence proximity, such as appearance characteristics of the virtual character (e.g., attractiveness). Our study focuses on proximity to virtual walkers in virtual reality (VR), where gender could be inferred from motion only. We applied a set of male and female walking motions (motion capture) to a wooden mannequin and displayed them to participants embodied in a virtual avatar in VR. Participants used the controller to stop the approaching mannequin when they felt it was uncomfortably close to them. We hypothesized that proximity would be affected by the gender of the character, but also by the gender of the participant. We additionally expected some motions to be rated as more attractive than others, and that attractive motions would reduce the proximity measure. Our results show support for the last two assumptions, but no difference in proximity was found according to the gender of the character's motion. Our findings have implications for the design of virtual characters in interactive virtual environments.
Stuart, J.1, Akinnola, I.2, Guido-Sanz, F.3, Anderson, M.3, Diaz, D.3, Welch, G., Lok, B.1
1 University of Florida, 2 University of Maryland Baltimore County, 3 University of Central Florida
Exposure to realistic stressful situations during an educational program may help mitigate the effects of stress on performance. We explored how virtual humans in an augmented reality environment induce stress. We also explored whether users can effectively utilize stress-management techniques taught during a simulation. We conducted a within-subjects pilot experiment (n=12) using an exploratory mixed-method design with a series of virtual patients using the Simple Triage and Rapid Treatment (START) system. This work proposes a need to explore how realistic scenarios using virtual humans can induce stress, and which techniques are most effective in reducing user stress in virtual simulations.
Chakraborty, S.1, Adams, H.1, Stefanucci, J.2, Creem-Regehr, S.2, Bodenheimer, B.1
1 Vanderbilt University, USA, 2 University of Utah, USA
Augmented reality is an important technology for learning and training, particularly in tasks such as navigation and assembly. These tasks typically require that people be able to accurately localize the position of objects in space; in augmented reality applications, these objects include virtual objects. Prior work suggests that people underestimate the location of virtual objects placed in the real world with augmented reality displays, although the reasons are unclear. What is clear is that virtual objects in current augmented reality displays typically lack some salient depth cues that real objects have. In this work, we test whether the motion of a familiar-size object, namely a life-size human avatar, adds a salient depth cue at distances of 15 to 40 meters. We report our preliminary findings and discuss the implications of our work for augmented reality applications.