VHCIE is scheduled for March 24th, 2019.
Session 1: 13h30-15h00
13h30-13h35: Introduction
13h35-14h20: Invited presentation, "Sixteen Years of Talking to Virtual Humans: Lessons Learned in Social, Affective and Educational Interactions"
Sabarish Babu, Clemson University
Bio:
Sabarish “Sab” Babu is an Associate Professor in the Division of Human Centered Computing in the School of Computing at Clemson University in the USA. He received his BS (2000), MS (2002), and PhD (2007) degrees from the University of North Carolina at Charlotte, and completed a post-doctoral fellowship in the Department of Computer Science at the University of Iowa prior to joining Clemson University in 2010. His research interests are in the areas of virtual environments, virtual humans, applied perception, educational virtual reality, and 3D human-computer interaction. He has authored or co-authored over 75 peer-reviewed publications in premier venues in the field. He was the General Chair of the IEEE International Conference on Virtual Reality (IEEE VR) 2016 and served as a Program Co-Chair for IEEE VR 2017. He and his students have received Best Paper Awards at the IEEE International Conference on Virtual Reality, the IEEE Symposium on 3D User Interfaces, the ACM Symposium on Applied Perception, and the IEEE International Conference on Healthcare Informatics. His research has been sponsored by the US National Science Foundation, the US Department of Labor, and the St. Francis and Medline Medical Foundations.
Abstract:
I will trace the arc of my research in human-virtual human interaction over the past sixteen years, with an emphasis on important lessons learned. I will start with the creation and evaluation of MARVE, the world’s first interactive office assistant to engage users in goal-oriented and social dialogue in a public kiosk-like setting. I will then present an interactive VR simulation that employed multi-party dialogue, via a virtual instructor and a virtual conversation partner, to teach social conversation protocols in a foreign culture. This work showed for the first time that interactive virtual humans can be used in an interpersonal simulation to teach the non-verbal behaviors of a foreign culture in an engaging and compelling manner. To highlight the potential of virtual humans in medical education, I will present our work on the creation and evaluation of interactive virtual humans for teaching patient safety practices, such as the CDC’s 5 moments of hand hygiene, failure-to-rescue scenarios, and patient monitoring and surveillance, in practitioner-centric clinical VR simulations. Besides the creation of novel applications for medical training and education, a research question that has inspired my work is how virtual human animation and appearance fidelity affects users’ reactions to the agents in VR simulations. I will present the lessons learned from several studies exploring how animation and appearance fidelity might affect participants’ emotional reactions to, and visual attention toward, virtual humans in interpersonal simulations. We have also studied how interactive virtual humans can be used in middle school education on computing principles and logical thinking, and in the research and training of safe traffic-crossing behaviors in children. Our research shows that immersive virtual humans can successfully engage children in task-oriented encounters as virtual peers and learning companions. Finally, I will close my talk with some current and future research trends for virtual humans.
14h20-14h40: "Evaluation of Omnipresent Virtual Agents Embedded as Temporarily Required Assistants in Immersive Environments"
Andrea Bönsch1,2, Jan Hoffmann1, Jonathan Wendt1,2, Torsten W. Kuhlen1,2
1 Visual Computing Institute, RWTH Aachen University, Germany
2 JARA-HPC, Aachen, Germany
When designing the behavior of embodied, computer-controlled, human-like virtual agents (VAs) serving as temporarily required assistants in virtual reality applications, two linked factors have to be considered: the time the VA is visible in the scene, defined as presence time (PT), and the time until the VA is actually available for support on a user’s call, defined as approaching time (AT). Complementing previous research on behaviors with a low PT, we present the results of a controlled within-subjects study investigating behaviors in which the VA is always visible, i.e., behaviors with a high PT. The two behaviors tested, which differ in their AT, are: following, a design in which the VA is omnipresent and constantly follows the user, and busy, a design in which the VA self-reliantly spends time nearby the user and approaches only when explicitly asked for support. The results indicate that subjects prefer the following VA, a behavior that also leads to slightly lower execution times compared to busy.
14h40-15h00: "Augmented and Virtual Reality Interfaces for Crowd Simulation Software – A Position Statement for Research on Use-Case-Dependent Interaction"
Wolfgang Hürst and Roland Geraerts
Utrecht University, Netherlands
In this position paper, we claim that immersive technologies, such as Augmented and Virtual Reality, are well-suited interfaces for using crowd simulation software in different contexts. We introduce three use cases: planning, awareness creation, and education. Based on an overview of different Augmented and Virtual Reality approaches, we identify those most suitable for each of the three scenarios and illustrate related implementations. Initial observations of their usage support our claims, but also highlight areas to explore in future research.
Session 2: 15h15-16h45
15h15-16h00: Invited presentation, "The Impact of Avatars on Close Quarters Interaction"
Anthony Steed, University College London
Bio:
Anthony Steed is Head of the Virtual Environments and Computer Graphics group at University College London. His main research interests are in mixed-reality systems and the impact of these systems on telecollaboration. He got started in virtual reality in the early 1990s, when the main focus was on building systems capable of realizing the vision of immersive displays. Since then he has worked on the measurement of presence and immersion, collaborative virtual reality, mixed reality, and novel systems engineering. He has published over 160 papers and is the main author of the book “Networked Graphics: Building Networked Games and Virtual Environments”. In 2018/2019 he was a Visiting Researcher at Microsoft Research, Redmond, and an Erskine Fellow at the Human Interface Technology Laboratory in Christchurch, New Zealand.
Abstract:
There is a compelling theory emerging of how embodiment inside immersive virtual environments enables participants to use their bodies in natural and fluid ways. In this talk, I will discuss recent work on how avatar representation and embodiment affect collaboration in social virtual reality. Our lab-based work shows how users utilize information about avatars in quite complex and surprising ways, and our studies of consumers in their homes show some of the barriers that users experience when using avatars for extended periods. I will then discuss how these findings pose some near-term challenges to the field, and review some immediate ways forward that could have a significant impact on the utility of social virtual reality.
16h00-16h20: "Speech Breathing in Virtual Humans: An Interactive Model and Empirical Study"
Ulysses Bernardet1, Sin-hwa Kang2, Andrew Feng2, Steve DiPaola3, Ari Shapiro2
1 School of Engineering and Applied Science, Aston University
2 Institute for Creative Technologies, University of Southern California
3 Simon Fraser University
Human speech production requires the dynamic regulation of air through the vocal system. While virtual character systems are commonly capable of speech output, they rarely take breathing during speaking – speech breathing – into account. We believe that integrating dynamic speech breathing systems into virtual characters can significantly augment their realism. Here, we present a novel control architecture aimed at generating speech breathing in virtual characters. This architecture is informed by behavioral, linguistic, and anatomical knowledge of human speech breathing. Based on textual input and controlled by a set of low- and high-level parameters, the system produces dynamic signals in real time that control the virtual character’s anatomy (thorax, abdomen, head, nostrils, and mouth) and sound production (speech and breathing). In addition, we perform a study to determine the effects of adding breathing-motivated speech movements, such as head tilts and chest expansions, as well as breathing sounds, to a virtual character during dialogue. This study includes speech generated both by a text-to-speech engine and from recorded voice.
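To make the parameter-driven design concrete, here is a minimal, hypothetical Python sketch (not the authors' system): it reduces the architecture to a single normalized expansion signal with a fast-inhale/slow-exhale profile, and the names rate_bpm, depth, and speaking are invented for illustration; the actual architecture drives multiple anatomical channels plus sound production from textual input.

    # Hypothetical illustration only -- not the authors' system. Sketches the
    # idea of a parameter-driven breathing signal: during speech, inhalation is
    # quick and exhalation is slow while air is spent on phonation.

    def breathing_signal(t, rate_bpm=14.0, depth=1.0, speaking=False):
        """Normalized thorax/abdomen expansion (0..depth) at time t seconds."""
        period = 60.0 / rate_bpm                 # duration of one breath cycle
        phase = (t % period) / period            # position within the cycle, 0..1
        inhale_frac = 0.2 if speaking else 0.5   # speaking compresses the inhale
        if phase < inhale_frac:
            return depth * (phase / inhale_frac)                            # inhaling
        return depth * (1.0 - (phase - inhale_frac) / (1.0 - inhale_frac))  # exhaling

    # Sample the signal at 10 Hz, e.g., to drive thorax geometry each frame.
    for step in range(5):
        t = step * 0.1
        print(f"t={t:.1f}s expansion={breathing_signal(t, speaking=True):.2f}")

In a real system such a signal would be one of several synchronized outputs (chest, abdomen, head tilt, nostrils, audio) timed against the text to be spoken.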
16h20-16h40: "An Empirical Lab Study Investigating If Higher Levels of Immersion Increase the Willingness to Donate"
Andrea Bönsch1,2, Alexander Kies3, Moritz Jörling3, Stefanie Paluch3, Torsten W. Kuhlen1,2
1 Visual Computing Institute, RWTH Aachen University, Germany
2 JARA-HPC, Aachen, Germany
3 Service and Technology Marketing (STM), TIME Research Area, School of Business and Economics, RWTH Aachen University, Germany
Technological innovations have a growing relevance for charitable donations, as new technologies shape the way we perceive and approach digital media. In a between-subjects study with sixty-one volunteers, we investigated whether a higher degree of immersion for the potential donor can yield more donations for non-governmental organizations. To this end, we compared the donations given after experiencing a video-based, an augmented-reality-based, or a virtual-reality-based scenery with a virtual agent representing a war-victimized Syrian boy talking about his losses. Our initial results indicate that immersion has no impact. However, the donor’s perceived innovativeness of the technology used might be an influencing factor.
16h40-16h45: Closing remarks