Workshop Program

VHCIE is scheduled for March 19th, 2017.

  • Introduction

  • Invited Speaker: Ari Shapiro. "Towards the creation of a digital 'you' for immersive environments"

Dr. Ari Shapiro is a Research Assistant Professor at the University of Southern California (USC) and heads the Character Animation and Simulation research group at the USC Institute for Creative Technologies. His research focuses on understanding human shape, movement and interaction in order to synthesize digital characters. He has published numerous articles in computer graphics and animation conferences and journals, and is a nine-time SIGGRAPH speaker. For several years, he worked in the visual effects and video game industries on 3D character tools and algorithms at companies such as Industrial Light & Magic, LucasArts and Rhythm & Hues Studios. He holds film credits on The Incredible Hulk and Alvin and the Chipmunks 2, as well as video game credits on the Star Wars: The Force Unleashed series.

Abstract: We have developed a pipeline capable of generating a digital version of a specific person in 20 minutes, semi-automatically, with no artistic or technical intervention. This 3D construct includes an animatable face, body and fingers. There are numerous commercial and research applications that would benefit from being able to simulate a specific (or recognizable) person in a 3D environment, including social VR, virtual try-on, and digital communication. We discuss the past development of this pipeline and its future directions.

Abstract: It is increasingly common to embed embodied, human-like, virtual agents into immersive virtual environments for one of two use cases: (1) populating architectural scenes as anonymous members of a crowd, and (2) meeting or supporting users as individual, intelligent and conversational agents. However, the new trend towards intelligent cyber-physical systems inherently combines both use cases. Thus, we argue for the necessity of multiagent systems consisting of anonymous and autonomous agents who temporarily turn into intelligent individuals. Besides purely enlivening the scene, each agent can thus be engaged by the user in a situation-dependent interaction, e.g., a conversation or a joint task. To this end, we devise components for an agent's behavioral design that model the transition between an anonymous and an individual agent when a user approaches.

  • Leandro Dihl, Estêvão S. Testa, Paulo Knob, Gabriel L. B. da Silva, Rodolfo M. Favaretto, Marlon F. de Alcântara, and Soraia R. Musse: “Generating Cultural Characters based on Hofstede Dimensions”

Abstract: Virtual humans' behaviors can be endowed with different levels of intelligence, and their actions and motion can present more or less realism. In this scope, it is still a challenge to create, in an automatic and easy way, virtual humans that move and behave in a way that seems natural. Cultural aspects and population differences can produce deviations in the speed, density and flow of a crowd. These aspects can be observed in videos from different countries and can certainly produce more realistic agents if incorporated into simulation models. This paper presents a methodology to generate virtual humans whose trajectories are based on real-life video sequences; in addition, we consider Hofstede Cultural Dimensions (HCD) to map cultural differences. The method is tested in a set of real environments, but can be scaled to any environment that has video sequences and tracked human trajectories for training.

  • Julien Pettré, Anne-Hélène Olivier, Julien Bruneau, Alexandre Vu, Laurentius Meerhoff: "How gaze reveals human navigation strategies in crowds"

Abstract: Understanding crowd navigation is important. Current microscopic models address crowds at the level of the interacting agents, but these interactions are often based on arbitrary assumptions. Recent research has set out to validate existing assumptions. One component of such models considers how agents select their interactions among the many available ones. Validation based on motion capture alone does not allow us to deduce an individual's strategy for selecting interactions. We test the hypothesis that gaze fixations reveal with whom an agent is interacting. Using a virtual reality task, we showed that an agent fixates on another agent before making an avoidance maneuver. Moreover, we showed that gaze fixations are drawn towards the agents that require the most immediate adaptation. We thus propose that gaze behavior can provide insights into whom an agent interacts with, which can be used to validate current microscopic models of crowd movements.


    • Invited Speaker: Betty Mohler. "Perception of Avatars in Virtual Reality"

    Dr. Betty Mohler is a W2 Independent Group Leader of Space & Body Perception. She is an expert in immersive virtual reality and uses virtual reality as a tool to investigate how people perceive spatial and dynamic properties of the visual world. Specifically, she has found that sensory information about one's own body influences the way we perceive space and other human bodies in virtual reality.

    Abstract: Many perceptual challenges make it difficult to animate perceptually realistic avatars and virtual characters: humans are very good at identifying human motion; realism in appearance and behavior must match (when it does not, we experience the uncanny valley); and bodies play an important role in perception and action. This talk will cover several experiments that have investigated aspects of self-avatars and human perception of space and bodies. In particular, we have investigated the influence of body shape on perceived strength/power and appeal. Additionally, we have investigated several aspects of the body shape of a self-avatar and how this influences space perception and behavior. Human bodies are extremely important to perception and action.

    1) Fleming, R., Mohler, B., Romero, J., Black, M. J., Breidt, M. (2016). Appealing female avatars from 3D body scans: Perceptual effects of stylization. In 11th Int. Conf. on Computer Graphics Theory and Applications (GRAPP).

    2) Piryankova, I., Stefanucci, J., Romero, J., de la Rosa, S., Black, M. J., Mohler, B. (2014). Can I recognize my body’s weight? The influence of shape and texture on the perception of self. ACM Transactions on Applied Perception (Symposium on Applied Perception), 11(3):13:1–13:18, September 2014.

    Abstract: Dynamic, moving characters are increasingly a part of interactive virtual experiences enabled by immersive display technologies such as head-mounted displays (HMDs). In this new context, it is important to consider the impact their behavior has on user experiences. Here, we explore the role that collision avoidance between virtual agents and the VR user plays in overall comfort and perceptual experience in an immersive virtual environment. Participants in an experiment were asked to walk through a dense stream of virtual agents that may or may not use collision-avoidance techniques to avoid them. When collision avoidance was used, participants took more direct paths, with less jittering or backtracking, and found the resulting simulated motion to be less intimidating, more realistic, and more comfortable.

    Abstract: In this paper, we study the effect of instructional priming on postural responses to virtual crowds using a headset-based virtual reality (VR) platform. Specifically, we instruct VR participants that one of the virtual agents in a simulated crowd represents the movement of a real person, and reinforce this instruction by having a single role player present in the experimental arena. Our results show that while primed VR participants did not move significantly more when three-dimensional movement was considered, they exhibited significantly more movement in the direction perpendicular to the crowd flow, indicating possible collision-avoidance maneuvers. These results indicate that manipulating the instructions given to participants, with the intent of shaping pre-exposure expectations, may be used to increase engagement with virtual crowds.

    • Yinxuan Shi, Jan Ondřej, He Wang and Carol O’Sullivan: “Shape Up! Perception based body shape variation for data-driven crowds”

    Abstract: A representative distribution of body shapes is needed when simulating crowds in real-world situations, e.g., for city or event planning. Visual realism and plausibility are often also required for visualization purposes, and these are the top criteria for crowds in entertainment applications such as games and movie production. Therefore, achieving representative and visually plausible body-shape variation while optimizing available resources is an important goal. We present a data-driven approach to generating and selecting models with varied body shapes, based on body measurement and demographic data from the CAESAR anthropometric database. We conducted an online perceptual study to explore the relationship between body shape, distinctiveness and attractiveness for bodies close to the median height and girth. We found that the most salient body differences are in size and upper-lower body ratios, in particular with respect to shoulders, waist and hips. Based on these results, we propose strategies for body-shape selection and distribution that we have validated with a lab-based perceptual study. Finally, we demonstrate our results in a data-driven crowd system with a perceptually plausible and varied body-shape distribution.

    • Conclusion