Keynotes

Gerard Pons-Moll, Perceiving Systems Department, Max Planck Institute for Intelligent Systems

Thursday July 6th, 13:00

Real Virtual Humans

For man-machine interaction, it is crucial to develop models of humans that look and move indistinguishably from real humans. Such virtual humans will be key for application areas such as medicine and psychology, virtual and augmented reality, and special effects in movies.

Currently, digital models typically lack realistic soft-tissue and clothing dynamics, or they require time-consuming manual editing of physical simulation parameters. Our hypothesis is that better and more realistic models of humans and clothing can be learned directly from real measurements coming from 4D scans, images, and depth and inertial sensors. We combine statistical machine learning techniques and physics-based simulation to create realistic models from data.

I will give an overview of several of our projects in which we build realistic models of human pose and shape, soft-tissue dynamics, and clothing. I will also present a recent technique we have developed to capture human movement from only six inertial sensors attached to the body limbs. This makes it possible to capture human motion during everyday activities, for example while interacting with other people, riding a bike, or driving a car. Such recorded motions will be key to learning models that replicate human behaviour. I will conclude the talk by outlining the next challenges in building avatars that enable social interaction.

Andreas Mühlberger, Clinical Psychology and Psychotherapy, University of Regensburg

Thursday July 6th, 16:00

Virtual Reality Exposure Therapy: Tomorrow's first-choice treatment for anxiety disorders?

Mental disorders are a major challenge for our society. Although treatments are available, further research is needed to enhance their dissemination and efficacy. Thus, one prominent use case of Virtual Reality is to enhance basic research in psychology and applied fields such as psychotherapy. In my presentation, I will introduce exposure therapy in Virtual Reality for the treatment of anxiety disorders. I will present current evidence for its efficacy in different anxiety disorders as well as other mental disorders, along with studies on the mechanisms of action. Besides the use of exposure in Virtual Reality for psychotherapy and the possibilities to enhance treatment efficacy, the presentation will also address the significance of Virtual Reality for basic research on fear, anxiety, and anxiety disorders.

Jari Hietanen, Human Information Processing Laboratory, Faculty of Social Sciences/Psychology, University of Tampere

Friday July 7th, 9:00

Watching eyes: to see and to be seen

Human social interaction is guided by a complex system of perceptual and higher-level cognitive, affective, and motivational processes. One essential perceptual cue regulating interaction between individuals is gaze direction. Gaze conveys information about the direction of attention: a person gazing towards another person signals that his or her attention is directed at that person. The observer of another’s direct gaze, in turn, perceives himself or herself to be a target of the other’s attention.

In my presentation, I will describe studies showing the effects of another’s direct gaze on behavioural and various psychophysiological responses indexing attention, arousal, motivation, and cognitive processing. Interestingly, in many of our own studies, the effects of direct gaze have been observed only when participants were looking at the “live” face of another person, not when the same face was presented as a picture on a computer monitor. These findings suggest that the effects of another’s direct gaze depend on the observer’s attribution of being a target of another individual’s mind.

Catherine Pelachaud, CNRS - ISIR, University of Pierre and Marie Curie

Friday July 7th, 13:00

Socio-emotional conversational agents

I will present several of our works on modeling social and emotional behaviors for embodied conversational agents. For human participants to interact with such agents, the agents ought to be endowed with a large lexicon of multimodal behaviors. To characterize these behaviors, we have applied various methodologies, such as user-centered design and methods relying on corpus analysis. For the latter, we have gathered a corpus of human experts explaining their domain to novices. This corpus of more than 80 interactions is annotated at different levels, from high-level impressions such as competence or social attitude down to multimodal behaviors. By applying sequence mining, we extract behavior patterns involved in changing the perception of an attitude or of competence. In this talk, I will also present our embodied virtual agent platform Greta/VIB, in which these works are implemented.