Keynote Speakers
================
"Sharing perception with a robot" by Alessandra Sciutti
Abstract: For robots to become an effective component of our society, these agents must become primarily cognitive systems, endowed with a cognitive architecture that enables them to adapt, predict, pro-actively interact with the environment, and communicate with human partners. Our contribution to the roadmap toward cognitive systems leverages a humanoid robot (iCub) to test some of our assumptions on how to build a cognitive interactive agent. We attempt to model the minimal skills necessary for cognitive development, focusing on the visual features that make it possible to recognize the presence of other agents in the scene, their internal state, and the way they perceive the world. In a dual approach, we are trying to understand how to modulate robot behavior to elicit better human understanding and to express different characteristics of the interaction, from mood to level of commitment. This approach is preparatory to the creation of a cognitive system, as it helps define what is relevant to attend to, starting from signals originating in the intrinsic characteristics of the human body. We believe that only a structured effort toward cognition will in the future allow for more humane machines, able to see the world and people as we do and to engage with them in a meaningful manner.
Bio: Alessandra Sciutti is a Tenure Track Researcher and head of the CONTACT (COgNiTive Architecture for Collaborative Technologies) Unit at the Italian Institute of Technology (IIT). She received her B.S. and M.S. degrees in Bioengineering and her Ph.D. in Humanoid Technologies from the University of Genova in 2010. After two research periods in the USA and Japan, in 2018 she was awarded the ERC Starting Grant wHiSPER (www.whisperproject.eu), focused on the investigation of joint perception between humans and robots. She has published more than 80 papers and abstracts in international journals and conferences and participated in the coordination of the CODEFROR European IRSES project (https://www.codefror.eu/). She is currently an Associate Editor for several journals, among which are the International Journal of Social Robotics, the IEEE Transactions on Cognitive and Developmental Systems, and Cognitive Systems Research. The scientific aim of her research is to investigate the sensory and motor mechanisms underlying mutual understanding in human-human and human-robot interaction. For more details on her research, as well as the full list of publications, please check the CONTACT Unit website or her Google Scholar profile.
"Breathing life into social robots using AI" by Tony Belpaeme
Abstract: About half of the social robots used in research studies still rely on Wizard-of-Oz control, predominantly because the artificial intelligence used to drive autonomous interaction still falls short. This talk will present a number of data-driven approaches to generating interactive behaviour and concludes by arguing that social interaction is one of the biggest challenges faced by AI.
Bio: Tony Belpaeme is a Professor at Ghent University and Visiting Professor at the University of Plymouth, UK. He received his PhD in Computer Science from the Vrije Universiteit Brussel (VUB) and currently leads a team studying cognitive robotics and human-robot interaction. Starting from the premise that intelligence is rooted in social interaction, the team tries to further the science and technology behind artificial intelligence and social human-robot interaction.
"A window into your soul? Promise and pitfalls of automatic facial expression analysis" by Jonathan Gratch
Abstract: Affective computing is a field that is exploding in both commercial and scientific interest, but also in controversy. Companies offer to classify whether someone is happy or sad from a snippet of audio or video, and use these inferences to predict fraud, consumer purchases, or mental well-being. Some claim this is a dangerous, invasive technology that will undermine privacy and individual freedoms. Others argue it is “AI snake oil” with no basis in scientific reality. In this talk, I will first review the findings of a recent survey of how emotion recognition technology is marketed commercially and discuss the scientific basis, or lack thereof, for these claimed uses. I will then review research in our lab on the ways emotional expressions can reveal mental state and allow predictions of future human behavior. These findings emphasize the contextual nature of emotional displays and highlight how context is often ignored in many proposed uses of automatic expression recognition. I will discuss the implications of these findings for human-machine interaction.
Bio: Jonathan Gratch is a Research Full Professor of Computer Science, Psychology, and Media Arts and Practice at the University of Southern California (USC) and Director for Virtual Human Research at USC’s Institute for Creative Technologies. He completed his Ph.D. in Computer Science at the University of Illinois at Urbana-Champaign in 1995. Dr. Gratch’s research focuses on computational models of human cognitive and social processes, especially emotion, and explores these models’ role in advancing psychological theory and in shaping human-machine interaction. He is the founding Editor-in-Chief (retired) of the IEEE Transactions on Affective Computing, founding Associate Editor of Affective Science, Associate Editor of Emotion Review and the Journal of Autonomous Agents and Multiagent Systems, and a former President of the Association for the Advancement of Affective Computing. He is a Fellow of AAAI, AAAC, and the Cognitive Science Society, and an ACM SIGART Autonomous Agents Award recipient.