Title: To Err is Robot
Summary: In this talk, Manuel will present six years of research on what happens when things go wrong in human-robot interaction. Using large video corpora, Manuel and his team were able to show that humans display particular social signals during erroneous interactions with robots. This knowledge can be used to train models on visual data to detect situations in which something has gone wrong.
Bio: Manuel Giuliani is Professor in Embedded Cognitive AI for Robotics at the University of the West of England and Co-Director of the Bristol Robotics Laboratory (BRL). At BRL, he leads the ECHOS group (Embodied Cognition for Human-RObot InteractionS). He received a Master of Arts in computational linguistics from Ludwig Maximilian University of Munich, and a Master of Science and a PhD in computer science from the Technical University of Munich.
Before moving to Bristol, Manuel worked at the Technical University of Munich, the research transfer institute Fortiss, and the Center for Human-Computer Interaction at the University of Salzburg, where he led the Human-Robot Interaction group.
Manuel's research interests include human-robot interaction, social robotics, robots for nuclear decommissioning, natural language processing, multimodal fusion, multimodal output generation, augmented and virtual reality interfaces, and embedded cognitive robot architectures.
Web: Manuel's UWE webpage
Title: Interactive Repair in Human Conversation and Beyond
Summary: Much of the resilience and flexibility of human communication derives from the rapid back-and-forth of turn-taking and the availability of interactive repair. Interactive repair is ubiquitous in human interaction, occurring on average every 84 seconds in informal conversations in languages around the world (Dingemanse et al. 2015). Far from being a mere remedial procedure, interactive repair is used to live-edit the flow of information, calibrate mutual understanding, and coordinate joint action. In this talk I review some of the fundamental properties of the repair system as they emerge from empirical work on human interaction. A key challenge of interactive repair is that any element of an utterance may be a cause of trouble or a target for repair, resulting in a combinatorial explosion of possibilities that people navigate expertly but that is hard to do justice to in dialogue models. I review some principles that seem to govern people's use of the repair system, including a principle of Specificity, by which people cooperate to formulate the most helpful repair initiations and solutions, and a Three-Strikes principle that seems to put a soft limit on successive repair initiations. While today's language models may allow us to paper over deficits in understanding by generating coherent and confident prose, conversational user interfaces usually need to go beyond simulating conversation and would ideally support modes of interaction that approach the resilience and flexibility of human language use. My main goal in this talk is to review some of the ways in which the empirical study of human interaction can contribute towards this goal.
Dingemanse, Mark, Seán G. Roberts, Julija Baranova, Joe Blythe, Paul Drew, Simeon Floyd, Rosa S. Gisladottir, et al. 2015. “Universal Principles in the Repair of Communication Problems.” PLOS ONE 10 (9): e0136100. https://doi.org/10.1371/journal.pone.0136100.
Bio: Mark Dingemanse is a linguist based at the Centre for Language Studies at Radboud University in Nijmegen, The Netherlands. His work focuses on how language is shaped by and for social interaction, and how technology can augment and constrain human agency. He was awarded the 2020 Heineken Young Scientists Award in the Humanities for his transdisciplinary work on the structure of social interaction, but the real WTF moment was probably when his team won a 2015 Ig Nobel Prize for discovering that 'Huh?' may be a universal word found in roughly the same form and function in languages around the globe.
Title: Safer Conversational AI
Summary: With continued progress in deep learning, there is increasing activity in learning interactive dialogue behaviour from data, also known as “Conversational AI”. I argue that in order to responsibly apply generative models in user-facing Conversational AI, we need to ensure that their outputs can be trusted, that they are safe to use, and that their design is free of bias. I will highlight several methods developed in my research team at Heriot-Watt University that are starting to address these issues -- including reducing ‘hallucinations’ in task-based systems, mitigating safety-critical issues in open-domain chatbots, and the often-overlooked problem of anthropomorphic design. I will conclude with lessons learnt and upcoming challenges.
Bio: Verena is a Senior Staff Research Scientist at Google DeepMind, where she works on Safer Conversational AI. She is also an honorary professor at Heriot-Watt University in Edinburgh and a co-founder of the Conversational AI company ALANA.
She has 20 years of experience in developing and researching data-driven conversational systems. In the early 2000s she developed a series of breakthrough innovations that laid the groundwork for statistical dialogue control using Reinforcement Learning. More recently, Verena and her team pioneered work on identifying and addressing safety risks in neural conversational systems, for which she was awarded a Leverhulme Senior Research Fellowship by the Royal Society.
Verena holds a PhD from Saarland University in Germany and an MSc from the University of Edinburgh, where she also spent time as a postdoctoral researcher.
Web: Verena's research profile at Heriot-Watt