Modelling Intended Impact of Assistive Interactions
Explainability is first and foremost grounded in social interaction. While it is important to research transparent algorithms, understand causal attributions, and design expressive interfaces when creating explainable agents, the ultimate goal will always be to understand how to achieve a certain effect on a human perceiver. The need for an explanation only arises when part of the human's world model is flawed. When deciding when, what, and how to communicate, it is therefore useful to incorporate inference of human cognitive processes into an agent's behavior planning, in order to assess the possible impact on this world model.
In this talk, I will present research across different applications that demonstrates how this can be approached and used for human-agent interactions. The talk will touch on work incorporating the predicted impact of robots' and automated vehicles' actions on human beliefs, behavior policies, mutual understanding, and situational cost.
Thomas H. Weisswange is a Principal Scientist at the Honda Research Institute Europe in Offenbach, Germany. He has a strongly interdisciplinary background covering bioinformatics, computational neuroscience, intelligent transportation, machine learning, technology ethics, and human-robot interaction. Thomas’ current research projects address robot interactions with groups, human-robot cooperation, theory-of-mind, and intelligent systems design.
Toward Understandable Robots through Human-Inspired Cognition
As robots increasingly operate in close interaction with humans, interpretability becomes a central requirement for effective and trustworthy human–robot interaction. Interpretability should emerge from the robot’s embodied cognitive processes and from the dynamics of interaction. Drawing inspiration from human cognitive development, we adopt an embodied approach to interpretability in which robots acquire transparent and predictable behavior through memory, anticipation, and adaptation. By grounding robot cognition in models of human perception and action, robots can interpret non-verbal cues such as motion dynamics, timing, and effort, while simultaneously expressing their own intentions in ways that are intuitively legible to human partners. We show how robots can serve both as interactive partners and as experimental probes to investigate the mechanisms underlying human social understanding. Through embodied communication and developmental principles, interpretability becomes an intrinsic property of interaction, supporting mutual understanding, long-term adaptation, and trust.
Alessandra Sciutti is a Senior Tenure Track Researcher and head of the CONTACT (COgNiTive Architecture for Collaborative Technologies) Unit of the Italian Institute of Technology (IIT). She received her B.S. and M.S. degrees in Bioengineering and her Ph.D. in Humanoid Technologies from the University of Genova in 2010. After two research periods in the USA and Japan, in 2018 she was awarded the ERC Starting Grant wHiSPER (www.whisperproject.eu), focused on the investigation of joint perception between humans and robots. She has published more than 100 papers and abstracts in international journals and conferences, coordinates the ERC PoC Project ARIEL (Assessing Children's Manipulation and Exploration Skills), and has participated in the coordination of the CODEFROR European IRSES project (https://www.codefror.eu/). She is currently Chief Editor of the HRI Section of Frontiers in Robotics and AI and Associate Editor for several journals, including the International Journal of Social Robotics and Cognitive Systems Research. She is an ELLIS scholar and the corresponding co-chair of the IEEE RAS Technical Committee for Cognitive Robotics. Her research aims to investigate the sensory and motor mechanisms underlying mutual understanding in human-human and human-robot interaction.
Cognitive and Developmental Robotics: The Importance of Starting Small
This talk introduces the concept of Cognitive Robotics, i.e. the field that brings insights and methods from AI, as well as from the cognitive and biological sciences, to robotics (cf. Cangelosi & Asada 2022; the book is available open access). This is a highly interdisciplinary approach that sees AI computer scientists and roboticists collaborating closely with psychologists and neuroscientists. In particular, we will focus on Cognitive Developmental Robotics, which models the incremental learning and developmental acquisition of cognitive and sensorimotor skills in robots.
We will use the case study of language learning to demonstrate this highly interdisciplinary field, presenting developmental psychology studies on children’s language acquisition alongside robot experiments on language learning. One study focuses on the embodiment biases in early word acquisition and grammar learning. The same developmental robotics method is used for experiments on pointing gestures and finger counting, allowing robots to learn abstract concepts such as numbers. We will then present novel developmental robotics models, and human-robot interaction experiments, on Theory of Mind and its relationship to trust. This considers both people’s Theory of Mind of robots’ capabilities and robots’ own ‘Artificial Theory of Mind’ of people’s intentions. Results show that trust and collaboration are enhanced when we can understand the intentions of other agents and when robots can explain their decision-making strategies to people.
The implications of such cognitive robotics approaches for embodied cognition in AI and the cognitive sciences will be discussed. Moreover, the talk will discuss the approach of “starting small” (Elman 1993) and what it implies about the current limitations of foundation models and LLMs/VLAs as cognitive models for language learning and understanding in AI and robots.
Angelo Cangelosi is Professor of Machine Learning and Robotics at the University of Manchester (UK) and co-director and founder of the Manchester Centre for Robotics and AI. He was awarded a European Research Council (ERC) Advanced Grant (UKRI funded). His research interests are in cognitive and developmental robotics, neural networks, language grounding, human-robot interaction and trust, and robot companions for health and social care. Overall, he has over 400 publications and has secured £40m of research grants as coordinator/PI/co-I, including the ERC Advanced eTALK, the EPSRC CRADLE Prosperity, the US AFRL project CASPER++, and five ongoing Horizon RIA and MSCA grants. He is Editor-in-Chief of the journals Interaction Studies and IET Cognitive Computation and Systems, and in 2015 was Editor-in-Chief of IEEE Transactions on Autonomous Mental Development. He has chaired numerous international conferences, including ICANN2022 Bristol and ICDL2021 Beijing. His book “Developmental Robotics: From Babies to Robots” (MIT Press) was published in January 2015 and translated into Chinese and Japanese. His latest book, “Cognitive Robotics” (MIT Press), co-edited with Minoru Asada, was published in 2022 and translated into Chinese in 2025.