Talk title: Architecting Interaction: Towards a Human-Aware Cognitive Architecture
Talk abstract: Traditionally, the fields of Cognitive Robotics and Social Robotics/HRI have evolved along parallel, often disconnected trajectories. Most cognitive architectures focus on creating autonomous agents capable of intelligent behavior in isolation, with interaction treated as an optional extension. However, insights from developmental psychology and neuroscience challenge this assumption. Human cognition is inherently social—shaped from the ground up by the need to live, act, and learn within a social group. Our perceptual, motor, memory, and decision-making systems are not just capable of social interaction; they are optimized for it.
This perspective invites a fundamental shift in how we design cognitive architectures for robots: interaction is not an add-on, but a core mechanism for sense-making, prediction, and adaptation. Embodied communication, affective processing, and internal-state modeling are not merely features of HRI; they are cognitive necessities for agents meant to operate in human environments.
In this talk, we propose a principled framework for a human-aware cognitive architecture, where sociality is treated as a foundational constraint rather than a peripheral concern. By embracing interaction as a structural element of cognition, we aim to foster a higher degree of cognitive compatibility between humans and robots, ultimately enabling machines that are not just intelligent, but meaningfully integrated into our social and cognitive spaces.
Alessandra Sciutti is a Tenure Track Researcher and head of the CONTACT (COgNiTive Architecture for Collaborative Technologies) Unit at the Italian Institute of Technology (IIT). She received her B.S. and M.S. degrees in Bioengineering and her Ph.D. in Humanoid Technologies from the University of Genova in 2010. After two research periods in the USA and Japan, she was awarded the ERC Starting Grant wHiSPER (www.whisperproject.eu) in 2018, a project investigating joint perception between humans and robots. She has published more than 100 papers and abstracts in international journals and conferences, coordinates the ERC PoC project ARIEL (Assessing Children Manipulation and Exploration Skills), and has participated in the coordination of the CODEFROR European IRSES project (https://www.codefror.eu/). She is currently Chief Editor of the HRI Section of Frontiers in Robotics and AI and Associate Editor for several journals, including the International Journal of Social Robotics, the IEEE Transactions on Cognitive and Developmental Systems, and Cognitive Systems Research. She is an ELLIS scholar and the corresponding co-chair of the IEEE RAS Technical Committee for Cognitive Robotics. Her research investigates the sensory and motor mechanisms underlying mutual understanding in human-human and human-robot interaction. For more details on her research and the full list of publications, please check the CONTACT Unit website or her Google Scholar profile.
Talk title: Embodied Predictive World Models for Adaptive Cognitive Robotics
Talk abstract: Contemporary robots often learn skills in narrow, pre-defined contexts, which limits their ability to adapt to new situations. Insights from embodied cognition suggest a different path: intelligence emerges not from abstract computation alone, but from continuous sensorimotor engagement with the world. In this view, the body is not just an actuator—it is the medium through which perception, prediction, and reasoning co-develop. This talk will explore how predictive world models, inspired by the hierarchical and modular organization of the human neocortex, can integrate proprioceptive, visual, and tactile streams into unified, adaptive representations. Learned through intrinsic motivation rather than task-specific rewards, these models anticipate future states, adapt strategies, and reuse skills in unfamiliar contexts—demonstrating transfer, flexibility, and robustness. The discussion will highlight how embodied agents can autonomously acquire versatile repertoires of motor and cognitive behaviors—such as dexterous manipulation, coordinated gaze control, and sensorimotor synergies—without explicit supervision. By grounding learning in prediction and exploration, such systems develop richer internal models that are inherently generalizable. We will examine the cognitive principles driving this approach, present empirical results on humanoid platforms, and outline broader implications for developmental robotics, showing how deeply embedding cognition in the body can yield robots that not only perform tasks but also continuously grow their capabilities.
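
To make the idea of intrinsically motivated world-model learning concrete, here is a minimal sketch in Python/PyTorch. It is an illustration, not the speaker's system: the ForwardModel class, the toy dimensions, and the curiosity signal are assumptions standing in for the fused proprioceptive, visual, and tactile features described above. The key point is that a single prediction error both trains the world model and serves as an intrinsic reward, so the agent is drawn toward what it cannot yet predict.

    import torch
    import torch.nn as nn

    class ForwardModel(nn.Module):
        """Predicts the next fused sensory state from the current state and action."""
        def __init__(self, obs_dim: int, act_dim: int, hidden: int = 128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(obs_dim + act_dim, hidden),
                nn.ReLU(),
                nn.Linear(hidden, obs_dim),
            )

        def forward(self, obs: torch.Tensor, act: torch.Tensor) -> torch.Tensor:
            return self.net(torch.cat([obs, act], dim=-1))

    # Toy dimensions standing in for fused proprioceptive/visual/tactile features.
    obs_dim, act_dim = 16, 4
    model = ForwardModel(obs_dim, act_dim)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    obs = torch.randn(32, obs_dim)        # current observations (one batch)
    act = torch.randn(32, act_dim)        # actions taken
    next_obs = torch.randn(32, obs_dim)   # observed outcomes

    pred = model(obs, act)
    error = ((pred - next_obs) ** 2).mean(dim=-1)  # per-sample prediction error
    intrinsic_reward = error.detach()              # curiosity: seek the poorly predicted
    loss = error.mean()                            # world-model training objective
    opt.zero_grad()
    loss.backward()
    opt.step()

In a full system the intrinsic_reward tensor would drive an exploration policy, closing the loop between prediction and action; here it is only computed to show where the curiosity signal comes from.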
Esther Colombini is a professor of Robotics and Artificial Intelligence at the University of Campinas (Unicamp). She holds a Ph.D. and an MSc degree in Computer Engineering from the Technological Institute of Aeronautics (ITA), and a bachelor's degree in Computer Science from the Universidade Federal da Paraíba (UFPB), completed in part at the University of Leeds. She coordinates the Laboratory of Robotics and Cognitive Systems (LaRoCS) at Unicamp. Her research focuses on Robotics, Cognitive Systems, Attention and Machine Consciousness, Machine Learning, Artificial Intelligence, and Robotics in Education.
Talk title: Predictive Coding Light
Talk abstract: Current machine learning systems consume vastly more energy than biological brains. Neuromorphic systems aim to close this gap by mimicking the brain’s information coding via discrete voltage spikes. However, it remains unclear how both artificial and natural networks of spiking neurons can learn energy-efficient information processing strategies. Here we propose Predictive Coding Light (PCL), a recurrent hierarchical spiking neural network for unsupervised representation learning. In contrast to previous predictive coding approaches, PCL does not transmit prediction errors to higher processing stages. Instead, it suppresses the most predictable spikes and transmits a compressed representation of the input. Using only biologically plausible spike-timing-based learning rules, PCL reproduces a wealth of findings on information processing in the visual cortex and achieves strong performance in downstream classification tasks. Overall, PCL offers a new approach to predictive coding and its implementation in natural and artificial spiking neural networks.
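
The core mechanism (suppressing predictable spikes rather than propagating prediction errors) can be caricatured in a few lines of Python/NumPy. The sketch below is an illustrative assumption, not the PCL algorithm: it uses fixed random weights and a sigmoid recurrent predictor, and it omits the spike-timing-based learning rules entirely. It only shows how a layer might gate its own output so that just the surprising spikes are transmitted upward.

    import numpy as np

    rng = np.random.default_rng(0)

    n_in, n_out, T = 64, 32, 200
    W = rng.normal(0, 0.3, size=(n_out, n_in))   # feedforward weights (illustrative)
    P = rng.normal(0, 0.1, size=(n_out, n_out))  # recurrent predictor of own activity

    def step(x_spikes, prev_out, threshold=1.0, predictable=0.8):
        """One time step: fire, then suppress spikes the recurrent predictor expected."""
        drive = W @ x_spikes
        fired = (drive > threshold).astype(float)
        # Predict which neurons should fire from the previous output pattern.
        predicted = 1.0 / (1.0 + np.exp(-(P @ prev_out)))  # sigmoid prediction
        # Transmit only the surprising (poorly predicted) spikes upward.
        transmitted = fired * (predicted < predictable)
        return fired, transmitted

    prev = np.zeros(n_out)
    sent = 0.0
    for t in range(T):
        x = (rng.random(n_in) < 0.1).astype(float)  # Poisson-like input spikes
        fired, out = step(x, prev)
        prev = fired
        sent += out.sum()
    print(f"spikes transmitted per step: {sent / T:.2f}")

Under this reading, the fewer spikes a layer transmits, the cheaper and more compressed its code; in the actual PCL model the predictor is learned with biologically plausible rules rather than fixed at random.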
Jochen Triesch is the Johanna Quandt Chair for Theoretical Life Sciences at the Frankfurt Institute for Advanced Studies (FIAS). He also holds professorships at the Dept. of Physics and the Dept. of Computer Science and Mathematics at Goethe University Frankfurt. Before joining FIAS in 2005, he was an Assistant Professor at UC San Diego, USA. Originally trained as a physicist, he discovered his passion for studying the brain and building brain-like artificial intelligence during his graduate education. He is fascinated by the question of how biological nervous systems can learn so much more autonomously than today’s Artificial Intelligence systems. He uses embodied computational modeling to shed light on the mechanisms that brains use to explore and understand the world around them. A particular focus of his research is the development of cognitive abilities in human infants. He believes that trying to re-create human cognitive development in artificial systems will provide important insights into the human mind and consciousness.