Title: Biological continual learning by contextual inference
Abstract: Humans are experts at continual learning: we spend a lifetime learning, storing and refining a repertoire of memories. However, it is unknown what principles underlie how our continuous stream of sensorimotor experience is segmented into separate memories, and how we adapt and use this growing repertoire. Here we develop a theory of continual learning based on the key principle that memory creation, updating and expression are all controlled by a single computation — contextual inference. Our theory reveals that adaptation can arise both by creating and updating memories (proper learning) and by changing how existing memories are differentially expressed (apparent learning). By instantiating these insights in the specific domain of motor learning, our theory accounts for key empirical phenomena that had no unified explanation: spontaneous recovery, savings, anterograde interference, the effects of environmental consistency on learning rate, and the distinction between explicit and implicit learning. Critically, our theory also predicts new phenomena — evoked recovery and context-dependent single-trial learning — which we confirm experimentally. These results suggest that contextual inference is a key component of biological continual learning.
Title: Virtual reality reveals how neurons create universal abstractions
Abstract: All animals must have abstract concepts such as space, time and events. These abstractions must be universal across species; otherwise, animals would collide and animate life would end. Further, they must be formed flexibly, in a rapidly changing environment, often on the first attempt, e.g. during a predator–prey chase or when finding one's way back to the nest after foraging. How does the brain form these abstractions rapidly and flexibly? We addressed this by developing a novel virtual reality for rodents that is immersive, has zero lag, and works equally well in humans. We then measured the activity of thousands of well-isolated neurons from a dozen brain areas — from close to the retina to the deepest part of the brain, including the hippocampus, which is crucial for rapid, one-shot learning. Machine learning techniques and explainable, biophysical theories revealed the algorithms of universal abstractions across the deepnet of the brain and across diverse behaviors. Surprisingly, the fundamental computations in the brain are hybrid (analog + digital). The findings are relevant for building brain-like AI and for the diagnosis and treatment of learning and memory disorders.
References:
‘Linking Hebbian synaptic plasticity, hippocampal activity and navigational performance’. Nature 599, 442–448 (2021). https://www.nature.com/articles/s41586-021-03989-z
‘Mega-scale representation of predictive sequences across cortico-hippocampal system’. eLife 12, RP85069 (2023). https://elifesciences.org/articles/85069
‘Dynamics of cortical dendritic membrane potential and spikes during natural behavior’. Science 355, eaaj1497 (2017). https://www.science.org/doi/10.1126/science.aaj1497
More information at https://mayank.pa.ucla.edu/
Title: Mind meets machine: Harnessing cognitive neuroscience theory and methods to advance social robotics
Abstract: Understanding how we perceive and interact with others is a core challenge of social cognition research. This challenge is poised to intensify as artificial intelligence becomes ubiquitous and humanoid robots grow more present in society. As embodied artificially intelligent agents (or social robots, as we’ll call them here) advance from the pages and screens of science fiction into our homes, hospitals, and schools, they will take on increasingly social roles. Consequently, the need to understand the mechanisms supporting human-machine interactions is becoming increasingly pressing, and progress will require contributions from the social, cognitive and brain sciences. This talk examines how established theories and methods from psychology and neuroscience are revealing fundamental aspects of how people perceive, interact with, and form social relationships with robots. I will also focus on a recently introduced framework for studying the cognitive and brain mechanisms that support human-machine interactions, which leverages advances made in social cognition and cognitive neuroscience to link different levels of description with relevant theory and methods. Also highlighted are unique features that make this endeavour particularly challenging (and rewarding) for brain and behavioural scientists. Overall, the framework offers a way to conceptualize and study the cognitive science of human-machine interactions that respects the diversity of social machines, individuals' expectations and experiences, and the structure and function of multiple cognitive and brain systems.
Title: AI in Basic and Clinical Neuroscience
Abstract: Traditionally, neural networks have been used for data analysis and as models of the mind and the brain. Both areas have made historically significant contributions. For example, connectionism as a model of the brain has helped cognitive psychologists understand many computational principles in language acquisition, memory and the control of action. Although modern AI inherited many fundamental structures and features from conventional neural network models, its current applications in neuroscience have primarily been in data analysis, e.g. of MRI data. In this talk, I will show how modern AI can contribute both to data analysis in neuroscience and to theorising about the computational principles implemented by the brain. Examples in this presentation will cover both basic and clinical neuroscience, including brain imaging, computational linguistics and speech recognition.
Title: Trust in AI: Lessons from Identifying People, Places and Things
Abstract: Trust has emerged as a key concept in understanding how we interact with intelligent technology. A fundamental question is whether trust in technology is an extension of interpersonal trust or a distinct phenomenon altogether. One challenge in addressing this question is the wide range of disciplines that study trust, each bringing its own perspective. While the broad definition of trust as “accepting vulnerability with the expectation of benefit” provides a helpful foundation, it remains important to consider whether that expected benefit stems from perceived competence or benevolence. Put another way: do we view intelligent technology as a tool, or as a teammate?
In this talk, I will review findings from our lab that examine human interactions with various intelligent systems designed to assist in the visual identification of people, places, and things. The roles of transparency, perceived expertise, and anthropomorphism will be discussed.