Lifelong Learning at Scale (L2S): From Neuroscience Theory to Robotic Applications
Invitees
Tobias Fischer (QUT)
Vincenzo Lomonaco (University of Pisa)
Michael Berry (Princeton)
Rajit Manohar (Yale)
Shantanu Chakrabartty (WUSTL)
Hava Siegelmann (U. of Massachusetts Amherst)
Stefano Fusi (Columbia University)
Topic Leaders
Bodo Rueckauer (Donders Centre for Cognition)
Yulia Sandamirskaya (Intel Labs)
Gert Cauwenberghs (UC San Diego)
Terrence Sejnowski (Salk Institute)
Team
Frederic Broccard, Stephen Deiss, Abhinav Uppal, Soumil Jain (UC San Diego)
Leif Gibb (Salk Institute)
Michael Neumeier (Fortiss)
Justin Kinney, Qingbo Wang (Western Digital Corporation)
Amitava Majumdar, Subhashini Sivagnanam (San Diego Supercomputer Center SDSC)
Emre Neftci (UC Irvine and FZ Jülich)
Garrick Orchard, Andreas Wild, Sumit Shrestha, Danielle Rager, Sumedh Risbud (Intel Labs)
Goals
This topic area studies bio-inspired continual learning – the ability of an agent to acquire useful representations, form memories, and learn new skills from a continual flow of signals generated by observing or acting in an environment. We will review theoretical frameworks of continual learning from the perspectives of Deep Learning, computational neuroscience, and cognitive science, and derive algorithms from these theories that are suitable for implementation on neuromorphic hardware using bio-inspired local plasticity rules. Participants will have access to neuromorphic computing platforms for experimental validation, and will benchmark continually learning neural architectures in robotic tasks, in particular on simulated and real mobile robots.
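To give a flavor of what "local plasticity" means in practice, the toy sketch below (plain NumPy; the network size, rates, and learning rate are illustrative, not any of the workshop's actual algorithms) applies an Oja-style Hebbian update in which each synapse is modified using only its own pre- and postsynaptic activity, so no global error signal is needed:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy network: 10 presynaptic rate neurons projecting to 5 postsynaptic ones.
w = rng.normal(0.0, 0.1, size=(5, 10))

def local_hebbian_step(w, pre, post, lr=0.01):
    """One Hebbian update with an Oja-style decay term. Each synapse w[i, j]
    is changed using only post[i] and pre[j], so the rule is purely local."""
    return w + lr * (np.outer(post, pre) - post[:, None] ** 2 * w)

for _ in range(100):
    pre = rng.random(10)   # presynaptic rates in [0, 1]
    post = w @ pre         # linear postsynaptic response
    w = local_hebbian_step(w, pre, post)
```

The Oja decay term keeps the weights bounded, which is one simple answer to the stability problem that any continually learning system must address.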
Projects
Our projects will focus on three themes.
Theory: Participants will develop improved algorithms for lifelong learning (e.g. using memory consolidation, unsupervised learning, active inference, and self-supervision based on predictive models).
Hardware acceleration: We will explore how to deploy these algorithms on neuromorphic hardware such as Loihi-2, SpiNNaker-2, and other large-scale reconfigurable platforms open to the research community, and perform benchmarks across these platforms.
Application in robotics: We will train simulated and real agents to perform place recognition and navigation tasks while testing their ability to adapt to dynamic environments and deal with unreliable sensors.
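As one concrete starting point for the theory theme, the sketch below illustrates the quadratic weight-consolidation penalty behind methods such as Synaptic Intelligence or Elastic Weight Consolidation. Everything here is a placeholder for illustration (random per-parameter importances, a stand-in task-B loss), not a real model:

```python
import numpy as np

rng = np.random.default_rng(1)
importance = rng.random(4)                 # hypothetical per-parameter importance
w_old = np.array([0.5, -0.2, 0.1, 0.8])   # weights after learning task A

def consolidation_penalty(w, strength):
    """Quadratic penalty pulling important weights back toward their task-A
    values while leaving unimportant ones free to change."""
    return strength * np.sum(importance * (w - w_old) ** 2)

def minimize_task_b(strength, steps=500, lr=0.05):
    """Gradient descent on a stand-in task-B loss (sum of squares, which on
    its own drives all weights to zero) plus the consolidation penalty."""
    w = w_old.copy()
    for _ in range(steps):
        grad = 2 * w + 2 * strength * importance * (w - w_old)
        w -= lr * grad
    return w

free = minimize_task_b(strength=0.0)        # no penalty: task A is forgotten
protected = minimize_task_b(strength=10.0)  # important weights stay near w_old
```

With the penalty switched off, the new task pulls every weight to zero; with it on, each weight settles at a compromise weighted by its importance, which is the basic mechanism these consolidation methods exploit.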
Plan
Week 1: Tutorials and invited talks on bio-inspired (continual) learning mechanisms such as memory consolidation, consistent and predictive representations, attention mechanisms, and the dynamics of neural and synaptic adaptation. Hardware and software setup.
Weeks 2-3: Hands-on project work by participants along with invited talks.
We aim to demonstrate the following project results at the final presentation:
Visual object recognition using the OpenLORIS dataset
Reactive navigation (target reaching, obstacle avoidance) in a dynamic environment
Map-based navigation and map update (SLAM)
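The reactive-navigation demonstration can be grounded in something as simple as a Braitenberg-style controller. The sketch below (the sensor layout, gains, and function names are illustrative assumptions, not the actual robot interface) maps left/right obstacle distances to differential-drive wheel speeds so the robot steers away from the nearer obstacle:

```python
def reactive_velocities(d_left, d_right, v_max=0.3, k=0.02):
    """Differential-drive wheel speeds (m/s) from left/right range readings (m).
    An obstacle on the right slows the left wheel (turn left), and vice versa.
    Distances are clamped at 0.05 m to avoid division blow-up."""
    v_left = v_max - k / max(d_right, 0.05)
    v_right = v_max - k / max(d_left, 0.05)
    return v_left, v_right
```

For example, with open space on the left (2.0 m) and an obstacle close on the right (0.1 m), the left wheel slows relative to the right one and the robot turns away to the left. A learned controller for the workshop task would replace this fixed mapping while keeping the same sense-to-actuate loop.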
Introductory Material
An introduction to continual learning is available as an online course from V. Lomonaco.
The YouTube channel from ContinualAI provides a comprehensive overview of recent developments in this field.
Tutorial on using the Avalanche package for continual learning.
Coursera lectures by Barbara Oakley and Terrence Sejnowski on “Learning How to Learn”.
Tutorials on using Loihi 2 with the new Lava API.
Video lectures and tutorials on bio-plausible learning rules.
Getting-started guide for one of the robotic platforms we will be using.
Hardware and Software Setup
Software:
Lava (open source framework for neuromorphic software development),
spike-based learning tools like snnTorch, BindsNET, or SpyTorch,
the continual learning package Avalanche developed by ContinualAI. The latter also features an extension geared towards continual reinforcement learning.
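Avalanche computes continual-learning metrics for you; the plain-Python sketch below only illustrates the accuracy-matrix bookkeeping behind such benchmarks (the numbers are made up). Entry acc[i][j] is the accuracy on task j after training on tasks 0..i, and "forgetting" is the drop from a task's best earlier accuracy to its final one:

```python
def average_forgetting(acc):
    """Mean drop from each task's best accuracy at an earlier training stage
    to its accuracy after the final task. acc[i][j] = accuracy on task j
    after training on tasks 0..i."""
    final = acc[-1]
    drops = []
    for j in range(len(final) - 1):  # the last task cannot have been forgotten yet
        best = max(acc[i][j] for i in range(j, len(acc) - 1))
        drops.append(best - final[j])
    return sum(drops) / len(drops)

# Illustrative run over three sequential tasks:
acc = [
    [0.95, 0.10, 0.10],  # after task 0
    [0.70, 0.93, 0.10],  # after task 1: task 0 partly forgotten
    [0.60, 0.80, 0.91],  # after task 2
]
```

Tracking this matrix across hardware platforms is what makes the planned cross-platform benchmarking comparable.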
Hardware:
Dynamic Vision Sensor (DVS)
Loihi (both remotely, and with the portable Loihi-1 and Loihi-2 platforms)
Remote access to the Trainable Reconfigurable Development Platform for Large-Scale Neuromorphic Cognitive Computing.
Our main robotic platform will be the UP-board robotic kit, which comes with an Intel Movidius VPU, a depth camera, and allows interfacing with Loihi systems. We will augment this platform with an event-based vision sensor on a pan-tilt unit.
CGX Quick-32r 32-channel dry-electrode electroencephalography (EEG) headset
Reading List
Lesort T, Lomonaco V, Stoian A, Maltoni D, Filliat D, Díaz-Rodríguez N. Continual learning for robotics: Definition, framework, learning strategies, opportunities and challenges. Inf Fusion. 2020;58: 52–68.
Kaiser J, Mostafa H, Neftci E. Synaptic Plasticity Dynamics for Deep Continuous Local Learning (DECOLLE). Front Neurosci. 2020;14: 1–11.
Zenke F, Poole B, Ganguli S. Continual Learning Through Synaptic Intelligence. Proceedings of the 34th International Conference on Machine Learning. 2017. Available: http://arxiv.org/abs/1703.04200
Traoré R, Caselles-Dupré H, Lesort T, Sun T, Cai G, Díaz-Rodríguez N, et al. DisCoRL: Continual Reinforcement Learning via Policy Distillation. 2019. Available: https://sites.google.com/view/deep-rl-workshop-neurips-2019/home
Fischer T, Milford M. Event-Based Visual Place Recognition With Ensembles of Temporal Windows. IEEE Robotics and Automation Letters. 2020;5: 6924–6931.
Cartoni E, Montella D, Triesch J, Baldassarre G. An open-ended learning architecture to face the REAL 2020 simulated robot competition. arXiv [cs.RO]. 2020. Available: http://arxiv.org/abs/2011.13880
Davies, M., Wild, A., Orchard, G., Sandamirskaya, Y., Guerra, G. A. F., Joshi, P., ... & Risbud, S. R. (2021). Advancing neuromorphic computing with Loihi: A survey of results and outlook. Proceedings of the IEEE, 109(5), 911-934.
Marcus K. Benna and Stefano Fusi, “Computational Principles of Synaptic Memory Consolidation,” Nature Neuroscience, vol. 19 (12), pp. 1697-1706, 2016.
German I. Parisi, Ronald Kemker, Jose L. Part, Christopher Kanan, and Stefan Wermter, “Continual Lifelong Learning with Neural Networks: A Review,” Neural Networks, vol. 113, pp. 54-71, 2019.
Gido M. van de Ven, Hava T. Siegelmann, and Andreas S. Tolias, “Brain-Inspired Replay for Continual Learning with Artificial Neural Networks,” Nature Communications, vol. 11, pp. 4069:1-14, 2020.
G. Detorakis, S. Sheik, C. Augustine, S. Paul, B.U. Pedroni, N. Dutt, J. Krichmar, G. Cauwenberghs, and E. Neftci, “Neural and Synaptic Array Transceiver: A Brain-Inspired Computing Framework for Embedded Learning,” Frontiers in Neuroscience, vol. 12, pp. 583:1-19, 10.3389/fnins.2018.00583, 2018.
M. Wagner, T.M. Bartol, T.J. Sejnowski, and G. Cauwenberghs,“ Markov Abstractions of Electrochemical Reaction-Diffusion in Synaptic Transmission for Neuromorphic Computing,” Frontiers in Neuroscience, vol. 15, pp. 698635:1-12, 10.3389/fnins.2021.698635, 2021.
D. Mehta, M. Rahman, K. Aono, and S. Chakrabartty, "An Adaptive Synaptic Array Using Fowler–Nordheim Dynamic Analog Memory," Nature Communications, vol. 13, p. 1670, DOI: 10.1038/s41467-022-29320-6, 2022.
R Manohar, "Hardware/Software Co-Design for Neuromorphic Systems," 2022 IEEE Custom Integrated Circuits Conference (CICC), 2022