NC23: Neural Learning for Control
Topic leaders
Chang Gao (TU Delft, Email: chang.gao [at] tudelft.nl)
Yulia Sandamirskaya (Intel & ZHAW, Email: yulia.sandamirskaya [at] intel.com)
Tobi Delbruck (UZH-ETH Zurich)
Invited and key participants
Antonio Rios Navarro (U Seville)
Christian Mayr (TU Dresden) (week 1)
Jing Shuang (Lisa) Li (Caltech) (week 2)
Noel Csomay-Shanklin (Caltech) (week 1)
Marcin Paluch (UZH-ETH Zurich)
Qinyu Chen (UZH & Leiden)
Invited talks
(by Zoom) Prof. Guido de Croon (TU Delft, Netherlands): "Neuromorphic sensing and processing for tiny autonomous drones"
Prof. Christian Mayr (TU Dresden): "Beyond strictly neuromorphic: Hybrid AI and non-cognitive computing on SpiNNaker2"
Prof. Joel Burdick (Caltech): "TBD"
Jing Shuang (Lisa) Li (Doyle lab, Caltech): "How SLS can help MPC (and what are those acronyms anyhow?)" (blackboard talk), and "Control theory for neuroscience: from internal feedback to legged locomotion" (part of Comp Neuro series)
Noel Csomay-Shanklin (AMBER lab, Caltech): "A Hierarchical Perspective on Control"
(by Zoom) Prof. Bing Brunton (Brunton Lab, U Washington): "Tracking turbulent odor plumes with deep reinforcement learning"
Prof. Tobi Delbruck (Sensors Group, UZH-ETH Zurich): "Experiments in Neural Optimal Control" (part of Comp Neuro series)
Prof. Antonio Rios Navarro (U Seville): "Hardware neural network nonlinear optimal control of physical cartpole"
Focus and Goals
The NC23 topic area focuses on applying existing learning approaches, and developing new ones, for movement control. We put the emphasis on closing the perception-action loop with neuromorphic AI software and hardware – an important gap on the way to neuromorphic (embodied) cognition.
The main goals are to use activity-driven neural networks and event-based computing in nonlinear control tasks and to demonstrate the potential of neuromorphic hardware for
state estimation (using event-based vision and depth sensing along with motor sense and IMUs),
model learning, e.g., internal models such as forward dynamics and inverse kinematics, and external models such as maps,
movement control: model-predictive control, adaptive PID, neural imitation control, adaptive neural control, or other, more biologically inspired control methodologies.
We will put all these elements together in simple, but real robotic demonstrations.
We will organize projects on neural learning in a range of control tasks, such as cartpole control, f1tenth driving, and robotic arm control.
We will provide a variety of neuromorphic hardware computing and perception platforms, including EdgeSpartus [2], EdgeDRNN [3], Loihi 2 [4], and SpiNNaker 2 [5] for local participants and Loihi 2 cloud access for remote participants, as well as some DAVIS event cameras.
Additionally, we plan to hold a hybrid cartpole control competition and to attract neuromorphs to form a Telluride team for the F1Tenth competition. We look forward to continuing the development of projects beyond the lifetime of the workshop and encourage publications that acknowledge the Telluride workshop.
Projects
Cartpole Control Competition with Spiking Neural Networks on Loihi 2: We will hold a competition for local & remote participants to train their spiking neural networks to control a cartpole in simulation (with a possibility to also test policies on the physical setup). The SNNs can run on different neuromorphic hardware platforms, such as Loihi 2, SpiNNaker 1 or 2, DYNAPSE, etc. Local participants can access physical Loihi 2 chips, while remote participants will be given access to Loihi 2 on the cloud. Rankings will be determined by the time taken to swing up the cartpole and by system-level power consumption.
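To give a flavor of the competition task, the following minimal Python sketch simulates the frictionless cartpole and scores a policy by its swing-up time. All constants (masses, pole length, success thresholds) are illustrative assumptions, not the official competition settings.

```python
import math

# Hedged sketch of a cartpole swing-up scoring harness.
# All parameters below are illustrative, not the competition's settings.
G, M_CART, M_POLE, L = 9.8, 1.0, 0.1, 0.5  # SI units; L is half pole length

def step(state, force, dt=0.02):
    """One Euler step of the frictionless cartpole (theta = 0 is upright)."""
    x, x_dot, th, th_dot = state
    total = M_CART + M_POLE
    pm_l = M_POLE * L
    tmp = (force + pm_l * th_dot ** 2 * math.sin(th)) / total
    th_acc = (G * math.sin(th) - math.cos(th) * tmp) / (
        L * (4.0 / 3.0 - M_POLE * math.cos(th) ** 2 / total))
    x_acc = tmp - pm_l * th_acc * math.cos(th) / total
    return (x + dt * x_dot, x_dot + dt * x_acc,
            th + dt * th_dot, th_dot + dt * th_acc)

def swing_up_time(policy, horizon=2000, dt=0.02):
    """Return time (s) until the pole first comes near upright, or inf."""
    state = (0.0, 0.0, math.pi, 0.0)  # start hanging straight down
    for t in range(horizon):
        wrapped = math.atan2(math.sin(state[2]), math.cos(state[2]))
        if abs(wrapped) < 0.2 and abs(state[3]) < 1.0:
            return t * dt
        state = step(state, policy(state), dt)
    return float('inf')

# A do-nothing policy never leaves the hanging equilibrium:
print(swing_up_time(lambda s: 0.0))  # inf
```

A competition entry would replace the lambda with a trained SNN policy (plus a measured power number); the harness itself only illustrates the timing half of the score.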
Cartpole robot: The cartpole is excellent for benchmarking comparisons between optimal control methods and is a great benchtop system for continual adaptation and RL experiments. Our cartpole simulation framework will be the basis for local and remote experiments and transfer learning to the physical cartpole we will bring. We will be able to drive the cartpole robot from a USB-powered FPGA board that can implement an EdgeDRNN and/or EdgeSpartus neuromorphic RNN accelerator.
F1Tenth racing: F1Tenth is an established international robotics car-racing competition that explores the limits of competitive time-trial and head-to-head racing in simulation and reality. What can neuromorphs bring to the table, and can neuromorphs compete in this league? We now have two years of experience with the L2RACE environment. Based on this, we set up the F1Tenth AI Gym simulation environment for F1Tenth racing and have built a high-performance, rugged F1Tenth race car called Inivincible. It runs ROS under Ubuntu 20, has a high-quality LIDAR and IMU, and a powerful NVIDIA Xavier GPU. We can presently run a variety of state estimators and controllers on it.
At the workshop, remote participants can explore controller design in the AI Gym environment, and we can set up a small race track in the schoolhouse corridor for participants to run Inivincible.
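Pure pursuit, the baseline path-tracking controller covered in the tutorials below, reduces to a few lines; here is a minimal sketch of the steering law, with an assumed wheelbase rather than Inivincible's actual geometry.

```python
import math

# Hedged sketch of the pure pursuit steering law for F1Tenth-style racing.
# The wheelbase value is an illustrative assumption, not the real car's.
def pure_pursuit_steering(pose, goal, wheelbase=0.33):
    """Steering angle (rad) that arcs the rear axle through the goal point.

    pose: (x, y, yaw) of the rear axle; goal: (gx, gy) lookahead point.
    """
    x, y, yaw = pose
    gx, gy = goal
    lookahead = math.hypot(gx - x, gy - y)
    alpha = math.atan2(gy - y, gx - x) - yaw  # heading error to the goal
    # Curvature of the connecting arc is 2*sin(alpha)/lookahead; the
    # bicycle model converts curvature to a steering angle.
    return math.atan2(2.0 * wheelbase * math.sin(alpha), lookahead)

# Goal straight ahead -> zero steering; goal to the left -> steer left.
print(pure_pursuit_steering((0, 0, 0), (2, 0)))      # 0.0
print(pure_pursuit_steering((0, 0, 0), (2, 1)) > 0)  # True
```

In practice the lookahead point is picked from the planned racing line at a speed-dependent distance; that selection logic is the part the simulation environment is good for exploring.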
1D Drone Control: We will bring a tabletop "drone" with two rotors mounted on the ends of a freely-rotating shaft. The intent is to learn the dynamics of this platform and to use this learned model to control the shaft rotation.
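A minimal sketch of this kind of model learning, assuming (purely for illustration) that the shaft dynamics are linear in the state and rotor command, so they can be identified by least squares from logged transitions:

```python
import numpy as np

# Hedged sketch: identifying 1D drone shaft dynamics from data.
# We assume an illustrative discrete-time linear model
#   s[t+1] = A s[t] + b u[t],  s = (angle, angular velocity),
# and recover A, b by least squares. The "true" values below only
# stand in for the unknown physical platform.
rng = np.random.default_rng(0)
dt = 0.02
A_true = np.array([[1.0, dt], [-0.5 * dt, 1.0 - 0.1 * dt]])
b_true = np.array([0.0, 2.0 * dt])

# Simulate a short random-input experiment to produce training data.
states, inputs = [np.zeros(2)], rng.uniform(-1, 1, 200)
for u in inputs:
    states.append(A_true @ states[-1] + b_true * u)
S = np.array(states)

# Stack [state, input] -> next state and solve the least-squares fit.
X = np.hstack([S[:-1], inputs[:, None]])
Y = S[1:]
W, *_ = np.linalg.lstsq(X, Y, rcond=None)
A_fit, b_fit = W[:2].T, W[2]
print(np.allclose(A_fit, A_true), np.allclose(b_fit, b_true))
```

On the real two-rotor rig the dynamics are nonlinear, so the same logged-transition recipe would feed an RNN or other nonlinear model instead; the linear fit is just the smallest instance of the idea.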
Robotic Arm Control with Visual Information using Loihi 2 or other neuromorphic platforms: We will bring a simple, but fast robotic arm (think Lego Mindstorms, not KUKA) and equip it with a DVS sensor, connected to a Loihi 2 device ("Kapoho Point" board). We will learn to control the arm to reach visually perceived targets, simplifying perception tasks by using blinking LEDs on the end-effector, joints, and the target objects. We could cooperate with a visual perception group to perform more sophisticated (but real time!) visual processing. Learning tasks will include: learning a mapping between the visual space and the "proprioceptive" or motor state of the arm (state estimation), learning motor primitives to move between different start and end points using ballistic movement and/or visual servoing, and learning forward and inverse kinematics. We hope that this project will spawn new ideas and long-term projects for application of neuromorphic technology in arm control.
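As a flavor of the kinematics-learning tasks, here is a minimal sketch of forward kinematics and iterative (Gauss-Newton) inverse kinematics for a planar two-link arm; the link lengths, gain, and iteration count are illustrative assumptions, not the actual arm's parameters.

```python
import numpy as np

# Hedged sketch: FK and iterative IK for a planar 2-link arm, as a
# stand-in for the simple robot arm. All constants are illustrative.
L1, L2 = 0.3, 0.25  # assumed link lengths (m)

def forward(q):
    """End-effector (x, y) for joint angles q = (q1, q2)."""
    q1, q2 = q
    return np.array([L1 * np.cos(q1) + L2 * np.cos(q1 + q2),
                     L1 * np.sin(q1) + L2 * np.sin(q1 + q2)])

def jacobian(q):
    """Analytic Jacobian d(end-effector)/d(joint angles)."""
    q1, q2 = q
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def solve_ik(target, q=np.array([0.3, 0.3]), iters=200, gain=0.5):
    """Damped Gauss-Newton IK: pseudoinverse-Jacobian steps to the target."""
    for _ in range(iters):
        err = target - forward(q)
        q = q + gain * np.linalg.pinv(jacobian(q)) @ err
    return q

target = np.array([0.35, 0.2])
q = solve_ik(target)
print(np.allclose(forward(q), target, atol=1e-4))  # True
```

The learning version of this task replaces the analytic `forward` and `jacobian` with models fit from DVS observations of the blinking LEDs, which is exactly the visual-to-proprioceptive mapping described above.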
Cooperation with other topic areas: We propose specific cooperation with the EEG group to record EEG during a control task (e.g., the Doyle group mountain biking game) and attempt to decode human control actions.
We will also collaborate with TAs who bring neuromorphic hardware platforms to the workshop – SpiNNaker 1 and 2, DYNAP boards, SPECK, FPAAs, etc. – to use their platforms for control tasks. If enough such groups are present at the workshop, we will define a benchmarking task to identify the strengths and weaknesses of each platform, and discuss a roadmap for neuromorphic hardware applications in control.
Materials, Equipment, and Tutorials:
Introductory tutorials
Based on 2021 NC video tutorials
MPC and MPPI
SpiNNaker 2 Tutorial
RNN dynamics modeling
The physical cartpole
Training RNNs for cartpole dynamics
EdgeDRNN & Spartus Tutorial
F1Tenth racing & the F1Tenth car
Pure pursuit path planning and control algorithm
Training MLPs and RNNs for car dynamics
Movement control in animals and machines
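As a companion to the MPC and MPPI tutorial above, here is a minimal MPPI sketch on a double-integrator reach task; the horizon, sample count, and temperature are illustrative choices, not the tutorial's settings.

```python
import numpy as np

# Hedged sketch of MPPI (model-predictive path integral) control on the
# double integrator x'' = u. All hyperparameters are illustrative.
rng = np.random.default_rng(1)
DT, HORIZON, SAMPLES, LAMBDA, SIGMA = 0.05, 20, 256, 1.0, 0.5

def rollout_cost(state, controls, target):
    """Quadratic cost of one control sequence under x'' = u dynamics."""
    pos, vel, cost = state[0], state[1], 0.0
    for u in controls:
        vel += DT * u
        pos += DT * vel
        cost += (pos - target) ** 2 + 0.1 * vel ** 2 + 0.01 * u ** 2
    return cost

def mppi_step(state, nominal, target):
    """One MPPI update: perturb the nominal plan, reweight, average."""
    noise = rng.normal(0.0, SIGMA, size=(SAMPLES, HORIZON))
    costs = np.array([rollout_cost(state, nominal + eps, target)
                      for eps in noise])
    weights = np.exp(-(costs - costs.min()) / LAMBDA)
    weights /= weights.sum()
    return nominal + weights @ noise  # importance-weighted plan update

# Closed loop: re-plan every step and apply only the first control.
state, plan, target = np.array([0.0, 0.0]), np.zeros(HORIZON), 1.0
for _ in range(100):
    plan = mppi_step(state, plan, target)
    state = state + DT * np.array([state[1] + DT * plan[0], plan[0]])
    plan = np.roll(plan, -1)  # warm-start the next plan
    plan[-1] = 0.0
print(state[0])  # should settle near the target of 1.0
```

The same sample-rollout-reweight loop is what makes MPPI a good fit for learned (RNN) dynamics models: the rollouts need only a fast forward model, not gradients.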
Equipment/Tools
EdgeDRNN [3] and (potentially) Spartus [2] neuromorphic FPGA RNN accelerators.
SpiNNaker 2 boards [5]
A simple robotic arm
Pan-tilt unit (Gimbal) for vision-driven control
F1Tenth robot car equipped with LIDAR and NVIDIA Jetson Xavier GPU
Cartpole simulation framework and F1Tenth car simulation framework.
DAVIS event cameras with concurrent brightness change event and grayscale frame output.
Reading Materials
NEUROTECH educational material: https://neurotechai.eu/educational/
RPG's resources: papers, code, datasets, videos: https://rpg.ifi.uzh.ch/
CVPR Workshop on Event-based vision: https://tub-rip.github.io/eventvision2023/
NICE conference proceedings and videos: https://niceworkshop.org/nice-2022/
Neural dynamics, attractor networks: https://dynamicfieldtheory.org/
Relevant Literature:
See paperpile collection https://paperpile.com/shared/9uINWL