NC23: Neural Learning for Control

Topic leaders

Invited and key participants

Invited talks

See workshop schedule

Focus and Goals

The NC23 topic area focuses on applying existing learning approaches, and developing new ones, for movement control. We emphasize closing the perception-action loop with neuromorphic AI software and hardware – an important gap on the path towards neuromorphic (embodied) cognition.

The main goals are to use activity-driven neural networks and event-based computing in nonlinear control tasks, and to demonstrate the potential of neuromorphic hardware for fast, low-power control.

We will put all these elements together in simple but real robotic demonstrations.

We will organize projects on neural learning in a range of control tasks, such as cartpole control, F1Tenth driving, and robotic arm control.

We will provide a variety of neuromorphic computing and perception hardware platforms, including EdgeSpartus [2], EdgeDRNN [3], Loihi 2 [4], and SpiNNaker 2 [5] for local participants and Loihi 2 cloud access for remote participants, as well as several DAVIS event cameras.

Additionally, we plan to hold a hybrid cartpole control competition and to attract neuromorphs to form a Telluride team for the F1Tenth competition. We look forward to continuing the development of projects beyond the lifetime of the workshop and encourage publications that acknowledge the Telluride workshop.

Projects

Cartpole Control Competition with Spiking Neural Networks on Loihi 2: We will hold a competition for local and remote participants to train spiking neural networks to control a cartpole in simulation (with the possibility of also testing policies on the physical setup). The SNNs can run on different neuromorphic hardware platforms, such as Loihi 2, SpiNNaker 1 or 2, DYNAPSE, etc. Local participants can access physical Loihi 2 chips, while remote participants will be given access to Loihi 2 on the cloud. Rankings will be determined by the time taken to swing up the cartpole and by system-level power consumption; a minimal sketch of such a controller loop is given below.

A simulated cartpole executes a swingup by using MPC with the RPGD optimizer
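To illustrate the kind of controller a competition entry might implement, the following is a minimal sketch of a small leaky integrate-and-fire (LIF) policy driving a simulated cartpole swing-up. It is written in plain NumPy; the network size, dynamics constants, scoring rule, and the untrained random weights are illustrative assumptions – the actual competition environment, scoring harness, and Loihi 2 toolchain will be provided at the workshop.

    # Minimal sketch: an (untrained) leaky integrate-and-fire policy network
    # driving a simulated cartpole swing-up. All constants are assumptions,
    # not the competition specification; participants would train the weights,
    # e.g. with evolutionary search or surrogate-gradient RL.
    import numpy as np

    rng = np.random.default_rng(0)
    DT = 0.02                       # simulation timestep [s]
    N_IN, N_HID = 4, 64             # state dimension, hidden LIF neurons

    w_in = rng.normal(0, 1.0, (N_HID, N_IN))   # input weights (untrained)
    w_out = rng.normal(0, 0.2, N_HID)          # readout weights (untrained)

    def lif_step(v, x, tau=0.05, v_th=1.0):
        """One LIF update: leak, integrate input current, spike, reset."""
        v = v + DT * (-v / tau) + w_in @ x
        spikes = (v >= v_th).astype(float)
        v = np.where(spikes > 0, 0.0, v)       # reset membrane after a spike
        return v, spikes

    def cartpole_step(s, force, g=9.8, m_c=1.0, m_p=0.1, l=0.5):
        """Standard cartpole ODE (Euler step). s = [x, x_dot, theta, theta_dot],
        with theta measured from upright."""
        x, x_dot, th, th_dot = s
        sin, cos = np.sin(th), np.cos(th)
        tmp = (force + m_p * l * th_dot**2 * sin) / (m_c + m_p)
        th_acc = (g * sin - cos * tmp) / (l * (4/3 - m_p * cos**2 / (m_c + m_p)))
        x_acc = tmp - m_p * l * th_acc * cos / (m_c + m_p)
        return s + DT * np.array([x_dot, x_acc, th_dot, th_acc])

    # One scored episode: pole starts hanging down (theta = pi); the score is
    # the time until the pole first comes close to upright.
    s = np.array([0.0, 0.0, np.pi, 0.0])
    v = np.zeros(N_HID)
    for step in range(int(20.0 / DT)):
        v, spikes = lif_step(v, s)
        force = 10.0 * np.tanh(w_out @ spikes)     # decode spikes to motor force
        s = cartpole_step(s, force)
        if abs(((s[2] + np.pi) % (2 * np.pi)) - np.pi) < 0.1:
            print(f"swing-up reached at t = {step * DT:.2f} s")
            break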

Cartpole robot: The cartpole is excellent for benchmarking comparisons between optimal control methods and is a great benchtop system for continual adaptation and RL experiments. Our cartpole simulation framework will be the basis for local and remote experiments and for transfer learning to the physical cartpole that we will bring. We will be able to drive the cartpole robot from a USB-powered FPGA board that can implement an EdgeDRNN and/or EdgeSpartus neuromorphic RNN accelerator.

A real cartpole can robustly swing itself back up after being knocked down. Here an MPC controller runs the RPGD optimizer with 32 rollouts per timestep using the cartpole ODE dynamics model.
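The MPC controller in the caption above optimizes a control sequence against the cartpole ODE at every timestep. The sketch below keeps only the rollout-and-select core of this idea, as generic random-shooting MPC; the actual RPGD optimizer additionally refines and resamples the candidate sequences, and the horizon, cost weights, and control limits shown here are assumptions.

    # Minimal sketch of rollout-based MPC, as a stand-in for the RPGD optimizer:
    # at every timestep, sample candidate control sequences, roll each one out
    # through the dynamics model, and apply the first action of the best rollout.
    import numpy as np

    def mpc_step(state, dynamics, horizon=50, n_rollouts=32, u_max=10.0, rng=None):
        """Return the force to apply now, chosen by random-shooting MPC."""
        rng = rng or np.random.default_rng()
        # Candidate control sequences: shape (n_rollouts, horizon)
        candidates = rng.uniform(-u_max, u_max, size=(n_rollouts, horizon))
        costs = np.zeros(n_rollouts)
        for k in range(n_rollouts):
            s = state.copy()
            for u in candidates[k]:
                s = dynamics(s, u)
                # Cost: pole angle away from upright, plus small penalties on
                # cart position and control effort.
                angle = ((s[2] + np.pi) % (2 * np.pi)) - np.pi
                costs[k] += angle**2 + 0.01 * s[0]**2 + 1e-4 * u**2
        return candidates[np.argmin(costs), 0]

    # Usage with the cartpole_step dynamics from the previous sketch:
    #   force = mpc_step(s, cartpole_step)
    #   s = cartpole_step(s, force)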

F1Tenth racing: F1Tenth is an established international robotics car-racing competition that explores the limits of competitive time-trial and head-to-head racing in simulation and reality. What can neuromorphs bring to the table, and can neuromorphs compete in this league? We now have two years of experience with the L2RACE environment. Building on this, we have set up the F1Tenth AI Gym simulation environment for F1Tenth racing and have built a high-performance, rugged F1Tenth race car called INIvincible. It runs ROS under Ubuntu 20, has a high-quality LiDAR and IMU, and a powerful Nvidia Xavier GPU. We can presently run a variety of state estimators and controllers on it.

At the workshop, remote participants can explore controller design in the AI Gym environment, and we can set up a small race track in the schoolhouse corridor space for participants to run INIvincible.

A simulated F1Tenth car explores possible future trajectories using the new RPGD stochastic gradient descent optimizer to find the optimal next throttle and steering commands.
The INIvincible F1Tenth car running the pure-pursuit algorithm for testing.
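Pure pursuit, used on the car for baseline testing, steers along the circular arc that passes through a single lookahead point on the reference path. Below is a minimal geometric sketch, assuming the lookahead point is already given in the vehicle frame; the wheelbase and the example numbers are illustrative, not the car's actual parameters.

    # Minimal pure-pursuit steering sketch. The lookahead point (x, y) is assumed
    # to be given in the vehicle frame (x forward, y left); the wheelbase and the
    # example point are illustrative values.
    import math

    WHEELBASE = 0.33      # [m], typical for a 1/10-scale car (assumption)

    def pure_pursuit_steering(lookahead_x, lookahead_y):
        """Steering angle that drives the car along the arc through the
        lookahead point (bicycle model)."""
        ld2 = lookahead_x**2 + lookahead_y**2          # squared lookahead distance
        if ld2 < 1e-6:
            return 0.0
        curvature = 2.0 * lookahead_y / ld2            # arc curvature kappa
        return math.atan(WHEELBASE * curvature)        # steering angle delta

    # Example: a point 2 m ahead and 0.3 m to the left gives a gentle left turn.
    print(pure_pursuit_steering(2.0, 0.3))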

1D Drone Control: We will bring a tabletop "drone" with two rotors mounted on the ends of a freely rotating shaft. The aim is to learn the dynamics of this platform and then use the learned model to control the shaft rotation.
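One simple way to approach this is to fit a low-order model of the shaft dynamics from logged data and then invert that model inside a feedback law. The sketch below assumes linear-plus-gravity single-axis dynamics and synthesizes its own training data so that it runs; the real platform's parameters, sensors, and logging interface are to be defined at the workshop.

    # Minimal sketch: learn a low-order model of the 1D "drone" shaft dynamics
    # from logged data, then use it for model-based angle control. The model
    # structure, gains, and data are illustrative assumptions about the platform.
    import numpy as np

    rng = np.random.default_rng(0)

    # "Logged" data. On the real rig this would come from encoder readings; here
    # we synthesize it from an assumed ground-truth model so the sketch runs.
    n = 2000
    u = rng.uniform(-1, 1, n)                 # differential rotor command
    theta = rng.uniform(-np.pi, np.pi, n)     # shaft angle
    omega = rng.uniform(-5, 5, n)             # shaft angular velocity
    alpha = 6.0 * u - 0.4 * omega - 2.0 * np.sin(theta) + rng.normal(0, 0.05, n)

    # Fit a low-order model  alpha ~ a*u + b*omega + c*sin(theta) + d
    X = np.stack([u, omega, np.sin(theta), np.ones(n)], axis=1)
    (a, b, c, d), *_ = np.linalg.lstsq(X, alpha, rcond=None)

    def control(theta_now, omega_now, theta_target, kp=8.0, kd=2.0):
        """PD law on the angle error, inverted through the learned model to
        obtain the rotor command."""
        alpha_des = kp * (theta_target - theta_now) - kd * omega_now
        return (alpha_des - b * omega_now - c * np.sin(theta_now) - d) / a

    print(control(theta_now=0.0, omega_now=0.0, theta_target=0.5))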

Robotic Arm Control with Visual Information using Loihi 2 or other neuromorphic platforms: We will bring a simple but fast robotic arm (think Lego Mindstorms, not KUKA) and equip it with a DVS sensor connected to a Loihi 2 device ("Kapoho Point" board). We will learn to control the arm to reach visually perceived targets, simplifying the perception task by using blinking LEDs on the end-effector, joints, and the target objects. We could cooperate with a visual perception group to perform more sophisticated (but real-time!) visual processing. Learning tasks will include: learning a mapping between the visual space and the "proprioceptive" or motor state of the arm (state estimation); learning motor primitives to move between different start and end points using ballistic movements and/or visual servoing; and learning forward and inverse kinematics. We hope that this project will spawn new ideas and long-term projects for the application of neuromorphic technology to arm control.
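Blinking LEDs make the perception side tractable because each LED shows up in the DVS event stream as a small pixel region with an unusually high, periodic event rate. The sketch below locates one such LED by counting events per pixel over a short window and taking the centroid of the most active pixels; the event format, blink frequency, and sensor resolution are assumptions, and the mapping from LED pixel coordinates to joint angles would then be learned from recorded pairs.

    # Minimal sketch: locate a blinking LED in a DVS event stream by counting
    # events per pixel over a short window and taking the centroid of the most
    # active pixels. Event format (x, y, t, polarity), blink frequency, and
    # sensor resolution (240x180, DAVIS-like) are illustrative assumptions.
    import numpy as np

    WIDTH, HEIGHT = 240, 180
    WINDOW_S = 0.1                 # analysis window [s]
    BLINK_HZ = 500.0               # assumed LED blink frequency

    def locate_led(events):
        """events: array of shape (N, 4) with columns (x, y, t, polarity)."""
        counts = np.zeros((HEIGHT, WIDTH))
        for x, y, t, p in events:
            counts[int(y), int(x)] += 1
        # A pixel seeing the LED fires roughly twice per blink period (one ON
        # and one OFF event), so threshold on the expected count per window.
        expected = 2 * BLINK_HZ * WINDOW_S
        ys, xs = np.nonzero(counts > 0.5 * expected)
        if len(xs) == 0:
            return None
        return xs.mean(), ys.mean()   # pixel-space centroid of the LED

    # Synthetic usage example: events clustered on a 3x3 patch around (120, 90).
    rng = np.random.default_rng(0)
    n_ev = 2000
    fake = np.column_stack([120 + rng.integers(-1, 2, n_ev),
                            90 + rng.integers(-1, 2, n_ev),
                            rng.uniform(0, WINDOW_S, n_ev),
                            rng.integers(0, 2, n_ev)])
    print(locate_led(fake))           # approximately (120.0, 90.0)

A small regressor (e.g., least squares or a compact MLP) fitted on recorded (LED pixel position, joint angle) pairs would then provide the visual-to-proprioceptive mapping mentioned above.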

Cooperation with other topic areas: We propose specific cooperation with the EEG group to record EEG during a control task (e.g., the Doyle group's mountain-biking game) and attempt to decode human control actions.


We will also collaborate with topic areas (TAs) that bring neuromorphic hardware platforms to the workshop – SpiNNaker 1 and 2, DYNAP boards, SPECK, FPAAs, etc. – to use their platforms for control tasks. If enough such groups are present at the workshop, we will define a benchmarking task to identify the strengths and weaknesses of each platform, and discuss a roadmap for neuromorphic hardware applications in control.

Materials, Equipment, and Tutorials:

Introductory tutorials

Based on 2021 NC video tutorials


Equipment/Tools

Reading Materials 

Relevant Literature:

See paperpile collection https://paperpile.com/shared/9uINWL