Topic Area

NSI19: Neuromorphic systems for high speed sensorimotor integration, cognitive planning and control

Invited guests

  • Greg Cohen - Western Sydney University
  • Terry Stewart - University of Waterloo
  • Saeed Afshar - Western Sydney University
  • Sadique Sheik - AiCtx
  • Elisa Donati - University of Zurich
  • Yulia Sandamirskaya - University of Zurich

Topic leaders

  • Chiara Bartolozzi - IIT Genova
  • Chris Eliasmith - University of Waterloo

Goal

Develop embodied agents capable of autonomous action in a dynamic environment, coupling rapid low-level decision making with higher-level cognitive strategies that evolve on a slower time scale but are necessary to accomplish the agent's long-term goals. To achieve this, we will explore learning multimodal dynamic representations of the environment for motor planning and control, using neuromorphic sensors and processors.

Projects within the NSI19 topic will incorporate combinations of sensing, planning, and effector control, focusing on the learning required to equip agents with the capability to autonomously develop their own perception, planning, and execution skills. NSI19 will provide hands-on exposure to established theories, frameworks, and tools for constructing neuromorphic sensory-cognitive-motor systems on neuromorphic platforms, with the goal of demonstrating a complete sensing-to-interaction pathway.

Setups

Two main setups will be available: an automated foosball table and a robot arena where small robots can navigate. Both systems will be equipped with external neuromorphic vision sensors (ATIS/DVS or DAVIS), either fixed or in a stereo configuration mounted on independently moving pan-tilt units, and with neuromorphic tactile sensors on the floor. The robots (LEGO or pushbot) will be equipped with neuromorphic vision sensors (and an IMU). Additionally, both systems will be connected to most of the currently available neuromorphic computing platforms (Loihi, SpiNNaker, Braindrop, DynapSE2, DynapCNN), making it possible to benchmark the same task on different platforms and to choose the best platform for a specific task.

Demonstrator #1

The goal of the first demonstrator is the smooth tracking of a small agent (e.g., the pushbot or LEGO robot) by the external cameras mounted on a pan-tilt unit. The system will learn the visual appearance of the agents with the external DVS, detect one of them with the visual attention module, track its movement with the DVS cameras and the tactile input, predict its trajectory, and smoothly pursue it in space using low-level spike-based motor control.
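
To make the tracking stage concrete, here is a minimal Python sketch of an event-driven tracker that estimates the target's image position as an exponentially weighted mean of incoming DVS events. The (x, y, timestamp, polarity) event format, the ALPHA gain, and the toy event source are illustrative assumptions, not the workshop's actual pipeline.

```python
# Sketch: event-driven position tracking from DVS events (assumed format).
import numpy as np

ALPHA = 0.05  # per-event update weight (assumed; sets the tracker's inertia)

def track(events, x0=64.0, y0=64.0):
    """Return the tracked (x, y) position after processing all events."""
    x_est, y_est = x0, y0
    for x, y, _t, _p in events:  # timestamp/polarity unused in this sketch
        # Each event nudges the estimate toward its pixel coordinates.
        x_est += ALPHA * (x - x_est)
        y_est += ALPHA * (y - y_est)
    return x_est, y_est

# Toy usage: a cloud of events around pixel (100, 40).
rng = np.random.default_rng(0)
evts = [(100 + rng.normal(0, 2), 40 + rng.normal(0, 2), i * 1e-4, 1)
        for i in range(500)]
print(track(evts))  # converges near (100, 40)
```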

Demonstrator #2

The second demonstrator will focus on pushbots interacting with objects. The tactile input will serve as a localisation cue to track objects and pushbots in the arena and to match them with the visual input.
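
A minimal sketch of the intended matching step, assuming the floor skin reports contact centroids in arena coordinates and that visual tracks have already been projected into the same frame (the calibration step is not shown); the function name and distance threshold are assumptions for illustration:

```python
# Sketch: nearest-neighbour association of a tactile contact with visual tracks.
import numpy as np

def match_tactile_to_tracks(contact_xy, track_positions, max_dist=0.10):
    """Return the index of the nearest visual track, or None if too far away."""
    tracks = np.asarray(track_positions)  # shape (n_tracks, 2), in metres
    d = np.linalg.norm(tracks - np.asarray(contact_xy), axis=1)
    i = int(np.argmin(d))
    return i if d[i] <= max_dist else None

# Toy usage: a contact at (0.52, 0.31) matches the second track.
print(match_tactile_to_tracks((0.52, 0.31), [(0.10, 0.90), (0.50, 0.30)]))
```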

Demonstrator #3

The third, foosball demonstrator will encourage participants to address high-speed decision making in the control of a dynamic system with many degrees of freedom, a challenge that biological brains solve easily but current AI does not.

Projects

Learning object-based visual attention: This project will learn attentional representations from DVS event-based vision. The targeted implementation will be based on chip-in-the-loop learning with the DVS and DynapCNN. It will explore recently proposed attention mechanisms such as proto-object saliency and multi-headed attention.
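
As a rough illustration of the saliency stage, the sketch below accumulates DVS events into a decaying activity surface and applies a centre-surround (difference-of-Gaussians) filter, a much-simplified stand-in for the grouping stage of proto-object saliency models; the sensor resolution, decay, and filter scales are assumptions.

```python
# Sketch: event-based centre-surround saliency from a decaying event surface.
import numpy as np
from scipy.ndimage import gaussian_filter

H, W = 128, 128  # assumed DVS resolution

def saliency(events, decay=0.99, sigma_c=2.0, sigma_s=8.0):
    surface = np.zeros((H, W))
    for x, y, _t, _p in events:        # assumed (x, y, timestamp, polarity)
        surface *= decay               # leak old activity
        surface[int(y), int(x)] += 1.0 # integrate the new event
    center = gaussian_filter(surface, sigma_c)
    surround = gaussian_filter(surface, sigma_s)
    return np.clip(center - surround, 0.0, None)  # on-centre saliency

# Toy usage: a cluster of events around column ~55, row ~75.
rng = np.random.default_rng(1)
evts = [(rng.integers(50, 60), rng.integers(70, 80), i * 1e-4, 1)
        for i in range(300)]
print(np.unravel_index(np.argmax(saliency(evts)), (H, W)))  # near (75, 55)
```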

Spike-based low-level motor control for tracking: This project will drive the stereo DVS mounts to track a moving target (detected by the visual attention module), using spiking networks for both perception and motor control.
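
A minimal Nengo sketch of this idea: an ensemble encodes the retinal error of the attended target and drives the pan-tilt motors with a proportional velocity command. The target_error node stands in for the visual attention module, and the gain is an illustrative assumption.

```python
# Sketch: spiking proportional control of a pan-tilt mount in Nengo.
import nengo
import numpy as np

K_P = 2.0  # proportional gain (assumed)

model = nengo.Network(label="smooth pursuit")
with model:
    # Retinal error of the target in normalised coordinates [-1, 1]^2
    # (a stand-in for the attention module's output).
    target_error = nengo.Node(lambda t: [np.sin(t), 0.3 * np.cos(t)])
    error = nengo.Ensemble(n_neurons=200, dimensions=2)
    motor = nengo.Ensemble(n_neurons=200, dimensions=2)

    nengo.Connection(target_error, error)
    # Proportional control: pan/tilt velocity proportional to retinal error.
    nengo.Connection(error, motor, transform=K_P, synapse=0.01)
    motor_probe = nengo.Probe(motor, synapse=0.02)

with nengo.Simulator(model) as sim:
    sim.run(1.0)  # decoded motor commands are in sim.data[motor_probe]
```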

Spike-based motor control strategies for stereo vision: This project will integrate with the motor control for tracking, using temporal associations based on simultaneous spiking from the stereo DVS input to learn spike-based motor control for the stereo DVS mounts.
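
The sketch below illustrates the temporal-association idea in plain Python: events from the left and right DVS that occur within a short coincidence window on the same scanline are paired, and their horizontal offset gives a disparity estimate. The event format and the 1 ms window are assumptions; a real matcher would also use epipolar calibration.

```python
# Sketch: stereo matching by temporal coincidence of DVS events.
def stereo_disparities(left, right, window=1e-3):
    """left/right: lists of (x, y, t) events sorted by t. Returns disparities."""
    disparities = []
    j = 0
    for xl, yl, tl in left:
        # Advance to the first right event inside the coincidence window.
        while j < len(right) and right[j][2] < tl - window:
            j += 1
        for xr, yr, tr in right[j:]:
            if tr > tl + window:
                break
            if yr == yl:  # same scanline: candidate match
                disparities.append(xl - xr)
    return disparities

# Toy usage: near-simultaneous events on scanline 5 give disparity 10.
print(stereo_disparities([(40, 5, 0.0100)], [(30, 5, 0.0104)]))  # -> [10]
```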

Learning of high-level movement planning: This project will learn high-level planning representations to control the agents. The final goal is to implement planning structures, for instance by operating DynapCNN in a recurrent architecture, or by using the recurrent basal ganglia-thalamus-cortex control structures of the Semantic Pointer Architecture (SPA).
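
For the SPA route, a minimal action-selection loop can be sketched with the legacy nengo.spa module (Nengo 2.x), cycling between two abstract plan states; the vocabulary and rules are illustrative, and a real planner would gate sensory and motor buffers instead.

```python
# Sketch: basal ganglia-thalamus action selection over plan states in the SPA.
import nengo
from nengo import spa

model = spa.SPA(label="plan selection")
with model:
    model.state = spa.State(dimensions=64)
    actions = spa.Actions(
        'dot(state, SEARCH) --> state = TRACK',  # found a target: track it
        'dot(state, TRACK) --> state = SEARCH',  # lost the target: search
    )
    model.bg = spa.BasalGanglia(actions)
    model.thalamus = spa.Thalamus(model.bg)

    # Kick the system into the SEARCH state for the first 100 ms.
    model.input = spa.Input(state=lambda t: 'SEARCH' if t < 0.1 else '0')

with nengo.Simulator(model) as sim:
    sim.run(0.5)
```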

Multi-modal integration: This project will deploy spike-based training algorithms to learn combined visual/tactile representations, using inputs from the stereo DVS head and the skin tactile hardware. It will be integrated with the spike-based low-level motor control and the visual attention module for tracking the pushbot.
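
One plausible sketch of such training uses Nengo's PES rule to learn a fused position estimate from a joint visual/tactile ensemble; the input nodes stand in for the DVS head and skin pipelines, and the "ground truth" used for the error signal (the mean of the two cues) is an assumption for illustration.

```python
# Sketch: online learning of a fused visual/tactile estimate with PES.
import nengo

model = nengo.Network(label="multimodal fusion")
with model:
    vision = nengo.Node(lambda t: [0.5])  # visual x-position of the pushbot
    touch = nengo.Node(lambda t: [0.4])   # tactile x-position from the skin

    combined = nengo.Ensemble(n_neurons=400, dimensions=2)
    nengo.Connection(vision, combined[0])
    nengo.Connection(touch, combined[1])

    estimate = nengo.Ensemble(n_neurons=200, dimensions=1)
    # Start from a blank decoder and let PES learn the fusion function.
    conn = nengo.Connection(combined, estimate, function=lambda x: [0.0],
                            learning_rule_type=nengo.PES(learning_rate=1e-4))

    # Error = estimate - target (here the target is the mean of the cues).
    error = nengo.Ensemble(n_neurons=100, dimensions=1)
    nengo.Connection(estimate, error)
    nengo.Connection(combined, error, function=lambda x: [-(x[0] + x[1]) / 2])
    nengo.Connection(error, conn.learning_rule)

with nengo.Simulator(model) as sim:
    sim.run(2.0)  # the decoded estimate converges toward 0.45
```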

Lectures

E. Neftci - Spike-based learning

E. Chicca - CMOS circuits for synaptic learning

E. Niebur, R. Etienne-Cummings - Visual attention

C. Bartolozzi - Neuromorphic tactile sensing

E. Donati - Spike-based motor control

S. Afshar - Spiking cameras

T. Stewart - The Nengo simulator

C. Eliasmith - LMUs: Novel, Better than LSTMs, and Spiking

Hands-on Tutorials

C. Bartolozzi, E. Donati - System description - how to use the robot arena, HW and SW components, overview of the sensors

C. Bartolozzi - Event-driven on YARP - how to use the library for vision, touch, motors and neuromorphic processors

S. Sheik - Implementing networks and algorithms on DynapSE2 and DynapCNN

T. Stewart - The Nengo Simulator

C. Eliasmith - Cognitive control in spiking networks

G. Cohen - Neuromorphics for high-speed visual processing

T. Stewart - Nengo Loihi

Pre-workshop reading list for students (available soon)

References about CMOS circuits for synaptic learning rules, visual attention, spike-based motor control, tactile sensing, and spike-based learning

dynamicfieldtheory.org

Documentation of hardware (DYNAP*, tactile sensors, SpiNNaker, Loihi, Braindrop)

Documentation of SW libraries

A spiking neural model of adaptive arm control

The Nengo spiking neural network simulator

Using Loihi with the Nengo Simulator

How to Install YARP and the event-driven library (for the multi-sensory projects with skin and vision)