SMI21: Sensory Motor Integration
Invitees

  • University of California, Irvine

  • Peter Grünberg Institute, FZ Jülich

  • Radboud University

Topic Leaders

Team

  • Tiago Marques, Massachusetts Institute of Technology

  • Martin Schrimpf, Massachusetts Institute of Technology

  • Jaap de Ruyter van Steveninck, Radboud University

  • Burcu Kucukoglu, Radboud University

  • Yuhuang Hu, University of Zurich and ETH Zurich

  • Garrick Orchard, Intel

  • Tim (Hin Wai) Lui, University of California, Irvine

  • Ugo Albanese, Mahmoud Akl, Manos Angelidis, Benedikt Feldotto, Antoine Detailleur, Neurobotics Platform, Human Brain Project

Goal

Natural agents optimize their behavior by closing the loop between sensory input, action, and reward. In this topic area we will apply principles of brain-inspired sensorimotor control, as observed in biological organisms, to artificial agents solving navigation tasks in a naturalistic simulated environment. We will draw on knowledge of neuro-inspired visual models, deep learning methods (e.g., reinforcement learning), and computer vision algorithms applied to asynchronous DVS event streams.

Projects

  1. Optimal Models of Visual Processing with DVS Events

What kind of spatio-temporal filters should be applied to the DVS event stream to extract salient cues for navigation? Here we will pursue a brain-inspired implementation of the filter stage with neural networks. One approach will be to create a simplified model of the early visual system (LGN, V1) that reproduces some of the spatio-temporal processing these areas perform on the retinal output to extract informative features.
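To make the filter stage concrete, here is a minimal Python sketch (NumPy/SciPy): toy DVS events are accumulated into a signed frame and convolved with a small Gabor bank standing in for V1 simple cells. The event tuples, frame size, and filter parameters are illustrative assumptions, not the project's final design.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor(size=9, sigma=2.0, theta=0.0, wavelength=5.0):
    """Gabor kernel as a stand-in for a V1 simple-cell receptive field."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)

def events_to_frame(events, shape, t0, t1):
    """Accumulate signed DVS events (t, x, y, polarity) into a 2-D frame."""
    frame = np.zeros(shape)
    for t, x, y, p in events:
        if t0 <= t < t1:
            frame[y, x] += p          # +1 for ON events, -1 for OFF events
    return frame

# Four-orientation filter bank applied to one 10 ms slice of a toy stream.
bank = [gabor(theta=th) for th in np.linspace(0, np.pi, 4, endpoint=False)]
events = [(0.001, 10, 12, +1), (0.002, 11, 12, -1), (0.007, 12, 12, +1)]
frame = events_to_frame(events, shape=(64, 64), t0=0.0, t1=0.01)
features = np.stack([convolve2d(frame, k, mode="same") for k in bank])
print(features.shape)                 # (4, 64, 64): orientation-selective maps
```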

  2. Brain-Inspired Learning Rules in Cortical Vision Processing

The LGN/V1 circuitry developed in subproject 1 will be ported to Loihi for an end-to-end neuromorphic implementation with the DVS as the retina model. With this system, we will be able to explore biologically plausible learning rules, such as the three-factor rule, for feature extraction in V1.
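The sketch below illustrates the general shape of a three-factor rule of the kind we plan to explore, not a specific Loihi implementation: local pre/post coincidences accumulate in a decaying eligibility trace, and a sparse global reward signal, the third factor, converts the trace into actual weight changes. All constants and the toy spike statistics are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pre, n_post = 16, 4
w = rng.normal(0.0, 0.1, size=(n_post, n_pre))
elig = np.zeros_like(w)               # eligibility trace e_ij per synapse
tau_e, lr = 20.0, 0.01                # trace decay (in steps) and learning rate

for step in range(200):
    pre = (rng.random(n_pre) < 0.1).astype(float)     # presynaptic spikes
    post = (w @ pre > 0.5).astype(float)              # crude postsynaptic spikes
    # Factors 1 and 2: local pre/post coincidences feed a decaying trace.
    elig = elig * np.exp(-1.0 / tau_e) + np.outer(post, pre)
    # Factor 3: a sparse global reward gates the actual weight update.
    reward = 1.0 if step % 50 == 49 else 0.0
    w += lr * reward * elig
```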

  3. Sensory Motor Control using Reinforcement Learning

Our models will employ reinforcement learning techniques to guide the agent during navigation. One place to incorporate reinforcement learning is the model of the lateral geniculate nucleus (LGN) developed in subproject 1, with reward signals fed back from the cortex or other thalamic nuclei.
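As a minimal illustration of the learning loop, the following REINFORCE-style sketch trains a softmax policy over navigation actions from a visual feature vector. The random "features" and the toy reward are placeholders; the real agent would receive features from the subproject 1 model and rewards from the virtual environment.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, n_actions = 32, 3          # e.g. turn-left / forward / turn-right
theta = np.zeros((n_actions, n_features))
lr = 0.01

def policy(x):
    """Softmax policy over navigation actions given a visual feature vector."""
    logits = theta @ x
    p = np.exp(logits - logits.max())
    return p / p.sum()

for episode in range(200):
    grads, rewards = [], []
    for t in range(50):
        x = rng.random(n_features)     # stand-in for a V1 feature vector
        p = policy(x)
        a = rng.choice(n_actions, p=p)
        grads.append(np.outer(np.eye(n_actions)[a] - p, x))  # grad log pi(a|x)
        rewards.append(float(a == 1))  # toy reward for moving forward
    returns = np.cumsum(rewards[::-1])[::-1]          # returns-to-go
    for g, ret in zip(grads, returns):
        theta += lr * ret * g          # policy-gradient update
```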

Plan

  • Week 1: Tutorials and invited talks for all workshop participants on bio-inspired learning mechanisms (such as reinforcement learning) and on DVS and retina models; setup of software and possible hardware platforms for the experiments.

  • Weeks 2-3: Project work by participants, alongside invited talks.


We target a demonstration of the project results at the final presentation.

Introductory Material

  1. Event Cameras (by Tobi Delbruck)

  2. Modelling of the visual system: paper, github, NeurIPS oral presentation (live tutorials with Tiago Marques and Martin Schrimpf)

  3. Using the VOneNet architecture to simulate a primary visual cortex at the front of CNNs: paper, code

  4. Video to DVS event (v2e) tool: website, example, colab tutorial (live tutorial with Yuhuang Hu)

  5. Event-based learning and surrogate gradients (by Emre Neftci)

  6. Reinforcement Learning (by Matthew Botvinick)

  7. Loihi architecture overview: https://www.youtube.com/watch?v=3HRyb7Bmp5U&list=PLJ506hQ4g3Th3sDNaHiqmK6Pr0YTH_aZo&index=50

  8. Loihi NxSDK tutorials: https://www.youtube.com/watch?v=Bf4CskHBTOQ&list=PLJ506hQ4g3Th3sDNaHiqmK6Pr0YTH_aZo&index=52&t=0s

  9. SLAYER tutorial: https://intel-ncl.atlassian.net/wiki/spaces/INRC/pages/1087734558/SLAYER+office+hours

Hardware and Software Setup

  • The sensory input will be provided by Dynamic Vision Sensors (DVS) located at the host institutes.

  • For ease of access, participants can use the v2e video-to-event tool to generate realistic DVS events from standard video (see the example invocation after this list).

  • The navigation tasks will be embedded in a virtual environment developed at the Donders Institute.

  • We will provide access to the Intel Loihi system for training and running the models.
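As a rough illustration, a v2e conversion might be driven from Python as below. The flag names follow the v2e documentation at the time of writing and may differ across versions, so check `v2e --help` on your installation; the input file name and output folder are placeholders.

```python
import subprocess

# Convert a standard video into a realistic DVS event stream with v2e.
subprocess.run([
    "v2e",
    "-i", "input_video.mp4",                # source video (placeholder name)
    "-o", "v2e_output",                     # output folder (placeholder name)
    "--output_width", "346",                # DAVIS346 sensor resolution
    "--output_height", "260",
    "--pos_thres", "0.2",                   # ON-event contrast threshold
    "--neg_thres", "0.2",                   # OFF-event contrast threshold
    "--dvs_exposure", "duration", "0.005",  # 5 ms event-frame preview
], check=True)
```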