The challenges will be introduced on Friday of the first week, and groups will be formed to work on them. These groups will meet at least once per week to discuss infrastructure and progress. On Sept 11 2020, the teams will present their results. The aim is for each challenge to make sufficient progress to prepare a paper about the results.

Benchmarking event-based meta-learning (METAL)

Meta-Learning (ML) or "learning-to-learn" techniques have been established for fast and data-efficient learning within and across task domains. Neuromorphic embedded learning systems can strongly benefit from ML by acquiring prior knowledge in a relevant task domain. However, community-led neuromorphic benchmarks and libraries for ML are still missing. This project/challenge addresses these gaps in two steps: 1) defining event-based meta-learning libraries and benchmarks; and 2) creating and participating in a cross-domain meta-learning challenge.
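The core ingredient of any such benchmark is episodic task sampling: each meta-learning "task" is an N-way, K-shot classification episode drawn from a larger dataset. As a minimal sketch (a generic episodic sampler over toy feature vectors, not a defined METAL benchmark API), the sampling step might look like:

```python
import numpy as np

def sample_episode(data, n_way=3, k_shot=5, k_query=5, rng=None):
    """Sample one N-way K-shot episode (support + query sets).

    `data` maps a class label to an array of feature vectors.
    Classes are relabeled 0..n_way-1 within the episode, as is
    standard in few-shot learning.
    """
    rng = rng or np.random.default_rng()
    classes = rng.choice(sorted(data), size=n_way, replace=False)
    support, query = [], []
    for episode_label, c in enumerate(classes):
        idx = rng.choice(len(data[c]), size=k_shot + k_query, replace=False)
        samples = data[c][idx]
        support += [(x, episode_label) for x in samples[:k_shot]]
        query += [(x, episode_label) for x in samples[k_shot:]]
    return support, query

# Toy stand-in dataset: 5 classes of 20 random 8-dim feature vectors each.
rng = np.random.default_rng(0)
data = {c: rng.normal(size=(20, 8)) for c in range(5)}
support, query = sample_episode(data, rng=rng)
```

An event-based benchmark would replace the toy feature vectors with event streams (e.g. DVS recordings), but the episode structure stays the same.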

Learning to race (L2RACE)

Learning to control dynamical systems is a hot topic. We will explore it on a simple virtual race track where contestants duke it out using their software agents. The cars will have unknown dynamics on a known track (which has unknown traction characteristics). Contestants get full state information (eye of God, so no need for fancy perception/SLAM) and use it to learn how to control the cars to win a time trial or race. Data collection will be in real time (no transfer-learning tricks), but contestants can use E2E human data. This will be a pure Python ML contest for the best control-learning algorithm.

By contrast with the excellent F1TENTH challenge, L2RACE will be much simpler and aimed purely at efficiently learning a good controller from the minimum amount of data.
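To make "learning a controller from minimal data" concrete, one simple data-efficient baseline is model-based: fit a dynamics model from observed transitions and pick actions by lookahead. The sketch below (an illustrative toy; the actual L2RACE car dynamics and interface are not specified here) fits a linear model x' ≈ Ax + Bu by least squares and chooses the candidate action whose predicted next state is closest to a target:

```python
import numpy as np

class LinearModelController:
    """Learn x' ~ A x + B u from observed transitions, then choose the
    candidate action whose predicted next state is closest to a target.
    A toy sketch of data-efficient control learning."""

    def __init__(self, state_dim, action_dim):
        self.X, self.U, self.Y = [], [], []
        self.W = np.zeros((state_dim, state_dim + action_dim))

    def observe(self, x, u, x_next):
        """Record one transition and refit the model [A B]."""
        self.X.append(x); self.U.append(u); self.Y.append(x_next)
        Z = np.hstack([np.array(self.X), np.array(self.U)])
        Y = np.array(self.Y)
        # Ridge-regularised least-squares fit of [A B].
        self.W = np.linalg.solve(Z.T @ Z + 1e-3 * np.eye(Z.shape[1]),
                                 Z.T @ Y).T

    def act(self, x, target, candidates):
        """One-step lookahead: pick the action predicted to land closest
        to the target state."""
        preds = [self.W @ np.concatenate([x, u]) for u in candidates]
        errs = [np.linalg.norm(p - target) for p in preds]
        return candidates[int(np.argmin(errs))]

# Demo on a hidden 1-D system x' = 0.9 x + 0.5 u (unknown to the agent).
rng = np.random.default_rng(1)
ctrl = LinearModelController(state_dim=1, action_dim=1)
for _ in range(50):
    x, u = rng.normal(size=1), rng.normal(size=1)
    ctrl.observe(x, u, 0.9 * x + 0.5 * u)
u = ctrl.act(np.array([1.0]), target=np.array([0.0]),
             candidates=[np.array([-1.0]), np.array([0.0]), np.array([1.0])])
```

Car dynamics are of course nonlinear, so a real entry would swap the linear model for something richer, but the learn-model-then-plan loop is one plausible shape for a winning algorithm.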

Insights into the early motion pathway (VISION)

The first computations of any motion analysis involve image motion estimation, segmentation on the basis of movement, and 3D motion estimation. In previous studies, we have proposed hypotheses about basic computational principles in human motion estimation, which can be observed in optical illusions: patterns that cause misestimation because something in the computation goes wrong. Examples of such principles are statistical bias in the estimation and causal filtering for estimating temporal derivatives (i.e., to compute changes over time, biology can only use signals from the present and the past, but not the future). In this project, we seek to explore these principles and examine the properties a neural network would need to replicate the perception of these illusions. We will also explore the role of the transient signal (i.e., only changes are recorded) in the early motion processes. With the DVS, we have a technical tool available to simulate these computations.
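The causal-filtering constraint can be made concrete in a few lines. The sketch below (an illustrative choice of filter, not the specific model from our studies) estimates a temporal derivative using only present and past samples: a leaky integrator smooths the signal, and a backward difference estimates its rate of change.

```python
import numpy as np

def causal_derivative(signal, alpha=0.5):
    """Estimate the temporal derivative of a 1-D signal using only
    present and past samples: exponential smoothing (a leaky
    integrator) followed by a backward difference. No future samples
    are ever accessed, mirroring the biological constraint."""
    smoothed = np.empty_like(signal, dtype=float)
    acc = signal[0]
    for i, s in enumerate(signal):
        acc = alpha * s + (1 - alpha) * acc   # uses past and present only
        smoothed[i] = acc
    return np.diff(smoothed, prepend=smoothed[0])

# On a linear ramp the causal estimate converges to the true slope (1.0),
# after an initial transient caused by the smoothing lag.
d = causal_derivative(np.arange(100.0))
```

The smoothing lag is exactly the kind of systematic error that can show up perceptually as an illusion: the estimate is correct in steady state but wrong during transients.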

Guided reinforcement learning and imitation learning (GRILL)

This project explores ways of speeding up RL and making it more suitable for neuromorphic hardware by providing guidance in various forms. There are a variety of techniques to explore, and the different groups involved in the project can look at different learning rules and different sorts of guidance. We will focus on tasks in the OpenAI Gym framework, with particular attention to the Atari game Montezuma’s Revenge and sequential tasks in the MiniGrid world framework.
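One of the simplest forms of guidance is potential-based reward shaping: add F(s, s') = γφ(s') − φ(s) to the environment reward, where φ encodes a heuristic notion of progress. The sketch below (a self-contained toy corridor, standing in for a Gym environment rather than using the Gym API) shows tabular Q-learning with this kind of shaping; the heuristic potential is an assumption for illustration.

```python
import numpy as np

def q_learning(n_states=10, episodes=200, guided=True, seed=0):
    """Tabular Q-learning on a 1-D corridor with the goal at the right
    end. With guided=True, a potential-based shaping term rewards
    progress toward the goal, densifying the otherwise sparse reward."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, 2))             # actions: 0 = left, 1 = right
    phi = lambda s: s / (n_states - 1)      # heuristic potential: progress
    gamma, alpha, eps = 0.95, 0.5, 0.1
    for _ in range(episodes):
        s = 0
        for _ in range(4 * n_states):
            a = rng.integers(2) if rng.random() < eps else int(np.argmax(Q[s]))
            s2 = min(max(s + (1 if a == 1 else -1), 0), n_states - 1)
            r = 1.0 if s2 == n_states - 1 else 0.0
            if guided:                      # shaping: F = gamma*phi(s') - phi(s)
                r += gamma * phi(s2) - phi(s)
            Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) - Q[s, a])
            s = s2
            if s == n_states - 1:
                break
    return Q

Q = q_learning()
```

Potential-based shaping is known to leave the optimal policy unchanged while speeding up learning, which is why it is a natural first "guidance" baseline; demonstrations (imitation) and curricula are alternative forms the groups can compare against it.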