Workshop on Biological and Artificial Reinforcement Learning

NeurIPS - December 2019 - Vancouver, Canada

Motivation

Reinforcement learning (RL) algorithms learn how to interact with an environment guided by reward signals. Since its origins, this approach has drawn heavily on psychology and the behavioural sciences, and it has proven extremely successful for building autonomous agents. Conversely, reinforcement learning theory has provided a normative perspective and framework for studying how animals and humans learn through rewards, contributing to some of the most important discoveries in the field. Over time, biological and artificial reinforcement learning have thus developed in tandem, each informing and inspiring the other. Building on this long-standing and tight connection, our workshop will bring together a broad range of researchers interested in biological and artificial RL, with the aim of both drawing inspiration from neural and cognitive mechanisms to tackle current challenges in RL research for designing intelligent agents, and using machine learning theory to further our understanding of brain and behaviour in humans and animals.

Topics

1. Benchmarks: What are suitable common benchmarks for studying decision making across animals/humans/agents? When is it meaningful/desirable to do so? At what level of abstraction (e.g. behaviour, representations) could we start making comparisons between such systems?

2. Inductive biases: Are there key priors or biases (e.g. the hierarchical structure of behaviour), grounded in experimental findings from human/animal studies, that could inform the design of artificial agents? Or should we explicitly avoid building in such priors and instead rely on as few of them as possible, in the hope of arriving at more flexible and perhaps better solutions?

3. Sample-efficient learning: What can we learn from human/animal learning to arrive at more sample-efficient agents? Are there built-in inductive biases (e.g. knowledge of the 3D world, objects, physics), akin to the core knowledge systems identified by Spelke and Kinzler (2007), that could allow agents to require fewer interactions with the environment? What about model-based learning, or the spectrum from model-based to model-free observed in the neuroscience literature?

4. Representations: What kind of representations would facilitate RL in both animals/humans and agents? Could we identify these in agents first, so that we could then probe for their signatures in the brain? Alternatively, what evidence from animal studies (e.g. the existence of place cells) could inform and constrain the kind of representations suitable for generalization and lifelong learning in agents?

5. Intrinsic reward signals: How do motivation and boredom contribute to learning in biological agents? Can we draw inspiration from what makes humans/animals explore and interact with their environment (e.g. throughout development) to design novel, task-agnostic intrinsic reward signals that could facilitate learning in sparse-reward, no-reward, and lifelong learning settings?

6. Memory and continual learning: How does memory contribute to learning? Can we gain further insights from the hierarchical organization of cognitive memory to aid the design of artificial agents with lifelong learning capabilities?

7. Hierarchical RL: What is the role of temporally extended behaviour in learning? Can evidence from human learning (e.g. motor synergies) inform skill learning and hierarchical RL in artificial agents?

8. Behavioural assessment: How can we evaluate our agents appropriately? Are there ideas we could borrow from human/animal learning about the level of generalization we could or should expect from our artificial agents?

Call for Papers

We invite you to submit papers (up to 5 pages, excluding references and appendix) in the NeurIPS 2019 format. The focus of the work should relate to biological and/or artificial reinforcement learning. The review process will be double-blind, and accepted submissions will be presented as talks or posters. There will be no proceedings for this workshop; however, authors can opt to have their abstracts posted on the workshop website.

In line with the guidelines set by the NeurIPS organising committee, we can only accept original work that has not been published at the main NeurIPS conference. However, we welcome work previously published at venues outside of machine learning, particularly work that has appeared at neuroscience or cognitive science venues such as Cosyne, RLDM, CogSci and CCN.


Please submit your papers via the following link: https://cmt3.research.microsoft.com/NeurIPSWSBARL2019/

For any enquiries, please reach out to us at BiologicalArtificialRL@gmail.com

Important Dates

Tuesday, September 10th: Paper Submission Deadline (11:59 PM anywhere on Earth)

Tuesday, October 1st: Paper Acceptance Notification

Wednesday, October 30th: Camera-ready Submission

Friday, December 13th: Workshop at NeurIPS


Speakers

  • Richard Sutton
  • Jacqueline Gottlieb
  • Angela Yu
  • Jeff Clune
  • Jane Wang
  • Igor Mordatch
  • Ida Momennejad

Panel

  • Grace Lindsay

We are hosting a panel discussion with our speakers, moderated by Grace Lindsay. We are accepting questions from the community.


Schedule

9:00 - 9:15     Opening Remarks
9:15 - 9:45     Invited Talk: Jane Wang
9:45 - 10:30    Poster Session (with coffee break)
10:30 - 10:45   Contributed Talk: Humans flexibly transfer options at multiple levels of abstractions
10:45 - 11:00   Contributed Talk: Slow processes of neurons enable a biologically plausible approximation to policy gradient
11:00 - 11:30   Invited Talk: Jacqueline Gottlieb
11:30 - 12:00   Invited Talk: Ida Momennejad
12:00 - 14:00   Lunch Break / Poster Session
14:00 - 14:30   Invited Talk: Igor Mordatch
14:30 - 15:00   Invited Talk: Jeff Clune
15:00 - 15:30   Invited Talk: Angela Yu
15:30 - 16:15   Poster Session (with coffee break)
16:15 - 16:30   Contributed Talk: MEMENTO: Further Progress Through Forgetting
16:30 - 17:00   Invited Talk: Richard Sutton
17:00 - 17:50   Panel Discussion: Chaired by Grace Lindsay
17:50 - 18:00   Closing Remarks

Organizers

  • Raymond Chua
  • Sara Zannone

Program Committee

  • Luisa Zintgraf
  • Julie Lee
  • Sankirthana Sathiyakumar
  • Jacopo Bono
  • Loic Matthey
  • Nadia M Ady
  • Christos Kaplanis
  • Carlos Brito
  • Olga Lositsky
  • Annik Carson
  • Matthew Schlegel
  • Tim Muller
  • Joshua Achiam
  • Ankur Handa



Accepted Papers

  • Humans flexibly transfer options at multiple levels of abstractions. Liyu Xia, Anne Collins
  • Slow processes of neurons enable a biologically plausible approximation to policy gradient. Anand Subramoney, Franz Scherr, Guillaume Bellec, Elias Hajek, Darjan Salaj, Robert Legenstein, Wolfgang Maass
  • MEMENTO: Further Progress Through Forgetting. Liam Fedus, Dibya Ghosh, John Martin, Yoshua Bengio, Marc G. Bellemare, Hugo Larochelle
  • Goal-directed state space models of animal behavior. Blue Sheffer, Scott Linderman
  • Tactile RL-Architecture: a Platform for Robot Haptic Interaction Learning. Alexandra Moringen*, Sascha Fleer*, Guillaume Walck, Helge Ritter
  • Sample-Efficient Reinforcement Learning with Maximum Entropy Mellowmax Episodic Control. Marta Sarrico, Kai Arulkumaran, Andrea Agostinelli, Anil Anthony Bharath, Pierre Richemond
  • Memory-Efficient Episodic Control Reinforcement Learning with Dynamic Online k-means. Andrea Agostinelli, Kai Arulkumaran, Marta Sarrico, Pierre Richemond, Anil Anthony Bharath
  • Curiosity-Driven Multi-Criteria Hindsight Experience Replay. John Lanier, Stephen Mcaleer, Pierre Baldi
  • Overcoming Catastrophic Interference in Online Reinforcement Learning with Dynamic Self-Organizing Map. Yat Long Lo, Sina Ghiassian
  • Biologically inspired architectures for sample-efficient deep reinforcement learning. Pierre Richemond, Arinbjörn Kolbeinsson, Yike Guo
  • Iterative Policy-Space Expansion in Reinforcement Learning. Jan Lichtenberg, Özgür Simsek
  • Bayesian methods for efficient Reinforcement Learning in tabular problems. Efstratios Markou, Carl Edward Rasmussen
  • If MaxEnt RL is the Answer, What is the Question? Ben Eysenbach, Sergey Levine
  • Translating from Animal Cognition to AI. Matthew Crosby, Benjamin Beyret, Jose Hernandez-Orallo, Lucy Cheke, Marta Halina, Murray Shanahan
  • Reinforcement Learning Models of Human Behavior: Reward Processing in Mental Disorders. Baihan Lin, Guillermo Cecchi, Djallel Bouneffouf, Jenna Reinen, Irina Rish
  • Learning efficient task-dependent representations with synaptic plasticity. Colin Bredenberg, Eero P. Simoncelli, Cristina Savin
  • Hippocampal population representations in reinforced continual learning. Samia Mohinta, Stephane Ciocchi, Rui Ponte Costa
  • Rapid learning and efficient exploration by mice navigating a complex maze. Matthew Rosenberg*, Tony Zhang*, Pietro Perona, Markus Meister
  • Designing model-based and model-free reinforcement learning tasks without human guidance. Jae Hoon Shin, Jee Hang Lee, Shuangyi Tong, Sang Hwan Kim, Sang Wan Lee
  • Sparse Skill Coding: Learning Behavioral Hierarchies with Efficient Codes. Sophia Sanborn, Michael Chang, Sergey Levine, Thomas Griffiths