Workshop on Biological and Artificial Reinforcement Learning

NeurIPS - December 2019 - Vancouver, Canada

Motivation

Reinforcement Learning (RL) algorithms learn how to interact with an environment guided by reward signals. Since its origins, this approach has been greatly inspired by psychology and the behavioural sciences, and it has proven extremely successful for building autonomous agents. Complementing these efforts, reinforcement learning theory has provided a normative perspective and framework for studying how animals and humans learn through rewards, contributing to some of the most important discoveries in the field. Over time, biological and artificial reinforcement learning have furthered each other's development by informing and inspiring one another. Building on this long-standing and tight connection, our workshop will bring together a broad array of researchers interested in biological and artificial RL. Our aim is twofold: to draw inspiration from neural and cognitive mechanisms to tackle current challenges in RL research and the design of intelligent agents, and to use machine learning theory to further our understanding of the brain and behaviour of humans and animals.

Topics

1. Benchmarks: What are suitable common benchmarks for studying decision making across animals/humans/agents? When is it meaningful/desirable to do so? At what level of abstraction (e.g. behaviour, representations) could we start making comparisons between such systems?

2. Inductive biases: Are there key priors/biases (e.g. the hierarchical structure of behaviour), grounded in experimental findings from human/animal studies, that could inform the design of artificial agents? Or should we explicitly avoid doing so and instead build in as few priors as possible, in the hope of arriving at more flexible and perhaps better solutions?

3. Sample-efficient learning: What can we learn from human/animal learning to arrive at more sample-efficient agents? Are there built-in inductive biases (e.g. knowledge of the 3D world, objects, physics), akin to the core knowledge systems identified by Spelke and Kinzler (2007), that could allow agents to require fewer interactions with the environment? What about model-based learning, or the spectrum from model-based to model-free control that is observed in the neuroscience literature?

4. Representations: What kinds of representations would facilitate RL in both animals/humans and agents? Could we identify these in agents first, so that we could later probe for their signatures in the brain? Conversely, what evidence from animal studies (e.g. the existence of place cells) could inform and constrain the kinds of representations suitable for generalization and lifelong learning in agents?

5. Intrinsic reward signals: How do motivation and boredom contribute to learning in biological agents? Can we draw inspiration from what makes humans/animals explore and interact with their environment (e.g. throughout development) to devise novel, task-agnostic intrinsic reward signals that could facilitate learning in sparse-reward, no-reward, and lifelong learning settings?

6. Memory and continual learning: How does memory contribute to learning? Can we gain further insights from the hierarchical organization of cognitive memory to aid the design of artificial agents with lifelong learning capabilities?

7. Hierarchical RL: What is the role of temporally extended behaviour in learning? Can evidence from human learning (e.g. motor synergies) inspire skill learning and hierarchical RL in artificial agents?

8. Behavioural assessment: How can we evaluate our agents appropriately? Are there ideas we could draw from human/animal learning about the level of generalization we could or should expect from our artificial agents?

Call for papers

We invite you to submit papers (up to 5 pages, excluding references and appendix) in the NeurIPS 2019 format. The focus of the work should relate to biological and/or artificial reinforcement learning. The review process will be double-blind, and accepted submissions will be presented as talks or posters. There will be no proceedings for this workshop; however, authors can opt to have their abstracts posted on the workshop website.

In line with the guidelines defined by the NeurIPS organising committee, we can only accept original work that has not been published at the main NeurIPS conference. However, we welcome work published at venues outside of machine learning, particularly work that has previously appeared at neuroscience or cognitive science venues such as Cosyne, RLDM, CogSci and CCN.


Please submit your papers via the following link: https://cmt3.research.microsoft.com/NeurIPSWSBARL2019/

For any enquiries, please reach out to us at BiologicalArtificialRL@gmail.com

Important Dates

Saturday, September 7th: Paper Submission Deadline

Tuesday, October 1st: Paper Acceptance Notification

Wednesday, October 30th: Camera-ready Submission

December 13th or 14th: Workshop at NeurIPS


Speakers

Richard Sutton
Jacqueline Gottlieb
Angela Yu
Emma Brunskill
Jeff Clune
Jane Wang
Igor Mordatch
Ida Momennejad

Panel

Grace Lindsay

We are hosting a panel discussion with our speakers, moderated by our panelist Grace Lindsay. We are accepting questions from the community; the link for submitting questions will be available soon.



Organizers

Raymond Chua
Sara Zannone