The key ingredients for a successful learning algorithm are a useful representation of the world and a method for integrating new information into existing knowledge. Uncovering the representations that dynamically support learning algorithms remains an open challenge, and recent findings in neuroscience have complicated the standard model of reward prediction error (RPE)-based updating in human reinforcement learning and decision making (RLDM). At the same time, advances in deep reinforcement learning have opened new opportunities for probing how representations and learning rules work together to enable adaptive decision-making, and time-resolved neural data can provide both insight into and constraints on RLDM algorithms.
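For context, the "standard model" referred to above is typically the temporal-difference rule, in which a reward prediction error drives value updates. The sketch below is a minimal illustration of that rule under assumed parameters (learning rate, discount factor, and function name are illustrative), not any speaker's specific model.

```python
def rpe_update(value, reward, next_value, alpha=0.1, gamma=0.95):
    """One temporal-difference step: compute the reward prediction error
    (RPE) and nudge the current value estimate toward the new target."""
    rpe = reward + gamma * next_value - value  # delta = r + gamma*V(s') - V(s)
    return value + alpha * rpe, rpe

# Toy usage: a single state repeatedly followed by reward 1.0.
v = 0.0
for _ in range(50):
    v, delta = rpe_update(v, reward=1.0, next_value=0.0)
print(round(v, 3))  # value estimate approaches 1.0 as the RPE shrinks
```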

This workshop will bring together researchers working at the intersection of representations and learning algorithms who draw on neuroscience techniques or theory to advance our understanding of learning and decision making. Confirmed speakers will present experiments that weave together cutting-edge approaches from neuroscience (intracranial electrophysiology, fMRI, pharmacology, and PET), AI, and cognitive psychology, each addressing the workshop's guiding questions:

What have we learned about representations and learning rules from these approaches? And what new theoretical questions do these experiments raise?