Ecological Theory of RL
Workshop Schedule
Tuesday, December 14th, 2021 @ NeurIPS 2021 (Virtual)
08:00 - 17:30 (ET)
The theme of this workshop on the ecological theory of RL brings together perspectives from various scientific disciplines. We have therefore invited a diverse set of speakers, each offering a unique perspective from their area of expertise, spanning reinforcement learning, operations research, optimal control, robotics, natural ecology, economics, and the cognitive sciences.
Each invited talk will be followed by a live Q&A session. If you watch a talk ahead of time, click its [Q & A] link to submit your question; the session moderators will do their best to put all relevant questions to our invited speakers.
Session I (08:00 ET - 11:50 ET)
Chair: Manfred Diaz
08:00 ET: Introductory Remarks (Shane Gu)
08:10 ET: Artificial What? (Shane Legg) [Q & A]
08:40 ET: What makes for an interesting RL problem? (Joelle Pineau) [Q & A]
09:10 ET: HyperDQN: A Randomized Exploration Method for Deep RL (Li et al.)
09:25 ET: Grounding an Ecological Theory of Artificial Intelligence in Human Evolution (Eleni Nisioti)
09:50 ET: Sculpting (human-like) AI systems by sculpting their (social) environments (Pierre-Yves Oudeyer) [Q & A]
10:20 ET: Towards RL applications in video games and with human users (Katja Hofmann) [Q & A]
10:50 ET: Habitat 2.0: Training Home Assistants to Rearrange their Habitat (Andrew Szot)
11:05 ET: Embodied Intelligence via Learning and Evolution (Agrim Gupta)
11:20 ET: A Methodology for RL Environment Research (Daniel Tanis) [Q & A]
Lunch Break & Poster Session (11:50 ET - 13:00 ET)
11:50 ET: Lunch Break (GatherTown)
12:20 ET: Poster Session (GatherTown)
Session II (13:00 ET - 17:30 ET)
Chair: Lisa Lee
13:00 ET: Environment Capacity (Benjamin Van Roy) [Q & A]
13:30 ET: A Universal Framework for Reinforcement Learning (Warren Powell) [Q & A]
14:00 ET: Representation Learning for Online and Offline RL in Low-rank MDPs (Masatoshi Uehara)
14:15 ET: Understanding the Effects of Dataset Composition on Offline Reinforcement Learning (Kajetan Schweighofer)
14:30 ET: Structural Assumptions for Better Generalization in Reinforcement Learning (Amy Zhang) [Q & A]
15:10 ET: Reinforcement learning: It's all in the mind (Tom Griffiths) [Q & A]
15:40 ET: Curriculum-based Learning: An Effective Approach for Acquiring Dynamic Skills (Michiel van de Panne) [Q & A]
16:10 ET: Panel Discussion: Joelle Pineau, Tom Griffiths, Pierre-Yves Oudeyer, and Jeff Clune, moderated by Shane Gu [Q & A]
17:00 ET: Launch: BigGym: A Crowd-Sourcing Challenge for RL Environments and Behaviors (Shane Gu)
17:30 ET: Closing Remarks
Invited Speakers
Shane Legg (DeepMind)
Shane Legg: Artificial What?
Shane Legg is the Co-Founder & Chief Scientist at DeepMind. His main research interests are in artificial intelligence, both theory and practice. In particular, he is interested in measures of intelligence for machines, neural networks, artificial evolution, reinforcement learning and the theory of learning.
Joelle Pineau (McGill)
Joelle Pineau: What makes for an interesting RL problem?
Joelle Pineau is an Associate Professor and William Dawson Scholar at the School of Computer Science at McGill University, where she co-directs the Reasoning and Learning Lab. She is a core academic member of Mila and a Canada CIFAR AI chairholder. She is also co-Managing Director of Facebook AI Research. Dr. Pineau's research focuses on developing new models and algorithms for planning and learning in complex partially-observable domains. She also works on applying these algorithms to complex problems in robotics, health care, games and conversational agents.
Pierre-Yves Oudeyer (Inria)
Pierre-Yves Oudeyer: Sculpting (human-like) AI systems by sculpting their (social) environments
Pierre-Yves Oudeyer is research director (DR1) at Inria, heading the Flowers team at Inria Bordeaux Sud-Ouest. His research focuses on lifelong autonomous learning, and the self-organization of behavioural, cognitive and language structures, at the frontiers of artificial intelligence, machine learning, and cognitive sciences. He uses machines as tools to understand better how children learn and develop, and studies how one can build machines that learn autonomously like children, within the new field of developmental artificial intelligence.
Katja Hofmann (MSR)
Katja Hofmann: Towards RL applications in video games and with human users
Katja Hofmann is a Senior Principal Researcher within the Machine Intelligence theme at Microsoft Research Cambridge. She leads a team that focuses on Deep Reinforcement Learning for Games, with a mission to advance the state of the art in reinforcement learning, driven by current and future applications in video games. Her team shares the belief that games will drive a transformation of how we interact with AI technology. Her long-term goal is to develop AI systems that learn to collaborate with people, to empower their users and help solve complex real-world problems.
Daniel Tanis (DeepMind)
Daniel Tanis: A Methodology for RL Environment Research
Daniel Tanis is a Research Scientist at DeepMind. He completed a Ph.D. at the University of Cambridge and previously worked at Google. His current research focuses on understanding how training environments can be used to improve the performance of reinforcement learning agents.
Benjamin Van Roy (Stanford)
Benjamin Van Roy: Environment Capacity
Benjamin Van Roy is a Professor at Stanford University, where he has served on the faculty since 1998. His research focuses on the design, analysis, and application of reinforcement learning algorithms. Beyond academia, he leads a DeepMind Research team in Mountain View, and has also led research programs at Unica (acquired by IBM), Enuvis (acquired by SiRF), and Morgan Stanley.
Warren Powell (Princeton)
Warren Powell: A Universal Framework for Reinforcement Learning
Warren Powell is a professor in the Department of Operations Research and Financial Engineering at Princeton University, where he has taught since 1981. His interests focus on optimization under uncertainty, broadly defined, drawing on applications in energy, transportation, health, business analytics, and the sciences. He has written books on Approximate Dynamic Programming and Optimal Learning, and is working on a new book that creates a unified framework spanning all the major subfields of stochastic optimization. He is an INFORMS Fellow, has won the Daniel H. Wagner Prize, and was twice an Edelman finalist.
Amy Zhang (Berkeley & FAIR)
Amy Zhang: Structural Assumptions for Better Generalization in Reinforcement Learning
Amy Zhang is a postdoctoral scholar at UC Berkeley and a research scientist at Facebook AI Research. Her research focuses on state abstractions, model-based reinforcement learning, representation learning, and generalization in RL. She completed her PhD at McGill University and Mila - Quebec AI Institute, co-supervised by Joelle Pineau and Doina Precup. She also holds an M.Eng. in EECS and dual B.Sc. degrees in Mathematics and EECS from MIT.
Tom Griffiths (Princeton)
Tom Griffiths: Reinforcement learning: It's all in the mind
Tom Griffiths is a professor of psychology and computer science at Princeton, where he directs the Computational Cognitive Science Lab. His research focuses on developing mathematical models of higher-level cognition and understanding the formal principles that underlie our ability to solve the computational problems we face in everyday life. He has published scientific papers on topics ranging from cognitive psychology to cultural evolution and has received awards from the National Academy of Sciences, the Sloan Foundation, the American Psychological Association, and the Psychonomic Society, among others.
Michiel van de Panne (UBC)
Michiel van de Panne: Curriculum-based Learning: An Effective Approach for Acquiring Dynamic Skills
Michiel van de Panne is a professor in the Department of Computer Science at the University of British Columbia. His research interests span reinforcement learning, control, physics-based simulation of human and animal movement, robotics, computer animation, and computer graphics.
Jeff Clune (OpenAI & UBC)
Jeff Clune: Invited Panelist
Jeff Clune is a Research Team Leader at OpenAI and an Associate Professor of Computer Science at the University of British Columbia. Previously, Jeff was a Senior Research Manager and founding member of Uber AI Labs, which was formed after Uber acquired the startup Geometric Intelligence. Prior to Uber, Jeff was the Loy and Edith Harris Associate Professor in Computer Science at the University of Wyoming.
Agrim Gupta (Stanford)
Agrim Gupta: Embodied Intelligence via Learning and Evolution
Agrim Gupta is a third-year Ph.D. student in Computer Science at Stanford University, advised by Fei-Fei Li and part of the Stanford Vision and Learning Lab. Working at the intersection of machine learning, computer vision, and robotics, his research focuses on understanding and building embodied agents. His research has been covered by popular media outlets such as The Economist, TechCrunch, VentureBeat, and MIT Technology Review. Previously, he was a Research Engineer at Facebook AI Research, where he worked on building datasets and algorithms for long-tailed object recognition.