2020 Virtual Conference on Reinforcement Learning for Real Life
June 27-28, 2020
Atlantic Run: San Francisco (SF) 9am-12pm, June 27
Pacific Run: SF 6-9pm, June 27
Hours 0:00-1:00 Live panel discussion / Q&A
Hours 1:00-3:00 Online poster sessions
Invited Speakers / Panelists
Invited speakers and moderators will share their expertise on the real-life aspects of RL through pre-recorded videos. The moderators will host live panel discussions at the listed times, and audience members can submit questions via polls during these discussions.
RL+healthcare, SF 9-10am, June 27
Moderator: Susan Murphy, Harvard
Chair: Omer Gottesman, Harvard
Ask and vote on questions for the panelists.
YouTube stream: https://youtu.be/dDSENm2smkQ
RL general topics, SF 6-7pm, June 27
Panelists: Ed Chi, Google; Jason Gauci, Facebook
Chair: Lihong Li, Google
Ask and vote on questions for the panelists.
YouTube stream: https://youtu.be/lDdC8Gjat9w
Call For Papers
Reinforcement learning (RL) is a general paradigm for learning, prediction, and decision making that applies broadly in science, engineering, and the arts. RL has seen prominent successes in many problems, such as Atari games, AlphaGo, robotics, recommender systems, and AutoML. However, applying RL in the real world remains challenging, so a natural question is:
What are the challenges of applying RL in the real world, and how can we solve them?
The main goals of the conference are to:
(1) identify key research problems that are critical for the success of real-world applications;
(2) report progress on addressing these critical issues; and
(3) have practitioners share their success stories of applying RL to real-world problems, and the insights gained from the applications.
We invite you to submit papers that successfully apply RL algorithms to real-life problems by addressing practically relevant RL issues. Topics of interest are broad, including but not limited to:
Practical RL algorithms, covering all algorithmic challenges of RL, especially those directly motivated by real-world applications;
Practical issues: generalization; sample/time/space efficiency; exploration vs. exploitation; reward specification and shaping; scalability; model-based learning (model validation and model error estimation); incorporating prior knowledge; safety; accountability; interpretability; reproducibility; hyperparameter tuning;
Applications: advertising, autonomous driving, business, chemical synthesis, conversational AI, drawing, drug design, education, energy, finance, healthcare, industrial control, music, recommender systems, robotics, transportation, and other problems in science, engineering, and the arts.
Deadline: June 15, 2020
Notification: June 21, 2020
Style files: we recommend that submissions use the provided style files (adapted from the ICLR files, with the header changed to "Presented as a poster at RL4RealLife 2020"). Previously published papers may keep their original format.
Accepted papers are non-archival and are not formally peer-reviewed. We welcome submissions of recently published work.
Authors will pre-record a video presentation of their work and will host their own video-conferencing channels (e.g., Zoom or Google Hangouts) during the online poster session portion of the conference.
To give authors flexibility, we do not fix the length of pre-recorded talks. We recommend 5 minutes for a concise introduction, or up to 20 minutes for a full discussion; talks should not exceed 30 minutes.
List of papers
Variance Reduction for Evolutionary Strategies via Structured Control Variate, Yunhao Tang, Krzysztof Choromanski, Alp Kucukelbir, paper
Learning to Dock Robustly, Mohamed Elsayed, Hager Radi, A. Rupam Mahmood, video
An Empirical Investigation of the Challenges of Real World Reinforcement Learning, Gabriel Dulac-Arnold, Nir Levine, Daniel J. Mankowitz, Cosmin Paduraru, Sven Gowal, Jerry Li, Todd Hester, paper, video
Conservative Q-learning for Offline Reinforcement Learning, Aviral Kumar, Aurick Zhou, George Tucker, Sergey Levine, paper
RL Unplugged: Benchmarks for Offline Reinforcement Learning, Caglar Gulcehre*, Ziyu Wang*, Alexander Novikov*, Tom Le Paine*, Sergio Gómez Colmenarejo, Konrad Zolna, Rishabh Agarwal, Josh Merel, Daniel Mankowitz, Cosmin Paduraru, Gabriel Dulac-Arnold, Jerry Li, Mohammad Norouzi, Matt Hoffman, Ofir Nachum, George Tucker, Nicolas Heess and Nando de Freitas, paper, video
Automation of the Fresh Food Supply Chain Using Model-Based Planning, Harlan Seymour, Andy Chen, Philip Cerles, Sawyer Birnbaum, Siddarth Sampangi, Danny Nemer, Volodymyr Kuleshov, paper
Interpretable Off-Policy Evaluation in Reinforcement Learning by Highlighting Influential Transitions, Omer Gottesman, Joseph Futoma, Yao Liu, Sonali Parbhoo, Leo Anthony Celi, Emma Brunskill, Finale Doshi-Velez, paper, video
Real-time Text-based Chat
Poster authors can create their own topics, audience members can hold discussions before, during, and after the virtual conference, and organizers will make announcements. You are welcome to join our Slack workspace for RL for Real Life.
We encourage participants to host virtual booths to discuss research topics, look for job opportunities, socialize, and more.
Contact by email