Images (left to right): Pinto & Gupta ICRA '16, Blundell et al. '16, Chen et al. ICCV '15, Levine et al. ISER '16, Wang et al. '16
In conjunction with NIPS 2016, Barcelona.
Organizers: Chelsea Finn, Raia Hadsell, Dave Held, Sergey Levine, Percy Liang
Videos of the workshop are now available here.
This workshop is located in Area 3 of the Centre Convencions Internacional Barcelona.
Deep learning systems that act in and interact with an environment must reason about how their actions will change the world around them. The natural regime for such real-world decision problems involves supervision that is weak, delayed, or entirely absent, and outputs typically arise in sequential decision processes, where each decision affects the next input. This regime poses a challenge for deep learning algorithms, which typically excel with: (1) large amounts of strongly supervised data and (2) a stationary distribution of independently observed inputs. The algorithmic tools for tackling these challenges have traditionally come from reinforcement learning, optimal control, and planning, and indeed the intersection of reinforcement learning and deep learning is currently an exciting and active research area. At the same time, deep learning methods for interactive decision-making domains have also been proposed in computer vision, robotics, and natural language processing, often using tools and algorithmic formalisms different from classical reinforcement learning, such as direct supervised learning, imitation learning, and model-based control. The aim of this workshop is to bring together researchers across these disparate fields. The workshop program will focus both on the algorithmic and theoretical foundations of decision making and interaction with deep learning, and on the practical challenges of bringing deep learning methods to bear in interactive settings, such as robotics, autonomous vehicles, and interactive agents.
Saturday December 10
Morning session 1: 9:00 - 9:15: Introductions
9:15 - 9:40: Joelle Pineau: Deep learning models for natural language interaction
9:40 - 10:05: Honglak Lee: Learning Disentangled Representations with Action-Conditional Future Prediction
10:05 - 10:30: Chris Summerfield: How artificial and biological agents ride the subway
10:30 - 11:00: morning coffee break
Morning session 2: 11:00 - 11:25: Jianxiong Xiao: Bridging the gap between vision and robotics: Where are my labels?
11:25 - 11:35: Spotlight: Fereshteh Sadeghi, Collision Avoidance via Deep RL: Real Vision-Based Flight without a Single Real Image
11:35 - 11:45: Spotlight: Piotr Mirowski, Learning to Navigate in Complex Environments
11:45 - 12:00: morning poster session
12:00 - 14:00: lunch break
Afternoon session 1: 14:00 - 14:25: Abhinav Gupta: Scaling Self-supervision: From one task, one robot to multiple tasks and robots
14:25 - 14:35: Spotlight: Sebastian Höfer, Unsupervised Learning of State Representations for Multiple Tasks
14:35 - 14:45: Spotlight: Jacob Andreas, Modular Multitask Reinforcement Learning with Policy Sketches
14:45 - 15:00: afternoon poster session
15:00 - 15:30: afternoon coffee break (continuation of poster session)
Afternoon session 2:
16:20 - 16:45: Jason Weston: Learning through Dialogue Interactions
16:45 - 16:55: Spotlight: Pararth Shah, Interactive reinforcement learning for task-oriented dialogue management
16:55 - 17:30: contributor “pitch” session
17:30 - 18:15: panel and audience discussion
Joelle Pineau: Deep learning models for natural language interaction
Honglak Lee: Learning Disentangled Representations with Action-Conditional Future Prediction
Chris Summerfield: How artificial and biological agents ride the subway
Recent work in artificial intelligence and machine learning has made great strides towards building agents that behave intelligently in complex environments. For example, the Differentiable Neural Computer (DNC, Graves et al., 2016) is a neural network with content-addressable external memory that can plan novel shortest-path trajectories on random graphs, such as the London Underground system. In my talk, I will discuss this work in the context of studies of planning in humans. I will show evidence that humans plan by searching through hierarchically nested representations of the environment, describing behaviour and brain activity recorded as humans navigated a virtual subway environment.
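For context, the shortest-path task the DNC learns to solve is classically handled by graph search. The sketch below is a conventional breadth-first-search baseline on a toy subway-style map (the `subway` graph and station names are illustrative, not from the talk or the DNC paper):

```python
from collections import deque

def shortest_path(graph, start, goal):
    """Breadth-first search: returns one shortest path as a list of nodes,
    or None if start and goal are not connected."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None

# Toy "subway map": stations and their direct connections (illustrative only).
subway = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}
print(shortest_path(subway, "A", "E"))  # → ['A', 'B', 'D', 'E']
```

Unlike this hand-written search, the DNC learns a planning procedure end-to-end from examples, using its external memory to store and traverse the graph structure.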
Jianxiong Xiao: Bridging the gap between vision and robotics: Where are my labels?
Abhinav Gupta: Scaling Self-supervision: From one task, one robot to multiple tasks and robots
Tim Lillicrap: Data-efficient deep reinforcement learning for continuous control
Call for Papers
We invite the submission of extended abstracts related to machine learning methods for domains involving taking actions and interacting with other agents, including, but not limited to, the following application areas:
Most accepted papers will be presented as posters, but a few selected contributions will be given oral presentations. Accepted papers will be posted in a non-archival format on the workshop website.
Abstracts should be 4 pages long (not including references) in NIPS format. Submissions may include a supplement, but reviewers are not required to read any supplementary material. Abstracts should be submitted by November 8th, 2016 by sending an email to email@example.com. Submissions may be anonymized or not, at the authors' discretion. Work that has already appeared in a journal, workshop, or conference (including NIPS 2016) must be significantly extended to be eligible for workshop submission. Work that is currently under review at another venue or has not yet been published in an archival format as of the date of the deadline (Nov 8th) may be submitted. This includes submissions to ICLR, which are welcome.
Workshop: Saturday, December 10th, 2016
Please refer to the NIPS 2016 website for registration details.
Abhishek Gupta, Coline Devin, YuXuan Liu, Pieter Abbeel, Sergey Levine
Jacob Andreas, Dan Klein, and Sergey Levine
Fereshteh Sadeghi, Sergey Levine
Aravind S. Lakshminarayanan, Sherjil Ozair, Yoshua Bengio
Pararth Shah, Dilek Hakkani-Tür, Larry Heck
Alex X. Lee, Sergey Levine, Pieter Abbeel
Pierre Sermanet, Kelvin Xu, Sergey Levine
Yanlin Han, Piotr Gmytrasiewicz
Xiaoxiao Guo, Tim Klinger, Clemens Rosenbaum, Joseph P. Bigus, Murray Campbell, Ban Kawas, Kartik Talamadupula, Gerald Tesauro, Satinder Singh
Stephen James, Edward Johns
Kapil D. Katyal, Edward W. Staley, Matthew S. Johannes, I-Jeng Wang, Austin Reiter, Phillipe Burlina
Ioannis Chiotellis, Rudolph Triebel, Daniel Cremers
Arna Ghosh, Biswarup Bhattacharya, Somnath Basu Roy Chowdhury
Gregory Kahn, Vitchyr Pong, Pieter Abbeel, Sergey Levine
Yevgen Chebotar, Mrinal Kalakrishnan, Ali Yahya, Adrian Li, Stefan Schaal, Sergey Levine
Wenzhen Yuan, Chenzhuo Zhu, Andrew Owens, Mandayam Srinivasan, Edward Adelson
Piotr Mirowski, Razvan Pascanu, Fabio Viola, Hubert Soyer, Andy Ballard, Andrea Banino, Misha Denil, Ross Goroshin, Laurent Sifre, Koray Kavukcuoglu, Dharshan Kumaran, Raia Hadsell
Antonin Raffin, Sebastian Höfer, Rico Jonschkowski, Oliver Brock, Freek Stulp
Rico Jonschkowski, Oliver Brock
Ilya Kostrikov, Dumitru Erhan, Sergey Levine
Ashvin Nair, Pulkit Agrawal, Dian Chen, Phillip Isola, Pieter Abbeel, Jitendra Malik, Sergey Levine
Harm de Vries, Florian Strub, Sarath Chandar, Olivier Pietquin, Hugo Larochelle, Aaron Courville
Contributor Pitch Session
During our workshop, attendees were invited to sign up for a 3-minute slot in the "pitch session," during which time they could present an interesting idea, a discussion point, late-breaking work, or some other point they wished to share with the group. The pitches that were presented are listed below:
Rico Jonschkowski - Combining Algorithms and Deep Learning
Denis Steckelmacher - Hierarchical RL in POMDPs with Options
Eric Danziger - Conditioning policies on tasks
Grady Williams (firstname.lastname@example.org) - Benchmarking Deep Control and Perception Algorithms with Aggressive Driving
Jay McClelland - [No title]