Generative Modeling and Model-Based Reasoning for Robotics and AI
ICML 2019 Workshop | June 14th (Friday) | Long Beach, CA, USA
In the recent explosion of interest in deep RL, “model-free” approaches based on Q-learning and actor-critic architectures have received the most attention due to their flexibility and ease of use. However, this generality often comes at the expense of efficiency (statistical as well as computational) and robustness. The large number of required samples and safety concerns often limit direct use of model-free RL for real-world settings.
Model-based methods promise greater efficiency. Given accurate models, trajectory optimization and Monte-Carlo planning methods can efficiently compute near-optimal actions in varied contexts. Advances in generative modeling, unsupervised learning, and self-supervised learning provide methods for learning models and representations that support subsequent planning and reasoning. Against this backdrop, our workshop aims to bring together researchers in generative modeling and model-based control to discuss research questions at their intersection, and to advance the state of the art in model-based RL for robotics and AI. In particular, the workshop aims to make progress on questions related to:
- How can we learn generative models efficiently? Role of data, structures, priors, and uncertainty.
- How can we use generative models efficiently for planning and reasoning? Role of derivatives, sampling, hierarchies, uncertainty, counterfactual reasoning, etc.
- How can we harmoniously integrate model learning and model-based decision making?
- How can we learn compositional structure and environmental constraints? Can this be leveraged for better generalization and reasoning?
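To make the planning side of these questions concrete, here is a minimal random-shooting model-predictive-control sketch. The dynamics model `f` and reward `r` below are hypothetical placeholders standing in for a learned generative model of the environment; a real system would substitute its own learned components.

```python
import numpy as np

def random_shooting_mpc(dynamics, reward, state, horizon=10, n_samples=256,
                        action_dim=2, rng=None):
    """Return the first action of the best sampled action sequence.

    dynamics(s, a) -> next state and reward(s, a) -> scalar are assumed to
    be (learned or ground-truth) models; here they are placeholders.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    best_return, best_first_action = -np.inf, None
    for _ in range(n_samples):
        s = state
        # Sample a candidate open-loop action sequence and roll it out.
        actions = rng.uniform(-1.0, 1.0, size=(horizon, action_dim))
        total = 0.0
        for a in actions:
            total += reward(s, a)
            s = dynamics(s, a)
        if total > best_return:
            best_return, best_first_action = total, actions[0]
    return best_first_action

# Toy placeholders: linear dynamics, reward for staying near the origin.
f = lambda s, a: s + 0.1 * a
r = lambda s, a: -float(np.sum(s ** 2))
a0 = random_shooting_mpc(f, r, np.array([1.0, -1.0]))
```

Executing only the first action and replanning at every step is what turns this open-loop search into closed-loop MPC, which partially compensates for model error.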
May 3: Paper deadline
(23:59, Anywhere on Earth)
May 20: Notifications
June 14: Workshop
Call for Papers
We invite the submission of short papers, up to 4 pages (excluding references and supplementary material). Submissions should be anonymous and in the ICML 2019 format (see official style guidelines). All accepted submissions will be made available on the workshop website and included in the poster session during the workshop (this does not constitute archival publication). Relevant topics include but are not limited to:
- Unsupervised deep learning, structured probabilistic models, and physics engines for learning models of physical interactions.
- Learning abstract and geometric models of scenes.
- Models and algorithms for learning compositional structure and constraints of the world.
- Learning behavioral models of humans and other agents in non-stationary and game theoretic settings.
Model-Based Planning and Control
- Algorithms and theory for trajectory optimization, planning, and tree search.
- Role of derivatives, sampling, hierarchies, uncertainty, and structure for efficient and robust planning systems, in continuous and discrete spaces.
- Role of learning for efficient planning: learning approximate value functions, learning simplified models and representations for efficient search, etc.
- Model-based reasoning and decision making in non-stationary and game theoretic settings.
Interplay Between Model Learning and Model-Based Control
- Planning algorithms that actively compensate for uncertainties and imperfections in the models.
- Closing the loop to learn models amenable for model-based planning.
- Safe exploration strategies for learning models of physical systems.
We also warmly welcome and encourage scientific position papers on the workshop theme.
Please submit your manuscripts here.