Learning Legged Locomotion Workshop @ ICRA 2019
Legged robots are notoriously difficult to control. Recent progress in machine learning has shown promise for automatically designing robust and agile locomotion controllers. However, most of these learning-based methods remain limited to simulation or to simple hardware platforms. Many challenges remain in bringing learning-based control approaches to real legged robots, including the reality gap, safe exploration, continuous data collection, data-efficient learning algorithms, experimental evaluation, and hardware robustness.
This workshop brings together experts in legged robotics and machine learning/reinforcement learning to discuss the state of the art and open challenges in learning-based control of legged robots.
Topics of Interest
Learning-based control for legged locomotion:
- Reinforcement learning / evolutionary strategies
- Model-based learning
- Learning in simulation
- On-robot learning
- Perception for unstructured terrain locomotion
- Sim-to-real transfer
- Hardware platforms for learning
- Benchmarks
- State estimation
- ...
Call for Posters & Robot Demos
Please consider contributing by submitting an extended abstract (1-2 pages). Authors of accepted abstracts will be invited to present a poster during the workshop. We encourage submissions of work in progress, experimental hardware results, and "lessons learned" that benefit the community.
Deadline for submission: March 29, 2019. Author notification: April 26, 2019.
Submit your abstract to: learningleggedlocomotion@gmail.com
We also welcome robot demos! Please contact us for more details.
Organizing Committee
- Ken Caluwaerts - Robotics at Google
- Atil Iscen - Robotics at Google
- Jie Tan - Robotics at Google
- Tingnan Zhang - Robotics at Google
- Karen Liu - Georgia Tech
- Ludovic Righetti - NYU
- Jonathan W. Hurst - OSU/Agility Robotics
contact: learningleggedlocomotion@gmail.com
Schedule
May 24, 2019 - Room 517d
09:00 - 09:10 Introduction
09:10 - 10:00 Session 1 (2 speakers)
- Speaker 1 (09:10 - 09:35): Victor Barasuol (IIT): Learning Applied to Modular Locomotion Frameworks
- Speaker 2 (09:35 - 10:00): Michiel van de Panne (UBC): Learning dynamic locomotion skills for Cassie: an iterative design approach
10:00 - 10:30 Poster session (see below) / Robot demos / Coffee break
10:30 - 12:10 Session 2 (4 speakers)
- Speaker 3 (10:30 - 10:55): Jemin Hwangbo (ETH Zurich): Sim-to-real transfer of dynamic and agile locomotion policies
- Speaker 4 (10:55 - 11:20): Erwin Coumans (Robotics at Google): Sim-to-real for quadruped locomotion
- Speaker 5 (11:20 - 11:45): Shishir Kolathaya & Abhik Singla (IISc, Bengaluru): Realizing Learned Quadruped Locomotion Behaviors through Motion Primitives
- Speaker 6 (11:45 - 12:10): Deepali Jain (Robotics at Google): Learning complex and agile legged locomotion skills
12:10 - 13:20 Lunch break
13:20 - 15:00 Session 3 (4 speakers)
- Speaker 7 (13:20 - 13:45): Aaron Ames (Caltech): Learning the Model to Reality Gap in Dynamic Robots
- Speaker 8 (13:45 - 14:10): Sangbae Kim (MIT): Robots for Robust Physical Interaction
- Speaker 9 (14:10 - 14:35): Aaron Johnson (CMU): Online and offline learning with contact dynamics
- Speaker 10 (14:35 - 15:00): Arun Ahuja (DeepMind): Learning hierarchical policies for control of a simulated humanoid
15:00 - 15:30 Poster session (see below) / Robot demos / Coffee break
15:30 - 16:00 Panel discussion (moderator: Jonathan W. Hurst)
Posters
- Multi-Objective Body Stabilization of a Legged Robot via Distributed Reinforcement Learning: Guillaume Sartoretti, Katayoon Goshvadi, Howie Choset. Poster, Abstract
- Geometric mechanics as a seed for learning-based gait design: Baxi Chong, Guillaume Sartoretti, Yunjin Wu, Yasemin Ozkan Aydin, Chaohui Gong, Jennifer M Rieser, Haosen Xing, Daniel I Goldman, Howie Choset
- Biologically-Inspired Deep Reinforcement Learning of Modular Control for a Six-Legged Robot: Kai Konen, Timo Korthals, Andrew Melnik, Malte Schilling. Abstract
- Learning and adapting quadruped gaits with the "Intelligent Trial & Error" algorithm: Eloïse Dalin, Pierre Desreumaux, Jean-Baptiste Mouret. Poster, Abstract
- Lessons Learned from Real-World Experiments with DyRET: the Dynamic Robot for Embodied Testing: Tønnes F. Nygaard, Jørgen Nordmoen, Charles P. Martin, Kyrre Glette. Poster, Abstract
- Inverse Optimal Control from Demonstrations with Mixed Qualities: Kyungjae Lee, Yunho Choi, Songhwai Oh
- Learning Skills for Humanoid Robots from Video Demonstrations: Jian Zhang, Mario Srouji, Ruslan Salakhutdinov