NOTE: All workshops at ICRA 2021 will be virtual. To best account for time-zone conflicts, a cycle of (virtual) talks is organized during the two weeks before the conference period (see below for the schedule). These talks will be recorded and uploaded to YouTube, and a link will be provided here after each talk. On the day of the workshop, we will have a poster session, a panel discussion, and a live (virtual) demo session.

Demo Session

On Monday May 31st 2021, from 4-6pm (CET), we will have a live demo session where different labs will show their most recent real-robot experiments. This session will also be held over Zoom, and you will be able to ask the presenter of each demo questions.

Panel Discussion

We will have two panel discussions on Monday May 31st, one from 3-4pm (CET) and one from 7-8pm (CET). More information about the speakers of each panel will be announced later.

Wednesday, May 19th

18-19 (CET)

Michiel Van de Panne

Title: MPC and RL: Two different roads to Legged Locomotion, and that's OK

Abstract: The state of the art in legged locomotion has advanced significantly over the past decade, driven by numerous advances in model predictive control (MPC) and reinforcement learning (RL). Which of these methods has greater promise, and why? In this talk I'll begin with a sketch of the many versions that exist of the "locomotion problem", and then give a caricature of the respective merits and limitations of MPC and RL approaches. I'll also make connections with general ideas in optimization and behavior, including learning curricula and Kahneman's notion of System 1 and System 2 behaviors. I'll ground these views in a variety of our own recent work on biped and quadruped control, as applied to human movement simulations, biped robots, and quadruped robots. (recording of the talk)
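The System 1 / System 2 analogy can be made concrete with a toy sketch (ours, not the speaker's; the double-integrator model, horizon, and feedback gains below are arbitrary stand-ins). MPC deliberates online, re-solving an optimization at every control step, while an RL policy is trained offline and is nearly free to query at run time:

import numpy as np

# Toy double-integrator: state x = [position, velocity], control u = acceleration.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([0.0, 0.1])

def mpc_action(x, horizon=20):
    """System-2 flavor: deliberate online search, redone at every step.
    Brute force over constant inputs keeps the sketch short; a real MPC solves a QP."""
    best_u, best_cost = 0.0, np.inf
    for u in np.linspace(-1.0, 1.0, 101):
        xk, cost = x.copy(), 0.0
        for _ in range(horizon):
            xk = A @ xk + B * u
            cost += xk @ xk + 0.01 * u * u
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

def rl_action(x, w=np.array([-1.0, -1.8])):
    """System-1 flavor: a cheap reactive policy; the fixed gains stand in for
    a neural network whose training happened offline."""
    return float(np.clip(w @ x, -1.0, 1.0))

x = np.array([1.0, 0.0])
for _ in range(50):
    x = A @ x + B * mpc_action(x)   # swap in rl_action(x): same interface
print("final state:", x)

The two controllers expose the same interface (state in, action out); the difference is where the computation is spent.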

Friday, May 21st

17-18 (CET)

Jemin Hwangbo

Title: Large-scale policy training for robots

Abstract: Deep reinforcement learning is a promising tool for controlling complex articulated systems. Sim-to-real transfer methods, in particular, have been proven to scale to multi-legged systems and challenging natural terrains. However, the behaviors that we get from existing policies are still single-faceted. Learning diverse and adaptive behaviors requires orders of magnitude more samples than what has been demonstrated in the literature. In this talk, I'll discuss how we can generate billions and trillions of samples for training policies for diverse and agile behaviors. (recording of the talk)
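As a rough illustration of where such sample counts come from (a hypothetical sketch, not the speaker's actual pipeline), the standard recipe is to step thousands of simulated robots in lock-step so that every physics step yields thousands of state-action samples:

import numpy as np

NUM_ENVS = 4096                 # simulated robots stepped in lock-step
STATE_DIM, ACT_DIM = 12, 4

rng = np.random.default_rng(0)
states = rng.normal(size=(NUM_ENVS, STATE_DIM))
W = rng.normal(scale=0.1, size=(STATE_DIM, ACT_DIM))   # stand-in policy weights

samples = 0
for _ in range(1000):
    actions = np.tanh(states @ W)                      # one batched policy query
    # One batched "physics" step; a real pipeline calls a GPU simulator here.
    states = 0.99 * states + 0.01 * np.pad(actions, ((0, 0), (0, STATE_DIM - ACT_DIM)))
    samples += NUM_ENVS
print(f"{samples:,} state-action samples")             # ~4 million per 1,000 steps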


Friday, May 21st

18-19 (CET)

Jonathan W. Hurst

Title: Learning Legged Locomotion: RL as one tool in an engineered system

Abstract: Legged locomotion is more interesting than just moving one foot in front of another while balancing. It is an interesting and complex dynamical phenomenon, analogous in some ways to pendular dynamics, with stabilizing properties and energy cycles that handle disturbances and regulate inherently as part of the behavior. Simple models, such as the spring-loaded inverted pendulum, seem to capture many of the features of this interesting dynamical phenomenon, but certainly not all of it. Machine learning is a compelling tool to discover and describe much more completely the behaviors that we are after. The key to success is understanding the desired dynamics, creating a machine that is designed as closely as possible to the right dynamics and thus "wants" to walk and run, and then applying machine learning tools within carefully crafted constraints and cost functions. In other words, the robot can learn to walk and run when the engineer knows what the end result should look like and describes the behavior in a language that the learning algorithms can work within. (recording of the talk)
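For readers unfamiliar with the spring-loaded inverted pendulum (SLIP) mentioned above: during stance, a point mass m rides on a massless spring leg of stiffness k and rest length l0, with dynamics m * r'' = k (l0 - |r|) (r/|r|) + m g. A minimal integration sketch (our illustration; the parameter values are arbitrary):

import numpy as np

m, k, l0 = 80.0, 20000.0, 1.0                  # mass [kg], stiffness [N/m], rest length [m]
g = np.array([0.0, -9.81])

def slip_stance(pos, vel, foot, dt=1e-4):
    """Integrate stance dynamics of a point mass on a massless spring leg,
    from touchdown until the leg returns to its rest length (takeoff)."""
    while True:
        leg = pos - foot
        l = np.linalg.norm(leg)
        if l >= l0:                            # spring unloaded: takeoff
            return pos, vel
        acc = (k / m) * (l0 - l) * (leg / l) + g
        vel = vel + acc * dt                   # semi-implicit Euler
        pos = pos + vel * dt

pos, vel = slip_stance(np.array([0.0, 0.97]), np.array([1.0, -0.5]), foot=np.zeros(2))
print("takeoff position:", pos, "velocity:", vel)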

Monday, May 24th

18-19 (CET)

Gerardo Bledt

Title: Generalizing and Improving Regularized Predictive Control for Legged Robots

Abstract: As legged robots have improved dramatically and proved their viability for locomotion, they are starting to enter the real world to accomplish useful tasks. As such, generalized controllers must be developed to handle the realities of operating in unforeseen environments, such as unstructured terrains and under interaction disturbances. Regularized Predictive Control (RPC) is an optimization-based predictive controller that relies on simple regularization heuristics to guide the control inputs towards a known solution, while remaining free to explore for better solutions. Data-driven methods are used to learn improved heuristics for situations that may not have been accounted for due to unmodeled dynamics, difficult-to-analyze states, or parameter uncertainty. By combining heuristic, optimization, and learning techniques and taking advantage of each of their strengths, we can quickly develop controllers to stabilize various legged robotic systems and transfer the controller to the real robots. Recent results have shown the viability of using RPC to robustly control various quadruped and biped robots with the same underlying controller. (recording of the talk)
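The regularization idea can be sketched schematically (a simplified illustration under toy scalar dynamics, not the actual RPC formulation): the predictive cost trades off tracking a reference against deviating from a simple heuristic input, so the optimizer stays near known-good behavior unless tracking error justifies leaving it:

import numpy as np
from scipy.optimize import minimize

H = 10                                   # prediction horizon
x_ref = np.zeros(H)                      # desired (scalar) state trajectory
u_heur = 0.5 * np.ones(H)                # simple heuristic input schedule
Q, R = 10.0, 1.0                         # tracking vs. regularization weights

def rollout(u, x0=1.0):
    """Toy scalar dynamics standing in for the robot model."""
    xs, x = [], x0
    for uk in u:
        x = 0.9 * x + 0.2 * uk
        xs.append(x)
    return np.array(xs)

def cost(u):
    # Track the reference, but regularize inputs toward the heuristic solution.
    return Q * np.sum((rollout(u) - x_ref) ** 2) + R * np.sum((u - u_heur) ** 2)

sol = minimize(cost, u_heur)             # warm-started at the heuristic itself
print("optimized inputs:", np.round(sol.x, 3))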

Wednesday, May 26th

18-19 (CET)

Daniel Holden

Title: Robotic Characters in Video Games

Abstract: Physically simulating the animation of characters in video games has the potential to greatly increase the realism and immersion we experience when playing them. But it also comes with a whole host of new challenges: performance, memory usage, and, of course, control. How can we get the player to control and direct a physically simulated character in a way that is fun, responsive, and realistic? In this talk we will show our latest research, applying techniques from robotics and reinforcement learning to video games to create realistic, player-facing controllers for physically simulated characters. (recording of the talk)

Thursday, May 27th

18-19 (CET)

Majid Khadiv

Title: Model and data: two essential ingredients for controlling legged robots

Abstract: Legged robots have hybrid and intrinsically unstable nonlinear dynamics with many constraints. Furthermore, these systems are very high-dimensional, which makes them intractable to control with many of the available formal approaches. Among different approaches, Model Predictive Control (MPC) and Reinforcement Learning (RL) are two valid options that have achieved competitive results during the past few years. In this talk, I will present my recent attempts at tackling the legged locomotion control problem using both model-based and data-driven approaches. I will also summarize the experimental work we have done during the past two years on the open-source legged robots that we developed in the Open Dynamic Robot Initiative (ODRI). (recording of the talk)

Friday, May 28th

17-18 (CET)

Nicolas Heess

Title: Towards embodied intelligence

Abstract: Enabling simulated and real-world embodied agents to think and move like animals and humans is one of the shared goals of AI researchers, roboticists, and the computer graphics community. In my talk I will discuss work that we have conducted towards the longer-term goal of building intelligent simulated humanoid characters that possess locomotion and manipulation skills, that can see and remember, and that interact with each other. I will bring together results from several studies and explain how large-scale RL, imitation learning, hierarchical skill representations, and multi-agent training algorithms can work together to this end. (recording of the talk)

Friday, May 28th

18-19 (CET)

Patrick Wensing

Title: Tailoring Model Complexity in MPC of Legged Locomotion

Abstract: As we look to send legged robots out into the wild, a fundamental need is the ability to creatively adapt their motions to the environment and the task at hand. While the solution to this challenge will likely rely on a combination of model-based and learning-based strategies, this talk will concentrate primarily on advances on the model-based side. The presentation will focus on recent work on multi-resolution model predictive control, which considers multiple dynamic models over the prediction horizon. Several other advances to the numerical methods supporting these motion optimization pipelines will also be detailed. (recording of the talk)
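The multi-resolution idea can be illustrated with a hypothetical rollout sketch (ours, not the speaker's formulation): the early part of the horizon is predicted with a detailed model at a fine timestep, and the tail with a simpler, coarser model, so modeling effort is spent where predictions matter most:

import numpy as np

def detailed_step(x, u, dt=0.01):
    """Fine-timestep model for the near horizon (stand-in for whole-body dynamics)."""
    pos, vel = x
    return np.array([pos + vel * dt, vel + (u - 0.5 * vel) * dt])

def coarse_step(x, u, dt=0.05):
    """Coarse-timestep model for the tail (stand-in for a centroidal model)."""
    pos, vel = x
    return np.array([pos + vel * dt, vel + u * dt])

def predict(x0, us_fine, us_coarse):
    """Multi-resolution rollout: detailed model first, then the simple model."""
    x, traj = x0, []
    for u in us_fine:
        x = detailed_step(x, u)
        traj.append(x)
    for u in us_coarse:
        x = coarse_step(x, u)
        traj.append(x)
    return np.array(traj)

traj = predict(np.array([0.0, 1.0]), us_fine=np.zeros(10), us_coarse=np.zeros(8))
print("predicted states over the horizon:", traj.shape)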

Poster Session

The poster session will be held on Monday May 31st 2021, from 6-7pm (CET), on the Wonder platform. We have received a very interesting set of abstracts from various labs around the world:

Abstract #1 (2-min teaser) What must be modelled to make model-free reinforcement learning work for legged robots?

Abstract #2 (2-min teaser) A Feasibility-Based MPC Framework for Robust Gait Generation

Abstract #3 (2-min teaser) RHECALL: Receding-Horizon Experience-Controlled Adaptive Legged Locomotion

Abstract #4 (2-min teaser) RLOC: Terrain-Aware Legged Locomotion using Reinforcement Learning and Optimal Control

Abstract #5 (2-min teaser) Visually Guided Agile Quadruped Locomotion

Abstract #6 (2-min teaser) Importance of Local Information in Deep Reinforcement Learning of Locomotion and Decentralized Learning of Local Control Modules

Abstract #7 (2-min teaser) Curricular Policy Search for Quadruped Jumping

Abstract #9 (2-min teaser) Robust Feedback Locomotion for 3D Biped Robots Using Reinforcement Learning

Abstract #10 (2-min teaser) Efficient and Accurate Multi-Body Simulation with Stiff Viscoelastic Contacts

Abstract #11 (2-min teaser) Optimizing Impedance Profiles for Uncertain Contact Interactions

Abstract #12 (2-min teaser) Efficient, Generic and Robust Resolution of Constrained Dynamics

Abstract #13 (2-min teaser) Fast MPC for bipedal walking and running control

Abstract #14 (2-min teaser) Model-free Reinforcement Learning for Robust Locomotion Using Trajectory Optimization for Exploration

Abstract #15 (2-min teaser) Quadrupedal Locomotion and Pose Adaptation Via NMPC and Mobility Optimization