Speakers

Anca Dragan

Title: Learning from people beyond imitation

Abstract: We are used to learning from demonstrations, but there is so much more information out there about how the robot should do its task. Of course, people instruct robots in natural language, and, in doing so, reveal not only what actions to take now, but also their general preferences. When the robot gets too close and they push it away, they reveal information about the proxemics considerations the robot should take into account. When they power the robot off, they are giving a very strong signal that whatever the robot was about to do was very bad. Even the state of the world ends up leaking information about what the robot should and shouldn't do, because people have been acting in the environment according to their preferences, and the current state is a result of those actions. In this talk, we will take a journey through some of the explicitly communicated and leaked information we can get from humans.

Ankur Handa

Title: Developing demonstration collection systems for dexterous hand and arm robots and exploring possibilities of smart assistive tele-op

Abstract: The design and development of data collection systems for robots is challenging. How far we can go in developing precise teleop systems, and in making them smart enough to relieve the burden on the user, remains an open question. In this talk, we will look at how to develop a teleop system for highly dexterous hand-arm robots using only vision-based input, and the various challenges involved in the process. Later, we will look into the possibilities of developing future smart teleop systems that facilitate better communication of the task to the robot.

Chelsea Finn

Title: Revisiting the Ins and Outs of Imitation Learning for Robotic Manipulation

Abstract: Imitation learning is a simple and powerful paradigm for data-driven robot learning, but, despite its simplicity, it still presents a number of design choices that can have a drastic impact on performance and generalization. In this talk, I’ll discuss some of the ins and outs of imitation learning, i.e. the inputs and outputs of the algorithm, for robotic manipulation. More specifically, I’ll discuss how choices like camera placement, sensor selection, data collection strategies, and checkpoint selection can, in some cases, dramatically affect how the learned policy performs in different scenarios, and the lessons that we have learned.

Lerrel Pinto

Title: Supercharging Imitation from Pixels for Easier Robot Learning

Abstract: I want to teach robots complex and dexterous behaviors in diverse real-world environments. But what is the fastest way to teach robots in the real world? Among the prominent options in our robot learning toolbox, Sim2real requires careful modeling of the world, while real-world self-supervised learning or RL is far too slow. Currently, the only reasonably efficient approach that I know of is imitating humans. But making imitation learning feasible on real robots is not ‘easy’: current methods often require complicated demonstration collection setups, rely on expert roboticists to train them, and even then need a significant number of demonstrations to learn effectively. In this talk, I will present two ideas that can make robot learning far easier than it currently is. First, to collect demonstrations more easily, we will use vision-based demonstration collection devices, which allow untrained humans to collect demonstrations with consumer-grade products. Second, to learn from these visual demonstrations, we will use new imitation learning algorithms that put data efficiency at the forefront, allowing for significantly faster and easier imitation on a variety of real-world robotic tasks.

Mohi Khansari

Title: Practical Visual End-to-End Imitation Learning: From Data Collection to Real World Deployment

Abstract: Recent work in visual end-to-end learning for robotics has shown the promise of imitation learning across a variety of tasks. While the effectiveness of these approaches has been demonstrated in lab settings, deploying imitation learning at scale in real-world environments has been less explored. At Everyday Robots, working alongside teams at Google, we are building a learning robot that can help anyone with (almost) anything. To help us in the real world, helper robots need to be in it; hence, scaling up robot learning in real-world human environments is our moonshot. In my presentation, I will take you through our journey of developing and applying visual imitation learning to enable everyday robots to open meeting room doors across various Alphabet buildings, day-to-day and in a natural setting. I will also touch on the often-overlooked aspects of imitation learning and share our findings on data collection, data processing, model training, checkpoint selection, and day-to-day model deployment.

Shimon Whiteson

Title: Learning Realistic & Diverse Agents for Autonomous Driving Simulation

Abstract: In this talk, I discuss the challenge of modelling the behaviour of the road users (human drivers, cyclists, and pedestrians) who share the road with autonomous vehicles. Such models are crucial for building realistic simulators, which in turn are crucial for safely training and testing autonomous driving software. I present Symphony, a new approach to learning from demonstration that can train realistic and diverse agents for this purpose. I also present results evaluating Symphony on a dataset containing over 1M rollouts.