Ludovic Righetti is an Associate Professor in the Electrical and Computer Engineering Department and in the Mechanical and Aerospace Engineering Department at the Tandon School of Engineering of New York University, and a Senior Researcher at the Max-Planck Institute for Intelligent Systems in Germany. He holds an engineering diploma in Computer Science and a Doctorate in Science from the Ecole Polytechnique Fédérale de Lausanne, Switzerland. He was a postdoctoral fellow at the University of Southern California before starting the Movement Generation and Control Group at the Max-Planck Institute for Intelligent Systems. He has received several awards, most notably the 2010 Georges Giralt PhD Award, the 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems Best Paper Award, the 2016 IEEE Robotics and Automation Society Early Career Award, and the 2016 Heinz Maier-Leibnitz Prize. His research focuses on the planning and control of movements for autonomous robots, with a special emphasis on legged locomotion and manipulation. He is more broadly interested in questions at the intersection of decision making, automatic control, optimization, applied dynamical systems, and machine learning, and their applications to physical systems.
In this presentation, I will argue that the main role of impedance modulation is to trade off the need to reject external disturbances against the unavoidable uncertainty in contact locations. Stiffness increases tracking accuracy but can lead to catastrophic results if contact with the environment is made unexpectedly. Compliance and damping, on the other hand, help create safe contacts but can reduce tracking performance. To support this claim, I will present an algorithm that efficiently computes optimal impedance schedules, explicitly trading off contact uncertainty against external disturbances, and show that this can significantly increase robustness during locomotion on unknown terrains. I will then discuss a reinforcement learning approach that incorporates structure into the learned control policy to enforce explicit impedance learning. In particular, I will show that the method can create behaviors that are robust to contact uncertainty (location, stiffness, friction) and transfer to real robots. All results will be demonstrated on our novel open-source quadruped robot, which is capable of a large range of impedance modulation.
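To make the stiffness-versus-compliance trade-off concrete, the following is a minimal sketch of a joint-space impedance control law. This is a generic illustration, not the algorithm from the talk: the gain values and the two-joint setup are invented for demonstration, and the law shown is the standard τ = K(q_des − q) − D q̇ form.

```python
import numpy as np

def impedance_torque(q, dq, q_des, K, D):
    """Joint-space impedance law: tau = K (q_des - q) - D dq.

    Large K pulls the joints stiffly toward the reference (accurate
    tracking, but large forces on unexpected contact); small K with
    relatively larger D yields a compliant, damped response that is
    safer when the contact location is uncertain.
    """
    return K @ (q_des - q) - D @ dq

# Hypothetical 2-joint state with a small tracking error and some velocity.
q = np.array([0.1, -0.2])
dq = np.array([0.5, 0.0])
q_des = np.zeros(2)

# Two illustrative gain schedules (values made up for the example):
stiff = impedance_torque(q, dq, q_des,
                         K=np.diag([200.0, 200.0]), D=np.diag([2.0, 2.0]))
compliant = impedance_torque(q, dq, q_des,
                             K=np.diag([20.0, 20.0]), D=np.diag([5.0, 5.0]))
```

For the same tracking error, the stiff schedule commands much larger torques than the compliant one; an impedance schedule, as discussed in the talk, varies such gains over time to balance disturbance rejection against contact uncertainty.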