Training Robots to Evaluate Robots: Example-Based Interactive Reward Functions for Policy Learning

Best Paper, Conference on Robot Learning 2022


Kun Huang, Edward S. Hu, Dinesh Jayaraman

GRASP Lab, University of Pennsylvania

Abstract

Physical interactions can often help reveal information that is not readily apparent. For example, we may tug at a table leg to evaluate whether it is built well, or turn a water bottle upside down to check that it is watertight. We propose to train robots to acquire such interactive behaviors automatically, for the purpose of evaluating the result of an attempted robotic skill execution. These evaluations in turn serve as "interactive reward functions" (IRFs) for training reinforcement learning policies to perform the target task, such as screwing the table leg tightly. In addition, even after task policies are fully trained, IRFs can serve as verification mechanisms that improve online task execution. We show how to train IRFs from examples for door locking and weighted block stacking in simulation, and screw tightening on a real robot. In all cases, IRFs enable large performance improvements, even outperforming baselines with access to demonstrations or carefully engineered rewards.

Conference on Robot Learning Oral Presentation


Common Realistic Problems

Many important properties of a scene cannot be determined by passive perception alone; interaction with the environment is needed.

Our Method

In a partially observable environment, we first learn an initial task policy using passive classifier-based rewards (left). We then train an IRF policy to distinguish the provided "actionable positive examples" from negative examples generated by the initial task policy (middle). Finally, we use the IRF policy to provide correct rewards for training the LIRF task policy (right), as sketched below.
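To make the three-stage pipeline concrete, the following is a minimal Python sketch of its structure only. All names here (train_policy, run_episode, the stand-in classifiers) are hypothetical placeholders and not the authors' code; a real implementation would substitute an actual RL algorithm (e.g. SAC or PPO), a learned success classifier, and real environment rollouts.

```python
# Structural sketch of the three-stage LIRF training scheme.
# Every helper below is a hypothetical stub so the sketch runs end to end.

import numpy as np

def run_episode(policy=None):
    """Placeholder rollout: returns a random final observation vector."""
    return np.random.randn(8)

def train_policy(reward_fn, n_episodes=100):
    """Placeholder for an RL algorithm: rolls out episodes and queries the
    reward on final states. Here it simply returns the reward function as
    the 'policy' so the sketch is self-contained."""
    for _ in range(n_episodes):
        final_obs = run_episode()
        _ = reward_fn(final_obs)  # a real learner would update the policy here
    return reward_fn

# Stage 1: initial task policy trained with a passive, classifier-based reward.
passive_classifier = lambda obs: float(obs.mean() > 0)   # stand-in goal classifier
initial_task_policy = train_policy(passive_classifier)

# Stage 2: IRF policy, trained so its interactions separate the provided
# "actionable positive examples" from end states of the initial task policy.
positives = [np.random.randn(8) + 1.0 for _ in range(32)]          # positive examples
negatives = [run_episode(initial_task_policy) for _ in range(32)]  # negatives from Stage 1
threshold = np.mean([x.mean() for x in positives + negatives])
irf_discriminator = lambda obs: float(obs.mean() > threshold)      # stand-in discriminator
irf_policy = train_policy(irf_discriminator)

# Stage 3: final (LIRF) task policy, rewarded by rolling out the IRF policy
# on the task policy's end states and using the discriminator's verdict.
def irf_reward(final_obs):
    probe_obs = run_episode(irf_policy)   # IRF policy probes the resulting scene
    return irf_discriminator(probe_obs)

lirf_task_policy = train_policy(irf_reward)
```

The key structural point the sketch tries to convey is that the reward for the final task policy is not a static classifier on the end state, but the outcome of an additional interactive rollout by the IRF policy.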

Video Examples

[Embedded supplementary slides: IRF_Supp_Slides]