Pulkit Katdare, Shuijing Liu and Katherine Driggs-Campbell
Human Centered Autonomy Lab
University of Illinois at Urbana-Champaign
Accepted at ICRA 2022
[Paper], [Video], [Code]
Applying reinforcement learning (RL) methods on robots typically involves training a policy in simulation and deploying it on a robot in the real world. Because of the model mismatch between the real world and the simulator, RL agents deployed in this manner tend to perform suboptimally. To tackle this problem, researchers have developed robust policy learning algorithms that rely on synthetic noise disturbances. However, such methods do not guarantee performance in the target environment. We propose a convex risk minimization algorithm to estimate the model mismatch between the simulator and the target domain using trajectory data from both environments. We show that this estimator can be used along with the simulator to evaluate the performance of an RL agent in the target domain, effectively bridging the gap between these two environments. We also show that the convergence rate of our estimator is of the order of n^(-0.25), where n is the number of training samples. In simulation, we demonstrate how our method effectively approximates and evaluates performance on the Gridworld, Cartpole, and Reacher environments across a range of policies. We also show that our method is able to estimate the performance of a 7-DOF robotic arm using the simulator and remotely collected data from the robot in the real world.
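To make the model-mismatch estimation step concrete, here is a minimal, hypothetical sketch of one standard way such a ratio can be estimated with a convex (logistic) loss: train a classifier to distinguish simulator transitions from real-world transitions and convert its probabilities into a density ratio. This is an illustrative construction under assumed data formats, not the exact estimator from the paper; all function and variable names are ours.

```python
# Illustrative sketch (assumed setup): estimate a per-transition mismatch
# ratio p_real(s, a, s') / p_sim(s, a, s') by classifying which environment
# a transition came from, using a convex logistic loss.
import numpy as np
from sklearn.linear_model import LogisticRegression

def estimate_mismatch_ratio(sim_transitions, real_transitions):
    """Both inputs: arrays of shape (n, d), each row a flattened
    (state, action, next_state) tuple. Assumes roughly balanced datasets."""
    X = np.vstack([sim_transitions, real_transitions])
    # Label 0 = simulator, 1 = real world.
    y = np.concatenate([np.zeros(len(sim_transitions)),
                        np.ones(len(real_transitions))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)

    def ratio(transition):
        # For balanced classes, p(real | x) / p(sim | x) approximates the
        # density ratio p_real(x) / p_sim(x).
        p = clf.predict_proba(transition.reshape(1, -1))[0]
        return p[1] / max(p[0], 1e-8)

    return ratio
```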
Reinforcement learning (RL) has been very successful, albeit mostly in simulation. Applying RL directly to robots is difficult and raises serious concerns, such as safety. Instead, engineers tend to train policies in simulation and then deploy them in the real world. A commonly cited issue with this approach is the Sim2Real gap: the difference between the simulator environment and the real-world environment. Algorithms have been proposed to learn robust policies that work around this gap, but they still require real-world fine-tuning to work well.
In this research theme, the main idea is to bridge the gap between the simulator and the real-world robot by leveraging data from the real world. Instead of designing high-fidelity simulators, we augment the reward function in the RL setting such that the cumulative augmented reward is more representative of the real world and evaluates performance accurately.
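As a rough illustration of how an estimated mismatch ratio could feed into evaluation, the sketch below rolls out a policy in the simulator and reweights the simulator rewards by the accumulated ratio along the trajectory, so that the weighted return stands in for real-world performance. The `env`, `policy`, and `ratio_fn` interfaces are assumptions (Gymnasium-style environment, a callable policy, and the ratio function sketched above); this mirrors the high-level idea rather than the paper's exact algorithm.

```python
# Illustrative sketch (assumed interfaces): evaluate a policy in the simulator
# with returns reweighted by estimated model-mismatch ratios.
import numpy as np

def evaluate_in_sim(env, policy, ratio_fn, episodes=100, gamma=0.99):
    returns = []
    for _ in range(episodes):
        state, _ = env.reset()
        done, weight, ret, t = False, 1.0, 0.0, 0
        while not done:
            action = policy(state)
            next_state, reward, terminated, truncated, _ = env.step(action)
            done = terminated or truncated
            transition = np.concatenate(
                [state, np.atleast_1d(action), next_state])
            # Accumulate the mismatch ratio along the trajectory and use it
            # to reweight the simulator reward at this step.
            weight *= ratio_fn(transition)
            ret += (gamma ** t) * weight * reward
            state, t = next_state, t + 1
        returns.append(ret)
    return float(np.mean(returns))
```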
Email us: katdare2@illinois.edu