Learning Reward Functions from Scale Feedback

Nils Wilde*, Erdem Bıyık*, Dorsa Sadigh, Stephen L. Smith

Paper | Code

Supplementary Video

Abstract

Today's robots are increasingly interacting with people and need to efficiently learn inexperienced users' preferences. A common framework is to iteratively query the user about which of two presented robot trajectories they prefer. While this minimizes the user's effort, a strict choice does not yield any information on how much one trajectory is preferred. We propose scale feedback, where the user utilizes a slider to give more nuanced information. We introduce a probabilistic model of how users provide feedback and derive a learning framework for the robot. We demonstrate the performance benefit of scale feedback in simulations, and validate our approach in two user studies suggesting that scale feedback enables more effective learning in practice.

Approach

Scale feedback enables users to provide more fine-grained feedback in preference-based learning. Instead of making a strict pairwise comparison in a given query, i.e., choosing either option A or option B, users can convey roughly how much more they prefer one option over the other. This extra information enables more efficient learning of reward functions in robotics. In addition, we adapt state-of-the-art active querying techniques to generate the most informative queries, which further improves data efficiency.
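To make the idea concrete, below is a minimal sketch of how a slider response could be modeled and used to update a belief over reward weights. It assumes a linear reward over trajectory features and a simple Gaussian noise model on the slider value; the function names, the clipping to [-1, 1], and the sample-reweighting update are illustrative assumptions, not the exact probabilistic model or learning framework from the paper.

```python
import numpy as np

def simulated_scale_feedback(w_true, features_a, features_b, noise_std=0.1):
    """Simulate a user's slider response for a pairwise query.

    Hypothetical model: the response is the reward difference between option A
    and option B under the true weights, plus Gaussian noise, clipped to [-1, 1].
    A value near +1 means A is strongly preferred, -1 means B, 0 means indifferent.
    """
    reward_diff = w_true @ (features_a - features_b)
    noisy = reward_diff + np.random.normal(0.0, noise_std)
    return np.clip(noisy, -1.0, 1.0)

def update_belief(samples, features_a, features_b, response, noise_std=0.1):
    """Reweight sampled reward hypotheses by how well they explain the response.

    Each row of `samples` is a candidate weight vector w; its likelihood is a
    Gaussian centered at the reward difference it predicts (a simple stand-in
    for a full probabilistic user model).
    """
    predicted = samples @ (features_a - features_b)
    log_lik = -0.5 * ((response - predicted) / noise_std) ** 2
    weights = np.exp(log_lik - log_lik.max())
    return weights / weights.sum()

# Toy usage: 2-D feature space, candidate reward weights sampled on the unit circle.
rng = np.random.default_rng(0)
angles = rng.uniform(0, 2 * np.pi, size=500)
samples = np.stack([np.cos(angles), np.sin(angles)], axis=1)

w_true = np.array([0.8, 0.6])                       # unknown "true" user preference
phi_a, phi_b = np.array([0.9, 0.1]), np.array([0.2, 0.7])  # features of the two trajectories

response = simulated_scale_feedback(w_true, phi_a, phi_b)
posterior = update_belief(samples, phi_a, phi_b, response)
print("slider response:", response)
print("posterior mean weights:", (posterior[:, None] * samples).sum(axis=0))
```

The key contrast with strict pairwise comparisons is that a single slider response constrains not just the sign of the reward difference but also its rough magnitude, so each query can carry more information about the user's reward weights.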