Learning Multimodal Rewards from Rankings

Vivek Myers, Erdem Bıyık, Nima Anari, Dorsa Sadigh

Paper | Code

Supplementary Video

Abstract

Learning from human feedback is a popular approach for acquiring robot reward functions. However, expert feedback is often assumed to be drawn from an underlying unimodal reward function. This assumption does not always hold, e.g., in settings where multiple experts provide data or where a single expert provides data for different tasks---we thus go beyond learning a unimodal reward and focus on learning a multimodal reward function. We formulate multimodal reward learning as a mixture learning problem and develop a novel ranking-based learning approach, where the experts are only required to rank a given set of trajectories. Furthermore, as access to interaction data is often expensive in robotics, we develop an active querying approach that presents the most informative ranking queries to the experts in order to accelerate the learning process. We conduct experiments and user studies using a multi-task variant of OpenAI's LunarLander and a real Fetch robot, where we collect data from multiple users with different preferences. The results suggest that our approach can efficiently learn multimodal reward functions for robotics tasks, and improves data-efficiency over benchmark methods that we adapt to our learning problem.

Approach

Our method enables learning multimodal reward functions from rankings provided by experts. This is especially important when data comes from multiple experts who may have different preferences about how a task should be done, or when a single expert provides inconsistent data due to external factors (e.g., a driver might prefer more aggressive driving when in a rush).
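To make the mixture formulation concrete, the sketch below shows one common way to score rankings under a mixture of reward modes: a Plackett-Luce ranking likelihood over linearly parameterized rewards, marginalized over mixture components. This is an illustrative simplification, not the paper's exact model; the feature matrix, mode parameters, and mixing coefficients here are hypothetical placeholders.

```python
import numpy as np

def plackett_luce_loglik(ranking, features, w):
    """Log-likelihood of one ranking (trajectory indices, best to worst)
    under a Plackett-Luce model with linear reward parameters w."""
    rewards = features[ranking] @ w  # reward of each ranked trajectory
    ll = 0.0
    for i in range(len(ranking) - 1):
        # log-probability that the i-th item beats all remaining items
        ll += rewards[i] - np.log(np.sum(np.exp(rewards[i:])))
    return ll

def mixture_loglik(rankings, features, modes, mix):
    """Total log-likelihood of a set of rankings under a mixture of
    reward modes. modes: (M, d) parameters; mix: (M,) mixing weights."""
    total = 0.0
    for r in rankings:
        per_mode = np.array([plackett_luce_loglik(r, features, w)
                             for w in modes])
        # marginalize the latent mode assignment for this ranking
        total += np.log(np.sum(mix * np.exp(per_mode)))
    return total
```

With such a likelihood in hand, the mixture parameters can be fit by gradient ascent or an EM-style procedure, with each expert's rankings soft-assigned to reward modes.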

We further develop an active querying approach based on information gain, which improves the data-efficiency of this learning mechanism by selecting the most informative ranking queries to present to the experts.
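A minimal sketch of information-gain-based query selection follows. Assuming a Plackett-Luce response model and a set of posterior samples over reward parameters, it estimates the mutual information between the expert's answer to a candidate query and the reward parameters, then picks the highest-scoring query. The candidate-generation step and the exact acquisition objective here are simplified stand-ins for the paper's method.

```python
import numpy as np
from itertools import permutations

def ranking_probs(query_feats, w):
    """Plackett-Luce probability of every possible ranking of the
    queried trajectories under reward parameters w."""
    items = list(range(len(query_feats)))
    probs = []
    for perm in permutations(items):
        rewards = query_feats[list(perm)] @ w
        p = 1.0
        for i in range(len(perm) - 1):
            p *= np.exp(rewards[i]) / np.sum(np.exp(rewards[i:]))
        probs.append(p)
    return np.array(probs)

def info_gain(query_feats, w_samples):
    """Estimate I(answer; w) = H(answer) - E_w[H(answer | w)] using
    samples from the posterior over reward parameters."""
    P = np.array([ranking_probs(query_feats, w) for w in w_samples])
    marginal = P.mean(axis=0)
    h_marg = -np.sum(marginal * np.log(marginal + 1e-12))
    h_cond = -np.mean(np.sum(P * np.log(P + 1e-12), axis=1))
    return h_marg - h_cond

def best_query(candidate_queries, w_samples):
    """Return the index of the candidate query whose expected answer
    is most informative about the reward parameters."""
    gains = [info_gain(q, w_samples) for q in candidate_queries]
    return int(np.argmax(gains))
```

Intuitively, a query earns high information gain when different plausible reward parameters predict different rankings, so the expert's answer discriminates between them.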