Learning to Influence Human Behavior with Offline Reinforcement Learning

Joey Hong, Anca Dragan, Sergey Levine

UC Berkeley

TL;DR: We learn to influence humans towards effective collaboration using offline RL.

Abstract

In the real world, some of the most complex settings for learned agents involve interaction with humans, who often exhibit suboptimal, unpredictable behavior due to sophisticated biases. Agents that interact with people in such settings end up influencing the actions that these people take. Our goal in this work is to enable agents to leverage that influence to improve the human's performance in collaborative tasks, as the task unfolds. Unlike prior work, we do not assume online training with people (which tends to be too expensive and unsafe), nor access to a high-fidelity simulator of the environment. Our idea is that by taking a variety of previously observed human-human interaction data and labeling it with the task reward, offline reinforcement learning (RL) can learn to combine components of behavior and uncover actions that lead to more desirable human actions. First, we show that offline RL can learn strategies to influence and improve human behavior, despite those strategies not appearing in the dataset, by utilizing components of diverse, suboptimal interactions. In addition, we demonstrate that offline RL can learn influence that adapts to the human, thus achieving long-term coordination with them even when their behavior changes. We evaluate our proposed method with real people in the Overcooked collaborative benchmark domain, and demonstrate successful improvement in human performance.

Overcooked Environment

Goal: Place three ingredients (tomatoes or onions) in a pot, plate the soup from the pot, and deliver it as many times as possible within the time limit.
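
To make these mechanics concrete, the sketch below mimics the scoring logic just described: fill a pot with three ingredients, plate the cooked soup, and deliver it for reward. All names and values are illustrative assumptions rather than the actual Overcooked-AI implementation; in particular, cooking time is abstracted away.

```python
# Hypothetical sketch of the Overcooked scoring logic described above.
# Names and values are illustrative, not the real Overcooked-AI API.
from dataclasses import dataclass, field

SOUP_REWARD = 20          # assumed per-delivery reward; the benchmark's exact value may differ
INGREDIENTS_PER_SOUP = 3  # three tomatoes/onions fill a pot

@dataclass
class Pot:
    contents: list = field(default_factory=list)
    cooked: bool = False

    def add(self, ingredient: str) -> None:
        """Add an ingredient; the pot is treated as cooked once it holds three."""
        if not self.cooked and len(self.contents) < INGREDIENTS_PER_SOUP:
            self.contents.append(ingredient)
            if len(self.contents) == INGREDIENTS_PER_SOUP:
                self.cooked = True

def deliver(pot: Pot, holding_plate: bool) -> int:
    """Return the reward for plating and delivering the soup in `pot`."""
    if pot.cooked and holding_plate:
        pot.contents.clear()
        pot.cooked = False
        return SOUP_REWARD
    return 0
```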


To induce suboptimal behavior from the humans interacting in this environment, we introduce modified objectives that are unknown to the human; each result below describes one such objective.

Result 1: Offline RL can learn effective influencing strategies without seeing them in the dataset 

In the task below, the agent receives higher reward if its human partner delivers the soup. The agent learns to pass a plate to its partner as a way to influence them to plate and deliver the soup. Impressively, this influencing strategy does not appear in the dataset!
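
One way to express this modified objective is as a reward that depends on who performs the delivery. The sketch below is purely illustrative; the event fields and reward values are assumptions, not the ones used in our experiments.

```python
# Hypothetical reward shaping for Result 1: deliveries made by the human partner
# are worth more than deliveries made by the learned agent.
# Field names and weights are assumptions for illustration only.
def result1_reward(event: dict,
                   human_delivery_reward: float = 1.0,
                   agent_delivery_reward: float = 0.1) -> float:
    if not event.get("soup_delivered", False):
        return 0.0
    if event.get("delivered_by") == "human":
        return human_delivery_reward
    return agent_delivery_reward
```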

Naive (Behavior Cloning) [video: overcooked_naive_example1.mov]

Ours [video: overcooked_influence_example1.mov]

Result 2: Offline RL can learn adaptive strategies based on the human’s current behavior.

In the task below, the agent receives higher reward if the soup contains only tomatoes. The agent learns to direct its human partner away from onions and towards tomatoes by blocking access to the onions. Importantly, the agent blocks only when it believes its partner intends to pick up onions.
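
As in Result 1, this modified objective can be read as a simple reward function. The sketch below is illustrative only; names and values are assumptions.

```python
# Hypothetical reward for Result 2: only soups made entirely of tomatoes score.
# Names and values are illustrative assumptions.
def result2_reward(soup_ingredients: list[str],
                   tomato_soup_reward: float = 1.0) -> float:
    if soup_ingredients and all(ing == "tomato" for ing in soup_ingredients):
        return tomato_soup_reward
    return 0.0
```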

Naive (Behavior Cloning) [video: overcooked_naive_example2.mov]

Ours [video: overcooked_example2.mov]
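
Algorithm Summary

Our approach takes previously collected human-human interaction data, labels it with the task reward, and trains the agent with offline RL. This lets the agent recombine components of diverse, suboptimal interactions into influencing strategies, and adapt its influence as the human's behavior changes. Below is a minimal sketch of that recipe, shown with a conservative Q-learning style update as one possible offline RL algorithm; the data format, network architecture, and hyperparameters are illustrative assumptions rather than the implementation used in the paper.

```python
# Minimal sketch: relabel logged human-human trajectories with the task reward,
# then run a conservative offline Q-learning update on the agent's actions.
# All names, shapes, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class QNet(nn.Module):
    def __init__(self, obs_dim: int, num_actions: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_actions),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

def relabel_with_task_reward(trajectories, reward_fn):
    """Attach the (possibly modified) task reward to every logged transition."""
    for traj in trajectories:
        for step in traj:
            step["reward"] = reward_fn(step)
    return trajectories

def cql_update(q_net, target_q_net, optimizer, batch, gamma=0.99, cql_alpha=1.0):
    """One offline RL update on a batch of (obs, action, reward, next_obs, done)."""
    obs, actions, rewards, next_obs, dones = batch           # actions: int64 indices
    q_values = q_net(obs)                                     # (batch, num_actions)
    q_taken = q_values.gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = rewards + gamma * (1.0 - dones) * target_q_net(next_obs).max(dim=1).values
    td_loss = F.mse_loss(q_taken, target)
    # Conservative term: penalize high Q-values on actions not taken in the dataset.
    conservative = (torch.logsumexp(q_values, dim=1) - q_taken).mean()
    loss = td_loss + cql_alpha * conservative
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```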

For any further questions or suggestions, please email joey_hong@berkeley.edu.