Learning Diverse Risk Preferences in Population-based Self-play

Abstract

Among the great successes of Reinforcement Learning (RL), self-play algorithms play an essential role in solving competitive games. Current self-play algorithms optimize the agent to maximize the expected win-rate against its current or historical copies, which often traps the agent in a local optimum and leaves its strategy style simple and homogeneous. A possible solution is to improve policy diversity, which helps the agent break the stalemate and enhances its robustness against different opponents. However, enhancing diversity in self-play algorithms is not trivial. In this paper, we introduce diversity from the perspective that agents may hold diverse risk preferences in the face of uncertainty. Specifically, we design a novel reinforcement learning algorithm called Risk-sensitive Proximal Policy Optimization (RPPO), which smoothly interpolates between worst-case and best-case policy learning and allows policy learning with a desired risk preference. By seamlessly integrating RPPO with population-based self-play, agents in the population optimize dynamic risk-sensitive objectives using experiences collected from playing against diverse opponents. Empirical results show that our method achieves comparable or superior performance in competitive games and that diverse modes of behavior emerge.
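
As a rough illustration of the general idea of interpolating between worst-case and best-case learning via a scalar risk level (not necessarily the paper's exact RPPO formulation), one way to realize such an interpolation is an expectile-style value loss that weights positive and negative TD errors asymmetrically. The function and variable names below are illustrative assumptions, sketched in PyTorch.

import torch

def risk_sensitive_value_loss(td_errors: torch.Tensor, tau: float) -> torch.Tensor:
    """Asymmetric (expectile-style) value loss with risk level tau in (0, 1).

    tau > 0.5 up-weights positive TD errors (optimistic, risk-seeking),
    tau < 0.5 up-weights negative TD errors (pessimistic, risk-averse),
    and tau = 0.5 recovers the standard symmetric squared loss.
    """
    # |tau - 1{td_error < 0}| assigns weight tau to positive errors and (1 - tau) to negative ones.
    weight = torch.abs(tau - (td_errors < 0).float())
    return (weight * td_errors.pow(2)).mean()

# Illustrative usage inside a critic update (names are assumptions, not the paper's code):
# values = critic(obs)               # predicted state values
# td_errors = returns - values       # residuals fed to the asymmetric loss
# loss = risk_sensitive_value_loss(td_errors, tau=0.9)  # a risk-seeking member of the population

Different members of a population could be assigned different values of tau, so that each agent learns under its own risk preference while training against the others.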

Videos for Diversity Illustration

Slimevolley 

risk level 0.9 vs risk level 0.1

The risk-seeking agent (left) stands farther from the fence and hits the ball at a lower angle, while the risk-averse agent (right) does the opposite.

SumoAnt

risk level 0.9 vs risk level 0.1

SumoAnt

risk level 0.7 vs risk level 0.3

The risk-averse agent tends to maintain a defensive stance: it spreads its four legs, lowers its center of gravity, and grips the floor tightly to keep itself as still as possible. In contrast, the risk-seeking agent frequently attempts to attack.