Cooperative AI

Emergent cooperation in mixed-motive multi-agent systems

i.e., How to influence the behavior of selfish agents, which we cannot directly control, to improve a global objective?

Jiachen Yang, Ethan Wang, Rakshit Trivedi, Tuo Zhao, Hongyuan Zha

We show that a learning-aware incentive designer can improve the social welfare of a population of selfish agents.

Critical sectors of human society are progressing toward the adoption of powerful artificial intelligence (AI) agents, which are trained individually on behalf of self-interested principals but deployed in a shared environment. Short of direct centralized regulation of AI, which is as difficult an issue as regulation of human actions, one must design institutional mechanisms that indirectly guide agents' behaviors to safeguard and improve social welfare in the shared environment. Our paper focuses on one important class of such mechanisms: the problem of adaptive incentive design, whereby a central planner intervenes on the payoffs of an agent population via incentives in order to optimize a system objective. To tackle this problem in high-dimensional environments whose dynamics may be unknown or too complex to model, we propose a model-free meta-gradient method to learn an adaptive incentive function in the context of multi-agent reinforcement learning. Via the principle of online cross-validation, the incentive designer explicitly accounts for its impact on agents' learning and, through them, the impact on future social welfare. Experiments on didactic benchmark problems show that the proposed method can induce selfish agents to learn near-optimal cooperative behavior and significantly outperform learning-oblivious baselines. When applied to a complex simulated economy, the proposed method finds tax policies that achieve a better trade-off between economic productivity and equality than baselines, a result that we interpret via a detailed behavioral analysis.

Social welfare over training episodes
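
To make the meta-gradient idea concrete, here is a minimal single-lookahead sketch in PyTorch, assuming a hypothetical rollout interface whose batches expose agent_objectives, social_welfare, and incentive_cost. It illustrates the online cross-validation principle described above, not the exact implementation.

```python
import torch

def designer_meta_update(incentive_params, agent_params, rollout,
                         lr_agent=0.1, lr_designer=0.01):
    # 1) Agents take one policy-gradient step on their incentivized returns.
    #    create_graph=True keeps the dependence on incentive_params so the
    #    designer can later differentiate through this learning step.
    train_batch = rollout(agent_params, incentive_params)
    updated_params = []
    for theta, objective in zip(agent_params, train_batch.agent_objectives):
        grad = torch.autograd.grad(objective, theta, create_graph=True)[0]
        updated_params.append(theta + lr_agent * grad)

    # 2) Online cross-validation: evaluate the updated agents on a fresh batch
    #    and measure social welfare minus the cost of incentives given out.
    val_batch = rollout(updated_params, incentive_params)
    designer_objective = val_batch.social_welfare - val_batch.incentive_cost

    # 3) Meta-gradient: the chain rule runs through the agents' update, so the
    #    designer accounts for how its incentives change future behavior.
    meta_grad = torch.autograd.grad(designer_objective, incentive_params)[0]
    with torch.no_grad():
        incentive_params += lr_designer * meta_grad
    return updated_params
```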

Jiachen Yang, Ang Li, Mehrdad Farajtabar, Peter Sunehag, Edward Hughes, Hongyuan Zha

We enable cooperation to emerge within a group of selfish agents by designing an agent that learns to give incentives to other agents.

The challenge of developing powerful and general Reinforcement Learning (RL) agents has received increasing attention in recent years. Much of this effort has focused on the single-agent setting, in which an agent maximizes a predefined extrinsic reward function. However, a long-term question inevitably arises: how will such independent agents cooperate when they are continually learning and acting in a shared multi-agent environment? Observing that humans often provide incentives to influence others' behavior, we propose to equip each RL agent in a multi-agent environment with the ability to give rewards directly to other agents, using a learned incentive function. Each agent learns its own incentive function by explicitly accounting for its impact on the learning of recipients and, through them, the impact on its own extrinsic objective. We demonstrate in experiments that such agents significantly outperform standard RL and opponent-shaping agents in challenging general-sum Markov games, often by finding a near-optimal division of labor. Our work points toward more opportunities and challenges along the path to ensuring the common good in a multi-agent future.

Emergent cooperation between two LIO agents
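
As an illustration of what each agent's learned incentive function could look like, here is a minimal PyTorch sketch: a small network that maps the giver's observation and the recipients' actions to a bounded reward for each recipient. The layer sizes, sigmoid bounding, and the name IncentiveFunction are assumptions for illustration; the parameters of such a function would be trained by differentiating through the recipients' learning step, analogous to the meta-gradient sketch above.

```python
import torch
import torch.nn as nn

class IncentiveFunction(nn.Module):
    # Maps (giver's observation, recipients' actions) to a bounded,
    # non-negative incentive for each recipient.
    def __init__(self, obs_dim, n_actions, n_recipients, max_reward=2.0):
        super().__init__()
        self.max_reward = max_reward
        self.net = nn.Sequential(
            nn.Linear(obs_dim + n_recipients * n_actions, 64),
            nn.ReLU(),
            nn.Linear(64, n_recipients),
        )

    def forward(self, obs, recipient_actions_onehot):
        # obs: [batch, obs_dim]
        # recipient_actions_onehot: [batch, n_recipients * n_actions]
        x = torch.cat([obs, recipient_actions_onehot], dim=-1)
        return self.max_reward * torch.sigmoid(self.net(x))
```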

Fully-cooperative multi-agent learning

i.e., How to train and directly control agents to cooperate for a team objective?

Permutation Invariant Policy Optimization for Mean-Field Multi-Agent Reinforcement Learning: A Principled Approach | arXiv 2021

Yan Li, Lingxiao Wang, Jiachen Yang, Ethan Wang, Zhaoran Wang, Tuo Zhao, Hongyuan Zha

We prove that a mean-field multi-agent policy optimization algorithm converges to the globally optimal policy, and we implement it with permutation-invariant architectures to achieve strong empirical performance.

Multi-agent reinforcement learning (MARL) becomes more challenging in the presence of more agents, as the capacity of the joint state and action spaces grows exponentially in the number of agents. To address such a challenge of scale, we identify a class of cooperative MARL problems with permutation invariance, and formulate it as a mean-field Markov decision process (MDP). To exploit the permutation invariance therein, we propose the mean-field proximal policy optimization (MF-PPO) algorithm, at the core of which is a permutation-invariant actor-critic neural architecture. We prove that MF-PPO attains the globally optimal policy at a sublinear rate of convergence. Moreover, its sample complexity is independent of the number of agents. We validate the theoretical advantages of MF-PPO with numerical experiments in the multi-agent particle environment (MPE). In particular, we show that the inductive bias introduced by the permutation-invariant neural architecture enables MF-PPO to outperform existing competitors with a smaller number of model parameters, which is the key to its generalization performance.

Permutation invariant critic via Deep Set
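
A minimal sketch of a Deep-Set-style permutation-invariant critic in the spirit of the architecture referenced above: a shared network embeds each agent's features, the embeddings are mean-pooled, and a small head maps the pooled representation to a value estimate, so permuting the agents leaves the output unchanged. Layer sizes and names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PermutationInvariantCritic(nn.Module):
    def __init__(self, per_agent_dim, hidden=128):
        super().__init__()
        self.phi = nn.Sequential(              # shared per-agent embedding
            nn.Linear(per_agent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.rho = nn.Sequential(              # head on the pooled embedding
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, agent_inputs):
        # agent_inputs: [batch, n_agents, per_agent_dim]
        pooled = self.phi(agent_inputs).mean(dim=1)   # symmetric mean pooling
        return self.rho(pooled)                       # [batch, 1] value estimate
```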

Jiachen Yang, Igor Borovikov, Hongyuan Zha

We design hierarchical agents that discover skills and use complementary skills to achieve a team goal.

Human players in professional team sports achieve high-level coordination by dynamically choosing complementary skills and executing primitive actions to perform these skills. As a step toward creating intelligent agents with this capability for fully cooperative multi-agent settings, we propose a two-level hierarchical multi-agent reinforcement learning (MARL) algorithm with unsupervised skill discovery. Agents learn useful and distinct skills at the low level via independent Q-learning, while they learn to select complementary latent skill variables at the high level via centralized multi-agent training with an extrinsic team reward. The set of low-level skills emerges from an intrinsic reward that solely promotes the decodability of latent skill variables from the trajectory of a low-level skill, without the need for hand-crafted rewards for each skill. For scalable decentralized execution, each agent independently chooses latent skill variables and primitive actions based on local observations. Our overall method enables the use of general cooperative MARL algorithms for training high-level policies and single-agent RL for training low-level skills. Experiments on a stochastic, high-dimensional team game show the emergence of useful skills and cooperative team play. The interpretability of the learned skills shows the promise of the proposed method for achieving human-AI cooperation in team sports games.

Hierarchical agents with unsupervised skill discovery
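
A minimal PyTorch sketch of the decodability-based intrinsic reward described above: a decoder is trained to predict the latent skill variable, and its log-likelihood serves as the low-level reward. For simplicity the sketch decodes from the current state rather than the full low-level trajectory, and the architecture and names are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SkillDecoder(nn.Module):
    # Predicts which latent skill produced a given (summary of a) trajectory.
    def __init__(self, state_dim, n_skills, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_skills),
        )

    def forward(self, state):
        return self.net(state)   # logits over latent skill variables

def intrinsic_reward(decoder, state, skill_id):
    # Reward the low-level policy for reaching states from which its latent
    # skill can be decoded; no hand-crafted per-skill reward is needed.
    with torch.no_grad():
        log_probs = F.log_softmax(decoder(state), dim=-1)
    return log_probs[..., skill_id]

def decoder_loss(decoder, states, skill_ids):
    # The decoder itself is trained by cross-entropy on (state, skill) pairs.
    return F.cross_entropy(decoder(states), skill_ids)
```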

Zhi Zhang, Jiachen Yang, Hongyuan Zha

We combine the advantages of independent and centralized multi-agent reinforcement learning for traffic light control.

Traffic congestion in metropolitan areas is a worldwide problem that can be ameliorated by traffic lights that respond dynamically to real-time conditions. Recent studies applying deep reinforcement learning (RL) to optimize single traffic lights have shown significant improvement over conventional control. However, optimization of global traffic conditions over a large road network is fundamentally a cooperative multi-agent control problem, for which single-agent RL is not suitable due to environment non-stationarity and the infeasibility of optimizing over an exponential joint-action space. Motivated by these challenges, we propose QCOMBO, a simple yet effective multi-agent reinforcement learning (MARL) algorithm that combines the advantages of independent and centralized learning. We ensure scalability by selecting actions from individually optimized utility functions, which are shaped to maximize global performance via a novel consistency regularization loss between individual utility and a global action-value function. Experiments on diverse road topologies and traffic flow conditions in the SUMO traffic simulator show competitive performance of QCOMBO versus recent state-of-the-art MARL algorithms. We further show that policies trained on small sub-networks can effectively generalize to larger networks under different traffic flow conditions, providing empirical evidence for the suitability of MARL for intelligent traffic control.

QCOMBO architecture
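
One plausible form of the consistency regularization between individual utilities and a global action-value function, sketched in PyTorch. The network interfaces and the unweighted sum of utilities are assumptions for illustration; the exact combination and weighting in QCOMBO may differ.

```python
import torch
import torch.nn.functional as F

def consistency_loss(global_q, individual_qs, state, observations, actions):
    # individual_qs[i]: network mapping agent i's observation [batch, obs_dim]
    # to utilities over its actions [batch, n_actions]
    # actions[i]: chosen action indices for agent i, shape [batch] (long)
    per_agent_utility = torch.stack(
        [q(obs).gather(-1, a.unsqueeze(-1)).squeeze(-1)
         for q, obs, a in zip(individual_qs, observations, actions)],
        dim=-1,
    )                                              # [batch, n_agents]
    # Centralized critic evaluated at the joint action, assumed [batch, 1].
    q_global = global_q(state, torch.stack(actions, dim=-1))
    # Penalize disagreement so the individually optimized utilities stay
    # shaped toward global performance.
    return F.mse_loss(per_agent_utility.sum(dim=-1), q_global.squeeze(-1))
```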

Jiachen Yang, Alireza Nakhaei, David Isele, Kikuo Fujimura, Hongyuan Zha

We improve exploration and credit assignment for fully-cooperative multi-agent problems where each agent has an individual goal.

A variety of cooperative multi-agent control problems require agents to achieve individual goals while contributing to collective success. This multi-goal multi-agent setting poses difficulties for recent algorithms, which primarily target settings with a single global reward, due to two new challenges: efficient exploration for learning both individual goal attainment and cooperation for others' success, and credit assignment for interactions between actions and goals of different agents. To address both challenges, we restructure the problem into a novel two-stage curriculum, in which single-agent goal attainment is learned prior to learning multi-agent cooperation, and we derive a new multi-goal multi-agent policy gradient with a credit function for localized credit assignment. We use a function augmentation scheme to bridge value and policy functions across the curriculum. The complete architecture, called CM3, learns significantly faster than direct adaptations of existing algorithms on three challenging multi-goal multi-agent problems: cooperative navigation in difficult formations, negotiating multi-vehicle lane changes in the SUMO traffic simulator, and strategic cooperation in a Checkers environment.

CM3 architecture
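
A simplified sketch of a credit-weighted multi-goal policy gradient in the spirit of the credit function described above: a centralized credit estimate of each agent's action toward every agent's goal weights that agent's policy-gradient term. The interfaces are assumptions, and the full CM3 objective also involves the two-stage curriculum and function augmentation omitted here.

```python
import torch

def multi_goal_policy_loss(credit_fn, log_probs, state, actions, goals):
    # log_probs[j]: log pi_j(a_j | o_j, g_j) for a batch, shape [batch]
    # credit_fn(state, action_j, goal_i): estimated credit of agent j's
    # action toward agent i's goal, shape [batch] (treated as a fixed target)
    n_agents = len(log_probs)
    loss = torch.zeros(())
    for j in range(n_agents):
        # Sum agent j's credit across every agent's goal, so actions that help
        # others' goals (not only j's own) are reinforced.
        credit = sum(credit_fn(state, actions[j], goals[i]).detach()
                     for i in range(n_agents))
        loss = loss - (log_probs[j] * credit).mean()
    return loss
```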

Mean field games

i.e., How to model the behavior of large-scale populations, where individual actions aggregate into a global behavior that in turn affects individual choices?

Jiachen Yang, Xiaojing Ye, Rakshit Trivedi, Huan Xu, Hongyuan Zha

We discover a connection between mean field games (MFG) and reinforcement learning, and we propose the first data-driven method to estimate an MFG model using inverse RL.

We consider the problem of representing collective behavior of large populations and predicting the evolution of a population distribution over a discrete state space. A discrete-time mean field game (MFG) is motivated as an interpretable model founded on game theory for understanding the aggregate effect of individual actions and predicting the temporal evolution of population distributions. We achieve a synthesis of MFG and Markov decision processes (MDP) by showing that a special MFG is reducible to an MDP. This enables us to broaden the scope of mean field game theory and infer MFG models of large real-world systems via deep inverse reinforcement learning. Our method learns both the reward function and forward dynamics of an MFG from real data, and we report the first empirical test of a mean field game model of a real-world social media population.

Measured and predicted trajectory of Twitter topic
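
A minimal NumPy sketch of the forward view that arises when a discrete-time MFG over a finite state space is treated as an MDP: a policy induces a row-stochastic transition matrix, possibly depending on the current population distribution, and the distribution is pushed forward through it. The reward function, which the inverse-RL procedure would estimate from data, is left abstract, and all names here are illustrative.

```python
import numpy as np

def propagate_population(mu_t, policy_matrix):
    # mu_t: population distribution over d discrete states, shape [d]
    # policy_matrix: row-stochastic matrix where policy_matrix[s, s_next] is
    # the fraction of agents in state s that move to s_next
    assert np.allclose(policy_matrix.sum(axis=1), 1.0)
    return mu_t @ policy_matrix

def population_trajectory(mu_0, policy_fn, horizon):
    # policy_fn maps the current distribution to a transition matrix, so the
    # aggregate behavior feeds back into individual choices at each step.
    mus = [np.asarray(mu_0, dtype=float)]
    for _ in range(horizon):
        mus.append(propagate_population(mus[-1], policy_fn(mus[-1])))
    return np.stack(mus)   # shape [horizon + 1, d]
```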