Talk Date and Time: October 20, 2022, 4:00 pm - 4:45 pm EST, followed by 10 minutes of Q&A, in IRB-5105 (in-person) and on Zoom. The speaker will give the talk in person.
Topic: V-Learning --- A Simple, Efficient, Decentralized Algorithm for Multiagent RL
Abstract:
A major challenge of multiagent reinforcement learning (MARL) is the curse of multiagents, where the size of the joint action space scales exponentially with the number of agents. This remains a bottleneck for designing efficient MARL algorithms even in the basic scenario of finitely many states and actions. This paper resolves this challenge for the model of episodic Markov games. We design a new class of fully decentralized algorithms---V-learning, which provably learns Nash equilibria (in the two-player zero-sum setting), correlated equilibria, and coarse correlated equilibria (in the multiplayer general-sum setting) in a number of samples that scales only with \max_i A_i, where A_i is the number of actions of the ith player. This is in sharp contrast to the size of the joint action space, which is \prod_i A_i. V-learning (in its basic form) is a new class of single-agent RL algorithms that convert any adversarial bandit algorithm with suitable regret guarantees into an RL algorithm. Like the classical Q-learning algorithm, it performs incremental updates to the value functions. Unlike Q-learning, it maintains only estimates of V-values rather than Q-values. This key difference allows V-learning to achieve the claimed guarantees in the MARL setting by simply letting all agents run V-learning independently.
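To make the description above concrete, the following is a minimal Python sketch of the single-agent building block as the abstract describes it: the agent keeps only V-value estimates, updates them incrementally with a Q-learning-style step size, and delegates action selection at each (step, state) pair to an adversarial bandit. It is an illustration under assumptions, not the paper's algorithm: the Exp3-style bandit update, the absence of an exploration bonus, the step-size constant, the class and parameter names, and the toy random environment are all choices made here for readability.

import numpy as np

class VLearningAgent:
    """Rough sketch of a single-agent V-learning-style learner (illustrative only).

    Assumed setting: tabular, episodic, horizon H, with S states and A actions.
    Action selection at each (step, state) is delegated to an Exp3-style
    adversarial bandit; V-values are updated incrementally, Q-learning style.
    """

    def __init__(self, num_states, num_actions, horizon, bandit_lr=0.1):
        self.S, self.A, self.H = num_states, num_actions, horizon
        self.bandit_lr = bandit_lr
        self.V = np.zeros((horizon + 1, num_states))
        self.V[:horizon] = horizon                     # optimistic initialization
        self.log_weights = np.zeros((horizon, num_states, num_actions))
        self.counts = np.zeros((horizon, num_states), dtype=int)

    def policy(self, h, s):
        # Softmax of the bandit's log-weights at (step h, state s).
        w = self.log_weights[h, s]
        p = np.exp(w - w.max())
        return p / p.sum()

    def act(self, h, s, rng):
        return rng.choice(self.A, p=self.policy(h, s))

    def update(self, h, s, a, reward, next_state):
        # Incremental V update with step size alpha_t = (H + 1) / (H + t).
        self.counts[h, s] += 1
        t = self.counts[h, s]
        alpha = (self.H + 1) / (self.H + t)
        target = reward + self.V[h + 1, next_state]
        self.V[h, s] = (1 - alpha) * self.V[h, s] + alpha * min(target, self.H)

        # Feed an importance-weighted loss in [0, 1] back to the bandit
        # (small loss when the realized one-step value target is large).
        loss = max(0.0, 1.0 - target / self.H)
        p = self.policy(h, s)
        self.log_weights[h, s, a] -= self.bandit_lr * loss / max(p[a], 1e-8)

# Toy usage: a random single-agent environment stands in for one player's
# view of the game; in the decentralized MARL setting, every player would
# simply run its own copy of this agent on its own observations and rewards.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    S, A, H = 4, 3, 5
    agent = VLearningAgent(S, A, H)
    P = rng.dirichlet(np.ones(S), size=(S, A))   # transition probabilities
    R = rng.random((S, A))                        # rewards in [0, 1]
    for episode in range(200):
        s = 0
        for h in range(H):
            a = agent.act(h, s, rng)
            s_next = rng.choice(S, p=P[s, a])
            agent.update(h, s, a, R[s, a], s_next)
            s = s_next
    print("Estimated value of the initial state:", round(agent.V[0, 0], 3))

Note that the per-agent state kept here scales with the agent's own action count A_i only, which is the point the abstract emphasizes: each player never reasons about the joint action space of size \prod_i A_i.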
Bio:
Dr. Chi Jin is an assistant professor in the Department of Electrical and Computer Engineering at Princeton University. He obtained his Ph.D. in Computer Science from the University of California, Berkeley, advised by Professor Michael I. Jordan. His research focuses on theoretical machine learning, with special emphasis on nonconvex optimization and reinforcement learning. His representative work includes proving that noisy gradient descent escapes saddle points efficiently, and proving the efficiency of Q-learning and least-squares value iteration when combined with optimism in reinforcement learning.