Program

Schedule

The workshop takes place virtually on Monday, December 13.

(see the NeurIPS virtual site for all details)

Time (PT)

08:45 - 09:00 Opening Remarks

09:00 - 09:40 Keynote 1: Michael Jordan

09:40 - 10:20 Keynote 2: Susan Athey

10:20 - 10:50 Moderated discussion: "Learning and Economics: A Vision for Future Research and Development"

10:50 - 11:10 Break

11:10 - 12:00 Paper highlights (5 contributed talks)

12:00 - 13:00 Poster session on Gather

13:00 - 13:40 Keynote 3: Dorsa Sadigh

13:40 - 14:20 Keynote 4: Vince Conitzer

14:20 - 14:50 Moderated discussion: "Achieving Agent Coordination in Theory and Practice"

14:50 - 15:00 Closing remarks, followed by socializing on Gather


Paper highlights session


Exploration and Incentives in Reinforcement Learning

Max Simchowitz, Aleksandrs Slivkins


Efficient Competitions and Online Learning with Strategic Forecasters

Anish Thilagar, Rafael Frongillo, Bo Waggoner, Robert Gomez


Models of fairness in federated learning

Kate Donahue, Jon Kleinberg


Strategic clustering

Ana-Andreea Stoica, Christos Papadimitriou


Estimation of Standard Asymmetric Auction Models

Yeshwanth Cherapanamjeri, Constantinos Costis Daskalakis, Andrew Ilyas, Manolis Zampetakis


Invited talk abstracts



Speaker: Michael Jordan

Title: On Dynamics-Informed Blending of Machine Learning and Game Theory


Abstract: Statistical decisions are often given meaning in the context of other decisions, particularly when there are scarce resources to be shared. Managing such sharing is one of the classical goals of microeconomics, and it is given new relevance in the modern setting of large, human-focused datasets, and in data-analytic contexts such as classifiers and recommendation systems. I'll discuss several recent projects that aim to explore the interface between machine learning and microeconomics, including the study of exploration-exploitation tradeoffs for bandit learning algorithms that compete over a scarce resource, leader/follower dynamics in strategic classification, and the robust learning of optimal auctions.


_________________________


Speaker: Susan Athey

Title: Machine Learning with Strategic Agents: Lessons from Incentive Theory and Econometrics


Abstract: This talk will apply insights from classic economic incentive theory to problems in machine learning and artificial intelligence. In particular, it will apply multi-task theory to the design of A/B testing platforms and bandit objective functions, and it will analyze human-AI interaction as a problem of optimal delegation. We will also consider how informational asymmetries and observability problems can lead to systematic challenges for learning by bidders at auctions.


_________________________


Speaker: Vince Conitzer

Title: Automated Mechanism Design for Strategic Classification


Abstract: AI is increasingly making decisions, not only for us, but also about us -- from whether we are invited for an interview, to whether we are proposed as a match for someone looking for a date, to whether we are released on bail. Often, we have some control over the information that is available to the algorithm; we can self-report some information and choose to withhold other information. This creates a potential circularity: the classifier used, mapping submitted information to outcomes, depends on the (training) data that people provide, but the (test) data depend on the classifier, because people will reveal their information strategically to obtain a more favorable outcome. This setting is not adversarial, but it is also not fully cooperative.


Mechanism design provides a framework for making good decisions based on strategically reported information, and it is commonly applied to the design of auctions and matching mechanisms. However, the setting above is unlike these common applications, because in it, preferences tend to be similar across agents, but agents are restricted in what they can report. This creates both new challenges and new opportunities. I will discuss both our theoretical work and our initial experiments.


(This is joint work with Hanrui Zhang, Andrew Kephart, Yu Cheng, Anilesh Krishnaswamy, Haoming Li, and David Rein. Papers on these topics can be found here)
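
The circular dependence between the deployed rule and the strategically reported data can be made concrete with a small simulation. The sketch below is purely illustrative and not taken from the talk or the accompanying papers: agents hold a true score, can costlessly inflate their report by up to an assumed budget, and best-respond to a published threshold; the designer then refits the threshold on the reported data, and the loop repeats.

import numpy as np

# Illustrative toy only (assumed setup, not the speaker's model):
# agents with true score x may inflate their report by up to `budget`;
# the designer publishes a threshold rule, agents best-respond to it,
# and the threshold is refit on the resulting reports each round.

rng = np.random.default_rng(0)
true_x = rng.normal(0.0, 1.0, size=500)   # private true scores
labels = (true_x > 0.5).astype(int)       # ground-truth qualification
budget = 0.4                              # assumed costless inflation budget

threshold = 0.5                           # designer's initial cutoff
for round_ in range(5):
    # Best response: inflate just enough to clear the cutoff when feasible.
    reports = np.where(
        (true_x < threshold) & (true_x + budget >= threshold),
        threshold,
        true_x,
    )
    # Refit: pick the cutoff with highest accuracy on the reported data.
    candidates = np.linspace(-1.0, 2.0, 301)
    accs = [np.mean((reports >= c).astype(int) == labels) for c in candidates]
    threshold = candidates[int(np.argmax(accs))]
    print(f"round {round_}: refit threshold = {threshold:.2f}")

In this toy, reports bunch exactly at the cutoff and each refit nudges the threshold upward, a small-scale version of the train/test circularity described in the abstract.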


_________________________


Speaker: Dorsa Sadigh

Title: Theory and Practice of Partner-Aware Algorithms in Multi-Agent Coordination



Abstract: Today I will be discussing some of the challenges and lessons learned in partner modeling in decentralized multi-agent coordination. We will start by discussing the role of representation learning in learning effective latent partner strategies and how one can leverage the learned representations within a reinforcement learning loop for achieving coordination, collaboration, and influencing. We will then extend the notion of influencing beyond optimizing for long-horizon objectives, and analyze how strategies that stabilize latent partner representations can be effective in reducing non-stationarity and achieving a more desirable learning outcome. Finally, we will formalize the problem of decentralized multi-agent coordination as a collaborative multi-armed bandit with partial observability, and demonstrate that partner modeling strategies are effective approaches for achieving logarithmic regret.