Schedule

        Monday, June 15 (Building OCC, 1st Floor, Rooms B117-B119)
 
Session I
 9:00am - 9:30am Introduction
Shuchi Chawla, Jason Hartline and Denis Nekipelov
A/B Testing of Auctions
 9:30am - 10:00am Yang Cai, Constantinos Daskalakis and Christos Papadimitriou
Optimum Statistical Estimation with Strategic Data Sources
 10:00am - 10:30am Panos Toulis and David Parkes 
Long-term Causal Effects of Interventions in Multiagent Economic Mechanisms
 10:30am - 11:00am Aaron Roth, Jonathan Ullman and Zhiwei Steven Wu
Watch and Learn: Optimizing from Revealed Preferences Feedback


Coffee Break, FCRC Plenary Talk, and Lunch Break

Session II
 1:55pm - 2:20pm
Note the different location: this is a related talk taking place at the co-located STOC'15, in room Portland 252

Cynthia Dwork, Vitaly Feldman, Moritz Hardt, Toniann Pitassi, Omer Reingold and Aaron Roth
Preserving Statistical Validity in Adaptive Data Analysis 
 2:25pm - 3:00pm Short advertisements of related papers at EC'15
Nekipelov, Syrgkanis, Tardos
Econometrics for Learning Agents
 
Huang, Mansour, Roughgarden
Making the Most of Your Samples

Fu, Immorlica, Lucier, Strack
Randomization beats Second-Price as a Prior-Independent Auction

Blum, Mansour, Morgenstern
Learning What's Going On: Reconstructing Preferences and Priorities from Opaque Transactions

Toulis, Parkes, Pfeffer, Zou
Incentive-Compatible Experimental Design
 3:00pm - 3:30pm Jon Kleinberg, Annie Liang and Sendhil Mullainathan
The Theory is Predictive, but is it Complete?  An Application to Human Perception of Randomness


Coffee Break

Session III 
 4:00pm - 4:30pm    Darrell Hoy, Denis Nekipelov and Vasilis Syrgkanis
Price of Anarchy from Data
 4:30pm - 5:30pm  Invited talk by Kevin Leyton-Brown
Modeling Human Play in Unrepeated Games
                                   

Abstracts of Contributed Papers and the Invited Talk:

Shuchi Chawla, Jason Hartline and Denis Nekipelov
A/B Testing of Auctions
For many application areas, A/B testing, which partitions users of a system into an A (control) group and a B (treatment) group in order to compare several application designs, enables Internet companies to optimize their services to the behavioral patterns of their users. Unfortunately, the A/B testing framework cannot be applied in a straightforward manner to applications like auctions, where the users (a.k.a. bidders) submit bids before the partitioning into the A and B groups is made. This paper combines auction theoretic modeling with the A/B testing framework to develop a methodology for A/B testing auctions. The accuracy of our method is directly comparable to ideal A/B testing, where there is no interference between A and B.
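
A hedged illustration, not part of the paper: the sketch below shows only the "ideal" A/B benchmark the abstract refers to, in which whole auctions are randomly assigned to a control group A or a treatment group B with no interference between them. The single-item second-price setting, the reserve-price comparison, and all function names are assumptions made for this sketch; the paper's actual estimator, which must cope with bids submitted before the split is made, is not reproduced.

```python
import random

def second_price_revenue(bids, reserve):
    """Revenue of a single-item second-price auction with a reserve price."""
    qualified = sorted((b for b in bids if b >= reserve), reverse=True)
    if not qualified:
        return 0.0
    # With one qualified bidder the winner pays the reserve; otherwise the
    # second-highest qualified bid (which is already at least the reserve).
    return qualified[1] if len(qualified) > 1 else reserve

def ideal_ab_test(auctions, reserve_a, reserve_b, seed=0):
    """Ideal A/B test: randomly assign each auction to group A (control) or
    group B (treatment), run each group with its own reserve price, and
    compare the average revenue of the two groups."""
    rng = random.Random(seed)
    revenue = {"A": [], "B": []}
    for bids in auctions:
        if rng.random() < 0.5:
            revenue["A"].append(second_price_revenue(bids, reserve_a))
        else:
            revenue["B"].append(second_price_revenue(bids, reserve_b))
    return {g: sum(v) / len(v) if v else 0.0 for g, v in revenue.items()}
```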

Yang Cai, Constantinos Daskalakis and Christos Papadimitriou
Optimum Statistical Estimation with Strategic Data Sources
We propose an optimum mechanism for providing monetary incentives to the data sources of a statistical estimator such as linear regression, so that high quality data is provided at low cost, in the sense that the weighted sum of payments and estimation error is minimized. The mechanism applies to a broad range of estimators, including linear and polynomial regression, kernel regression, and, under some additional assumptions, ridge regression. It also generalizes to several objectives, including minimizing estimation error subject to budget constraints. Besides our concrete results for regression problems, we contribute a mechanism design framework through which to design and analyze statistical estimators whose examples are supplied by workers who incur a cost for labeling those examples.
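
A hedged illustration, not the paper's mechanism: the toy function below only spells out the designer's objective named in the abstract, the weighted sum of payments and estimation error. The use of ordinary least squares, the squared-error measure, and all names are assumptions made for this sketch.

```python
import numpy as np

def designer_objective(payments, X, y_reported, beta_true, weight=1.0):
    """Weighted sum of total payments to data sources and the estimation error
    of the resulting estimator (here: squared error of OLS on the reported
    labels). The paper designs the payments so that this objective is minimized
    when sources respond strategically; that design is not reproduced here."""
    beta_hat, *_ = np.linalg.lstsq(X, y_reported, rcond=None)
    estimation_error = float(np.sum((beta_hat - beta_true) ** 2))
    return float(np.sum(payments)) + weight * estimation_error
```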

Panos Toulis and David Parkes
Long-term Causal Effects of Interventions in Multiagent Economic Mechanisms
Interventions in an economic mechanism have an effect on its performance. For example, raising the reserve price of an auction has an effect on its revenue. The effect of the intervention is causal if the observed effect is better than the counterfactual, i.e., the effect that would be observed under no intervention. As mechanisms are dynamical systems of interacting agents, their response to an intervention fluctuates until the system reaches a new equilibrium. Effects measured in the new equilibrium, the long-term causal effects, are more representative of the value of interventions. However, the statistical estimation of long-term causal effects is difficult because it has to rely, for practical reasons, on data observed before the new equilibrium is reached. Furthermore, agent behavior depends not only on the mechanism the agent is situated in, but also on other agents’ behavior, which hinders the estimation of counterfactuals.

In this paper, we develop a methodology for estimating long-term causal effects of interventions in mechanisms, using the potential outcomes model of causal inference (Neyman, 1923; Rubin, 1974). Our proposed methodology relies on a data augmentation strategy, where agents are assumed to adopt, at each timepoint, a behavior that is latent. A game-theoretic model defines the distribution of the actions an agent takes, conditional on the adopted behavior.  A time-series model defines the temporal evolution of aggregate agent behaviors, and smooths out the estimates of individual agent behaviors. The two models are combined and are fit using short-term data from observed agent actions. The fitted model parameters are then used to predict the long-term agent actions and, thus, estimate the long-term causal effect of interest. We illustrate our methodology on a dataset from behavioral game theory, and discuss open problems to stimulate future research.  
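
A hedged, deliberately simplified sketch of the pipeline described above: fit a model of aggregate behavior dynamics to short-term data observed with and without the intervention, roll each forward to its long-run value, and compare. The AR(1) dynamics, the separate treated/control series, and all function names are assumptions made for illustration; the paper's latent-behavior and game-theoretic components are not reproduced.

```python
import numpy as np

def fit_ar1(series):
    """Fit simple AR(1) dynamics b_{t+1} = a*b_t + c to a short-term series of
    aggregate behaviors (a stand-in for the paper's time-series model)."""
    x, y = series[:-1], series[1:]
    a, c = np.polyfit(x, y, 1)
    return a, c

def forecast(a, c, b0, horizon):
    """Roll the fitted dynamics forward until they approximately equilibrate."""
    b = b0
    for _ in range(horizon):
        b = a * b + c
    return b

def long_term_effect(short_treated, short_control, horizon=200):
    """Toy estimate of a long-term causal effect: fit dynamics separately to
    short-term data observed under the intervention and under no intervention
    (the counterfactual), then compare the forecast long-run values."""
    eq_treated = forecast(*fit_ar1(short_treated), short_treated[-1], horizon)
    eq_control = forecast(*fit_ar1(short_control), short_control[-1], horizon)
    return eq_treated - eq_control
```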


Aaron Roth, Jonathan Ullman and Zhiwei Steven Wu
Watch and Learn: Optimizing from Revealed Preferences Feedback
A Stackelberg game is played between a leader and a follower. The leader first chooses an action, and then the follower plays his best response; the goal of the leader is to pick the action that maximizes his payoff given the follower’s best response. Stackelberg games capture, for example, the following interaction between a producer and a consumer. The producer chooses the prices of the goods he produces, and then a consumer chooses to buy a utility-maximizing bundle of goods. The goal of the seller here is to set prices to maximize his profit: his revenue minus the production cost of the purchased bundle. It is quite natural that the seller in this example should not know the buyer’s utility function. However, he does have access to revealed preference feedback: he can set prices, and then observe the purchased bundle and his own profit. We give algorithms for efficiently solving, in terms of both computation and query complexity, a broad class of Stackelberg games in which the follower’s utility function is unknown, using only “revealed preference” access to it. This class includes, in particular, the profit maximization problem, as well as the optimal tolling problem in nonatomic congestion games when the latency functions are unknown. Surprisingly, we are able to solve these problems even though the associated optimization problems are non-convex in the leader’s actions.
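
A hedged illustration of the query model only: the leader repeatedly sets prices and observes the purchased bundle and the resulting profit. Exhaustive grid search stands in for the paper's efficient algorithms, and `query`, `price_grid`, and the other names are assumptions made for this sketch.

```python
import itertools

def optimize_prices_by_queries(query, price_grid, num_goods):
    """Search over price vectors using only revealed-preference access:
    query(prices) returns the observed (bundle, profit) for those prices.
    Brute-force enumeration is a placeholder for the paper's computation-
    and query-efficient algorithms."""
    best_prices, best_profit = None, float("-inf")
    for prices in itertools.product(price_grid, repeat=num_goods):
        _bundle, profit = query(prices)  # observe the follower's best response
        if profit > best_profit:
            best_prices, best_profit = prices, profit
    return best_prices, best_profit
```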

Jon Kleinberg, Annie Liang and Sendhil Mullainathan
The Theory is Predictive, but is it Complete? An Application to Human Perception of Randomness

Objective metrics for the predictive accuracy of a theory are abundant, but these metrics provide little guidance towards assessment of what constitutes a "reasonable" predictive accuracy for the question at hand. This problem is especially compelling in the social sciences, in which most behaviors of interest are characterized by high intrinsic variability. This paper provides a data-driven approach to approximating the "best-possible" predictive accuracy in a class of related problems.

The approach is implemented on a simple model problem that has been the subject of study in a number of areas in the social sciences: human generation of random sequences. We use our framework to evaluate existing social-science theories for human biases in attempting to generate i.i.d. Bernoulli sequences, finding that these theories explain only a relatively small fraction of the algorithmically predictable structure in the data. This bound is robust to "transfer" to certain different but related problems of interest; it can also be applied to the prediction of field data in which human evaluators making a sequence of yes/no decisions exhibit biases related to what one finds in random generation.
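
A hedged sketch of the flavor of such a data-driven benchmark: estimate how much next-symbol structure in human-generated binary sequences a simple lookup-table predictor can capture, against which a theory's accuracy could then be compared. The k-gram table, the in-sample evaluation, and all names are assumptions made for illustration; the paper's benchmark construction is more careful (for instance, it would evaluate on held-out data).

```python
from collections import Counter, defaultdict

def table_predictor_accuracy(sequences, k=3):
    """Predict the next symbol of each binary sequence (strings of '0'/'1')
    from the preceding k symbols using a lookup table built on the data, and
    report the fraction of symbols predicted correctly. This crude in-sample
    number only conveys the idea of 'algorithmically predictable structure'."""
    counts = defaultdict(Counter)
    for seq in sequences:                      # e.g. ["0110100", "1010011", ...]
        for i in range(k, len(seq)):
            counts[seq[i - k:i]][seq[i]] += 1
    correct = total = 0
    for seq in sequences:
        for i in range(k, len(seq)):
            context = seq[i - k:i]
            prediction = counts[context].most_common(1)[0][0]
            correct += (prediction == seq[i])
            total += 1
    return correct / total if total else 0.0
```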

Darrell Hoy, Denis Nekipelov and Vasilis Syrgkanis
Price of Anarchy from Data
Analysis of welfare in auctions comes traditionally via one of two approaches: precise but fragile inference of the exact details of a setting from data or robust but coarse theoretical price of anarchy bounds that hold in any setting. As markets get more and more dynamic and bidders become more and more sophisticated, the weaknesses of each approach are magnified.  

In this paper, we provide tools for analyzing and estimating the empirical price of anarchy of an auction. The empirical price of anarchy is the worst case efficiency loss of any auction that could have produced the data, relative to the optimal.  

Our techniques are based on inferring simple properties of auctions: primarily the expected revenue, and the expected payments and allocation probabilities associated with possible bids. These quantities alone allow us to empirically estimate the revenue covering parameter of an auction, which lets us re-purpose the theoretical machinery of Hartline et al. [2014] for empirical purposes. Moreover, we show that under general conditions the revenue covering parameter estimated from the data approaches the true parameter, with the error decreasing at a rate proportional to the inverse square root of the number of auctions and depending at most polynomially on the number of agents. While we focus on the setting of position auctions, and particularly the generalized second price auction, our techniques are applicable far more generally.

Finally, we apply our techniques to a selection of advertising auctions on Microsoft's Bing and find empirical results that are a significant improvement over the theoretical worst-case bounds.
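
A hedged illustration of the empirical inputs named above: average revenue, per-bidder expected payments, and allocation probabilities estimated from logged auction data. The array layout and names are assumptions made for this sketch; plugging these estimates into the revenue-covering machinery is the paper's contribution and is not reproduced here.

```python
import numpy as np

def empirical_auction_moments(payments, allocations):
    """Estimate simple empirical quantities from logged auction data.

    payments:    array of shape (num_auctions, num_bidders), payment of each bidder
    allocations: array of the same shape, 1 if the bidder was allocated, else 0
    """
    payments = np.asarray(payments, dtype=float)
    allocations = np.asarray(allocations, dtype=float)
    expected_revenue = payments.sum(axis=1).mean()   # average total revenue per auction
    expected_payment = payments.mean(axis=0)         # per-bidder average payment
    allocation_prob = allocations.mean(axis=0)       # per-bidder allocation frequency
    return expected_revenue, expected_payment, allocation_prob
```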


Kevin Leyton-Brown
Modeling Human Play in Unrepeated Games (invited talk)

This talk will survey our ongoing work on modeling human play of unrepeated, simultaneous-move games, both communicating some of the key lessons we’ve learned and giving previews of the new directions we’re currently pursuing.

I'll begin by surveying our published work, in which we’ve performed a large-scale meta-study of previous work from economics, evaluated existing models from the literature, and ultimately proposed a variation on one of these models that achieves good across-the-board performance. We’ve also proposed a new framework for describing nonstrategic agents, which improves the performance of our models overall.

I’ll also describe the ideas behind two avenues of ongoing research. First, existing behavioral models ignore the fact that people think more strategically in some situations than others; we’ve found a way to capture this, and have some initial evidence that we’re on the right track. Second, deep learning is a machine learning paradigm with many recent successes; I’ll explain why it’s not straightforward to apply to our setting, why it would be desirable to do so anyway, and the progress we’ve made so far.
