EC24 Workshop
Information Acquisition
Location: Room 4200, Yale School of Management.
Webinar link and password will be sent to registrants' email.
Workshop Theme
In the modern data-driven world, information acquisition plays a critical role in informed decision-making. This event will explore recent work on acquiring, evaluating, and integrating information from strategic sources, with potential applications to decision-making. The workshop will cover topics including elicitation, calibration, decision under falsifiable information, and learning with strategic agents, with invited talks and contributed presentations.
Elicitation. Effective decision-making often relies on gathering high-quality human feedback, and the ever-increasing reliance on data-driven decision-making makes the ability to gather such feedback more critical than ever. This topic covers how to elicit high-quality information with properly designed payments, with or without access to ground truth.
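As a rough illustration of payment-based elicitation without ground truth, here is a minimal Python sketch of one common bonus-penalty style peer-prediction score: a report is rewarded for agreeing with a peer on the same task and penalized for agreeing with that peer on an unrelated task, which discounts blind agreement. The function name, the binary i.i.d. task setup, and the toy numbers are illustrative assumptions, not a specific mechanism presented at the workshop.

```python
import random

def bonus_penalty_score(my_reports, peer_reports):
    """Bonus-penalty peer-prediction score for binary reports on shared tasks.

    Bonus: agreement with the peer on the same task.
    Penalty: agreement with the peer on a randomly chosen *different* task,
    which corrects for uninformative strategies (e.g., always reporting 1).
    """
    tasks = list(my_reports)
    total = 0.0
    for t in tasks:
        bonus = 1.0 if my_reports[t] == peer_reports[t] else 0.0
        t_other = random.choice([s for s in tasks if s != t])
        penalty = 1.0 if my_reports[t] == peer_reports[t_other] else 0.0
        total += bonus - penalty
    return total / len(tasks)

# Toy usage (hypothetical data): two informative agents vs. one who always reports 1.
truth = {t: random.randint(0, 1) for t in range(200)}
honest_a, honest_b = dict(truth), dict(truth)
lazy = {t: 1 for t in truth}
print(bonus_penalty_score(honest_a, honest_b))  # roughly 0.5: informative reports are rewarded
print(bonus_penalty_score(lazy, honest_b))      # roughly 0: blind agreement earns nothing
```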
Calibration. Calibration decouples prediction from decision-making: if a predictor is calibrated, good performance on one decision problem (i.e., one loss function) transfers to all other decision problems. Multicalibration further implies fairness guarantees across subgroups. This topic covers recent work on producing calibrated predictions in learning.
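As a concrete, if simplified, reading of what calibration asks of a predictor, the sketch below computes a binned expected calibration error for binary predictions. It only illustrates the property itself, with hypothetical names and data; the multicalibration and online calibration algorithms covered at the workshop are considerably more involved.

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """Binned expected calibration error for binary predictions.

    probs  : predicted probabilities of the positive class
    labels : observed 0/1 outcomes
    """
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        # include the right endpoint in the last bin
        mask = (probs >= lo) & ((probs < hi) if hi < 1.0 else (probs <= hi))
        if mask.any():
            gap = abs(probs[mask].mean() - labels[mask].mean())
            ece += mask.mean() * gap   # weight each bin by its share of samples
    return ece

# Toy usage: a well-calibrated predictor has small ECE.
rng = np.random.default_rng(0)
p = rng.uniform(size=10_000)
y = rng.binomial(1, p)                     # outcomes drawn with the predicted probabilities
print(expected_calibration_error(p, y))    # close to 0
```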
Decision under falsifiable information. Decision under falsifiable information involves making choices when information sources are strategic. Decision makers categorize agents based on the information they provide, while agents can strategically manipulate their information, at a cost, to obtain favorable outcomes. Two lines of literature relate to this topic: strategic classification in the CS community and fraud-proof mechanism design in the economic theory community.
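A toy model may help make the manipulation trade-off concrete. In the sketch below, an agent raises a one-dimensional score only when the gain from crossing the acceptance threshold exceeds the manipulation cost, and the decision maker can neutralize gaming by shifting the threshold up by the agents' manipulation budget. All names and numbers are hypothetical, and the model is far simpler than the settings studied in the strategic-classification and fraud-proof design literatures.

```python
def best_response(score, threshold, cost_per_unit, benefit=1.0):
    """Agent raises their score just enough to pass, if the gain exceeds the cost."""
    gap = threshold - score
    if gap <= 0:
        return score            # already accepted
    if gap * cost_per_unit <= benefit:
        return threshold        # worth gaming up to the threshold
    return score                # too expensive to manipulate

def robust_threshold(threshold, cost_per_unit, benefit=1.0):
    """Shift the cutoff up by the largest gap an agent is willing to close."""
    return threshold + benefit / cost_per_unit

# Toy usage: cutoff 0.7 on truthful scores, manipulation cost 5 per unit.
cutoff, c = 0.7, 5.0
naive_pass  = [best_response(s, cutoff, c) >= cutoff for s in (0.55, 0.65, 0.75)]
robust      = robust_threshold(cutoff, c)            # 0.9
robust_pass = [best_response(s, robust, c) >= robust for s in (0.55, 0.65, 0.75)]
print(naive_pass)   # [True, True, True]   -- gaming lets low types through
print(robust_pass)  # [False, False, True] -- only true scores >= 0.7 still pass
```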
Learning with strategic agents. In many online decision problems, the information the principal needs is controlled by self-interested agents. A central question in this domain is how to design incentive schemes or information disclosure policies that influence agents' choices so that the principal can learn optimally. This topic covers recent developments in search and incentivized exploration.
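To see why myopic behavior is a problem for collective learning, here is a small, hypothetical simulation: sequential agents greedily pull the arm of a two-armed bandit with the best empirical mean, and they often herd on the inferior arm unless the principal forces a little initial exploration. The parameters and the forced-pull scheme are illustrative assumptions, not any specific mechanism from the workshop.

```python
import random

def final_choice(forced_pulls, p=(0.4, 0.6), rounds=500, seed=0):
    """Myopic agents arrive sequentially and pull the arm with the best
    empirical mean so far (ties go to arm 0). The principal can mandate
    `forced_pulls` initial pulls of each arm as a crude exploration scheme."""
    rng = random.Random(seed)
    counts, wins = [0, 0], [0, 0]
    for arm in (0, 1):
        for _ in range(max(forced_pulls, 1)):      # at least one sample per arm
            counts[arm] += 1
            wins[arm] += rng.random() < p[arm]
    for _ in range(rounds):
        arm = 0 if wins[0] / counts[0] >= wins[1] / counts[1] else 1
        counts[arm] += 1
        wins[arm] += rng.random() < p[arm]
    return 0 if counts[0] > counts[1] else 1       # arm the crowd settled on

# Fraction of runs in which the crowd herds on the inferior arm (arm 0):
herd = lambda f: sum(final_choice(f, seed=s) == 0 for s in range(200)) / 200
print(herd(0), herd(20))   # myopic play herds often; with forced exploration it rarely does
```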
Schedule: July 8
Invited Talks
8:30 - 9:15 am, Yiling Chen. Context-Aware Elicitation.
Abstract: Eliciting private information from individuals is challenging when there is no ground truth to evaluate the elicited information. Recent successes in designing mechanisms that incentivize truthful elicitation often leverage statistical properties of independent samples, such as assuming that tasks have independently and identically distributed (i.i.d.) ground truth. While this assumption enables truthful elicitation for any information structure, it renders the mechanisms inapplicable for non-i.i.d. elicitation settings. We argue that mechanism designers can often utilize their partial knowledge about specific elicitation settings to create simple mechanisms that achieve truthful elicitation in more complex scenarios. We demonstrate how a simple bonus-penalty scoring function can be used to truthfully elicit pairwise comparisons and opinions over social networks, both of which are non-i.i.d. settings.
Joint work with Shi Feng and Fang-Yi Yu.
9:15 - 10:00 am, Aaron Roth. What Should We Trust in “Trustworthy ML”?
Abstract: “Trustworthy” machine learning has become a buzzword in recent years. But what exactly are the semantics of the promise that we are supposed to trust? In this talk, we will make a proposal, through the lens of downstream decision makers using machine learning predictions of payoff relevant states: Predictions are “trustworthy” if it is in the interests of the downstream decision makers to act as if the predictions are correct, as opposed to gaming the system in some way. We will find that this is a fruitful idea. For many kinds of downstream tasks, predictions of the payoff relevant state that are statistically unbiased, subject to a modest number of conditioning events, suffice to give downstream decision makers strong guarantees when acting optimally as if the predictions were correct — and it is possible to efficiently produce predictions (even in adversarial environments!) that satisfy these bias properties. This methodology also gives an algorithm design principle that turns out to give new, efficient algorithms for a variety of adversarial learning problems, including obtaining subsequence regret in online combinatorial optimization problems and extensive form games, and for obtaining sequential prediction sets for multiclass classification problems that have strong, conditional coverage guarantees — directly from a black-box prediction technology, avoiding the need to choose a “score function” as in conformal prediction.
Joint work with Georgy Noarov, Ramya Ramalingam and Stephan Xie.
10:00 - 10:30 am, break.
10:30 - 11:15 am, Alex Slivkins. Exploration under myopic behavior: from social learning to incentivized exploration to clinical trials.
Abstract: When myopic decision-makers face a bandit problem, how well do they explore as a collective? We discuss how this issue plays out in several related scenarios, from social learning in bandit environments to incentivized exploration in recommendation systems to incentivized participation in clinical trials.
Based on three recent papers: NeurIPS'23, EC'24, and a working paper.
11:15 am - 12:00 pm, Bobby Pakzad-Hurson. Persuaded Search.
Abstract: We consider sequential search by an agent who cannot observe the quality of goods but can acquire information by buying signals from a profit-maximizing principal with limited commitment power. The principal can charge higher prices for more informative signals in any period, but high prices in the future discourage continued search by the agent, thereby reducing the principal's future profits. A unique stationary equilibrium outcome exists, and we show that the principal (i) induces the socially efficient stopping rule, (ii) extracts the full surplus, and (iii) persuades the agent against settling for marginal goods, extending the duration of surplus extraction. However, introducing an additional, free source of information can lead to inefficiency in equilibrium.
Joint work with Teddy Mekonnen and Zeky Murra-Anton.
Poster Session
Vote for your favorite poster at our poster session 🔥
The Best Poster award at our workshop is selected by majority vote!
Our Best Poster award goes to Forecasting Competitions with Correlated Events, by Rafael Frongillo, Manuel Lladser, Anish Thilagar, and Bo Waggoner.
Poster session: July 8, 1-2 pm.
High-Effort Crowds: Limited Liability via Tournaments. Yichi Zhang, Grant Schoenebeck.
Persuading a Learning Agent. Tao Lin, Yiling Chen.
Bandit Social Learning: Exploration under Myopic Behavior. Kiarash Banihashem, Mohammad Taghi Hajiaghayi, Suho Shin, Aleksandrs Slivkins.
A Truth Serum for Eliciting Self-Evaluations in Scientific Reviews. Jibang Wu, Haifeng Xu, Yifan Guo, Weijie J. Su.
Competitive Information Design for Pandora’s Box. Bolin Ding, Yiding Feng, Chien-Ju Ho, Wei Tang, Haifeng Xu.
Prediction-sharing During Training and Inference. Yotam Gafni, Ronen Gradwohl, and Moshe Tennenholtz.
Learning from Imperfect Human Feedback: a Tale from Corruption-Robust Dueling. Yuwei Cheng, Fan Yao, Xuefeng Liu, Haifeng Xu.
Sharp Results for Hypothesis Testing with Risk-Sensitive Agents. Chen Shi, Stephen Bates, Martin J. Wainwright.
Forecasting Competitions with Correlated Events. Rafael Frongillo, Manuel Lladser, Anish Thilagar, Bo Waggoner.
Incentivizing Agents through Ratings. Peiran Xiao.
Call for Papers
We are now calling for poster submissions. We encourage two-page extended abstract submissions but also welcome full-paper submissions of recent or unpublished work related to the topics above.
Submission Timelines
June 7th: paper submission deadline.
June 14th: acceptance/rejection notification.
The workshop will take place on Monday morning (July 8) during the EC conference.
Submission Instructions
Please email the submission to ec24workshop-information-acquisition@googlegroups.com.