9:00 - 9:45 AM
Invited talk 1: Jason Hartline
9:45 - 10:00 AM
Contributed talk 1: Benchmarking LLMs' Judgments with No Gold Standard
10:00 - 10:15 AM
Contributed talk 2: How AI Aggregators Affect Knowledge Sharing
Break (10:15 - 10:45 AM)
10:45 - 11:30 AM
Invited talk 2: Aranyak Mehta
11:30 - 11:45 AM
Contributed talk 3: Human-AI Interactions and Societal Pitfalls
11:45 AM - 12:00 PM
[TBD]
Poster Sessions
1:30 - 2:45 PM
The Effect of State Representation on LLM Agent Behavior in Dynamic Routing Games
Tokenized Bandit for LLM Decoding and Alignment
Learning from a Mixture of Information Sources
An Interpretable Automated Mechanism Design Framework with Large Language Models
Tell me Why: Incentivizing Explanations
Is Your LLM Overcharging You? Tokenization, Transparency, and Incentives
Natural Language Mechanisms via Self-Resolution with Foundation Models
How AI Aggregators Affect Knowledge Sharing
Verbalized Bayesian Persuasion
STEER-ME: Evaluating LLMs in Information Economics
Break (2:45 - 3:15 PM)
3:15 - 4:30 PM
Cost-Aware Sequential Testing for Human-in-the-Loop LLM Tasks
Benchmarking LLMs' Judgments with No Gold Standard
Understanding LLMs' Economic Rationality through Sparse Autoencoders
The Value of Costly Signaling in Interactive Alignment with Inconsistent Preferences
Bayesian Persuasion as a Bargaining Game
Framing and Signaling: An LLM-Based Approach to Information Design
Human-AI Interactions and Societal Pitfalls
Proper Dataset Valuation by Pointwise Mutual Information
Incentives for Digital Twins: Task-Based Productivity Enhancements with Generative AI
Persuasive Calibration
Fairness Behind the Veil: Eliciting Social Preferences from Large Language Models