Publications:
School Choice Under Complete Information: An Experimental Study.
Yan Chen, Yingzhi Liang, Tayfun Sönmez. Journal of Mechanism and Institution Design 1.1 (2016): 45-82.
We present an experimental study of three school choice mechanisms under complete information, using the environment designed in Chen & Sönmez (2006). We find that the top trading cycles (TTC) mechanism outperforms both the Gale-Shapley deferred acceptance (DA) and the Boston immediate acceptance (BOS) mechanisms in terms of truth-telling and efficiency, whereas DA is more stable than either TTC or BOS. Compared to the incomplete information setting in Chen & Sönmez (2006), the performance of both TTC and BOS improves with more information, whereas that of DA does not.
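For readers less familiar with these mechanisms, the sketch below illustrates a minimal student-proposing deferred acceptance (Gale-Shapley) routine in Python. It is offered only as an illustration of the textbook DA algorithm, not the implementation used in the experiment, and all names are placeholders.

```python
def deferred_acceptance(student_prefs, school_priority, capacity):
    """Student-proposing deferred acceptance (Gale-Shapley).

    student_prefs:   dict student -> list of schools, most preferred first
    school_priority: dict school -> list of students, highest priority first
                     (every student should appear in every school's list)
    capacity:        dict school -> number of seats
    Returns a dict mapping each assigned student to a school.
    """
    # Lower rank means higher priority at that school.
    rank = {s: {i: r for r, i in enumerate(order)}
            for s, order in school_priority.items()}
    next_choice = {i: 0 for i in student_prefs}   # next school each student proposes to
    held = {s: [] for s in school_priority}       # tentatively held students per school
    unassigned = list(student_prefs)

    while unassigned:
        i = unassigned.pop()
        if next_choice[i] >= len(student_prefs[i]):
            continue                              # student has exhausted their list
        s = student_prefs[i][next_choice[i]]
        next_choice[i] += 1
        held[s].append(i)
        held[s].sort(key=lambda j: rank[s][j])    # keep highest-priority students first
        if len(held[s]) > capacity[s]:
            unassigned.append(held[s].pop())      # reject the lowest-priority student

    return {i: s for s, students in held.items() for i in students}
```

Because schools only hold students tentatively and nothing is finalized until the algorithm terminates, a rejected student can still displace an earlier proposer at a less-preferred school; this deferral is what distinguishes DA from the immediate-acceptance (Boston) mechanism.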
A Dynamic Matching Mechanism for College Admissions: Theory and Experiment.
Binglin Gong, Yingzhi Liang. Management Science (2025).
Market design has provided many managerial insights into why certain market institutions fail while others succeed in allocating scarce resources in both the for-profit and non-profit sectors. In this paper, we analyze a new form of dynamic matching mechanism enabled by innovations in information technology. We provide a theoretical and experimental examination of this mechanism in the context of college admissions in Inner Mongolia, China, where students receive real-time allocation feedback and are allowed to revise their choices. Theoretically, we show that efficient and stable outcomes arise in every rationalizable strategy profile when there is a sufficient number of revision opportunities. Experimentally, we find that in an environment with high strategic complexity, the Inner Mongolia Dynamic mechanism performs better than theoretical predictions: it is as stable as the Deferred Acceptance mechanism and as efficient as the Boston mechanism, with a higher truth-telling rate than either. These results suggest that the Inner Mongolia Dynamic mechanism can be a good substitute for static mechanisms in complex environments. It may also be useful for matching potential employers and employees in the labor market.
Working Papers:
Optimal Team Size under Complementary Efforts: Theory and Experiments.
Yan Chen, Yingzhi Liang. Under Review (2025).
We investigate the optimal group size for public goods provision when group members' efforts are complementary. We model the complementarity of efforts by incorporating a constant elasticity of substitution (CES) production function into the canonical linear public goods provision model. We find that the optimal team size depends on the degree of complementarity: when complementarity is high, the optimal team size has an upper bound, whereas when complementarity is low, it has a lower bound. These theoretical predictions are validated in a laboratory experiment.
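As a rough illustration of how complementary efforts can enter such a model (a sketch of one natural CES specification, offered as an assumption rather than the paper's exact functional form), member i's payoff could combine a linear private return with a CES-aggregated public return:

$$
\pi_i \;=\; e - x_i \;+\; m \left( \sum_{j=1}^{n} x_j^{\rho} \right)^{1/\rho},
\qquad \rho \le 1, \quad \sigma = \frac{1}{1 - \rho},
$$

where e is the endowment, x_i is member i's contribution, m is the marginal return, n is the team size, and sigma is the elasticity of substitution. As rho approaches 1 the model reduces to the canonical linear public goods game, while lower rho means stronger complementarity, approaching the minimum-effort (perfect complements) case.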
Motivated Self-Control.
Wei Huang, Yingzhi Liang. Working Paper (2025).
We investigate two channels that generate overconfidence in one's self-control ability: perseverance and imperfect memory. In a field experiment with university students, we conducted two rounds of self-control-dependent tasks. The results show that preannounced reminders of first-round performance, given before the second-round task, significantly increase first-round completion, thereby boosting self-confidence regarding the second-round task. This result establishes over-perseverance as a viable mechanism for boosting future self-confidence. Without reminders, students exhibited optimistic memory biases about their first-round performance. However, this biased memory did not translate into overconfidence about the second-round task relative to students who received surprise ex-post reminders. Our results indicate that imperfect memory is a less robust mechanism for generating overconfidence than perseverance. We develop an intrapersonal multiple-self model in which a present-biased agent persists in tasks to signal self-control ability to their future self and thereby resolve time inconsistency; the model's predictions are consistent with our experimental findings.
AI Reasoning Can Backfire: Increased Trust Reduces the Performance Gain from Unique Human Knowledge.
Zenan Chen, Ruijiang Gao, Yingzhi Liang. Working Paper (2025).
While it is common practice for Large Language Models (LLMs) to display their reasoning process, little is known about how this affects user trust and whether the extensiveness of the reasoning matters. We conduct a randomized online experiment to examine how brief versus extensive AI reasoning influences decision-making under two conditions: when humans lack unique human knowledge (UHK) and when they possess it. Without UHK, both brief and extensive reasoning significantly increase user trust, measured by alignment with AI recommendations. With UHK, trust still rises even when participants are explicitly told that the AI lacks critical information. This over-trust reduces decision accuracy, as users abandon correct judgments based on their private knowledge. Our results reveal that while detailed reasoning can be beneficial in symmetric information settings, it can be detrimental when information is asymmetric, underscoring the need for context-aware explanation strategies in LLM-assisted decision-making.
Incorporating Private Information Into Centralized Algorithms: A Field Experiment at a Ride-Sharing Platform.
Yingzhi Liang. Working Paper (2024).
Many have seen the gig economy as the "future of work". Despite having more flexible working hours than workers at traditional workplaces, ride-sharing drivers have little power over the algorithm that assigns them tasks. As a result, there can be a misalignment between drivers' location preferences and their assigned trips. We examine the effect of allowing drivers to define their own working regions through a natural experiment on DiDi, the largest ride-sharing platform in China. We find that treatment drivers increase working hours and income by 5% while maintaining productivity, measured by hourly earnings. We also find no evidence that the treatment lowers matching efficiency, measured by passengers' and drivers' waiting times.
Work in Progress:
Revitalizing Dormant Teams in Online Communities: A Field Experiment on Kiva.org.
Wei Ai, Roy Chen, Yan Chen, Yingzhi Liang, Qiaozhu Mei.
We investigate the effect of repeated interventions in charitable giving on the microfinance platform Kiva.org. After sending forum messages to inactive teams every month for six months, we find that treatment lenders lend significantly more in the first month, but the effect gradually declines over the course of the experiment.
Team Composition: Friends or Strangers?
Yingzhi Liang, Tanya Rosenblat.
We study peer effects by randomly assigning group members in an undergraduate introductory programming course. Students are paired with either a friend or a stranger from the same class to complete a group assignment. We find that students paired with friends have a 6% higher completion rate than students paired with strangers. A follow-up survey reveals that the effect comes from students feeling more accountable to their friends than to unfamiliar classmates. Our findings confirm the effectiveness of using strong ties as a commitment device in an educational setting.
Predicting Students' Academic Success Using Simple Economics Games.
Yan Chen, Yingzhi Liang, Qiaozhu Mei, Dongwu Wang, Stephanie Wang.
We predict undergraduate students' GPAs using their choices and behavioral patterns in a set of simple economics games, including the trust game, beauty contest, competitiveness game, risk lottery, and knapsack problem. Among the machine learning models we use, LassoLars performs best, but the overall predictive power of these economics games is small. A self-reported procrastination measure is more predictive than any of the game choices.
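To give a concrete sense of the kind of pipeline this involves (a minimal sketch with hypothetical column names and file path, not the study's actual code or data), scikit-learn's LassoLars can be cross-validated on game-choice features as follows:

```python
# Minimal sketch: predicting GPA from economics-game features with LassoLars.
# The file path and all column names below are hypothetical placeholders.
import pandas as pd
from sklearn.linear_model import LassoLars
from sklearn.model_selection import cross_val_score

df = pd.read_csv("game_choices.csv")                 # hypothetical dataset
feature_cols = ["trust_sent", "beauty_contest_guess", "compete_entry",
                "lottery_choice", "knapsack_score", "procrastination"]
X, y = df[feature_cols], df["gpa"]

model = LassoLars(alpha=0.01)                        # LARS-based lasso; alpha would be tuned
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"mean cross-validated R^2: {scores.mean():.3f}")
```

A low cross-validated R^2 in a setup like this is what the small overall predictive power described above would look like in practice.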