Chatbot or Humanaut? How the Source of Advice Impacts Prosocial Behavior (w/ Haritima Chauhan). 2026, Journal of Behavioral and Experimental Economics.
This paper explores how the source of advice -- human or generative AI (genAI) -- relates to behavior in three classic bargaining games commonly used to assess prosociality and cooperative welfare gains. Utilizing a novel experiment, we show that the source of advice matters. While both sources of advice increased prosociality, players preferred human advice over that from genAI and were more willing to pay for it. Prosocial behavior was more prevalent when players received human advice -- advice increased the probability of adopting the Pareto-optimal strategy by 14% in the stag hunt and boosted contributions by 19% in the public goods game and by 8% in the dictator game. Leveraging advances in language AI, we demonstrate that the advice corpora differ significantly. Humans were more objective, specific, intuitive, and norm-oriented; genAI offered guided reasoning and targeted concepts of risk and strategy. Entities adopting genAI technologies should balance AI agency with human oversight, mindful of behavioral salience and moral credibility.
Revisiting Erat and Gneezy's White Lies Paradigm (w/ Haritima Chauhan). 2024, Journal of Economic Psychology.
When Pretty Hurts: Beauty Premia and Penalties in eSports Contracts (w/ Haritima Chauhan and Steven Kistler). 2024, Journal of Economic Behavior and Organization.
Gender Penalties and Solidarity -- Teaching Evaluation Differentials in and out of STEM (w/ Andrew Hussey). 2023, Economics Letters.
Show No Quarter: Combating Plausible Lies with Ex Ante Honesty Oaths (w/ Haritima Chauhan). 2023, Journal of the Economic Science Association.
Initiating Free-flow Communication in Trust Games (w/ Haritima Chauhan). 2023, Frontiers in Behavioral Economics.
You Can’t Hide Your Lying Eyes: Honesty Oaths and Misrepresentation (w/ Fenndy Liu and Haritima Chauhan). 2022, Journal of Behavioral and Experimental Economics, Vol. 98. https://doi.org/10.1016/j.socec.2022.101880
Abstract: Lying about race or personal characteristics for a job or in college admissions is common and has recently become a high-profile issue. In this paper, we explore the decision to misrepresent oneself and determine how honesty oaths impact personal characteristic reporting. To do this, we execute an experiment on Amazon MTurk, using a self-reporting task involving human eye color. We find that honesty oaths elicit more truthful behavior -- primarily reducing implausible lies (maximal outcome lies). As a result, we spent 27.6% less on bonuses than we would have without oath-taking. There is some evidence that if one believes lying is common, they are more likely to lie as well. We conclude that oaths decrease extreme misrepresentation and that expectations of group behavior significantly impact the decision to deceive.
Stay at Home Orders, Loneliness, and Collaborative Behavior (w/ Marine Foray and Andrew Hussey). 2021, Economics and Human Biology, Vol. 43(1).
Linguistic Signaling, Emojis, and Skin Tone in Trust Games. 2020, PLoS ONE, Vol. 15(6): e0233277.
This paper reports the results of an experiment involving text-messaging and emojis in laboratory trust games executed on mobile devices. Decomposing chat logs, I find that trust increases dramatically with the introduction of emojis to one-shot games, while reciprocation increases only modestly. Skin tones embedded in emojis impact sharing and resulting gains -- to the benefit of some and the detriment of others. Both light- and dark-skinned players trust less on receipt of a dark skin tone emoji, suggestive of statistical discrimination. In this way, computer-mediated communication leads to reduced gains for dark-skinned persons. These results highlight the complex social judgment that motivates trust in an anonymous counterpart.