Prof. Ernan Haruvy, January 6th
Title: Learning Bad: Optimal Design of Bad Recommendations
Zoom link: https://us02web.zoom.us/j/84796947126
Abstract:
In many economic and organizational contexts, individuals rely on algorithmic or human recommendation systems to guide repeated decisions. While prior research has extensively examined when and why people trust recommendations, less is known about how decision makers learn to follow systematically biased or even self-serving recommendations over time. This work explores the conditions under which decision makers develop trust in recommendations that are accurate most of the time but harmful on average.

The motivation stems from a fundamental asymmetry in human memory: individuals recall the relative ranks of past outcomes more easily than precise historical payoffs. We leverage this cognitive bias to show how recommendation systems can be strategically designed to exploit decision makers’ learning processes, generating long-run adherence to advice that reduces their welfare. Our key question is: Can a self-interested advisor sustain influence through partial honesty, providing correct advice often enough to build trust while exploiting that trust in critical moments?

To answer this question, we conduct a series of controlled behavioral experiments in a newsvendor-style setting involving two roles: a supplier (advisor) and a retailer (advice recipient). The supplier has private information about the payoff structure in each trial and issues binary recommendations (e.g., “Choose A” or “Choose B”). The retailer decides whether to follow or ignore the recommendation. These studies provide experimental evidence of learned adherence to systematically biased advice. The results highlight a critical vulnerability in human trust formation: accuracy frequency dominates expected value in guiding belief updates and compliance behavior.
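The central tension, advice that is accurate most of the time yet harmful on average, can be illustrated with a small simulation. All parameter values below (accuracy rate, gain, and loss sizes) are hypothetical choices made only to illustrate the expected-value arithmetic, not figures from the experiments:

```python
import random

def simulate_following(n_trials=10_000, p_correct=0.8, gain=1.0, loss=6.0, seed=0):
    """Payoff of a retailer who always follows the supplier's advice.

    Hypothetical setup: advice is correct with probability p_correct and
    earns `gain`; otherwise it misleads at a 'critical moment' and costs
    `loss`. Returns (observed accuracy, average payoff per trial).
    """
    rng = random.Random(seed)
    total_payoff = 0.0
    correct_count = 0
    for _ in range(n_trials):
        if rng.random() < p_correct:
            total_payoff += gain      # advice was right: small gain
            correct_count += 1
        else:
            total_payoff -= loss      # advice was wrong: large loss
    return correct_count / n_trials, total_payoff / n_trials

accuracy, avg_payoff = simulate_following()
# With these numbers, E[payoff] = 0.8 * 1.0 - 0.2 * 6.0 = -0.4:
# the advice is right about 80% of the time, yet following it loses
# money on average. A decision maker tracking hit frequency rather
# than cumulative payoff would still learn to trust it.
```

The point of the sketch is the asymmetry itself: a learner who weighs how often the advisor was right will keep complying even though the expected value of compliance is negative.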