Research
Working Papers
“What Drives Demand for Fake Reviews? Evidence from Amazon Product Reviews”
Abstract: This paper investigates the key drivers of online sellers' decisions to invest in fake reviews. I first construct a model in which sellers strategically invest in fake reviews to influence consumers, who form beliefs about product quality from the reviews they observe. I then empirically test the model's predictions about the main determinants of fake-review prevalence using Amazon product review data. Because fake reviews are not directly observable, I use a machine learning model trained on verified fake Amazon reviews sourced from Facebook groups to estimate the likelihood that an Amazon product has fake reviews. I find that products are more likely to have fake reviews when they are durable, are new, attract more experienced online shoppers, or (in certain circumstances) face more substitutes. These findings help platforms and regulators better understand the economic environment in which fake reviews arise and how to combat them.
Works in Progress
“Estimating the Informativeness of Rating Systems” (with Babur De los Santos and Chungsang Lam)
Abstract: We study how the coarseness of rating systems affects how informative online ratings are about product quality. Our analysis combines two approaches. First, in a controlled Bookworm experiment that varies vertical and horizontal differentiation, we find that a two-star scale better distinguishes objective quality in a complex game, whereas a six-star scale provides clearer differentiation in a simple game. Second, using Yahoo Movie and EachMovie data, we construct a benchmark for average perceived quality using film and viewer fixed effects and compare each platform's raw rating rankings against this benchmark. The six-point scale aligns more closely with the benchmark than the thirteen-point scale. Together, the results suggest that rating-scale design should be matched to the heterogeneity of consumers' preferences to ensure efficient information transmission and improve welfare.