This paper studies consumer information acquisition in credence-good markets. An expert first sets a price menu, observes the consumer's problem, and makes a treatment recommendation. The consumer can then acquire costly information to verify the expert's recommendation. We find a non-monotonic relationship between verification costs and consumer surplus. When the cost is prohibitively high or zero, consumer surplus is zero, sustained by wasteful rejection or full surplus extraction, respectively. For an intermediate range of costs, a mutually beneficial outcome emerges in which the expert builds credibility more efficiently and the consumer secures a positive surplus. In contrast to the existing credence-good literature, we show that expert cheating arises as an intrinsic feature of the market, without consumer heterogeneity. This finding also informs the debate on the impact of new information technologies, suggesting that the proliferation of low-cost verification tools (e.g., AI-powered diagnostics) can benefit consumers by shifting the market into this Pareto-improving range.
This paper studies the role of consumer information in a monopolistic credence-good environment. The expert posts a price list for his services, diagnoses the consumer's problem, and makes a treatment recommendation; the consumer then decides whether to accept the recommendation based on her private signal. Our study shows that (i) any signal structure promoting honest service recommendations inevitably leads to zero consumer surplus, and (ii) the signal structure maximizing the weighted sum of consumer surplus and expert profit prompts the expert to consistently recommend the expensive service, resulting in undertreatment for less severe problems. This exercise sheds light on the inherent limitations of information provision in mitigating the dual advantages the expert wields in pricing and information.
Abstract
This paper considers a mechanism design problem with non-transferable utility in which agents face arbitrarily many decision problems. The agents simultaneously announce their preferences for each of the problems, and the utilitarian designer chooses a grand decision that sets the allocation for each individual decision problem. Drawing on the Groves mechanism from the classical mechanism design problem with transferable utility, we construct a mechanism that approximates the socially efficient outcome as the number of individual problems grows large. The mechanism is characterized by a constrained message space: each type is associated with a fictitious price induced by the Groves-type transfer, and the agent's report must respect these prices. A surprising feature of our result is that, while agents may not be truthful in equilibrium, their utility levels approximate the efficient level, as if they were almost telling the truth.