Reputation Effects in Two-Sided Incomplete-Information Games (Job Market Paper)
This paper studies reputation effects in a class of games with imperfect public monitoring and two long-lived players, each of whom has private information about his own type and faces uncertainty about the type of his opponent. Players may be either a strategic type, who maximizes expected utility, or a (simple) commitment type, who plays a prespecified action every period. As in standard models, a strategic player establishes a reputation for being the commitment type by mimicking the commitment type's behavior. The distinct feature of my model is that both strategic players aim to establish a (false) reputation for being the commitment type. The class of games I consider, namely games with one-sided binding moral hazard at the commitment profile, encompasses a wide range of economic interactions between two parties that involve hidden information (e.g., between a regulator and a regulatee) or hidden action (e.g., between an employer and an employee), where the reputation concerns of both parties are apparent. In these interactions, one party (the principal) prefers that the other party (the agent) play in a specific way and uses costly auditing to enforce this behavior. The principal aims to establish a reputation for being diligent, whereas the agent wants to build a reputation for being virtuous. Extending the techniques of Cripps, Mailath and Samuelson (2004), I find that neither strategic player can sustain a reputation for playing a noncredible behavior, i.e., a behavior that is not optimal given that the opponent is best responding in the stage game. Hence, in this class, the true types of both players are eventually revealed in all Nash equilibria, uniformly, and asymmetric information does not affect the equilibrium analysis in the long run.
This paper studies the misrepresentation of information by a privately informed agent to an authority figure. Misrepresentation of private information by agents, even in the presence of a regulator or monitor, is a common feature of many economic interactions. Moreover, in most such situations the regulator or monitor, who is supposed to detect deviations from the desirable behavior, may himself have an incentive to engage in moral hazard because monitoring is costly and time-consuming. The goal of this paper is to understand how private information can be manipulated by a strategic Sender in the presence of a strategic Receiver, who aims to deter the manipulation of information using costly auditing, when their interactions are not contractible. The Sender has noisy private information about an underlying state of nature and can misrepresent this information by sending false messages. The magnitude of the Sender's cost of lying is governed by the auditing strategy of the Receiver, which determines the probability of an audit and of detecting undesirable behavior by the Sender. The Receiver, on the other hand, may have an incentive not to audit intensively if he believes that the Sender will report accurately. The Receiver believes that the Sender could be an honest type with some strictly positive probability: an honest type always sends the true message, whereas a strategic Sender maximizes expected payoffs. Similarly, the Sender believes that the Receiver could be a tough type with some strictly positive probability: a tough Receiver always chooses high auditing, whereas a strategic Receiver maximizes expected payoffs. The fact that the Sender's private information is imperfect and the Receiver's auditing is random prevents the players from learning each other's true types. To model this environment, I use a simultaneous-move version of an inspection game with incomplete information about the players' types, where actions are not observable.
This paper analyzes how uncertainty about each other's types and the concern for (false) reputation pay off for both parties, and characterizes the equilibria in (1) the one-shot game, (2) the two-period game, and (3) the infinitely repeated game. The equilibrium strategies are determined by the parameters of the model, as well as by the discount factors (in the repeated game).
Optimality of Linear Contracts in Continuous-Time Principal-Agent Models with Collusion (in progress)
We analyze a continuous-time, two-agent hidden-action model with exponential utility, in which there is strategic interaction and the possibility of communication between the agents. To give a theoretical justification for the use of linear contracts, we extend the model of Sung (1995) to incorporate this communication possibility. In this continuous-time repeated agency problem, where the agents' actions jointly determine the mean and the variance of the outcome process, we prove that there exists an optimal compensation scheme for each agent that is linear in the output.
Teamwork vs. Collusion, joint with Mehmet Barlo, (in progress)
We study two mechanisms considered by a principal who needs to hire two agents to operate her enterprise when collusion between the agents is an issue. In our two-agent, single-task hidden-action model, all parties have exponential utility functions, the principal owns normally distributed, observable, and verifiable returns and is restricted to offering linear contracts, and the agents are assumed to be able to exploit all collusion opportunities via enforceable side contracts contingent on effort levels as well as on outcomes. We formulate collusion-proof contracts and teamwork contracts, and show that the principal is always better off designing a teamwork contract rather than offering separate collusion-proof contracts. We also provide a full characterization of the situations in which it is sufficient to restrict attention to optimal incentive compatible and individually rational contracts rather than collusion-proof contracts. The assumptions underlying this characterization are also sufficient conditions under which offering a teamwork contract is always better than offering incentive compatible contracts.