This paper investigates the price discrepancy between Credit Default Swap (CDS) auctions and the over-the-counter (OTC) market using a structural uniform-price multi-unit auction framework. We estimate the marginal values that bidders attach to the underlying bonds by exploiting the two-stage structure of the auction mechanism. We find an inverse relationship between bid shading and risk aversion: as risk aversion increases, bid shading shrinks. Consequently, neglecting dealers' risk preferences overstates bidder surplus. We also recover the risk preferences that reconcile the estimated marginal values with OTC prices, and we examine how bid shading differs across bidder groups, such as clients and dealers, which sheds light on bidders' hedging and speculative motives in CDS auctions.
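To make the logic of this result concrete, the stylized notation below (ours, introduced only for illustration; the paper's model may differ) defines bid shading and states the comparative static reported above.

```latex
% Illustrative notation, not necessarily the paper's:
%   v_i(q):      bidder i's marginal value for the q-th unit of the bond
%   b_i(q;\rho): bidder i's submitted bid for that unit
%   \rho:        a risk-aversion parameter
\[
  s_i(q;\rho) \;=\; v_i(q) - b_i(q;\rho) \;\ge\; 0,
  \qquad
  \frac{\partial s_i(q;\rho)}{\partial \rho} \;<\; 0 .
\]
% Bid shading s_i is the gap between marginal value and bid. An estimator
% that imposes risk neutrality (rho = 0) imputes more shading than actually
% occurs when the true rho > 0, inflating the recovered marginal values
% v_i(q) and hence the estimated bidder surplus.
```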
In this paper, we explore the strengths and weaknesses of applying Reinforcement Learning to dynamic first-price sealed-bid auctions. Despite the importance of these auctions, applied economists lack a computational framework for studying their asymmetric dynamics and for understanding how learning shapes the strategies of bidders who bid repeatedly. We apply a Q-learning algorithm based on Fershtman and Pakes (2012) to first-price sealed-bid auctions with myopic and forward-looking bidders, under both independent and correlated errors. We find that this approach computationally outperforms standard approaches such as computing the Markov Perfect Bayesian Equilibrium (MPBE), and that it replicates the Bayesian Nash equilibrium (BNE) when bidders are myopic. As a result, we can simulate complex learning dynamics even when the errors are time-dependent. When agents are forward-looking, however, the algorithm does not converge to the BNE because of its updating protocol. Fershtman and Pakes (2012) address these issues with the Restricted Experience-Based Equilibrium (REBE); we evaluate the theoretical and computational implications of their approach and provide further insights into Q-learning algorithms and their relationship to theoretical equilibria.
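As a rough illustration of the kind of experiment involved, the sketch below trains myopic Q-learning bidders in a repeated two-bidder first-price sealed-bid auction and compares the learned policy with the symmetric BNE b(v) = v/2 for i.i.d. uniform values. The discretization, exploration rate, and learning rate are our own illustrative choices, not the paper's specification (which follows Fershtman and Pakes, 2012).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (assumed, not the paper's): two bidders, private values
# drawn i.i.d. uniform on a grid. The state is a bidder's own value; actions
# are discrete bids.
VALUES = np.linspace(0.0, 1.0, 11)   # discretized private values
BIDS = np.linspace(0.0, 1.0, 11)     # discretized bids
EPISODES = 200_000
EPS = 0.1                            # epsilon-greedy exploration rate
ALPHA = 0.05                         # learning rate

# One Q-table per bidder: Q[i][v, b] = estimated payoff of bidding BIDS[b]
# when bidder i's value is VALUES[v].
Q = [np.zeros((len(VALUES), len(BIDS))) for _ in range(2)]

for t in range(EPISODES):
    v_idx = rng.integers(len(VALUES), size=2)   # each bidder draws a value
    b_idx = np.empty(2, dtype=int)
    for i in range(2):
        if rng.random() < EPS:                  # explore
            b_idx[i] = rng.integers(len(BIDS))
        else:                                   # exploit current Q-table
            b_idx[i] = np.argmax(Q[i][v_idx[i]])
    bids = BIDS[b_idx]
    winner = int(np.argmax(bids + rng.uniform(0, 1e-9, 2)))  # random tie-break
    for i in range(2):
        reward = (VALUES[v_idx[i]] - bids[i]) if i == winner else 0.0
        # Myopic update: the stage game is static, so no discounted
        # continuation value enters the target.
        Q[i][v_idx[i], b_idx[i]] += ALPHA * (reward - Q[i][v_idx[i], b_idx[i]])

# With symmetric uniform values, the two-bidder first-price BNE is b(v) = v/2;
# compare the learned greedy policy against it column by column.
learned = BIDS[np.argmax(Q[0], axis=1)]
print(np.c_[VALUES, learned, VALUES / 2])
```

Because these bidders are myopic, the Q-update carries no continuation value; adding a discounted maximum over next-period actions would give the forward-looking variant whose convergence difficulties motivate the REBE discussion above.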