I develop a rational model that explains why credit-rating agencies classify bonds into coarse categories. The optimal number of categories and their cutoffs are outcomes of profit maximization by a rating agency. These optima are determined by the trade-off between the number of issuers that are willing to pay for a rating and how much they are willing to pay. The model predicts that rating standards are tighter during non-crisis periods. It also predicts that no new issuer-pay credit-rating agencies offering finer categories will enter. My empirical tests provide evidence consistent with the model's predictions.
We investigate the relation between high-frequency changes in the supply curve and the behavior of stock prices using variance ratios of 15-second and 5-minute returns based on transaction prices and quote midpoints. Our approach precisely and rapidly detects widely known dislocations in liquidity such as the Flash Crash. Aside from such episodes, there is little evidence that rapid changes in the supply curve generated by quotation activity and/or high-frequency trading affect variance ratios in small- or large-capitalization stocks. Stocks that experience large intraday increases in quotation activity also do not experience degradation in variance ratios. There is some evidence that increased quotation activity improves variance ratios and reduces the implicit cost of trading. Adjustments to make-take fees that reduce the explicit cost of trading also result in modest improvements in variance ratios. At an aggregate level, the evidence suggests that high-frequency quotation or trading does not affect prices in a deleterious way; the data point to a highly liquid market in which prices largely behave as a random walk.
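The variance-ratio diagnostic above compares the variance of long-horizon returns to a multiple of the short-horizon variance; under a random walk the ratio is close to one. A minimal sketch of the idea, not the paper's exact estimator: the i.i.d. simulated returns, the non-overlapping aggregation, and the horizon lengths are illustrative assumptions.

```python
import numpy as np

def variance_ratio(returns, q):
    """Ratio of the variance of non-overlapping q-period returns to q times
    the variance of 1-period returns; near 1.0 for a random walk."""
    returns = np.asarray(returns, dtype=float)
    n = len(returns) - len(returns) % q          # trim so the array reshapes cleanly
    short_var = np.var(returns[:n], ddof=1)
    long_returns = returns[:n].reshape(-1, q).sum(axis=1)
    long_var = np.var(long_returns, ddof=1)
    return long_var / (q * short_var)

# Illustration: i.i.d. 15-second returns aggregated to 5 minutes (q = 20)
rng = np.random.default_rng(0)
r = rng.normal(0.0, 1e-4, 78_000)
vr = variance_ratio(r, 20)                       # close to 1 for i.i.d. returns
```

Values materially below one indicate mean reversion at the longer horizon; values above one indicate momentum.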
US sovereign debt is widely regarded as risk-free in nominal terms. However, between January 2008 and September 2010, US sovereign credit default swaps (CDS) traded at a premium to a sample of US corporate CDSs. The implied default probabilities from CDS premiums show that the US government is
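Implied default probabilities of the kind mentioned above are often sketched with the standard "credit triangle" approximation, in which the CDS spread roughly equals the hazard rate times the loss given default. This is a generic back-of-the-envelope calculation, not necessarily the paper's method; the flat hazard rate, 40% recovery, and 5-year horizon are assumptions.

```python
import math

def implied_default_prob(spread_bps, recovery=0.4, horizon_years=5.0):
    """'Credit triangle' approximation: spread ~= hazard * (1 - recovery),
    with a flat hazard rate; P(default by T) = 1 - exp(-hazard * T).
    The recovery rate and horizon are illustrative assumptions."""
    hazard = (spread_bps / 1e4) / (1.0 - recovery)
    return 1.0 - math.exp(-hazard * horizon_years)

# A 100 bp spread with 40% recovery implies roughly an 8% 5-year default probability
p = implied_default_prob(100.0)
```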
Schennach (2007) has shown that the empirical likelihood (EL) estimator may not be asymptotically normal when a misspecified model is estimated. This problem occurs because the empirical probabilities of individual observations are restricted to be positive. We find that even the EL estimator computed without this restriction can fail to be asymptotically normal for misspecified models if the sample moments weighted by unrestricted empirical probabilities do not have finite population moments. To address this problem, we propose a group of alternative estimators, which we refer to as modified EL (MEL) estimators. For correctly specified models, these estimators have the same higher-order asymptotic properties as the EL estimator. The MEL estimators are obtained by applying the Generalized Method of Moments (GMM) to an exactly identified model. Our simulation results provide promising evidence for these estimators.
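For concreteness, EL is typically computed through its saddle-point dual: maximize the dual objective sum(log(1 + lam'g_i(theta))) over the multiplier lam, then minimize the resulting profile over theta. The sketch below applies this to a toy over-identified mean model with known unit variance; the moment choice, simulated sample, and optimizers are illustrative assumptions, and this is the standard EL estimator, not the MEL estimators proposed in the paper.

```python
import numpy as np
from scipy.optimize import minimize, minimize_scalar

def el_profile(theta, x):
    """Profile EL criterion at theta for a toy over-identified model with
    moments E[x - theta] = 0 and E[(x - theta)^2 - 1] = 0 (unit variance assumed)."""
    g = np.column_stack([x - theta, (x - theta) ** 2 - 1.0])

    def neg_dual(lam):
        t = 1.0 + g @ lam
        if np.any(t <= 1e-8):                    # implied probabilities must stay positive
            return 1e10
        return -np.sum(np.log(t))

    res = minimize(neg_dual, np.zeros(g.shape[1]), method="Nelder-Mead",
                   options={"xatol": 1e-10, "fatol": 1e-10})
    return -res.fun                              # inner maximum over the multiplier

rng = np.random.default_rng(1)
x = rng.normal(0.5, 1.0, 500)                    # true mean 0.5, unit variance
theta_el = minimize_scalar(lambda t: el_profile(t, x),
                           bounds=(-1.0, 2.0), method="bounded").x
```

Note the positivity check inside the dual: it enforces exactly the restriction on empirical probabilities that the abstract identifies as the source of the non-normality problem under misspecification.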
This paper introduces and studies the properties of Demeaned Generalized Empirical Likelihood (DGEL) estimators, constructed by subtracting the sample average from the moment conditions of Generalized Empirical Likelihood (GEL) estimators. We show that DGEL estimators significantly simplify computation. Under a symmetric distribution, every DGEL estimator has the same higher-order properties as the commonly used empirical likelihood (EL) estimator. We show that a particular member of this group, the Demeaned Exponential Tilting (DET) estimator, exhibits the best higher-order properties. Specifically, it is influenced by only one source of potential bias, it is higher-order efficient after bias correction, and it is well defined under misspecification with unbounded moment functions. These results support wider application of demeaned estimators and offer econometric guidance to empiricists using this method. Working papers are available upon request.
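The demeaning step itself is simple to state. The sketch below (an illustration of the mechanics, not the paper's DGEL estimators) shows that demeaned moments sum to exactly zero across observations, so uniform weights 1/n already satisfy the weighted-moment constraint at any parameter value; this is one plausible way to see why computation simplifies.

```python
import numpy as np

def demean_moments(g):
    """Subtract the column means so every demeaned moment condition sums
    to exactly zero across observations."""
    return g - g.mean(axis=0, keepdims=True)

rng = np.random.default_rng(2)
g = rng.normal(0.3, 1.0, size=(200, 2))   # stacked moment evaluations g_i(theta)
gd = demean_moments(g)

# Uniform weights 1/n now satisfy the weighted-moment constraint exactly.
p = np.full(len(gd), 1.0 / len(gd))
constraint = p @ gd                        # zero vector up to floating-point rounding
```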
We compare the stochastic discount factor model and the beta pricing model when proxy factors are used. This comparison is important because the risk factors used in empirical studies are proxies for the true latent factors. We find that the beta pricing model generates more efficient pricing-error estimates when proxy factors are used. This finding contradicts Jagannathan and Wang (2002), who show that both models produce equally efficient pricing-error estimates; the contradiction arises because they use the true factors in their comparison. Our second finding is that factor means influence the efficiency of the estimates, so we suggest using demeaned factors in estimation.
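The suggestion to use demeaned factors can be illustrated with a standard two-pass beta-pricing exercise on simulated data. This is a generic illustration of the approach, not the paper's estimator; the betas, zero-beta rate, price of risk, and noise levels are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
T, N = 1000, 10
beta = np.linspace(0.5, 1.5, N)                    # assumed asset betas
gamma0, lam = 0.01, 0.06                           # assumed zero-beta rate and price of risk
f = rng.normal(0.0, 0.2, T)
f -= f.mean()                                      # demean so the toy pricing relation holds exactly in sample
eps = rng.normal(0.0, 0.02, (T, N))
R = gamma0 + beta * lam + np.outer(f, beta) + eps  # E[R_i] = gamma0 + beta_i * lam

# Pass 1: time-series betas, estimated with the demeaned factor
beta_hat = (f @ (R - R.mean(axis=0))) / (f @ f)

# Pass 2: cross-sectional regression of average returns on estimated betas
X = np.column_stack([np.ones(N), beta_hat])
gamma0_hat, lam_hat = np.linalg.lstsq(X, R.mean(axis=0), rcond=None)[0]
```

With a non-demeaned factor, the factor's sample mean enters the assets' average returns in proportion to beta and contaminates the estimated price of risk; demeaning removes that term.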