Concentration risk in banks’ portfolios
Concentration risks in credit portfolios arise from an unequal distribution of loans to single borrowers (name concentration) or industrial / regional sectors (sector or country concentration). Supervisory authorities like the Bank of Israel aim to mitigate concentration risk by taking it into account in their rules for setting capital adequacy.
This project aimed to develop a robust Economic Capital model for the Bank of Israel. Economic Capital represents the amount of equity needed to cover potential losses beyond expected defaults, ensuring bank solvency. Computing Economic Capital is typically done internally by banks using a method called Value at Risk (VaR), via a Monte Carlo simulation. For a large portfolio of loans, Monte Carlo simulations can be computationally expensive. To overcome this difficulty, Economic Capital has traditionally been calculated via an Asymptotic Single Risk Factor (ASRF) model, which makes the computation fast and readily applicable.
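For reference, the ASRF capital charge for a single exposure has a simple closed form (the Vasicek formula underlying the Basel rules). The snippet below is a minimal sketch of that formula, not part of the project code; the PD, LGD, asset-correlation, and confidence-level inputs are purely illustrative.

```python
from scipy.stats import norm

def asrf_capital(pd_, lgd, rho, q=0.999):
    """Unexpected-loss capital per unit of exposure under the ASRF model:
    the PD conditional on the q-th quantile of the single systematic factor,
    minus the unconditional PD, scaled by LGD (Vasicek formula)."""
    cond_pd = norm.cdf((norm.ppf(pd_) + rho**0.5 * norm.ppf(q)) / (1 - rho)**0.5)
    return lgd * (cond_pd - pd_)

# Illustrative inputs: 1% PD, 45% LGD, 12% asset correlation, 99.9% confidence.
print(asrf_capital(0.01, 0.45, 0.12))
```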
However, this approach neglects important risk management aspects, such as name and/or sector concentration in the portfolio. This project addressed these limitations by:
Implementing advanced methodologies: Drawing upon research by Düllmann & Masschelein (2007), Pykhtin (2004), and Gordy (2003), the model was able to capture the impact of multiple risk factors, including (but not limited to) sector concentration.
Overcoming computational challenges: The model addressed the computational limitations of Monte Carlo simulations for large portfolios via a second-order approximation that accurately captures the impact of concentration risk (the brute-force Monte Carlo benchmark it replaces is sketched below for comparison).
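A minimal sketch of the Monte Carlo benchmark mentioned above, with an entirely made-up portfolio (exposures, PDs, sector assignments, factor loading, and the sector correlation matrix are all hypothetical): correlated sector factors drive defaults, and Economic Capital is read off the simulated loss distribution as the 99.9% VaR minus the expected loss.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Hypothetical portfolio: exposures, PDs, LGD, sector assignments, factor loading.
n, n_sectors = 5000, 3
ead = rng.uniform(0.5, 2.0, n)
pd_ = rng.uniform(0.005, 0.03, n)
lgd = 0.45
sector = rng.integers(0, n_sectors, n)
loading = 0.4                                   # sqrt of asset correlation with the sector factor

# Illustrative sector-factor correlation matrix (0.5 off-diagonal).
corr = np.full((n_sectors, n_sectors), 0.5) + 0.5 * np.eye(n_sectors)
chol = np.linalg.cholesky(corr)

n_sim = 20_000
threshold = norm.ppf(pd_)                       # default when the asset return falls below this
losses = np.empty(n_sim)
for s in range(n_sim):
    z = chol @ rng.standard_normal(n_sectors)   # correlated systematic sector shocks
    eps = rng.standard_normal(n)                # idiosyncratic shocks
    asset = loading * z[sector] + np.sqrt(1 - loading**2) * eps
    losses[s] = np.sum(ead * lgd * (asset < threshold))

var_999 = np.quantile(losses, 0.999)
economic_capital = var_999 - losses.mean()      # unexpected loss at the 99.9% level
print(round(economic_capital, 2))
```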
The developed model provided the Bank of Israel with a valuable tool to assess the impact of sector concentration on bank portfolios, informing regulatory decisions aimed at enhancing the stability of the banking system.
More details can be found in this document / presentation. A code extract (Matlab) can be found here.
Predicting inflation using ARMA
As part of my research on inflation and the stock market, I used an ARMA model to forecast inflation, which I then used to break down the official (seasonally adjusted) Consumer Price Index inflation series into two components: expected and unexpected. In my research, these two components are then studied in relation to the Israeli stock market.
The implementation of the ARMA model is relatively straightforward, and is performed using the statsmodels package in Python. A code extract (Python) can be found here. A separate Matlab script I wrote was used to seasonally adjust the CPI series for the years before 1996, in order to increase the sample size and improve the model's predictions for the early years.
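A minimal sketch of the decomposition (not the actual code extract), assuming a monthly, seasonally adjusted inflation series is already loaded into a pandas Series: the one-step-ahead fitted values serve as expected inflation and the residuals as the unexpected component.

```python
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

def decompose_inflation(inflation: pd.Series, order=(1, 0, 1)) -> pd.DataFrame:
    """Fit an ARMA(p, q) model (d = 0) and split inflation into an expected
    component (one-step-ahead in-sample prediction) and an unexpected
    component (the residual)."""
    res = ARIMA(inflation, order=order).fit()
    expected = res.fittedvalues                 # one-step-ahead predictions
    unexpected = inflation - expected
    return pd.DataFrame({"expected": expected, "unexpected": unexpected})
```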
Predicting inflation using interest rates
In this project I replicated the results of two papers by Eugene Fama, from 1975 and 1982, and then extended the results to include the more recent experience. The project was a component of my research on inflation and the stock market but eventually did not make it into the paper. The main motivation was to obtain a market-based expected/unexpected inflation decomposition, in order to test the relationship of these components to capital markets. Such a decomposition of course exists for the recent period using Treasury Inflation Protected Securities (TIPS), but given the long time-series nature of the study, I wanted a similar decomposition for all postwar data.
All market-based measures of expected inflation rest on the idea that market rates somehow reflect information regarding the future realization of inflation, the latter being a payoff-relevant macroeconomic variable. Fama’s paper from 1975 runs a simple least squares regression of inflation on the “short” interest rate. The results from such a regression are only valid if the expected real rate is constant. An academic polemic followed the publication of this paper, with critics arguing against the hypothesis that the expected real rate was constant. Fama then continued to study this problem and published another paper in 1982, where he accounts for variation in the expected real rate using a generalized least squares methodology.
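The 1975 regression itself is a one-liner once the data are aligned. The sketch below is illustrative only, assuming two hypothetical pandas Series: realized inflation over period t and the nominal short rate observed at the start of period t.

```python
import pandas as pd
import statsmodels.api as sm

def fama_1975_regression(inflation: pd.Series, short_rate: pd.Series):
    """Regress realized inflation on the nominal short rate known at the
    start of the period. Under a constant expected real rate the slope
    should be close to 1, and the residual is unexpected inflation."""
    X = sm.add_constant(short_rate)
    return sm.OLS(inflation, X, missing="drop").fit()

# Usage with hypothetical series: res = fama_1975_regression(cpi_inflation, tbill_rate)
```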
The procedure to account for the variation in the expected real interest rate proceeds in two steps. To get an unbiased estimate of the coefficient on the interest rate, Fama used generalized least squares (GLS) estimated by maximum likelihood, simultaneously estimating two coefficients: the coefficient on the interest rate and a moving average coefficient. Instead of this, I used feasible GLS (FGLS) to jointly estimate these coefficients. This requires computing a transformation matrix that rotates the covariance matrix of the original model, which accounts for the serial correlation in the error induced by the moving average (because the real rate varies over time). For any given value of the moving average coefficient, I computed a closed-form version of the inverse covariance matrix, following Uppuluri (1969). This makes the computation of the interest rate coefficient a matter of simply implementing a formula, which is often better than numerically maximizing the likelihood function. This concludes the first step. In the second step, I decompose the residual from the FGLS regression into two components: a moving average plus white noise. Fama used a Kalman filter to do so. Like him, I computed the conditional expectation of each component, given the whole sample.
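A minimal sketch of the first (FGLS) step under the stated assumptions: the error follows an MA(1) process, its tridiagonal covariance matrix is built directly for each candidate moving average coefficient (the closed-form inverse of Uppuluri (1969) is swapped here for a numerical solve, which gives the same GLS estimate), and the coefficient is picked on a grid by maximizing the Gaussian profile likelihood.

```python
import numpy as np

def fgls_ma1(y, X, thetas=np.linspace(-0.95, 0.95, 39)):
    """GLS of y on X with MA(1) errors u_t = e_t + theta * e_{t-1}.
    The error covariance is tridiagonal: (1 + theta^2) on the diagonal and
    theta off the diagonal (the innovation variance cancels out of the GLS
    estimator). theta is chosen by profile likelihood over a grid."""
    n = len(y)
    best = None
    for theta in thetas:
        omega = (1 + theta**2) * np.eye(n) + theta * (np.eye(n, k=1) + np.eye(n, k=-1))
        omega_inv = np.linalg.inv(omega)
        beta = np.linalg.solve(X.T @ omega_inv @ X, X.T @ omega_inv @ y)
        u = y - X @ beta
        sigma2 = (u @ omega_inv @ u) / n
        _, logdet = np.linalg.slogdet(omega)
        loglik = -0.5 * (n * np.log(sigma2) + logdet)   # constants dropped
        if best is None or loglik > best[0]:
            best = (loglik, theta, beta, u)
    return best  # (loglik, theta_hat, beta_hat, residuals)
```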
More details can be found in this document. A code extract (Matlab) can be found here.
Bootstrapping the term structure of the probability of default using CDS prices
A CDS price (par spread) is the premium paid for insurance against default of a company. It can be used to back out market expectations about the probability of default of the company. This project was a component of a chapter in my PhD dissertation. I obtained CDS data from Markit, using the universe of single-name CDS securities at a daily frequency. I used all available tenors to estimate each firm’s default intensities, linearly completing the term structure of par spreads and finding default intensities that best match this term structure. The procedure follows Veronesi & Zingales (2010): first computing the default intensity at the short end of the term structure using the CDS price for that tenor, and then iteratively continuing the procedure with the previous default intensities as inputs for the longer tenors.
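A minimal sketch of this bootstrap under simplifying assumptions (flat risk-free rate, quarterly premium payments, no accrual-on-default, 40% recovery, and made-up quotes): for each tenor, the hazard rate on the newest interval is solved so that the protection and premium legs of the par CDS are equal in value, taking the previously bootstrapped hazards as given.

```python
import numpy as np
from scipy.optimize import brentq

def bootstrap_hazards(tenors, spreads, r=0.02, recovery=0.4, dt=0.25):
    """Piecewise-constant default intensities matching CDS par spreads.
    tenors in years (increasing), spreads as decimals (0.01 = 100 bp)."""
    hazards = []

    def survival(t, lambdas):
        # Survival probability with piecewise-constant hazards on [0, tenors].
        bounds = np.concatenate(([0.0], tenors[:len(lambdas)]))
        s = 0.0
        for lo, hi, lam in zip(bounds[:-1], bounds[1:], lambdas):
            s += lam * (min(t, hi) - lo)
            if t <= hi:
                break
        return np.exp(-s)

    for T, spread in zip(tenors, spreads):
        grid = np.arange(dt, T + 1e-9, dt)

        def pricing_error(lam_k):
            lambdas = hazards + [lam_k]
            q = np.array([survival(t, lambdas) for t in grid])
            q_prev = np.concatenate(([1.0], q[:-1]))
            disc = np.exp(-r * grid)
            premium = spread * dt * np.sum(disc * q)          # premium leg
            protection = (1 - recovery) * np.sum(disc * (q_prev - q))
            return protection - premium

        hazards.append(brentq(pricing_error, 1e-8, 5.0))
    return np.array(hazards)

# Usage with made-up quotes: 1y-5y par spreads of 80-150 bp.
tenors = np.array([1.0, 2.0, 3.0, 5.0])
spreads = np.array([0.008, 0.010, 0.012, 0.015])
print(bootstrap_hazards(tenors, spreads))
```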
More details can be found in this extract from my PhD dissertation. A code extract (Matlab) can be found here.
Computing Merton distance to default
I followed Vassalou & Xing (2004) and Bharath & Shumway (2008) to compute distance to default using the Black-Scholes-Merton (BSM) model. This involves an iterative procedure whereby the value of the company's total assets is computed for each time period by numerically solving the BSM equation. At the end of each iteration, the standard deviation of total assets is computed over the year preceding t, and this value is used as an input for the next iteration. At the initial iteration, I take the standard deviation of equity returns as an input. The iteration process concludes when the standard deviation converges to within the desired precision.
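A minimal sketch of the iteration (not the actual code extract), assuming a year of daily equity values and a constant face value of debt over a one-year horizon, with all inputs hypothetical: each pass backs out daily asset values from the BSM call-option equation under the current asset-volatility guess, then re-estimates asset volatility from the implied asset-return series until convergence.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def merton_iteration(equity, debt_face, r=0.02, T=1.0, tol=1e-4, max_iter=100):
    """Iteratively solve E = V*N(d1) - F*exp(-rT)*N(d2) for daily asset
    values V, updating sigma_V from the implied asset returns."""
    equity = np.asarray(equity, dtype=float)
    # Initial guess: annualized equity volatility from daily log returns.
    sigma_v = np.std(np.diff(np.log(equity))) * np.sqrt(252)

    def bsm_equity(V, sigma):
        d1 = (np.log(V / debt_face) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
        d2 = d1 - sigma * np.sqrt(T)
        return V * norm.cdf(d1) - debt_face * np.exp(-r * T) * norm.cdf(d2)

    for _ in range(max_iter):
        assets = np.array([
            brentq(lambda V: bsm_equity(V, sigma_v) - e, 1e-6, e + 10 * debt_face)
            for e in equity
        ])
        new_sigma = np.std(np.diff(np.log(assets))) * np.sqrt(252)
        if abs(new_sigma - sigma_v) < tol:
            sigma_v = new_sigma
            break
        sigma_v = new_sigma

    # Risk-neutral distance to default (d2) at the last observation.
    d1 = (np.log(assets[-1] / debt_face) + (r + 0.5 * sigma_v**2) * T) / (sigma_v * np.sqrt(T))
    dd = d1 - sigma_v * np.sqrt(T)
    return assets, sigma_v, dd

# Usage with a made-up equity path and debt face value.
equity_path = 20 * np.exp(np.cumsum(np.random.default_rng(0).normal(0.0, 0.02, 252)))
print(merton_iteration(equity_path, debt_face=80.0)[1:])
```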
More details can be found in this extract from my PhD dissertation. A code extract (Matlab) can be found here.
In the figure below I plot the Merton vs the CDS distance to default and probability of default for a single company (American Airlines). The two resulting series are strongly positively correlated, though the correlation is far from perfect. This indicates that the information regarding a coming default event contained in the two markets (bond/CDS vs stock market) differs, which suggests that at least one of the models is incorrect.
Computing Amihud liquidity measure
Due to market imperfections, the volume of trade in a security is relevant for its pricing. Prof. Y. Amihud published an influential paper (2002) that attempts to capture this liquidity effect, measuring a stock's illiquidity as the average ratio of its absolute daily return to its dollar trading volume. Calculating the liquidity of a security has several uses in research; for example, it is often used as an explanatory variable in asset pricing tests.
The following script (Python) computes the Amihud liquidity measure for stocks in the Compustat database, and compares it with other liquidity measures such as (inverse) volume as well as the ratio of volume to market cap.
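As a preview of the core computation (the full script handles the database specifics), here is a minimal sketch assuming a daily panel with hypothetical columns stock_id, date, ret, prc, and vol:

```python
import pandas as pd

def amihud_illiquidity(daily: pd.DataFrame) -> pd.Series:
    """Monthly Amihud illiquidity per stock: the average of |return| divided
    by dollar volume. Assumes columns: stock_id, date (datetime), ret
    (daily return), prc (price), vol (shares traded)."""
    df = daily.copy()
    df["dollar_vol"] = df["prc"].abs() * df["vol"]
    df = df[df["dollar_vol"] > 0]                       # drop non-trading days
    df["illiq"] = df["ret"].abs() / df["dollar_vol"]
    return df.groupby(["stock_id", pd.Grouper(key="date", freq="M")])["illiq"].mean()
```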