Publications

2023

On the combination of naive and mean-variance portfolios


Lassance, Nathan, Vanderveken, Rodolphe and Vrins, Frédéric (2023)

Journal of Business & Economic Statistics — Vol. 42, no. 3, p. 875-889


Journal rank: ABDC (A*) Impact Factor (7.6, 5Y 2022)

We study how to best combine the sample mean-variance portfolio with the naive equally weighted portfolio to optimize out-of-sample performance. We show that the seemingly natural convexity constraint that Tu and Zhou (2011) impose, namely that the two combination coefficients must sum to one, is undesirable because it severely constrains the allocation to the risk-free asset relative to the unconstrained portfolio combination. However, we demonstrate that relaxing the convexity constraint inflates estimation errors in combination coefficients, which we alleviate using a shrinkage estimator of the unconstrained combination scheme. Empirically, the constrained combination outperforms the unconstrained one in a range of generally small degrees of risk aversion, but severely deteriorates otherwise. In contrast, the shrinkage unconstrained combination enjoys the best of both strategies and performs consistently well for all levels of risk aversion.
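Schematically, the two schemes discussed in the abstract can be written as follows (an illustrative formulation; the symbols below are ours rather than the paper's):

\[
w_{\mathrm{c}}(\alpha) = \alpha\,\hat{w}_{\mathrm{mv}} + (1-\alpha)\,w_{\mathrm{ew}},
\qquad
w_{\mathrm{u}}(\alpha,\delta) = \alpha\,\hat{w}_{\mathrm{mv}} + \delta\,w_{\mathrm{ew}},
\]

where \(\hat{w}_{\mathrm{mv}}\) is the sample mean-variance portfolio, \(w_{\mathrm{ew}}\) the equally weighted portfolio, and any residual wealth \(1-\mathbf{1}'w\) is held in the risk-free asset. The convexity constraint corresponds to \(\alpha+\delta=1\); the paper relaxes it and shrinks the resulting coefficient estimates.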


Preprint (from SSRN):  https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4161606

Accounting for PD-LGD dependency: A tractable extension to the Basel ASRF framework


Barbagli, Matteo and Vrins, Frédéric (2023)

Economic Modelling — Vol. 125:106391




Journal rank: ABDC (A) Impact Factor (3.88, 2022)

We extend the asymptotic single risk factor (ASRF) model used in the Basel regulations to accommodate the dependence between the probabilities of default (PD) and the losses given default (LGD) with arbitrary marginal distributions. The PD-LGD link is introduced via the single systematic risk factor and its strength is controlled via a dedicated parameter, in line with the treatment of the default dependence in the current Basel framework. We derive the explicit form of the mapping function translating unconditional LGDs into conditional ones, compute the portfolio-invariant semi-analytical formula of value-at-risk and propose a calibration method. An empirical study featuring defaulted corporate bonds confirms the validity of the calibration procedure and delivers realistic risk metrics. The proposed approach is easy to implement and is useful to design future guidelines for capital requirements, to perform sensitivity analysis or to compute implied downturn LGDs. Compared to the guidelines of the European Banking Authority, our approach delivers more conservative figures on the considered data. 
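For background, the classical ASRF building block that the paper extends is the Basel mapping from unconditional to conditional default probabilities (a standard textbook formula, quoted here for context rather than taken from the paper; the contribution is an analogous mapping for LGDs driven by the same systematic factor):

\[
\mathrm{PD}(z) \;=\; \Phi\!\left(\frac{\Phi^{-1}(\mathrm{PD}) - \sqrt{\rho}\,z}{\sqrt{1-\rho}}\right),
\qquad
\mathrm{PD}^{\mathrm{stressed}} \;=\; \Phi\!\left(\frac{\Phi^{-1}(\mathrm{PD}) + \sqrt{\rho}\,\Phi^{-1}(0.999)}{\sqrt{1-\rho}}\right),
\]

where \(z\) is the realization of the single systematic risk factor and \(\rho\) the asset correlation.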


Preprint (from SSRN):  https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3893870

SVB, Crédit Suisse... Au suivant ?


Vrins, Frédéric (2023)

Regards économiques — Focus no. 30, March 2023


Open access

We discuss the origins of the crisis, the ensuing bank run, and the difficulty of designing prudential banking regulation that is robust to such emotional reactions.


Open access: https://www.regards-economiques.be/index.php?option=com_reco&view=article&cid=185

Portfolio selection: a target distribution approach


Lassance, Nathan and Vrins, Frédéric (2023)

European Journal of Operational Research — Vol. 310, p. 302-314


Preprint available


Journal rank: ABS (4) ABDC (A*) Impact Factor (6.36, Feb. 2023)

We introduce a novel framework for the portfolio selection problem in which investors aim to target a return distribution, and the optimal portfolio has a return distribution as close as possible to the targeted one. The proposed framework can be applied to a variety of investment objectives. In this paper, we focus on improving the higher moments of mean-variance-efficient portfolios by designing the target so that its first two moments match those of the chosen efficient portfolio while its higher moments are more desirable. We show theoretically that the optimal portfolio is in general different from the mean-variance portfolio, but remains mean-variance efficient when asset returns are Gaussian. Otherwise, it can move away from the efficient frontier to better match the higher moments of the target distribution. An extensive empirical analysis using three characteristic-sorted datasets and a dataset of 100 individual stocks indicates that the proposed framework delivers a satisfying compromise between mean-variance efficiency and improved higher moments.


Preprint (from SSRN):  https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3893870

2022

Asymmetric short-rate model without lower bound


Wang, Linqi & Vrins, Frédéric (2022)

Quantitative Finance — Dec. 2022, p. 279-295


Preprint available


Journal rank: ABDC (A), CiteScore (2.8), Impact Factor (1.986, 2021)

We propose a new short-rate process which appropriately captures the salient features of the negative interest rate environment. The model combines the advantages of the Vasicek and Cox-Ingersoll-Ross (CIR) dynamics: it is flexible, tractable and displays positive skewness without imposing a strict lower bound. In addition, a novel calibration procedure is introduced which focuses on minimizing the Jensen–Shannon (JS) divergence between the model- and market-implied forward rate densities rather than on price or volatility discrepancies. A thorough empirical analysis based on cap market quotes shows that our model displays superior performance compared to the Vasicek and CIR models regardless of the calibration method. Our proposed calibration procedure based on the JS divergence better captures the entire forward rate distribution compared to competing approaches while maintaining a good fit in terms of pricing and implied volatility errors.
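As a toy illustration of the divergence used in that calibration step, the sketch below computes the Jensen–Shannon divergence between two densities discretized on a common grid (a generic implementation under our own assumptions, not the paper's calibration code; the grid and densities are placeholders):

```python
import numpy as np
from scipy import stats

def js_divergence(p, q, dx):
    """Jensen-Shannon divergence between two densities sampled on a common grid (step dx)."""
    p, q = np.clip(p, 1e-12, None), np.clip(q, 1e-12, None)
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b)) * dx   # Kullback-Leibler divergence on the grid
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Example: market-implied vs. model-implied forward-rate densities (here two Gaussians).
grid = np.linspace(-0.05, 0.10, 2001)
dx = grid[1] - grid[0]
market_density = stats.norm.pdf(grid, loc=0.010, scale=0.012)
model_density = stats.norm.pdf(grid, loc=0.015, scale=0.010)
print(js_divergence(market_density, model_density, dx))
# A calibration routine would minimize this quantity over the model parameters,
# e.g. with scipy.optimize.minimize.
```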

Preprint : https://dial.uclouvain.be/pr/boreal/object/boreal%3A249984/datastream/PDF_01/view



Meta-Learning Approaches for Recovery Rate Prediction


Gambetti, Paolo, Roccazzella, Francesco & Vrins, Frédéric (2022)

Risks — Vol. 10, no. 6, p. 124


Open access


Journal rank: ABDC (B), CiteScore (2.2)

While previous academic research highlights the potential of machine learning and big data for predicting corporate bond recovery rates, the operations management challenge is to identify the relevant predictive variables and the appropriate model. In this paper, we use meta-learning to combine the predictions from 20 candidates of linear, nonlinear and rule-based algorithms, and we exploit a data set of predictors including security-specific factors, macro-financial indicators and measures of economic uncertainty. We find that the most promising approach consists of model combinations trained on security-specific characteristics and a limited number of well-identified, theoretically sound recovery rate determinants, including uncertainty measures. Our research provides useful indications for practitioners and regulators targeting more reliable risk measures in designing micro- and macro-prudential policies  
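A minimal sketch of the meta-learning (stacking) idea described above, using scikit-learn and synthetic data; the base learners, meta-learner and features are illustrative stand-ins, not the twenty candidate models or the dataset used in the paper:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor, StackingRegressor
from sklearn.linear_model import Ridge, LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))   # stand-ins for security-specific and macro-financial predictors
y = np.clip(0.4 + 0.2 * X[:, 0] - 0.1 * X[:, 1] ** 2 + 0.05 * rng.normal(size=500), 0, 1)  # synthetic recovery rates

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

base_learners = [
    ("ridge", Ridge(alpha=1.0)),
    ("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
    ("gbm", GradientBoostingRegressor(random_state=0)),
]
# The meta-learner combines the cross-validated base predictions (here a simple non-negative linear combiner).
stack = StackingRegressor(estimators=base_learners,
                          final_estimator=LinearRegression(positive=True), cv=5)
stack.fit(X_tr, y_tr)
print("out-of-sample R^2:", stack.score(X_te, y_te))
```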


Open access: https://doi.org/10.3390/risks10060124

A general firm-value model under partial information


Mbaye, Cheikh, Sagna, Abass & Vrins, Frédéric (2022)

Journal of Computational Finance — Vol. 26, no. 1, p. 1460-1559


Preprint available


We introduce a new structural default model whose purpose is to combine enhanced economic relevance and affordable computational complexity. Our approach exploits the information conveyed by a noisy observation of the firm value combined with the firm's actual default state. Moreover, it is rather general since any diffusion can be used to depict the firm's dynamics. However, this realistic setup comes at the expense of important computational challenges. To mitigate them, we propose an implementation based on recursive quantization. A thorough analysis of the approximation error resulting from our numerical procedure is provided. The power of our method is illustrated on the pricing of CDS options. This analysis reveals that the observation noise has a significant impact on the credit spreads' implied volatility.


Preprint : https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4192773

Migration to the PRIIPs framework: What impact on the European risk indicator of UCITS funds?


Clausse, Emilien, Herr, Donovan & Vrins, Frédéric (2022)

Revue bancaire et financière — Forthcoming


Preprint available



Since 2011, managers of European UCITS funds have been required to publish a risk indicator, called the SRRI, in order to communicate the risk of their investment fund to retail investors in an understandable way. However, as of mid-2022, the implementation of the new PRIIPs regulation will lead to a complete review of the calculation methodology employed to determine this risk indicator. The latter, formerly based on a traditional measure of standard deviation, will now be determined from a more sophisticated tail-risk measure, namely Value-at-Risk (or, more precisely, the modified VaR, which is an approximation based on the first four moments of the fund returns). Additional changes deal with the data frequency and history used in the estimation procedure.

In this article, we break down the changes brought by the regulation and analyze them through an empirical study in order to take a critical look at the new PRIIPs methodology, which will impact a substantial portion of the 4,500 asset management companies active in Europe. Our results, built from a random selection of 200 funds, show that the impact of the change in the risk measure is not as significant as expected. By contrast, the impact resulting from the changes in the chosen frequency and length of returns history seems material. Secondly, the redefinition of the volatility buckets used to map the risk measure to the risk indicator has a side effect: a loss of granularity for non-extreme funds, which are now crowded into classes 2 to 4.
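For readers unfamiliar with the "modified VaR" mentioned above, the sketch below shows the generic Cornish–Fisher four-moment approximation on synthetic weekly returns; this illustrates the idea only, and the exact PRIIPs formula, data frequency and confidence level follow the regulation's own specifications:

```python
import numpy as np
from scipy import stats

def modified_var(returns, alpha=0.025):
    """Cornish-Fisher ('modified') VaR built from the first four moments of a return sample.
    Illustrative sketch only; the PRIIPs methodology prescribes its own formula and conventions."""
    mu, sigma = np.mean(returns), np.std(returns, ddof=1)
    s = stats.skew(returns)             # skewness
    k = stats.kurtosis(returns)         # excess kurtosis
    z = stats.norm.ppf(alpha)
    z_cf = (z + (z**2 - 1) * s / 6
              + (z**3 - 3 * z) * k / 24
              - (2 * z**3 - 5 * z) * s**2 / 36)
    return -(mu + z_cf * sigma)         # loss quantile, reported as a positive number

rng = np.random.default_rng(1)
weekly_returns = rng.standard_t(df=5, size=260) * 0.02   # synthetic fat-tailed weekly returns
print(modified_var(weekly_returns))
```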


Preprint (LFIN WP): https://dial.uclouvain.be/pr/boreal/object/boreal:254715

Affine term structure models: a time-change approach with perfect fit to market curves


Mbaye, Cheikh & Vrins, Frédéric (2022)

Mathematical Finance — Vol. 32, no. 2, p. 678-724


Preprint available


Journal rank: ABS (3) ABDC (A)  Impact Factor (2.67, Nov. 2021)

We address the so-called calibration problem which consists of fitting in a tractable way a given model to a specified term structure like, e.g., yield, prepayment or default probability curves. Time-homogeneous jump-diffusions like Vasicek or Cox-Ingersoll-Ross (possibly coupled with compound Poisson jumps, JCIR, a.k.a. SRJD) are tractable processes but have limited flexibility; they fail to replicate actual market curves. The deterministic shift extension of the latter, Hull-White or JCIR++ (a.k.a. SSRJD), is a simple yet efficient solution that is widely used by both academics and practitioners. However, the shift approach may not be appropriate when positivity is required, a common constraint when dealing with credit spreads or default intensities. In this paper, we tackle this problem by adopting a time change approach, leading to the TC-JCIR model. On top of providing an elegant solution to the calibration problem under positivity constraint, our model features additional interesting properties in terms of variance. It is compared to the shift extension on various credit risk applications such as credit default swap, credit default swaption and credit valuation adjustment under wrong-way risk. The TC-JCIR model is able to generate much larger implied volatilities and covariance effects than JCIR++ under positivity constraint, and therefore offers an appealing alternative to the shift extension in such cases.


Preprint (LFIN WP): https://dial.uclouvain.be/pr/boreal/object/boreal:221793

Optimal portfolio diversification via independent component analysis


DeMiguel, Victor, Lassance, Nathan, Vrins, Frédéric (2022)

Operations Research — Vol. 70, no. 1, p. 55-72


Preprint available


Journal rank: ABS (4) ABDC (A*) Impact Factor (3.78, Nov. 2021)

A popular approach to enhance portfolio diversification is to use the factor-risk-parity portfolio, which is the portfolio whose return variance is spread equally among the principal components (PCs) of asset returns. Although PCs are unique and useful for dimension reduction, they are problematic for risk parity primarily because they ignore the higher-moment dependence of asset returns. Moreover, they are an arbitrary choice: any rotation of the PCs remains uncorrelated, and we demonstrate that any portfolio is the factor-variance-parity portfolio corresponding to a specific rotation of the PCs. To overcome these issues, we rely on the factor-risk-parity portfolio based on the independent components (ICs), which are the rotation of the PCs that are maximally independent, and thus, account for higher moments in asset returns. We demonstrate that using the IC-variance-parity portfolio helps to reduce the return kurtosis. We also show how to exploit the near independence of the ICs to parsimoniously estimate the factor-risk-parity portfolio based on Value-at-Risk. Finally, we empirically demonstrate that portfolios based on ICs outperform those based on PCs, and several state-of-the-art benchmarks.


Preprint (from SSRN): https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3285156

Optimal and robust combination of forecasts via constrained optimization and shrinkage 


Gambetti, Paolo, Roccazzella, Francesco & Vrins, Frédéric (2022)

International Journal of Forecasting — Vol. 38, no. 1, p. 97-116


Preprint available


Journal rank: ABS (3) ABDC (A) Impact Factor (3.78, Nov. 2021)

We introduce various methods that combine forecasts using constrained optimization with a penalty. A non-negativity constraint is imposed on the weights, and several penalties are considered, taking the form of a divergence from a reference combination scheme. In contrast with most of the existing approaches, our framework performs forecast selection and combination in one step, allowing for potentially sparse combining schemes. Moreover, by exploiting the analogy between forecast combination and portfolio optimization, we provide the analytical expression of the optimal penalty strength when penalizing with the L2-divergence from the equally-weighted scheme. An extensive simulation study and two empirical applications allow us to investigate the impact of the divergence function, the reference scheme and the non-negativity constraint on the predictive performance. Our results suggest that the proposed models outperform those considered in previous studies.
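A minimal sketch of one such penalized combination, assuming weights on the simplex and an L2 penalty toward the equally weighted scheme (the constraint set, penalty strength and data below are illustrative choices, not the paper's exact specification):

```python
import numpy as np
from scipy.optimize import minimize

def combine_forecasts(F, y, lam=0.1):
    """Combine the columns of F (T x N individual forecasts of y) by minimizing in-sample MSE
    plus an L2 penalty toward the equally weighted scheme, under non-negativity and
    sum-to-one constraints. Illustrative sketch only."""
    T, N = F.shape
    w_ref = np.full(N, 1.0 / N)
    objective = lambda w: np.mean((y - F @ w) ** 2) + lam * np.sum((w - w_ref) ** 2)
    cons = ({"type": "eq", "fun": lambda w: np.sum(w) - 1.0},)
    bounds = [(0.0, None)] * N
    res = minimize(objective, w_ref, bounds=bounds, constraints=cons, method="SLSQP")
    return res.x

rng = np.random.default_rng(2)
y = rng.normal(size=200)
F = y[:, None] + rng.normal(scale=[0.5, 1.0, 2.0], size=(200, 3))   # three forecasters, different noise levels
print(combine_forecasts(F, y))
```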


Preprint (from DIAL): https://dial.uclouvain.be/pr/boreal/object/boreal:229061

2021

Portfolio selection with parsimonious higher comoments estimation


Lassance, Nathan & Vrins, Frédéric (2021)

Journal of Banking & Finance — Vol. 128, 106115


Preprint available


Journal rank: ABS (3) ABDC (A*) Impact Factor (3.07, Nov. 2021)

Large investment universes are usually fatal to portfolio strategies optimizing higher moments because of computational and estimation issues resulting from the number of parameters involved. In this paper, we introduce a parsimonious method to estimate higher moments that consists of projecting asset returns onto a small set of maximally independent factors found via independent component analysis (ICA). In contrast to principal component analysis (PCA), we show that ICA resolves the curse of dimensionality affecting the comoment tensors of asset returns. The method is easy to implement, computationally efficient, and makes portfolio strategies optimizing higher moments appealing in large investment universes. Considering the value-at-risk as a risk measure, an investment universe of up to 500 stocks and adjusting for transaction costs, we show that our ICA approach leads to superior out-of-sample risk-adjusted performance compared with PCA, equally weighted, and minimum-variance portfolios. 
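To see the dimensionality issue that the ICA projection addresses, recall the number of distinct entries in the comoment tensors of N assets (a standard combinatorial count, quoted here for context):

\[
\underbrace{\tfrac{N(N+1)}{2}}_{\text{covariance}},\qquad
\underbrace{\tfrac{N(N+1)(N+2)}{6}}_{\text{coskewness}},\qquad
\underbrace{\tfrac{N(N+1)(N+2)(N+3)}{24}}_{\text{cokurtosis}}.
\]

For N = 500, the cokurtosis tensor alone has roughly 2.6 billion distinct elements, whereas with (approximately) independent factors only the N marginal moments of each order need to be estimated.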


Preprint (from SSRN): https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3455400


Forecasting recovery rates on non-performing loans with machine learning


Bellotti, Anthony, Brigo, Damiano, Gambetti, Paolo & Vrins, Frédéric (2021)

International Journal of Forecasting — Vol. 37, no. 1, p. 428-444


Preprint available


Journal rank: ABS (3) ABDC (A) Impact Factor (3.78, Nov. 2021)

We compare the performance of a wide set of regression techniques and machine-learning algorithms for predicting recovery rates on non-performing loans, using a private database from a European debt collection agency. We find that rule-based algorithms such as Cubist, boosted trees, and random forests perform significantly better than other approaches. In addition to loan contract specificities, predictors that refer to the bank recovery process — prior to the portfolio’s sale to a debt collector — are also shown to enhance forecasting performance. These variables, derived from the time series of contacts to defaulted clients and client reimbursements to the bank, help all algorithms better identify debtors with different repayment ability and/or commitment, and in general those with different recovery potential. 


Preprint (from SSRN): https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3434412

2019

Recovery rates: Uncertainty certainly matters


Gambetti, Paolo, Gauthier, Geneviève & Vrins, Frédéric (2019)

Journal of Banking & Finance — Vol. 106, no.9, p. 371-383


Preprint available


Journal rank: ABS (3) ABDC (A*) Impact Factor (3.07, Nov. 2021)

Previous studies identify default rate as the main systematic determinant of bond recovery rates. We revisit this paradigm by investigating the impact of another factor, economic uncertainty. Based on a wide sample of American default issues and relying on beta regression models, well-suited for the bounded, heteroskedastic and skewed sample of recovery rates, we analyze the determinants of recovery rate distributions. We find economic uncertainty to be of paramount importance, as it proves to be the most important systematic determinant of recovery rate distributions, significant for both their mean and dispersion. By contrast, default rate remains a key determinant of the dispersion of these distributions, but not for their means. Considering this evidence is critical to the sound implementation of stochastic recovery rate models used by financial institutions for the computation of regulatory capital. 


Preprint (from SSRN): https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3366889

Minimum Rényi entropy portfolios


Lassance, Nathan & Vrins, Frédéric (2019)

Annals of Operations Research — Vol. 299, no. 1, p. 23-46.

Special Issue on Recent Developments in Financial Modeling and Risk Management 


Preprint available


Journal rank: ABS (2) ABDC (A) Impact Factor (4.85, Nov. 2021)

Accounting for the non-normality of asset returns remains one of the main challenges in portfolio optimization. In this paper, we tackle this problem by assessing the risk of the portfolio through the "amount of randomness" conveyed by its returns. We achieve this using an objective function that relies on the exponential of Rényi entropy, an information-theoretic criterion that precisely quantifies the uncertainty embedded in a distribution, accounting for higher-order moments. Compared to Shannon entropy, Rényi entropy features a parameter that can be tuned to adjust the notion of uncertainty being measured. A Gram-Charlier expansion shows that it controls the relative contributions of the central (variance) and tail (kurtosis) parts of the distribution in the measure. We further rely on a non-parametric estimator of the exponential Rényi entropy that extends a robust sample-spacings estimator initially designed for Shannon entropy. A portfolio-selection application illustrates that minimizing Rényi entropy yields portfolios that outperform state-of-the-art minimum-variance portfolios in terms of risk-return-turnover trade-off. We also show how Rényi entropy can be used in risk-parity strategies.
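For reference, the Rényi entropy of order \(\alpha\) of a continuous random variable X with density \(f_X\) is defined as (standard definition, recalled here for context):

\[
H_\alpha(X) \;=\; \frac{1}{1-\alpha}\,\ln\!\int f_X(x)^{\alpha}\,\mathrm{d}x,
\qquad \alpha>0,\ \alpha\neq 1,
\]

which recovers Shannon entropy in the limit \(\alpha \to 1\); the risk measure used in the paper is the exponential of this quantity.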

 

Preprint (from ArXiv): https://arxiv.org/abs/1705.05666

SDEs with uniform distributions: Peacocks, conic martingales and mean reverting uniform diffusions


Brigo, Damiano, Jeanblanc, Monique & Vrins, Frédéric (2019)

Stochastic Processes and Their Applications — Vol. 130, no. 7, p. 3895-3919 


Preprint available


Journal rank: ABDC (A) Impact Factor (2.32, Nov. 2021)

Peacocks are increasing processes for the convex order. To any peacock, one can associate martingales with the same marginal laws. We are interested in finding the diffusion associated to the uniform peacock, i.e., to the peacock with uniform law at all times on a time-varying support [a(t),b(t)]. Following an idea from Dupire, Madan and Yor propose a construction to find a diffusion martingale associated to a peacock, under the assumption of existence of a solution to a particular stochastic differential equation (SDE). In this paper we derive the SDE associated to the uniform peacock and give sufficient conditions on the (conic) boundary to have a unique strong or weak solution and analyse the local time at the boundary. Finally, we focus on the constant support case. Given that the only uniform martingale with time-independent support proves to be a constant, we consider more general (mean reverting) diffusions. We prove existence of a solution to the related SDE and derive the moments of transition densities. Limit-laws and ergodic results show that the transition law tends to a uniform distribution.


Preprint (from ArXiv): https://arxiv.org/abs/1606.01570

Piecewise constant martingales and lazy clocks

Profeta, Christophe & Vrins, Frédéric (2019)

Probability, Uncertainty and Quantitative Risk — Vol. 4, no. 2


Open access

Conditional expectations (like, e.g., discounted prices in financial applications) are martingales under an appropriate filtration and probability measure. When the information flow arrives in a punctual way, a reasonable assumption is to suppose the latter to have piecewise constant sample paths between the random times of information updates. Providing a way to find and construct piecewise constant martingales evolving in a connected subset of R is the purpose of this paper. After a brief review of possible standard techniques, we propose a construction scheme based on the sampling of latent martingales \tilde Z with lazy clocks \theta. These \theta are time-change processes staying in arrears of the true time but that can synchronize at random times to the real (calendar) clock. This specific choice makes the resulting time-changed process Z_t = \tilde Z_{\theta_t} a martingale (called a lazy martingale) without any assumption on \tilde Z, and in most cases, the lazy clock \theta is adapted to the filtration of the lazy martingale Z, so that sample paths of Z on [0,T] only requires sample paths of (\theta,\tilde Z) up to T. This would not be the case if the stochastic clock \theta could be ahead of the real clock, as is typically the case using standard time-change processes. The proposed approach yields an easy way to construct analytically tractable lazy martingales evolving on (interval of) R. 


Open access: https://probability-risk.springeropen.com/articles/10.1186/s41546-019-0036-4

2018

Conic martingales from stochastic integrals


Jeanblanc, Monique & Vrins, Frédéric (2018)

Mathematical Finance — Vol. 28, no. 2, p. 516-535


Preprint available


Journal rank: ABS (3) ABDC (A) Impact Factor (2.67, Nov. 2021)

In this paper we introduce the concept of conic martingales. This class refers to stochastic processes having the martingale property, but that evolve within given (possibly time-dependent) boundaries. We first review some results about the martingale property of solutions to driftless stochastic differential equations. We then provide a simple way to construct and handle such processes. Specific attention is paid to martingales in [0,1]. One of these martingales proves to be analytically tractable. It is shown that, up to shifting and rescaling constants, it is the only martingale (together with the trivial constant, Brownian motion and geometric Brownian motion) having a separable diffusion coefficient σ(t,y) = g(t)h(y) and that can be obtained via a time-homogeneous mapping of Gaussian diffusions. The approach is exemplified by modeling stochastic conditional survival probabilities in the univariate and bivariate cases.


Preprint (from ArXiv): https://arxiv.org/abs/1603.07488

Disentangling wrong-way risk: pricing credit valuation adjustment via change of measures


Brigo, Damiano & Vrins, Frédéric (2018)

European Journal of Operational Research — Vol. 269, p. 1154-1164


Preprint available


Journal rank: ABS (4) ABDC (A*) Impact Factor (6.36, Feb. 2023)

In many financial contracts (and in particular when trading OTC derivatives), participants are exposed to counterparty risk. The latter is typically rewarded by adjusting the "risk-free price" of derivatives; an adjustment known as CVA (Credit Value Adjustment). A key driver of CVA is the dependency between exposure and counterparty risk, known as Wrong-Way Risk (WWR). In practice however, correctly addressing WWR is very challenging and calls for heavy numerical techniques. This might explain why WWR is not explicitly handled in the Basel III regulatory framework in spite of its acknowledged importance. In this paper we propose a sound and tractable method to deal efficiently with WWR. Our approach consists in embedding the WWR effect in the drift of the exposure dynamics. Even though this calls for infinite changes of measures, we end up with an appealing compromise between tractability and mathematical rigor, preserving the level of accuracy typically required for CVA figures. The good performances of the method are discussed in a stochastic-intensity default setup based on extensive comparisons of Expected Positive Exposure (EPE) profiles and CVA figures produced (i) by a full bivariate Monte Carlo implementation of the initial model with (ii) our drift-adjustment technique.


Preprint (from ArXiv): https://arxiv.org/abs/1611.02877

Sampling the Multivariate Standard Normal Distribution under a Weighted Sum Constraint


Vrins, Frédéric (2018) 

Risks — Vol. 6, no. 3, p. 64


Open access


Journal rank: ABDC (B)

Statistical modeling techniques—and factor models in particular—are extensively used in practice, especially in the insurance and finance industry, where many risks have to be accounted for. In risk management applications, it might be important to analyze the situation when fixing the value of a weighted sum of factors, for example to a given quantile. In this work, we derive the (n-1)-dimensional distribution corresponding to an n-dimensional i.i.d. standard Normal vector Z = (Z_1, Z_2, . . . , Z_n)' subject to the weighted sum constraint w'Z = c, where w = (w_1,w_2, . . . ,w_n)' and w_i ≠ 0. This law is proven to be a Normal distribution, whose mean vector μ and covariance matrix Σ are explicitly derived as a function of (w, c). The derivation of the density relies on the analytical inversion of a very specific positive definite matrix. We show that it does not correspond to naive sampling techniques one could think of. This result is then used to design algorithms for sampling Z under the constraint that w'Z = c or w'Z ≤ c and is illustrated on two applications dealing with Value-at-Risk and Expected Shortfall.
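A minimal sketch of how such constrained samples can be drawn in practice, using the fact that for Gaussian vectors conditioning on a linear functional amounts to an orthogonal projection (our own illustration of the equality-constrained case; the paper derives the law explicitly and also treats the inequality constraint w'Z ≤ c):

```python
import numpy as np

def sample_given_weighted_sum(w, c, n_samples, rng=None):
    """Draw i.i.d. standard normal vectors conditioned on the exact constraint w'Z = c.
    Illustrative sketch, not necessarily the paper's algorithm."""
    rng = rng or np.random.default_rng()
    w = np.asarray(w, dtype=float)
    z = rng.standard_normal((n_samples, w.size))
    correction = (c - z @ w) / (w @ w)       # scalar adjustment per sample
    return z + np.outer(correction, w)       # mean c*w/(w'w), covariance I - ww'/(w'w)

w = np.array([0.5, 0.3, 0.2])
samples = sample_given_weighted_sum(w, c=2.0, n_samples=100_000)
print(np.abs(samples @ w - 2.0).max())   # constraint holds up to floating-point error
print(samples.mean(axis=0))              # close to c * w / (w'w)
```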


Open access: https://doi.org/10.3390/risks6030064

Extreme events and the cumulative distribution of net gains in gambling and structured products


Vrins, Frédéric & Petitjean, Mikael (2018)

Applied Economics — Vol. 50, no. 58, p. 6285-6300


Preprint available


Journal rank: ABS (2) ABDC (A) Impact Factor (0.97, Nov. 2021)

We argue that ethical principles in advertising and market communication cannot be properly discovered and applied to gambling without a deep understanding of its probabilistic implications, in particular when extreme events are influential. We carry out a probabilistic analysis of lottery games with lifetime prizes in order to derive sound recommendations about the pertinent information that should be communicated to nudge gamblers. We propose to focus on the cumulative distribution of net gains, for which there is currently no information available to gamblers. This holds true for structured products in which extreme events matter as well. 


Preprint (from ResearchGate): https://www.researchgate.net/publication/325896927_Extreme_events_and_the_cumulative_distribution_of_net_gains_in_gambling_and_structured_products

A subordinated CIR intensity model with application to wrong-way risk CVA


Mbaye, Cheikh & Vrins, Frédéric (2018)

International Journal of Theoretical and Applied Finance — Vol. 21, no.7, p. 22


Preprint available


Journal rank: ABS (2) ABDC (B) Impact Factor (1.10, Nov. 2021)

Credit valuation adjustment (CVA) pricing models need to be both flexible and tractable. The survival probability has to be known in closed form (for calibration purposes), the model should be able to fit any valid credit default swap (CDS) curve, should lead to large volatilities (in line with CDS options) and finally should be able to feature significant wrong-way risk (WWR) impact. The Cox–Ingersoll–Ross (CIR) model combined with independent positive jumps and deterministic shift (JCIR++) is a very good candidate: the variance (and thus covariance with exposure, i.e. WWR) can be increased with the jumps, whereas the calibration constraint is achieved via the shift. In practice however, there is a strong limit on the model parameters that can be chosen, and thus on the resulting WWR impact. This is because only non-negative shifts are allowed for consistency reasons, whereas the upwards jumps of the JCIR++ need to be compensated by a downward shift. To limit this problem, we consider the two-sided jump model recently introduced by Mendoza-Arriaga and Linetsky, built by time-changing CIR intensities. In a multivariate setup like CVA, time-changing the intensity partly kills the potential correlation with the exposure process and destroys WWR impact. Moreover, it can introduce a forward looking effect that can lead to arbitrage opportunities. In this paper, we use the time-changed CIR process in a way that the above issues are avoided. We show that the resulting process allows us to introduce a large WWR effect compared to the JCIR++ model. The computational cost of the resulting Monte Carlo framework is reduced by using an adaptive control variate procedure.


Preprint (from SSRN): https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3366808

A Comparison of Pricing and Hedging Performances of Equity Derivatives Models


Lassance, Nathan & Vrins, Frédéric (2018)

Applied Economics — Vol. 50, no. 10, p. 1122-1137


Preprint available


Journal rank: ABS (2) ABDC (A) Impact Factor (0.97, Nov. 2021)

This paper investigates the pricing/hedging conundrum, i.e. the observation of a mismatch between derivatives models' pricing and hedging performances, that has so far been under-emphasized as the literature tends to focus on increasingly complicated option pricing models, without adequately addressing hedging performance. Hence, we analyze the ability of the Black-Scholes, Practitioner Black-Scholes, Heston-Nandi and Heston models to Delta-hedge a set of call options on the S&P500 index and Apple stock. We extend earlier studies in that we consider the impact of asset dynamics, apply a stringent payoff replication strategy, look at the impact of moneyness at maturity and test for the robustness to the parameters’ calibration frequency and Delta-Vega hedging. The study shows that adding risk factors to a model, such as stochastic volatility, should only be considered in light of the data dynamics. Even then, however, more complicated models generally fare poorly for hedging purposes. Hence, a better fit of a model to option prices is not a good indicator of its hedging performance, and so of its ability to describe the underlying dynamics. This can be understood for reasons of over-fitting. Those findings hint to a potentially appealing hedging-based calibration of models' parameters, rather than the standard pricing-based one. 


Preprint (from SSRN): https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3000405

Bannissement des produits dérivés: la bonne affaire ?


Vrins, Frédéric (2018)

Regards économiques — no. 142, p. 1-15


Open access

In this article with a pedagogical aim, we offer a different perspective on derivative products. Using simple examples, we explain how some seemingly speculative positions may in fact meet a genuine economic need. Likewise, securitization, singled out as one of the main causes of the crisis, can paradoxically also be part of the solution. Finally, we discuss the current trend in regulation, which we find worrying: in seeking to strengthen the resilience of banks, it encourages transferring some responsibilities from banking institutions to the economic agents least equipped to cope with them.


Open access: https://www.regards-economiques.be/index.php?option=com_reco&view=article&cid=185

2017

Wrong-Way Risk CVA Models with Analytical EPE Profiles under Gaussian Exposure Dynamics


Vrins, Frédéric (2017) 

International Journal of Theoretical and Applied Finance — Vol. 20, no. 7:1750045


Preprint available


Journal rank: ABS (2) ABDC (B) Impact Factor (1.10, Nov. 2021)

We consider two classes of wrong-way risk models in the context of CVA: static (resampling) and dynamic (reduced form). Although both potentially suffer from arbitrage problems, their tractability makes them appealing to the industry and they therefore deserve additional study. For example, Gaussian copula-based resampling and reduced-form with "Hull-White intensities" yield analytical expected positive exposure (EPE) profiles when the portfolio price process (i.e. exposure process) is Gaussian. However, the first approach disregards credit volatility whilst the second can provide default probabilities larger than 1. We therefore enlarge the study by introducing a new dynamic approach for credit risk, consisting in directly modeling the survival process (Azéma supermartingale) using the Φ-martingale. This method is appealing in that it helps fix some drawbacks of the above models. Indeed, it is a dynamic method (it disentangles correlation and credit volatility) that preserves probabilities in [0,1] without affecting the analytical tractability of the model. In particular, calibration to any valid default probability curve is automatic and the closed-form expression for the EPE profiles remains available under Gaussian exposures. For each approach, we derive analytically the EPE profiles (conditional upon default) associated to prototypical exposure processes of FRA and IRS in all cases, provide a comparison and discuss the implied CVA figures.


Preprint (from ArXiv): https://arxiv.org/pdf/1605.05100.pdf

2016

Characteristic Function of Time-Inhomogeneous Lévy-Driven Ornstein-Uhlenbeck Processes


Vrins, Frédéric (2016) 

Statistics & Probability Letters — Vol. 116, no.2016, p. 55-61


Preprint available


Journal rank: ABDC (B) Impact Factor (1.17, Nov. 2021)

We derive the characteristic function (CF) of integrals of Lévy-driven Ornstein-Uhlenbeck processes with time-inhomogeneous coefficients. The resulting expression takes the form of the exponential integral of the time-changed characteristic exponent. This result is applied to some examples leading to closed form expressions. In particular, it drastically simplifies the calculations of the CF of integrated Compound Poisson processes compared to the standard approach relying on joint conditioning on inter-arrival jump times. 


Preprint (from ArXiv): https://arxiv.org/abs/1604.05117

2013

Sibuya copulas


Vrins, Frédéric & Hofert, Marius (2013)

Journal of Multivariate Analysis — Vol. 114, p. 318-337


Preprint available


Journal rank: ABDC (A) Impact Factor (1.01, Nov. 2021)

A new class of copulas referred to as ''Sibuya copulas'' is introduced and its properties are investigated. Members of this class are of a functional form which was first investigated in the work of M. Sibuya. The construction of Sibuya copulas is based on an increasing stochastic process whose Laplace-Stieltjes transform enters the copula as a parameter function. Sibuya copulas also allow for idiosyncratic parameter functions and are thus quite flexible to model asymmetric dependences. If the stochastic process is allowed to have jumps, Sibuya copulas may have a singular component. Depending on the choice of the process, they may be extreme-value copulas, Levy-frailty copulas, or Marshall-Olkin copulas. Further, as a special symmetric case, one may obtain any Archimedean copula with Laplace-Stieltjes transform as generator. Besides some general properties of Sibuya copulas, several examples are given and their properties are investigated in more detail. The construction scheme associated to Sibuya copulas provides a sampling algorithm. Further, it can be generalized, for example, to allow for hierarchical structures, or for an additional source of dependence via another copula. 


Preprint (from ArXiv): https://arxiv.org/pdf/1008.2292.pdf

2011

Getting CVA up and running


Vrins, Frédéric & Gregory, Jon (2011)

Risk Magazine — p. 76-79


Preprint available

The credit value adjustment that crystallises counterparty risk in a derivative price is generally thought of as an upfront payment, but could equally well be converted into a running premium in appropriate products. But the obvious ways to do this lead to inconsistencies, or are computationally burdensome. Here, Frédéric Vrins and Jon Gregory show how an analytical approximation can negotiate this.


Preprint (from CVA central): https://cvacentral.com/wp-content/uploads/2014/05/VrinsGregory_RunningCVA.pdf 

2010

Analytical Pricing of Basket Default Swaps in a Dynamic Hull & White Framework


Vrins, Frédéric (2010) 

The Journal of Credit Risk — Vol. 6, no.4, p. 85-111


Preprint available


Journal rank: ABDC (C) Impact Factor (0.80, Nov. 2021)

In this paper, some analytical results related to the Hull & White dynamic model of a credit portfolio of N obligors in the case of constant jump size are provided. For instance, this specific assumption combined with the moment generating function of the Poisson process leads to analytical calibration for the model with respect to the underlying CDSs. Further, extremely simple analytical expressions are obtained for first-to-default swaps; the more general case of quantities related to nth-to-default swaps also have a closed form and remain tractable for small n. Similarly, pairwise correlation between default indicators also proves to be simple. Although the purpose of this note is not to compare models, we compare the shape of pairwise default correlations of the Hull & White, the Gaussian copula and the Mai & Scherer model with compound Poisson process as Lévy subordinator. It is shown that only the models including jumps can lead to non-vanishing default correlation for short-term maturities. Further, these models can generate higher default correlation levels compared to the Gaussian one. When calibrated on the default probability of the first default time, jump-based models also lead to much higher default probability for the last obligor to default. Finally, we tackle the problem of simultaneous jumps, which prevents the above class of models from being usable when recoveries are name-specific. To that end, we propose a tractable compromise to deal with baskets being non-homogeneous recovery-wise under the Hull & White model by splitting isolated and non-isolated default events.


Preprint (from SSRN): https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1590932

On the Consistency of "European Proxy" of Structural Models for Credit Derivatives


Vrins, Frédéric (2010) 

Wilmott Journal — Vol. 2, no. 5, p. 247-260


Open access

In this paper, we focus on what we call “European Proxy” of structural models, or shortly “Proxy-Structural models” for credit derivatives. In standard structural models, default arrives as the first hit of a stochastic process to a barrier, hence involving a path-dependent condition. In “Proxy-Structural models”, by contrast, the path condition modeling the default indicator function is replaced by a point-wise criterion. This approximation has been considered by some authors to be successful in the sense that proper calibration of those on CDO market is fast, simple and yields meaningful results in terms of implied prices and parameter movements. Some people may not find them intuitive but up to now there was no reason to question their relevance for credit derivatives as a whole. We show that this class of models exhibits a philosophical problem, which might potentially have an impact when pricing some specific multivariate products other than CDO tranches. 


Open access: https://onlinelibrary.wiley.com/doi/epdf/10.1002/wilj.44

2009

Double t Copula Pricing of Structured Credit Products: Practical Aspects of a Trustworthy Implementation


Vrins, Frédéric (2009) 

The Journal of Credit Risk — Vol. 5, no.2, p. 91-109


Preprint available


Journal rank: ABDC (C) Impact Factor (0.80, Nov. 2021)



Preprint (from DefaultRisk.com): https://core.ac.uk/reader/21751980

Credit default swaps: the quest of the hedge


Vrins, Frédéric & Schoutens, Wim (2009)

Wilmott Journal — Vol. 1, no.5-6, p. 245-253 (2009)


Open access

It appears from discussions held with practitioners on forums and conferences that hedging products with CDSs is often considered to be an easy task. For instance, a bond-CDS position is frequently seen to be a perfect hedge as is the index CDS and its corresponding single-name counterparts. On the other hand, theoreticians claim that a perfect hedge is impossible to achieve with credit derivatives: currently it seems there is no model from which a perfect hedge of credit derivatives can be set up. So, who's right? This paper aims to show that, surprisingly, both are right, in some sense. The misunderstanding comes from a misspecification of the “hedge” term, together with a confusion between hedging and arbitraging. We focus on Credit Default Swaps and explain why, among other things, the “Bond-CDS on the Bond” position does not result in a perfect hedge (except in a very specific case, which is not market standard). By contrast, partial hedge (that is, hedging some parts of the risk) can be obtained and, of course, arbitrage opportunities could be found. We then explain, adopting a philosophical point of view, why it is so problematic to set up a credit portfolio being perfectly hedged (even in a first-order approximation sense). It appears that the nature of the derivative's underlying prevents us reaching this with the currently available financial instruments. 


Open access: https://onlinelibrary.wiley.com/doi/epdf/10.1002/wilj.21

2008

Blind source separation based on endpoint estimation with application to the MLSP 2006 data competition 


Lee, John Aldo, Vrins, Frédéric & Verleysen, Michel (2008)

Neurocomputing — Vol. 72, no. 1-3, p. 47-56


Winner of the MLSP 2006 data competition


Preprint available


Journal Impact Factor (5.72, Nov. 2021)

The problem of blind source separation is usually solved by optimizing a contrast function that measures either the independence of several variables or the non-gaussianity of a single variable. If the problem involves bounded sources, this knowledge can be exploited and the solution can be found with a customized contrast that relies on a simple endpoint estimator. The minimization of the least absolute endpoint is closely related to and generalizes the minimization of the range, which has already been studied within the framework of blind-source extraction. Using the least absolute endpoint rather than the range applies to a broader class of admissible sources, which includes sources that are bounded on a single side and, therefore, have an infinite range. This paper describes some properties of a contrast function based on endpoint estimation, such as the discriminacy. This property guarantees that each local minimum of the least absolute bound corresponds to the extraction of one source. An endpoint estimator is proposed, along with a specific deflation algorithm that is able to optimize it. This algorithm relies on a loose orthogonality constraint that reduces the accumulation of errors during the deflation process. This allows the algorithm to solve large-scale and ill-conditioned problems, such as those proposed in the MLSP 2006 data competition. Results show that the proposed algorithm outperforms more generic source separation algorithms like FastICA, as the sources involved in the contest are always bounded on at least one side. 


Preprint (from EPL): https://perso.uclouvain.be/michel.verleysen/papers/neucom08jl.pdf

On the risk of using Renyi's entropy for blind source separation


Pham, Dinh-Tuan, Vrins, Frédéric & Verleysen, Michel (2008)

IEEE Transactions on Signal Processing — Vol. 56, no. 10, p. 4611-4620


Open access


Journal Impact Factor (5.23, Nov. 2021)

Recently, some researchers have suggested Rényi’s entropy in its general form as a blind source separation (BSS) objective function. This was motivated by two arguments: 1) Shannon’s entropy, which is known to be a suitable criterion for BSS, is a particular case of Rényi’s entropy, and 2) some practical advantages can be obtained by choosing another specific value for the Rényi exponent, yielding to, e.g., quadratic entropy. Unfortunately, by doing so, there is no longer guarantee that optimizing this generalized criterion would lead to recovering the original sources. In this paper, we show that Rényi’s entropy in its exact form (i.e., out of any consideration about its practical estimation or computation) might lead to not recovering the sources, depending on the source densities and on Rényi’s exponent value. This is illustrated on specific examples. We also compare our conclusions with previous works involving Rényi’s entropies for blind deconvolution. 


Open access: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=4558057

2007

Mixing and non-mixing local minima of the entropy contrast for blind source separation


Vrins, Frédéric, Pham, Dinh-Tuan & Verleysen, Michel (2007)

IEEE Transactions on Information Theory — Vol. 53, no. 3, p. 1030-1042


Open access


Journal Impact Factor (5.01, Nov. 2021)

In this paper, both non-mixing and mixing local minima of the entropy are analyzed from the viewpoint of blind source separation (BSS); they correspond respectively to acceptable and spurious solutions of the BSS problem. The contribution of this work is twofold. First, a Taylor development is used to show that the exact output entropy cost function has a non-mixing minimum when this output is proportional to any of the non-Gaussian sources, and not only when the output is proportional to the lowest entropic source. Second, in order to prove that mixing entropy minima exist when the source densities are strongly multimodal, an entropy approximator is proposed. The latter has the major advantage that an error bound can be provided. Even if this approximator (and the associated bound) is used here in the BSS context, it can be applied for estimating the entropy of any random variable with multimodal density. 


Open access: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=4106142

A minimum-range approach to blind extraction of bounded sources


Vrins, Frédéric, Lee, John Aldo & Verleysen, Michel (2007)

IEEE Transactions on Neural Networks — Vol. 18, no. 3, p. 809-822

[Now known as IEEE Trans. on Neural Networks and Learning Systems]


Open access


Journal Impact Factor (8.79, Nov. 2021)

In spite of the numerous approaches that have been derived for solving the independent component analysis (ICA) problem, it is still interesting to develop new methods when, among other reasons, specific a priori knowledge may help to further improve the separation performances. In this paper, the minimum-range approach to blind extraction of bounded source is investigated. The relationship with other existing well-known criteria is established. It is proved that the minimum-range approach is a contrast, and that the criterion is discriminant in the sense that it is free of spurious maxima. The practical issues are also discussed, and a range measure estimation is proposed based on the order statistics. An algorithm for contrast maximization over the group of special orthogonal matrices is proposed. Simulation results illustrate the performances of the algorithm when using the proposed range estimation criterion. 
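A toy illustration of the minimum-range contrast described above: after whitening, the remaining indeterminacy is a rotation, and scanning rotation angles for the projection with the smallest range extracts one bounded source. The brute-force scan and the plain sample range below stand in for the order-statistics estimator and the optimization algorithm developed in the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Two bounded sources and a random linear mixture.
S = np.vstack([rng.uniform(-1, 1, 5000), rng.uniform(-1, 1, 5000) ** 3])
A = rng.normal(size=(2, 2))
X = A @ S

# Whiten the mixtures so that the remaining indeterminacy is a rotation.
X -= X.mean(axis=1, keepdims=True)
eigval, eigvec = np.linalg.eigh(np.cov(X))
Z = np.diag(eigval ** -0.5) @ eigvec.T @ X

# Minimum-range contrast: keep the rotation whose output has the smallest range.
angles = np.linspace(0, np.pi, 1000)
ranges = [np.ptp(np.cos(a) * Z[0] + np.sin(a) * Z[1]) for a in angles]
best = angles[int(np.argmin(ranges))]
y = np.cos(best) * Z[0] + np.sin(best) * Z[1]   # extracted source (up to scale and sign)

# Correlation with the true sources shows which one was recovered.
print(np.corrcoef(y, S[0])[0, 1], np.corrcoef(y, S[1])[0, 1])
```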


Open access: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=4182405

Minimum range approach to blind partial simultaneous separation of bounded sources: Contrast and discriminacy properties


Vrins, Frédéric & Pham, Dinh-Tuan (2007)

Neurocomputing — Vol. 70, no. 7-9, p. 1207-1214


Open access


Journal Impact Factor (5.72, Nov. 2021)

The blind source separation (BSS) problem is often solved by maximizing objective functions reflecting the statistical independence between outputs. Since global maximization may be difficult without exhaustive search, criteria for which all the local maxima correspond to an acceptable solution of the BSS problem are of interest. It is known that some BSS criteria used in a deflation procedure benefit from this property. More recently, the present authors have shown that the "spurious maximum free" property still holds for the minimum range approach to BSS in a simultaneous separation scheme. This paper extends the last result by showing that source demixing and local maximization of a range-based criterion are equivalent, even in a partial separation scheme, i.e. when P <= K sources are simultaneously extracted from K linearly independent mixtures of them.

2006

Apprentissage par projet en électricité : exemples et mise en oeuvre


De Vroey, Laurent, Vrins, Frédéric, Labrique, Francis, Trullemans, Charles, Eugène, Christian & Grenier, Damien (2006)

J3eA — Journal sur l'enseignement des sciences et technologies de l'information et des systèmes — no. 5, 23 pages


Open access



Open access: https://www.j3ea.org/articles/j3ea/pdf/2006/01/j3ea06068.pdf

2005

On the entropy minimization of a linear mixture of variables for source separation


Vrins, Frédéric & Verleysen, Michel (2005)

Signal Processing — Vol. 85, no. 5, p. 1029-1044


Preprint available


Journal Impact Factor (4.66, Nov. 2021)

The marginal entropy h(Z) of a weighted sum of two variables Z = alpha X + beta Y, expressed as a function of its weights, is a usual cost function for blind source separation (BSS), and more precisely for independent component analysis (ICA). Even if some theoretical investigations were done about the relevance from the BSS point of view of the global minimum of h(Z), very little is known about possible local spurious minima. In order to analyze the global shape of this entropy as a function of the weights, its analytical expression is derived in the ideal case of independent variables. Because of the ICA assumption that distributions are unknown, simulation results are used to show how and when local spurious minima may appear. Firstly, the entropy of a whitened mixture, as a function of the weights and under the constraint of independence between the source variables, is shown to have only relevant minima for ICA if at most one of the source distributions is multimodal. Secondly, it is shown that if independent multimodal sources are involved in the mixture, spurious local minima may appear. Arguments are given to explain the existence of spurious minima of h(Z) in the case of multimodal sources. The presented justification can also explain the location of these minima knowing the source distributions. Finally, it results from numerical examples that the maximum-entropy mixture is not necessarily reached for the 'most mixed' one (i.e. equal mixture weights), but depends on the entropy of the mixed variables.


Preprint (from EPL): https://perso.uclouvain.be/michel.verleysen/papers/signalproc05fv.pdf

Information theoretic versus cumulant-based contrasts for multimodal source separation


Vrins, Frédéric & Verleysen, Michel (2005)

IEEE Signal Processing Letters — Vol. 12, no. 3, p. 190-193


Open access


Journal Impact Factor (3.02, Nov. 2021)


Recently, several authors have emphasized the existence of spurious maxima in usual contrast functions for source separation (e.g., the likelihood and the mutual information) when several sources have multimodal distributions. The aim of this letter is to compare the information theoretic contrasts to cumulant-based ones from the robustness to spurious maxima point of view. Even if all of them tend to measure, in some way, the same quantity, which is the output independence (or equivalently, the output non-Gaussianity), it is shown that in the case of a mixture involving two sources, the kurtosis-based contrast functions are more robust than the information theoretic ones when the source distributions are multimodal.  


Open access: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1395937


Local minima of information-theoretic criteria in blind source separation


Pham, Dinh-Tuan & Vrins, Frédéric (2005)

IEEE Signal Processing Letters — Vol. 12, no. 11, p. 788-791


Open access


Journal Impact Factor (3.02, Nov. 2021)

Recent simulation results have indicated that spurious minima in information-theoretic criteria with an orthogonality constraint for blind source separation may exist. Nevertheless, those results involve approximations (e.g., density estimation), so that they do not constitute an absolute proof. In this letter, the problem is tackled from a theoretical point of view. An example is provided for which it is rigorously proved that spurious minima can exist in both mutual information and negentropy optima. The proof is based on a Taylor expansion of the entropy. 


Open access: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1518902