Books & chapters

Monograph


Vrins, Frédéric (2007)

Presses universitaires de Louvain, 309 pp.


Preprint available. Abstract:

In recent years, Independent Component Analysis (ICA) has become a fundamental tool in adaptive signal and data processing, especially in the field of Blind Source Separation (BSS). Even though there exist some methods for which an algebraic solution to the ICA problem may be found, iterative methods remain very popular. Among them is the class of information-theoretic approaches relying on entropies. The associated objective functions are maximized using optimization schemes, and gradient-ascent techniques in particular. Two major issues in this field are the following: 1) does the global maximum point of these entropic objectives correspond to a satisfactory solution of BSS? and 2) since gradient techniques are used, optimization algorithms in fact look for local maximum points, so what do these local optima mean from the BSS point of view? Even though there are some partial answers to these questions in the literature, most of them are based on simulations and conjectures; formal developments are often lacking. This thesis aims at filling this gap and at providing intuitive justifications. We focus the analysis on Rényi's entropy-based contrast functions. Our results show that, generally speaking, Rényi's entropy is not a suitable contrast function for BSS, even though we recover the well-known result that Shannon's entropy-based objectives are contrast functions. We also show that range-based contrast functions can be built under some conditions on the sources. The BSS problem is stated in the first chapter, and viewed ...


Open access (from DIAL): https://dial.uclouvain.be/pr/boreal/object/boreal:5042

Editor of collective book

Advances in Credit Risk Modeling and Management

Vrins, Frédéric (2019) 

Risks Books (MDPI)


Abstract:

This Special Issue aims at collating papers contributing, methodologically and/or computationally, to a more rigorous and reliable management of the credit risk of financial institutions. Theoretical and empirical research works covering theoretical properties and/or computational aspects of risk measures are welcome.


Open access: https://doi.org/10.3390/books978-3-03928-761-1 

Book chapters

Stochastic recovery rate: impact of pricing measure's choice and financial consequences on single-name products 


Gambetti, Paolo; Gauthier, Geneviève; Vrins, Frédéric (2019)

New methods in fixed-income modeling, Springer, pp. 181-209


Abstract:

The ISDA CDS pricer is the market-standard model to value credit default swaps (CDS). Moreover, since the Big Bang protocol, it has become a central quotation tool: just as option prices are quoted as implied volatilities with the help of the Black-Scholes formula, CDSs are quoted as running (conventional) spreads. The ISDA model sets the procedure to convert the latter into an upfront amount that compensates for the fact that the actual premia are now based on a standardized coupon rate. Finally, it naturally offers an easy way to extract a risk-neutral default probability measure from market quotes. However, this model relies on unrealistic assumptions, in particular about the deterministic nature of the recovery rate. In this paper, we compare the default probability curve implied by the ISDA model to that obtained from a simple variant accounting for a stochastic recovery rate. We show that the former typically leads to underestimating the reference entity's credit risk compared to the latter. We illustrate our views by assessing the gap in terms of implied default probabilities as well as on credit value adjustment (CVA) figures and pricing mismatches of financial products like deep in-/out-of-the-money standard CDSs and digital CDSs (the main building block of credit-linked notes, CLNs).
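
The spread-to-upfront conversion mentioned above can be illustrated with a rough, flat-hazard-rate ("credit triangle") sketch. This is not the actual ISDA model, which uses the standard ISDA yield curve, exact day counts and accrual-on-default; the function name and all parameters below are purely illustrative.

```python
import math

def upfront_from_conventional_spread(conv_spread, coupon, recovery, maturity,
                                     discount_rate, payments_per_year=4):
    """Rough flat-hazard-rate approximation of the spread-to-upfront conversion.

    conv_spread, coupon : quoted conventional spread and standardized coupon (decimals)
    recovery            : assumed recovery rate (market convention: 0.40 for senior unsecured)
    maturity            : CDS maturity in years
    discount_rate       : flat continuously compounded risk-free rate (illustrative)
    Returns the upfront per unit notional (positive = paid by the protection buyer).
    """
    # Credit-triangle approximation: flat default intensity implied by the quoted spread.
    hazard = conv_spread / (1.0 - recovery)

    # Risky annuity (RPV01): present value of 1 paid until default or maturity.
    dt = 1.0 / payments_per_year
    n_payments = int(round(maturity * payments_per_year))
    rpv01 = sum(dt * math.exp(-(discount_rate + hazard) * i * dt)
                for i in range(1, n_payments + 1))

    # Upfront compensates the gap between the quoted spread and the fixed coupon.
    return (conv_spread - coupon) * rpv01

# Example: 5y CDS quoted at 250 bp against a 100 bp standardized coupon, 40% recovery, 2% rates.
print(upfront_from_conventional_spread(0.025, 0.01, 0.40, 5.0, 0.02))
```

Under these toy assumptions the example prints an upfront of roughly 6% of notional.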


Open access: https://link.springer.com/book/10.1007/978-3-319-95285-7

Jeu de hasard en Belgique: la modélisation au service de la transparence [Gambling in Belgium: modeling in the service of transparency]


Chevalier, Philippe; Vrins, Frédéric (2018)

Droit des jeux de hasard, Larcier, pp. 181-209


Abstract:

The very essence of gambling is to confront the player with random events. The player derives most of his pleasure from the adrenaline produced by the uncertainty of his winnings, as well as from the feeling of pursuing a strategy that "maximizes his chances". Part of the challenge for the player is precisely that human intuition is easily misled in a random environment. Several scientific experiments have clearly highlighted this cognitive bias. In roulette, for instance, a significant fraction of players tend to bet on numbers that have not yet come up. This behaviour reflects the belief that nature should rapidly converge to "its global theoretical equilibrium", a pitfall known as the gambler's fallacy. Television game shows offer other emblematic examples, notably the American show Let's make a deal, directly linked to the well-known paradox that now bears the stage name of its host: Monty Hall. If one looks at the games of chance most popular with players, they all share the same characteristics in terms of the distribution of winnings. There is a relatively high probability (often close to 25%) of a modest win, allowing the player to recover all or part of the initial stake. The probabilities of larger wins decrease very quickly with the corresponding amounts. The maximum prize (the "jackpot") is often extremely large, but the probability of winning it is typically very small. As a result, this dazzling prize, however unlikely, accounts on its own for a significant share of the game's payout ratio; it is not uncommon for the jackpot to represent 25% of the rate at which stakes are redistributed as winnings. Statistically estimating the payout ratio of a game of chance therefore requires estimating the probability of very rare events with precision, which quickly becomes a serious data-collection challenge. For the simple game of "heads or tails", for example, statistically estimating the probability of heads or tails with an error below 1% requires 38,301 tosses. Estimating the probability of rolling a double six with a pair of fair dice would require no fewer than 1,341,018 rolls. As for estimating, with a relative precision of 1%, the probability of winning the top prize at "win for life", it would require scratching more than 40 billion tickets, representing more than 100 tonnes of paper. Obviously, nobody has ever thought of checking that many lottery tickets: the regulator relies on the description of the ticket-printing process to ensure that there is no cheating with respect to the lottery. Likewise, a mathematical model of an electronic game of chance would make it possible to compute, in a fully reliable way, the winning probabilities and hence the whole distribution of winnings associated with the game. This is far more efficient than having to simulate and record the outcome of billions of plays, and the information it provides about the characteristics of the winnings distribution is also much richer.
This requires the game designer to disclose a little more information about the design of the game, but in exchange the conformity check of the game can be performed much more quickly and efficiently. From a characterization of the distribution of winnings, much better information can be given to players. Take for example the "win for life" lottery, which advertises a winning probability of one in four: indeed, you have a 25% chance of recovering the purchase price of the ticket (3 euros) as a prize. Combined with the possibility of the jackpot, this suggests a relatively attractive payout ratio. However, the effect of repeated participation is particularly devastating. The average loss per ticket obviously does not depend on the number of participations: it is about 1 euro per play. Yet the probability of recovering one's entire stake collapses very quickly as the operation is repeated. After only 5 plays, you have less than a 4% chance of making a profit. After 20 plays, you have less than 1 chance in 100 of recovering your stake. In all likelihood, the vast majority of players are unaware of this reality. This example illustrates how an in-depth study of how games work and of the distribution of winnings leads to a better understanding of the phenomena at play, and this observation naturally applies to electronic machines as well. Such analyses would in particular make it possible to better inform players about the hourly loss they incur, and not only about the average, which is very uninformative in this setting. Mathematical modeling of the game would also make real-time monitoring of gaming machines much more efficient: the data collected as plays proceed can be compared directly with the mathematical model and reveal an anomaly very quickly, since the comparison is not based on a single type of win but on the whole set of random events generated by the machine. Nowadays, almost all games are electronic. Consequently, a metrology based on mathematical modeling would allow a more agile and more effective regulation, enabling both a better dialogue between game designers and the regulator and better information for players.
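
The sample sizes quoted above follow from the usual normal-approximation sizing rule for estimating a probability p with relative error ε, namely n ≈ z²(1−p)/(ε²p). A minimal sketch, assuming a roughly 95% confidence level (the exact figures in the text correspond to a slightly smaller critical value):

```python
import math

def required_trials(p, rel_error=0.01, z=1.96):
    """Sample size so that the estimate of p has relative error rel_error
    at (roughly) the confidence level implied by the normal quantile z."""
    return math.ceil(z**2 * (1.0 - p) / (rel_error**2 * p))

print(required_trials(0.5))       # coin toss: ~38,400 (the text quotes 38,301)
print(required_trials(1.0 / 36))  # double six: ~1,344,600 (the text quotes 1,341,018)
```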


Open access: https://dial.uclouvain.be/pr/boreal/object/boreal:196466

An antithetic approach of multilevel Richardson-Romberg extrapolation estimator for multidimensional SDEs


Mbaye, Cheikh; Pagès, Gilles; Vrins, Frédéric (2017)

Lecture Notes in Computer Science 10187

Numerical Analysis and Its Applications 2017, pp. 482-49



Abstract:

The Multilevel Richardson-Romberg (ML2R) estimator was introduced by Pagès & Lemaire in [1] in order to remove the bias of the standard Multilevel Monte Carlo (MLMC) estimator in the 1D Euler scheme. The Milstein scheme is however preferable to the Euler scheme, as it allows one to reach the optimal complexity O(ε^{−2}) for each of these estimators. Unfortunately, the Milstein scheme requires the simulation of Lévy areas when the SDE is driven by a multidimensional Brownian motion, and no efficient method is currently available for this purpose (except in dimension 2). Giles and Szpruch [2] recently introduced an antithetic multilevel correction estimator avoiding the simulation of these areas without affecting the second-order complexity. In this work, we revisit the ML2R and MLMC estimators in the framework of the antithetic approach, thereby allowing us to remove the bias whilst preserving the optimal complexity when using the Milstein scheme.
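
For context, the telescoping structure underlying a plain MLMC estimator can be sketched as follows for a scalar SDE discretized with the Euler scheme and coupled Brownian increments; this is the standard MLMC construction, not the ML2R weights nor the antithetic Milstein correction studied in the chapter, and all names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def euler_gbm_terminal(n_paths, n_steps, s0=1.0, mu=0.05, sigma=0.2, T=1.0):
    """Terminal values of a geometric Brownian motion via the Euler scheme.

    Returns the fine-grid terminal value and the coarse-grid terminal value
    built from the same Brownian increments (the coupling used by MLMC)."""
    dt = T / n_steps
    dw = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    fine = np.full(n_paths, s0)
    for k in range(n_steps):
        fine = fine * (1.0 + mu * dt + sigma * dw[:, k])
    coarse = np.full(n_paths, s0)
    if n_steps > 1:
        dw_coarse = dw[:, 0::2] + dw[:, 1::2]  # aggregate pairs of increments
        for k in range(n_steps // 2):
            coarse = coarse * (1.0 + mu * 2 * dt + sigma * dw_coarse[:, k])
    return fine, coarse

def mlmc_estimate(payoff, levels=5, n_paths=50_000):
    """Plain MLMC telescoping sum: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]."""
    est = np.mean(payoff(euler_gbm_terminal(n_paths, 1)[0]))
    for level in range(1, levels + 1):
        fine, coarse = euler_gbm_terminal(n_paths, 2**level)
        est += np.mean(payoff(fine) - payoff(coarse))
    return est

# Toy usage: undiscounted European call payoff on the terminal value.
print(mlmc_estimate(lambda s: np.maximum(s - 1.0, 0.0)))
```

For simplicity the sketch uses a constant number of paths per level; a practical MLMC implementation would tune the per-level sample sizes to reach the stated complexity.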


Open access: https://link.springer.com/chapter/10.1007%2F978-3-319-57099-0_54

Wrong-way risk adjusted exposure: Analytical approximations for options in default intensity models


Brigo, Damiano; Hvolby, Thomas; Vrins, Frédéric (2017)

Innovations in Insurance, Risk- and Asset Management, World Scientific




Abstract:

We examine credit value adjustment (CVA) estimation under wrong-way risk (WWR) by computing the expected positive exposure (EPE) under an equivalent measure as suggested in [1], adjusting the drift of the underlying for default risk. We apply this technique to European put and call options and derive analytic formulas for the EPE under WWR obtained with various approximations of the drift adjustment. We give the results of numerical experiments based on four parameter sets, and supply figures of the CVA based on both of the suggested proxies, comparing with the CVA based on a 2D Monte Carlo scheme and Gaussian copula resampling. We also show the CVA obtained with the Basel III formulas. We observe that the Basel III formula does not account for the credit-market correlation, while the Gaussian copula resampling method overestimates the impact of this correlation. The two proxies account for the credit-market correlation and give results that are mostly similar to the 2D Monte Carlo results.
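
As a reminder of the quantity whose WWR-adjusted version is approximated in the chapter, the unilateral CVA can be written (here without WWR, i.e. assuming exposure and default are independent) as a discretized integral of the discounted EPE against the default probabilities:

\[
\mathrm{CVA} \;\approx\; (1-R)\sum_{i=1}^{n} \mathrm{EPE}(t_i)\,\bigl[\mathrm{PD}(t_i)-\mathrm{PD}(t_{i-1})\bigr],
\qquad
\mathrm{EPE}(t) \;=\; \mathbb{E}\bigl[D(0,t)\,(V_t)^{+}\bigr],
\]

where R is the recovery rate, D(0,t) the discount factor, V_t the option exposure and PD(t) the risk-neutral default probability up to time t; under WWR the EPE is instead computed under the drift-adjusted equivalent measure mentioned in the abstract.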


Open access: https://www.worldscientific.com/doi/abs/10.1142/9789813272569_0002

Is the general form of Renyi's entropy a contrast for source separation?


Vrins, Frédéric; Pham, Dinh-Tuan; Verleysen, Michel (2007)

Lecture Notes in Computer Science 4666

Independent Component Analysis and Signal Separation, ICA 2007, pp. 129-136




Abstract:

Renyi’s entropy-based criterion has been proposed as an objective function for independent component analysis because of its relationship with Shannon’s entropy and its computational advantages in specific cases. These criteria were suggested based on “convincing” experiments. However, there is no theoretical proof that globally maximizing those functions would lead to separating the sources; actually, this was implicitly conjectured. In this paper, the problem is tackled in a theoretical way; it is shown that globally maximizing the Renyi’s entropy-based criterion, in its general form, does not necessarily provide the expected independent signals. The contrast-function property of the corresponding criteria depends simultaneously on the value of the Renyi parameter and on the (unknown) source densities.
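
For reference, the general form of Rényi's differential entropy discussed here is, for a density p_X and a parameter α > 0, α ≠ 1,

\[
H_\alpha(X) \;=\; \frac{1}{1-\alpha}\,\log \int p_X^{\alpha}(x)\,\mathrm{d}x ,
\]

which recovers Shannon's differential entropy \(-\int p_X(x)\log p_X(x)\,\mathrm{d}x\) in the limit α → 1.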


Open access: https://link.springer.com/chapter/10.1007/978-3-540-74494-8_17

Zero-entropy minimization for blind extraction of bounded sources (BEBS)


Vrins, Frédéric; Erdogmus, Deniz; Jutten, Christian; Verleysen, Michel (2006)

Lecture Notes in Computer Science 3889

Independent Component Analysis and Blind Signal Separation, ICA 2006, pp. 747-754


Abstract:

Renyi’s entropy can be used as a cost function for blind source separation (BSS). Previous works have emphasized the advantage of setting Renyi’s exponent to a value different from one in the context of BSS. In this paper, we focus on zero-order Renyi’s entropy minimization for the blind extraction of bounded sources (BEBS). We point out the advantage of choosing the extended zero-order Renyi’s entropy as a cost function in the context of BEBS, when the sources have non-convex supports. 
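
Recall that Rényi's entropy is \(H_\alpha(X) = (1-\alpha)^{-1}\log\int p_X^{\alpha}(x)\,\mathrm{d}x\); in the limit α → 0 it reduces to the logarithm of the Lebesgue measure λ of the support,

\[
H_0(X) \;=\; \log \lambda\bigl(\operatorname{supp} p_X\bigr),
\]

so that minimizing the zero-order entropy of an output amounts to minimizing the measure of its support (equal to the support width when the support is convex), which is what makes this criterion natural for bounded sources.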


Open access: https://link.springer.com/chapter/10.1007/11679363_93

Minimum support ICA using order statistics. Part II: Performance analysis


Vrins, Frédéric; Verleysen, Michel (2006)

Lecture Notes in Computer Science 3889

Independent Component Analysis and Blind Signal Separation, ICA 2006, pp. 270-277




Abstract:

Linear instantaneous independent component analysis (ICA) is a well-known problem, for which efficient algorithms like FastICA and JADE have been developed. Nevertheless, the development of new contrasts and optimization procedures is still needed, e.g. to improve the separation performance in specific cases. For example, algorithms may exploit prior information, such as the sparseness or the non-negativity of the sources. In this paper, we show that support-width minimization-based ICA algorithms may outperform other well-known ICA methods when extracting bounded sources. The output supports are estimated using symmetric differences of order statistics.


Open access: https://link.springer.com/chapter/10.1007/11679363_34

Minimum support ICA using order statistics. Part I: Quasi-range based support estimation


Vrins, Frédéric; Verleysen, Michel (2006)

Lecture Notes in Computer Science 3889

Independent Component Analysis and Blind Signal Separation, ICA 2006, pp. 262-269




Abstract:

The minimum support ICA algorithms currently use the extreme statistics difference (also called the statistical range) for support width estimation. In this paper, we extend this method by analyzing the use of (possibly averaged) differences between the (N − m + 1)-th and m-th order statistics, where N is the sample size and m is a positive integer lower than N/2. Numerical results illustrate the expectation and variance of the estimators for various densities and sample sizes; theoretical results are provided for uniform densities. The estimators are analyzed from the specific viewpoint of ICA, i.e. considering that the support widths and the pdf shapes vary with the demixing matrix updates.
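
A minimal sketch of the averaged quasi-range support-width estimator described above; the function name and the averaging choice are illustrative.

```python
import numpy as np

def averaged_quasi_range(x, max_m=5):
    """Support-width estimate from averaged differences between the
    (N - m + 1)-th and m-th order statistics, for m = 1, ..., max_m."""
    xs = np.sort(np.asarray(x))
    n = xs.size
    # xs[n - m] is the (N - m + 1)-th order statistic, xs[m - 1] the m-th.
    return np.mean([xs[n - m] - xs[m - 1] for m in range(1, max_m + 1)])

# Example: uniform samples on [0, 2]; the true support width is 2.
rng = np.random.default_rng(1)
print(averaged_quasi_range(rng.uniform(0.0, 2.0, size=10_000)))
```

For m = 1 this reduces to the statistical range; the expectation and variance of such estimators for larger m, various densities and sample sizes are exactly what the paper analyzes.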


Open access: https://link.springer.com/chapter/10.1007/11679363_33

Filtering-free blind separation of correlated images


Vrins, Frédéric; Lee, John Aldo; Verleysen, Michel (2005)

Lecture Notes in Computer Science 3512

Computational Intelligence and Bioinspired Systems, IWANN 2005, pp. 1091-1099



Abstract:

When using ICA for image separation, a well-known problem is that a large correlation often exists between the sources. Because of this dependence, there is no longer any guarantee that the global maximum of the ICA contrast matches the outputs to the sources. In order to overcome this problem, some preprocessing can be used, such as band-pass filtering. However, such preprocessing involves parameters whose optimal values can be tedious to adjust. In this paper, it is shown that a simple ICA algorithm can recover the sources, without any preprocessing other than whitening, when they are correlated in a specific way. First, a single source is extracted, and next, a parameter-free postprocessing is applied to optimize the extraction of the remaining sources.


Open access: https://link.springer.com/chapter/10.1007/11494669_134

Sensor array and electrode selection for non-invasive fetal electrocardiogram extraction by independent component analysis


Vrins, Frédéric; Jutten, Christian; Verleysen, Michel (2004)

Lecture Notes in Computer Science 3195

Independent Component Analysis and Blind Signal Separation, ICA 2004, pp. 1017-1024


Abstract:

Recently, non-invasive techniques to measure the fetal electrocardiogram (FECG) signal have given very promising results. However, the important question of the number and the location of the external sensors has often been overlooked. In this paper, an electrode-array approach is proposed; it is combined with a sensor selection algorithm using a mutual information criterion. The sensor selection algorithm is run in parallel to an independent component analysis of the selected signals. The aim of this method is to make real-time extraction of the FECG possible. The results are shown on simulated biomedical signals.
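
A generic illustration of ranking channels by an estimated mutual information criterion is sketched below. This is only a sketch of the general idea, not the selection algorithm of the paper; the scikit-learn estimator, the reference-signal choice and all parameters are assumptions.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def select_channels(recordings, reference, n_keep=4):
    """Rank electrode signals by estimated mutual information with a
    reference signal and keep the n_keep most informative channels.

    recordings : array of shape (n_samples, n_channels)
    reference  : array of shape (n_samples,), e.g. a chosen reference lead
    """
    mi = mutual_info_regression(recordings, reference)
    keep = np.argsort(mi)[::-1][:n_keep]
    return keep, mi

# Toy usage with synthetic data standing in for the simulated biomedical signals.
rng = np.random.default_rng(2)
ref = np.sin(np.linspace(0.0, 60.0, 5_000))
chans = np.column_stack([ref + 0.1 * k * rng.normal(size=ref.size) for k in range(1, 9)])
idx, scores = select_channels(chans, ref)
print(idx, np.round(scores, 2))
```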


Open access: https://link.springer.com/chapter/10.1007/978-3-540-30110-3_128