Past sessions


Year 2023-2024

8 September 2023

Christian Furrer (Department of Mathematical Sciences, University of Copenhagen)

Title: "Conditional Aalen–Johansen estimation"

Abstract: Classic Aalen–Johansen estimation targets transition probabilities in multi-state Markov models subject to, for instance, right censoring. In particular, it belongs to the standard toolkit of health and disability insurance analytics. We introduce the conditional Aalen–Johansen estimator, an innovative kernel-based estimator that allows for the inclusion of covariates and, importantly, is also applicable in non-Markov models. We establish uniform strong consistency and asymptotic normality under very lax regularity conditions; here, the theory of empirical processes plays a central role and leads to a transparent treatment. We also illustrate the practical implications and potential of the estimation methodology. (Joint work with Martin Bladt from the University of Copenhagen.)
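For orientation, here is a schematic of the estimators at stake (our own notation, not taken from the paper): the classical Aalen–Johansen estimator is the product integral of the Nelson–Aalen increments, and the conditional version can be thought of as reweighting each subject's counting process by a kernel in the covariate.

```latex
% Classical Aalen-Johansen estimator: product integral of Nelson-Aalen increments
\widehat{P}(s,t) = \prod_{u \in (s,t]} \bigl( I + \Delta \widehat{A}(u) \bigr),
\qquad
\Delta \widehat{A}_{jk}(u) = \frac{\Delta N_{jk}(u)}{Y_j(u)} .

% Conditional version (schematic): kernel-weight subject i by K_h(x - X_i)
\Delta \widehat{A}_{jk}(u \mid x)
  = \frac{\sum_i K_h(x - X_i)\, \Delta N^{(i)}_{jk}(u)}
         {\sum_i K_h(x - X_i)\, Y^{(i)}_j(u)} .
```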


20 October 2023

Denis Allard (INRAE, Research Director, Biostatistics and Spatial Processes Research Unit)

Title: Climate risks and environmental risks: what challenges for spatial statistics?

Abstract: In this general talk, we will begin by recalling some key notions of spatial and spatio-temporal statistics. We will then illustrate how these can be mobilized for the modelling of environmental and climate risks, drawing on some recent work. We will conclude by presenting the scientific challenges carried by the Geolearning chair, which brings together INRAE and Mines Paris.


17 November 2023

Fanny Cartellier (ENSAE IP Paris, CREST)

Title: Climate Stress Testing

Abstract: Climate stress testing has been developed in recent years to shed light on the exposure and vulnerability of the financial system to climate-related risks. This talk proposes a methodological review of the various climate stress tests that have been carried out by central banks and implemented by scholars. It attempts to answer the following question: are existing methodologies well-suited to assess the extent to which climate-related risks may induce financial risks and impair financial stability? By comparing the available methodologies, we discuss the choices made by financial supervisors and outline further research avenues to help improve these methodologies. In particular, we show that focusing only on long-term frameworks with deterministic scenarios, as encouraged so far by the Network for Greening the Financial System, would lead to underestimating the financial risks caused by a disorderly transition. We propose complementary methodologies that would make it possible to assess the resistance of financial institutions to adverse climate-related risk scenarios and to guide them in financing the low-carbon transition.



4 December 2023

Sébastien Farkas (Sorbonne Université, LPSM)

PhD defense: Applied mathematics for the insurance of digital risks

Abstract: The emergence of insurance products covering digital risks comes with questions about the control of the commitments underwritten by insurance companies. Cost volatility, dependence between guarantees and potential accumulations of claims are among the specific features we consider in order to propose mathematical models suited to these challenges. We first introduce regression trees adapted to extreme values, to understand the heterogeneity of the tails of the risk distributions. We then study the estimation of copulas in a censored-data setting, to clarify the impact of interactions between guarantees on the overall commitments. Finally, we propose an analysis of the frequency of cyber claims through point processes adapted to accumulation phenomena. Our contributions suggest analysis methods for the underwriting, reserving and management of digital risks.


19 January 2024

Alexander Voss (House of Insurance & IVFM)

Title: Towards a Measurement of Cyber Pandemic Risk

Abstract: Systemic cyber risks like the 2017 WannaCry and NotPetya incidents pose a major threat to societies, governments, and businesses worldwide. For regulatory institutions, preventing cyber pandemics is thus a top-priority issue. Moreover, dealing with systemic accumulation risks is also challenging for insurance companies, since risk pooling does not apply to these incidents.

Based on classical models for network contagion, we capture the spread of systemic cyber risks in a stylized fashion and identify two types of suitable controls: security- and topology-based interventions. In particular, topology-based measures are necessary to control the cyber pandemic risk exposure in large-scale systems with a more heterogeneous, and thus realistic, arrangement of network nodes. Building on this, we present a novel class of risk measures concerned with the resilience of networks to cyber contagion. In contrast to existing approaches, these measures target the topological structure of the network in order to control the risk of a pandemic spread. For this, we adopt from the axiomatic approach to monetary risk measures the idea of basing the risk assessment on a triplet of acceptable configurations, admissible controls, and a cost functional.
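To make the contagion mechanism concrete, here is a minimal discrete-time SIS simulation on an adjacency matrix (our own toy sketch, not the speaker's model; all names and parameter values are illustrative):

```python
import numpy as np

def simulate_sis(adj, beta, delta, steps, seed=0, initial_infected=None):
    """Toy discrete-time SIS epidemic on a network.
    adj: (n, n) symmetric 0/1 adjacency matrix; beta: per-edge infection
    probability per step; delta: recovery probability per step."""
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    infected = np.zeros(n, dtype=bool)
    infected[[0] if initial_infected is None else initial_infected] = True
    history = [infected.mean()]
    for _ in range(steps):
        k = adj @ infected                       # infected neighbours per node
        p_inf = 1.0 - (1.0 - beta) ** k          # chance of catching it this step
        new_inf = (~infected) & (rng.random(n) < p_inf)
        recovered = infected & (rng.random(n) < delta)
        infected = (infected | new_inf) & ~recovered
        history.append(infected.mean())
    return np.array(history)

# Star network: the hub is exactly the kind of node a topology-based
# intervention would target, since removing it stops any spread.
n = 50
adj = np.zeros((n, n), dtype=int)
adj[0, 1:] = adj[1:, 0] = 1
print(simulate_sis(adj, beta=0.3, delta=0.1, steps=30)[-1])
```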


2 February 2024

Arthur Charpentier (Université du Québec à Montréal, UQAM)

Title: Using optimal transport to mitigate unfair prediction

Abstract: The insurance industry is heavily reliant on predictions of risks based on characteristics of potential customers. Although the use of such models is common, researchers have long pointed out that these practices perpetuate discrimination based on sensitive features such as gender or race. Given that such discrimination can often be attributed to historical data biases, an elimination, or at least a mitigation, is desirable. With the shift from more traditional models to machine-learning based predictions, calls for greater mitigation have grown anew, as simply excluding sensitive variables in the pricing process can be shown to be ineffective. In this talk, we first investigate why predictions are a necessity within the industry and why correcting biases is not as straightforward as simply identifying a sensitive variable. We then propose to ease the biases through the use of Wasserstein barycenters instead of simple scaling. To demonstrate the effects and effectiveness of the approach, we apply it to real data and discuss its implications. The talk is based on recent work with François Hu and Philipp Ratz (2310.20508, 2309.06627, 2306.12912 and 2306.10155).
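In one dimension, the Wasserstein barycenter of the group-wise score distributions has an explicit form: a weighted average of the group quantile functions. The following sketch (ours, not the authors' implementation) applies this correction to simulated biased scores:

```python
import numpy as np

def barycenter_correction(scores, group):
    """Push each individual's score to the Wasserstein barycenter of the
    group-wise score distributions (1-D: average the quantile functions)."""
    groups = np.unique(group)
    weights = {g: np.mean(group == g) for g in groups}
    corrected = np.empty_like(scores, dtype=float)
    for g in groups:
        mask = group == g
        # each individual's rank within their own group, mapped into (0, 1)
        ranks = (np.argsort(np.argsort(scores[mask])) + 0.5) / mask.sum()
        # barycenter quantile function = weighted mean of group quantiles
        corrected[mask] = sum(weights[h] * np.quantile(scores[group == h], ranks)
                              for h in groups)
    return corrected

rng = np.random.default_rng(1)
g = rng.integers(0, 2, size=1000)
s = rng.normal(loc=0.4 + 0.2 * g, scale=0.1, size=1000)   # scores biased by group
print(np.round([s[g == 0].mean(), s[g == 1].mean()], 3))  # distinct group means
s_fair = barycenter_correction(s, g)
print(np.round([s_fair[g == 0].mean(), s_fair[g == 1].mean()], 3))  # now aligned
```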


16 February 2024

Juliette Legrand (Université de Brest, LMBA)

Title: Stochastic simulation of extreme wave heights

Abstract: This study focuses on the stochastic simulation of multivariate peaks over thresholds. We develop a non-parametric simulation scheme for bivariate generalised Pareto distributions, and from this joint simulator we also derive a conditional simulation model. Both simulation algorithms are applied to numerical experiments and to extreme significant wave heights (a quantity measuring the severity of a sea state) near the French Brittany coast. A further development addresses the marginal modelling: to take non-stationarities into account, we adapt the extended generalised Pareto model, letting the marginal parameters vary with specific offshore conditions.


21 March 2024

3rd Printemps de l'Assurance

Artificial Intelligence in Insurance

Replay at: https://dauphine.psl.eu/dauphine/media-et-communication/article/printemps-de-lassurance-2024-intelligence-artificielle-en-assurance





Year 2021-2022

15 October 2021

Valérie CHAVEZ (HEC Lausanne, Université de Lausanne)

Title: "Causal mechanism of extremes on networks"

Abstract: With the increase in frequency and size of extreme phenomena propagating through real-world networks, such as river network flooding or malware propagation in computer networks, controlling cascading extremes in network settings becomes crucial. One often characterizes the joint structure of extreme events using the theory of multivariate and spatial extremes and its asymptotically justified models. There is, however, interest in cascading extreme events and in whether one event causes another. In this work, we argue that an improved understanding of the mechanism underlying severe events is achieved by combining extreme value modelling and causal discovery. We construct a causal inference method relying on the notion of the Kolmogorov complexity of extreme conditional quantiles. Tail quantities are derived using multivariate extreme value models, and causal-induced asymmetries in the data are explored through the minimum description length principle. We illustrate our approach through an application to a river network, although it can be applied to any other network setting and may be useful for cyber-security.

Link to the video: https://www.youtube.com/watch?v=7mTJ2zywzEg


26 November 2021

Sasha ROMANOSKY (RAND Corporation)

Title: "Cyber risk in the U.S.: analyzing costs and cyber insurance policies"

Abstract: Cyber risk is consistently rated as a top concern of business leaders in the U.S. and globally. But are the costs of cyber events really that high, or is the concern simply driven by a few extreme events? In response to these threats, the market for cyber insurance has grown consistently over the past 20 years, with total U.S. premiums approaching $5 billion. If this market is to remain robust, one would hope that cyber insurance carriers have the best information about the actual costs to firms of data breaches and other cyber incidents, and are able to assess, and differentiate, cyber risk across firms. So what are the typical costs of these incidents, and how, exactly, do carriers price cyber risk? In this talk we discuss these two main topics. First, we provide an empirical analysis of a dataset of 12,000 cyber incidents and examine their costs. Second, we perform a content analysis on a sample of over 200 cyber insurance policies in order to describe typical coverage areas, exclusions, and the actual rate schedules used to price cyber insurance.


Link to the video: https://www.youtube.com/watch?v=0ID-InAz7FA



4 February 2022

Christophe DUTANG (CEREMADE, Université Paris-Dauphine)

Title: "An explicit split point procedure in model-based trees allowing for a quick fitting of GLM trees and GLM forests"

Abstract: Classification and regression trees (CART) prove to be a true alternative to fully parametric models such as linear models (LM) and generalized linear models (GLM). Although CART suffer from a biased variable selection issue, they are commonly applied to various topics and used for tree ensembles and random forests because of their simplicity and computation speed. Conditional inference trees and model-based tree algorithms, for which variable selection is tackled via fluctuation tests, are known to give more accurate and interpretable results than CART, but yield longer computation times. Using a closed-form maximum likelihood estimator for GLM, this talk proposes a split point procedure based on the explicit likelihood, in order to save time when searching for the best split for a given splitting variable.

A simulation study for non-Gaussian responses is performed to assess the computational gain when building GLM trees. We also propose a benchmark on simulated and empirical datasets of GLM trees against CART, conditional inference trees and LM trees, in order to identify situations where GLM trees are efficient. This approach is extended to multiway split trees and log-transformed distributions. Making GLM trees possible through a new split point procedure allows us to investigate the use of GLM in ensemble methods. We propose a numerical comparison of GLM forests against other random forest-type approaches. Our simulation analyses show cases where GLM forests are good challengers to random forests.

Link to the video: https://www.youtube.com/watch?v=f284admVjB4
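To illustrate the explicit split-point idea from the abstract above on a deliberately simplified case, the sketch below fits an intercept-only Poisson model in each candidate child, where the closed-form MLE is just the child mean; the actual procedure of the talk handles full GLMs (all names and data here are illustrative):

```python
import numpy as np

def poisson_loglik(y):
    """Maximized log-likelihood of an intercept-only Poisson model: the
    closed-form MLE of the mean is simply y.mean() (log y! terms dropped,
    as they are constant across candidate splits)."""
    mu = max(y.mean(), 1e-12)
    return np.sum(y * np.log(mu) - mu)

def best_split(x, y, min_leaf=5):
    """Scan the candidate thresholds of one covariate and return the split
    maximizing the sum of the children's explicit log-likelihoods."""
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    best_t, best_ll = None, -np.inf
    for i in range(min_leaf, len(xs) - min_leaf):
        if xs[i] == xs[i - 1]:
            continue  # no threshold can separate equal covariate values
        ll = poisson_loglik(ys[:i]) + poisson_loglik(ys[i:])
        if ll > best_ll:
            best_t, best_ll = (xs[i - 1] + xs[i]) / 2, ll
    return best_t, best_ll

rng = np.random.default_rng(0)
x = rng.uniform(size=500)
y = rng.poisson(lam=np.where(x < 0.6, 1.0, 4.0))  # true change point at 0.6
print(best_split(x, y))
```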


18 February 2022

Geoffrey ECOTO (CCR & Université de Paris)

Title: "Applying machine learning to drought modelling"

Abstract: Geotechnical drought is the second most costly peril under the French natural catastrophe compensation scheme (and the first for households). While this peril has stood out in recent years because of the unusual regions it has affected, it has above all surprised by the intensity of its successive occurrences. We will discuss a new approach to estimating the damage caused by a geotechnical drought. This work is part of an ongoing PhD at CCR, in partnership with Université de Paris, and is carried out by developing machine learning techniques.


15 April 2022

Nabil KAZI-TANI (Université de Lorraine, IECL)

Title: "Graphs and Cyber-Insurance Protection"

Abstract: In this talk, we consider the situation where a given graph has to be protected against communication interruption, through insurance or prevention measures. The goal of the protection buyer is to maintain good connectivity properties of the graph after a malicious attack giving rise to a virus spreading on the network. We model the epidemic spread using the standard Susceptible-Infected-Susceptible (SIS) Markov process. The connectivity of the graph is measured by a function of the average Laplacian spectrum: the second smallest eigenvalue, known as the algebraic connectivity. Using standard results on eigenvalue optimization, we recast the algebraic connectivity maximization as a semidefinite optimization problem, for which a solution exists and can be efficiently computed numerically. Our results make it possible to rank the edges of a graph, giving more importance to edges for which the protection demand is high, hence making the optimal insurance demand depend directly on the underlying network topology. Based on joint work with Thierry Cohignac (CCR).
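A small sketch (ours) of the graph quantity the talk optimizes: the algebraic connectivity is the second-smallest eigenvalue of the Laplacian L = D - A, and edges can be ranked by how much connectivity is lost when they are removed, in the spirit of the edge hierarchy mentioned above (the talk's actual approach relies on semidefinite programming, not this brute-force scan):

```python
import numpy as np

def algebraic_connectivity(adj):
    """Second-smallest eigenvalue of the graph Laplacian L = D - A."""
    lap = np.diag(adj.sum(axis=1)) - adj
    return np.sort(np.linalg.eigvalsh(lap))[1]

def rank_edges(adj):
    """Rank edges by the drop in algebraic connectivity when they are removed;
    a large drop flags an edge whose protection matters most."""
    base = algebraic_connectivity(adj)
    drops = []
    for i, j in zip(*np.triu_indices_from(adj, k=1)):
        if adj[i, j]:
            a = adj.copy()
            a[i, j] = a[j, i] = 0
            drops.append(((int(i), int(j)), base - algebraic_connectivity(a)))
    return sorted(drops, key=lambda e: -e[1])

# Two triangles joined by a single bridge: the bridge dominates the ranking.
adj = np.zeros((6, 6), dtype=int)
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    adj[i, j] = adj[j, i] = 1
print(rank_edges(adj)[0])  # expected: the bridge (2, 3)
```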


12 September 2022 - 9:00 to 18:00

Conference of the Joint Research Initiative "Cyber risk: actuarial modeling"

The closing conference of the Joint Research Initiative "Cyber risk: actuarial modeling" gathered professional and academic experts on cyber risk and cyber insurance. It was an occasion to compare different viewpoints and to present new developments in the field of cyber risk evaluation.

Program and replays available on the webpage of the conference.

https://sites.google.com/view/cyber-actuarial/conference-12092022


Year 2020-2021

6 November 2020

13:00: Pierre-Yves BOËLLE (Sorbonne Université, Institut Pierre Louis d'Epidémiologie et de Santé Publique)

Title: "Two estimation problems for an emerging disease: the asymptomatic fraction and the hospitalization duration of COVID-19"

Link to the video: https://www.youtube.com/watch?v=yc2_Y9XrIgs&feature=youtu.be

27 November 2020

14:00-15:00: Jean-David FERMANIAN (ENSAE Paris, CREST)

Title: "Semiparametric estimation of copulas by Maximum Mean Discrepancy"

Abstract: This talk deals with robust inference for parametric copula models. Estimation using Canonical Maximum Likelihood might be unstable, especially in the presence of outliers. We propose to use a procedure based on the Maximum Mean Discrepancy (MMD) principle. We derive non-asymptotic oracle inequalities, consistency and asymptotic normality for this new estimator. In particular, the oracle inequality holds without any assumption on the copula family, and can be applied in the presence of outliers or under misspecification. Moreover, in our MMD framework, the statistical inference of copula models for which no density exists with respect to the Lebesgue measure on [0,1]^d, such as the Marshall-Olkin copula, becomes feasible. A simulation study shows the robustness of our new procedures, especially compared to pseudo-maximum likelihood estimation. An R package implementing the MMD estimator for copula models is available.

Link to the video: https://www.youtube.com/watch?v=QlEcxnBkEXs&t=23s
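The gist of MMD-based fitting can be illustrated in a few lines (a rough sketch of ours, not the R package mentioned in the abstract): pick the copula parameter whose simulated sample is closest, in kernel MMD, to the observed pseudo-observations. Here with a bivariate Gaussian copula, a Gaussian kernel, and a grid search:

```python
import numpy as np
from scipy.stats import norm

def mmd2(x, y, bw=0.1):
    """Biased (V-statistic) estimate of squared MMD with a Gaussian kernel."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * bw ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()

def sample_gauss_copula(rho, n, rng):
    """Pseudo-observations from a bivariate Gaussian copula with parameter rho."""
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
    return norm.cdf(z)

def mmd_fit(u, rho_grid, n_sim=500, seed=0):
    """Keep the parameter whose simulated sample is closest in MMD to the data;
    reusing the seed gives common random numbers across candidates."""
    return min(rho_grid, key=lambda r: mmd2(
        u, sample_gauss_copula(r, n_sim, np.random.default_rng(seed))))

rng = np.random.default_rng(42)
u = sample_gauss_copula(0.5, 500, rng)
u[:25] = rng.uniform(size=(25, 2))  # 5% outliers, to probe robustness
print(mmd_fit(u, rho_grid=np.linspace(-0.9, 0.9, 19)))  # should be close to 0.5
```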

3 December 2020

14:00-17:00: Workshop of the Joint Research Initiative "Cyber risk: actuarial modeling"

https://sites.google.com/view/cyber-actuarial/events?authuser=0

Speakers:

Martin ELING (University of St. Gallen, I.VW Institute of Insurance Economics): "Is Cyber Risk Insurable?"
https://www.youtube.com/watch?v=WajOHWiNY9w

Caroline Hillairet and Olivier Lopez: Presentation of the Joint Research Initiative
https://www.youtube.com/watch?v=KschE9LNoyw

David RIOS (Universidad Rey Juan Carlos, ICMAT and the Spanish Royal Academy of Sciences): "Security risk model for cyber insurance"
https://www.youtube.com/watch?v=KtNqt4Cmy00

The program can be downloaded here: https://drive.google.com/file/d/14Rq9Jh15WziNj1_4yecJgSvX71NPfC97/view?usp=sharing


4 December 2020

14:00: Sarah KAAKAÏ (Laboratoire Manceau de Mathématiques, Risk and Insurance Institute, Le Mans Université)

Title: "IBMPopSim: a package for the efficient simulation of individual-based population models"

Abstract: The IBMPopSim package (https://daphnegiorgi.github.io/IBMPopSim/) aims at simulating the random evolution of heterogeneous populations, using stochastic Individual-Based Models (IBMs). The package allows users to simulate the evolution of populations in which individuals are characterized by their age and some characteristics, and in which the population is modified by different types of events, including births/arrivals, deaths/exits, and changes of characteristics. The frequency at which an event can occur to an individual can depend on their age and characteristics, but also on the other individuals' characteristics (interactions).

Such models have a wide range of applications in fields including actuarial science, biology, demography and ecology. For instance, IBMs can be used to simulate the evolution of a heterogeneous insurance portfolio or to validate mortality forecasts. In this presentation, we propose an illustration of such applications based on two examples. IBMPopSim overcomes the limitations of time-consuming IBM simulations by implementing new efficient algorithms based on thinning methods, compiled using the Rcpp library. The package allows a wide range of IBMs to be simulated while remaining user-friendly, thanks to its structure based on simple building blocks. In addition, we provide tools for analyzing outputs, such as age pyramids or life tables obtained from the simulated data, consistent with the data format of mortality modeling packages such as StMoMo.

Joint work with D. Giorgi and V. Lemaire (LPSM, Sorbonne Université).

https://www.youtube.com/watch?v=OwnSeX3Lcw0
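The thinning idea underlying the package's algorithms, in its most minimal form (our own sketch, far simpler than the package's optimized C++ core): propose candidate event times at a constant dominating rate and accept each with probability intensity/bound.

```python
import numpy as np

def next_event_by_thinning(intensity, bound, t0, horizon, rng):
    """First event time of an inhomogeneous Poisson process with
    intensity(t) <= bound on [t0, horizon], by Lewis' thinning."""
    t = t0
    while t < horizon:
        t += rng.exponential(1.0 / bound)        # candidate from the dominating rate
        if t < horizon and rng.random() < intensity(t) / bound:
            return t                             # accepted: a true event
    return None                                  # no event before the horizon

def death_intensity(age, a=1e-4, b=0.09):
    """Toy Gompertz mortality intensity as a function of age."""
    return a * np.exp(b * age)

rng = np.random.default_rng(7)
age0, horizon = 60.0, 110.0
bound = death_intensity(horizon)   # the intensity is increasing, so this bounds it
deaths = [next_event_by_thinning(death_intensity, bound, age0, horizon, rng)
          for _ in range(1000)]
ages = np.array([d for d in deaths if d is not None])
print(round(float(ages.mean()), 1))  # mean simulated age at death
```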

22 January 2021

14:00: Jan BEIRLANT (KU Leuven)

Title: "Center-outward quantiles and the measurement of multivariate risk"

Abstract: All multivariate extensions of the univariate theory of risk measurement run into the same fundamental problem: the absence, in dimension d ≥ 2, of a canonical ordering of R^d. Based on measure transportation ideas, several attempts have been made recently in the statistical literature to overcome that conceptual difficulty. In Hallin (2017), the concepts of center-outward distribution and quantile functions are developed as generalizations of the classical univariate concepts of distribution and quantile functions, along with their empirical versions. The center-outward distribution function F± is a cyclically monotone mapping from R^d to the open unit ball, while its empirical counterpart is obtained as a cyclically monotone mapping from the sample to a regular grid over the unit ball; in dimension d = 1, F± reduces to 2F - 1. Based on the concept of Moreau envelope, a smooth interpolation of the empirical center-outward distribution function has been proposed in del Barrio et al. (2018). Here, we suggest adapting the empirical definition so as to relax the presence of ties, which is impractical in the context of risk measurement, and we propose a class of smooth approximations, indexed by a smoothness parameter, as an alternative to the Moreau-envelope interpolation. Associated with the concepts of center-outward distribution and quantile functions and the associated convex potentials, we construct measures of risk of the maximum correlation type, together with their estimators based on the empirical and smoothed center-outward distribution functions. We also discuss the use of the volumes of the resulting empirical quantile regions. Some simulations and applications to case studies illustrate the value of the approach.

https://www.youtube.com/watch?v=l2GtWmeJawk

5 February 2021

14:00: Anaïs MARTINEZ (CNAM)

Title: "Insurance modelling of cyber risk"

Abstract: Cyber risk is expanding rapidly with the digitalization of the economy. The risk of IT failure is intensifying and hacking incidents are multiplying, so quantifying cyber risk is becoming ever more pressing. The task proves complex because of the small quantity and the heterogeneity of the available data.

The objective of this master's thesis is to build predictive models in order to quantify the cyber risk of an organization and to price various insurance guarantees. Two predictive models were created from two distinct datasets. The first part defines the key concepts of cyber risk and presents the various insurance guarantees addressing this risk. The second part consolidates and enriches the datasets with a view to building robust models. The methodologies used to build the first predictive model are then detailed in the third part. This model aims to estimate the number of records compromised by a personal data breach, given the characteristics of an organization, and makes it possible to price the cyber guarantees currently most widespread on the market. The second model, presented in the fourth part, predicts the cost of an operational cyber incident according to the characteristics of a company. Unlike the first model, it makes it possible to price cyber covers that are rarely offered today. Finally, the last part analyzes the limits of the two models and proposes avenues for improvement.

12 February 2021

14:00: Michel DENUIT (Institut de Statistique, Biostatistique et Sciences Actuarielles, Université Catholique de Louvain)

Title: "Risk reduction through conditional mean risk sharing"

Abstract: This talk will present some useful properties of the sharing of independent but heterogeneous risks using the "conditional mean risk allocation" proposed by Denuit & Dhaene (2012, Insurance: Mathematics and Economics). It will aim to demonstrate the potential of this approach in the context of collaborative insurance. The results presented are drawn from several works with Christian Robert of the Laboratoire Finance Assurance (LFA), CREST, ENSAE.

https://www.youtube.com/watch?v=Zg8Fut1Ju68&t=3s
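For reference, the allocation rule discussed in the talk, as introduced in Denuit & Dhaene (2012): each participant contributes the conditional expectation of their own loss given the pool's aggregate, so the pool total is exactly redistributed.

```latex
% Conditional mean risk sharing: participant i's contribution
h_i(S) = \mathbb{E}\left[ X_i \mid S \right],
\qquad S = \sum_{j=1}^{n} X_j,
\qquad \sum_{i=1}^{n} h_i(S) = S .
```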

5 March 2021

14:00: Oskar LAVERNY (Université Lyon 1 & SCOR)

Title: "Internal modeling without copulas: the beauty of multivariate Thorin classes"

Abstract: The generalized gamma convolution class of distributions appeared in Thorin's work while looking for the infinite divisibility of the log-normal and Pareto distributions. Although these distributions have been extensively studied in the univariate case, the multivariate case and the additive risk factor structures that can arise from it have received little interest in the literature. Furthermore, only one projection procedure for the univariate case was recently constructed, and no estimation procedures are available. By expanding the densities of multivariate generalized gamma convolutions into a tensorized Laguerre basis, we bridge the gap and provide efficient estimation procedures for both the univariate and multivariate cases. We provide some insights into the performance of these procedures, together with a convergent series for the density of multivariate gamma convolutions, which is shown to be more stable than Moschopoulos's and Mathai's univariate series. We also discuss some examples.

 https://www.youtube.com/watch?v=EJuqAmVkc30&ab_channel=GroupedeTravailARC

12 March 2021

14:00: Stefan WEBER (House of Insurance, Leibniz Universität Hannover)

Title: "Pricing of cyber insurance contracts in a network model"

Abstract: We develop a methodology for pricing cyber insurance contracts. The considered cyber threats, such as viruses and worms, diffuse in a structured data network. The spread of the cyber infection is modeled by an interacting Markov chain. Conditional on the underlying infection, the occurrence and size of claims are described by a marked point process. We introduce and analyze a polynomial approximation of claims, together with a mean-field approach that allows us to compute aggregate expected losses and prices of cyber insurance. Numerical case studies demonstrate the impact of the network topology and indicate that higher-order approximations are indispensable for the analysis of non-linear claims. This is joint work with Kerstin Awiszus and Matthias Fahrenwaldt.


30 April 2021

14:00: Sandrine LEMERY (CNAM)

Title: "The 'annuity puzzle': can the life annuity market be developed?"

Abstract: After recalling why the purchase of a life annuity seems a rational decision for an economic agent, we will review the reasons that explain the difficulty of the life annuity market in France and worldwide, and explore ways of developing incentives for such coverage.



Year 2019-2020

27 September 2019

14:00-15:00: Christian-Yann ROBERT (ENSAE, CREST)

"Composite likelihood estimation method for hierarchical Archimedean copulas defined with multivariate compound distributions"

Abstract: We consider the family of hierarchical Archimedean copulas obtained from multivariate exponential mixture distributions through compounding, as introduced by Cossette et al. (2017). We investigate ways of determining the structure of these copulas and estimating their parameters. An agglomerative clustering technique based on the matrix of Spearman's rhos, combined with a bootstrap procedure, is used to identify the tree structure. Parameters are estimated through a top-down composite likelihood. The validity of the approach is illustrated through two simulation studies in which the procedure is explained step by step. The composite likelihood method is also compared to the full likelihood method in a simple case where the latter is computable.
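A minimal sketch (ours) of the structure-identification step described above: agglomerative clustering on a distance derived from the matrix of pairwise Spearman's rhos (the bootstrap step and the top-down composite likelihood are omitted):

```python
import numpy as np
from scipy.stats import spearmanr
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
# Toy data: columns (0, 1) and (2, 3) form two strongly dependent blocks
z = rng.normal(size=(1000, 2))
x = np.column_stack([z[:, 0], z[:, 0] + 0.5 * rng.normal(size=1000),
                     z[:, 1], z[:, 1] + 0.5 * rng.normal(size=1000)])

rho = spearmanr(x)[0]            # matrix of pairwise Spearman's rhos
dist = 1.0 - rho                 # strong concordance -> small distance
np.fill_diagonal(dist, 0.0)
tree = linkage(squareform(dist, checks=False), method="average")
print(tree)                      # the merge order suggests the copula's tree structure
```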

 

4 October 2019

14:00: PhD defense of Yohann Le Faou

"Contributions to the modelling of duration data in the presence of censoring: application to the study of health insurance contract terminations".

Jury: Katrien Antonio (KU Leuven), Gérard Biau (Sorbonne Université), Arnaud Cohen (Forsides), Olivier Lopez (Sorbonne Université), Christian Robert (ENSAE), Philippe Saint Pierre (Université Paul Sabatier, Toulouse).

29 November 2019

100% Actuaires - 100% Data Science

https://100-actuaires.institutdesactuaires.com/

10 January 2020

14:00: Charlotte DION (Sorbonne Université, LPSM)

"Consistent procedures for multiclass classification of discrete diffusion paths".

Abstract: The recent advent of modern technology has generated a large number of datasets which can frequently be modeled as functional data. This paper focuses on the problem of multiclass classification for stochastic diffusion paths. In this context we establish a closed formula for the optimal Bayes rule. We provide new statistical procedures, built either on the plug-in principle or on the empirical risk minimization principle, and we show the consistency of these procedures under mild conditions. We apply our methodologies to the parametric drift case.

24 January 2020

14:00: Avner BAR-HEN (CNAM)

"Influence Measures for CART Classification Trees".

Abstract: Classification And Regression Trees (CART) have proven to be very useful in various applied contexts, mainly because the models can include numerical as well as nominal explanatory variables and because they can be easily represented. This talk presents tools to measure the influence of observations on the results obtained with CART classification trees. We define influence measures and propose criteria to measure the sensitivity of the CART classification tree analysis. The proposals are based on predictions and use jackknife trees. The analysis is extended to the pruned sequences of CART trees to produce CART-specific notions of influence. Using the framework of influence functions, distributional results are derived. A numerical example, the well-known spam dataset, is presented to illustrate the notions developed throughout the paper. A real dataset relating the administrative classification of cities surrounding Paris, France, to the characteristics of their tax revenue distributions is finally analyzed using the new influence-based tools.

This is a joint work with Servane Gey and Jean-Michel Poggi.

28 February 2020

14:00: Jenifer ALONSO-GARCIA (Université Libre de Bruxelles)

"Continuous time model for notional defined contribution pension schemes: liquidity and solvency".

Abstract: Notional Defined Contribution (NDC) pension schemes are defined contribution plans which are pay-as-you-go financed. From a design viewpoint, the countries where NDCs have been implemented cannot guarantee sustainability, due to the choice of the notional return paid on contributions and of the indexation rate paid on pensions. We study how the scheme should be designed to achieve liquidity and solvency, with a limited set of assumptions, in a continuous-time overlapping generations model that increases the tractability of the results. Adequacy and actuarial fairness are also jointly studied in a numerical example for the population of Belgium. We find that the proposed indexation and notional rates act as automatic balancing mechanisms that ensure sustainability and actuarial fairness. However, the effect on pension adequacy depends on the generosity of the annuity scheme at retirement.

10 March 2020

9:00-17:00: Workshop on Hawkes processes

Speakers: Martin Bompaire (Criteo), Félix Cheysson (AgroParisTech/INRA), Simon Clinet (Keio University), Eva Löcherbach (Université Paris 1 Panthéon-Sorbonne), Marcello Rambaldi (Capital Fund Management), Judith Rousseau (Université Paris-Dauphine/Oxford University).

Webpage: https://workshophawkes.sciencesconf.org

25 March 2020 - NOTE: conference postponed to a later date

9:00-18:00: Annual conference of the Joint Research Initiative "Cyber risk: actuarial modeling"

Speakers: Valérie Chavez-Demoulin (HEC Lausanne), Véronique Legrand (CNAM), Sébastien Farkas (Sorbonne Université), Christophe Delcamp (FFA), Jérôme Notin (Cybermalveillance.gouv.fr), Guy Van hecke (AXA GlobalRe), Julien Fursat (AXA Next), and a representative of AXA XL.

Venue: AXA, 25 avenue Matignon, 75008 Paris

Year 2018-2019

5 October 2018

14:00-15:00: Pierre THEROND (ISFA, Université Lyon 1, and Galea & Associés)

"Modelling the longevity of insured populations: a credibility approach"

Abstract: Longevity modelling is an important element both for risk management and for determining the solvency requirements of life insurers. Insurers face the problems of limited portfolio size and limited historical depth when it comes to robustly calibrating the usual models, which are designed for national populations. This generally leads them to anchor their assumptions to a reference and to ignore the heterogeneity between portfolios covering longevity-type risks. We propose a credibility approach that incorporates this heterogeneity and provides a process for updating the best estimate assumptions of the different portfolios. The model is implemented on 14 insurance portfolios with varied characteristics and risk levels. Work in progress with Yahia Salhi and Jean-Baptiste Coulomb.
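The credibility mechanics underlying such an approach, in its simplest Bühlmann form (a standard reference formula, not the speakers' exact model): each portfolio's estimate is a convex blend of its own experience and the market reference.

```latex
% Bühlmann credibility: portfolio estimate as a convex blend
\widehat{\mu}_i = Z_i \, \bar{X}_i + (1 - Z_i)\, \mu,
\qquad
Z_i = \frac{n_i}{\,n_i + \sigma^2 / \tau^2\,} ,
```

where X̄_i is portfolio i's own experience over n_i observations, μ the reference level, σ² the within-portfolio variance and τ² the between-portfolio heterogeneity; small or volatile portfolios are thus pulled toward the reference.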

16 November 2018

14:00-15:30: Thomas MAILLART (Université de Genève)

"Cognition: The New Cybersecurity Frontier"

Abstract: In this talk, I will present two empirical studies, on bug bounty programs and on cyber incidents, to demonstrate that the human cognitive limitations associated with cyber security tasks (e.g., software security research, cybersecurity incident management and response) represent a fundamental weakness in cyber security as a time-critical phenomenon. I will then illustrate these limitations through a standardized experiment conducted in collaboration with Columbia University and aimed at testing the human capacity to overcome complicated problems. Results show that humans are particularly bad at converging toward sparse or unique solutions in high-dimensional solution spaces, which are the hallmark of cyber security. These results and evidence from cyber security have implications for the management of cyber risks, at the security operations center level and at the organization level, including regarding third-party risk and risk transfer, and even for cyber insurance and policy making.

Bio: Thomas Maillart aims to investigate, model and enhance human collective intelligence, through a better understanding of the incentives, structures and dynamics of social interactions online and in the physical world. Thomas Maillart holds a Master's from EPFL (2005) and a PhD from ETH Zurich (2011). At ETH Zurich, he received the Zurich Dissertation Prize in 2012 for his pioneering work on cyber risks. Before joining the University of Geneva, he was a post-doctoral researcher at UC Berkeley until 2016. He co-founded a cybersecurity startup in 2005 and has consulted on cybersecurity for various governmental and private organizations.

14 December 2018

14:00-15:00: Michael RERA (Sorbonne Université)

"An evolutionarily conserved predictor of impending death: implications for the study of ageing and human health"

Abstract: Our conception of phenomena strongly affects the way we study them. Based on our day-to-day perception, we generally define ageing as a continuous and progressive process driving the increasing risk of disease and death. Based on the first description, a few years ago, of the age-related increase of intestinal permeability to a non-toxic blue food dye in Drosophila (the Smurf phenotype), we developed new mathematical models and a theoretical framework for studying ageing by considering it as a discontinuous process. We present here the implications of this work for the study of ageing in model organisms, as well as the questions it raises concerning human beings.

18 January 2019

14:00-15:00: Elena DI BERNARDINO (CNAM)

"Estimation of the multivariate Conditional-Tail-Expectation for extreme risk levels: illustration on an environmental dataset"

Abstract: This talk deals with the problem of estimating the multivariate version of the conditional tail expectation introduced in the recent literature. We propose a new semiparametric estimator for this risk measure, essentially based on statistical extrapolation techniques, well designed for extreme risk levels. We prove a central limit theorem for the obtained estimator. We illustrate the practical properties of our estimator on simulations. The performance of our new estimator is discussed and compared with that of the empirical Kendall's-process-based estimator previously proposed by the authors. We conclude with two applications on real datasets: rainfall measurements recorded at three stations located south of Paris (France), and the analysis of strong wind gusts in the northwest of France. This is joint work with Clémentine Prieur.

 

15 February 2019

14:00-15:00: Pierrick PIETTE (LSAF, Université Lyon 1, LPSM and Sinalys)

"Economic measure of lapse risk management with machine learning models"

Abstract: This talk tackles the lapse risk problem in a life insurance portfolio by taking into consideration the latest advances in the quantitative marketing literature. Indeed, from the insurance company's point of view, a lapse is similar to a churn: the company loses a customer. We apply advanced machine learning algorithms, used in the churn literature, to detect lapses. We pay special attention to the validation metrics that can be used to compare the models' performance, in particular by introducing an economic, profit-based measure. Furthermore, we highlight the importance of the loss function chosen to be optimized in the classification algorithm. This is joint work with Jason Tsai and Stéphane Loisel.

 

8 March 2019

Human Mortality Database Users Conference #2

8:45-17:45, Université Paris-Dauphine

https://www.weezevent.com/human-mortality-database-users-conference-2

29 March 2019

14:00-15:00: Claire MOUMINOUX (ISFA, Université Lyon 1)

"Obfuscation and Honesty: Experimental Evidence on Insurance Demand with Multiple Distribution Channels"

Abstract: This talk aims to shed light on the dilemma insurance purchasers face when confronted with multiple distribution channels. Should consumers themselves choose from a large set of insurance policies, or rather delegate part of their decision to an intermediary who is more or less honest? We consider decisions based on a number of real-world insurance distribution channels with different information frames. Beliefs about intermediary honesty are the main determinants of individual choice. In addition, obfuscation and delegation are the main sources of inefficiency in decision-making: the first because of a focal-point effect, and the second because of intermediary dishonesty. Joint work with Stéphane Loisel and Jean-Louis Rullière.

5 April 2019

14:00-15:00: Isaac COHEN-SABBAN (LPSM, Sorbonne Université, and Pacifica)

"How to improve the performance of a neural network with unbalanced data for text classification in insurance applications"

Abstract: Predicting the evolution of a claim is a challenging problem in insurance, especially for guarantees associated with a high volatility of costs, such as third-party insurance. Identifying, soon after occurrence, the claims that require more attention is particularly interesting for the company, since it allows it to better adapt its response to the specificity of a claim. With the increase of data available on a claim for analyzing its severity, artificial intelligence techniques are a promising direction to deal with this problem. In this work, we propose an ensemble method using neural networks as an early warning system for predicting a cost which is not directly observed due to censoring. The model is fed by information of various types (such as text reports about the circumstances of the claim and the nature of the damage) obtained at the opening of the claim. Particular attention is devoted to the unbalanced nature of our data, with minority classes representing 2% of our observations. We combine bagging with a rebalancing method to improve our results and reduce the variance of the estimator. We illustrate our methodology on two applications: the first concerns the severity of the accident, the second the liability of our policyholder.
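A minimal sketch (ours) of the bagging-plus-rebalancing idea, with a linear base learner standing in for the neural networks of the talk (all names and data are illustrative):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def balanced_bagging_proba(X, y, n_estimators=25, seed=0):
    """Bagging with per-bag rebalancing: every bag keeps all minority-class
    cases plus an equal-size random draw from the majority class."""
    rng = np.random.default_rng(seed)
    minority = np.flatnonzero(y == 1)
    majority = np.flatnonzero(y == 0)
    probas = np.zeros(len(y))
    for _ in range(n_estimators):
        bag = np.concatenate(
            [minority, rng.choice(majority, size=minority.size, replace=False)])
        clf = LogisticRegression(max_iter=1000).fit(X[bag], y[bag])
        probas += clf.predict_proba(X)[:, 1]
    return probas / n_estimators  # averaging across bags reduces the variance

rng = np.random.default_rng(1)
X = rng.normal(size=(4000, 5))
y = (rng.random(4000) < 1 / (1 + np.exp(4.5 - 1.5 * X[:, 0]))).astype(int)  # rare positives
p = balanced_bagging_proba(X, y)
print(round(p[y == 1].mean(), 3), round(p[y == 0].mean(), 3))  # separation of the classes
```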

10 May 2019

14:00-15:00: Matthias KIRCHNER (ETH Zurich)

"On Hawkes processes"

Abstract: In this talk, we give an introduction to Hawkes processes, emphasizing their autoregressive structure, the reason why Hawkes processes 'always fit'. Furthermore, we study a discretization of the (multitype) Hawkes process, counting points in bins. We show that the resulting bin-count sequence may be approximated by (multivariate) integer-valued autoregressive (INAR) time series. In the INAR context, we derive asymptotically normal estimators and provide estimates of their standard deviation. Retranslating these results from the time series world into the point process world gives rise to a nonparametric estimation method for Hawkes processes. In an example on multitype limit-order-book event streams, we illustrate how this estimation method may be used to specify Hawkes models from data with few a priori assumptions. At the same time, we point out some issues that arise around the use of Hawkes processes: 'causal' interpretation, the discreteness of time lines, and the impossibility of negative autocorrelation.
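A compressed illustration (ours, with illustrative parameter values) of the estimation route sketched above: simulate a Hawkes process, bin the event times, and fit the bin-count sequence by least-squares autoregression; the fitted lag coefficients approximate the integral of the excitation kernel over each lag bin.

```python
import numpy as np

def simulate_hawkes(mu, alpha, beta, horizon, seed=0):
    """Univariate Hawkes process with kernel alpha*exp(-beta*t), simulated by
    Ogata's thinning; stationary iff alpha/beta < 1."""
    rng = np.random.default_rng(seed)
    events, t = [], 0.0
    while True:
        ev = np.asarray(events)
        lam_bar = mu + alpha * np.exp(-beta * (t - ev)).sum()  # decays until next event
        t += rng.exponential(1.0 / lam_bar)
        if t > horizon:
            return np.asarray(events)
        lam_t = mu + alpha * np.exp(-beta * (t - ev)).sum()
        if rng.random() < lam_t / lam_bar:
            events.append(t)

def inar_fit(events, horizon, bin_width, p):
    """Kirchner-style sketch: regress bin counts on their own p lagged values."""
    counts = np.histogram(events, bins=int(horizon / bin_width),
                          range=(0.0, horizon))[0]
    y = counts[p:]
    X = np.column_stack([np.ones(len(y))] +
                        [counts[p - k: -k] for k in range(1, p + 1)])
    return np.linalg.lstsq(X, y, rcond=None)[0]  # [baseline, lag-1, ..., lag-p]

ev = simulate_hawkes(mu=0.5, alpha=1.0, beta=2.0, horizon=500.0)
coef = inar_fit(ev, horizon=500.0, bin_width=0.25, p=8)
print(np.round(coef, 3))  # lag weights should decay roughly like exp(-beta * lag)
```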

21 June 2019

14:00-15:00: Sébastien FARKAS (Sorbonne Université, LPSM)

"Cyber claim analysis through Generalized Pareto regression trees with applications to insurance pricing and reserving"

Abstract: In this talk, we propose a methodology to analyze the heterogeneity of cyber claim databases. This heterogeneity is caused by the evolution of the risk, but also by the evolution of the quality of data and of the sources of information through time. We consider a public database, already studied by Eling and Loperfido [2017], which is considered a benchmark for cyber event analysis. Using regression trees, we investigate the heterogeneity of the reported cyber claims. Particular attention is devoted to the tail of the distribution, using a Generalized Pareto likelihood as the splitting criterion in the regression trees. Combining this analysis with a model for the frequency of claims, we develop a simple model for pricing and reserving in cyber insurance. This is joint work with Olivier Lopez and Maud Thomas.
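The splitting criterion described above can be sketched in the same way as the GLM-tree scan further up the page, with the child fit replaced by a Generalized Pareto maximum likelihood fit (our own simplified sketch, not the authors' code):

```python
import numpy as np
from scipy.stats import genpareto

def gpd_loglik(z):
    """Maximum log-likelihood of a GPD fitted to the excesses z (location at 0)."""
    shape, _, scale = genpareto.fit(z, floc=0.0)
    return genpareto.logpdf(z, shape, loc=0.0, scale=scale).sum()

def best_gpd_split(x, y, min_leaf=50):
    """Split point of covariate x maximizing the sum of the two children's
    GPD log-likelihoods, i.e. a GPD-likelihood splitting criterion."""
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    best_t, best_ll = None, -np.inf
    for i in range(min_leaf, len(xs) - min_leaf):
        if xs[i] == xs[i - 1]:
            continue  # no threshold can separate equal covariate values
        ll = gpd_loglik(ys[:i]) + gpd_loglik(ys[i:])
        if ll > best_ll:
            best_t, best_ll = (xs[i - 1] + xs[i]) / 2, ll
    return best_t, best_ll

rng = np.random.default_rng(5)
x = rng.uniform(size=400)
# heavier tail (larger shape parameter) for claims with x > 0.5
y = genpareto.rvs(c=np.where(x > 0.5, 0.7, 0.2), scale=1.0, random_state=rng)
print(best_gpd_split(x, y))  # the recovered threshold should be near 0.5
```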

25 June 2019

9:00-16:00: Cyber-Préjudice conference - The real impact of cyber attacks on companies.

Venue: Grand auditorium of the Fédération Française de l'Assurance, 26 bd Haussmann, 75009 Paris.

Event organized by IRT SystemX and the Fédération Française de l'Assurance, with the participation of the Cyber Insurance Chair.

https://www.irt-systemx.fr/en/conference-cyber-prejudices/

27-28 June 2019

Workshop "Mathematics in longevity risk management"

Venue: King's College London

The meeting will bring together leading experts on the mathematical economics of pensions from France and the UK. The aim is to promote mathematical techniques of financial economics that have recently been introduced in pension risk management. The workshop will be based on 14 invited talks by academics and practitioners on the quantitative side of pensions.

https://www.kcl.ac.uk/events/mathematics-in-longevity-risk-management