Carole Bernard
(Grenoble Ecole de Management)
When multiple agents each pursue optimal portfolio choice by optimizing a univariate objective (e.g., an expected utility), their optimal payoffs are increasing in each other (comonotonic). This situation may lead to an undesirable level of systemic risk for society. A regulator may therefore aim to enforce diversification among the various portfolios by optimizing a suitable multivariate objective. We assess the cost of diversification and provide the strategy that the regulator should pursue to obtain the desired level of diversification.
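For reference, the standard characterization behind the parenthetical (not specific to this talk): a random vector is comonotonic precisely when all of its components are nondecreasing functions of a single common factor,
\[
(X_1,\dots,X_n) \ \text{comonotonic} \iff X_i = f_i(S) \ \text{for all } i,
\]
for some random variable $S$ and nondecreasing functions $f_1,\dots,f_n$; one may always take $S = X_1 + \cdots + X_n$.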
Holly Brannelly
(University College London)
This research focuses on the construction of a novel class of quantile processes, governing the stochastic dynamics of quantiles in continuous time. The marginals of such quantile processes are obtained by transforming the marginals of a càdlàg process under a composite map consisting of a distribution function and a quantile function, similar to transmutation maps. As a result, we obtain a one-step approach to constructing widely flexible classes of stochastic models, accommodating extensive ranges of higher-order moment behaviours (e.g., tail behaviours in the finite-dimensional distributions, and asymmetry). Such features are directly parameterised in the composite map and are thus interpretable with respect to the driving process. It is shown that quantile processes induce a distorted probability measure, and we therefore propose a general, time-consistent, and dynamic risk valuation principle under the induced measures of quantile processes. This principle allows for pricing in incomplete markets and thus has applications in insurance pricing. Here, the distorted measures are considered ‘subjective’ and are constructed in such a way as to account for market characteristics, ambiguity with respect to the probability distributions of risks, and the preferences and perception of risk by market participants. This leads to a parametric system of risk-sensitive probability measures, indexed by such factors.
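Schematically, and in our notation rather than the speaker's, the construction transforms the marginals of a driving càdlàg process $(X_t)$ under a composite map,
\[
Y_t = Q_\nu\big(F_\mu(X_t)\big), \qquad t \ge 0,
\]
where $F_\mu$ is a distribution function mapping the state of $X$ to $[0,1]$ and $Q_\nu$ is a quantile function; the choice of $Q_\nu$ then directly parameterises tail behaviour and asymmetry in the marginals of $(Y_t)$.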
Simone Cerreia-Vioglio
(Università Bocconi)
We use decision theory to confront uncertainty that is sufficiently broad to incorporate models as approximations. We presume the existence of a featured collection of what we call structured models that have explicit substantive motivations. The decision maker confronts uncertainty through the lens of these models, but also views these models as simplifications, and hence, as misspecified. We extend the max-min analysis under model ambiguity to incorporate the uncertainty induced by acknowledging that the models used in decision-making are simplified approximations. Formally, we provide an axiomatic rationale for a decision criterion that incorporates model misspecification concerns.
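As background, the classical max-min criterion under model ambiguity that the talk extends (Gilboa-Schmeidler, in standard notation) evaluates an act $f$ by
\[
V(f) = \min_{p \in C} \int u(f)\,\mathrm{d}p,
\]
where $C$ is a set of priors; the axiomatization presented here modifies such criteria to reflect that each structured model in the featured collection is itself only an approximation.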
Tolulope Fadina
(University of Essex)
A risk analyst assesses potential financial losses based on multiple sources of information. Often, the assessment depends not only on the specification of the loss random variable, but also on various economic scenarios. Motivated by this observation, we design a unified axiomatic framework for risk evaluation principles that jointly quantify a loss random variable and a set of plausible probability measures. We call such an evaluation principle a generalized risk measure. We present a series of relevant theoretical results. The worst-case, coherent, and robust generalized risk measures are characterized via different sets of intuitive axioms. We establish the equivalence between a few natural forms of law invariance in our framework, and the technical subtlety therein reveals a sharp contrast between our framework and the traditional one. Moreover, coherence and strong law invariance are derived from a combination of other conditions, which provides additional support for coherent risk measures such as Expected Shortfall over Value-at-Risk, a relevant issue for risk management practice.
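To fix ideas, in our notation rather than the authors': a generalized risk measure is a functional $\Psi$ acting on pairs of a loss $X$ and a set $\mathcal{P}$ of plausible probability measures, with the worst-case form as a leading example,
\[
\Psi(X,\mathcal{P}) \in \mathbb{R}, \qquad \Psi^{\mathrm{wc}}(X,\mathcal{P}) = \sup_{P \in \mathcal{P}} \rho^{P}(X),
\]
where $\rho^{P}$ is a conventional risk measure, such as Expected Shortfall, computed under the measure $P$.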
Masaaki Fujii
(The University of Tokyo)
In this talk, we discuss the problem of equilibrium price formation in a securities market, where a large number of financial firms continuously trade via the securities exchange in the presence of stochastic order flows from their over-the-counter clients. Using the mean field game approach, we obtain a special form of forward-backward stochastic differential equations of McKean-Vlasov type with common noise, which provides an approximation of the equilibrium price: it is proven to achieve asymptotic market clearing in the large-population limit. Under suitable conditions, we also show the existence of a finite-agent equilibrium and its strong convergence to the corresponding mean field limit.
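Schematically, a McKean-Vlasov FBSDE with common noise has the following generic shape (not the talk's specific system):
\[
\begin{aligned}
\mathrm{d}X_t &= b\big(t, X_t, \mathcal{L}(X_t \mid \mathcal{F}^0_t), \alpha_t\big)\,\mathrm{d}t + \sigma\,\mathrm{d}W_t + \sigma^0\,\mathrm{d}W^0_t,\\
\mathrm{d}Y_t &= -f\big(t, X_t, Y_t, Z_t, \mathcal{L}(X_t \mid \mathcal{F}^0_t)\big)\,\mathrm{d}t + Z_t\,\mathrm{d}W_t + Z^0_t\,\mathrm{d}W^0_t,
\end{aligned}
\]
with a terminal condition $Y_T = g\big(X_T, \mathcal{L}(X_T \mid \mathcal{F}^0_T)\big)$, where $W^0$ is the common noise and the conditional law $\mathcal{L}(\cdot \mid \mathcal{F}^0_t)$ carries the mean-field interaction.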
Yushi Hamaguchi
(Osaka University)
We investigate a time-inconsistent stochastic recursive control problem where the cost functional is defined by the solution to a backward stochastic Volterra integral equation (BSVIE, for short). We provide a necessary and sufficient condition for an open-loop equilibrium control via variational methods and show that the corresponding first- and second-order adjoint equations become the so-called extended BSVIEs.
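For orientation, a type-I BSVIE in standard form reads
\[
Y(t) = \psi(t) + \int_t^T g\big(t,s,Y(s),Z(t,s)\big)\,\mathrm{d}s - \int_t^T Z(t,s)\,\mathrm{d}W(s), \qquad t \in [0,T],
\]
with solution pair $(Y(\cdot), Z(\cdot,\cdot))$; the extra parameter $t$ in the free term $\psi(t)$ and in the generator is what distinguishes a BSVIE from a BSDE and is a natural source of time-inconsistency.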
Xuedong He
(The Chinese University of Hong Kong)
Although maximizing quantiles is intuitively appealing and has an axiomatic foundation, it is difficult to find the optimal portfolio strategy because dynamic programming does not apply. Using an intra-personal equilibrium approach and focusing on the class of time-varying affine strategies, we find that the only viable outcome is median maximization: for other quantiles, either the equilibrium does not exist or there is no investment in risky assets. Maximizing the median endogenizes the use of portfolio insurance: a time-varying affine strategy is an equilibrium if and only if it is a portfolio insurance strategy.
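For reference, the objective in question can be written (standard notation, ours) as
\[
\max_{\text{strategies}} \; Q_\alpha(X_T), \qquad Q_\alpha(X) = \inf\{x \in \mathbb{R} : P(X \le x) \ge \alpha\},
\]
where $X_T$ is terminal wealth and $Q_\alpha$ is the $\alpha$-quantile; median maximization corresponds to $\alpha = 1/2$.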
This is joint work with Zhaoli Jiang and Steven Kou.
Camilo Hernandez
(Imperial College London)
We study extended type-I BSVIEs, which extend classic backward stochastic differential equations (BSDEs) and can be understood as infinite families of BSDEs. The noticeable feature of extended type-I BSVIEs is the appearance of the diagonal processes of both elements of the solution in the generator. We motivate them by a series of practical applications. In particular, they provide a rich framework for addressing the equilibrium approach to time-inconsistent control problems via either Bellman's or Pontryagin's principle, and consequently open the door to the study of time-inconsistent contract theory. This is joint work with Dylan Possamaï.
Ulrich Horst
(Humboldt University of Berlin)
We analyze novel portfolio liquidation games with self-exciting order flow, considering both the N-player game and the mean-field game. We assume that players' trading activities have an impact on the dynamics of future market order arrivals, thereby generating an additional transient price impact. Given the strategies of her competitors, each player solves a mean-field control problem. We characterize open-loop Nash equilibria in both games in terms of a novel mean-field FBSDE system with unknown terminal condition. Under a weak interaction condition, we prove that the FBSDE systems have unique solutions. Using a novel sufficient maximum principle that does not require convexity of the cost function, we prove that the solutions of the FBSDE systems do indeed provide open-loop Nash equilibria. The talk is based on joint work with Guanxing Fu and Xiaonyu Xia.
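To illustrate what self-excitation means here, a generic Hawkes-type specification (not necessarily the paper's exact model) lets each market order arrival raise the intensity of future arrivals, with exponential decay:
\[
\lambda_t = \lambda_0 + \int_0^t \kappa\, e^{-\beta (t-s)}\,\mathrm{d}N_s,
\]
where $N$ counts order arrivals, $\lambda_0$ is the baseline intensity, $\kappa$ the jump in intensity per arrival, and $\beta$ the decay rate; the players' trading adds a further feedback into this order flow.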
T. R. Hurd
(McMaster University)
Repurchase agreements (“repos”) are the basic contracts for institutional collateralized lending that are used universally to underpin modern financial systems. Sometimes, rules allow collateral to be “rehypothecated”, meaning collateral received for one loan is used as collateral for another loan. This talk will discuss the extent of counterparty risk in repo markets that allow fully rehypothecated general collateral. The particular aim is to show that chains of rehypothecated collateral are resilient and not a source of systemic risk. The discussion is set in a theoretical market design, in which a network of banks exchanges repos that reset every 24 hours. The market rules permit full rehypothecation of general collateral, which is taken to be government debt securities. The main result shows that such a system is resilient, in the sense that at any moment an arbitrary collection of banks can be removed unambiguously and fairly from the network, with minimal negative impact on the balance sheets of the remaining banks.
Mark Newman
(University of Michigan)
It is well known that the statistical distributions of many quantities follow power-law or Pareto distributions: the populations of cities, the occurrence of words and family names, the sizes of solar flares and moon craters, phone calls, web links, book sales, individual wealth, power outages, forest fires, paper citations, and many other things. For decades scientists have asked whether there is a unifying principle that explains why such diverse phenomena all seem to follow the same law. In this talk we argue that there is not, but that there is a relatively small number of different processes that can give rise to power-law behavior. We explain the workings of some of these, including critical and self-organized processes, the Yule process, random multiplicative processes, and optimization processes.
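Concretely, a (continuous) power-law or Pareto density has the form
\[
p(x) = C\,x^{-\alpha}, \qquad x \ge x_{\min}, \qquad C = (\alpha - 1)\,x_{\min}^{\alpha-1},
\]
for an exponent $\alpha > 1$; empirically observed exponents typically lie between 2 and 3.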
Silvana Pesenti
(University of Toronto)
We study the problem of active portfolio management in which an investor aims to outperform a benchmark strategy's risk profile while not deviating too far from it. Specifically, the investor considers alternative strategies whose terminal wealth lies within a Wasserstein ball around the benchmark's (i.e., is distributionally close to it) and that have a specified dependence/copula with it (tying outcomes state by state). The investor then chooses the alternative strategy that minimises a distortion risk measure of terminal wealth. In a general (complete) market model, we prove that an optimal dynamic strategy exists and characterise it through the notion of isotonic projections.
We further propose a simulation approach to calculate the optimal strategy's terminal wealth, making our approach applicable to a wide range of market models. Finally, we illustrate how investors with different copula and risk preferences invest and improve upon the benchmark, using the Tail Value-at-Risk, inverse S-shaped, and lower- and upper-tail distortion risk measures as examples. We find that the optimal terminal wealth distribution places greater probability mass in regions that reduce the investor's risk measure relative to the benchmark, while preserving the benchmark's structure.
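To fix notation (standard univariate forms, not necessarily those of the paper): for terminal-wealth distributions $F$ and $G$, the 2-Wasserstein distance and a distortion risk measure with weight function $\gamma$ can be written as
\[
W_2(F,G) = \left( \int_0^1 \big(F^{-1}(u) - G^{-1}(u)\big)^2\,\mathrm{d}u \right)^{1/2}, \qquad \rho_\gamma(F) = \int_0^1 F^{-1}(u)\,\gamma(u)\,\mathrm{d}u,
\]
so the investor minimises $\rho_\gamma$ over distributions $F$ with $W_2(F,G) \le \varepsilon$ for the benchmark distribution $G$, subject to the copula constraint.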
Neofytos Rodosthenous
(University College London)
We study the problem of optimally managing an inventory with unknown demand trend. Our formulation leads to a stochastic control problem under partial observation, in which a Brownian motion with non-observable drift can be singularly controlled in both upward and downward directions. After first deriving the equivalent Markovian problem, we focus on solving the latter completely. We establish substantial regularity of its value function, construct an optimal control rule, and show that the free boundaries delineating action and inaction regions are Lipschitz continuous. Our approach uses the transition amongst three different but equivalent problem formulations, together with a link between two-dimensional bounded-variation stochastic control problems and games of optimal stopping. To show that the value function of the control problem possesses the regularity needed for a verification theorem, we develop a probabilistic method combined with refined viscosity-theory arguments.
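One canonical way such problems are set up (illustrative only; notation ours): the inventory follows a Brownian motion with an unobservable two-valued drift, adjusted by two nondecreasing controls, and filtering reduces the problem to a Markovian one,
\[
X_t = x + \mu t + \sigma W_t + U_t - D_t, \qquad \pi_t = P\big(\mu = \mu_1 \mid \mathcal{F}_t\big),
\]
where $\mu \in \{\mu_0, \mu_1\}$ under a prior, $U$ and $D$ are the upward and downward singular controls, and the pair $(X_t, \pi_t)$ is Markovian with respect to the observation filtration $(\mathcal{F}_t)$.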
Andreas Tsanakas
(Bayes Business School)
We present a theoretical framework for stressing multivariate stochastic models. We consider a stress to be a change of measure, placing a higher weight on multivariate scenarios of interest. In particular, a stressing mechanism is a mapping from random vectors to Radon-Nikodym densities. We postulate desirable properties for stressing mechanisms addressing alternative objectives. Consistent with our focus on dependence, we require throughout invariance under monotonic transformations of risk factors. We study in detail the properties of two families of stressing mechanisms, based respectively on mixtures of univariate stresses and on transformations of statistics we call Spearman and Kendall's cores. Furthermore, we characterize the aggregation properties of these stressing mechanisms, which motivate their use in deriving new capital allocation methods with properties different from those typically found in the literature. The proposed methods are applied to stress testing and capital allocation, using the simulation model of a UK-based non-life insurer.
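In schematic form (our notation): a stressing mechanism sends a random vector $X$ under the baseline measure $P$ to a nonnegative density, which defines the stressed model,
\[
Z = \zeta(X) \ge 0, \qquad \mathbb{E}^P[Z] = 1, \qquad Q(A) = \mathbb{E}^P\big[Z\,\mathbf{1}_A\big],
\]
so that scenarios of interest receive higher weight under $Q$ than under $P$.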
Alex Tse
(University College London)
We present a continuous-time portfolio selection problem faced by an agent with S-shaped preferences who maximizes the utilities derived from the portfolio's periodic performance over an infinite horizon. The periodic reward structure creates subtle incentive distortions. In some cases, local risk aversion is induced, discouraging the agent from taking risk in extremely bad states of the world. In other cases, eventual ruin of the portfolio is inevitable and the agent underinvests in good states of the world to manipulate the basis of subsequent performance evaluations. We outline several important elements of incentive design to contain long-term portfolio risk.
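A standard parameterisation of S-shaped preferences, in the spirit of Kahneman and Tversky (not necessarily the talk's exact specification), is
\[
u(x) = \begin{cases} x^{\gamma}, & x \ge 0,\\ -\lambda\,(-x)^{\delta}, & x < 0, \end{cases} \qquad 0 < \gamma, \delta < 1,\ \lambda > 1,
\]
which is concave over gains, convex over losses, and loss averse; here $x$ would be the periodic performance relative to a reference point.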
Hideatsu Tsukahara
(Seijo University)
The purpose of backtesting is to evaluate risk measurement systems using historical data, by comparing ex ante estimates of risk measures with ex post realized losses. Its main use is to decide, based on some minimal standard, whether the risk measurement system in use is credible. In the literature, the backtesting problem is tacitly recast as a statistical hypothesis test; however, it is not hypothesis testing in the traditional sense, even though it is a 0-1 decision problem. Value-at-Risk is said to be easy to backtest because of a peculiarity of the measure, which has led to an unfruitful debate on the general concept of backtestability. We believe that the problem is best approached using the framework of prequential analysis developed by A. P. Dawid, and we attempt to sort out the issues. The related concepts of calibration and of backtesting probability forecasting systems will also be discussed, and we examine the possibility of their refinement and extension.
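For concreteness, the basic VaR backtest records the hit sequence
\[
I_t = \mathbf{1}\big\{L_t > \widehat{\mathrm{VaR}}_{\alpha,t}\big\}, \qquad t = 1, \dots, T,
\]
where $L_t$ is the realized loss and $\widehat{\mathrm{VaR}}_{\alpha,t}$ the ex ante forecast; under correct conditional calibration the hits are i.i.d. Bernoulli$(1-\alpha)$, and it is this simple distributional characterization that makes Value-at-Risk comparatively easy to backtest.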
Ruodu Wang
(University of Waterloo)
Probability distortion is a fundamental tool in decision theory, most notably as the basis for rank-dependent utility theory and cumulative prospect theory. In this talk we summarize a few recent studies on probability distortion. We characterize probability distortions among all distributional transforms by a few very simple axioms. This characterization result is intimately linked to a new axiomatic characterization of quantiles and of quantile maximization. Quantile maximization has interesting implications for risk sharing and block rewards with quantile agents.
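For reference, the distorted expectation of a nonnegative random variable $X$ under a distortion function $h$ (increasing, with $h(0)=0$ and $h(1)=1$) is
\[
\rho_h(X) = \int_0^\infty h\big(P(X > x)\big)\,\mathrm{d}x,
\]
and rank-dependent utility and cumulative prospect theory apply such distortions to the decumulative distribution; quantiles arise as the special case of indicator-type distortions.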
Stefan Weber
(Leibniz University Hannover)
Protection of creditors is a key objective of financial regulation. Where protection needs are high, i.e., in banking and insurance, regulatory solvency requirements are an instrument to prevent creditors from incurring losses on their claims. The current regulatory requirements based on Value at Risk and Average Value at Risk limit the probability of default of financial institutions, but fail to control the size of recovery on creditors' claims in the case of default. We resolve this failure by developing a novel risk measure, Recovery Value at Risk. Our conceptual approach can be flexibly extended and allows the design of general recovery risk measures for various risk management purposes. By design, these risk measures control recovery on creditors' claims and integrate the protection needs of creditors into the incentive structure of the management. We provide detailed case studies and applications.
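For context, recall the loss-convention definition
\[
\mathrm{VaR}_\alpha(L) = \inf\{\, x \in \mathbb{R} : P(L \le x) \ge \alpha \,\},
\]
so capital set at $\mathrm{VaR}_\alpha$ bounds the probability of default by $1-\alpha$ but is silent on how much of the creditors' claims can be recovered when default does occur; this is the gap that Recovery Value at Risk is designed to close.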
The talk is based on joint work with Cosimo Munari and Lutz Wilhelmy.
Jianfeng Zhang
(University of Southern California)
A non-zero-sum game typically admits multiple Nash equilibria, and different equilibria can lead to different values. We propose to study the set of values over all equilibria, which we call the set value of the game. The set value plays the role that the standard value function plays in optimal control problems. In particular, we establish two major properties of the set value: (i) the dynamic programming principle; and (ii) stability. The results are extended to mean field games without monotonicity conditions, for which we also establish the convergence of the set values of the corresponding $N$-player games. Some subtle issues concerning the choice of controls will also be discussed. The talk is based on two works, one with Feinstein and Rudloff, and the other with Iseri.
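In symbols (schematically): for a state $(t,x)$ the set value collects the payoff vectors attainable across all equilibria,
\[
\mathbb{V}(t,x) = \big\{\, J(t,x;\alpha^*) : \alpha^* \ \text{is a Nash equilibrium} \,\big\},
\]
and the dynamic programming principle and stability are established for the set-valued map $\mathbb{V}$ itself rather than for any single equilibrium value.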