Abstracts

Abstracts for speakers:

Guillermo Alonso Alvarez: Optimal brokerage in Almgren-Chriss model with multiple clients

The role of the broker is to implement transactions on behalf of a client, charging a fee that usually depends on the client's performance, in return for better conditions such as lower price impact. From a rational point of view, the goal of the broker is to set a fee that maximizes her profit while her client trades optimally. In our setting we assume multiple heterogeneous agents trading a single asset whose price follows the Almgren-Chriss model. The broker can choose which agents she trades for, and each agent can decline the broker's offer and trade directly in the market. We construct optimal brokerage contracts and compute the broker's optimal portfolio of clients. In addition, we conduct numerical experiments that illustrate how these portfolios, as well as the equilibrium profits of all market participants, depend on the price impact coefficients.
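
For context, the single-asset Almgren-Chriss dynamics underlying the model take the following standard form (generic notation, not necessarily the speaker's): the unaffected price is an arithmetic Brownian motion, trading at rate $\nu_t$ moves the price permanently with coefficient $b$, and executions suffer a temporary impact with coefficient $\lambda$:

\[
dS_t = \sigma\, dW_t + b\,\nu_t\, dt, \qquad \tilde S_t = S_t + \lambda\,\nu_t,
\]

so that an agent trading at rate $\nu_t$ transacts at the impacted price $\tilde S_t$, and the broker's "better conditions" amount to smaller impact coefficients.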

Erhan Bayraktar: Prediction problems and second order equations

We study the long-time regime of the prediction problem in both the full information and adversarial bandit feedback settings. We show that with full information, the problem leads to second-order parabolic partial differential equations in Euclidean space. We exhibit solvable cases for these equations. In the adversarial bandit feedback setting, we show that the problem leads to a second-order equation in the Wasserstein space. Based on joint works with Ibrahim Ekren and Xin Zhang.

Tomasz Bielecki: Risk Filtering and Risk-Averse Control of Markovian Systems Subject to Model Uncertainty

We consider a Markov decision process subject to model uncertainty in a Bayesian framework, where we assume that the state process is observed but its law is unknown to the observer. In addition, while the state process and the controls are observed at time $t$, the actual cost, which may depend on the unknown parameter, is not known at time $t$. The controller optimizes the total cost using a family of special risk measures, which we call risk filters, appropriately defined to take into account the model uncertainty of the controlled system. These key features lead to non-standard and non-trivial risk-averse control problems, for which we derive the Bellman principle of optimality. We illustrate the general theory with two practical examples: optimal investment and clinical trials.

This is joint work with Igor Cialenco and Andrzej Ruszczynski.

Jin Hyuk Choi: Optimal investment in an illiquid market with search frictions and transaction costs

We consider an optimal investment problem of maximizing expected utility of terminal wealth in an illiquid market with search frictions and transaction costs. In the market model, an investor's attempts to trade succeed only at the arrival times of a Poisson process, and the investor pays proportional transaction costs when a transaction is successful. We characterize the no-trade region describing the optimal trading strategy. We provide asymptotic expansions of the boundaries of the no-trade region and of the value function for small transaction costs. The asymptotic analysis implies that the effects of transaction costs are more pronounced in markets with fewer search frictions.

Qi Feng: Exponential Entropy dissipation for weakly self-consistent Vlasov-Fokker-Planck equations

We study the long-time dynamical behavior of weakly self-consistent Vlasov-Fokker-Planck equations. We introduce Hessian matrix conditions on the mean-field kernel functions which characterize the exponential convergence of solutions in $L^1$ distance. The matrix condition is derived from the dissipation of a chosen Lyapunov functional, namely an auxiliary Fisher information functional. We verify the proposed matrix conditions in examples. (The talk is based on joint work with Erhan Bayraktar and Wuchen Li.)
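
As background, a common form of the kinetic Fokker-Planck equation with weak (mean-field) self-consistency reads, in generic notation that may differ from the authors':

\[
\partial_t f + v\cdot\nabla_x f - \nabla_x\big(V(x) + (W * \rho_f)(x)\big)\cdot\nabla_v f = \gamma\,\nabla_v\cdot\big(v\,f + \nabla_v f\big),
\]

where $\rho_f(x)=\int f(x,v)\,dv$ is the spatial marginal, $V$ is a confinement potential, and $W$ is the mean-field interaction kernel whose Hessian enters the convergence condition.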

Christian Keller: Mean viability theorems and second-order Hamilton-Jacobi equations

The notion of mean viability is introduced. Mean viability problems are essentially specific stochastic target problems with expectation constraints. In the spirit of classical deterministic viability theory, a geometric characterization related to those problems is established. Immediate applications are new proofs of existence and uniqueness for second-order Hamilton-Jacobi-Bellman (HJB) equations. Note that we do not operate within the viscosity solution framework. The motivation for this approach, and the goal of future research, is to cover HJB equations related to more general stochastic optimal control problems with state constraints and stochastic target problems.

Alec Kercheval: James-Stein eigenvector shrinkage for covariance estimation

Portfolio risk forecasts require an estimate of the covariance matrix of asset returns, often for a large number of assets. When only a small number of relevant observations are available, we are in the high-dimension-low-sample-size (HL) regime in which estimation error dominates. Factor models are used to decrease the dimension, but the factors still need to be estimated.

We describe a shrinkage estimator for the first principal component, called James-Stein for Eigenvectors (JSE), that is parallel to the famous James-Stein estimator for a collection of averages. In the context of a 1-factor model, JSE substantially improves optimization-based metrics for the minimum variance portfolio. With certain extra information, JSE is a consistent estimator of the leading eigenvector.

This is based on joint work with Lisa Goldberg, Hubeyb Gurdogan, and Alex Shkolnik.
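
As a schematic illustration only (the intensity c below is a user-chosen placeholder, whereas JSE derives a data-driven, James-Stein-type intensity), the following sketch shrinks the entries of the leading sample eigenvector toward their common mean, i.e., toward a constant "market" anchor in a 1-factor model:

    import numpy as np

    def shrink_leading_eigenvector(returns, c):
        """Schematic eigenvector shrinkage. returns: n x p matrix (n
        observations, p assets, n << p in the HL regime). c in [0, 1] is a
        placeholder intensity; JSE supplies a data-driven choice."""
        S = returns.T @ returns / returns.shape[0]   # p x p sample covariance
        _, eigvecs = np.linalg.eigh(S)
        h = eigvecs[:, -1]                           # leading sample eigenvector
        if h.sum() < 0:                              # fix the sign convention
            h = -h
        m = h.mean()
        h_shrunk = m + c * (h - m)                   # pull entries toward their mean
        return h_shrunk / np.linalg.norm(h_shrunk)   # renormalize to unit length

    # Example: p = 500 assets, n = 60 observations from a 1-factor model.
    rng = np.random.default_rng(0)
    beta = 1.0 + 0.3 * rng.standard_normal(500)
    returns = (rng.standard_normal((60, 1)) @ beta[None, :]
               + 0.5 * rng.standard_normal((60, 500)))
    b_hat = shrink_leading_eigenvector(returns, c=0.5)

Roughly, in the HL regime the entries of the leading sample eigenvector are too dispersed relative to the population eigenvector, and pulling them toward a structured anchor corrects this bias; the JSE result makes the right amount of pull precise.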

Nikolaos Kolliopoulos: Propagation of chaos for maxima of particle systems with mean-field drift interaction and applications in stochastic portfolio theory

We study the asymptotic behavior of the normalized maxima of real-valued diffusive particles with mean-field drift interaction. Our main result establishes propagation of chaos: in the large-population limit, the normalized maxima behave as those arising in an i.i.d. system where each particle follows the associated McKean-Vlasov limiting dynamics. This allows the asymptotic distribution of the normalized maxima to be determined using results from standard extreme value theory. The proof uses a change-of-measure argument that relies on a delicate combinatorial analysis of the iterated stochastic integrals appearing in the chaos expansion of the Radon-Nikodym density. Our work is motivated by the need to study the top-performing assets in a large stochastic portfolio.

Martin Larsson: Sequential statistics by trading: e-processes and competing traders

The goal of sequential statistics is to draw inference from data that is gathered gradually through time. E-processes (‘E’ for ‘Evidence’) form the basis of a recent approach to this problem that simultaneously produces strong statistical error bounds and high statistical power. This method has an interesting connection with mathematical finance: it admits an equivalent description in terms of competing traders in a fictitious financial market, each of whom attempts to profit from the view that certain statistical (null) hypotheses are false while other (alternative) hypotheses are true. I will discuss some problems where this perspective leads to new procedures for sequential testing. This talk is based on work with Philippe Casgrain, Wouter Koolen, Aaditya Ramdas, Johannes Ruf, and Johanna Ziegel.
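
For reference, a standard definition from this literature (in generic notation): given a null hypothesis represented by a set $\mathcal{Q}$ of probability measures, a nonnegative adapted process $(E_t)_{t \ge 0}$ is an e-process for $\mathcal{Q}$ if

\[
\mathbb{E}^{Q}[E_\tau] \le 1 \quad \text{for every } Q \in \mathcal{Q} \text{ and every stopping time } \tau.
\]

In the trading picture, $E_t$ is the wealth of a trader who starts with unit capital and cannot make money in expectation under any measure in the null, so large realized wealth is quantifiable evidence against the null.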

Eunjung Noh: A unified approach to informed trading via Monge-Kantorovich duality

We solve a generalized Kyle-type model using Monge-Kantorovich duality and backward stochastic partial differential equations. First, we show that the Kyle problem can be recast as a terminal optimization problem with distributional constraints, so the theory of optimal transport between spaces of unequal dimension arises as a natural tool. Second, we analyze the structure of the Monge-Kantorovich duality; in particular, the pricing rule is established using the Kantorovich potentials. Finally, we completely characterize the optimal strategies by analyzing the filtering problem from the market maker's point of view. In this context, the Kushner-Zakai filtering SPDE leads to an interesting backward stochastic partial differential equation whose measure-valued terminal condition comes from the optimal coupling of measures.

Dominykas Norgilas: Supermartingale shadow couplings

A classical result of Strassen asserts that given probability measures $\mu,\nu$ on the real line which are in convex-decreasing order, there exists a supermartingale with these marginals, i.e., a random vector $(S_1,S_2)$ such that $S_1\sim\mu$, $S_2\sim\nu$ and $\mathbb{E}[S_2\mid S_1]\le S_1$. However, it is a non-trivial problem to construct particular (or canonical) supermartingales with prescribed marginals. In this talk we introduce a family of such supermartingales, each of which admits a canonical characterization in terms of stochastic dominance. We explicitly construct the extreme elements of this family, which solve the optimal transport problem under supermartingale constraints.
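
For reference, the order in question is standard: $\mu$ and $\nu$ are in convex-decreasing order if

\[
\int f \, d\mu \le \int f \, d\nu \quad \text{for every convex, non-increasing } f : \mathbb{R} \to \mathbb{R};
\]

taking $f(x) = -x$ shows that the means decrease, $\int x\,\nu(dx) \le \int x\,\mu(dx)$, consistent with the supermartingale property.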

Philip Protter: Consequences of Incompleteness on Option Prices

A fundamental problem in mathematical finance is the choice of a risk-neutral measure in an incomplete market. One classic approach is to choose a risk-neutral measure that is as close as possible to the original probability measure, typically called the objective, or historical, measure. These ideas lead naturally to several questions, such as the structure of the collection of risk-neutral, or as we call them, martingale measures. For example, how large is the diameter of the space of risk-neutral measures? What happens to the martingale measures when a sequence of incomplete models converges to a limit model? This is of course related to the robustness of one's models. The talk is based on joint work with Jean Jacod.

Alejandra Quintos Lima: Dependent Stopping Times and an Application to Credit Risk Theory

Stopping times are used in applications to model random arrivals. A standard assumption in many models is that the stopping times are conditionally independent given an underlying filtration. This assumption is useful in a wide range of settings, but there are circumstances where it seems unnecessarily strong. In the first part of the talk, we use a modified Cox construction, along with the bivariate exponential distribution introduced by Marshall & Olkin (1967), to create a family of stopping times that are not necessarily conditionally independent, allowing for a positive probability that they are equal. We also present a series of results exploring the special properties of this construction.
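
A minimal simulation sketch of the Marshall & Olkin (1967) construction that underlies this family, with illustrative rates; the common shock makes the two times coincide with positive probability:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000
    lam1, lam2, lam12 = 1.0, 2.0, 0.5      # idiosyncratic and common shock rates

    e1 = rng.exponential(1 / lam1, n)      # shock hitting only the first name
    e2 = rng.exponential(1 / lam2, n)      # shock hitting only the second name
    e12 = rng.exponential(1 / lam12, n)    # common shock hitting both

    tau1 = np.minimum(e1, e12)             # first stopping time
    tau2 = np.minimum(e2, e12)             # second stopping time

    # P(tau1 == tau2) = lam12 / (lam1 + lam2 + lam12) = 1/7 in this example
    print((tau1 == tau2).mean())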

In the second part of the talk, we present an application of our model to credit risk. We characterize the probability of a market failure, which is defined as the default of two or more globally systemically important banks (G-SIBs) within a small interval of time. The default probabilities of the G-SIBs are correlated through the possible occurrence of a market-wide stress event. We derive various results on market failure probabilities, such as the probability of a catastrophic market failure, the impact of increasing the number of G-SIBs in an economy, and the impact of changing the initial conditions of the economy's state variables. We also show that if there are too many G-SIBs, a market failure is inevitable, i.e., the probability of a market failure tends to one as the number of G-SIBs tends to infinity.

Alexandre Roch: Optimal ratcheting dividends policy with resets

A well-documented feature of firms’ dividend policies in practice is that cash dividend payments to shareholders seldom decrease over time. Indeed, classical dividend signaling holds that decreasing the dividend sends a negative signal to market participants, implying reduced future prospects and diminished performance. In practice, firms typically do not decrease the dividend unless they are facing serious financial trouble. Yet the extensive literature on optimal stochastic control of dividend and capital structure policies rarely addresses this issue, as most models of optimal dividend policy do not result in dividend payments that only increase (ratcheting). Notable exceptions are the recent papers of Albrecher et al. (2020) and Angoshtari et al. (2019). However, an unanticipated consequence of ratcheting constraints is that the firm may drive itself into bankruptcy by being forced to pay out dividends at its all-time-high rate even when earnings, profitability, or cash reserves are low. In this paper, we relax the ratcheting constraint by allowing the firm to reduce (reset) its dividend rate. Since this sends a negative signal to market participants, the equity value is assumed to be adversely affected by such an action. We show that the value function is the unique solution of an associated HJB equation and describe the optimal dividend, capital injection, and reset policies. We provide numerical examples.

Xiaofei Shi: Deep Learning Algorithms for Equilibrium with Limited Liquidity

Equilibrium models with limited liquidity can be characterized through systems of coupled forward-backward SDEs. Unfortunately, under general market dynamics, the resulting nonlinear systems of fully coupled forward-backward SDEs fall outside the scope of known well-posedness results. However, they can still be solved numerically using deep-learning-based algorithms. In this talk, we discuss the advantages and disadvantages of supervised-learning-based and reinforcement-learning-based algorithms and present the use of generative adversarial networks (GANs) as equilibrium solvers.
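
One concrete representative of such algorithms is the deep BSDE method of E, Han, and Jentzen, sketched below on a toy decoupled equation purely for illustration; the driver, terminal condition, and network architecture are assumptions, and the talk's fully coupled systems additionally feed $Y$ and $Z$ back into the forward dynamics:

    import torch

    # Toy decoupled FBSDE: dX = dW,  dY = -f(Y) dt + Z dW,  Y_T = g(X_T).
    T, n_steps, dim, batch = 1.0, 20, 1, 512
    dt = T / n_steps
    f = lambda y: -0.05 * y                    # illustrative driver
    g = lambda x: torch.tanh(x)                # illustrative terminal condition

    y0 = torch.nn.Parameter(torch.zeros(1))    # learn the initial value Y_0
    z_net = torch.nn.Sequential(               # learn Z_t = z_net(t, X_t)
        torch.nn.Linear(dim + 1, 32), torch.nn.ReLU(), torch.nn.Linear(32, dim))
    opt = torch.optim.Adam([y0] + list(z_net.parameters()), lr=1e-2)

    for step in range(200):
        x = torch.zeros(batch, dim)
        y = y0.expand(batch, 1)
        for k in range(n_steps):               # Euler scheme along each path
            t = torch.full((batch, 1), k * dt)
            dw = torch.randn(batch, dim) * dt ** 0.5
            z = z_net(torch.cat([t, x], dim=1))
            y = y - f(y) * dt + (z * dw).sum(dim=1, keepdim=True)
            x = x + dw
        loss = ((y - g(x)) ** 2).mean()        # penalize terminal mismatch
        opt.zero_grad(); loss.backward(); opt.step()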

Frederi Viens: Static Markowitz Mean-Variance portfolio selection model with long-term bonds

We propose a static Markowitz mean-variance portfolio selection model suitable for long-term zero-coupon bonds. The model uses a multi-factor term structure model of Vasicek (Ornstein-Uhlenbeck) type to compute the portfolio’s expected return and variance. German government zero-coupon bonds with short to very long times to maturity are considered; the data span August 2002 to December 2020. The main investment assumption is the re-investment of cash flows of zero-coupon bonds with maturities shorter than the planning horizon at the current spot interest rate. Solutions for the zero-coupon holding vector and the tangency portfolio are obtained in closed form. Model parameters are estimated under an assumption of modeling ambiguity, which takes the form of statistical errors at the level of the latent factors, allowing the use of a Kalman filter. Different investment strategies are examined on various risk portfolios. Results show that one- and two-factor Vasicek models produce attractive out-of-sample portfolio predictions in terms of the Sharpe ratio, especially for long-term investments. It is also noted that a small number of risky bonds can produce very attractive portfolio risk-return profiles. This is joint work with Dennis Ikpe (Michigan State University) and Romeo Mawonike (Great Zimbabwe University). This presentation is dedicated to the memory of Romeo Mawonike, who passed away on March 31, 2022.
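
For reference, in a one-factor Vasicek model $dr_t = \kappa(\theta - r_t)\,dt + \sigma\,dW_t$, zero-coupon bond prices are exponential-affine in the short rate (the standard formula, in generic notation):

\[
P(t,T) = A(T-t)\,e^{-B(T-t)\,r_t}, \qquad B(\tau) = \frac{1 - e^{-\kappa\tau}}{\kappa},
\]

with $A(\tau) = \exp\big( (\theta - \tfrac{\sigma^2}{2\kappa^2})(B(\tau) - \tau) - \tfrac{\sigma^2}{4\kappa} B(\tau)^2 \big)$; multi-factor versions are products of such exponential-affine terms, which is what makes closed-form portfolio expressions tractable.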


Poster presentations:

Steven Campbell (University of Toronto): A Mean Field Game of Sequential Testing

This poster will introduce a mean field game for a family of filtering problems related to the classic sequential testing of a Brownian motion’s drift. It is based on recent joint work with Yuchong Zhang at the University of Toronto which, to the best of our knowledge, presents the first treatment of a mean field filtering game with stopping and common noise. We show that the game is well-posed, characterize the solution, and establish the existence of a mean field equilibrium under certain assumptions. Illustrations from numerical studies for several examples of interest will also be provided.

Hubeyb Gurdogan (University of California at Berkeley): Multi Anchor Point Shrinkage for the Sample Covariance Matrix

Estimation of the covariance matrix of a high-dimensional returns vector is well known to be impeded by the lack of long data histories. We extend the work of Goldberg, Papanicolaou, and Shkolnik (GPS) on shrinkage estimates for the leading eigenvector of the covariance matrix in the high-dimension, low-sample-size regime, which has immediate application to estimating minimum variance portfolios. We introduce a more general framework of shrinkage targets, multiple anchor point shrinkage, that allows the practitioner to incorporate additional information, such as sector separation of equity betas or prior beta estimates from the recent past, into the estimation. We prove some asymptotic results and illustrate them with numerical experiments. This is joint work with Alec Kercheval.

Peiyao Lai (Worcester Polytechnic Institute): The Convergence Rate of the Equilibrium Measure for the LQG Mean Field Game with a Common Noise

In this work, we study the convergence rate of the N-player LQG game with a Markov chain common noise towards its asymptotic Mean Field Game. The approach relies on an explicit coupling of the optimal trajectory of the N-player game, driven by an N-dimensional Brownian motion, with its Mean Field Game counterpart, driven by a one-dimensional Brownian motion. As a result, the convergence rate is $O(N^{-1/2})$ with respect to the 2-Wasserstein distance.

Benjamin Weber (Carnegie Mellon)

Abstract to follow.

Zhenhua Wang (University of Michigan): Convergence of Policy Iteration for Entropy Regularized Stochastic Control Problems

We study an entropy-regularized continuous-time stochastic control problem. We establish the policy improvement property. With a series of novel estimates, we then show the compactness of the value sequence $(v^n)_n$ generated by the Policy Iteration Algorithm (PIA), and derive the convergence of the sequence to the optimal value. The existence of a solution to the HJB equation and the regularity of the optimal value are obtained as by-products. This is joint work with Yu-Jui Huang and Zhou Zhou.
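
For intuition, here is a minimal discrete-time, tabular analogue of the procedure on a randomly generated MDP (the talk's setting is continuous time; this only illustrates the alternation of entropy-regularized policy evaluation and softmax policy improvement):

    import numpy as np

    n_states, n_actions, gamma, lam = 5, 3, 0.9, 0.1   # lam = entropy weight
    rng = np.random.default_rng(2)
    P = rng.dirichlet(np.ones(n_states), (n_states, n_actions))  # P[s, a, t]
    R = rng.standard_normal((n_states, n_actions))               # rewards

    pi = np.full((n_states, n_actions), 1.0 / n_actions)         # initial policy
    for _ in range(50):
        # Evaluation: solve the linear system for v^pi with the entropy bonus.
        r_pi = (pi * (R - lam * np.log(pi))).sum(axis=1)
        P_pi = np.einsum('sa,sat->st', pi, P)
        v = np.linalg.solve(np.eye(n_states) - gamma * P_pi, r_pi)
        # Improvement: Boltzmann (softmax) policy from the q-values.
        q = R + gamma * np.einsum('sat,t->sa', P, v)
        pi = np.exp((q - q.max(axis=1, keepdims=True)) / lam)
        pi /= pi.sum(axis=1, keepdims=True)

    print(v)   # values v^n improve monotonically toward the regularized optimum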

Weixuan Xia (Boston University): Regulating stochastic clocks

Stochastic clocks represent a class of time-change methods for incorporating trading activity into continuous-time financial models, with the ability to capture the asymmetries and tail risks typical of financial returns. In this paper we propose a significant improvement of stochastic clocks that serves the same objective without decreasing the number of trades or changing the trading intensity. Our methodology applies to any L\'{e}vy subordinator, or more generally any process of nonnegative independent increments, and is based on various choices of regulating kernels motivated by repeated averaging. By way of a hyperparameter linked to the degree of regulation, arbitrarily large skewness and excess kurtosis of returns can easily be achieved. Generic-time Laplace transforms, characteristic triplets, and cumulants of the regulated clocks and the resulting mixed models are analyzed, serving purposes ranging from statistical estimation and option price calibration to simulation techniques. For specified jump-diffusion and tempered stable processes, a robust moment-based estimation procedure with profile likelihood is employed, and a comprehensive empirical study of S\&P 500 and Bitcoin daily returns demonstrates a series of desirable effects of the proposed methods.
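
As a baseline illustration of the time-change idea (the gamma subordinator below is one standard choice of L\'{e}vy subordinator; the regulating kernels proposed in the paper modify such a clock, and this sketch does not attempt to reproduce them):

    import numpy as np

    rng = np.random.default_rng(3)
    T, n_steps, nu = 1.0, 1_000, 0.2        # nu controls the clock's variance
    dt = T / n_steps

    # Gamma subordinator: independent Gamma(dt/nu, nu) increments, mean dt.
    d_tau = rng.gamma(shape=dt / nu, scale=nu, size=n_steps)

    # Time-changed Brownian motion B(tau_t) gives variance-gamma-type returns.
    dB = rng.normal(0.0, np.sqrt(d_tau))

    # Excess kurtosis of the increments is 3 * nu / dt here, versus 0 for B(t).
    print(((dB - dB.mean()) ** 4).mean() / dB.var() ** 2 - 3)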